As an absolute beginner to the language I needed to write my first Perl script. As a big fan of Test-Driven Development (TDD) I thought it would be a good idea to start with a test for this first Perl program. And it worked out really well. This post is a simple step-by-step tutorial for Perl beginners who want to write simple Unit Tests for Perl. I will use my first Perl script as an example.
The Example: A ClearCase trigger
To customize the behaviour of ClearCase you have to write Perl scripts which can be associated with any ClearCase command as a so-called ClearCase trigger (see IBM Rational ClearCase: The ten best triggers). For my example, I needed a trigger that updates a FitNesse Wiki page (the file name is always "content.txt") when it is checked in to ClearCase. If the file contains a string like "$Revision: \main\MAINLINE_SQE\3 $", the Perl script should update the version information. That's it.
Step-by-Step Tutorial
Install Perl.
Create a folder "PerlScripts" for the new Perl scripts. We will have two files in this folder: "CiVersionFitnesseTrigger.pl" is the Perl script for the trigger. "CiVersionFitnesseTriggerTests.pl" is the Perl script for the corresponding Unit Tests.
Download the Test::Simple Perl module. Unpack the .tar.gz archive. We will only need the file "Simple.pm" from the folder "lib/Test". Create a folder "Test" as a subfolder of our "PerlScripts" folder and copy the file "Simple.pm" into it.
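Once all steps are done, the layout of the "PerlScripts" folder will look like this:
PerlScripts/
    CiVersionFitnesseTrigger.pl
    CiVersionFitnesseTriggerTests.pl
    Test/
        Simple.pm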
We start writing our first test in "CiVersionFitnesseTriggerTests.pl":
use Test::Simple tests => 1;
# System under test
require 'CiVersionFitnesseTrigger.pl';
# Testing is_fitnesse_wiki_page() method
ok(FitTrigger::is_fitnesse_wiki_page('content.txt'), 'content.txt is page');
We start by defining an empty subroutine and an empty main routine in "CiVersionFitnesseTrigger.pl":
package FitTrigger;
sub is_fitnesse_wiki_page {
return 0;
}
#
# Main method
#
1;
We can now run the first unit test and see it failing.
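Assuming Perl is on the PATH, running "perl CiVersionFitnesseTriggerTests.pl" prints something like this (the exact diagnostic lines depend on the Test::Simple version):
1..1
not ok 1 - content.txt is page
#   Failed test 'content.txt is page'
#   at CiVersionFitnesseTriggerTests.pl line 5.
# Looks like you failed 1 test of 1.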
Now we have the infrastructure in place to start the implementation. We fix the first failing test:
package FitTrigger;
sub is_fitnesse_wiki_page {
my ($file_name) = @_;
return $file_name =~ m/^(.*\\)?content\.txt$/;
}
#
# Main method
#
1;
Now run the unit test again and it succeeds.
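Running "perl CiVersionFitnesseTriggerTests.pl" again should now report the test as passing:
1..1
ok 1 - content.txt is page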
We continue the cycle of writing new unit tests and implementing the script step by step. In the end we have 12 unit tests and 1 integration test:
use Test::Simple tests => 13;
# System under test
require 'CiVersionFitnesseTrigger.pl';
# Testing is_fitnesse_wiki_page() method
ok(FitTrigger::is_fitnesse_wiki_page('content.txt'), 'content.txt is page');
ok(FitTrigger::is_fitnesse_wiki_page('c:\content.txt'), 'c:\content.txt is page');
ok(FitTrigger::is_fitnesse_wiki_page('..\content.txt') , '..\content.txt is page');
ok(FitTrigger::is_fitnesse_wiki_page('c:\temp\content.txt'), 'c:\temp\content.txt is page');
ok(!FitTrigger::is_fitnesse_wiki_page('content.txt.old') , 'content.txt.old is not a page');
ok(!FitTrigger::is_fitnesse_wiki_page('somecontent.txt') , 'somecontent.txt is not a page');
ok(!FitTrigger::is_fitnesse_wiki_page('content.txt\something.txt') , 'content.txt\something.txt is not a page');
# Testing get_temp_folder() method
my $tmpFolder = FitTrigger::get_temp_folder();
ok(defined($tmpFolder) && $tmpFolder ne '' && length($tmpFolder) > 1 , 'temporary folder not empty');
# Testing get_temp_file() method
my $tmpFile = FitTrigger::get_temp_file();
ok(defined($tmpFile) && $tmpFile ne '' && length($tmpFile) > 1 , 'temporary file not empty');
# Testing update_revision_in_target() method
my $testFile = "$tmpFolder\\test.txt";
my $targetFile = "$tmpFolder\\target.txt";
open("TESTFILE", ">$testFile") ||
&error("Could not open test File $testFile for writing");
print TESTFILE "hallo1\nhallo2\n\$Revision: VERSION_ZZZ \$\n";
close TESTFILE;
my $newVersion = 'VERSION_111';
FitTrigger::update_revision_in_target($testFile,$targetFile,$newVersion);
open(F,"$targetFile");
my @list = ;
my $content=join('',@list);
close F;
my $expectedContent = "hallo1\nhallo2\n\$Revision: VERSION_111 \$\n";
ok($content eq $expectedContent, 'version was updated in target file');
# Testing overwrite_file() method
FitTrigger::overwrite_file($targetFile,$testFile);
open(F2,"$testFile");
@list=;
my $newContent =join('',@list);
close F2;
ok($newContent eq $expectedContent, 'file was overwritten with a modified file');
ok(! -e $targetFile, 'modified file is deleted');
# Testing main() method
$testFile = "$tmpFolder\\content.txt";
open("TESTFILE", ">$testFile") ||
&error("Could not open test File $testFile for writing");
print TESTFILE "hallo1\nhallo2\n\$Revision: VERSION_ZZZ \$\n";
close TESTFILE;
$ENV{CLEARCASE_PN}=$testFile;
$ENV{CLEARCASE_ID_STR}='VERSION_888';
system ("perl CiVersionFitnesseTrigger.pl");
my $expectedContentMain = "hallo1\nhallo2\n\$Revision: VERSION_888 \$\n";
open(F3,"$testFile");
@list=;
my $newContentMain =join('',@list);
close F3;
ok($newContentMain eq $expectedContentMain, 'perl script has updated content.txt');
The complete implementation in "CiVersionFitnesseTrigger.pl" looks like this:
package FitTrigger;
sub is_fitnesse_wiki_page {
my ($file_name) = @_;
return $file_name =~ m/^(.*\\)?content\.txt$/;
}
sub get_temp_folder {
my $tmp_folder = $ENV{TMP};
$tmp_folder = $ENV{TEMP} unless ($tmp_folder);
$tmp_folder = "/tmp" unless ($tmp_folder);
return $tmp_folder;
}
sub get_temp_file {
my $tmp_folder = get_temp_folder();
return "$tmpFolder\\ccTriggerTmp.$$";
}
sub update_revision_in_target {
my ($source, $target, $revision) = @_;
open("SOURCE", "$source") ||
&error("Could not open source file $source for reading");
open("TARGET", ">$target") ||
&error("Could not open target file $target for reading");
while (<SOURCE>)
{
if (/\$Revision:?.*\$/) {
s/\$Revision:?.*\$/\$Revision: $revision \$/;
}
print TARGET;
}
close SOURCE;
close TARGET;
}
sub overwrite_file {
my ($source, $target) = @_;
open (SOURCE, "$source") ||
&error ("Could not open source file $source for reading");
open (TARGET, ">$target") ||
&error ("Could not open target file $target for writing");
while (<SOURCE>) {
print TARGET;
}
close(SOURCE);
close(TARGET);
unlink($source);
}
sub error {
my ($message) = @_;
die ($message."\nUnable to continue checkin ...\n");
}
#
# Main method
#
# Summary:
# If the name of the checked-in file is 'content.txt', then search the content of the file for a string like
# "$Revision: \main\MAINLINE_22_WIPID\4 $". This string will then be replaced
# with e.g. "$Revision: \main\MAINLINE_22_WIPID\5 $".
my $check_in_file = $ENV{'CLEARCASE_PN'};
my $revision = $ENV{'CLEARCASE_ID_STR'};
if(is_fitnesse_wiki_page($check_in_file)) {
my $targetFile = get_temp_file();
update_revision_in_target($check_in_file,$targetFile,$revision);
overwrite_file($targetFile,$check_in_file);
}
1;
Currently I'm part of a team that is trying to introduce test automation to our organization. We are developing products for the healthcare sector with relatively long release cycles due to high regulatory requirements. These long release cycles result mainly from high manual test efforts and missing test automation.
There were some discussions about which tools to use for test automation. Two main possibilities were available:
Traditional commercial, heavyweight, GUI-based, record-and-replay tools like HP QuickTest Professional, IBM Rational Robot, QF-Test or Borland SilkTest.
Lightweight, open source, agile-friendly frameworks like FitNesse.
In a pilot project we found some advantages of FitNesse over the traditional commercial testing tools:
No Licence costs for FitNesse
OK, in a big company it's not a big issue to spend some money on commercial tools, but even then you will not buy licences for every machine and every employee. We use FitNesse on every developer machine, on every tester laptop, on every machine in the test lab, and on laptops for presentations in meetings. You can use it on every machine you want without filling out order forms and waiting weeks for the completion of the order process. So the use of FitNesse is not limited to test specialists; instead we can use it cross-functionally for application specialists, testers, developers and software architects.
Simple Installation of FitNesse
FitNesse can be brought to a machine simply by copying a folder with its subfolders. Or you can run it from a USB stick, which is quite practical for tests on systems which are not connected to the corporate network.
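Starting FitNesse is just as simple. Assuming the standalone jar from the FitNesse download, a command like this starts the wiki server on port 8080 (then browse to http://localhost:8080):
java -jar fitnesse-standalone.jar -p 8080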
Test-First approach with FitNesse
It is a natural approach to write down the test specification before or during the development of the software, because developers need this input to provide test fixtures that connect the FitNesse tests with the production code.
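As a sketch of what such an executable test specification looks like, here is a decision table in FitNesse wiki syntax, based on the Division example that ships with FitNesse:
!|eg.Division|
|numerator|denominator|quotient?|
|10|2|5.0|
|12.6|3|4.2|
The quotient? column is checked against the values computed by the fixture code, so the table can be written before that code exists.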
Please refer to Elisabeth Hendrickson's blog for similar and more advantages:
Agile-Friendly Test Automation Tools/Frameworks
When writing unit tests you have to use mocks or stubs for dependent objects. Very often it is convenient to use a mocking library (like Rhino Mocks for .NET or jMock for Java). But just using a mocking library does not guarantee readable and maintainable unit tests. It makes sense to have a set of rules that guide developers when writing unit tests with mock objects. I compiled a list of such best practices for mock objects ("mocking" rules). I use the term mock object for objects that verify expectations about calls to themselves. Test stubs are only used for feeding inputs into the system under test.
Rule 1: Try to avoid using mock objects and prefer state verification over behaviour verification whenever possible.
If you can verify the outcome of a method by checking the return value or by checking the state of the system under test, this is the preferable method, because it is simpler and makes your test more independent of implementation details. You may need test stubs to feed indirect inputs into the system under test.
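As a minimal sketch of state verification (Account is a hypothetical class under test, inlined to keep the example self-contained), no mocking library is needed at all:
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class AccountTests {
    // Hypothetical class under test, inlined for the example.
    static class Account {
        private int balance;
        void deposit(int amount) { balance += amount; }
        int getBalance() { return balance; }
    }

    @Test
    public void depositIncreasesBalance() {
        Account account = new Account();
        account.deposit(100);
        // State verification: assert on the observable outcome
        // instead of setting expectations on mock objects.
        assertEquals(100, account.getBalance());
    }
}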
Rule 2: Apply the Law of Demeter ("no train wrecks") to the code you want to unit test as much as possible.
Testing code like "employee.GetDepartment().GetManager().GetOffice().GetAddress().GetZip()" is much harder than "employee.GetManagersOfficeZipCode()".
Avoiding "train wrecks" reduces the number of mocks and stubs you need and therefore improves readability and maintainability for your tests.
Rule 3: Use a minimum number of mock objects per test; ideally only one.
Concentrate on one aspect per test. A reader can identify the most important part more easily. In one test you may check the call to one depended-on component (DOC) and in another test you will check another DOC. You may need additional test stubs to feed indirect inputs into the system under test.
Rule 4: Define as few expectations as possible.
It's easier for the reader to see what is important, and the tests are less brittle when the code changes. Test what the code does, not how it does it.
Rule 5: Divide all methods of your objects into two sharply separated categories:
Queries: Return a result and do not change the observable state of the system (are free of side effects).
Commands: Change the state of a system but do not return a value.
For tests, queries deliver input and commands are output. Check only the output: set expectations on commands and use stubs for the queries.
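A small jMock sketch of this separation (Inventory and Restocker are hypothetical): the query itemCount() is stubbed with allowing() to feed input, and an expectation is set only on the command addItem():
import org.jmock.Expectations;
import org.jmock.Mockery;
import org.junit.Test;

public class RestockerTests {
    // Hypothetical collaborator with one query and one command.
    public interface Inventory {
        int itemCount(String sku);            // query: returns a value, no side effect
        void addItem(String sku, int amount); // command: changes state, returns nothing
    }

    // Hypothetical system under test.
    static class Restocker {
        private final Inventory inventory;
        Restocker(Inventory inventory) { this.inventory = inventory; }
        void restock(String sku) {
            if (inventory.itemCount(sku) == 0) {
                inventory.addItem(sku, 10);
            }
        }
    }

    private final Mockery context = new Mockery();

    @Test
    public void restocksEmptyItems() {
        final Inventory inventory = context.mock(Inventory.class);
        context.checking(new Expectations() {{
            // Stub the query: it only feeds input into the system under test.
            allowing(inventory).itemCount("sku-1"); will(returnValue(0));
            // Expect the command: this is the observable output we verify.
            oneOf(inventory).addItem("sku-1", 10);
        }});
        new Restocker(inventory).restock("sku-1");
        context.assertIsSatisfied();
    }
}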
Rule 6: At the borderline between your own code and foreign code, it may be wise not to stub or mock the foreign code directly.
Very often it is better to create your own interface and implement a small layer of adapter code. Test this small layer with integration tests that include the foreign code. It is then much easier to stub or mock your own adapter interface when you test the rest of your code. Examples of foreign code are database access code (like ADO.NET, JDBC in Java, Hibernate, NHibernate), Active Directory access, network access, and so on.
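A minimal sketch of such an adapter layer, assuming a hypothetical employees table and plain JDBC as the foreign code; unit tests of the surrounding code can now stub EmployeeRepository instead of mocking JDBC:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Your own narrow interface: trivial to stub or mock in unit tests.
interface EmployeeRepository {
    String findNameById(int id);
}

// Thin adapter over the foreign JDBC API. Cover it with integration
// tests against a real database instead of mocking JDBC itself.
class JdbcEmployeeRepository implements EmployeeRepository {
    private final Connection connection;

    JdbcEmployeeRepository(Connection connection) {
        this.connection = connection;
    }

    public String findNameById(int id) {
        try (PreparedStatement statement = connection.prepareStatement(
                "SELECT name FROM employees WHERE id = ?")) {
            statement.setInt(1, id);
            try (ResultSet result = statement.executeQuery()) {
                return result.next() ? result.getString("name") : null;
            }
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }
}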
Currently I'm reading the great book "Growing Object-Oriented Software, Guided by Tests" by Steve Freeman and Nat Pryce. They encourage us to drive software development in the large with an outer loop of end-to-end acceptance tests and in the small with an inner loop of unit tests. While implementing the Nine Men's Morris board game with a simple console user interface just for fun, I tried to get a feeling for the "test-guided growing of software" described in the book.
During that exercise I needed a solution to control the input to and the output from the console. The following example source code shows one possible solution.
I started with an acceptance test. I chose to implement it with JUnit, but FitNesse would have been an alternative.
public class NineMensMorrisAcceptanceTests {
private ApplicationRunner application = new ApplicationRunner();
@Test
public void applicationAsksForUserMoveAndThenMakesOwnMove()
{
application.startGame();
application.hasDisplayed("Nine Men's Morris");
application.hasDisplayed("Please enter spot to place piece:");
application.userEnters("1\r\n");
application.hasDisplayed("Computer places piece on spot: 2");
}
}
The ApplicationRunner class starts the console application in a new thread and acquires control over the input and output streams. Luckily Java has a well-designed I/O system which allows an easy test setup. The game application writes to the console via System.out and reads from the console via System.in:
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.io.PrintStream;

public class ApplicationRunner {
    private PipedOutputStream pipedOutputStream;
    private PipedInputStream pipedInputStream;
    private ByteArrayOutputStream outputStream;

    public ApplicationRunner() {
        try {
            // Hijack System.in: whatever the test writes into pipedOutputStream
            // arrives as console input of the application.
            pipedOutputStream = new PipedOutputStream();
            pipedInputStream = new PipedInputStream(pipedOutputStream);
            System.setIn(pipedInputStream);
        } catch (IOException e) {
            throw new RuntimeException("Could not set up piped input", e);
        }
        // Hijack System.out: everything the application prints is captured here.
        outputStream = new ByteArrayOutputStream();
        System.setOut(new PrintStream(outputStream));
    }

    public void startGame() {
        // Run the console application in a daemon thread so that
        // a hanging game cannot block the test run.
        Thread thread = new Thread("Test Application") {
            @Override public void run() { Console.main(null); }
        };
        thread.setDaemon(true);
        thread.start();
    }

    public void hasDisplayed(String text) {
        // The application runs asynchronously, so poll the captured output.
        boolean displayed = false;
        int tries = 20;
        while (tries > 0 && !displayed) {
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
            displayed = outputStream.toString().contains(text);
            tries--;
        }
        if (!displayed) {
            throw new AssertionError("Missing text in output: " + text);
        }
    }

    public void userEnters(String userInput) {
        try {
            pipedOutputStream.write(userInput.getBytes());
        } catch (IOException e) {
            throw new RuntimeException("Could not write user input", e);
        }
    }
}
The Console.main() method sets up and starts the console application:
public static void main(String[] args) {
ConsoleGameUI consoleGameUI = new ConsoleGameUI();
GameController controller = new GameController(
consoleGameUI,
new Engine(),
new MoveGenerator());
consoleGameUI.init(new InputParser(),controller);
controller.start();
}
When we develop the ConsoleGameUI class, we will write some unit tests. There we can also use the hijacked streams to control inputs and outputs. Because this time the tests run synchronously, we can use a ByteArrayInputStream instead of a PipedInputStream to supply the user input to the system under test:
public class ConsoleGameUITests {
// Class under test
ConsoleGameUI consoleGameUI;
private String userInput;
private ByteArrayInputStream inputStream;
private ByteArrayOutputStream outputStream;
...
@Before public void setUp() {
userInput = "some input from user";
inputStream = new ByteArrayInputStream(userInput.getBytes());
outputStream = new ByteArrayOutputStream();
System.setIn(inputStream);
System.setOut(new PrintStream(outputStream));
...
consoleGameUI = new ConsoleGameUI();
consoleGameUI.init(inputParserMock, gameControllerMock);
}
@Test public void shouldPromptTheUserToEnterSpotToPlaceAPiece(){
consoleGameUI.askUserForMove(Turn.PLACE_WHITE);
assertTrue(outputStream.toString().contains("Please enter spot to place piece:"));
}
@Test public void shouldPromptTheUserToEnterSpotsToSlidePiece(){
consoleGameUI.askUserForMove(Turn.SLIDE_WHITE);
assertTrue(outputStream.toString().contains("Please enter spots to slide piece:"));
}
@Test public void shouldReadInputAndCallParser()
{
context.checking(new Expectations() {{
oneOf(inputParserMock).Parse(userInput);
...
}});
consoleGameUI.askUserForMove(Turn.PLACE_WHITE);
context.assertIsSatisfied();
}
...
}
In the ConsoleGameUI class we use System.out and System.in:
public class ConsoleGameUI implements GameUI {
private final Scanner scanner;
private IInputParser parser;
private IGameController gameController;
public ConsoleGameUI(){
this.scanner = new Scanner(System.in);
}
public void init(IInputParser parser, IGameController gameController){...}
@Override
public MoveRequest askUserForMove(Turn turn) {
switch(turn){
case PLACE_WHITE:
System.out.print("Please enter spot to place piece:");
break;
case SLIDE_WHITE:
System.out.print("Please enter spots to slide piece:");
break;
...
String line = scanner.nextLine();
MoveRequest request = parser.Parse(line);
return request;
}
...
}
Recently I held a training session about Test-Driven Development (TDD) for .NET developers. An important part of this training was about mocking, which is essential when you apply TDD in the real world. I presented several examples with Rhino Mocks and used its brilliant AAA syntax. As supporting material for the practical exercises of the participants, I was missing a quick reference or an API documentation for this syntax style. Based on Ayende's article Rhino Mocks 3.5 and the Rhino Mocks 3.3 Quick Reference, I created a new document with code examples:
Rhino Mocks AAA Syntax Quick Reference on Google Docs
Rhino Mocks AAA Syntax Quick Reference on Scribd
We are currently introducing 'Design By Contract' to a software development group of about 60 developers who are developing different components. We started by defining 'Design By Contract' policies for C# and Java. It is quite challenging to manage this change effort.
One piece of the change strategy is to measure progress. We are counting the number of classes and the number of contract assertions (preconditions, postconditions and invariants). So we have two statistics:
Absolute number of contract assertions per component
Average number of contract assertions per class per component
The metrics tell us whether contracts are used at all. We want to increase the code quality with contracts. If we see a team implementing no contracts, or only very few, we can support that team with training and consulting.
The metrics are published on a regular basis and serve as a means for motivation.
The limitation of these metrics is that they do not tell whether a component has enough contracts so that its understandability, maintainability and so on are best supported by 'Design By Contract'. The quality of the contracts is not covered by the metrics either.
NRefactory is part of the open source IDE SharpDevelop for the .NET platform. NRefactory is a parser library for C# and VB. It can create an Abstract Syntax Tree (AST) that represents all constructs available in C# or VB. This AST can be used to analyze source code or to modify it and generate code again.
You can download the SharpDevelop IDE, install it and then find the ICSharpCode.NRefactory.dll in the bin folder of the installation. Or you can download the SharpDevelop source code and compile the DLL yourself.
The example below shows how to parse a C# source code file, generate an AST and then use the AST to create metrics about the number of classes and the number of Code Contracts.
using System;
using System.IO;
using System.Diagnostics.Contracts;
using ICSharpCode.NRefactory;
namespace ContractCounter
{
class Program
{
public static void Main(string[] args)
{
TextReader reader = File.OpenText("Program.cs");
using (IParser parser = ParserFactory.CreateParser(SupportedLanguage.CSharp, reader))
{
parser.Parse();
if (parser.Errors.Count <= 0)
{
// Here we will use the parser.CompilationUnit(AST)
...
}
else
{
Console.WriteLine("Parse error: " + parser.Errors.ErrorOutput);
}
}
Console.Write("Press any key to continue . . . ");
Console.ReadKey(true);
}
}
}
To traverse the AST we can use the Visitor pattern (see [Gamma et al.: Design Patterns]). We implement a new visitor 'CounterVisitor' for our purposes and can inherit from the predefined 'AbstractAstVisitor'. In NRefactory, visitors are responsible for traversing the AST by themselves, so we have to call the children when we are visiting certain node types:
using System.Diagnostics.Contracts;
using ICSharpCode.NRefactory.Ast;
using ICSharpCode.NRefactory.Visitors;
namespace ContractCounter
{
public class CounterVisitor : AbstractAstVisitor
{
public override object VisitCompilationUnit(CompilationUnit compilationUnit, object data)
{
Contract.Requires(compilationUnit != null);
// Visit children (e.g. TypeDeclaration objects)
compilationUnit.AcceptChildren(this, data);
return null;
}
public override object VisitTypeDeclaration(TypeDeclaration typeDeclaration, object data)
{
Contract.Requires(typeDeclaration != null);
// Is this a class but not a test fixture?
if (IsClass(typeDeclaration) && !HasTestFixtureAttribute(typeDeclaration))
{
classCount++;
}
// Visit children (e.g. MethodDeclaration objects)
typeDeclaration.AcceptChildren(this, data);
return null;
}
public override object VisitMethodDeclaration(MethodDeclaration methodDeclaration, object data)
{
Contract.Requires(methodDeclaration != null);
// Visit the body block statement of method declaration
methodDeclaration.Body.AcceptVisitor(this, null);
return null;
}
public override object VisitBlockStatement(BlockStatement blockStatement, object data)
{
Contract.Requires(blockStatement != null);
// Visit children of block statement (E.g. several ExpressionStatement objects)
blockStatement.AcceptChildren(this, data);
return null;
}
public override object VisitExpressionStatement(ExpressionStatement expressionStatement, object data)
{
Contract.Requires(expressionStatement != null);
// Visit the expression of the expression statement (e.g. InvocationExpression)
expressionStatement.Expression.AcceptVisitor(this, null);
return null;
}
public override object VisitInvocationExpression(InvocationExpression invocationExpression, object data)
{
Contract.Requires(invocationExpression != null);
// Visit the target object of the invocation expression (e.g. MemberReferenceExpression)
invocationExpression.TargetObject.AcceptVisitor(this, null);
return null;
}
public override object VisitMemberReferenceExpression(MemberReferenceExpression memberReferenceExpression, object data)
{
Contract.Requires(memberReferenceExpression != null);
IdentifierExpression identifierExpression = memberReferenceExpression.TargetObject as IdentifierExpression;
// Is this a call to Contract.Requires(), Contract.Ensures() or Contract.Invariant()?
if ( identifierExpression != null &&
identifierExpression.Identifier == "Contract" &&
(memberReferenceExpression.MemberName == "Requires" ||
memberReferenceExpression.MemberName == "Ensures" ||
memberReferenceExpression.MemberName == "Invariant") )
{
assertionCount++;
}
return null;
}
public int ClassCount {
get { return classCount; }
}
public int AssertionCount
{
get { return assertionCount; }
}
#region private members
private int classCount;
private int assertionCount;
static private bool IsClass(TypeDeclaration typeDeclaration)
{
return typeDeclaration.Type == ClassType.Class;
}
static private bool HasTestFixtureAttribute(TypeDeclaration typeDeclaration)
{
bool hasTestFixtureAttribute = false;
foreach (AttributeSection section in typeDeclaration.Attributes) {
foreach (Attribute attribute in section.Attributes) {
if (attribute.Name == "TestFixture") {
hasTestFixtureAttribute = true;
break;
}
}
}
return hasTestFixtureAttribute;
}
#endregion
}
}
The actual counting takes place in the VisitTypeDeclaration() and VisitMemberReferenceExpression() methods. All the other methods are just necessary for traversing the tree.
We now have to start the visitor to traverse the AST in the Main() method:
...
// Here we will use the parser.CompilationUnit(AST)
CounterVisitor visitor = new CounterVisitor();
parser.CompilationUnit.AcceptVisitor(visitor, null);
Console.WriteLine("The file contains " + visitor.ClassCount + " class(es)");
Console.WriteLine("The file contains " + visitor.AssertionCount + " contract(s)");
...
For exploring the structure of the NRefactory AST you can use the NRefactoryDemo application, which is part of the SharpDevelop source code. You can enter source code and let the application create the corresponding AST.