EGit – the test suite put to the test
Assignment for IN3205
Joey van den Heuvel
6-2-2010
Index
Research Approach
• Git / JGit / EGit
• Used Programs
Project Overview
• Connection between JGit and EGit
• The Test Package
Testing: Packages
• the egit.core.internal.storage package
• the egit.core.test.internal.mapping package
• the egit.core.synchronize package
• the egit.core.test.op package
Testing: Code Coverage
Testing: Documentation
Testing: the egit.ui.test package
Testing: Examples of test methods and test code
Conclusion
Appendix A – PushOperationResult.java
Research Approach
This document gives a complete overview of the test suite of the EGit project. I examine the current situation, try to find out what kinds of testing methods are used, and look for things to improve in the test suite. First I explain the programs used and the projects themselves. Then I look into the test classes of the EGit core project with UML. After that I look into the core project with a code coverage analysis, then into the documentation of the core project, and briefly at the code coverage of the UI test suite. Finally I draw some conclusions about the whole project.
Git / JGit / EGit
The EGit project is built on the JGit project. The JGit project is based on the Git project. In this
chapter I will describe all of them briefly.
• Git
Before we look into EGit and JGit, we first have to look at Git itself. As stated on the website of Git1, Git is a “distributed version control system”. Currently there are many version control systems available, for example Subversion. The main difference between Git and the other version control systems is Git’s distributed nature. Being a distributed version control system means that multiple redundant repositories and branching are core concepts of the tool. Every user has a complete copy of the data stored locally. Some major benefits of this behavior are that the history of the data can be accessed extremely fast (without a network connection), users retain full functionality over the data, and every user has a complete backup of the repository. This is not the case in a centralized version control system like Subversion.
• JGit
JGit is a pure Java library implementation of the Git version control system, which was originally developed for the Linux kernel. With the arrival of JGit the distributed version control system became available for any Unix-like platform and for the Microsoft Windows operating systems.
• EGit
EGit is an Eclipse plugin, built on the JGit implementation, that provides a distributed version control system inside Eclipse for project teams to use. The EGit project is currently in the incubation phase of Eclipse; the purpose of this phase is to establish a fully functioning open-source project. The good thing about open-sourcing the project is that everyone with the right skills can contribute, making it a better tool for the community.
1 http://git-scm.com
Used Programs
For this research I have used three tools:
UML tool
The UML tool used for getting an overview of the project structure and creating the UML is Architexa. Architexa is “a tool suite to help you easily understand large/complex Java code bases by building easy-to-use diagrams directly within the Eclipse IDE” 2. With this tool I was able to understand the dependency of EGit on JGit and to create UML diagrams of the unit test projects, to get a deeper understanding of the structure of the program.
Code coverage tool
For the code coverage analysis I used EclEmma. EclEmma is “a java code coverage tool for eclipse. Internally it is based on EMMA java code coverage tool trying to adopt EMMA’s philosophy for the Eclipse workbench” 3. EMMA in turn is “an open-source toolkit for measuring and reporting Java code coverage” 4. EclEmma provided me with a nice interface in Eclipse to quickly get results of the code coverage of the unit tests.
Control flow graph tool
For the creation of the control flow graphs I used the Dr. Garbage Control Flow Graph Factory 3.65 for Eclipse. This is an Eclipse plugin for generating, editing and exporting control flow graphs. With this tool I could generate flow diagrams, and with the editing functionality I could easily get a good view on the flow of the program code.
2 www.architexa.com
3 www.eclemma.org
4 www.emma.sourceforge.net
5 http://www.drgarbage.com/
Project overview
Connection between EGit and JGit
As stated before, EGit is built on the JGit implementation. Using Architexa I was able to create the layered diagram below, which shows the dependencies from the egit.core package (which is the whole project) to the classes in the JGit project. As you can see, the EGit project uses a lot of the JGit project. What we will see in the code coverage part is that a lot of code from JGit is used throughout the whole EGit project. The arrows of the diagram show how many connections there are between the EGit core and the classes of JGit; a thicker arrow means more connections to the class. Making an insightful UML diagram of this connection for this report was impossible in the time I was given. The EGit project has grown quite big over time, and a UML diagram of the connections between EGit and JGit would take multiple pages to be insightful. With the code coverage part, however, we can determine enough for the scope of this project.
Layered Diagram of the dependencies of the egit.core package and the JGit project
The Test Package
This package is a helper package for all the test classes. It is used in all the other test packages. I will describe every class in the core.test package briefly.
The abstract GitTestCase.java
Almost all the test classes in the project extend the abstract GitTestCase class. This class defines basic functions which are needed by almost every test class. In the setUp function a clean project and a Git repository are created, so that every test case has a clean start. There are also two functions, createFile() and createFileCorruptShort(), that are used in several other test classes, as we will see in the UML diagrams of the other packages.
UML diagram of the GitTestCase class
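The setup pattern described for GitTestCase can be sketched in plain Java. This is an illustrative stand-in, not EGit's actual code: all class, field and method names below are hypothetical, but the idea is the same. Every test begins by rebuilding a clean working area, so no test can see state left behind by a previous one.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative stand-in for the GitTestCase idea: a fresh, empty
// working area is created before every test so each test starts clean.
public class FixtureSketch {
    private Path project; // re-created for every test

    // Plays the role of GitTestCase's setUp(): build a clean project area.
    void setUp() throws IOException {
        project = Files.createTempDirectory("test-project");
    }

    // Plays the role of the createFile() helper used by many test classes.
    Path createFile(String name, String content) throws IOException {
        return Files.write(project.resolve(name), content.getBytes());
    }

    // Two "tests": the second must not see the file made by the first.
    static boolean runBothTestsIsolated() throws IOException {
        FixtureSketch fixture = new FixtureSketch();

        fixture.setUp();                       // test 1 starts clean
        fixture.createFile("a.txt", "hello");
        boolean seenInTest1 = Files.exists(fixture.project.resolve("a.txt"));

        fixture.setUp();                       // test 2 gets a new area
        boolean seenInTest2 = Files.exists(fixture.project.resolve("a.txt"));

        return seenInTest1 && !seenInTest2;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(runBothTestsIsolated()); // true: tests are isolated
    }
}
```

Because each test rebuilds its own project, the order in which tests run cannot influence their results, which is exactly what the shared setUp in GitTestCase buys the EGit test suite.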
The rest of the egit.core.test package
The other classes in the core.test package are all classes that do not contain test methods themselves, except the AdaptableFileTreeIteratorTest. These classes create a lot of the basic functionality and basic situations to be used in the test cases. I will go into depth on each one of them.
• AdaptableFileTreeIteratorTest.java
This is the only class in the core.test package which contains some test functions. In this class the basic walking along the repository tree is tested. It is unclear why it is in this package, since the rest of the classes are supporting classes and not test classes themselves. This class should be moved to another package.
UML Diagram of the AdaptableFileTreeIteratorTest class
• DualRepositoryTestCase.java
This is an abstract class for test cases, just like GitTestCase, but for situations where more than one repository is used.
UML Diagram of the DualRepositoryTestCase class
• TestProject.java
This class defines a standard Java project in Eclipse with all the methods that EGit needs to work with a Java project. This class is also used a lot in the test cases, as we will see later in the UML diagrams.
UML Diagram of the TestProject class
• TestUtils.java
A collection of helper functions, for example functions that delete files, create temporary directories, and read full input streams and close them. All kinds of standard procedures that you do not want to code every time you need them.
UML Diagram for the TestUtils class
• TestRepository.java
This class defines an EGit repository with all the standard methods like connect, disconnect, add, commit, etc. It can be used to quickly execute some functions on a repository in the test cases. This class is also heavily used throughout the test classes.
UML Diagram for the TestRepository class
Testing: Packages
Now that all the helper classes of the test package have been covered, I can look into the packages which define the test methods. I will go over every package and try to explain what is tested, show the test structure and the helper methods used through UML, and reason about the sort of testing that is done in the package.
Testing: the egit.core.internal.storage package
The internally used blob storage object is tested here. A blob is used as a representation of a file in the repository. As you can see below, the blob tests use some of the helper methods from the core.test package.
UML Diagram of the BlobStorageTest class
This package uses the Catalog-Based testing method6. The method that is tested here is the blobStorage.getContents() method, and the tests define all kinds of situations in which you should not be able to read from the blob object. First a test is done with a correct blob object (the testOk() method).
Then a couple of tests are done that cover the situations in which you should not be able to read from the blob object. These situations are:
• The file defined in the blob is not present in the repository
• The file defined in the blob has the wrong ID
• The file defined in the blob has corrupted data
• The file defined in the blob has corrupted data and a wrong ID
These are all the situations that can go wrong with a blob object. The specification of the blob object is thus unfolded into five boundary cases, and a complete set of test cases is created.
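The catalogue above can be illustrated with a small sketch. BlobCatalogSketch is a deliberately simplified stand-in for EGit's blob storage (a map from id to bytes, with an IllegalStateException playing the role of the read failure); all names here are mine, not EGit's API. The structure mirrors BlobStorageTest: one passing case plus one check per catalogued failure situation.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for blob storage: getContents() fails for the
// catalogued situations (missing blob, corrupted/truncated data).
public class BlobCatalogSketch {
    private final Map<String, byte[]> blobs = new HashMap<>();

    void put(String id, byte[] data) { blobs.put(id, data); }

    byte[] getContents(String id, int expectedLength) {
        byte[] data = blobs.get(id);
        if (data == null)
            throw new IllegalStateException("blob not found: " + id);
        if (data.length != expectedLength)
            throw new IllegalStateException("blob corrupted: " + id);
        return data;
    }

    // Helper: does the given action fail the way the catalogue predicts?
    static boolean fails(Runnable action) {
        try { action.run(); return false; }
        catch (IllegalStateException expected) { return true; }
    }

    public static void main(String[] args) {
        BlobCatalogSketch store = new BlobCatalogSketch();
        store.put("ok", new byte[] { 1, 2, 3 });
        store.put("short", new byte[] { 1 }); // stands in for corrupted data

        // The testOk() case: a correct blob can be read.
        System.out.println(store.getContents("ok", 3).length);          // 3
        // Catalogued failure cases, one check per situation:
        System.out.println(fails(() -> store.getContents("gone", 3)));  // true
        System.out.println(fails(() -> store.getContents("short", 3))); // true
    }
}
```

The value of the catalog style shows here: each entry in the list of situations maps one-to-one onto a check, so a reader can verify completeness by comparing the tests against the catalogue.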
6 Book: Software Testing and Analysis Chapter 11.4 – Mauro Pezzé and Michal Young
Testing: the egit.core.test.internal.mapping package
In this part of the unit tests the history integrity of the repository is tested. A small repository is created, some files are committed and some changes are made. Then a check is done to see if the history in the repository is still correct. As you can see in the UML, the helper methods are used in every function.
UML Diagram of the HistoryTest class
After some research I came to the same conclusion as for the previous package: the Catalog-Based method is used. This research took more time because the method names are not as clear as in the previous package. The setup function takes care of all the preconditions and operations that need to be done. Then every function checks a certain postcondition. These postconditions are:
• You should be able to find the file with the correct id
• You should get a ‘null’ value if you search with an incorrect id
• You should be able to find the file (with history) with the correct id
• You should be able to get the index file and the content from a file with no history
• You should be able to get the index file and the content from a file with history
• You should be able to get the history from a file in a single branch (no history present)
• You should be able to get the history from a file in all branches (no history present)
• You should be able to get the history from a file in all branches (history present)
It must be noted that seeing this as Catalog-Based testing is my interpretation. The naming of the functions and the lack of documentation could indicate that the creator of the code just “wrote some tests”, which I then interpreted as a catalog-based testing style.
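The first two postconditions can be made concrete with a minimal stand-in. A map plays the role of the repository's file lookup; the class and method names are hypothetical, not EGit's API. The point is only the shape of the checks: one postcondition per test.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for the lookup the HistoryTest postconditions describe:
// a correct id resolves to a file, an incorrect id yields null.
public class HistoryLookupSketch {
    private final Map<String, String> filesById = new HashMap<>();

    void commit(String id, String fileName) { filesById.put(id, fileName); }

    // Postcondition 1: the file is found with the correct id.
    // Postcondition 2: an incorrect id returns null.
    String findFile(String id) { return filesById.get(id); }

    public static void main(String[] args) {
        HistoryLookupSketch repo = new HistoryLookupSketch();
        repo.commit("abc123", "project.index");
        System.out.println(repo.findFile("abc123")); // project.index
        System.out.println(repo.findFile("wrong"));  // null
    }
}
```

Written this way, the test name alone ("find with correct id", "null for incorrect id") documents the postcondition, which is exactly what the unclear method names in this package fail to do.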
Testing: the egit.core.synchronize package
This package contains all the test classes that test the underlying tree structure of the EGit project. The tree structure is the main component of a version control system. There are quite extensive tests for the compare functions of trees. The package uses the helper classes from the test package a lot, as seen in the UML diagram.
UML Diagram of the synchronize package
One odd thing about this package is that the tests “GitResourceVariantTreeSubscriber” and “GitResourceVariantTreeSubscriber 1” do exactly the same thing. The 1 variant has more methods, but those are currently ignored in the testing procedure. There is no documentation in the code explaining why this was done, so for me it is very confusing at the moment. The EGit team should look into this.
In this package the Catalog-Based testing method is used quite clearly and in a good way. Every function consists of three parts: “When”, “Given” and “Then”. In the “When” part the objects to be used in the test are created. In the “Given” part operations and variables are defined to create a certain testable situation (the preconditions). In the last part, the “Then” part, the postconditions are tested. This is a perfect example of the Catalog-Based method.
Here is an example of a synchronize test method:
/**
 * Comparing two remote files that have the same git ObjectId should return
 * true.
 *
 * @throws Exception
 */
@Test
public void shouldReturnTrueWhenComparingRemoteVariant() throws Exception {
    // when
    GitResourceVariantComparator grvc = new GitResourceVariantComparator(null);
    // given
    File file = testRepo.createFile(iProject, "test-file");
    RevCommit commit = testRepo.appendContentAndCommit(iProject, file, "a",
            "initial commit");
    String path = Repository.stripWorkDir(repo.getWorkTree(), file);
    GitBlobResourceVariant base = new GitBlobResourceVariant(repo, null,
            commit.getTree(), path);
    GitBlobResourceVariant remote = new GitBlobResourceVariant(repo, null,
            commit.getTree(), path);
    // then
    assertTrue(grvc.compare(base, remote));
}
In the “When” part a comparator is set up to be used in the test method. Then in the “Given” part all the preconditions are set up: a file is created, committed to the repository, and two remote Blob objects are retrieved from the same ObjectId. With the variables defined and the operations done, the “Then” part tests whether the two remote Blob objects are the same (the postcondition).
Testing: the egit.core.test.op package
In this section all the functional operations used by the EGit program are tested. The test classes are divided into two partitions: the normal operations and the dual repository operations. Every class tests a certain operation that is used in EGit. Below you can see the UML diagrams. As you can see, these test classes depend heavily on the core.test package.
UML Diagram of the operation package (single repository commands)
UML Diagram of the operation package (dual repository commands)
The testing method used in this package is much harder to identify because of the lack of documentation and the small number of test cases in each class, sometimes just one. When I looked further into these test methods it became clear that almost every test method simply performs the operation and checks whether the data is correct afterwards.
For example, the test class for the AddOperation has several test methods. These methods are:
• testTrackFile()
• testTrackFilesInFolder()
• testAddFile()
• testAddFilesInFolder()
• testAddFilesInFolderWithDerivedFile()
• testAddWholeProject()
Now the question is: what kind of testing is used here? If it was made with the Catalog-Based method, I would expect test functions that check whether you get an error when you try to add “NULL” objects. The same goes for structural/branch testing7, since the add method checks whether the file to be added is correct; testing the function with the branch method would cover the branch with the “NULL” check as well.
The conclusion is that I cannot really pinpoint a certain testing method here. What I can see is that a lot of extra testing can be done. I would suggest that they use the Catalog-Based method as applied in the synchronize package, since it is a nice way of testing things and of documenting the tests properly for the next users.
7 Book: Software Testing and Analysis Chapter 12 – Mauro Pezzé and Michal Young
Testing: Code Coverage
Using EclEmma I was able to get the code coverage for the core.test functions, and I could look at this coverage through different views called counters. There are five types of counters, which I will explain briefly.
• Basic Block counter
This is EclEmma’s fundamental unit of coverage. A basic block is a sequence of bytecode
instructions without any jumps or jump targets. A basic block is considered as covered when
its last instruction has been executed.
• Line counter
The number of covered Java source code lines. A source line is considered as covered if it
contains at least one covered basic block.
• Bytecode Instruction counter
The number of bytecode instructions within the basic blocks. If no line information is
available this is a good approximation for line coverage.
• Method counter
A method is considered as covered if at least one basic block of the method has been
executed.
• Type counter
A Java type is considered as covered if it has been loaded and initialized.
What can we do with these counters?
If you look at these five types, it is interesting to see what you can do with the different counters. The best counter, as stated by EMMA, is basic block coverage. Then you have the line counter and the instruction counter. These counters have one major drawback: a flow control statement counts as covered as soon as it is executed, even when its condition is always false. This means that a line like if(10 == 0) is counted as covered by these counters, which makes the average result higher than it should be.
Then there is the method counter, which has a major drawback if you use it standalone. A method is counted if at least one basic block in it has been executed, so a method counts as covered even if only one part of it is hit. This makes the method result much higher than the basic block result. What you can do with this counter is determine how much code inside the covered methods is not hit, by taking the method percentage and subtracting the basic block percentage. What you get is the percentage of code that is not hit inside the methods that are exercised by the test cases. You can use this to estimate whether you are missing a lot of possible branches in the methods.
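The subtraction described above can be written down as a tiny sketch. The helper name is mine, not part of EclEmma; the percentages below are the ones EclEmma reports later in this document for egit.core (62.4% method coverage, 50.1% block coverage).

```java
// Estimate the share of code sitting inside methods that the tests do
// enter but do not fully cover: method coverage minus basic block coverage.
public class CoverageGapSketch {
    static double uncoveredInsideCoveredMethods(double methodPct, double blockPct) {
        // Round to one decimal place, matching EclEmma's reporting style.
        return Math.round((methodPct - blockPct) * 10.0) / 10.0;
    }

    public static void main(String[] args) {
        // EclEmma figures for egit.core: 62.4% method, 50.1% block coverage.
        System.out.println(uncoveredInsideCoveredMethods(62.4, 50.1)); // 12.3
    }
}
```

The resulting 12.3% is the code inside entered-but-not-fully-covered methods, which is exactly the slack that branch testing targets.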
Finally there is the Type counter. This is a very coarse-grained counter and I did not use it in my research.
Connection between EGit and JGit
As stated in the beginning of this document, we can use the code coverage percentage to see how much of the code from the JGit project is used. If we take a look at the following numbers, we see something interesting.
Block coverage result for the egit.core package created with EclEmma
As you can see, the current test classes for the EGit project reach a code coverage of 30.3% of the JGit project, while the coverage of the EGit project itself is only 50.1%. Since only half of the EGit code is exercised, the actual usage of JGit code by EGit is higher than the 30.3% measured here. This is quite an interesting result: a good percentage of the code of the JGit project is used in the EGit project. We can conclude that the EGit team has really looked into the JGit project code before building their own project on top of it.
Code coverage of the EGit project
If we look again at the code coverage graph we see that only 50.1% of the EGit code is covered. Now the question is: is this a good coverage? My first answer would be that I do not know whether it is good or bad, because I do not know which code coverage percentage the EGit team has set for themselves to achieve their quality targets, or even whether they have a document stating those quality targets.
My second answer would be that it is a good start, but not done yet. We have to determine what a good code coverage percentage is. According to the paper “Code Coverage, What Does It Mean in Terms of Quality?”8, the relation between coverage and quality is not a straight line (as is often believed) but an exponential curve (actual coverage), as you can see in the graph below.
The next thing we should consider is the amount of work it takes to reach a higher coverage. 100% is a good quality coverage but takes a lot of time to produce: you have to create test cases that trigger certain exceptions, which is really hard. Taking these two things into consideration, a good code coverage lies between 80% and 95%, so a reasonable coverage for this program should be between those numbers in my point of view. They have to do more testing to get to that percentage for a good test suite.
8 “Code Coverage, What Does It Mean in Terms of Quality?”
T.W. Williams, M.R. Mercer, J.P. Mucha, R. Kapur
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=902502
How can they improve the test suite?
When you take a look at the following diagram of the code coverage for the methods, you can see two areas where there is room for improvement in the test code, and thereby for a better code coverage percentage.
Method coverage result for the egit.core package created with EclEmma
The first thing you can look into is the difference between the method coverage and the block coverage. As stated before, we can use this to determine the amount of code in covered methods that is itself not covered. If we take the 62.4% and subtract the 50.1%, we get 12.3% of the total code that is in a method that is called but not covered by the test cases. This means that certain branches in methods are not tested by the test code. This is an area of improvement. If you look in the code you see that this 12.3% consists of if statements, case statements and exceptions: all kinds of flow control statements. I suggest that they take another look with the branch testing method to see which branches are missed by the current test cases. With that information the test cases can be adjusted, or new test cases can be written, to complete the branch coverage.
The second thing you can look at is the 62.4% itself. Only 62.4% of the methods are called, which means that 37.6% is not tested at all. If you look at the quality graph, 62.4% is not a nice result for assuring the quality of the code. This 37.6% consists of methods ranging from very small (setters and getters) to very big (run methods for operations, for example). These big untested methods are a place where bugs can go undetected. A lot can be improved here.
Testing: Documentation
I researched the documentation and the names given to the test methods of the test classes. As a first-time user of the source code and test code, I should be able to understand what is going on in the code, why certain tests are conducted and what the expected behavior of a test method is. For each test package I checked whether any documentation is available and whether that documentation is short but good.
• Org.eclipse.egit.core.internal.storage
This package has no documented methods at all. Although the methods are easy to understand, I still find it good practice to document every method for a complete overview of the methods. Even a simple method like testFailNotFound() could raise questions for some users: why can we not find it? A simple line like “We should not be able to read from a blobStorage object when the actual file is not present in the repository” makes it a lot clearer for a new user.
• Org.eclipse.egit.core.synchronize
Every test method in these test classes is an example of how documentation of test classes should be done. The documentation is short but really clear. The function name shouldReturnFalseWhenRemoteDoesNotExists() already says what the function does, but the documentation (see below) makes it even easier to understand.
/**
 * When remote variant wasn't found, compare method is called with null as
 * second parameter. In this case compare should return false.
 */
• Org.eclipse.egit.core.test
The package with all the helper functions for the test classes has around 50% of its methods commented, and most of the documented functions are the small ones, like getSourceFolder(), which has the following documentation.
/**
 * @return Returns the sourceFolder.
 */
The large functions are less commented than the smaller methods. I find this a great drawback, since those are the harder methods to understand, and this .test package is used a lot throughout all the test classes (see the UML diagrams). So you would expect a lot of documentation in this package.
• Org.eclipse.egit.core.test.internal
In this package no documentation is written for the test classes at all, which is disturbing since they use method names that are not as clear as those in the internal.storage package. For instance, testDeepHistory_A() and testDeepHistory_B() are used, and without documentation it is not clear whether there is a difference between these two methods. Documentation should be added in this package and the method names should be reconsidered.
• Org.eclipse.egit.core.test.op
Of the eleven test classes present, only three have documentation, and one of those three only has one helper method documented. It is good that the method names are well chosen, as in the internal.storage package: if you look at a method name you can determine what is tested, but you cannot be sure until you look through the code. Like the internal.storage package, it needs documentation to make the workings of the methods clear to the user.
The conclusion about the documentation of the test code is that they should really look it over again. Good documentation makes it a lot easier for new source-code users to understand what is going on in the test classes. In the current situation there is room for user interpretation errors about what a method does, which can lead to more work for a new user, or even work that needs to be redone or is redundant with already existing methods.
Testing: the egit.ui.test package
The UI test classes use SWTBot to test all the UI functionality. SWTBot is a Java-based UI/functional testing tool for testing SWT and Eclipse based applications. In the time given for this project I did not have the time to create all the UML for the UI packages and to reason about the testing methods used there. I was able, however, to create a code coverage overview for the package.
Block coverage result for the egit.ui package created with EclEmma
We can see here that the UI test project has almost the same code coverage as the core test project. Considering the previously mentioned reasons about the code coverage number of the core test package, I think this number is too low as well. Also, the gap between the block coverage and the method coverage is around 10%, which is almost the same as in the core project. So the conclusion for the UI test project is the same: they should look more into untested methods and look for branches that are not tested now, for a more complete test suite.
It also concerns me that the egit.ui package hits 49.7% of the EGit code while the EGit project itself is only tested for 50.1%. A UI test can then fail because the underlying code in the EGit project throws an exception, for instance. So this is another reason that the EGit project itself should have a higher code coverage percentage.
Testing: Examples of test methods and test code
In this part I will give two examples of ways to enhance the test suite, and I will point out some pieces of code where each technique can be used as well. A short example of the catalog-based method is given first, and then a bigger method is put to the test with the branch method. The actual test code can be seen in the PushOperationResultTest.java file in the appendix.
An example with the Catalog-Based test method
To give an example of how to use the catalog-based test method, I have chosen a method from the PushOperationResult class. This class defines the result of a push operation in EGit. For the catalog-based test method you have to do the following steps:
- Define the different input sets
- Define test cases according to the input sets
- Write the test cases
The method that I will be testing with the catalog-based method is isSuccessfulConnectionForAnyURI of the PushOperationResult class, shown below. I have chosen this method because its documentation is quite clear. With this documentation we can create a set of conditions that tests the method completely.
/**
 * @return true if connection was successful for any repository
 *         (URI), false otherwise.
 */
public boolean isSuccessfulConnectionForAnyURI() {
    for (final URIish uri : getURIs()) {
        if (isSuccessfulConnection(uri))
            return true;
    }
    return false;
}
If you look at the documentation you see that the method returns true if any of the URIs is a successful connection. This means that the Set<URIish> returned from the getURIs() method must contain a successful connection to make this method return true; otherwise it will return false. The input sets we can define for this method are:
- An empty set (should return false)
- A set with no successful connections (should return false)
- A set with successful connections (should return true)
These three sets cover all the different situations that can arise using this method, so three test cases based on these sets are enough to test the method thoroughly. This resulted in the following test methods in the code:
- shouldBeFalseWithNoUris()
- shouldBeFalseWithUrisNoneConnected()
- shouldBeTrueWithUrisOneConnected()
See the PushOperationResultTest.java in appendix A for the created code.
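To show the shape of those three tests without the full EGit classes, here is a simplified stand-in of PushOperationResult: URIs are modeled as plain strings and the set of successful connections is passed in directly, whereas the real test code in Appendix A works against URIish objects. The loop in the method under test has the same shape as the EGit method above.

```java
import java.util.List;
import java.util.Set;

// Simplified stand-in: URIs are plain strings and the successful
// connections are given directly, so only the method under test matters.
public class PushResultSketch {
    private final List<String> uris;
    private final Set<String> successful;

    PushResultSketch(List<String> uris, Set<String> successful) {
        this.uris = uris;
        this.successful = successful;
    }

    boolean isSuccessfulConnection(String uri) {
        return successful.contains(uri);
    }

    // Same shape as the EGit method: true if any URI connected successfully.
    boolean isSuccessfulConnectionForAnyURI() {
        for (String uri : uris) {
            if (isSuccessfulConnection(uri))
                return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // shouldBeFalseWithNoUris()
        System.out.println(new PushResultSketch(List.of(), Set.of())
                .isSuccessfulConnectionForAnyURI());                 // false
        // shouldBeFalseWithUrisNoneConnected()
        System.out.println(new PushResultSketch(List.of("a", "b"), Set.of())
                .isSuccessfulConnectionForAnyURI());                 // false
        // shouldBeTrueWithUrisOneConnected()
        System.out.println(new PushResultSketch(List.of("a", "b"), Set.of("b"))
                .isSuccessfulConnectionForAnyURI());                 // true
    }
}
```

Each of the three calls in main corresponds to one of the input sets defined above, which is the whole point of the catalog-based method: the input sets dictate the test cases one-to-one.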
An example with the Branch test method
To give an example of how to use the branch test method, I have chosen another method from the PushOperationResult class. For the branch test method you have to do the following steps:
- Create a control flow graph of the function
- Define test cases so that all the branches are hit during test case execution
- Write the test cases
The method that I will be testing with the branch test method is the equals method of the PushOperationResult class, shown below. I have chosen this method because it has a lot of flow control statements, which make it quite a complex method. At first sight it is quite hard to see which tests we need to conduct to fully cover the equals method. With the branch test method I can easily create an overview of the method and write test cases in a structured way.
public boolean equals(final Object obj) {
    if (obj == this)
        return true;
    if (!(obj instanceof PushOperationResult))
        return false;
    final PushOperationResult other = (PushOperationResult) obj;
    // Check successful connections/URIs two-ways:
    final Set<URIish> otherURIs = other.getURIs();
    for (final URIish uri : getURIs()) {
        if (isSuccessfulConnection(uri)
                && (!otherURIs.contains(uri) || !other.isSuccessfulConnection(uri)))
            return false;
    }
    for (final URIish uri : other.getURIs()) {
        if (other.isSuccessfulConnection(uri)
                && (!urisEntries.containsKey(uri) || !isSuccessfulConnection(uri)))
            return false;
    }
    for (final URIish uri : getURIs()) {
        if (!isSuccessfulConnection(uri))
            continue;
        final PushResult otherPushResult = other.getPushResult(uri);
        for (final RemoteRefUpdate rru : getPushResult(uri).getRemoteUpdates()) {
            final RemoteRefUpdate otherRru = otherPushResult
                    .getRemoteUpdate(rru.getRemoteName());
            if (otherRru == null)
                return false;
            if (otherRru.getStatus() != rru.getStatus()
                    || otherRru.getNewObjectId() != rru.getNewObjectId())
                return false;
        }
    }
    return true;
}
The flow control graph
To create a better view of the equals method I have created a flow control graph. In this graph you
can see the flow of the program code. This gives a much better view of the method than the code
itself. The flow control graph was made using the Dr. Garbage plugin for Eclipse.
[Figure: control flow diagram of the PushOperationResult equals method, created with Dr. Garbage]
Define the test cases
From this flow control graph it is easy to define the test cases needed to reach 100% branch
coverage and thereby 100% code coverage. Seven test cases, one for each of the seven return
statements, go through all the branches except one: the continue statement in the third for loop. So
to get 100% branch coverage it is sufficient to define seven test cases, with one of them hitting the
continue statement during execution. These test cases are:
- Return True when the other object is this object
- Return False when the other object is not an instance of the PushOperationResult class
- Return False when the other object does not contain a URI that this object has
- Return False when the other object has a URI that this object does not
- Return False when the other object has a URI with a null RemoteRefUpdate object
- Return False when the other object has a RemoteRefUpdate with a different status
- Return True when this and the other object are equal
I have chosen to add the continue statement to the last test case. This resulted in the following
method names in the code:
- shouldReturnTrueWhenObjIsTheSameObject()
- shouldReturnFalseWhenObjIsNotAPushOperationResult()
- shouldReturnFalseWhenOtherResultDoesNotHaveTheUri()
- shouldReturnFalseWhenResultOtherResultHasUri()
- shouldReturnFalseWhenOtherHasANullRemoteRefUpdate()
- shouldReturnFalseWhenOtherRemoteRefUpdateHasADifferentStatus()
- shouldReturnTrueWhenObjectsAreEqualAndHitTheContinueBlock()
Write the test cases
With the test cases defined I started writing the test code for the equals function and immediately
ran into a problem. I could not create these objects manually because the constructor of
PushOperationResult is private. I did understand why it is private: you can only obtain a
PushOperationResult after an actual PushOperation has been executed. That left me with two
options.
The first option was to figure out how I could manipulate a PushOperation so that I could create the
different situations needed for the test cases. I started reading into the PushOperation methods and
the existing test cases for PushOperation, and quickly realized that it would be very hard for me,
without a complete understanding of the EGit code, to create the seven different situations.
The second option was to change the visibility of the constructor and two other methods to public,
to get full control over the PushOperationResult object. This solution is viable for the scope of this
research project, but it is not a good solution for the EGit team. Since the EGit team has enough
knowledge about the code, they can create all the situations using the PushOperation object. With
these methods set to public I could create valid PushOperationResult objects very quickly for the
test cases. See PushOperationResultTest.java in appendix A for the created code.
What can we learn from the examples
Considering the two methods presented before, we can also take a quick look at a code snippet
from the run method of the PushOperation class. The current test suite misses two parts of this
snippet: the dryRun if statement and the isCanceled check. Had either the branch method or the
catalog-based method been applied correctly, those two if statements would have been tested. I
suggest that the EGit team start using a more structured way of testing, so that bugs in these kinds
of subparts of the code are avoided.
if (dryRun)
    monitor.beginTask(CoreText.PushOperation_taskNameDryRun, totalWork);
else
    monitor.beginTask(CoreText.PushOperation_taskNameNormalRun, totalWork);
operationResult = new PushOperationResult();
for (final URIish uri : specification.getURIs()) {
    final SubProgressMonitor subMonitor = new SubProgressMonitor(
            monitor, WORK_UNITS_PER_TRANSPORT,
            SubProgressMonitor.PREPEND_MAIN_LABEL_TO_SUBTASK);
    Transport transport = null;
    try {
        if (monitor.isCanceled()) {
            operationResult.addOperationResult(uri,
                    CoreText.PushOperation_resultCancelled);
            continue;
        }
Conclusion
This project has created a clear overview of the current structure of the EGit project, covering the
connection to JGit and the structure of the core and UI packages. For each test package there is an
explanation of which testing methods the EGit project team has used.
The conclusion about the test suite is that the EGit project team has made a good start, but there is
much room for improvement. A good code coverage percentage is between 80% and 90%, while
both packages are around 50%. Two solutions were given:
• Look at methods that are not tested at all in the current situation
• Use the branch testing method to test more code inside methods flow control statements
Another conclusion is that the documentation of the test code is not complete. As it stands, new
users can make interpretation errors. The EGit project team should add more documentation to the
test code and review the written documentation to check whether it is clear enough. The method
names should also be revised where they are unclear.
The examples of the two test methods showed that using a test method is a structured way to
create test cases. As seen in the original code, the EGit team sometimes misses little pieces of code
in the test procedure. This can lead to undetected bugs.
Appendix A – PushOperationResultTest.java
package org.eclipse.egit.core.test.op;
import static org.junit.Assert.*;
import java.io.File;
import java.io.IOException;
import java.util.HashMap;
import org.eclipse.egit.core.op.PushOperationResult;
import org.eclipse.egit.core.test.DualRepositoryTestCase;
import org.eclipse.egit.core.test.TestRepository;
import org.eclipse.jgit.lib.Constants;
import org.eclipse.jgit.lib.ObjectId;
import org.eclipse.jgit.storage.file.FileRepository;
import org.eclipse.jgit.transport.PushResult;
import org.eclipse.jgit.transport.RemoteRefUpdate;
import org.eclipse.jgit.transport.RemoteRefUpdate.Status;
import org.eclipse.jgit.transport.URIish;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
public class PushOperationResultTest extends DualRepositoryTestCase {
File workdir;
File workdir2;
PushOperationResult pushOperationResult;
PushOperationResult otherPushOperationResult;
URIish uri;
URIish uri2;
PushResult pushResultNull;
ObjectId objectID;
String projectName = "PushTest";
/**
* Setup all the objects that are needed in the testcases
*
* @throws Exception
*/
@Before
public void setUp() throws Exception
{
workdir = testUtils.createTempDir("Repository1");
workdir2 = testUtils.createTempDir("Repository2");
pushOperationResult = new PushOperationResult();
otherPushOperationResult = new PushOperationResult();
repository1 = new TestRepository(new File(workdir,
Constants.DOT_GIT));
repository2 = new TestRepository(new FileRepository(new File(workdir2,
Constants.DOT_GIT)));
uri = new URIish("file:///" +
repository1.getRepository().getDirectory().toString());
uri2 = new URIish("file:///" +
repository2.getRepository().getDirectory().toString());
objectID =
ObjectId.fromString("0123456789012345678901234567890123456789");
pushResultNull = null;
}
@After
public void tearDown() throws Exception {
repository1.dispose();
repository2.dispose();
repository1 = null;
repository2 = null;
testUtils.deleteTempDirs();
}
//the catalog based tests
/**
* Test the isSuccessfulConnectionForAnyURI with no uri in the object
*
*/
@Test
public void shouldBeFalseWithNoUris()
{
assertFalse(pushOperationResult.isSuccessfulConnectionForAnyURI());
}
/**
* Test the isSuccessfulConnectionForAnyURI with multiple unsuccessful
* connection in the object
*
*/
@Test
public void shouldBeFalseWithUrisNoneConnected()
{
pushOperationResult.addOperationResult(uri, pushResultNull);
pushOperationResult.addOperationResult(uri2, pushResultNull);
assertFalse(pushOperationResult.isSuccessfulConnectionForAnyURI());
}
/**
* Test the isSuccessfulConnectionForAnyURI with multiple connections in the
* object
* One is successful
*
*/
@Test
public void shouldBeTrueWithUrisOneConnected()
{
pushOperationResult.addOperationResult(uri, new PushResult());
pushOperationResult.addOperationResult(uri2, pushResultNull);
assertTrue(pushOperationResult.isSuccessfulConnectionForAnyURI());
}
//The test functions below are all test methods needed to get a
//100% branch coverage for the equals function.
/**
* Test for the equals function with the same object
*
*/
@Test
public void shouldReturnTrueWhenObjIsTheSameObject()
{
assertEquals(pushOperationResult, pushOperationResult);
}
/**
* Test for the equals function with a non PushOperationResult object
*
*/
@Test
public void shouldReturnFalseWhenObjIsNotAPushOperationResult()
{
assertFalse(pushOperationResult.equals("this is a string object"));
}
/**
* Test for the equals function with a PushOperationResult object with no
* uri's
*
*/
@Test
public void shouldReturnFalseWhenOtherResultDoesNotHaveTheUri()
{
pushOperationResult.addOperationResult(uri, new PushResult());
assertFalse(pushOperationResult.equals(otherPushOperationResult));
}
/**
* Test for the equals function with an empty PushOperationResult object
* compared with a PushOperationResult object with uri's
*
*/
@Test
public void shouldReturnFalseWhenResultOtherResultHasUri()
{
otherPushOperationResult.addOperationResult(uri, new PushResult());
assertFalse(pushOperationResult.equals(otherPushOperationResult));
}
/**
* Test for the equals function with nonSuccessFulConnections
*
*/
@Test
public void shouldReturnTrueWhenObjectsAreEqualAndHitTheContinueBlock()
{
pushOperationResult.addOperationResult(uri, new PushResult());
pushOperationResult.addOperationResult(uri2, pushResultNull);
otherPushOperationResult.addOperationResult(uri, new PushResult());
assertTrue(pushOperationResult.equals(otherPushOperationResult));
}
/**
* Test for the equals function with a PushOperationResult object with
* RemoteRefUpdates
* compared to a PushOperationResult object with a null object in the
* RemoteRefUpdates
*
* @throws IOException
*/
@Test
public void shouldReturnFalseWhenOtherHasANullRemoteRefUpdate() throws
IOException
{
PushResult pushResult = new PushResult();
HashMap<String, RemoteRefUpdate> remoteUpdates = new HashMap<String,
RemoteRefUpdate>();
remoteUpdates.put("testValue",
new RemoteRefUpdate(repository1.getRepository(), null, "remoteName",
true, "localname", objectID));
pushResult.setRemoteUpdates(remoteUpdates);
PushResult pushResult2 = new PushResult();
HashMap<String, RemoteRefUpdate> remoteUpdates2 = new HashMap<String,
RemoteRefUpdate>();
remoteUpdates2.put("testValue", null);
pushResult2.setRemoteUpdates(remoteUpdates2);
pushOperationResult.addOperationResult(uri, pushResult);
otherPushOperationResult.addOperationResult(uri, pushResult2);
assertFalse(pushOperationResult.equals(otherPushOperationResult));
}
/**
* Test for the equals function with a PushOperationResult object with
* RemoteRefUpdates object
* compared to a PushOperationResult object with a different RemoteRefUpdates
* object (the status is different)
*
* @throws IOException
*/
@Test
public void shouldReturnFalseWhenOtherRemoteRefUpdateHasADifferentStatus()
throws IOException
{
PushResult pushResult = new PushResult();
HashMap<String, RemoteRefUpdate> remoteUpdates = new HashMap<String,
RemoteRefUpdate>();
RemoteRefUpdate remoteRefUpdate = new
RemoteRefUpdate(repository1.getRepository(),null,"testValue", true,
"localname", objectID);
remoteUpdates.put("testValue", remoteRefUpdate);
remoteRefUpdate.setStatus(Status.OK);
pushResult.setRemoteUpdates(remoteUpdates);
PushResult pushResult2 = new PushResult();
HashMap<String, RemoteRefUpdate> remoteUpdates2 = new HashMap<String,
RemoteRefUpdate>();
RemoteRefUpdate remoteRefUpdate2 = new
RemoteRefUpdate(repository1.getRepository(),null,"testValue", true,
"localname", objectID);
remoteUpdates2.put("testValue", remoteRefUpdate2);
remoteRefUpdate2.setStatus(Status.NOT_ATTEMPTED);
pushResult2.setRemoteUpdates(remoteUpdates2);
pushOperationResult.addOperationResult(uri, pushResult);
otherPushOperationResult.addOperationResult(uri, pushResult2);
assertFalse(pushOperationResult.equals(otherPushOperationResult));
}
}