Introduction to JUnit
CSA-Prep, Fall 2004
junit.framework.Assert
assertEquals(): checks equality of objects or primitive types.
Arguments:
An optional String for labelling a failure message (if there is one) may be given as the first argument.
For float or double values, an additional argument of the same type is required for the "tolerance" of floating-point equality.
Examples:
assertEquals(10, 2*x) would succeed if x is an integer variable with value 5, and would fail if the integer x has any other value.
assertEquals("x test", 10, 2*x) would succeed if x is an integer variable with value 5, and would fail if the integer x has any other value. If this call fails, the failure message would include the string "x test".
Suppose that
MyClass a = new MyClass(7); MyClass b = new MyClass(7);
Then the call assertEquals(a, b); would succeed if a.equals(b) is true, and would fail otherwise. The programmer must define an equals() method for MyClass in order to have a chance of success; this equals() method overrides Object.equals(), which is automatically false for distinct objects in memory (such as a and b).
If Circle.circumference(float r) is supposed to return the circumference of a circle of radius r, then the call assertEquals(6.2832, Circle.circumference(1.0), .0001) should succeed, since the computed approximation to 2*pi should be within .0001 of 6.2832.
assertSame() is for checking identity among objects.
Arguments:
An optional String for labelling a failure message (if there is one) may be given as the first argument.
Examples:
Suppose that
MyClass a = new MyClass(7); MyClass b = new MyClass(7);
Then the call assertSame(a, b); will fail, even if a.equals(b) is true.
However, if
MyClass a = new MyClass(7); MyClass c; c = a;
the following call will then succeed:
assertSame(a, c);
This is because the names a and c refer to the same object in memory.
assertNotSame, cf. assertSame.
assertNull() takes one Object argument
(optionally preceded by a String failure message), and
succeeds if that Object is null.
assertNotNull, cf.
assertNull.
assertTrue() takes one boolean expression argument
(optionally preceded by a String failure message), and
succeeds if that expression is true.
assertFalse, cf.
assertTrue.
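As a brief illustration of these four assertions, the following hypothetical test method exercises them on an ordinary String value (not on any particular class under test):

public void testStringBasics() {
    String s = "JUnit";
    String t = null;
    assertNotNull("s should refer to an object", s);
    assertNull("t was never assigned an object", t);
    assertTrue("s should start with J", s.startsWith("J"));
    assertFalse("s should not be empty", s.length() == 0);
}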
fail() takes one optional String
failure message argument, and forces a test failure.
For example, the
following code tests an exception condition for the
Factorial class.
try {
    Factorial.iterative(-3);
    fail("Factorial.iterative(-3) exception not thrown!");
} catch (IllegalArgumentException e) {;}
... // NEXT
Factorial.iterative() should throw an exception for
a negative argument. If it does, that exception is caught and nothing
is done, so the test succeeds and processing continues at
NEXT. But if no exception is thrown, then
fail() is called to generate a JUnit failure.

junit.framework.TestCase

With JUnit, the tests for a class C are located in
a custom subclass of TestCase, ordinarily named
CTest, that generally includes a test method
"testMyMethod()" for each method "myMethod()" of
that class C.
For example, consider the Account
class.
The subclass of TestCase would be named
AccountTest.
This class AccountTest would contain methods
testGetBalance(), testGetAccountNumber(),
testDeposit(), etc., to perform tests on the
Account methods getBalance(),
getAccountNumber(), deposit(), etc.
No matter what the arguments, return values, and exceptions of a method may be, the corresponding test method has no arguments, no return value, and no exceptions.
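For example, a sketch of two such test methods might look like the following. (The Account constructor taking an initial balance, and getBalance() returning a double, are assumptions made for this illustration; the actual class may differ.)

import junit.framework.TestCase;

public class AccountTest extends TestCase {
    // One test method per Account method; no arguments, no return value.
    public void testGetBalance() {
        Account acct = new Account(100.00);   // assumed constructor
        assertEquals("initial balance", 100.00, acct.getBalance(), .0001);
    }

    public void testDeposit() {
        Account acct = new Account(100.00);
        acct.deposit(50.00);
        assertEquals("balance after deposit", 150.00, acct.getBalance(), .0001);
    }
}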
TestCase is a subclass of Assert, so
the assertX methods above are
available for creating test methods. The Eclipse menu choice
Run --> Run As --> JUnit Test automatically calls
all the test methods and reports any failures together with a red bar;
if all test methods succeed, a green bar is shown.
Much of the work in devising JUnit tests resides in the
construction of good example data and objects for running those
tests. State variables of the test class are used for this purpose;
such state variables are called fixtures, and they are available to
all of the test methods. Fixtures are initialized in the method
setUp() and any necessary cleanup is performed in
tearDown(). These methods of TestCase may
be overridden in the test class to provide a known initial state of
all fixtures.
Note that setUp() is called before each test
method, and tearDown() is called after each. Thus,
by writing setUp() and tearDown()
appropriately, the developer can ensure that each test method has the same
initial state.
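For example, the AccountTest sketch above could be rewritten to use a fixture, still assuming the hypothetical Account constructor and a double balance:

import junit.framework.TestCase;

public class AccountTest extends TestCase {
    private Account acct;            // fixture, available to every test method

    protected void setUp() {
        acct = new Account(100.00);  // fresh, known state before each test
    }

    protected void tearDown() {
        acct = null;                 // no other cleanup needed in this sketch
    }

    public void testGetBalance() {
        assertEquals("initial balance", 100.00, acct.getBalance(), .0001);
    }

    public void testDeposit() {
        acct.deposit(50.00);
        assertEquals("balance after deposit", 150.00, acct.getBalance(), .0001);
    }
}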
In the Extreme Programming methodology, developers write the test before the code.
The rationale is that testing after writing code works "against human nature" for most developers: once apparently correct code has been produced, they want to go on to the next task, not try to break what they've written. (Beck) Some other benefits of writing the tests before the code (Gallardo et al): the tests set goals for the code; the tests help to clarify the API; and they provide example calls. In traditional software design, such technical details typically appear in design documents; by expressing them as pre-written tests instead, one productively accomplishes multiple goals at once.
JUnit was created for Extreme Programming, and accomplishes some of its specific goals: JUnit tests are isolated, in that they do not interact with other tests; and they are automatic, returning an objective "yes/no" answer at any point in development. Observe that both of these goals work well with other XP practices such as continuous integration, simple design, and small releases.
Note that testing alone never proves correctness. There can always be cases that a developer forgets to test, or is unable to check; even in situations where it is possible to test every possible outcome, one must prove that every case is included.
This illustrates the tradeoff between expedient productivity through testing and expensive formal verification of correctness. These opposing values carry different weights in different applications; for example, the control of bowling pinsetters is seldom lethal, whereas the regulation of X-ray machines could be.
Unit tests are not always sufficient for a particular application. In particular, a user typically provides functional tests to determine whether a story is completed. These frequently cannot be automated, and their outcomes are not necessarily "yes/no" as is the case for unit tests.
Here are some other types of non-unit tests that may be appropriate for particular systems.
Parallel tests, which show that a new, reimplemented system behaves the same way an old system did (or which show how small the difference is between the new and old systems).
Stress test, simulating the worst possible load on a system.
"Monkey test," to insure that the system behaves well in the face of nonsense input.
Eclipse provides a convenient JUnit wizard for
automatically generating a subclass of TestCase from a
stub implementation of a class to be tested.
To use the wizard:
First create a stub implementation of the class to be tested in the editor. (You can format it with Source --> Format. To save, you can right-click on that editor area and select Save.)
Select New --> Other.
Expand the Java list (using the triangle to the left) to reveal a selection for JUnit, and choose that selection.
Select TestCase on the right side of the New view, then click Next.
Select setUp() and tearDown() for generating method stubs in the test class (subclass of TestCase). Click Next, then Finish.
The wizard produces a stub subclass of TestCase, which you can fill in with assertX methods, etc. Of course, this wizard can also be used to create stubs of unit tests for classes you have already written.

JUnit is not designed to test main() methods.
Instead, create strategic helper methods that can be tested using
JUnit.
For example, in a program to print a table of square
roots to standard output, a developer could create a method
printTable() with one OutputStream argument
that prints the desired table on that OutputStream. In
main(), System.out could be passed as the
argument in a call to printTable(); in a test method
testPrintTable(), a file fixture could be passed as that
argument, and the resulting file examined for correctness.
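A sketch of this approach follows, using hypothetical names and writing to a ByteArrayOutputStream rather than a file for brevity; the file-based variation described above works the same way.

// In the class under test, here called SquareRootTable (a hypothetical name):
public class SquareRootTable {
    public static void printTable(java.io.OutputStream out) {
        java.io.PrintStream ps = new java.io.PrintStream(out);
        for (int i = 1; i <= 10; i++) {
            ps.println(i + "\t" + Math.sqrt(i));
        }
        ps.flush();
    }

    public static void main(String[] args) {
        printTable(System.out);   // main() stays trivial
    }
}

// In SquareRootTableTest (a subclass of TestCase):
public void testPrintTable() {
    java.io.ByteArrayOutputStream buffer = new java.io.ByteArrayOutputStream();
    SquareRootTable.printTable(buffer);
    String output = buffer.toString();
    assertTrue("row for 4 should show 2.0", output.indexOf("2.0") >= 0);
}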
This "helper method" approach may help when testing other aspects of
the system that cannot be readily checked using assertX
methods, such as GUI components: try encapsulating any data
manipulation needed for that GUI into helper methods.
If you make changes in a class or its corresponding JUnit test class that break the tests in a bad way, and you just want to go back to a prior consistent state, you have two automated options.
You can retrieve an earlier version of that code from CVS. Of course, you may have to dig around to find the earlier version you want, if others on your development team have checked in that file since you checked it out.
You can revert from the local history as follows:
Right-click on the file and select Compare With --> Local History.

rab@stolaf.edu, October 27, 2004