I am trying to fully understand the final console output generated by my unit tests:
Test Suite 'Multiple Selected Tests' finished at 2013-02-21 22:54:57 +0000.
Executed 6 tests, with 0 failures (0 unexpected) in 0.034 (0.052) seconds
Most of it is self-explanatory, but I am unsure about the last part, specifically 'in 0.034 (0.052) seconds'. It can't be an average, because each test shows output like the following:
Test Suite 'MMProductLogicTests' started at 2013-02-21 22:54:57 +0000.
Test Case '-[MMProductLogicTests testProductMissingFormURL]' started.
Test Case '-[MMProductLogicTests testProductMissingFormURL]' passed (0.005 seconds).
Test Suite 'MMProductLogicTests' finished at 2013-02-21 22:54:57 +0000.
All six tests show 'passed (0.005 seconds)', so an average does not make sense. 0.034 appears to be the total execution time, but what does (0.052) represent?
0.034 is the 'testDuration'; 0.052 is the 'totalDuration'.
Here is the SenTestingKit source code (older version):
+ (void) testSuiteDidStop:(NSNotification *) aNotification
{
    SenTestRun *run = [aNotification run];
    testlog ([NSString stringWithFormat:@"Test Suite '%@' finished at %@.\nPassed %d test%s, with %d failure%s (%d unexpected) in %.3f (%.3f) seconds\n",
              [run test],
              [run stopDate],
              [run testCaseCount], ([run testCaseCount] != 1 ? "s" : ""),
              [run totalFailureCount], ([run totalFailureCount] != 1 ? "s" : ""),
              [run unexpectedExceptionCount],
              [run testDuration],    // the first number, 0.034
              [run totalDuration]]); // the second number, (0.052)
}
Unfortunately, further examination of the source does not reveal exactly how the two durations differ.
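For what it's worth, a plausible mental model (my assumption; nothing in the SenTestingKit headers confirms it) is that totalDuration is the wall-clock span from the run's start date to its stop date, while testDuration accumulates only the time spent inside the individual test cases, so the gap between the two would be suite overhead such as fixture setup and teardown. Here is a minimal sketch of that model (MyTestRun and recordTestCaseDuration: are hypothetical names, not SenTestingKit API):

#import <Foundation/Foundation.h>

// Hypothetical re-creation of the two durations. The property names mirror
// SenTestRun's accessors, but the arithmetic is an assumption, not the
// shipped implementation.
@interface MyTestRun : NSObject
@property (strong) NSDate *startDate;           // set when the suite starts
@property (strong) NSDate *stopDate;            // set when the suite finishes
@property (assign) NSTimeInterval testDuration; // summed per-test-case time
@end

@implementation MyTestRun

// Accumulate only the time spent inside each test case.
- (void) recordTestCaseDuration:(NSTimeInterval) duration
{
    self.testDuration += duration;
}

// Wall-clock span of the whole run, including any setup/teardown
// bookkeeping that happens outside the test cases themselves.
- (NSTimeInterval) totalDuration
{
    return [self.stopDate timeIntervalSinceDate:self.startDate];
}

@end

If that reading is right, the numbers at least add up: six test cases at roughly 0.005 seconds each sum to about 0.03, close to the reported 0.034, while the full run including the surrounding bookkeeping takes 0.052.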