A quick refactoring using TestNG’s expectedExceptions

I recently came across a TestNG test which looked like this:

@Test
public void noResultsTest() {
	// test
	String query = " ";
	Exception exception = null;
	List foos = null;
	try {
		foos = barService.search(query);
	} catch (SystemError e) {
		// input was wrong
		exception = e;
	}

	// assert
	Assert.assertNotNull(exception, "There was an exception");
	Assert.assertNull(foos);
}

Obviously this test should check that barService#search throws an exception in case the input is blank.

But there are a few things to mention:

  • TestNG is capable of checking whether an expected exception was thrown in a test. Simply use @Test(expectedExceptions = {..}).
  • The assertion Assert.assertNull(foos) is redundant: either no exception is thrown and Assert.assertNotNull(exception, "There was an exception") fails, or an exception is thrown and foos is never assigned a value.

So the test could be rewritten like this:

@Test(expectedExceptions = { SystemError.class })
public void noResultsTest() throws SystemError {
	barService.search(" ");
}
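
As a side note, TestNG can also verify the exception message via the expectedExceptionsMessageRegExp attribute of @Test. A minimal sketch, assuming a sufficiently recent TestNG version; the message pattern here is hypothetical:

@Test(expectedExceptions = { SystemError.class },
		expectedExceptionsMessageRegExp = ".*blank.*")
public void noResultsTestWithMessageCheck() throws SystemError {
	barService.search(" ");
}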

Fail instead of Skip a Test when TestNG’s DataProvider throws an Exception

A quite tricky problem I was faced with was a failing TestNG DataProvider which threw an exception because of corrupt test data. The critical point was that neither the Maven Surefire plugin nor the Eclipse TestNG plugin failed the dependent tests. These tests were only skipped. That’s problematic because when the test data gets corrupted (e.g. due to an update) I actually want to be informed explicitly; this information shouldn’t just be swallowed. But the only indicator for a failure was the number of skipped tests which Surefire prints after each run, a detail that is easily missed in a large console output.

The problem

So I constructed a minimal working example with a DataProvider that just throws a NullPointerException and a test that does nothing but depend on the broken DataProvider. A sketch of such a setup might look like this (class and method names are my own):
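
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class BrokenDataProviderTest {

    @DataProvider(name = "broken")
    public static Object[][] broken() {
        String corrupt = null;
        return new Object[][] {{ corrupt.trim() }}; // throws NullPointerException
    }

    @Test(dataProvider = "broken")
    public void dependentTest(String input) {
        // never reached; the test gets reported as SKIPPED, not FAILED
    }
}

Then I took the sources of TestNG and turned on my debugger. I finally got to the class org.testng.internal.Invoker: there’s a method invokeTestMethods(..) in which I found the following code: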

if (bag.hasErrors()) {
    failureCount = handleInvocationResults(testMethod, bag.errorResults,
            null, failureCount, expectedExceptionHolder, true,
            true /* collect results */);
    ITestResult tr = registerSkippedTestResult(testMethod, instances[0],
            start, bag.errorResults.get(0).getThrowable());
    result.add(tr);
    continue;
}

Now consider that the bag instance has errors (and the internally held errorResult instance even states explicitly that its status is “failure”). The method call registerSkippedTestResult(..) changes that to a skip! So I had located the reason for my problem, although the intention of this code is still not clear to me…

Solution 1: “Selftest”-Method in DataProvider

The easiest (and somehow most naive) solution is to provide a self test which invokes the DataProvider directly. The tests which depend on the DataProvider will still be skipped, but the additional self test bypasses the TestNG mechanism and throws an exception directly. Hence TestNG fails the test run as it should.

@Test
public void selftest() {
    // invokes the DataProvider directly: if it throws, this test fails
    // instead of being silently skipped
    TestDataProvider.createTestcases();
}

The benefit of this solution is its simplicity and its robustness: no future release of TestNG can break this approach (except a change which ignores exceptions occurring in test methods, but I can hardly believe that will ever happen). The drawback is that you have to modify your code because of an issue in TestNG (I really don’t like being forced to make changes in my code because of problems in third-party libraries). Furthermore there are additional runtime costs to consider: the DataProvider is called at least once more than actually needed. Depending on the DataProvider’s logic, this may or may not be critical.

Solution 2: Return an empty array

Cédric Beust (the developer behind TestNG) gave a solution for this issue in [1]. The trick is to surround the fallible code in the DataProvider with a try..catch and return an empty array in the catch clause:

@DataProvider(name="testcases")
public static Object[][] testcases() {
    try {
        return createTestcases(); // throws an exception
    } catch (Throwable e) {
        return new Object[][] {{}};
    }
}

The same solution adapted to Iterators:

@DataProvider(name="testcases")
public static Iterator testcases() {
    try {
        return createTestcases(); // throws an exception
    } catch (Throwable e) {
        return Arrays.asList(new Object[][] {{}}).iterator();
    }
}

The big pro of this approach is how lightweight it is. There isn’t much code to write and the solution is easy to adapt for other DataProviders. Again the drawback is that you have to modify your code because of an issue in TestNG. Furthermore, without any documentation every developer would wonder about the strange-looking statement inside the catch clause, as well as about the widely disfavored coding style of catching a Throwable. Finally, why this solution actually works is something I guess only Cédric Beust understands. Interestingly, by the way, using an empty list doesn’t do the job for me…

@DataProvider(name="testcases") 
public static Iterator testcases() {
    try {
        return createTestcases(); // throws an exception
    } catch (Throwable e) {
        return Collections.emptyList().iterator(); // DOESN'T WORK !!!
    }
}

Solution 3: Exception-Iterator

Another approach is limited to DataProviders which return an Iterator. Here you create an “ExceptionIterator” which simply throws an exception whenever it is used:

public class ExceptionIterator implements Iterator<Object[]> {
    private final Throwable e;

    public ExceptionIterator(Throwable e) {
        this.e = e;
    }

    @Override
    public boolean hasNext() {
        throw new RuntimeException(e);
    }

    @Override
    public Object[] next() {
        throw new RuntimeException(e);
    }

    @Override
    public void remove() {
        throw new RuntimeException(e);
    }
}

In case of a failure while retrieving the test data, the ExceptionIterator is used:

@DataProvider(name="testcases")
public static Iterator testcases() {
    try {
        return createTestcases(); // throws an exception
    } catch (Throwable e) {
        return new ExceptionIterator(e);
    }
}

The pros and cons of this solution are largely the same as for “Return an empty array” since the approaches are very similar. One benefit is that it is more stable against changes/updates of TestNG: if whatever makes solution 2 work breaks in some future release, this solution will still perform well. A disadvantage is the approach itself: an Iterator which only throws exceptions isn’t a piece of code one could be proud of.

Solution 4: FailListener

TestNG gives you the ability to register listeners for your test execution [2]. So you can code a “FailListener” which switches every skipped test to a failed one:

public class FailListener extends TestListenerAdapter {
    @Override
    public void onTestSkipped(ITestResult tr) {
        tr.setStatus(ITestResult.FAILURE);
    }
}

The listener can be attached to the test class like this (for some other ways, see [2]):

@Listeners({FailListener.class})
public class TestNGTest { .. }

One of the big benefits is the loose coupling: test code and workaround code are separated into two independent classes. This also supports DRY (“Don’t repeat yourself”) since no try..catch blocks (like in the other solutions) are needed. On the other hand, setting the ITestResult to FAILURE like this must be called a dirty hack. What if in a future release the given test result is a clone of the original one? Changes to that instance wouldn’t be recognized by TestNG and the whole solution would break. Furthermore, all skipped tests are marked as failed, even those that are supposed to be skipped, so you indirectly ban the usage of skip in your test environment. That’s especially problematic because although the listener is attached to a single class, it is called for all tests in the same test suite! A possible refinement of the listener is sketched after this paragraph.
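One way to soften that last drawback; this is my own sketch, not part of the original post: the Invoker code quoted above passes the DataProvider’s throwable into registerSkippedTestResult(..), so a skip caused by a broken DataProvider should carry a throwable, while a deliberate skip usually doesn’t. The listener can check for that:

import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

public class FailListener extends TestListenerAdapter {
    @Override
    public void onTestSkipped(ITestResult tr) {
        // Only promote skips that carry an underlying error (e.g. a
        // DataProvider exception); deliberate skips remain skipped.
        if (tr.getThrowable() != null) {
            tr.setStatus(ITestResult.FAILURE);
        }
    }
}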

Conclusion

In this post I discussed four different approaches to solve a TestNG issue related to exceptions occurring in DataProviders: dependent tests are skipped instead of being marked as failed. All approaches have their pros and cons, so the given circumstances will determine which solution fits best. For me, solution 2, “Return an empty array”, did the job.

References

[1] http://markmail.org/message/54dr2wnte6kdfnqv#query:+page:1+mid:wbzu3xs2icdr7sqp+state:results
[2] http://testng.org/doc/documentation-main.html#testng-listeners

Running TestNG-, JUnit3- and JUnit4-tests with Maven Surefire in one run

Fortunately, the team I’m currently working in emphasizes a high degree of test coverage. First we started with JUnit4 but switched to TestNG after a while for design and performance reasons. Although we migrated most of the existing tests, we were stuck with the situation that there were still JUnit4- and JUnit3-tests which couldn’t be converted to TestNG. Those tests depend on third-party libraries which in turn are tied to JUnit.

So how do you execute TestNG-, JUnit3- and JUnit4-tests in the same build using Maven?

Imagine a project with just one TestNG-, one JUnit3- and one JUnit4-test. Simply adding the Surefire plugin to the pom.xml (and of course the JUnit4- and TestNG-dependencies) will make Surefire use TestNG by default. Thus a call of mvn clean package will lead to:

Tests run: 2, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.194 sec

As you can see, Surefire triggers just two of the tests (the TestNG and the JUnit3 one). That’s because:

  • the TestNG runner executes TestNG- and JUnit3-tests only
  • the JUnit4 runner executes JUnit3- and JUnit4-tests only

See [1] for details.
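
For illustration, the three tests could be as trivial as this (one class per file; the class names match the console output shown below, the method bodies are my own sketch):

// TestNGTest.java
import org.testng.annotations.Test;

public class TestNGTest {
    @Test
    public void testNGStyle() { }
}

// JUnit3Test.java: JUnit3 tests subclass junit.framework.TestCase
import junit.framework.TestCase;

public class JUnit3Test extends TestCase {
    public void testJUnit3Style() { }
}

// JUnit4Test.java
import org.junit.Test;

public class JUnit4Test {
    @Test
    public void junit4Style() { }
}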

Solution 1: Profiles

So the first solution for this issue is based on profiles. Besides the default Surefire run with TestNG, there is an additional profile (here named “JUnitRun”) in which the property “testNGArtifactName” is set to “none:none”. As a result, Surefire will pick JUnit4 when Maven is called with the profile “JUnitRun”.

<profiles>
      <profile>
          <id>JUnitRun</id>
          <activation>
              <activeByDefault>false</activeByDefault>
          </activation>
          <build>
              <plugins>
                  <plugin>
                      <groupId>org.apache.maven.plugins</groupId>
                      <artifactId>maven-surefire-plugin</artifactId>
                      <configuration>
                      <testNGArtifactName>none:none</testNGArtifactName>
                      </configuration>
                  </plugin>
              </plugins>
          </build>
      </profile>
</profiles>
<build>
      <plugins>
          <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-surefire-plugin</artifactId>
          </plugin>
      </plugins>
</build>

Although this provides a solution, it also has a big drawback: you need two builds to get all tests executed (mvn clean install and mvn clean install -P JUnitRun). In addition, the JUnit3-tests are executed twice (once per run). This isn’t acceptable for a continuous integration system. So how do you join the two executions into one?

Solution 2: Two executions in one run

Maven can trigger more than just one execution of the same plugin [2]. This is utilized in the following solution: there’s the (implicit) default execution for running TestNG and a second execution whose testNGArtifactName is set to “none:none” (again, for calling JUnit4 instead of TestNG). So the pom.xml looks like this:

<plugins>
    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <executions>
            <execution>
                <phase>test</phase>
                <goals>
                    <goal>test</goal>
                </goals>
                <configuration>
                    <testNGArtifactName>none:none</testNGArtifactName>
                </configuration>
            </execution>
        </executions>
    </plugin>
</plugins>

And the command line shows that all tests have been executed:

Running TestSuite
Tests run: 2, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.19 sec
[...]
Running JUnit3Test
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.021 sec
Running JUnit4Test
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.014 sec

But the drawback remains that the JUnit3-tests are triggered twice. In a project with many JUnit3-tests and/or some long-running ones this is an indefensible circumstance. Also, reporting tools might get confused while calculating their results.

Solution 3: Two executions, one restricted

The second attempt can be improved by deciding whether the TestNG- or the JUnit4-tests should be treated in a special way. Assuming that there are just a few JUnit4- but lots of TestNG-tests, the final solution is to run every TestNG- and every JUnit3-test within the default execution (as is already done in solution 2), while the second execution no longer picks up tests automatically. Instead, it triggers the JUnit4-tests explicitly via the “includes” tag. Inside those tags you can either list the JUnit4-tests one by one or follow a team agreement like “every JUnit4 test name ends with JUnit4Test”, as is done in the following listing:

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <executions>
                <execution>
                    <phase>test</phase>
                    <goals>
                        <goal>test</goal>
                    </goals>
                    <configuration>
                        <testNGArtifactName>none:none</testNGArtifactName>
                        <reportsDirectory>${project.build.directory}/surefire-reports/junit-junit4-results</reportsDirectory>
                        <includes>
                            <include>**/*JUnit4Test.java</include>
                        </includes>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

With these changes Maven will print to the command line:

Running TestSuite
Tests run: 2, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.19 sec
[...]
Running JUnit4Test
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.014 sec

As expected, all TestNG-, JUnit3- and JUnit4-tests have been executed, each of them exactly once. Happy testing.

References

[1] http://groups.google.com/group/testng-users/msg/b1fa90f8877134c8
[2] http://maven.apache.org/guides/mini/guide-configuring-plugins.html#Using_the_executions_Tag

Avoid creation of emailable-report.html when using Maven, Surefire and TestNG

The problem

Last week, my Eclipse ran into a lot of “java.lang.OutOfMemoryError: Java heap space” errors, telling me about each incident in a pop-up and asking in an additional pop-up whether I really wanted to keep working with my workspace, again and again. Coding became hardly practicable, so I started to search for the cause of this failure.

What was the problem? The HTML validation plugin of Eclipse tried to scan a file called “emailable-report.html” inside the target directory (more precisely: in the directory target/surefire-reports). That directory had been created by m2eclipse, but a simple “mvn clean install” created it, too.

Having a closer look at emailable-report.html with my favorite file manager, I couldn’t believe my eyes: that file had a size of 440 MB! I also found an emailable-report.html in every project’s target folder inside my workspace, some smaller, some larger. The largest: 1.4 GB. And by the way, who sends reports of that size via email? I opened one of these files with a text editor and found the results of the TestNG-tests which are executed during a Maven build. Due to a large set of test data (here: thousands of user agents) those files were blown up with information.

So the HTML validation plugin simply crashed because of a (very) large HTML file.

The solution

At first I blamed the Maven Surefire plugin for creating the reports but couldn’t find an option to turn off this behavior. In fact, I didn’t even find a relation between Surefire and the emailable-report.html. And that’s the point: those files aren’t written by Surefire, they are written by TestNG itself! After googling a while I got to the property “usedefaultlisteners”. Unfortunately, it isn’t specified in the main TestNG manual but in the TestNG Ant manual. However, setting this option to false will shut down the report writing (as well as all other default listeners).

So I added the following to the Surefire configuration in my pom.xml:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <properties>
            <property>
                <name>usedefaultlisteners</name>
                <value>false</value>
            </property>
        </properties>
    </configuration>
</plugin>

After a “mvn clean install” the problem was gone. There is also a nice side effect of this modification: before the change the Maven build took about 2:30 min on my developer machine, now it’s done in 1:30 min.