Fail instead of Skip a Test when TestNG’s DataProvider throws an Exception

A quite tricky problem I was recently faced with: a TestNG DataProvider threw an exception because of corrupt test data. The critical point was that neither the Maven Surefire Plugin nor the Eclipse TestNG plugin failed the dependent tests; they were only skipped. That is problematic because when the test data gets corrupted (e.g. due to an update) I actually want to be informed explicitly; this information shouldn't just be swallowed. But the only indicator of a failure was the number of skipped tests which Surefire prints after each run, a detail that is easily missed in a large console output.

The problem

So I constructed a minimal working example with a DataProvider that just throws a NullPointerException and a test that does nothing but depend on the broken DataProvider.
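
A sketch of such a setup (the class and member names here are mine, not from my original example) could look like this:

@DataProvider(name = "testcases")
public static Object[][] testcases() {
    String corruptData = null;                       // stands in for broken test data
    return new Object[][] {{ corruptData.trim() }};  // throws a NullPointerException
}

@Test(dataProvider = "testcases")
public void test(String testcase) {
    // never executed; TestNG merely reports this test as skipped
}

Then I took the sources of TestNG and fired up my debugger. I finally ended up in the class org.testng.internal.Invoker: there is a method invokeTestMethods(..) in which I found the following code: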

if (bag.hasErrors()) {
    failureCount = handleInvocationResults(testMethod, bag.errorResults, null, failureCount, expectedExceptionHolder, true, true /* collect results */);
    ITestResult tr = registerSkippedTestResult(testMethod, instances[0], start, bag.errorResults.get(0).getThrowable());
    result.add(tr);
    continue;
}

Now consider that the bag instance has errors (and the errorResults it holds even state explicitly that their status is "failure"). The call to registerSkippedTestResult(..) then changes that to a skip! I had clearly located the cause of my problem, although the intention behind this code is still not clear to me…

Solution 1: “Selftest”-Method in DataProvider

The (easiest and somewhat naive) solution is to provide a self test which invokes the DataProvider directly. The tests which depend on the DataProvider will still be skipped, but the additional self test bypasses the TestNG mechanism and throws the exception directly. Hence TestNG fails the test run as it should.

@Test
public void selftest() {
    TestDataProvider.createTestcases();
}

The benefit of this solution is its simplicity and robustness: no future release of TestNG can break this approach (except a change which ignores exceptions occurring in test methods, but I can hardly believe that will ever happen). The drawback is that you have to modify your code because of an issue in TestNG (I really don't like being forced to change my code because of problems in third-party libraries). Furthermore, there are additional runtime costs to consider: the DataProvider is called at least once more than actually needed. Depending on the DataProvider's logic this may be more or less critical.

Solution 2: Return an empty array

Cédric Beust (the developer behind TestNG) gave a solution for this issue in [1]. The trick is to surround the code in the DataProvider that can fail with a try..catch and to return an empty array in the catch clause:

@DataProvider(name="testcases")
public static Object[][] testcases() {
    try {
        return createTestcases(); // throws an exception
    } catch (Throwable e) {
        return new Object[][] {{}};
    }
}

The solution adapted to Iterators:

@DataProvider(name="testcases")
public static Iterator<Object[]> testcases() {
    try {
        return createTestcases(); // throws an exception
    } catch (Throwable e) {
        return Arrays.asList(new Object[][] {{}}).iterator();
    }
}

The big pro of this approach is how lightweight it is. There isn't much code to write and the solution is easy to adapt for other DataProviders. Again, the drawback is that you have to modify your code because of an issue in TestNG. Furthermore, without any documentation every developer would wonder about the strange-looking statement inside the catch clause, as well as about the widely disfavored coding style of catching a Throwable. Finally, why this solution works is something I guess only Cédric Beust fully understands (presumably the single empty row makes TestNG invoke the test once with zero arguments, which then fails). By the way: interestingly, using an empty list doesn't do the job for me…

@DataProvider(name="testcases")
public static Iterator<Object[]> testcases() {
    try {
        return createTestcases(); // throws an exception
    } catch (Throwable e) {
        return Collections.<Object[]>emptyList().iterator(); // DOESN'T WORK !!!
    }
}

Solution 3: Exception-Iterator

Another approach, limited to DataProviders which return an Iterator, is to create an "ExceptionIterator" which simply throws an exception when it is used:

import java.util.Iterator;

public class ExceptionIterator implements Iterator<Object[]> {

    private final Throwable e;

    public ExceptionIterator(Throwable e) {
        this.e = e;
    }

    @Override
    public boolean hasNext() {
        throw new RuntimeException(e);
    }

    @Override
    public Object[] next() {
        throw new RuntimeException(e);
    }

    @Override
    public void remove() {
        throw new RuntimeException(e);
    }
}

If retrieving the test data fails, the ExceptionIterator is used:

@DataProvider(name="testcases")
public static Iterator<Object[]> testcases() {
    try {
        return createTestcases(); // throws an exception
    } catch (Throwable e) {
        return new ExceptionIterator(e);
    }
}

The pros and cons of this solution are mostly the same as for "Return an empty array", as the approaches are very similar. One benefit is that it is more stable against changes to TestNG: if some future release breaks whatever makes solution 2 work, this solution will still perform well. A disadvantage is the approach itself: an Iterator which only throws exceptions isn't a piece of code one can be proud of.

Solution 4: FailListener

TestNG gives you the ability to register listeners for your test execution [2]. So you can code a "FailListener" which switches every skipped test to a failed one:

public class FailListener extends TestListenerAdapter {
    @Override
    public void onTestSkipped(ITestResult tr) {
        tr.setStatus(ITestResult.FAILURE);
    }
}

The listener can be attached to the test class like this (for some other ways, see [2]):

@Listeners({FailListener.class})
public class TestNGTest { .. }
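
Alternatively, when running TestNG programmatically, the listener can be registered in code. A small sketch, assuming the test class from above (note that the exact addListener signature has varied across TestNG versions):

TestNG testng = new TestNG();
testng.setTestClasses(new Class[] { TestNGTest.class });
testng.addListener(new FailListener()); // registers the workaround listener
testng.run();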

One of the big benefits is the loose coupling: test code and workaround code are separated into two independent classes. This also supports DRY ("don't repeat yourself") since no try..catch blocks (as in the other solutions) are needed. On the other hand, setting the ITestResult to failed like this has to be called a dirty hack. What if in a future release the given test result is a clone of the original one? Changes to that instance wouldn't be recognized by TestNG and the whole solution would break. Furthermore, all skipped tests are marked as failed, even those that are supposed to be skipped, so you indirectly ban the use of skips in your test environment. That's especially problematic because although the listener is attached to a single class it is called for all tests in the same test suite!
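
If deliberate skips must remain usable, a possible refinement is to only flip skips that carry an exception. This is merely a sketch, under the assumption that TestNG attaches the DataProvider's exception to the skipped test result (the class name is mine, and the assumption may not hold in every TestNG version):

public class FailOnExceptionListener extends TestListenerAdapter {
    @Override
    public void onTestSkipped(ITestResult tr) {
        Throwable cause = tr.getThrowable();
        // Deliberate skips (e.g. via a thrown SkipException) keep their
        // status; skips caused by any other exception become failures.
        if (cause != null && !(cause instanceof SkipException)) {
            tr.setStatus(ITestResult.FAILURE);
        }
    }
}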

Conclusion

In this post I discussed four different approaches to work around a TestNG issue related to exceptions occurring in DataProviders: dependent tests are skipped instead of being marked as failed. All approaches have their pros and cons, so the given circumstances will determine which solution fits best. For me, solution 2, "Return an empty array", did the job.

References

[1] http://markmail.org/message/54dr2wnte6kdfnqv#query:+page:1+mid:wbzu3xs2icdr7sqp+state:results
[2] http://testng.org/doc/documentation-main.html#testng-listeners

Rolf Engelhard

Comments

  1. You call this “An issue in TestNG”, I call it “Working as designed” :-)

    Failure in a dataprovider is not a test failure, it’s a configuration failure. It doesn’t mean there’s something wrong in your code, just that your testing harness failed to initialize properly. This is why it’s marked as a SKIP and not a FAIL.

    • First of all, welcome to my site and thank you for sharing your point of view.

      Of course I agree with you that a failure in a DataProvider and a failure in my code (detected by a failing test) aren't the same. But as you said: a failure in a DataProvider is a configuration *failure*. And a tester wants to know about occurring failures; a SKIP doesn't convey that.

      Furthermore I can skip tests myself, and that's not a configuration failure (well, most of the time ;-)). So how should I decide whether a SKIP is intended (= 'manually' skipped) or the outcome of an error in my DataProvider?

      If there were e.g. an additional state "CONFIGURATION_FAILURE" besides SUCCESS, FAIL and SKIP, I could react to that state properly. Or an additional attribute for the @DataProvider annotation in which I could explicitly declare that I want a FAIL instead of a SKIP… Christmas is just around the corner, maybe there's a nice present in one of the next TestNG releases for me? :)

  2. This is already how it’s working. Take a look at this screen shot:

    http://imgur.com/jP9Iu

    You can see that init (a @BeforeClass) is marked as a failure while the other methods (all @Test) are marked as skipped.

    • Thank you for your answer!

      Well, at first I couldn't reproduce your example. I built a "broken" DataProvider which simply throws a NullPointerException and a test with an empty init method (annotated with @BeforeClass). No fail.

      Then I called the DataProvider directly in the init method, which led me to the same result you show in the attached image. But writing an init method which isn't needed for the test setup, and calling the DataProvider without needing the test data, isn't really a 'clean' solution for me. Maybe I misunderstood your suggestion?
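
      For reference, a minimal sketch of that variant (reusing the hypothetical provider from solution 1) might look like:

      @BeforeClass
      public void init() {
          // Invoking the DataProvider here makes a corrupt provider fail this
          // configuration method, which TestNG then reports as a failure.
          TestDataProvider.createTestcases();
      }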

    • This is still present in the current TestNG version (6.8.5).
      Failures in data providers are reported as test skips and nothing else (i.e. not as a configuration failure).
      IMHO this is clearly a bug and not "working as designed".

  3. Sorry Cedric, but that is very crappy behavior of TestNG.

    This is why JUnit has the distinction between failure and error.

    Fail-fast behavior is better than "skip fast" in the area of testing because it makes problems more transparent and faster to notice and understand.

  4. Here's my solution, based on the Stack Overflow post I linked above.

    package picard;

    import org.testng.Assert;
    import org.testng.annotations.DataProvider;
    import org.testng.annotations.Test;

    import java.io.File;
    import java.io.IOException;
    import java.lang.reflect.InvocationTargetException;
    import java.lang.reflect.Method;
    import java.lang.reflect.Modifier;
    import java.net.URI;
    import java.net.URISyntaxException;
    import java.net.URL;
    import java.util.ArrayList;
    import java.util.Enumeration;
    import java.util.List;

    /**
     * Created by farjoun on 9/19/17.
     */
    public class TestDataProviders {

        /**
         * Taken from: https://stackoverflow.com/a/862130/360496
         *
         * Scans all classes accessible from the context class loader which belong
         * to the given package and subpackages.
         *
         * @param packageName The base package
         * @return The classes
         * @throws ClassNotFoundException
         * @throws IOException
         */
        private Iterable<Class<?>> getClasses(String packageName) throws ClassNotFoundException, IOException, URISyntaxException {
            ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
            String path = packageName.replace('.', '/');
            Enumeration<URL> resources = classLoader.getResources(path);
            List<File> dirs = new ArrayList<>();
            while (resources.hasMoreElements()) {
                URL resource = resources.nextElement();
                URI uri = new URI(resource.toString());
                dirs.add(new File(uri.getPath()));
            }
            List<Class<?>> classes = new ArrayList<>();
            for (File directory : dirs) {
                classes.addAll(findClasses(directory, packageName));
            }

            return classes;
        }

        /**
         * Recursive method used to find all classes in a given directory and
         * subdirs.
         *
         * @param directory The base directory
         * @param packageName The package name for classes found inside the base directory
         * @return The classes
         * @throws ClassNotFoundException
         */
        private List<Class<?>> findClasses(File directory, String packageName) throws ClassNotFoundException {
            List<Class<?>> classes = new ArrayList<>();
            if (!directory.exists()) {
                return classes;
            }
            File[] files = directory.listFiles();
            assert files != null;
            for (File file : files) {
                if (file.isDirectory()) {
                    classes.addAll(findClasses(file, packageName + "." + file.getName()));
                } else if (file.getName().endsWith(".class")) {
                    classes.add(Class.forName(packageName + '.' + file.getName().substring(0, file.getName().length() - 6)));
                }
            }
            return classes;
        }

        @Test
        public void testAllDataProviders() throws URISyntaxException, IOException, ClassNotFoundException, IllegalAccessException, InstantiationException, InvocationTargetException {
            int i = 0;
            for (Class<?> testClass : getClasses("picard")) {
                if (Modifier.isAbstract(testClass.getModifiers())) continue;
                for (final Method method : testClass.getMethods()) {
                    if (method.isAnnotationPresent(DataProvider.class)) {
                        System.err.println("Method: " + method.getName());
                        method.invoke(testClass.newInstance());
                        i++;
                    }
                }
            }
            Assert.assertNotSame(i, 0);
            System.err.println("Found: " + i + " @DataProviders.");
        }
    }

  5. Thank you! I was trying to figure out why a test suite I inherited was passing even though the test output was riddled with exceptions – it was the data providers!

  6. It's right… I face the same problem now. We created a separate PoC for that, but the skipped tests still cause a count problem; we tried hard-coding around it, but that's not the right solution either. You are right: skipped tests should be failed when using DataProviders…
