Xfail in Pytest

Let's now consider a normalize function one more time. We're about to test it.

import pytest
import numpy as np

def normalize(X):
    X = X.astype(np.float)
    return (X - X.min())/(X.max() - X.min())

@pytest.fixture(params=[(1,1), (2,2), (3,3), (4,4)], ids=lambda d: f"rows: {d[0]} cols: {d[1]}")
def random_numpy_array(request):
    return np.random.normal(size=request.param)

def test_min_max_normalise(random_numpy_array):
    X_norm = normalize(random_numpy_array)
    assert X_norm.min() == 0.0
    assert X_norm.max() == 1.0

The interesting thing here is that this test will fail, depending on your version of NumPy: as of version 1.20 the np.float and np.int aliases are deprecated, and newer releases (1.24 and up) have removed them entirely.
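
The fix itself is a one-line change, since np.float was only ever an alias for the builtin float. A minimal sketch of the modernized function:

import numpy as np

def normalize(X):
    # float (or the explicit np.float64) works on every NumPy version.
    X = X.astype(np.float64)
    return (X - X.min()) / (X.max() - X.min())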

Conditional Skip

One way of dealing with this is to skip the test via @pytest.mark.skipif, depending on the version of NumPy. Note that comparing raw version strings is unreliable ("1.9.0" sorts after "1.20.0"), which is why the example below uses np.lib.NumpyVersion for the check.

import pytest
import numpy as np

def normalize(X):
    X = X.astype(np.float)
    return (X - X.min())/(X.max() - X.min())

@pytest.fixture(params=[(1,1), (2,2), (3,3), (4,4)], ids=lambda d: f"rows: {d[0]} cols: {d[1]}")
def random_numpy_array(request):
    return np.random.normal(size=request.param)

@pytest.mark.skipif(np.lib.NumpyVersion(np.__version__) >= "1.20.0", reason="new numpy version breaks this API")
def test_min_max_normalise(random_numpy_array):
    X_norm = normalize(random_numpy_array)
    assert X_norm.min() == 0.0
    assert X_norm.max() == 1.0

In this case, though, skipping may not be the right tool: unless you are actually testing against multiple NumPy versions, the test would simply never run and the failure would stay invisible.

Xfail

Instead of skipping, we can also declare this test to be something that we expect to fail via @pytest.mark.xfail.

import pytest
import numpy as np

def normalize(X):
    X = X.astype(np.float)
    return (X - X.min())/(X.max() - X.min())

@pytest.fixture(params=[(1,1), (2,2), (3,3), (4,4)], ids=lambda d: f"rows: {d[0]} cols: {d[1]}")
def random_numpy_array(request):
    return np.random.normal(size=request.param)

@pytest.mark.xfail(reason="this needs to be fixed once we support Numpy > 1.20")
def test_min_max_normalise(random_numpy_array):
    X_norm = normalize(random_numpy_array)
    assert X_norm.min() == 0.0
    assert X_norm.max() == 1.0

This way, you can mark a test that is failing now but that you intend to fix later. In this example the change would be so simple that it wouldn't be worth adding an xfail. Sometimes though, especially in a larger project with many dependencies, you might be dealing with a known failure that should not be skipped, but should get addressed. Note that you can also add an extra setting (a sketch follows the list below).

  • pytest.mark.xfail(strict=True) will make the whole run fail when the test unexpectedly passes
  • pytest.mark.xfail(strict=False), the default, merely reports an unexpected pass as XPASS without failing the run
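
As a minimal, self-contained sketch of the strict behaviour (the test name and the assert are made up for illustration):

import pytest

@pytest.mark.xfail(strict=True, reason="broken on purpose to illustrate strict mode")
def test_strict_example():
    # This assert fails, so pytest reports the test as XFAIL.
    # If the assertion ever started passing, strict=True would turn the
    # unexpected pass (XPASS) into a hard failure of the whole run.
    assert 1 + 1 == 3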

xfail vs. skip

You might wonder about the difference between xfail and skip at this point, because they certainly do similar things. In general:

  • skip is meant for situations where you only want the test to run when certain conditions are met. You can skip a test altogether, but most typically it is used to skip tests depending on the operating system, like having some Windows-only tests.
  • xfail means that you genuinely expect the test to fail. So if a feature is not yet implemented, or if a bug cannot be fixed right now, an xfail is the more appropriate convention; the sketch below shows both markers side by side.
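
As a minimal side-by-side sketch (the platform check and both test bodies are made up for illustration):

import sys
import pytest

@pytest.mark.skipif(sys.platform != "win32", reason="relies on Windows-style paths")
def test_windows_paths():
    # Skipped everywhere except Windows: the test is simply not relevant there.
    assert "C:\\" in "C:\\Users"

@pytest.mark.xfail(reason="feature not implemented yet")
def test_future_feature():
    # Expected to fail until the feature lands; reported as XFAIL, not as an error.
    assert hasattr(sys, "made_up_attribute")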

Note that more information on skipping and xfailing can be found in the pytest documentation.