Note on how the test suite works #252

@edoardob90

Description

This is just a note to remind everybody of a subtle detail of how the entire test suite works.

Sometimes the output of testing a solution reads "Your code could not run because of an error. Please, double-check it", and this confuses people (us included).

That output is displayed whenever at least one of the tests returned a TEST_ERROR status. Reminder: a test's status can be one of PASS, FAIL, or TEST_ERROR.

But where do we decide which status to return? Here:

outcome = (
    TestOutcome.FAIL
    if exc.errisinstance(AssertionError)
    else TestOutcome.TEST_ERROR
)

Which means that, if anything other than an AssertionError was raised during the test, that test will be labeled TEST_ERROR. For example: if we're testing a function that should take 3 parameters, but the tested solution takes none, then we get a TypeError:

In [3]: def func(a, b, c):
   ...:     pass
   ...:

In [4]: func()
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[4], line 1
----> 1 func()

TypeError: func() missing 3 required positional arguments: 'a', 'b', and 'c'

In this case, the test runner doesn't consider the testing "finished". The choice to return FAIL only when an AssertionError was raised reflects the fact that we should strive to test all the important features of a solution, not only its return value. Again: we should make sure the solution function accepts the correct number of arguments.
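One way to test the argument count explicitly is to check the signature with an `assert`, so a mismatch surfaces as FAIL rather than TEST_ERROR. This is a sketch using the stdlib `inspect` module; `solution` and `test_solution_signature` are hypothetical names:

```python
import inspect


def solution(a, b, c):
    # Stand-in for the tested solution
    return a + b + c


def test_solution_signature():
    # An assert raises AssertionError on mismatch, which the runner
    # classifies as FAIL instead of TEST_ERROR
    params = inspect.signature(solution).parameters
    assert len(params) == 3, f"expected 3 parameters, got {len(params)}"


test_solution_signature()  # raises only if the signature is wrong
```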

A couple of "best practices" I came up with:

  1. Always use an assert statement to test something; don't rely on bare boolean expressions.
  2. Remember that pytest will run every test that matches the pattern test_{function_name}_*, so in principle we could split a test function in two. But beware: if the 1st test checks the function's signature (for example) and the 2nd test assumes that the function takes 3 parameters (but it doesn't), you end up in the scenario described above and might think the solution is wrong. It is, in a sense, but the test is missing some logic. Right now we have no way of telling pytest "don't run test nr. 2 if nr. 1 failed".
