Heya,

I'd like to note here how we can test the visualizations.
After some googling (1), (2), I've decided that a good solution would be to use a py.test plugin, nbval, to validate Jupyter notebooks.
The plugin adds functionality to py.test to recognise and collect Jupyter notebooks. The intended purpose of the tests is to determine whether execution of the stored inputs matches the stored outputs of the .ipynb file, while also ensuring that the notebooks run without errors.
The tests were designed to ensure that Jupyter notebooks (especially those used for reference and documentation) execute consistently.
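For reference, a minimal sketch of how a run could be invoked (the notebook path `docs/visualizations.ipynb` is a hypothetical placeholder):

```
# Compare executed cell outputs against the ones stored in the notebook:
py.test --nbval docs/visualizations.ipynb

# Only check that all cells execute without errors, ignoring output mismatches:
py.test --nbval-lax docs/visualizations.ipynb
```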
Comparing each cell's output against the one stored in the notebook would cause all cells to fail, because the output contains the memory address where the figure object is stored; that address is unique to each run, so it can't be compared.
But we can still make sure that the notebooks (with visualisations) run without errors.
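If we ever wanted output comparison for most cells, nbval also supports disabling the check per cell via a comment tag. A minimal sketch of such a notebook cell (the plotting code itself is just an illustrative assumption):

```python
# NBVAL_IGNORE_OUTPUT
# nbval skips the stored-vs-actual output comparison for this cell,
# but still fails the test if the cell raises an error.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
fig  # the repr contains a run-specific memory address, e.g. <Figure ... at 0x7f...>
```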
The only drawback I see is adding a new package requirement: nbval.
The alternative solution to consider is mentioned here; the idea is to add a test like the following.
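The linked snippet isn't reproduced in this thread, so what follows is only a rough sketch of that idea, using the standard nbformat/nbconvert APIs; the notebook path is a hypothetical placeholder:

```python
import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

def test_visualization_notebook_executes():
    """Execute the notebook top to bottom; any cell error fails the test."""
    # Hypothetical location of the visualization notebook.
    with open("docs/visualizations.ipynb") as f:
        nb = nbformat.read(f, as_version=4)
    ep = ExecutePreprocessor(timeout=600, kernel_name="python3")
    # preprocess() raises CellExecutionError if any cell fails.
    ep.preprocess(nb, {"metadata": {"path": "docs/"}})
```

Like the `--nbval-lax` mode above, this only checks execution, not outputs, which is all we can reliably test for figures anyway.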
I prefer the first solution: using the py.test extension.
By the way, do we care about the time Travis needs to run the tests? In both cases I will create a Jupyter notebook with visualisations, but I can't figure out how long this notebook should be. Should I include every possible way to call each visualisation function, or would a few different calls (around 5-7) of every function be sufficient?
Sources: