`.github/actions/get-test-groups/action.yml` (+1 −1)
@@ -29,6 +29,6 @@ runs:
if [[ ${{ inputs.TEST_KIND }} == "nightly" ]]; then
test_groups_str=$(python -c 'from tests.ci.azureml_tests.test_groups import nightly_test_groups; print([t for t in nightly_test_groups.keys() if "${{inputs.TEST_ENV}}" in t])')
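For context, the one-liner above selects the nightly test group names that match the requested environment. A minimal sketch of what it computes, assuming `nightly_test_groups` maps group names to lists of test paths (the group names below are illustrative, not the real contents of `test_groups.py`):

```python
# Hypothetical, simplified stand-in for tests/ci/azureml_tests/test_groups.py.
nightly_test_groups = {
    "group_cpu_001": ["tests/..."],
    "group_gpu_001": ["tests/..."],
    "group_spark_001": ["tests/..."],
}

# Equivalent of the `python -c` filter with TEST_ENV set to "gpu":
test_env = "gpu"
print([t for t in nightly_test_groups.keys() if test_env in t])  # ['group_gpu_001']
```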
`README.md` (+2 −2)
@@ -135,13 +135,13 @@ This project adheres to [Microsoft's Open Source Code of Conduct](CODE_OF_CONDUC
## Build Status
- These tests are the nightly builds, which compute the smoke and integration tests. `main` is our principal branch and `staging` is our development branch. We use [pytest](https://docs.pytest.org/) for testing python utilities in [recommenders](recommenders) and [Papermill](https://github.com/nteract/papermill) and [Scrapbook](https://nteract-scrapbook.readthedocs.io/en/latest/) for the [notebooks](examples).
+ These tests are the nightly builds, which compute the asynchronous tests. `main` is our principal branch and `staging` is our development branch. We use [pytest](https://docs.pytest.org/) for testing python utilities in [recommenders](recommenders) and [Papermill](https://github.com/nteract/papermill) and [Scrapbook](https://nteract-scrapbook.readthedocs.io/en/latest/) for the [notebooks](examples).
For more information about the testing pipelines, please see the [test documentation](tests/README.md).
### AzureML Nightly Build Status
- Smoke and integration tests are run daily on AzureML.
+ The nightly build tests are run daily on AzureML.
| Build Type | Branch | Status || Branch | Status |
`SETUP.md` (+2 −2)
@@ -156,9 +156,9 @@ First make sure that the tag that you want to add, e.g. `0.6.0`, is added in [`r
1. Make sure that the code in main passes all the tests (unit and nightly tests).
1. Create a tag with the version number: e.g. `git tag -a 0.6.0 -m "Recommenders 0.6.0"`.
1. Push the tag to the remote server: `git push origin 0.6.0`.
- 1. When the new tag is pushed, a release pipeline is executed. This pipeline runs all the tests again (unit, smoke and integration), generates a wheel and a tar.gz which are uploaded to a [GitHub draft release](https://github.com/microsoft/recommenders/releases).
+ 1. When the new tag is pushed, a release pipeline is executed. This pipeline runs all the tests again (PR gate and nightly builds), generates a wheel and a tar.gz which are uploaded to a [GitHub draft release](https://github.com/microsoft/recommenders/releases).
1. Fill up the draft release with all the recent changes in the code.
1. Download the wheel and tar.gz locally; these files shouldn't have any bugs, since they passed all the tests.
1. Install twine: `pip install twine`
- 1. Publish the wheel and tar.gz to pypi: `twine upload recommenders*`
+ 1. Publish the wheel and tar.gz to PyPI: `twine upload recommenders*`
`tests/README.md` (+28 −115)
@@ -63,7 +63,9 @@ In this section we show how to create tests and add them to the test pipeline. T
1. Create your code in the library and/or notebooks.
1. Design the unit tests for the code.
1. If you have written a notebook, design the notebook tests and check that the metrics they return are what you expect.
- 1. Add the tests to the AzureML pipeline in the corresponding [test group](./ci/azureml_tests/test_groups.py). **Please note that if you don't add your tests to the pipeline, they will not be executed.**
+ 1. Add the tests to the AzureML pipeline in the corresponding [test group](./ci/azureml_tests/test_groups.py).
+
+ **Please note that if you don't add your tests to the pipeline, they will not be executed.**
### How to create tests for the Recommenders library
@@ -74,8 +76,6 @@ You want to make sure that all your code works before you submit it to the repos
* Use the mark `@pytest.mark.gpu` if you want the test to be executed
in a GPU environment. Use `@pytest.mark.spark` if you want the test
to be executed in a Spark environment.
- * Use `@pytest.mark.smoke` and `@pytest.mark.integration` to mark the
-   tests as smoke tests and integration tests.
* Use `@pytest.mark.notebooks` if you are testing a notebook.
* Avoid using `is` in the asserts; instead, use the operator `==`.
* Follow the pattern `assert computation == value`, for example:
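The hunk above is cut off before the example in the original file; a minimal sketch in the same spirit, using a hypothetical `add` helper rather than a real utility from the package, and the marks described above:

```python
import pytest


def add(a, b):
    # Hypothetical function under test; real tests import utilities from the recommenders package.
    return a + b


def test_add():
    # Follow the `assert computation == value` pattern, using `==` rather than `is`.
    assert add(1, 2) == 3


@pytest.mark.gpu
def test_add_on_gpu():
    # Collected only when the GPU marker is selected, i.e. in the GPU test environment.
    assert add(2, 2) == 4
```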
@@ -113,7 +113,7 @@ For executing this test, first make sure you are in the correct environment as d
*Notice that the next instruction executes the tests from the root folder.*
#### Developing nightly tests with Papermill and Scrapbook
@@ -124,7 +124,7 @@ The first step is to tag the parameters that we are going to inject. For it we n
The way papermill works to inject parameters is very simple: it generates a copy of the notebook (in our code we call it `OUTPUT_NOTEBOOK`) and creates a new cell with the injected variables.
- The second modification that we need to do to the notebook is to record the metrics we want to test using `sb.glue("output_variable", python_variable_name)`. We normally use the last cell of the notebook to record all the metrics. These are the metrics that we are going to control in the smoke and integration tests.
+ The second modification that we need to do to the notebook is to record the metrics we want to test using `sb.glue("output_variable", python_variable_name)`. We normally use the last cell of the notebook to record all the metrics. These are the metrics that we are going to control in the smoke and functional tests.
This is an example of how we do a smoke test. The complete code can be found in [smoke/examples/test_notebooks_python.py](./smoke/examples/test_notebooks_python.py):
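The example code itself is cut from the hunk above; a minimal sketch of the pattern, where the notebook path, parameters, and metric value are purely illustrative rather than taken from the real test file:

```python
import papermill as pm
import pytest
import scrapbook as sb

OUTPUT_NOTEBOOK = "output.ipynb"  # executed copy written by papermill
KERNEL_NAME = "python3"


def test_sar_single_node_smoke():
    # Hypothetical notebook path and parameters; the real tests obtain the
    # path from a fixture and use the values defined in the test file.
    pm.execute_notebook(
        "examples/00_quick_start/sar_movielens.ipynb",
        OUTPUT_NOTEBOOK,
        kernel_name=KERNEL_NAME,
        parameters=dict(TOP_K=10, MOVIELENS_DATA_SIZE="100k"),
    )
    # Read back the metrics glued with sb.glue() in the notebook's last cell.
    results = sb.read_notebook(OUTPUT_NOTEBOOK).scraps.dataframe.set_index("name")["data"]
    # Check that the recorded metric is within the expected range.
    assert results["map"] == pytest.approx(0.11, rel=0.05, abs=0.05)
```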
More details on how to integrate Papermill with notebooks can be found in their [repo](https://github.com/nteract/papermill). Also, you can check the [Scrapbook repo](https://github.com/nteract/scrapbook).
### How to add tests to the AzureML pipeline
- To add a new test to the AzureML pipeline, add the test path to an appropriate test group listed in [test_groups.py](https://github.com/microsoft/recommenders/blob/main/tests/ci/azureml_tests/test_groups.py).
+ To add a new test to the AzureML pipeline, add the test path to an appropriate test group listed in [test_groups.py](./ci/azureml_tests/test_groups.py).
Tests in `group_cpu_xxx` groups are executed on a CPU-only AzureML compute cluster node. Tests in `group_gpu_xxx` groups are executed on a GPU-enabled AzureML compute cluster node with GPU related dependencies added to the AzureML run environment. Tests in `group_pyspark_xxx` groups are executed on a CPU-only AzureML compute cluster node, with the PySpark related dependencies added to the AzureML run environment.
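As a sketch, adding a test to a group might look like the following, where the group names and existing entries are illustrative rather than the real contents of `test_groups.py`:

```python
# Illustrative excerpt in the style of tests/ci/azureml_tests/test_groups.py (not the real contents).
nightly_test_groups = {
    # CPU-only group with spare runtime: append the new test path here.
    "group_cpu_001": [
        "tests/smoke/recommenders/dataset/test_movielens.py::test_download_and_extract_movielens",
        "tests/unit/recommenders/datasets/test_my_new_feature.py::test_my_new_feature",  # hypothetical new test
    ],
    # GPU group: tests here run on a GPU-enabled cluster node.
    "group_gpu_001": [
        "tests/smoke/recommenders/recommender/test_deeprec_model.py::test_model_lightgcn",
    ],
}
```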
@@ -177,15 +176,13 @@ Example of adding a new test:
1. In the environment that you are running your code, first see if there is a group whose total runtime is less than the threshold.
@@ -217,133 +214,50 @@ Then, follow the steps below to create the AzureML infrastructure:
To manually execute the tests in the CPU, GPU or Spark environments, first **make sure you are in the correct environment as described in the [SETUP.md](../SETUP.md)**.
- *Click on the following menus* to see more details on how to execute the unit, smoke and integration tests:
-
- Unit tests ensure that each class or function behaves as it should. Every time a developer makes a pull request to staging or main branch, a battery of unit tests is executed.
-
- *Note that the next instructions execute the tests from the root folder.*
-
- For executing the Python unit tests for the utilities:
-
- pytest tests/unit -m "not notebooks and not spark and not gpu" --durations 0
-
- For executing the Python unit tests for the notebooks:
-
- pytest tests/unit -m "notebooks and not spark and not gpu" --durations 0
-
- For executing the Python GPU unit tests for the utilities:
-
- pytest tests/unit -m "not notebooks and not spark and gpu" --durations 0
-
- For executing the Python GPU unit tests for the notebooks:
-
- pytest tests/unit -m "notebooks and not spark and gpu" --durations 0
-
- For executing the PySpark unit tests for the utilities:
-
- pytest tests/unit -m "not notebooks and spark and not gpu" --durations 0
-
- For executing the PySpark unit tests for the notebooks:
-
- pytest tests/unit -m "notebooks and spark and not gpu" --durations 0
-
- *NOTE: Adding `--durations 0` shows the computation time of all tests.*
-
- *NOTE: Adding `--disable-warnings` will disable the warning messages.*
- <td>VW pip package has installation incompatibilities</td>
- </tr>
- <tr>
- <td>*/test_notebooks_python</td>
- <td>test_nni*</td>
- <td>Linux</td>
- <td>NNI pip package has installation incompatibilities</td>
- </tr>
- </table>
-
In order to skip a test because there is an OS or upstream issue which cannot be resolved, you can use pytest [annotations](https://docs.pytest.org/en/latest/skipping.html).
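For instance, such a skip might look like the following sketch, where the platform condition and reason are made up for illustration:

```python
import sys

import pytest


@pytest.mark.skipif(
    sys.platform == "win32",
    reason="Hypothetical example: this dependency does not install on Windows",
)
def test_feature_needing_problematic_dependency():
    assert 1 + 1 == 2
```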