Use shared implementation of triage workflow #504

Merged
merged 2 commits on Jun 8, 2022

Conversation

@benfred (Member) commented Jun 8, 2022

Use a reusable workflow for triaging issues that will be shared across repos,
rather than have a bunch of boilerplate defined here.

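For context, adopting a reusable workflow means each repository keeps only a thin caller file that points at the shared definition. The sketch below shows the general shape of such a caller; the shared repository path, file name, ref, and trigger events are placeholders for illustration and are not taken from this PR.

# Hypothetical caller, e.g. .github/workflows/triage.yml in this repository.
# The shared repo path, file name, and ref are illustrative placeholders.
name: triage

on:
  issues:
    types: [opened]

jobs:
  triage:
    # A job-level `uses:` invokes a reusable workflow defined in another repository.
    uses: NVIDIA-Merlin/.github/.github/workflows/triage.yml@main
    # Forward this repository's secrets if the shared workflow needs them.
    secrets: inherit

The called workflow declares an `on: workflow_call` trigger, which is what allows other repositories to invoke it this way.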
github-actions bot commented Jun 8, 2022

Documentation preview

https://nvidia-merlin.github.io/models/review/pr-504

@nvidia-merlin-bot

CI Results
GitHub pull request #504 of commit f549498d83298b4be5f30c0e0a08153e0497c4e2, no merge conflicts.
Running as SYSTEM
Setting status of f549498d83298b4be5f30c0e0a08153e0497c4e2 to PENDING with url https://10.20.13.93:8080/job/merlin_models/426/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/504/*:refs/remotes/origin/pr/504/* # timeout=10
 > git rev-parse f549498d83298b4be5f30c0e0a08153e0497c4e2^{commit} # timeout=10
Checking out Revision f549498d83298b4be5f30c0e0a08153e0497c4e2 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f f549498d83298b4be5f30c0e0a08153e0497c4e2 # timeout=10
Commit message: "Use shared implementation of triage workflow"
 > git rev-list --no-walk 1744301a17b12804810a4b7892525e8a886e8806 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins10408802971464844105.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /var/jenkins_home/.local/lib/python3.8/site-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.4.0)
Requirement already satisfied: nbclient>=0.4.0 in /var/jenkins_home/.local/lib/python3.8/site-packages (from testbook) (0.5.13)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.2.2.post1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.6.0)
Requirement already satisfied: jupyter-core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.10.0)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.15.3)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.3)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (21.4.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.7.1)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (23.1.0)
Requirement already satisfied: tornado>=6.0 in /var/jenkins_home/.local/lib/python3.8/site-packages/tornado-6.1-py3.8-linux-x86_64.egg (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.1)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.5.0, xdist-2.5.0, forked-1.4.0, cov-3.0.0
collected 444 items / 3 skipped

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 1%]
tests/unit/datasets/test_ecommerce.py .Fsss [ 2%]
tests/unit/datasets/test_entertainment.py ....sss. [ 4%]
tests/unit/datasets/test_social.py . [ 4%]
tests/unit/datasets/test_synthetic.py ..... [ 5%]
tests/unit/tf/test_core.py ...... [ 6%]
tests/unit/tf/test_dataset.py ............... [ 10%]
tests/unit/tf/test_public_api.py . [ 10%]
tests/unit/tf/blocks/test_cross.py ........... [ 13%]
tests/unit/tf/blocks/test_dlrm.py ........ [ 14%]
tests/unit/tf/blocks/test_interactions.py . [ 15%]
tests/unit/tf/blocks/test_mlp.py ............................. [ 21%]
tests/unit/tf/blocks/core/test_aggregation.py ......... [ 23%]
tests/unit/tf/blocks/core/test_base.py .. [ 24%]
tests/unit/tf/blocks/core/test_combinators.py ... [ 24%]
tests/unit/tf/blocks/core/test_index.py ... [ 25%]
tests/unit/tf/blocks/core/test_masking.py ....... [ 27%]
tests/unit/tf/blocks/core/test_tabular.py ... [ 27%]
tests/unit/tf/blocks/core/test_transformations.py ........... [ 30%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 30%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ........... [ 33%]
tests/unit/tf/examples/test_01_getting_started.py . [ 33%]
tests/unit/tf/examples/test_02_dataschema.py F [ 33%]
tests/unit/tf/examples/test_03_exploring_different_models.py F [ 33%]
tests/unit/tf/examples/test_04_export_ranking_models.py F [ 34%]
tests/unit/tf/examples/test_05_export_retrieval_model.py F [ 34%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 34%]
tests/unit/tf/inputs/test_continuous.py ..... [ 35%]
tests/unit/tf/inputs/test_embedding.py .............. [ 38%]
tests/unit/tf/inputs/test_tabular.py ....... [ 40%]
tests/unit/tf/layers/test_queue.py .............. [ 43%]
tests/unit/tf/losses/test_losses.py ....................... [ 48%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 49%]
tests/unit/tf/metrics/test_metrics_ranking.py ................. [ 53%]
tests/unit/tf/models/test_base.py ....... [ 55%]
tests/unit/tf/models/test_benchmark.py .. [ 55%]
tests/unit/tf/models/test_ranking.py ................ [ 59%]
tests/unit/tf/models/test_retrieval.py ........................... [ 65%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 65%]
tests/unit/tf/prediction_tasks/test_multi_task.py ....... [ 67%]
tests/unit/tf/prediction_tasks/test_next_item.py .................... [ 71%]
tests/unit/tf/prediction_tasks/test_regression.py .. [ 72%]
tests/unit/tf/prediction_tasks/test_sampling.py .................... [ 76%]
tests/unit/tf/utils/test_batch.py .... [ 77%]
tests/unit/torch/test_dataset.py ......... [ 79%]
tests/unit/torch/test_public_api.py . [ 79%]
tests/unit/torch/block/test_base.py .... [ 80%]
tests/unit/torch/block/test_mlp.py . [ 81%]
tests/unit/torch/features/test_continuous.py .. [ 81%]
tests/unit/torch/features/test_embedding.py .............. [ 84%]
tests/unit/torch/features/test_tabular.py .... [ 85%]
tests/unit/torch/model/test_head.py ............ [ 88%]
tests/unit/torch/model/test_model.py .. [ 88%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 90%]
tests/unit/torch/tabular/test_tabular.py ... [ 91%]
tests/unit/torch/tabular/test_transformations.py ....... [ 92%]
tests/unit/utils/test_schema_utils.py ................................ [100%]

=================================== FAILURES ===================================
________________________ test_synthetic_aliccp_raw_data ________________________

tmp_path = PosixPath('/tmp/pytest-of-jenkins/pytest-27/test_synthetic_aliccp_raw_data0')

def test_synthetic_aliccp_raw_data(tmp_path):
    dataset = generate_data("aliccp-raw", 100)

    assert isinstance(dataset, merlin.io.Dataset)
    assert dataset.num_rows == 100
    assert len(dataset.schema) == 25
    assert sorted(dataset.to_ddf().compute().columns) == [
        "click",
        "conversion",
        "item_brand",
        "item_category",
        "item_id",
        "item_intention",
        "item_shop",
        "position",
        "user_age",
        "user_brands",
        "user_categories",
        "user_consumption_1",
        "user_consumption_2",
        "user_gender",
        "user_geography",
        "user_group",
        "user_id",
        "user_intentions",
        "user_is_occupied",
        "user_item_brands",
        "user_item_categories",
        "user_item_intentions",
        "user_item_shops",
        "user_profile",
        "user_shops",
    ]
  ecommerce.transform_aliccp((dataset, dataset), tmp_path)

tests/unit/datasets/test_ecommerce.py:56:


merlin/datasets/ecommerce/aliccp/dataset.py:253: in transform_aliccp
nvt_workflow = nvt_workflow or default_aliccp_transformation(**locals())


add_target_encoding = True
kwargs = {'data': (<merlin.io.dataset.Dataset object at 0x7f06f94a88e0>, <merlin.io.dataset.Dataset object at 0x7f06f94a88e0>),...{}, 'nvt_workflow': None, 'output_path': PosixPath('/tmp/pytest-of-jenkins/pytest-27/test_synthetic_aliccp_raw_data0')}

def default_aliccp_transformation(add_target_encoding=True, **kwargs):
  import nvtabular as nvt

E ModuleNotFoundError: No module named 'nvtabular'

merlin/datasets/ecommerce/aliccp/dataset.py:177: ModuleNotFoundError
_______________________ test_example_02_nvt_integration ________________________

tb = <testbook.client.TestbookNotebookClient object at 0x7f06f9a100a0>

@testbook(REPO_ROOT / "examples/02-Merlin-Models-and-NVTabular-integration.ipynb", execute=False)
def test_example_02_nvt_integration(tb):
    tb.inject(
        """
        import os
        os.environ["INPUT_DATA_DIR"] = "/tmp/data/"
        from unittest.mock import patch
        from merlin.datasets.synthetic import generate_data
        mock_train, mock_valid = generate_data(
            input="movielens-1m",
            num_rows=1000,
            set_sizes=(0.8, 0.2)
        )
        p1 = patch(
            "merlin.datasets.entertainment.get_movielens",
            return_value=[mock_train, mock_valid]
        )
        p1.start()
        p2 = patch(
            "merlin.core.utils.download_file",
            return_value=[]
        )
        p2.start()
        import numpy as np
        import pandas
        from pathlib import Path
        from merlin.datasets.synthetic import generate_data
        mock_data = generate_data(
            input="movielens-1m-raw-ratings",
            num_rows=1000
        )
        mock_data = mock_data.to_ddf().compute()
        if not isinstance(mock_data, pandas.core.frame.DataFrame):
            mock_data = mock_data.to_pandas()
        input_path = os.environ.get(
            "INPUT_DATA_DIR",
            os.path.expanduser("~/merlin-models-data/movielens/")
        )
        path = Path(input_path + "ml-1m/")
        path.mkdir(parents=True, exist_ok=True)
        np.savetxt(
            input_path + 'ml-1m/ratings.dat',
            mock_data.values,
            delimiter='::',
            fmt='%s',
            encoding='utf-8'
        )
        """
    )
  tb.execute()

tests/unit/tf/examples/test_02_dataschema.py:55:


../../../.local/lib/python3.8/site-packages/testbook/client.py:147: in execute
super().execute_cell(cell, index)
../../../.local/lib/python3.8/site-packages/nbclient/util.py:84: in wrapped
return just_run(coro(*args, **kwargs))
../../../.local/lib/python3.8/site-packages/nbclient/util.py:62: in just_run
return loop.run_until_complete(coro)
/usr/lib/python3.8/asyncio/base_events.py:616: in run_until_complete
return future.result()
../../../.local/lib/python3.8/site-packages/nbclient/client.py:965: in async_execute_cell
await self._check_raise_for_error(cell, cell_index, exec_reply)


self = <testbook.client.TestbookNotebookClient object at 0x7f06f9a100a0>
cell = {'cell_type': 'code', 'execution_count': 3, 'id': '157fe18f', 'metadata': {'execution': {'iopub.status.busy': '2022-06...ls import download_file\nfrom merlin.datasets.entertainment import get_movielens\nfrom merlin.schema.tags import Tags'}
cell_index = 2
exec_reply = {'buffers': [], 'content': {'ename': 'ModuleNotFoundError', 'engine_info': {'engine_id': -1, 'engine_uuid': '6eaaf1a9-...e, 'engine': '6eaaf1a9-ca29-4608-a0a1-611af75a21b1', 'started': '2022-06-08T18:35:28.220473Z', 'status': 'error'}, ...}

async def _check_raise_for_error(
    self, cell: NotebookNode, cell_index: int, exec_reply: t.Optional[t.Dict]
) -> None:

    if exec_reply is None:
        return None

    exec_reply_content = exec_reply['content']
    if exec_reply_content['status'] != 'error':
        return None

    cell_allows_errors = (not self.force_raise_errors) and (
        self.allow_errors
        or exec_reply_content.get('ename') in self.allow_error_names
        or "raises-exception" in cell.metadata.get("tags", [])
    )
    await run_hook(self.on_cell_error, cell=cell, cell_index=cell_index)
    if not cell_allows_errors:
      raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content)

E nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
E ------------------
E import os
E import pandas as pd
E import nvtabular as nvt
E from merlin.models.utils.example_utils import workflow_fit_transform
E import merlin.io
E
E import merlin.models.tf as mm
E
E from nvtabular import ops
E from merlin.core.utils import download_file
E from merlin.datasets.entertainment import get_movielens
E from merlin.schema.tags import Tags
E ------------------
E
E ---------------------------------------------------------------------------
E ModuleNotFoundError Traceback (most recent call last)
E Input In [3], in <cell line: 3>()
E       1 import os
E       2 import pandas as pd
E ----> 3 import nvtabular as nvt
E       4 from merlin.models.utils.example_utils import workflow_fit_transform
E       5 import merlin.io
E
E ModuleNotFoundError: No module named 'nvtabular'
E ModuleNotFoundError: No module named 'nvtabular'

../../../.local/lib/python3.8/site-packages/nbclient/client.py:862: CellExecutionError
__________________ test_example_03_exploring_different_models __________________

self = <testbook.client.TestbookNotebookClient object at 0x7f06f9328400>
cell = {'cell_type': 'markdown', 'id': '56ecea23', 'metadata': {'pycharm': {'name': '#%% md\n'}}, 'source': "Let's start with importing the libraries that we'll use in this notebook."}
kwargs = {}, cell_indexes = [0, 1, 2, 3, 4, 5, ...]
executed_cells = [{'cell_type': 'code', 'execution_count': 2, 'id': '5f49a48e', 'metadata': {'pycharm': {'name': '#%%\n'}, 'execution':...ycharm': {'name': '#%% md\n'}}, 'source': "Let's start with importing the libraries that we'll use in this notebook."}]
idx = 4

def execute_cell(self, cell, **kwargs) -> Union[Dict, List[Dict]]:
    """
    Executes a cell or list of cells
    """
    if isinstance(cell, slice):
        start, stop = self._cell_index(cell.start), self._cell_index(cell.stop)
        if cell.step is not None:
            raise TestbookError('testbook does not support step argument')

        cell = range(start, stop + 1)
    elif isinstance(cell, str) or isinstance(cell, int):
        cell = [cell]

    cell_indexes = cell

    if all(isinstance(x, str) for x in cell):
        cell_indexes = [self._cell_index(tag) for tag in cell]

    executed_cells = []
    for idx in cell_indexes:
        try:
          cell = super().execute_cell(self.nb['cells'][idx], idx, **kwargs)

../../../.local/lib/python3.8/site-packages/testbook/client.py:133:


args = (<testbook.client.TestbookNotebookClient object at 0x7f06f9328400>, {'cell_type': 'code', 'execution_count': 3, 'outpu...cute_reply': '2022-06-08T18:35:30.955267Z', 'iopub.status.idle': '2022-06-08T18:35:30.956012Z'}}, 'id': '4ae08667'}, 4)
kwargs = {}

def wrapped(*args, **kwargs):
  return just_run(coro(*args, **kwargs))

../../../.local/lib/python3.8/site-packages/nbclient/util.py:84:


coro = <coroutine object NotebookClient.async_execute_cell at 0x7f066dd73140>

def just_run(coro: Awaitable) -> Any:
    """Make the coroutine run, even if there is an event loop running (using nest_asyncio)"""
    # original from vaex/asyncio.py
    loop = asyncio._get_running_loop()
    if loop is None:
        had_running_loop = False
        try:
            loop = asyncio.get_event_loop()
        except RuntimeError:
            # we can still get 'There is no current event loop in ...'
            loop = asyncio.new_event_loop()
            asyncio.set_event_loop(loop)
    else:
        had_running_loop = True
    if had_running_loop:
        # if there is a running loop, we patch using nest_asyncio
        # to have reentrant event loops
        check_ipython()
        import nest_asyncio

        nest_asyncio.apply()
        check_patch_tornado()
  return loop.run_until_complete(coro)

../../../.local/lib/python3.8/site-packages/nbclient/util.py:62:


self = <_UnixSelectorEventLoop running=False closed=False debug=False>
future = <Task finished name='Task-104' coro=<NotebookClient.async_execute_cell() done, defined at /var/jenkins_home/.local/lib...x1b[0;31mModuleNotFoundError\x1b[0m: No module named 'nvtabular'\nModuleNotFoundError: No module named 'nvtabular'\n")>

def run_until_complete(self, future):
    """Run until the Future is done.

    If the argument is a coroutine, it is wrapped in a Task.

    WARNING: It would be disastrous to call run_until_complete()
    with the same coroutine twice -- it would wrap it in two
    different Tasks and that can't be good.

    Return the Future's result, or raise its exception.
    """
    self._check_closed()
    self._check_running()

    new_task = not futures.isfuture(future)
    future = tasks.ensure_future(future, loop=self)
    if new_task:
        # An exception is raised if the future didn't complete, so there
        # is no need to log the "destroy pending task" message
        future._log_destroy_pending = False

    future.add_done_callback(_run_until_complete_cb)
    try:
        self.run_forever()
    except:
        if new_task and future.done() and not future.cancelled():
            # The coroutine raised a BaseException. Consume the exception
            # to not log a warning, the caller doesn't have access to the
            # local task.
            future.exception()
        raise
    finally:
        future.remove_done_callback(_run_until_complete_cb)
    if not future.done():
        raise RuntimeError('Event loop stopped before Future completed.')
  return future.result()

/usr/lib/python3.8/asyncio/base_events.py:616:


self = <testbook.client.TestbookNotebookClient object at 0x7f06f9328400>
cell = {'cell_type': 'code', 'execution_count': 3, 'outputs': [{'output_type': 'error', 'ename': 'ModuleNotFoundError', 'eval....execute_reply': '2022-06-08T18:35:30.955267Z', 'iopub.status.idle': '2022-06-08T18:35:30.956012Z'}}, 'id': '4ae08667'}
cell_index = 4, execution_count = None, store_history = True

async def async_execute_cell(
    self,
    cell: NotebookNode,
    cell_index: int,
    execution_count: t.Optional[int] = None,
    store_history: bool = True,
) -> NotebookNode:
    """
    Executes a single code cell.

    To execute all cells see :meth:`execute`.

    Parameters
    ----------
    cell : nbformat.NotebookNode
        The cell which is currently being processed.
    cell_index : int
        The position of the cell within the notebook object.
    execution_count : int
        The execution count to be assigned to the cell (default: Use kernel response)
    store_history : bool
        Determines if history should be stored in the kernel (default: False).
        Specific to ipython kernels, which can store command histories.

    Returns
    -------
    output : dict
        The execution output payload (or None for no output).

    Raises
    ------
    CellExecutionError
        If execution failed and should raise an exception, this will be raised
        with defaults about the failure.

    Returns
    -------
    cell : NotebookNode
        The cell which was just processed.
    """
    assert self.kc is not None

    await run_hook(self.on_cell_start, cell=cell, cell_index=cell_index)

    if cell.cell_type != 'code' or not cell.source.strip():
        self.log.debug("Skipping non-executing cell %s", cell_index)
        return cell

    if self.skip_cells_with_tag in cell.metadata.get("tags", []):
        self.log.debug("Skipping tagged cell %s", cell_index)
        return cell

    if self.record_timing:  # clear execution metadata prior to execution
        cell['metadata']['execution'] = {}

    self.log.debug("Executing cell:\n%s", cell.source)

    cell_allows_errors = (not self.force_raise_errors) and (
        self.allow_errors or "raises-exception" in cell.metadata.get("tags", [])
    )

    await run_hook(self.on_cell_execute, cell=cell, cell_index=cell_index)
    parent_msg_id = await ensure_async(
        self.kc.execute(
            cell.source, store_history=store_history, stop_on_error=not cell_allows_errors
        )
    )
    await run_hook(self.on_cell_complete, cell=cell, cell_index=cell_index)
    # We launched a code cell to execute
    self.code_cells_executed += 1
    exec_timeout = self._get_timeout(cell)

    cell.outputs = []
    self.clear_before_next_output = False

    task_poll_kernel_alive = asyncio.ensure_future(self._async_poll_kernel_alive())
    task_poll_output_msg = asyncio.ensure_future(
        self._async_poll_output_msg(parent_msg_id, cell, cell_index)
    )
    self.task_poll_for_reply = asyncio.ensure_future(
        self._async_poll_for_reply(
            parent_msg_id, cell, exec_timeout, task_poll_output_msg, task_poll_kernel_alive
        )
    )
    try:
        exec_reply = await self.task_poll_for_reply
    except asyncio.CancelledError:
        # can only be cancelled by task_poll_kernel_alive when the kernel is dead
        task_poll_output_msg.cancel()
        raise DeadKernelError("Kernel died")
    except Exception as e:
        # Best effort to cancel request if it hasn't been resolved
        try:
            # Check if the task_poll_output is doing the raising for us
            if not isinstance(e, CellControlSignal):
                task_poll_output_msg.cancel()
        finally:
            raise

    if execution_count:
        cell['execution_count'] = execution_count
  await self._check_raise_for_error(cell, cell_index, exec_reply)

../../../.local/lib/python3.8/site-packages/nbclient/client.py:965:


self = <testbook.client.TestbookNotebookClient object at 0x7f06f9328400>
cell = {'cell_type': 'code', 'execution_count': 3, 'outputs': [{'output_type': 'error', 'ename': 'ModuleNotFoundError', 'eval....execute_reply': '2022-06-08T18:35:30.955267Z', 'iopub.status.idle': '2022-06-08T18:35:30.956012Z'}}, 'id': '4ae08667'}
cell_index = 4
exec_reply = {'buffers': [], 'content': {'ename': 'ModuleNotFoundError', 'engine_info': {'engine_id': -1, 'engine_uuid': '6376590c-...e, 'engine': '6376590c-a815-4879-a501-a6f199fb7798', 'started': '2022-06-08T18:35:30.480346Z', 'status': 'error'}, ...}

async def _check_raise_for_error(
    self, cell: NotebookNode, cell_index: int, exec_reply: t.Optional[t.Dict]
) -> None:

    if exec_reply is None:
        return None

    exec_reply_content = exec_reply['content']
    if exec_reply_content['status'] != 'error':
        return None

    cell_allows_errors = (not self.force_raise_errors) and (
        self.allow_errors
        or exec_reply_content.get('ename') in self.allow_error_names
        or "raises-exception" in cell.metadata.get("tags", [])
    )
    await run_hook(self.on_cell_error, cell=cell, cell_index=cell_index)
    if not cell_allows_errors:
      raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content)

E nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
E ------------------
E import os
E import numpy as np
E
E from nvtabular.loader.tf_utils import configure_tensorflow
E
E configure_tensorflow()
E
E import nvtabular as nvt
E from nvtabular.ops import *
E from merlin.models.utils.example_utils import workflow_fit_transform, save_results
E
E from merlin.schema.tags import Tags
E
E import merlin.models.tf as mm
E from merlin.io.dataset import Dataset
E
E import tensorflow as tf
E ------------------
E
E ---------------------------------------------------------------------------
E ModuleNotFoundError Traceback (most recent call last)
E Input In [3], in <cell line: 4>()
E       1 import os
E       2 import numpy as np
E ----> 4 from nvtabular.loader.tf_utils import configure_tensorflow
E       6 configure_tensorflow()
E       8 import nvtabular as nvt
E
E ModuleNotFoundError: No module named 'nvtabular'
E ModuleNotFoundError: No module named 'nvtabular'

../../../.local/lib/python3.8/site-packages/nbclient/client.py:862: CellExecutionError

During handling of the above exception, another exception occurred:

tb = <testbook.client.TestbookNotebookClient object at 0x7f06f9328400>

@testbook(REPO_ROOT / "examples/03-Exploring-different-models.ipynb", execute=False)
def test_example_03_exploring_different_models(tb):
    tb.inject(
        """
        import os
        os.environ["DATA_FOLDER"] = "/tmp/data/"
        os.environ["NUM_ROWS"] = "999"
        """
    )
    NUM_OF_CELLS = len(tb.cells)
  tb.execute_cell(list(range(0, NUM_OF_CELLS - 5)))

tests/unit/tf/examples/test_03_exploring_different_models.py:18:


self = <testbook.client.TestbookNotebookClient object at 0x7f06f9328400>
cell = {'cell_type': 'markdown', 'id': '56ecea23', 'metadata': {'pycharm': {'name': '#%% md\n'}}, 'source': "Let's start with importing the libraries that we'll use in this notebook."}
kwargs = {}, cell_indexes = [0, 1, 2, 3, 4, 5, ...]
executed_cells = [{'cell_type': 'code', 'execution_count': 2, 'id': '5f49a48e', 'metadata': {'pycharm': {'name': '#%%\n'}, 'execution':...ycharm': {'name': '#%% md\n'}}, 'source': "Let's start with importing the libraries that we'll use in this notebook."}]
idx = 4

def execute_cell(self, cell, **kwargs) -> Union[Dict, List[Dict]]:
    """
    Executes a cell or list of cells
    """
    if isinstance(cell, slice):
        start, stop = self._cell_index(cell.start), self._cell_index(cell.stop)
        if cell.step is not None:
            raise TestbookError('testbook does not support step argument')

        cell = range(start, stop + 1)
    elif isinstance(cell, str) or isinstance(cell, int):
        cell = [cell]

    cell_indexes = cell

    if all(isinstance(x, str) for x in cell):
        cell_indexes = [self._cell_index(tag) for tag in cell]

    executed_cells = []
    for idx in cell_indexes:
        try:
            cell = super().execute_cell(self.nb['cells'][idx], idx, **kwargs)
        except CellExecutionError as ce:
          raise TestbookRuntimeError(ce.evalue, ce, self._get_error_class(ce.ename))

E testbook.exceptions.TestbookRuntimeError: An error occurred while executing the following cell:
E ------------------
E import os
E import numpy as np
E
E from nvtabular.loader.tf_utils import configure_tensorflow
E
E configure_tensorflow()
E
E import nvtabular as nvt
E from nvtabular.ops import *
E from merlin.models.utils.example_utils import workflow_fit_transform, save_results
E
E from merlin.schema.tags import Tags
E
E import merlin.models.tf as mm
E from merlin.io.dataset import Dataset
E
E import tensorflow as tf
E ------------------
E
E ---------------------------------------------------------------------------
E ModuleNotFoundError Traceback (most recent call last)
E Input In [3], in <cell line: 4>()
E       1 import os
E       2 import numpy as np
E ----> 4 from nvtabular.loader.tf_utils import configure_tensorflow
E       6 configure_tensorflow()
E       8 import nvtabular as nvt
E
E ModuleNotFoundError: No module named 'nvtabular'
E ModuleNotFoundError: No module named 'nvtabular'

../../../.local/lib/python3.8/site-packages/testbook/client.py:135: TestbookRuntimeError
___________________ test_example_04_exporting_ranking_models ___________________

tb = <testbook.client.TestbookNotebookClient object at 0x7f06f9333910>

@testbook(REPO_ROOT / "examples/04-Exporting-ranking-models.ipynb", execute=False)
def test_example_04_exporting_ranking_models(tb):
    tb.inject(
        """
        import os
        os.environ["DATA_FOLDER"] = "/tmp/data/"
        os.environ["NUM_ROWS"] = "999"
        """
    )
  tb.execute()

tests/unit/tf/examples/test_04_export_ranking_models.py:17:


../../../.local/lib/python3.8/site-packages/testbook/client.py:147: in execute
super().execute_cell(cell, index)
../../../.local/lib/python3.8/site-packages/nbclient/util.py:84: in wrapped
return just_run(coro(*args, **kwargs))
../../../.local/lib/python3.8/site-packages/nbclient/util.py:62: in just_run
return loop.run_until_complete(coro)
/usr/lib/python3.8/asyncio/base_events.py:616: in run_until_complete
return future.result()
../../../.local/lib/python3.8/site-packages/nbclient/client.py:965: in async_execute_cell
await self._check_raise_for_error(cell, cell_index, exec_reply)


self = <testbook.client.TestbookNotebookClient object at 0x7f06f9333910>
cell = {'cell_type': 'code', 'execution_count': 3, 'id': '37d5020c', 'metadata': {'pycharm': {'name': '#%%\n'}, 'execution': ...ema.tags import Tags\n\nimport merlin.models.tf as mm\nfrom merlin.io.dataset import Dataset\nimport tensorflow as tf'}
cell_index = 4
exec_reply = {'buffers': [], 'content': {'ename': 'ModuleNotFoundError', 'engine_info': {'engine_id': -1, 'engine_uuid': 'f9f9365d-...e, 'engine': 'f9f9365d-67b2-4d29-853a-35e39a3b1d3f', 'started': '2022-06-08T18:35:32.262229Z', 'status': 'error'}, ...}

async def _check_raise_for_error(
    self, cell: NotebookNode, cell_index: int, exec_reply: t.Optional[t.Dict]
) -> None:

    if exec_reply is None:
        return None

    exec_reply_content = exec_reply['content']
    if exec_reply_content['status'] != 'error':
        return None

    cell_allows_errors = (not self.force_raise_errors) and (
        self.allow_errors
        or exec_reply_content.get('ename') in self.allow_error_names
        or "raises-exception" in cell.metadata.get("tags", [])
    )
    await run_hook(self.on_cell_error, cell=cell, cell_index=cell_index)
    if not cell_allows_errors:
      raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content)

E nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
E ------------------
E import os
E
E import nvtabular as nvt
E from nvtabular.ops import *
E
E from merlin.models.utils.example_utils import workflow_fit_transform
E from merlin.schema.tags import Tags
E
E import merlin.models.tf as mm
E from merlin.io.dataset import Dataset
E import tensorflow as tf
E ------------------
E
E ---------------------------------------------------------------------------
E ModuleNotFoundError Traceback (most recent call last)
E Input In [3], in <cell line: 3>()
E       1 import os
E ----> 3 import nvtabular as nvt
E       4 from nvtabular.ops import *
E       6 from merlin.models.utils.example_utils import workflow_fit_transform
E
E ModuleNotFoundError: No module named 'nvtabular'
E ModuleNotFoundError: No module named 'nvtabular'

../../../.local/lib/python3.8/site-packages/nbclient/client.py:862: CellExecutionError
_______________________ test_example_05_retrieval_models _______________________

tb = <testbook.client.TestbookNotebookClient object at 0x7f06f9338f70>

@testbook(REPO_ROOT / "examples/05-Retrieval-Model.ipynb", execute=False)
def test_example_05_retrieval_models(tb):
    tb.inject(
        """
        import os
        os.environ["DATA_FOLDER"] = "/tmp/data/"
        os.environ["NUM_ROWS"] = "999"
        """
    )
  tb.execute()

tests/unit/tf/examples/test_05_export_retrieval_model.py:17:


../../../.local/lib/python3.8/site-packages/testbook/client.py:147: in execute
super().execute_cell(cell, index)
../../../.local/lib/python3.8/site-packages/nbclient/util.py:84: in wrapped
return just_run(coro(*args, **kwargs))
../../../.local/lib/python3.8/site-packages/nbclient/util.py:62: in just_run
return loop.run_until_complete(coro)
/usr/lib/python3.8/asyncio/base_events.py:616: in run_until_complete
return future.result()
../../../.local/lib/python3.8/site-packages/nbclient/client.py:965: in async_execute_cell
await self._check_raise_for_error(cell, cell_index, exec_reply)


self = <testbook.client.TestbookNotebookClient object at 0x7f06f9338f70>
cell = {'cell_type': 'code', 'execution_count': 3, 'id': '92aa8daa', 'metadata': {'pycharm': {'name': '#%%\n'}, 'execution': ...a.tags import Tags\n\nimport merlin.models.tf as mm\nfrom merlin.io.dataset import Dataset\n\nimport tensorflow as tf'}
cell_index = 4
exec_reply = {'buffers': [], 'content': {'ename': 'ModuleNotFoundError', 'engine_info': {'engine_id': -1, 'engine_uuid': '36a4e5c8-...e, 'engine': '36a4e5c8-9942-402a-b8c0-92488dd4be5f', 'started': '2022-06-08T18:35:34.064346Z', 'status': 'error'}, ...}

async def _check_raise_for_error(
    self, cell: NotebookNode, cell_index: int, exec_reply: t.Optional[t.Dict]
) -> None:

    if exec_reply is None:
        return None

    exec_reply_content = exec_reply['content']
    if exec_reply_content['status'] != 'error':
        return None

    cell_allows_errors = (not self.force_raise_errors) and (
        self.allow_errors
        or exec_reply_content.get('ename') in self.allow_error_names
        or "raises-exception" in cell.metadata.get("tags", [])
    )
    await run_hook(self.on_cell_error, cell=cell, cell_index=cell_index)
    if not cell_allows_errors:
      raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content)

E nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
E ------------------
E import os
E
E import nvtabular as nvt
E from nvtabular.ops import *
E from merlin.models.utils.example_utils import workflow_fit_transform
E
E from merlin.schema.tags import Tags
E
E import merlin.models.tf as mm
E from merlin.io.dataset import Dataset
E
E import tensorflow as tf
E ------------------
E
E ---------------------------------------------------------------------------
E ModuleNotFoundError Traceback (most recent call last)
E Input In [3], in <cell line: 3>()
E       1 import os
E ----> 3 import nvtabular as nvt
E       4 from nvtabular.ops import *
E       5 from merlin.models.utils.example_utils import workflow_fit_transform
E
E ModuleNotFoundError: No module named 'nvtabular'
E ModuleNotFoundError: No module named 'nvtabular'

../../../.local/lib/python3.8/site-packages/nbclient/client.py:862: CellExecutionError
=============================== warnings summary ===============================
../../../.local/lib/python3.8/site-packages/flatbuffers/compat.py:19
/var/jenkins_home/.local/lib/python3.8/site-packages/flatbuffers/compat.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-8]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-8]
tests/unit/tf/test_dataset.py::test_tf_catname_ordering
tests/unit/tf/test_dataset.py::test_tf_map
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1292: UserWarning: The deep parameter is ignored and is only included for pandas compatibility.
warnings.warn(

tests/unit/tf/blocks/core/test_index.py: 7 warnings
tests/unit/tf/models/test_retrieval.py: 272 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 100 warnings
tests/unit/tf/utils/test_batch.py: 4 warnings
/tmp/autograph_generated_filedmmcw8bf.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/blocks/core/test_transformations.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/blocks/core/test_transformations.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/blocks/core/test_transformations.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/blocks/core/test_transformations.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:321: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/models/test_ranking.py::test_dlrm_model_multi_task[True]
tests/unit/tf/models/test_ranking.py::test_dlrm_model_multi_task[True]
tests/unit/tf/models/test_ranking.py::test_dlrm_model_multi_task[False]
tests/unit/tf/models/test_ranking.py::test_dlrm_model_multi_task[False]
/var/jenkins_home/.local/lib/python3.8/site-packages/numpy/core/numeric.py:2453: DeprecationWarning: elementwise comparison failed; this will raise an error in the future.
return bool(asarray(a1 == a2).all())

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:210.)
return {key: torch.tensor(value) for key, value in data.items()}

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/implicit/__init__.py:18: could not import 'implicit': No module named 'implicit'
SKIPPED [1] tests/unit/lightfm/__init__.py:18: could not import 'lightfm': No module named 'lightfm'
SKIPPED [1] tests/unit/xgb/__init__.py:19: could not import 'xgboost': No module named 'xgboost'
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
===== 5 failed, 432 passed, 10 skipped, 412 warnings in 1099.81s (0:18:19) =====
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.github.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins108377811879590863.sh

@benfred (Member, Author) commented Jun 8, 2022

rerun tests

@nvidia-merlin-bot

CI Results
GitHub pull request #504 of commit f549498d83298b4be5f30c0e0a08153e0497c4e2, no merge conflicts.
Running as SYSTEM
Setting status of f549498d83298b4be5f30c0e0a08153e0497c4e2 to PENDING with url https://10.20.13.93:8080/job/merlin_models/428/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/504/*:refs/remotes/origin/pr/504/* # timeout=10
 > git rev-parse f549498d83298b4be5f30c0e0a08153e0497c4e2^{commit} # timeout=10
Checking out Revision f549498d83298b4be5f30c0e0a08153e0497c4e2 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f f549498d83298b4be5f30c0e0a08153e0497c4e2 # timeout=10
Commit message: "Use shared implementation of triage workflow"
 > git rev-list --no-walk d32897ab2bfc292811a6a45b8df32f9088773439 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins16090387824458259581.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /var/jenkins_home/.local/lib/python3.8/site-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.4.0)
Requirement already satisfied: nbclient>=0.4.0 in /var/jenkins_home/.local/lib/python3.8/site-packages (from testbook) (0.5.13)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.2.2.post1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.6.0)
Requirement already satisfied: jupyter-core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.10.0)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.15.3)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.3)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (21.4.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.7.1)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (23.1.0)
Requirement already satisfied: tornado>=6.0 in /var/jenkins_home/.local/lib/python3.8/site-packages/tornado-6.1-py3.8-linux-x86_64.egg (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.1)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.5.0, xdist-2.5.0, forked-1.4.0, cov-3.0.0
collected 444 items / 3 skipped

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 1%]
tests/unit/datasets/test_ecommerce.py .Fsss [ 2%]
tests/unit/datasets/test_entertainment.py ....sss. [ 4%]
tests/unit/datasets/test_social.py . [ 4%]
tests/unit/datasets/test_synthetic.py ..... [ 5%]
tests/unit/tf/test_core.py ...... [ 6%]
tests/unit/tf/test_dataset.py ............... [ 10%]
tests/unit/tf/test_public_api.py . [ 10%]
tests/unit/tf/blocks/test_cross.py ........... [ 13%]
tests/unit/tf/blocks/test_dlrm.py ........ [ 14%]
tests/unit/tf/blocks/test_interactions.py . [ 15%]
tests/unit/tf/blocks/test_mlp.py ............................. [ 21%]
tests/unit/tf/blocks/core/test_aggregation.py ......... [ 23%]
tests/unit/tf/blocks/core/test_base.py .. [ 24%]
tests/unit/tf/blocks/core/test_combinators.py ... [ 24%]
tests/unit/tf/blocks/core/test_index.py ... [ 25%]
tests/unit/tf/blocks/core/test_masking.py ....... [ 27%]
tests/unit/tf/blocks/core/test_tabular.py ... [ 27%]
tests/unit/tf/blocks/core/test_transformations.py ........... [ 30%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 30%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ........... [ 33%]
tests/unit/tf/examples/test_01_getting_started.py . [ 33%]
tests/unit/tf/examples/test_02_dataschema.py F [ 33%]
tests/unit/tf/examples/test_03_exploring_different_models.py F [ 33%]
tests/unit/tf/examples/test_04_export_ranking_models.py F [ 34%]
tests/unit/tf/examples/test_05_export_retrieval_model.py F [ 34%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 34%]
tests/unit/tf/inputs/test_continuous.py ..... [ 35%]
tests/unit/tf/inputs/test_embedding.py .............. [ 38%]
tests/unit/tf/inputs/test_tabular.py ....... [ 40%]
tests/unit/tf/layers/test_queue.py .............. [ 43%]
tests/unit/tf/losses/test_losses.py ....................... [ 48%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 49%]
tests/unit/tf/metrics/test_metrics_ranking.py ................. [ 53%]
tests/unit/tf/models/test_base.py ....... [ 55%]
tests/unit/tf/models/test_benchmark.py .. [ 55%]
tests/unit/tf/models/test_ranking.py ................ [ 59%]
tests/unit/tf/models/test_retrieval.py ........................... [ 65%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 65%]
tests/unit/tf/prediction_tasks/test_multi_task.py ....... [ 67%]
tests/unit/tf/prediction_tasks/test_next_item.py .................... [ 71%]
tests/unit/tf/prediction_tasks/test_regression.py .. [ 72%]
tests/unit/tf/prediction_tasks/test_sampling.py .................... [ 76%]
tests/unit/tf/utils/test_batch.py .... [ 77%]
tests/unit/torch/test_dataset.py ......... [ 79%]
tests/unit/torch/test_public_api.py . [ 79%]
tests/unit/torch/block/test_base.py .... [ 80%]
tests/unit/torch/block/test_mlp.py . [ 81%]
tests/unit/torch/features/test_continuous.py .. [ 81%]
tests/unit/torch/features/test_embedding.py .............. [ 84%]
tests/unit/torch/features/test_tabular.py .... [ 85%]
tests/unit/torch/model/test_head.py ............ [ 88%]
tests/unit/torch/model/test_model.py .. [ 88%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 90%]
tests/unit/torch/tabular/test_tabular.py ... [ 91%]
tests/unit/torch/tabular/test_transformations.py ....... [ 92%]
tests/unit/utils/test_schema_utils.py ................................ [100%]

=================================== FAILURES ===================================
________________________ test_synthetic_aliccp_raw_data ________________________

tmp_path = PosixPath('/tmp/pytest-of-jenkins/pytest-29/test_synthetic_aliccp_raw_data0')

def test_synthetic_aliccp_raw_data(tmp_path):
    dataset = generate_data("aliccp-raw", 100)

    assert isinstance(dataset, merlin.io.Dataset)
    assert dataset.num_rows == 100
    assert len(dataset.schema) == 25
    assert sorted(dataset.to_ddf().compute().columns) == [
        "click",
        "conversion",
        "item_brand",
        "item_category",
        "item_id",
        "item_intention",
        "item_shop",
        "position",
        "user_age",
        "user_brands",
        "user_categories",
        "user_consumption_1",
        "user_consumption_2",
        "user_gender",
        "user_geography",
        "user_group",
        "user_id",
        "user_intentions",
        "user_is_occupied",
        "user_item_brands",
        "user_item_categories",
        "user_item_intentions",
        "user_item_shops",
        "user_profile",
        "user_shops",
    ]
  ecommerce.transform_aliccp((dataset, dataset), tmp_path)

tests/unit/datasets/test_ecommerce.py:56:


merlin/datasets/ecommerce/aliccp/dataset.py:253: in transform_aliccp
nvt_workflow = nvt_workflow or default_aliccp_transformation(**locals())


add_target_encoding = True
kwargs = {'data': (<merlin.io.dataset.Dataset object at 0x7f167399ea30>, <merlin.io.dataset.Dataset object at 0x7f167399ea30>),...{}, 'nvt_workflow': None, 'output_path': PosixPath('/tmp/pytest-of-jenkins/pytest-29/test_synthetic_aliccp_raw_data0')}

def default_aliccp_transformation(add_target_encoding=True, **kwargs):
  import nvtabular as nvt

E ModuleNotFoundError: No module named 'nvtabular'

merlin/datasets/ecommerce/aliccp/dataset.py:177: ModuleNotFoundError
_______________________ test_example_02_nvt_integration ________________________

tb = <testbook.client.TestbookNotebookClient object at 0x7f169c1d72b0>

@testbook(REPO_ROOT / "examples/02-Merlin-Models-and-NVTabular-integration.ipynb", execute=False)
def test_example_02_nvt_integration(tb):
    tb.inject(
        """
        import os
        os.environ["INPUT_DATA_DIR"] = "/tmp/data/"
        from unittest.mock import patch
        from merlin.datasets.synthetic import generate_data
        mock_train, mock_valid = generate_data(
            input="movielens-1m",
            num_rows=1000,
            set_sizes=(0.8, 0.2)
        )
        p1 = patch(
            "merlin.datasets.entertainment.get_movielens",
            return_value=[mock_train, mock_valid]
        )
        p1.start()
        p2 = patch(
            "merlin.core.utils.download_file",
            return_value=[]
        )
        p2.start()
        import numpy as np
        import pandas
        from pathlib import Path
        from merlin.datasets.synthetic import generate_data
        mock_data = generate_data(
            input="movielens-1m-raw-ratings",
            num_rows=1000
        )
        mock_data = mock_data.to_ddf().compute()
        if not isinstance(mock_data, pandas.core.frame.DataFrame):
            mock_data = mock_data.to_pandas()
        input_path = os.environ.get(
            "INPUT_DATA_DIR",
            os.path.expanduser("~/merlin-models-data/movielens/")
        )
        path = Path(input_path + "ml-1m/")
        path.mkdir(parents=True, exist_ok=True)
        np.savetxt(
            input_path + 'ml-1m/ratings.dat',
            mock_data.values,
            delimiter='::',
            fmt='%s',
            encoding='utf-8'
        )
        """
    )
  tb.execute()

tests/unit/tf/examples/test_02_dataschema.py:55:


../../../.local/lib/python3.8/site-packages/testbook/client.py:147: in execute
super().execute_cell(cell, index)
../../../.local/lib/python3.8/site-packages/nbclient/util.py:84: in wrapped
return just_run(coro(*args, **kwargs))
../../../.local/lib/python3.8/site-packages/nbclient/util.py:62: in just_run
return loop.run_until_complete(coro)
/usr/lib/python3.8/asyncio/base_events.py:616: in run_until_complete
return future.result()
../../../.local/lib/python3.8/site-packages/nbclient/client.py:965: in async_execute_cell
await self._check_raise_for_error(cell, cell_index, exec_reply)


self = <testbook.client.TestbookNotebookClient object at 0x7f169c1d72b0>
cell = {'cell_type': 'code', 'execution_count': 3, 'id': '157fe18f', 'metadata': {'execution': {'iopub.status.busy': '2022-06...ls import download_file\nfrom merlin.datasets.entertainment import get_movielens\nfrom merlin.schema.tags import Tags'}
cell_index = 2
exec_reply = {'buffers': [], 'content': {'ename': 'ModuleNotFoundError', 'engine_info': {'engine_id': -1, 'engine_uuid': '08918abf-...e, 'engine': '08918abf-528a-4c1d-a33c-919aad1fa7c0', 'started': '2022-06-08T20:45:31.615150Z', 'status': 'error'}, ...}

async def _check_raise_for_error(
    self, cell: NotebookNode, cell_index: int, exec_reply: t.Optional[t.Dict]
) -> None:

    if exec_reply is None:
        return None

    exec_reply_content = exec_reply['content']
    if exec_reply_content['status'] != 'error':
        return None

    cell_allows_errors = (not self.force_raise_errors) and (
        self.allow_errors
        or exec_reply_content.get('ename') in self.allow_error_names
        or "raises-exception" in cell.metadata.get("tags", [])
    )
    await run_hook(self.on_cell_error, cell=cell, cell_index=cell_index)
    if not cell_allows_errors:
      raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content)

E nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
E ------------------
E import os
E import pandas as pd
E import nvtabular as nvt
E from merlin.models.utils.example_utils import workflow_fit_transform
E import merlin.io
E
E import merlin.models.tf as mm
E
E from nvtabular import ops
E from merlin.core.utils import download_file
E from merlin.datasets.entertainment import get_movielens
E from merlin.schema.tags import Tags
E ------------------
E
E ---------------------------------------------------------------------------
E ModuleNotFoundError                       Traceback (most recent call last)
E Input In [3], in <cell line: 3>()
E       1 import os
E       2 import pandas as pd
E ----> 3 import nvtabular as nvt
E       4 from merlin.models.utils.example_utils import workflow_fit_transform
E       5 import merlin.io
E
E ModuleNotFoundError: No module named 'nvtabular'
E ModuleNotFoundError: No module named 'nvtabular'

../../../.local/lib/python3.8/site-packages/nbclient/client.py:862: CellExecutionError
__________________ test_example_03_exploring_different_models __________________

self = <testbook.client.TestbookNotebookClient object at 0x7f1673793520>
cell = {'cell_type': 'markdown', 'id': '56ecea23', 'metadata': {'pycharm': {'name': '#%% md\n'}}, 'source': "Let's start with importing the libraries that we'll use in this notebook."}
kwargs = {}, cell_indexes = [0, 1, 2, 3, 4, 5, ...]
executed_cells = [{'cell_type': 'code', 'execution_count': 2, 'id': '5f49a48e', 'metadata': {'pycharm': {'name': '#%%\n'}, 'execution':...ycharm': {'name': '#%% md\n'}}, 'source': "Let's start with importing the libraries that we'll use in this notebook."}]
idx = 4

def execute_cell(self, cell, **kwargs) -> Union[Dict, List[Dict]]:
    """
    Executes a cell or list of cells
    """
    if isinstance(cell, slice):
        start, stop = self._cell_index(cell.start), self._cell_index(cell.stop)
        if cell.step is not None:
            raise TestbookError('testbook does not support step argument')

        cell = range(start, stop + 1)
    elif isinstance(cell, str) or isinstance(cell, int):
        cell = [cell]

    cell_indexes = cell

    if all(isinstance(x, str) for x in cell):
        cell_indexes = [self._cell_index(tag) for tag in cell]

    executed_cells = []
    for idx in cell_indexes:
        try:
          cell = super().execute_cell(self.nb['cells'][idx], idx, **kwargs)

../../../.local/lib/python3.8/site-packages/testbook/client.py:133:


args = (<testbook.client.TestbookNotebookClient object at 0x7f1673793520>, {'cell_type': 'code', 'execution_count': 3, 'outpu...cute_reply': '2022-06-08T20:45:34.174400Z', 'iopub.status.idle': '2022-06-08T20:45:34.175120Z'}}, 'id': 'ae1e05aa'}, 4)
kwargs = {}

def wrapped(*args, **kwargs):
  return just_run(coro(*args, **kwargs))

../../../.local/lib/python3.8/site-packages/nbclient/util.py:84:


coro = <coroutine object NotebookClient.async_execute_cell at 0x7f15e2e2cc40>

def just_run(coro: Awaitable) -> Any:
    """Make the coroutine run, even if there is an event loop running (using nest_asyncio)"""
    # original from vaex/asyncio.py
    loop = asyncio._get_running_loop()
    if loop is None:
        had_running_loop = False
        try:
            loop = asyncio.get_event_loop()
        except RuntimeError:
            # we can still get 'There is no current event loop in ...'
            loop = asyncio.new_event_loop()
            asyncio.set_event_loop(loop)
    else:
        had_running_loop = True
    if had_running_loop:
        # if there is a running loop, we patch using nest_asyncio
        # to have reentrant event loops
        check_ipython()
        import nest_asyncio

        nest_asyncio.apply()
        check_patch_tornado()
  return loop.run_until_complete(coro)

../../../.local/lib/python3.8/site-packages/nbclient/util.py:62:


self = <_UnixSelectorEventLoop running=False closed=False debug=False>
future = <Task finished name='Task-104' coro=<NotebookClient.async_execute_cell() done, defined at /var/jenkins_home/.local/lib...x1b[0;31mModuleNotFoundError\x1b[0m: No module named 'nvtabular'\nModuleNotFoundError: No module named 'nvtabular'\n")>

def run_until_complete(self, future):
    """Run until the Future is done.

    If the argument is a coroutine, it is wrapped in a Task.

    WARNING: It would be disastrous to call run_until_complete()
    with the same coroutine twice -- it would wrap it in two
    different Tasks and that can't be good.

    Return the Future's result, or raise its exception.
    """
    self._check_closed()
    self._check_running()

    new_task = not futures.isfuture(future)
    future = tasks.ensure_future(future, loop=self)
    if new_task:
        # An exception is raised if the future didn't complete, so there
        # is no need to log the "destroy pending task" message
        future._log_destroy_pending = False

    future.add_done_callback(_run_until_complete_cb)
    try:
        self.run_forever()
    except:
        if new_task and future.done() and not future.cancelled():
            # The coroutine raised a BaseException. Consume the exception
            # to not log a warning, the caller doesn't have access to the
            # local task.
            future.exception()
        raise
    finally:
        future.remove_done_callback(_run_until_complete_cb)
    if not future.done():
        raise RuntimeError('Event loop stopped before Future completed.')
  return future.result()

/usr/lib/python3.8/asyncio/base_events.py:616:


self = <testbook.client.TestbookNotebookClient object at 0x7f1673793520>
cell = {'cell_type': 'code', 'execution_count': 3, 'outputs': [{'output_type': 'error', 'ename': 'ModuleNotFoundError', 'eval....execute_reply': '2022-06-08T20:45:34.174400Z', 'iopub.status.idle': '2022-06-08T20:45:34.175120Z'}}, 'id': 'ae1e05aa'}
cell_index = 4, execution_count = None, store_history = True

async def async_execute_cell(
    self,
    cell: NotebookNode,
    cell_index: int,
    execution_count: t.Optional[int] = None,
    store_history: bool = True,
) -> NotebookNode:
    """
    Executes a single code cell.

    To execute all cells see :meth:`execute`.

    Parameters
    ----------
    cell : nbformat.NotebookNode
        The cell which is currently being processed.
    cell_index : int
        The position of the cell within the notebook object.
    execution_count : int
        The execution count to be assigned to the cell (default: Use kernel response)
    store_history : bool
        Determines if history should be stored in the kernel (default: False).
        Specific to ipython kernels, which can store command histories.

    Returns
    -------
    output : dict
        The execution output payload (or None for no output).

    Raises
    ------
    CellExecutionError
        If execution failed and should raise an exception, this will be raised
        with defaults about the failure.

    Returns
    -------
    cell : NotebookNode
        The cell which was just processed.
    """
    assert self.kc is not None

    await run_hook(self.on_cell_start, cell=cell, cell_index=cell_index)

    if cell.cell_type != 'code' or not cell.source.strip():
        self.log.debug("Skipping non-executing cell %s", cell_index)
        return cell

    if self.skip_cells_with_tag in cell.metadata.get("tags", []):
        self.log.debug("Skipping tagged cell %s", cell_index)
        return cell

    if self.record_timing:  # clear execution metadata prior to execution
        cell['metadata']['execution'] = {}

    self.log.debug("Executing cell:\n%s", cell.source)

    cell_allows_errors = (not self.force_raise_errors) and (
        self.allow_errors or "raises-exception" in cell.metadata.get("tags", [])
    )

    await run_hook(self.on_cell_execute, cell=cell, cell_index=cell_index)
    parent_msg_id = await ensure_async(
        self.kc.execute(
            cell.source, store_history=store_history, stop_on_error=not cell_allows_errors
        )
    )
    await run_hook(self.on_cell_complete, cell=cell, cell_index=cell_index)
    # We launched a code cell to execute
    self.code_cells_executed += 1
    exec_timeout = self._get_timeout(cell)

    cell.outputs = []
    self.clear_before_next_output = False

    task_poll_kernel_alive = asyncio.ensure_future(self._async_poll_kernel_alive())
    task_poll_output_msg = asyncio.ensure_future(
        self._async_poll_output_msg(parent_msg_id, cell, cell_index)
    )
    self.task_poll_for_reply = asyncio.ensure_future(
        self._async_poll_for_reply(
            parent_msg_id, cell, exec_timeout, task_poll_output_msg, task_poll_kernel_alive
        )
    )
    try:
        exec_reply = await self.task_poll_for_reply
    except asyncio.CancelledError:
        # can only be cancelled by task_poll_kernel_alive when the kernel is dead
        task_poll_output_msg.cancel()
        raise DeadKernelError("Kernel died")
    except Exception as e:
        # Best effort to cancel request if it hasn't been resolved
        try:
            # Check if the task_poll_output is doing the raising for us
            if not isinstance(e, CellControlSignal):
                task_poll_output_msg.cancel()
        finally:
            raise

    if execution_count:
        cell['execution_count'] = execution_count
  await self._check_raise_for_error(cell, cell_index, exec_reply)

../../../.local/lib/python3.8/site-packages/nbclient/client.py:965:


self = <testbook.client.TestbookNotebookClient object at 0x7f1673793520>
cell = {'cell_type': 'code', 'execution_count': 3, 'outputs': [{'output_type': 'error', 'ename': 'ModuleNotFoundError', 'eval....execute_reply': '2022-06-08T20:45:34.174400Z', 'iopub.status.idle': '2022-06-08T20:45:34.175120Z'}}, 'id': 'ae1e05aa'}
cell_index = 4
exec_reply = {'buffers': [], 'content': {'ename': 'ModuleNotFoundError', 'engine_info': {'engine_id': -1, 'engine_uuid': 'ec257020-...e, 'engine': 'ec257020-89b8-4385-be28-f80eaa52163d', 'started': '2022-06-08T20:45:33.689514Z', 'status': 'error'}, ...}

async def _check_raise_for_error(
    self, cell: NotebookNode, cell_index: int, exec_reply: t.Optional[t.Dict]
) -> None:

    if exec_reply is None:
        return None

    exec_reply_content = exec_reply['content']
    if exec_reply_content['status'] != 'error':
        return None

    cell_allows_errors = (not self.force_raise_errors) and (
        self.allow_errors
        or exec_reply_content.get('ename') in self.allow_error_names
        or "raises-exception" in cell.metadata.get("tags", [])
    )
    await run_hook(self.on_cell_error, cell=cell, cell_index=cell_index)
    if not cell_allows_errors:
      raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content)

E nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
E ------------------
E import os
E import numpy as np
E
E from nvtabular.loader.tf_utils import configure_tensorflow
E
E configure_tensorflow()
E
E import nvtabular as nvt
E from nvtabular.ops import *
E from merlin.models.utils.example_utils import workflow_fit_transform, save_results
E
E from merlin.schema.tags import Tags
E
E import merlin.models.tf as mm
E from merlin.io.dataset import Dataset
E
E import tensorflow as tf
E ------------------
E
E ---------------------------------------------------------------------------
E ModuleNotFoundError                       Traceback (most recent call last)
E Input In [3], in <cell line: 4>()
E       1 import os
E       2 import numpy as np
E ----> 4 from nvtabular.loader.tf_utils import configure_tensorflow
E       6 configure_tensorflow()
E       8 import nvtabular as nvt
E
E ModuleNotFoundError: No module named 'nvtabular'
E ModuleNotFoundError: No module named 'nvtabular'

../../../.local/lib/python3.8/site-packages/nbclient/client.py:862: CellExecutionError

During handling of the above exception, another exception occurred:

tb = <testbook.client.TestbookNotebookClient object at 0x7f1673793520>

@testbook(REPO_ROOT / "examples/03-Exploring-different-models.ipynb", execute=False)
def test_example_03_exploring_different_models(tb):
    tb.inject(
        """
        import os
        os.environ["DATA_FOLDER"] = "/tmp/data/"
        os.environ["NUM_ROWS"] = "999"
        """
    )
    NUM_OF_CELLS = len(tb.cells)
  tb.execute_cell(list(range(0, NUM_OF_CELLS - 5)))

tests/unit/tf/examples/test_03_exploring_different_models.py:18:


self = <testbook.client.TestbookNotebookClient object at 0x7f1673793520>
cell = {'cell_type': 'markdown', 'id': '56ecea23', 'metadata': {'pycharm': {'name': '#%% md\n'}}, 'source': "Let's start with importing the libraries that we'll use in this notebook."}
kwargs = {}, cell_indexes = [0, 1, 2, 3, 4, 5, ...]
executed_cells = [{'cell_type': 'code', 'execution_count': 2, 'id': '5f49a48e', 'metadata': {'pycharm': {'name': '#%%\n'}, 'execution':...ycharm': {'name': '#%% md\n'}}, 'source': "Let's start with importing the libraries that we'll use in this notebook."}]
idx = 4

def execute_cell(self, cell, **kwargs) -> Union[Dict, List[Dict]]:
    """
    Executes a cell or list of cells
    """
    if isinstance(cell, slice):
        start, stop = self._cell_index(cell.start), self._cell_index(cell.stop)
        if cell.step is not None:
            raise TestbookError('testbook does not support step argument')

        cell = range(start, stop + 1)
    elif isinstance(cell, str) or isinstance(cell, int):
        cell = [cell]

    cell_indexes = cell

    if all(isinstance(x, str) for x in cell):
        cell_indexes = [self._cell_index(tag) for tag in cell]

    executed_cells = []
    for idx in cell_indexes:
        try:
            cell = super().execute_cell(self.nb['cells'][idx], idx, **kwargs)
        except CellExecutionError as ce:
          raise TestbookRuntimeError(ce.evalue, ce, self._get_error_class(ce.ename))

E testbook.exceptions.TestbookRuntimeError: An error occurred while executing the following cell:
E ------------------
E import os
E import numpy as np
E
E from nvtabular.loader.tf_utils import configure_tensorflow
E
E configure_tensorflow()
E
E import nvtabular as nvt
E from nvtabular.ops import *
E from merlin.models.utils.example_utils import workflow_fit_transform, save_results
E
E from merlin.schema.tags import Tags
E
E import merlin.models.tf as mm
E from merlin.io.dataset import Dataset
E
E import tensorflow as tf
E ------------------
E
E ---------------------------------------------------------------------------
E ModuleNotFoundError                       Traceback (most recent call last)
E Input In [3], in <cell line: 4>()
E       1 import os
E       2 import numpy as np
E ----> 4 from nvtabular.loader.tf_utils import configure_tensorflow
E       6 configure_tensorflow()
E       8 import nvtabular as nvt
E
E ModuleNotFoundError: No module named 'nvtabular'
E ModuleNotFoundError: No module named 'nvtabular'

../../../.local/lib/python3.8/site-packages/testbook/client.py:135: TestbookRuntimeError
___________________ test_example_04_exporting_ranking_models ___________________

tb = <testbook.client.TestbookNotebookClient object at 0x7f1673740a30>

@testbook(REPO_ROOT / "examples/04-Exporting-ranking-models.ipynb", execute=False)
def test_example_04_exporting_ranking_models(tb):
    tb.inject(
        """
        import os
        os.environ["DATA_FOLDER"] = "/tmp/data/"
        os.environ["NUM_ROWS"] = "999"
        """
    )
  tb.execute()

tests/unit/tf/examples/test_04_export_ranking_models.py:17:


../../../.local/lib/python3.8/site-packages/testbook/client.py:147: in execute
super().execute_cell(cell, index)
../../../.local/lib/python3.8/site-packages/nbclient/util.py:84: in wrapped
return just_run(coro(*args, **kwargs))
../../../.local/lib/python3.8/site-packages/nbclient/util.py:62: in just_run
return loop.run_until_complete(coro)
/usr/lib/python3.8/asyncio/base_events.py:616: in run_until_complete
return future.result()
../../../.local/lib/python3.8/site-packages/nbclient/client.py:965: in async_execute_cell
await self._check_raise_for_error(cell, cell_index, exec_reply)


self = <testbook.client.TestbookNotebookClient object at 0x7f1673740a30>
cell = {'cell_type': 'code', 'execution_count': 3, 'id': '37d5020c', 'metadata': {'pycharm': {'name': '#%%\n'}, 'execution': ...ema.tags import Tags\n\nimport merlin.models.tf as mm\nfrom merlin.io.dataset import Dataset\nimport tensorflow as tf'}
cell_index = 4
exec_reply = {'buffers': [], 'content': {'ename': 'ModuleNotFoundError', 'engine_info': {'engine_id': -1, 'engine_uuid': 'd65b72c4-...e, 'engine': 'd65b72c4-ff0f-4378-a4c8-c8611844a63c', 'started': '2022-06-08T20:45:35.441539Z', 'status': 'error'}, ...}

async def _check_raise_for_error(
    self, cell: NotebookNode, cell_index: int, exec_reply: t.Optional[t.Dict]
) -> None:

    if exec_reply is None:
        return None

    exec_reply_content = exec_reply['content']
    if exec_reply_content['status'] != 'error':
        return None

    cell_allows_errors = (not self.force_raise_errors) and (
        self.allow_errors
        or exec_reply_content.get('ename') in self.allow_error_names
        or "raises-exception" in cell.metadata.get("tags", [])
    )
    await run_hook(self.on_cell_error, cell=cell, cell_index=cell_index)
    if not cell_allows_errors:
      raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content)

E nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
E ------------------
E import os
E
E import nvtabular as nvt
E from nvtabular.ops import *
E
E from merlin.models.utils.example_utils import workflow_fit_transform
E from merlin.schema.tags import Tags
E
E import merlin.models.tf as mm
E from merlin.io.dataset import Dataset
E import tensorflow as tf
E ------------------
E
E ---------------------------------------------------------------------------
E ModuleNotFoundError                       Traceback (most recent call last)
E Input In [3], in <cell line: 3>()
E       1 import os
E ----> 3 import nvtabular as nvt
E       4 from nvtabular.ops import *
E       6 from merlin.models.utils.example_utils import workflow_fit_transform
E
E ModuleNotFoundError: No module named 'nvtabular'
E ModuleNotFoundError: No module named 'nvtabular'

../../../.local/lib/python3.8/site-packages/nbclient/client.py:862: CellExecutionError
_______________________ test_example_05_retrieval_models _______________________

tb = <testbook.client.TestbookNotebookClient object at 0x7f167374b070>

@testbook(REPO_ROOT / "examples/05-Retrieval-Model.ipynb", execute=False)
def test_example_05_retrieval_models(tb):
    tb.inject(
        """
        import os
        os.environ["DATA_FOLDER"] = "/tmp/data/"
        os.environ["NUM_ROWS"] = "999"
        """
    )
  tb.execute()

tests/unit/tf/examples/test_05_export_retrieval_model.py:17:


../../../.local/lib/python3.8/site-packages/testbook/client.py:147: in execute
super().execute_cell(cell, index)
../../../.local/lib/python3.8/site-packages/nbclient/util.py:84: in wrapped
return just_run(coro(*args, **kwargs))
../../../.local/lib/python3.8/site-packages/nbclient/util.py:62: in just_run
return loop.run_until_complete(coro)
/usr/lib/python3.8/asyncio/base_events.py:616: in run_until_complete
return future.result()
../../../.local/lib/python3.8/site-packages/nbclient/client.py:965: in async_execute_cell
await self._check_raise_for_error(cell, cell_index, exec_reply)


self = <testbook.client.TestbookNotebookClient object at 0x7f167374b070>
cell = {'cell_type': 'code', 'execution_count': 3, 'id': '92aa8daa', 'metadata': {'pycharm': {'name': '#%%\n'}, 'execution': ...a.tags import Tags\n\nimport merlin.models.tf as mm\nfrom merlin.io.dataset import Dataset\n\nimport tensorflow as tf'}
cell_index = 4
exec_reply = {'buffers': [], 'content': {'ename': 'ModuleNotFoundError', 'engine_info': {'engine_id': -1, 'engine_uuid': '2b18f999-...e, 'engine': '2b18f999-6622-435e-92cc-03237cabf57c', 'started': '2022-06-08T20:45:37.148137Z', 'status': 'error'}, ...}

async def _check_raise_for_error(
    self, cell: NotebookNode, cell_index: int, exec_reply: t.Optional[t.Dict]
) -> None:

    if exec_reply is None:
        return None

    exec_reply_content = exec_reply['content']
    if exec_reply_content['status'] != 'error':
        return None

    cell_allows_errors = (not self.force_raise_errors) and (
        self.allow_errors
        or exec_reply_content.get('ename') in self.allow_error_names
        or "raises-exception" in cell.metadata.get("tags", [])
    )
    await run_hook(self.on_cell_error, cell=cell, cell_index=cell_index)
    if not cell_allows_errors:
      raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content)

E nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
E ------------------
E import os
E
E import nvtabular as nvt
E from nvtabular.ops import *
E from merlin.models.utils.example_utils import workflow_fit_transform
E
E from merlin.schema.tags import Tags
E
E import merlin.models.tf as mm
E from merlin.io.dataset import Dataset
E
E import tensorflow as tf
E ------------------
E
E ---------------------------------------------------------------------------
E ModuleNotFoundError                       Traceback (most recent call last)
E Input In [3], in <cell line: 3>()
E       1 import os
E ----> 3 import nvtabular as nvt
E       4 from nvtabular.ops import *
E       5 from merlin.models.utils.example_utils import workflow_fit_transform
E
E ModuleNotFoundError: No module named 'nvtabular'
E ModuleNotFoundError: No module named 'nvtabular'

../../../.local/lib/python3.8/site-packages/nbclient/client.py:862: CellExecutionError
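Note: the four notebook failures above all follow the same testbook pattern — the decorator loads the example notebook without executing it, `tb.inject` runs setup code in the kernel first (small synthetic data, /tmp paths), and `tb.execute()` or `tb.execute_cell()` then raises as soon as a cell hits the missing `nvtabular` import. A condensed sketch of that pattern; the notebook path below is a placeholder, not a file in this repository:

```python
from testbook import testbook


# "examples/some-notebook.ipynb" is a placeholder path for illustration only.
@testbook("examples/some-notebook.ipynb", execute=False)
def test_example_notebook(tb):
    # Injected source runs in the notebook kernel before any notebook cells,
    # which is how the tests above point the examples at /tmp data and few rows.
    tb.inject(
        """
        import os
        os.environ["DATA_FOLDER"] = "/tmp/data/"
        os.environ["NUM_ROWS"] = "999"
        """
    )
    # Executes every cell; testbook surfaces any cell failure (such as the
    # `import nvtabular` cells here) as a CellExecutionError/TestbookRuntimeError.
    tb.execute()
```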
=============================== warnings summary ===============================
../../../.local/lib/python3.8/site-packages/flatbuffers/compat.py:19
/var/jenkins_home/.local/lib/python3.8/site-packages/flatbuffers/compat.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-8]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-8]
tests/unit/tf/test_dataset.py::test_tf_catname_ordering
tests/unit/tf/test_dataset.py::test_tf_map
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1292: UserWarning: The deep parameter is ignored and is only included for pandas compatibility.
warnings.warn(

tests/unit/tf/blocks/core/test_index.py: 7 warnings
tests/unit/tf/models/test_retrieval.py: 272 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 100 warnings
tests/unit/tf/utils/test_batch.py: 4 warnings
/tmp/autograph_generated_filenol5zie3.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/blocks/core/test_transformations.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/blocks/core/test_transformations.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/blocks/core/test_transformations.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/blocks/core/test_transformations.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:321: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/models/test_ranking.py::test_dlrm_model_multi_task[True]
tests/unit/tf/models/test_ranking.py::test_dlrm_model_multi_task[True]
tests/unit/tf/models/test_ranking.py::test_dlrm_model_multi_task[False]
tests/unit/tf/models/test_ranking.py::test_dlrm_model_multi_task[False]
/var/jenkins_home/.local/lib/python3.8/site-packages/numpy/core/numeric.py:2453: DeprecationWarning: elementwise comparison failed; this will raise an error in the future.
return bool(asarray(a1 == a2).all())

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:210.)
return {key: torch.tensor(value) for key, value in data.items()}

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/implicit/__init__.py:18: could not import 'implicit': No module named 'implicit'
SKIPPED [1] tests/unit/lightfm/__init__.py:18: could not import 'lightfm': No module named 'lightfm'
SKIPPED [1] tests/unit/xgb/__init__.py:19: could not import 'xgboost': No module named 'xgboost'
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
===== 5 failed, 432 passed, 10 skipped, 412 warnings in 1091.63s (0:18:11) =====
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins14555944843225178102.sh
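Note: the contents of `test_res_push.py` are not shown in this log; it is only invoked with the PR's issue-comments API URL and the Jenkins log path. A hypothetical sketch of what such a script might do — posting a tail of the build log back to the pull request through the GitHub REST API; the token handling and comment body format are assumptions:

```python
import sys

import requests  # assumed to be available on the Jenkins node


def push_results(comments_url: str, log_path: str, token: str) -> None:
    """Post the tail of a Jenkins build log as a comment on the pull request."""
    with open(log_path, errors="replace") as log_file:
        log_tail = log_file.read()[-60000:]  # GitHub comments cap out near 65k chars
    response = requests.post(
        comments_url,  # e.g. https://api.github.com/repos/<org>/<repo>/issues/<pr>/comments
        json={"body": "Click to view CI Results\n\n```\n" + log_tail + "\n```"},
        headers={"Authorization": f"token {token}"},
    )
    response.raise_for_status()


if __name__ == "__main__":
    push_results(sys.argv[1], sys.argv[2], token="<GITHUB_TOKEN>")
```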

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #504 of commit 97cb6bece3335f1216b990f33e597d45f3477d3a, no merge conflicts.
Running as SYSTEM
Setting status of 97cb6bece3335f1216b990f33e597d45f3477d3a to PENDING with url https://10.20.13.93:8080/job/merlin_models/429/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/504/*:refs/remotes/origin/pr/504/* # timeout=10
 > git rev-parse 97cb6bece3335f1216b990f33e597d45f3477d3a^{commit} # timeout=10
Checking out Revision 97cb6bece3335f1216b990f33e597d45f3477d3a (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 97cb6bece3335f1216b990f33e597d45f3477d3a # timeout=10
Commit message: "Merge branch 'main' into triage_common"
 > git rev-list --no-walk f549498d83298b4be5f30c0e0a08153e0497c4e2 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins1907725973040108111.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /var/jenkins_home/.local/lib/python3.8/site-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.4.0)
Requirement already satisfied: nbclient>=0.4.0 in /var/jenkins_home/.local/lib/python3.8/site-packages (from testbook) (0.5.13)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.2.2.post1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.6.0)
Requirement already satisfied: jupyter-core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.10.0)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.15.3)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.3)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (21.4.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.7.1)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (23.1.0)
Requirement already satisfied: tornado>=6.0 in /var/jenkins_home/.local/lib/python3.8/site-packages/tornado-6.1-py3.8-linux-x86_64.egg (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.1)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.5.0, xdist-2.5.0, forked-1.4.0, cov-3.0.0
collected 442 items / 3 skipped

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 1%]
tests/unit/datasets/test_ecommerce.py .Fsss [ 2%]
tests/unit/datasets/test_entertainment.py ....sss. [ 4%]
tests/unit/datasets/test_social.py . [ 4%]
tests/unit/datasets/test_synthetic.py ..... [ 5%]
tests/unit/tf/test_core.py ...... [ 7%]
tests/unit/tf/test_dataset.py ............... [ 10%]
tests/unit/tf/test_public_api.py . [ 10%]
tests/unit/tf/blocks/test_cross.py ........... [ 13%]
tests/unit/tf/blocks/test_dlrm.py ........ [ 14%]
tests/unit/tf/blocks/test_interactions.py . [ 15%]
tests/unit/tf/blocks/test_mlp.py ............................. [ 21%]
tests/unit/tf/blocks/core/test_aggregation.py ......... [ 23%]
tests/unit/tf/blocks/core/test_base.py .. [ 24%]
tests/unit/tf/blocks/core/test_combinators.py ... [ 24%]
tests/unit/tf/blocks/core/test_index.py ... [ 25%]
tests/unit/tf/blocks/core/test_masking.py ....... [ 27%]
tests/unit/tf/blocks/core/test_tabular.py ... [ 27%]
tests/unit/tf/blocks/core/test_transformations.py ........... [ 30%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 30%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ........... [ 33%]
tests/unit/tf/examples/test_01_getting_started.py . [ 33%]
tests/unit/tf/examples/test_02_dataschema.py F [ 33%]
tests/unit/tf/examples/test_03_exploring_different_models.py F [ 33%]
tests/unit/tf/examples/test_04_export_ranking_models.py F [ 34%]
tests/unit/tf/examples/test_05_export_retrieval_model.py F [ 34%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 34%]
tests/unit/tf/inputs/test_continuous.py ..... [ 35%]
tests/unit/tf/inputs/test_embedding.py .............. [ 38%]
tests/unit/tf/inputs/test_tabular.py ....... [ 40%]
tests/unit/tf/layers/test_queue.py .............. [ 43%]
tests/unit/tf/losses/test_losses.py ....................... [ 48%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 50%]
tests/unit/tf/metrics/test_metrics_ranking.py ................. [ 53%]
tests/unit/tf/models/test_base.py ..... [ 54%]
tests/unit/tf/models/test_benchmark.py .. [ 55%]
tests/unit/tf/models/test_ranking.py ................ [ 59%]
tests/unit/tf/models/test_retrieval.py ........................... [ 65%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 65%]
tests/unit/tf/prediction_tasks/test_multi_task.py ....... [ 67%]
tests/unit/tf/prediction_tasks/test_next_item.py .................... [ 71%]
tests/unit/tf/prediction_tasks/test_regression.py .. [ 72%]
tests/unit/tf/prediction_tasks/test_sampling.py .................... [ 76%]
tests/unit/tf/utils/test_batch.py .... [ 77%]
tests/unit/torch/test_dataset.py ......... [ 79%]
tests/unit/torch/test_public_api.py . [ 79%]
tests/unit/torch/block/test_base.py .... [ 80%]
tests/unit/torch/block/test_mlp.py . [ 80%]
tests/unit/torch/features/test_continuous.py .. [ 81%]
tests/unit/torch/features/test_embedding.py .............. [ 84%]
tests/unit/torch/features/test_tabular.py .... [ 85%]
tests/unit/torch/model/test_head.py ............ [ 88%]
tests/unit/torch/model/test_model.py .. [ 88%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 90%]
tests/unit/torch/tabular/test_tabular.py ... [ 91%]
tests/unit/torch/tabular/test_transformations.py ....... [ 92%]
tests/unit/utils/test_schema_utils.py ................................ [100%]

=================================== FAILURES ===================================
________________________ test_synthetic_aliccp_raw_data ________________________

tmp_path = PosixPath('/tmp/pytest-of-jenkins/pytest-30/test_synthetic_aliccp_raw_data0')

def test_synthetic_aliccp_raw_data(tmp_path):
    dataset = generate_data("aliccp-raw", 100)

    assert isinstance(dataset, merlin.io.Dataset)
    assert dataset.num_rows == 100
    assert len(dataset.schema) == 25
    assert sorted(dataset.to_ddf().compute().columns) == [
        "click",
        "conversion",
        "item_brand",
        "item_category",
        "item_id",
        "item_intention",
        "item_shop",
        "position",
        "user_age",
        "user_brands",
        "user_categories",
        "user_consumption_1",
        "user_consumption_2",
        "user_gender",
        "user_geography",
        "user_group",
        "user_id",
        "user_intentions",
        "user_is_occupied",
        "user_item_brands",
        "user_item_categories",
        "user_item_intentions",
        "user_item_shops",
        "user_profile",
        "user_shops",
    ]
  ecommerce.transform_aliccp((dataset, dataset), tmp_path)

tests/unit/datasets/test_ecommerce.py:56:


merlin/datasets/ecommerce/aliccp/dataset.py:253: in transform_aliccp
nvt_workflow = nvt_workflow or default_aliccp_transformation(**locals())


add_target_encoding = True
kwargs = {'data': (<merlin.io.dataset.Dataset object at 0x7f351c6bfe50>, <merlin.io.dataset.Dataset object at 0x7f351c6bfe50>),...{}, 'nvt_workflow': None, 'output_path': PosixPath('/tmp/pytest-of-jenkins/pytest-30/test_synthetic_aliccp_raw_data0')}

def default_aliccp_transformation(add_target_encoding=True, **kwargs):
  import nvtabular as nvt

E ModuleNotFoundError: No module named 'nvtabular'

merlin/datasets/ecommerce/aliccp/dataset.py:177: ModuleNotFoundError
_______________________ test_example_02_nvt_integration ________________________

tb = <testbook.client.TestbookNotebookClient object at 0x7f351cbab070>

@testbook(REPO_ROOT / "examples/02-Merlin-Models-and-NVTabular-integration.ipynb", execute=False)
def test_example_02_nvt_integration(tb):
    tb.inject(
        """
        import os
        os.environ["INPUT_DATA_DIR"] = "/tmp/data/"
        from unittest.mock import patch
        from merlin.datasets.synthetic import generate_data
        mock_train, mock_valid = generate_data(
            input="movielens-1m",
            num_rows=1000,
            set_sizes=(0.8, 0.2)
        )
        p1 = patch(
            "merlin.datasets.entertainment.get_movielens",
            return_value=[mock_train, mock_valid]
        )
        p1.start()
        p2 = patch(
            "merlin.core.utils.download_file",
            return_value=[]
        )
        p2.start()
        import numpy as np
        import pandas
        from pathlib import Path
        from merlin.datasets.synthetic import generate_data
        mock_data = generate_data(
            input="movielens-1m-raw-ratings",
            num_rows=1000
        )
        mock_data = mock_data.to_ddf().compute()
        if not isinstance(mock_data, pandas.core.frame.DataFrame):
            mock_data = mock_data.to_pandas()
        input_path = os.environ.get(
            "INPUT_DATA_DIR",
            os.path.expanduser("~/merlin-models-data/movielens/")
        )
        path = Path(input_path + "ml-1m/")
        path.mkdir(parents=True, exist_ok=True)
        np.savetxt(
            input_path + 'ml-1m/ratings.dat',
            mock_data.values,
            delimiter='::',
            fmt='%s',
            encoding='utf-8'
        )
        """
    )
  tb.execute()

tests/unit/tf/examples/test_02_dataschema.py:55:


../../../.local/lib/python3.8/site-packages/testbook/client.py:147: in execute
super().execute_cell(cell, index)
../../../.local/lib/python3.8/site-packages/nbclient/util.py:84: in wrapped
return just_run(coro(*args, **kwargs))
../../../.local/lib/python3.8/site-packages/nbclient/util.py:62: in just_run
return loop.run_until_complete(coro)
/usr/lib/python3.8/asyncio/base_events.py:616: in run_until_complete
return future.result()
../../../.local/lib/python3.8/site-packages/nbclient/client.py:965: in async_execute_cell
await self._check_raise_for_error(cell, cell_index, exec_reply)


self = <testbook.client.TestbookNotebookClient object at 0x7f351cbab070>
cell = {'cell_type': 'code', 'execution_count': 3, 'id': '157fe18f', 'metadata': {'execution': {'iopub.status.busy': '2022-06...ls import download_file\nfrom merlin.datasets.entertainment import get_movielens\nfrom merlin.schema.tags import Tags'}
cell_index = 2
exec_reply = {'buffers': [], 'content': {'ename': 'ModuleNotFoundError', 'engine_info': {'engine_id': -1, 'engine_uuid': '2a94d8b6-...e, 'engine': '2a94d8b6-f478-457d-a542-9df6c6af603b', 'started': '2022-06-08T21:53:24.398834Z', 'status': 'error'}, ...}

async def _check_raise_for_error(
    self, cell: NotebookNode, cell_index: int, exec_reply: t.Optional[t.Dict]
) -> None:

    if exec_reply is None:
        return None

    exec_reply_content = exec_reply['content']
    if exec_reply_content['status'] != 'error':
        return None

    cell_allows_errors = (not self.force_raise_errors) and (
        self.allow_errors
        or exec_reply_content.get('ename') in self.allow_error_names
        or "raises-exception" in cell.metadata.get("tags", [])
    )
    await run_hook(self.on_cell_error, cell=cell, cell_index=cell_index)
    if not cell_allows_errors:
      raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content)

E nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
E ------------------
E import os
E import pandas as pd
E import nvtabular as nvt
E from merlin.models.utils.example_utils import workflow_fit_transform
E import merlin.io
E
E import merlin.models.tf as mm
E
E from nvtabular import ops
E from merlin.core.utils import download_file
E from merlin.datasets.entertainment import get_movielens
E from merlin.schema.tags import Tags
E ------------------
E
E ---------------------------------------------------------------------------
E ModuleNotFoundError                       Traceback (most recent call last)
E Input In [3], in <cell line: 3>()
E       1 import os
E       2 import pandas as pd
E ----> 3 import nvtabular as nvt
E       4 from merlin.models.utils.example_utils import workflow_fit_transform
E       5 import merlin.io
E
E ModuleNotFoundError: No module named 'nvtabular'
E ModuleNotFoundError: No module named 'nvtabular'

../../../.local/lib/python3.8/site-packages/nbclient/client.py:862: CellExecutionError
__________________ test_example_03_exploring_different_models __________________

self = <testbook.client.TestbookNotebookClient object at 0x7f351c5293d0>
cell = {'cell_type': 'markdown', 'id': '56ecea23', 'metadata': {'pycharm': {'name': '#%% md\n'}}, 'source': "Let's start with importing the libraries that we'll use in this notebook."}
kwargs = {}, cell_indexes = [0, 1, 2, 3, 4, 5, ...]
executed_cells = [{'cell_type': 'code', 'execution_count': 2, 'id': '5f49a48e', 'metadata': {'pycharm': {'name': '#%%\n'}, 'execution':...ycharm': {'name': '#%% md\n'}}, 'source': "Let's start with importing the libraries that we'll use in this notebook."}]
idx = 4

def execute_cell(self, cell, **kwargs) -> Union[Dict, List[Dict]]:
    """
    Executes a cell or list of cells
    """
    if isinstance(cell, slice):
        start, stop = self._cell_index(cell.start), self._cell_index(cell.stop)
        if cell.step is not None:
            raise TestbookError('testbook does not support step argument')

        cell = range(start, stop + 1)
    elif isinstance(cell, str) or isinstance(cell, int):
        cell = [cell]

    cell_indexes = cell

    if all(isinstance(x, str) for x in cell):
        cell_indexes = [self._cell_index(tag) for tag in cell]

    executed_cells = []
    for idx in cell_indexes:
        try:
          cell = super().execute_cell(self.nb['cells'][idx], idx, **kwargs)

../../../.local/lib/python3.8/site-packages/testbook/client.py:133:


args = (<testbook.client.TestbookNotebookClient object at 0x7f351c5293d0>, {'cell_type': 'code', 'execution_count': 3, 'outpu...cute_reply': '2022-06-08T21:53:27.292446Z', 'iopub.status.idle': '2022-06-08T21:53:27.293188Z'}}, 'id': '5537e9fc'}, 4)
kwargs = {}

def wrapped(*args, **kwargs):
  return just_run(coro(*args, **kwargs))

../../../.local/lib/python3.8/site-packages/nbclient/util.py:84:


coro = <coroutine object NotebookClient.async_execute_cell at 0x7f348ca57f40>

def just_run(coro: Awaitable) -> Any:
    """Make the coroutine run, even if there is an event loop running (using nest_asyncio)"""
    # original from vaex/asyncio.py
    loop = asyncio._get_running_loop()
    if loop is None:
        had_running_loop = False
        try:
            loop = asyncio.get_event_loop()
        except RuntimeError:
            # we can still get 'There is no current event loop in ...'
            loop = asyncio.new_event_loop()
            asyncio.set_event_loop(loop)
    else:
        had_running_loop = True
    if had_running_loop:
        # if there is a running loop, we patch using nest_asyncio
        # to have reentrant event loops
        check_ipython()
        import nest_asyncio

        nest_asyncio.apply()
        check_patch_tornado()
  return loop.run_until_complete(coro)

../../../.local/lib/python3.8/site-packages/nbclient/util.py:62:


self = <_UnixSelectorEventLoop running=False closed=False debug=False>
future = <Task finished name='Task-104' coro=<NotebookClient.async_execute_cell() done, defined at /var/jenkins_home/.local/lib...x1b[0;31mModuleNotFoundError\x1b[0m: No module named 'nvtabular'\nModuleNotFoundError: No module named 'nvtabular'\n")>

def run_until_complete(self, future):
    """Run until the Future is done.

    If the argument is a coroutine, it is wrapped in a Task.

    WARNING: It would be disastrous to call run_until_complete()
    with the same coroutine twice -- it would wrap it in two
    different Tasks and that can't be good.

    Return the Future's result, or raise its exception.
    """
    self._check_closed()
    self._check_running()

    new_task = not futures.isfuture(future)
    future = tasks.ensure_future(future, loop=self)
    if new_task:
        # An exception is raised if the future didn't complete, so there
        # is no need to log the "destroy pending task" message
        future._log_destroy_pending = False

    future.add_done_callback(_run_until_complete_cb)
    try:
        self.run_forever()
    except:
        if new_task and future.done() and not future.cancelled():
            # The coroutine raised a BaseException. Consume the exception
            # to not log a warning, the caller doesn't have access to the
            # local task.
            future.exception()
        raise
    finally:
        future.remove_done_callback(_run_until_complete_cb)
    if not future.done():
        raise RuntimeError('Event loop stopped before Future completed.')
  return future.result()

/usr/lib/python3.8/asyncio/base_events.py:616:


self = <testbook.client.TestbookNotebookClient object at 0x7f351c5293d0>
cell = {'cell_type': 'code', 'execution_count': 3, 'outputs': [{'output_type': 'error', 'ename': 'ModuleNotFoundError', 'eval....execute_reply': '2022-06-08T21:53:27.292446Z', 'iopub.status.idle': '2022-06-08T21:53:27.293188Z'}}, 'id': '5537e9fc'}
cell_index = 4, execution_count = None, store_history = True

async def async_execute_cell(
    self,
    cell: NotebookNode,
    cell_index: int,
    execution_count: t.Optional[int] = None,
    store_history: bool = True,
) -> NotebookNode:
    """
    Executes a single code cell.

    To execute all cells see :meth:`execute`.

    Parameters
    ----------
    cell : nbformat.NotebookNode
        The cell which is currently being processed.
    cell_index : int
        The position of the cell within the notebook object.
    execution_count : int
        The execution count to be assigned to the cell (default: Use kernel response)
    store_history : bool
        Determines if history should be stored in the kernel (default: False).
        Specific to ipython kernels, which can store command histories.

    Returns
    -------
    output : dict
        The execution output payload (or None for no output).

    Raises
    ------
    CellExecutionError
        If execution failed and should raise an exception, this will be raised
        with defaults about the failure.

    Returns
    -------
    cell : NotebookNode
        The cell which was just processed.
    """
    assert self.kc is not None

    await run_hook(self.on_cell_start, cell=cell, cell_index=cell_index)

    if cell.cell_type != 'code' or not cell.source.strip():
        self.log.debug("Skipping non-executing cell %s", cell_index)
        return cell

    if self.skip_cells_with_tag in cell.metadata.get("tags", []):
        self.log.debug("Skipping tagged cell %s", cell_index)
        return cell

    if self.record_timing:  # clear execution metadata prior to execution
        cell['metadata']['execution'] = {}

    self.log.debug("Executing cell:\n%s", cell.source)

    cell_allows_errors = (not self.force_raise_errors) and (
        self.allow_errors or "raises-exception" in cell.metadata.get("tags", [])
    )

    await run_hook(self.on_cell_execute, cell=cell, cell_index=cell_index)
    parent_msg_id = await ensure_async(
        self.kc.execute(
            cell.source, store_history=store_history, stop_on_error=not cell_allows_errors
        )
    )
    await run_hook(self.on_cell_complete, cell=cell, cell_index=cell_index)
    # We launched a code cell to execute
    self.code_cells_executed += 1
    exec_timeout = self._get_timeout(cell)

    cell.outputs = []
    self.clear_before_next_output = False

    task_poll_kernel_alive = asyncio.ensure_future(self._async_poll_kernel_alive())
    task_poll_output_msg = asyncio.ensure_future(
        self._async_poll_output_msg(parent_msg_id, cell, cell_index)
    )
    self.task_poll_for_reply = asyncio.ensure_future(
        self._async_poll_for_reply(
            parent_msg_id, cell, exec_timeout, task_poll_output_msg, task_poll_kernel_alive
        )
    )
    try:
        exec_reply = await self.task_poll_for_reply
    except asyncio.CancelledError:
        # can only be cancelled by task_poll_kernel_alive when the kernel is dead
        task_poll_output_msg.cancel()
        raise DeadKernelError("Kernel died")
    except Exception as e:
        # Best effort to cancel request if it hasn't been resolved
        try:
            # Check if the task_poll_output is doing the raising for us
            if not isinstance(e, CellControlSignal):
                task_poll_output_msg.cancel()
        finally:
            raise

    if execution_count:
        cell['execution_count'] = execution_count
  await self._check_raise_for_error(cell, cell_index, exec_reply)

../../../.local/lib/python3.8/site-packages/nbclient/client.py:965:


self = <testbook.client.TestbookNotebookClient object at 0x7f351c5293d0>
cell = {'cell_type': 'code', 'execution_count': 3, 'outputs': [{'output_type': 'error', 'ename': 'ModuleNotFoundError', 'eval....execute_reply': '2022-06-08T21:53:27.292446Z', 'iopub.status.idle': '2022-06-08T21:53:27.293188Z'}}, 'id': '5537e9fc'}
cell_index = 4
exec_reply = {'buffers': [], 'content': {'ename': 'ModuleNotFoundError', 'engine_info': {'engine_id': -1, 'engine_uuid': '3ea3dcdd-...e, 'engine': '3ea3dcdd-0162-4a7b-a521-f5f305f0a47c', 'started': '2022-06-08T21:53:26.801840Z', 'status': 'error'}, ...}

async def _check_raise_for_error(
    self, cell: NotebookNode, cell_index: int, exec_reply: t.Optional[t.Dict]
) -> None:

    if exec_reply is None:
        return None

    exec_reply_content = exec_reply['content']
    if exec_reply_content['status'] != 'error':
        return None

    cell_allows_errors = (not self.force_raise_errors) and (
        self.allow_errors
        or exec_reply_content.get('ename') in self.allow_error_names
        or "raises-exception" in cell.metadata.get("tags", [])
    )
    await run_hook(self.on_cell_error, cell=cell, cell_index=cell_index)
    if not cell_allows_errors:
      raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content)

E nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
E ------------------
E import os
E import numpy as np
E
E from nvtabular.loader.tf_utils import configure_tensorflow
E
E configure_tensorflow()
E
E import nvtabular as nvt
E from nvtabular.ops import *
E from merlin.models.utils.example_utils import workflow_fit_transform, save_results
E
E from merlin.schema.tags import Tags
E
E import merlin.models.tf as mm
E from merlin.io.dataset import Dataset
E
E import tensorflow as tf
E ------------------
E
E ---------------------------------------------------------------------------
E ModuleNotFoundError                       Traceback (most recent call last)
E Input In [3], in <cell line: 4>()
E       1 import os
E       2 import numpy as np
E ----> 4 from nvtabular.loader.tf_utils import configure_tensorflow
E       6 configure_tensorflow()
E       8 import nvtabular as nvt
E
E ModuleNotFoundError: No module named 'nvtabular'
E ModuleNotFoundError: No module named 'nvtabular'

../../../.local/lib/python3.8/site-packages/nbclient/client.py:862: CellExecutionError

During handling of the above exception, another exception occurred:

tb = <testbook.client.TestbookNotebookClient object at 0x7f351c5293d0>

@testbook(REPO_ROOT / "examples/03-Exploring-different-models.ipynb", execute=False)
def test_example_03_exploring_different_models(tb):
    tb.inject(
        """
        import os
        os.environ["DATA_FOLDER"] = "/tmp/data/"
        os.environ["NUM_ROWS"] = "999"
        """
    )
    NUM_OF_CELLS = len(tb.cells)
  tb.execute_cell(list(range(0, NUM_OF_CELLS - 5)))

tests/unit/tf/examples/test_03_exploring_different_models.py:18:


self = <testbook.client.TestbookNotebookClient object at 0x7f351c5293d0>
cell = {'cell_type': 'markdown', 'id': '56ecea23', 'metadata': {'pycharm': {'name': '#%% md\n'}}, 'source': "Let's start with importing the libraries that we'll use in this notebook."}
kwargs = {}, cell_indexes = [0, 1, 2, 3, 4, 5, ...]
executed_cells = [{'cell_type': 'code', 'execution_count': 2, 'id': '5f49a48e', 'metadata': {'pycharm': {'name': '#%%\n'}, 'execution':...ycharm': {'name': '#%% md\n'}}, 'source': "Let's start with importing the libraries that we'll use in this notebook."}]
idx = 4

def execute_cell(self, cell, **kwargs) -> Union[Dict, List[Dict]]:
    """
    Executes a cell or list of cells
    """
    if isinstance(cell, slice):
        start, stop = self._cell_index(cell.start), self._cell_index(cell.stop)
        if cell.step is not None:
            raise TestbookError('testbook does not support step argument')

        cell = range(start, stop + 1)
    elif isinstance(cell, str) or isinstance(cell, int):
        cell = [cell]

    cell_indexes = cell

    if all(isinstance(x, str) for x in cell):
        cell_indexes = [self._cell_index(tag) for tag in cell]

    executed_cells = []
    for idx in cell_indexes:
        try:
            cell = super().execute_cell(self.nb['cells'][idx], idx, **kwargs)
        except CellExecutionError as ce:
          raise TestbookRuntimeError(ce.evalue, ce, self._get_error_class(ce.ename))

E testbook.exceptions.TestbookRuntimeError: An error occurred while executing the following cell:
E ------------------
E import os
E import numpy as np
E
E from nvtabular.loader.tf_utils import configure_tensorflow
E
E configure_tensorflow()
E
E import nvtabular as nvt
E from nvtabular.ops import *
E from merlin.models.utils.example_utils import workflow_fit_transform, save_results
E
E from merlin.schema.tags import Tags
E
E import merlin.models.tf as mm
E from merlin.io.dataset import Dataset
E
E import tensorflow as tf
E ------------------
E
E ---------------------------------------------------------------------------
E ModuleNotFoundError                       Traceback (most recent call last)
E Input In [3], in <cell line: 4>()
E       1 import os
E       2 import numpy as np
E ----> 4 from nvtabular.loader.tf_utils import configure_tensorflow
E       6 configure_tensorflow()
E       8 import nvtabular as nvt
E
E ModuleNotFoundError: No module named 'nvtabular'
E ModuleNotFoundError: No module named 'nvtabular'

../../../.local/lib/python3.8/site-packages/testbook/client.py:135: TestbookRuntimeError
___________________ test_example_04_exporting_ranking_models ___________________

tb = <testbook.client.TestbookNotebookClient object at 0x7f351c4cd8e0>

@testbook(REPO_ROOT / "examples/04-Exporting-ranking-models.ipynb", execute=False)
def test_example_04_exporting_ranking_models(tb):
    tb.inject(
        """
        import os
        os.environ["DATA_FOLDER"] = "/tmp/data/"
        os.environ["NUM_ROWS"] = "999"
        """
    )
  tb.execute()

tests/unit/tf/examples/test_04_export_ranking_models.py:17:


../../../.local/lib/python3.8/site-packages/testbook/client.py:147: in execute
super().execute_cell(cell, index)
../../../.local/lib/python3.8/site-packages/nbclient/util.py:84: in wrapped
return just_run(coro(*args, **kwargs))
../../../.local/lib/python3.8/site-packages/nbclient/util.py:62: in just_run
return loop.run_until_complete(coro)
/usr/lib/python3.8/asyncio/base_events.py:616: in run_until_complete
return future.result()
../../../.local/lib/python3.8/site-packages/nbclient/client.py:965: in async_execute_cell
await self._check_raise_for_error(cell, cell_index, exec_reply)


self = <testbook.client.TestbookNotebookClient object at 0x7f351c4cd8e0>
cell = {'cell_type': 'code', 'execution_count': 3, 'id': '37d5020c', 'metadata': {'pycharm': {'name': '#%%\n'}, 'execution': ...ema.tags import Tags\n\nimport merlin.models.tf as mm\nfrom merlin.io.dataset import Dataset\nimport tensorflow as tf'}
cell_index = 4
exec_reply = {'buffers': [], 'content': {'ename': 'ModuleNotFoundError', 'engine_info': {'engine_id': -1, 'engine_uuid': 'f4a25e79-...e, 'engine': 'f4a25e79-6f97-4aeb-94f3-2827893edc4a', 'started': '2022-06-08T21:53:28.505702Z', 'status': 'error'}, ...}

async def _check_raise_for_error(
    self, cell: NotebookNode, cell_index: int, exec_reply: t.Optional[t.Dict]
) -> None:

    if exec_reply is None:
        return None

    exec_reply_content = exec_reply['content']
    if exec_reply_content['status'] != 'error':
        return None

    cell_allows_errors = (not self.force_raise_errors) and (
        self.allow_errors
        or exec_reply_content.get('ename') in self.allow_error_names
        or "raises-exception" in cell.metadata.get("tags", [])
    )
    await run_hook(self.on_cell_error, cell=cell, cell_index=cell_index)
    if not cell_allows_errors:
      raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content)

E nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
E ------------------
E import os
E
E import nvtabular as nvt
E from nvtabular.ops import *
E
E from merlin.models.utils.example_utils import workflow_fit_transform
E from merlin.schema.tags import Tags
E
E import merlin.models.tf as mm
E from merlin.io.dataset import Dataset
E import tensorflow as tf
E ------------------
E
E ---------------------------------------------------------------------------
E ModuleNotFoundError                       Traceback (most recent call last)
E Input In [3], in <cell line: 3>()
E       1 import os
E ----> 3 import nvtabular as nvt
E       4 from nvtabular.ops import *
E       6 from merlin.models.utils.example_utils import workflow_fit_transform
E
E ModuleNotFoundError: No module named 'nvtabular'
E ModuleNotFoundError: No module named 'nvtabular'

../../../.local/lib/python3.8/site-packages/nbclient/client.py:862: CellExecutionError
_______________________ test_example_05_retrieval_models _______________________

tb = <testbook.client.TestbookNotebookClient object at 0x7f351c4d3f40>

@testbook(REPO_ROOT / "examples/05-Retrieval-Model.ipynb", execute=False)
def test_example_05_retrieval_models(tb):
    tb.inject(
        """
        import os
        os.environ["DATA_FOLDER"] = "/tmp/data/"
        os.environ["NUM_ROWS"] = "999"
        """
    )
  tb.execute()

tests/unit/tf/examples/test_05_export_retrieval_model.py:17:


../../../.local/lib/python3.8/site-packages/testbook/client.py:147: in execute
super().execute_cell(cell, index)
../../../.local/lib/python3.8/site-packages/nbclient/util.py:84: in wrapped
return just_run(coro(*args, **kwargs))
../../../.local/lib/python3.8/site-packages/nbclient/util.py:62: in just_run
return loop.run_until_complete(coro)
/usr/lib/python3.8/asyncio/base_events.py:616: in run_until_complete
return future.result()
../../../.local/lib/python3.8/site-packages/nbclient/client.py:965: in async_execute_cell
await self._check_raise_for_error(cell, cell_index, exec_reply)


self = <testbook.client.TestbookNotebookClient object at 0x7f351c4d3f40>
cell = {'cell_type': 'code', 'execution_count': 3, 'id': '92aa8daa', 'metadata': {'pycharm': {'name': '#%%\n'}, 'execution': ...a.tags import Tags\n\nimport merlin.models.tf as mm\nfrom merlin.io.dataset import Dataset\n\nimport tensorflow as tf'}
cell_index = 4
exec_reply = {'buffers': [], 'content': {'ename': 'ModuleNotFoundError', 'engine_info': {'engine_id': -1, 'engine_uuid': '95d1279e-...e, 'engine': '95d1279e-66bf-4a48-af55-330155f8bd63', 'started': '2022-06-08T21:53:30.161104Z', 'status': 'error'}, ...}

async def _check_raise_for_error(
    self, cell: NotebookNode, cell_index: int, exec_reply: t.Optional[t.Dict]
) -> None:

    if exec_reply is None:
        return None

    exec_reply_content = exec_reply['content']
    if exec_reply_content['status'] != 'error':
        return None

    cell_allows_errors = (not self.force_raise_errors) and (
        self.allow_errors
        or exec_reply_content.get('ename') in self.allow_error_names
        or "raises-exception" in cell.metadata.get("tags", [])
    )
    await run_hook(self.on_cell_error, cell=cell, cell_index=cell_index)
    if not cell_allows_errors:
      raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content)

E nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
E ------------------
E import os
E
E import nvtabular as nvt
E from nvtabular.ops import *
E from merlin.models.utils.example_utils import workflow_fit_transform
E
E from merlin.schema.tags import Tags
E
E import merlin.models.tf as mm
E from merlin.io.dataset import Dataset
E
E import tensorflow as tf
E ------------------
E
E ---------------------------------------------------------------------------
E ModuleNotFoundError                       Traceback (most recent call last)
E Input In [3], in <cell line: 3>()
E       1 import os
E ----> 3 import nvtabular as nvt
E       4 from nvtabular.ops import *
E       5 from merlin.models.utils.example_utils import workflow_fit_transform
E
E ModuleNotFoundError: No module named 'nvtabular'
E ModuleNotFoundError: No module named 'nvtabular'

../../../.local/lib/python3.8/site-packages/nbclient/client.py:862: CellExecutionError
=============================== warnings summary ===============================
../../../.local/lib/python3.8/site-packages/flatbuffers/compat.py:19
/var/jenkins_home/.local/lib/python3.8/site-packages/flatbuffers/compat.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-8]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-8]
tests/unit/tf/test_dataset.py::test_tf_catname_ordering
tests/unit/tf/test_dataset.py::test_tf_map
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1292: UserWarning: The deep parameter is ignored and is only included for pandas compatibility.
warnings.warn(

tests/unit/tf/blocks/core/test_index.py: 7 warnings
tests/unit/tf/models/test_retrieval.py: 272 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 100 warnings
tests/unit/tf/utils/test_batch.py: 4 warnings
/tmp/autograph_generated_fileytan1x4c.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/blocks/core/test_transformations.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/blocks/core/test_transformations.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/blocks/core/test_transformations.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/blocks/core/test_transformations.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:321: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/models/test_ranking.py::test_dlrm_model_multi_task[True]
tests/unit/tf/models/test_ranking.py::test_dlrm_model_multi_task[True]
tests/unit/tf/models/test_ranking.py::test_dlrm_model_multi_task[False]
tests/unit/tf/models/test_ranking.py::test_dlrm_model_multi_task[False]
/var/jenkins_home/.local/lib/python3.8/site-packages/numpy/core/numeric.py:2453: DeprecationWarning: elementwise comparison failed; this will raise an error in the future.
return bool(asarray(a1 == a2).all())

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:210.)
return {key: torch.tensor(value) for key, value in data.items()}

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/implicit/__init__.py:18: could not import 'implicit': No module named 'implicit'
SKIPPED [1] tests/unit/lightfm/__init__.py:18: could not import 'lightfm': No module named 'lightfm'
SKIPPED [1] tests/unit/xgb/__init__.py:19: could not import 'xgboost': No module named 'xgboost'
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
===== 5 failed, 430 passed, 10 skipped, 412 warnings in 1061.99s (0:17:41) =====
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins14369845317511426986.sh

@benfred
Member Author

benfred commented Jun 8, 2022

rerun tests

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #504 of commit 97cb6bece3335f1216b990f33e597d45f3477d3a, no merge conflicts.
Running as SYSTEM
Setting status of 97cb6bece3335f1216b990f33e597d45f3477d3a to PENDING with url https://10.20.13.93:8080/job/merlin_models/430/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/504/*:refs/remotes/origin/pr/504/* # timeout=10
 > git rev-parse 97cb6bece3335f1216b990f33e597d45f3477d3a^{commit} # timeout=10
Checking out Revision 97cb6bece3335f1216b990f33e597d45f3477d3a (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 97cb6bece3335f1216b990f33e597d45f3477d3a # timeout=10
Commit message: "Merge branch 'main' into triage_common"
 > git rev-list --no-walk 97cb6bece3335f1216b990f33e597d45f3477d3a # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins7767165485710729293.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /var/jenkins_home/.local/lib/python3.8/site-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.4.0)
Requirement already satisfied: nbclient>=0.4.0 in /var/jenkins_home/.local/lib/python3.8/site-packages (from testbook) (0.5.13)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.2.2.post1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.6.0)
Requirement already satisfied: jupyter-core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.10.0)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.15.3)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.3)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (21.4.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.7.1)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (23.1.0)
Requirement already satisfied: tornado>=6.0 in /var/jenkins_home/.local/lib/python3.8/site-packages/tornado-6.1-py3.8-linux-x86_64.egg (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.1)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.0)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.5.0, xdist-2.5.0, forked-1.4.0, cov-3.0.0
collected 442 items / 3 skipped

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 1%]
tests/unit/datasets/test_ecommerce.py ..sss [ 2%]
tests/unit/datasets/test_entertainment.py ....sss. [ 4%]
tests/unit/datasets/test_social.py . [ 4%]
tests/unit/datasets/test_synthetic.py ..... [ 5%]
tests/unit/tf/test_core.py ...... [ 7%]
tests/unit/tf/test_dataset.py ............... [ 10%]
tests/unit/tf/test_public_api.py . [ 10%]
tests/unit/tf/blocks/test_cross.py ........... [ 13%]
tests/unit/tf/blocks/test_dlrm.py ........ [ 14%]
tests/unit/tf/blocks/test_interactions.py . [ 15%]
tests/unit/tf/blocks/test_mlp.py ............................. [ 21%]
tests/unit/tf/blocks/core/test_aggregation.py ......... [ 23%]
tests/unit/tf/blocks/core/test_base.py .. [ 24%]
tests/unit/tf/blocks/core/test_combinators.py ... [ 24%]
tests/unit/tf/blocks/core/test_index.py ... [ 25%]
tests/unit/tf/blocks/core/test_masking.py ....... [ 27%]
tests/unit/tf/blocks/core/test_tabular.py ... [ 27%]
tests/unit/tf/blocks/core/test_transformations.py ........... [ 30%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 30%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ........... [ 33%]
tests/unit/tf/examples/test_01_getting_started.py . [ 33%]
tests/unit/tf/examples/test_02_dataschema.py . [ 33%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 33%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 34%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 34%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 34%]
tests/unit/tf/inputs/test_continuous.py ..... [ 35%]
tests/unit/tf/inputs/test_embedding.py .............. [ 38%]
tests/unit/tf/inputs/test_tabular.py ....... [ 40%]
tests/unit/tf/layers/test_queue.py .............. [ 43%]
tests/unit/tf/losses/test_losses.py ....................... [ 48%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 50%]
tests/unit/tf/metrics/test_metrics_ranking.py ................. [ 53%]
tests/unit/tf/models/test_base.py ..... [ 54%]
tests/unit/tf/models/test_benchmark.py .. [ 55%]
tests/unit/tf/models/test_ranking.py ................ [ 59%]
tests/unit/tf/models/test_retrieval.py ........................... [ 65%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 65%]
tests/unit/tf/prediction_tasks/test_multi_task.py ....... [ 67%]
tests/unit/tf/prediction_tasks/test_next_item.py .................... [ 71%]
tests/unit/tf/prediction_tasks/test_regression.py .. [ 72%]
tests/unit/tf/prediction_tasks/test_sampling.py .................... [ 76%]
tests/unit/tf/utils/test_batch.py .... [ 77%]
tests/unit/torch/test_dataset.py ......... [ 79%]
tests/unit/torch/test_public_api.py . [ 79%]
tests/unit/torch/block/test_base.py .... [ 80%]
tests/unit/torch/block/test_mlp.py . [ 80%]
tests/unit/torch/features/test_continuous.py .. [ 81%]
tests/unit/torch/features/test_embedding.py .............. [ 84%]
tests/unit/torch/features/test_tabular.py .... [ 85%]
tests/unit/torch/model/test_head.py ............ [ 88%]
tests/unit/torch/model/test_model.py .. [ 88%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 90%]
tests/unit/torch/tabular/test_tabular.py ... [ 91%]
tests/unit/torch/tabular/test_transformations.py ....... [ 92%]
tests/unit/utils/test_schema_utils.py ................................ [100%]

=============================== warnings summary ===============================
../../../.local/lib/python3.8/site-packages/flatbuffers/compat.py:19
/var/jenkins_home/.local/lib/python3.8/site-packages/flatbuffers/compat.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_ecommerce.py::test_synthetic_aliccp_raw_data
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-8]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-8]
tests/unit/tf/test_dataset.py::test_tf_catname_ordering
tests/unit/tf/test_dataset.py::test_tf_map
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1292: UserWarning: The deep parameter is ignored and is only included for pandas compatibility.
warnings.warn(

tests/unit/tf/blocks/core/test_index.py: 7 warnings
tests/unit/tf/models/test_retrieval.py: 272 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 100 warnings
tests/unit/tf/utils/test_batch.py: 4 warnings
/tmp/autograph_generated_filecbna7fg1.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/blocks/core/test_transformations.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/blocks/core/test_transformations.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/blocks/core/test_transformations.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/blocks/core/test_transformations.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:321: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/models/test_ranking.py::test_dlrm_model_multi_task[True]
tests/unit/tf/models/test_ranking.py::test_dlrm_model_multi_task[True]
tests/unit/tf/models/test_ranking.py::test_dlrm_model_multi_task[False]
tests/unit/tf/models/test_ranking.py::test_dlrm_model_multi_task[False]
/var/jenkins_home/.local/lib/python3.8/site-packages/numpy/core/numeric.py:2453: DeprecationWarning: elementwise comparison failed; this will raise an error in the future.
return bool(asarray(a1 == a2).all())

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:210.)
return {key: torch.tensor(value) for key, value in data.items()}

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/implicit/__init__.py:18: could not import 'implicit': No module named 'implicit'
SKIPPED [1] tests/unit/lightfm/__init__.py:18: could not import 'lightfm': No module named 'lightfm'
SKIPPED [1] tests/unit/xgb/__init__.py:19: could not import 'xgboost': No module named 'xgboost'
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
========== 435 passed, 10 skipped, 413 warnings in 1160.47s (0:19:20) ==========
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins10778692321686748308.sh

@benfred benfred merged commit cc6dbf2 into main Jun 8, 2022
@benfred benfred added the ci label Jun 8, 2022
@benfred benfred deleted the triage_common branch June 8, 2022 22:39
mengyao00 pushed a commit that referenced this pull request Jul 15, 2022
Use a reusable workflow for triaging issues that will be shared across repos,
rather than have a bunch of boilerplate defined here.
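
For reference, the change amounts to replacing the repo-local triage job with a thin caller that delegates to a shared, reusable workflow. The sketch below is illustrative only, assuming a hypothetical shared workflow published in an org-level repository; the actual repository, file name, and trigger set are not shown in this conversation.

name: triage

on:
  issues:
    types: [opened]
  pull_request:
    types: [opened]

jobs:
  triage:
    # A reusable workflow is referenced with `uses:` at the job level.
    # The path below is a placeholder, not the path used by this PR.
    uses: NVIDIA-Merlin/.github/.github/workflows/triage.yaml@main

With a caller this small in each repository, the trigger logic and project wiring live in one place and can be updated for every repo at once.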