Conversation

@blisc commented on Oct 25, 2019

Adds initial support for caching inference results so that we don't repeat the forward pass multiple times. Will improve the efficiency of #67.
Adds two options to neural_factory.infer():

  • cache: cache all tensors produced during this forward pass
  • use_cache: reuse the cached tensors during this forward pass instead of recomputing them

Currently only implemented for single GPU.
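
As a usage illustration, here is a minimal sketch of the two options, written against the NeMo v0.x tutorial modules (RealFunctionDataLayer, TaylorNet, MSELoss). The graph itself is illustrative and not from this PR; only the cache/use_cache keywords are what this change adds:

```python
# Hedged sketch: assumes NeMo v0.x and its tutorial modules; only the
# cache/use_cache arguments to infer() come from this PR.
import nemo

nf = nemo.core.NeuralModuleFactory()

# Build a small DAG in which two output tensors share one forward pass.
dl = nemo.tutorials.RealFunctionDataLayer(n=10000, batch_size=128)
fx = nemo.tutorials.TaylorNet(dim=4)
mse = nemo.tutorials.MSELoss()

x, y = dl()
p = fx(x=x)
loss = mse(predictions=p, target=y)

# First call: run the forward pass and cache every tensor it computes.
p_values = nf.infer(tensors=[p], cache=True)

# Second call: loss depends on the same upstream tensors, so pull them
# from the cache instead of recomputing the shared forward pass.
loss_values = nf.infer(tensors=[loss], use_cache=True)
```

Since use_cache reads already-computed tensors from the cache, back-to-back infer() calls over the same data pay for the shared forward pass only once.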

@blisc changed the title WIP: Add a cache option for infer mode → [WIP] Add a cache option for infer mode on Oct 25, 2019
@blisc changed the title [WIP] Add a cache option for infer mode → Add a cache option for infer mode on Oct 25, 2019
@blisc closed this on Oct 25, 2019
@blisc deleted the infer_cache branch on November 8, 2019 at 21:48
blisc added a commit that referenced this pull request May 5, 2025
* add update config to infer script

Signed-off-by: Jason <[email protected]>

* Update infer_and_evaluate.py

---------

Signed-off-by: Jason <[email protected]>