Support arbitrary X in data #100

Conversation
Just to link this to aleximmer/Laplace#144
Pull Request Test Coverage Report for Build 8791845160

Warning: This coverage report may be inaccurate. This pull request's base commit is no longer the HEAD commit of its target branch. This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.

Details

💛 - Coveralls
Here's an example use case: https://curvlinops--100.org.readthedocs.build/en/100/basic_usage/example_huggingface.html

The load for the users is minimal and the choice of
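Not part of the PR itself, but here is a minimal stdlib-only sketch of what a batch-size helper for dict-valued batches (HuggingFace-style inputs where X is not a torch.Tensor) could look like. The name batch_size_fn follows this discussion; the actual curvlinops signature may differ, and the duck-typing on shape/len is an illustrative assumption:

```python
def batch_size_fn(X) -> int:
    """Hypothetical helper: infer the batch size of a dict-valued batch.

    Assumes all values share the same leading (batch) dimension; works
    for torch.Tensors (via .shape) and plain sequences (via len()).
    """
    value = next(iter(X.values()))
    return value.shape[0] if hasattr(value, "shape") else len(value)


# Example with a stand-in HuggingFace-style batch of 8 sequences:
batch = {"input_ids": [[0] * 128 for _ in range(8)]}
print(batch_size_fn(batch))  # 8
```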
Did a first pass and added some minor suggestions, mainly missing documentation.

One main suggestion is to avoid having to duplicate the test functions: simply append the test cases, using a dict, to CASES_NO_DEVICE, and instead make the cases fixture return the batch_size_fn.
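A rough stdlib-only sketch of this refactor, under the assumption that each test case is a dict carrying an optional batch_size_fn (None for plain-tensor data); the names CASES_NO_DEVICE and the dict keys mirror this discussion and may differ from the real test suite:

```python
# Hypothetical refactor: one list of case dicts instead of duplicated
# test functions. Dict-valued-batch cases carry a batch_size_fn,
# plain-tensor cases carry None.


def dict_batch_size_fn(X):
    # Batch size of a dict-valued batch (illustrative key name).
    return len(X["input_ids"])


CASES_NO_DEVICE = [
    {"data": "tensor_loader", "batch_size_fn": None},
    {"data": "dict_loader", "batch_size_fn": dict_batch_size_fn},
]


def case_batch_size_fn(case):
    # What the `cases` fixture would return alongside model and data.
    return case["batch_size_fn"]


print(case_batch_size_fn(CASES_NO_DEVICE[0]))  # None
```

The existing test functions then stay unchanged and simply pass the returned batch_size_fn (possibly None) through to the linear operator.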
While for many settings the model/data loader still has to be adjusted, this seems like a nice usability improvement!

See my comments; formatting and linting have to be fixed as well.

One more thing: the type hints for the data also have to be modified in all files that don't require other changes, e.g.
isort your imports, so you don't have to. (isort VERSION 5.13.2)

Nothing to do: no files or paths have been passed in! Try one of the following:

`isort .` - sort all Python files, starting from the current directory, recursively.
`isort . --interactive` - Do the same, but ask before making any changes.
`isort . --check --diff` - Check to see if imports are correctly sorted within this project.
`isort --help` - In-depth information about isort's available command-line options.

Visit https://pycqa.github.io/isort/ for complete information about how to use isort.
Mostly nits. Please make sure you try running make test on a compute infrastructure with access to a GPU, to make sure there are no device-related issues, as GH actions can only check on CPU (I'll do it anyways before merging).
@f-dangel I'm done. Currently running the tests in a GPU-enabled env. It takes a while, but you can continue reviewing. PEP8 and other linters' issues should also be resolved.
Alright, confirmed that all tests passed on GPU!
@runame I'm done with my second pass, could you take a quick second look and merge if everything looks good?
LGTM!
Closes #86
@f-dangel @runame please give feedback and answer the question below.
Assumption: there is an additional parameter in _base._LinearOperator, which must be non-None whenever X is not a torch.Tensor.

This also fits well with Huggingface, although one must replace HF's default dataloader so that it outputs (data, data['labels']) instead of just data. Let me know if this should be considered.

Code for testing the functionality:
https://gist.github.com/wiseodd/426061afae24199446e60bfabc00e26e
I use laplace-torch there (two birds, one stone), so just remove it if you don't want to install it. If you want to test laplace-torch, install via
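The dataloader adjustment mentioned above (yielding (data, data['labels']) instead of just data) can be sketched with a small wrapper; this is an illustrative, stdlib-only assumption, not code from the PR, and the class name TupleDataLoader is made up:

```python
class TupleDataLoader:
    """Hypothetical wrapper: turn a HuggingFace-style dataloader that
    yields dict batches into one that yields (data, data["labels"])."""

    def __init__(self, dataloader):
        self._dataloader = dataloader

    def __iter__(self):
        for data in self._dataloader:
            # Keep the full dict as X; expose the labels separately as y.
            yield data, data["labels"]

    def __len__(self):
        return len(self._dataloader)


# Example with a stand-in "dataloader" (a list of dict batches):
batches = [{"input_ids": [1, 2, 3], "labels": [0, 1, 0]}]
X, y = next(iter(TupleDataLoader(batches)))
print(y)  # [0, 1, 0]
```

A real torch.utils.data.DataLoader could be wrapped the same way, since the wrapper only relies on iteration and len().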