Can compute parameter derivatives #143
Conversation
We really need to fix the CI...
```cpp
void TorchForce::addEnergyParameterDerivative(const string& name) {
    for (int i = 0; i < globalParameters.size(); i++)
        if (name == globalParameters[i].name) {
            energyParameterDerivatives.push_back(i);
```
I have seen this in other parts of OpenMM: what happens if I call this function twice with the same name? Is that handled somewhere before or after this?
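For reference, a minimal sketch of one way such a guard could look (the duplicate check and the exception for an unknown name are illustrative assumptions, not the PR's code):

```cpp
#include <algorithm>
#include <string>
#include <vector>
#include "openmm/OpenMMException.h"

// Hypothetical variant: ignore a repeated registration of the same parameter
// and reject names that do not match any global parameter.
void TorchForce::addEnergyParameterDerivative(const std::string& name) {
    for (int i = 0; i < (int) globalParameters.size(); i++)
        if (name == globalParameters[i].name) {
            // Assumption: registering the same name twice is a no-op.
            bool alreadyAdded = std::find(energyParameterDerivatives.begin(),
                    energyParameterDerivatives.end(), i) != energyParameterDerivatives.end();
            if (!alreadyAdded)
                energyParameterDerivatives.push_back(i);
            return;
        }
    // Assumption: an unknown name is an error rather than being silently ignored.
    throw OpenMM::OpenMMException("TorchForce: unknown global parameter '" + name + "'");
}
```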
```diff
-                        torch::Tensor& forceTensor) {
+                        torch::Tensor& forceTensor, map<string, torch::Tensor>& derivInputs) {
+    vector<torch::Tensor> gradInputs;
     if (!outputsForces)
```
This should be `!outputsForces && includeForces`, right?
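For clarity, a sketch of the condition the comment suggests, in the style of the snippet above (the surrounding variable names, e.g. `posTensor`, are assumptions inferred from this diff, not the PR's exact code):

```cpp
// Ask autograd for position gradients only when the caller wants forces
// (includeForces) and the model does not already output them (outputsForces).
vector<torch::Tensor> gradInputs;
if (!outputsForces && includeForces)
    gradInputs.push_back(posTensor);
// Parameter derivatives are requested regardless of whether forces are needed.
for (auto& deriv : derivInputs)
    gradInputs.push_back(deriv.second);
```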
There should also be a Python side test. Take this one if you'd like:

BTW, this is still a problem. I do not think OpenMM-Torch can do anything about it, but perhaps there is a way to detect it and provide a useful message?
Thanks for the comments. It turned out that what I had written didn't work with CUDA graphs. I restructured it to handle the input tensors in a different way and made the test case run both with and without graphs. Can you see if it looks better now?
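For context, a common way to make libtorch code work under CUDA graphs is to give every input tensor a fixed device address: allocate it once, capture the graph, and refresh it in place with `copy_()` before each replay. A minimal sketch of that pattern (not the PR's actual restructuring; the module and tensor names are illustrative):

```cpp
#include <torch/script.h>
#include <ATen/cuda/CUDAGraph.h>
#include <c10/cuda/CUDAGuard.h>
#include <c10/cuda/CUDAStream.h>

// Sketch, assuming a TorchScript model mapping positions to a scalar energy.
// CUDA graphs replay fixed kernel arguments, so the input tensor must keep a
// stable address: it is updated with copy_() instead of being reallocated.
void runWithGraph(torch::jit::Module& module, const torch::Tensor& newPositions) {
    torch::Tensor posTensor = torch::zeros_like(newPositions); // persistent input

    at::cuda::CUDAGraph graph;
    {
        // Graph capture must run on a non-default stream.
        c10::cuda::CUDAStreamGuard guard(c10::cuda::getStreamFromPool());
        graph.capture_begin();
        torch::Tensor energy = module.forward({posTensor}).toTensor();
        graph.capture_end();
    }

    // Each step: copy fresh data into the captured tensor, then replay.
    posTensor.copy_(newPositions);
    graph.replay();
}
```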
Is this ok to merge? Getting CI working is its own major project. I'm working on that in another PR.
Implements #141.