Warn if duplicate *pool instances are being created
#904
Conversation
Based on #900. Only the last 4 commits are relevant.
llucax
In general the last 4 commits look OK, but I wonder if it wouldn't be safer to just fail the request if it matches an existing pool, instead of printing a warning and proceeding with a behavior that can potentially cause issues. We could just raise a PoolAlreadyExist (inheriting from KeyError, for example), and maybe add an extra method to get an existing instance (that will fail if the instance doesn't exist yet), in case there are situations where users want to use the same pool but it is difficult to save and share the pool object.
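The fail-fast alternative suggested here could look something like the sketch below. Everything in it is hypothetical (the `_Pool` stand-in class, the `new_pool`/`existing_pool` function names, the key structure); only the `PoolAlreadyExist`-inherits-from-`KeyError` idea comes from the comment itself.

```python
class PoolAlreadyExist(KeyError):
    """Raised when a pool with the same parameters was already created."""


class _Pool:
    """Illustrative stand-in for a *pool instance."""

    def __init__(self, priority: int) -> None:
        self.priority = priority


_pools: dict[tuple[str, int], _Pool] = {}


def new_pool(pool_type: str, priority: int) -> _Pool:
    """Create a pool, failing fast if an identical one already exists."""
    key = (pool_type, priority)
    if key in _pools:
        raise PoolAlreadyExist(
            f"A {pool_type} with priority {priority} already exists."
        )
    pool = _Pool(priority)
    _pools[key] = pool
    return pool


def existing_pool(pool_type: str, priority: int) -> _Pool:
    """Return an already-created pool; raises KeyError if none exists."""
    return _pools[(pool_type, priority)]
```

Since `PoolAlreadyExist` subclasses `KeyError`, callers that already guard against lookup failures handle both cases uniformly.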
I don't think this is a critical issue that we need to fail on, because the app would continue to function. I think we can fail for trivial issues that happen at startup, but once the app is running, we should try to avoid crashes. Maybe we can look into static analysis, if there are multiple calls to
Mmm, still not fully convinced, from this text:
For me, unexpected behavior is bad. If using the same args is a bug (maybe one that is sometimes unproblematic), we should probably error. For me it is a bug if there is no valid case where you actually want to call the function twice with the exact same arguments. If you say there is a valid case where a user wants to do this, then I agree a warning is enough. Just avoiding a crash is not a good enough reason not to raise an exception IMHO; programmers can make all sorts of mistakes that will lead to a crash, and those crashes should be properly handled when possible (like with the actor auto-restart). Also, I guess getting a pool is something you probably do at the start of an application/actor, so the failure should occur pretty soon. This also makes me think that maybe we don't want to provide a default priority, if getting the same pool several times with the same priority is problematic.
No, they might try to make a second pool only when they try to send a charge command, for example, which could be days after deployment. The power manager cleans up old requests after 1 min, so even if they create a new pool for each request, it just means that we'd converge to the target power more slowly, rather than crashing on Friday 11pm because we try to handle the weekend rush. And if multiple actors use the same batteries with the same priorities, it probably doesn't matter to them, but maybe we can make it even harder to ignore by making the priority a required parameter when creating *pools. If priority is required, they will have to think about it and deal with it. I think that much should be enough.
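Making priority required could be as simple as a keyword-only parameter with no default, so callers must state one explicitly. A minimal sketch, assuming a simplified `battery_pool()` factory and a stand-in `BatteryPool` class (the real SDK signatures differ):

```python
class BatteryPool:
    """Illustrative stand-in; the real SDK class is more involved."""

    def __init__(self, priority: int) -> None:
        self.priority = priority


def battery_pool(*, priority: int) -> BatteryPool:
    """Create a battery pool.

    `priority` is keyword-only and has no default, so every caller is
    forced to pick a value consciously rather than inherit a silent one.
    """
    return BatteryPool(priority)
```

With this shape, `battery_pool(priority=3)` works, while both `battery_pool()` and the positional `battery_pool(3)` raise a `TypeError` at the call site.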
Earlier they were called just `*pools`, which could be confusing. Signed-off-by: Sahas Subramanian <[email protected]>
Signed-off-by: Sahas Subramanian <[email protected]>
If this is not done, the power manager will have to make decisions on how to prioritize requests from such pools. Signed-off-by: Sahas Subramanian <[email protected]>
e567d19 to 543efe0
Maybe qualifying the documentation makes it better? (I've made this change already)
This clarifies that `*pool` instances should be reused within actors and should have distinct priorities when created from different actors. Signed-off-by: Sahas Subramanian <[email protected]>
Signed-off-by: Sahas Subramanian <[email protected]>
OK, I created a separate discussion, because it seems like it is not a trivial topic, so we shouldn't block this PR which is already an improvement on the status quo:
I guess it is a bit more clear, but maybe it would be better to clarify that the random priority is only among the conflicting requests, right? It's not really a random priority in the sense that it could be a proper different priority value. I asked ChatGPT for help and, given some instructions, it came up with:
Not for this PR, as it is probably more work, but maybe another approach would be to have a policy based on when the requests were made, e.g. the latest request always has a higher priority than the previous conflicting requests.
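The latest-wins idea could be resolved with a simple tie-breaking rule: order first by priority, then by creation order among equal priorities. A minimal sketch; the `Request` fields and `winning_request` helper are hypothetical, not part of the SDK:

```python
from dataclasses import dataclass


@dataclass
class Request:
    """A hypothetical power request; fields are purely illustrative."""

    actor: str
    power: float  # requested power in watts
    priority: int
    sequence: int  # monotonically increasing creation order


def winning_request(conflicting: list[Request]) -> Request:
    """Pick a winner among conflicting requests.

    Higher priority wins outright; among equal priorities, the most
    recently made request (largest sequence number) wins.
    """
    return max(conflicting, key=lambda r: (r.priority, r.sequence))
```

Inverting the `sequence` term in the key would instead make the latest request lose, which is the "(or loses, whichever makes more sense)" variant discussed below.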
llucax
The previous comment is just a soft suggestion; reading the text again in full, it seems clear enough for an approval.
But this would lead to races between the actors, which is what the PowerManager hopes to avoid.
What kind of races? If many (conflicting) pools are created in a burst, then yes, it's sort of random as it is now, but if they are created more sequentially, it will just mean that the latest always wins (or loses, whichever makes more sense).
They will keep sending conflicting requests repeatedly, maybe even continuously, because they see that they both have bounds, but the actual target power is somewhere else, and they need to correct it. And we'd end up oscillating between charging and discharging every second. (I'm actually talking about the case where the pools are created with the same priority but by different actors. I guess you were thinking of the same actor?)
OK, I see now, the issue is not when the pools are created but when they are actually used 👍
This is because such usage could make the power manager take longer to converge on the desired power.
This PR also improves the docs for the `microgrid.*pool()` functions.

Closes #902