
Conversation

@shsms
Contributor

@shsms shsms commented Mar 27, 2024

This PR logs a warning when several `microgrid.*pool()` calls are made with the same parameters, because such usage could make the power manager take longer to converge on the desired power.

This PR also improves the docs for the microgrid.*pool() functions.

Closes #902

@shsms shsms requested a review from a team as a code owner March 27, 2024 15:19
@github-actions github-actions bot added part:docs Affects the documentation part:tests Affects the unit, integration and performance (benchmarks) tests part:data-pipeline Affects the data pipeline part:actor Affects an actor or the actors' utilities (decorator, etc.) part:microgrid Affects the interactions with the microgrid labels Mar 27, 2024
@shsms
Contributor Author

shsms commented Mar 27, 2024

Based on #900. Only the last 4 commits are relevant.

Contributor

@llucax llucax left a comment

In general the last 4 commits look OK, but I wonder if it wouldn't be safer to just fail the request if it matches an existing pool, instead of printing a warning and proceeding with a behavior that can potentially cause issues. We could just raise a PoolAlreadyExist (inheriting from KeyError, for example), and maybe add an extra method to get an existing instance (which would fail if the instance doesn't exist yet), in case there are situations where users want to use the same pool but it is difficult to save and share the pool object.
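
For illustration, a minimal sketch of that idea (all names here, including PoolAlreadyExist, the registry, and the function signatures, are hypothetical, not the SDK's actual API):

    class PoolAlreadyExist(KeyError):
        """Raised when a pool with the same components and priority already exists."""

    # Hypothetical registry of pools already created in this process.
    _pools: dict[tuple[frozenset[int], int], object] = {}

    def battery_pool(component_ids: set[int], priority: int) -> object:
        """Create a new pool, failing if an identical one already exists."""
        key = (frozenset(component_ids), priority)
        if key in _pools:
            raise PoolAlreadyExist(f"A pool for {key} already exists.")
        pool = object()  # stand-in for the real pool construction
        _pools[key] = pool
        return pool

    def existing_battery_pool(component_ids: set[int], priority: int) -> object:
        """Return a previously created pool; raises KeyError if there is none."""
        return _pools[(frozenset(component_ids), priority)]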

@shsms
Contributor Author

shsms commented Mar 28, 2024

I wonder if it wouldn't be safer to just fail the request if it matches an existing pool instead of printing a warning and proceed with a behavior that can potentially cause issues.

I don't think this is an issue critical enough to fail on, because everything would continue to function, and they're calling `propose_power` anyway.

I think we can fail for trivial issues that happen at startup, but once the app is running, we should try to avoid crashes. Maybe we can look into static analysis to catch things like multiple calls to `*pool` in a class, etc.
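
For example, a rough sketch of such a check using the standard library's ast module (the rule and its output format are invented here, not an existing lint):

    import ast

    POOL_FUNCS = {"battery_pool", "ev_charger_pool", "pv_pool"}

    class PoolCallChecker(ast.NodeVisitor):
        """Flag classes whose bodies contain more than one *pool() call."""

        def visit_ClassDef(self, node: ast.ClassDef) -> None:
            calls = [
                n
                for n in ast.walk(node)
                if isinstance(n, ast.Call)
                and isinstance(n.func, ast.Attribute)
                and n.func.attr in POOL_FUNCS
            ]
            if len(calls) > 1:
                print(f"{node.name}: {len(calls)} *pool() calls; consider reusing one instance")
            self.generic_visit(node)

    # Usage: PoolCallChecker().visit(ast.parse(source_code))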

@llucax
Contributor

llucax commented Mar 28, 2024

Mmm, still not fully convinced. From this text:

If not, a random prioritization will be imposed on them to resolve conflicts, which may lead to unexpected behavior.

For me, unexpected behavior is bad. If using the same args is a bug (maybe one that is sometimes unproblematic), we should probably error. For me it is a bug if there is no valid case where you actually want to call the function twice with the exact same arguments. If you say there is a valid case where a user wants to do this, then I agree a warning is enough. Just avoiding a crash is not a good enough reason not to raise an exception, IMHO; programmers can make all sorts of mistakes that lead to crashes, and those crashes should be properly handled when possible (like with the actor auto-restart).

Also I guess getting a pool is something you probably do at the start of an application/actor, so the failure should occur pretty soon.

This also makes me think that maybe we don't want to provide a default priority if getting the same pool several times with the same priority is problematic.

@shsms
Contributor Author

shsms commented Mar 28, 2024

Also I guess getting a pool is something you probably do at the start of an application/actor, so the failure should occur pretty soon.

No, they might create a second pool only when they try to send a charge command, for example, which could be days after deployment.

The power manager cleans up old requests after 1 minute, so even if they create a new pool for each request, it just means we'd converge on the target power more slowly, rather than crashing at 11 pm on a Friday when we're trying to handle the weekend rush.

And if multiple actors use the same batteries with the same priorities, it probably doesn't matter to them. But maybe we can make it even harder to ignore by making the priority a required parameter when creating `*pool`s.

If priority is required, they will have to think about it and deal with it. I think that much should be enough.
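
In code terms, the change would be as small as dropping the default (the signatures here are illustrative, not the SDK's exact ones):

    # With a default, callers can ignore the priority entirely:
    def battery_pool(priority: int = 5) -> object: ...

    # Without one, every caller has to pick a priority consciously:
    def battery_pool(*, priority: int) -> object: ...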

shsms added 3 commits April 3, 2024 12:34
Earlier they were called just `*pools`, which could be confusing.

Signed-off-by: Sahas Subramanian <[email protected]>
If this is not done, the power manager will have to make decisions on
how to prioritize requests from such pools.

Signed-off-by: Sahas Subramanian <[email protected]>
@shsms shsms force-pushed the warn-on-dup branch 2 times, most recently from e567d19 to 543efe0 on April 3, 2024 at 10:40
@shsms
Contributor Author

shsms commented Apr 3, 2024

Maybe qualifying the documentation makes it better? (I've made this change already)

    ... If not,
    a random prioritization will be imposed on them to resolve conflicts, which may
    lead to unexpected behavior like longer duration to converge on the desired
    power.

shsms added 2 commits April 3, 2024 12:47
This clarifies that `*pool` instances should be reused within actors
and should have distinct priorities when created from different
actors.

Signed-off-by: Sahas Subramanian <[email protected]>
Signed-off-by: Sahas Subramanian <[email protected]>
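
The pattern those commits document, as a minimal sketch (the battery_pool function below is a placeholder, not the SDK's real API, and the actor names and priorities are invented):

    def battery_pool(*, priority: int) -> object:
        """Placeholder standing in for the SDK's pool-creation function."""
        return object()

    class ChargingActor:
        """Creates its pool once in __init__ and reuses it for every proposal."""

        def __init__(self) -> None:
            self._batteries = battery_pool(priority=2)

    class GridSupportActor:
        """Works on the same batteries, but with a distinct priority."""

        def __init__(self) -> None:
            self._batteries = battery_pool(priority=5)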
@llucax
Contributor

llucax commented Apr 3, 2024

OK, I created a separate discussion, because it seems like this is not a trivial topic, so we shouldn't block this PR, which is already an improvement on the status quo:

Maybe qualifying the documentation makes it better? (I've made this change already)

I guess it is a bit clearer, but maybe it would be better to clarify that the random prioritization applies only among the conflicting requests, right? It is not really a random priority in the sense that a request could end up with a properly different priority value.

I asked ChatGPT for help and given some instructions it came up with:

In deployments with multiple actors managing the same set of EV chargers, assigning distinct priorities for each is recommended to prevent conflicts. Without unique priorities, requests with identical EV chargers and priority levels are deterministically sorted in an implicit order (without changing their priority values). This can lead to unpredictable conflict resolution and longer times to reach the desired power distribution. Note that this internal sorting affects only requests that match both in charger set and priority level.
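
To make the "implicit order" concrete, a sketch of such a deterministic tie-break (the request fields are assumptions for illustration):

    from dataclasses import dataclass

    @dataclass
    class Request:
        source_id: str  # assumed stable identifier of the requesting pool
        priority: int
        power_watts: float

    def resolve_order(requests: list[Request]) -> list[Request]:
        # Higher priority first; equal-priority requests are ordered
        # deterministically by source_id, without altering their priorities.
        return sorted(requests, key=lambda r: (-r.priority, r.source_id))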

Not for this PR, as it is probably more work, but maybe another approach would be to have a policy based on when the requests were made, like the latest request always having a higher priority than the previous conflicting requests.
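
A sketch of that alternative policy, assuming each request records a creation timestamp:

    import time
    from dataclasses import dataclass, field

    @dataclass
    class TimedRequest:
        priority: int
        power_watts: float
        created_at: float = field(default_factory=time.monotonic)

    def resolve_latest_wins(requests: list[TimedRequest]) -> list[TimedRequest]:
        # Among equal-priority requests, the most recently created one wins.
        return sorted(requests, key=lambda r: (-r.priority, -r.created_at))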

Contributor

@llucax llucax left a comment

The previous comment is just a soft suggestion; reading the text again in full, it seems clear enough for an approval.

@shsms
Contributor Author

shsms commented Apr 3, 2024

Not for this PR, as it is probably more work, but maybe another approach would be to have a policy based on when the requests were made, like the latest request always having a higher priority than the previous conflicting requests.

But this would lead to races between the actors, which is what the PowerManager hopes to avoid.

@shsms shsms added this pull request to the merge queue Apr 3, 2024
Merged via the queue into frequenz-floss:v1.x.x with commit 9fd4865 Apr 3, 2024
@shsms shsms deleted the warn-on-dup branch April 3, 2024 14:07
@llucax
Contributor

llucax commented Apr 3, 2024

But this would lead to races between the actors, which is what the PowerManager hopes to avoid.

What kind of races? If many (conflicting) pools are created in a burst, then yes, it is sort of random as it is now; but if they are created more sequentially, it would just mean that the latest always wins (or loses, whichever makes more sense).

@shsms
Contributor Author

shsms commented Apr 3, 2024

What kind of races?

They will keep sending conflicting requests repeatedly, maybe even continuously, because they see that they both have bounds set, but the actual target power is somewhere else and they need to correct it. We'll end up oscillating between charging and discharging every second.

(I'm actually talking about the case where the pools are created with the same priority but by different actors. I guess you were thinking of the same actor?)

@llucax
Contributor

llucax commented Apr 3, 2024

OK, I see now, the issue is not when the pools are created but when they are actually used 👍


Development

Successfully merging this pull request may close these issues.

Warn when several calls to microgrid.battery_pool(), ev_charger_pool(), pv_pool() are made
