Drop py27 more #63
Conversation
Relates to #20
The `defaults` kwarg to `pysipp.agent.Scenario` isn't used, so drop it. Instead, comb through the provided `kwargs` and pop user-passed values into `pysipp.agent.Scenario.defaults`, making these the "new" defaults applied to agents. Also, rename the default-settings dicts more explicitly. Resolves #20
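A minimal sketch of the intended behavior; the key names and `prepare_agent()` helper below are illustrative assumptions, not the actual `pysipp.agent` code:

```python
# Hypothetical sketch only: the default key names are made up for illustration.
class Scenario(object):
    _default_keys = ('local_host', 'media_addr', 'recv_timeout')  # illustrative

    def __init__(self, agents, **kwargs):
        # values the user passes explicitly become the "new" defaults
        # applied to every agent in this scenario
        self.defaults = {}
        for key in self._default_keys:
            if key in kwargs:
                self.defaults[key] = kwargs.pop(key)
        self._agents = agents

    def prepare_agent(self, agent):
        # apply scenario-level defaults to each agent before launch
        for key, value in self.defaults.items():
            setattr(agent, key, value)
        return agent
```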
These fancy tuple-attribute-properties are supposed to be strictly assigned tuple types; let's enforce that. Resolves #12
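One possible way to enforce that, sketched under the assumption of a small property factory (not the actual pysipp implementation):

```python
# Hedged sketch: enforce tuple-only assignment on agent attributes.
def tuple_property(attr_name):
    def getter(self):
        return getattr(self, attr_name, None)

    def setter(self, value):
        # reject anything that isn't strictly a tuple
        if not isinstance(value, tuple):
            raise ValueError(
                "{} must be a tuple, not {}".format(attr_name, type(value).__name__)
            )
        setattr(self, attr_name, value)

    return property(getter, setter)


class UserAgent(object):
    # e.g. (host, port) pairs must always be tuples
    srcaddr = tuple_property('_srcaddr')
    mediaaddr = tuple_property('_mediaaddr')
```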
After attempting to find an OS-portable way to spawn subprocesses using the stdlib and coming away unsatisfied, I've decided to use the new subprocess-launching support in `trio`! This will of course require that the project move to Python 3.6+, giving us access to a lot of neat features of modern Python, including async/await support and adherence to the structured concurrency principles prominent in `trio`. It turns out this is a good fit since SIPp already has a built-in cancellation mechanism via the SIGUSR1 signal.

There are a lot of "core" changes to go over in this commit:

- drop the "run protocol" and "runner creation" related hooks, since they really shouldn't be overridden until there's some need for it and it's likely smarter to keep those "machinery" details strictly internal for now
- the run "protocol" has now been relegated to an async function: `pysipp.launch.run_all_agents()`
- many routines have been converted to async functions, particularly at the runner (`pysipp.TrioRunner.run()`, `.get()`) and scenario (`pysipp.Scenario.arun()`) levels, allowing us to expose both a sync and an async interface for running subprocesses / agents (see the sketch below)
- drop all the epoll/select loop stuff, as this is entirely delegated to `trio.open_process()` and its underlying machinery and APIs

Resolves #53
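A rough sketch of the resulting sync/async split; only the `arun()`/`run()` names mirror the commit message, the bodies and signatures here are illustrative:

```python
# Illustrative sketch only, not the real pysipp code.
import trio


class Scenario(object):
    async def arun(self, timeout=180):
        # async entry point: launch and wait on all agent subprocesses
        # (the real code delegates to pysipp.launch.run_all_agents())
        ...

    def run(self, timeout=180):
        # sync wrapper: async users `await scen.arun()` directly,
        # sync users get the same behaviour via trio.run()
        return trio.run(self.arun, timeout)
```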
It seems that I haven't picked up the commit that resolved the conflicts, even though I forked after that.
ping @goodboy, anything else to do here?
assert (
    self._prepared_agents and self._runner
), "Must run scenario before finalizing."
return trio.run(
Ahh right, this is triggered from sync code, yes?
Sorry, trying to get my head back into this change set!
Yes, it's not the most elegant solution. Do you want me to try something that doesn't require abusing object attributes and is more functional? (It may change the API a bit though.)
If you have something functional that you think is cleaner, then yeah, for sure.
I have no real problem with this; I just forgot this is how we have to do it.
Maybe we can just return a handle to `finalize` from `Scenario.run()`; at the moment we return the runner, which exposes internals.
Yeah this is a good idea - feels like a continuation/future then.
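A purely illustrative sketch of the "return a handle" idea; none of these names are settled API and the stand-in `_finalize()` coroutine just mirrors the async `finalize()` reviewed further down:

```python
# Speculative sketch only, not agreed API.
import trio


async def _finalize(runner, agents, timeout):
    """Stand-in for the async finalize() coroutine reviewed below."""
    return await runner.get(timeout=timeout)


class ScenarioHandle(object):
    """Continuation/future-like object a Scenario.run() could return."""

    def __init__(self, runner, agents):
        self._runner = runner
        self._agents = agents

    def finalize(self, timeout=180):
        # block until all agent subprocesses complete (or time out),
        # without exposing the runner itself to the caller
        return trio.run(_finalize, self._runner, self._agents, timeout)
```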
@@ -56,8 +56,7 @@ async def run(
         for cmd in cmds:
             log.debug(
                 "launching cmd:\n\"{}\"\n".format(cmd))
-            # proc = await trio.open_process(
-            proc = trio.Process(
+            proc = await trio.open_process(
do we want the async context manager interface here?
I'm pretty sure there's some teardown we may find useful?
I'm not sure how this is going to change though given python-trio/trio#1568.
Aren't the changes in this trio PR going to make it so you can use `trio.open_process` inside a nursery?

In pysipp we want the `Scenario` API to not be async, right?
> do we want the async context manager interface here?

What do you have in mind? Isn't the teardown part of `finalize`, which will be called from sync code?
> in pysipp we want the Scenario API to not be async right?

Right, I think that's what we have to aim for at the outset.

> isn't the teardown part of finalize that will be called from sync code?

Yes, you're right. So I guess I was thinking we should trigger the shutdown code I linked to, but because it's getting called from sync code we might have to either call `Process.aclose()` explicitly or use an async exit stack. I'm thinking the former is simpler and cleaner?
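A minimal sketch of the explicit-`aclose()` option, assuming the runner keeps the launched `trio.Process` objects in a `_procs` list and grows a sync-facing `close()` method (both of those are assumptions, not the real internals):

```python
# Illustrative teardown sketch; attribute/method names are assumptions.
import trio


class TrioRunner(object):
    def __init__(self):
        self._procs = []

    async def _aclose_all(self):
        # reap every child process; aclose() waits for exit and
        # kills the child if this task gets cancelled
        for proc in self._procs:
            await proc.aclose()

    def close(self):
        # sync-facing teardown entry point
        trio.run(self._aclose_all)
```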
async def finalize(runner, agents, timeout):
    """Block up to `timeout` seconds for all agents to complete."""
    # this might raise TimeoutError
    cmds2procs = await runner.get(timeout=timeout)
Hmm, could we maybe do this in a more trionic style using `with trio.fail_after(timeout):`?
Not sure what will have to get factored out of `.get()`.
I will give it a try and see if we can refactor `finalize()` and `.get()`.
Yah, I mentioned the spot from my original branch here. Not sure if I'm convinced though, given a lot of users might not be familiar with this style.
What do you think?
Oh yes, I would prefer a common global timeout; we can also document it.

> users might not be familiar with this style

Which style are you referring to?
with trio.fail_after(1):
    scen.finalize()

I'm actually now thinking we can't do this anyway. If we want to keep a sync API this won't work, I don't think (`fail_after()` won't work outside a call to `trio.run()`, pretty sure). I just don't have my head in this code so I'm probably making bad suggestions.

This should work if we offer an async `.aclose()` method to do finalization.
Oh wait, nm, `.get()` is async now - see what I mean, I obviously haven't looked at this in a while. And, duh, we `await` that in the above code.

So yeah, I think maybe pulling out the timeout is the more trionic way to do this, giving the user more control over waiting / cancellation. The only downside is we'll have to document this style for async users.
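For async users, the "pull the timeout out" style being suggested would look roughly like this, assuming an async entry point such as `Scenario.arun()` from this PR (the exact surface isn't settled):

```python
# Rough usage sketch for async callers; method names follow this PR
# but are not a finalized API.
import trio
import pysipp


async def main():
    scen = pysipp.scenario()
    # the caller owns the timeout / cancellation instead of passing a
    # timeout argument down into the runner
    with trio.fail_after(10):
        await scen.arun()


trio.run(main)
```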
@kontsaki Thanks so much for this!

I think the one major thing we will need is a `.dev0` bump to the `setup.py`, and maybe we need to start a changelog? We might also need to tweak the docs, which have some comments about the old runner.
@goodboy sure! We can continue the discussion there.
@goodboy would something like this suffice?