trio.Queue methods do not play with inspect.iscoroutinefunction #635
This is not a bug. You cannot depend on `inspect.iscoroutinefunction` here; trio's idiom differs from asyncio's on this point.

Or, put another way: you really don't want code that depends on introspection when you use it. If you really need to do it, use `inspect.isawaitable` on the result of calling the function — but, again, you really shouldn't do that.

---
Hmm, I see. Thank you @smurfix. So would this be a reasonable way to handle it:

```python
result = func(*args, **kwargs)
if inspect.isawaitable(result):
    result = await result
return result
```

---
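To make that pattern concrete, here is a self-contained sketch (using asyncio's runner purely so the example runs standalone; the call-then-check pattern itself is the same under trio):

```python
import asyncio
import inspect

async def maybe_await(func, *args, **kwargs):
    # Call first, then inspect the *result*: this works for plain
    # functions, async functions, partials, and callable objects alike,
    # whereas inspecting the function object itself can be fooled.
    result = func(*args, **kwargs)
    if inspect.isawaitable(result):
        result = await result
    return result

def sync_fn(x):
    return x + 1

async def async_fn(x):
    return x + 1

print(asyncio.run(maybe_await(sync_fn, 1)))   # 2
print(asyncio.run(maybe_await(async_fn, 1)))  # 2
```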
Well… the question is, why do you need an interface that accepts both sync and async functions in the first place?

---
Related to the topic of this issue:
Thank you for that response. I'm actually trying to figure out how to write trio-esque code that can translate existing code based on tornado loops. Although not directly related to the topic of this issue, I hope I can ask your advice on how to write things with trio. I'm not used to writing networking code, so I'm not sure of the idiomatic approach. Consider the class:

```python
class timed_window(Stream):
    """ Emit a tuple of collected results every interval

    Every ``interval`` seconds this emits a tuple of all of the results
    seen so far. This can help to batch data coming off of a high-volume
    stream.
    """
    def __init__(self, upstream, interval, **kwargs):
        self.interval = convert_interval(interval)
        self.buffer = []
        self.last = gen.moment

        Stream.__init__(self, upstream, ensure_io_loop=True, **kwargs)
        self.loop.add_callback(self.cb)

    def update(self, x, who=None):
        self.buffer.append(x)
        return self.last

    @gen.coroutine
    def cb(self):
        while True:
            L, self.buffer = self.buffer, []
            self.last = self._emit(L)
            yield self.last
            yield gen.sleep(self.interval)
```

When this class is initialized, an existing or created IOLoop object is used to start the long-running `cb` coroutine. My trio-based attempt looks like:

```python
class timed_window(Stream):
    """ Emit a tuple of collected results every interval

    Every ``interval`` seconds this emits a tuple of all of the results
    seen so far. This can help to batch data coming off of a high-volume
    stream.
    """
    _graphviz_shape = 'octagon'

    def __init__(self, upstream, interval, **kwargs):
        self.interval = convert_interval(interval)
        self.buffer = []
        self.last = time.time()
        Stream.__init__(self, upstream, **kwargs)

    async def update(self, x, who=None):
        self.buffer.append(x)

    async def cb(self):
        while True:
            await trio.sleep(self.interval)
            L, self.buffer = self.buffer, []
            await self.emit(L)
```

Now, I don't see how to start the long-running `cb` coroutine. Apologies if this is too off-topic. I'd be glad to take pointers on another forum to ask this on.

---
@achennu Another example where `inspect.iscoroutinefunction` gives the wrong answer:

```python
In [3]: inspect.iscoroutinefunction(my_async_fn)
Out[3]: True

In [4]: inspect.iscoroutinefunction(partial(my_async_fn, 1))
Out[4]: False
```

So yeah, calling it and then checking `inspect.isawaitable` on the result is the more robust approach.

---
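`functools.partial` is not the only such case (and newer Python versions learned to unwrap partials). A callable instance whose `__call__` is an `async def` still trips up `inspect.iscoroutinefunction` today — the sketch below is mine, not from the thread:

```python
import asyncio
import inspect

class AsyncCallable:
    # Calling an instance returns a coroutine, but the instance itself
    # is not a coroutine *function*, so iscoroutinefunction says False.
    async def __call__(self, x):
        return x * 2

obj = AsyncCallable()
print(inspect.iscoroutinefunction(obj))  # False -- introspection misses it
coro = obj(21)
print(inspect.isawaitable(coro))         # True -- the call result is awaitable
print(asyncio.run(coro))                 # 42
```

This is why checking the call's result, not the callable, is the reliable test.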
Oh sorry, I didn't see your latest reply when writing that.

Well... the problem is that in trio there is no global loop to attach a background task to; tasks have to be started inside a nursery. If you have an object that requires a background task, then one nice idiom is to make your public "constructor" an async context manager:

```python
async with open_batched_stream(...) as batched_stream:
    async for batch in batched_stream:
        ...
```

and implement that as something like:

```python
@asynccontextmanager
async def open_batched_stream(...):
    async with trio.open_nursery() as nursery:
        batched_stream = BatchedStream(...)
        nursery.start_soon(batched_stream._background_task)
        yield batched_stream
        nursery.cancel_scope.cancel()  # cancel the background task
```

---
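As a runnable illustration of the same idiom without trio installed, here is an asyncio translation with a hypothetical minimal `BatchedStream` (the names and the batching behavior are assumptions for the sketch; `Task.cancel` stands in for `cancel_scope.cancel`):

```python
import asyncio
from contextlib import asynccontextmanager

class BatchedStream:
    # Hypothetical stand-in: collects items into a buffer and snapshots
    # them into batches every `interval` seconds.
    def __init__(self, interval):
        self.interval = interval
        self.buffer = []
        self.batches = []

    async def _background_task(self):
        while True:
            await asyncio.sleep(self.interval)
            batch, self.buffer = self.buffer, []
            self.batches.append(tuple(batch))

@asynccontextmanager
async def open_batched_stream(interval):
    # The public "constructor" is an async context manager: the
    # background task lives exactly as long as the with-block.
    stream = BatchedStream(interval)
    task = asyncio.create_task(stream._background_task())
    try:
        yield stream
    finally:
        task.cancel()  # plays the role of nursery.cancel_scope.cancel()
        try:
            await task
        except asyncio.CancelledError:
            pass

async def main():
    async with open_batched_stream(0.01) as stream:
        stream.buffer.extend([1, 2, 3])
        await asyncio.sleep(0.05)
    return stream.batches

print(asyncio.run(main()))
```

The design point is the same as in the trio version: nothing outside the context manager can observe the object without its background task running.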
Thanks @njsmith for the clarification. I consider this issue now closable, regarding the introspectability of trio.Queue methods. I am trying to internalize the API design patterns that trio brings, and your explanation helps. (Your reply came in as I was reading the docs.)

In taking on this type of API pattern, I'm wondering how 'expensive' it is to create nurseries? In my current sketch, each `emit` opens a nursery:

```python
class Stream:
    def __init__(self, ...):
        ...
        self.downstreams = downstreams

    async def update(self, x, who=None):
        await self.emit(x)

    async def emit(self, x):
        async with trio.open_nursery() as nursery:
            for downstream in self.downstreams:
                nursery.start_soon(downstream.update, x, self)
```

In such an implementation, each call to `emit` of each node in the flow path creates a nursery to update its downstreams. The advantage is that each downstream is updated concurrently.

Considering your comments on using different constructors, I suppose it would have to look something like:

```python
from trio_streamz import run_dataflow, core

stream = core.Stream()
f1 = stream.map(lambda x: x+1).timed_window(2)
f2 = stream.map(lambda x: x*2).partition(3)
z = core.zip(f1, f2)

async with run_dataflow() as runner:
    for i in range(10):
        runner.emit(stream, i)
```

This gives the chance that one runner has a nursery, and that's the one used for all emit calls. Is this for some reason preferable?

---
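As a runnable sanity check of the per-`emit` fan-out, here is the same sketch translated to asyncio's `gather` (so it runs without trio; the class shape follows the sketch above, with an added `received` list as an assumption for observing results):

```python
import asyncio

class Stream:
    def __init__(self, downstreams=()):
        self.downstreams = list(downstreams)
        self.received = []

    async def update(self, x, who=None):
        self.received.append(x)
        await self.emit(x)

    async def emit(self, x):
        # Run all downstream updates concurrently and wait for them all,
        # mirroring open_nursery() + start_soon in the trio sketch.
        if self.downstreams:
            await asyncio.gather(
                *(d.update(x, self) for d in self.downstreams)
            )

a, b = Stream(), Stream()
root = Stream(downstreams=[a, b])
asyncio.run(root.update(1))
print(root.received, a.received, b.received)  # [1] [1] [1]
```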
I haven't measured the cost of setting up and tearing down a nursery, but it's just creating and manipulating a few Python data structures, not doing any heavy computation or operating system calls or anything. I'd say, do whatever makes the code most readable, and if later on you discover it's causing speed problems, let us know and we'll try to make the readable code fast :-) It is true that if you want nodes in your flow graph to be "live" outside of calls to `emit`, then you'll need some longer-lived nursery to run them in.

By the way, just something to be aware of: trio has its own, very different, interface called `Stream` (`trio.abc.Stream`), so reusing that name may get confusing.

---
For the autodetection thing, I'd suggest picking one of a few naming conventions and sticking to it. Trio itself distinguishes sync and async variants by name; for example, `trio.Queue.get` is async while `trio.Queue.get_nowait` is sync.

Anyway, sounds like the issue in the title is resolved, so I'm going to close this. But I'm happy to keep chatting here or in the chat!

---
I've been studying trio for a while now and boy, it is a breath of fresh air -- in terms of API design, documentation and even in the clarity of design (and code) it brings to me. Thank you @njsmith for your fantastic explanations in your blogs and in the docs.
Now, I've run into a situation where trio seemingly does not reflect the nature of the function correctly.
The `trio.Queue` methods are `async def` functions, but it seems that the `@_core.enable_ki_protection` decorator messes with the introspectability of the code. A couple of searches on this repo and in the docs did not turn up any information. Is this a bug?
I'm using trio version 0.6.0 and python 3.7.
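The effect described can be reproduced with a hypothetical sync pass-through decorator (this is not trio's actual `enable_ki_protection`, just an illustration of the mechanism):

```python
import functools
import inspect

def passthrough(fn):
    # A sync wrapper: calling it still returns fn's coroutine, but the
    # wrapper itself is an ordinary function, so introspection on the
    # decorated name is fooled.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

@passthrough
async def put(item):
    return item

print(inspect.iscoroutinefunction(put))  # False, despite `async def`
coro = put("x")
print(inspect.isawaitable(coro))         # True
coro.close()  # avoid a "never awaited" warning
```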