lep: pull based event loop #3
Conversation
allocation callbacks if necessary.

~~~~
void uv_backend_dispatch(uv_loop_t* loop)
~~~~
I haven't really thought this through yet but I think it would be desirable to have an API where you can pull and dispatch one request at a time.
I'm not really sure what it should look like but here is a strawman for you to poke holes in:
```
uv_req_t* req;
std::vector<uv_req_t*> low_prio_reqs;

while ((req = uv_backend_next(loop)) != NULL) {
  if (req->type == UV_READ || req->type == UV_WRITE)
    uv_backend_dispatch(req);
  else
    low_prio_reqs.push_back(req);
}

// Now process low_prio_reqs.
```
The idea is that the user can decide when to dispatch and in what order. If, for example, you are aiming for low I/O latency, you give preferential treatment to read and write requests, whereas if you need high-precision timers, you run those at the first possible opportunity.
(To prevent tons of calls, we'd probably want to have a batch API where you tell libuv to give you N requests. But hey, strawman, right? It doesn't have to be perfect right away.)
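Purely for illustration, the batch variant mentioned above might be shaped like this; the function name and its semantics are made up here and are not an existing or proposed libuv API:

```
/* Hypothetical: fill `reqs` with up to `n` completed requests and return
 * how many were actually dequeued (0 when nothing is pending). */
int uv_backend_next_n(uv_loop_t* loop, uv_req_t** reqs, unsigned int n);

/* Possible usage: drain completions 64 at a time. */
uv_req_t* reqs[64];
int count;
while ((count = uv_backend_next_n(loop, reqs, 64)) > 0) {
  for (int i = 0; i < count; i++)
    uv_backend_dispatch(reqs[i]);   /* per-request dispatch, as in the strawman */
}
```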
This is roughly what I was circling around in libuv/libuv#6 (comment), but that's not exactly what they want.
Anyway, I don't disagree that it could be useful. Now, do we have an actual use case which we need to cover? The two approaches aren't mutually exclusive, so this could be refined in a follow-up LEP if we see the need. Or we could note it in this one as something to consider in the future and just edit the LEP later.
We could have:

```
uv_req_t* uv_backend_dequeue(uv_loop_t* loop);
void uv_backend_dispatch(uv_req_t* req);
void uv_backend_dispatch_all(uv_loop_t* loop);
```
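A sketch of how those three calls might relate, assuming the signatures above (none of this exists yet); `uv_backend_dispatch_all()` would effectively be the loop spelled out:

```
/* Equivalent of uv_backend_dispatch_all(loop), written out with the
 * finer-grained calls so the embedder can reorder or defer requests. */
uv_req_t* req;
while ((req = uv_backend_dequeue(loop)) != NULL)
  uv_backend_dispatch(req);
```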
If you get to uv_backend_dispatch, splitting that up to get what we want won't be a problem IMO.
Just to be clear, I don't want the C callbacks to be called, ever. I don't want to have to provide C callbacks (though I am fine with having to pass in NULL). When the event happens and I ask for it, I want to be given, as a return value, some sort of data structure with enough data to do my own dispatching in the scripting language.
In libuv 1.x the callbacks often will have extra data that's not in the req itself. I'm not sure how much of this will change in this new world where everything is a req. But for my use case, I need all the data returned in some sort of union type. I can then check the type and cast to the proper type to get at all the data members. Using either the void* data member or a container_of trick on the req, I can get at my data and know where to dispatch to in the scripting language.
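As a rough sketch of that dispatch-by-type idea: the wrapper type below, the `UV_READ` value from the strawman above, and the idea of being handed a `uv_req_t*` back from libuv are all hypothetical here; `container_of` is the usual `offsetof` idiom, not a libuv API.

```
#include <stddef.h>

#define container_of(ptr, type, member) \
  ((type*) ((char*) (ptr) - offsetof(type, member)))

/* User-side wrapper: embeds the request plus whatever the scripting-language
 * binding needs in order to route the completion. */
typedef struct {
  uv_req_t req;          /* recovered via container_of on the returned req */
  int script_callback;   /* e.g. a reference into the VM's registry */
} my_req_wrapper_t;

/* Called with whatever the (hypothetical) poll/dequeue call returned. */
static void my_dispatch(uv_req_t* req) {
  my_req_wrapper_t* w = container_of(req, my_req_wrapper_t, req);
  switch (req->type) {
    case UV_WRITE: /* read the write result fields, call into the VM */ break;
    case UV_READ:  /* hand the buffer to the VM */                      break;
    default:       /* other request types */                            break;
  }
  (void) w;  /* w->script_callback tells us where to dispatch */
}
```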
> Just to be clear, I don't want the C callbacks to be called, ever. I don't want to have to provide C callbacks (though I am fine with having to pass in NULL). When the event happens and I ask for it, I want to be given, as a return value, some sort of data structure with enough data to do my own dispatching in the scripting language.
>
> In libuv 1.x the callbacks often will have extra data that's not in the req itself. I'm not sure how much of this will change in this new world where everything is a req. But for my use case, I need all the data returned in some sort of union type. I can then check the type and cast to the proper type to get at all the data members. Using either the void* data member or a container_of trick on the req, I can get at my data and know where to dispatch to in the scripting language.
I see what you want to do, but that's not how libuv really works today. Functions take callbacks because that's the only way we have to tell the user something happened; we cannot just take NULL. While we might be able to cover your scenario (I'm not really sure at this point), it won't be in this LEP.
What worries me the most is the fact that we'd be exposing internal requests and all of their members. If we ever go for opaque types and getter functions, you can see how much trouble this would be.
Feel free to write a LEP about this, but we'll need a solid plan and reasoning for it. Also, if this is about performance, I'll ask for numbers.
FWIW, Python's cffi has a way of using callbacks which I plan to use for pyuv's ffi version: https://cffi.readthedocs.org/en/release-0.8/#callbacks (I haven't measured the performance, though).
I'm going to leave what I said in https://github.com/libuv/leps/pull/3/files#r20995602 out of this LEP for the time being. We'll merge this in draft state anyway, so we could revisit / amend when we get closer. What @creationix and @txdv suggest won't be covered by this, as I mentioned here: #3 (comment)
It's OK, I can hack it away in my C# wrapper, since I don't have the performance penalty.
I wasn't sure libuv would take my suggested change either. My backup plan is to see if I can wrap libuv in some C code that exposes what I want. C code calling into C code is plenty fast :).
I'm wrapping it at the callback level; I guess you need to write another C lib which wraps the callbacks and queues them for you to consume (or modify libuv).
That would be best for now, I'd say. Once you get a feel for it we can revisit. But there is a ton of code to be written first :-)
Notwithstanding that it may take a long time to get this implemented, I am with @creationix here. Separating poll from dispatch makes sense, but it's not quite a "pull based" event loop.

```
uv_req_t* reqs[64];
int r = uv_poll(loop, reqs, ARRAY_SIZE(reqs));

for (int i = 0; i < r; i++) {
  // Do whatever needs to be done after completing this req, e.g.:
  my_request_wrapper_t* req = container_of(reqs[i], my_request_wrapper_t, uv_req);
  req->my_callback(req);
}
```
I suggested something like that here: #3 (comment). It's not quite the same, but it does have the advantage of not needing to expose internals.
I don't understand the issue about exposing internals. Libuv already has [...] Maybe you're talking about some of the requests that would be invisible to [...]
I don't think that's what @saghul is concerned about. To give you some context: we're also questioning the design decision, made in the very early libuv days, that the user provides storage for libuv's internal state. (You should be in Amsterdam more often :)) Instead, libuv could allocate and manage memory for its internal state internally; from the user's perspective a request would then have a predictable size, or a handle could even just be represented by an "opaque value". In the latter case we wouldn't be able to store the return value in the struct.

Let me propose an even stranger idea. What if we treated a uv_req_t as a "stack frame" with room for parameters and results, and nothing else? Sample code to get the idea:

```
typedef struct uv_write_s {
  int r;
  uv_stream_ref_t stream;
  const uv_buf_t* bufs;
  size_t nbufs;
  uv_req_ref_t internal;
} uv_write_t;

// Write.
const uv_buf_t buf = uv_buf_init("Hello, world", 11);
uv_write_t wr = { .stream = stream, .bufs = &buf, .nbufs = 1 };
uv_write(&wr);

// Poll.
req = uv_poll_xxx(...);

// Look at the result.
assert(req == &wr);
if (req->r < 0)
  printf("fail!\n");
```
@piscisaureus I love the idea of keeping the request simple and having nothing more than inputs and outputs. That certainly simplifies my life. I don't understand why your example has the `internal` field. Also, since you have no [...]

If you wanted to manage the internal stuff and let me manage the external/public stuff, wouldn't it make more sense for your internal struct to contain a pointer to the `uv_write_t`?

Sorry if I'm missing something obvious. I've only been doing C for a few years.
I was afraid that would raise some questions... The reason I put that there is that libuv can't efficiently map the pointer to the user's uv_write_t to the place where it stored its internal state, unless there's a field in the uv_write_t that points there. That was the purpose of the `internal` field.

Now usually there would be no need for libuv to do this mapping. But consider [...]
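To make the mapping concern concrete, here is a minimal sketch of the two structures involved. Everything below is hypothetical and only illustrates why a user-struct-to-internal-state link (the `internal` field) could be useful, for example for some hypothetical cancel operation:

```
/* Hypothetical libuv-private state for an in-flight write. */
typedef struct uv__write_state_s {
  uv_write_t* user_req;   /* internal state -> user's struct: easy to add */
  /* queue links, bytes written so far, etc. */
} uv__write_state_t;

/* The reverse direction (user struct -> internal state) is the hard part:
 * given only a uv_write_t*, libuv would have to search its queues for the
 * matching uv__write_state_t. A field like `internal` makes it O(1). */
static uv__write_state_t* uv__write_state(uv_write_t* req) {
  /* Assumes uv_req_ref_t is a pointer-sized handle; purely illustrative. */
  return (uv__write_state_t*) req->internal;
}
```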
By exposing internals I mean exposing requests that libuv would use internally and which should never reach the user. These requests are those used by handles which just take a callback for their operation: [...]

Those requests are not created by the user, and our interface with the user is callbacks. Windows already uses this model with several types of internal requests: UV_PROCESS_EXIT and UV_SIGNAL_REQ, to name a few. If one only considers [...]

Moreover, it's possible that the concept of processing a request involves more work than just firing a callback. For process handles (on Unix), for example, we use a signal handle to listen for SIGCHLD. When the last process exits we stop it, which could be done as part of dispatching the process exit internal request. Likewise with stopping the actual process handle. Some generic pseudocode: [...] (see the rough sketch after this comment)

Not to mention that requests are linked in an active requests queue, which they need to be removed from when they fire, so the loop will stop if there is no more work to do. I don't see how all this can be achieved without encapsulating the dispatch mechanism. By encapsulating I mean something like what I proposed here, which is more flexible than what's currently written in the document: #3 (comment)

This is all I can think of right now, without writing any code yet, FWIW.
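Since the pseudocode itself didn't survive in this thread, here is a rough reconstruction of the kind of internal dispatch step described above. Every name below is made up for illustration and is not actual libuv code:

```
/* Hypothetical: dispatch an internal "process exit" request on Unix. */
static void uv__dispatch_process_exit(uv_loop_t* loop, uv_req_t* req) {
  uv_process_t* process = req->data;  /* owning handle (hypothetical link) */

  /* The only part the user ever sees: the exit callback. */
  process->exit_cb(process, process->exit_status, process->term_signal);

  /* Internal bookkeeping that should not leak out of libuv: */
  uv__handle_stop((uv_handle_t*) process);   /* the process handle is done */
  if (--loop->active_processes == 0)
    uv__stop_sigchld_watcher(loop);          /* last child gone: stop the
                                                internal signal handle */
  uv__req_unregister(loop, req);             /* drop from the active queue
                                                so the loop can exit */
}
```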
@piscisaureus I see, I had forgotten about [...].

@saghul That makes more sense. I agree that there is a lot of implementing left to do before we dig too deep into specifics. I just wanted to raise my concerns early, while there was still a chance to take them into account.

As far as removing the request from the queue goes, I expected you would do that before (or soon after, nextTick style) handing me the request as a return value from the poll function. It would then be my responsibility to put something new into the event loop, or the loop would unblock and end. Though I do wonder how a nextTick-like mechanism would work if I only called into libuv and it never called into me. I guess you'd do the cleanup when I next called in to poll for the next event, since I'll always be polling in a loop.

```
uv__req_unregister(req);
// TODO: put req in the check queue somehow
return req;
```

Then the next time I call in to poll for the next event, process the check queue first thing:

```
// req is pulled from the check queue
uv__foo_req_cleanup(req);
uv__handle_stop(req->handle);
```

...and carry on processing my poll request. In other words, it works exactly like your callback behavior now, but instead of calling into my C code and waiting for me to return, you return to me and wait for me to call back.
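Putting the two snippets together, the pull-style poll could be shaped roughly like this; all of the `uv__*` helpers here are hypothetical placeholders, just to show the control flow:

```
/* Hypothetical: return the next completed request instead of invoking a
 * C callback. Cleanup for previously returned requests is deferred to the
 * next call, mirroring the check-queue idea above. */
uv_req_t* uv_backend_poll(uv_loop_t* loop) {
  uv_req_t* req;

  /* 1. Finish bookkeeping for requests handed out on the previous call. */
  while ((req = uv__check_queue_pop(loop)) != NULL)
    uv__req_cleanup(req);          /* type-specific cleanup, handle stop */

  /* 2. Wait for (or pick up) the next completion. */
  req = uv__next_completion(loop);
  if (req == NULL)
    return NULL;                   /* no active work left: the loop is done */

  /* 3. Hand it to the embedder; clean it up on the next call. */
  uv__req_unregister(loop, req);
  uv__check_queue_push(loop, req);
  return req;
}
```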