RFC: Reduce the number of gnrc_netif threads #13762
Comments
The only advantage (as far as I know) shows up when multiple interfaces are using DMA-based interfaces that are able to switch context during the transfer. Then multiple interfaces would be able to block on I/O-based operations at the same time. That's not possible with a single thread with the current SPI API. For the other threads (

This is one of the reasons why I'm very curious how performance is affected with this PR versus the RAM saved in a multi-interface scenario. I'm not sure if we actually have SPI driver implementations that allow for a thread switch during the transfer.

I don't see much of a difference between option 1 and 2, except that it is a different name for the same thing. I prefer to have an explicit thread for gnrc: it allows for more flexible IPC (messages, events, flags) and I think it makes debugging easier, though that last point is more a gut feeling than an actual argument.
Is this a use case, or do we have that kind of interface in RIOT nowadays?
Then, for SPI-based interfaces there would be no difference between the single- and multi-threaded solutions, do I get you correctly?
IMHO polling-based SPI is a kludge until someone provides a proper implementation.
I've done this for the nrf52 devices in a branch (still have to clean up and PR). The thread initiating the transfer blocks on a mutex while the transfer is busy and is released as soon as the transfer is done. This way the SPI API does not have to be changed, but the CPU is free to handle different threads (or sleep) during the transfer.
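A minimal sketch of that pattern, assuming a DMA-capable SPI peripheral (this is not the actual nrf52 branch; `dma_start_transfer()` and `spi_dma_done_isr()` are hypothetical names used only for illustration):

```c
#include <stddef.h>
#include "mutex.h"

/* Hypothetical low-level helper that starts a DMA transfer and returns
 * immediately; the real function depends on the CPU/peripheral. */
extern void dma_start_transfer(const void *out, void *in, size_t len);

static mutex_t _xfer_done = MUTEX_INIT_LOCKED;

void spi_transfer_bytes_dma(const void *out, void *in, size_t len)
{
    dma_start_transfer(out, in, len);
    /* Block until the ISR releases the mutex. The scheduler is free to run
     * other threads (or sleep) while the DMA transfer is in progress. */
    mutex_lock(&_xfer_done);
}

/* Called from the DMA/SPI "transfer complete" interrupt. */
void spi_dma_done_isr(void)
{
    mutex_unlock(&_xfer_done);
}
```

The caller still sees the usual blocking SPI semantics, so the `periph/spi` API itself would not need to change.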
What I was trying to say here is that currently none of the SPI implementations block on a mutex or another thread-sync mechanism, but spinlock while the SPI peripheral is busy. However, this is purely an implementation limitation and not an API limitation.

For now I don't expect a difference in performance, and I'm aware that right now this is a non-issue due to the SPI periph driver implementation. However, as this is still in a design phase, judging by the RFC label, I'd say this is the point in time to look ahead and explore possible limitations of the approach. Of course I don't mean to block this issue on this, but I want to have this mentioned so that we can actively make a decision on whether to take this future limitation into account or not.
The option 2. (

I'm aware this pattern could be done with multiple threads, but IMHO it's overkill to have one full thread per network interface.
I'm not familiar with this term, care to elaborate?
That's what we're discussing here, right? Whether it makes sense to decrease the number of threads and what the trade-offs are. That's why I mentioned the possible (future) performance impact, before refactoring to a different approach.
Polling-based SPI implementations might be "a kludge until someone provides a proper implementation", but having a blocking API is IMO a sound decision. We don't have any language support for asynchronous APIs, so they would get messy. And providing a blocking implementation is easy; providing an asynchronous one is more difficult (and thus less likely to be available for any given hardware).

I'd prefer fully async-capable periph APIs, especially as it is trivial to wrap a blocking API on top. But I don't think building this is trivial (meaning, this will not happen for a long time).
The Bottom-Half processing is basically the ISR offloading mechanism. It's the component that calls the IRQ handler of the device. We currently use the gnrc_netif thread for that.
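Roughly, for readers not familiar with the term, the idea looks like the following simplified sketch of what the netif thread does today (`MSG_TYPE_ISR` and the function names are placeholders, not the exact GNRC symbols):

```c
#include "msg.h"
#include "thread.h"
#include "net/netdev.h"

#define MSG_TYPE_ISR    (0x1234)    /* placeholder message type */

static kernel_pid_t _netif_pid;     /* PID of the network (bottom-half) thread */

/* Top half: runs in interrupt context and must stay short. It only notifies
 * the thread that the device wants attention. In a real setup this function
 * would be registered as dev->event_callback. */
static void _event_cb(netdev_t *dev, netdev_event_t event)
{
    if (event == NETDEV_EVENT_ISR) {
        msg_t msg = { .type = MSG_TYPE_ISR, .content = { .ptr = dev } };
        msg_send(&msg, _netif_pid);
    }
}

/* Bottom half: runs in thread context and calls the device's IRQ handler,
 * which then performs the actual RX/TX processing. */
static void _netif_thread(void)
{
    while (1) {
        msg_t msg;
        msg_receive(&msg);
        if (msg.type == MSG_TYPE_ISR) {
            netdev_t *dev = msg.content.ptr;
            dev->driver->isr(dev);
        }
    }
}
```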
Just to clarify, the performance impact (which I recognize can be present) is not related to

If we make

This goes beyond this point with the fact that all network stacks in
During the summit's PHY/MAC rework breakout session, there was rough consensus that we probably don't want to change the way GNRC defines interfaces, since it was designed to be flexible. Having one thread for all interfaces could affect flexibility. If there are no more comments here, I would simply close this issue.
Was there a consensus for that? IMHO the consensus was to not enforce this, but I think there is still value (especially memory-wise) in allowing GNRC's netifs to be grouped into dedicated threads.
As far as I understand, devices with multiple interfaces are pretty rare, so there is probably not much to gain here in optimizing for that.
Doesn't the
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. If you want me to ignore this issue, please mark it with the "State: don't stale" label. Thank you for your contributions.
How is this different from #10496?
Yes, it's basically the same.
Description
Currently GNRC netif allocates one thread per interface. This doesn't add any advantage, since all threads have the same priority. In practice, it allocates much more memory than needed.
So, I open this issue to decide how to proceed with this.
I see 2 alternatives:
1. Make one global `gnrc_netif` thread that demuxes to the right `gnrc_netif_xxx` implementation: We could have only one `gnrc_netif` thread (same as having one `gnrc_ipv6` and one `gnrc_udp`). Instead of sending a packet to a given PID, we can send all packets to the same thread. This one would operate the specific link layer (via the `gnrc_netif_ops`).
2. We could make `gnrc_netif` agnostic to the source of events (like `netdev`). With this approach, `gnrc_netif` doesn't have any thread and requires someone else to process the event. For instance, we could think of reusing the `event_thread` module to handle interface events (send, recv, get, set). (A rough sketch of this event-based approach is shown after the open questions below.)

Some open questions:

- How to point to a `gnrc_netif`? With both approaches the interfaces need an ID independent of the PID (see netif: add functions to get and get by identifier #12738). For `gnrc_netif_send` it's possible to use the `netif` header in the pktbuf, but how to achieve the same with `gnrc_netif_get` and `gnrc_netif_set`?
- 2. could reuse `event_thread` too. But 1. can also do the trick.

All feedback is more than welcome.
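To make alternative 2. a bit more concrete, here is a rough sketch of what an event-based interface could look like with the existing `event` / `event_thread` modules. The struct and handler names are made up for illustration and only cover the ISR-offloading path; send/get/set would be wrapped in events the same way:

```c
#include "event.h"
#include "event/thread.h"
#include "net/netdev.h"

/* Event object for one interface (a multi-interface setup would have one per
 * netif). The event_t must be the first member so the handler can cast back
 * to the wrapper struct. */
typedef struct {
    event_t super;
    netdev_t *dev;
} netif_isr_event_t;

/* Bottom half: executed by the shared event_thread instead of a
 * per-interface gnrc_netif thread. */
static void _isr_handler(event_t *ev)
{
    netif_isr_event_t *ne = (netif_isr_event_t *)ev;
    ne->dev->driver->isr(ne->dev);
}

static netif_isr_event_t _isr_event = {
    .super = { .handler = _isr_handler },
};

/* Top half: called from the device IRQ via the netdev event callback. */
static void _event_cb(netdev_t *dev, netdev_event_t event)
{
    if (event == NETDEV_EVENT_ISR) {
        _isr_event.dev = dev;
        /* Queue the work on the shared medium-priority event thread. */
        event_post(EVENT_PRIO_MEDIUM, &_isr_event.super);
    }
}
```

With this shape the number of threads no longer scales with the number of interfaces; the trade-off is that all interfaces (and whatever else uses that queue) share one priority and are serialized behind each other.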
Related PRs
- `gnrc_netif_send`: gnrc_netif_send: add send function #13579
- `gnrc_netif_set`/`gnrc_netif_get`: gnrc_netif: add netif set and get function #13582