RFC: A layered approach to netdev #7736
Comments
Another thing to consider might be that this could waste a few bytes of memory, since any of these layers could just call the next one for certain operations without adding any new functionality (e.g. a netstats counter wouldn't be involved in […]).
But that might be premature optimization ;-)
Since many of the layers will only hook into one or two of the functions, another approach could be to change the functions themselves to linked lists, where the next function in the list is called if the argument is not handled by the current function. That would reduce memory usage when a layer implements three or fewer of the six API functions, and it would reduce the latency of the functions which have few hooks.

Another important consideration: how do we decide which order the layers should be linked in? Do we need some kind of sorting key, or do we just add them in the order they are initialized during application start?

Edit: the memory reduction assumes that a function pointer and a pointer to a struct use the same number of bytes in memory.
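A minimal sketch of what such a per-function hook chain could look like in C; the names (set_hook_t, netdev_set_chained) are invented for illustration and are not part of any existing RIOT API:

```c
#include <errno.h>
#include <stddef.h>

struct netdev;

/* one entry in a per-function hook chain */
typedef struct set_hook {
    struct set_hook *next;
    int (*set)(struct netdev *dev, int opt, const void *value, size_t len);
} set_hook_t;

/* walk the chain until a hook handles `opt`; a hook returns -ENOTSUP to
 * mean "not mine, ask the next layer" */
static int netdev_set_chained(struct netdev *dev, set_hook_t *chain,
                              int opt, const void *value, size_t len)
{
    for (set_hook_t *h = chain; h != NULL; h = h->next) {
        int res = h->set(dev, opt, value, len);
        if (res != -ENOTSUP) {
            return res;  /* handled, or failed with a real error */
        }
    }
    return -ENOTSUP;     /* no layer handled this option */
}
```

With this shape, a layer that only cares about one function costs a single list node on that one chain instead of a full table of six function pointers.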
I do like the ideas here; it would certainly be possible to simplify MAC layer implementations if the netstats stuff could be broken out into its own layer, just counting the number of packets passing back and forth. And though it would work the same way as today, I think it would make netdev_ieee802154_send/recv more visible. The monkey patching of send and recv done by netdev_ieee802154_init in the current implementation was not at all obvious to me until I saw the call chain in a backtrace in the debugger from inside the send of my netdev driver, but maybe I was just being blind.

Finding a solution for the allocation issue would mean that we could also potentially move out the extra members in gnrc_netdev_t added by preprocessor conditions on MODULE_GNRC_MAC (https://github.com/RIOT-OS/RIOT/blob/master/sys/include/net/gnrc/netdev.h#L119-L163). It breaks ABI compatibility between builds to have public API structs change members depending on the included modules.
I don't see news here, just bad documentation and its effects. Netdev2 has been designed to be stackable from the beginning: just add a "parent" pointer to any netdev_t and add some logic to pass send/recv/get/set calls up to that parent. Was anyone listening when I said "implement MAC layers stacked on top of the device drivers using netdev"?

Edit: sorry, that came out a lot more rude than intended.
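To make the "parent pointer" idea concrete, here is a rough sketch against RIOT's netdev API; the netdev_layer_t type is invented for illustration, and only get() is shown, with the other driver functions forwarded the same way:

```c
#include "net/netdev.h"

typedef struct {
    netdev_t netdev;   /* this layer looks like a normal netdev_t ... */
    netdev_t *parent;  /* ... and forwards calls to the layer below it */
} netdev_layer_t;

static int _layer_get(netdev_t *dev, netopt_t opt, void *value, size_t max_len)
{
    netdev_layer_t *layer = (netdev_layer_t *)dev;
    /* a real layer would intercept the options it cares about here and
     * fall through to the parent for everything else */
    return layer->parent->driver->get(layer->parent, opt, value, max_len);
}

static const netdev_driver_t _layer_driver = {
    .get = _layer_get,
    /* .send/.recv/.set/.init/.isr would forward analogously */
};
```

The network stack only ever sees the topmost netdev_t, so from its point of view nothing changes.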
@kaspar030 thank you, I think using netdev is a good solution for a MAC layer, but there's still the issue of allocating it somewhere for each network device. Do you have any suggestions on how to tackle that?
Not really. If it is not possible to do that statically (in a not completely ugly way), we could consider allowing some kind of oneway-malloc that can be used only in auto-init.
I think one-way malloc is fine on init during boot. Maybe it would be an idea to add a one-way malloc which works during boot, but where, after calling a certain function (malloc_finalize or something), any further calls to that malloc cause a panic, to prevent inadvertent malloc uses and running out of memory. That would be useful for systems where you don't need dynamic allocation and want to be able to ensure stability.
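A minimal sketch of such a boot-time-only allocator, assuming a fixed reserved region; the names boot_alloc/boot_alloc_finalize are made up here, and alignment handling is omitted for brevity:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define BOOT_HEAP_SIZE (1024U)

static uint8_t _heap[BOOT_HEAP_SIZE];
static size_t _used;
static bool _finalized;

/* bump allocator: memory handed out here is never freed */
void *boot_alloc(size_t size)
{
    if (_finalized || size > BOOT_HEAP_SIZE - _used) {
        /* a real implementation would call RIOT's core_panic() here */
        __builtin_trap();
    }
    void *chunk = &_heap[_used];
    _used += size;   /* note: alignment handling omitted for brevity */
    return chunk;
}

/* called once auto-init is done; any later boot_alloc() panics */
void boot_alloc_finalize(void)
{
    _finalized = true;
}
```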
Added the idea of #4795 to the list above so as not to lose track of it.
I've been thinking a bit about this the last few days, since I have a few network features in mind which could greatly benefit from "dynamic" allocation per network device. Most notably at the moment #6873, where ETX tracking doesn't make a lot of sense on wired interfaces.

Would it be possible to reuse the memory space of a thread for this? The stack starting at one end, a "one way heap" at the other end. Maybe even use a thread flag to indicate that this one-way malloc is allowed for the thread, so as to have a way to enforce restrictions on this. As said before, the malloc itself could be really simple.
There are no threads in netdev.
For now, I'm trying to […]
Just so that I understand what you're saying here: the netdev radio drivers are not thread aware, so no calls to threading functions, right? Somewhere up in the stack there has to be some kind of thread running an event loop controlling the drivers somehow, right?
Yes, the threading, if required, is provided by the network stack, but netdev itself doesn't know anything about it.
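For reference, this is roughly how that division of labor looks with RIOT's netdev event callback; a simplified sketch, with the message passing that wakes the stack's thread omitted:

```c
#include "net/netdev.h"

/* registered by the stack via dev->event_callback = _event_cb */
static void _event_cb(netdev_t *dev, netdev_event_t event)
{
    (void)dev;
    if (event == NETDEV_EVENT_ISR) {
        /* interrupt context: the driver only signals that something
         * happened; wake the stack's event loop thread here */
    }
    else {
        /* thread context: called from inside dev->driver->isr(),
         * e.g. NETDEV_EVENT_RX_COMPLETE to fetch a received packet */
    }
}

/* in the stack's event loop thread, after being woken up: */
/*     dev->driver->isr(dev);   // runs _event_cb() in thread context */
```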
Makes sense. This whole idea of (one-time dynamic) allocation of the […]
Sounds like a nice idea. Are you maybe willing to provide a proof-of-concept? I would do it myself, but currently don't have enough time at hand for that. :-/
Sure, not going to promise anything as my time is limited too, but it sounds like a fun challenge :)
While implementing this for netdev, I was wondering whether this layered approach is possible for netif too. Maybe MAC layer protocols such as LWMAC and GoMacH could benefit from this approach.
Could be.
In general this work aims to reduce code complexity. The layers this issue talks about already exist in part today and existed in the referenced paper as well (see modules like […]).
Okay, I'd like to have some opinions again. I'm currently looking at the way link layer addresses are initialized. My main issue with the current architecture is that the link layer address generated by the device driver is passed up and down the stack.

Problem:

The device driver generates an address based on the luid module. This is written to the […]. My problem here is that 1. there is data directly written between "layers", by the device driver, to the […], and 2. […].

Solution:

My current solution would be to have the higher layer ([…]) request the address from the device driver instead. In this implementation, checks would be required to check whether the netdev stack uses a link layer address and whether the device driver provides a link layer address, and then use that address.
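If I read the proposal right, the "pull" direction could look something like this sketch, using netdev's existing get() with NETOPT_ADDRESS; the helper name and the error handling are my own:

```c
#include <stdint.h>
#include "net/netdev.h"

/* hypothetical init-time helper in the upper layer: ask the driver for
 * its link layer address instead of having the driver write it upwards */
static int _init_l2addr(netdev_t *dev, uint8_t *l2addr, size_t max_len)
{
    int res = dev->driver->get(dev, NETOPT_ADDRESS, l2addr, max_len);
    if (res < 0) {
        /* the device has no link layer address (or doesn't expose one);
         * configure the interface without one */
    }
    return res;   /* number of address bytes on success */
}
```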
Didn't we "solve" the […]?
That only solves changes to the link layer address when the device driver is reset, but it doesn't solve my problems number 1 and 2 here (right?) :)
Well, problems 1 and 2 won't arise when the reset is idempotent ;-).

Or am I misunderstanding them? :-/
As a practical (or less hypothetical) example: the nrf52840 doesn't have hardware address filtering. The software filter could be implemented as a netdev layer. This extra filter layer would then require knowledge of both the generated link layer address and the PAN ID. IMHO the easiest way for this layer to get the link layer address would be if it could grab the information from a […]
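A rough sketch of what such a filter layer could look like on top of netdev's recv(); the struct and the frame parsing are placeholders of my own, not an existing implementation:

```c
#include <stdint.h>
#include "net/netdev.h"

typedef struct {
    netdev_t netdev;        /* presents itself as a normal netdev_t */
    netdev_t *parent;       /* the real radio driver underneath */
    uint8_t short_addr[2];  /* what the filter matches against */
    uint16_t pan_id;
} filter_layer_t;

static int _filter_recv(netdev_t *dev, void *buf, size_t len, void *info)
{
    filter_layer_t *l = (filter_layer_t *)dev;
    int res = l->parent->driver->recv(l->parent, buf, len, info);
    if (res <= 0 || buf == NULL) {
        /* size query, error, or drop request: just pass it through */
        return res;
    }
    /* IEEE 802.15.4 header parsing omitted: compare the destination
     * address and PAN ID in buf against l->short_addr / l->pan_id and
     * drop the frame on a mismatch */
    return res;
}
```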
That just means I didn't explain them well enough :) I'm trying to remove this write by the device driver to the […]. A different solution for the second issue might be to do a […]
These "blockers" are all cases where data is directly read from a netdev struct at a position where, in a layered module, the content of this struct can't be guaranteed. The list might be expanded in the future.

Blockers:

- […]
At the moment I think that the easiest way is to remove the link layer address from the […]. The main issue is that the address generated by the device driver now has to be synced somehow with the […]. The only place where the link layer address is requested is with the […]
DAD failure ;-).
That's what I was thinking, but not what I was writing :)
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. If you want me to ignore this issue, please mark it with the "State: don't stale" label. Thank you for your contributions.
@haukepetersen and I today talked a little bit about the problems with netdev_t and how to dynamically add features independent from a specific device. Examples for this are netdev_ieee802154_t or deduplication of received packets (currently implemented in gnrc_netif_ieee802154: drop duplicate broadcast packets (optionally) #7577).

This is what we came up with: in addition to actual device drivers there are additional netdev_driver_t's available which also have another netdev_driver_t associated to themselves (I called this construction netdev_layered_t here, but I'm not stuck on the name). This allows a network stack to use those features as though they were a normal network device, and the whole thing would be very transparent to the network stack (so I wouldn't count this as a major API change ;-)). The benefit of this should be clear; among other things it would address the netdev_ieee802154_set()/netdev_ieee802154_get() situation that seems to confuse people.

The questions that are still open are:

- How do we allocate the netdev_layered_t instances?
- How do we build the "netdev_layered_t" stack while also having it configurable?