
Network driver syscall interface adaptation for new VirtioNet driver #71

Closed
wants to merge 2 commits

Conversation

mustermeiszer

These changes are a necessary adaptation of the current network syscall interface used by smoltcp. This pull request must therefore be seen/used together with the following pull request from the libhermit-rs repository (hermit-os/kernel#112).

Changes:
The changes are rather small. Upon reception of a buffer, the driver now receives a static reference to the payload and a handle from the kernel. Once it has finished consuming the buffer, user space returns the handle to the kernel.
In other words, the handle owns the buffer in kernel space, and the buffer is leaked for as long as user space holds the handle. When the handle is returned to the kernel-space network driver, the driver takes back ownership and frees the memory.
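
To illustrate the ownership model, below is a minimal sketch of such a handle-based interface. The names (`RxHandle`, `sys_netrx`, `sys_netrx_done`) are hypothetical and only illustrate the pattern; the actual syscall names and signatures are defined in the two pull requests.

```rust
/// Hypothetical opaque handle that owns a receive buffer in kernel space.
#[repr(transparent)]
pub struct RxHandle(usize);

/// Sketch: the kernel hands user space a reference to the payload plus the
/// owning handle. The buffer stays leaked (i.e. is not freed) while the
/// handle is held in user space.
pub fn sys_netrx() -> Option<(&'static [u8], RxHandle)> {
    // ... provided by the kernel-side driver (see hermit-os/kernel#112) ...
    unimplemented!()
}

/// Sketch: returning the handle gives ownership back to the kernel-space
/// driver, which can then deallocate or recycle the buffer.
pub fn sys_netrx_done(handle: RxHandle) {
    // ... the kernel reclaims the buffer owned by `handle` ...
    let _ = handle;
}
```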

mustermeiszer added 2 commits September 24, 2020 09:50
This commit adjusts the userspace syscall interface for the network
driver, solely in the aspect that the buffer-consumed notification
for the kernel-space network driver now returns the reference to the
consumed data.
This is necessary in order to deallocate the buffer in kernel space.
This commit changes the syscall interface slightly for the new network
driver. Upon reception of a packet, the driver now forwards a
reference to the packet's memory as well as a handle, which controls this
memory area inside the kernel-space driver part. The handle is then
returned to the network driver upon consumption. This reduces
the number of necessary copies by one.
bors bot added a commit to hermit-os/kernel that referenced this pull request Jan 4, 2021
112: New VirtioNet Driver and Extended Virtio Infrastructure (PackedVq) r=stlankes a=mustermeiszer

The following pull request includes:

- Extension of the Virtio PCI functionality
- Extension of the VirtioNet driver
- Implementation of the Packed Virtqueue
- Implementation of the Split Virtqueue (most functionality of the new interface)

This PR must be seen/used together with the PR in the rusty-hermit repository: [(PR rusty-hermit)](hermit-os/hermit-rs#71)

## Extension of the PCI functionality

Virtio defines multiple ways to facilitate communication between devices and their drivers. One way is to use a PCI bus. This implementation builds on the kernel's existing PCI functionality and extends it; the extension focuses solely on the Virtio-specific aspects.
The Virtio standard defines multiple configuration structures, which allow a driver to configure and use a Virtio device over a PCI bus. Given that a Virtio driver already possesses the existing `struct PciAdapter`, it can use the following call to gain access to all Virtio-specific configuration structures.

```rust
// The PciAdapter struct can be obtained from the kernel's PCI code.
// &adapter is of type PciAdapter

let capability_collection: UniCapsColl = crate::drivers::virtio::transport::pci::map_caps(&adapter);

// Where UniCapsColl is structured as follows
struct UniCapsColl {
    com_cfg_list: Vec<ComCfg>,
    notif_cfg_list: Vec<NotifCfg>,
    isr_stat_list: Vec<IsrStatus>,
    pci_cfg_acc_list: Vec<PciCfgAlt>,
    sh_mem_cfg_list: Vec<ShMemCfg>,
    dev_cfg_list: Vec<PciCap>
}
```

If the respective vector is empty, the device does not define the respective configuration structure, although all devices MUST define at least a `struct ComCfg`. As devices are allowed to define multiple instances of each structure, the `UniCapsColl` carries a vector of them. The device specifies a descending preference for structures of the same type, and the vectors store the structures in this descending preference (i.e. the configuration structure the device prefers the most is placed at the first entry of the vector, and so forth).
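
As a consequence, a driver that simply wants the device's most preferred structure of a given type can take the first entry of the corresponding vector. A minimal sketch, assuming the `UniCapsColl` from above (the error handling is illustrative, not the kernel's actual code):

```rust
// Sketch: pick the device's most preferred common configuration structure.
// The vectors are ordered by descending device preference, so index 0 holds
// the structure the device prefers the most.
fn choose_com_cfg(mut caps: UniCapsColl) -> ComCfg {
    if caps.com_cfg_list.is_empty() {
        // Every device MUST define at least one ComCfg, so an empty vector
        // indicates a device that violates the Virtio specification.
        panic!("device defines no ComCfg");
    }
    caps.com_cfg_list.remove(0)
}
```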

One field of the struct must be noted separately. The field `dev_cfg_list` provides access to device-specific configuration structures via the given `PciCap`. As those configuration structures are different for each device, drivers must map them individually. The code provides the following helper function to do this:
```rust
// A virtio device specific configuration struct
struct NetDevCfg {
    // ... device-specific fields ...
}

// pci_cap is an item from the dev_cfg_list
let net_dev_cfg: &'static NetDevCfg = crate::drivers::virtio::transport::pci::map_dev_cfg::<NetDevCfg>(&pci_cap).unwrap();
```


## Extension of the VirtioNet Driver
The new driver is a complete rewrite of the existing driver. It adds functionality for
- Feature negotiation (a minimal feature set is defined; if this set is not provided by the device, the driver aborts initialization).
   - The driver tries to reduce its feature set down to the minimal set before aborting. If a feature match is found before that point, the driver uses this feature set (see the sketch after this list).
- Error management. The new driver catches errors and tries to resolve them. The code therefore returns a lot of `Result<..., ...>`s.
- Control functionality. Although no syscalls are currently implemented, the driver provides the infrastructure to use the control functionality of the device. Through a separate "Control Virtqueue" the driver is able to send commands to the network device, including, among others, VLAN filtering, receive filtering, and MAC filtering.
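
The following sketch illustrates the negotiation strategy described above. The names (`negotiate_features`, `MINIMAL_FEATS`, `wanted_feature_sets`) and the concrete feature bits are assumptions made for illustration only:

```rust
/// Hypothetical minimal feature set the driver cannot work without.
const MINIMAL_FEATS: u64 = 0b0011;

/// Sketch: try the preferred feature sets in descending order, falling back
/// towards the minimal set; abort initialization if even that is unsupported.
fn negotiate_features(dev_feats: u64, wanted_feature_sets: &[u64]) -> Result<u64, ()> {
    for &feats in wanted_feature_sets {
        // A set matches if the device offers every feature bit in it.
        if dev_feats & feats == feats {
            return Ok(feats);
        }
    }
    if dev_feats & MINIMAL_FEATS == MINIMAL_FEATS {
        Ok(MINIMAL_FEATS)
    } else {
        // The minimal set is not provided by the device: abort initialization.
        Err(())
    }
}
```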

## Implementation of the Packed Virtqueue
The implementation in this pull request tries to provide a simple-to-use interface while hiding the complexity of the virtqueue.
Virtio specifies two kinds of so-called virtqueues, both of which exist for the actual data transfer between device and driver. The kernel already has an implementation of the "split virtqueue"; this pull request implements the second kind, the "packed virtqueue". In order to allow a unified usage of both virtqueues in drivers, the implementation provides a common interface for both queues. (Currently the interface is only implemented for the packed virtqueue; in the future the split virtqueue can be ported to this interface, so users who want to use a split virtqueue currently have to use the split virtqueue's own interface.) A cutout of the unified interface is listed below.

```rust
pub enum Virtq {
    Packed(PackedVq),
    Split(SplitVq), // Currently unimplemented
}

impl Virtq {
    /// Returns a new instance of the specified Virtq variant.
    /// The structs ComCfg and NotifCfg can be obtained via the function shown above from the virtio-pci code.
    pub fn new(com_cfg: &mut ComCfg, notif_cfg: &NotifCfg, size: VqSize, vq_type: VqType, index: VqIndex, feats: u64) -> Self {
        match vq_type {
            VqType::Packed => match PackedVq::new(com_cfg, notif_cfg, size, index, feats) {
                Ok(packed_vq) => Virtq::Packed(packed_vq),
                Err(_vq_error) => panic!("Currently panics if queue fails to be created"),
            },
            VqType::Split => unimplemented!(),
        }
    }

    /// This function creates a BufferToken.
    pub fn prep_buffer(&self, rc_self: Rc<Virtq>, send: Option<BuffSpec>, recv: Option<BuffSpec>) -> Result<BufferToken, VirtqError> {
        match self {
            Virtq::Packed(vq) => vq.prep_buffer(rc_self, send, recv),
            Virtq::Split(_vq) => unimplemented!(),
        }
    }

    /// Enables interrupts (i.e. signals the device to send interrupts).
    pub fn enable_interrupts(&self) {
        match self {
            Virtq::Packed(vq) => vq.enable_interrupt(),
            Virtq::Split(vq) => vq.enable_interrupt(),
        }
    }

    // ...
}
```
The actual Virtqueue specification can be found in the resources below. 

In the following I will provide a short overview of how to use the virtqueue. The main advantage of the given approach is that drivers do not need to handle the complexity of the queues and that most memory management is in the hands of the queue.

```rust
// Create a new virtqueue, assuming we already obtained the configuration structs from the pci code
let vq_type = VqType::Packed; // `type` is a keyword in Rust, so we name the binding `vq_type`
let size = VqSize::from(256);
let index = VqIndex(0);
// features is a u64, which bitwise indicates the features negotiated with the device.

let vq = Rc::new(Virtq::new(com_cfg, notif_cfg, size, vq_type, index, features));

// Buffers in the virtqueue can contain write-only and read-only areas. 
// Each BufferToken created via the following function will be regarded as a token for a single transfer.
//
// We specify which kind of buffer we want.
// In this example we create a 1500-byte read-only buffer for the device
// and two write-only buffers for the device, of sizes 10 bytes and 1000 bytes.
let send_spec = BuffSpec::Single(Bytes::new(1500).unwrap());
let size_defs = [Bytes::new(10).unwrap(), Bytes::new(1000).unwrap()];
let recv_spec = BuffSpec::Multiple(&size_defs);

// We need to provide the queue with an Rc::clone of the virtqueue, which every token holds on to. This provides memory
// safety in cases where Transfers are dropped by the driver before the actual transfer in the queue is finished.
// This is needed because the tokens own the memory that is accessed by the device.
// The virtqueue therefore takes care of holding early-dropped tokens until the transfer is finished.
let buff_tkn: BufferToken = vq.prep_buffer(Rc::clone(&vq), Some(send_spec), Some(recv_spec)).unwrap();

// If we want, we can write data into the read-only buffers now.
// As we do not know all use cases, we can also write into the write-only buffers if needed,
// but here we only write into the read-only buffer.
// The "data" that is written needs to implement the AsSliceU8 trait in order to be writable.
// Afterwards we provide the BufferToken and receive a TransferToken. This intermediate step is necessary
// because the queue provides batch transfer of TransferTokens. Then we dispatch the TransferToken.
// After this step the actual transfer is placed in the virtqueue and can be accessed by the device.
let transfer = buff_tkn
    .write(Some(data), None) // -> Result<BufferToken, VirtqError>
    .unwrap()                // -> BufferToken
    .provide()               // -> TransferToken
    .dispatch();             // -> Transfer

// A Transfer can be polled. The poll function returns true if the transfer is finished. All actions on the transfer will
// return an error if executed before transfer.poll() returns true.
if transfer.poll() {
    // If this is true, the transfer is finished and may be dropped or reused.
    let buff_tkn = transfer
        .reuse()   // -> Result<BufferToken, VirtqError>
        .unwrap(); // -> BufferToken
    // At this point we can use the BufferToken again for a transfer.
    // We can also restrict the sizes of a BufferToken's memory areas. Upon reuse, the sizes of the
    // underlying buffers are restored to their sizes at initialization.
} else {
    // This means the transfer is ongoing.
    // We can safely drop the transfer and it will be returned to the virtqueue.
    drop(transfer);
}
```
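
The batch transfer of TransferTokens mentioned above could then be used roughly as follows. This is only a sketch: `dispatch_batch` is an assumed name for the batching entry point, not necessarily the actual interface; the point is that several tokens are handed to the queue at once, so the device is only notified once.

```rust
// Sketch (assumed API): prepare several TransferTokens first, then hand them
// to the queue as one batch, notifying the device only once.
// buff_tkn_a and buff_tkn_b are BufferTokens prepared as shown above.
let tkn_a: TransferToken = buff_tkn_a.provide();
let tkn_b: TransferToken = buff_tkn_b.provide();
let transfers: Vec<Transfer> = dispatch_batch(vec![tkn_a, tkn_b]);
```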

For further details please look at the resources below and the actual code, which is carefully documented.

## Resources

- The specification this code's functionality is based on can be found here: [Virtio Spec v1.1](https://docs.oasis-open.org/virtio/virtio/v1.1/csprd01/virtio-v1.1-csprd01.html)
- My master's thesis: [MA_Schulz_Frederik.pdf](https://github.com/hermitcore/libhermit-rs/files/5746925/MA_Schulz_Frederik.pdf)


Co-authored-by: mustermeiszer <[email protected]>
@stlankes
Contributor

stlankes commented Jan 4, 2021

Sorry, I overlooked this PR. It is similar to #99. Consequently, I am closing this PR.

@stlankes stlankes closed this Jan 4, 2021
simonschoening pushed a commit to simonschoening/libhermit-rs that referenced this pull request Aug 26, 2021
112: New VirtioNet Driver and Extended Virtio Infrastructure (PackedVq) r=stlankes a=mustermeiszer