# [RFC] A new stack-based vector #2990 (closed)

364 additions to `text/2978-stack_based_vec.md`:

- Feature Name: `stack_based_vec`
- Start Date: 2020-09-27
- RFC PR: [rust-lang/rfcs#2990](https://github.com/rust-lang/rfcs/pull/2990)
- Rust Issue: [rust-lang/rust#0000](https://github.com/rust-lang/rust/issues/0000)

# Summary
[summary]: #summary

This RFC, which depends on and takes advantage of the upcoming stabilization of constant generics (`min_const_generics`), proposes a new vector named `ArrayVec` that manages stack memory and can be seen as an alternative to the built-in structure that handles heap-allocated memory, aka `alloc::vec::Vec<T>`.

# Motivation
[motivation]: #motivation

`core::collections::ArrayVec<T>` should be added to the standard library due to its importance and potential.

### Optimization

Stack-based allocation is generally faster than heap-based allocation and can be used as an optimization in places that otherwise would have to call an allocator. Some resource-constrained embedded devices can also benefit from it.

**Review comment:**

Heap-based allocation on contiguous memory regions should not be slower (arena allocator), since the hardware prefetcher can generally resolve the necessary indirections (and keeps the pointer in cache). [Generally, hardware prefetchers handle linear access patterns efficiently.] Please correct me if and how I am wrong here.

Please explain the tradeoff briefly in the reference-level explanation. For example, once std becomes allocator-aware, you could swap out the allocator by setting an annotation on the data structure.

However, I am unsure when std (and `Vec` in particular) will be at that point.

**Review comment (Contributor):**

This is using inline storage, so there is no indirection for nested storage, e.g. if we have an `ArrayVec` of `ArrayVec`s.
This is the same argument as for using `Vec<T>` rather than `LinkedList<T>`.

**Review comment:**

So you mean it is reserved during compile-time and that's why it should be treated specially?

I did not hear an argument why it will not be possible to "slap a const before allocator operations" (extending the `Vec` API in some way to make it usable in core) to get the same effect. Maybe I also have flawed assumptions.

**Review comment (Contributor):**

If you have a list of `ArrayVec<i32, 3>` representing `[[1, 2, 3], [4, 5, 6]]`, it is laid out in memory contiguously, like this:

```txt
+---------------------+---------------------+
| +-----+---+---+---+ | +-----+---+---+---+ |
| | len | 1 | 2 | 3 | | | len | 4 | 5 | 6 | |
| +-----+---+---+---+ | +-----+---+---+---+ |
+---------------------+---------------------+
```

If you have a list of `Vec<i32>`, it is instead laid out like this, requiring indirection through a pointer:

```txt
+---------------------+---------------------+
| +-----+-----+-----+ | +-----+-----+-----+ |
| | ptr | len | cap | | | ptr | len | cap | |
| +--|--+-----+-----+ | +--|--+-----+-----+ |
+----|----------------+----|----------------+
     |                     |
     |   +---+---+---+     |   +---+---+---+
     '-> | 1 | 2 | 3 |     '-> | 4 | 5 | 6 |
         +---+---+---+         +---+---+---+
```

**Review comment (matu3ba, Jan 14, 2021):**

That should be `ArrayVec<i32, 6>` and would make a nice explanation. Can you add the base address and capacity to the upper ASCII graphic? Ideally, the invalid parts would be sketched as well.

I think my assumptions on what optimisations are feasible are too optimistic.

**Review comment (mbartlett21, Contributor, Jan 14, 2021):**

Like this then?

If you have a Vec<ArrayVec<i32, 4>>, representing [[1, 2, 3], [4, 5]], it is laid out in memory contiguously, apart from the pointer, like this:

```txt
Vec<ArrayVec<i32, 4>>
 +-----+-----+-----+
 | ptr | len | cap |
 +--|--+-----+-----+
    |
    |   +------------------------------+--------------------------+----------+
    |   |       ArrayVec<i32, 4>       |     ArrayVec<i32, 4>     |          |
    |   | +-----+---+---+---+--------+ | +-----+---+---+--------+ |  Unused  |
    '-> | | len | 1 | 2 | 3 | Unused | | | len | 4 | 5 | Unused | | capacity |
        | +-----+---+---+---+--------+ | +-----+---+---+--------+ |          |
        +------------------------------+--------------------------+----------+
```

If you have a Vec<Vec<i32>>, it is instead laid out like this, requiring indirection through two pointers.

```txt
Vec<Vec<i32>>
 +-----+-----+-----+
 | ptr | len | cap |
 +--|--+-----+-----+
    |
    |   +---------------------+---------------------+----------+
    |   |       Vec<i32>      |       Vec<i32>      |          |
    |   | +-----+-----+-----+ | +-----+-----+-----+ |  Unused  |
    '-> | | ptr | len | cap | | | ptr | len | cap | | capacity |
        | +--|--+-----+-----+ | +--|--+-----+-----+ |          |
        +----|----------------+----|----------------+----------+
             |                     |
             |                     |   +---+---+--------+
             |                     '-> | 4 | 5 | Unused |
             |                         +---+---+--------+
             |   +---+---+---+--------+
             '-> | 1 | 2 | 3 | Unused |
                 +---+---+---+--------+
```

**Review comment (Author):**

@mbartlett21 I have used your illustration in the RFC. Hope you don't mind.


**Review comment (tgross35, Contributor, Sep 11, 2022):**

I would suggest adding a new section, something like:

### C FFI

An extremely common pattern in C is to pass a buffer with a maximum size and a
current length, e.g., `myfunc(int buf[BUF_SIZE], int* current_len)`, and expect a
callee function to be able to modify the `buf` and `len` to store its return data.
Currently, the best way to interoperate with this FFI pattern in Rust without relying
on external libraries is to create a slice and manually maintain the length as changes
are made.

This is an _ideal_ use case for `ArrayVec` in core, as it is a representation of a fixed-
capacity buffer with a current length. This greatly simplifies common patterns
required when working with existing C APIs, especially in embedded development.
Further, it would provide the backbone for an easier `&str` to `CStr` transition that
is not directly possible without going through heap-based `CString`, which is not
available in `no_std` environments.

Interoperability between C and Rust is currently "good", but providing a better
representation for common C patterns is an excellent way to grow Rust as a language
and ease adoption in C-dominant environments.
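
The buffer-plus-length pattern described above can be sketched in today's Rust, with the caller manually keeping the buffer and length in sync. `fill_with_squares` is a hypothetical stand-in for the C callee, not an API from the RFC:

```rust
// Sketch of the C pattern described above, in today's Rust without ArrayVec:
// the caller owns a fixed buffer plus a separate length that must be kept in
// sync by hand. `fill_with_squares` plays the role of the C callee and is a
// hypothetical example.

const BUF_SIZE: usize = 8;

// Mimics `void myfunc(int buf[BUF_SIZE], size_t *current_len)`.
fn fill_with_squares(buf: &mut [i32; BUF_SIZE], current_len: &mut usize) {
    *current_len = 0;
    for i in 0..3 {
        buf[*current_len] = (i * i) as i32;
        *current_len += 1; // easy to forget or double-update by hand
    }
}

fn main() {
    let mut buf = [0i32; BUF_SIZE];
    let mut len = 0usize;
    fill_with_squares(&mut buf, &mut len);
    // Only `buf[..len]` is meaningful; nothing enforces this invariant.
    assert_eq!(&buf[..len], &[0, 1, 4]);
}
```

An `ArrayVec` would bundle the buffer and the length into one value, so the "only the first `len` elements are valid" invariant is maintained by the type rather than by convention.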

**Review comment (Contributor):**

Also, I don't know where best it would go, but some section here should really drive home the usability in `no_std` environments (embedded, bare metal, kernel, safety-critical, etc.). Rust is gaining traction in the kernel, of course, but it seems like it has yet to really break into any of the other environments.

In many of these places, dependencies are frowned upon.

### Unstable features and constant functions

By adding `ArrayVec` to the standard library, it will be possible to use internal unstable features to optimize machine-code generation and to expose public constant functions without the need for a nightly compiler.

### Useful in the real world

`arrayvec` is one of the most downloaded projects on `crates.io` and is used by thousands of projects, including Rustc itself. It currently ranks ninth in the "Data structures" category and seventy-fifth in the "All Crates" category.

### Building block

Just like `Vec`, `ArrayVec` is a primitive vector on which higher-level structures can be built, for example a stack-based matrix or binary heap.

### Unification

There are a lot of different crates on the subject that try to do roughly the same thing; a centralized implementation would stop the current fragmentation.

# Guide-level explanation
[guide-level-explanation]: #guide-level-explanation

`ArrayVec` is a container that encapsulates fixed-size buffers.

```rust
let mut v: ArrayVec<i32, 4> = ArrayVec::new();
v.push(1);
v.push(2);

assert_eq!(v.len(), 2);
assert_eq!(v[0], 1);

assert_eq!(v.pop(), Some(2));
assert_eq!(v.len(), 1);

v[0] = 7;
assert_eq!(v[0], 7);

v.extend([1, 2, 3].iter().copied());

for element in &v {
    println!("{}", element);
}
assert_eq!(v, [7, 1, 2, 3]);
```

Instead of relying on a heap allocator, stack-based memory is added and removed on demand in a last-in-first-out (LIFO) order according to the calling workflow of a program. `ArrayVec` takes advantage of this predictable behavior to reserve an exact amount of uninitialized bytes up-front to form an internal buffer.

```rust
// `array_vec` can store up to 64 elements
let mut array_vec: ArrayVec<i32, 64> = ArrayVec::new();
```

Another potential use-case is the usage within constant environments:

```rust
const MY_CONST_ARRAY_VEC: ArrayVec<i32, 10> = {
    let mut v = ArrayVec::new();
    v.push(1);
    v.push(2);
    v.push(3);
    v.push(4);
    v
};
```

Of course, fixed buffers lead to inflexibility: unlike `Vec`, the underlying capacity cannot expand at run-time, and there will never be more than 64 elements in the above example.

```rust
// This vector can store up to 0 elements, therefore, nothing at all
let mut array_vec: ArrayVec<i32, 0> = ArrayVec::new();
array_vec.push(1); // Error!
```

A good question is: should I use `core::collections::ArrayVec<T>` or `alloc::vec::Vec<T>`? Well, `Vec` is already good enough for most situations, while stack allocation usually shines for small sizes.

* Do you have a known upper bound?

* How much memory are you going to allocate for your program? The default values of `RUST_MIN_STACK` or `ulimit -s` might not be enough.

**Review comment:**

Is there a tool to measure this?


* Are you using nested `Vec`s? `Vec<ArrayVec<T, N>>` might be better than `Vec<Vec<T>>`.

**Review comment:**

How is it better? More performance at the cost of more memory usage (kept as reserve), I guess?


```txt
let _: Vec<Vec<i32>> = vec![vec![1, 2, 3], vec![4, 5]];

+-----+-----+-----+
| ptr | len | cap |
+--|--+-----+-----+
   |
   |   +---------------------+---------------------+----------+
   |   |       Vec<i32>      |       Vec<i32>      |          |
   |   | +-----+-----+-----+ | +-----+-----+-----+ |  Unused  |
   '-> | | ptr | len | cap | | | ptr | len | cap | | capacity |
       | +--|--+-----+-----+ | +--|--+-----+-----+ |          |
       +----|----------------+----|----------------+----------+
            |                     |
            |                     |   +---+---+--------+
            |                     '-> | 4 | 5 | Unused |
            |                         +---+---+--------+
            |   +---+---+---+--------+
            '-> | 1 | 2 | 3 | Unused |
                +---+---+---+--------+

Illustration credits: @mbartlett21
```

**Review comment (Contributor):**

Should this be `` ```text ``?

**Review comment (Contributor):**

You don't need to credit me.

Can you see the `N` calls to the heap allocator, where `N` is the length of the outer `Vec`? In the following illustration, the inner `ArrayVec`s are instead placed contiguously in the same allocation.

**Review comment (Contributor):**

This could possibly be worded better:

As shown in the diagram above, each element of the vector has its own separate heap allocation, meaning more book-keeping for the global allocator to do.


```txt
let _: Vec<ArrayVec<i32, 4>> = vec![array_vec![1, 2, 3], array_vec![4, 5]];

+-----+-----+-----+
| ptr | len | cap |
+--|--+-----+-----+
   |
   |   +------------------------------+--------------------------+----------+
   |   |       ArrayVec<i32, 4>       |     ArrayVec<i32, 4>     |          |
   |   | +-----+---+---+---+--------+ | +-----+---+---+--------+ |  Unused  |
   '-> | | len | 1 | 2 | 3 | Unused | | | len | 4 | 5 | Unused | | capacity |
       | +-----+---+---+---+--------+ | +-----+---+---+--------+ |          |
       +------------------------------+--------------------------+----------+

Illustration credits: @mbartlett21
```

Each use-case is different and should be pondered individually. In case of doubt, stick with `Vec`.

For a more technical overview, take a look at the following operations:

```rust
// `array_vec` has a pre-allocated memory of 2048 bits (32 * 64) that can store up
// to 64 signed integers.
let mut array_vec: ArrayVec<i32, 64> = ArrayVec::new();

// Although reserved, there isn't anything explicitly stored yet
assert_eq!(array_vec.len(), 0);

// Initializes the first 32 bits with a simple '1' integer or
// 00000000 00000000 00000000 00000001 bits
array_vec.push(1);

// Our vector memory is now split into a 32/2016 pair of initialized and
// uninitialized memory respectively
```

**Review comment (matu3ba, Jan 11, 2021), on the two comment lines above:**

> `// Our vector memory is now split into 32 bit initialized and 2016 bit uninitialized memory.`

**Review comment (Contributor):**

Shouldn't that be "or"?

**Review comment:**

> `// array_vec has a pre-allocated memory of 2048 bits (32 * 64) that can store up to 64 decimals.`

32 + 2016 = 2048

```rust

assert_eq!(array_vec.len(), 1);
```

# Reference-level explanation
[reference-level-explanation]: #reference-level-explanation

`ArrayVec` is a contiguous memory block in which elements can be collected, and is therefore a collection by definition. Even though `core::collections` doesn't currently exist, it is the most natural module placement.

To avoid lengthy and conflicting conversations, the API will mimic most of the current `Vec` surface, which also means that all methods that depend on valid user input or sufficient capacity will panic at run-time when something goes wrong, for example, removing an element that is out of bounds.

```rust
// Please, bear in mind that these methods are simply suggestions. Discussions about the
// API should probably take place elsewhere.

pub struct ArrayVec<T, const N: usize> {
    data: MaybeUninit<[T; N]>,
    len: usize,
}
```

**Review comment (mbartlett21, Contributor, Jan 15, 2021), on the `data` field:**

Should this be `[MaybeUninit<T>; N]` or `MaybeUninit<[T; N]>`? I think the former would be better, since it allows simpler and safer implementations of most of the functions, due to it being an array. The only complication is `From::from<[T; N]>`, since generic transmutes don't work yet and instead have to go through `transmute_copy`.

**Review comment (c410-f3r, Author, Jan 16, 2021):**

In my opinion, `MaybeUninit<[T; N]>` has greater semantic value.

If they generate the same machine code, then `[MaybeUninit<T>; N]` should be preferable because, beyond your arguments, it is more popular than `MaybeUninit<[T; N]>`.

**Review comment (Contributor):**

I would rather `[MaybeUninit<T>; N]`, since saying `MaybeUninit<[T; N]>` is like saying either all of the array or none of the array is initialized. It also allows simpler and safer implementations of `push`, `pop`, and other functions, since pointer offsets aren't needed:

```rust
fn push(&mut self, element: T) -> Result<(), T> {
    if self.len == N {
        Err(element)
    } else {
        self.data[self.len].write(element);
        self.len += 1;
        Ok(())
    }
}

fn pop(&mut self) -> Option<T> {
    if self.len == 0 {
        None
    } else {
        self.len -= 1;
        Some(unsafe { self.data[self.len].assume_init_read() })
    }
}
```

**Review comment (c410-f3r, Author, Jan 19, 2021):**

@mbartlett21 My biggest concern in this case is about machine code generation, or how well `[MaybeUninit<T>; N]` performs relative to `MaybeUninit<[T; N]>`. I just need some time to see if they generate the same assembly, and if they really do, then we can indeed stick with `[MaybeUninit<T>; N]`.

cc rust-lang/rust#81167

**Review comment (mbartlett21, Contributor, Jan 20, 2021):**

At the moment, there is a bounds check on the array accesses in the above examples. If I add this function to the vector and then use it in the methods, the bounds checks are removed:

```rust
#[inline(always)]
fn assume_len_in_bounds(&self) {
    unsafe { core::intrinsics::assume(self.len <= N) }
}

// The edited methods from above
fn push(&mut self, element: T) -> Result<(), T> {
    self.assume_len_in_bounds();
    if self.len == N {
        Err(element)
    } else {
        self.data[self.len].write(element);
        self.len += 1;
        Ok(())
    }
}

fn pop(&mut self) -> Option<T> {
    if self.len == 0 {
        None
    } else {
        // This has to be before the decrement,
        // since otherwise `N` is a valid value of `len`
        self.assume_len_in_bounds();
        self.len -= 1;
        Some(unsafe { self.data[self.len].assume_init_read() })
    }
}
```

Is there any way that we can say that an invariant of the type is that `self.len <= N`? That could also open up options for Rust to niche-fill the type...

**Review comment (Author):**

Maybe `assume_len_in_bounds` could be placed in the constructor?

> Is there any way that we can say that an invariant of the type is that `self.len <= N`?

Given enough context, the compiler can infer whether a branch is invariant. What Rustc and most people usually do is place something like `assert!(self.len <= N)` at the beginning of the method.

**Review comment (Member):**

I might be wishing for ponies here, but could this somehow be generalized further so that the data could be either a `[MaybeUninit<T>; N]` or a `Box<[MaybeUninit<T>; N]>`? This could make gradually initializing a box more ergonomic. Once `len == N`, the fully initialized `Box` could be extracted without reallocation. Then the user would need zero unsafe code.

**Review comment (Contributor):**

The reason `Box` isn't used is so that there is no heap allocation, and it can also be used in `const` contexts.

If there were an impl of `TryInto<Box<[T; N]>>` for `Box<[T]>`, the boxed case could be handled without either reallocation or unsafe code:

```rust
let mut v = Vec::with_capacity(2);
v.push(this);
v.push(that);
let b: Box<[T; 2]> = v.into_boxed_slice().try_into().unwrap();
```

**Review comment (Member):**

That actually works already. Not sure why I didn't think about the vec-box conversions. No ponies needed then.

**Review comment (Contributor):**

I didn't realize that there was already an impl of `TryFrom<Box<[T]>>` for `Box<[T; N]>`...

**Review comment (pickfire, Contributor, Oct 8, 2020), on the `len` field:**

Should it be `usize` when we probably can't even hit `u16`?

**Review comment (Contributor):**

"Probably" is not a great fit for the standard library.

Ideally we'd use specialization of some private trait with an associated type to choose the smallest integer type that can represent `0..=N`. However, it looks like boolean conditions based on const parameters cannot be used in `where` clauses today, and I don't know if supporting that is planned at all. And this would only help where `align_of::<T>()` is less than `size_of::<usize>()`.

**Review comment (Member):**

Potential hack until specialization works: store the size in a `[u8; M]`, which gets converted to a `usize` on demand. Calculate `M` based on `N`.

```rust

impl<T, const N: usize> ArrayVec<T, N> {
    // Constructors

    pub const fn new() -> Self;

    // Infallible Methods

    pub const fn as_mut_ptr(&mut self) -> *mut T;

    pub const fn as_mut_slice(&mut self) -> &mut [T];

    pub const fn as_ptr(&self) -> *const T;

    pub const fn as_slice(&self) -> &[T];

    pub const fn capacity(&self) -> usize;

    pub fn clear(&mut self);

    pub const fn is_empty(&self) -> bool;

    pub const fn len(&self) -> usize;

    pub fn retain<F>(&mut self, mut f: F)
    where
        F: FnMut(&mut T) -> bool;

    pub fn truncate(&mut self, len: usize);

    // Methods that can panic at run-time

    pub fn drain<R>(&mut self, range: R) -> Drain<'_, T, N>
    where
        R: RangeBounds<usize>;

    pub fn extend_from_cloneable_slice<'a>(&mut self, other: &'a [T])
    where
        T: Clone;

    pub fn extend_from_copyable_slice<'a>(&mut self, other: &'a [T])
    where
        T: Copy;

    pub const fn insert(&mut self, idx: usize, element: T);

    pub const fn push(&mut self, element: T);

    pub const fn remove(&mut self, idx: usize) -> T;

    pub fn splice<I, R>(&mut self, range: R, replace_with: I) -> Splice<'_, I::IntoIter, N>
    where
        I: IntoIterator<Item = T>,
        R: RangeBounds<usize>;

    pub fn split_off(&mut self, at: usize) -> Self;

    pub fn swap_remove(&mut self, idx: usize) -> T;

    // Verifiable methods

    pub const fn pop(&mut self) -> Option<T>;
}
```

**Review comment (Member):**


These APIs heavily mirror `Vec`. But with the introduction of const-generic arrays there are other needs in the standard library that are not about providing a more lightweight `Vec`, but instead a way to partially initialize arrays and safely drop them.

rust-lang/rust#81615 is in search of a return type for cases where there may or may not have been enough elements to initialize an array of fixed size. `ArrayVec` could fill that role, but only if it gains some additional const-generic functions, such as safely unwrapping an array if and only if `len == N`.

This gradual initialization could also be handled by ArrayVec (iirc very similar code also exists in several other places):

https://github.com/rust-lang/rust/blob/186f7ae5b04d31d8ccd1746ac63cdf1ab4bc2354/library/core/src/array/mod.rs#L433-L463

Another feature with which this would intersect is SIMD. A partially filled ArrayVec is quite similar to SIMD value + mask.

**Review comment (Contributor):**


The conversion of a filled `ArrayVec` can be done like this:

```rust
impl<T, const N: usize> TryFrom<ArrayVec<T, N>> for [T; N] {
    type Error = ArrayVec<T, N>;
    fn try_from(other: ArrayVec<T, N>) -> Result<[T; N], ArrayVec<T, N>> {
        if other.len == N {
            let (array, _) = other.into_raw_parts();
            Ok(unsafe { transmute(array) })
        } else {
            Err(other)
        }
    }
}
```


Meaningless, unstable, and deprecated methods like `reserve` or `drain_filter` weren't considered. A concrete implementation is available at https://github.com/c410-f3r/stack-based-vec.
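
To make the layout discussion above concrete, here is a minimal, self-contained sketch using the `[MaybeUninit<T>; N]` representation favoured in the review comments. It is an illustration of the technique, not the RFC's normative API; the fallible `push` mirrors the "verifiable methods" variant discussed below.

```rust
use core::mem::MaybeUninit;

// Minimal illustrative ArrayVec: a fixed inline buffer plus a length.
// Invariant: slots `0..len` are always initialized.
pub struct ArrayVec<T, const N: usize> {
    data: [MaybeUninit<T>; N],
    len: usize,
}

impl<T, const N: usize> ArrayVec<T, N> {
    pub fn new() -> Self {
        // An array of `MaybeUninit` is allowed to start uninitialized.
        Self {
            data: unsafe { MaybeUninit::uninit().assume_init() },
            len: 0,
        }
    }

    pub fn len(&self) -> usize {
        self.len
    }

    pub const fn capacity(&self) -> usize {
        N
    }

    /// Fallible push: hands the element back when the buffer is full.
    pub fn push(&mut self, element: T) -> Result<(), T> {
        if self.len == N {
            Err(element)
        } else {
            self.data[self.len].write(element);
            self.len += 1;
            Ok(())
        }
    }

    pub fn pop(&mut self) -> Option<T> {
        if self.len == 0 {
            None
        } else {
            self.len -= 1;
            // Safety: slots below `len` are always initialized.
            Some(unsafe { self.data[self.len].assume_init_read() })
        }
    }
}

impl<T, const N: usize> Drop for ArrayVec<T, N> {
    fn drop(&mut self) {
        // Drop only the initialized prefix.
        while self.pop().is_some() {}
    }
}

fn main() {
    let mut v: ArrayVec<i32, 2> = ArrayVec::new();
    assert!(v.push(1).is_ok());
    assert!(v.push(2).is_ok());
    assert_eq!(v.push(3), Err(3)); // full: the element is handed back
    assert_eq!(v.pop(), Some(2));
    assert_eq!(v.len(), 1);
}
```

A production implementation would additionally need the panicking `Vec`-like surface, slice conversions, and iterator support, but the two-field representation above is the core of every design discussed in this thread.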

# Drawbacks
[drawbacks]: #drawbacks

### Additional complexity

New and existing users are likely to find it difficult to differentiate the purpose of each vector type, especially people who don't have a theoretical background in memory management.

### The current ecosystem is fine

`ArrayVec` might be overkill in certain situations. If someone wants to use stack memory in a specific application, then it is just a matter of grabbing the appropriate crate.

# Prior art
[prior-art]: #prior-art

These are the most known structures:

* `arrayvec::ArrayVec`: Uses declarative macros and an `Array` trait for implementations but lacks support for arbitrary sizes.

**Review comment (Contributor):**


https://github.com/bluss/arrayvec now supports arbitrary sizes using const generics.

* `heapless::Vec`: With the usage of `typenum`, can support arbitrary sizes without a nightly compiler.
* `staticvec::StaticVec`: Uses unstable constant generics for arrays of arbitrary sizes.
* `tinyvec::ArrayVec`: Supports fixed and arbitrary (unstable feature) sizes but requires `T: Default` for security reasons.

As seen, no implementation stands out among the others, because all of them share roughly the same purpose and functionality. Noteworthy is the usage of constant generics, which makes it possible to create an efficient and unified approach for arbitrary array sizes.

**Review comment:**


Macros are slower to compile, so I wonder why ArrayVec is so popular. What flexibility does it give?

**Review comment (Member):**


Maybe it's just due to its age; macros have worked for much longer than const generics.

**Review comment:**


> Macros are slower to compile, so I wonder why ArrayVec is so popular. What flexibility does it give?

I think you're thinking specifically of procedural macros, which can indeed be slow as molasses to compile (depending on your hardware) for various reasons, some of them related to the particular crates that basically all procedural macros have no choice but to depend on.

The macros that the crates discussed in the RFC sometimes make use of, on the other hand, are in all cases just regular `macro_rules!` macros, which have no noteworthy impact on compile times at all.


# Unresolved questions
[unresolved-questions]: #unresolved-questions

### Verifiable methods

Unlike methods that will abort the current thread execution, verifiable methods will signal that something has gone wrong or is missing. This approach has two major benefits:

**Review comment (Member):**


This should also list downsides, e.g. having an interface inconsistent with Vec.


- `Security`: The user is forced to handle possible corner-cases, which enables graceful program shutdown by propagating errors until `fn main() -> Result<(), MyCustomErrors>` is reached.

- `Flexibility`: Gives users the freedom to choose between, for example, `my_full_array_vec.push(100)?` (check), `my_full_array_vec.push(100).unwrap()` (panic), or `let _ = my_full_array_vec.push(100);` (ignore).

In regards to performance, since the upper capacity bound is known at compile-time and the majority of methods are `#[inline]`, the compiler will probably have the necessary information to remove most of the conditional bounds checking when producing optimized machine code.

```rust
pub fn drain<R>(&mut self, range: R) -> Option<Drain<'_, T, N>>
where
    R: RangeBounds<usize>;

pub fn extend_from_cloneable_slice<'a>(&mut self, other: &'a [T]) -> Result<(), &'a [T]>
where
    T: Clone;

pub fn extend_from_copyable_slice<'a>(&mut self, other: &'a [T]) -> Result<(), &'a [T]>
where
    T: Copy;

pub const fn insert(&mut self, idx: usize, element: T) -> Result<(), T>;

pub const fn push(&mut self, element: T) -> Result<(), T>;

pub const fn remove(&mut self, idx: usize) -> Option<T>;

pub fn splice<I, R>(&mut self, range: R, replace_with: I) -> Option<Splice<'_, I::IntoIter, N>>
where
    I: IntoIterator<Item = T>,
    R: RangeBounds<usize>;

pub fn split_off(&mut self, at: usize) -> Option<Self>;

pub fn swap_remove(&mut self, idx: usize) -> Option<T>;
```

In my opinion, every fallible method should return either `Option` or `Result` instead of panicking at run-time. Although the future addition of `try_*` variants could mitigate this situation, it would also bring additional maintenance burden.

### Nomenclature

`ArrayVec` will conflict with `arrayvec::ArrayVec` and `tinyvec::ArrayVec`.

### Prelude

Should it be included in the prelude?

### Macros

```rust
// Instance with 1i32, 2i32 and 3i32
let _: ArrayVec<i32, 33> = array_vec![1, 2, 3];

// Instance with 1i32 and 1i32
let _: ArrayVec<i32, 64> = array_vec![1; 2];
```
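
The RFC leaves the `array_vec!` macro unspecified, but one plausible expansion is a plain `macro_rules!` macro built on a fallible `push`. The sketch below is hypothetical; `MiniArrayVec` is a stand-in type defined here only so the sketch is self-contained and runnable:

```rust
// Hypothetical `array_vec!` expansion; the RFC does not specify the macro.
// `MiniArrayVec` is a stand-in whose storage is a `Vec` capped at `N`; the
// real type would use inline memory.
struct MiniArrayVec<T, const N: usize> {
    data: Vec<T>,
}

impl<T, const N: usize> MiniArrayVec<T, N> {
    fn new() -> Self {
        Self { data: Vec::new() }
    }

    fn push(&mut self, element: T) -> Result<(), T> {
        if self.data.len() == N {
            Err(element)
        } else {
            self.data.push(element);
            Ok(())
        }
    }

    fn len(&self) -> usize {
        self.data.len()
    }
}

macro_rules! array_vec {
    // `array_vec![a, b, c]`: push each element in order.
    ($($x:expr),* $(,)?) => {{
        let mut v = MiniArrayVec::new();
        $( v.push($x).expect("capacity exceeded"); )*
        v
    }};
    // `array_vec![x; n]`: push `n` clones of `x`.
    ($x:expr; $n:expr) => {{
        let mut v = MiniArrayVec::new();
        for _ in 0..$n {
            v.push($x.clone()).expect("capacity exceeded");
        }
        v
    }};
}

fn main() {
    let a: MiniArrayVec<i32, 33> = array_vec![1, 2, 3];
    assert_eq!(a.len(), 3);

    let b: MiniArrayVec<i32, 64> = array_vec![1; 2];
    assert_eq!(b.len(), 2);
}
```

Whether the macro should panic or propagate an error when the literal list exceeds `N` is exactly the panicking-versus-verifiable question raised above.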

# Future possibilities
[future-possibilities]: #future-possibilities

### Dynamic array

A hybrid approach between heap and stack memory could also be provided natively in the future.

```rust
pub struct DynVec<T, const N: usize> {
    // Hides the internal implementation
    data: DynVecData<T, N>,
}

impl<T, const N: usize> DynVec<T, N> {
    // Much of the `Vec` API goes here
}

// This is just an example. `Vec<T>` could be a `Box` and the `enum` a `union`.
enum DynVecData<T, const N: usize> {
    Heap(Vec<T>),
    Inline(ArrayVec<T, N>),
}
```

The above description is very similar to what `smallvec` already does.
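
The spill behaviour can be sketched without unsafe code. The following is a hypothetical illustration of the idea, not the RFC's design: `Option<T>` stands in for `MaybeUninit`, and the method names are invented for the example:

```rust
// Hypothetical hybrid vector: elements live inline until the inline capacity
// `N` is exceeded, then everything moves ("spills") to the heap, similar to
// what `smallvec` does. `Option<T>` keeps the sketch free of unsafe code; a
// real implementation would use `MaybeUninit`.
enum DynVecData<T, const N: usize> {
    Inline { buf: [Option<T>; N], len: usize },
    Heap(Vec<T>),
}

pub struct DynVec<T, const N: usize> {
    data: DynVecData<T, N>,
}

impl<T, const N: usize> DynVec<T, N> {
    pub fn new() -> Self {
        Self {
            data: DynVecData::Inline { buf: std::array::from_fn(|_| None), len: 0 },
        }
    }

    pub fn push(&mut self, value: T) {
        match &mut self.data {
            DynVecData::Inline { buf, len } if *len < N => {
                buf[*len] = Some(value);
                *len += 1;
            }
            DynVecData::Inline { buf, .. } => {
                // Inline buffer is full: move everything to the heap.
                let mut v: Vec<T> = buf.iter_mut().map(|slot| slot.take().unwrap()).collect();
                v.push(value);
                self.data = DynVecData::Heap(v);
            }
            DynVecData::Heap(v) => v.push(value),
        }
    }

    pub fn len(&self) -> usize {
        match &self.data {
            DynVecData::Inline { len, .. } => *len,
            DynVecData::Heap(v) => v.len(),
        }
    }

    pub fn is_spilled(&self) -> bool {
        matches!(self.data, DynVecData::Heap(_))
    }
}

fn main() {
    let mut v: DynVec<i32, 2> = DynVec::new();
    v.push(10);
    v.push(20);
    assert!(!v.is_spilled()); // still inline
    v.push(30); // exceeds N = 2: spills to the heap
    assert!(v.is_spilled());
    assert_eq!(v.len(), 3);
}
```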

### Generic collections and generic strings

Many structures that use `alloc::vec::Vec` as the underlying storage can also use stack or hybrid memory. For example, a hypothetical `GenericString<S>`, where `S` is the storage, could be split into:

```rust
type DynString<const N: usize> = GenericString<DynVec<u8, N>>;
type HeapString = GenericString<Vec<u8>>;
type StackString<const N: usize> = GenericString<ArrayVec<u8, N>>;
```