
Unnecessary is_power_of_two() Restriction for Flags in enumflags Macro Expansion #63

crazyjackel opened this issue Dec 24, 2024 · 8 comments

@crazyjackel

When using the #[bitflags] macro, I encountered an issue where flags must have exactly one set bit. My code is as follows:

#[bitflags]
#[repr(u8)]
#[derive(Copy, Clone, Debug, PartialEq)]
enum AiPrimitiveType {
    Unknown = 0x00,
    Point = 0x01,
    Line = 0x02,
    Triangle = 0x04,
    Polygon = 0x08,
    NgonEncodingFlag = 0x10,
}

This produces the following error:

error: Flags must have exactly one set bit
  --> src/structs/mesh.rs:40:5
   |
40 |     Unknown = 0x00,
   |     ^^^^^^^

The error is caused by the check_flag logic:

if !n.is_power_of_two() {
    Err(syn::Error::new(
        flag.span,
        "Flags must have exactly one set bit",
    ))
} 

However, looking at the macro expansion:

#[repr(u8)]
#[derive(Copy, Clone, Debug, PartialEq)]
enum AiPrimitiveType {
    Point = 0x01,
    Line = 0x02,
    Triangle = 0x04,
    Polygon = 0x08,
    NgonEncodingFlag = 0x10,
}
impl ::enumflags2::_internal::core::ops::Not for AiPrimitiveType {
    type Output = ::enumflags2::BitFlags<Self>;
    #[inline(always)]
    fn not(self) -> Self::Output {
        use ::enumflags2::BitFlags;
        BitFlags::from_flag(self).not()
    }
}
impl ::enumflags2::_internal::core::ops::BitOr for AiPrimitiveType {
    type Output = ::enumflags2::BitFlags<Self>;
    #[inline(always)]
    fn bitor(self, other: Self) -> Self::Output {
        use ::enumflags2::BitFlags;
        BitFlags::from_flag(self) | other
    }
}
impl ::enumflags2::_internal::core::ops::BitAnd for AiPrimitiveType {
    type Output = ::enumflags2::BitFlags<Self>;
    #[inline(always)]
    fn bitand(self, other: Self) -> Self::Output {
        use ::enumflags2::BitFlags;
        BitFlags::from_flag(self) & other
    }
}
impl ::enumflags2::_internal::core::ops::BitXor for AiPrimitiveType {
    type Output = ::enumflags2::BitFlags<Self>;
    #[inline(always)]
    fn bitxor(self, other: Self) -> Self::Output {
        use ::enumflags2::BitFlags;
        BitFlags::from_flag(self) ^ other
    }
}
unsafe impl ::enumflags2::_internal::RawBitFlags for AiPrimitiveType {
    type Numeric = u8;
    const EMPTY: Self::Numeric = 0;
    const DEFAULT: Self::Numeric = 0;
    const ALL_BITS: Self::Numeric = 0
        | (Self::Point as u8)
        | (Self::Line as u8)
        | (Self::Triangle as u8)
        | (Self::Polygon as u8)
        | (Self::NgonEncodingFlag as u8);
    const BITFLAGS_TYPE_NAME: &'static str = "BitFlags<AiPrimitiveType>";
    fn bits(self) -> Self::Numeric {
        self as u8
    }
}
impl ::enumflags2::BitFlag for AiPrimitiveType {}

This restriction seems unnecessary and limits usability, particularly for flags like Unknown = 0x00 or when defining named combinations. Such variants would still cast to u8 exactly as declared, so the generated impls would behave as expected.

The only potential issue I see is with ALL_BITS as currently implemented. This could be resolved by ensuring it rounds up to the next power of two, setting all lower bits appropriately. Could this restriction be relaxed to allow more flexibility?
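
One possible reading of that ALL_BITS suggestion, sketched in plain Rust (the all_bits_upto helper is hypothetical, not crate code):

// Hypothetical helper: a mask with every bit set from bit 0 up to the
// highest bit of the largest declared flag.
fn all_bits_upto(max_flag: u8) -> u8 {
    if max_flag == 0 {
        return 0;
    }
    let top = 1u8 << (7 - max_flag.leading_zeros()); // highest set bit
    top | (top - 1) // fill in all lower bits
}

fn main() {
    // With the highest flag 0x10, the mask covers bits 0 through 4:
    assert_eq!(all_bits_upto(0x10), 0x1F);
}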

@meithecatte
Owner

In your particular example, I'm a bit confused. Do you actually have situations where Line | Triangle would occur, to encode something that's both a line and a triangle at the same time? Maybe you're looking for a plain built-in Rust enum with explicit discriminants?

If you want to define named combinations, you can do so as in #55 (I should add that to the docs...)

The general issue with having flags equal to 0 is that they're kinda always present and that raises all kinds of weird questions. What should BitFlags::intersects return? BitFlags::len would require more complex handling, and BitFlags::is_empty would need to be special-cased to handle this properly, if it even makes sense to ask that question for such a weird set of bitflags. Not to mention that the internals of the iterator code would break.
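
The degeneracy is visible in plain bit arithmetic: a zero-valued flag is "contained" in every value yet intersects nothing, so whatever those methods returned would be arguable. A minimal sketch of the underlying operations:

fn main() {
    let unknown: u8 = 0x00; // hypothetical zero-valued flag
    let value: u8 = 0b0101; // some ordinary set of flags

    // "contains": all bits of the flag are present in the value.
    // With a zero flag this holds vacuously for every value:
    assert_eq!(value & unknown, unknown);

    // "intersects": at least one bit is shared.
    // With a zero flag this never holds, for any value:
    assert_eq!(value & unknown, 0);
}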

The idea of the crate is that your enum (which implements BitFlag) gives names to the individual bits, and then BitFlags<YourEnum> represents combinations of bits. This is motivated by situations in user code in which having the possibility of expressing "this argument/variable is just one bitflag and not an entire set" is useful. So, something like Unknown would be represented by BitFlags::empty(), and you could define an associated constant if you'd like (or make use of BitFlags::EMPTY).
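
For example, a minimal sketch of that associated-constant approach (the UNKNOWN constant name is illustrative):

use enumflags2::{bitflags, BitFlags};

#[bitflags]
#[repr(u8)]
#[derive(Copy, Clone, Debug, PartialEq)]
enum AiPrimitiveType {
    Point = 0x01,
    Line = 0x02,
    Triangle = 0x04,
    Polygon = 0x08,
    NgonEncodingFlag = 0x10,
}

impl AiPrimitiveType {
    // The zero case is the empty set, not a flag of its own.
    const UNKNOWN: BitFlags<Self> = BitFlags::EMPTY;
}

fn main() {
    let flags: BitFlags<AiPrimitiveType> = AiPrimitiveType::UNKNOWN;
    assert!(flags.is_empty());
}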

@crazyjackel
Author

Sorry for the late reply.

I'm using BitFlags in a feasibility test for rewriting Assimp in Rust, specifically to track the components of a 3D model, such as points, lines, triangles, polygons, and special ngons. The "Unknown" flag (which I've now renamed to "None" for clarity) serves as both a flag and a pattern for matching the specific zero-combination case. While I could use a raw u8 with custom conversion functions for this, I wanted to explore using enum-based flags similar to what I've done in C#.

Regarding your point about semantics, I think this is where my perspective might differ. To me, only the power-of-two values (e.g., 0, 1, 2, 4, 8, ...) hold intrinsic meaning within the bitflag context, with named combinations (3, 7, 17, ...) being essentially syntactic sugar. However, these named combinations (and even named zeros) are valuable when using the enum type by itself for pattern matching or improving code readability.

From an implementation perspective, my view is that the macro could simply filter out the syntactic sugar, so that only the power-of-two variants matter.

So a BitFlags value of 0b1101, when iterated, becomes [1, 4, 8] even if a variant with value 5 is defined. Things like len() and intersects() would work similarly. However, I could still write something like let b: BitFlags<Enum> = Enum::A | Enum::Five, where Enum::Five is defined as 5. (Alternatively, a named 0 would just do nothing.)
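
In other words, iteration would decompose the stored value into its set bits and skip any named combinations, as in this plain-Rust sketch (not crate code):

// Decompose a raw bit pattern into its power-of-two components;
// a named combination like 5 (= 1 | 4) never shows up in the output.
fn set_bits(mut bits: u8) -> Vec<u8> {
    let mut out = Vec::new();
    while bits != 0 {
        let lowest = bits & bits.wrapping_neg(); // isolate the lowest set bit
        out.push(lowest);
        bits &= bits - 1; // clear the lowest set bit
    }
    out
}

fn main() {
    assert_eq!(set_bits(0b1101), vec![1, 4, 8]);
}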

Basically, instead of erroring on the restriction, I would have expected the macro to do this filtering, and looking at the code, it doesn't seem like it would be too hard to implement. (If you don't mind, I could try a PR on a fork once the holidays are over.)

@meithecatte
Owner

Regarding your point about semantics, I think this is where my perspective might differ. To me, only the power-of-two values (e.g., 0, 1, 2, 4, 8, ...) hold intrinsic meaning within the bitflag context, with named combinations (3, 7, 17, ...) being essentially syntactic sugar. However, these named combinations (and even named zeros) are valuable when using the enum type by itself for pattern matching or improving code readability.

In that case, I think that it would make sense to define the syntactic sugar separately from the part that has actual intrinsic meaning (example from #55):

use enumflags2::{bitflags, make_bitflags, BitFlags};

#[bitflags]
#[derive(Clone, Copy)]
#[repr(u8)]
enum Flags {
    A = 1 << 0, // or omit the values if you don't care about the exact bits used
    B = 1 << 1,
}

impl Flags {
    const AB: BitFlags<Self> = make_bitflags!(Self::{A | B});
}
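
A brief usage sketch of that constant, assuming the imports shown above:

fn main() {
    let flags = Flags::AB;
    assert!(flags.contains(Flags::A)); // AB includes the A bit
    assert_eq!(flags, Flags::A | Flags::B);
}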

@crazyjackel
Author

Thanks, but if I want to use the named combinations outside of BitFlags, such as in a const function that takes data and returns the appropriate flag, I would be unable to do so.

It is also awkward syntax compared to having the macro filter for only the power-of-two values.

@meithecatte
Owner

Thanks, but if I want to use the named combinations outside of BitFlags, such as in a const function that takes data and returns the appropriate flag, I would be unable to do so.

Could you show me an example?

@crazyjackel
Author

Away from computer, boarding plane… will provide tomorrow, my apologies.

@crazyjackel
Author

crazyjackel commented Dec 25, 2024

impl AiPrimitiveType {
    // Would compile if None were allowed as a flag
    const fn primitive_type_for_n_indices(n: u8) -> AiPrimitiveType {
        match n {
            4u8..=u8::MAX => AiPrimitiveType::Polygon,
            3 => AiPrimitiveType::Triangle,
            2 => AiPrimitiveType::Line,
            1 => AiPrimitiveType::Point,
            0 => AiPrimitiveType::None,
        }
    }

    // Does not compile, because BitFlags::from_flag is not const
    const fn primitive_type_for_n_indices2(n: u8) -> BitFlags<AiPrimitiveType> {
        match n {
            4u8..=u8::MAX => BitFlags::from_flag(AiPrimitiveType::Polygon),
            3 => BitFlags::from_flag(AiPrimitiveType::Triangle),
            2 => BitFlags::from_flag(AiPrimitiveType::Line),
            1 => BitFlags::from_flag(AiPrimitiveType::Point),
            0 => BitFlags::EMPTY,
        }
    }
}

Provided are two implementations of a constant function that converts an index count to a flag. The first does not compile because None cannot be included once it is made a flag, whereas the second does not compile because BitFlags::from_flag is not const. The internals are not public either, so I can't construct the value in a const myself.

When I manually expand the macro without the power-of-two check, I get the following:

#[repr(u8)]
#[derive(Copy, Clone, Debug, PartialEq)]
enum AiPrimitiveType {
    None = 0x00,
    Point = 0x01,
    Line = 0x02,
    Triangle = 0x04,
    Polygon = 0x08,
    NgonEncodingFlag = 0x10,
}
impl ::enumflags2::_internal::core::ops::Not for AiPrimitiveType {
    type Output = ::enumflags2::BitFlags<Self>;
    #[inline(always)]
    fn not(self) -> Self::Output {
        use ::enumflags2::BitFlags;
        BitFlags::from_flag(self).not()
    }
}
impl ::enumflags2::_internal::core::ops::BitOr for AiPrimitiveType {
    type Output = ::enumflags2::BitFlags<Self>;
    #[inline(always)]
    fn bitor(self, other: Self) -> Self::Output {
        use ::enumflags2::BitFlags;
        BitFlags::from_flag(self) | other
    }
}
impl ::enumflags2::_internal::core::ops::BitAnd for AiPrimitiveType {
    type Output = ::enumflags2::BitFlags<Self>;
    #[inline(always)]
    fn bitand(self, other: Self) -> Self::Output {
        use ::enumflags2::BitFlags;
        BitFlags::from_flag(self) & other
    }
}
impl ::enumflags2::_internal::core::ops::BitXor for AiPrimitiveType {
    type Output = ::enumflags2::BitFlags<Self>;
    #[inline(always)]
    fn bitxor(self, other: Self) -> Self::Output {
        use ::enumflags2::BitFlags;
        BitFlags::from_flag(self) ^ other
    }
}
unsafe impl ::enumflags2::_internal::RawBitFlags for AiPrimitiveType {
    type Numeric = u8;
    const EMPTY: Self::Numeric = 0;
    const DEFAULT: Self::Numeric = 0;
    const ALL_BITS: Self::Numeric = 0
        | (Self::Point as u8)
        | (Self::Line as u8)
        | (Self::Triangle as u8)
        | (Self::Polygon as u8)
        | (Self::NgonEncodingFlag as u8);
    const BITFLAGS_TYPE_NAME: &'static str = "BitFlags<AiPrimitiveType>";
    fn bits(self) -> Self::Numeric {
        self as u8
    }
}
impl ::enumflags2::BitFlag for AiPrimitiveType {}


impl AiPrimitiveType {
    const fn primitive_type_for_n_indices(n: u8) -> AiPrimitiveType {
        match n {
            4u8..=u8::MAX => AiPrimitiveType::Polygon,
            3 => AiPrimitiveType::Triangle,
            2 => AiPrimitiveType::Line,
            1 => AiPrimitiveType::Point,
            0 => AiPrimitiveType::None,
        }
    }
}

In this case, it compiles perfectly well.

All I would need to do is filter ALL_BITS down to the power-of-two variants and ignore the rest. (Also, perhaps add a check that named combinations do not exceed the maximum flag size, rounded up to the next power of two.)

In that case, everything works as expected still.
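
A rough sketch of that filtering over the discriminant values (hypothetical; the real change would live inside the proc macro):

// Build ALL_BITS from only the single-bit variants, treating
// zero and named combinations as sugar to be ignored.
fn all_bits(discriminants: &[u8]) -> u8 {
    discriminants
        .iter()
        .filter(|d| d.is_power_of_two()) // drops 0 and combinations like 0x05
        .fold(0, |acc, d| acc | d)
}

fn main() {
    // None = 0x00 and a hypothetical combination 0x05 are ignored:
    assert_eq!(all_bits(&[0x00, 0x01, 0x02, 0x04, 0x05, 0x08, 0x10]), 0x1F);
}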

@meithecatte
Owner

meithecatte commented Dec 25, 2024

Okay, I have located the equivalent code in assimp and now I understand what the bitflag is supposed to mean: you want to store the set of primitive types actually present in a mesh. That finally makes sense.

In this case, the idea of enumflags2 is that you want to have separate types for "primitive type" and "set of primitive types". This is what you're working against.

You're motivated by complications in constructing a BitFlags<PrimitiveType> in a const context, but this doesn't mean you should shoehorn things by collapsing the distinction between PrimitiveType and BitFlags<PrimitiveType>.

What you're trying to do is possible by using make_bitflags!:

    const fn primitive_type_for_n_indices2(n: u8) -> BitFlags<AiPrimitiveType> {
        match n {
            4u8..=u8::MAX => make_bitflags!(AiPrimitiveType::{Polygon}),
            3 => make_bitflags!(AiPrimitiveType::{Triangle}),
            2 => make_bitflags!(AiPrimitiveType::{Line}),
            1 => make_bitflags!(AiPrimitiveType::{Point}),
            0 => BitFlags::EMPTY, // or make_bitflags!(AiPrimitiveType::{}), for consistency, if you prefer
        }
    }
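
Pulled together, a self-contained sketch of that approach (assuming the zero-valued variant is simply dropped from the enum, as the macro requires):

use enumflags2::{bitflags, make_bitflags, BitFlags};

#[bitflags]
#[repr(u8)]
#[derive(Copy, Clone, Debug, PartialEq)]
enum AiPrimitiveType {
    Point = 0x01,
    Line = 0x02,
    Triangle = 0x04,
    Polygon = 0x08,
    NgonEncodingFlag = 0x10,
}

const fn primitive_type_for_n_indices(n: u8) -> BitFlags<AiPrimitiveType> {
    match n {
        4u8..=u8::MAX => make_bitflags!(AiPrimitiveType::{Polygon}),
        3 => make_bitflags!(AiPrimitiveType::{Triangle}),
        2 => make_bitflags!(AiPrimitiveType::{Line}),
        1 => make_bitflags!(AiPrimitiveType::{Point}),
        0 => BitFlags::EMPTY,
    }
}

fn main() {
    assert_eq!(
        primitive_type_for_n_indices(3),
        make_bitflags!(AiPrimitiveType::{Triangle}),
    );
    assert!(primitive_type_for_n_indices(0).is_empty());
}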

meithecatte added a commit that referenced this issue Dec 25, 2024
- Emphasize the distinction between `T` and `BitFlags<T>`.
- Mention `make_bitflags` in the section about `const fn` APIs.
- Add a `const` to the `make_bitflags` example.

This is motivated by the discussion in #63.