Unnecessary is_power_of_two() Restriction for Flags in enumflags Macro Expansion #63
In your particular example, I'm a bit confused. Do you actually have situations where …?

If you want to define named combinations, you can do them like in #55 (I should add that to the docs...).

The general issue with having flags equal to 0 is that they're kinda always present, and that raises all kinds of weird questions. What should …?

The idea of the crate is that your enum (which implements BitFlag) represents a single flag, while BitFlags<YourEnum> represents a set of flags.
Sorry for the late reply. I'm using bitflags in a feasibility test for rewriting Assimp in Rust, specifically to track the components of a 3D model, such as points, lines, triangles, polygons, and special ngons. The "Unknown" flag (which I've now renamed to "None" for clarity) serves as both a flag and a pattern for matching the specific zero-combination case. While I could use a raw u8 with custom conversion functions for this, I wanted to explore using enum-based flags similar to what I've done in C#.

Regarding your point about semantics, I think this is where my perspective might differ. To me, only the power-of-two values (e.g., 0, 1, 2, 4, 8, ...) hold intrinsic meaning within the bitflag context, with named combinations (3, 7, 17, ...) being essentially syntactic sugar. However, these named combinations (and even a named zero) are valuable when using the enum type by itself, for pattern matching or for improving code readability.

From an implementation perspective, my view is that the macro should just filter out the syntactic sugar, so that only the power-of-two values matter. A bitflag value of 0b1101, when iterated, would become [1, 4, 8] even if a combination with value 5 is defined. Things like len() and intersects would work similarly. However, I can still say something like …

Basically, instead of erroring on the restriction, I would have expected the macro to do this filtering, and looking at the code, I thought the filtering wouldn't be too bad to implement. (If you don't mind, I could try doing a PR on a fork once the holidays are over.)
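To make the proposed filtering concrete, here is a standalone sketch in plain Rust (not something enumflags2 currently does, and the function name is made up): iterating a stored value reports only the single-bit constituents, regardless of any named combinations defined on the enum.

// Standalone sketch of the proposed semantics: only power-of-two bits are
// reported when iterating, even if a named combination (e.g. 5 = 1 | 4)
// exists as an enum variant.
fn iter_set_flags(bits: u8) -> impl Iterator<Item = u8> {
    (0..8).map(|i| 1u8 << i).filter(move |bit| bits & bit != 0)
}

fn main() {
    let set: Vec<u8> = iter_set_flags(0b1101).collect();
    assert_eq!(set, vec![0b0001, 0b0100, 0b1000]); // i.e. [1, 4, 8]
}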
In that case, I think that it would make sense to define the syntactic sugar separately from the part that has actual intrinsic meaning (example from #55):
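The example from #55 is not reproduced in this capture; a minimal sketch of the pattern being described, with illustrative names, would keep only single-bit values in the enum and define the combinations as consts outside it:

use enumflags2::{bitflags, make_bitflags, BitFlags};

#[bitflags]
#[repr(u8)]
#[derive(Copy, Clone, Debug, PartialEq)]
enum Flag {
    A = 0b0001,
    B = 0b0010,
    C = 0b0100,
}

// Named combinations live outside the enum, as plain consts:
const AB: BitFlags<Flag> = make_bitflags!(Flag::{A | B});
const NONE: BitFlags<Flag> = BitFlags::EMPTY;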
Thanks, but if I want to use the named combinations outside of BitFlags, such as with a const function that takes in data and gives you the flag you want, I would be unable to do so. It is also awkward syntax compared to having the macro simply filter for power-of-two values.
Could you show me an example?
Away from computer, boarding plane… will provide tomorrow, my apologies.
impl AiPrimitiveType {
//Would compile if None was allowed
const fn primitive_type_for_n_indices(n: u8) -> AiPrimitiveType {
match n {
4u8..=u8::MAX => AiPrimitiveType::Polygon,
3 => AiPrimitiveType::Triangle,
2 => AiPrimitiveType::Line,
1 => AiPrimitiveType::Point,
0 => AiPrimitiveType::None,
}
}
//Does not compile because BitFlags::from_flag is not a const fn
const fn primitive_type_for_n_indices2(n: u8) -> BitFlags<AiPrimitiveType> {
match n {
4u8..=u8::MAX => BitFlags::from_flag(AiPrimitiveType::Polygon),
3 => BitFlags::from_flag(AiPrimitiveType::Triangle),
2 => BitFlags::from_flag(AiPrimitiveType::Line),
1 => BitFlags::from_flag(AiPrimitiveType::Point),
0 => BitFlags::EMPTY,
}
}
}

Provided are two implementations of a const function that converts an index count to a flag. The first does not compile because None cannot be included once the enum is made a bitflag type, whereas the second does not compile because BitFlags::from_flag is not const. The internals are also not public, so I can't inject a const myself either.

When I manually expand the macro without the power-of-two check, I get the following:

#[repr(u8)]
#[derive(Copy, Clone, Debug, PartialEq)]
enum AiPrimitiveType {
None = 0x00,
Point = 0x01,
Line = 0x02,
Triangle = 0x04,
Polygon = 0x08,
NgonEncodingFlag = 0x10,
}
impl ::enumflags2::_internal::core::ops::Not for AiPrimitiveType {
type Output = ::enumflags2::BitFlags<Self>;
#[inline(always)]
fn not(self) -> Self::Output {
use ::enumflags2::BitFlags;
BitFlags::from_flag(self).not()
}
}
impl ::enumflags2::_internal::core::ops::BitOr for AiPrimitiveType {
type Output = ::enumflags2::BitFlags<Self>;
#[inline(always)]
fn bitor(self, other: Self) -> Self::Output {
use ::enumflags2::BitFlags;
BitFlags::from_flag(self) | other
}
}
impl ::enumflags2::_internal::core::ops::BitAnd for AiPrimitiveType {
type Output = ::enumflags2::BitFlags<Self>;
#[inline(always)]
fn bitand(self, other: Self) -> Self::Output {
use ::enumflags2::BitFlags;
BitFlags::from_flag(self) & other
}
}
impl ::enumflags2::_internal::core::ops::BitXor for AiPrimitiveType {
type Output = ::enumflags2::BitFlags<Self>;
#[inline(always)]
fn bitxor(self, other: Self) -> Self::Output {
use ::enumflags2::BitFlags;
BitFlags::from_flag(self) ^ other
}
}
unsafe impl ::enumflags2::_internal::RawBitFlags for AiPrimitiveType {
type Numeric = u8;
const EMPTY: Self::Numeric = 0;
const DEFAULT: Self::Numeric = 0;
const ALL_BITS: Self::Numeric = 0
| (Self::Point as u8)
| (Self::Line as u8)
| (Self::Triangle as u8)
| (Self::Polygon as u8)
| (Self::NgonEncodingFlag as u8);
const BITFLAGS_TYPE_NAME: &'static str = "BitFlags<AiPrimitiveType>";
fn bits(self) -> Self::Numeric {
self as u8
}
}
impl ::enumflags2::BitFlag for AiPrimitiveType {}
impl AiPrimitiveType {
const fn primitive_type_for_n_indices(n: u8) -> AiPrimitiveType {
match n {
4u8..=u8::MAX => AiPrimitiveType::Polygon,
3 => AiPrimitiveType::Triangle,
2 => AiPrimitiveType::Line,
1 => AiPrimitiveType::Point,
0 => AiPrimitiveType::None,
}
}
}

In this case, it compiles perfectly well. All I would need to do is filter ALL_BITS down to only the power-of-two variants and ignore the rest. (Also, perhaps a check that named combinations do not exceed the maximum flag size, rounded up to the next power of two.) With that, everything still works as expected.
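To illustrate the filtering idea, here is a hand-written sketch of how ALL_BITS could be computed while ignoring non-power-of-two discriminants (the helper name is made up; this is not the crate's actual macro output):

// Keep a discriminant only if it has exactly one set bit.
const fn single_bit(x: u8) -> u8 {
    if x.is_power_of_two() { x } else { 0 }
}

const ALL_BITS: u8 = 0
    | single_bit(0x01) // Point
    | single_bit(0x02) // Line
    | single_bit(0x04) // Triangle
    | single_bit(0x08) // Polygon
    | single_bit(0x10) // NgonEncodingFlag
    | single_bit(0x00) // a zero flag would contribute nothing
    | single_bit(0x05); // a hypothetical named combination would also contribute nothing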
Okay, I have located the equivalent code in assimp and now I understand what the bitflag is supposed to mean: you want to store the set of which primitives are actually present in a mesh. This finally makes it make sense. In this case, the idea of …

You're motivated by complications in constructing a BitFlags<AiPrimitiveType> in a const fn. What you're trying to do is possible by using make_bitflags!:

const fn primitive_type_for_n_indices2(n: u8) -> BitFlags<AiPrimitiveType> {
match n {
4u8..=u8::MAX => make_bitflags!(AiPrimitiveType::{Polygon}),
3 => make_bitflags!(AiPrimitiveType::{Triangle}),
2 => make_bitflags!(AiPrimitiveType::{Line}),
1 => make_bitflags!(AiPrimitiveType::{Point}),
0 => BitFlags::EMPTY, // or make_bitflags!(AiPrimitiveType::{}), for consistency, if you prefer
}
}
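As a quick usage sketch (assuming the function above is a free const fn, with enumflags2's BitFlags and make_bitflags! in scope; the const names here are illustrative), it can now be evaluated in const context:

// The helper can be evaluated at compile time, e.g. to precompute a lookup table.
const TRIANGLE: BitFlags<AiPrimitiveType> = primitive_type_for_n_indices2(3);
const TABLE: [BitFlags<AiPrimitiveType>; 4] = [
    primitive_type_for_n_indices2(0),
    primitive_type_for_n_indices2(1),
    primitive_type_for_n_indices2(2),
    primitive_type_for_n_indices2(3),
];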
- Emphasize the distinction between `T` and `BitFlags<T>`.
- Mention `make_bitflags` in the section about `const fn` APIs.
- Add a `const` to the `make_bitflags` example.

This is motivated by the discussion in #63.
When using the #[bitflags] macro, I encountered an issue where flags must have exactly one set bit. My code is as follows:
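The snippet itself is not preserved in this capture; based on the rest of the thread, it was presumably along these lines (the Unknown = 0x00 variant is what trips the check):

use enumflags2::bitflags;

#[bitflags] // rejected: the macro requires every variant to have exactly one set bit
#[repr(u8)]
#[derive(Copy, Clone, Debug, PartialEq)]
enum AiPrimitiveType {
    Unknown = 0x00, // zero bits set, so it fails the power-of-two check
    Point = 0x01,
    Line = 0x02,
    Triangle = 0x04,
    Polygon = 0x08,
    NgonEncodingFlag = 0x10,
}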
This produces an error saying that flags must have exactly one set bit.

The error is caused by the macro's check_flag logic, which rejects any discriminant that is not a power of two.

However, looking at the macro expansion, this restriction seems unnecessary and limits usability, particularly for flags like Unknown = 0x00 or when defining named combinations. These combinations would naturally follow the expected enum-to-u8 conversion behavior.
The only potential issue I see is with ALL_BITS as currently implemented. This could be resolved by ensuring it rounds up to the largest power of 2, setting all bits appropriately. Could this restriction be relaxed to allow more flexibility?