Collisions in type_id #10389
Closed · DaGenix opened this issue Nov 9, 2013 · 163 comments
Labels
A-typesystem (Area: The type system)
C-bug (Category: This is a bug.)
I-unsound (Issue: A soundness hole (worst kind of bug); see https://en.wikipedia.org/wiki/Soundness)
P-low (Low priority)
T-lang (Relevant to the language team, which will review and decide on the PR/issue.)

Comments

@DaGenix commented Nov 9, 2013

The implementation of type_id from #10182 uses SipHash on various parameters depending on the type. The output size of SipHash is only 64 bits, however, making it feasible to find collisions via a birthday attack. I believe the code below demonstrates a collision in the type_id value of two different ty_structs:

use std::hash;
use std::hash::Streaming;

// I believe this is pretty much the same thing as hash_crate_independent() in ty.rs
// for a ty_struct on a 64-bit platform
fn hash_struct(local_hash: &str, node: u64) -> u64 {
    let mut state = hash::default_state();
    state.input([18]);
    state.input(local_hash.as_bytes());
    do node.iter_bytes(true) |bytes| { state.input(bytes); true };
    state.result_u64()
}

fn main() {
    // This represents two structures with different node values from a crate with a 
    // local_hash value of "local" that end up getting the same hash and thus, 
    // I think, the same type_id
    assert!(hash_struct("local", 0x9e02c8943c336302) == hash_struct("local", 0x366a806b1d5f1b2));
}
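
(The snippet above is pre-1.0 Rust and no longer compiles. A rough modern sketch of the same shape, using std's DefaultHasher; the literal collision values above are specific to the 2013 SipHash-2-4 encoding and will not reproduce with this hasher.)

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Modern analogue of hash_struct() above: the essential property is the
// 64-bit output size, not the exact byte encoding.
fn hash_struct(local_hash: &str, node: u64) -> u64 {
    let mut state = DefaultHasher::new();
    18u8.hash(&mut state);                  // tag standing in for the ty_struct case
    local_hash.as_bytes().hash(&mut state);
    node.hash(&mut state);
    state.finish()
}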
@alexcrichton (Member)

I'm not entirely sure how feasible it is for a program to have 0x366a806b1d5f1b2 node ids (roughly 2.5 × 10^17), but this is still concerning.

We could in theory have very cheap inequality among types, and then have an expensive equality check. Something which may walk the respective TyDesc structures in parallel to make sure that they're the same. We could also bump up the hash size to using something like sha1/sha2 and have the type_id() intrinsic return [u8, ..N] to reduce the possibility of a collision.

Either way, I don't think that this is a super-pressing issue for now, but I'm nominating to discuss whether we want to get this done for 1.0. This could in theory have serious implications depending on how frequently Any is used.

@alexcrichton (Member)

Ah, it was already nominated!

@bill-myers (Contributor)

Why not compare an interned version of the type data string? (i.e. what is currently passed as data to be hashed, possibly SHA-256 hashed first)

The linker can be used for interning by emitting a common symbol with the type data string as name and taking its address, and otherwise the same thing can be done manually in a global constructor.

This way it's always a pointer comparison, and there are no collisions.
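
A minimal sketch of the manual (non-linker) interning fallback, with illustrative names; a real implementation would run this from a global constructor, as described above:

use std::collections::HashMap;
use std::sync::{Mutex, OnceLock};

// Global registry mapping a type-description string (by content) to one
// canonical &'static str; equality then becomes a pointer comparison.
static REGISTRY: OnceLock<Mutex<HashMap<&'static str, &'static str>>> = OnceLock::new();

fn intern(desc: &'static str) -> *const u8 {
    let registry = REGISTRY.get_or_init(|| Mutex::new(HashMap::new()));
    let mut map = registry.lock().unwrap();
    // Two distinct statics with equal content intern to the same address.
    map.entry(desc).or_insert(desc).as_ptr()
}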

@DaGenix (Author) commented Nov 11, 2013

I don't know how node id values are generated, but assuming that they are generated sequentially, this particular collision is not realistic. However, it's not hard to find collisions for more realistic node id values by picking particular values for the crate hashes:

assert!(hash_struct("a2c55ca1a1f68", 4080) == hash_struct("138b8278caab5", 2804));

The key thing to consider isn't the number of node id values, though: it's the total number of type id values. Some quick (hopefully correct) math shows that there is a 0.01% chance of a collision once there are around 60 million type id values. That's still a pretty large number of type id values for a somewhat low probability of a collision, though. So, it's unclear to me how big a deal this is for the Rust 1.0 timeframe. It all depends on what the acceptable probability of a collision is.
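
For reference, a quick back-of-the-envelope check of that estimate via the birthday bound:

// Birthday bound: P(collision) ~= 1 - exp(-n^2 / 2^65) for n values
// drawn uniformly from a 64-bit space.
fn collision_probability(n: f64) -> f64 {
    1.0 - (-n * n / 2f64.powi(65)).exp()
}

fn main() {
    // Prints ~0.000098, i.e. about 0.01% at 60 million type ids.
    println!("{:.6}", collision_probability(60.0e6));
}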

@nikomatsakis (Contributor)

When I saw that @alexcrichton proposed using a hash, my first reaction was "collision!" but then I thought "...but exceedingly unlikely to occur in practice". I think this is not a matter of imminent destruction but if we can leverage the linker or some other scheme to avoid this danger, we should -- and perhaps we should just go ahead and mark the current scheme as deprecated and just plan on finding a replacement scheme.

@thestinger (Contributor)

A cryptographic hash designed for this purpose (larger output) would be enough, although a larger output would be more expensive to compare (four u64 comparisons for SHA2).

@pnkfelix (Member)

We don't need to deal with this right now. P-low.

@steveklabnik (Member)

How relevant is this issue today? I think that it's all the same, but am not sure.

@thestinger (Contributor)

It's 64-bit, so collisions are likely with enough types (consider recursive type metaprogramming), and it doesn't have any check to bail out if one occurs. Bailing out is not a very good solution anyway, because it pretty much means that there's no way to compile the program, beyond using a different random seed and hoping for the best. It's a crappy situation.

@vks (Contributor) commented Jan 21, 2015

Note that "hoping for the best" by iteratively changing the seed might work with overwhelmingly large probability after very few iterations.

@sorear (Contributor) commented Oct 14, 2015

use std::any::Any;

fn main() {
    // Two distinct array types whose 64-bit SipHash-based type_ids collide.
    let weird : [([u8; 188250],[u8; 1381155],[u8; 558782]); 0] = [];
    // The collision makes this downcast to an unrelated type succeed...
    let whoops = Any::downcast_ref::<[([u8; 1990233],[u8; 798602],[u8; 2074279]); 1]>(&weird);
    // ...so a zero-length array is read as a one-element array: an
    // out-of-bounds read reachable from safe code.
    println!("{}",whoops.unwrap()[0].0[333333]);
}

Actually a soundness issue. playground: http://is.gd/TwBayX

@pnkfelix (Member)

I'd like the lang team to devote a little time to this now that we are post 1.0. Nominating

@pnkfelix pnkfelix added I-nominated T-lang Relevant to the language team, which will review and decide on the PR/issue. labels Oct 18, 2015
@nikomatsakis (Contributor)

OK, lang team discussed it, and our conclusion was that:

  1. This issue ought to be fixed, it's silly not to.
  2. This is an implementation detail that we could change whenever we want (right?)
  3. Nonetheless, we probably ought to open an RFC or at least a discuss thread, with a proposal to do better, since probably people will have some clever ideas.
  4. Probably the runtime overhead of the virtual call in the Any trait is way more than a strcmp anyhow for all realistic types.

@nikomatsakis (Contributor)

I was wondering about a design where we do something like:

  • generate a static string representing the full type; in static builds, at least, this will be interned by the linker;
  • generate a hash.

Then compare the string pointers for equality (to give a fast equality check). If that fails, compare the hashes for inequality (to give a fast inequality check). If THAT fails, compare the strings for content (to handle dynamic linking).

Although re-reading the thread I see @bill-myers may have had an even more clever solution.
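
A minimal sketch of that layered comparison, with an illustrative layout (this is not the actual std::any::TypeId representation):

use std::ptr;

#[derive(Copy, Clone)]
pub struct HypotheticalTypeId {
    hash: u64,          // inline hash: cheap inequality check
    desc: &'static str, // interned type-description string
}

impl PartialEq for HypotheticalTypeId {
    fn eq(&self, other: &Self) -> bool {
        // 1. Fast equality: interned strings share an address within a
        //    statically linked binary.
        if ptr::eq(self.desc, other.desc) {
            return true;
        }
        // 2. Fast inequality: different hashes prove different types.
        if self.hash != other.hash {
            return false;
        }
        // 3. Slow path (e.g. across dynamic-library boundaries): full
        //    content comparison; str equality already short-circuits on
        //    unequal lengths.
        self.desc == other.desc
    }
}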

@pnkfelix (Member)

@nikomatsakis putting the hash of the data at the start is a good idea, to increase the probability that we catch unequal things quickly. It seems to me like @bill-myers' approach composes fine with that strategy.

@sorear (Contributor) commented Oct 30, 2015

I doubt the "problem" is limited to Any. You can probably confuse the compiler just as effectively by colliding hashes for symbol mangling, or many other things. What is the objective here? Since Rust is not a sandbox language, I don't think "protect memory from malicious programmers" should be one of our goals (we should document the types of undefined behavior that can be hit in safe code, and fix the ones that are possible to hit by accident; if someone is determined to break the type system, they can already write an unsafe block, or use std::process to launch a subprocess that ptraces its parent and corrupts memory).

@Mark-Simulacrum (Member)

@nikomatsakis Should this be marked as I-unsound? I've done so for now, since that seems to be the general conclusion a couple of times by different people, but please unmark if I'm wrong.

@Mark-Simulacrum Mark-Simulacrum added I-unsound Issue: A soundness hole (worst kind of bug), see: https://en.wikipedia.org/wiki/Soundness C-bug Category: This is a bug. and removed C-enhancement Category: An issue proposing an enhancement or a PR with one. labels Jul 19, 2017
@RalfJung (Member)

You'd probably want a scheme where a part of the hash is stored in-line, to avoid having to always read from the pointer. But yes that's one of a bunch of possibilities.

@Skgland (Contributor) commented Jul 22, 2024

For the variant with TypeId being a &'static str, we have the fast-equality path of reference equality; wouldn't we also have a fast-inequality path when the strings are of unequal length?
One would only need a full string comparison when the pointers differ and the lengths are the same, since strings of unequal length can't be equal.
Though that still leaves a concern over binary size when including the string for all type ids, and over non-deduplicated strings.
Could/would the type id be included only for types for which a TypeId is actually requested, similar to how vtables aren't generated for every type × trait combination?

@michaelwoerister (Member)

It's my understanding that TypeIds are allocated lazily but I could be wrong.

@the8472 (Member) commented Jul 22, 2024

The lang team decided in #95845 (comment) that relying on a full (non-truncated) cryptographic hash is fine -- we don't have to guarantee soundness against an infinite-resource attacker that can generate collisions in cryptographic hash functions.

From the linked comment it's not clear whether that was a necessary or sufficient condition. So weaker functions might not have been rejected.

Also, my understanding is that siphash claims to be suitable for cryptographic use when the key is secret. In other words, it is a cryptographic hash under the assumption that any process generating collisions is oblivious to the key. Depending on the threat model, that could be interpreted as "it behaves as a cryptographic hash, for our threat model".

So, as I have argued previously, it would be great if we could get a statement of what the threat model is, not just a decision on a particular solution.

Additional aspects that should be considered:

  • security-compile time tradeoff - blake2s was tried, and it has a huge overhead
    Do we need tunable security levels?
  • security-binary size tradeoff - embedding the type name for every TypeID'd type would bloat the data section.
  • would the decision be consistent with the security promises the rest of the compiler can give?
    I.e. if collisions were not acceptable and the type-name approach gets chosen, then do we also need to get rid of StableHasher, since that also relies on collisions being negligible?

@RalfJung (Member) commented Jul 22, 2024

From the linked comment it's not clear whether that was a necessary or sufficient condition. So weaker functions might not have been rejected.

It is obviously not necessary to use a hash function at all, since one could do a perfect comparison based on type_name. So I think "this is the lower bound for the guarantees provided" is the only reasonable interpretation. (The alternative is to believe that the lang team intended to rule out anything that provides stronger guarantees than a cryptographic hash function, which does not seem reasonable.)

The context there was considering various weaker options like a truncated hash function, which were rejected by that decision.

Also, my understanding is that siphash claims to be suitable for cryptographic use when the key is secret. In other words it is a cryptographic hash under the assumption that any process generating collisions is oblivious to the key.

I have not seen such a notion of "cryptographic hash function under assumptions about the process generating the inputs" before; is there literature defining and analyzing this idea in more detail?

Additional aspects that should be considered:

Those seem like a good starting point for assembling a summary for a lang team design meeting asking whether they want to revise their previous decision.

@the8472 (Member) commented Jul 22, 2024

I have not seen such a notion of "cryptographic hash function under assumptions about the process generating the inputs" before

I believe I am simply restating what a keyed PRF is when viewed from a different angle.

If it's collision-resistant when the key is not known to the attacker (e.g. like chosen plaintext attacks against a remote hasher) then that's no different from the process generating the inputs being oblivious to the key.

Which is another way of saying that it has cryptographic strength if your threat model only includes natural threats (people generating lots of Rust code). Which, yes, is a fairly weak threat model.

@RalfJung (Member)

Has there been any analysis how good of a PRF SipHash-1-3 is?

@the8472 (Member) commented Jul 22, 2024

https://eprint.iacr.org/2019/865.pdf is a 2019 paper that gives an overview. For the 1-x variants they cite internal collisions that have been found in https://eprint.iacr.org/2014/722.pdf (also cited in #29754 (comment)), which says

At first, we start with characteristics, which lead to internal collisions. This type of characteristic can be used to create forgeries as described in [9, 10]. To improve this attack, characteristics are needed, which have a probability higher than 2^-128 (in the case of SipHash). Otherwise, a birthday attack should be preferred to find collisions. We are able to present characteristics that lead to an internal collision for SipHash-1-x (2^-167) and SipHash-2-x (2^-236.3).

So AIUI even for those reduced-round versions they haven't found attacks that would be better than the birthday bound.

@SichangHe

My dumb take: the current implementation is fine—collision probability is low in theory, and there are no reports of one.

The take that would satisfy the lang team and probably most people: just use SHA256…

@RalfJung (Member) commented Jul 23, 2024

security-compile time tradeoff - blake2s was tried, and it has a huge overhead

Note that here we're talking just about TypeId, and making incremental compilation more secure against collisions is another, separate question. The perf impact of changing the hash for TypeId only should be negligible.

If someone could assemble a summary of the wider use of hashes in Rust where collisions could have soundness impact, that would be a great basis for future discussion. Otherwise we'll just keep going in circles. From statements made above, my current understanding is that for non-incremental builds, type_id is the only "soundness-critical" hash.

@briansmith (Contributor) commented Jul 23, 2024

@briansmith would you be concerned about Rust using a "full (non-truncated) cryptographic hash" for type IDs, and relying on their collisions resistance for soundness? IOW, is the debate over the use of the hash function at all, or about the quality of the hash function?

I think there are these three main issues, and some additional minor ones:

  1. To the extent we want Rust to have a formal semantics, and for the compiler and/or Rust code it compiles to have well-defined and/or formally-verifiable meaning, any use of a hash function has the negative consequence of introducing cryptography and probabilistic arguments into the formal semantics of the Rust language, i.e. we want to avoid having a conclusion being "Rust is a memory-safe language iff SHA-256 is a secure hash function, where 'secure' means..." One way to address this problem even if the compiler uses hashing is to specify the semantics without reference to any hashing (e.g. specify that TypeId is unique, without specifying how it is calculated), and then move the use of hashing to an implementation detail; "rustc 1.94 is a valid Rust compiler assuming SHA-256 is a secure hash function, where 'secure' means...".
  2. We know that regardless of algorithm, a 128-bit hash (truncated from a larger one, or otherwise) cannot be collision-resistant in the face of somebody actively trying to create a collision, and this is especially true for SipHash128 with a fixed key, which isn't even close to the level of MD5 in terms of collision resistance. From the cryptanalysis perspective, this fact is so obvious that it's uninteresting academically but also requires non-trivial effort, basically like proving water is wet. So, you may never get a PoC even though I think quite a few people are confident that collisions can be generated with current public knowledge (and IMO without even much expertise, i.e. I think even I could do it, though it would take way too much time for me). I absolutely do think that switching to a hash function that is considered secure addresses this concern.
  3. This in turn led to the discussion about whether we should distinguish between intentional attacks and accidental collisions, because then we could say that intentional attacks are outside the threat model, as the Rust compiler doesn't defend against intentionally malicious code. I think this requires a more nuanced definition of the Rust compiler's role in the security of the Rust ecosystem, a shared understanding that we don't live in a binary world where code is either clearly fully trusted or fully untrusted. The most likely "attack" that we'd actually see publicly is somebody pulling a prank in submitting a PR to an extremely-widely-used crate that causes it to be affected by this issue, or to affect another totally unrelated widely-used crate such that when they are used together they are affected by this issue. If they devise their prank in a way where the collision happens in types that are part of the public API(s) of the affected crate(s), then those crates would potentially require SemVer-incompatible API changes to address it. If/when it happens, it would be highly disruptive at virtually no cost (just free time, which pranksters generally seem to have much of) to the "attacker." For this reason, though the compiler can "declare" that this is outside its threat model, at the end of the day it will need to be addressed by the compiler, AFAICT.
  4. One issue we will run into if you do decide to use a "secure" hash algorithm is getting people to agree on a hash function to use. This is a hugely political issue and I'd rather not kick the hornet's nest here. I would say that even if you chose my least preferred 256-bit hash function that is in wide use, I wouldn't argue against it. My one suggestion is to specify and implement things in such a way that you could replace the hash algorithm in a subsequent rustc and/or libstd release in a way that only requires all the code to be recompiled with the newer compiler.
  5. Keep in mind that a hash function is a lossy compression algorithm. The lossiness is the source of the concerns. Another avenue to explore is a variable-length lossless compression algorithm that maps type names to type IDs, where in the (vastly) common case we are comparing less than 256-bits when comparing compressed type names. For example, you could imagine having TypeId being a 64-bit hash, where you compare the hashes and then if they match, you compare the (compressed) type names. I do think it makes sense to explore this as it has the potential to completely avoid the other issues.
  6. Right now the same algorithm is used for internal hashing within the compiler and for TypeId, IIUC. I think some of the concerns regarding performance are based on the assumption that the compiler internals and TypeId comparison will always be "the same" but I don't think this has to be the case necessarily. My hunch, which might be wrong, is that the performance of the compiler internals matters a lot, but the performance of TypeId comparison matters much less.
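
A minimal sketch of the variant in item 5, with illustrative names and an unspecified compression scheme:

// TypeId as a 64-bit hash plus a lossless (compressed) type name.
struct CompressedTypeId {
    hash: u64,
    compressed_name: &'static [u8],
}

impl PartialEq for CompressedTypeId {
    fn eq(&self, other: &Self) -> bool {
        // The hash alone can only prove inequality; on a match, the
        // lossless payload makes the final decision, so a hash collision
        // cannot make two distinct types compare equal.
        self.hash == other.hash && self.compressed_name == other.compressed_name
    }
}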

@bjorn3 (Member) commented Jul 23, 2024

If they devise their prank in a way where the collision happens in types that are part of the public API(s) of the affected crate(s) then those crates would potentially require SemVer-incompatible API changes to address it.

Doing an empty patch release and yanking the old version should be enough to stop the collision. The crate version is hashed into the value cargo passes to -Cmetadata, which in turn affects the TypeId hashes.

@michaelwoerister (Member)

Another avenue to explore is a variable-length lossless compression algorithm that maps type names to type IDs

v0 symbol mangling losslessly encodes types and has some compression built into it already. #95845 had a working implementation of this, afaict.

My hunch, which might be wrong, is that the performance of the compiler internals matters a lot, but the performance of TypeId comparison matters much less.

This is my intuition too: I don't think performance is a real concern for the TypeId case. Binary size might be more of a concern.

Could also be made configurable via a -Ctype-id-impl compiler flag, with values like full, encrypted, hash128, hash256?

@nikomatsakis (Contributor)

Given that this issue has had way too many comments, I think what I'd propose to do is open a new issue that summarizes the current state of the discussion. Here's a proposed issue description; please speak up if there's something that you think is not properly represented.

Strong 👍 for this -- I found the issue summary very helpful. I'd like to encourage others to avoid directly commenting and instead limit yourselves (for the moment) to comment on what is not represented in this summary (ideally with a suggested edit), so that we can close this issue and restart afresh from a better starting point.

@RalfJung (Member) commented Jul 25, 2024

From the comments so far since my summary, there are multiple discussions that should likely go into separate issues:

  1. How can we make TypeId use a strong hash (or not depend on a hash at all) -- what's the best practical way to achieve that? That's the issue for which I wrote a summary above.
  2. Should we really require a cryptographic hash function, or is it enough to be resistant against code that wasn't deliberately written to attack the hash? @the8472 has been suggesting a model along the lines of: yes, we have a fixed key hard-coded in the compiler, but if we assume code isn't written to attack the hash, then we can assume code is written without making use of this knowledge of the key, and therefore for this weak threat model we can act as if the key is actually random. @briansmith has been arguing that this is not strong enough. I won't be doing the summary for this one; I hope someone else can.
    One question I find interesting here is whether we can make any such collision "unstable" in the sense of making it not work across compiler versions, e.g. by putting the compiler version into the initial state of the hasher (or into the key, or so). I have no idea what the hardness would be of finding a collision that works for multiple different keys / prefixes, but if that is genuinely hard for a PRF, then this seems somewhat more secure than the status quo.

I think we're fairly sure that (1) can be done without significant perf impact, given that TypeId hashing is not a super common operation, and there are various proposals for how to make most TypeId comparisons fast. Binary size impact has yet to be determined; this requires someone to actually implement a prototype. (There's an old PR at #95845 for one of the variants that doesn't rely on a hash at all, which is the least size-efficient option. As far as I can see, perf looked completely fine, though there were some complaints about having type names in the final binary. But there are also people who want a TypeId -> TypeName function. Anyway, this is all t-compiler territory.)

However, the compiler uses hashes in many places, so one question that is still unclear to me is whether just making TypeId use a stronger hash would actually fix anything, or whether it just shifts the problem to a different hash. It would be good to get input from people familiar with how our query system works on whether, for non-incremental builds, collisions in the stable hasher can lead to problems. Because if the answer is "yes", then (1) on its own seems a bit pointless, and we should evaluate the cost of making all the critical hashes more secure to really get the proper data for discussion (2). If the answer is "no", then I honestly see no good reason to not do (1); it seems like we can get protection against a stronger threat model without significant cost. At this point (2) would be about the threat model for incremental builds. The lang team has not decided on this, and existing data indicates that using a stronger hash here has a massive performance cost, but I don't know what possibilities exist to reduce that cost.

So -- @michaelwoerister and anyone else who knows the query system, can you help us answer this question? :)
@michaelwoerister you wrote in #127809

To elaborate on that: there might be cases like that, but at least for DepNode, DefPathHash, StableCrateId, legacy-symbol-name hashes, and incr. comp. query result hashing we need there to be no collisions. For the first three of these we check for collisions and have never found one.

but it's not clear to me whether this applies to all builds or just incremental builds.

@Pr0methean commented Aug 11, 2024

Is a linkage capable of bringing in more types than it has bytes of machine code, or (if dynamic) types that will survive when it's unloaded? If not, we can use sequential IDs starting with the linkage address.

@michaelwoerister (Member)

  1. How can we make TypeId use a strong hash (or not depend on a hash at all) -- what's the best practical way to achieve that? That's the issue for which I wrote a summary above.

Personally, I think eddyb chose a good approach in #95845 by reusing the symbol mangling scheme, as that has the same requirements of strong collision freedom. If we decide to use a strong, wide hash, then I would still go:

type ==(v0_mangle)==> type-as-string ==(hash)==> type-id

That way the implementation does not need to rely on StableHasher and, more importantly, HashStable implementations. Making HashStable impls generic over the hash algorithm would lead to binary bloat and some HashStable impls do some non-trivial internal caching which makes it harder to reason about their correctness. In contrast, if we just have to hash a symbol name, we can, for example, simply use the sha256 implementation we already have in the compiler for MSVC debuginfo.
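
A minimal sketch of that pipeline, assuming a mangling helper is available (the v0_mangle function here is a hypothetical stand-in for rustc's symbol-mangling code) and using the sha2 crate:

use sha2::{Digest, Sha256};

// Hypothetical stand-in for the lossless v0 mangling of a type.
fn v0_mangle(type_description: &str) -> String {
    format!("_R{}", type_description) // placeholder encoding
}

// type ==(v0_mangle)==> type-as-string ==(hash)==> type-id
fn type_id_of(type_description: &str) -> [u8; 32] {
    // Hash the lossless v0 encoding; a collision now requires breaking SHA-256.
    Sha256::digest(v0_mangle(type_description).as_bytes()).into()
}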

To elaborate on that: there might be cases like that, but at least for DepNode, DefPathHash, StableCrateId, legacy-symbol-name hashes, and incr. comp. query result hashing we need there to be no collisions. For the first three of these we check for collisions and have never found one.

but it's not clear to me whether this applies to all builds or just incremental builds.

  • We unconditionally check for collisions of DefPathHashes for every kind of build and every kind of compiler version (e.g. no compiler built with debug assertions is needed):

        // Check for hash collisions of DefPathHashes. These should be
        // exceedingly rare.
        if let Some(existing) = self.def_path_hash_to_index.insert(&local_hash, &index) {
            let def_path1 = DefPath::make(LOCAL_CRATE, existing, |idx| self.def_key(idx));
            let def_path2 = DefPath::make(LOCAL_CRATE, index, |idx| self.def_key(idx));

            // Continuing with colliding DefPathHashes can lead to correctness
            // issues. We must abort compilation.
            //
            // The likelihood of such a collision is very small, so actually
            // running into one could be indicative of a poor hash function
            // being used.
            //
            // See the documentation for DefPathHash for more information.
            panic!(
                "found DefPathHash collision between {def_path1:?} and {def_path2:?}. \
                 Compilation cannot continue."
            );
        }

  • We also unconditionally check for collisions of StableCrateId for all crates the compiler sees at once (excerpt):

        } else if let Some(crate_name1) = self.metas[existing].as_ref().map(|data| data.name())
        {
            let crate_name0 = root.name();
            CrateError::StableCrateIdCollision(crate_name0, crate_name1)
        } else {

  • DepNode values are only created during incremental builds.

However, the compiler uses hashes in many places, so one question that is still unclear to me is whether just making TypeId use a stronger hash would actually fix anything, or whether it just shifts the problem to a different hash.

As said above, going through v0 mangling would basically fix this issue. The only hash that occurs in v0 is the StableCrateId. Any problem with collisions in that scheme would also be a problem with symbol names, so it is unlikely to go undetected.

One question I find interesting here is whether we can make any such collision "unstable" in the sense of making it not work across compiler versions, e.g. by putting the compiler version into the initial state of the hasher (or into the key, or so).

We already mostly do that by mixing the compiler version into StableCrateId. So types containing any kind of path already have a compiler-version-dependent stable hash. Also initializing the hashers with the compiler version seems like a good thing to do and should have a negligible performance impact.
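
A minimal sketch of that version-seeding idea, using std's DefaultHasher for illustration (the version constant is hypothetical, not an actual rustc API):

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical version string; rustc would mix in its real version.
const COMPILER_VERSION: &str = "rustc 1.82.0";

// Seeding every hasher with the compiler version means an engineered
// collision would also have to survive a re-seeding on each release.
fn version_seeded_hasher() -> DefaultHasher {
    let mut h = DefaultHasher::new();
    COMPILER_VERSION.hash(&mut h);
    h
}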

(2) would be about the threat model for incremental builds.

Unless I'm overlooking something, all incr. comp. specific hashes only need to be stable between successive incremental builds. So we can easily harden our use of SipHash there by generating random keys and then caching these in the incr. comp. cache.
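
A minimal sketch of that hardening, assuming the rand and siphasher crates (names illustrative; rustc would persist the key in its own cache format):

use rand::RngCore;
use siphasher::sip::SipHasher13;
use std::hash::Hasher;

// Generate a fresh random key when a new incremental session starts;
// subsequent builds would reload the same key from the incr. comp. cache.
fn fresh_key() -> (u64, u64) {
    let mut rng = rand::thread_rng();
    (rng.next_u64(), rng.next_u64())
}

fn keyed_hash(key: (u64, u64), data: &[u8]) -> u64 {
    let mut h = SipHasher13::new_with_keys(key.0, key.1);
    h.write(data);
    h.finish()
}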

@RalfJung (Member)

Great, thanks @michaelwoerister!
I have opened #129014 for changing type_id to be at least as good as a cryptographic hash function, aligning it with the t-lang decision.

I have opened #129016 for the questions around incremental compilation. If I missed an aspect of this, please bring it up in that issue.

For the people arguing that for type_id, we should consider a PRF (or even a weakened one like SipHash-1-3) to be good enough since we don't expect Rust programmers to try to exploit the compiler, I would suggest you file a new issue with the arguments for that and nominate it for t-lang discussion. Personally, since it seems like #129014 can be fixed without major downsides, I don't see a good case for weakening the requirement here.

initializing the hashers with the compiler version seems like a good thing to do and should have a negligible performance impact.

@michaelwoerister would it make sense to file this as a possible-improvement issue? Maybe it'd be better if you did this since you could point at where in the compiler that would even happen.

I will then close this issue, as further discussion should happen in the existing or to-be-created successor issues. Thanks all for the discussion!

@michaelwoerister (Member)

@michaelwoerister would it make sense to file this as a possible-improvement issue? Maybe it'd be better if you did this since you could point at where in the compiler that would even happen.

I'll look into opening an issue.

@RalfJung (Member) commented Aug 13, 2024

I opened rust-lang/unsafe-code-guidelines#525 for potential hashing concerns related to dlopen (where the uniqueness checks @michaelwoerister mentioned can't be run).
