fix(hadoop): Upgrade nimbus-jose-jwt in Hadoop to fix CVE-2025-53864 #1245
Merged

Changes from all commits (3 commits)
4 changes: 2 additions & 2 deletions
...p/stackable/patches/3.4.1/0011-HADOOP-18583.-Fix-loading-of-OpenSSL-3.x-symbols-525.patch
```diff
@@ -1,7 +1,7 @@
-From cd1c23ea5bddd2796caf2590fef467e488c3bcbf Mon Sep 17 00:00:00 2001
+From 932464d9fbf23f9042fee2f8b4be6029174d2ca4 Mon Sep 17 00:00:00 2001
 From: Sebastian Klemke <[email protected]>
 Date: Thu, 7 Nov 2024 19:14:13 +0100
-Subject: HADOOP-18583. Fix loading of OpenSSL 3.x symbols (#5256) (#7149)
+Subject: HADOOP-18583. Fix loading of OpenSSL 3.x symbols (#5256) (#7149)
 
 Contributed by Sebastian Klemke
 ---
```
37 changes: 37 additions & 0 deletions
...p/stackable/patches/3.4.1/0012-Upgrade-nimbus-jose-jwt-to-9.37.4-to-fix-CVE-2025-53.patch
```diff
From 672f429d62c9ef7c5fdd00cf856a698f438c7cec Mon Sep 17 00:00:00 2001
From: xeniape <[email protected]>
Date: Thu, 11 Sep 2025 12:14:05 +0200
Subject: Upgrade-nimbus-jose-jwt-to-9.37.4-to-fix-CVE-2025-53864, Upstream
 reference: https://github.com/apache/hadoop/pull/7870

---
 LICENSE-binary         | 2 +-
 hadoop-project/pom.xml | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/LICENSE-binary b/LICENSE-binary
index 90da3d032b..fdcb5c0a1f 100644
--- a/LICENSE-binary
+++ b/LICENSE-binary
@@ -240,7 +240,7 @@ com.google.guava:guava:20.0
 com.google.guava:guava:32.0.1-jre
 com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
 com.microsoft.azure:azure-storage:7.0.0
-com.nimbusds:nimbus-jose-jwt:9.37.2
+com.nimbusds:nimbus-jose-jwt:9.37.4
 com.zaxxer:HikariCP:4.0.3
 commons-beanutils:commons-beanutils:1.9.4
 commons-cli:commons-cli:1.5.0
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 155cdf9841..e23f524224 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -216,7 +216,7 @@
     <openssl-wildfly.version>1.1.3.Final</openssl-wildfly.version>
     <jsonschema2pojo.version>1.0.2</jsonschema2pojo.version>
     <woodstox.version>5.4.0</woodstox.version>
-    <nimbus-jose-jwt.version>9.37.2</nimbus-jose-jwt.version>
+    <nimbus-jose-jwt.version>9.37.4</nimbus-jose-jwt.version>
     <nodejs.version>v14.17.0</nodejs.version>
     <yarnpkg.version>v1.22.5</yarnpkg.version>
     <apache-ant.version>1.10.13</apache-ant.version>
```
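For orientation, a version property in `hadoop-project/pom.xml` takes effect because the modules reference it from a managed dependency, so bumping the one property rewires the whole build. A minimal sketch, assuming the usual `dependencyManagement` wiring (illustrative, not copied verbatim from the Hadoop pom):

```xml
<!-- Sketch of assumed wiring: the property patched above is consumed where
     the dependency is declared, so one property change covers all modules. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.nimbusds</groupId>
      <artifactId>nimbus-jose-jwt</artifactId>
      <version>${nimbus-jose-jwt.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```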
22 changes: 22 additions & 0 deletions
hadoop/hadoop/stackable/patches/3.4.2/0001-YARN-11527-Update-node.js.patch
```diff
From c4dbb05b4f92f93c7e8f11d6a622b73f40f4664c Mon Sep 17 00:00:00 2001
From: xeniape <[email protected]>
Date: Wed, 10 Sep 2025 14:18:38 +0200
Subject: YARN-11527-Update-node.js

---
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index b9eacd5ba3..70f64bf55c 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -234,7 +234,7 @@
     <jsonschema2pojo.version>1.0.2</jsonschema2pojo.version>
     <woodstox.version>5.4.0</woodstox.version>
     <nimbus-jose-jwt.version>9.37.2</nimbus-jose-jwt.version>
-    <nodejs.version>v12.22.1</nodejs.version>
+    <nodejs.version>v14.17.0</nodejs.version>
     <yarnpkg.version>v1.22.5</yarnpkg.version>
     <apache-ant.version>1.10.13</apache-ant.version>
     <jmh.version>1.20</jmh.version>
```
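The bumped `nodejs.version` property feeds the YARN web UI build. A hedged sketch of how such a property is commonly consumed via frontend-maven-plugin (the plugin and its `nodeVersion`/`yarnVersion` options are real; the exact module wiring inside Hadoop is assumed):

```xml
<!-- Assumed consumer of the patched property: a frontend-maven-plugin
     execution that installs the pinned Node.js and Yarn versions. -->
<plugin>
  <groupId>com.github.eirslett</groupId>
  <artifactId>frontend-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>install node and yarn</id>
      <goals>
        <goal>install-node-and-yarn</goal>
      </goals>
      <configuration>
        <nodeVersion>${nodejs.version}</nodeVersion>
        <yarnVersion>${yarnpkg.version}</yarnVersion>
      </configuration>
    </execution>
  </executions>
</plugin>
```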
259 changes: 259 additions & 0 deletions
...adoop/stackable/patches/3.4.2/0002-Allow-overriding-datanode-registration-addresses.patch
```diff
From adc337817824ba29e7eb669c13730acdbb0b9630 Mon Sep 17 00:00:00 2001
From: xeniape <[email protected]>
Date: Wed, 10 Sep 2025 14:36:20 +0200
Subject: Allow-overriding-datanode-registration-addresses

---
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java |  9 +++
 .../blockmanagement/DatanodeManager.java      | 43 +++++++-----
 .../hadoop/hdfs/server/datanode/DNConf.java   | 70 +++++++++++++++++++
 .../hadoop/hdfs/server/datanode/DataNode.java | 35 ++++++++--
 4 files changed, 135 insertions(+), 22 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index f92a2ad565..25bcd438c7 100755
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -152,6 +152,13 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final boolean DFS_DATANODE_DROP_CACHE_BEHIND_READS_DEFAULT = false;
   public static final String DFS_DATANODE_USE_DN_HOSTNAME = "dfs.datanode.use.datanode.hostname";
   public static final boolean DFS_DATANODE_USE_DN_HOSTNAME_DEFAULT = false;
+
+  public static final String DFS_DATANODE_REGISTERED_HOSTNAME = "dfs.datanode.registered.hostname";
+  public static final String DFS_DATANODE_REGISTERED_DATA_PORT = "dfs.datanode.registered.port";
+  public static final String DFS_DATANODE_REGISTERED_HTTP_PORT = "dfs.datanode.registered.http.port";
+  public static final String DFS_DATANODE_REGISTERED_HTTPS_PORT = "dfs.datanode.registered.https.port";
+  public static final String DFS_DATANODE_REGISTERED_IPC_PORT = "dfs.datanode.registered.ipc.port";
+
   public static final String DFS_DATANODE_MAX_LOCKED_MEMORY_KEY = "dfs.datanode.max.locked.memory";
   public static final long DFS_DATANODE_MAX_LOCKED_MEMORY_DEFAULT = 0;
   public static final String DFS_DATANODE_FSDATASETCACHE_MAX_THREADS_PER_VOLUME_KEY = "dfs.datanode.fsdatasetcache.max.threads.per.volume";
@@ -491,6 +498,8 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final long DFS_DATANODE_PROCESS_COMMANDS_THRESHOLD_DEFAULT =
       TimeUnit.SECONDS.toMillis(2);
 
+  public static final String DFS_NAMENODE_DATANODE_REGISTRATION_UNSAFE_ALLOW_ADDRESS_OVERRIDE_KEY = "dfs.namenode.datanode.registration.unsafe.allow-address-override";
+  public static final boolean DFS_NAMENODE_DATANODE_REGISTRATION_UNSAFE_ALLOW_ADDRESS_OVERRIDE_DEFAULT = false;
   public static final String DFS_NAMENODE_DATANODE_REGISTRATION_IP_HOSTNAME_CHECK_KEY = "dfs.namenode.datanode.registration.ip-hostname-check";
   public static final boolean DFS_NAMENODE_DATANODE_REGISTRATION_IP_HOSTNAME_CHECK_DEFAULT = true;
 
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index ebd2fa992e..c56f254478 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -181,6 +181,8 @@ public class DatanodeManager {
   private boolean hasClusterEverBeenMultiRack = false;
 
   private final boolean checkIpHostnameInRegistration;
+  private final boolean allowRegistrationAddressOverride;
+
   /**
    * Whether we should tell datanodes what to cache in replies to
    * heartbeat messages.
@@ -314,6 +316,11 @@ public class DatanodeManager {
     // Block invalidate limit also has some dependency on heartbeat interval.
     // Check setBlockInvalidateLimit().
     setBlockInvalidateLimit(configuredBlockInvalidateLimit);
+    this.allowRegistrationAddressOverride = conf.getBoolean(
+        DFSConfigKeys.DFS_NAMENODE_DATANODE_REGISTRATION_UNSAFE_ALLOW_ADDRESS_OVERRIDE_KEY,
+        DFSConfigKeys.DFS_NAMENODE_DATANODE_REGISTRATION_UNSAFE_ALLOW_ADDRESS_OVERRIDE_DEFAULT);
+    LOG.info(DFSConfigKeys.DFS_NAMENODE_DATANODE_REGISTRATION_UNSAFE_ALLOW_ADDRESS_OVERRIDE_KEY
+        + "=" + allowRegistrationAddressOverride);
     this.checkIpHostnameInRegistration = conf.getBoolean(
         DFSConfigKeys.DFS_NAMENODE_DATANODE_REGISTRATION_IP_HOSTNAME_CHECK_KEY,
         DFSConfigKeys.DFS_NAMENODE_DATANODE_REGISTRATION_IP_HOSTNAME_CHECK_DEFAULT);
@@ -1158,27 +1165,29 @@ public class DatanodeManager {
    */
   public void registerDatanode(DatanodeRegistration nodeReg)
       throws DisallowedDatanodeException, UnresolvedTopologyException {
-    InetAddress dnAddress = Server.getRemoteIp();
-    if (dnAddress != null) {
-      // Mostly called inside an RPC, update ip and peer hostname
-      String hostname = dnAddress.getHostName();
-      String ip = dnAddress.getHostAddress();
-      if (checkIpHostnameInRegistration && !isNameResolved(dnAddress)) {
-        // Reject registration of unresolved datanode to prevent performance
-        // impact of repetitive DNS lookups later.
-        final String message = "hostname cannot be resolved (ip="
-            + ip + ", hostname=" + hostname + ")";
-        LOG.warn("Unresolved datanode registration: " + message);
-        throw new DisallowedDatanodeException(nodeReg, message);
+    if (!allowRegistrationAddressOverride) {
+      InetAddress dnAddress = Server.getRemoteIp();
+      if (dnAddress != null) {
+        // Mostly called inside an RPC, update ip and peer hostname
+        String hostname = dnAddress.getHostName();
+        String ip = dnAddress.getHostAddress();
+        if (checkIpHostnameInRegistration && !isNameResolved(dnAddress)) {
+          // Reject registration of unresolved datanode to prevent performance
+          // impact of repetitive DNS lookups later.
+          final String message = "hostname cannot be resolved (ip="
+              + ip + ", hostname=" + hostname + ")";
+          LOG.warn("Unresolved datanode registration: " + message);
+          throw new DisallowedDatanodeException(nodeReg, message);
+        }
+        // update node registration with the ip and hostname from rpc request
+        nodeReg.setIpAddr(ip);
+        nodeReg.setPeerHostName(hostname);
       }
-      // update node registration with the ip and hostname from rpc request
-      nodeReg.setIpAddr(ip);
-      nodeReg.setPeerHostName(hostname);
     }
-
+
     try {
       nodeReg.setExportedKeys(blockManager.getBlockKeys());
-
+
       // Checks if the node is not on the hosts list. If it is not, then
       // it will be disallowed from registering.
       if (!hostConfigManager.isIncluded(nodeReg)) {
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
index 21b92db307..5d3437239c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
@@ -101,6 +101,11 @@ public class DNConf {
   final boolean syncOnClose;
   final boolean encryptDataTransfer;
   final boolean connectToDnViaHostname;
+  private final String registeredHostname;
+  private final int registeredDataPort;
+  private final int registeredHttpPort;
+  private final int registeredHttpsPort;
+  private final int registeredIpcPort;
   final boolean overwriteDownstreamDerivedQOP;
   private final boolean pmemCacheRecoveryEnabled;
 
@@ -189,6 +194,11 @@ public class DNConf {
     connectToDnViaHostname = getConf().getBoolean(
         DFSConfigKeys.DFS_DATANODE_USE_DN_HOSTNAME,
         DFSConfigKeys.DFS_DATANODE_USE_DN_HOSTNAME_DEFAULT);
+    registeredHostname = getConf().get(DFSConfigKeys.DFS_DATANODE_REGISTERED_HOSTNAME);
+    registeredDataPort = getConf().getInt(DFSConfigKeys.DFS_DATANODE_REGISTERED_DATA_PORT, -1);
+    registeredHttpPort = getConf().getInt(DFSConfigKeys.DFS_DATANODE_REGISTERED_HTTP_PORT, -1);
+    registeredHttpsPort = getConf().getInt(DFSConfigKeys.DFS_DATANODE_REGISTERED_HTTPS_PORT, -1);
+    registeredIpcPort = getConf().getInt(DFSConfigKeys.DFS_DATANODE_REGISTERED_IPC_PORT, -1);
     this.blockReportInterval = getConf().getLong(
         DFS_BLOCKREPORT_INTERVAL_MSEC_KEY,
         DFS_BLOCKREPORT_INTERVAL_MSEC_DEFAULT);
@@ -363,6 +373,66 @@ public class DNConf {
     return connectToDnViaHostname;
   }
 
+  /**
+   * Returns a hostname to register with the cluster instead of the system
+   * hostname.
+   * This is an expert setting and can be used in multihoming scenarios to
+   * override the detected hostname.
+   *
+   * @return null if the system hostname should be used, otherwise a hostname
+   */
+  public String getRegisteredHostname() {
+    return registeredHostname;
+  }
+
+  /**
+   * Returns a port number to register with the cluster instead of the
+   * data port that the node is listening on.
+   * This is an expert setting and can be used in multihoming scenarios to
+   * override the detected port.
+   *
+   * @return -1 if the actual port should be used, otherwise a port number
+   */
+  public int getRegisteredDataPort() {
+    return registeredDataPort;
+  }
+
+  /**
+   * Returns a port number to register with the cluster instead of the
+   * HTTP port that the node is listening on.
+   * This is an expert setting and can be used in multihoming scenarios to
+   * override the detected port.
+   *
+   * @return -1 if the actual port should be used, otherwise a port number
+   */
+  public int getRegisteredHttpPort() {
+    return registeredHttpPort;
+  }
+
+  /**
+   * Returns a port number to register with the cluster instead of the
+   * HTTPS port that the node is listening on.
+   * This is an expert setting and can be used in multihoming scenarios to
+   * override the detected port.
+   *
+   * @return -1 if the actual port should be used, otherwise a port number
+   */
+  public int getRegisteredHttpsPort() {
+    return registeredHttpsPort;
+  }
+
+  /**
+   * Returns a port number to register with the cluster instead of the
+   * IPC port that the node is listening on.
+   * This is an expert setting and can be used in multihoming scenarios to
+   * override the detected port.
+   *
+   * @return -1 if the actual port should be used, otherwise a port number
+   */
+  public int getRegisteredIpcPort() {
+    return registeredIpcPort;
+  }
+
   /**
    * Returns socket timeout
    *
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index 956f5bbe51..22ae127d98 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -135,6 +135,7 @@ import java.util.HashSet;
 import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
+import java.util.Optional;
 import java.util.Map.Entry;
 import java.util.Set;
 import java.util.UUID;
@@ -2076,11 +2077,35 @@ public class DataNode extends ReconfigurableBase
           NodeType.DATA_NODE);
     }
 
-    DatanodeID dnId = new DatanodeID(
-        streamingAddr.getAddress().getHostAddress(), hostName,
-        storage.getDatanodeUuid(), getXferPort(), getInfoPort(),
-        infoSecurePort, getIpcPort());
-    return new DatanodeRegistration(dnId, storageInfo,
+    String registeredHostname = Optional
+        .ofNullable(dnConf.getRegisteredHostname())
+        .orElseGet(() -> streamingAddr.getAddress().getHostAddress());
+    int registeredDataPort = dnConf.getRegisteredDataPort();
+    if (registeredDataPort == -1) {
+      registeredDataPort = getXferPort();
+    }
+    int registeredHttpPort = dnConf.getRegisteredHttpPort();
+    if (registeredHttpPort == -1) {
+      registeredHttpPort = getInfoPort();
+    }
+    int registeredHttpsPort = dnConf.getRegisteredHttpsPort();
+    if (registeredHttpsPort == -1) {
+      registeredHttpsPort = getInfoSecurePort();
+    }
+    int registeredIpcPort = dnConf.getRegisteredIpcPort();
+    if (registeredIpcPort == -1) {
+      registeredIpcPort = getIpcPort();
+    }
+
+    DatanodeID dnId = new DatanodeID(registeredHostname,
+        registeredHostname,
+        storage.getDatanodeUuid(),
+        registeredDataPort,
+        registeredHttpPort,
+        registeredHttpsPort,
+        registeredIpcPort);
+
+    return new DatanodeRegistration(dnId, storageInfo,
         new ExportedBlockKeys(), VersionInfo.getVersion());
   }
 
```
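Taken together, the patch adds datanode-side address overrides plus a namenode-side opt-in. A configuration sketch using the keys introduced above (placement in hdfs-site.xml is the usual convention for HDFS keys; hostnames and ports are placeholder values):

```xml
<!-- hdfs-site.xml on the datanode: advertise an externally reachable
     address instead of the autodetected one (placeholder values). -->
<property>
  <name>dfs.datanode.registered.hostname</name>
  <value>dn-0.example.com</value>
</property>
<property>
  <name>dfs.datanode.registered.port</name>
  <value>31000</value>
</property>
<property>
  <name>dfs.datanode.registered.ipc.port</name>
  <value>31001</value>
</property>

<!-- hdfs-site.xml on the namenode: accept the datanode-supplied address.
     As the "unsafe" in the key signals, this bypasses the ip/hostname
     check normally applied during registration. -->
<property>
  <name>dfs.namenode.datanode.registration.unsafe.allow-address-override</name>
  <value>true</value>
</property>
```

This mirrors the `registerDatanode` change: with the override flag set, the namenode keeps the registration's self-reported address rather than replacing it with the RPC peer address.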
29 changes: 29 additions & 0 deletions
hadoop/hadoop/stackable/patches/3.4.2/0003-Async-profiler-also-grab-itimer-events.patch
```diff
From ab9550bd7b71c16c381a105a22732f6e71f2dba6 Mon Sep 17 00:00:00 2001
From: xeniape <[email protected]>
Date: Wed, 10 Sep 2025 14:39:20 +0200
Subject: Async-profiler-also-grab-itimer-events

---
 .../src/main/java/org/apache/hadoop/http/ProfileServlet.java | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/ProfileServlet.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/ProfileServlet.java
index ce53274151..909892ff90 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/ProfileServlet.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/ProfileServlet.java
@@ -76,6 +76,7 @@ import org.apache.hadoop.util.ProcessUtils;
  * Following event types are supported (default is 'cpu') (NOTE: not all OS'es support all events)
  * // Perf events:
  * //    cpu
+ * //    itimer
  * //    page-faults
  * //    context-switches
  * //    cycles
@@ -118,6 +119,7 @@ public class ProfileServlet extends HttpServlet {
   private enum Event {
 
     CPU("cpu"),
+    ITIMER("itimer"),
     ALLOC("alloc"),
     LOCK("lock"),
     PAGE_FAULTS("page-faults"),
```
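Since the servlet resolves the `event` query parameter against this enum, the new sample type should be reachable the same way as the existing ones, e.g. by requesting `/prof?event=itimer` on the daemon's web port (endpoint and parameter names taken from the upstream ProfileServlet, which this patch otherwise leaves unchanged). `itimer` sampling is the async-profiler mode that does not depend on perf_events, which makes it useful in restricted environments such as containers.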