Closed
103 commits
51466d7
HDDS-10527. Rewrite key atomically (#6385)
sodonnel May 15, 2024
70df306
HDDS-10865. Check OM version before making rewrite key request (#6689)
adoroszlai May 17, 2024
771c418
HDDS-10872. Rewrite single key via CLI (#6691)
adoroszlai May 18, 2024
3c8398b
HDDS-10857. Rename rewriteGeneration to expectedDataGeneration (#6692)
sodonnel May 21, 2024
3b282c7
HDDS-10840. Do not pass owner in KeyArgs when rewriting a key (#6710)
sodonnel May 21, 2024
dbe58c4
HDDS-10839. Add end to end tests for atomic rewrite for OBS buckets. …
adoroszlai May 28, 2024
f862049
HDDS-10921. Enable Atomic Rewrite in FSO buckets (#6740)
sodonnel May 28, 2024
8c04d72
HDDS-10947. Add robot test for rewrite of multipart key (#6757)
adoroszlai Jun 2, 2024
3ec065d
HDDS-10843. Enhance rewrite test to cover all key attributes (#6799)
adoroszlai Jun 11, 2024
99be317
HDDS-10946. Test combinations of rename and rewrite (#6823)
adoroszlai Jun 21, 2024
1fe9011
Merge remote-tracking branch 'origin/master' into HDDS-10656-atomic-k…
adoroszlai Jun 21, 2024
54f1519
HDDS-11048. Remove dev-only toggle functionality of rewrite CLI (#6846)
adoroszlai Jun 22, 2024
56d1289
HDDS-9874. Introduce Metrics for listKeys Dashboard (#5745)
muskan1012 Jul 8, 2024
d6d33f6
HDDS-9977. Dashboard for create key metrics (#6865)
muskan1012 Jul 8, 2024
3e70cf4
HDDS-11106. Save logs for stopped containers (#6908)
adoroszlai Jul 8, 2024
a9a42eb
HDDS-10941. Add a few interesting ContainerStateMachine metrics in CS…
jojochuang Jul 8, 2024
b428407
HDDS-9842. Cache volume capacity and available space (#6383)
adoroszlai Jul 8, 2024
197553e
HDDS-11113. Remove unused ScmUtils#preCheck and related code (#6907)
myskov Jul 9, 2024
44df637
HDDS-10112. Dashboard for read key metrics (#6868)
muskan1012 Jul 9, 2024
19d287d
HDDS-11045. Recon Decommissioning Info API throws NPE. (#6862)
devmadhuu Jul 9, 2024
0eab761
Merge remote-tracking branch 'origin/master' into HDDS-10656-atomic-k…
adoroszlai Jul 9, 2024
89494f1
HDDS-11017. Migrated to ECharts, Vite and AntD v4 with eslint, pretti…
devabhishekpal Jul 10, 2024
33924d9
HDDS-11112. Verify javadoc creation in CI (#6910)
adoroszlai Jul 10, 2024
27838de
HDDS-11093. Mark TestContainerBalancerDatanodeNodeLimit#testMetrics a…
adoroszlai Jul 10, 2024
76df311
HDDS-10490. Mark TestSnapshotDiffManager#testLoadJobsOnStartUp as flaky
adoroszlai Jul 10, 2024
0230c8e
HDDS-8900. Mark TestSecretKeysApi#testSecretKeyApiSuccess as flaky
adoroszlai Jul 10, 2024
a514830
HDDS-10886. Mark OzoneRpcClientTests#testParallelDeleteBucketAndCreat…
adoroszlai Jul 10, 2024
9b83c01
HDDS-11128. Mark TestReconAndAdminContainerCLI#testNodesInDecommissio…
adoroszlai Jul 10, 2024
1984c56
HDDS-11087. Mark TestContainerReplication#testECContainerReplication …
adoroszlai Jul 10, 2024
c89bc37
HDDS-11129. Mark TestSnapshotDirectoryCleaningService#testExclusiveSi…
adoroszlai Jul 10, 2024
c1e25ef
HDDS-11130. Mark TestSnapshotDeletingService#testSnapshotSplitAndMove…
adoroszlai Jul 10, 2024
b666840
HDDS-11131. Mark TestSnapshotDeletingService#testSnapshotWithFSO as f…
adoroszlai Jul 10, 2024
56ce591
HDDS-11040. Disable REST endpoint for S3 secret manipulation by usern…
ivanzlenko Jul 10, 2024
4d29b6c
HDDS-11103. Do not assume working dir is writable in container (#6913)
adoroszlai Jul 10, 2024
975a8d8
HDDS-11052. HttpFS fails to start when compiled for Java 17 (#6854)
adoroszlai Jul 10, 2024
8500020
HDDS-11138. Remove version from compose files (#6927)
adoroszlai Jul 11, 2024
b000a2a
HDDS-10841. Snapshot diff CLI help should print default value for par…
will-sh Jul 11, 2024
824e7b1
HDDS-11110. Allow running test-all.sh from any directory (#6931)
adoroszlai Jul 11, 2024
ef374aa
HDDS-11139. Avoid unnecessary object creation in OM request validator…
adoroszlai Jul 12, 2024
a8c377f
HDDS-11069. Block location is missing in output of Ozone debug chunki…
sadanand48 Jul 12, 2024
c05227a
HDDS-10386. Introduce Metrics for deletekey operation in OM Service. …
muskan1012 Jul 12, 2024
ed52aa9
HDDS-11096. Error creating s3 auth info for request with Authorizatio…
adoroszlai Jul 12, 2024
c768815
HDDS-10604. Whitelist based compliance check for crypto related confi…
dombizita Jul 12, 2024
9993049
HDDS-11100. OM/SCM support displaying Netty off-heap memory metrics (…
slfan1989 Jul 12, 2024
9b6f20b
HDDS-10998. Declare annotation processors explicitly (#6796)
adoroszlai Jul 12, 2024
0201448
HDDS-11124. Removed DELETED_TABLE and DELETED_DIR_TABLE locks (#6921)
hemantk-12 Jul 12, 2024
2a48bd1
HDDS-11169. Upgrade packageManager in package.json to match pom.xml (…
devabhishekpal Jul 13, 2024
f2671e7
HDDS-11172. Bump vite to 4.5.3 (#6918)
dependabot[bot] Jul 13, 2024
3383c86
HDDS-11173. Bump maven-clean-plugin to 3.4.0 (#6937)
dependabot[bot] Jul 13, 2024
5cb2bb4
HDDS-11175. Bump sqlite-jdbc to 3.46.0.0 (#6938)
dependabot[bot] Jul 13, 2024
4987e15
HDDS-11176. Bump Spring Framework to 5.3.37 (#6940)
dependabot[bot] Jul 13, 2024
adc7ad3
HDDS-11177. Bump error_prone_annotations to 2.28.0 (#6939)
dependabot[bot] Jul 13, 2024
9b29eae
HDDS-11117. Introduce debug CLI command to show the value schema of a…
Tejaskriya Jul 15, 2024
63a232b
HDDS-10907. DataNode StorageContainerMetrics numWriteChunk is counted…
chungen0126 Jul 16, 2024
404a036
HDDS-11191. Add a config to set max_open_files for OM RocksDB. (#6954)
sadanand48 Jul 16, 2024
6fa74bb
HDDS-11166. Switch to Rocky Linux-based ozone-runner (#6942)
adoroszlai Jul 17, 2024
abf3a0a
HDDS-10844. Clarify snapshot create error message. (#6955)
will-sh Jul 17, 2024
e01a57d
HDDS-11186. First container log missing from bundle (#6952)
adoroszlai Jul 17, 2024
dd5c5a0
HDDS-11179. DBConfigFromFile#readFromFile result of toIOException not…
will-sh Jul 17, 2024
7062e17
HDDS-11192. Increase SPNEGO URL test coverage (#6956)
adoroszlai Jul 17, 2024
b532828
HDDS-10561. Dashboard for delete key metrics (#6948)
muskan1012 Jul 18, 2024
f5ed9d3
HDDS-10389. Implement a search feature for users to locate open keys …
ArafatKhan2198 Jul 18, 2024
dbb3047
HDDS-11194. OM missing audit log for upgrade (#6958)
sumitagrawl Jul 18, 2024
50a07a7
HDDS-11180. Simplify HttpServer2#inferMimeType return statement (#6963)
will-sh Jul 18, 2024
daab2b3
HDDS-11198. Fix Typescript configs for Recon (#6961)
devabhishekpal Jul 18, 2024
1996f3a
HDDS-11183. Keys from DeletedTable and DeletedDirTable of AOS should …
swamirishi Jul 18, 2024
0c924bc
HDDS-11150. Recon Overview page crashes due to failed API Calls (#6944)
devabhishekpal Jul 19, 2024
2690f02
HDDS-11210. Bump log4j2 to 2.23.1 (#6970)
dependabot[bot] Jul 20, 2024
71e7da0
HDDS-11211. Bump assertj-core to 3.26.3 (#6972)
dependabot[bot] Jul 20, 2024
b9ea9b0
HDDS-11212. Bump commons-net to 3.11.1 (#6973)
dependabot[bot] Jul 20, 2024
a6b3392
HDDS-11213. Bump commons-daemon to 1.4.0 (#6971)
dependabot[bot] Jul 20, 2024
86c4339
HDDS-11187. Fix Event Handling in Recon OMDBUpdatesHandler to Prevent…
ArafatKhan2198 Jul 22, 2024
a5e420c
HDDS-11120. Rich rebalancing status info (#6911)
juncevich Jul 22, 2024
2eed61c
HDDS-11188. Initial setup for new UI layout and enable users to switc…
devabhishekpal Jul 23, 2024
e33fd2d
HDDS-10382. Optimize Netty memory allocation by avoiding zero assignm…
sarvekshayr Jul 24, 2024
19f9afb
HDDS-11215. Quota count can go wrong when double buffer flush takes t…
sumitagrawl Jul 24, 2024
c760804
HDDS-11083. Avoid duplicate creation of RunningDatanodeState (#6886)
jianghuazhu Jul 25, 2024
324d296
HDDS-11228. Ozone Recon HeatMap refactoring of code. (#6986)
devmadhuu Jul 25, 2024
96e1a8c
HDDS-11140. Recon Disk Usage Metadata Details are not working for du …
smitajoshi12 Jul 25, 2024
86346cb
HDDS-10658. Include Transaction ID and Command Name in OM Audit Messa…
sumitagrawl Jul 26, 2024
69ba680
HDDS-11136. Some containers affected by HDDS-8129 may still be in the…
siddhantsangwan Jul 26, 2024
7a07625
HDDS-11232. Spare InfoBucket RPC call for the FileSystem#getFileStatu…
fapifta Jul 26, 2024
9ba4a73
HDDS-11167. Use Key/TrustManagers directly for TLS connection instead…
Galsza Jul 26, 2024
b3939ff
HDDS-11241. Bump kotlin to 1.9.25 (#6996)
dependabot[bot] Jul 28, 2024
b864195
HDDS-11242. Bump exec-maven-plugin to 3.3.0 (#6995)
dependabot[bot] Jul 28, 2024
b07bb21
HDDS-11223. Fix iteration over ChunkBufferImplWithByteBufferList (#6999)
Cyrill Jul 28, 2024
9bf587c
HDDS-11245. Bump maven-core to 3.9.8 (#6997)
dependabot[bot] Jul 29, 2024
82c6bf3
HDDS-11023. Recon Disk Usage null conditions not handled properly for…
smitajoshi12 Jul 29, 2024
dcfa3b4
HDDS-11238. Converge redundant getBucket calls for FileSystem client …
tanvipenumudy Jul 29, 2024
a532b89
HDDS-11206. Statistics storage usage indicators include min, max, med…
jianghuazhu Jul 29, 2024
f9d0bc1
HDDS-11076. Revert HDDS-11076 and HDDS-11078 (#7004)
adoroszlai Jul 29, 2024
dd831d8
HDDS-11221. Resolve potential time discrepancy for expired multipart …
ivandika3 Jul 30, 2024
57bfa8c
HDDS-11236. Move Java version-specific NETTY_OPTS to ozone-functions.…
sarvekshayr Jul 30, 2024
99b481b
HDDS-11082. Code cleanup in DatanodeStateMachine (#6883)
jianghuazhu Jul 30, 2024
b48a4c8
HDDS-11119. Unnecessary UPDATE_VOLUME audit entry for DELETE_TENANT (…
sumitagrawl Jul 31, 2024
9533066
HDDS-11231. Make Recon start more resilient (#6987)
devmadhuu Jul 31, 2024
1ae2701
HDDS-11068. Move SstFiltered flag to a file in the snapshot directory…
swamirishi Jul 31, 2024
a3f987f
HDDS-10917. Refactor more tests from TestContainerBalancerTask (#6734)
Montura Aug 1, 2024
cc95ee3
HDDS-11078. Remove usage of sun.misc.Signal (#7006)
adoroszlai Aug 1, 2024
5118f23
HDDS-11076. NoSuchMethodError: ByteBuffer.position compiling with Jav…
adoroszlai Jul 8, 2024
d38372a
HDDS-11201. Optimise FullTableCache eviction, scheduler and lock. (#6…
sumitagrawl Aug 2, 2024
d65bd47
Test change
sarvekshayr Aug 5, 2024
298d79f
Update test.sh
sarvekshayr Aug 6, 2024
5 changes: 2 additions & 3 deletions .github/workflows/ci.yml
@@ -142,7 +142,7 @@ jobs:
          distribution: 'temurin'
          java-version: ${{ matrix.java }}
      - name: Run a full build
-       run: hadoop-ozone/dev-support/checks/build.sh -Pdist -Psrc ${{ inputs.ratis_args }}
+       run: hadoop-ozone/dev-support/checks/build.sh -Pdist -Psrc -Dmaven.javadoc.skip=true ${{ inputs.ratis_args }}
        env:
          DEVELOCITY_ACCESS_KEY: ${{ secrets.GE_ACCESS_TOKEN }}
      - name: Store binaries for tests
@@ -218,7 +218,7 @@ jobs:
          distribution: 'temurin'
          java-version: ${{ matrix.java }}
      - name: Compile Ozone using Java ${{ matrix.java }}
-       run: hadoop-ozone/dev-support/checks/build.sh -Dskip.npx -Dskip.installnpx -Djavac.version=${{ matrix.java }} ${{ inputs.ratis_args }}
+       run: hadoop-ozone/dev-support/checks/build.sh -Pdist -Dskip.npx -Dskip.installnpx -Djavac.version=${{ matrix.java }} ${{ inputs.ratis_args }}
        env:
          OZONE_WITH_COVERAGE: false
          DEVELOCITY_ACCESS_KEY: ${{ secrets.GE_ACCESS_TOKEN }}
@@ -428,7 +428,6 @@ jobs:
          mkdir -p hadoop-ozone/dist/target
          tar xzvf ozone*.tar.gz -C hadoop-ozone/dist/target
          rm ozone*.tar.gz
-         sudo chmod -R a+rwX hadoop-ozone/dist/target
      - name: Execute tests
        run: |
          pushd hadoop-ozone/dist/target/ozone-*
2 changes: 1 addition & 1 deletion .github/workflows/intermittent-test-check.yml
@@ -115,7 +115,7 @@ jobs:
          java-version: 8
      - name: Build (most) of Ozone
        run: |
-         args="-Dskip.npx -Dskip.installnpx -DskipShade"
+         args="-Dskip.npx -Dskip.installnpx -DskipShade -Dmaven.javadoc.skip=true"
          if [[ "${{ github.event.inputs.ratis-ref }}" != "" ]]; then
            args="$args -Dratis.version=${{ needs.ratis.outputs.ratis-version }}"
            args="$args -Dratis.thirdparty.version=${{ needs.ratis.outputs.thirdparty-version }}"
2 changes: 1 addition & 1 deletion .github/workflows/repeat-acceptance.yml
@@ -108,7 +108,7 @@ jobs:
          distribution: 'temurin'
          java-version: ${{ env.JAVA_VERSION }}
      - name: Run a full build
-       run: hadoop-ozone/dev-support/checks/build.sh -Pdist -Psrc
+       run: hadoop-ozone/dev-support/checks/build.sh -Pdist -Psrc -Dmaven.javadoc.skip=true
        env:
          DEVELOCITY_ACCESS_KEY: ${{ secrets.GE_ACCESS_TOKEN }}
      - name: Store binaries for tests
12 changes: 12 additions & 0 deletions hadoop-hdds/annotations/pom.xml
@@ -34,4 +34,16 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
   <properties>
     <maven.test.skip>true</maven.test.skip> <!-- no tests in this module so far -->
   </properties>
+
+  <build>
+    <plugins>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-compiler-plugin</artifactId>
+        <configuration>
+          <proc>none</proc>
+        </configuration>
+      </plugin>
+    </plugins>
+  </build>
 </project>
37 changes: 37 additions & 0 deletions hadoop-hdds/client/pom.xml
@@ -70,6 +70,43 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
          <excludeFilterFile>${basedir}/dev-support/findbugsExcludeFile.xml</excludeFilterFile>
        </configuration>
      </plugin>
+     <plugin>
+       <groupId>org.apache.maven.plugins</groupId>
+       <artifactId>maven-compiler-plugin</artifactId>
+       <configuration>
+         <annotationProcessorPaths>
+           <path>
+             <groupId>org.apache.ozone</groupId>
+             <artifactId>hdds-config</artifactId>
+             <version>${hdds.version}</version>
+           </path>
+         </annotationProcessorPaths>
+         <annotationProcessors>
+           <annotationProcessor>org.apache.hadoop.hdds.conf.ConfigFileGenerator</annotationProcessor>
+         </annotationProcessors>
+       </configuration>
+     </plugin>
+     <plugin>
+       <groupId>org.apache.maven.plugins</groupId>
+       <artifactId>maven-enforcer-plugin</artifactId>
+       <executions>
+         <execution>
+           <id>ban-annotations</id> <!-- override default restriction from root POM -->
+           <configuration>
+             <rules>
+               <restrictImports>
+                 <reason>Only selected annotation processors are enabled, see configuration of maven-compiler-plugin.</reason>
+                 <bannedImports>
+                   <bannedImport>org.apache.hadoop.ozone.om.request.validation.RequestFeatureValidator</bannedImport>
+                   <bannedImport>org.apache.hadoop.hdds.scm.metadata.Replicate</bannedImport>
+                   <bannedImport>org.kohsuke.MetaInfServices</bannedImport>
+                 </bannedImports>
+               </restrictImports>
+             </rules>
+           </configuration>
+         </execution>
+       </executions>
+     </plugin>
    </plugins>
  </build>
</project>
@@ -285,6 +285,7 @@ public ContainerCommandResponseProto sendCommand(
     }
     for (DatanodeDetails dn : datanodeList) {
       try {
+        request = reconstructRequestIfNeeded(request, dn);
         futureHashMap.put(dn, sendCommandAsync(request, dn).getResponse());
       } catch (InterruptedException e) {
         LOG.error("Command execution was interrupted.");
@@ -316,6 +317,29 @@ public ContainerCommandResponseProto sendCommand(
     return responseProtoHashMap;
   }

+  /**
+   * For getBlock requests on EC keys, the replicaIndex field must be set
+   * on every request, using the replica index of the datanode the request
+   * is sent to. This method unpacks the proto and reconstructs the request
+   * with the replicaIndex field set for the given datanode.
+   *
+   * @param request the original container command request
+   * @param dn the datanode the request will be sent to
+   * @return the updated request
+   */
+  private ContainerCommandRequestProto reconstructRequestIfNeeded(
+      ContainerCommandRequestProto request, DatanodeDetails dn) {
+    boolean isEcRequest = pipeline.getReplicationConfig()
+        .getReplicationType() == HddsProtos.ReplicationType.EC;
+    if (request.hasGetBlock() && isEcRequest) {
+      ContainerProtos.GetBlockRequestProto gbr = request.getGetBlock();
+      request = request.toBuilder().setGetBlock(gbr.toBuilder().setBlockID(
+          gbr.getBlockID().toBuilder().setReplicaIndex(
+              pipeline.getReplicaIndex(dn)).build()).build()).build();
+    }
+    return request;
+  }
+
   @Override
   public ContainerCommandResponseProto sendCommand(
       ContainerCommandRequestProto request, List<Validator> validators)
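The pattern in `reconstructRequestIfNeeded` — rebuilding an immutable request per datanode instead of mutating a shared one — can be sketched in isolation. The sketch below is illustrative only: `GetBlockRequest` and the replica-index map are hand-rolled stand-ins, not the Ozone protobuf types.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public final class ReplicaIndexSketch {

  // Immutable stand-in for the protobuf GetBlock request (illustrative, not the Ozone API).
  static final class GetBlockRequest {
    final long blockId;
    final int replicaIndex;

    GetBlockRequest(long blockId, int replicaIndex) {
      this.blockId = blockId;
      this.replicaIndex = replicaIndex;
    }

    // Analogous to proto's toBuilder().setReplicaIndex(i).build(): copy with one field changed.
    GetBlockRequest withReplicaIndex(int index) {
      return new GetBlockRequest(blockId, index);
    }
  }

  public static void main(String[] args) {
    // In an EC pipeline each datanode stores a different replica index.
    Map<String, Integer> replicaIndexByDn = new LinkedHashMap<>();
    replicaIndexByDn.put("dn1", 1);
    replicaIndexByDn.put("dn2", 2);
    replicaIndexByDn.put("dn3", 3);

    GetBlockRequest shared = new GetBlockRequest(42L, 0);
    for (Map.Entry<String, Integer> e : replicaIndexByDn.entrySet()) {
      // Rebuild the request per datanode; the shared request is never mutated.
      GetBlockRequest perDn = shared.withReplicaIndex(e.getValue());
      System.out.println(e.getKey() + " -> replicaIndex " + perDn.replicaIndex);
    }
  }
}
```

Because the copy happens per datanode, the same base request can safely fan out to all nodes of the pipeline, which is exactly why the real method is called once per `dn` inside the send loop.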
@@ -67,27 +67,26 @@ private HddsClientUtils() {
       .add(NotReplicatedException.class)
       .build();

-  private static void doNameChecks(String resName) {
+  private static void doNameChecks(String resName, String resType) {
     if (resName == null) {
-      throw new IllegalArgumentException("Bucket or Volume name is null");
+      throw new IllegalArgumentException(resType + " name is null");
     }

     if (resName.length() < OzoneConsts.OZONE_MIN_BUCKET_NAME_LENGTH ||
         resName.length() > OzoneConsts.OZONE_MAX_BUCKET_NAME_LENGTH) {
-      throw new IllegalArgumentException(
-          "Bucket or Volume length is illegal, "
-              + "valid length is 3-63 characters");
+      throw new IllegalArgumentException(resType +
+          " length is illegal, " + "valid length is 3-63 characters");
     }

     if (resName.charAt(0) == '.' || resName.charAt(0) == '-') {
-      throw new IllegalArgumentException(
-          "Bucket or Volume name cannot start with a period or dash");
+      throw new IllegalArgumentException(resType +
+          " name cannot start with a period or dash");
     }

     if (resName.charAt(resName.length() - 1) == '.' ||
         resName.charAt(resName.length() - 1) == '-') {
-      throw new IllegalArgumentException("Bucket or Volume name "
-          + "cannot end with a period or dash");
+      throw new IllegalArgumentException(resType +
+          " name cannot end with a period or dash");
     }
   }

@@ -108,27 +107,27 @@ private static boolean isSupportedCharacter(char c, boolean isStrictS3) {
     return false;
   }

-  private static void doCharacterChecks(char currChar, char prev,
+  private static void doCharacterChecks(char currChar, char prev, String resType,
       boolean isStrictS3) {
     if (Character.isUpperCase(currChar)) {
-      throw new IllegalArgumentException(
-          "Bucket or Volume name does not support uppercase characters");
+      throw new IllegalArgumentException(resType +
+          " name does not support uppercase characters");
     }
     if (!isSupportedCharacter(currChar, isStrictS3)) {
-      throw new IllegalArgumentException("Bucket or Volume name has an " +
-          "unsupported character : " + currChar);
+      throw new IllegalArgumentException(resType +
+          " name has an unsupported character : " + currChar);
     }
     if (prev == '.' && currChar == '.') {
-      throw new IllegalArgumentException("Bucket or Volume name should not " +
-          "have two contiguous periods");
+      throw new IllegalArgumentException(resType +
+          " name should not have two contiguous periods");
     }
     if (prev == '-' && currChar == '.') {
-      throw new IllegalArgumentException(
-          "Bucket or Volume name should not have period after dash");
+      throw new IllegalArgumentException(resType +
+          " name should not have period after dash");
     }
     if (prev == '.' && currChar == '-') {
-      throw new IllegalArgumentException(
-          "Bucket or Volume name should not have dash after period");
+      throw new IllegalArgumentException(resType +
+          " name should not have dash after period");
     }
   }

@@ -140,7 +139,11 @@ private static void doCharacterChecks(char currChar, char prev,
    * @throws IllegalArgumentException
    */
   public static void verifyResourceName(String resName) {
-    verifyResourceName(resName, true);
+    verifyResourceName(resName, "resource", true);
   }
+
+  public static void verifyResourceName(String resName, String resType) {
+    verifyResourceName(resName, resType, true);
+  }

/**
@@ -150,9 +153,9 @@ public static void verifyResourceName(String resName) {
    *
    *
    * @throws IllegalArgumentException
    */
-  public static void verifyResourceName(String resName, boolean isStrictS3) {
+  public static void verifyResourceName(String resName, String resType, boolean isStrictS3) {

-    doNameChecks(resName);
+    doNameChecks(resName, resType);
boolean isIPv4 = true;
char prev = (char) 0;
Expand All @@ -162,13 +165,13 @@ public static void verifyResourceName(String resName, boolean isStrictS3) {
if (currChar != '.') {
isIPv4 = ((currChar >= '0') && (currChar <= '9')) && isIPv4;
}
doCharacterChecks(currChar, prev, isStrictS3);
doCharacterChecks(currChar, prev, resType, isStrictS3);
prev = currChar;
}

if (isIPv4) {
throw new IllegalArgumentException(
"Bucket or Volume name cannot be an IPv4 address or all numeric");
throw new IllegalArgumentException(resType +
" name cannot be an IPv4 address or all numeric");
}
}

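The effect of threading `resType` through these checks is that callers now get errors naming the actual resource ("Bucket", "Volume", and so on) instead of the old catch-all "Bucket or Volume". A minimal sketch of the same idea (simplified rules, not the full HddsClientUtils logic):

```java
public final class NameCheckSketch {

  // Simplified version of doNameChecks: resType only changes the message text.
  static void checkName(String name, String resType) {
    if (name == null) {
      throw new IllegalArgumentException(resType + " name is null");
    }
    if (name.length() < 3 || name.length() > 63) {
      throw new IllegalArgumentException(resType
          + " length is illegal, valid length is 3-63 characters");
    }
    if (name.charAt(0) == '.' || name.charAt(0) == '-') {
      throw new IllegalArgumentException(resType
          + " name cannot start with a period or dash");
    }
  }

  public static void main(String[] args) {
    checkName("vol1", "Volume"); // passes silently
    try {
      checkName(".bad", "Bucket");
    } catch (IllegalArgumentException e) {
      // The message now identifies the failing resource type.
      System.out.println(e.getMessage());
    }
  }
}
```

Keeping the rule logic in one place and parameterizing only the message avoids the alternative of duplicating the checks per resource type.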
37 changes: 37 additions & 0 deletions hadoop-hdds/common/pom.xml
@@ -257,6 +257,43 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
          <excludeFilterFile>${basedir}/dev-support/findbugsExcludeFile.xml</excludeFilterFile>
        </configuration>
      </plugin>
+     <plugin>
+       <groupId>org.apache.maven.plugins</groupId>
+       <artifactId>maven-compiler-plugin</artifactId>
+       <configuration>
+         <annotationProcessorPaths>
+           <path>
+             <groupId>org.apache.ozone</groupId>
+             <artifactId>hdds-config</artifactId>
+             <version>${hdds.version}</version>
+           </path>
+         </annotationProcessorPaths>
+         <annotationProcessors>
+           <annotationProcessor>org.apache.hadoop.hdds.conf.ConfigFileGenerator</annotationProcessor>
+         </annotationProcessors>
+       </configuration>
+     </plugin>
+     <plugin>
+       <groupId>org.apache.maven.plugins</groupId>
+       <artifactId>maven-enforcer-plugin</artifactId>
+       <executions>
+         <execution>
+           <id>ban-annotations</id> <!-- override default restriction from root POM -->
+           <configuration>
+             <rules>
+               <restrictImports>
+                 <reason>Only selected annotation processors are enabled, see configuration of maven-compiler-plugin.</reason>
+                 <bannedImports>
+                   <bannedImport>org.apache.hadoop.ozone.om.request.validation.RequestFeatureValidator</bannedImport>
+                   <bannedImport>org.apache.hadoop.hdds.scm.metadata.Replicate</bannedImport>
+                   <bannedImport>org.kohsuke.MetaInfServices</bannedImport>
+                 </bannedImports>
+               </restrictImports>
+             </rules>
+           </configuration>
+         </execution>
+       </executions>
+     </plugin>
    </plugins>
  </build>
</project>
@@ -126,7 +126,9 @@ public static Map<String, Object> getCountsMap(DatanodeDetails datanode, JsonNode
       Map<String, Object> countsMap, String errMsg)
       throws IOException {
     for (int i = 1; i <= numDecomNodes; i++) {
-      if (datanode.getHostName().equals(counts.get("tag.datanode." + i).asText())) {
+      String datanodeHostName =
+          (counts.get("tag.datanode." + i) != null) ? (counts.get("tag.datanode." + i).asText()) : "";
+      if (datanode.getHostName().equals(datanodeHostName)) {
         JsonNode pipelinesDN = counts.get("PipelinesWaitingToCloseDN." + i);
         JsonNode underReplicatedDN = counts.get("UnderReplicatedDN." + i);
         JsonNode unclosedDN = counts.get("UnclosedContainersDN." + i);
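The NPE fixed here came from calling `asText()` directly on the result of `counts.get(...)` when the field was absent. The guard pattern generalizes: resolve the lookup once, substitute a default when the value is missing, then compare. A self-contained sketch of the same pattern, using a plain `Map` in place of Jackson's `JsonNode` (names are illustrative):

```java
import java.util.Map;

public final class NullSafeLookupSketch {

  // Returns the tag value for the i-th datanode, or "" when the field is absent,
  // mirroring the guard added to getCountsMap.
  static String datanodeTag(Map<String, String> counts, int i) {
    String value = counts.get("tag.datanode." + i);
    return (value != null) ? value : "";
  }

  public static void main(String[] args) {
    Map<String, String> counts = Map.of("tag.datanode.1", "host-a");

    // Present: returns the stored hostname.
    System.out.println(datanodeTag(counts, 1));

    // Absent: returns "" instead of throwing a NullPointerException,
    // so the equals() comparison on the caller's side simply fails.
    System.out.println("[" + datanodeTag(counts, 2) + "]");
  }
}
```

Comparing with `datanode.getHostName().equals(datanodeHostName)` then degrades gracefully: a missing tag yields a non-match rather than a crash of the whole decommissioning-info endpoint.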