
@steveloughran (Contributor) commented Mar 19, 2023

Description of PR

This is @sreeb-msft's PR #5488 with:

  • my switch to turn this on/off
  • more logging on recovery

The handling is now exclusively in the AbfsClient class.
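
The switch can be turned on through client configuration; a minimal sketch, assuming nothing beyond the fs.azure.enable.rename.resilience key named in the commits below (the account/container URI and paths are placeholders, not from this PR):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Sketch only: turn the rename-resilience switch on before creating the filesystem.
    Configuration conf = new Configuration();
    conf.setBoolean("fs.azure.enable.rename.resilience", true);
    // placeholder container/account URI
    FileSystem fs = FileSystem.newInstance(
        URI.create("abfs://container@account.dfs.core.windows.net/"), conf);
    fs.rename(new Path("/tmp/src"), new Path("/tmp/dest"));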

How was this patch tested?

ABFS test run in progress; Azure Cardiff.

For code changes:

  • Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
  • Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
  • If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
  • If applicable, have you updated the LICENSE, LICENSE-binary, NOTICE-binary files?

sreeb-msft and others added 6 commits March 17, 2023 10:03
If "fs.azure.enable.rename.resilience" is true, then
do a HEAD of the source file before the rename, which
can then be used to recover from the failure, as
the manifest committer does (HADOOP-18163).

Change-Id: Ia417f1501f7274662eb9ff919c6378fb913b476b

HADOOP-18425. ABFS rename resilience through etags

only get the etag on HNS stores

Change-Id: I9faffa78294e1782f0b2db3d1c997ec3fe53637c
1. move config checks of rename resilience flag into AbfsClient
2. only getPathStatus on rename if enabled
3. resilience handling logs when unable to recover from a dir
4. and when it successfully recovers a file.

Change-Id: I58b5f11e4c9b7c1a1d809d2db47a3cafe51f2060
@steveloughran (Contributor, Author) commented Mar 19, 2023

Tests good except for timeouts in all the lease runs:

[INFO] 
[ERROR] Errors: 
[ERROR]   ITestAzureBlobFileSystemLease.testInfiniteLease:284 » TestTimedOut test timed ...
[ERROR]   ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendNoInfiniteLease:177->twoWriters:166 » TestTimedOut
[ERROR]   ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendWithInfiniteLeaseEnabled:186->twoWriters:154 » TestTimedOut
[ERROR]   ITestAzureBlobFileSystemLease.testWriteAfterBreakLease:223->lambda$testWriteAfterBreakLease$2:225 » TestTimedOut
[INFO] 

Not sure what is going on there. I had been playing with leases recently, but my site settings don't set any.

The timeouts are only 30s, so maybe it's just a slow test run. No VPN involved, though.

@steveloughran (Contributor, Author) commented:

And yes, a test run on its own works:

[INFO] Running org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemLease
[INFO] Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.74 s - in org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemLease
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 13, Failures: 0, Errors: 0, Skipped: 0

@steveloughran (Contributor, Author) commented:

Reviews encouraged, especially from @sreeb-msft @snvijaya @saxenapranav @hiteshs @mukund-thakur @mehakmeet @taklwu.

@hadoop-yetus commented:

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 45s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 1s codespell was not available.
+0 🆗 detsecrets 0m 1s detect-secrets was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 1 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 15m 41s Maven dependency ordering for branch
+1 💚 mvninstall 28m 41s trunk passed
+1 💚 compile 25m 4s trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 💚 compile 21m 35s trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+1 💚 checkstyle 3m 59s trunk passed
+1 💚 mvnsite 2m 26s trunk passed
+1 💚 javadoc 1m 46s trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 💚 javadoc 1m 18s trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+1 💚 spotbugs 3m 55s trunk passed
+1 💚 shadedclient 23m 34s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 23s Maven dependency ordering for patch
+1 💚 mvninstall 1m 34s the patch passed
+1 💚 compile 24m 29s the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
-1 ❌ javac 24m 29s /results-compile-javac-root-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt root-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 generated 2 new + 2825 unchanged - 0 fixed = 2827 total (was 2825)
+1 💚 compile 21m 36s the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
-1 ❌ javac 21m 36s /results-compile-javac-root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09.txt root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu120.04.1-b09 with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu120.04.1-b09 generated 2 new + 2622 unchanged - 0 fixed = 2624 total (was 2622)
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 3m 48s /results-checkstyle-root.txt root: The patch generated 8 new + 28 unchanged - 0 fixed = 36 total (was 28)
+1 💚 mvnsite 2m 25s the patch passed
+1 💚 javadoc 1m 35s the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 💚 javadoc 1m 18s the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+1 💚 spotbugs 4m 2s the patch passed
+1 💚 shadedclient 23m 55s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 18m 18s hadoop-common in the patch passed.
+1 💚 unit 2m 9s hadoop-azure in the patch passed.
+1 💚 asflicense 0m 51s The patch does not generate ASF License warnings.
238m 24s
Subsystem Report/Notes
Docker ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5494/1/artifact/out/Dockerfile
GITHUB PR #5494
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
uname Linux bbc482f5a226 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 70a8d0f
Default Java Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5494/1/testReport/
Max. process+thread count 3137 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5494/1/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@mukund-thakur (Contributor) left a comment

added some minor comments.

this.statistics = fs.statistics;
}


Contributor:

cut extra lines?

Contributor Author:

accidental edit; will revert

final boolean recovered = result.getStatusCode() == HttpURLConnection.HTTP_OK
&& sourceEtag.equals(extractEtagHeader(result));
} catch (AzureBlobFileSystemException ignored) {
LOG.info("File rename has taken place: recovery completed");
Contributor:

this log statement is incorrect. It will depend on the value of the recovered flag.

Contributor Author:

fixed
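
For the record, a sketch (not the committed change) of how the logging might be guarded by the recovered flag, reusing the names from the diff above:

    // Sketch: only report success when the destination etag matched the source etag.
    if (recovered) {
      LOG.info("File rename has taken place: recovery completed");
    } else {
      LOG.info("Unable to confirm recovery of rename to {}", destination);
    }
    return recovered;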

Comment on lines 678 to 680
// Server has returned HTTP 404, which means rename source no longer
// exists. Check on destination status and if its etag matches
// that of the source, consider it to be a success.
Contributor:

these comments are not valid here.

Contributor Author:

will cut back

&& (op.getResult().getStatusCode() == HttpURLConnection.HTTP_NOT_FOUND)
&& isNotEmpty(sourceEtag)) {

if (!(op.isARetriedRequest())
Contributor:

add a comment here.

// etag passed in, so source is a file
final boolean hasEtag = isEmpty(sourceEtag);
boolean isDir = !hasEtag;
if (!hasEtag && renameResilience) {
Contributor:

sourceEtag should be passed down only for HNS accounts, as the eTag does not remain the same on FNS (where rename = copy to destination and delete source).

Contributor Author:

yeah, but AFAIK there's no way to check in the client whether it is available.

it can only get passed in through the manifest committer, and as ABFS.createResilientCommitSupport() requires etag preservation, it won't be doing it on a non-HNS store.
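
To illustrate that caller-side constraint, a hedged sketch of how a committer-style caller might only derive and pass a source etag when the store preserves etags across rename; the helper method itself is hypothetical, while EtagSource and CommonPathCapabilities.ETAGS_PRESERVED_IN_RENAME are existing hadoop-common APIs:

    import java.io.IOException;
    import org.apache.hadoop.fs.CommonPathCapabilities;
    import org.apache.hadoop.fs.EtagSource;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    /** Hypothetical helper: supply a source etag only when rename recovery can use it. */
    static String etagForRename(FileSystem fs, Path source) throws IOException {
      if (!fs.hasPathCapability(source, CommonPathCapabilities.ETAGS_PRESERVED_IN_RENAME)) {
        // On FNS stores rename is copy + delete, so the etag changes; etag recovery is impossible.
        return null;
      }
      FileStatus st = fs.getFileStatus(source);
      return st instanceof EtagSource ? ((EtagSource) st).getEtag() : null;
    }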

/**
* Enable resilient rename.
*/
private boolean renameResilience;
Contributor:

nit: let's have it final.

Comment on lines 49 to 51
import org.apache.hadoop.fs.EtagSource;
import org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation;
import org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation;
Contributor:

imports not required.

// not an error
return false;
}
LOG.debug("Source not found on retry of rename({}, {}) isDir {} etag {}",
Contributor:

this can be reached even if the op was retried and the statusCode is not equal to 404.

Contributor Author:

the existing code only handles 404. Are you saying there are other errors we should look for, such as some 500+ value? If so: which ones?

Contributor:

this function gets invoked for any AzureBlobFileSystemException, whether it is because of a 404 or any other error: https://github.com/steveloughran/hadoop/blob/70a8d0fb3956b2a0e2c343373475ef7a0e75ab08/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java#L607.

Now, because the condition at line 668 is:

op.isARetriedRequest() || op.getResult().getStatusCode() != HttpURLConnection.HTTP_NOT_FOUND

any retried operation will reach this line.

Comment on lines 668 to 676
LOG.debug("Source not found on retry of rename({}, {}) isDir {} etag {}",
source, destination, isDir, sourceEtag);
if (isDir) {
// directory recovery is not supported.
// log and fail.
LOG.info("rename directory {} to {} failed; unable to recover",
source, destination);
return false;
}
Contributor:

this becomes equivalent to:

op.isARetriedRequest() || op.getResult().getStatusCode() != HttpURLConnection.HTTP_NOT_FOUND

The older condition:

    if ((op.isARetriedRequest())
        && (op.getResult().getStatusCode() == HttpURLConnection.HTTP_NOT_FOUND)

has been changed.

Contributor Author:

OK, let me copy and paste again. I think I was trying to do the inversion logic but may have stopped partway through.
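
For clarity, the inversion under discussion is a straight De Morgan negation of the original guard; a sketch of the early-exit form, using the names from the diff (illustrative, not the committed code):

    // Original guard: attempt etag-based recovery only when BOTH conditions hold:
    //   op.isARetriedRequest()
    //       && op.getResult().getStatusCode() == HttpURLConnection.HTTP_NOT_FOUND
    //
    // Correct early-exit inversion (De Morgan): bail out when EITHER fails.
    if (!op.isARetriedRequest()
        || op.getResult().getStatusCode() != HttpURLConnection.HTTP_NOT_FOUND) {
      // not a retried request that hit 404: nothing to recover from
      return false;
    }
    // ... etag-based recovery continues here ...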

source, destination);
return false;
}
if (isNotEmpty(sourceEtag)) {
Contributor:

As per the code in renamePath(),

isDir = !isNotEmpty(sourceEtag)

Should we use the same relation in this method to reduce confusion, instead of adding a new isDir argument?

Contributor Author:

yes, but I want to log slightly differently. Though actually it is confusing, as isDir is true if resilience is off. Will cut.

// specifying AbfsHttpOperation mock behavior

// mock object representing the 404 path not found result
AbfsHttpOperation mockHttp404Op = Mockito.mock(AbfsHttpOperation.class);
Contributor:

It would be awesome if we could bring server integration into the picture. I have added a comment on @sreeb-msft's PR #5488 (comment). Suggested changes are in saxenapranav@5247e12.

final List<AbfsHttpHeader> requestHeaders = createDefaultHeaders();

// etag passed in, so source is a file
final boolean hasEtag = isEmpty(sourceEtag);
Contributor:

should it be

hasEtag = !isEmpty(sourceEtag)

Contributor Author:

Ooh, nice catch. Scarily nice. What happens when I code on a weekend.
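
For reference, a sketch of the corrected derivation once the fix above is applied (isEmpty as used in the diff):

    // An etag is only ever passed in for a file, so its presence identifies the source type.
    final boolean hasEtag = !isEmpty(sourceEtag);   // was mistakenly isEmpty(sourceEtag)
    boolean isDir = !hasEtag;                       // no etag => treat the source as a directory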

steveloughran and others added 3 commits March 20, 2023 13:32
+ new itest, ITestAbfsContractRenameWithoutResilience

this disables resilience and so verifies the normal codepath
is still good.

Change-Id: Ib2663c70afb112c9430043e94d75e9ddf7b3c313
Change-Id: I1db3878ee12ea082e00438781e1ae86af9850ff7
Integration testing all happy; had to do some work to get
my auth mechanism working through the tests.

Added a test for dir handling, and for commit renaming working through
the failure. First time it's had this test, fwiw.

Change-Id: I89f7763d03d1a24a1a43361b001bfa5830804bc1
@steveloughran (Contributor, Author) commented:

Running the itests with the new PR, and it is interesting: when resilience is enabled, ITestAbfsFileSystemContractRename fails because the outcomes differ from what is defined in the contract XML file.

That is actually what is expected according to https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/filesystem/filesystem.html#boolean_rename.28Path_src.2C_Path_d.29, so I'm not sure why it doesn't happen today. Will investigate.


java.lang.AssertionError: Renaming a missing file unexpectedly threw an exception

	at org.apache.hadoop.fs.contract.ContractTestUtils.fail(ContractTestUtils.java:548)
	at org.apache.hadoop.fs.contract.AbstractContractRenameTest.testRenameNonexistentFile(AbstractContractRenameTest.java:77)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.FileNotFoundException: Operation failed: "The specified path does not exist.", 404, HEAD, https://stevelukwest.dfs.core.windows.net/stevel-testing/test/testRenameNonexistentFileSrc?upn=false&action=getStatus&timeout=90
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.checkException(AzureBlobFileSystem.java:1481)
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.rename(AzureBlobFileSystem.java:466)
	at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.rename(AbstractFSContractTestBase.java:388)
	at org.apache.hadoop.fs.contract.AbstractContractRenameTest.testRenameNonexistentFile(AbstractContractRenameTest.java:62)
	... 15 more

@steveloughran (Contributor, Author) commented:

So HDFS returns false if the source is not found or the dest exists:

 <property>
    <name>fs.contract.rename-returns-false-if-dest-exists</name>
    <value>true</value>
  </property>

  <property>
    <name>fs.contract.rename-returns-false-if-source-missing</name>
    <value>true</value>
  </property>

ABFS does the same.

The S3A client blows up with meaningful errors:

  <property>
    <name>fs.contract.rename-returns-false-if-source-missing</name>
    <value>false</value>
  </property>

  <property>
    <name>fs.contract.rename-returns-false-if-dest-exists</name>
    <value>false</value>
  </property>

I'm more in favour of the "blow up" strategy; if you look at almost all uses of rename, it is code like:

if (!rename(src, dest)) throw new Exception("rename failed, we don't know why");

and nobody ever seems to complain about the S3A failure... but then of course rename is broken there for other reasons, so maybe code just avoids it (e.g. committers).

Change-Id: Iceb0042b2d97725d0864d138d3a522f29fb5c867
@hadoop-yetus commented:

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 36s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 1s codespell was not available.
+0 🆗 detsecrets 0m 1s detect-secrets was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 2 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 15m 50s Maven dependency ordering for branch
+1 💚 mvninstall 25m 51s trunk passed
+1 💚 compile 23m 0s trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 💚 compile 20m 33s trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+1 💚 checkstyle 3m 46s trunk passed
+1 💚 mvnsite 2m 39s trunk passed
+1 💚 javadoc 2m 0s trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 💚 javadoc 1m 30s trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+1 💚 spotbugs 3m 55s trunk passed
+1 💚 shadedclient 20m 38s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 27s Maven dependency ordering for patch
+1 💚 mvninstall 1m 29s the patch passed
+1 💚 compile 22m 32s the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
-1 ❌ javac 22m 32s /results-compile-javac-root-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt root-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 generated 2 new + 2825 unchanged - 0 fixed = 2827 total (was 2825)
+1 💚 compile 20m 32s the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
-1 ❌ javac 20m 32s /results-compile-javac-root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09.txt root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu120.04.1-b09 with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu120.04.1-b09 generated 2 new + 2622 unchanged - 0 fixed = 2624 total (was 2622)
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 3m 38s /results-checkstyle-root.txt root: The patch generated 6 new + 29 unchanged - 1 fixed = 35 total (was 30)
+1 💚 mvnsite 2m 37s the patch passed
+1 💚 javadoc 1m 54s the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 💚 javadoc 1m 36s the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+1 💚 spotbugs 4m 5s the patch passed
+1 💚 shadedclient 20m 59s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 18m 16s hadoop-common in the patch passed.
+1 💚 unit 2m 21s hadoop-azure in the patch passed.
+1 💚 asflicense 1m 1s The patch does not generate ASF License warnings.
225m 55s
Subsystem Report/Notes
Docker ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5494/2/artifact/out/Dockerfile
GITHUB PR #5494
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
uname Linux 1819407248d6 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / e36b4c2
Default Java Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5494/2/testReport/
Max. process+thread count 2289 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5494/2/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@hadoop-yetus commented:

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 46s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 2 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 15m 33s Maven dependency ordering for branch
+1 💚 mvninstall 28m 57s trunk passed
+1 💚 compile 25m 2s trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 💚 compile 21m 42s trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+1 💚 checkstyle 4m 3s trunk passed
+1 💚 mvnsite 2m 24s trunk passed
+1 💚 javadoc 1m 44s trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 💚 javadoc 1m 19s trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+1 💚 spotbugs 3m 57s trunk passed
+1 💚 shadedclient 23m 52s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 49s Maven dependency ordering for patch
+1 💚 mvninstall 1m 37s the patch passed
+1 💚 compile 24m 30s the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
-1 ❌ javac 24m 30s /results-compile-javac-root-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt root-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 generated 2 new + 2825 unchanged - 0 fixed = 2827 total (was 2825)
+1 💚 compile 21m 32s the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
-1 ❌ javac 21m 32s /results-compile-javac-root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09.txt root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu120.04.1-b09 with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu120.04.1-b09 generated 2 new + 2622 unchanged - 0 fixed = 2624 total (was 2622)
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 3m 55s /results-checkstyle-root.txt root: The patch generated 6 new + 29 unchanged - 1 fixed = 35 total (was 30)
+1 💚 mvnsite 2m 24s the patch passed
+1 💚 javadoc 1m 37s the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
+1 💚 javadoc 1m 18s the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
+1 💚 spotbugs 4m 3s the patch passed
+1 💚 shadedclient 24m 23s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 19m 14s hadoop-common in the patch passed.
+1 💚 unit 2m 10s hadoop-azure in the patch passed.
+1 💚 asflicense 0m 50s The patch does not generate ASF License warnings.
240m 59s
Subsystem Report/Notes
Docker ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5494/3/artifact/out/Dockerfile
GITHUB PR #5494
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
uname Linux 7b374dbc308e 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / a303e33
Default Java Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5494/3/testReport/
Max. process+thread count 3137 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5494/3/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@mehakmeet (Contributor) left a comment

+1, looks good.

import java.util.concurrent.TimeUnit;

import org.apache.hadoop.classification.VisibleForTesting;
import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsRestOperationException;
Contributor:

This import seems to be in the wrong place here.

HadoopExecutors.newScheduledThreadPool(this.abfsConfiguration.getNumLeaseThreads(), tf));
// rename resilience
renameResilience = abfsConfiguration.getRenameResilience();
LOG.debug("Rename resilience is {}",renameResilience);
Contributor:

nit: space after ",".

}

/**
* Even with failures, having
Contributor:

incomplete javadocs?
