
Conversation

@steveloughran
Contributor

...for large uploads over slow links

  • New default value of 15 minutes in source
  • Updated docs
  • Troubleshooting: new stack trace and details of the problem

I don't think this should be the final patch: I've just discovered how we can set different timeouts on each request, so a long timeout for data PUT/POST requests seems straightforward, as does testing.
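The per-request idea could be sketched as below. This is a hypothetical illustration only — the class, enum, and constant names are made up for this sketch and are not the actual S3A or AWS SDK API; the real code would wire the chosen duration through the SDK's per-request override configuration.

```java
import java.time.Duration;

// Hypothetical sketch: pick a request timeout based on the kind of request.
// All names here are illustrative, not the real S3A code.
public class RequestTimeouts {

  /** Short timeout for metadata and other fast-failing operations. */
  static final Duration DEFAULT_TIMEOUT = Duration.ofSeconds(60);

  /** Long timeout for bulk data uploads over slow links. */
  static final Duration UPLOAD_TIMEOUT = Duration.ofMinutes(15);

  enum RequestKind { GET, HEAD, PUT, POST, DELETE }

  /** Data PUT/POST requests get the long timeout; everything else stays fast-failing. */
  static Duration timeoutFor(RequestKind kind) {
    switch (kind) {
      case PUT:
      case POST:
        return UPLOAD_TIMEOUT;
      default:
        return DEFAULT_TIMEOUT;
    }
  }

  public static void main(String[] args) {
    System.out.println(timeoutFor(RequestKind.PUT).toMinutes());   // 15
    System.out.println(timeoutFor(RequestKind.HEAD).getSeconds()); // 60
  }
}
```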

How was this patch tested?

big bulk uploads

For code changes:

  • Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
  • Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
  • If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
  • If applicable, have you updated the LICENSE, LICENSE-binary, NOTICE-binary files?

Change-Id: If3319268a8df35b5f0377f31ea9df2c2f49186ea
@steveloughran steveloughran marked this pull request as draft October 1, 2024 12:32
@steveloughran steveloughran changed the title HADOOP-19295. S3A: fs.s3a.connection.request.timeout too low HADOOP-19295. S3A: fs.s3a.connection.request.timeout too low: quick fix Oct 1, 2024

/**
* Default duration of a request before it is timed out: 60s.
* Default duration of a request before it is timed out: 16m.
@dongjoon-hyun (Member) Oct 1, 2024


16m seems to mismatch the actual code (15 minutes) and performance.md. Maybe 15m?

* (default 15s) allowed for a fast failure of all other operations.
* In V2 SDK this now applies to all operations, including uploads.
* A large timeout is now needed, even though it means that some service/network
* failures may now take a long time to surface.
Member


However, 15 minutes sounds too big as a timeout, doesn't it? What is the background of this huge value?

Technically, 15m means 1 hour when we retry 3 times.
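The worst-case figure is simple arithmetic: one initial attempt plus three retries, each running to the full timeout, adds up to an hour before the failure surfaces. A sketch of that arithmetic (the retry count and the all-attempts-hit-the-timeout assumption are illustrative, not a statement about S3A's actual retry policy):

```java
import java.time.Duration;

// Worst case: every attempt runs to the full per-attempt timeout before failing.
public class RetryWorstCase {

  /** Total elapsed time: initial attempt + retries, each consuming the whole timeout. */
  static Duration worstCase(Duration perAttemptTimeout, int retries) {
    return perAttemptTimeout.multipliedBy(1 + retries);
  }

  public static void main(String[] args) {
    // 15-minute timeout, 3 retries -> 4 attempts -> 1 hour before the error surfaces.
    System.out.println(worstCase(Duration.ofMinutes(15), 3).toHours()); // 1
  }
}
```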

Contributor Author


The problem is that we want a duration which is valid with many other streams competing for that bandwidth, so a single block upload of, say, 64M needs time. I don't know what a good time is.
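To put rough numbers on that (the bandwidth figures here are assumptions for illustration, not measurements): a 64 MB block at an effective 1 MB/s takes about a minute, but if competing streams squeeze the share down to 100 KB/s, the same block needs around eleven minutes.

```java
// Rough upload-time estimate for a single block; bandwidth figures are illustrative.
public class UploadTime {

  /** Seconds needed to upload blockBytes at a sustained bytesPerSecond. */
  static long secondsToUpload(long blockBytes, long bytesPerSecond) {
    return blockBytes / bytesPerSecond;
  }

  public static void main(String[] args) {
    long block = 64L * 1024 * 1024;                           // 64 MB block
    System.out.println(secondsToUpload(block, 1024 * 1024));  // 64s at 1 MB/s
    System.out.println(secondsToUpload(block, 100 * 1024));   // 655s (~11 min) at 100 KB/s
  }
}
```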

Contributor Author


We could maybe cut it to something a bit lower, say 10m?

Side issue: why don't you try it as-is for a remote upload of the rc1 tar.gz and see how you get on without this patch.
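For deployments hitting this before a release with the new default ships, the timeout can also be raised in configuration. A sketch, assuming the duration-suffix syntax (`15m`) supported by recent Hadoop releases:

```xml
<property>
  <name>fs.s3a.connection.request.timeout</name>
  <!-- raise the per-request timeout for large uploads over slow links -->
  <value>15m</value>
</property>
```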

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 18s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+0 🆗 xmllint 0m 0s xmllint was not available.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
-1 ❌ test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
_ trunk Compile Tests _
+0 🆗 mvndep 15m 0s Maven dependency ordering for branch
+1 💚 mvninstall 20m 5s trunk passed
+1 💚 compile 9m 2s trunk passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04
+1 💚 compile 8m 13s trunk passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05
+1 💚 checkstyle 2m 11s trunk passed
+1 💚 mvnsite 1m 38s trunk passed
+1 💚 javadoc 1m 25s trunk passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04
+1 💚 javadoc 1m 7s trunk passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05
+1 💚 spotbugs 2m 18s trunk passed
+1 💚 shadedclient 21m 10s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 21s Maven dependency ordering for patch
+1 💚 mvninstall 0m 50s the patch passed
+1 💚 compile 8m 48s the patch passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04
+1 💚 javac 8m 48s the patch passed
+1 💚 compile 8m 11s the patch passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05
+1 💚 javac 8m 11s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 checkstyle 2m 5s the patch passed
+1 💚 mvnsite 1m 30s the patch passed
+1 💚 javadoc 1m 19s the patch passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04
+1 💚 javadoc 1m 8s the patch passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05
+1 💚 spotbugs 2m 32s the patch passed
+1 💚 shadedclient 21m 13s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 16m 44s hadoop-common in the patch passed.
+1 💚 unit 2m 16s hadoop-aws in the patch passed.
+1 💚 asflicense 0m 43s The patch does not generate ASF License warnings.
153m 7s
Subsystem Report/Notes
Docker ClientAPI=1.47 ServerAPI=1.47 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7087/1/artifact/out/Dockerfile
GITHUB PR #7087
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle markdownlint
uname Linux 0020e37573d0 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 520ad5b
Default Java Private Build-1.8.0_422-8u422-b05-1~20.04-b05
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7087/1/testReport/
Max. process+thread count 3108 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7087/1/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@steveloughran
Contributor Author

In #7089 I'm doing a real fix with different timings, but for this reason I think we should target a 3.4.2 with it.


<!--- global properties -->

<property>
Contributor Author


not sure how i deleted this; will revert

@steveloughran (Contributor, Author) left a comment


going to switch to 10m if that is enough for the problem to go away locally

