Merged
75 commits
5ce1810
HADOOP-17346. Fair call queue is defeated by abusive service principa…
amahussein Nov 12, 2020
f5e6be3
HDFS-15545 - Allow WebHdfsFileSystem to read a new delegation token f…
ibuenros Nov 12, 2020
f56cd88
HDFS-15538. Fix the documentation for dfs.namenode.replication.max-st…
tasanuma Nov 13, 2020
ebe1d1f
HADOOP-17362. reduce RPC calls doing ls on HAR file (#2444). Contribu…
amahussein Nov 13, 2020
dd85a90
HADOOP-17376. ITestS3AContractRename failing against stricter tests. …
adoroszlai Nov 16, 2020
0b2510e
YARN-10485. TimelineConnector swallows InterruptedException (#2450). …
amahussein Nov 16, 2020
b57f04c
HDFS-15685. [JDK 14] TestConfiguredFailoverProxyProvider#testResolveD…
aajisaka Nov 17, 2020
a7b923c
HADOOP-17379. AbstractS3ATokenIdentifier to set issue date == now. (#…
HeartSaVioR Nov 17, 2020
2045a9d
MAPREDUCE-7305. [JDK 11] TestMRJobsWithProfiler fails. (#2463)
aajisaka Nov 18, 2020
425996e
HDFS-15674. TestBPOfferService#testMissBlocksWhenReregister fails on …
iwasakims Nov 18, 2020
e3c08f2
HADOOP-17244. S3A directory delete tombstones dir markers prematurely…
steveloughran Nov 18, 2020
ce7827c
HADOOP-17318. Support concurrent S3A commit jobs with same app attemp…
steveloughran Nov 18, 2020
5ff70a5
YARN-10486. FS-CS converter: handle case when weight=0 and allow more…
szilard-nemeth Nov 18, 2020
0d3155a
YARN-10457. Add a configuration switch to change between legacy and J…
szilard-nemeth Nov 18, 2020
34aa613
HADOOP-17292. Using lz4-java in Lz4Codec (#2350)
viirya Nov 18, 2020
0705033
HADOOP-17367. Add InetAddress api to ProxyUsers.authorize (#2449). Co…
amahussein Nov 19, 2020
8fa699b
HDFS-15635. ViewFileSystemOverloadScheme support specifying mount tab…
zuston Nov 20, 2020
f3c629c
HADOOP-17388. AbstractS3ATokenIdentifier to issue date in UTC. (#2477)
HeartSaVioR Nov 20, 2020
fb92aa4
MAPREDUCE-7304. Enhance the map-reduce Job end notifier to be able to…
pbacsko Nov 20, 2020
747883a
HDFS-15659. MiniDFSCluster dfs.namenode.redundancy.considerLoad defau…
amahussein Nov 20, 2020
740399a
HADOOP-17390. Skip license check on lz4 code files (#2478)
dengzhhu653 Nov 20, 2020
d730294
HDFS-15690. Add lz4-java as test dependency (#2481)
viirya Nov 22, 2020
641d885
HDFS-15684. EC: Call recoverLease on DFSStripedOutputStream close exc…
Hexiaoqiao Nov 23, 2020
fb79be9
HADOOP-17343. Upgrade AWS SDK to 1.11.901 (#2468)
steveloughran Nov 23, 2020
f13c7b1
MAPREDUCE-7307. Potential thread leak in LocatedFileStatusFetcher. (#…
dengzhhu653 Nov 23, 2020
07b7d07
HADOOP-17325. WASB Test Failures
steveloughran Nov 23, 2020
9b4faf2
HADOOP-17332. S3A MarkerTool -min and -max are inverted. (#2425)
steveloughran Nov 23, 2020
c4ba0ab
YARN-10470. When building new web ui with root user, the bower instal…
aajisaka Nov 24, 2020
5fee950
HADOOP-17323. S3A getFileStatus("/") to skip IO (#2479)
mukund-thakur Nov 24, 2020
f813f14
MAPREDUCE-7309. Improve performance of reading resource request for m…
szilard-nemeth Nov 24, 2020
569b20e
YARN-10468. Fix TestNodeStatusUpdater timeouts and broken conditions …
amahussein Nov 24, 2020
08b2e28
YARN-10488. Several typos in package: org.apache.hadoop.yarn.server.r…
ankitk-me Nov 25, 2020
3193d8c
HADOOP-17311. ABFS: Logs should redact SAS signature (#2422)
bilaharith Nov 25, 2020
ac7045b
HADOOP-17313. FileSystem.get to support slow-to-instantiate FS client…
steveloughran Nov 25, 2020
235947e
HDFS-15689. allow/disallowSnapshot on EZ roots shouldn't fail due to …
smengcl Nov 25, 2020
ce5b3d7
[JDK 11] Fix error in mvn package -Pdocs (#2488)
aajisaka Nov 26, 2020
65002c9
Revert "[JDK 11] Fix error in mvn package -Pdocs (#2488)" because JIR…
aajisaka Nov 26, 2020
2ce2198
HADOOP-17394. [JDK 11] Fix error in mvn package -Pdocs (#2488)
aajisaka Nov 26, 2020
009ce4f
HADOOP-17396. ABFS: testRenameFileOverExistingFile fails (#2491)
snvijaya Nov 26, 2020
cf43a7e
HADOOP-17397. ABFS: SAS Test updates for version and permission updat…
snvijaya Nov 26, 2020
67dc092
HADOOP-17385. ITestS3ADeleteCost.testDirMarkersFileCreation failure (…
steveloughran Nov 26, 2020
03b4e98
HADOOP-17398. Skipping network I/O in S3A getFileStatus(/) breaks som…
mukund-thakur Nov 26, 2020
142941b
HADOOP-17296. ABFS: Force reads to be always of buffer size.
snvijaya Nov 27, 2020
68442b4
HDFS-15698. Fix the typo of dfshealth.html after HDFS-15358 (#2495)
ferhui Nov 28, 2020
4d2ae5b
YARN-10498. Fix typo in CapacityScheduler Markdown document (#2484)
Nov 30, 2020
44910b5
HDFS-15699 Remove lz4 references in vcxproj (#2498)
GauthamBanasandra Nov 30, 2020
6a1d7d9
HDFS-15677. TestRouterRpcMultiDestination#testGetCachedDatanodeReport…
iwasakims Nov 30, 2020
918ba9e
HDFS-15694. Avoid calling UpdateHeartBeatState inside DataNodeDescrip…
amahussein Dec 1, 2020
fa773a8
YARN-10278: CapacityScheduler test framework ProportionalCapacityPree…
erichadoop Dec 1, 2020
2b5b556
HDFS-15695. NN should not let the balancer run in safemode (#2489). C…
amahussein Dec 2, 2020
60201cb
HDFS-15703. Don't generate edits for set operations that are no-op (#…
amahussein Dec 2, 2020
6ff2409
HDFS-14904. Add Option to let Balancer prefer highly utilized nodes i…
LeonGao91 Dec 2, 2020
42a2919
HDFS-15705. Fix a typo in SecondaryNameNode.java. Contributed by Sixi…
jojochuang Dec 3, 2020
9969745
YARN-9883. Reshape SchedulerHealth class. Contributed by D M Murali K…
Dec 3, 2020
717b835
HADOOP-17397: ABFS: SAS Test updates for version and permission update
ThomasMarquardt Dec 1, 2020
9170eb5
YARN-10511. Update yarn.nodemanager.env-whitelist value in docs (#2512)
ilpianista Dec 3, 2020
f94e927
HADOOP-17392. Remote exception messages should not include the except…
amahussein Dec 3, 2020
db73e99
HADOOP-16881. KerberosAuthentication does not disconnect HttpURLConne…
Dec 3, 2020
07655a7
HDFS-15706. HttpFS: Log more information on request failures. (#2515)
amahussein Dec 3, 2020
8c234fc
HADOOP-17389. KMS should log full UGI principal. (#2476)
amahussein Dec 4, 2020
e2c1268
HDFS-15240. Erasure Coding: dirty buffer causes reconstruction block …
ferhui Dec 4, 2020
7dda804
HDFS-14090. RBF: Improved isolation for downstream name nodes. {Stati…
ayushtkn Dec 4, 2020
ad40715
HDFS-15221. Add checking of effective filesystem during initializing …
ayushtkn Dec 7, 2020
da1ea25
HDFS-15660. StorageTypeProto is not compatiable between 3.x and 2.6. …
linyiqun Dec 7, 2020
32099e3
HDFS-15707. NNTop counts don't add up as expected. (#2516)
amahussein Dec 7, 2020
40f7543
HDFS-15709. Socket file descriptor leak in StripedBlockChecksumRecons…
crossfire Dec 7, 2020
7d3c8ef
YARN-10495. make the rpath of container-executor configurable. Contri…
ericbadger Dec 8, 2020
4ffec79
HDFS-15712. Upgrade googletest to 1.10.0 (#2523)
GauthamBanasandra Dec 8, 2020
01383a2
HDFS-15716. WaitforReplication in TestUpgradeDomainBlockPlacementPoli…
amahussein Dec 8, 2020
aaf9e3d
YARN-10491. Fix deprecation warnings in SLSWebApp.java (#2519)
ankitk-me Dec 9, 2020
d67ccd0
YARN-10380: Import logic of multi-node allocation in CapacitySchedule…
zhuqi-lucas Dec 9, 2020
0a45bd0
YARN-10520. Deprecated the residual nested class for the LCEResourceH…
Dec 9, 2020
c2cecfc
HADOOP-17425. Bump up snappy-java to 1.1.8.2. (#2536)
viirya Dec 10, 2020
3ec01b1
HDFS-15711. Add Metrics to HttpFS Server. (#2521) Contributed by Ahme…
amahussein Dec 10, 2020
9bd3c9b
HDFS-15720 namenode audit async logger should add some log4j config (…
Neilxzn Dec 10, 2020
2 changes: 1 addition & 1 deletion LICENSE-binary
@@ -215,7 +215,7 @@ com.aliyun:aliyun-java-sdk-ecs:4.2.0
com.aliyun:aliyun-java-sdk-ram:3.0.0
com.aliyun:aliyun-java-sdk-sts:3.0.0
com.aliyun.oss:aliyun-sdk-oss:3.4.1
com.amazonaws:aws-java-sdk-bundle:1.11.563
com.amazonaws:aws-java-sdk-bundle:1.11.901
com.cedarsoftware:java-util:1.9.0
com.cedarsoftware:json-io:2.5.1
com.fasterxml.jackson.core:jackson-annotations:2.9.9
2 changes: 1 addition & 1 deletion hadoop-common-project/hadoop-auth/pom.xml
@@ -257,7 +257,7 @@
<execution>
<phase>package</phase>
<goals>
<goal>javadoc</goal>
<goal>javadoc-no-fork</goal>
</goals>
</execution>
</executions>
@@ -183,8 +183,9 @@ public void authenticate(URL url, AuthenticatedURL.Token token)
if (!token.isSet()) {
this.url = url;
base64 = new Base64(0);
HttpURLConnection conn = null;
try {
HttpURLConnection conn = token.openConnection(url, connConfigurator);
conn = token.openConnection(url, connConfigurator);
conn.setRequestMethod(AUTH_HTTP_METHOD);
conn.connect();

@@ -218,6 +219,10 @@ public void authenticate(URL url, AuthenticatedURL.Token token)
} catch (AuthenticationException ex){
throw wrapExceptionWithMessage(ex,
"Error while authenticating with endpoint: " + url);
} finally {
if (conn != null) {
conn.disconnect();
}
}
}
}
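The KerberosAuthenticator change above (HADOOP-16881) hoists the `HttpURLConnection` variable out of the `try` block so that a new `finally` clause can always call `disconnect()`, even when the connection opens but a later call throws. A minimal standalone sketch of that pattern — the `fetchResponseCode` helper is hypothetical, not the Hadoop API:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class DisconnectPattern {
    static boolean disconnected = false;

    // Declare conn outside the try so the finally block can see it;
    // this guarantees the underlying socket is released on every path.
    static int fetchResponseCode(URL url) throws IOException {
        HttpURLConnection conn = null;
        try {
            conn = (HttpURLConnection) url.openConnection();
            conn.setConnectTimeout(1000);
            conn.connect();                 // may throw mid-handshake
            return conn.getResponseCode();
        } finally {
            if (conn != null) {
                conn.disconnect();          // safe even if connect() failed
                disconnected = true;
            }
        }
    }

    public static void main(String[] args) {
        try {
            // Port 1 on loopback is almost certainly closed, so connect()
            // fails fast; the finally block must still run.
            fetchResponseCode(new URL("http://127.0.0.1:1/unreachable"));
        } catch (IOException expected) {
            // connection refused — the leak-prone path before the fix
        }
        System.out.println("disconnected=" + disconnected);
    }
}
```

Without the `finally`, an exception between `openConnection` and `getResponseCode` would leave the connection (and its Kerberos-authenticated socket) open, which is exactly the leak the patch closes.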
11 changes: 5 additions & 6 deletions hadoop-common-project/hadoop-common/pom.xml
@@ -37,7 +37,6 @@
<wsce.config.file>wsce-site.xml</wsce.config.file>
</properties>


<dependencies>
<dependency>
<groupId>org.apache.hadoop.thirdparty</groupId>
@@ -371,6 +370,11 @@
<artifactId>snappy-java</artifactId>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>org.lz4</groupId>
<artifactId>lz4-java</artifactId>
<scope>provided</scope>
</dependency>
</dependencies>

<build>
@@ -577,11 +581,6 @@
<exclude>src/main/native/m4/*</exclude>
<exclude>src/test/empty-file</exclude>
<exclude>src/test/all-tests</exclude>
<exclude>src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4.h</exclude>
<exclude>src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4.c</exclude>
<exclude>src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4hc.h</exclude>
<exclude>src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4hc.c</exclude>
<exclude>src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4hc_encoder.h</exclude>
<exclude>src/main/native/gtest/**/*</exclude>
<exclude>src/test/resources/test-untar.tgz</exclude>
<exclude>src/test/resources/test.har/_SUCCESS</exclude>
4 changes: 0 additions & 4 deletions hadoop-common-project/hadoop-common/src/CMakeLists.txt
@@ -236,10 +236,6 @@ configure_file(${CMAKE_SOURCE_DIR}/config.h.cmake ${CMAKE_BINARY_DIR}/config.h)
set(CMAKE_BUILD_WITH_INSTALL_RPATH TRUE)
hadoop_add_dual_library(hadoop
main/native/src/exception.c
${SRC}/io/compress/lz4/Lz4Compressor.c
${SRC}/io/compress/lz4/Lz4Decompressor.c
${SRC}/io/compress/lz4/lz4.c
${SRC}/io/compress/lz4/lz4hc.c
${ISAL_SOURCE_FILES}
${ZSTD_SOURCE_FILES}
${OPENSSL_SOURCE_FILES}
@@ -166,6 +166,27 @@ public class CommonConfigurationKeysPublic {
public static final String FS_AUTOMATIC_CLOSE_KEY = "fs.automatic.close";
/** Default value for FS_AUTOMATIC_CLOSE_KEY */
public static final boolean FS_AUTOMATIC_CLOSE_DEFAULT = true;

/**
* Number of filesystem instances that can be created in parallel.
* <p></p>
* A higher number here does not necessarily improve performance, especially
* for object stores, where multiple threads may be attempting to create an FS
* instance for the same URI.
* <p></p>
* Default value: {@value}.
*/
public static final String FS_CREATION_PARALLEL_COUNT =
"fs.creation.parallel.count";

/**
* Default value for {@link #FS_CREATION_PARALLEL_COUNT}.
* <p></p>
* Default value: {@value}.
*/
public static final int FS_CREATION_PARALLEL_COUNT_DEFAULT =
64;

/**
* @see
* <a href="{@docRoot}/../hadoop-project-dist/hadoop-common/core-default.xml">
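The new `fs.creation.parallel.count` key added above bounds how many `FileSystem` instances may be instantiated concurrently, with a default of 64 and a requirement that the value be positive. A small sketch of reading and validating such a key, using `java.util.Properties` as a stand-in for Hadoop's `Configuration` (the key name and default mirror the constants above; the `permits` helper is hypothetical):

```java
import java.util.Properties;

public class FsCreationParallelCount {
    // Mirrors the constants introduced in CommonConfigurationKeysPublic.
    static final String FS_CREATION_PARALLEL_COUNT = "fs.creation.parallel.count";
    static final int FS_CREATION_PARALLEL_COUNT_DEFAULT = 64;

    // Read the permit count, falling back to the default and rejecting
    // non-positive values — the same validation the Cache constructor does.
    static int permits(Properties conf) {
        int permits = Integer.parseInt(conf.getProperty(
                FS_CREATION_PARALLEL_COUNT,
                Integer.toString(FS_CREATION_PARALLEL_COUNT_DEFAULT)));
        if (permits <= 0) {
            throw new IllegalArgumentException(
                    "Invalid value of " + FS_CREATION_PARALLEL_COUNT + ": " + permits);
        }
        return permits;
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(permits(conf));   // unset -> default of 64
        conf.setProperty(FS_CREATION_PARALLEL_COUNT, "8");
        System.out.println(permits(conf));   // explicit override -> 8
    }
}
```

As the javadoc notes, raising the value does not necessarily help: for object stores, many of the waiting threads are usually trying to create an instance for the same URI, and all but one of those creations will be discarded.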
@@ -21,6 +21,7 @@
import java.io.Closeable;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InterruptedIOException;
import java.lang.ref.WeakReference;
import java.lang.ref.ReferenceQueue;
import java.net.URI;
@@ -44,6 +45,7 @@
import java.util.Stack;
import java.util.TreeSet;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicLong;

import org.apache.commons.logging.Log;
@@ -75,6 +77,7 @@
import org.apache.hadoop.security.token.DelegationTokenIssuer;
import org.apache.hadoop.util.ClassUtil;
import org.apache.hadoop.util.DataChecksum;
import org.apache.hadoop.util.DurationInfo;
import org.apache.hadoop.util.LambdaUtils;
import org.apache.hadoop.util.Progressable;
import org.apache.hadoop.util.ReflectionUtils;
@@ -200,7 +203,7 @@ public abstract class FileSystem extends Configured
public static final String USER_HOME_PREFIX = "/user";

/** FileSystem cache. */
static final Cache CACHE = new Cache();
static final Cache CACHE = new Cache(new Configuration());

/** The key this instance is stored under in the cache. */
private Cache.Key key;
@@ -2591,8 +2594,11 @@ public void close() throws IOException {
+ "; Object Identity Hash: "
+ Integer.toHexString(System.identityHashCode(this)));
// delete all files that were marked as delete-on-exit.
processDeleteOnExit();
CACHE.remove(this.key, this);
try {
processDeleteOnExit();
} finally {
CACHE.remove(this.key, this);
}
}

/**
@@ -3453,7 +3459,9 @@ public static Class<? extends FileSystem> getFileSystemClass(String scheme,
private static FileSystem createFileSystem(URI uri, Configuration conf)
throws IOException {
Tracer tracer = FsTracer.get(conf);
try(TraceScope scope = tracer.newScope("FileSystem#createFileSystem")) {
try(TraceScope scope = tracer.newScope("FileSystem#createFileSystem");
DurationInfo ignored =
new DurationInfo(LOGGER, false, "Creating FS %s", uri)) {
scope.addKVAnnotation("scheme", uri.getScheme());
Class<? extends FileSystem> clazz =
getFileSystemClass(uri.getScheme(), conf);
@@ -3476,15 +3484,39 @@ private static FileSystem createFileSystem(URI uri, Configuration conf)
}

/** Caching FileSystem objects. */
static class Cache {
static final class Cache {
private final ClientFinalizer clientFinalizer = new ClientFinalizer();

private final Map<Key, FileSystem> map = new HashMap<>();
private final Set<Key> toAutoClose = new HashSet<>();

/** Semaphore used to serialize creation of new FS instances. */
private final Semaphore creatorPermits;

/**
* Counter of the number of discarded filesystem instances
* in this cache. Primarily for testing, but it could possibly
* be made visible as some kind of metric.
*/
private final AtomicLong discardedInstances = new AtomicLong(0);

/** A variable that makes all objects in the cache unique. */
private static AtomicLong unique = new AtomicLong(1);

/**
* Instantiate. The configuration is used to read the
* count of permits issued for concurrent creation
* of filesystem instances.
* @param conf configuration
*/
Cache(final Configuration conf) {
int permits = conf.getInt(FS_CREATION_PARALLEL_COUNT,
FS_CREATION_PARALLEL_COUNT_DEFAULT);
checkArgument(permits > 0, "Invalid value of %s: %s",
FS_CREATION_PARALLEL_COUNT, permits);
creatorPermits = new Semaphore(permits);
}

FileSystem get(URI uri, Configuration conf) throws IOException{
Key key = new Key(uri, conf);
return getInternal(uri, conf, key);
@@ -3518,33 +3550,86 @@ private FileSystem getInternal(URI uri, Configuration conf, Key key)
if (fs != null) {
return fs;
}

fs = createFileSystem(uri, conf);
final long timeout = conf.getTimeDuration(SERVICE_SHUTDOWN_TIMEOUT,
SERVICE_SHUTDOWN_TIMEOUT_DEFAULT,
ShutdownHookManager.TIME_UNIT_DEFAULT);
synchronized (this) { // refetch the lock again
FileSystem oldfs = map.get(key);
if (oldfs != null) { // a file system is created while lock is releasing
fs.close(); // close the new file system
return oldfs; // return the old file system
}

// now insert the new file system into the map
if (map.isEmpty()
&& !ShutdownHookManager.get().isShutdownInProgress()) {
ShutdownHookManager.get().addShutdownHook(clientFinalizer,
SHUTDOWN_HOOK_PRIORITY, timeout,
ShutdownHookManager.TIME_UNIT_DEFAULT);
// fs not yet created, acquire lock
// to construct an instance.
try (DurationInfo d = new DurationInfo(LOGGER, false,
"Acquiring creator semaphore for %s", uri)) {
creatorPermits.acquire();
} catch (InterruptedException e) {
// acquisition was interrupted; convert to an IOE.
throw (IOException)new InterruptedIOException(e.toString())
.initCause(e);
}
FileSystem fsToClose = null;
try {
// See if FS was instantiated by another thread while waiting
// for the permit.
synchronized (this) {
fs = map.get(key);
}
fs.key = key;
map.put(key, fs);
if (conf.getBoolean(
FS_AUTOMATIC_CLOSE_KEY, FS_AUTOMATIC_CLOSE_DEFAULT)) {
toAutoClose.add(key);
if (fs != null) {
LOGGER.debug("Filesystem {} created while awaiting semaphore", uri);
return fs;
}
return fs;
// create the filesystem
fs = createFileSystem(uri, conf);
final long timeout = conf.getTimeDuration(SERVICE_SHUTDOWN_TIMEOUT,
SERVICE_SHUTDOWN_TIMEOUT_DEFAULT,
ShutdownHookManager.TIME_UNIT_DEFAULT);
// any FS to close outside of the synchronized section
synchronized (this) { // lock on the Cache object

// see if there is now an entry for the FS, which happens
// if another thread's creation overlapped with this one.
FileSystem oldfs = map.get(key);
if (oldfs != null) {
// a file system was created in a separate thread.
// save the FS reference to close outside all locks,
// and switch to returning the oldFS
fsToClose = fs;
fs = oldfs;
} else {
// register the clientFinalizer if needed and shutdown isn't
// already active
if (map.isEmpty()
&& !ShutdownHookManager.get().isShutdownInProgress()) {
ShutdownHookManager.get().addShutdownHook(clientFinalizer,
SHUTDOWN_HOOK_PRIORITY, timeout,
ShutdownHookManager.TIME_UNIT_DEFAULT);
}
// insert the new file system into the map
fs.key = key;
map.put(key, fs);
if (conf.getBoolean(
FS_AUTOMATIC_CLOSE_KEY, FS_AUTOMATIC_CLOSE_DEFAULT)) {
toAutoClose.add(key);
}
}
} // end of synchronized block
} finally {
// release the creator permit.
creatorPermits.release();
}
if (fsToClose != null) {
LOGGER.debug("Duplicate FS created for {}; discarding {}",
uri, fs);
discardedInstances.incrementAndGet();
// close the new file system
// note this will briefly remove and reinstate "fsToClose" from
// the map. It is done in a synchronized block so will not be
// visible to others.
IOUtils.cleanupWithLogger(LOGGER, fsToClose);
}
return fs;
}

/**
* Get the count of discarded instances.
* @return the count of discarded instances.
*/
@VisibleForTesting
long getDiscardedInstances() {
return discardedInstances.get();
}

synchronized void remove(Key key, FileSystem fs) {
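The reworked `getInternal` above combines three ideas: a double-checked cache lookup, a semaphore that bounds how many threads may run the slow `createFileSystem` call at once, and a losers-close-their-copy rule that counts discarded instances. A condensed, standalone sketch of that pattern — `CreatorCache` and its names are hypothetical simplifications, not the Hadoop implementation:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Semaphore;

public class CreatorCache<K, V> {
    public interface Factory<K, V> { V create(K key) throws Exception; }

    private final Map<K, V> map = new HashMap<>();
    private final Semaphore creatorPermits;   // bounds concurrent creation
    private long discarded = 0;               // instances that lost the race

    public CreatorCache(int permits) {
        this.creatorPermits = new Semaphore(permits);
    }

    public V get(K key, Factory<K, V> factory) throws Exception {
        synchronized (this) {                 // fast path: already cached
            V v = map.get(key);
            if (v != null) { return v; }
        }
        creatorPermits.acquire();             // wait for a creation slot
        try {
            synchronized (this) {             // re-check after waiting
                V v = map.get(key);
                if (v != null) { return v; }
            }
            V created = factory.create(key);  // slow work, no lock held
            synchronized (this) {
                V old = map.get(key);
                if (old != null) {            // another thread won the race
                    discarded++;              // caller keeps the old one
                    return old;
                }
                map.put(key, created);
                return created;
            }
        } finally {
            creatorPermits.release();
        }
    }

    public long discardedInstances() { return discarded; }

    public static void main(String[] args) throws Exception {
        CreatorCache<String, Object> cache = new CreatorCache<>(4);
        Object a = cache.get("s3a://bucket", k -> new Object());
        Object b = cache.get("s3a://bucket", k -> new Object());
        System.out.println("same=" + (a == b)
                + " discarded=" + cache.discardedInstances());
    }
}
```

In the real patch the losing instance is additionally closed via `IOUtils.cleanupWithLogger` outside all locks, since closing a `FileSystem` can itself do IO; this sketch only counts the discard.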