diff --git a/BUILDING.txt b/BUILDING.txt index 03dffdd80c74c..d3c9a1a7f51ee 100644 --- a/BUILDING.txt +++ b/BUILDING.txt @@ -6,6 +6,7 @@ Requirements: * Unix System * JDK 1.8 * Maven 3.3 or later +* Protocol Buffers 3.7.1 (if compiling native code) * CMake 3.1 or newer (if compiling native code) * Zlib devel (if compiling native code) * Cyrus SASL devel (if compiling native code) @@ -61,6 +62,16 @@ Installing required packages for clean install of Ubuntu 14.04 LTS Desktop: $ sudo apt-get -y install maven * Native libraries $ sudo apt-get -y install build-essential autoconf automake libtool cmake zlib1g-dev pkg-config libssl-dev libsasl2-dev +* Protocol Buffers 3.7.1 (required to build native code) + $ mkdir -p /opt/protobuf-3.7-src \ + && curl -L -s -S \ + https://github.com/protocolbuffers/protobuf/releases/download/v3.7.1/protobuf-java-3.7.1.tar.gz \ + -o /opt/protobuf-3.7.1.tar.gz \ + && tar xzf /opt/protobuf-3.7.1.tar.gz --strip-components 1 -C /opt/protobuf-3.7-src \ + && cd /opt/protobuf-3.7-src \ + && ./configure\ + && make install \ + && rm -rf /opt/protobuf-3.7-src Optional packages: @@ -384,6 +395,15 @@ Installing required dependencies for clean install of macOS 10.14: * Install native libraries, only openssl is required to compile native code, you may optionally install zlib, lz4, etc. $ brew install openssl +* Protocol Buffers 3.7.1 (required to compile native code) + $ wget https://github.com/protocolbuffers/protobuf/releases/download/v3.7.1/protobuf-java-3.7.1.tar.gz + $ mkdir -p protobuf-3.7 && tar zxvf protobuf-java-3.7.1.tar.gz --strip-components 1 -C protobuf-3.7 + $ cd protobuf-3.7 + $ ./configure + $ make + $ make check + $ make install + $ protoc --version Note that building Hadoop 3.1.1/3.1.2/3.2.0 native code from source is broken on macOS. For 3.1.1/3.1.2, you need to manually backport YARN-8622. For 3.2.0, @@ -409,6 +429,7 @@ Requirements: * Windows System * JDK 1.8 * Maven 3.0 or later +* Protocol Buffers 3.7.1 * CMake 3.1 or newer * Visual Studio 2010 Professional or Higher * Windows SDK 8.1 (if building CPU rate control for the container executor) diff --git a/dev-support/docker/Dockerfile b/dev-support/docker/Dockerfile index 969d8bb44e376..65cada2784df9 100644 --- a/dev-support/docker/Dockerfile +++ b/dev-support/docker/Dockerfile @@ -105,6 +105,23 @@ RUN mkdir -p /opt/cmake \ ENV CMAKE_HOME /opt/cmake ENV PATH "${PATH}:/opt/cmake/bin" +###### +# Install Google Protobuf 3.7.1 (2.6.0 ships with Xenial) +###### +# hadolint ignore=DL3003 +RUN mkdir -p /opt/protobuf-src \ + && curl -L -s -S \ + https://github.com/protocolbuffers/protobuf/releases/download/v3.7.1/protobuf-java-3.7.1.tar.gz \ + -o /opt/protobuf.tar.gz \ + && tar xzf /opt/protobuf.tar.gz --strip-components 1 -C /opt/protobuf-src \ + && cd /opt/protobuf-src \ + && ./configure --prefix=/opt/protobuf \ + && make install \ + && cd /root \ + && rm -rf /opt/protobuf-src +ENV PROTOBUF_HOME /opt/protobuf +ENV PATH "${PATH}:/opt/protobuf/bin" + ###### # Install Apache Maven 3.3.9 (3.3.9 ships with Xenial) ###### diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md index 090696483be34..b8f9e87e66b1d 100644 --- a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md +++ b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md @@ -119,6 +119,59 @@ Return the data at the current position. 
else result = -1 +### `InputStream.available()` + +Returns the number of bytes "estimated" to be readable on a stream before `read()` +blocks on any IO (i.e. the thread is potentially suspended for some time). + +That is: for all values `v` returned by `available()`, `read(buffer, 0, v)` +should not block. + +#### Postconditions + +```python +if len(data) == 0: + result = 0 + +elif pos >= len(data): + result = 0 + +else: + d = "the amount of data known to be already buffered/cached locally" + result = min(1, d) # optional but recommended: see below. +``` + +As `0` always meets this condition, it is nominally +possible for an implementation to simply return `0`. However, this is not +considered useful, and some applications/libraries expect a positive number. + +#### The GZip problem + +[JDK-7036144](http://bugs.java.com/bugdatabase/view_bug.do?bug_id=7036144), +"GZIPInputStream readTrailer uses faulty available() test for end-of-stream", +discusses how the JDK's GZip code uses `available()` to detect an EOF, +in a loop similar to the following: + +```java +while (instream.available() > 0) { + process(instream.read()); +} +``` + +The correct loop would have been: + +```java +int r; +while ((r = instream.read()) >= 0) { + process(r); +} +``` + +If `available()` ever returns 0, then the gzip loop halts prematurely. + +For this reason, implementations *should* return a value >= 1, even +if this breaks the strict requirement that `available()` return only the +number of bytes guaranteed not to block on reads. ### `InputStream.read(buffer[], offset, length)` diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractSeekTest.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractSeekTest.java index ca8e4a053beac..db3691611b118 100644 --- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractSeekTest.java +++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractSeekTest.java @@ -32,6 +32,7 @@ import java.io.IOException; import java.util.Random; +import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY; import static org.apache.hadoop.fs.contract.ContractTestUtils.createFile; import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset; import static org.apache.hadoop.fs.contract.ContractTestUtils.skip; @@ -99,14 +100,18 @@ public void testSeekZeroByteFile() throws Throwable { describe("seek and read a 0 byte file"); instream = getFileSystem().open(zeroByteFile); assertEquals(0, instream.getPos()); + assertAvailableIsZero(instream); //expect initial read to fai; int result = instream.read(); assertMinusOne("initial byte read", result); + assertAvailableIsZero(instream); byte[] buffer = new byte[1]; //expect that seek to 0 works instream.seek(0); + assertAvailableIsZero(instream); //reread, expect same exception result = instream.read(); + assertAvailableIsZero(instream); assertMinusOne("post-seek byte read", result); result = instream.read(buffer, 0, 1); assertMinusOne("post-seek buffer read", result); @@ -132,8 +137,8 @@ public void testBlockReadZeroByteFile() throws Throwable { @Test public void testSeekReadClosedFile() throws Throwable { instream = getFileSystem().open(smallSeekFile); - getLogger().debug( - "Stream is of type " + instream.getClass().getCanonicalName()); + getLogger().debug("Stream is of type {}", + instream.getClass().getCanonicalName());
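As an illustration of the `available()` recommendation in the fsdatainputstream.md section above (this sketch is not part of the patch; the class name and the purely in-memory "buffer" are assumptions), a stream that never reports 0 before EOF could look like this:

```java
import java.io.IOException;
import java.io.InputStream;

/** Hypothetical stream used only to illustrate the available() recommendation. */
public class LocallyBufferedStream extends InputStream {
  private final byte[] data;   // data known to be buffered/cached locally
  private int pos;

  public LocallyBufferedStream(byte[] data) {
    this.data = data;
  }

  @Override
  public int read() {
    return pos < data.length ? (data[pos++] & 0xFF) : -1;
  }

  @Override
  public int available() throws IOException {
    int buffered = data.length - pos;
    // 0 only at EOF; otherwise at least 1, so callers such as the faulty
    // GZIP loop described above do not terminate prematurely.
    return buffered <= 0 ? 0 : Math.max(1, buffered);
  }
}
```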
instream.close(); try { instream.seek(0); @@ -168,10 +173,18 @@ public void testSeekReadClosedFile() throws Throwable { try { long offset = instream.getPos(); } catch (IOException e) { - // its valid to raise error here; but the test is applied to make + // it is valid to raise error here; but the test is applied to make // sure there's no other exception like an NPE. } + // a closed stream should either fail or return 0 bytes. + try { + int a = instream.available(); + LOG.info("available() returns a value on a closed file: {}", a); + assertAvailableIsZero(instream); + } catch (IOException | IllegalStateException expected) { + // expected + } //and close again instream.close(); } @@ -205,6 +226,7 @@ public void testSeekFile() throws Throwable { //expect that seek to 0 works instream.seek(0); int result = instream.read(); + assertAvailableIsPositive(instream); assertEquals(0, result); assertEquals(1, instream.read()); assertEquals(2, instream.getPos()); @@ -226,13 +248,24 @@ public void testSeekAndReadPastEndOfFile() throws Throwable { //go just before the end instream.seek(TEST_FILE_LEN - 2); assertTrue("Premature EOF", instream.read() != -1); + assertAvailableIsPositive(instream); assertTrue("Premature EOF", instream.read() != -1); + checkAvailabilityAtEOF(); assertMinusOne("read past end of file", instream.read()); } + /** + * This can be overridden if a filesystem reports a positive value at EOF. + * @throws IOException + */ + protected void checkAvailabilityAtEOF() throws IOException { + assertAvailableIsZero(instream); + } + @Test public void testSeekPastEndOfFileThenReseekAndRead() throws Throwable { - describe("do a seek past the EOF, then verify the stream recovers"); + describe("do a seek past the EOF, " + + "then verify the stream recovers"); instream = getFileSystem().open(smallSeekFile); //go just before the end.
This may or may not fail; it may be delayed until the //read @@ -261,6 +294,7 @@ public void testSeekPastEndOfFileThenReseekAndRead() throws Throwable { //now go back and try to read from a valid point in the file instream.seek(1); assertTrue("Premature EOF", instream.read() != -1); + assertAvailableIsPositive(instream); } /** @@ -278,6 +312,7 @@ public void testSeekBigFile() throws Throwable { //expect that seek to 0 works instream.seek(0); int result = instream.read(); + assertAvailableIsPositive(instream); assertEquals(0, result); assertEquals(1, instream.read()); assertEquals(2, instream.read()); @@ -296,6 +331,7 @@ public void testSeekBigFile() throws Throwable { instream.seek(0); assertEquals(0, instream.getPos()); instream.read(); + assertAvailableIsPositive(instream); assertEquals(1, instream.getPos()); byte[] buf = new byte[80 * 1024]; instream.readFully(1, buf, 0, buf.length); @@ -314,7 +350,7 @@ public void testPositionedBulkReadDoesntChangePosition() throws Throwable { instream.seek(39999); assertTrue(-1 != instream.read()); assertEquals(40000, instream.getPos()); - + assertAvailableIsPositive(instream); int v = 256; byte[] readBuffer = new byte[v]; assertEquals(v, instream.read(128, readBuffer, 0, v)); @@ -322,6 +358,7 @@ public void testPositionedBulkReadDoesntChangePosition() throws Throwable { assertEquals(40000, instream.getPos()); //content is the same too assertEquals("@40000", block[40000], (byte) instream.read()); + assertAvailableIsPositive(instream); //now verify the picked up data for (int i = 0; i < 256; i++) { assertEquals("@" + i, block[i + 128], readBuffer[i]); @@ -376,6 +413,7 @@ public void testReadFullyZeroByteFile() throws Throwable { assertEquals(0, instream.getPos()); byte[] buffer = new byte[1]; instream.readFully(0, buffer, 0, 0); + assertAvailableIsZero(instream); assertEquals(0, instream.getPos()); // seek to 0 read 0 bytes from it instream.seek(0); @@ -551,7 +589,9 @@ public void testReadSmallFile() throws Throwable { fail("Expected an exception, got " + r); } catch (EOFException e) { handleExpectedException(e); - } catch (IOException | IllegalArgumentException | IndexOutOfBoundsException e) { + } catch (IOException + | IllegalArgumentException + | IndexOutOfBoundsException e) { handleRelaxedException("read() with a negative position ", "EOFException", e); @@ -587,6 +627,29 @@ public void testReadAtExactEOF() throws Throwable { instream = getFileSystem().open(smallSeekFile); instream.seek(TEST_FILE_LEN -1); assertTrue("read at last byte", instream.read() > 0); + assertAvailableIsZero(instream); assertEquals("read just past EOF", -1, instream.read()); } + + /** + * Assert that the number of bytes available is zero. + * @param in input stream + */ + protected static void assertAvailableIsZero(FSDataInputStream in) + throws IOException { + assertEquals("stream.available() should be zero", + 0, in.available()); + } + + /** + * Assert that the number of bytes available is greater than zero. 
+ * @param in input stream + */ + protected static void assertAvailableIsPositive(FSDataInputStream in) + throws IOException { + int available = in.available(); + assertTrue("stream.available() should be positive but is " + + available, + available > 0); + } } diff --git a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientManager.java b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientManager.java index 8218c7708712d..b15828a153098 100644 --- a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientManager.java +++ b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientManager.java @@ -78,7 +78,9 @@ public class XceiverClientManager implements Closeable { private boolean isSecurityEnabled; private final boolean topologyAwareRead; /** - * Creates a new XceiverClientManager. + * Creates a new XceiverClientManager for a non-secured ozone cluster. + * For a security-enabled cluster, clients should use the other constructor + * and pass a valid CA certificate in PEM string format. * * @param conf configuration */ diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/pipeline/Pipeline.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/pipeline/Pipeline.java index c62d9773639fc..2828f6ea41ca0 100644 --- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/pipeline/Pipeline.java +++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/pipeline/Pipeline.java @@ -41,8 +41,7 @@ */ public final class Pipeline { - private static final Logger LOG = LoggerFactory - .getLogger(Pipeline.class); + private static final Logger LOG = LoggerFactory.getLogger(Pipeline.class); private final PipelineID id; private final ReplicationType type; private final ReplicationFactor factor; diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/CacheKey.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/CacheKey.java index aa05d88dadabe..7be2921b6a117 100644 --- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/CacheKey.java +++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/CacheKey.java @@ -24,7 +24,7 @@ * CacheKey for the RocksDB table.
* @param */ -public class CacheKey { + +public class CacheKey implements Comparable { private final KEY key; @@ -53,4 +53,13 @@ public boolean equals(Object o) { public int hashCode() { return Objects.hash(key); } + + @Override + public int compareTo(Object o) { + if (Objects.equals(key, ((CacheKey) o).key)) { + return 0; + } else { + return key.toString().compareTo((((CacheKey) o).key).toString()); + } + } } diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/TableCacheImpl.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/TableCacheImpl.java index c3215c475eb9b..3e6999a49cfaa 100644 --- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/TableCacheImpl.java +++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/TableCacheImpl.java @@ -23,6 +23,7 @@ import java.util.Map; import java.util.NavigableSet; import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ConcurrentSkipListMap; import java.util.concurrent.ConcurrentSkipListSet; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; @@ -47,7 +48,7 @@ public class TableCacheImpl implements TableCache { - private final ConcurrentHashMap cache; + private final Map cache; private final NavigableSet> epochEntries; private ExecutorService executorService; private CacheCleanupPolicy cleanupPolicy; @@ -55,7 +56,14 @@ public class TableCacheImpl(); + + // Only the full-table cache (cleanup policy NEVER) needs its entries kept + // in sorted order; the other policies can use a plain ConcurrentHashMap. + if (cleanupPolicy == CacheCleanupPolicy.NEVER) { + cache = new ConcurrentSkipListMap<>(); + } else { + cache = new ConcurrentHashMap<>(); + } epochEntries = new ConcurrentSkipListSet<>(); // Created a singleThreadExecutor, so one cleanup will be running at a // time. diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java index a3d1c4ab28834..3f7d0b915d5d5 100644 --- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java +++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java @@ -453,6 +453,9 @@ public final class OzoneConfigKeys { "ozone.network.topology.aware.read"; public static final boolean OZONE_NETWORK_TOPOLOGY_AWARE_READ_DEFAULT = false; + public static final String OZONE_MANAGER_FAIR_LOCK = "ozone.om.lock.fair"; + public static final boolean OZONE_MANAGER_FAIR_LOCK_DEFAULT = false; + /** * There is no need to instantiate this class. */ diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/ActiveLock.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/ActiveLock.java index 49efad05feb5a..95dfd6c393cac 100644 --- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/ActiveLock.java +++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/ActiveLock.java @@ -31,9 +31,12 @@ public final class ActiveLock { /** * Use ActiveLock#newInstance to create instance. + * + * @param fairness - if true the lock uses a fair ordering policy, else + * non-fair ordering.
*/ - private ActiveLock() { - this.lock = new ReentrantReadWriteLock(); + private ActiveLock(boolean fairness) { + this.lock = new ReentrantReadWriteLock(fairness); this.count = new AtomicInteger(0); } @@ -42,8 +45,8 @@ private ActiveLock() { * * @return new ActiveLock */ - public static ActiveLock newInstance() { - return new ActiveLock(); + public static ActiveLock newInstance(boolean fairness) { + return new ActiveLock(fairness); } /** diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/LockManager.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/LockManager.java index 670d4d16378bd..3c2b5d4a394c2 100644 --- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/LockManager.java +++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/LockManager.java @@ -37,18 +37,31 @@ public class LockManager { private static final Logger LOG = LoggerFactory.getLogger(LockManager.class); private final Map activeLocks = new ConcurrentHashMap<>(); - private final GenericObjectPool lockPool = - new GenericObjectPool<>(new PooledLockFactory()); + private final GenericObjectPool lockPool; /** - * Creates new LockManager instance with the given Configuration. + * Creates new LockManager instance with the given Configuration and uses + * non-fair mode for locks. * * @param conf Configuration object */ public LockManager(final Configuration conf) { + this(conf, false); + } + + + /** + * Creates new LockManager instance with the given Configuration. + * + * @param conf Configuration object + * @param fair - true to use fair lock ordering, else non-fair lock ordering. + */ + public LockManager(final Configuration conf, boolean fair) { final int maxPoolSize = conf.getInt( HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY, HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY_DEFAULT); + lockPool = + new GenericObjectPool<>(new PooledLockFactory(fair)); lockPool.setMaxTotal(maxPoolSize); } diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/PooledLockFactory.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/PooledLockFactory.java index 4c24ef74b2831..1e3ba05a3a2b2 100644 --- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/PooledLockFactory.java +++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/PooledLockFactory.java @@ -26,9 +26,14 @@ */ public class PooledLockFactory extends BasePooledObjectFactory { + private boolean fairness; + + PooledLockFactory(boolean fair) { + this.fairness = fair; + } @Override public ActiveLock create() throws Exception { - return ActiveLock.newInstance(); + return ActiveLock.newInstance(fairness); } @Override diff --git a/hadoop-hdds/common/src/main/resources/ozone-default.xml b/hadoop-hdds/common/src/main/resources/ozone-default.xml index 31bc65240d294..b0a59fa209ccb 100644 --- a/hadoop-hdds/common/src/main/resources/ozone-default.xml +++ b/hadoop-hdds/common/src/main/resources/ozone-default.xml @@ -1529,6 +1529,17 @@ + + ozone.om.lock.fair + false + If this is true, the Ozone Manager lock is used in fair + mode, which schedules threads in the order their requests are queued. + If this is false, non-fair ordering is used. See + java.util.concurrent.locks.ReentrantReadWriteLock + for more information on fair/non-fair locks.
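As a sketch of how the new setting is consumed (not part of the patch; the example class name is made up, while the configuration key and the Ozone classes are the ones introduced or modified above): setting `ozone.om.lock.fair` to true makes `OzoneManagerLock` build its `LockManager` in fair mode, which ultimately constructs each `ActiveLock` with `new ReentrantReadWriteLock(true)`.

```java
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.ozone.OzoneConfigKeys;
import org.apache.hadoop.ozone.om.lock.OzoneManagerLock;

public class FairOmLockExample {
  public static void main(String[] args) {
    OzoneConfiguration conf = new OzoneConfiguration();
    // Equivalent to setting ozone.om.lock.fair=true in ozone-site.xml.
    conf.setBoolean(OzoneConfigKeys.OZONE_MANAGER_FAIR_LOCK, true);
    // The flag flows through LockManager and PooledLockFactory and finally
    // into new ReentrantReadWriteLock(true) inside each ActiveLock.
    OzoneManagerLock omLock = new OzoneManagerLock(conf);
    System.out.println("Created OM lock with fair ordering: " + omLock);
  }
}
```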
+ + + ozone.om.ratis.enable false diff --git a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/SCMCLI.java b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/SCMCLI.java index ff30eca470e09..0b5c18e8205cb 100644 --- a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/SCMCLI.java +++ b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/SCMCLI.java @@ -25,6 +25,7 @@ import org.apache.hadoop.hdds.cli.GenericCli; import org.apache.hadoop.hdds.cli.HddsVersionProvider; import org.apache.hadoop.hdds.conf.OzoneConfiguration; +import org.apache.hadoop.hdds.protocol.SCMSecurityProtocol; import org.apache.hadoop.hdds.scm.ScmConfigKeys; import org.apache.hadoop.hdds.scm.XceiverClientManager; import org.apache.hadoop.hdds.scm.cli.container.ContainerCommands; @@ -36,17 +37,20 @@ import org.apache.hadoop.hdds.scm.protocolPB .StorageContainerLocationProtocolClientSideTranslatorPB; import org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolPB; +import org.apache.hadoop.hdds.security.x509.SecurityConfig; import org.apache.hadoop.hdds.tracing.TracingUtil; import org.apache.hadoop.ipc.Client; import org.apache.hadoop.ipc.ProtobufRpcEngine; import org.apache.hadoop.ipc.RPC; import org.apache.hadoop.net.NetUtils; import org.apache.hadoop.ozone.OzoneConsts; +import org.apache.hadoop.ozone.OzoneSecurityUtil; import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.util.NativeCodeLoader; import org.apache.commons.lang3.StringUtils; import static org.apache.hadoop.hdds.HddsUtils.getScmAddressForClients; +import static org.apache.hadoop.hdds.HddsUtils.getScmSecurityClient; import static org.apache.hadoop.hdds.scm.ScmConfigKeys .OZONE_SCM_CLIENT_ADDRESS_KEY; import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_CONTAINER_SIZE; @@ -136,8 +140,21 @@ public ScmClient createScmClient() NetUtils.getDefaultSocketFactory(ozoneConf), Client.getRpcTimeout(ozoneConf))), StorageContainerLocationProtocol.class, ozoneConf); - return new ContainerOperationClient( - client, new XceiverClientManager(ozoneConf)); + + XceiverClientManager xceiverClientManager = null; + if (OzoneSecurityUtil.isSecurityEnabled(ozoneConf)) { + SecurityConfig securityConfig = new SecurityConfig(ozoneConf); + SCMSecurityProtocol scmSecurityProtocolClient = getScmSecurityClient( + (OzoneConfiguration) securityConfig.getConfiguration()); + String caCertificate = + scmSecurityProtocolClient.getCACertificate(); + xceiverClientManager = new XceiverClientManager(ozoneConf, + OzoneConfiguration.of(ozoneConf).getObject(XceiverClientManager + .ScmClientConfig.class), caCertificate); + } else { + xceiverClientManager = new XceiverClientManager(ozoneConf); + } + return new ContainerOperationClient(client, xceiverClientManager); } public void checkContainerExists(ScmClient scmClient, long containerId) diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/resources/webapps/static/index.html b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/resources/webapps/static/index.html index 7caba43124cab..b28c959be3058 100644 --- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/resources/webapps/static/index.html +++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/resources/webapps/static/index.html @@ -21,16 +21,16 @@

Hadoop HttpFS Server

\ No newline at end of file diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestRedudantBlocks.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestRedudantBlocks.java index 943699aaa7830..ac25da3fbdd0c 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestRedudantBlocks.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestRedudantBlocks.java @@ -58,7 +58,7 @@ public class TestRedudantBlocks { private final int cellSize = ecPolicy.getCellSize(); private final int stripesPerBlock = 4; private final int blockSize = stripesPerBlock * cellSize; - private final int numDNs = groupSize + 1; + private final int numDNs = groupSize; @Before public void setup() throws IOException { @@ -110,12 +110,16 @@ public void testProcessOverReplicatedAndRedudantBlock() throws Exception { // update blocksMap cluster.triggerBlockReports(); - // add to invalidates + // delete redundant block cluster.triggerHeartbeats(); - // datanode delete block + //wait for IBR + Thread.sleep(1100); + + // trigger reconstruction cluster.triggerHeartbeats(); - // update blocksMap - cluster.triggerBlockReports(); + + //wait for IBR + Thread.sleep(1100); HashSet blockIdsSet = new HashSet(); diff --git a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/BlockOutputStreamEntryPool.java b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/BlockOutputStreamEntryPool.java index 045997fd05584..b179ca5395695 100644 --- a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/BlockOutputStreamEntryPool.java +++ b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/BlockOutputStreamEntryPool.java @@ -193,10 +193,12 @@ public List getLocationInfoList() { .setPipeline(streamEntry.getPipeline()).build(); locationInfoList.add(info); } - LOG.debug( - "block written " + streamEntry.getBlockID() + ", length " + length - + " bcsID " + streamEntry.getBlockID() - .getBlockCommitSequenceId()); + if (LOG.isDebugEnabled()) { + LOG.debug( + "block written " + streamEntry.getBlockID() + ", length " + length + + " bcsID " + streamEntry.getBlockID() + .getBlockCommitSequenceId()); + } } return locationInfoList; } diff --git a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyInputStream.java b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyInputStream.java index fa1672a1fa7d0..ecbb3290a7dc6 100644 --- a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyInputStream.java +++ b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyInputStream.java @@ -97,8 +97,10 @@ private synchronized void initialize(String keyName, long keyLength = 0; for (int i = 0; i < blockInfos.size(); i++) { OmKeyLocationInfo omKeyLocationInfo = blockInfos.get(i); - LOG.debug("Adding stream for accessing {}. The stream will be " + - "initialized later.", omKeyLocationInfo); + if (LOG.isDebugEnabled()) { + LOG.debug("Adding stream for accessing {}. 
The stream will be " + + "initialized later.", omKeyLocationInfo); + } addStream(omKeyLocationInfo, xceiverClientManager, verifyChecksum); diff --git a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java index d0dd124171f58..06351ab2c3d0b 100644 --- a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java +++ b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java @@ -439,10 +439,14 @@ public Token getDelegationToken(Text renewer) ozoneManagerClient.getDelegationToken(renewer); if (token != null) { token.setService(dtService); - LOG.debug("Created token {} for dtService {}", token, dtService); + if (LOG.isDebugEnabled()) { + LOG.debug("Created token {} for dtService {}", token, dtService); + } } else { - LOG.debug("Cannot get ozone delegation token for renewer {} to access " + - "service {}", renewer, dtService); + if (LOG.isDebugEnabled()) { + LOG.debug("Cannot get ozone delegation token for renewer {} to " + + "access service {}", renewer, dtService); + } } return token; } diff --git a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/S3SecretManagerImpl.java b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/S3SecretManagerImpl.java index 2fdf543f31bec..fb5665820628c 100644 --- a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/S3SecretManagerImpl.java +++ b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/S3SecretManagerImpl.java @@ -75,7 +75,9 @@ public S3SecretValue getS3Secret(String kerberosID) throws IOException { } finally { omMetadataManager.getLock().releaseLock(S3_SECRET_LOCK, kerberosID); } - LOG.trace("Secret for accessKey:{}, proto:{}", kerberosID, result); + if (LOG.isTraceEnabled()) { + LOG.trace("Secret for accessKey:{}, proto:{}", kerberosID, result); + } return result; } diff --git a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/ha/OMFailoverProxyProvider.java b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/ha/OMFailoverProxyProvider.java index 62d8fdc2613a1..32684de5b73f2 100644 --- a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/ha/OMFailoverProxyProvider.java +++ b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/ha/OMFailoverProxyProvider.java @@ -214,8 +214,10 @@ private Text computeDelegationTokenService() { @Override public void performFailover(OzoneManagerProtocolPB currentProxy) { int newProxyIndex = incrementProxyIndex(); - LOG.debug("Failing over OM proxy to index: {}, nodeId: {}", - newProxyIndex, omNodeIDList.get(newProxyIndex)); + if (LOG.isDebugEnabled()) { + LOG.debug("Failing over OM proxy to index: {}, nodeId: {}", + newProxyIndex, omNodeIDList.get(newProxyIndex)); + } } /** diff --git a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OMRatisHelper.java b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OMRatisHelper.java index bc64d6c5a1fd5..c1930c85d03f5 100644 --- a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OMRatisHelper.java +++ b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OMRatisHelper.java @@ -61,7 +61,9 @@ private OMRatisHelper() { */ public static RaftClient newRaftClient(RpcType rpcType, String omId, RaftGroup group, RetryPolicy retryPolicy, Configuration conf) { - LOG.trace("newRaftClient: {}, leader={}, group={}", rpcType, omId, group); + if (LOG.isTraceEnabled()) { + 
LOG.trace("newRaftClient: {}, leader={}, group={}", rpcType, omId, group); + } final RaftProperties properties = new RaftProperties(); RaftConfigKeys.Rpc.setType(properties, rpcType); diff --git a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java index c6a99ac2d9cfd..31f092446234e 100644 --- a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java +++ b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java @@ -29,6 +29,9 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.ozone.lock.LockManager; +import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_MANAGER_FAIR_LOCK_DEFAULT; +import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_MANAGER_FAIR_LOCK; + /** * Provides different locks to handle concurrency in OzoneMaster. * We also maintain lock hierarchy, based on the weight. @@ -89,7 +92,9 @@ public class OzoneManagerLock { * @param conf Configuration object */ public OzoneManagerLock(Configuration conf) { - manager = new LockManager<>(conf); + boolean fair = conf.getBoolean(OZONE_MANAGER_FAIR_LOCK, + OZONE_MANAGER_FAIR_LOCK_DEFAULT); + manager = new LockManager<>(conf, fair); } /** @@ -168,8 +173,10 @@ private boolean lock(Resource resource, String resourceName, throw new RuntimeException(errorMessage); } else { lockFn.accept(resourceName); - LOG.debug("Acquired {} {} lock on resource {}", lockType, resource.name, - resourceName); + if (LOG.isDebugEnabled()) { + LOG.debug("Acquired {} {} lock on resource {}", lockType, resource.name, + resourceName); + } lockSet.set(resource.setLock(lockSet.get())); return true; } @@ -264,8 +271,10 @@ public boolean acquireMultiUserLock(String firstUser, String secondUser) { throw ex; } } - LOG.debug("Acquired Write {} lock on resource {} and {}", resource.name, - firstUser, secondUser); + if (LOG.isDebugEnabled()) { + LOG.debug("Acquired Write {} lock on resource {} and {}", resource.name, + firstUser, secondUser); + } lockSet.set(resource.setLock(lockSet.get())); return true; } @@ -300,8 +309,10 @@ public void releaseMultiUserLock(String firstUser, String secondUser) { manager.writeUnlock(firstUser); manager.writeUnlock(secondUser); } - LOG.debug("Release Write {} lock on resource {} and {}", resource.name, - firstUser, secondUser); + if (LOG.isDebugEnabled()) { + LOG.debug("Release Write {} lock on resource {} and {}", resource.name, + firstUser, secondUser); + } lockSet.set(resource.clearLock(lockSet.get())); } @@ -352,8 +363,10 @@ private void unlock(Resource resource, String resourceName, // locks, as some locks support acquiring lock again. 
lockFn.accept(resourceName); // clear lock - LOG.debug("Release {} {}, lock on resource {}", lockType, resource.name, - resourceName); + if (LOG.isDebugEnabled()) { + LOG.debug("Release {} {}, lock on resource {}", lockType, resource.name, + resourceName); + } lockSet.set(resource.clearLock(lockSet.get())); } diff --git a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneBlockTokenSecretManager.java b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneBlockTokenSecretManager.java index b3f607a9c3610..5cc782336a85a 100644 --- a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneBlockTokenSecretManager.java +++ b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneBlockTokenSecretManager.java @@ -89,7 +89,7 @@ public Token generateToken(String user, if (LOG.isTraceEnabled()) { long expiryTime = tokenIdentifier.getExpiryDate(); String tokenId = tokenIdentifier.toString(); - LOG.trace("Issued delegation token -> expiryTime:{},tokenId:{}", + LOG.trace("Issued delegation token -> expiryTime:{}, tokenId:{}", expiryTime, tokenId); } // Pass blockId as service. diff --git a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSecretManager.java b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSecretManager.java index 7e03095cdc45c..0de8ac63c3f04 100644 --- a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSecretManager.java +++ b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSecretManager.java @@ -289,8 +289,10 @@ public OzoneTokenIdentifier cancelToken(Token token, String canceller) throws IOException { OzoneTokenIdentifier id = OzoneTokenIdentifier.readProtoBuf( token.getIdentifier()); - LOG.debug("Token cancellation requested for identifier: {}", - formatTokenId(id)); + if (LOG.isDebugEnabled()) { + LOG.debug("Token cancellation requested for identifier: {}", + formatTokenId(id)); + } if (id.getUser() == null) { throw new InvalidToken("Token with no owner " + formatTokenId(id)); diff --git a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSelector.java b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSelector.java index dd2ab1fa2e507..68afaaf52b81a 100644 --- a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSelector.java +++ b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSelector.java @@ -43,9 +43,13 @@ public OzoneDelegationTokenSelector() { @Override public Token selectToken(Text service, Collection> tokens) { - LOG.trace("Getting token for service {}", service); + if (LOG.isTraceEnabled()) { + LOG.trace("Getting token for service {}", service); + } Token token = getSelectedTokens(service, tokens); - LOG.debug("Got tokens: {} for service {}", token, service); + if (LOG.isDebugEnabled()) { + LOG.debug("Got tokens: {} for service {}", token, service); + } return token; } diff --git a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneSecretManager.java b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneSecretManager.java index 78f0565b81dc7..06fc071f32dde 100644 --- a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneSecretManager.java +++ b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneSecretManager.java @@ 
-110,8 +110,10 @@ public byte[] createPassword(byte[] identifier, PrivateKey privateKey) @Override public byte[] createPassword(T identifier) { - logger.debug("Creating password for identifier: {}, currentKey: {}", - formatTokenId(identifier), currentKey.getKeyId()); + if (logger.isDebugEnabled()) { + logger.debug("Creating password for identifier: {}, currentKey: {}", + formatTokenId(identifier), currentKey.getKeyId()); + } byte[] password = null; try { password = createPassword(identifier.getBytes(), diff --git a/hadoop-ozone/dev-support/checks/_mvn_unit_report.sh b/hadoop-ozone/dev-support/checks/_mvn_unit_report.sh index df193307d2f67..81551d1ed9778 100755 --- a/hadoop-ozone/dev-support/checks/_mvn_unit_report.sh +++ b/hadoop-ozone/dev-support/checks/_mvn_unit_report.sh @@ -45,6 +45,11 @@ grep -A1 'Crashed tests' "${REPORT_DIR}/output.log" \ | cut -f2- -d' ' \ | sort -u >> "${REPORT_DIR}/summary.txt" +## Check if Maven was killed +if grep -q 'Killed.* mvn .* test ' "${REPORT_DIR}/output.log"; then + echo 'Maven test run was killed' >> "${REPORT_DIR}/summary.txt" +fi + #Collect of all of the report failes of FAILED tests while IFS= read -r -d '' dir; do while IFS=$'\n' read -r file; do diff --git a/hadoop-ozone/dist/src/main/compose/ozone-hdfs/docker-config b/hadoop-ozone/dist/src/main/compose/ozone-hdfs/docker-config index 3232a105f96e3..63bbbd8987338 100644 --- a/hadoop-ozone/dist/src/main/compose/ozone-hdfs/docker-config +++ b/hadoop-ozone/dist/src/main/compose/ozone-hdfs/docker-config @@ -31,51 +31,5 @@ HDFS-SITE.XML_dfs.namenode.rpc-address=namenode:9000 HDFS-SITE.XML_rpc.metrics.quantile.enable=true HDFS-SITE.XML_rpc.metrics.percentiles.intervals=60,300 -LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout -LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender -LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout -LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n -LOG4J.PROPERTIES_log4j.logger.http.requests.s3gateway=INFO,s3gatewayrequestlog -LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog=org.apache.hadoop.http.HttpRequestLogAppender -LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.Filename=/tmp/jetty-s3gateway-yyyy_mm_dd.log -LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.RetainDays=3 - #Enable this variable to print out all hadoop rpc traffic to the stdout. See http://byteman.jboss.org/ to define your own instrumentation. 
#BYTEMAN_SCRIPT_URL=https://raw.githubusercontent.com/apache/hadoop/trunk/dev-support/byteman/hadooprpc.btm - -#LOG4J2.PROPERTIES_* are for Ozone Audit Logging -LOG4J2.PROPERTIES_monitorInterval=30 -LOG4J2.PROPERTIES_filter=read,write -LOG4J2.PROPERTIES_filter.read.type=MarkerFilter -LOG4J2.PROPERTIES_filter.read.marker=READ -LOG4J2.PROPERTIES_filter.read.onMatch=DENY -LOG4J2.PROPERTIES_filter.read.onMismatch=NEUTRAL -LOG4J2.PROPERTIES_filter.write.type=MarkerFilter -LOG4J2.PROPERTIES_filter.write.marker=WRITE -LOG4J2.PROPERTIES_filter.write.onMatch=NEUTRAL -LOG4J2.PROPERTIES_filter.write.onMismatch=NEUTRAL -LOG4J2.PROPERTIES_appenders=console, rolling -LOG4J2.PROPERTIES_appender.console.type=Console -LOG4J2.PROPERTIES_appender.console.name=STDOUT -LOG4J2.PROPERTIES_appender.console.layout.type=PatternLayout -LOG4J2.PROPERTIES_appender.console.layout.pattern=%d{DEFAULT} | %-5level | %c{1} | %msg | %throwable{3} %n -LOG4J2.PROPERTIES_appender.rolling.type=RollingFile -LOG4J2.PROPERTIES_appender.rolling.name=RollingFile -LOG4J2.PROPERTIES_appender.rolling.fileName=${sys:hadoop.log.dir}/om-audit-${hostName}.log -LOG4J2.PROPERTIES_appender.rolling.filePattern=${sys:hadoop.log.dir}/om-audit-${hostName}-%d{yyyy-MM-dd-HH-mm-ss}-%i.log.gz -LOG4J2.PROPERTIES_appender.rolling.layout.type=PatternLayout -LOG4J2.PROPERTIES_appender.rolling.layout.pattern=%d{DEFAULT} | %-5level | %c{1} | %msg | %throwable{3} %n -LOG4J2.PROPERTIES_appender.rolling.policies.type=Policies -LOG4J2.PROPERTIES_appender.rolling.policies.time.type=TimeBasedTriggeringPolicy -LOG4J2.PROPERTIES_appender.rolling.policies.time.interval=86400 -LOG4J2.PROPERTIES_appender.rolling.policies.size.type=SizeBasedTriggeringPolicy -LOG4J2.PROPERTIES_appender.rolling.policies.size.size=64MB -LOG4J2.PROPERTIES_loggers=audit -LOG4J2.PROPERTIES_logger.audit.type=AsyncLogger -LOG4J2.PROPERTIES_logger.audit.name=OMAudit -LOG4J2.PROPERTIES_logger.audit.level=INFO -LOG4J2.PROPERTIES_logger.audit.appenderRefs=rolling -LOG4J2.PROPERTIES_logger.audit.appenderRef.file.ref=RollingFile -LOG4J2.PROPERTIES_rootLogger.level=INFO -LOG4J2.PROPERTIES_rootLogger.appenderRefs=stdout -LOG4J2.PROPERTIES_rootLogger.appenderRef.stdout.ref=STDOUT diff --git a/hadoop-ozone/dist/src/main/compose/ozone-mr/common-config b/hadoop-ozone/dist/src/main/compose/ozone-mr/common-config index b83f3323fab13..7936238833125 100644 --- a/hadoop-ozone/dist/src/main/compose/ozone-mr/common-config +++ b/hadoop-ozone/dist/src/main/compose/ozone-mr/common-config @@ -75,12 +75,3 @@ CAPACITY-SCHEDULER.XML_yarn.scheduler.capacity.root.default.acl_administer_queue CAPACITY-SCHEDULER.XML_yarn.scheduler.capacity.node-locality-delay=40 CAPACITY-SCHEDULER.XML_yarn.scheduler.capacity.queue-mappings= CAPACITY-SCHEDULER.XML_yarn.scheduler.capacity.queue-mappings-override.enable=false - -LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout -LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender -LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout -LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR -LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop=INFO -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR diff --git a/hadoop-ozone/dist/src/main/compose/ozone-om-ha/docker-config 
b/hadoop-ozone/dist/src/main/compose/ozone-om-ha/docker-config index 5c3b2a2c2db11..f3de99a50a796 100644 --- a/hadoop-ozone/dist/src/main/compose/ozone-om-ha/docker-config +++ b/hadoop-ozone/dist/src/main/compose/ozone-om-ha/docker-config @@ -35,51 +35,6 @@ OZONE-SITE.XML_hdds.profiler.endpoint.enabled=true HDFS-SITE.XML_rpc.metrics.quantile.enable=true HDFS-SITE.XML_rpc.metrics.percentiles.intervals=60,300 ASYNC_PROFILER_HOME=/opt/profiler -LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout -LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender -LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout -LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR -LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR -LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.grpc.client.GrpcClientProtocolClient=WARN #Enable this variable to print out all hadoop rpc traffic to the stdout. See http://byteman.jboss.org/ to define your own instrumentation. #BYTEMAN_SCRIPT_URL=https://raw.githubusercontent.com/apache/hadoop/trunk/dev-support/byteman/hadooprpc.btm - -#LOG4J2.PROPERTIES_* are for Ozone Audit Logging -LOG4J2.PROPERTIES_monitorInterval=30 -LOG4J2.PROPERTIES_filter=read,write -LOG4J2.PROPERTIES_filter.read.type=MarkerFilter -LOG4J2.PROPERTIES_filter.read.marker=READ -LOG4J2.PROPERTIES_filter.read.onMatch=DENY -LOG4J2.PROPERTIES_filter.read.onMismatch=NEUTRAL -LOG4J2.PROPERTIES_filter.write.type=MarkerFilter -LOG4J2.PROPERTIES_filter.write.marker=WRITE -LOG4J2.PROPERTIES_filter.write.onMatch=NEUTRAL -LOG4J2.PROPERTIES_filter.write.onMismatch=NEUTRAL -LOG4J2.PROPERTIES_appenders=console, rolling -LOG4J2.PROPERTIES_appender.console.type=Console -LOG4J2.PROPERTIES_appender.console.name=STDOUT -LOG4J2.PROPERTIES_appender.console.layout.type=PatternLayout -LOG4J2.PROPERTIES_appender.console.layout.pattern=%d{DEFAULT} | %-5level | %c{1} | %msg | %throwable{3} %n -LOG4J2.PROPERTIES_appender.rolling.type=RollingFile -LOG4J2.PROPERTIES_appender.rolling.name=RollingFile -LOG4J2.PROPERTIES_appender.rolling.fileName=${sys:hadoop.log.dir}/om-audit-${hostName}.log -LOG4J2.PROPERTIES_appender.rolling.filePattern=${sys:hadoop.log.dir}/om-audit-${hostName}-%d{yyyy-MM-dd-HH-mm-ss}-%i.log.gz -LOG4J2.PROPERTIES_appender.rolling.layout.type=PatternLayout -LOG4J2.PROPERTIES_appender.rolling.layout.pattern=%d{DEFAULT} | %-5level | %c{1} | %msg | %throwable{3} %n -LOG4J2.PROPERTIES_appender.rolling.policies.type=Policies -LOG4J2.PROPERTIES_appender.rolling.policies.time.type=TimeBasedTriggeringPolicy -LOG4J2.PROPERTIES_appender.rolling.policies.time.interval=86400 -LOG4J2.PROPERTIES_appender.rolling.policies.size.type=SizeBasedTriggeringPolicy -LOG4J2.PROPERTIES_appender.rolling.policies.size.size=64MB -LOG4J2.PROPERTIES_loggers=audit -LOG4J2.PROPERTIES_logger.audit.type=AsyncLogger -LOG4J2.PROPERTIES_logger.audit.name=OMAudit -LOG4J2.PROPERTIES_logger.audit.level=INFO -LOG4J2.PROPERTIES_logger.audit.appenderRefs=rolling -LOG4J2.PROPERTIES_logger.audit.appenderRef.file.ref=RollingFile -LOG4J2.PROPERTIES_rootLogger.level=INFO -LOG4J2.PROPERTIES_rootLogger.appenderRefs=stdout -LOG4J2.PROPERTIES_rootLogger.appenderRef.stdout.ref=STDOUT diff --git a/hadoop-ozone/dist/src/main/compose/ozone-recon/docker-config 
b/hadoop-ozone/dist/src/main/compose/ozone-recon/docker-config index e45353b78601d..61d1378cded30 100644 --- a/hadoop-ozone/dist/src/main/compose/ozone-recon/docker-config +++ b/hadoop-ozone/dist/src/main/compose/ozone-recon/docker-config @@ -31,51 +31,6 @@ OZONE-SITE.XML_hdds.profiler.endpoint.enabled=true HDFS-SITE.XML_rpc.metrics.quantile.enable=true HDFS-SITE.XML_rpc.metrics.percentiles.intervals=60,300 ASYNC_PROFILER_HOME=/opt/profiler -LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout -LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender -LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout -LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR -LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR -LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.grpc.client.GrpcClientProtocolClient=WARN #Enable this variable to print out all hadoop rpc traffic to the stdout. See http://byteman.jboss.org/ to define your own instrumentation. -#BYTEMAN_SCRIPT_URL=https://raw.githubusercontent.com/apache/hadoop/trunk/dev-support/byteman/hadooprpc.btm - -#LOG4J2.PROPERTIES_* are for Ozone Audit Logging -LOG4J2.PROPERTIES_monitorInterval=30 -LOG4J2.PROPERTIES_filter=read,write -LOG4J2.PROPERTIES_filter.read.type=MarkerFilter -LOG4J2.PROPERTIES_filter.read.marker=READ -LOG4J2.PROPERTIES_filter.read.onMatch=DENY -LOG4J2.PROPERTIES_filter.read.onMismatch=NEUTRAL -LOG4J2.PROPERTIES_filter.write.type=MarkerFilter -LOG4J2.PROPERTIES_filter.write.marker=WRITE -LOG4J2.PROPERTIES_filter.write.onMatch=NEUTRAL -LOG4J2.PROPERTIES_filter.write.onMismatch=NEUTRAL -LOG4J2.PROPERTIES_appenders=console, rolling -LOG4J2.PROPERTIES_appender.console.type=Console -LOG4J2.PROPERTIES_appender.console.name=STDOUT -LOG4J2.PROPERTIES_appender.console.layout.type=PatternLayout -LOG4J2.PROPERTIES_appender.console.layout.pattern=%d{DEFAULT} | %-5level | %c{1} | %msg | %throwable{3} %n -LOG4J2.PROPERTIES_appender.rolling.type=RollingFile -LOG4J2.PROPERTIES_appender.rolling.name=RollingFile -LOG4J2.PROPERTIES_appender.rolling.fileName=${sys:hadoop.log.dir}/om-audit-${hostName}.log -LOG4J2.PROPERTIES_appender.rolling.filePattern=${sys:hadoop.log.dir}/om-audit-${hostName}-%d{yyyy-MM-dd-HH-mm-ss}-%i.log.gz -LOG4J2.PROPERTIES_appender.rolling.layout.type=PatternLayout -LOG4J2.PROPERTIES_appender.rolling.layout.pattern=%d{DEFAULT} | %-5level | %c{1} | %msg | %throwable{3} %n -LOG4J2.PROPERTIES_appender.rolling.policies.type=Policies -LOG4J2.PROPERTIES_appender.rolling.policies.time.type=TimeBasedTriggeringPolicy -LOG4J2.PROPERTIES_appender.rolling.policies.time.interval=86400 -LOG4J2.PROPERTIES_appender.rolling.policies.size.type=SizeBasedTriggeringPolicy -LOG4J2.PROPERTIES_appender.rolling.policies.size.size=64MB -LOG4J2.PROPERTIES_loggers=audit -LOG4J2.PROPERTIES_logger.audit.type=AsyncLogger -LOG4J2.PROPERTIES_logger.audit.name=OMAudit -LOG4J2.PROPERTIES_logger.audit.level=INFO -LOG4J2.PROPERTIES_logger.audit.appenderRefs=rolling -LOG4J2.PROPERTIES_logger.audit.appenderRef.file.ref=RollingFile -LOG4J2.PROPERTIES_rootLogger.level=INFO -LOG4J2.PROPERTIES_rootLogger.appenderRefs=stdout -LOG4J2.PROPERTIES_rootLogger.appenderRef.stdout.ref=STDOUT +#BYTEMAN_SCRIPT_URL=https://raw.githubusercontent.com/apache/hadoop/trunk/dev-support/byteman/hadooprpc.btm \ No newline at 
end of file diff --git a/hadoop-ozone/dist/src/main/compose/ozone-topology/docker-config b/hadoop-ozone/dist/src/main/compose/ozone-topology/docker-config index cfbdfae26cb88..ac6a3679de3a7 100644 --- a/hadoop-ozone/dist/src/main/compose/ozone-topology/docker-config +++ b/hadoop-ozone/dist/src/main/compose/ozone-topology/docker-config @@ -33,18 +33,6 @@ OZONE-SITE.XML_dfs.network.topology.aware.read.enable=true HDFS-SITE.XML_rpc.metrics.quantile.enable=true HDFS-SITE.XML_rpc.metrics.percentiles.intervals=60,300 ASYNC_PROFILER_HOME=/opt/profiler -LOG4J.PROPERTIES_log4j.rootLogger=DEBUG, ARF -LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender -LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout -LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR -LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR -LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.grpc.client.GrpcClientProtocolClient=WARN -LOG4J.PROPERTIES_log4j.appender.ARF=org.apache.log4j.RollingFileAppender -LOG4J.PROPERTIES_log4j.appender.ARF.layout=org.apache.log4j.PatternLayout -LOG4J.PROPERTIES_log4j.appender.ARF.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n -LOG4J.PROPERTIES_log4j.appender.ARF.file=/opt/hadoop/logs/${module.name}-${user.name}.log HDDS_DN_OPTS=-Dmodule.name=datanode HDFS_OM_OPTS=-Dmodule.name=om HDFS_STORAGECONTAINERMANAGER_OPTS=-Dmodule.name=scm @@ -53,40 +41,3 @@ HDFS_SCM_CLI_OPTS=-Dmodule.name=scmcli #Enable this variable to print out all hadoop rpc traffic to the stdout. See http://byteman.jboss.org/ to define your own instrumentation. 
#BYTEMAN_SCRIPT_URL=https://raw.githubusercontent.com/apache/hadoop/trunk/dev-support/byteman/hadooprpc.btm - -#LOG4J2.PROPERTIES_* are for Ozone Audit Logging -LOG4J2.PROPERTIES_monitorInterval=30 -LOG4J2.PROPERTIES_filter=read,write -LOG4J2.PROPERTIES_filter.read.type=MarkerFilter -LOG4J2.PROPERTIES_filter.read.marker=READ -LOG4J2.PROPERTIES_filter.read.onMatch=DENY -LOG4J2.PROPERTIES_filter.read.onMismatch=NEUTRAL -LOG4J2.PROPERTIES_filter.write.type=MarkerFilter -LOG4J2.PROPERTIES_filter.write.marker=WRITE -LOG4J2.PROPERTIES_filter.write.onMatch=NEUTRAL -LOG4J2.PROPERTIES_filter.write.onMismatch=NEUTRAL -LOG4J2.PROPERTIES_appenders=console, rolling -LOG4J2.PROPERTIES_appender.console.type=Console -LOG4J2.PROPERTIES_appender.console.name=STDOUT -LOG4J2.PROPERTIES_appender.console.layout.type=PatternLayout -LOG4J2.PROPERTIES_appender.console.layout.pattern=%d{DEFAULT} | %-5level | %c{1} | %msg | %throwable{3} %n -LOG4J2.PROPERTIES_appender.rolling.type=RollingFile -LOG4J2.PROPERTIES_appender.rolling.name=RollingFile -LOG4J2.PROPERTIES_appender.rolling.fileName=${sys:hadoop.log.dir}/om-audit-${hostName}.log -LOG4J2.PROPERTIES_appender.rolling.filePattern=${sys:hadoop.log.dir}/om-audit-${hostName}-%d{yyyy-MM-dd-HH-mm-ss}-%i.log.gz -LOG4J2.PROPERTIES_appender.rolling.layout.type=PatternLayout -LOG4J2.PROPERTIES_appender.rolling.layout.pattern=%d{DEFAULT} | %-5level | %c{1} | %msg | %throwable{3} %n -LOG4J2.PROPERTIES_appender.rolling.policies.type=Policies -LOG4J2.PROPERTIES_appender.rolling.policies.time.type=TimeBasedTriggeringPolicy -LOG4J2.PROPERTIES_appender.rolling.policies.time.interval=86400 -LOG4J2.PROPERTIES_appender.rolling.policies.size.type=SizeBasedTriggeringPolicy -LOG4J2.PROPERTIES_appender.rolling.policies.size.size=64MB -LOG4J2.PROPERTIES_loggers=audit -LOG4J2.PROPERTIES_logger.audit.type=AsyncLogger -LOG4J2.PROPERTIES_logger.audit.name=OMAudit -LOG4J2.PROPERTIES_logger.audit.level=INFO -LOG4J2.PROPERTIES_logger.audit.appenderRefs=rolling -LOG4J2.PROPERTIES_logger.audit.appenderRef.file.ref=RollingFile -LOG4J2.PROPERTIES_rootLogger.level=INFO -LOG4J2.PROPERTIES_rootLogger.appenderRefs=stdout -LOG4J2.PROPERTIES_rootLogger.appenderRef.stdout.ref=STDOUT diff --git a/hadoop-ozone/dist/src/main/compose/ozone/docker-config b/hadoop-ozone/dist/src/main/compose/ozone/docker-config index c7a1647774f35..380b529cd33a8 100644 --- a/hadoop-ozone/dist/src/main/compose/ozone/docker-config +++ b/hadoop-ozone/dist/src/main/compose/ozone/docker-config @@ -29,51 +29,6 @@ OZONE-SITE.XML_hdds.profiler.endpoint.enabled=true HDFS-SITE.XML_rpc.metrics.quantile.enable=true HDFS-SITE.XML_rpc.metrics.percentiles.intervals=60,300 ASYNC_PROFILER_HOME=/opt/profiler -LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout -LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender -LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout -LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR -LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR -LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.grpc.client.GrpcClientProtocolClient=WARN #Enable this variable to print out all hadoop rpc traffic to the stdout. See http://byteman.jboss.org/ to define your own instrumentation. 
#BYTEMAN_SCRIPT_URL=https://raw.githubusercontent.com/apache/hadoop/trunk/dev-support/byteman/hadooprpc.btm - -#LOG4J2.PROPERTIES_* are for Ozone Audit Logging -LOG4J2.PROPERTIES_monitorInterval=30 -LOG4J2.PROPERTIES_filter=read,write -LOG4J2.PROPERTIES_filter.read.type=MarkerFilter -LOG4J2.PROPERTIES_filter.read.marker=READ -LOG4J2.PROPERTIES_filter.read.onMatch=DENY -LOG4J2.PROPERTIES_filter.read.onMismatch=NEUTRAL -LOG4J2.PROPERTIES_filter.write.type=MarkerFilter -LOG4J2.PROPERTIES_filter.write.marker=WRITE -LOG4J2.PROPERTIES_filter.write.onMatch=NEUTRAL -LOG4J2.PROPERTIES_filter.write.onMismatch=NEUTRAL -LOG4J2.PROPERTIES_appenders=console, rolling -LOG4J2.PROPERTIES_appender.console.type=Console -LOG4J2.PROPERTIES_appender.console.name=STDOUT -LOG4J2.PROPERTIES_appender.console.layout.type=PatternLayout -LOG4J2.PROPERTIES_appender.console.layout.pattern=%d{DEFAULT} | %-5level | %c{1} | %msg | %throwable{3} %n -LOG4J2.PROPERTIES_appender.rolling.type=RollingFile -LOG4J2.PROPERTIES_appender.rolling.name=RollingFile -LOG4J2.PROPERTIES_appender.rolling.fileName=${sys:hadoop.log.dir}/om-audit-${hostName}.log -LOG4J2.PROPERTIES_appender.rolling.filePattern=${sys:hadoop.log.dir}/om-audit-${hostName}-%d{yyyy-MM-dd-HH-mm-ss}-%i.log.gz -LOG4J2.PROPERTIES_appender.rolling.layout.type=PatternLayout -LOG4J2.PROPERTIES_appender.rolling.layout.pattern=%d{DEFAULT} | %-5level | %c{1} | %msg | %throwable{3} %n -LOG4J2.PROPERTIES_appender.rolling.policies.type=Policies -LOG4J2.PROPERTIES_appender.rolling.policies.time.type=TimeBasedTriggeringPolicy -LOG4J2.PROPERTIES_appender.rolling.policies.time.interval=86400 -LOG4J2.PROPERTIES_appender.rolling.policies.size.type=SizeBasedTriggeringPolicy -LOG4J2.PROPERTIES_appender.rolling.policies.size.size=64MB -LOG4J2.PROPERTIES_loggers=audit -LOG4J2.PROPERTIES_logger.audit.type=AsyncLogger -LOG4J2.PROPERTIES_logger.audit.name=OMAudit -LOG4J2.PROPERTIES_logger.audit.level=INFO -LOG4J2.PROPERTIES_logger.audit.appenderRefs=rolling -LOG4J2.PROPERTIES_logger.audit.appenderRef.file.ref=RollingFile -LOG4J2.PROPERTIES_rootLogger.level=INFO -LOG4J2.PROPERTIES_rootLogger.appenderRefs=stdout -LOG4J2.PROPERTIES_rootLogger.appenderRef.stdout.ref=STDOUT diff --git a/hadoop-ozone/dist/src/main/compose/ozoneblockade/docker-config b/hadoop-ozone/dist/src/main/compose/ozoneblockade/docker-config index af72465091c6f..4d5466c6ab9e5 100644 --- a/hadoop-ozone/dist/src/main/compose/ozoneblockade/docker-config +++ b/hadoop-ozone/dist/src/main/compose/ozoneblockade/docker-config @@ -37,51 +37,6 @@ OZONE-SITE.XML_hdds.scm.replication.event.timeout=10s OZONE-SITE.XML_dfs.ratis.server.failure.duration=35s HDFS-SITE.XML_rpc.metrics.quantile.enable=true HDFS-SITE.XML_rpc.metrics.percentiles.intervals=60,300 -LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout -LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender -LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout -LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR -LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR -LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.grpc.client.GrpcClientProtocolClient=WARN #Enable this variable to print out all hadoop rpc traffic to the stdout. See http://byteman.jboss.org/ to define your own instrumentation. 
#BYTEMAN_SCRIPT_URL=https://raw.githubusercontent.com/apache/hadoop/trunk/dev-support/byteman/hadooprpc.btm - -#LOG4J2.PROPERTIES_* are for Ozone Audit Logging -LOG4J2.PROPERTIES_monitorInterval=30 -LOG4J2.PROPERTIES_filter=read,write -LOG4J2.PROPERTIES_filter.read.type=MarkerFilter -LOG4J2.PROPERTIES_filter.read.marker=READ -LOG4J2.PROPERTIES_filter.read.onMatch=DENY -LOG4J2.PROPERTIES_filter.read.onMismatch=NEUTRAL -LOG4J2.PROPERTIES_filter.write.type=MarkerFilter -LOG4J2.PROPERTIES_filter.write.marker=WRITE -LOG4J2.PROPERTIES_filter.write.onMatch=NEUTRAL -LOG4J2.PROPERTIES_filter.write.onMismatch=NEUTRAL -LOG4J2.PROPERTIES_appenders=console, rolling -LOG4J2.PROPERTIES_appender.console.type=Console -LOG4J2.PROPERTIES_appender.console.name=STDOUT -LOG4J2.PROPERTIES_appender.console.layout.type=PatternLayout -LOG4J2.PROPERTIES_appender.console.layout.pattern=%d{DEFAULT} | %-5level | %c{1} | %msg | %throwable{3} %n -LOG4J2.PROPERTIES_appender.rolling.type=RollingFile -LOG4J2.PROPERTIES_appender.rolling.name=RollingFile -LOG4J2.PROPERTIES_appender.rolling.fileName=${sys:hadoop.log.dir}/om-audit-${hostName}.log -LOG4J2.PROPERTIES_appender.rolling.filePattern=${sys:hadoop.log.dir}/om-audit-${hostName}-%d{yyyy-MM-dd-HH-mm-ss}-%i.log.gz -LOG4J2.PROPERTIES_appender.rolling.layout.type=PatternLayout -LOG4J2.PROPERTIES_appender.rolling.layout.pattern=%d{DEFAULT} | %-5level | %c{1} | %msg | %throwable{3} %n -LOG4J2.PROPERTIES_appender.rolling.policies.type=Policies -LOG4J2.PROPERTIES_appender.rolling.policies.time.type=TimeBasedTriggeringPolicy -LOG4J2.PROPERTIES_appender.rolling.policies.time.interval=86400 -LOG4J2.PROPERTIES_appender.rolling.policies.size.type=SizeBasedTriggeringPolicy -LOG4J2.PROPERTIES_appender.rolling.policies.size.size=64MB -LOG4J2.PROPERTIES_loggers=audit -LOG4J2.PROPERTIES_logger.audit.type=AsyncLogger -LOG4J2.PROPERTIES_logger.audit.name=OMAudit -LOG4J2.PROPERTIES_logger.audit.level=INFO -LOG4J2.PROPERTIES_logger.audit.appenderRefs=rolling -LOG4J2.PROPERTIES_logger.audit.appenderRef.file.ref=RollingFile -LOG4J2.PROPERTIES_rootLogger.level=INFO -LOG4J2.PROPERTIES_rootLogger.appenderRefs=stdout -LOG4J2.PROPERTIES_rootLogger.appenderRef.stdout.ref=STDOUT diff --git a/hadoop-ozone/dist/src/main/compose/ozoneperf/docker-config b/hadoop-ozone/dist/src/main/compose/ozoneperf/docker-config index 538376ee90536..d2d345272a1f1 100644 --- a/hadoop-ozone/dist/src/main/compose/ozoneperf/docker-config +++ b/hadoop-ozone/dist/src/main/compose/ozoneperf/docker-config @@ -35,16 +35,3 @@ HDFS-SITE.XML_rpc.metrics.percentiles.intervals=60,300 JAEGER_SAMPLER_PARAM=1 JAEGER_SAMPLER_TYPE=const JAEGER_AGENT_HOST=jaeger - -LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout -LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender -LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout -LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR -LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR -LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.grpc.client.GrpcClientProtocolClient=WARN -LOG4J.PROPERTIES_log4j.logger.http.requests.s3gateway=INFO,s3gatewayrequestlog -LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog=org.apache.hadoop.http.HttpRequestLogAppender 
-LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.Filename=/tmp/jetty-s3gateway-yyyy_mm_dd.log -LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.RetainDays=3 diff --git a/hadoop-ozone/dist/src/main/compose/ozones3-haproxy/docker-config b/hadoop-ozone/dist/src/main/compose/ozones3-haproxy/docker-config index 4ffe9a6674c7f..d3efa2e884fa3 100644 --- a/hadoop-ozone/dist/src/main/compose/ozones3-haproxy/docker-config +++ b/hadoop-ozone/dist/src/main/compose/ozones3-haproxy/docker-config @@ -26,54 +26,6 @@ OZONE-SITE.XML_hdds.datanode.dir=/data/hdds HDFS-SITE.XML_rpc.metrics.quantile.enable=true HDFS-SITE.XML_rpc.metrics.percentiles.intervals=60,300 -LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout -LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender -LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout -LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR -LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR -LOG4J.PROPERTIES_log4j.logger.http.requests.s3gateway=INFO,s3gatewayrequestlog -LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog=org.apache.hadoop.http.HttpRequestLogAppender -LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.Filename=/tmp/jetty-s3gateway-yyyy_mm_dd.log -LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.RetainDays=3 #Enable this variable to print out all hadoop rpc traffic to the stdout. See http://byteman.jboss.org/ to define your own instrumentation. #BYTEMAN_SCRIPT_URL=https://raw.githubusercontent.com/apache/hadoop/trunk/dev-support/byteman/hadooprpc.btm - -#LOG4J2.PROPERTIES_* are for Ozone Audit Logging -LOG4J2.PROPERTIES_monitorInterval=30 -LOG4J2.PROPERTIES_filter=read,write -LOG4J2.PROPERTIES_filter.read.type=MarkerFilter -LOG4J2.PROPERTIES_filter.read.marker=READ -LOG4J2.PROPERTIES_filter.read.onMatch=DENY -LOG4J2.PROPERTIES_filter.read.onMismatch=NEUTRAL -LOG4J2.PROPERTIES_filter.write.type=MarkerFilter -LOG4J2.PROPERTIES_filter.write.marker=WRITE -LOG4J2.PROPERTIES_filter.write.onMatch=NEUTRAL -LOG4J2.PROPERTIES_filter.write.onMismatch=NEUTRAL -LOG4J2.PROPERTIES_appenders=console, rolling -LOG4J2.PROPERTIES_appender.console.type=Console -LOG4J2.PROPERTIES_appender.console.name=STDOUT -LOG4J2.PROPERTIES_appender.console.layout.type=PatternLayout -LOG4J2.PROPERTIES_appender.console.layout.pattern=%d{DEFAULT} | %-5level | %c{1} | %msg | %throwable{3} %n -LOG4J2.PROPERTIES_appender.rolling.type=RollingFile -LOG4J2.PROPERTIES_appender.rolling.name=RollingFile -LOG4J2.PROPERTIES_appender.rolling.fileName=${sys:hadoop.log.dir}/om-audit-${hostName}.log -LOG4J2.PROPERTIES_appender.rolling.filePattern=${sys:hadoop.log.dir}/om-audit-${hostName}-%d{yyyy-MM-dd-HH-mm-ss}-%i.log.gz -LOG4J2.PROPERTIES_appender.rolling.layout.type=PatternLayout -LOG4J2.PROPERTIES_appender.rolling.layout.pattern=%d{DEFAULT} | %-5level | %c{1} | %msg | %throwable{3} %n -LOG4J2.PROPERTIES_appender.rolling.policies.type=Policies -LOG4J2.PROPERTIES_appender.rolling.policies.time.type=TimeBasedTriggeringPolicy -LOG4J2.PROPERTIES_appender.rolling.policies.time.interval=86400 -LOG4J2.PROPERTIES_appender.rolling.policies.size.type=SizeBasedTriggeringPolicy -LOG4J2.PROPERTIES_appender.rolling.policies.size.size=64MB -LOG4J2.PROPERTIES_loggers=audit -LOG4J2.PROPERTIES_logger.audit.type=AsyncLogger 
-LOG4J2.PROPERTIES_logger.audit.name=OMAudit -LOG4J2.PROPERTIES_logger.audit.level=INFO -LOG4J2.PROPERTIES_logger.audit.appenderRefs=rolling -LOG4J2.PROPERTIES_logger.audit.appenderRef.file.ref=RollingFile -LOG4J2.PROPERTIES_rootLogger.level=INFO -LOG4J2.PROPERTIES_rootLogger.appenderRefs=stdout -LOG4J2.PROPERTIES_rootLogger.appenderRef.stdout.ref=STDOUT diff --git a/hadoop-ozone/dist/src/main/compose/ozones3/docker-config b/hadoop-ozone/dist/src/main/compose/ozones3/docker-config index 4ffe9a6674c7f..d3efa2e884fa3 100644 --- a/hadoop-ozone/dist/src/main/compose/ozones3/docker-config +++ b/hadoop-ozone/dist/src/main/compose/ozones3/docker-config @@ -26,54 +26,6 @@ OZONE-SITE.XML_hdds.datanode.dir=/data/hdds HDFS-SITE.XML_rpc.metrics.quantile.enable=true HDFS-SITE.XML_rpc.metrics.percentiles.intervals=60,300 -LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout -LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender -LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout -LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR -LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR -LOG4J.PROPERTIES_log4j.logger.http.requests.s3gateway=INFO,s3gatewayrequestlog -LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog=org.apache.hadoop.http.HttpRequestLogAppender -LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.Filename=/tmp/jetty-s3gateway-yyyy_mm_dd.log -LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.RetainDays=3 #Enable this variable to print out all hadoop rpc traffic to the stdout. See http://byteman.jboss.org/ to define your own instrumentation. 
#BYTEMAN_SCRIPT_URL=https://raw.githubusercontent.com/apache/hadoop/trunk/dev-support/byteman/hadooprpc.btm - -#LOG4J2.PROPERTIES_* are for Ozone Audit Logging -LOG4J2.PROPERTIES_monitorInterval=30 -LOG4J2.PROPERTIES_filter=read,write -LOG4J2.PROPERTIES_filter.read.type=MarkerFilter -LOG4J2.PROPERTIES_filter.read.marker=READ -LOG4J2.PROPERTIES_filter.read.onMatch=DENY -LOG4J2.PROPERTIES_filter.read.onMismatch=NEUTRAL -LOG4J2.PROPERTIES_filter.write.type=MarkerFilter -LOG4J2.PROPERTIES_filter.write.marker=WRITE -LOG4J2.PROPERTIES_filter.write.onMatch=NEUTRAL -LOG4J2.PROPERTIES_filter.write.onMismatch=NEUTRAL -LOG4J2.PROPERTIES_appenders=console, rolling -LOG4J2.PROPERTIES_appender.console.type=Console -LOG4J2.PROPERTIES_appender.console.name=STDOUT -LOG4J2.PROPERTIES_appender.console.layout.type=PatternLayout -LOG4J2.PROPERTIES_appender.console.layout.pattern=%d{DEFAULT} | %-5level | %c{1} | %msg | %throwable{3} %n -LOG4J2.PROPERTIES_appender.rolling.type=RollingFile -LOG4J2.PROPERTIES_appender.rolling.name=RollingFile -LOG4J2.PROPERTIES_appender.rolling.fileName=${sys:hadoop.log.dir}/om-audit-${hostName}.log -LOG4J2.PROPERTIES_appender.rolling.filePattern=${sys:hadoop.log.dir}/om-audit-${hostName}-%d{yyyy-MM-dd-HH-mm-ss}-%i.log.gz -LOG4J2.PROPERTIES_appender.rolling.layout.type=PatternLayout -LOG4J2.PROPERTIES_appender.rolling.layout.pattern=%d{DEFAULT} | %-5level | %c{1} | %msg | %throwable{3} %n -LOG4J2.PROPERTIES_appender.rolling.policies.type=Policies -LOG4J2.PROPERTIES_appender.rolling.policies.time.type=TimeBasedTriggeringPolicy -LOG4J2.PROPERTIES_appender.rolling.policies.time.interval=86400 -LOG4J2.PROPERTIES_appender.rolling.policies.size.type=SizeBasedTriggeringPolicy -LOG4J2.PROPERTIES_appender.rolling.policies.size.size=64MB -LOG4J2.PROPERTIES_loggers=audit -LOG4J2.PROPERTIES_logger.audit.type=AsyncLogger -LOG4J2.PROPERTIES_logger.audit.name=OMAudit -LOG4J2.PROPERTIES_logger.audit.level=INFO -LOG4J2.PROPERTIES_logger.audit.appenderRefs=rolling -LOG4J2.PROPERTIES_logger.audit.appenderRef.file.ref=RollingFile -LOG4J2.PROPERTIES_rootLogger.level=INFO -LOG4J2.PROPERTIES_rootLogger.appenderRefs=stdout -LOG4J2.PROPERTIES_rootLogger.appenderRef.stdout.ref=STDOUT diff --git a/hadoop-ozone/dist/src/main/compose/ozonescripts/docker-config b/hadoop-ozone/dist/src/main/compose/ozonescripts/docker-config index 4e67a044b0119..fe713e0dde21e 100644 --- a/hadoop-ozone/dist/src/main/compose/ozonescripts/docker-config +++ b/hadoop-ozone/dist/src/main/compose/ozonescripts/docker-config @@ -31,9 +31,4 @@ HDFS-SITE.XML_dfs.namenode.rpc-address=namenode:9000 HDFS-SITE.XML_dfs.namenode.name.dir=/data/namenode HDFS-SITE.XML_rpc.metrics.quantile.enable=true HDFS-SITE.XML_rpc.metrics.percentiles.intervals=60,300 -HDFS-SITE.XML_dfs.datanode.plugins=org.apache.hadoop.ozone.HddsDatanodeService -LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR, stdout -LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender -LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout -LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n +HDFS-SITE.XML_dfs.datanode.plugins=org.apache.hadoop.ozone.HddsDatanodeService \ No newline at end of file diff --git a/hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-config b/hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-config index be9dc1e3b51c2..646fd021ce7fd 100644 --- 
a/hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-config +++ b/hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-config @@ -119,55 +119,9 @@ CAPACITY-SCHEDULER.XML_yarn.scheduler.capacity.node-locality-delay=40 CAPACITY-SCHEDULER.XML_yarn.scheduler.capacity.queue-mappings= CAPACITY-SCHEDULER.XML_yarn.scheduler.capacity.queue-mappings-override.enable=false -LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout -LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender -LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout -LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR -LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop=INFO -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR - #Enable this variable to print out all hadoop rpc traffic to the stdout. See http://byteman.jboss.org/ to define your own instrumentation. #BYTEMAN_SCRIPT_URL=https://raw.githubusercontent.com/apache/hadoop/trunk/dev-support/byteman/hadooprpc.btm -#LOG4J2.PROPERTIES_* are for Ozone Audit Logging -LOG4J2.PROPERTIES_monitorInterval=30 -LOG4J2.PROPERTIES_filter=read,write -LOG4J2.PROPERTIES_filter.read.type=MarkerFilter -LOG4J2.PROPERTIES_filter.read.marker=READ -LOG4J2.PROPERTIES_filter.read.onMatch=DENY -LOG4J2.PROPERTIES_filter.read.onMismatch=NEUTRAL -LOG4J2.PROPERTIES_filter.write.type=MarkerFilter -LOG4J2.PROPERTIES_filter.write.marker=WRITE -LOG4J2.PROPERTIES_filter.write.onMatch=NEUTRAL -LOG4J2.PROPERTIES_filter.write.onMismatch=NEUTRAL -LOG4J2.PROPERTIES_appenders=console, rolling -LOG4J2.PROPERTIES_appender.console.type=Console -LOG4J2.PROPERTIES_appender.console.name=STDOUT -LOG4J2.PROPERTIES_appender.console.layout.type=PatternLayout -LOG4J2.PROPERTIES_appender.console.layout.pattern=%d{DEFAULT} | %-5level | %c{1} | %msg | %throwable{3} %n -LOG4J2.PROPERTIES_appender.rolling.type=RollingFile -LOG4J2.PROPERTIES_appender.rolling.name=RollingFile -LOG4J2.PROPERTIES_appender.rolling.fileName=${sys:hadoop.log.dir}/om-audit-${hostName}.log -LOG4J2.PROPERTIES_appender.rolling.filePattern=${sys:hadoop.log.dir}/om-audit-${hostName}-%d{yyyy-MM-dd-HH-mm-ss}-%i.log.gz -LOG4J2.PROPERTIES_appender.rolling.layout.type=PatternLayout -LOG4J2.PROPERTIES_appender.rolling.layout.pattern=%d{DEFAULT} | %-5level | %c{1} | %msg | %throwable{3} %n -LOG4J2.PROPERTIES_appender.rolling.policies.type=Policies -LOG4J2.PROPERTIES_appender.rolling.policies.time.type=TimeBasedTriggeringPolicy -LOG4J2.PROPERTIES_appender.rolling.policies.time.interval=86400 -LOG4J2.PROPERTIES_appender.rolling.policies.size.type=SizeBasedTriggeringPolicy -LOG4J2.PROPERTIES_appender.rolling.policies.size.size=64MB -LOG4J2.PROPERTIES_loggers=audit -LOG4J2.PROPERTIES_logger.audit.type=AsyncLogger -LOG4J2.PROPERTIES_logger.audit.name=OMAudit -LOG4J2.PROPERTIES_logger.audit.level=INFO -LOG4J2.PROPERTIES_logger.audit.appenderRefs=rolling -LOG4J2.PROPERTIES_logger.audit.appenderRef.file.ref=RollingFile -LOG4J2.PROPERTIES_rootLogger.level=INFO -LOG4J2.PROPERTIES_rootLogger.appenderRefs=stdout -LOG4J2.PROPERTIES_rootLogger.appenderRef.stdout.ref=STDOUT - OZONE_DATANODE_SECURE_USER=root KEYTAB_DIR=/etc/security/keytabs KERBEROS_KEYTABS=dn om scm HTTP testuser s3g rm nm yarn jhs hadoop spark diff --git a/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-config 
b/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-config index 60d1fcf6ebe39..44af35ee85d71 100644 --- a/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-config +++ b/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-config @@ -65,14 +65,6 @@ CORE-SITE.XML_hadoop.http.authentication.kerberos.principal=HTTP/_HOST@EXAMPLE.C CORE-SITE.XML_hadoop.http.authentication.kerberos.keytab=/etc/security/keytabs/HTTP.keytab CORE-SITE.XML_hadoop.http.filter.initializers=org.apache.hadoop.security.AuthenticationFilterInitializer -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.authentication.server -.AuthenticationFilter=DEBUG -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.authentication.server -.KerberosAuthenticationHandler=TRACE -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.http.HttpServer2=TRACE - - - CORE-SITE.XML_hadoop.security.authorization=true HADOOP-POLICY.XML_ozone.om.security.client.protocol.acl=* HADOOP-POLICY.XML_hdds.security.client.datanode.container.protocol.acl=* @@ -82,55 +74,10 @@ HADOOP-POLICY.XML_hdds.security.client.scm.certificate.protocol.acl=* HDFS-SITE.XML_rpc.metrics.quantile.enable=true HDFS-SITE.XML_rpc.metrics.percentiles.intervals=60,300 -LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout -LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender -LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout -LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR -LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop=INFO -LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR #Enable this variable to print out all hadoop rpc traffic to the stdout. See http://byteman.jboss.org/ to define your own instrumentation. 
#BYTEMAN_SCRIPT_URL=https://raw.githubusercontent.com/apache/hadoop/trunk/dev-support/byteman/hadooprpc.btm -#LOG4J2.PROPERTIES_* are for Ozone Audit Logging -LOG4J2.PROPERTIES_monitorInterval=30 -LOG4J2.PROPERTIES_filter=read,write -LOG4J2.PROPERTIES_filter.read.type=MarkerFilter -LOG4J2.PROPERTIES_filter.read.marker=READ -LOG4J2.PROPERTIES_filter.read.onMatch=DENY -LOG4J2.PROPERTIES_filter.read.onMismatch=NEUTRAL -LOG4J2.PROPERTIES_filter.write.type=MarkerFilter -LOG4J2.PROPERTIES_filter.write.marker=WRITE -LOG4J2.PROPERTIES_filter.write.onMatch=NEUTRAL -LOG4J2.PROPERTIES_filter.write.onMismatch=NEUTRAL -LOG4J2.PROPERTIES_appenders=console, rolling -LOG4J2.PROPERTIES_appender.console.type=Console -LOG4J2.PROPERTIES_appender.console.name=STDOUT -LOG4J2.PROPERTIES_appender.console.layout.type=PatternLayout -LOG4J2.PROPERTIES_appender.console.layout.pattern=%d{DEFAULT} | %-5level | %c{1} | %msg | %throwable{3} %n -LOG4J2.PROPERTIES_appender.rolling.type=RollingFile -LOG4J2.PROPERTIES_appender.rolling.name=RollingFile -LOG4J2.PROPERTIES_appender.rolling.fileName=${sys:hadoop.log.dir}/om-audit-${hostName}.log -LOG4J2.PROPERTIES_appender.rolling.filePattern=${sys:hadoop.log.dir}/om-audit-${hostName}-%d{yyyy-MM-dd-HH-mm-ss}-%i.log.gz -LOG4J2.PROPERTIES_appender.rolling.layout.type=PatternLayout -LOG4J2.PROPERTIES_appender.rolling.layout.pattern=%d{DEFAULT} | %-5level | %c{1} | %msg | %throwable{3} %n -LOG4J2.PROPERTIES_appender.rolling.policies.type=Policies -LOG4J2.PROPERTIES_appender.rolling.policies.time.type=TimeBasedTriggeringPolicy -LOG4J2.PROPERTIES_appender.rolling.policies.time.interval=86400 -LOG4J2.PROPERTIES_appender.rolling.policies.size.type=SizeBasedTriggeringPolicy -LOG4J2.PROPERTIES_appender.rolling.policies.size.size=64MB -LOG4J2.PROPERTIES_loggers=audit -LOG4J2.PROPERTIES_logger.audit.type=AsyncLogger -LOG4J2.PROPERTIES_logger.audit.name=OMAudit -LOG4J2.PROPERTIES_logger.audit.level=INFO -LOG4J2.PROPERTIES_logger.audit.appenderRefs=rolling -LOG4J2.PROPERTIES_logger.audit.appenderRef.file.ref=RollingFile -LOG4J2.PROPERTIES_rootLogger.level=INFO -LOG4J2.PROPERTIES_rootLogger.appenderRefs=stdout -LOG4J2.PROPERTIES_rootLogger.appenderRef.stdout.ref=STDOUT - OZONE_DATANODE_SECURE_USER=root SECURITY_ENABLED=true KEYTAB_DIR=/etc/security/keytabs diff --git a/hadoop-ozone/dist/src/main/compose/ozonesecure/test.sh b/hadoop-ozone/dist/src/main/compose/ozonesecure/test.sh index 01106b861545d..f32846386a9f7 100755 --- a/hadoop-ozone/dist/src/main/compose/ozonesecure/test.sh +++ b/hadoop-ozone/dist/src/main/compose/ozonesecure/test.sh @@ -35,6 +35,8 @@ execute_robot_test scm ozonefs/ozonefs.robot execute_robot_test s3g s3 +execute_robot_test scm scmcli + stop_docker_env generate_report diff --git a/hadoop-ozone/dist/src/main/smoketest/scmcli/pipeline.robot b/hadoop-ozone/dist/src/main/smoketest/scmcli/pipeline.robot new file mode 100644 index 0000000000000..6a6f0b0eb782a --- /dev/null +++ b/hadoop-ozone/dist/src/main/smoketest/scmcli/pipeline.robot @@ -0,0 +1,28 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +*** Settings *** +Documentation Smoketest ozone cluster startup +Library OperatingSystem +Library BuiltIn +Resource ../commonlib.robot + +*** Variables *** + + +*** Test Cases *** +Run list pipeline + ${output} = Execute ozone scmcli pipeline list + Should contain ${output} Type:RATIS, Factor:ONE, State:OPEN \ No newline at end of file diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java index 540445642f555..d64eae4e6e4c8 100644 --- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java +++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java @@ -570,8 +570,10 @@ public boolean checkAccess(OzoneObj ozObject, RequestContext context) } boolean hasAccess = OzoneAclUtil.checkAclRights(bucketInfo.getAcls(), context); - LOG.debug("user:{} has access rights for bucket:{} :{} ", - context.getClientUgi(), ozObject.getBucketName(), hasAccess); + if (LOG.isDebugEnabled()) { + LOG.debug("user:{} has access rights for bucket:{} :{} ", + context.getClientUgi(), ozObject.getBucketName(), hasAccess); + } return hasAccess; } catch (IOException ex) { if(ex instanceof OMException) { diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java index f3ae9b1cd73c9..20b7fdfec534f 100644 --- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java +++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java @@ -1661,8 +1661,10 @@ public boolean checkAccess(OzoneObj ozObject, RequestContext context) if (keyInfo == null) { // the key does not exist, but it is a parent "dir" of some key // let access be determined based on volume/bucket/prefix ACL - LOG.debug("key:{} is non-existent parent, permit access to user:{}", - keyName, context.getClientUgi()); + if (LOG.isDebugEnabled()) { + LOG.debug("key:{} is non-existent parent, permit access to user:{}", + keyName, context.getClientUgi()); + } return true; } } catch (OMException e) { @@ -1678,8 +1680,10 @@ public boolean checkAccess(OzoneObj ozObject, RequestContext context) boolean hasAccess = OzoneAclUtil.checkAclRight( keyInfo.getAcls(), context); - LOG.debug("user:{} has access rights for key:{} :{} ", - context.getClientUgi(), ozObject.getKeyName(), hasAccess); + if (LOG.isDebugEnabled()) { + LOG.debug("user:{} has access rights for key:{} :{} ", + context.getClientUgi(), ozObject.getKeyName(), hasAccess); + } return hasAccess; } catch (IOException ex) { if(ex instanceof OMException) { @@ -1766,10 +1770,11 @@ public OzoneFileStatus getFileStatus(OmKeyArgs args) throws IOException { if (keys.iterator().hasNext()) { return new OzoneFileStatus(keyName); } - - LOG.debug("Unable to get file status for the key: volume:" + volumeName + - " bucket:" + bucketName + " key:" + keyName + " with error no " + - "such file exists:"); + if 
(LOG.isDebugEnabled()) { + LOG.debug("Unable to get file status for the key: volume: {}, bucket:" + + " {}, key: {}, with error: No such file exists.", volumeName, + bucketName, keyName); + } throw new OMException("Unable to get file status: volume: " + volumeName + " bucket: " + bucketName + " key: " + keyName, FILE_NOT_FOUND); @@ -2132,8 +2137,10 @@ private void sortDatanodeInPipeline(OmKeyInfo keyInfo, String clientMachine) { List sortedNodes = scmClient.getBlockClient() .sortDatanodes(nodeList, clientMachine); k.getPipeline().setNodesInOrder(sortedNodes); - LOG.debug("Sort datanodes {} for client {}, return {}", nodes, - clientMachine, sortedNodes); + if (LOG.isDebugEnabled()) { + LOG.debug("Sort datanodes {} for client {}, return {}", nodes, + clientMachine, sortedNodes); + } } catch (IOException e) { LOG.warn("Unable to sort datanodes based on distance to " + "client, volume=" + keyInfo.getVolumeName() + diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java index 6c085911e11b3..95f21ae0ca332 100644 --- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java +++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java @@ -23,6 +23,9 @@ import java.util.Iterator; import java.util.List; import java.util.Map; +import java.util.Set; +import java.util.TreeMap; +import java.util.TreeSet; import java.util.stream.Collectors; import org.apache.hadoop.hdds.client.BlockID; @@ -619,23 +622,31 @@ public List listBuckets(final String volumeName, } int currentCount = 0; - try (TableIterator> - bucketIter = bucketTable.iterator()) { - KeyValue kv = bucketIter.seek(startKey); - while (currentCount < maxNumOfBuckets && bucketIter.hasNext()) { - kv = bucketIter.next(); - // Skip the Start Bucket if needed. - if (kv != null && skipStartKey && - kv.getKey().equals(startKey)) { + + // For Bucket it is full cache, so we can just iterate in-memory table + // cache. + Iterator, CacheValue>> iterator = + bucketTable.cacheIterator(); + + + while (currentCount < maxNumOfBuckets && iterator.hasNext()) { + Map.Entry, CacheValue> entry = + iterator.next(); + + String key = entry.getKey().getCacheKey(); + OmBucketInfo omBucketInfo = entry.getValue().getCacheValue(); + // Making sure that entry in cache is not for delete bucket request. + + if (omBucketInfo != null) { + if (key.equals(startKey) && skipStartKey) { continue; } - if (kv != null && kv.getKey().startsWith(seekPrefix)) { - result.add(kv.getValue()); + + // We should return only the keys, whose keys match with prefix and + // the keys after the startBucket. + if (key.startsWith(seekPrefix) && key.compareTo(startKey) > 0) { + result.add(omBucketInfo); currentCount++; - } else { - // The SeekPrefix does not match any more, we can break out of the - // loop. 
- break; } } } @@ -645,7 +656,12 @@ public List listBuckets(final String volumeName, @Override public List listKeys(String volumeName, String bucketName, String startKey, String keyPrefix, int maxKeys) throws IOException { + List result = new ArrayList<>(); + if (maxKeys <= 0) { + return result; + } + if (Strings.isNullOrEmpty(volumeName)) { throw new OMException("Volume name is required.", ResultCodes.VOLUME_NOT_FOUND); @@ -680,19 +696,56 @@ public List listKeys(String volumeName, String bucketName, seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX); } int currentCount = 0; - try (TableIterator> keyIter = - getKeyTable() - .iterator()) { - KeyValue kv = keyIter.seek(seekKey); - while (currentCount < maxKeys && keyIter.hasNext()) { - kv = keyIter.next(); - // Skip the Start key if needed. - if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) { - continue; + + + TreeMap cacheKeyMap = new TreeMap<>(); + Set deletedKeySet = new TreeSet<>(); + Iterator, CacheValue>> iterator = + keyTable.cacheIterator(); + + //TODO: We can avoid this iteration if table cache has stored entries in + // treemap. Currently HashMap is used in Cache. HashMap get operation is an + // constant time operation, where as for treeMap get is log(n). + // So if we move to treemap, the get operation will be affected. As get + // is frequent operation on table. So, for now in list we iterate cache map + // and construct treeMap which match with keyPrefix and are greater than or + // equal to startKey. Later we can revisit this, if list operation + // is becoming slow. + while (iterator.hasNext()) { + Map.Entry< CacheKey, CacheValue> entry = + iterator.next(); + + String key = entry.getKey().getCacheKey(); + OmKeyInfo omKeyInfo = entry.getValue().getCacheValue(); + // Making sure that entry in cache is not for delete key request. + + if (omKeyInfo != null) { + if (key.startsWith(seekPrefix) && key.compareTo(seekKey) >= 0) { + cacheKeyMap.put(key, omKeyInfo); } + } else { + deletedKeySet.add(key); + } + } + + // Get maxKeys from DB if it has. + + try (TableIterator> + keyIter = getKeyTable().iterator()) { + KeyValue< String, OmKeyInfo > kv; + keyIter.seek(seekKey); + // we need to iterate maxKeys + 1 here because if skipStartKey is true, + // we should skip that entry and return the result. + while (currentCount < maxKeys + 1 && keyIter.hasNext()) { + kv = keyIter.next(); if (kv != null && kv.getKey().startsWith(seekPrefix)) { - result.add(kv.getValue()); - currentCount++; + + // Entry should not be marked for delete, consider only those + // entries. + if(!deletedKeySet.contains(kv.getKey())) { + cacheKeyMap.put(kv.getKey(), kv.getValue()); + currentCount++; + } } else { // The SeekPrefix does not match any more, we can break out of the // loop. @@ -700,6 +753,28 @@ public List listKeys(String volumeName, String bucketName, } } } + + // Finally DB entries and cache entries are merged, then return the count + // of maxKeys from the sorted map. + currentCount = 0; + + for (Map.Entry cacheKey : cacheKeyMap.entrySet()) { + if (cacheKey.getKey().equals(seekKey) && skipStartKey) { + continue; + } + + result.add(cacheKey.getValue()); + currentCount++; + + if (currentCount == maxKeys) { + break; + } + } + + // Clear map and set. 
+ cacheKeyMap.clear(); + deletedKeySet.clear(); + return result; } diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OpenKeyCleanupService.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OpenKeyCleanupService.java index fa4be651dae63..79bc39f498464 100644 --- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OpenKeyCleanupService.java +++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OpenKeyCleanupService.java @@ -88,7 +88,9 @@ public BackgroundTaskResult call() throws Exception { if (result.isSuccess()) { try { keyManager.deleteExpiredOpenKey(result.getObjectKey()); - LOG.debug("Key {} deleted from OM DB", result.getObjectKey()); + if (LOG.isDebugEnabled()) { + LOG.debug("Key {} deleted from OM DB", result.getObjectKey()); + } deletedSize += 1; } catch (IOException e) { LOG.warn("Failed to delete hanging-open key {}", diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java index a6503d73140a3..0cd087eee2364 100644 --- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java +++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java @@ -734,10 +734,12 @@ private static void loginOMUser(OzoneConfiguration conf) if (SecurityUtil.getAuthenticationMethod(conf).equals( AuthenticationMethod.KERBEROS)) { - LOG.debug("Ozone security is enabled. Attempting login for OM user. " - + "Principal: {},keytab: {}", conf.get( - OZONE_OM_KERBEROS_PRINCIPAL_KEY), - conf.get(OZONE_OM_KERBEROS_KEYTAB_FILE_KEY)); + if (LOG.isDebugEnabled()) { + LOG.debug("Ozone security is enabled. Attempting login for OM user. " + + "Principal: {}, keytab: {}", conf.get( + OZONE_OM_KERBEROS_PRINCIPAL_KEY), + conf.get(OZONE_OM_KERBEROS_KEYTAB_FILE_KEY)); + } UserGroupInformation.setConfiguration(conf); diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/PrefixManagerImpl.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/PrefixManagerImpl.java index 0eafff9dcbd93..c89b32ee7347e 100644 --- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/PrefixManagerImpl.java +++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/PrefixManagerImpl.java @@ -139,7 +139,10 @@ public boolean removeAcl(OzoneObj obj, OzoneAcl acl) throws IOException { OMPrefixAclOpResult omPrefixAclOpResult = removeAcl(obj, acl, prefixInfo); if (!omPrefixAclOpResult.isOperationsResult()) { - LOG.debug("acl {} does not exist for prefix path {} ", acl, prefixPath); + if (LOG.isDebugEnabled()) { + LOG.debug("acl {} does not exist for prefix path {} ", + acl, prefixPath); + } return false; } @@ -236,8 +239,10 @@ public boolean checkAccess(OzoneObj ozObject, RequestContext context) if (lastNode != null && lastNode.getValue() != null) { boolean hasAccess = OzoneAclUtil.checkAclRights(lastNode.getValue(). 
getAcls(), context); - LOG.debug("user:{} has access rights for ozObj:{} ::{} ", - context.getClientUgi(), ozObject, hasAccess); + if (LOG.isDebugEnabled()) { + LOG.debug("user:{} has access rights for ozObj:{} ::{} ", + context.getClientUgi(), ozObject, hasAccess); + } return hasAccess; } else { return true; diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java index 04cf09e5ef9ee..7375eb89b26d0 100644 --- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java +++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java @@ -108,7 +108,7 @@ private UserVolumeInfo delVolumeFromOwnerList(String volume, String owner) if (volumeList != null) { prevVolList.addAll(volumeList.getVolumeNamesList()); } else { - LOG.debug("volume:{} not found for user:{}"); + LOG.debug("volume:{} not found for user:{}", volume, owner); throw new OMException(ResultCodes.USER_NOT_FOUND); } @@ -503,7 +503,9 @@ public boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException { try { volumeArgs.addAcl(acl); } catch (OMException ex) { - LOG.debug("Add acl failed.", ex); + if (LOG.isDebugEnabled()) { + LOG.debug("Add acl failed.", ex); + } return false; } metadataManager.getVolumeTable().put(dbVolumeKey, volumeArgs); @@ -553,7 +555,9 @@ public boolean removeAcl(OzoneObj obj, OzoneAcl acl) throws IOException { try { volumeArgs.removeAcl(acl); } catch (OMException ex) { - LOG.debug("Remove acl failed.", ex); + if (LOG.isDebugEnabled()) { + LOG.debug("Remove acl failed.", ex); + } return false; } metadataManager.getVolumeTable().put(dbVolumeKey, volumeArgs); @@ -685,8 +689,10 @@ public boolean checkAccess(OzoneObj ozObject, RequestContext context) Preconditions.checkState(volume.equals(volumeArgs.getVolume())); boolean hasAccess = volumeArgs.getAclMap().hasAccess( context.getAclRights(), context.getClientUgi()); - LOG.debug("user:{} has access rights for volume:{} :{} ", - context.getClientUgi(), ozObject.getVolumeName(), hasAccess); + if (LOG.isDebugEnabled()) { + LOG.debug("user:{} has access rights for volume:{} :{} ", + context.getClientUgi(), ozObject.getVolumeName(), hasAccess); + } return hasAccess; } catch (IOException ex) { LOG.error("Check access operation failed for volume:{}", volume, ex); diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java index b4f5b8d98fc94..e5cadffc40090 100644 --- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java +++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java @@ -148,9 +148,11 @@ private void flushTransactions() { flushedTransactionCount.addAndGet(flushedTransactionsSize); flushIterations.incrementAndGet(); - LOG.debug("Sync Iteration {} flushed transactions in this " + - "iteration{}", flushIterations.get(), - flushedTransactionsSize); + if (LOG.isDebugEnabled()) { + LOG.debug("Sync Iteration {} flushed transactions in this " + + "iteration{}", flushIterations.get(), + flushedTransactionsSize); + } long lastRatisTransactionIndex = readyBuffer.stream().map(DoubleBufferEntry::getTrxLogIndex) diff --git 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisClient.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisClient.java index 2cbef50cb0492..6f97f56241b01 100644 --- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisClient.java +++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisClient.java @@ -99,8 +99,10 @@ public static OzoneManagerRatisClient newOzoneManagerRatisClient( } public void connect() { - LOG.debug("Connecting to OM Ratis Server GroupId:{} OM:{}", - raftGroup.getGroupId().getUuid().toString(), omNodeID); + if (LOG.isDebugEnabled()) { + LOG.debug("Connecting to OM Ratis Server GroupId:{} OM:{}", + raftGroup.getGroupId().getUuid().toString(), omNodeID); + } // TODO : XceiverClient ratis should pass the config value of // maxOutstandingRequests so as to set the upper bound on max no of async @@ -147,8 +149,7 @@ private OzoneManagerProtocolProtos.Status parseErrorStatus(String message) { if (message.contains(STATUS_CODE)) { String errorCode = message.substring(message.indexOf(STATUS_CODE) + STATUS_CODE.length()); - LOG.debug("Parsing error message for error code " + - errorCode); + LOG.debug("Parsing error message for error code {}", errorCode); return OzoneManagerProtocolProtos.Status.valueOf(errorCode.trim()); } else { return OzoneManagerProtocolProtos.Status.INTERNAL_ERROR; @@ -166,25 +167,27 @@ private CompletableFuture sendCommandAsync(OMRequest request) { CompletableFuture raftClientReply = sendRequestAsync(request); - return raftClientReply.whenComplete((reply, e) -> LOG.debug( - "received reply {} for request: cmdType={} traceID={} " + - "exception: {}", reply, request.getCmdType(), - request.getTraceID(), e)) - .thenApply(reply -> { - try { - Preconditions.checkNotNull(reply); - if (!reply.isSuccess()) { - RaftException exception = reply.getException(); - Preconditions.checkNotNull(exception, "Raft reply failure " + - "but no exception propagated."); - throw new CompletionException(exception); - } - return OMRatisHelper.getOMResponseFromRaftClientReply(reply); - - } catch (InvalidProtocolBufferException e) { - throw new CompletionException(e); - } - }); + return raftClientReply.whenComplete((reply, e) -> { + if (LOG.isDebugEnabled()) { + LOG.debug("received reply {} for request: cmdType={} traceID={} " + + "exception: {}", reply, request.getCmdType(), + request.getTraceID(), e); + } + }).thenApply(reply -> { + try { + Preconditions.checkNotNull(reply); + if (!reply.isSuccess()) { + RaftException exception = reply.getException(); + Preconditions.checkNotNull(exception, "Raft reply failure " + + "but no exception propagated."); + throw new CompletionException(exception); + } + return OMRatisHelper.getOMResponseFromRaftClientReply(reply); + + } catch (InvalidProtocolBufferException e) { + throw new CompletionException(e); + } + }); } /** @@ -198,7 +201,9 @@ private CompletableFuture sendRequestAsync( OMRequest request) { boolean isReadOnlyRequest = OmUtils.isReadOnly(request); ByteString byteString = OMRatisHelper.convertRequestToByteString(request); - LOG.debug("sendOMRequestAsync {} {}", isReadOnlyRequest, request); + if (LOG.isDebugEnabled()) { + LOG.debug("sendOMRequestAsync {} {}", isReadOnlyRequest, request); + } return isReadOnlyRequest ? 
raftClient.sendReadOnlyAsync(() -> byteString) : raftClient.sendAsync(() -> byteString); } diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java index 69a7ae93a81aa..7cab9d2738ab6 100644 --- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java +++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java @@ -169,8 +169,10 @@ private OMResponse processReply(OMRequest omRequest, RaftClientReply reply) omResponse.setMessage(stateMachineException.getCause().getMessage()); omResponse.setStatus(parseErrorStatus( stateMachineException.getCause().getMessage())); - LOG.debug("Error while executing ratis request. " + - "stateMachineException: ", stateMachineException); + if (LOG.isDebugEnabled()) { + LOG.debug("Error while executing ratis request. " + + "stateMachineException: ", stateMachineException); + } return omResponse.build(); } diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketSetAclRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketSetAclRequest.java index 46db75df17cf7..b97de955a51ad 100644 --- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketSetAclRequest.java +++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketSetAclRequest.java @@ -103,7 +103,9 @@ OMClientResponse onFailure(OMResponse.Builder omResponse, void onComplete(boolean operationResult, IOException exception, OMMetrics omMetrics) { if (operationResult) { - LOG.debug("Set acl: {} for path: {} success!", getAcls(), getPath()); + if (LOG.isDebugEnabled()) { + LOG.debug("Set acl: {} for path: {} success!", getAcls(), getPath()); + } } else { omMetrics.incNumBucketUpdateFails(); if (exception == null) { diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeSetAclRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeSetAclRequest.java index 01b5edc8d5545..a5abbcca012af 100644 --- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeSetAclRequest.java +++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeSetAclRequest.java @@ -96,8 +96,10 @@ OMClientResponse onFailure(OMResponse.Builder omResponse, @Override void onComplete(IOException ex) { if (ex == null) { - LOG.debug("Set acls: {} to volume: {} success!", - getAcls(), getVolumeName()); + if (LOG.isDebugEnabled()) { + LOG.debug("Set acls: {} to volume: {} success!", + getAcls(), getVolumeName()); + } } else { LOG.error("Set acls {} to volume {} failed!", getAcls(), getVolumeName(), ex); diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerHARequestHandlerImpl.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerHARequestHandlerImpl.java index 66f489233417d..2d305d7831a33 100644 --- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerHARequestHandlerImpl.java +++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerHARequestHandlerImpl.java @@ -48,7 
+48,9 @@ public OzoneManagerHARequestHandlerImpl(OzoneManager om, @Override public OMResponse handleApplyTransaction(OMRequest omRequest, long transactionLogIndex) { - LOG.debug("Received OMRequest: {}, ", omRequest); + if (LOG.isDebugEnabled()) { + LOG.debug("Received OMRequest: {}, ", omRequest); + } Type cmdType = omRequest.getCmdType(); switch (cmdType) { case CreateVolume: diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java index d4c029b8b3b99..ff2c966983f48 100644 --- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java +++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java @@ -225,7 +225,9 @@ private OMResponse submitRequestDirectlyToOM(OMRequest request) { } try { omClientResponse.getFlushFuture().get(); - LOG.trace("Future for {} is completed", request); + if (LOG.isTraceEnabled()) { + LOG.trace("Future for {} is completed", request); + } } catch (ExecutionException | InterruptedException ex) { // terminate OM. As if we are in this stage means, while getting // response from flush future, we got an exception. diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java index 01e59b4fea8b5..ef96e0cc27ec4 100644 --- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java +++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java @@ -149,7 +149,9 @@ public OzoneManagerRequestHandler(OzoneManager om) { @SuppressWarnings("methodlength") @Override public OMResponse handle(OMRequest request) { - LOG.debug("Received OMRequest: {}, ", request); + if (LOG.isDebugEnabled()) { + LOG.debug("Received OMRequest: {}, ", request); + } Type cmdType = request.getCmdType(); OMResponse.Builder responseBuilder = OMResponse.newBuilder() .setCmdType(cmdType) diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/security/acl/OzoneNativeAuthorizer.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/security/acl/OzoneNativeAuthorizer.java index 5acd37e09c8c8..0b7c51a40640d 100644 --- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/security/acl/OzoneNativeAuthorizer.java +++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/security/acl/OzoneNativeAuthorizer.java @@ -79,20 +79,20 @@ public boolean checkAccess(IOzoneObj ozObject, RequestContext context) switch (objInfo.getResourceType()) { case VOLUME: - LOG.trace("Checking access for volume:" + objInfo); + LOG.trace("Checking access for volume: {}", objInfo); return volumeManager.checkAccess(objInfo, context); case BUCKET: - LOG.trace("Checking access for bucket:" + objInfo); + LOG.trace("Checking access for bucket: {}", objInfo); return (bucketManager.checkAccess(objInfo, context) && volumeManager.checkAccess(objInfo, context)); case KEY: - LOG.trace("Checking access for Key:" + objInfo); + LOG.trace("Checking access for Key: {}", objInfo); return (keyManager.checkAccess(objInfo, context) && prefixManager.checkAccess(objInfo, context) && 
bucketManager.checkAccess(objInfo, context) && volumeManager.checkAccess(objInfo, context)); case PREFIX: - LOG.trace("Checking access for Prefix:" + objInfo); + LOG.trace("Checking access for Prefix: {}", objInfo); return (prefixManager.checkAccess(objInfo, context) && bucketManager.checkAccess(objInfo, context) && volumeManager.checkAccess(objInfo, context)); diff --git a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestOmMetadataManager.java b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestOmMetadataManager.java new file mode 100644 index 0000000000000..e0e4c61d3e54f --- /dev/null +++ b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestOmMetadataManager.java @@ -0,0 +1,417 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with this + * work for additional information regarding copyright ownership. The ASF + * licenses this file to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + *

+ * http://www.apache.org/licenses/LICENSE-2.0 + *

+ * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations under + * the License. + */ + +package org.apache.hadoop.ozone.om; +import com.google.common.base.Optional; +import org.apache.hadoop.hdds.conf.OzoneConfiguration; +import org.apache.hadoop.hdds.protocol.StorageType; +import org.apache.hadoop.hdds.protocol.proto.HddsProtos; +import org.apache.hadoop.hdds.utils.db.cache.CacheKey; +import org.apache.hadoop.hdds.utils.db.cache.CacheValue; +import org.apache.hadoop.ozone.om.helpers.OmBucketInfo; +import org.apache.hadoop.ozone.om.helpers.OmKeyInfo; +import org.apache.hadoop.ozone.om.request.TestOMRequestUtils; +import org.junit.Assert; +import org.junit.Before; +import org.junit.Rule; +import org.junit.Test; +import org.junit.rules.TemporaryFolder; + +import java.util.List; +import java.util.TreeSet; + +import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_DB_DIRS; + +/** + * Tests OzoneManager MetadataManager. + */ +public class TestOmMetadataManager { + + private OMMetadataManager omMetadataManager; + private OzoneConfiguration ozoneConfiguration; + + @Rule + public TemporaryFolder folder = new TemporaryFolder(); + + + @Before + public void setup() throws Exception { + ozoneConfiguration = new OzoneConfiguration(); + ozoneConfiguration.set(OZONE_OM_DB_DIRS, + folder.getRoot().getAbsolutePath()); + omMetadataManager = new OmMetadataManagerImpl(ozoneConfiguration); + } + @Test + public void testListBuckets() throws Exception { + + String volumeName1 = "volumeA"; + String prefixBucketNameWithOzoneOwner = "ozoneBucket"; + String prefixBucketNameWithHadoopOwner = "hadoopBucket"; + + TestOMRequestUtils.addVolumeToDB(volumeName1, omMetadataManager); + + + TreeSet volumeABucketsPrefixWithOzoneOwner = new TreeSet<>(); + TreeSet volumeABucketsPrefixWithHadoopOwner = new TreeSet<>(); + for (int i=1; i<= 100; i++) { + if (i % 2 == 0) { + volumeABucketsPrefixWithOzoneOwner.add( + prefixBucketNameWithOzoneOwner + i); + addBucketsToCache(volumeName1, prefixBucketNameWithOzoneOwner + i); + } else { + volumeABucketsPrefixWithHadoopOwner.add( + prefixBucketNameWithHadoopOwner + i); + addBucketsToCache(volumeName1, prefixBucketNameWithHadoopOwner + i); + } + } + + String volumeName2 = "volumeB"; + TreeSet volumeBBucketsPrefixWithOzoneOwner = new TreeSet<>(); + TreeSet volumeBBucketsPrefixWithHadoopOwner = new TreeSet<>(); + TestOMRequestUtils.addVolumeToDB(volumeName2, omMetadataManager); + for (int i=1; i<= 100; i++) { + if (i % 2 == 0) { + volumeBBucketsPrefixWithOzoneOwner.add( + prefixBucketNameWithOzoneOwner + i); + addBucketsToCache(volumeName2, prefixBucketNameWithOzoneOwner + i); + } else { + volumeBBucketsPrefixWithHadoopOwner.add( + prefixBucketNameWithHadoopOwner + i); + addBucketsToCache(volumeName2, prefixBucketNameWithHadoopOwner + i); + } + } + + // List all buckets which have prefix ozoneBucket + List omBucketInfoList = + omMetadataManager.listBuckets(volumeName1, + null, prefixBucketNameWithOzoneOwner, 100); + + Assert.assertEquals(omBucketInfoList.size(), 50); + + for (OmBucketInfo omBucketInfo : omBucketInfoList) { + Assert.assertTrue(omBucketInfo.getBucketName().startsWith( + prefixBucketNameWithOzoneOwner)); + } + + + String startBucket = prefixBucketNameWithOzoneOwner + 10; + omBucketInfoList = + 
omMetadataManager.listBuckets(volumeName1, + startBucket, prefixBucketNameWithOzoneOwner, + 100); + + Assert.assertEquals(volumeABucketsPrefixWithOzoneOwner.tailSet( + startBucket).size() - 1, omBucketInfoList.size()); + + startBucket = prefixBucketNameWithOzoneOwner + 38; + omBucketInfoList = + omMetadataManager.listBuckets(volumeName1, + startBucket, prefixBucketNameWithOzoneOwner, + 100); + + Assert.assertEquals(volumeABucketsPrefixWithOzoneOwner.tailSet( + startBucket).size() - 1, omBucketInfoList.size()); + + for (OmBucketInfo omBucketInfo : omBucketInfoList) { + Assert.assertTrue(omBucketInfo.getBucketName().startsWith( + prefixBucketNameWithOzoneOwner)); + Assert.assertFalse(omBucketInfo.getBucketName().equals( + prefixBucketNameWithOzoneOwner + 10)); + } + + + + omBucketInfoList = omMetadataManager.listBuckets(volumeName2, + null, prefixBucketNameWithHadoopOwner, 100); + + Assert.assertEquals(omBucketInfoList.size(), 50); + + for (OmBucketInfo omBucketInfo : omBucketInfoList) { + Assert.assertTrue(omBucketInfo.getBucketName().startsWith( + prefixBucketNameWithHadoopOwner)); + } + + // Try to get buckets by count 10, like that get all buckets in the + // volumeB with prefixBucketNameWithHadoopOwner. + startBucket = null; + TreeSet expectedBuckets = new TreeSet<>(); + for (int i=0; i<5; i++) { + + omBucketInfoList = omMetadataManager.listBuckets(volumeName2, + startBucket, prefixBucketNameWithHadoopOwner, 10); + + Assert.assertEquals(omBucketInfoList.size(), 10); + + for (OmBucketInfo omBucketInfo : omBucketInfoList) { + expectedBuckets.add(omBucketInfo.getBucketName()); + Assert.assertTrue(omBucketInfo.getBucketName().startsWith( + prefixBucketNameWithHadoopOwner)); + startBucket = omBucketInfo.getBucketName(); + } + } + + + Assert.assertEquals(volumeBBucketsPrefixWithHadoopOwner, expectedBuckets); + // As now we have iterated all 50 buckets, calling next time should + // return empty list. + omBucketInfoList = omMetadataManager.listBuckets(volumeName2, + startBucket, prefixBucketNameWithHadoopOwner, 10); + + Assert.assertEquals(omBucketInfoList.size(), 0); + + } + + + private void addBucketsToCache(String volumeName, String bucketName) { + + OmBucketInfo omBucketInfo = OmBucketInfo.newBuilder() + .setVolumeName(volumeName) + .setBucketName(bucketName) + .setStorageType(StorageType.DISK) + .setIsVersionEnabled(false) + .build(); + + omMetadataManager.getBucketTable().addCacheEntry( + new CacheKey<>(omMetadataManager.getBucketKey(volumeName, bucketName)), + new CacheValue<>(Optional.of(omBucketInfo), 1)); + } + + @Test + public void testListKeys() throws Exception { + + String volumeNameA = "volumeA"; + String volumeNameB = "volumeB"; + String ozoneBucket = "ozoneBucket"; + String hadoopBucket = "hadoopBucket"; + + + // Create volumes and buckets. 
+ TestOMRequestUtils.addVolumeToDB(volumeNameA, omMetadataManager); + TestOMRequestUtils.addVolumeToDB(volumeNameB, omMetadataManager); + addBucketsToCache(volumeNameA, ozoneBucket); + addBucketsToCache(volumeNameB, hadoopBucket); + + + String prefixKeyA = "key-a"; + String prefixKeyB = "key-b"; + TreeSet keysASet = new TreeSet<>(); + TreeSet keysBSet = new TreeSet<>(); + for (int i=1; i<= 100; i++) { + if (i % 2 == 0) { + keysASet.add( + prefixKeyA + i); + addKeysToOM(volumeNameA, ozoneBucket, prefixKeyA + i, i); + } else { + keysBSet.add( + prefixKeyB + i); + addKeysToOM(volumeNameA, hadoopBucket, prefixKeyB + i, i); + } + } + + + TreeSet keysAVolumeBSet = new TreeSet<>(); + TreeSet keysBVolumeBSet = new TreeSet<>(); + for (int i=1; i<= 100; i++) { + if (i % 2 == 0) { + keysAVolumeBSet.add( + prefixKeyA + i); + addKeysToOM(volumeNameB, ozoneBucket, prefixKeyA + i, i); + } else { + keysBVolumeBSet.add( + prefixKeyB + i); + addKeysToOM(volumeNameB, hadoopBucket, prefixKeyB + i, i); + } + } + + + // List all keys which have prefix "key-a" + List omKeyInfoList = + omMetadataManager.listKeys(volumeNameA, ozoneBucket, + null, prefixKeyA, 100); + + Assert.assertEquals(omKeyInfoList.size(), 50); + + for (OmKeyInfo omKeyInfo : omKeyInfoList) { + Assert.assertTrue(omKeyInfo.getKeyName().startsWith( + prefixKeyA)); + } + + + String startKey = prefixKeyA + 10; + omKeyInfoList = + omMetadataManager.listKeys(volumeNameA, ozoneBucket, + startKey, prefixKeyA, 100); + + Assert.assertEquals(keysASet.tailSet( + startKey).size() - 1, omKeyInfoList.size()); + + startKey = prefixKeyA + 38; + omKeyInfoList = + omMetadataManager.listKeys(volumeNameA, ozoneBucket, + startKey, prefixKeyA, 100); + + Assert.assertEquals(keysASet.tailSet( + startKey).size() - 1, omKeyInfoList.size()); + + for (OmKeyInfo omKeyInfo : omKeyInfoList) { + Assert.assertTrue(omKeyInfo.getKeyName().startsWith( + prefixKeyA)); + Assert.assertFalse(omKeyInfo.getBucketName().equals( + prefixKeyA + 38)); + } + + + + omKeyInfoList = omMetadataManager.listKeys(volumeNameB, hadoopBucket, + null, prefixKeyB, 100); + + Assert.assertEquals(omKeyInfoList.size(), 50); + + for (OmKeyInfo omKeyInfo : omKeyInfoList) { + Assert.assertTrue(omKeyInfo.getKeyName().startsWith( + prefixKeyB)); + } + + // Try to get keys by count 10, like that get all keys in the + // volumeB/ozoneBucket with "key-a". + startKey = null; + TreeSet expectedKeys = new TreeSet<>(); + for (int i=0; i<5; i++) { + + omKeyInfoList = omMetadataManager.listKeys(volumeNameB, hadoopBucket, + startKey, prefixKeyB, 10); + + Assert.assertEquals(10, omKeyInfoList.size()); + + for (OmKeyInfo omKeyInfo : omKeyInfoList) { + expectedKeys.add(omKeyInfo.getKeyName()); + Assert.assertTrue(omKeyInfo.getKeyName().startsWith( + prefixKeyB)); + startKey = omKeyInfo.getKeyName(); + } + } + + Assert.assertEquals(expectedKeys, keysBVolumeBSet); + + + // As now we have iterated all 50 buckets, calling next time should + // return empty list. + omKeyInfoList = omMetadataManager.listKeys(volumeNameB, hadoopBucket, + startKey, prefixKeyB, 10); + + Assert.assertEquals(omKeyInfoList.size(), 0); + + } + + @Test + public void testListKeysWithFewDeleteEntriesInCache() throws Exception { + String volumeNameA = "volumeA"; + String ozoneBucket = "ozoneBucket"; + + // Create volumes and bucket. 
+ TestOMRequestUtils.addVolumeToDB(volumeNameA, omMetadataManager); + + addBucketsToCache(volumeNameA, ozoneBucket); + + String prefixKeyA = "key-a"; + TreeSet<String> keysASet = new TreeSet<>(); + TreeSet<String> deleteKeySet = new TreeSet<>(); + + + for (int i=1; i<= 100; i++) { + if (i % 2 == 0) { + keysASet.add( + prefixKeyA + i); + addKeysToOM(volumeNameA, ozoneBucket, prefixKeyA + i, i); + } else { + addKeysToOM(volumeNameA, ozoneBucket, prefixKeyA + i, i); + String key = omMetadataManager.getOzoneKey(volumeNameA, + ozoneBucket, prefixKeyA + i); + // Mark as deleted in cache. + omMetadataManager.getKeyTable().addCacheEntry( + new CacheKey<>(key), + new CacheValue<>(Optional.absent(), 100L)); + deleteKeySet.add(key); + } + } + + // Now list keys which match prefixKeyA. + List<OmKeyInfo> omKeyInfoList = + omMetadataManager.listKeys(volumeNameA, ozoneBucket, + null, prefixKeyA, 100); + + // Out of the 100 keys, 50 are marked for delete, so only 50 keys should + // be listed. + Assert.assertEquals(50, omKeyInfoList.size()); + + TreeSet<String> expectedKeys = new TreeSet<>(); + + for (OmKeyInfo omKeyInfo : omKeyInfoList) { + expectedKeys.add(omKeyInfo.getKeyName()); + Assert.assertTrue(omKeyInfo.getKeyName().startsWith(prefixKeyA)); + } + + Assert.assertEquals(expectedKeys, keysASet); + + + // Now list the keys in batches of 10. + String startKey = null; + expectedKeys = new TreeSet<>(); + for (int i=0; i<5; i++) { + + omKeyInfoList = omMetadataManager.listKeys(volumeNameA, ozoneBucket, + startKey, prefixKeyA, 10); + + Assert.assertEquals(10, omKeyInfoList.size()); + + for (OmKeyInfo omKeyInfo : omKeyInfoList) { + expectedKeys.add(omKeyInfo.getKeyName()); + Assert.assertTrue(omKeyInfo.getKeyName().startsWith( + prefixKeyA)); + startKey = omKeyInfo.getKeyName(); + } + } + + Assert.assertEquals(keysASet, expectedKeys); + + + // As we have now iterated over all 50 keys, the next call should + // return an empty list. 
+ omKeyInfoList = omMetadataManager.listKeys(volumeNameA, ozoneBucket, + startKey, prefixKeyA, 10); + + Assert.assertEquals(omKeyInfoList.size(), 0); + + + + } + + private void addKeysToOM(String volumeName, String bucketName, + String keyName, int i) throws Exception { + + if (i%2== 0) { + TestOMRequestUtils.addKeyToTable(false, volumeName, bucketName, keyName, + 1000L, HddsProtos.ReplicationType.RATIS, + HddsProtos.ReplicationFactor.ONE, omMetadataManager); + } else { + TestOMRequestUtils.addKeyToTableCache(volumeName, bucketName, keyName, + HddsProtos.ReplicationType.RATIS, HddsProtos.ReplicationFactor.ONE, + omMetadataManager); + } + } + +} \ No newline at end of file diff --git a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMRequestUtils.java b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMRequestUtils.java index 88848f8b2a8fc..472d46a289e13 100644 --- a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMRequestUtils.java +++ b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMRequestUtils.java @@ -120,7 +120,52 @@ public static void addKeyToTable(boolean openKeyTable, String volumeName, OMMetadataManager omMetadataManager) throws Exception { - OmKeyInfo.Builder builder = new OmKeyInfo.Builder() + OmKeyInfo omKeyInfo = createOmKeyInfo(volumeName, bucketName, keyName, + replicationType, replicationFactor); + + if (openKeyTable) { + omMetadataManager.getOpenKeyTable().put( + omMetadataManager.getOpenKey(volumeName, bucketName, keyName, + clientID), omKeyInfo); + } else { + omMetadataManager.getKeyTable().put(omMetadataManager.getOzoneKey( + volumeName, bucketName, keyName), omKeyInfo); + } + + } + + /** + * Add key entry to key table cache. + * @param volumeName + * @param bucketName + * @param keyName + * @param replicationType + * @param replicationFactor + * @param omMetadataManager + */ + @SuppressWarnings("parameterNumber") + public static void addKeyToTableCache(String volumeName, + String bucketName, + String keyName, + HddsProtos.ReplicationType replicationType, + HddsProtos.ReplicationFactor replicationFactor, + OMMetadataManager omMetadataManager) { + + + OmKeyInfo omKeyInfo = createOmKeyInfo(volumeName, bucketName, keyName, + replicationType, replicationFactor); + + omMetadataManager.getKeyTable().addCacheEntry( + new CacheKey<>(omMetadataManager.getOzoneKey(volumeName, bucketName, + keyName)), new CacheValue<>(Optional.of(omKeyInfo), + 1L)); + + } + + private OmKeyInfo createKeyInfo(String volumeName, String bucketName, + String keyName, HddsProtos.ReplicationType replicationType, + HddsProtos.ReplicationFactor replicationFactor) { + return new OmKeyInfo.Builder() .setVolumeName(volumeName) .setBucketName(bucketName) .setKeyName(keyName) @@ -130,19 +175,10 @@ public static void addKeyToTable(boolean openKeyTable, String volumeName, .setModificationTime(Time.now()) .setDataSize(1000L) .setReplicationType(replicationType) - .setReplicationFactor(replicationFactor); - - if (openKeyTable) { - omMetadataManager.getOpenKeyTable().put( - omMetadataManager.getOpenKey(volumeName, bucketName, keyName, - clientID), builder.build()); - } else { - omMetadataManager.getKeyTable().put(omMetadataManager.getOzoneKey( - volumeName, bucketName, keyName), builder.build()); - } - + .setReplicationFactor(replicationFactor).build(); } + /** * Create OmKeyInfo. 
*/ diff --git a/hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java b/hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java index 4147c8ff4e3e9..298fd2e693737 100644 --- a/hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java +++ b/hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java @@ -425,7 +425,9 @@ private boolean innerDelete(Path f, boolean recursive) throws IOException { DeleteIterator iterator = new DeleteIterator(f, recursive); return iterator.iterate(); } catch (FileNotFoundException e) { - LOG.debug("Couldn't delete {} - does not exist", f); + if (LOG.isDebugEnabled()) { + LOG.debug("Couldn't delete {} - does not exist", f); + } return false; } } diff --git a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/AWSV4AuthParser.java b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/AWSV4AuthParser.java index 9b65b387a7928..82ffa0c5c4303 100644 --- a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/AWSV4AuthParser.java +++ b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/AWSV4AuthParser.java @@ -110,10 +110,14 @@ public void parse() throws Exception { canonicalRequest = buildCanonicalRequest(); strToSign.append(hash(canonicalRequest)); - LOG.debug("canonicalRequest:[{}]", canonicalRequest); + if (LOG.isDebugEnabled()) { + LOG.debug("canonicalRequest:[{}]", canonicalRequest); + } - headerMap.keySet().forEach(k -> LOG.trace("Header:{},value:{}", k, - headerMap.get(k))); + if (LOG.isTraceEnabled()) { + headerMap.keySet().forEach(k -> LOG.trace("Header:{},value:{}", k, + headerMap.get(k))); + } LOG.debug("StringToSign:[{}]", strToSign); stringToSign = strToSign.toString(); diff --git a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneClientProducer.java b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneClientProducer.java index abaca03908240..d42c005e58316 100644 --- a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneClientProducer.java +++ b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneClientProducer.java @@ -86,8 +86,9 @@ private OzoneClient getClient(OzoneConfiguration config) throws IOException { identifier.setSignature(v4RequestParser.getSignature()); identifier.setAwsAccessId(v4RequestParser.getAwsAccessId()); identifier.setOwner(new Text(v4RequestParser.getAwsAccessId())); - - LOG.trace("Adding token for service:{}", omService); + if (LOG.isTraceEnabled()) { + LOG.trace("Adding token for service:{}", omService); + } Token token = new Token(identifier.getBytes(), identifier.getSignature().getBytes(UTF_8), identifier.getKind(), diff --git a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/exception/OS3ExceptionMapper.java b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/exception/OS3ExceptionMapper.java index 43f335ede6f5e..588dafae86a6d 100644 --- a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/exception/OS3ExceptionMapper.java +++ b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/exception/OS3ExceptionMapper.java @@ -42,7 +42,9 @@ public class OS3ExceptionMapper implements ExceptionMapper { @Override public Response toResponse(OS3Exception exception) { - LOG.debug("Returning exception. ex: {}", exception.toString()); + if (LOG.isDebugEnabled()) { + LOG.debug("Returning exception. 
ex: {}", exception.toString()); + } exception.setRequestId(requestIdentifier.getRequestId()); return Response.status(exception.getHttpCode()) .entity(exception.toXml()).build(); diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java index c92a85ea57e9e..8aac868853a63 100644 --- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java +++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java @@ -218,7 +218,7 @@ private synchronized void reopen(String reason, long targetPos, long length, } @Override - public synchronized long getPos() throws IOException { + public synchronized long getPos() { return (nextReadPos < 0) ? 0 : nextReadPos; } @@ -620,15 +620,26 @@ public synchronized boolean resetConnection() throws IOException { return isObjectStreamOpen(); } + /** + * Return the number of bytes available. + * If the inner stream is closed, the value is 1 for consistency + * with S3ObjectStream -and so address the GZip bug + * http://bugs.java.com/bugdatabase/view_bug.do?bug_id=7036144 . + * If the stream is open, then it is the amount returned by the + * wrapped stream. + * @return a value greater than or equal to zero. + * @throws IOException IO failure. + */ @Override public synchronized int available() throws IOException { checkNotClosed(); - - long remaining = remainingInFile(); - if (remaining > Integer.MAX_VALUE) { - return Integer.MAX_VALUE; + if (contentLength == 0 || (nextReadPos >= contentLength)) { + return 0; } - return (int)remaining; + + return wrappedStream == null + ? 1 + : wrappedStream.available(); } /** @@ -637,8 +648,8 @@ public synchronized int available() throws IOException { */ @InterfaceAudience.Private @InterfaceStability.Unstable - public synchronized long remainingInFile() { - return this.contentLength - this.pos; + public synchronized long remainingInFile() throws IOException { + return contentLength - getPos(); } /** @@ -649,7 +660,7 @@ public synchronized long remainingInFile() { @InterfaceAudience.Private @InterfaceStability.Unstable public synchronized long remainingInCurrentRequest() { - return this.contentRangeFinish - this.pos; + return contentRangeFinish - getPos(); } @InterfaceAudience.Private diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractSeek.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractSeek.java index 9332621d114f4..3513d0179cb0a 100644 --- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractSeek.java +++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractSeek.java @@ -80,7 +80,7 @@ public class ITestS3AContractSeek extends AbstractContractSeekTest { * which S3A Supports. * @return a list of seek policies to test. 
*/ - @Parameterized.Parameters + @Parameterized.Parameters(name = "{0}-{1}") public static Collection<Object[]> params() { return Arrays.asList(new Object[][]{ {INPUT_FADV_SEQUENTIAL, Default_JSSE}, diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java index 592c4be907db1..c8e0d368793af 100644 --- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java +++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java @@ -1396,13 +1396,17 @@ public static Set<String> listInitialThreadsForLifecycleChecks() { } /** - * Get a set containing the names of all active threads. + * Get a set containing the names of all active threads, + * stripping out all test runner threads. * @return the current set of threads. */ public static Set<String> getCurrentThreadNames() { - return Thread.getAllStackTraces().keySet() + TreeSet<String> threads = Thread.getAllStackTraces().keySet() .stream() .map(Thread::getName) + .filter(n -> !n.startsWith("JUnit")) + .filter(n -> !n.startsWith("surefire")) .collect(Collectors.toCollection(TreeSet::new)); + return threads; } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/ConfigFile.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/ConfigFile.java index c09373f54838c..060e2045278b7 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/ConfigFile.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/ConfigFile.java @@ -24,6 +24,7 @@ import io.swagger.annotations.ApiModelProperty; import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.classification.InterfaceStability; +import org.apache.hadoop.yarn.api.records.LocalResourceVisibility; import javax.xml.bind.annotation.XmlElement; import javax.xml.bind.annotation.XmlEnum; @@ -73,6 +74,7 @@ public String toString() { private TypeEnum type = null; private String destFile = null; private String srcFile = null; + private LocalResourceVisibility visibility = null; private Map<String, String> properties = new HashMap<>(); public ConfigFile copy() { @@ -80,6 +82,7 @@ public ConfigFile copy() { copy.setType(this.getType()); copy.setSrcFile(this.getSrcFile()); copy.setDestFile(this.getDestFile()); + copy.setVisibility(this.visibility); if (this.getProperties() != null && !this.getProperties().isEmpty()) { copy.getProperties().putAll(this.getProperties()); } @@ -150,6 +153,26 @@ public void setSrcFile(String srcFile) { this.srcFile = srcFile; } + + /** + * Visibility of the Config file. 
+ **/ + public ConfigFile visibility(LocalResourceVisibility localrsrcVisibility) { + this.visibility = localrsrcVisibility; + return this; + } + + @ApiModelProperty(example = "null", value = "Visibility of the Config file") + @JsonProperty("visibility") + public LocalResourceVisibility getVisibility() { + return visibility; + } + + @XmlElement(name = "visibility", defaultValue="APPLICATION") + public void setVisibility(LocalResourceVisibility localrsrcVisibility) { + this.visibility = localrsrcVisibility; + } + /** A blob of key value pairs that will be dumped in the dest_file in the format as specified in type. If src_file is specified, src_file content are dumped @@ -200,12 +223,13 @@ public boolean equals(java.lang.Object o) { return Objects.equals(this.type, configFile.type) && Objects.equals(this.destFile, configFile.destFile) && Objects.equals(this.srcFile, configFile.srcFile) + && Objects.equals(this.visibility, configFile.visibility) && Objects.equals(this.properties, configFile.properties); } @Override public int hashCode() { - return Objects.hash(type, destFile, srcFile, properties); + return Objects.hash(type, destFile, srcFile, visibility, properties); } @Override @@ -217,6 +241,8 @@ public String toString() { .append(" destFile: ").append(toIndentedString(destFile)) .append("\n") .append(" srcFile: ").append(toIndentedString(srcFile)).append("\n") + .append(" visibility: ").append(toIndentedString(visibility)) + .append("\n") .append(" properties: ").append(toIndentedString(properties)) .append("\n") .append("}"); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java index 1276022f25fef..46bfa7a4564f5 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java @@ -817,6 +817,21 @@ public int actionDestroy(String serviceName) throws YarnException, + appDir); ret = EXIT_NOT_FOUND; } + + // Delete Public Resource Dir + Path publicResourceDir = new Path(fs.getBasePath(), serviceName); + if (fileSystem.exists(publicResourceDir)) { + if (fileSystem.delete(publicResourceDir, true)) { + LOG.info("Successfully deleted public resource dir for " + + serviceName + ": " + publicResourceDir); + } else { + String message = "Failed to delete public resource dir for service " + + serviceName + " at: " + publicResourceDir; + LOG.info(message); + throw new YarnException(message); + } + } + try { deleteZKNode(serviceName); // don't set destroySucceed to false if no ZK node exists because not @@ -1315,7 +1330,8 @@ private boolean addAMLog4jResource(String serviceName, Configuration conf, new Path(remoteConfPath, YarnServiceConstants.YARN_SERVICE_LOG4J_FILENAME); copy(conf, localFilePath, remoteFilePath); LocalResource localResource = - fs.createAmResource(remoteConfPath, LocalResourceType.FILE); + fs.createAmResource(remoteConfPath, LocalResourceType.FILE, + LocalResourceVisibility.APPLICATION); localResources.put(localFilePath.getName(), localResource); hasAMLog4j = true; } else { @@ 
-1465,7 +1481,7 @@ private void addKeytabResourceIfSecure(SliderFileSystem fileSystem, return; } LocalResource keytabRes = fileSystem.createAmResource(keytabOnhdfs, - LocalResourceType.FILE); + LocalResourceType.FILE, LocalResourceVisibility.PRIVATE); localResource.put(String.format(YarnServiceConstants.KEYTAB_LOCATION, service.getName()), keytabRes); LOG.info("Adding " + service.getName() + "'s keytab for " diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/conf/YarnServiceConstants.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/conf/YarnServiceConstants.java index 05135fe61960b..dd940650c8aa4 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/conf/YarnServiceConstants.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/conf/YarnServiceConstants.java @@ -47,6 +47,8 @@ public interface YarnServiceConstants { String SERVICES_DIRECTORY = "services"; + String SERVICES_PUBLIC_DIRECTORY = "/tmp/hadoop-yarn/staging/"; + /** * JVM property to define the service lib directory; * this is set by the yarn.sh script diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/provider/ProviderUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/provider/ProviderUtils.java index 5fc96a09df27e..0b091e22419fd 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/provider/ProviderUtils.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/provider/ProviderUtils.java @@ -27,6 +27,7 @@ import org.apache.hadoop.yarn.api.records.Container; import org.apache.hadoop.yarn.api.records.LocalResource; import org.apache.hadoop.yarn.api.records.LocalResourceType; +import org.apache.hadoop.yarn.api.records.LocalResourceVisibility; import org.apache.hadoop.yarn.service.ServiceContext; import org.apache.hadoop.yarn.service.api.records.ConfigFile; import org.apache.hadoop.yarn.service.api.records.ConfigFormat; @@ -191,6 +192,17 @@ public static Path initCompInstanceDir(SliderFileSystem fs, return compInstanceDir; } + public static Path initCompPublicResourceDir(SliderFileSystem fs, + ContainerLaunchService.ComponentLaunchContext compLaunchContext, + ComponentInstance instance) { + Path compDir = fs.getComponentPublicResourceDir( + compLaunchContext.getServiceVersion(), compLaunchContext.getName()); + Path compPublicResourceDir = new Path(compDir, + instance.getCompInstanceName()); + return compPublicResourceDir; + } + + // 1. Create all config files for a component on hdfs for localization // 2. 
Add the config file to localResource public static synchronized void createConfigFileAndAddLocalResource( @@ -212,6 +224,20 @@ public static synchronized void createConfigFileAndAddLocalResource( log.info("Component instance conf dir already exists: " + compInstanceDir); } + Path compPublicResourceDir = initCompPublicResourceDir(fs, + compLaunchContext, instance); + if (!fs.getFileSystem().exists(compPublicResourceDir)) { + log.info("{} version {} : Creating Public Resource dir on hdfs: {}", + instance.getCompInstanceId(), compLaunchContext.getServiceVersion(), + compPublicResourceDir); + fs.getFileSystem().mkdirs(compPublicResourceDir, + new FsPermission(FsAction.ALL, FsAction.READ_EXECUTE, + FsAction.EXECUTE)); + } else { + log.info("Component instance public resource dir already exists: " + + compPublicResourceDir); + } + log.debug("Tokens substitution for component instance: {}{}{}" + instance .getCompInstanceName(), System.lineSeparator(), tokensForSubstitution); @@ -236,7 +262,14 @@ public static synchronized void createConfigFileAndAddLocalResource( * substitution and merges in new configs, and writes a new file to * compInstanceDir/fileName. */ - Path remoteFile = new Path(compInstanceDir, fileName); + Path remoteFile = null; + LocalResourceVisibility visibility = configFile.getVisibility(); + if (visibility != null && + visibility.equals(LocalResourceVisibility.PUBLIC)) { + remoteFile = new Path(compPublicResourceDir, fileName); + } else { + remoteFile = new Path(compInstanceDir, fileName); + } if (!fs.getFileSystem().exists(remoteFile)) { log.info("Saving config file on hdfs for component " + instance @@ -268,7 +301,8 @@ public static synchronized void createConfigFileAndAddLocalResource( // Add resource for localization LocalResource configResource = - fs.createAmResource(remoteFile, LocalResourceType.FILE); + fs.createAmResource(remoteFile, LocalResourceType.FILE, + configFile.getVisibility()); Path destFile = new Path(configFile.getDestFile()); String symlink = APP_CONF_DIR + "/" + fileName; addLocalResource(launcher, symlink, configResource, destFile, @@ -311,7 +345,8 @@ public static synchronized void handleStaticFilesForLocalization( LocalResource localResource = fs.createAmResource(sourceFile, (staticFile.getType() == ConfigFile.TypeEnum.ARCHIVE ? 
LocalResourceType.ARCHIVE : - LocalResourceType.FILE)); + LocalResourceType.FILE), staticFile.getVisibility()); + Path destFile = new Path(sourceFile.getName()); if (staticFile.getDestFile() != null && !staticFile.getDestFile() .isEmpty()) { diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/provider/tarball/TarballProviderService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/provider/tarball/TarballProviderService.java index 87406f792282d..cd783e77f76ce 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/provider/tarball/TarballProviderService.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/provider/tarball/TarballProviderService.java @@ -20,6 +20,7 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.yarn.api.records.LocalResource; import org.apache.hadoop.yarn.api.records.LocalResourceType; +import org.apache.hadoop.yarn.api.records.LocalResourceVisibility; import org.apache.hadoop.yarn.service.api.records.Service; import org.apache.hadoop.yarn.service.component.instance.ComponentInstance; import org.apache.hadoop.yarn.service.containerlaunch.ContainerLaunchService; @@ -43,7 +44,8 @@ public void processArtifact(AbstractLauncher launcher, } log.info("Adding resource {}", artifact); LocalResourceType type = LocalResourceType.ARCHIVE; - LocalResource packageResource = fileSystem.createAmResource(artifact, type); + LocalResource packageResource = fileSystem.createAmResource(artifact, type, + LocalResourceVisibility.APPLICATION); launcher.addLocalResource(APP_LIB_DIR, packageResource); } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/CoreFileSystem.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/CoreFileSystem.java index b9a464960d578..0ee8e83980753 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/CoreFileSystem.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/CoreFileSystem.java @@ -384,13 +384,19 @@ public Path getHomeDirectory() { * @param resourceType resource type * @return the local resource for AM */ - public LocalResource createAmResource(Path destPath, LocalResourceType resourceType) throws IOException { + public LocalResource createAmResource(Path destPath, + LocalResourceType resourceType, + LocalResourceVisibility visibility) throws IOException { + FileStatus destStatus = fileSystem.getFileStatus(destPath); LocalResource amResource = Records.newRecord(LocalResource.class); amResource.setType(resourceType); // Set visibility of the resource // Setting to most private option - amResource.setVisibility(LocalResourceVisibility.APPLICATION); + if (visibility == null) { + visibility = LocalResourceVisibility.APPLICATION; + } + amResource.setVisibility(visibility); // Set the 
resource to be copied over amResource.setResource( URL.fromPath(fileSystem.resolvePath(destStatus.getPath()))); @@ -419,7 +425,7 @@ public Map submitDirectory(Path srcDir, String destRelati for (FileStatus entry : fileset) { LocalResource resource = createAmResource(entry.getPath(), - LocalResourceType.FILE); + LocalResourceType.FILE, LocalResourceVisibility.APPLICATION); String relativePath = destRelativeDir + "/" + entry.getPath().getName(); localResources.put(relativePath, resource); } @@ -465,7 +471,8 @@ public LocalResource submitFile(File localFile, Path tempPath, String subdir, St // Set the type of resource - file or archive // archives are untarred at destination // we don't need the jar file to be untarred for now - return createAmResource(destPath, LocalResourceType.FILE); + return createAmResource(destPath, LocalResourceType.FILE, + LocalResourceVisibility.APPLICATION); } /** @@ -483,7 +490,7 @@ public void submitTarGzipAndUpdate( BadClusterStateException { Path dependencyLibTarGzip = getDependencyTarGzip(); LocalResource lc = createAmResource(dependencyLibTarGzip, - LocalResourceType.ARCHIVE); + LocalResourceType.ARCHIVE, LocalResourceVisibility.APPLICATION); providerResources.put(YarnServiceConstants.DEPENDENCY_LOCALIZED_DIR_LINK, lc); } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/SliderFileSystem.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/SliderFileSystem.java index c7764764be805..4af97502269a5 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/SliderFileSystem.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/SliderFileSystem.java @@ -21,6 +21,7 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; +import org.apache.hadoop.yarn.service.conf.YarnServiceConstants; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -63,6 +64,26 @@ public Path getComponentDir(String serviceVersion, String compName) { serviceVersion + "/" + compName); } + public Path getBasePath() { + String tmpDir = configuration.get("hadoop.tmp.dir"); + String basePath = YarnServiceConstants.SERVICE_BASE_DIRECTORY + + "/" + YarnServiceConstants.SERVICES_DIRECTORY; + return new Path(tmpDir, basePath); + } + + /** + * Returns the component public resource directory path. + * + * @param serviceVersion service version + * @param compName component name + * @return component public resource directory + */ + public Path getComponentPublicResourceDir(String serviceVersion, + String compName) { + return new Path(new Path(getBasePath(), getAppDir().getName() + "/" + + "components"), serviceVersion + "/" + compName); + } + /** * Deletes the component directory. 
* @@ -77,6 +98,12 @@ public void deleteComponentDir(String serviceVersion, String compName) fileSystem.delete(path, true); LOG.debug("deleted dir {}", path); } + Path publicResourceDir = getComponentPublicResourceDir(serviceVersion, + compName); + if (fileSystem.exists(publicResourceDir)) { + fileSystem.delete(publicResourceDir, true); + LOG.debug("deleted public resource dir {}", publicResourceDir); + } } /** @@ -92,6 +119,13 @@ public void deleteComponentsVersionDirIfEmpty(String serviceVersion) fileSystem.delete(path, true); LOG.info("deleted dir {}", path); } + Path publicResourceDir = new Path(new Path(getBasePath(), + getAppDir().getName() + "/" + "components"), serviceVersion); + if (fileSystem.exists(publicResourceDir) + && fileSystem.listStatus(publicResourceDir).length == 0) { + fileSystem.delete(publicResourceDir, true); + LOG.info("deleted public resource dir {}", publicResourceDir); + } } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/provider/TestProviderUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/provider/TestProviderUtils.java index 84c3b6e020d5b..bfdcccd268c4a 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/provider/TestProviderUtils.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/provider/TestProviderUtils.java @@ -63,95 +63,100 @@ public void testStaticFileLocalization() throws IOException { List configFileList = new ArrayList<>(); when(conf.getFiles()).thenReturn(configFileList); when(compLaunchCtx.getConfiguration()).thenReturn(conf); - when(sfs.createAmResource(any(Path.class), any(LocalResourceType.class))) - .thenAnswer(invocationOnMock -> new LocalResource() { - @Override - public URL getResource() { - return URL.fromPath(((Path) invocationOnMock.getArguments()[0])); - } + when(sfs.createAmResource(any(Path.class), any(LocalResourceType.class), + any(LocalResourceVisibility.class))).thenAnswer( + invocationOnMock -> new LocalResource() { + @Override + public URL getResource() { + return URL.fromPath(((Path) invocationOnMock.getArguments()[0])); + } - @Override - public void setResource(URL resource) { + @Override + public void setResource(URL resource) { - } + } - @Override - public long getSize() { - return 0; - } + @Override + public long getSize() { + return 0; + } - @Override - public void setSize(long size) { + @Override + public void setSize(long size) { - } + } - @Override - public long getTimestamp() { - return 0; - } + @Override + public long getTimestamp() { + return 0; + } - @Override - public void setTimestamp(long timestamp) { + @Override + public void setTimestamp(long timestamp) { - } + } - @Override - public LocalResourceType getType() { - return (LocalResourceType) invocationOnMock.getArguments()[1]; - } + @Override + public LocalResourceType getType() { + return (LocalResourceType) invocationOnMock.getArguments()[1]; + } - @Override - public void setType(LocalResourceType type) { + @Override + public void setType(LocalResourceType type) { - } + } - @Override - public LocalResourceVisibility getVisibility() { - return null; - } + @Override + public LocalResourceVisibility getVisibility() { + return 
LocalResourceVisibility.APPLICATION; + } - @Override - public void setVisibility(LocalResourceVisibility visibility) { + @Override + public void setVisibility(LocalResourceVisibility visibility) { - } + } - @Override - public String getPattern() { - return null; - } + @Override + public String getPattern() { + return null; + } - @Override - public void setPattern(String pattern) { + @Override + public void setPattern(String pattern) { - } + } - @Override - public boolean getShouldBeUploadedToSharedCache() { - return false; - } + @Override + public boolean getShouldBeUploadedToSharedCache() { + return false; + } - @Override - public void setShouldBeUploadedToSharedCache( - boolean shouldBeUploadedToSharedCache) { + @Override + public void setShouldBeUploadedToSharedCache( + boolean shouldBeUploadedToSharedCache) { - } - }); + } + }); // Initialize list of files. //archive configFileList.add(new ConfigFile().srcFile("hdfs://default/sourceFile1") - .destFile("destFile1").type(ConfigFile.TypeEnum.ARCHIVE)); + .destFile("destFile1").type(ConfigFile.TypeEnum.ARCHIVE) + .visibility(LocalResourceVisibility.APPLICATION)); //static file configFileList.add(new ConfigFile().srcFile("hdfs://default/sourceFile2") - .destFile("folder/destFile_2").type(ConfigFile.TypeEnum.STATIC)); + .destFile("folder/destFile_2").type(ConfigFile.TypeEnum.STATIC) + .visibility(LocalResourceVisibility.APPLICATION)); //This will be ignored since type is JSON configFileList.add(new ConfigFile().srcFile("hdfs://default/sourceFile3") - .destFile("destFile3").type(ConfigFile.TypeEnum.JSON)); + .destFile("destFile3").type(ConfigFile.TypeEnum.JSON) + .visibility(LocalResourceVisibility.APPLICATION)); //No destination file specified configFileList.add(new ConfigFile().srcFile("hdfs://default/sourceFile4") - .type(ConfigFile.TypeEnum.STATIC)); + .type(ConfigFile.TypeEnum.STATIC) + .visibility(LocalResourceVisibility.APPLICATION)); ProviderService.ResolvedLaunchParams resolved = new ProviderService.ResolvedLaunchParams(); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceCalculator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceCalculator.java index df4a9b870082b..f1c8c89800576 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceCalculator.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceCalculator.java @@ -36,6 +36,8 @@ @RunWith(Parameterized.class) public class TestResourceCalculator { + private static final String EXTRA_RESOURCE_NAME = "test"; + private final ResourceCalculator resourceCalculator; @Parameterized.Parameters(name = "{0}") @@ -55,7 +57,7 @@ public void setupNoExtraResource() { private static void setupExtraResource() { Configuration conf = new Configuration(); - conf.set(YarnConfiguration.RESOURCE_TYPES, "test"); + conf.set(YarnConfiguration.RESOURCE_TYPES, EXTRA_RESOURCE_NAME); ResourceUtils.resetResourceTypes(conf); } @@ -97,10 +99,10 @@ private Resource newResource(long memory, int cpu) { return res; } - private Resource newResource(long memory, int cpu, int test) { + private Resource newResource(long memory, int cpu, int extraResource) { Resource res = newResource(memory, cpu); - res.setResourceValue("test", test); + res.setResourceValue(EXTRA_RESOURCE_NAME, extraResource); return res; } @@ -548,4 +550,43 @@ public 
void testFitsInDiagnosticsCollector() { newResource(1, 1))); } } + + @Test + public void testRatioWithNoExtraResource() { + //setup + Resource resource1 = newResource(1, 1); + Resource resource2 = newResource(2, 1); + + //act + float ratio = resourceCalculator.ratio(resource1, resource2); + + //assert + if (resourceCalculator instanceof DefaultResourceCalculator) { + double ratioOfMemories = 0.5; + assertEquals(ratioOfMemories, ratio, 0.00001); + } else if (resourceCalculator instanceof DominantResourceCalculator) { + double ratioOfCPUs = 1.0; + assertEquals(ratioOfCPUs, ratio, 0.00001); + } + } + + @Test + public void testRatioWithExtraResource() { + //setup + setupExtraResource(); + Resource resource1 = newResource(1, 1, 2); + Resource resource2 = newResource(2, 1, 1); + + //act + float ratio = resourceCalculator.ratio(resource1, resource2); + + //assert + if (resourceCalculator instanceof DefaultResourceCalculator) { + double ratioOfMemories = 0.5; + assertEquals(ratioOfMemories, ratio, 0.00001); + } else if (resourceCalculator instanceof DominantResourceCalculator) { + double ratioOfExtraResources = 2.0; + assertEquals(ratioOfExtraResources, ratio, 0.00001); + } + } } \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ResourceMappings.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ResourceMappings.java index d673341b01c47..c1c3b5d0aa792 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ResourceMappings.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ResourceMappings.java @@ -18,11 +18,10 @@ package org.apache.hadoop.yarn.server.nodemanager.containermanager.container; -import java.io.ByteArrayInputStream; -import java.io.ByteArrayOutputStream; +import org.apache.commons.lang3.SerializationException; +import org.apache.commons.lang3.SerializationUtils; + import java.io.IOException; -import java.io.ObjectInputStream; -import java.io.ObjectOutputStream; import java.io.Serializable; import java.util.ArrayList; import java.util.Collections; @@ -30,8 +29,6 @@ import java.util.List; import java.util.Map; -import org.apache.commons.io.IOUtils; - /** * This class is used to store assigned resource to a single container by * resource types. 
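For context on the ResourceMappings hunks below: they replace the hand-rolled ObjectInputStream/ObjectOutputStream round trip with commons-lang3 SerializationUtils. The following is a minimal, self-contained sketch of that pattern, not the patched class itself; the class name and list contents are invented for illustration, and it only assumes commons-lang3 is on the classpath. It mirrors the SerializationException-to-IOException translation the updated toBytes()/fromBytes() perform.

```java
import org.apache.commons.lang3.SerializationException;
import org.apache.commons.lang3.SerializationUtils;

import java.io.IOException;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SerializationRoundTripSketch {

  // Serialize a Serializable list to a byte array, as the new toBytes() does.
  static byte[] toBytes(List<String> resources) throws IOException {
    try {
      return SerializationUtils.serialize((Serializable) resources);
    } catch (SerializationException e) {
      // SerializationException is unchecked; wrap it to keep the
      // checked-IOException contract that callers already expect.
      throw new IOException(e);
    }
  }

  // Deserialize the byte array back into a list, as the new fromBytes() does.
  static List<String> fromBytes(byte[] bytes) throws IOException {
    try {
      return SerializationUtils.deserialize(bytes);
    } catch (SerializationException e) {
      throw new IOException(e);
    }
  }

  public static void main(String[] args) throws IOException {
    List<String> resources =
        new ArrayList<>(Arrays.asList("device-0", "device-1"));
    byte[] bytes = toBytes(resources);
    System.out.println(fromBytes(bytes)); // prints [device-0, device-1]
  }
}
```
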
@@ -91,16 +88,11 @@ public void updateAssignedResources(List list) { @SuppressWarnings("unchecked") public static AssignedResources fromBytes(byte[] bytes) throws IOException { - ObjectInputStream ois = null; - List resources; + final List resources; try { - ByteArrayInputStream bis = new ByteArrayInputStream(bytes); - ois = new ObjectInputStream(bis); - resources = (List) ois.readObject(); - } catch (ClassNotFoundException e) { + resources = SerializationUtils.deserialize(bytes); + } catch (SerializationException e) { throw new IOException(e); - } finally { - IOUtils.closeQuietly(ois); } AssignedResources ar = new AssignedResources(); ar.updateAssignedResources(resources); @@ -108,15 +100,11 @@ public static AssignedResources fromBytes(byte[] bytes) } public byte[] toBytes() throws IOException { - ObjectOutputStream oos = null; - byte[] bytes; + final byte[] bytes; try { - ByteArrayOutputStream bos = new ByteArrayOutputStream(); - oos = new ObjectOutputStream(bos); - oos.writeObject(resources); - bytes = bos.toByteArray(); - } finally { - IOUtils.closeQuietly(oos); + bytes = SerializationUtils.serialize((Serializable) resources); + } catch (SerializationException e) { + throw new IOException(e); } return bytes; } diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java index 50721debe5e60..dce24908a51d8 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java @@ -235,6 +235,9 @@ public class DockerLinuxContainerRuntime extends OCIContainerRuntime { @InterfaceAudience.Private public static final String ENV_DOCKER_CONTAINER_DOCKER_RUNTIME = "YARN_CONTAINER_RUNTIME_DOCKER_RUNTIME"; + @InterfaceAudience.Private + public static final String ENV_DOCKER_CONTAINER_DOCKER_SERVICE_MODE = + "YARN_CONTAINER_RUNTIME_DOCKER_SERVICE_MODE"; @InterfaceAudience.Private private static final String RUNTIME_TYPE = "DOCKER"; @@ -588,7 +591,9 @@ public void launchContainer(ContainerRuntimeContext ctx) String network = environment.get(ENV_DOCKER_CONTAINER_NETWORK); String hostname = environment.get(ENV_DOCKER_CONTAINER_HOSTNAME); String runtime = environment.get(ENV_DOCKER_CONTAINER_DOCKER_RUNTIME); - boolean useEntryPoint = checkUseEntryPoint(environment); + boolean serviceMode = Boolean.parseBoolean(environment.get( + ENV_DOCKER_CONTAINER_DOCKER_SERVICE_MODE)); + boolean useEntryPoint = serviceMode || checkUseEntryPoint(environment); if (imageName == null || imageName.isEmpty()) { imageName = defaultImageName; @@ -679,10 +684,12 @@ public void launchContainer(ContainerRuntimeContext ctx) runCommand.addRuntime(runtime); } - runCommand.addAllReadWriteMountLocations(containerLogDirs); - runCommand.addAllReadWriteMountLocations(applicationLocalDirs); - runCommand.addAllReadOnlyMountLocations(filecacheDirs); - runCommand.addAllReadOnlyMountLocations(userFilecacheDirs); + if (!serviceMode) { + 
runCommand.addAllReadWriteMountLocations(containerLogDirs); + runCommand.addAllReadWriteMountLocations(applicationLocalDirs); + runCommand.addAllReadOnlyMountLocations(filecacheDirs); + runCommand.addAllReadOnlyMountLocations(userFilecacheDirs); + } if (environment.containsKey(ENV_DOCKER_CONTAINER_MOUNTS)) { Matcher parsedMounts = USER_MOUNT_PATTERN.matcher( @@ -800,11 +807,20 @@ public void launchContainer(ContainerRuntimeContext ctx) runCommand.setYarnSysFS(true); } + // In service mode, the YARN log dirs are not mounted into the container. + // As a result, the container fails to start due to stdout and stderr output + // being sent to a file in a directory that does not exist. In service mode, + // only supply the command with no stdout or stderr redirection. + List commands = container.getLaunchContext().getCommands(); + if (serviceMode) { + commands = Arrays.asList( + String.join(" ", commands).split("1>")[0].split(" ")); + } + if (useEntryPoint) { runCommand.setOverrideDisabled(true); runCommand.addEnv(environment); - runCommand.setOverrideCommandWithArgs(container.getLaunchContext() - .getCommands()); + runCommand.setOverrideCommandWithArgs(commands); runCommand.disableDetach(); runCommand.setLogDir(container.getLogDir()); } else { @@ -818,6 +834,10 @@ public void launchContainer(ContainerRuntimeContext ctx) runCommand.detachOnRun(); } + if (serviceMode) { + runCommand.setServiceMode(serviceMode); + } + if(enableUserReMapping) { if (!allowPrivilegedContainerExecution(container)) { runCommand.groupAdd(groups); @@ -1279,11 +1299,14 @@ private void handleContainerKill(ContainerRuntimeContext ctx, throw new ContainerExecutionException(e); } + boolean serviceMode = Boolean.parseBoolean(env.get( + ENV_DOCKER_CONTAINER_DOCKER_SERVICE_MODE)); + // Only need to check whether the container was asked to be privileged. // If the container had failed the permissions checks upon launch, it // would have never been launched and thus we wouldn't be here // attempting to signal it. - if (isContainerRequestedAsPrivileged(container)) { + if (isContainerRequestedAsPrivileged(container) || serviceMode) { String containerId = container.getContainerId().toString(); DockerCommandExecutor.DockerContainerStatus containerStatus = DockerCommandExecutor.getContainerStatus(containerId, diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerRunCommand.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerRunCommand.java index b0603a3a22067..7fb0e40c442dc 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerRunCommand.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerRunCommand.java @@ -199,6 +199,12 @@ public DockerRunCommand setLogDir(String logDir) { return this; } + public DockerRunCommand setServiceMode(boolean serviceMode) { + String value = Boolean.toString(serviceMode); + super.addCommandArguments("service-mode", value); + return this; + } + /** * Check if user defined environment variables are empty. 
* diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.h b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.h index b215af72a8773..757bd16c63ab3 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.h +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.h @@ -325,12 +325,6 @@ int sync_yarn_sysfs(char* const* local_dirs, const char *running_user, */ int execute_regex_match(const char *regex_str, const char *input); -/** - * Validate the docker image name matches the expected input. - * Return 0 on success. - */ -int validate_docker_image_name(const char *image_name); - struct configuration* get_cfg(); /** diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c index 17114338e7249..3ef571fdefffa 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c @@ -28,6 +28,7 @@ #include "docker-util.h" #include "string-utils.h" #include "util.h" +#include "container-executor.h" #include #include #include @@ -374,6 +375,8 @@ const char *get_docker_error_message(const int error_code) { return "Invalid docker tmpfs mount"; case INVALID_DOCKER_RUNTIME: return "Invalid docker runtime"; + case SERVICE_MODE_DISABLED: + return "Service mode disabled"; default: return "Unknown error"; } @@ -987,6 +990,22 @@ static int set_runtime(const struct configuration *command_config, return ret; } +int is_service_mode_enabled(const struct configuration *command_config, + const struct configuration *executor_cfg, args *args) { + int ret = 0; + struct section *section = get_configuration_section(CONTAINER_EXECUTOR_CFG_DOCKER_SECTION, executor_cfg); + char *value = get_configuration_value("service-mode", DOCKER_COMMAND_FILE_SECTION, command_config); + if (value != NULL && strcasecmp(value, "true") == 0) { + if (is_feature_enabled(DOCKER_SERVICE_MODE_ENABLED_KEY, ret, section)) { + ret = 1; + } else { + ret = SERVICE_MODE_DISABLED; + } + } + free(value); + return ret; +} + static int add_ports_mapping_to_command(const struct configuration *command_config, args *args) { int i = 0, ret = 0; char *network_type = (char*) malloc(128); @@ -1595,12 +1614,19 @@ int get_docker_run_command(const char *command_file, const struct configuration char *privileged = NULL; char *no_new_privileges_enabled = NULL; char *use_entry_point = NULL; + int service_mode_enabled = 0; struct configuration command_config = {0, NULL}; ret = read_and_verify_command_file(command_file, DOCKER_RUN_COMMAND, &command_config); if (ret != 0) { goto free_and_exit; } + service_mode_enabled = is_service_mode_enabled(&command_config, conf, args); + if (service_mode_enabled == SERVICE_MODE_DISABLED) { + ret = SERVICE_MODE_DISABLED; + goto free_and_exit; + } + use_entry_point = 
get_configuration_value("use-entry-point", DOCKER_COMMAND_FILE_SECTION, &command_config); if (use_entry_point != NULL && strcasecmp(use_entry_point, "true") == 0) { entry_point = 1; @@ -1612,10 +1638,13 @@ int get_docker_run_command(const char *command_file, const struct configuration ret = INVALID_DOCKER_CONTAINER_NAME; goto free_and_exit; } - user = get_configuration_value("user", DOCKER_COMMAND_FILE_SECTION, &command_config); - if (user == NULL) { - ret = INVALID_DOCKER_USER_NAME; - goto free_and_exit; + + if (!service_mode_enabled) { + user = get_configuration_value("user", DOCKER_COMMAND_FILE_SECTION, &command_config); + if (user == NULL) { + ret = INVALID_DOCKER_USER_NAME; + goto free_and_exit; + } } image = get_configuration_value("image", DOCKER_COMMAND_FILE_SECTION, &command_config); if (image == NULL || validate_docker_image_name(image) != 0) { @@ -1640,12 +1669,14 @@ int get_docker_run_command(const char *command_file, const struct configuration privileged = get_configuration_value("privileged", DOCKER_COMMAND_FILE_SECTION, &command_config); if (privileged == NULL || strcmp(privileged, "false") == 0) { - char *user_buffer = make_string("--user=%s", user); - ret = add_to_args(args, user_buffer); - free(user_buffer); - if (ret != 0) { - ret = BUFFER_TOO_SMALL; - goto free_and_exit; + if (!service_mode_enabled) { + char *user_buffer = make_string("--user=%s", user); + ret = add_to_args(args, user_buffer); + free(user_buffer); + if (ret != 0) { + ret = BUFFER_TOO_SMALL; + goto free_and_exit; + } } no_new_privileges_enabled = get_configuration_value("docker.no-new-privileges.enabled", @@ -1725,9 +1756,11 @@ int get_docker_run_command(const char *command_file, const struct configuration goto free_and_exit; } - ret = set_group_add(&command_config, args); - if (ret != 0) { - goto free_and_exit; + if (!service_mode_enabled) { + ret = set_group_add(&command_config, args); + if (ret != 0) { + goto free_and_exit; + } } ret = set_devices(&command_config, conf, args); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.h b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.h index 07da195629a05..d9d34a0640a6c 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.h +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.h @@ -36,6 +36,7 @@ #define DOCKER_START_COMMAND "start" #define DOCKER_EXEC_COMMAND "exec" #define DOCKER_IMAGES_COMMAND "images" +#define DOCKER_SERVICE_MODE_ENABLED_KEY "docker.service-mode.enabled" #define DOCKER_ARG_MAX 1024 #define ARGS_INITIAL_VALUE { 0 }; @@ -71,7 +72,8 @@ enum docker_error_codes { INVALID_PID_NAMESPACE, INVALID_DOCKER_IMAGE_TRUST, INVALID_DOCKER_TMPFS_MOUNT, - INVALID_DOCKER_RUNTIME + INVALID_DOCKER_RUNTIME, + SERVICE_MODE_DISABLED }; /** diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/TestResourceMappings.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/TestResourceMappings.java new file mode 100644 index 
0000000000000..561ce0c018598 --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/TestResourceMappings.java @@ -0,0 +1,118 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.yarn.server.nodemanager.containermanager.container; + +import com.google.common.collect.ImmutableList; +import org.apache.commons.io.IOUtils; +import org.apache.hadoop.yarn.server.nodemanager.api.deviceplugin.Device; +import org.junit.Assert; +import org.junit.BeforeClass; +import org.junit.Test; + +import java.io.ByteArrayOutputStream; +import java.io.IOException; +import java.io.ObjectOutputStream; +import java.io.Serializable; +import java.util.List; + +public class TestResourceMappings { + + private static final ResourceMappings.AssignedResources testResources = + new ResourceMappings.AssignedResources(); + + @BeforeClass + public static void setup() { + testResources.updateAssignedResources(ImmutableList.of( + Device.Builder.newInstance() + .setId(0) + .setDevPath("/dev/hdwA0") + .setMajorNumber(256) + .setMinorNumber(0) + .setBusID("0000:80:00.0") + .setHealthy(true) + .build(), + Device.Builder.newInstance() + .setId(1) + .setDevPath("/dev/hdwA1") + .setMajorNumber(256) + .setMinorNumber(0) + .setBusID("0000:80:00.1") + .setHealthy(true) + .build() + )); + } + + @Test + public void testSerializeAssignedResourcesWithSerializationUtils() { + try { + byte[] serializedString = testResources.toBytes(); + + ResourceMappings.AssignedResources deserialized = + ResourceMappings.AssignedResources.fromBytes(serializedString); + + Assert.assertEquals(testResources.getAssignedResources(), + deserialized.getAssignedResources()); + + } catch (IOException e) { + e.printStackTrace(); + Assert.fail(String.format("Serialization of test AssignedResources " + + "failed with %s", e.getMessage())); + } + } + + @Test + public void testAssignedResourcesCanDeserializePreviouslySerializedValues() { + try { + byte[] serializedString = toBytes(testResources.getAssignedResources()); + + ResourceMappings.AssignedResources deserialized = + ResourceMappings.AssignedResources.fromBytes(serializedString); + + Assert.assertEquals(testResources.getAssignedResources(), + deserialized.getAssignedResources()); + + } catch (IOException e) { + e.printStackTrace(); + Assert.fail(String.format("Deserialization of test AssignedResources " + + "failed with %s", e.getMessage())); + } + } + + /** + * This was the legacy way to serialize resources. This is here for + * backward compatibility to ensure that after YARN-9128 we can still + * deserialize previously serialized resources. 
+ * + * @param resources the list of resources + * @return byte array representation of the resource + * @throws IOException + */ + private byte[] toBytes(List resources) throws IOException { + ObjectOutputStream oos = null; + byte[] bytes; + try { + ByteArrayOutputStream bos = new ByteArrayOutputStream(); + oos = new ObjectOutputStream(bos); + oos.writeObject(resources); + bytes = bos.toByteArray(); + } finally { + IOUtils.closeQuietly(oos); + } + return bytes; + } +} \ No newline at end of file diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/MutableConfigurationProvider.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/MutableConfigurationProvider.java index 9e843dfe89c7e..eff8aa8663297 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/MutableConfigurationProvider.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/MutableConfigurationProvider.java @@ -65,6 +65,12 @@ void logAndApplyMutation(UserGroupInformation user, SchedConfUpdateInfo */ Configuration getConfiguration(); + /** + * Get the last updated scheduler config version. + * @return Last updated scheduler config version. + */ + long getConfigVersion() throws Exception; + void formatConfigurationInStore(Configuration conf) throws Exception; /** diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/FSSchedulerConfigurationStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/FSSchedulerConfigurationStore.java index 80053bef9642a..464ef149b1b4c 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/FSSchedulerConfigurationStore.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/FSSchedulerConfigurationStore.java @@ -29,6 +29,7 @@ import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.FSDataInputStream; import org.apache.hadoop.fs.FSDataOutputStream; import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.FileSystem; @@ -62,6 +63,7 @@ public class FSSchedulerConfigurationStore extends YarnConfigurationStore { private volatile Configuration schedConf; private volatile Configuration oldConf; private Path tempConfigPath; + private Path configVersionFile; @Override public void initialize(Configuration conf, Configuration vSchedConf, @@ -99,9 +101,17 @@ public boolean accept(Path path) { } } + this.configVersionFile = new Path(schedulerConfPathStr, "ConfigVersion"); + if (!fileSystem.exists(configVersionFile)) { + fileSystem.createNewFile(configVersionFile); + writeConfigVersion(0L); + } + // create capacity-schedule.xml.ts file if not existing if (this.getConfigFileInputStream() == 
null) { writeConfigurationToFileSystem(vSchedConf); + long configVersion = getConfigVersion() + 1L; + writeConfigVersion(configVersion); } this.schedConf = this.getConfigurationFromFileSystem(); @@ -141,6 +151,8 @@ public void confirmMutation(boolean isValid) throws Exception { } if (isValid) { finalizeFileSystemFile(); + long configVersion = getConfigVersion() + 1L; + writeConfigVersion(configVersion); } else { schedConf = oldConf; removeTmpConfigFile(); @@ -158,7 +170,15 @@ private void finalizeFileSystemFile() throws IOException { @Override public void format() throws Exception { - fileSystem.delete(schedulerConfDir, true); + FileStatus[] fileStatuses = fileSystem.listStatus(this.schedulerConfDir, + this.configFilePathFilter); + if (fileStatuses == null) { + return; + } + for (int i = 0; i < fileStatuses.length; i++) { + fileSystem.delete(fileStatuses[i].getPath(), false); + LOG.info("delete config file " + fileStatuses[i].getPath()); + } } private Path getFinalConfigPath(Path tempPath) { @@ -222,6 +242,27 @@ private Path getLatestConfigPath() throws IOException { return fileStatuses[fileStatuses.length - 1].getPath(); } + private void writeConfigVersion(long configVersion) throws IOException { + try (FSDataOutputStream out = fileSystem.create(configVersionFile, true)) { + out.writeLong(configVersion); + } catch (IOException e) { + LOG.info("Failed to write config version at {}", configVersionFile, e); + throw e; + } + } + + @Override + public long getConfigVersion() throws Exception { + try (FSDataInputStream in = fileSystem.open(configVersionFile)) { + return in.readLong(); + } catch (IOException e) { + LOG.info("Failed to read config version at {}", configVersionFile, e); + throw e; + } + } + + + @VisibleForTesting private Path writeTmpConfig(Configuration vSchedConf) throws IOException { long start = Time.monotonicNow(); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/InMemoryConfigurationStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/InMemoryConfigurationStore.java index 4871443e54a94..47dd6bdfe617f 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/InMemoryConfigurationStore.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/InMemoryConfigurationStore.java @@ -33,11 +33,13 @@ public class InMemoryConfigurationStore extends YarnConfigurationStore { private Configuration schedConf; private LogMutation pendingMutation; + private long configVersion; @Override public void initialize(Configuration conf, Configuration schedConf, RMContext rmContext) { this.schedConf = schedConf; + this.configVersion = 1L; } @Override @@ -56,6 +58,7 @@ public void confirmMutation(boolean isValid) { schedConf.set(kv.getKey(), kv.getValue()); } } + this.configVersion = this.configVersion + 1L; } pendingMutation = null; } @@ -70,6 +73,11 @@ public synchronized Configuration retrieve() { return schedConf; } + @Override + public long getConfigVersion() { + return configVersion; + } + @Override public List getConfirmedConfHistory(long fromId) { // Unimplemented. 
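Taken together, the configuration stores above implement a simple contract: the version is seeded when the store is bootstrapped and incremented exactly once per confirmed, valid mutation. The persistent stores (filesystem, LevelDB, ZooKeeper) keep the counter in their backing storage so it survives restarts, while the in-memory store keeps it in a field. A minimal standalone sketch of that contract, using a hypothetical class rather than the actual YARN types:

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical, simplified illustration of the config-version contract. */
public class VersionedConfigStore {
  private final Map<String, String> conf = new HashMap<>();
  private Map<String, String> pendingMutation;
  private long configVersion;

  public void initialize(Map<String, String> initialConf) {
    conf.putAll(initialConf);
    configVersion = 1L;               // bootstrapping counts as the first version
  }

  public void logMutation(Map<String, String> updates) {
    pendingMutation = updates;        // stage the change, do not apply yet
  }

  public void confirmMutation(boolean isValid) {
    if (isValid && pendingMutation != null) {
      conf.putAll(pendingMutation);
      configVersion++;                // bump only for confirmed, valid changes
    }
    pendingMutation = null;
  }

  public long getConfigVersion() {
    return configVersion;             // value surfaced via /scheduler-conf/version
  }
}
```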
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/LeveldbConfigurationStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/LeveldbConfigurationStore.java index 743d7ef45a854..2966c948d2e59 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/LeveldbConfigurationStore.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/LeveldbConfigurationStore.java @@ -68,8 +68,11 @@ public class LeveldbConfigurationStore extends YarnConfigurationStore { private static final String DB_NAME = "yarn-conf-store"; private static final String LOG_KEY = "log"; private static final String VERSION_KEY = "version"; + private static final String CONF_VERSION_NAME = "conf-version-store"; + private static final String CONF_VERSION_KEY = "conf-version"; private DB db; + private DB versiondb; private long maxLogs; private Configuration conf; private LogMutation pendingMutation; @@ -102,11 +105,11 @@ public void initialize(Configuration config, Configuration schedConf, public void format() throws Exception { close(); FileSystem fs = FileSystem.getLocal(conf); - fs.delete(getStorageDir(), true); + fs.delete(getStorageDir(DB_NAME), true); } private void initDatabase(Configuration config) throws Exception { - Path storeRoot = createStorageDir(); + Path storeRoot = createStorageDir(DB_NAME); Options options = new Options(); options.createIfMissing(false); options.comparator(new DBComparator() { @@ -142,6 +145,29 @@ public byte[] findShortSuccessor(byte[] key) { } }); + Path confVersion = createStorageDir(CONF_VERSION_NAME); + Options confOptions = new Options(); + confOptions.createIfMissing(false); + LOG.info("Using conf version at " + confVersion); + File confVersionFile = new File(confVersion.toString()); + try { + versiondb = JniDBFactory.factory.open(confVersionFile, confOptions); + } catch (NativeDB.DBException e) { + if (e.isNotFound() || e.getMessage().contains(" does not exist ")) { + LOG.info("Creating conf version at " + confVersionFile); + confOptions.createIfMissing(true); + try { + versiondb = JniDBFactory.factory.open(confVersionFile, confOptions); + versiondb.put(bytes(CONF_VERSION_KEY), bytes(String.valueOf(0))); + } catch (DBException dbErr) { + throw new IOException(dbErr.getMessage(), dbErr); + } + } else { + throw e; + } + } + + LOG.info("Using conf database at " + storeRoot); File dbfile = new File(storeRoot.toString()); try { @@ -158,6 +184,9 @@ public byte[] findShortSuccessor(byte[] key) { initBatch.put(bytes(kv.getKey()), bytes(kv.getValue())); } db.write(initBatch); + long configVersion = getConfigVersion() + 1L; + versiondb.put(bytes(CONF_VERSION_KEY), + bytes(String.valueOf(configVersion))); } catch (DBException dbErr) { throw new IOException(dbErr.getMessage(), dbErr); } @@ -167,20 +196,20 @@ public byte[] findShortSuccessor(byte[] key) { } } - private Path createStorageDir() throws IOException { - Path root = getStorageDir(); + private Path createStorageDir(String storageName) throws IOException { + Path root = getStorageDir(storageName); FileSystem fs = FileSystem.getLocal(conf); 
fs.mkdirs(root, new FsPermission((short) 0700)); return root; } - private Path getStorageDir() throws IOException { + private Path getStorageDir(String storageName) throws IOException { String storePath = conf.get(YarnConfiguration.RM_SCHEDCONF_STORE_PATH); if (storePath == null) { throw new IOException("No store location directory configured in " + YarnConfiguration.RM_SCHEDCONF_STORE_PATH); } - return new Path(storePath, DB_NAME); + return new Path(storePath, storageName); } @Override @@ -188,6 +217,9 @@ public void close() throws IOException { if (db != null) { db.close(); } + if (versiondb != null) { + versiondb.close(); + } } @Override @@ -213,6 +245,9 @@ public void confirmMutation(boolean isValid) throws IOException { updateBatch.put(bytes(changes.getKey()), bytes(changes.getValue())); } } + long configVersion = getConfigVersion() + 1L; + versiondb.put(bytes(CONF_VERSION_KEY), + bytes(String.valueOf(configVersion))); } db.write(updateBatch); pendingMutation = null; @@ -258,6 +293,13 @@ public synchronized Configuration retrieve() { return config; } + @Override + public long getConfigVersion() { + String version = new String(versiondb.get(bytes(CONF_VERSION_KEY)), + StandardCharsets.UTF_8); + return Long.parseLong(version); + } + @Override public List getConfirmedConfHistory(long fromId) { return null; // unimplemented diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/MutableCSConfigurationProvider.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/MutableCSConfigurationProvider.java index 41b9b2579fa25..f464b2ca7c9f5 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/MutableCSConfigurationProvider.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/MutableCSConfigurationProvider.java @@ -134,6 +134,11 @@ public Configuration getConfiguration() { return new Configuration(schedConf); } + @Override + public long getConfigVersion() throws Exception { + return confStore.getConfigVersion(); + } + @Override public ConfigurationMutationACLPolicy getAclMutationPolicy() { return aclMutationPolicy; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/YarnConfigurationStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/YarnConfigurationStore.java index 334c962807142..6af11a31d6bc0 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/YarnConfigurationStore.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/YarnConfigurationStore.java @@ -132,6 +132,12 @@ public void close() throws IOException {} */ public abstract void format() throws Exception; + /** + * Get the last 
updated config version. + * @return Last updated config version. + */ + public abstract long getConfigVersion() throws Exception; + /** * Get a list of confirmed configuration mutations starting from a given id. * @param fromId id from which to start getting mutations, inclusive diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/ZKConfigurationStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/ZKConfigurationStore.java index d3fab3982473e..1aee4159a3802 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/ZKConfigurationStore.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/ZKConfigurationStore.java @@ -62,11 +62,13 @@ public class ZKConfigurationStore extends YarnConfigurationStore { private static final String LOGS_PATH = "LOGS"; private static final String CONF_STORE_PATH = "CONF_STORE"; private static final String FENCING_PATH = "FENCING"; + private static final String CONF_VERSION_PATH = "CONF_VERSION"; private String zkVersionPath; private String logsPath; private String confStorePath; private String fencingNodePath; + private String confVersionPath; @VisibleForTesting protected ZKCuratorManager zkManager; @@ -89,6 +91,7 @@ public void initialize(Configuration config, Configuration schedConf, this.logsPath = getNodePath(znodeParentPath, LOGS_PATH); this.confStorePath = getNodePath(znodeParentPath, CONF_STORE_PATH); this.fencingNodePath = getNodePath(znodeParentPath, FENCING_PATH); + this.confVersionPath = getNodePath(znodeParentPath, CONF_VERSION_PATH); zkManager.createRootDirRecursively(znodeParentPath, zkAcl); zkManager.delete(fencingNodePath); @@ -99,6 +102,11 @@ public void initialize(Configuration config, Configuration schedConf, serializeObject(new LinkedList()), -1); } + if (!zkManager.exists(confVersionPath)) { + zkManager.create(confVersionPath); + zkManager.setData(confVersionPath, String.valueOf(0), -1); + } + if (!zkManager.exists(confStorePath)) { zkManager.create(confStorePath); HashMap mapSchedConf = new HashMap<>(); @@ -106,6 +114,8 @@ public void initialize(Configuration config, Configuration schedConf, mapSchedConf.put(entry.getKey(), entry.getValue()); } zkManager.setData(confStorePath, serializeObject(mapSchedConf), -1); + long configVersion = getConfigVersion() + 1L; + zkManager.setData(confVersionPath, String.valueOf(configVersion), -1); } } @@ -185,6 +195,9 @@ public void confirmMutation(boolean isValid) } zkManager.safeSetData(confStorePath, serializeObject(mapConf), -1, zkAcl, fencingNodePath); + long configVersion = getConfigVersion() + 1L; + zkManager.setData(confVersionPath, String.valueOf(configVersion), -1); + } pendingMutation = null; } @@ -213,6 +226,11 @@ public synchronized Configuration retrieve() { return null; } + @Override + public long getConfigVersion() throws Exception { + return Long.parseLong(zkManager.getStringData(confVersionPath)); + } + @Override public List getConfirmedConfHistory(long fromId) { return null; // unimplemented diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWSConsts.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWSConsts.java index 6cc1e29f24a1e..ab481403bd0eb 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWSConsts.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWSConsts.java @@ -51,6 +51,9 @@ public final class RMWSConsts { /** Path for {@code RMWebServices#formatSchedulerConfiguration}. */ public static final String FORMAT_SCHEDULER_CONF = "/scheduler-conf/format"; + /** Path for {@code RMWebServices#getSchedulerConfigurationVersion}. */ + public static final String SCHEDULER_CONF_VERSION = "/scheduler-conf/version"; + /** Path for {@code RMWebServiceProtocol#dumpSchedulerLogs}. */ public static final String SCHEDULER_LOGS = "/scheduler/logs"; diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java index d1e04fa56eba0..bb77dbd7a756f 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java @@ -196,6 +196,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.SchedulerInfo; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.SchedulerTypeInfo; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.StatisticsItemInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ConfigVersionInfo; import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ConfInfo; import org.apache.hadoop.yarn.server.security.ApplicationACLsManager; import org.apache.hadoop.yarn.server.utils.BuilderUtils; @@ -2590,7 +2591,7 @@ public Response formatSchedulerConfiguration(@Context HttpServletRequest hsr) } } else { return Response.status(Status.BAD_REQUEST) - .entity("Configuration change only supported by " + + .entity("Scheduler Configuration format only supported by " + "MutableConfScheduler.").build(); } } @@ -2680,6 +2681,39 @@ public Response getSchedulerConfiguration(@Context HttpServletRequest hsr) } } + @GET + @Path(RMWSConsts.SCHEDULER_CONF_VERSION) + @Produces({ MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, + MediaType.APPLICATION_XML + "; " + JettyUtils.UTF_8 }) + public Response getSchedulerConfigurationVersion(@Context + HttpServletRequest hsr) throws AuthorizationException { + // Only admin user is allowed to get scheduler conf version + UserGroupInformation callerUGI = getCallerUserGroupInformation(hsr, true); + initForWritableEndpoints(callerUGI, true); + + ResourceScheduler scheduler = rm.getResourceScheduler(); + if (scheduler instanceof MutableConfScheduler + && ((MutableConfScheduler) 
scheduler).isConfigurationMutable()) { + MutableConfigurationProvider mutableConfigurationProvider = + ((MutableConfScheduler) scheduler).getMutableConfProvider(); + + try { + long configVersion = mutableConfigurationProvider + .getConfigVersion(); + return Response.status(Status.OK) + .entity(new ConfigVersionInfo(configVersion)).build(); + } catch (Exception e) { + LOG.error("Exception thrown when fetching configuration version.", e); + return Response.status(Status.BAD_REQUEST).entity(e.getMessage()) + .build(); + } + } else { + return Response.status(Status.BAD_REQUEST) + .entity("Configuration Version only supported by " + + "MutableConfScheduler.").build(); + } + } + @GET @Path(RMWSConsts.CHECK_USER_ACCESS_TO_QUEUE) @Produces({ MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8, diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ConfigVersionInfo.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ConfigVersionInfo.java new file mode 100644 index 0000000000000..50a2728c2204e --- /dev/null +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ConfigVersionInfo.java @@ -0,0 +1,44 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.yarn.server.resourcemanager.webapp.dao; + +import javax.xml.bind.annotation.XmlAccessType; +import javax.xml.bind.annotation.XmlAccessorType; +import javax.xml.bind.annotation.XmlRootElement; + +/** + * Version of Scheduler Config. 
+ */ +@XmlRootElement(name = "configversion") +@XmlAccessorType(XmlAccessType.FIELD) +public class ConfigVersionInfo { + + private long versionID; + + public ConfigVersionInfo() { + } // JAXB needs this + + public ConfigVersionInfo(long version) { + this.versionID = version; + } + + public long getVersionID() { + return this.versionID; + } + +} diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/TestFSSchedulerConfigurationStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/TestFSSchedulerConfigurationStore.java index f3d5e745b1fa0..33596c38d5eb2 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/TestFSSchedulerConfigurationStore.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/TestFSSchedulerConfigurationStore.java @@ -37,7 +37,6 @@ import static org.junit.Assert.assertEquals; import static org.junit.Assert.fail; import static org.junit.Assert.assertTrue; -import static org.junit.Assert.assertFalse; /** @@ -140,7 +139,6 @@ public void confirmMutationWithInValid() throws Exception { @Test public void testFormatConfiguration() throws Exception { - assertTrue(testSchedulerConfigurationDir.exists()); Configuration schedulerConf = new Configuration(); schedulerConf.set("a", "a"); writeConf(schedulerConf); @@ -148,7 +146,15 @@ public void testFormatConfiguration() throws Exception { Configuration storedConfig = configurationStore.retrieve(); assertEquals("a", storedConfig.get("a")); configurationStore.format(); - assertFalse(testSchedulerConfigurationDir.exists()); + boolean exceptionCaught = false; + try { + storedConfig = configurationStore.retrieve(); + } catch (IOException e) { + if (e.getMessage().contains("no capacity scheduler file in")) { + exceptionCaught = true; + } + } + assertTrue(exceptionCaught); } @Test diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/TestZKConfigurationStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/TestZKConfigurationStore.java index f71c4e7a9d651..eae80d500eff1 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/TestZKConfigurationStore.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/TestZKConfigurationStore.java @@ -137,6 +137,21 @@ public void testFormatConfiguration() throws Exception { assertNull(confStore.retrieve()); } + @Test + public void testGetConfigurationVersion() throws Exception { + confStore.initialize(conf, schedConf, rmContext); + long v1 = confStore.getConfigVersion(); + assertEquals(1, v1); + Map update = new HashMap<>(); + update.put("keyver", "valver"); + YarnConfigurationStore.LogMutation mutation = 
+ new YarnConfigurationStore.LogMutation(update, TEST_USER); + confStore.logMutation(mutation); + confStore.confirmMutation(true); + long v2 = confStore.getConfigVersion(); + assertEquals(2, v2); + } + @Test public void testPersistUpdatedConfiguration() throws Exception { confStore.initialize(conf, schedConf, rmContext); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesConfigurationMutation.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesConfigurationMutation.java index 67f83c8d647de..c717d8b84e438 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesConfigurationMutation.java +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesConfigurationMutation.java @@ -202,6 +202,25 @@ public void testFormatSchedulerConf() throws Exception { assertEquals(3, orgConf.getQueues("root").length); } + private long getConfigVersion() throws Exception { + WebResource r = resource(); + ClientResponse response = r.path("ws").path("v1").path("cluster") + .queryParam("user.name", userName) + .path(RMWSConsts.SCHEDULER_CONF_VERSION) + .accept(MediaType.APPLICATION_JSON).get(ClientResponse.class); + assertEquals(Status.OK.getStatusCode(), response.getStatus()); + + JSONObject json = response.getEntity(JSONObject.class); + return Long.parseLong(json.get("versionID").toString()); + } + + @Test + public void testSchedulerConfigVersion() throws Exception { + assertEquals(1, getConfigVersion()); + testAddNestedQueue(); + assertEquals(2, getConfigVersion()); + } + @Test public void testAddNestedQueue() throws Exception { CapacitySchedulerConfiguration orgConf = getSchedulerConf(); diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/DockerContainers.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/DockerContainers.md index e30ac9808ebcf..db9c56d99ee7b 100644 --- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/DockerContainers.md +++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/DockerContainers.md @@ -285,6 +285,7 @@ are allowed. It contains the following properties: | `docker.inspect.max.retries` | Integer value to check docker container readiness. Each inspection is set with 3 seconds delay. Default value of 10 will wait 30 seconds for docker container to become ready before marked as container failed. | | `docker.no-new-privileges.enabled` | Enable/disable the no-new-privileges flag for docker run. Set to "true" to enable, disabled by default. | | `docker.allowed.runtimes` | Comma seperated runtimes that containers are allowed to use. By default no runtimes are allowed to be added.| +| `docker.service-mode.enabled` | Set to "true" or "false" to enable or disable docker container service mode. Default value is "false". | Please note that if you wish to run Docker containers that require access to the YARN local directories, you must add them to the docker.allowed.rw-mounts list. 
@@ -436,6 +437,7 @@ environment variables in the application's environment: | `YARN_CONTAINER_RUNTIME_DOCKER_TMPFS_MOUNTS` | Adds additional tmpfs mounts to the Docker container. The value of the environment variable should be a comma-separated list of absolute mount points within the container. | | `YARN_CONTAINER_RUNTIME_DOCKER_DELAYED_REMOVAL` | Allows a user to request delayed deletion of the Docker container on a per container basis. If true, Docker containers will not be removed until the duration defined by yarn.nodemanager.delete.debug-delay-sec has elapsed. Administrators can disable this feature through the yarn-site property yarn.nodemanager.runtime.linux.docker.delayed-removal.allowed. This feature is disabled by default. When this feature is disabled or set to false, the container will be removed as soon as it exits. | | `YARN_CONTAINER_RUNTIME_YARN_SYSFS_ENABLE` | Enable mounting of container working directory sysfs sub-directory into Docker container /hadoop/yarn/sysfs. This is useful for populating cluster information into container. | +| `YARN_CONTAINER_RUNTIME_DOCKER_SERVICE_MODE` | Enables service mode, which runs the Docker container as defined by the image but does not set the user (--user and --group-add). | The first two are required. The remainder can be set as needed. While controlling the container type through environment variables is somewhat less @@ -1080,3 +1082,24 @@ YARN service framework automatically populates cluster information to /hadoop/yarn/sysfs/app.json. For more information about YARN service, see: [YARN Service](./yarn-service/Overview.html). +Docker Container Service Mode +----------------------------- + +Docker Container Service Mode runs the container as defined by the image +but does not set the user (--user and --group-add). This mode is disabled +by default. The administrator enables it by setting docker.service-mode.enabled to true +in the [docker] section of container-executor.cfg. + +Part of a container-executor.cfg that allows Docker service mode is shown below: + +``` +yarn.nodemanager.linux-container-executor.group=yarn +[docker] + module.enabled=true + docker.privileged-containers.enabled=true + docker.service-mode.enabled=true +``` + +An application user can enable or disable service mode at the job level by exporting the +environment variable YARN_CONTAINER_RUNTIME_DOCKER_SERVICE_MODE in the application's +environment with the value true or false respectively.
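For illustration only (not part of this patch), an application master could opt its containers into service mode by putting the documented variables into the environment map of the ContainerLaunchContext it builds; the image name below is just an example:

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative sketch: environment an application would set to request service mode. */
public class ServiceModeEnvExample {
  public static void main(String[] args) {
    Map<String, String> containerEnv = new HashMap<>();
    containerEnv.put("YARN_CONTAINER_RUNTIME_TYPE", "docker");
    containerEnv.put("YARN_CONTAINER_RUNTIME_DOCKER_IMAGE", "httpd:2.4"); // example image
    containerEnv.put("YARN_CONTAINER_RUNTIME_DOCKER_SERVICE_MODE", "true");
    // This map would be passed to the ContainerLaunchContext for the container.
    containerEnv.forEach((k, v) -> System.out.println(k + "=" + v));
  }
}
```

Service mode takes effect only when the administrator has also enabled docker.service-mode.enabled in container-executor.cfg, as shown above.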