Merged
18 commits
be901f4
YARN-9873. Mutation API Config Change need to update Version Number. …
sunilgovind Oct 9, 2019
35f093f
YARN-9356. Add more tests to ratio method in TestResourceCalculator. …
szilard-nemeth Oct 9, 2019
6f1ab95
YARN-9128. Use SerializationUtils from apache commons to serialize / …
szilard-nemeth Oct 9, 2019
1f954e6
HDDS-2217. Remove log4j and audit configuration from the docker-confi…
Oct 9, 2019
4b0a5bc
HDDS-2217. Remove log4j and audit configuration from the docker-confi…
adoroszlai Oct 9, 2019
b034350
Squashed commit of the following:
elek Oct 9, 2019
2d81abc
HDDS-2265. integration.sh may report false negative
adoroszlai Oct 9, 2019
d76e265
HDFS-14754. Erasure Coding : The number of Under-Replicated Blocks ne…
surendralilhore Oct 9, 2019
eeb58a0
HDFS-14898. Use Relative URLS in Hadoop HDFS HTTP FS. Contributed by …
ayushtkn Oct 9, 2019
a031388
HDDS-2266. Avoid evaluation of LOG.trace and LOG.debug statement in t…
swagle Oct 10, 2019
104ccca
HDFS-14900. Fix build failure of hadoop-hdfs-native-client. Contribut…
ayushtkn Oct 10, 2019
effe608
HADOOP-16650. ITestS3AClosedFS failing.
steveloughran Oct 10, 2019
4850b3a
HDDS-2269. Provide config for fair/non-fair for OM RW Lock. (#1623)
bharatviswa504 Oct 10, 2019
957253f
HDDS-1984. Fix listBucket API. (#1555)
bharatviswa504 Oct 10, 2019
7a4b3d4
HADOOP-15870. S3AInputStream.remainingInFile should use nextReadPos.
lqjack Oct 10, 2019
31e0122
YARN-9860. Enable service mode for Docker containers on YARN
macroadster Oct 10, 2019
9c72bf4
HDDS-1986. Fix listkeys API. (#1588)
bharatviswa504 Oct 10, 2019
f267917
HDDS-2282. scmcli pipeline list command throws NullPointerException. …
xiaoyuyao Oct 11, 2019
21 changes: 21 additions & 0 deletions BUILDING.txt
@@ -6,6 +6,7 @@ Requirements:
* Unix System
* JDK 1.8
* Maven 3.3 or later
* Protocol Buffers 3.7.1 (if compiling native code)
* CMake 3.1 or newer (if compiling native code)
* Zlib devel (if compiling native code)
* Cyrus SASL devel (if compiling native code)
@@ -61,6 +62,16 @@ Installing required packages for clean install of Ubuntu 14.04 LTS Desktop:
$ sudo apt-get -y install maven
* Native libraries
$ sudo apt-get -y install build-essential autoconf automake libtool cmake zlib1g-dev pkg-config libssl-dev libsasl2-dev
* Protocol Buffers 3.7.1 (required to build native code)
$ mkdir -p /opt/protobuf-3.7-src \
&& curl -L -s -S \
https://github.com/protocolbuffers/protobuf/releases/download/v3.7.1/protobuf-java-3.7.1.tar.gz \
-o /opt/protobuf-3.7.1.tar.gz \
&& tar xzf /opt/protobuf-3.7.1.tar.gz --strip-components 1 -C /opt/protobuf-3.7-src \
&& cd /opt/protobuf-3.7-src \
&& ./configure \
&& make install \
&& rm -rf /opt/protobuf-3.7-src

Optional packages:

@@ -384,6 +395,15 @@ Installing required dependencies for clean install of macOS 10.14:
* Install native libraries, only openssl is required to compile native code,
you may optionally install zlib, lz4, etc.
$ brew install openssl
* Protocol Buffers 3.7.1 (required to compile native code)
$ wget https://github.com/protocolbuffers/protobuf/releases/download/v3.7.1/protobuf-java-3.7.1.tar.gz
$ mkdir -p protobuf-3.7 && tar zxvf protobuf-java-3.7.1.tar.gz --strip-components 1 -C protobuf-3.7
$ cd protobuf-3.7
$ ./configure
$ make
$ make check
$ make install
$ protoc --version

Note that building Hadoop 3.1.1/3.1.2/3.2.0 native code from source is broken
on macOS. For 3.1.1/3.1.2, you need to manually backport YARN-8622. For 3.2.0,
@@ -409,6 +429,7 @@ Requirements:
* Windows System
* JDK 1.8
* Maven 3.0 or later
* Protocol Buffers 3.7.1
* CMake 3.1 or newer
* Visual Studio 2010 Professional or Higher
* Windows SDK 8.1 (if building CPU rate control for the container executor)
17 changes: 17 additions & 0 deletions dev-support/docker/Dockerfile
@@ -105,6 +105,23 @@ RUN mkdir -p /opt/cmake \
ENV CMAKE_HOME /opt/cmake
ENV PATH "${PATH}:/opt/cmake/bin"

######
# Install Google Protobuf 3.7.1 (2.6.0 ships with Xenial)
######
# hadolint ignore=DL3003
RUN mkdir -p /opt/protobuf-src \
&& curl -L -s -S \
https://github.com/protocolbuffers/protobuf/releases/download/v3.7.1/protobuf-java-3.7.1.tar.gz \
-o /opt/protobuf.tar.gz \
&& tar xzf /opt/protobuf.tar.gz --strip-components 1 -C /opt/protobuf-src \
&& cd /opt/protobuf-src \
&& ./configure --prefix=/opt/protobuf \
&& make install \
&& cd /root \
&& rm -rf /opt/protobuf-src
ENV PROTOBUF_HOME /opt/protobuf
ENV PATH "${PATH}:/opt/protobuf/bin"

######
# Install Apache Maven 3.3.9 (3.3.9 ships with Xenial)
######
@@ -119,6 +119,59 @@ Return the data at the current position.
else
result = -1

### <a name="InputStream.available"></a> `InputStream.available()`

Returns an estimate of the number of bytes that can be read from the stream
before `read()` blocks on any IO (i.e. before the calling thread is
potentially suspended for some time).

That is: for all values `v` returned by `available()`, `read(buffer, 0, v)`
should not block.

#### Postconditions

```python
if len(data) == 0:
  result = 0

elif pos >= len(data):
  result = 0

else:
  d = "the amount of data known to be already buffered/cached locally"
  result = max(1, d)  # optional but recommended: see below.
```

As `0` always meets this condition, it is nominally possible for an
implementation to simply return `0`. However, this is not considered
useful, and some applications/libraries expect a positive number.

#### The GZip problem.

[JDK-7036144](http://bugs.java.com/bugdatabase/view_bug.do?bug_id=7036144),
"GZIPInputStream readTrailer uses faulty available() test for end-of-stream",
discusses how the JDK's GZip code uses `available()` to detect an EOF,
in a loop similar to the following:

```java
while (instream.available() > 0) {
process(instream.read());
}
```

The correct loop would have been:

```java
int r;
while((r=instream.read()) >= 0) {
process(r);
}
```

If `available()` ever returns 0, then the gzip loop halts prematurely.

For this reason, implementations *should* return a value >= 1, even if
this breaks the requirement that `available()` return only the amount of
data guaranteed not to block on reads.
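That trade-off can be illustrated with a hedged sketch. The wrapper class below is hypothetical (not part of Hadoop or the JDK): it keeps `available()` at a minimum of 1 until a `read()` has actually observed EOF, which is enough to keep an `available()`-driven loop alive. Note the sketch only tracks the single-byte `read()`; a real implementation would also cover the array `read` overloads.

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

/** Hypothetical wrapper: available() stays >= 1 until EOF has been read. */
class OptimisticAvailableStream extends FilterInputStream {
  private boolean eofSeen;

  OptimisticAvailableStream(InputStream in) {
    super(in);
  }

  @Override
  public int read() throws IOException {
    int b = super.read();
    if (b < 0) {
      eofSeen = true;      // remember that EOF was actually observed
    }
    return b;
  }

  @Override
  public int available() throws IOException {
    // Report at least 1 until read() has returned -1, even when nothing
    // is buffered locally; this keeps available()-based loops alive.
    return eofSeen ? 0 : Math.max(1, super.available());
  }
}

public class AvailableDemo {
  public static void main(String[] args) throws IOException {
    InputStream in = new OptimisticAvailableStream(
        new ByteArrayInputStream(new byte[]{1, 2, 3}));
    // The GZip-style loop from above now terminates correctly:
    // available() only drops to 0 after a read has hit EOF.
    while (in.available() > 0) {
      int b = in.read();
      if (b >= 0) {
        System.out.println(b);
      }
    }
  }
}
```

The cost, as the text says, is that `available()` may claim a byte is readable when the next `read()` would in fact block (or return -1).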

### <a name="InputStream.read.buffer[]"></a> `InputStream.read(buffer[], offset, length)`

Expand Up @@ -32,6 +32,7 @@
import java.io.IOException;
import java.util.Random;

import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY;
import static org.apache.hadoop.fs.contract.ContractTestUtils.createFile;
import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
import static org.apache.hadoop.fs.contract.ContractTestUtils.skip;
@@ -99,14 +100,18 @@ public void testSeekZeroByteFile() throws Throwable {
describe("seek and read a 0 byte file");
instream = getFileSystem().open(zeroByteFile);
assertEquals(0, instream.getPos());
assertAvailableIsZero(instream);
//expect initial read to fail
int result = instream.read();
assertMinusOne("initial byte read", result);
assertAvailableIsZero(instream);
byte[] buffer = new byte[1];
//expect that seek to 0 works
instream.seek(0);
assertAvailableIsZero(instream);
//reread, expect same exception
result = instream.read();
assertAvailableIsZero(instream);
assertMinusOne("post-seek byte read", result);
result = instream.read(buffer, 0, 1);
assertMinusOne("post-seek buffer read", result);
@@ -132,8 +137,8 @@ public void testBlockReadZeroByteFile() throws Throwable {
@Test
public void testSeekReadClosedFile() throws Throwable {
instream = getFileSystem().open(smallSeekFile);
getLogger().debug(
"Stream is of type " + instream.getClass().getCanonicalName());
getLogger().debug("Stream is of type {}",
instream.getClass().getCanonicalName());
instream.close();
try {
instream.seek(0);
@@ -168,10 +173,26 @@ public void testSeekReadClosedFile() throws Throwable {
try {
long offset = instream.getPos();
} catch (IOException e) {
// its valid to raise error here; but the test is applied to make
// it is valid to raise error here; but the test is applied to make
// sure there's no other exception like an NPE.

}
// a closed stream should either fail or return 0 bytes.
try {
int a = instream.available();
LOG.info("available() returns a value on a closed file: {}", a);
assertAvailableIsZero(instream);
} catch (IOException | IllegalStateException expected) {
// expected
}
//and close again
instream.close();
}
@@ -205,6 +226,7 @@ public void testSeekFile() throws Throwable {
//expect that seek to 0 works
instream.seek(0);
int result = instream.read();
assertAvailableIsPositive(instream);
assertEquals(0, result);
assertEquals(1, instream.read());
assertEquals(2, instream.getPos());
@@ -226,13 +248,24 @@ public void testSeekAndReadPastEndOfFile() throws Throwable {
//go just before the end
instream.seek(TEST_FILE_LEN - 2);
assertTrue("Premature EOF", instream.read() != -1);
assertAvailableIsPositive(instream);
assertTrue("Premature EOF", instream.read() != -1);
checkAvailabilityAtEOF();
assertMinusOne("read past end of file", instream.read());
}

/**
* This can be overridden if a filesystem always returns a value of 1
* or more from available(), even at the end of the file.
* @throws IOException IO failure
*/
protected void checkAvailabilityAtEOF() throws IOException {
assertAvailableIsZero(instream);
}

@Test
public void testSeekPastEndOfFileThenReseekAndRead() throws Throwable {
describe("do a seek past the EOF, then verify the stream recovers");
describe("do a seek past the EOF, " +
"then verify the stream recovers");
instream = getFileSystem().open(smallSeekFile);
//go just before the end. This may or may not fail; it may be delayed until the
//read
@@ -261,6 +294,7 @@ public void testSeekPastEndOfFileThenReseekAndRead() throws Throwable {
//now go back and try to read from a valid point in the file
instream.seek(1);
assertTrue("Premature EOF", instream.read() != -1);
assertAvailableIsPositive(instream);
}

/**
@@ -278,6 +312,7 @@ public void testSeekBigFile() throws Throwable {
//expect that seek to 0 works
instream.seek(0);
int result = instream.read();
assertAvailableIsPositive(instream);
assertEquals(0, result);
assertEquals(1, instream.read());
assertEquals(2, instream.read());
@@ -296,6 +331,7 @@
instream.seek(0);
assertEquals(0, instream.getPos());
instream.read();
assertAvailableIsPositive(instream);
assertEquals(1, instream.getPos());
byte[] buf = new byte[80 * 1024];
instream.readFully(1, buf, 0, buf.length);
@@ -314,14 +350,15 @@ public void testPositionedBulkReadDoesntChangePosition() throws Throwable {
instream.seek(39999);
assertTrue(-1 != instream.read());
assertEquals(40000, instream.getPos());

assertAvailableIsPositive(instream);
int v = 256;
byte[] readBuffer = new byte[v];
assertEquals(v, instream.read(128, readBuffer, 0, v));
//have gone back
assertEquals(40000, instream.getPos());
//content is the same too
assertEquals("@40000", block[40000], (byte) instream.read());
assertAvailableIsPositive(instream);
//now verify the picked up data
for (int i = 0; i < 256; i++) {
assertEquals("@" + i, block[i + 128], readBuffer[i]);
@@ -376,6 +413,7 @@ public void testReadFullyZeroByteFile() throws Throwable {
assertEquals(0, instream.getPos());
byte[] buffer = new byte[1];
instream.readFully(0, buffer, 0, 0);
assertAvailableIsZero(instream);
assertEquals(0, instream.getPos());
// seek to 0 read 0 bytes from it
instream.seek(0);
@@ -551,7 +589,9 @@ public void testReadSmallFile() throws Throwable {
fail("Expected an exception, got " + r);
} catch (EOFException e) {
handleExpectedException(e);
} catch (IOException | IllegalArgumentException | IndexOutOfBoundsException e) {
} catch (IOException
| IllegalArgumentException
| IndexOutOfBoundsException e) {
handleRelaxedException("read() with a negative position ",
"EOFException",
e);
@@ -587,6 +627,29 @@ public void testReadAtExactEOF() throws Throwable {
instream = getFileSystem().open(smallSeekFile);
instream.seek(TEST_FILE_LEN -1);
assertTrue("read at last byte", instream.read() > 0);
assertAvailableIsZero(instream);
assertEquals("read just past EOF", -1, instream.read());
}

/**
* Assert that the number of bytes available is zero.
* @param in input stream
*/
protected static void assertAvailableIsZero(FSDataInputStream in)
throws IOException {
assertEquals("stream.available() should be zero",
0, in.available());
}

/**
* Assert that the number of bytes available is greater than zero.
* @param in input stream
*/
protected static void assertAvailableIsPositive(FSDataInputStream in)
throws IOException {
int available = in.available();
assertTrue("stream.available() should be positive but is "
+ available,
available > 0);
}
}
@@ -78,7 +78,9 @@ public class XceiverClientManager implements Closeable {
private boolean isSecurityEnabled;
private final boolean topologyAwareRead;
/**
* Creates a new XceiverClientManager.
* Creates a new XceiverClientManager for a non-secured Ozone cluster.
* For a security-enabled Ozone cluster, clients should use the other
* constructor with a valid CA certificate in PEM string format.
*
* @param conf configuration
*/
@@ -41,8 +41,7 @@
*/
public final class Pipeline {

private static final Logger LOG = LoggerFactory
.getLogger(Pipeline.class);
private static final Logger LOG = LoggerFactory.getLogger(Pipeline.class);
private final PipelineID id;
private final ReplicationType type;
private final ReplicationFactor factor;
@@ -24,7 +24,7 @@
* CacheKey for the RocksDB table.
* @param <KEY>
*/
public class CacheKey<KEY> {
public class CacheKey<KEY> implements Comparable<KEY> {

private final KEY key;

@@ -53,4 +53,13 @@ public boolean equals(Object o) {
public int hashCode() {
return Objects.hash(key);
}

@Override
public int compareTo(Object o) {
if (Objects.equals(key, ((CacheKey<?>) o).key)) {
return 0;
} else {
return key.toString().compareTo((((CacheKey<?>) o).key).toString());
}
}
}
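One consequence of the patch's `toString()` fallback worth noting: non-string keys order lexicographically, not numerically. The `DemoKey` class below is an illustrative stand-in mirroring only the comparison logic, not the actual Ozone `CacheKey` class:

```java
import java.util.Objects;

/** Illustrative stand-in mirroring the comparison logic in this patch. */
class DemoKey<K> implements Comparable<DemoKey<K>> {
  private final K key;

  DemoKey(K key) {
    this.key = key;
  }

  @Override
  public int compareTo(DemoKey<K> o) {
    // Equal keys compare as 0; otherwise fall back to the keys'
    // string representations, as the patch does.
    if (Objects.equals(key, o.key)) {
      return 0;
    }
    return key.toString().compareTo(o.key.toString());
  }

  public static void main(String[] args) {
    // "10" sorts before "9" lexicographically, so Integer keys do not
    // order numerically under this scheme.
    System.out.println(new DemoKey<>(10).compareTo(new DemoKey<>(9)) < 0);
  }
}
```

For the string keys RocksDB tables actually use this is fine; the sketch just makes the ordering assumption explicit.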
@@ -23,6 +23,7 @@
import java.util.Map;
import java.util.NavigableSet;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.ConcurrentSkipListSet;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
@@ -47,15 +48,22 @@
public class TableCacheImpl<CACHEKEY extends CacheKey,
CACHEVALUE extends CacheValue> implements TableCache<CACHEKEY, CACHEVALUE> {

private final ConcurrentHashMap<CACHEKEY, CACHEVALUE> cache;
private final Map<CACHEKEY, CACHEVALUE> cache;
private final NavigableSet<EpochEntry<CACHEKEY>> epochEntries;
private ExecutorService executorService;
private CacheCleanupPolicy cleanupPolicy;



public TableCacheImpl(CacheCleanupPolicy cleanupPolicy) {
cache = new ConcurrentHashMap<>();

// Only the full table cache needs its entries kept in sorted order,
// so that list operations are easy; the other caches can use a hash map.
if (cleanupPolicy == CacheCleanupPolicy.NEVER) {
cache = new ConcurrentSkipListMap<>();
} else {
cache = new ConcurrentHashMap<>();
}
epochEntries = new ConcurrentSkipListSet<>();
// Created a single-thread executor, so only one cleanup will be
// running at a time.
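The map choice above can be sketched with a short, standalone example (the key strings are made up for illustration): a `ConcurrentSkipListMap` iterates its keys in sorted order, which lets a full-table cache answer list operations directly, while `ConcurrentHashMap` trades ordering for cheaper point lookups.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class SortedCacheDemo {
  public static void main(String[] args) {
    // Skip-list map: iteration follows key order, so a "list" operation
    // served straight from the cache comes back sorted for free.
    Map<String, String> fullTableCache = new ConcurrentSkipListMap<>();
    fullTableCache.put("/vol1/bucket1/key3", "v3");
    fullTableCache.put("/vol1/bucket1/key1", "v1");
    fullTableCache.put("/vol1/bucket1/key2", "v2");

    List<String> keys = new ArrayList<>(fullTableCache.keySet());
    System.out.println(keys);
    // prints [/vol1/bucket1/key1, /vol1/bucket1/key2, /vol1/bucket1/key3]

    // Hash map: O(1) lookups, but no iteration-order guarantee; fine for
    // a partial cache that only answers point lookups.
    Map<String, String> partialCache = new ConcurrentHashMap<>();
    partialCache.put("/vol1/bucket1/key1", "v1");
    System.out.println(partialCache.get("/vol1/bucket1/key1"));
  }
}
```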