Merged

Changes from 19 commits

Commits (69)
a8b8dbc
Merge branch 'HDDS-10239-container-reconciliation' into HDDS-10374-sc…
errose28 Nov 22, 2024
999a913
Add code to build and write the tree from the data scanners
errose28 Nov 22, 2024
b0d1ba9
Update todo in acceptance test
errose28 Nov 25, 2024
382bce2
Add unit tests for tree generation by scanners based on container state
errose28 Nov 25, 2024
28b1889
Add initial (failing) unit test for KeyValueContaienrCheck
errose28 Nov 26, 2024
dc182e8
Update container data checksum when building the tree
errose28 Nov 26, 2024
a3401a9
Fix handling of fully truncated block of 0 size
errose28 Jan 7, 2025
a25d44d
Add unit tests for new addBlock method in tree
errose28 Jan 7, 2025
7550a3c
Test that SCM gets a checksum with the container report
errose28 Jan 7, 2025
847f8d8
Add (failing) tests that SCM sees updated checksums
errose28 Jan 7, 2025
452c294
Update acceptance test
errose28 Jan 8, 2025
dc45eca
Add javadoc for tree generation from metadata
errose28 Jan 8, 2025
1cb291f
Data integration tests passing
errose28 Jan 8, 2025
d6b21d2
Don't generate tree from metadata for unhealthy container
errose28 Jan 9, 2025
2a2dbbd
Checkstyle
errose28 Jan 9, 2025
c9a077c
Marking container unhealthy should not write a merkle tree (test fix)
errose28 Jan 9, 2025
0bbbdc5
Checkstyle
errose28 Jan 9, 2025
7b971a9
Address review comments
errose28 Jan 13, 2025
15d6848
Merge branch 'HDDS-10239-container-reconciliation' into HDDS-10374-sc…
errose28 Apr 11, 2025
0989881
Initial use of on demand scan in TestKeyValueHandler
errose28 Apr 11, 2025
834be96
Make on-demand scanner a normal instance
errose28 Apr 15, 2025
e73757e
Register on-demand scan callback in ContainerSet
errose28 Apr 15, 2025
f0d8efe
Migrate scanContainer usage in prod code
errose28 Apr 15, 2025
4cb054c
Switch terminology from error to scan. Add existence checks
errose28 Apr 15, 2025
8abedb6
Update tests
errose28 Apr 15, 2025
577a075
Add unit test for ContainerSet
errose28 Apr 16, 2025
4c8d843
Checkstyle
errose28 Apr 16, 2025
0bd4127
Improve comments and test
errose28 Apr 16, 2025
61fae12
Merge branch 'non-static-on-demand-scan' into HDDS-10374-scanner-buil…
errose28 Apr 16, 2025
61f30f3
WIP migrate reconciliation unit tests
errose28 Apr 17, 2025
192eb7b
Most tests passing
errose28 Apr 23, 2025
0cf79f6
Improve logging in test and prod code
errose28 Apr 28, 2025
8b30f54
Fix tree tracking during reconcile process
errose28 Apr 28, 2025
9c74f4b
Use mixin to standardize scanner operations, log checksum changes in …
errose28 Apr 29, 2025
d550669
Logging improvements
errose28 Apr 29, 2025
97e02ea
Add checksum validation, generate readable data
errose28 Apr 30, 2025
22b41b8
Use tree writer between peer updates. All tests pass
errose28 May 5, 2025
f49a9dd
Wait for on-demand scans to complete in test
errose28 May 5, 2025
f5d4dbf
Improve char data generation, reset scan metrics
errose28 May 5, 2025
1140c90
Update test name
errose28 May 5, 2025
e0aa7cb
Checkstyle
errose28 May 5, 2025
62d7794
Merge branch 'HDDS-10239-container-reconciliation' into HDDS-10374-sc…
errose28 May 6, 2025
9c3b87c
Merge branch 'reconcile-unit-test-framework' into HDDS-10374-scanner-…
errose28 May 6, 2025
9322b4a
Fix TODOs dependent on this patch
errose28 May 13, 2025
9b75957
Rename container scan helper
errose28 May 13, 2025
f615275
Add comment on failure type
errose28 May 13, 2025
dadc829
Fix checkstyle unique to this PR
errose28 May 13, 2025
076a82e
Merge branch 'HDDS-10239-container-reconciliation' into HDDS-10374-sc…
errose28 May 14, 2025
cc55527
Fix sending ICR when only checksum changes (pending test)
errose28 May 14, 2025
35879b4
Updates after reviewing diff
errose28 May 14, 2025
1ab8c14
Add unit test for KeyValueHandler#updateContainerChecksum
errose28 May 14, 2025
6c8be07
Improve and update scanner integration tests
errose28 May 14, 2025
60a1a6e
Add unit tests that checksum update failure does not stop container s…
errose28 May 14, 2025
d035c17
Checkstyle
errose28 May 14, 2025
53336ae
Fix scan gap for unit test
errose28 May 15, 2025
56e7ed4
Merge branch 'HDDS-10239-container-reconciliation' into HDDS-10374-sc…
errose28 May 16, 2025
2504638
Fix metadata scan test
errose28 May 16, 2025
4be9992
Update based on review
errose28 May 19, 2025
c0b89dd
pmd
errose28 May 19, 2025
e24a24e
Update ContainerData checksum info after reconcile with each peer
errose28 May 22, 2025
dc27f74
Support bypassing scan gap (tests are failing)
errose28 May 22, 2025
e2974b4
Checkstyle
errose28 May 27, 2025
34b4b9a
Fix scan gap bug. All tests expected to pass
errose28 May 27, 2025
5fda700
Fix scan gap call
errose28 Jun 2, 2025
de6a757
Use temp dir for test, fix space overflow in CI
errose28 Jun 3, 2025
1574cdd
Add configs in test to support restarting DN and SCM quickly
errose28 Jun 3, 2025
7ff972c
Use standard corruption injection in failing test
errose28 Jun 3, 2025
02a3ac6
Checkstyle
errose28 Jun 3, 2025
54cbf92
Findbugs
errose28 Jun 3, 2025
Files changed
@@ -81,35 +81,45 @@ public void stop() {
* The data merkle tree within the file is replaced with the {@code tree} parameter, but all other content of the
* file remains unchanged.
* Concurrent writes to the same file are coordinated internally.
* This method also updates the container's data checksum in the {@code data} parameter, which will be seen by SCM
* on container reports.
*/
public ContainerProtos.ContainerChecksumInfo writeContainerDataTree(ContainerData data,
ContainerMerkleTreeWriter tree)
throws IOException {
ContainerMerkleTreeWriter tree) throws IOException {
long containerID = data.getContainerID();
// If there is an error generating the tree and we cannot obtain a final checksum, use 0 to indicate a metadata
// failure.
long dataChecksum = 0;
ContainerProtos.ContainerChecksumInfo checksumInfo = null;
Lock writeLock = getLock(containerID);
writeLock.lock();
try {
ContainerProtos.ContainerChecksumInfo.Builder checksumInfoBuilder = null;
try {
// If the file is not present, we will create the data for the first time. This happens under a write lock.
checksumInfoBuilder = readBuilder(data)
.orElse(ContainerProtos.ContainerChecksumInfo.newBuilder());
checksumInfoBuilder = readBuilder(data).orElse(ContainerProtos.ContainerChecksumInfo.newBuilder());
} catch (IOException ex) {
LOG.error("Failed to read container checksum tree file for container {}. Overwriting it with a new instance.",
LOG.error("Failed to read container checksum tree file for container {}. Creating a new instance.",
containerID, ex);
checksumInfoBuilder = ContainerProtos.ContainerChecksumInfo.newBuilder();
}

ContainerProtos.ContainerChecksumInfo checksumInfo = checksumInfoBuilder
ContainerProtos.ContainerMerkleTree treeProto = captureLatencyNs(metrics.getCreateMerkleTreeLatencyNS(),
tree::toProto);
checksumInfoBuilder
.setContainerID(containerID)
.setContainerMerkleTree(captureLatencyNs(metrics.getCreateMerkleTreeLatencyNS(), tree::toProto))
.build();
.setContainerMerkleTree(treeProto);
checksumInfo = checksumInfoBuilder.build();
write(data, checksumInfo);
LOG.debug("Data merkle tree for container {} updated", containerID);
return checksumInfo;
// If write succeeds, update the checksum in memory. Otherwise 0 will be used to indicate the metadata failure.

Review comment (Contributor): Should there be a way to tell whether a checksum generation failed versus the scanner simply not having run yet? Should a failure to generate the checksum be reported as -1?

Review comment (Contributor): As discussed, the in-memory hash should be exactly what is written to disk, since it is an in-memory cache. If updating the Merkle tree is failing, we can add a metric and a log message, and/or a status report to SCM, or a new admin API on the Datanode to query the Merkle tree (there are likely to be multiple debug scenarios where we would want to ask a Datanode what it knows, independent of SCM reports).

dataChecksum = treeProto.getDataChecksum();
LOG.debug("Data merkle tree for container {} updated with container checksum {}", containerID, dataChecksum);
} finally {
// Even if persisting the tree fails, we should still update the data checksum in memory to report back to SCM.
data.setDataChecksum(dataChecksum);
writeLock.unlock();
}
return checksumInfo;
}

/**
@@ -364,8 +374,6 @@ private void write(ContainerData data, ContainerProtos.ContainerChecksumInfo che
throw new IOException("Error occurred when writing container merkle tree for containerID "
+ data.getContainerID(), ex);
}
// Set in-memory data checksum.
data.setDataChecksum(checksumInfo.getContainerMerkleTree().getDataChecksum());
}

/**
@@ -388,7 +396,7 @@ public ContainerMerkleTreeMetrics getMetrics() {
return this.metrics;
}

public static boolean checksumFileExist(Container container) {
public static boolean checksumFileExist(Container<?> container) {
File checksumFile = getContainerChecksumFile(container.getContainerData());
return checksumFile.exists();
}
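For reviewers, a minimal caller-side sketch of the revised writeContainerDataTree contract described above. It is illustrative only: checksumManager, containerData, and the surrounding scan loop are assumed from the rest of this PR, not defined here.

```java
// Sketch only: expected caller-side behaviour of the new contract.
ContainerMerkleTreeWriter tree = new ContainerMerkleTreeWriter();
// ... populate the tree while scanning blocks and chunks ...
try {
  ContainerProtos.ContainerChecksumInfo info =
      checksumManager.writeContainerDataTree(containerData, tree);
  // On success, the container's in-memory data checksum matches the tree that was persisted,
  // so the next container report to SCM carries the real checksum.
} catch (IOException ex) {
  // On failure the tree file was not written; the in-memory checksum is set to 0 inside the
  // finally block of writeContainerDataTree, flagging the metadata failure to SCM.
}
```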
@@ -60,10 +60,28 @@ public ContainerMerkleTreeWriter() {
* If the block entry already exists, the chunks will be added to the existing chunks for that block.
*
* @param blockID The ID of the block that these chunks belong to.
* @param healthy True if there were no errors detected with these chunks. False indicates that all the chunks
* being added had errors.
* @param chunks A list of chunks to add to this block. The chunks will be sorted internally by their offset.
*/
public void addChunks(long blockID, Collection<ContainerProtos.ChunkInfo> chunks) {
id2Block.computeIfAbsent(blockID, BlockMerkleTreeWriter::new).addChunks(chunks);
public void addChunks(long blockID, boolean healthy, ContainerProtos.ChunkInfo... chunks) {
id2Block.computeIfAbsent(blockID, BlockMerkleTreeWriter::new).addChunks(healthy, chunks);
}

public void addChunks(long blockID, boolean healthy, Collection<ContainerProtos.ChunkInfo> chunks) {
for (ContainerProtos.ChunkInfo chunk: chunks) {
addChunks(blockID, healthy, chunk);
}
}

/**
* Adds an empty block to the tree. This method is not a pre-requisite to {@code addChunks}.
* If the block entry already exists, it will not be modified.
*
* @param blockID The ID of the empty block to add to the tree
*/
public void addBlock(long blockID) {
addChunks(blockID, true);
}

/**
@@ -110,11 +128,13 @@ private static class BlockMerkleTreeWriter {
* Adds the specified chunks to this block. The offset value of the chunk must be unique within the block,
* otherwise it will overwrite the previous value at that offset.
*
* @param healthy True if there were no errors detected with these chunks. False indicates that all the chunks
* being added had errors.
* @param chunks A list of chunks to add to this block.
*/
public void addChunks(Collection<ContainerProtos.ChunkInfo> chunks) {
public void addChunks(boolean healthy, ContainerProtos.ChunkInfo... chunks) {
for (ContainerProtos.ChunkInfo chunk: chunks) {
offset2Chunk.put(chunk.getOffset(), new ChunkMerkleTreeWriter(chunk));
offset2Chunk.put(chunk.getOffset(), new ChunkMerkleTreeWriter(chunk, healthy));
}
}

@@ -160,10 +180,10 @@ private static class ChunkMerkleTreeWriter {
private final boolean isHealthy;
private final long dataChecksum;

ChunkMerkleTreeWriter(ContainerProtos.ChunkInfo chunk) {
ChunkMerkleTreeWriter(ContainerProtos.ChunkInfo chunk, boolean healthy) {
length = chunk.getLen();
offset = chunk.getOffset();
isHealthy = true;
isHealthy = healthy;
ChecksumByteBuffer checksumImpl = CHECKSUM_BUFFER_SUPPLIER.get();
for (ByteString checksum: chunk.getChecksumData().getChecksumsList()) {
checksumImpl.update(checksum.asReadOnlyByteBuffer());
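As a quick illustration of the revised writer API above (addBlock plus the healthy flag on addChunks), here is a hedged usage sketch; the block IDs and chunk protos are placeholders rather than values from this change.

```java
ContainerMerkleTreeWriter tree = new ContainerMerkleTreeWriter();

// A block that exists in metadata but has no chunks still gets an entry in the tree.
tree.addBlock(emptyBlockId);

// Chunks that verified cleanly are recorded as healthy.
tree.addChunks(blockId, true, goodChunk);

// Chunks whose checksums did not match are recorded as unhealthy, using the
// checksums that were actually observed on disk rather than the stored metadata.
tree.addChunks(blockId, false, observedCorruptChunk);

// The aggregated tree, including the container-level data checksum, is produced by toProto().
ContainerProtos.ContainerMerkleTree treeProto = tree.toProto();
```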
@@ -188,7 +188,6 @@ public DataScanResult fullCheck(DataTransferThrottler throttler, Canceler cancel

LOG.debug("Running data checks for container {}", containerID);
try {
// TODO HDDS-10374 this tree will get updated with the container's contents as it is scanned.
ContainerMerkleTreeWriter dataTree = new ContainerMerkleTreeWriter();
List<ContainerScanError> dataErrors = scanData(dataTree, throttler, canceler);
if (containerIsDeleted()) {
@@ -375,12 +374,15 @@ private List<ContainerScanError> scanBlock(DBHandle db, File dbFile, BlockData b
// So, we need to make sure, chunk length > 0, before declaring
// the missing chunk file.
if (!block.getChunks().isEmpty() && block.getChunks().get(0).getLen() > 0) {
ContainerScanError error = new ContainerScanError(FailureType.MISSING_CHUNK_FILE,
ContainerScanError error = new ContainerScanError(FailureType.MISSING_DATA_FILE,
new File(containerDataFromDisk.getChunksPath()), new IOException("Missing chunk file " +
chunkFile.getAbsolutePath()));
blockErrors.add(error);
}
} else if (chunk.getChecksumData().getType() != ContainerProtos.ChecksumType.NONE) {
// Before adding chunks, add a block entry to the tree to represent cases where the block exists but has no
// chunks.
currentTree.addBlock(block.getBlockID().getLocalID());
int bytesPerChecksum = chunk.getChecksumData().getBytesPerChecksum();
ByteBuffer buffer = BUFFER_POOL.getBuffer(bytesPerChecksum);
// Keep scanning the block even if there are errors with individual chunks.
@@ -418,6 +420,14 @@ private static List<ContainerScanError> verifyChecksum(BlockData block,

List<ContainerScanError> scanErrors = new ArrayList<>();

// Information used to populate the merkle tree. Chunk metadata will be the same, but we must fill in the
// checksums with what we actually observe.
ContainerProtos.ChunkInfo.Builder observedChunkBuilder = chunk.toBuilder();
ContainerProtos.ChecksumData.Builder observedChecksumData = chunk.getChecksumData().toBuilder();
observedChecksumData.clearChecksums();
boolean chunkHealthy = true;
boolean chunkMissing = false;

ChecksumData checksumData =
ChecksumData.getFromProtoBuf(chunk.getChecksumData());
int checksumCount = checksumData.getChecksums().size();
@@ -430,10 +440,7 @@ private static List<ContainerScanError> verifyChecksum(BlockData block,
if (layout == ContainerLayoutVersion.FILE_PER_BLOCK) {
channel.position(chunk.getOffset());
}
// Only report one error per chunk. Reporting corruption at every "bytes per checksum" interval will lead to a
// large amount of errors when a full chunk is corrupted.
boolean chunkHealthy = true;
for (int i = 0; i < checksumCount && chunkHealthy; i++) {
for (int i = 0; i < checksumCount; i++) {
// limit last read for FILE_PER_BLOCK, to avoid reading next chunk
if (layout == ContainerLayoutVersion.FILE_PER_BLOCK &&
i == checksumCount - 1 &&
@@ -453,7 +460,11 @@ private static List<ContainerScanError> verifyChecksum(BlockData block,
ByteString expected = checksumData.getChecksums().get(i);
ByteString actual = cal.computeChecksum(buffer)
.getChecksums().get(0);
if (!expected.equals(actual)) {
observedChecksumData.addChecksums(actual);
// Only report one error per chunk. Reporting corruption at every "bytes per checksum" interval will lead to a
// large amount of errors when a full chunk is corrupted.
// Continue scanning the chunk even after the first error so the full merkle tree can be built.
if (chunkHealthy && !expected.equals(actual)) {
String message = String
.format("Inconsistent read for chunk=%s" +
" checksum item %d" +
@@ -465,26 +476,46 @@ private static List<ContainerScanError> verifyChecksum(BlockData block,
StringUtils.bytes2Hex(expected.asReadOnlyByteBuffer()),
StringUtils.bytes2Hex(actual.asReadOnlyByteBuffer()),
block.getBlockID());
chunkHealthy = false;

Review comment (Contributor): We can break the loop here, because the smallest unit is a chunk.

scanErrors.add(new ContainerScanError(FailureType.CORRUPT_CHUNK, chunkFile,
new OzoneChecksumException(message)));
chunkHealthy = false;
}
}
// If all the checksums match, also check that the length stored in the metadata matches the number of bytes
// seen on the disk.

observedChunkBuilder.setLen(bytesRead);
// If we haven't seen any errors after scanning the whole chunk, verify that the length stored in the metadata
// matches the number of bytes seen on the disk.
if (chunkHealthy && bytesRead != chunk.getLen()) {
String message = String
.format("Inconsistent read for chunk=%s expected length=%d"
+ " actual length=%d for block %s",
chunk.getChunkName(),
chunk.getLen(), bytesRead, block.getBlockID());
scanErrors.add(new ContainerScanError(FailureType.INCONSISTENT_CHUNK_LENGTH, chunkFile,
new IOException(message)));
if (bytesRead == 0) {
// If we could not find any data for the chunk, report it as missing.
chunkMissing = true;
chunkHealthy = false;
String message = String.format("Missing chunk=%s with expected length=%d for block %s",
chunk.getChunkName(), chunk.getLen(), block.getBlockID());
scanErrors.add(new ContainerScanError(FailureType.MISSING_CHUNK, chunkFile, new IOException(message)));
} else {
// We found data for the chunk, but it was shorter than expected.
String message = String
.format("Inconsistent read for chunk=%s expected length=%d"
+ " actual length=%d for block %s",
chunk.getChunkName(),
chunk.getLen(), bytesRead, block.getBlockID());
chunkHealthy = false;
scanErrors.add(new ContainerScanError(FailureType.INCONSISTENT_CHUNK_LENGTH, chunkFile,
new IOException(message)));
}
}
} catch (IOException ex) {
scanErrors.add(new ContainerScanError(FailureType.MISSING_CHUNK_FILE, chunkFile, ex));
// An unknown error occurred trying to access the chunk. Report it as corrupted.
chunkHealthy = false;
scanErrors.add(new ContainerScanError(FailureType.CORRUPT_CHUNK, chunkFile, ex));
}

// Missing chunks should not be added to the merkle tree.
if (!chunkMissing) {
observedChunkBuilder.setChecksumData(observedChecksumData);
currentTree.addChunks(block.getBlockID().getLocalID(), chunkHealthy, observedChunkBuilder.build());
}
return scanErrors;
}

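To summarize the verifyChecksum changes above, a condensed sketch of the per-chunk flow follows. Helper names such as readNextChecksumUnit, computeChecksum, and the error factory methods are placeholders standing in for the buffered reads and protobuf plumbing shown in the diff, and the IOException handling is omitted.

```java
boolean chunkHealthy = true;
boolean chunkMissing = false;
long bytesRead = 0;

for (int i = 0; i < checksumCount; i++) {
  ByteBuffer slice = readNextChecksumUnit(i);          // placeholder for the bytesPerChecksum-sized read
  bytesRead += slice.remaining();
  ByteString actual = computeChecksum(slice);          // placeholder for the checksum computation
  observedChecksumData.addChecksums(actual);           // always record what was observed on disk
  if (chunkHealthy && !expectedChecksums.get(i).equals(actual)) {
    chunkHealthy = false;                              // report only one corruption error per chunk...
    scanErrors.add(corruptChunkError(chunkFile));      // ...but keep reading so the tree covers the whole chunk
  }
}

if (chunkHealthy && bytesRead != chunk.getLen()) {
  chunkMissing = (bytesRead == 0);                     // no data at all: treat the chunk as missing
  chunkHealthy = false;
  scanErrors.add(chunkMissing ? missingChunkError(chunkFile) : lengthMismatchError(chunkFile));
}

if (!chunkMissing) {                                   // missing chunks stay out of the merkle tree
  observedChunkBuilder.setLen(bytesRead).setChecksumData(observedChecksumData);
  currentTree.addChunks(block.getBlockID().getLocalID(), chunkHealthy, observedChunkBuilder.build());
}
```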
@@ -626,13 +626,18 @@ ContainerCommandResponseProto handleCloseContainer(
return getSuccessResponse(request);
}


/**
* Create a Merkle tree for the container if it does not exist.
* Write the merkle tree for this container using the existing checksum metadata only. The data is not read or
* validated by this method, so it is expected to run quickly.
*
* If a checksum file already exists on the disk, this method will do nothing. The existing file would have either
* been made from the metadata or data itself so there is no need to recreate it from the metadata.
*
* TODO: This method should be changed to private after HDDS-10374 is merged.
*
* @param container The container which will have a tree generated.
*/
@VisibleForTesting
public void createContainerMerkleTree(Container container) {
public void createContainerMerkleTreeFromMetadata(Container container) {
if (ContainerChecksumTreeManager.checksumFileExist(container)) {
return;
}
@@ -1393,7 +1398,7 @@ public void markContainerForClose(Container container)
} finally {
container.writeUnlock();
}
createContainerMerkleTree(container);
createContainerMerkleTreeFromMetadata(container);
ContainerLogger.logClosing(container.getContainerData());
sendICR(container);
}
@@ -1426,7 +1431,6 @@ public void markContainerUnhealthy(Container container, ScanResult reason)
} finally {
container.writeUnlock();
}
createContainerMerkleTree(container);

Review comment (Member): If a container moves from OPEN to UNHEALTHY we should try to build a merkle tree with whatever data we have at that moment, before the scanner builds the actual merkle tree. If for some reason (either a metadata or data error) we are unable to build it, we can log the exception and move on.

Reply (Contributor, PR author): Yes, when we have an unhealthy container we should build the tree from the actual data, but this method only builds the tree from the metadata. I'm renaming it in the next commit to make that clearer. We should not depend on just the metadata if we suspect corruption. The way this should work is that an open -> unhealthy transition triggers an on-demand scan of the container, but that code path is not in place right now. We may want to add that to our branch.

Review comment (Member): Yes, I think we should add the on-demand scan here to generate the Merkle tree.

// Even if the container file is corrupted/missing and the unhealthy
// update fails, the unhealthy state is kept in memory and sent to
// SCM. Write a corresponding entry to the container log as well.
@@ -1457,7 +1461,7 @@ public void quasiCloseContainer(Container container, String reason)
} finally {
container.writeUnlock();
}
createContainerMerkleTree(container);
createContainerMerkleTreeFromMetadata(container);
ContainerLogger.logQuasiClosed(container.getContainerData(), reason);
sendICR(container);
}
@@ -1491,7 +1495,7 @@ public void closeContainer(Container container)
} finally {
container.writeUnlock();
}
createContainerMerkleTree(container);
createContainerMerkleTreeFromMetadata(container);
ContainerLogger.logClosed(container.getContainerData());
sendICR(container);
}
@@ -1599,12 +1603,12 @@ private ContainerProtos.ContainerChecksumInfo updateAndGetContainerChecksum(KeyV
BlockData blockData = blockIterator.nextBlock();
List<ContainerProtos.ChunkInfo> chunkInfos = blockData.getChunks();
// TODO: Add empty blocks to the merkle tree. Done in HDDS-10374, needs to be backported.
merkleTree.addChunks(blockData.getLocalID(), chunkInfos);
// Assume all chunks are healthy when building the tree from metadata. Scanner will identify corruption when
// it runs after.
merkleTree.addChunks(blockData.getLocalID(), true, chunkInfos);
}
}
ContainerProtos.ContainerChecksumInfo checksumInfo = checksumManager
.writeContainerDataTree(containerData, merkleTree);
return checksumInfo;
return checksumManager.writeContainerDataTree(containerData, merkleTree);
}

/**
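For context on the metadata-only path used by createContainerMerkleTreeFromMetadata and updateAndGetContainerChecksum above, here is a hedged sketch of how the tree is assembled from block metadata without reading any data files. The openBlockIterator helper and the iterator handling are simplified placeholders for the DB-handle plumbing in the real code.

```java
ContainerMerkleTreeWriter merkleTree = new ContainerMerkleTreeWriter();
try (BlockIterator<BlockData> blocks = openBlockIterator(containerData)) {  // placeholder helper
  while (blocks.hasNext()) {
    BlockData blockData = blocks.nextBlock();
    // Metadata carries no health information, so every chunk is assumed healthy here;
    // the scanners flag corruption later, when the data itself is read.
    merkleTree.addChunks(blockData.getLocalID(), true, blockData.getChunks());
  }
}
// Persist the tree and update the in-memory data checksum that SCM will see in the next report.
return checksumManager.writeContainerDataTree(containerData, merkleTree);
```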
@@ -92,6 +92,13 @@ public void scanContainer(Container<?> c)
if (result.isDeleted()) {
LOG.debug("Container [{}] has been deleted during the data scan.", containerId);
} else {
// Merkle tree write failure should not abort the scanning process. Continue marking the scan as completed.
try {
checksumManager.writeContainerDataTree(containerData, result.getDataTree());
} catch (IOException ex) {
LOG.error("Failed to write container merkle tree for container {}", containerId, ex);
}

if (!result.isHealthy()) {
logUnhealthyScanResult(containerId, result, LOG);

@@ -103,17 +110,15 @@
metrics.incNumUnHealthyContainers();
}
}
checksumManager.writeContainerDataTree(containerData, result.getDataTree());
metrics.incNumContainersScanned();
}

// Even if the container was deleted, mark the scan as completed since we already logged it as starting.
Instant now = Instant.now();
logScanCompleted(containerData, now);

if (!result.isDeleted()) {
controller.updateDataScanTimestamp(containerId, now);
}
// Even if the container was deleted, mark the scan as completed since we already logged it as starting.
logScanCompleted(containerData, now);
}

@Override
@@ -32,9 +32,10 @@ public enum FailureType {
MISSING_METADATA_DIR,
MISSING_CONTAINER_FILE,
MISSING_CHUNKS_DIR,
MISSING_CHUNK_FILE,
MISSING_DATA_FILE,

Review comment (Contributor): Quick question: is this for the case where a *.block file is missing?

Reply (Contributor, PR author): Yes. I changed the name so it is agnostic of the container layout.

CORRUPT_CONTAINER_FILE,
CORRUPT_CHUNK,
MISSING_CHUNK,
INCONSISTENT_CHUNK_LENGTH,
INACCESSIBLE_DB,
WRITE_FAILURE,