HDFS-16316. Improve DirectoryScanner: add regular file check related block. #3861
```diff
@@ -49,6 +49,7 @@
 import javax.management.ObjectName;
 import javax.management.StandardMBean;

+import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.fs.HardLink;
 import org.apache.hadoop.classification.VisibleForTesting;

@@ -2645,6 +2646,9 @@ public void checkAndUpdate(String bpid, ScanInfo scanInfo)
         Block.getGenerationStamp(diskMetaFile.getName()) :
         HdfsConstants.GRANDFATHER_GENERATION_STAMP;

+    final boolean isRegular = FileUtil.isRegularFile(diskMetaFile, false) &&
+        FileUtil.isRegularFile(diskFile, false);
+
     if (vol.getStorageType() == StorageType.PROVIDED) {
       if (memBlockInfo == null) {
         // replica exists on provided store but not in memory

@@ -2812,6 +2816,9 @@ public void checkAndUpdate(String bpid, ScanInfo scanInfo)
             + memBlockInfo.getNumBytes() + " to "
             + memBlockInfo.getBlockDataLength());
         memBlockInfo.setNumBytes(memBlockInfo.getBlockDataLength());
+      } else if (!isRegular) {
+        corruptBlock = new Block(memBlockInfo);
+        LOG.warn("Block:{} is not a regular file.", corruptBlock.getBlockId());
```
**Contributor:** Maybe we should print the absolute path so that we can deal with these abnormal files?

**Author:** Thanks @tomscut for the comment and review. When the file is actually cleaned up, the specific path will be printed. Here are some examples from online clusters:

**Contributor:** Thank you for explaining this. It looks like a mount-related operation was performed. Did HDFS successfully clean the abnormal files on the online cluster you mentioned after this change?

**Author:** Yes, these abnormal files were cleaned up.

**Contributor:** This is good.
```diff
       }
     } finally {
       if (dataNodeMetrics != null) {
```
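The branch added at `@@ -2812` can be summarized as a simplified decision flow (a sketch with hypothetical names and return values; only the branch order mirrors the diff, the real logic lives in `checkAndUpdate`):

```java
public class CheckAndUpdateSketch {
    // Sketch of the corrupt-block decision added by this patch.
    // "update-length" / "mark-corrupt" / "ok" are illustrative labels only.
    static String decide(long memNumBytes, long diskDataLength,
                         boolean isRegular) {
        if (memNumBytes != diskDataLength) {
            // existing behavior: sync the in-memory length with disk
            return "update-length";
        } else if (!isRegular) {
            // new behavior: a non-regular block/meta file marks the
            // replica as corrupt and logs a warning
            return "mark-corrupt";
        }
        return "ok";
    }

    public static void main(String[] args) {
        System.out.println(decide(10, 20, true));   // update-length
        System.out.println(decide(10, 10, false));  // mark-corrupt
        System.out.println(decide(10, 10, true));   // ok
    }
}
```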