Merged
Changes from all commits
121 commits
69431c7
HBASE-21102 ServerCrashProcedure should select target server where no…
tedyu Sep 19, 2018
3a0fcd5
HBASE-21156 [hbck2] Queue an assign of hbase:meta and bulk assign/una…
saintstack Sep 13, 2018
dc767c0
HBASE-21023 Added bypassProcedure() API to HbckService
uagashe Sep 5, 2018
cd161d9
HBASE-21204 NPE when scan raw DELETE_FAMILY_VERSION and codec is not set
tianjy1990 Sep 20, 2018
ddd30a2
HBASE-21206 Scan with batch size may return incomplete cells
openinx Sep 18, 2018
1010992
HBASE-21203 TestZKMainServer#testCommandLineWorks won't pass with def…
apurtell Sep 17, 2018
98909f4
HBASE-21214 [hbck2] setTableState just sets hbase:meta state, not in-…
saintstack Sep 20, 2018
3de02d5
HBASE-20636 Introduce two bloom filter type : ROWPREFIX and ROWPREFIX…
guangxuCheng May 29, 2018
7ab7751
Amend HBASE-20704 Sometimes some compacted storefiles are not archive…
apurtell Sep 21, 2018
c686b53
HBASE-21208 Bytes#toShort doesn't work without unsafe
chia7712 Sep 25, 2018
8eaaa63
HBASE-21217 Revisit the executeProcedure method for open/close region
Apache9 Sep 24, 2018
b8134fe
HBASE-21221 Ineffective assertion in TestFromClientSide3#testMultiRow…
tedyu Sep 25, 2018
2736913
HBASE-21223 [amv2] Remove abort_procedure from shell
saintstack Sep 25, 2018
08c4d70
HBASE-21164 reportForDuty should do backoff rather than retry
liuml07 Sep 7, 2018
0e173d3
HBASE-20734 Colocate recovered edits directory with hbase.wal.dir
z-york Jun 27, 2018
7b2f595
HBASE-21212 Wrong flush time when update flush metric
Sep 26, 2018
d7e0831
HBASE-21227 Implement exponential retrying backoff for Assign/Unassig…
Apache9 Sep 26, 2018
1154f81
HBASE-21232 Show table state in Tables view on Master home page
saintstack Sep 26, 2018
8f8d571
HBASE-20766 Typo in VerifyReplication error.
ffernandez92 Jun 29, 2018
98b1fea
HBASE-21241 Close stale PRs
busbey Sep 26, 2018
86cb8e4
HBASE-21228 Memory leak since AbstractFSWAL caches Thread object and …
Sep 27, 2018
22ac655
HBASE-21233 Allow the procedure implementation to skip persistence of…
Apache9 Sep 27, 2018
3baafbe
HBASE-21248 Implement exponential backoff when retrying for ModifyPee…
Apache9 Sep 28, 2018
71be251
Revert "HBASE-21188 Print heap and gc informations in our junit Resou…
Apache9 Sep 28, 2018
d39ea25
HBASE-21244 Skip persistence when retrying for assignment related pro…
Apache9 Sep 27, 2018
6bc7089
HBASE-21249 Add jitter for ProcedureUtil.getBackoffTimeMs
Sep 28, 2018
704f8b8
HBASE-18451 PeriodicMemstoreFlusher should inspect the queue before a…
xcangCRM Sep 25, 2018
668a179
HBASE-19418 configurable range of delay in PeriodicMemstoreFlusher
Sep 11, 2018
801fc05
HBASE-21207 Add client side sorting functionality in master web UI fo…
archana-katiyar Sep 21, 2018
aa9e1d0
HBASE-20857 balancer status tag in jmx metrics
kiran-maturi Sep 18, 2018
56ac470
HBASE-21196 HTableMultiplexer clears the meta cache after every put o…
NihalJain Sep 14, 2018
ab6ec1f
Revert "HBASE-21248 Implement exponential backoff when retrying for M…
Apache9 Sep 29, 2018
fdbaa4c
HBASE-21248 Implement exponential backoff when retrying for ModifyPee…
Apache9 Sep 29, 2018
f9d51b6
HBASE-21245 Add exponential backoff when retrying for sync replicatio…
infraio Sep 29, 2018
4d7235e
HBASE-21258 Add resetting of flags for RS Group pre/post hooks in Tes…
tedyu Oct 1, 2018
1b7e4fd
HBASE-19275 TestSnapshotFileCache never worked properly
xcangCRM Jul 23, 2018
79fe878
HBASE-21261 Add log4j.properties for hbase-rsgroup tests
apurtell Oct 2, 2018
42aa3dd
HBASE-18549 Add metrics for failed replication queue recovery
xcangCRM Aug 29, 2018
b9bb14e
HBASE-21221 Ineffective assertion in TestFromClientSide3#testMultiRow…
tedyu Oct 3, 2018
4508f67
HBASE-21185 - WALPrettyPrinter: Additional useful info to be printed …
Oct 2, 2018
e42741e
HBASE-21265 Split up TestRSGroups
apurtell Oct 3, 2018
5da0c20
HBASE-21219 Hbase incremental backup fails with null pointer exception
Sep 25, 2018
118b074
HBASE-21250 Refactor WALProcedureStore and add more comments for bett…
Apache9 Oct 6, 2018
e8df847
HBASE-21250 Addendum remove unused modification in hbase-server module
Apache9 Oct 8, 2018
ce29e9e
HBASE-21273 Move classes out of o.a.spark packages
madrob Oct 5, 2018
fd3e0ff
HBASE-21230 BackupUtils#checkTargetDir doesn't compose error message …
tedyu Oct 8, 2018
c9213f7
HBASE-20764 build broken when latest commit is gpg signed
madrob Jun 20, 2018
7c755bf
HBASE-21280 Add anchors for each heading in UI
saintstack Oct 9, 2018
f122328
HBASE-21251 Refactor RegionMover
infraio Oct 8, 2018
a1f28f3
HBASE-21277 Prevent to add same table to two sync replication peer's …
infraio Oct 8, 2018
fe579a1
Add Balazs Meszaros to committers
meszibalu Oct 10, 2018
0789f54
HBASE-21247 Allow WAL Provider to be specified by configuration witho…
tedyu Oct 10, 2018
7255230
HBASE-21283 Add new shell command 'rit' for listing regions in transi…
apurtell Oct 9, 2018
db9a5b7
HBASE-21287 Allow configuring test master initialization wait time.
madrob Oct 10, 2018
8b66dea
HBASE-21281 Upgrade bouncycastle to latest
joshelser Oct 9, 2018
eec1479
HBASE-21282 Upgrade to latest jetty 9.3 versions
joshelser Oct 11, 2018
42d7ddc
HBASE-21103 nightly job should make sure cached yetus will run.
busbey Oct 11, 2018
924d183
HBASE-21247 Allow WAL Provider to be specified by configuration witho…
tedyu Oct 11, 2018
da63ebb
HBASE-21256 Improve IntegrationTestBigLinkedList for testing huge data
ZephyrGuo Oct 12, 2018
9e9a1e0
HBASE-21254 Need to find a way to limit the number of proc wal files
Apache9 Oct 11, 2018
fa5fa6e
HBASE-21289 Remove the log "'hbase.regionserver.maxlogs' was deprecat…
infraio Oct 11, 2018
05f8bea
HBASE-21178 [BC break] : Get and Scan operation with a custom convert…
Sep 18, 2018
a292ab7
HBASE-21299 List counts of actual region states in master UI tables s…
saintstack Oct 12, 2018
e736168
HBASE-21303 [shell] clear_deadservers with no args fails
saintstack Oct 12, 2018
7464e2e
HBASE-21114 add 2.1 docs to menu
madrob Oct 12, 2018
dde336f
HBASE-21309 Increase the waiting timeout for TestProcedurePriority
Apache9 Oct 13, 2018
6781918
HBASE-21310 Split TestCloneSnapshotFromClient
Apache9 Oct 15, 2018
7d798b3
HBASE-21260 The whole balancer plans might be aborted if there are mo…
sunhelly Oct 9, 2018
4a04312
HBASE-21290 No need to instantiate BlockCache for master which not ca…
infraio Oct 11, 2018
07e2247
HBASE-21238 MapReduceHFileSplitterJob#run shouldn't call System.exit
dbist Oct 15, 2018
fc7a6a6
HBASE-21311 Split TestRestoreSnapshotFromClient
Apache9 Oct 15, 2018
c9dcc9a
HBASE-21266 Not running balancer because processing dead regionserver…
apurtell Oct 11, 2018
0d99829
HBASE-21278 Do not rollback successful sub procedures when rolling ba…
Apache9 Oct 14, 2018
fa652cc
HBASE-21315 The getActiveMinProcId and getActiveMaxProcId of BitSetNo…
Apache9 Oct 15, 2018
821e4d7
HBASE-21291 Add a test for bypassing stuck state-machine procedures
tianjy1990 Oct 16, 2018
3b91ae5
HBASE-21263 Mention compression algorithm along with other storefile …
Oct 15, 2018
8cc56bd
HBASE-21320 [canary] Cleanup of usage and add commentary
saintstack Oct 16, 2018
fd940f3
HBASE-21327 Fix minor logging issue where we don't report servername …
saintstack Oct 17, 2018
8cb28ce
HBASE-21198 Exclude dependency on net.minidev:json-smart
dbist Oct 17, 2018
1e339e6
HBASE-21281 Update bouncycastle dependency - addendum adds dependency…
tedyu Oct 17, 2018
3a75505
HBASE-21279 Split TestAdminShell into several tests
dbist Oct 17, 2018
5efa5f6
HBASE-21330 ReopenTableRegionsProcedure will enter an infinite loop i…
Apache9 Oct 17, 2018
e520399
HBASE-20716: Changes the bytes[] conversion done in Bytes and ByteBuf…
SahilAggarwal Oct 9, 2018
92fdc8d
HBASE-21055 NullPointerException when balanceOverall() but server bal…
sunhelly Aug 15, 2018
132bea9
HBASE-21323 Should not skip force updating for a sub procedure even i…
Apache9 Oct 17, 2018
5fbb227
HBASE-21269 Forward-port HBASE-21213 [hbck2] bypass leaves behind sta…
tianjy1990 Oct 18, 2018
bc7628a
HBASE-21073 Redo concept of maintenance mode
madrob Oct 8, 2018
05d22ed
HBASE-21292 IdLock.getLockEntry() may hang if interrupted
Oct 18, 2018
ae53716
Update downloads.xml with new entry for 1.4.8 release
apurtell Oct 19, 2018
4bf3c5a
HBASE-21200 Memstore flush doesn't finish because of seekToPreviousRo…
brfrn169 Sep 27, 2018
7adf590
HBASE-21336 Simplify the implementation of WALProcedureMap
Apache9 Oct 20, 2018
b723ce1
HBASE-21194 Add tests in TestCopyTable which exercises MOB feature
dbist Oct 19, 2018
5858467
HBASE-21281 Upgrade bouncycastle to latest - addendum adds test depen…
tedyu Oct 20, 2018
ae5308a
HBASE-21302 update downloads page for HBase 1.2.8 release.
busbey Oct 21, 2018
dd474ef
HBASE-21334 TestMergeTableRegionsProcedure is flakey
Apache9 Oct 21, 2018
7d72930
Revert "HBASE-21336 Simplify the implementation of WALProcedureMap"
Apache9 Oct 22, 2018
77ac352
HBASE-21355 HStore's storeSize is calculated repeatedly which causing…
openinx Oct 21, 2018
3b66b65
HBASE-21336 Simplify the implementation of WALProcedureMap
Apache9 Oct 20, 2018
d0e7367
HBASE-21355 (addendum) replace the expensive reload storefiles with r…
openinx Oct 22, 2018
931156f
HBASE-21336 Addendum remove unused code in HBTU
Apache9 Oct 22, 2018
ae13b0b
HBASE-21356 bulkLoadHFile API should ensure that rs has the source hf…
openinx Oct 22, 2018
86f2312
HBASE-21354 Procedure may be deleted improperly during master restart…
Oct 23, 2018
603bf4c
HBASE-21354 Addendum fix compile error
Apache9 Oct 23, 2018
3b68e53
HBASE-20973 ArrayIndexOutOfBoundsException when rolling back procedure
Oct 23, 2018
807736f
HBASE-21338 Warn if balancer is an ill-fit for cluster size
xcangCRM Oct 22, 2018
1e9d998
HBASE-21342 FileSystem in use may get closed by other bulk load call …
sufism Oct 19, 2018
1f437ac
HBASE-21349 Do not run CatalogJanitor or Nomalizer when cluster is sh…
xcangCRM Oct 23, 2018
b2fcf76
HBASE-21363 Rewrite the buildingHoldCleanupTracker method in WALProce…
Apache9 Oct 24, 2018
3fe8649
HBASE-21377 Add debug log for catching the root cause
Apache9 Oct 24, 2018
6830a1c
HBASE-21372) Set hbase.assignment.maximum.attempts to Long.MAX
saintstack Oct 23, 2018
d4cc5ee
HBASE-21318 Make RefreshHFilesClient runnable
Aug 17, 2018
5dde5b7
HBASE-21215 Figure how to invoke hbck2; make it easy to find
saintstack Oct 24, 2018
614612a
HBASE-21364 Procedure holds the lock should put to front of the queue…
Oct 25, 2018
6646973
HBASE-21384 Procedure with holdlock=false should not be restored lock…
Oct 25, 2018
cd94341
HBASE-21385 HTable.delete request use rpc call directly instead of As…
infraio Oct 25, 2018
3a7412d
HBASE-21383 Change refguide to point at hbck2 instead of hbck1
saintstack Oct 24, 2018
23b7510
HBASE-21365 Throw exception when user put data with skip wal to a tab…
infraio Oct 24, 2018
385e398
Revert "HBASE-20973 ArrayIndexOutOfBoundsException when rolling back …
Apache9 Oct 26, 2018
0ab7c3a
HBASE-21391 RefreshPeerProcedure should also wait master initialized …
Apache9 Oct 26, 2018
e5ba798
HBASE-20973 ArrayIndexOutOfBoundsException when rolling back procedure
Apache9 Oct 26, 2018
7cdb525
HBASE-21175 Partially initialized SnapshotHFileCleaner leads to NPE d…
dbist Oct 26, 2018
23 changes: 21 additions & 2 deletions bin/hbase
Original file line number Diff line number Diff line change
@@ -92,7 +92,8 @@ if [ $# = 0 ]; then
echo "Commands:"
echo "Some commands take arguments. Pass no args or -h for usage."
echo " shell Run the HBase shell"
echo " hbck Run the hbase 'fsck' tool"
echo " hbck Run the HBase 'fsck' tool. Defaults read-only hbck1."
echo " Pass '-j /path/to/HBCK2.jar' to run hbase-2.x HBCK2."
echo " snapshot Tool for managing snapshots"
if [ "${in_omnibus_tarball}" = "true" ]; then
echo " wal Write-ahead-log analyzer"
@@ -482,7 +483,25 @@ if [ "$COMMAND" = "shell" ] ; then
HBASE_OPTS="$HBASE_OPTS $HBASE_SHELL_OPTS"
CLASS="org.jruby.Main -X+O ${JRUBY_OPTS} ${HBASE_HOME}/bin/hirb.rb"
elif [ "$COMMAND" = "hbck" ] ; then
CLASS='org.apache.hadoop.hbase.util.HBaseFsck'
# Look for the -j /path/to/HBCK2.jar parameter. Else pass through to hbck.
case "${1}" in
-j)
# Found -j parameter. Add arg to CLASSPATH and set CLASS to HBCK2.
shift
JAR="${1}"
if [ ! -f "${JAR}" ]; then
echo "${JAR} file not found!"
echo "Usage: hbase [<options>] hbck -jar /path/to/HBCK2.jar [<args>]"
exit 1
fi
CLASSPATH="${JAR}:${CLASSPATH}";
CLASS="org.apache.hbase.HBCK2"
shift # past argument=value
;;
*)
CLASS='org.apache.hadoop.hbase.util.HBaseFsck'
;;
esac
elif [ "$COMMAND" = "wal" ] ; then
CLASS='org.apache.hadoop.hbase.wal.WALPrettyPrinter'
elif [ "$COMMAND" = "hfile" ] ; then
3 changes: 2 additions & 1 deletion dev-support/Jenkinsfile
Original file line number Diff line number Diff line change
@@ -80,7 +80,8 @@ pipeline {
if [[ true != "${USE_YETUS_PRERELEASE}" ]]; then
YETUS_DIR="${WORKSPACE}/yetus-${YETUS_RELEASE}"
echo "Checking for Yetus ${YETUS_RELEASE} in '${YETUS_DIR}'"
if [ ! -d "${YETUS_DIR}" ]; then
if ! "${YETUS_DIR}/bin/test-patch" --version >/dev/null 2>&1 ; then
rm -rf "${YETUS_DIR}"
"${WORKSPACE}/component/dev-support/jenkins-scripts/cache-apache-project-artifact.sh" \
--working-dir "${WORKSPACE}/downloads-yetus" \
--keys 'https://www.apache.org/dist/yetus/KEYS' \
13 changes: 6 additions & 7 deletions dev-support/submit-patch.py
Original file line number Diff line number Diff line change
@@ -171,17 +171,16 @@ def validate_patch_dir(patch_dir):
# - current branch is same as base branch
# - current branch is ahead of base_branch by more than 1 commits
def check_diff_between_branches(base_branch):
only_in_base_branch = git.log("HEAD.." + base_branch, oneline = True)
only_in_active_branch = git.log(base_branch + "..HEAD", oneline = True)
only_in_base_branch = list(repo.iter_commits("HEAD.." + base_branch))
only_in_active_branch = list(repo.iter_commits(base_branch + "..HEAD"))
if len(only_in_base_branch) != 0:
log_fatal_and_exit(" '%s' is ahead of current branch by %s commits. Rebase "
"and try again.", base_branch, len(only_in_base_branch.split("\n")))
"and try again.", base_branch, len(only_in_base_branch))
if len(only_in_active_branch) == 0:
log_fatal_and_exit(" Current branch is same as '%s'. Exiting...", base_branch)
if len(only_in_active_branch.split("\n")) > 1:
if len(only_in_active_branch) > 1:
log_fatal_and_exit(" Current branch is ahead of '%s' by %s commits. Squash into single "
"commit and try again.",
base_branch, len(only_in_active_branch.split("\n")))
"commit and try again.", base_branch, len(only_in_active_branch))


# If ~/.apache-creds is present, load credentials from it otherwise prompt user.
@@ -277,7 +276,7 @@ def get_review_board_id_if_present(issue_url, rb_link_title):
# Use jira summary as review's summary too.
summary = get_jira_summary(issue_url)
# Use commit message as description.
description = git.log("-1", pretty="%B")
description = repo.head.commit.message
update_draft_data = {"bugs_closed" : [args.jira_id.upper()], "target_groups" : "hbase",
"target_people" : args.reviewers, "summary" : summary,
"description" : description }
Original file line number Diff line number Diff line change
@@ -296,9 +296,20 @@ private List<String> getLogFilesForNewBackup(HashMap<String, Long> olderTimestam
currentLogFile = log.getPath().toString();
resultLogFiles.add(currentLogFile);
currentLogTS = BackupUtils.getCreationTime(log.getPath());
// newestTimestamps is up-to-date with the current list of hosts
// so newestTimestamps.get(host) will not be null.
if (currentLogTS > newestTimestamps.get(host)) {

// If newestTimestamps.get(host) is null, it means that
// either RS (host) has been restarted recently with a different port number
// or RS is down (was decommissioned). In any case, we treat this
// log file as eligible for inclusion into the incremental backup log list
Long ts = newestTimestamps.get(host);
if (ts == null) {
LOG.warn("ORPHAN log found: " + log + " host=" + host);
LOG.debug("Known hosts (from newestTimestamps):");
for (String s: newestTimestamps.keySet()) {
LOG.debug(s);
}
}
if (ts == null || currentLogTS > ts) {
newestLogs.add(currentLogFile);
}
}
@@ -343,7 +354,7 @@ private List<String> getLogFilesForNewBackup(HashMap<String, Long> olderTimestam
// Even if these logs belong to an obsolete region server, we still need
// to include them to avoid loss of edits for backup.
Long newTimestamp = newestTimestamps.get(host);
if (newTimestamp != null && currentLogTS > newTimestamp) {
if (newTimestamp == null || currentLogTS > newTimestamp) {
newestLogs.add(currentLogFile);
}
}
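
For context, the two hunks above converge on one rule: a WAL is copied into the incremental backup when its host is missing from newestTimestamps (region server restarted on another port, or decommissioned) or when its creation time is newer than that host's newest backed-up timestamp. Below is a minimal standalone sketch of that rule, with plain maps standing in for the real FileStatus/BackupUtils plumbing; all names in it are illustrative, not from the patch.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WalSelectionSketch {
  // Returns the WAL paths that should go into the incremental backup.
  static List<String> selectNewLogs(Map<String, Long> walCreationTimeByPath,
      Map<String, String> hostByPath, Map<String, Long> newestBackedUpTsByHost) {
    List<String> newestLogs = new ArrayList<>();
    for (Map.Entry<String, Long> e : walCreationTimeByPath.entrySet()) {
      String path = e.getKey();
      long creationTs = e.getValue();
      Long backedUpTs = newestBackedUpTsByHost.get(hostByPath.get(path));
      // Unknown host (restarted with a new port, or decommissioned) => treat the log as
      // eligible; otherwise include it only if it is newer than the last backup for that host.
      if (backedUpTs == null || creationTs > backedUpTs) {
        newestLogs.add(path);
      }
    }
    return newestLogs;
  }

  public static void main(String[] args) {
    Map<String, Long> wals = new HashMap<>();
    wals.put("/wals/rs1/wal.1", 100L);
    wals.put("/wals/rs2/wal.2", 50L);
    Map<String, String> hosts = new HashMap<>();
    hosts.put("/wals/rs1/wal.1", "rs1,16020");
    hosts.put("/wals/rs2/wal.2", "rs2,16021"); // host absent from the timestamp map
    Map<String, Long> newest = new HashMap<>();
    newest.put("rs1,16020", 80L);
    System.out.println(selectNewLogs(wals, hosts, newest)); // both WALs selected
  }
}
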
Original file line number Diff line number Diff line change
@@ -164,7 +164,7 @@ public static void main(String[] args) throws Exception {
public int run(String[] args) throws Exception {
if (args.length < 2) {
usage("Wrong number of arguments: " + args.length);
System.exit(-1);
return -1;
}
Job job = createSubmittableJob(args);
int result = job.waitForCompletion(true) ? 0 : 1;
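
The hunk above swaps System.exit(-1) for return -1 inside Tool.run(). The value still becomes the process exit status because ToolRunner.run() hands it back to main(), which is the only place that should terminate the JVM; embedded callers such as tests just see the return code. A hedged sketch of that pattern, using a hypothetical tool class that is not part of the patch:

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class ExitCodeSketch extends Configured implements Tool {
  @Override
  public int run(String[] args) throws Exception {
    if (args.length < 2) {
      System.err.println("Wrong number of arguments: " + args.length);
      return -1; // callers that embed the tool are not killed
    }
    // ... submit the job here and map success/failure to 0/1 ...
    return 0;
  }

  public static void main(String[] args) throws Exception {
    // Only the outermost entry point turns the return code into a process exit status.
    System.exit(ToolRunner.run(new ExitCodeSketch(), args));
  }
}
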
Original file line number Diff line number Diff line change
@@ -327,7 +327,7 @@ public static void checkTargetDir(String backupRootPath, Configuration conf) thr
if (expMsg.contains("No FileSystem for scheme")) {
newMsg =
"Unsupported filesystem scheme found in the backup target url. Error Message: "
+ newMsg;
+ expMsg;
LOG.error(newMsg);
throw new IOException(newMsg);
} else {
Original file line number Diff line number Diff line change
@@ -1737,11 +1737,14 @@ HTableDescriptor[] getTableDescriptors(List<String> names)

/**
* Abort a procedure.
* Do not use. Usually it is ignored but if not, it can do more damage than good. See hbck2.
* @param procId ID of the procedure to abort
* @param mayInterruptIfRunning if the proc completed at least one step, should it be aborted?
* @return <code>true</code> if aborted, <code>false</code> if procedure already completed or does not exist
* @throws IOException
* @deprecated Since 2.1.1 -- to be removed.
*/
@Deprecated
boolean abortProcedure(
long procId,
boolean mayInterruptIfRunning) throws IOException;
@@ -1752,12 +1755,15 @@ boolean abortProcedure(
* It may throw ExecutionException if there was an error while executing the operation
* or TimeoutException in case the wait timeout was not long enough to allow the
* operation to complete.
* Do not use. Usually it is ignored but if not, it can do more damage than good. See hbck2.
*
* @param procId ID of the procedure to abort
* @param mayInterruptIfRunning if the proc completed at least one step, should it be aborted?
* @return <code>true</code> if aborted, <code>false</code> if procedure already completed or does not exist
* @throws IOException
* @deprecated Since 2.1.1 -- to be removed.
*/
@Deprecated
Future<Boolean> abortProcedureAsync(
long procId,
boolean mayInterruptIfRunning) throws IOException;
Original file line number Diff line number Diff line change
@@ -877,12 +877,15 @@ CompletableFuture<Boolean> isProcedureFinished(String signature, String instance
Map<String, String> props);

/**
* abort a procedure
* Abort a procedure
* Do not use. Usually it is ignored but if not, it can do more damage than good. See hbck2.
* @param procId ID of the procedure to abort
* @param mayInterruptIfRunning if the proc completed at least one step, should it be aborted?
* @return true if aborted, false if procedure already completed or does not exist. the value is
* wrapped by {@link CompletableFuture}
* @deprecated Since 2.1.1 -- to be removed.
*/
@Deprecated
CompletableFuture<Boolean> abortProcedure(long procId, boolean mayInterruptIfRunning);

/**
Original file line number Diff line number Diff line change
@@ -826,7 +826,9 @@ private void receiveMultiAction(MultiAction multiAction,
byte[] regionName = regionEntry.getKey();

Throwable regionException = responses.getExceptions().get(regionName);
cleanServerCache(server, regionException);
if (regionException != null) {
cleanServerCache(server, regionException);
}

Map<Integer, Object> regionResults =
results.containsKey(regionName) ? results.get(regionName).result : Collections.emptyMap();
Original file line number Diff line number Diff line change
@@ -25,6 +25,7 @@
import java.io.InterruptedIOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.EnumSet;
import java.util.HashMap;
import java.util.Iterator;
@@ -230,7 +231,6 @@
* @see Admin
*/
@InterfaceAudience.Private
@InterfaceStability.Evolving
public class HBaseAdmin implements Admin {
private static final Logger LOG = LoggerFactory.getLogger(HBaseAdmin.class);

@@ -4314,15 +4314,13 @@ public Void call() throws Exception {
}

@Override
public List<ServerName> clearDeadServers(final List<ServerName> servers) throws IOException {
if (servers == null || servers.size() == 0) {
throw new IllegalArgumentException("servers cannot be null or empty");
}
public List<ServerName> clearDeadServers(List<ServerName> servers) throws IOException {
return executeCallable(new MasterCallable<List<ServerName>>(getConnection(),
getRpcControllerFactory()) {
@Override
protected List<ServerName> rpcCall() throws Exception {
ClearDeadServersRequest req = RequestConverter.buildClearDeadServersRequest(servers);
ClearDeadServersRequest req = RequestConverter.
buildClearDeadServersRequest(servers == null? Collections.EMPTY_LIST: servers);
return ProtobufUtil.toServerNameList(
master.clearDeadServers(getRpcController(), req).getServerNameList());
}
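
The change above drops the client-side IllegalArgumentException and maps a null server list to an empty ClearDeadServersRequest, which is what lets the shell's clear_deadservers run with no arguments (HBASE-21303). A usage sketch, assuming the Admin javadoc's contract that the returned list holds the servers that could not be cleared:

import java.util.Collections;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ClearDeadServersSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Before the change an empty (or null) list failed client-side with
      // IllegalArgumentException; now the request is simply sent with no servers.
      List<ServerName> notCleared = admin.clearDeadServers(Collections.emptyList());
      System.out.println("Servers the master did not clear: " + notCleared);
    }
  }
}
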
Original file line number Diff line number Diff line change
@@ -18,25 +18,38 @@
package org.apache.hadoop.hbase.client;

import java.io.IOException;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.stream.Collectors;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.ipc.RpcControllerFactory;
import org.apache.yetus.audience.InterfaceAudience;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.apache.hbase.thirdparty.com.google.protobuf.ServiceException;
import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
import org.apache.hadoop.hbase.shaded.protobuf.RequestConverter;
import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos;
import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.GetTableStateResponse;
import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.HbckService.BlockingInterface;

import org.apache.hbase.thirdparty.com.google.protobuf.ServiceException;

import org.apache.yetus.audience.InterfaceAudience;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;


/**
* Use {@link ClusterConnection#getHbck()} to obtain an instance of {@link Hbck} instead of
* constructing
* an HBaseHbck directly. This will be mostly used by hbck tool.
* constructing an HBaseHbck directly.
*
* <p>Connection should be an <i>unmanaged</i> connection obtained via
* {@link ConnectionFactory#createConnection(Configuration)}.</p>
*
* <p>NOTE: The methods in here can do damage to a cluster if applied in the wrong sequence or at
* the wrong time. Use with caution. For experts only. These methods are only for the
* extreme case where the cluster has been damaged or has achieved an inconsistent state because
* of some unforeseen circumstance or bug and requires manual intervention.
*
* <p>An instance of this class is lightweight and not-thread safe. A new instance should be created
* by each thread. Pooling or caching of the instance is not recommended.</p>
*
@@ -75,10 +88,6 @@ public boolean isAborted() {
return this.aborted;
}

/**
* NOTE: This is a dangerous action, as existing running procedures for the table or regions
* which belong to the table may get confused.
*/
@Override
public TableState setTableStateInMeta(TableState state) throws IOException {
try {
@@ -87,9 +96,62 @@ public TableState setTableStateInMeta(TableState state) throws IOException {
RequestConverter.buildSetTableStateInMetaRequest(state));
return TableState.convert(state.getTableName(), response.getTableState());
} catch (ServiceException se) {
LOG.debug("ServiceException while updating table state in meta. table={}, state={}",
state.getTableName(), state.getState());
LOG.debug("table={}, state={}", state.getTableName(), state.getState(), se);
throw new IOException(se);
}
}

@Override
public List<Long> assigns(List<String> encodedRegionNames, boolean override)
throws IOException {
try {
MasterProtos.AssignsResponse response =
this.hbck.assigns(rpcControllerFactory.newController(),
RequestConverter.toAssignRegionsRequest(encodedRegionNames, override));
return response.getPidList();
} catch (ServiceException se) {
LOG.debug(toCommaDelimitedString(encodedRegionNames), se);
throw new IOException(se);
}
}

@Override
public List<Long> unassigns(List<String> encodedRegionNames, boolean override)
throws IOException {
try {
MasterProtos.UnassignsResponse response =
this.hbck.unassigns(rpcControllerFactory.newController(),
RequestConverter.toUnassignRegionsRequest(encodedRegionNames, override));
return response.getPidList();
} catch (ServiceException se) {
LOG.debug(toCommaDelimitedString(encodedRegionNames), se);
throw new IOException(se);
}
}

private static String toCommaDelimitedString(List<String> list) {
return list.stream().collect(Collectors.joining(", "));
}

@Override
public List<Boolean> bypassProcedure(List<Long> pids, long waitTime, boolean override,
boolean recursive)
throws IOException {
MasterProtos.BypassProcedureResponse response = ProtobufUtil.call(
new Callable<MasterProtos.BypassProcedureResponse>() {
@Override
public MasterProtos.BypassProcedureResponse call() throws Exception {
try {
return hbck.bypassProcedure(rpcControllerFactory.newController(),
MasterProtos.BypassProcedureRequest.newBuilder().addAllProcId(pids).
setWaitTime(waitTime).setOverride(override).setRecursive(recursive).build());
} catch (Throwable t) {
LOG.error(pids.stream().map(i -> i.toString()).
collect(Collectors.joining(", ")), t);
throw t;
}
}
});
return response.getBypassedList();
}
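
The class javadoc above says to obtain Hbck from ClusterConnection#getHbck() rather than constructing HBaseHbck directly, and that these calls are for emergency repair only. A hedged sketch of driving the new assigns/bypassProcedure methods follows; the cast to ClusterConnection (a Private interface) and the encoded region name are illustrative assumptions, and in practice the HBCK2 tool is the intended caller.

import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.ClusterConnection;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Hbck;

// Illustrative driver only; use with extreme caution on a live cluster.
public class HbckUsageSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Per the javadoc: an unmanaged connection from ConnectionFactory.
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // Assumption: the concrete connection implements the Private ClusterConnection interface.
      Hbck hbck = ((ClusterConnection) conn).getHbck();
      // Queue assigns for regions by encoded name (placeholder id); returns procedure ids.
      List<Long> pids =
          hbck.assigns(Arrays.asList("de00010733901a05f5a2a3a382e27dd4"), false);
      // Bypass those procedures if stuck: wait 30s for each lock, no override, not recursive.
      List<Boolean> bypassed = hbck.bypassProcedure(pids, 30000, false, false);
      System.out.println("pids=" + pids + " bypassed=" + bypassed);
    }
  }
}
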
}
Original file line number Diff line number Diff line change
@@ -487,35 +487,20 @@ public static <R> void doBatchWithCallback(List<? extends Row> actions, Object[]
}

@Override
public void delete(final Delete delete)
throws IOException {
CancellableRegionServerCallable<SingleResponse> callable =
new CancellableRegionServerCallable<SingleResponse>(
connection, getName(), delete.getRow(), this.rpcControllerFactory.newController(),
writeRpcTimeoutMs, new RetryingTimeTracker().start(), delete.getPriority()) {
public void delete(final Delete delete) throws IOException {
ClientServiceCallable<Void> callable =
new ClientServiceCallable<Void>(this.connection, getName(), delete.getRow(),
this.rpcControllerFactory.newController(), delete.getPriority()) {
@Override
protected SingleResponse rpcCall() throws Exception {
MutateRequest request = RequestConverter.buildMutateRequest(
getLocation().getRegionInfo().getRegionName(), delete);
MutateResponse response = doMutate(request);
return ResponseConverter.getResult(request, response, getRpcControllerCellScanner());
protected Void rpcCall() throws Exception {
MutateRequest request = RequestConverter
.buildMutateRequest(getLocation().getRegionInfo().getRegionName(), delete);
doMutate(request);
return null;
}
};
List<Delete> rows = Collections.singletonList(delete);
AsyncProcessTask task = AsyncProcessTask.newBuilder()
.setPool(pool)
.setTableName(tableName)
.setRowAccess(rows)
.setCallable(callable)
.setRpcTimeout(writeRpcTimeoutMs)
.setOperationTimeout(operationTimeoutMs)
.setSubmittedRows(AsyncProcessTask.SubmittedRows.ALL)
.build();
AsyncRequestFuture ars = multiAp.submit(task);
ars.waitUntilDone();
if (ars.hasError()) {
throw ars.getErrors();
}
rpcCallerFactory.<Void>newCaller(this.writeRpcTimeoutMs)
.callWithRetries(callable, this.operationTimeoutMs);
}

@Override
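
The rewrite above (HBASE-21385) sends a single Delete as one retried RPC through ClientServiceCallable and rpcCallerFactory instead of building a one-element AsyncProcess batch. The public Table API is untouched, so a caller still looks like this (table and row names are placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class SingleDeleteSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("test_table"))) {
      // After HBASE-21385 this is a single retried RPC (ClientServiceCallable +
      // RpcRetryingCaller) rather than a one-element AsyncProcess submission.
      table.delete(new Delete(Bytes.toBytes("row-1")));
    }
  }
}
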