diff --git a/.git-blame-ignore-revs b/.git-blame-ignore-revs
new file mode 100644
index 000000000000..1193436b6795
--- /dev/null
+++ b/.git-blame-ignore-revs
@@ -0,0 +1,2 @@
+1aea663c6de4c08f0b2a2d4b2ca788772dc0b686
+9c3528d730dc34eb29837330b98a3a3c8f7260e1
diff --git a/.gitignore b/.gitignore
index 0b883e082c20..56ed1be0b818 100644
--- a/.gitignore
+++ b/.gitignore
@@ -22,3 +22,4 @@ linklint/
 **/.checkstyle
 .java-version
 tmp
+**/.flattened-pom.xml
diff --git a/.idea/checkstyle-idea.xml b/.idea/checkstyle-idea.xml
deleted file mode 100644
index d2a76a8ac7c2..000000000000
--- a/.idea/checkstyle-idea.xml
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
-
-
-
-
\ No newline at end of file
diff --git a/CHANGES.md b/CHANGES.md
index 1decdcde8a22..310805f1c844 100644
--- a/CHANGES.md
+++ b/CHANGES.md
@@ -1,4 +1,3 @@
-# HBASE Changelog
+# HBASE Changelog
+
+## Release 2.5.10 - 2024-07-26
+
+
+
+### IMPROVEMENTS:
+
+| JIRA | Summary | Priority | Component |
+|:---- |:---- | :--- |:---- |
+| [HBASE-28428](https://issues.apache.org/jira/browse/HBASE-28428) | Zookeeper ConnectionRegistry APIs should have timeout | Major | . |
+| [HBASE-28683](https://issues.apache.org/jira/browse/HBASE-28683) | Only allow one TableProcedureInterface for a single table to run at the same time for some special procedure types | Critical | master, proc-v2 |
+| [HBASE-28717](https://issues.apache.org/jira/browse/HBASE-28717) | Support FuzzyRowFilter in REST interface | Major | REST |
+| [HBASE-28718](https://issues.apache.org/jira/browse/HBASE-28718) | Should support different license name for 'Apache License, Version 2.0' | Major | build, shading |
+| [HBASE-28685](https://issues.apache.org/jira/browse/HBASE-28685) | Support non-root context in REST RemoteHTable and RemodeAdmin | Major | REST |
+
+
+### BUG FIXES:
+
+| JIRA | Summary | Priority | Component |
+|:---- |:---- | :--- |:---- |
+| [HBASE-28704](https://issues.apache.org/jira/browse/HBASE-28704) | The expired snapshot can be read by CopyTable or ExportSnapshot | Major | mapreduce, snapshots |
+| [HBASE-28740](https://issues.apache.org/jira/browse/HBASE-28740) | Need to call parent class's serialization methods in CloseExcessRegionReplicasProcedure | Blocker | proc-v2 |
+| [HBASE-28727](https://issues.apache.org/jira/browse/HBASE-28727) | SteppingSplitPolicy may not work when table enables region replication | Minor | . |
+| [HBASE-28665](https://issues.apache.org/jira/browse/HBASE-28665) | WALs not marked closed when there are errors in closing WALs | Minor | wal |
+| [HBASE-28364](https://issues.apache.org/jira/browse/HBASE-28364) | Warn: Cache key had block type null, but was found in L1 cache | Major | . |
+| [HBASE-28714](https://issues.apache.org/jira/browse/HBASE-28714) | Hadoop check for hadoop 3.4.0 is failing | Critical | dependencies, hadoop3 |
+
+
+### SUB-TASKS:
+
+| JIRA | Summary | Priority | Component |
+|:---- |:---- | :--- |:---- |
+| [HBASE-28737](https://issues.apache.org/jira/browse/HBASE-28737) | Add the slack channel related information in README.md | Major | documentation |
+| [HBASE-28723](https://issues.apache.org/jira/browse/HBASE-28723) | [JDK17] TestSecureIPC fails under JDK17 | Major | java, test |
+
+
+### OTHER:
+
+| JIRA | Summary | Priority | Component |
+|:---- |:---- | :--- |:---- |
+| [HBASE-28707](https://issues.apache.org/jira/browse/HBASE-28707) | Backport the code changes in HBASE-28675 to branch-2.x | Major | .
| + + +## Release 2.5.9 - Unreleased (as of 2024-07-05) + + + +### NEW FEATURES: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-26192](https://issues.apache.org/jira/browse/HBASE-26192) | Master UI hbck should provide a JSON formatted output option | Minor | . | + + +### IMPROVEMENTS: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-28656](https://issues.apache.org/jira/browse/HBASE-28656) | Optimize the verifyCopyResult logic in ExportSnapshot | Critical | . | +| [HBASE-28671](https://issues.apache.org/jira/browse/HBASE-28671) | Add close method to REST client | Major | REST | +| [HBASE-28646](https://issues.apache.org/jira/browse/HBASE-28646) | Use Streams to unmarshall protobuf REST data | Major | REST | +| [HBASE-28651](https://issues.apache.org/jira/browse/HBASE-28651) | Reformat the javadoc for CellChunkMap | Major | documentation, regionserver | +| [HBASE-28636](https://issues.apache.org/jira/browse/HBASE-28636) | Add UTs for testing copy/sync table between clusters | Major | mapreduce, test | +| [HBASE-28540](https://issues.apache.org/jira/browse/HBASE-28540) | Cache Results in org.apache.hadoop.hbase.rest.client.RemoteHTable.Scanner | Minor | REST | +| [HBASE-28625](https://issues.apache.org/jira/browse/HBASE-28625) | ExportSnapshot should verify checksums for the source file and the target file | Major | . | +| [HBASE-28614](https://issues.apache.org/jira/browse/HBASE-28614) | Introduce a field to display whether the snapshot is expired | Minor | shell, snapshots, UI | +| [HBASE-28613](https://issues.apache.org/jira/browse/HBASE-28613) | Use streaming when marshalling protobuf REST output | Major | REST | +| [HBASE-26525](https://issues.apache.org/jira/browse/HBASE-26525) | Use unique thread name for group WALs | Major | wal | +| [HBASE-28501](https://issues.apache.org/jira/browse/HBASE-28501) | Support non-SPNEGO authentication methods and implement session handling in REST java client library | Major | REST | +| [HBASE-25972](https://issues.apache.org/jira/browse/HBASE-25972) | Dual File Compaction | Major | . | +| [HBASE-27938](https://issues.apache.org/jira/browse/HBASE-27938) | Enable PE to load any custom implementation of tests at runtime | Minor | test | +| [HBASE-28563](https://issues.apache.org/jira/browse/HBASE-28563) | Closing ZooKeeper in ZKMainServer | Minor | Zookeeper | +| [HBASE-28556](https://issues.apache.org/jira/browse/HBASE-28556) | Reduce memory copying in Rest server when serializing CellModel to Protobuf | Minor | REST | +| [HBASE-28523](https://issues.apache.org/jira/browse/HBASE-28523) | Use a single get call in REST multiget endpoint | Major | REST | +| [HBASE-28552](https://issues.apache.org/jira/browse/HBASE-28552) | Bump up bouncycastle dependency from 1.76 to 1.78 | Major | dependencies, security | +| [HBASE-28518](https://issues.apache.org/jira/browse/HBASE-28518) | Allow specifying a filter for the REST multiget endpoint | Major | REST | +| [HBASE-28517](https://issues.apache.org/jira/browse/HBASE-28517) | Make properties dynamically configured | Major | . 
| +| [HBASE-28529](https://issues.apache.org/jira/browse/HBASE-28529) | Use ZKClientConfig instead of system properties when setting zookeeper configurations | Major | Zookeeper | +| [HBASE-28150](https://issues.apache.org/jira/browse/HBASE-28150) | CreateTableProcedure and DeleteTableProcedure should sleep a while before retrying | Major | master, proc-v2 | +| [HBASE-28497](https://issues.apache.org/jira/browse/HBASE-28497) | Missing fields in Get.toJSON | Major | Client | +| [HBASE-28470](https://issues.apache.org/jira/browse/HBASE-28470) | Fix Typo in Java Method Comment | Trivial | Admin | +| [HBASE-28292](https://issues.apache.org/jira/browse/HBASE-28292) | Make Delay prefetch property to be dynamically configured | Major | . | +| [HBASE-28504](https://issues.apache.org/jira/browse/HBASE-28504) | Implement eviction logic for scanners in Rest APIs to prevent scanner leakage | Major | REST | +| [HBASE-28485](https://issues.apache.org/jira/browse/HBASE-28485) | Re-use ZstdDecompressCtx/ZstdCompressCtx for performance | Major | . | +| [HBASE-28124](https://issues.apache.org/jira/browse/HBASE-28124) | Missing fields in Scan.toJSON | Major | . | +| [HBASE-28427](https://issues.apache.org/jira/browse/HBASE-28427) | FNFE related to 'master:store' when moving archived hfiles to global archived dir | Minor | master | +| [HBASE-28424](https://issues.apache.org/jira/browse/HBASE-28424) | Set correct Result to RegionActionResult for successful Put/Delete mutations | Major | . | +| [HBASE-24791](https://issues.apache.org/jira/browse/HBASE-24791) | Improve HFileOutputFormat2 to avoid always call getTableRelativePath method | Critical | mapreduce | + + +### BUG FIXES: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-28703](https://issues.apache.org/jira/browse/HBASE-28703) | Data race in RecoveredEditsOutputSink while closing writers | Critical | wal | +| [HBASE-28688](https://issues.apache.org/jira/browse/HBASE-28688) | Correct the usage for blanks ignore options in yetus | Major | build, jenkins | +| [HBASE-28658](https://issues.apache.org/jira/browse/HBASE-28658) | The failsafe snapshot should be deleted after rollback successfully | Major | Client, snapshots | +| [HBASE-28663](https://issues.apache.org/jira/browse/HBASE-28663) | CanaryTool continues executing and scanning after timeout | Minor | canary | +| [HBASE-28662](https://issues.apache.org/jira/browse/HBASE-28662) | Removing missing scanner via REST should return 404 | Minor | REST | +| [HBASE-28650](https://issues.apache.org/jira/browse/HBASE-28650) | REST multiget endpoint returns 500 error if no rows are specified | Minor | REST | +| [HBASE-28649](https://issues.apache.org/jira/browse/HBASE-28649) | Wrong properties are used to set up SSL for REST Client Kerberos authenticator | Major | REST | +| [HBASE-28549](https://issues.apache.org/jira/browse/HBASE-28549) | Make shell commands support column qualifiers with colons | Major | shell | +| [HBASE-28619](https://issues.apache.org/jira/browse/HBASE-28619) | Fix the inaccurate message when snapshot doesn't exist | Minor | snapshots | +| [HBASE-28618](https://issues.apache.org/jira/browse/HBASE-28618) | The hadolint check in nightly build is broken | Major | scripts | +| [HBASE-28420](https://issues.apache.org/jira/browse/HBASE-28420) | Aborting Active HMaster is not rejecting remote Procedure Reports | Critical | master, proc-v2 | +| [HBASE-28526](https://issues.apache.org/jira/browse/HBASE-28526) | hbase-rest client shading conflict with 
hbase-shaded-client in HBase 2.x | Major | REST | +| [HBASE-28622](https://issues.apache.org/jira/browse/HBASE-28622) | FilterListWithAND can swallow SEEK\_NEXT\_USING\_HINT | Major | Filters | +| [HBASE-28546](https://issues.apache.org/jira/browse/HBASE-28546) | Make WAL rolling exception clear | Minor | . | +| [HBASE-28628](https://issues.apache.org/jira/browse/HBASE-28628) | Use Base64.getUrlEncoder().withoutPadding() in REST tests | Major | REST | +| [HBASE-28626](https://issues.apache.org/jira/browse/HBASE-28626) | MultiRowRangeFilter deserialization fails in org.apache.hadoop.hbase.rest.model.ScannerModel | Major | REST | +| [HBASE-28582](https://issues.apache.org/jira/browse/HBASE-28582) | ModifyTableProcedure should not reset TRSP on region node when closing unused region replicas | Critical | proc-v2 | +| [HBASE-27915](https://issues.apache.org/jira/browse/HBASE-27915) | Update hbase\_docker with an extra Dockerfile compatible with mac m1 platfrom | Minor | . | +| [HBASE-28599](https://issues.apache.org/jira/browse/HBASE-28599) | RowTooBigException is thrown when duplicate increment RPC call is attempted | Major | regionserver | +| [HBASE-28595](https://issues.apache.org/jira/browse/HBASE-28595) | Losing exception from scan RPC can lead to partial results | Critical | regionserver, Scanners | +| [HBASE-28604](https://issues.apache.org/jira/browse/HBASE-28604) | Fix the error message in ReservoirSample's constructor | Major | util | +| [HBASE-28598](https://issues.apache.org/jira/browse/HBASE-28598) | NPE for writer object access in AsyncFSWAL#closeWriter | Major | wal | +| [HBASE-26625](https://issues.apache.org/jira/browse/HBASE-26625) | ExportSnapshot tool failed to copy data files for tables with merge region | Minor | . | +| [HBASE-28575](https://issues.apache.org/jira/browse/HBASE-28575) | Always printing error log when snapshot table | Minor | snapshots | +| [HBASE-28459](https://issues.apache.org/jira/browse/HBASE-28459) | HFileOutputFormat2 ClassCastException with s3 magic committer | Major | . | +| [HBASE-28567](https://issues.apache.org/jira/browse/HBASE-28567) | Race condition causes MetaRegionLocationCache to never set watcher to populate meta location | Major | . | +| [HBASE-28533](https://issues.apache.org/jira/browse/HBASE-28533) | Split procedure rollback can leave parent region state in SPLITTING after completion | Major | Region Assignment | +| [HBASE-28405](https://issues.apache.org/jira/browse/HBASE-28405) | Region open procedure silently returns without notifying the parent proc | Major | proc-v2, Region Assignment | +| [HBASE-28554](https://issues.apache.org/jira/browse/HBASE-28554) | TestZooKeeperScanPolicyObserver and TestAdminShell fail 100% of times on flaky dashboard | Blocker | shell, test, Zookeeper | +| [HBASE-28482](https://issues.apache.org/jira/browse/HBASE-28482) | Reverse scan with tags throws ArrayIndexOutOfBoundsException with DBE | Major | HFile | +| [HBASE-28298](https://issues.apache.org/jira/browse/HBASE-28298) | HFilePrettyPrinter thrown NoSuchMethodError about MetricRegistry | Major | HFile, UI | +| [HBASE-28500](https://issues.apache.org/jira/browse/HBASE-28500) | Rest Java client library assumes stateless servers | Major | REST | +| [HBASE-28183](https://issues.apache.org/jira/browse/HBASE-28183) | It's impossible to re-enable the quota table if it gets disabled | Major | . 
| +| [HBASE-28481](https://issues.apache.org/jira/browse/HBASE-28481) | Prompting table already exists after failing to create table with many region replications | Major | . | +| [HBASE-28366](https://issues.apache.org/jira/browse/HBASE-28366) | Mis-order of SCP and regionServerReport results into region inconsistencies | Major | . | +| [HBASE-28452](https://issues.apache.org/jira/browse/HBASE-28452) | Missing null check of rpcServer.scheduler.executor causes NPE with invalid value of hbase.client.default.rpc.codec | Major | IPC/RPC | +| [HBASE-28314](https://issues.apache.org/jira/browse/HBASE-28314) | Enable maven-source-plugin for all modules | Major | build | +| [HBASE-28260](https://issues.apache.org/jira/browse/HBASE-28260) | Possible data loss in WAL after RegionServer crash | Major | . | +| [HBASE-28417](https://issues.apache.org/jira/browse/HBASE-28417) | TestBlockingIPC.testBadPreambleHeader sometimes fails with broken pipe instead of bad auth | Major | IPC/RPC, test | +| [HBASE-28354](https://issues.apache.org/jira/browse/HBASE-28354) | RegionSizeCalculator throws NPE when regions are in transition | Major | . | +| [HBASE-28174](https://issues.apache.org/jira/browse/HBASE-28174) | DELETE endpoint in REST API does not support deleting binary row keys/columns | Blocker | REST | + + +### SUB-TASKS: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-28678](https://issues.apache.org/jira/browse/HBASE-28678) | Make nightly builds for 3.x java 17 only and add java 17 test for 2.x | Major | build, jenkins | +| [HBASE-28693](https://issues.apache.org/jira/browse/HBASE-28693) | Change flaky tests to run with jdk17 | Major | flakies, jenkins | +| [HBASE-28677](https://issues.apache.org/jira/browse/HBASE-28677) | Add jdk 17 task for pre commit build for 2.x | Major | build, jenkins | +| [HBASE-28679](https://issues.apache.org/jira/browse/HBASE-28679) | Upgrade yetus to a newer version | Major | build, jenkins | +| [HBASE-28652](https://issues.apache.org/jira/browse/HBASE-28652) | Backport HBASE-21785 master reports open regions as RITs and also messes up rit age metric | Major | . | +| [HBASE-28049](https://issues.apache.org/jira/browse/HBASE-28049) | RSProcedureDispatcher to log the request details during retries | Minor | . | +| [HBASE-26048](https://issues.apache.org/jira/browse/HBASE-26048) | [JDK17] Replace the usage of deprecated API ThreadGroup.destroy() | Major | proc-v2 | +| [HBASE-28586](https://issues.apache.org/jira/browse/HBASE-28586) | Backport HBASE-24791 Improve HFileOutputFormat2 to avoid always call getTableRelativePath method | Major | . | +| [HBASE-28507](https://issues.apache.org/jira/browse/HBASE-28507) | Deprecate hbase-compression-xz | Major | . | +| [HBASE-28457](https://issues.apache.org/jira/browse/HBASE-28457) | Introduce a version field in file based tracker record | Major | HFile | +| [HBASE-27989](https://issues.apache.org/jira/browse/HBASE-27989) | ByteBuffAllocator causes ArithmeticException due to improper poolBufSize value checking | Critical | BucketCache | +| [HBASE-27993](https://issues.apache.org/jira/browse/HBASE-27993) | AbstractFSWAL causes ArithmeticException due to improper logRollSize value checking | Critical | . 
| +| [HBASE-27990](https://issues.apache.org/jira/browse/HBASE-27990) | BucketCache causes ArithmeticException due to improper blockSize value checking | Critical | BucketCache | +| [HBASE-28401](https://issues.apache.org/jira/browse/HBASE-28401) | Introduce a close method for memstore for release active segment | Major | in-memory-compaction, regionserver | +| [HBASE-28350](https://issues.apache.org/jira/browse/HBASE-28350) | [JDK17] Unable to run hbase-it tests with JDK 17 | Major | . | + + +### OTHER: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-28699](https://issues.apache.org/jira/browse/HBASE-28699) | Bump jdk and maven versions in pre commit and nighly dockerfile | Major | jenkins, scripts | +| [HBASE-28661](https://issues.apache.org/jira/browse/HBASE-28661) | Fix compatibility issue in SecurityHeadersFilter in branch-2.x | Major | . | +| [HBASE-28635](https://issues.apache.org/jira/browse/HBASE-28635) | Bump io.airlift:aircompressor from 0.24 to 0.27 | Major | dependabot, dependencies, security | +| [HBASE-28616](https://issues.apache.org/jira/browse/HBASE-28616) | Remove/Deprecated the rs.\* related configuration in TableOutputFormat | Major | mapreduce | +| [HBASE-28605](https://issues.apache.org/jira/browse/HBASE-28605) | Add ErrorProne ban on Hadoop shaded thirdparty jars | Major | build | +| [HBASE-28607](https://issues.apache.org/jira/browse/HBASE-28607) | Bump requests from 2.31.0 to 2.32.0 in /dev-support/flaky-tests | Major | dependabot, scripts, security | +| [HBASE-28574](https://issues.apache.org/jira/browse/HBASE-28574) | Bump jinja2 from 3.1.3 to 3.1.4 in /dev-support/flaky-tests | Major | dependabot, scripts, security | +| [HBASE-28444](https://issues.apache.org/jira/browse/HBASE-28444) | Bump org.apache.zookeeper:zookeeper from 3.8.3 to 3.8.4 | Blocker | security, Zookeeper | +| [HBASE-28403](https://issues.apache.org/jira/browse/HBASE-28403) | Improve debugging for failures in procedure tests | Major | proc-v2, test | + + +## Release 2.5.8 - 2024-03-08 + + + +### IMPROVEMENTS: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-28313](https://issues.apache.org/jira/browse/HBASE-28313) | StorefileRefresherChore should not refresh readonly table | Major | regionserver | +| [HBASE-28398](https://issues.apache.org/jira/browse/HBASE-28398) | Make sure we close all the scanners in TestHRegion | Major | test | +| [HBASE-28356](https://issues.apache.org/jira/browse/HBASE-28356) | RegionServer Canary can should use Scan just like Region Canary with option to enable Raw Scan | Minor | canary | +| [HBASE-28357](https://issues.apache.org/jira/browse/HBASE-28357) | MoveWithAck#isSuccessfulScan for Region movement should use Region End Key for limiting scan to one region only. | Minor | Region Assignment | +| [HBASE-28332](https://issues.apache.org/jira/browse/HBASE-28332) | Type conversion is no need in method CompactionChecker.chore() | Minor | Compaction | +| [HBASE-28327](https://issues.apache.org/jira/browse/HBASE-28327) | Add remove(String key, Metric metric) method to MetricRegistry interface | Major | metrics | +| [HBASE-28271](https://issues.apache.org/jira/browse/HBASE-28271) | Infinite waiting on lock acquisition by snapshot can result in unresponsive master | Major | . | +| [HBASE-28319](https://issues.apache.org/jira/browse/HBASE-28319) | Expose DelegatingRpcScheduler as IA.LimitedPrivate | Major | . 
| +| [HBASE-28306](https://issues.apache.org/jira/browse/HBASE-28306) | Add property to customize Version information | Major | . | +| [HBASE-28256](https://issues.apache.org/jira/browse/HBASE-28256) | Enhance ByteBufferUtils.readVLong to read more bytes at a time | Major | Performance | +| [HBASE-20528](https://issues.apache.org/jira/browse/HBASE-20528) | Revise collections copying from iteration to built-in function | Minor | . | +| [HBASE-21243](https://issues.apache.org/jira/browse/HBASE-21243) | Correct java-doc for the method RpcServer.getRemoteAddress() | Trivial | . | + + +### BUG FIXES: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-28391](https://issues.apache.org/jira/browse/HBASE-28391) | Remove the need for ADMIN permissions for listDecommissionedRegionServers | Major | Admin | +| [HBASE-28390](https://issues.apache.org/jira/browse/HBASE-28390) | WAL value compression fails for cells with large values | Major | . | +| [HBASE-28377](https://issues.apache.org/jira/browse/HBASE-28377) | Fallback to simple is broken for blocking rpc client | Major | IPC/RPC | +| [HBASE-28311](https://issues.apache.org/jira/browse/HBASE-28311) | Few ITs (using MiniMRYarnCluster on hadoop-2) are failing due to NCDFE: com/sun/jersey/core/util/FeaturesAndProperties | Major | integration tests, test | +| [HBASE-28353](https://issues.apache.org/jira/browse/HBASE-28353) | Close HBase connection on implicit exit from HBase shell | Major | shell | +| [HBASE-28204](https://issues.apache.org/jira/browse/HBASE-28204) | Region Canary can take lot more time If any region (except the first region) starts with delete markers | Major | canary | +| [HBASE-28345](https://issues.apache.org/jira/browse/HBASE-28345) | Close HBase connection on exit from HBase Shell | Major | shell | +| [HBASE-26816](https://issues.apache.org/jira/browse/HBASE-26816) | Fix CME in ReplicationSourceManager | Minor | Replication | +| [HBASE-28330](https://issues.apache.org/jira/browse/HBASE-28330) | TestUnknownServers.testListUnknownServers is flaky in branch-2 | Major | test | +| [HBASE-28326](https://issues.apache.org/jira/browse/HBASE-28326) | All nightly jobs are failing | Major | jenkins | +| [HBASE-28324](https://issues.apache.org/jira/browse/HBASE-28324) | TestRegionNormalizerWorkQueue#testTake is flaky | Major | test | +| [HBASE-28312](https://issues.apache.org/jira/browse/HBASE-28312) | The bad auth exception can not be passed to client rpc calls properly | Major | Client, IPC/RPC | +| [HBASE-28287](https://issues.apache.org/jira/browse/HBASE-28287) | MOB HFiles are expired earlier than their reference data | Major | mob | +| [HBASE-28301](https://issues.apache.org/jira/browse/HBASE-28301) | IntegrationTestImportTsv fails with UnsupportedOperationException | Minor | integration tests, test | +| [HBASE-28297](https://issues.apache.org/jira/browse/HBASE-28297) | IntegrationTestImportTsv fails with ArrayIndexOfOutBounds | Major | integration tests, test | +| [HBASE-28261](https://issues.apache.org/jira/browse/HBASE-28261) | Sync jvm11 module flags from hbase-surefire.jdk11.flags to bin/hbase | Trivial | . 
| +| [HBASE-28259](https://issues.apache.org/jira/browse/HBASE-28259) | Add java.base/java.io=ALL-UNNAMED open to jdk11\_jvm\_flags | Trivial | java | +| [HBASE-28224](https://issues.apache.org/jira/browse/HBASE-28224) | ClientSideRegionScanner appears not to shutdown MobFileCache | Minor | Scanners | +| [HBASE-28269](https://issues.apache.org/jira/browse/HBASE-28269) | Fix broken ruby scripts and clean up logging | Major | jruby | +| [HBASE-28262](https://issues.apache.org/jira/browse/HBASE-28262) | Fix spotless error on branch-2.5 | Major | . | + + +### TESTS: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-28337](https://issues.apache.org/jira/browse/HBASE-28337) | Positive connection test in TestShadeSaslAuthenticationProvider runs with Kerberos instead of Shade authentication | Major | . | +| [HBASE-28274](https://issues.apache.org/jira/browse/HBASE-28274) | Flaky test: TestFanOutOneBlockAsyncDFSOutput (Part 2) | Major | flakies, integration tests, test | +| [HBASE-28275](https://issues.apache.org/jira/browse/HBASE-28275) | [Flaky test] Fix 'test\_list\_decommissioned\_regionservers' in TestAdminShell2.java | Minor | flakies, test | +| [HBASE-28254](https://issues.apache.org/jira/browse/HBASE-28254) | Flaky test: TestTableShell | Major | flakies, integration tests | + + +### SUB-TASKS: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-28340](https://issues.apache.org/jira/browse/HBASE-28340) | Add trust/key store type to ZK TLS settings handled by HBase | Major | Zookeeper | +| [HBASE-28341](https://issues.apache.org/jira/browse/HBASE-28341) | [JDK17] Fix Failure TestLdapHttpServer | Major | . | +| [HBASE-28031](https://issues.apache.org/jira/browse/HBASE-28031) | TestClusterScopeQuotaThrottle is still failing with broken WAL writer | Major | test | +| [HBASE-28290](https://issues.apache.org/jira/browse/HBASE-28290) | Add 'TM' superscript to the index page title when generating javadoc | Major | build, documentation | + + +### OTHER: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-28333](https://issues.apache.org/jira/browse/HBASE-28333) | Refactor TestClientTimeouts to make it more clear that what we want to test | Major | Client, test | +| [HBASE-28310](https://issues.apache.org/jira/browse/HBASE-28310) | Bump jinja2 from 3.1.2 to 3.1.3 in /dev-support/flaky-tests | Major | dependabot, scripts, security, test | +| [HBASE-28308](https://issues.apache.org/jira/browse/HBASE-28308) | Bump gitpython from 3.1.37 to 3.1.41 in /dev-support/flaky-tests | Major | dependabot, scripts, security, test | +| [HBASE-28304](https://issues.apache.org/jira/browse/HBASE-28304) | Add hbase-shaded-testing-util version to dependencyManagement | Major | . | + + +## Release 2.5.7 - 2023-12-22 + + + +### NEW FEATURES: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-28168](https://issues.apache.org/jira/browse/HBASE-28168) | Add option in RegionMover.java to isolate one or more regions on the RegionSever | Minor | . 
| + + +### IMPROVEMENTS: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-28209](https://issues.apache.org/jira/browse/HBASE-28209) | Create a jmx metrics to expose the oldWALs directory size | Major | metrics | +| [HBASE-28212](https://issues.apache.org/jira/browse/HBASE-28212) | Do not need to maintain rollback step when root procedure does not support rollback | Major | master, proc-v2 | +| [HBASE-25549](https://issues.apache.org/jira/browse/HBASE-25549) | Provide a switch that allows avoiding reopening all regions when modifying a table to prevent RIT storms. | Major | master, shell | +| [HBASE-28193](https://issues.apache.org/jira/browse/HBASE-28193) | Update plugin for SBOM generation to 2.7.10 | Major | build, pom | +| [HBASE-27276](https://issues.apache.org/jira/browse/HBASE-27276) | Reduce reflection overhead in Filter deserialization | Major | . | +| [HBASE-28113](https://issues.apache.org/jira/browse/HBASE-28113) | Modify the way of acquiring the RegionStateNode lock in checkOnlineRegionsReport to tryLock | Major | master | + + +### BUG FIXES: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-28247](https://issues.apache.org/jira/browse/HBASE-28247) | Add java.base/sun.net.dns and java.base/sun.net.util export to jdk11 JVM test flags | Minor | java | +| [HBASE-28252](https://issues.apache.org/jira/browse/HBASE-28252) | Add sun.net.dns and sun.net.util to the JDK11+ module exports in the hbase script | Major | scripts | +| [HBASE-28248](https://issues.apache.org/jira/browse/HBASE-28248) | Race between RegionRemoteProcedureBase and rollback operation could lead to ROLLEDBACK state be persisent to procedure store | Critical | proc-v2, Region Assignment | +| [HBASE-28211](https://issues.apache.org/jira/browse/HBASE-28211) | BucketCache.blocksByHFile may leak on allocationFailure or if we reach io errors tolerated | Major | . | +| [HBASE-28217](https://issues.apache.org/jira/browse/HBASE-28217) | PrefetchExecutor should not run for files from CFs that have disabled BLOCKCACHE | Major | . | +| [HBASE-28210](https://issues.apache.org/jira/browse/HBASE-28210) | There could be holes in stack ids when loading procedures | Critical | master, proc-v2 | +| [HBASE-24687](https://issues.apache.org/jira/browse/HBASE-24687) | MobFileCleanerChore uses a new Connection for each table each time it runs | Minor | mob | +| [HBASE-28191](https://issues.apache.org/jira/browse/HBASE-28191) | Meta browser can happen NPE when the server or target server of region is null | Major | UI | +| [HBASE-28175](https://issues.apache.org/jira/browse/HBASE-28175) | RpcLogDetails' Message can become corrupt before log is consumed | Major | . | +| [HBASE-28189](https://issues.apache.org/jira/browse/HBASE-28189) | Fix the miss count in one of CombinedBlockCache getBlock implementations | Major | . | +| [HBASE-28184](https://issues.apache.org/jira/browse/HBASE-28184) | Tailing the WAL is very slow if there are multiple peers. | Major | Replication | +| [HBASE-28185](https://issues.apache.org/jira/browse/HBASE-28185) | Alter table to set TTL using hbase shell failed when ttl string is not match format | Minor | . | +| [HBASE-28157](https://issues.apache.org/jira/browse/HBASE-28157) | hbck should report previously reported regions with null region location | Major | . | +| [HBASE-28145](https://issues.apache.org/jira/browse/HBASE-28145) | When specifying the wrong BloomFilter type while creating a table in HBase shell, an error will occur. 
| Minor | shell | +| [HBASE-28017](https://issues.apache.org/jira/browse/HBASE-28017) | Client metrics are missing response and request size data when using netty | Major | . | +| [HBASE-28146](https://issues.apache.org/jira/browse/HBASE-28146) | Remove ServerManager's rsAdmins map | Major | master | + + +### SUB-TASKS: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-28206](https://issues.apache.org/jira/browse/HBASE-28206) | [JDK17] JVM crashes intermittently on aarch64 | Major | . | +| [HBASE-24179](https://issues.apache.org/jira/browse/HBASE-24179) | Backport fix for "Netty SASL implementation does not wait for challenge response" to branch-2.x | Major | netty | + + +### OTHER: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-28243](https://issues.apache.org/jira/browse/HBASE-28243) | Bump jackson version to 2.15.2 | Major | . | +| [HBASE-28245](https://issues.apache.org/jira/browse/HBASE-28245) | Sync internal protobuf version for hbase to be same as hbase-thirdparty | Major | . | +| [HBASE-28153](https://issues.apache.org/jira/browse/HBASE-28153) | Upgrade zookeeper to a newer version | Major | security, Zookeeper | +| [HBASE-28110](https://issues.apache.org/jira/browse/HBASE-28110) | Align TestShadeSaslAuthenticationProvider between different branches | Major | security, test | +| [HBASE-28147](https://issues.apache.org/jira/browse/HBASE-28147) | Bump gitpython from 3.1.35 to 3.1.37 in /dev-support/flaky-tests | Major | dependabot, scripts, security | + + +## Release 2.5.6 - 2023-10-20 + + + +### IMPROVEMENTS: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-28135](https://issues.apache.org/jira/browse/HBASE-28135) | Specify -Xms for tests | Major | test | +| [HBASE-22138](https://issues.apache.org/jira/browse/HBASE-22138) | Undo our direct dependence on protos in google.protobuf.Any in Procedure.proto | Major | proc-v2, Protobufs | +| [HBASE-28128](https://issues.apache.org/jira/browse/HBASE-28128) | Reject requests at RPC layer when RegionServer is aborting | Major | . | +| [HBASE-28068](https://issues.apache.org/jira/browse/HBASE-28068) | Add hbase.normalizer.merge.merge\_request\_max\_number\_of\_regions property to limit max number of regions in a merge request for merge normalization | Minor | Normalizer | +| [HBASE-28059](https://issues.apache.org/jira/browse/HBASE-28059) | Use correct units in RegionLoad#getStoreUncompressedSizeMB() | Major | Admin | +| [HBASE-28038](https://issues.apache.org/jira/browse/HBASE-28038) | Add TLS settings to ZooKeeper client | Major | Zookeeper | +| [HBASE-28052](https://issues.apache.org/jira/browse/HBASE-28052) | Removing the useless parameters from ProcedureExecutor.loadProcedures | Minor | proc-v2 | +| [HBASE-28051](https://issues.apache.org/jira/browse/HBASE-28051) | The javadoc about RegionProcedureStore.delete is incorrect | Trivial | documentation | +| [HBASE-28025](https://issues.apache.org/jira/browse/HBASE-28025) | Enhance ByteBufferUtils.findCommonPrefix to compare 8 bytes each time | Major | Performance | +| [HBASE-28012](https://issues.apache.org/jira/browse/HBASE-28012) | Avoid CellUtil.cloneRow in BufferedEncodedSeeker | Major | Offheaping, Performance | +| [HBASE-27956](https://issues.apache.org/jira/browse/HBASE-27956) | Support wall clock profiling in ProfilerServlet | Major | . 
| +| [HBASE-27906](https://issues.apache.org/jira/browse/HBASE-27906) | Fix the javadoc for SyncFutureCache | Minor | documentation | +| [HBASE-27897](https://issues.apache.org/jira/browse/HBASE-27897) | ConnectionImplementation#locateRegionInMeta should pause and retry when taking user region lock failed | Major | Client | +| [HBASE-27920](https://issues.apache.org/jira/browse/HBASE-27920) | Skipping compact for this region if the table disable compaction | Major | Compaction | +| [HBASE-27948](https://issues.apache.org/jira/browse/HBASE-27948) | Report memstore on-heap and off-heap size as jmx metrics in sub=Memory bean | Major | . | +| [HBASE-27892](https://issues.apache.org/jira/browse/HBASE-27892) | Report memstore on-heap and off-heap size as jmx metrics | Major | metrics | +| [HBASE-27902](https://issues.apache.org/jira/browse/HBASE-27902) | New async admin api to invoke coproc on multiple servers | Major | . | +| [HBASE-27939](https://issues.apache.org/jira/browse/HBASE-27939) | Bump snappy-java from 1.1.9.1 to 1.1.10.1 | Major | dependabot, security | +| [HBASE-27888](https://issues.apache.org/jira/browse/HBASE-27888) | Record readBlock message in log when it takes too long time | Minor | HFile | +| [HBASE-27899](https://issues.apache.org/jira/browse/HBASE-27899) | Beautify the output information of the getStats method in ReplicationSource | Minor | Replication | + + +### BUG FIXES: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-28133](https://issues.apache.org/jira/browse/HBASE-28133) | TestSyncTimeRangeTracker fails with OOM with small -Xms values | Major | . | +| [HBASE-28144](https://issues.apache.org/jira/browse/HBASE-28144) | Canary publish read failure fails with NPE if region location is null | Major | . | +| [HBASE-28109](https://issues.apache.org/jira/browse/HBASE-28109) | NPE for the region state: Failed to become active master (HMaster) | Major | master | +| [HBASE-28129](https://issues.apache.org/jira/browse/HBASE-28129) | Do not retry refreshSources when region server is already stopping | Major | Replication, rpc | +| [HBASE-28126](https://issues.apache.org/jira/browse/HBASE-28126) | TestSimpleRegionNormalizer fails 100% of times on flaky dashboard | Major | Normalizer | +| [HBASE-28081](https://issues.apache.org/jira/browse/HBASE-28081) | Snapshot working dir does not retain ACLs after snapshot commit phase | Blocker | acl, test | +| [HBASE-28037](https://issues.apache.org/jira/browse/HBASE-28037) | Replication stuck after switching to new WAL but the queue is empty | Blocker | Replication | +| [HBASE-28047](https://issues.apache.org/jira/browse/HBASE-28047) | Deadlock when opening mob files | Major | mob | +| [HBASE-27991](https://issues.apache.org/jira/browse/HBASE-27991) | [hbase-examples] MultiThreadedClientExample throws java.lang.ClassCastException | Minor | . 
| +| [HBASE-28105](https://issues.apache.org/jira/browse/HBASE-28105) | NPE in QuotaCache if Table is dropped from cluster | Major | Quotas | +| [HBASE-28106](https://issues.apache.org/jira/browse/HBASE-28106) | TestShadeSaslAuthenticationProvider fails for branch-2.x | Blocker | test | +| [HBASE-28101](https://issues.apache.org/jira/browse/HBASE-28101) | Should check the return value of protobuf Message.mergeDelimitedFrom | Critical | Protobufs, rpc | +| [HBASE-28065](https://issues.apache.org/jira/browse/HBASE-28065) | Corrupt HFile data is mishandled in several cases | Major | HFile | +| [HBASE-28061](https://issues.apache.org/jira/browse/HBASE-28061) | HBaseTestingUtility failed to start MiniHbaseCluster in case of Hadoop3.3.1 | Major | hadoop3, integration tests | +| [HBASE-28076](https://issues.apache.org/jira/browse/HBASE-28076) | NPE on initialization error in RecoveredReplicationSourceShipper | Minor | . | +| [HBASE-28055](https://issues.apache.org/jira/browse/HBASE-28055) | Performance improvement for scan over several stores. | Major | . | +| [HBASE-27966](https://issues.apache.org/jira/browse/HBASE-27966) | HBase Master/RS JVM metrics populated incorrectly | Major | metrics | +| [HBASE-28042](https://issues.apache.org/jira/browse/HBASE-28042) | Snapshot corruptions due to non-atomic rename within same filesystem | Major | snapshots | +| [HBASE-28011](https://issues.apache.org/jira/browse/HBASE-28011) | The logStats about LruBlockCache is not accurate | Minor | BlockCache | +| [HBASE-27553](https://issues.apache.org/jira/browse/HBASE-27553) | SlowLog does not include params for Mutations | Minor | . | +| [HBASE-27859](https://issues.apache.org/jira/browse/HBASE-27859) | HMaster.getCompactionState can happen NPE when region state is closed | Major | master | +| [HBASE-27942](https://issues.apache.org/jira/browse/HBASE-27942) | The description about hbase.hstore.comactionThreshold is not accurate | Minor | documentation | +| [HBASE-27951](https://issues.apache.org/jira/browse/HBASE-27951) | Use ADMIN\_QOS in MasterRpcServices for regionserver operational dependencies | Major | . | +| [HBASE-27950](https://issues.apache.org/jira/browse/HBASE-27950) | ClientSideRegionScanner does not adhere to RegionScanner.nextRaw contract | Minor | . | +| [HBASE-27936](https://issues.apache.org/jira/browse/HBASE-27936) | NPE in StoreFileReader.passesGeneralRowPrefixBloomFilter() | Major | regionserver | +| [HBASE-27871](https://issues.apache.org/jira/browse/HBASE-27871) | Meta replication stuck forever if wal it's still reading gets rolled and deleted | Major | meta replicas | +| [HBASE-27940](https://issues.apache.org/jira/browse/HBASE-27940) | Midkey metadata in root index block would always be ignored by BlockIndexReader.readMultiLevelIndexRoot | Major | HFile | + + +### SUB-TASKS: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-28050](https://issues.apache.org/jira/browse/HBASE-28050) | RSProcedureDispatcher to fail-fast for krb auth failures | Major | . | +| [HBASE-28028](https://issues.apache.org/jira/browse/HBASE-28028) | Read all compressed bytes to a byte array before submitting them to decompressor | Major | . 
| +| [HBASE-28027](https://issues.apache.org/jira/browse/HBASE-28027) | Make TestClusterScopeQuotaThrottle run faster | Major | Quotas, test | + + +### OTHER: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-28127](https://issues.apache.org/jira/browse/HBASE-28127) | Upgrade avro version to 1.11.3 | Major | dependencies, security | +| [HBASE-28112](https://issues.apache.org/jira/browse/HBASE-28112) | Bump org.xerial.snappy:snappy-java from 1.1.10.1 to 1.1.10.4 | Major | dependabot, dependencies, security | +| [HBASE-28089](https://issues.apache.org/jira/browse/HBASE-28089) | Upgrade BouncyCastle to fix CVE-2023-33201 | Major | . | +| [HBASE-28087](https://issues.apache.org/jira/browse/HBASE-28087) | Add hadoop 3.3.6 in hadoopcheck | Major | jenkins, scripts | +| [HBASE-28066](https://issues.apache.org/jira/browse/HBASE-28066) | Drop duplicate test class TestShellRSGroups.java | Minor | test | +| [HBASE-28074](https://issues.apache.org/jira/browse/HBASE-28074) | Bump gitpython from 3.1.34 to 3.1.35 in /dev-support/flaky-tests | Major | dependabot, scripts, security | +| [HBASE-28072](https://issues.apache.org/jira/browse/HBASE-28072) | Bump gitpython from 3.1.32 to 3.1.34 in /dev-support/flaky-tests | Major | dependabot, scripts, security | +| [HBASE-28022](https://issues.apache.org/jira/browse/HBASE-28022) | Remove netty 3 dependency in the pom file for hbase-endpoint | Major | dependencies, pom, security | +| [HBASE-28018](https://issues.apache.org/jira/browse/HBASE-28018) | Bump gitpython from 3.1.30 to 3.1.32 in /dev-support/flaky-tests | Major | dependabot, scripts, security | +| [HBASE-27992](https://issues.apache.org/jira/browse/HBASE-27992) | Bump exec-maven-plugin to 3.1.0 | Trivial | build | + + +## Release 2.5.5 - 2023-06-09 + + + +### IMPROVEMENTS: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27876](https://issues.apache.org/jira/browse/HBASE-27876) | Only generate SBOM when releasing | Minor | build, pom | +| [HBASE-27848](https://issues.apache.org/jira/browse/HBASE-27848) | Should fast-fail if unmatched column family exists when using ImportTsv | Major | mapreduce | +| [HBASE-27870](https://issues.apache.org/jira/browse/HBASE-27870) | Eliminate the 'WARNING: package jdk.internal.util.random not in java.base' when running UTs with jdk11 | Major | build, pom, test | +| [HBASE-27858](https://issues.apache.org/jira/browse/HBASE-27858) | Update surefire version to 3.0.0 and use the SurefireForkNodeFactory | Minor | test | +| [HBASE-27844](https://issues.apache.org/jira/browse/HBASE-27844) | changed type names to avoid conflicts with built-in types | Minor | build | +| [HBASE-27838](https://issues.apache.org/jira/browse/HBASE-27838) | Update zstd-jni from version 1.5.4-2 -\> 1.5.5-2 | Minor | io | +| [HBASE-27799](https://issues.apache.org/jira/browse/HBASE-27799) | RpcThrottlingException wait interval message is misleading between 0-1s | Major | . 
| +| [HBASE-27821](https://issues.apache.org/jira/browse/HBASE-27821) | Split TestFuzzyRowFilterEndToEnd | Major | test | +| [HBASE-27792](https://issues.apache.org/jira/browse/HBASE-27792) | Guard Master/RS Dump Servlet behind admin walls | Minor | security, UI | +| [HBASE-27819](https://issues.apache.org/jira/browse/HBASE-27819) | 10k RpcServer.MAX\_REQUEST\_SIZE is not enough in ReplicationDroppedTable related tests | Major | test | +| [HBASE-27808](https://issues.apache.org/jira/browse/HBASE-27808) | Change flatten mode for oss in our pom file | Major | community, pom | +| [HBASE-27818](https://issues.apache.org/jira/browse/HBASE-27818) | Split TestReplicationDroppedTables | Major | Replication, test | +| [HBASE-27789](https://issues.apache.org/jira/browse/HBASE-27789) | Backport "HBASE-24914 Remove duplicate code appearing continuously in method ReplicationPeerManager.updatePeerConfig" to branch-2 | Minor | Replication | +| [HBASE-27422](https://issues.apache.org/jira/browse/HBASE-27422) | Support replication for hbase:acl | Major | acl, Replication | +| [HBASE-27713](https://issues.apache.org/jira/browse/HBASE-27713) | Remove numRegions in Region Metrics | Major | metrics | + + +### BUG FIXES: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27793](https://issues.apache.org/jira/browse/HBASE-27793) | Make HBCK be able to report unknown servers | Major | hbck | +| [HBASE-27872](https://issues.apache.org/jira/browse/HBASE-27872) | xerial's snappy-java requires GLIBC \>= 2.32 | Minor | . | +| [HBASE-27867](https://issues.apache.org/jira/browse/HBASE-27867) | Close the L1 victim handler race | Minor | BlockCache | +| [HBASE-27277](https://issues.apache.org/jira/browse/HBASE-27277) | TestRaceBetweenSCPAndTRSP fails in pre commit | Major | proc-v2, test | +| [HBASE-27874](https://issues.apache.org/jira/browse/HBASE-27874) | Problem in flakey generated report causes pre-commit run to fail | Major | build | +| [HBASE-27865](https://issues.apache.org/jira/browse/HBASE-27865) | TestThriftServerCmdLine fails with org.apache.hadoop.hbase.SystemExitRule$SystemExitInTestException | Major | test, Thrift | +| [HBASE-27860](https://issues.apache.org/jira/browse/HBASE-27860) | Fix build error against Hadoop 3.3.5 | Major | build, hadoop3 | +| [HBASE-27857](https://issues.apache.org/jira/browse/HBASE-27857) | HBaseClassTestRule: system exit not restored if test times out may cause test to hang | Minor | test | +| [HBASE-26646](https://issues.apache.org/jira/browse/HBASE-26646) | WALPlayer should obtain token from filesystem | Minor | . | +| [HBASE-27824](https://issues.apache.org/jira/browse/HBASE-27824) | NPE in MetricsMasterWrapperImpl.isRunning | Major | test | +| [HBASE-27823](https://issues.apache.org/jira/browse/HBASE-27823) | NPE in ClaimReplicationQueuesProcedure when running TestAssignmentManager.testAssignSocketTimeout | Major | test | +| [HBASE-27822](https://issues.apache.org/jira/browse/HBASE-27822) | TestFromClientSide5.testAppendWithoutWAL is flaky | Major | scan, test | +| [HBASE-27810](https://issues.apache.org/jira/browse/HBASE-27810) | HBCK throws RejectedExecutionException when closing ZooKeeper resources | Major | hbck | +| [HBASE-27807](https://issues.apache.org/jira/browse/HBASE-27807) | PressureAwareCompactionThroughputController#tune log the opposite of the actual scenario | Trivial | Compaction | +| [HBASE-27796](https://issues.apache.org/jira/browse/HBASE-27796) | Improve MemcachedBlockCache | Major | . 
| +| [HBASE-27768](https://issues.apache.org/jira/browse/HBASE-27768) | Race conditions in BlockingRpcConnection | Major | . | +| [HBASE-27778](https://issues.apache.org/jira/browse/HBASE-27778) | Incorrect ReplicationSourceWALReader. totalBufferUsed may cause replication hang up | Major | Replication | + + +### SUB-TASKS: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27843](https://issues.apache.org/jira/browse/HBASE-27843) | If moveAndClose fails HFileArchiver should delete any incomplete archive side changes | Major | . | +| [HBASE-20804](https://issues.apache.org/jira/browse/HBASE-20804) | Document and add tests for HBaseConfTool | Major | documentation, tooling | + + +### OTHER: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27820](https://issues.apache.org/jira/browse/HBASE-27820) | HBase is not starting due to Jersey library conflicts with javax.ws.rs.api jar | Major | dependencies | +| [HBASE-27880](https://issues.apache.org/jira/browse/HBASE-27880) | Bump requests from 2.28.1 to 2.31.0 in /dev-support/flaky-tests | Major | dependabot, scripts, security | +| [HBASE-27634](https://issues.apache.org/jira/browse/HBASE-27634) | Builds emit errors related to SBOM parsing | Minor | build | +| [HBASE-27864](https://issues.apache.org/jira/browse/HBASE-27864) | Reduce the Cardinality for TestFuzzyRowFilterEndToEndLarge | Major | test | +| [HBASE-27863](https://issues.apache.org/jira/browse/HBASE-27863) | Add hadoop 3.3.5 check in our personality script | Major | jenkins, scripts | +| [HBASE-27762](https://issues.apache.org/jira/browse/HBASE-27762) | Include EventType and ProcedureV2 pid in logging via MDC | Major | . | +| [HBASE-27791](https://issues.apache.org/jira/browse/HBASE-27791) | Upgrade vega and its related js libraries | Major | UI | +| [HBASE-27720](https://issues.apache.org/jira/browse/HBASE-27720) | TestClusterRestartFailover is flakey due to metrics assertion | Minor | test | + + +## Release 2.5.4 - 2023-04-14 + + + +### IMPROVEMENTS: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27758](https://issues.apache.org/jira/browse/HBASE-27758) | Inconsistent synchronization in MetricsUserSourceImpl | Major | . | +| [HBASE-26526](https://issues.apache.org/jira/browse/HBASE-26526) | Introduce a timeout to shutdown of WAL | Major | wal | +| [HBASE-27744](https://issues.apache.org/jira/browse/HBASE-27744) | Update compression dependencies | Minor | io | +| [HBASE-27676](https://issues.apache.org/jira/browse/HBASE-27676) | Scan handlers in the RPC executor should match at least one scan queues | Major | . | +| [HBASE-27646](https://issues.apache.org/jira/browse/HBASE-27646) | Should not use pread when prefetching in HFilePreadReader | Minor | Performance | +| [HBASE-27710](https://issues.apache.org/jira/browse/HBASE-27710) | ByteBuff ref counting is too expensive for on-heap buffers | Major | . | +| [HBASE-27672](https://issues.apache.org/jira/browse/HBASE-27672) | Read RPC threads may BLOCKED at the Configuration.get when using java compression | Minor | . | +| [HBASE-27670](https://issues.apache.org/jira/browse/HBASE-27670) | Improve FSUtils to directly obtain FSDataOutputStream | Major | Filesystem Integration | +| [HBASE-23983](https://issues.apache.org/jira/browse/HBASE-23983) | Spotbugs warning complain on master build | Major | . 
| +| [HBASE-27458](https://issues.apache.org/jira/browse/HBASE-27458) | Use ReadWriteLock for region scanner readpoint map | Minor | Scanners | +| [HBASE-23102](https://issues.apache.org/jira/browse/HBASE-23102) | Improper Usage of Map putIfAbsent | Minor | . | +| [HBASE-27666](https://issues.apache.org/jira/browse/HBASE-27666) | Allow preCompact hooks to return scanners whose cells can be shipped | Major | . | +| [HBASE-27655](https://issues.apache.org/jira/browse/HBASE-27655) | Remove the empty path annotation from ClusterMetricsResource | Trivial | . | +| [HBASE-15242](https://issues.apache.org/jira/browse/HBASE-15242) | Client metrics for retries and timeouts | Major | metrics | +| [HBASE-21521](https://issues.apache.org/jira/browse/HBASE-21521) | Expose master startup status via web UI | Major | master, UI | +| [HBASE-27590](https://issues.apache.org/jira/browse/HBASE-27590) | Change Iterable to List in SnapshotFileCache | Minor | . | + + +### BUG FIXES: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27704](https://issues.apache.org/jira/browse/HBASE-27704) | Quotas can drastically overflow configured limit | Major | . | +| [HBASE-27726](https://issues.apache.org/jira/browse/HBASE-27726) | ruby shell not handled SyntaxError exceptions properly | Minor | shell | +| [HBASE-27732](https://issues.apache.org/jira/browse/HBASE-27732) | NPE in TestBasicWALEntryStreamFSHLog.testEOFExceptionInOldWALsDirectory | Major | Replication | +| [HBASE-26866](https://issues.apache.org/jira/browse/HBASE-26866) | Shutdown WAL may abort region server | Major | wal | +| [HBASE-27684](https://issues.apache.org/jira/browse/HBASE-27684) | Client metrics for user region lock related behaviors. | Major | Client | +| [HBASE-27671](https://issues.apache.org/jira/browse/HBASE-27671) | Client should not be able to restore/clone a snapshot after it's TTL has expired | Minor | . | +| [HBASE-27651](https://issues.apache.org/jira/browse/HBASE-27651) | hbase-daemon.sh foreground\_start should propagate SIGHUP and SIGTERM | Minor | scripts | +| [HBASE-27718](https://issues.apache.org/jira/browse/HBASE-27718) | The regionStateNode only need remove once in regionOffline | Minor | amv2 | +| [HBASE-27729](https://issues.apache.org/jira/browse/HBASE-27729) | Missed one parameter when logging exception in StoreFileListFile | Major | logging | +| [HBASE-27708](https://issues.apache.org/jira/browse/HBASE-27708) | CPU hot-spot resolving User subject | Major | Client, tracing | +| [HBASE-27652](https://issues.apache.org/jira/browse/HBASE-27652) | Client-side lock contention around Configuration when using read replica regions | Major | Client, read replicas | +| [HBASE-27714](https://issues.apache.org/jira/browse/HBASE-27714) | WALEntryStreamTestBase creates a new HBTU in startCluster method which causes all sub classes are testing default configurations | Major | Replication, test | +| [HBASE-27688](https://issues.apache.org/jira/browse/HBASE-27688) | HFile splitting occurs during bulkload, the CREATE\_TIME\_TS of hfileinfo is 0 | Major | HFile | +| [HBASE-27250](https://issues.apache.org/jira/browse/HBASE-27250) | MasterRpcService#setRegionStateInMeta does not support replica region encodedNames or region names | Minor | . | +| [HBASE-23561](https://issues.apache.org/jira/browse/HBASE-23561) | Look up of Region in Master by encoded region name is O(n) | Trivial | . 
| +| [HBASE-24781](https://issues.apache.org/jira/browse/HBASE-24781) | Clean up peer metrics when disabling peer | Major | Replication | +| [HBASE-27650](https://issues.apache.org/jira/browse/HBASE-27650) | Merging empty regions corrupts meta cache | Major | . | +| [HBASE-27668](https://issues.apache.org/jira/browse/HBASE-27668) | PB's parseDelimitedFrom can successfully return when there are not enough bytes | Critical | Protobufs, wal | +| [HBASE-27644](https://issues.apache.org/jira/browse/HBASE-27644) | Should not return false when WALKey has no following KVs while reading WAL file | Critical | dataloss, wal | +| [HBASE-27649](https://issues.apache.org/jira/browse/HBASE-27649) | WALPlayer does not properly dedupe overridden cell versions | Major | . | +| [HBASE-27661](https://issues.apache.org/jira/browse/HBASE-27661) | Set size of systable queue in UT | Major | . | +| [HBASE-27654](https://issues.apache.org/jira/browse/HBASE-27654) | IndexBlockEncoding is missing in HFileContextBuilder copy constructor | Major | . | +| [HBASE-27636](https://issues.apache.org/jira/browse/HBASE-27636) | The "CREATE\_TIME\_TS" value of the hfile generated by the HFileOutputFormat2 class is 0 | Major | HFile, mapreduce | +| [HBASE-27648](https://issues.apache.org/jira/browse/HBASE-27648) | CopyOnWriteArrayMap does not honor contract of ConcurrentMap.putIfAbsent | Major | . | +| [HBASE-27637](https://issues.apache.org/jira/browse/HBASE-27637) | Zero length value would cause value compressor read nothing and not advance the position of the InputStream | Critical | dataloss, wal | +| [HBASE-27602](https://issues.apache.org/jira/browse/HBASE-27602) | Remove the impact of operating env on testHFileCleaning | Major | test | +| [HBASE-27621](https://issues.apache.org/jira/browse/HBASE-27621) | Also clear the Dictionary when resetting when reading compressed WAL file | Critical | Replication, wal | +| [HBASE-27628](https://issues.apache.org/jira/browse/HBASE-27628) | Spotless fix in RELEASENOTES.md | Trivial | . | +| [HBASE-27619](https://issues.apache.org/jira/browse/HBASE-27619) | Bulkload fails when trying to bulkload files with invalid names after HBASE-26707 | Major | . | +| [HBASE-27580](https://issues.apache.org/jira/browse/HBASE-27580) | Reverse scan over rows with tags throw exceptions when using DataBlockEncoding | Major | . | +| [HBASE-27608](https://issues.apache.org/jira/browse/HBASE-27608) | Use lowercase image reference name in our docker file | Major | scripts | + + +### TESTS: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27595](https://issues.apache.org/jira/browse/HBASE-27595) | ThreadGroup is removed since Hadoop 3.2.4 | Minor | . | + + +### SUB-TASKS: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27669](https://issues.apache.org/jira/browse/HBASE-27669) | chaos-daemon.sh should make use hbase script start/stop chaosagent and chaos monkey runner. | Major | . 
| +| [HBASE-27643](https://issues.apache.org/jira/browse/HBASE-27643) | [JDK17] Add-opens java.util.concurrent | Major | java, test | +| [HBASE-27645](https://issues.apache.org/jira/browse/HBASE-27645) | [JDK17] Use ReflectionUtils#getModifiersField in UT | Major | java, test | + + +### OTHER: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27736](https://issues.apache.org/jira/browse/HBASE-27736) | HFileSystem.getLocalFs is not used | Major | HFile | +| [HBASE-27748](https://issues.apache.org/jira/browse/HBASE-27748) | Bump jettison from 1.5.2 to 1.5.4 | Major | dependabot, dependencies, security | +| [HBASE-27741](https://issues.apache.org/jira/browse/HBASE-27741) | Fall back to protoc osx-x86\_64 on Apple Silicon | Minor | build | +| [HBASE-27737](https://issues.apache.org/jira/browse/HBASE-27737) | Add supplemental model for com.aayushatharva.brotli4j:native-osx-aarch64 | Minor | build, community | +| [HBASE-27685](https://issues.apache.org/jira/browse/HBASE-27685) | Enable code coverage reporting to SonarQube in HBase | Minor | . | +| [HBASE-27626](https://issues.apache.org/jira/browse/HBASE-27626) | Suppress noisy logging in client.ConnectionImplementation | Minor | logging | + + +## Release 2.5.3 - Unreleased (as of 2023-02-01) + + + +### IMPROVEMENTS: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27506](https://issues.apache.org/jira/browse/HBASE-27506) | Optionally disable sorting directories by size in CleanerChore | Minor | . | +| [HBASE-27503](https://issues.apache.org/jira/browse/HBASE-27503) | Support replace \ in GC\_OPTS for ZGC | Minor | scripts | +| [HBASE-27512](https://issues.apache.org/jira/browse/HBASE-27512) | Add file \`.git-blame-ignore-revs\` for \`git blame\` | Trivial | . | +| [HBASE-27487](https://issues.apache.org/jira/browse/HBASE-27487) | Slow meta can create pathological feedback loop with multigets | Major | . | +| [HBASE-27466](https://issues.apache.org/jira/browse/HBASE-27466) | hbase client metrics per user specified identity on hconnections. | Major | Client | +| [HBASE-27490](https://issues.apache.org/jira/browse/HBASE-27490) | Locating regions for all actions of batch requests can exceed operation timeout | Major | . | +| [HBASE-22924](https://issues.apache.org/jira/browse/HBASE-22924) | GitHUB PR job should use when clause to filter to just PRs. | Minor | build, community | +| [HBASE-27491](https://issues.apache.org/jira/browse/HBASE-27491) | AsyncProcess should not clear meta cache for RejectedExecutionException | Major | . | +| [HBASE-27459](https://issues.apache.org/jira/browse/HBASE-27459) | Improve our hbase\_docker to be able to build and start standalone clusters other than master branch | Major | scripts | +| [HBASE-27530](https://issues.apache.org/jira/browse/HBASE-27530) | Fix comment syntax errors | Trivial | documentation | +| [HBASE-27253](https://issues.apache.org/jira/browse/HBASE-27253) | Make slow log configs updatable with configuration observer | Major | . | +| [HBASE-27540](https://issues.apache.org/jira/browse/HBASE-27540) | Client metrics for success/failure counts. | Major | Client | +| [HBASE-27233](https://issues.apache.org/jira/browse/HBASE-27233) | Read blocks into off-heap if caching is disabled for read | Major | . | +| [HBASE-27531](https://issues.apache.org/jira/browse/HBASE-27531) | AsyncRequestFutureImpl unnecessarily clears meta cache for full server | Major | . 
| +| [HBASE-27565](https://issues.apache.org/jira/browse/HBASE-27565) | Make the initial corePoolSize configurable for ChoreService | Major | conf | +| [HBASE-27529](https://issues.apache.org/jira/browse/HBASE-27529) | Provide RS coproc ability to attach WAL extended attributes to mutations at replication sink | Major | Coprocessors, Replication | +| [HBASE-27562](https://issues.apache.org/jira/browse/HBASE-27562) | Publish SBOM artifacts | Major | java | +| [HBASE-27583](https://issues.apache.org/jira/browse/HBASE-27583) | Remove -X option when building protoc check in nightly and pre commit job | Major | jenkins, scripts | + + +### BUG FIXES: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27463](https://issues.apache.org/jira/browse/HBASE-27463) | Reset sizeOfLogQueue when refresh replication source | Minor | Replication | +| [HBASE-27510](https://issues.apache.org/jira/browse/HBASE-27510) | Should use 'org.apache.hbase.thirdparty.io.netty.tryReflectionSetAccessible' | Major | . | +| [HBASE-27484](https://issues.apache.org/jira/browse/HBASE-27484) | FNFE on StoreFileScanner after a flush followed by a compaction | Major | . | +| [HBASE-27494](https://issues.apache.org/jira/browse/HBASE-27494) | Client meta cache clear by exception metrics are missing some cases | Minor | . | +| [HBASE-27519](https://issues.apache.org/jira/browse/HBASE-27519) | Another case for FNFE on StoreFileScanner after a flush followed by a compaction | Major | . | +| [HBASE-27498](https://issues.apache.org/jira/browse/HBASE-27498) | Observed lot of threads blocked in ConnectionImplementation.getKeepAliveMasterService | Major | Client | +| [HBASE-27524](https://issues.apache.org/jira/browse/HBASE-27524) | Fix python requirements problem | Major | scripts, security | +| [HBASE-27390](https://issues.apache.org/jira/browse/HBASE-27390) | getClusterMetrics NullPointerException when ServerTask status null | Major | . | +| [HBASE-27485](https://issues.apache.org/jira/browse/HBASE-27485) | HBaseTestingUtility minicluster requires log4j2 | Major | test | +| [HBASE-27566](https://issues.apache.org/jira/browse/HBASE-27566) | Bump gitpython from 3.1.29 to 3.1.30 in /dev-support | Major | scripts, security | +| [HBASE-27560](https://issues.apache.org/jira/browse/HBASE-27560) | CatalogJanitor consistencyCheck cannot report the hole on last region if next table is disabled in meta | Minor | hbck2 | +| [HBASE-27563](https://issues.apache.org/jira/browse/HBASE-27563) | ChaosMonkey sometimes generates invalid boundaries for random item selection | Minor | integration tests | +| [HBASE-27561](https://issues.apache.org/jira/browse/HBASE-27561) | hbase.master.port is ignored in processing of hbase.masters | Minor | Client | +| [HBASE-27564](https://issues.apache.org/jira/browse/HBASE-27564) | Add default encryption type for MiniKDC to fix failed tests on JDK11+ | Major | . | +| [HBASE-27579](https://issues.apache.org/jira/browse/HBASE-27579) | CatalogJanitor can cause data loss due to errors during cleanMergeRegion | Blocker | . | +| [HBASE-27589](https://issues.apache.org/jira/browse/HBASE-27589) | Rename TestConnectionImplementation in hbase-it to fix javadoc failure | Blocker | Client, documentation | +| [HBASE-27592](https://issues.apache.org/jira/browse/HBASE-27592) | Update hadoop netty version for hadoop-2.0 profile | Major | . 
| +| [HBASE-27586](https://issues.apache.org/jira/browse/HBASE-27586) | Bump up commons-codec to 1.15 | Major | dependencies, security | +| [HBASE-27547](https://issues.apache.org/jira/browse/HBASE-27547) | Close store file readers after region warmup | Major | regionserver | +| [HBASE-26967](https://issues.apache.org/jira/browse/HBASE-26967) | FilterList with FuzzyRowFilter and SingleColumnValueFilter evaluated with operator MUST\_PASS\_ONE doesn't work as expected | Critical | Filters | + + +### SUB-TASKS: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27557](https://issues.apache.org/jira/browse/HBASE-27557) | [JDK17] Update shade plugin version | Minor | . | +| [HBASE-25516](https://issues.apache.org/jira/browse/HBASE-25516) | [JDK17] reflective access Field.class.getDeclaredField("modifiers") not supported | Major | Filesystem Integration | +| [HBASE-27591](https://issues.apache.org/jira/browse/HBASE-27591) | [JDK17] Fix failure TestImmutableScan#testScanCopyConstructor | Minor | . | +| [HBASE-27581](https://issues.apache.org/jira/browse/HBASE-27581) | [JDK17] Fix failure TestHBaseTestingUtil#testResolvePortConflict | Minor | test | + + +### OTHER: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27513](https://issues.apache.org/jira/browse/HBASE-27513) | Modify README.txt to mention how to contribute | Major | community | +| [HBASE-27548](https://issues.apache.org/jira/browse/HBASE-27548) | Bump jettison from 1.5.1 to 1.5.2 | Major | dependabot, dependencies, security | +| [HBASE-27567](https://issues.apache.org/jira/browse/HBASE-27567) | Introduce ChaosMonkey Action to print HDFS Cluster status | Minor | integration tests | +| [HBASE-27568](https://issues.apache.org/jira/browse/HBASE-27568) | ChaosMonkey add support for JournalNodes | Major | integration tests | +| [HBASE-27575](https://issues.apache.org/jira/browse/HBASE-27575) | Bump future from 0.18.2 to 0.18.3 in /dev-support | Minor | . | +| [HBASE-27578](https://issues.apache.org/jira/browse/HBASE-27578) | Upgrade hbase-thirdparty to 4.1.4 | Blocker | dependencies, security | +| [HBASE-27588](https://issues.apache.org/jira/browse/HBASE-27588) | "Instantiating StoreFileTracker impl" INFO level logging is too chatty | Minor | . | + + +## Release 2.5.2 - Unreleased (as of 2022-11-24) + + + +### NEW FEATURES: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-26809](https://issues.apache.org/jira/browse/HBASE-26809) | Report client backoff time for server overloaded in ConnectionMetrics | Major | . 
| + + +### IMPROVEMENTS: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27434](https://issues.apache.org/jira/browse/HBASE-27434) | Use $revision as placeholder for maven version to make it easier to control the version from command line | Major | build, pom | +| [HBASE-27167](https://issues.apache.org/jira/browse/HBASE-27167) | s390x: Skip tests on unsupported compression libs | Major | build, pom | +| [HBASE-27450](https://issues.apache.org/jira/browse/HBASE-27450) | Update all our python scripts to use python3 | Major | scripts | +| [HBASE-27414](https://issues.apache.org/jira/browse/HBASE-27414) | Search order for locations in HFileLink | Minor | Performance | +| [HBASE-27495](https://issues.apache.org/jira/browse/HBASE-27495) | Improve HFileLinkCleaner to validate back reference links ahead the next traverse | Major | master | +| [HBASE-27408](https://issues.apache.org/jira/browse/HBASE-27408) | Improve BucketAllocatorException log to always include HFile name | Major | . | +| [HBASE-27496](https://issues.apache.org/jira/browse/HBASE-27496) | Optionally limit the amount of plans executed in the Normalizer | Minor | Normalizer | + + +### BUG FIXES: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27433](https://issues.apache.org/jira/browse/HBASE-27433) | DefaultMobStoreCompactor should delete MobStoreFile cleanly when compaction is failed | Major | mob | +| [HBASE-27440](https://issues.apache.org/jira/browse/HBASE-27440) | metrics method removeHistogramMetrics trigger serious memory leak | Major | metrics, regionserver | +| [HBASE-25983](https://issues.apache.org/jira/browse/HBASE-25983) | javadoc generation fails on openjdk-11.0.11+9 | Major | documentation, pom | +| [HBASE-27446](https://issues.apache.org/jira/browse/HBASE-27446) | Spotbugs 4.7.2 report a lot of logging errors when generating report | Major | build, jenkins, scripts | +| [HBASE-27437](https://issues.apache.org/jira/browse/HBASE-27437) | TestHeapSize is flaky | Major | test | +| [HBASE-27472](https://issues.apache.org/jira/browse/HBASE-27472) | The personality script set wrong hadoop2 check version for branch-2 | Major | jenkins, scripts | +| [HBASE-27473](https://issues.apache.org/jira/browse/HBASE-27473) | Fix spotbugs warnings in hbase-rest Client.getResponseBody | Major | REST | +| [HBASE-27480](https://issues.apache.org/jira/browse/HBASE-27480) | Skip error prone for hadoop2/3 checks in our nightly jobs | Major | jenkins, scripts | +| [HBASE-27469](https://issues.apache.org/jira/browse/HBASE-27469) | IllegalArgumentException is thrown by SnapshotScannerHDFSAclController when dropping a table | Major | snapshots | +| [HBASE-27379](https://issues.apache.org/jira/browse/HBASE-27379) | numOpenConnections metric is one less than the actual | Minor | metrics | +| [HBASE-27423](https://issues.apache.org/jira/browse/HBASE-27423) | Upgrade hbase-thirdparty to 4.1.3 and upgrade Jackson for CVE-2022-42003/42004 | Major | security | +| [HBASE-27464](https://issues.apache.org/jira/browse/HBASE-27464) | In memory compaction 'COMPACT' may cause data corruption when adding cells larger than maxAlloc(default 256k) size | Critical | in-memory-compaction | +| [HBASE-27501](https://issues.apache.org/jira/browse/HBASE-27501) | The .flattened-pom.xml for some modules are not installed | Blocker | build, pom | +| [HBASE-27445](https://issues.apache.org/jira/browse/HBASE-27445) | result of DirectMemoryUtils#getDirectMemorySize may be wrong | Minor | UI | +| 
[HBASE-27504](https://issues.apache.org/jira/browse/HBASE-27504) | Remove duplicated config 'hbase.normalizer.merge.min\_region\_age.days' in hbase-default.xml | Minor | conf | + + +### TESTS: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27479](https://issues.apache.org/jira/browse/HBASE-27479) | Flaky Test testClone in TestTaskMonitor | Trivial | test | + + +### SUB-TASKS: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27475](https://issues.apache.org/jira/browse/HBASE-27475) | Use different jdks when running hadoopcheck in personality scripts | Critical | jenkins, scripts | + + +### OTHER: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27425](https://issues.apache.org/jira/browse/HBASE-27425) | Run flaky test job more often | Minor | test | +| [HBASE-27460](https://issues.apache.org/jira/browse/HBASE-27460) | Fix the hadolint errors after HBASE-27456 | Major | scripts | +| [HBASE-27443](https://issues.apache.org/jira/browse/HBASE-27443) | Use java11 in the general check of our jenkins job | Major | build, jenkins | + + +## Release 2.5.1 - 2022-10-21 + + + +### NEW FEATURES: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27314](https://issues.apache.org/jira/browse/HBASE-27314) | Make index block be customized and configured | Major | . | + + +### IMPROVEMENTS: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27159](https://issues.apache.org/jira/browse/HBASE-27159) | Emit source metrics for BlockCacheExpressHitPercent | Minor | BlockCache, metrics | +| [HBASE-27339](https://issues.apache.org/jira/browse/HBASE-27339) | Improve sasl connection failure log message to include server | Minor | Client | +| [HBASE-27365](https://issues.apache.org/jira/browse/HBASE-27365) | Minimise block addition failures due to no space in bucket cache writers queue by introducing wait time | Major | BucketCache | +| [HBASE-27391](https://issues.apache.org/jira/browse/HBASE-27391) | Downgrade ERROR log to DEBUG in ConnectionUtils.updateStats | Major | . | +| [HBASE-27370](https://issues.apache.org/jira/browse/HBASE-27370) | Avoid decompressing blocks when reading from bucket cache prefetch threads | Major | . | +| [HBASE-27361](https://issues.apache.org/jira/browse/HBASE-27361) | Add .flattened-pom.xml to .gitignore | Major | build | +| [HBASE-27224](https://issues.apache.org/jira/browse/HBASE-27224) | HFile tool statistic sampling produces misleading results | Major | . | +| [HBASE-27340](https://issues.apache.org/jira/browse/HBASE-27340) | Artifacts with resolved profiles | Minor | build, pom | +| [HBASE-27332](https://issues.apache.org/jira/browse/HBASE-27332) | Remove RejectedExecutionHandler for long/short compaction thread pools | Minor | Compaction | +| [HBASE-27338](https://issues.apache.org/jira/browse/HBASE-27338) | brotli compression lib tests fail on arm64 | Minor | . | +| [HBASE-27320](https://issues.apache.org/jira/browse/HBASE-27320) | hide some sensitive configuration information in the UI | Minor | security, UI | + + +### BUG FIXES: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27426](https://issues.apache.org/jira/browse/HBASE-27426) | Region server abort with failing to list region servers. | Major | Zookeeper | +| [HBASE-27432](https://issues.apache.org/jira/browse/HBASE-27432) | TestUsersOperationsWithSecureHadoop fails after HBASE-27411 | Major | . 
| +| [HBASE-27420](https://issues.apache.org/jira/browse/HBASE-27420) | Failure while connecting to zk if HBase is running in standalone mode in a container | Minor | Zookeeper | +| [HBASE-27424](https://issues.apache.org/jira/browse/HBASE-27424) | Upgrade Jettison for CVE-2022-40149/40150 | Major | . | +| [HBASE-27419](https://issues.apache.org/jira/browse/HBASE-27419) | Update to hbase-thirdparty 4.1.2 | Major | dependencies | +| [HBASE-27407](https://issues.apache.org/jira/browse/HBASE-27407) | Fixing check for "description" request param in JMXJsonServlet.java | Minor | metrics | +| [HBASE-27409](https://issues.apache.org/jira/browse/HBASE-27409) | Fix the javadoc for WARCRecord | Major | documentation | +| [HBASE-27381](https://issues.apache.org/jira/browse/HBASE-27381) | Still seeing 'Stuck' in static initialization creating RegionInfo instance | Major | . | +| [HBASE-27368](https://issues.apache.org/jira/browse/HBASE-27368) | Do not need to throw IllegalStateException when peer is not active in ReplicationSource.initialize | Major | regionserver, Replication | +| [HBASE-27352](https://issues.apache.org/jira/browse/HBASE-27352) | Quoted string argument with spaces passed from command line are propagated wrongly to the underlying java class | Minor | shell | +| [HBASE-27362](https://issues.apache.org/jira/browse/HBASE-27362) | Fix some tests hung by CompactSplit.requestCompactionInternal ignoring compactionsEnabled check | Major | Compaction | +| [HBASE-27353](https://issues.apache.org/jira/browse/HBASE-27353) | opentelemetry-context jar missing at runtime causes MR jobs to fail | Minor | . | +| [HBASE-22939](https://issues.apache.org/jira/browse/HBASE-22939) | SpaceQuotas- Bulkload from different hdfs failed when space quotas are turned on. | Major | . 
| +| [HBASE-27336](https://issues.apache.org/jira/browse/HBASE-27336) | The region visualizer shows 'undefined' region server | Major | master, UI | +| [HBASE-27335](https://issues.apache.org/jira/browse/HBASE-27335) | HBase shell hang for a minute when quitting | Major | shell | +| [HBASE-27152](https://issues.apache.org/jira/browse/HBASE-27152) | Under compaction mark may leak | Major | Compaction | +| [HBASE-25922](https://issues.apache.org/jira/browse/HBASE-25922) | Disabled sanity checks ignored on snapshot restore | Minor | conf, snapshots | +| [HBASE-27246](https://issues.apache.org/jira/browse/HBASE-27246) | RSGroupMappingScript#getRSGroup has thread safety problem | Major | rsgroup | +| [HBASE-25166](https://issues.apache.org/jira/browse/HBASE-25166) | MobFileCompactionChore is closing the master's shared cluster connection | Major | master | + + +### TESTS: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27360](https://issues.apache.org/jira/browse/HBASE-27360) | The trace related assertions are flaky for async client tests | Major | test, tracing | + + +### SUB-TASKS: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27421](https://issues.apache.org/jira/browse/HBASE-27421) | Bump spotless plugin to 2.27.2 and reimplement the 'Remove unhelpful javadoc stubs' rule | Major | documentation, pom | +| [HBASE-27401](https://issues.apache.org/jira/browse/HBASE-27401) | Clean up current broken 'n's in our javadoc | Major | documentation | +| [HBASE-27403](https://issues.apache.org/jira/browse/HBASE-27403) | Remove 'Remove unhelpful javadoc stubs' spotless rule for now | Major | documentation, pom | + + +### OTHER: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27431](https://issues.apache.org/jira/browse/HBASE-27431) | Remove TestRemoteTable.testLimitedScan | Trivial | REST, test | +| [HBASE-27411](https://issues.apache.org/jira/browse/HBASE-27411) | Update and clean up bcprov-jdk15on dependency | Minor | build | +| [HBASE-27372](https://issues.apache.org/jira/browse/HBASE-27372) | Update java versions in our Dockerfiles | Major | build, scripts | +| [HBASE-27373](https://issues.apache.org/jira/browse/HBASE-27373) | Fix new spotbugs warnings after upgrading spotbugs to 4.7.2 | Major | . | +| [HBASE-27371](https://issues.apache.org/jira/browse/HBASE-27371) | Bump spotbugs version | Major | build, pom | + + +## Release 2.5.0 - Unreleased (as of 2022-08-23) + + + +### NEW FEATURES: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27104](https://issues.apache.org/jira/browse/HBASE-27104) | Add a tool command list\_unknownservers | Major | master | +| [HBASE-27129](https://issues.apache.org/jira/browse/HBASE-27129) | Add a config that allows us to configure region-level storage policies | Major | regionserver | +| [HBASE-27028](https://issues.apache.org/jira/browse/HBASE-27028) | Add a shell command for flushing master local region | Minor | shell | +| [HBASE-26826](https://issues.apache.org/jira/browse/HBASE-26826) | Backport StoreFileTracker (HBASE-26067, HBASE-26584, and others) to branch-2.5 | Major | Operability, regionserver | +| [HBASE-26342](https://issues.apache.org/jira/browse/HBASE-26342) | Support custom paths of independent configuration and pool for hfile cleaner | Major | master | +| [HBASE-27018](https://issues.apache.org/jira/browse/HBASE-27018) | Add a tool command list\_liveservers | Major | . 
| +| [HBASE-26617](https://issues.apache.org/jira/browse/HBASE-26617) | Use spotless to reduce the pain on fixing checkstyle issues | Major | build, community | +| [HBASE-26959](https://issues.apache.org/jira/browse/HBASE-26959) | Brotli compression support | Minor | . | +| [HBASE-25865](https://issues.apache.org/jira/browse/HBASE-25865) | Visualize current state of region assignment | Blocker | master, Operability, Usability | +| [HBASE-26703](https://issues.apache.org/jira/browse/HBASE-26703) | Allow configuration of IPC queue balancer | Minor | . | +| [HBASE-26576](https://issues.apache.org/jira/browse/HBASE-26576) | Allow Pluggable Queue to belong to FastPath or normal Balanced Executor | Minor | regionserver, rpc | +| [HBASE-26347](https://issues.apache.org/jira/browse/HBASE-26347) | Support detect and exclude slow DNs in fan-out of WAL | Major | wal | +| [HBASE-26284](https://issues.apache.org/jira/browse/HBASE-26284) | Add HBase Thrift API to get all table names along with whether it is enabled or not | Major | Thrift | +| [HBASE-26141](https://issues.apache.org/jira/browse/HBASE-26141) | Add tracing support for HTable and sync connection on branch-2 | Major | tracing | +| [HBASE-6908](https://issues.apache.org/jira/browse/HBASE-6908) | Pluggable Call BlockingQueue for HBaseServer | Major | IPC/RPC | +| [HBASE-25841](https://issues.apache.org/jira/browse/HBASE-25841) | Add basic jshell support | Minor | shell, Usability | +| [HBASE-25756](https://issues.apache.org/jira/browse/HBASE-25756) | Support alternate compression for major and minor compactions | Minor | Compaction | +| [HBASE-25751](https://issues.apache.org/jira/browse/HBASE-25751) | Add writable TimeToPurgeDeletes to ScanOptions | Major | . | +| [HBASE-25665](https://issues.apache.org/jira/browse/HBASE-25665) | Disable reverse DNS lookup for SASL Kerberos client connection | Major | . | +| [HBASE-25587](https://issues.apache.org/jira/browse/HBASE-25587) | [hbck2] Schedule SCP for all unknown servers | Major | hbase-operator-tools, hbck2 | +| [HBASE-25460](https://issues.apache.org/jira/browse/HBASE-25460) | Expose drainingServers as cluster metric | Major | metrics | +| [HBASE-25496](https://issues.apache.org/jira/browse/HBASE-25496) | add get\_namespace\_rsgroup command | Major | . | +| [HBASE-24620](https://issues.apache.org/jira/browse/HBASE-24620) | Add a ClusterManager which submits command to ZooKeeper and its Agent which picks and execute those Commands. 
| Major | integration tests | +| [HBASE-22749](https://issues.apache.org/jira/browse/HBASE-22749) | Distributed MOB compactions | Major | mob | + + +### IMPROVEMENTS: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27305](https://issues.apache.org/jira/browse/HBASE-27305) | add an option to skip file splitting when bulkload hfiles | Minor | tooling | +| [HBASE-27089](https://issues.apache.org/jira/browse/HBASE-27089) | Add “commons.crypto.stream.buffer.size” configuration | Minor | io | +| [HBASE-27268](https://issues.apache.org/jira/browse/HBASE-27268) | In trace log mode, the client does not print callId/startTime and the server does not print receiveTime | Minor | logging | +| [HBASE-26982](https://issues.apache.org/jira/browse/HBASE-26982) | Add index and bloom filter statistics of LruBlockCache on rs web UI | Minor | BlockCache, UI | +| [HBASE-27296](https://issues.apache.org/jira/browse/HBASE-27296) | Some Cell's implementation of toString() such as IndividualBytesFieldCell prints out value and tags which is too verbose | Minor | logging | +| [HBASE-27283](https://issues.apache.org/jira/browse/HBASE-27283) | Use readTO instead of hard coded RpcClient.DEFAULT\_SOCKET\_TIMEOUT\_READ when creating ReadTimeoutHandler in NettyRpcConnection | Major | IPC/RPC, test | +| [HBASE-27273](https://issues.apache.org/jira/browse/HBASE-27273) | Should stop autoRead and skip all the bytes when rpc request too big | Major | IPC/RPC | +| [HBASE-27153](https://issues.apache.org/jira/browse/HBASE-27153) | Improvements to read-path tracing | Major | Operability, regionserver | +| [HBASE-27229](https://issues.apache.org/jira/browse/HBASE-27229) | BucketCache statistics should not count evictions by hfile | Major | . | +| [HBASE-27222](https://issues.apache.org/jira/browse/HBASE-27222) | Purge FutureReturnValueIgnored warnings from error prone | Major | netty | +| [HBASE-20499](https://issues.apache.org/jira/browse/HBASE-20499) | Replication/Priority executors can use specific max queue length as default value instead of general maxQueueLength | Minor | rpc | +| [HBASE-27208](https://issues.apache.org/jira/browse/HBASE-27208) | Use spotless to purge the missing summary warnings from error prone | Major | pom | +| [HBASE-27048](https://issues.apache.org/jira/browse/HBASE-27048) | Server side scanner time limit should account for time in queue | Major | . | +| [HBASE-27088](https://issues.apache.org/jira/browse/HBASE-27088) | IntegrationLoadTestCommonCrawl async load improvements | Minor | integration tests, test | +| [HBASE-27149](https://issues.apache.org/jira/browse/HBASE-27149) | Server should close scanner if client times out before results are ready | Major | . | +| [HBASE-27188](https://issues.apache.org/jira/browse/HBASE-27188) | Report maxStoreFileCount in jmx | Minor | . | +| [HBASE-27186](https://issues.apache.org/jira/browse/HBASE-27186) | Report block cache size metrics separately for L1 and L2 | Minor | . | +| [HBASE-26950](https://issues.apache.org/jira/browse/HBASE-26950) | Use AsyncConnection in ReplicationSink | Major | . | +| [HBASE-27078](https://issues.apache.org/jira/browse/HBASE-27078) | Allow configuring a separate timeout for meta scans | Major | . | +| [HBASE-27101](https://issues.apache.org/jira/browse/HBASE-27101) | support commons-crypto version 1.1.0 | Minor | . 
| +| [HBASE-26218](https://issues.apache.org/jira/browse/HBASE-26218) | Better logging in CanaryTool | Minor | canary | +| [HBASE-27146](https://issues.apache.org/jira/browse/HBASE-27146) | Avoid CellUtil.cloneRow in MetaCellComparator | Major | meta, Offheaping, Performance | +| [HBASE-27125](https://issues.apache.org/jira/browse/HBASE-27125) | The batch size of cleaning expired mob files should have an upper bound | Minor | mob | +| [HBASE-26923](https://issues.apache.org/jira/browse/HBASE-26923) | PerformanceEvaluation support encryption option | Minor | PE | +| [HBASE-27095](https://issues.apache.org/jira/browse/HBASE-27095) | HbckChore should produce a report | Major | hbck2, master | +| [HBASE-27046](https://issues.apache.org/jira/browse/HBASE-27046) | The filenum in AbstractFSWAL should be monotone increasing | Major | . | +| [HBASE-27093](https://issues.apache.org/jira/browse/HBASE-27093) | AsyncNonMetaRegionLocator:put Complete CompletableFuture outside lock block | Major | asyncclient, Client | +| [HBASE-27080](https://issues.apache.org/jira/browse/HBASE-27080) | Optimize debug output log of ConstantSizeRegionSplitPolicy class. | Minor | logging | +| [HBASE-26649](https://issues.apache.org/jira/browse/HBASE-26649) | Support meta replica LoadBalance mode for RegionLocator#getAllRegionLocations() | Major | meta replicas | +| [HBASE-26320](https://issues.apache.org/jira/browse/HBASE-26320) | Separate Log Cleaner DirScanPool to prevent the OLDWALs from filling up the disk when archive is large | Major | Operability | +| [HBASE-27043](https://issues.apache.org/jira/browse/HBASE-27043) | Let lock wait timeout to improve performance of SnapshotHFileCleaner | Major | snapshots | +| [HBASE-25465](https://issues.apache.org/jira/browse/HBASE-25465) | Use javac --release option for supporting cross version compilation | Minor | create-release | +| [HBASE-27013](https://issues.apache.org/jira/browse/HBASE-27013) | Introduce read all bytes when using pread for prefetch | Major | HFile, Performance | +| [HBASE-27003](https://issues.apache.org/jira/browse/HBASE-27003) | Optimize log format for PerformanceEvaluation | Minor | . | +| [HBASE-26990](https://issues.apache.org/jira/browse/HBASE-26990) | Add default implementation for BufferedMutator interface setters | Minor | . | +| [HBASE-26419](https://issues.apache.org/jira/browse/HBASE-26419) | Tracing Spans of RPCs should follow otel semantic conventions | Blocker | tracing | +| [HBASE-26961](https://issues.apache.org/jira/browse/HBASE-26961) | cache region locations when getAllRegionLocations() for branch-2.4+ | Minor | Client | +| [HBASE-26975](https://issues.apache.org/jira/browse/HBASE-26975) | Add on heap and off heap memstore info in rs web UI | Minor | UI | +| [HBASE-26980](https://issues.apache.org/jira/browse/HBASE-26980) | Update javadoc of BucketCache.java | Trivial | documentation | +| [HBASE-26971](https://issues.apache.org/jira/browse/HBASE-26971) | SnapshotInfo --snapshot param is marked as required even when trying to list all snapshots | Minor | . | +| [HBASE-26807](https://issues.apache.org/jira/browse/HBASE-26807) | Unify CallQueueTooBigException special pause with CallDroppedException | Major | . | +| [HBASE-26891](https://issues.apache.org/jira/browse/HBASE-26891) | Make MetricsConnection scope configurable | Minor | . 
| +| [HBASE-26947](https://issues.apache.org/jira/browse/HBASE-26947) | Implement a special TestAppender to limit the size of test output | Major | logging, test | +| [HBASE-26618](https://issues.apache.org/jira/browse/HBASE-26618) | Involving primary meta region in meta scan with CatalogReplicaLoadBalanceSimpleSelector | Minor | meta replicas | +| [HBASE-26885](https://issues.apache.org/jira/browse/HBASE-26885) | The TRSP should not go on when it get a bogus server name from AM | Major | proc-v2 | +| [HBASE-26872](https://issues.apache.org/jira/browse/HBASE-26872) | Load rate calculator for cost functions should be more precise | Major | Balancer | +| [HBASE-26832](https://issues.apache.org/jira/browse/HBASE-26832) | Avoid repeated releasing of flushed wal entries in AsyncFSWAL#syncCompleted | Major | wal | +| [HBASE-26878](https://issues.apache.org/jira/browse/HBASE-26878) | TableInputFormatBase should cache RegionSizeCalculator | Minor | . | +| [HBASE-26175](https://issues.apache.org/jira/browse/HBASE-26175) | MetricsHBaseServer should record all kinds of Exceptions | Minor | metrics | +| [HBASE-21065](https://issues.apache.org/jira/browse/HBASE-21065) | Try ROW\_INDEX\_V1 encoding on meta table (fix bloomfilters on meta while we are at it) | Major | meta, Performance | +| [HBASE-26858](https://issues.apache.org/jira/browse/HBASE-26858) | Refactor TestMasterRegionOnTwoFileSystems to avoid dead loop | Major | test | +| [HBASE-26833](https://issues.apache.org/jira/browse/HBASE-26833) | Avoid waiting to clear buffer usage of ReplicationSourceShipper when aborting the RS | Major | regionserver, Replication | +| [HBASE-26848](https://issues.apache.org/jira/browse/HBASE-26848) | Set java.io.tmpdir on mvn command when running jenkins job | Major | jenkins, test | +| [HBASE-26828](https://issues.apache.org/jira/browse/HBASE-26828) | Increase the concurrency when running UTs in pre commit job | Major | jenkins, test | +| [HBASE-26680](https://issues.apache.org/jira/browse/HBASE-26680) | Close and do not write trailer for the broken WAL writer | Major | wal | +| [HBASE-26720](https://issues.apache.org/jira/browse/HBASE-26720) | ExportSnapshot should validate the source snapshot before copying files | Major | snapshots | +| [HBASE-26275](https://issues.apache.org/jira/browse/HBASE-26275) | update error message when executing deleteall with ROWPREFIXFILTER in meta table | Minor | shell | +| [HBASE-26835](https://issues.apache.org/jira/browse/HBASE-26835) | Rewrite TestLruAdaptiveBlockCache to make it more stable | Major | test | +| [HBASE-26830](https://issues.apache.org/jira/browse/HBASE-26830) | Rewrite TestLruBlockCache to make it more stable | Major | test | +| [HBASE-26814](https://issues.apache.org/jira/browse/HBASE-26814) | Default StoreHotnessProtector to off, with logs to guide when to turn it on | Major | . | +| [HBASE-26784](https://issues.apache.org/jira/browse/HBASE-26784) | Use HIGH\_QOS for ResultScanner.close requests | Major | . | +| [HBASE-26552](https://issues.apache.org/jira/browse/HBASE-26552) | Introduce retry to logroller to avoid abort | Major | wal | +| [HBASE-26792](https://issues.apache.org/jira/browse/HBASE-26792) | Implement ScanInfo#toString | Minor | regionserver | +| [HBASE-26731](https://issues.apache.org/jira/browse/HBASE-26731) | Metrics for tracking of active scanners | Minor | . 
| +| [HBASE-26789](https://issues.apache.org/jira/browse/HBASE-26789) | Automatically add default security headers to http/rest if SSL enabled | Major | REST, UI | +| [HBASE-26765](https://issues.apache.org/jira/browse/HBASE-26765) | Minor refactor of async scanning code | Major | . | +| [HBASE-26659](https://issues.apache.org/jira/browse/HBASE-26659) | The ByteBuffer of metadata in RAMQueueEntry in BucketCache could be reused. | Major | BucketCache, Performance | +| [HBASE-26730](https://issues.apache.org/jira/browse/HBASE-26730) | Extend hbase shell 'status' command to support an option 'tasks' | Minor | Operability, shell | +| [HBASE-26709](https://issues.apache.org/jira/browse/HBASE-26709) | Ban the usage of junit 3 TestCase | Major | test | +| [HBASE-26724](https://issues.apache.org/jira/browse/HBASE-26724) | Backport the UT changes in HBASE-24510 to branch-2.x | Major | test | +| [HBASE-26702](https://issues.apache.org/jira/browse/HBASE-26702) | Make ageOfLastShip, ageOfLastApplied extend TimeHistogram instead of plain histogram. | Minor | metrics, Replication | +| [HBASE-26726](https://issues.apache.org/jira/browse/HBASE-26726) | Allow disable of region warmup before graceful move | Minor | master, Region Assignment | +| [HBASE-26657](https://issues.apache.org/jira/browse/HBASE-26657) | ProfileServlet should move the output location to hbase specific directory | Minor | . | +| [HBASE-26590](https://issues.apache.org/jira/browse/HBASE-26590) | Hbase-client Meta lookup performance regression between hbase-1 and hbase-2 | Major | meta | +| [HBASE-26567](https://issues.apache.org/jira/browse/HBASE-26567) | Remove IndexType from ChunkCreator | Major | in-memory-compaction | +| [HBASE-26641](https://issues.apache.org/jira/browse/HBASE-26641) | Split TestMasterFailoverWithProcedures | Major | proc-v2, test | +| [HBASE-26629](https://issues.apache.org/jira/browse/HBASE-26629) | Add expiration for long time vacant scanners in Thrift2 | Major | Performance, Thrift | +| [HBASE-26638](https://issues.apache.org/jira/browse/HBASE-26638) | Cherry-pick the ReflectionUtils improvements in HBASE-21515 to branch-2 | Major | util | +| [HBASE-26635](https://issues.apache.org/jira/browse/HBASE-26635) | Optimize decodeNumeric in OrderedBytes | Major | Performance | +| [HBASE-26623](https://issues.apache.org/jira/browse/HBASE-26623) | Report CallDroppedException in exception metrics | Minor | IPC/RPC, metrics, rpc | +| [HBASE-26609](https://issues.apache.org/jira/browse/HBASE-26609) | Round the size to MB or KB at the end of calculation in HRegionServer.createRegionLoad | Major | regionserver | +| [HBASE-26598](https://issues.apache.org/jira/browse/HBASE-26598) | Fix excessive connections in MajorCompactor | Major | Compaction, tooling | +| [HBASE-26579](https://issues.apache.org/jira/browse/HBASE-26579) | Set storage policy of recovered edits when wal storage type is configured | Major | Recovery | +| [HBASE-26601](https://issues.apache.org/jira/browse/HBASE-26601) | maven-gpg-plugin failing with "Inappropriate ioctl for device" | Major | build | +| [HBASE-26556](https://issues.apache.org/jira/browse/HBASE-26556) | IT and Chaos Monkey improvements | Minor | integration tests | +| [HBASE-25547](https://issues.apache.org/jira/browse/HBASE-25547) | Thread pools should release unused resources | Minor | master, regionserver | +| [HBASE-26525](https://issues.apache.org/jira/browse/HBASE-26525) | Use unique thread name for group WALs | Major | wal | +| 
[HBASE-26517](https://issues.apache.org/jira/browse/HBASE-26517) | Add auth method information to AccessChecker audit log | Trivial | security | +| [HBASE-26512](https://issues.apache.org/jira/browse/HBASE-26512) | Make timestamp format configurable in HBase shell scan output | Major | shell | +| [HBASE-26485](https://issues.apache.org/jira/browse/HBASE-26485) | Introduce a method to clean restore directory after Snapshot Scan | Minor | snapshots | +| [HBASE-26479](https://issues.apache.org/jira/browse/HBASE-26479) | Print too slow/big scan's operation\_id in region server log | Minor | regionserver, scan | +| [HBASE-26475](https://issues.apache.org/jira/browse/HBASE-26475) | The flush and compact methods in HTU should skip processing secondary replicas | Major | test | +| [HBASE-26249](https://issues.apache.org/jira/browse/HBASE-26249) | Ameliorate compaction made by bulk-loading files | Major | . | +| [HBASE-26421](https://issues.apache.org/jira/browse/HBASE-26421) | Use HFileLink file to replace entire file's reference when splitting | Major | . | +| [HBASE-26267](https://issues.apache.org/jira/browse/HBASE-26267) | Master initialization fails if Master Region WAL dir is missing | Major | master | +| [HBASE-26446](https://issues.apache.org/jira/browse/HBASE-26446) | CellCounter should report serialized cell size counts too | Minor | . | +| [HBASE-26432](https://issues.apache.org/jira/browse/HBASE-26432) | Add tracing support in hbase shell | Minor | shell, tracing | +| [HBASE-26337](https://issues.apache.org/jira/browse/HBASE-26337) | Optimization for weighted random generators | Major | Balancer | +| [HBASE-26363](https://issues.apache.org/jira/browse/HBASE-26363) | OpenTelemetry configuration support for per-process service names | Major | tracing | +| [HBASE-26309](https://issues.apache.org/jira/browse/HBASE-26309) | Balancer tends to move regions to the server at the end of list | Major | Balancer | +| [HBASE-26305](https://issues.apache.org/jira/browse/HBASE-26305) | Move NavigableSet add operation to writer thread in BucketCache | Minor | BucketCache, Performance | +| [HBASE-26251](https://issues.apache.org/jira/browse/HBASE-26251) | StochasticLoadBalancer metrics should update even if balancer doesn't run | Minor | Balancer | +| [HBASE-26270](https://issues.apache.org/jira/browse/HBASE-26270) | Provide getConfiguration method for Region and Store interface | Minor | . | +| [HBASE-26273](https://issues.apache.org/jira/browse/HBASE-26273) | TableSnapshotInputFormat/TableSnapshotInputFormatImpl should use ReadType.STREAM for scanning HFiles | Major | mapreduce | +| [HBASE-26276](https://issues.apache.org/jira/browse/HBASE-26276) | Allow HashTable/SyncTable to perform rawScan when comparing cells | Major | . | +| [HBASE-26229](https://issues.apache.org/jira/browse/HBASE-26229) | Limit count and size of L0 files compaction in StripeCompactionPolicy | Major | Compaction | +| [HBASE-26243](https://issues.apache.org/jira/browse/HBASE-26243) | Fix typo for file 'hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java' | Trivial | . | +| [HBASE-25773](https://issues.apache.org/jira/browse/HBASE-25773) | TestSnapshotScannerHDFSAclController.setupBeforeClass is flaky | Major | . 
| +| [HBASE-26147](https://issues.apache.org/jira/browse/HBASE-26147) | Add dry run mode to hbase balancer | Major | Balancer, master | +| [HBASE-25642](https://issues.apache.org/jira/browse/HBASE-25642) | Fix or stop warning about already cached block | Major | BlockCache, Operability, regionserver | +| [HBASE-26212](https://issues.apache.org/jira/browse/HBASE-26212) | Allow AuthUtil automatic renewal to be disabled | Minor | Client, security | +| [HBASE-26187](https://issues.apache.org/jira/browse/HBASE-26187) | Write straight into the store directory when Splitting and Merging | Major | . | +| [HBASE-26193](https://issues.apache.org/jira/browse/HBASE-26193) | Do not store meta region location as permanent state on zookeeper | Major | meta, Zookeeper | +| [HBASE-24652](https://issues.apache.org/jira/browse/HBASE-24652) | master-status UI make date type fields sortable | Minor | master, Operability, UI, Usability | +| [HBASE-25680](https://issues.apache.org/jira/browse/HBASE-25680) | Non-idempotent test in TestReplicationHFileCleaner | Minor | test | +| [HBASE-26179](https://issues.apache.org/jira/browse/HBASE-26179) | TestRequestTooBigException spends too much time to finish | Major | test | +| [HBASE-26160](https://issues.apache.org/jira/browse/HBASE-26160) | Configurable disallowlist for live editing of loglevels | Minor | . | +| [HBASE-25469](https://issues.apache.org/jira/browse/HBASE-25469) | Add detailed RIT info in JSON format for consumption as metrics | Minor | master | +| [HBASE-26154](https://issues.apache.org/jira/browse/HBASE-26154) | Provide exception metric for quota exceeded and throttling | Minor | . | +| [HBASE-26144](https://issues.apache.org/jira/browse/HBASE-26144) | The HStore.snapshot method is never called in main code | Major | regionserver | +| [HBASE-26105](https://issues.apache.org/jira/browse/HBASE-26105) | Rectify the expired TODO comment in CombinedBC | Trivial | BlockCache | +| [HBASE-26146](https://issues.apache.org/jira/browse/HBASE-26146) | Allow custom opts for hbck in hbase bin | Minor | . 
| +| [HBASE-26118](https://issues.apache.org/jira/browse/HBASE-26118) | The HStore.commitFile and HStore.moveFileIntoPlace almost have the same logic | Major | Compaction, regionserver | +| [HBASE-26119](https://issues.apache.org/jira/browse/HBASE-26119) | Polish TestAsyncNonMetaRegionLocator | Major | meta replicas, test | +| [HBASE-21946](https://issues.apache.org/jira/browse/HBASE-21946) | Use ByteBuffer pread instead of byte[] pread in HFileBlock when applicable | Critical | Offheaping | +| [HBASE-26025](https://issues.apache.org/jira/browse/HBASE-26025) | Add a flag to mark if the IOError can be solved by retry in thrift IOError | Major | Thrift | +| [HBASE-25986](https://issues.apache.org/jira/browse/HBASE-25986) | Expose the NORMALIZATION\_ENABLED table descriptor through a property in hbase-site | Minor | Normalizer | +| [HBASE-25700](https://issues.apache.org/jira/browse/HBASE-25700) | Enhance znode parent validation when add\_peer | Minor | Replication | +| [HBASE-26069](https://issues.apache.org/jira/browse/HBASE-26069) | Remove HStore.compactRecentForTestingAssumingDefaultPolicy and DefaultCompactor.compactForTesting | Major | Compaction, test | +| [HBASE-26065](https://issues.apache.org/jira/browse/HBASE-26065) | StripeStoreFileManager does not need to throw IOException for most methods | Major | Compaction, HFile | +| [HBASE-25914](https://issues.apache.org/jira/browse/HBASE-25914) | Provide slow/large logs on RegionServer UI | Major | regionserver, UI | +| [HBASE-26012](https://issues.apache.org/jira/browse/HBASE-26012) | Improve logging and dequeue logic in DelayQueue | Minor | . | +| [HBASE-26020](https://issues.apache.org/jira/browse/HBASE-26020) | Split TestWALEntryStream.testDifferentCounts out | Major | Replication, test | +| [HBASE-25937](https://issues.apache.org/jira/browse/HBASE-25937) | Clarify UnknownRegionException | Minor | Client | +| [HBASE-25998](https://issues.apache.org/jira/browse/HBASE-25998) | Revisit synchronization in SyncFuture | Major | Performance, regionserver, wal | +| [HBASE-26000](https://issues.apache.org/jira/browse/HBASE-26000) | Optimize the display of ZK dump in the master web UI | Minor | . | +| [HBASE-25995](https://issues.apache.org/jira/browse/HBASE-25995) | Change the method name for DoubleArrayCost.setCosts | Major | Balancer | +| [HBASE-26002](https://issues.apache.org/jira/browse/HBASE-26002) | MultiRowMutationEndpoint should return the result of the conditional update | Major | Coprocessors | +| [HBASE-25993](https://issues.apache.org/jira/browse/HBASE-25993) | Make excluded SSL cipher suites configurable for all Web UIs | Major | . 
| +| [HBASE-25987](https://issues.apache.org/jira/browse/HBASE-25987) | Make SSL keystore type configurable for HBase ThriftServer | Major | Thrift | +| [HBASE-25666](https://issues.apache.org/jira/browse/HBASE-25666) | Explain why balancer is skipping runs | Major | Balancer, master, UI | +| [HBASE-25942](https://issues.apache.org/jira/browse/HBASE-25942) | Get rid of null regioninfo in wrapped connection exceptions | Trivial | logging | +| [HBASE-25745](https://issues.apache.org/jira/browse/HBASE-25745) | Deprecate/Rename config \`hbase.normalizer.min.region.count\` to \`hbase.normalizer.merge.min.region.count\` | Minor | master, Normalizer | +| [HBASE-25908](https://issues.apache.org/jira/browse/HBASE-25908) | Exclude jakarta.activation-api | Major | hadoop3, shading | +| [HBASE-25933](https://issues.apache.org/jira/browse/HBASE-25933) | Log trace raw exception, instead of cause message in NettyRpcServerRequestDecoder | Minor | . | +| [HBASE-25534](https://issues.apache.org/jira/browse/HBASE-25534) | Honor TableDescriptor settings earlier in normalization | Major | Normalizer | +| [HBASE-25906](https://issues.apache.org/jira/browse/HBASE-25906) | UI of master-status to show recent history of balancer decision | Major | Balancer, master, UI | +| [HBASE-25899](https://issues.apache.org/jira/browse/HBASE-25899) | Improve efficiency of SnapshotHFileCleaner | Major | master | +| [HBASE-25682](https://issues.apache.org/jira/browse/HBASE-25682) | Add a new command to update the configuration of all RSs in a RSGroup | Major | Admin, shell | +| [HBASE-25032](https://issues.apache.org/jira/browse/HBASE-25032) | Do not assign regions to region server which has not called regionServerReport yet | Major | . | +| [HBASE-25860](https://issues.apache.org/jira/browse/HBASE-25860) | Add metric for successful wal roll requests. | Major | metrics, wal | +| [HBASE-25754](https://issues.apache.org/jira/browse/HBASE-25754) | StripeCompactionPolicy should support compacting cold regions | Minor | Compaction | +| [HBASE-25766](https://issues.apache.org/jira/browse/HBASE-25766) | Introduce RegionSplitRestriction that restricts the pattern of the split point | Major | . | +| [HBASE-25798](https://issues.apache.org/jira/browse/HBASE-25798) | typo in MetricsAssertHelper | Minor | . 
| +| [HBASE-25770](https://issues.apache.org/jira/browse/HBASE-25770) | Http InfoServers should honor gzip encoding when requested | Major | UI | +| [HBASE-25776](https://issues.apache.org/jira/browse/HBASE-25776) | Use Class.asSubclass to fix the warning in StochasticLoadBalancer.loadCustomCostFunctions | Minor | Balancer | +| [HBASE-25767](https://issues.apache.org/jira/browse/HBASE-25767) | CandidateGenerator.getRandomIterationOrder is too slow on large cluster | Major | Balancer, Performance | +| [HBASE-25762](https://issues.apache.org/jira/browse/HBASE-25762) | Improvement for some debug-logging guards | Minor | logging, Performance | +| [HBASE-25653](https://issues.apache.org/jira/browse/HBASE-25653) | Add units and round off region size to 2 digits after decimal | Major | master, Normalizer | +| [HBASE-25482](https://issues.apache.org/jira/browse/HBASE-25482) | Improve SimpleRegionNormalizer#getAverageRegionSizeMb | Minor | Normalizer | +| [HBASE-25759](https://issues.apache.org/jira/browse/HBASE-25759) | The master services field in LocalityBasedCostFunction is never used | Major | Balancer | +| [HBASE-25744](https://issues.apache.org/jira/browse/HBASE-25744) | Change default of \`hbase.normalizer.merge.min\_region\_size.mb\` to \`0\` | Major | master, Normalizer | +| [HBASE-25747](https://issues.apache.org/jira/browse/HBASE-25747) | Remove unused getWriteAvailable method in OperationQuota | Minor | Quotas | +| [HBASE-25558](https://issues.apache.org/jira/browse/HBASE-25558) | Adding audit log for execMasterService | Major | . | +| [HBASE-25703](https://issues.apache.org/jira/browse/HBASE-25703) | Support conditional update in MultiRowMutationEndpoint | Major | Coprocessors | +| [HBASE-25686](https://issues.apache.org/jira/browse/HBASE-25686) | [hbtop] Add some javadoc | Minor | hbtop | +| [HBASE-25627](https://issues.apache.org/jira/browse/HBASE-25627) | HBase replication should have a metric to represent if the source is stuck getting initialized | Major | Replication | +| [HBASE-25688](https://issues.apache.org/jira/browse/HBASE-25688) | Use CustomRequestLog instead of Slf4jRequestLog for jetty | Major | logging, UI | +| [HBASE-25678](https://issues.apache.org/jira/browse/HBASE-25678) | Support nonce operations for Increment/Append in RowMutations and CheckAndMutate | Major | . | +| [HBASE-25679](https://issues.apache.org/jira/browse/HBASE-25679) | Size of log queue metric is incorrect in branch-1/branch-2 | Major | . | +| [HBASE-25518](https://issues.apache.org/jira/browse/HBASE-25518) | Support separate child regions to different region servers | Major | . | +| [HBASE-25643](https://issues.apache.org/jira/browse/HBASE-25643) | The delayed FlushRegionEntry should be removed when we need a non-delayed one | Major | regionserver | +| [HBASE-25621](https://issues.apache.org/jira/browse/HBASE-25621) | Balancer should check region plan source to avoid misplace region groups | Major | Balancer | +| [HBASE-25374](https://issues.apache.org/jira/browse/HBASE-25374) | Make REST Client connection and socket time out configurable | Minor | REST | +| [HBASE-25597](https://issues.apache.org/jira/browse/HBASE-25597) | Add row info in Exception when cell size exceeds maxCellSize | Minor | . | +| [HBASE-25660](https://issues.apache.org/jira/browse/HBASE-25660) | Print split policy in use on Region open (as well as split policy vitals) | Trivial | . 
| +| [HBASE-25635](https://issues.apache.org/jira/browse/HBASE-25635) | CandidateGenerator may miss some region balance actions | Major | Balancer | +| [HBASE-25622](https://issues.apache.org/jira/browse/HBASE-25622) | Result#compareResults should compare tags. | Major | Client | +| [HBASE-25570](https://issues.apache.org/jira/browse/HBASE-25570) | On largish cluster, "CleanerChore: Could not delete dir..." makes master log unreadable | Major | . | +| [HBASE-25566](https://issues.apache.org/jira/browse/HBASE-25566) | RoundRobinTableInputFormat | Major | mapreduce | +| [HBASE-25636](https://issues.apache.org/jira/browse/HBASE-25636) | Expose HBCK report as metrics | Minor | metrics | +| [HBASE-25548](https://issues.apache.org/jira/browse/HBASE-25548) | Optionally allow snapshots to preserve cluster's max filesize config by setting it into table descriptor | Major | . | +| [HBASE-25582](https://issues.apache.org/jira/browse/HBASE-25582) | Support setting scan ReadType to be STREAM at cluster level | Major | . | +| [HBASE-23578](https://issues.apache.org/jira/browse/HBASE-23578) | [UI] Master UI shows long stack traces when table is broken | Minor | master, UI | +| [HBASE-25637](https://issues.apache.org/jira/browse/HBASE-25637) | Rename method completeCompaction to refreshStoreSizeCount | Minor | . | +| [HBASE-25492](https://issues.apache.org/jira/browse/HBASE-25492) | Create table with rsgroup info in branch-2 | Major | rsgroup | +| [HBASE-25603](https://issues.apache.org/jira/browse/HBASE-25603) | Add switch for compaction after bulkload | Major | Compaction | +| [HBASE-25539](https://issues.apache.org/jira/browse/HBASE-25539) | Add metric for age of oldest wal. | Major | metrics, regionserver | +| [HBASE-25574](https://issues.apache.org/jira/browse/HBASE-25574) | Revisit put/delete/increment/append related RegionObserver methods | Major | Coprocessors | +| [HBASE-25541](https://issues.apache.org/jira/browse/HBASE-25541) | In WALEntryStream, set the current path to null while dequeing the log | Major | . | +| [HBASE-23887](https://issues.apache.org/jira/browse/HBASE-23887) | New L1 cache : AdaptiveLRU | Major | BlockCache, Performance | +| [HBASE-25364](https://issues.apache.org/jira/browse/HBASE-25364) | Redo the getMidPoint() in HFileWriterImpl to get rid of the double comparison process | Minor | . | +| [HBASE-25519](https://issues.apache.org/jira/browse/HBASE-25519) | BLOCKSIZE needs to support pretty print | Major | . | +| [HBASE-24772](https://issues.apache.org/jira/browse/HBASE-24772) | Use GetoptLong or OptionParser in hbase-shell | Minor | shell | +| [HBASE-25542](https://issues.apache.org/jira/browse/HBASE-25542) | Add client detail to scan name so when lease expires, we have clue on who was scanning | Major | scan | +| [HBASE-25553](https://issues.apache.org/jira/browse/HBASE-25553) | It is better for ReplicationTracker.getListOfRegionServers to return ServerName instead of String | Major | . | +| [HBASE-25528](https://issues.apache.org/jira/browse/HBASE-25528) | Dedicated merge dispatch threadpool on master | Minor | master | +| [HBASE-25536](https://issues.apache.org/jira/browse/HBASE-25536) | Remove 0 length wal file from logQueue if it belongs to old sources. | Major | Replication | +| [HBASE-25329](https://issues.apache.org/jira/browse/HBASE-25329) | Dump region hashes in logs for the regions that are stuck in transition for more than a configured amount of time | Minor | . 
| +| [HBASE-25475](https://issues.apache.org/jira/browse/HBASE-25475) | Improve unit test for HBASE-25445 : SplitWALRemoteProcedure failed to archive split WAL | Minor | wal | +| [HBASE-25431](https://issues.apache.org/jira/browse/HBASE-25431) | MAX\_FILESIZE and MEMSTORE\_FLUSHSIZE should not be set negative number | Major | . | +| [HBASE-25439](https://issues.apache.org/jira/browse/HBASE-25439) | Add BYTE unit in PrettyPrinter.Unit | Major | . | +| [HBASE-25249](https://issues.apache.org/jira/browse/HBASE-25249) | Adding StoreContext | Major | . | +| [HBASE-23340](https://issues.apache.org/jira/browse/HBASE-23340) | hmaster /hbase/replication/rs session expired (hbase replication default value is true, we don't use ) causes logcleaner can not clean oldWALs, which results in oldWALs too large (more than 2TB) | Major | master | +| [HBASE-25449](https://issues.apache.org/jira/browse/HBASE-25449) | 'dfs.client.read.shortcircuit' should not be set in hbase-default.xml | Major | conf | +| [HBASE-25476](https://issues.apache.org/jira/browse/HBASE-25476) | Enable error prone check in pre commit | Major | build | +| [HBASE-25211](https://issues.apache.org/jira/browse/HBASE-25211) | Rack awareness in region\_mover | Major | . | +| [HBASE-25483](https://issues.apache.org/jira/browse/HBASE-25483) | set the loadMeta log level to debug. | Major | MTTR, Region Assignment | +| [HBASE-25471](https://issues.apache.org/jira/browse/HBASE-25471) | Move RegionScannerImpl out of HRegion | Major | regionserver | +| [HBASE-25458](https://issues.apache.org/jira/browse/HBASE-25458) | HRegion methods cleanup | Major | regionserver | +| [HBASE-25435](https://issues.apache.org/jira/browse/HBASE-25435) | Slow metric value can be configured | Minor | metrics | +| [HBASE-25318](https://issues.apache.org/jira/browse/HBASE-25318) | Configure where IntegrationTestImportTsv generates HFiles | Minor | integration tests | +| [HBASE-25450](https://issues.apache.org/jira/browse/HBASE-25450) | The parameter "hbase.bucketcache.size" is misdescribed | Major | conf | +| [HBASE-24751](https://issues.apache.org/jira/browse/HBASE-24751) | Display Task completion time and/or processing duration on Web UI | Minor | UI | +| [HBASE-25379](https://issues.apache.org/jira/browse/HBASE-25379) | Make retry pause time configurable for regionserver short operation RPC (reportRegionStateTransition/reportProcedureDone) | Minor | regionserver | +| [HBASE-24850](https://issues.apache.org/jira/browse/HBASE-24850) | CellComparator perf improvement | Critical | Performance, scan | +| [HBASE-25443](https://issues.apache.org/jira/browse/HBASE-25443) | Improve the experience of using the Master webpage by change the loading process of snapshot list to asynchronous | Minor | master, UI | +| [HBASE-25425](https://issues.apache.org/jira/browse/HBASE-25425) | Some notes on RawCell | Trivial | . | +| [HBASE-25084](https://issues.apache.org/jira/browse/HBASE-25084) | RegexStringComparator in ParseFilter should support case-insensitive regexes | Major | Thrift | +| [HBASE-25420](https://issues.apache.org/jira/browse/HBASE-25420) | Some minor improvements in rpc implementation | Minor | rpc | +| [HBASE-25246](https://issues.apache.org/jira/browse/HBASE-25246) | Backup/Restore hbase cell tags. 
| Major | backup&restore | +| [HBASE-25363](https://issues.apache.org/jira/browse/HBASE-25363) | Improve performance of HFileLinkCleaner by using ReadWriteLock instead of synchronize | Major | master | +| [HBASE-25328](https://issues.apache.org/jira/browse/HBASE-25328) | Add builder method to create Tags. | Minor | . | +| [HBASE-25187](https://issues.apache.org/jira/browse/HBASE-25187) | Improve SizeCachedKV variants initialization | Minor | . | +| [HBASE-23303](https://issues.apache.org/jira/browse/HBASE-23303) | Add security headers to REST server/info page | Major | REST | + + +### BUG FIXES: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-25970](https://issues.apache.org/jira/browse/HBASE-25970) | MOB data loss - incorrect concatenation of MOB\_FILE\_REFS | Critical | mob | +| [HBASE-24163](https://issues.apache.org/jira/browse/HBASE-24163) | MOB compactor implementations should use format specifiers when calling String.format | Major | Compaction, mob | +| [HBASE-27292](https://issues.apache.org/jira/browse/HBASE-27292) | Fix build failure against Hadoop 3.3.4 due to added dependency on okhttp | Major | build, hadoop3, pom | +| [HBASE-27275](https://issues.apache.org/jira/browse/HBASE-27275) | graceful\_stop.sh unable to restore the balance state | Blocker | regionserver | +| [HBASE-27282](https://issues.apache.org/jira/browse/HBASE-27282) | CME in AuthManager causes region server crash | Major | acl | +| [HBASE-26775](https://issues.apache.org/jira/browse/HBASE-26775) | TestProcedureSchedulerConcurrency fails in pre commit | Major | proc-v2, test | +| [HBASE-27269](https://issues.apache.org/jira/browse/HBASE-27269) | The implementation of TestReplicationStatus.waitOnMetricsReport is incorrect | Major | Replication, test | +| [HBASE-27271](https://issues.apache.org/jira/browse/HBASE-27271) | BufferCallBeforeInitHandler should ignore the flush request | Major | IPC/RPC | +| [HBASE-26977](https://issues.apache.org/jira/browse/HBASE-26977) | HMaster's ShutdownHook does not take effect, if tablesOnMaster is false | Major | master | +| [HBASE-27247](https://issues.apache.org/jira/browse/HBASE-27247) | TestPerTableCFReplication.testParseTableCFsFromConfig is broken because of ReplicationPeerConfigUtil.parseTableCFsFromConfig | Major | Replication | +| [HBASE-27087](https://issues.apache.org/jira/browse/HBASE-27087) | TestQuotaThrottle times out | Major | test | +| [HBASE-27179](https://issues.apache.org/jira/browse/HBASE-27179) | Issues building with OpenJDK 17 | Minor | build | +| [HBASE-27204](https://issues.apache.org/jira/browse/HBASE-27204) | BlockingRpcClient will hang for 20 seconds when SASL is enabled after finishing negotiation | Critical | rpc, sasl, security | +| [HBASE-27075](https://issues.apache.org/jira/browse/HBASE-27075) | TestUpdateRSGroupConfiguration.testCustomOnlineConfigChangeInRSGroup is flaky | Minor | rsgroup, test | +| [HBASE-27219](https://issues.apache.org/jira/browse/HBASE-27219) | Change JONI encoding in RegexStringComparator | Minor | Filters | +| [HBASE-27195](https://issues.apache.org/jira/browse/HBASE-27195) | Clean up netty worker/thread pool configuration | Major | . | +| [HBASE-27211](https://issues.apache.org/jira/browse/HBASE-27211) | Data race in MonitoredTaskImpl could cause split wal failure | Critical | monitoring, wal | +| [HBASE-27053](https://issues.apache.org/jira/browse/HBASE-27053) | IOException during caching of uncompressed block to the block cache. 
| Major | BlockCache | +| [HBASE-27192](https://issues.apache.org/jira/browse/HBASE-27192) | The retry number for TestSeparateClientZKCluster is too small | Major | test, Zookeeper | +| [HBASE-27193](https://issues.apache.org/jira/browse/HBASE-27193) | TestZooKeeper is flaky | Major | test, Zookeeper | +| [HBASE-27097](https://issues.apache.org/jira/browse/HBASE-27097) | SimpleRpcServer is broken | Blocker | rpc | +| [HBASE-27189](https://issues.apache.org/jira/browse/HBASE-27189) | NettyServerRpcConnection is not properly closed when the netty channel is closed | Blocker | netty, rpc | +| [HBASE-27169](https://issues.apache.org/jira/browse/HBASE-27169) | TestSeparateClientZKCluster is flaky | Major | test | +| [HBASE-27180](https://issues.apache.org/jira/browse/HBASE-27180) | Multiple possible buffer leaks | Major | netty, regionserver | +| [HBASE-26708](https://issues.apache.org/jira/browse/HBASE-26708) | Netty "leak detected" and OutOfDirectMemoryError due to direct memory buffering with SASL implementation | Blocker | netty, rpc, sasl | +| [HBASE-27171](https://issues.apache.org/jira/browse/HBASE-27171) | Fix Annotation Error in HRegionFileSystem | Trivial | . | +| [HBASE-27170](https://issues.apache.org/jira/browse/HBASE-27170) | ByteBuffAllocator leak when decompressing blocks near minSizeForReservoirUse | Major | . | +| [HBASE-27160](https://issues.apache.org/jira/browse/HBASE-27160) | ClientZKSyncer.deleteDataForClientZkUntilSuccess should break from the loop when deletion is succeeded | Major | Client, Zookeeper | +| [HBASE-27001](https://issues.apache.org/jira/browse/HBASE-27001) | The deleted variable cannot be printed out | Minor | . | +| [HBASE-27105](https://issues.apache.org/jira/browse/HBASE-27105) | HBaseInterClusterReplicationEndpoint should honor replication adaptive timeout | Major | Replication | +| [HBASE-27098](https://issues.apache.org/jira/browse/HBASE-27098) | Fix link for field comments | Minor | . | +| [HBASE-27143](https://issues.apache.org/jira/browse/HBASE-27143) | Add hbase-unsafe as a dependency for a MR job triggered by hbase shell | Major | integration tests, mapreduce | +| [HBASE-27099](https://issues.apache.org/jira/browse/HBASE-27099) | In the HFileBlock class, the log printing fspread/fsread cost time unit should be milliseconds | Minor | HFile | +| [HBASE-27128](https://issues.apache.org/jira/browse/HBASE-27128) | when open archiveRetries totalLogSize calculation mistake | Minor | wal | +| [HBASE-27117](https://issues.apache.org/jira/browse/HBASE-27117) | Update the method comments for RegionServerAccounting | Minor | . | +| [HBASE-27103](https://issues.apache.org/jira/browse/HBASE-27103) | All MR UTs are broken because of ClassNotFound | Critical | hadoop3, test | +| [HBASE-27066](https://issues.apache.org/jira/browse/HBASE-27066) | The Region Visualizer display failed | Major | . | +| [HBASE-27038](https://issues.apache.org/jira/browse/HBASE-27038) | CellComparator should extend Serializable | Minor | . | +| [HBASE-27017](https://issues.apache.org/jira/browse/HBASE-27017) | MOB snapshot is broken when FileBased SFT is used | Major | mob | +| [HBASE-26985](https://issues.apache.org/jira/browse/HBASE-26985) | SecureBulkLoadManager will set wrong permission if umask too strict | Major | regionserver | +| [HBASE-27081](https://issues.apache.org/jira/browse/HBASE-27081) | Fix disallowed compatibility breaks on branch-2.5 and branch-2 | Blocker | . 
| +| [HBASE-27079](https://issues.apache.org/jira/browse/HBASE-27079) | Lower some DEBUG level logs in ReplicationSourceWALReader to TRACE | Minor | . | +| [HBASE-27068](https://issues.apache.org/jira/browse/HBASE-27068) | NPE occurs when the active master has not yet been elected | Major | . | +| [HBASE-27064](https://issues.apache.org/jira/browse/HBASE-27064) | CME in TestRegionNormalizerWorkQueue | Minor | test | +| [HBASE-27069](https://issues.apache.org/jira/browse/HBASE-27069) | Hbase SecureBulkload permission regression | Major | . | +| [HBASE-27065](https://issues.apache.org/jira/browse/HBASE-27065) | Build against Hadoop 3.3.3 | Major | build | +| [HBASE-26854](https://issues.apache.org/jira/browse/HBASE-26854) | Shell startup logs a bunch of noise | Major | . | +| [HBASE-27061](https://issues.apache.org/jira/browse/HBASE-27061) | two phase bulkload is broken when SFT is in use. | Major | . | +| [HBASE-27055](https://issues.apache.org/jira/browse/HBASE-27055) | Add additional comments when using HBASE\_TRACE\_OPTS with standalone mode | Minor | tracing | +| [HBASE-27030](https://issues.apache.org/jira/browse/HBASE-27030) | Fix undefined local variable error in draining\_servers.rb | Major | jruby, shell | +| [HBASE-27047](https://issues.apache.org/jira/browse/HBASE-27047) | Fix typo for metric drainingRegionServers | Minor | metrics | +| [HBASE-27027](https://issues.apache.org/jira/browse/HBASE-27027) | Deprecated jetty SslContextFactory cause HMaster startup failure due to multiple certificates in KeyStores | Major | security | +| [HBASE-27032](https://issues.apache.org/jira/browse/HBASE-27032) | The draining region servers metric description is incorrect | Minor | . | +| [HBASE-27019](https://issues.apache.org/jira/browse/HBASE-27019) | Minor compression performance improvements | Trivial | . | +| [HBASE-26905](https://issues.apache.org/jira/browse/HBASE-26905) | ReplicationPeerManager#checkPeerExists should throw ReplicationPeerNotFoundException if peer doesn't exists | Major | Replication | +| [HBASE-27021](https://issues.apache.org/jira/browse/HBASE-27021) | StoreFileInfo should set its initialPath in a consistent way | Major | . | +| [HBASE-26994](https://issues.apache.org/jira/browse/HBASE-26994) | MasterFileSystem create directory without permission check | Major | master | +| [HBASE-26963](https://issues.apache.org/jira/browse/HBASE-26963) | ReplicationSource#removePeer hangs if we try to remove bad peer. | Major | regionserver, Replication | +| [HBASE-27000](https://issues.apache.org/jira/browse/HBASE-27000) | Block cache stats (Misses Caching) display error in RS web UI | Major | . | +| [HBASE-26984](https://issues.apache.org/jira/browse/HBASE-26984) | Chaos Monkey thread dies in ITBLL Chaos GracefulRollingRestartRsAction | Major | integration tests | +| [HBASE-26992](https://issues.apache.org/jira/browse/HBASE-26992) | Brotli compressor has unexpected behavior during reinitialization | Minor | . 
| +| [HBASE-26988](https://issues.apache.org/jira/browse/HBASE-26988) | Balancer should reset to default setting for hbase.master.loadbalance.bytable if dynamically reloading configuration | Minor | Balancer | +| [HBASE-26917](https://issues.apache.org/jira/browse/HBASE-26917) | Do not add --threads when running 'mvn site' | Major | build, scripts | +| [HBASE-22349](https://issues.apache.org/jira/browse/HBASE-22349) | Stochastic Load Balancer skips balancing when node is replaced in cluster | Major | Balancer | +| [HBASE-26979](https://issues.apache.org/jira/browse/HBASE-26979) | StoreFileListFile logs frequent stacktraces at INFO level | Minor | . | +| [HBASE-26941](https://issues.apache.org/jira/browse/HBASE-26941) | LocalHBaseCluster.waitOnRegionServer should not call join while interrupted | Critical | test | +| [HBASE-26938](https://issues.apache.org/jira/browse/HBASE-26938) | Compaction failures after StoreFileTracker integration | Blocker | Compaction | +| [HBASE-26944](https://issues.apache.org/jira/browse/HBASE-26944) | Possible resource leak while creating new region scanner | Major | . | +| [HBASE-26895](https://issues.apache.org/jira/browse/HBASE-26895) | on hbase shell, 'delete/deleteall' for a columnfamily is not working | Major | shell | +| [HBASE-26901](https://issues.apache.org/jira/browse/HBASE-26901) | delete with null columnQualifier occurs NullPointerException when NewVersionBehavior is on | Major | Deletes, Scanners | +| [HBASE-26880](https://issues.apache.org/jira/browse/HBASE-26880) | Misspelling commands in hbase shell will crash the shell | Minor | shell | +| [HBASE-26939](https://issues.apache.org/jira/browse/HBASE-26939) | Typo in admin.rb "COMPRESSION\_COMPACT\_MAJPR" | Trivial | . | +| [HBASE-26924](https://issues.apache.org/jira/browse/HBASE-26924) | [Documentation] Fix log parameter error and spelling error | Trivial | logging | +| [HBASE-26811](https://issues.apache.org/jira/browse/HBASE-26811) | Secondary replica may be disabled for read incorrectly forever | Major | read replicas | +| [HBASE-26812](https://issues.apache.org/jira/browse/HBASE-26812) | ShortCircuitingClusterConnection fails to close RegionScanners when making short-circuited calls | Critical | . | +| [HBASE-26838](https://issues.apache.org/jira/browse/HBASE-26838) | Junit jar is not included in the hbase tar ball, causing issues for some hbase tools that do rely on it | Major | integration tests, tooling | +| [HBASE-26871](https://issues.apache.org/jira/browse/HBASE-26871) | shaded mapreduce and shaded byo-hadoop client artifacts contains no classes | Blocker | integration tests, jenkins, mapreduce | +| [HBASE-26896](https://issues.apache.org/jira/browse/HBASE-26896) | list\_quota\_snapshots fails with ‘ERROR NameError: uninitialized constant Shell::Commands::ListQuotaSnapshots::TABLE’ | Major | shell | +| [HBASE-26718](https://issues.apache.org/jira/browse/HBASE-26718) | HFileArchiver can remove referenced StoreFiles from the archive | Major | Compaction, HFile, snapshots | +| [HBASE-26864](https://issues.apache.org/jira/browse/HBASE-26864) | SplitTableRegionProcedure calls openParentRegions() at a wrong state during rollback. | Major | Region Assignment | +| [HBASE-26876](https://issues.apache.org/jira/browse/HBASE-26876) | Use toStringBinary for rowkey in RegionServerCallable error string | Minor | . | +| [HBASE-26875](https://issues.apache.org/jira/browse/HBASE-26875) | RpcRetryingCallerImpl translateException ignores return value of recursive call | Minor | . 
| +| [HBASE-26869](https://issues.apache.org/jira/browse/HBASE-26869) | RSRpcServices.scan should deep clone cells when RpcCallContext is null | Major | regionserver | +| [HBASE-26870](https://issues.apache.org/jira/browse/HBASE-26870) | Log4j2 integration is incorrect in nighly's client integration test | Critical | jenkins, logging | +| [HBASE-26840](https://issues.apache.org/jira/browse/HBASE-26840) | Fix NPE in the retry of logroller | Minor | wal | +| [HBASE-26670](https://issues.apache.org/jira/browse/HBASE-26670) | HFileLinkCleaner should be added even if snapshot is disabled | Critical | snapshots | +| [HBASE-26761](https://issues.apache.org/jira/browse/HBASE-26761) | TestMobStoreScanner (testGetMassive) can OOME | Minor | mob, test | +| [HBASE-26816](https://issues.apache.org/jira/browse/HBASE-26816) | Fix CME in ReplicationSourceManager | Minor | Replication | +| [HBASE-26715](https://issues.apache.org/jira/browse/HBASE-26715) | Blocked on SyncFuture in AsyncProtobufLogWriter#write | Major | . | +| [HBASE-26804](https://issues.apache.org/jira/browse/HBASE-26804) | Missing opentelemetry agent in hadoop-two-compat.xml | Blocker | tracing | +| [HBASE-26815](https://issues.apache.org/jira/browse/HBASE-26815) | TestFanOutOneBlockAsyncDFSOutput is flakey | Major | test | +| [HBASE-26783](https://issues.apache.org/jira/browse/HBASE-26783) | ScannerCallable doubly clears meta cache on retries | Major | . | +| [HBASE-26777](https://issues.apache.org/jira/browse/HBASE-26777) | BufferedDataBlockEncoder$OffheapDecodedExtendedCell.deepClone throws UnsupportedOperationException | Major | regionserver | +| [HBASE-26745](https://issues.apache.org/jira/browse/HBASE-26745) | MetricsStochasticBalancerSource metrics don't render in /jmx endpoint | Minor | . | +| [HBASE-26776](https://issues.apache.org/jira/browse/HBASE-26776) | RpcServer failure to SASL handshake always logs user "unknown" to audit log | Major | security | +| [HBASE-26772](https://issues.apache.org/jira/browse/HBASE-26772) | Shell suspended in background | Minor | shell | +| [HBASE-26767](https://issues.apache.org/jira/browse/HBASE-26767) | Rest server should not use a large Header Cache. | Major | REST | +| [HBASE-26546](https://issues.apache.org/jira/browse/HBASE-26546) | hbase-shaded-client missing required thirdparty classes under hadoop 3.3.1 | Major | Client, hadoop3, shading | +| [HBASE-26727](https://issues.apache.org/jira/browse/HBASE-26727) | Fix CallDroppedException reporting | Minor | . | +| [HBASE-26712](https://issues.apache.org/jira/browse/HBASE-26712) | Balancer encounters NPE in rare case | Major | . | +| [HBASE-26742](https://issues.apache.org/jira/browse/HBASE-26742) | Comparator of NOT\_EQUAL NULL is invalid for checkAndMutate | Major | . | +| [HBASE-26688](https://issues.apache.org/jira/browse/HBASE-26688) | Threads shared EMPTY\_RESULT may lead to unexpected client job down. | Major | Client | +| [HBASE-26741](https://issues.apache.org/jira/browse/HBASE-26741) | Incorrect exception handling in shell | Critical | shell | +| [HBASE-26729](https://issues.apache.org/jira/browse/HBASE-26729) | Backport "HBASE-26714 Introduce path configuration for system coprocessors" to branch-2 | Major | Coprocessors | +| [HBASE-26713](https://issues.apache.org/jira/browse/HBASE-26713) | Increments submitted by 1.x clients will be stored with timestamp 0 on 2.x+ clusters | Major | . 
| +| [HBASE-26679](https://issues.apache.org/jira/browse/HBASE-26679) | Wait on the future returned by FanOutOneBlockAsyncDFSOutput.flush would stuck | Critical | wal | +| [HBASE-26662](https://issues.apache.org/jira/browse/HBASE-26662) | User.createUserForTesting should not reset UserProvider.groups every time if hbase.group.service.for.test.only is true | Major | . | +| [HBASE-26671](https://issues.apache.org/jira/browse/HBASE-26671) | Misspellings of hbck usage | Minor | hbck | +| [HBASE-26469](https://issues.apache.org/jira/browse/HBASE-26469) | correct HBase shell exit behavior to match code passed to exit | Critical | shell | +| [HBASE-26543](https://issues.apache.org/jira/browse/HBASE-26543) | correct parsing of hbase shell args with GetoptLong | Blocker | shell | +| [HBASE-26643](https://issues.apache.org/jira/browse/HBASE-26643) | LoadBalancer should not return empty map | Critical | proc-v2, Region Assignment, test | +| [HBASE-26646](https://issues.apache.org/jira/browse/HBASE-26646) | WALPlayer should obtain token from filesystem | Minor | . | +| [HBASE-26494](https://issues.apache.org/jira/browse/HBASE-26494) | Using RefCnt to fix the flawed MemStoreLABImpl | Major | regionserver | +| [HBASE-26625](https://issues.apache.org/jira/browse/HBASE-26625) | ExportSnapshot tool failed to copy data files for tables with merge region | Minor | . | +| [HBASE-26615](https://issues.apache.org/jira/browse/HBASE-26615) | Snapshot referenced data files are deleted when delete a table with merge regions | Major | . | +| [HBASE-26613](https://issues.apache.org/jira/browse/HBASE-26613) | The logic of the method incrementIV in Encryption class has problem | Major | Performance, security | +| [HBASE-26488](https://issues.apache.org/jira/browse/HBASE-26488) | Memory leak when MemStore retry flushing | Major | regionserver | +| [HBASE-26340](https://issues.apache.org/jira/browse/HBASE-26340) | TableSplit returns false size under 1MB | Major | mapreduce, regionserver | +| [HBASE-26537](https://issues.apache.org/jira/browse/HBASE-26537) | FuzzyRowFilter backwards compatibility | Major | . | +| [HBASE-26550](https://issues.apache.org/jira/browse/HBASE-26550) | Make sure the master is running normally before accepting a balance command | Minor | Balancer, master | +| [HBASE-26541](https://issues.apache.org/jira/browse/HBASE-26541) | hbase-protocol-shaded not buildable on M1 MacOSX | Major | . | +| [HBASE-26527](https://issues.apache.org/jira/browse/HBASE-26527) | ArrayIndexOutOfBoundsException in KeyValueUtil.copyToNewKeyValue() | Major | wal | +| [HBASE-26462](https://issues.apache.org/jira/browse/HBASE-26462) | Should persist restoreAcl flag in the procedure state for CloneSnapshotProcedure and RestoreSnapshotProcedure | Critical | proc-v2, snapshots | +| [HBASE-26535](https://issues.apache.org/jira/browse/HBASE-26535) | [site, branch-2] DependencyManagement report is broken | Blocker | build, pom | +| [HBASE-26533](https://issues.apache.org/jira/browse/HBASE-26533) | KeyValueScanner might not be properly closed when using InternalScan.checkOnlyMemStore() | Minor | . | +| [HBASE-26482](https://issues.apache.org/jira/browse/HBASE-26482) | HMaster may clean wals that is replicating in rare cases | Critical | Replication | +| [HBASE-26468](https://issues.apache.org/jira/browse/HBASE-26468) | Region Server doesn't exit cleanly incase it crashes. 
| Major | regionserver | +| [HBASE-25905](https://issues.apache.org/jira/browse/HBASE-25905) | Shutdown of WAL stuck at waitForSafePoint | Blocker | regionserver, wal | +| [HBASE-26455](https://issues.apache.org/jira/browse/HBASE-26455) | TestStochasticLoadBalancerRegionReplicaWithRacks fails consistently | Major | Balancer | +| [HBASE-26450](https://issues.apache.org/jira/browse/HBASE-26450) | Server configuration will overwrite HStore configuration after using shell command 'update\_config' | Minor | Compaction, conf, regionserver | +| [HBASE-26476](https://issues.apache.org/jira/browse/HBASE-26476) | Make DefaultMemStore extensible for HStore.memstore | Major | regionserver | +| [HBASE-26465](https://issues.apache.org/jira/browse/HBASE-26465) | MemStoreLAB may be released early when its SegmentScanner is scanning | Critical | regionserver | +| [HBASE-26467](https://issues.apache.org/jira/browse/HBASE-26467) | Wrong Cell Generated by MemStoreLABImpl.forceCopyOfBigCellInto when Cell size bigger than data chunk size | Critical | in-memory-compaction | +| [HBASE-26463](https://issues.apache.org/jira/browse/HBASE-26463) | Unreadable table names after HBASE-24605 | Trivial | UI | +| [HBASE-26438](https://issues.apache.org/jira/browse/HBASE-26438) | Fix flaky test TestHStore.testCompactingMemStoreCellExceedInmemoryFlushSize | Major | test | +| [HBASE-26436](https://issues.apache.org/jira/browse/HBASE-26436) | check-aggregate-license error related to javadns after HADOOP-17317 | Major | build, hadoop3 | +| [HBASE-26311](https://issues.apache.org/jira/browse/HBASE-26311) | Balancer gets stuck in cohosted replica distribution | Major | Balancer | +| [HBASE-26384](https://issues.apache.org/jira/browse/HBASE-26384) | Segment already flushed to hfile may still be remained in CompactingMemStore | Major | in-memory-compaction | +| [HBASE-26414](https://issues.apache.org/jira/browse/HBASE-26414) | Tracing INSTRUMENTATION\_NAME is incorrect | Blocker | tracing | +| [HBASE-26410](https://issues.apache.org/jira/browse/HBASE-26410) | Fix HBase TestCanaryTool for Java17 | Major | java | +| [HBASE-26429](https://issues.apache.org/jira/browse/HBASE-26429) | HeapMemoryManager fails memstore flushes with NPE if enabled | Major | Operability, regionserver | +| [HBASE-26404](https://issues.apache.org/jira/browse/HBASE-26404) | Update javadoc for CellUtil#createCell with tags methods. | Major | . | +| [HBASE-26398](https://issues.apache.org/jira/browse/HBASE-26398) | CellCounter fails for large tables filling up local disk | Minor | mapreduce | +| [HBASE-26190](https://issues.apache.org/jira/browse/HBASE-26190) | High rate logging of BucketAllocatorException: Allocation too big | Major | BucketCache, Operability | +| [HBASE-26392](https://issues.apache.org/jira/browse/HBASE-26392) | Update ClassSize.BYTE\_BUFFER for JDK17 | Major | java, util | +| [HBASE-26394](https://issues.apache.org/jira/browse/HBASE-26394) | Cache in RSRpcServices.executeProcedures does not take effect | Major | . | +| [HBASE-26385](https://issues.apache.org/jira/browse/HBASE-26385) | Clear CellScanner when replay | Major | regionserver, rpc | +| [HBASE-26383](https://issues.apache.org/jira/browse/HBASE-26383) | HBCK incorrectly reports inconsistencies for recently split regions following a master failover | Critical | master | +| [HBASE-26371](https://issues.apache.org/jira/browse/HBASE-26371) | Prioritize meta region move over other region moves in region\_mover | Major | . 
| +| [HBASE-26361](https://issues.apache.org/jira/browse/HBASE-26361) | Enable OpenTelemetry to be used from developer sandbox | Major | tracing | +| [HBASE-26364](https://issues.apache.org/jira/browse/HBASE-26364) | TestThriftServer is failing 100% in our flaky test job | Major | test, Thrift | +| [HBASE-26350](https://issues.apache.org/jira/browse/HBASE-26350) | Missing server side debugging on failed SASL handshake | Minor | . | +| [HBASE-26344](https://issues.apache.org/jira/browse/HBASE-26344) | Fix Bug for MultiByteBuff.put(int, byte) | Major | . | +| [HBASE-26312](https://issues.apache.org/jira/browse/HBASE-26312) | Shell scan fails with timestamp | Major | shell, test | +| [HBASE-24601](https://issues.apache.org/jira/browse/HBASE-24601) | Change default Hfile storage policy from HOT to NONE for HDFS | Major | HFile | +| [HBASE-26295](https://issues.apache.org/jira/browse/HBASE-26295) | BucketCache could not free BucketEntry which restored from persistence file | Major | BucketCache | +| [HBASE-26289](https://issues.apache.org/jira/browse/HBASE-26289) | Hbase scan setMaxResultsPerColumnFamily not giving right results | Major | regionserver | +| [HBASE-26299](https://issues.apache.org/jira/browse/HBASE-26299) | Fix TestHTableTracing.testTableClose for nightly build of branch-2 | Major | test, tracing | +| [HBASE-26238](https://issues.apache.org/jira/browse/HBASE-26238) | OOME in VerifyReplication for the table contains rows with 10M+ cells | Major | Client, Replication | +| [HBASE-26297](https://issues.apache.org/jira/browse/HBASE-26297) | Balancer run is improperly triggered by accuracy error of double comparison | Major | Balancer | +| [HBASE-26178](https://issues.apache.org/jira/browse/HBASE-26178) | Improve data structure and algorithm for BalanceClusterState to improve computation speed for large cluster | Major | Balancer, Performance | +| [HBASE-26274](https://issues.apache.org/jira/browse/HBASE-26274) | Create an option to reintroduce BlockCache to mapreduce job | Major | BlockCache, HFile, mapreduce | +| [HBASE-26261](https://issues.apache.org/jira/browse/HBASE-26261) | Store configuration loss when use update\_config | Minor | . | +| [HBASE-26281](https://issues.apache.org/jira/browse/HBASE-26281) | DBB got from BucketCache would be freed unexpectedly before RPC completed | Critical | BucketCache | +| [HBASE-26197](https://issues.apache.org/jira/browse/HBASE-26197) | Fix some obvious bugs in MultiByteBuff.put | Major | . | +| [HBASE-26106](https://issues.apache.org/jira/browse/HBASE-26106) | AbstractFSWALProvider#getArchivedLogPath doesn't look for wal file in all oldWALs directory. | Critical | wal | +| [HBASE-26205](https://issues.apache.org/jira/browse/HBASE-26205) | TableMRUtil#initCredentialsForCluster should use specified conf for UserProvider | Major | mapreduce | +| [HBASE-26210](https://issues.apache.org/jira/browse/HBASE-26210) | HBase Write should be doomed to hang when cell size exceeds InmemoryFlushSize for CompactingMemStore | Critical | in-memory-compaction | +| [HBASE-26244](https://issues.apache.org/jira/browse/HBASE-26244) | Avoid trim the error stack trace when running UT with maven | Major | . | +| [HBASE-25588](https://issues.apache.org/jira/browse/HBASE-25588) | Excessive logging of "hbase.zookeeper.useMulti is deprecated. Default to true always." 
| Minor | logging, Operability, Replication | +| [HBASE-26232](https://issues.apache.org/jira/browse/HBASE-26232) | SEEK\_NEXT\_USING\_HINT is ignored on reversed Scans | Critical | Filters, scan | +| [HBASE-26204](https://issues.apache.org/jira/browse/HBASE-26204) | VerifyReplication should obtain token for peerQuorumAddress too | Major | . | +| [HBASE-26219](https://issues.apache.org/jira/browse/HBASE-26219) | Negative time is logged while waiting on regionservers | Trivial | . | +| [HBASE-26087](https://issues.apache.org/jira/browse/HBASE-26087) | JVM crash when displaying RPC params by MonitoredRPCHandler | Major | UI | +| [HBASE-24570](https://issues.apache.org/jira/browse/HBASE-24570) | connection#close throws NPE | Minor | Client | +| [HBASE-26200](https://issues.apache.org/jira/browse/HBASE-26200) | Undo 'HBASE-25165 Change 'State time' in UI so sorts (#2508)' in favor of HBASE-24652 | Major | UI | +| [HBASE-26196](https://issues.apache.org/jira/browse/HBASE-26196) | Support configuration override for remote cluster of HFileOutputFormat locality sensitive | Major | mapreduce | +| [HBASE-26026](https://issues.apache.org/jira/browse/HBASE-26026) | HBase Write may be stuck forever when using CompactingMemStore | Critical | in-memory-compaction | +| [HBASE-26155](https://issues.apache.org/jira/browse/HBASE-26155) | JVM crash when scan | Major | Scanners | +| [HBASE-26176](https://issues.apache.org/jira/browse/HBASE-26176) | Correct regex in hbase-personality.sh | Minor | build | +| [HBASE-26142](https://issues.apache.org/jira/browse/HBASE-26142) | NullPointerException when set 'hbase.hregion.memstore.mslab.indexchunksize.percent' to zero | Critical | . | +| [HBASE-25651](https://issues.apache.org/jira/browse/HBASE-25651) | NORMALIZER\_TARGET\_REGION\_SIZE needs a unit in its name | Major | Normalizer | +| [HBASE-26166](https://issues.apache.org/jira/browse/HBASE-26166) | table list in master ui has a minor bug | Minor | UI | +| [HBASE-26114](https://issues.apache.org/jira/browse/HBASE-26114) | when “hbase.mob.compaction.threads.max” is set to a negative number, HMaster cannot start normally | Minor | master | +| [HBASE-26120](https://issues.apache.org/jira/browse/HBASE-26120) | New replication gets stuck or data loss when multiwal groups more than 10 | Critical | Replication | +| [HBASE-26001](https://issues.apache.org/jira/browse/HBASE-26001) | When turn on access control, the cell level TTL of Increment and Append operations is invalid. | Minor | Coprocessors | +| [HBASE-24984](https://issues.apache.org/jira/browse/HBASE-24984) | WAL corruption due to early DBBs re-use when Durability.ASYNC\_WAL is used with multi operation | Critical | rpc, wal | +| [HBASE-26088](https://issues.apache.org/jira/browse/HBASE-26088) | conn.getBufferedMutator(tableName) leaks thread executors and other problems | Critical | Client | +| [HBASE-25973](https://issues.apache.org/jira/browse/HBASE-25973) | Balancer should explain progress in a better way in log | Major | Balancer | +| [HBASE-26086](https://issues.apache.org/jira/browse/HBASE-26086) | TestHRegionReplayEvents do not pass in branch-2 and throws NullPointerException | Minor | . 
| +| [HBASE-26036](https://issues.apache.org/jira/browse/HBASE-26036) | DBB released too early and dirty data for some operations | Critical | rpc | +| [HBASE-26068](https://issues.apache.org/jira/browse/HBASE-26068) | The last assertion in TestHStore.testRefreshStoreFilesNotChanged is wrong | Major | test | +| [HBASE-22923](https://issues.apache.org/jira/browse/HBASE-22923) | hbase:meta is assigned to localhost when we downgrade the hbase version | Major | . | +| [HBASE-26063](https://issues.apache.org/jira/browse/HBASE-26063) | The current checkcompatibility.py script can not compare master and rel/2.0.0 | Blocker | scripts | +| [HBASE-26030](https://issues.apache.org/jira/browse/HBASE-26030) | hbase-cleanup.sh did not clean the wal dir if hbase.wal.dir configured individually | Major | scripts | +| [HBASE-26035](https://issues.apache.org/jira/browse/HBASE-26035) | Redundant null check in the compareTo function | Minor | metrics, Performance | +| [HBASE-25902](https://issues.apache.org/jira/browse/HBASE-25902) | Add missing CFs in meta during HBase 1 to 2.3+ Upgrade | Critical | meta, Operability | +| [HBASE-26028](https://issues.apache.org/jira/browse/HBASE-26028) | The view as json page shows exception when using TinyLfuBlockCache | Major | UI | +| [HBASE-26029](https://issues.apache.org/jira/browse/HBASE-26029) | It is not reliable to use nodeDeleted event to track region server's death | Critical | Replication, Zookeeper | +| [HBASE-26039](https://issues.apache.org/jira/browse/HBASE-26039) | TestReplicationKillRS is useless after HBASE-23956 | Major | Replication, test | +| [HBASE-25980](https://issues.apache.org/jira/browse/HBASE-25980) | Master table.jsp pointed at meta throws 500 when no all replicas are online | Major | master, meta replicas, UI | +| [HBASE-26013](https://issues.apache.org/jira/browse/HBASE-26013) | Get operations readRows metrics becomes zero after HBASE-25677 | Minor | metrics | +| [HBASE-25966](https://issues.apache.org/jira/browse/HBASE-25966) | Fix typo in NOTICE.vm | Major | build, community | +| [HBASE-25877](https://issues.apache.org/jira/browse/HBASE-25877) | Add access check for compactionSwitch | Major | security | +| [HBASE-25698](https://issues.apache.org/jira/browse/HBASE-25698) | Persistent IllegalReferenceCountException at scanner open when using TinyLfuBlockCache | Major | BucketCache, HFile, Scanners | +| [HBASE-25984](https://issues.apache.org/jira/browse/HBASE-25984) | FSHLog WAL lockup with sync future reuse [RS deadlock] | Critical | regionserver, wal | +| [HBASE-25994](https://issues.apache.org/jira/browse/HBASE-25994) | Active WAL tailing fails when WAL value compression is enabled | Major | . | +| [HBASE-25924](https://issues.apache.org/jira/browse/HBASE-25924) | Seeing a spike in uncleanlyClosedWALs metric. | Major | Replication, wal | +| [HBASE-25967](https://issues.apache.org/jira/browse/HBASE-25967) | The readRequestsCount does not calculate when the outResults is empty | Major | metrics | +| [HBASE-25981](https://issues.apache.org/jira/browse/HBASE-25981) | JVM crash when displaying regionserver UI | Major | rpc, UI | +| [HBASE-25930](https://issues.apache.org/jira/browse/HBASE-25930) | Thrift does not support requests in Kerberos environment | Major | Thrift | +| [HBASE-25929](https://issues.apache.org/jira/browse/HBASE-25929) | RegionServer JVM crash when compaction | Critical | Compaction | +| [HBASE-25932](https://issues.apache.org/jira/browse/HBASE-25932) | TestWALEntryStream#testCleanClosedWALs test is failing. 
| Major | metrics, Replication, wal | +| [HBASE-25903](https://issues.apache.org/jira/browse/HBASE-25903) | ReadOnlyZKClient APIs - CompletableFuture.get() calls can cause threads to hang forver when ZK client create throws Non IOException | Major | . | +| [HBASE-25927](https://issues.apache.org/jira/browse/HBASE-25927) | Fix the log messages by not stringifying the exceptions in log | Minor | logging | +| [HBASE-25938](https://issues.apache.org/jira/browse/HBASE-25938) | The SnapshotOfRegionAssignmentFromMeta.initialize call in FavoredNodeLoadBalancer is just a dummy one | Major | Balancer, FavoredNodes | +| [HBASE-25861](https://issues.apache.org/jira/browse/HBASE-25861) | Correct the usage of Configuration#addDeprecation | Major | . | +| [HBASE-25928](https://issues.apache.org/jira/browse/HBASE-25928) | TestHBaseConfiguration#testDeprecatedConfigurations is broken with Hadoop 3.3 | Major | . | +| [HBASE-25898](https://issues.apache.org/jira/browse/HBASE-25898) | RS getting aborted due to NPE in Replication WALEntryStream | Critical | Replication | +| [HBASE-25875](https://issues.apache.org/jira/browse/HBASE-25875) | RegionServer failed to start due to IllegalThreadStateException in AuthenticationTokenSecretManager.start | Major | . | +| [HBASE-25513](https://issues.apache.org/jira/browse/HBASE-25513) | When the table is turned on normalize, the first region may not be merged even the size is 0 | Major | Normalizer | +| [HBASE-25892](https://issues.apache.org/jira/browse/HBASE-25892) | 'False' should be 'True' in auditlog of listLabels | Major | logging, security | +| [HBASE-25817](https://issues.apache.org/jira/browse/HBASE-25817) | Memory leak from thrift server hashMap | Minor | Thrift | +| [HBASE-25869](https://issues.apache.org/jira/browse/HBASE-25869) | WAL value compression | Major | Operability, wal | +| [HBASE-25827](https://issues.apache.org/jira/browse/HBASE-25827) | Per Cell TTL tags get duplicated with increments causing tags length overflow | Critical | regionserver | +| [HBASE-25897](https://issues.apache.org/jira/browse/HBASE-25897) | TestRetainAssignmentOnRestart is flaky after HBASE-25032 | Major | Region Assignment | +| [HBASE-25867](https://issues.apache.org/jira/browse/HBASE-25867) | Extra doc around ITBLL | Minor | documentation | +| [HBASE-25859](https://issues.apache.org/jira/browse/HBASE-25859) | Reference class incorrectly parses the protobuf magic marker | Minor | regionserver | +| [HBASE-25774](https://issues.apache.org/jira/browse/HBASE-25774) | ServerManager.getOnlineServer may miss some region servers when refreshing state in some procedure implementations | Critical | Replication | +| [HBASE-25837](https://issues.apache.org/jira/browse/HBASE-25837) | TestRollingRestart is flaky | Major | test | +| [HBASE-25850](https://issues.apache.org/jira/browse/HBASE-25850) | Fix spotbugs warnings on branch-2 | Major | Compaction, findbugs, mob | +| [HBASE-25825](https://issues.apache.org/jira/browse/HBASE-25825) | RSGroupBasedLoadBalancer.onConfigurationChange should chain the request to internal balancer | Major | Balancer | +| [HBASE-25792](https://issues.apache.org/jira/browse/HBASE-25792) | Filter out o.a.hadoop.thirdparty building shaded jars | Major | shading | +| [HBASE-25777](https://issues.apache.org/jira/browse/HBASE-25777) | Fix wrong initialization value in StressAssignmentManagerMonkeyFactory | Major | integration tests | +| [HBASE-25735](https://issues.apache.org/jira/browse/HBASE-25735) | Add target Region to connection exceptions | Major | rpc | +| 
[HBASE-25717](https://issues.apache.org/jira/browse/HBASE-25717) | RegionServer aborted due to ClassCastException | Major | . | +| [HBASE-25743](https://issues.apache.org/jira/browse/HBASE-25743) | Retry REQUESTTIMEOUT KeeperExceptions from ZK | Major | Zookeeper | +| [HBASE-25726](https://issues.apache.org/jira/browse/HBASE-25726) | MoveCostFunction is not included in the list of cost functions for StochasticLoadBalancer | Major | Balancer | +| [HBASE-25692](https://issues.apache.org/jira/browse/HBASE-25692) | Failure to instantiate WALCellCodec leaks socket in replication | Major | Replication | +| [HBASE-25568](https://issues.apache.org/jira/browse/HBASE-25568) | Upgrade Thrift jar to fix CVE-2020-13949 | Critical | Thrift | +| [HBASE-25693](https://issues.apache.org/jira/browse/HBASE-25693) | NPE getting metrics from standby masters (MetricsMasterWrapperImpl.getMergePlanCount) | Major | master | +| [HBASE-25685](https://issues.apache.org/jira/browse/HBASE-25685) | asyncprofiler2.0 no longer supports svg; wants html | Major | . | +| [HBASE-25594](https://issues.apache.org/jira/browse/HBASE-25594) | graceful\_stop.sh fails to unload regions when ran at localhost | Minor | . | +| [HBASE-25674](https://issues.apache.org/jira/browse/HBASE-25674) | RegionInfo.parseFrom(DataInputStream) sometimes fails to read the protobuf magic marker | Minor | Client | +| [HBASE-25673](https://issues.apache.org/jira/browse/HBASE-25673) | Wrong log regarding current active master at ZKLeaderManager#waitToBecomeLeader | Minor | . | +| [HBASE-25595](https://issues.apache.org/jira/browse/HBASE-25595) | TestLruBlockCache.testBackgroundEvictionThread is flaky | Major | . | +| [HBASE-25662](https://issues.apache.org/jira/browse/HBASE-25662) | Fix spotbugs warning in RoundRobinTableInputFormat | Major | findbugs | +| [HBASE-25657](https://issues.apache.org/jira/browse/HBASE-25657) | Fix spotbugs warnings after upgrading spotbugs to 4.x | Major | findbugs | +| [HBASE-25646](https://issues.apache.org/jira/browse/HBASE-25646) | Possible Resource Leak in CatalogJanitor | Major | master | +| [HBASE-25626](https://issues.apache.org/jira/browse/HBASE-25626) | Possible Resource Leak in HeterogeneousRegionCountCostFunction | Major | . | +| [HBASE-25644](https://issues.apache.org/jira/browse/HBASE-25644) | Scan#setSmall blindly sets ReadType as PREAD | Critical | . | +| [HBASE-25609](https://issues.apache.org/jira/browse/HBASE-25609) | There is a problem with the SPLITS\_FILE in the HBase shell statement | Minor | shell | +| [HBASE-25385](https://issues.apache.org/jira/browse/HBASE-25385) | TestCurrentHourProvider fails if the latest timezone changes are not present | Blocker | . | +| [HBASE-25596](https://issues.apache.org/jira/browse/HBASE-25596) | Fix NPE in ReplicationSourceManager as well as avoid permanently unreplicated data due to EOFException from WAL | Critical | Replication | +| [HBASE-25367](https://issues.apache.org/jira/browse/HBASE-25367) | Sort broken after Change 'State time' in UI | Major | UI | +| [HBASE-25421](https://issues.apache.org/jira/browse/HBASE-25421) | There is no limit on the column family length when creating a table | Major | Client | +| [HBASE-25371](https://issues.apache.org/jira/browse/HBASE-25371) | When openRegion fails during initial verification(before initializing and setting seq num), exception is observed during region close. | Major | Region Assignment | +| [HBASE-25611](https://issues.apache.org/jira/browse/HBASE-25611) | ExportSnapshot chmod flag uses value as decimal | Major | . 
| +| [HBASE-25586](https://issues.apache.org/jira/browse/HBASE-25586) | Fix HBASE-22492 on branch-2 (SASL GapToken) | Major | rpc | +| [HBASE-25556](https://issues.apache.org/jira/browse/HBASE-25556) | Frequent replication "Encountered a malformed edit" warnings | Minor | Operability, Replication | +| [HBASE-25575](https://issues.apache.org/jira/browse/HBASE-25575) | Should validate Puts in RowMutations | Minor | Client | +| [HBASE-25571](https://issues.apache.org/jira/browse/HBASE-25571) | Compilation error in branch-2 after HBASE-25364 | Major | . | +| [HBASE-25512](https://issues.apache.org/jira/browse/HBASE-25512) | May throw StringIndexOutOfBoundsException when construct illegal tablename error | Trivial | . | +| [HBASE-25560](https://issues.apache.org/jira/browse/HBASE-25560) | Remove unused parameter named peerId in the constructor method of CatalogReplicationSourcePeer | Major | . | +| [HBASE-25543](https://issues.apache.org/jira/browse/HBASE-25543) | When configuration "hadoop.security.authorization" is set to false, the system will still try to authorize an RPC and raise AccessDeniedException | Minor | IPC/RPC | +| [HBASE-25554](https://issues.apache.org/jira/browse/HBASE-25554) | NPE when init RegionMover | Major | . | +| [HBASE-25523](https://issues.apache.org/jira/browse/HBASE-25523) | Region normalizer chore thread is getting killed | Major | Normalizer | +| [HBASE-25533](https://issues.apache.org/jira/browse/HBASE-25533) | The metadata of the table and family should not be an empty string | Major | . | +| [HBASE-25478](https://issues.apache.org/jira/browse/HBASE-25478) | Implement retries when enabling tables in TestRegionReplicaReplicationEndpoint | Minor | . | +| [HBASE-25497](https://issues.apache.org/jira/browse/HBASE-25497) | move\_namespaces\_rsgroup should change hbase.rsgroup.name config in NamespaceDescriptor | Major | . | +| [HBASE-25356](https://issues.apache.org/jira/browse/HBASE-25356) | HBaseAdmin#getRegion() needs to filter out non-regionName and non-encodedRegionName | Major | shell | +| [HBASE-25279](https://issues.apache.org/jira/browse/HBASE-25279) | Non-daemon thread in ZKWatcher | Critical | Zookeeper | +| [HBASE-25503](https://issues.apache.org/jira/browse/HBASE-25503) | HBase code download is failing on windows with invalid path error | Major | . | +| [HBASE-24813](https://issues.apache.org/jira/browse/HBASE-24813) | ReplicationSource should clear buffer usage on ReplicationSourceManager upon termination | Major | Replication | +| [HBASE-25459](https://issues.apache.org/jira/browse/HBASE-25459) | WAL can't be cleaned in some scenes | Major | . | +| [HBASE-25434](https://issues.apache.org/jira/browse/HBASE-25434) | SlowDelete & SlowPut metric value should use updateDelete & updatePut | Major | regionserver | +| [HBASE-25441](https://issues.apache.org/jira/browse/HBASE-25441) | add security check for some APIs in RSRpcServices | Critical | . | +| [HBASE-25432](https://issues.apache.org/jira/browse/HBASE-25432) | we should add security checks for setTableStateInMeta and fixMeta | Blocker | . | +| [HBASE-25445](https://issues.apache.org/jira/browse/HBASE-25445) | Old WALs archive fails in procedure based WAL split | Critical | wal | +| [HBASE-25287](https://issues.apache.org/jira/browse/HBASE-25287) | Forgetting to unbuffer streams results in many CLOSE\_WAIT sockets when loading files | Major | . 
| +| [HBASE-25447](https://issues.apache.org/jira/browse/HBASE-25447) | remoteProc is suspended due to OOM ERROR | Major | proc-v2 | +| [HBASE-24755](https://issues.apache.org/jira/browse/HBASE-24755) | [LOG][RSGroup]Error message is confusing while adding a offline RS to rsgroup | Major | rsgroup | +| [HBASE-25463](https://issues.apache.org/jira/browse/HBASE-25463) | Fix comment error | Minor | shell | +| [HBASE-25457](https://issues.apache.org/jira/browse/HBASE-25457) | Possible race in AsyncConnectionImpl between getChoreService and close | Major | Client | +| [HBASE-25456](https://issues.apache.org/jira/browse/HBASE-25456) | setRegionStateInMeta need security check | Critical | . | +| [HBASE-25383](https://issues.apache.org/jira/browse/HBASE-25383) | HBase doesn't update and remove the peer config from hbase.replication.source.custom.walentryfilters if the config is already set on the peer. | Major | . | +| [HBASE-25404](https://issues.apache.org/jira/browse/HBASE-25404) | Procedures table Id under master web UI gets word break to single character | Minor | UI | +| [HBASE-25277](https://issues.apache.org/jira/browse/HBASE-25277) | postScannerFilterRow impacts Scan performance a lot in HBase 2.x | Critical | Coprocessors, scan | +| [HBASE-25365](https://issues.apache.org/jira/browse/HBASE-25365) | The log in move\_servers\_rsgroup is incorrect | Minor | . | +| [HBASE-25372](https://issues.apache.org/jira/browse/HBASE-25372) | Fix typo in ban-jersey section of the enforcer plugin in pom.xml | Major | build | +| [HBASE-25361](https://issues.apache.org/jira/browse/HBASE-25361) | [Flakey Tests] branch-2 TestMetaRegionLocationCache.testStandByMetaLocations | Major | flakies | +| [HBASE-25292](https://issues.apache.org/jira/browse/HBASE-25292) | Improve InetSocketAddress usage discipline | Major | Client, HFile | + + +### TESTS: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27161](https://issues.apache.org/jira/browse/HBASE-27161) | Improve TestMultiRespectsLimits | Minor | test | +| [HBASE-27051](https://issues.apache.org/jira/browse/HBASE-27051) | TestReplicationSource.testReplicationSourceInitializingMetric is flaky | Minor | test | +| [HBASE-27039](https://issues.apache.org/jira/browse/HBASE-27039) | Some methods of MasterRegion should be annotated for testing only | Minor | master | +| [HBASE-27054](https://issues.apache.org/jira/browse/HBASE-27054) | TestStochasticLoadBalancerRegionReplicaLargeCluster.testRegionReplicasOnLargeCluster is flaky | Major | test | +| [HBASE-27052](https://issues.apache.org/jira/browse/HBASE-27052) | TestAsyncTableScanner.testScanWrongColumnFamily is flaky | Major | test | +| [HBASE-27050](https://issues.apache.org/jira/browse/HBASE-27050) | Support unit test pattern matching again | Minor | test | +| [HBASE-26989](https://issues.apache.org/jira/browse/HBASE-26989) | TestStochasticLoadBalancer has some slow methods, and inconsistent set, reset, unset of configuration | Minor | Balancer, test | +| [HBASE-26689](https://issues.apache.org/jira/browse/HBASE-26689) | Backport HBASE-24443 Refactor TestCustomSaslAuthenticationProvider | Minor | test | +| [HBASE-26542](https://issues.apache.org/jira/browse/HBASE-26542) | Apply a \`package\` to test protobuf files | Minor | Protobufs, test | +| [HBASE-26349](https://issues.apache.org/jira/browse/HBASE-26349) | Improve recent change to IntegrationTestLoadCommonCrawl | Minor | integration tests, test | +| [HBASE-26335](https://issues.apache.org/jira/browse/HBASE-26335) | Minor 
improvements to IntegrationTestLoadCommonCrawl | Minor | integration tests, test | +| [HBASE-26272](https://issues.apache.org/jira/browse/HBASE-26272) | TestTableMapReduceUtil failure in branch-2 | Major | test | +| [HBASE-26185](https://issues.apache.org/jira/browse/HBASE-26185) | Fix TestMaster#testMoveRegionWhenNotInitialized with hbase.min.version.move.system.tables | Minor | . | +| [HBASE-25910](https://issues.apache.org/jira/browse/HBASE-25910) | Fix TestClusterPortAssignment.testClusterPortAssignment test and re-enable it. | Minor | flakies, test | +| [HBASE-25824](https://issues.apache.org/jira/browse/HBASE-25824) | IntegrationTestLoadCommonCrawl | Minor | integration tests | +| [HBASE-25561](https://issues.apache.org/jira/browse/HBASE-25561) | Added ignored test for async connection that runs retries just so can check how long it takes and that retrying is happening | Trivial | test | +| [HBASE-25334](https://issues.apache.org/jira/browse/HBASE-25334) | TestRSGroupsFallback.testFallback is flaky | Major | . | +| [HBASE-25370](https://issues.apache.org/jira/browse/HBASE-25370) | Fix flaky test TestClassFinder#testClassFinderDefaultsToOwnPackage | Major | test | + + +### SUB-TASKS: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-27206](https://issues.apache.org/jira/browse/HBASE-27206) | Clean up error-prone findings in hbase-common | Major | . | +| [HBASE-27234](https://issues.apache.org/jira/browse/HBASE-27234) | Clean up error-prone findings in hbase-examples | Major | . | +| [HBASE-27252](https://issues.apache.org/jira/browse/HBASE-27252) | Clean up error-prone findings in hbase-it | Major | . | +| [HBASE-26969](https://issues.apache.org/jira/browse/HBASE-26969) | Eliminate MOB renames when SFT is enabled | Major | mob | +| [HBASE-27301](https://issues.apache.org/jira/browse/HBASE-27301) | Add Delete addFamilyVersion timestamp verify | Minor | Client | +| [HBASE-27293](https://issues.apache.org/jira/browse/HBASE-27293) | Remove jenkins and personality scripts support for 1.x | Major | scripts | +| [HBASE-27201](https://issues.apache.org/jira/browse/HBASE-27201) | Clean up error-prone findings in hbase-backup | Major | . | +| [HBASE-27240](https://issues.apache.org/jira/browse/HBASE-27240) | Clean up error-prone findings in hbase-http | Major | . | +| [HBASE-27203](https://issues.apache.org/jira/browse/HBASE-27203) | Clean up error-prone findings in hbase-client | Major | . | +| [HBASE-27236](https://issues.apache.org/jira/browse/HBASE-27236) | Clean up error-prone findings in hbase-hbtop | Major | . | +| [HBASE-27210](https://issues.apache.org/jira/browse/HBASE-27210) | Clean up error-prone findings in hbase-endpoint | Major | . | +| [HBASE-27235](https://issues.apache.org/jira/browse/HBASE-27235) | Clean up error-prone findings in hbase-hadoop-compat | Major | . | +| [HBASE-27220](https://issues.apache.org/jira/browse/HBASE-27220) | Apply the spotless format change in HBASE-27208 to our code base | Major | . | +| [HBASE-27202](https://issues.apache.org/jira/browse/HBASE-27202) | Clean up error-prone findings in hbase-balancer | Major | . | +| [HBASE-27200](https://issues.apache.org/jira/browse/HBASE-27200) | Clean up error-prone findings in hbase-archetypes | Major | . | +| [HBASE-27194](https://issues.apache.org/jira/browse/HBASE-27194) | Add test coverage for SimpleRpcServer | Major | . 
| +| [HBASE-23330](https://issues.apache.org/jira/browse/HBASE-23330) | Expose cluster ID for clients using it for delegation token based auth | Major | Client, master | +| [HBASE-27166](https://issues.apache.org/jira/browse/HBASE-27166) | WAL value compression minor improvements | Minor | . | +| [HBASE-27111](https://issues.apache.org/jira/browse/HBASE-27111) | Make Netty channel bytebuf allocator configurable | Major | . | +| [HBASE-26167](https://issues.apache.org/jira/browse/HBASE-26167) | Allow users to not start zookeeper and dfs cluster when using TestingHBaseCluster | Major | test | +| [HBASE-26366](https://issues.apache.org/jira/browse/HBASE-26366) | Provide meaningful parent spans to ZK interactions | Major | tracing | +| [HBASE-26933](https://issues.apache.org/jira/browse/HBASE-26933) | Remove all ref guide stuff on branch other than master | Major | documentation | +| [HBASE-27006](https://issues.apache.org/jira/browse/HBASE-27006) | cordon off large ci worker nodes | Major | . | +| [HBASE-26855](https://issues.apache.org/jira/browse/HBASE-26855) | Delete unnecessary dependency on jaxb-runtime jar | Major | . | +| [HBASE-27024](https://issues.apache.org/jira/browse/HBASE-27024) | The User API and Developer API links are broken on hbase.apache.org | Major | website | +| [HBASE-27045](https://issues.apache.org/jira/browse/HBASE-27045) | Disable TestClusterScopeQuotaThrottle | Major | test | +| [HBASE-26986](https://issues.apache.org/jira/browse/HBASE-26986) | Trace a one-shot execution of a Master procedure | Major | . | +| [HBASE-26999](https://issues.apache.org/jira/browse/HBASE-26999) | HStore should try write WAL compaction marker before replacing compacted files in StoreEngine | Major | . | +| [HBASE-26330](https://issues.apache.org/jira/browse/HBASE-26330) | Document new provided compression codecs | Blocker | . | +| [HBASE-26995](https://issues.apache.org/jira/browse/HBASE-26995) | Remove ref guide check in pre commit and nightly for branches other than master | Major | build, scripts | +| [HBASE-26648](https://issues.apache.org/jira/browse/HBASE-26648) | Improve fidelity of RegionLocator spans | Major | tracing | +| [HBASE-26899](https://issues.apache.org/jira/browse/HBASE-26899) | Run spotless:apply | Major | . | +| [HBASE-26932](https://issues.apache.org/jira/browse/HBASE-26932) | Skip generating ref guide when running 'mvn site' on branch other than master | Major | build, pom | +| [HBASE-25058](https://issues.apache.org/jira/browse/HBASE-25058) | Export necessary modules when running under JDK11 | Major | Performance, rpc | +| [HBASE-26928](https://issues.apache.org/jira/browse/HBASE-26928) | Fix several indentation problems | Major | . | +| [HBASE-26922](https://issues.apache.org/jira/browse/HBASE-26922) | Fix LineLength warnings as much as possible if it can not be fixed by spotless | Major | . | +| [HBASE-26929](https://issues.apache.org/jira/browse/HBASE-26929) | Upgrade surefire plugin to 3.0.0-M6 | Major | pom, test | +| [HBASE-26927](https://issues.apache.org/jira/browse/HBASE-26927) | Add snapshot scanner UT with SFT and some cleanups to TestTableSnapshotScanner | Major | . | +| [HBASE-26916](https://issues.apache.org/jira/browse/HBASE-26916) | Fix missing braces warnings in DefaultVisibilityExpressionResolver | Major | . 
| +| [HBASE-26919](https://issues.apache.org/jira/browse/HBASE-26919) | Rewrite the counting rows part in TestFromClientSide4 | Major | test | +| [HBASE-26920](https://issues.apache.org/jira/browse/HBASE-26920) | Fix missing braces warnings in TestProcedureMember | Major | test | +| [HBASE-26921](https://issues.apache.org/jira/browse/HBASE-26921) | Rewrite the counting cells part in TestMultiVersions | Major | test | +| [HBASE-26545](https://issues.apache.org/jira/browse/HBASE-26545) | Implement tracing of scan | Major | tracing | +| [HBASE-26531](https://issues.apache.org/jira/browse/HBASE-26531) | Trace coprocessor exec endpoints | Major | . | +| [HBASE-25896](https://issues.apache.org/jira/browse/HBASE-25896) | Implement a Region Visualization on Master WebUI | Major | . | +| [HBASE-25895](https://issues.apache.org/jira/browse/HBASE-25895) | Implement a Cluster Metrics JSON endpoint | Major | . | +| [HBASE-26824](https://issues.apache.org/jira/browse/HBASE-26824) | TestHBaseTestingUtil.testResolvePortConflict failing after HBASE-26582 | Major | . | +| [HBASE-26582](https://issues.apache.org/jira/browse/HBASE-26582) | Prune use of Random and SecureRandom objects | Minor | . | +| [HBASE-26764](https://issues.apache.org/jira/browse/HBASE-26764) | Implement generic exception support for TraceUtil methods over Callables and Runnables | Major | . | +| [HBASE-26759](https://issues.apache.org/jira/browse/HBASE-26759) | Fix trace continuity through CallRunner | Major | . | +| [HBASE-26640](https://issues.apache.org/jira/browse/HBASE-26640) | Reimplement master local region initialization to better work with SFT | Major | master, RegionProcedureStore | +| [HBASE-26673](https://issues.apache.org/jira/browse/HBASE-26673) | Implement a shell command for change SFT implementation | Major | shell | +| [HBASE-26434](https://issues.apache.org/jira/browse/HBASE-26434) | Compact L0 files for cold regions using StripeCompactionPolicy | Major | . | +| [HBASE-26749](https://issues.apache.org/jira/browse/HBASE-26749) | Migrate HBase main pre commit job to ci-hbase | Major | . | +| [HBASE-26697](https://issues.apache.org/jira/browse/HBASE-26697) | Migrate HBase Nightly HBase-Flaky-Tests and HBase-Find-Flaky-Tests to ci-hbase | Major | jenkins | +| [HBASE-26521](https://issues.apache.org/jira/browse/HBASE-26521) | Name RPC spans as \`$package.$service/$method\` | Major | . | +| [HBASE-26747](https://issues.apache.org/jira/browse/HBASE-26747) | Use python2 instead of python in our python scripts | Major | jenkins | +| [HBASE-26472](https://issues.apache.org/jira/browse/HBASE-26472) | Adhere to semantic conventions regarding table data operations | Major | . | +| [HBASE-26473](https://issues.apache.org/jira/browse/HBASE-26473) | Introduce \`db.hbase.container\_operations\` span attribute | Major | . | +| [HBASE-26474](https://issues.apache.org/jira/browse/HBASE-26474) | Implement connection-level attributes | Major | . | +| [HBASE-26520](https://issues.apache.org/jira/browse/HBASE-26520) | Remove use of \`db.hbase.namespace\` tracing attribute | Major | . | +| [HBASE-26397](https://issues.apache.org/jira/browse/HBASE-26397) | Display the excluded datanodes on regionserver UI | Major | . | +| [HBASE-24870](https://issues.apache.org/jira/browse/HBASE-24870) | Ignore TestAsyncTableRSCrashPublish | Major | . | +| [HBASE-26304](https://issues.apache.org/jira/browse/HBASE-26304) | Reflect out-of-band locality improvements in served requests | Major | . 
| +| [HBASE-26471](https://issues.apache.org/jira/browse/HBASE-26471) | Move tracing semantic attributes to their own class | Major | . | +| [HBASE-26470](https://issues.apache.org/jira/browse/HBASE-26470) | Use openlabtesting protoc on linux arm64 in HBASE 2.x | Major | build | +| [HBASE-26327](https://issues.apache.org/jira/browse/HBASE-26327) | Replicas cohosted on a rack shouldn't keep triggering Balancer | Major | Balancer | +| [HBASE-26308](https://issues.apache.org/jira/browse/HBASE-26308) | Sum of multiplier of cost functions is not populated properly when we have a shortcut for trigger | Critical | Balancer | +| [HBASE-26430](https://issues.apache.org/jira/browse/HBASE-26430) | Increase DefaultHeapMemoryTuner log level | Minor | regionserver | +| [HBASE-26353](https://issues.apache.org/jira/browse/HBASE-26353) | Support loadable dictionaries in hbase-compression-zstd | Minor | . | +| [HBASE-26319](https://issues.apache.org/jira/browse/HBASE-26319) | Make flaky find job track more builds | Major | flakies, jenkins | +| [HBASE-26390](https://issues.apache.org/jira/browse/HBASE-26390) | Upload src tarball to nightlies for nightly jobs | Major | jenkins, scripts | +| [HBASE-26382](https://issues.apache.org/jira/browse/HBASE-26382) | Use gen\_redirect\_html for linking flaky test logs | Major | jenkins, scripts, test | +| [HBASE-26362](https://issues.apache.org/jira/browse/HBASE-26362) | Upload mvn site artifacts for nightly build to nightlies | Major | jenkins, scripts | +| [HBASE-26316](https://issues.apache.org/jira/browse/HBASE-26316) | Per-table or per-CF compression codec setting overrides | Minor | HFile, Operability | +| [HBASE-26360](https://issues.apache.org/jira/browse/HBASE-26360) | Use gen\_redirect\_html for linking test logs | Major | jenkins, scripts | +| [HBASE-26341](https://issues.apache.org/jira/browse/HBASE-26341) | Upload dashboard html for flaky find job to nightlies | Major | flakies, jenkins, scripts | +| [HBASE-24833](https://issues.apache.org/jira/browse/HBASE-24833) | Bootstrap should not delete the META table directory if it's not partial | Major | . 
| +| [HBASE-26339](https://issues.apache.org/jira/browse/HBASE-26339) | SshPublisher will skip uploading artifacts if the build is failure | Major | jenkins, scripts | +| [HBASE-26324](https://issues.apache.org/jira/browse/HBASE-26324) | Reuse compressors and decompressors in WAL CompressionContext | Minor | wal | +| [HBASE-26317](https://issues.apache.org/jira/browse/HBASE-26317) | Publish the test logs for pre commit jenkins job to nightlies | Major | jenkins, scripts | +| [HBASE-26313](https://issues.apache.org/jira/browse/HBASE-26313) | Publish the test logs for our nightly jobs to nightlies.apache.org | Major | jenkins, scripts | +| [HBASE-26318](https://issues.apache.org/jira/browse/HBASE-26318) | Publish test logs for flaky jobs to nightlies | Major | flakies, jenkins | +| [HBASE-26259](https://issues.apache.org/jira/browse/HBASE-26259) | Fallback support to pure Java compression | Major | Performance | +| [HBASE-26294](https://issues.apache.org/jira/browse/HBASE-26294) | Backport "HBASE-26181 Region server and master could use itself as ConnectionRegistry" to branch-2 | Major | master, regionserver | +| [HBASE-26293](https://issues.apache.org/jira/browse/HBASE-26293) | Use reservoir sampling when selecting bootstrap nodes | Major | master, regionserver | +| [HBASE-26277](https://issues.apache.org/jira/browse/HBASE-26277) | Revert 26240, Apply InterfaceAudience.Private to BalanceResponse$Builder | Minor | . | +| [HBASE-26240](https://issues.apache.org/jira/browse/HBASE-26240) | Set BalanceRequest$Builder to InterfaceAudience.Private | Trivial | . | +| [HBASE-26157](https://issues.apache.org/jira/browse/HBASE-26157) | Expose some IA.LimitedPrivate interface in TestingHBaseCluster | Major | API, test | +| [HBASE-26235](https://issues.apache.org/jira/browse/HBASE-26235) | We could start RegionServerTracker before becoming active master | Major | master, Zookeeper | +| [HBASE-26216](https://issues.apache.org/jira/browse/HBASE-26216) | Move HRegionServer.abort(String) to Abortable as a default method | Major | API, regionserver | +| [HBASE-26189](https://issues.apache.org/jira/browse/HBASE-26189) | Reduce log level of CompactionProgress notice to DEBUG | Minor | Compaction | +| [HBASE-26168](https://issues.apache.org/jira/browse/HBASE-26168) | Backport HBASE-25811 for fixing nightly tests with error of \`NoClassDefFoundError: io/opentelemetry/api/GlobalOpenTelemetry\` | Major | tracing | +| [HBASE-26227](https://issues.apache.org/jira/browse/HBASE-26227) | Forward port HBASE-26223 test code to branch-2.4+ | Major | test | +| [HBASE-26140](https://issues.apache.org/jira/browse/HBASE-26140) | Backport HBASE-25778 "The tracinig implementation for AsyncConnectionImpl.getHbck is incorrect" to branch-2 | Major | tracing | +| [HBASE-26139](https://issues.apache.org/jira/browse/HBASE-26139) | Backport HBASE-23762 "Add documentation on how to enable and view tracing with OpenTelemetry" to branch-2 | Major | tracing | +| [HBASE-26138](https://issues.apache.org/jira/browse/HBASE-26138) | Backport HBASE-25733 "Upgrade opentelemetry to 1.0.1" to branch-2 | Major | tracing | +| [HBASE-26137](https://issues.apache.org/jira/browse/HBASE-26137) | Backport HBASE-25732 "Change the command line argument for tracing after upgrading opentelemtry to 1.0.0" to branch-2 | Major | tracing | +| [HBASE-26180](https://issues.apache.org/jira/browse/HBASE-26180) | Introduce a initial refresh interval for RpcConnectionRegistry | Major | Client | +| [HBASE-26215](https://issues.apache.org/jira/browse/HBASE-26215) | The 
backup master status page should use ActiveMasterManager instead of MasterAddressTracker | Major | master, UI | +| [HBASE-26136](https://issues.apache.org/jira/browse/HBASE-26136) | Backport HBASE-25723 "Temporarily remove the trace support for RegionScanner.next" to branch-2 | Major | tracing | +| [HBASE-26135](https://issues.apache.org/jira/browse/HBASE-26135) | Backport HBASE-25616 "Upgrade opentelemetry to 1.0.0" to branch-2 | Major | tracing | +| [HBASE-26173](https://issues.apache.org/jira/browse/HBASE-26173) | Return only a sub set of region servers as bootstrap nodes | Major | Client | +| [HBASE-26182](https://issues.apache.org/jira/browse/HBASE-26182) | Allow disabling refresh of connection registry endpoint | Major | Client | +| [HBASE-26134](https://issues.apache.org/jira/browse/HBASE-26134) | Backport HBASE-25617 "Revisit the span names" to branch-2 | Major | . | +| [HBASE-26133](https://issues.apache.org/jira/browse/HBASE-26133) | Backport HBASE-25591 "Upgrade opentelemetry to 0.17.1" to branch-2 | Major | . | +| [HBASE-26132](https://issues.apache.org/jira/browse/HBASE-26132) | Backport HBASE-25535 "Set span kind to CLIENT in AbstractRpcClient" to branch-2 | Major | . | +| [HBASE-26131](https://issues.apache.org/jira/browse/HBASE-26131) | Backport HBASE-25484 "Add trace support for WAL sync" to branch-2 | Major | . | +| [HBASE-26172](https://issues.apache.org/jira/browse/HBASE-26172) | Deprecate MasterRegistry | Major | Client | +| [HBASE-24337](https://issues.apache.org/jira/browse/HBASE-24337) | Backport HBASE-23968 to branch-2 | Minor | . | +| [HBASE-26130](https://issues.apache.org/jira/browse/HBASE-26130) | Backport HBASE-25455 "Add trace support for HRegion read/write operation" to branch-2 | Major | . | +| [HBASE-26129](https://issues.apache.org/jira/browse/HBASE-26129) | Backport HBASE-25481 "Add host and port attribute when tracing rpc call at client side" to branch-2 | Major | . | +| [HBASE-26128](https://issues.apache.org/jira/browse/HBASE-26128) | Backport HBASE-25454 "Add trace support for connection registry" to branch-2 | Major | tracing | +| [HBASE-26150](https://issues.apache.org/jira/browse/HBASE-26150) | Let region server also carry ClientMetaService | Major | Client, meta | +| [HBASE-26127](https://issues.apache.org/jira/browse/HBASE-26127) | Backport HBASE-23898 "Add trace support for simple apis in async client" to branch-2 | Major | tracing | +| [HBASE-26126](https://issues.apache.org/jira/browse/HBASE-26126) | Backport HBASE-25424 "Find a way to config OpenTelemetry tracing without directly depending on opentelemetry-sdk" to branch-2 | Major | . | +| [HBASE-26125](https://issues.apache.org/jira/browse/HBASE-26125) | Backport HBASE-25401 "Add trace support for async call in rpc client" to branch-2 | Major | tracing | +| [HBASE-26151](https://issues.apache.org/jira/browse/HBASE-26151) | Reimplement MasterAddressTracker to also cache backup master addresses | Major | Client, Zookeeper | +| [HBASE-26098](https://issues.apache.org/jira/browse/HBASE-26098) | Support passing a customized Configuration object when creating TestingHBaseCluster | Major | API, test | +| [HBASE-26124](https://issues.apache.org/jira/browse/HBASE-26124) | Backport HBASE-25373 "Remove HTrace completely in code base and try to make use of OpenTelemetry" to branch-2 | Major | tracing | +| [HBASE-26093](https://issues.apache.org/jira/browse/HBASE-26093) | Replication is stuck due to zero length wal file in oldWALs directory [master/branch-2] | Major | . 
| +| [HBASE-24734](https://issues.apache.org/jira/browse/HBASE-24734) | RegionInfo#containsRange should support check meta table | Major | HFile, MTTR | +| [HBASE-25739](https://issues.apache.org/jira/browse/HBASE-25739) | TableSkewCostFunction need to use aggregated deviation | Major | Balancer, master | +| [HBASE-26080](https://issues.apache.org/jira/browse/HBASE-26080) | Implement a new mini cluster class for end users | Major | API, test | +| [HBASE-26050](https://issues.apache.org/jira/browse/HBASE-26050) | Remove the reflection used in FSUtils.isInSafeMode | Major | . | +| [HBASE-26041](https://issues.apache.org/jira/browse/HBASE-26041) | Replace PrintThreadInfoHelper with HBase's own ReflectionUtils.printThreadInfo() | Major | util | +| [HBASE-26019](https://issues.apache.org/jira/browse/HBASE-26019) | Remove reflections used in HBaseConfiguration.getPassword() | Major | . | +| [HBASE-25992](https://issues.apache.org/jira/browse/HBASE-25992) | Polish the ReplicationSourceWALReader code for 2.x after HBASE-25596 | Major | Replication | +| [HBASE-25976](https://issues.apache.org/jira/browse/HBASE-25976) | Implement a master based ReplicationTracker | Major | Replication | +| [HBASE-25989](https://issues.apache.org/jira/browse/HBASE-25989) | FanOutOneBlockAsyncDFSOutput using shaded protobuf in hdfs 3.3+ | Major | . | +| [HBASE-25969](https://issues.apache.org/jira/browse/HBASE-25969) | Cleanup netty-all transitive includes | Major | . | +| [HBASE-25963](https://issues.apache.org/jira/browse/HBASE-25963) | HBaseCluster should be marked as IA.Public | Major | API | +| [HBASE-25911](https://issues.apache.org/jira/browse/HBASE-25911) | Fix uses of System.currentTimeMillis (should be EnvironmentEdgeManager.currentTime) | Minor | . | +| [HBASE-25941](https://issues.apache.org/jira/browse/HBASE-25941) | TestRESTServerSSL fails because of jdk bug | Major | test | +| [HBASE-25940](https://issues.apache.org/jira/browse/HBASE-25940) | Update Compression/TestCompressionTest: LZ4, SNAPPY, LZO | Major | . 
| +| [HBASE-25718](https://issues.apache.org/jira/browse/HBASE-25718) | Backport HBASE-25705 to branch-2 | Minor | rsgroup | +| [HBASE-25926](https://issues.apache.org/jira/browse/HBASE-25926) | Cleanup MetaTableAccessor references in FavoredNodeBalancer related code | Major | Balancer, FavoredNodes, meta | +| [HBASE-25791](https://issues.apache.org/jira/browse/HBASE-25791) | UI of master-status to show a recent history of that why balancer was rejected to run | Major | Balancer, master, UI | +| [HBASE-25894](https://issues.apache.org/jira/browse/HBASE-25894) | Improve the performance for region load and region count related cost functions | Major | Balancer, Performance | +| [HBASE-25873](https://issues.apache.org/jira/browse/HBASE-25873) | Refactor and cleanup the code for CostFunction | Major | Balancer, Performance | +| [HBASE-25872](https://issues.apache.org/jira/browse/HBASE-25872) | Add documentation for LoadBalancer about synchronization | Major | Balancer, documentation | +| [HBASE-25883](https://issues.apache.org/jira/browse/HBASE-25883) | The regionFinder and rackManager fields in BaseLoadBalancer should be volatile | Major | Balancer | +| [HBASE-25852](https://issues.apache.org/jira/browse/HBASE-25852) | Move all the intialization work of LoadBalancer implementation to initialize method | Major | Balancer | +| [HBASE-25876](https://issues.apache.org/jira/browse/HBASE-25876) | Add retry if we fail to read all bytes of the protobuf magic marker | Trivial | io | +| [HBASE-25790](https://issues.apache.org/jira/browse/HBASE-25790) | NamedQueue 'BalancerRejection' for recent history of balancer skipping | Major | Balancer, master | +| [HBASE-25854](https://issues.apache.org/jira/browse/HBASE-25854) | Remove redundant AM in-memory state changes in CatalogJanitor | Major | . | +| [HBASE-25851](https://issues.apache.org/jira/browse/HBASE-25851) | Make LoadBalancer not extend Configurable interface | Major | Balancer | +| [HBASE-25847](https://issues.apache.org/jira/browse/HBASE-25847) | More DEBUG and TRACE level logging in CatalogJanitor and HbckChore | Minor | . | +| [HBASE-25834](https://issues.apache.org/jira/browse/HBASE-25834) | Remove balanceTable method from LoadBalancer interface | Major | Balancer | +| [HBASE-25838](https://issues.apache.org/jira/browse/HBASE-25838) | Use double instead of Double in StochasticLoadBalancer | Major | Balancer, Performance | +| [HBASE-25835](https://issues.apache.org/jira/browse/HBASE-25835) | Ignore duplicate split requests from regionserver reports | Major | . | +| [HBASE-25836](https://issues.apache.org/jira/browse/HBASE-25836) | RegionStates#getAssignmentsForBalancer should only care about OPEN or OPENING regions | Major | . | +| [HBASE-25840](https://issues.apache.org/jira/browse/HBASE-25840) | CatalogJanitor warns about skipping gc of regions during RIT, but does not actually skip | Minor | . 
| +| [HBASE-25819](https://issues.apache.org/jira/browse/HBASE-25819) | Fix style issues for StochasticLoadBalancer | Major | Balancer | +| [HBASE-25802](https://issues.apache.org/jira/browse/HBASE-25802) | Miscellaneous style improvements for load balancer related classes | Major | Balancer | +| [HBASE-25793](https://issues.apache.org/jira/browse/HBASE-25793) | Move BaseLoadBalancer.Cluster to a separated file | Major | Balancer | +| [HBASE-25775](https://issues.apache.org/jira/browse/HBASE-25775) | Use a special balancer to deal with maintenance mode | Major | Balancer | +| [HBASE-25687](https://issues.apache.org/jira/browse/HBASE-25687) | Backport "HBASE-25681 Add a switch for server/table queryMeter" to branch-2 and branch-1 | Major | . | +| [HBASE-25199](https://issues.apache.org/jira/browse/HBASE-25199) | Remove HStore#getStoreHomedir | Minor | . | +| [HBASE-25696](https://issues.apache.org/jira/browse/HBASE-25696) | Need to initialize SLF4JBridgeHandler in jul-to-slf4j for redirecting jul to slf4j | Major | logging | +| [HBASE-25695](https://issues.apache.org/jira/browse/HBASE-25695) | Link to the filter on hbase:meta from user tables panel on master page | Major | UI | +| [HBASE-25629](https://issues.apache.org/jira/browse/HBASE-25629) | Reimplement TestCurrentHourProvider to not depend on unstable TZs | Major | test | +| [HBASE-25671](https://issues.apache.org/jira/browse/HBASE-25671) | Backport HBASE-25608 to branch-2 | Major | . | +| [HBASE-25677](https://issues.apache.org/jira/browse/HBASE-25677) | Server+table counters on each scan #nextRaw invocation becomes a bottleneck when heavy load | Major | metrics | +| [HBASE-25667](https://issues.apache.org/jira/browse/HBASE-25667) | Remove RSGroup test addition made in parent; depends on functionality not in old branches | Major | . | +| [HBASE-25550](https://issues.apache.org/jira/browse/HBASE-25550) | More readable Competition Time | Minor | UI | +| [HBASE-24900](https://issues.apache.org/jira/browse/HBASE-24900) | Make retain assignment configurable during SCP | Major | amv2 | +| [HBASE-25509](https://issues.apache.org/jira/browse/HBASE-25509) | ChoreService.cancelChore will not call ScheduledChore.cleanup which may lead to resource leak | Major | util | +| [HBASE-25336](https://issues.apache.org/jira/browse/HBASE-25336) | Use Address instead of InetSocketAddress in RpcClient implementation | Major | Client, rpc | +| [HBASE-25353](https://issues.apache.org/jira/browse/HBASE-25353) | [Flakey Tests] branch-2 TestShutdownBackupMaster | Major | flakies | + + +### OTHER: + +| JIRA | Summary | Priority | Component | +|:---- |:---- | :--- |:---- | +| [HBASE-13126](https://issues.apache.org/jira/browse/HBASE-13126) | Provide alternate mini cluster classes other than HBTU for downstream users to write unit tests | Critical | API, test | +| [HBASE-26245](https://issues.apache.org/jira/browse/HBASE-26245) | Store region server list in master local region | Major | master, Zookeeper | +| [HBASE-25934](https://issues.apache.org/jira/browse/HBASE-25934) | Add username for RegionScannerHolder | Minor | . 
| +| [HBASE-25826](https://issues.apache.org/jira/browse/HBASE-25826) | Revisit the synchronization of balancer implementation | Major | Balancer | +| [HBASE-26882](https://issues.apache.org/jira/browse/HBASE-26882) | Backport "HBASE-26810 Add dynamic configuration support for system coprocessors" to branch-2 | Major | Coprocessors, master, regionserver | +| [HBASE-27294](https://issues.apache.org/jira/browse/HBASE-27294) | Add new hadoop releases in our hadoop checks | Major | scripts | +| [HBASE-27299](https://issues.apache.org/jira/browse/HBASE-27299) | Bump minimum hadoop 2 version to 2.10.2 | Major | hadoop2, security | +| [HBASE-27221](https://issues.apache.org/jira/browse/HBASE-27221) | Bump spotless version to 2.24.1 | Major | build, pom | +| [HBASE-27281](https://issues.apache.org/jira/browse/HBASE-27281) | Add default implementation for Connection$getClusterId | Critical | Client | +| [HBASE-27248](https://issues.apache.org/jira/browse/HBASE-27248) | WALPrettyPrinter add print timestamp | Minor | tooling, wal | +| [HBASE-27182](https://issues.apache.org/jira/browse/HBASE-27182) | Rework tracing configuration | Major | scripts | +| [HBASE-27148](https://issues.apache.org/jira/browse/HBASE-27148) | Move minimum hadoop 3 support version to 3.2.3 | Major | dependencies, hadoop3, security | +| [HBASE-27175](https://issues.apache.org/jira/browse/HBASE-27175) | Failure to cleanup WAL split dir log should be at INFO level | Minor | . | +| [HBASE-27172](https://issues.apache.org/jira/browse/HBASE-27172) | Upgrade OpenTelemetry dependency to 1.15.0 | Major | build | +| [HBASE-27141](https://issues.apache.org/jira/browse/HBASE-27141) | Upgrade hbase-thirdparty dependency to 4.1.1 | Critical | dependencies, security, thirdparty | +| [HBASE-27108](https://issues.apache.org/jira/browse/HBASE-27108) | Revert HBASE-25709 | Blocker | . | +| [HBASE-27102](https://issues.apache.org/jira/browse/HBASE-27102) | Vacate the .idea folder in order to simplify spotless configuration | Major | build | +| [HBASE-27023](https://issues.apache.org/jira/browse/HBASE-27023) | Add protobuf to NOTICE file | Major | . | +| [HBASE-27082](https://issues.apache.org/jira/browse/HBASE-27082) | Change the return value of RSGroupInfo.getServers from SortedSet to Set to keep compatibility | Blocker | rsgroup | +| [HBASE-26912](https://issues.apache.org/jira/browse/HBASE-26912) | Bump checkstyle from 8.28 to 8.29 | Minor | test | +| [HBASE-26523](https://issues.apache.org/jira/browse/HBASE-26523) | Upgrade hbase-thirdparty dependency to 4.0.1 | Blocker | thirdparty | +| [HBASE-26892](https://issues.apache.org/jira/browse/HBASE-26892) | Add spotless:check in our pre commit general check | Major | jenkins | +| [HBASE-26906](https://issues.apache.org/jira/browse/HBASE-26906) | Remove duplicate dependency declaration | Major | build | +| [HBASE-26903](https://issues.apache.org/jira/browse/HBASE-26903) | Bump httpclient from 4.5.3 to 4.5.13 | Minor | . | +| [HBASE-26902](https://issues.apache.org/jira/browse/HBASE-26902) | Bump bcprov-jdk15on from 1.60 to 1.67 | Minor | . 
| +| [HBASE-26834](https://issues.apache.org/jira/browse/HBASE-26834) | Adapt ConnectionRule for both sync and async connections | Major | test | +| [HBASE-26861](https://issues.apache.org/jira/browse/HBASE-26861) | Fix flaky TestSnapshotFromMaster.testSnapshotHFileArchiving | Major | snapshots, test | +| [HBASE-26802](https://issues.apache.org/jira/browse/HBASE-26802) | Backport the log4j2 changes to branch-2 | Blocker | logging | +| [HBASE-26819](https://issues.apache.org/jira/browse/HBASE-26819) | Minor code cleanup in and around RpcScheduler | Minor | IPC/RPC | +| [HBASE-26817](https://issues.apache.org/jira/browse/HBASE-26817) | Mark RpcExecutor as IA.LimitedPrivate COPROC and PHOENIX | Major | compatibility | +| [HBASE-26782](https://issues.apache.org/jira/browse/HBASE-26782) | Minor code cleanup in and around RpcExecutor | Minor | IPC/RPC | +| [HBASE-26760](https://issues.apache.org/jira/browse/HBASE-26760) | LICENSE handling should not allow non-aggregated "apache-2.0" | Minor | community | +| [HBASE-26691](https://issues.apache.org/jira/browse/HBASE-26691) | Replacing log4j with reload4j for branch-2.x | Critical | logging | +| [HBASE-26788](https://issues.apache.org/jira/browse/HBASE-26788) | Disable Checks API callback from test results in PRs | Major | build | +| [HBASE-26622](https://issues.apache.org/jira/browse/HBASE-26622) | Update to error-prone 2.10 | Major | . | +| [HBASE-26663](https://issues.apache.org/jira/browse/HBASE-26663) | Upgrade Maven Enforcer Plugin | Major | build | +| [HBASE-25918](https://issues.apache.org/jira/browse/HBASE-25918) | Upgrade hbase-thirdparty dependency to 3.5.1 | Critical | dependencies | +| [HBASE-26719](https://issues.apache.org/jira/browse/HBASE-26719) | Remove 'patch' file added as part of commit for HBASE-25973 | Minor | . 
| +| [HBASE-26614](https://issues.apache.org/jira/browse/HBASE-26614) | Refactor code related to "dump"ing ZK nodes | Major | Zookeeper | +| [HBASE-26551](https://issues.apache.org/jira/browse/HBASE-26551) | Add FastPath feature to HBase RWQueueRpcExecutor | Major | rpc, Scheduler | +| [HBASE-26616](https://issues.apache.org/jira/browse/HBASE-26616) | Refactor code related to ZooKeeper authentication | Major | Zookeeper | +| [HBASE-26631](https://issues.apache.org/jira/browse/HBASE-26631) | Upgrade junit to 4.13.2 | Major | security, test | +| [HBASE-26564](https://issues.apache.org/jira/browse/HBASE-26564) | Retire the method visitLogEntryBeforeWrite without RegionInfo in WALActionListner | Minor | wal | +| [HBASE-26566](https://issues.apache.org/jira/browse/HBASE-26566) | Optimize encodeNumeric in OrderedBytes | Major | Performance | +| [HBASE-26580](https://issues.apache.org/jira/browse/HBASE-26580) | The message of StoreTooBusy is confused | Trivial | logging, regionserver | +| [HBASE-26549](https://issues.apache.org/jira/browse/HBASE-26549) | hbaseprotoc plugin should initialize maven | Major | jenkins | +| [HBASE-26490](https://issues.apache.org/jira/browse/HBASE-26490) | Add builder for class ReplicationLoadSink | Minor | Client | +| [HBASE-26444](https://issues.apache.org/jira/browse/HBASE-26444) | BucketCacheWriter should log only the BucketAllocatorException message, not the full stack trace | Major | logging, Operability | +| [HBASE-26443](https://issues.apache.org/jira/browse/HBASE-26443) | Some BaseLoadBalancer log lines should be at DEBUG level | Major | logging, Operability | +| [HBASE-26369](https://issues.apache.org/jira/browse/HBASE-26369) | Fix checkstyle issues for HBase-common: KeyValue and ByteBufferUtils | Trivial | . | +| [HBASE-26368](https://issues.apache.org/jira/browse/HBASE-26368) | Fix checkstyle issues for HRegionserver | Trivial | . | +| [HBASE-26329](https://issues.apache.org/jira/browse/HBASE-26329) | Upgrade commons-io to 2.11.0 | Major | dependencies | +| [HBASE-26186](https://issues.apache.org/jira/browse/HBASE-26186) | jenkins script for caching artifacts should verify cached file before relying on it | Major | build, integration tests | +| [HBASE-26288](https://issues.apache.org/jira/browse/HBASE-26288) | Revisit the usage of MetaTableLocator when HRegionServer.TEST\_SKIP\_REPORTING\_TRANSITION is true | Major | meta, regionserver, test, Zookeeper | +| [HBASE-26285](https://issues.apache.org/jira/browse/HBASE-26285) | Remove MetaTableLocator usages in non-migration code | Major | meta, Zookeeper | +| [HBASE-25853](https://issues.apache.org/jira/browse/HBASE-25853) | Backport HBASE-22120 (Replace HTrace with OpenTelemetry) to branch-2 | Major | tracing | +| [HBASE-26152](https://issues.apache.org/jira/browse/HBASE-26152) | Exclude javax.servlet:servlet-api in hbase-shaded-testing-util | Major | . | +| [HBASE-25521](https://issues.apache.org/jira/browse/HBASE-25521) | Change ChoreService and ScheduledChore to IA.Private | Major | util | +| [HBASE-26015](https://issues.apache.org/jira/browse/HBASE-26015) | Should implement getRegionServers(boolean) method in AsyncAdmin | Major | Admin, Client | +| [HBASE-25920](https://issues.apache.org/jira/browse/HBASE-25920) | Support Hadoop 3.3.1 | Major | . 
| +| [HBASE-25948](https://issues.apache.org/jira/browse/HBASE-25948) | Remove deprecated ZK command 'rmr' in hbase-cleanup.sh | Major | scripts | +| [HBASE-25884](https://issues.apache.org/jira/browse/HBASE-25884) | NPE while getting Balancer decisions | Major | . | +| [HBASE-25843](https://issues.apache.org/jira/browse/HBASE-25843) | Move master http-related code into o.a.h.h.master.http | Minor | master | +| [HBASE-25842](https://issues.apache.org/jira/browse/HBASE-25842) | Move regionserver http-related code into o.a.h.h.regionserver.http | Minor | regionserver | +| [HBASE-25779](https://issues.apache.org/jira/browse/HBASE-25779) | HRegionServer#compactSplitThread should be private | Trivial | regionserver | +| [HBASE-25755](https://issues.apache.org/jira/browse/HBASE-25755) | Exclude tomcat-embed-core from libthrift | Critical | dependencies, Thrift | +| [HBASE-25750](https://issues.apache.org/jira/browse/HBASE-25750) | Upgrade RpcControllerFactory and HBaseRpcController from Private to LimitedPrivate(COPROC,PHOENIX) | Major | Coprocessors, phoenix, rpc | +| [HBASE-25604](https://issues.apache.org/jira/browse/HBASE-25604) | Upgrade spotbugs to 4.x | Major | build, findbugs | +| [HBASE-25620](https://issues.apache.org/jira/browse/HBASE-25620) | Increase timeout value for pre commit | Major | build, test | +| [HBASE-25615](https://issues.apache.org/jira/browse/HBASE-25615) | Upgrade java version in pre commit docker file | Major | build | +| [HBASE-25083](https://issues.apache.org/jira/browse/HBASE-25083) | make sure the next hbase 1.y release has Hadoop 2.10 as a minimum version | Major | documentation, hadoop2 | +| [HBASE-25474](https://issues.apache.org/jira/browse/HBASE-25474) | Update HBase version to 2.5.0 in branch-2 pom.xml | Major | . | +| [HBASE-25333](https://issues.apache.org/jira/browse/HBASE-25333) | Add maven enforcer rule to ban VisibleForTesting imports | Major | build, pom | +| [HBASE-25451](https://issues.apache.org/jira/browse/HBASE-25451) | Upgrade commons-io to 2.8.0 | Major | dependencies | +| [HBASE-25452](https://issues.apache.org/jira/browse/HBASE-25452) | Use MatcherAssert.assertThat instead of org.junit.Assert.assertThat | Major | test | +| [HBASE-25400](https://issues.apache.org/jira/browse/HBASE-25400) | [Flakey Tests] branch-2 TestRegionMoveAndAbandon | Major | . | +| [HBASE-25389](https://issues.apache.org/jira/browse/HBASE-25389) | [Flakey Tests] branch-2 TestMetaShutdownHandler | Major | flakies | -# Be careful doing manual edits in this file. Do not change format -# of release header or remove the below marker. This file is generated. -# DO NOT REMOVE THIS MARKER; FOR INTERPOLATING CHANGES!--> ## Release 2.2.0 - Unreleased (as of 2019-06-11) diff --git a/LICENSE.txt b/LICENSE.txt index a49fc5ef2ce0..197ace3b792e 100755 --- a/LICENSE.txt +++ b/LICENSE.txt @@ -252,7 +252,7 @@ under the terms of the MIT license. THE SOFTWARE. ---- -This project incorporates portions of the 'Protocol Buffers' project avaialble +This project incorporates portions of the 'Protocol Buffers' project available under a '3-clause BSD' license. Copyright 2008, Google Inc. @@ -612,3 +612,64 @@ available under the Creative Commons By Attribution 3.0 License. this trademark restriction does not form part of this License. Creative Commons may be contacted at https://creativecommons.org/. + +---- +This project incorporates portions of the 'Ruby' project available +under a '2-clause BSD' license. + + Copyright (C) 1993-2013 Yukihiro Matsumoto. All rights reserved. 
+ + Redistribution and use in source and binary forms, with or without + modification, are permitted provided that the following conditions + are met: + 1. Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND + ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE + IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE + ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE + FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT + LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY + OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF + SUCH DAMAGE. + +---- +This project bundles a copy of the Vega minified javascript library version +5.24.0, the Vega-Lite minified javascript library version 5.6.1, and the +Vega-Embed minified javascript library version 6.21.3. All three are +available under the following '3-clause BSD' license. + +Copyright (c) 2015-2023, University of Washington Interactive Data Lab +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + +1. Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. + +2. Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimer in the documentation + and/or other materials provided with the distribution. + +3. Neither the name of the copyright holder nor the names of its contributors + may be used to endorse or promote products derived from this software + without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE +FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR +SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, +OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/NOTICE.txt b/NOTICE.txt index 8c97c343031c..41ed32c1c691 100755 --- a/NOTICE.txt +++ b/NOTICE.txt @@ -1,5 +1,5 @@ Apache HBase -Copyright 2007-2020 The Apache Software Foundation +Copyright 2007-2022 The Apache Software Foundation This product includes software developed at The Apache Software Foundation (http://www.apache.org/). 
diff --git a/README.md b/README.md new file mode 100644 index 000000000000..058472e1c070 --- /dev/null +++ b/README.md @@ -0,0 +1,56 @@ + + +![hbase-logo](https://raw.githubusercontent.com/apache/hbase/master/src/site/resources/images/hbase_logo_with_orca_large.png) + +[Apache HBase](https://hbase.apache.org) is an open-source, distributed, versioned, column-oriented store modeled after Google's [Bigtable](https://research.google.com/archive/bigtable.html): A Distributed Storage System for Structured Data by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of [Apache Hadoop](https://hadoop.apache.org/). + +# Getting Started +To get started using HBase, the full documentation for this release can be found under the doc/ directory that accompanies this README. Using a browser, open the docs/index.html to view the project home page (or browse https://hbase.apache.org). The hbase '[book](https://hbase.apache.org/book.html)' has a 'quick start' section and is where you should begin your exploration of the hbase project. + +The latest HBase can be downloaded from the [download page](https://hbase.apache.org/downloads.html). + +We use mailing lists for announcements and discussion. The mailing lists and archives are listed [here](http://hbase.apache.org/mail-lists.html). + +We use the #hbase channel on the official [ASF Slack Workspace](https://the-asf.slack.com/) for real time questions and discussions. Please mail dev@hbase.apache.org to request an invite. + +# How to Contribute +The source code can be found at https://hbase.apache.org/source-repository.html + +The HBase issue tracker is at https://hbase.apache.org/issue-tracking.html + +Note that the public registration for https://issues.apache.org/ has been disabled due to spam. If you want to contribute to HBase, please send an email to [private@hbase.apache.org](mailto:private@hbase.apache.org) in the following format so the HBase PMC members can acquire a jira account for you: + +``` +Subject: Request for Jira account + +Contents of the mail (should be in English): +Preferred Jira Id: [a-z0-9]+ +Full Name: +E-Mail Address: + +Reason: Jira Id you wish to contribute to, or details around the bug/feature you wish to report or work on. +``` + +> **_NOTE:_** we need to process the requests manually, so it may take some time, for example up to a week, for us to respond to your request. + +# About +Apache HBase is made available under the [Apache License, version 2.0](https://hbase.apache.org/license.html). + +The HBase distribution includes cryptographic software. See the export control notice [here](https://hbase.apache.org/export_control.html). diff --git a/README.txt b/README.txt deleted file mode 100755 index 4ebb50467228..000000000000 --- a/README.txt +++ /dev/null @@ -1,34 +0,0 @@ -Apache HBase [1] is an open-source, distributed, versioned, column-oriented -store modeled after Google' Bigtable: A Distributed Storage System for -Structured Data by Chang et al.[2] Just as Bigtable leverages the distributed -data storage provided by the Google File System, HBase provides Bigtable-like -capabilities on top of Apache Hadoop [3]. - -To get started using HBase, the full documentation for this release can be -found under the doc/ directory that accompanies this README. Using a browser, -open the docs/index.html to view the project home page (or browse to [1]). 
-The hbase 'book' at http://hbase.apache.org/book.html has a 'quick start' -section and is where you should being your exploration of the hbase project. - -The latest HBase can be downloaded from an Apache Mirror [4]. - -The source code can be found at [5] - -The HBase issue tracker is at [6] - -Apache HBase is made available under the Apache License, version 2.0 [7] - -The HBase mailing lists and archives are listed here [8]. - -The HBase distribution includes cryptographic software. See the export control -notice here [9]. - -1. http://hbase.apache.org -2. http://research.google.com/archive/bigtable.html -3. http://hadoop.apache.org -4. http://www.apache.org/dyn/closer.cgi/hbase/ -5. https://hbase.apache.org/source-repository.html -6. https://hbase.apache.org/issue-tracking.html -7. http://hbase.apache.org/license.html -8. http://hbase.apache.org/mail-lists.html -9. https://hbase.apache.org/export_control.html diff --git a/RELEASENOTES.md b/RELEASENOTES.md index 527e543dec41..7238adfb8251 100644 --- a/RELEASENOTES.md +++ b/RELEASENOTES.md @@ -1,25 +1,1601 @@ # RELEASENOTES + +# HBASE 2.5.10 Release Notes + +These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements. + + +--- + +* [HBASE-28718](https://issues.apache.org/jira/browse/HBASE-28718) | *Major* | **Should support different license name for 'Apache License, Version 2.0'** + +Also accept "Apache-2.0" and "Apache Software License - Version 2.0" when aggregating license in resource bundle module. + + + +# HBASE 2.5.9 Release Notes + +These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements. + + +--- + +* [HBASE-28699](https://issues.apache.org/jira/browse/HBASE-28699) | *Major* | **Bump jdk and maven versions in pre commit and nighly dockerfile** + +maven 3.8.6 -\> 3.9.8 +temurin openjdk8 8u352-b08 -\> 8u412-b08 +temurin openjdk11 11.0.17\_8 -\> 11.0.23\_9 +temurin openjdk17 17.0.10\_7 -\> 17.0.11\_9 + + +--- + +* [HBASE-28679](https://issues.apache.org/jira/browse/HBASE-28679) | *Major* | **Upgrade yetus to a newer version** + +Upgrade yetus to 0.15.0. + +Some notable differences: +Whitespace related checks are renamed to blanks. +Use xmllint instead of jrunscript for validating xml files. +For github there is an extra step to write commit status back to github but for HBase it does not work due to insufficient permission. + + +--- + +* [HBASE-28616](https://issues.apache.org/jira/browse/HBASE-28616) | *Major* | **Remove/Deprecated the rs.\* related configuration in TableOutputFormat** + +Mark these two fileds in TableOutputFormat as deprecated as they do not take effect any more. + +REGION\_SERVER\_CLASS +REGION\_SERVER\_IMPL + +Mark these two methods in TableMapReduceUtil as deprecated as the serverClass and serverImpl parameters do not take effect any more. + +void initTableReducerJob(String table, Class\ reducer, Job job, Class partitioner, String quorumAddress, String serverClass, String serverImpl) throws IOException +void initTableReducerJob(String table, Class\ reducer, Job job, Class partitioner, String quorumAddress, String serverClass, String serverImpl, boolean addDependencyJars) throws IOException + + +--- + +* [HBASE-25972](https://issues.apache.org/jira/browse/HBASE-25972) | *Major* | **Dual File Compaction** + +The default compactor in HBase compacts HFiles into one file. 
This change introduces a new store file writer, DualFileWriter, which writes the cells retained by a compaction into two files. One of these files includes the live cells and is called a live-version file. The other file includes the rest of the cells, that is, the historical versions, and is called a historical-version file. DualFileWriter works with the default compactor. The historical files are not read by scans that only request the latest row versions, which avoids scanning unnecessary cell versions in compacted files and is expected to improve the performance of such scans. + + +--- + +* [HBASE-28552](https://issues.apache.org/jira/browse/HBASE-28552) | *Major* | **Bump up bouncycastle dependency from 1.76 to 1.78** + +Bump bouncycastle dependency from 1.76 to 1.78 to address several CVEs: + +CVE-2024-29857 +CVE-2024-30171 +CVE-2024-30172 +CVE-2024-301XX (full CVE code not available yet) + + +--- + +* [HBASE-28517](https://issues.apache.org/jira/browse/HBASE-28517) | *Major* | **Make properties dynamically configured** + +Make the following properties dynamically configurable: +\* hbase.rs.evictblocksonclose +\* hbase.rs.cacheblocksonwrite +\* hbase.block.data.cacheonread + + +--- + +* [HBASE-28457](https://issues.apache.org/jira/browse/HBASE-28457) | *Major* | **Introduce a version field in file based tracker record** + +Introduce a 'version' field in the file based tracker record, so that when downgrading we will know that we are reading a newer version of the file tracker file and fail with an explicit message instead of failing silently and causing possible data loss. + + +--- + +* [HBASE-28444](https://issues.apache.org/jira/browse/HBASE-28444) | *Blocker* | **Bump org.apache.zookeeper:zookeeper from 3.8.3 to 3.8.4** + +Upgrade zookeeper to 3.8.4 to address CVE-2024-23944. + + +--- + +* [HBASE-28260](https://issues.apache.org/jira/browse/HBASE-28260) | *Major* | **Possible data loss in WAL after RegionServer crash** + +Adds a new flag hbase.regionserver.wal.avoid-local-writes. When true (default false), we will avoid writing a block replica to the local datanode for WAL writes. This will improve MTTR and redundancy, but may come with a performance impact for WAL writes. It is recommended to enable it, but monitor performance while doing so if that is a concern for you. + + + +# HBASE 2.5.8 Release Notes + +These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements. + + +--- + +* [HBASE-28204](https://issues.apache.org/jira/browse/HBASE-28204) | *Major* | **Region Canary can take lot more time If any region (except the first region) starts with delete markers** + +Canary used a Scan for the first region of a table and a Get for the rest of the regions, and a RAW Scan was only enabled for the first region of any table. If a region has a high number of deleted rows for the first row of the key-space, the Get can take a very long time to finish. + +With this change, the region canary uses a scan to validate that every region is accessible, and also enables RAW Scan if it is enabled by the user. + + +--- + +* [HBASE-28319](https://issues.apache.org/jira/browse/HBASE-28319) | *Major* | **Expose DelegatingRpcScheduler as IA.LimitedPrivate** + +hbase-server now exposes a DelegatingRpcScheduler. 
Users who have been using hbase.region.server.rpc.scheduler.factory.class to override the default behavior of the built-in schedulers may find it beneficial to have their implementation extend this new class. This will insulate you from breaking interface changes down the line. + + +--- + +* [HBASE-28306](https://issues.apache.org/jira/browse/HBASE-28306) | *Major* | **Add property to customize Version information** + +Added a new build property -Dversioninfo.version which can be used to influence the generated Version.java class in custom build scenarios. The version specified will show up in the HMaster UI and also have implications on various version-related checks. This is an advanced usage property and it's recommended not to stray too far from the default format of major.minor.patch-suffix. + + + +# HBASE 2.5.7 Release Notes + +These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements. + + +--- + +* [HBASE-25549](https://issues.apache.org/jira/browse/HBASE-25549) | *Major* | **Provide a switch that allows avoiding reopening all regions when modifying a table to prevent RIT storms.** + +New APIs are added to Admin, AsyncAdmin, and hbase shell to allow modifying a table without reopening all regions. Care should be taken when using this API, as regions will be in an inconsistent state until they are all reopened. Whether this matters depends on the change, and some changes are disallowed (such as enabling region replication or adding/removing a column family). + + +--- + +* [HBASE-28168](https://issues.apache.org/jira/browse/HBASE-28168) | *Minor* | **Add option in RegionMover.java to isolate one or more regions on the RegionSever** + +This adds a new "isolate\_regions" operation to RegionMover, which allows operators to pass a list of region encoded ids to be "isolated" in the passed RegionServer. +Regions currently deployed in the RegionServer that are not in the passed list of regions would be moved to other RegionServers. Regions in the passed list that are currently on other RegionServers would be moved to the passed RegionServer. + +Please refer to the command help for further information. + + + +# HBASE 2.5.6 Release Notes + +These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements. + + +--- + +* [HBASE-28068](https://issues.apache.org/jira/browse/HBASE-28068) | *Minor* | **Add hbase.normalizer.merge.merge\_request\_max\_number\_of\_regions property to limit max number of regions in a merge request for merge normalization** + +Added a new property "hbase.normalizer.merge.merge\_request\_max\_number\_of\_regions" to limit the max number of regions to be processed for a merge request in a single merge normalization. Defaults to 100. + + +--- + +* [HBASE-27956](https://issues.apache.org/jira/browse/HBASE-27956) | *Major* | **Support wall clock profiling in ProfilerServlet** + +You can now do wall clock profiling with async-profiler by specifying the ?event=wall query param on the profiler servlet (/prof). + + +--- + +* [HBASE-27888](https://issues.apache.org/jira/browse/HBASE-27888) | *Minor* | **Record readBlock message in log when it takes too long time** + +Adds a configuration parameter which controls whether slow block reads are recorded in the logs: +\<property\> + \<name\>hbase.fs.reader.warn.time.ms\</name\> + \<value\>-1\</value\> +\</property\> +If reading a block takes longer (in milliseconds) than this threshold, a warning will be logged. The default value is -1, which means the slow read block warning is never logged (a programmatic sketch follows below). 
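As a usage illustration for the HBASE-27888 entry above, the same threshold can also be supplied programmatically wherever a Configuration is assembled (for example in an embedded or test setup). This is a hedged sketch only: the property is normally set in hbase-site.xml as shown in the note, and the 1000 ms threshold is an arbitrary illustrative value.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class SlowBlockReadWarningExample {
  public static void main(String[] args) {
    // Start from the usual hbase-default.xml / hbase-site.xml stack.
    Configuration conf = HBaseConfiguration.create();
    // -1 (the default) disables the warning; a positive value is the
    // slow-read threshold in milliseconds (1000 ms is just an example).
    conf.setLong("hbase.fs.reader.warn.time.ms", 1000L);
    System.out.println("threshold = " + conf.get("hbase.fs.reader.warn.time.ms"));
  }
}
```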
+ + + +# HBASE 2.5.5 Release Notes + +These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements. + + +--- + +* [HBASE-27838](https://issues.apache.org/jira/browse/HBASE-27838) | *Minor* | **Update zstd-jni from version 1.5.4-2 -\> 1.5.5-2** + +Bump zstd-jni from 1.5.4-2 to 1.5.5-2, which fixed a critical issue on s390x. + + +--- + +* [HBASE-27799](https://issues.apache.org/jira/browse/HBASE-27799) | *Major* | **RpcThrottlingException wait interval message is misleading between 0-1s** + +The RpcThrottleException now includes millis in the message + + +--- + +* [HBASE-27762](https://issues.apache.org/jira/browse/HBASE-27762) | *Major* | **Include EventType and ProcedureV2 pid in logging via MDC** + + +Log the `o.a.h.hbase.executor.EventType` and ProcedureV2 pid in log messages via MDC. PatternLayouts on master and branch-2 have been updated to make use of the MDC variables. Note that due to LOG4J2-3660, log lines for which the MDC is empty will have extraneous characters. To opt-in on branch-2.5 or branch-2.4, make an appropriate change to `conf/log4j2.properties`. + + +--- + +* [HBASE-27808](https://issues.apache.org/jira/browse/HBASE-27808) | *Major* | **Change flatten mode for oss in our pom file** + +Changed the flatten mode from default to oss. It will include these extra section in the published pom files: + +name, description, url, developers, scm, inceptionYear, organization, mailingLists, issueManagement, distributionManagement. + + + +# HBASE 2.5.4 Release Notes + +These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements. + + +--- + +* [HBASE-27748](https://issues.apache.org/jira/browse/HBASE-27748) | *Major* | **Bump jettison from 1.5.2 to 1.5.4** + +Bump jettison from 1.5.2 to 1.5.4 for CVE-2023-1436. + + +--- + +* [HBASE-27741](https://issues.apache.org/jira/browse/HBASE-27741) | *Minor* | **Fall back to protoc osx-x86\_64 on Apple Silicon** + + +This change introduces and automatically applies a new profile for osx-aarch_64 hosts named `apple-silicon-workaround`. This profile overrides the property `os.detected.classifier` with the value `osx-x86_64`. The intention is that this change will permit the build to proceed with the x86 version of `protoc`, making use of the Rosetta instruction translation service built into the OS. If you'd like to provide and make use of your own aarch_64 `protoc`, you can disable this profile on the command line by adding `-P'!apple-silicon-workaround'`, or through configuration in your `settings.xml`. + + +--- + +* [HBASE-27651](https://issues.apache.org/jira/browse/HBASE-27651) | *Minor* | **hbase-daemon.sh foreground\_start should propagate SIGHUP and SIGTERM** + + +Introduce separate `trap`s for SIGHUP vs. the rest. Treat `SIGINT`, `SIGKILL`, and `EXIT` identically, as before. Use the signal name without `SIG` prefix for increased portability, as per the POSIX man page for `trap`. + +`SIGTERM` handler will now honor `HBASE_STOP_TIMEOUT` as described in the file header. + + +--- + +* [HBASE-27250](https://issues.apache.org/jira/browse/HBASE-27250) | *Minor* | **MasterRpcService#setRegionStateInMeta does not support replica region encodedNames or region names** + +MasterRpcServices#setRegionStateInMeta can now work with both primary and timeline-consistent replica regions. 
+ + + +# HBASE 2.5.3 Release Notes + +These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements. + + +--- + +* [HBASE-27506](https://issues.apache.org/jira/browse/HBASE-27506) | *Minor* | **Optionally disable sorting directories by size in CleanerChore** + +Added \`hbase.cleaner.directory.sorting\` configuration to enable the CleanerChore to sort the subdirectories by consumed space and start the cleaning with the largest subdirectory. Enabled by default. + + +--- + +* [HBASE-27494](https://issues.apache.org/jira/browse/HBASE-27494) | *Minor* | **Client meta cache clear by exception metrics are missing some cases** + +Patch available at https://github.com/apache/hbase/pull/4902 + + +--- + +* [HBASE-27490](https://issues.apache.org/jira/browse/HBASE-27490) | *Major* | **Locating regions for all actions of batch requests can exceed operation timeout** + +The first step of submitting a multi request is to resolve all region locations for all actions in the request. If meta is slow, previously it was possible to exceed the configured operation timeout in this phase. Now, the operation timeout will be checked before each region location lookup. Once exceeded, the multi request will be failed, but the region locations that had been looked up should remain in the cache (making future requests more likely to succeed). + + +--- + +* [HBASE-27513](https://issues.apache.org/jira/browse/HBASE-27513) | *Major* | **Modify README.txt to mention how to contribue** + +Remove README.txt and replace it with README.md. +Add a 'How to Contribute' section to tell contributors how to acquire a jira account. + + +--- + +* [HBASE-27498](https://issues.apache.org/jira/browse/HBASE-27498) | *Major* | **Observed lot of threads blocked in ConnectionImplementation.getKeepAliveMasterService** + +Added hbase.client.master.state.cache.timeout.sec for the sync connection implementation (ConnectionImplementation) so that the master's running state is cached for the configured time instead of being refreshed on every RPC call. + + +--- + +* [HBASE-27233](https://issues.apache.org/jira/browse/HBASE-27233) | *Major* | **Read blocks into off-heap if caching is disabled for read** + +Using Scan.setCacheBlocks(false) with the on-heap LRUBlockCache will now result in significantly fewer heap allocations for those scans if hbase.server.allocator.pool.enabled is enabled. Previously all allocations went on-heap if LRUBlockCache was used; now they go to the off-heap pool when block caching is disabled for the read (a usage sketch follows a few entries below). + + +--- + +* [HBASE-27565](https://issues.apache.org/jira/browse/HBASE-27565) | *Major* | **Make the initial corePoolSize configurable for ChoreService** + +Add 'hbase.choreservice.initial.pool.size' configuration property to set the initial number of threads for the ChoreService. + + +--- + +* [HBASE-27529](https://issues.apache.org/jira/browse/HBASE-27529) | *Major* | **Provide RS coproc ability to attach WAL extended attributes to mutations at replication sink** + +New RegionServer coprocessor endpoints that can be used by coprocessors at the replication sink cluster if the WAL has extended attributes. +Using the new endpoints, WAL extended attributes can be transferred to Mutation attributes at the replication sink cluster. 
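A minimal sketch illustrating the HBASE-27233 entry above: a client-side scan that opts out of block caching, which (with hbase.server.allocator.pool.enabled set on the server) lets the server read those blocks into the off-heap pool instead of the heap. The table name is hypothetical.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class NoCacheScanExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("my_table"))) { // hypothetical table
      Scan scan = new Scan();
      // Do not pollute the block cache with this one-off scan; with the
      // off-heap allocator pool enabled, these reads avoid heap allocations.
      scan.setCacheBlocks(false);
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result r : scanner) {
          System.out.println(r);
        }
      }
    }
  }
}
```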
+ + +--- + +* [HBASE-27575](https://issues.apache.org/jira/browse/HBASE-27575) | *Minor* | **Bump future from 0.18.2 to 0.18.3 in /dev-support** + +pushed to 2.4, 2.5, branch-2, and master + + + +# HBASE 2.5.2 Release Notes + +These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements. + + +--- + +* [HBASE-27434](https://issues.apache.org/jira/browse/HBASE-27434) | *Major* | **Use $revision as placeholder for maven version to make it easier to control the version from command line** + +Use ${revision} as placeholder for maven version in pom, so later you can use 'mvn install -Drevision=xxx' to specify the version at build time. +After this change, you can not use mvn versions:set to bump the version, instead. you should just modify the parent pom to change the value of the 'revision' property in the properties section. + + +--- + +* [HBASE-27472](https://issues.apache.org/jira/browse/HBASE-27472) | *Major* | **The personality script set wrong hadoop2 check version for branch-2** + +This only affects branch-2 but for aliging the personality scripts across all active branches, we apply it to all active branches. + + +--- + +* [HBASE-27443](https://issues.apache.org/jira/browse/HBASE-27443) | *Major* | **Use java11 in the general check of our jenkins job** + +Change to use java 11 in nightly and pre commit jobs. + +Bump error prone to 2.16 and force using jdk11 when error prone is enabled. + + + +# HBASE 2.5.1 Release Notes + +These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements. + + +--- + +* [HBASE-27381](https://issues.apache.org/jira/browse/HBASE-27381) | *Major* | **Still seeing 'Stuck' in static initialization creating RegionInfo instance** + +Static constant UNDEFINED has been removed from public interface RegionInfo. This is a breaking change, but resolves a critical deadlock bug. This constant was never meant to be exposed and has been deprecated since version 2.3.2 with no replacement. + + +--- + +* [HBASE-27372](https://issues.apache.org/jira/browse/HBASE-27372) | *Major* | **Update java versions in our Dockerfiles** + +Upgrade java version to 11.0.16.1 and 8u345b01 in the docker files which are used in our pre commit and nightly jobs. +Remove JDK7 in these docker files as we do not support JDK7 any more. + + +--- + +* [HBASE-27371](https://issues.apache.org/jira/browse/HBASE-27371) | *Major* | **Bump spotbugs version** + +Bump spotbugs version from 4.2.2 to 4.7.2. Also bump maven spotbugs plugin version from 4.2.0 to 4.7.2.0. + + +--- + +* [HBASE-27224](https://issues.apache.org/jira/browse/HBASE-27224) | *Major* | **HFile tool statistic sampling produces misleading results** + +Fixes HFilePrettyPrinter's calculation of min and max size for an HFile so that it will truly be the min and max for the whole file. Previously was based on just a sampling, as with the histograms. Additionally adds a new argument to the tool '-d' which prints detailed range counts for each summary. The range counts give you the exact count of rows/cells that fall within the pre-defined ranges, useful for giving more detailed insight into outliers. + + +--- + +* [HBASE-27340](https://issues.apache.org/jira/browse/HBASE-27340) | *Minor* | **Artifacts with resolved profiles** + +Published poms now contain runtime dependencies only; build and test time dependencies are stripped. Profiles are also now resolved and in-lined at publish time. 
This removes the need/ability of downstreamers shaping hbase dependencies via enable/disable of hbase profile settings (Implication is that now the hbase project publishes artifacts for hadoop2 and for hadoop3, and so on). + + +--- + +* [HBASE-27320](https://issues.apache.org/jira/browse/HBASE-27320) | *Minor* | **hide some sensitive configuration information in the UI** + +hide superuser and password related settings in the configuration UI + + + +# HBASE 2.5.0 Release Notes + +These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements. + + +--- + +* [HBASE-27305](https://issues.apache.org/jira/browse/HBASE-27305) | *Minor* | **add an option to skip file splitting when bulkload hfiles** + +Add a 'hbase.loadincremental.fail.if.need.split.hfile' configuration. If set to true, th bulk load operation will fail immediately if we need to split the hfiles. This can be used to prevent unexpected time consuming bulk load operation. + + +--- + +* [HBASE-27104](https://issues.apache.org/jira/browse/HBASE-27104) | *Major* | **Add a tool command list\_unknownservers** + +Introduce a shell command 'list\_unknownservers' to list unknown servers. + + +--- + +* [HBASE-27129](https://issues.apache.org/jira/browse/HBASE-27129) | *Major* | **Add a config that allows us to configure region-level storage policies** + +Add a 'hbase.hregion.block.storage.policy' so you can config storage policy at region level. This is useful when you want to control the storage policy for the directories other than CF directories, such as .splits, .recovered.edits, etc. + + +--- + +* [HBASE-27089](https://issues.apache.org/jira/browse/HBASE-27089) | *Minor* | **Add “commons.crypto.stream.buffer.size” configuration** + +Add a 'commons.crypto.stream.buffer.size' config for setting the buffer size when doing AES crypto for RPC. + + +--- + +* [HBASE-27299](https://issues.apache.org/jira/browse/HBASE-27299) | *Major* | **Bump minimum hadoop 2 version to 2.10.2** + +Now the minimum support hadoop 2.x version is 2.10.2 for hbase 2.5+. + + +--- + +* [HBASE-27281](https://issues.apache.org/jira/browse/HBASE-27281) | *Critical* | **Add default implementation for Connection$getClusterId** + +Adds a default null implementation for Connection$getClusterId. Downstream applications should implement this method. + + +--- + +* [HBASE-27229](https://issues.apache.org/jira/browse/HBASE-27229) | *Major* | **BucketCache statistics should not count evictions by hfile** + +The eviction metric for the BucketCache has been updated to only count evictions triggered by the eviction process (i.e responding to cache pressure). This brings it in-line with the other cache implementations, with the goal of this metric being to give operators insight into cache pressure. Other evictions by hfile or drain to storage engine failure, etc, no longer count towards the eviction rate. + + +--- + +* [HBASE-27204](https://issues.apache.org/jira/browse/HBASE-27204) | *Critical* | **BlockingRpcClient will hang for 20 seconds when SASL is enabled after finishing negotiation** + +When Kerberos authentication succeeds, on the server side, after receiving the final SASL token from the client, we simply wait for the client to continue by sending the connection header. After HBASE-24579, on the client side, an additional readStatus() was added, which assumed that after negotiation has completed a status code will be sent. However when authentication has succeeded the server will not send one. 
As a result the client would hang and only throw an exception when the configured read timeout is reached, which is 20 seconds by default. This was especially noticeable when using BlockingRpcClient as the client implementation. HBASE-24579 was reverted to correct this issue. + + +--- + +* [HBASE-27219](https://issues.apache.org/jira/browse/HBASE-27219) | *Minor* | **Change JONI encoding in RegexStringComparator** + +In RegexStringComparator an infinite loop can occur if an invalid UTF8 is encountered. We now use joni's NonStrictUTF8Encoding instead of UTF8Encoding to avoid the issue. + + +--- + +* [HBASE-20499](https://issues.apache.org/jira/browse/HBASE-20499) | *Minor* | **Replication/Priority executors can use specific max queue length as default value instead of general maxQueueLength** + +Added new config 'hbase.ipc.server.replication.max.callqueue.length' + + +--- + +* [HBASE-27048](https://issues.apache.org/jira/browse/HBASE-27048) | *Major* | **Server side scanner time limit should account for time in queue** + +Server will now account for queue time when determining how long a scanner can run before heartbeat should be returned. This should help avoid timeouts when server is overloaded. + + +--- + +* [HBASE-27148](https://issues.apache.org/jira/browse/HBASE-27148) | *Major* | **Move minimum hadoop 3 support version to 3.2.3** + +Bump the minimum hadoop 3 dependency to 3.2.3. + +Also upgrade apache-avro to 1.11.0 and exclude all jackson 1.x dependencies since all jackson 1.x versions have vulnerabilities. + +Notice that for hadoop 2 dependency we will need to include jackson 1.x because hadoop directly depend on it. + + +--- + +* [HBASE-27078](https://issues.apache.org/jira/browse/HBASE-27078) | *Major* | **Allow configuring a separate timeout for meta scans** + +Similar to hbase.read.rpc.timeout and hbase.client.scanner.timeout.period for normal scans, this issue adds two new configs for meta scans: hbase.client.meta.read.rpc.timeout and hbase.client.meta.scanner.timeout.period. Each meta scan RPC call will be limited by hbase.client.meta.read.rpc.timeout, while hbase.client.meta.scanner.timeout.period acts as an overall operation timeout. + +Additionally, for 2.5.0, normal Table-based scan RPCs will now be limited by hbase.read.rpc.timeout if configured, instead of hbase.rpc.timeout. This behavior already existed for AsyncTable scans. + + +--- + +* [HBASE-13126](https://issues.apache.org/jira/browse/HBASE-13126) | *Critical* | **Provide alternate mini cluster classes other than HBTU for downstream users to write unit tests** + +Introduce a TestingHBaseCluster for users to implement integration test with mini hbase cluster. +See this section https://hbase.apache.org/book.html#\_integration\_testing\_with\_an\_hbase\_mini\_cluster in the ref guide for more details on how to use it. +TestingHBaseCluster also allowes you to start a mini hbase cluster based on external HDFS cluster and zookeeper cluster, please see the release note of HBASE-26167 for more details. +HBaseTestingUtility is marked as deprecated and will be 'removed' in the future. 
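A hedged sketch of the TestingHBaseCluster usage described in the HBASE-13126 entry above, following the pattern in the referenced ref guide section. The method names (create, start, getConf, stop) are written from memory of that section; treat this as an outline to be checked against the ref guide rather than a verified example.

```java
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.testing.TestingHBaseCluster;
import org.apache.hadoop.hbase.testing.TestingHBaseClusterOption;

public class MiniClusterSketch {
  public static void main(String[] args) throws Exception {
    // Start an in-process HBase cluster (with embedded DFS and ZooKeeper by default;
    // see the HBASE-26167 note for pointing it at external clusters instead).
    TestingHBaseCluster cluster =
        TestingHBaseCluster.create(TestingHBaseClusterOption.builder().build());
    cluster.start();
    try (Connection conn = ConnectionFactory.createConnection(cluster.getConf())) {
      // run integration-test logic against 'conn' here
    } finally {
      cluster.stop();
    }
  }
}
```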
+ + +--- + +* [HBASE-27028](https://issues.apache.org/jira/browse/HBASE-27028) | *Minor* | **Add a shell command for flushing master local region** + +Introduced a shell command flush\_master\_store for flushing the master local region. + + +--- + +* [HBASE-27125](https://issues.apache.org/jira/browse/HBASE-27125) | *Minor* | **The batch size of cleaning expired mob files should have an upper bound** + +Configure "hbase.master.mob.cleaner.batch.size.upper.bound" to set an upper bound on the batch size for cleaning expired mob files; the default value is 10000. + + +--- + +* [HBASE-26167](https://issues.apache.org/jira/browse/HBASE-26167) | *Major* | **Allow users to not start zookeeper and dfs cluster when using TestingHBaseCluster** + +Introduce two new methods when creating a TestingHBaseClusterOption. + +public Builder useExternalDfs(String uri) +public Builder useExternalZooKeeper(String connectString) + +Users can use these two methods to specify an external zookeeper or HDFS cluster to be used by the TestingHBaseCluster. + + +--- + +* [HBASE-27108](https://issues.apache.org/jira/browse/HBASE-27108) | *Blocker* | **Revert HBASE-25709** + +HBASE-25709 caused a regression for scans that return a large number of rows and has been reverted in this release. + + +--- + +* [HBASE-26923](https://issues.apache.org/jira/browse/HBASE-26923) | *Minor* | **PerformanceEvaluation support encryption option** + +Add a new command line argument --encryption to enable encryption in the PerformanceEvaluation tool. + +Usage: + encryption Encryption type to use (AES, ...). Default: 'NONE' + +Examples: + To run an AES encryption sequentialWrite: + $ bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=xxx --encryption='AES' sequentialWrite 10 + + +--- + +* [HBASE-26826](https://issues.apache.org/jira/browse/HBASE-26826) | *Major* | **Backport StoreFileTracker (HBASE-26067, HBASE-26584, and others) to branch-2.5** + +Introduces the StoreFileTracker interface to HBase. This is a server-side interface which abstracts how a Store (column family) knows what files should be included in that Store. Previously, HBase relied on listing the directory a Store used for storage to determine the files which should make up that Store. + +\*\*\* StoreFileTracker is EXPERIMENTAL in 2.5. Use at your own risk. \*\*\* + +With this feature, there are two implementations of StoreFileTracker. The first (and default) implementation is listing the Store directory. The second is a new implementation which records the files which belong to a Store in a metadata file within that Store. Whenever the list of files that make up a Store changes, this metadata file is updated. + +This feature is notable in that it better enables HBase to function on storage systems which do not provide the typical posix filesystem semantics, most importantly those which do not implement an atomic file rename operation. Storage systems which do not implement atomic renames often implement a rename as a copy and delete operation, which amplifies the I/O costs by 2x. + +At scale, this feature should provide a 2x reduction in I/O costs when using storage systems that do not provide atomic renames, most importantly in HBase compactions and memstore flushes. See the corresponding section, "Store File Tracking", in the HBase book for more information on how to use this feature. + +The file based StoreFileTracker, FileBasedStoreFileTracker, is currently incompatible with the Medium Objects (MOB) feature. Do not enable them together.
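
As a sketch only (the property name `hbase.store.file-tracker.impl` and the value `FILE` are assumptions here; see the "Store File Tracking" section of the HBase book for the authoritative configuration), opting a single new table into the file based tracker from Java might look like:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class FileBasedTrackerExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Per-table override of the store file tracker implementation
      // (property name and value are assumed; do not combine with MOB).
      admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("sft_demo"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf"))
          .setValue("hbase.store.file-tracker.impl", "FILE")
          .build());
    }
  }
}
```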
+ + +--- + +* [HBASE-26649](https://issues.apache.org/jira/browse/HBASE-26649) | *Major* | **Support meta replica LoadBalance mode for RegionLocator#getAllRegionLocations()** + +When setting 'hbase.locator.meta.replicas.mode' to "LoadBalance" on the HBase client, RegionLocator#getAllRegionLocations() now load balances across all Meta Replica Regions. Please note, results from non-primary meta replica regions may contain stale data. + + +--- + +* [HBASE-27055](https://issues.apache.org/jira/browse/HBASE-27055) | *Minor* | **Add additional comments when using HBASE\_TRACE\_OPTS with standalone mode** + +hbase-env.sh has been updated with an optional HBASE\_OPTS configuration for standalone mode: + +# export HBASE\_OPTS="${HBASE\_OPTS} ${HBASE\_TRACE\_OPTS} -Dotel.resource.attributes=service.name=hbase-standalone" + + +--- + +* [HBASE-26342](https://issues.apache.org/jira/browse/HBASE-26342) | *Major* | **Support custom paths of independent configuration and pool for hfile cleaner** + +Configure the custom hfile paths (under the archive directory), e.g. data/default/testTable1,data/default/testTable2, with "hbase.master.hfile.cleaner.custom.paths". +Configure hfile cleaner classes for the custom paths with "hbase.master.hfilecleaner.custom.paths.plugins". +Configure the shared pool size for custom hfile cleaner paths with "hbase.cleaner.custom.hfiles.pool.size". + + +--- + +* [HBASE-27047](https://issues.apache.org/jira/browse/HBASE-27047) | *Minor* | **Fix typo for metric drainingRegionServers** + +Fix typo for metric drainingRegionServers. Change metric name from draininigRegionServers to drainingRegionServers. + + +--- + +* [HBASE-25465](https://issues.apache.org/jira/browse/HBASE-25465) | *Minor* | **Use javac --release option for supporting cross version compilation** + +When compiling with java 11 and above, we will use --release 8 to maintain java 8 compatibility. +Also upgrade jackson to 2.13.1 because in hbase-thirdparty 4.1.0 we shade jackson 2.13.1. + + +--- + +* [HBASE-27024](https://issues.apache.org/jira/browse/HBASE-27024) | *Major* | **The User API and Developer API links are broken on hbase.apache.org** + +Upgrade maven-site-plugin to 3.12.0 and maven-javadoc-plugin to 3.4.0. + + +--- + +* [HBASE-26986](https://issues.apache.org/jira/browse/HBASE-26986) | *Major* | **Trace a one-shot execution of a Master procedure** + +Individual executions of procedures are now wrapped in a tracing span. No effort is made to coordinate multiple executions back to a common PID. + + +--- + +* [HBASE-27013](https://issues.apache.org/jira/browse/HBASE-27013) | *Major* | **Introduce read all bytes when using pread for prefetch** + +Introduce an optional flag hfile.pread.all.bytes.enabled which makes a pread read the full block bytes together with the next block header. This feature is especially helpful when running HBase on blob storage like S3 and Azure Blob storage. In particular, when using HBase with S3A and fs.s3a.experimental.input.fadvise=sequential, it saves the input stream from seeking backward, which would otherwise spend extra time on storage connection resets. + + +--- + +* [HBASE-26899](https://issues.apache.org/jira/browse/HBASE-26899) | *Major* | **Run spotless:apply** + +Run spotless:apply to format our code base. +When viewing 'git blame', you may find that a large number of lines in a file were modified by the commit of HBASE-26899; in that case, go back to the commit before it and view 'git blame' again.
+ + +--- + +* [HBASE-26617](https://issues.apache.org/jira/browse/HBASE-26617) | *Major* | **Use spotless to reduce the pain on fixing checkstyle issues** + +Use spotless to format our java file and pom file, using the hbase\_eclipse\_formatter.xml and eclipse.importerorder file under our dev-support directory. +On all branches, the ratchetFrom is set the commit just before the commit which introduces the spotless plugin, so we will only format the files which are touched in later commits. +From now on, you should type mvn spotless:apply before generating a PR, the spotless plugin will fix most of the format issues for you. + + +--- + +* [HBASE-22349](https://issues.apache.org/jira/browse/HBASE-22349) | *Major* | **Stochastic Load Balancer skips balancing when node is replaced in cluster** + +StochasticLoadBalancer now respects the hbase.regions.slop configuration value as another factor in determining whether to attempt a balancer run. If any regionserver has a region count outside of the target range, the balancer will attempt to balance. Using the default 0.2 value, the target range is 80%-120% of the average (mean) region count per server. Whether the balancer will ultimately move regions will still depend on the weights of StochasticLoadBalancer's cost functions. + + +--- + +* [HBASE-26807](https://issues.apache.org/jira/browse/HBASE-26807) | *Major* | **Unify CallQueueTooBigException special pause with CallDroppedException** + +Introduces a new config "hbase.client.pause.server.overloaded", deprecating old "hbase.client.pause.cqtbe". The new config specifies a special pause time to use when the client receives an exception from the server indicating that it is overloaded. Currently this applies to CallQueueTooBigException and CallDroppedException. + + +--- + +* [HBASE-26891](https://issues.apache.org/jira/browse/HBASE-26891) | *Minor* | **Make MetricsConnection scope configurable** + +Adds a new "hbase.client.metrics.scope" config which allows users to define a custom scope for each Connection's metric instance. The default scope has also been changed to include the clusterId of the Connection, which should help differentiate metrics for processes connecting to multiple clusters. The scope is added to the ObjectName for JMX beans, so can be used to query for metrics for a particular connection. Using a custom scope might be useful in cases where you maintain separate Connections for writes vs reads. In that case you can set the scope appropriately and differentiate metrics for each. + + +--- + +* [HBASE-26618](https://issues.apache.org/jira/browse/HBASE-26618) | *Minor* | **Involving primary meta region in meta scan with CatalogReplicaLoadBalanceSimpleSelector** + +When META replica LoadBalance mode is enabled at client-side, clients will try to read from one META region first. If META location is from any non-primary META regions, in case of errors, it will fall back to the primary META region. + + +--- + +* [HBASE-26245](https://issues.apache.org/jira/browse/HBASE-26245) | *Major* | **Store region server list in master local region** + +A typical HBase deployment on cloud is to store the data other than WAL on OSS, and store the WAL data on a special HDFS cluster. A common operation is to rebuild the cluster with fresh new zk cluster and HDFS cluster, with only the old rootdir on OSS. But it requires extra manual steps since we rely on the WAL directory to find out previous live region servers, so we can schedule SCP to bring regions online. 
+After this issue, now it is possible to rebuild the cluster without extra manual steps as we will also store the previous live region servers in master local region. +But notice that you'd better stop masters first and then region servers when rebuilding, as some tests show that if there are some pending procedures, the new clusters may still hang. + + +--- + +* [HBASE-21065](https://issues.apache.org/jira/browse/HBASE-21065) | *Major* | **Try ROW\_INDEX\_V1 encoding on meta table (fix bloomfilters on meta while we are at it)** + +Enables ROW\_INDEX\_V1 encoding on hbase:meta by default. Also enables blooms. + +Will NOT enable encoding and blooms on upgrade. Operator will need to do this manually by editing hbase:meta schema (Or we provide a migration script to enable these configs -- out-of-scope for this JIRA). + + +--- + +* [HBASE-25895](https://issues.apache.org/jira/browse/HBASE-25895) | *Major* | **Implement a Cluster Metrics JSON endpoint** + +Introduces a REST+JSON endpoint on the master info server port, which is used to render the "Region Visualizer" section of the Master Status page. When enabled, access to this API is gated by authentication only, just like the Master Status page. This API is considered InterfaceAudience.Private, can change or disappear without notice. + + +--- + +* [HBASE-26802](https://issues.apache.org/jira/browse/HBASE-26802) | *Blocker* | **Backport the log4j2 changes to branch-2** + +Use log4j2 instead of log4j for logging. +Exclude log4j dependency from hbase and transitive dependencies, use log4j-1.2-api as test dependency for bridging as hadoop still need log4j for some reasons. Copy FileAppender implementation in hbase-logging as the ContainerLogAppender for YARN NodeManager extends it. All log4j.properties files have been replaced by log4j2.properties. + + +--- + +* [HBASE-26552](https://issues.apache.org/jira/browse/HBASE-26552) | *Major* | **Introduce retry to logroller to avoid abort** + +For retrying to roll log, the wait timeout is limited by "hbase.regionserver.logroll.wait.timeout.ms", +and the max retry time is limited by "hbase.regionserver.logroll.retries". +Do not retry to roll log is the default behavior. + + +--- + +* [HBASE-26640](https://issues.apache.org/jira/browse/HBASE-26640) | *Major* | **Reimplement master local region initialization to better work with SFT** + +Introduced a 'hbase.master.store.region.file-tracker.impl' config to specify the store file tracker implementation for master local region. + +If not present, master local region will use the cluster level store file tracker implementation. + + +--- + +* [HBASE-26673](https://issues.apache.org/jira/browse/HBASE-26673) | *Major* | **Implement a shell command for change SFT implementation** + +Introduced two shell commands for change table's or family's sft: + +change\_sft: + Change table's or table column family's sft. Examples: + hbase\> change\_sft 't1','FILE' + hbase\> change\_sft 't2','cf1','FILE' + +change\_sft\_all: + Change all of the tables's sft matching the given regex: + hbase\> change\_sft\_all 't.\*','FILE' + hbase\> change\_sft\_all 'ns:.\*','FILE' + hbase\> change\_sft\_all 'ns:t.\*','FILE' + + +--- + +* [HBASE-26742](https://issues.apache.org/jira/browse/HBASE-26742) | *Major* | **Comparator of NOT\_EQUAL NULL is invalid for checkAndMutate** + +The semantics of checkAndPut for null(or empty) value comparator is changed, the old match is always true. 
+But EQUAL or NOT\_EQUAL against a null value is a common usage, so the semantics of checkAndPut when matching null are now correct. +LESS or GREATER against null is rarely used, so the old semantics are kept for those. + + +--- + +* [HBASE-26688](https://issues.apache.org/jira/browse/HBASE-26688) | *Major* | **Threads shared EMPTY\_RESULT may lead to unexpected client job down.** + +Result#advance with an empty cell list now always returns false instead of raising NoSuchElementException when called multiple times. +This is a behavior change and therefore an 'incompatible change', but since it does not introduce any compile error and the old behavior was 'broken', we also fix it on the current release branches. + + +--- + +* [HBASE-26473](https://issues.apache.org/jira/browse/HBASE-26473) | *Major* | **Introduce \`db.hbase.container\_operations\` span attribute** + + Introduces an HBase-specific tracing attribute called `db.hbase.container_operations`. This attribute contains a list of the table operations contained in the batch/list/envelope operation, allowing an operator to search for, for example, all `PUT` operations, even when they occur inside of a `BATCH` or `COMPARE_AND_SET`. + + +--- + +* [HBASE-26469](https://issues.apache.org/jira/browse/HBASE-26469) | *Critical* | **correct HBase shell exit behavior to match code passed to exit** + + +User input handling has been refactored to make use of IRB sessions directly, and the HBase shell attempts to ensure user provided calls to exit are able to convey failure and success. + +Those scripting the HBase shell should be aware that the exit code may have changed: + * a 0 code, or no code, passed to a call to exit from stdin in non-interactive mode will now exit cleanly. In prior versions this would have exited with an error and a non-zero exit code. (note that in HBase 2.4.x this call will still result in a non-zero exit code) + * for other combinations of passing in an initialization script or reading from stdin with the non-interactive flag, the exit code being 0 or non-0 should now line up with releases prior to 2.4, which is a change in behavior compared to versions 2.4.0 - 2.4.9. + +Please see the issue details for a table of expected exit codes. + + +--- + +* [HBASE-26631](https://issues.apache.org/jira/browse/HBASE-26631) | *Major* | **Upgrade junit to 4.13.2** + +Upgrade junit to 4.13.2 to address CVE-2020-15250. + + +--- + +* [HBASE-26347](https://issues.apache.org/jira/browse/HBASE-26347) | *Major* | **Support detect and exclude slow DNs in fan-out of WAL** + +This issue provides a method to detect slow datanodes by checking the packet processing time of each datanode the WAL is connected to. When a datanode is considered slow, it will be added to an exclude cache on the regionserver, and every stream created will exclude all the cached slow datanodes for a configured period. The exclude logic cooperates with the log rolling logic and reacts more sensitively to slow datanodes, whether they are slow because of hardware failure or hotspots. + +hbase.regionserver.async.wal.max.exclude.datanode.count (default 3) and hbase.regionserver.async.wal.exclude.datanode.info.ttl.hour (default 6) mean that no more than 3 slow datanodes will be excluded on one regionserver, and that the exclude cache for the slow datanodes is valid for 6 hours. + +There are two conditions used to determine whether a datanode is slow: +1.
For small packet, we just have a simple time limit(configured by hbase.regionserver.async.wal.datanode.slow.packet.process.time.millis, default 6s), without considering the size of the packet. + +2. For large packet, we will calculate the speed, and check if the speed (configured by hbase.regionserver.async.wal.datanode.slow.packet.speed.min.kbs, default 20KB/s) is too slow. + +The large and small split point is configured by hbase.regionserver.async.wal.datanode.slow.check.speed.packet.data.length.min (default 64KB). + + +--- + +* [HBASE-26537](https://issues.apache.org/jira/browse/HBASE-26537) | *Major* | **FuzzyRowFilter backwards compatibility** + +HBASE-15676 introduced a backwards incompatible change which makes it impossible to upgrade server first, then client, without potentially incorrect scanning results if FuzzyRowFilter is in use. This change corrects that problem by introducing a backwards compatible workaround. + + +--- + +* [HBASE-26542](https://issues.apache.org/jira/browse/HBASE-26542) | *Minor* | **Apply a \`package\` to test protobuf files** + +The protobuf structures used in test are all now scoped by the package name \`hbase.test.pb\`. + + +--- + +* [HBASE-26512](https://issues.apache.org/jira/browse/HBASE-26512) | *Major* | **Make timestamp format configurable in HBase shell scan output** + +HBASE-23930 changed the formatting of the timestamp attribute on each Cell as displayed by the HBase shell to be formatted as an ISO-8601 string rather that milliseconds since the epoch. Some users may have logic which expects the timestamp to be displayed as milliseconds since the epoch. This change introduces the configuration property hbase.shell.timestamp.format.epoch which controls whether the shell will print an ISO-8601 formatted timestamp (the default "false") or milliseconds since the epoch ("true"). + + +--- + +* [HBASE-26363](https://issues.apache.org/jira/browse/HBASE-26363) | *Major* | **OpenTelemetry configuration support for per-process service names** + +Each HBase process can have its own `service.name`, a value that can be completely customized by the operator. See the comment and examples in conf/hbase-env.sh. + + +--- + +* [HBASE-26362](https://issues.apache.org/jira/browse/HBASE-26362) | *Major* | **Upload mvn site artifacts for nightly build to nightlies** + +Now we will upload the site artifacts to nightlies for nightly build as well as pre commit build. + + +--- + +* [HBASE-26316](https://issues.apache.org/jira/browse/HBASE-26316) | *Minor* | **Per-table or per-CF compression codec setting overrides** + +It is now possible to specify codec configuration options as part of table or column family schema definitions. The configuration options will only apply to the defined scope. For example: + + hbase\> create 'sometable', \\ + { NAME =\> 'somefamily', COMPRESSION =\> 'ZSTD' }, \\ + CONFIGURATION =\> { 'hbase.io.compress.zstd.level' =\> '9' } + + +--- + +* [HBASE-26329](https://issues.apache.org/jira/browse/HBASE-26329) | *Major* | **Upgrade commons-io to 2.11.0** + +Upgraded commons-io to 2.11.0. + + +--- + +* [HBASE-26186](https://issues.apache.org/jira/browse/HBASE-26186) | *Major* | **jenkins script for caching artifacts should verify cached file before relying on it** + +Add a '--verify-tar-gz' option to cache-apache-project-artifact.sh for verifying whether the cached file can be parsed as a gzipped tarball. +Use this option in our nightly job to avoid failures on broken cached hadoop tarballs. 
+ + +--- + +* [HBASE-26339](https://issues.apache.org/jira/browse/HBASE-26339) | *Major* | **SshPublisher will skip uploading artifacts if the build is failure** + +Now we will mark build as unstable instead of failure when the yetus script returns error. This is used to solve the problem that the SshPublisher jenkins plugin will skip uploading artifacts if the build is marked as failure. In fact, the test output will be more important when there are UT failures. + + +--- + +* [HBASE-26317](https://issues.apache.org/jira/browse/HBASE-26317) | *Major* | **Publish the test logs for pre commit jenkins job to nightlies** + +Now we will upload test\_logs.zip for our pre commit jobs to nightlies to save space on jenkins node. You can see the test\_logs.txt to get the actual url of the test\_logs.zip, or visit https://nightlies.apache.org/hbase directly to find the artifacts. + + +--- + +* [HBASE-26313](https://issues.apache.org/jira/browse/HBASE-26313) | *Major* | **Publish the test logs for our nightly jobs to nightlies.apache.org** + +Now we will upload test\_logs.zip for our nightly jobs to nightlies to save space on jenkins node. You can see the test\_logs.txt to get the actual url of the test\_logs.zip, or visit https://nightlies.apache.org/hbase directly to find the artifacts. + + +--- + +* [HBASE-26318](https://issues.apache.org/jira/browse/HBASE-26318) | *Major* | **Publish test logs for flaky jobs to nightlies** + +Now we will upload the surefire output for our flaky test jobs to nightlies to save space on jenkins node. You can see the test\_logs.txt to get the actual url of the surefire output, or visit https://nightlies.apache.org/hbase directly to find the artifacts. + + +--- + +* [HBASE-26259](https://issues.apache.org/jira/browse/HBASE-26259) | *Major* | **Fallback support to pure Java compression** + +This change introduces provided compression codecs to HBase as + new Maven modules. Each module provides compression codec support that formerly required Hadoop native codecs, which in turn relies on native code integration, which may or may not be available on a given hardware platform or in an operational environment. We now provide codecs in the HBase distribution for users whom for whatever reason cannot or do not wish to deploy the Hadoop native codecs. + + +--- + +* [HBASE-26274](https://issues.apache.org/jira/browse/HBASE-26274) | *Major* | **Create an option to reintroduce BlockCache to mapreduce job** + +Introduce \`hfile.onheap.block.cache.fixed.size\` and default to disable. When using ClientSideRegionScanner, it will be enabled with a fixed size for caching INDEX/LEAF\_INDEX block when a client, e.g. snapshot scanner, scans the entire HFile and does not need to seek/reseek to index block multiple times. + + +--- + +* [HBASE-26270](https://issues.apache.org/jira/browse/HBASE-26270) | *Minor* | **Provide getConfiguration method for Region and Store interface** + +Provide 'getReadOnlyConfiguration' method for Store and Region interface + + +--- + +* [HBASE-26273](https://issues.apache.org/jira/browse/HBASE-26273) | *Major* | **TableSnapshotInputFormat/TableSnapshotInputFormatImpl should use ReadType.STREAM for scanning HFiles** + +HBase's MapReduce API which can operate over HBase snapshots will now default to using ReadType.STREAM instead of ReadType.DEFAULT (which is PREAD) as a result of this change. HBase developers expect that STREAM will perform significantly better for average Snapshot-based batch jobs. 
Users can restore the previous functionality (using PREAD) by updating their code to explicitly set a value of \`ReadType.PREAD\` on the \`Scan\` object they provide to TableSnapshotInputFormat, or by setting the configuration property "hbase.TableSnapshotInputFormat.scanner.readtype" to "PREAD" in hbase-site.xml. + + +--- + +* [HBASE-26276](https://issues.apache.org/jira/browse/HBASE-26276) | *Major* | **Allow HashTable/SyncTable to perform rawScan when comparing cells** + +Added --rawScan option to HashTable job, which allows HashTable/SyncTable to perform raw scans. If this property is omitted, it defaults to false. When used together with --versions set to a high value, SyncTable will fabricate delete markers to all old versions still hanging (not cleaned yet by major compaction), avoiding the inconsistencies reported in HBASE-21596. + + +--- + +* [HBASE-26147](https://issues.apache.org/jira/browse/HBASE-26147) | *Major* | **Add dry run mode to hbase balancer** + +This change adds new API to the Admin interface for triggering Region balancing on a cluster. A new BalanceRequest object was introduced which allows for configuring a dry run of the balancer (compute a plan without enacting it) and running the balancer in the presence of RITs. Corresponding API was added to the HBase shell as well. + + +--- + +* [HBASE-26204](https://issues.apache.org/jira/browse/HBASE-26204) | *Major* | **VerifyReplication should obtain token for peerQuorumAddress too** + +VerifyReplication obtains tokens even if the peer quorum parameter is used. VerifyReplication with peer quorum can be used for secure clusters also. + + +--- + +* [HBASE-26180](https://issues.apache.org/jira/browse/HBASE-26180) | *Major* | **Introduce a initial refresh interval for RpcConnectionRegistry** + +Introduced a 'hbase.client.bootstrap.initial\_refresh\_delay\_secs' config to control the first refresh delay for bootstrap nodes. The default value is 1/10 of periodic refresh interval. + + +--- + +* [HBASE-26173](https://issues.apache.org/jira/browse/HBASE-26173) | *Major* | **Return only a sub set of region servers as bootstrap nodes** + +Introduced a 'hbase.client.bootstrap.node.limit' config to limit the max number of bootstrap nodes we return to client. The default value is 10. + + +--- + +* [HBASE-26182](https://issues.apache.org/jira/browse/HBASE-26182) | *Major* | **Allow disabling refresh of connection registry endpoint** + +Set 'hbase.client.bootstrap.refresh\_interval\_secs' to -1 can disable refresh of connection registry endpoint. + + +--- + +* [HBASE-26212](https://issues.apache.org/jira/browse/HBASE-26212) | *Minor* | **Allow AuthUtil automatic renewal to be disabled** + +This change introduces a configuration property "hbase.client.keytab.automatic.renewal" to control AuthUtil, the class which automatically tries to perform Kerberos ticket renewal in client applications. This configuration property defaults to "true", meaning that AuthUtil will automatically attempt to renew Kerberos tickets per its capabilities. Those who want AuthUtil to not renew client Kerberos tickets can set this property to be "false". + + +--- + +* [HBASE-26172](https://issues.apache.org/jira/browse/HBASE-26172) | *Major* | **Deprecate MasterRegistry** + +MasterRegistry is deprecated. Please use RpcConnectionRegistry instead. 
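
For client code moving off MasterRegistry, a minimal sketch (the registry selector key `hbase.client.registry.impl` is an assumption here; `hbase.client.bootstrap.servers` is described under HBASE-26150 below, and the endpoints are placeholders):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class RpcRegistryClientExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Select the RPC based connection registry instead of the deprecated MasterRegistry
    // (key name assumed; check the documentation for your release).
    conf.set("hbase.client.registry.impl",
        "org.apache.hadoop.hbase.client.RpcConnectionRegistry");
    // Initial bootstrap nodes; masters or region servers both work, per HBASE-26150.
    conf.set("hbase.client.bootstrap.servers",
        "server1.example.com:16020,server2.example.com:16020");
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      System.out.println("connected, clusterId=" + conn.getClusterId());
    }
  }
}
```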
+ + +--- + +* [HBASE-26193](https://issues.apache.org/jira/browse/HBASE-26193) | *Major* | **Do not store meta region location as permanent state on zookeeper** + +Introduce a new 'info' family in master local region for storing the location of meta regions. +We will still mirror the location of meta regions to ZooKeeper for backwards compatibility, but now you can also clean the meta location znodes (usually prefixed with 'meta-region-server') on ZooKeeper without messing up the cluster state. You can get a clean restart of the cluster, and after restarting, we will mirror the location of meta regions to ZooKeeper again. + + +--- + +* [HBASE-24652](https://issues.apache.org/jira/browse/HBASE-24652) | *Minor* | **master-status UI make date type fields sortable** + +Makes RegionServer 'Start time' sortable in the Master UI. + + +--- + +* [HBASE-26200](https://issues.apache.org/jira/browse/HBASE-26200) | *Major* | **Undo 'HBASE-25165 Change 'State time' in UI so sorts (#2508)' in favor of HBASE-24652** + +Undid showing RegionServer 'Start time' in ISO-8601 format. Revert. + + +--- + +* [HBASE-6908](https://issues.apache.org/jira/browse/HBASE-6908) | *Major* | **Pluggable Call BlockingQueue for HBaseServer** + +Can pass in a FQCN to load as the call queue implementation. + +Standardized arguments to the constructor are the max queue length, the PriorityFunction, and the Configuration. + +A PluggableBlockingQueue abstract class is provided to help guide the correct constructor signature. + +Hard fails with PluggableRpcQueueNotFound if the class fails to load as a BlockingQueue. + +Upstreaming on behalf of Hubspot: we are interested in defining our own custom RPC queue and don't want to get involved in necessarily upstreaming internal requirements/iterations. + + +--- + +* [HBASE-26196](https://issues.apache.org/jira/browse/HBASE-26196) | *Major* | **Support configuration override for remote cluster of HFileOutputFormat locality sensitive** + +Allow overriding any configuration for the remote cluster in HFileOutputFormat2. This is useful when a configuration different from the job's configuration is necessary to connect to the remote cluster, for instance, non-secure vs secure. + + +--- + +* [HBASE-26150](https://issues.apache.org/jira/browse/HBASE-26150) | *Major* | **Let region server also carry ClientMetaService** + +Introduced a RpcConnectionRegistry. + +Configure 'hbase.client.bootstrap.servers' to set up the initial bootstrap nodes. The registry will refresh the bootstrap servers periodically or on errors. Notice that masters and region servers both implement the necessary rpc interface, so you are free to configure either masters or region servers as the initial bootstrap nodes. + + +--- + +* [HBASE-26127](https://issues.apache.org/jira/browse/HBASE-26127) | *Major* | **Backport HBASE-23898 "Add trace support for simple apis in async client" to branch-2** + +https://github.com/apache/hbase/commit/4fbc4c29f22e35a72e73dac3aa58359a8d8c7be3 + + +--- + +* [HBASE-26160](https://issues.apache.org/jira/browse/HBASE-26160) | *Minor* | **Configurable disallowlist for live editing of loglevels** + +Adds a new hbase.ui.logLevels.readonly.loggers config which takes a comma-separated list of logger names. Similar to log4j configurations, the logger names can be prefixes or a full logger name. The log level of read only loggers cannot be changed via the logLevel UI or the setlevel CLI. This is useful for securing sensitive loggers, such as the SecurityLogger used for audit logs.
+ + +--- + +* [HBASE-26126](https://issues.apache.org/jira/browse/HBASE-26126) | *Major* | **Backport HBASE-25424 "Find a way to config OpenTelemetry tracing without directly depending on opentelemetry-sdk" to branch-2** + +https://github.com/apache/hbase/commit/3d29c0c2b4520edb06c0c5d3674cdb6547a57651 + + +--- + +* [HBASE-26125](https://issues.apache.org/jira/browse/HBASE-26125) | *Major* | **Backport HBASE-25401 "Add trace support for async call in rpc client" to branch-2** + +https://github.com/apache/hbase/commit/ca096437d7e096b514ddda53ec2f97b85d90752d + + +--- + +* [HBASE-26154](https://issues.apache.org/jira/browse/HBASE-26154) | *Minor* | **Provide exception metric for quota exceeded and throttling** + +Adds "exceptions.quotaExceeded" and "exceptions.rpcThrottling" to HBase server and Thrift server metrics. + + +--- + +* [HBASE-26098](https://issues.apache.org/jira/browse/HBASE-26098) | *Major* | **Support passing a customized Configuration object when creating TestingHBaseCluster** + +Now TestingHBaseClusterOption support passing a Configuration object when building. + + +--- + +* [HBASE-26124](https://issues.apache.org/jira/browse/HBASE-26124) | *Major* | **Backport HBASE-25373 "Remove HTrace completely in code base and try to make use of OpenTelemetry" to branch-2** + +https://github.com/apache/hbase/commit/f0493016062267fc37e14659d9183673d42a8f1d + + +--- + +* [HBASE-26146](https://issues.apache.org/jira/browse/HBASE-26146) | *Minor* | **Allow custom opts for hbck in hbase bin** + +Adds HBASE\_HBCK\_OPTS environment variable to bin/hbase for passing extra options to hbck/hbck2. Defaults to HBASE\_SERVER\_JAAS\_OPTS if specified, or HBASE\_REGIONSERVER\_OPTS. + + +--- + +* [HBASE-26088](https://issues.apache.org/jira/browse/HBASE-26088) | *Critical* | **conn.getBufferedMutator(tableName) leaks thread executors and other problems** + +The API doc for Connection#getBufferedMutator(TableName) and Connection#getBufferedMutator(BufferedMutatorParams) mentioned that when user dont pass a ThreadPool to be used, we use the ThreadPool in the Connection. But in reality, we were creating new ThreadPool in such cases. + +We are keeping the behaviour of code as is but corrected the Javadoc and also a bug of not closing this new pool while Closing the BufferedMutator. + + +--- + +* [HBASE-25986](https://issues.apache.org/jira/browse/HBASE-25986) | *Minor* | **Expose the NORMALIZARION\_ENABLED table descriptor through a property in hbase-site** + +New config: hbase.table.normalization.enabled + +Default value: false + +Description: This config is used to set default behaviour of normalizer at table level. To override this at table level one can set NORMALIZATION\_ENABLED at table descriptor level and that property will be honored. Of course, this property at table level can only work if normalizer is enabled at cluster level using "normalizer\_switch true" command. + + +--- + +* [HBASE-26080](https://issues.apache.org/jira/browse/HBASE-26080) | *Major* | **Implement a new mini cluster class for end users** + +Introduce a new TestingHBaseCluster for end users to start a mini hbase cluster in tests. + +After starting the cluster, you can create a Connection with the returned Configuration instance for accessing the cluster. + +Besides the mini hbase cluster, TestingHBaseCluster will also start a zookeeper cluster and dfs cluster. But you can not control the zookeeper and dfs cluster separately, as for end users, you do not need to test the behavior of HBase cluster when these systems are broken. 
+ +We provide methods for start/stop master and region server, and also the whole hbase cluster. Notice that, we do not provide methods to get the address for masters and regionservers, or locate a region, you could use the Admin interface to do this. + + +--- + +* [HBASE-22923](https://issues.apache.org/jira/browse/HBASE-22923) | *Major* | **hbase:meta is assigned to localhost when we downgrade the hbase version** + +Introduced new config: hbase.min.version.move.system.tables + +When the operator uses this configuration option, any version between +the current cluster version and the value of "hbase.min.version.move.system.tables" +does not trigger any auto-region movement. Auto-region movement here +refers to auto-migration of system table regions to newer server versions. +It is assumed that the configured range of versions does not require special +handling of moving system table regions to higher versioned RegionServer. +This auto-migration is done by AssignmentManager#checkIfShouldMoveSystemRegionAsync(). +Example: Let's assume the cluster is on version 1.4.0 and we have +set "hbase.min.version.move.system.tables" as "2.0.0". Now if we upgrade +one RegionServer on 1.4.0 cluster to 1.6.0 (\< 2.0.0), then AssignmentManager will +not move hbase:meta, hbase:namespace and other system table regions +to newly brought up RegionServer 1.6.0 as part of auto-migration. +However, if we upgrade one RegionServer on 1.4.0 cluster to 2.2.0 (\> 2.0.0), +then AssignmentManager will move all system table regions to newly brought +up RegionServer 2.2.0 as part of auto-migration done by +AssignmentManager#checkIfShouldMoveSystemRegionAsync(). + +Overall, assuming we have system RSGroup where we keep HBase system tables, if we use +config "hbase.min.version.move.system.tables" with value x.y.z then while upgrading cluster to +version greater than or equal to x.y.z, the first RegionServer that we upgrade must +belong to system RSGroup only. + + +--- + +* [HBASE-25902](https://issues.apache.org/jira/browse/HBASE-25902) | *Critical* | **Add missing CFs in meta during HBase 1 to 2.3+ Upgrade** + +While upgrading cluster from 1.x to 2.3+ versions, after the active master is done setting it's status as 'Initialized', it attempts to add 'table' and 'repl\_barrier' CFs in meta. Once CFs are added successfully, master is aborted with PleaseRestartMasterException because master has missed certain initialization events (e.g ClusterSchemaService is not initialized and tableStateManager fails to migrate table states from ZK to meta due to missing CFs). Subsequent active master initialization is expected to be smooth. +In the presence of multi masters, when one of them becomes active for the first time after upgrading to HBase 2.3+, it is aborted after fixing CFs in meta and one of the other backup masters will take over and become active soon. Hence, overall this is expected to be smooth upgrade if we have backup masters configured. If not, operator is expected to restart same master again manually. + + +--- + +* [HBASE-26029](https://issues.apache.org/jira/browse/HBASE-26029) | *Critical* | **It is not reliable to use nodeDeleted event to track region server's death** + +Introduce a new step in ServerCrashProcedure to move the replication queues of the dead region server to other live region servers, as this is the only reliable way to get the death event of a region server. +The old ReplicationTracker related code have all been purged as they are not used any more. 
+ + +--- + +* [HBASE-25877](https://issues.apache.org/jira/browse/HBASE-25877) | *Major* | **Add access check for compactionSwitch** + +Now calling RSRpcService.compactionSwitch, i.e. Admin.compactionSwitch on the client side, requires ADMIN permission. +This is an incompatible change, but it is also a bug fix, as we should not allow arbitrary users to disable compaction on a regionserver, so we apply it to all active branches. + + +--- + +* [HBASE-25984](https://issues.apache.org/jira/browse/HBASE-25984) | *Critical* | **FSHLog WAL lockup with sync future reuse [RS deadlock]** + +Fixes a WAL lockup issue due to premature reuse of the sync futures by the WAL consumers. The lockup causes the WAL system to hang, resulting in blocked appends and syncs and thus holding up the RPC handlers from progressing. The only workaround without this fix is to force abort the region server. + + +--- + +* [HBASE-25993](https://issues.apache.org/jira/browse/HBASE-25993) | *Major* | **Make excluded SSL cipher suites configurable for all Web UIs** + +Add an "ssl.server.exclude.cipher.list" configuration to exclude cipher suites for the http server started by the InfoServer. + + +--- + +* [HBASE-25920](https://issues.apache.org/jira/browse/HBASE-25920) | *Major* | **Support Hadoop 3.3.1** + +Fixes to make unit tests pass and to make it so an hbase built from branch-2 against a 3.3.1RC can run on a 3.3.1RC small cluster. + + +--- + +* [HBASE-25969](https://issues.apache.org/jira/browse/HBASE-25969) | *Major* | **Cleanup netty-all transitive includes** + +We have an (old) netty-all in our produced artifacts. It is transitively included from hadoop. It is needed by MiniMRCluster, referenced from a few MR tests in hbase. This commit adds netty-all excludes everywhere else but where tests will fail unless the transitive is allowed through. TODO: move MR and/or MR tests out of hbase core. + + +--- + +* [HBASE-25963](https://issues.apache.org/jira/browse/HBASE-25963) | *Major* | **HBaseCluster should be marked as IA.Public** + +Change HBaseCluster to IA.Public as its subclass MiniHBaseCluster is IA.Public. + + +--- + +* [HBASE-25841](https://issues.apache.org/jira/browse/HBASE-25841) | *Minor* | **Add basic jshell support** + +This change adds a new \`hbase jshell\` command-line interface. It launches an interactive JShell session with HBase on the classpath, as well as the client package already imported. + + +--- + +* [HBASE-25894](https://issues.apache.org/jira/browse/HBASE-25894) | *Major* | **Improve the performance for region load and region count related cost functions** + +In CostFromRegionLoadFunction, we now only recompute the cost for a given region server in the regionMoved function, instead of computing all the costs every time. +Introduced a DoubleArrayCost for better abstraction, and we also try to only compute the final cost on demand, as the computation is a bit expensive. + + +--- + +* [HBASE-25869](https://issues.apache.org/jira/browse/HBASE-25869) | *Major* | **WAL value compression** + +WAL storage can be expensive, especially if the cell values represented in the edits are large, consisting of blobs or significant lengths of text. Such WALs might need to be kept around for a fairly long time to satisfy replication constraints on a space limited (or space-contended) filesystem. + +Enable WAL compression and, with this feature, WAL value compression, to save space in exchange for slightly higher WAL append latencies. The degree of performance impact will depend on the compression algorithm selection.
SNAPPY or ZSTD are recommended algorithms, if native codec support is available. SNAPPY may even provide an overall improvement in WAL commit latency, so is the best choice. GZ can be a reasonable fallback where native codec support is not available. + +To enable WAL compression, value compression, and select the desired algorithm, edit your site configuration like so: + +\ + +After this change a region server can only accept regions (as seen by master) after it's first report to master is sent successfully. Prior to this change there could be cases where the region server finishes calling regionServerStartup but is actually stuck during initialization due to issues like misconfiguration and master tries to assign regions and they are stuck because the region server is in a weird state and not ready to serve them. + + +--- + +* [HBASE-25826](https://issues.apache.org/jira/browse/HBASE-25826) | *Major* | **Revisit the synchronization of balancer implementation** + +Narrow down the public facing API for LoadBalancer by removing balanceTable and setConf methods. +Redesigned the initilization sequence to simplify the initialization code. Now all the setters are just 'setter', all the initialization work are moved to initialize method. +Rename setClusterMetrics to updateClusterMetrics, as it will be called periodically while other setters will only be called once before initialization. +Add javadoc for LoadBalancer class to mention how to do synchronization on implementation classes. + + +--- + +* [HBASE-25834](https://issues.apache.org/jira/browse/HBASE-25834) | *Major* | **Remove balanceTable method from LoadBalancer interface** + +Remove balanceTable method from LoadBalancer interface as we never call it outside balancer implementation. +Mark balanceTable method as protected in BaseLoadBalancer. +Mark balanceCluster method as final in BaseLoadBalancer, the implementation classes should not override it anymore, just implement the balanceTable method is enough. + + +--- + +* [HBASE-25756](https://issues.apache.org/jira/browse/HBASE-25756) | *Minor* | **Support alternate compression for major and minor compactions** + +It is now possible to specify alternate compression algorithms for major or minor compactions, via new ColumnFamilyBuilder or HColumnDescriptor methods {get,set}{Major,Minor}CompressionType(), or shell schema attributes COMPRESSION\_COMPACT\_MAJOR or COMPRESSION\_COMPACT\_MINOR. This can be used to select a fast algorithm for frequent minor compactions and a slower algorithm offering better compression ratios for infrequent major compactions. + + +--- + +* [HBASE-25766](https://issues.apache.org/jira/browse/HBASE-25766) | *Major* | **Introduce RegionSplitRestriction that restricts the pattern of the split point** + +After HBASE-25766, we can specify a split restriction, "KeyPrefix" or "DelimitedKeyPrefix", to a table with the "hbase.regionserver.region.split\_restriction.type" property. The "KeyPrefix" split restriction groups rows by a prefix of the row-key. And the "DelimitedKeyPrefix" split restriction groups rows by a prefix of the row-key with a delimiter. 
+ +For example: +\`\`\` +# Create a table with a "KeyPrefix" split restriction, where the prefix length is 2 bytes +hbase\> create 'tbl1', 'fam', {CONFIGURATION =\> {'hbase.regionserver.region.split\_restriction.type' =\> 'KeyPrefix', 'hbase.regionserver.region.split\_restriction.prefix\_length' =\> '2'}} + +# Create a table with a "DelimitedKeyPrefix" split restriction, where the delimiter is a comma (,) +hbase\> create 'tbl2', 'fam', {CONFIGURATION =\> {'hbase.regionserver.region.split\_restriction.type' =\> 'DelimitedKeyPrefix', 'hbase.regionserver.region.split\_restriction.delimiter' =\> ','}} +\`\`\` + +Instead of specifying a split restriction to a table directly, we can also set the properties in hbase-site.xml. In this case, the specified split restriction is applied for all the tables. + +Note that the split restriction is also applied to a user-specified split point so that we don't allow users to break the restriction, which is different behavior from the existing KeyPrefixRegionSplitPolicy and DelimitedKeyPrefixRegionSplitPolicy. + + +--- + +* [HBASE-25775](https://issues.apache.org/jira/browse/HBASE-25775) | *Major* | **Use a special balancer to deal with maintenance mode** + +Introduced a MaintenanceLoadBalancer to be used only under maintenance mode. Typically you should not use it as your balancer implementation. + + +--- + +* [HBASE-25767](https://issues.apache.org/jira/browse/HBASE-25767) | *Major* | **CandidateGenerator.getRandomIterationOrder is too slow on large cluster** + +In the actual implementation classes of CandidateGenerator, now we just random select a start point and then iterate sequentially, instead of using the old way, where we will create a big array to hold all the integers in [0, num\_regions\_in\_cluster), shuffle the array, and then iterate on the array. +The new implementation is 'random' enough as every time we just select one candidate. The problem for the old implementation is that, it will create an array every time when we want to get a candidate, if we have tens of thousands regions, we will create an array with tens of thousands length everytime, which causes big GC pressure and slow down the balancer execution. + + +--- + +* [HBASE-25744](https://issues.apache.org/jira/browse/HBASE-25744) | *Major* | **Change default of \`hbase.normalizer.merge.min\_region\_size.mb\` to \`0\`** + +Before this change, by default, the normalizer would exclude any region with a total \`storefileSizeMB\` \<= 1 from merge consideration. This changes the default so that these small regions will be merged away. + + +--- + +* [HBASE-25687](https://issues.apache.org/jira/browse/HBASE-25687) | *Major* | **Backport "HBASE-25681 Add a switch for server/table queryMeter" to branch-2 and branch-1** + +Adds flags to disable server and table metrics. They are default on. + +"hbase.regionserver.enable.server.query.meter" +"hbase.regionserver.enable.table.query.meter"; + + +--- + +* [HBASE-25199](https://issues.apache.org/jira/browse/HBASE-25199) | *Minor* | **Remove HStore#getStoreHomedir** + +Moved the following methods from HStore to HRegionFileSystem + +- #getStoreHomedir(Path, RegionInfo, byte[]) +- #getStoreHomedir(Path, String, byte[]) + + +--- + +* [HBASE-25685](https://issues.apache.org/jira/browse/HBASE-25685) | *Major* | **asyncprofiler2.0 no longer supports svg; wants html** + +If asyncprofiler 1.x, all is good. If asyncprofiler 2.x and it is hbase-2.3.x or hbase-2.4.x, add '?output=html' to get flamegraphs from the profiler. 
+ +Otherwise, if hbase-2.5+ and asyncprofiler2, all works. If asyncprofiler1 and hbase-2.5+, you may have to add '?output=svg' to the query. + + +--- + +* [HBASE-25518](https://issues.apache.org/jira/browse/HBASE-25518) | *Major* | **Support separate child regions to different region servers** + +Config key to enable/disable automatically separating child regions to different region servers in the region split procedure. One child will be kept on the server where the parent region is, and the other child will be assigned to a random server. + +hbase.master.auto.separate.child.regions.after.split.enabled + +Default setting is false/off. + + +--- + +* [HBASE-25665](https://issues.apache.org/jira/browse/HBASE-25665) | *Major* | **Disable reverse DNS lookup for SASL Kerberos client connection** + +New client side configuration \`hbase.unsafe.client.kerberos.hostname.disable.reversedns\` is added. + +This configuration is for advanced experts and you shouldn't set it unless you really understand what it does. +By default (false for this configuration), the HBase secure client using SASL Kerberos performs a DNS reverse lookup to get the hostname for the server principal, using InetAddress.getCanonicalHostName. +If you set this configuration to true, the HBase client doesn't perform a DNS reverse lookup for the server principal and instead uses InetAddress.getHostName, which is sent by the HBase cluster. +This helps deploy your client application under unusual network environments where DNS doesn't provide reverse lookups. +Check the description of the JIRA ticket HBASE-25665 carefully and verify that this configuration actually fits your environment and deployment before enabling it. + + +--- + +* [HBASE-25374](https://issues.apache.org/jira/browse/HBASE-25374) | *Minor* | **Make REST Client connection and socket time out configurable** + +Configuration parameters to set the REST client connection and socket timeouts: + +"hbase.rest.client.conn.timeout" Default is 2 \* 1000 + +"hbase.rest.client.socket.timeout" Default is 30 \* 1000 + + +--- + +* [HBASE-25566](https://issues.apache.org/jira/browse/HBASE-25566) | *Major* | **RoundRobinTableInputFormat** + +Adds RoundRobinTableInputFormat, a subclass of TableInputFormat, that takes the TIF#getSplits list and resorts it so as to spread the InputSplits as broadly about the cluster as possible. RRTIF works to frustrate bunching of InputSplits on RegionServers to avoid the scenario where a few RegionServers are working hard fielding many InputSplits while others idle hosting a few or none. + + +--- + +* [HBASE-25587](https://issues.apache.org/jira/browse/HBASE-25587) | *Major* | **[hbck2] Schedule SCP for all unknown servers** + +Adds scheduleSCPsForUnknownServers to the Hbck Service. + + +--- + +* [HBASE-25636](https://issues.apache.org/jira/browse/HBASE-25636) | *Minor* | **Expose HBCK report as metrics** + +Expose HBCK report results in metrics, including: "orphanRegionsOnRS", "orphanRegionsOnFS", "inconsistentRegions", "holes", "overlaps", "unknownServerRegions" and "emptyRegionInfoRegions". + + +--- + +* [HBASE-25582](https://issues.apache.org/jira/browse/HBASE-25582) | *Major* | **Support setting scan ReadType to be STREAM at cluster level** + +Adds a new meaning for the config 'hbase.storescanner.pread.max.bytes' when configured with a value \<0. +In HBase 2.x we allow the Scan op to specify a ReadType (PREAD / STREAM / DEFAULT).
When Scan comes with DEFAULT read type, we will start scan with preads and later switch to stream read once we see we are scanning a total data size \> value of hbase.storescanner.pread.max.bytes. (This is calculated for data per region:cf). This config defaults to 4 x of HFile block size = 256 KB by default. +This jira added a new meaning for this config when configured with a -ve value. In such case, for all scans with DEFAULT read type, we will start with STREAM read itself. (Switch at begin of the scan itself) + + +--- + +* [HBASE-25492](https://issues.apache.org/jira/browse/HBASE-25492) | *Major* | **Create table with rsgroup info in branch-2** + +HBASE-25492 added a new interface in TableDescriptor which allows user to define RSGroup name while creating or modifying a table. + + +--- + +* [HBASE-25460](https://issues.apache.org/jira/browse/HBASE-25460) | *Major* | **Expose drainingServers as cluster metric** + +Exposed new jmx metrics: "draininigRegionServers" and "numDrainingRegionServers" to provide "comma separated names for regionservers that are put in draining mode" and "num of such regionservers" respectively. + + +--- + +* [HBASE-25615](https://issues.apache.org/jira/browse/HBASE-25615) | *Major* | **Upgrade java version in pre commit docker file** + +jdk8u232-b09 -\> jdk8u282-b08 +jdk-11.0.6\_10 -\> jdk-11.0.10\_9 + + +--- + +* [HBASE-23887](https://issues.apache.org/jira/browse/HBASE-23887) | *Major* | **New L1 cache : AdaptiveLRU** + +Introduced new L1 cache: AdaptiveLRU. This is supposed to provide better performance than default LRU cache. +Set config key "hfile.block.cache.policy" to "AdaptiveLRU" in hbase-site in order to start using this new cache. + + +--- + +* [HBASE-25449](https://issues.apache.org/jira/browse/HBASE-25449) | *Major* | **'dfs.client.read.shortcircuit' should not be set in hbase-default.xml** + +The presence of HDFS short-circuit read configuration properties in hbase-default.xml inadvertently causes short-circuit reads to not happen inside of RegionServers, despite short-circuit reads being enabled in hdfs-site.xml. + + +--- + +* [HBASE-25333](https://issues.apache.org/jira/browse/HBASE-25333) | *Major* | **Add maven enforcer rule to ban VisibleForTesting imports** + +Ban the imports of guava VisiableForTesting, which means you should not use this annotation in HBase any more. +For IA.Public and IA.LimitedPrivate classes, typically you should not expose any test related fields/methods there, and if you want to hide something, use IA.Private on the specific fields/methods. +For IA.Private classes, if you want to expose something only for tests, use the RestrictedApi annotation from error prone, where it could cause a compilation error if someone break the rule in the future. 
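
For illustration, a hedged sketch of how such a test-only accessor might be guarded with the error prone annotation mentioned above (the class and method names are hypothetical and the allowedOnPath pattern is just an example):

```java
import com.google.errorprone.annotations.RestrictedApi;

public class SomeInternalService {
  @RestrictedApi(
      explanation = "Only tests should reset internal state directly",
      link = "https://issues.apache.org/jira/browse/HBASE-25333",
      allowedOnPath = ".*/src/test/.*")
  void resetStateForTesting() {
    // Calls from production code (outside src/test) fail the error prone check at compile time.
  }
}
```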
+ + +--- + +* [HBASE-25441](https://issues.apache.org/jira/browse/HBASE-25441) | *Critical* | **add security check for some APIs in RSRpcServices** + +RsRpcServices APIs that can be accessed only through Admin rights: +- stopServer +- updateFavoredNodes +- updateConfiguration +- clearRegionBlockCache +- clearSlowLogsResponses + + +--- + +* [HBASE-25432](https://issues.apache.org/jira/browse/HBASE-25432) | *Blocker* | **we should add security checks for setTableStateInMeta and fixMeta** + +setTableStateInMeta and fixMeta can be accessed only through Admin rights + + +--- + +* [HBASE-25318](https://issues.apache.org/jira/browse/HBASE-25318) | *Minor* | **Configure where IntegrationTestImportTsv generates HFiles** + +Added IntegrationTestImportTsv.generatedHFileFolder configuration property to override the default location in IntegrationTestImportTsv. Useful for running the integration test when HDFS Transparent Encryption is enabled. + + +--- + +* [HBASE-24751](https://issues.apache.org/jira/browse/HBASE-24751) | *Minor* | **Display Task completion time and/or processing duration on Web UI** + +Adds completion time to tasks display. + + +--- + +* [HBASE-25456](https://issues.apache.org/jira/browse/HBASE-25456) | *Critical* | **setRegionStateInMeta need security check** + +setRegionStateInMeta can be accessed only through Admin rights + + +--- + +* [HBASE-25451](https://issues.apache.org/jira/browse/HBASE-25451) | *Major* | **Upgrade commons-io to 2.8.0** + +Upgrade commons-io to 2.8.0. Remove deprecated IOUtils.closeQuietly call in code base. + + +--- + +* [HBASE-22749](https://issues.apache.org/jira/browse/HBASE-22749) | *Major* | **Distributed MOB compactions** + + +MOB compaction is now handled in-line with per-region compaction on region + servers + +- regions with mob data store per-hfile metadata about which mob hfiles are + referenced +- admin requested major compaction will also rewrite MOB files; periodic RS + initiated major compaction will not +- periodically a chore in the master will initiate a major compaction that + will rewrite MOB values to ensure it happens. controlled by + 'hbase.mob.compaction.chore.period'. default is weekly +- control how many RS the chore requests major compaction on in parallel + with 'hbase.mob.major.compaction.region.batch.size'. default is as + parallel as possible. +- periodic chore in master will scan backing hfiles from regions to get the + set of referenced mob hfiles and archive those that are no longer + referenced. control period with 'hbase.master.mob.cleaner.period' +- Optionally, RS that are compacting mob files can limit write + amplification by not rewriting values from mob hfiles over a certain size + limit. opt-in by setting 'hbase.mob.compaction.type' to 'optimized'. + control threshold by 'hbase.mob.compactions.max.file.size'. + default is 1GiB +- Should smoothly integrate with existing MOB users via rolling upgrade. + will delay old MOB file cleanup until per-region compaction has managed + to compact each region at least once so that used mob hfile metadata can + be gathered. + +This improvement obviates the dataloss in HBASE-22075. - # HBASE 2.2.0 Release Notes @@ -32,7 +1608,7 @@ These release notes cover new developer and user-facing incompatibilities, impor See the document http://hbase.apache.org/book.html#upgrade2.2 about how to upgrade from 2.0 or 2.1 to 2.2+. -HBase 2.2+ uses a new Procedure form assiging/unassigning/moving Regions. It does not process HBase 2.1 and 2.0's Unassign/Assign Procedure types. 
Upgrade requires that we first drain the Master Procedure Store of old style Procedures before starting the new 2.2 Master. So you need to make sure that before you kill the old version (2.0 or 2.1) Master, there is no region in transition. And once the new version (2.2+) Master is up, you can rolling upgrade RegionServers one by one.
+HBase 2.2+ uses a new Procedure for assigning/unassigning/moving Regions. It does not process HBase 2.1 and 2.0's Unassign/Assign Procedure types. Upgrade requires that we first drain the Master Procedure Store of old-style Procedures before starting the new 2.2 Master. So you need to make sure that, before you kill the old version (2.0 or 2.1) Master, there is no region in transition. And once the new version (2.2+) Master is up, you can rolling upgrade RegionServers one by one.

And there is a safer way if you are running a 2.1.1+ or 2.0.3+ cluster. It needs four steps to upgrade the Master.
@@ -421,15 +1997,15 @@ Previously the recovered.edits directory was under the root directory. This JIRA

When the oldwals (and hfile) cleaner cleans stale wals (and hfiles), it will periodically check and wait for the clean results from the filesystem; the total wait time will be no more than a max time.

-The periodic wait-and-check configurations are hbase.oldwals.cleaner.thread.check.interval.msec (default is 500 ms) and hbase.regionserver.hfilecleaner.thread.check.interval.msec (default is 1000 ms).
+The periodic wait-and-check configurations are hbase.oldwals.cleaner.thread.check.interval.msec (default is 500 ms) and hbase.regionserver.hfilecleaner.thread.check.interval.msec (default is 1000 ms).

Meanwhile, the max time configurations are hbase.oldwals.cleaner.thread.timeout.msec and hbase.regionserver.hfilecleaner.thread.timeout.msec; they are set to 60 seconds by default.

All of them support dynamic configuration.

-E.g. in the oldwals cleaning scenario, one may consider tuning hbase.oldwals.cleaner.thread.timeout.msec and hbase.oldwals.cleaner.thread.check.interval.msec:
+E.g. in the oldwals cleaning scenario, one may consider tuning hbase.oldwals.cleaner.thread.timeout.msec and hbase.oldwals.cleaner.thread.check.interval.msec:

-1. If deleting an oldwal never completes (strange but possible), the delete-file task needs to wait for a max of 60 seconds. Here, 60 seconds might be too long; conversely, it can be increased beyond 60 seconds for use cases with slow file deletes.
+1. If deleting an oldwal never completes (strange but possible), the delete-file task needs to wait for a max of 60 seconds. Here, 60 seconds might be too long; conversely, it can be increased beyond 60 seconds for use cases with slow file deletes.
2. The check-and-wait period for a file delete defaults to 500 milliseconds. One might want to tune this checking period to a shorter interval to check more frequently, or to a longer interval to avoid checking too often (the longer interval can be used to avoid checking too fast when using high-latency storage).
@@ -461,12 +2037,12 @@ Solution: After this jira, the compaction event tracker will be written to HFile.

* [HBASE-21820](https://issues.apache.org/jira/browse/HBASE-21820) | *Major* | **Implement CLUSTER quota scope**

-HBase contains two quota scopes: MACHINE and CLUSTER.
Before this patch, set quota operations did not expose a scope option to the client API and used MACHINE as the default; CLUSTER scope could not be set or used.
Shell commands are as follows:
set\_quota, TYPE =\> THROTTLE, TABLE =\> 't1', LIMIT =\> '10req/sec'

This issue implements CLUSTER scope in a simple way: for user, namespace, and user-over-namespace quotas, use [ClusterLimit / RSNum] as the machine limit. For table and user-over-table quotas, use [ClusterLimit / TotalTableRegionNum \* MachineTableRegionNum] as the machine limit.

-After this patch, users can set CLUSTER scope quotas, but MACHINE is still the default if the user does not specify a scope.
+After this patch, users can set CLUSTER scope quotas, but MACHINE is still the default if the user does not specify a scope.
Shell commands are as follows:
set\_quota, TYPE =\> THROTTLE, TABLE =\> 't1', LIMIT =\> '10req/sec'
set\_quota, TYPE =\> THROTTLE, TABLE =\> 't1', LIMIT =\> '10req/sec', SCOPE =\> MACHINE
@@ -491,11 +2067,11 @@ Remove bloom filter type ROWPREFIX\_DELIMITED. May add it back when find a bette

* [HBASE-21783](https://issues.apache.org/jira/browse/HBASE-21783) | *Major* | **Support exceed user/table/ns throttle quota if region server has available quota**

-Support enabling or disabling the exceed throttle quota. Exceed throttle quota means a user can over-consume their user/namespace/table quota if the region server has additional available quota because other users are not consuming it at the same time.
+Support enabling or disabling the exceed throttle quota. Exceed throttle quota means a user can over-consume their user/namespace/table quota if the region server has additional available quota because other users are not consuming it at the same time.
Use the following shell commands to enable/disable the exceed throttle quota:
enable\_exceed\_throttle\_quota
disable\_exceed\_throttle\_quota
-There are two limits when enabling the exceed throttle quota:
-1. At least one read and one write region server throttle quota must be set;
+There are two limits when enabling the exceed throttle quota:
+1. At least one read and one write region server throttle quota must be set;
2. All region server throttle quotas must be in the seconds time unit, because once previous requests exceed their quota and consume region server quota, quota in other time units may take a long time to be refilled, and this may affect later requests.
@@ -621,7 +2197,7 @@ Add a clearRegionLocationCache method in Connection to clear the region location

* [HBASE-21713](https://issues.apache.org/jira/browse/HBASE-21713) | *Major* | **Support set region server throttle quota**

-Support setting a region server rpc throttle quota, which represents the read/write ability of region servers and throttles when a region server's total requests exceed the limit.
+Support setting a region server rpc throttle quota, which represents the read/write ability of region servers and throttles when a region server's total requests exceed the limit.
Use the following shell command to set the RS quota:
set\_quota TYPE =\> THROTTLE, REGIONSERVER =\> 'all', THROTTLE\_TYPE =\> WRITE, LIMIT =\> '20000req/sec'
@@ -650,7 +2226,7 @@ Adds shell support for the following:

* [HBASE-21734](https://issues.apache.org/jira/browse/HBASE-21734) | *Major* | **Some optimization in FilterListWithOR**

-After HBASE-21620, FilterListWithOR has been a bit slow because we need to merge each sub-filter's RC (ReturnCode), while before HBASE-21620 we would skip much of the RC merging, but that logic was wrong. So here we choose another way to optimize the performance: removing the KeyValueUtil#toNewKeyCell.
+After HBASE-21620, FilterListWithOR has been a bit slow because we need to merge each sub-filter's RC (ReturnCode), while before HBASE-21620 we would skip much of the RC merging, but that logic was wrong. So here we choose another way to optimize the performance: removing the KeyValueUtil#toNewKeyCell.
Anoop Sam John suggested that KeyValueUtil#toNewKeyCell could save some GC before, because if we copy the key part of a cell into a single byte[], then the block the cell refers to is no longer referenced by the filter list, and the upper layer can GC the data block quickly. However, after HBASE-21620 we update the prevCellList for every encountered cell, so the lifecycle of a cell in prevCellList for FilterList is much shorter; so we just use the cell reference to save CPU.
BTW, we removed all the array-stream usage in the filter list, because it is also quite time-consuming in our tests.
@@ -702,15 +2278,15 @@ Python3 support was added to dev-support/submit-patch.py. To install newly requi

In HBASE-21657, I simplified the path of estimatedSerializedSize() & estimatedSerializedSizeOfCell() by moving the general getSerializedSize() and heapSize() from ExtendedCell to the Cell interface. The patch also included some other improvements:
-1. For 99% of cases, our cells have no tags, so let HFileScannerImpl just return a NoTagsByteBufferKeyValue if there are no tags, which means we can save
-   lots of CPU time when sending a no-tags cell to RPC, because we can just return the length instead of getting the serialized size by calculating the offset/length
+1. For 99% of cases, our cells have no tags, so let HFileScannerImpl just return a NoTagsByteBufferKeyValue if there are no tags, which means we can save
+   lots of CPU time when sending a no-tags cell to RPC, because we can just return the length instead of getting the serialized size by calculating the offset/length
   of each field (row/cf/cq..)
2. Move the subclasses' getSerializedSize implementations from ExtendedCell to their own classes, which means we no longer need to call ExtendedCell's getSerializedSize() first and then forward to the subclass's getSerializedSize(withTags).
3. Give an estimated result ArrayList size to avoid frequent list resizing during a big scan; now we estimate the array size as min(scan.rows, 512). That also helps a lot.

-We gain almost ~40% throughput improvement in the 100% scan case for branch-2 (cacheHitRatio~100%)[1], which is a good thing. However, it is an incompatible change in
+We gain almost ~40% throughput improvement in the 100% scan case for branch-2 (cacheHitRatio~100%)[1], which is a good thing. However, it is an incompatible change in
some cases, such as if an upstream user implemented their own Cells; although rare, it can happen, and then their code will no longer compile.
@@ -732,7 +2308,7 @@ Before this issue, thrift1 server and thrift2 server are totally different serve

* [HBASE-21661](https://issues.apache.org/jira/browse/HBASE-21661) | *Major* | **Provide Thrift2 implementation of Table/Admin**

-ThriftAdmin/ThriftTable are implemented based on Thrift2. With ThriftAdmin/ThriftTable, people can use the thrift2 protocol just like HTable/HBaseAdmin.
+ThriftAdmin/ThriftTable are implemented based on Thrift2. With ThriftAdmin/ThriftTable, people can use the thrift2 protocol just like HTable/HBaseAdmin.
Example of using ThriftConnection:
Configuration conf = HBaseConfiguration.create();
conf.set(ClusterConnection.HBASE\_CLIENT\_CONNECTION\_IMPL, ThriftConnection.class.getName());
@@ -766,7 +2342,7 @@ Add a new configuration "hbase.skip.load.duplicate.table.coprocessor". 
The defau * [HBASE-21650](https://issues.apache.org/jira/browse/HBASE-21650) | *Major* | **Add DDL operation and some other miscellaneous to thrift2** -Added DDL operations and some other structure definition to thrift2. Methods added: +Added DDL operations and some other structure definition to thrift2. Methods added: create/modify/addColumnFamily/deleteColumnFamily/modifyColumnFamily/enable/disable/truncate/delete table create/modify/delete namespace get(list)TableDescriptor(s)/get(list)NamespaceDescirptor(s) @@ -845,8 +2421,8 @@ hbase(main):003:0> rit hbase(main):004:0> unassign '56f0c38c81ae453d19906ce156a2d6a1' 0 row(s) in 0.0540 seconds -hbase(main):005:0> rit -IntegrationTestBigLinkedList,L\xCC\xCC\xCC\xCC\xCC\xCC\xCB,1539117183224.56f0c38c81ae453d19906ce156a2d6a1. state=PENDING_CLOSE, ts=Tue Oct 09 20:33:34 UTC 2018 (0s ago), server=null +hbase(main):005:0> rit +IntegrationTestBigLinkedList,L\xCC\xCC\xCC\xCC\xCC\xCC\xCB,1539117183224.56f0c38c81ae453d19906ce156a2d6a1. state=PENDING_CLOSE, ts=Tue Oct 09 20:33:34 UTC 2018 (0s ago), server=null 1 row(s) in 0.0170 seconds ``` @@ -1329,7 +2905,7 @@ This represents an incompatible change for users who relied on this implementati This enhances the AccessControlClient APIs to retrieve the permissions based on namespace, table name, family and qualifier for specific user. AccessControlClient can also validate a user whether allowed to perform specified operations on a particular table. Following APIs have been added, -1) getUserPermissions(Connection connection, String tableRegex, byte[] columnFamily, byte[] columnQualifier, String userName) +1) getUserPermissions(Connection connection, String tableRegex, byte[] columnFamily, byte[] columnQualifier, String userName) Scope of retrieving permission will be same as existing. 2) hasPermission(onnection connection, String tableName, byte[] columnFamily, byte[] columnQualifier, String userName, Permission.Action... actions) Scope of validating user privilege, @@ -2095,11 +3671,11 @@ ColumnValueFilter provides a way to fetch matched cells only by providing specif A region is flushed if its memory component exceeds the region flush threshold. A flush policy decides which stores to flush by comparing the size of the store to a column-family-flush threshold. -If the overall size of all memstores in the machine exceeds the bounds defined by the administrator (denoted global pressure) a region is selected and flushed. +If the overall size of all memstores in the machine exceeds the bounds defined by the administrator (denoted global pressure) a region is selected and flushed. HBASE-18294 changes flush decisions to be based on heap-occupancy and not data (key-value) size, consistently across levels. This rolls back some of the changes by HBASE-16747. Specifically, (1) RSs, Regions and stores track their overall on-heap and off-heap occupancy, (2) A region is flushed when its on-heap+off-heap size exceeds the region flush threshold specified in hbase.hregion.memstore.flush.size, -(3) The store to be flushed is chosen based on its on-heap+off-heap size +(3) The store to be flushed is chosen based on its on-heap+off-heap size (4) At the RS level, a flush is triggered when the overall on-heap exceeds the on-heap limit, or when the overall off-heap size exceeds the off-heap limit (low/high water marks). 
Note that when the region flush size is set to XXmb a region flush may be triggered even before writing keys and values of size XX because the total heap occupancy of the region which includes additional metadata exceeded the threshold. @@ -2615,13 +4191,13 @@ And for server side, the default hbase.client.serverside.retries.multiplier was * [HBASE-18090](https://issues.apache.org/jira/browse/HBASE-18090) | *Major* | **Improve TableSnapshotInputFormat to allow more multiple mappers per region** -In this task, we make it possible to run multiple mappers per region in the table snapshot. The following code is primary table snapshot mapper initializatio: +In this task, we make it possible to run multiple mappers per region in the table snapshot. The following code is primary table snapshot mapper initializatio: TableMapReduceUtil.initTableSnapshotMapperJob( snapshotName, // The name of the snapshot (of a table) to read from scan, // Scan instance to control CF and attribute selection mapper, // mapper - outputKeyClass, // mapper output key + outputKeyClass, // mapper output key outputValueClass, // mapper output value job, // The current job to adjust true, // upload HBase jars and jars for any of the configured job classes via the distributed cache (tmpjars) @@ -2634,7 +4210,7 @@ TableMapReduceUtil.initTableSnapshotMapperJob( snapshotName, // The name of the snapshot (of a table) to read from scan, // Scan instance to control CF and attribute selection mapper, // mapper - outputKeyClass, // mapper output key + outputKeyClass, // mapper output key outputValueClass, // mapper output value job, // The current job to adjust true, // upload HBase jars and jars for any of the configured job classes via the distributed cache (tmpjars) @@ -2672,7 +4248,7 @@ List\ getTags() Optional\ getTag(byte type) byte[] cloneTags() -The above APIs helps to read tags from the Cell. +The above APIs helps to read tags from the Cell. CellUtil#createCell(Cell cell, List\ tags) CellUtil#createCell(Cell cell, byte[] tags) @@ -2808,7 +4384,7 @@ Change the import order rule that now we should put the shaded import at bottom. * [HBASE-19187](https://issues.apache.org/jira/browse/HBASE-19187) | *Minor* | **Remove option to create on heap bucket cache** Removing the on heap Bucket cache feature. -The config "hbase.bucketcache.ioengine" no longer support the 'heap' value. +The config "hbase.bucketcache.ioengine" no longer support the 'heap' value. Its supported values now are 'offheap', 'file:\', 'files:\' and 'mmap:\' @@ -2964,12 +4540,12 @@ Removes blanket bypass mechanism (Observer#bypass). Instead, a curated subset of The below methods have been marked deprecated in hbase2. We would have liked to have removed them because they use IA.Private parameters but they are in use by CoreCoprocessors or are critical to downstreamers and we have no alternatives to provide currently. 
@Deprecated public boolean prePrepareTimeStampForDeleteVersion(final Mutation mutation, final Cell kv, final byte[] byteNow, final Get get) throws IOException { - + @Deprecated public boolean preWALRestore(final RegionInfo info, final WALKey logKey, final WALEdit logEdit) throws IOException { @Deprecated public void postWALRestore(final RegionInfo info, final WALKey logKey, final WALEdit logEdit) throws IOException { - -@Deprecated public DeleteTracker postInstantiateDeleteTracker(DeleteTracker result) throws IOException + +@Deprecated public DeleteTracker postInstantiateDeleteTracker(DeleteTracker result) throws IOException Metrics are updated now even if the Coprocessor does a bypass; e.g. The put count is updated even if a Coprocessor bypasses the core put operation (We do it this way so no need for Coprocessors to have access to our core metrics system). @@ -3000,7 +4576,7 @@ Made defaults for Server#isStopping and Server#getFileSystem. Should have done t * [HBASE-19047](https://issues.apache.org/jira/browse/HBASE-19047) | *Critical* | **CP exposed Scanner types should not extend Shipper** RegionObserver#preScannerOpen signature changed -RegionScanner preScannerOpen( ObserverContext\ c, Scan scan, RegionScanner s) -\> void preScannerOpen( ObserverContext\ c, Scan scan) +RegionScanner preScannerOpen( ObserverContext\ c, Scan scan, RegionScanner s) -\> void preScannerOpen( ObserverContext\ c, Scan scan) The pre hook can no longer return a RegionScanner instance. @@ -3084,12 +4660,12 @@ Add missing deprecation tag for long getRpcTimeout(TimeUnit unit) in AsyncTableB * [HBASE-18410](https://issues.apache.org/jira/browse/HBASE-18410) | *Major* | **FilterList Improvement.** -In this task, we fixed all existing bugs in FilterList, and did the code refactor which ensured interface compatibility . +In this task, we fixed all existing bugs in FilterList, and did the code refactor which ensured interface compatibility . -The primary bug fixes are : -1. For sub-filter in FilterList with MUST\_PASS\_ONE, if previous filterKeyValue() of sub-filter returns NEXT\_COL, we cannot make sure that the next cell will be the first cell in next column, because FilterList choose the minimal forward step among sub-filters, and it may return a SKIP. so here we add an extra check to ensure that the next cell will match preivous return code for sub-filters. +The primary bug fixes are : +1. For sub-filter in FilterList with MUST\_PASS\_ONE, if previous filterKeyValue() of sub-filter returns NEXT\_COL, we cannot make sure that the next cell will be the first cell in next column, because FilterList choose the minimal forward step among sub-filters, and it may return a SKIP. so here we add an extra check to ensure that the next cell will match preivous return code for sub-filters. 2. Previous logic about transforming cell of FilterList is incorrect, we should set the previous transform result (rather than the given cell in question) as the initial vaule of transform cell before call filterKeyValue() of FilterList. -3. Handle the ReturnCodes which the previous code did not handle. +3. Handle the ReturnCodes which the previous code did not handle. About code refactor, we divided the FilterList into two separated sub-classes: FilterListWithOR and FilterListWithAND, The FilterListWithOR has been optimised to choose the next minimal step to seek cell rather than SKIP cell one by one, and the FilterListWithAND has been optimised to choose the next maximal key to seek among sub-filters in filter list. 
All in all, The code in FilterList is clean and easier to follow now. @@ -3901,7 +5477,7 @@ Changes ObserverContext from a class to an interface and hides away constructor, * [HBASE-18649](https://issues.apache.org/jira/browse/HBASE-18649) | *Major* | **Deprecate KV Usage in MR to move to Cells in 3.0** -All the mappers and reducers output type will be now of MapReduceCell type. No more KeyValue type. How ever in branch-2 for compatibility we have allowed the older interfaces/classes that work with KeyValue to stay in the code base but they have been marked as deprecated. +All the mappers and reducers output type will be now of MapReduceCell type. No more KeyValue type. How ever in branch-2 for compatibility we have allowed the older interfaces/classes that work with KeyValue to stay in the code base but they have been marked as deprecated. The following interfaces/classes have been deprecated in branch-2 Import#KeyValueWritableComparablePartitioner Import#KeyValueWritableComparator @@ -3936,8 +5512,8 @@ The changes of IA.Public/IA.LimitedPrivate classes are shown below: HTableDescriptor class \* boolean hasRegionMemstoreReplication() + boolean hasRegionMemStoreReplication() -\* HTableDescriptor setRegionMemstoreReplication(boolean) -+ HTableDescriptor setRegionMemStoreReplication(boolean) +\* HTableDescriptor setRegionMemstoreReplication(boolean) ++ HTableDescriptor setRegionMemStoreReplication(boolean) RegionLoadStats class \* int getMemstoreLoad() @@ -4013,8 +5589,8 @@ HBaseTestingUtility class - void modifyTableSync(Admin admin, HTableDescriptor desc) - HRegion createLocalHRegion(HTableDescriptor desc, byte [] startKey, byte [] endKey) - HRegion createLocalHRegion(HRegionInfo info, HTableDescriptor desc) -- HRegion createLocalHRegion(HRegionInfo info, TableDescriptor desc) -+ HRegion createLocalHRegion(RegionInfo info, TableDescriptor desc) +- HRegion createLocalHRegion(HRegionInfo info, TableDescriptor desc) ++ HRegion createLocalHRegion(RegionInfo info, TableDescriptor desc) - HRegion createLocalHRegion(HRegionInfo info, HTableDescriptor desc, WAL wal) - HRegion createLocalHRegion(HRegionInfo info, TableDescriptor desc, WAL wal) + HRegion createLocalHRegion(RegionInfo info, TableDescriptor desc, WAL wal) @@ -4121,7 +5697,7 @@ We used to pass the RegionServerServices (RSS) which gave Coprocesosrs (CP) all Removed method getRegionServerServices from CP exposed RegionCoprocessorEnvironment and RegionServerCoprocessorEnvironment and replaced with getCoprocessorRegionServerServices. This returns a new interface CoprocessorRegionServerServices which is only a subset of RegionServerServices. 
With that below methods are no longer exposed for CPs WAL getWAL(HRegionInfo regionInfo) -List\ getWALs() +List\ getWALs() FlushRequester getFlushRequester() RegionServerAccounting getRegionServerAccounting() RegionServerRpcQuotaManager getRegionServerRpcQuotaManager() @@ -4161,8 +5737,8 @@ void addToOnlineRegions(Region region) boolean removeFromOnlineRegions(final Region r, ServerName destination) Also 3 methods name have been changed -List\ getOnlineRegions(TableName tableName) -\> List\ getRegions(TableName tableName) -List\ getOnlineRegions() -\> List\ getRegions() +List\ getOnlineRegions(TableName tableName) -\> List\ getRegions(TableName tableName) +List\ getOnlineRegions() -\> List\ getRegions() Region getFromOnlineRegions(final String encodedRegionName) -\> Region getRegion(final String encodedRegionName) @@ -4225,7 +5801,7 @@ void closeReader(boolean evictOnClose) throws IOException; void markCompactedAway(); void deleteReader() throws IOException; -Notice that these methods are still available in HStoreFile. +Notice that these methods are still available in HStoreFile. And the return value of getFirstKey and getLastKey are changed from Cell to Optional\ to better indicate that they may not be available. @@ -4528,7 +6104,7 @@ Replaces hbase-shaded-server-\.jar with hbase-shaded-mapreduce-\,SnapshotDescription,TableDescripto) + preRestoreSnapshot(ObserverContext\,List\, List\,String) ++ preGetTableDescriptors(ObserverContext\,List\, List\,String) + postGetTableDescriptors(ObserverContext\,List\, List\,String) + preGetTableNames(ObserverContext\,List\, String) + postGetTableNames(ObserverContext\,List\, String) @@ -5063,11 +6639,11 @@ Committed to master and branch-2. Thanks! In order to use this feature, a user must 
 1. Register their tables when configuring their job -
2. Create a composite key of the tablename and original rowkey to send as the mapper output key. +
2. Create a composite key of the tablename and original rowkey to send as the mapper output key. 

To register their tables (and configure their job for incremental load into multiple tables), a user must call the static MultiHFileOutputFormat.configureIncrementalLoad function to register the HBase tables that will be ingested into. 

 -To create the composite key, a helper function MultiHFileOutputFormat2.createCompositeKey should be called with the destination tablename and rowkey as arguments, and the result should be output as the mapper key. +To create the composite key, a helper function MultiHFileOutputFormat2.createCompositeKey should be called with the destination tablename and rowkey as arguments, and the result should be output as the mapper key. 
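As an illustration only, here is a rough mapper sketch of the two steps described above. The exact signature of MultiHFileOutputFormat2.createCompositeKey is an assumption (a byte[] table name plus byte[] row key), and the table, family, and qualifier names are placeholders; check the class in your HBase version for the real API:

{code}
import java.io.IOException;

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hedged sketch: assumes a createCompositeKey(byte[] tableName, byte[] rowKey)
// helper as described in the note above.
public class MultiTableHFileMapper
    extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {

  private static final byte[] TABLE = Bytes.toBytes("table1"); // destination table (placeholder)
  private static final byte[] FAMILY = Bytes.toBytes("f");     // placeholder column family

  @Override
  protected void map(LongWritable key, Text line, Context context)
      throws IOException, InterruptedException {
    byte[] rowKey = Bytes.toBytes(line.toString());
    // Step 2: emit a composite of (table name, original row key) as the mapper output key.
    byte[] composite = MultiHFileOutputFormat2.createCompositeKey(TABLE, rowKey);
    Put put = new Put(rowKey);
    put.addColumn(FAMILY, Bytes.toBytes("q"), line.copyBytes());
    context.write(new ImmutableBytesWritable(composite), put);
  }
}
{code}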
Before this JIRA, for HFileOutputFormat2 a configuration for the storage policy was set per Column Family. This was set manually by the user. In this JIRA, this is unchanged when using HFileOutputFormat2. However, when specifically using MultiHFileOutputFormat2, the user now has to manually set the prefix by creating a composite of the table name and the column family. The user can create the new composite value by calling MultiHFileOutputFormat2.createCompositeKey with the tablename and column family as arguments. @@ -5080,9 +6656,9 @@ The configuration parameter "hbase.mapreduce.hfileoutputformat.table.name" is no * [HBASE-18229](https://issues.apache.org/jira/browse/HBASE-18229) | *Critical* | **create new Async Split API to embrace AM v2** -A new splitRegionAsync() API is added in client. The existing splitRegion() and split() API will call the new API so client does not have to change its code. +A new splitRegionAsync() API is added in client. The existing splitRegion() and split() API will call the new API so client does not have to change its code. -Move HBaseAdmin.splitXXX() logic to master, client splitXXX() API now go to master directly instead of going to RegionServer first. +Move HBaseAdmin.splitXXX() logic to master, client splitXXX() API now go to master directly instead of going to RegionServer first. Also added splitSync() API @@ -5236,7 +6812,7 @@ Add unit tests for truncate\_preserve * [HBASE-18240](https://issues.apache.org/jira/browse/HBASE-18240) | *Major* | **Add hbase-thirdparty, a project with hbase utility including an hbase-shaded-thirdparty module with guava, netty, etc.** -Adds a new project, hbase-thirdparty, at https://git-wip-us.apache.org/repos/asf/hbase-thirdparty used by core hbase. GroupID org.apache.hbase.thirdparty. Version 1.0.0. +Adds a new project, hbase-thirdparty, at https://git-wip-us.apache.org/repos/asf/hbase-thirdparty used by core hbase. GroupID org.apache.hbase.thirdparty. Version 1.0.0. This project packages relocated third-party libraries used by Apache HBase such as protobuf, guava, and netty among others. HBase core depends on it. @@ -5275,9 +6851,9 @@ After HBASE-17110 the bytable strategy for SimpleLoadBalancer will also take ser Adds clear\_compaction\_queues to the hbase shell. {code} Clear compaction queues on a regionserver. - The queue\_name contains short and long. + The queue\_name contains short and long. short is shortCompactions's queue,long is longCompactions's queue. - + Examples: hbase\> clear\_compaction\_queues 'host187.example.com,60020' hbase\> clear\_compaction\_queues 'host187.example.com,60020','long' @@ -5367,8 +6943,8 @@ Adds a sort of procedures before submission so system tables are queued first (w * [HBASE-18008](https://issues.apache.org/jira/browse/HBASE-18008) | *Major* | **Any HColumnDescriptor we give out should be immutable** -1) The HColumnDescriptor got from Admin, AsyncAdmin, and Table is immutable. -2) HColumnDescriptor have been marked as "Deprecated" and user should substituted +1) The HColumnDescriptor got from Admin, AsyncAdmin, and Table is immutable. +2) HColumnDescriptor have been marked as "Deprecated" and user should substituted ColumnFamilyDescriptor for HColumnDescriptor. 3) ColumnFamilyDescriptor is constructed through ColumnFamilyDescriptorBuilder and it contains all of the read-only methods from HColumnDescriptor 4) The value to which the IS\_MOB/MOB\_THRESHOLD is mapped is stored as String rather than Boolean/Long. 
The MOB is an new feature to 2.0 so this change should be acceptable @@ -5551,7 +7127,7 @@ The default behavior for abort() method of StateMachineProcedure class is change * [HBASE-16851](https://issues.apache.org/jira/browse/HBASE-16851) | *Major* | **User-facing documentation for the In-Memory Compaction feature** -Two blog posts on Apache HBase blog: user manual and programmer manual. +Two blog posts on Apache HBase blog: user manual and programmer manual. Ref. guide draft published: https://docs.google.com/document/d/1Xi1jh\_30NKnjE3wSR-XF5JQixtyT6H\_CdFTaVi78LKw/edit @@ -5564,18 +7140,18 @@ Ref. guide draft published: https://docs.google.com/document/d/1Xi1jh\_30NKnjE3w CompactingMemStore achieves these gains through smart use of RAM. The algorithm periodically re-organizes the in-memory data in efficient data structures and reduces redundancies. The HBase server’s memory footprint therefore periodically expands and contracts. The outcome is longer lifetime of data in memory, less I/O, and overall faster performance. More details about the algorithm and its use appear in the Apache HBase Blog: https://blogs.apache.org/hbase/ How To Use: -The in-memory compaction level can be configured both globally and per column family. The supported levels are none (DefaultMemStore), basic, and eager. +The in-memory compaction level can be configured both globally and per column family. The supported levels are none (DefaultMemStore), basic, and eager. -By default, all tables apply basic in-memory compaction. This global configuration can be overridden in hbase-site.xml, as follows: +By default, all tables apply basic in-memory compaction. This global configuration can be overridden in hbase-site.xml, as follows: \ \hbase.hregion.compacting.memstore.type\ \\\ \ -The level can also be configured in the HBase shell per column family, as follows: +The level can also be configured in the HBase shell per column family, as follows: -create ‘\’, +create ‘\’, {NAME =\> ‘\’, IN\_MEMORY\_COMPACTION =\> ‘\’} @@ -5656,7 +7232,7 @@ MVCCPreAssign is added by HBASE-16698, but pre-assign mvcc is only used in put/d * [HBASE-16466](https://issues.apache.org/jira/browse/HBASE-16466) | *Major* | **HBase snapshots support in VerifyReplication tool to reduce load on live HBase cluster with large tables** -Support for snapshots in VerifyReplication tool i.e. verifyrep can compare source table snapshot against peer table snapshot which reduces load on RS by reading data from HDFS directly using Snapshot scanners. +Support for snapshots in VerifyReplication tool i.e. verifyrep can compare source table snapshot against peer table snapshot which reduces load on RS by reading data from HDFS directly using Snapshot scanners. Instead of comparing against live tables whose state changes due to writes and compactions its better to compare HBase snapshots which are immutable in nature. @@ -5827,7 +7403,7 @@ Now small scan and limited scan could also return partial results. * [HBASE-16014](https://issues.apache.org/jira/browse/HBASE-16014) | *Major* | **Get and Put constructor argument lists are divergent** Add 2 constructors fot API Get -1. Get(byte[], int, int) +1. Get(byte[], int, int) 2. Get(ByteBuffer) @@ -5986,7 +7562,7 @@ Changes all tests to use the TestName JUnit Rule everywhere rather than hardcode The HBase cleaner chore process cleans up old WAL files and archived HFiles. Cleaner operation can affect query performance when running heavy workloads, so disable the cleaner during peak hours. 
The cleaner has the following HBase shell commands: -- cleaner\_chore\_enabled: Queries whether cleaner chore is enabled/ disabled. +- cleaner\_chore\_enabled: Queries whether cleaner chore is enabled/ disabled. - cleaner\_chore\_run: Manually runs the cleaner to remove files. - cleaner\_chore\_switch: enables or disables the cleaner and returns the previous state of the cleaner. For example, cleaner-switch true enables the cleaner. @@ -6049,8 +7625,8 @@ Now the scan.setSmall method is deprecated. Consider using scan.setLimit and sca Mob compaction partition policy can be set by hbase\> create 't1', {NAME =\> 'f1', IS\_MOB =\> true, MOB\_THRESHOLD =\> 1000000, MOB\_COMPACT\_PARTITION\_POLICY =\> 'weekly'} - -or + +or hbase\> alter 't1', {NAME =\> 'f1', IS\_MOB =\> true, MOB\_THRESHOLD =\> 1000000, MOB\_COMPACT\_PARTITION\_POLICY =\> 'monthly'} @@ -6093,16 +7669,16 @@ Fix inability at finding static content post push of parent issue moving us to j * [HBASE-9774](https://issues.apache.org/jira/browse/HBASE-9774) | *Major* | **HBase native metrics and metric collection for coprocessors** -This issue adds two new modules, hbase-metrics and hbase-metrics-api which define and implement the "new" metric system used internally within HBase. These two modules (and some other code in hbase-hadoop2-compat) module are referred as "HBase metrics framework" which is HBase-specific and independent of any other metrics library (including Hadoop metrics2 and dropwizards metrics). +This issue adds two new modules, hbase-metrics and hbase-metrics-api which define and implement the "new" metric system used internally within HBase. These two modules (and some other code in hbase-hadoop2-compat) module are referred as "HBase metrics framework" which is HBase-specific and independent of any other metrics library (including Hadoop metrics2 and dropwizards metrics). HBase Metrics API (hbase-metrics-api) contains the interface that HBase exposes internally and to third party code (including coprocessors). It is a thin -abstraction over the actual implementation for backwards compatibility guarantees. The metrics API in this hbase-metrics-api module is inspired by the Dropwizard metrics 3.1 API, however, the API is completely independent. +abstraction over the actual implementation for backwards compatibility guarantees. The metrics API in this hbase-metrics-api module is inspired by the Dropwizard metrics 3.1 API, however, the API is completely independent. -hbase-metrics module contains implementation of the "HBase Metrics API", including MetricRegistry, Counter, Histogram, etc. These are highly concurrent implementations of the Metric interfaces. Metrics in HBase are grouped into different sets (like WAL, RPC, RegionServer, etc). Each group of metrics should be tracked via a MetricRegistry specific to that group. +hbase-metrics module contains implementation of the "HBase Metrics API", including MetricRegistry, Counter, Histogram, etc. These are highly concurrent implementations of the Metric interfaces. Metrics in HBase are grouped into different sets (like WAL, RPC, RegionServer, etc). Each group of metrics should be tracked via a MetricRegistry specific to that group. Historically, HBase has been using Hadoop's Metrics2 framework [3] for collecting and reporting the metrics internally. However, due to the difficultly of dealing with the Metrics2 framework, HBase is moving away from Hadoop's metrics implementation to its custom implementation. 
The move will happen incrementally, and during the time, both Hadoop Metrics2-based metrics and hbase-metrics module based classes will be in the source code. All new implementations for metrics SHOULD use the new API and framework. -This jira also introduces the metrics API to coprocessor implementations. Coprocessor writes can export custom metrics using the API and have those collected via metrics2 sinks, as well as exported via JMX in regionserver metrics. +This jira also introduces the metrics API to coprocessor implementations. Coprocessor writes can export custom metrics using the API and have those collected via metrics2 sinks, as well as exported via JMX in regionserver metrics. More documentation available at: hbase-metrics-api/README.txt @@ -6166,7 +7742,7 @@ Move locking to be procedure (Pv2) rather than zookeeper based. All locking move * [HBASE-17470](https://issues.apache.org/jira/browse/HBASE-17470) | *Major* | **Remove merge region code from region server** -In 1.x branches, Admin.mergeRegions calls MASTER via dispatchMergingRegions RPC; when executing dispatchMergingRegions RPC, MASTER calls RS via MergeRegions to complete the merge in RS-side. +In 1.x branches, Admin.mergeRegions calls MASTER via dispatchMergingRegions RPC; when executing dispatchMergingRegions RPC, MASTER calls RS via MergeRegions to complete the merge in RS-side. With HBASE-16119, the merge logic moves to master-side. This JIRA cleans up unused RPCs (dispatchMergingRegions and MergeRegions) , removes dangerous tools such as Merge and HMerge, and deletes unused RegionServer-side merge region logic in 2.0 release. @@ -6336,7 +7912,7 @@ Possible memstore compaction policies are: Memory compaction policeman be set at the column family level at table creation time: {code} create ‘\’, - {NAME =\> ‘\’, + {NAME =\> ‘\’, IN\_MEMORY\_COMPACTION =\> ‘\’} {code} or as a property at the global configuration level by setting the property in hbase-site.xml, with BASIC being the default value: @@ -6374,7 +7950,7 @@ Provides ability to restrict table coprocessors based on HDFS path whitelist. (P * [HBASE-17221](https://issues.apache.org/jira/browse/HBASE-17221) | *Major* | **Abstract out an interface for RpcServer.Call** -Provide an interface RpcCall on the server side. +Provide an interface RpcCall on the server side. RpcServer.Call now is marked as @InterfaceAudience.Private, and implements the interface RpcCall, @@ -6682,7 +8258,7 @@ Add AsyncConnection, AsyncTable and AsyncTableRegionLocator. Now the AsyncTable This issue fix three bugs: 1. rpcTimeout configuration not work for one rpc call in AP -2. operationTimeout configuration not work for multi-request (batch, put) in AP +2. operationTimeout configuration not work for multi-request (batch, put) in AP 3. setRpcTimeout and setOperationTimeout in HTable is not worked for AP and BufferedMutator. @@ -6712,7 +8288,7 @@ exist in a cleanly closed file. If an EOF is detected due to parsing or other errors while there are still unparsed bytes before the end-of-file trailer, we now reset the WAL to the very beginning and attempt a clean read-through. Because we will retry these failures indefinitely, two additional changes are made to help with diagnostics: \* On each retry attempt, a log message like the below will be emitted at the WARN level: - + Processing end of WAL file '{}'. At position {}, which is too far away from reported file length {}. Restarting WAL reading (see HBASE-15983 for details). @@ -7035,7 +8611,7 @@ Adds logging of region and server. 
Helpful debugging. Logging now looks like thi * [HBASE-14743](https://issues.apache.org/jira/browse/HBASE-14743) | *Minor* | **Add metrics around HeapMemoryManager** -A memory metrics reveals situations happened in both MemStores and BlockCache in RegionServer. Through this metrics, users/operators can know +A memory metrics reveals situations happened in both MemStores and BlockCache in RegionServer. Through this metrics, users/operators can know 1). Current size of MemStores and BlockCache in bytes. 2). Occurrence for Memstore minor and major flush. (named unblocked flush and blocked flush respectively, shown in histogram) 3). Dynamic changes in size between MemStores and BlockCache. (with Increase/Decrease as prefix, shown in histogram). And a counter for no changes, named DoNothingCounter. @@ -7062,7 +8638,7 @@ When LocalHBaseCluster is started from the command line the Master would give up * [HBASE-16052](https://issues.apache.org/jira/browse/HBASE-16052) | *Major* | **Improve HBaseFsck Scalability** -HBASE-16052 improves the performance and scalability of HBaseFsck, especially for large clusters with a small number of large tables. +HBASE-16052 improves the performance and scalability of HBaseFsck, especially for large clusters with a small number of large tables. Searching for lingering reference files is now a multi-threaded operation. Loading HDFS region directory information is now multi-threaded at the region-level instead of the table-level to maximize concurrency. A performance bug in HBaseFsck that resulted in redundant I/O and RPCs was fixed by introducing a FileStatusFilter that filters FileStatus objects directly. @@ -7078,7 +8654,7 @@ If zk based replication queue is used and useMulti is false, we will schedule a * [HBASE-3727](https://issues.apache.org/jira/browse/HBASE-3727) | *Minor* | **MultiHFileOutputFormat** -MultiHFileOutputFormat support output of HFiles from multiple tables. It will output directories and hfiles as follow, +MultiHFileOutputFormat support output of HFiles from multiple tables. It will output directories and hfiles as follow, --table1 --family1 --family2 @@ -7102,7 +8678,7 @@ Prior to this change, the integration test clients (IntegrationTest\*) relied on * [HBASE-13823](https://issues.apache.org/jira/browse/HBASE-13823) | *Major* | **Procedure V2: unnecessaery operations on AssignmentManager#recoverTableInDisablingState() and recoverTableInEnablingState()** -For cluster upgraded from 1.0.x or older releases, master startup would not continue the in-progress enable/disable table process. If orphaned znode with ENABLING/DISABLING state exists in the cluster, run hbck or manually fix the issue. +For cluster upgraded from 1.0.x or older releases, master startup would not continue the in-progress enable/disable table process. If orphaned znode with ENABLING/DISABLING state exists in the cluster, run hbck or manually fix the issue. For new cluster or cluster upgraded from 1.1.x and newer release, there is no issue to worry about. @@ -7111,9 +8687,9 @@ For new cluster or cluster upgraded from 1.1.x and newer release, there is no is * [HBASE-16095](https://issues.apache.org/jira/browse/HBASE-16095) | *Major* | **Add priority to TableDescriptor and priority region open thread pool** -Adds a PRIORITY property to the HTableDescriptor. PRIORITY should be in the same range as the RpcScheduler defines it (HConstants.XXX\_QOS). +Adds a PRIORITY property to the HTableDescriptor. 
PRIORITY should be in the same range as the RpcScheduler defines it (HConstants.XXX\_QOS). -Table priorities are only used for region opening for now. There can be other uses later (like RpcScheduling). +Table priorities are only used for region opening for now. There can be other uses later (like RpcScheduling). Regions of high priority tables (priority \>= than HIGH\_QOS) are opened from a different thread pool than the regular region open thread pool. However, table priorities are not used as a global order for region assigning or opening. @@ -7129,7 +8705,7 @@ When a replication endpoint is sent a shutdown request by the replication source * [HBASE-16087](https://issues.apache.org/jira/browse/HBASE-16087) | *Major* | **Replication shouldn't start on a master if if only hosts system tables** -Masters will no longer start any replication threads if they are hosting only system tables. +Masters will no longer start any replication threads if they are hosting only system tables. In order to change this add something to the config for tables on master that doesn't start with "hbase:" ( Replicating system tables is something that's currently unsupported and can open up security holes, so do this at your own peril) @@ -7138,7 +8714,7 @@ In order to change this add something to the config for tables on master that do * [HBASE-14548](https://issues.apache.org/jira/browse/HBASE-14548) | *Major* | **Expand how table coprocessor jar and dependency path can be specified** -Allow a directory containing the jars or some wildcards to be specified, such as: hdfs://namenode:port/user/hadoop-user/ +Allow a directory containing the jars or some wildcards to be specified, such as: hdfs://namenode:port/user/hadoop-user/ or hdfs://namenode:port/user/hadoop-user/\*.jar @@ -7185,12 +8761,12 @@ This patch introduces a new infrastructure for creation and maintenance of Maven NOTE that this patch should introduce two new WARNINGs ("Using platform encoding ... to copy filtered resources") into the hbase install process. These warnings are hard-wired into the maven-archetype-plugin:create-from-project goal. See hbase/hbase-archetypes/README.md, footnote [6] for details. -After applying the patch, see hbase/hbase-archetypes/README.md for details regarding the new archetype infrastructure introduced by this patch. (The README text is also conveniently positioned at the top of the patch itself.) +After applying the patch, see hbase/hbase-archetypes/README.md for details regarding the new archetype infrastructure introduced by this patch. (The README text is also conveniently positioned at the top of the patch itself.) -Here is the opening paragraph of the README.md file: -================= -The hbase-archetypes subproject of hbase provides an infrastructure for creation and maintenance of Maven archetypes pertinent to HBase. Upon deployment to the archetype catalog of the central Maven repository, these archetypes may be used by end-user developers to autogenerate completely configured Maven projects (including fully-functioning sample code) through invocation of the archetype:generate goal of the maven-archetype-plugin. -======== +Here is the opening paragraph of the README.md file: +================= +The hbase-archetypes subproject of hbase provides an infrastructure for creation and maintenance of Maven archetypes pertinent to HBase. 
Upon deployment to the archetype catalog of the central Maven repository, these archetypes may be used by end-user developers to autogenerate completely configured Maven projects (including fully-functioning sample code) through invocation of the archetype:generate goal of the maven-archetype-plugin. +======== The README.md file also contains several paragraphs under the heading, "Notes for contributors and committers to the HBase project", which explains the layout of 'hbase-archetypes', and how archetypes are created and installed into the local Maven repository, ready for deployment to the central Maven repository. It also outlines how new archetypes may be developed and added to the collection in the future. @@ -7249,7 +8825,7 @@ Adds a FifoRpcSchedulerFactory so you can try the FifoRpcScheduler by setting " * [HBASE-15989](https://issues.apache.org/jira/browse/HBASE-15989) | *Major* | **Remove hbase.online.schema.update.enable** -Removes the "hbase.online.schema.update.enable" property. +Removes the "hbase.online.schema.update.enable" property. from now, every operation that alter the schema (e.g. modifyTable, addFamily, removeFamily, ...) will use the online schema update. there is no need to disable/enable the table. @@ -7318,12 +8894,12 @@ See http://mail-archives.apache.org/mod\_mbox/hbase-dev/201605.mbox/%3CCAMUu0w-Z * [HBASE-15228](https://issues.apache.org/jira/browse/HBASE-15228) | *Major* | **Add the methods to RegionObserver to trigger start/complete restoring WALs** -Added two hooks around WAL restore. +Added two hooks around WAL restore. preReplayWALs(final ObserverContext\ ctx, HRegionInfo info, Path edits) and -postReplayWALs(final ObserverContext\ ctx, HRegionInfo info, Path edits) +postReplayWALs(final ObserverContext\ ctx, HRegionInfo info, Path edits) -Will be called at start and end of restore of a WAL file. +Will be called at start and end of restore of a WAL file. The other hook around WAL restore (preWALRestore ) will be called before restore of every entry within the WAL file. @@ -7565,12 +9141,12 @@ No functional change. Added javadoc, comments, and extra trace-level logging to Use 'hbase.hstore.compaction.date.tiered.window.factory.class' to specify the window implementation you like for date tiered compaction. Now the only and default implementation is org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory. -{code} -\ -\hbase.hstore.compaction.date.tiered.window.factory.class\ -\org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory\ -\ -\ +{code} +\ +\hbase.hstore.compaction.date.tiered.window.factory.class\ +\org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory\ +\ +\ {code} @@ -7669,15 +9245,15 @@ With this patch combined with HBASE-15389, when we compact, we can output multip 2. Bulk load files and the old file generated by major compaction before upgrading to DTCP. This will change the way to enable date tiered compaction. 
-To turn it on: +To turn it on: hbase.hstore.engine.class: org.apache.hadoop.hbase.regionserver.DateTieredStoreEngine -With tiered compaction all servers in the cluster will promote windows to higher tier at the same time, so using a compaction throttle is recommended: -hbase.regionserver.throughput.controller:org.apache.hadoop.hbase.regionserver.compactions.PressureAwareCompactionThroughputController +With tiered compaction all servers in the cluster will promote windows to higher tier at the same time, so using a compaction throttle is recommended: +hbase.regionserver.throughput.controller:org.apache.hadoop.hbase.regionserver.compactions.PressureAwareCompactionThroughputController hbase.hstore.compaction.throughput.higher.bound and hbase.hstore.compaction.throughput.lower.bound need to be set for desired throughput range as uncompressed rates. -Because there will most likely be more store files around, we need to adjust the configuration so that flush won't be blocked and compaction will be properly throttled: -hbase.hstore.blockingStoreFiles: change to 50 if using all default parameters when turning on date tiered compaction. Use 1.5~2 x projected file count if changing the parameters, Projected file count = windows per tier x tier count + incoming window min + files older than max age +Because there will most likely be more store files around, we need to adjust the configuration so that flush won't be blocked and compaction will be properly throttled: +hbase.hstore.blockingStoreFiles: change to 50 if using all default parameters when turning on date tiered compaction. Use 1.5~2 x projected file count if changing the parameters, Projected file count = windows per tier x tier count + incoming window min + files older than max age Because major compaction is turned on now, we also need to adjust the configuration for max file to compact according to the larger file count: hbase.hstore.compaction.max: set to the same number as hbase.hstore.blockingStoreFiles. @@ -7774,7 +9350,7 @@ Adds a configuration parameter "hbase.ipc.max.request.size" which defaults to 25 * [HBASE-15412](https://issues.apache.org/jira/browse/HBASE-15412) | *Major* | **Add average region size metric** -Adds a new metric for called "averageRegionSize" that is emitted as a regionserver metric. Metric description: +Adds a new metric for called "averageRegionSize" that is emitted as a regionserver metric. Metric description: Average region size over the region server including memstore and storefile sizes @@ -7817,7 +9393,7 @@ Fixed an issue in REST server checkAndDelete operation where the remaining cells * [HBASE-15377](https://issues.apache.org/jira/browse/HBASE-15377) | *Major* | **Per-RS Get metric is time based, per-region metric is size-based** -Per-region metrics related to Get histograms are changed from being response size based into being latency based similar to the per-regionserver metrics of the same name. +Per-region metrics related to Get histograms are changed from being response size based into being latency based similar to the per-regionserver metrics of the same name. Added GetSize histogram metrics at the per-regionserver and per-region level for the response sizes. @@ -7826,9 +9402,9 @@ Added GetSize histogram metrics at the per-regionserver and per-region level for * [HBASE-6721](https://issues.apache.org/jira/browse/HBASE-6721) | *Major* | **RegionServer Group based Assignment** -[ADVANCED USERS ONLY] This patch adds a new experimental module hbase-rsgroup. 
It is an advanced feature for partitioning regionservers into distinctive groups for strict isolation, and should only be used by users who are sophisticated enough to understand the full implications and have a sufficient background in managing HBase clusters. +[ADVANCED USERS ONLY] This patch adds a new experimental module hbase-rsgroup. It is an advanced feature for partitioning regionservers into distinctive groups for strict isolation, and should only be used by users who are sophisticated enough to understand the full implications and have a sufficient background in managing HBase clusters. -RSGroups can be defined and managed with shell commands or corresponding Java APIs. A server can be added to a group with hostname and port pair, and tables can be moved to this group so that only regionservers in the same rsgroup can host the regions of the table. RegionServers and tables can only belong to 1 group at a time. By default, all tables and regionservers belong to the "default" group. System tables can also be put into a group using the regular APIs. A custom balancer implementation tracks assignments per rsgroup and makes sure to move regions to the relevant regionservers in that group. The group information is stored in a regular HBase table, and a zookeeper-based read-only cache is used at the cluster bootstrap time. +RSGroups can be defined and managed with shell commands or corresponding Java APIs. A server can be added to a group with hostname and port pair, and tables can be moved to this group so that only regionservers in the same rsgroup can host the regions of the table. RegionServers and tables can only belong to 1 group at a time. By default, all tables and regionservers belong to the "default" group. System tables can also be put into a group using the regular APIs. A custom balancer implementation tracks assignments per rsgroup and makes sure to move regions to the relevant regionservers in that group. The group information is stored in a regular HBase table, and a zookeeper-based read-only cache is used at the cluster bootstrap time. To enable, add the following to your hbase-site.xml and restart your Master: @@ -7857,7 +9433,7 @@ This adds a group to the 'hbase:rsgroup' system table. Add a server (hostname + * [HBASE-15435](https://issues.apache.org/jira/browse/HBASE-15435) | *Major* | **Add WAL (in bytes) written metric** -Adds a new metric named "writtenBytes" as a per-regionserver metric. Metric Description: +Adds a new metric named "writtenBytes" as a per-regionserver metric. Metric Description: Size (in bytes) of the data written to the WAL. @@ -7908,7 +9484,7 @@ on branch-1, branch-1.2 and branch 1.3 we now check if the exception is meta-cle * [HBASE-15376](https://issues.apache.org/jira/browse/HBASE-15376) | *Major* | **ScanNext metric is size-based while every other per-operation metric is time based** -Removed ScanNext histogram metrics as regionserver level and per-region level metrics since the semantics is not compatible with other similar metrics (size histogram vs latency histogram). +Removed ScanNext histogram metrics as regionserver level and per-region level metrics since the semantics is not compatible with other similar metrics (size histogram vs latency histogram). Instead, this patch adds ScanTime and ScanSize histogram metrics at the regionserver and per-region level. 
@@ -7931,7 +9507,7 @@ Previously RPC request scheduler in HBase had 2 modes in could operate in: This patch adds new type of scheduler to HBase, based on the research around controlled delay (CoDel) algorithm [1], used in networking to combat bufferbloat, as well as some analysis on generalizing it to generic request queues [2]. The purpose of that work is to prevent long standing call queues caused by discrepancy between request rate and available throughput, caused by kernel/disk IO/networking stalls. -New RPC scheduler could be enabled by setting hbase.ipc.server.callqueue.type=codel in configuration. Several additional params allow to configure algorithm behavior - +New RPC scheduler could be enabled by setting hbase.ipc.server.callqueue.type=codel in configuration. Several additional params allow to configure algorithm behavior - hbase.ipc.server.callqueue.codel.target.delay hbase.ipc.server.callqueue.codel.interval @@ -8105,7 +9681,7 @@ Removed IncrementPerformanceTest. It is not as configurable as the additions mad * [HBASE-15218](https://issues.apache.org/jira/browse/HBASE-15218) | *Blocker* | **On RS crash and replay of WAL, loosing all Tags in Cells** -This issue fixes +This issue fixes - In case of normal WAL (Not encrypted) we were loosing all cell tags on WAL replay after an RS crash - In case of encrypted WAL we were not even persisting Cell tags in WAL. Tags from all unflushed (to HFile) Cells will get lost even after WAL replay recovery is done. @@ -8154,13 +9730,13 @@ If you are using co processors and refer the Cells in the read results, DO NOT s * [HBASE-15145](https://issues.apache.org/jira/browse/HBASE-15145) | *Major* | **HBCK and Replication should authenticate to zookepeer using server principal** -Added a new command line argument: --auth-as-server to enable authenticating to ZooKeeper as the HBase Server principal. This is required for secure clusters for doing replication operations like add\_peer, list\_peers, etc until HBASE-11392 is fixed. This advanced option can also be used for manually fixing secure znodes. +Added a new command line argument: --auth-as-server to enable authenticating to ZooKeeper as the HBase Server principal. This is required for secure clusters for doing replication operations like add\_peer, list\_peers, etc until HBASE-11392 is fixed. This advanced option can also be used for manually fixing secure znodes. -Commands can now be invoked like: -hbase --auth-as-server shell -hbase --auth-as-server zkcli +Commands can now be invoked like: +hbase --auth-as-server shell +hbase --auth-as-server zkcli -HBCK in secure setup also needs to authenticate to ZK using servers principals.This is turned on by default (no need to pass additional argument). +HBCK in secure setup also needs to authenticate to ZK using servers principals.This is turned on by default (no need to pass additional argument). When authenticating as server, HBASE\_SERVER\_JAAS\_OPTS is concatenated to HBASE\_OPTS if defined in hbase-env.sh. Otherwise, HBASE\_REGIONSERVER\_OPTS is concatenated. @@ -8209,7 +9785,7 @@ The \`hbase version\` command now outputs directly to stdout rather than to a lo * [HBASE-15027](https://issues.apache.org/jira/browse/HBASE-15027) | *Major* | **Refactor the way the CompactedHFileDischarger threads are created** The property 'hbase.hfile.compactions.discharger.interval' has been renamed to 'hbase.hfile.compaction.discharger.interval' that describes the interval after which the compaction discharger chore service should run. 
-The property 'hbase.hfile.compaction.discharger.thread.count' describes the thread count that does the compaction discharge work. +The property 'hbase.hfile.compaction.discharger.thread.count' describes the thread count that does the compaction discharge work. The CompactedHFilesDischarger is a chore service now started as part of the RegionServer and this chore service iterates over all the onlineRegions in that RS and uses the RegionServer's executor service to launch a set of threads that does this job of compaction files clean up. @@ -8217,8 +9793,8 @@ The CompactedHFilesDischarger is a chore service now started as part of the Regi * [HBASE-14468](https://issues.apache.org/jira/browse/HBASE-14468) | *Major* | **Compaction improvements: FIFO compaction policy** -FIFO compaction policy selects only files which have all cells expired. The column family MUST have non-default TTL. -Essentially, FIFO compactor does only one job: collects expired store files. +FIFO compaction policy selects only files which have all cells expired. The column family MUST have non-default TTL. +Essentially, FIFO compactor does only one job: collects expired store files. Because we do not do any real compaction, we do not use CPU and IO (disk and network), we do not evict hot data from a block cache. The result: improved throughput and latency both write and read. See: https://github.com/facebook/rocksdb/wiki/FIFO-compaction-style @@ -8281,7 +9857,7 @@ All clients before 1.2.0 will not get this multi request chunking based upon blo * [HBASE-14951](https://issues.apache.org/jira/browse/HBASE-14951) | *Minor* | **Make hbase.regionserver.maxlogs obsolete** -Rolling WAL events across a cluster can be highly correlated, hence flushing memstores, hence triggering minor compactions, that can be promoted to major ones. These events are highly correlated in time if there is a balanced write-load on the regions in a table. Default value for maximum WAL files (\* hbase.regionserver.maxlogs\*), which controls WAL rolling events - 32 is too small for many modern deployments. +Rolling WAL events across a cluster can be highly correlated, hence flushing memstores, hence triggering minor compactions, that can be promoted to major ones. These events are highly correlated in time if there is a balanced write-load on the regions in a table. Default value for maximum WAL files (\* hbase.regionserver.maxlogs\*), which controls WAL rolling events - 32 is too small for many modern deployments. Now we calculate this value dynamically (if not defined by user), using the following formula: maxLogs = Math.max( 32, HBASE\_HEAP\_SIZE \* memstoreRatio \* 2/ LogRollSize), where @@ -8289,7 +9865,7 @@ maxLogs = Math.max( 32, HBASE\_HEAP\_SIZE \* memstoreRatio \* 2/ LogRollSize), w memstoreRatio is \*hbase.regionserver.global.memstore.size\* LogRollSize is maximum WAL file size (default 0.95 \* HDFS block size) -We need to make sure that we avoid fully or minimize events when RS has to flush memstores prematurely only because it reached artificial limit of hbase.regionserver.maxlogs, this is why we put this 2 x multiplier in equation, this gives us maximum WAL capacity of 2 x RS memstore-size. +We need to make sure that we avoid fully or minimize events when RS has to flush memstores prematurely only because it reached artificial limit of hbase.regionserver.maxlogs, this is why we put this 2 x multiplier in equation, this gives us maximum WAL capacity of 2 x RS memstore-size. Runaway WAL files. 
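To make the HBASE-14951 formula above concrete, a small worked example follows. The heap size, memstore ratio and HDFS block size are assumed inputs chosen only for the arithmetic, not recommended settings.

```java
public class MaxLogsExample {
  public static void main(String[] args) {
    // maxLogs = max(32, HBASE_HEAP_SIZE * memstoreRatio * 2 / LogRollSize)
    long heapBytes = 16L * 1024 * 1024 * 1024;          // assume a 16 GB regionserver heap
    double memstoreRatio = 0.4;                         // hbase.regionserver.global.memstore.size
    long logRollSizeBytes = (long) (0.95 * 128L * 1024 * 1024); // 0.95 * assumed 128 MB block size
    long maxLogs = Math.max(32, (long) (heapBytes * memstoreRatio * 2 / logRollSizeBytes));
    // With these inputs the computed default is roughly 107 WAL files.
    System.out.println("computed hbase.regionserver.maxlogs default = " + maxLogs);
  }
}
```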
@@ -8321,7 +9897,7 @@ Setting it to false ( the default ) will help ensure a more even distribution of * [HBASE-14534](https://issues.apache.org/jira/browse/HBASE-14534) | *Minor* | **Bump yammer/coda/dropwizard metrics dependency version** -Updated yammer metrics to version 3.1.2 (now it's been renamed to dropwizard). API has changed quite a bit, consult https://dropwizard.github.io/metrics/3.1.0/manual/core/ for additional information. +Updated yammer metrics to version 3.1.2 (now it's been renamed to dropwizard). API has changed quite a bit, consult https://dropwizard.github.io/metrics/3.1.0/manual/core/ for additional information. Note that among other things, in yammer 2.2.0 histograms were by default created in non-biased mode (uniform sampling), while in 3.1.0 histograms created via MetricsRegistry.histogram(...) are by default exponentially decayed. This shouldn't affect end users, though. @@ -8375,7 +9951,7 @@ Following are the additional configurations added for this enhancement, For example: If source cluster FS client configurations are copied in peer cluster under directory /home/user/dc1/ then hbase.replication.cluster.id should be configured as dc1 and hbase.replication.conf.dir as /home/user -Note: +Note: a. Any modification to source cluster FS client configuration files in peer cluster side replication configuration directory then it needs to restart all its peer(s) cluster RS with default hbase.replication.source.fs.conf.provider. b. Only 'xml' type files will be loaded by the default hbase.replication.source.fs.conf.provider. @@ -8573,7 +10149,7 @@ This patch adds shell support for region normalizer (see HBASE-13103). - 'normalizer\_switch' allows user to turn normalizer on and off - 'normalize' runs region normalizer if it's turned on. -Also 'alter' command has been extended to allow user to enable/disable region normalization per table (disabled by default). Use it as +Also 'alter' command has been extended to allow user to enable/disable region normalization per table (disabled by default). Use it as alter 'testtable', {NORMALIZATION\_MODE =\> 'true'} @@ -8871,14 +10447,14 @@ For more details on how to use the feature please consult the HBase Reference Gu Removed Table#getRowOrBefore, Region#getClosestRowBefore, Store#getRowKeyAtOrBefore, RemoteHTable#getRowOrBefore apis and Thrift support for getRowOrBefore. Also removed two coprocessor hooks preGetClosestRowBefore and postGetClosestRowBefore. -User using this api can instead use reverse scan something like below, -{code} - Scan scan = new Scan(row); - scan.setSmall(true); - scan.setCaching(1); - scan.setReversed(true); - scan.addFamily(family); -{code} +User using this api can instead use reverse scan something like below, +{code} + Scan scan = new Scan(row); + scan.setSmall(true); + scan.setCaching(1); + scan.setReversed(true); + scan.addFamily(family); +{code} pass this scan object to the scanner and retrieve the first Result from scanner output. @@ -8894,7 +10470,7 @@ Changes parameters to filterColumn so takes a Cell rather than a byte []. 
hbase-client-1.2.7-SNAPSHOT.jar, ColumnPrefixFilter.class package org.apache.hadoop.hbase.filter -ColumnPrefixFilter.filterColumn ( byte[ ] buffer, int qualifierOffset, int qualifierLength ) : Filter.ReturnCode +ColumnPrefixFilter.filterColumn ( byte[ ] buffer, int qualifierOffset, int qualifierLength ) : Filter.ReturnCode org/apache/hadoop/hbase/filter/ColumnPrefixFilter.filterColumn:([BII)Lorg/apache/hadoop/hbase/filter/Filter$ReturnCode; Ditto for filterColumnValue in SingleColumnValueFilter. Takes a Cell instead of byte array. @@ -9088,7 +10664,7 @@ hbase-shaded-client and hbase-shaded-server modules will not build the actual ja * [HBASE-13754](https://issues.apache.org/jira/browse/HBASE-13754) | *Major* | **Allow non KeyValue Cell types also to oswrite** -This jira has removed the already deprecated method +This jira has removed the already deprecated method KeyValue#oswrite(final KeyValue kv, final OutputStream out) @@ -9128,11 +10704,11 @@ Purge support for parsing zookeepers zoo.cfg deprecated since hbase-0.96.0 MOTIVATION -A pipelined scan API is introduced for speeding up applications that combine massive data traversal with compute-intensive processing. Traditional HBase scans save network trips through prefetching the data to the client side cache. However, they prefetch synchronously: the fetch request to regionserver is invoked only when the entire cache is consumed. This leads to a stop-and-wait access pattern, in which the client stalls until the next chunk of data is fetched. Applications that do significant processing can benefit from background data prefetching, which eliminates this bottleneck. The pipelined scan implementation overlaps the cache population at the client side with application processing. Namely, it issues a new scan RPC when the iteration retrieves 50% of the cache. If the application processing (that is, the time between invocations of next()) is substantial, the new chunk of data will be available before the previous one is exhausted, and the client will not experience any delay. Ideally, the prefetch and the processing times should be balanced. +A pipelined scan API is introduced for speeding up applications that combine massive data traversal with compute-intensive processing. Traditional HBase scans save network trips through prefetching the data to the client side cache. However, they prefetch synchronously: the fetch request to regionserver is invoked only when the entire cache is consumed. This leads to a stop-and-wait access pattern, in which the client stalls until the next chunk of data is fetched. Applications that do significant processing can benefit from background data prefetching, which eliminates this bottleneck. The pipelined scan implementation overlaps the cache population at the client side with application processing. Namely, it issues a new scan RPC when the iteration retrieves 50% of the cache. If the application processing (that is, the time between invocations of next()) is substantial, the new chunk of data will be available before the previous one is exhausted, and the client will not experience any delay. Ideally, the prefetch and the processing times should be balanced. API AND CONFIGURATION -Asynchronous scanning can be configured either globally for all tables and scans, or on per-scan basis via a new Scan class API. +Asynchronous scanning can be configured either globally for all tables and scans, or on per-scan basis via a new Scan class API. 
Configuration in hbase-site.xml: hbase.client.scanner.async.prefetch, default false: @@ -9175,8 +10751,8 @@ Introduces a new config hbase.fs.tmp.dir which is a directory in HDFS (or defaul * [HBASE-10800](https://issues.apache.org/jira/browse/HBASE-10800) | *Major* | **Use CellComparator instead of KVComparator** -From 2.0 branch onwards KVComparator and its subclasses MetaComparator, RawBytesComparator are all deprecated. -All the comparators are moved to CellComparator. MetaCellComparator, a subclass of CellComparator, will be used to compare hbase:meta cells. +From 2.0 branch onwards KVComparator and its subclasses MetaComparator, RawBytesComparator are all deprecated. +All the comparators are moved to CellComparator. MetaCellComparator, a subclass of CellComparator, will be used to compare hbase:meta cells. Previously exposed static instances KeyValue.COMPARATOR, KeyValue.META\_COMPARATOR and KeyValue.RAW\_COMPARATOR are deprecated instead use CellComparator.COMPARATOR and CellComparator.META\_COMPARATOR. Also note that there will be no RawBytesComparator. Where ever we need to compare raw bytes use Bytes.BYTES\_RAWCOMPARATOR. CellComparator will always operate on cells and its components, abstracting the fact that a cell can be backed by a single byte[] as opposed to how KVComparators were working. @@ -9194,7 +10770,7 @@ Adds a renewLease call to ClientScanner * [HBASE-13564](https://issues.apache.org/jira/browse/HBASE-13564) | *Major* | **Master MBeans are not published** To use the coprocessor-based JMX implementation provided by HBase for Master. -Add below property in hbase-site.xml file: +Add below property in hbase-site.xml file: \ \hbase.coprocessor.master.classes\ @@ -9310,7 +10886,7 @@ Compose thrift exception text from the text of the entire cause chain of the und * [HBASE-13275](https://issues.apache.org/jira/browse/HBASE-13275) | *Major* | **Setting hbase.security.authorization to false does not disable authorization** -Prior to this change the configuration setting 'hbase.security.authorization' had no effect if security coprocessor were installed. The act of installing the security coprocessors was assumed to indicate active authorizaton was desired and required. Now it is possible to install the security coprocessors yet have them operate in a passive state with active authorization disabled by setting 'hbase.security.authorization' to false. This can be useful but is probably not what you want. For more information, consult the Security section of the HBase online manual. +Prior to this change the configuration setting 'hbase.security.authorization' had no effect if security coprocessor were installed. The act of installing the security coprocessors was assumed to indicate active authorizaton was desired and required. Now it is possible to install the security coprocessors yet have them operate in a passive state with active authorization disabled by setting 'hbase.security.authorization' to false. This can be useful but is probably not what you want. For more information, consult the Security section of the HBase online manual. 'hbase.security.authorization' defaults to true for backwards comptatible behavior. @@ -9346,15 +10922,15 @@ Use hbase.client.scanner.max.result.size instead to enforce practical chunk size Results returned from RPC calls may now be returned as partials -When is a Result marked as a partial? +When is a Result marked as a partial? When the server must stop the scan because the max size limit has been reached. 
Means that the LAST Result returned within the ScanResult's Result array may be marked as a partial if the scan's max size limit caused it to stop in the middle of a row. Incompatible Change: The return type of InternalScanners#next and RegionScanners#nextRaw has been changed to NextState from boolean The previous boolean return value can be accessed via NextState#hasMoreValues() Provides more context as to what happened inside the scanner -Scan caching default has been changed to Integer.Max\_Value -This value works together with the new maxResultSize value from HBASE-12976 (defaults to 2MB) +Scan caching default has been changed to Integer.Max\_Value +This value works together with the new maxResultSize value from HBASE-12976 (defaults to 2MB) Results returned from server on basis of size rather than number of rows Provides better use of network since row size varies amongst tables @@ -9672,14 +11248,14 @@ This client is on by default in master branch (2.0 hbase). It is off in branch-1 Namespace auditor provides basic quota support for namespaces in terms of number of tables and number of regions. In order to use namespace quotas, quota support must be enabled by setting "hbase.quota.enabled" property to true in hbase-site.xml file. -The users can add quota information to namespace, while creating new namespaces or by altering existing ones. +The users can add quota information to namespace, while creating new namespaces or by altering existing ones. Examples: 1. create\_namespace 'ns1', {'hbase.namespace.quota.maxregions'=\>'10'} 2. create\_namespace 'ns2', {'hbase.namespace.quota.maxtables'=\>'2','hbase.namespace.quota.maxregions'=\>'5'} 3. alter\_namespace 'ns3', {METHOD =\> 'set', 'hbase.namespace.quota.maxtables'=\>'5','hbase.namespace.quota.maxregions'=\>'25'} -The quotas can be modified/added to namespace at any point of time. To remove quotas, the following command can be used: +The quotas can be modified/added to namespace at any point of time. To remove quotas, the following command can be used: alter\_namespace 'ns3', {METHOD =\> 'unset', NAME =\> 'hbase.namespace.quota.maxtables'} alter\_namespace 'ns3', {METHOD =\> 'unset', NAME =\> 'hbase.namespace.quota.maxregions'} @@ -9839,7 +11415,7 @@ NavigableMap\\> getFamilyMap() * [HBASE-12084](https://issues.apache.org/jira/browse/HBASE-12084) | *Major* | **Remove deprecated APIs from Result** The below KeyValue based APIs are removed from Result -KeyValue[] raw() +KeyValue[] raw() List\ list() List\ getColumn(byte [] family, byte [] qualifier) KeyValue getColumnLatest(byte [] family, byte [] qualifier) @@ -9854,7 +11430,7 @@ Cell getColumnLatestCell(byte [] family, int foffset, int flength, byte [] quali respectively Also the constructors which were taking KeyValues also removed -Result(KeyValue [] cells) +Result(KeyValue [] cells) Result(List\ kvs) @@ -9865,7 +11441,7 @@ Result(List\ kvs) The following APIs are removed from Filter KeyValue transform(KeyValue) KeyValue getNextKeyHint(KeyValue) -and replaced with +and replaced with Cell transformCell(Cell) Cell getNextCellHint(Cell) respectively. @@ -10012,6 +11588,3 @@ To enable zoo.cfg reading, for which support may be removed in a future release, properties from a zoo.cfg file has been deprecated. 
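Tying back to the pipelined (asynchronous prefetch) scan note above, a minimal client-side sketch follows. The per-scan setter name setAsyncPrefetch is assumed from the public Scan API of this era, the table name is a placeholder, and the global switch is the hbase.client.scanner.async.prefetch property quoted in that note.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class AsyncPrefetchScanExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Global default for all scans; false unless overridden per scan.
    conf.setBoolean("hbase.client.scanner.async.prefetch", false);
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("t1"))) {
      Scan scan = new Scan();
      scan.setCaching(1000);
      scan.setAsyncPrefetch(true); // per-scan override: prefetch the next chunk in the background
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result result : scanner) {
          // application processing here overlaps with the next prefetch RPC
        }
      }
    }
  }
}
```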
\ \ - - - diff --git a/bin/chaos-daemon.sh b/bin/chaos-daemon.sh index 084e519321a2..8e27f4a5d9f3 100644 --- a/bin/chaos-daemon.sh +++ b/bin/chaos-daemon.sh @@ -19,7 +19,7 @@ # */ # -usage="Usage: chaos-daemon.sh (start|stop) chaosagent" +usage="Usage: chaos-daemon.sh (start|stop) (chaosagent|chaosmonkeyrunner)" # if no args specified, show usage if [ $# -le 1 ]; then @@ -51,11 +51,6 @@ bin=$(cd "$bin">/dev/null || exit; pwd) . "$bin"/hbase-config.sh . "$bin"/hbase-common.sh -CLASSPATH=$HBASE_CONF_DIR -for f in ../lib/*.jar; do - CLASSPATH=${CLASSPATH}:$f -done - # get log directory if [ "$HBASE_LOG_DIR" = "" ]; then export HBASE_LOG_DIR="$HBASE_HOME/logs" @@ -79,7 +74,7 @@ if [ "$JAVA_HOME" = "" ]; then fi export HBASE_LOG_PREFIX=hbase-$HBASE_IDENT_STRING-$command-$HOSTNAME -export CHAOS_LOGFILE=$HBASE_LOG_PREFIX.log +export HBASE_LOGFILE=$HBASE_LOG_PREFIX.log if [ -z "${HBASE_ROOT_LOGGER}" ]; then export HBASE_ROOT_LOGGER=${HBASE_ROOT_LOGGER:-"INFO,RFA"} @@ -89,7 +84,7 @@ if [ -z "${HBASE_SECURITY_LOGGER}" ]; then export HBASE_SECURITY_LOGGER=${HBASE_SECURITY_LOGGER:-"INFO,RFAS"} fi -CHAOS_LOGLOG=${CHAOS_LOGLOG:-"${HBASE_LOG_DIR}/${CHAOS_LOGFILE}"} +CHAOS_LOGLOG=${CHAOS_LOGLOG:-"${HBASE_LOG_DIR}/${HBASE_LOGFILE}"} CHAOS_PID=$HBASE_PID_DIR/hbase-$HBASE_IDENT_STRING-$command.pid if [ -z "$CHAOS_JAVA_OPTS" ]; then @@ -101,15 +96,20 @@ case $startStop in (start) check_before_start echo running $command - CMD="${JAVA_HOME}/bin/java -Dapp.home=${HBASE_CONF_DIR}/../ ${CHAOS_JAVA_OPTS} -cp ${CLASSPATH} org.apache.hadoop.hbase.chaos.ChaosService -$command start &>> ${CHAOS_LOGLOG} &" - - eval $CMD + command_args="" + if [ "$command" = "chaosagent" ]; then + command_args=" -${command} start" + elif [ "$command" = "chaosmonkeyrunner" ]; then + command_args="-c $HBASE_CONF_DIR $@" + fi + HBASE_OPTS="$HBASE_OPTS $CHAOS_JAVA_OPTS" . $bin/hbase --config "${HBASE_CONF_DIR}" $command $command_args >> ${CHAOS_LOGLOG} 2>&1 & PID=$(echo $!) + disown -h -r echo ${PID} >${CHAOS_PID} - echo "Chaos ${1} process Started with ${PID} !" + echo "Chaos ${command} process Started with ${PID} !" now=$(date) - echo "${now} Chaos ${1} process Started with ${PID} !" >>${CHAOS_LOGLOG} + echo "${now} Chaos ${command} process Started with ${PID} !" >>${CHAOS_LOGLOG} ;; (stop) diff --git a/bin/considerAsDead.sh b/bin/considerAsDead.sh index ae1b8d885bf3..848e276cd004 100755 --- a/bin/considerAsDead.sh +++ b/bin/considerAsDead.sh @@ -17,7 +17,7 @@ # * See the License for the specific language governing permissions and # * limitations under the License. 
# */ -# +# usage="Usage: considerAsDead.sh --hostname serverName" @@ -50,12 +50,12 @@ do rs_parts=(${rs//,/ }) hostname=${rs_parts[0]} echo $deadhost - echo $hostname + echo $hostname if [ "$deadhost" == "$hostname" ]; then znode="$zkrs/$rs" echo "ZNode Deleting:" $znode $bin/hbase zkcli delete $znode > /dev/null 2>&1 sleep 1 - ssh $HBASE_SSH_OPTS $hostname $remote_cmd 2>&1 | sed "s/^/$hostname: /" - fi + ssh $HBASE_SSH_OPTS $hostname $remote_cmd 2>&1 | sed "s/^/$hostname: /" + fi done diff --git a/bin/draining_servers.rb b/bin/draining_servers.rb index 12f0ba4e1741..f253d9b64ac7 100644 --- a/bin/draining_servers.rb +++ b/bin/draining_servers.rb @@ -25,7 +25,6 @@ java_import org.apache.hadoop.hbase.HBaseConfiguration java_import org.apache.hadoop.hbase.client.ConnectionFactory -java_import org.apache.hadoop.hbase.client.HBaseAdmin java_import org.apache.hadoop.hbase.zookeeper.ZKUtil java_import org.apache.hadoop.hbase.zookeeper.ZNodePaths java_import org.slf4j.LoggerFactory @@ -61,6 +60,7 @@ def getServers(admin) def getServerNames(hostOrServers, config) ret = [] connection = ConnectionFactory.createConnection(config) + admin = nil hostOrServers.each do |host_or_server| # check whether it is already serverName. No need to connect to cluster @@ -135,8 +135,12 @@ def listServers(_options) hostOrServers = ARGV[1..ARGV.size] -# Create a logger and save it to ruby global -$LOG = LoggerFactory.getLogger(NAME) +# disable debug/info logging on this script for clarity +log_level = 'ERROR' +org.apache.hadoop.hbase.logging.Log4jUtils.setAllLevels('org.apache.hadoop.hbase', log_level) +org.apache.hadoop.hbase.logging.Log4jUtils.setAllLevels('org.apache.zookeeper', log_level) +org.apache.hadoop.hbase.logging.Log4jUtils.setAllLevels('org.apache.hadoop', log_level) + case ARGV[0] when 'add' if ARGV.length < 2 diff --git a/bin/get-active-master.rb b/bin/get-active-master.rb index d8c96fe3d4ad..65e0c3cd0fe0 100644 --- a/bin/get-active-master.rb +++ b/bin/get-active-master.rb @@ -24,9 +24,10 @@ java_import org.apache.hadoop.hbase.zookeeper.MasterAddressTracker # disable debug/info logging on this script for clarity -log_level = org.apache.log4j.Level::ERROR -org.apache.log4j.Logger.getLogger('org.apache.hadoop.hbase').setLevel(log_level) -org.apache.log4j.Logger.getLogger('org.apache.zookeeper').setLevel(log_level) +log_level = 'ERROR' +org.apache.hadoop.hbase.logging.Log4jUtils.setAllLevels('org.apache.hadoop.hbase', log_level) +org.apache.hadoop.hbase.logging.Log4jUtils.setAllLevels('org.apache.zookeeper', log_level) +org.apache.hadoop.hbase.logging.Log4jUtils.setAllLevels('org.apache.hadoop', log_level) config = HBaseConfiguration.create diff --git a/bin/graceful_stop.sh b/bin/graceful_stop.sh index fc18239830b2..da3495b1d7bf 100755 --- a/bin/graceful_stop.sh +++ b/bin/graceful_stop.sh @@ -115,7 +115,7 @@ if [ "$nob" == "true" ]; then HBASE_BALANCER_STATE=false else log "Disabling load balancer" - HBASE_BALANCER_STATE=$(echo 'balance_switch false' | "$bin"/hbase --config "${HBASE_CONF_DIR}" shell -n | tail -1) + HBASE_BALANCER_STATE=$(echo 'balance_switch false' | "$bin"/hbase --config "${HBASE_CONF_DIR}" shell -n | grep 'Previous balancer state' | awk -F": " '{print $2}') log "Previous balancer state was $HBASE_BALANCER_STATE" fi diff --git a/bin/hbase b/bin/hbase index 75aa81b7c3a9..4863d7963696 100755 --- a/bin/hbase +++ b/bin/hbase @@ -305,10 +305,13 @@ else # make it easier to check for shaded/not later on. 
shaded_jar="" fi +# here we will add slf4j-api, commons-logging, jul-to-slf4j, jcl-over-slf4j +# to classpath, as they are all logging bridges. Only exclude log4j* so we +# will not actually log anything out. Add it later if necessary for f in "${HBASE_HOME}"/lib/client-facing-thirdparty/*.jar; do if [[ ! "${f}" =~ ^.*/htrace-core-3.*\.jar$ ]] && \ - [ "${f}" != "htrace-core.jar$" ] && \ - [[ ! "${f}" =~ ^.*/slf4j-log4j.*$ ]]; then + [[ "${f}" != "htrace-core.jar$" ]] && \ + [[ ! "${f}" =~ ^.*/log4j.*$ ]]; then CLASSPATH="${CLASSPATH}:${f}" fi done @@ -487,6 +490,11 @@ add_jdk11_deps_to_classpath() { done } +add_jdk11_jvm_flags() { + # Keep in sync with hbase-surefire.jdk11.flags in the root pom.xml + HBASE_OPTS="$HBASE_OPTS -Dorg.apache.hbase.thirdparty.io.netty.tryReflectionSetAccessible=true --add-modules jdk.unsupported --add-opens java.base/java.io=ALL-UNNAMED --add-opens java.base/java.nio=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/jdk.internal.ref=ALL-UNNAMED --add-opens java.base/java.lang.reflect=ALL-UNNAMED --add-opens java.base/java.util=ALL-UNNAMED --add-opens java.base/java.util.concurrent=ALL-UNNAMED --add-exports java.base/jdk.internal.misc=ALL-UNNAMED --add-exports java.security.jgss/sun.security.krb5=ALL-UNNAMED --add-exports java.base/sun.net.dns=ALL-UNNAMED --add-exports java.base/sun.net.util=ALL-UNNAMED" +} + add_opentelemetry_agent() { if [ -e "${OPENTELEMETRY_JAVAAGENT_PATH}" ] ; then agent_jar="${OPENTELEMETRY_JAVAAGENT_PATH}" @@ -671,7 +679,7 @@ elif [ "$COMMAND" = "mapredcp" ] ; then for f in "${HBASE_HOME}"/lib/client-facing-thirdparty/*.jar; do if [[ ! "${f}" =~ ^.*/htrace-core-3.*\.jar$ ]] && \ [ "${f}" != "htrace-core.jar$" ] && \ - [[ ! "${f}" =~ ^.*/slf4j-log4j.*$ ]]; then + [[ ! "${f}" =~ ^.*/log4j.*$ ]]; then echo -n ":${f}" fi done @@ -703,6 +711,10 @@ elif [ "$COMMAND" = "pre-upgrade" ] ; then CLASS='org.apache.hadoop.hbase.tool.PreUpgradeValidator' elif [ "$COMMAND" = "completebulkload" ] ; then CLASS='org.apache.hadoop.hbase.tool.BulkLoadHFilesTool' +elif [ "$COMMAND" = "chaosagent" ] ; then + CLASS='org.apache.hadoop.hbase.chaos.ChaosService' +elif [ "$COMMAND" = "chaosmonkeyrunner" ] ; then + CLASS='org.apache.hadoop.hbase.chaos.util.ChaosMonkeyRunner' elif [ "$COMMAND" = "hbtop" ] ; then CLASS='org.apache.hadoop.hbase.hbtop.HBTop' if [ -n "${shaded_jar}" ] ; then @@ -720,18 +732,25 @@ elif [ "$COMMAND" = "hbtop" ] ; then done fi - if [ -f "${HBASE_HOME}/conf/log4j-hbtop.properties" ] ; then - HBASE_HBTOP_OPTS="${HBASE_HBTOP_OPTS} -Dlog4j.configuration=file:${HBASE_HOME}/conf/log4j-hbtop.properties" + if [ -f "${HBASE_HOME}/conf/log4j2-hbtop.properties" ] ; then + HBASE_HBTOP_OPTS="${HBASE_HBTOP_OPTS} -Dlog4j2.configurationFile=file:${HBASE_HOME}/conf/log4j2-hbtop.properties" fi HBASE_OPTS="${HBASE_OPTS} ${HBASE_HBTOP_OPTS}" else CLASS=$COMMAND +if [[ "$CLASS" =~ .*IntegrationTest.* ]] ; then + for f in ${HBASE_HOME}/lib/test/*.jar; do + if [ -f "${f}" ]; then + CLASSPATH="${CLASSPATH}:${f}" + fi + done + fi fi # Add lib/jdk11 jars to the classpath if [ "${DEBUG}" = "true" ]; then - echo "Deciding on addition of lib/jdk11 jars to the classpath" + echo "Deciding on addition of lib/jdk11 jars to the classpath and setting JVM module flags" fi addJDK11Jars=false @@ -773,14 +792,17 @@ fi if [ "${addJDK11Jars}" = "true" ]; then add_jdk11_deps_to_classpath + add_jdk11_jvm_flags if [ "${DEBUG}" = "true" ]; then - echo "Added JDK11 jars to classpath." 
- fi + echo "Added JDK11 jars to classpath." + echo "Added JDK11 JVM flags too." + fi elif [ "${DEBUG}" = "true" ]; then echo "JDK11 jars skipped from classpath." + echo "Skipped adding JDK11 JVM flags." fi -if [[ -n "${HBASE_TRACE_OPTS}" ]]; then +if [[ "${HBASE_OTEL_TRACING_ENABLED:-false}" = "true" ]] ; then if [ "${DEBUG}" = "true" ]; then echo "Attaching opentelemetry agent" fi @@ -810,10 +832,9 @@ fi HEAP_SETTINGS="$JAVA_HEAP_MAX $JAVA_OFFHEAP_MAX" # by now if we're running a command it means we need logging -for f in ${HBASE_HOME}/lib/client-facing-thirdparty/slf4j-log4j*.jar; do +for f in ${HBASE_HOME}/lib/client-facing-thirdparty/log4j*.jar; do if [ -f "${f}" ]; then CLASSPATH="${CLASSPATH}:${f}" - break fi done @@ -822,10 +843,11 @@ export CLASSPATH if [ "${DEBUG}" = "true" ]; then echo "classpath=${CLASSPATH}" >&2 HBASE_OPTS="${HBASE_OPTS} -Xdiag" + echo "HBASE_OPTS=${HBASE_OPTS}" fi # resolve the command arguments -read -r -a CMD_ARGS <<< "$@" +CMD_ARGS=("$@") if [ "${#JSHELL_ARGS[@]}" -gt 0 ] ; then CMD_ARGS=("${JSHELL_ARGS[@]}" "${CMD_ARGS[@]}") fi diff --git a/bin/hbase-cleanup.sh b/bin/hbase-cleanup.sh index 92b40cca6ae0..69c1f72b6074 100755 --- a/bin/hbase-cleanup.sh +++ b/bin/hbase-cleanup.sh @@ -74,7 +74,7 @@ check_for_znodes() { znodes=`"$bin"/hbase zkcli ls $zparent/$zchild 2>&1 | tail -1 | sed "s/\[//" | sed "s/\]//"` if [ "$znodes" != "" ]; then echo -n "ZNode(s) [${znodes}] of $command are not expired. Exiting without cleaning hbase data." - echo #force a newline + echo #force a newline exit 1; else echo -n "All ZNode(s) of $command are expired." @@ -99,7 +99,7 @@ execute_clean_acls() { clean_up() { case $1 in - --cleanZk) + --cleanZk) execute_zk_command "deleteall ${zparent}"; ;; --cleanHdfs) @@ -120,7 +120,7 @@ clean_up() { ;; *) ;; - esac + esac } check_znode_exists() { diff --git a/bin/hbase-config.sh b/bin/hbase-config.sh index 3e85ec59fb63..0e8b3feed213 100644 --- a/bin/hbase-config.sh +++ b/bin/hbase-config.sh @@ -103,7 +103,7 @@ do break fi done - + # Allow alternate hbase conf dir location. HBASE_CONF_DIR="${HBASE_CONF_DIR:-$HBASE_HOME/conf}" # List of hbase regions servers. @@ -162,7 +162,7 @@ fi # memory usage to explode. Tune the variable down to prevent vmem explosion. export MALLOC_ARENA_MAX=${MALLOC_ARENA_MAX:-4} -# Now having JAVA_HOME defined is required +# Now having JAVA_HOME defined is required if [ -z "$JAVA_HOME" ]; then cat 1>&2 <&1)" - echo "${properties}" | "${GREP}" java.runtime.version | head -1 | "${SED}" -e 's/.* = \([^ ]*\)/\1/' + # Avoid calling java repeatedly + if [ -z "$read_java_version_cached" ]; then + properties="$("${JAVA_HOME}/bin/java" -XshowSettings:properties -version 2>&1)" + read_java_version_cached="$(echo "${properties}" | "${GREP}" java.runtime.version | head -1 | "${SED}" -e 's/.* = \([^ ]*\)/\1/')" + fi + echo "$read_java_version_cached" } # Inspect the system properties exposed by this JVM to identify the major diff --git a/bin/hbase-daemon.sh b/bin/hbase-daemon.sh index 11c13eb52300..b3514bfd42ae 100755 --- a/bin/hbase-daemon.sh +++ b/bin/hbase-daemon.sh @@ -78,22 +78,34 @@ hbase_rotate_log () fi } -cleanAfterRun() { - if [ -f ${HBASE_PID} ]; then - # If the process is still running time to tear it down. 
- kill -9 `cat ${HBASE_PID}` > /dev/null 2>&1 - rm -f ${HBASE_PID} > /dev/null 2>&1 +function sighup_handler +{ + # pass through SIGHUP if we can + if [ -f "${HBASE_PID}" ] ; then + kill -s HUP "$(cat "${HBASE_PID}")" fi +} - if [ -f ${HBASE_ZNODE_FILE} ]; then - if [ "$command" = "master" ]; then - HBASE_OPTS="$HBASE_OPTS $HBASE_MASTER_OPTS" $bin/hbase master clear > /dev/null 2>&1 +function sigterm_handler +{ + if [ -f "${HBASE_PID}" ]; then + kill -s TERM "$(cat "${HBASE_PID}")" + waitForProcessEnd "$(cat "${HBASE_PID}")" "${command}" + fi + cleanAfterRun +} + +cleanAfterRun() { + rm -f "${HBASE_PID}" > /dev/null 2>&1 + if [ -f "${HBASE_ZNODE_FILE}" ]; then + if [ "${command}" = "master" ]; then + HBASE_OPTS="$HBASE_OPTS $HBASE_MASTER_OPTS" "${bin}/hbase" master clear > /dev/null 2>&1 else - #call ZK to delete the node - ZNODE=`cat ${HBASE_ZNODE_FILE}` - HBASE_OPTS="$HBASE_OPTS $HBASE_REGIONSERVER_OPTS" $bin/hbase zkcli delete ${ZNODE} > /dev/null 2>&1 + # call ZK to delete the node + ZNODE="$(cat "${HBASE_ZNODE_FILE}")" + HBASE_OPTS="$HBASE_OPTS $HBASE_REGIONSERVER_OPTS" "${bin}/hbase" zkcli delete "${ZNODE}" > /dev/null 2>&1 fi - rm ${HBASE_ZNODE_FILE} + rm -f "${HBASE_ZNODE_FILE}" > /dev/null 2>&1 fi } @@ -171,10 +183,10 @@ export HBASE_ZNODE_FILE=$HBASE_PID_DIR/hbase-$HBASE_IDENT_STRING-$command.znode export HBASE_AUTOSTART_FILE=$HBASE_PID_DIR/hbase-$HBASE_IDENT_STRING-$command.autostart if [ -n "$SERVER_GC_OPTS" ]; then - export SERVER_GC_OPTS=${SERVER_GC_OPTS/"-Xloggc:"/"-Xloggc:${HBASE_LOGGC}"} + export SERVER_GC_OPTS=${SERVER_GC_OPTS/""/"${HBASE_LOGGC}"} fi if [ -n "$CLIENT_GC_OPTS" ]; then - export CLIENT_GC_OPTS=${CLIENT_GC_OPTS/"-Xloggc:"/"-Xloggc:${HBASE_LOGGC}"} + export CLIENT_GC_OPTS=${CLIENT_GC_OPTS/""/"${HBASE_LOGGC}"} fi # Set default scheduling priority @@ -225,7 +237,9 @@ case $startStop in ;; (foreground_start) - trap cleanAfterRun SIGHUP SIGINT SIGTERM EXIT + trap sighup_handler HUP + trap sigterm_handler INT TERM EXIT + if [ "$HBASE_NO_REDIRECT_LOG" != "" ]; then # NO REDIRECT echo "`date` Starting $command on `hostname`" diff --git a/bin/hbase.cmd b/bin/hbase.cmd index 3b569099090f..240b63c7ec71 100644 --- a/bin/hbase.cmd +++ b/bin/hbase.cmd @@ -332,6 +332,7 @@ set HBASE_OPTS=%HBASE_OPTS% -Djava.util.logging.config.class="org.apache.hadoop. if not defined HBASE_ROOT_LOGGER ( set HBASE_ROOT_LOGGER=INFO,console ) + set HBASE_OPTS=%HBASE_OPTS% -Dhbase.root.logger="%HBASE_ROOT_LOGGER%" if defined JAVA_LIBRARY_PATH ( @@ -348,6 +349,7 @@ if not defined HBASE_SECURITY_LOGGER ( set HBASE_SECURITY_LOGGER=INFO,DRFAS ) ) + set HBASE_OPTS=%HBASE_OPTS% -Dhbase.security.logger="%HBASE_SECURITY_LOGGER%" set HEAP_SETTINGS=%JAVA_HEAP_MAX% %JAVA_OFFHEAP_MAX% diff --git a/bin/master-backup.sh b/bin/master-backup.sh index feca4ab86572..5d3f7cb75615 100755 --- a/bin/master-backup.sh +++ b/bin/master-backup.sh @@ -17,7 +17,7 @@ # * See the License for the specific language governing permissions and # * limitations under the License. # */ -# +# # Run a shell command on all backup master hosts. # # Environment Variables @@ -45,7 +45,7 @@ bin=`cd "$bin">/dev/null; pwd` . "$bin"/hbase-config.sh # If the master backup file is specified in the command line, -# then it takes precedence over the definition in +# then it takes precedence over the definition in # hbase-env.sh. Save it here. 
HOSTLIST=$HBASE_BACKUP_MASTERS @@ -69,6 +69,6 @@ if [ -f $HOSTLIST ]; then sleep $HBASE_SLAVE_SLEEP fi done -fi +fi wait diff --git a/bin/regionservers.sh b/bin/regionservers.sh index b83c1f3c79eb..b10e5a3ec9f4 100755 --- a/bin/regionservers.sh +++ b/bin/regionservers.sh @@ -17,7 +17,7 @@ # * See the License for the specific language governing permissions and # * limitations under the License. # */ -# +# # Run a shell command on all regionserver hosts. # # Environment Variables @@ -45,7 +45,7 @@ bin=`cd "$bin">/dev/null; pwd` . "$bin"/hbase-config.sh # If the regionservers file is specified in the command line, -# then it takes precedence over the definition in +# then it takes precedence over the definition in # hbase-env.sh. Save it here. HOSTLIST=$HBASE_REGIONSERVERS diff --git a/bin/replication/copy_tables_desc.rb b/bin/replication/copy_tables_desc.rb index 44a24f9eea09..39bed5423c71 100644 --- a/bin/replication/copy_tables_desc.rb +++ b/bin/replication/copy_tables_desc.rb @@ -27,11 +27,8 @@ java_import org.apache.hadoop.conf.Configuration java_import org.apache.hadoop.hbase.HBaseConfiguration java_import org.apache.hadoop.hbase.HConstants -java_import org.apache.hadoop.hbase.HTableDescriptor java_import org.apache.hadoop.hbase.TableName java_import org.apache.hadoop.hbase.client.ConnectionFactory -java_import org.apache.hadoop.hbase.client.HBaseAdmin -java_import org.slf4j.LoggerFactory # Name of this script NAME = 'copy_tables_desc'.freeze @@ -45,7 +42,7 @@ def usage def copy(src, dst, table) # verify if table exists in source cluster begin - t = src.getTableDescriptor(TableName.valueOf(table)) + t = src.getDescriptor(TableName.valueOf(table)) rescue org.apache.hadoop.hbase.TableNotFoundException puts format("Source table \"%s\" doesn't exist, skipping.", table) return @@ -62,9 +59,13 @@ def copy(src, dst, table) puts format('Schema for table "%s" was succesfully copied to remote cluster.', table) end -usage if ARGV.size < 2 || ARGV.size > 3 +# disable debug/info logging on this script for clarity +log_level = 'ERROR' +org.apache.hadoop.hbase.logging.Log4jUtils.setAllLevels('org.apache.hadoop.hbase', log_level) +org.apache.hadoop.hbase.logging.Log4jUtils.setAllLevels('org.apache.zookeeper', log_level) +org.apache.hadoop.hbase.logging.Log4jUtils.setAllLevels('org.apache.hadoop', log_level) -LOG = LoggerFactory.getLogger(NAME) +usage if ARGV.size < 2 || ARGV.size > 3 parts1 = ARGV[0].split(':') diff --git a/bin/shutdown_regionserver.rb b/bin/shutdown_regionserver.rb index a131776e32ae..d8ddb2a8625d 100644 --- a/bin/shutdown_regionserver.rb +++ b/bin/shutdown_regionserver.rb @@ -24,7 +24,6 @@ include Java java_import org.apache.hadoop.hbase.HBaseConfiguration -java_import org.apache.hadoop.hbase.client.HBaseAdmin java_import org.apache.hadoop.hbase.client.ConnectionFactory def usage(msg = nil) @@ -35,6 +34,12 @@ def usage(msg = nil) abort end +# disable debug/info logging on this script for clarity +log_level = 'ERROR' +org.apache.hadoop.hbase.logging.Log4jUtils.setAllLevels('org.apache.hadoop.hbase', log_level) +org.apache.hadoop.hbase.logging.Log4jUtils.setAllLevels('org.apache.zookeeper', log_level) +org.apache.hadoop.hbase.logging.Log4jUtils.setAllLevels('org.apache.hadoop', log_level) + usage if ARGV.empty? 
ARGV.each do |x| diff --git a/bin/stop-hbase.sh b/bin/stop-hbase.sh index b47ae1f7743b..d10e618f2d21 100755 --- a/bin/stop-hbase.sh +++ b/bin/stop-hbase.sh @@ -52,7 +52,7 @@ fi export HBASE_LOG_PREFIX=hbase-$HBASE_IDENT_STRING-master-$HOSTNAME export HBASE_LOGFILE=$HBASE_LOG_PREFIX.log -logout=$HBASE_LOG_DIR/$HBASE_LOG_PREFIX.out +logout=$HBASE_LOG_DIR/$HBASE_LOG_PREFIX.out loglog="${HBASE_LOG_DIR}/${HBASE_LOGFILE}" pid=${HBASE_PID_DIR:-/tmp}/hbase-$HBASE_IDENT_STRING-master.pid @@ -74,7 +74,7 @@ fi # distributed == false means that the HMaster will kill ZK when it exits # HBASE-6504 - only take the first line of the output in case verbose gc is on distMode=`$bin/hbase --config "$HBASE_CONF_DIR" org.apache.hadoop.hbase.util.HBaseConfTool hbase.cluster.distributed | head -n 1` -if [ "$distMode" == 'true' ] +if [ "$distMode" == 'true' ] then "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" stop zookeeper fi diff --git a/bin/test/process_based_cluster.sh b/bin/test/process_based_cluster.sh index eb8633f502cb..1c4c72532134 100755 --- a/bin/test/process_based_cluster.sh +++ b/bin/test/process_based_cluster.sh @@ -68,7 +68,7 @@ while [ $# -ne 0 ]; do -h|--help) print_usage ;; --kill) - IS_KILL=1 + IS_KILL=1 cmd_specified ;; --show) IS_SHOW=1 @@ -106,5 +106,3 @@ else echo "No command specified" >&2 exit 1 fi - - diff --git a/bin/zookeepers.sh b/bin/zookeepers.sh index 97bf41b60528..5d22d82a559a 100755 --- a/bin/zookeepers.sh +++ b/bin/zookeepers.sh @@ -17,7 +17,7 @@ # * See the License for the specific language governing permissions and # * limitations under the License. # */ -# +# # Run a shell command on all zookeeper hosts. # # Environment Variables diff --git a/conf/hbase-env.cmd b/conf/hbase-env.cmd index 4beebf646dee..84519d5606d2 100644 --- a/conf/hbase-env.cmd +++ b/conf/hbase-env.cmd @@ -32,7 +32,7 @@ @rem set HBASE_OFFHEAPSIZE=1000 @rem For example, to allocate 8G of offheap, to 8G: -@rem etHBASE_OFFHEAPSIZE=8G +@rem set HBASE_OFFHEAPSIZE=8G @rem Extra Java runtime options. @rem Below are what we set by default. May only work with SUN JVM. @@ -82,6 +82,9 @@ set HBASE_OPTS=%HBASE_OPTS% "-XX:+UseConcMarkSweepGC" "-Djava.net.preferIPv4Stac @rem Tell HBase whether it should manage it's own instance of ZooKeeper or not. @rem set HBASE_MANAGES_ZK=true +@rem Tell HBase the logger level and appenders +@rem set HBASE_ROOT_LOGGER=INFO,DRFA + @rem Uncomment to enable trace, you can change the options to use other exporters such as jaeger or @rem zipkin. See https://github.com/open-telemetry/opentelemetry-java-instrumentation on how to @rem configure exporters and other components through system properties. diff --git a/conf/hbase-env.sh b/conf/hbase-env.sh index ee71a0ab56dc..763ac364a2e2 100644 --- a/conf/hbase-env.sh +++ b/conf/hbase-env.sh @@ -33,7 +33,7 @@ # The maximum amount of heap to use. Default is left to JVM default. # export HBASE_HEAPSIZE=1G -# Uncomment below if you intend to use off heap cache. For example, to allocate 8G of +# Uncomment below if you intend to use off heap cache. For example, to allocate 8G of # offheap, set the value to "8G". # export HBASE_OFFHEAPSIZE=1G @@ -70,7 +70,7 @@ # export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc: -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M" # See the package documentation for org.apache.hadoop.hbase.io.hfile for other configurations -# needed setting up off-heap block caching. +# needed setting up off-heap block caching. 
# Uncomment and adjust to enable JMX exporting # See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to configure remote password access. @@ -101,7 +101,7 @@ # Where log files are stored. $HBASE_HOME/logs by default. # export HBASE_LOG_DIR=${HBASE_HOME}/logs -# Enable remote JDWP debugging of major HBase processes. Meant for Core Developers +# Enable remote JDWP debugging of major HBase processes. Meant for Core Developers # export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8070" # export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8071" # export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8072" @@ -125,13 +125,13 @@ # Tell HBase whether it should manage it's own instance of ZooKeeper or not. # export HBASE_MANAGES_ZK=true -# The default log rolling policy is RFA, where the log file is rolled as per the size defined for the -# RFA appender. Please refer to the log4j.properties file to see more details on this appender. +# The default log rolling policy is RFA, where the log file is rolled as per the size defined for the +# RFA appender. Please refer to the log4j2.properties file to see more details on this appender. # In case one needs to do log rolling on a date change, one should set the environment property # HBASE_ROOT_LOGGER to ",DRFA". # For example: -# HBASE_ROOT_LOGGER=INFO,DRFA -# The reason for changing default to RFA is to avoid the boundary case of filling out disk space as +# export HBASE_ROOT_LOGGER=INFO,DRFA +# The reason for changing default to RFA is to avoid the boundary case of filling out disk space as # DRFA doesn't put any cap on the log size. Please refer to HBase-5655 for more context. # Tell HBase whether it should include Hadoop's lib when start up, @@ -142,32 +142,69 @@ # export GREP="${GREP-grep}" # export SED="${SED-sed}" -# Tracing -# Uncomment some combination of these lines to enable tracing. You should change the options to use -# the exporters appropriate to your environment. See -# https://github.com/open-telemetry/opentelemetry-java-instrumentation for details on how to -# configure exporters and other components through system properties. # -# The presence HBASE_TRACE_OPTS indicates that tracing should be enabled, and serves as site-wide -# settings. -# export HBASE_TRACE_OPTS="-Dotel.traces.exporter=none -Dotel.metrics.exporter=none" +## OpenTelemetry Tracing # -# Per-process configuration variables allow for fine-grained configuration control. 
-# export HBASE_SHELL_OPTS="${HBASE_SHELL_OPTS} ${HBASE_TRACE_OPTS} -Dotel.resource.attributes=service.name=hbase-shell" -# export HBASE_JSHELL_OPTS="${HBASE_JSHELL_OPTS} ${HBASE_TRACE_OPTS} -Dotel.resource.attributes=service.name=hbase-jshell" -# export HBASE_HBCK_OPTS="${HBASE_HBCK_OPTS} ${HBASE_TRACE_OPTS} -Dotel.resource.attributes=service.name=hbase-hbck" -# export HBASE_MASTER_OPTS="${HBASE_MASTER_OPTS} ${HBASE_TRACE_OPTS} -Dotel.resource.attributes=service.name=hbase-master" -# export HBASE_REGIONSERVER_OPTS="${HBASE_REGIONSERVER_OPTS} ${HBASE_TRACE_OPTS} -Dotel.resource.attributes=service.name=hbase-regionserver" -# export HBASE_THRIFT_OPTS="${HBASE_THRIFT_OPTS} ${HBASE_TRACE_OPTS} -Dotel.resource.attributes=service.name=hbase-thrift" -# export HBASE_REST_OPTS="${HBASE_REST_OPTS} ${HBASE_TRACE_OPTS} -Dotel.resource.attributes=service.name=hbase-rest" -# export HBASE_ZOOKEEPER_OPTS="${HBASE_ZOOKEEPER_OPTS} ${HBASE_TRACE_OPTS} -Dotel.resource.attributes=service.name=hbase-zookeeper" -# export HBASE_PE_OPTS="${HBASE_PE_OPTS} ${HBASE_TRACE_OPTS} -Dotel.resource.attributes=service.name=hbase-performanceevaluation" -# export HBASE_LTT_OPTS="${HBASE_LTT_OPTS} ${HBASE_TRACE_OPTS} -Dotel.resource.attributes=service.name=hbase-loadtesttool" -# export HBASE_CANARY_OPTS="${HBASE_CANARY_OPTS} ${HBASE_TRACE_OPTS} -Dotel.resource.attributes=service.name=hbase-canary" -# export HBASE_HBTOP_OPTS="${HBASE_HBTOP_OPTS} ${HBASE_TRACE_OPTS} -Dotel.resource.attributes=service.name=hbase-hbtop" +# HBase is instrumented for tracing using OpenTelemetry. None of the other OpenTelemetry signals +# are supported at this time. Configuring tracing involves setting several configuration points, +# via environment variable or system property. This configuration prefers setting environment +# variables whenever possible because they are picked up by all processes launched by `bin/hbase`. +# Use system properties when you launch multiple processes from the same configuration directory -- +# when you need to specify different configuration values for different hbase processes that are +# launched using the same HBase configuration (i.e., a single-host pseudo-distributed cluster or +# launching the `bin/hbase shell` from a host that is also running an instance of the master). See +# https://github.com/open-telemetry/opentelemetry-java/tree/v1.15.0/sdk-extensions/autoconfigure +# for an inventory of configuration points and detailed explanations of each of them. # -# Manually specify a value for OPENTELEMETRY_JAVAAGENT_PATH to override the autodiscovery mechanism -# export OPENTELEMETRY_JAVAAGENT_PATH="" +# Note also that as of this writing, the javaagent logs to stderr and is not configured along with +# the rest of HBase's logging configuration. +# +# `HBASE_OTEL_TRACING_ENABLED`, required. Enable attaching the opentelemetry javaagent to the +# process via support provided by `bin/hbase`. When this value us `false`, the agent is not added +# to the process launch arguments and all further OpenTelemetry configuration is ignored. +#export HBASE_OTEL_TRACING_ENABLED=true +# +# `OPENTELEMETRY_JAVAAGENT_PATH`, optional. Override the javaagent provided by HBase in `lib/trace` +# with an alternate. Use when you need to upgrade the agent version or swap out the official one +# for an alternative implementation. +#export OPENTELEMETRY_JAVAAGENT_PATH="" +# +# `OTEL_FOO_EXPORTER`, required. Specify an Exporter implementation per signal type. 
HBase only +# makes explicit use of the traces signal at this time, so the important one is +# `OTEL_TRACES_EXPORTER`. Specify its value based on the exporter required for your tracing +# environment. The other two should be uncommented and specified as `none`, otherwise the agent +# may report errors while attempting to export these other signals to an unconfigured destination. +# https://github.com/open-telemetry/opentelemetry-java/tree/v1.15.0/sdk-extensions/autoconfigure#exporters +#export OTEL_TRACES_EXPORTER="" +#export OTEL_METRICS_EXPORTER="none" +#export OTEL_LOGS_EXPORTER="none" +# +# `OTEL_SERVICE_NAME`, required. Specify "resource attributes", and specifically the `service.name`, +# as a unique value for each HBase process. OpenTelemetry allows for specifying this value in one +# of two ways, via environment variables with the `OTEL_` prefix, or via system properties with the +# `otel.` prefix. Which you use with HBase is decided based on whether this configuration file is +# read by a single process or shared by multiple HBase processes. For the default standalone mode +# or an environment where all processes share the same configuration file, use the `otel` system +# properties by uncommenting all of the `HBASE_FOO_OPTS` exports below. When this configuration file +# is being consumed by only a single process -- for example, from a systemd configuration or in a +# container template -- replace use of `HBASE_FOO_OPTS` with the standard `OTEL_SERVICE_NAME` and/or +# `OTEL_RESOURCE_ATTRIBUTES` environment variables. For further details, see +# https://github.com/open-telemetry/opentelemetry-java/tree/v1.15.0/sdk-extensions/autoconfigure#opentelemetry-resource +#export HBASE_CANARY_OPTS="${HBASE_CANARY_OPTS} -Dotel.resource.attributes=service.name=hbase-canary" +#export HBASE_HBCK_OPTS="${HBASE_HBCK_OPTS} -Dotel.resource.attributes=service.name=hbase-hbck" +#export HBASE_HBTOP_OPTS="${HBASE_HBTOP_OPTS} -Dotel.resource.attributes=service.name=hbase-hbtop" +#export HBASE_JSHELL_OPTS="${HBASE_JSHELL_OPTS} -Dotel.resource.attributes=service.name=hbase-jshell" +#export HBASE_LTT_OPTS="${HBASE_LTT_OPTS} -Dotel.resource.attributes=service.name=hbase-loadtesttool" +#export HBASE_MASTER_OPTS="${HBASE_MASTER_OPTS} -Dotel.resource.attributes=service.name=hbase-master" +#export HBASE_PE_OPTS="${HBASE_PE_OPTS} -Dotel.resource.attributes=service.name=hbase-performanceevaluation" +#export HBASE_REGIONSERVER_OPTS="${HBASE_REGIONSERVER_OPTS} -Dotel.resource.attributes=service.name=hbase-regionserver" +#export HBASE_REST_OPTS="${HBASE_REST_OPTS} -Dotel.resource.attributes=service.name=hbase-rest" +#export HBASE_SHELL_OPTS="${HBASE_SHELL_OPTS} -Dotel.resource.attributes=service.name=hbase-shell" +#export HBASE_THRIFT_OPTS="${HBASE_THRIFT_OPTS} -Dotel.resource.attributes=service.name=hbase-thrift" +#export HBASE_ZOOKEEPER_OPTS="${HBASE_ZOOKEEPER_OPTS} -Dotel.resource.attributes=service.name=hbase-zookeeper" -# Additional argments passed to jshell invocation +# +# JDK11+ JShell +# +# Additional arguments passed to jshell invocation # export HBASE_JSHELL_ARGS="--startup DEFAULT --startup PRINTING --startup hbase_startup.jsh" diff --git a/conf/hbase-policy.xml b/conf/hbase-policy.xml index bf472407d173..5a0256d5164a 100644 --- a/conf/hbase-policy.xml +++ b/conf/hbase-policy.xml @@ -24,20 +24,20 @@ security.client.protocol.acl * - ACL for ClientProtocol and AdminProtocol implementations (ie. + ACL for ClientProtocol and AdminProtocol implementations (ie. 
clients talking to HRegionServers) - The ACL is a comma-separated list of user and group names. The user and - group list is separated by a blank. For e.g. "alice,bob users,wheel". + The ACL is a comma-separated list of user and group names. The user and + group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed. security.admin.protocol.acl * - ACL for HMasterInterface protocol implementation (ie. + ACL for HMasterInterface protocol implementation (ie. clients talking to HMaster for admin operations). - The ACL is a comma-separated list of user and group names. The user and - group list is separated by a blank. For e.g. "alice,bob users,wheel". + The ACL is a comma-separated list of user and group names. The user and + group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed. @@ -46,8 +46,8 @@ * ACL for HMasterRegionInterface protocol implementations (for HRegionServers communicating with HMaster) - The ACL is a comma-separated list of user and group names. The user and - group list is separated by a blank. For e.g. "alice,bob users,wheel". + The ACL is a comma-separated list of user and group names. The user and + group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed. diff --git a/conf/log4j.properties b/conf/log4j.properties deleted file mode 100644 index 2282fa5d4a35..000000000000 --- a/conf/log4j.properties +++ /dev/null @@ -1,139 +0,0 @@ -# Licensed to the Apache Software Foundation (ASF) under one -# or more contributor license agreements. See the NOTICE file -# distributed with this work for additional information -# regarding copyright ownership. The ASF licenses this file -# to you under the Apache License, Version 2.0 (the -# "License"); you may not use this file except in compliance -# with the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# Define some default values that can be overridden by system properties -hbase.root.logger=INFO,console -hbase.security.logger=INFO,console -hbase.log.dir=. -hbase.log.file=hbase.log -hbase.log.level=INFO - -# Define the root logger to the system property "hbase.root.logger". 
-log4j.rootLogger=${hbase.root.logger} - -# Logging Threshold -log4j.threshold=ALL - -# -# Daily Rolling File Appender -# -log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender -log4j.appender.DRFA.File=${hbase.log.dir}/${hbase.log.file} - -# Rollver at midnight -log4j.appender.DRFA.DatePattern=.yyyy-MM-dd - -# 30-day backup -#log4j.appender.DRFA.MaxBackupIndex=30 -log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout - -# Pattern format: Date LogLevel LoggerName LogMessage -log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %.1000m%n - -# Rolling File Appender properties -hbase.log.maxfilesize=256MB -hbase.log.maxbackupindex=20 - -# Rolling File Appender -log4j.appender.RFA=org.apache.log4j.RollingFileAppender -log4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file} - -log4j.appender.RFA.MaxFileSize=${hbase.log.maxfilesize} -log4j.appender.RFA.MaxBackupIndex=${hbase.log.maxbackupindex} - -log4j.appender.RFA.layout=org.apache.log4j.PatternLayout -log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %.1000m%n - -# -# Security audit appender -# -hbase.security.log.file=SecurityAuth.audit -hbase.security.log.maxfilesize=256MB -hbase.security.log.maxbackupindex=20 -log4j.appender.RFAS=org.apache.log4j.RollingFileAppender -log4j.appender.RFAS.File=${hbase.log.dir}/${hbase.security.log.file} -log4j.appender.RFAS.MaxFileSize=${hbase.security.log.maxfilesize} -log4j.appender.RFAS.MaxBackupIndex=${hbase.security.log.maxbackupindex} -log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout -log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %.1000m%n -log4j.category.SecurityLogger=${hbase.security.logger} -log4j.additivity.SecurityLogger=false -#log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE -#log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.visibility.VisibilityController=TRACE - -# -# Null Appender -# -log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender - -# -# console -# Add "console" to rootlogger above if you want to use this -# -log4j.appender.console=org.apache.log4j.ConsoleAppender -log4j.appender.console.target=System.err -log4j.appender.console.layout=org.apache.log4j.PatternLayout -log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %.1000m%n - -log4j.appender.asyncconsole=org.apache.hadoop.hbase.AsyncConsoleAppender -log4j.appender.asyncconsole.target=System.err - -# Custom Logging levels - -log4j.logger.org.apache.zookeeper=${hbase.log.level} -#log4j.logger.org.apache.hadoop.fs.FSNamesystem=DEBUG -log4j.logger.org.apache.hadoop.hbase=${hbase.log.level} -log4j.logger.org.apache.hadoop.hbase.META=${hbase.log.level} -# Make these two classes INFO-level. Make them DEBUG to see more zk debug. -log4j.logger.org.apache.hadoop.hbase.zookeeper.ZKUtil=${hbase.log.level} -log4j.logger.org.apache.hadoop.hbase.zookeeper.ZKWatcher=${hbase.log.level} -#log4j.logger.org.apache.hadoop.dfs=DEBUG -# Set this class to log INFO only otherwise its OTT -# Enable this to get detailed connection error/retry logging. 
-# log4j.logger.org.apache.hadoop.hbase.client.ConnectionImplementation=TRACE - - -# Uncomment this line to enable tracing on _every_ RPC call (this can be a lot of output) -#log4j.logger.org.apache.hadoop.ipc.HBaseServer.trace=DEBUG - -# Uncomment the below if you want to remove logging of client region caching' -# and scan of hbase:meta messages -# log4j.logger.org.apache.hadoop.hbase.client.ConnectionImplementation=INFO - -# EventCounter -# Add "EventCounter" to rootlogger if you want to use this -# Uncomment the line below to add EventCounter information -# log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter - -# Prevent metrics subsystem start/stop messages (HBASE-17722) -log4j.logger.org.apache.hadoop.metrics2.impl.MetricsConfig=WARN -log4j.logger.org.apache.hadoop.metrics2.impl.MetricsSinkAdapter=WARN -log4j.logger.org.apache.hadoop.metrics2.impl.MetricsSystemImpl=WARN - -# Disable request log by default, you can enable this by changing the appender -log4j.category.http.requests=INFO,NullAppender -log4j.additivity.http.requests=false -# Replace the above with this configuration if you want an http access.log -#log4j.appender.accessRFA=org.apache.log4j.RollingFileAppender -#log4j.appender.accessRFA.File=/var/log/hbase/access.log -#log4j.appender.accessRFA.layout=org.apache.log4j.PatternLayout -#log4j.appender.accessRFA.layout.ConversionPattern=%m%n -#log4j.appender.accessRFA.MaxFileSize=200MB -#log4j.appender.accessRFA.MaxBackupIndex=10 -# route http.requests to the accessRFA appender -#log4j.logger.http.requests=INFO,accessRFA -# disable http.requests.* entries going up to the root logger -#log4j.additivity.http.requests=false diff --git a/conf/log4j2-hbtop.properties b/conf/log4j2-hbtop.properties new file mode 100644 index 000000000000..de2f97641da7 --- /dev/null +++ b/conf/log4j2-hbtop.properties @@ -0,0 +1,35 @@ +#/** +# * Licensed to the Apache Software Foundation (ASF) under one +# * or more contributor license agreements. See the NOTICE file +# * distributed with this work for additional information +# * regarding copyright ownership. The ASF licenses this file +# * to you under the Apache License, Version 2.0 (the +# * "License"); you may not use this file except in compliance +# * with the License. You may obtain a copy of the License at +# * +# * http://www.apache.org/licenses/LICENSE-2.0 +# * +# * Unless required by applicable law or agreed to in writing, software +# * distributed under the License is distributed on an "AS IS" BASIS, +# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# * See the License for the specific language governing permissions and +# * limitations under the License. +# */ + +status = warn +dest = err +name = PropertiesConfig + +# console +appender.console.type = Console +appender.console.target = SYSTEM_ERR +appender.console.name = console +appender.console.layout.type = PatternLayout +appender.console.layout.pattern = %d{ISO8601} %-5p [%t] %c{2}: %m%n + +rootLogger = WARN,console + +# ZooKeeper will still put stuff at WARN +logger.zookeeper.name = org.apache.zookeeper +logger.zookeeper.level = ERROR + diff --git a/conf/log4j2.properties b/conf/log4j2.properties new file mode 100644 index 000000000000..5ffcfda24176 --- /dev/null +++ b/conf/log4j2.properties @@ -0,0 +1,137 @@ +#/** +# * Licensed to the Apache Software Foundation (ASF) under one +# * or more contributor license agreements. See the NOTICE file +# * distributed with this work for additional information +# * regarding copyright ownership. 
The ASF licenses this file +# * to you under the Apache License, Version 2.0 (the +# * "License"); you may not use this file except in compliance +# * with the License. You may obtain a copy of the License at +# * +# * http://www.apache.org/licenses/LICENSE-2.0 +# * +# * Unless required by applicable law or agreed to in writing, software +# * distributed under the License is distributed on an "AS IS" BASIS, +# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# * See the License for the specific language governing permissions and +# * limitations under the License. +# */ + +status = warn +dest = err +name = PropertiesConfig + +# Console appender +appender.console.type = Console +appender.console.target = SYSTEM_ERR +appender.console.name = console +appender.console.layout.type = PatternLayout +appender.console.layout.pattern = %d{ISO8601} %-5p [%t] %c{2}: %.1000m%n + +# Daily Rolling File Appender +appender.DRFA.type = RollingFile +appender.DRFA.name = DRFA +appender.DRFA.fileName = ${sys:hbase.log.dir:-.}/${sys:hbase.log.file:-hbase.log} +appender.DRFA.filePattern = ${sys:hbase.log.dir:-.}/${sys:hbase.log.file:-hbase.log}.%d{yyyy-MM-dd} +appender.DRFA.createOnDemand = true +appender.DRFA.layout.type = PatternLayout +appender.DRFA.layout.pattern = %d{ISO8601} %-5p [%t] %c{2}: %.1000m%n +appender.DRFA.policies.type = Policies +appender.DRFA.policies.time.type = TimeBasedTriggeringPolicy +appender.DRFA.policies.time.interval = 1 +appender.DRFA.policies.time.modulate = true +appender.DRFA.policies.size.type = SizeBasedTriggeringPolicy +appender.DRFA.policies.size.size = ${sys:hbase.log.maxfilesize:-256MB} +appender.DRFA.strategy.type = DefaultRolloverStrategy +appender.DRFA.strategy.max = ${sys:hbase.log.maxbackupindex:-20} + +# Rolling File Appender +appender.RFA.type = RollingFile +appender.RFA.name = RFA +appender.RFA.fileName = ${sys:hbase.log.dir:-.}/${sys:hbase.log.file:-hbase.log} +appender.RFA.filePattern = ${sys:hbase.log.dir:-.}/${sys:hbase.log.file:-hbase.log}.%i +appender.RFA.createOnDemand = true +appender.RFA.layout.type = PatternLayout +appender.RFA.layout.pattern = %d{ISO8601} %-5p [%t] %c{2}: %.1000m%n +appender.RFA.policies.type = Policies +appender.RFA.policies.size.type = SizeBasedTriggeringPolicy +appender.RFA.policies.size.size = ${sys:hbase.log.maxfilesize:-256MB} +appender.RFA.strategy.type = DefaultRolloverStrategy +appender.RFA.strategy.max = ${sys:hbase.log.maxbackupindex:-20} + +# Security Audit Appender +appender.RFAS.type = RollingFile +appender.RFAS.name = RFAS +appender.RFAS.fileName = ${sys:hbase.log.dir:-.}/${sys:hbase.security.log.file:-SecurityAuth.audit} +appender.RFAS.filePattern = ${sys:hbase.log.dir:-.}/${sys:hbase.security.log.file:-SecurityAuth.audit}.%i +appender.RFAS.createOnDemand = true +appender.RFAS.layout.type = PatternLayout +appender.RFAS.layout.pattern = %d{ISO8601} %-5p [%t] %c{2}: %.1000m%n +appender.RFAS.policies.type = Policies +appender.RFAS.policies.size.type = SizeBasedTriggeringPolicy +appender.RFAS.policies.size.size = ${sys:hbase.security.log.maxfilesize:-256MB} +appender.RFAS.strategy.type = DefaultRolloverStrategy +appender.RFAS.strategy.max = ${sys:hbase.security.log.maxbackupindex:-20} + +# Http Access Log RFA, uncomment this if you want an http access.log +# appender.AccessRFA.type = RollingFile +# appender.AccessRFA.name = AccessRFA +# appender.AccessRFA.fileName = /var/log/hbase/access.log +# appender.AccessRFA.filePattern = /var/log/hbase/access.log.%i +# appender.AccessRFA.createOnDemand = true +# 
appender.AccessRFA.layout.type = PatternLayout +# appender.AccessRFA.layout.pattern = %m%n +# appender.AccessRFA.policies.type = Policies +# appender.AccessRFA.policies.size.type = SizeBasedTriggeringPolicy +# appender.AccessRFA.policies.size.size = 200MB +# appender.AccessRFA.strategy.type = DefaultRolloverStrategy +# appender.AccessRFA.strategy.max = 10 + +# Null Appender +appender.NullAppender.type = Null +appender.NullAppender.name = NullAppender + +rootLogger = ${sys:hbase.root.logger:-INFO,console} + +logger.SecurityLogger.name = SecurityLogger +logger.SecurityLogger = ${sys:hbase.security.logger:-INFO,console} +logger.SecurityLogger.additivity = false + +# Custom Logging levels +# logger.zookeeper.name = org.apache.zookeeper +# logger.zookeeper.level = ERROR + +# logger.FSNamesystem.name = org.apache.hadoop.fs.FSNamesystem +# logger.FSNamesystem.level = DEBUG + +# logger.hbase.name = org.apache.hadoop.hbase +# logger.hbase.level = DEBUG + +# logger.META.name = org.apache.hadoop.hbase.META +# logger.META.level = DEBUG + +# Make these two classes below DEBUG to see more zk debug. +# logger.ZKUtil.name = org.apache.hadoop.hbase.zookeeper.ZKUtil +# logger.ZKUtil.level = DEBUG + +# logger.ZKWatcher.name = org.apache.hadoop.hbase.zookeeper.ZKWatcher +# logger.ZKWatcher.level = DEBUG + +# logger.dfs.name = org.apache.hadoop.dfs +# logger.dfs.level = DEBUG + +# Prevent metrics subsystem start/stop messages (HBASE-17722) +logger.MetricsConfig.name = org.apache.hadoop.metrics2.impl.MetricsConfig +logger.MetricsConfig.level = WARN + +logger.MetricsSinkAdapte.name = org.apache.hadoop.metrics2.impl.MetricsSinkAdapter +logger.MetricsSinkAdapte.level = WARN + +logger.MetricsSystemImpl.name = org.apache.hadoop.metrics2.impl.MetricsSystemImpl +logger.MetricsSystemImpl.level = WARN + +# Disable request log by default, you can enable this by changing the appender +logger.http.name = http.requests +logger.http.additivity = false +logger.http = INFO,NullAppender +# Replace the above with this configuration if you want an http access.log +# logger.http = INFO,AccessRFA diff --git a/dev-support/Dockerfile b/dev-support/Dockerfile index 923e26563b3a..39fd5645e889 100644 --- a/dev-support/Dockerfile +++ b/dev-support/Dockerfile @@ -20,16 +20,14 @@ # # Specifically, it's used for the flaky test reporting job defined in # dev-support/flaky-tests/flaky-reporting.Jenkinsfile -FROM ubuntu:18.04 +FROM ubuntu:22.04 COPY . 
/hbase/dev-support RUN DEBIAN_FRONTEND=noninteractive apt-get -qq -y update \ && DEBIAN_FRONTEND=noninteractive apt-get -qq -y install --no-install-recommends \ - curl='7.58.0-*' \ - python2.7='2.7.17-*' \ - python-pip='9.0.1-*' \ - python-setuptools='39.0.1-*' \ + curl='7.81.0-*' \ + python3-pip='22.0.2+dfsg-*' \ && apt-get clean \ - && rm -rf /var/lib/apt/lists/* -RUN pip install -r /hbase/dev-support/python-requirements.txt + && rm -rf /var/lib/apt/lists/* \ + && pip3 install --no-cache-dir -r /hbase/dev-support/python-requirements.txt diff --git a/dev-support/HBase Code Template.xml b/dev-support/HBase Code Template.xml index 3b666c97a8a6..9c69a5a40b34 100644 --- a/dev-support/HBase Code Template.xml +++ b/dev-support/HBase Code Template.xml @@ -38,4 +38,4 @@ ${type_declaration} \ No newline at end of file +// ${todo} Implement constructor diff --git a/dev-support/HOW_TO_YETUS_LOCAL.md b/dev-support/HOW_TO_YETUS_LOCAL.md index 8d22978d422c..2ac4ecd09dc1 100644 --- a/dev-support/HOW_TO_YETUS_LOCAL.md +++ b/dev-support/HOW_TO_YETUS_LOCAL.md @@ -87,7 +87,7 @@ these personalities; a pre-packaged personality can be selected via the `--project` parameter. There is a provided HBase personality in Yetus, however the HBase project maintains its own within the HBase source repository. Specify the path to the personality file using `--personality`. The HBase repository -places this file under `dev-support/hbase-personality.sh`. +places this file under `dev-support/hbase-personality.sh`. ## Docker mode diff --git a/dev-support/Jenkinsfile b/dev-support/Jenkinsfile index 7c4064ef8f9e..e6ca653bfa34 100644 --- a/dev-support/Jenkinsfile +++ b/dev-support/Jenkinsfile @@ -31,25 +31,26 @@ pipeline { disableConcurrentBuilds() } environment { - YETUS_RELEASE = '0.12.0' + YETUS_RELEASE = '0.15.0' // where we'll write everything from different steps. Need a copy here so the final step can check for success/failure. OUTPUT_DIR_RELATIVE_GENERAL = 'output-general' - OUTPUT_DIR_RELATIVE_JDK7 = 'output-jdk7' OUTPUT_DIR_RELATIVE_JDK8_HADOOP2 = 'output-jdk8-hadoop2' OUTPUT_DIR_RELATIVE_JDK8_HADOOP3 = 'output-jdk8-hadoop3' OUTPUT_DIR_RELATIVE_JDK11_HADOOP3 = 'output-jdk11-hadoop3' + OUTPUT_DIR_RELATIVE_JDK17_HADOOP3 = 'output-jdk17-hadoop3' PROJECT = 'hbase' PROJECT_PERSONALITY = 'https://raw.githubusercontent.com/apache/hbase/master/dev-support/hbase-personality.sh' PERSONALITY_FILE = 'tools/personality.sh' // This section of the docs tells folks not to use the javadoc tag. older branches have our old version of the check for said tag. - AUTHOR_IGNORE_LIST = 'src/main/asciidoc/_chapters/developer.adoc,dev-support/test-patch.sh' - WHITESPACE_IGNORE_LIST = '.*/generated/.*' + AUTHOR_IGNORE_LIST = 'src/main/asciidoc/_chapters/developer.adoc' + BLANKS_EOL_IGNORE_FILE = 'dev-support/blanks-eol-ignore.txt' + BLANKS_TABS_IGNORE_FILE = 'dev-support/blanks-tabs-ignore.txt' // output from surefire; sadly the archive function in yetus only works on file names. ARCHIVE_PATTERN_LIST = 'TEST-*.xml,org.apache.h*.txt,*.dumpstream,*.dump' // These tests currently have known failures. Once they burn down to 0, remove from here so that new problems will cause a failure. 
- TESTS_FILTER = 'cc,checkstyle,javac,javadoc,pylint,shellcheck,whitespace,perlcritic,ruby-lint,rubocop,mvnsite' - EXCLUDE_TESTS_URL = "${JENKINS_URL}/job/HBase/job/HBase-Find-Flaky-Tests/job/${BRANCH_NAME}/lastSuccessfulBuild/artifact/output/excludes" + TESTS_FILTER = 'checkstyle,javac,javadoc,pylint,shellcheck,shelldocs,blanks,perlcritic,ruby-lint,rubocop' + EXCLUDE_TESTS_URL = "${JENKINS_URL}/job/HBase-Find-Flaky-Tests/job/${BRANCH_NAME}/lastSuccessfulBuild/artifact/output/excludes" // TODO does hadoopcheck need to be jdk specific? SHALLOW_CHECKS = 'all,-shadedjars,-unit' // run by the 'yetus general check' DEEP_CHECKS = 'compile,htmlout,javac,maven,mvninstall,shadedjars,unit' // run by 'yetus jdkX (HadoopY) checks' @@ -92,7 +93,7 @@ pipeline { rm -rf "${YETUS_DIR}" "${WORKSPACE}/component/dev-support/jenkins-scripts/cache-apache-project-artifact.sh" \ --working-dir "${WORKSPACE}/downloads-yetus" \ - --keys 'https://www.apache.org/dist/yetus/KEYS' \ + --keys 'https://downloads.apache.org/yetus/KEYS' \ --verify-tar-gz \ "${WORKSPACE}/yetus-${YETUS_RELEASE}-bin.tar.gz" \ "yetus/${YETUS_RELEASE}/apache-yetus-${YETUS_RELEASE}-bin.tar.gz" @@ -139,7 +140,7 @@ pipeline { echo "Ensure we have a copy of Hadoop ${HADOOP2_VERSION}" "${WORKSPACE}/component/dev-support/jenkins-scripts/cache-apache-project-artifact.sh" \ --working-dir "${WORKSPACE}/downloads-hadoop-2" \ - --keys 'http://www.apache.org/dist/hadoop/common/KEYS' \ + --keys 'https://downloads.apache.org/hadoop/common/KEYS' \ --verify-tar-gz \ "${WORKSPACE}/hadoop-${HADOOP2_VERSION}-bin.tar.gz" \ "hadoop/common/hadoop-${HADOOP2_VERSION}/hadoop-${HADOOP2_VERSION}.tar.gz" @@ -167,7 +168,7 @@ pipeline { echo "Ensure we have a copy of Hadoop ${HADOOP3_VERSION}" "${WORKSPACE}/component/dev-support/jenkins-scripts/cache-apache-project-artifact.sh" \ --working-dir "${WORKSPACE}/downloads-hadoop-3" \ - --keys 'http://www.apache.org/dist/hadoop/common/KEYS' \ + --keys 'https://downloads.apache.org/hadoop/common/KEYS' \ --verify-tar-gz \ "${WORKSPACE}/hadoop-${HADOOP3_VERSION}-bin.tar.gz" \ "hadoop/common/hadoop-${HADOOP3_VERSION}/hadoop-${HADOOP3_VERSION}.tar.gz" @@ -186,10 +187,10 @@ pipeline { // stash with given name for all tests we might run, so that we can unstash all of them even if // we skip some due to e.g. branch-specific JDK or Hadoop support stash name: 'general-result', allowEmpty: true, includes: "${OUTPUT_DIR_RELATIVE_GENERAL}/doesn't-match" - stash name: 'jdk7-result', allowEmpty: true, includes: "${OUTPUT_DIR_RELATIVE_JDK7}/doesn't-match" stash name: 'jdk8-hadoop2-result', allowEmpty: true, includes: "${OUTPUT_DIR_RELATIVE_JDK8_HADOOP2}/doesn't-match" stash name: 'jdk8-hadoop3-result', allowEmpty: true, includes: "${OUTPUT_DIR_RELATIVE_JDK8_HADOOP3}/doesn't-match" stash name: 'jdk11-hadoop3-result', allowEmpty: true, includes: "${OUTPUT_DIR_RELATIVE_JDK11_HADOOP3}/doesn't-match" + stash name: 'jdk17-hadoop3-result', allowEmpty: true, includes: "${OUTPUT_DIR_RELATIVE_JDK17_HADOOP3}/doesn't-match" stash name: 'srctarball-result', allowEmpty: true, includes: "output-srctarball/doesn't-match" } } @@ -204,7 +205,10 @@ pipeline { environment { BASEDIR = "${env.WORKSPACE}/component" TESTS = "${env.SHALLOW_CHECKS}" - SET_JAVA_HOME = '/usr/lib/jvm/java-8' + SET_JAVA_HOME = getJavaHomeForYetusGeneralCheck(env.BRANCH_NAME) + JAVA8_HOME = "/usr/lib/jvm/java-8" + // Activates hadoop 3.0 profile in maven runs. 
+ HADOOP_PROFILE = '3.0' OUTPUT_DIR_RELATIVE = "${env.OUTPUT_DIR_RELATIVE_GENERAL}" OUTPUT_DIR = "${env.WORKSPACE}/${env.OUTPUT_DIR_RELATIVE_GENERAL}" ASF_NIGHTLIES_GENERAL_CHECK_BASE="${ASF_NIGHTLIES_BASE}/${OUTPUT_DIR_RELATIVE}" @@ -269,14 +273,14 @@ pipeline { if [ -d "${OUTPUT_DIR}/branch-site" ]; then echo "Remove ${OUTPUT_DIR}/branch-site for saving space" rm -rf "${OUTPUT_DIR}/branch-site" - python ${BASEDIR}/dev-support/gen_redirect_html.py "${ASF_NIGHTLIES_GENERAL_CHECK_BASE}/branch-site" > "${OUTPUT_DIR}/branch-site.html" + python3 ${BASEDIR}/dev-support/gen_redirect_html.py "${ASF_NIGHTLIES_GENERAL_CHECK_BASE}/branch-site" > "${OUTPUT_DIR}/branch-site.html" else echo "No branch-site, skipping" fi if [ -d "${OUTPUT_DIR}/patch-site" ]; then echo "Remove ${OUTPUT_DIR}/patch-site for saving space" rm -rf "${OUTPUT_DIR}/patch-site" - python ${BASEDIR}/dev-support/gen_redirect_html.py "${ASF_NIGHTLIES_GENERAL_CHECK_BASE}/patch-site" > "${OUTPUT_DIR}/patch-site.html" + python3 ${BASEDIR}/dev-support/gen_redirect_html.py "${ASF_NIGHTLIES_GENERAL_CHECK_BASE}/patch-site" > "${OUTPUT_DIR}/patch-site.html" else echo "No patch-site, skipping" fi @@ -296,28 +300,29 @@ pipeline { } } } - stage ('yetus jdk7 checks') { + stage ('yetus jdk8 hadoop2 checks') { agent { node { label 'hbase' } } when { - branch 'branch-1*' + branch '*branch-2*' } environment { BASEDIR = "${env.WORKSPACE}/component" TESTS = "${env.DEEP_CHECKS}" - OUTPUT_DIR_RELATIVE = "${env.OUTPUT_DIR_RELATIVE_JDK7}" - OUTPUT_DIR = "${env.WORKSPACE}/${env.OUTPUT_DIR_RELATIVE_JDK7}" - SET_JAVA_HOME = "/usr/lib/jvm/java-7" + OUTPUT_DIR_RELATIVE = "${env.OUTPUT_DIR_RELATIVE_JDK8_HADOOP2}" + OUTPUT_DIR = "${env.WORKSPACE}/${env.OUTPUT_DIR_RELATIVE_JDK8_HADOOP2}" + SET_JAVA_HOME = '/usr/lib/jvm/java-8' + SKIP_ERRORPRONE = true } steps { // Must do prior to anything else, since if one of them timesout we'll stash the commentfile sh '''#!/usr/bin/env bash set -e rm -rf "${OUTPUT_DIR}" && mkdir "${OUTPUT_DIR}" - echo '(x) {color:red}-1 jdk7 checks{color}' >"${OUTPUT_DIR}/commentfile" + echo '(x) {color:red}-1 jdk8 hadoop2 checks{color}' >"${OUTPUT_DIR}/commentfile" echo "-- Something went wrong running this stage, please [check relevant console output|${BUILD_URL}/console]." >> "${OUTPUT_DIR}/commentfile" ''' unstash 'yetus' @@ -338,12 +343,12 @@ pipeline { set -e declare -i status=0 if "${BASEDIR}/dev-support/hbase_nightly_yetus.sh" ; then - echo '(/) {color:green}+1 jdk7 checks{color}' > "${OUTPUT_DIR}/commentfile" + echo '(/) {color:green}+1 jdk8 hadoop2 checks{color}' > "${OUTPUT_DIR}/commentfile" else - echo '(x) {color:red}-1 jdk7 checks{color}' > "${OUTPUT_DIR}/commentfile" + echo '(x) {color:red}-1 jdk8 hadoop2 checks{color}' > "${OUTPUT_DIR}/commentfile" status=1 fi - echo "-- For more information [see jdk7 report|${BUILD_URL}/JDK7_20Nightly_20Build_20Report/]" >> "${OUTPUT_DIR}/commentfile" + echo "-- For more information [see jdk8 (hadoop2) report|${BUILD_URL}JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]" >> "${OUTPUT_DIR}/commentfile" exit "${status}" ''' ) @@ -356,7 +361,7 @@ pipeline { } post { always { - stash name: 'jdk7-result', includes: "${OUTPUT_DIR_RELATIVE}/commentfile" + stash name: 'jdk8-hadoop2-result', includes: "${OUTPUT_DIR_RELATIVE}/commentfile" junit testResults: "${env.OUTPUT_DIR_RELATIVE}/**/target/**/TEST-*.xml", allowEmptyResults: true // zip surefire reports. 
sh '''#!/bin/bash -e @@ -386,7 +391,7 @@ pipeline { if [ -f "${OUTPUT_DIR}/test_logs.zip" ]; then echo "Remove ${OUTPUT_DIR}/test_logs.zip for saving space" rm -rf "${OUTPUT_DIR}/test_logs.zip" - python ${BASEDIR}/dev-support/gen_redirect_html.py "${ASF_NIGHTLIES_BASE}/${OUTPUT_DIR_RELATIVE}" > "${OUTPUT_DIR}/test_logs.html" + python3 ${BASEDIR}/dev-support/gen_redirect_html.py "${ASF_NIGHTLIES_BASE}/${OUTPUT_DIR_RELATIVE}" > "${OUTPUT_DIR}/test_logs.html" else echo "No test_logs.zip, skipping" fi @@ -401,30 +406,36 @@ pipeline { // Has to be relative to WORKSPACE. reportDir : "${env.OUTPUT_DIR_RELATIVE}", reportFiles : 'console-report.html', - reportName : 'JDK7 Nightly Build Report' + reportName : 'JDK8 Nightly Build Report (Hadoop2)' ] } } } - stage ('yetus jdk8 hadoop2 checks') { + stage ('yetus jdk8 hadoop3 checks') { agent { node { label 'hbase' } } + when { + branch '*branch-2*' + } environment { BASEDIR = "${env.WORKSPACE}/component" TESTS = "${env.DEEP_CHECKS}" - OUTPUT_DIR_RELATIVE = "${env.OUTPUT_DIR_RELATIVE_JDK8_HADOOP2}" - OUTPUT_DIR = "${env.WORKSPACE}/${env.OUTPUT_DIR_RELATIVE_JDK8_HADOOP2}" + OUTPUT_DIR_RELATIVE = "${env.OUTPUT_DIR_RELATIVE_JDK8_HADOOP3}" + OUTPUT_DIR = "${env.WORKSPACE}/${env.OUTPUT_DIR_RELATIVE_JDK8_HADOOP3}" SET_JAVA_HOME = '/usr/lib/jvm/java-8' + // Activates hadoop 3.0 profile in maven runs. + HADOOP_PROFILE = '3.0' + SKIP_ERRORPRONE = true } steps { // Must do prior to anything else, since if one of them timesout we'll stash the commentfile sh '''#!/usr/bin/env bash set -e rm -rf "${OUTPUT_DIR}" && mkdir "${OUTPUT_DIR}" - echo '(x) {color:red}-1 jdk8 hadoop2 checks{color}' >"${OUTPUT_DIR}/commentfile" + echo '(x) {color:red}-1 jdk8 hadoop3 checks{color}' >"${OUTPUT_DIR}/commentfile" echo "-- Something went wrong running this stage, please [check relevant console output|${BUILD_URL}/console]." >> "${OUTPUT_DIR}/commentfile" ''' unstash 'yetus' @@ -445,12 +456,12 @@ pipeline { set -e declare -i status=0 if "${BASEDIR}/dev-support/hbase_nightly_yetus.sh" ; then - echo '(/) {color:green}+1 jdk8 hadoop2 checks{color}' > "${OUTPUT_DIR}/commentfile" + echo '(/) {color:green}+1 jdk8 hadoop3 checks{color}' > "${OUTPUT_DIR}/commentfile" else - echo '(x) {color:red}-1 jdk8 hadoop2 checks{color}' > "${OUTPUT_DIR}/commentfile" + echo '(x) {color:red}-1 jdk8 hadoop3 checks{color}' > "${OUTPUT_DIR}/commentfile" status=1 fi - echo "-- For more information [see jdk8 (hadoop2) report|${BUILD_URL}JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]" >> "${OUTPUT_DIR}/commentfile" + echo "-- For more information [see jdk8 (hadoop3) report|${BUILD_URL}JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]" >> "${OUTPUT_DIR}/commentfile" exit "${status}" ''' ) @@ -463,7 +474,7 @@ pipeline { } post { always { - stash name: 'jdk8-hadoop2-result', includes: "${OUTPUT_DIR_RELATIVE}/commentfile" + stash name: 'jdk8-hadoop3-result', includes: "${OUTPUT_DIR_RELATIVE}/commentfile" junit testResults: "${env.OUTPUT_DIR_RELATIVE}/**/target/**/TEST-*.xml", allowEmptyResults: true // zip surefire reports. 
sh '''#!/bin/bash -e @@ -493,7 +504,7 @@ pipeline { if [ -f "${OUTPUT_DIR}/test_logs.zip" ]; then echo "Remove ${OUTPUT_DIR}/test_logs.zip for saving space" rm -rf "${OUTPUT_DIR}/test_logs.zip" - python ${BASEDIR}/dev-support/gen_redirect_html.py "${ASF_NIGHTLIES_BASE}/${OUTPUT_DIR_RELATIVE}" > "${OUTPUT_DIR}/test_logs.html" + python3 ${BASEDIR}/dev-support/gen_redirect_html.py "${ASF_NIGHTLIES_BASE}/${OUTPUT_DIR_RELATIVE}" > "${OUTPUT_DIR}/test_logs.html" else echo "No test_logs.zip, skipping" fi @@ -508,37 +519,36 @@ pipeline { // Has to be relative to WORKSPACE. reportDir : "${env.OUTPUT_DIR_RELATIVE}", reportFiles : 'console-report.html', - reportName : 'JDK8 Nightly Build Report (Hadoop2)' + reportName : 'JDK8 Nightly Build Report (Hadoop3)' ] } } } - stage ('yetus jdk8 hadoop3 checks') { + stage ('yetus jdk11 hadoop3 checks') { agent { node { label 'hbase' } } when { - not { - branch 'branch-1*' - } + branch '*branch-2*' } environment { BASEDIR = "${env.WORKSPACE}/component" TESTS = "${env.DEEP_CHECKS}" - OUTPUT_DIR_RELATIVE = "${env.OUTPUT_DIR_RELATIVE_JDK8_HADOOP3}" - OUTPUT_DIR = "${env.WORKSPACE}/${env.OUTPUT_DIR_RELATIVE_JDK8_HADOOP3}" - SET_JAVA_HOME = '/usr/lib/jvm/java-8' + OUTPUT_DIR_RELATIVE = "${env.OUTPUT_DIR_RELATIVE_JDK11_HADOOP3}" + OUTPUT_DIR = "${env.WORKSPACE}/${env.OUTPUT_DIR_RELATIVE_JDK11_HADOOP3}" + SET_JAVA_HOME = "/usr/lib/jvm/java-11" // Activates hadoop 3.0 profile in maven runs. HADOOP_PROFILE = '3.0' + SKIP_ERRORPRONE = true } steps { // Must do prior to anything else, since if one of them timesout we'll stash the commentfile sh '''#!/usr/bin/env bash set -e rm -rf "${OUTPUT_DIR}" && mkdir "${OUTPUT_DIR}" - echo '(x) {color:red}-1 jdk8 hadoop3 checks{color}' >"${OUTPUT_DIR}/commentfile" + echo '(x) {color:red}-1 jdk11 hadoop3 checks{color}' >"${OUTPUT_DIR}/commentfile" echo "-- Something went wrong running this stage, please [check relevant console output|${BUILD_URL}/console]." >> "${OUTPUT_DIR}/commentfile" ''' unstash 'yetus' @@ -559,12 +569,12 @@ pipeline { set -e declare -i status=0 if "${BASEDIR}/dev-support/hbase_nightly_yetus.sh" ; then - echo '(/) {color:green}+1 jdk8 hadoop3 checks{color}' > "${OUTPUT_DIR}/commentfile" + echo '(/) {color:green}+1 jdk11 hadoop3 checks{color}' > "${OUTPUT_DIR}/commentfile" else - echo '(x) {color:red}-1 jdk8 hadoop3 checks{color}' > "${OUTPUT_DIR}/commentfile" + echo '(x) {color:red}-1 jdk11 hadoop3 checks{color}' > "${OUTPUT_DIR}/commentfile" status=1 fi - echo "-- For more information [see jdk8 (hadoop3) report|${BUILD_URL}JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]" >> "${OUTPUT_DIR}/commentfile" + echo "-- For more information [see jdk11 report|${BUILD_URL}JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]" >> "${OUTPUT_DIR}/commentfile" exit "${status}" ''' ) @@ -577,7 +587,7 @@ pipeline { } post { always { - stash name: 'jdk8-hadoop3-result', includes: "${OUTPUT_DIR_RELATIVE}/commentfile" + stash name: 'jdk11-hadoop3-result', includes: "${OUTPUT_DIR_RELATIVE}/commentfile" junit testResults: "${env.OUTPUT_DIR_RELATIVE}/**/target/**/TEST-*.xml", allowEmptyResults: true // zip surefire reports. sh '''#!/bin/bash -e @@ -592,7 +602,7 @@ pipeline { else echo "No archiver directory, skipping compressing." 
fi -''' + ''' sshPublisher(publishers: [ sshPublisherDesc(configName: 'Nightlies', transfers: [ @@ -607,11 +617,11 @@ pipeline { if [ -f "${OUTPUT_DIR}/test_logs.zip" ]; then echo "Remove ${OUTPUT_DIR}/test_logs.zip for saving space" rm -rf "${OUTPUT_DIR}/test_logs.zip" - python ${BASEDIR}/dev-support/gen_redirect_html.py "${ASF_NIGHTLIES_BASE}/${OUTPUT_DIR_RELATIVE}" > "${OUTPUT_DIR}/test_logs.html" + python3 ${BASEDIR}/dev-support/gen_redirect_html.py "${ASF_NIGHTLIES_BASE}/${OUTPUT_DIR_RELATIVE}" > "${OUTPUT_DIR}/test_logs.html" else echo "No test_logs.zip, skipping" fi -''' + ''' // Has to be relative to WORKSPACE. archiveArtifacts artifacts: "${env.OUTPUT_DIR_RELATIVE}/*" archiveArtifacts artifacts: "${env.OUTPUT_DIR_RELATIVE}/**/*" @@ -622,39 +632,34 @@ pipeline { // Has to be relative to WORKSPACE. reportDir : "${env.OUTPUT_DIR_RELATIVE}", reportFiles : 'console-report.html', - reportName : 'JDK8 Nightly Build Report (Hadoop3)' + reportName : 'JDK11 Nightly Build Report (Hadoop3)' ] } } } - stage ('yetus jdk11 hadoop3 checks') { + + stage ('yetus jdk17 hadoop3 checks') { agent { node { label 'hbase' } } - when { - not { - branch 'branch-1*' - } - } environment { BASEDIR = "${env.WORKSPACE}/component" TESTS = "${env.DEEP_CHECKS}" - OUTPUT_DIR_RELATIVE = "${env.OUTPUT_DIR_RELATIVE_JDK11_HADOOP3}" - OUTPUT_DIR = "${env.WORKSPACE}/${env.OUTPUT_DIR_RELATIVE_JDK11_HADOOP3}" - SET_JAVA_HOME = "/usr/lib/jvm/java-11" + OUTPUT_DIR_RELATIVE = "${env.OUTPUT_DIR_RELATIVE_JDK17_HADOOP3}" + OUTPUT_DIR = "${env.WORKSPACE}/${env.OUTPUT_DIR_RELATIVE_JDK17_HADOOP3}" + SET_JAVA_HOME = "/usr/lib/jvm/java-17" // Activates hadoop 3.0 profile in maven runs. HADOOP_PROFILE = '3.0' - // ErrorProne is broken on JDK11, see HBASE-23894 - SKIP_ERROR_PRONE = 'true' + SKIP_ERRORPRONE = true } steps { // Must do prior to anything else, since if one of them timesout we'll stash the commentfile sh '''#!/usr/bin/env bash set -e rm -rf "${OUTPUT_DIR}" && mkdir "${OUTPUT_DIR}" - echo '(x) {color:red}-1 jdk11 hadoop3 checks{color}' >"${OUTPUT_DIR}/commentfile" + echo '(x) {color:red}-1 jdk17 hadoop3 checks{color}' >"${OUTPUT_DIR}/commentfile" echo "-- Something went wrong running this stage, please [check relevant console output|${BUILD_URL}/console]." >> "${OUTPUT_DIR}/commentfile" ''' unstash 'yetus' @@ -675,12 +680,12 @@ pipeline { set -e declare -i status=0 if "${BASEDIR}/dev-support/hbase_nightly_yetus.sh" ; then - echo '(/) {color:green}+1 jdk11 hadoop3 checks{color}' > "${OUTPUT_DIR}/commentfile" + echo '(/) {color:green}+1 jdk17 hadoop3 checks{color}' > "${OUTPUT_DIR}/commentfile" else - echo '(x) {color:red}-1 jdk11 hadoop3 checks{color}' > "${OUTPUT_DIR}/commentfile" + echo '(x) {color:red}-1 jdk17 hadoop3 checks{color}' > "${OUTPUT_DIR}/commentfile" status=1 fi - echo "-- For more information [see jdk11 report|${BUILD_URL}JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]" >> "${OUTPUT_DIR}/commentfile" + echo "-- For more information [see jdk17 report|${BUILD_URL}JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/]" >> "${OUTPUT_DIR}/commentfile" exit "${status}" ''' ) @@ -693,7 +698,7 @@ pipeline { } post { always { - stash name: 'jdk11-hadoop3-result', includes: "${OUTPUT_DIR_RELATIVE}/commentfile" + stash name: 'jdk17-hadoop3-result', includes: "${OUTPUT_DIR_RELATIVE}/commentfile" junit testResults: "${env.OUTPUT_DIR_RELATIVE}/**/target/**/TEST-*.xml", allowEmptyResults: true // zip surefire reports. 
sh '''#!/bin/bash -e @@ -723,7 +728,7 @@ pipeline { if [ -f "${OUTPUT_DIR}/test_logs.zip" ]; then echo "Remove ${OUTPUT_DIR}/test_logs.zip for saving space" rm -rf "${OUTPUT_DIR}/test_logs.zip" - python ${BASEDIR}/dev-support/gen_redirect_html.py "${ASF_NIGHTLIES_BASE}/${OUTPUT_DIR_RELATIVE}" > "${OUTPUT_DIR}/test_logs.html" + python3 ${BASEDIR}/dev-support/gen_redirect_html.py "${ASF_NIGHTLIES_BASE}/${OUTPUT_DIR_RELATIVE}" > "${OUTPUT_DIR}/test_logs.html" else echo "No test_logs.zip, skipping" fi @@ -738,15 +743,21 @@ pipeline { // Has to be relative to WORKSPACE. reportDir : "${env.OUTPUT_DIR_RELATIVE}", reportFiles : 'console-report.html', - reportName : 'JDK11 Nightly Build Report (Hadoop3)' + reportName : 'JDK17 Nightly Build Report (Hadoop3)' ] } } } + // This is meant to mimic what a release manager will do to create RCs. // See http://hbase.apache.org/book.html#maven.release // TODO (HBASE-23870): replace this with invocation of the release tool stage ('packaging and integration') { + agent { + node { + label 'hbase-large' + } + } tools { maven 'maven_latest' // this needs to be set to the jdk that ought to be used to build releases on the branch the Jenkinsfile is stored in. @@ -757,6 +768,9 @@ pipeline { BRANCH = "${env.BRANCH_NAME}" } steps { + dir('component') { + checkout scm + } sh '''#!/bin/bash -e echo "Setting up directories" rm -rf "output-srctarball" && mkdir "output-srctarball" @@ -806,7 +820,7 @@ pipeline { ''' unstash 'hadoop-2' sh '''#!/bin/bash -xe - if [[ "${BRANCH}" = branch-2* ]] || [[ "${BRANCH}" = branch-1* ]]; then + if [[ "${BRANCH}" = branch-2* ]]; then echo "Attempting to use run an instance on top of Hadoop 2." artifact=$(ls -1 "${WORKSPACE}"/hadoop-2*.tar.gz | head -n 1) tar --strip-components=1 -xzf "${artifact}" -C "hadoop-2" @@ -830,44 +844,40 @@ pipeline { ''' unstash 'hadoop-3' sh '''#!/bin/bash -e - if [[ "${BRANCH}" = branch-1* ]]; then - echo "Skipping to run against Hadoop 3 for branch ${BRANCH}" - else - echo "Attempting to use run an instance on top of Hadoop 3." - artifact=$(ls -1 "${WORKSPACE}"/hadoop-3*.tar.gz | head -n 1) - tar --strip-components=1 -xzf "${artifact}" -C "hadoop-3" - if ! "${BASEDIR}/dev-support/hbase_nightly_pseudo-distributed-test.sh" \ - --single-process \ - --working-dir output-integration/hadoop-3 \ - --hbase-client-install hbase-client \ - hbase-install \ - hadoop-3/bin/hadoop \ - hadoop-3/share/hadoop/yarn/timelineservice \ - hadoop-3/share/hadoop/yarn/test/hadoop-yarn-server-tests-*-tests.jar \ - hadoop-3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-*-tests.jar \ - hadoop-3/bin/mapred \ - >output-integration/hadoop-3.log 2>&1 ; then - echo "(x) {color:red}-1 client integration test{color}\n--Failed when running client tests on top of Hadoop 3. [see log for details|${BUILD_URL}/artifact/output-integration/hadoop-3.log]. (note that this means we didn't check the Hadoop 3 shaded client)" >output-integration/commentfile - exit 2 - fi - echo "Attempting to use run an instance on top of Hadoop 3, relying on the Hadoop client artifacts for the example client program." - if ! 
"${BASEDIR}/dev-support/hbase_nightly_pseudo-distributed-test.sh" \ - --single-process \ - --hadoop-client-classpath hadoop-3/share/hadoop/client/hadoop-client-api-*.jar:hadoop-3/share/hadoop/client/hadoop-client-runtime-*.jar \ - --working-dir output-integration/hadoop-3-shaded \ - --hbase-client-install hbase-client \ - hbase-install \ - hadoop-3/bin/hadoop \ - hadoop-3/share/hadoop/yarn/timelineservice \ - hadoop-3/share/hadoop/yarn/test/hadoop-yarn-server-tests-*-tests.jar \ - hadoop-3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-*-tests.jar \ - hadoop-3/bin/mapred \ - >output-integration/hadoop-3-shaded.log 2>&1 ; then - echo "(x) {color:red}-1 client integration test{color}\n--Failed when running client tests on top of Hadoop 3 using Hadoop's shaded client. [see log for details|${BUILD_URL}/artifact/output-integration/hadoop-3-shaded.log]." >output-integration/commentfile - exit 2 - fi - echo "(/) {color:green}+1 client integration test{color}" >output-integration/commentfile + echo "Attempting to use run an instance on top of Hadoop 3." + artifact=$(ls -1 "${WORKSPACE}"/hadoop-3*.tar.gz | head -n 1) + tar --strip-components=1 -xzf "${artifact}" -C "hadoop-3" + if ! "${BASEDIR}/dev-support/hbase_nightly_pseudo-distributed-test.sh" \ + --single-process \ + --working-dir output-integration/hadoop-3 \ + --hbase-client-install hbase-client \ + hbase-install \ + hadoop-3/bin/hadoop \ + hadoop-3/share/hadoop/yarn/timelineservice \ + hadoop-3/share/hadoop/yarn/test/hadoop-yarn-server-tests-*-tests.jar \ + hadoop-3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-*-tests.jar \ + hadoop-3/bin/mapred \ + >output-integration/hadoop-3.log 2>&1 ; then + echo "(x) {color:red}-1 client integration test{color}\n--Failed when running client tests on top of Hadoop 3. [see log for details|${BUILD_URL}/artifact/output-integration/hadoop-3.log]. (note that this means we didn't check the Hadoop 3 shaded client)" >output-integration/commentfile + exit 2 + fi + echo "Attempting to use run an instance on top of Hadoop 3, relying on the Hadoop client artifacts for the example client program." + if ! "${BASEDIR}/dev-support/hbase_nightly_pseudo-distributed-test.sh" \ + --single-process \ + --hadoop-client-classpath hadoop-3/share/hadoop/client/hadoop-client-api-*.jar:hadoop-3/share/hadoop/client/hadoop-client-runtime-*.jar \ + --working-dir output-integration/hadoop-3-shaded \ + --hbase-client-install hbase-client \ + hbase-install \ + hadoop-3/bin/hadoop \ + hadoop-3/share/hadoop/yarn/timelineservice \ + hadoop-3/share/hadoop/yarn/test/hadoop-yarn-server-tests-*-tests.jar \ + hadoop-3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-*-tests.jar \ + hadoop-3/bin/mapred \ + >output-integration/hadoop-3-shaded.log 2>&1 ; then + echo "(x) {color:red}-1 client integration test{color}\n--Failed when running client tests on top of Hadoop 3 using Hadoop's shaded client. [see log for details|${BUILD_URL}/artifact/output-integration/hadoop-3-shaded.log]." 
>output-integration/commentfile + exit 2 fi + echo "(/) {color:green}+1 client integration test{color}" >output-integration/commentfile ''' } post { @@ -888,7 +898,7 @@ pipeline { if [ -f "${SRC_TAR}" ]; then echo "Remove ${SRC_TAR} for saving space" rm -rf "${SRC_TAR}" - python ${BASEDIR}/dev-support/gen_redirect_html.py "${ASF_NIGHTLIES_BASE}/output-srctarball" > "${WORKSPACE}/output-srctarball/hbase-src.html" + python3 ${BASEDIR}/dev-support/gen_redirect_html.py "${ASF_NIGHTLIES_BASE}/output-srctarball" > "${WORKSPACE}/output-srctarball/hbase-src.html" else echo "No hbase-src.tar.gz, skipping" fi @@ -907,18 +917,30 @@ pipeline { always { script { try { + sh "printenv" + // wipe out all the output directories before unstashing + sh''' + echo "Clean up result directories" + rm -rf ${OUTPUT_DIR_RELATIVE_GENERAL} + rm -rf ${OUTPUT_DIR_RELATIVE_JDK8_HADOOP2} + rm -rf ${OUTPUT_DIR_RELATIVE_JDK8_HADOOP3} + rm -rf ${OUTPUT_DIR_RELATIVE_JDK11_HADOOP3} + rm -rf ${OUTPUT_DIR_RELATIVE_JDK17_HADOOP3} + rm -rf output-srctarball + rm -rf output-integration + ''' unstash 'general-result' - unstash 'jdk7-result' unstash 'jdk8-hadoop2-result' unstash 'jdk8-hadoop3-result' unstash 'jdk11-hadoop3-result' + unstash 'jdk17-hadoop3-result' unstash 'srctarball-result' - sh "printenv" + def results = ["${env.OUTPUT_DIR_RELATIVE_GENERAL}/commentfile", - "${env.OUTPUT_DIR_RELATIVE_JDK7}/commentfile", "${env.OUTPUT_DIR_RELATIVE_JDK8_HADOOP2}/commentfile", "${env.OUTPUT_DIR_RELATIVE_JDK8_HADOOP3}/commentfile", "${env.OUTPUT_DIR_RELATIVE_JDK11_HADOOP3}/commentfile", + "${env.OUTPUT_DIR_RELATIVE_JDK17_HADOOP3}/commentfile", 'output-srctarball/commentfile', 'output-integration/commentfile'] echo env.BRANCH_NAME @@ -940,8 +962,14 @@ pipeline { echo "[INFO] Comment:" echo comment echo "" - echo "[INFO] There are ${currentBuild.changeSets.size()} change sets." - getJirasToComment(currentBuild).each { currentIssue -> + echo "[DEBUG] checking to see if feature branch" + def jiras = getJirasToComment(env.BRANCH_NAME, []) + if (jiras.isEmpty()) { + echo "[DEBUG] non-feature branch, checking change messages for jira keys." + echo "[INFO] There are ${currentBuild.changeSets.size()} change sets." + jiras = getJirasToCommentFromChangesets(currentBuild) + } + jiras.each { currentIssue -> jiraComment issueKey: currentIssue, body: comment } } catch (Exception exception) { @@ -954,7 +982,7 @@ pipeline { } import org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper @NonCPS -List getJirasToComment(RunWrapper thisBuild) { +List getJirasToCommentFromChangesets(RunWrapper thisBuild) { def seenJiras = [] thisBuild.changeSets.each { cs -> cs.getItems().each { change -> @@ -964,16 +992,30 @@ List getJirasToComment(RunWrapper thisBuild) { echo " ${change.commitId}" echo " ${change.author}" echo "" - msg.eachMatch("HBASE-[0-9]+") { currentIssue -> - echo "[DEBUG] found jira key: ${currentIssue}" - if (currentIssue in seenJiras) { - echo "[DEBUG] already commented on ${currentIssue}." - } else { - echo "[INFO] commenting on ${currentIssue}." - seenJiras << currentIssue - } - } + seenJiras = getJirasToComment(msg, seenJiras) } } return seenJiras } +@NonCPS +List getJirasToComment(CharSequence source, List seen) { + source.eachMatch("HBASE-[0-9]+") { currentIssue -> + echo "[DEBUG] found jira key: ${currentIssue}" + if (currentIssue in seen) { + echo "[DEBUG] already commented on ${currentIssue}." + } else { + echo "[INFO] commenting on ${currentIssue}." 
+ seen << currentIssue + } + } + return seen +} +@NonCPS +String getJavaHomeForYetusGeneralCheck(String branchName) { + // for 2.x, build with java 11, for 3.x, build with java 17 + if (branchName.indexOf("branch-2") >=0) { + return "/usr/lib/jvm/java-11"; + } else { + return "/usr/lib/jvm/java-17" + } +} diff --git a/dev-support/Jenkinsfile_GitHub b/dev-support/Jenkinsfile_GitHub index a33f0ae99db9..eff1e2d36fec 100644 --- a/dev-support/Jenkinsfile_GitHub +++ b/dev-support/Jenkinsfile_GitHub @@ -18,7 +18,7 @@ pipeline { agent { - label 'Hadoop' + label 'hbase' } options { @@ -36,22 +36,29 @@ pipeline { YETUS_REL = 'yetus' DOCKERFILE_REL = "${SRC_REL}/dev-support/docker/Dockerfile" YETUS_DRIVER_REL = "${SRC_REL}/dev-support/jenkins_precommit_github_yetus.sh" - // Branch or tag name. Yetus release tags are 'rel/X.Y.Z' - YETUS_VERSION = 'rel/0.12.0' + YETUS_VERSION = '0.15.0' GENERAL_CHECK_PLUGINS = 'all,-javadoc,-jira,-shadedjars,-unit' JDK_SPECIFIC_PLUGINS = 'compile,github,htmlout,javac,javadoc,maven,mvninstall,shadedjars,unit' + // This section of the docs tells folks not to use the javadoc tag. older branches have our old version of the check for said tag. + AUTHOR_IGNORE_LIST = 'src/main/asciidoc/_chapters/developer.adoc' + BLANKS_EOL_IGNORE_FILE = 'dev-support/blanks-eol-ignore.txt' + BLANKS_TABS_IGNORE_FILE = 'dev-support/blanks-tabs-ignore.txt' // output from surefire; sadly the archive function in yetus only works on file names. ARCHIVE_PATTERN_LIST = 'TEST-*.xml,org.apache.h*.txt,*.dumpstream,*.dump' // These tests currently have known failures. Once they burn down to 0, remove from here so that new problems will cause a failure. - TESTS_FILTER = 'cc,checkstyle,javac,javadoc,pylint,shellcheck,whitespace,perlcritic,ruby-lint,rubocop,mvnsite' - EXCLUDE_TESTS_URL = "${JENKINS_URL}/job/HBase/job/HBase-Find-Flaky-Tests/job/${CHANGE_TARGET}/lastSuccessfulBuild/artifact/output/excludes" - + TESTS_FILTER = 'checkstyle,javac,javadoc,pylint,shellcheck,shelldocs,blanks,perlcritic,ruby-lint,rubocop' + EXCLUDE_TESTS_URL = "${JENKINS_URL}/job/HBase-Find-Flaky-Tests/job/${CHANGE_TARGET}/lastSuccessfulBuild/artifact/output/excludes" + // set build parallel + BUILD_THREAD = 4 + SUREFIRE_FIRST_PART_FORK_COUNT = '0.5C' + SUREFIRE_SECOND_PART_FORK_COUNT = '0.5C' // a global view of paths. parallel stages can land on the same host concurrently, so each // stage works in its own subdirectory. there is an "output" under each of these // directories, which we retrieve after the build is complete. 
WORKDIR_REL_GENERAL_CHECK = 'yetus-general-check' WORKDIR_REL_JDK8_HADOOP2_CHECK = 'yetus-jdk8-hadoop2-check' WORKDIR_REL_JDK11_HADOOP3_CHECK = 'yetus-jdk11-hadoop3-check' + WORKDIR_REL_JDK17_HADOOP3_CHECK = 'yetus-jdk17-hadoop3-check' ASF_NIGHTLIES = 'https://nightlies.apache.org' ASF_NIGHTLIES_BASE_ORI = "${ASF_NIGHTLIES}/hbase/${JOB_NAME}/${BUILD_NUMBER}" ASF_NIGHTLIES_BASE = "${ASF_NIGHTLIES_BASE_ORI.replaceAll(' ', '%20')}" @@ -69,13 +76,15 @@ pipeline { stage ('yetus general check') { agent { node { - label 'Hadoop' + label 'hbase' } } environment { // customized per parallel stage PLUGINS = "${GENERAL_CHECK_PLUGINS}" - SET_JAVA_HOME = '/usr/lib/jvm/java-8' + SET_JAVA_HOME = "/usr/lib/jvm/java-11" + JAVA8_HOME = "/usr/lib/jvm/java-8" + HADOOP_PROFILE = '3.0' WORKDIR_REL = "${WORKDIR_REL_GENERAL_CHECK}" // identical for all parallel stages WORKDIR = "${WORKSPACE}/${WORKDIR_REL}" @@ -87,16 +96,20 @@ pipeline { YETUS_DRIVER = "${WORKDIR}/${YETUS_DRIVER_REL}" ASF_NIGHTLIES_GENERAL_CHECK_BASE="${ASF_NIGHTLIES_BASE}/${WORKDIR_REL}/${PATCH_REL}" } + when { + // this will return true if the pipeline is building a change request, such as a GitHub pull request. + changeRequest() + } steps { dir("${SOURCEDIR}") { checkout scm } dir("${YETUSDIR}") { - checkout([ - $class : 'GitSCM', - branches : [[name: "${YETUS_VERSION}"]], - userRemoteConfigs: [[url: 'https://github.com/apache/yetus.git']]] - ) + sh'''#!/usr/bin/env bash + wget https://dlcdn.apache.org/yetus/${YETUS_VERSION}/apache-yetus-${YETUS_VERSION}-bin.tar.gz && \ + tar --strip-components=1 -xzf apache-yetus-${YETUS_VERSION}-bin.tar.gz && \ + rm apache-yetus-${YETUS_VERSION}-bin.tar.gz + ''' } dir("${WORKDIR}") { withCredentials([ @@ -140,14 +153,14 @@ pipeline { if [ -d "${PATCHDIR}/branch-site" ]; then echo "Remove ${PATCHDIR}/branch-site for saving space" rm -rf "${PATCHDIR}/branch-site" - python ${SOURCEDIR}/dev-support/gen_redirect_html.py "${ASF_NIGHTLIES_GENERAL_CHECK_BASE}/branch-site" > "${PATCHDIR}/branch-site.html" + python3 ${SOURCEDIR}/dev-support/gen_redirect_html.py "${ASF_NIGHTLIES_GENERAL_CHECK_BASE}/branch-site" > "${PATCHDIR}/branch-site.html" else echo "No branch-site, skipping" fi if [ -d "${PATCHDIR}/patch-site" ]; then echo "Remove ${PATCHDIR}/patch-site for saving space" rm -rf "${PATCHDIR}/patch-site" - python ${SOURCEDIR}/dev-support/gen_redirect_html.py "${ASF_NIGHTLIES_GENERAL_CHECK_BASE}/patch-site" > "${PATCHDIR}/patch-site.html" + python3 ${SOURCEDIR}/dev-support/gen_redirect_html.py "${ASF_NIGHTLIES_GENERAL_CHECK_BASE}/patch-site" > "${PATCHDIR}/patch-site.html" else echo "No patch-site, skipping" fi @@ -192,7 +205,7 @@ pipeline { stage ('yetus jdk8 Hadoop2 checks') { agent { node { - label 'Hadoop' + label 'hbase' } } environment { @@ -210,16 +223,20 @@ pipeline { YETUS_DRIVER = "${WORKDIR}/${YETUS_DRIVER_REL}" SKIP_ERRORPRONE = true } + when { + // this will return true if the pipeline is building a change request, such as a GitHub pull request. 
+ changeRequest() + } steps { dir("${SOURCEDIR}") { checkout scm } dir("${YETUSDIR}") { - checkout([ - $class : 'GitSCM', - branches : [[name: "${YETUS_VERSION}"]], - userRemoteConfigs: [[url: 'https://github.com/apache/yetus.git']]] - ) + sh'''#!/usr/bin/env bash + wget https://dlcdn.apache.org/yetus/${YETUS_VERSION}/apache-yetus-${YETUS_VERSION}-bin.tar.gz && \ + tar --strip-components=1 -xzf apache-yetus-${YETUS_VERSION}-bin.tar.gz && \ + rm apache-yetus-${YETUS_VERSION}-bin.tar.gz + ''' } dir("${WORKDIR}") { withCredentials([ @@ -250,7 +267,8 @@ pipeline { } post { always { - junit testResults: "${WORKDIR_REL}/${SRC_REL}/**/target/**/TEST-*.xml", allowEmptyResults: true + junit testResults: "${WORKDIR_REL}/${SRC_REL}/**/target/**/TEST-*.xml", + allowEmptyResults: true, skipPublishingChecks: true sh label: 'zip surefire reports', script: '''#!/bin/bash -e if [ -d "${PATCHDIR}/archiver" ]; then count=$(find "${PATCHDIR}/archiver" -type f | wc -l) @@ -278,7 +296,7 @@ pipeline { if [ -f "${PATCHDIR}/test_logs.zip" ]; then echo "Remove ${PATCHDIR}/test_logs.zip for saving space" rm -rf "${PATCHDIR}/test_logs.zip" - python ${SOURCEDIR}/dev-support/gen_redirect_html.py "${ASF_NIGHTLIES_BASE}/${WORKDIR_REL}/${PATCH_REL}" > "${PATCHDIR}/test_logs.html" + python3 ${SOURCEDIR}/dev-support/gen_redirect_html.py "${ASF_NIGHTLIES_BASE}/${WORKDIR_REL}/${PATCH_REL}" > "${PATCHDIR}/test_logs.html" else echo "No test_logs.zip, skipping" fi @@ -293,7 +311,7 @@ pipeline { // Has to be relative to WORKSPACE reportDir: "${WORKDIR_REL}/${PATCH_REL}", reportFiles: 'report.html', - reportName: 'PR JDK8 Hadoop3 Check Report' + reportName: 'PR JDK8 Hadoop2 Check Report' ] } // Jenkins pipeline jobs fill slaves on PRs without this :( @@ -323,7 +341,7 @@ pipeline { stage ('yetus jdk11 hadoop3 checks') { agent { node { - label 'Hadoop' + label 'hbase' } } environment { @@ -342,16 +360,20 @@ pipeline { YETUS_DRIVER = "${WORKDIR}/${YETUS_DRIVER_REL}" SKIP_ERRORPRONE = true } + when { + // this will return true if the pipeline is building a change request, such as a GitHub pull request. 
+ changeRequest() + } steps { dir("${SOURCEDIR}") { checkout scm } dir("${YETUSDIR}") { - checkout([ - $class : 'GitSCM', - branches : [[name: "${YETUS_VERSION}"]], - userRemoteConfigs: [[url: 'https://github.com/apache/yetus.git']]] - ) + sh'''#!/usr/bin/env bash + wget https://dlcdn.apache.org/yetus/${YETUS_VERSION}/apache-yetus-${YETUS_VERSION}-bin.tar.gz && \ + tar --strip-components=1 -xzf apache-yetus-${YETUS_VERSION}-bin.tar.gz && \ + rm apache-yetus-${YETUS_VERSION}-bin.tar.gz + ''' } dir("${WORKDIR}") { withCredentials([ @@ -382,7 +404,8 @@ pipeline { } post { always { - junit testResults: "${WORKDIR_REL}/${SRC_REL}/**/target/**/TEST-*.xml", allowEmptyResults: true + junit testResults: "${WORKDIR_REL}/${SRC_REL}/**/target/**/TEST-*.xml", + allowEmptyResults: true, skipPublishingChecks: true sh label: 'zip surefire reports', script: '''#!/bin/bash -e if [ -d "${PATCHDIR}/archiver" ]; then count=$(find "${PATCHDIR}/archiver" -type f | wc -l) @@ -410,7 +433,7 @@ pipeline { if [ -f "${PATCHDIR}/test_logs.zip" ]; then echo "Remove ${PATCHDIR}/test_logs.zip for saving space" rm -rf "${PATCHDIR}/test_logs.zip" - python ${SOURCEDIR}/dev-support/gen_redirect_html.py "${ASF_NIGHTLIES_BASE}/${WORKDIR_REL}/${PATCH_REL}" > "${PATCHDIR}/test_logs.html" + python3 ${SOURCEDIR}/dev-support/gen_redirect_html.py "${ASF_NIGHTLIES_BASE}/${WORKDIR_REL}/${PATCH_REL}" > "${PATCHDIR}/test_logs.html" else echo "No test_logs.zip, skipping" fi @@ -452,6 +475,143 @@ pipeline { } } } + stage ('yetus jdk17 hadoop3 checks') { + agent { + node { + label 'hbase' + } + } + environment { + // customized per parallel stage + PLUGINS = "${JDK_SPECIFIC_PLUGINS}" + SET_JAVA_HOME = '/usr/lib/jvm/java-17' + HADOOP_PROFILE = '3.0' + WORKDIR_REL = "${WORKDIR_REL_JDK17_HADOOP3_CHECK}" + // identical for all parallel stages + WORKDIR = "${WORKSPACE}/${WORKDIR_REL}" + YETUSDIR = "${WORKDIR}/${YETUS_REL}" + SOURCEDIR = "${WORKDIR}/${SRC_REL}" + PATCHDIR = "${WORKDIR}/${PATCH_REL}" + BUILD_URL_ARTIFACTS = "artifact/${WORKDIR_REL}/${PATCH_REL}" + DOCKERFILE = "${WORKDIR}/${DOCKERFILE_REL}" + YETUS_DRIVER = "${WORKDIR}/${YETUS_DRIVER_REL}" + SKIP_ERRORPRONE = true + } + when { + // this will return true if the pipeline is building a change request, such as a GitHub pull request. + changeRequest() + } + steps { + dir("${SOURCEDIR}") { + checkout scm + } + dir("${YETUSDIR}") { + sh'''#!/usr/bin/env bash + wget https://dlcdn.apache.org/yetus/${YETUS_VERSION}/apache-yetus-${YETUS_VERSION}-bin.tar.gz && \ + tar --strip-components=1 -xzf apache-yetus-${YETUS_VERSION}-bin.tar.gz && \ + rm apache-yetus-${YETUS_VERSION}-bin.tar.gz + ''' + } + dir("${WORKDIR}") { + withCredentials([ + usernamePassword( + credentialsId: 'apache-hbase-at-github.com', + passwordVariable: 'GITHUB_PASSWORD', + usernameVariable: 'GITHUB_USER' + )]) { + script { + def ret = sh( + label: 'test-patch', + returnStatus: true, + script: '''#!/bin/bash -e + hostname -a ; pwd ; ls -la + printenv 2>&1 | sort + echo "[INFO] Launching Yetus via ${YETUS_DRIVER}" + "${YETUS_DRIVER}" + ''' + ) + if (ret != 0) { + // mark the build as UNSTABLE instead of FAILURE, to avoid skipping the later publish of + // test output. See HBASE-26339 for more details. 
+ currentBuild.result = 'UNSTABLE' + } + } + } + } + } + post { + always { + junit testResults: "${WORKDIR_REL}/${SRC_REL}/**/target/**/TEST-*.xml", + allowEmptyResults: true, skipPublishingChecks: true + sh label: 'zip surefire reports', script: '''#!/bin/bash -e + if [ -d "${PATCHDIR}/archiver" ]; then + count=$(find "${PATCHDIR}/archiver" -type f | wc -l) + if [[ 0 -ne ${count} ]]; then + echo "zipping ${count} archived files" + zip -q -m -r "${PATCHDIR}/test_logs.zip" "${PATCHDIR}/archiver" + else + echo "No archived files, skipping compressing." + fi + else + echo "No archiver directory, skipping compressing." + fi + ''' + sshPublisher(publishers: [ + sshPublisherDesc(configName: 'Nightlies', + transfers: [ + sshTransfer(remoteDirectory: "hbase/${JOB_NAME}/${BUILD_NUMBER}", + sourceFiles: "${env.WORKDIR_REL}/${env.PATCH_REL}/test_logs.zip" + ) + ] + ) + ]) + // remove the big test logs zip file, store the nightlies url in test_logs.txt + sh '''#!/bin/bash -e + if [ -f "${PATCHDIR}/test_logs.zip" ]; then + echo "Remove ${PATCHDIR}/test_logs.zip for saving space" + rm -rf "${PATCHDIR}/test_logs.zip" + python3 ${SOURCEDIR}/dev-support/gen_redirect_html.py "${ASF_NIGHTLIES_BASE}/${WORKDIR_REL}/${PATCH_REL}" > "${PATCHDIR}/test_logs.html" + else + echo "No test_logs.zip, skipping" + fi + ''' + // Has to be relative to WORKSPACE. + archiveArtifacts artifacts: "${WORKDIR_REL}/${PATCH_REL}/*", excludes: "${WORKDIR_REL}/${PATCH_REL}/precommit" + archiveArtifacts artifacts: "${WORKDIR_REL}/${PATCH_REL}/**/*", excludes: "${WORKDIR_REL}/${PATCH_REL}/precommit/**/*" + publishHTML target: [ + allowMissing: true, + keepAll: true, + alwaysLinkToLastBuild: true, + // Has to be relative to WORKSPACE + reportDir: "${WORKDIR_REL}/${PATCH_REL}", + reportFiles: 'report.html', + reportName: 'PR JDK17 Hadoop3 Check Report' + ] + } + // Jenkins pipeline jobs fill slaves on PRs without this :( + cleanup() { + script { + sh label: 'Cleanup workspace', script: '''#!/bin/bash -e + # See YETUS-764 + if [ -f "${PATCHDIR}/pidfile.txt" ]; then + echo "test-patch process appears to still be running: killing" + kill `cat "${PATCHDIR}/pidfile.txt"` || true + sleep 10 + fi + if [ -f "${PATCHDIR}/cidfile.txt" ]; then + echo "test-patch container appears to still be running: killing" + docker kill `cat "${PATCHDIR}/cidfile.txt"` || true + fi + # See HADOOP-13951 + chmod -R u+rxw "${WORKSPACE}" + ''' + dir ("${WORKDIR}") { + deleteDir() + } + } + } + } + } } } } diff --git a/dev-support/blanks-eol-ignore.txt b/dev-support/blanks-eol-ignore.txt new file mode 100644 index 000000000000..6912be308371 --- /dev/null +++ b/dev-support/blanks-eol-ignore.txt @@ -0,0 +1,24 @@ +## +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+## +.*/generated/.* +# we have generated code for other languages in hbase-examples +.*/gen-cpp/.* +.*/gen-perl/.* +.*/gen-php/.* +.*/gen-py/.* +.*/gen-rb/.* diff --git a/dev-support/blanks-tabs-ignore.txt b/dev-support/blanks-tabs-ignore.txt new file mode 100644 index 000000000000..49185487846e --- /dev/null +++ b/dev-support/blanks-tabs-ignore.txt @@ -0,0 +1,28 @@ +## +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +## +.*/generated/.* +# we have generated code for other languages in hbase-examples +.*/gen-cpp/.* +.*/gen-perl/.* +.*/gen-php/.* +.*/gen-py/.* +.*/gen-rb/.* +# we have tabs in asciidoc, not sure whether it is OK to replace them with spaces +src/main/asciidoc/.* +# perl officially suggests use tab instead of space for indentation +.*/*.pl diff --git a/dev-support/checkcompatibility.py b/dev-support/checkcompatibility.py index d132c350a9a0..914f8dd42f17 100755 --- a/dev-support/checkcompatibility.py +++ b/dev-support/checkcompatibility.py @@ -1,4 +1,4 @@ -#!/usr/bin/env python +#!/usr/bin/env python3 # # Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file @@ -32,7 +32,7 @@ # --annotation org.apache.yetus.audience.InterfaceAudience.LimitedPrivate \ # --include-file "hbase-*" \ # --known_problems_path ~/known_problems.json \ -# rel/1.0.0 branch-1.2 +# rel/1.3.0 branch-1.4 import json import logging @@ -41,7 +41,9 @@ import shutil import subprocess import sys -import urllib2 +import urllib.request +import urllib.error +import urllib.parse from collections import namedtuple try: import argparse @@ -55,11 +57,11 @@ def check_output(*popenargs, **kwargs): - """ Run command with arguments and return its output as a byte string. - Backported from Python 2.7 as it's implemented as pure python on stdlib. - >>> check_output(['/usr/bin/python', '--version']) - Python 2.6.2 """ - process = subprocess.Popen(stdout=subprocess.PIPE, *popenargs, **kwargs) + """ Run command with arguments and return its output as a byte string. """ + process = subprocess.Popen(stdout=subprocess.PIPE, + universal_newlines=True, + *popenargs, + **kwargs) output, _ = process.communicate() retcode = process.poll() if retcode: @@ -69,7 +71,7 @@ def check_output(*popenargs, **kwargs): error = subprocess.CalledProcessError(retcode, cmd) error.output = output raise error - return output + return output.strip() def get_repo_dir(): @@ -136,6 +138,8 @@ def get_repo_name(remote_name="origin"): def build_tree(java_path, verbose): """ Run the Java build within 'path'. 
""" logging.info("Building in %s ", java_path) + # special hack for comparing with rel/2.0.0, see HBASE-26063 for more details + subprocess.check_call(["sed", "-i", "2148s/3.0.0/3.0.4/g", "pom.xml"], cwd=java_path) mvn_cmd = ["mvn", "--batch-mode", "-DskipTests", "-Dmaven.javadoc.skip=true", "package"] if not verbose: @@ -159,7 +163,7 @@ def checkout_java_acc(force): url = "https://github.com/lvc/japi-compliance-checker/archive/2.4.tar.gz" scratch_dir = get_scratch_dir() path = os.path.join(scratch_dir, os.path.basename(url)) - jacc = urllib2.urlopen(url) + jacc = urllib.request.urlopen(url) with open(path, 'wb') as w: w.write(jacc.read()) @@ -172,7 +176,7 @@ def checkout_java_acc(force): def find_jars(path): """ Return a list of jars within 'path' to be checked for compatibility. """ - all_jars = set(check_output(["find", path, "-name", "*.jar"]).splitlines()) + all_jars = set(check_output(["find", path, "-type", "f", "-name", "*.jar"]).splitlines()) return [j for j in all_jars if ( "-tests" not in j and @@ -194,8 +198,8 @@ def ascii_encode_dict(data): """ Iterate through a dictionary of data and convert all unicode to ascii. This method was taken from stackoverflow.com/questions/9590382/forcing-python-json-module-to-work-with-ascii """ - ascii_encode = lambda x: x.encode('ascii') if isinstance(x, unicode) else x - return dict(map(ascii_encode, pair) for pair in data.items()) + ascii_encode = lambda x: x.encode('ascii') if isinstance(x, str) else x + return dict(list(map(ascii_encode, pair)) for pair in list(data.items())) def process_json(path): @@ -227,9 +231,9 @@ def compare_results(tool_results, known_issues, compare_warnings): unexpected_issues = [unexpected_issue(check=check, issue_type=issue_type, known_count=known_count, observed_count=tool_results[check][issue_type]) - for check, known_issue_counts in known_issues.items() - for issue_type, known_count in known_issue_counts.items() - if tool_results[check][issue_type] > known_count] + for check, known_issue_counts in list(known_issues.items()) + for issue_type, known_count in list(known_issue_counts.items()) + if compare_tool_results_count(tool_results, check, issue_type, known_count)] if not compare_warnings: unexpected_issues = [tup for tup in unexpected_issues @@ -241,6 +245,14 @@ def compare_results(tool_results, known_issues, compare_warnings): return bool(unexpected_issues) +def compare_tool_results_count(tool_results, check, issue_type, known_count): + """ Check problem counts are no more than the known count. 
+ (This function exists just so can add in logging; previous was inlined + one-liner but this made it hard debugging) + """ + # logging.info("known_count=%s, check key=%s, tool_results=%s, issue_type=%s", + # str(known_count), str(check), str(tool_results), str(issue_type)) + return tool_results[check][issue_type] > known_count def process_java_acc_output(output): """ Process the output string to find the problems and warnings in both the @@ -299,14 +311,14 @@ def run_java_acc(src_name, src_jars, dst_name, dst_jars, annotations, skip_annot logging.info("Annotations are: %s", annotations) annotations_path = os.path.join(get_scratch_dir(), "annotations.txt") logging.info("Annotations path: %s", annotations_path) - with file(annotations_path, "w") as f: + with open(annotations_path, "w") as f: f.write('\n'.join(annotations)) args.extend(["-annotations-list", annotations_path]) if skip_annotations is not None: skip_annotations_path = os.path.join( get_scratch_dir(), "skip_annotations.txt") - with file(skip_annotations_path, "w") as f: + with open(skip_annotations_path, "w") as f: f.write('\n'.join(skip_annotations)) args.extend(["-skip-annotations-list", skip_annotations_path]) @@ -325,14 +337,14 @@ def get_known_problems(json_path, src_rev, dst_rev): keys in the format source_branch/destination_branch and the values dictionaries with binary and source problems and warnings Example: - {'branch-1.0.0': { - 'rel/1.0.0': {'binary': {'problems': 123, 'warnings': 16}, + {'branch-1.3': { + 'rel/1.3.0': {'binary': {'problems': 123, 'warnings': 16}, 'source': {'problems': 167, 'warnings': 1}}, - 'branch-1.2.0': {'binary': {'problems': 0, 'warnings': 0}, + 'branch-1.4': {'binary': {'problems': 0, 'warnings': 0}, 'source': {'problems': 0, 'warnings': 0}} }, - 'branch-1.2.0': { - 'rel/1.2.1': {'binary': {'problems': 13, 'warnings': 1}, + 'branch-1.4': { + 'rel/1.4.1': {'binary': {'problems': 13, 'warnings': 1}, 'source': {'problems': 23, 'warnings': 0}} } } """ diff --git a/dev-support/checkstyle_report.py b/dev-support/checkstyle_report.py index 0b700b9789c5..99092c23b8ca 100755 --- a/dev-support/checkstyle_report.py +++ b/dev-support/checkstyle_report.py @@ -1,4 +1,4 @@ -#!/usr/bin/python +#!/usr/bin/env python3 ## # Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. 
See the NOTICE file @@ -29,8 +29,8 @@ from collections import defaultdict if len(sys.argv) != 3 : - print "usage: %s checkstyle-result-master.xml checkstyle-result-patch.xml" % sys.argv[0] - exit(1) + print("usage: %s checkstyle-result-master.xml checkstyle-result-patch.xml" % sys.argv[0]) + sys.exit(1) def path_key(x): path = x.attrib['name'] @@ -40,8 +40,8 @@ def error_name(x): error_class = x.attrib['source'] return error_class[error_class.rfind(".") + 1:] -def print_row(path, error, master_errors, patch_errors): - print '%s\t%s\t%s\t%s' % (path,error, master_errors,patch_errors) +def print_row(path, err, master_errors, patch_errors): + print('%s\t%s\t%s\t%s' % (path, err, master_errors, patch_errors)) master = etree.parse(sys.argv[1]) patch = etree.parse(sys.argv[2]) @@ -49,32 +49,32 @@ def print_row(path, error, master_errors, patch_errors): master_dict = defaultdict(int) ret_value = 0 -for child in master.getroot().getchildren(): +for child in list(master.getroot()): if child.tag != 'file': - continue + continue file = path_key(child) - for error_tag in child.getchildren(): - error = error_name(error_tag) - if (file, error) in master_dict: - master_dict[(file, error)] += 1 - else: - master_dict[(file, error)] = 1 + for error_tag in list(child): + error = error_name(error_tag) + if (file, error) in master_dict: + master_dict[(file, error)] += 1 + else: + master_dict[(file, error)] = 1 -for child in patch.getroot().getchildren(): - if child.tag != 'file': - continue - temp_dict = defaultdict(int) - for error_tag in child.getchildren(): - error = error_name(error_tag) - if error in temp_dict: - temp_dict[error] += 1 - else: - temp_dict[error] = 1 +for child in list(patch.getroot()): + if child.tag != 'file': + continue + temp_dict = defaultdict(int) + for error_tag in list(child): + error = error_name(error_tag) + if error in temp_dict: + temp_dict[error] += 1 + else: + temp_dict[error] = 1 - file = path_key(child) - for error, count in temp_dict.iteritems(): - if count > master_dict[(file, error)]: - print_row(file, error, master_dict[(file, error)], count) - ret_value = 1 + file = path_key(child) + for error, count in temp_dict.items(): + if count > master_dict[(file, error)]: + print_row(file, error, master_dict[(file, error)], count) + ret_value = 1 sys.exit(ret_value) diff --git a/dev-support/code-coverage/README.md b/dev-support/code-coverage/README.md new file mode 100644 index 000000000000..0b3eaf044acb --- /dev/null +++ b/dev-support/code-coverage/README.md @@ -0,0 +1,49 @@ + + +# Code analysis + +The `run-coverage.sh` script runs maven with the **jacoco** profile +which generates the test coverage data for the java classes. +If the required parameters are given it also runs the sonar code analysis +and uploads the results to the given SonarQube Server. + +## Running code analysis + +After running the script the reports generated by the JaCoCo +code coverage library can be found under the `/target/site/jacoco/` folder of +the related modules. + +Here is how you can generate the code coverage report: + +```sh dev/code-coverage/run-coverage.sh``` + +## Publishing coverage results to SonarQube + +The required parameters for publishing the results to SonarQube are: + +- host URL, +- login credentials, +- project key + +The project name is an optional parameter. 
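For a coverage-only run (no SonarQube upload), a minimal sketch is shown below; it assumes the script is invoked from the repository root via its `dev-support/code-coverage` location and uses `hbase-server` purely as an example module, so the exact report path may differ.

```sh
# Generate coverage data only; without the SonarQube parameters the upload step is skipped.
./dev-support/code-coverage/run-coverage.sh

# Each built module then carries its own JaCoCo report, for example:
ls hbase-server/target/site/jacoco/index.html
```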
+ +Here is an example command for running and publishing the coverage data: + +`./dev/code-coverage/run-coverage.sh -l ProjectCredentials +-u https://exampleserver.com -k Project_Key -n Project_Name` diff --git a/dev-support/code-coverage/run-coverage.sh b/dev-support/code-coverage/run-coverage.sh new file mode 100755 index 000000000000..2accb313a65c --- /dev/null +++ b/dev-support/code-coverage/run-coverage.sh @@ -0,0 +1,78 @@ +#!/usr/bin/env bash +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +usage() { + echo + echo "Options:" + echo " -h Display help" + echo " -u SonarQube Host URL" + echo " -l SonarQube Login Credentials" + echo " -k SonarQube Project Key" + echo " -n SonarQube Project Name" + echo + echo "Important:" + echo " The required parameters for publishing the coverage results to SonarQube are:" + echo " - Host URL" + echo " - Login Credentials" + echo " - Project Key" + echo +} + +execute() { + SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" &>/dev/null && pwd)" + MAIN_POM="${SCRIPT_DIR}/../../pom.xml" + + echo "Running unit and integration tests with runAllTests profile" + + mvn -B -e -f "${MAIN_POM}" clean test -PrunAllTests -Pjacoco -Pbuild-with-jdk11 -fn + + echo "Starting verifying phase" + + mvn -B -e -f "${MAIN_POM}" verify -DskipTests -DskipITs -Pjacoco -Pbuild-with-jdk11 -fn + + echo "Starting sonar scanner analysis" + + # If the required parameters are given, the code coverage results are uploaded to the SonarQube Server + if [ -n "$SONAR_LOGIN" ] && [ -n "$SONAR_PROJECT_KEY" ] && [ -n "$SONAR_URL" ]; then + mvn -B -e -f "${MAIN_POM}" sonar:sonar -Dsonar.host.url="$SONAR_URL" \ + -Dsonar.login="$SONAR_LOGIN" -Dsonar.projectKey="$SONAR_PROJECT_KEY" \ + -Dsonar.projectName="$SONAR_PROJECT_NAME" -Pjacoco + fi + + echo "Build finished" +} + +while getopts ":u:l:k:n:h" option; do + case $option in + u) SONAR_URL=${OPTARG:-} ;; + l) SONAR_LOGIN=${OPTARG:-} ;; + k) SONAR_PROJECT_KEY=${OPTARG:-} ;; + n) SONAR_PROJECT_NAME=${OPTARG:-} ;; + h) # Display usage + usage + exit + ;; + \?) # Invalid option + echo "Error: Invalid option" + exit + ;; + esac +done + +# Start code analysis +execute diff --git a/dev-support/docker/Dockerfile b/dev-support/docker/Dockerfile index 2be34529fe40..26b2c35b3462 100644 --- a/dev-support/docker/Dockerfile +++ b/dev-support/docker/Dockerfile @@ -21,14 +21,14 @@ # tweaking unrelated aspects of the image. 
# start with a minimal image into which we can download remote tarballs -FROM ubuntu:18.04 AS BASE_IMAGE +FROM ubuntu:22.04 AS base_image SHELL ["/bin/bash", "-o", "pipefail", "-c"] RUN DEBIAN_FRONTEND=noninteractive apt-get -qq update && \ DEBIAN_FRONTEND=noninteractive apt-get -qq install --no-install-recommends -y \ - ca-certificates=20180409 \ - curl='7.58.0-*' \ - locales='2.27-*' \ + ca-certificates=20211016 \ + curl='7.81.0-*' \ + locales='2.35-*' \ ## # install dependencies from system packages. # be careful not to install any system packages (i.e., findbugs) that will @@ -36,107 +36,109 @@ RUN DEBIAN_FRONTEND=noninteractive apt-get -qq update && \ # # bring the base image into conformance with the expectations imposed by # Yetus and our personality file of what a build environment looks like. - bash='4.4.18-*' \ - build-essential=12.4ubuntu1 \ - diffutils='1:3.6-*' \ - git='1:2.17.1-*' \ - rsync='3.1.2-*' \ - tar='1.29b-*' \ - wget='1.19.4-*' \ + bash='5.1-*' \ + build-essential=12.9ubuntu3 \ + diffutils='1:3.8-*' \ + git='1:2.34.1-*' \ + rsync='3.2.3-*' \ + tar='1.34+dfsg-*' \ + wget='1.21.2-*' \ # install the dependencies required in order to enable the sundry precommit # checks/features provided by Yetus plugins. - bats='0.4.0-*' \ - libperl-critic-perl='1.130-*' \ - python3='3.6.7-*' \ - python3-pip='9.0.1-*' \ - python3-setuptools='39.0.1-*' \ - ruby=1:2.5.1 \ - ruby-dev=1:2.5.1 \ - shellcheck='0.4.6-*' \ + bats='1.2.1-*' \ + libperl-critic-perl='1.140-*' \ + python3='3.10.6-*' \ + python3-pip='22.0.2+dfsg-*' \ + python3-setuptools='59.6.0-*' \ + ruby=1:3.0* \ + ruby-dev=1:3.0* \ + shellcheck='0.8.0-*' \ + libxml2-dev='2.9.13+dfsg-*' \ + libxml2-utils='2.9.13+dfsg-*' \ && \ apt-get clean && \ - rm -rf /var/lib/apt/lists/* - -RUN python3 -mpip install --upgrade pip && \ - python3 -mpip install pylint==2.4.4 - -RUN gem install --no-document \ - rake:13.0.1 \ - rubocop:0.80.0 \ - ruby-lint:2.3.1 - -RUN locale-gen en_US.UTF-8 + rm -rf /var/lib/apt/lists/* \ + && \ + python3 -mpip install --upgrade pip && \ + python3 -mpip install pylint==2.15.5 \ + && \ + gem install --no-document \ + rake:13.0.3 \ + rubocop:1.37.1 \ + ruby-lint:2.3.1 \ + && \ + locale-gen en_US.UTF-8 ENV LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=en_US.UTF-8 ## # download sundry dependencies # -FROM BASE_IMAGE AS SPOTBUGS_DOWNLOAD_IMAGE -ENV SPOTBUGS_VERSION '4.2.2' -ENV SPOTBUGS_URL "https://repo.maven.apache.org/maven2/com/github/spotbugs/spotbugs/${SPOTBUGS_VERSION}/spotbugs-${SPOTBUGS_VERSION}.tgz" -ENV SPOTBUGS_SHA256 '4967c72396e34b86b9458d0c34c5ed185770a009d357df8e63951ee2844f769f' +FROM base_image AS spotbugs_download_image +ENV SPOTBUGS_VERSION='4.7.3' +ENV SPOTBUGS_URL="https://repo.maven.apache.org/maven2/com/github/spotbugs/spotbugs/${SPOTBUGS_VERSION}/spotbugs-${SPOTBUGS_VERSION}.tgz" +ENV SPOTBUGS_SHA512='09a9fe0e5a6ec8e9d6d116c361b5c34c9d0560c0271241f02fadee911952adfcd69dc184f6de1cc4d4a8fe2c84c162689ea9a691dcae0779935eedf390fcc4ad' SHELL ["/bin/bash", "-o", "pipefail", "-c"] RUN curl --location --fail --silent --show-error --output /tmp/spotbugs.tgz "${SPOTBUGS_URL}" && \ - echo "${SPOTBUGS_SHA256} */tmp/spotbugs.tgz" | sha256sum -c - + echo "${SPOTBUGS_SHA512} */tmp/spotbugs.tgz" | sha512sum -c - -FROM BASE_IMAGE AS HADOLINT_DOWNLOAD_IMAGE -ENV HADOLINT_VERSION '1.17.5' -ENV HADOLINT_URL "https://github.com/hadolint/hadolint/releases/download/v${HADOLINT_VERSION}/hadolint-Linux-x86_64" -ENV HADOLINT_SHA256 '20dd38bc0602040f19268adc14c3d1aae11af27b463af43f3122076baf827a35' +FROM base_image AS 
hadolint_download_image +ENV HADOLINT_VERSION='2.10.0' +ENV HADOLINT_URL="https://github.com/hadolint/hadolint/releases/download/v${HADOLINT_VERSION}/hadolint-Linux-x86_64" +ENV HADOLINT_SHA512='4816c95243bedf15476d2225f487fc17465495fb2031e1a4797d82a26db83a1edb63e4fed084b80cef17d5eb67eb45508caadaf7cd0252fb061187113991a338' SHELL ["/bin/bash", "-o", "pipefail", "-c"] RUN curl --location --fail --silent --show-error --output /tmp/hadolint "${HADOLINT_URL}" && \ - echo "${HADOLINT_SHA256} */tmp/hadolint" | sha256sum -c - + echo "${HADOLINT_SHA512} */tmp/hadolint" | sha512sum -c - -FROM BASE_IMAGE AS MAVEN_DOWNLOAD_IMAGE -ENV MAVEN_VERSION='3.6.3' -ENV MAVEN_URL "https://archive.apache.org/dist/maven/maven-3/${MAVEN_VERSION}/binaries/apache-maven-${MAVEN_VERSION}-bin.tar.gz" -ENV MAVEN_SHA512 'c35a1803a6e70a126e80b2b3ae33eed961f83ed74d18fcd16909b2d44d7dada3203f1ffe726c17ef8dcca2dcaa9fca676987befeadc9b9f759967a8cb77181c0' +FROM base_image AS maven_download_image +ENV MAVEN_VERSION='3.9.8' +ENV MAVEN_URL="https://archive.apache.org/dist/maven/maven-3/${MAVEN_VERSION}/binaries/apache-maven-${MAVEN_VERSION}-bin.tar.gz" +ENV MAVEN_SHA512='7d171def9b85846bf757a2cec94b7529371068a0670df14682447224e57983528e97a6d1b850327e4ca02b139abaab7fcb93c4315119e6f0ffb3f0cbc0d0b9a2' SHELL ["/bin/bash", "-o", "pipefail", "-c"] RUN curl --location --fail --silent --show-error --output /tmp/maven.tar.gz "${MAVEN_URL}" && \ echo "${MAVEN_SHA512} */tmp/maven.tar.gz" | sha512sum -c - -FROM BASE_IMAGE AS OPENJDK7_DOWNLOAD_IMAGE -ENV OPENJDK7_URL 'https://cdn.azul.com/zulu/bin/zulu7.36.0.5-ca-jdk7.0.252-linux_x64.tar.gz' -ENV OPENJDK7_SHA256 'e0f34c242e6d456dac3e2c8a9eaeacfa8ea75c4dfc3e8818190bf0326e839d82' -SHELL ["/bin/bash", "-o", "pipefail", "-c"] -RUN curl --location --fail --silent --show-error --output /tmp/zuluopenjdk7.tar.gz "${OPENJDK7_URL}" && \ - echo "${OPENJDK7_SHA256} */tmp/zuluopenjdk7.tar.gz" | sha256sum -c - - -FROM BASE_IMAGE AS OPENJDK8_DOWNLOAD_IMAGE -ENV OPENJDK8_URL 'https://github.com/AdoptOpenJDK/openjdk8-binaries/releases/download/jdk8u282-b08/OpenJDK8U-jdk_x64_linux_hotspot_8u282b08.tar.gz' -ENV OPENJDK8_SHA256 'e6e6e0356649b9696fa5082cfcb0663d4bef159fc22d406e3a012e71fce83a5c' +FROM base_image AS openjdk8_download_image +ENV OPENJDK8_URL='https://github.com/adoptium/temurin8-binaries/releases/download/jdk8u412-b08/OpenJDK8U-jdk_x64_linux_hotspot_8u412b08.tar.gz' +ENV OPENJDK8_SHA256='b9884a96f78543276a6399c3eb8c2fd8a80e6b432ea50e87d3d12d495d1d2808' SHELL ["/bin/bash", "-o", "pipefail", "-c"] RUN curl --location --fail --silent --show-error --output /tmp/adoptopenjdk8.tar.gz "${OPENJDK8_URL}" && \ echo "${OPENJDK8_SHA256} */tmp/adoptopenjdk8.tar.gz" | sha256sum -c - -FROM BASE_IMAGE AS OPENJDK11_DOWNLOAD_IMAGE -ENV OPENJDK11_URL 'https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.10%2B9/OpenJDK11U-jdk_x64_linux_hotspot_11.0.10_9.tar.gz' -ENV OPENJDK11_SHA256 'ae78aa45f84642545c01e8ef786dfd700d2226f8b12881c844d6a1f71789cb99' +FROM base_image AS openjdk11_download_image +ENV OPENJDK11_URL='https://github.com/adoptium/temurin11-binaries/releases/download/jdk-11.0.23%2B9/OpenJDK11U-jdk_x64_linux_hotspot_11.0.23_9.tar.gz' +ENV OPENJDK11_SHA256='23e47ea7a3015be3240f21185fd902adebdcf76530757c9b482c7eb5bd3417c2' SHELL ["/bin/bash", "-o", "pipefail", "-c"] RUN curl --location --fail --silent --show-error --output /tmp/adoptopenjdk11.tar.gz "${OPENJDK11_URL}" && \ echo "${OPENJDK11_SHA256} */tmp/adoptopenjdk11.tar.gz" | sha256sum -c - +FROM base_image AS openjdk17_download_image 
+ENV OPENJDK17_URL='https://github.com/adoptium/temurin17-binaries/releases/download/jdk-17.0.11%2B9/OpenJDK17U-jdk_x64_linux_hotspot_17.0.11_9.tar.gz' +ENV OPENJDK17_SHA256='aa7fb6bb342319d227a838af5c363bfa1b4a670c209372f9e6585bd79da6220c' +SHELL ["/bin/bash", "-o", "pipefail", "-c"] +RUN curl --location --fail --silent --show-error --output /tmp/adoptopenjdk17.tar.gz "${OPENJDK17_URL}" && \ + echo "${OPENJDK17_SHA256} */tmp/adoptopenjdk17.tar.gz" | sha256sum -c - + ## # build the final image # -FROM BASE_IMAGE +FROM base_image SHELL ["/bin/bash", "-o", "pipefail", "-c"] # hadolint ignore=DL3010 -COPY --from=SPOTBUGS_DOWNLOAD_IMAGE /tmp/spotbugs.tgz /tmp/spotbugs.tgz +COPY --from=spotbugs_download_image /tmp/spotbugs.tgz /tmp/spotbugs.tgz RUN tar xzf /tmp/spotbugs.tgz -C /opt && \ ln -s "/opt/$(tar -tf /tmp/spotbugs.tgz | head -n1 | cut -d/ -f1)" /opt/spotbugs && \ chmod -R a+x /opt/spotbugs/bin/* && \ rm /tmp/spotbugs.tgz -COPY --from=HADOLINT_DOWNLOAD_IMAGE /tmp/hadolint /tmp/hadolint +COPY --from=hadolint_download_image /tmp/hadolint /tmp/hadolint RUN mv /tmp/hadolint /usr/local/bin && \ chmod a+x /usr/local/bin/hadolint # hadolint ignore=DL3010 -COPY --from=MAVEN_DOWNLOAD_IMAGE /tmp/maven.tar.gz /tmp/maven.tar.gz +COPY --from=maven_download_image /tmp/maven.tar.gz /tmp/maven.tar.gz RUN tar xzf /tmp/maven.tar.gz -C /opt && \ ln -s "/opt/$(dirname "$(tar -tf /tmp/maven.tar.gz | head -n1)")" /opt/maven && \ rm /tmp/maven.tar.gz @@ -150,15 +152,7 @@ RUN tar xzf /tmp/maven.tar.gz -C /opt && \ # # hadolint ignore=DL3010 -COPY --from=OPENJDK7_DOWNLOAD_IMAGE /tmp/zuluopenjdk7.tar.gz /tmp/zuluopenjdk7.tar.gz -RUN mkdir -p /usr/lib/jvm && \ - tar xzf /tmp/zuluopenjdk7.tar.gz -C /usr/lib/jvm && \ - ln -s "/usr/lib/jvm/$(basename "$(tar -tf /tmp/zuluopenjdk7.tar.gz | head -n1)")" /usr/lib/jvm/java-7-zuluopenjdk && \ - ln -s /usr/lib/jvm/java-7-zuluopenjdk /usr/lib/jvm/java-7 && \ - rm /tmp/zuluopenjdk7.tar.gz - -# hadolint ignore=DL3010 -COPY --from=OPENJDK8_DOWNLOAD_IMAGE /tmp/adoptopenjdk8.tar.gz /tmp/adoptopenjdk8.tar.gz +COPY --from=openjdk8_download_image /tmp/adoptopenjdk8.tar.gz /tmp/adoptopenjdk8.tar.gz RUN mkdir -p /usr/lib/jvm && \ tar xzf /tmp/adoptopenjdk8.tar.gz -C /usr/lib/jvm && \ ln -s "/usr/lib/jvm/$(basename "$(tar -tf /tmp/adoptopenjdk8.tar.gz | head -n1)")" /usr/lib/jvm/java-8-adoptopenjdk && \ @@ -166,20 +160,26 @@ RUN mkdir -p /usr/lib/jvm && \ rm /tmp/adoptopenjdk8.tar.gz # hadolint ignore=DL3010 -COPY --from=OPENJDK11_DOWNLOAD_IMAGE /tmp/adoptopenjdk11.tar.gz /tmp/adoptopenjdk11.tar.gz +COPY --from=openjdk11_download_image /tmp/adoptopenjdk11.tar.gz /tmp/adoptopenjdk11.tar.gz RUN mkdir -p /usr/lib/jvm && \ tar xzf /tmp/adoptopenjdk11.tar.gz -C /usr/lib/jvm && \ ln -s "/usr/lib/jvm/$(basename "$(tar -tf /tmp/adoptopenjdk11.tar.gz | head -n1)")" /usr/lib/jvm/java-11-adoptopenjdk && \ ln -s /usr/lib/jvm/java-11-adoptopenjdk /usr/lib/jvm/java-11 && \ rm /tmp/adoptopenjdk11.tar.gz +# hadolint ignore=DL3010 +COPY --from=openjdk17_download_image /tmp/adoptopenjdk17.tar.gz /tmp/adoptopenjdk17.tar.gz +RUN mkdir -p /usr/lib/jvm && \ + tar xzf /tmp/adoptopenjdk17.tar.gz -C /usr/lib/jvm && \ + ln -s "/usr/lib/jvm/$(basename "$(tar -tf /tmp/adoptopenjdk17.tar.gz | head -n1)")" /usr/lib/jvm/java-17-adoptopenjdk && \ + ln -s /usr/lib/jvm/java-17-adoptopenjdk /usr/lib/jvm/java-17 && \ + rm /tmp/adoptopenjdk17.tar.gz + # configure default environment for Yetus. 
Yetus in dockermode seems to require # these values to be specified here; the various --foo-path flags do not # propigate as expected, while these are honored. # TODO (nd): is this really true? investigate and file a ticket. -ENV SPOTBUGS_HOME '/opt/spotbugs' -ENV MAVEN_HOME '/opt/maven' -ENV MAVEN_OPTS '-Xmx3.6G' +ENV SPOTBUGS_HOME='/opt/spotbugs' MAVEN_HOME='/opt/maven' CMD ["/bin/bash"] diff --git a/dev-support/flaky-tests/findHangingTests.py b/dev-support/flaky-tests/findHangingTests.py index 328516ebf344..e07638ac7d78 100755 --- a/dev-support/flaky-tests/findHangingTests.py +++ b/dev-support/flaky-tests/findHangingTests.py @@ -1,4 +1,4 @@ -#!/usr/bin/env python +#!/usr/bin/env python3 ## # Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file @@ -45,8 +45,8 @@ def get_bad_tests(console_url): """ response = requests.get(console_url) if response.status_code != 200: - print "Error getting consoleText. Response = {} {}".format( - response.status_code, response.reason) + print("Error getting consoleText. Response = {} {}".format( + response.status_code, response.reason)) return # All tests: All testcases which were run. @@ -59,25 +59,27 @@ def get_bad_tests(console_url): hanging_tests_set = set() failed_tests_set = set() timeout_tests_set = set() - for line in response.content.splitlines(): + for line in response.content.decode("utf-8").splitlines(): result1 = re.findall("Running org.apache.hadoop.hbase.(.*)", line) if len(result1) == 1: - test_case = result1[0] + # Sometimes the maven build output might have some malformed lines. See HBASE-27874 + test_case = result1[0].split("WARNING")[0].strip() if test_case in all_tests_set: - print ("ERROR! Multiple tests with same name '{}'. Might get wrong results " - "for this test.".format(test_case)) + print(("ERROR! Multiple tests with same name '{}'. Might get wrong results " + "for this test.".format(test_case))) else: hanging_tests_set.add(test_case) all_tests_set.add(test_case) result2 = re.findall("Tests run:.*?- in org.apache.hadoop.hbase.(.*)", line) if len(result2) == 1: - test_case = result2[0] + # Sometimes the maven build output might have some malformed lines. See HBASE-27874 + test_case = result2[0].split("WARNING")[0].strip() if "FAILURE!" in line: failed_tests_set.add(test_case) if test_case not in hanging_tests_set: - print ("ERROR! No test '{}' found in hanging_tests. Might get wrong results " + print(("ERROR! No test '{}' found in hanging_tests. Might get wrong results " "for this test. 
This may also happen if maven is set to retry failing " - "tests.".format(test_case)) + "tests.".format(test_case))) else: hanging_tests_set.remove(test_case) result3 = re.match("^\\s+(\\w*).*\\sTestTimedOut", line) @@ -86,30 +88,30 @@ def get_bad_tests(console_url): timeout_tests_set.add(test_case) for bad_string in BAD_RUN_STRINGS: if re.match(".*" + bad_string + ".*", line): - print "Bad string found in build:\n > {}".format(line) - print "Result > total tests: {:4} failed : {:4} timedout : {:4} hanging : {:4}".format( - len(all_tests_set), len(failed_tests_set), len(timeout_tests_set), len(hanging_tests_set)) + print("Bad string found in build:\n > {}".format(line)) + print("Result > total tests: {:4} failed : {:4} timedout : {:4} hanging : {:4}".format( + len(all_tests_set), len(failed_tests_set), len(timeout_tests_set), len(hanging_tests_set))) return [all_tests_set, failed_tests_set, timeout_tests_set, hanging_tests_set] if __name__ == "__main__": if len(sys.argv) != 2: - print "ERROR : Provide the jenkins job console URL as the only argument." + print("ERROR : Provide the jenkins job console URL as the only argument.") sys.exit(1) - print "Fetching {}".format(sys.argv[1]) + print("Fetching {}".format(sys.argv[1])) result = get_bad_tests(sys.argv[1]) if not result: sys.exit(1) [all_tests, failed_tests, timedout_tests, hanging_tests] = result - print "Found {} hanging tests:".format(len(hanging_tests)) + print("Found {} hanging tests:".format(len(hanging_tests))) for test in hanging_tests: - print test - print "\n" - print "Found {} failed tests of which {} timed out:".format( - len(failed_tests), len(timedout_tests)) + print(test) + print("\n") + print("Found {} failed tests of which {} timed out:".format( + len(failed_tests), len(timedout_tests))) for test in failed_tests: - print "{0} {1}".format(test, ("(Timed Out)" if test in timedout_tests else "")) + print("{0} {1}".format(test, ("(Timed Out)" if test in timedout_tests else ""))) print ("\nA test may have had 0 or more atomic test failures before it timed out. 
So a " "'Timed Out' test may have other errors too.") diff --git a/dev-support/flaky-tests/flaky-reporting.Jenkinsfile b/dev-support/flaky-tests/flaky-reporting.Jenkinsfile index 22f88faad93f..f50900954532 100644 --- a/dev-support/flaky-tests/flaky-reporting.Jenkinsfile +++ b/dev-support/flaky-tests/flaky-reporting.Jenkinsfile @@ -43,11 +43,11 @@ pipeline { set -x fi declare -a flaky_args - flaky_args=("${flaky_args[@]}" --urls "${JENKINS_URL}/job/HBase/job/HBase%20Nightly/job/${BRANCH_NAME}" --is-yetus True --max-builds 20) - flaky_args=("${flaky_args[@]}" --urls "${JENKINS_URL}/job/HBase/job/HBase-Flaky-Tests/job/${BRANCH_NAME}" --is-yetus False --max-builds 50) + flaky_args=("${flaky_args[@]}" --urls "${JENKINS_URL}/job/HBase%20Nightly/job/${BRANCH_NAME}" --is-yetus True --max-builds 20) + flaky_args=("${flaky_args[@]}" --urls "${JENKINS_URL}/job/HBase-Flaky-Tests/job/${BRANCH_NAME}" --is-yetus False --max-builds 50) docker build -t hbase-dev-support dev-support docker run --ulimit nproc=12500 -v "${WORKSPACE}":/hbase -u `id -u`:`id -g` --workdir=/hbase hbase-dev-support \ - python dev-support/flaky-tests/report-flakies.py --mvn -v -o output "${flaky_args[@]}" + ./dev-support/flaky-tests/report-flakies.py --mvn -v -o output "${flaky_args[@]}" ''' sshPublisher(publishers: [ sshPublisherDesc(configName: 'Nightlies', diff --git a/dev-support/flaky-tests/report-flakies.py b/dev-support/flaky-tests/report-flakies.py index d29ecfa4da6e..16096e3344a5 100755 --- a/dev-support/flaky-tests/report-flakies.py +++ b/dev-support/flaky-tests/report-flakies.py @@ -1,4 +1,4 @@ -#!/usr/bin/env python +#!/usr/bin/env python3 ## # Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file @@ -140,7 +140,7 @@ def expand_multi_config_projects(cli_args): raise Exception("Failed to get job information from jenkins for url '" + job_url + "'. Jenkins returned HTTP status " + str(request.status_code)) response = request.json() - if response.has_key("activeConfigurations"): + if "activeConfigurations" in response: for config in response["activeConfigurations"]: final_expanded_urls.append({'url':config["url"], 'max_builds': max_builds, 'excludes': excluded_builds, 'is_yetus': is_yetus}) @@ -167,7 +167,7 @@ def expand_multi_config_projects(cli_args): url = url_max_build["url"] excludes = url_max_build["excludes"] json_response = requests.get(url + "/api/json?tree=id,builds%5Bnumber,url%5D").json() - if json_response.has_key("builds"): + if "builds" in json_response: builds = json_response["builds"] logger.info("Analyzing job: %s", url) else: @@ -238,27 +238,27 @@ def expand_multi_config_projects(cli_args): # Sort tests in descending order by flakyness. 
sorted_test_to_build_ids = OrderedDict( - sorted(test_to_build_ids.iteritems(), key=lambda x: x[1]['flakyness'], reverse=True)) + sorted(iter(test_to_build_ids.items()), key=lambda x: x[1]['flakyness'], reverse=True)) url_to_bad_test_results[url] = sorted_test_to_build_ids if len(sorted_test_to_build_ids) > 0: - print "URL: {}".format(url) - print "{:>60} {:10} {:25} {}".format( - "Test Name", "Total Runs", "Bad Runs(failed/timeout/hanging)", "Flakyness") + print("URL: {}".format(url)) + print("{:>60} {:10} {:25} {}".format( + "Test Name", "Total Runs", "Bad Runs(failed/timeout/hanging)", "Flakyness")) for bad_test in sorted_test_to_build_ids: test_status = sorted_test_to_build_ids[bad_test] - print "{:>60} {:10} {:7} ( {:4} / {:5} / {:5} ) {:2.0f}%".format( + print("{:>60} {:10} {:7} ( {:4} / {:5} / {:5} ) {:2.0f}%".format( bad_test, len(test_status['all']), test_status['bad_count'], len(test_status['failed']), len(test_status['timeout']), - len(test_status['hanging']), test_status['flakyness']) + len(test_status['hanging']), test_status['flakyness'])) else: - print "No flaky tests founds." + print("No flaky tests founds.") if len(url_to_build_ids[url]) == len(build_ids_without_tests_run): - print "None of the analyzed builds have test result." + print("None of the analyzed builds have test result.") - print "Builds analyzed: {}".format(url_to_build_ids[url]) - print "Builds without any test runs: {}".format(build_ids_without_tests_run) - print "" + print("Builds analyzed: {}".format(url_to_build_ids[url])) + print("Builds without any test runs: {}".format(build_ids_without_tests_run)) + print("") all_bad_tests = all_hanging_tests.union(all_failed_tests) diff --git a/dev-support/flaky-tests/run-flaky-tests.Jenkinsfile b/dev-support/flaky-tests/run-flaky-tests.Jenkinsfile index a681d3ca0e43..ff5399549092 100644 --- a/dev-support/flaky-tests/run-flaky-tests.Jenkinsfile +++ b/dev-support/flaky-tests/run-flaky-tests.Jenkinsfile @@ -16,12 +16,14 @@ // under the License. pipeline { agent { - node { + dockerfile { + dir 'dev-support/docker' label 'hbase' + args '-v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro' } } triggers { - cron('H H/4 * * *') // Every four hours. 
See https://jenkins.io/doc/book/pipeline/syntax/#cron-syntax + cron('@hourly') // See https://jenkins.io/doc/book/pipeline/syntax/#cron-syntax } options { // this should roughly match how long we tell the flaky dashboard to look at @@ -31,31 +33,35 @@ pipeline { } environment { ASF_NIGHTLIES = 'https://nightlies.apache.org' + JAVA_HOME = '/usr/lib/jvm/java-17' } parameters { booleanParam(name: 'DEBUG', defaultValue: false, description: 'Produce a lot more meta-information.') } - tools { - // this should match what the yetus nightly job for the branch will use - maven 'maven_latest' - jdk "jdk_1.8_latest" - } stages { stage ('run flaky tests') { steps { sh '''#!/usr/bin/env bash set -e + MVN="${MAVEN_HOME}/bin/mvn" + # print the maven version and java version + ${MVN} --version declare -a curl_args=(--fail) - declare -a mvn_args=(--batch-mode -fn -Dbuild.id="${BUILD_ID}" -Dmaven.repo.local="${WORKSPACE}/local-repository") + tmpdir=$(realpath target) + declare -a mvn_args=(--batch-mode -fn -Dbuild.id="${BUILD_ID}" -Dmaven.repo.local="${WORKSPACE}/local-repository" -Djava.io.tmpdir=${tmpdir}) if [ "${DEBUG}" = "true" ]; then curl_args=("${curl_args[@]}" -v) mvn_args=("${mvn_args[@]}" -X) set -x fi - curl "${curl_args[@]}" -o includes.txt "${JENKINS_URL}/job/HBase/job/HBase-Find-Flaky-Tests/job/${BRANCH_NAME}/lastSuccessfulBuild/artifact/output/includes" + # need to build against hadoop 3.0 profile for branch-2 when using jdk 11+ + if [[ "${BRANCH_NAME}" == *"branch-2"* ]]; then + mvn_args=("${mvn_args[@]}" -Dhadoop.profile=3.0) + fi + curl "${curl_args[@]}" -o includes.txt "${JENKINS_URL}/job/HBase-Find-Flaky-Tests/job/${BRANCH_NAME}/lastSuccessfulBuild/artifact/output/includes" if [ -s includes.txt ]; then rm -rf local-repository/org/apache/hbase - mvn clean "${mvn_args[@]}" + ${MVN} clean "${mvn_args[@]}" rm -rf "target/machine" && mkdir -p "target/machine" if [ -x dev-support/gather_machine_environment.sh ]; then "./dev-support/gather_machine_environment.sh" "target/machine" @@ -64,11 +70,11 @@ pipeline { else echo "Skipped gathering machine environment because we couldn't read the script to do so." fi - mvn -T0.25C package "${mvn_args[@]}" -Dtest="$(cat includes.txt)" -Dmaven.test.redirectTestOutputToFile=true -Dsurefire.firstPartForkCount=0.25C -Dsurefire.secondPartForkCount=0.25C + ${MVN} -T0.25C package "${mvn_args[@]}" -Dtest="$(cat includes.txt)" -Dmaven.test.redirectTestOutputToFile=true -Dsurefire.firstPartForkCount=0.25C -Dsurefire.secondPartForkCount=0.25C else echo "set of flaky tests is currently empty." fi -''' + ''' } } } diff --git a/dev-support/gen_redirect_html.py b/dev-support/gen_redirect_html.py index 0e73a5716563..2689fd8aa4a3 100755 --- a/dev-support/gen_redirect_html.py +++ b/dev-support/gen_redirect_html.py @@ -1,4 +1,4 @@ -#!/usr/bin/python +#!/usr/bin/env python3 ## # Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. 
See the NOTICE file @@ -20,8 +20,8 @@ from string import Template if len(sys.argv) != 2 : - print "usage: %s " % sys.argv[0] - exit(1) + print("usage: %s " % sys.argv[0]) + sys.exit(1) url = sys.argv[1].replace(" ", "%20") template = Template(""" @@ -34,4 +34,4 @@ """) output = template.substitute(url = url) -print output +print(output) diff --git a/dev-support/hbase-personality.sh b/dev-support/hbase-personality.sh index 8794f1bf8b0c..a6a3d95e937a 100755 --- a/dev-support/hbase-personality.sh +++ b/dev-support/hbase-personality.sh @@ -80,9 +80,9 @@ function personality_globals # TODO use PATCH_BRANCH to select jdk versions to use. # Yetus 0.7.0 enforces limits. Default proclimit is 1000. - # Up it. See HBASE-19902 for how we arrived at this number. + # Up it. See HBASE-25081 for how we arrived at this number. #shellcheck disable=SC2034 - PROC_LIMIT=12500 + PROC_LIMIT=30000 # Set docker container to run with 20g. Default is 4g in yetus. # See HBASE-19902 for how we arrived at 20g. @@ -119,6 +119,22 @@ function personality_parse_args delete_parameter "${i}" ASF_NIGHTLIES_GENERAL_CHECK_BASE=${i#*=} ;; + --build-thread=*) + delete_parameter "${i}" + BUILD_THREAD=${i#*=} + ;; + --surefire-first-part-fork-count=*) + delete_parameter "${i}" + SUREFIRE_FIRST_PART_FORK_COUNT=${i#*=} + ;; + --surefire-second-part-fork-count=*) + delete_parameter "${i}" + SUREFIRE_SECOND_PART_FORK_COUNT=${i#*=} + ;; + --java8-home=*) + delete_parameter "${i}" + JAVA8_HOME=${i#*=} + ;; esac done } @@ -133,8 +149,6 @@ function personality_modules local repostatus=$1 local testtype=$2 local extra="" - local branch1jdk8=() - local jdk8module="" local MODULES=("${CHANGED_MODULES[@]}") yetus_info "Personality: ${repostatus} ${testtype}" @@ -144,15 +158,27 @@ function personality_modules # At a few points, hbase modules can run build, test, etc. in parallel # Let it happen. Means we'll use more CPU but should be for short bursts. # https://cwiki.apache.org/confluence/display/MAVEN/Parallel+builds+in+Maven+3 - extra="--threads=2 -DHBasePatchProcess" - if [[ "${PATCH_BRANCH}" = branch-1* ]]; then - extra="${extra} -Dhttps.protocols=TLSv1.2" + if [[ "${testtype}" == mvnsite ]]; then + yetus_debug "Skip specifying --threads since maven-site-plugin does not support building in parallel." + else + if [[ -n "${BUILD_THREAD}" ]]; then + extra="--threads=${BUILD_THREAD}" + else + extra="--threads=2" + fi fi + # Set java.io.tmpdir to avoid exhausting the /tmp space + # Just simply set to 'target', it is not very critical so we do not care + # whether it is placed in the root directory or a sub module's directory + # let's make it absolute + tmpdir=$(realpath target) + extra="${extra} -Djava.io.tmpdir=${tmpdir} -DHBasePatchProcess" + # If we have HADOOP_PROFILE specified and we're on branch-2.x, pass along # the hadoop.profile system property. Ensures that Hadoop2 and Hadoop3 # logic is not both activated within Maven. - if [[ -n "${HADOOP_PROFILE}" ]] && [[ "${PATCH_BRANCH}" = branch-2* ]] ; then + if [[ -n "${HADOOP_PROFILE}" ]] && [[ "${PATCH_BRANCH}" == *"branch-2"* ]] ; then extra="${extra} -Dhadoop.profile=${HADOOP_PROFILE}" fi @@ -179,21 +205,6 @@ function personality_modules return fi - # This list should include any modules that require jdk8. Maven should be configured to only - # include them when a proper JDK is in use, but that doesn' work if we specifically ask for the - # module to build as yetus does if something changes in the module. 
Rather than try to - # figure out what jdk is in use so we can duplicate the module activation logic, just - # build at the top level if anything changes in one of these modules and let maven sort it out. - branch1jdk8=(hbase-error-prone hbase-tinylfu-blockcache) - if [[ "${PATCH_BRANCH}" = branch-1* ]]; then - for jdk8module in "${branch1jdk8[@]}"; do - if [[ "${MODULES[*]}" =~ ${jdk8module} ]]; then - MODULES=(.) - break - fi - done - fi - if [[ ${testtype} == spotbugs ]]; then # Run spotbugs on each module individually to diff pre-patch and post-patch results and # report new warnings for changed modules only. @@ -213,8 +224,7 @@ function personality_modules return fi - if [[ ${testtype} == compile ]] && [[ "${SKIP_ERRORPRONE}" != "true" ]] && - [[ "${PATCH_BRANCH}" != branch-1* ]] ; then + if [[ ${testtype} == compile ]] && [[ "${SKIP_ERRORPRONE}" != "true" ]]; then extra="${extra} -PerrorProne" fi @@ -232,6 +242,15 @@ function personality_modules extra="${extra} -Dbuild.id=${BUILD_ID}" fi + # set forkCount + if [[ -n "${SUREFIRE_FIRST_PART_FORK_COUNT}" ]]; then + extra="${extra} -Dsurefire.firstPartForkCount=${SUREFIRE_FIRST_PART_FORK_COUNT}" + fi + + if [[ -n "${SUREFIRE_SECOND_PART_FORK_COUNT}" ]]; then + extra="${extra} -Dsurefire.secondPartForkCount=${SUREFIRE_SECOND_PART_FORK_COUNT}" + fi + # If the set of changed files includes CommonFSUtils then add the hbase-server # module to the set of modules (if not already included) to be tested for f in "${CHANGED_FILES[@]}" @@ -356,10 +375,13 @@ function refguide_filefilter { local filename=$1 - if [[ ${filename} =~ src/main/asciidoc ]] || - [[ ${filename} =~ src/main/xslt ]] || - [[ ${filename} =~ hbase-common/src/main/resources/hbase-default\.xml ]]; then - add_test refguide + # we only generate ref guide on master branch now + if [[ "${PATCH_BRANCH}" = master ]]; then + if [[ ${filename} =~ src/main/asciidoc ]] || + [[ ${filename} =~ src/main/xslt ]] || + [[ ${filename} =~ hbase-common/src/main/resources/hbase-default\.xml ]]; then + add_test refguide + fi fi } @@ -405,11 +427,7 @@ function refguide_rebuild return 1 fi - if [[ "${PATCH_BRANCH}" = branch-1* ]]; then - pdf_output="book.pdf" - else - pdf_output="apache_hbase_reference_guide.pdf" - fi + pdf_output="apache_hbase_reference_guide.pdf" if [[ ! -f "${PATCH_DIR}/${repostatus}-site/${pdf_output}" ]]; then add_vote_table -1 refguide "${repostatus} failed to produce the pdf version of the reference guide." @@ -467,12 +485,12 @@ function shadedjars_rebuild local -a maven_args=('clean' 'verify' '-fae' '--batch-mode' '-pl' 'hbase-shaded/hbase-shaded-check-invariants' '-am' - '-Dtest=NoUnitTests' '-DHBasePatchProcess' '-Prelease' + '-DskipTests' '-DHBasePatchProcess' '-Prelease' '-Dmaven.javadoc.skip=true' '-Dcheckstyle.skip=true' '-Dspotbugs.skip=true') # If we have HADOOP_PROFILE specified and we're on branch-2.x, pass along # the hadoop.profile system property. Ensures that Hadoop2 and Hadoop3 # logic is not both activated within Maven. 
- if [[ -n "${HADOOP_PROFILE}" ]] && [[ "${PATCH_BRANCH}" = branch-2* ]] ; then + if [[ -n "${HADOOP_PROFILE}" ]] && [[ "${PATCH_BRANCH}" = *"branch-2"* ]] ; then maven_args+=("-Dhadoop.profile=${HADOOP_PROFILE}") fi @@ -546,6 +564,7 @@ function hadoopcheck_rebuild local result=0 local hbase_hadoop2_versions local hbase_hadoop3_versions + local savejavahome=${JAVA_HOME} if [[ "${repostatus}" = branch ]]; then return 0 @@ -561,87 +580,45 @@ function hadoopcheck_rebuild # All supported Hadoop versions that we want to test the compilation with # See the Hadoop section on prereqs in the HBase Reference Guide - if [[ "${PATCH_BRANCH}" = branch-1.3 ]]; then - yetus_info "Setting Hadoop 2 versions to test based on branch-1.3 rules." - if [[ "${QUICK_HADOOPCHECK}" == "true" ]]; then - hbase_hadoop2_versions="2.4.1 2.5.2 2.6.5 2.7.7" - else - hbase_hadoop2_versions="2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 2.7.5 2.7.6 2.7.7" - fi - elif [[ "${PATCH_BRANCH}" = branch-1.4 ]]; then - yetus_info "Setting Hadoop 2 versions to test based on branch-1.4 rules." - if [[ "${QUICK_HADOOPCHECK}" == "true" ]]; then - hbase_hadoop2_versions="2.7.7" - else - hbase_hadoop2_versions="2.7.1 2.7.2 2.7.3 2.7.4 2.7.5 2.7.6 2.7.7" - fi - elif [[ "${PATCH_BRANCH}" = branch-1 ]]; then - yetus_info "Setting Hadoop 2 versions to test based on branch-1 rules." - if [[ "${QUICK_HADOOPCHECK}" == "true" ]]; then - hbase_hadoop2_versions="2.10.0" - else - hbase_hadoop2_versions="2.10.0" - fi - elif [[ "${PATCH_BRANCH}" = branch-2.0 ]]; then - yetus_info "Setting Hadoop 2 versions to test based on branch-2.0 rules." - if [[ "${QUICK_HADOOPCHECK}" == "true" ]]; then - hbase_hadoop2_versions="2.6.5 2.7.7 2.8.5" - else - hbase_hadoop2_versions="2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 2.7.5 2.7.6 2.7.7 2.8.2 2.8.3 2.8.4 2.8.5" - fi - elif [[ "${PATCH_BRANCH}" = branch-2.1 ]]; then - yetus_info "Setting Hadoop 2 versions to test based on branch-2.1 rules." - if [[ "${QUICK_HADOOPCHECK}" == "true" ]]; then - hbase_hadoop2_versions="2.7.7 2.8.5" - else - hbase_hadoop2_versions="2.7.1 2.7.2 2.7.3 2.7.4 2.7.5 2.7.6 2.7.7 2.8.2 2.8.3 2.8.4 2.8.5" - fi - elif [[ "${PATCH_BRANCH}" = branch-2.2 ]]; then - yetus_info "Setting Hadoop 2 versions to test based on branch-2.2 rules." - if [[ "${QUICK_HADOOPCHECK}" == "true" ]]; then - hbase_hadoop2_versions="2.8.5 2.9.2 2.10.0" - else - hbase_hadoop2_versions="2.8.5 2.9.2 2.10.0" - fi - elif [[ "${PATCH_BRANCH}" = branch-2.* ]]; then - yetus_info "Setting Hadoop 2 versions to test based on branch-2.3+ rules." - if [[ "${QUICK_HADOOPCHECK}" == "true" ]]; then - hbase_hadoop2_versions="2.10.0" - else - hbase_hadoop2_versions="2.10.0" - fi + if [[ "${PATCH_BRANCH}" = *"branch-2"* ]]; then + yetus_info "Setting Hadoop 2 versions to test based on branch-2.5+ rules." + hbase_hadoop2_versions="2.10.2" else yetus_info "Setting Hadoop 2 versions to null on master/feature branch rules since we do not support hadoop 2 for hbase 3.x any more." hbase_hadoop2_versions="" fi - if [[ "${PATCH_BRANCH}" = branch-1* ]]; then - yetus_info "Setting Hadoop 3 versions to test based on branch-1.x rules." 
- hbase_hadoop3_versions="" - elif [[ "${PATCH_BRANCH}" = branch-2.0 ]] || [[ "${PATCH_BRANCH}" = branch-2.1 ]]; then - yetus_info "Setting Hadoop 3 versions to test based on branch-2.0/branch-2.1 rules" + + if [[ "${PATCH_BRANCH}" = *"branch-2.5"* ]]; then + yetus_info "Setting Hadoop 3 versions to test based on branch-2.5 rules" if [[ "${QUICK_HADOOPCHECK}" == "true" ]]; then - hbase_hadoop3_versions="3.0.3 3.1.2" + hbase_hadoop3_versions="3.2.4 3.3.6 3.4.0" else - hbase_hadoop3_versions="3.0.3 3.1.1 3.1.2" + hbase_hadoop3_versions="3.2.3 3.2.4 3.3.2 3.3.3 3.3.4 3.3.5 3.3.6 3.4.0" fi else - yetus_info "Setting Hadoop 3 versions to test based on branch-2.2+/master/feature branch rules" + yetus_info "Setting Hadoop 3 versions to test based on branch-2.6+/master/feature branch rules" if [[ "${QUICK_HADOOPCHECK}" == "true" ]]; then - hbase_hadoop3_versions="3.1.2 3.2.1" + hbase_hadoop3_versions="3.3.6 3.4.0" else - hbase_hadoop3_versions="3.1.1 3.1.2 3.2.0 3.2.1" + hbase_hadoop3_versions="3.3.5 3.3.6 3.4.0" fi fi export MAVEN_OPTS="${MAVEN_OPTS}" for hadoopver in ${hbase_hadoop2_versions}; do logfile="${PATCH_DIR}/patch-javac-${hadoopver}.txt" + # alawys use java8 to build with hadoop 2.x + if [[ -n "${JAVA8_HOME}" ]]; then + yetus_info "Switching to java 8 for building against hadoop 2.x" + export JAVA_HOME=${JAVA8_HOME} + fi # disabled because "maven_executor" needs to return both command and args # shellcheck disable=2046 echo_and_redirect "${logfile}" \ $(maven_executor) clean install \ -DskipTests -DHBasePatchProcess \ -Dhadoop-two.version="${hadoopver}" + export JAVA_HOME=${savejavahome} count=$(${GREP} -c '\[ERROR\]' "${logfile}") if [[ ${count} -gt 0 ]]; then add_vote_table -1 hadoopcheck "${BUILDMODEMSG} causes ${count} errors with Hadoop v${hadoopver}." @@ -651,7 +628,7 @@ function hadoopcheck_rebuild done hadoop_profile="" - if [[ "${PATCH_BRANCH}" = branch-2* ]]; then + if [[ "${PATCH_BRANCH}" == *"branch-2"* ]]; then hadoop_profile="-Dhadoop.profile=3.0" fi for hadoopver in ${hbase_hadoop3_versions}; do @@ -698,6 +675,14 @@ function hadoopcheck_rebuild # TODO if we need the protoc check, we probably need to check building all the modules that rely on hbase-protocol add_test_type hbaseprotoc +function hbaseprotoc_initialize +{ + # So long as there are inter-module dependencies on the protoc modules, we + # need to run a full `mvn install` before a patch can be tested. + yetus_debug "initializing HBase Protoc plugin." + maven_add_install hbaseprotoc +} + ## @description hbaseprotoc file filter ## @audience private ## @stability evolving @@ -741,7 +726,7 @@ function hbaseprotoc_rebuild # Need to run 'install' instead of 'compile' because shading plugin # is hooked-up to 'install'; else hbase-protocol-shaded is left with # half of its process done. 
- modules_workers patch hbaseprotoc install -DskipTests -X -DHBasePatchProcess + modules_workers patch hbaseprotoc install -DskipTests -DHBasePatchProcess # shellcheck disable=SC2153 until [[ $i -eq "${#MODULE[@]}" ]]; do @@ -824,6 +809,55 @@ function hbaseanti_patchfile return 0 } +###################################### + +add_test_type spotless + +## @description spotless file filter +## @audience private +## @stability evolving +## @param filename +function spotless_filefilter +{ + # always add spotless check as it can format almost all types of files + add_test spotless +} +## @description run spotless:check to check format issues +## @audience private +## @stability evolving +## @param repostatus +function spotless_rebuild +{ + local repostatus=$1 + local logfile="${PATCH_DIR}/${repostatus}-spotless.txt" + + if ! verify_needed_test spotless; then + return 0 + fi + + big_console_header "Checking spotless on ${repostatus}" + + start_clock + + local -a maven_args=('spotless:check') + + # disabled because "maven_executor" needs to return both command and args + # shellcheck disable=2046 + echo_and_redirect "${logfile}" $(maven_executor) "${maven_args[@]}" + + count=$(${GREP} -c '\[ERROR\]' "${logfile}") + if [[ ${count} -gt 0 ]]; then + add_vote_table -1 spotless "${repostatus} has ${count} errors when running spotless:check, run spotless:apply to fix." + add_footer_table spotless "@@BASE@@/${repostatus}-spotless.txt" + return 1 + fi + + add_vote_table +1 spotless "${repostatus} has no errors when running spotless:check." + return 0 +} + +###################################### + ## @description process the javac output for generating WARNING/ERROR ## @audience private ## @stability evolving diff --git a/dev-support/hbase_docker/Dockerfile b/dev-support/hbase_docker/Dockerfile index 58d35af4bcf4..2c246d003f46 100644 --- a/dev-support/hbase_docker/Dockerfile +++ b/dev-support/hbase_docker/Dockerfile @@ -14,38 +14,38 @@ # See the License for the specific language governing permissions and # limitations under the License. 
-FROM ubuntu:18.04 AS BASE_IMAGE +FROM ubuntu:22.04 AS base_image SHELL ["/bin/bash", "-o", "pipefail", "-c"] RUN DEBIAN_FRONTEND=noninteractive apt-get -qq update && \ DEBIAN_FRONTEND=noninteractive apt-get -qq install --no-install-recommends -y \ - ca-certificates=20180409 \ - curl='7.58.0-*' \ - git='1:2.17.1-*' \ - locales='2.27-*' \ + ca-certificates=20211016 \ + curl='7.81.0-*' \ + git='1:2.34.1-*' \ + locales='2.35-*' \ && \ apt-get clean && \ - rm -rf /var/lib/apt/lists/* - -RUN locale-gen en_US.UTF-8 + rm -rf /var/lib/apt/lists/* \ + && \ + locale-gen en_US.UTF-8 ENV LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=en_US.UTF-8 -FROM BASE_IMAGE AS MAVEN_DOWNLOAD_IMAGE -ENV MAVEN_VERSION='3.6.3' +FROM base_image AS maven_download_image +ENV MAVEN_VERSION='3.8.6' ENV MAVEN_URL "https://archive.apache.org/dist/maven/maven-3/${MAVEN_VERSION}/binaries/apache-maven-${MAVEN_VERSION}-bin.tar.gz" -ENV MAVEN_SHA512 'c35a1803a6e70a126e80b2b3ae33eed961f83ed74d18fcd16909b2d44d7dada3203f1ffe726c17ef8dcca2dcaa9fca676987befeadc9b9f759967a8cb77181c0' +ENV MAVEN_SHA512 'f790857f3b1f90ae8d16281f902c689e4f136ebe584aba45e4b1fa66c80cba826d3e0e52fdd04ed44b4c66f6d3fe3584a057c26dfcac544a60b301e6d0f91c26' SHELL ["/bin/bash", "-o", "pipefail", "-c"] RUN curl --location --fail --silent --show-error --output /tmp/maven.tar.gz "${MAVEN_URL}" && \ echo "${MAVEN_SHA512} */tmp/maven.tar.gz" | sha512sum -c - -FROM BASE_IMAGE AS OPENJDK8_DOWNLOAD_IMAGE -ENV OPENJDK8_URL 'https://github.com/AdoptOpenJDK/openjdk8-binaries/releases/download/jdk8u232-b09/OpenJDK8U-jdk_x64_linux_hotspot_8u232b09.tar.gz' -ENV OPENJDK8_SHA256 '7b7884f2eb2ba2d47f4c0bf3bb1a2a95b73a3a7734bd47ebf9798483a7bcc423' +FROM base_image AS openjdk8_download_image +ENV OPENJDK8_URL 'https://github.com/adoptium/temurin8-binaries/releases/download/jdk8u352-b08/OpenJDK8U-jdk_x64_linux_hotspot_8u352b08.tar.gz' +ENV OPENJDK8_SHA256 '1633bd7590cb1cd72f5a1378ae8294451028b274d798e2a4ac672059a2f00fee' SHELL ["/bin/bash", "-o", "pipefail", "-c"] RUN curl --location --fail --silent --show-error --output /tmp/adoptopenjdk8.tar.gz "${OPENJDK8_URL}" && \ echo "${OPENJDK8_SHA256} */tmp/adoptopenjdk8.tar.gz" | sha256sum -c - -FROM BASE_IMAGE +FROM base_image SHELL ["/bin/bash", "-o", "pipefail", "-c"] # @@ -54,13 +54,13 @@ SHELL ["/bin/bash", "-o", "pipefail", "-c"] # # hadolint ignore=DL3010 -COPY --from=MAVEN_DOWNLOAD_IMAGE /tmp/maven.tar.gz /tmp/maven.tar.gz +COPY --from=maven_download_image /tmp/maven.tar.gz /tmp/maven.tar.gz RUN tar xzf /tmp/maven.tar.gz -C /opt && \ ln -s "/opt/$(dirname "$(tar -tf /tmp/maven.tar.gz | head -n1)")" /opt/maven && \ rm /tmp/maven.tar.gz # hadolint ignore=DL3010 -COPY --from=OPENJDK8_DOWNLOAD_IMAGE /tmp/adoptopenjdk8.tar.gz /tmp/adoptopenjdk8.tar.gz +COPY --from=openjdk8_download_image /tmp/adoptopenjdk8.tar.gz /tmp/adoptopenjdk8.tar.gz RUN mkdir -p /usr/lib/jvm && \ tar xzf /tmp/adoptopenjdk8.tar.gz -C /usr/lib/jvm && \ ln -s "/usr/lib/jvm/$(basename "$(tar -tf /tmp/adoptopenjdk8.tar.gz | head -n1)")" /usr/lib/jvm/java-8-adoptopenjdk && \ @@ -74,12 +74,16 @@ ENV PATH "${JAVA_HOME}/bin:${MAVEN_HOME}/bin:${PATH}" # Pull down HBase and build it into /root/hbase-bin. 
WORKDIR /root -RUN git clone https://gitbox.apache.org/repos/asf/hbase.git -b master -RUN mvn clean install -DskipTests assembly:single -f ./hbase/pom.xml -RUN mkdir -p hbase-bin -RUN find /root/hbase/hbase-assembly/target -iname '*.tar.gz' -not -iname '*client*' \ - | head -n 1 \ - | xargs -I{} tar xzf {} --strip-components 1 -C /root/hbase-bin +ARG BRANCH_OR_TAG=master +RUN git clone --depth 1 -b ${BRANCH_OR_TAG} https://github.com/apache/hbase.git \ + && \ + mvn -T1C clean install -DskipTests assembly:single -f ./hbase/pom.xml \ + && \ + mkdir -p hbase-bin \ + && \ + find /root/hbase/hbase-assembly/target -iname '*.tar.gz' -not -iname '*client*' \ + | head -n 1 \ + | xargs -I{} tar xzf {} --strip-components 1 -C /root/hbase-bin # Set HBASE_HOME, add it to the path, and start HBase. ENV HBASE_HOME /root/hbase-bin diff --git a/dev-support/hbase_docker/README.md b/dev-support/hbase_docker/README.md index 1750e809cc78..3d0641afaee9 100644 --- a/dev-support/hbase_docker/README.md +++ b/dev-support/hbase_docker/README.md @@ -22,22 +22,25 @@ under the License. ## Overview The Dockerfile in this folder can be used to build a Docker image running -the latest HBase master branch in standalone mode. It does this by setting -up necessary dependencies, checking out the master branch of HBase from -GitHub, and then building HBase. By default, this image will start the HMaster -and launch the HBase shell when run. +a specific HBase branch or tag (defaults to master) in standalone mode. It +does this by setting up necessary dependencies, checking out the specified +branch or tag of HBase from GitHub, and then building HBase. By default, +this image will start the HMaster and launch the HBase shell when run. ## Usage 1. Ensure that you have a recent version of Docker installed from [docker.io](http://www.docker.io). 1. Set this folder as your working directory. -1. Type `docker build -t hbase_docker .` to build a Docker image called **hbase_docker**. - This may take 10 minutes or more the first time you run the command since it will - create a Maven repository inside the image as well as checkout the master branch - of HBase. +1. Type `docker build -t hbase_docker --build-arg BRANCH_OR_TAG=<branch_or_tag> .` + to build a Docker image called **hbase_docker**. This may take 10 minutes + or more the first time you run the command since it will create a Maven + repository inside the image as well as check out the requested branch or tag of HBase. 1. When this completes successfully, you can run `docker run -it hbase_docker` to access an HBase shell running inside of a container created from the **hbase_docker** image. Alternatively, you can type `docker run -it hbase_docker bash` to start a container without a running HMaster. Within this environment, HBase is built in `/root/hbase-bin`. + +> NOTE: When running on Mac M1 platforms, the Dockerfile requires setting the platform flag explicitly. +> You may use the same instructions as above, running them from the "./m1" sub-directory. diff --git a/dev-support/hbase_docker/m1/Dockerfile b/dev-support/hbase_docker/m1/Dockerfile new file mode 100644 index 000000000000..fa88638a7aef --- /dev/null +++ b/dev-support/hbase_docker/m1/Dockerfile @@ -0,0 +1,92 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership.
The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +FROM amd64/ubuntu:22.04 AS base_image +SHELL ["/bin/bash", "-o", "pipefail", "-c"] + +RUN DEBIAN_FRONTEND=noninteractive apt-get -qq update && \ + DEBIAN_FRONTEND=noninteractive apt-get -qq install --no-install-recommends -y \ + ca-certificates=20211016 \ + curl='7.81.0-*' \ + git='1:2.34.1-*' \ + locales='2.35-*' \ + && \ + apt-get clean && \ + rm -rf /var/lib/apt/lists/* \ + && \ + locale-gen en_US.UTF-8 +ENV LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=en_US.UTF-8 + +FROM base_image AS maven_download_image +ENV MAVEN_VERSION='3.8.6' +ENV MAVEN_URL "https://archive.apache.org/dist/maven/maven-3/${MAVEN_VERSION}/binaries/apache-maven-${MAVEN_VERSION}-bin.tar.gz" +ENV MAVEN_SHA512 'f790857f3b1f90ae8d16281f902c689e4f136ebe584aba45e4b1fa66c80cba826d3e0e52fdd04ed44b4c66f6d3fe3584a057c26dfcac544a60b301e6d0f91c26' +SHELL ["/bin/bash", "-o", "pipefail", "-c"] +RUN curl --location --fail --silent --show-error --output /tmp/maven.tar.gz "${MAVEN_URL}" && \ + echo "${MAVEN_SHA512} */tmp/maven.tar.gz" | sha512sum -c - + +FROM base_image AS openjdk8_download_image +ENV OPENJDK8_URL 'https://github.com/adoptium/temurin8-binaries/releases/download/jdk8u352-b08/OpenJDK8U-jdk_x64_linux_hotspot_8u352b08.tar.gz' +ENV OPENJDK8_SHA256 '1633bd7590cb1cd72f5a1378ae8294451028b274d798e2a4ac672059a2f00fee' +SHELL ["/bin/bash", "-o", "pipefail", "-c"] +RUN curl --location --fail --silent --show-error --output /tmp/adoptopenjdk8.tar.gz "${OPENJDK8_URL}" && \ + echo "${OPENJDK8_SHA256} */tmp/adoptopenjdk8.tar.gz" | sha256sum -c - + +FROM base_image +SHELL ["/bin/bash", "-o", "pipefail", "-c"] + +# +# when updating java or maven versions here, consider also updating +# `dev-support/docker/Dockerfile` as well. +# + +# hadolint ignore=DL3010 +COPY --from=maven_download_image /tmp/maven.tar.gz /tmp/maven.tar.gz +RUN tar xzf /tmp/maven.tar.gz -C /opt && \ + ln -s "/opt/$(dirname "$(tar -tf /tmp/maven.tar.gz | head -n1)")" /opt/maven && \ + rm /tmp/maven.tar.gz + +# hadolint ignore=DL3010 +COPY --from=openjdk8_download_image /tmp/adoptopenjdk8.tar.gz /tmp/adoptopenjdk8.tar.gz +RUN mkdir -p /usr/lib/jvm && \ + tar xzf /tmp/adoptopenjdk8.tar.gz -C /usr/lib/jvm && \ + ln -s "/usr/lib/jvm/$(basename "$(tar -tf /tmp/adoptopenjdk8.tar.gz | head -n1)")" /usr/lib/jvm/java-8-adoptopenjdk && \ + ln -s /usr/lib/jvm/java-8-adoptopenjdk /usr/lib/jvm/java-8 && \ + rm /tmp/adoptopenjdk8.tar.gz + +ENV MAVEN_HOME '/opt/maven' +ENV JAVA_HOME '/usr/lib/jvm/java-8' +ENV PATH '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin' +ENV PATH "${JAVA_HOME}/bin:${MAVEN_HOME}/bin:${PATH}" + +# Pull down HBase and build it into /root/hbase-bin. 
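The following is an illustrative usage sketch for the `BRANCH_OR_TAG` build argument described in the README above; the branch name is only an example, and on Apple Silicon hosts the same commands are assumed to be run from this `m1` sub-directory.

```sh
# Build the image from a chosen branch or tag, then start the standalone shell.
docker build -t hbase_docker --build-arg BRANCH_OR_TAG=branch-2.5 .
docker run -it hbase_docker
```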
+WORKDIR /root +ARG BRANCH_OR_TAG=master +RUN git clone --depth 1 -b ${BRANCH_OR_TAG} https://github.com/apache/hbase.git \ + && \ + mvn -T1C clean install -DskipTests assembly:single -f ./hbase/pom.xml \ + && \ + mkdir -p hbase-bin \ + && \ + find /root/hbase/hbase-assembly/target -iname '*.tar.gz' -not -iname '*client*' \ + | head -n 1 \ + | xargs -I{} tar xzf {} --strip-components 1 -C /root/hbase-bin + +# Set HBASE_HOME, add it to the path, and start HBase. +ENV HBASE_HOME /root/hbase-bin +ENV PATH "/root/hbase-bin/bin:${PATH}" + +CMD ["/bin/bash", "-c", "start-hbase.sh; hbase shell"] diff --git a/dev-support/hbase_eclipse_formatter.xml b/dev-support/hbase_eclipse_formatter.xml index 6dec653ad620..99000a62e214 100644 --- a/dev-support/hbase_eclipse_formatter.xml +++ b/dev-support/hbase_eclipse_formatter.xml @@ -18,297 +18,401 @@ * limitations under the License. */ --> - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/dev-support/hbase_nightly_pseudo-distributed-test.sh b/dev-support/hbase_nightly_pseudo-distributed-test.sh index 4ad431c69fa0..8d03195715cf 100755 --- a/dev-support/hbase_nightly_pseudo-distributed-test.sh +++ b/dev-support/hbase_nightly_pseudo-distributed-test.sh @@ -198,23 +198,21 @@ echo "Writing out configuration for HBase." 
rm -rf "${working_dir}/hbase-conf" mkdir "${working_dir}/hbase-conf" -if [ -f "${component_install}/conf/log4j.properties" ]; then - cp "${component_install}/conf/log4j.properties" "${working_dir}/hbase-conf/log4j.properties" +if [ -f "${component_install}/conf/log4j2.properties" ]; then + cp "${component_install}/conf/log4j2.properties" "${working_dir}/hbase-conf/log4j2.properties" else - cat >"${working_dir}/hbase-conf/log4j.properties" <"${working_dir}/hbase-conf/log4j2.properties" <"${working_dir}/example.tsv" < the test didn't finish notFinishedCounter=$(($notFinishedCounter + 1)) notFinishedList="$notFinishedList,$testClass" - fi + fi done #list of all tests that failed @@ -411,7 +411,7 @@ echo echo "Tests in error are: $errorPresList" echo "Tests that didn't finish are: $notFinishedPresList" echo -echo "Execution time in minutes: $exeTime" +echo "Execution time in minutes: $exeTime" echo "##########################" diff --git a/dev-support/jenkinsEnv.sh b/dev-support/jenkinsEnv.sh index d7fe87339e2a..969ece4dc4c6 100755 --- a/dev-support/jenkinsEnv.sh +++ b/dev-support/jenkinsEnv.sh @@ -33,4 +33,3 @@ export PATH=$PATH:$JAVA_HOME/bin:$ANT_HOME/bin: export MAVEN_OPTS="${MAVEN_OPTS:-"-Xmx3100M -XX:-UsePerfData"}" ulimit -n - diff --git a/dev-support/jenkins_precommit_github_yetus.sh b/dev-support/jenkins_precommit_github_yetus.sh index 693e07b63332..2d8421fd756d 100755 --- a/dev-support/jenkins_precommit_github_yetus.sh +++ b/dev-support/jenkins_precommit_github_yetus.sh @@ -41,6 +41,9 @@ declare -a required_envs=( "SOURCEDIR" "TESTS_FILTER" "YETUSDIR" + "AUTHOR_IGNORE_LIST" + "BLANKS_EOL_IGNORE_FILE" + "BLANKS_TABS_IGNORE_FILE" ) # Validate params for required_env in "${required_envs[@]}"; do @@ -57,7 +60,7 @@ if [ ${missing_env} -gt 0 ]; then fi # TODO (HBASE-23900): cannot assume test-patch runs directly from sources -TESTPATCHBIN="${YETUSDIR}/precommit/src/main/shell/test-patch.sh" +TESTPATCHBIN="${YETUSDIR}/bin/test-patch" # this must be clean for every run rm -rf "${PATCHDIR}" @@ -67,14 +70,6 @@ mkdir -p "${PATCHDIR}" mkdir "${PATCHDIR}/machine" "${SOURCEDIR}/dev-support/gather_machine_environment.sh" "${PATCHDIR}/machine" -# If CHANGE_URL is set (e.g., Github Branch Source plugin), process it. -# Otherwise exit, because we don't want HBase to do a -# full build. We wouldn't normally do this check for smaller -# projects. 
:) -if [[ -z "${CHANGE_URL}" ]]; then - echo "Full build skipped" > "${PATCHDIR}/report.html" - exit 0 -fi # enable debug output for yetus if [[ "true" = "${DEBUG}" ]]; then YETUS_ARGS+=("--debug") @@ -95,8 +90,8 @@ YETUS_ARGS+=("--brief-report-file=${PATCHDIR}/brief.txt") YETUS_ARGS+=("--console-report-file=${PATCHDIR}/console.txt") YETUS_ARGS+=("--html-report-file=${PATCHDIR}/report.html") # enable writing back to Github -YETUS_ARGS+=("--github-password=${GITHUB_PASSWORD}") -YETUS_ARGS+=("--github-user=${GITHUB_USER}") +YETUS_ARGS+=("--github-token=${GITHUB_PASSWORD}") +YETUS_ARGS+=("--github-write-comment") # auto-kill any surefire stragglers during unit test runs YETUS_ARGS+=("--reapermode=kill") # set relatively high limits for ASF machines @@ -117,8 +112,9 @@ YETUS_ARGS+=("--docker") YETUS_ARGS+=("--dockerfile=${DOCKERFILE}") YETUS_ARGS+=("--mvn-custom-repos") YETUS_ARGS+=("--java-home=${SET_JAVA_HOME}") -YETUS_ARGS+=("--whitespace-eol-ignore-list=.*/generated/.*") -YETUS_ARGS+=("--whitespace-tabs-ignore-list=.*/generated/.*") +YETUS_ARGS+=("--author-ignore-list=${AUTHOR_IGNORE_LIST}") +YETUS_ARGS+=("--blanks-eol-ignore-file=${BLANKS_EOL_IGNORE_FILE}") +YETUS_ARGS+=("--blanks-tabs-ignore-file=${BLANKS_TABS_IGNORE_FILE}*") YETUS_ARGS+=("--tests-filter=${TESTS_FILTER}") YETUS_ARGS+=("--personality=${SOURCEDIR}/dev-support/hbase-personality.sh") YETUS_ARGS+=("--quick-hadoopcheck") @@ -147,6 +143,19 @@ YETUS_ARGS+=("--github-use-emoji-vote") if [[ -n "${ASF_NIGHTLIES_GENERAL_CHECK_BASE}" ]]; then YETUS_ARGS+=("--asf-nightlies-general-check-base=${ASF_NIGHTLIES_GENERAL_CHECK_BASE}") fi +# pass build parallelism in +if [[ -n "${BUILD_THREAD}" ]]; then + YETUS_ARGS+=("--build-thread=${BUILD_THREAD}") +fi +if [[ -n "${SUREFIRE_FIRST_PART_FORK_COUNT}" ]]; then + YETUS_ARGS+=("--surefire-first-part-fork-count=${SUREFIRE_FIRST_PART_FORK_COUNT}") +fi +if [[ -n "${SUREFIRE_SECOND_PART_FORK_COUNT}" ]]; then + YETUS_ARGS+=("--surefire-second-part-fork-count=${SUREFIRE_SECOND_PART_FORK_COUNT}") +fi +if [[ -n "${JAVA8_HOME}" ]]; then + YETUS_ARGS+=("--java8-home=${JAVA8_HOME}") +fi echo "Launching yetus with command line:" echo "${TESTPATCHBIN} ${YETUS_ARGS[*]}" diff --git a/dev-support/license-header b/dev-support/license-header new file mode 100644 index 000000000000..2379ddac12cc --- /dev/null +++ b/dev-support/license-header @@ -0,0 +1,17 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ diff --git a/dev-support/make_rc.sh b/dev-support/make_rc.sh index 73691791f6aa..1892d5a906e6 100755 --- a/dev-support/make_rc.sh +++ b/dev-support/make_rc.sh @@ -21,7 +21,7 @@ # timestamp suffix. Deploys builds to maven. # # To finish, check what was build. If good copy to people.apache.org and -# close the maven repos. Call a vote. +# close the maven repos. 
Call a vote. # # Presumes that dev-support/generate-hadoopX-poms.sh has already been run. # Presumes your settings.xml all set up so can sign artifacts published to mvn, etc. diff --git a/dev-support/python-requirements.txt b/dev-support/python-requirements.txt index 8ef693e7221d..73b7c8e11600 100644 --- a/dev-support/python-requirements.txt +++ b/dev-support/python-requirements.txt @@ -15,8 +15,8 @@ # See the License for the specific language governing permissions and # limitations under the License. # -requests -future -gitpython -rbtools -jinja2 +requests==2.32.0 +future==0.18.3 +gitpython==3.1.41 +rbtools==4.0 +jinja2==3.1.4 diff --git a/dev-support/rebase_all_git_branches.sh b/dev-support/rebase_all_git_branches.sh index ef213c8fb3db..5c63e4054691 100755 --- a/dev-support/rebase_all_git_branches.sh +++ b/dev-support/rebase_all_git_branches.sh @@ -17,11 +17,11 @@ # specific language governing permissions and limitations # under the License. -# This script assumes that your remote is called "origin" +# This script assumes that your remote is called "origin" # and that your local master branch is called "master". # I am sure it could be made more abstract but these are the defaults. -# Edit this line to point to your default directory, +# Edit this line to point to your default directory, # or always pass a directory to the script. DEFAULT_DIR="EDIT_ME" @@ -69,13 +69,13 @@ function check_git_branch_status { } function get_jira_status { - # This function expects as an argument the JIRA ID, + # This function expects as an argument the JIRA ID, # and returns 99 if resolved and 1 if it couldn't # get the status. - # The JIRA status looks like this in the HTML: + # The JIRA status looks like this in the HTML: # span id="resolution-val" class="value resolved" > - # The following is a bit brittle, but filters for lines with + # The following is a bit brittle, but filters for lines with # resolution-val returns 99 if it's resolved jira_url='https://issues.apache.org/jira/rest/api/2/issue' jira_id="$1" @@ -106,7 +106,7 @@ while getopts ":hd:" opt; do print_usage exit 0 ;; - *) + *) echo "Invalid argument: $OPTARG" >&2 print_usage >&2 exit 1 @@ -135,7 +135,7 @@ get_tracking_branches for i in "${tracking_branches[@]}"; do git checkout -q "$i" # Exit if git status is dirty - check_git_branch_status + check_git_branch_status git pull -q --rebase status=$? if [ "$status" -ne 0 ]; then @@ -169,7 +169,7 @@ for i in "${all_branches[@]}"; do git checkout -q "$i" # Exit if git status is dirty - check_git_branch_status + check_git_branch_status # If this branch has a remote, don't rebase it # If it has a remote, it has a log with at least one entry @@ -184,7 +184,7 @@ for i in "${all_branches[@]}"; do echo "Failed. Rolling back. Rebase $i manually." git rebase --abort fi - elif [ $status -ne 0 ]; then + elif [ $status -ne 0 ]; then # If status is 0 it means there is a remote branch, we already took care of it echo "Unknown error: $?" >&2 exit 1 @@ -195,10 +195,10 @@ done for i in "${deleted_branches[@]}"; do read -p "$i's JIRA is resolved. Delete? " yn case $yn in - [Yy]) + [Yy]) git branch -D $i ;; - *) + *) echo "To delete it manually, run git branch -D $deleted_branches" ;; esac diff --git a/dev-support/smart-apply-patch.sh b/dev-support/smart-apply-patch.sh index 9200e3ba921c..a8a22b06ef16 100755 --- a/dev-support/smart-apply-patch.sh +++ b/dev-support/smart-apply-patch.sh @@ -52,7 +52,7 @@ if $PATCH -p0 -E --dry-run < $PATCH_FILE 2>&1 > $TMP; then # correct place to put those files. 
# NOTE 2014/07/17: -# Temporarily disabling below check since our jenkins boxes seems to be not defaulting to bash +# Temporarily disabling below check since our jenkins boxes seems to be not defaulting to bash # causing below checks to fail. Once it is fixed, we can revert the commit and enable this again. # TMP2=/tmp/tmp.paths.2.$$ diff --git a/dev-support/submit-patch.py b/dev-support/submit-patch.py deleted file mode 100755 index 8c529153f65f..000000000000 --- a/dev-support/submit-patch.py +++ /dev/null @@ -1,311 +0,0 @@ -#!/usr/bin/env python -## -# Licensed to the Apache Software Foundation (ASF) under one -# or more contributor license agreements. See the NOTICE file -# distributed with this work for additional information -# regarding copyright ownership. The ASF licenses this file -# to you under the Apache License, Version 2.0 (the -# "License"); you may not use this file except in compliance -# with the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# -# Makes a patch for the current branch, creates/updates the review board request and uploads new -# patch to jira. Patch is named as (JIRA).(branch name).(patch number).patch as per Yetus' naming -# rules. If no jira is specified, patch will be named (branch name).patch and jira and review board -# are not updated. Review board id is retrieved from the remote link in the jira. -# Print help: submit-patch.py --h -import argparse -from builtins import input, str -import getpass -import git -import json -import logging -import os -import re -import requests -import subprocess -import sys - -parser = argparse.ArgumentParser( - epilog = "To avoid having to enter jira/review board username/password every time, setup an " - "encrypted ~/.apache-cred files as follows:\n" - "1) Create a file with following single " - "line: \n{\"jira_username\" : \"appy\", \"jira_password\":\"123\", " - "\"rb_username\":\"appy\", \"rb_password\" : \"@#$\"}\n" - "2) Encrypt it with openssl.\n" - "openssl enc -aes-256-cbc -in -out ~/.apache-creds\n" - "3) Delete original file.\n" - "Now onwards, you'll need to enter this encryption key only once per run. If you " - "forget the key, simply regenerate ~/.apache-cred file again.", - formatter_class=argparse.RawTextHelpFormatter -) -parser.add_argument("-b", "--branch", - help = "Branch to use for generating diff. If not specified, tracking branch " - "is used. If there is no tracking branch, error will be thrown.") - -# Arguments related to Jira. -parser.add_argument("-jid", "--jira-id", - help = "Jira id of the issue. If set, we deduce next patch version from " - "attachments in the jira and also upload the new patch. Script will " - "ask for jira username/password for authentication. If not set, " - "patch is named .patch.") - -# Arguments related to Review Board. -parser.add_argument("-srb", "--skip-review-board", - help = "Don't create/update the review board.", - default = False, action = "store_true") -parser.add_argument("--reviewers", - help = "Comma separated list of users to add as reviewers.") - -# Misc arguments -parser.add_argument("--patch-dir", default = "~/patches", - help = "Directory to store patch files. 
If it doesn't exist, it will be " - "created. Default: ~/patches") -parser.add_argument("--rb-repo", default = "hbase-git", - help = "Review board repository. Default: hbase-git") -args = parser.parse_args() - -# Setup logger -logging.basicConfig() -logger = logging.getLogger("submit-patch") -logger.setLevel(logging.INFO) - - -def log_fatal_and_exit(*arg): - logger.fatal(*arg) - sys.exit(1) - - -def assert_status_code(response, expected_status_code, description): - if response.status_code != expected_status_code: - log_fatal_and_exit(" Oops, something went wrong when %s. \nResponse: %s %s\nExiting..", - description, response.status_code, response.reason) - - -# Make repo instance to interact with git repo. -try: - repo = git.Repo(os.getcwd()) - git = repo.git -except git.exc.InvalidGitRepositoryError as e: - log_fatal_and_exit(" '%s' is not valid git repo directory.\nRun from base directory of " - "HBase's git repo.", e) - -logger.info(" Active branch: %s", repo.active_branch.name) -# Do not proceed if there are uncommitted changes. -if repo.is_dirty(): - log_fatal_and_exit(" Git status is dirty. Commit locally first.") - - -# Returns base branch for creating diff. -def get_base_branch(): - # if --branch is set, use it as base branch for computing diff. Also check that it's a valid branch. - if args.branch is not None: - base_branch = args.branch - # Check that given branch exists. - for ref in repo.refs: - if ref.name == base_branch: - return base_branch - log_fatal_and_exit(" Branch '%s' does not exist in refs.", base_branch) - else: - # if --branch is not set, use tracking branch as base branch for computing diff. - # If there is no tracking branch, log error and quit. - tracking_branch = repo.active_branch.tracking_branch() - if tracking_branch is None: - log_fatal_and_exit(" Active branch doesn't have a tracking_branch. Please specify base " - " branch for computing diff using --branch flag.") - logger.info(" Using tracking branch as base branch") - return tracking_branch.name - - -# Returns patch name having format (JIRA).(branch name).(patch number).patch. If no jira is -# specified, patch is name (branch name).patch. -def get_patch_name(branch): - if args.jira_id is None: - return branch + ".patch" - - patch_name_prefix = args.jira_id.upper() + "." + branch - return get_patch_name_with_version(patch_name_prefix) - - -# Fetches list of attachments from the jira, deduces next version for the patch and returns final -# patch name. -def get_patch_name_with_version(patch_name_prefix): - # JIRA's rest api is broken wrt to attachments. https://jira.atlassian.com/browse/JRA-27637. - # Using crude way to get list of attachments. - url = "https://issues.apache.org/jira/browse/" + args.jira_id - logger.info("Getting list of attachments for jira %s from %s", args.jira_id, url) - html = requests.get(url) - if html.status_code == 404: - log_fatal_and_exit(" Invalid jira id : %s", args.jira_id) - if html.status_code != 200: - log_fatal_and_exit(" Cannot fetch jira information. Status code %s", html.status_code) - # Iterate over patch names starting from version 1 and return when name is not already used. - content = str(html.content, 'utf-8') - for i in range(1, 1000): - name = patch_name_prefix + "." + ('{0:03d}'.format(i)) + ".patch" - if name not in content: - return name - - -# Validates that patch directory exists, if not, creates it. -def validate_patch_dir(patch_dir): - # Create patch_dir if it doesn't exist. 
- if not os.path.exists(patch_dir): - logger.warn(" Patch directory doesn't exist. Creating it.") - os.mkdir(patch_dir) - else: - # If patch_dir exists, make sure it's a directory. - if not os.path.isdir(patch_dir): - log_fatal_and_exit(" '%s' exists but is not a directory. Specify another directory.", - patch_dir) - - -# Make sure current branch is ahead of base_branch by exactly 1 commit. Quits if -# - base_branch has commits not in current branch -# - current branch is same as base branch -# - current branch is ahead of base_branch by more than 1 commits -def check_diff_between_branches(base_branch): - only_in_base_branch = list(repo.iter_commits("HEAD.." + base_branch)) - only_in_active_branch = list(repo.iter_commits(base_branch + "..HEAD")) - if len(only_in_base_branch) != 0: - log_fatal_and_exit(" '%s' is ahead of current branch by %s commits. Rebase " - "and try again.", base_branch, len(only_in_base_branch)) - if len(only_in_active_branch) == 0: - log_fatal_and_exit(" Current branch is same as '%s'. Exiting...", base_branch) - if len(only_in_active_branch) > 1: - log_fatal_and_exit(" Current branch is ahead of '%s' by %s commits. Squash into single " - "commit and try again.", base_branch, len(only_in_active_branch)) - - -# If ~/.apache-creds is present, load credentials from it otherwise prompt user. -def get_credentials(): - creds = dict() - creds_filepath = os.path.expanduser("~/.apache-creds") - if os.path.exists(creds_filepath): - try: - logger.info(" Reading ~/.apache-creds for Jira and ReviewBoard credentials") - content = subprocess.check_output("openssl enc -aes-256-cbc -d -in " + creds_filepath, - shell=True) - except subprocess.CalledProcessError as e: - log_fatal_and_exit(" Couldn't decrypt ~/.apache-creds file. Exiting..") - creds = json.loads(content) - else: - creds['jira_username'] = input("Jira username:") - creds['jira_password'] = getpass.getpass("Jira password:") - if not args.skip_review_board: - creds['rb_username'] = input("Review Board username:") - creds['rb_password'] = getpass.getpass("Review Board password:") - return creds - - -def attach_patch_to_jira(issue_url, patch_filepath, patch_filename, creds): - # Upload patch to jira using REST API. - headers = {'X-Atlassian-Token': 'no-check'} - files = {'file': (patch_filename, open(patch_filepath, 'rb'), 'text/plain')} - jira_auth = requests.auth.HTTPBasicAuth(creds['jira_username'], creds['jira_password']) - attachment_url = issue_url + "/attachments" - r = requests.post(attachment_url, headers = headers, files = files, auth = jira_auth) - assert_status_code(r, 200, "uploading patch to jira") - - -def get_jira_summary(issue_url): - r = requests.get(issue_url + "?fields=summary") - assert_status_code(r, 200, "fetching jira summary") - return json.loads(r.content)["fields"]["summary"] - - -def get_review_board_id_if_present(issue_url, rb_link_title): - r = requests.get(issue_url + "/remotelink") - assert_status_code(r, 200, "fetching remote links") - links = json.loads(r.content) - for link in links: - if link["object"]["title"] == rb_link_title: - res = re.search("reviews.apache.org/r/([0-9]+)", link["object"]["url"]) - return res.group(1) - return None - - -base_branch = get_base_branch() -# Remove remote repo name from branch name if present. This assumes that we don't use '/' in -# actual branch names. 
-base_branch_without_remote = base_branch.split('/')[-1] -logger.info(" Base branch: %s", base_branch) - -check_diff_between_branches(base_branch) - -patch_dir = os.path.abspath(os.path.expanduser(args.patch_dir)) -logger.info(" Patch directory: %s", patch_dir) -validate_patch_dir(patch_dir) - -patch_filename = get_patch_name(base_branch_without_remote) -logger.info(" Patch name: %s", patch_filename) -patch_filepath = os.path.join(patch_dir, patch_filename) - -diff = git.format_patch(base_branch, stdout = True) -with open(patch_filepath, "w") as f: - f.write(diff.encode('utf8')) - -if args.jira_id is not None: - creds = get_credentials() - issue_url = "https://issues.apache.org/jira/rest/api/2/issue/" + args.jira_id - - attach_patch_to_jira(issue_url, patch_filepath, patch_filename, creds) - - if not args.skip_review_board: - rb_auth = requests.auth.HTTPBasicAuth(creds['rb_username'], creds['rb_password']) - - rb_link_title = "Review Board (" + base_branch_without_remote + ")" - rb_id = get_review_board_id_if_present(issue_url, rb_link_title) - - # If no review board link found, create new review request and add its link to jira. - if rb_id is None: - reviews_url = "https://reviews.apache.org/api/review-requests/" - data = {"repository" : "hbase-git"} - r = requests.post(reviews_url, data = data, auth = rb_auth) - assert_status_code(r, 201, "creating new review request") - review_request = json.loads(r.content)["review_request"] - absolute_url = review_request["absolute_url"] - logger.info(" Created new review request: %s", absolute_url) - - # Use jira summary as review's summary too. - summary = get_jira_summary(issue_url) - # Use commit message as description. - description = repo.head.commit.message - update_draft_data = {"bugs_closed" : [args.jira_id.upper()], "target_groups" : "hbase", - "target_people" : args.reviewers, "summary" : summary, - "description" : description } - draft_url = review_request["links"]["draft"]["href"] - r = requests.put(draft_url, data = update_draft_data, auth = rb_auth) - assert_status_code(r, 200, "updating review draft") - - draft_request = json.loads(r.content)["draft"] - diff_url = draft_request["links"]["draft_diffs"]["href"] - files = {'path' : (patch_filename, open(patch_filepath, 'rb'))} - r = requests.post(diff_url, files = files, auth = rb_auth) - assert_status_code(r, 201, "uploading diff to review draft") - - r = requests.put(draft_url, data = {"public" : True}, auth = rb_auth) - assert_status_code(r, 200, "publishing review request") - - # Add link to review board in the jira. 
- remote_link = json.dumps({'object': {'url': absolute_url, 'title': rb_link_title}}) - jira_auth = requests.auth.HTTPBasicAuth(creds['jira_username'], creds['jira_password']) - r = requests.post(issue_url + "/remotelink", data = remote_link, auth = jira_auth, - headers={'Content-Type':'application/json'}) - else: - logger.info(" Updating existing review board: https://reviews.apache.org/r/%s", rb_id) - draft_url = "https://reviews.apache.org/api/review-requests/" + rb_id + "/draft/" - diff_url = draft_url + "diffs/" - files = {'path' : (patch_filename, open(patch_filepath, 'rb'))} - r = requests.post(diff_url, files = files, auth = rb_auth) - assert_status_code(r, 201, "uploading diff to review draft") - - r = requests.put(draft_url, data = {"public" : True}, auth = rb_auth) - assert_status_code(r, 200, "publishing review request") diff --git a/dev-support/test-util.sh b/dev-support/test-util.sh index 9219bb96606c..b97e2de383fc 100755 --- a/dev-support/test-util.sh +++ b/dev-support/test-util.sh @@ -32,7 +32,7 @@ options: -h Show this message -c Run 'mvn clean' before running the tests -f FILE Run the additional tests listed in the FILE - -u Only run unit tests. Default is to run + -u Only run unit tests. Default is to run unit and integration tests -n N Run each test N times. Default = 1. -s N Print N slowest tests @@ -92,7 +92,7 @@ do r) server=1 ;; - ?) + ?) usage exit 1 esac @@ -175,7 +175,7 @@ done # Print a report of the slowest running tests if [ ! -z $showSlowest ]; then - + testNameIdx=0 for (( i = 0; i < ${#test[@]}; i++ )) do diff --git a/dev-support/zombie-detector.sh b/dev-support/zombie-detector.sh index df4c197ce4df..3a2708a14adf 100755 --- a/dev-support/zombie-detector.sh +++ b/dev-support/zombie-detector.sh @@ -29,7 +29,7 @@ #set -x # printenv -### Setup some variables. +### Setup some variables. bindir=$(dirname $0) # This key is set by our surefire configuration up in the main pom.xml diff --git a/hbase-annotations/pom.xml b/hbase-annotations/pom.xml index 0e1910b0fb2a..99389c91a2ea 100644 --- a/hbase-annotations/pom.xml +++ b/hbase-annotations/pom.xml @@ -1,4 +1,4 @@ - + 4.0.0 - hbase org.apache.hbase - 2.5.0-SNAPSHOT + hbase + ${revision} .. diff --git a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/ClientTests.java b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/ClientTests.java index c2510efb026a..d9bae8490637 100644 --- a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/ClientTests.java +++ b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/ClientTests.java @@ -15,13 +15,11 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase.testclassification; /** * Tag a test as related to the client. This tests the hbase-client package and all of the client * tests in hbase-server. 
- * * @see org.apache.hadoop.hbase.testclassification.ClientTests * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests * @see org.apache.hadoop.hbase.testclassification.FilterTests diff --git a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/CoprocessorTests.java b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/CoprocessorTests.java index 4341becbd68a..a168adec08af 100644 --- a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/CoprocessorTests.java +++ b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/CoprocessorTests.java @@ -15,12 +15,10 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase.testclassification; /** * Tag a test as related to coprocessors. - * * @see org.apache.hadoop.hbase.testclassification.ClientTests * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests * @see org.apache.hadoop.hbase.testclassification.FilterTests diff --git a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/FilterTests.java b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/FilterTests.java index a91033fa2d38..84f346baaea2 100644 --- a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/FilterTests.java +++ b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/FilterTests.java @@ -15,12 +15,10 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase.testclassification; /** * Tag a test as related to the {@code org.apache.hadoop.hbase.filter} package. - * * @see org.apache.hadoop.hbase.testclassification.ClientTests * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests * @see org.apache.hadoop.hbase.testclassification.FilterTests diff --git a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/FlakeyTests.java b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/FlakeyTests.java index 22fbc1b724ff..c23bfa298b36 100644 --- a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/FlakeyTests.java +++ b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/FlakeyTests.java @@ -15,12 +15,10 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase.testclassification; /** * Tag a test as failing commonly on public build infrastructure. - * * @see org.apache.hadoop.hbase.testclassification.ClientTests * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests * @see org.apache.hadoop.hbase.testclassification.FilterTests diff --git a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/IOTests.java b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/IOTests.java index c2375ca4e5cb..8eee0e6ae4b9 100644 --- a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/IOTests.java +++ b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/IOTests.java @@ -15,13 +15,11 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase.testclassification; /** * Tag a test as related to the {@code org.apache.hadoop.hbase.io} package. Things like HFile and * the like. 
- * * @see org.apache.hadoop.hbase.testclassification.ClientTests * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests * @see org.apache.hadoop.hbase.testclassification.FilterTests diff --git a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/IntegrationTests.java b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/IntegrationTests.java index 6bc712e270cf..4e555b73fedb 100644 --- a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/IntegrationTests.java +++ b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/IntegrationTests.java @@ -15,23 +15,20 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase.testclassification; /** * Tag a test as 'integration/system' test, meaning that the test class has the following * characteristics: *
<ul>
- *  <li> Possibly takes hours to complete</li>
- *  <li> Can be run on a mini cluster or an actual cluster</li>
- *  <li> Can make changes to the given cluster (starting stopping daemons, etc)</li>
- *  <li> Should not be run in parallel of other integration tests</li>
+ * <li>Possibly takes hours to complete</li>
+ * <li>Can be run on a mini cluster or an actual cluster</li>
+ * <li>Can make changes to the given cluster (starting stopping daemons, etc)</li>
+ * <li>Should not be run in parallel of other integration tests</li>
 * </ul>
- * - * Integration / System tests should have a class name starting with "IntegrationTest", and - * should be annotated with @Category(IntegrationTests.class). Integration tests can be run - * using the IntegrationTestsDriver class or from mvn verify. - * + * Integration / System tests should have a class name starting with "IntegrationTest", and should + * be annotated with @Category(IntegrationTests.class). Integration tests can be run using the + * IntegrationTestsDriver class or from mvn verify. * @see SmallTests * @see MediumTests * @see LargeTests diff --git a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/LargeTests.java b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/LargeTests.java index aa183d5607d7..b47e5bab9a46 100644 --- a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/LargeTests.java +++ b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/LargeTests.java @@ -1,4 +1,4 @@ -/** +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,21 +15,19 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase.testclassification; /** * Tagging a test as 'large', means that the test class has the following characteristics: *
<ul>
- *  <li>it can executed in an isolated JVM (Tests can however be executed in different JVM on the
- *  same machine simultaneously so be careful two concurrent tests end up fighting over ports
- *  or other singular resources).</li>
- *  <li>ideally, the whole large test-suite/class, no matter how many or how few test methods it
- *  has, will run in last less than three minutes</li>
- *  <li>No large test can take longer than ten minutes; it will be killed. See 'Integeration Tests'
- *  if you need to run tests longer than this.</li>
+ * <li>it can executed in an isolated JVM (Tests can however be executed in different JVM on the
+ * same machine simultaneously so be careful two concurrent tests end up fighting over ports or
+ * other singular resources).</li>
+ * <li>ideally, the whole large test-suite/class, no matter how many or how few test methods it has,
+ * will run in last less than three minutes</li>
+ * <li>No large test can take longer than ten minutes; it will be killed. See 'Integeration Tests'
+ * if you need to run tests longer than this.</li>
 * </ul>
- * * @see SmallTests * @see MediumTests * @see IntegrationTests diff --git a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MapReduceTests.java b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MapReduceTests.java index 4b49da4e4dc0..0e68ab3c0340 100644 --- a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MapReduceTests.java +++ b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MapReduceTests.java @@ -15,12 +15,10 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase.testclassification; /** * Tag a test as related to mapred or mapreduce. - * * @see org.apache.hadoop.hbase.testclassification.ClientTests * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests * @see org.apache.hadoop.hbase.testclassification.FilterTests diff --git a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MasterTests.java b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MasterTests.java index e837f49a268a..5dcf51b27e59 100644 --- a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MasterTests.java +++ b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MasterTests.java @@ -15,12 +15,10 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase.testclassification; /** * Tag a test as related to the master. - * * @see org.apache.hadoop.hbase.testclassification.ClientTests * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests * @see org.apache.hadoop.hbase.testclassification.FilterTests diff --git a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MediumTests.java b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MediumTests.java index 0f8055b5bab0..d1f836ec0049 100644 --- a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MediumTests.java +++ b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MediumTests.java @@ -1,4 +1,4 @@ -/** +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,21 +15,18 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase.testclassification; /** * Tagging a test as 'medium' means that the test class has the following characteristics: *
<ul>
- *  <li>it can be executed in an isolated JVM (Tests can however be executed in different JVMs on
- *  the same machine simultaneously so be careful two concurrent tests end up fighting over ports
- *  or other singular resources).</li>
- *  <li>ideally, the whole medium test-suite/class, no matter how many or how few test methods it
- *  has, will complete in 50 seconds; otherwise make it a 'large' test.</li>
+ * <li>it can be executed in an isolated JVM (Tests can however be executed in different JVMs on the
+ * same machine simultaneously so be careful two concurrent tests end up fighting over ports or
+ * other singular resources).</li>
+ * <li>ideally, the whole medium test-suite/class, no matter how many or how few test methods it
+ * has, will complete in 50 seconds; otherwise make it a 'large' test.</li>
 * </ul>
- * - * Use it for tests that cannot be tagged as 'small'. Use it when you need to start up a cluster. - * + * Use it for tests that cannot be tagged as 'small'. Use it when you need to start up a cluster. * @see SmallTests * @see LargeTests * @see IntegrationTests diff --git a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MetricsTests.java b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MetricsTests.java index 59962a74c280..27beaacf963e 100644 --- a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MetricsTests.java +++ b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MetricsTests.java @@ -15,7 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase.testclassification; /** diff --git a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MiscTests.java b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MiscTests.java index 2759bfc96df7..695042e801bf 100644 --- a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MiscTests.java +++ b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MiscTests.java @@ -15,12 +15,10 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase.testclassification; /** * Tag a test as not easily falling into any of the below categories. - * * @see org.apache.hadoop.hbase.testclassification.ClientTests * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests * @see org.apache.hadoop.hbase.testclassification.FilterTests diff --git a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RPCTests.java b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RPCTests.java index 4edb9bf031d2..929bd6487edf 100644 --- a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RPCTests.java +++ b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RPCTests.java @@ -15,12 +15,10 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase.testclassification; /** * Tag a test as related to RPC. - * * @see org.apache.hadoop.hbase.testclassification.ClientTests * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests * @see org.apache.hadoop.hbase.testclassification.FilterTests diff --git a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RegionServerTests.java b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RegionServerTests.java index 0f03b761fcb1..3439afa76eba 100644 --- a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RegionServerTests.java +++ b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RegionServerTests.java @@ -15,12 +15,10 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase.testclassification; /** * Tag a test as related to the regionserver. 
- * * @see org.apache.hadoop.hbase.testclassification.ClientTests * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests * @see org.apache.hadoop.hbase.testclassification.FilterTests diff --git a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/ReplicationTests.java b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/ReplicationTests.java index 8b8be4de8125..df606c960c25 100644 --- a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/ReplicationTests.java +++ b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/ReplicationTests.java @@ -15,12 +15,10 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase.testclassification; /** * Tag a test as related to replication. - * * @see org.apache.hadoop.hbase.testclassification.ClientTests * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests * @see org.apache.hadoop.hbase.testclassification.FilterTests diff --git a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RestTests.java b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RestTests.java index e7d1d1d4c88c..a648b4c39e03 100644 --- a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RestTests.java +++ b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RestTests.java @@ -15,12 +15,10 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase.testclassification; /** * Tag a test as related to the REST capability of HBase. - * * @see org.apache.hadoop.hbase.testclassification.ClientTests * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests * @see org.apache.hadoop.hbase.testclassification.FilterTests diff --git a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/SecurityTests.java b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/SecurityTests.java index 5263d467cbee..a4e55ad3aba0 100644 --- a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/SecurityTests.java +++ b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/SecurityTests.java @@ -15,12 +15,10 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase.testclassification; /** * Tag a test as related to security. - * * @see org.apache.hadoop.hbase.testclassification.ClientTests * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests * @see org.apache.hadoop.hbase.testclassification.FilterTests diff --git a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/SmallTests.java b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/SmallTests.java index 80e6c9d24209..64d2bce381b6 100644 --- a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/SmallTests.java +++ b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/SmallTests.java @@ -1,4 +1,4 @@ -/** +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -20,14 +20,14 @@ /** * Tagging a test as 'small' means that the test class has the following characteristics: *
<ul>
- *  <li>it can be run simultaneously with other small tests all in the same JVM</li>
- *  <li>ideally, the WHOLE implementing test-suite/class, no matter how many or how few test
- *  methods it has, should take less than 15 seconds to complete</li>
- *  <li>it does not use a cluster</li>
+ * <li>it can be run simultaneously with other small tests all in the same JVM</li>
+ * <li>ideally, the WHOLE implementing test-suite/class, no matter how many or how few test methods
+ * it has, should take less than 15 seconds to complete</li>
+ * <li>it does not use a cluster</li>
 * </ul>
- * * @see MediumTests * @see LargeTests * @see IntegrationTests */ -public interface SmallTests {} +public interface SmallTests { +} diff --git a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/VerySlowMapReduceTests.java b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/VerySlowMapReduceTests.java index efc8d5ddc84c..d1f433b9719d 100644 --- a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/VerySlowMapReduceTests.java +++ b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/VerySlowMapReduceTests.java @@ -15,13 +15,11 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase.testclassification; /** - * Tag a test as related to mapreduce and taking longer than 5 minutes to run on public build + * Tag a test as related to mapreduce and taking longer than 5 minutes to run on public build * infrastructure. - * * @see org.apache.hadoop.hbase.testclassification.ClientTests * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests * @see org.apache.hadoop.hbase.testclassification.FilterTests diff --git a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/VerySlowRegionServerTests.java b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/VerySlowRegionServerTests.java index 85507de5ad4d..f556979e5b6a 100644 --- a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/VerySlowRegionServerTests.java +++ b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/VerySlowRegionServerTests.java @@ -15,13 +15,11 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase.testclassification; /** * Tag a test as region tests which takes longer than 5 minutes to run on public build * infrastructure. - * * @see org.apache.hadoop.hbase.testclassification.ClientTests * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests * @see org.apache.hadoop.hbase.testclassification.FilterTests diff --git a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/ZKTests.java b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/ZKTests.java index 86aa6bdc85e6..9fa0579ed47e 100644 --- a/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/ZKTests.java +++ b/hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/ZKTests.java @@ -15,7 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ - package org.apache.hadoop.hbase.testclassification; /** diff --git a/hbase-archetypes/hbase-archetype-builder/pom.xml b/hbase-archetypes/hbase-archetype-builder/pom.xml index 8974db52aa25..f593480e682d 100644 --- a/hbase-archetypes/hbase-archetype-builder/pom.xml +++ b/hbase-archetypes/hbase-archetype-builder/pom.xml @@ -1,6 +1,5 @@ - - + + hbase-client__copy-src-to-build-archetype-subdir - generate-resources copy-resources + generate-resources /${project.basedir}/../${hbase-client.dir}/${build.archetype.subdir} @@ -76,29 +75,30 @@ hbase-client__copy-pom-to-temp-for-xslt-processing - generate-resources copy-resources + generate-resources /${project.basedir}/../${hbase-client.dir}/${temp.exemplar.subdir} /${project.basedir}/../${hbase-client.dir} - true + true + pom.xml - + hbase-shaded-client__copy-src-to-build-archetype-subdir - generate-resources copy-resources + generate-resources /${project.basedir}/../${hbase-shaded-client.dir}/${build.archetype.subdir} @@ -113,20 +113,21 @@ hbase-shaded-client__copy-pom-to-temp-for-xslt-processing - generate-resources copy-resources + generate-resources /${project.basedir}/../${hbase-shaded-client.dir}/${temp.exemplar.subdir} /${project.basedir}/../${hbase-shaded-client.dir} - true + true + pom.xml - + @@ -137,10 +138,10 @@ using xml-maven-plugin for xslt transformation, below. --> hbase-client-ARCHETYPE__copy-pom-to-temp-for-xslt-processing - prepare-package copy-resources + prepare-package /${project.basedir}/../${hbase-client.dir}/${temp.archetype.subdir} @@ -149,16 +150,16 @@ pom.xml - + hbase-shaded-client-ARCHETYPE__copy-pom-to-temp-for-xslt-processing - prepare-package copy-resources + prepare-package /${project.basedir}/../${hbase-shaded-client.dir}/${temp.archetype.subdir} @@ -167,7 +168,7 @@ pom.xml - + @@ -178,14 +179,15 @@ org.codehaus.mojo xml-maven-plugin + ${xml.maven.version} modify-exemplar-pom-files-via-xslt - process-resources transform + process-resources @@ -212,10 +214,10 @@ prevent warnings when project is generated from archetype. --> modify-archetype-pom-files-via-xslt - package transform + package @@ -242,32 +244,32 @@ - maven-antrun-plugin + maven-antrun-plugin make-scripts-executable - process-resources run + process-resources - - + + run-createArchetypes-script - compile run + compile - - - + + + run-installArchetypes-script - install run + install - - - + + + diff --git a/hbase-archetypes/hbase-client-project/pom.xml b/hbase-archetypes/hbase-client-project/pom.xml index 7662e73fe482..ca08510b0f01 100644 --- a/hbase-archetypes/hbase-client-project/pom.xml +++ b/hbase-archetypes/hbase-client-project/pom.xml @@ -1,8 +1,5 @@ - + 4.0.0 - hbase-archetypes org.apache.hbase - 2.5.0-SNAPSHOT + hbase-archetypes + ${revision} .. 
hbase-client-project @@ -64,13 +61,23 @@ runtime - org.slf4j - slf4j-log4j12 + org.apache.logging.log4j + log4j-api + runtime + + + org.apache.logging.log4j + log4j-core + runtime + + + org.apache.logging.log4j + log4j-slf4j-impl runtime - log4j - log4j + org.apache.logging.log4j + log4j-1.2-api runtime diff --git a/hbase-archetypes/hbase-client-project/src/main/java/org/apache/hbase/archetypes/exemplars/client/HelloHBase.java b/hbase-archetypes/hbase-client-project/src/main/java/org/apache/hbase/archetypes/exemplars/client/HelloHBase.java index 5164ab21716c..5eb5081d435b 100644 --- a/hbase-archetypes/hbase-client-project/src/main/java/org/apache/hbase/archetypes/exemplars/client/HelloHBase.java +++ b/hbase-archetypes/hbase-client-project/src/main/java/org/apache/hbase/archetypes/exemplars/client/HelloHBase.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -38,19 +37,17 @@ import org.apache.hadoop.hbase.util.Bytes; /** - * Successful running of this application requires access to an active instance - * of HBase. For install instructions for a standalone instance of HBase, please - * refer to https://hbase.apache.org/book.html#quickstart + * Successful running of this application requires access to an active instance of HBase. For + * install instructions for a standalone instance of HBase, please refer to + * https://hbase.apache.org/book.html#quickstart */ public final class HelloHBase { - protected static final String MY_NAMESPACE_NAME = "myTestNamespace"; + static final String MY_NAMESPACE_NAME = "myTestNamespace"; static final TableName MY_TABLE_NAME = TableName.valueOf("myTestTable"); static final byte[] MY_COLUMN_FAMILY_NAME = Bytes.toBytes("cf"); - static final byte[] MY_FIRST_COLUMN_QUALIFIER - = Bytes.toBytes("myFirstColumn"); - static final byte[] MY_SECOND_COLUMN_QUALIFIER - = Bytes.toBytes("mySecondColumn"); + static final byte[] MY_FIRST_COLUMN_QUALIFIER = Bytes.toBytes("myFirstColumn"); + static final byte[] MY_SECOND_COLUMN_QUALIFIER = Bytes.toBytes("mySecondColumn"); static final byte[] MY_ROW_ID = Bytes.toBytes("rowId01"); // Private constructor included here to avoid checkstyle warnings @@ -61,21 +58,21 @@ public static void main(final String[] args) throws IOException { final boolean deleteAllAtEOJ = true; /** - * ConnectionFactory#createConnection() automatically looks for - * hbase-site.xml (HBase configuration parameters) on the system's - * CLASSPATH, to enable creation of Connection to HBase via ZooKeeper. + * ConnectionFactory#createConnection() automatically looks for hbase-site.xml (HBase + * configuration parameters) on the system's CLASSPATH, to enable creation of Connection to + * HBase via ZooKeeper. */ try (Connection connection = ConnectionFactory.createConnection(); - Admin admin = connection.getAdmin()) { + Admin admin = connection.getAdmin()) { admin.getClusterStatus(); // assure connection successfully established - System.out.println("\n*** Hello HBase! -- Connection has been " - + "established via ZooKeeper!!\n"); + System.out + .println("\n*** Hello HBase! 
-- Connection has been " + "established via ZooKeeper!!\n"); createNamespaceAndTable(admin); System.out.println("Getting a Table object for [" + MY_TABLE_NAME - + "] with which to perform CRUD operations in HBase."); + + "] with which to perform CRUD operations in HBase."); try (Table table = connection.getTable(MY_TABLE_NAME)) { putRowToTable(table); @@ -93,9 +90,8 @@ public static void main(final String[] args) throws IOException { } /** - * Invokes Admin#createNamespace and Admin#createTable to create a namespace - * with a table that has one column-family. - * + * Invokes Admin#createNamespace and Admin#createTable to create a namespace with a table that has + * one column-family. * @param admin Standard Admin object * @throws IOException If IO problem encountered */ @@ -104,48 +100,38 @@ static void createNamespaceAndTable(final Admin admin) throws IOException { if (!namespaceExists(admin, MY_NAMESPACE_NAME)) { System.out.println("Creating Namespace [" + MY_NAMESPACE_NAME + "]."); - admin.createNamespace(NamespaceDescriptor - .create(MY_NAMESPACE_NAME).build()); + admin.createNamespace(NamespaceDescriptor.create(MY_NAMESPACE_NAME).build()); } if (!admin.tableExists(MY_TABLE_NAME)) { System.out.println("Creating Table [" + MY_TABLE_NAME.getNameAsString() - + "], with one Column Family [" - + Bytes.toString(MY_COLUMN_FAMILY_NAME) + "]."); + + "], with one Column Family [" + Bytes.toString(MY_COLUMN_FAMILY_NAME) + "]."); TableDescriptor desc = TableDescriptorBuilder.newBuilder(MY_TABLE_NAME) - .setColumnFamily(ColumnFamilyDescriptorBuilder.of(MY_COLUMN_FAMILY_NAME)) - .build(); + .setColumnFamily(ColumnFamilyDescriptorBuilder.of(MY_COLUMN_FAMILY_NAME)).build(); admin.createTable(desc); } } /** - * Invokes Table#put to store a row (with two new columns created 'on the - * fly') into the table. - * + * Invokes Table#put to store a row (with two new columns created 'on the fly') into the table. * @param table Standard Table object (used for CRUD operations). * @throws IOException If IO problem encountered */ static void putRowToTable(final Table table) throws IOException { - table.put(new Put(MY_ROW_ID).addColumn(MY_COLUMN_FAMILY_NAME, - MY_FIRST_COLUMN_QUALIFIER, - Bytes.toBytes("Hello")).addColumn(MY_COLUMN_FAMILY_NAME, - MY_SECOND_COLUMN_QUALIFIER, - Bytes.toBytes("World!"))); - - System.out.println("Row [" + Bytes.toString(MY_ROW_ID) - + "] was put into Table [" - + table.getName().getNameAsString() + "] in HBase;\n" - + " the row's two columns (created 'on the fly') are: [" - + Bytes.toString(MY_COLUMN_FAMILY_NAME) + ":" - + Bytes.toString(MY_FIRST_COLUMN_QUALIFIER) - + "] and [" + Bytes.toString(MY_COLUMN_FAMILY_NAME) + ":" - + Bytes.toString(MY_SECOND_COLUMN_QUALIFIER) + "]"); + table.put(new Put(MY_ROW_ID) + .addColumn(MY_COLUMN_FAMILY_NAME, MY_FIRST_COLUMN_QUALIFIER, Bytes.toBytes("Hello")) + .addColumn(MY_COLUMN_FAMILY_NAME, MY_SECOND_COLUMN_QUALIFIER, Bytes.toBytes("World!"))); + + System.out.println("Row [" + Bytes.toString(MY_ROW_ID) + "] was put into Table [" + + table.getName().getNameAsString() + "] in HBase;\n" + + " the row's two columns (created 'on the fly') are: [" + + Bytes.toString(MY_COLUMN_FAMILY_NAME) + ":" + Bytes.toString(MY_FIRST_COLUMN_QUALIFIER) + + "] and [" + Bytes.toString(MY_COLUMN_FAMILY_NAME) + ":" + + Bytes.toString(MY_SECOND_COLUMN_QUALIFIER) + "]"); } /** * Invokes Table#get and prints out the contents of the retrieved row. 
- * * @param table Standard Table object * @throws IOException If IO problem encountered */ @@ -153,38 +139,32 @@ static void getAndPrintRowContents(final Table table) throws IOException { Result row = table.get(new Get(MY_ROW_ID)); - System.out.println("Row [" + Bytes.toString(row.getRow()) - + "] was retrieved from Table [" - + table.getName().getNameAsString() - + "] in HBase, with the following content:"); + System.out.println("Row [" + Bytes.toString(row.getRow()) + "] was retrieved from Table [" + + table.getName().getNameAsString() + "] in HBase, with the following content:"); - for (Entry> colFamilyEntry - : row.getNoVersionMap().entrySet()) { + for (Entry> colFamilyEntry : row.getNoVersionMap() + .entrySet()) { String columnFamilyName = Bytes.toString(colFamilyEntry.getKey()); - System.out.println(" Columns in Column Family [" + columnFamilyName - + "]:"); + System.out.println(" Columns in Column Family [" + columnFamilyName + "]:"); - for (Entry columnNameAndValueMap - : colFamilyEntry.getValue().entrySet()) { + for (Entry columnNameAndValueMap : colFamilyEntry.getValue().entrySet()) { System.out.println(" Value of Column [" + columnFamilyName + ":" - + Bytes.toString(columnNameAndValueMap.getKey()) + "] == " - + Bytes.toString(columnNameAndValueMap.getValue())); + + Bytes.toString(columnNameAndValueMap.getKey()) + "] == " + + Bytes.toString(columnNameAndValueMap.getValue())); } } } /** * Checks to see whether a namespace exists. - * - * @param admin Standard Admin object + * @param admin Standard Admin object * @param namespaceName Name of namespace * @return true If namespace exists * @throws IOException If IO problem encountered */ - static boolean namespaceExists(final Admin admin, final String namespaceName) - throws IOException { + static boolean namespaceExists(final Admin admin, final String namespaceName) throws IOException { try { admin.getNamespaceDescriptor(namespaceName); } catch (NamespaceNotFoundException e) { @@ -195,28 +175,24 @@ static boolean namespaceExists(final Admin admin, final String namespaceName) /** * Invokes Table#delete to delete test data (i.e. the row) - * * @param table Standard Table object * @throws IOException If IO problem is encountered */ static void deleteRow(final Table table) throws IOException { - System.out.println("Deleting row [" + Bytes.toString(MY_ROW_ID) - + "] from Table [" - + table.getName().getNameAsString() + "]."); + System.out.println("Deleting row [" + Bytes.toString(MY_ROW_ID) + "] from Table [" + + table.getName().getNameAsString() + "]."); table.delete(new Delete(MY_ROW_ID)); } /** - * Invokes Admin#disableTable, Admin#deleteTable, and Admin#deleteNamespace to - * disable/delete Table and delete Namespace. - * + * Invokes Admin#disableTable, Admin#deleteTable, and Admin#deleteNamespace to disable/delete + * Table and delete Namespace. * @param admin Standard Admin object * @throws IOException If IO problem is encountered */ static void deleteNamespaceAndTable(final Admin admin) throws IOException { if (admin.tableExists(MY_TABLE_NAME)) { - System.out.println("Disabling/deleting Table [" - + MY_TABLE_NAME.getNameAsString() + "]."); + System.out.println("Disabling/deleting Table [" + MY_TABLE_NAME.getNameAsString() + "]."); admin.disableTable(MY_TABLE_NAME); // Disable a table before deleting it. 
admin.deleteTable(MY_TABLE_NAME); } diff --git a/hbase-archetypes/hbase-client-project/src/main/resources/log4j.properties b/hbase-archetypes/hbase-client-project/src/main/resources/log4j.properties deleted file mode 100644 index 0b01e57e6ea6..000000000000 --- a/hbase-archetypes/hbase-client-project/src/main/resources/log4j.properties +++ /dev/null @@ -1,121 +0,0 @@ -# Licensed to the Apache Software Foundation (ASF) under one -# or more contributor license agreements. See the NOTICE file -# distributed with this work for additional information -# regarding copyright ownership. The ASF licenses this file -# to you under the Apache License, Version 2.0 (the -# "License"); you may not use this file except in compliance -# with the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -# Define some default values that can be overridden by system properties -hbase.root.logger=INFO,console -hbase.security.logger=INFO,console -hbase.log.dir=. -hbase.log.file=hbase.log - -# Define the root logger to the system property "hbase.root.logger". -log4j.rootLogger=${hbase.root.logger} - -# Logging Threshold -log4j.threshold=ALL - -# -# Daily Rolling File Appender -# -log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender -log4j.appender.DRFA.File=${hbase.log.dir}/${hbase.log.file} - -# Rollver at midnight -log4j.appender.DRFA.DatePattern=.yyyy-MM-dd - -# 30-day backup -#log4j.appender.DRFA.MaxBackupIndex=30 -log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout - -# Pattern format: Date LogLevel LoggerName LogMessage -log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n - -# Rolling File Appender properties -hbase.log.maxfilesize=256MB -hbase.log.maxbackupindex=20 - -# Rolling File Appender -log4j.appender.RFA=org.apache.log4j.RollingFileAppender -log4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file} - -log4j.appender.RFA.MaxFileSize=${hbase.log.maxfilesize} -log4j.appender.RFA.MaxBackupIndex=${hbase.log.maxbackupindex} - -log4j.appender.RFA.layout=org.apache.log4j.PatternLayout -log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n - -# -# Security audit appender -# -hbase.security.log.file=SecurityAuth.audit -hbase.security.log.maxfilesize=256MB -hbase.security.log.maxbackupindex=20 -log4j.appender.RFAS=org.apache.log4j.RollingFileAppender -log4j.appender.RFAS.File=${hbase.log.dir}/${hbase.security.log.file} -log4j.appender.RFAS.MaxFileSize=${hbase.security.log.maxfilesize} -log4j.appender.RFAS.MaxBackupIndex=${hbase.security.log.maxbackupindex} -log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout -log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n -log4j.category.SecurityLogger=${hbase.security.logger} -log4j.additivity.SecurityLogger=false -#log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE -#log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.visibility.VisibilityController=TRACE - -# -# Null Appender -# -log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender - -# -# console -# Add "console" to rootlogger above if you want to use this -# 
-log4j.appender.console=org.apache.log4j.ConsoleAppender -log4j.appender.console.target=System.err -log4j.appender.console.layout=org.apache.log4j.PatternLayout -log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n - -# Custom Logging levels - -log4j.logger.org.apache.zookeeper=INFO -#log4j.logger.org.apache.hadoop.fs.FSNamesystem=DEBUG -log4j.logger.org.apache.hadoop.hbase=INFO -# Make these two classes INFO-level. Make them DEBUG to see more zk debug. -log4j.logger.org.apache.hadoop.hbase.zookeeper.ZKUtil=INFO -log4j.logger.org.apache.hadoop.hbase.zookeeper.ZKWatcher=INFO -#log4j.logger.org.apache.hadoop.dfs=DEBUG -# Set this class to log INFO only otherwise its OTT -# Enable this to get detailed connection error/retry logging. -# log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation=TRACE - - -# Uncomment this line to enable tracing on _every_ RPC call (this can be a lot of output) -#log4j.logger.org.apache.hadoop.ipc.HBaseServer.trace=DEBUG - -# Uncomment the below if you want to remove logging of client region caching' -# and scan of hbase:meta messages -# log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation=INFO -# log4j.logger.org.apache.hadoop.hbase.client.MetaScanner=INFO - -# EventCounter -# Add "EventCounter" to rootlogger if you want to use this -# Uncomment the line below to add EventCounter information -# log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter - -# Prevent metrics subsystem start/stop messages (HBASE-17722) -log4j.logger.org.apache.hadoop.metrics2.impl.MetricsConfig=WARN -log4j.logger.org.apache.hadoop.metrics2.impl.MetricsSinkAdapter=WARN -log4j.logger.org.apache.hadoop.metrics2.impl.MetricsSystemImpl=WARN diff --git a/hbase-archetypes/hbase-client-project/src/main/resources/log4j2.properties b/hbase-archetypes/hbase-client-project/src/main/resources/log4j2.properties new file mode 100644 index 000000000000..5ffcfda24176 --- /dev/null +++ b/hbase-archetypes/hbase-client-project/src/main/resources/log4j2.properties @@ -0,0 +1,137 @@ +#/** +# * Licensed to the Apache Software Foundation (ASF) under one +# * or more contributor license agreements. See the NOTICE file +# * distributed with this work for additional information +# * regarding copyright ownership. The ASF licenses this file +# * to you under the Apache License, Version 2.0 (the +# * "License"); you may not use this file except in compliance +# * with the License. You may obtain a copy of the License at +# * +# * http://www.apache.org/licenses/LICENSE-2.0 +# * +# * Unless required by applicable law or agreed to in writing, software +# * distributed under the License is distributed on an "AS IS" BASIS, +# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# * See the License for the specific language governing permissions and +# * limitations under the License. 
+# */ + +status = warn +dest = err +name = PropertiesConfig + +# Console appender +appender.console.type = Console +appender.console.target = SYSTEM_ERR +appender.console.name = console +appender.console.layout.type = PatternLayout +appender.console.layout.pattern = %d{ISO8601} %-5p [%t] %c{2}: %.1000m%n + +# Daily Rolling File Appender +appender.DRFA.type = RollingFile +appender.DRFA.name = DRFA +appender.DRFA.fileName = ${sys:hbase.log.dir:-.}/${sys:hbase.log.file:-hbase.log} +appender.DRFA.filePattern = ${sys:hbase.log.dir:-.}/${sys:hbase.log.file:-hbase.log}.%d{yyyy-MM-dd} +appender.DRFA.createOnDemand = true +appender.DRFA.layout.type = PatternLayout +appender.DRFA.layout.pattern = %d{ISO8601} %-5p [%t] %c{2}: %.1000m%n +appender.DRFA.policies.type = Policies +appender.DRFA.policies.time.type = TimeBasedTriggeringPolicy +appender.DRFA.policies.time.interval = 1 +appender.DRFA.policies.time.modulate = true +appender.DRFA.policies.size.type = SizeBasedTriggeringPolicy +appender.DRFA.policies.size.size = ${sys:hbase.log.maxfilesize:-256MB} +appender.DRFA.strategy.type = DefaultRolloverStrategy +appender.DRFA.strategy.max = ${sys:hbase.log.maxbackupindex:-20} + +# Rolling File Appender +appender.RFA.type = RollingFile +appender.RFA.name = RFA +appender.RFA.fileName = ${sys:hbase.log.dir:-.}/${sys:hbase.log.file:-hbase.log} +appender.RFA.filePattern = ${sys:hbase.log.dir:-.}/${sys:hbase.log.file:-hbase.log}.%i +appender.RFA.createOnDemand = true +appender.RFA.layout.type = PatternLayout +appender.RFA.layout.pattern = %d{ISO8601} %-5p [%t] %c{2}: %.1000m%n +appender.RFA.policies.type = Policies +appender.RFA.policies.size.type = SizeBasedTriggeringPolicy +appender.RFA.policies.size.size = ${sys:hbase.log.maxfilesize:-256MB} +appender.RFA.strategy.type = DefaultRolloverStrategy +appender.RFA.strategy.max = ${sys:hbase.log.maxbackupindex:-20} + +# Security Audit Appender +appender.RFAS.type = RollingFile +appender.RFAS.name = RFAS +appender.RFAS.fileName = ${sys:hbase.log.dir:-.}/${sys:hbase.security.log.file:-SecurityAuth.audit} +appender.RFAS.filePattern = ${sys:hbase.log.dir:-.}/${sys:hbase.security.log.file:-SecurityAuth.audit}.%i +appender.RFAS.createOnDemand = true +appender.RFAS.layout.type = PatternLayout +appender.RFAS.layout.pattern = %d{ISO8601} %-5p [%t] %c{2}: %.1000m%n +appender.RFAS.policies.type = Policies +appender.RFAS.policies.size.type = SizeBasedTriggeringPolicy +appender.RFAS.policies.size.size = ${sys:hbase.security.log.maxfilesize:-256MB} +appender.RFAS.strategy.type = DefaultRolloverStrategy +appender.RFAS.strategy.max = ${sys:hbase.security.log.maxbackupindex:-20} + +# Http Access Log RFA, uncomment this if you want an http access.log +# appender.AccessRFA.type = RollingFile +# appender.AccessRFA.name = AccessRFA +# appender.AccessRFA.fileName = /var/log/hbase/access.log +# appender.AccessRFA.filePattern = /var/log/hbase/access.log.%i +# appender.AccessRFA.createOnDemand = true +# appender.AccessRFA.layout.type = PatternLayout +# appender.AccessRFA.layout.pattern = %m%n +# appender.AccessRFA.policies.type = Policies +# appender.AccessRFA.policies.size.type = SizeBasedTriggeringPolicy +# appender.AccessRFA.policies.size.size = 200MB +# appender.AccessRFA.strategy.type = DefaultRolloverStrategy +# appender.AccessRFA.strategy.max = 10 + +# Null Appender +appender.NullAppender.type = Null +appender.NullAppender.name = NullAppender + +rootLogger = ${sys:hbase.root.logger:-INFO,console} + +logger.SecurityLogger.name = SecurityLogger +logger.SecurityLogger = 
${sys:hbase.security.logger:-INFO,console} +logger.SecurityLogger.additivity = false + +# Custom Logging levels +# logger.zookeeper.name = org.apache.zookeeper +# logger.zookeeper.level = ERROR + +# logger.FSNamesystem.name = org.apache.hadoop.fs.FSNamesystem +# logger.FSNamesystem.level = DEBUG + +# logger.hbase.name = org.apache.hadoop.hbase +# logger.hbase.level = DEBUG + +# logger.META.name = org.apache.hadoop.hbase.META +# logger.META.level = DEBUG + +# Make these two classes below DEBUG to see more zk debug. +# logger.ZKUtil.name = org.apache.hadoop.hbase.zookeeper.ZKUtil +# logger.ZKUtil.level = DEBUG + +# logger.ZKWatcher.name = org.apache.hadoop.hbase.zookeeper.ZKWatcher +# logger.ZKWatcher.level = DEBUG + +# logger.dfs.name = org.apache.hadoop.dfs +# logger.dfs.level = DEBUG + +# Prevent metrics subsystem start/stop messages (HBASE-17722) +logger.MetricsConfig.name = org.apache.hadoop.metrics2.impl.MetricsConfig +logger.MetricsConfig.level = WARN + +logger.MetricsSinkAdapte.name = org.apache.hadoop.metrics2.impl.MetricsSinkAdapter +logger.MetricsSinkAdapte.level = WARN + +logger.MetricsSystemImpl.name = org.apache.hadoop.metrics2.impl.MetricsSystemImpl +logger.MetricsSystemImpl.level = WARN + +# Disable request log by default, you can enable this by changing the appender +logger.http.name = http.requests +logger.http.additivity = false +logger.http = INFO,NullAppender +# Replace the above with this configuration if you want an http access.log +# logger.http = INFO,AccessRFA diff --git a/hbase-archetypes/hbase-client-project/src/test/java/org/apache/hbase/archetypes/exemplars/client/TestHelloHBase.java b/hbase-archetypes/hbase-client-project/src/test/java/org/apache/hbase/archetypes/exemplars/client/TestHelloHBase.java index 9a92e606ffb0..b08ecf7ab1b5 100644 --- a/hbase-archetypes/hbase-client-project/src/test/java/org/apache/hbase/archetypes/exemplars/client/TestHelloHBase.java +++ b/hbase-archetypes/hbase-client-project/src/test/java/org/apache/hbase/archetypes/exemplars/client/TestHelloHBase.java @@ -1,4 +1,4 @@ -/** +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. 
See the NOTICE file * distributed with this work for additional information @@ -44,10 +44,9 @@ public class TestHelloHBase { @ClassRule public static final HBaseClassTestRule CLASS_RULE = - HBaseClassTestRule.forClass(TestHelloHBase.class); + HBaseClassTestRule.forClass(TestHelloHBase.class); - private static final HBaseTestingUtility TEST_UTIL - = new HBaseTestingUtility(); + private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); @BeforeClass public static void beforeClass() throws Exception { @@ -67,13 +66,11 @@ public void testNamespaceExists() throws Exception { Admin admin = TEST_UTIL.getAdmin(); exists = HelloHBase.namespaceExists(admin, NONEXISTENT_NAMESPACE); - assertEquals("#namespaceExists failed: found nonexistent namespace.", - false, exists); + assertEquals("#namespaceExists failed: found nonexistent namespace.", false, exists); admin.createNamespace(NamespaceDescriptor.create(EXISTING_NAMESPACE).build()); exists = HelloHBase.namespaceExists(admin, EXISTING_NAMESPACE); - assertEquals("#namespaceExists failed: did NOT find existing namespace.", - true, exists); + assertEquals("#namespaceExists failed: did NOT find existing namespace.", true, exists); admin.deleteNamespace(EXISTING_NAMESPACE); } @@ -82,14 +79,11 @@ public void testCreateNamespaceAndTable() throws Exception { Admin admin = TEST_UTIL.getAdmin(); HelloHBase.createNamespaceAndTable(admin); - boolean namespaceExists - = HelloHBase.namespaceExists(admin, HelloHBase.MY_NAMESPACE_NAME); - assertEquals("#createNamespaceAndTable failed to create namespace.", - true, namespaceExists); + boolean namespaceExists = HelloHBase.namespaceExists(admin, HelloHBase.MY_NAMESPACE_NAME); + assertEquals("#createNamespaceAndTable failed to create namespace.", true, namespaceExists); boolean tableExists = admin.tableExists(HelloHBase.MY_TABLE_NAME); - assertEquals("#createNamespaceAndTable failed to create table.", - true, tableExists); + assertEquals("#createNamespaceAndTable failed to create table.", true, tableExists); admin.disableTable(HelloHBase.MY_TABLE_NAME); admin.deleteTable(HelloHBase.MY_TABLE_NAME); @@ -100,8 +94,7 @@ public void testCreateNamespaceAndTable() throws Exception { public void testPutRowToTable() throws IOException { Admin admin = TEST_UTIL.getAdmin(); admin.createNamespace(NamespaceDescriptor.create(HelloHBase.MY_NAMESPACE_NAME).build()); - Table table - = TEST_UTIL.createTable(HelloHBase.MY_TABLE_NAME, HelloHBase.MY_COLUMN_FAMILY_NAME); + Table table = TEST_UTIL.createTable(HelloHBase.MY_TABLE_NAME, HelloHBase.MY_COLUMN_FAMILY_NAME); HelloHBase.putRowToTable(table); Result row = table.get(new Get(HelloHBase.MY_ROW_ID)); @@ -115,13 +108,10 @@ public void testPutRowToTable() throws IOException { public void testDeleteRow() throws IOException { Admin admin = TEST_UTIL.getAdmin(); admin.createNamespace(NamespaceDescriptor.create(HelloHBase.MY_NAMESPACE_NAME).build()); - Table table - = TEST_UTIL.createTable(HelloHBase.MY_TABLE_NAME, HelloHBase.MY_COLUMN_FAMILY_NAME); + Table table = TEST_UTIL.createTable(HelloHBase.MY_TABLE_NAME, HelloHBase.MY_COLUMN_FAMILY_NAME); - table.put(new Put(HelloHBase.MY_ROW_ID). 
- addColumn(HelloHBase.MY_COLUMN_FAMILY_NAME, - HelloHBase.MY_FIRST_COLUMN_QUALIFIER, - Bytes.toBytes("xyz"))); + table.put(new Put(HelloHBase.MY_ROW_ID).addColumn(HelloHBase.MY_COLUMN_FAMILY_NAME, + HelloHBase.MY_FIRST_COLUMN_QUALIFIER, Bytes.toBytes("xyz"))); HelloHBase.deleteRow(table); Result row = table.get(new Get(HelloHBase.MY_ROW_ID)); assertEquals("#deleteRow failed to delete row.", true, row.isEmpty()); diff --git a/hbase-archetypes/hbase-shaded-client-project/pom.xml b/hbase-archetypes/hbase-shaded-client-project/pom.xml index dd35fedbb8e3..1e66d1e443b9 100644 --- a/hbase-archetypes/hbase-shaded-client-project/pom.xml +++ b/hbase-archetypes/hbase-shaded-client-project/pom.xml @@ -1,8 +1,5 @@ - + 4.0.0 - hbase-archetypes org.apache.hbase - 2.5.0-SNAPSHOT + hbase-archetypes + ${revision} .. hbase-shaded-client-project @@ -44,16 +41,16 @@ org.apache.hbase hbase-testing-util test - - - javax.xml.bind - jaxb-api - - - javax.ws.rs - jsr311-api - - + + + javax.xml.bind + jaxb-api + + + javax.ws.rs + jsr311-api + + org.apache.hbase @@ -70,13 +67,23 @@ runtime - org.slf4j - slf4j-log4j12 + org.apache.logging.log4j + log4j-api + runtime + + + org.apache.logging.log4j + log4j-core + runtime + + + org.apache.logging.log4j + log4j-slf4j-impl runtime - log4j - log4j + org.apache.logging.log4j + log4j-1.2-api runtime diff --git a/hbase-archetypes/hbase-shaded-client-project/src/main/java/org/apache/hbase/archetypes/exemplars/shaded_client/HelloHBase.java b/hbase-archetypes/hbase-shaded-client-project/src/main/java/org/apache/hbase/archetypes/exemplars/shaded_client/HelloHBase.java index 94a1e711d47f..00a82fe50db1 100644 --- a/hbase-archetypes/hbase-shaded-client-project/src/main/java/org/apache/hbase/archetypes/exemplars/shaded_client/HelloHBase.java +++ b/hbase-archetypes/hbase-shaded-client-project/src/main/java/org/apache/hbase/archetypes/exemplars/shaded_client/HelloHBase.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -37,19 +36,17 @@ import org.apache.hadoop.hbase.util.Bytes; /** - * Successful running of this application requires access to an active instance - * of HBase. For install instructions for a standalone instance of HBase, please - * refer to https://hbase.apache.org/book.html#quickstart + * Successful running of this application requires access to an active instance of HBase. 
For + * install instructions for a standalone instance of HBase, please refer to + * https://hbase.apache.org/book.html#quickstart */ public final class HelloHBase { - protected static final String MY_NAMESPACE_NAME = "myTestNamespace"; + static final String MY_NAMESPACE_NAME = "myTestNamespace"; static final TableName MY_TABLE_NAME = TableName.valueOf("myTestTable"); static final byte[] MY_COLUMN_FAMILY_NAME = Bytes.toBytes("cf"); - static final byte[] MY_FIRST_COLUMN_QUALIFIER - = Bytes.toBytes("myFirstColumn"); - static final byte[] MY_SECOND_COLUMN_QUALIFIER - = Bytes.toBytes("mySecondColumn"); + static final byte[] MY_FIRST_COLUMN_QUALIFIER = Bytes.toBytes("myFirstColumn"); + static final byte[] MY_SECOND_COLUMN_QUALIFIER = Bytes.toBytes("mySecondColumn"); static final byte[] MY_ROW_ID = Bytes.toBytes("rowId01"); // Private constructor included here to avoid checkstyle warnings @@ -60,21 +57,21 @@ public static void main(final String[] args) throws IOException { final boolean deleteAllAtEOJ = true; /** - * ConnectionFactory#createConnection() automatically looks for - * hbase-site.xml (HBase configuration parameters) on the system's - * CLASSPATH, to enable creation of Connection to HBase via ZooKeeper. + * ConnectionFactory#createConnection() automatically looks for hbase-site.xml (HBase + * configuration parameters) on the system's CLASSPATH, to enable creation of Connection to + * HBase via ZooKeeper. */ try (Connection connection = ConnectionFactory.createConnection(); - Admin admin = connection.getAdmin()) { + Admin admin = connection.getAdmin()) { admin.getClusterStatus(); // assure connection successfully established - System.out.println("\n*** Hello HBase! -- Connection has been " - + "established via ZooKeeper!!\n"); + System.out + .println("\n*** Hello HBase! -- Connection has been " + "established via ZooKeeper!!\n"); createNamespaceAndTable(admin); System.out.println("Getting a Table object for [" + MY_TABLE_NAME - + "] with which to perform CRUD operations in HBase."); + + "] with which to perform CRUD operations in HBase."); try (Table table = connection.getTable(MY_TABLE_NAME)) { putRowToTable(table); @@ -92,9 +89,8 @@ public static void main(final String[] args) throws IOException { } /** - * Invokes Admin#createNamespace and Admin#createTable to create a namespace - * with a table that has one column-family. - * + * Invokes Admin#createNamespace and Admin#createTable to create a namespace with a table that has + * one column-family. * @param admin Standard Admin object * @throws IOException If IO problem encountered */ @@ -103,47 +99,38 @@ static void createNamespaceAndTable(final Admin admin) throws IOException { if (!namespaceExists(admin, MY_NAMESPACE_NAME)) { System.out.println("Creating Namespace [" + MY_NAMESPACE_NAME + "]."); - admin.createNamespace(NamespaceDescriptor - .create(MY_NAMESPACE_NAME).build()); + admin.createNamespace(NamespaceDescriptor.create(MY_NAMESPACE_NAME).build()); } if (!admin.tableExists(MY_TABLE_NAME)) { System.out.println("Creating Table [" + MY_TABLE_NAME.getNameAsString() - + "], with one Column Family [" - + Bytes.toString(MY_COLUMN_FAMILY_NAME) + "]."); + + "], with one Column Family [" + Bytes.toString(MY_COLUMN_FAMILY_NAME) + "]."); admin.createTable(new HTableDescriptor(MY_TABLE_NAME) - .addFamily(new HColumnDescriptor(MY_COLUMN_FAMILY_NAME))); + .addFamily(new HColumnDescriptor(MY_COLUMN_FAMILY_NAME))); } } /** - * Invokes Table#put to store a row (with two new columns created 'on the - * fly') into the table. 
- * + * Invokes Table#put to store a row (with two new columns created 'on the fly') into the table. * @param table Standard Table object (used for CRUD operations). * @throws IOException If IO problem encountered */ static void putRowToTable(final Table table) throws IOException { - table.put(new Put(MY_ROW_ID).addColumn(MY_COLUMN_FAMILY_NAME, - MY_FIRST_COLUMN_QUALIFIER, - Bytes.toBytes("Hello")).addColumn(MY_COLUMN_FAMILY_NAME, - MY_SECOND_COLUMN_QUALIFIER, - Bytes.toBytes("World!"))); - - System.out.println("Row [" + Bytes.toString(MY_ROW_ID) - + "] was put into Table [" - + table.getName().getNameAsString() + "] in HBase;\n" - + " the row's two columns (created 'on the fly') are: [" - + Bytes.toString(MY_COLUMN_FAMILY_NAME) + ":" - + Bytes.toString(MY_FIRST_COLUMN_QUALIFIER) - + "] and [" + Bytes.toString(MY_COLUMN_FAMILY_NAME) + ":" - + Bytes.toString(MY_SECOND_COLUMN_QUALIFIER) + "]"); + table.put(new Put(MY_ROW_ID) + .addColumn(MY_COLUMN_FAMILY_NAME, MY_FIRST_COLUMN_QUALIFIER, Bytes.toBytes("Hello")) + .addColumn(MY_COLUMN_FAMILY_NAME, MY_SECOND_COLUMN_QUALIFIER, Bytes.toBytes("World!"))); + + System.out.println("Row [" + Bytes.toString(MY_ROW_ID) + "] was put into Table [" + + table.getName().getNameAsString() + "] in HBase;\n" + + " the row's two columns (created 'on the fly') are: [" + + Bytes.toString(MY_COLUMN_FAMILY_NAME) + ":" + Bytes.toString(MY_FIRST_COLUMN_QUALIFIER) + + "] and [" + Bytes.toString(MY_COLUMN_FAMILY_NAME) + ":" + + Bytes.toString(MY_SECOND_COLUMN_QUALIFIER) + "]"); } /** * Invokes Table#get and prints out the contents of the retrieved row. - * * @param table Standard Table object * @throws IOException If IO problem encountered */ @@ -151,38 +138,32 @@ static void getAndPrintRowContents(final Table table) throws IOException { Result row = table.get(new Get(MY_ROW_ID)); - System.out.println("Row [" + Bytes.toString(row.getRow()) - + "] was retrieved from Table [" - + table.getName().getNameAsString() - + "] in HBase, with the following content:"); + System.out.println("Row [" + Bytes.toString(row.getRow()) + "] was retrieved from Table [" + + table.getName().getNameAsString() + "] in HBase, with the following content:"); - for (Entry> colFamilyEntry - : row.getNoVersionMap().entrySet()) { + for (Entry> colFamilyEntry : row.getNoVersionMap() + .entrySet()) { String columnFamilyName = Bytes.toString(colFamilyEntry.getKey()); - System.out.println(" Columns in Column Family [" + columnFamilyName - + "]:"); + System.out.println(" Columns in Column Family [" + columnFamilyName + "]:"); - for (Entry columnNameAndValueMap - : colFamilyEntry.getValue().entrySet()) { + for (Entry columnNameAndValueMap : colFamilyEntry.getValue().entrySet()) { System.out.println(" Value of Column [" + columnFamilyName + ":" - + Bytes.toString(columnNameAndValueMap.getKey()) + "] == " - + Bytes.toString(columnNameAndValueMap.getValue())); + + Bytes.toString(columnNameAndValueMap.getKey()) + "] == " + + Bytes.toString(columnNameAndValueMap.getValue())); } } } /** * Checks to see whether a namespace exists. 
- * - * @param admin Standard Admin object + * @param admin Standard Admin object * @param namespaceName Name of namespace * @return true If namespace exists * @throws IOException If IO problem encountered */ - static boolean namespaceExists(final Admin admin, final String namespaceName) - throws IOException { + static boolean namespaceExists(final Admin admin, final String namespaceName) throws IOException { try { admin.getNamespaceDescriptor(namespaceName); } catch (NamespaceNotFoundException e) { @@ -193,28 +174,24 @@ static boolean namespaceExists(final Admin admin, final String namespaceName) /** * Invokes Table#delete to delete test data (i.e. the row) - * * @param table Standard Table object * @throws IOException If IO problem is encountered */ static void deleteRow(final Table table) throws IOException { - System.out.println("Deleting row [" + Bytes.toString(MY_ROW_ID) - + "] from Table [" - + table.getName().getNameAsString() + "]."); + System.out.println("Deleting row [" + Bytes.toString(MY_ROW_ID) + "] from Table [" + + table.getName().getNameAsString() + "]."); table.delete(new Delete(MY_ROW_ID)); } /** - * Invokes Admin#disableTable, Admin#deleteTable, and Admin#deleteNamespace to - * disable/delete Table and delete Namespace. - * + * Invokes Admin#disableTable, Admin#deleteTable, and Admin#deleteNamespace to disable/delete + * Table and delete Namespace. * @param admin Standard Admin object * @throws IOException If IO problem is encountered */ static void deleteNamespaceAndTable(final Admin admin) throws IOException { if (admin.tableExists(MY_TABLE_NAME)) { - System.out.println("Disabling/deleting Table [" - + MY_TABLE_NAME.getNameAsString() + "]."); + System.out.println("Disabling/deleting Table [" + MY_TABLE_NAME.getNameAsString() + "]."); admin.disableTable(MY_TABLE_NAME); // Disable a table before deleting it. admin.deleteTable(MY_TABLE_NAME); } diff --git a/hbase-archetypes/hbase-shaded-client-project/src/main/resources/log4j.properties b/hbase-archetypes/hbase-shaded-client-project/src/main/resources/log4j.properties deleted file mode 100644 index 0b01e57e6ea6..000000000000 --- a/hbase-archetypes/hbase-shaded-client-project/src/main/resources/log4j.properties +++ /dev/null @@ -1,121 +0,0 @@ -# Licensed to the Apache Software Foundation (ASF) under one -# or more contributor license agreements. See the NOTICE file -# distributed with this work for additional information -# regarding copyright ownership. The ASF licenses this file -# to you under the Apache License, Version 2.0 (the -# "License"); you may not use this file except in compliance -# with the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -# Define some default values that can be overridden by system properties -hbase.root.logger=INFO,console -hbase.security.logger=INFO,console -hbase.log.dir=. -hbase.log.file=hbase.log - -# Define the root logger to the system property "hbase.root.logger". 
-log4j.rootLogger=${hbase.root.logger} - -# Logging Threshold -log4j.threshold=ALL - -# -# Daily Rolling File Appender -# -log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender -log4j.appender.DRFA.File=${hbase.log.dir}/${hbase.log.file} - -# Rollver at midnight -log4j.appender.DRFA.DatePattern=.yyyy-MM-dd - -# 30-day backup -#log4j.appender.DRFA.MaxBackupIndex=30 -log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout - -# Pattern format: Date LogLevel LoggerName LogMessage -log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n - -# Rolling File Appender properties -hbase.log.maxfilesize=256MB -hbase.log.maxbackupindex=20 - -# Rolling File Appender -log4j.appender.RFA=org.apache.log4j.RollingFileAppender -log4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file} - -log4j.appender.RFA.MaxFileSize=${hbase.log.maxfilesize} -log4j.appender.RFA.MaxBackupIndex=${hbase.log.maxbackupindex} - -log4j.appender.RFA.layout=org.apache.log4j.PatternLayout -log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n - -# -# Security audit appender -# -hbase.security.log.file=SecurityAuth.audit -hbase.security.log.maxfilesize=256MB -hbase.security.log.maxbackupindex=20 -log4j.appender.RFAS=org.apache.log4j.RollingFileAppender -log4j.appender.RFAS.File=${hbase.log.dir}/${hbase.security.log.file} -log4j.appender.RFAS.MaxFileSize=${hbase.security.log.maxfilesize} -log4j.appender.RFAS.MaxBackupIndex=${hbase.security.log.maxbackupindex} -log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout -log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n -log4j.category.SecurityLogger=${hbase.security.logger} -log4j.additivity.SecurityLogger=false -#log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE -#log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.visibility.VisibilityController=TRACE - -# -# Null Appender -# -log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender - -# -# console -# Add "console" to rootlogger above if you want to use this -# -log4j.appender.console=org.apache.log4j.ConsoleAppender -log4j.appender.console.target=System.err -log4j.appender.console.layout=org.apache.log4j.PatternLayout -log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n - -# Custom Logging levels - -log4j.logger.org.apache.zookeeper=INFO -#log4j.logger.org.apache.hadoop.fs.FSNamesystem=DEBUG -log4j.logger.org.apache.hadoop.hbase=INFO -# Make these two classes INFO-level. Make them DEBUG to see more zk debug. -log4j.logger.org.apache.hadoop.hbase.zookeeper.ZKUtil=INFO -log4j.logger.org.apache.hadoop.hbase.zookeeper.ZKWatcher=INFO -#log4j.logger.org.apache.hadoop.dfs=DEBUG -# Set this class to log INFO only otherwise its OTT -# Enable this to get detailed connection error/retry logging. 
-# log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation=TRACE - - -# Uncomment this line to enable tracing on _every_ RPC call (this can be a lot of output) -#log4j.logger.org.apache.hadoop.ipc.HBaseServer.trace=DEBUG - -# Uncomment the below if you want to remove logging of client region caching' -# and scan of hbase:meta messages -# log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation=INFO -# log4j.logger.org.apache.hadoop.hbase.client.MetaScanner=INFO - -# EventCounter -# Add "EventCounter" to rootlogger if you want to use this -# Uncomment the line below to add EventCounter information -# log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter - -# Prevent metrics subsystem start/stop messages (HBASE-17722) -log4j.logger.org.apache.hadoop.metrics2.impl.MetricsConfig=WARN -log4j.logger.org.apache.hadoop.metrics2.impl.MetricsSinkAdapter=WARN -log4j.logger.org.apache.hadoop.metrics2.impl.MetricsSystemImpl=WARN diff --git a/hbase-archetypes/hbase-shaded-client-project/src/main/resources/log4j2.properties b/hbase-archetypes/hbase-shaded-client-project/src/main/resources/log4j2.properties new file mode 100644 index 000000000000..5ffcfda24176 --- /dev/null +++ b/hbase-archetypes/hbase-shaded-client-project/src/main/resources/log4j2.properties @@ -0,0 +1,137 @@ +#/** +# * Licensed to the Apache Software Foundation (ASF) under one +# * or more contributor license agreements. See the NOTICE file +# * distributed with this work for additional information +# * regarding copyright ownership. The ASF licenses this file +# * to you under the Apache License, Version 2.0 (the +# * "License"); you may not use this file except in compliance +# * with the License. You may obtain a copy of the License at +# * +# * http://www.apache.org/licenses/LICENSE-2.0 +# * +# * Unless required by applicable law or agreed to in writing, software +# * distributed under the License is distributed on an "AS IS" BASIS, +# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# * See the License for the specific language governing permissions and +# * limitations under the License. 
+# */ + +status = warn +dest = err +name = PropertiesConfig + +# Console appender +appender.console.type = Console +appender.console.target = SYSTEM_ERR +appender.console.name = console +appender.console.layout.type = PatternLayout +appender.console.layout.pattern = %d{ISO8601} %-5p [%t] %c{2}: %.1000m%n + +# Daily Rolling File Appender +appender.DRFA.type = RollingFile +appender.DRFA.name = DRFA +appender.DRFA.fileName = ${sys:hbase.log.dir:-.}/${sys:hbase.log.file:-hbase.log} +appender.DRFA.filePattern = ${sys:hbase.log.dir:-.}/${sys:hbase.log.file:-hbase.log}.%d{yyyy-MM-dd} +appender.DRFA.createOnDemand = true +appender.DRFA.layout.type = PatternLayout +appender.DRFA.layout.pattern = %d{ISO8601} %-5p [%t] %c{2}: %.1000m%n +appender.DRFA.policies.type = Policies +appender.DRFA.policies.time.type = TimeBasedTriggeringPolicy +appender.DRFA.policies.time.interval = 1 +appender.DRFA.policies.time.modulate = true +appender.DRFA.policies.size.type = SizeBasedTriggeringPolicy +appender.DRFA.policies.size.size = ${sys:hbase.log.maxfilesize:-256MB} +appender.DRFA.strategy.type = DefaultRolloverStrategy +appender.DRFA.strategy.max = ${sys:hbase.log.maxbackupindex:-20} + +# Rolling File Appender +appender.RFA.type = RollingFile +appender.RFA.name = RFA +appender.RFA.fileName = ${sys:hbase.log.dir:-.}/${sys:hbase.log.file:-hbase.log} +appender.RFA.filePattern = ${sys:hbase.log.dir:-.}/${sys:hbase.log.file:-hbase.log}.%i +appender.RFA.createOnDemand = true +appender.RFA.layout.type = PatternLayout +appender.RFA.layout.pattern = %d{ISO8601} %-5p [%t] %c{2}: %.1000m%n +appender.RFA.policies.type = Policies +appender.RFA.policies.size.type = SizeBasedTriggeringPolicy +appender.RFA.policies.size.size = ${sys:hbase.log.maxfilesize:-256MB} +appender.RFA.strategy.type = DefaultRolloverStrategy +appender.RFA.strategy.max = ${sys:hbase.log.maxbackupindex:-20} + +# Security Audit Appender +appender.RFAS.type = RollingFile +appender.RFAS.name = RFAS +appender.RFAS.fileName = ${sys:hbase.log.dir:-.}/${sys:hbase.security.log.file:-SecurityAuth.audit} +appender.RFAS.filePattern = ${sys:hbase.log.dir:-.}/${sys:hbase.security.log.file:-SecurityAuth.audit}.%i +appender.RFAS.createOnDemand = true +appender.RFAS.layout.type = PatternLayout +appender.RFAS.layout.pattern = %d{ISO8601} %-5p [%t] %c{2}: %.1000m%n +appender.RFAS.policies.type = Policies +appender.RFAS.policies.size.type = SizeBasedTriggeringPolicy +appender.RFAS.policies.size.size = ${sys:hbase.security.log.maxfilesize:-256MB} +appender.RFAS.strategy.type = DefaultRolloverStrategy +appender.RFAS.strategy.max = ${sys:hbase.security.log.maxbackupindex:-20} + +# Http Access Log RFA, uncomment this if you want an http access.log +# appender.AccessRFA.type = RollingFile +# appender.AccessRFA.name = AccessRFA +# appender.AccessRFA.fileName = /var/log/hbase/access.log +# appender.AccessRFA.filePattern = /var/log/hbase/access.log.%i +# appender.AccessRFA.createOnDemand = true +# appender.AccessRFA.layout.type = PatternLayout +# appender.AccessRFA.layout.pattern = %m%n +# appender.AccessRFA.policies.type = Policies +# appender.AccessRFA.policies.size.type = SizeBasedTriggeringPolicy +# appender.AccessRFA.policies.size.size = 200MB +# appender.AccessRFA.strategy.type = DefaultRolloverStrategy +# appender.AccessRFA.strategy.max = 10 + +# Null Appender +appender.NullAppender.type = Null +appender.NullAppender.name = NullAppender + +rootLogger = ${sys:hbase.root.logger:-INFO,console} + +logger.SecurityLogger.name = SecurityLogger +logger.SecurityLogger = 
${sys:hbase.security.logger:-INFO,console} +logger.SecurityLogger.additivity = false + +# Custom Logging levels +# logger.zookeeper.name = org.apache.zookeeper +# logger.zookeeper.level = ERROR + +# logger.FSNamesystem.name = org.apache.hadoop.fs.FSNamesystem +# logger.FSNamesystem.level = DEBUG + +# logger.hbase.name = org.apache.hadoop.hbase +# logger.hbase.level = DEBUG + +# logger.META.name = org.apache.hadoop.hbase.META +# logger.META.level = DEBUG + +# Make these two classes below DEBUG to see more zk debug. +# logger.ZKUtil.name = org.apache.hadoop.hbase.zookeeper.ZKUtil +# logger.ZKUtil.level = DEBUG + +# logger.ZKWatcher.name = org.apache.hadoop.hbase.zookeeper.ZKWatcher +# logger.ZKWatcher.level = DEBUG + +# logger.dfs.name = org.apache.hadoop.dfs +# logger.dfs.level = DEBUG + +# Prevent metrics subsystem start/stop messages (HBASE-17722) +logger.MetricsConfig.name = org.apache.hadoop.metrics2.impl.MetricsConfig +logger.MetricsConfig.level = WARN + +logger.MetricsSinkAdapte.name = org.apache.hadoop.metrics2.impl.MetricsSinkAdapter +logger.MetricsSinkAdapte.level = WARN + +logger.MetricsSystemImpl.name = org.apache.hadoop.metrics2.impl.MetricsSystemImpl +logger.MetricsSystemImpl.level = WARN + +# Disable request log by default, you can enable this by changing the appender +logger.http.name = http.requests +logger.http.additivity = false +logger.http = INFO,NullAppender +# Replace the above with this configuration if you want an http access.log +# logger.http = INFO,AccessRFA diff --git a/hbase-archetypes/hbase-shaded-client-project/src/test/java/org/apache/hbase/archetypes/exemplars/shaded_client/TestHelloHBase.java b/hbase-archetypes/hbase-shaded-client-project/src/test/java/org/apache/hbase/archetypes/exemplars/shaded_client/TestHelloHBase.java index 0f0f7d91ade4..f87d9d7c700b 100644 --- a/hbase-archetypes/hbase-shaded-client-project/src/test/java/org/apache/hbase/archetypes/exemplars/shaded_client/TestHelloHBase.java +++ b/hbase-archetypes/hbase-shaded-client-project/src/test/java/org/apache/hbase/archetypes/exemplars/shaded_client/TestHelloHBase.java @@ -1,4 +1,4 @@ -/** +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. 
See the NOTICE file * distributed with this work for additional information @@ -44,10 +44,9 @@ public class TestHelloHBase { @ClassRule public static final HBaseClassTestRule CLASS_RULE = - HBaseClassTestRule.forClass(TestHelloHBase.class); + HBaseClassTestRule.forClass(TestHelloHBase.class); - private static final HBaseTestingUtility TEST_UTIL - = new HBaseTestingUtility(); + private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); @BeforeClass public static void beforeClass() throws Exception { @@ -67,13 +66,11 @@ public void testNamespaceExists() throws Exception { Admin admin = TEST_UTIL.getAdmin(); exists = HelloHBase.namespaceExists(admin, NONEXISTENT_NAMESPACE); - assertEquals("#namespaceExists failed: found nonexistent namespace.", - false, exists); + assertEquals("#namespaceExists failed: found nonexistent namespace.", false, exists); admin.createNamespace(NamespaceDescriptor.create(EXISTING_NAMESPACE).build()); exists = HelloHBase.namespaceExists(admin, EXISTING_NAMESPACE); - assertEquals("#namespaceExists failed: did NOT find existing namespace.", - true, exists); + assertEquals("#namespaceExists failed: did NOT find existing namespace.", true, exists); admin.deleteNamespace(EXISTING_NAMESPACE); } @@ -82,14 +79,11 @@ public void testCreateNamespaceAndTable() throws Exception { Admin admin = TEST_UTIL.getAdmin(); HelloHBase.createNamespaceAndTable(admin); - boolean namespaceExists - = HelloHBase.namespaceExists(admin, HelloHBase.MY_NAMESPACE_NAME); - assertEquals("#createNamespaceAndTable failed to create namespace.", - true, namespaceExists); + boolean namespaceExists = HelloHBase.namespaceExists(admin, HelloHBase.MY_NAMESPACE_NAME); + assertEquals("#createNamespaceAndTable failed to create namespace.", true, namespaceExists); boolean tableExists = admin.tableExists(HelloHBase.MY_TABLE_NAME); - assertEquals("#createNamespaceAndTable failed to create table.", - true, tableExists); + assertEquals("#createNamespaceAndTable failed to create table.", true, tableExists); admin.disableTable(HelloHBase.MY_TABLE_NAME); admin.deleteTable(HelloHBase.MY_TABLE_NAME); @@ -100,8 +94,7 @@ public void testCreateNamespaceAndTable() throws Exception { public void testPutRowToTable() throws IOException { Admin admin = TEST_UTIL.getAdmin(); admin.createNamespace(NamespaceDescriptor.create(HelloHBase.MY_NAMESPACE_NAME).build()); - Table table - = TEST_UTIL.createTable(HelloHBase.MY_TABLE_NAME, HelloHBase.MY_COLUMN_FAMILY_NAME); + Table table = TEST_UTIL.createTable(HelloHBase.MY_TABLE_NAME, HelloHBase.MY_COLUMN_FAMILY_NAME); HelloHBase.putRowToTable(table); Result row = table.get(new Get(HelloHBase.MY_ROW_ID)); @@ -115,13 +108,10 @@ public void testPutRowToTable() throws IOException { public void testDeleteRow() throws IOException { Admin admin = TEST_UTIL.getAdmin(); admin.createNamespace(NamespaceDescriptor.create(HelloHBase.MY_NAMESPACE_NAME).build()); - Table table - = TEST_UTIL.createTable(HelloHBase.MY_TABLE_NAME, HelloHBase.MY_COLUMN_FAMILY_NAME); + Table table = TEST_UTIL.createTable(HelloHBase.MY_TABLE_NAME, HelloHBase.MY_COLUMN_FAMILY_NAME); - table.put(new Put(HelloHBase.MY_ROW_ID). 
- addColumn(HelloHBase.MY_COLUMN_FAMILY_NAME, - HelloHBase.MY_FIRST_COLUMN_QUALIFIER, - Bytes.toBytes("xyz"))); + table.put(new Put(HelloHBase.MY_ROW_ID).addColumn(HelloHBase.MY_COLUMN_FAMILY_NAME, + HelloHBase.MY_FIRST_COLUMN_QUALIFIER, Bytes.toBytes("xyz"))); HelloHBase.deleteRow(table); Result row = table.get(new Get(HelloHBase.MY_ROW_ID)); assertEquals("#deleteRow failed to delete row.", true, row.isEmpty()); diff --git a/hbase-archetypes/pom.xml b/hbase-archetypes/pom.xml index 9cdd4cff599e..d30ab1eb6975 100644 --- a/hbase-archetypes/pom.xml +++ b/hbase-archetypes/pom.xml @@ -1,6 +1,5 @@ - - + + 4.0.0 - hbase-build-configuration org.apache.hbase - 2.5.0-SNAPSHOT + hbase-build-configuration + ${revision} ../hbase-build-configuration @@ -68,10 +67,10 @@ spotbugs-maven-plugin - false spotbugs + false ${project.basedir}/../dev-support/spotbugs-exclude.xml diff --git a/hbase-assembly/pom.xml b/hbase-assembly/pom.xml index 7e9389a76ba1..c6049f744324 100644 --- a/hbase-assembly/pom.xml +++ b/hbase-assembly/pom.xml @@ -1,4 +1,4 @@ - + 4.0.0 - hbase-build-configuration org.apache.hbase - 2.5.0-SNAPSHOT + hbase-build-configuration + ${revision} ../hbase-build-configuration hbase-assembly - Apache HBase - Assembly - - Module that does project assembly and that is all that it does. - pom + Apache HBase - Assembly + Module that does project assembly and that is all that it does. true - - - - - org.apache.maven.plugins - maven-remote-resources-plugin - - - aggregate-licenses - - process - - - - ${build.year} - ${license.debug.print.included} - ${license.bundles.dependencies} - ${license.bundles.jquery} - ${license.bundles.logo} - ${license.bundles.bootstrap} - - - ${project.groupId}:hbase-resource-bundle:${project.version} - - - ${project.groupId}:hbase-resource-bundle:${project.version} - - - supplemental-models.xml - - - - - - - maven-assembly-plugin - - - hbase-${project.version} - false - true - posix - - ${assembly.file} - src/main/assembly/client.xml - - - - - maven-dependency-plugin - - - - create-hbase-generated-classpath - test - - build-classpath - - - ${project.parent.basedir}/target/cached_classpath.txt - jline,jruby-complete,hbase-shaded-client,hbase-shaded-client-byo-hadoop,hbase-shaded-mapreduce - - - - - - create-hbase-generated-classpath-jline - test - - build-classpath - - - ${project.parent.basedir}/target/cached_classpath_jline.txt - jline - - - - - - create-hbase-generated-classpath-jruby - test - - build-classpath - - - ${project.parent.basedir}/target/cached_classpath_jruby.txt - jruby-complete - - - - - - - unpack-dependency-notices - prepare-package - - unpack-dependencies - - - pom - true - **\/NOTICE,**\/NOTICE.txt - - - - - - org.codehaus.mojo - exec-maven-plugin - ${exec.maven.version} - - - concat-NOTICE-files - package - - exec - - - env - - bash - -c - cat maven-shared-archive-resources/META-INF/NOTICE \ - `find ${project.build.directory}/dependency -iname NOTICE -or -iname NOTICE.txt` - - - ${project.build.directory}/NOTICE.aggregate - ${project.build.directory} - - - - - - - @@ -188,7 +47,7 @@ org.apache.hbase hbase-shaded-mapreduce - + org.apache.hbase hbase-it @@ -257,16 +116,16 @@ hbase-external-blockcache - org.apache.hbase - hbase-testing-util + org.apache.hbase + hbase-testing-util - org.apache.hbase - hbase-metrics-api + org.apache.hbase + hbase-metrics-api - org.apache.hbase - hbase-metrics + org.apache.hbase + hbase-metrics org.apache.hbase @@ -277,9 +136,9 @@ hbase-protocol-shaded - org.apache.hbase - hbase-resource-bundle - true + org.apache.hbase + 
hbase-resource-bundle + true org.apache.httpcomponents @@ -313,6 +172,10 @@ org.apache.hbase hbase-compression-aircompressor + + org.apache.hbase + hbase-compression-brotli + org.apache.hbase hbase-compression-lz4 @@ -352,26 +215,172 @@ jul-to-slf4j - org.slf4j - slf4j-log4j12 + org.apache.logging.log4j + log4j-api - log4j - log4j + org.apache.logging.log4j + log4j-core + + + org.apache.logging.log4j + log4j-slf4j-impl io.opentelemetry.javaagent opentelemetry-javaagent - all + + + org.apache.logging.log4j + log4j-1.2-api + + + + + org.apache.maven.plugins + maven-remote-resources-plugin + + + aggregate-licenses + + process + + + + ${build.year} + ${license.debug.print.included} + ${license.bundles.dependencies} + ${license.bundles.jquery} + ${license.bundles.vega} + ${license.bundles.logo} + ${license.bundles.bootstrap} + + + ${project.groupId}:hbase-resource-bundle:${project.version} + + + ${project.groupId}:hbase-resource-bundle:${project.version} + + + supplemental-models.xml + + + + + + + maven-assembly-plugin + + + hbase-${project.version} + false + true + posix + + ${assembly.file} + src/main/assembly/client.xml + + + + + maven-dependency-plugin + + + + create-hbase-generated-classpath + + build-classpath + + test + + ${project.parent.basedir}/target/cached_classpath.txt + jline,jruby-complete,hbase-shaded-client,hbase-shaded-client-byo-hadoop,hbase-shaded-mapreduce + + + + + + create-hbase-generated-classpath-jline + + build-classpath + + test + + ${project.parent.basedir}/target/cached_classpath_jline.txt + jline + + + + + + create-hbase-generated-classpath-jruby + + build-classpath + + test + + ${project.parent.basedir}/target/cached_classpath_jruby.txt + jruby-complete + + + + + + + unpack-dependency-notices + + unpack-dependencies + + prepare-package + + pom + true + **\/NOTICE,**\/NOTICE.txt + + + + + + org.codehaus.mojo + exec-maven-plugin + ${exec.maven.version} + + + concat-NOTICE-files + + exec + + package + + env + + bash + -c + cat maven-shared-archive-resources/META-INF/NOTICE \ + `find ${project.build.directory}/dependency -iname NOTICE -or -iname NOTICE.txt` + + ${project.build.directory}/NOTICE.aggregate + ${project.build.directory} + + + + + + + rsgroup - !skip-rsgroup + !skip-rsgroup @@ -379,6 +388,17 @@ org.apache.hbase hbase-rsgroup + + junit + junit + + + org.mockito + mockito-core + + + compile +
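Note on the dependency changes above: the hbase-assembly pom swaps the old slf4j-log4j12 binding and the log4j 1.2 jar for the Log4j2 artifacts (log4j-api, log4j-core, log4j-slf4j-impl, and the log4j-1.2-api bridge for legacy callers). Below is a minimal sketch of how downstream code keeps logging through the SLF4J API while Log4j2 supplies the backend, assuming only the artifacts listed in the diff are on the classpath; the class name and message are illustrative and not part of this patch.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class LoggingSketch {
  // SLF4J facade; log4j-slf4j-impl routes these calls to the Log4j2 core,
  // which is configured by the log4j2.properties files added earlier in this patch.
  private static final Logger LOG = LoggerFactory.getLogger(LoggingSketch.class);

  public static void main(String[] args) {
    LOG.info("Logging via SLF4J over Log4j2: {}", "sketch only");
  }
}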
diff --git a/hbase-assembly/src/main/assembly/client.xml b/hbase-assembly/src/main/assembly/client.xml index 48940b75d5be..54ed2e96fdd8 100644 --- a/hbase-assembly/src/main/assembly/client.xml +++ b/hbase-assembly/src/main/assembly/client.xml @@ -52,20 +52,20 @@ com.sun.jersey:* com.sun.jersey.contribs:* jline:jline + junit:junit com.github.stephenc.findbugs:findbugs-annotations commons-logging:commons-logging - log4j:log4j org.apache.hbase:hbase-shaded-client org.apache.hbase:hbase-shaded-client-byo-hadoop org.apache.hbase:hbase-shaded-mapreduce org.apache.htrace:htrace-core4 org.apache.htrace:htrace-core org.apache.yetus:audience-annotations - org.slf4j:slf4j-api - org.slf4j:jcl-over-slf4j - org.slf4j:jul-to-slf4j - org.slf4j:slf4j-log4j12 + org.slf4j:* + org.apache.logging.log4j:* io.opentelemetry.javaagent:* + org.hamcrest:hamcrest-core + org.mockito:mockito-core @@ -146,14 +146,11 @@ com.github.stephenc.findbugs:findbugs-annotations commons-logging:commons-logging - log4j:log4j org.apache.htrace:htrace-core4 org.apache.htrace:htrace-core org.apache.yetus:audience-annotations - org.slf4j:slf4j-api - org.slf4j:jcl-over-slf4j - org.slf4j:jul-to-slf4j - org.slf4j:slf4j-log4j12 + org.slf4j:* + org.apache.logging.log4j:* io.opentelemetry:* @@ -163,6 +160,15 @@ io.opentelemetry.javaagent:* + + + lib/test + + junit:junit + org.hamcrest:hamcrest-core + org.mockito:mockito-core + + diff --git a/hbase-assembly/src/main/assembly/hadoop-three-compat.xml b/hbase-assembly/src/main/assembly/hadoop-three-compat.xml index 6c670a77f7e0..9c73df1f2f45 100644 --- a/hbase-assembly/src/main/assembly/hadoop-three-compat.xml +++ b/hbase-assembly/src/main/assembly/hadoop-three-compat.xml @@ -49,11 +49,9 @@ org.apache.hbase:hbase-metrics org.apache.hbase:hbase-metrics-api org.apache.hbase:hbase-procedure - org.apache.hbase:hbase-protocol org.apache.hbase:hbase-protocol-shaded org.apache.hbase:hbase-replication org.apache.hbase:hbase-rest - org.apache.hbase:hbase-rsgroup org.apache.hbase:hbase-server org.apache.hbase:hbase-shell org.apache.hbase:hbase-testing-util @@ -106,13 +104,15 @@ org.apache.hbase:hbase-shaded-mapreduce com.github.stephenc.findbugs:findbugs-annotations commons-logging:commons-logging - log4j:log4j org.apache.htrace:htrace-core4 org.apache.htrace:htrace-core org.apache.yetus:audience-annotations - org.slf4j:slf4j-api - org.slf4j:slf4j-log4j12 + org.slf4j:* + org.apache.logging.log4j:* io.opentelemetry.javaagent:* + junit:junit + org.hamcrest:hamcrest-core + org.mockito:mockito-core @@ -205,14 +205,11 @@ com.github.stephenc.findbugs:findbugs-annotations commons-logging:commons-logging - log4j:log4j org.apache.htrace:htrace-core4 org.apache.htrace:htrace-core org.apache.yetus:audience-annotations - org.slf4j:slf4j-api - org.slf4j:jcl-over-slf4j - org.slf4j:jul-to-slf4j - org.slf4j:slf4j-log4j12 + org.slf4j:* + org.apache.logging.log4j:* io.opentelemetry:* @@ -226,7 +223,7 @@ lib/jdk11 true - com.sun.activation:javax.activation + com.sun.activation:javax.activation + com.sun.xml.ws:jaxws-ri:pom + lib/trace @@ -264,6 +265,16 @@ io.opentelemetry.javaagent:* + + + + lib/test + + junit:junit + org.hamcrest:hamcrest-core + org.mockito:mockito-core + + diff --git a/hbase-assembly/src/main/assembly/hadoop-two-compat.xml b/hbase-assembly/src/main/assembly/hadoop-two-compat.xml index 750027812e05..acfb69a7ebdf 100644 --- a/hbase-assembly/src/main/assembly/hadoop-two-compat.xml +++ b/hbase-assembly/src/main/assembly/hadoop-two-compat.xml @@ -103,7 +103,6 @@ com.sun.jersey:* commons-logging:commons-logging 
jline:jline - log4j:log4j org.apache.hbase:hbase-shaded-client-byo-hadoop org.apache.hbase:hbase-shaded-client org.apache.hbase:hbase-shaded-mapreduce @@ -111,8 +110,9 @@ org.apache.htrace:htrace-core org.apache.yetus:audience-annotations org.jruby:jruby-complete - org.slf4j:slf4j-api - org.slf4j:slf4j-log4j12 + org.slf4j:* + org.apache.logging.log4j:* + io.opentelemetry.javaagent:* @@ -205,12 +205,12 @@ com.github.stephenc.findbugs:findbugs-annotations commons-logging:commons-logging - log4j:log4j org.apache.htrace:htrace-core4 org.apache.htrace:htrace-core org.apache.yetus:audience-annotations - org.slf4j:slf4j-api - org.slf4j:slf4j-log4j12 + org.slf4j:* + org.apache.logging.log4j:* + io.opentelemetry:* @@ -219,6 +219,12 @@ jline:jline + + lib/trace + + io.opentelemetry.javaagent:* + + lib/jdk11 true @@ -252,8 +258,12 @@ org.glassfish.pfl:* org.jvnet.mimepull:mimepull org.jvnet.staxex:stax-ex - - - + + + + com.sun.xml.ws:jaxws-ri:pom + + + diff --git a/hbase-asyncfs/pom.xml b/hbase-asyncfs/pom.xml index 5b6f727fe3e7..83df649bf2ba 100644 --- a/hbase-asyncfs/pom.xml +++ b/hbase-asyncfs/pom.xml @@ -1,6 +1,5 @@ - - + + 4.0.0 - hbase-build-configuration org.apache.hbase - 2.5.0-SNAPSHOT + hbase-build-configuration + ${revision} ../hbase-build-configuration hbase-asyncfs Apache HBase - Asynchronous FileSystem HBase Asynchronous FileSystem Implementation for WAL - - - - - org.apache.maven.plugins - maven-source-plugin - - - - maven-assembly-plugin - - true - - - - net.revelc.code - warbucks-maven-plugin - - - org.apache.maven.plugins - maven-checkstyle-plugin - - true - - - - @@ -103,19 +75,8 @@ org.bouncycastle - bcprov-jdk15on - test - - - org.apache.hadoop - hadoop-minikdc + bcprov-jdk18on test - - - bouncycastle - bcprov-jdk15 - - org.apache.kerby @@ -149,16 +110,48 @@ test - org.slf4j - slf4j-log4j12 + org.apache.logging.log4j + log4j-api + test + + + org.apache.logging.log4j + log4j-core test - log4j - log4j + org.apache.logging.log4j + log4j-slf4j-impl + test + + + org.apache.logging.log4j + log4j-1.2-api test + + + + + maven-assembly-plugin + + true + + + + net.revelc.code + warbucks-maven-plugin + + + org.apache.maven.plugins + maven-checkstyle-plugin + + true + + + + @@ -166,8 +159,9 @@ hadoop-2.0 - - !hadoop.profile + + + !hadoop.profile @@ -194,6 +188,11 @@ hadoop-minicluster test + + org.apache.hadoop + hadoop-minikdc + test + 4.0.0 - hbase org.apache.hbase - 2.5.0-SNAPSHOT + hbase + ${revision} .. 
hbase-build-configuration - Apache HBase - Build Configuration - Configure the build-support artifacts for maven build pom + Apache HBase - Build Configuration + Configure the build-support artifacts for maven build + + + org.apache.hbase + hbase-annotations + test-jar + test + + + org.apache.yetus + audience-annotations + + @@ -50,18 +62,6 @@ - - - org.apache.hbase - hbase-annotations - test-jar - test - - - org.apache.yetus - audience-annotations - - @@ -69,31 +69,6 @@ false - - - - 9+181-r4173-1 - - - - com.google.errorprone - error_prone_core - ${error-prone.version} - provided - - - com.google.code.findbugs - jsr305 - - - - - com.google.errorprone - javac - ${javac.version} - provided - - @@ -101,17 +76,12 @@ org.apache.maven.plugins maven-compiler-plugin - ${compileSource} - ${compileSource} - - true + ${releaseTarget} true -XDcompilePolicy=simple - -Xplugin:ErrorProne -XepDisableWarningsInGeneratedCode -Xep:FallThrough:OFF -Xep:MutablePublicArray:OFF -Xep:ClassNewInstance:ERROR -Xep:MissingDefault:ERROR - - -J-Xbootclasspath/p:${settings.localRepository}/com/google/errorprone/javac/${javac.version}/javac-${javac.version}.jar + -Xplugin:ErrorProne -XepDisableWarningsInGeneratedCode -XepExcludedPaths:.*/target/.* -Xep:FallThrough:OFF -Xep:MutablePublicArray:OFF -Xep:ClassNewInstance:ERROR -Xep:MissingDefault:ERROR -Xep:BanJNDI:WARN @@ -122,8 +92,44 @@ + + org.apache.maven.plugins + maven-enforcer-plugin + + + jdk11-required + + enforce + + + + + [11,) + + + + + + + + + apple-silicon-workaround + + + mac + aarch64 + + + + osx-x86_64 + + diff --git a/hbase-checkstyle/pom.xml b/hbase-checkstyle/pom.xml index 6771dc8ebb4a..b32925ee07b2 100644 --- a/hbase-checkstyle/pom.xml +++ b/hbase-checkstyle/pom.xml @@ -1,7 +1,5 @@ - + -4.0.0 -org.apache.hbase -hbase-checkstyle -2.5.0-SNAPSHOT -Apache HBase - Checkstyle -Module to hold Checkstyle properties for HBase. - + 4.0.0 + - hbase org.apache.hbase - 2.5.0-SNAPSHOT + hbase + ${revision} .. + org.apache.hbase + hbase-checkstyle + ${revision} + Apache HBase - Checkstyle + Module to hold Checkstyle properties for HBase. 
- - - - - org.apache.maven.plugins - maven-site-plugin - - true - - - - - maven-assembly-plugin - - true - - - - + + + + + maven-source-plugin + + true + + + + org.apache.maven.plugins + maven-site-plugin + + true + + + + + maven-assembly-plugin + + true + + + + diff --git a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml index 9351ecbfe6a0..2d8e880ddf4a 100644 --- a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml +++ b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml @@ -55,4 +55,5 @@ + diff --git a/hbase-client/pom.xml b/hbase-client/pom.xml index 01a74cc6188c..35692b105980 100644 --- a/hbase-client/pom.xml +++ b/hbase-client/pom.xml @@ -1,6 +1,5 @@ - - + + 4.0.0 - hbase-build-configuration org.apache.hbase - 2.5.0-SNAPSHOT + hbase-build-configuration + ${revision} ../hbase-build-configuration hbase-client Apache HBase - Client Client of HBase - - - - - - - maven-assembly-plugin - - true - - - - - org.apache.maven.plugins - maven-source-plugin - - - net.revelc.code - warbucks-maven-plugin - - - @@ -177,13 +154,18 @@ test - org.slf4j - slf4j-log4j12 + org.apache.logging.log4j + log4j-api test - log4j - log4j + org.apache.logging.log4j + log4j-core + test + + + org.apache.logging.log4j + log4j-slf4j-impl test @@ -200,6 +182,11 @@ mockito-core test + + org.hamcrest + hamcrest-library + test + org.apache.commons commons-crypto @@ -211,6 +198,23 @@ + + + + + + + maven-assembly-plugin + + true + + + + net.revelc.code + warbucks-maven-plugin + + + @@ -232,8 +236,9 @@ hadoop-2.0 - - !hadoop.profile + + + !hadoop.profile @@ -388,8 +393,7 @@ lifecycle-mapping - - + diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/Abortable.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/Abortable.java index b137a7da2ceb..b0a5a86d50bb 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/Abortable.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/Abortable.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -23,8 +22,8 @@ /** * Interface to support the aborting of a given server or client. *
<p>
- * This is used primarily for ZooKeeper usage when we could get an unexpected - * and fatal exception, requiring an abort. + * This is used primarily for ZooKeeper usage when we could get an unexpected and fatal exception, + * requiring an abort. *
<p>
* Implemented by the Master, RegionServer, and TableServers (client). */ @@ -33,13 +32,12 @@ public interface Abortable { /** * Abort the server or client. * @param why Why we're aborting. - * @param e Throwable that caused abort. Can be null. + * @param e Throwable that caused abort. Can be null. */ void abort(String why, Throwable e); /** - * It just call another abort method and the Throwable - * parameter is null. + * It just calls another abort method and the Throwable parameter is null. * @param why Why we're aborting. * @see Abortable#abort(String, Throwable) */ diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/AsyncMetaTableAccessor.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/AsyncMetaTableAccessor.java index b1fcd945b7d6..813c060b5200 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/AsyncMetaTableAccessor.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/AsyncMetaTableAccessor.java @@ -1,4 +1,4 @@ -/** +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -17,6 +17,7 @@ */ package org.apache.hadoop.hbase; +import static org.apache.hadoop.hbase.client.RegionLocator.LOCATOR_META_REPLICAS_MODE; import static org.apache.hadoop.hbase.util.FutureUtils.addListener; import java.io.IOException; @@ -29,6 +30,7 @@ import java.util.Optional; import java.util.SortedMap; import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ThreadLocalRandom; import java.util.regex.Matcher; import java.util.regex.Pattern; import java.util.stream.Collectors; @@ -63,21 +65,20 @@ public class AsyncMetaTableAccessor { private static final Logger LOG = LoggerFactory.getLogger(AsyncMetaTableAccessor.class); - /** The delimiter for meta columns for replicaIds > 0 */ private static final char META_REPLICA_ID_DELIMITER = '_'; /** A regex for parsing server columns from meta. 
See above javadoc for meta layout */ - private static final Pattern SERVER_COLUMN_PATTERN = Pattern - .compile("^server(_[0-9a-fA-F]{4})?$"); + private static final Pattern SERVER_COLUMN_PATTERN = + Pattern.compile("^server(_[0-9a-fA-F]{4})?$"); public static CompletableFuture tableExists(AsyncTable metaTable, - TableName tableName) { + TableName tableName) { return getTableState(metaTable, tableName).thenApply(Optional::isPresent); } public static CompletableFuture> getTableState(AsyncTable metaTable, - TableName tableName) { + TableName tableName) { CompletableFuture> future = new CompletableFuture<>(); Get get = new Get(tableName.getName()).addColumn(getTableFamily(), getStateColumn()); long time = EnvironmentEdgeManager.currentTime(); @@ -100,14 +101,9 @@ public static CompletableFuture> getTableState(AsyncTable> getRegionLocation( - AsyncTable metaTable, byte[] regionName) { + /** Returns the HRegionLocation from meta for the given region */ + public static CompletableFuture> + getRegionLocation(AsyncTable metaTable, byte[] regionName) { CompletableFuture> future = new CompletableFuture<>(); try { RegionInfo parsedRegionInfo = MetaTableAccessor.parseRegionInfoFromRegionName(regionName); @@ -127,14 +123,9 @@ public static CompletableFuture> getRegionLocation( return future; } - /** - * Returns the HRegionLocation from meta for the given encoded region name - * @param metaTable - * @param encodedRegionName region we're looking for - * @return HRegionLocation for the given region - */ - public static CompletableFuture> getRegionLocationWithEncodedName( - AsyncTable metaTable, byte[] encodedRegionName) { + /** Returns the HRegionLocation from meta for the given encoded region name */ + public static CompletableFuture> + getRegionLocationWithEncodedName(AsyncTable metaTable, byte[] encodedRegionName) { CompletableFuture> future = new CompletableFuture<>(); addListener( metaTable @@ -149,8 +140,10 @@ public static CompletableFuture> getRegionLocationWith .filter(result -> MetaTableAccessor.getRegionInfo(result) != null).forEach(result -> { getRegionLocations(result).ifPresent(locations -> { for (HRegionLocation location : locations.getRegionLocations()) { - if (location != null && - encodedRegionNameStr.equals(location.getRegion().getEncodedName())) { + if ( + location != null + && encodedRegionNameStr.equals(location.getRegion().getEncodedName()) + ) { future.complete(Optional.of(location)); return; } @@ -166,24 +159,23 @@ private static Optional getTableState(Result r) throws IOException { Cell cell = r.getColumnLatestCell(getTableFamily(), getStateColumn()); if (cell == null) return Optional.empty(); try { - return Optional.of(TableState.parseFrom( - TableName.valueOf(r.getRow()), - Arrays.copyOfRange(cell.getValueArray(), cell.getValueOffset(), cell.getValueOffset() - + cell.getValueLength()))); + return Optional.of( + TableState.parseFrom(TableName.valueOf(r.getRow()), Arrays.copyOfRange(cell.getValueArray(), + cell.getValueOffset(), cell.getValueOffset() + cell.getValueLength()))); } catch (DeserializationException e) { throw new IOException("Failed to parse table state from result: " + r, e); } } /** - * Used to get all region locations for the specific table. - * @param metaTable + * Used to get all region locations for the specific table + * @param metaTable scanner over meta table * @param tableName table we're looking for, can be null for getting all regions * @return the list of region locations. The return value will be wrapped by a * {@link CompletableFuture}. 
*/ public static CompletableFuture> getTableHRegionLocations( - AsyncTable metaTable, TableName tableName) { + AsyncTable metaTable, TableName tableName) { CompletableFuture> future = new CompletableFuture<>(); addListener(getTableRegionsAndLocations(metaTable, tableName, true), (locations, err) -> { if (err != null) { @@ -202,53 +194,53 @@ public static CompletableFuture> getTableHRegionLocations( /** * Used to get table regions' info and server. - * @param metaTable - * @param tableName table we're looking for, can be null for getting all regions + * @param metaTable scanner over meta table + * @param tableName table we're looking for, can be null for getting all regions * @param excludeOfflinedSplitParents don't return split parents * @return the list of regioninfos and server. The return value will be wrapped by a * {@link CompletableFuture}. */ private static CompletableFuture>> getTableRegionsAndLocations( - final AsyncTable metaTable, - final TableName tableName, final boolean excludeOfflinedSplitParents) { + final AsyncTable metaTable, final TableName tableName, + final boolean excludeOfflinedSplitParents) { CompletableFuture>> future = new CompletableFuture<>(); if (TableName.META_TABLE_NAME.equals(tableName)) { future.completeExceptionally(new IOException( - "This method can't be used to locate meta regions;" + " use MetaTableLocator instead")); + "This method can't be used to locate meta regions;" + " use MetaTableLocator instead")); } // Make a version of CollectingVisitor that collects RegionInfo and ServerAddress CollectingVisitor> visitor = new CollectingVisitor>() { - private RegionLocations current = null; - - @Override - public boolean visit(Result r) throws IOException { - Optional currentRegionLocations = getRegionLocations(r); - current = currentRegionLocations.orElse(null); - if (current == null || current.getRegionLocation().getRegion() == null) { - LOG.warn("No serialized RegionInfo in " + r); - return true; + private RegionLocations current = null; + + @Override + public boolean visit(Result r) throws IOException { + Optional currentRegionLocations = getRegionLocations(r); + current = currentRegionLocations.orElse(null); + if (current == null || current.getRegionLocation().getRegion() == null) { + LOG.warn("No serialized RegionInfo in " + r); + return true; + } + RegionInfo hri = current.getRegionLocation().getRegion(); + if (excludeOfflinedSplitParents && hri.isSplitParent()) return true; + // Else call super and add this Result to the collection. + return super.visit(r); } - RegionInfo hri = current.getRegionLocation().getRegion(); - if (excludeOfflinedSplitParents && hri.isSplitParent()) return true; - // Else call super and add this Result to the collection. - return super.visit(r); - } - @Override - void add(Result r) { - if (current == null) { - return; - } - for (HRegionLocation loc : current.getRegionLocations()) { - if (loc != null) { - this.results.add(new Pair(loc.getRegion(), loc - .getServerName())); + @Override + void add(Result r) { + if (current == null) { + return; + } + for (HRegionLocation loc : current.getRegionLocations()) { + if (loc != null) { + this.results + .add(new Pair(loc.getRegion(), loc.getServerName())); + } } } - } - }; + }; addListener(scanMeta(metaTable, tableName, QueryType.REGION, visitor), (v, error) -> { if (error != null) { @@ -262,28 +254,28 @@ void add(Result r) { /** * Performs a scan of META table for given table. 
- * @param metaTable - * @param tableName table withing we scan - * @param type scanned part of meta - * @param visitor Visitor invoked against each row + * @param metaTable scanner over meta table + * @param tableName table within we scan + * @param type scanned part of meta + * @param visitor Visitor invoked against each row */ private static CompletableFuture scanMeta(AsyncTable metaTable, - TableName tableName, QueryType type, final Visitor visitor) { + TableName tableName, QueryType type, final Visitor visitor) { return scanMeta(metaTable, getTableStartRowForMeta(tableName, type), getTableStopRowForMeta(tableName, type), type, Integer.MAX_VALUE, visitor); } /** * Performs a scan of META table for given table. - * @param metaTable - * @param startRow Where to start the scan - * @param stopRow Where to stop the scan - * @param type scanned part of meta - * @param maxRows maximum rows to return - * @param visitor Visitor invoked against each row + * @param metaTable scanner over meta table + * @param startRow Where to start the scan + * @param stopRow Where to stop the scan + * @param type scanned part of meta + * @param maxRows maximum rows to return + * @param visitor Visitor invoked against each row */ private static CompletableFuture scanMeta(AsyncTable metaTable, - byte[] startRow, byte[] stopRow, QueryType type, int maxRows, final Visitor visitor) { + byte[] startRow, byte[] stopRow, QueryType type, int maxRows, final Visitor visitor) { int rowUpperLimit = maxRows > 0 ? maxRows : Integer.MAX_VALUE; Scan scan = getMetaScan(metaTable, rowUpperLimit); for (byte[] family : type.getFamilies()) { @@ -298,12 +290,42 @@ private static CompletableFuture scanMeta(AsyncTable 1) { + int replicaId = ThreadLocalRandom.current().nextInt(numOfReplicas); + + // When the replicaId is 0, do not set to Consistency.TIMELINE + if (replicaId > 0) { + scan.setReplicaId(replicaId); + scan.setConsistency(Consistency.TIMELINE); + } + } + metaTable.scan(scan, new MetaTableScanResultConsumer(rowUpperLimit, visitor, future)); + }); + } else { + if (metaReplicaMode == CatalogReplicaMode.HEDGED_READ) { + scan.setConsistency(Consistency.TIMELINE); + } + metaTable.scan(scan, new MetaTableScanResultConsumer(rowUpperLimit, visitor, future)); + } + return future; } @@ -318,7 +340,7 @@ private static final class MetaTableScanResultConsumer implements AdvancedScanRe private final CompletableFuture future; MetaTableScanResultConsumer(int rowUpperLimit, Visitor visitor, - CompletableFuture future) { + CompletableFuture future) { this.rowUpperLimit = rowUpperLimit; this.visitor = visitor; this.future = future; @@ -332,7 +354,7 @@ public void onError(Throwable error) { @Override @edu.umd.cs.findbugs.annotations.SuppressWarnings(value = "NP_NONNULL_PARAM_VIOLATION", - justification = "https://github.com/findbugsproject/findbugs/issues/79") + justification = "https://github.com/findbugsproject/findbugs/issues/79") public void onComplete() { future.complete(null); } @@ -366,8 +388,10 @@ private static Scan getMetaScan(AsyncTable metaTable, int rowUpperLimit) { Scan scan = new Scan(); int scannerCaching = metaTable.getConfiguration().getInt(HConstants.HBASE_META_SCANNER_CACHING, HConstants.DEFAULT_HBASE_META_SCANNER_CACHING); - if (metaTable.getConfiguration().getBoolean(HConstants.USE_META_REPLICAS, - HConstants.DEFAULT_USE_META_REPLICAS)) { + if ( + metaTable.getConfiguration().getBoolean(HConstants.USE_META_REPLICAS, + HConstants.DEFAULT_USE_META_REPLICAS) + ) { scan.setConsistency(Consistency.TIMELINE); } if (rowUpperLimit 
<= scannerCaching) { @@ -384,9 +408,13 @@ private static Scan getMetaScan(AsyncTable metaTable, int rowUpperLimit) { * can't deserialize the result. */ private static Optional getRegionLocations(final Result r) { - if (r == null) return Optional.empty(); + if (r == null) { + return Optional.empty(); + } Optional regionInfo = getHRegionInfo(r, getRegionInfoColumn()); - if (!regionInfo.isPresent()) return Optional.empty(); + if (!regionInfo.isPresent()) { + return Optional.empty(); + } List locations = new ArrayList(1); NavigableMap> familyMap = r.getNoVersionMap(); @@ -394,15 +422,18 @@ private static Optional getRegionLocations(final Result r) { locations.add(getRegionLocation(r, regionInfo.get(), 0)); NavigableMap infoMap = familyMap.get(getCatalogFamily()); - if (infoMap == null) return Optional.of(new RegionLocations(locations)); + if (infoMap == null) { + return Optional.of(new RegionLocations(locations)); + } // iterate until all serverName columns are seen int replicaId = 0; byte[] serverColumn = getServerColumn(replicaId); - SortedMap serverMap = null; - serverMap = infoMap.tailMap(serverColumn, false); + SortedMap serverMap = infoMap.tailMap(serverColumn, false); - if (serverMap.isEmpty()) return Optional.of(new RegionLocations(locations)); + if (serverMap.isEmpty()) { + return Optional.of(new RegionLocations(locations)); + } for (Map.Entry entry : serverMap.entrySet()) { replicaId = parseReplicaIdFromServerColumn(entry.getKey()); @@ -423,16 +454,15 @@ private static Optional getRegionLocations(final Result r) { } /** - * Returns the HRegionLocation parsed from the given meta row Result - * for the given regionInfo and replicaId. The regionInfo can be the default region info - * for the replica. - * @param r the meta row result + * Returns the HRegionLocation parsed from the given meta row Result for the given regionInfo and + * replicaId. The regionInfo can be the default region info for the replica. + * @param r the meta row result * @param regionInfo RegionInfo for default replica - * @param replicaId the replicaId for the HRegionLocation + * @param replicaId the replicaId for the HRegionLocation * @return HRegionLocation parsed from the given meta row Result for the given replicaId */ private static HRegionLocation getRegionLocation(final Result r, final RegionInfo regionInfo, - final int replicaId) { + final int replicaId) { Optional serverName = getServerName(r, replicaId); long seqNum = getSeqNumDuringOpen(r, replicaId); RegionInfo replicaInfo = RegionReplicaUtil.getRegionInfoForReplica(regionInfo, replicaId); @@ -448,8 +478,8 @@ private static Optional getServerName(final Result r, final int repl byte[] serverColumn = getServerColumn(replicaId); Cell cell = r.getColumnLatestCell(getCatalogFamily(), serverColumn); if (cell == null || cell.getValueLength() == 0) return Optional.empty(); - String hostAndPort = Bytes.toString(cell.getValueArray(), cell.getValueOffset(), - cell.getValueLength()); + String hostAndPort = + Bytes.toString(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength()); byte[] startcodeColumn = getStartCodeColumn(replicaId); cell = r.getColumnLatestCell(getCatalogFamily(), startcodeColumn); if (cell == null || cell.getValueLength() == 0) return Optional.empty(); @@ -463,8 +493,8 @@ private static Optional getServerName(final Result r, final int repl } /** - * The latest seqnum that the server writing to meta observed when opening the region. - * E.g. the seqNum when the result of {@link #getServerName(Result, int)} was written. 
+ * The latest seqnum that the server writing to meta observed when opening the region. E.g. the + * seqNum when the result of {@link #getServerName(Result, int)} was written. * @param r Result to pull the seqNum from * @return SeqNum, or HConstants.NO_SEQNUM if there's no value written. */ @@ -533,7 +563,7 @@ private static byte[] getTableStopRowForMeta(TableName tableName, QueryType type /** * Returns the RegionInfo object from the column {@link HConstants#CATALOG_FAMILY} and * qualifier of the catalog table result. - * @param r a Result object from the catalog table scan + * @param r a Result object from the catalog table scan * @param qualifier Column family qualifier * @return An RegionInfo instance. */ @@ -585,7 +615,7 @@ private static byte[] getServerColumn(int replicaId) { return replicaId == 0 ? HConstants.SERVER_QUALIFIER : Bytes.toBytes(HConstants.SERVER_QUALIFIER_STR + META_REPLICA_ID_DELIMITER - + String.format(RegionInfo.REPLICA_ID_FORMAT, replicaId)); + + String.format(RegionInfo.REPLICA_ID_FORMAT, replicaId)); } /** @@ -597,7 +627,7 @@ private static byte[] getStartCodeColumn(int replicaId) { return replicaId == 0 ? HConstants.STARTCODE_QUALIFIER : Bytes.toBytes(HConstants.STARTCODE_QUALIFIER_STR + META_REPLICA_ID_DELIMITER - + String.format(RegionInfo.REPLICA_ID_FORMAT, replicaId)); + + String.format(RegionInfo.REPLICA_ID_FORMAT, replicaId)); } /** @@ -609,12 +639,12 @@ private static byte[] getSeqNumColumn(int replicaId) { return replicaId == 0 ? HConstants.SEQNUM_QUALIFIER : Bytes.toBytes(HConstants.SEQNUM_QUALIFIER_STR + META_REPLICA_ID_DELIMITER - + String.format(RegionInfo.REPLICA_ID_FORMAT, replicaId)); + + String.format(RegionInfo.REPLICA_ID_FORMAT, replicaId)); } /** - * Parses the replicaId from the server column qualifier. See top of the class javadoc - * for the actual meta layout + * Parses the replicaId from the server column qualifier. See top of the class javadoc for the + * actual meta layout * @param serverColumn the column qualifier * @return an int for the replicaId */ diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/CacheEvictionStats.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/CacheEvictionStats.java index 91cedd60299d..615b3a467e6e 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/CacheEvictionStats.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/CacheEvictionStats.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. 
See the NOTICE file * distributed with this work for additional information @@ -21,7 +20,6 @@ import java.util.Collections; import java.util.Map; import java.util.stream.Collectors; - import org.apache.hadoop.hbase.client.RegionInfo; import org.apache.yetus.audience.InterfaceAudience; @@ -56,9 +54,8 @@ public int getExceptionCount() { private String getFailedRegions() { return exceptions.keySet().stream() - .map(regionName -> RegionInfo.prettyPrint(RegionInfo.encodeRegionName(regionName))) - .collect(Collectors.toList()) - .toString(); + .map(regionName -> RegionInfo.prettyPrint(RegionInfo.encodeRegionName(regionName))) + .collect(Collectors.toList()).toString(); } @InterfaceAudience.Private @@ -68,11 +65,8 @@ public static CacheEvictionStatsBuilder builder() { @Override public String toString() { - return "CacheEvictionStats{" + - "evictedBlocks=" + evictedBlocks + - ", maxCacheSize=" + maxCacheSize + - ", failedRegionsSize=" + getExceptionCount() + - ", failedRegions=" + getFailedRegions() + - '}'; + return "CacheEvictionStats{" + "evictedBlocks=" + evictedBlocks + ", maxCacheSize=" + + maxCacheSize + ", failedRegionsSize=" + getExceptionCount() + ", failedRegions=" + + getFailedRegions() + '}'; } } diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/CacheEvictionStatsAggregator.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/CacheEvictionStatsAggregator.java index 85d68dcc08bc..fabe7f030278 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/CacheEvictionStatsAggregator.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/CacheEvictionStatsAggregator.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -39,4 +38,4 @@ public synchronized void append(CacheEvictionStats stats) { public synchronized CacheEvictionStats sum() { return this.builder.build(); } -} \ No newline at end of file +} diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/CacheEvictionStatsBuilder.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/CacheEvictionStatsBuilder.java index d9e1400da16b..4b31d98611bc 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/CacheEvictionStatsBuilder.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/CacheEvictionStatsBuilder.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -20,7 +19,6 @@ import java.util.HashMap; import java.util.Map; - import org.apache.yetus.audience.InterfaceAudience; @InterfaceAudience.Private @@ -42,7 +40,7 @@ public CacheEvictionStatsBuilder withMaxCacheSize(long maxCacheSize) { return this; } - public void addException(byte[] regionName, Throwable ie){ + public void addException(byte[] regionName, Throwable ie) { exceptions.put(regionName, ie); } diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/CallDroppedException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/CallDroppedException.java index 13ab3ed47cee..8bfde779176e 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/CallDroppedException.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/CallDroppedException.java @@ -1,4 +1,4 @@ -/** +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. 
See the NOTICE file * distributed with this work for additional information @@ -15,27 +15,26 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase; -import java.io.IOException; - import org.apache.yetus.audience.InterfaceAudience; /** - * Returned to the clients when their request was discarded due to server being overloaded. - * Clients should retry upon receiving it. + * Returned to the clients when their request was discarded due to server being overloaded. Clients + * should retry upon receiving it. */ @SuppressWarnings("serial") @InterfaceAudience.Public -public class CallDroppedException extends IOException { +public class CallDroppedException extends HBaseServerException { public CallDroppedException() { - super(); + // For now all call drops are due to server being overloaded. + // We could decouple this if desired. + super(true); } // Absence of this constructor prevents proper unwrapping of // remote exception on the client side public CallDroppedException(String message) { - super(message); + super(true, message); } } diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/CallQueueTooBigException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/CallQueueTooBigException.java index 12fa242693c8..ecad4d9f0bc2 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/CallQueueTooBigException.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/CallQueueTooBigException.java @@ -1,4 +1,4 @@ -/** +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,16 +15,17 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase; -import java.io.IOException; - import org.apache.yetus.audience.InterfaceAudience; +/** + * Returned to clients when their request was dropped because the call queue was too big to accept a + * new call. Clients should retry upon receiving it. + */ @SuppressWarnings("serial") @InterfaceAudience.Public -public class CallQueueTooBigException extends IOException { +public class CallQueueTooBigException extends CallDroppedException { public CallQueueTooBigException() { super(); } diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/CatalogReplicaMode.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/CatalogReplicaMode.java similarity index 61% rename from hbase-client/src/main/java/org/apache/hadoop/hbase/client/CatalogReplicaMode.java rename to hbase-client/src/main/java/org/apache/hadoop/hbase/CatalogReplicaMode.java index 40062e32e83c..b89673d45a88 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/CatalogReplicaMode.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/CatalogReplicaMode.java @@ -6,34 +6,34 @@ * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at - * http://www.apache.org/licenses/LICENSE-2.0 + * + * http://www.apache.org/licenses/LICENSE-2.0 + * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. */ -package org.apache.hadoop.hbase.client; +package org.apache.hadoop.hbase; import org.apache.yetus.audience.InterfaceAudience; /** - *

There are two modes with catalog replica support.
- *
- * HEDGED_READ - Client sends requests to the primary region first, within a
- * configured amount of time, if there is no response coming back,
- * client sends requests to all replica regions and takes the first
- * response.
- *
- * LOAD_BALANCE - Client sends requests to replica regions in a round-robin mode,
- * if results from replica regions are stale, next time, client sends requests for
- * these stale locations to the primary region. In this mode, scan
- * requests are load balanced across all replica regions.
+ *
+ * There are two modes with catalog replica support.
+ *
+ * HEDGED_READ - Client sends requests to the primary region first, within a configured amount
+ * of time, if there is no response coming back, client sends requests to all replica regions and
+ * takes the first response.
+ *
+ * LOAD_BALANCE - Client sends requests to replica regions in a round-robin mode, if results
+ * from replica regions are stale, next time, client sends requests for these stale locations to the
+ * primary region. In this mode, scan requests are load balanced across all replica regions.
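As an illustration (not part of the patch above): a minimal sketch of how a client could opt into the LOAD_BALANCE mode described just above. Only the CatalogReplicaMode enum and the RegionLocator.LOCATOR_META_REPLICAS_MODE key come from the patch (the same key the AsyncMetaTableAccessor hunk earlier imports); the configuration and connection setup around them is illustrative, and whether meta reads actually reach replicas depends on the cluster having meta replicas enabled.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.CatalogReplicaMode;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

final class CatalogReplicaModeExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Spread meta lookups over replica regions; fromString() matches the mode name
    // case-insensitively, so reuse the enum's own toString() instead of a hand-typed literal.
    conf.set(RegionLocator.LOCATOR_META_REPLICAS_MODE,
      CatalogReplicaMode.LOAD_BALANCE.toString());
    try (Connection connection = ConnectionFactory.createConnection(conf)) {
      // Meta scans issued through this connection may now use TIMELINE consistency against
      // replica regions, as the scanMeta() change earlier in this patch does.
      System.out.println("meta replica mode = "
        + conf.get(RegionLocator.LOCATOR_META_REPLICAS_MODE));
    }
  }
}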
*/ @InterfaceAudience.Private -enum CatalogReplicaMode { +public enum CatalogReplicaMode { NONE { @Override public String toString() { @@ -54,7 +54,7 @@ public String toString() { }; public static CatalogReplicaMode fromString(final String value) { - for(CatalogReplicaMode mode : values()) { + for (CatalogReplicaMode mode : values()) { if (mode.toString().equalsIgnoreCase(value)) { return mode; } diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/ClockOutOfSyncException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/ClockOutOfSyncException.java index a63ca6936ec1..1afcb30ece01 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/ClockOutOfSyncException.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/ClockOutOfSyncException.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -19,12 +18,10 @@ package org.apache.hadoop.hbase; import java.io.IOException; - import org.apache.yetus.audience.InterfaceAudience; /** - * This exception is thrown by the master when a region server clock skew is - * too high. + * This exception is thrown by the master when a region server clock skew is too high. */ @SuppressWarnings("serial") @InterfaceAudience.Public diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterId.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterId.java index 1dd01faf808a..67438677dadd 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterId.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterId.java @@ -15,29 +15,27 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase; import java.io.IOException; import java.util.UUID; - -import org.apache.yetus.audience.InterfaceAudience; import org.apache.hadoop.hbase.exceptions.DeserializationException; +import org.apache.hadoop.hbase.util.Bytes; +import org.apache.yetus.audience.InterfaceAudience; + import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterIdProtos; -import org.apache.hadoop.hbase.util.Bytes; /** - * The identifier for this cluster. - * It is serialized to the filesystem and up into zookeeper. This is a container for the id. - * Also knows how to serialize and deserialize the cluster id. + * The identifier for this cluster. It is serialized to the filesystem and up into zookeeper. This + * is a container for the id. Also knows how to serialize and deserialize the cluster id. */ @InterfaceAudience.Private public class ClusterId { private final String id; /** - * New ClusterID. Generates a uniqueid. + * New ClusterID. Generates a uniqueid. 
*/ public ClusterId() { this(UUID.randomUUID().toString()); @@ -47,20 +45,18 @@ public ClusterId(final String uuid) { this.id = uuid; } - /** - * @return The clusterid serialized using pb w/ pb magic prefix - */ - public byte [] toByteArray() { + /** Returns The clusterid serialized using pb w/ pb magic prefix */ + public byte[] toByteArray() { return ProtobufUtil.prependPBMagic(convert().toByteArray()); } /** + * Parse the serialized representation of the {@link ClusterId} * @param bytes A pb serialized {@link ClusterId} instance with pb magic prefix * @return An instance of {@link ClusterId} made from bytes - * @throws DeserializationException * @see #toByteArray() */ - public static ClusterId parseFrom(final byte [] bytes) throws DeserializationException { + public static ClusterId parseFrom(final byte[] bytes) throws DeserializationException { if (ProtobufUtil.isPBMagicPrefix(bytes)) { int pblen = ProtobufUtil.lengthOfPBMagic(); ClusterIdProtos.ClusterId.Builder builder = ClusterIdProtos.ClusterId.newBuilder(); @@ -78,18 +74,13 @@ public static ClusterId parseFrom(final byte [] bytes) throws DeserializationExc } } - /** - * @return A pb instance to represent this instance. - */ + /** Returns A pb instance to represent this instance. */ public ClusterIdProtos.ClusterId convert() { ClusterIdProtos.ClusterId.Builder builder = ClusterIdProtos.ClusterId.newBuilder(); return builder.setClusterId(this.id).build(); } - /** - * @param cid - * @return A {@link ClusterId} made from the passed in cid - */ + /** Returns A {@link ClusterId} made from the passed in cid */ public static ClusterId convert(final ClusterIdProtos.ClusterId cid) { return new ClusterId(cid.getClusterId()); } diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterMetrics.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterMetrics.java index 497ab938856b..a8a1493c349a 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterMetrics.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterMetrics.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -16,7 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase; import edu.umd.cs.findbugs.annotations.Nullable; @@ -39,65 +37,62 @@ *
* • The average cluster load.
* • The number of regions deployed on the cluster.
* • The number of requests since last report.
- * • Detailed region server loading and resource usage information,
- *   per server and per region.
+ * • Detailed region server loading and resource usage information, per server and per
+ *   region.
* • Regions in transition at master
* • The unique cluster ID
*
- * {@link Option} provides a way to get desired ClusterStatus information.
- * The following codes will get all the cluster information.
+ * {@link Option} provides a way to get desired ClusterStatus information. The following
+ * codes will get all the cluster information.
+ *
    - * {@code
    - * // Original version still works
    - * Admin admin = connection.getAdmin();
    - * ClusterMetrics metrics = admin.getClusterStatus();
    - * // or below, a new version which has the same effects
    - * ClusterMetrics metrics = admin.getClusterStatus(EnumSet.allOf(Option.class));
    + * {
    + *   @code
    + *   // Original version still works
    + *   Admin admin = connection.getAdmin();
    + *   ClusterMetrics metrics = admin.getClusterStatus();
    + *   // or below, a new version which has the same effects
    + *   ClusterMetrics metrics = admin.getClusterStatus(EnumSet.allOf(Option.class));
      * }
      * 
- * If information about live servers is the only wanted.
- * then codes in the following way:
+ *
+ * If information about live servers is the only wanted. then codes in the following way:
+ *
    - * {@code
    - * Admin admin = connection.getAdmin();
    - * ClusterMetrics metrics = admin.getClusterStatus(EnumSet.of(Option.LIVE_SERVERS));
    + * {
    + *   @code
    + *   Admin admin = connection.getAdmin();
    + *   ClusterMetrics metrics = admin.getClusterStatus(EnumSet.of(Option.LIVE_SERVERS));
      * }
      * 
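Both javadoc snippets above still go through the deprecated getClusterStatus() call. For contrast, a minimal sketch of the same request through the non-deprecated Admin#getClusterMetrics API, also exercising the UNKNOWN_SERVERS option and getUnknownServerNames() that this patch introduces; the helper class and the already-open Connection are assumptions for illustration only.

import java.io.IOException;
import java.util.EnumSet;
import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

final class ClusterMetricsExample {
  // Fetches only the requested slices of cluster state.
  static void report(Connection connection) throws IOException {
    try (Admin admin = connection.getAdmin()) {
      ClusterMetrics metrics = admin.getClusterMetrics(
        EnumSet.of(ClusterMetrics.Option.LIVE_SERVERS, ClusterMetrics.Option.UNKNOWN_SERVERS));
      System.out.println("live servers: " + metrics.getLiveServerMetrics().size());
      System.out.println("unknown servers: " + metrics.getUnknownServerNames().size());
    }
  }
}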
    */ @InterfaceAudience.Public public interface ClusterMetrics { - /** - * @return the HBase version string as reported by the HMaster - */ + /** Returns the HBase version string as reported by the HMaster */ @Nullable String getHBaseVersion(); - /** - * @return the names of region servers on the dead list - */ + /** Returns the names of region servers on the dead list */ List getDeadServerNames(); - /** - * @return the names of region servers on the live list - */ + /** Returns the names of region servers on the unknown list */ + List getUnknownServerNames(); + + /** Returns the names of region servers on the live list */ Map getLiveServerMetrics(); - /** - * @return the number of regions deployed on the cluster - */ + /** Returns the number of regions deployed on the cluster */ default int getRegionCount() { return getLiveServerMetrics().entrySet().stream() - .mapToInt(v -> v.getValue().getRegionMetrics().size()).sum(); + .mapToInt(v -> v.getValue().getRegionMetrics().size()).sum(); } - /** - * @return the number of requests since last report - */ + /** Returns the number of requests since last report */ default long getRequestCount() { return getLiveServerMetrics().entrySet().stream() - .flatMap(v -> v.getValue().getRegionMetrics().values().stream()) - .mapToLong(RegionMetrics::getRequestCount).sum(); + .flatMap(v -> v.getValue().getRegionMetrics().values().stream()) + .mapToLong(RegionMetrics::getRequestCount).sum(); } /** @@ -107,9 +102,7 @@ default long getRequestCount() { @Nullable ServerName getMasterName(); - /** - * @return the names of backup masters - */ + /** Returns the names of backup masters */ List getBackupMasterNames(); @InterfaceAudience.Private @@ -122,17 +115,15 @@ default long getRequestCount() { default long getLastMajorCompactionTimestamp(TableName table) { return getLiveServerMetrics().values().stream() - .flatMap(s -> s.getRegionMetrics().values().stream()) - .filter(r -> RegionInfo.getTable(r.getRegionName()).equals(table)) - .mapToLong(RegionMetrics::getLastMajorCompactionTimestamp).min().orElse(0); + .flatMap(s -> s.getRegionMetrics().values().stream()) + .filter(r -> RegionInfo.getTable(r.getRegionName()).equals(table)) + .mapToLong(RegionMetrics::getLastMajorCompactionTimestamp).min().orElse(0); } default long getLastMajorCompactionTimestamp(byte[] regionName) { return getLiveServerMetrics().values().stream() - .filter(s -> s.getRegionMetrics().containsKey(regionName)) - .findAny() - .map(s -> s.getRegionMetrics().get(regionName).getLastMajorCompactionTimestamp()) - .orElse(0L); + .filter(s -> s.getRegionMetrics().containsKey(regionName)).findAny() + .map(s -> s.getRegionMetrics().get(regionName).getLastMajorCompactionTimestamp()).orElse(0L); } @Nullable @@ -142,25 +133,28 @@ default long getLastMajorCompactionTimestamp(byte[] regionName) { List getServersName(); - /** - * @return the average cluster load - */ + /** Returns the average cluster load */ default double getAverageLoad() { int serverSize = getLiveServerMetrics().size(); if (serverSize == 0) { return 0; } - return (double)getRegionCount() / (double)serverSize; + return (double) getRegionCount() / (double) serverSize; } /** - * Provide region states count for given table. - * e.g howmany regions of give table are opened/closed/rit etc - * + * Provide region states count for given table. 
e.g howmany regions of give table are + * opened/closed/rit etc * @return map of table to region states count */ Map getTableRegionStatesCount(); + /** + * Provide the list of master tasks + */ + @Nullable + List getMasterTasks(); + /** * Kinds of ClusterMetrics */ @@ -185,6 +179,10 @@ enum Option { * metrics about dead region servers */ DEAD_SERVERS, + /** + * metrics about unknown region servers + */ + UNKNOWN_SERVERS, /** * metrics about master name */ @@ -213,5 +211,9 @@ enum Option { * metrics about table to no of regions status count */ TABLE_TO_REGIONS_COUNT, + /** + * metrics about monitored tasks + */ + TASKS, } } diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterMetricsBuilder.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterMetricsBuilder.java index 493fe71b8b0f..9ca65463e022 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterMetricsBuilder.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterMetricsBuilder.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -16,7 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase; import edu.umd.cs.findbugs.annotations.Nullable; @@ -26,13 +24,13 @@ import java.util.Map; import java.util.TreeMap; import java.util.stream.Collectors; - import org.apache.hadoop.hbase.client.RegionStatesCount; import org.apache.hadoop.hbase.master.RegionState; import org.apache.yetus.audience.InterfaceAudience; import org.apache.hbase.thirdparty.com.google.common.base.Preconditions; import org.apache.hbase.thirdparty.com.google.protobuf.UnsafeByteOperations; + import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterStatusProtos; import org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterStatusProtos.Option; @@ -43,45 +41,43 @@ public final class ClusterMetricsBuilder { public static ClusterStatusProtos.ClusterStatus toClusterStatus(ClusterMetrics metrics) { - ClusterStatusProtos.ClusterStatus.Builder builder - = ClusterStatusProtos.ClusterStatus.newBuilder() - .addAllBackupMasters(metrics.getBackupMasterNames().stream() - .map(ProtobufUtil::toServerName).collect(Collectors.toList())) - .addAllDeadServers(metrics.getDeadServerNames().stream() - .map(ProtobufUtil::toServerName).collect(Collectors.toList())) + ClusterStatusProtos.ClusterStatus.Builder builder = + ClusterStatusProtos.ClusterStatus.newBuilder() + .addAllBackupMasters(metrics.getBackupMasterNames().stream().map(ProtobufUtil::toServerName) + .collect(Collectors.toList())) + .addAllDeadServers(metrics.getDeadServerNames().stream().map(ProtobufUtil::toServerName) + .collect(Collectors.toList())) + .addAllUnknownServers(metrics.getUnknownServerNames().stream() + .map(ProtobufUtil::toServerName).collect(Collectors.toList())) .addAllLiveServers(metrics.getLiveServerMetrics().entrySet().stream() - .map(s -> ClusterStatusProtos.LiveServerInfo - .newBuilder() - .setServer(ProtobufUtil.toServerName(s.getKey())) - .setServerLoad(ServerMetricsBuilder.toServerLoad(s.getValue())) - .build()) - .collect(Collectors.toList())) + .map(s -> ClusterStatusProtos.LiveServerInfo.newBuilder() + .setServer(ProtobufUtil.toServerName(s.getKey())) + .setServerLoad(ServerMetricsBuilder.toServerLoad(s.getValue())).build()) + 
.collect(Collectors.toList())) .addAllMasterCoprocessors(metrics.getMasterCoprocessorNames().stream() - .map(n -> HBaseProtos.Coprocessor.newBuilder().setName(n).build()) - .collect(Collectors.toList())) + .map(n -> HBaseProtos.Coprocessor.newBuilder().setName(n).build()) + .collect(Collectors.toList())) .addAllRegionsInTransition(metrics.getRegionStatesInTransition().stream() - .map(r -> ClusterStatusProtos.RegionInTransition - .newBuilder() - .setSpec(HBaseProtos.RegionSpecifier - .newBuilder() - .setType(HBaseProtos.RegionSpecifier.RegionSpecifierType.REGION_NAME) - .setValue(UnsafeByteOperations.unsafeWrap(r.getRegion().getRegionName())) - .build()) - .setRegionState(r.convert()) - .build()) - .collect(Collectors.toList())) + .map(r -> ClusterStatusProtos.RegionInTransition.newBuilder() + .setSpec(HBaseProtos.RegionSpecifier.newBuilder() + .setType(HBaseProtos.RegionSpecifier.RegionSpecifierType.REGION_NAME) + .setValue(UnsafeByteOperations.unsafeWrap(r.getRegion().getRegionName())).build()) + .setRegionState(r.convert()).build()) + .collect(Collectors.toList())) .setMasterInfoPort(metrics.getMasterInfoPort()) .addAllServersName(metrics.getServersName().stream().map(ProtobufUtil::toServerName) .collect(Collectors.toList())) .addAllTableRegionStatesCount(metrics.getTableRegionStatesCount().entrySet().stream() - .map(status -> - ClusterStatusProtos.TableRegionStatesCount.newBuilder() - .setTableName(ProtobufUtil.toProtoTableName((status.getKey()))) - .setRegionStatesCount(ProtobufUtil.toTableRegionStatesCount(status.getValue())) - .build()) + .map(status -> ClusterStatusProtos.TableRegionStatesCount.newBuilder() + .setTableName(ProtobufUtil.toProtoTableName(status.getKey())) + .setRegionStatesCount(ProtobufUtil.toTableRegionStatesCount(status.getValue())).build()) .collect(Collectors.toList())); if (metrics.getMasterName() != null) { - builder.setMaster(ProtobufUtil.toServerName((metrics.getMasterName()))); + builder.setMaster(ProtobufUtil.toServerName(metrics.getMasterName())); + } + if (metrics.getMasterTasks() != null) { + builder.addAllMasterTasks(metrics.getMasterTasks().stream() + .map(t -> ProtobufUtil.toServerTask(t)).collect(Collectors.toList())); } if (metrics.getBalancerOn() != null) { builder.setBalancerOn(metrics.getBalancerOn()); @@ -91,38 +87,35 @@ public static ClusterStatusProtos.ClusterStatus toClusterStatus(ClusterMetrics m } if (metrics.getHBaseVersion() != null) { builder.setHbaseVersion( - FSProtos.HBaseVersionFileContent.newBuilder() - .setVersion(metrics.getHBaseVersion())); + FSProtos.HBaseVersionFileContent.newBuilder().setVersion(metrics.getHBaseVersion())); } return builder.build(); } - public static ClusterMetrics toClusterMetrics( - ClusterStatusProtos.ClusterStatus proto) { + public static ClusterMetrics toClusterMetrics(ClusterStatusProtos.ClusterStatus proto) { ClusterMetricsBuilder builder = ClusterMetricsBuilder.newBuilder(); - builder.setLiveServerMetrics(proto.getLiveServersList().stream() + builder + .setLiveServerMetrics(proto.getLiveServersList().stream() .collect(Collectors.toMap(e -> ProtobufUtil.toServerName(e.getServer()), - ServerMetricsBuilder::toServerMetrics))) - .setDeadServerNames(proto.getDeadServersList().stream() - .map(ProtobufUtil::toServerName) - .collect(Collectors.toList())) - .setBackerMasterNames(proto.getBackupMastersList().stream() - .map(ProtobufUtil::toServerName) - .collect(Collectors.toList())) - .setRegionsInTransition(proto.getRegionsInTransitionList().stream() - 
.map(ClusterStatusProtos.RegionInTransition::getRegionState) - .map(RegionState::convert) - .collect(Collectors.toList())) - .setMasterCoprocessorNames(proto.getMasterCoprocessorsList().stream() - .map(HBaseProtos.Coprocessor::getName) - .collect(Collectors.toList())) - .setServerNames(proto.getServersNameList().stream().map(ProtobufUtil::toServerName) - .collect(Collectors.toList())) - .setTableRegionStatesCount( - proto.getTableRegionStatesCountList().stream() - .collect(Collectors.toMap( - e -> ProtobufUtil.toTableName(e.getTableName()), - e -> ProtobufUtil.toTableRegionStatesCount(e.getRegionStatesCount())))); + ServerMetricsBuilder::toServerMetrics))) + .setDeadServerNames(proto.getDeadServersList().stream().map(ProtobufUtil::toServerName) + .collect(Collectors.toList())) + .setUnknownServerNames(proto.getUnknownServersList().stream().map(ProtobufUtil::toServerName) + .collect(Collectors.toList())) + .setBackerMasterNames(proto.getBackupMastersList().stream().map(ProtobufUtil::toServerName) + .collect(Collectors.toList())) + .setRegionsInTransition(proto.getRegionsInTransitionList().stream() + .map(ClusterStatusProtos.RegionInTransition::getRegionState).map(RegionState::convert) + .collect(Collectors.toList())) + .setMasterCoprocessorNames(proto.getMasterCoprocessorsList().stream() + .map(HBaseProtos.Coprocessor::getName).collect(Collectors.toList())) + .setServerNames(proto.getServersNameList().stream().map(ProtobufUtil::toServerName) + .collect(Collectors.toList())) + .setTableRegionStatesCount(proto.getTableRegionStatesCountList().stream() + .collect(Collectors.toMap(e -> ProtobufUtil.toTableName(e.getTableName()), + e -> ProtobufUtil.toTableRegionStatesCount(e.getRegionStatesCount())))) + .setMasterTasks(proto.getMasterTasksList().stream().map(t -> ProtobufUtil.getServerTask(t)) + .collect(Collectors.toList())); if (proto.hasClusterId()) { builder.setClusterId(ClusterId.convert(proto.getClusterId()).toString()); } @@ -152,20 +145,37 @@ public static ClusterMetrics toClusterMetrics( */ public static ClusterMetrics.Option toOption(ClusterStatusProtos.Option option) { switch (option) { - case HBASE_VERSION: return ClusterMetrics.Option.HBASE_VERSION; - case LIVE_SERVERS: return ClusterMetrics.Option.LIVE_SERVERS; - case DEAD_SERVERS: return ClusterMetrics.Option.DEAD_SERVERS; - case REGIONS_IN_TRANSITION: return ClusterMetrics.Option.REGIONS_IN_TRANSITION; - case CLUSTER_ID: return ClusterMetrics.Option.CLUSTER_ID; - case MASTER_COPROCESSORS: return ClusterMetrics.Option.MASTER_COPROCESSORS; - case MASTER: return ClusterMetrics.Option.MASTER; - case BACKUP_MASTERS: return ClusterMetrics.Option.BACKUP_MASTERS; - case BALANCER_ON: return ClusterMetrics.Option.BALANCER_ON; - case SERVERS_NAME: return ClusterMetrics.Option.SERVERS_NAME; - case MASTER_INFO_PORT: return ClusterMetrics.Option.MASTER_INFO_PORT; - case TABLE_TO_REGIONS_COUNT: return ClusterMetrics.Option.TABLE_TO_REGIONS_COUNT; + case HBASE_VERSION: + return ClusterMetrics.Option.HBASE_VERSION; + case LIVE_SERVERS: + return ClusterMetrics.Option.LIVE_SERVERS; + case DEAD_SERVERS: + return ClusterMetrics.Option.DEAD_SERVERS; + case UNKNOWN_SERVERS: + return ClusterMetrics.Option.UNKNOWN_SERVERS; + case REGIONS_IN_TRANSITION: + return ClusterMetrics.Option.REGIONS_IN_TRANSITION; + case CLUSTER_ID: + return ClusterMetrics.Option.CLUSTER_ID; + case MASTER_COPROCESSORS: + return ClusterMetrics.Option.MASTER_COPROCESSORS; + case MASTER: + return ClusterMetrics.Option.MASTER; + case BACKUP_MASTERS: + return 
ClusterMetrics.Option.BACKUP_MASTERS; + case BALANCER_ON: + return ClusterMetrics.Option.BALANCER_ON; + case SERVERS_NAME: + return ClusterMetrics.Option.SERVERS_NAME; + case MASTER_INFO_PORT: + return ClusterMetrics.Option.MASTER_INFO_PORT; + case TABLE_TO_REGIONS_COUNT: + return ClusterMetrics.Option.TABLE_TO_REGIONS_COUNT; + case TASKS: + return ClusterMetrics.Option.TASKS; // should not reach here - default: throw new IllegalArgumentException("Invalid option: " + option); + default: + throw new IllegalArgumentException("Invalid option: " + option); } } @@ -176,20 +186,37 @@ public static ClusterMetrics.Option toOption(ClusterStatusProtos.Option option) */ public static ClusterStatusProtos.Option toOption(ClusterMetrics.Option option) { switch (option) { - case HBASE_VERSION: return ClusterStatusProtos.Option.HBASE_VERSION; - case LIVE_SERVERS: return ClusterStatusProtos.Option.LIVE_SERVERS; - case DEAD_SERVERS: return ClusterStatusProtos.Option.DEAD_SERVERS; - case REGIONS_IN_TRANSITION: return ClusterStatusProtos.Option.REGIONS_IN_TRANSITION; - case CLUSTER_ID: return ClusterStatusProtos.Option.CLUSTER_ID; - case MASTER_COPROCESSORS: return ClusterStatusProtos.Option.MASTER_COPROCESSORS; - case MASTER: return ClusterStatusProtos.Option.MASTER; - case BACKUP_MASTERS: return ClusterStatusProtos.Option.BACKUP_MASTERS; - case BALANCER_ON: return ClusterStatusProtos.Option.BALANCER_ON; - case SERVERS_NAME: return Option.SERVERS_NAME; - case MASTER_INFO_PORT: return ClusterStatusProtos.Option.MASTER_INFO_PORT; - case TABLE_TO_REGIONS_COUNT: return ClusterStatusProtos.Option.TABLE_TO_REGIONS_COUNT; + case HBASE_VERSION: + return ClusterStatusProtos.Option.HBASE_VERSION; + case LIVE_SERVERS: + return ClusterStatusProtos.Option.LIVE_SERVERS; + case DEAD_SERVERS: + return ClusterStatusProtos.Option.DEAD_SERVERS; + case UNKNOWN_SERVERS: + return ClusterStatusProtos.Option.UNKNOWN_SERVERS; + case REGIONS_IN_TRANSITION: + return ClusterStatusProtos.Option.REGIONS_IN_TRANSITION; + case CLUSTER_ID: + return ClusterStatusProtos.Option.CLUSTER_ID; + case MASTER_COPROCESSORS: + return ClusterStatusProtos.Option.MASTER_COPROCESSORS; + case MASTER: + return ClusterStatusProtos.Option.MASTER; + case BACKUP_MASTERS: + return ClusterStatusProtos.Option.BACKUP_MASTERS; + case BALANCER_ON: + return ClusterStatusProtos.Option.BALANCER_ON; + case SERVERS_NAME: + return Option.SERVERS_NAME; + case MASTER_INFO_PORT: + return ClusterStatusProtos.Option.MASTER_INFO_PORT; + case TABLE_TO_REGIONS_COUNT: + return ClusterStatusProtos.Option.TABLE_TO_REGIONS_COUNT; + case TASKS: + return ClusterStatusProtos.Option.TASKS; // should not reach here - default: throw new IllegalArgumentException("Invalid option: " + option); + default: + throw new IllegalArgumentException("Invalid option: " + option); } } @@ -200,7 +227,7 @@ public static ClusterStatusProtos.Option toOption(ClusterMetrics.Option option) */ public static EnumSet toOptions(List options) { return options.stream().map(ClusterMetricsBuilder::toOption) - .collect(Collectors.toCollection(() -> EnumSet.noneOf(ClusterMetrics.Option.class))); + .collect(Collectors.toCollection(() -> EnumSet.noneOf(ClusterMetrics.Option.class))); } /** @@ -215,9 +242,11 @@ public static List toOptions(EnumSet deadServerNames = Collections.emptyList(); + private List unknownServerNames = Collections.emptyList(); private Map liveServerMetrics = new TreeMap<>(); @Nullable private ServerName masterName; @@ -231,18 +260,27 @@ public static ClusterMetricsBuilder newBuilder() { private int 
masterInfoPort; private List serversName = Collections.emptyList(); private Map tableRegionStatesCount = Collections.emptyMap(); + @Nullable + private List masterTasks; private ClusterMetricsBuilder() { } + public ClusterMetricsBuilder setHBaseVersion(String value) { this.hbaseVersion = value; return this; } + public ClusterMetricsBuilder setDeadServerNames(List value) { this.deadServerNames = value; return this; } + public ClusterMetricsBuilder setUnknownServerNames(List value) { + this.unknownServerNames = value; + return this; + } + public ClusterMetricsBuilder setLiveServerMetrics(Map value) { liveServerMetrics.putAll(value); return this; @@ -252,62 +290,66 @@ public ClusterMetricsBuilder setMasterName(ServerName value) { this.masterName = value; return this; } + public ClusterMetricsBuilder setBackerMasterNames(List value) { this.backupMasterNames = value; return this; } + public ClusterMetricsBuilder setRegionsInTransition(List value) { this.regionsInTransition = value; return this; } + public ClusterMetricsBuilder setClusterId(String value) { this.clusterId = value; return this; } + public ClusterMetricsBuilder setMasterCoprocessorNames(List value) { this.masterCoprocessorNames = value; return this; } + public ClusterMetricsBuilder setBalancerOn(@Nullable Boolean value) { this.balancerOn = value; return this; } + public ClusterMetricsBuilder setMasterInfoPort(int value) { this.masterInfoPort = value; return this; } + public ClusterMetricsBuilder setServerNames(List serversName) { this.serversName = serversName; return this; } - public ClusterMetricsBuilder setTableRegionStatesCount( - Map tableRegionStatesCount) { + public ClusterMetricsBuilder setMasterTasks(List masterTasks) { + this.masterTasks = masterTasks; + return this; + } + + public ClusterMetricsBuilder + setTableRegionStatesCount(Map tableRegionStatesCount) { this.tableRegionStatesCount = tableRegionStatesCount; return this; } public ClusterMetrics build() { - return new ClusterMetricsImpl( - hbaseVersion, - deadServerNames, - liveServerMetrics, - masterName, - backupMasterNames, - regionsInTransition, - clusterId, - masterCoprocessorNames, - balancerOn, - masterInfoPort, - serversName, - tableRegionStatesCount - ); + return new ClusterMetricsImpl(hbaseVersion, deadServerNames, unknownServerNames, + liveServerMetrics, masterName, backupMasterNames, regionsInTransition, clusterId, + masterCoprocessorNames, balancerOn, masterInfoPort, serversName, tableRegionStatesCount, + masterTasks); } + private static class ClusterMetricsImpl implements ClusterMetrics { @Nullable private final String hbaseVersion; private final List deadServerNames; private final Map liveServerMetrics; + private final List unknownServerNames; @Nullable private final ServerName masterName; private final List backupMasterNames; @@ -320,20 +362,17 @@ private static class ClusterMetricsImpl implements ClusterMetrics { private final int masterInfoPort; private final List serversName; private final Map tableRegionStatesCount; + private final List masterTasks; ClusterMetricsImpl(String hbaseVersion, List deadServerNames, - Map liveServerMetrics, - ServerName masterName, - List backupMasterNames, - List regionsInTransition, - String clusterId, - List masterCoprocessorNames, - Boolean balancerOn, - int masterInfoPort, - List serversName, - Map tableRegionStatesCount) { + List unknownServerNames, Map liveServerMetrics, + ServerName masterName, List backupMasterNames, + List regionsInTransition, String clusterId, List masterCoprocessorNames, + Boolean balancerOn, 
int masterInfoPort, List serversName, + Map tableRegionStatesCount, List masterTasks) { this.hbaseVersion = hbaseVersion; this.deadServerNames = Preconditions.checkNotNull(deadServerNames); + this.unknownServerNames = Preconditions.checkNotNull(unknownServerNames); this.liveServerMetrics = Preconditions.checkNotNull(liveServerMetrics); this.masterName = masterName; this.backupMasterNames = Preconditions.checkNotNull(backupMasterNames); @@ -344,6 +383,7 @@ private static class ClusterMetricsImpl implements ClusterMetrics { this.masterInfoPort = masterInfoPort; this.serversName = serversName; this.tableRegionStatesCount = Preconditions.checkNotNull(tableRegionStatesCount); + this.masterTasks = masterTasks; } @Override @@ -356,6 +396,11 @@ public List getDeadServerNames() { return Collections.unmodifiableList(deadServerNames); } + @Override + public List getUnknownServerNames() { + return Collections.unmodifiableList(unknownServerNames); + } + @Override public Map getLiveServerMetrics() { return Collections.unmodifiableMap(liveServerMetrics); @@ -406,6 +451,11 @@ public Map getTableRegionStatesCount() { return Collections.unmodifiableMap(tableRegionStatesCount); } + @Override + public List getMasterTasks() { + return masterTasks; + } + @Override public String toString() { StringBuilder sb = new StringBuilder(1024); @@ -414,15 +464,15 @@ public String toString() { int backupMastersSize = getBackupMasterNames().size(); sb.append("\nNumber of backup masters: " + backupMastersSize); if (backupMastersSize > 0) { - for (ServerName serverName: getBackupMasterNames()) { + for (ServerName serverName : getBackupMasterNames()) { sb.append("\n " + serverName); } } int serversSize = getLiveServerMetrics().size(); int serversNameSize = getServersName().size(); - sb.append("\nNumber of live region servers: " - + (serversSize > 0 ? serversSize : serversNameSize)); + sb.append( + "\nNumber of live region servers: " + (serversSize > 0 ? serversSize : serversNameSize)); if (serversSize > 0) { for (ServerName serverName : getLiveServerMetrics().keySet()) { sb.append("\n " + serverName.getServerName()); @@ -441,6 +491,14 @@ public String toString() { } } + int unknownServerSize = getUnknownServerNames().size(); + sb.append("\nNumber of unknown region servers: " + unknownServerSize); + if (unknownServerSize > 0) { + for (ServerName serverName : getUnknownServerNames()) { + sb.append("\n " + serverName); + } + } + sb.append("\nAverage load: " + getAverageLoad()); sb.append("\nNumber of requests: " + getRequestCount()); sb.append("\nNumber of regions: " + getRegionCount()); diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterStatus.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterStatus.java index 6fdb588a4f37..d21d610126a0 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterStatus.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterStatus.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -16,7 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ - package org.apache.hadoop.hbase; import edu.umd.cs.findbugs.annotations.Nullable; @@ -26,7 +24,6 @@ import java.util.List; import java.util.Map; import java.util.stream.Collectors; - import org.apache.hadoop.hbase.client.RegionStatesCount; import org.apache.hadoop.hbase.master.RegionState; import org.apache.yetus.audience.InterfaceAudience; @@ -45,32 +42,37 @@ *
* • The average cluster load.
* • The number of regions deployed on the cluster.
* • The number of requests since last report.
- * • Detailed region server loading and resource usage information,
- *   per server and per region.
+ * • Detailed region server loading and resource usage information, per server and per
+ *   region.
* • Regions in transition at master
* • The unique cluster ID
*
* {@link ClusterMetrics.Option} provides a way to get desired ClusterStatus information.
* The following codes will get all the cluster information.
+ *
    - * {@code
    - * // Original version still works
    - * Admin admin = connection.getAdmin();
    - * ClusterStatus status = admin.getClusterStatus();
    - * // or below, a new version which has the same effects
    - * ClusterStatus status = admin.getClusterStatus(EnumSet.allOf(Option.class));
    + * {
    + *   @code
    + *   // Original version still works
    + *   Admin admin = connection.getAdmin();
    + *   ClusterStatus status = admin.getClusterStatus();
    + *   // or below, a new version which has the same effects
    + *   ClusterStatus status = admin.getClusterStatus(EnumSet.allOf(Option.class));
      * }
      * 
- * If information about live servers is the only wanted.
- * then codes in the following way:
+ *
+ * If information about live servers is the only wanted. then codes in the following way:
+ *
    - * {@code
    - * Admin admin = connection.getAdmin();
    - * ClusterStatus status = admin.getClusterStatus(EnumSet.of(Option.LIVE_SERVERS));
    + * {
    + *   @code
    + *   Admin admin = connection.getAdmin();
    + *   ClusterStatus status = admin.getClusterStatus(EnumSet.of(Option.LIVE_SERVERS));
      * }
      * 
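The example just above requests only live-server information through the deprecated ClusterStatus type; the @deprecated note that follows points at ClusterMetrics as the replacement. A small sketch of what that migration can look like, iterating per-server region counts; the helper class name and the open Connection are assumptions, everything else is existing public client API.

import java.io.IOException;
import java.util.EnumSet;
import java.util.Map;
import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.ServerMetrics;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

final class LiveServerReport {
  // Prints how many regions each live region server currently carries.
  static void printRegionCounts(Connection connection) throws IOException {
    try (Admin admin = connection.getAdmin()) {
      ClusterMetrics metrics =
        admin.getClusterMetrics(EnumSet.of(ClusterMetrics.Option.LIVE_SERVERS));
      for (Map.Entry<ServerName, ServerMetrics> e : metrics.getLiveServerMetrics().entrySet()) {
        System.out.println(e.getKey().getServerName() + " serves "
          + e.getValue().getRegionMetrics().size() + " regions");
      }
    }
  }
}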
    - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link ClusterMetrics} instead. + * + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use {@link ClusterMetrics} + * instead. */ @InterfaceAudience.Public @Deprecated @@ -86,26 +88,18 @@ public class ClusterStatus implements ClusterMetrics { */ @Deprecated public ClusterStatus(final String hbaseVersion, final String clusterid, - final Map servers, - final Collection deadServers, - final ServerName master, - final Collection backupMasters, - final List rit, - final String[] masterCoprocessors, - final Boolean balancerOn, - final int masterInfoPort) { + final Map servers, final Collection deadServers, + final ServerName master, final Collection backupMasters, + final List rit, final String[] masterCoprocessors, final Boolean balancerOn, + final int masterInfoPort) { // TODO: make this constructor private this(ClusterMetricsBuilder.newBuilder().setHBaseVersion(hbaseVersion) .setDeadServerNames(new ArrayList<>(deadServers)) - .setLiveServerMetrics(servers.entrySet().stream() - .collect(Collectors.toMap(e -> e.getKey(), e -> e.getValue()))) + .setLiveServerMetrics( + servers.entrySet().stream().collect(Collectors.toMap(e -> e.getKey(), e -> e.getValue()))) .setBackerMasterNames(new ArrayList<>(backupMasters)).setBalancerOn(balancerOn) - .setClusterId(clusterid) - .setMasterCoprocessorNames(Arrays.asList(masterCoprocessors)) - .setMasterName(master) - .setMasterInfoPort(masterInfoPort) - .setRegionsInTransition(rit) - .build()); + .setClusterId(clusterid).setMasterCoprocessorNames(Arrays.asList(masterCoprocessors)) + .setMasterName(master).setMasterInfoPort(masterInfoPort).setRegionsInTransition(rit).build()); } @InterfaceAudience.Private @@ -113,24 +107,27 @@ public ClusterStatus(ClusterMetrics metrics) { this.metrics = metrics; } - /** - * @return the names of region servers on the dead list - */ + /** Returns the names of region servers on the dead list */ @Override public List getDeadServerNames() { return metrics.getDeadServerNames(); } + @Override + public List getUnknownServerNames() { + return metrics.getUnknownServerNames(); + } + @Override public Map getLiveServerMetrics() { return metrics.getLiveServerMetrics(); } /** - * @return the number of region servers in the cluster - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getLiveServerMetrics()}. - */ + * @return the number of region servers in the cluster + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getLiveServerMetrics()}. + */ @Deprecated public int getServersSize() { return metrics.getLiveServerMetrics().size(); @@ -139,8 +136,8 @@ public int getServersSize() { /** * @return the number of dead region servers in the cluster * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * (HBASE-13656). - * Use {@link #getDeadServerNames()}. + * (HBASE-13656). Use + * {@link #getDeadServerNames()}. */ @Deprecated public int getDeadServers() { @@ -149,8 +146,8 @@ public int getDeadServers() { /** * @return the number of dead region servers in the cluster - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getDeadServerNames()}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getDeadServerNames()}. 
*/ @Deprecated public int getDeadServersSize() { @@ -159,8 +156,8 @@ public int getDeadServersSize() { /** * @return the number of regions deployed on the cluster - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getRegionCount()}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getRegionCount()}. */ @Deprecated public int getRegionsCount() { @@ -169,8 +166,8 @@ public int getRegionsCount() { /** * @return the number of requests since last report - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getRequestCount()} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getRequestCount()} instead. */ @Deprecated public int getRequestsCount() { @@ -193,9 +190,8 @@ public List getRegionStatesInTransition() { return metrics.getRegionStatesInTransition(); } - /** - * @return the HBase version string as reported by the HMaster - */ + /** Returns the HBase version string as reported by the HMaster */ + @Override public String getHBaseVersion() { return metrics.getHBaseVersion(); } @@ -214,14 +210,14 @@ public boolean equals(Object o) { return false; } ClusterStatus other = (ClusterStatus) o; - return Objects.equal(getHBaseVersion(), other.getHBaseVersion()) && - Objects.equal(getLiveServerLoads(), other.getLiveServerLoads()) && - getDeadServerNames().containsAll(other.getDeadServerNames()) && - Arrays.equals(getMasterCoprocessors(), other.getMasterCoprocessors()) && - Objects.equal(getMaster(), other.getMaster()) && - getBackupMasters().containsAll(other.getBackupMasters()) && - Objects.equal(getClusterId(), other.getClusterId()) && - getMasterInfoPort() == other.getMasterInfoPort(); + return Objects.equal(getHBaseVersion(), other.getHBaseVersion()) + && Objects.equal(getLiveServerLoads(), other.getLiveServerLoads()) + && getDeadServerNames().containsAll(other.getDeadServerNames()) + && Arrays.equals(getMasterCoprocessors(), other.getMasterCoprocessors()) + && Objects.equal(getMaster(), other.getMaster()) + && getBackupMasters().containsAll(other.getBackupMasters()) + && Objects.equal(getClusterId(), other.getClusterId()) + && getMasterInfoPort() == other.getMasterInfoPort(); } @Override @@ -239,8 +235,8 @@ public byte getVersion() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getLiveServerMetrics()} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getLiveServerMetrics()} instead. */ @Deprecated public Collection getServers() { @@ -250,8 +246,8 @@ public Collection getServers() { /** * Returns detailed information about the current master {@link ServerName}. * @return current master information if it exists - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getMasterName} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use {@link #getMasterName} + * instead. */ @Deprecated public ServerName getMaster() { @@ -260,8 +256,8 @@ public ServerName getMaster() { /** * @return the number of backup masters in the cluster - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getBackupMasterNames} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getBackupMasterNames} instead. 
*/ @Deprecated public int getBackupMastersSize() { @@ -270,8 +266,8 @@ public int getBackupMastersSize() { /** * @return the names of backup masters - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getBackupMasterNames} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getBackupMasterNames} instead. */ @Deprecated public List getBackupMasters() { @@ -279,10 +275,9 @@ public List getBackupMasters() { } /** - * @param sn * @return Server's load or null if not found. - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getLiveServerMetrics} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getLiveServerMetrics} instead. */ @Deprecated public ServerLoad getLoad(final ServerName sn) { @@ -290,6 +285,7 @@ public ServerLoad getLoad(final ServerName sn) { return serverMetrics == null ? null : new ServerLoad(serverMetrics); } + @Override public String getClusterId() { return metrics.getClusterId(); } @@ -300,8 +296,9 @@ public List getMasterCoprocessorNames() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getMasterCoprocessorNames} instead. + * Get the list of master coprocessor names. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getMasterCoprocessorNames} instead. */ @Deprecated public String[] getMasterCoprocessors() { @@ -310,8 +307,9 @@ public String[] getMasterCoprocessors() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getLastMajorCompactionTimestamp(TableName)} instead. + * Get the last major compaction time for a given table. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getLastMajorCompactionTimestamp(TableName)} instead. */ @Deprecated public long getLastMajorCompactionTsForTable(TableName table) { @@ -319,8 +317,9 @@ public long getLastMajorCompactionTsForTable(TableName table) { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getLastMajorCompactionTimestamp(byte[])} instead. + * Get the last major compaction time for a given region. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getLastMajorCompactionTimestamp(byte[])} instead. */ @Deprecated public long getLastMajorCompactionTsForRegion(final byte[] region) { @@ -328,8 +327,8 @@ public long getLastMajorCompactionTsForRegion(final byte[] region) { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * No flag in 2.0 + * Returns true if the balancer is enabled. 
+ * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 No flag in 2.0 */ @Deprecated public boolean isBalancerOn() { @@ -356,6 +355,11 @@ public Map getTableRegionStatesCount() { return metrics.getTableRegionStatesCount(); } + @Override + public List getMasterTasks() { + return metrics.getMasterTasks(); + } + @Override public String toString() { StringBuilder sb = new StringBuilder(1024); @@ -364,15 +368,15 @@ public String toString() { int backupMastersSize = getBackupMastersSize(); sb.append("\nNumber of backup masters: " + backupMastersSize); if (backupMastersSize > 0) { - for (ServerName serverName: metrics.getBackupMasterNames()) { + for (ServerName serverName : metrics.getBackupMasterNames()) { sb.append("\n " + serverName); } } int serversSize = getServersSize(); int serversNameSize = getServersName().size(); - sb.append("\nNumber of live region servers: " - + (serversSize > 0 ? serversSize : serversNameSize)); + sb.append( + "\nNumber of live region servers: " + (serversSize > 0 ? serversSize : serversNameSize)); if (serversSize > 0) { for (ServerName serverName : metrics.getLiveServerMetrics().keySet()) { sb.append("\n " + serverName.getServerName()); @@ -398,7 +402,7 @@ public String toString() { int ritSize = metrics.getRegionStatesInTransition().size(); sb.append("\nNumber of regions in transition: " + ritSize); if (ritSize > 0) { - for (RegionState state: metrics.getRegionStatesInTransition()) { + for (RegionState state : metrics.getRegionStatesInTransition()) { sb.append("\n " + state.toDescriptiveString()); } } diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/ConcurrentTableModificationException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/ConcurrentTableModificationException.java index 86aca2bc8177..b8b2519dc09f 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/ConcurrentTableModificationException.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/ConcurrentTableModificationException.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/Coprocessor.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/Coprocessor.java index c0d9b603a8ab..20cc35da042d 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/Coprocessor.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/Coprocessor.java @@ -7,33 +7,28 @@ * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * - * http://www.apache.org/licenses/LICENSE-2.0 + * http://www.apache.org/licenses/LICENSE-2.0 * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
 */
-
 package org.apache.hadoop.hbase;
 
+import com.google.protobuf.Service;
 import java.io.IOException;
 import java.util.Collections;
-
-import com.google.protobuf.Service;
 import org.apache.yetus.audience.InterfaceAudience;
 import org.apache.yetus.audience.InterfaceStability;
 
 /**
  * Base interface for the 4 coprocessors - MasterCoprocessor, RegionCoprocessor,
- * RegionServerCoprocessor, and WALCoprocessor.
- * Do NOT implement this interface directly. Unless an implementation implements one (or more) of
- * the above mentioned 4 coprocessors, it'll fail to be loaded by any coprocessor host.
+ * RegionServerCoprocessor, and WALCoprocessor. Do NOT implement this interface directly. Unless an
+ * implementation implements one (or more) of the above mentioned 4 coprocessors, it'll fail to be
+ * loaded by any coprocessor host. Example: Building a coprocessor to observe Master operations.
  *
- * Example:
- * Building a coprocessor to observe Master operations.
  * <pre>
      * class MyMasterCoprocessor implements MasterCoprocessor {
      *   @Override
    @@ -48,6 +43,7 @@
  * </pre>
  *
  * Building a Service which can be loaded by both Master and RegionServer
+ *
  * <pre>
      * class MyCoprocessorService implements MasterCoprocessor, RegionServerCoprocessor {
      *   @Override
    @@ -87,18 +83,19 @@ enum State {
        * Called by the {@link CoprocessorEnvironment} during it's own startup to initialize the
        * coprocessor.
        */
    -  default void start(CoprocessorEnvironment env) throws IOException {}
    +  default void start(CoprocessorEnvironment env) throws IOException {
    +  }
     
       /**
    -   * Called by the {@link CoprocessorEnvironment} during it's own shutdown to stop the
    -   * coprocessor.
    +   * Called by the {@link CoprocessorEnvironment} during it's own shutdown to stop the coprocessor.
        */
    -  default void stop(CoprocessorEnvironment env) throws IOException {}
    +  default void stop(CoprocessorEnvironment env) throws IOException {
    +  }
     
       /**
        * Coprocessor endpoints providing protobuf services should override this method.
    -   * @return Iterable of {@link Service}s or empty collection. Implementations should never
    -   * return null.
    +   * @return Iterable of {@link Service}s or empty collection. Implementations should never return
    +   *         null.
        */
   default Iterable<Service> getServices() {
         return Collections.EMPTY_SET;
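
For reference alongside the Coprocessor changes above, a minimal implementation might look like the following sketch. It is illustrative only: the class name MyMasterObserver and its package are hypothetical, and it simply follows the contract shown in this file (implement one of the four sub-interfaces, and never return null from getServices()).

package org.example.demo; // hypothetical package, not part of HBase

import com.google.protobuf.Service;
import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.CoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.MasterCoprocessor;

public class MyMasterObserver implements MasterCoprocessor {

  @Override
  public void start(CoprocessorEnvironment env) throws IOException {
    // Acquire any resources here; the inherited default is a no-op.
  }

  @Override
  public void stop(CoprocessorEnvironment env) throws IOException {
    // Release resources here; the inherited default is a no-op.
  }

  @Override
  public Iterable<Service> getServices() {
    // This coprocessor exposes no endpoint services; return an empty collection, never null.
    return Collections.emptySet();
  }
}
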
    diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java
    index 4fab7333dcd9..32e06d610247 100644
    --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java
    +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java
    @@ -7,16 +7,14 @@
      * "License"); you may not use this file except in compliance
      * with the License.  You may obtain a copy of the License at
      *
    - *   http://www.apache.org/licenses/LICENSE-2.0
    + *     http://www.apache.org/licenses/LICENSE-2.0
      *
    - * Unless required by applicable law or agreed to in writing,
    - * software distributed under the License is distributed on an
    - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    - * KIND, either express or implied.  See the License for the
    - * specific language governing permissions and limitations
    - * under the License.
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
      */
    -
     package org.apache.hadoop.hbase;
     
     import org.apache.hadoop.conf.Configuration;
    @@ -30,29 +28,27 @@
     @InterfaceStability.Evolving
 public interface CoprocessorEnvironment<C extends Coprocessor> {
     
    -  /** @return the Coprocessor interface version */
    +  /** Returns the Coprocessor interface version */
       int getVersion();
     
    -  /** @return the HBase version as a string (e.g. "0.21.0") */
    +  /** Returns the HBase version as a string (e.g. "0.21.0") */
       String getHBaseVersion();
     
    -  /** @return the loaded coprocessor instance */
    +  /** Returns the loaded coprocessor instance */
       C getInstance();
     
    -  /** @return the priority assigned to the loaded coprocessor */
    +  /** Returns the priority assigned to the loaded coprocessor */
       int getPriority();
     
    -  /** @return the load sequence number */
    +  /** Returns the load sequence number */
       int getLoadSequence();
     
       /**
    -   * @return a Read-only Configuration; throws {@link UnsupportedOperationException} if you try
    -   *   to set a configuration.
    +   * Returns a Read-only Configuration; throws {@link UnsupportedOperationException} if you try to
    +   * set a configuration.
        */
       Configuration getConfiguration();
     
    -  /**
    -   * @return the classloader for the loaded coprocessor instance
    -   */
    +  /** Returns the classloader for the loaded coprocessor instance */
       ClassLoader getClassLoader();
     }
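
As a usage note for the accessors documented above, a coprocessor can inspect its environment during start(). The sketch below is hypothetical (class name and configuration key are examples) and only reads values, since the Configuration returned by getConfiguration() rejects writes with UnsupportedOperationException.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.CoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionServerCoprocessor;

public class EnvironmentInspectingCoprocessor implements RegionServerCoprocessor {

  @Override
  public void start(CoprocessorEnvironment env) throws IOException {
    String hbaseVersion = env.getHBaseVersion(); // e.g. "2.5.10"
    int priority = env.getPriority();
    int loadSequence = env.getLoadSequence();
    Configuration conf = env.getConfiguration(); // read-only view
    String zkQuorum = conf.get("hbase.zookeeper.quorum"); // example key; do not call conf.set(...)
    // Values above are typically logged or used to initialize the coprocessor.
  }
}
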
    diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/DoNotRetryIOException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/DoNotRetryIOException.java
    index 509844e367d8..7e1821de7d47 100644
    --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/DoNotRetryIOException.java
    +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/DoNotRetryIOException.java
    @@ -1,5 +1,4 @@
    -/**
    - *
    +/*
      * Licensed to the Apache Software Foundation (ASF) under one
      * or more contributor license agreements.  See the NOTICE file
      * distributed with this work for additional information
    @@ -41,7 +40,7 @@ public DoNotRetryIOException(String message) {
       }
     
       /**
    -   * @param message the message for this exception
    +   * @param message   the message for this exception
        * @param throwable the {@link Throwable} to use for this exception
        */
       public DoNotRetryIOException(String message, Throwable throwable) {
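
A brief client-side sketch of what the exception type above implies for retry handling; the table name "demo" and the single-retry policy are illustrative assumptions, not HBase's built-in retry logic.

import java.io.IOException;
import org.apache.hadoop.hbase.DoNotRetryIOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;

public final class RetryAwarePut {

  private RetryAwarePut() {
  }

  public static void putWithOneRetry(Connection connection, Put put) throws IOException {
    try (Table table = connection.getTable(TableName.valueOf("demo"))) {
      try {
        table.put(put);
      } catch (DoNotRetryIOException e) {
        // Permanent failure (e.g. invalid request): re-submitting the same operation cannot help.
        throw e;
      } catch (IOException e) {
        // Possibly transient failure: a single naive retry, purely for illustration.
        table.put(put);
      }
    }
  }
}
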
    diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/DroppedSnapshotException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/DroppedSnapshotException.java
    index 76f374c412f0..f4391f1025c4 100644
    --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/DroppedSnapshotException.java
    +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/DroppedSnapshotException.java
    @@ -7,24 +7,22 @@
      * "License"); you may not use this file except in compliance
      * with the License.  You may obtain a copy of the License at
      *
    - *   http://www.apache.org/licenses/LICENSE-2.0
    + *     http://www.apache.org/licenses/LICENSE-2.0
      *
    - * Unless required by applicable law or agreed to in writing,
    - * software distributed under the License is distributed on an
    - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    - * KIND, either express or implied.  See the License for the
    - * specific language governing permissions and limitations
    - * under the License.
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
      */
     package org.apache.hadoop.hbase;
     
     import java.io.IOException;
    -
     import org.apache.yetus.audience.InterfaceAudience;
     
     /**
    - * Thrown during flush if the possibility snapshot content was not properly
    - * persisted into store files.  Response should include replay of wal content.
    + * Thrown during flush if the possibility snapshot content was not properly persisted into store
    + * files. Response should include replay of wal content.
      */
     @InterfaceAudience.Public
     public class DroppedSnapshotException extends IOException {
    @@ -43,9 +41,8 @@ public DroppedSnapshotException(String message) {
     
       /**
        * DroppedSnapshotException with cause
    -   *
        * @param message the message for this exception
    -   * @param cause the cause for this exception
    +   * @param cause   the cause for this exception
        */
       public DroppedSnapshotException(String message, Throwable cause) {
         super(message, cause);
    diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/FailedCloseWALAfterInitializedErrorException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/FailedCloseWALAfterInitializedErrorException.java
    index 6445be9cfaf8..e5e2f7b7ccaf 100644
    --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/FailedCloseWALAfterInitializedErrorException.java
    +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/FailedCloseWALAfterInitializedErrorException.java
    @@ -7,35 +7,31 @@
      * "License"); you may not use this file except in compliance
      * with the License.  You may obtain a copy of the License at
      *
    - *   http://www.apache.org/licenses/LICENSE-2.0
    + *     http://www.apache.org/licenses/LICENSE-2.0
      *
    - * Unless required by applicable law or agreed to in writing,
    - * software distributed under the License is distributed on an
    - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    - * KIND, either express or implied.  See the License for the
    - * specific language governing permissions and limitations
    - * under the License.
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
      */
     package org.apache.hadoop.hbase;
     
    -
     import java.io.IOException;
    -
     import org.apache.yetus.audience.InterfaceAudience;
     
     /**
      * Throw when failed cleanup unsuccessful initialized wal
      */
     @InterfaceAudience.Public
    -public class FailedCloseWALAfterInitializedErrorException
    -  extends IOException {
    +public class FailedCloseWALAfterInitializedErrorException extends IOException {
     
       private static final long serialVersionUID = -5463156587431677322L;
     
       /**
        * constructor with error msg and throwable
        * @param msg message
    -   * @param t throwable
    +   * @param t   throwable
        */
       public FailedCloseWALAfterInitializedErrorException(String msg, Throwable t) {
         super(msg, t);
    @@ -55,4 +51,4 @@ public FailedCloseWALAfterInitializedErrorException(String msg) {
       public FailedCloseWALAfterInitializedErrorException() {
         super();
       }
    -}
    \ No newline at end of file
    +}
    diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/HBaseServerException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/HBaseServerException.java
    new file mode 100644
    index 000000000000..47a86f9492f5
    --- /dev/null
    +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/HBaseServerException.java
    @@ -0,0 +1,67 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one
    + * or more contributor license agreements.  See the NOTICE file
    + * distributed with this work for additional information
    + * regarding copyright ownership.  The ASF licenses this file
    + * to you under the Apache License, Version 2.0 (the
    + * "License"); you may not use this file except in compliance
    + * with the License.  You may obtain a copy of the License at
    + *
    + *     http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.hadoop.hbase;
    +
    +import org.apache.yetus.audience.InterfaceAudience;
    +
    +/**
    + * Base class for exceptions thrown by an HBase server. May contain extra info about the state of
    + * the server when the exception was thrown.
    + */
    +@InterfaceAudience.Public
    +public class HBaseServerException extends HBaseIOException {
    +  private boolean serverOverloaded;
    +
    +  public HBaseServerException() {
    +    this(false);
    +  }
    +
    +  public HBaseServerException(String message) {
    +    this(false, message);
    +  }
    +
    +  public HBaseServerException(boolean serverOverloaded) {
    +    this.serverOverloaded = serverOverloaded;
    +  }
    +
    +  public HBaseServerException(boolean serverOverloaded, String message) {
    +    super(message);
    +    this.serverOverloaded = serverOverloaded;
    +  }
    +
    +  /** Returns True if the server was considered overloaded when the exception was thrown */
    +  public static boolean isServerOverloaded(Throwable t) {
    +    if (t instanceof HBaseServerException) {
    +      return ((HBaseServerException) t).isServerOverloaded();
    +    }
    +    return false;
    +  }
    +
    +  /**
    +   * Necessary for parsing RemoteException on client side
    +   * @param serverOverloaded True if server was overloaded when exception was thrown
    +   */
    +  public void setServerOverloaded(boolean serverOverloaded) {
    +    this.serverOverloaded = serverOverloaded;
    +  }
    +
    +  /** Returns True if server was considered overloaded when exception was thrown */
    +  public boolean isServerOverloaded() {
    +    return serverOverloaded;
    +  }
    +}
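
To illustrate how the new exception class above can be consumed by callers, here is a small hedged sketch; the helper class and the pause values are hypothetical, and only HBaseServerException.isServerOverloaded(Throwable) is taken from the file added above.

import org.apache.hadoop.hbase.HBaseServerException;

public final class OverloadAwareBackoff {

  private OverloadAwareBackoff() {
  }

  /** Returns a suggested pause in milliseconds before retrying the failed operation. */
  public static long suggestedPauseMillis(Throwable failure) {
    // isServerOverloaded(Throwable) returns true only for HBaseServerException instances
    // whose serverOverloaded flag was set when the exception was thrown; anything else
    // (including null) yields false.
    return HBaseServerException.isServerOverloaded(failure) ? 1000L : 100L;
  }
}
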
    diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
    index 2f21d60878bf..d2d39a4c4156 100644
    --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
    +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
    @@ -1,5 +1,4 @@
    -/**
    - *
    +/*
      * Licensed to the Apache Software Foundation (ASF) under one
      * or more contributor license agreements.  See the NOTICE file
      * distributed with this work for additional information
    @@ -19,8 +18,6 @@
     package org.apache.hadoop.hbase;
     
     import java.util.Map;
    -
    -import org.apache.yetus.audience.InterfaceAudience;
     import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
     import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
     import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor;
    @@ -29,33 +26,43 @@
     import org.apache.hadoop.hbase.exceptions.HBaseException;
     import org.apache.hadoop.hbase.io.compress.Compression;
     import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
    +import org.apache.hadoop.hbase.io.encoding.IndexBlockEncoding;
     import org.apache.hadoop.hbase.regionserver.BloomType;
     import org.apache.hadoop.hbase.util.Bytes;
     import org.apache.hadoop.hbase.util.PrettyPrinter.Unit;
    +import org.apache.yetus.audience.InterfaceAudience;
     
     /**
    - * An HColumnDescriptor contains information about a column family such as the
    - * number of versions, compression settings, etc.
    - *
    - * It is used as input when creating a table or adding a column.
    + * An HColumnDescriptor contains information about a column family such as the number of versions,
    + * compression settings, etc. It is used as input when creating a table or adding a column.
      */
     @InterfaceAudience.Public
     @Deprecated // remove it in 3.0
 public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HColumnDescriptor> {
    -  public static final String IN_MEMORY_COMPACTION = ColumnFamilyDescriptorBuilder.IN_MEMORY_COMPACTION;
    +  public static final String IN_MEMORY_COMPACTION =
    +    ColumnFamilyDescriptorBuilder.IN_MEMORY_COMPACTION;
       public static final String COMPRESSION = ColumnFamilyDescriptorBuilder.COMPRESSION;
    -  public static final String COMPRESSION_COMPACT = ColumnFamilyDescriptorBuilder.COMPRESSION_COMPACT;
    -  public static final String COMPRESSION_COMPACT_MAJOR = ColumnFamilyDescriptorBuilder.COMPRESSION_COMPACT_MAJOR;
    -  public static final String COMPRESSION_COMPACT_MINOR = ColumnFamilyDescriptorBuilder.COMPRESSION_COMPACT_MINOR;
    +  public static final String COMPRESSION_COMPACT =
    +    ColumnFamilyDescriptorBuilder.COMPRESSION_COMPACT;
    +  public static final String COMPRESSION_COMPACT_MAJOR =
    +    ColumnFamilyDescriptorBuilder.COMPRESSION_COMPACT_MAJOR;
    +  public static final String COMPRESSION_COMPACT_MINOR =
    +    ColumnFamilyDescriptorBuilder.COMPRESSION_COMPACT_MINOR;
       public static final String ENCODE_ON_DISK = "ENCODE_ON_DISK";
    -  public static final String DATA_BLOCK_ENCODING = ColumnFamilyDescriptorBuilder.DATA_BLOCK_ENCODING;
    +  public static final String DATA_BLOCK_ENCODING =
    +    ColumnFamilyDescriptorBuilder.DATA_BLOCK_ENCODING;
       public static final String BLOCKCACHE = ColumnFamilyDescriptorBuilder.BLOCKCACHE;
    -  public static final String CACHE_DATA_ON_WRITE = ColumnFamilyDescriptorBuilder.CACHE_DATA_ON_WRITE;
    -  public static final String CACHE_INDEX_ON_WRITE = ColumnFamilyDescriptorBuilder.CACHE_INDEX_ON_WRITE;
    -  public static final String CACHE_BLOOMS_ON_WRITE = ColumnFamilyDescriptorBuilder.CACHE_BLOOMS_ON_WRITE;
    -  public static final String EVICT_BLOCKS_ON_CLOSE = ColumnFamilyDescriptorBuilder.EVICT_BLOCKS_ON_CLOSE;
    +  public static final String CACHE_DATA_ON_WRITE =
    +    ColumnFamilyDescriptorBuilder.CACHE_DATA_ON_WRITE;
    +  public static final String CACHE_INDEX_ON_WRITE =
    +    ColumnFamilyDescriptorBuilder.CACHE_INDEX_ON_WRITE;
    +  public static final String CACHE_BLOOMS_ON_WRITE =
    +    ColumnFamilyDescriptorBuilder.CACHE_BLOOMS_ON_WRITE;
    +  public static final String EVICT_BLOCKS_ON_CLOSE =
    +    ColumnFamilyDescriptorBuilder.EVICT_BLOCKS_ON_CLOSE;
       public static final String CACHE_DATA_IN_L1 = "CACHE_DATA_IN_L1";
    -  public static final String PREFETCH_BLOCKS_ON_OPEN = ColumnFamilyDescriptorBuilder.PREFETCH_BLOCKS_ON_OPEN;
    +  public static final String PREFETCH_BLOCKS_ON_OPEN =
    +    ColumnFamilyDescriptorBuilder.PREFETCH_BLOCKS_ON_OPEN;
       public static final String BLOCKSIZE = ColumnFamilyDescriptorBuilder.BLOCKSIZE;
       public static final String LENGTH = "LENGTH";
       public static final String TTL = ColumnFamilyDescriptorBuilder.TTL;
    @@ -72,46 +79,62 @@ public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable:
    +   * Construct a column descriptor specifying only the family name The other attributes are
    +   * defaulted.
    +   * @param familyName Column family name. Must be 'printable' -- digit or letter -- and may not
    +   *                   contain a :
        * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
    -   *             (HBASE-18433).
    -   *             Use {@link ColumnFamilyDescriptorBuilder#of(String)}.
    +   *             (HBASE-18433). Use
    +   *             {@link ColumnFamilyDescriptorBuilder#of(String)}.
        */
       @Deprecated
       public HColumnDescriptor(final String familyName) {
    @@ -119,29 +142,26 @@ public HColumnDescriptor(final String familyName) {
       }
     
       /**
    -   * Construct a column descriptor specifying only the family name
    -   * The other attributes are defaulted.
    -   *
    -   * @param familyName Column family name. Must be 'printable' -- digit or
    -   * letter -- and may not contain a :
    +   * Construct a column descriptor specifying only the family name The other attributes are
    +   * defaulted.
    +   * @param familyName Column family name. Must be 'printable' -- digit or letter -- and may not
    +   *                   contain a :
        * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
    -   *             (HBASE-18433).
    -   *             Use {@link ColumnFamilyDescriptorBuilder#of(byte[])}.
    +   *             (HBASE-18433). Use
    +   *             {@link ColumnFamilyDescriptorBuilder#of(byte[])}.
        */
       @Deprecated
    -  public HColumnDescriptor(final byte [] familyName) {
    +  public HColumnDescriptor(final byte[] familyName) {
         this(new ModifyableColumnFamilyDescriptor(familyName));
       }
     
       /**
    -   * Constructor.
    -   * Makes a deep copy of the supplied descriptor.
    -   * Can make a modifiable descriptor from an UnmodifyableHColumnDescriptor.
    -   *
    +   * Constructor. Makes a deep copy of the supplied descriptor. Can make a modifiable descriptor
    +   * from an UnmodifyableHColumnDescriptor.
        * @param desc The descriptor.
        * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
    -   *             (HBASE-18433).
    -   *             Use {@link ColumnFamilyDescriptorBuilder#copy(ColumnFamilyDescriptor)}.
    +   *             (HBASE-18433). Use
    +   *             {@link ColumnFamilyDescriptorBuilder#copy(ColumnFamilyDescriptor)}.
        */
       @Deprecated
       public HColumnDescriptor(HColumnDescriptor desc) {
    @@ -149,8 +169,7 @@ public HColumnDescriptor(HColumnDescriptor desc) {
       }
     
       protected HColumnDescriptor(HColumnDescriptor desc, boolean deepClone) {
    -    this(deepClone ? new ModifyableColumnFamilyDescriptor(desc)
    -            : desc.delegatee);
    +    this(deepClone ? new ModifyableColumnFamilyDescriptor(desc) : desc.delegatee);
       }
     
       protected HColumnDescriptor(ModifyableColumnFamilyDescriptor delegate) {
    @@ -158,51 +177,42 @@ protected HColumnDescriptor(ModifyableColumnFamilyDescriptor delegate) {
       }
     
       /**
    +   * Check if a given family name is allowed.
        * @param b Family name.
        * @return b
    -   * @throws IllegalArgumentException If not null and not a legitimate family
    -   * name: i.e. 'printable' and ends in a ':' (Null passes are allowed because
    -   * b can be null when deserializing).  Cannot start with a '.'
    -   * either. Also Family can not be an empty value or equal "recovered.edits".
    +   * @throws IllegalArgumentException If not null and not a legitimate family name: i.e. 'printable'
    +   *                                  and ends in a ':' (Null passes are allowed because
    +   *                                  b can be null when deserializing). Cannot start
    +   *                                  with a '.' either. Also Family can not be an empty value or
    +   *                                  equal "recovered.edits".
        * @deprecated since 2.0.0 and will be removed in 3.0.0. Use
    -   *   {@link ColumnFamilyDescriptorBuilder#isLegalColumnFamilyName(byte[])} instead.
    +   *             {@link ColumnFamilyDescriptorBuilder#isLegalColumnFamilyName(byte[])} instead.
        * @see ColumnFamilyDescriptorBuilder#isLegalColumnFamilyName(byte[])
        * @see HBASE-18008
        */
       @Deprecated
    -  public static byte [] isLegalFamilyName(final byte [] b) {
    +  public static byte[] isLegalFamilyName(final byte[] b) {
         return ColumnFamilyDescriptorBuilder.isLegalColumnFamilyName(b);
       }
     
    -  /**
    -   * @return Name of this column family
    -   */
    +  /** Returns Name of this column family */
       @Override
    -  public byte [] getName() {
    +  public byte[] getName() {
         return delegatee.getName();
       }
     
    -  /**
    -   * @return The name string of this column family
    -   */
    +  /** Returns The name string of this column family */
       @Override
       public String getNameAsString() {
         return delegatee.getNameAsString();
       }
     
    -  /**
    -   * @param key The key.
    -   * @return The value.
    -   */
       @Override
       public byte[] getValue(byte[] key) {
         return delegatee.getValue(key);
       }
     
    -  /**
    -   * @param key The key.
    -   * @return The value as a string.
    -   */
    +  @Override
       public String getValue(String key) {
         byte[] value = getValue(Bytes.toBytes(key));
         return value == null ? null : Bytes.toString(value);
    @@ -213,38 +223,25 @@ public Map getValues() {
         return delegatee.getValues();
       }
     
    -  /**
    -   * @param key The key.
    -   * @param value The value.
    -   * @return this (for chained invocation)
    -   */
       public HColumnDescriptor setValue(byte[] key, byte[] value) {
         getDelegateeForModification().setValue(key, value);
         return this;
       }
     
    -  /**
    -   * @param key Key whose key and value we're to remove from HCD parameters.
    -   */
    -  public void remove(final byte [] key) {
    +  public void remove(final byte[] key) {
         getDelegateeForModification().removeValue(new Bytes(key));
       }
     
    -  /**
    -   * @param key The key.
    -   * @param value The value.
    -   * @return this (for chained invocation)
    -   */
       public HColumnDescriptor setValue(String key, String value) {
         getDelegateeForModification().setValue(key, value);
         return this;
       }
     
       /**
    -   * @return compression type being used for the column family
    +   * Returns compression type being used for the column family
        * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
    -   *             (HBASE-13655).
    -   *             Use {@link #getCompressionType()}.
    +   *             (HBASE-13655). Use
    +   *             {@link #getCompressionType()}.
        */
       @Deprecated
       public Compression.Algorithm getCompression() {
    @@ -252,10 +249,10 @@ public Compression.Algorithm getCompression() {
       }
     
       /**
    -   *  @return compression type being used for the column family for major compaction
    -   *  @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
    -   *             (HBASE-13655).
    -   *             Use {@link #getCompactionCompressionType()}.
    +   * Returns compression type being used for the column family for major compaction
    +   * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0
    +   *             (HBASE-13655). Use
    +   *             {@link #getCompactionCompressionType()}.
        */
       @Deprecated
       public Compression.Algorithm getCompactionCompression() {
    @@ -268,6 +265,7 @@ public int getMaxVersions() {
       }
     
       /**
    +   * Set maximum versions to keep
        * @param value maximum number of versions
        * @return this (for chained invocation)
        */
    @@ -278,7 +276,6 @@ public HColumnDescriptor setMaxVersions(int value) {
     
       /**
        * Set minimum and maximum versions to keep
    -   *
        * @param minVersions minimal number of versions
        * @param maxVersions maximum number of versions
        * @return this (for chained invocation)
    @@ -291,9 +288,9 @@ public HColumnDescriptor setVersions(int minVersions, int maxVersions) {
         }
     
         if (maxVersions < minVersions) {
    -      throw new IllegalArgumentException("Unable to set MaxVersion to " + maxVersions
    -        + " and set MinVersion to " + minVersions
    -        + ", as maximum versions must be >= minimum versions.");
    +      throw new IllegalArgumentException(
    +        "Unable to set MaxVersion to " + maxVersions + " and set MinVersion to " + minVersions
    +          + ", as maximum versions must be >= minimum versions.");
         }
         setMinVersions(minVersions);
         setMaxVersions(maxVersions);
    @@ -306,8 +303,8 @@ public int getBlocksize() {
       }
     
       /**
    -   * @param value Blocksize to use when writing out storefiles/hfiles on this
    -   * column family.
    +   * Set block size to use when writing
    +   * @param value Blocksize to use when writing out storefiles/hfiles on this column family.
        * @return this (for chained invocation)
        */
       public HColumnDescriptor setBlocksize(int value) {
    @@ -326,10 +323,9 @@ public Compression.Algorithm getCompressionType() {
       }
     
       /**
    -   * Compression types supported in hbase.
    -   * LZO is not bundled as part of the hbase distribution.
    -   * See LZO Compression
    -   * for how to enable it.
    +   * Compression types supported in hbase. LZO is not bundled as part of the hbase distribution. See
    +   * LZO Compression for how to
    +   * enable it.
        * @param value Compression type setting.
        * @return this (for chained invocation)
        */
    @@ -343,6 +339,11 @@ public DataBlockEncoding getDataBlockEncoding() {
         return delegatee.getDataBlockEncoding();
       }
     
    +  @Override
    +  public IndexBlockEncoding getIndexBlockEncoding() {
    +    return delegatee.getIndexBlockEncoding();
    +  }
    +
       /**
        * Set data block encoding algorithm used in block cache.
        * @param value What kind of data block encoding will be used.
    @@ -356,8 +357,6 @@ public HColumnDescriptor setDataBlockEncoding(DataBlockEncoding value) {
       /**
        * Set whether the tags should be compressed along with DataBlockEncoding. When no
        * DataBlockEncoding is been used, this is having no effect.
    -   *
    -   * @param value
        * @return this (for chained invocation)
        */
       public HColumnDescriptor setCompressTags(boolean value) {
    @@ -386,10 +385,9 @@ public Compression.Algorithm getMinorCompactionCompressionType() {
       }
     
       /**
    -   * Compression types supported in hbase.
    -   * LZO is not bundled as part of the hbase distribution.
    -   * See LZO Compression
    -   * for how to enable it.
    +   * Compression types supported in hbase. LZO is not bundled as part of the hbase distribution. See
    +   * LZO Compression for how to
    +   * enable it.
        * @param value Compression type setting.
        * @return this (for chained invocation)
        */
    @@ -414,8 +412,9 @@ public boolean isInMemory() {
       }
     
       /**
    +   * Set or clear the in memory flag.
        * @param value True if we are to favor keeping all values for this column family in the
    -   * HRegionServer cache
    +   *              HRegionServer cache
        * @return this (for chained invocation)
        */
       public HColumnDescriptor setInMemory(boolean value) {
    @@ -429,8 +428,8 @@ public MemoryCompactionPolicy getInMemoryCompaction() {
       }
     
       /**
    -   * @param value the prefered in-memory compaction policy
    -   *                  for this column family
    +   * Set the in memory compaction policy.
    +   * @param value the prefered in-memory compaction policy for this column family
        * @return this (for chained invocation)
        */
       public HColumnDescriptor setInMemoryCompaction(MemoryCompactionPolicy value) {
    @@ -444,8 +443,8 @@ public KeepDeletedCells getKeepDeletedCells() {
       }
     
       /**
    -   * @param value True if deleted rows should not be collected
    -   * immediately.
    +   * Set the keep deleted cells policy.
    +   * @param value True if deleted rows should not be collected immediately.
        * @return this (for chained invocation)
        */
       public HColumnDescriptor setKeepDeletedCells(KeepDeletedCells value) {
    @@ -454,9 +453,9 @@ public HColumnDescriptor setKeepDeletedCells(KeepDeletedCells value) {
       }
     
       /**
    -   * By default, HBase only consider timestamp in versions. So a previous Delete with higher ts
    -   * will mask a later Put with lower ts. Set this to true to enable new semantics of versions.
    -   * We will also consider mvcc in versions. See HBASE-15968 for details.
    +   * By default, HBase only consider timestamp in versions. So a previous Delete with higher ts will
    +   * mask a later Put with lower ts. Set this to true to enable new semantics of versions. We will
    +   * also consider mvcc in versions. See HBASE-15968 for details.
        */
       @Override
       public boolean isNewVersionBehavior() {
    @@ -468,13 +467,13 @@ public HColumnDescriptor setNewVersionBehavior(boolean newVersionBehavior) {
         return this;
       }
     
    -
       @Override
       public int getTimeToLive() {
         return delegatee.getTimeToLive();
       }
     
       /**
    +   * Set the time to live of cell contents
        * @param value Time-to-live of cell contents, in seconds.
        * @return this (for chained invocation)
        */
    @@ -484,8 +483,9 @@ public HColumnDescriptor setTimeToLive(int value) {
       }
     
       /**
    +   * Set the time to live of cell contents
        * @param value Time to live of cell contents, in human readable format
    -   *                   @see org.apache.hadoop.hbase.util.PrettyPrinter#format(String, Unit)
    +   * @see org.apache.hadoop.hbase.util.PrettyPrinter#format(String, Unit)
        * @return this (for chained invocation)
        */
       public HColumnDescriptor setTimeToLive(String value) throws HBaseException {
    @@ -499,8 +499,8 @@ public int getMinVersions() {
       }
     
       /**
    -   * @param value The minimum number of versions to keep.
    -   * (used when timeToLive is set)
    +   * Set the minimum number of versions to keep.
    +   * @param value The minimum number of versions to keep. (used when timeToLive is set)
        * @return this (for chained invocation)
        */
       public HColumnDescriptor setMinVersions(int value) {
    @@ -514,8 +514,9 @@ public boolean isBlockCacheEnabled() {
       }
     
       /**
    -   * @param value True if hfile DATA type blocks should be cached (We always cache
    -   * INDEX and BLOOM blocks; you cannot turn this off).
    +   * Set or clear the block cache enabled flag.
    +   * @param value True if hfile DATA type blocks should be cached (We always cache INDEX and BLOOM
    +   *              blocks; you cannot turn this off).
        * @return this (for chained invocation)
        */
       public HColumnDescriptor setBlockCacheEnabled(boolean value) {
    @@ -529,6 +530,7 @@ public BloomType getBloomFilterType() {
       }
     
       /**
    +   * Set the bloom filter type.
        * @param value bloom filter type
        * @return this (for chained invocation)
        */
    @@ -542,10 +544,6 @@ public int getScope() {
         return delegatee.getScope();
       }
     
    - /**
    -  * @param value the scope tag
    -  * @return this (for chained invocation)
    -  */
       public HColumnDescriptor setScope(int value) {
         getDelegateeForModification().setScope(value);
         return this;
    @@ -557,6 +555,7 @@ public boolean isCacheDataOnWrite() {
       }
     
       /**
    +   * Set or clear the cache data on write flag.
        * @param value true if we should cache data blocks on write
        * @return this (for chained invocation)
        */
    @@ -566,8 +565,7 @@ public HColumnDescriptor setCacheDataOnWrite(boolean value) {
       }
     
       /**
    -   * This is a noop call from HBase 2.0 onwards
    -   *
    +   * Set or clear the cache in L1 flag. This is a noop call from HBase 2.0 onwards
        * @return this (for chained invocation)
        * @deprecated Since 2.0 and will be removed in 3.0 with out any replacement. Caching data in on
        *             heap Cache, when there are both on heap LRU Cache and Bucket Cache will no longer
    @@ -584,6 +582,7 @@ public boolean isCacheIndexesOnWrite() {
       }
     
       /**
    +   * Set or clear the cache indexes on write flag.
        * @param value true if we should cache index blocks on write
        * @return this (for chained invocation)
        */
    @@ -598,6 +597,7 @@ public boolean isCacheBloomsOnWrite() {
       }
     
       /**
    +   * Set or clear the cache bloom filters on write flag.
        * @param value true if we should cache bloomfilter blocks on write
        * @return this (for chained invocation)
        */
    @@ -612,8 +612,8 @@ public boolean isEvictBlocksOnClose() {
       }
     
       /**
    -   * @param value true if we should evict cached blocks from the blockcache on
    -   * close
    +   * Set or clear the evict bloom filters on close flag.
    +   * @param value true if we should evict cached blocks from the blockcache on close
        * @return this (for chained invocation)
        */
       public HColumnDescriptor setEvictBlocksOnClose(boolean value) {
    @@ -627,6 +627,7 @@ public boolean isPrefetchBlocksOnOpen() {
       }
     
       /**
    +   * Set or clear the prefetch on open flag.
        * @param value true if we should prefetch blocks into the blockcache on open
        * @return this (for chained invocation)
        */
    @@ -635,17 +636,12 @@ public HColumnDescriptor setPrefetchBlocksOnOpen(boolean value) {
         return this;
       }
     
    -  /**
    -   * @see java.lang.Object#toString()
    -   */
       @Override
       public String toString() {
         return delegatee.toString();
       }
     
    -  /**
    -   * @return Column family descriptor with only the customized attributes.
    -   */
    +  /** Returns Column family descriptor with only the customized attributes. */
       @Override
       public String toStringCustomizedValues() {
         return delegatee.toStringCustomizedValues();
    @@ -659,9 +655,6 @@ public static Map getDefaultValues() {
         return ColumnFamilyDescriptorBuilder.getDefaultValues();
       }
     
    -  /**
    -   * @see java.lang.Object#equals(java.lang.Object)
    -   */
       @Override
       public boolean equals(Object obj) {
         if (this == obj) {
    @@ -673,9 +666,6 @@ public boolean equals(Object obj) {
         return false;
       }
     
    -  /**
    -   * @see java.lang.Object#hashCode()
    -   */
       @Override
       public int hashCode() {
         return delegatee.hashCode();
    @@ -687,7 +677,7 @@ public int compareTo(HColumnDescriptor other) {
       }
     
       /**
    -   * @return This instance serialized with pb with pb magic prefix
    +   * Returns This instance serialized with pb with pb magic prefix
        * @see #parseFrom(byte[])
        */
       public byte[] toByteArray() {
    @@ -695,12 +685,12 @@ public byte[] toByteArray() {
       }
     
       /**
    +   * Parse a serialized representation of a {@link HColumnDescriptor}
        * @param bytes A pb serialized {@link HColumnDescriptor} instance with pb magic prefix
        * @return An instance of {@link HColumnDescriptor} made from bytes
    -   * @throws DeserializationException
        * @see #toByteArray()
        */
    -  public static HColumnDescriptor parseFrom(final byte [] bytes) throws DeserializationException {
    +  public static HColumnDescriptor parseFrom(final byte[] bytes) throws DeserializationException {
         ColumnFamilyDescriptor desc = ColumnFamilyDescriptorBuilder.parseFrom(bytes);
         if (desc instanceof ModifyableColumnFamilyDescriptor) {
           return new HColumnDescriptor((ModifyableColumnFamilyDescriptor) desc);
    @@ -721,7 +711,7 @@ public Map getConfiguration() {
     
       /**
        * Setter for storing a configuration setting.
    -   * @param key Config key. Same as XML config key e.g. hbase.something.or.other.
    +   * @param key   Config key. Same as XML config key e.g. hbase.something.or.other.
        * @param value String value. If null, removes the configuration.
        */
       public HColumnDescriptor setConfiguration(String key, String value) {
    @@ -743,7 +733,6 @@ public String getEncryptionType() {
     
       /**
        * Set the encryption algorithm for use with this family
    -   * @param value
        */
       public HColumnDescriptor setEncryptionType(String value) {
         getDelegateeForModification().setEncryptionType(value);
    @@ -814,8 +803,8 @@ public short getDFSReplication() {
       /**
        * Set the replication factor to hfile(s) belonging to this family
        * @param value number of replicas the blocks(s) belonging to this CF should have, or
    -   *          {@link #DEFAULT_DFS_REPLICATION} for the default replication factor set in the
    -   *          filesystem
    +   *              {@link #DEFAULT_DFS_REPLICATION} for the default replication factor set in the
    +   *              filesystem
        * @return this (for chained invocation)
        */
       public HColumnDescriptor setDFSReplication(short value) {
    @@ -831,7 +820,7 @@ public String getStoragePolicy() {
       /**
        * Set the storage policy for use with this family
        * @param value the policy to set, valid setting includes: "LAZY_PERSIST",
    -   *          "ALL_SSD", "ONE_SSD", "HOT", "WARM", "COLD"
    +   *              "ALL_SSD", "ONE_SSD", "HOT", "WARM", "COLD"
        */
       public HColumnDescriptor setStoragePolicy(String value) {
         getDelegateeForModification().setStoragePolicy(value);
    diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java
    index 2a0e804ff7ab..33d7d98c61e0 100644
    --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java
    +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java
    @@ -1,5 +1,4 @@
    -/**
    - *
    +/*
      * Licensed to the Apache Software Foundation (ASF) under one
      * or more contributor license agreements.  See the NOTICE file
      * distributed with this work for additional information
    @@ -24,7 +23,6 @@
     import java.util.Arrays;
     import java.util.List;
     import java.util.stream.Collectors;
    -
     import org.apache.hadoop.conf.Configuration;
     import org.apache.hadoop.hbase.KeyValue.KVComparator;
     import org.apache.hadoop.hbase.client.RegionInfo;
    @@ -38,40 +36,38 @@
     import org.apache.yetus.audience.InterfaceAudience;
     import org.slf4j.Logger;
     import org.slf4j.LoggerFactory;
    +
     import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
     import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
     
     /**
      * Information about a region. A region is a range of keys in the whole keyspace of a table, an
    - * identifier (a timestamp) for differentiating between subset ranges (after region split)
    - * and a replicaId for differentiating the instance for the same range and some status information
    - * about the region.
    - *
    - * The region has a unique name which consists of the following fields:
    + * identifier (a timestamp) for differentiating between subset ranges (after region split) and a
    + * replicaId for differentiating the instance for the same range and some status information about
    + * the region. The region has a unique name which consists of the following fields:
  * <ul>
- * <li>tableName : The name of the table</li>
- * <li>startKey : The startKey for the region.</li>
- * <li>regionId : A timestamp when the region is created.</li>
- * <li>replicaId : An id starting from 0 to differentiate replicas of the same region range
- * but hosted in separated servers. The same region range can be hosted in multiple locations.</li>
- * <li>encodedName : An MD5 encoded string for the region name.</li>
+ * <li>tableName : The name of the table</li>
+ * <li>startKey : The startKey for the region.</li>
+ * <li>regionId : A timestamp when the region is created.</li>
+ * <li>replicaId : An id starting from 0 to differentiate replicas of the same region range but
+ * hosted in separated servers. The same region range can be hosted in multiple locations.</li>
+ * <li>encodedName : An MD5 encoded string for the region name.</li>
  * </ul>
- *
- * <br> Other than the fields in the region name, region info contains:
+ * <br>
+ * Other than the fields in the region name, region info contains:
  * <ul>
- * <li>endKey : the endKey for the region (exclusive)</li>
- * <li>split : Whether the region is split</li>
- * <li>offline : Whether the region is offline</li>
+ * <li>endKey : the endKey for the region (exclusive)</li>
+ * <li>split : Whether the region is split</li>
+ * <li>offline : Whether the region is offline</li>
  * </ul>
    - * * In 0.98 or before, a list of table's regions would fully cover the total keyspace, and at any * point in time, a row key always belongs to a single region, which is hosted in a single server. * In 0.99+, a region can have multiple instances (called replicas), and thus a range (or row) can * correspond to multiple HRegionInfo's. These HRI's share the same fields however except the * replicaId field. If the replicaId is not set, it defaults to 0, which is compatible with the * previous behavior of a range corresponding to 1 region. - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. - * use {@link RegionInfoBuilder} to build {@link RegionInfo}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. use + * {@link RegionInfoBuilder} to build {@link RegionInfo}. */ @Deprecated @InterfaceAudience.Public @@ -79,46 +75,36 @@ public class HRegionInfo implements RegionInfo { private static final Logger LOG = LoggerFactory.getLogger(HRegionInfo.class); /** - * The new format for a region name contains its encodedName at the end. - * The encoded name also serves as the directory name for the region - * in the filesystem. - * - * New region name format: - * <tablename>,,<startkey>,<regionIdTimestamp>.<encodedName>. - * where, - * <encodedName> is a hex version of the MD5 hash of - * <tablename>,<startkey>,<regionIdTimestamp> - * - * The old region name format: - * <tablename>,<startkey>,<regionIdTimestamp> - * For region names in the old format, the encoded name is a 32-bit - * JenkinsHash integer value (in its decimal notation, string form). - *

    - * **NOTE** - * - * The first hbase:meta region, and regions created by an older - * version of HBase (0.20 or prior) will continue to use the - * old region name format. + * The new format for a region name contains its encodedName at the end. The encoded name also + * serves as the directory name for the region in the filesystem. New region name format: + * <tablename>,,<startkey>,<regionIdTimestamp>.<encodedName>. where, <encodedName> + * is a hex version of the MD5 hash of <tablename>,<startkey>,<regionIdTimestamp> The old + * region name format: <tablename>,<startkey>,<regionIdTimestamp> For region names in the + * old format, the encoded name is a 32-bit JenkinsHash integer value (in its decimal notation, + * string form). + *

    + * **NOTE** The first hbase:meta region, and regions created by an older version of HBase (0.20 or + * prior) will continue to use the old region name format. */ /** A non-capture group so that this can be embedded. */ - public static final String ENCODED_REGION_NAME_REGEX = RegionInfoBuilder.ENCODED_REGION_NAME_REGEX; + public static final String ENCODED_REGION_NAME_REGEX = + RegionInfoBuilder.ENCODED_REGION_NAME_REGEX; private static final int MAX_REPLICA_ID = 0xFFFF; /** - * @param regionName * @return the encodedName - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link org.apache.hadoop.hbase.client.RegionInfo#encodeRegionName(byte[])}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link org.apache.hadoop.hbase.client.RegionInfo#encodeRegionName(byte[])}. */ @Deprecated - public static String encodeRegionName(final byte [] regionName) { + public static String encodeRegionName(final byte[] regionName) { return RegionInfo.encodeRegionName(regionName); } /** - * @return Return a short, printable name for this region (usually encoded name) for us logging. + * Returns Return a short, printable name for this region (usually encoded name) for us logging. */ @Override public String getShortNameToLog() { @@ -126,19 +112,19 @@ public String getShortNameToLog() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link org.apache.hadoop.hbase.client.RegionInfo#getShortNameToLog(RegionInfo...)}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link org.apache.hadoop.hbase.client.RegionInfo#getShortNameToLog(RegionInfo...)}. */ @Deprecated - public static String getShortNameToLog(HRegionInfo...hris) { + public static String getShortNameToLog(HRegionInfo... hris) { return RegionInfo.getShortNameToLog(Arrays.asList(hris)); } /** - * @return Return a String of short, printable names for hris - * (usually encoded name) for us logging. - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link org.apache.hadoop.hbase.client.RegionInfo#getShortNameToLog(List)})}. + * @return Return a String of short, printable names for hris (usually encoded name) + * for us logging. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link org.apache.hadoop.hbase.client.RegionInfo#getShortNameToLog(List)})}. */ @Deprecated public static String getShortNameToLog(final List hris) { @@ -149,9 +135,9 @@ public static String getShortNameToLog(final List hris) { * Use logging. * @param encodedRegionName The encoded regionname. * @return hbase:meta if passed 1028785192 else returns - * encodedRegionName - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link RegionInfo#prettyPrint(String)}. + * encodedRegionName + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link RegionInfo#prettyPrint(String)}. */ @Deprecated @InterfaceAudience.Private @@ -159,7 +145,7 @@ public static String prettyPrint(final String encodedRegionName) { return RegionInfo.prettyPrint(encodedRegionName); } - private byte [] endKey = HConstants.EMPTY_BYTE_ARRAY; + private byte[] endKey = HConstants.EMPTY_BYTE_ARRAY; // This flag is in the parent of a split while the parent is still referenced by daughter regions. // We USED to set this flag when we disabled a table but now table state is kept up in zookeeper // as of 0.90.0 HBase. 
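As a quick illustration of the naming scheme spelled out above, here is a minimal client-side sketch. It is not part of the patch; it assumes an HBase 2.x client on the classpath and a hypothetical table named demo.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.RegionInfoBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class RegionNameSketch {
  public static void main(String[] args) {
    // RegionInfoBuilder is the replacement the @deprecated notes point to.
    RegionInfo ri = RegionInfoBuilder.newBuilder(TableName.valueOf("demo"))
      .setStartKey(Bytes.toBytes("a"))
      .setEndKey(Bytes.toBytes("g"))
      .setRegionId(System.currentTimeMillis()) // regionId is a creation timestamp
      .build();

    // New-format name: <tablename>,<startkey>,<regionIdTimestamp>.<encodedName>.
    System.out.println(ri.getRegionNameAsString());
    // The encoded name is the hex MD5 that also names the region directory on the filesystem.
    System.out.println(ri.getEncodedName());
    // The same value can be derived statically from the full region name bytes.
    System.out.println(RegionInfo.encodeRegionName(ri.getRegionName()));
  }
}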
And now in DisableTableProcedure, finally we will create bunch of @@ -167,14 +153,14 @@ public static String prettyPrint(final String encodedRegionName) { // will not change the offLine flag. private boolean offLine = false; private long regionId = -1; - private transient byte [] regionName = HConstants.EMPTY_BYTE_ARRAY; + private transient byte[] regionName = HConstants.EMPTY_BYTE_ARRAY; private boolean split = false; - private byte [] startKey = HConstants.EMPTY_BYTE_ARRAY; + private byte[] startKey = HConstants.EMPTY_BYTE_ARRAY; private int hashCode = -1; - //TODO: Move NO_HASH to HStoreFile which is really the only place it is used. + // TODO: Move NO_HASH to HStoreFile which is really the only place it is used. public static final String NO_HASH = null; private String encodedName = null; - private byte [] encodedNameAsBytes = null; + private byte[] encodedNameAsBytes = null; private int replicaId = DEFAULT_REPLICA_ID; // Current TableName @@ -188,7 +174,7 @@ public static String prettyPrint(final String encodedRegionName) { /** HRegionInfo for first meta region */ // TODO: How come Meta regions still do not have encoded region names? Fix. public static final HRegionInfo FIRST_META_REGIONINFO = - new HRegionInfo(1L, TableName.META_TABLE_NAME); + new HRegionInfo(1L, TableName.META_TABLE_NAME); private void setHashCode() { int result = Arrays.hashCode(this.regionName); @@ -202,8 +188,7 @@ private void setHashCode() { } /** - * Private constructor used constructing HRegionInfo for the - * first meta regions + * Private constructor used constructing HRegionInfo for the first meta regions */ private HRegionInfo(long regionId, TableName tableName) { this(regionId, tableName, DEFAULT_REPLICA_ID); @@ -225,73 +210,61 @@ public HRegionInfo(final TableName tableName) { /** * Construct HRegionInfo with explicit parameters - * * @param tableName the table name - * @param startKey first key in region - * @param endKey end of key range - * @throws IllegalArgumentException + * @param startKey first key in region + * @param endKey end of key range */ public HRegionInfo(final TableName tableName, final byte[] startKey, final byte[] endKey) - throws IllegalArgumentException { + throws IllegalArgumentException { this(tableName, startKey, endKey, false); } /** * Construct HRegionInfo with explicit parameters - * - * @param tableName the table descriptor - * @param startKey first key in region - * @param endKey end of key range - * @param split true if this region has split and we have daughter regions - * regions that may or may not hold references to this region. - * @throws IllegalArgumentException + * @param tableName the table name + * @param startKey first key in region + * @param endKey end of key range + * @param split true if this region has split and we have daughter regions regions that may or + * may not hold references to this region. */ public HRegionInfo(final TableName tableName, final byte[] startKey, final byte[] endKey, - final boolean split) - throws IllegalArgumentException { + final boolean split) throws IllegalArgumentException { this(tableName, startKey, endKey, split, EnvironmentEdgeManager.currentTime()); } /** * Construct HRegionInfo with explicit parameters - * - * @param tableName the table descriptor - * @param startKey first key in region - * @param endKey end of key range - * @param split true if this region has split and we have daughter regions - * regions that may or may not hold references to this region. - * @param regionid Region id to use. 
- * @throws IllegalArgumentException + * @param tableName the table name + * @param startKey first key in region + * @param endKey end of key range + * @param split true if this region has split and we have daughter regions regions that may or + * may not hold references to this region. + * @param regionId Region id to use. */ - public HRegionInfo(final TableName tableName, final byte[] startKey, - final byte[] endKey, final boolean split, final long regionid) - throws IllegalArgumentException { - this(tableName, startKey, endKey, split, regionid, DEFAULT_REPLICA_ID); + public HRegionInfo(final TableName tableName, final byte[] startKey, final byte[] endKey, + final boolean split, final long regionId) throws IllegalArgumentException { + this(tableName, startKey, endKey, split, regionId, DEFAULT_REPLICA_ID); } /** * Construct HRegionInfo with explicit parameters - * - * @param tableName the table descriptor - * @param startKey first key in region - * @param endKey end of key range - * @param split true if this region has split and we have daughter regions - * regions that may or may not hold references to this region. - * @param regionid Region id to use. + * @param tableName the table name + * @param startKey first key in region + * @param endKey end of key range + * @param split true if this region has split and we have daughter regions regions that may or + * may not hold references to this region. + * @param regionId Region id to use. * @param replicaId the replicaId to use - * @throws IllegalArgumentException */ - public HRegionInfo(final TableName tableName, final byte[] startKey, - final byte[] endKey, final boolean split, final long regionid, - final int replicaId) - throws IllegalArgumentException { + public HRegionInfo(final TableName tableName, final byte[] startKey, final byte[] endKey, + final boolean split, final long regionId, final int replicaId) throws IllegalArgumentException { super(); if (tableName == null) { throw new IllegalArgumentException("TableName cannot be null"); } this.tableName = tableName; this.offLine = false; - this.regionId = regionid; + this.regionId = regionId; this.replicaId = replicaId; if (this.replicaId > MAX_REPLICA_ID) { throw new IllegalArgumentException("ReplicaId cannot be greater than" + MAX_REPLICA_ID); @@ -300,17 +273,14 @@ public HRegionInfo(final TableName tableName, final byte[] startKey, this.regionName = createRegionName(this.tableName, startKey, regionId, replicaId, true); this.split = split; - this.endKey = endKey == null? HConstants.EMPTY_END_ROW: endKey.clone(); - this.startKey = startKey == null? - HConstants.EMPTY_START_ROW: startKey.clone(); + this.endKey = endKey == null ? HConstants.EMPTY_END_ROW : endKey.clone(); + this.startKey = startKey == null ? HConstants.EMPTY_START_ROW : startKey.clone(); this.tableName = tableName; setHashCode(); } /** - * Costruct a copy of another HRegionInfo - * - * @param other + * Construct a copy of another HRegionInfo */ public HRegionInfo(RegionInfo other) { super(); @@ -334,92 +304,91 @@ public HRegionInfo(HRegionInfo other, int replicaId) { /** * Make a region name of passed parameters. - * @param tableName - * @param startKey Can be null - * @param regionid Region id (Usually timestamp from when region was created). - * @param newFormat should we create the region name in the new format - * (such that it contains its encoded name?). + * @param tableName the table name + * @param startKey Can be null + * @param regionId Region id (Usually timestamp from when region was created). 
+ * @param newFormat should we create the region name in the new format (such that it contains its + * encoded name?). * @return Region name made of passed tableName, startKey and id - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link RegionInfo#createRegionName(TableName, byte[], long, boolean)}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link RegionInfo#createRegionName(TableName, byte[], long, boolean)}. */ @Deprecated @InterfaceAudience.Private - public static byte [] createRegionName(final TableName tableName, - final byte [] startKey, final long regionid, boolean newFormat) { - return RegionInfo.createRegionName(tableName, startKey, Long.toString(regionid), newFormat); + public static byte[] createRegionName(final TableName tableName, final byte[] startKey, + final long regionId, boolean newFormat) { + return RegionInfo.createRegionName(tableName, startKey, Long.toString(regionId), newFormat); } /** * Make a region name of passed parameters. - * @param tableName - * @param startKey Can be null - * @param id Region id (Usually timestamp from when region was created). - * @param newFormat should we create the region name in the new format - * (such that it contains its encoded name?). + * @param tableName the table name + * @param startKey Can be null + * @param id Region id (Usually timestamp from when region was created). + * @param newFormat should we create the region name in the new format (such that it contains its + * encoded name?). * @return Region name made of passed tableName, startKey and id - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link RegionInfo#createRegionName(TableName, byte[], String, boolean)}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link RegionInfo#createRegionName(TableName, byte[], String, boolean)}. */ @Deprecated @InterfaceAudience.Private - public static byte [] createRegionName(final TableName tableName, - final byte [] startKey, final String id, boolean newFormat) { + public static byte[] createRegionName(final TableName tableName, final byte[] startKey, + final String id, boolean newFormat) { return RegionInfo.createRegionName(tableName, startKey, Bytes.toBytes(id), newFormat); } /** * Make a region name of passed parameters. - * @param tableName - * @param startKey Can be null - * @param regionid Region id (Usually timestamp from when region was created). - * @param replicaId - * @param newFormat should we create the region name in the new format - * (such that it contains its encoded name?). + * @param tableName the table name + * @param startKey Can be null + * @param regionId Region id (Usually timestamp from when region was created). + * @param newFormat should we create the region name in the new format (such that it contains its + * encoded name?). * @return Region name made of passed tableName, startKey, id and replicaId - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link RegionInfo#createRegionName(TableName, byte[], long, int, boolean)}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link RegionInfo#createRegionName(TableName, byte[], long, int, boolean)}. 
*/ @Deprecated @InterfaceAudience.Private - public static byte [] createRegionName(final TableName tableName, - final byte [] startKey, final long regionid, int replicaId, boolean newFormat) { - return RegionInfo.createRegionName(tableName, startKey, Bytes.toBytes(Long.toString(regionid)), - replicaId, newFormat); + public static byte[] createRegionName(final TableName tableName, final byte[] startKey, + final long regionId, int replicaId, boolean newFormat) { + return RegionInfo.createRegionName(tableName, startKey, Bytes.toBytes(Long.toString(regionId)), + replicaId, newFormat); } /** * Make a region name of passed parameters. - * @param tableName - * @param startKey Can be null - * @param id Region id (Usually timestamp from when region was created). - * @param newFormat should we create the region name in the new format - * (such that it contains its encoded name?). + * @param tableName the table name + * @param startKey Can be null + * @param id Region id (Usually timestamp from when region was created). + * @param newFormat should we create the region name in the new format (such that it contains its + * encoded name?). * @return Region name made of passed tableName, startKey and id - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link RegionInfo#createRegionName(TableName, byte[], byte[], boolean)}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link RegionInfo#createRegionName(TableName, byte[], byte[], boolean)}. */ @Deprecated @InterfaceAudience.Private - public static byte [] createRegionName(final TableName tableName, - final byte [] startKey, final byte [] id, boolean newFormat) { + public static byte[] createRegionName(final TableName tableName, final byte[] startKey, + final byte[] id, boolean newFormat) { return RegionInfo.createRegionName(tableName, startKey, id, DEFAULT_REPLICA_ID, newFormat); } + /** * Make a region name of passed parameters. - * @param tableName - * @param startKey Can be null - * @param id Region id (Usually timestamp from when region was created). - * @param replicaId + * @param tableName the table name + * @param startKey Can be null + * @param id Region id (Usually timestamp from when region was created) * @param newFormat should we create the region name in the new format * @return Region name made of passed tableName, startKey, id and replicaId - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link RegionInfo#createRegionName(TableName, byte[], byte[], int, boolean)}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link RegionInfo#createRegionName(TableName, byte[], byte[], int, boolean)}. */ @Deprecated @InterfaceAudience.Private - public static byte [] createRegionName(final TableName tableName, - final byte [] startKey, final byte [] id, final int replicaId, boolean newFormat) { + public static byte[] createRegionName(final TableName tableName, final byte[] startKey, + final byte[] id, final int replicaId, boolean newFormat) { return RegionInfo.createRegionName(tableName, startKey, id, replicaId, newFormat); } @@ -427,20 +396,19 @@ public HRegionInfo(HRegionInfo other, int replicaId) { * Gets the table name from the specified region name. * @param regionName to extract the table name from * @return Table name - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link org.apache.hadoop.hbase.client.RegionInfo#getTable(byte[])}. 
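The createRegionName overloads above assemble that same name from raw parts. A small sketch of the non-deprecated static helpers on RegionInfo that the deprecation notes reference, with a hypothetical table and key (HBase 2.x client assumed):

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateRegionNameSketch {
  public static void main(String[] args) throws IOException {
    TableName tn = TableName.valueOf("demo");
    byte[] startKey = Bytes.toBytes("row-0001");
    long regionId = 1600000000000L; // usually the creation timestamp

    // newFormat=true appends the encoded name, so the name carries its own MD5 hash.
    byte[] name = RegionInfo.createRegionName(tn, startKey, Long.toString(regionId), true);
    System.out.println(Bytes.toStringBinary(name));

    // parseRegionName splits a region name back into tableName, startKey and id.
    byte[][] parts = RegionInfo.parseRegionName(name);
    System.out.println(TableName.valueOf(parts[0]) + " / " + Bytes.toStringBinary(parts[1]));
  }
}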
+ * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link org.apache.hadoop.hbase.client.RegionInfo#getTable(byte[])}. */ @Deprecated - public static TableName getTable(final byte [] regionName) { + public static TableName getTable(final byte[] regionName) { return RegionInfo.getTable(regionName); } /** * Gets the start key from the specified region name. - * @param regionName * @return Start key. - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link org.apache.hadoop.hbase.client.RegionInfo#getStartKey(byte[])}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link org.apache.hadoop.hbase.client.RegionInfo#getStartKey(byte[])}. */ @Deprecated public static byte[] getStartKey(final byte[] regionName) throws IOException { @@ -449,34 +417,29 @@ public static byte[] getStartKey(final byte[] regionName) throws IOException { /** * Separate elements of a regionName. - * @param regionName * @return Array of byte[] containing tableName, startKey and id - * @throws IOException - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link RegionInfo#parseRegionName(byte[])}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link RegionInfo#parseRegionName(byte[])}. */ @Deprecated @InterfaceAudience.Private - public static byte [][] parseRegionName(final byte [] regionName) throws IOException { + public static byte[][] parseRegionName(final byte[] regionName) throws IOException { return RegionInfo.parseRegionName(regionName); } /** - * - * @param regionName * @return if region name is encoded. - * @throws IOException - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link org.apache.hadoop.hbase.client.RegionInfo#isEncodedRegionName(byte[])}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link org.apache.hadoop.hbase.client.RegionInfo#isEncodedRegionName(byte[])}. */ @Deprecated public static boolean isEncodedRegionName(byte[] regionName) throws IOException { return RegionInfo.isEncodedRegionName(regionName); } - /** @return the regionId */ + /** Returns the regionId */ @Override - public long getRegionId(){ + public long getRegionId() { return regionId; } @@ -485,13 +448,11 @@ public long getRegionId(){ * @see #getRegionNameAsString() */ @Override - public byte [] getRegionName(){ + public byte[] getRegionName() { return regionName; } - /** - * @return Region name as a String for use in logging, etc. - */ + /** Returns Region name as a String for use in logging, etc. */ @Override public String getRegionNameAsString() { if (RegionInfo.hasEncodedName(this.regionName)) { @@ -505,9 +466,7 @@ public String getRegionNameAsString() { return Bytes.toStringBinary(this.regionName) + "." 
+ this.getEncodedName(); } - /** - * @return the encoded region name - */ + /** Returns the encoded region name */ @Override public synchronized String getEncodedName() { if (this.encodedName == null) { @@ -517,37 +476,32 @@ public synchronized String getEncodedName() { } @Override - public synchronized byte [] getEncodedNameAsBytes() { + public synchronized byte[] getEncodedNameAsBytes() { if (this.encodedNameAsBytes == null) { this.encodedNameAsBytes = Bytes.toBytes(getEncodedName()); } return this.encodedNameAsBytes; } - /** - * @return the startKey - */ + /** Returns the startKey */ @Override - public byte [] getStartKey(){ + public byte[] getStartKey() { return startKey; } - /** - * @return the endKey - */ + /** Returns the endKey */ @Override - public byte [] getEndKey(){ + public byte[] getEndKey() { return endKey; } /** * Get current table name of the region - * @return TableName */ @Override public TableName getTable() { // This method name should be getTableName but there was already a method getTableName - // that returned a byte array. It is unfortunate given everywhere else, getTableName returns + // that returned a byte array. It is unfortunate given everywhere else, getTableName returns // a TableName instance. if (tableName == null || tableName.getName().length == 0) { tableName = getTable(getRegionName()); @@ -556,94 +510,77 @@ public TableName getTable() { } /** - * Returns true if the given inclusive range of rows is fully contained - * by this region. For example, if the region is foo,a,g and this is - * passed ["b","c"] or ["a","c"] it will return true, but if this is passed - * ["b","z"] it will return false. + * Returns true if the given inclusive range of rows is fully contained by this region. For + * example, if the region is foo,a,g and this is passed ["b","c"] or ["a","c"] it will return + * true, but if this is passed ["b","z"] it will return false. * @throws IllegalArgumentException if the range passed is invalid (ie. end < start) */ @Override public boolean containsRange(byte[] rangeStartKey, byte[] rangeEndKey) { if (Bytes.compareTo(rangeStartKey, rangeEndKey) > 0) { - throw new IllegalArgumentException( - "Invalid range: " + Bytes.toStringBinary(rangeStartKey) + - " > " + Bytes.toStringBinary(rangeEndKey)); + throw new IllegalArgumentException("Invalid range: " + Bytes.toStringBinary(rangeStartKey) + + " > " + Bytes.toStringBinary(rangeEndKey)); } boolean firstKeyInRange = Bytes.compareTo(rangeStartKey, startKey) >= 0; boolean lastKeyInRange = - Bytes.compareTo(rangeEndKey, endKey) < 0 || - Bytes.equals(endKey, HConstants.EMPTY_BYTE_ARRAY); + Bytes.compareTo(rangeEndKey, endKey) < 0 || Bytes.equals(endKey, HConstants.EMPTY_BYTE_ARRAY); return firstKeyInRange && lastKeyInRange; } - /** - * @return true if the given row falls in this region. - */ + /** Returns true if the given row falls in this region. 
*/ @Override public boolean containsRow(byte[] row) { - return Bytes.compareTo(row, startKey) >= 0 && - (Bytes.compareTo(row, endKey) < 0 || - Bytes.equals(endKey, HConstants.EMPTY_BYTE_ARRAY)); + return Bytes.compareTo(row, startKey) >= 0 + && (Bytes.compareTo(row, endKey) < 0 || Bytes.equals(endKey, HConstants.EMPTY_BYTE_ARRAY)); } - /** - * @return true if this region is from hbase:meta - */ + /** Returns true if this region is from hbase:meta */ public boolean isMetaTable() { return isMetaRegion(); } - /** - * @return true if this region is a meta region - */ + /** Returns true if this region is a meta region */ @Override public boolean isMetaRegion() { - return tableName.equals(HRegionInfo.FIRST_META_REGIONINFO.getTable()); + return tableName.equals(HRegionInfo.FIRST_META_REGIONINFO.getTable()); } - /** - * @return true if this region is from a system table - */ + /** Returns true if this region is from a system table */ public boolean isSystemTable() { return tableName.isSystemTable(); } - /** - * @return true if has been split and has daughters. - */ + /** Returns true if has been split and has daughters. */ @Override public boolean isSplit() { return this.split; } /** + * Set or clear the split status flag. * @param split set split status */ public void setSplit(boolean split) { this.split = split; } - /** - * @return true if this region is offline. - */ + /** Returns true if this region is offline. */ @Override public boolean isOffline() { return this.offLine; } /** - * The parent of a region split is offline while split daughters hold - * references to the parent. Offlined regions are closed. + * The parent of a region split is offline while split daughters hold references to the parent. + * Offlined regions are closed. * @param offLine Set online/offline status. */ public void setOffline(boolean offLine) { this.offLine = offLine; } - /** - * @return true if this is a split parent region. - */ + /** Returns true if this is a split parent region. */ @Override public boolean isSplitParent() { if (!isSplit()) return false; @@ -667,14 +604,11 @@ public int getReplicaId() { */ @Override public String toString() { - return "{ENCODED => " + getEncodedName() + ", " + - HConstants.NAME + " => '" + Bytes.toStringBinary(this.regionName) - + "', STARTKEY => '" + - Bytes.toStringBinary(this.startKey) + "', ENDKEY => '" + - Bytes.toStringBinary(this.endKey) + "'" + - (isOffline()? ", OFFLINE => true": "") + - (isSplit()? ", SPLIT => true": "") + - ((replicaId > 0)? ", REPLICA_ID => " + replicaId : "") + "}"; + return "{ENCODED => " + getEncodedName() + ", " + HConstants.NAME + " => '" + + Bytes.toStringBinary(this.regionName) + "', STARTKEY => '" + + Bytes.toStringBinary(this.startKey) + "', ENDKEY => '" + Bytes.toStringBinary(this.endKey) + + "'" + (isOffline() ? ", OFFLINE => true" : "") + (isSplit() ? ", SPLIT => true" : "") + + ((replicaId > 0) ? ", REPLICA_ID => " + replicaId : "") + "}"; } /** @@ -691,7 +625,7 @@ public boolean equals(Object o) { if (!(o instanceof HRegionInfo)) { return false; } - return this.compareTo((HRegionInfo)o) == 0; + return this.compareTo((HRegionInfo) o) == 0; } /** @@ -704,17 +638,15 @@ public int hashCode() { /** * @return Comparator to use comparing {@link KeyValue}s. - * @deprecated Use Region#getCellComparator(). deprecated for hbase 2.0, remove for hbase 3.0 + * @deprecated Use Region#getCellComparator(). deprecated for hbase 2.0, remove for hbase 3.0 */ @Deprecated public KVComparator getComparator() { - return isMetaRegion()? 
- KeyValue.META_COMPARATOR: KeyValue.COMPARATOR; + return isMetaRegion() ? KeyValue.META_COMPARATOR : KeyValue.COMPARATOR; } /** * Convert a HRegionInfo to the protobuf RegionInfo - * * @return the converted RegionInfo */ HBaseProtos.RegionInfo convert() { @@ -723,12 +655,11 @@ HBaseProtos.RegionInfo convert() { /** * Convert a HRegionInfo to a RegionInfo - * * @param info the HRegionInfo to convert * @return the converted RegionInfo - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use toRegionInfo(org.apache.hadoop.hbase.client.RegionInfo) - * in org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * toRegionInfo(org.apache.hadoop.hbase.client.RegionInfo) in + * org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil. */ @Deprecated @InterfaceAudience.Private @@ -738,12 +669,11 @@ public static HBaseProtos.RegionInfo convert(final HRegionInfo info) { /** * Convert a RegionInfo to a HRegionInfo - * * @param proto the RegionInfo to convert * @return the converted HRegionInfo - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use toRegionInfo(HBaseProtos.RegionInfo) - * in org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * toRegionInfo(HBaseProtos.RegionInfo) in + * org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil. */ @Deprecated @InterfaceAudience.Private @@ -753,17 +683,12 @@ public static HRegionInfo convert(final HBaseProtos.RegionInfo proto) { // RegionInfo into HRegionInfo which is what is wanted here. HRegionInfo hri; if (ri.isMetaRegion()) { - hri = ri.getReplicaId() == RegionInfo.DEFAULT_REPLICA_ID ? - HRegionInfo.FIRST_META_REGIONINFO : - new HRegionInfo(ri.getRegionId(), ri.getTable(), ri.getReplicaId()); + hri = ri.getReplicaId() == RegionInfo.DEFAULT_REPLICA_ID + ? HRegionInfo.FIRST_META_REGIONINFO + : new HRegionInfo(ri.getRegionId(), ri.getTable(), ri.getReplicaId()); } else { - hri = new HRegionInfo( - ri.getTable(), - ri.getStartKey(), - ri.getEndKey(), - ri.isSplit(), - ri.getRegionId(), - ri.getReplicaId()); + hri = new HRegionInfo(ri.getTable(), ri.getStartKey(), ri.getEndKey(), ri.isSplit(), + ri.getRegionId(), ri.getReplicaId()); if (proto.hasOffline()) { hri.setOffline(proto.getOffline()); } @@ -772,38 +697,41 @@ public static HRegionInfo convert(final HBaseProtos.RegionInfo proto) { } /** + * Serialize a {@link HRegionInfo} into a byte array. * @return This instance serialized as protobuf w/ a magic pb prefix. * @see #parseFrom(byte[]) - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link org.apache.hadoop.hbase.client.RegionInfo#toByteArray(RegionInfo)}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link org.apache.hadoop.hbase.client.RegionInfo#toByteArray(RegionInfo)}. */ @Deprecated - public byte [] toByteArray() { + public byte[] toByteArray() { return RegionInfo.toByteArray(this); } /** - * @return A deserialized {@link HRegionInfo} - * or null if we failed deserialize or passed bytes null + * Parse a serialized representation of a {@link HRegionInfo}. + * @return A deserialized {@link HRegionInfo} or null if we failed deserialize or passed bytes + * null * @see #toByteArray() - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link org.apache.hadoop.hbase.client.RegionInfo#parseFromOrNull(byte[])}. 
+ * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link org.apache.hadoop.hbase.client.RegionInfo#parseFromOrNull(byte[])}. */ @Deprecated - public static HRegionInfo parseFromOrNull(final byte [] bytes) { + public static HRegionInfo parseFromOrNull(final byte[] bytes) { if (bytes == null) return null; return parseFromOrNull(bytes, 0, bytes.length); } /** - * @return A deserialized {@link HRegionInfo} or null - * if we failed deserialize or passed bytes null + * Parse a serialized representation of a {@link HRegionInfo}. + * @return A deserialized {@link HRegionInfo} or null if we failed deserialize or passed bytes + * null * @see #toByteArray() - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link org.apache.hadoop.hbase.client.RegionInfo#parseFromOrNull(byte[], int, int)}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link org.apache.hadoop.hbase.client.RegionInfo#parseFromOrNull(byte[], int, int)}. */ @Deprecated - public static HRegionInfo parseFromOrNull(final byte [] bytes, int offset, int len) { + public static HRegionInfo parseFromOrNull(final byte[] bytes, int offset, int len) { if (bytes == null || len <= 0) return null; try { return parseFrom(bytes, offset, len); @@ -813,31 +741,31 @@ public static HRegionInfo parseFromOrNull(final byte [] bytes, int offset, int l } /** + * Parse a serialized representation of a {@link HRegionInfo}. * @param bytes A pb RegionInfo serialized with a pb magic prefix. * @return A deserialized {@link HRegionInfo} - * @throws DeserializationException * @see #toByteArray() - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link org.apache.hadoop.hbase.client.RegionInfo#parseFrom(byte[])}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link org.apache.hadoop.hbase.client.RegionInfo#parseFrom(byte[])}. */ - public static HRegionInfo parseFrom(final byte [] bytes) throws DeserializationException { + public static HRegionInfo parseFrom(final byte[] bytes) throws DeserializationException { if (bytes == null) return null; return parseFrom(bytes, 0, bytes.length); } /** - * @param bytes A pb RegionInfo serialized with a pb magic prefix. + * Parse a serialized representation of a {@link HRegionInfo}. + * @param bytes A pb RegionInfo serialized with a pb magic prefix. * @param offset starting point in the byte array - * @param len length to read on the byte array + * @param len length to read on the byte array * @return A deserialized {@link HRegionInfo} - * @throws DeserializationException * @see #toByteArray() - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link org.apache.hadoop.hbase.client.RegionInfo#parseFrom(byte[], int, int)}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link org.apache.hadoop.hbase.client.RegionInfo#parseFrom(byte[], int, int)}. 
*/ @Deprecated - public static HRegionInfo parseFrom(final byte [] bytes, int offset, int len) - throws DeserializationException { + public static HRegionInfo parseFrom(final byte[] bytes, int offset, int len) + throws DeserializationException { if (ProtobufUtil.isPBMagicPrefix(bytes, offset, len)) { int pblen = ProtobufUtil.lengthOfPBMagic(); try { @@ -854,44 +782,38 @@ public static HRegionInfo parseFrom(final byte [] bytes, int offset, int len) } /** - * Use this instead of {@link #toByteArray()} when writing to a stream and you want to use - * the pb mergeDelimitedFrom (w/o the delimiter, pb reads to EOF which may not be what you want). + * Use this instead of {@link #toByteArray()} when writing to a stream and you want to use the pb + * mergeDelimitedFrom (w/o the delimiter, pb reads to EOF which may not be what you want). * @return This instance serialized as a delimited protobuf w/ a magic pb prefix. - * @throws IOException * @see #toByteArray() - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link RegionInfo#toDelimitedByteArray(RegionInfo)}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link RegionInfo#toDelimitedByteArray(RegionInfo)}. */ @Deprecated - public byte [] toDelimitedByteArray() throws IOException { + public byte[] toDelimitedByteArray() throws IOException { return RegionInfo.toDelimitedByteArray(this); } /** - * Get the descriptive name as {@link RegionState} does it but with hidden - * startkey optionally - * @param state - * @param conf + * Get the descriptive name as {@link RegionState} does it but with hidden startkey optionally * @return descriptive string - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use RegionInfoDisplay#getDescriptiveNameFromRegionStateForDisplay(RegionState, Configuration) - * over in hbase-server module. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * RegionInfoDisplay#getDescriptiveNameFromRegionStateForDisplay(RegionState, + * Configuration) over in hbase-server module. */ @Deprecated @InterfaceAudience.Private public static String getDescriptiveNameFromRegionStateForDisplay(RegionState state, - Configuration conf) { + Configuration conf) { return RegionInfoDisplay.getDescriptiveNameFromRegionStateForDisplay(state, conf); } /** * Get the end key for display. Optionally hide the real end key. - * @param hri - * @param conf * @return the endkey - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use RegionInfoDisplay#getEndKeyForDisplay(RegionInfo, Configuration) - * over in hbase-server module. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * RegionInfoDisplay#getEndKeyForDisplay(RegionInfo, Configuration) over in + * hbase-server module. */ @Deprecated @InterfaceAudience.Private @@ -901,12 +823,10 @@ public static byte[] getEndKeyForDisplay(HRegionInfo hri, Configuration conf) { /** * Get the start key for display. Optionally hide the real start key. - * @param hri - * @param conf * @return the startkey - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use RegionInfoDisplay#getStartKeyForDisplay(RegionInfo, Configuration) - * over in hbase-server module. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * RegionInfoDisplay#getStartKeyForDisplay(RegionInfo, Configuration) over in + * hbase-server module. 
*/ @Deprecated @InterfaceAudience.Private @@ -916,12 +836,10 @@ public static byte[] getStartKeyForDisplay(HRegionInfo hri, Configuration conf) /** * Get the region name for display. Optionally hide the start key. - * @param hri - * @param conf * @return region name as String - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use RegionInfoDisplay#getRegionNameAsStringForDisplay(RegionInfo, Configuration) - * over in hbase-server module. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * RegionInfoDisplay#getRegionNameAsStringForDisplay(RegionInfo, Configuration) over + * in hbase-server module. */ @Deprecated @InterfaceAudience.Private @@ -931,12 +849,10 @@ public static String getRegionNameAsStringForDisplay(HRegionInfo hri, Configurat /** * Get the region name for display. Optionally hide the start key. - * @param hri - * @param conf * @return region name bytes - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use RegionInfoDisplay#getRegionNameForDisplay(RegionInfo, Configuration) - * over in hbase-server module. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * RegionInfoDisplay#getRegionNameForDisplay(RegionInfo, Configuration) over in + * hbase-server module. */ @Deprecated @InterfaceAudience.Private @@ -945,13 +861,11 @@ public static byte[] getRegionNameForDisplay(HRegionInfo hri, Configuration conf } /** - * Parses an HRegionInfo instance from the passed in stream. Presumes the HRegionInfo was + * Parses an HRegionInfo instance from the passed in stream. Presumes the HRegionInfo was * serialized to the stream with {@link #toDelimitedByteArray()} - * @param in * @return An instance of HRegionInfo. - * @throws IOException - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link RegionInfo#parseFrom(DataInputStream)}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link RegionInfo#parseFrom(DataInputStream)}. */ @Deprecated @InterfaceAudience.Private @@ -959,12 +873,12 @@ public static HRegionInfo parseFrom(final DataInputStream in) throws IOException // I need to be able to move back in the stream if this is not a pb serialization so I can // do the Writable decoding instead. int pblen = ProtobufUtil.lengthOfPBMagic(); - byte [] pbuf = new byte[pblen]; - if (in.markSupported()) { //read it with mark() + byte[] pbuf = new byte[pblen]; + if (in.markSupported()) { // read it with mark() in.mark(pblen); } - //assumption: if Writable serialization, it should be longer than pblen. + // assumption: if Writable serialization, it should be longer than pblen. in.readFully(pbuf, 0, pblen); if (ProtobufUtil.isPBMagicPrefix(pbuf)) { return convert(HBaseProtos.RegionInfo.parseDelimitedFrom(in)); @@ -976,14 +890,13 @@ public static HRegionInfo parseFrom(final DataInputStream in) throws IOException /** * Serializes given HRegionInfo's as a byte array. Use this instead of {@link #toByteArray()} when * writing to a stream and you want to use the pb mergeDelimitedFrom (w/o the delimiter, pb reads - * to EOF which may not be what you want). {@link #parseDelimitedFrom(byte[], int, int)} can - * be used to read back the instances. + * to EOF which may not be what you want). {@link #parseDelimitedFrom(byte[], int, int)} can be + * used to read back the instances. * @param infos HRegionInfo objects to serialize * @return This instance serialized as a delimited protobuf w/ a magic pb prefix. 
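The toByteArray/parseFrom pair documented in these methods round-trips a region through its pb-magic-prefixed form. A minimal sketch using the non-deprecated RegionInfo helpers the deprecation notes point to, with a hypothetical table name (HBase 2.x client assumed):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.RegionInfoBuilder;
import org.apache.hadoop.hbase.exceptions.DeserializationException;

public class RegionInfoSerializationSketch {
  public static void main(String[] args) throws DeserializationException {
    RegionInfo ri = RegionInfoBuilder.newBuilder(TableName.valueOf("demo")).build();

    // Serialized as protobuf with the magic pb prefix, as described above.
    byte[] bytes = RegionInfo.toByteArray(ri);

    // parseFrom checks the magic prefix and rebuilds an equivalent RegionInfo.
    RegionInfo copy = RegionInfo.parseFrom(bytes);
    System.out.println(copy.getEncodedName().equals(ri.getEncodedName())); // true
  }
}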
- * @throws IOException * @see #toByteArray() - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link RegionInfo#toDelimitedByteArray(RegionInfo...)}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link RegionInfo#toDelimitedByteArray(RegionInfo...)}. */ @Deprecated @InterfaceAudience.Private @@ -994,16 +907,16 @@ public static byte[] toDelimitedByteArray(HRegionInfo... infos) throws IOExcepti /** * Parses all the HRegionInfo instances from the passed in stream until EOF. Presumes the * HRegionInfo's were serialized to the stream with {@link #toDelimitedByteArray()} - * @param bytes serialized bytes + * @param bytes serialized bytes * @param offset the start offset into the byte[] buffer * @param length how far we should read into the byte[] buffer * @return All the hregioninfos that are in the byte array. Keeps reading till we hit the end. - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link RegionInfo#parseDelimitedFrom(byte[], int, int)}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link RegionInfo#parseDelimitedFrom(byte[], int, int)}. */ @Deprecated public static List parseDelimitedFrom(final byte[] bytes, final int offset, - final int length) throws IOException { + final int length) throws IOException { if (bytes == null) { throw new IllegalArgumentException("Can't build an object with empty bytes array"); } @@ -1023,11 +936,9 @@ public static List parseDelimitedFrom(final byte[] bytes, final int /** * Check whether two regions are adjacent - * @param regionA - * @param regionB * @return true if two regions are adjacent - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link org.apache.hadoop.hbase.client.RegionInfo#areAdjacent(RegionInfo, RegionInfo)}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link org.apache.hadoop.hbase.client.RegionInfo#areAdjacent(RegionInfo, RegionInfo)}. */ @Deprecated public static boolean areAdjacent(HRegionInfo regionA, HRegionInfo regionB) { diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionLocation.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionLocation.java index fd679bd0cbc4..3180baa17a60 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionLocation.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionLocation.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -24,17 +23,13 @@ import org.apache.yetus.audience.InterfaceAudience; /** - * Data structure to hold RegionInfo and the address for the hosting - * HRegionServer. Immutable. Comparable, but we compare the 'location' only: - * i.e. the hostname and port, and *not* the regioninfo. This means two - * instances are the same if they refer to the same 'location' (the same - * hostname and port), though they may be carrying different regions. - * - * On a big cluster, each client will have thousands of instances of this object, often - * 100 000 of them if not million. It's important to keep the object size as small - * as possible. - * - *
    This interface has been marked InterfaceAudience.Public in 0.96 and 0.98. + * Data structure to hold RegionInfo and the address for the hosting HRegionServer. Immutable. + * Comparable, but we compare the 'location' only: i.e. the hostname and port, and *not* the + * regioninfo. This means two instances are the same if they refer to the same 'location' (the same + * hostname and port), though they may be carrying different regions. On a big cluster, each client + * will have thousands of instances of this object, often 100 000 of them if not million. It's + * important to keep the object size as small as possible.
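To make the "compare the location only" contract concrete, a small sketch with hypothetical host, port and table names (HBase 2.x client assumed):

import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.RegionInfoBuilder;

public class LocationEqualitySketch {
  public static void main(String[] args) {
    ServerName sn = ServerName.valueOf("rs1.example.com", 16020, 1600000000000L);
    RegionInfo r1 = RegionInfoBuilder.newBuilder(TableName.valueOf("t1")).build();
    RegionInfo r2 = RegionInfoBuilder.newBuilder(TableName.valueOf("t2")).build();

    // Two locations carrying different regions but the same server compare as equal,
    // because HRegionLocation compares the location, not the region info.
    HRegionLocation a = new HRegionLocation(r1, sn);
    HRegionLocation b = new HRegionLocation(r2, sn);
    System.out.println(a.equals(b)); // true
  }
}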
    + * This interface has been marked InterfaceAudience.Public in 0.96 and 0.98. */ @InterfaceAudience.Public public class HRegionLocation implements Comparable { @@ -58,7 +53,7 @@ public HRegionLocation(RegionInfo regionInfo, ServerName serverName, long seqNum @Override public String toString() { return "region=" + (this.regionInfo == null ? "null" : this.regionInfo.getRegionNameAsString()) - + ", hostname=" + this.serverName + ", seqNum=" + seqNum; + + ", hostname=" + this.serverName + ", seqNum=" + seqNum; } /** @@ -75,7 +70,7 @@ public boolean equals(Object o) { if (!(o instanceof HRegionLocation)) { return false; } - return this.compareTo((HRegionLocation)o) == 0; + return this.compareTo((HRegionLocation) o) == 0; } /** @@ -87,19 +82,16 @@ public int hashCode() { } /** - * - * @return Immutable HRegionInfo + * Returns immutable HRegionInfo * @deprecated Since 2.0.0. Will remove in 3.0.0. Use {@link #getRegion()}} instead. */ @Deprecated - public HRegionInfo getRegionInfo(){ + public HRegionInfo getRegionInfo() { return regionInfo == null ? null : new ImmutableHRegionInfo(regionInfo); } - /** - * @return regionInfo - */ - public RegionInfo getRegion(){ + /** Returns regionInfo */ + public RegionInfo getRegion() { return regionInfo; } @@ -116,8 +108,8 @@ public long getSeqNum() { } /** - * @return String made of hostname and port formatted as - * per {@link Addressing#createHostAndPortStr(String, int)} + * Returns String made of hostname and port formatted as per + * {@link Addressing#createHostAndPortStr(String, int)} */ public String getHostnamePort() { return Addressing.createHostAndPortStr(this.getHostname(), this.getPort()); diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java index 8f9e77ac6488..808cb5a40606 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -42,12 +41,12 @@ import org.apache.yetus.audience.InterfaceAudience; /** - * HTableDescriptor contains the details about an HBase table such as the descriptors of - * all the column families, is the table a catalog table, hbase:meta , - * if the table is read only, the maximum size of the memstore, - * when the region split should occur, coprocessors associated with it etc... - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. - * Use {@link TableDescriptorBuilder} to build {@link HTableDescriptor}. + * HTableDescriptor contains the details about an HBase table such as the descriptors of all the + * column families, is the table a catalog table, hbase:meta , if the table is read + * only, the maximum size of the memstore, when the region split should occur, coprocessors + * associated with it etc... + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. Use + * {@link TableDescriptorBuilder} to build {@link HTableDescriptor}. 
*/ @Deprecated @InterfaceAudience.Public @@ -66,26 +65,34 @@ public class HTableDescriptor implements TableDescriptor, ComparableHADOOP-1581 HBASE: (HBASE-174) Un-openable tablename bug + * @see HADOOP-1581 HBASE: (HBASE-174) + * Un-openable tablename bug */ public HTableDescriptor(final TableName name) { this(new ModifyableTableDescriptor(name)); @@ -94,8 +101,8 @@ public HTableDescriptor(final TableName name) { /** * Construct a table descriptor by cloning the descriptor passed as a parameter. *

    - * Makes a deep copy of the supplied descriptor. - * Can make a modifiable descriptor from an ImmutableHTableDescriptor. + * Makes a deep copy of the supplied descriptor. Can make a modifiable descriptor from an + * ImmutableHTableDescriptor. * @param desc The descriptor. */ public HTableDescriptor(final HTableDescriptor desc) { @@ -103,8 +110,7 @@ public HTableDescriptor(final HTableDescriptor desc) { } protected HTableDescriptor(final HTableDescriptor desc, boolean deepClone) { - this(deepClone ? new ModifyableTableDescriptor(desc.getTableName(), desc) - : desc.delegatee); + this(deepClone ? new ModifyableTableDescriptor(desc.getTableName(), desc) : desc.delegatee); } public HTableDescriptor(final TableDescriptor desc) { @@ -112,11 +118,11 @@ public HTableDescriptor(final TableDescriptor desc) { } /** - * Construct a table descriptor by cloning the descriptor passed as a parameter - * but using a different table name. + * Construct a table descriptor by cloning the descriptor passed as a parameter but using a + * different table name. *

    - * Makes a deep copy of the supplied descriptor. - * Can make a modifiable descriptor from an ImmutableHTableDescriptor. + * Makes a deep copy of the supplied descriptor. Can make a modifiable descriptor from an + * ImmutableHTableDescriptor. * @param name Table name. * @param desc The descriptor. */ @@ -130,7 +136,6 @@ protected HTableDescriptor(ModifyableTableDescriptor delegatee) { /** * This is vestigial API. It will be removed in 3.0. - * * @return always return the false */ public boolean isRootRegion() { @@ -138,11 +143,8 @@ public boolean isRootRegion() { } /** - * Checks if this table is hbase:meta - * region. - * - * @return true if this table is hbase:meta - * region + * Checks if this table is hbase:meta region. + * @return true if this table is hbase:meta region */ @Override public boolean isMetaRegion() { @@ -151,7 +153,6 @@ public boolean isMetaRegion() { /** * Checks if the table is a hbase:meta table - * * @return true if table is hbase:meta region. */ @Override @@ -159,9 +160,7 @@ public boolean isMetaTable() { return delegatee.isMetaTable(); } - /** - * @return Getter for fetching an unmodifiable map. - */ + /** Returns Getter for fetching an unmodifiable map. */ @Override public Map getValues() { return delegatee.getValues(); @@ -169,8 +168,7 @@ public Map getValues() { /** * Setter for storing metadata as a (key, value) pair in map - * - * @param key The key. + * @param key The key. * @param value The value. If null, removes the setting. */ public HTableDescriptor setValue(byte[] key, byte[] value) { @@ -180,7 +178,6 @@ public HTableDescriptor setValue(byte[] key, byte[] value) { /* * Setter for storing metadata as a (key, value) pair in map - * * @param key The key. * @param value The value. If null, removes the setting. */ @@ -191,8 +188,7 @@ public HTableDescriptor setValue(final Bytes key, final Bytes value) { /** * Setter for storing metadata as a (key, value) pair in map - * - * @param key The key. + * @param key The key. * @param value The value. If null, removes the setting. */ public HTableDescriptor setValue(String key, String value) { @@ -202,9 +198,7 @@ public HTableDescriptor setValue(String key, String value) { /** * Remove metadata represented by the key from the map - * - * @param key Key whose key and value we're to remove from HTableDescriptor - * parameters. + * @param key Key whose key and value we're to remove from HTableDescriptor parameters. */ public void remove(final String key) { getDelegateeForModification().removeValue(Bytes.toBytes(key)); @@ -212,9 +206,7 @@ public void remove(final String key) { /** * Remove metadata represented by the key from the map - * - * @param key Key whose key and value we're to remove from HTableDescriptor - * parameters. + * @param key Key whose key and value we're to remove from HTableDescriptor parameters. */ public void remove(Bytes key) { getDelegateeForModification().removeValue(key); @@ -222,18 +214,15 @@ public void remove(Bytes key) { /** * Remove metadata represented by the key from the map - * - * @param key Key whose key and value we're to remove from HTableDescriptor - * parameters. + * @param key Key whose key and value we're to remove from HTableDescriptor parameters. */ - public void remove(final byte [] key) { + public void remove(final byte[] key) { getDelegateeForModification().removeValue(key); } /** - * Check if the readOnly flag of the table is set. If the readOnly flag is - * set then the contents of the table can only be read from but not modified. 
- * + * Check if the readOnly flag of the table is set. If the readOnly flag is set then the contents + * of the table can only be read from but not modified. * @return true if all columns in the table should be read only */ @Override @@ -242,12 +231,10 @@ public boolean isReadOnly() { } /** - * Setting the table as read only sets all the columns in the table as read - * only. By default all tables are modifiable, but if the readOnly flag is - * set to true then the contents of the table can only be read but not modified. - * - * @param readOnly True if all of the columns in the table should be read - * only. + * Setting the table as read only sets all the columns in the table as read only. By default all + * tables are modifiable, but if the readOnly flag is set to true then the contents of the table + * can only be read but not modified. + * @param readOnly True if all of the columns in the table should be read only. */ public HTableDescriptor setReadOnly(final boolean readOnly) { getDelegateeForModification().setReadOnly(readOnly); @@ -255,9 +242,8 @@ public HTableDescriptor setReadOnly(final boolean readOnly) { } /** - * Check if the compaction enable flag of the table is true. If flag is - * false then no minor/major compactions will be done in real. - * + * Check if the compaction enable flag of the table is true. If flag is false then no minor/major + * compactions will be done in real. * @return true if table compaction enabled */ @Override @@ -267,7 +253,6 @@ public boolean isCompactionEnabled() { /** * Setting the table compaction enable flag. - * * @param isEnable True if enable compaction. */ public HTableDescriptor setCompactionEnabled(final boolean isEnable) { @@ -276,9 +261,8 @@ public HTableDescriptor setCompactionEnabled(final boolean isEnable) { } /** - * Check if the region split enable flag of the table is true. If flag is - * false then no split will be done. - * + * Check if the region split enable flag of the table is true. If flag is false then no split will + * be done. * @return true if table region split enabled */ @Override @@ -288,7 +272,6 @@ public boolean isSplitEnabled() { /** * Setting the table region split enable flag. - * * @param isEnable True if enable split. */ public HTableDescriptor setSplitEnabled(final boolean isEnable) { @@ -296,11 +279,9 @@ public HTableDescriptor setSplitEnabled(final boolean isEnable) { return this; } - /** - * Check if the region merge enable flag of the table is true. If flag is - * false then no merge will be done. - * + * Check if the region merge enable flag of the table is true. If flag is false then no merge will + * be done. * @return true if table region merge enabled */ @Override @@ -310,7 +291,6 @@ public boolean isMergeEnabled() { /** * Setting the table region merge enable flag. - * * @param isEnable True if enable merge. */ public HTableDescriptor setMergeEnabled(final boolean isEnable) { @@ -319,9 +299,8 @@ public HTableDescriptor setMergeEnabled(final boolean isEnable) { } /** - * Check if normalization enable flag of the table is true. If flag is - * false then no region normalizer won't attempt to normalize this table. - * + * Check if normalization enable flag of the table is true. If flag is false then no region + * normalizer won't attempt to normalize this table. * @return true if region normalization is enabled for this table */ @Override @@ -331,7 +310,6 @@ public boolean isNormalizationEnabled() { /** * Setting the table normalization enable flag. - * * @param isEnable True if enable normalization. 
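The read-only, compaction, split, merge and normalization switches documented above are plain table-level flags. A sketch using the deprecated but here-documented HTableDescriptor setters, with a hypothetical table name and owner value:

import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;

public class TableFlagsSketch {
  public static void main(String[] args) {
    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("demo"));

    htd.setReadOnly(false);             // contents stay writable
    htd.setCompactionEnabled(true);     // minor/major compactions may run
    htd.setSplitEnabled(true);          // regions of this table may split
    htd.setMergeEnabled(true);          // regions of this table may merge
    htd.setNormalizationEnabled(false); // region normalizer skips this table

    htd.setValue("OWNER", "demo-team"); // arbitrary (key, value) metadata
    System.out.println(htd.isCompactionEnabled() + " " + htd.isNormalizationEnabled());
  }
}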
*/ public HTableDescriptor setNormalizationEnabled(final boolean isEnable) { @@ -379,8 +357,6 @@ public Durability getDurability() { /** * Get the name of the table - * - * @return TableName */ @Override public TableName getTableName() { @@ -389,7 +365,6 @@ public TableName getTableName() { /** * Get the name of the table as a String - * * @return name of table as a String */ public String getNameAsString() { @@ -397,9 +372,9 @@ public String getNameAsString() { } /** - * This sets the class associated with the region split policy which - * determines when a region split should occur. The class used by - * default is defined in org.apache.hadoop.hbase.regionserver.RegionSplitPolicy + * This sets the class associated with the region split policy which determines when a region + * split should occur. The class used by default is defined in + * org.apache.hadoop.hbase.regionserver.RegionSplitPolicy * @param clazz the class name */ public HTableDescriptor setRegionSplitPolicyClassName(String clazz) { @@ -408,46 +383,40 @@ public HTableDescriptor setRegionSplitPolicyClassName(String clazz) { } /** - * This gets the class associated with the region split policy which - * determines when a region split should occur. The class used by - * default is defined in org.apache.hadoop.hbase.regionserver.RegionSplitPolicy - * - * @return the class name of the region split policy for this table. - * If this returns null, the default split policy is used. + * This gets the class associated with the region split policy which determines when a region + * split should occur. The class used by default is defined in + * org.apache.hadoop.hbase.regionserver.RegionSplitPolicy + * @return the class name of the region split policy for this table. If this returns null, the + * default split policy is used. */ @Override - public String getRegionSplitPolicyClassName() { + public String getRegionSplitPolicyClassName() { return delegatee.getRegionSplitPolicyClassName(); } /** - * Returns the maximum size upto which a region can grow to after which a region - * split is triggered. The region size is represented by the size of the biggest - * store file in that region. - * + * Returns the maximum size upto which a region can grow to after which a region split is + * triggered. The region size is represented by the size of the biggest store file in that region. * @return max hregion size for table, -1 if not set. - * * @see #setMaxFileSize(long) */ - @Override + @Override public long getMaxFileSize() { return delegatee.getMaxFileSize(); } /** - * Sets the maximum size upto which a region can grow to after which a region - * split is triggered. The region size is represented by the size of the biggest - * store file in that region, i.e. If the biggest store file grows beyond the - * maxFileSize, then the region split is triggered. This defaults to a value of - * 256 MB. + * Sets the maximum size upto which a region can grow to after which a region split is triggered. + * The region size is represented by the size of the biggest store file in that region, i.e. If + * the biggest store file grows beyond the maxFileSize, then the region split is triggered. This + * defaults to a value of 256 MB. *
<p>
    - * This is not an absolute value and might vary. Assume that a single row exceeds - * the maxFileSize then the storeFileSize will be greater than maxFileSize since - * a single row cannot be split across multiple regions + * This is not an absolute value and might vary. Assume that a single row exceeds the maxFileSize + * then the storeFileSize will be greater than maxFileSize since a single row cannot be split + * across multiple regions *
<p>
    - * - * @param maxFileSize The maximum file size that a store file can grow to - * before a split is triggered. + * @param maxFileSize The maximum file size that a store file can grow to before a split is + * triggered. */ public HTableDescriptor setMaxFileSize(long maxFileSize) { getDelegateeForModification().setMaxFileSize(maxFileSize); @@ -461,9 +430,7 @@ public HTableDescriptor setMaxFileSize(String maxFileSize) throws HBaseException /** * Returns the size of the memstore after which a flush to filesystem is triggered. - * * @return memory cache flush size for each hregion, -1 if not set. - * * @see #setMemStoreFlushSize(long) */ @Override @@ -472,9 +439,8 @@ public long getMemStoreFlushSize() { } /** - * Represents the maximum size of the memstore after which the contents of the - * memstore are flushed to the filesystem. This defaults to a size of 64 MB. - * + * Represents the maximum size of the memstore after which the contents of the memstore are + * flushed to the filesystem. This defaults to a size of 64 MB. * @param memstoreFlushSize memory cache flush size for each hregion */ public HTableDescriptor setMemStoreFlushSize(long memstoreFlushSize) { @@ -511,8 +477,8 @@ public String getFlushPolicyClassName() { } /** - * Adds a column family. - * For the updating purpose please use {@link #modifyFamily(HColumnDescriptor)} instead. + * Adds a column family. For the updating purpose please use + * {@link #modifyFamily(HColumnDescriptor)} instead. * @param family HColumnDescriptor of family to add. */ public HTableDescriptor addFamily(final HColumnDescriptor family) { @@ -535,13 +501,12 @@ public HTableDescriptor modifyFamily(final HColumnDescriptor family) { * @param familyName Family name or column name. * @return true if the table contains the specified family name */ - public boolean hasFamily(final byte [] familyName) { + public boolean hasFamily(final byte[] familyName) { return delegatee.hasColumnFamily(familyName); } /** - * @return Name of this table and then a map of all of the column family - * descriptors. + * @return Name of this table and then a map of all of the column family descriptors. * @see #getNameAsString() */ @Override @@ -550,28 +515,24 @@ public String toString() { } /** - * @return Name of this table and then a map of all of the column family - * descriptors (with only the non-default column family attributes) + * @return Name of this table and then a map of all of the column family descriptors (with only + * the non-default column family attributes) */ @Override public String toStringCustomizedValues() { return delegatee.toStringCustomizedValues(); } - /** - * @return map of all table attributes formatted into string. - */ + /** Returns map of all table attributes formatted into string. */ public String toStringTableAttributes() { - return delegatee.toStringTableAttributes(); + return delegatee.toStringTableAttributes(); } /** - * Compare the contents of the descriptor with another one passed as a parameter. - * Checks if the obj passed is an instance of HTableDescriptor, if yes then the - * contents of the descriptors are compared. - * + * Compare the contents of the descriptor with another one passed as a parameter. Checks if the + * obj passed is an instance of HTableDescriptor, if yes then the contents of the descriptors are + * compared. 
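Since the reflowed javadoc above describes how maxFileSize and memstoreFlushSize drive region splits and flushes, here is a minimal sketch of setting both; the 10 GB and 128 MB figures are illustration values, not defaults:

import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;

public class RegionSizingSketch {
  public static void main(String[] args) {
    // Hypothetical table name.
    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("example_table"));
    // Ask for a split once the biggest store file in a region passes 10 GB;
    // as the javadoc notes, this is advisory rather than an absolute bound.
    htd.setMaxFileSize(10L * 1024 * 1024 * 1024);
    // Flush each region's memstore to the filesystem once it reaches 128 MB.
    htd.setMemStoreFlushSize(128L * 1024 * 1024);
    System.out.println(htd.getMaxFileSize() + " / " + htd.getMemStoreFlushSize());
  }
}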
* @return true if the contents of the the two descriptors exactly match - * * @see java.lang.Object#equals(java.lang.Object) */ @Override @@ -596,11 +557,10 @@ public int hashCode() { // Comparable /** - * Compares the descriptor with another descriptor which is passed as a parameter. - * This compares the content of the two descriptors and not the reference. - * - * @return 0 if the contents of the descriptors are exactly matching, - * 1 if there is a mismatch in the contents + * Compares the descriptor with another descriptor which is passed as a parameter. This compares + * the content of the two descriptors and not the reference. + * @return 0 if the contents of the descriptors are exactly matching, 1 if there is a mismatch in + * the contents */ @Override public int compareTo(final HTableDescriptor other) { @@ -608,19 +568,17 @@ public int compareTo(final HTableDescriptor other) { } /** - * Returns an unmodifiable collection of all the {@link HColumnDescriptor} - * of all the column families of the table. + * Returns an unmodifiable collection of all the {@link HColumnDescriptor} of all the column + * families of the table. * @deprecated since 2.0.0 and will be removed in 3.0.0. Use {@link #getColumnFamilies()} instead. - * @return Immutable collection of {@link HColumnDescriptor} of all the - * column families. + * @return Immutable collection of {@link HColumnDescriptor} of all the column families. * @see #getColumnFamilies() * @see HBASE-18008 */ @Deprecated public Collection getFamilies() { - return Stream.of(delegatee.getColumnFamilies()) - .map(this::toHColumnDescriptor) - .collect(Collectors.toList()); + return Stream.of(delegatee.getColumnFamilies()).map(this::toHColumnDescriptor) + .collect(Collectors.toList()); } /** @@ -641,25 +599,23 @@ public HTableDescriptor setRegionReplication(int regionReplication) { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. - * Use {@link #hasRegionMemStoreReplication()} instead + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. Use + * {@link #hasRegionMemStoreReplication()} instead */ @Deprecated public boolean hasRegionMemstoreReplication() { return hasRegionMemStoreReplication(); } - /** - * @return true if the read-replicas memstore replication is enabled. - */ + /** Returns true if the read-replicas memstore replication is enabled. */ @Override public boolean hasRegionMemStoreReplication() { return delegatee.hasRegionMemStoreReplication(); } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. - * Use {@link #setRegionMemStoreReplication(boolean)} instead + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. Use + * {@link #setRegionMemStoreReplication(boolean)} instead */ @Deprecated public HTableDescriptor setRegionMemstoreReplication(boolean memstoreReplication) { @@ -667,13 +623,11 @@ public HTableDescriptor setRegionMemstoreReplication(boolean memstoreReplication } /** - * Enable or Disable the memstore replication from the primary region to the replicas. - * The replication will be used only for meta operations (e.g. flush, compaction, ...) - * - * @param memstoreReplication true if the new data written to the primary region - * should be replicated. - * false if the secondaries can tollerate to have new - * data only when the primary flushes the memstore. + * Enable or Disable the memstore replication from the primary region to the replicas. The + * replication will be used only for meta operations (e.g. flush, compaction, ...) 
+ * @param memstoreReplication true if the new data written to the primary region should be + * replicated. false if the secondaries can tollerate to have new data + * only when the primary flushes the memstore. */ public HTableDescriptor setRegionMemStoreReplication(boolean memstoreReplication) { getDelegateeForModification().setRegionMemStoreReplication(memstoreReplication); @@ -691,15 +645,13 @@ public int getPriority() { } /** - * Returns all the column family names of the current table. The map of - * HTableDescriptor contains mapping of family name to HColumnDescriptors. - * This returns all the keys of the family map which represents the column - * family names of the table. - * + * Returns all the column family names of the current table. The map of HTableDescriptor contains + * mapping of family name to HColumnDescriptors. This returns all the keys of the family map which + * represents the column family names of the table. * @return Immutable sorted set of the keys of the families. * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * (HBASE-18008). - * Use {@link #getColumnFamilyNames()}. + * (HBASE-18008). Use + * {@link #getColumnFamilyNames()}. */ @Deprecated public Set getFamiliesKeys() { @@ -708,7 +660,6 @@ public Set getFamiliesKeys() { /** * Returns the count of the column families of the table. - * * @return Count of column families of the table */ @Override @@ -717,9 +668,7 @@ public int getColumnFamilyCount() { } /** - * Returns an array all the {@link HColumnDescriptor} of the column families - * of the table. - * + * Returns an array all the {@link HColumnDescriptor} of the column families of the table. * @return Array of all the HColumnDescriptors of the current table * @deprecated since 2.0.0 and will be removed in 3.0.0. * @see #getFamilies() @@ -728,19 +677,17 @@ public int getColumnFamilyCount() { @Deprecated @Override public HColumnDescriptor[] getColumnFamilies() { - return Stream.of(delegatee.getColumnFamilies()) - .map(this::toHColumnDescriptor) - .toArray(size -> new HColumnDescriptor[size]); + return Stream.of(delegatee.getColumnFamilies()).map(this::toHColumnDescriptor) + .toArray(size -> new HColumnDescriptor[size]); } /** - * Returns the HColumnDescriptor for a specific column family with name as - * specified by the parameter column. + * Returns the HColumnDescriptor for a specific column family with name as specified by the + * parameter column. * @param column Column family name - * @return Column descriptor for the passed family name or the family on - * passed in column. + * @return Column descriptor for the passed family name or the family on passed in column. * @deprecated since 2.0.0 and will be removed in 3.0.0. Use {@link #getColumnFamily(byte[])} - * instead. + * instead. * @see #getColumnFamily(byte[]) * @see HBASE-18008 */ @@ -749,16 +696,13 @@ public HColumnDescriptor getFamily(final byte[] column) { return toHColumnDescriptor(delegatee.getColumnFamily(column)); } - /** - * Removes the HColumnDescriptor with name specified by the parameter column - * from the table descriptor - * + * Removes the HColumnDescriptor with name specified by the parameter column from the table + * descriptor * @param column Name of the column family to be removed. - * @return Column descriptor for the passed family name or the family on - * passed in column. + * @return Column descriptor for the passed family name or the family on passed in column. 
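A short sketch of the column-family accessors whose javadoc is reformatted above (addFamily, hasFamily, getColumnFamilyCount, removeFamily); table and family names are hypothetical:

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.util.Bytes;

public class ColumnFamilySketch {
  public static void main(String[] args) {
    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("example_table"));
    htd.addFamily(new HColumnDescriptor("cf1"));
    htd.addFamily(new HColumnDescriptor("cf2"));
    System.out.println(htd.hasFamily(Bytes.toBytes("cf1"))); // true
    System.out.println(htd.getColumnFamilyCount());          // 2
    htd.removeFamily(Bytes.toBytes("cf2"));                  // drops cf2 from the descriptor
  }
}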
*/ - public HColumnDescriptor removeFamily(final byte [] column) { + public HColumnDescriptor removeFamily(final byte[] column) { return toHColumnDescriptor(getDelegateeForModification().removeColumnFamily(column)); } @@ -780,13 +724,11 @@ protected HColumnDescriptor toHColumnDescriptor(ColumnFamilyDescriptor desc) { } /** - * Add a table coprocessor to this table. The coprocessor - * type must be org.apache.hadoop.hbase.coprocessor.RegionCoprocessor. - * It won't check if the class can be loaded or not. - * Whether a coprocessor is loadable or not will be determined when - * a region is opened. + * Add a table coprocessor to this table. The coprocessor type must be + * org.apache.hadoop.hbase.coprocessor.RegionCoprocessor. It won't check if the class can be + * loaded or not. Whether a coprocessor is loadable or not will be determined when a region is + * opened. * @param className Full class name. - * @throws IOException */ public HTableDescriptor addCoprocessor(String className) throws IOException { getDelegateeForModification().setCoprocessor(className); @@ -794,39 +736,31 @@ public HTableDescriptor addCoprocessor(String className) throws IOException { } /** - * Add a table coprocessor to this table. The coprocessor - * type must be org.apache.hadoop.hbase.coprocessor.RegionCoprocessor. - * It won't check if the class can be loaded or not. - * Whether a coprocessor is loadable or not will be determined when - * a region is opened. - * @param jarFilePath Path of the jar file. If it's null, the class will be - * loaded from default classloader. - * @param className Full class name. - * @param priority Priority - * @param kvs Arbitrary key-value parameter pairs passed into the coprocessor. - * @throws IOException - */ - public HTableDescriptor addCoprocessor(String className, Path jarFilePath, - int priority, final Map kvs) - throws IOException { - getDelegateeForModification().setCoprocessor( - CoprocessorDescriptorBuilder.newBuilder(className) - .setJarPath(jarFilePath == null ? null : jarFilePath.toString()) - .setPriority(priority) - .setProperties(kvs == null ? Collections.emptyMap() : kvs) - .build()); + * Add a table coprocessor to this table. The coprocessor type must be + * org.apache.hadoop.hbase.coprocessor.RegionCoprocessor. It won't check if the class can be + * loaded or not. Whether a coprocessor is loadable or not will be determined when a region is + * opened. + * @param jarFilePath Path of the jar file. If it's null, the class will be loaded from default + * classloader. + * @param className Full class name. + * @param priority Priority + * @param kvs Arbitrary key-value parameter pairs passed into the coprocessor. + */ + public HTableDescriptor addCoprocessor(String className, Path jarFilePath, int priority, + final Map kvs) throws IOException { + getDelegateeForModification().setCoprocessor(CoprocessorDescriptorBuilder.newBuilder(className) + .setJarPath(jarFilePath == null ? null : jarFilePath.toString()).setPriority(priority) + .setProperties(kvs == null ? Collections.emptyMap() : kvs).build()); return this; } /** - * Add a table coprocessor to this table. The coprocessor - * type must be org.apache.hadoop.hbase.coprocessor.RegionCoprocessor. - * It won't check if the class can be loaded or not. - * Whether a coprocessor is loadable or not will be determined when - * a region is opened. + * Add a table coprocessor to this table. The coprocessor type must be + * org.apache.hadoop.hbase.coprocessor.RegionCoprocessor. 
It won't check if the class can be + * loaded or not. Whether a coprocessor is loadable or not will be determined when a region is + * opened. * @param specStr The Coprocessor specification all in in one String formatted so matches - * {@link HConstants#CP_HTD_ATTR_VALUE_PATTERN} - * @throws IOException + * {@link HConstants#CP_HTD_ATTR_VALUE_PATTERN} */ public HTableDescriptor addCoprocessorWithSpec(final String specStr) throws IOException { getDelegateeForModification().setCoprocessorWithSpec(specStr); @@ -835,7 +769,6 @@ public HTableDescriptor addCoprocessorWithSpec(final String specStr) throws IOEx /** * Check if the table has an attached co-processor represented by the name className - * * @param classNameToMatch - Class name of the co-processor * @return true of the table has a co-processor className */ @@ -858,14 +791,17 @@ public void removeCoprocessor(String className) { } public final static String NAMESPACE_FAMILY_INFO = TableDescriptorBuilder.NAMESPACE_FAMILY_INFO; - public final static byte[] NAMESPACE_FAMILY_INFO_BYTES = TableDescriptorBuilder.NAMESPACE_FAMILY_INFO_BYTES; - public final static byte[] NAMESPACE_COL_DESC_BYTES = TableDescriptorBuilder.NAMESPACE_COL_DESC_BYTES; + public final static byte[] NAMESPACE_FAMILY_INFO_BYTES = + TableDescriptorBuilder.NAMESPACE_FAMILY_INFO_BYTES; + public final static byte[] NAMESPACE_COL_DESC_BYTES = + TableDescriptorBuilder.NAMESPACE_COL_DESC_BYTES; /** Table descriptor for namespace table */ - public static final HTableDescriptor NAMESPACE_TABLEDESC - = new HTableDescriptor(TableDescriptorBuilder.NAMESPACE_TABLEDESC); + public static final HTableDescriptor NAMESPACE_TABLEDESC = + new HTableDescriptor(TableDescriptorBuilder.NAMESPACE_TABLEDESC); /** + * Set the table owner. * @deprecated since 0.94.1 * @see HBASE-6188 */ @@ -876,6 +812,7 @@ public HTableDescriptor setOwner(User owner) { } /** + * Set the table owner. * @deprecated since 0.94.1 * @see HBASE-6188 */ @@ -887,6 +824,7 @@ public HTableDescriptor setOwnerString(String ownerString) { } /** + * Get the table owner. * @deprecated since 0.94.1 * @see HBASE-6188 */ @@ -897,22 +835,20 @@ public String getOwnerString() { } /** - * @return This instance serialized with pb with pb magic prefix - * @see #parseFrom(byte[]) + * Returns This instance serialized with pb with pb magic prefix */ public byte[] toByteArray() { return TableDescriptorBuilder.toByteArray(delegatee); } /** + * Parse the serialized representation of a {@link HTableDescriptor} * @param bytes A pb serialized {@link HTableDescriptor} instance with pb magic prefix * @return An instance of {@link HTableDescriptor} made from bytes - * @throws DeserializationException - * @throws IOException * @see #toByteArray() */ - public static HTableDescriptor parseFrom(final byte [] bytes) - throws DeserializationException, IOException { + public static HTableDescriptor parseFrom(final byte[] bytes) + throws DeserializationException, IOException { TableDescriptor desc = TableDescriptorBuilder.parseFrom(bytes); if (desc instanceof ModifyableTableDescriptor) { return new HTableDescriptor((ModifyableTableDescriptor) desc); @@ -932,16 +868,14 @@ public String getConfigurationValue(String key) { * Getter for fetching an unmodifiable map. 
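The toByteArray()/parseFrom() pair documented above round-trips a descriptor through its pb-magic-prefixed protobuf form. A minimal sketch, assuming the usual DeserializationException from org.apache.hadoop.hbase.exceptions:

import java.io.IOException;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.exceptions.DeserializationException;

public class SerializationSketch {
  public static void main(String[] args) throws IOException, DeserializationException {
    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("example_table"));
    htd.addFamily(new HColumnDescriptor("cf"));
    byte[] pb = htd.toByteArray();                 // pb serialized with the pb magic prefix
    HTableDescriptor copy = HTableDescriptor.parseFrom(pb);
    System.out.println(htd.equals(copy));          // true: contents match
  }
}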
*/ public Map getConfiguration() { - return delegatee.getValues().entrySet().stream() - .collect(Collectors.toMap( - e -> Bytes.toString(e.getKey().get(), e.getKey().getOffset(), e.getKey().getLength()), - e -> Bytes.toString(e.getValue().get(), e.getValue().getOffset(), e.getValue().getLength()) - )); + return delegatee.getValues().entrySet().stream().collect(Collectors.toMap( + e -> Bytes.toString(e.getKey().get(), e.getKey().getOffset(), e.getKey().getLength()), + e -> Bytes.toString(e.getValue().get(), e.getValue().getOffset(), e.getValue().getLength()))); } /** * Setter for storing a configuration setting in map. - * @param key Config key. Same as XML config key e.g. hbase.something.or.other. + * @param key Config key. Same as XML config key e.g. hbase.something.or.other. * @param value String value. If null, removes the setting. */ public HTableDescriptor setConfiguration(String key, String value) { diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckEmptyRegionInfo.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckEmptyRegionInfo.java new file mode 100644 index 000000000000..5d1ca54bf1be --- /dev/null +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckEmptyRegionInfo.java @@ -0,0 +1,38 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase; + +import org.apache.yetus.audience.InterfaceAudience; + +/** + * POJO to present Empty Region Info from Catalog Janitor Inconsistencies Report via REST API. These + * inconsistencies are shown on hbck.jsp page on Active HMaster UI as part of Catalog Janitor + * inconsistencies. + */ +@InterfaceAudience.Public +public class HbckEmptyRegionInfo { + private final String regionInfo; + + public HbckEmptyRegionInfo(String emptyRegionInfo) { + this.regionInfo = emptyRegionInfo; + } + + public String getRegionInfo() { + return regionInfo; + } +} diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckInconsistentRegions.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckInconsistentRegions.java new file mode 100644 index 000000000000..f32f73a73d15 --- /dev/null +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckInconsistentRegions.java @@ -0,0 +1,51 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase; + +import java.util.List; +import org.apache.yetus.audience.InterfaceAudience; + +/** + * POJO to present HBCK Inconsistent Regions from HBCK Inconsistencies Report via REST API. These + * inconsistencies are shown on hbck.jsp page on Active HMaster UI as part of HBCK inconsistencies. + */ +@InterfaceAudience.Public +public class HbckInconsistentRegions { + private final String regionId; + private final HbckServerName serverNameInMeta; + private final List listOfServers; + + public HbckInconsistentRegions(String inconsistentRegionId, HbckServerName serverNameInMeta, + List listOfServerName) { + this.regionId = inconsistentRegionId; + this.serverNameInMeta = serverNameInMeta; + this.listOfServers = listOfServerName; + } + + public String getRegionId() { + return regionId; + } + + public HbckServerName getServerNameInMeta() { + return serverNameInMeta; + } + + public List getListOfServers() { + return listOfServers; + } +} diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckOrphanRegionsOnFS.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckOrphanRegionsOnFS.java new file mode 100644 index 000000000000..43a045fb2933 --- /dev/null +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckOrphanRegionsOnFS.java @@ -0,0 +1,43 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase; + +import org.apache.yetus.audience.InterfaceAudience; + +/** + * POJO to present Orphan Region on FS from HBCK Inconsistencies Report via REST API. These + * inconsistencies are shown on hbck.jsp page on Active HMaster UI as part of HBCK Inconsistencies. 
+ */ +@InterfaceAudience.Public +public class HbckOrphanRegionsOnFS { + private final String regionId; + private final String regionHdfsPath; + + public HbckOrphanRegionsOnFS(String regionId, String orphanRegionHdfsPath) { + this.regionId = regionId; + this.regionHdfsPath = orphanRegionHdfsPath; + } + + public String getRegionId() { + return regionId; + } + + public String getRegionHdfsPath() { + return regionHdfsPath; + } +} diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckOrphanRegionsOnRS.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckOrphanRegionsOnRS.java new file mode 100644 index 000000000000..2d442b7a9e40 --- /dev/null +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckOrphanRegionsOnRS.java @@ -0,0 +1,43 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase; + +import org.apache.yetus.audience.InterfaceAudience; + +/** + * POJO to present Orphan Region on RS from HBCK Inconsistencies Report via REST API. These + * inconsistencies are shown on hbck.jsp page on Active HMaster UI as part of HBCK Inconsistencies. + */ +@InterfaceAudience.Public +public class HbckOrphanRegionsOnRS { + private final String regionId; + private final HbckServerName rsName; + + public HbckOrphanRegionsOnRS(String orphanRegionId, HbckServerName orphanRegionRsName) { + this.regionId = orphanRegionId; + this.rsName = orphanRegionRsName; + } + + public String getRegionId() { + return regionId; + } + + public HbckServerName getRsName() { + return rsName; + } +} diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckOverlapRegions.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckOverlapRegions.java new file mode 100644 index 000000000000..4170932bf563 --- /dev/null +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckOverlapRegions.java @@ -0,0 +1,44 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.hbase; + +import org.apache.yetus.audience.InterfaceAudience; + +/** + * POJO to present Region Overlap from Catalog Janitor Inconsistencies Report via REST API. These + * inconsistencies are shown on hbck.jsp page on Active HMaster UI as part of Catalog Janitor + * inconsistencies. + */ +@InterfaceAudience.Public +public class HbckOverlapRegions { + private final HbckRegionDetails region1Info; + private final HbckRegionDetails region2Info; + + public HbckOverlapRegions(HbckRegionDetails region1Info, HbckRegionDetails region2Info) { + this.region1Info = region1Info; + this.region2Info = region2Info; + } + + public HbckRegionDetails getRegion1Info() { + return region1Info; + } + + public HbckRegionDetails getRegion2Info() { + return region2Info; + } +} diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckRegionDetails.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckRegionDetails.java new file mode 100644 index 000000000000..a79245636276 --- /dev/null +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckRegionDetails.java @@ -0,0 +1,54 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase; + +import org.apache.yetus.audience.InterfaceAudience; + +/** + * POJO class for HBCK RegionInfo in HBCK Inconsistencies report. + */ +@InterfaceAudience.Public +public class HbckRegionDetails { + private final String regionId; + private final String tableName; + private final String startKey; + private final String endKey; + + public HbckRegionDetails(String regionId, String tableName, String startKey, String endKey) { + this.regionId = regionId; + this.tableName = tableName; + this.startKey = startKey; + this.endKey = endKey; + } + + public String getRegionId() { + return regionId; + } + + public String getTableName() { + return tableName; + } + + public String getStartKey() { + return startKey; + } + + public String getEndKey() { + return endKey; + } +} diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckRegionHoles.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckRegionHoles.java new file mode 100644 index 000000000000..643e014735a0 --- /dev/null +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckRegionHoles.java @@ -0,0 +1,44 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase; + +import org.apache.yetus.audience.InterfaceAudience; + +/** + * POJO to present Region Holes from Catalog Janitor Inconsistencies Report via REST API. These + * inconsistencies are shown on hbck.jsp page on Active HMaster UI as part of Catalog Janitor + * inconsistencies. + */ +@InterfaceAudience.Public +public class HbckRegionHoles { + private final HbckRegionDetails region1Info; + private final HbckRegionDetails region2Info; + + public HbckRegionHoles(HbckRegionDetails region1Info, HbckRegionDetails region2Info) { + this.region1Info = region1Info; + this.region2Info = region2Info; + } + + public HbckRegionDetails getRegion1Info() { + return region1Info; + } + + public HbckRegionDetails getRegion2Info() { + return region2Info; + } +} diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckServerName.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckServerName.java new file mode 100644 index 000000000000..2c6b899fb15c --- /dev/null +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckServerName.java @@ -0,0 +1,48 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase; + +import org.apache.yetus.audience.InterfaceAudience; + +/** + * POJO class for ServerName in HBCK Inconsistencies report. + */ +@InterfaceAudience.Public +public class HbckServerName { + private final String hostName; + private final int hostPort; + private final long startCode; + + public HbckServerName(String hostName, int hostPort, long startCode) { + this.hostName = hostName; + this.hostPort = hostPort; + this.startCode = startCode; + } + + public String getHostName() { + return hostName; + } + + public int getHostPort() { + return hostPort; + } + + public long getStartCode() { + return startCode; + } +} diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckUnknownServers.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckUnknownServers.java new file mode 100644 index 000000000000..c070f84e69fe --- /dev/null +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/HbckUnknownServers.java @@ -0,0 +1,44 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. 
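These new Hbck* classes are plain value holders for the hbck.jsp and REST reports. A sketch of how they compose, using the constructors introduced in this patch; every value below is made up for illustration:

import org.apache.hadoop.hbase.HbckRegionDetails;
import org.apache.hadoop.hbase.HbckRegionHoles;
import org.apache.hadoop.hbase.HbckServerName;

public class HbckReportSketch {
  public static void main(String[] args) {
    // In practice these POJOs are populated from the Catalog Janitor / HBCK chore reports.
    HbckServerName server = new HbckServerName("rs1.example.com", 16020, 1721980800000L);
    HbckRegionDetails r1 = new HbckRegionDetails("abc123", "example_table", "", "row-5000");
    HbckRegionDetails r2 = new HbckRegionDetails("def456", "example_table", "row-6000", "");
    HbckRegionHoles hole = new HbckRegionHoles(r1, r2); // gap between r1's end key and r2's start key
    System.out.println(hole.getRegion1Info().getRegionId() + " reported on " + server.getHostName());
  }
}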
The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase; + +import org.apache.yetus.audience.InterfaceAudience; + +/** + * POJO to present Unknown Regions from Catalog Janitor Inconsistencies Report via REST API. These + * inconsistencies are shown on hbck.jsp page on Active HMaster UI as part of Catalog Janitor + * inconsistencies. + */ +@InterfaceAudience.Public +public class HbckUnknownServers { + private final HbckRegionDetails regionInfo; + private final HbckServerName serverName; + + public HbckUnknownServers(HbckRegionDetails regionInfo, HbckServerName unknownServerName) { + this.regionInfo = regionInfo; + this.serverName = unknownServerName; + } + + public HbckRegionDetails getRegionInfo() { + return regionInfo; + } + + public HbckServerName getServerName() { + return serverName; + } +} diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/InvalidFamilyOperationException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/InvalidFamilyOperationException.java index 63c26e2c393f..2a099157bc76 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/InvalidFamilyOperationException.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/InvalidFamilyOperationException.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -21,12 +20,13 @@ import org.apache.yetus.audience.InterfaceAudience; /** - * Thrown if a request is table schema modification is requested but - * made for an invalid family name. + * Thrown if a request is table schema modification is requested but made for an invalid family + * name. */ @InterfaceAudience.Public public class InvalidFamilyOperationException extends DoNotRetryIOException { private static final long serialVersionUID = (1L << 22) - 1L; + /** default constructor */ public InvalidFamilyOperationException() { super(); diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/KeepDeletedCells.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/KeepDeletedCells.java index dd19fa1c2279..2ae80cade98a 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/KeepDeletedCells.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/KeepDeletedCells.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -24,27 +23,25 @@ * Ways to keep cells marked for delete around. */ /* - * Don't change the TRUE/FALSE labels below, these have to be called - * this way for backwards compatibility. + * Don't change the TRUE/FALSE labels below, these have to be called this way for backwards + * compatibility. */ @InterfaceAudience.Public public enum KeepDeletedCells { /** Deleted Cells are not retained. 
*/ FALSE, /** - * Deleted Cells are retained until they are removed by other means - * such TTL or VERSIONS. - * If no TTL is specified or no new versions of delete cells are - * written, they are retained forever. + * Deleted Cells are retained until they are removed by other means such TTL or VERSIONS. If no + * TTL is specified or no new versions of delete cells are written, they are retained forever. */ TRUE, /** - * Deleted Cells are retained until the delete marker expires due to TTL. - * This is useful when TTL is combined with MIN_VERSIONS and one - * wants to keep a minimum number of versions around but at the same - * time remove deleted cells after the TTL. + * Deleted Cells are retained until the delete marker expires due to TTL. This is useful when TTL + * is combined with MIN_VERSIONS and one wants to keep a minimum number of versions around but at + * the same time remove deleted cells after the TTL. */ TTL; + public static KeepDeletedCells getValue(String val) { return valueOf(val.toUpperCase()); } diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/MasterNotRunningException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/MasterNotRunningException.java index 35cdecba9bb6..86e394e33403 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/MasterNotRunningException.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/MasterNotRunningException.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -26,6 +25,7 @@ @InterfaceAudience.Public public class MasterNotRunningException extends HBaseIOException { private static final long serialVersionUID = (1L << 23) - 1L; + /** default constructor */ public MasterNotRunningException() { super(); diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/MemoryCompactionPolicy.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/MemoryCompactionPolicy.java index 099ea4054591..b913ac0506cd 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/MemoryCompactionPolicy.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/MemoryCompactionPolicy.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -31,16 +30,15 @@ public enum MemoryCompactionPolicy { NONE, /** * Basic policy applies optimizations which modify the index to a more compacted representation. - * This is beneficial in all access patterns. The smaller the cells are the greater the - * benefit of this policy. - * This is the default policy. + * This is beneficial in all access patterns. The smaller the cells are the greater the benefit of + * this policy. This is the default policy. */ BASIC, /** - * In addition to compacting the index representation as the basic policy, eager policy - * eliminates duplication while the data is still in memory (much like the - * on-disk compaction does after the data is flushed to disk). This policy is most useful for - * applications with high data churn or small working sets. + * In addition to compacting the index representation as the basic policy, eager policy eliminates + * duplication while the data is still in memory (much like the on-disk compaction does after the + * data is flushed to disk). 
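As a usage sketch for the KeepDeletedCells semantics described above, the following assumes the HColumnDescriptor setters (setKeepDeletedCells, setTimeToLive, setMinVersions), which are not part of this diff:

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.KeepDeletedCells;

public class KeepDeletedCellsSketch {
  public static void main(String[] args) {
    HColumnDescriptor cf = new HColumnDescriptor("cf");
    // Keep delete markers and deleted cells until TTL expires them (TTL mode above).
    cf.setKeepDeletedCells(KeepDeletedCells.TTL);
    cf.setTimeToLive(7 * 24 * 60 * 60); // one week, in seconds
    cf.setMinVersions(1);               // always retain at least one version
    System.out.println(cf.getKeepDeletedCells());
  }
}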
This policy is most useful for applications with high data churn or + * small working sets. */ EAGER, /** diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java index c55086c7fbe7..f29104df3c0c 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java @@ -33,11 +33,11 @@ import java.util.NavigableMap; import java.util.SortedMap; import java.util.TreeMap; +import java.util.concurrent.ThreadLocalRandom; import java.util.regex.Matcher; import java.util.regex.Pattern; import java.util.stream.Collectors; import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.hbase.Cell.Type; import org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.client.Consistency; import org.apache.hadoop.hbase.client.Delete; @@ -72,11 +72,12 @@ import org.apache.hadoop.hbase.util.ExceptionUtil; import org.apache.hadoop.hbase.util.Pair; import org.apache.hadoop.hbase.util.PairOfSameType; -import org.apache.hbase.thirdparty.com.google.common.base.Throwables; import org.apache.yetus.audience.InterfaceAudience; import org.slf4j.Logger; import org.slf4j.LoggerFactory; +import org.apache.hbase.thirdparty.com.google.common.base.Throwables; + /** *
<p>
    * Read/write operations on hbase:meta region as well as assignment information stored @@ -153,6 +154,7 @@ public class MetaTableAccessor { private static final byte SEPARATED_BYTE = 0x00; @InterfaceAudience.Private + @SuppressWarnings("ImmutableEnumChecker") public enum QueryType { ALL(HConstants.TABLE_FAMILY, HConstants.CATALOG_FAMILY), REGION(HConstants.CATALOG_FAMILY), @@ -174,8 +176,8 @@ byte[][] getFamilies() { static final char META_REPLICA_ID_DELIMITER = '_'; /** A regex for parsing server columns from meta. See above javadoc for meta layout */ - private static final Pattern SERVER_COLUMN_PATTERN - = Pattern.compile("^server(_[0-9a-fA-F]{4})?$"); + private static final Pattern SERVER_COLUMN_PATTERN = + Pattern.compile("^server(_[0-9a-fA-F]{4})?$"); //////////////////////// // Reading operations // @@ -184,10 +186,10 @@ byte[][] getFamilies() { /** * Performs a full scan of hbase:meta for regions. * @param connection connection we're using - * @param visitor Visitor invoked against each row in regions family. + * @param visitor Visitor invoked against each row in regions family. */ public static void fullScanRegions(Connection connection, final Visitor visitor) - throws IOException { + throws IOException { scanMeta(connection, null, null, QueryType.REGION, visitor); } @@ -202,17 +204,17 @@ public static List fullScanRegions(Connection connection) throws IOExcep /** * Performs a full scan of hbase:meta for tables. * @param connection connection we're using - * @param visitor Visitor invoked against each row in tables family. + * @param visitor Visitor invoked against each row in tables family. */ public static void fullScanTables(Connection connection, final Visitor visitor) - throws IOException { + throws IOException { scanMeta(connection, null, null, QueryType.TABLE, visitor); } /** * Performs a full scan of hbase:meta. * @param connection connection we're using - * @param type scanned part of meta + * @param type scanned part of meta * @return List of {@link Result} */ private static List fullScan(Connection connection, QueryType type) throws IOException { @@ -257,12 +259,10 @@ private static Result get(final Table t, final Get g) throws IOException { * @deprecated use {@link #getRegionLocation(Connection, byte[])} instead */ @Deprecated - public static Pair getRegion(Connection connection, byte [] regionName) + public static Pair getRegion(Connection connection, byte[] regionName) throws IOException { HRegionLocation location = getRegionLocation(connection, regionName); - return location == null - ? null - : new Pair<>(location.getRegionInfo(), location.getServerName()); + return location == null ? null : new Pair<>(location.getRegionInfo(), location.getServerName()); } /** @@ -272,7 +272,7 @@ public static Pair getRegion(Connection connection, byte * @return HRegionLocation for the given region */ public static HRegionLocation getRegionLocation(Connection connection, byte[] regionName) - throws IOException { + throws IOException { byte[] row = regionName; RegionInfo parsedInfo = null; try { @@ -287,8 +287,10 @@ public static HRegionLocation getRegionLocation(Connection connection, byte[] re get.addFamily(HConstants.CATALOG_FAMILY); Result r = get(getMetaHTable(connection), get); RegionLocations locations = getRegionLocations(r); - return locations == null ? null - : locations.getRegionLocation(parsedInfo == null ? 0 : parsedInfo.getReplicaId()); + return locations == null + ? null + : locations.getRegionLocation( + parsedInfo == null ? 
RegionInfo.DEFAULT_REPLICA_ID : parsedInfo.getReplicaId()); } /** @@ -298,16 +300,14 @@ public static HRegionLocation getRegionLocation(Connection connection, byte[] re * @return HRegionLocation for the given region */ public static HRegionLocation getRegionLocation(Connection connection, RegionInfo regionInfo) - throws IOException { - return getRegionLocation(getCatalogFamilyRow(connection, regionInfo), - regionInfo, regionInfo.getReplicaId()); + throws IOException { + return getRegionLocation(getCatalogFamilyRow(connection, regionInfo), regionInfo, + regionInfo.getReplicaId()); } - /** - * @return Return the {@link HConstants#CATALOG_FAMILY} row from hbase:meta. - */ + /** Returns Return the {@link HConstants#CATALOG_FAMILY} row from hbase:meta. */ public static Result getCatalogFamilyRow(Connection connection, RegionInfo ri) - throws IOException { + throws IOException { Get get = new Get(getMetaKeyForRegion(ri)); get.addFamily(HConstants.CATALOG_FAMILY); return get(getMetaHTable(connection), get); @@ -318,81 +318,80 @@ public static byte[] getMetaKeyForRegion(RegionInfo regionInfo) { return RegionReplicaUtil.getRegionInfoForDefaultReplica(regionInfo).getRegionName(); } - /** Returns an HRI parsed from this regionName. Not all the fields of the HRI - * is stored in the name, so the returned object should only be used for the fields - * in the regionName. + /** + * Returns an HRI parsed from this regionName. Not all the fields of the HRI is stored in the + * name, so the returned object should only be used for the fields in the regionName. */ // This should be moved to RegionInfo? TODO. public static RegionInfo parseRegionInfoFromRegionName(byte[] regionName) throws IOException { byte[][] fields = RegionInfo.parseRegionName(regionName); long regionId = Long.parseLong(Bytes.toString(fields[2])); int replicaId = fields.length > 3 ? Integer.parseInt(Bytes.toString(fields[3]), 16) : 0; - return RegionInfoBuilder.newBuilder(TableName.valueOf(fields[0])) - .setStartKey(fields[1]).setRegionId(regionId).setReplicaId(replicaId).build(); + return RegionInfoBuilder.newBuilder(TableName.valueOf(fields[0])).setStartKey(fields[1]) + .setRegionId(regionId).setReplicaId(replicaId).build(); } /** * Gets the result in hbase:meta for the specified region. * @param connection connection we're using - * @param regionName region we're looking for + * @param regionInfo region we're looking for * @return result of the specified region */ - public static Result getRegionResult(Connection connection, - byte[] regionName) throws IOException { - Get get = new Get(regionName); + public static Result getRegionResult(Connection connection, RegionInfo regionInfo) + throws IOException { + Get get = new Get(getMetaKeyForRegion(regionInfo)); get.addFamily(HConstants.CATALOG_FAMILY); return get(getMetaHTable(connection), get); } /** - * Scans META table for a row whose key contains the specified regionEncodedName, - * returning a single related Result instance if any row is found, null otherwise. - * - * @param connection the connection to query META table. + * Scans META table for a row whose key contains the specified regionEncodedName, returning + * a single related Result instance if any row is found, null otherwise. + * @param connection the connection to query META table. * @param regionEncodedName the region encoded name to look for at META. * @return Result instance with the row related info in META, null otherwise. * @throws IOException if any errors occur while querying META. 
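A minimal sketch of scanByRegionEncodedName as declared above; the connection bootstrap and the encoded region name are placeholders:

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.MetaTableAccessor;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;

public class EncodedNameLookupSketch {
  public static void main(String[] args) throws IOException {
    String encodedName = "0123456789abcdef0123456789abcdef"; // placeholder encoded region name
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      // Filters hbase:meta rows whose key contains the encoded name; null if nothing matches.
      Result row = MetaTableAccessor.scanByRegionEncodedName(conn, encodedName);
      System.out.println(row == null ? "no matching row in hbase:meta" : row);
    }
  }
}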
*/ - public static Result scanByRegionEncodedName(Connection connection, - String regionEncodedName) throws IOException { - RowFilter rowFilter = new RowFilter(CompareOperator.EQUAL, - new SubstringComparator(regionEncodedName)); + public static Result scanByRegionEncodedName(Connection connection, String regionEncodedName) + throws IOException { + RowFilter rowFilter = + new RowFilter(CompareOperator.EQUAL, new SubstringComparator(regionEncodedName)); Scan scan = getMetaScan(connection.getConfiguration(), 1); scan.setFilter(rowFilter); try (Table table = getMetaHTable(connection); - ResultScanner resultScanner = table.getScanner(scan)) { + ResultScanner resultScanner = table.getScanner(scan)) { return resultScanner.next(); } } /** - * @return Return all regioninfos listed in the 'info:merge*' columns of - * the regionName row. + * Returns Return all regioninfos listed in the 'info:merge*' columns of the {@code regionInfo} + * row. */ @Nullable - public static List getMergeRegions(Connection connection, byte[] regionName) - throws IOException { - return getMergeRegions(getRegionResult(connection, regionName).rawCells()); + public static List getMergeRegions(Connection connection, RegionInfo regionInfo) + throws IOException { + return getMergeRegions(getRegionResult(connection, regionInfo).rawCells()); } /** - * Check whether the given {@code regionName} has any 'info:merge*' columns. + * Check whether the given {@code regionInfo} has any 'info:merge*' columns. */ - public static boolean hasMergeRegions(Connection conn, byte[] regionName) throws IOException { - return hasMergeRegions(getRegionResult(conn, regionName).rawCells()); + public static boolean hasMergeRegions(Connection conn, RegionInfo regionInfo) throws IOException { + return hasMergeRegions(getRegionResult(conn, regionInfo).rawCells()); } /** - * @return Deserialized values of <qualifier,regioninfo> pairs taken from column values that - * match the regex 'info:merge.*' in array of cells. + * Returns Deserialized values of <qualifier,regioninfo> pairs taken from column values that + * match the regex 'info:merge.*' in array of cells. */ @Nullable - public static Map getMergeRegionsWithName(Cell [] cells) { + public static Map getMergeRegionsWithName(Cell[] cells) { if (cells == null) { return null; } Map regionsToMerge = null; - for (Cell cell: cells) { + for (Cell cell : cells) { if (!isMergeQualifierPrefix(cell)) { continue; } @@ -410,21 +409,21 @@ public static Map getMergeRegionsWithName(Cell [] cells) { } /** - * @return Deserialized regioninfo values taken from column values that match - * the regex 'info:merge.*' in array of cells. + * Returns Deserialized regioninfo values taken from column values that match the regex + * 'info:merge.*' in array of cells. */ @Nullable - public static List getMergeRegions(Cell [] cells) { + public static List getMergeRegions(Cell[] cells) { Map mergeRegionsWithName = getMergeRegionsWithName(cells); return (mergeRegionsWithName == null) ? null : new ArrayList<>(mergeRegionsWithName.values()); } /** - * @return True if any merge regions present in cells; i.e. - * the column in cell matches the regex 'info:merge.*'. + * Returns True if any merge regions present in cells; i.e. the column in + * cell matches the regex 'info:merge.*'. 
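The hunks above change getMergeRegions/hasMergeRegions to take a RegionInfo instead of a raw region name. A sketch of the new call pattern, with a hypothetical table:

import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.MetaTableAccessor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionInfo;

public class MergeColumnsSketch {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      TableName tn = TableName.valueOf("example_table");
      // Report regions that still carry unprocessed 'info:merge*' qualifiers in hbase:meta.
      for (RegionInfo ri : MetaTableAccessor.getTableRegions(conn, tn)) {
        if (MetaTableAccessor.hasMergeRegions(conn, ri)) {
          List<RegionInfo> parents = MetaTableAccessor.getMergeRegions(conn, ri);
          System.out.println(ri.getEncodedName() + " merged from " + parents);
        }
      }
    }
  }
}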
*/ - public static boolean hasMergeRegions(Cell [] cells) { - for (Cell cell: cells) { + public static boolean hasMergeRegions(Cell[] cells) { + for (Cell cell : cells) { if (!isMergeQualifierPrefix(cell)) { continue; } @@ -433,65 +432,61 @@ public static boolean hasMergeRegions(Cell [] cells) { return false; } - /** - * @return True if the column in cell matches the regex 'info:merge.*'. - */ + /** Returns True if the column in cell matches the regex 'info:merge.*'. */ private static boolean isMergeQualifierPrefix(Cell cell) { // Check to see if has family and that qualifier starts with the merge qualifier 'merge' - return CellUtil.matchingFamily(cell, HConstants.CATALOG_FAMILY) && - PrivateCellUtil.qualifierStartsWith(cell, HConstants.MERGE_QUALIFIER_PREFIX); + return CellUtil.matchingFamily(cell, HConstants.CATALOG_FAMILY) + && PrivateCellUtil.qualifierStartsWith(cell, HConstants.MERGE_QUALIFIER_PREFIX); } /** * Lists all of the regions currently in META. - * - * @param connection to connect with + * @param connection to connect with * @param excludeOfflinedSplitParents False if we are to include offlined/splitparents regions, * true and we'll leave out offlined regions from returned list * @return List of all user-space regions. */ public static List getAllRegions(Connection connection, - boolean excludeOfflinedSplitParents) - throws IOException { + boolean excludeOfflinedSplitParents) throws IOException { List> result; - result = getTableRegionsAndLocations(connection, null, - excludeOfflinedSplitParents); + result = getTableRegionsAndLocations(connection, null, excludeOfflinedSplitParents); return getListOfRegionInfos(result); } /** - * Gets all of the regions of the specified table. Do not use this method - * to get meta table regions, use methods in MetaTableLocator instead. + * Gets all of the regions of the specified table. Do not use this method to get meta table + * regions, use methods in MetaTableLocator instead. * @param connection connection we're using - * @param tableName table we're looking for + * @param tableName table we're looking for * @return Ordered list of {@link RegionInfo}. */ public static List getTableRegions(Connection connection, TableName tableName) - throws IOException { + throws IOException { return getTableRegions(connection, tableName, false); } /** - * Gets all of the regions of the specified table. Do not use this method - * to get meta table regions, use methods in MetaTableLocator instead. - * @param connection connection we're using - * @param tableName table we're looking for - * @param excludeOfflinedSplitParents If true, do not include offlined split - * parents in the return. + * Gets all of the regions of the specified table. Do not use this method to get meta table + * regions, use methods in MetaTableLocator instead. + * @param connection connection we're using + * @param tableName table we're looking for + * @param excludeOfflinedSplitParents If true, do not include offlined split parents in the + * return. * @return Ordered list of {@link RegionInfo}. 
*/ public static List getTableRegions(Connection connection, TableName tableName, - final boolean excludeOfflinedSplitParents) throws IOException { + final boolean excludeOfflinedSplitParents) throws IOException { List> result = getTableRegionsAndLocations(connection, tableName, excludeOfflinedSplitParents); return getListOfRegionInfos(result); } - private static List getListOfRegionInfos( - final List> pairs) { + @SuppressWarnings("MixedMutabilityReturnType") + private static List + getListOfRegionInfos(final List> pairs) { if (pairs == null || pairs.isEmpty()) { return Collections.emptyList(); } @@ -503,30 +498,28 @@ private static List getListOfRegionInfos( } /** - * @param tableName table we're working with - * @return start row for scanning META according to query type + * Returns start row for scanning META according to query type */ public static byte[] getTableStartRowForMeta(TableName tableName, QueryType type) { if (tableName == null) { return null; } switch (type) { - case REGION: - byte[] startRow = new byte[tableName.getName().length + 2]; - System.arraycopy(tableName.getName(), 0, startRow, 0, tableName.getName().length); - startRow[startRow.length - 2] = HConstants.DELIMITER; - startRow[startRow.length - 1] = HConstants.DELIMITER; - return startRow; - case ALL: - case TABLE: - default: - return tableName.getName(); + case REGION: + byte[] startRow = new byte[tableName.getName().length + 2]; + System.arraycopy(tableName.getName(), 0, startRow, 0, tableName.getName().length); + startRow[startRow.length - 2] = HConstants.DELIMITER; + startRow[startRow.length - 1] = HConstants.DELIMITER; + return startRow; + case ALL: + case TABLE: + default: + return tableName.getName(); } } /** - * @param tableName table we're working with - * @return stop row for scanning META according to query type + * Returns stop row for scanning META according to query type */ public static byte[] getTableStopRowForMeta(TableName tableName, QueryType type) { if (tableName == null) { @@ -534,30 +527,28 @@ public static byte[] getTableStopRowForMeta(TableName tableName, QueryType type) } final byte[] stopRow; switch (type) { - case REGION: - stopRow = new byte[tableName.getName().length + 3]; - System.arraycopy(tableName.getName(), 0, stopRow, 0, tableName.getName().length); - stopRow[stopRow.length - 3] = ' '; - stopRow[stopRow.length - 2] = HConstants.DELIMITER; - stopRow[stopRow.length - 1] = HConstants.DELIMITER; - break; - case ALL: - case TABLE: - default: - stopRow = new byte[tableName.getName().length + 1]; - System.arraycopy(tableName.getName(), 0, stopRow, 0, tableName.getName().length); - stopRow[stopRow.length - 1] = ' '; - break; + case REGION: + stopRow = new byte[tableName.getName().length + 3]; + System.arraycopy(tableName.getName(), 0, stopRow, 0, tableName.getName().length); + stopRow[stopRow.length - 3] = ' '; + stopRow[stopRow.length - 2] = HConstants.DELIMITER; + stopRow[stopRow.length - 1] = HConstants.DELIMITER; + break; + case ALL: + case TABLE: + default: + stopRow = new byte[tableName.getName().length + 1]; + System.arraycopy(tableName.getName(), 0, stopRow, 0, tableName.getName().length); + stopRow[stopRow.length - 1] = ' '; + break; } return stopRow; } /** - * This method creates a Scan object that will only scan catalog rows that - * belong to the specified table. It doesn't specify any columns. - * This is a better alternative to just using a start row and scan until - * it hits a new table since that requires parsing the HRI to get the table - * name. 
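To make the start/stop-row computation above concrete, a small sketch that prints the REGION-type scan bounds for a hypothetical table; QueryType is the nested enum of MetaTableAccessor shown earlier in this file:

import org.apache.hadoop.hbase.MetaTableAccessor;
import org.apache.hadoop.hbase.MetaTableAccessor.QueryType;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaScanBoundsSketch {
  public static void main(String[] args) {
    TableName tn = TableName.valueOf("example_table");
    // REGION-type scans are bounded by "<table>,," and "<table> ,," so the scan can stop
    // without parsing each RegionInfo to check which table it belongs to.
    byte[] start = MetaTableAccessor.getTableStartRowForMeta(tn, QueryType.REGION);
    byte[] stop = MetaTableAccessor.getTableStopRowForMeta(tn, QueryType.REGION);
    System.out.println(Bytes.toStringBinary(start) + " .. " + Bytes.toStringBinary(stop));
  }
}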
+ * This method creates a Scan object that will only scan catalog rows that belong to the specified + * table. It doesn't specify any columns. This is a better alternative to just using a start row + * and scan until it hits a new table since that requires parsing the HRI to get the table name. * @param tableName bytes of table's name * @return configured Scan object */ @@ -591,29 +582,28 @@ private static Scan getMetaScan(Configuration conf, int rowUpperLimit) { /** * Do not use this method to get meta table regions, use methods in MetaTableLocator instead. * @param connection connection we're using - * @param tableName table we're looking for + * @param tableName table we're looking for * @return Return list of regioninfos and server. */ public static List> - getTableRegionsAndLocations(Connection connection, TableName tableName) - throws IOException { + getTableRegionsAndLocations(Connection connection, TableName tableName) throws IOException { return getTableRegionsAndLocations(connection, tableName, true); } /** * Do not use this method to get meta table regions, use methods in MetaTableLocator instead. - * @param connection connection we're using - * @param tableName table to work with, can be null for getting all regions + * @param connection connection we're using + * @param tableName table to work with, can be null for getting all regions * @param excludeOfflinedSplitParents don't return split parents * @return Return list of regioninfos and server addresses. */ // What happens here when 1M regions in hbase:meta? This won't scale? public static List> getTableRegionsAndLocations( - Connection connection, @Nullable final TableName tableName, - final boolean excludeOfflinedSplitParents) throws IOException { + Connection connection, @Nullable final TableName tableName, + final boolean excludeOfflinedSplitParents) throws IOException { if (tableName != null && tableName.equals(TableName.META_TABLE_NAME)) { - throw new IOException("This method can't be used to locate meta regions;" - + " use MetaTableLocator instead"); + throw new IOException( + "This method can't be used to locate meta regions;" + " use MetaTableLocator instead"); } // Make a version of CollectingVisitor that collects RegionInfo and ServerAddress CollectingVisitor> visitor = @@ -645,23 +635,19 @@ void add(Result r) { } } }; - scanMeta(connection, - getTableStartRowForMeta(tableName, QueryType.REGION), - getTableStopRowForMeta(tableName, QueryType.REGION), - QueryType.REGION, visitor); + scanMeta(connection, getTableStartRowForMeta(tableName, QueryType.REGION), + getTableStopRowForMeta(tableName, QueryType.REGION), QueryType.REGION, visitor); return visitor.getResults(); } /** + * Get the user regions a given server is hosting. * @param connection connection we're using * @param serverName server whose regions we're interested in - * @return List of user regions installed on this server (does not include - * catalog regions). - * @throws IOException + * @return List of user regions installed on this server (does not include catalog regions). */ - public static NavigableMap - getServerUserRegions(Connection connection, final ServerName serverName) - throws IOException { + public static NavigableMap getServerUserRegions(Connection connection, + final ServerName serverName) throws IOException { final NavigableMap hris = new TreeMap<>(); // Fill the above hris map with entries from hbase:meta that have the passed // servername. 
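Not part of the patch above: a minimal usage sketch of the two public lookup helpers reflowed in this hunk, `getTableRegions` and `getServerUserRegions`. The table name `t1` and the server coordinates are hypothetical placeholders; everything else is standard hbase-client API.

```java
import java.util.List;
import java.util.NavigableMap;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.MetaTableAccessor;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.Result;

public class MetaLookupExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf)) {
      // All regions of a user table, skipping offlined split parents.
      List<RegionInfo> regions =
        MetaTableAccessor.getTableRegions(connection, TableName.valueOf("t1"), true);
      regions.forEach(r -> System.out.println(r.getRegionNameAsString()));

      // User regions currently hosted on one server (catalog regions are excluded).
      ServerName sn = ServerName.valueOf("rs1.example.com", 16020, 1234567890L);
      NavigableMap<RegionInfo, Result> hosted =
        MetaTableAccessor.getServerUserRegions(connection, sn);
      System.out.println("Regions on " + sn + ": " + hosted.size());
    }
  }
}
```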
@@ -684,10 +670,9 @@ void add(Result r) { return hris; } - public static void fullScanMetaAndPrint(Connection connection) - throws IOException { + public static void fullScanMetaAndPrint(Connection connection) throws IOException { Visitor v = r -> { - if (r == null || r.isEmpty()) { + if (r == null || r.isEmpty()) { return true; } LOG.info("fullScanMetaAndPrint.Current Meta Row: " + r); @@ -711,18 +696,24 @@ public static void fullScanMetaAndPrint(Connection connection) } public static void scanMetaForTableRegions(Connection connection, Visitor visitor, - TableName tableName) throws IOException { - scanMeta(connection, tableName, QueryType.REGION, Integer.MAX_VALUE, visitor); + TableName tableName, CatalogReplicaMode metaReplicaMode) throws IOException { + scanMeta(connection, tableName, QueryType.REGION, Integer.MAX_VALUE, visitor, metaReplicaMode); + } + + public static void scanMetaForTableRegions(Connection connection, Visitor visitor, + TableName tableName) throws IOException { + scanMetaForTableRegions(connection, visitor, tableName, CatalogReplicaMode.NONE); } private static void scanMeta(Connection connection, TableName table, QueryType type, int maxRows, - final Visitor visitor) throws IOException { + final Visitor visitor, CatalogReplicaMode metaReplicaMode) throws IOException { scanMeta(connection, getTableStartRowForMeta(table, type), getTableStopRowForMeta(table, type), - type, maxRows, visitor); + type, null, maxRows, visitor, metaReplicaMode); + } private static void scanMeta(Connection connection, @Nullable final byte[] startRow, - @Nullable final byte[] stopRow, QueryType type, final Visitor visitor) throws IOException { + @Nullable final byte[] stopRow, QueryType type, final Visitor visitor) throws IOException { scanMeta(connection, startRow, stopRow, type, Integer.MAX_VALUE, visitor); } @@ -735,7 +726,7 @@ private static void scanMeta(Connection connection, @Nullable final byte[] start * @param rowLimit max number of rows to return */ public static void scanMeta(Connection connection, final Visitor visitor, - final TableName tableName, final byte[] row, final int rowLimit) throws IOException { + final TableName tableName, final byte[] row, final int rowLimit) throws IOException { byte[] startRow = null; byte[] stopRow = null; if (tableName != null) { @@ -753,23 +744,21 @@ public static void scanMeta(Connection connection, final Visitor visitor, /** * Performs a scan of META table. * @param connection connection we're using - * @param startRow Where to start the scan. Pass null if want to begin scan - * at first row. - * @param stopRow Where to stop the scan. Pass null if want to scan all rows - * from the start one - * @param type scanned part of meta - * @param maxRows maximum rows to return - * @param visitor Visitor invoked against each row. + * @param startRow Where to start the scan. Pass null if want to begin scan at first row. + * @param stopRow Where to stop the scan. Pass null if want to scan all rows from the start one + * @param type scanned part of meta + * @param maxRows maximum rows to return + * @param visitor Visitor invoked against each row. 
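Not part of the patch: a fragment sketching how the new `CatalogReplicaMode`-aware overload of `scanMetaForTableRegions` added in this hunk might be driven. It assumes an open `Connection connection`, the imports from the previous sketch plus `java.util.ArrayList` and the `CatalogReplicaMode` enum, and a placeholder table name.

```java
// Illustrative only: collect RegionInfo for every hbase:meta row of table "t1".
// With CatalogReplicaMode.LOAD_BALANCE the scan may be served by a meta replica
// using TIMELINE consistency, as in the switch added further down in this patch.
List<RegionInfo> found = new ArrayList<>();
MetaTableAccessor.Visitor collector = r -> {
  RegionInfo info = MetaTableAccessor.getRegionInfo(r);
  if (info != null) {
    found.add(info);
  }
  return true; // keep scanning
};
MetaTableAccessor.scanMetaForTableRegions(connection, collector, TableName.valueOf("t1"),
  CatalogReplicaMode.LOAD_BALANCE);
```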
*/ static void scanMeta(Connection connection, @Nullable final byte[] startRow, - @Nullable final byte[] stopRow, QueryType type, int maxRows, final Visitor visitor) - throws IOException { - scanMeta(connection, startRow, stopRow, type, null, maxRows, visitor); + @Nullable final byte[] stopRow, QueryType type, int maxRows, final Visitor visitor) + throws IOException { + scanMeta(connection, startRow, stopRow, type, null, maxRows, visitor, CatalogReplicaMode.NONE); } private static void scanMeta(Connection connection, @Nullable final byte[] startRow, - @Nullable final byte[] stopRow, QueryType type, @Nullable Filter filter, int maxRows, - final Visitor visitor) throws IOException { + @Nullable final byte[] stopRow, QueryType type, @Nullable Filter filter, int maxRows, + final Visitor visitor, CatalogReplicaMode metaReplicaMode) throws IOException { int rowUpperLimit = maxRows > 0 ? maxRows : Integer.MAX_VALUE; Scan scan = getMetaScan(connection.getConfiguration(), rowUpperLimit); @@ -787,13 +776,32 @@ private static void scanMeta(Connection connection, @Nullable final byte[] start } if (LOG.isTraceEnabled()) { - LOG.trace("Scanning META" + " starting at row=" + Bytes.toStringBinary(startRow) + - " stopping at row=" + Bytes.toStringBinary(stopRow) + " for max=" + rowUpperLimit + - " with caching=" + scan.getCaching()); + LOG.trace("Scanning META" + " starting at row=" + Bytes.toStringBinary(startRow) + + " stopping at row=" + Bytes.toStringBinary(stopRow) + " for max=" + rowUpperLimit + + " with caching=" + scan.getCaching()); } int currentRow = 0; try (Table metaTable = getMetaHTable(connection)) { + switch (metaReplicaMode) { + case LOAD_BALANCE: + int numOfReplicas = metaTable.getDescriptor().getRegionReplication(); + if (numOfReplicas > 1) { + int replicaId = ThreadLocalRandom.current().nextInt(numOfReplicas); + + // When the replicaId is 0, do not set to Consistency.TIMELINE + if (replicaId > 0) { + scan.setReplicaId(replicaId); + scan.setConsistency(Consistency.TIMELINE); + } + } + break; + case HEDGED_READ: + scan.setConsistency(Consistency.TIMELINE); + break; + default: + // Do nothing + } try (ResultScanner scanner = metaTable.getScanner(scan)) { Result data; while ((data = scanner.next()) != null) { @@ -814,12 +822,10 @@ private static void scanMeta(Connection connection, @Nullable final byte[] start } } - /** - * @return Get closest metatable region row to passed row - */ + /** Returns Get closest metatable region row to passed row */ @NonNull private static RegionInfo getClosestRegionInfo(Connection connection, - @NonNull final TableName tableName, @NonNull final byte[] row) throws IOException { + @NonNull final TableName tableName, @NonNull final byte[] row) throws IOException { byte[] searchRow = RegionInfo.createRegionName(tableName, row, HConstants.NINES, false); Scan scan = getMetaScan(connection.getConfiguration(), 1); scan.setReversed(true); @@ -827,13 +833,13 @@ private static RegionInfo getClosestRegionInfo(Connection connection, try (ResultScanner resultScanner = getMetaHTable(connection).getScanner(scan)) { Result result = resultScanner.next(); if (result == null) { - throw new TableNotFoundException("Cannot find row in META " + - " for table: " + tableName + ", row=" + Bytes.toStringBinary(row)); + throw new TableNotFoundException("Cannot find row in META " + " for table: " + tableName + + ", row=" + Bytes.toStringBinary(row)); } RegionInfo regionInfo = getRegionInfo(result); if (regionInfo == null) { - throw new IOException("RegionInfo was null or empty in Meta for " 
+ - tableName + ", row=" + Bytes.toStringBinary(row)); + throw new IOException("RegionInfo was null or empty in Meta for " + tableName + ", row=" + + Bytes.toStringBinary(row)); } return regionInfo; } @@ -885,9 +891,10 @@ private static byte[] getRegionStateColumn() { * @return a byte[] for state qualifier */ public static byte[] getRegionStateColumn(int replicaId) { - return replicaId == 0 ? HConstants.STATE_QUALIFIER - : Bytes.toBytes(HConstants.STATE_QUALIFIER_STR + META_REPLICA_ID_DELIMITER - + String.format(RegionInfo.REPLICA_ID_FORMAT, replicaId)); + return replicaId == 0 + ? HConstants.STATE_QUALIFIER + : Bytes.toBytes(HConstants.STATE_QUALIFIER_STR + META_REPLICA_ID_DELIMITER + + String.format(RegionInfo.REPLICA_ID_FORMAT, replicaId)); } /** @@ -896,9 +903,10 @@ public static byte[] getRegionStateColumn(int replicaId) { * @return a byte[] for sn column qualifier */ public static byte[] getServerNameColumn(int replicaId) { - return replicaId == 0 ? HConstants.SERVERNAME_QUALIFIER - : Bytes.toBytes(HConstants.SERVERNAME_QUALIFIER_STR + META_REPLICA_ID_DELIMITER - + String.format(RegionInfo.REPLICA_ID_FORMAT, replicaId)); + return replicaId == 0 + ? HConstants.SERVERNAME_QUALIFIER + : Bytes.toBytes(HConstants.SERVERNAME_QUALIFIER_STR + META_REPLICA_ID_DELIMITER + + String.format(RegionInfo.REPLICA_ID_FORMAT, replicaId)); } /** @@ -910,7 +918,7 @@ public static byte[] getServerColumn(int replicaId) { return replicaId == 0 ? HConstants.SERVER_QUALIFIER : Bytes.toBytes(HConstants.SERVER_QUALIFIER_STR + META_REPLICA_ID_DELIMITER - + String.format(RegionInfo.REPLICA_ID_FORMAT, replicaId)); + + String.format(RegionInfo.REPLICA_ID_FORMAT, replicaId)); } /** @@ -922,7 +930,7 @@ public static byte[] getStartCodeColumn(int replicaId) { return replicaId == 0 ? HConstants.STARTCODE_QUALIFIER : Bytes.toBytes(HConstants.STARTCODE_QUALIFIER_STR + META_REPLICA_ID_DELIMITER - + String.format(RegionInfo.REPLICA_ID_FORMAT, replicaId)); + + String.format(RegionInfo.REPLICA_ID_FORMAT, replicaId)); } /** @@ -934,12 +942,12 @@ public static byte[] getSeqNumColumn(int replicaId) { return replicaId == 0 ? HConstants.SEQNUM_QUALIFIER : Bytes.toBytes(HConstants.SEQNUM_QUALIFIER_STR + META_REPLICA_ID_DELIMITER - + String.format(RegionInfo.REPLICA_ID_FORMAT, replicaId)); + + String.format(RegionInfo.REPLICA_ID_FORMAT, replicaId)); } /** - * Parses the replicaId from the server column qualifier. See top of the class javadoc - * for the actual meta layout + * Parses the replicaId from the server column qualifier. 
See top of the class javadoc for the + * actual meta layout * @param serverColumn the column qualifier * @return an int for the replicaId */ @@ -969,14 +977,14 @@ public static ServerName getServerName(final Result r, final int replicaId) { byte[] serverColumn = getServerColumn(replicaId); Cell cell = r.getColumnLatestCell(getCatalogFamily(), serverColumn); if (cell == null || cell.getValueLength() == 0) return null; - String hostAndPort = Bytes.toString( - cell.getValueArray(), cell.getValueOffset(), cell.getValueLength()); + String hostAndPort = + Bytes.toString(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength()); byte[] startcodeColumn = getStartCodeColumn(replicaId); cell = r.getColumnLatestCell(getCatalogFamily(), startcodeColumn); if (cell == null || cell.getValueLength() == 0) return null; try { return ServerName.valueOf(hostAndPort, - Bytes.toLong(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength())); + Bytes.toLong(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength())); } catch (IllegalArgumentException e) { LOG.error("Ignoring invalid region for server " + hostAndPort + "; cell=" + cell, e); return null; @@ -987,15 +995,14 @@ public static ServerName getServerName(final Result r, final int replicaId) { * Returns the {@link ServerName} from catalog table {@link Result} where the region is * transitioning on. It should be the same as {@link MetaTableAccessor#getServerName(Result,int)} * if the server is at OPEN state. - * * @param r Result to pull the transitioning server name from - * @return A ServerName instance or {@link MetaTableAccessor#getServerName(Result,int)} - * if necessary fields not found or empty. + * @return A ServerName instance or {@link MetaTableAccessor#getServerName(Result,int)} if + * necessary fields not found or empty. */ @Nullable public static ServerName getTargetServerName(final Result r, final int replicaId) { - final Cell cell = r.getColumnLatestCell(HConstants.CATALOG_FAMILY, - getServerNameColumn(replicaId)); + final Cell cell = + r.getColumnLatestCell(HConstants.CATALOG_FAMILY, getServerNameColumn(replicaId)); if (cell == null || cell.getValueLength() == 0) { RegionLocations locations = MetaTableAccessor.getRegionLocations(r); if (locations != null) { @@ -1006,13 +1013,13 @@ public static ServerName getTargetServerName(final Result r, final int replicaId } return null; } - return ServerName.parseServerName(Bytes.toString(cell.getValueArray(), cell.getValueOffset(), - cell.getValueLength())); + return ServerName.parseServerName( + Bytes.toString(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength())); } /** - * The latest seqnum that the server writing to meta observed when opening the region. - * E.g. the seqNum when the result of {@link #getServerName(Result, int)} was written. + * The latest seqnum that the server writing to meta observed when opening the region. E.g. the + * seqNum when the result of {@link #getServerName(Result, int)} was written. * @param r Result to pull the seqNum from * @return SeqNum, or HConstants.NO_SEQNUM if there's no value written. */ @@ -1023,8 +1030,7 @@ private static long getSeqNumDuringOpen(final Result r, final int replicaId) { } /** - * Returns the daughter regions by reading the corresponding columns of the catalog table - * Result. + * Returns the daughter regions by reading the corresponding columns of the catalog table Result. 
* @param data a Result object from the catalog table scan * @return pair of RegionInfo or PairOfSameType(null, null) if region is not a split parent */ @@ -1036,8 +1042,8 @@ public static PairOfSameType getDaughterRegions(Result data) { /** * Returns an HRegionLocationList extracted from the result. - * @return an HRegionLocationList containing all locations for the region range or null if - * we can't deserialize the result. + * @return an HRegionLocationList containing all locations for the region range or null if we + * can't deserialize the result. */ @Nullable public static RegionLocations getRegionLocations(final Result r) { @@ -1046,7 +1052,7 @@ public static RegionLocations getRegionLocations(final Result r) { if (regionInfo == null) return null; List locations = new ArrayList<>(1); - NavigableMap> familyMap = r.getNoVersionMap(); + NavigableMap> familyMap = r.getNoVersionMap(); locations.add(getRegionLocation(r, regionInfo, 0)); @@ -1080,16 +1086,15 @@ public static RegionLocations getRegionLocations(final Result r) { } /** - * Returns the HRegionLocation parsed from the given meta row Result - * for the given regionInfo and replicaId. The regionInfo can be the default region info - * for the replica. - * @param r the meta row result + * Returns the HRegionLocation parsed from the given meta row Result for the given regionInfo and + * replicaId. The regionInfo can be the default region info for the replica. + * @param r the meta row result * @param regionInfo RegionInfo for default replica - * @param replicaId the replicaId for the HRegionLocation + * @param replicaId the replicaId for the HRegionLocation * @return HRegionLocation parsed from the given meta row Result for the given replicaId */ private static HRegionLocation getRegionLocation(final Result r, final RegionInfo regionInfo, - final int replicaId) { + final int replicaId) { ServerName serverName = getServerName(r, replicaId); long seqNum = getSeqNumDuringOpen(r, replicaId); RegionInfo replicaInfo = RegionReplicaUtil.getRegionInfoForReplica(regionInfo, replicaId); @@ -1098,8 +1103,7 @@ private static HRegionLocation getRegionLocation(final Result r, final RegionInf /** * Returns RegionInfo object from the column - * HConstants.CATALOG_FAMILY:HConstants.REGIONINFO_QUALIFIER of the catalog - * table Result. + * HConstants.CATALOG_FAMILY:HConstants.REGIONINFO_QUALIFIER of the catalog table Result. * @param data a Result object from the catalog table scan * @return RegionInfo or null */ @@ -1110,26 +1114,25 @@ public static RegionInfo getRegionInfo(Result data) { /** * Returns the RegionInfo object from the column {@link HConstants#CATALOG_FAMILY} and * qualifier of the catalog table result. - * @param r a Result object from the catalog table scan + * @param r a Result object from the catalog table scan * @param qualifier Column family qualifier * @return An RegionInfo instance or null. 
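Not part of the patch: a fragment showing how a single hbase:meta row is typically decoded with the accessors reflowed in this hunk, assuming an already-fetched `Result r` and the relevant client-package imports (`RegionLocations`, `HRegionLocation`).

```java
// Illustrative only: decode one catalog row.
RegionInfo primary = MetaTableAccessor.getRegionInfo(r);      // info:regioninfo
ServerName server = MetaTableAccessor.getServerName(r, 0);    // info:server + start code
RegionLocations locations = MetaTableAccessor.getRegionLocations(r);
if (locations != null) {
  for (HRegionLocation loc : locations.getRegionLocations()) {
    boolean unassigned = loc == null || loc.getServerName() == null;
    System.out.println(unassigned ? "unassigned replica" : loc.toString());
  }
}
```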
*/ @Nullable - public static RegionInfo getRegionInfo(final Result r, byte [] qualifier) { + public static RegionInfo getRegionInfo(final Result r, byte[] qualifier) { Cell cell = r.getColumnLatestCell(getCatalogFamily(), qualifier); if (cell == null) return null; - return RegionInfo.parseFromOrNull(cell.getValueArray(), - cell.getValueOffset(), cell.getValueLength()); + return RegionInfo.parseFromOrNull(cell.getValueArray(), cell.getValueOffset(), + cell.getValueLength()); } /** * Fetch table state for given table from META table - * @param conn connection to use + * @param conn connection to use * @param tableName table to fetch state for */ @Nullable - public static TableState getTableState(Connection conn, TableName tableName) - throws IOException { + public static TableState getTableState(Connection conn, TableName tableName) throws IOException { if (tableName.equals(TableName.META_TABLE_NAME)) { return new TableState(tableName, TableState.State.ENABLED); } @@ -1144,8 +1147,7 @@ public static TableState getTableState(Connection conn, TableName tableName) * @param conn connection to use * @return map {tableName -> state} */ - public static Map getTableStates(Connection conn) - throws IOException { + public static Map getTableStates(Connection conn) throws IOException { final Map states = new LinkedHashMap<>(); Visitor collector = r -> { TableState state = getTableState(r); @@ -1159,19 +1161,17 @@ public static Map getTableStates(Connection conn) } /** - * Updates state in META - * Do not use. For internal use only. - * @param conn connection to use + * Updates state in META Do not use. For internal use only. + * @param conn connection to use * @param tableName table to look for */ - public static void updateTableState(Connection conn, TableName tableName, - TableState.State actual) throws IOException { + public static void updateTableState(Connection conn, TableName tableName, TableState.State actual) + throws IOException { updateTableState(conn, new TableState(tableName, actual)); } /** - * Decode table state from META Result. - * Should contain cell from HConstants.TABLE_FAMILY + * Decode table state from META Result. Should contain cell from HConstants.TABLE_FAMILY * @return null if not found */ @Nullable @@ -1196,8 +1196,7 @@ public interface Visitor { /** * Visit the catalog table row. * @param r A row from catalog table - * @return True if we are to proceed scanning the table, else false if - * we are to stop now. + * @return True if we are to proceed scanning the table, else false if we are to stop now. 
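Not part of the patch: a fragment for the table-state readers touched above, assuming an open `Connection connection` and a placeholder table name. Note the short-circuit in `getTableState` that always reports hbase:meta itself as ENABLED.

```java
// Illustrative only: read table state straight from hbase:meta.
TableState state = MetaTableAccessor.getTableState(connection, TableName.valueOf("t1"));
System.out.println(state == null ? "no state recorded" : state.getState());

// Snapshot of every table's state in one scan of the catalog.
Map<TableName, TableState> states = MetaTableAccessor.getTableStates(connection);
states.forEach((table, s) -> System.out.println(table + " => " + s.getState()));
```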
*/ boolean visit(final Result r) throws IOException; } @@ -1213,6 +1212,7 @@ public interface CloseableVisitor extends Visitor, Closeable { */ static abstract class CollectingVisitor implements Visitor { final List results = new ArrayList<>(); + @Override public boolean visit(Result r) throws IOException { if (r != null && !r.isEmpty()) { @@ -1223,10 +1223,7 @@ public boolean visit(Result r) throws IOException { abstract void add(Result r); - /** - * @return Collected results; wait till visits complete to collect all - * possible results - */ + /** Returns Collected results; wait till visits complete to collect all possible results */ List getResults() { return this.results; } @@ -1260,7 +1257,7 @@ public boolean visit(Result rowResult) throws IOException { return true; } - //skip over offline and split regions + // skip over offline and split regions if (!(info.isOffline() || info.isSplit())) { return visitInternal(rowResult); } @@ -1269,10 +1266,10 @@ public boolean visit(Result rowResult) throws IOException { } /** - * A Visitor for a table. Provides a consistent view of the table's - * hbase:meta entries during concurrent splits (see HBASE-5986 for details). This class - * does not guarantee ordered traversal of meta entries, and can block until the - * hbase:meta entries for daughters are available during splits. + * A Visitor for a table. Provides a consistent view of the table's hbase:meta entries during + * concurrent splits (see HBASE-5986 for details). This class does not guarantee ordered traversal + * of meta entries, and can block until the hbase:meta entries for daughters are available during + * splits. */ public static abstract class TableVisitorBase extends DefaultVisitorBase { private TableName tableName; @@ -1288,7 +1285,7 @@ public final boolean visit(Result rowResult) throws IOException { if (info == null) { return true; } - if (!(info.getTable().equals(tableName))) { + if (!info.getTable().equals(tableName)) { return false; } return super.visit(rowResult); @@ -1298,11 +1295,21 @@ public final boolean visit(Result rowResult) throws IOException { //////////////////////// // Editing operations // //////////////////////// + + /** + * Generates and returns a {@link Put} containing the {@link RegionInfo} for the catalog table. + * @throws IllegalArgumentException when the provided RegionInfo is not the default replica. + */ + public static Put makePutFromRegionInfo(RegionInfo regionInfo) throws IOException { + return makePutFromRegionInfo(regionInfo, EnvironmentEdgeManager.currentTime()); + } + /** - * Generates and returns a Put containing the region into for the catalog table + * Generates and returns a {@link Put} containing the {@link RegionInfo} for the catalog table. + * @throws IllegalArgumentException when the provided RegionInfo is not the default replica. */ public static Put makePutFromRegionInfo(RegionInfo regionInfo, long ts) throws IOException { - return addRegionInfo(new Put(regionInfo.getRegionName(), ts), regionInfo); + return addRegionInfo(new Put(getMetaKeyForRegion(regionInfo), ts), regionInfo); } /** @@ -1312,7 +1319,11 @@ public static Delete makeDeleteFromRegionInfo(RegionInfo regionInfo, long ts) { if (regionInfo == null) { throw new IllegalArgumentException("Can't make a delete for null region"); } - Delete delete = new Delete(regionInfo.getRegionName()); + if (regionInfo.getReplicaId() != RegionInfo.DEFAULT_REPLICA_ID) { + throw new IllegalArgumentException( + "Can't make delete for a replica region. 
Operate on the primary"); + } + Delete delete = new Delete(getMetaKeyForRegion(regionInfo)); delete.addFamily(getCatalogFamily(), ts); return delete; } @@ -1321,26 +1332,18 @@ public static Delete makeDeleteFromRegionInfo(RegionInfo regionInfo, long ts) { * Adds split daughters to the Put */ private static Put addDaughtersToPut(Put put, RegionInfo splitA, RegionInfo splitB) - throws IOException { + throws IOException { if (splitA != null) { - put.add(CellBuilderFactory.create(CellBuilderType.SHALLOW_COPY) - .setRow(put.getRow()) - .setFamily(HConstants.CATALOG_FAMILY) - .setQualifier(HConstants.SPLITA_QUALIFIER) - .setTimestamp(put.getTimestamp()) - .setType(Type.Put) - .setValue(RegionInfo.toByteArray(splitA)) - .build()); + put.add(CellBuilderFactory.create(CellBuilderType.SHALLOW_COPY).setRow(put.getRow()) + .setFamily(HConstants.CATALOG_FAMILY).setQualifier(HConstants.SPLITA_QUALIFIER) + .setTimestamp(put.getTimestamp()).setType(Cell.Type.Put) + .setValue(RegionInfo.toByteArray(splitA)).build()); } if (splitB != null) { - put.add(CellBuilderFactory.create(CellBuilderType.SHALLOW_COPY) - .setRow(put.getRow()) - .setFamily(HConstants.CATALOG_FAMILY) - .setQualifier(HConstants.SPLITB_QUALIFIER) - .setTimestamp(put.getTimestamp()) - .setType(Type.Put) - .setValue(RegionInfo.toByteArray(splitB)) - .build()); + put.add(CellBuilderFactory.create(CellBuilderType.SHALLOW_COPY).setRow(put.getRow()) + .setFamily(HConstants.CATALOG_FAMILY).setQualifier(HConstants.SPLITB_QUALIFIER) + .setTimestamp(put.getTimestamp()).setType(Cell.Type.Put) + .setValue(RegionInfo.toByteArray(splitB)).build()); } return put; } @@ -1348,7 +1351,7 @@ private static Put addDaughtersToPut(Put put, RegionInfo splitA, RegionInfo spli /** * Put the passed p to the hbase:meta table. * @param connection connection we're using - * @param p Put to add to hbase:meta + * @param p Put to add to hbase:meta */ private static void putToMetaTable(Connection connection, Put p) throws IOException { try (Table table = getMetaHTable(connection)) { @@ -1368,10 +1371,10 @@ private static void put(Table t, Put p) throws IOException { /** * Put the passed ps to the hbase:meta table. * @param connection connection we're using - * @param ps Put to add to hbase:meta + * @param ps Put to add to hbase:meta */ public static void putsToMetaTable(final Connection connection, final List ps) - throws IOException { + throws IOException { if (ps.isEmpty()) { return; } @@ -1389,10 +1392,10 @@ public static void putsToMetaTable(final Connection connection, final List /** * Delete the passed d from the hbase:meta table. * @param connection connection we're using - * @param d Delete to add to hbase:meta + * @param d Delete to add to hbase:meta */ private static void deleteFromMetaTable(final Connection connection, final Delete d) - throws IOException { + throws IOException { List dels = new ArrayList<>(1); dels.add(d); deleteFromMetaTable(connection, dels); @@ -1401,25 +1404,26 @@ private static void deleteFromMetaTable(final Connection connection, final Delet /** * Delete the passed deletes from the hbase:meta table. * @param connection connection we're using - * @param deletes Deletes to add to hbase:meta This list should support #remove. + * @param deletes Deletes to add to hbase:meta This list should support #remove. 
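Not part of the patch: a fragment around the catalog-edit builders changed in this hunk. It assumes `connection` is open and `regionInfo` may be any replica; per the new javadoc and the replica guard added to `makeDeleteFromRegionInfo`, catalog mutations are keyed on the primary replica's row, so the sketch resolves the default replica first.

```java
// Illustrative only: build catalog mutations against the primary replica's row.
RegionInfo primary = RegionReplicaUtil.getRegionInfoForDefaultReplica(regionInfo);

// New overload: the timestamp defaults to EnvironmentEdgeManager.currentTime().
Put put = MetaTableAccessor.makePutFromRegionInfo(primary);
MetaTableAccessor.putsToMetaTable(connection, Collections.singletonList(put));

// Passing a secondary replica here now fails fast with IllegalArgumentException.
// (Constructed for illustration only; deletion goes through the meta table.)
Delete delete =
  MetaTableAccessor.makeDeleteFromRegionInfo(primary, EnvironmentEdgeManager.currentTime());
```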
*/ private static void deleteFromMetaTable(final Connection connection, final List deletes) - throws IOException { + throws IOException { try (Table t = getMetaHTable(connection)) { debugLogMutations(deletes); t.delete(deletes); } } - private static Put addRegionStateToPut(Put put, RegionState.State state) throws IOException { - put.add(CellBuilderFactory.create(CellBuilderType.SHALLOW_COPY) - .setRow(put.getRow()) - .setFamily(HConstants.CATALOG_FAMILY) - .setQualifier(getRegionStateColumn()) - .setTimestamp(put.getTimestamp()) - .setType(Cell.Type.Put) - .setValue(Bytes.toBytes(state.name())) - .build()); + /** + * Set the column value corresponding to this {@code replicaId}'s {@link RegionState} to the + * provided {@code state}. Mutates the provided {@link Put}. + */ + private static Put addRegionStateToPut(Put put, int replicaId, RegionState.State state) + throws IOException { + put.add(CellBuilderFactory.create(CellBuilderType.SHALLOW_COPY).setRow(put.getRow()) + .setFamily(HConstants.CATALOG_FAMILY).setQualifier(getRegionStateColumn(replicaId)) + .setTimestamp(put.getTimestamp()).setType(Cell.Type.Put).setValue(Bytes.toBytes(state.name())) + .build()); return put; } @@ -1427,28 +1431,28 @@ private static Put addRegionStateToPut(Put put, RegionState.State state) throws * Update state column in hbase:meta. */ public static void updateRegionState(Connection connection, RegionInfo ri, - RegionState.State state) throws IOException { - Put put = new Put(RegionReplicaUtil.getRegionInfoForDefaultReplica(ri).getRegionName()); - MetaTableAccessor.putsToMetaTable(connection, - Collections.singletonList(addRegionStateToPut(put, state))); + RegionState.State state) throws IOException { + final Put put = makePutFromRegionInfo(ri); + addRegionStateToPut(put, ri.getReplicaId(), state); + putsToMetaTable(connection, Collections.singletonList(put)); } /** * Adds daughter region infos to hbase:meta row for the specified region. Note that this does not * add its daughter's as different rows, but adds information about the daughters in the same row * as the parent. Use - * {@link #splitRegion(Connection, RegionInfo, long, RegionInfo, RegionInfo, ServerName, int)} - * if you want to do that. + * {@link #splitRegion(Connection, RegionInfo, long, RegionInfo, RegionInfo, ServerName, int)} if + * you want to do that. * @param connection connection we're using * @param regionInfo RegionInfo of parent region - * @param splitA first split daughter of the parent regionInfo - * @param splitB second split daughter of the parent regionInfo + * @param splitA first split daughter of the parent regionInfo + * @param splitB second split daughter of the parent regionInfo * @throws IOException if problem connecting or updating meta */ public static void addSplitsToParent(Connection connection, RegionInfo regionInfo, - RegionInfo splitA, RegionInfo splitB) throws IOException { + RegionInfo splitA, RegionInfo splitB) throws IOException { try (Table meta = getMetaHTable(connection)) { - Put put = makePutFromRegionInfo(regionInfo, EnvironmentEdgeManager.currentTime()); + Put put = makePutFromRegionInfo(regionInfo); addDaughtersToPut(put, splitA, splitB); meta.put(put); debugLogMutation(put); @@ -1458,48 +1462,48 @@ public static void addSplitsToParent(Connection connection, RegionInfo regionInf /** * Adds a (single) hbase:meta row for the specified new region and its daughters. Note that this - * does not add its daughter's as different rows, but adds information about the daughters - * in the same row as the parent. 
Use - * {@link #splitRegion(Connection, RegionInfo, long, RegionInfo, RegionInfo, ServerName, int)} - * if you want to do that. + * does not add its daughter's as different rows, but adds information about the daughters in the + * same row as the parent. Use + * {@link #splitRegion(Connection, RegionInfo, long, RegionInfo, RegionInfo, ServerName, int)} if + * you want to do that. * @param connection connection we're using * @param regionInfo region information * @throws IOException if problem connecting or updating meta */ public static void addRegionToMeta(Connection connection, RegionInfo regionInfo) - throws IOException { + throws IOException { addRegionsToMeta(connection, Collections.singletonList(regionInfo), 1); } /** - * Adds a hbase:meta row for each of the specified new regions. Initial state for new regions - * is CLOSED. - * @param connection connection we're using + * Adds a hbase:meta row for each of the specified new regions. Initial state for new regions is + * CLOSED. + * @param connection connection we're using * @param regionInfos region information list * @throws IOException if problem connecting or updating meta */ public static void addRegionsToMeta(Connection connection, List regionInfos, - int regionReplication) throws IOException { + int regionReplication) throws IOException { addRegionsToMeta(connection, regionInfos, regionReplication, EnvironmentEdgeManager.currentTime()); } /** - * Adds a hbase:meta row for each of the specified new regions. Initial state for new regions - * is CLOSED. - * @param connection connection we're using + * Adds a hbase:meta row for each of the specified new regions. Initial state for new regions is + * CLOSED. + * @param connection connection we're using * @param regionInfos region information list - * @param ts desired timestamp + * @param ts desired timestamp * @throws IOException if problem connecting or updating meta */ private static void addRegionsToMeta(Connection connection, List regionInfos, - int regionReplication, long ts) throws IOException { + int regionReplication, long ts) throws IOException { List puts = new ArrayList<>(); for (RegionInfo regionInfo : regionInfos) { if (RegionReplicaUtil.isDefaultReplica(regionInfo)) { Put put = makePutFromRegionInfo(regionInfo, ts); // New regions are added with initial state of CLOSED. - addRegionStateToPut(put, RegionState.State.CLOSED); + addRegionStateToPut(put, regionInfo.getReplicaId(), RegionState.State.CLOSED); // Add empty locations for region replicas so that number of replicas can be cached // whenever the primary region is looked up from meta for (int i = 1; i < regionReplication; i++) { @@ -1517,41 +1521,36 @@ static Put addMergeRegions(Put put, Collection mergeRegions) throws int max = mergeRegions.size(); if (max > limit) { // Should never happen!!!!! But just in case. - throw new RuntimeException("Can't merge " + max + " regions in one go; " + limit + - " is upper-limit."); + throw new RuntimeException( + "Can't merge " + max + " regions in one go; " + limit + " is upper-limit."); } int counter = 0; - for (RegionInfo ri: mergeRegions) { + for (RegionInfo ri : mergeRegions) { String qualifier = String.format(HConstants.MERGE_QUALIFIER_PREFIX_STR + "%04d", counter++); - put.add(CellBuilderFactory.create(CellBuilderType.SHALLOW_COPY). - setRow(put.getRow()). - setFamily(HConstants.CATALOG_FAMILY). - setQualifier(Bytes.toBytes(qualifier)). - setTimestamp(put.getTimestamp()). - setType(Type.Put). - setValue(RegionInfo.toByteArray(ri)). 
- build()); + put.add(CellBuilderFactory.create(CellBuilderType.SHALLOW_COPY).setRow(put.getRow()) + .setFamily(HConstants.CATALOG_FAMILY).setQualifier(Bytes.toBytes(qualifier)) + .setTimestamp(put.getTimestamp()).setType(Cell.Type.Put) + .setValue(RegionInfo.toByteArray(ri)).build()); } return put; } /** - * Merge regions into one in an atomic operation. Deletes the merging regions in - * hbase:meta and adds the merged region. - * @param connection connection we're using + * Merge regions into one in an atomic operation. Deletes the merging regions in hbase:meta and + * adds the merged region. + * @param connection connection we're using * @param mergedRegion the merged region - * @param parentSeqNum Parent regions to merge and their next open sequence id used - * by serial replication. Set to -1 if not needed by this table. - * @param sn the location of the region + * @param parentSeqNum Parent regions to merge and their next open sequence id used by serial + * replication. Set to -1 if not needed by this table. + * @param sn the location of the region */ public static void mergeRegions(Connection connection, RegionInfo mergedRegion, - Map parentSeqNum, ServerName sn, int regionReplication) - throws IOException { + Map parentSeqNum, ServerName sn, int regionReplication) throws IOException { try (Table meta = getMetaHTable(connection)) { long time = HConstants.LATEST_TIMESTAMP; List mutations = new ArrayList<>(); List replicationParents = new ArrayList<>(); - for (Map.Entry e: parentSeqNum.entrySet()) { + for (Map.Entry e : parentSeqNum.entrySet()) { RegionInfo ri = e.getKey(); long seqNum = e.getValue(); // Deletes for merging regions @@ -1569,7 +1568,7 @@ public static void mergeRegions(Connection connection, RegionInfo mergedRegion, // default OFFLINE state. If Master gets restarted after this step, start up sequence of // master tries to assign this offline region. This is followed by re-assignments of the // merged region from resumed {@link MergeTableRegionsProcedure} - addRegionStateToPut(putOfMerged, RegionState.State.CLOSED); + addRegionStateToPut(putOfMerged, RegionInfo.DEFAULT_REPLICA_ID, RegionState.State.CLOSED); mutations.add(putOfMerged); // The merged is a new region, openSeqNum = 1 is fine. ServerName may be null // if crash after merge happened but before we got to here.. means in-memory @@ -1597,23 +1596,21 @@ public static void mergeRegions(Connection connection, RegionInfo mergedRegion, * Splits the region into two in an atomic operation. Offlines the parent region with the * information that it is split into two, and also adds the daughter regions. Does not add the * location information to the daughter regions since they are not open yet. - * @param connection connection we're using - * @param parent the parent region which is split + * @param connection connection we're using + * @param parent the parent region which is split * @param parentOpenSeqNum the next open sequence id for parent region, used by serial - * replication. -1 if not necessary. - * @param splitA Split daughter region A - * @param splitB Split daughter region B - * @param sn the location of the region + * replication. -1 if not necessary. 
+ * @param splitA Split daughter region A + * @param splitB Split daughter region B + * @param sn the location of the region */ public static void splitRegion(Connection connection, RegionInfo parent, long parentOpenSeqNum, - RegionInfo splitA, RegionInfo splitB, ServerName sn, int regionReplication) - throws IOException { + RegionInfo splitA, RegionInfo splitB, ServerName sn, int regionReplication) throws IOException { try (Table meta = getMetaHTable(connection)) { long time = EnvironmentEdgeManager.currentTime(); // Put for parent - Put putParent = makePutFromRegionInfo(RegionInfoBuilder.newBuilder(parent) - .setOffline(true) - .setSplit(true).build(), time); + Put putParent = makePutFromRegionInfo( + RegionInfoBuilder.newBuilder(parent).setOffline(true).setSplit(true).build(), time); addDaughtersToPut(putParent, splitA, splitB); // Puts for daughters @@ -1629,8 +1626,8 @@ public static void splitRegion(Connection connection, RegionInfo parent, long pa // default OFFLINE state. If Master gets restarted after this step, start up sequence of // master tries to assign these offline regions. This is followed by re-assignments of the // daughter regions from resumed {@link SplitTableRegionProcedure} - addRegionStateToPut(putA, RegionState.State.CLOSED); - addRegionStateToPut(putB, RegionState.State.CLOSED); + addRegionStateToPut(putA, RegionInfo.DEFAULT_REPLICA_ID, RegionState.State.CLOSED); + addRegionStateToPut(putB, RegionInfo.DEFAULT_REPLICA_ID, RegionState.State.CLOSED); addSequenceNum(putA, 1, splitA.getReplicaId()); // new regions, openSeqNum = 1 is fine. addSequenceNum(putB, 1, splitB.getReplicaId()); @@ -1650,7 +1647,7 @@ public static void splitRegion(Connection connection, RegionInfo parent, long pa /** * Update state of the table in meta. * @param connection what we use for update - * @param state new state + * @param state new state */ private static void updateTableState(Connection connection, TableState state) throws IOException { Put put = makePutFromTableState(state, EnvironmentEdgeManager.currentTime()); @@ -1671,10 +1668,9 @@ public static Put makePutFromTableState(TableState state, long ts) { /** * Remove state for table from meta * @param connection to use for deletion - * @param table to delete state for + * @param table to delete state for */ - public static void deleteTableState(Connection connection, TableName table) - throws IOException { + public static void deleteTableState(Connection connection, TableName table) throws IOException { long time = EnvironmentEdgeManager.currentTime(); Delete delete = new Delete(table.getName()); delete.addColumns(getTableFamily(), getTableStateColumn(), time); @@ -1682,18 +1678,18 @@ public static void deleteTableState(Connection connection, TableName table) LOG.info("Deleted table " + table + " state from META"); } - private static void multiMutate(Table table, byte[] row, - Mutation... mutations) throws IOException { + private static void multiMutate(Table table, byte[] row, Mutation... mutations) + throws IOException { multiMutate(table, row, Arrays.asList(mutations)); } /** - * Performs an atomic multi-mutate operation against the given table. Used by the likes of - * merge and split as these want to make atomic mutations across multiple rows. + * Performs an atomic multi-mutate operation against the given table. Used by the likes of merge + * and split as these want to make atomic mutations across multiple rows. * @throws IOException even if we encounter a RuntimeException, we'll still wrap it in an IOE. 
*/ static void multiMutate(final Table table, byte[] row, final List mutations) - throws IOException { + throws IOException { debugLogMutations(mutations); Batch.Call callable = instance -> { MutateRowsRequest.Builder builder = MutateRowsRequest.newBuilder(); @@ -1738,14 +1734,14 @@ static void multiMutate(final Table table, byte[] row, final List muta *

    * Uses passed catalog tracker to get a connection to the server hosting hbase:meta and makes * edits to that region. - * @param connection connection we're using - * @param regionInfo region to update location of - * @param openSeqNum the latest sequence number obtained when the region was open - * @param sn Server name + * @param connection connection we're using + * @param regionInfo region to update location of + * @param openSeqNum the latest sequence number obtained when the region was open + * @param sn Server name * @param masterSystemTime wall clock time from master if passed in the open region RPC */ public static void updateRegionLocation(Connection connection, RegionInfo regionInfo, - ServerName sn, long openSeqNum, long masterSystemTime) throws IOException { + ServerName sn, long openSeqNum, long masterSystemTime) throws IOException { updateLocation(connection, regionInfo, sn, openSeqNum, masterSystemTime); } @@ -1754,16 +1750,16 @@ public static void updateRegionLocation(Connection connection, RegionInfo region *

    * Connects to the specified server which should be hosting the specified catalog region name to * perform the edit. - * @param connection connection we're using - * @param regionInfo region to update location of - * @param sn Server name - * @param openSeqNum the latest sequence number obtained when the region was open + * @param connection connection we're using + * @param regionInfo region to update location of + * @param sn Server name + * @param openSeqNum the latest sequence number obtained when the region was open * @param masterSystemTime wall clock time from master if passed in the open region RPC * @throws IOException In particular could throw {@link java.net.ConnectException} if the server - * is down on other end. + * is down on other end. */ private static void updateLocation(Connection connection, RegionInfo regionInfo, ServerName sn, - long openSeqNum, long masterSystemTime) throws IOException { + long openSeqNum, long masterSystemTime) throws IOException { // region replicas are kept in the primary region's row Put put = new Put(getMetaKeyForRegion(regionInfo), masterSystemTime); addRegionInfo(put, regionInfo); @@ -1778,7 +1774,7 @@ private static void updateLocation(Connection connection, RegionInfo regionInfo, * @param regionInfo region to be deleted from META */ public static void deleteRegionInfo(Connection connection, RegionInfo regionInfo) - throws IOException { + throws IOException { Delete delete = new Delete(regionInfo.getRegionName()); delete.addFamily(getCatalogFamily(), HConstants.LATEST_TIMESTAMP); deleteFromMetaTable(connection, delete); @@ -1787,22 +1783,21 @@ public static void deleteRegionInfo(Connection connection, RegionInfo regionInfo /** * Deletes the specified regions from META. - * @param connection connection we're using + * @param connection connection we're using * @param regionsInfo list of regions to be deleted from META */ public static void deleteRegionInfos(Connection connection, List regionsInfo) - throws IOException { + throws IOException { deleteRegionInfos(connection, regionsInfo, EnvironmentEdgeManager.currentTime()); } /** * Deletes the specified regions from META. - * @param connection connection we're using + * @param connection connection we're using * @param regionsInfo list of regions to be deleted from META */ private static void deleteRegionInfos(Connection connection, List regionsInfo, - long ts) - throws IOException { + long ts) throws IOException { List deletes = new ArrayList<>(regionsInfo.size()); for (RegionInfo hri : regionsInfo) { Delete e = new Delete(hri.getRegionName()); @@ -1817,11 +1812,11 @@ private static void deleteRegionInfos(Connection connection, List re /** * Overwrites the specified regions from hbase:meta. Deletes old rows for the given regions and * adds new ones. Regions added back have state CLOSED. - * @param connection connection we're using + * @param connection connection we're using * @param regionInfos list of regions to be added to META */ public static void overwriteRegions(Connection connection, List regionInfos, - int regionReplication) throws IOException { + int regionReplication) throws IOException { // use master time for delete marker and the Put long now = EnvironmentEdgeManager.currentTime(); deleteRegionInfos(connection, regionInfos, now); @@ -1838,14 +1833,14 @@ public static void overwriteRegions(Connection connection, List regi /** * Deletes merge qualifiers for the specified merge region. 
- * @param connection connection we're using + * @param connection connection we're using * @param mergeRegion the merged region */ public static void deleteMergeQualifiers(Connection connection, final RegionInfo mergeRegion) - throws IOException { + throws IOException { Delete delete = new Delete(mergeRegion.getRegionName()); // NOTE: We are doing a new hbase:meta read here. - Cell[] cells = getRegionResult(connection, mergeRegion.getRegionName()).rawCells(); + Cell[] cells = getRegionResult(connection, mergeRegion).rawCells(); if (cells == null || cells.length == 0) { return; } @@ -1863,60 +1858,42 @@ public static void deleteMergeQualifiers(Connection connection, final RegionInfo // the previous GCMultipleMergedRegionsProcedure is still going on, in this case, the second // GCMultipleMergedRegionsProcedure could delete the merged region by accident! if (qualifiers.isEmpty()) { - LOG.info("No merged qualifiers for region " + mergeRegion.getRegionNameAsString() + - " in meta table, they are cleaned up already, Skip."); + LOG.info("No merged qualifiers for region " + mergeRegion.getRegionNameAsString() + + " in meta table, they are cleaned up already, Skip."); return; } deleteFromMetaTable(connection, delete); - LOG.info("Deleted merge references in " + mergeRegion.getRegionNameAsString() + - ", deleted qualifiers " + qualifiers.stream().map(Bytes::toStringBinary). - collect(Collectors.joining(", "))); - } - - public static Put addRegionInfo(final Put p, final RegionInfo hri) - throws IOException { - p.add(CellBuilderFactory.create(CellBuilderType.SHALLOW_COPY) - .setRow(p.getRow()) - .setFamily(getCatalogFamily()) - .setQualifier(HConstants.REGIONINFO_QUALIFIER) - .setTimestamp(p.getTimestamp()) - .setType(Type.Put) - // Serialize the Default Replica HRI otherwise scan of hbase:meta - // shows an info:regioninfo value with encoded name and region - // name that differs from that of the hbase;meta row. - .setValue(RegionInfo.toByteArray(RegionReplicaUtil.getRegionInfoForDefaultReplica(hri))) - .build()); + LOG.info( + "Deleted merge references in " + mergeRegion.getRegionNameAsString() + ", deleted qualifiers " + + qualifiers.stream().map(Bytes::toStringBinary).collect(Collectors.joining(", "))); + } + + public static Put addRegionInfo(final Put p, final RegionInfo hri) throws IOException { + p.add(CellBuilderFactory.create(CellBuilderType.SHALLOW_COPY).setRow(p.getRow()) + .setFamily(getCatalogFamily()).setQualifier(HConstants.REGIONINFO_QUALIFIER) + .setTimestamp(p.getTimestamp()).setType(Cell.Type.Put) + // Serialize the Default Replica HRI otherwise scan of hbase:meta + // shows an info:regioninfo value with encoded name and region + // name that differs from that of the hbase;meta row. 
+ .setValue(RegionInfo.toByteArray(RegionReplicaUtil.getRegionInfoForDefaultReplica(hri))) + .build()); return p; } public static Put addLocation(Put p, ServerName sn, long openSeqNum, int replicaId) - throws IOException { + throws IOException { CellBuilder builder = CellBuilderFactory.create(CellBuilderType.SHALLOW_COPY); - return p.add(builder.clear() - .setRow(p.getRow()) - .setFamily(getCatalogFamily()) - .setQualifier(getServerColumn(replicaId)) - .setTimestamp(p.getTimestamp()) - .setType(Cell.Type.Put) - .setValue(Bytes.toBytes(sn.getAddress().toString())) - .build()) - .add(builder.clear() - .setRow(p.getRow()) - .setFamily(getCatalogFamily()) - .setQualifier(getStartCodeColumn(replicaId)) - .setTimestamp(p.getTimestamp()) - .setType(Cell.Type.Put) - .setValue(Bytes.toBytes(sn.getStartcode())) - .build()) - .add(builder.clear() - .setRow(p.getRow()) - .setFamily(getCatalogFamily()) - .setQualifier(getSeqNumColumn(replicaId)) - .setTimestamp(p.getTimestamp()) - .setType(Type.Put) - .setValue(Bytes.toBytes(openSeqNum)) - .build()); + return p + .add(builder.clear().setRow(p.getRow()).setFamily(getCatalogFamily()) + .setQualifier(getServerColumn(replicaId)).setTimestamp(p.getTimestamp()) + .setType(Cell.Type.Put).setValue(Bytes.toBytes(sn.getAddress().toString())).build()) + .add(builder.clear().setRow(p.getRow()).setFamily(getCatalogFamily()) + .setQualifier(getStartCodeColumn(replicaId)).setTimestamp(p.getTimestamp()) + .setType(Cell.Type.Put).setValue(Bytes.toBytes(sn.getStartcode())).build()) + .add(builder.clear().setRow(p.getRow()).setFamily(getCatalogFamily()) + .setQualifier(getSeqNumColumn(replicaId)).setTimestamp(p.getTimestamp()) + .setType(Cell.Type.Put).setValue(Bytes.toBytes(openSeqNum)).build()); } private static void writeRegionName(ByteArrayOutputStream out, byte[] regionName) { @@ -1965,11 +1942,11 @@ private static void addReplicationParent(Put put, List parents) thro byte[] value = getParentsBytes(parents); put.add(CellBuilderFactory.create(CellBuilderType.SHALLOW_COPY).setRow(put.getRow()) .setFamily(HConstants.REPLICATION_BARRIER_FAMILY).setQualifier(REPLICATION_PARENT_QUALIFIER) - .setTimestamp(put.getTimestamp()).setType(Type.Put).setValue(value).build()); + .setTimestamp(put.getTimestamp()).setType(Cell.Type.Put).setValue(value).build()); } public static Put makePutForReplicationBarrier(RegionInfo regionInfo, long openSeqNum, long ts) - throws IOException { + throws IOException { Put put = new Put(regionInfo.getRegionName(), ts); addReplicationBarrier(put, openSeqNum); return put; @@ -1979,39 +1956,24 @@ public static Put makePutForReplicationBarrier(RegionInfo regionInfo, long openS * See class comment on SerialReplicationChecker */ public static void addReplicationBarrier(Put put, long openSeqNum) throws IOException { - put.add(CellBuilderFactory.create(CellBuilderType.SHALLOW_COPY) - .setRow(put.getRow()) - .setFamily(HConstants.REPLICATION_BARRIER_FAMILY) - .setQualifier(HConstants.SEQNUM_QUALIFIER) - .setTimestamp(put.getTimestamp()) - .setType(Type.Put) - .setValue(Bytes.toBytes(openSeqNum)) + put.add(CellBuilderFactory.create(CellBuilderType.SHALLOW_COPY).setRow(put.getRow()) + .setFamily(HConstants.REPLICATION_BARRIER_FAMILY).setQualifier(HConstants.SEQNUM_QUALIFIER) + .setTimestamp(put.getTimestamp()).setType(Cell.Type.Put).setValue(Bytes.toBytes(openSeqNum)) .build()); } public static Put addEmptyLocation(Put p, int replicaId) throws IOException { CellBuilder builder = CellBuilderFactory.create(CellBuilderType.SHALLOW_COPY); - return 
p.add(builder.clear() - .setRow(p.getRow()) - .setFamily(getCatalogFamily()) - .setQualifier(getServerColumn(replicaId)) - .setTimestamp(p.getTimestamp()) - .setType(Type.Put) - .build()) - .add(builder.clear() - .setRow(p.getRow()) - .setFamily(getCatalogFamily()) - .setQualifier(getStartCodeColumn(replicaId)) - .setTimestamp(p.getTimestamp()) - .setType(Cell.Type.Put) - .build()) - .add(builder.clear() - .setRow(p.getRow()) - .setFamily(getCatalogFamily()) - .setQualifier(getSeqNumColumn(replicaId)) - .setTimestamp(p.getTimestamp()) - .setType(Cell.Type.Put) - .build()); + return p + .add(builder.clear().setRow(p.getRow()).setFamily(getCatalogFamily()) + .setQualifier(getServerColumn(replicaId)).setTimestamp(p.getTimestamp()) + .setType(Cell.Type.Put).build()) + .add(builder.clear().setRow(p.getRow()).setFamily(getCatalogFamily()) + .setQualifier(getStartCodeColumn(replicaId)).setTimestamp(p.getTimestamp()) + .setType(Cell.Type.Put).build()) + .add(builder.clear().setRow(p.getRow()).setFamily(getCatalogFamily()) + .setQualifier(getSeqNumColumn(replicaId)).setTimestamp(p.getTimestamp()) + .setType(Cell.Type.Put).build()); } public static final class ReplicationBarrierResult { @@ -2039,10 +2001,10 @@ public List getParentRegionNames() { @Override public String toString() { - return "ReplicationBarrierResult [barriers=" + Arrays.toString(barriers) + ", state=" + - state + ", parentRegionNames=" + - parentRegionNames.stream().map(Bytes::toStringBinary).collect(Collectors.joining(", ")) + - "]"; + return "ReplicationBarrierResult [barriers=" + Arrays.toString(barriers) + ", state=" + state + + ", parentRegionNames=" + + parentRegionNames.stream().map(Bytes::toStringBinary).collect(Collectors.joining(", ")) + + "]"; } } @@ -2068,7 +2030,7 @@ private static ReplicationBarrierResult getReplicationBarrierResult(Result resul } public static ReplicationBarrierResult getReplicationBarrierResult(Connection conn, - TableName tableName, byte[] row, byte[] encodedRegionName) throws IOException { + TableName tableName, byte[] row, byte[] encodedRegionName) throws IOException { byte[] metaStartKey = RegionInfo.createRegionName(tableName, row, HConstants.NINES, false); byte[] metaStopKey = RegionInfo.createRegionName(tableName, HConstants.EMPTY_START_ROW, "", false); @@ -2086,8 +2048,9 @@ public static ReplicationBarrierResult getReplicationBarrierResult(Connection co // TODO: we may look up a region which has already been split or merged so we need to check // whether the encoded name matches. Need to find a way to quit earlier when there is no // record for the given region, for now it will scan to the end of the table. 
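Not part of the patch: a fragment for the serial-replication barrier readers reformatted above, assuming an open `Connection conn`, a known `RegionInfo regionInfo`, and a placeholder table name. The barriers are the `HConstants.REPLICATION_BARRIER_FAMILY` / `SEQNUM_QUALIFIER` cells written by `addReplicationBarrier`.

```java
// Illustrative only: read the open-sequence-number barriers recorded for a region.
long[] barriers = MetaTableAccessor.getReplicationBarrier(conn, regionInfo.getRegionName());
System.out.println("barriers: " + Arrays.toString(barriers));

// Encoded region names found in the table's replication-barrier scan range.
List<String> encoded =
  MetaTableAccessor.getTableEncodedRegionNamesForSerialReplication(conn, TableName.valueOf("t1"));
```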
- if (!Bytes.equals(encodedRegionName, - Bytes.toBytes(RegionInfo.encodeRegionName(regionName)))) { + if ( + !Bytes.equals(encodedRegionName, Bytes.toBytes(RegionInfo.encodeRegionName(regionName))) + ) { continue; } return getReplicationBarrierResult(result); @@ -2096,7 +2059,7 @@ public static ReplicationBarrierResult getReplicationBarrierResult(Connection co } public static long[] getReplicationBarrier(Connection conn, byte[] regionName) - throws IOException { + throws IOException { try (Table table = getMetaHTable(conn)) { Result result = table.get(new Get(regionName) .addColumn(HConstants.REPLICATION_BARRIER_FAMILY, HConstants.SEQNUM_QUALIFIER) @@ -2106,7 +2069,7 @@ public static long[] getReplicationBarrier(Connection conn, byte[] regionName) } public static List> getTableEncodedRegionNameAndLastBarrier(Connection conn, - TableName tableName) throws IOException { + TableName tableName) throws IOException { List> list = new ArrayList<>(); scanMeta(conn, getTableStartRowForMeta(tableName, QueryType.REPLICATION), getTableStopRowForMeta(tableName, QueryType.REPLICATION), QueryType.REPLICATION, r -> { @@ -2124,14 +2087,14 @@ public static List> getTableEncodedRegionNameAndLastBarrier(C } public static List getTableEncodedRegionNamesForSerialReplication(Connection conn, - TableName tableName) throws IOException { + TableName tableName) throws IOException { List list = new ArrayList<>(); scanMeta(conn, getTableStartRowForMeta(tableName, QueryType.REPLICATION), getTableStopRowForMeta(tableName, QueryType.REPLICATION), QueryType.REPLICATION, new FirstKeyOnlyFilter(), Integer.MAX_VALUE, r -> { list.add(RegionInfo.encodeRegionName(r.getRow())); return true; - }); + }, CatalogReplicaMode.NONE); return list; } @@ -2151,13 +2114,9 @@ private static void debugLogMutation(Mutation p) throws IOException { } private static Put addSequenceNum(Put p, long openSeqNum, int replicaId) throws IOException { - return p.add(CellBuilderFactory.create(CellBuilderType.SHALLOW_COPY) - .setRow(p.getRow()) - .setFamily(HConstants.CATALOG_FAMILY) - .setQualifier(getSeqNumColumn(replicaId)) - .setTimestamp(p.getTimestamp()) - .setType(Type.Put) - .setValue(Bytes.toBytes(openSeqNum)) - .build()); + return p.add(CellBuilderFactory.create(CellBuilderType.SHALLOW_COPY).setRow(p.getRow()) + .setFamily(HConstants.CATALOG_FAMILY).setQualifier(getSeqNumColumn(replicaId)) + .setTimestamp(p.getTimestamp()).setType(Cell.Type.Put).setValue(Bytes.toBytes(openSeqNum)) + .build()); } } diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/MultiActionResultTooLarge.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/MultiActionResultTooLarge.java index 3e06f4250af6..a49575849b04 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/MultiActionResultTooLarge.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/MultiActionResultTooLarge.java @@ -1,4 +1,4 @@ -/** +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -7,7 +7,7 @@ * "License"); you may not use this file except in compliance * with the License. 
You may obtain a copy of the License at * - * http://www.apache.org/licenses/LICENSE-2.0 + * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, @@ -15,15 +15,14 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase; import org.apache.yetus.audience.InterfaceAudience; /** - * Exception thrown when the result needs to be chunked on the server side. - * It signals that retries should happen right away and not count against the number of - * retries because some of the multi was a success. + * Exception thrown when the result needs to be chunked on the server side. It signals that retries + * should happen right away and not count against the number of retries because some of the multi + * was a success. */ @InterfaceAudience.Public public class MultiActionResultTooLarge extends RetryImmediatelyException { diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/NamespaceExistException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/NamespaceExistException.java index 5263523417ed..83e29fd9edc1 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/NamespaceExistException.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/NamespaceExistException.java @@ -1,4 +1,4 @@ -/** +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase; import org.apache.yetus.audience.InterfaceAudience; diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/NamespaceNotFoundException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/NamespaceNotFoundException.java index 72ff1e61b849..0af01d23bddf 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/NamespaceNotFoundException.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/NamespaceNotFoundException.java @@ -1,4 +1,4 @@ -/** +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase; import org.apache.yetus.audience.InterfaceAudience; diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/NotAllMetaRegionsOnlineException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/NotAllMetaRegionsOnlineException.java index c51fccb5955d..bc156353a1b7 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/NotAllMetaRegionsOnlineException.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/NotAllMetaRegionsOnlineException.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -16,7 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ - package org.apache.hadoop.hbase; import org.apache.yetus.audience.InterfaceAudience; @@ -27,6 +25,7 @@ @InterfaceAudience.Public public class NotAllMetaRegionsOnlineException extends DoNotRetryIOException { private static final long serialVersionUID = 6439786157874827523L; + /** * default constructor */ @@ -35,8 +34,7 @@ public NotAllMetaRegionsOnlineException() { } /** - * @param message - */ + * */ public NotAllMetaRegionsOnlineException(String message) { super(message); } diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/NotServingRegionException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/NotServingRegionException.java index 918408778c0d..aa138478b4ab 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/NotServingRegionException.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/NotServingRegionException.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -19,7 +18,6 @@ package org.apache.hadoop.hbase; import java.io.IOException; - import org.apache.hadoop.hbase.util.Bytes; import org.apache.yetus.audience.InterfaceAudience; diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/PleaseHoldException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/PleaseHoldException.java index e887928da828..473947b8f769 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/PleaseHoldException.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/PleaseHoldException.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -21,10 +20,10 @@ import org.apache.yetus.audience.InterfaceAudience; /** - * This exception is thrown by the master when a region server was shut down and - * restarted so fast that the master still hasn't processed the server shutdown - * of the first instance, or when master is initializing and client call admin - * operations, or when an operation is performed on a region server that is still starting. + * This exception is thrown by the master when a region server was shut down and restarted so fast + * that the master still hasn't processed the server shutdown of the first instance, or when master + * is initializing and client call admin operations, or when an operation is performed on a region + * server that is still starting. */ @SuppressWarnings("serial") @InterfaceAudience.Public diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/PleaseRestartMasterException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/PleaseRestartMasterException.java index 62f84e9495be..5e60e44243a0 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/PleaseRestartMasterException.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/PleaseRestartMasterException.java @@ -1,5 +1,4 @@ /* - * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -16,7 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ - package org.apache.hadoop.hbase; import org.apache.yetus.audience.InterfaceAudience; diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/RSGroupTableAccessor.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/RSGroupTableAccessor.java index 406c41ee52c1..ba1ccfa5d9c8 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/RSGroupTableAccessor.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/RSGroupTableAccessor.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -21,7 +20,6 @@ import java.io.IOException; import java.util.ArrayList; import java.util.List; - import org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Result; @@ -34,14 +32,14 @@ import org.apache.yetus.audience.InterfaceAudience; /** - * Read rs group information from hbase:rsgroup. + * Read rs group information from hbase:rsgroup. */ @InterfaceAudience.Private public final class RSGroupTableAccessor { - //Assigned before user tables + // Assigned before user tables private static final TableName RSGROUP_TABLE_NAME = - TableName.valueOf(NamespaceDescriptor.SYSTEM_NAMESPACE_NAME_STR, "rsgroup"); + TableName.valueOf(NamespaceDescriptor.SYSTEM_NAMESPACE_NAME_STR, "rsgroup"); private static final byte[] META_FAMILY_BYTES = Bytes.toBytes("m"); private static final byte[] META_QUALIFIER_BYTES = Bytes.toBytes("i"); @@ -52,8 +50,7 @@ public static boolean isRSGroupsEnabled(Connection connection) throws IOExceptio return connection.getAdmin().tableExists(RSGROUP_TABLE_NAME); } - public static List getAllRSGroupInfo(Connection connection) - throws IOException { + public static List getAllRSGroupInfo(Connection connection) throws IOException { try (Table rsGroupTable = connection.getTable(RSGROUP_TABLE_NAME)) { List rsGroupInfos = new ArrayList<>(); for (Result result : rsGroupTable.getScanner(new Scan())) { @@ -71,14 +68,13 @@ private static RSGroupInfo getRSGroupInfo(Result result) throws IOException { if (rsGroupInfo == null) { return null; } - RSGroupProtos.RSGroupInfo proto = - RSGroupProtos.RSGroupInfo.parseFrom(rsGroupInfo); + RSGroupProtos.RSGroupInfo proto = RSGroupProtos.RSGroupInfo.parseFrom(rsGroupInfo); return ProtobufUtil.toGroupInfo(proto); } public static RSGroupInfo getRSGroupInfo(Connection connection, byte[] rsGroupName) - throws IOException { - try (Table rsGroupTable = connection.getTable(RSGROUP_TABLE_NAME)){ + throws IOException { + try (Table rsGroupTable = connection.getTable(RSGROUP_TABLE_NAME)) { Result result = rsGroupTable.get(new Get(rsGroupName)); return getRSGroupInfo(result); } diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/RegionException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/RegionException.java index 8a8d2151aa2e..aff9ff8af472 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/RegionException.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/RegionException.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -21,8 +20,7 @@ import org.apache.yetus.audience.InterfaceAudience; /** - * Thrown when something happens related to region handling. - * Subclasses have to be more specific. 
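As context for the RSGroupTableAccessor hunk above, here is a hedged, simplified sketch of the read path it reformats: open hbase:rsgroup, issue a Get for the group name, and pull the serialized group descriptor from the "m:i" column. The family and qualifier literals mirror the constants in the hunk; the protobuf decode step is only described in a comment to keep the sketch free of generated classes, and the class name is illustrative.

import java.io.IOException;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public final class RsGroupReadSketch {
  private static final TableName RSGROUP_TABLE =
    TableName.valueOf(NamespaceDescriptor.SYSTEM_NAMESPACE_NAME_STR, "rsgroup");

  static byte[] readGroupInfoBytes(Connection connection, byte[] groupName) throws IOException {
    // try-with-resources closes the Table; the Connection stays owned by the caller.
    try (Table table = connection.getTable(RSGROUP_TABLE)) {
      Result result = table.get(new Get(groupName));
      // The returned bytes are a serialized RSGroupProtos.RSGroupInfo; parse them with the
      // generated protobuf class in real code (omitted here).
      return result.getValue(Bytes.toBytes("m"), Bytes.toBytes("i"));
    }
  }
}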
+ * Thrown when something happens related to region handling. Subclasses have to be more specific. */ @InterfaceAudience.Public public class RegionException extends HBaseIOException { diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/RegionLoad.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/RegionLoad.java index e05a0e6f1093..d61ba86a33e3 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/RegionLoad.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/RegionLoad.java @@ -1,6 +1,4 @@ -/** - * Copyright The Apache Software Foundation - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -17,7 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase; import java.util.List; @@ -28,12 +25,13 @@ import org.apache.yetus.audience.InterfaceAudience; import org.apache.hbase.thirdparty.com.google.protobuf.UnsafeByteOperations; + import org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterStatusProtos; /** * Encapsulates per-region load metrics. - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link RegionMetrics} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use {@link RegionMetrics} + * instead. */ @InterfaceAudience.Public @Deprecated @@ -43,7 +41,7 @@ public class RegionLoad implements RegionMetrics { protected ClusterStatusProtos.RegionLoad regionLoadPB; private final RegionMetrics metrics; - @edu.umd.cs.findbugs.annotations.SuppressWarnings(value="URF_UNREAD_PUBLIC_OR_PROTECTED_FIELD") + @edu.umd.cs.findbugs.annotations.SuppressWarnings(value = "URF_UNREAD_PUBLIC_OR_PROTECTED_FIELD") public RegionLoad(ClusterStatusProtos.RegionLoad regionLoadPB) { this.regionLoadPB = regionLoadPB; this.metrics = RegionMetricsBuilder.toRegionMetrics(regionLoadPB); @@ -56,8 +54,8 @@ public RegionLoad(ClusterStatusProtos.RegionLoad regionLoadPB) { /** * @return the region name - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getRegionName} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use {@link #getRegionName} + * instead. */ @Deprecated public byte[] getName() { @@ -151,8 +149,8 @@ public Size getUncompressedStoreFileSize() { /** * @return the number of stores - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getStoreCount} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use {@link #getStoreCount} + * instead. */ @Deprecated public int getStores() { @@ -161,8 +159,8 @@ public int getStores() { /** * @return the number of storefiles - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getStoreFileCount} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getStoreFileCount} instead. */ @Deprecated public int getStorefiles() { @@ -171,8 +169,8 @@ public int getStorefiles() { /** * @return the total size of the storefiles, in MB - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getStoreFileSize} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getStoreFileSize} instead. 
*/ @Deprecated public int getStorefileSizeMB() { @@ -181,8 +179,8 @@ public int getStorefileSizeMB() { /** * @return the memstore size, in MB - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getMemStoreSize} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getMemStoreSize} instead. */ @Deprecated public int getMemStoreSizeMB() { @@ -191,8 +189,8 @@ public int getMemStoreSizeMB() { /** * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * ((HBASE-3935)). - * Use {@link #getStoreFileRootLevelIndexSize} instead. + * ((HBASE-3935)). Use + * {@link #getStoreFileRootLevelIndexSize} instead. */ @Deprecated public int getStorefileIndexSizeMB() { @@ -201,8 +199,8 @@ public int getStorefileIndexSizeMB() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getStoreFileRootLevelIndexSize()} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getStoreFileRootLevelIndexSize()} instead. */ @Deprecated public int getStorefileIndexSizeKB() { @@ -211,8 +209,8 @@ public int getStorefileIndexSizeKB() { /** * @return the number of requests made to region - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getRequestCount()} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getRequestCount()} instead. */ @Deprecated public long getRequestsCount() { @@ -221,8 +219,8 @@ public long getRequestsCount() { /** * @return the number of read requests made to region - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getReadRequestCount} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getReadRequestCount} instead. */ @Deprecated public long getReadRequestsCount() { @@ -231,8 +229,8 @@ public long getReadRequestsCount() { /** * @return the number of filtered read requests made to region - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getFilteredReadRequestCount} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getFilteredReadRequestCount} instead. */ @Deprecated public long getFilteredReadRequestsCount() { @@ -241,8 +239,8 @@ public long getFilteredReadRequestsCount() { /** * @return the number of write requests made to region - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getWriteRequestCount} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getWriteRequestCount} instead. */ @Deprecated public long getWriteRequestsCount() { @@ -251,8 +249,8 @@ public long getWriteRequestsCount() { /** * @return The current total size of root-level indexes for the region, in KB. - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getStoreFileRootLevelIndexSize} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getStoreFileRootLevelIndexSize} instead. */ @Deprecated public int getRootIndexSizeKB() { @@ -261,8 +259,8 @@ public int getRootIndexSizeKB() { /** * @return The total size of all index blocks, not just the root level, in KB. - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getStoreFileUncompressedDataIndexSize} instead. 
+ * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getStoreFileUncompressedDataIndexSize} instead. */ @Deprecated public int getTotalStaticIndexSizeKB() { @@ -270,10 +268,9 @@ public int getTotalStaticIndexSizeKB() { } /** - * @return The total size of all Bloom filter blocks, not just loaded into the - * block cache, in KB. - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getBloomFilterSize} instead. + * @return The total size of all Bloom filter blocks, not just loaded into the block cache, in KB. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getBloomFilterSize} instead. */ @Deprecated public int getTotalStaticBloomSizeKB() { @@ -282,8 +279,8 @@ public int getTotalStaticBloomSizeKB() { /** * @return the total number of kvs in current compaction - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getCompactingCellCount} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getCompactingCellCount} instead. */ @Deprecated public long getTotalCompactingKVs() { @@ -292,8 +289,8 @@ public long getTotalCompactingKVs() { /** * @return the number of already compacted kvs in current compaction - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getCompactedCellCount} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getCompactedCellCount} instead. */ @Deprecated public long getCurrentCompactedKVs() { @@ -303,8 +300,8 @@ public long getCurrentCompactedKVs() { /** * This does not really belong inside RegionLoad but its being done in the name of expediency. * @return the completed sequence Id for the region - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getCompletedSequenceId} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getCompletedSequenceId} instead. */ @Deprecated public long getCompleteSequenceId() { @@ -313,32 +310,29 @@ public long getCompleteSequenceId() { /** * @return completed sequence id per store. - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getStoreSequenceId} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getStoreSequenceId} instead. */ @Deprecated public List getStoreCompleteSequenceId() { return metrics.getStoreSequenceId().entrySet().stream() - .map(s -> ClusterStatusProtos.StoreSequenceId.newBuilder() - .setFamilyName(UnsafeByteOperations.unsafeWrap(s.getKey())) - .setSequenceId(s.getValue()) - .build()) - .collect(Collectors.toList()); + .map(s -> ClusterStatusProtos.StoreSequenceId.newBuilder() + .setFamilyName(UnsafeByteOperations.unsafeWrap(s.getKey())).setSequenceId(s.getValue()) + .build()) + .collect(Collectors.toList()); } /** * @return the uncompressed size of the storefiles in MB. - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getUncompressedStoreFileSize} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getUncompressedStoreFileSize} instead. 
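The deprecated RegionLoad getters in this file all follow the same adapter idiom: the legacy primitive accessor delegates to the richer RegionMetrics value and converts it with an explicit Size unit. A tiny hedged sketch of that shape follows (the class name is hypothetical); note that the hunk immediately below fixes the unit passed by getStoreUncompressedSizeMB from KILOBYTE to MEGABYTE so the returned number matches the "MB" contract.

import org.apache.hadoop.hbase.RegionMetrics;
import org.apache.hadoop.hbase.Size;

public final class LegacyLoadAdapterSketch {
  private final RegionMetrics metrics;

  LegacyLoadAdapterSketch(RegionMetrics metrics) {
    this.metrics = metrics;
  }

  /** Legacy MB view of the uncompressed store file size, truncated to an int. */
  int getStoreUncompressedSizeMB() {
    // Asking Size for MEGABYTE keeps the deprecated getter's documented unit.
    return (int) metrics.getUncompressedStoreFileSize().get(Size.Unit.MEGABYTE);
  }
}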
*/ @Deprecated public int getStoreUncompressedSizeMB() { - return (int) metrics.getUncompressedStoreFileSize().get(Size.Unit.KILOBYTE); + return (int) metrics.getUncompressedStoreFileSize().get(Size.Unit.MEGABYTE); } - /** - * @return the data locality of region in the regionserver. - */ + /** Returns the data locality of region in the regionserver. */ @Override public float getDataLocality() { return metrics.getDataLocality(); @@ -351,17 +345,16 @@ public long getLastMajorCompactionTimestamp() { /** * @return the timestamp of the oldest hfile for any store of this region. - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getLastMajorCompactionTimestamp} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getLastMajorCompactionTimestamp} instead. */ @Deprecated public long getLastMajorCompactionTs() { return metrics.getLastMajorCompactionTimestamp(); } - /** - * @return the reference count for the stores of this region - */ + /** Returns the reference count for the stores of this region */ + @Override public int getStoreRefCount() { return metrics.getStoreRefCount(); } @@ -401,47 +394,33 @@ public CompactionState getCompactionState() { */ @Override public String toString() { - StringBuilder sb = Strings.appendKeyValue(new StringBuilder(), "numberOfStores", - this.getStores()); + StringBuilder sb = + Strings.appendKeyValue(new StringBuilder(), "numberOfStores", this.getStores()); Strings.appendKeyValue(sb, "numberOfStorefiles", this.getStorefiles()); Strings.appendKeyValue(sb, "storeRefCount", this.getStoreRefCount()); - Strings.appendKeyValue(sb, "storefileUncompressedSizeMB", - this.getStoreUncompressedSizeMB()); - Strings.appendKeyValue(sb, "lastMajorCompactionTimestamp", - this.getLastMajorCompactionTs()); + Strings.appendKeyValue(sb, "storefileUncompressedSizeMB", this.getStoreUncompressedSizeMB()); + Strings.appendKeyValue(sb, "lastMajorCompactionTimestamp", this.getLastMajorCompactionTs()); Strings.appendKeyValue(sb, "storefileSizeMB", this.getStorefileSizeMB()); if (this.getStoreUncompressedSizeMB() != 0) { - Strings.appendKeyValue(sb, "compressionRatio", - String.format("%.4f", (float) this.getStorefileSizeMB() / - (float) this.getStoreUncompressedSizeMB())); + Strings.appendKeyValue(sb, "compressionRatio", String.format("%.4f", + (float) this.getStorefileSizeMB() / (float) this.getStoreUncompressedSizeMB())); } - Strings.appendKeyValue(sb, "memstoreSizeMB", - this.getMemStoreSizeMB()); - Strings.appendKeyValue(sb, "readRequestsCount", - this.getReadRequestsCount()); - Strings.appendKeyValue(sb, "writeRequestsCount", - this.getWriteRequestsCount()); - Strings.appendKeyValue(sb, "rootIndexSizeKB", - this.getRootIndexSizeKB()); - Strings.appendKeyValue(sb, "totalStaticIndexSizeKB", - this.getTotalStaticIndexSizeKB()); - Strings.appendKeyValue(sb, "totalStaticBloomSizeKB", - this.getTotalStaticBloomSizeKB()); - Strings.appendKeyValue(sb, "totalCompactingKVs", - this.getTotalCompactingKVs()); - Strings.appendKeyValue(sb, "currentCompactedKVs", - this.getCurrentCompactedKVs()); + Strings.appendKeyValue(sb, "memstoreSizeMB", this.getMemStoreSizeMB()); + Strings.appendKeyValue(sb, "readRequestsCount", this.getReadRequestsCount()); + Strings.appendKeyValue(sb, "writeRequestsCount", this.getWriteRequestsCount()); + Strings.appendKeyValue(sb, "rootIndexSizeKB", this.getRootIndexSizeKB()); + Strings.appendKeyValue(sb, "totalStaticIndexSizeKB", this.getTotalStaticIndexSizeKB()); + Strings.appendKeyValue(sb, 
"totalStaticBloomSizeKB", this.getTotalStaticBloomSizeKB()); + Strings.appendKeyValue(sb, "totalCompactingKVs", this.getTotalCompactingKVs()); + Strings.appendKeyValue(sb, "currentCompactedKVs", this.getCurrentCompactedKVs()); float compactionProgressPct = Float.NaN; if (this.getTotalCompactingKVs() > 0) { - compactionProgressPct = ((float) this.getCurrentCompactedKVs() / - (float) this.getTotalCompactingKVs()); + compactionProgressPct = + ((float) this.getCurrentCompactedKVs() / (float) this.getTotalCompactingKVs()); } - Strings.appendKeyValue(sb, "compactionProgressPct", - compactionProgressPct); - Strings.appendKeyValue(sb, "completeSequenceId", - this.getCompleteSequenceId()); - Strings.appendKeyValue(sb, "dataLocality", - this.getDataLocality()); + Strings.appendKeyValue(sb, "compactionProgressPct", compactionProgressPct); + Strings.appendKeyValue(sb, "completeSequenceId", this.getCompleteSequenceId()); + Strings.appendKeyValue(sb, "dataLocality", this.getDataLocality()); return sb.toString(); } } diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/RegionLocations.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/RegionLocations.java index 0d3a464e0f86..4c0390c6c3be 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/RegionLocations.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/RegionLocations.java @@ -1,4 +1,4 @@ -/** +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,23 +15,20 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase; import java.util.Arrays; import java.util.Collection; import java.util.Iterator; - import org.apache.hadoop.hbase.client.RegionInfo; import org.apache.hadoop.hbase.client.RegionReplicaUtil; import org.apache.hadoop.hbase.util.Bytes; import org.apache.yetus.audience.InterfaceAudience; /** - * Container for holding a list of {@link HRegionLocation}'s that correspond to the - * same range. The list is indexed by the replicaId. This is an immutable list, - * however mutation operations are provided which returns a new List via copy-on-write - * (assuming small number of locations) + * Container for holding a list of {@link HRegionLocation}'s that correspond to the same range. The + * list is indexed by the replicaId. This is an immutable list, however mutation operations are + * provided which returns a new List via copy-on-write (assuming small number of locations) */ @InterfaceAudience.Private public class RegionLocations implements Iterable { @@ -45,10 +42,9 @@ public class RegionLocations implements Iterable { private final HRegionLocation[] locations; // replicaId -> HRegionLocation. /** - * Constructs the region location list. The locations array should - * contain all the locations for known replicas for the region, and should be - * sorted in replicaId ascending order, although it can contain nulls indicating replicaIds - * that the locations of which are not known. + * Constructs the region location list. The locations array should contain all the locations for + * known replicas for the region, and should be sorted in replicaId ascending order, although it + * can contain nulls indicating replicaIds that the locations of which are not known. * @param locations an array of HRegionLocations for the same region range */ public RegionLocations(HRegionLocation... 
locations) { @@ -66,7 +62,7 @@ public RegionLocations(HRegionLocation... locations) { index++; } // account for the null elements in the array after maxReplicaIdIndex - maxReplicaId = maxReplicaId + (locations.length - (maxReplicaIdIndex + 1) ); + maxReplicaId = maxReplicaId + (locations.length - (maxReplicaIdIndex + 1)); if (maxReplicaId + 1 == locations.length) { this.locations = locations; @@ -79,7 +75,7 @@ public RegionLocations(HRegionLocation... locations) { } } for (HRegionLocation loc : this.locations) { - if (loc != null && loc.getServerName() != null){ + if (loc != null && loc.getServerName() != null) { numNonNullElements++; } } @@ -91,8 +87,7 @@ public RegionLocations(Collection locations) { } /** - * Returns the size of the list even if some of the elements - * might be null. + * Returns the size of the list even if some of the elements might be null. * @return the size of the list (corresponding to the max replicaId) */ public int size() { @@ -116,18 +111,18 @@ public boolean isEmpty() { } /** - * Returns a new RegionLocations with the locations removed (set to null) - * which have the destination server as given. + * Returns a new RegionLocations with the locations removed (set to null) which have the + * destination server as given. * @param serverName the serverName to remove locations of - * @return an RegionLocations object with removed locations or the same object - * if nothing is removed + * @return an RegionLocations object with removed locations or the same object if nothing is + * removed */ public RegionLocations removeByServer(ServerName serverName) { HRegionLocation[] newLocations = null; for (int i = 0; i < locations.length; i++) { // check whether something to remove if (locations[i] != null && serverName.equals(locations[i].getServerName())) { - if (newLocations == null) { //first time + if (newLocations == null) { // first time newLocations = new HRegionLocation[locations.length]; System.arraycopy(locations, 0, newLocations, 0, i); } @@ -142,8 +137,8 @@ public RegionLocations removeByServer(ServerName serverName) { /** * Removes the given location from the list * @param location the location to remove - * @return an RegionLocations object with removed locations or the same object - * if nothing is removed + * @return an RegionLocations object with removed locations or the same object if nothing is + * removed */ public RegionLocations remove(HRegionLocation location) { if (location == null) return this; @@ -153,9 +148,12 @@ public RegionLocations remove(HRegionLocation location) { // check whether something to remove. HRL.compareTo() compares ONLY the // serverName. We want to compare the HRI's as well. 
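The RegionLocations methods being reformatted here (removeByServer, remove, mergeLocations) all share one copy-on-write shape: the backing array is never mutated, a copy is allocated lazily the first time an entry actually has to change, and the original object is returned untouched when nothing matched. A minimal, self-contained sketch of that shape using plain arrays (no HBase types, illustrative names) is:

public final class CopyOnWriteRemoveSketch {
  static String[] removeMatching(String[] locations, String server) {
    String[] copy = null;
    for (int i = 0; i < locations.length; i++) {
      if (locations[i] != null && locations[i].equals(server)) {
        if (copy == null) { // first hit: lazily copy the prefix already scanned
          copy = new String[locations.length];
          System.arraycopy(locations, 0, copy, 0, i);
        }
        // leave copy[i] == null, i.e. the matching entry is "removed"
      } else if (copy != null) {
        copy[i] = locations[i];
      }
    }
    // Returning the same instance signals to callers that nothing was removed.
    return copy == null ? locations : copy;
  }
}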
- if (locations[replicaId] == null - || RegionInfo.COMPARATOR.compare(location.getRegion(), locations[replicaId].getRegion()) != 0 - || !location.equals(locations[replicaId])) { + if ( + locations[replicaId] == null + || RegionInfo.COMPARATOR.compare(location.getRegion(), locations[replicaId].getRegion()) + != 0 + || !location.equals(locations[replicaId]) + ) { return this; } @@ -169,8 +167,8 @@ public RegionLocations remove(HRegionLocation location) { /** * Removes location of the given replicaId from the list * @param replicaId the replicaId of the location to remove - * @return an RegionLocations object with removed locations or the same object - * if nothing is removed + * @return an RegionLocations object with removed locations or the same object if nothing is + * removed */ public RegionLocations remove(int replicaId) { if (getRegionLocation(replicaId) == null) { @@ -204,14 +202,13 @@ public RegionLocations removeElementsWithNullLocation() { } /** - * Merges this RegionLocations list with the given list assuming - * same range, and keeping the most up to date version of the - * HRegionLocation entries from either list according to seqNum. If seqNums - * are equal, the location from the argument (other) is taken. + * Merges this RegionLocations list with the given list assuming same range, and keeping the most + * up to date version of the HRegionLocation entries from either list according to seqNum. If + * seqNums are equal, the location from the argument (other) is taken. * @param other the locations to merge with - * @return an RegionLocations object with merged locations or the same object - * if nothing is merged + * @return an RegionLocations object with merged locations or the same object if nothing is merged */ + @SuppressWarnings("ReferenceEquality") public RegionLocations mergeLocations(RegionLocations other) { assert other != null; @@ -231,8 +228,7 @@ public RegionLocations mergeLocations(RegionLocations other) { regionInfo = otherLoc.getRegion(); } - HRegionLocation selectedLoc = selectRegionLocation(thisLoc, - otherLoc, true, false); + HRegionLocation selectedLoc = selectRegionLocation(thisLoc, otherLoc, true, false); if (selectedLoc != thisLoc) { if (newLocations == null) { @@ -247,10 +243,9 @@ public RegionLocations mergeLocations(RegionLocations other) { // ensure that all replicas share the same start code. Otherwise delete them if (newLocations != null && regionInfo != null) { - for (int i=0; i < newLocations.length; i++) { + for (int i = 0; i < newLocations.length; i++) { if (newLocations[i] != null) { - if (!RegionReplicaUtil.isReplicasForSameRegion(regionInfo, - newLocations[i].getRegion())) { + if (!RegionReplicaUtil.isReplicasForSameRegion(regionInfo, newLocations[i].getRegion())) { newLocations[i] = null; } } @@ -261,7 +256,7 @@ public RegionLocations mergeLocations(RegionLocations other) { } private HRegionLocation selectRegionLocation(HRegionLocation oldLocation, - HRegionLocation location, boolean checkForEquals, boolean force) { + HRegionLocation location, boolean checkForEquals, boolean force) { if (location == null) { return oldLocation == null ? 
null : oldLocation; } @@ -270,44 +265,45 @@ private HRegionLocation selectRegionLocation(HRegionLocation oldLocation, return location; } - if (force - || isGreaterThan(location.getSeqNum(), oldLocation.getSeqNum(), checkForEquals)) { + if (force || isGreaterThan(location.getSeqNum(), oldLocation.getSeqNum(), checkForEquals)) { return location; } return oldLocation; } /** - * Updates the location with new only if the new location has a higher - * seqNum than the old one or force is true. - * @param location the location to add or update - * @param checkForEquals whether to update the location if seqNums for the - * HRegionLocations for the old and new location are the same - * @param force whether to force update - * @return an RegionLocations object with updated locations or the same object - * if nothing is updated + * Updates the location with new only if the new location has a higher seqNum than the old one or + * force is true. + * @param location the location to add or update + * @param checkForEquals whether to update the location if seqNums for the HRegionLocations for + * the old and new location are the same + * @param force whether to force update + * @return an RegionLocations object with updated locations or the same object if nothing is + * updated */ - public RegionLocations updateLocation(HRegionLocation location, - boolean checkForEquals, boolean force) { + @SuppressWarnings("ReferenceEquality") + public RegionLocations updateLocation(HRegionLocation location, boolean checkForEquals, + boolean force) { assert location != null; int replicaId = location.getRegion().getReplicaId(); HRegionLocation oldLoc = getRegionLocation(location.getRegion().getReplicaId()); - HRegionLocation selectedLoc = selectRegionLocation(oldLoc, location, - checkForEquals, force); + HRegionLocation selectedLoc = selectRegionLocation(oldLoc, location, checkForEquals, force); if (selectedLoc == oldLoc) { return this; } - HRegionLocation[] newLocations = new HRegionLocation[Math.max(locations.length, replicaId +1)]; + HRegionLocation[] newLocations = new HRegionLocation[Math.max(locations.length, replicaId + 1)]; System.arraycopy(locations, 0, newLocations, 0, locations.length); newLocations[replicaId] = location; // ensure that all replicas share the same start code. 
Otherwise delete them - for (int i=0; i < newLocations.length; i++) { + for (int i = 0; i < newLocations.length; i++) { if (newLocations[i] != null) { - if (!RegionReplicaUtil.isReplicasForSameRegion(location.getRegion(), - newLocations[i].getRegion())) { + if ( + !RegionReplicaUtil.isReplicasForSameRegion(location.getRegion(), + newLocations[i].getRegion()) + ) { newLocations[i] = null; } } @@ -327,16 +323,18 @@ public HRegionLocation getRegionLocation(int replicaId) { } /** - * Returns the region location from the list for matching regionName, which can - * be regionName or encodedRegionName + * Returns the region location from the list for matching regionName, which can be regionName or + * encodedRegionName * @param regionName regionName or encodedRegionName * @return HRegionLocation found or null */ public HRegionLocation getRegionLocationByRegionName(byte[] regionName) { for (HRegionLocation loc : locations) { if (loc != null) { - if (Bytes.equals(loc.getRegion().getRegionName(), regionName) - || Bytes.equals(loc.getRegion().getEncodedNameAsBytes(), regionName)) { + if ( + Bytes.equals(loc.getRegion().getRegionName(), regionName) + || Bytes.equals(loc.getRegion().getEncodedNameAsBytes(), regionName) + ) { return loc; } } diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/RegionMetrics.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/RegionMetrics.java index 1a8e6c8c6556..d915e7a32cac 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/RegionMetrics.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/RegionMetrics.java @@ -1,6 +1,4 @@ -/** - * Copyright The Apache Software Foundation - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -17,7 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase; import java.util.Map; @@ -26,96 +23,66 @@ import org.apache.yetus.audience.InterfaceAudience; /** - * Encapsulates per-region load metrics. - */ + * Encapsulates per-region load metrics. 
+ */ @InterfaceAudience.Public public interface RegionMetrics { - /** - * @return the region name - */ + /** Returns the region name */ byte[] getRegionName(); - /** - * @return the number of stores - */ + /** Returns the number of stores */ int getStoreCount(); - /** - * @return the number of storefiles - */ + /** Returns the number of storefiles */ int getStoreFileCount(); - /** - * @return the total size of the storefiles - */ + /** Returns the total size of the storefiles */ Size getStoreFileSize(); - /** - * @return the memstore size - */ + /** Returns the memstore size */ Size getMemStoreSize(); - /** - * @return the number of read requests made to region - */ + /** Returns the number of read requests made to region */ long getReadRequestCount(); - /** - * @return the number of write requests made to region - */ + /** Returns the number of write requests made to region */ long getWriteRequestCount(); - /** - * @return the number of write requests and read requests made to region - */ + /** Returns the number of write requests and read requests made to region */ default long getRequestCount() { return getReadRequestCount() + getWriteRequestCount(); } - /** - * @return the region name as a string - */ + /** Returns the region name as a string */ default String getNameAsString() { return Bytes.toStringBinary(getRegionName()); } - /** - * @return the number of filtered read requests made to region - */ + /** Returns the number of filtered read requests made to region */ long getFilteredReadRequestCount(); /** * TODO: why we pass the same value to different counters? Currently, the value from - * getStoreFileIndexSize() is same with getStoreFileRootLevelIndexSize() - * see HRegionServer#createRegionLoad. + * getStoreFileIndexSize() is same with getStoreFileRootLevelIndexSize() see + * HRegionServer#createRegionLoad. * @return The current total size of root-level indexes for the region */ Size getStoreFileIndexSize(); - /** - * @return The current total size of root-level indexes for the region - */ + /** Returns The current total size of root-level indexes for the region */ Size getStoreFileRootLevelIndexSize(); - /** - * @return The total size of all index blocks, not just the root level - */ + /** Returns The total size of all index blocks, not just the root level */ Size getStoreFileUncompressedDataIndexSize(); - /** - * @return The total size of all Bloom filter blocks, not just loaded into the block cache - */ + /** Returns The total size of all Bloom filter blocks, not just loaded into the block cache */ Size getBloomFilterSize(); - /** - * @return the total number of cells in current compaction - */ + /** Returns the total number of cells in current compaction */ long getCompactingCellCount(); - /** - * @return the number of already compacted kvs in current compaction - */ + /** Returns the number of already compacted kvs in current compaction */ long getCompactedCellCount(); /** @@ -124,35 +91,24 @@ default String getNameAsString() { */ long getCompletedSequenceId(); - /** - * @return completed sequence id per store. - */ + /** Returns completed sequence id per store. */ Map getStoreSequenceId(); - - /** - * @return the uncompressed size of the storefiles - */ + /** Returns the uncompressed size of the storefiles */ Size getUncompressedStoreFileSize(); - /** - * @return the data locality of region in the regionserver. - */ + /** Returns the data locality of region in the regionserver. 
*/ float getDataLocality(); - /** - * @return the timestamp of the oldest hfile for any store of this region. - */ + /** Returns the timestamp of the oldest hfile for any store of this region. */ long getLastMajorCompactionTimestamp(); - /** - * @return the reference count for the stores of this region - */ + /** Returns the reference count for the stores of this region */ int getStoreRefCount(); /** - * @return the max reference count for any store file among all compacted stores files - * of this region + * Returns the max reference count for any store file among all compacted stores files of this + * region */ int getMaxCompactedStoreFileRefCount(); @@ -162,9 +118,7 @@ default String getNameAsString() { */ float getDataLocalityForSsd(); - /** - * @return the data at local weight of this region in the regionserver - */ + /** Returns the data at local weight of this region in the regionserver */ long getBlocksLocalWeight(); /** @@ -173,13 +127,9 @@ default String getNameAsString() { */ long getBlocksLocalWithSsdWeight(); - /** - * @return the block total weight of this region - */ + /** Returns the block total weight of this region */ long getBlocksTotalWeight(); - /** - * @return the compaction state of this region - */ + /** Returns the compaction state of this region */ CompactionState getCompactionState(); } diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/RegionMetricsBuilder.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/RegionMetricsBuilder.java index cca6686f5861..15a9c48bfbe4 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/RegionMetricsBuilder.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/RegionMetricsBuilder.java @@ -1,6 +1,4 @@ -/** - * Copyright The Apache Software Foundation - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -17,7 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase; import java.util.Collections; @@ -39,96 +36,89 @@ @InterfaceAudience.Private public final class RegionMetricsBuilder { - public static List toRegionMetrics( - AdminProtos.GetRegionLoadResponse regionLoadResponse) { + public static List + toRegionMetrics(AdminProtos.GetRegionLoadResponse regionLoadResponse) { return regionLoadResponse.getRegionLoadsList().stream() - .map(RegionMetricsBuilder::toRegionMetrics).collect(Collectors.toList()); + .map(RegionMetricsBuilder::toRegionMetrics).collect(Collectors.toList()); } public static RegionMetrics toRegionMetrics(ClusterStatusProtos.RegionLoad regionLoadPB) { return RegionMetricsBuilder - .newBuilder(regionLoadPB.getRegionSpecifier().getValue().toByteArray()) - .setBloomFilterSize(new Size(regionLoadPB.getTotalStaticBloomSizeKB(), Size.Unit.KILOBYTE)) - .setCompactedCellCount(regionLoadPB.getCurrentCompactedKVs()) - .setCompactingCellCount(regionLoadPB.getTotalCompactingKVs()) - .setCompletedSequenceId(regionLoadPB.getCompleteSequenceId()) - .setDataLocality(regionLoadPB.hasDataLocality() ? regionLoadPB.getDataLocality() : 0.0f) - .setDataLocalityForSsd(regionLoadPB.hasDataLocalityForSsd() ? - regionLoadPB.getDataLocalityForSsd() : 0.0f) - .setBlocksLocalWeight(regionLoadPB.hasBlocksLocalWeight() ? - regionLoadPB.getBlocksLocalWeight() : 0) - .setBlocksLocalWithSsdWeight(regionLoadPB.hasBlocksLocalWithSsdWeight() ? 
- regionLoadPB.getBlocksLocalWithSsdWeight() : 0) - .setBlocksTotalWeight(regionLoadPB.getBlocksTotalWeight()) - .setCompactionState(ProtobufUtil.createCompactionStateForRegionLoad( - regionLoadPB.getCompactionState())) - .setFilteredReadRequestCount(regionLoadPB.getFilteredReadRequestsCount()) - .setStoreFileUncompressedDataIndexSize(new Size(regionLoadPB.getTotalStaticIndexSizeKB(), - Size.Unit.KILOBYTE)) - .setLastMajorCompactionTimestamp(regionLoadPB.getLastMajorCompactionTs()) - .setMemStoreSize(new Size(regionLoadPB.getMemStoreSizeMB(), Size.Unit.MEGABYTE)) - .setReadRequestCount(regionLoadPB.getReadRequestsCount()) - .setWriteRequestCount(regionLoadPB.getWriteRequestsCount()) - .setStoreFileIndexSize(new Size(regionLoadPB.getStorefileIndexSizeKB(), - Size.Unit.KILOBYTE)) - .setStoreFileRootLevelIndexSize(new Size(regionLoadPB.getRootIndexSizeKB(), - Size.Unit.KILOBYTE)) - .setStoreCount(regionLoadPB.getStores()) - .setStoreFileCount(regionLoadPB.getStorefiles()) - .setStoreRefCount(regionLoadPB.getStoreRefCount()) - .setMaxCompactedStoreFileRefCount(regionLoadPB.getMaxCompactedStoreFileRefCount()) - .setStoreFileSize(new Size(regionLoadPB.getStorefileSizeMB(), Size.Unit.MEGABYTE)) - .setStoreSequenceIds(regionLoadPB.getStoreCompleteSequenceIdList().stream() - .collect(Collectors.toMap( - (ClusterStatusProtos.StoreSequenceId s) -> s.getFamilyName().toByteArray(), - ClusterStatusProtos.StoreSequenceId::getSequenceId))) - .setUncompressedStoreFileSize( - new Size(regionLoadPB.getStoreUncompressedSizeMB(),Size.Unit.MEGABYTE)) - .build(); - } - - private static List toStoreSequenceId( - Map ids) { + .newBuilder(regionLoadPB.getRegionSpecifier().getValue().toByteArray()) + .setBloomFilterSize(new Size(regionLoadPB.getTotalStaticBloomSizeKB(), Size.Unit.KILOBYTE)) + .setCompactedCellCount(regionLoadPB.getCurrentCompactedKVs()) + .setCompactingCellCount(regionLoadPB.getTotalCompactingKVs()) + .setCompletedSequenceId(regionLoadPB.getCompleteSequenceId()) + .setDataLocality(regionLoadPB.hasDataLocality() ? regionLoadPB.getDataLocality() : 0.0f) + .setDataLocalityForSsd( + regionLoadPB.hasDataLocalityForSsd() ? regionLoadPB.getDataLocalityForSsd() : 0.0f) + .setBlocksLocalWeight( + regionLoadPB.hasBlocksLocalWeight() ? regionLoadPB.getBlocksLocalWeight() : 0) + .setBlocksLocalWithSsdWeight( + regionLoadPB.hasBlocksLocalWithSsdWeight() ? 
regionLoadPB.getBlocksLocalWithSsdWeight() : 0) + .setBlocksTotalWeight(regionLoadPB.getBlocksTotalWeight()) + .setCompactionState( + ProtobufUtil.createCompactionStateForRegionLoad(regionLoadPB.getCompactionState())) + .setFilteredReadRequestCount(regionLoadPB.getFilteredReadRequestsCount()) + .setStoreFileUncompressedDataIndexSize( + new Size(regionLoadPB.getTotalStaticIndexSizeKB(), Size.Unit.KILOBYTE)) + .setLastMajorCompactionTimestamp(regionLoadPB.getLastMajorCompactionTs()) + .setMemStoreSize(new Size(regionLoadPB.getMemStoreSizeMB(), Size.Unit.MEGABYTE)) + .setReadRequestCount(regionLoadPB.getReadRequestsCount()) + .setWriteRequestCount(regionLoadPB.getWriteRequestsCount()) + .setStoreFileIndexSize(new Size(regionLoadPB.getStorefileIndexSizeKB(), Size.Unit.KILOBYTE)) + .setStoreFileRootLevelIndexSize( + new Size(regionLoadPB.getRootIndexSizeKB(), Size.Unit.KILOBYTE)) + .setStoreCount(regionLoadPB.getStores()).setStoreFileCount(regionLoadPB.getStorefiles()) + .setStoreRefCount(regionLoadPB.getStoreRefCount()) + .setMaxCompactedStoreFileRefCount(regionLoadPB.getMaxCompactedStoreFileRefCount()) + .setStoreFileSize(new Size(regionLoadPB.getStorefileSizeMB(), Size.Unit.MEGABYTE)) + .setStoreSequenceIds(regionLoadPB.getStoreCompleteSequenceIdList().stream() + .collect(Collectors.toMap( + (ClusterStatusProtos.StoreSequenceId s) -> s.getFamilyName().toByteArray(), + ClusterStatusProtos.StoreSequenceId::getSequenceId))) + .setUncompressedStoreFileSize( + new Size(regionLoadPB.getStoreUncompressedSizeMB(), Size.Unit.MEGABYTE)) + .build(); + } + + private static List + toStoreSequenceId(Map ids) { return ids.entrySet().stream() - .map(e -> ClusterStatusProtos.StoreSequenceId.newBuilder() - .setFamilyName(UnsafeByteOperations.unsafeWrap(e.getKey())) - .setSequenceId(e.getValue()) - .build()) - .collect(Collectors.toList()); + .map(e -> ClusterStatusProtos.StoreSequenceId.newBuilder() + .setFamilyName(UnsafeByteOperations.unsafeWrap(e.getKey())).setSequenceId(e.getValue()) + .build()) + .collect(Collectors.toList()); } public static ClusterStatusProtos.RegionLoad toRegionLoad(RegionMetrics regionMetrics) { return ClusterStatusProtos.RegionLoad.newBuilder() - .setRegionSpecifier(HBaseProtos.RegionSpecifier - .newBuilder().setType(HBaseProtos.RegionSpecifier.RegionSpecifierType.REGION_NAME) - .setValue(UnsafeByteOperations.unsafeWrap(regionMetrics.getRegionName())) - .build()) - .setTotalStaticBloomSizeKB((int) regionMetrics.getBloomFilterSize() - .get(Size.Unit.KILOBYTE)) - .setCurrentCompactedKVs(regionMetrics.getCompactedCellCount()) - .setTotalCompactingKVs(regionMetrics.getCompactingCellCount()) - .setCompleteSequenceId(regionMetrics.getCompletedSequenceId()) - .setDataLocality(regionMetrics.getDataLocality()) - .setFilteredReadRequestsCount(regionMetrics.getFilteredReadRequestCount()) - .setTotalStaticIndexSizeKB((int) regionMetrics.getStoreFileUncompressedDataIndexSize() - .get(Size.Unit.KILOBYTE)) - .setLastMajorCompactionTs(regionMetrics.getLastMajorCompactionTimestamp()) - .setMemStoreSizeMB((int) regionMetrics.getMemStoreSize().get(Size.Unit.MEGABYTE)) - .setReadRequestsCount(regionMetrics.getReadRequestCount()) - .setWriteRequestsCount(regionMetrics.getWriteRequestCount()) - .setStorefileIndexSizeKB((long) regionMetrics.getStoreFileIndexSize() - .get(Size.Unit.KILOBYTE)) - .setRootIndexSizeKB((int) regionMetrics.getStoreFileRootLevelIndexSize() - .get(Size.Unit.KILOBYTE)) - .setStores(regionMetrics.getStoreCount()) - .setStorefiles(regionMetrics.getStoreFileCount()) - 
.setStoreRefCount(regionMetrics.getStoreRefCount()) - .setMaxCompactedStoreFileRefCount(regionMetrics.getMaxCompactedStoreFileRefCount()) - .setStorefileSizeMB((int) regionMetrics.getStoreFileSize().get(Size.Unit.MEGABYTE)) - .addAllStoreCompleteSequenceId(toStoreSequenceId(regionMetrics.getStoreSequenceId())) - .setStoreUncompressedSizeMB( - (int) regionMetrics.getUncompressedStoreFileSize().get(Size.Unit.MEGABYTE)) - .build(); + .setRegionSpecifier(HBaseProtos.RegionSpecifier.newBuilder() + .setType(HBaseProtos.RegionSpecifier.RegionSpecifierType.REGION_NAME) + .setValue(UnsafeByteOperations.unsafeWrap(regionMetrics.getRegionName())).build()) + .setTotalStaticBloomSizeKB((int) regionMetrics.getBloomFilterSize().get(Size.Unit.KILOBYTE)) + .setCurrentCompactedKVs(regionMetrics.getCompactedCellCount()) + .setTotalCompactingKVs(regionMetrics.getCompactingCellCount()) + .setCompleteSequenceId(regionMetrics.getCompletedSequenceId()) + .setDataLocality(regionMetrics.getDataLocality()) + .setFilteredReadRequestsCount(regionMetrics.getFilteredReadRequestCount()) + .setTotalStaticIndexSizeKB( + (int) regionMetrics.getStoreFileUncompressedDataIndexSize().get(Size.Unit.KILOBYTE)) + .setLastMajorCompactionTs(regionMetrics.getLastMajorCompactionTimestamp()) + .setMemStoreSizeMB((int) regionMetrics.getMemStoreSize().get(Size.Unit.MEGABYTE)) + .setReadRequestsCount(regionMetrics.getReadRequestCount()) + .setWriteRequestsCount(regionMetrics.getWriteRequestCount()) + .setStorefileIndexSizeKB((long) regionMetrics.getStoreFileIndexSize().get(Size.Unit.KILOBYTE)) + .setRootIndexSizeKB( + (int) regionMetrics.getStoreFileRootLevelIndexSize().get(Size.Unit.KILOBYTE)) + .setStores(regionMetrics.getStoreCount()).setStorefiles(regionMetrics.getStoreFileCount()) + .setStoreRefCount(regionMetrics.getStoreRefCount()) + .setMaxCompactedStoreFileRefCount(regionMetrics.getMaxCompactedStoreFileRefCount()) + .setStorefileSizeMB((int) regionMetrics.getStoreFileSize().get(Size.Unit.MEGABYTE)) + .addAllStoreCompleteSequenceId(toStoreSequenceId(regionMetrics.getStoreSequenceId())) + .setStoreUncompressedSizeMB( + (int) regionMetrics.getUncompressedStoreFileSize().get(Size.Unit.MEGABYTE)) + .build(); } public static RegionMetricsBuilder newBuilder(byte[] name) { @@ -161,6 +151,7 @@ public static RegionMetricsBuilder newBuilder(byte[] name) { private long blocksLocalWithSsdWeight; private long blocksTotalWeight; private CompactionState compactionState; + private RegionMetricsBuilder(byte[] name) { this.name = name; } @@ -169,130 +160,135 @@ public RegionMetricsBuilder setStoreCount(int value) { this.storeCount = value; return this; } + public RegionMetricsBuilder setStoreFileCount(int value) { this.storeFileCount = value; return this; } + public RegionMetricsBuilder setStoreRefCount(int value) { this.storeRefCount = value; return this; } + public RegionMetricsBuilder setMaxCompactedStoreFileRefCount(int value) { this.maxCompactedStoreFileRefCount = value; return this; } + public RegionMetricsBuilder setCompactingCellCount(long value) { this.compactingCellCount = value; return this; } + public RegionMetricsBuilder setCompactedCellCount(long value) { this.compactedCellCount = value; return this; } + public RegionMetricsBuilder setStoreFileSize(Size value) { this.storeFileSize = value; return this; } + public RegionMetricsBuilder setMemStoreSize(Size value) { this.memStoreSize = value; return this; } + public RegionMetricsBuilder setStoreFileIndexSize(Size value) { this.indexSize = value; return this; } + public 
RegionMetricsBuilder setStoreFileRootLevelIndexSize(Size value) { this.rootLevelIndexSize = value; return this; } + public RegionMetricsBuilder setStoreFileUncompressedDataIndexSize(Size value) { this.uncompressedDataIndexSize = value; return this; } + public RegionMetricsBuilder setBloomFilterSize(Size value) { this.bloomFilterSize = value; return this; } + public RegionMetricsBuilder setUncompressedStoreFileSize(Size value) { this.uncompressedStoreFileSize = value; return this; } + public RegionMetricsBuilder setWriteRequestCount(long value) { this.writeRequestCount = value; return this; } + public RegionMetricsBuilder setReadRequestCount(long value) { this.readRequestCount = value; return this; } + public RegionMetricsBuilder setFilteredReadRequestCount(long value) { this.filteredReadRequestCount = value; return this; } + public RegionMetricsBuilder setCompletedSequenceId(long value) { this.completedSequenceId = value; return this; } + public RegionMetricsBuilder setStoreSequenceIds(Map value) { this.storeSequenceIds = value; return this; } + public RegionMetricsBuilder setDataLocality(float value) { this.dataLocality = value; return this; } + public RegionMetricsBuilder setLastMajorCompactionTimestamp(long value) { this.lastMajorCompactionTimestamp = value; return this; } + public RegionMetricsBuilder setDataLocalityForSsd(float value) { this.dataLocalityForSsd = value; return this; } + public RegionMetricsBuilder setBlocksLocalWeight(long value) { this.blocksLocalWeight = value; return this; } + public RegionMetricsBuilder setBlocksLocalWithSsdWeight(long value) { this.blocksLocalWithSsdWeight = value; return this; } + public RegionMetricsBuilder setBlocksTotalWeight(long value) { this.blocksTotalWeight = value; return this; } + public RegionMetricsBuilder setCompactionState(CompactionState compactionState) { this.compactionState = compactionState; return this; } public RegionMetrics build() { - return new RegionMetricsImpl(name, - storeCount, - storeFileCount, - storeRefCount, - maxCompactedStoreFileRefCount, - compactingCellCount, - compactedCellCount, - storeFileSize, - memStoreSize, - indexSize, - rootLevelIndexSize, - uncompressedDataIndexSize, - bloomFilterSize, - uncompressedStoreFileSize, - writeRequestCount, - readRequestCount, - filteredReadRequestCount, - completedSequenceId, - storeSequenceIds, - dataLocality, - lastMajorCompactionTimestamp, - dataLocalityForSsd, - blocksLocalWeight, - blocksLocalWithSsdWeight, - blocksTotalWeight, - compactionState); + return new RegionMetricsImpl(name, storeCount, storeFileCount, storeRefCount, + maxCompactedStoreFileRefCount, compactingCellCount, compactedCellCount, storeFileSize, + memStoreSize, indexSize, rootLevelIndexSize, uncompressedDataIndexSize, bloomFilterSize, + uncompressedStoreFileSize, writeRequestCount, readRequestCount, filteredReadRequestCount, + completedSequenceId, storeSequenceIds, dataLocality, lastMajorCompactionTimestamp, + dataLocalityForSsd, blocksLocalWeight, blocksLocalWithSsdWeight, blocksTotalWeight, + compactionState); } private static class RegionMetricsImpl implements RegionMetrics { @@ -322,32 +318,15 @@ private static class RegionMetricsImpl implements RegionMetrics { private final long blocksLocalWithSsdWeight; private final long blocksTotalWeight; private final CompactionState compactionState; - RegionMetricsImpl(byte[] name, - int storeCount, - int storeFileCount, - int storeRefCount, - int maxCompactedStoreFileRefCount, - final long compactingCellCount, - long compactedCellCount, - Size 
storeFileSize, - Size memStoreSize, - Size indexSize, - Size rootLevelIndexSize, - Size uncompressedDataIndexSize, - Size bloomFilterSize, - Size uncompressedStoreFileSize, - long writeRequestCount, - long readRequestCount, - long filteredReadRequestCount, - long completedSequenceId, - Map storeSequenceIds, - float dataLocality, - long lastMajorCompactionTimestamp, - float dataLocalityForSsd, - long blocksLocalWeight, - long blocksLocalWithSsdWeight, - long blocksTotalWeight, - CompactionState compactionState) { + + RegionMetricsImpl(byte[] name, int storeCount, int storeFileCount, int storeRefCount, + int maxCompactedStoreFileRefCount, final long compactingCellCount, long compactedCellCount, + Size storeFileSize, Size memStoreSize, Size indexSize, Size rootLevelIndexSize, + Size uncompressedDataIndexSize, Size bloomFilterSize, Size uncompressedStoreFileSize, + long writeRequestCount, long readRequestCount, long filteredReadRequestCount, + long completedSequenceId, Map storeSequenceIds, float dataLocality, + long lastMajorCompactionTimestamp, float dataLocalityForSsd, long blocksLocalWeight, + long blocksLocalWithSsdWeight, long blocksTotalWeight, CompactionState compactionState) { this.name = Preconditions.checkNotNull(name); this.storeCount = storeCount; this.storeFileCount = storeFileCount; @@ -508,63 +487,43 @@ public CompactionState getCompactionState() { @Override public String toString() { - StringBuilder sb = Strings.appendKeyValue(new StringBuilder(), "storeCount", - this.getStoreCount()); - Strings.appendKeyValue(sb, "storeFileCount", - this.getStoreFileCount()); - Strings.appendKeyValue(sb, "storeRefCount", - this.getStoreRefCount()); + StringBuilder sb = + Strings.appendKeyValue(new StringBuilder(), "storeCount", this.getStoreCount()); + Strings.appendKeyValue(sb, "storeFileCount", this.getStoreFileCount()); + Strings.appendKeyValue(sb, "storeRefCount", this.getStoreRefCount()); Strings.appendKeyValue(sb, "maxCompactedStoreFileRefCount", this.getMaxCompactedStoreFileRefCount()); - Strings.appendKeyValue(sb, "uncompressedStoreFileSize", - this.getUncompressedStoreFileSize()); + Strings.appendKeyValue(sb, "uncompressedStoreFileSize", this.getUncompressedStoreFileSize()); Strings.appendKeyValue(sb, "lastMajorCompactionTimestamp", - this.getLastMajorCompactionTimestamp()); - Strings.appendKeyValue(sb, "storeFileSize", - this.getStoreFileSize()); + this.getLastMajorCompactionTimestamp()); + Strings.appendKeyValue(sb, "storeFileSize", this.getStoreFileSize()); if (this.getUncompressedStoreFileSize().get() != 0) { Strings.appendKeyValue(sb, "compressionRatio", - String.format("%.4f", - (float) this.getStoreFileSize().get(Size.Unit.MEGABYTE) / - (float) this.getUncompressedStoreFileSize().get(Size.Unit.MEGABYTE))); + String.format("%.4f", (float) this.getStoreFileSize().get(Size.Unit.MEGABYTE) + / (float) this.getUncompressedStoreFileSize().get(Size.Unit.MEGABYTE))); } - Strings.appendKeyValue(sb, "memStoreSize", - this.getMemStoreSize()); - Strings.appendKeyValue(sb, "readRequestCount", - this.getReadRequestCount()); - Strings.appendKeyValue(sb, "writeRequestCount", - this.getWriteRequestCount()); - Strings.appendKeyValue(sb, "rootLevelIndexSize", - this.getStoreFileRootLevelIndexSize()); + Strings.appendKeyValue(sb, "memStoreSize", this.getMemStoreSize()); + Strings.appendKeyValue(sb, "readRequestCount", this.getReadRequestCount()); + Strings.appendKeyValue(sb, "writeRequestCount", this.getWriteRequestCount()); + Strings.appendKeyValue(sb, "rootLevelIndexSize", 
this.getStoreFileRootLevelIndexSize()); Strings.appendKeyValue(sb, "uncompressedDataIndexSize", - this.getStoreFileUncompressedDataIndexSize()); - Strings.appendKeyValue(sb, "bloomFilterSize", - this.getBloomFilterSize()); - Strings.appendKeyValue(sb, "compactingCellCount", - this.getCompactingCellCount()); - Strings.appendKeyValue(sb, "compactedCellCount", - this.getCompactedCellCount()); + this.getStoreFileUncompressedDataIndexSize()); + Strings.appendKeyValue(sb, "bloomFilterSize", this.getBloomFilterSize()); + Strings.appendKeyValue(sb, "compactingCellCount", this.getCompactingCellCount()); + Strings.appendKeyValue(sb, "compactedCellCount", this.getCompactedCellCount()); float compactionProgressPct = Float.NaN; if (this.getCompactingCellCount() > 0) { - compactionProgressPct = ((float) this.getCompactedCellCount() / - (float) this.getCompactingCellCount()); + compactionProgressPct = + ((float) this.getCompactedCellCount() / (float) this.getCompactingCellCount()); } - Strings.appendKeyValue(sb, "compactionProgressPct", - compactionProgressPct); - Strings.appendKeyValue(sb, "completedSequenceId", - this.getCompletedSequenceId()); - Strings.appendKeyValue(sb, "dataLocality", - this.getDataLocality()); - Strings.appendKeyValue(sb, "dataLocalityForSsd", - this.getDataLocalityForSsd()); - Strings.appendKeyValue(sb, "blocksLocalWeight", - blocksLocalWeight); - Strings.appendKeyValue(sb, "blocksLocalWithSsdWeight", - blocksLocalWithSsdWeight); - Strings.appendKeyValue(sb, "blocksTotalWeight", - blocksTotalWeight); - Strings.appendKeyValue(sb, "compactionState", - compactionState); + Strings.appendKeyValue(sb, "compactionProgressPct", compactionProgressPct); + Strings.appendKeyValue(sb, "completedSequenceId", this.getCompletedSequenceId()); + Strings.appendKeyValue(sb, "dataLocality", this.getDataLocality()); + Strings.appendKeyValue(sb, "dataLocalityForSsd", this.getDataLocalityForSsd()); + Strings.appendKeyValue(sb, "blocksLocalWeight", blocksLocalWeight); + Strings.appendKeyValue(sb, "blocksLocalWithSsdWeight", blocksLocalWithSsdWeight); + Strings.appendKeyValue(sb, "blocksTotalWeight", blocksTotalWeight); + Strings.appendKeyValue(sb, "compactionState", compactionState); return sb.toString(); } } diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/RegionTooBusyException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/RegionTooBusyException.java index 3024962ebd67..4cdb4ea2ade6 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/RegionTooBusyException.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/RegionTooBusyException.java @@ -1,4 +1,4 @@ -/** +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -18,15 +18,14 @@ package org.apache.hadoop.hbase; import java.io.IOException; - import org.apache.yetus.audience.InterfaceAudience; /** - * Thrown by a region server if it will block and wait to serve a request. - * For example, the client wants to insert something to a region while the - * region is compacting. Keep variance in the passed 'msg' low because its msg is used as a key - * over in {@link org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException} - * grouping failure types. + * Thrown by a region server if it will block and wait to serve a request. For example, the client + * wants to insert something to a region while the region is compacting. 
Keep variance in the passed + * 'msg' low because its msg is used as a key over in + * {@link org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException} grouping failure + * types. */ @InterfaceAudience.Public public class RegionTooBusyException extends IOException { diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/ReplicationPeerNotFoundException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/ReplicationPeerNotFoundException.java index 6f02df2028f9..4d1deebb4e87 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/ReplicationPeerNotFoundException.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/ReplicationPeerNotFoundException.java @@ -1,4 +1,4 @@ -/** +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -15,7 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase; import org.apache.yetus.audience.InterfaceAudience; diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/RetryImmediatelyException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/RetryImmediatelyException.java index 9df4f893c714..46cc77c61b8a 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/RetryImmediatelyException.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/RetryImmediatelyException.java @@ -1,4 +1,4 @@ -/** +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -7,7 +7,7 @@ * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * - * http://www.apache.org/licenses/LICENSE-2.0 + * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, @@ -15,7 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase; import java.io.IOException; diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerLoad.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerLoad.java index b22d6c4e2446..eec0ac5cdca7 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerLoad.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerLoad.java @@ -1,6 +1,4 @@ -/** - * Copyright The Apache Software Foundation - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -17,7 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase; import java.util.Arrays; @@ -33,13 +30,13 @@ import org.apache.yetus.audience.InterfaceAudience; import org.apache.hbase.thirdparty.com.google.common.base.Objects; + import org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterStatusProtos; /** * This class is used for exporting current state of load on a RegionServer. - * - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link ServerMetrics} instead. 
+ * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use {@link ServerMetrics} + * instead. */ @InterfaceAudience.Public @Deprecated @@ -68,7 +65,7 @@ public ServerLoad(ClusterStatusProtos.ServerLoad serverLoad) { this(ServerName.valueOf("localhost,1,1"), serverLoad); } - @edu.umd.cs.findbugs.annotations.SuppressWarnings(value="URF_UNREAD_PUBLIC_OR_PROTECTED_FIELD") + @edu.umd.cs.findbugs.annotations.SuppressWarnings(value = "URF_UNREAD_PUBLIC_OR_PROTECTED_FIELD") @InterfaceAudience.Private public ServerLoad(ServerName name, ClusterStatusProtos.ServerLoad serverLoad) { this(ServerMetricsBuilder.toServerMetrics(name, serverLoad)); @@ -82,16 +79,17 @@ public ServerLoad(ServerMetrics metrics) { for (RegionMetrics rl : metrics.getRegionMetrics().values()) { stores += rl.getStoreCount(); storefiles += rl.getStoreFileCount(); - storeUncompressedSizeMB += rl.getUncompressedStoreFileSize().get(Size.Unit.MEGABYTE); - storefileSizeMB += rl.getStoreFileSize().get(Size.Unit.MEGABYTE); - memstoreSizeMB += rl.getMemStoreSize().get(Size.Unit.MEGABYTE); + storeUncompressedSizeMB += (int) rl.getUncompressedStoreFileSize().get(Size.Unit.MEGABYTE); + storefileSizeMB += (int) rl.getStoreFileSize().get(Size.Unit.MEGABYTE); + memstoreSizeMB += (int) rl.getMemStoreSize().get(Size.Unit.MEGABYTE); readRequestsCount += rl.getReadRequestCount(); filteredReadRequestsCount += rl.getFilteredReadRequestCount(); writeRequestsCount += rl.getWriteRequestCount(); - storefileIndexSizeKB += rl.getStoreFileIndexSize().get(Size.Unit.KILOBYTE); - rootIndexSizeKB += rl.getStoreFileRootLevelIndexSize().get(Size.Unit.KILOBYTE); - totalStaticIndexSizeKB += rl.getStoreFileUncompressedDataIndexSize().get(Size.Unit.KILOBYTE); - totalStaticBloomSizeKB += rl.getBloomFilterSize().get(Size.Unit.KILOBYTE); + storefileIndexSizeKB += (long) rl.getStoreFileIndexSize().get(Size.Unit.KILOBYTE); + rootIndexSizeKB += (int) rl.getStoreFileRootLevelIndexSize().get(Size.Unit.KILOBYTE); + totalStaticIndexSizeKB += + (int) rl.getStoreFileUncompressedDataIndexSize().get(Size.Unit.KILOBYTE); + totalStaticBloomSizeKB += (int) rl.getBloomFilterSize().get(Size.Unit.KILOBYTE); totalCompactingKVs += rl.getCompactingCellCount(); currentCompactedKVs += rl.getCompactedCellCount(); } @@ -112,9 +110,9 @@ public ClusterStatusProtos.ServerLoad obtainServerLoadPB() { protected ClusterStatusProtos.ServerLoad serverLoad; /** - * @return number of requests since last report. - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. - * Use {@link #getRequestCountPerSecond} instead. + * @return number of requests since last report. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. Use + * {@link #getRequestCountPerSecond} instead. */ @Deprecated public long getNumberOfRequests() { @@ -122,8 +120,7 @@ public long getNumberOfRequests() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * No flag in 2.0 + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 No flag in 2.0 */ @Deprecated public boolean hasNumberOfRequests() { @@ -132,8 +129,8 @@ public boolean hasNumberOfRequests() { /** * @return total Number of requests from the start of the region server. - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. - * Use {@link #getRequestCount} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. Use + * {@link #getRequestCount} instead. 
*/ @Deprecated public long getTotalNumberOfRequests() { @@ -141,8 +138,7 @@ public long getTotalNumberOfRequests() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * No flag in 2.0 + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 No flag in 2.0 */ @Deprecated public boolean hasTotalNumberOfRequests() { @@ -151,8 +147,8 @@ public boolean hasTotalNumberOfRequests() { /** * @return the amount of used heap, in MB. - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. - * Use {@link #getUsedHeapSize} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. Use + * {@link #getUsedHeapSize} instead. */ @Deprecated public int getUsedHeapMB() { @@ -160,8 +156,7 @@ public int getUsedHeapMB() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * No flag in 2.0 + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 No flag in 2.0 */ @Deprecated public boolean hasUsedHeapMB() { @@ -170,8 +165,8 @@ public boolean hasUsedHeapMB() { /** * @return the maximum allowable size of the heap, in MB. - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getMaxHeapSize} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getMaxHeapSize} instead. */ @Deprecated public int getMaxHeapMB() { @@ -179,8 +174,7 @@ public int getMaxHeapMB() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * No flag in 2.0 + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 No flag in 2.0 */ @Deprecated public boolean hasMaxHeapMB() { @@ -188,8 +182,8 @@ public boolean hasMaxHeapMB() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getRegionMetrics} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getRegionMetrics} instead. */ @Deprecated public int getStores() { @@ -197,8 +191,8 @@ public int getStores() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. - * Use {@link #getRegionMetrics} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. Use + * {@link #getRegionMetrics} instead. */ @Deprecated public int getStorefiles() { @@ -206,8 +200,8 @@ public int getStorefiles() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getRegionMetrics} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getRegionMetrics} instead. */ @Deprecated public int getStoreUncompressedSizeMB() { @@ -215,8 +209,8 @@ public int getStoreUncompressedSizeMB() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getRegionMetrics} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getRegionMetrics} instead. */ @Deprecated public int getStorefileSizeInMB() { @@ -224,8 +218,8 @@ public int getStorefileSizeInMB() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getRegionMetrics} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getRegionMetrics} instead. 
*/ @Deprecated public int getStorefileSizeMB() { @@ -233,8 +227,8 @@ public int getStorefileSizeMB() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getRegionMetrics} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getRegionMetrics} instead. */ @Deprecated public int getMemstoreSizeInMB() { @@ -242,8 +236,8 @@ public int getMemstoreSizeInMB() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getRegionMetrics} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getRegionMetrics} instead. */ @Deprecated public int getMemStoreSizeMB() { @@ -251,8 +245,8 @@ public int getMemStoreSizeMB() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getRegionMetrics} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getRegionMetrics} instead. */ @Deprecated public int getStorefileIndexSizeInMB() { @@ -261,8 +255,8 @@ public int getStorefileIndexSizeInMB() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getRegionMetrics} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getRegionMetrics} instead. */ @Deprecated public long getStorefileIndexSizeKB() { @@ -270,8 +264,8 @@ public long getStorefileIndexSizeKB() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getRegionMetrics} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getRegionMetrics} instead. */ @Deprecated public long getReadRequestsCount() { @@ -279,8 +273,8 @@ public long getReadRequestsCount() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getRegionMetrics} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getRegionMetrics} instead. */ @Deprecated public long getFilteredReadRequestsCount() { @@ -288,8 +282,8 @@ public long getFilteredReadRequestsCount() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getRegionMetrics} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getRegionMetrics} instead. */ @Deprecated public long getWriteRequestsCount() { @@ -297,8 +291,8 @@ public long getWriteRequestsCount() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getRegionMetrics} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getRegionMetrics} instead. */ @Deprecated public int getRootIndexSizeKB() { @@ -306,8 +300,8 @@ public int getRootIndexSizeKB() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getRegionMetrics} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getRegionMetrics} instead. */ @Deprecated public int getTotalStaticIndexSizeKB() { @@ -315,8 +309,8 @@ public int getTotalStaticIndexSizeKB() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getRegionMetrics} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getRegionMetrics} instead. 
*/ @Deprecated public int getTotalStaticBloomSizeKB() { @@ -324,8 +318,8 @@ public int getTotalStaticBloomSizeKB() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getRegionMetrics} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getRegionMetrics} instead. */ @Deprecated public long getTotalCompactingKVs() { @@ -333,8 +327,8 @@ public long getTotalCompactingKVs() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getRegionMetrics} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getRegionMetrics} instead. */ @Deprecated public long getCurrentCompactedKVs() { @@ -342,8 +336,8 @@ public long getCurrentCompactedKVs() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getRegionMetrics} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getRegionMetrics} instead. */ @Deprecated public int getNumberOfRegions() { @@ -400,7 +394,6 @@ public Map> getReplicationLoadSourceMap() { /** * Call directly from client such as hbase shell - * @return ReplicationLoadSink */ @Override public ReplicationLoadSink getReplicationLoadSink() { @@ -412,7 +405,8 @@ public Map getRegionMetrics() { return metrics.getRegionMetrics(); } - @Override public Map getUserMetrics() { + @Override + public Map getUserMetrics() { return metrics.getUserMetrics(); } @@ -431,16 +425,19 @@ public long getLastReportTimestamp() { return metrics.getLastReportTimestamp(); } + @Override + public List getTasks() { + return metrics.getTasks(); + } + /** - * Originally, this method factored in the effect of requests going to the - * server as well. However, this does not interact very well with the current - * region rebalancing code, which only factors number of regions. For the - * interim, until we can figure out how to make rebalancing use all the info - * available, we're just going to make load purely the number of regions. - * + * Originally, this method factored in the effect of requests going to the server as well. + * However, this does not interact very well with the current region rebalancing code, which only + * factors number of regions. For the interim, until we can figure out how to make rebalancing use + * all the info available, we're just going to make load purely the number of regions. * @return load factor for this server. - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getNumberOfRegions} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getNumberOfRegions} instead. */ @Deprecated public int getLoad() { @@ -452,21 +449,20 @@ public int getLoad() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getRegionMetrics} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getRegionMetrics} instead. 
*/ @Deprecated public Map getRegionsLoad() { return getRegionMetrics().entrySet().stream() - .collect(Collectors.toMap(Map.Entry::getKey, e -> new RegionLoad(e.getValue()), - (v1, v2) -> { - throw new RuntimeException("key collisions?"); - }, () -> new TreeMap<>(Bytes.BYTES_COMPARATOR))); + .collect(Collectors.toMap(Map.Entry::getKey, e -> new RegionLoad(e.getValue()), (v1, v2) -> { + throw new RuntimeException("key collisions?"); + }, () -> new TreeMap<>(Bytes.BYTES_COMPARATOR))); } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getCoprocessorNames} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getCoprocessorNames} instead. */ @Deprecated public String[] getRegionServerCoprocessors() { @@ -474,8 +470,8 @@ public String[] getRegionServerCoprocessors() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getCoprocessorNames} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getCoprocessorNames} instead. */ @Deprecated public String[] getRsCoprocessors() { @@ -483,8 +479,8 @@ public String[] getRsCoprocessors() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getRequestCountPerSecond} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getRequestCountPerSecond} instead. */ @Deprecated public double getRequestsPerSecond() { @@ -504,30 +500,29 @@ public String toString() { Strings.appendKeyValue(sb, "numberOfStores", Integer.valueOf(this.stores)); Strings.appendKeyValue(sb, "numberOfStorefiles", Integer.valueOf(this.storefiles)); Strings.appendKeyValue(sb, "storefileUncompressedSizeMB", - Integer.valueOf(this.storeUncompressedSizeMB)); + Integer.valueOf(this.storeUncompressedSizeMB)); Strings.appendKeyValue(sb, "storefileSizeMB", Integer.valueOf(this.storefileSizeMB)); if (this.storeUncompressedSizeMB != 0) { - Strings.appendKeyValue(sb, "compressionRatio", String.format("%.4f", - (float) this.storefileSizeMB / (float) this.storeUncompressedSizeMB)); + Strings.appendKeyValue(sb, "compressionRatio", + String.format("%.4f", (float) this.storefileSizeMB / (float) this.storeUncompressedSizeMB)); } Strings.appendKeyValue(sb, "memstoreSizeMB", Integer.valueOf(this.memstoreSizeMB)); - Strings.appendKeyValue(sb, "storefileIndexSizeKB", - Long.valueOf(this.storefileIndexSizeKB)); + Strings.appendKeyValue(sb, "storefileIndexSizeKB", Long.valueOf(this.storefileIndexSizeKB)); Strings.appendKeyValue(sb, "readRequestsCount", Long.valueOf(this.readRequestsCount)); Strings.appendKeyValue(sb, "filteredReadRequestsCount", - Long.valueOf(this.filteredReadRequestsCount)); + Long.valueOf(this.filteredReadRequestsCount)); Strings.appendKeyValue(sb, "writeRequestsCount", Long.valueOf(this.writeRequestsCount)); Strings.appendKeyValue(sb, "rootIndexSizeKB", Integer.valueOf(this.rootIndexSizeKB)); Strings.appendKeyValue(sb, "totalStaticIndexSizeKB", - Integer.valueOf(this.totalStaticIndexSizeKB)); + Integer.valueOf(this.totalStaticIndexSizeKB)); Strings.appendKeyValue(sb, "totalStaticBloomSizeKB", - Integer.valueOf(this.totalStaticBloomSizeKB)); + Integer.valueOf(this.totalStaticBloomSizeKB)); Strings.appendKeyValue(sb, "totalCompactingKVs", Long.valueOf(this.totalCompactingKVs)); Strings.appendKeyValue(sb, "currentCompactedKVs", Long.valueOf(this.currentCompactedKVs)); float compactionProgressPct = Float.NaN; if (this.totalCompactingKVs > 
0) { compactionProgressPct = - Float.valueOf((float) this.currentCompactedKVs / this.totalCompactingKVs); + Float.valueOf((float) this.currentCompactedKVs / this.totalCompactingKVs); } Strings.appendKeyValue(sb, "compactionProgressPct", compactionProgressPct); @@ -539,17 +534,16 @@ public String toString() { } /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link ServerMetricsBuilder#of(ServerName)} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link ServerMetricsBuilder#of(ServerName)} instead. */ @Deprecated - public static final ServerLoad EMPTY_SERVERLOAD = - new ServerLoad(ServerName.valueOf("localhost,1,1"), - ClusterStatusProtos.ServerLoad.newBuilder().build()); + public static final ServerLoad EMPTY_SERVERLOAD = new ServerLoad( + ServerName.valueOf("localhost,1,1"), ClusterStatusProtos.ServerLoad.newBuilder().build()); /** - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * Use {@link #getReportTimestamp} instead. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 Use + * {@link #getReportTimestamp} instead. */ @Deprecated public long getReportTime() { @@ -558,11 +552,10 @@ public long getReportTime() { @Override public int hashCode() { - return Objects - .hashCode(stores, storefiles, storeUncompressedSizeMB, storefileSizeMB, memstoreSizeMB, - storefileIndexSizeKB, readRequestsCount, filteredReadRequestsCount, writeRequestsCount, - rootIndexSizeKB, totalStaticIndexSizeKB, totalStaticBloomSizeKB, totalCompactingKVs, - currentCompactedKVs); + return Objects.hashCode(stores, storefiles, storeUncompressedSizeMB, storefileSizeMB, + memstoreSizeMB, storefileIndexSizeKB, readRequestsCount, filteredReadRequestsCount, + writeRequestsCount, rootIndexSizeKB, totalStaticIndexSizeKB, totalStaticBloomSizeKB, + totalCompactingKVs, currentCompactedKVs); } @Override @@ -571,16 +564,16 @@ public boolean equals(Object other) { if (other instanceof ServerLoad) { ServerLoad sl = ((ServerLoad) other); return stores == sl.stores && storefiles == sl.storefiles - && storeUncompressedSizeMB == sl.storeUncompressedSizeMB - && storefileSizeMB == sl.storefileSizeMB && memstoreSizeMB == sl.memstoreSizeMB - && storefileIndexSizeKB == sl.storefileIndexSizeKB - && readRequestsCount == sl.readRequestsCount - && filteredReadRequestsCount == sl.filteredReadRequestsCount - && writeRequestsCount == sl.writeRequestsCount && rootIndexSizeKB == sl.rootIndexSizeKB - && totalStaticIndexSizeKB == sl.totalStaticIndexSizeKB - && totalStaticBloomSizeKB == sl.totalStaticBloomSizeKB - && totalCompactingKVs == sl.totalCompactingKVs - && currentCompactedKVs == sl.currentCompactedKVs; + && storeUncompressedSizeMB == sl.storeUncompressedSizeMB + && storefileSizeMB == sl.storefileSizeMB && memstoreSizeMB == sl.memstoreSizeMB + && storefileIndexSizeKB == sl.storefileIndexSizeKB + && readRequestsCount == sl.readRequestsCount + && filteredReadRequestsCount == sl.filteredReadRequestsCount + && writeRequestsCount == sl.writeRequestsCount && rootIndexSizeKB == sl.rootIndexSizeKB + && totalStaticIndexSizeKB == sl.totalStaticIndexSizeKB + && totalStaticBloomSizeKB == sl.totalStaticBloomSizeKB + && totalCompactingKVs == sl.totalCompactingKVs + && currentCompactedKVs == sl.currentCompactedKVs; } return false; } diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerMetrics.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerMetrics.java index 21fad92aa25b..0a2dd28f6f88 100644 --- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerMetrics.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerMetrics.java @@ -1,5 +1,4 @@ -/** - * Copyright The Apache Software Foundation +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -7,14 +6,15 @@ * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at - * http://www.apache.org/licenses/LICENSE-2.0 + * + * http://www.apache.org/licenses/LICENSE-2.0 + * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase; import edu.umd.cs.findbugs.annotations.Nullable; @@ -33,38 +33,26 @@ public interface ServerMetrics { ServerName getServerName(); - /** - * @return the version number of a regionserver. - */ + /** Returns the version number of a regionserver. */ default int getVersionNumber() { return 0; } - /** - * @return the string type version of a regionserver. - */ + /** Returns the string type version of a regionserver. */ default String getVersion() { return "0.0.0"; } - /** - * @return the number of requests per second. - */ + /** Returns the number of requests per second. */ long getRequestCountPerSecond(); - /** - * @return total Number of requests from the start of the region server. - */ + /** Returns total Number of requests from the start of the region server. 
*/ long getRequestCount(); - /** - * @return the amount of used heap - */ + /** Returns the amount of used heap */ Size getUsedHeapSize(); - /** - * @return the maximum allowable size of the heap - */ + /** Returns the maximum allowable size of the heap */ Size getMaxHeapSize(); int getInfoServerPort(); @@ -83,19 +71,14 @@ default String getVersion() { /** * Call directly from client such as hbase shell - * @return ReplicationLoadSink */ @Nullable ReplicationLoadSink getReplicationLoadSink(); - /** - * @return region load metrics - */ + /** Returns region load metrics */ Map getRegionMetrics(); - /** - * @return metrics per user - */ + /** Returns metrics per user */ Map getUserMetrics(); /** @@ -104,14 +87,17 @@ default String getVersion() { */ Set getCoprocessorNames(); - /** - * @return the timestamp (server side) of generating this metrics - */ + /** Returns the timestamp (server side) of generating this metrics */ long getReportTimestamp(); + /** Returns the last timestamp (server side) of generating this metrics */ + long getLastReportTimestamp(); + /** - * @return the last timestamp (server side) of generating this metrics + * Called directly from clients such as the hbase shell + * @return the active monitored tasks */ - long getLastReportTimestamp(); + @Nullable + List getTasks(); } diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerMetricsBuilder.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerMetricsBuilder.java index d93527261d93..1e57857db69e 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerMetricsBuilder.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerMetricsBuilder.java @@ -1,5 +1,4 @@ -/** - * Copyright The Apache Software Foundation +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -7,18 +6,18 @@ * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at - * http://www.apache.org/licenses/LICENSE-2.0 + * + * http://www.apache.org/licenses/LICENSE-2.0 + * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ - package org.apache.hadoop.hbase; import edu.umd.cs.findbugs.annotations.Nullable; - import java.util.ArrayList; import java.util.Collection; import java.util.Collections; @@ -37,6 +36,7 @@ import org.apache.yetus.audience.InterfaceAudience; import org.apache.hbase.thirdparty.com.google.common.base.Preconditions; + import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterStatusProtos; import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos; @@ -44,10 +44,6 @@ @InterfaceAudience.Private public final class ServerMetricsBuilder { - /** - * @param sn the server name - * @return a empty metrics - */ public static ServerMetrics of(ServerName sn) { return newBuilder(sn).build(); } @@ -62,12 +58,12 @@ public static ServerMetrics toServerMetrics(ClusterStatusProtos.LiveServerInfo s } public static ServerMetrics toServerMetrics(ServerName serverName, - ClusterStatusProtos.ServerLoad serverLoadPB) { + ClusterStatusProtos.ServerLoad serverLoadPB) { return toServerMetrics(serverName, 0, "0.0.0", serverLoadPB); } public static ServerMetrics toServerMetrics(ServerName serverName, int versionNumber, - String version, ClusterStatusProtos.ServerLoad serverLoadPB) { + String version, ClusterStatusProtos.ServerLoad serverLoadPB) { return ServerMetricsBuilder.newBuilder(serverName) .setRequestCountPerSecond(serverLoadPB.getNumberOfRequests()) .setRequestCount(serverLoadPB.getTotalNumberOfRequests()) @@ -78,44 +74,46 @@ public static ServerMetrics toServerMetrics(ServerName serverName, int versionNu .map(HBaseProtos.Coprocessor::getName).collect(Collectors.toList())) .setRegionMetrics(serverLoadPB.getRegionLoadsList().stream() .map(RegionMetricsBuilder::toRegionMetrics).collect(Collectors.toList())) - .setUserMetrics(serverLoadPB.getUserLoadsList().stream() - .map(UserMetricsBuilder::toUserMetrics).collect(Collectors.toList())) + .setUserMetrics(serverLoadPB.getUserLoadsList().stream() + .map(UserMetricsBuilder::toUserMetrics).collect(Collectors.toList())) .setReplicationLoadSources(serverLoadPB.getReplLoadSourceList().stream() - .map(ProtobufUtil::toReplicationLoadSource).collect(Collectors.toList())) + .map(ProtobufUtil::toReplicationLoadSource).collect(Collectors.toList())) .setReplicationLoadSink(serverLoadPB.hasReplLoadSink() ? 
ProtobufUtil.toReplicationLoadSink(serverLoadPB.getReplLoadSink()) : null) + .setTasks(serverLoadPB.getTasksList().stream().map(ProtobufUtil::getServerTask) + .collect(Collectors.toList())) .setReportTimestamp(serverLoadPB.getReportEndTime()) .setLastReportTimestamp(serverLoadPB.getReportStartTime()).setVersionNumber(versionNumber) .setVersion(version).build(); } public static List toCoprocessor(Collection names) { - return names.stream() - .map(n -> HBaseProtos.Coprocessor.newBuilder().setName(n).build()) - .collect(Collectors.toList()); + return names.stream().map(n -> HBaseProtos.Coprocessor.newBuilder().setName(n).build()) + .collect(Collectors.toList()); } public static ClusterStatusProtos.ServerLoad toServerLoad(ServerMetrics metrics) { ClusterStatusProtos.ServerLoad.Builder builder = ClusterStatusProtos.ServerLoad.newBuilder() - .setNumberOfRequests(metrics.getRequestCountPerSecond()) - .setTotalNumberOfRequests(metrics.getRequestCount()) - .setInfoServerPort(metrics.getInfoServerPort()) - .setMaxHeapMB((int) metrics.getMaxHeapSize().get(Size.Unit.MEGABYTE)) - .setUsedHeapMB((int) metrics.getUsedHeapSize().get(Size.Unit.MEGABYTE)) - .addAllCoprocessors(toCoprocessor(metrics.getCoprocessorNames())).addAllRegionLoads( - metrics.getRegionMetrics().values().stream().map(RegionMetricsBuilder::toRegionLoad) - .collect(Collectors.toList())).addAllUserLoads( - metrics.getUserMetrics().values().stream().map(UserMetricsBuilder::toUserMetrics) - .collect(Collectors.toList())).addAllReplLoadSource( - metrics.getReplicationLoadSourceList().stream() - .map(ProtobufUtil::toReplicationLoadSource).collect(Collectors.toList())) - .setReportStartTime(metrics.getLastReportTimestamp()) - .setReportEndTime(metrics.getReportTimestamp()); + .setNumberOfRequests(metrics.getRequestCountPerSecond()) + .setTotalNumberOfRequests(metrics.getRequestCount()) + .setInfoServerPort(metrics.getInfoServerPort()) + .setMaxHeapMB((int) metrics.getMaxHeapSize().get(Size.Unit.MEGABYTE)) + .setUsedHeapMB((int) metrics.getUsedHeapSize().get(Size.Unit.MEGABYTE)) + .addAllCoprocessors(toCoprocessor(metrics.getCoprocessorNames())) + .addAllRegionLoads(metrics.getRegionMetrics().values().stream() + .map(RegionMetricsBuilder::toRegionLoad).collect(Collectors.toList())) + .addAllUserLoads(metrics.getUserMetrics().values().stream() + .map(UserMetricsBuilder::toUserMetrics).collect(Collectors.toList())) + .addAllReplLoadSource(metrics.getReplicationLoadSourceList().stream() + .map(ProtobufUtil::toReplicationLoadSource).collect(Collectors.toList())) + .addAllTasks( + metrics.getTasks().stream().map(ProtobufUtil::toServerTask).collect(Collectors.toList())) + .setReportStartTime(metrics.getLastReportTimestamp()) + .setReportEndTime(metrics.getReportTimestamp()); if (metrics.getReplicationLoadSink() != null) { builder.setReplLoadSink(ProtobufUtil.toReplicationLoadSink(metrics.getReplicationLoadSink())); } - return builder.build(); } @@ -139,6 +137,8 @@ public static ServerMetricsBuilder newBuilder(ServerName sn) { private final Set coprocessorNames = new TreeSet<>(); private long reportTimestamp = EnvironmentEdgeManager.currentTime(); private long lastReportTimestamp = 0; + private final List tasks = new ArrayList<>(); + private ServerMetricsBuilder(ServerName serverName) { this.serverName = serverName; } @@ -213,23 +213,15 @@ public ServerMetricsBuilder setLastReportTimestamp(long value) { return this; } + public ServerMetricsBuilder setTasks(List tasks) { + this.tasks.addAll(tasks); + return this; + } + public ServerMetrics build() { 
- return new ServerMetricsImpl( - serverName, - versionNumber, - version, - requestCountPerSecond, - requestCount, - usedHeapSize, - maxHeapSize, - infoServerPort, - sources, - sink, - regionStatus, - coprocessorNames, - reportTimestamp, - lastReportTimestamp, - userMetrics); + return new ServerMetricsImpl(serverName, versionNumber, version, requestCountPerSecond, + requestCount, usedHeapSize, maxHeapSize, infoServerPort, sources, sink, regionStatus, + coprocessorNames, reportTimestamp, lastReportTimestamp, userMetrics, tasks); } private static class ServerMetricsImpl implements ServerMetrics { @@ -249,12 +241,13 @@ private static class ServerMetricsImpl implements ServerMetrics { private final long reportTimestamp; private final long lastReportTimestamp; private final Map userMetrics; + private final List tasks; ServerMetricsImpl(ServerName serverName, int versionNumber, String version, - long requestCountPerSecond, long requestCount, Size usedHeapSize, Size maxHeapSize, - int infoServerPort, List sources, ReplicationLoadSink sink, - Map regionStatus, Set coprocessorNames, long reportTimestamp, - long lastReportTimestamp, Map userMetrics) { + long requestCountPerSecond, long requestCount, Size usedHeapSize, Size maxHeapSize, + int infoServerPort, List sources, ReplicationLoadSink sink, + Map regionStatus, Set coprocessorNames, long reportTimestamp, + long lastReportTimestamp, Map userMetrics, List tasks) { this.serverName = Preconditions.checkNotNull(serverName); this.versionNumber = versionNumber; this.version = version; @@ -267,9 +260,10 @@ private static class ServerMetricsImpl implements ServerMetrics { this.sink = sink; this.regionStatus = Preconditions.checkNotNull(regionStatus); this.userMetrics = Preconditions.checkNotNull(userMetrics); - this.coprocessorNames =Preconditions.checkNotNull(coprocessorNames); + this.coprocessorNames = Preconditions.checkNotNull(coprocessorNames); this.reportTimestamp = reportTimestamp; this.lastReportTimestamp = lastReportTimestamp; + this.tasks = tasks; } @Override @@ -282,6 +276,7 @@ public int getVersionNumber() { return versionNumber; } + @Override public String getVersion() { return version; } @@ -317,11 +312,11 @@ public List getReplicationLoadSourceList() { } @Override - public Map> getReplicationLoadSourceMap(){ - Map> sourcesMap = new HashMap<>(); - for(ReplicationLoadSource loadSource : sources){ - sourcesMap.computeIfAbsent(loadSource.getPeerID(), - peerId -> new ArrayList<>()).add(loadSource); + public Map> getReplicationLoadSourceMap() { + Map> sourcesMap = new HashMap<>(); + for (ReplicationLoadSource loadSource : sources) { + sourcesMap.computeIfAbsent(loadSource.getPeerID(), peerId -> new ArrayList<>()) + .add(loadSource); } return sourcesMap; } @@ -356,6 +351,11 @@ public long getLastReportTimestamp() { return lastReportTimestamp; } + @Override + public List getTasks() { + return tasks; + } + @Override public String toString() { int storeCount = 0; @@ -378,36 +378,37 @@ public String toString() { storeFileCount += r.getStoreFileCount(); storeRefCount += r.getStoreRefCount(); int currentMaxCompactedStoreFileRefCount = r.getMaxCompactedStoreFileRefCount(); - maxCompactedStoreFileRefCount = Math.max(maxCompactedStoreFileRefCount, - currentMaxCompactedStoreFileRefCount); - uncompressedStoreFileSizeMB += r.getUncompressedStoreFileSize().get(Size.Unit.MEGABYTE); - storeFileSizeMB += r.getStoreFileSize().get(Size.Unit.MEGABYTE); - memStoreSizeMB += r.getMemStoreSize().get(Size.Unit.MEGABYTE); - storefileIndexSizeKB += 
r.getStoreFileUncompressedDataIndexSize().get(Size.Unit.KILOBYTE); + maxCompactedStoreFileRefCount = + Math.max(maxCompactedStoreFileRefCount, currentMaxCompactedStoreFileRefCount); + uncompressedStoreFileSizeMB += + (long) r.getUncompressedStoreFileSize().get(Size.Unit.MEGABYTE); + storeFileSizeMB += (long) r.getStoreFileSize().get(Size.Unit.MEGABYTE); + memStoreSizeMB += (long) r.getMemStoreSize().get(Size.Unit.MEGABYTE); + storefileIndexSizeKB += + (long) r.getStoreFileUncompressedDataIndexSize().get(Size.Unit.KILOBYTE); readRequestsCount += r.getReadRequestCount(); writeRequestsCount += r.getWriteRequestCount(); filteredReadRequestsCount += r.getFilteredReadRequestCount(); - rootLevelIndexSizeKB += r.getStoreFileRootLevelIndexSize().get(Size.Unit.KILOBYTE); - bloomFilterSizeMB += r.getBloomFilterSize().get(Size.Unit.MEGABYTE); + rootLevelIndexSizeKB += (long) r.getStoreFileRootLevelIndexSize().get(Size.Unit.KILOBYTE); + bloomFilterSizeMB += (long) r.getBloomFilterSize().get(Size.Unit.MEGABYTE); compactedCellCount += r.getCompactedCellCount(); compactingCellCount += r.getCompactingCellCount(); } StringBuilder sb = Strings.appendKeyValue(new StringBuilder(), "requestsPerSecond", - Double.valueOf(getRequestCountPerSecond())); + Double.valueOf(getRequestCountPerSecond())); Strings.appendKeyValue(sb, "numberOfOnlineRegions", - Integer.valueOf(getRegionMetrics().size())); + Integer.valueOf(getRegionMetrics().size())); Strings.appendKeyValue(sb, "usedHeapMB", getUsedHeapSize()); Strings.appendKeyValue(sb, "maxHeapMB", getMaxHeapSize()); Strings.appendKeyValue(sb, "numberOfStores", storeCount); Strings.appendKeyValue(sb, "numberOfStorefiles", storeFileCount); Strings.appendKeyValue(sb, "storeRefCount", storeRefCount); - Strings.appendKeyValue(sb, "maxCompactedStoreFileRefCount", - maxCompactedStoreFileRefCount); + Strings.appendKeyValue(sb, "maxCompactedStoreFileRefCount", maxCompactedStoreFileRefCount); Strings.appendKeyValue(sb, "storefileUncompressedSizeMB", uncompressedStoreFileSizeMB); Strings.appendKeyValue(sb, "storefileSizeMB", storeFileSizeMB); if (uncompressedStoreFileSizeMB != 0) { - Strings.appendKeyValue(sb, "compressionRatio", String.format("%.4f", - (float) storeFileSizeMB / (float) uncompressedStoreFileSizeMB)); + Strings.appendKeyValue(sb, "compressionRatio", + String.format("%.4f", (float) storeFileSizeMB / (float) uncompressedStoreFileSizeMB)); } Strings.appendKeyValue(sb, "memstoreSizeMB", memStoreSizeMB); Strings.appendKeyValue(sb, "readRequestsCount", readRequestsCount); @@ -420,8 +421,7 @@ public String toString() { Strings.appendKeyValue(sb, "currentCompactedKVs", compactedCellCount); float compactionProgressPct = Float.NaN; if (compactingCellCount > 0) { - compactionProgressPct = - Float.valueOf((float) compactedCellCount / compactingCellCount); + compactionProgressPct = Float.valueOf((float) compactedCellCount / compactingCellCount); } Strings.appendKeyValue(sb, "compactionProgressPct", compactionProgressPct); Strings.appendKeyValue(sb, "coprocessors", getCoprocessorNames()); diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerTask.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerTask.java new file mode 100644 index 000000000000..cd6d41169bb8 --- /dev/null +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerTask.java @@ -0,0 +1,64 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. 
See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase; + +import org.apache.yetus.audience.InterfaceAudience; + +/** Information about active monitored server tasks */ +@InterfaceAudience.Public +public interface ServerTask { + + /** Task state */ + enum State { + RUNNING, + WAITING, + COMPLETE, + ABORTED; + } + + /** + * Get the task's description. + * @return the task's description, typically a name + */ + String getDescription(); + + /** + * Get the current status of the task. + * @return the task's current status + */ + String getStatus(); + + /** + * Get the current state of the task. + * @return the task's current state + */ + State getState(); + + /** + * Get the task start time. + * @return the time when the task started, or 0 if it has not started yet + */ + long getStartTime(); + + /** + * Get the task completion time. + * @return the time when the task completed, or 0 if it has not completed yet + */ + long getCompletionTime(); + +} diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerTaskBuilder.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerTaskBuilder.java new file mode 100644 index 000000000000..3ecd0c16cd9c --- /dev/null +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/ServerTaskBuilder.java @@ -0,0 +1,127 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.hbase; + +import org.apache.yetus.audience.InterfaceAudience; + +/** Builder for information about active monitored server tasks */ +@InterfaceAudience.Private +public final class ServerTaskBuilder { + + public static ServerTaskBuilder newBuilder() { + return new ServerTaskBuilder(); + } + + private String description = ""; + private String status = ""; + private ServerTask.State state = ServerTask.State.RUNNING; + private long startTime; + private long completionTime; + + private ServerTaskBuilder() { + } + + private static final class ServerTaskImpl implements ServerTask { + + private final String description; + private final String status; + private final ServerTask.State state; + private final long startTime; + private final long completionTime; + + private ServerTaskImpl(final String description, final String status, + final ServerTask.State state, final long startTime, final long completionTime) { + this.description = description; + this.status = status; + this.state = state; + this.startTime = startTime; + this.completionTime = completionTime; + } + + @Override + public String getDescription() { + return description; + } + + @Override + public String getStatus() { + return status; + } + + @Override + public State getState() { + return state; + } + + @Override + public long getStartTime() { + return startTime; + } + + @Override + public long getCompletionTime() { + return completionTime; + } + + @Override + public String toString() { + StringBuilder sb = new StringBuilder(512); + sb.append(getDescription()); + sb.append(": status="); + sb.append(getStatus()); + sb.append(", state="); + sb.append(getState()); + sb.append(", startTime="); + sb.append(getStartTime()); + sb.append(", completionTime="); + sb.append(getCompletionTime()); + return sb.toString(); + } + + } + + public ServerTaskBuilder setDescription(final String description) { + this.description = description; + return this; + } + + public ServerTaskBuilder setStatus(final String status) { + this.status = status; + return this; + } + + public ServerTaskBuilder setState(final ServerTask.State state) { + this.state = state; + return this; + } + + public ServerTaskBuilder setStartTime(final long startTime) { + this.startTime = startTime; + return this; + } + + public ServerTaskBuilder setCompletionTime(final long completionTime) { + this.completionTime = completionTime; + return this; + } + + public ServerTask build() { + return new ServerTaskImpl(description, status, state, startTime, completionTime); + } + +} diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/Size.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/Size.java index 0e7716a0a619..99062635f77a 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/Size.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/Size.java @@ -1,5 +1,4 @@ -/** - * Copyright The Apache Software Foundation +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -7,14 +6,15 @@ * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
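For readers skimming this part of the patch, a minimal usage sketch of the two new classes introduced above (ServerTask and ServerTaskBuilder) together with the ServerMetrics.getTasks() accessor added earlier; the serverMetrics variable and the sample description/status strings are assumptions for illustration only, not part of this change.

    // Illustrative only: publish a monitored task via the new builder added in this patch.
    ServerTask task = ServerTaskBuilder.newBuilder()
        .setDescription("Flushing memstore for region t1,,1")   // hypothetical description
        .setStatus("running flush")                             // hypothetical status text
        .setState(ServerTask.State.RUNNING)
        .setStartTime(System.currentTimeMillis())
        .build();

    // Illustrative only: tasks now travel with ServerMetrics, so a holder of a
    // ServerMetrics instance (obtained elsewhere) can list them.
    for (ServerTask t : serverMetrics.getTasks()) {
      System.out.println(t);  // ServerTaskImpl.toString() prints status, state and timestamps
    }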
You may obtain a copy of the License at - * http://www.apache.org/licenses/LICENSE-2.0 + * + * http://www.apache.org/licenses/LICENSE-2.0 + * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase; import java.math.BigDecimal; @@ -24,8 +24,8 @@ import org.apache.hbase.thirdparty.com.google.common.base.Preconditions; /** - * It is used to represent the size with different units. - * This class doesn't serve for the precise computation. + * It is used to represent the size with different units. This class doesn't serve for the precise + * computation. */ @InterfaceAudience.Public public final class Size implements Comparable { @@ -40,6 +40,7 @@ public enum Unit { MEGABYTE(97, "MB"), KILOBYTE(96, "KB"), BYTE(95, "B"); + private final int orderOfSize; private final String simpleName; @@ -68,9 +69,7 @@ public Size(double value, Unit unit) { this.unit = Preconditions.checkNotNull(unit); } - /** - * @return size unit - */ + /** Returns size unit */ public Unit getUnit() { return unit; } @@ -91,7 +90,6 @@ public double get() { /** * get the value which is converted to specified unit. - * * @param unit size unit * @return the converted value */ @@ -146,7 +144,7 @@ public boolean equals(Object obj) { return true; } if (obj instanceof Size) { - return compareTo((Size)obj) == 0; + return compareTo((Size) obj) == 0; } return false; } diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/TableExistsException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/TableExistsException.java index 9d67a37695ca..ae6721813a8a 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/TableExistsException.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/TableExistsException.java @@ -7,14 +7,13 @@ * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * - * http://www.apache.org/licenses/LICENSE-2.0 + * http://www.apache.org/licenses/LICENSE-2.0 * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. */ package org.apache.hadoop.hbase; diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/TableInfoMissingException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/TableInfoMissingException.java index a113f7c67bf0..98e958aa65b9 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/TableInfoMissingException.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/TableInfoMissingException.java @@ -1,4 +1,4 @@ -/** +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. 
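Because the Size class reworked above carries most of the unit math used in these metrics conversions, a short sketch of its unit handling follows; it assumes the full Unit enum (including GIGABYTE) matches the fragment visible in the hunk.

    Size storeFileSize = new Size(2048, Size.Unit.MEGABYTE);
    double asGb = storeFileSize.get(Size.Unit.GIGABYTE);  // 2.0 after unit conversion
    double raw = storeFileSize.get();                     // 2048.0 in the declared unit
    Size twoGb = new Size(2, Size.Unit.GIGABYTE);
    boolean same = storeFileSize.compareTo(twoGb) == 0;   // true: comparison normalizes units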
See the NOTICE file * distributed with this work for additional information @@ -40,7 +40,7 @@ public TableInfoMissingException(String message) { } /** - * @param message the message for this exception + * @param message the message for this exception * @param throwable the {@link Throwable} to use for this exception */ public TableInfoMissingException(String message, Throwable throwable) { diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/TableNotDisabledException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/TableNotDisabledException.java index 7e5046538abc..54f44405c584 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/TableNotDisabledException.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/TableNotDisabledException.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/TableNotEnabledException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/TableNotEnabledException.java index 90c015674ca6..14720811ca16 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/TableNotEnabledException.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/TableNotEnabledException.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -27,6 +26,7 @@ @InterfaceAudience.Public public class TableNotEnabledException extends DoNotRetryIOException { private static final long serialVersionUID = 262144L; + /** default constructor */ public TableNotEnabledException() { super(); diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/TableNotFoundException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/TableNotFoundException.java index ae114fed0e62..416d8601fc3b 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/TableNotFoundException.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/TableNotFoundException.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/UnknownRegionException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/UnknownRegionException.java index 850cd9600623..dfe5f682f382 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/UnknownRegionException.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/UnknownRegionException.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -30,7 +29,6 @@ public class UnknownRegionException extends DoNotRetryRegionException { /** * Constructs a new UnknownRegionException with the specified detail message. - * * @param message the detail message */ public UnknownRegionException(String message) { @@ -39,9 +37,8 @@ public UnknownRegionException(String message) { /** * Constructs a new UnknownRegionException with the specified detail message and cause. 
- * * @param message the detail message - * @param cause the cause of the exception + * @param cause the cause of the exception */ public UnknownRegionException(String message, Throwable cause) { super(message, cause); diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/UnknownScannerException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/UnknownScannerException.java index 14afb977b5de..fec8e57bee2e 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/UnknownScannerException.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/UnknownScannerException.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -21,10 +20,9 @@ import org.apache.yetus.audience.InterfaceAudience; /** - * Thrown if a region server is passed an unknown scanner ID. - * This usually means that the client has taken too long between checkins and so the - * scanner lease on the server-side has expired OR the server-side is closing - * down and has cancelled all leases. + * Thrown if a region server is passed an unknown scanner ID. This usually means that the client has + * taken too long between checkins and so the scanner lease on the server-side has expired OR the + * server-side is closing down and has cancelled all leases. */ @InterfaceAudience.Public public class UnknownScannerException extends DoNotRetryIOException { @@ -42,7 +40,7 @@ public UnknownScannerException(String message) { } /** - * @param message the message for this exception + * @param message the message for this exception * @param exception the exception to grab data from */ public UnknownScannerException(String message, Exception exception) { diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/UserMetrics.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/UserMetrics.java index 6c2ba07cc3d6..681b1f416c78 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/UserMetrics.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/UserMetrics.java @@ -1,6 +1,4 @@ -/** - * Copyright The Apache Software Foundation - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -17,18 +15,16 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase; import java.util.Map; - import org.apache.hadoop.hbase.util.Bytes; import org.apache.yetus.audience.InterfaceAudience; import org.apache.yetus.audience.InterfaceStability; /** - * Encapsulates per-user load metrics. - */ + * Encapsulates per-user load metrics. 
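The UnknownScannerException javadoc reworked a little earlier explains that the server-side scanner lease expires when a client pauses too long between calls. If that exception ever surfaces to application code, one defensive pattern is to remember the last row returned and reopen the scan from there; a hedged sketch, assuming an already-open Table handle:

import java.io.IOException;
import org.apache.hadoop.hbase.UnknownScannerException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class ScanResumeSketch {
  /** Scans the whole table, reopening the scanner if the server-side lease expired. */
  static void scanAll(Table table) throws IOException {
    byte[] lastRow = null;
    boolean finished = false;
    while (!finished) {
      Scan scan = new Scan();
      if (lastRow != null) {
        scan.withStartRow(lastRow, false); // resume just after the last row already handled
      }
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result r; (r = scanner.next()) != null;) {
          lastRow = r.getRow();
          // process(r): keep per-row work short so the lease does not expire again
        }
        finished = true;
      } catch (UnknownScannerException e) {
        // Lease expired between next() calls; loop around and reopen from lastRow.
      }
    }
  }
}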
+ */ @InterfaceAudience.Public @InterfaceStability.Evolving public interface UserMetrics { @@ -44,43 +40,31 @@ interface ClientMetrics { long getFilteredReadRequestsCount(); } - /** - * @return the user name - */ + /** Returns the user name */ byte[] getUserName(); - /** - * @return the number of read requests made by user - */ + /** Returns the number of read requests made by user */ long getReadRequestCount(); - /** - * @return the number of write requests made by user - */ + /** Returns the number of write requests made by user */ long getWriteRequestCount(); /** - * @return the number of write requests and read requests and coprocessor - * service requests made by the user + * Returns the number of write requests and read requests and coprocessor service requests made by + * the user */ default long getRequestCount() { return getReadRequestCount() + getWriteRequestCount(); } - /** - * @return the user name as a string - */ + /** Returns the user name as a string */ default String getNameAsString() { return Bytes.toStringBinary(getUserName()); } - /** - * @return metrics per client(hostname) - */ + /** Returns metrics per client(hostname) */ Map getClientMetrics(); - /** - * @return count of filtered read requests for a user - */ + /** Returns count of filtered read requests for a user */ long getFilteredReadRequests(); } diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/UserMetricsBuilder.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/UserMetricsBuilder.java index 70d28883c269..4a66283146d9 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/UserMetricsBuilder.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/UserMetricsBuilder.java @@ -1,6 +1,4 @@ -/** - * Copyright The Apache Software Foundation - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -17,12 +15,11 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ - package org.apache.hadoop.hbase; +import java.nio.charset.StandardCharsets; import java.util.HashMap; import java.util.Map; - import org.apache.hadoop.hbase.util.Strings; import org.apache.yetus.audience.InterfaceAudience; @@ -34,24 +31,26 @@ public final class UserMetricsBuilder { public static UserMetrics toUserMetrics(ClusterStatusProtos.UserLoad userLoad) { - UserMetricsBuilder builder = UserMetricsBuilder.newBuilder(userLoad.getUserName().getBytes()); - userLoad.getClientMetricsList().stream().map( - clientMetrics -> new ClientMetricsImpl(clientMetrics.getHostName(), - clientMetrics.getReadRequestsCount(), clientMetrics.getWriteRequestsCount(), - clientMetrics.getFilteredRequestsCount())).forEach(builder::addClientMetris); + UserMetricsBuilder builder = + UserMetricsBuilder.newBuilder(userLoad.getUserName().getBytes(StandardCharsets.UTF_8)); + userLoad.getClientMetricsList().stream() + .map(clientMetrics -> new ClientMetricsImpl(clientMetrics.getHostName(), + clientMetrics.getReadRequestsCount(), clientMetrics.getWriteRequestsCount(), + clientMetrics.getFilteredRequestsCount())) + .forEach(builder::addClientMetris); return builder.build(); } public static ClusterStatusProtos.UserLoad toUserMetrics(UserMetrics userMetrics) { ClusterStatusProtos.UserLoad.Builder builder = - ClusterStatusProtos.UserLoad.newBuilder().setUserName(userMetrics.getNameAsString()); - userMetrics.getClientMetrics().values().stream().map( - clientMetrics -> ClusterStatusProtos.ClientMetrics.newBuilder() - .setHostName(clientMetrics.getHostName()) - .setWriteRequestsCount(clientMetrics.getWriteRequestsCount()) - .setReadRequestsCount(clientMetrics.getReadRequestsCount()) - .setFilteredRequestsCount(clientMetrics.getFilteredReadRequestsCount()).build()) - .forEach(builder::addClientMetrics); + ClusterStatusProtos.UserLoad.newBuilder().setUserName(userMetrics.getNameAsString()); + userMetrics.getClientMetrics().values().stream() + .map(clientMetrics -> ClusterStatusProtos.ClientMetrics.newBuilder() + .setHostName(clientMetrics.getHostName()) + .setWriteRequestsCount(clientMetrics.getWriteRequestsCount()) + .setReadRequestsCount(clientMetrics.getReadRequestsCount()) + .setFilteredRequestsCount(clientMetrics.getFilteredReadRequestsCount()).build()) + .forEach(builder::addClientMetrics); return builder.build(); } @@ -59,9 +58,9 @@ public static UserMetricsBuilder newBuilder(byte[] name) { return new UserMetricsBuilder(name); } - private final byte[] name; private Map clientMetricsMap = new HashMap<>(); + private UserMetricsBuilder(byte[] name) { this.name = name; } @@ -82,26 +81,30 @@ public static class ClientMetricsImpl implements UserMetrics.ClientMetrics { private final long writeRequestCount; public ClientMetricsImpl(String hostName, long readRequest, long writeRequest, - long filteredReadRequestsCount) { + long filteredReadRequestsCount) { this.hostName = hostName; this.readRequestCount = readRequest; this.writeRequestCount = writeRequest; this.filteredReadRequestsCount = filteredReadRequestsCount; } - @Override public String getHostName() { + @Override + public String getHostName() { return hostName; } - @Override public long getReadRequestsCount() { + @Override + public long getReadRequestsCount() { return readRequestCount; } - @Override public long getWriteRequestsCount() { + @Override + public long getWriteRequestsCount() { return writeRequestCount; } - @Override public long getFilteredReadRequestsCount() { + @Override + public long getFilteredReadRequestsCount() { return filteredReadRequestsCount; } } 
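ClientMetricsImpl above is the per-host record that UserMetricsBuilder collects; once built, the UserMetrics default methods sum the per-client counts. A sketch with invented numbers (the builder is an internal helper, so this is illustrative rather than public API):

import org.apache.hadoop.hbase.UserMetrics;
import org.apache.hadoop.hbase.UserMetricsBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class UserMetricsSketch {
  public static void main(String[] args) {
    UserMetricsBuilder builder = UserMetricsBuilder.newBuilder(Bytes.toBytes("alice"));
    // One ClientMetricsImpl per client host: (hostName, reads, writes, filteredReads).
    builder.addClientMetris(new UserMetricsBuilder.ClientMetricsImpl("host-1", 10, 4, 1));
    builder.addClientMetris(new UserMetricsBuilder.ClientMetricsImpl("host-2", 6, 2, 0));
    UserMetrics metrics = builder.build();

    System.out.println(metrics.getNameAsString());         // alice
    System.out.println(metrics.getReadRequestCount());     // 16, summed across hosts
    System.out.println(metrics.getWriteRequestCount());    // 6
    System.out.println(metrics.getRequestCount());         // 22, default = read + write
    System.out.println(metrics.getFilteredReadRequests()); // 1
  }
}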
@@ -115,33 +118,38 @@ private static class UserMetricsImpl implements UserMetrics { this.clientMetricsMap = clientMetricsMap; } - @Override public byte[] getUserName() { + @Override + public byte[] getUserName() { return name; } - @Override public long getReadRequestCount() { - return clientMetricsMap.values().stream().map(c -> c.getReadRequestsCount()) - .reduce(0L, Long::sum); + @Override + public long getReadRequestCount() { + return clientMetricsMap.values().stream().map(c -> c.getReadRequestsCount()).reduce(0L, + Long::sum); } - @Override public long getWriteRequestCount() { - return clientMetricsMap.values().stream().map(c -> c.getWriteRequestsCount()) - .reduce(0L, Long::sum); + @Override + public long getWriteRequestCount() { + return clientMetricsMap.values().stream().map(c -> c.getWriteRequestsCount()).reduce(0L, + Long::sum); } - @Override public Map getClientMetrics() { + @Override + public Map getClientMetrics() { return this.clientMetricsMap; } - @Override public long getFilteredReadRequests() { + @Override + public long getFilteredReadRequests() { return clientMetricsMap.values().stream().map(c -> c.getFilteredReadRequestsCount()) - .reduce(0L, Long::sum); + .reduce(0L, Long::sum); } @Override public String toString() { - StringBuilder sb = Strings - .appendKeyValue(new StringBuilder(), "readRequestCount", this.getReadRequestCount()); + StringBuilder sb = + Strings.appendKeyValue(new StringBuilder(), "readRequestCount", this.getReadRequestCount()); Strings.appendKeyValue(sb, "writeRequestCount", this.getWriteRequestCount()); Strings.appendKeyValue(sb, "filteredReadRequestCount", this.getFilteredReadRequests()); return sb.toString(); diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/ZooKeeperConnectionException.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/ZooKeeperConnectionException.java index 4dc44b4c3c69..f361d43f61db 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/ZooKeeperConnectionException.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/ZooKeeperConnectionException.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -19,7 +18,6 @@ package org.apache.hadoop.hbase; import java.io.IOException; - import org.apache.yetus.audience.InterfaceAudience; /** @@ -42,8 +40,7 @@ public ZooKeeperConnectionException(String message) { /** * Constructor taking another exception. - * - * @param message the message for this exception + * @param message the message for this exception * @param exception the exception to grab data from */ public ZooKeeperConnectionException(String message, Exception exception) { diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AbstractClientScanner.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AbstractClientScanner.java index 92b046436258..48cec12f43c5 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AbstractClientScanner.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AbstractClientScanner.java @@ -1,4 +1,4 @@ -/** +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. 
See the NOTICE file * distributed with this work for additional information @@ -17,8 +17,8 @@ */ package org.apache.hadoop.hbase.client; -import org.apache.yetus.audience.InterfaceAudience; import org.apache.hadoop.hbase.client.metrics.ScanMetrics; +import org.apache.yetus.audience.InterfaceAudience; /** * Helper class for custom client scanners. diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AbstractResponse.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AbstractResponse.java index 9e33a12af6b5..b0a33eda4021 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AbstractResponse.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AbstractResponse.java @@ -1,5 +1,4 @@ /* - * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -28,10 +27,9 @@ abstract class AbstractResponse { public enum ResponseType { - SINGLE (0), - MULTI (1); + SINGLE, + MULTI; - ResponseType(int value) {} } public abstract ResponseType type(); diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AbstractRpcBasedConnectionRegistry.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AbstractRpcBasedConnectionRegistry.java index 54138d30516c..4e97dcab24dd 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AbstractRpcBasedConnectionRegistry.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AbstractRpcBasedConnectionRegistry.java @@ -1,4 +1,4 @@ -/** +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -135,7 +135,7 @@ private void populateStubs(Set addrs) throws IOException { * Typically, you can use lambda expression to implement this interface as * *

    -   * (c, s, d) -> s.xxx(c, your request here, d)
    +   * (c, s, d) -> s.xxx(c, your request here, d)
        * 
    */ @FunctionalInterface @@ -222,7 +222,7 @@ protected final CompletableFuture call(Callable callab } @RestrictedApi(explanation = "Should only be called in tests", link = "", - allowedOnPath = ".*/src/test/.*") + allowedOnPath = ".*/src/test/.*") Set getParsedServers() { return addr2Stub.keySet(); } @@ -244,8 +244,7 @@ public CompletableFuture getMetaRegionLocations() { . call( (c, s, d) -> s.getMetaRegionLocations(c, GetMetaRegionLocationsRequest.getDefaultInstance(), d), - r -> r.getMetaLocationsCount() != 0, - "getMetaLocationsCount") + r -> r.getMetaLocationsCount() != 0, "getMetaLocationsCount") .thenApply(AbstractRpcBasedConnectionRegistry::transformMetaRegionLocations), getClass().getSimpleName() + ".getMetaRegionLocations"); } @@ -265,7 +264,7 @@ public CompletableFuture getClusterId() { public CompletableFuture getActiveMaster() { return tracedFuture( () -> this - .call( + . call( (c, s, d) -> s.getActiveMaster(c, GetActiveMasterRequest.getDefaultInstance(), d), GetActiveMasterResponse::hasServerName, "getActiveMaster()") .thenApply(resp -> ProtobufUtil.toServerName(resp.getServerName())), diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Action.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Action.java index 4496a9e98558..6d57764491b9 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Action.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Action.java @@ -1,5 +1,4 @@ -/** - * +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -22,9 +21,9 @@ import org.apache.yetus.audience.InterfaceAudience; /** - * A Get, Put, Increment, Append, or Delete associated with it's region. Used internally by - * {@link Table#batch} to associate the action with it's region and maintain - * the index from the original request. + * A Get, Put, Increment, Append, or Delete associated with it's region. Used internally by + * {@link Table#batch} to associate the action with it's region and maintain the index from the + * original request. */ @InterfaceAudience.Private public class Action implements Comparable { @@ -46,7 +45,7 @@ public Action(Row action, int originalIndex, int priority) { /** * Creates an action for a particular replica from original action. - * @param action Original action. + * @param action Original action. * @param replicaId Replica id for the new action. */ public Action(Action action, int replicaId) { @@ -76,7 +75,9 @@ public int getReplicaId() { return replicaId; } - public int getPriority() { return priority; } + public int getPriority() { + return priority; + } @Override public int compareTo(Action other) { diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java index a3a51071b338..fd362fc4031c 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java @@ -1,4 +1,4 @@ -/** +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -72,11 +72,11 @@ import org.apache.hbase.thirdparty.com.google.common.collect.ImmutableList; /** - * The administrative API for HBase. 
Obtain an instance from {@link Connection#getAdmin()} and - call {@link #close()} when done. - * <p>Admin can be used to create, drop, list, enable and disable and otherwise modify tables, - * as well as perform other administrative operations. - * + * The administrative API for HBase. Obtain an instance from {@link Connection#getAdmin()} and call + * {@link #close()} when done. + * <p>
    + * Admin can be used to create, drop, list, enable and disable and otherwise modify tables, as well + * as perform other administrative operations. * @see ConnectionFactory * @see Connection * @see Table @@ -112,12 +112,11 @@ public interface Admin extends Abortable, Closeable { @Override boolean isAborted(); - /** - * @return Connection used by this object. - */ + /** Returns Connection used by this object. */ Connection getConnection(); /** + * Check if a table exists. * @param tableName Table to check. * @return true if table exists already. * @throws IOException if a remote or network exception occurs @@ -126,11 +125,10 @@ public interface Admin extends Abortable, Closeable { /** * List all the userspace tables. - * * @return an array of read-only HTableDescriptors * @throws IOException if a remote or network exception occurs - * @deprecated since 2.0 version and will be removed in 3.0 version. - * Use {@link #listTableDescriptors()}. + * @deprecated since 2.0 version and will be removed in 3.0 version. Use + * {@link #listTableDescriptors()}. * @see #listTableDescriptors() */ @Deprecated @@ -138,7 +136,6 @@ public interface Admin extends Abortable, Closeable { /** * List all the userspace tables. - * * @return a list of TableDescriptors * @throws IOException if a remote or network exception occurs */ @@ -146,13 +143,12 @@ public interface Admin extends Abortable, Closeable { /** * List all the userspace tables that match the given pattern. - * * @param pattern The compiled regular expression to match against * @return an array of read-only HTableDescriptors * @throws IOException if a remote or network exception occurs * @see #listTables() - * @deprecated since 2.0 version and will be removed in 3.0 version. - * Use {@link #listTableDescriptors(java.util.regex.Pattern)}. + * @deprecated since 2.0 version and will be removed in 3.0 version. Use + * {@link #listTableDescriptors(java.util.regex.Pattern)}. * @see #listTableDescriptors(Pattern) */ @Deprecated @@ -160,7 +156,6 @@ public interface Admin extends Abortable, Closeable { /** * List all the userspace tables that match the given pattern. - * * @param pattern The compiled regular expression to match against * @return a list of TableDescriptors * @throws IOException if a remote or network exception occurs @@ -172,7 +167,6 @@ default List listTableDescriptors(Pattern pattern) throws IOExc /** * List all the userspace tables matching the given regular expression. - * * @param regex The regular expression to match against * @return a list of read-only HTableDescriptors * @throws IOException if a remote or network exception occurs @@ -185,50 +179,44 @@ default List listTableDescriptors(Pattern pattern) throws IOExc /** * List all the tables matching the given pattern. - * - * @param pattern The compiled regular expression to match against + * @param pattern The compiled regular expression to match against * @param includeSysTables false to match only against userspace tables * @return an array of read-only HTableDescriptors * @throws IOException if a remote or network exception occurs * @see #listTables() - * @deprecated since 2.0 version and will be removed in 3.0 version. - * Use {@link #listTableDescriptors(java.util.regex.Pattern, boolean)}. + * @deprecated since 2.0 version and will be removed in 3.0 version. Use + * {@link #listTableDescriptors(java.util.regex.Pattern, boolean)}. 
* @see #listTableDescriptors(java.util.regex.Pattern, boolean) */ @Deprecated - HTableDescriptor[] listTables(Pattern pattern, boolean includeSysTables) - throws IOException; + HTableDescriptor[] listTables(Pattern pattern, boolean includeSysTables) throws IOException; /** * List all the tables matching the given pattern. - * - * @param pattern The compiled regular expression to match against + * @param pattern The compiled regular expression to match against * @param includeSysTables false to match only against userspace tables * @return a list of TableDescriptors * @throws IOException if a remote or network exception occurs * @see #listTables() */ List listTableDescriptors(Pattern pattern, boolean includeSysTables) - throws IOException; + throws IOException; /** * List all the tables matching the given pattern. - * - * @param regex The regular expression to match against + * @param regex The regular expression to match against * @param includeSysTables false to match only against userspace tables * @return an array of read-only HTableDescriptors * @throws IOException if a remote or network exception occurs * @see #listTables(java.util.regex.Pattern, boolean) - * @deprecated since 2.0 version and will be removed in 3.0 version. - * Use {@link #listTableDescriptors(Pattern, boolean)}. + * @deprecated since 2.0 version and will be removed in 3.0 version. Use + * {@link #listTableDescriptors(Pattern, boolean)}. */ @Deprecated - HTableDescriptor[] listTables(String regex, boolean includeSysTables) - throws IOException; + HTableDescriptor[] listTables(String regex, boolean includeSysTables) throws IOException; /** * List all of the names of userspace tables. - * * @return TableName[] table names * @throws IOException if a remote or network exception occurs */ @@ -257,17 +245,16 @@ default TableName[] listTableNames(Pattern pattern) throws IOException { /** * List all of the names of userspace tables. - * @param pattern The regular expression to match against + * @param pattern The regular expression to match against * @param includeSysTables false to match only against userspace tables * @return TableName[] table names * @throws IOException if a remote or network exception occurs */ - TableName[] listTableNames(Pattern pattern, boolean includeSysTables) - throws IOException; + TableName[] listTableNames(Pattern pattern, boolean includeSysTables) throws IOException; /** * List all of the names of userspace tables. - * @param regex The regular expression to match against + * @param regex The regular expression to match against * @param includeSysTables false to match only against userspace tables * @return TableName[] table names * @throws IOException if a remote or network exception occurs @@ -275,81 +262,87 @@ TableName[] listTableNames(Pattern pattern, boolean includeSysTables) * {@link #listTableNames(Pattern, boolean)} instead. */ @Deprecated - TableName[] listTableNames(String regex, boolean includeSysTables) - throws IOException; + TableName[] listTableNames(String regex, boolean includeSysTables) throws IOException; /** * Get a table descriptor. - * * @param tableName as a {@link TableName} * @return the read-only tableDescriptor - * @throws org.apache.hadoop.hbase.TableNotFoundException - * @throws IOException if a remote or network exception occurs - * @deprecated since 2.0 version and will be removed in 3.0 version. - * Use {@link #getDescriptor(TableName)}. 
+ * @throws TableNotFoundException if the table was not found + * @throws IOException if a remote or network exception occurs + * @deprecated since 2.0 version and will be removed in 3.0 version. Use + * {@link #getDescriptor(TableName)}. */ @Deprecated HTableDescriptor getTableDescriptor(TableName tableName) - throws TableNotFoundException, IOException; + throws TableNotFoundException, IOException; /** * Get a table descriptor. - * * @param tableName as a {@link TableName} * @return the tableDescriptor - * @throws org.apache.hadoop.hbase.TableNotFoundException - * @throws IOException if a remote or network exception occurs + * @throws TableNotFoundException if the table was not found + * @throws IOException if a remote or network exception occurs */ - TableDescriptor getDescriptor(TableName tableName) - throws TableNotFoundException, IOException; + TableDescriptor getDescriptor(TableName tableName) throws TableNotFoundException, IOException; /** * Creates a new table. Synchronous operation. - * * @param desc table descriptor for table - * @throws IllegalArgumentException if the table name is reserved + * @throws IllegalArgumentException if the table name is reserved * @throws org.apache.hadoop.hbase.MasterNotRunningException if master is not running - * @throws org.apache.hadoop.hbase.TableExistsException if table already exists (If concurrent - * threads, the table may have been created between test-for-existence and attempt-at-creation). - * @throws IOException if a remote or network exception occurs + * @throws TableExistsException if table already exists (If + * concurrent threads, the table may + * have been created between + * test-for-existence and + * attempt-at-creation). + * @throws IOException if a remote or network exception + * occurs */ default void createTable(TableDescriptor desc) throws IOException { get(createTableAsync(desc), getSyncWaitTimeout(), TimeUnit.MILLISECONDS); } /** - * Creates a new table with the specified number of regions. The start key specified will become + * Creates a new table with the specified number of regions. The start key specified will become * the end key of the first region of the table, and the end key specified will become the start * key of the last region of the table (the first region has a null start key and the last region * has a null end key). BigInteger math will be used to divide the key range specified into enough * segments to make the required number of total regions. Synchronous operation. - * - * @param desc table descriptor for table - * @param startKey beginning of key range - * @param endKey end of key range + * @param desc table descriptor for table + * @param startKey beginning of key range + * @param endKey end of key range * @param numRegions the total number of regions to create - * @throws IllegalArgumentException if the table name is reserved - * @throws IOException if a remote or network exception occurs + * @throws IllegalArgumentException if the table name is reserved + * @throws IOException if a remote or network exception + * occurs * @throws org.apache.hadoop.hbase.MasterNotRunningException if master is not running - * @throws org.apache.hadoop.hbase.TableExistsException if table already exists (If concurrent - * threads, the table may have been created between test-for-existence and attempt-at-creation). + * @throws TableExistsException if table already exists (If + * concurrent threads, the table may + * have been created between + * test-for-existence and + * attempt-at-creation). 
*/ void createTable(TableDescriptor desc, byte[] startKey, byte[] endKey, int numRegions) - throws IOException; + throws IOException; /** * Creates a new table with an initial set of empty regions defined by the specified split keys. * The total number of regions created will be the number of split keys plus one. Synchronous * operation. Note : Avoid passing empty split key. - * - * @param desc table descriptor for table + * @param desc table descriptor for table * @param splitKeys array of split keys for the initial regions of the table - * @throws IllegalArgumentException if the table name is reserved, if the split keys are repeated - * and if the split key has empty byte array. + * @throws IllegalArgumentException if the table name is reserved, if the + * split keys are repeated and if the + * split key has empty byte array. * @throws org.apache.hadoop.hbase.MasterNotRunningException if master is not running - * @throws org.apache.hadoop.hbase.TableExistsException if table already exists (If concurrent - * threads, the table may have been created between test-for-existence and attempt-at-creation). - * @throws IOException if a remote or network exception occurs + * @throws TableExistsException if table already exists (If + * concurrent threads, the table may + * have been created between + * test-for-existence and + * attempt-at-creation). + * @throws IOException if a remote or network exception + * occurs */ default void createTable(TableDescriptor desc, byte[][] splitKeys) throws IOException { get(createTableAsync(desc, splitKeys), getSyncWaitTimeout(), TimeUnit.MILLISECONDS); @@ -378,7 +371,7 @@ default void createTable(TableDescriptor desc, byte[][] splitKeys) throws IOExce *

    * Throws IllegalArgumentException Bad table name, if the split keys are repeated and if the split * key has empty byte array. - * @param desc table descriptor for table + * @param desc table descriptor for table * @param splitKeys keys to check if the table has been created with all split keys * @throws IOException if a remote or network exception occurs * @return the result of the async creation. You can use Future.get(long, TimeUnit) to wait on the @@ -396,62 +389,53 @@ default void deleteTable(TableName tableName) throws IOException { } /** - * Deletes the table but does not block and wait for it to be completely removed. - * You can use Future.get(long, TimeUnit) to wait on the operation to complete. - * It may throw ExecutionException if there was an error while executing the operation - * or TimeoutException in case the wait timeout was not long enough to allow the - * operation to complete. - * + * Deletes the table but does not block and wait for it to be completely removed. You can use + * Future.get(long, TimeUnit) to wait on the operation to complete. It may throw + * ExecutionException if there was an error while executing the operation or TimeoutException in + * case the wait timeout was not long enough to allow the operation to complete. * @param tableName name of table to delete * @throws IOException if a remote or network exception occurs - * @return the result of the async delete. You can use Future.get(long, TimeUnit) - * to wait on the operation to complete. + * @return the result of the async delete. You can use Future.get(long, TimeUnit) to wait on the + * operation to complete. */ Future deleteTableAsync(TableName tableName) throws IOException; /** * Deletes tables matching the passed in pattern and wait on completion. Warning: Use this method - * carefully, there is no prompting and the effect is immediate. Consider using {@link - * #listTableDescriptors(Pattern)} - * and {@link #deleteTable(org.apache.hadoop.hbase.TableName)} - * + * carefully, there is no prompting and the effect is immediate. Consider using + * {@link #listTableDescriptors(Pattern)} and + * {@link #deleteTable(org.apache.hadoop.hbase.TableName)} * @param regex The regular expression to match table names against - * @return Table descriptors for tables that couldn't be deleted. - * The return htds are read-only + * @return Table descriptors for tables that couldn't be deleted. The return htds are read-only * @throws IOException if a remote or network exception occurs * @see #deleteTables(java.util.regex.Pattern) * @see #deleteTable(org.apache.hadoop.hbase.TableName) - * @deprecated since 2.0 version and will be removed in 3.0 version - * This is just a trivial helper method without any magic. - * Consider using {@link #listTableDescriptors(Pattern)} - * and {@link #deleteTable(TableName)} + * @deprecated since 2.0 version and will be removed in 3.0 version This is just a trivial helper + * method without any magic. Consider using {@link #listTableDescriptors(Pattern)} and + * {@link #deleteTable(TableName)} */ @Deprecated HTableDescriptor[] deleteTables(String regex) throws IOException; /** * Delete tables matching the passed in pattern and wait on completion. Warning: Use this method - * carefully, there is no prompting and the effect is immediate. Consider using {@link - * #listTableDescriptors(java.util.regex.Pattern)} and + * carefully, there is no prompting and the effect is immediate. 
Consider using + * {@link #listTableDescriptors(java.util.regex.Pattern)} and * {@link #deleteTable(org.apache.hadoop.hbase.TableName)} - * * @param pattern The pattern to match table names against - * @return Table descriptors for tables that couldn't be deleted - * The return htds are read-only + * @return Table descriptors for tables that couldn't be deleted The return htds are read-only * @throws IOException if a remote or network exception occurs - * @deprecated since 2.0 version and will be removed in 3.0 version - * This is just a trivial helper method without any magic. - * Consider using {@link #listTableDescriptors(java.util.regex.Pattern)} - * and {@link #deleteTable(TableName)} + * @deprecated since 2.0 version and will be removed in 3.0 version This is just a trivial helper + * method without any magic. Consider using + * {@link #listTableDescriptors(java.util.regex.Pattern)} and + * {@link #deleteTable(TableName)} */ @Deprecated HTableDescriptor[] deleteTables(Pattern pattern) throws IOException; /** - * Truncate a table. - * Synchronous operation. - * - * @param tableName name of table to truncate + * Truncate a table. Synchronous operation. + * @param tableName name of table to truncate * @param preserveSplits true if the splits should be preserved * @throws IOException if a remote or network exception occurs */ @@ -464,14 +448,13 @@ default void truncateTable(TableName tableName, boolean preserveSplits) throws I * Future.get(long, TimeUnit) to wait on the operation to complete. It may throw * ExecutionException if there was an error while executing the operation or TimeoutException in * case the wait timeout was not long enough to allow the operation to complete. - * @param tableName name of table to delete + * @param tableName name of table to delete * @param preserveSplits true if the splits should be preserved * @throws IOException if a remote or network exception occurs * @return the result of the async truncate. You can use Future.get(long, TimeUnit) to wait on the * operation to complete. */ - Future truncateTableAsync(TableName tableName, boolean preserveSplits) - throws IOException; + Future truncateTableAsync(TableName tableName, boolean preserveSplits) throws IOException; /** * Enable a table. May timeout. Use {@link #enableTableAsync(org.apache.hadoop.hbase.TableName)} @@ -479,8 +462,8 @@ Future truncateTableAsync(TableName tableName, boolean preserveSplits) * disabled state for it to be enabled. * @param tableName name of the table * @throws IOException if a remote or network exception occurs There could be couple types of - * IOException TableNotFoundException means the table doesn't exist. - * TableNotDisabledException means the table isn't in disabled state. + * IOException TableNotFoundException means the table doesn't exist. + * TableNotDisabledException means the table isn't in disabled state. * @see #isTableEnabled(org.apache.hadoop.hbase.TableName) * @see #disableTable(org.apache.hadoop.hbase.TableName) * @see #enableTableAsync(org.apache.hadoop.hbase.TableName) @@ -490,67 +473,59 @@ default void enableTable(TableName tableName) throws IOException { } /** - * Enable the table but does not block and wait for it to be completely enabled. - * You can use Future.get(long, TimeUnit) to wait on the operation to complete. - * It may throw ExecutionException if there was an error while executing the operation - * or TimeoutException in case the wait timeout was not long enough to allow the - * operation to complete. 
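The recurring "does not block and wait" wording in these Async javadocs (deleteTableAsync above, enableTableAsync here, and the others that follow) always boils down to the same calling pattern: the method returns a Future and the caller decides how long to wait. A hedged sketch with an arbitrary timeout:

import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class AsyncAdminPatternSketch {
  static void enableAndWait(Admin admin, TableName table) throws Exception {
    Future<Void> pending = admin.enableTableAsync(table);
    // get() may throw ExecutionException on failure, or TimeoutException if the wait is too short.
    pending.get(5, TimeUnit.MINUTES);
  }
}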
- * + * Enable the table but does not block and wait for it to be completely enabled. You can use + * Future.get(long, TimeUnit) to wait on the operation to complete. It may throw + * ExecutionException if there was an error while executing the operation or TimeoutException in + * case the wait timeout was not long enough to allow the operation to complete. * @param tableName name of table to delete * @throws IOException if a remote or network exception occurs - * @return the result of the async enable. You can use Future.get(long, TimeUnit) - * to wait on the operation to complete. + * @return the result of the async enable. You can use Future.get(long, TimeUnit) to wait on the + * operation to complete. */ Future enableTableAsync(TableName tableName) throws IOException; /** * Enable tables matching the passed in pattern and wait on completion. Warning: Use this method - * carefully, there is no prompting and the effect is immediate. Consider using {@link - * #listTableDescriptors(Pattern)} and {@link #enableTable(org.apache.hadoop.hbase.TableName)} - * + * carefully, there is no prompting and the effect is immediate. Consider using + * {@link #listTableDescriptors(Pattern)} and + * {@link #enableTable(org.apache.hadoop.hbase.TableName)} * @param regex The regular expression to match table names against * @throws IOException if a remote or network exception occurs - * @return Table descriptors for tables that couldn't be enabled. - * The return HTDs are read-only. + * @return Table descriptors for tables that couldn't be enabled. The return HTDs are read-only. * @see #enableTables(java.util.regex.Pattern) * @see #enableTable(org.apache.hadoop.hbase.TableName) - * @deprecated since 2.0 version and will be removed in 3.0 version - * This is just a trivial helper method without any magic. - * Consider using {@link #listTableDescriptors(Pattern)} - * and {@link #enableTable(org.apache.hadoop.hbase.TableName)} + * @deprecated since 2.0 version and will be removed in 3.0 version This is just a trivial helper + * method without any magic. Consider using {@link #listTableDescriptors(Pattern)} and + * {@link #enableTable(org.apache.hadoop.hbase.TableName)} */ @Deprecated HTableDescriptor[] enableTables(String regex) throws IOException; /** * Enable tables matching the passed in pattern and wait on completion. Warning: Use this method - * carefully, there is no prompting and the effect is immediate. Consider using {@link - * #listTableDescriptors(java.util.regex.Pattern)} and + * carefully, there is no prompting and the effect is immediate. Consider using + * {@link #listTableDescriptors(java.util.regex.Pattern)} and * {@link #enableTable(org.apache.hadoop.hbase.TableName)} - * * @param pattern The pattern to match table names against * @throws IOException if a remote or network exception occurs - * @return Table descriptors for tables that couldn't be enabled. - * The return HTDs are read-only. - * @deprecated since 2.0 version and will be removed in 3.0 version - * This is just a trivial helper method without any magic. - * Consider using {@link #listTableDescriptors(java.util.regex.Pattern)} - * and {@link #enableTable(org.apache.hadoop.hbase.TableName)} + * @return Table descriptors for tables that couldn't be enabled. The return HTDs are read-only. + * @deprecated since 2.0 version and will be removed in 3.0 version This is just a trivial helper + * method without any magic. 
Consider using + * {@link #listTableDescriptors(java.util.regex.Pattern)} and + * {@link #enableTable(org.apache.hadoop.hbase.TableName)} */ @Deprecated HTableDescriptor[] enableTables(Pattern pattern) throws IOException; /** - * Disable the table but does not block and wait for it to be completely disabled. - * You can use Future.get(long, TimeUnit) to wait on the operation to complete. - * It may throw ExecutionException if there was an error while executing the operation - * or TimeoutException in case the wait timeout was not long enough to allow the - * operation to complete. - * + * Disable the table but does not block and wait for it to be completely disabled. You can use + * Future.get(long, TimeUnit) to wait on the operation to complete. It may throw + * ExecutionException if there was an error while executing the operation or TimeoutException in + * case the wait timeout was not long enough to allow the operation to complete. * @param tableName name of table to delete * @throws IOException if a remote or network exception occurs - * @return the result of the async disable. You can use Future.get(long, TimeUnit) - * to wait on the operation to complete. + * @return the result of the async disable. You can use Future.get(long, TimeUnit) to wait on the + * operation to complete. */ Future disableTableAsync(TableName tableName) throws IOException; @@ -559,9 +534,9 @@ default void enableTable(TableName tableName) throws IOException { * {@link #disableTableAsync(org.apache.hadoop.hbase.TableName)} and * {@link #isTableDisabled(org.apache.hadoop.hbase.TableName)} instead. The table has to be in * enabled state for it to be disabled. - * @param tableName * @throws IOException There could be couple types of IOException TableNotFoundException means the - * table doesn't exist. TableNotEnabledException means the table isn't in enabled state. + * table doesn't exist. TableNotEnabledException means the table isn't in + * enabled state. */ default void disableTable(TableName tableName) throws IOException { get(disableTableAsync(tableName), getSyncWaitTimeout(), TimeUnit.MILLISECONDS); @@ -569,42 +544,39 @@ default void disableTable(TableName tableName) throws IOException { /** * Disable tables matching the passed in pattern and wait on completion. Warning: Use this method - * carefully, there is no prompting and the effect is immediate. Consider using {@link - * #listTableDescriptors(Pattern)} and {@link #disableTable(org.apache.hadoop.hbase.TableName)} - * + * carefully, there is no prompting and the effect is immediate. Consider using + * {@link #listTableDescriptors(Pattern)} and + * {@link #disableTable(org.apache.hadoop.hbase.TableName)} * @param regex The regular expression to match table names against - * @return Table descriptors for tables that couldn't be disabled - * The return htds are read-only + * @return Table descriptors for tables that couldn't be disabled The return htds are read-only * @throws IOException if a remote or network exception occurs * @see #disableTables(java.util.regex.Pattern) * @see #disableTable(org.apache.hadoop.hbase.TableName) - * @deprecated since 2.0 version and will be removed in 3.0 version - * This is just a trivial helper method without any magic. - * Consider using {@link #listTableDescriptors(Pattern)} - * and {@link #disableTable(org.apache.hadoop.hbase.TableName)} + * @deprecated since 2.0 version and will be removed in 3.0 version This is just a trivial helper + * method without any magic. 
Consider using {@link #listTableDescriptors(Pattern)} and + * {@link #disableTable(org.apache.hadoop.hbase.TableName)} */ @Deprecated HTableDescriptor[] disableTables(String regex) throws IOException; /** * Disable tables matching the passed in pattern and wait on completion. Warning: Use this method - * carefully, there is no prompting and the effect is immediate. Consider using {@link - * #listTableDescriptors(java.util.regex.Pattern)} and + * carefully, there is no prompting and the effect is immediate. Consider using + * {@link #listTableDescriptors(java.util.regex.Pattern)} and * {@link #disableTable(org.apache.hadoop.hbase.TableName)} - * * @param pattern The pattern to match table names against - * @return Table descriptors for tables that couldn't be disabled - * The return htds are read-only + * @return Table descriptors for tables that couldn't be disabled The return htds are read-only * @throws IOException if a remote or network exception occurs - * @deprecated since 2.0 version and will be removed in 3.0 version - * This is just a trivial helper method without any magic. - * Consider using {@link #listTableDescriptors(java.util.regex.Pattern)} - * and {@link #disableTable(org.apache.hadoop.hbase.TableName)} + * @deprecated since 2.0 version and will be removed in 3.0 version This is just a trivial helper + * method without any magic. Consider using + * {@link #listTableDescriptors(java.util.regex.Pattern)} and + * {@link #disableTable(org.apache.hadoop.hbase.TableName)} */ @Deprecated HTableDescriptor[] disableTables(Pattern pattern) throws IOException; /** + * Check if a table is enabled. * @param tableName name of table to check * @return true if table is on-line * @throws IOException if a remote or network exception occurs @@ -612,6 +584,7 @@ default void disableTable(TableName tableName) throws IOException { boolean isTableEnabled(TableName tableName) throws IOException; /** + * Check if a table is disabled. * @param tableName name of table to check * @return true if table is off-line * @throws IOException if a remote or network exception occurs @@ -619,6 +592,7 @@ default void disableTable(TableName tableName) throws IOException { boolean isTableDisabled(TableName tableName) throws IOException; /** + * Check if a table is available. * @param tableName name of table to check * @return true if all regions of the table are available * @throws IOException if a remote or network exception occurs @@ -629,7 +603,6 @@ default void disableTable(TableName tableName) throws IOException { * Use this api to check if the table has been created with the specified number of splitkeys * which was used while creating the given table. Note : If this api is used after a table's * region gets splitted, the api may return false. - * * @param tableName name of table to check * @param splitKeys keys to check if the table has been created with all split keys * @throws IOException if a remote or network excpetion occurs @@ -641,13 +614,12 @@ default void disableTable(TableName tableName) throws IOException { /** * Get the status of an alter (a.k.a modify) command - indicates how * many regions have received the updated schema Asynchronous operation. 
- * * @param tableName TableName instance * @return Pair indicating the number of regions updated Pair.getFirst() is the regions that are - * yet to be updated Pair.getSecond() is the total number of regions of the table + * yet to be updated Pair.getSecond() is the total number of regions of the table * @throws IOException if a remote or network exception occurs - * @deprecated Since 2.0.0. Will be removed in 3.0.0. No longer needed now you get a Future - * on an operation. + * @deprecated Since 2.0.0. Will be removed in 3.0.0. No longer needed now you get a Future on an + * operation. */ @Deprecated Pair getAlterStatus(TableName tableName) throws IOException; @@ -655,28 +627,25 @@ default void disableTable(TableName tableName) throws IOException { /** * Get the status of alter (a.k.a modify) command - indicates how many * regions have received the updated schema Asynchronous operation. - * * @param tableName name of the table to get the status of * @return Pair indicating the number of regions updated Pair.getFirst() is the regions that are - * yet to be updated Pair.getSecond() is the total number of regions of the table + * yet to be updated Pair.getSecond() is the total number of regions of the table * @throws IOException if a remote or network exception occurs - * @deprecated Since 2.0.0. Will be removed in 3.0.0. No longer needed now you get a Future - * on an operation. + * @deprecated Since 2.0.0. Will be removed in 3.0.0. No longer needed now you get a Future on an + * operation. */ @Deprecated Pair getAlterStatus(byte[] tableName) throws IOException; /** - * Add a column family to an existing table. Synchronous operation. - * Use {@link #addColumnFamilyAsync(TableName, ColumnFamilyDescriptor)} instead because it - * returns a {@link Future} from which you can learn whether success or failure. - * - * @param tableName name of the table to add column family to + * Add a column family to an existing table. Synchronous operation. Use + * {@link #addColumnFamilyAsync(TableName, ColumnFamilyDescriptor)} instead because it returns a + * {@link Future} from which you can learn whether success or failure. + * @param tableName name of the table to add column family to * @param columnFamily column family descriptor of column family to be added * @throws IOException if a remote or network exception occurs - * @deprecated As of release 2.0.0. - * This will be removed in HBase 3.0.0. - * Use {@link #addColumnFamily(TableName, ColumnFamilyDescriptor)}. + * @deprecated As of release 2.0.0. This will be removed in HBase 3.0.0. Use + * {@link #addColumnFamily(TableName, ColumnFamilyDescriptor)}. */ @Deprecated default void addColumn(TableName tableName, ColumnFamilyDescriptor columnFamily) @@ -685,55 +654,50 @@ default void addColumn(TableName tableName, ColumnFamilyDescriptor columnFamily) } /** - * Add a column family to an existing table. Synchronous operation. - * Use {@link #addColumnFamilyAsync(TableName, ColumnFamilyDescriptor)} instead because it - * returns a {@link Future} from which you can learn whether success or failure. - * - * @param tableName name of the table to add column family to + * Add a column family to an existing table. Synchronous operation. Use + * {@link #addColumnFamilyAsync(TableName, ColumnFamilyDescriptor)} instead because it returns a + * {@link Future} from which you can learn whether success or failure. 
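The addColumn/addColumnFamily hunks here, together with the deleteColumnFamily and modifyColumnFamily ones just below, all share the same calling shape: a table name plus either a ColumnFamilyDescriptor or a family byte[]. A hedged sketch against an assumed existing table named "demo":

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class ColumnFamilyAdminSketch {
  static void reshapeFamilies(Admin admin) throws Exception {
    TableName table = TableName.valueOf("demo");             // assumed table name
    admin.addColumnFamily(table, ColumnFamilyDescriptorBuilder.of("extra"));
    admin.modifyColumnFamily(table, ColumnFamilyDescriptorBuilder
      .newBuilder(Bytes.toBytes("extra")).setMaxVersions(3).build());
    admin.deleteColumnFamily(table, Bytes.toBytes("extra"));
  }
}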
+ * @param tableName name of the table to add column family to * @param columnFamily column family descriptor of column family to be added * @throws IOException if a remote or network exception occurs */ default void addColumnFamily(TableName tableName, ColumnFamilyDescriptor columnFamily) - throws IOException { + throws IOException { get(addColumnFamilyAsync(tableName, columnFamily), getSyncWaitTimeout(), TimeUnit.MILLISECONDS); } /** - * Add a column family to an existing table. Asynchronous operation. - * You can use Future.get(long, TimeUnit) to wait on the operation to complete. - * It may throw ExecutionException if there was an error while executing the operation - * or TimeoutException in case the wait timeout was not long enough to allow the - * operation to complete. - * - * @param tableName name of the table to add column family to + * Add a column family to an existing table. Asynchronous operation. You can use Future.get(long, + * TimeUnit) to wait on the operation to complete. It may throw ExecutionException if there was an + * error while executing the operation or TimeoutException in case the wait timeout was not long + * enough to allow the operation to complete. + * @param tableName name of the table to add column family to * @param columnFamily column family descriptor of column family to be added * @throws IOException if a remote or network exception occurs * @return the result of the async add column family. You can use Future.get(long, TimeUnit) to * wait on the operation to complete. */ Future addColumnFamilyAsync(TableName tableName, ColumnFamilyDescriptor columnFamily) - throws IOException; + throws IOException; /** - * Delete a column family from a table. Synchronous operation. - * Use {@link #deleteColumnFamily(TableName, byte[])} instead because it - * returns a {@link Future} from which you can learn whether success or failure. - * - * @param tableName name of table + * Delete a column family from a table. Synchronous operation. Use + * {@link #deleteColumnFamily(TableName, byte[])} instead because it returns a {@link Future} from + * which you can learn whether success or failure. + * @param tableName name of table * @param columnFamily name of column family to be deleted * @throws IOException if a remote or network exception occurs - * @deprecated As of release 2.0.0. - * This will be removed in HBase 3.0.0. - * Use {@link #deleteColumnFamily(TableName, byte[])}}. + * @deprecated As of release 2.0.0. This will be removed in HBase 3.0.0. Use + * {@link #deleteColumnFamily(TableName, byte[])}}. */ @Deprecated void deleteColumn(TableName tableName, byte[] columnFamily) throws IOException; /** - * Delete a column family from a table. Synchronous operation. - * Use {@link #deleteColumnFamily(TableName, byte[])} instead because it - * returns a {@link Future} from which you can learn whether success or failure. - * @param tableName name of table + * Delete a column family from a table. Synchronous operation. Use + * {@link #deleteColumnFamily(TableName, byte[])} instead because it returns a {@link Future} from + * which you can learn whether success or failure. + * @param tableName name of table * @param columnFamily name of column family to be deleted * @throws IOException if a remote or network exception occurs */ @@ -743,117 +707,133 @@ default void deleteColumnFamily(TableName tableName, byte[] columnFamily) throws } /** - * Delete a column family from a table. Asynchronous operation. 
- * You can use Future.get(long, TimeUnit) to wait on the operation to complete. - * It may throw ExecutionException if there was an error while executing the operation - * or TimeoutException in case the wait timeout was not long enough to allow the - * operation to complete. - * - * @param tableName name of table + * Delete a column family from a table. Asynchronous operation. You can use Future.get(long, + * TimeUnit) to wait on the operation to complete. It may throw ExecutionException if there was an + * error while executing the operation or TimeoutException in case the wait timeout was not long + * enough to allow the operation to complete. + * @param tableName name of table * @param columnFamily name of column family to be deleted * @throws IOException if a remote or network exception occurs * @return the result of the async delete column family. You can use Future.get(long, TimeUnit) to * wait on the operation to complete. */ - Future deleteColumnFamilyAsync(TableName tableName, byte[] columnFamily) - throws IOException; + Future deleteColumnFamilyAsync(TableName tableName, byte[] columnFamily) throws IOException; /** * Modify an existing column family on a table. Synchronous operation. Use * {@link #modifyColumnFamilyAsync(TableName, ColumnFamilyDescriptor)} instead because it returns * a {@link Future} from which you can learn whether success or failure. - * @param tableName name of table + * @param tableName name of table * @param columnFamily new column family descriptor to use * @throws IOException if a remote or network exception occurs - * @deprecated As of release 2.0.0. - * This will be removed in HBase 3.0.0. - * Use {@link #modifyColumnFamily(TableName, ColumnFamilyDescriptor)}. + * @deprecated As of release 2.0.0. This will be removed in HBase 3.0.0. Use + * {@link #modifyColumnFamily(TableName, ColumnFamilyDescriptor)}. */ @Deprecated default void modifyColumn(TableName tableName, ColumnFamilyDescriptor columnFamily) - throws IOException { + throws IOException { modifyColumnFamily(tableName, columnFamily); } /** - * Modify an existing column family on a table. Synchronous operation. - * Use {@link #modifyColumnFamilyAsync(TableName, ColumnFamilyDescriptor)} instead because it - * returns a {@link Future} from which you can learn whether success or failure. - * @param tableName name of table + * Modify an existing column family on a table. Synchronous operation. Use + * {@link #modifyColumnFamilyAsync(TableName, ColumnFamilyDescriptor)} instead because it returns + * a {@link Future} from which you can learn whether success or failure. + * @param tableName name of table * @param columnFamily new column family descriptor to use * @throws IOException if a remote or network exception occurs */ default void modifyColumnFamily(TableName tableName, ColumnFamilyDescriptor columnFamily) - throws IOException { + throws IOException { get(modifyColumnFamilyAsync(tableName, columnFamily), getSyncWaitTimeout(), TimeUnit.MILLISECONDS); } /** - * Modify an existing column family on a table. Asynchronous operation. - * You can use Future.get(long, TimeUnit) to wait on the operation to complete. - * It may throw ExecutionException if there was an error while executing the operation - * or TimeoutException in case the wait timeout was not long enough to allow the - * operation to complete. - * - * @param tableName name of table + * Modify an existing column family on a table. Asynchronous operation. You can use + * Future.get(long, TimeUnit) to wait on the operation to complete. 
It may throw + * ExecutionException if there was an error while executing the operation or TimeoutException in + * case the wait timeout was not long enough to allow the operation to complete. + * @param tableName name of table * @param columnFamily new column family descriptor to use * @throws IOException if a remote or network exception occurs * @return the result of the async modify column family. You can use Future.get(long, TimeUnit) to * wait on the operation to complete. */ Future modifyColumnFamilyAsync(TableName tableName, ColumnFamilyDescriptor columnFamily) - throws IOException; + throws IOException; + + /** + * Change the store file tracker of the given table's given family. + * @param tableName the table you want to change + * @param family the family you want to change + * @param dstSFT the destination store file tracker + * @throws IOException if a remote or network exception occurs + */ + default void modifyColumnFamilyStoreFileTracker(TableName tableName, byte[] family, String dstSFT) + throws IOException { + get(modifyColumnFamilyStoreFileTrackerAsync(tableName, family, dstSFT), getSyncWaitTimeout(), + TimeUnit.MILLISECONDS); + } + + /** + * Change the store file tracker of the given table's given family. + * @param tableName the table you want to change + * @param family the family you want to change + * @param dstSFT the destination store file tracker + * @return the result of the async modify. You can use Future.get(long, TimeUnit) to wait on the + * operation to complete + * @throws IOException if a remote or network exception occurs + */ + Future modifyColumnFamilyStoreFileTrackerAsync(TableName tableName, byte[] family, + String dstSFT) throws IOException; /** * Uses {@link #unassign(byte[], boolean)} to unassign the region. For expert-admins. - * * @param regionname region name to close * @param serverName Deprecated. Not used. * @throws IOException if a remote or network exception occurs - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. - * Use {@link #unassign(byte[], boolean)}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. Use + * {@link #unassign(byte[], boolean)}. */ @Deprecated void closeRegion(String regionname, String serverName) throws IOException; /** * Uses {@link #unassign(byte[], boolean)} to unassign the region. For expert-admins. - * * @param regionname region name to close * @param serverName Deprecated. Not used. * @throws IOException if a remote or network exception occurs - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. - * Use {@link #unassign(byte[], boolean)}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. Use + * {@link #unassign(byte[], boolean)}. */ @Deprecated void closeRegion(byte[] regionname, String serverName) throws IOException; /** * Uses {@link #unassign(byte[], boolean)} to unassign the region. For expert-admins. - * * @param encodedRegionName The encoded region name; i.e. the hash that makes up the region name - * suffix: e.g. if regionname is - * TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396., - * then the encoded region name is: 527db22f95c8a9e0116f0cc13c680396. - * @param serverName Deprecated. Not used. + * suffix: e.g. if regionname is + * TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396., + * then the encoded region name is: + * 527db22f95c8a9e0116f0cc13c680396. + * @param serverName Deprecated. Not used. * @return Deprecated. Returns true always. 
* @throws IOException if a remote or network exception occurs - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. - * Use {@link #unassign(byte[], boolean)}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. Use + * {@link #unassign(byte[], boolean)}. */ @Deprecated boolean closeRegionWithEncodedRegionName(String encodedRegionName, String serverName) - throws IOException; + throws IOException; /** * Used {@link #unassign(byte[], boolean)} to unassign the region. For expert-admins. - * * @param sn Deprecated. Not used. * @throws IOException if a remote or network exception occurs * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * (HBASE-18231). - * Use {@link #unassign(byte[], boolean)}. + * (HBASE-18231). Use + * {@link #unassign(byte[], boolean)}. */ @Deprecated void closeRegion(final ServerName sn, final HRegionInfo hri) throws IOException; @@ -862,15 +842,14 @@ boolean closeRegionWithEncodedRegionName(String encodedRegionName, String server * Get all the online regions on a region server. * @throws IOException if a remote or network exception occurs * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0 - * (HBASE-17980). - * Use {@link #getRegions(ServerName sn)}. + * (HBASE-17980). Use + * {@link #getRegions(ServerName sn)}. */ @Deprecated List getOnlineRegions(ServerName sn) throws IOException; /** * Get all the online regions on a region server. - * * @return List of {@link RegionInfo} * @throws IOException if a remote or network exception occurs */ @@ -878,17 +857,15 @@ boolean closeRegionWithEncodedRegionName(String encodedRegionName, String server /** * Flush a table. Synchronous operation. - * * @param tableName table to flush * @throws IOException if a remote or network exception occurs */ void flush(TableName tableName) throws IOException; /** - * Flush the specified column family stores on all regions of the passed table. - * This runs as a synchronous operation. - * - * @param tableName table to flush + * Flush the specified column family stores on all regions of the passed table. This runs as a + * synchronous operation. + * @param tableName table to flush * @param columnFamily column family within a table * @throws IOException if a remote or network exception occurs */ @@ -896,7 +873,6 @@ boolean closeRegionWithEncodedRegionName(String encodedRegionName, String server /** * Flush an individual region. Synchronous operation. - * * @param regionName region to flush * @throws IOException if a remote or network exception occurs */ @@ -904,8 +880,7 @@ boolean closeRegionWithEncodedRegionName(String encodedRegionName, String server /** * Flush a column family within a region. Synchronous operation. - * - * @param regionName region to flush + * @param regionName region to flush * @param columnFamily column family within a region * @throws IOException if a remote or network exception occurs */ @@ -919,10 +894,8 @@ boolean closeRegionWithEncodedRegionName(String encodedRegionName, String server void flushRegionServer(ServerName serverName) throws IOException; /** - * Compact a table. Asynchronous operation in that this method requests that a - * Compaction run and then it returns. It does not wait on the completion of Compaction - * (it can take a while). - * + * Compact a table. Asynchronous operation in that this method requests that a Compaction run and + * then it returns. It does not wait on the completion of Compaction (it can take a while). 
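The flush calls above come in table, column-family, region and whole-regionserver flavours. A small sketch with invented names:

```java
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.util.Bytes;

public final class FlushExample {
  // 'admin' is assumed to come from Connection#getAdmin(), as in the earlier sketch.
  static void flushExamples(Admin admin) throws Exception {
    TableName table = TableName.valueOf("demo_table");
    admin.flush(table);                        // all stores of all regions of the table
    admin.flush(table, Bytes.toBytes("cf1"));  // only one column family
    // Every region hosted by one server (server name format: host,port,startcode).
    admin.flushRegionServer(ServerName.valueOf("host187.example.com,60020,1289493121758"));
  }
}
```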
* @param tableName table to compact * @throws IOException if a remote or network exception occurs */ @@ -930,9 +903,8 @@ boolean closeRegionWithEncodedRegionName(String encodedRegionName, String server /** * Compact an individual region. Asynchronous operation in that this method requests that a - * Compaction run and then it returns. It does not wait on the completion of Compaction - * (it can take a while). - * + * Compaction run and then it returns. It does not wait on the completion of Compaction (it can + * take a while). * @param regionName region to compact * @throws IOException if a remote or network exception occurs */ @@ -940,122 +912,103 @@ boolean closeRegionWithEncodedRegionName(String encodedRegionName, String server /** * Compact a column family within a table. Asynchronous operation in that this method requests - * that a Compaction run and then it returns. It does not wait on the completion of Compaction - * (it can take a while). - * - * @param tableName table to compact + * that a Compaction run and then it returns. It does not wait on the completion of Compaction (it + * can take a while). + * @param tableName table to compact * @param columnFamily column family within a table * @throws IOException if a remote or network exception occurs */ - void compact(TableName tableName, byte[] columnFamily) - throws IOException; + void compact(TableName tableName, byte[] columnFamily) throws IOException; /** * Compact a column family within a region. Asynchronous operation in that this method requests - * that a Compaction run and then it returns. It does not wait on the completion of Compaction - * (it can take a while). - * - * @param regionName region to compact + * that a Compaction run and then it returns. It does not wait on the completion of Compaction (it + * can take a while). + * @param regionName region to compact * @param columnFamily column family within a region * @throws IOException if a remote or network exception occurs */ - void compactRegion(byte[] regionName, byte[] columnFamily) - throws IOException; + void compactRegion(byte[] regionName, byte[] columnFamily) throws IOException; /** - * Compact a table. Asynchronous operation in that this method requests that a - * Compaction run and then it returns. It does not wait on the completion of Compaction - * (it can take a while). - * - * @param tableName table to compact + * Compact a table. Asynchronous operation in that this method requests that a Compaction run and + * then it returns. It does not wait on the completion of Compaction (it can take a while). + * @param tableName table to compact * @param compactType {@link org.apache.hadoop.hbase.client.CompactType} * @throws IOException if a remote or network exception occurs - * @throws InterruptedException */ void compact(TableName tableName, CompactType compactType) throws IOException, InterruptedException; /** - * Compact a column family within a table. Asynchronous operation in that this method - * requests that a Compaction run and then it returns. It does not wait on the - * completion of Compaction (it can take a while). - * - * @param tableName table to compact + * Compact a column family within a table. Asynchronous operation in that this method requests + * that a Compaction run and then it returns. It does not wait on the completion of Compaction (it + * can take a while). 
+ * @param tableName table to compact * @param columnFamily column family within a table - * @param compactType {@link org.apache.hadoop.hbase.client.CompactType} + * @param compactType {@link org.apache.hadoop.hbase.client.CompactType} * @throws IOException if not a mob column family or if a remote or network exception occurs - * @throws InterruptedException */ void compact(TableName tableName, byte[] columnFamily, CompactType compactType) throws IOException, InterruptedException; /** - * Major compact a table. Asynchronous operation in that this method requests - * that a Compaction run and then it returns. It does not wait on the completion of Compaction - * (it can take a while). - * + * Major compact a table. Asynchronous operation in that this method requests that a Compaction + * run and then it returns. It does not wait on the completion of Compaction (it can take a + * while). * @param tableName table to major compact * @throws IOException if a remote or network exception occurs */ void majorCompact(TableName tableName) throws IOException; /** - * Major compact a table or an individual region. Asynchronous operation in that this method requests - * that a Compaction run and then it returns. It does not wait on the completion of Compaction - * (it can take a while). - * + * Major compact a table or an individual region. Asynchronous operation in that this method + * requests that a Compaction run and then it returns. It does not wait on the completion of + * Compaction (it can take a while). * @param regionName region to major compact * @throws IOException if a remote or network exception occurs */ void majorCompactRegion(byte[] regionName) throws IOException; /** - * Major compact a column family within a table. Asynchronous operation in that this method requests - * that a Compaction run and then it returns. It does not wait on the completion of Compaction - * (it can take a while). - * - * @param tableName table to major compact + * Major compact a column family within a table. Asynchronous operation in that this method + * requests that a Compaction run and then it returns. It does not wait on the completion of + * Compaction (it can take a while). + * @param tableName table to major compact * @param columnFamily column family within a table * @throws IOException if a remote or network exception occurs */ - void majorCompact(TableName tableName, byte[] columnFamily) - throws IOException; + void majorCompact(TableName tableName, byte[] columnFamily) throws IOException; /** - * Major compact a column family within region. Asynchronous operation in that this method requests - * that a Compaction run and then it returns. It does not wait on the completion of Compaction - * (it can take a while). - * - * @param regionName egion to major compact + * Major compact a column family within region. Asynchronous operation in that this method + * requests that a Compaction run and then it returns. It does not wait on the completion of + * Compaction (it can take a while). + * @param regionName egion to major compact * @param columnFamily column family within a region * @throws IOException if a remote or network exception occurs */ - void majorCompactRegion(byte[] regionName, byte[] columnFamily) - throws IOException; + void majorCompactRegion(byte[] regionName, byte[] columnFamily) throws IOException; /** - * Major compact a table. Asynchronous operation in that this method requests that a - * Compaction run and then it returns. 
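As the javadoc stresses, the compaction calls only queue a request and return; they never wait for the compaction itself. A sketch with invented names (the CompactType.MOB line assumes a MOB-enabled family):

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.CompactType;
import org.apache.hadoop.hbase.util.Bytes;

public final class CompactionExample {
  // 'admin' is assumed to come from Connection#getAdmin(), as in the earlier sketch.
  static void requestCompactions(Admin admin) throws Exception {
    TableName table = TableName.valueOf("demo_table");
    admin.compact(table);                             // minor compaction of the whole table
    admin.majorCompact(table, Bytes.toBytes("cf1"));  // major compaction of one family
    admin.majorCompact(table, CompactType.MOB);       // MOB files instead of normal store files
  }
}
```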
It does not wait on the completion of Compaction - * (it can take a while). - * - * @param tableName table to compact + * Major compact a table. Asynchronous operation in that this method requests that a Compaction + * run and then it returns. It does not wait on the completion of Compaction (it can take a + * while). + * @param tableName table to compact * @param compactType {@link org.apache.hadoop.hbase.client.CompactType} * @throws IOException if a remote or network exception occurs - * @throws InterruptedException */ void majorCompact(TableName tableName, CompactType compactType) throws IOException, InterruptedException; /** - * Major compact a column family within a table. Asynchronous operation in that this method requests that a - * Compaction run and then it returns. It does not wait on the completion of Compaction - * (it can take a while). - * - * @param tableName table to compact + * Major compact a column family within a table. Asynchronous operation in that this method + * requests that a Compaction run and then it returns. It does not wait on the completion of + * Compaction (it can take a while). + * @param tableName table to compact * @param columnFamily column family within a table - * @param compactType {@link org.apache.hadoop.hbase.client.CompactType} + * @param compactType {@link org.apache.hadoop.hbase.client.CompactType} * @throws IOException if not a mob column family or if a remote or network exception occurs - * @throws InterruptedException */ void majorCompact(TableName tableName, byte[] columnFamily, CompactType compactType) throws IOException, InterruptedException; @@ -1064,17 +1017,16 @@ void majorCompact(TableName tableName, byte[] columnFamily, CompactType compactT * Compact all regions on the region server. Asynchronous operation in that this method requests * that a Compaction run and then it returns. It does not wait on the completion of Compaction (it * can take a while). - * @param sn the region server name + * @param sn the region server name * @param major if it's major compaction * @throws IOException if a remote or network exception occurs - * @throws InterruptedException * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. Use * {@link #compactRegionServer(ServerName)} or * {@link #majorCompactRegionServer(ServerName)}. */ @Deprecated - default void compactRegionServer(ServerName sn, boolean major) throws IOException, - InterruptedException { + default void compactRegionServer(ServerName sn, boolean major) + throws IOException, InterruptedException { if (major) { majorCompactRegionServer(sn); } else { @@ -1084,16 +1036,15 @@ default void compactRegionServer(ServerName sn, boolean major) throws IOExceptio /** * Turn the compaction on or off. Disabling compactions will also interrupt any currently ongoing - * compactions. This state is ephemeral. The setting will be lost on restart. Compaction - * can also be enabled/disabled by modifying configuration hbase.regionserver.compaction.enabled - * in hbase-site.xml. - * + * compactions. This state is ephemeral. The setting will be lost on restart. Compaction can also + * be enabled/disabled by modifying configuration hbase.regionserver.compaction.enabled in + * hbase-site.xml. * @param switchState Set to true to enable, false to disable. * @param serverNamesList list of region servers. 
* @return Previous compaction states for region servers */ Map compactionSwitch(boolean switchState, List serverNamesList) - throws IOException; + throws IOException; /** * Compact all regions on the region server. Asynchronous operation in that this method requests @@ -1116,9 +1067,10 @@ Map compactionSwitch(boolean switchState, List serv /** * Move the region encodedRegionName to a random server. * @param encodedRegionName The encoded region name; i.e. the hash that makes up the region name - * suffix: e.g. if regionname is - * TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396., - * then the encoded region name is: 527db22f95c8a9e0116f0cc13c680396. + * suffix: e.g. if regionname is + * TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396., + * then the encoded region name is: + * 527db22f95c8a9e0116f0cc13c680396. * @throws IOException if we can't find a region named encodedRegionName */ void move(byte[] encodedRegionName) throws IOException; @@ -1126,16 +1078,18 @@ Map compactionSwitch(boolean switchState, List serv /** * Move the region rencodedRegionName to destServerName. * @param encodedRegionName The encoded region name; i.e. the hash that makes up the region name - * suffix: e.g. if regionname is - * TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396., - * then the encoded region name is: 527db22f95c8a9e0116f0cc13c680396. - * @param destServerName The servername of the destination regionserver. If passed the empty byte - * array we'll assign to a random server. A server name is made of host, port and - * startcode. Here is an example: host187.example.com,60020,1289493121758 + * suffix: e.g. if regionname is + * TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396., + * then the encoded region name is: + * 527db22f95c8a9e0116f0cc13c680396. + * @param destServerName The servername of the destination regionserver. If passed the empty + * byte array we'll assign to a random server. A server name is made of + * host, port and startcode. Here is an example: + * host187.example.com,60020,1289493121758 * @throws IOException if we can't find a region named encodedRegionName * @deprecated since 2.2.0 and will be removed in 4.0.0. Use {@link #move(byte[], ServerName)} - * instead. And if you want to move the region to a random server, please use - * {@link #move(byte[])}. + * instead. And if you want to move the region to a random server, please use + * {@link #move(byte[])}. * @see HBASE-22108 */ @Deprecated @@ -1150,12 +1104,13 @@ default void move(byte[] encodedRegionName, byte[] destServerName) throws IOExce /** * Move the region rencodedRegionName to destServerName. * @param encodedRegionName The encoded region name; i.e. the hash that makes up the region name - * suffix: e.g. if regionname is - * TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396., - * then the encoded region name is: 527db22f95c8a9e0116f0cc13c680396. - * @param destServerName The servername of the destination regionserver. A server name is made of - * host, port and startcode. Here is an example: - * host187.example.com,60020,1289493121758 + * suffix: e.g. if regionname is + * TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396., + * then the encoded region name is: + * 527db22f95c8a9e0116f0cc13c680396. + * @param destServerName The servername of the destination regionserver. A server name is made + * of host, port and startcode. 
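The compactionSwitch call above takes plain server-name strings and reports the previous per-server state. The generic types are not visible in this hunk, so the sketch below assumes the List<String> / Map<ServerName, Boolean> shapes of the 2.x client, with an invented server name:

```java
import java.util.Arrays;
import java.util.Map;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;

public final class CompactionSwitchExample {
  // 'admin' is assumed to come from Connection#getAdmin(), as in the earlier sketch.
  static void pauseCompactions(Admin admin) throws Exception {
    // Ephemeral switch: the setting is lost when the regionserver restarts.
    Map<ServerName, Boolean> previous = admin.compactionSwitch(false,
      Arrays.asList("host187.example.com,60020,1289493121758"));
    previous.forEach((server, wasEnabled) ->
      System.out.println(server + " compactions previously enabled: " + wasEnabled));
    admin.compactionSwitch(true,
      Arrays.asList("host187.example.com,60020,1289493121758")); // re-enable afterwards
  }
}
```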
Here is an example: + * host187.example.com,60020,1289493121758 * @throws IOException if we can't find a region named encodedRegionName */ void move(byte[] encodedRegionName, ServerName destServerName) throws IOException; @@ -1169,22 +1124,21 @@ default void move(byte[] encodedRegionName, byte[] destServerName) throws IOExce /** * Unassign a Region. - * @param regionName Region name to assign. + * @param regionName Region name to unassign. * @throws IOException if a remote or network exception occurs */ void unassign(byte[] regionName) throws IOException; /** - * Unassign a region from current hosting regionserver. Region will then be assigned to a - * regionserver chosen at random. Region could be reassigned back to the same server. Use {@link - * #move(byte[], ServerName)} if you want to control the region movement. - * + * Unassign a region from current hosting regionserver. Region will then be assigned to a + * regionserver chosen at random. Region could be reassigned back to the same server. Use + * {@link #move(byte[], ServerName)} if you want to control the region movement. * @param regionName Region to unassign. Will clear any existing RegionPlan if one found. - * @param force If true, force unassign (Will remove region from regions-in-transition too if - * present. If results in double assignment use hbck -fix to resolve. To be used by experts). + * @param force If true, force unassign (Will remove region from + * regions-in-transition too if present. If results in double assignment use + * hbck -fix to resolve. To be used by experts). * @throws IOException if a remote or network exception occurs - * @deprecated since 2.4.0 and will be removed in 4.0.0. Use {@link #unassign(byte[])} - * instead. + * @deprecated since 2.4.0 and will be removed in 4.0.0. Use {@link #unassign(byte[])} instead. * @see HBASE-24875 */ @Deprecated @@ -1198,7 +1152,6 @@ default void unassign(byte[] regionName, boolean force) throws IOException { * still online as per Master's in memory state. If this API is incorrectly used on active region * then master will loose track of that region. This is a special method that should be used by * experts or hbck. - * * @param regionName Region to offline. * @throws IOException if a remote or network exception occurs */ @@ -1206,13 +1159,12 @@ default void unassign(byte[] regionName, boolean force) throws IOException { /** * Turn the load balancer on or off. - * - * @param synchronous If true, it waits until current balance() call, if - * outstanding, to return. + * @param synchronous If true, it waits until current balance() call, if outstanding, + * to return. * @return Previous balancer value * @throws IOException if a remote or network exception occurs - * @deprecated Since 2.0.0. Will be removed in 3.0.0. - * Use {@link #balancerSwitch(boolean, boolean)} instead. + * @deprecated Since 2.0.0. Will be removed in 3.0.0. Use + * {@link #balancerSwitch(boolean, boolean)} instead. */ @Deprecated default boolean setBalancerRunning(boolean on, boolean synchronous) throws IOException { @@ -1221,23 +1173,20 @@ default boolean setBalancerRunning(boolean on, boolean synchronous) throws IOExc /** * Turn the load balancer on or off. - * @param onOrOff Set to true to enable, false to disable. - * @param synchronous If true, it waits until current balance() call, if - * outstanding, to return. + * @param onOrOff Set to true to enable, false to disable. + * @param synchronous If true, it waits until current balance() call, if outstanding, + * to return. 
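A sketch of the move/unassign calls above, picking the first region of an invented table; move(byte[]) without a destination would hand the region to a random server instead:

```java
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.RegionInfo;

public final class MoveRegionExample {
  // 'admin' is assumed to come from Connection#getAdmin(), as in the earlier sketch.
  static void moveAndUnassign(Admin admin) throws Exception {
    RegionInfo region = admin.getRegions(TableName.valueOf("demo_table")).get(0);
    // Explicit destination, using the encoded region name and a host,port,startcode server name.
    admin.move(region.getEncodedNameAsBytes(),
      ServerName.valueOf("host187.example.com,60020,1289493121758"));
    // Or let the master pick a new server for it.
    admin.unassign(region.getRegionName());
  }
}
```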
* @return Previous balancer value * @throws IOException if a remote or network exception occurs */ - boolean balancerSwitch(boolean onOrOff, boolean synchronous) - throws IOException; + boolean balancerSwitch(boolean onOrOff, boolean synchronous) throws IOException; /** - * Invoke the balancer. Will run the balancer and if regions to move, it will go ahead and do the - * reassignments. Can NOT run for various reasons. Check logs. - * + * Invoke the balancer. Will run the balancer and if regions to move, it will go ahead and do the + * reassignments. Can NOT run for various reasons. Check logs. * @return true if balancer ran, false otherwise. * @throws IOException if a remote or network exception occurs - * @deprecated Since 2.0.0. Will be removed in 3.0.0. - * Use {@link #balance()} instead. + * @deprecated Since 2.0.0. Will be removed in 3.0.0. Use {@link #balance()} instead. */ @Deprecated default boolean balancer() throws IOException { @@ -1245,21 +1194,18 @@ default boolean balancer() throws IOException { } /** - * Invoke the balancer. Will run the balancer and if regions to move, it will go ahead and do the - * reassignments. Can NOT run for various reasons. Check logs. - * + * Invoke the balancer. Will run the balancer and if regions to move, it will go ahead and do the + * reassignments. Can NOT run for various reasons. Check logs. * @return true if balancer ran, false otherwise. * @throws IOException if a remote or network exception occurs */ default boolean balance() throws IOException { - return balance(BalanceRequest.defaultInstance()) - .isBalancerRan(); + return balance(BalanceRequest.defaultInstance()).isBalancerRan(); } /** - * Invoke the balancer with the given balance request. The BalanceRequest defines how the - * balancer will run. See {@link BalanceRequest} for more details. - * + * Invoke the balancer with the given balance request. The BalanceRequest defines how the balancer + * will run. See {@link BalanceRequest} for more details. * @param request defines how the balancer should run * @return {@link BalanceResponse} with details about the results of the invocation. * @throws IOException if a remote or network exception occurs @@ -1267,15 +1213,14 @@ default boolean balance() throws IOException { BalanceResponse balance(BalanceRequest request) throws IOException; /** - * Invoke the balancer. Will run the balancer and if regions to move, it will - * go ahead and do the reassignments. If there is region in transition, force parameter of true - * would still run balancer. Can *not* run for other reasons. Check - * logs. + * Invoke the balancer. Will run the balancer and if regions to move, it will go ahead and do the + * reassignments. If there is region in transition, force parameter of true would still run + * balancer. Can *not* run for other reasons. Check logs. * @param force whether we should force balance even if there is region in transition * @return true if balancer ran, false otherwise. * @throws IOException if a remote or network exception occurs - * @deprecated Since 2.0.0. Will be removed in 3.0.0. - * Use {@link #balance(BalanceRequest)} instead. + * @deprecated Since 2.0.0. Will be removed in 3.0.0. Use {@link #balance(BalanceRequest)} + * instead. */ @Deprecated default boolean balancer(boolean force) throws IOException { @@ -1283,39 +1228,33 @@ default boolean balancer(boolean force) throws IOException { } /** - * Invoke the balancer. Will run the balancer and if regions to move, it will - * go ahead and do the reassignments. 
If there is region in transition, force parameter of true - * would still run balancer. Can *not* run for other reasons. Check - * logs. + * Invoke the balancer. Will run the balancer and if regions to move, it will go ahead and do the + * reassignments. If there is region in transition, force parameter of true would still run + * balancer. Can *not* run for other reasons. Check logs. * @param force whether we should force balance even if there is region in transition * @return true if balancer ran, false otherwise. * @throws IOException if a remote or network exception occurs - * @deprecated Since 2.5.0. Will be removed in 4.0.0. - * Use {@link #balance(BalanceRequest)} instead. + * @deprecated Since 2.5.0. Will be removed in 4.0.0. Use {@link #balance(BalanceRequest)} + * instead. */ @Deprecated default boolean balance(boolean force) throws IOException { - return balance( - BalanceRequest.newBuilder() - .setIgnoreRegionsInTransition(force) - .build() - ).isBalancerRan(); + return balance(BalanceRequest.newBuilder().setIgnoreRegionsInTransition(force).build()) + .isBalancerRan(); } /** * Query the current state of the balancer. - * * @return true if the balancer is enabled, false otherwise. * @throws IOException if a remote or network exception occurs */ boolean isBalancerEnabled() throws IOException; /** - * Clear all the blocks corresponding to this table from BlockCache. For expert-admins. - * Calling this API will drop all the cached blocks specific to a table from BlockCache. - * This can significantly impact the query performance as the subsequent queries will - * have to retrieve the blocks from underlying filesystem. - * + * Clear all the blocks corresponding to this table from BlockCache. For expert-admins. Calling + * this API will drop all the cached blocks specific to a table from BlockCache. This can + * significantly impact the query performance as the subsequent queries will have to retrieve the + * blocks from underlying filesystem. * @param tableName table to clear block cache * @return CacheEvictionStats related to the eviction * @throws IOException if a remote or network exception occurs @@ -1323,11 +1262,9 @@ default boolean balance(boolean force) throws IOException { CacheEvictionStats clearBlockCache(final TableName tableName) throws IOException; /** - * Invoke region normalizer. Can NOT run for various reasons. Check logs. - * This is a non-blocking invocation to region normalizer. If return value is true, it means - * the request was submitted successfully. We need to check logs for the details of which regions - * were split/merged. - * + * Invoke region normalizer. Can NOT run for various reasons. Check logs. This is a non-blocking + * invocation to region normalizer. If return value is true, it means the request was submitted + * successfully. We need to check logs for the details of which regions were split/merged. * @return {@code true} if region normalizer ran, {@code false} otherwise. * @throws IOException if a remote or network exception occurs */ @@ -1336,11 +1273,9 @@ default boolean normalize() throws IOException { } /** - * Invoke region normalizer. Can NOT run for various reasons. Check logs. - * This is a non-blocking invocation to region normalizer. If return value is true, it means - * the request was submitted successfully. We need to check logs for the details of which regions - * were split/merged. - * + * Invoke region normalizer. Can NOT run for various reasons. Check logs. This is a non-blocking + * invocation to region normalizer. 
If return value is true, it means the request was submitted + * successfully. We need to check logs for the details of which regions were split/merged. * @param ntfp limit to tables matching the specified filter. * @return {@code true} if region normalizer ran, {@code false} otherwise. * @throws IOException if a remote or network exception occurs @@ -1349,7 +1284,6 @@ default boolean normalize() throws IOException { /** * Query the current state of the region normalizer. - * * @return true if region normalizer is enabled, false otherwise. * @throws IOException if a remote or network exception occurs */ @@ -1357,11 +1291,10 @@ default boolean normalize() throws IOException { /** * Turn region normalizer on or off. - * * @return Previous normalizer value * @throws IOException if a remote or network exception occurs * @deprecated Since 2.0.0. Will be removed in 3.0.0. Use {@link #normalizerSwitch(boolean)}} - * instead. + * instead. */ @Deprecated default boolean setNormalizerRunning(boolean on) throws IOException { @@ -1370,20 +1303,18 @@ default boolean setNormalizerRunning(boolean on) throws IOException { /** * Turn region normalizer on or off. - * * @return Previous normalizer value * @throws IOException if a remote or network exception occurs */ - boolean normalizerSwitch (boolean on) throws IOException; + boolean normalizerSwitch(boolean on) throws IOException; /** * Enable/Disable the catalog janitor. - * * @param enable if true enables the catalog janitor * @return the previous state * @throws IOException if a remote or network exception occurs * @deprecated Since 2.0.0. Will be removed in 3.0.0. Use {@link #catalogJanitorSwitch(boolean)}} - * instead. + * instead. */ @Deprecated default boolean enableCatalogJanitor(boolean enable) throws IOException { @@ -1392,7 +1323,6 @@ default boolean enableCatalogJanitor(boolean enable) throws IOException { /** * Enable/Disable the catalog janitor/ - * * @param onOrOff if true enables the catalog janitor * @return the previous state * @throws IOException if a remote or network exception occurs @@ -1401,11 +1331,9 @@ default boolean enableCatalogJanitor(boolean enable) throws IOException { /** * Ask for a scan of the catalog table. - * * @return the number of entries cleaned. Returns -1 if previous run is in progress. * @throws IOException if a remote or network exception occurs - * @deprecated Since 2.0.0. Will be removed in 3.0.0. Use {@link #runCatalogJanitor()}} - * instead. + * @deprecated Since 2.0.0. Will be removed in 3.0.0. Use {@link #runCatalogJanitor()}} instead. */ @Deprecated default int runCatalogScan() throws IOException { @@ -1414,7 +1342,6 @@ default int runCatalogScan() throws IOException { /** * Ask for a scan of the catalog table. - * * @return the number of entries cleaned * @throws IOException if a remote or network exception occurs */ @@ -1422,19 +1349,17 @@ default int runCatalogScan() throws IOException { /** * Query on the catalog janitor state (Enabled/Disabled?). - * * @throws IOException if a remote or network exception occurs */ boolean isCatalogJanitorEnabled() throws IOException; /** * Enable/Disable the cleaner chore. - * * @param on if true enables the cleaner chore * @return the previous state * @throws IOException if a remote or network exception occurs * @deprecated Since 2.0.0. Will be removed in 3.0.0. Use {@link #cleanerChoreSwitch(boolean)}} - * instead. + * instead. 
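The normalizer, catalog janitor and cleaner chore controls described here (the cleaner chore methods follow just below) all return the previous state and trigger background work rather than waiting for it. A sketch:

```java
import org.apache.hadoop.hbase.client.Admin;

public final class HousekeepingExample {
  // 'admin' is assumed to come from Connection#getAdmin(), as in the earlier sketch.
  static void runHousekeeping(Admin admin) throws Exception {
    boolean normalizerWasOn = admin.normalizerSwitch(true); // previous state
    admin.normalize();                                      // non-blocking; results show up in the logs
    admin.catalogJanitorSwitch(true);
    int cleaned = admin.runCatalogJanitor();                // number of catalog entries cleaned
    admin.cleanerChoreSwitch(true);
    boolean choreRan = admin.runCleanerChore();
    System.out.println("normalizer was on: " + normalizerWasOn
      + ", catalog entries cleaned: " + cleaned + ", cleaner chore ran: " + choreRan);
  }
}
```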
*/ @Deprecated default boolean setCleanerChoreRunning(boolean on) throws IOException { @@ -1443,7 +1368,6 @@ default boolean setCleanerChoreRunning(boolean on) throws IOException { /** * Enable/Disable the cleaner chore. - * * @param onOrOff if true enables the cleaner chore * @return the previous state * @throws IOException if a remote or network exception occurs @@ -1452,7 +1376,6 @@ default boolean setCleanerChoreRunning(boolean on) throws IOException { /** * Ask for cleaner chore to run. - * * @return true if cleaner chore ran, false otherwise * @throws IOException if a remote or network exception occurs */ @@ -1460,39 +1383,37 @@ default boolean setCleanerChoreRunning(boolean on) throws IOException { /** * Query on the cleaner chore state (Enabled/Disabled?). - * * @throws IOException if a remote or network exception occurs */ boolean isCleanerChoreEnabled() throws IOException; /** * Merge two regions. Asynchronous operation. - * * @param nameOfRegionA encoded or full name of region a * @param nameOfRegionB encoded or full name of region b - * @param forcible true if do a compulsory merge, otherwise we will only merge two - * adjacent regions + * @param forcible true if do a compulsory merge, otherwise we will only merge + * two adjacent regions * @throws IOException if a remote or network exception occurs * @deprecated Since 2.0. Will be removed in 3.0. Use - * {@link #mergeRegionsAsync(byte[], byte[], boolean)} instead. + * {@link #mergeRegionsAsync(byte[], byte[], boolean)} instead. */ @Deprecated - void mergeRegions(byte[] nameOfRegionA, byte[] nameOfRegionB, - boolean forcible) throws IOException; + void mergeRegions(byte[] nameOfRegionA, byte[] nameOfRegionB, boolean forcible) + throws IOException; /** * Merge two regions. Asynchronous operation. * @param nameOfRegionA encoded or full name of region a * @param nameOfRegionB encoded or full name of region b - * @param forcible true if do a compulsory merge, otherwise we will only merge two - * adjacent regions + * @param forcible true if do a compulsory merge, otherwise we will only merge + * two adjacent regions * @throws IOException if a remote or network exception occurs * @deprecated since 2.3.0 and will be removed in 4.0.0. Multi-region merge feature is now * supported. Use {@link #mergeRegionsAsync(byte[][], boolean)} instead. */ @Deprecated default Future mergeRegionsAsync(byte[] nameOfRegionA, byte[] nameOfRegionB, - boolean forcible) throws IOException { + boolean forcible) throws IOException { byte[][] nameofRegionsToMerge = new byte[2][]; nameofRegionsToMerge[0] = nameOfRegionA; nameofRegionsToMerge[1] = nameOfRegionB; @@ -1502,16 +1423,16 @@ default Future mergeRegionsAsync(byte[] nameOfRegionA, byte[] nameOfRegion /** * Merge multiple regions (>=2). Asynchronous operation. * @param nameofRegionsToMerge encoded or full name of daughter regions - * @param forcible true if do a compulsory merge, otherwise we will only merge - * adjacent regions + * @param forcible true if do a compulsory merge, otherwise we will only + * merge adjacent regions * @throws IOException if a remote or network exception occurs */ Future mergeRegionsAsync(byte[][] nameofRegionsToMerge, boolean forcible) - throws IOException; + throws IOException; /** - * Split a table. The method will execute split action for each region in table. - * Asynchronous operation. + * Split a table. The method will execute split action for each region in table. Asynchronous + * operation. 
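A sketch of the multi-region merge above together with the table-wide split described just below; it assumes the first two regions of the invented table are adjacent (pass true to force a merge of non-adjacent regions):

```java
import java.util.List;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.util.Bytes;

public final class MergeSplitExample {
  // 'admin' is assumed to come from Connection#getAdmin(), as in the earlier sketch.
  static void reshape(Admin admin) throws Exception {
    TableName table = TableName.valueOf("demo_table");
    List<RegionInfo> regions = admin.getRegions(table);
    byte[][] toMerge = { regions.get(0).getEncodedNameAsBytes(),
      regions.get(1).getEncodedNameAsBytes() };
    Future<Void> merge = admin.mergeRegionsAsync(toMerge, false);
    merge.get(120, TimeUnit.SECONDS);
    // Request a split of every region of the table at an invented row key.
    admin.split(table, Bytes.toBytes("row-5000000"));
  }
}
```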
* @param tableName table to split * @throws IOException if a remote or network exception occurs */ @@ -1519,19 +1440,17 @@ Future mergeRegionsAsync(byte[][] nameofRegionsToMerge, boolean forcible) /** * Split an individual region. Asynchronous operation. - * * @param regionName region to split * @throws IOException if a remote or network exception occurs - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. - * Use {@link #splitRegionAsync(byte[], byte[])}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. Use + * {@link #splitRegionAsync(byte[], byte[])}. */ @Deprecated void splitRegion(byte[] regionName) throws IOException; /** * Split a table. Asynchronous operation. - * - * @param tableName table to split + * @param tableName table to split * @param splitPoint the explicit position to split on * @throws IOException if a remote or network exception occurs */ @@ -1539,16 +1458,14 @@ Future mergeRegionsAsync(byte[][] nameofRegionsToMerge, boolean forcible) /** * Split an individual region. Asynchronous operation. - * * @param regionName region to split * @param splitPoint the explicit position to split on * @throws IOException if a remote or network exception occurs - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. - * Use {@link #splitRegionAsync(byte[], byte[])}. + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. Use + * {@link #splitRegionAsync(byte[], byte[])}. */ @Deprecated - void splitRegion(byte[] regionName, byte[] splitPoint) - throws IOException; + void splitRegion(byte[] regionName, byte[] splitPoint) throws IOException; /** * Split an individual region. Asynchronous operation. @@ -1568,7 +1485,7 @@ void splitRegion(byte[] regionName, byte[] splitPoint) /** * Modify an existing table, more IRB friendly version. * @param tableName name of table. - * @param td modified description of the table + * @param td modified description of the table * @throws IOException if a remote or network exception occurs * @deprecated since 2.0 version and will be removed in 3.0 version. use * {@link #modifyTable(TableDescriptor)} @@ -1576,10 +1493,24 @@ void splitRegion(byte[] regionName, byte[] splitPoint) @Deprecated default void modifyTable(TableName tableName, TableDescriptor td) throws IOException { if (!tableName.equals(td.getTableName())) { - throw new IllegalArgumentException("the specified table name '" + tableName + - "' doesn't match with the HTD one: " + td.getTableName()); + throw new IllegalArgumentException("the specified table name '" + tableName + + "' doesn't match with the HTD one: " + td.getTableName()); } - modifyTable(td); + modifyTable(td, true); + } + + /** + * Modify an existing table, more IRB friendly version. + * @param td modified description of the table + * @param reopenRegions By default, 'modifyTable' reopens all regions, potentially causing a + * RIT(Region In Transition) storm in large tables. If set to 'false', + * regions will remain unaware of the modification until they are + * individually reopened. Please note that this may temporarily result in + * configuration inconsistencies among regions. 
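The reopenRegions flag described above is wired through the default methods that follow. A sketch of a schema change that deliberately skips the region reopen; the table name and the table-level property used here are only examples:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public final class ModifyTableExample {
  // 'admin' is assumed to come from Connection#getAdmin(), as in the earlier sketch.
  static void modifyWithoutReopen(Admin admin) throws Exception {
    TableDescriptor current = admin.getDescriptor(TableName.valueOf("demo_table"));
    TableDescriptor updated = TableDescriptorBuilder.newBuilder(current)
      .setValue("hbase.hstore.blockingStoreFiles", "32") // example table-level override
      .build();
    // reopenRegions=false avoids a region-in-transition storm on a large table; regions only
    // see the new schema once they are individually reopened later.
    admin.modifyTable(updated, false);
  }
}
```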
+ * @throws IOException if a remote or network exception occurs + */ + default void modifyTable(TableDescriptor td, boolean reopenRegions) throws IOException { + get(modifyTableAsync(td, reopenRegions), getSyncWaitTimeout(), TimeUnit.MILLISECONDS); } /** @@ -1592,45 +1523,83 @@ default void modifyTable(TableDescriptor td) throws IOException { } /** - * Modify an existing table, more IRB friendly version. Asynchronous operation. This means that - * it may be a while before your schema change is updated across all of the table. - * You can use Future.get(long, TimeUnit) to wait on the operation to complete. - * It may throw ExecutionException if there was an error while executing the operation - * or TimeoutException in case the wait timeout was not long enough to allow the - * operation to complete. - * + * Modify an existing table, more IRB friendly version. Asynchronous operation. This means that it + * may be a while before your schema change is updated across all of the table. You can use + * Future.get(long, TimeUnit) to wait on the operation to complete. It may throw + * ExecutionException if there was an error while executing the operation or TimeoutException in + * case the wait timeout was not long enough to allow the operation to complete. * @param tableName name of table. - * @param td modified description of the table + * @param td modified description of the table * @throws IOException if a remote or network exception occurs * @return the result of the async modify. You can use Future.get(long, TimeUnit) to wait on the - * operation to complete - * @deprecated since 2.0 version and will be removed in 3.0 version. - * use {@link #modifyTableAsync(TableDescriptor)} + * operation to complete + * @deprecated since 2.0 version and will be removed in 3.0 version. use + * {@link #modifyTableAsync(TableDescriptor, boolean)} */ @Deprecated default Future modifyTableAsync(TableName tableName, TableDescriptor td) - throws IOException { + throws IOException { if (!tableName.equals(td.getTableName())) { - throw new IllegalArgumentException("the specified table name '" + tableName + - "' doesn't match with the HTD one: " + td.getTableName()); + throw new IllegalArgumentException("the specified table name '" + tableName + + "' doesn't match with the HTD one: " + td.getTableName()); } return modifyTableAsync(td); } /** - * Modify an existing table, more IRB (ruby) friendly version. Asynchronous operation. This means that - * it may be a while before your schema change is updated across all of the table. - * You can use Future.get(long, TimeUnit) to wait on the operation to complete. - * It may throw ExecutionException if there was an error while executing the operation - * or TimeoutException in case the wait timeout was not long enough to allow the - * operation to complete. - * + * Modify an existing table, more IRB (ruby) friendly version. Asynchronous operation. This means + * that it may be a while before your schema change is updated across all of the table. You can + * use Future.get(long, TimeUnit) to wait on the operation to complete. It may throw + * ExecutionException if there was an error while executing the operation or TimeoutException in + * case the wait timeout was not long enough to allow the operation to complete. * @param td description of the table * @throws IOException if a remote or network exception occurs * @return the result of the async modify. 
You can use Future.get(long, TimeUnit) to wait on the * operation to complete */ - Future modifyTableAsync(TableDescriptor td) throws IOException; + default Future modifyTableAsync(TableDescriptor td) throws IOException { + return modifyTableAsync(td, true); + } + + /** + * Modify an existing table, more IRB (ruby) friendly version. Asynchronous operation. This means + * that it may be a while before your schema change is updated across all of the table. You can + * use Future.get(long, TimeUnit) to wait on the operation to complete. It may throw + * ExecutionException if there was an error while executing the operation or TimeoutException in + * case the wait timeout was not long enough to allow the operation to complete. + * @param td description of the table + * @param reopenRegions By default, 'modifyTableAsync' reopens all regions, potentially causing a + * RIT(Region In Transition) storm in large tables. If set to 'false', + * regions will remain unaware of the modification until they are + * individually reopened. Please note that this may temporarily result in + * configuration inconsistencies among regions. + * @throws IOException if a remote or network exception occurs + * @return the result of the async modify. You can use Future.get(long, TimeUnit) to wait on the + * operation to complete + */ + Future modifyTableAsync(TableDescriptor td, boolean reopenRegions) throws IOException; + + /** + * Change the store file tracker of the given table. + * @param tableName the table you want to change + * @param dstSFT the destination store file tracker + * @throws IOException if a remote or network exception occurs + */ + default void modifyTableStoreFileTracker(TableName tableName, String dstSFT) throws IOException { + get(modifyTableStoreFileTrackerAsync(tableName, dstSFT), getSyncWaitTimeout(), + TimeUnit.MILLISECONDS); + } + + /** + * Change the store file tracker of the given table. + * @param tableName the table you want to change + * @param dstSFT the destination store file tracker + * @return the result of the async modify. You can use Future.get(long, TimeUnit) to wait on the + * operation to complete + * @throws IOException if a remote or network exception occurs + */ + Future modifyTableStoreFileTrackerAsync(TableName tableName, String dstSFT) + throws IOException; /** * Shuts down the HBase cluster. @@ -1653,22 +1622,21 @@ default Future modifyTableAsync(TableName tableName, TableDescriptor td) /** * Check whether Master is in maintenance mode. - * * @throws IOException if a remote or network exception occurs */ - boolean isMasterInMaintenanceMode() throws IOException; + boolean isMasterInMaintenanceMode() throws IOException; /** * Stop the designated regionserver. - * * @param hostnamePort Hostname and port delimited by a : as in - * example.org:1234 + * example.org:1234 * @throws IOException if a remote or network exception occurs */ void stopRegionServer(String hostnamePort) throws IOException; /** * Get whole cluster status, containing status about: + * *

        * hbase version
        * cluster id
    @@ -1678,10 +1646,11 @@ default Future modifyTableAsync(TableName tableName, TableDescriptor td)
        * balancer
        * regions in transition
        * </pre>
    + * * @return cluster status * @throws IOException if a remote or network exception occurs - * @deprecated since 2.0 version and will be removed in 3.0 version. - * use {@link #getClusterMetrics()} + * @deprecated since 2.0 version and will be removed in 3.0 version. use + * {@link #getClusterMetrics()} */ @Deprecated default ClusterStatus getClusterStatus() throws IOException { @@ -1690,6 +1659,7 @@ default ClusterStatus getClusterStatus() throws IOException { /** * Get whole cluster metrics, containing status about: + * *
        * hbase version
        * cluster id
    @@ -1699,6 +1669,7 @@ default ClusterStatus getClusterStatus() throws IOException {
        * balancer
        * regions in transition
        * </pre>
    + * * @return cluster metrics * @throws IOException if a remote or network exception occurs */ @@ -1714,6 +1685,7 @@ default ClusterMetrics getClusterMetrics() throws IOException { ClusterMetrics getClusterMetrics(EnumSet
    * + * * @param serverName the server name to which the endpoint call is made * @return A RegionServerCoprocessorRpcChannel instance */ CoprocessorRpcChannel coprocessorService(ServerName serverName); - /** - * Update the configuration and trigger an online config change - * on the regionserver. + * Update the configuration and trigger an online config change on the regionserver. * @param server : The server whose config needs to be updated. * @throws IOException if a remote or network exception occurs */ void updateConfiguration(ServerName server) throws IOException; - /** - * Update the configuration and trigger an online config change - * on all the regionservers. + * Update the configuration and trigger an online config change on all the regionservers. * @throws IOException if a remote or network exception occurs */ void updateConfiguration() throws IOException; @@ -2730,15 +2702,14 @@ default int getMasterInfoPort() throws IOException { /** * Return the set of supported security capabilities. * @throws IOException if a remote or network exception occurs - * @throws UnsupportedOperationException */ List getSecurityCapabilities() throws IOException; /** * Turn the Split or Merge switches on or off. - * @param enabled enabled or not + * @param enabled enabled or not * @param synchronous If true, it waits until current split() call, if outstanding, - * to return. + * to return. * @param switchTypes switchType list {@link MasterSwitchType} * @return Previous switch value array * @deprecated Since 2.0.0. Will be removed in 3.0.0. Use {@link #splitSwitch(boolean, boolean)} @@ -2747,7 +2718,7 @@ default int getMasterInfoPort() throws IOException { */ @Deprecated default boolean[] setSplitOrMergeEnabled(boolean enabled, boolean synchronous, - MasterSwitchType... switchTypes) throws IOException { + MasterSwitchType... switchTypes) throws IOException { boolean[] preValues = new boolean[switchTypes.length]; for (int i = 0; i < switchTypes.length; i++) { switch (switchTypes[i]) { @@ -2766,9 +2737,9 @@ default boolean[] setSplitOrMergeEnabled(boolean enabled, boolean synchronous, /** * Turn the split switch on or off. - * @param enabled enabled or not + * @param enabled enabled or not * @param synchronous If true, it waits until current split() call, if outstanding, - * to return. + * to return. * @return Previous switch value * @throws IOException if a remote or network exception occurs */ @@ -2776,9 +2747,9 @@ default boolean[] setSplitOrMergeEnabled(boolean enabled, boolean synchronous, /** * Turn the merge switch on or off. - * @param enabled enabled or not + * @param enabled enabled or not * @param synchronous If true, it waits until current merge() call, if outstanding, - * to return. + * to return. * @return Previous switch value * @throws IOException if a remote or network exception occurs */ @@ -2786,11 +2757,10 @@ default boolean[] setSplitOrMergeEnabled(boolean enabled, boolean synchronous, /** * Query the current state of the switch. - * * @return true if the switch is enabled, false otherwise. * @throws IOException if a remote or network exception occurs - * @deprecated Since 2.0.0. Will be removed in 3.0.0. Use - * {@link #isSplitEnabled()} or {@link #isMergeEnabled()} instead. + * @deprecated Since 2.0.0. Will be removed in 3.0.0. Use {@link #isSplitEnabled()} or + * {@link #isMergeEnabled()} instead. 
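A sketch of the split/merge switches referenced above, e.g. to freeze region boundaries around a bulk load; both calls return the previous value so it can be restored afterwards:

```java
import org.apache.hadoop.hbase.client.Admin;

public final class SplitMergeSwitchExample {
  // 'admin' is assumed to come from Connection#getAdmin(), as in the earlier sketch.
  static void freezeRegionBoundaries(Admin admin) throws Exception {
    boolean splitWasEnabled = admin.splitSwitch(false, true); // wait for outstanding split() calls
    boolean mergeWasEnabled = admin.mergeSwitch(false, true);
    try {
      // ... bulk load or other maintenance work ...
    } finally {
      admin.splitSwitch(splitWasEnabled, true);
      admin.mergeSwitch(mergeWasEnabled, true);
    }
  }
}
```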
*/ @Deprecated default boolean isSplitOrMergeEnabled(MasterSwitchType switchType) throws IOException { @@ -2821,24 +2791,24 @@ default boolean isSplitOrMergeEnabled(MasterSwitchType switchType) throws IOExce /** * Add a new replication peer for replicating data to slave cluster. - * @param peerId a short name that identifies the peer + * @param peerId a short name that identifies the peer * @param peerConfig configuration for the replication peer * @throws IOException if a remote or network exception occurs */ default void addReplicationPeer(String peerId, ReplicationPeerConfig peerConfig) - throws IOException { + throws IOException { addReplicationPeer(peerId, peerConfig, true); } /** * Add a new replication peer for replicating data to slave cluster. - * @param peerId a short name that identifies the peer + * @param peerId a short name that identifies the peer * @param peerConfig configuration for the replication peer - * @param enabled peer state, true if ENABLED and false if DISABLED + * @param enabled peer state, true if ENABLED and false if DISABLED * @throws IOException if a remote or network exception occurs */ default void addReplicationPeer(String peerId, ReplicationPeerConfig peerConfig, boolean enabled) - throws IOException { + throws IOException { get(addReplicationPeerAsync(peerId, peerConfig, enabled), getSyncWaitTimeout(), TimeUnit.MILLISECONDS); } @@ -2849,13 +2819,13 @@ default void addReplicationPeer(String peerId, ReplicationPeerConfig peerConfig, * You can use Future.get(long, TimeUnit) to wait on the operation to complete. It may throw * ExecutionException if there was an error while executing the operation or TimeoutException in * case the wait timeout was not long enough to allow the operation to complete. - * @param peerId a short name that identifies the peer + * @param peerId a short name that identifies the peer * @param peerConfig configuration for the replication peer * @return the result of the async operation * @throws IOException IOException if a remote or network exception occurs */ default Future addReplicationPeerAsync(String peerId, ReplicationPeerConfig peerConfig) - throws IOException { + throws IOException { return addReplicationPeerAsync(peerId, peerConfig, true); } @@ -2865,14 +2835,14 @@ default Future addReplicationPeerAsync(String peerId, ReplicationPeerConfi * You can use Future.get(long, TimeUnit) to wait on the operation to complete. It may throw * ExecutionException if there was an error while executing the operation or TimeoutException in * case the wait timeout was not long enough to allow the operation to complete. - * @param peerId a short name that identifies the peer + * @param peerId a short name that identifies the peer * @param peerConfig configuration for the replication peer - * @param enabled peer state, true if ENABLED and false if DISABLED + * @param enabled peer state, true if ENABLED and false if DISABLED * @return the result of the async operation * @throws IOException IOException if a remote or network exception occurs */ Future addReplicationPeerAsync(String peerId, ReplicationPeerConfig peerConfig, - boolean enabled) throws IOException; + boolean enabled) throws IOException; /** * Remove a peer and stop the replication. 
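A sketch of registering a replication peer with the calls above; the peer id and the slave cluster's ZooKeeper quorum are invented:

```java
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

public final class ReplicationPeerExample {
  // 'admin' is assumed to come from Connection#getAdmin(), as in the earlier sketch.
  static void addPeer(Admin admin) throws Exception {
    ReplicationPeerConfig peerConfig = ReplicationPeerConfig.newBuilder()
      .setClusterKey("zk1.example.com,zk2.example.com,zk3.example.com:2181:/hbase")
      .build();
    admin.addReplicationPeer("1", peerConfig, true); // ENABLED from the start
    // Later: admin.disableReplicationPeer("1") or admin.removeReplicationPeer("1").
  }
}
```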
@@ -2880,8 +2850,7 @@ Future addReplicationPeerAsync(String peerId, ReplicationPeerConfig peerCo * @throws IOException if a remote or network exception occurs */ default void removeReplicationPeer(String peerId) throws IOException { - get(removeReplicationPeerAsync(peerId), getSyncWaitTimeout(), - TimeUnit.MILLISECONDS); + get(removeReplicationPeerAsync(peerId), getSyncWaitTimeout(), TimeUnit.MILLISECONDS); } /** @@ -2948,12 +2917,12 @@ default void disableReplicationPeer(String peerId) throws IOException { /** * Update the peerConfig for the specified peer. - * @param peerId a short name that identifies the peer + * @param peerId a short name that identifies the peer * @param peerConfig new config for the replication peer * @throws IOException if a remote or network exception occurs */ default void updateReplicationPeerConfig(String peerId, ReplicationPeerConfig peerConfig) - throws IOException { + throws IOException { get(updateReplicationPeerConfigAsync(peerId, peerConfig), getSyncWaitTimeout(), TimeUnit.MILLISECONDS); } @@ -2964,23 +2933,23 @@ default void updateReplicationPeerConfig(String peerId, ReplicationPeerConfig pe * You can use Future.get(long, TimeUnit) to wait on the operation to complete. It may throw * ExecutionException if there was an error while executing the operation or TimeoutException in * case the wait timeout was not long enough to allow the operation to complete. - * @param peerId a short name that identifies the peer + * @param peerId a short name that identifies the peer * @param peerConfig new config for the replication peer * @return the result of the async operation * @throws IOException IOException if a remote or network exception occurs */ Future updateReplicationPeerConfigAsync(String peerId, ReplicationPeerConfig peerConfig) - throws IOException; + throws IOException; /** * Append the replicable table column family config from the specified peer. - * @param id a short that identifies the cluster + * @param id a short that identifies the cluster * @param tableCfs A map from tableName to column family names * @throws ReplicationException if tableCfs has conflict with existing config - * @throws IOException if a remote or network exception occurs + * @throws IOException if a remote or network exception occurs */ default void appendReplicationPeerTableCFs(String id, Map> tableCfs) - throws ReplicationException, IOException { + throws ReplicationException, IOException { if (tableCfs == null) { throw new ReplicationException("tableCfs is null"); } @@ -2992,13 +2961,13 @@ default void appendReplicationPeerTableCFs(String id, Map> tableCfs) - throws ReplicationException, IOException { + throws ReplicationException, IOException { if (tableCfs == null) { throw new ReplicationException("tableCfs is null"); } @@ -3024,10 +2993,10 @@ default void removeReplicationPeerTableCFs(String id, Map listReplicationPeers(Pattern pattern) throws IOException; /** - * Mark region server(s) as decommissioned to prevent additional regions from getting - * assigned to them. Optionally unload the regions on the servers. If there are multiple servers - * to be decommissioned, decommissioning them at the same time can prevent wasteful region - * movements. Region unloading is asynchronous. + * Mark region server(s) as decommissioned to prevent additional regions from getting assigned to + * them. Optionally unload the regions on the servers. If there are multiple servers to be + * decommissioned, decommissioning them at the same time can prevent wasteful region movements. 
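A short sketch of the replication peer lifecycle covered by these hunks (addReplicationPeer, appendReplicationPeerTableCFs, removeReplicationPeer). The peer id, table, family, and cluster key are placeholders.

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

public class ReplicationPeerExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
        Admin admin = conn.getAdmin()) {
      // Register a peer that replicates only one column family of one table.
      Map<TableName, List<String>> tableCfs = new HashMap<>();
      tableCfs.put(TableName.valueOf("t1"), Arrays.asList("cf1"));
      ReplicationPeerConfig peerConfig = ReplicationPeerConfig.newBuilder()
          .setClusterKey("zk1.example.com:2181:/hbase") // placeholder slave cluster key
          .setReplicateAllUserTables(false)
          .setTableCFsMap(tableCfs)
          .build();
      admin.addReplicationPeer("1", peerConfig, true); // blocking wrapper over the async call

      // Later, widen the peer to another column family, then remove the peer entirely.
      Map<TableName, List<String>> more = new HashMap<>();
      more.put(TableName.valueOf("t1"), Arrays.asList("cf2"));
      admin.appendReplicationPeerTableCFs("1", more);
      admin.removeReplicationPeer("1");
    }
  }
}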
+ * Region unloading is asynchronous. * @param servers The list of servers to decommission. * @param offload True to offload the regions from the decommissioned servers * @throws IOException if a remote or network exception occurs @@ -3042,15 +3011,14 @@ default void removeReplicationPeerTableCFs(String id, Map listDecommissionedRegionServers() throws IOException; /** - * Remove decommission marker from a region server to allow regions assignments. - * Load regions onto the server if a list of regions is given. Region loading is - * asynchronous. - * @param server The server to recommission. + * Remove decommission marker from a region server to allow regions assignments. Load regions onto + * the server if a list of regions is given. Region loading is asynchronous. + * @param server The server to recommission. * @param encodedRegionNames Regions to load onto the server. * @throws IOException if a remote or network exception occurs */ void recommissionRegionServer(ServerName server, List encodedRegionNames) - throws IOException; + throws IOException; /** * Find all table and column families that are replicated from this cluster @@ -3076,9 +3044,8 @@ void recommissionRegionServer(ServerName server, List encodedRegionNames /** * Clear compacting queues on a regionserver. * @param serverName the region server name - * @param queues the set of queue name + * @param queues the set of queue name * @throws IOException if a remote or network exception occurs - * @throws InterruptedException */ void clearCompactionQueues(ServerName serverName, Set queues) throws IOException, InterruptedException; @@ -3092,6 +3059,14 @@ default List listDeadServers() throws IOException { return getClusterMetrics(EnumSet.of(Option.DEAD_SERVERS)).getDeadServerNames(); } + /** + * List unknown region servers. + * @return List of unknown region servers. + */ + default List listUnknownServers() throws IOException { + return getClusterMetrics(EnumSet.of(Option.UNKNOWN_SERVERS)).getUnknownServerNames(); + } + /** * Clear dead region servers from master. * @param servers list of dead region servers. @@ -3102,13 +3077,13 @@ default List listDeadServers() throws IOException { /** * Create a new table by cloning the existent table schema. - * @param tableName name of the table to be cloned - * @param newTableName name of the new table where the table will be created + * @param tableName name of the table to be cloned + * @param newTableName name of the new table where the table will be created * @param preserveSplits True if the splits should be preserved * @throws IOException if a remote or network exception occurs */ void cloneTableSchema(TableName tableName, TableName newTableName, boolean preserveSplits) - throws IOException; + throws IOException; /** * Switch the rpc throttle enable state. @@ -3126,8 +3101,8 @@ void cloneTableSchema(TableName tableName, TableName newTableName, boolean prese boolean isRpcThrottleEnabled() throws IOException; /** - * Switch the exceed throttle quota. If enabled, user/table/namespace throttle quota - * can be exceeded if region server has availble quota. + * Switch the exceed throttle quota. If enabled, user/table/namespace throttle quota can be + * exceeded if region server has availble quota. * @param enable Set to true to enable, false to disable. 
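The decommission/recommission pair described above can be exercised as in the following sketch; the server name is a placeholder and region unloading happens asynchronously after the call returns.

import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DecommissionExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
        Admin admin = conn.getAdmin()) {
      ServerName rs = ServerName.valueOf("host187.example.com,16020,1289493121758"); // placeholder
      // Stop new assignments to the server and move its regions elsewhere (unloading is async).
      admin.decommissionRegionServers(Arrays.asList(rs), true);
      List<ServerName> decommissioned = admin.listDecommissionedRegionServers();
      System.out.println("currently decommissioned: " + decommissioned);
      // Put the server back into rotation without preloading any specific regions.
      admin.recommissionRegionServer(rs, Collections.emptyList());
    }
  }
}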
* @return Previous exceed throttle enabled value * @throws IOException if a remote or network exception occurs @@ -3144,8 +3119,8 @@ void cloneTableSchema(TableName tableName, TableName newTableName, boolean prese * Fetches the observed {@link SpaceQuotaSnapshotView}s observed by a RegionServer. * @throws IOException if a remote or network exception occurs */ - Map getRegionServerSpaceQuotaSnapshots( - ServerName serverName) throws IOException; + Map + getRegionServerSpaceQuotaSnapshots(ServerName serverName) throws IOException; /** * Returns the Master's view of a quota on the given {@code namespace} or null if the Master has @@ -3163,10 +3138,10 @@ void cloneTableSchema(TableName tableName, TableName newTableName, boolean prese /** * Grants user specific permissions - * @param userPermission user name and the specific permission + * @param userPermission user name and the specific permission * @param mergeExistingPermissions If set to false, later granted permissions will override - * previous granted permissions. otherwise, it'll merge with previous granted - * permissions. + * previous granted permissions. otherwise, it'll merge with + * previous granted permissions. * @throws IOException if a remote or network exception occurs */ void grant(UserPermission userPermission, boolean mergeExistingPermissions) throws IOException; @@ -3181,22 +3156,22 @@ void cloneTableSchema(TableName tableName, TableName newTableName, boolean prese /** * Get the global/namespace/table permissions for user * @param getUserPermissionsRequest A request contains which user, global, namespace or table - * permissions needed + * permissions needed * @return The user and permission list * @throws IOException if a remote or network exception occurs */ List getUserPermissions(GetUserPermissionsRequest getUserPermissionsRequest) - throws IOException; + throws IOException; /** * Check if the user has specific permissions - * @param userName the user name + * @param userName the user name * @param permissions the specific permission list * @return True if user has the specific permissions * @throws IOException if a remote or network exception occurs */ List hasUserPermissions(String userName, List permissions) - throws IOException; + throws IOException; /** * Check if call user has specific permissions @@ -3210,40 +3185,34 @@ default List hasUserPermissions(List permissions) throws IO /** * Turn on or off the auto snapshot cleanup based on TTL. - * - * @param on Set to true to enable, false to disable. + * @param on Set to true to enable, false to disable. * @param synchronous If true, it waits until current snapshot cleanup is completed, - * if outstanding. + * if outstanding. * @return Previous auto snapshot cleanup value * @throws IOException if a remote or network exception occurs */ - boolean snapshotCleanupSwitch(final boolean on, final boolean synchronous) - throws IOException; + boolean snapshotCleanupSwitch(final boolean on, final boolean synchronous) throws IOException; /** * Query the current state of the auto snapshot cleanup based on TTL. - * - * @return true if the auto snapshot cleanup is enabled, - * false otherwise. + * @return true if the auto snapshot cleanup is enabled, false + * otherwise. 
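A minimal sketch of the grant/hasUserPermissions calls documented in this hunk. It assumes the AccessController coprocessor is enabled on the cluster; the user, table, and family names are placeholders.

import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.security.access.Permission;
import org.apache.hadoop.hbase.security.access.UserPermission;
import org.apache.hadoop.hbase.util.Bytes;

public class GrantExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
        Admin admin = conn.getAdmin()) {
      Permission readOnCf = Permission.newBuilder(TableName.valueOf("t1"))
          .withFamily(Bytes.toBytes("cf1"))
          .withActions(Permission.Action.READ)
          .build();
      // mergeExistingPermissions=false: this grant replaces any permissions previously granted.
      admin.grant(new UserPermission("bob", readOnCf), false);

      // Verify what the user can actually do.
      List<Boolean> results = admin.hasUserPermissions("bob", Arrays.asList(readOnCf));
      System.out.println("bob has READ on t1:cf1 -> " + results.get(0));
    }
  }
}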
* @throws IOException if a remote or network exception occurs */ boolean isSnapshotCleanupEnabled() throws IOException; - /** - * Retrieves online slow/large RPC logs from the provided list of - * RegionServers - * - * @param serverNames Server names to get slowlog responses from + * Retrieves online slow/large RPC logs from the provided list of RegionServers + * @param serverNames Server names to get slowlog responses from * @param logQueryFilter filter to be used if provided (determines slow / large RPC logs) * @return online slowlog response list * @throws IOException if a remote or network exception occurs - * @deprecated since 2.4.0 and will be removed in 4.0.0. - * Use {@link #getLogEntries(Set, String, ServerType, int, Map)} instead. + * @deprecated since 2.4.0 and will be removed in 4.0.0. Use + * {@link #getLogEntries(Set, String, ServerType, int, Map)} instead. */ @Deprecated default List getSlowLogResponses(final Set serverNames, - final LogQueryFilter logQueryFilter) throws IOException { + final LogQueryFilter logQueryFilter) throws IOException { String logType; if (LogQueryFilter.Type.LARGE_LOG.equals(logQueryFilter.getType())) { logType = "LARGE_LOG"; @@ -3256,40 +3225,39 @@ default List getSlowLogResponses(final Set serverNa filterParams.put("tableName", logQueryFilter.getTableName()); filterParams.put("userName", logQueryFilter.getUserName()); filterParams.put("filterByOperator", logQueryFilter.getFilterByOperator().toString()); - List logEntries = - getLogEntries(serverNames, logType, ServerType.REGION_SERVER, logQueryFilter.getLimit(), - filterParams); + List logEntries = getLogEntries(serverNames, logType, ServerType.REGION_SERVER, + logQueryFilter.getLimit(), filterParams); return logEntries.stream().map(logEntry -> (OnlineLogRecord) logEntry) .collect(Collectors.toList()); } /** - * Clears online slow/large RPC logs from the provided list of - * RegionServers - * + * Clears online slow/large RPC logs from the provided list of RegionServers * @param serverNames Set of Server names to clean slowlog responses from - * @return List of booleans representing if online slowlog response buffer is cleaned - * from each RegionServer + * @return List of booleans representing if online slowlog response buffer is cleaned from each + * RegionServer * @throws IOException if a remote or network exception occurs */ - List clearSlowLogResponses(final Set serverNames) - throws IOException; - + List clearSlowLogResponses(final Set serverNames) throws IOException; /** - * Retrieve recent online records from HMaster / RegionServers. - * Examples include slow/large RPC logs, balancer decisions by master. - * - * @param serverNames servers to retrieve records from, useful in case of records maintained - * by RegionServer as we can select specific server. In case of servertype=MASTER, logs will - * only come from the currently active master. - * @param logType string representing type of log records - * @param serverType enum for server type: HMaster or RegionServer - * @param limit put a limit to list of records that server should send in response + * Retrieve recent online records from HMaster / RegionServers. Examples include slow/large RPC + * logs, balancer decisions by master. + * @param serverNames servers to retrieve records from, useful in case of records maintained by + * RegionServer as we can select specific server. In case of + * servertype=MASTER, logs will only come from the currently active master. 
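The getLogEntries call replacing the deprecated slow-log API (its remaining parameters are documented just below) can be used roughly as follows; server and table names are placeholders.

import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.LogEntry;
import org.apache.hadoop.hbase.client.ServerType;

public class SlowLogExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
        Admin admin = conn.getAdmin()) {
      ServerName rs = ServerName.valueOf("host187.example.com,16020,1289493121758"); // placeholder
      Set<ServerName> servers = Collections.singleton(rs);

      // Filter params mirror what the deprecated LogQueryFilter used to carry.
      Map<String, Object> filterParams = new HashMap<>();
      filterParams.put("tableName", "t1");

      // Pull up to 50 slow-RPC records from the chosen region server.
      List<LogEntry> slow =
        admin.getLogEntries(servers, "SLOW_LOG", ServerType.REGION_SERVER, 50, filterParams);
      slow.forEach(entry -> System.out.println(entry.toJsonPrettyPrint()));

      // Balancer decisions come from the currently active master.
      List<LogEntry> decisions =
        admin.getLogEntries(null, "BALANCER_DECISION", ServerType.MASTER, 10, null);
      System.out.println(decisions.size() + " balancer decisions");
    }
  }
}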
+ * @param logType string representing type of log records + * @param serverType enum for server type: HMaster or RegionServer + * @param limit put a limit to list of records that server should send in response * @param filterParams additional filter params * @return Log entries representing online records from servers * @throws IOException if a remote or network exception occurs */ - List getLogEntries(Set serverNames, String logType, - ServerType serverType, int limit, Map filterParams) throws IOException; + List getLogEntries(Set serverNames, String logType, ServerType serverType, + int limit, Map filterParams) throws IOException; + + /** + * Flush master local region + */ + void flushMasterStore() throws IOException; } diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AdvancedScanResultConsumer.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AdvancedScanResultConsumer.java index 10933abf3cf2..091024105a34 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AdvancedScanResultConsumer.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AdvancedScanResultConsumer.java @@ -1,4 +1,4 @@ -/** +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -18,7 +18,6 @@ package org.apache.hadoop.hbase.client; import java.util.Optional; - import org.apache.yetus.audience.InterfaceAudience; /** @@ -93,10 +92,10 @@ interface ScanController { /** * Indicate that we have receive some data. - * @param results the data fetched from HBase service. + * @param results the data fetched from HBase service. * @param controller used to suspend or terminate the scan. Notice that the {@code controller} - * instance is only valid within scope of onNext method. You can only call its method in - * onNext, do NOT store it and call it later outside onNext. + * instance is only valid within scope of onNext method. You can only call its + * method in onNext, do NOT store it and call it later outside onNext. */ void onNext(Result[] results, ScanController controller); @@ -113,8 +112,9 @@ interface ScanController { *

    * This method give you a chance to terminate a slow scan operation. * @param controller used to suspend or terminate the scan. Notice that the {@code controller} - * instance is only valid within the scope of onHeartbeat method. You can only call its - * method in onHeartbeat, do NOT store it and call it later outside onHeartbeat. + * instance is only valid within the scope of onHeartbeat method. You can only + * call its method in onHeartbeat, do NOT store it and call it later outside + * onHeartbeat. */ default void onHeartbeat(ScanController controller) { } diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AllowPartialScanResultCache.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AllowPartialScanResultCache.java index 8d21994c23e0..3ef28308f1c8 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AllowPartialScanResultCache.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AllowPartialScanResultCache.java @@ -1,4 +1,4 @@ -/** +/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information @@ -21,7 +21,6 @@ import java.io.IOException; import java.util.Arrays; - import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; import org.apache.yetus.audience.InterfaceAudience; diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Append.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Append.java index 3a08d687fbbb..8a49ef6ab784 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Append.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Append.java @@ -36,34 +36,32 @@ /** * Performs Append operations on a single row. *

    - * This operation ensures atomicty to readers. Appends are done - * under a single row lock, so write operations to a row are synchronized, and - * readers are guaranteed to see this operation fully completed. + * This operation ensures atomicty to readers. Appends are done under a single row lock, so write + * operations to a row are synchronized, and readers are guaranteed to see this operation fully + * completed. *

    - * To append to a set of columns of a row, instantiate an Append object with the - * row to append to. At least one column to append must be specified using the + * To append to a set of columns of a row, instantiate an Append object with the row to append to. + * At least one column to append must be specified using the * {@link #addColumn(byte[], byte[], byte[])} method. */ @InterfaceAudience.Public public class Append extends Mutation { private static final Logger LOG = LoggerFactory.getLogger(Append.class); - private static final long HEAP_OVERHEAD = ClassSize.REFERENCE + ClassSize.TIMERANGE; + private static final long HEAP_OVERHEAD = (long) ClassSize.REFERENCE + ClassSize.TIMERANGE; private TimeRange tr = TimeRange.allTime(); /** * Sets the TimeRange to be used on the Get for this append. *

    - * This is useful for when you have counters that only last for specific - * periods of time (ie. counters that are partitioned by time). By setting - * the range of valid times for this append, you can potentially gain - * some performance with a more optimal Get operation. - * Be careful adding the time range to this class as you will update the old cell if the - * time range doesn't include the latest cells. + * This is useful for when you have counters that only last for specific periods of time (ie. + * counters that are partitioned by time). By setting the range of valid times for this append, + * you can potentially gain some performance with a more optimal Get operation. Be careful adding + * the time range to this class as you will update the old cell if the time range doesn't include + * the latest cells. *

    * This range is used as [minStamp, maxStamp). * @param minStamp minimum timestamp value, inclusive * @param maxStamp maximum timestamp value, exclusive - * @return this */ public Append setTimeRange(long minStamp, long maxStamp) { tr = new TimeRange(minStamp, maxStamp); @@ -72,22 +70,19 @@ public Append setTimeRange(long minStamp, long maxStamp) { /** * Gets the TimeRange used for this append. - * @return TimeRange */ public TimeRange getTimeRange() { return this.tr; } @Override - protected long extraHeapSize(){ + protected long extraHeapSize() { return HEAP_OVERHEAD; } /** - * @param returnResults - * True (default) if the append operation should return the results. - * A client that is not interested in the result can save network - * bandwidth setting this to false. + * True (default) if the append operation should return the results. A client that is not + * interested in the result can save network bandwidth setting this to false. */ @Override public Append setReturnResults(boolean returnResults) { @@ -95,9 +90,7 @@ public Append setReturnResults(boolean returnResults) { return this; } - /** - * @return current setting for returnResults - */ + /** Returns current setting for returnResults */ // This method makes public the superclasses's protected method. @Override public boolean isReturnResults() { @@ -113,6 +106,7 @@ public boolean isReturnResults() { public Append(byte[] row) { this(row, 0, row.length); } + /** * Copy constructor * @param appendToCopy append to copy @@ -122,50 +116,46 @@ public Append(Append appendToCopy) { this.tr = appendToCopy.getTimeRange(); } - /** Create a Append operation for the specified row. + /** + * Create a Append operation for the specified row. *
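Taken together, the Append API in this file can be used as in the sketch below; it assumes a table t1 with family cf exists, and addColumn (shown in the following hunks) supplies the value to append.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Append;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class AppendExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
        Table table = conn.getTable(TableName.valueOf("t1"))) {
      Append append = new Append(Bytes.toBytes("row1"));
      // At least one column must be added; the value is appended to whatever is stored there.
      append.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes(",suffix"));
      // Optional: restrict which existing cells are considered, per setTimeRange above.
      append.setTimeRange(0, System.currentTimeMillis());
      // Skip returning the new value if the client does not need it, to save bandwidth.
      append.setReturnResults(false);
      Result result = table.append(append);
      System.out.println("append returned " + result);
    }
  }
}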

    * At least one column must be appended to. * @param rowArray Makes a copy out of this buffer. - * @param rowOffset - * @param rowLength */ - public Append(final byte [] rowArray, final int rowOffset, final int rowLength) { + public Append(final byte[] rowArray, final int rowOffset, final int rowLength) { checkRow(rowArray, rowOffset, rowLength); this.row = Bytes.copy(rowArray, rowOffset, rowLength); } /** - * Construct the Append with user defined data. NOTED: - * 1) all cells in the familyMap must have the Type.Put - * 2) the row of each cell must be same with passed row. - * @param row row. CAN'T be null - * @param ts timestamp + * Construct the Append with user defined data. NOTED: 1) all cells in the familyMap must have the + * Type.Put 2) the row of each cell must be same with passed row. + * @param row row. CAN'T be null + * @param ts timestamp * @param familyMap the map to collect all cells internally. CAN'T be null */ - public Append(byte[] row, long ts, NavigableMap> familyMap) { + public Append(byte[] row, long ts, NavigableMap> familyMap) { super(row, ts, familyMap); } /** * Add the specified column and value to this Append operation. - * @param family family name + * @param family family name * @param qualifier column qualifier - * @param value value to append to specified column - * @return this - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. - * Use {@link #addColumn(byte[], byte[], byte[])} instead + * @param value value to append to specified column + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. Use + * {@link #addColumn(byte[], byte[], byte[])} instead */ @Deprecated - public Append add(byte [] family, byte [] qualifier, byte [] value) { + public Append add(byte[] family, byte[] qualifier, byte[] value) { return this.addColumn(family, qualifier, value); } /** * Add the specified column and value to this Append operation. - * @param family family name + * @param family family name * @param qualifier column qualifier - * @param value value to append to specified column - * @return this + * @param value value to append to specified column */ public Append addColumn(byte[] family, byte[] qualifier, byte[] value) { KeyValue kv = new KeyValue(this.row, family, qualifier, this.ts, KeyValue.Type.Put, value); @@ -174,10 +164,10 @@ public Append addColumn(byte[] family, byte[] qualifier, byte[] value) { /** * Add column and value to this Append operation. - * @param cell * @return This instance */ @SuppressWarnings("unchecked") + @Override public Append add(final Cell cell) { try { super.add(cell); @@ -211,8 +201,8 @@ public Append setDurability(Durability d) { /** * Method for setting the Append's familyMap - * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. - * Use {@link Append#Append(byte[], long, NavigableMap)} instead + * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0. Use + * {@link Append#Append(byte[], long, NavigableMap)} instead */ @Deprecated @Override diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java index 85d545505e99..6d5e7c0791bf 100644 --- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java +++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java @@ -61,6 +61,7 @@ public interface AsyncAdmin { /** + * Check if a table exists. * @param tableName Table to check. * @return True if table exists already. 
The return value will be wrapped by a * {@link CompletableFuture}. @@ -84,12 +85,12 @@ default CompletableFuture> listTableDescriptors() { /** * List all the tables matching the given pattern. - * @param pattern The compiled regular expression to match against + * @param pattern The compiled regular expression to match against * @param includeSysTables False to match only against userspace tables * @return - returns a list of TableDescriptors wrapped by a {@link CompletableFuture}. */ CompletableFuture> listTableDescriptors(Pattern pattern, - boolean includeSysTables); + boolean includeSysTables); /** * List specific tables including system tables. @@ -123,7 +124,7 @@ default CompletableFuture> listTableNames() { /** * List all of the names of userspace tables. - * @param pattern The regular expression to match against + * @param pattern The regular expression to match against * @param includeSysTables False to match only against userspace tables * @return a list of table names wrapped by a {@link CompletableFuture}. */ @@ -155,19 +156,19 @@ default CompletableFuture> listTableNames() { * key of the last region of the table (the first region has a null start key and the last region * has a null end key). BigInteger math will be used to divide the key range specified into enough * segments to make the required number of total regions. - * @param desc table descriptor for table - * @param startKey beginning of key range - * @param endKey end of key range + * @param desc table descriptor for table + * @param startKey beginning of key range + * @param endKey end of key range * @param numRegions the total number of regions to create */ CompletableFuture createTable(TableDescriptor desc, byte[] startKey, byte[] endKey, - int numRegions); + int numRegions); /** * Creates a new table with an initial set of empty regions defined by the specified split keys. - * The total number of regions created will be the number of split keys plus one. - * Note : Avoid passing empty split key. - * @param desc table descriptor for table + * The total number of regions created will be the number of split keys plus one. Note : Avoid + * passing empty split key. + * @param desc table descriptor for table * @param splitKeys array of split keys for the initial regions of the table */ CompletableFuture createTable(TableDescriptor desc, byte[][] splitKeys); @@ -176,7 +177,27 @@ CompletableFuture createTable(TableDescriptor desc, byte[] startKey, byte[ * Modify an existing table, more IRB friendly version. * @param desc modified description of the table */ - CompletableFuture modifyTable(TableDescriptor desc); + default CompletableFuture modifyTable(TableDescriptor desc) { + return modifyTable(desc, true); + } + + /** + * Modify an existing table, more IRB friendly version. + * @param desc description of the table + * @param reopenRegions By default, 'modifyTable' reopens all regions, potentially causing a RIT + * (Region In Transition) storm in large tables. If set to 'false', regions + * will remain unaware of the modification until they are individually + * reopened. Please note that this may temporarily result in configuration + * inconsistencies among regions. + */ + CompletableFuture modifyTable(TableDescriptor desc, boolean reopenRegions); + + /** + * Change the store file tracker of the given table. + * @param tableName the table you want to change + * @param dstSFT the destination store file tracker + */ + CompletableFuture modifyTableStoreFileTracker(TableName tableName, String dstSFT); /** * Deletes a table. 
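A sketch of the new reopenRegions flag and the store file tracker migration added in this hunk, using the asynchronous client. It assumes a table t1 exists; "FILE" is only an example tracker name, not a recommendation.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.AsyncAdmin;
import org.apache.hadoop.hbase.client.AsyncConnection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class ModifyTableExample {
  public static void main(String[] args) throws Exception {
    try (AsyncConnection conn =
        ConnectionFactory.createAsyncConnection(HBaseConfiguration.create()).get()) {
      AsyncAdmin admin = conn.getAdmin();
      TableName tn = TableName.valueOf("t1");

      // Change a table property without reopening every region immediately; regions pick the
      // change up as they are individually reopened later.
      TableDescriptor current = admin.getDescriptor(tn).get();
      TableDescriptor updated = TableDescriptorBuilder.newBuilder(current)
          .setMaxFileSize(20L * 1024 * 1024 * 1024)
          .build();
      admin.modifyTable(updated, false).get();

      // Separately, migrate the table to a different store file tracker implementation.
      admin.modifyTableStoreFileTracker(tn, "FILE").get();
    }
  }
}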
@@ -186,7 +207,7 @@ CompletableFuture createTable(TableDescriptor desc, byte[] startKey, byte[ /** * Truncate a table. - * @param tableName name of table to truncate + * @param tableName name of table to truncate * @param preserveSplits True if the splits should be preserved */ CompletableFuture truncateTable(TableName tableName, boolean preserveSplits); @@ -199,11 +220,11 @@ CompletableFuture createTable(TableDescriptor desc, byte[] startKey, byte[ /** * Disable a table. The table has to be in enabled state for it to be disabled. - * @param tableName */ CompletableFuture disableTable(TableName tableName); /** + * Check if a table is enabled. * @param tableName name of table to check * @return true if table is on-line. The return value will be wrapped by a * {@link CompletableFuture}. @@ -211,6 +232,7 @@ CompletableFuture createTable(TableDescriptor desc, byte[] startKey, byte[ CompletableFuture isTableEnabled(TableName tableName); /** + * Check if a table is disabled. * @param tableName name of table to check * @return true if table is off-line. The return value will be wrapped by a * {@link CompletableFuture}. @@ -218,6 +240,7 @@ CompletableFuture createTable(TableDescriptor desc, byte[] startKey, byte[ CompletableFuture isTableDisabled(TableName tableName); /** + * Check if a table is available. * @param tableName name of table to check * @return true if all regions of the table are available. The return value will be wrapped by a * {@link CompletableFuture}. @@ -238,26 +261,34 @@ CompletableFuture createTable(TableDescriptor desc, byte[] startKey, byte[ /** * Add a column family to an existing table. - * @param tableName name of the table to add column family to + * @param tableName name of the table to add column family to * @param columnFamily column family descriptor of column family to be added */ - CompletableFuture addColumnFamily(TableName tableName, - ColumnFamilyDescriptor columnFamily); + CompletableFuture addColumnFamily(TableName tableName, ColumnFamilyDescriptor columnFamily); /** * Delete a column family from a table. - * @param tableName name of table + * @param tableName name of table * @param columnFamily name of column family to be deleted */ CompletableFuture deleteColumnFamily(TableName tableName, byte[] columnFamily); /** * Modify an existing column family on a table. - * @param tableName name of table + * @param tableName name of table * @param columnFamily new column family descriptor to use */ CompletableFuture modifyColumnFamily(TableName tableName, - ColumnFamilyDescriptor columnFamily); + ColumnFamilyDescriptor columnFamily); + + /** + * Change the store file tracker of the given table's given family. + * @param tableName the table you want to change + * @param family the family you want to change + * @param dstSFT the destination store file tracker + */ + CompletableFuture modifyColumnFamilyStoreFileTracker(TableName tableName, byte[] family, + String dstSFT); /** * Create a new namespace. @@ -313,9 +344,9 @@ CompletableFuture modifyColumnFamily(TableName tableName, CompletableFuture flush(TableName tableName); /** - * Flush the specified column family stores on all regions of the passed table. - * This runs as a synchronous operation. - * @param tableName table to flush + * Flush the specified column family stores on all regions of the passed table. This runs as a + * synchronous operation. 
+ * @param tableName table to flush * @param columnFamily column family within a table */ CompletableFuture flush(TableName tableName, byte[] columnFamily); @@ -328,9 +359,9 @@ CompletableFuture modifyColumnFamily(TableName tableName, /** * Flush a column family within a region. - * @param regionName region to flush + * @param regionName region to flush * @param columnFamily column family within a region. If not present, flush the region's all - * column families. + * column families. */ CompletableFuture flushRegion(byte[] regionName, byte[] columnFamily); @@ -342,8 +373,8 @@ CompletableFuture modifyColumnFamily(TableName tableName, /** * Compact a table. When the returned CompletableFuture is done, it only means the compact request - * was sent to HBase and may need some time to finish the compact operation. - * Throws {@link org.apache.hadoop.hbase.TableNotFoundException} if table not found. + * was sent to HBase and may need some time to finish the compact operation. Throws + * {@link org.apache.hadoop.hbase.TableNotFoundException} if table not found. * @param tableName table to compact */ default CompletableFuture compact(TableName tableName) { @@ -353,11 +384,10 @@ default CompletableFuture compact(TableName tableName) { /** * Compact a column family within a table. When the returned CompletableFuture is done, it only * means the compact request was sent to HBase and may need some time to finish the compact - * operation. - * Throws {@link org.apache.hadoop.hbase.TableNotFoundException} if table not found. - * @param tableName table to compact + * operation. Throws {@link org.apache.hadoop.hbase.TableNotFoundException} if table not found. + * @param tableName table to compact * @param columnFamily column family within a table. If not present, compact the table's all - * column families. + * column families. */ default CompletableFuture compact(TableName tableName, byte[] columnFamily) { return compact(tableName, columnFamily, CompactType.NORMAL); @@ -365,10 +395,10 @@ default CompletableFuture compact(TableName tableName, byte[] columnFamily /** * Compact a table. When the returned CompletableFuture is done, it only means the compact request - * was sent to HBase and may need some time to finish the compact operation. - * Throws {@link org.apache.hadoop.hbase.TableNotFoundException} if table not found for - * normal compaction type. - * @param tableName table to compact + * was sent to HBase and may need some time to finish the compact operation. Throws + * {@link org.apache.hadoop.hbase.TableNotFoundException} if table not found for normal compaction + * type. + * @param tableName table to compact * @param compactType {@link org.apache.hadoop.hbase.client.CompactType} */ CompletableFuture compact(TableName tableName, CompactType compactType); @@ -376,15 +406,14 @@ default CompletableFuture compact(TableName tableName, byte[] columnFamily /** * Compact a column family within a table. When the returned CompletableFuture is done, it only * means the compact request was sent to HBase and may need some time to finish the compact - * operation. - * Throws {@link org.apache.hadoop.hbase.TableNotFoundException} if table not found for + * operation. Throws {@link org.apache.hadoop.hbase.TableNotFoundException} if table not found for * normal compaction type. 
- * @param tableName table to compact + * @param tableName table to compact * @param columnFamily column family within a table - * @param compactType {@link org.apache.hadoop.hbase.client.CompactType} + * @param compactType {@link org.apache.hadoop.hbase.client.CompactType} */ CompletableFuture compact(TableName tableName, byte[] columnFamily, - CompactType compactType); + CompactType compactType); /** * Compact an individual region. When the returned CompletableFuture is done, it only means the @@ -397,16 +426,16 @@ CompletableFuture compact(TableName tableName, byte[] columnFamily, * Compact a column family within a region. When the returned CompletableFuture is done, it only * means the compact request was sent to HBase and may need some time to finish the compact * operation. - * @param regionName region to compact + * @param regionName region to compact * @param columnFamily column family within a region. If not present, compact the region's all - * column families. + * column families. */ CompletableFuture compactRegion(byte[] regionName, byte[] columnFamily); /** * Major compact a table. When the returned CompletableFuture is done, it only means the compact - * request was sent to HBase and may need some time to finish the compact operation. - * Throws {@link org.apache.hadoop.hbase.TableNotFoundException} if table not found. + * request was sent to HBase and may need some time to finish the compact operation. Throws + * {@link org.apache.hadoop.hbase.TableNotFoundException} if table not found. * @param tableName table to major compact */ default CompletableFuture majorCompact(TableName tableName) { @@ -416,12 +445,11 @@ default CompletableFuture majorCompact(TableName tableName) { /** * Major compact a column family within a table. When the returned CompletableFuture is done, it * only means the compact request was sent to HBase and may need some time to finish the compact - * operation. - * Throws {@link org.apache.hadoop.hbase.TableNotFoundException} if table not found for + * operation. Throws {@link org.apache.hadoop.hbase.TableNotFoundException} if table not found for * normal compaction. type. - * @param tableName table to major compact + * @param tableName table to major compact * @param columnFamily column family within a table. If not present, major compact the table's all - * column families. + * column families. */ default CompletableFuture majorCompact(TableName tableName, byte[] columnFamily) { return majorCompact(tableName, columnFamily, CompactType.NORMAL); @@ -429,10 +457,10 @@ default CompletableFuture majorCompact(TableName tableName, byte[] columnF /** * Major compact a table. When the returned CompletableFuture is done, it only means the compact - * request was sent to HBase and may need some time to finish the compact operation. - * Throws {@link org.apache.hadoop.hbase.TableNotFoundException} if table not found for - * normal compaction type. - * @param tableName table to major compact + * request was sent to HBase and may need some time to finish the compact operation. Throws + * {@link org.apache.hadoop.hbase.TableNotFoundException} if table not found for normal compaction + * type. + * @param tableName table to major compact * @param compactType {@link org.apache.hadoop.hbase.client.CompactType} */ CompletableFuture majorCompact(TableName tableName, CompactType compactType); @@ -440,15 +468,14 @@ default CompletableFuture majorCompact(TableName tableName, byte[] columnF /** * Major compact a column family within a table. 
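The compaction methods in these hunks only enqueue work; the returned future completes when the request is accepted, as the javadoc continues to explain below. A small sketch, assuming a table t1 with family cf:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.AsyncAdmin;
import org.apache.hadoop.hbase.client.AsyncConnection;
import org.apache.hadoop.hbase.client.CompactType;
import org.apache.hadoop.hbase.client.CompactionState;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

public class CompactExample {
  public static void main(String[] args) throws Exception {
    try (AsyncConnection conn =
        ConnectionFactory.createAsyncConnection(HBaseConfiguration.create()).get()) {
      AsyncAdmin admin = conn.getAdmin();
      TableName tn = TableName.valueOf("t1");

      // The future completes once the request is accepted, not when compaction finishes.
      admin.majorCompact(tn, Bytes.toBytes("cf"), CompactType.NORMAL).get();

      // Poll the compaction state if you need to know when the work is actually done.
      CompactionState state = admin.getCompactionState(tn).get();
      System.out.println("compaction state: " + state);
    }
  }
}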
When the returned CompletableFuture is done, it * only means the compact request was sent to HBase and may need some time to finish the compact - * operation. - * Throws {@link org.apache.hadoop.hbase.TableNotFoundException} if table not found. - * @param tableName table to major compact + * operation. Throws {@link org.apache.hadoop.hbase.TableNotFoundException} if table not found. + * @param tableName table to major compact * @param columnFamily column family within a table. If not present, major compact the table's all - * column families. - * @param compactType {@link org.apache.hadoop.hbase.client.CompactType} + * column families. + * @param compactType {@link org.apache.hadoop.hbase.client.CompactType} */ CompletableFuture majorCompact(TableName tableName, byte[] columnFamily, - CompactType compactType); + CompactType compactType); /** * Major compact a region. When the returned CompletableFuture is done, it only means the compact @@ -461,9 +488,9 @@ CompletableFuture majorCompact(TableName tableName, byte[] columnFamily, * Major compact a column family within region. When the returned CompletableFuture is done, it * only means the compact request was sent to HBase and may need some time to finish the compact * operation. - * @param regionName region to major compact + * @param regionName region to major compact * @param columnFamily column family within a region. If not present, major compact the region's - * all column families. + * all column families. */ CompletableFuture majorCompactRegion(byte[] regionName, byte[] columnFamily); @@ -494,9 +521,9 @@ default CompletableFuture mergeSwitch(boolean enabled) { * Notice that, the method itself is always non-blocking, which means it will always return * immediately. The {@code drainMerges} parameter only effects when will we complete the returned * {@link CompletableFuture}. - * @param enabled enabled or not + * @param enabled enabled or not * @param drainMerges If true, it waits until current merge() call, if outstanding, - * to return. + * to return. * @return Previous switch value wrapped by a {@link CompletableFuture} */ CompletableFuture mergeSwitch(boolean enabled, boolean drainMerges); @@ -523,9 +550,9 @@ default CompletableFuture splitSwitch(boolean enabled) { * Notice that, the method itself is always non-blocking, which means it will always return * immediately. The {@code drainSplits} parameter only effects when will we complete the returned * {@link CompletableFuture}. - * @param enabled enabled or not + * @param enabled enabled or not * @param drainSplits If true, it waits until current split() call, if outstanding, - * to return. + * to return. * @return Previous switch value wrapped by a {@link CompletableFuture} */ CompletableFuture splitSwitch(boolean enabled, boolean drainSplits); @@ -541,22 +568,22 @@ default CompletableFuture splitSwitch(boolean enabled) { * Merge two regions. * @param nameOfRegionA encoded or full name of region a * @param nameOfRegionB encoded or full name of region b - * @param forcible true if do a compulsory merge, otherwise we will only merge two adjacent - * regions + * @param forcible true if do a compulsory merge, otherwise we will only merge two adjacent + * regions * @deprecated since 2.3.0 and will be removed in 4.0.0.Use {@link #mergeRegions(List, boolean)} * instead. 
*/ @Deprecated default CompletableFuture mergeRegions(byte[] nameOfRegionA, byte[] nameOfRegionB, - boolean forcible) { + boolean forcible) { return mergeRegions(Arrays.asList(nameOfRegionA, nameOfRegionB), forcible); } /** * Merge multiple regions (>=2). * @param nameOfRegionsToMerge encoded or full name of daughter regions - * @param forcible true if do a compulsory merge, otherwise we will only merge two adjacent - * regions + * @param forcible true if do a compulsory merge, otherwise we will only merge two + * adjacent regions */ CompletableFuture mergeRegions(List nameOfRegionsToMerge, boolean forcible); @@ -574,7 +601,7 @@ default CompletableFuture mergeRegions(byte[] nameOfRegionA, byte[] nameOf /** * Split a table. - * @param tableName table to split + * @param tableName table to split * @param splitPoint the explicit position to split on */ CompletableFuture split(TableName tableName, byte[] splitPoint); @@ -583,16 +610,20 @@ default CompletableFuture mergeRegions(byte[] nameOfRegionA, byte[] nameOf * Split an individual region. * @param regionName region to split * @param splitPoint the explicit position to split on. If not present, it will decide by region - * server. + * server. */ CompletableFuture splitRegion(byte[] regionName, byte[] splitPoint); /** + * Assign an individual region. * @param regionName Encoded or full name of region to assign. */ CompletableFuture assign(byte[] regionName); /** + * Unassign a region from current hosting regionserver. Region will then be assigned to a + * regionserver chosen at random. Region could be reassigned back to the same server. Use + * {@link #move(byte[], ServerName)} if you want to control the region movement. * @param regionName Encoded or full name of region to unassign. */ CompletableFuture unassign(byte[] regionName); @@ -602,12 +633,11 @@ default CompletableFuture mergeRegions(byte[] nameOfRegionA, byte[] nameOf * regionserver chosen at random. Region could be reassigned back to the same server. Use * {@link #move(byte[], ServerName)} if you want to control the region movement. * @param regionName Encoded or full name of region to unassign. Will clear any existing - * RegionPlan if one found. - * @param forcible If true, force unassign (Will remove region from regions-in-transition too if - * present. If results in double assignment use hbck -fix to resolve. To be used by - * experts). - * @deprecated since 2.4.0 and will be removed in 4.0.0. Use {@link #unassign(byte[])} - * instead. + * RegionPlan if one found. + * @param forcible If true, force unassign (Will remove region from regions-in-transition too if + * present. If results in double assignment use hbck -fix to resolve. To be used + * by experts). + * @deprecated since 2.4.0 and will be removed in 4.0.0. Use {@link #unassign(byte[])} instead. * @see HBASE-24875 */ @Deprecated @@ -633,10 +663,11 @@ default CompletableFuture unassign(byte[] regionName, boolean forcible) { /** * Move the region r to dest. - * @param regionName Encoded or full name of region to move. + * @param regionName Encoded or full name of region to move. * @param destServerName The servername of the destination regionserver. If not present, we'll - * assign to a random server. A server name is made of host, port and startcode. Here is - * an example: host187.example.com,60020,1289493121758 + * assign to a random server. A server name is made of host, port and + * startcode. 
Here is an example: + * host187.example.com,60020,1289493121758 */ CompletableFuture move(byte[] regionName, ServerName destServerName); @@ -655,22 +686,22 @@ default CompletableFuture unassign(byte[] regionName, boolean forcible) { /** * Add a new replication peer for replicating data to slave cluster - * @param peerId a short name that identifies the peer + * @param peerId a short name that identifies the peer * @param peerConfig configuration for the replication slave cluster */ default CompletableFuture addReplicationPeer(String peerId, - ReplicationPeerConfig peerConfig) { + ReplicationPeerConfig peerConfig) { return addReplicationPeer(peerId, peerConfig, true); } /** * Add a new replication peer for replicating data to slave cluster - * @param peerId a short name that identifies the peer + * @param peerId a short name that identifies the peer * @param peerConfig configuration for the replication slave cluster - * @param enabled peer state, true if ENABLED and false if DISABLED + * @param enabled peer state, true if ENABLED and false if DISABLED */ - CompletableFuture addReplicationPeer(String peerId, - ReplicationPeerConfig peerConfig, boolean enabled); + CompletableFuture addReplicationPeer(String peerId, ReplicationPeerConfig peerConfig, + boolean enabled); /** * Remove a peer and stop the replication @@ -699,27 +730,27 @@ CompletableFuture addReplicationPeer(String peerId, /** * Update the peerConfig for the specified peer - * @param peerId a short name that identifies the peer + * @param peerId a short name that identifies the peer * @param peerConfig new config for the peer */ CompletableFuture updateReplicationPeerConfig(String peerId, - ReplicationPeerConfig peerConfig); + ReplicationPeerConfig peerConfig); /** * Append the replicable table-cf config of the specified peer - * @param peerId a short that identifies the cluster + * @param peerId a short that identifies the cluster * @param tableCfs A map from tableName to column family names */ CompletableFuture appendReplicationPeerTableCFs(String peerId, - Map> tableCfs); + Map> tableCfs); /** * Remove some table-cfs from config of the specified peer - * @param peerId a short name that identifies the cluster + * @param peerId a short name that identifies the cluster * @param tableCfs A map from tableName to column family names */ CompletableFuture removeReplicationPeerTableCFs(String peerId, - Map> tableCfs); + Map> tableCfs); /** * Return a list of replication peers. @@ -764,7 +795,7 @@ CompletableFuture removeReplicationPeerTableCFs(String peerId, * naming. Snapshot names follow the same naming constraints as tables in HBase. See * {@link org.apache.hadoop.hbase.TableName#isLegalFullyQualifiedTableName(byte[])}. * @param snapshotName name of the snapshot to be created - * @param tableName name of the table for which snapshot is created + * @param tableName name of the table for which snapshot is created */ default CompletableFuture snapshot(String snapshotName, TableName tableName) { return snapshot(snapshotName, tableName, SnapshotType.FLUSH); @@ -778,12 +809,12 @@ default CompletableFuture snapshot(String snapshotName, TableName tableNam * naming. Snapshot names follow the same naming constraints as tables in HBase. See * {@link org.apache.hadoop.hbase.TableName#isLegalFullyQualifiedTableName(byte[])}. * @param snapshotName name to give the snapshot on the filesystem. 
Must be unique from all other - * snapshots stored on the cluster - * @param tableName name of the table to snapshot - * @param type type of snapshot to take + * snapshots stored on the cluster + * @param tableName name of the table to snapshot + * @param type type of snapshot to take */ default CompletableFuture snapshot(String snapshotName, TableName tableName, - SnapshotType type) { + SnapshotType type) { return snapshot(new SnapshotDescription(snapshotName, tableName, type)); } @@ -835,11 +866,11 @@ default CompletableFuture snapshot(String snapshotName, TableName tableNam * restored. If the restore completes without problem the failsafe snapshot is deleted. The * failsafe snapshot name is configurable by using the property * "hbase.snapshot.restore.failsafe.name". - * @param snapshotName name of the snapshot to restore + * @param snapshotName name of the snapshot to restore * @param takeFailSafeSnapshot true if the failsafe snapshot should be taken */ default CompletableFuture restoreSnapshot(String snapshotName, - boolean takeFailSafeSnapshot) { + boolean takeFailSafeSnapshot) { return restoreSnapshot(snapshotName, takeFailSafeSnapshot, false); } @@ -850,17 +881,17 @@ default CompletableFuture restoreSnapshot(String snapshotName, * restored. If the restore completes without problem the failsafe snapshot is deleted. The * failsafe snapshot name is configurable by using the property * "hbase.snapshot.restore.failsafe.name". - * @param snapshotName name of the snapshot to restore + * @param snapshotName name of the snapshot to restore * @param takeFailSafeSnapshot true if the failsafe snapshot should be taken - * @param restoreAcl true to restore acl of snapshot + * @param restoreAcl true to restore acl of snapshot */ CompletableFuture restoreSnapshot(String snapshotName, boolean takeFailSafeSnapshot, - boolean restoreAcl); + boolean restoreAcl); /** * Create a new table by cloning the snapshot content. * @param snapshotName name of the snapshot to be cloned - * @param tableName name of the table where the snapshot will be restored + * @param tableName name of the table where the snapshot will be restored */ default CompletableFuture cloneSnapshot(String snapshotName, TableName tableName) { return cloneSnapshot(snapshotName, tableName, false); @@ -869,11 +900,23 @@ default CompletableFuture cloneSnapshot(String snapshotName, TableName tab /** * Create a new table by cloning the snapshot content. * @param snapshotName name of the snapshot to be cloned - * @param tableName name of the table where the snapshot will be restored - * @param restoreAcl true to restore acl of snapshot + * @param tableName name of the table where the snapshot will be restored + * @param restoreAcl true to restore acl of snapshot + */ + default CompletableFuture cloneSnapshot(String snapshotName, TableName tableName, + boolean restoreAcl) { + return cloneSnapshot(snapshotName, tableName, restoreAcl, null); + } + + /** + * Create a new table by cloning the snapshot content. + * @param snapshotName name of the snapshot to be cloned + * @param tableName name of the table where the snapshot will be restored + * @param restoreAcl true to restore acl of snapshot + * @param customSFT specify the StroreFileTracker used for the table */ CompletableFuture cloneSnapshot(String snapshotName, TableName tableName, - boolean restoreAcl); + boolean restoreAcl, String customSFT); /** * List completed snapshots. 
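A minimal sketch of taking a snapshot and cloning it with the new customSFT parameter added in this hunk. Snapshot, table, and tracker names are placeholders ("FILE" is only an example value).

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.AsyncAdmin;
import org.apache.hadoop.hbase.client.AsyncConnection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.SnapshotDescription;
import org.apache.hadoop.hbase.client.SnapshotType;

public class SnapshotCloneExample {
  public static void main(String[] args) throws Exception {
    try (AsyncConnection conn =
        ConnectionFactory.createAsyncConnection(HBaseConfiguration.create()).get()) {
      AsyncAdmin admin = conn.getAdmin();
      TableName source = TableName.valueOf("t1");

      // Take a flush-based snapshot of the source table.
      admin.snapshot(new SnapshotDescription("t1_snap", source, SnapshotType.FLUSH)).get();

      // Clone it into a new table, without restoring ACLs, and with an explicit store file
      // tracker for the clone.
      admin.cloneSnapshot("t1_snap", TableName.valueOf("t1_clone"), false, "FILE").get();
    }
  }
}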
@@ -900,13 +943,13 @@ CompletableFuture cloneSnapshot(String snapshotName, TableName tableName, /** * List all the completed snapshots matching the given table name regular expression and snapshot * name regular expression. - * @param tableNamePattern The compiled table name regular expression to match against + * @param tableNamePattern The compiled table name regular expression to match against * @param snapshotNamePattern The compiled snapshot name regular expression to match against * @return - returns a List of completed SnapshotDescription wrapped by a * {@link CompletableFuture} */ CompletableFuture> listTableSnapshots(Pattern tableNamePattern, - Pattern snapshotNamePattern); + Pattern snapshotNamePattern); /** * Delete an existing snapshot. @@ -934,34 +977,34 @@ CompletableFuture> listTableSnapshots(Pattern tableNam /** * Delete all existing snapshots matching the given table name regular expression and snapshot * name regular expression. - * @param tableNamePattern The compiled table name regular expression to match against + * @param tableNamePattern The compiled table name regular expression to match against * @param snapshotNamePattern The compiled snapshot name regular expression to match against */ CompletableFuture deleteTableSnapshots(Pattern tableNamePattern, - Pattern snapshotNamePattern); + Pattern snapshotNamePattern); /** * Execute a distributed procedure on a cluster. * @param signature A distributed procedure is uniquely identified by its signature (default the - * root ZK node name of the procedure). - * @param instance The instance name of the procedure. For some procedures, this parameter is - * optional. - * @param props Property/Value pairs of properties passing to the procedure + * root ZK node name of the procedure). + * @param instance The instance name of the procedure. For some procedures, this parameter is + * optional. + * @param props Property/Value pairs of properties passing to the procedure */ CompletableFuture execProcedure(String signature, String instance, - Map props); + Map props); /** * Execute a distributed procedure on a cluster. * @param signature A distributed procedure is uniquely identified by its signature (default the - * root ZK node name of the procedure). - * @param instance The instance name of the procedure. For some procedures, this parameter is - * optional. - * @param props Property/Value pairs of properties passing to the procedure + * root ZK node name of the procedure). + * @param instance The instance name of the procedure. For some procedures, this parameter is + * optional. + * @param props Property/Value pairs of properties passing to the procedure * @return data returned after procedure execution. null if no return data. */ CompletableFuture execProcedureWithReturn(String signature, String instance, - Map props); + Map props); /** * Check the current state of the specified procedure. There are three possible states: @@ -971,18 +1014,18 @@ CompletableFuture execProcedureWithReturn(String signature, String insta *

  • running - returns false
  • finished - returns true
  • finished with error - throws the exception that caused the procedure to fail
  • * * @param signature The signature that uniquely identifies a procedure - * @param instance The instance name of the procedure - * @param props Property/Value pairs of properties passing to the procedure + * @param instance The instance name of the procedure + * @param props Property/Value pairs of properties passing to the procedure * @return true if the specified procedure is finished successfully, false if it is still running. * The value is wrapped by {@link CompletableFuture} */ CompletableFuture isProcedureFinished(String signature, String instance, - Map props); + Map props); /** - * Abort a procedure - * Do not use. Usually it is ignored but if not, it can do more damage than good. See hbck2. - * @param procId ID of the procedure to abort + * Abort a procedure Do not use. Usually it is ignored but if not, it can do more damage than + * good. See hbck2. + * @param procId ID of the procedure to abort * @param mayInterruptIfRunning if the proc completed at least one step, should it be aborted? * @return true if aborted, false if procedure already completed or does not exist. the value is * wrapped by {@link CompletableFuture} @@ -1005,10 +1048,10 @@ CompletableFuture isProcedureFinished(String signature, String instance CompletableFuture getLocks(); /** - * Mark region server(s) as decommissioned to prevent additional regions from getting - * assigned to them. Optionally unload the regions on the servers. If there are multiple servers - * to be decommissioned, decommissioning them at the same time can prevent wasteful region - * movements. Region unloading is asynchronous. + * Mark region server(s) as decommissioned to prevent additional regions from getting assigned to + * them. Optionally unload the regions on the servers. If there are multiple servers to be + * decommissioned, decommissioning them at the same time can prevent wasteful region movements. + * Region unloading is asynchronous. * @param servers The list of servers to decommission. * @param offload True to offload the regions from the decommissioned servers */ @@ -1023,47 +1066,37 @@ CompletableFuture isProcedureFinished(String signature, String instance /** * Remove decommission marker from a region server to allow regions assignments. Load regions onto * the server if a list of regions is given. Region loading is asynchronous. - * @param server The server to recommission. + * @param server The server to recommission. * @param encodedRegionNames Regions to load onto the server. */ CompletableFuture recommissionRegionServer(ServerName server, - List encodedRegionNames); + List encodedRegionNames); - /** - * @return cluster status wrapped by {@link CompletableFuture} - */ + /** Returns cluster status wrapped by {@link CompletableFuture} */ CompletableFuture getClusterMetrics(); - /** - * @return cluster status wrapped by {@link CompletableFuture} - */ + /** Returns cluster status wrapped by {@link CompletableFuture} */ CompletableFuture getClusterMetrics(EnumSet