Got exception: Object with id "" is managed by a different persistence manager #253

iswarezwp opened this issue Nov 8, 2017 · 2 comments

iswarezwp commented Nov 8, 2017

I set up kafka-connect-hdfs with the following configuration, with Hive integration enabled:

name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=10
topics=hdfs_test_26
hdfs.url=hdfs://ambari.bigdata.mycompany.com:8020
flush.size=3

locale=zh_CN
timezone=Asia/Shanghai
partitioner.class=io.confluent.connect.hdfs.partitioner.HourlyPartitioner

hive.integration=true
hive.metastore.uris=thrift://node1.bigdata.test.com:9083
schema.compatibility=BACKWARD

The hdfs_test_26 topic was created with 20 partitions, so the connector can start more than one task.
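For reference, a topic like this can be created and verified with the standard Kafka CLI of this era (the ZooKeeper address below is a placeholder, not taken from my cluster):

# create the topic with 20 partitions
bin/kafka-topics.sh --create --zookeeper <zookeeper-host>:2181 --topic hdfs_test_26 --partitions 20 --replication-factor 1
# confirm the partition count afterwards
bin/kafka-topics.sh --describe --zookeeper <zookeeper-host>:2181 --topic hdfs_test_26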

When I write some data to the topic, the connector starts working but throws the following exceptions, and only part of the data makes it to HDFS:

[2017-11-08 15:45:44,289] INFO Finished recovery for topic partition hdfs_test_26-16 (io.confluent.connect.hdfs.TopicPartitionWriter:223)
[2017-11-08 15:45:44,290] INFO Started recovery for topic partition hdfs_test_26-15 (io.confluent.connect.hdfs.TopicPartitionWriter:208)
[2017-11-08 15:45:44,292] INFO Finished recovery for topic partition hdfs_test_26-6 (io.confluent.connect.hdfs.TopicPartitionWriter:223)
[2017-11-08 15:45:44,292] INFO Finished recovery for topic partition hdfs_test_26-12 (io.confluent.connect.hdfs.TopicPartitionWriter:223)
[2017-11-08 15:45:44,293] INFO Finished recovery for topic partition hdfs_test_26-15 (io.confluent.connect.hdfs.TopicPartitionWriter:223)
[2017-11-08 15:45:44,438] WARN Hive database already exists: default (io.confluent.connect.hdfs.hive.HiveMetaStore:150)
[2017-11-08 15:45:44,439] WARN Hive database already exists: default (io.confluent.connect.hdfs.hive.HiveMetaStore:150)
[2017-11-08 15:45:44,446] WARN Hive database already exists: default (io.confluent.connect.hdfs.hive.HiveMetaStore:150)
[2017-11-08 15:45:44,459] INFO Starting commit and rotation for topic partition hdfs_test_26-7 with start offsets {year=2017/month=11/day=08/hour=15/=0} and end offsets {year=2017/month=11/day=08/hour=15/=2} (io.confluent.connect.hdfs.TopicPartitionWriter:297)
[2017-11-08 15:45:44,460] INFO Starting commit and rotation for topic partition hdfs_test_26-15 with start offsets {year=2017/month=11/day=08/hour=15/=0} and end offsets {year=2017/month=11/day=08/hour=15/=2} (io.confluent.connect.hdfs.TopicPartitionWriter:297)
[2017-11-08 15:45:44,688] INFO Successfully acquired lease for hdfs://ambari.bigdata.mycompany.com:8020/logs/hdfs_test_26/15/log (io.confluent.connect.hdfs.wal.FSWAL:75)
[2017-11-08 15:45:44,692] INFO Successfully acquired lease for hdfs://ambari.bigdata.mycompany.com:8020/logs/hdfs_test_26/7/log (io.confluent.connect.hdfs.wal.FSWAL:75)
[2017-11-08 15:45:45,352] INFO Committed hdfs://ambari.bigdata.mycompany.com:8020/topics/hdfs_test_26/year=2017/month=11/day=08/hour=15//hdfs_test_26+15+0000000000+0000000002.avro for hdfs_test_26-15 (io.confluent.connect.hdfs.TopicPartitionWriter:625)
[2017-11-08 15:45:45,380] INFO Committed hdfs://ambari.bigdata.mycompany.com:8020/topics/hdfs_test_26/year=2017/month=11/day=08/hour=15//hdfs_test_26+7+0000000000+0000000002.avro for hdfs_test_26-7 (io.confluent.connect.hdfs.TopicPartitionWriter:625)
[2017-11-08 15:45:45,433] INFO Starting commit and rotation for topic partition hdfs_test_26-17 with start offsets {year=2017/month=11/day=08/hour=15/=0} and end offsets {year=2017/month=11/day=08/hour=15/=2} (io.confluent.connect.hdfs.TopicPartitionWriter:297)
[2017-11-08 15:45:45,466] WARN Hive database already exists: default (io.confluent.connect.hdfs.hive.HiveMetaStore:150)
[2017-11-08 15:45:45,471] INFO Starting commit and rotation for topic partition hdfs_test_26-8 with start offsets {year=2017/month=11/day=08/hour=15/=0} and end offsets {year=2017/month=11/day=08/hour=15/=2} (io.confluent.connect.hdfs.TopicPartitionWriter:297)
[2017-11-08 15:45:45,523] WARN Hive table already exists: default.hdfs_test_26 (io.confluent.connect.hdfs.hive.HiveMetaStore:198)
[2017-11-08 15:45:45,580] INFO Successfully acquired lease for hdfs://ambari.bigdata.mycompany.com:8020/logs/hdfs_test_26/17/log (io.confluent.connect.hdfs.wal.FSWAL:75)
[2017-11-08 15:45:45,606] INFO Successfully acquired lease for hdfs://ambari.bigdata.mycompany.com:8020/logs/hdfs_test_26/8/log (io.confluent.connect.hdfs.wal.FSWAL:75)
[2017-11-08 15:45:46,222] INFO Committed hdfs://ambari.bigdata.mycompany.com:8020/topics/hdfs_test_26/year=2017/month=11/day=08/hour=15//hdfs_test_26+17+0000000000+0000000002.avro for hdfs_test_26-17 (io.confluent.connect.hdfs.TopicPartitionWriter:625)
[2017-11-08 15:45:46,222] INFO Committed hdfs://ambari.bigdata.mycompany.com:8020/topics/hdfs_test_26/year=2017/month=11/day=08/hour=15//hdfs_test_26+8+0000000000+0000000002.avro for hdfs_test_26-8 (io.confluent.connect.hdfs.TopicPartitionWriter:625)
[2017-11-08 15:45:46,233] WARN Hive database already exists: default (io.confluent.connect.hdfs.hive.HiveMetaStore:150)
[2017-11-08 15:45:46,267] INFO Starting commit and rotation for topic partition hdfs_test_26-16 with start offsets {year=2017/month=11/day=08/hour=15/=0} and end offsets {year=2017/month=11/day=08/hour=15/=2} (io.confluent.connect.hdfs.TopicPartitionWriter:297)
[2017-11-08 15:45:46,294] WARN Hive table already exists: default.hdfs_test_26 (io.confluent.connect.hdfs.hive.HiveMetaStore:198)
[2017-11-08 15:45:46,308] INFO Starting commit and rotation for topic partition hdfs_test_26-6 with start offsets {year=2017/month=11/day=08/hour=15/=0} and end offsets {year=2017/month=11/day=08/hour=15/=2} (io.confluent.connect.hdfs.TopicPartitionWriter:297)
[2017-11-08 15:45:46,413] INFO Successfully acquired lease for hdfs://ambari.bigdata.mycompany.com:8020/logs/hdfs_test_26/16/log (io.confluent.connect.hdfs.wal.FSWAL:75)
[2017-11-08 15:45:46,473] INFO Successfully acquired lease for hdfs://ambari.bigdata.mycompany.com:8020/logs/hdfs_test_26/6/log (io.confluent.connect.hdfs.wal.FSWAL:75)
[2017-11-08 15:45:46,606] WARN Hive table already exists: default.hdfs_test_26 (io.confluent.connect.hdfs.hive.HiveMetaStore:198)
[2017-11-08 15:45:46,735] WARN Hive table already exists: default.hdfs_test_26 (io.confluent.connect.hdfs.hive.HiveMetaStore:198)
[2017-11-08 15:45:46,767] WARN Hive database already exists: default (io.confluent.connect.hdfs.hive.HiveMetaStore:150)
[2017-11-08 15:45:46,777] WARN Hive table already exists: default.hdfs_test_26 (io.confluent.connect.hdfs.hive.HiveMetaStore:198)
[2017-11-08 15:45:47,070] INFO Committed hdfs://ambari.bigdata.mycompany.com:8020/topics/hdfs_test_26/year=2017/month=11/day=08/hour=15//hdfs_test_26+16+0000000000+0000000002.avro for hdfs_test_26-16 (io.confluent.connect.hdfs.TopicPartitionWriter:625)
[2017-11-08 15:45:47,098] INFO Committed hdfs://ambari.bigdata.mycompany.com:8020/topics/hdfs_test_26/year=2017/month=11/day=08/hour=15//hdfs_test_26+6+0000000000+0000000002.avro for hdfs_test_26-6 (io.confluent.connect.hdfs.TopicPartitionWriter:625)
[2017-11-08 15:45:51,295] ERROR Got exception: org.apache.hadoop.hive.metastore.api.MetaException javax.jdo.JDOUserException: Object with id "" is managed by a different persistence manager
    at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:636)
    at org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:171)
    at org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:582)
    at org.apache.hadoop.hive.metastore.ObjectStore.getDatabases(ObjectStore.java:816)
    at sun.reflect.GeneratedMethodAccessor44.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:103)
    at com.sun.proxy.$Proxy10.getDatabases(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_databases(HiveMetaStore.java:1304)
    at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
    at com.sun.proxy.$Proxy12.get_databases(Unknown Source)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_databases.getResult(ThriftHiveMetastore.java:9267)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_databases.getResult(ThriftHiveMetastore.java:9251)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
    at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
    at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
NestedThrowablesStackTrace:
Object with id "" is managed by a different persistence manager
org.datanucleus.exceptions.NucleusUserException: Object with id "" is managed by a different persistence manager
    at org.datanucleus.ExecutionContextImpl.detachObject(ExecutionContextImpl.java:2691)
    at org.datanucleus.ExecutionContextThreadedImpl.detachObject(ExecutionContextThreadedImpl.java:329)
    at org.datanucleus.store.types.SCOUtils.detachForCollection(SCOUtils.java:1267)
    at org.datanucleus.store.fieldmanager.DetachFieldManager.internalFetchObjectField(DetachFieldManager.java:175)
    at org.datanucleus.store.fieldmanager.AbstractFetchDepthFieldManager.fetchObjectField(AbstractFetchDepthFieldManager.java:114)
    at org.datanucleus.state.StateManagerImpl.detach(StateManagerImpl.java:3571)
    at org.datanucleus.ExecutionContextImpl.performDetachAllOnTxnEnd(ExecutionContextImpl.java:4579)
    at org.datanucleus.ExecutionContextImpl.postCommit(ExecutionContextImpl.java:4616)
    at org.datanucleus.ExecutionContextImpl.transactionCommitted(ExecutionContextImpl.java:775)
    at org.datanucleus.TransactionImpl.internalPostCommit(TransactionImpl.java:559)
    at org.datanucleus.TransactionImpl.commit(TransactionImpl.java:342)
    at org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:107)
    at org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:582)
    at org.apache.hadoop.hive.metastore.ObjectStore.getDatabases(ObjectStore.java:816)
    at sun.reflect.GeneratedMethodAccessor44.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:103)
    at com.sun.proxy.$Proxy10.getDatabases(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_databases(HiveMetaStore.java:1304)
    at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
    at com.sun.proxy.$Proxy12.get_databases(Unknown Source)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_databases.getResult(ThriftHiveMetastore.java:9267)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_databases.getResult(ThriftHiveMetastore.java:9251)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
    at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
    at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
 (hive.log:1211)
MetaException(message:javax.jdo.JDOUserException: Object with id "" is managed by a different persistence manager
    at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:636)
    at org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:171)
    at org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:582)
    at org.apache.hadoop.hive.metastore.ObjectStore.getDatabases(ObjectStore.java:816)
    at sun.reflect.GeneratedMethodAccessor44.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:103)
    at com.sun.proxy.$Proxy10.getDatabases(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_databases(HiveMetaStore.java:1304)
    at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
    at com.sun.proxy.$Proxy12.get_databases(Unknown Source)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_databases.getResult(ThriftHiveMetastore.java:9267)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_databases.getResult(ThriftHiveMetastore.java:9251)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
    at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
    at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
NestedThrowablesStackTrace:
Object with id "" is managed by a different persistence manager
org.datanucleus.exceptions.NucleusUserException: Object with id "" is managed by a different persistence manager
    at org.datanucleus.ExecutionContextImpl.detachObject(ExecutionContextImpl.java:2691)
    at org.datanucleus.ExecutionContextThreadedImpl.detachObject(ExecutionContextThreadedImpl.java:329)
    at org.datanucleus.store.types.SCOUtils.detachForCollection(SCOUtils.java:1267)
    at org.datanucleus.store.fieldmanager.DetachFieldManager.internalFetchObjectField(DetachFieldManager.java:175)
    at org.datanucleus.store.fieldmanager.AbstractFetchDepthFieldManager.fetchObjectField(AbstractFetchDepthFieldManager.java:114)
    at org.datanucleus.state.StateManagerImpl.detach(StateManagerImpl.java:3571)
    at org.datanucleus.ExecutionContextImpl.performDetachAllOnTxnEnd(ExecutionContextImpl.java:4579)
    at org.datanucleus.ExecutionContextImpl.postCommit(ExecutionContextImpl.java:4616)
    at org.datanucleus.ExecutionContextImpl.transactionCommitted(ExecutionContextImpl.java:775)
    at org.datanucleus.TransactionImpl.internalPostCommit(TransactionImpl.java:559)
    at org.datanucleus.TransactionImpl.commit(TransactionImpl.java:342)
    at org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:107)
    at org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:582)
    at org.apache.hadoop.hive.metastore.ObjectStore.getDatabases(ObjectStore.java:816)
    at sun.reflect.GeneratedMethodAccessor44.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:103)
    at com.sun.proxy.$Proxy10.getDatabases(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_databases(HiveMetaStore.java:1304)
    at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
    at com.sun.proxy.$Proxy12.get_databases(Unknown Source)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_databases.getResult(ThriftHiveMetastore.java:9267)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_databases.getResult(ThriftHiveMetastore.java:9251)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
    at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
    at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_databases_result$get_databases_resultStandardScheme.read(ThriftHiveMetastore.java:17344)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_databases_result$get_databases_resultStandardScheme.read(ThriftHiveMetastore.java:17312)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_databases_result.read(ThriftHiveMetastore.java:17254)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_databases(ThriftHiveMetastore.java:714)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_databases(ThriftHiveMetastore.java:701)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabases(HiveMetaStoreClient.java:1020)
    at org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient.isOpen(HiveClientCache.java:367)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152)
    at com.sun.proxy.$Proxy54.isOpen(Unknown Source)
    at org.apache.hive.hcatalog.common.HiveClientCache.get(HiveClientCache.java:205)
    at org.apache.hive.hcatalog.common.HCatUtil.getHiveMetastoreClient(HCatUtil.java:558)
    at io.confluent.connect.hdfs.hive.HiveMetaStore.<init>(HiveMetaStore.java:69)
    at io.confluent.connect.hdfs.DataWriter.<init>(DataWriter.java:186)
    at io.confluent.connect.hdfs.HdfsSinkTask.start(HdfsSinkTask.java:76)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:232)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:145)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2017-11-08 15:45:51,297] ERROR Converting exception to MetaException (hive.log:1212)
[2017-11-08 15:45:51,302] INFO Trying to connect to metastore with URI thrift://node1.bigdata.mycompany.com:9083 (hive.metastore:376)
[2017-11-08 15:45:51,303] INFO Connected to metastore. (hive.metastore:472)
[2017-11-08 15:45:51,303] INFO Sink task WorkerSinkTask{id=hdfs-sink-4} finished initialization and start (org.apache.kafka.connect.runtime.WorkerSinkTask:233)
iswarezwp (Author) commented:

I set the hive.support.concurrency property to true, but it has no effect.
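That is, something like this in the metastore's hive-site.xml (a minimal sketch; hive.support.concurrency is the standard Hive property name):

<property>
  <name>hive.support.concurrency</name>
  <value>true</value>
</property>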

When I let the topic be created automatically, only one task is created and it works fine.
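An auto-created topic gets the broker-default partition count, so there is only a single partition to assign (a sketch of the relevant broker setting in server.properties; 1 is the stock default):

# broker-side partition count for auto-created topics
num.partitions=1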

So, is there any configuration I have missed?


OneCricketeer commented Sep 11, 2018

This looks like a Hive configuration issue, not a Kafka Connect one. For example, some setting in your hive-site.xml related to the Hive metastore may need to be corrected.

And you need to give the hive.conf.dir and hadoop.conf.dir properties to Connect.
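For example, something like this in the connector properties (a sketch; the paths are an assumption based on a typical Ambari-managed layout, so adjust them to your cluster):

hive.conf.dir=/etc/hive/conf
hadoop.conf.dir=/etc/hadoop/conf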

I might suggest asking around on the Hive developer mailing lists, though.
