Hive avro.schema.literal exceeds max size due to long Avro schema
#331
Comments
I've seen this workaround: log into the Hive Metastore DB and widen the parameter column there. Probably want to pick a realistic size, though.
It is one way to avoid this problem, but it depends on the Hive side, and the column could be shortened again by a Hive upgrade.
@daigorowhite Yes. I only mentioned the workaround because it works today and requires no code changes. It may not be ideal, and it may not work for everyone.
See HIVE-12274. You could manually apply the upgrade script for MySQL.
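For reference, on MySQL the HIVE-12274 change widens the metastore's parameter-value columns. A manual equivalent might look like the sketch below; the table and column names are from the stock MySQL metastore schema, but verify them against the actual upgrade script for your Hive version before running anything:

```sql
-- Widen the table-parameter value column so a long avro.schema.literal
-- fits. MEDIUMTEXT is what the MySQL upgrade script uses; other
-- metastore backends need their own equivalent.
ALTER TABLE TABLE_PARAMS MODIFY PARAM_VALUE MEDIUMTEXT;

-- SerDe parameters have the same limitation if the schema literal is
-- stored there instead.
ALTER TABLE SERDE_PARAMS MODIFY PARAM_VALUE MEDIUMTEXT;
```

As noted above, a later Hive schema upgrade may reapply the stock column definitions, so this is a stopgap rather than a permanent fix.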
Thanks for sharing it! 👍

Duplicates #145
Hi team, I have an issue with a long Avro schema in the `kafka-connect-hdfs` Hive integration. When I sink a table with a long schema into HDFS using `kafka-connect-hdfs`, the data lands in HDFS successfully, but I get an error when I run a query against the table in Hive.

I investigated the root cause: the Hive metadata parameter column is a `varchar` with a limited size, and the serialized schema exceeds it.

https://github.com/confluentinc/kafka-connect-hdfs/blob/master/src/main/java/io/confluent/connect/hdfs/avro/AvroHiveUtil.java#L69
https://github.com/confluentinc/kafka-connect-hdfs/blob/master/src/main/java/io/confluent/connect/hdfs/avro/AvroHiveUtil.java#L95

Do you have any solution or idea for this in `kafka-connect-hdfs`?
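A connector-independent way around the literal-length limit is to point Hive at a schema file rather than embedding the schema text, since the metastore then only stores a short path. This is a hedged sketch, not what `kafka-connect-hdfs` does today (the linked lines show it sets `avro.schema.literal`); the table name, paths, and schema file here are made up for illustration:

```sql
-- Hypothetical table: reference the Avro schema by URL instead of
-- embedding it, so the metastore stores only a short path, not the
-- full schema text. SerDe and I/O format classes are the standard
-- Hive Avro ones.
CREATE EXTERNAL TABLE example_topic
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS
  INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/topics/example_topic'
TBLPROPERTIES ('avro.schema.url'='hdfs:///schemas/example_topic.avsc');
```

Adopting `avro.schema.url` inside the connector would require a code change (and somewhere to publish the `.avsc` files), which is presumably why the metastore-column workaround above is the pragmatic answer today.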