1 file changed, +11 −0 lines

src/main/java/io/confluent/connect/hdfs

@@ -132,6 +132,17 @@ public DataWriter(
      hadoopConfiguration.addResource(new Path(config.hadoopConfDir() + "/hdfs-site.xml"));
    }

+    // By default, all FileSystem clients created through the Hadoop Java client are automatically
+    // closed at JVM shutdown. This can interfere with the connector's normal shutdown logic,
+    // where the created clients are used to clean up temporary files (if a client has already
+    // been closed, that cleanup throws a "Filesystem closed" error). To prevent this, we set the
+    // Hadoop configuration fs.automatic.close to false; all created clients are closed as part
+    // of the connector lifecycle instead. This is necessary in any case, because during connector
+    // deletion the connector lifecycle must close the clients itself (JVM shutdown hooks never
+    // run at that point). Hence we should always operate with fs.automatic.close set to false.
+    // If in the future we find that we are leaking client connections, we need to fix the
+    // lifecycle to close them.
+    hadoopConfiguration.setBoolean("fs.automatic.close", false);
+
      if (config.kerberosAuthentication()) {
        configureKerberosAuthentication(hadoopConfiguration);
      }
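
With fs.automatic.close set to false, closing the FileSystem instances falls entirely on the connector lifecycle. The following is a minimal sketch of what that explicit cleanup could look like; the class, field, and close() method here are illustrative assumptions for this note, not the connector's actual code.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    // Hypothetical sketch: with fs.automatic.close=false, the JVM shutdown hook no longer
    // closes FileSystem clients, so the lifecycle's stop/close path must do it explicitly.
    public class HdfsClientLifecycleSketch {
      private final FileSystem fs;

      public HdfsClientLifecycleSketch(Configuration hadoopConfiguration) throws IOException {
        // Disable automatic close so the client stays usable for temp-file cleanup at shutdown.
        hadoopConfiguration.setBoolean("fs.automatic.close", false);
        this.fs = FileSystem.newInstance(hadoopConfiguration);
      }

      // Called from the connector's stop path (assumed), after temp-file cleanup has finished.
      public void close() {
        try {
          fs.close();
        } catch (IOException e) {
          // Closing is best-effort during shutdown; real code would log and continue.
        }
      }
    }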