Support DataFrameWriter & DataFrameReader? #229
No, because this connector does not implement
Is there a plan to support it? My project would like to integrate through the DataFrame API: https://github.com/melin/datatunnel
It's a low priority.
Sorry, this question is off topic. What is the use of this connector? I want to connect ClickHouse with Spark 3.4 in Python. Do I need to use this connector? If yes, what is the driver class?
@durgeksh there are quick start demos for
Thanks for the reply, but I see there is no documentation for using this connector in PySpark. When I checked how to use "clickhouse-jdbc-0.4.6-shaded.jar", they use the driver attribute with the driver class as given below. How can I use this connector in PySpark?
If you run and compare the outputs of
I'm not a Pythoneer, so the documentation I wrote does not include a PySpark demo. This is a connector (a kind of plugin) for Spark, and a plugin is not responsible for teaching how to use the main framework. Please read the Spark docs, which provide snippets in different languages for each case.
You are right. But I am using PySpark in a Python program, not in the shell. Thank you. How shall I use this connector in a Python program?
@durgeksh just add a ClickHouse catalog, and use the example spark-defaults.conf in our cluster
read & write
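For illustration, a minimal sketch of such a catalog entry in spark-defaults.conf, based on the connector's quick-start docs. The catalog implementation class and the host/port values are assumptions here and may differ by connector version and deployment; check the version you installed.

```
# Register a Spark SQL catalog named "clickhouse" backed by this connector.
# Class name and connection values below are examples, not guaranteed defaults.
spark.sql.catalog.clickhouse              xenon.clickhouse.ClickHouseCatalog
spark.sql.catalog.clickhouse.host         127.0.0.1
spark.sql.catalog.clickhouse.protocol     http
spark.sql.catalog.clickhouse.http_port    8123
spark.sql.catalog.clickhouse.user         default
spark.sql.catalog.clickhouse.password
spark.sql.catalog.clickhouse.database     default
```

With this in place, tables become addressable as `clickhouse.<db>.<table>` from any Spark API, including PySpark, with no `driver` attribute needed.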
Thank you. |
The provided examples all register a catalog and read/write ClickHouse via SQL. Is reading and writing data via the DataFrameWriter & DataFrameReader API supported? @pan3793
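Worth noting: even without a `DataFrameReader.format(...)` path, a registered catalog is reachable from the DataFrame API through Spark's generic `spark.table` and the DataFrameWriterV2 `writeTo` entry points. A minimal sketch, assuming a catalog named `clickhouse` is configured as in the earlier comment and a hypothetical table `db.tbl` exists on the ClickHouse side:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read: spark.table resolves "clickhouse.db.tbl" through the registered
# catalog; no format()/driver configuration is involved.
df = spark.table("clickhouse.db.tbl")

# Write: DataFrameWriterV2 (df.writeTo) also resolves through the catalog.
df.writeTo("clickhouse.db.tbl").append()
```

This is the catalog-based route the maintainer's SQL examples rely on, expressed through DataFrame calls; it is not the classic `DataFrameReader`/`DataFrameWriter` (`.read.format(...)`/`.write.format(...)`) API the question asks about.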