Hello! I'm using the driver in PySpark on a cluster, with the jars clickhouse-jdbc-0.4.5.jar and clickhouse-spark-runtime-3.3_2.12-0.7.1.jar.
I have a table with the ReplicatedReplacingMergeTree engine:
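(The original table definition was not captured in the issue text; the following is an illustrative sketch only, with a hypothetical table name, columns, and replication paths, not the reporter's actual DDL:)

```sql
-- Hypothetical example of a ReplicatedReplacingMergeTree table;
-- all names and paths here are placeholders, not from the issue.
CREATE TABLE db.visits
(
    user_id UInt64,
    visit_time DateTime,
    version UInt32
)
ENGINE = ReplicatedReplacingMergeTree(
    '/clickhouse/tables/{shard}/visits',
    '{replica}',
    version
)
ORDER BY (user_id, visit_time);
```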
I tried to read data from the table, and on the first iteration everything is fine. On the second iteration, `visits` holds the same DataFrame as the result of the first select, as if ClickHouse were caching the query data. If I make a small change to the query (a new WHERE clause, a bigger LIMIT, etc.), it works correctly on the first try, but on the second run I again get the same DataFrame.
The Spark code looks like:
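(The original snippet was lost from the issue text; this is a hedged reconstruction of what such a repeated read might look like, assuming a hypothetical table `db.visits`, a hypothetical host `clickhouse-host`, and the JDBC driver from the jars listed above:)

```python
# Hedged reconstruction -- the actual snippet from the issue was not captured.
# Assumes clickhouse-jdbc-0.4.5.jar is on the Spark classpath and that a
# table db.visits exists; every name here is illustrative, not from the issue.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("clickhouse-read").getOrCreate()

for i in range(2):
    # Re-issue the same query on each iteration; per the report, the second
    # iteration returns the same DataFrame contents as the first.
    visits = (
        spark.read.format("jdbc")
        .option("url", "jdbc:clickhouse://clickhouse-host:8123/db")
        .option("driver", "com.clickhouse.jdbc.ClickHouseDriver")
        .option("query", "SELECT * FROM visits LIMIT 1000")
        .load()
    )
    visits.show()
```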
How can I fix this? If I send the same query to ClickHouse with another tool, I get the correct result. I did not find any caching-related parameter in the documentation. Thanks in advance!