adbc.snowflake.statement.* options are not effective ["python"] #2146
Comments
@joellubi can you take a look at this when you get a chance? Thanks!
@davlee1972 Can you please share the code you used to pass in those parameters? Based on the traceback, the error is occurring during database initialization, so my guess is that those parameters are being provided as database kwargs. The ingestion parameters are defined on the statement rather than the database. They can be set as follows:

```python
with adbc_driver_snowflake.dbapi.connect(uri) as conn, conn.cursor() as cur:
    cur.adbc_statement.set_options(**kwargs)
    cur.adbc_ingest(...)
```
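For the option names reported in this issue specifically, a filled-in version of that snippet might look like the sketch below. This is a hedged example, not driver documentation: the URI, table name, and reader are placeholders, and the option values are illustrative.

```python
# Hedged sketch: applying the Snowflake ingest options from this issue at the
# statement level (not as db_kwargs). Option names are taken verbatim from the
# report; values here are illustrative. Note that values are strings.
stmt_options = {
    "adbc.snowflake.statement.ingest_copy_concurrency": "1",
    "adbc.snowflake.statement.ingest_target_file_size": "100000000",
}

def ingest(uri: str, table: str, reader) -> None:
    # Deferred import so this sketch loads even without the driver installed.
    import adbc_driver_snowflake.dbapi

    with adbc_driver_snowflake.dbapi.connect(uri) as conn, conn.cursor() as cur:
        # Set on the statement, not passed as database kwargs.
        cur.adbc_statement.set_options(**stmt_options)
        cur.adbc_ingest(table, reader, mode="append")
```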
FWIW, I do think the way parameter handling works right now is kind of awkward. I would be in favor of cleaning it up in some way if that can help make this more intuitive.
I think we should update the documentation to make this clearer, and perhaps make the interface more consistent. @lidavidm Do you know if adding optional parameters to `cursor()` would be reasonable? For consistency, I'm thinking of allowing:

```python
with adbc_driver_snowflake.dbapi.connect(uri, db_kwargs=db_kwargs, conn_kwargs=conn_kwargs) as conn:
    with conn.cursor(stmt_kwargs=stmt_kwargs) as cur:
        cur.adbc_ingest(...)
```

Thoughts on this? CC: @zeroshade
I think it would be OK only as a keyword-only argument like |
What happened?
I've tried changing the adbc.snowflake.statement options, but none of the settings appear to have any effect with adbc_ingest().
I've tried the following:

```
"adbc.snowflake.statement.ingest_copy_concurrency": "0",
"adbc.snowflake.statement.ingest_copy_concurrency": "1",
"adbc.snowflake.statement.ingest_target_file_size": "100mb",
"adbc.snowflake.statement.ingest_target_file_size": "100",
"adbc.snowflake.statement.ingest_target_file_size": "100000000",
```
But the result always ends up being ~120 PUTs with concurrent, overlapping COPY INTO statements running in parallel.
Changing any of the above values to ints, like:

```
"adbc.snowflake.statement.ingest_copy_concurrency": 0,
```

results in an error, which is expected. So the parameters are being checked; they just don't have any effect.
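That type error is consistent with ADBC option values being string-typed at this layer. As a minimal sketch (the `stringify_options` helper is hypothetical, not part of the driver), values can be coerced defensively before they are handed to `set_options`:

```python
def stringify_options(opts: dict) -> dict:
    # ADBC statement options are passed as strings; coerce ints and other
    # types to str so e.g. 0 becomes "0" before calling
    # cur.adbc_statement.set_options(**stringify_options(opts)).
    return {key: str(value) for key, value in opts.items()}

print(stringify_options({"adbc.snowflake.statement.ingest_copy_concurrency": 0}))
# → {'adbc.snowflake.statement.ingest_copy_concurrency': '0'}
```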
I'm also passing a RecordBatchReader into adbc_ingest().
Stack Trace
No response
How can we reproduce the bug?
No response
Environment/Setup
No response