Add support for extra HTTP headers in IRCAuthorization#674
talatuyarer wants to merge 7 commits into duckdb:main
Conversation
Hi @Tmonster, could you review my PR?
Hi @talatuyarer, yes, I will take a look. I've just been super busy with some things at the moment.
Tmonster left a comment
Looks good to me.
Can you add some tests to make sure they are sent in the request as well?
In a test like test/sql/local/irc/test_table_information_requests.test you can see us check the request. To check the headers, I think something like
select request.headers from duckdb_logs_parsed('http');
should work
@Tmonster Thank you for the pointer. I added tests too, and also ran make format to fix the formatting issue, FYI.
Enhanced `IRCEndpointBuilder::AddPathComponent` to split path components by slashes, to prevent the slashes from being encoded.
…in IRCEndpointBuilder. Update tests to verify behavior with empty and multiple custom headers.
…nsistency in path handling
Thanks! Just set up a test against some other IRC catalogs. Will take another look when that finishes.
Looks like some of the cloud tests are failing. You can see the run here. Seems like the s3tables attach is failing: they return an already-encoded s3 prefix.

The response to the /config endpoint looks something like this:

And the url we hit now for namespaces is:

The url we hit for table listing is:

Notice the difference in:

It seems the URL builder in GetTables is building the following components in the GetTable request:

But in the GetSchemas the URL builder has the following components:

It seems like here you missed:

I think the fix here is to write some functionality to detect whether the prefix is already encoded. If it is encoded, decode it and add it as the prefix.

Also, S3Tables is super easy to set up for debugging. It really is just a matter of creating a bucket in the S3Tables console. I don't know what BigLake returns (encoded or decoded); it would be nice if you could share that here.

Also, can you add a test where you explicitly add an
I've added a unit test to ensure the
@Tmonster BigLake is very user-friendly; a public dataset simplifies Iceberg testing, and all you need is a Google Cloud Project. Refer to the instructions here: #665 (comment)
Hi @talatuyarer, I forked and opened a new PR here. The problem is we already decode url prefixes when they are returned from the catalog. I've also checked this against the public BigLake bucket you mentioned, along with our cloud tests, and everything seems to work fine. Feel free to leave comments on my PR if you have anything you'd like to add.
Adds support in DuckDB’s Iceberg extension to attach user-provided “extra headers” to all HTTP requests made to the REST catalog (including the initial /v1/config call and subsequent table/namespace operations).
This aligns DuckDB with how other Iceberg clients treat REST catalogs. Example usage:
I also fixed a bug where, if the catalog returns a prefix that has slashes in it, DuckDB would encode those slashes when it should not.