[Concurrent Low-Code] Allow low-code streams using AsyncRetriever to be run within the Concurrent CDK #168
The first part of this was merged into the CDK: #170. The second part I'll put up for review in the next day or two as I clean up the code and finish fixing the crazy amount of tests this breaks.

@brianjlai any update here?

@klsoper yep! I have the last PR to get our async streams to concurrent here: #185. It's been tested on some of our existing connectors, but we didn't want to release this during the holidays and want to wait for @maxi297 to get back to do a final sign-off since the PR affects quite a few things. We're still on track.

@brianjlai looks like this was merged, can we close this up?
Yep, it's merged in the CDK. One note though: I've had a few issues w/ our CI/CD pipeline in the past two days, but it should be released to most custom connectors. If this is for one of our connectors, then we'll need to check that the version is up to date. Some connectors aren't auto-upgraded.
Context
After allowing certain types of low-code streams to be processed within the concurrent framework, we ran into an issue where streams that used the `AsyncRetriever` component would result in errors during processing. As a temporary fix, we changed `concurrent_declarative_source.py` to only run streams using the `SimpleRetriever` concurrently.

Slack thread with more context:
https://airbytehq-team.slack.com/archives/C063B9A434H/p1733264441227669?thread_ts=1733257895.550789&cid=C063B9A434H
Problem / Solution
The root of the issue is twofold:
Issue 1:

Within `concurrent_declarative_source.py`, when we instantiate our `StreamSlicerPartitionGenerator`, we pass in the `declarative_stream.retriever.stream_slicer`. However, this will not work for an `AsyncRetriever`, because in that scenario the `stream_slicer` corresponds to an underlying partition router of the `AsyncRetriever`, which is used to supply partitions when creating async jobs. And in our current implementation, the `AsyncRetriever` is responsible for generating slices within `stream_slices()` instead of delegating to the stream_slicer, unlike our existing `SimpleRetriever` (compare `stream_slices()` in `simple_retriever.py` with `async_retriever.py`).
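The code snippets originally embedded here didn't survive, so here is a hedged paraphrase of the contrast (the orchestrator-side names such as `start_job` are illustrative, not the CDK's exact API): `SimpleRetriever` delegates slicing entirely to its `stream_slicer`, while `AsyncRetriever` generates slices itself, using its slicer only as a partition router for creating async jobs.

```python
from typing import Any, Iterable, Mapping, Optional


class SimpleRetrieverSketch:
    """Sketch of simple_retriever.py: slicing is pure delegation."""

    def __init__(self, stream_slicer: Any) -> None:
        self.stream_slicer = stream_slicer

    def stream_slices(self) -> Iterable[Optional[Mapping[str, Any]]]:
        # The configured slicer (e.g. a partition router) owns slicing,
        # which is why handing it to StreamSlicerPartitionGenerator works.
        return self.stream_slicer.stream_slices()


class AsyncRetrieverSketch:
    """Sketch of async_retriever.py: slicing drives async job creation."""

    def __init__(self, partition_router: Any, job_orchestrator: Any) -> None:
        # Here stream_slicer supplies partitions for *jobs*, not the
        # slices the concurrent framework expects to receive.
        self.stream_slicer = partition_router
        self.job_orchestrator = job_orchestrator

    def stream_slices(self) -> Iterable[Mapping[str, Any]]:
        # Stateful: the yielded slices carry the async jobs that a later
        # read_records() call depends on (this is what bites in Issue 2).
        for partition in self.stream_slicer.stream_slices():
            job = self.job_orchestrator.start_job(partition)  # hypothetical API
            yield {"partition": partition, "job": job}
```

So passing `declarative_stream.retriever.stream_slicer` to the partition generator is correct for the first shape, but for the second it hands the concurrent framework the partition router rather than the real slices.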
What we should do is:
- Create a new `StreamSlicer` low-code component called `AsyncJobStreamSlicer` that adheres to the `StreamSlicer` interface
- Its `stream_slices()` should be the current implementation shown above. It should be instantiated with a parent stream slicer that is delegated to when `stream_slices()` is called
- In `model_to_component_factory`, we instantiate the new async stream_slicer

Issue 2:
We are instantiating a new `AsyncRetriever` every time we create a new partition, because the factory method we supply to the `StreamSlicerPartitionGenerator` instantiates new instances of the declarative stream + retriever. That in turn leads to the partition invoking `AsyncRetriever.read_records()` on a new instance of the async retriever which has not been instantiated properly. This is because `AsyncRetriever.stream_slices()` is not stateless and is responsible for setting the retriever up properly so that it can then be called during read records. Also, because we instantiate individual retrievers, they aren't using a shared `AsyncJobOrchestrator` or `AsyncJobRepository`, which is needed to properly manage the internal state of the retriever.

As evidenced by this error:
What we actually want to do is reuse the same `AsyncRetriever` for each partition. This would allow us to use a properly instantiated `AsyncRetriever` which has already called `stream_slices()`. From within `concurrent_declarative_source.py`, we can instead just pass the original `AsyncRetriever` instance, which has the proper state as well as the shared orchestrator and job repository.

However, there is one major problem with this approach: there are ways in which the `AsyncRetriever` is not thread safe. The biggest one is that the `DefaultPaginator` relies on internal state: its `_token` field is overwritten each time we read a page. If multiple partitions share the same `AsyncRetriever`, we can lose records between partitions. If we make this thread safe, we should be able to share the same retriever.

Note:
There are potentially other places where we may not be thread safe, and we could do additional analysis to find them. Rather than drag the work out and try to fix everything, though, we are going to make a calculated bet that the blast radius of async retriever thread-safety issues is relatively small. We may have to revisit making all of our low-code components thread safe in the future.
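To illustrate the `DefaultPaginator` hazard described above, here is one possible direction as a sketch of the idea only, not the CDK's actual fix: instead of a single shared `_token` field, scope the pagination token per partition and guard the map with a lock, so concurrent partitions cannot overwrite each other's state.

```python
import threading
from typing import Any, Dict, Hashable, Optional


class PartitionScopedTokenSketch:
    """Hypothetical pagination state that is safe to share across partitions."""

    def __init__(self) -> None:
        # One token per partition key, instead of a single shared _token
        # that every partition's page reads would overwrite.
        self._tokens: Dict[Hashable, Optional[Any]] = {}
        self._lock = threading.Lock()

    def update_token(self, partition_key: Hashable, token: Optional[Any]) -> None:
        # Without per-partition scoping, two partitions paging concurrently
        # interleave writes to the shared token and skip (lose) pages.
        with self._lock:
            self._tokens[partition_key] = token

    def get_token(self, partition_key: Hashable) -> Optional[Any]:
        with self._lock:
            return self._tokens.get(partition_key)
```

The lock protects the map itself; the real win is that each partition's pagination cursor is isolated, which is what makes sharing one retriever instance across partitions viable.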
Acceptance Criteria