Support batch-level transformations in Encodings #251

Open
lorenzoh opened this issue on Jul 21, 2022 · 9 comments
Labels: api-proposal (Implementation or suggestion for new APIs and improvements to existing APIs)

@lorenzoh (Member) commented on Jul 21, 2022

Sometimes encodings need to be able to take into account batch information, as in a sequence learning task where samples in a batch should be padded to the length of the longest sequence.

Currently, all Encodings transform individual samples, which is great for simplicity and composability, but doesn't allow implementing these batch-level transformations.

Encodings are used in basically every training loop through taskdataloaders, which always yields batches of encoded data. We could have it use a new function, encodebatch(encoding, context, block, samples), that transforms multiple samples at a time. It would operate on vectors of samples, not on a collated batch, since not all kinds of data can be collated (e.g. differently sized images).

By default, it would simply delegate to the single-sample encode function:

function encodebatch(encoding, context, block, observations::AbstractVector)
    map(obs -> encode(encoding, context, block, obs), observations)
end

But it could be overridden by individual encodings:

function encodebatch(encoding::PadSequences, context, block, observations::AbstractVector)
    # dummy padding code: pad every observation to the length of the longest one
    n = maximum(length, observations)
    return map(obs -> pad(obs, n), observations)
end
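
For concreteness, here is a self-contained illustration of the padding behaviour sketched above; the pad helper and the zero-padding are made up for this example:

pad(obs::AbstractVector, n) = vcat(obs, zeros(eltype(obs), n - length(obs)))  # hypothetical helper

observations = [Float32[1, 2], Float32[3, 4, 5]]
n = maximum(length, observations)
map(obs -> pad(obs, n), observations)
# [[1.0, 2.0, 0.0], [3.0, 4.0, 5.0]]: every sequence now has the length of the longest one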

Tagging relevant parties @Chandu-4444 @darsnack @ToucheSir for discussion.

@lorenzoh added the api-proposal label on Jul 21, 2022
@darsnack (Member)
No issues here with the proposed API.

Typically, in FastAI, we have a "batch of images" or a "batch of tabular entries." Similarly, here we have a "batch of sequences." Ultimately, the model will want a "sequence of batches" though, so this transformation needs to happen somewhere. After this transformation, it becomes very hard to access each sample individually, so it must only happen at the end. Even if we do this as a final encoding step, there's the question of how FastAI understands the encoded block. With other data, you can view the individual encoded samples or encoded batch. What will the view look like here?

@lorenzoh (Member, Author)
Can you explain a bit more what you mean by "sequence of batches" so I can wrap my head around it?

@Chandu-4444 (Contributor)
Can you explain a bit more what you mean by "sequence of batches" so I can wrap my head around it?

Yeah, I didn't get that either. Batches don't have to be in a "sequence" to be fed into the model, but a batch should have sequences.

@ToucheSir (Member)
Flux RNNs expect an input format of (features x batch) x sequence length, but the data loader will generate (features x sequence length) x batch by default. Ideally that transposition happens as late as possible, but it does need to happen at some point.
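
A minimal sketch of the two layouts, with purely illustrative sizes and variable names (batch_of_seqs and seq_of_batches are made up for this example):

features, seqlen, batchsize = 4, 3, 2

# what the data loader produces: one entry per sample, each a sequence of feature vectors
batch_of_seqs = [[rand(Float32, features) for _ in 1:seqlen] for _ in 1:batchsize]

# what Flux's recurrent layers expect: one entry per time step, each a (features x batch) matrix;
# MLUtils.batchseq (mentioned below) produces this layout and also pads unequal-length sequences
seq_of_batches = [reduce(hcat, [seq[t] for seq in batch_of_seqs]) for t in 1:seqlen]
size(first(seq_of_batches))  # (4, 2), i.e. features x batch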

@darsnack (Member)
Batches don't have to be in a "sequence" to be fed into the model, but a batch should have sequences.

Quite the opposite for Flux, as Brian pointed out. Let me add to this in case there is uncertainty about how recurrence is handled in Flux.

If you have a recurrent model m (i.e. a cell wrapped in Flux.Recur) that accepts a vector of features, x, then m(x) will evaluate a single time step and update the internal state of m. Suppose a single sample is a sequence of features, xs; then we evaluate the full sequence as [m(x) for x in xs].

Batching serves many purposes in ML, but one of them is achieving higher utilization for hardware that supports parallelism. So, in the framework described above, we want m(xbatch) to evaluate m at a given time step for multiple samples concurrently. This means that xbatch should have dimensions (features x batch) to hit BLAS etc. Since xbatch is only a single time step, to represent a sequence, we need a vector where each element is a single time step like xbatch. This vector, xbatches, is evaluated as [m(xbatch) for xbatch in xbatches], making xbatches have dimensions (features x batch) x sequence_length.

The relevant detail here for the issue is that once you have the data in this format, accessing a single sample becomes cumbersome. You have to iterate over xbatches to access each time step, slice the batch of features to access the correct column, then merge the results together into a single sequence. That's why this operation can only happen at the end. If it is done too early, then all the encodings that require random access to samples will be cumbersome and slow. This also means that the transformation should happen to a batch, because applying MLUtils.batchseq to the entire dataset is necessarily "too early."

TL;DR:

  • "batch of sequences": the outer index is the sample, which is convenient for data processing
  • "sequence of batches": the outer index is the time step, which is required by Flux but inconvenient for the rest of the data pipeline

@lorenzoh (Member, Author) commented on Jul 27, 2022

Hm, I see the issue and how this doesn't solve it. Of course, putting the batchseq into the model is not desirable either.
Instead of introducing a lot of new APIs to make this possible, it may be doable to stick with the simple encode and instead introduce a Batch <: WrapperBlock that has the default implementation above.
The encoding that does the padding could then have a custom method for encode that takes in a Batch block and performs the batchseq operation, returning data for a SequenceBatch <: WrapperBlock block.
This way we wouldn't have to introduce any new APIs while unifying observation- and batch-level transformations and not breaking existing encode implementations. What do you think?
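
For concreteness, a rough sketch of what this could look like. Everything here is hypothetical: Batch, SequenceBatch and PadSequences are made-up names from this thread, and the sketch assumes FastAI.jl's WrapperBlock and Encoding abstract types and its encode(encoding, context, block, data) signature.

# Sketch only; names and signatures follow the discussion above and may not match FastAI.jl exactly.
using FastAI: Encoding, WrapperBlock
import FastAI: encode
using MLUtils: batchseq

struct PadSequences <: Encoding end      # hypothetical encoding from the example above

struct Batch{B} <: WrapperBlock          # a batch of samples of the wrapped block
    block::B
end

struct SequenceBatch{B} <: WrapperBlock  # data stored as a "sequence of batches"
    block::B
end

# Default: encoding a Batch applies the encoding to every sample independently,
# so existing single-sample encode implementations keep working.
function encode(encoding::Encoding, context, block::Batch, observations::AbstractVector)
    map(obs -> encode(encoding, context, block.block, obs), observations)
end

# The padding encoding overrides this to look at the whole batch at once and returns
# data in the "sequence of batches" layout (conceptually for a SequenceBatch block).
function encode(::PadSequences, context, block::Batch, observations::AbstractVector)
    pad = zero(first(first(observations)))   # zero feature vector as the pad element
    return batchseq(observations, pad)
end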

@darsnack (Member)
Yeah I like this approach better because of the unification. It addresses the concerns about tying batchseq into the data block visualization. Now, it should be clear to the user that the encoded data is stored as a "sequence of batches."

@Chandu-4444 (Contributor) commented on Jul 27, 2022

Yeah, I think the approach Lorenz suggested should be "the way" to achieve this batch-wise encoding.

But where do we encode this? Will this be part of the initial transformations, or happen just before passing the data to the model?

@lorenzoh (Member, Author)
Adding this kind of first-class support for batches will entail a lot of changes to FastAI.jl internals, e.g. applying encode to batches and not individual samples, but it should ultimately reduce the amount of code.
We could then make it an encoding that transforms a Batch{NumberVector} into something like a SequenceBatch{NumberVector}.

Until we find time to implement those changes, though, I would continue with the current method of doing the sequencing.
