
Cross validation feature #839

Closed
BraveDistribution opened this issue Feb 14, 2020 · 111 comments
Labels: discussion (In a discussion stage) · feature (Is an improvement or enhancement) · good first issue (Good for newcomers) · help wanted (Open to be worked on)

@BraveDistribution

🚀 Feature

Cross-validation is a crucial model validation technique for assessing how a model generalizes to new data.

Motivation

Research papers usually require cross-validation. From my point of view, this kind of feature would simplify the work of researchers.

Pitch

I want to pass a parameter to the Trainer object to specify that I want to train the model on K folds.

If nobody else wants to make a PR, I can start working on it.

@BraveDistribution BraveDistribution added the feature (Is an improvement or enhancement) and help wanted (Open to be worked on) labels on Feb 14, 2020
@Borda
Member

Borda commented Feb 22, 2020

I think that the cleaner way would be some abstraction above the dataloader, because cross-validation is just systematic train/test on a particular dataset... Anyway, a PR is welcome!
@BraveDistribution could you please describe in a bit more detail how you plan to implement it, or make a draft PR and we can talk about it there 🤖

@BraveDistribution
Author

@Borda, I don't have a plan for how to implement it yet because I haven't worked on it until now.

If I have any questions I will post them here; if not, I will make a PR directly.

@williamFalcon
Contributor

what if we just integrate with sklearn cross validation? this can be the start of supporting sklearn interop

@BraveDistribution
Author

How would you propose that @williamFalcon?

In my "own" library I split the datasets into K folders by using my own script (you can use k-fold or stratified k-fold or any of the scikit methods).

dataset/k_0/train
dataset/k_0/test

dataset/k_1/train
dataset/k_1/test

Then I trained and evaluated K neural networks, and finally I grabbed all the results and saved the mean of accuracy, F1, and the other metrics.

That of course wastes HDD space equal to (K-1) * the size of the dataset. We shouldn't implement that approach.


I think we should add a new parameter to the Trainer, something like the cv parameter of GridSearchCV in scikit-learn:

cv : int, cross-validation generator or an iterable, optional
Determines the cross-validation splitting strategy. Possible inputs for cv are:
- None, to use the default 5-fold cross-validation,
- an integer, to specify the number of folds in a (Stratified)KFold,
- a CV splitter,
- an iterable yielding (train, test) splits as arrays of indices.
For integer/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used.
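
For reference, scikit-learn already normalizes this kind of cv argument internally via check_cv, and a Trainer could do the same. A minimal sketch (check_cv is a real scikit-learn helper; X and y are placeholder arrays):

import numpy as np
from sklearn.model_selection import check_cv

X, y = np.arange(20).reshape(10, 2), np.array([0, 1] * 5)

# int -> StratifiedKFold for classifiers with binary/multiclass y, KFold otherwise;
# None -> the default 5-fold scheme
cv = check_cv(cv=5, y=y, classifier=True)
for train_idx, test_idx in cv.split(X, y):
    print(train_idx, test_idx)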

@Ir1d
Contributor

Ir1d commented Feb 29, 2020

what if we just integrate with sklearn cross validation? this can be the start of supporting sklearn interop

@williamFalcon skorch has a nice implementation. https://github.com/skorch-dev/skorch/blob/f94466e272f6f325898359fecb9a7c004354af7f/skorch/dataset.py#L212

@Borda
Member

Borda commented Apr 9, 2020

check use case in #1393

@Borda Borda added discussion In a discussion stage and removed information needed labels Apr 9, 2020
@Anjum48

Anjum48 commented Apr 9, 2020

By passing dataloaders directly to the Trainer, my CV loop looks like this:

import os

import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning.loggers import TensorBoardLogger

for fold, (train_idx, valid_idx) in enumerate(kfold.split(train_df)):
    train_loader = create_dataloader(train_df.iloc[train_idx])
    valid_loader = create_dataloader(train_df.iloc[valid_idx])

    # Folder hack: give each fold its own logger version and checkpoint dir
    tb_logger = TensorBoardLogger(save_dir=OUTPUT_PATH, name=f'{args.model_name}', version=f'fold_{fold + 1}')
    os.makedirs(OUTPUT_PATH / f'{args.model_name}', exist_ok=True)
    checkpoint_callback = ModelCheckpoint(filepath=tb_logger.log_dir + "/{epoch:02d}-{val_metric:.4f}",
                                          monitor='val_metric', mode='max')

    model = YourPLModule(args)
    trainer = pl.Trainer(logger=tb_logger, early_stop_callback=early_stop_callback, checkpoint_callback=checkpoint_callback)
    trainer.fit(model, train_dataloader=train_loader, val_dataloaders=valid_loader)

Note that the folder hack is from #1207

@Borda
Member

Borda commented Apr 10, 2020

it could be a nice feature, like the LR finder we have now...
@PyTorchLightning/core-contributors any other suggestions?
@Anjum48, I would say a draft PR would be nice...

@justusschock
Member

I wouldn't integrate this into fit or the Trainer init, but into a separate function that internally calls fit

@Borda
Member

Borda commented Apr 11, 2020

I wouldn't integrate this into fit or the Trainer init, but into a separate function that internally calls fit

I agree, that's why I proposed doing it similarly to the LR finder... lol

@BraveDistribution
Author

We should also somehow include the CV results in TensorBoard, to give scientists an easy way to check the quality of their models. I don't know much about TensorBoard, so I don't know whether that's possible.

Or we should at least save the final results into a JSON / pickle file.
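
For the second point, a minimal sketch of what persisting the aggregated results could look like once all folds have run (fold_accs is a hypothetical list of per-fold accuracies):

import json

import numpy as np

# aggregate the per-fold metrics and save them for later inspection
results = {"val_acc_mean": float(np.mean(fold_accs)), "val_acc_std": float(np.std(fold_accs))}
with open("cv_results.json", "w") as f:
    json.dump(results, f, indent=2)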

@stale stale bot added the won't fix (This will not be worked on) label on Jun 11, 2020
@axkoenig

Is there any news on this?

@stale stale bot removed the won't fix (This will not be worked on) label on Jun 14, 2020
@Borda
Member

Borda commented Jun 14, 2020

@axkoenig how would you do it? Write a wrapper over the Trainer that performs the fold splitting followed by train-test?

@justusschock
Member

I think we could have something like that in bolts, but it is very hard to generalize, since it always depends on how you want to split your data.

@SkafteNicki
Member

I think we could provide two options:

  1. Either users provide a single train_dataloader that we split into K new dataloaders with non-overlapping subsets of data, and we perform the cross-validation from them (a rough sketch follows below)
  2. Or users provide K train_dataloaders and K test_dataloaders, and we run cross-validation on them (basically calling trainer.fit iteratively)
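
A rough sketch of option 1, assuming the user hands us a single map-style dataset (make_fold_loaders and every other name here is illustrative, not an existing API):

from sklearn.model_selection import KFold
from torch.utils.data import DataLoader, Subset

def make_fold_loaders(dataset, k=5, batch_size=32):
    # yield K (train_loader, val_loader) pairs over non-overlapping subsets
    kfold = KFold(n_splits=k, shuffle=True, random_state=42)
    for train_idx, val_idx in kfold.split(range(len(dataset))):
        train_loader = DataLoader(Subset(dataset, train_idx.tolist()), batch_size=batch_size, shuffle=True)
        val_loader = DataLoader(Subset(dataset, val_idx.tolist()), batch_size=batch_size)
        yield train_loader, val_loader

Option 2 then reduces to iterating over user-supplied pairs and calling trainer.fit on each.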

@justusschock
Member

@SkafteNicki I think this would be a good idea to start.

However, we might also want to offer stratified splitting and not just random splitting, which may become more difficult, since we would have to assume things (like structure, dtype, etc.) about the batches.

In general, we should also keep in mind that we may want to split not only into train and test sets/dataloaders but also into validation ones.

@SkafteNicki
Member

@justusschock completely agree, I think that v1 of this feature should be very simple, just random splitting. My proposed option 2 would allow users to provide their own stratified dataloaders.

In v2 we can begin to figure out how to do more advanced stuff / better integration. The main problem (in my view) is that we are working with dataloaders and not datasets, so to get dataset statistics (like class balance for stratified splitting) we need to explicitly run over the dataset and enforce a lot of structure in the batches (as you mention).
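
A minimal sketch of that explicit pass, assuming each dataset item is an (x, y) pair with an integer class label (exactly the kind of structural assumption being discussed):

from sklearn.model_selection import StratifiedKFold

# materializing the labels requires one full pass over the dataset
targets = [int(y) for _, y in dataset]

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, val_idx in skf.split(range(len(targets)), targets):
    ...  # build per-fold dataloaders from the index arrays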

@stale stale bot added the won't fix (This will not be worked on) label on Aug 14, 2020
@stale stale bot closed this as completed Aug 23, 2020
@jameschapman19

Worth noting that metrics from on_train_epoch_end and on_validation_epoch_end log against global_step rather than epoch (different to the behaviour of vanilla pytorch-lightning).

This is after the proposed changes by @ltx-dan. If I find anything that fixes this I will let you know.

(I assume @ltx-dan, with your setup, you have the epoch metric logging as a zigzag/sawtooth against global step for each fold?)

@ltx-dan

ltx-dan commented Nov 23, 2021

Yepp, that's what I have @jameschapman19. But that works for me because I wanted to ensure that the max_epochs param is abided by per fold, and I don't really care about the total epochs accumulated across the folds..

@jameschapman19

Thanks @ltx-dan - nice job with the changes by the way!

@tchaton
Contributor

tchaton commented Nov 24, 2021

Hey @ltx-dan,

Did you make a PR with the changes?

@ltx-dan

ltx-dan commented Nov 25, 2021

No, I'm afraid I haven't gotten around to it yet.. But all the mods I had to do are outlined above, so feel free to add them to your PR.

@vedal

vedal commented Jan 24, 2022

Really hope this is still planned for milestone 1.6 as suggested by @awaelchli :)

@rohitgr7
Contributor

fixed the example recently. are you guys still facing issues with this?
https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/loop_examples/kfold.py

@ltx-dan

ltx-dan commented Jan 25, 2022

Hi @rohitgr7, I get

AttributeError: 'FitLoop' object has no attribute 'replace'

What is this line supposed to do? If I delete it, it all works fine btw..

Also, if you don't want to do distributed training (because you have a single GPU), this fails..

@rohitgr7
Contributor

hey @ltx-dan !
the example is updated to work with master; the new release is probably next month.
to run it with the latest release, you need to make small adjustments. check more here: #11565

@ltx-dan

ltx-dan commented Jan 25, 2022

Hi @rohitgr7, I see it's added here. Would strategy=null work too on the master branch?

@rohitgr7
Contributor

for single device training, the strategy will be SingleDeviceStrategy. It can never be Null.

@function2-llx
Contributor

Hello @rohitgr7, may I ask whether there's a plan to adopt the example implementation officially into the library, or will it just stay as an example (with copy & paste for usage)? Thanks!

@rohitgr7
Contributor

hey @function2-llx

for now, AFAIK, it will stay as an example since the Loop API is still experimental.

@YannDubs

YannDubs commented Feb 25, 2022

what if we just integrate with sklearn cross validation? this can be the start of supporting sklearn interop

I agree with @williamFalcon. For the case of datasets in memory, we should really just have a function (probably in bolts) that wraps a SklearnDataModule, Trainer, and LightningModule into an estimator that can be used with sklearn. Then you would get all the sklearn goodies for free: Pipeline (e.g. for preprocessing, or to add a sklearn predictor after an SSL module), cross-validation, hyperparameter tuning ...
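
A minimal sketch of that idea, assuming an in-memory numpy dataset and a classification LightningModule whose forward returns logits; every name here is illustrative, not an existing bolts API:

import pytorch_lightning as pl
import torch
from sklearn.base import BaseEstimator
from torch.utils.data import DataLoader, TensorDataset

class LightningEstimator(BaseEstimator):
    """Wraps a LightningModule class so sklearn utilities can drive it."""

    def __init__(self, module_cls, max_epochs=10, batch_size=32):
        self.module_cls = module_cls
        self.max_epochs = max_epochs
        self.batch_size = batch_size

    def fit(self, X, y):
        # build an in-memory dataset and train a fresh module on it
        ds = TensorDataset(torch.as_tensor(X, dtype=torch.float32), torch.as_tensor(y))
        self.model_ = self.module_cls()
        trainer = pl.Trainer(max_epochs=self.max_epochs)
        trainer.fit(self.model_, DataLoader(ds, batch_size=self.batch_size, shuffle=True))
        return self

    def predict(self, X):
        # forward pass -> class predictions, as sklearn expects
        with torch.no_grad():
            logits = self.model_(torch.as_tensor(X, dtype=torch.float32))
        return logits.argmax(dim=-1).numpy()

With such a wrapper, sklearn machinery comes for free, e.g. cross_val_score(LightningEstimator(MyModule), X, y, cv=5, scoring="accuracy").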

@chanshing

hey @function2-llx

for now, AFAIK, it will stay as an example since the Loop API is still experimental.

I'm getting the following error when I try to use the kfold.py example:

    self.trainer.strategy.setup_optimizers(self.trainer)
AttributeError: 'Trainer' object has no attribute 'strategy'

I'm using PL 1.5.10 and PyTorch 1.10.2.

@ltx-dan

ltx-dan commented Mar 1, 2022

hi @rohitgr7

FYI, we just discovered that this updated version of the example (running on master) does not work at all if early_stopping callbacks are used.. after the first fold, the subsequent ones don't run at all.. do you have any idea why that might be?

What I found was that in anticipation of the rollout of the strategy feature, callbacks were updated (early_stopping included), see here

So for some reason, once early_stopping updates the should_stop property of the trainer, it is never reset for the subsequent folds in kfold (after switching to the strategy backend for PL 1.6.0). So, simply adding

self.trainer.should_stop = False

to the _reset_fitting method in your example makes it work again.
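
In context, the patched method would look roughly like this (the surrounding lines are paraphrased from the kfold.py example and may differ slightly between versions):

from pytorch_lightning.trainer.states import TrainerFn

def _reset_fitting(self) -> None:
    self.trainer.reset_train_dataloader()
    self.trainer.reset_val_dataloader()
    self.trainer.state.fn = TrainerFn.FITTING
    self.trainer.training = True
    self.trainer.should_stop = False  # the fix: clear early-stopping state between folds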

@function2-llx
Contributor

@chanshing The example code on the master branch is inconsistent with tag 1.5.10; if you're using 1.5.10 you should refer to the code at that tag.

@pchalasani

after the first fold, the subsequent ones don't run at all.. do you have any idea why that might be?

I noticed this too, with early stopping. The subsequent fold training loops retain state from the first fold, and so the behavior is as if the early stopping condition is already satisfied, and hence they don't run.

Even for the MNIST example given, due to the max_epochs=10 param, after the first fold trains, the subsequent ones start at epoch=9 and don't really do anything. You can see this by printing the value of self.fit_loop.current_epoch just prior to self.fit_loop.run() in the advance() function. It prints 9 for the subsequent folds. Note that no early stopping is used here.

(All these refer to the version pointed out by @function2-llx above ).

There needs to be a way to truly reset the fit_loop state as each fold's training starts (so that callbacks like EarlyStopping and ModelCheckpoint, and stopping conditions like max_epochs, function properly). I tried doing self.fit_loop.reset() in the _reset_fitting() function, but that did not help.
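
Until the loop state can be reset reliably, one way to sidestep the stale-state problem entirely is the plain CV loop posted earlier in this thread: build a fresh Trainer, and therefore a fresh FitLoop, per fold. A minimal sketch with illustrative names:

import pytorch_lightning as pl

for fold, (train_loader, val_loader) in enumerate(fold_loaders):
    model = YourPLModule(args)           # fresh weights for every fold
    trainer = pl.Trainer(max_epochs=10)  # fresh Trainer => fresh loop and callback state
    trainer.fit(model, train_loader, val_loader)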

@YerePhy

YerePhy commented Jun 27, 2022

Hi, I would like to know the current state of the KFold feature in Lightning, since I am really interested, but the example provided in the official documentation is no longer available.

@tchaton
Contributor

tchaton commented Jun 27, 2022

Hey @YerePhy

To my knowledge, the best version of KFold is this repo: https://github.com/SkafteNicki/pl_cross.

Best,
T.C

@YerePhy

YerePhy commented Jun 27, 2022

Thanks @tchaton, looks great, I will give a try for sure.

@YerePhy

YerePhy commented Jun 29, 2022

Hey, pl_cross is not maintained and there are some issues with PL >= 1.6... the KFold example is no longer available in the official documentation of the Loops API... will this feature be implemented anytime soon? It would be very, very helpful for the medical imaging community!

@peterchristofferholm

Some of the issues that @YerePhy is referring to are described here

@peterchristofferholm

@rohitgr7 could this issue be reopened?

@YaduKini

Hello,
Is there a document about how to implement K-Fold cross validation in Lightning? Any pointers would be much appreciated.
Thanks in advance.

@jannik-el

Any updates on this feature? I can't currently find any information on whether there is an officially supported way to do this in Lightning 2.0; it would be nice to know before I get stuck into a hacky way of doing it for my project.

@matormentor

I would also like to know if there are any updates on this feature?
