
schedulers: set /dev/shm size for docker based schedulers to bypass 64M default #429

Closed
wants to merge 1 commit

Conversation


@d4l3k (Contributor) commented Mar 21, 2022

PyTorch DataLoaders use /dev/shm to transfer data between worker processes. The default in Docker containers is only 64MB, so we need to increase it for more complex models.

  • DockerScheduler: sets shm_size to the memory request
  • AWSBatchScheduler: sets sharedMemorySize to the memory request
  • KubernetesScheduler: mounts an unlimited tmpfs onto /dev/shm

Fixes #428
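
For illustration, here is a rough sketch of how a larger /dev/shm can be expressed with the docker, boto3, and kubernetes Python clients that these schedulers build on. The image names, the 2000MB figure, and the variable names are assumptions for the example, not the code in this diff; in the PR the size is derived from the role's memory request.

```
import boto3
import docker
from kubernetes.client import V1EmptyDirVolumeSource, V1Volume, V1VolumeMount

mem_mb = 2000  # illustrative memory request (MiB)

# DockerScheduler-style: pass the memory request as the container's shm size.
client = docker.from_env()
container = client.containers.run(
    "pytorch/pytorch",
    "python large-shm.py",
    shm_size=f"{mem_mb}m",
    detach=True,
)

# AWSBatchScheduler-style: sharedMemorySize (MiB) lives under
# containerProperties.linuxParameters in the job definition.
batch = boto3.client("batch")
batch.register_job_definition(
    jobDefinitionName="large-shm",
    type="container",
    containerProperties={
        "image": "pytorch/pytorch",
        "command": ["python", "large-shm.py"],
        "resourceRequirements": [
            {"type": "VCPU", "value": "1"},
            {"type": "MEMORY", "value": str(mem_mb)},
        ],
        "linuxParameters": {"sharedMemorySize": mem_mb},
    },
)

# KubernetesScheduler-style: a memory-backed emptyDir (tmpfs) mounted at
# /dev/shm; with no sizeLimit it is bounded only by the pod's memory.
shm_volume = V1Volume(name="dshm", empty_dir=V1EmptyDirVolumeSource(medium="Memory"))
shm_mount = V1VolumeMount(name="dshm", mount_path="/dev/shm")
```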

Test plan:

```
# large-shm.py
import torch
from torch.utils.data import Dataset, DataLoader

class BigDataset(Dataset):
    def __init__(self, size):
        self.size = size

    def __len__(self):
        return 20

    def __getitem__(self, idx):
        # Each item is a (1, size) float32 tensor; at size=100_000_000 that is
        # ~400MB, well beyond the 64MB /dev/shm default.
        return torch.zeros((1, self.size))

dataset = BigDataset(100_000_000)
# num_workers=1 makes the worker process return tensors through /dev/shm.
dataloader = DataLoader(dataset, batch_size=1, num_workers=1)

for i, x in enumerate(dataloader):
    print(i, x.shape)
```

```
torchx run --scheduler {local_docker,kubernetes,aws_batch} --wait --log dist.ddp --memMB 2000 -j 1x1 --script large-shm.py
```
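
As an extra sanity check (not part of this PR), the effective /dev/shm size can be inspected from inside the job before loading any data; the 64MB threshold below just reflects the Docker default mentioned above:

```
import shutil

# Report the tmpfs size backing /dev/shm and fail fast if it was not raised.
total, used, free = shutil.disk_usage("/dev/shm")
print(f"/dev/shm total: {total / 1024**2:.0f} MiB")
assert total > 64 * 1024 ** 2, "/dev/shm is still at the 64MB Docker default"
```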

@facebook-github-bot added the CLA Signed label Mar 21, 2022
codecov bot commented Mar 21, 2022

Codecov Report

Merging #429 (090fc11) into main (90b05b0) will increase coverage by 0.00%.
The diff coverage is 100.00%.

@@           Coverage Diff           @@
##             main     #429   +/-   ##
=======================================
  Coverage   94.41%   94.41%           
=======================================
  Files          67       67           
  Lines        3829     3830    +1     
=======================================
+ Hits         3615     3616    +1     
  Misses        214      214           
Impacted Files                               Coverage Δ
torchx/schedulers/aws_batch_scheduler.py     84.18% <100.00%> (ø)
torchx/schedulers/docker_scheduler.py        95.74% <100.00%> (ø)
torchx/schedulers/kubernetes_scheduler.py    92.68% <100.00%> (+0.03%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@facebook-github-bot (Contributor) commented
@d4l3k has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@d4l3k added this to the 0.1.2 release milestone Mar 22, 2022
facebook-github-bot pushed a commit that referenced this pull request Mar 22, 2022
…ing memory resource (#430)

Summary:
This behavior was noticed in #429; this change is intended to clean it up.

Previously the thread-local logic was incorrect, so a new session was created for every request, which caused a lot of log spam:
```
torchx 2022-03-21 14:19:45 INFO     Found credentials in environment variables.
torchx 2022-03-21 14:19:45 INFO     Found credentials in environment variables.
torchx 2022-03-21 14:19:45 INFO     Found credentials in environment variables.
```
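
For illustration, a minimal sketch of per-thread session caching with boto3; the helper name and structure are assumptions, not the exact code in #430:

```
import threading

import boto3

_local = threading.local()


def _thread_local_session() -> boto3.session.Session:
    # Create the boto3 session once per thread and reuse it, so credentials are
    # resolved (and the "Found credentials" line logged) only once per thread.
    if getattr(_local, "session", None) is None:
        _local.session = boto3.session.Session()
    return _local.session
```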

Pull Request resolved: #430

Test Plan:
Updated unit tests
```
torchx run --scheduler aws_batch --wait --log dist.ddp --memMB 2000 -j 1x1 --script large-shm.py
```

Reviewed By: aivanou

Differential Revision: D35027238

Pulled By: d4l3k

fbshipit-source-id: f28024ac2b1ee789d389021ec0c8c668d5d8514d
@d4l3k deleted the dockershm branch April 13, 2022 22:28
Labels
CLA Signed: This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed.
Development

Successfully merging this pull request may close these issues.

docker/k8s/batch: increase /dev/shm size for larger datasets
2 participants