I have a server with 8 GPUs and 1 TB of RAM. My Parquet dataset is 25 GB (5 million rows). When I use DeepSpeed to load the data, every process rank loads the full dataset, and each rank needs more than 150 GB of RAM just to load the Parquet table and convert it to a dictionary of NumPy arrays. So the total memory usage for loading the dataset alone is 150 GB × 8 = 1200 GB of RAM, which exceeds the 1 TB available.

Is there a way to load the dataset only on the first rank and then share it with the other ranks?
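One pattern that should achieve this (a minimal sketch, not DeepSpeed's API): have rank 0 convert the Parquet file once into memory-mapped `.npy` files, hold the other ranks at a barrier, then let every rank open the same files with `mmap_mode="r"`. Because the OS page cache backs a single shared copy, the resident memory is paid roughly once instead of once per rank. The paths, the `"gloo"` backend choice, and the helper names below are all assumptions for illustration.

```python
import os

import numpy as np
import pyarrow.parquet as pq
import torch.distributed as dist

PARQUET_PATH = "data/train.parquet"  # hypothetical path
CACHE_DIR = "data/mmap_cache"        # hypothetical cache directory


def build_cache():
    """Rank 0 only: dump each column to an .npy file on disk."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    table = pq.read_table(PARQUET_PATH)
    for name in table.column_names:
        # Works for fixed-width numeric columns; variable-length data
        # (e.g. strings) needs a different on-disk layout.
        np.save(os.path.join(CACHE_DIR, f"{name}.npy"), table[name].to_numpy())


def load_cache():
    """Every rank: open the cached arrays memory-mapped, read-only."""
    return {
        fname[:-4]: np.load(os.path.join(CACHE_DIR, fname), mmap_mode="r")
        for fname in sorted(os.listdir(CACHE_DIR))
        if fname.endswith(".npy")
    }


# In a real DeepSpeed run the process group is already initialized
# (deepspeed.init_distributed()); "gloo" is used here so the sketch
# runs without GPUs.
dist.init_process_group("gloo")
if dist.get_rank() == 0:
    build_cache()
dist.barrier()       # other ranks wait until rank 0 has written the cache
data = load_cache()  # pages are shared across ranks via the OS page cache
```

An alternative worth considering: a memory-mapped dataset library (e.g. Hugging Face `datasets`, which stores data as Arrow files and reads them lazily from disk) sidesteps the problem entirely, since no rank ever materializes the full table in RAM.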