It currently takes around 2 seconds to load the `pirl` module. Most of the time this isn't a big deal, but due to a combination of:

- we restart each Ray worker after a single call, to work around TensorFlow/CUDA brokenness;
- a complete experiment can consist of hundreds of calls;
this adds up to a significant overhead when the results are mostly cached. (The overhead is still there for non-cached calls but is negligible compared to the cost of e.g. running an RL/IRL algorithm.)
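To see where the per-call overhead actually goes, it helps to time imports directly. A minimal sketch (timing a stdlib module as a stand-in; in practice you'd pass `"pirl"`, `"ray"`, or `"tensorflow"`):

```python
import importlib
import time

def timed_import(name):
    """Return how long importing `name` takes, in seconds.

    Note: if `name` is already in sys.modules this measures the (fast)
    cached lookup, so run it in a fresh interpreter for real numbers.
    """
    start = time.perf_counter()
    importlib.import_module(name)
    return time.perf_counter() - start

print(f"json: {timed_import('json'):.4f}s")
```

CPython's `-X importtime` flag (`python -X importtime -c "import pirl"`) gives a finer-grained, per-module breakdown of the same thing.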
I'm not sure how much we can realistically shave off. Just importing Ray takes ~0.5s, and we can't avoid that. Importing TensorFlow seems to be the biggest culprit; we actually can delay that, but we'd need to make the imports in `pirl.agents` and `pirl.irl` on-demand. (Currently they all get loaded eagerly by the config.)
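One way to make those imports on-demand without changing the config's interface is a lazy proxy that defers the real import until first attribute access. A minimal sketch (the wrapped module names here are illustrative, not pirl's actual layout):

```python
import importlib

class LazyModule:
    """Proxy that defers importing a module until first attribute access."""

    def __init__(self, name):
        self._name = name
        self._module = None

    def __getattr__(self, attr):
        # Only fires for attributes not set in __init__, i.e. real lookups.
        if self._module is None:
            self._module = importlib.import_module(self._name)
        return getattr(self._module, attr)

# Hypothetical usage: the config would hold lazy handles, so merely
# importing pirl no longer pays for the TensorFlow import chain.
tf = LazyModule("math")  # stand-in; real code would wrap "tensorflow"
print(tf.sqrt(4))
```

On Python 3.7+ the same effect is available at package level via PEP 562's module `__getattr__`, which would let `pirl.agents` resolve submodules lazily without a proxy class.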