load in parallel #81

What should load do in parallel? Do we assume an NFS type filesystem? It seems to me that the right thing to do would be for the client to preprocess/compile the code and send it to the rest of the Workers.
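For readers landing here today: a minimal sketch of that "client ships it to the workers" idea, written against the current Distributed standard library rather than anything that existed when this issue was filed. The helper name `load_everywhere` and the file `defs.jl` are placeholders, not names from this issue.

```julia
using Distributed
addprocs(4)                          # e.g. four local worker processes

# Hypothetical helper: the client reads the file once and evaluates its
# contents on every process, so the workers never touch the filesystem.
function load_everywhere(path::AbstractString)
    code = read(path, String)        # client-side read
    for p in procs()                 # includes the client itself
        remotecall_wait(include_string, p, Main, code, path)
    end
end

load_everywhere("defs.jl")           # placeholder file name
```

This ships source text rather than preprocessed/compiled code, but it captures the point of the proposal: no shared filesystem is assumed.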
Comments
It would be much easier to implement …
Do we really need a pload? Can't load just do that automatically, if it detects parallel mode?
In general, I think that we need a simpler …
Is there a single node which "owns" all file system access? Or is this where we need a distributed file system?
Agree.
I prefer that there is a single node that owns all the filesystem access. Until then, we assume a distributed FS is available, so that we can just do @bcast load(file).
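In current Julia, the Distributed standard library's `@everywhere` plays the role of such a broadcast load; a sketch under the same shared-filesystem assumption (the file name is a placeholder):

```julia
using Distributed
addprocs(4)                      # workers assumed to share the filesystem

# Every process includes the same file from the shared path.
@everywhere include("defs.jl")   # placeholder file name
```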
I feel like this is backwards: it makes sense to have a single node own access until there's a distributed filesystem to rely on. No?
Well, most clusters we are likely to run on in the early stages will probably have NFS. Also, in the multi-core mode, where you run multiple julia processes, they all have the same fs (although it is not distributed).
Maybe we should just assume that some fs will take care of this then and …
Except that it breaks down when your client is your laptop and the computation is in the cloud. But for now, we can assume that some fs will figure it out. -viral
Right. That's an excellent case for using the local copy and shipping it …
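The other design discussed above, a single node owning all filesystem access, also covers this laptop-client / cloud-workers case: the workers pull the bytes from the client instead of reading the disk themselves. A hypothetical sketch (`load_from_client` is not an existing function):

```julia
using Distributed

# Hypothetical pull-style helper: only process 1 (the client) touches the
# filesystem; every other process fetches the file's bytes from it and
# evaluates them locally.
function load_from_client(path::AbstractString)
    if myid() == 1
        code = read(path, String)                       # client reads its local copy
    else
        code = remotecall_fetch(read, 1, path, String)  # ask the client for the bytes
    end
    include_string(Main, code, path)
end
```

The helper itself would first have to be defined on every process (for example with `@everywhere`) before the workers could use it.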