
mmap failure with address space quotas #10390

Closed
JonathanAnderson opened this issue Mar 3, 2015 · 58 comments
Labels
bug (Indicates an unexpected problem or unintended behavior), GC (Garbage collector)

@JonathanAnderson
Contributor

I'm having a problem where, when I build from 3c7136e and run julia, I get the error could not allocate pools. If I run as a different user, Julia runs successfully.

I think there might be something specific to my user on this box, but I am happy to help identify what is happening here.

I think this is related to #8699

also, from the julia-users group: https://groups.google.com/forum/#!topic/julia-users/FSIC1E6aaXk

@pao added the building label Mar 3, 2015
@JeffBezanson removed the building label Mar 3, 2015
@JeffBezanson
Member

This error is from a failing mmap, where we try to allocate 8GB of virtual address space. There might be a quota on virtual memory for some users.
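
For illustration only, here is a minimal sketch of how a reservation of that size can fail under an address-space quota; the exact size and mmap flags here are assumptions, not necessarily what gc.c uses:

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Report the per-process virtual address space limit (what `ulimit -v` sets). */
        struct rlimit rl;
        if (getrlimit(RLIMIT_AS, &rl) == 0)
            printf("RLIMIT_AS soft limit: %llu bytes\n",
                   (unsigned long long)rl.rlim_cur);

        /* Try to reserve 8 GiB of address space without touching it.  With
           `ulimit -v 8000000` (about 7.6 GiB) this mmap returns MAP_FAILED,
           the same failure mode that surfaces as "could not allocate pools". */
        size_t len = 8ULL * 1024 * 1024 * 1024;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        munmap(p, len);
        return 0;
    }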

@pao
Member

pao commented Mar 3, 2015

Oops, misread the issue, sorry.

@ivarne
Member

ivarne commented Mar 3, 2015

-v: address space (kb) 8000000 (from the julia-users thread) seems to indicate that an 8 GB allocation is guaranteed to cause trouble.

@JeffBezanson
Member

@carnaval Could we decrease this to, say, 4GB, to make this issue less likely?

@vtjnash added the bug label Mar 4, 2015
@vtjnash added this to the 0.4.1 milestone Mar 4, 2015
@JeffBezanson changed the title from "could not allocate pools" to "mmap failure with address space quotas" Mar 5, 2015
@tkelman
Contributor

tkelman commented Mar 16, 2015

An 8 GB array is also too large for MSVC to compile, FWIW.

@tkelman
Contributor

tkelman commented Mar 20, 2015

Can you try reducing the number on

julia/src/gc.c

Line 88 in e1d6e56

#define REGION_PG_COUNT 16*8*4096 // 8G because virtual memory is cheap
by a factor of 2 or 4 and see if it helps? I could also make a test branch with that change and have the buildbot build test binaries, if that would be easier.
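
For context, a rough sketch of what that constant works out to, assuming a 16 KiB GC page size (the actual value lives in gc.c and may differ):

    /* Assumed arithmetic: 16*8*4096 pages * 16 KiB per GC page = 8 GiB of
       reserved address space per region.  Shrinking the count shrinks the
       reservation proportionally; these alternatives are illustrative only. */
    #define REGION_PG_COUNT (8*8*4096)   /* ~4 GiB per region */
    /* #define REGION_PG_COUNT (4*8*4096)    ~2 GiB per region */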

@carnaval
Contributor

We can certainly lower this. I set it that high under the reasoning that address space is essentially free on x64. As I understand it, operations are either O(f(number of memory mappings)) or O(f(size of resident portion)), so it should not hurt performance.
I didn't think of arbitrary quotas, but it's probably better to ask: does anyone know of any other drawback to allocating "unreasonable" amounts of virtual memory on a 64-bit arch?
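
As a sketch of the reserve-now, commit-later pattern being described here (an illustration of the general technique, not necessarily the exact calls gc.c makes): the reservation only creates a mapping entry, and pages enter the page table (and eventually the TLB) only once they are committed and touched.

    #include <stddef.h>
    #include <sys/mman.h>

    /* Reserve a large span of address space up front.  Nothing is committed,
       so the cost is one entry in the kernel's mapping list, not page-table
       entries or physical memory. */
    static void *reserve_region(size_t len)
    {
        void *p = mmap(NULL, len, PROT_NONE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        return p == MAP_FAILED ? NULL : p;
    }

    /* Commit a chunk on demand; only these pages can end up resident. */
    static int commit_chunk(void *addr, size_t len)
    {
        return mprotect(addr, len, PROT_READ | PROT_WRITE);
    }

    /* Decommit a chunk, returning physical pages to the OS while keeping the
       address range reserved for later reuse. */
    static int decommit_chunk(void *addr, size_t len)
    {
        return madvise(addr, len, MADV_DONTNEED);
    }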

@ScottPJones
Contributor

@carnaval Yes, indeed... lots of performance issues if you have very large amounts of memory mapped... which is why people use huge page support...

@carnaval
Contributor

Keep in mind I'm still talking about uncommitted memory. The advantage of huge pages is reducing TLB contention, as far as I know, and uncommitted memory certainly won't end up in the TLB.

Generally, as far as my understanding of the kernel VM system goes, "dense" data structures (such as the page table, for which the TLB acts as a cache) are only filled with committed memory. The mapping itself stays in a "sparse" structure (like a list of mappings), so you only pay costs relative to the number of mappings. I may be wrong though, so I'll be happy to be corrected.

@ScottPJones
Contributor

I'm talking about memory that has actually been touched, i.e. committed.
The issue is that if you have an (opt-in, at least) limit in the language, instead of just relying on things like ulimit, you can (at least in my experience) better control things and keep them from getting to the point where the OS goes belly-up. Say you have 60,000 processes running which you know only need, say, 128M (unless they somehow get out of control due to some bug): having the limit protects you.
You may also have different classes of processes that need more memory (say, for loading a huge XML document); it's important to be able to allow those to have a higher limit dynamically (based on user roles).

@carnaval
Contributor

That's not what my question was about though. We are already careful to decommit useless pages.

The limit is another issue; enforcing it strictly would probably require parsing /proc/self/smaps from time to time anyway, to be sure some C library is not sneaking around making mappings.
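
For what it's worth, a minimal sketch of the smaps-scanning idea mentioned above (Linux-specific, and the "Size:" field format is an assumption about the usual /proc/self/smaps layout):

    #include <stdio.h>

    /* Sum the "Size:" fields in /proc/self/smaps to estimate how much address
       space the whole process has mapped, including mappings made by C
       libraries behind the GC's back. */
    static unsigned long long total_mapped_kb(void)
    {
        FILE *f = fopen("/proc/self/smaps", "r");
        if (!f)
            return 0;
        char line[256];
        unsigned long long total = 0, kb;
        while (fgets(line, sizeof(line), f)) {
            if (sscanf(line, "Size: %llu kB", &kb) == 1)
                total += kb;
        }
        fclose(f);
        return total;
    }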

@ScottPJones
Contributor

Yes, but does the current system ever try to proactively cut down on caches, etc., so that it can free up some memory?

It doesn't really have to be done strictly to be useful, without fancy approaches like parsing /proc/...
Also, for people embedding Julia, couldn't things be compiled so that at least malloc/calloc/realloc end up using a Julia version that does keep track?
Having some facility to try to increase stability is better than none, even if it can't handle external memory pressure.

@carnaval
Contributor

I'm not arguing that we should not do those things. But those are features. I was just trying to check whether someone knew of a kernel that would be slow with large mappings: that would be a regression, not a missing feature.

@mauro3
Contributor

mauro3 commented Aug 12, 2015

I'm running into a could not allocate pools issue on a new build on a new machine (0.3 works fine).
(Not sure whether this warrants a new issue or not, let me know.)

It builds fine, but it crashes when running the tests; the culprit is addprocs:

   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "help()" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.4.0-dev+6683 (2015-08-12 17:53 UTC)
 _/ |\__'_|_|_|\__'_|  |  Commit 103f7a3* (0 days old master)
|__/                   |  x86_64-linux-gnu

julia> addprocs(3; exeflags=`--check-bounds=yes --depwarn=error`)
could not allocate pools

However, addprocs(2; exeflags=`--check-bounds=yes --depwarn=error`) works. Also, starting more than three REPLs at once produces the error.

As far as I can tell there are no relevant ulimits:

 $ ulimit -a
-t: cpu time (seconds)         unlimited
-f: file size (blocks)         unlimited
-d: data seg size (kbytes)     unlimited
-s: stack size (kbytes)        8192
-c: core file size (blocks)    0
-m: resident set size (kbytes) unlimited
-u: processes                  63889
-n: file descriptors           1024
-l: locked-in-memory size (kb) 64
-v: address space (kb)         unlimited
-x: file locks                 unlimited
-i: pending signals            63889
-q: bytes in POSIX msg queues  819200
-e: max nice                   0
-r: max rt priority            0
-N 15:                         unlimited

On my normal machine the -l option is unlimited, but limiting it there to 64 does not reproduce this behavior.

@mauro3
Contributor

mauro3 commented Aug 13, 2015

The same problem arises using the Julia nightlies julia-0.4.0-24a92a9f5d-linux64.tar.gz.

Any ideas on how I could resolve this? Should I contact the admin of that machine to change some settings?

@carnaval
Contributor

Yes, you can remove the 16* here

julia/src/gc.c

Line 164 in f40da0f

#define REGION_PG_COUNT 16*8*4096 // 8G because virtual memory is cheap
and recompile.

Maybe I should make that the default, but it feels so silly to me for admins to restrict address space; I don't really get it.

@mauro3
Contributor

mauro3 commented Aug 13, 2015

Yes, that works, thanks! Just to clarify, my understanding from this thread is that a limit on -v: address space (kb) is what causes this. However, that is unlimited on my machine. So which limit is the culprit?

@waTeim
Contributor

waTeim commented May 5, 2016

As for me, this is happening on ARM for unclear reasons. The GC memory space is currently not expandable, I take it. If it were, I think minimal low-end hardware could afford a heap size of at least 64M without an issue, while expecting a size approaching 1G is unrealistic. Somewhere in between is the target.

Additionally, I request that this be configurable via the library (jl_init or similar), and not only controllable by running the julia executable.

@yuyichao
Contributor

yuyichao commented May 5, 2016

The ARM issue is completely different. This is only an issue for those who cannot control the virtual address space limit; the amount of physical memory is irrelevant here.

@waTeim
Contributor

waTeim commented May 5, 2016

Well, since the error message is the same, it at least seems related. Are you saying this happens not because of the size of the allocation but because of its location? Aren't those related? The previous discussion made it sound like people were having problems because the system prevented oversubscription, which seems to indicate a problem with size. If that's the case, then that kind of makes sense too; it is true that OS support for full 64-bit virtual addresses is lacking.

@r-barnes

Attempting to compile on XSEDE's Comet raised this error. Removing the 16* from gc.c allowed compilation to continue.

@eschnett
Contributor

Comet's front end has a severely restricted memory limit setting (ulimit); you can only allocate 2 GByte. The solution is to request a compute node interactively and build there:

/share/apps/compute/interactive/qsubi.bash -p debug --nodes=1 --ntasks-per-node=24 -t 00:30:00 --export=ALL

yuyichao added a commit that referenced this issue May 16, 2016
* Set region sizes based on `ulimit`.
* Automatically shrink region size when allocation fails.

Fix #10390
tkelman pushed a commit to tkelman/julia that referenced this issue May 16, 2016
* Set region sizes based on `ulimit`.
* Automatically shrink region size when allocation fails.

Fix JuliaLang#10390
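
Very roughly, the shape of that fix looks something like the sketch below: size the region from the address-space quota, and keep halving it when mmap refuses the request. This is only an illustration of the approach described in the commit message; the names and constants are made up, not the actual gc-pages.c code.

    #include <stddef.h>
    #include <sys/mman.h>
    #include <sys/resource.h>

    /* Illustrative only: pick an initial region size bounded by RLIMIT_AS,
       then shrink on allocation failure instead of giving up. */
    static void *alloc_region(size_t *region_sz)
    {
        size_t sz = *region_sz;

        /* If an address-space quota is set, start no larger than a fraction
           of it, so a single region cannot eat the whole budget. */
        struct rlimit rl;
        if (getrlimit(RLIMIT_AS, &rl) == 0 && rl.rlim_cur != RLIM_INFINITY) {
            while (sz > rl.rlim_cur / 4 && sz > 4 * 1024 * 1024)
                sz /= 2;
        }

        for (;;) {
            void *mem = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
            if (mem != MAP_FAILED) {
                *region_sz = sz;   /* remember the size that actually worked */
                return mem;
            }
            if (sz <= 4 * 1024 * 1024)
                return NULL;       /* give up below a minimum region size */
            sz /= 2;               /* shrink and retry, as the commit describes */
        }
    }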
@floswald

floswald commented Jun 8, 2016

Hi all,
is this going to be backported to 0.4.x at some point? I'm stuck with this problem on a cluster. Thanks!

@tkelman
Contributor

tkelman commented Jun 8, 2016

#16385 was a pretty large change; I'm not sure whether it can be easily backported. Are you building from source or using binaries? If the former, just change the number in the code and recompile. If the latter, I guess we could trigger an unofficial build with a smaller value.

@floswald

floswald commented Jun 8, 2016

I was using binaries. Building is a nightmare on that system as well; I run into "disk space quota exceeded" on the login node all the time, and I can't get the build to work on a compute node either. If you can trigger an unofficial 0.4.5 build, that would save my week. Thanks.

@tkelman
Contributor

tkelman commented Jun 8, 2016

It might take a while to build, but check back at https://build.julialang.org/builders/package_tarball64/builds/435 and when it's done it should be available at https://julianightlies.s3.amazonaws.com/bin/linux/x64/0.4/julia-0.4.6-c7cd8171df-linux64.tar.gz (assuming you want 64-bit Linux, and that dropping by a factor of 8 will get you below your ulimit).

@floswald

floswald commented Jun 8, 2016

Awesome! Thanks.


@floswald

floswald commented Jun 8, 2016

@tkelman thanks so much, it works out of the box like a charm! So much for "broken software". Outstanding support as usual. 👍 👍 👍

@mauro3
Contributor

mauro3 commented Sep 13, 2016

In case someone else stumbles over this: I was under the impression that this issue was resolved, but it still surfaced for me with the 0.5-rc4 binaries and a source build of rc4, see #18477.

The error now looks a bit different for me: either the tests just hang when running make testall, or, when doing addprocs with a suitably high number, I get Master process (id 1) could not connect within 60.0 seconds. The fix is as before, but now in src/gc-pages.c.

@StefanKarpinski added this to the 0.5.x milestone Sep 13, 2016
@StefanKarpinski
Member

Reopened to be fixed in 0.5.x.

@yuyichao
Contributor

As mentioned in the related issue, this is really #17987. It no longer fails because we ask for a huge fixed size, and what remains is better handled by allowing users with special memory constraints to specify them directly.

@floswald

Sorry to bother you with this, but I am still looking for a solution to this problem. I am working on a cluster where I have to request the maximum amount of virtual and physical memory that I will be using, and I have to request very large amounts in order for my job to run at all. This puts me on a significantly longer queue, because I basically need an entire compute node all to myself. Julia v0.5-rc3.

My job has the following memory requirements when run on a single compute node on that same cluster.

Any advice on how to deal with this would be greatly appreciated.
