[IPython-User] ipcluster: Too many open files (tcp_listener.cpp:213)
Mon Jun 18 16:23:40 CDT 2012
On Mon, Jun 18, 2012 at 1:29 PM, Jon Olav Vik <firstname.lastname@example.org> wrote:
> MinRK <benjaminrk <at> gmail.com> writes:
> > [IPControllerApp] client::client 'c5dd44d5-b59d-4392-bac0-917e9ef4c9d8'
> > requested u'registration_request'
> > 2012-06-18 14:03:58.161 [IPClusterStart] Too many open files
> > (tcp_listener.cpp:213)
> > 2012-06-18 14:03:58.274 [IPClusterStart] Process '.../python' stopped:
> > {'pid': 21820, 'exit_code': -6}
> > 2012-06-18 14:03:58.275 [IPClusterStart] IPython cluster: stopping
> > 2012-06-18 14:03:58.275 [IPClusterStart] Stopping Engines...
> > 2012-06-18 14:04:01.281 [IPClusterStart] Removing pid file: .../.ipython/
> > profile_default/pid/ipcluster.pid
> > The culprit seems to be "Too many open files (tcp_listener.cpp:213)". I'd
> > like to know where this limit is set, and how to modify it. Also, I wonder
> > if it would help to spread connection attempts out in time. That might help
> > if the problem is too many simultaneous requests, but not if the limit
> > applies to how many engines I can connect simultaneously. Any other advice
> > would be appreciated too.
> > This is just the fd limit set by your system. See various docs on
> > 'ulimit' for your system.
> fd = file descriptor?
Yes, sorry, fd is file descriptor.
> But the engines are running on separate computers, each with 8 to 24
> cores. Doesn't the ulimit apply only within each computer?
There are limits at a few levels, but the one that is relevant here is the
*per-process* one, which in your case is 1024. It is only the Controller
processes that have a number of FDs proportional to the number of engines,
so that's the machine where you need to pay attention to this.
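If it helps, the limit MinRK is describing can be inspected (and the soft limit raised, up to the hard limit) from Python itself via the standard `resource` module. This is a sketch of the mechanism, not an IPython feature:

```python
import resource

# Inspect the per-process "open files" limit that `ulimit -n` reports;
# the soft limit is what the controller process actually hits.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft:", soft, "hard:", hard)

# An unprivileged process may raise its soft limit up to the hard limit;
# raising the hard limit itself requires root (e.g. via limits.conf).
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```

Since the limit is inherited by child processes, in practice you would raise it in the shell (`ulimit -n`) before launching ipcluster, so the controller starts with the higher ceiling.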
> Is there any relationship between TCP and open files? (Sorry, I'm not a
> native speaker.)
Yes - each connection gets a new FD (it's actually a little more
complicated than that with zeromq, but it's proportional to the number of
connections).
> I see the number 200 in "max user processes", but 1024 "max open files".
> Am I missing something, e.g. similar limits for network connections?
"open files" is the limiting factor you want to increase (ulimit -n).
> -bash-3.2$ ulimit -a
> core file size (blocks, -c) 0
> data seg size (kbytes, -d) unlimited
> scheduling priority (-e) 0
> file size (blocks, -f) unlimited
> pending signals (-i) 135168
> max locked memory (kbytes, -l) unlimited
> max memory size (kbytes, -m) unlimited
> open files (-n) 1024
> pipe size (512 bytes, -p) 8
> POSIX message queues (bytes, -q) 819200
> real-time priority (-r) 0
> stack size (kbytes, -s) 10240
> cpu time (seconds, -t) unlimited
> max user processes (-u) 200
> virtual memory (kbytes, -v) 4194304
> file locks (-x) unlimited
> > You can try to spread out connection attempts, but I don't think it will
> > change anything. I do not believe there are transient sockets during the
> > connection process.
> Does this mean that the number of parallel processes with IPython is
> limited by the permitted number of file descriptors?
Yes, just as any networked process has a limited number of connections.
> Two hundred really isn't too much,
> but I guess it'll have to do...
This is a result of there being several zeromq connections for each engine,
not just one.
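A back-of-envelope estimate makes the effect concrete. The per-engine descriptor count and controller overhead below are assumptions for illustration only, not measured values:

```python
# Rough estimate of how many engines a controller can accept before
# hitting the FD ceiling. Both constants below are hypothetical.
FD_LIMIT = 1024        # the "open files" soft limit from ulimit -a
FDS_PER_ENGINE = 5     # assumed: several zeromq sockets per engine
OVERHEAD = 50          # assumed: controller's own files and sockets

max_engines = (FD_LIMIT - OVERHEAD) // FDS_PER_ENGINE
print(max_engines)
```

Whatever the exact per-engine cost, the usable engine count ends up well below the raw 1024 figure, which is consistent with hitting trouble around a couple of hundred engines.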
> Would it be feasible to have all ipengines on a compute node use a single
> connection to the central ipcontroller? (Pardon me if I get the terminology
> wrong.)
No, this is not feasible.
> Thank you for your help.
> Best regards,
> Jon Olav