[IPython-User] ipcluster: Too many open files (tcp_listener.cpp:213)

Jon Olav Vik jonovik@gmail....
Mon Jun 18 15:29:15 CDT 2012


MinRK <benjaminrk <at> gmail.com> writes:

> [IPControllerApp] client::client 'c5dd44d5-b59d-4392-bac0-917e9ef4c9d8'
> requested u'registration_request'
> 2012-06-18 14:03:58.161 [IPClusterStart] Too many open files
> (tcp_listener.cpp:213)
> 2012-06-18 14:03:58.274 [IPClusterStart] Process '.../python' stopped: {'pid':
> 21820, 'exit_code': -6}
> 2012-06-18 14:03:58.275 [IPClusterStart] IPython cluster: stopping
> 2012-06-18 14:03:58.275 [IPClusterStart] Stopping Engines...
> 2012-06-18 14:04:01.281 [IPClusterStart] Removing pid file: .../.ipython/
> profile_default/pid/ipcluster.pid
> The culprit seems to be "Too many open files (tcp_listener.cpp:213)". I would
> like to know where this limit is set, and how to modify it. Also, I wonder if
> it would help to spread connection attempts out in time. That might help if 
the
> problem is too many simultaneous requests, but not if the limit applies to how
> many engines I can connect simultaneously. Any other advice would be welcome
> too.
 
> This is just the fd limit set by your system.  See various docs on changing
> 'ulimit' for your system.
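Besides the shell's `ulimit -n`, the limit can also be inspected and raised from Python itself via the standard-library resource module; a minimal sketch, with the caveat that a process can only raise its soft limit up to the hard limit (raising the hard limit itself usually requires root or an /etc/security/limits.conf entry):

```python
import resource

# The current file-descriptor limits; the soft value is what
# "ulimit -n" reports in the shell.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft:", soft, "hard:", hard)

# Raise the soft limit as far as the hard limit allows.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```

This has to run in the controller's process (e.g. before starting it), since limits are per-process and inherited by children.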

fd = file descriptor? But the engines are running on separate computers, each 
with 8 to 24 cores. Doesn't the ulimit apply only within each computer? And is 
there any relationship between TCP connections and open files? (Sorry, I'm not 
a Linux native.)
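To answer my own question in part: on Unix a TCP socket is itself a file descriptor, which is why the "open files" ulimit caps network connections too. A minimal illustration:

```python
import socket

# Every socket the controller accepts counts against the
# "open files" (-n) limit, exactly like an open file would.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
fd = s.fileno()
print("this socket occupies fd", fd)
s.close()
```

So the limit that matters here is the one on the host running the ipcontroller, which holds a descriptor for every connected engine regardless of which machine that engine runs on.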

I see the number 200 under "max user processes", but 1024 under "max open 
files". Am I missing something, e.g. a similar limit for network connections?

-bash-3.2$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 135168
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 200
virtual memory          (kbytes, -v) 4194304
file locks                      (-x) unlimited


> You can try to spread out connection attempts, but I don't think it will
> change anything.
> 
> I do not believe there are transient sockets during the connection process.

Does this mean that the number of parallel IPython processes is limited by the 
permitted number of file descriptors? Two hundred really isn't much, but I 
guess it'll have to do...
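A back-of-envelope check of how the fd limit translates into an engine count; the sockets-per-engine figure here is a guess, not a measured value (the actual number depends on the IPython.parallel version and transport):

```python
# Hypothetical numbers: each connected engine holds a handful of
# persistent ZeroMQ sockets on the controller, and the controller
# needs some descriptors of its own for logs, pipes, libraries, etc.
SOCKETS_PER_ENGINE = 5   # assumption, not from the docs
FD_LIMIT = 1024          # "open files (-n)" from ulimit -a above
RESERVED = 64            # assumed controller overhead

max_engines = (FD_LIMIT - RESERVED) // SOCKETS_PER_ENGINE
print(max_engines)  # -> 192
```

If that estimate is in the right ballpark, the 1024 fd limit, not the 200-process limit, is what would bite first on the controller host.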

Would it be feasible to have all ipengines on a compute node use a single 
connection to the central ipcontroller? (Pardon me if I get the terminology 
wrong.)

Thank you for your help.

Best regards,
Jon Olav



More information about the IPython-User mailing list