[IPython-user] IPython1 with SSH
Tue Aug 21 15:05:53 CDT 2007
On 8/21/07, Jussi Rasinmäki <firstname.lastname@example.org> wrote:
> Hi all,
> Thanks for your pointers. I did manage to get the task_profiler.py
> example to work by doing ssh port forwarding for ports 10105 and
> 10113. So, for the benefit of the other "not so knowledgeable"
> readers, here's a brief explanation of what I did:
> on cluster: ipcluster -n 10 (starts the ipython1 controller and 10
> engines on the cluster)
> on local: ssh -L 10105:my.cluster.ip:10105 email@example.com
> on local: ssh -L 10113:my.cluster.ip:10113 firstname.lastname@example.org
> (these processes are left running; they forward network traffic
> sent to localhost ports 10105 and 10113 on to the same ports on the
> cluster)
> on local: python task_profiler.py -n 128 -t 0.01 -T 1.0 (note that no
> -c for the controller is specified (localhost is the default), but
> because of the ssh tunneling packets to localhost:10105 and :10113
> will actually end up on the cluster)
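The two forwards above can also be carried over a single ssh session; a sketch (user and my.cluster.ip are placeholders for your cluster login and head node; the command is echoed here, drop the echo and run it directly to open the tunnel):

```shell
# Both port forwards in one ssh session. -N: open the tunnel without
# running a remote command. user@my.cluster.ip is a placeholder.
tunnel='ssh -N -L 10105:my.cluster.ip:10105 -L 10113:my.cluster.ip:10113 user@my.cluster.ip'
echo "$tunnel"
```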
> I hadn't really grasped the roles of client, controller and engines
> properly, so my initial setup didn't make much sense: I had both the
> client and controller running on my local computer and only the nodes
> on the cluster, hence the rc =
> kernel.RemoteController(('127.0.0.1',10105)). Obviously it makes more
> sense to run the controller on the cluster head as Brian suggested.
> However, once I get our simulator to run on the "supercluster", I'd
> like to test it on Amazon EC2 as well. In that case the controller
> would actually be on my local computer and the engines on EC2
> instances, I guess. Should I want to secure the traffic between the
> controller and the engines in that case, how would I go about doing
> that?
You could start sshd on your local computer and tunnel ssh from the
EC2 nodes (where the engines are) to your local computer. You would
just have to initiate the tunnel from the EC2 nodes, and your local
computer would need to be reachable (public IP with the ssh port
open). Does that make sense? It does take a little while to figure
out all the different pieces, but in principle just about anything is
possible. Let us know if you get EC2 working with ipython1.
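A sketch of the EC2-side tunnel (this assumes the engine-facing controller port is 10201, per the default ports Brian lists below; you@your.public.ip is a placeholder for your workstation's login; the commands are echoed here, drop the echoes to actually run them):

```shell
# Run on each EC2 node. Forwards the node's localhost:10201 through the
# tunnel to your workstation, where the controller listens. Assumes the
# engine port is 10201 and your workstation has a public IP with sshd
# reachable. you@your.public.ip is a placeholder.
tunnel='ssh -N -L 10201:localhost:10201 you@your.public.ip'
echo "$tunnel"
# Then point the engine at the local end of the tunnel:
engine='ipengine --controller-ip=localhost'
echo "$engine"
```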
> On 17/08/07, Brian Granger <email@example.com> wrote:
> > The main constraints are these:
> > The computer that is running the controller must:
> > - Have firewall ports open for the client and engines to contact it.
> > To see what ports need to be opened, have a look at the controller
> > log. I think the ports are 10201, 10105, 10113 by default.
> > If the firewall ports are closed and you can't open them, you will
> > need to use ssh tunneling as Ville mentions. See the ssh man pages
> > for details. There are also a number of good tutorials about this.
> > - It must be reachable by both the client and engines. This means
> > it must have a public IP address that is visible to both. In some
> > contexts, the controller computer will have multiple network
> > interfaces and this must be dealt with. Because of this, you should
> > specify the IP address rather than the hostname to be safe.
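Since the controller must be reachable on those ports, a quick way to verify connectivity from the client (or engine) side before starting IPython1 is a plain TCP probe; a minimal standard-library sketch, where the host and port below are placeholders for your controller's address:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == '__main__':
    # Placeholder address: substitute your controller's IP and port
    # (e.g. 10105 for the client port, per the controller log).
    print(port_open('127.0.0.1', 10105))
```

A `False` here corresponds to the `Connection refused` error shown further down: either the controller is not listening on that address or a firewall is blocking the port.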
> > The typical way we run ipython1 on a cluster is to start the
> > controller on a head node, the engines on the compute nodes, and
> > the client on a local machine where you are sitting.
> > > ipcontroller
> > > ipengine --controller-ip=myhost
> > > doesn't work:
> > > >>>import ipython1.kernel.api as kernel
> > > >>>rc = kernel.RemoteController(('127.0.0.1',10105))
> > You will need to change the '127.0.0.1' to the ip address that the
> > controller is running on. That is definitely a problem.
> > Try these things and let us know how it goes.
> > Brian
> > > >>>rc.getIDs()
> > > socket.error: (61, 'Connection refused')
> > >
> > > Cheers,
> > >
> > > Jussi
> > > _______________________________________________
> > > IPython-user mailing list
> > > IPythonfirstname.lastname@example.org
> > > http://lists.ipython.scipy.org/mailman/listinfo/ipython-user
> > >