[IPython-user] Trouble importing my own modules?

Brian Granger ellisonbg.net@gmail....
Tue Jun 12 22:51:28 CDT 2007


> Building on some of the things that Greg asked about, I'm curious,
> what, if any, future plans there are for a "cooperative computing"
> environment. That is, you could imagine that two users would want to
> work on the same data at the same time. Perhaps both updating
> calculations or performing visualizations while communicating over the
> phone or in a chat session.

We definitely want to move further in that direction, as it seems
like something that could be really useful.

> I realize that much of this is possible to the point that two users
> can connect and run commands in the same namespace...but this is not
> really the same as a system designed with cooperative computation in
> mind.

In my mind, being able to run commands in the same namespace (and thus
on the same data) seems to cover most of the use cases I have come
up with.  But this area is wide open - I don't know of any other
system that even comes close in this respect.  What other
capabilities/features do you think would be useful for a full
"cooperative computation" system?  I think there is a lot of
interesting work to be done on this front, and we are glad to take
suggestions or even contributions.
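The shared-namespace idea can be sketched in a few lines. This is only an illustration, not ipython1's actual implementation: a toy `Engine` class holds one namespace dict, and two "users" each send code strings into it, so each sees the other's data.

```python
# Minimal sketch of cooperative computing via a shared namespace.
# `Engine` here is a hypothetical stand-in; real ipython1 engines run
# as separate processes reached over the network.

class Engine:
    """A toy engine holding one namespace shared by all connected users."""
    def __init__(self):
        self.namespace = {}

    def execute(self, code):
        # Run a code string inside the shared namespace.
        exec(code, self.namespace)

engine = Engine()
engine.execute("data = [1, 2, 3]")    # user A creates some data
engine.execute("total = sum(data)")   # user B computes on the same data
print(engine.namespace["total"])      # -> 6
```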

Brian

> Thanks,
> ~doug
>
> On 6/12/07, Fernando Perez <fperez.net@gmail.com> wrote:
> > Hi Greg,
> >
> > sorry but I'll only provide a partial reply right now.  I'm at a
> > conference with very limited email access and a bit swamped.
> >
> > On 6/11/07, Greg Novak <novak@ucolick.org> wrote:
> > > I'm having trouble importing my own modules into the ipython1 engines.
> > >
> > > First ipython wasn't finding them, in spite of the fact that they were
> > > in the current working directory of the engines.  That's strange, but
> > > not a problem--I set PYTHONPATH to include the modules in question and
> > > the modules were found.
> > >
> > > The other problem is that if I just do:
> > >   import ipython1.kernel.api as par
> > >   rc = par.RemoteController(('localhost', 10105))
> > >   rc.executeAll('import analytic')
> > >
> > > Then I get the traceback attached below.  However, if I do:
> > >
> > >   [rc.execute(i, 'import analytic') for i in rc.getIDs()]
> > >
> > > Then it seems to work.  So, I'm reasonably happy since I have an
> > > easy workaround, but it is strange.  This is using a CVS checkout from
> > > May 8.
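The two call patterns Greg compares can be demonstrated with a mock. The `MockController` below is purely illustrative (the real `RemoteController` talks to remote engine processes), but it shows why the explicit per-engine loop is a drop-in substitute for the broadcast form:

```python
# Sketch of executeAll() vs. the per-engine execute() workaround.
# MockController is a hypothetical stand-in for ipython1's
# RemoteController; each "engine" is just a local namespace dict.

class MockController:
    def __init__(self, n_engines):
        self.namespaces = {i: {} for i in range(n_engines)}

    def getIDs(self):
        return list(self.namespaces)

    def execute(self, engine_id, code):
        # Run a code string on one engine.
        exec(code, self.namespaces[engine_id])

    def executeAll(self, code):
        # Broadcast: run the code string on every engine.
        for engine_id in self.getIDs():
            self.execute(engine_id, code)

rc = MockController(4)
rc.executeAll("import math")                      # broadcast form
[rc.execute(i, "x = math.pi") for i in rc.getIDs()]  # Greg's workaround
print(all("x" in ns for ns in rc.namespaces.values()))  # -> True
```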
> >
> > Why don't you try updating to current SVN and let us know what
> > happens?  The traceback situation is better now (see below).
> >
> > > 1) Thank you for putting this together.  My opinion is that high
> > > end/parallel computing sucks these days because the whole mindset
> > > differs from that of desktop computing.  On desktops, you have
> > > interactive GUI programs and flexible languages (like Python).  On
> > > high-end computers, you have non-interactive batch job systems,
> > > laborious/difficult visualization, and Fortran.  Things like ipython1
> > > are a breath of fresh air.
> >
> > Thanks for the kind words :)  There's still a LOT to be done, and we
> > know that.  It's a big effort and a lot of this is not easy, but we
> > hope you (a collective you, the users) won't lose patience.  Work
> > continues...
> >
> > > 2) Is there a way to get better tracebacks?  When my code generates
> > > exceptions, the exception is thrown in the bowels of ipython/twisted,
> > > rather than anything indicating what was my actual mistake.  I realize
> > > that this may be an impossible task since I pass code to be executed
> > > as a string and after that Python has no good way of figuring out
> > > where in the source file it came from.
> >
> > This is a particularly thorny issue, but is much better in current
> > SVN.  Now we generate a full remote traceback on the engine when it
> > happens, and we stuff that as the *value* of the exception.  So while
> > you'll still see that ugly local twisted-related traceback (I can't
> > avoid that, unfortunately, because I can't subvert the true local
> > stack with a remote one), at least at the bottom you'll now see a much
> > more informative traceback.
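The mechanism Fernando describes can be sketched as follows. The function names here are illustrative, not the actual ipython1 internals: format the traceback where the error happens, ship the string back, and raise a local exception whose *value* carries the remote traceback text.

```python
# Sketch of shipping a remote traceback back as an exception's value.
# run_on_engine() is a hypothetical stand-in for remote execution.

import traceback

def run_on_engine(code, namespace):
    """Pretend-remote execution: return (ok, traceback_string_or_None)."""
    try:
        exec(code, namespace)
        return True, None
    except Exception:
        # Capture the full traceback where the error actually occurred.
        return False, traceback.format_exc()

ok, remote_tb = run_on_engine("1 / 0", {})
if not ok:
    try:
        # The local raise still shows a local stack, but the remote
        # traceback travels along as the exception's value.
        raise RuntimeError("Remote engine error:\n" + remote_tb)
    except RuntimeError as e:
        print("ZeroDivisionError" in str(e))  # -> True
```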
> >
> > > 3) I understand from mailing list posts that the eventual goal is to
> > > have the engines running Ipython rather than plain python instances.
> > > That seems fine as a default, but I'd like to put in a vote for having
> > > it continue to be possible to run plain python instances on the
> > > engines.  The reason is a little esoteric...
> >
> > Don't worry.  When we say 'ipython in the engines' we don't mean
> > *today's* readline-based ipython.  We mean a pure python program that
> > provides things like object introspection, nice tracebacks, etc, but
> > over an API.  There will be a *terminal* based ipython that will use
> > such an object to build what looks like today's ipython, and will
> > hence use readline.  But the code running in the engines will simply
> > expose the ipython API over a network, so no readline to worry about.
> >
> > > 4) Multiple users.  Do you have any ideas or a preferred model for
> > > allowing multiple people to connect to the same controller (and
> > > therefore have access to the same pool of engines?)  That would be
> > > truly killer.
> >
> > So far, we've done it just by starting more than one remote controller
> > within a subnet with no firewalls, and it seems to work fine.
> > Security needs to be added for using it in open networks though.
> > While twisted offers a lot in that terrain, we need to actually do the
> > implementation.
> >
> > >
> > > To be concrete, what excites me about ipython1 is the idea of
> > > interactive data analysis.  For the most part exploration of data is
> > > limited to what you can do on a single processor because you need to
> > > hook your application up to a GUI.  GUIs, being event driven, lead to
> > > the potential of a lot of idle time since the user might have to think
> > > about what he sees for a while before requesting more action.
> > > Typically (in my experience) the only way to run something on a large
> > > computer is to submit a batch job, and then it's supposed to crank
> > > away like mad, not wait for user input via some connection to a GUI.
> > > Therefore it's either practically or politically impossible to harness
> > > a large number of processors for interactive data exploration.
> > >
> > > Hence my interest in multiple users.  Let's say you have a cluster
> > > with 100 machines and 4 users.  One way to handle this would be to
> > > give each user a separate controller and 25 engines.  This is nice
> > > because it insulates users from one another, and a user can be sure
> > > that when he tells his controller to do something, there will be
> > > engines available.  However, the downside is that if the four users
> > > are doing interactive data exploration, then there will be a lot of
> > > idle time and user A would benefit from being able to use user B's
> > > engines when they're idle.
> > >
> > > Another way to do it would be to have one controller with access to
> > > all 100 engines.  This would be truly killer since it would be as
> > > though you had 100 processors inside your desktop machine.  You'd
> > > click "View Some Complicated Plot" and 100 processors would crank away
> > > at generating it, returning the processed data to your desktop where
> > > it's dutifully plotted in a GUI window.  The guy in the next office
> > > would be doing the same thing, and unless you both happened to hit
> > > "Plot" at the same moment, you won't notice each other's presence.
> > >
> > > That would be incredible, and would, I think, drag high end computing
> > > into the modern era. There's a world of difference in how you think
> > > about things if you're doing it interactively in real-time as opposed
> > > to waiting minutes or hours for the result.
> >
> > What you describe is precisely what we've had in mind for a long time.
> > The infrastructure is getting to that point, though it's not quite
> > that seamless yet.  We'll get there though...
> >
> > > Last, an observation which is sure to categorize me as a lunatic: In
> > > poking around the ipython1 code, I came across several places where
> > > source code is laboriously manipulated as strings.  That's a heroic
> > > effort, but it makes me sad, because the Lisp people realized the
> > > usefulness of representing source code in one of the language's basic
> > > data structures since its inception in the 60's.  And they realized
> > > the usefulness of manipulating source code with the language (via
> > > macros) since the early 70's.  And you can add type declarations at
> > > will if you want the compiler to be able to do a better job optimizing
> > > your code.  And the compilers compile to native machine code. Those
> > > guys didn't have bad ideas--they just had the misfortune of being 40
> > > years ahead of their time.
> >
> > Yes, unfortunately Python isn't lisp in that regard.  There is a way
> > to get to the AST, and in core1/ you'll find some code by Robert Kern
> > that already uses it.  But Python doesn't really 'sell' AST
> > manipulations as a user feature, so I honestly don't know exactly what
> > can be done there.  Perhaps we can replace some of our current
> > string-munging with more AST work, I don't know.  Feel free to pitch
> > in if you notice anything specific of that nature.
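For the curious, the AST route Fernando mentions is now straightforward with the stdlib `ast` module (which postdates this thread; in 2007 the older `compiler` package filled this role). A minimal sketch of parsing a code string into a tree instead of munging it as text:

```python
# Parse a code string into an AST, inspect it, and compile it back --
# one alternative to manipulating source code as raw strings.

import ast

source = "result = 2 + 3"
tree = ast.parse(source)

# The top-level statement is an assignment node.
print(type(tree.body[0]).__name__)   # -> Assign

# Compile the (possibly transformed) tree and execute it.
namespace = {}
exec(compile(tree, filename="<ast>", mode="exec"), namespace)
print(namespace["result"])           # -> 5
```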
> >
> > Cheers, and many thanks for your comments!
> >
> > f
> > _______________________________________________
> > IPython-user mailing list
> > IPython-user@scipy.org
> > http://lists.ipython.scipy.org/mailman/listinfo/ipython-user
> >

