[Numpy-discussion] [matplotlib-devel] Announcing toydist, improving distribution and packaging situation

josef.pktd@gmai...
Tue Dec 29 10:26:45 CST 2009


On Tue, Dec 29, 2009 at 10:55 AM, Gael Varoquaux
<gael.varoquaux@normalesup.org> wrote:
> On Tue, Dec 29, 2009 at 11:34:44PM +0900, David Cournapeau wrote:
>> Buildout and virtualenv all work by sandboxing from the system python:
>> none of them sees the others, which may be useful for development, but
>> as a deployment solution for the casual user who may not be familiar
>> with python, it is useless. A scientist who installs numpy, scipy,
>> etc. to try things out wants to have everything available in one
>> python interpreter, and does not want to jump between different
>> virtualenvs and whatnot to try different packages.
>
> I think that you are pointing out a large source of misunderstanding
> in packaging discussions. The people behind setuptools, pip, or buildout
> care about having a working ensemble of packages that delivers an
> application (often a web application)[1]. You and I, and many scientific
> developers, see
> libraries as building blocks that need to be assembled by the user, the
> scientist using them to do new science. Thus the idea of isolation is not
> something that we can accept, because it means that we are restricting
> the user to a set of libraries.
>
> Our definition of a user is not the same as the user targeted by
> buildout. Our user does not push buttons, but writes code. However,
> unlike the developer targeted by buildout and distutils, our user does
> not want or need to learn about packaging.
>
> Trying to make the debate clearer...

I wanted to say the same thing. Pylons, during its period of active
development, required a different combination of versions of several
packages almost every month. virtualenv and pip are the only solutions
if you don't want to spend all your time updating.
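For concreteness, the workflow I mean is roughly the following sketch: create an isolated environment and install pinned versions into it, so each project can keep its own combination. (The commands use the stdlib `venv` module, which works like the virtualenv tool; the package versions shown are hypothetical examples of pinning, not the actual ones Pylons needed.)

```shell
# Create an isolated environment so pinned versions don't touch
# the system python (stdlib venv; the virtualenv tool is analogous).
python3 -m venv pylons-env

# Use the environment's own pip so installs land only in the sandbox.
# The version numbers here are made-up examples of pinning:
# pylons-env/bin/pip install "Pylons==0.9.7" "SQLAlchemy==0.5.6"

# Confirm the sandboxed interpreter exists and is isolated.
pylons-env/bin/python -c "import sys; print(sys.prefix)"
```

Each such environment sees only what was installed into it, which is exactly the isolation property being debated above.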

In the last half year, I started to have a similar problem with numpy
trunk, scipy, and the rest, but I hope this will only be temporary and
may not really be a problem for the end user.

Additionally, when obtaining packages from pypi, I never had problems
with pure python packages, or with packages that had complete binary
installers (e.g. wxpython or matplotlib).

However, scientific packages with non-trivial build dependencies are
often a pain. (A nice example that I never tried is
http://fenics.org/wiki/Installing_DOLFIN_on_Windows - the website
doesn't respond right now, but it looks like it takes a week to install
all the required source packages.) On pypm.activestate.com, scipy,
matplotlib, and mayavi all fail to build - scipy because of missing
lapack/blas.

That's also a reason why CRAN is nice: it provides automatic,
platform-specific binary installation.

Any improvement will be very welcome, especially if cython comes into
more widespread use. I'm reluctant to use cython in statsmodels
precisely to avoid build and distribution problems, even though it
would be very useful.

Josef

>
> Gaël
>
> [1] I know your position on why simply focusing on sandboxing working
> ensemble of libraries is not a replacement for backward compatibility,
> and will only create impossible problems in the long run. While I agree
> with you, this is not my point here.
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>

