[SciPy-user] How to start with SciPy and NumPy

David Cournapeau cournape@gmail....
Sun Jan 25 05:43:27 CST 2009

On Sun, Jan 25, 2009 at 8:17 PM, Vicent <vginer@gmail.com> wrote:
> On Sat, Jan 24, 2009 at 21:08, David Baddeley <david_baddeley@yahoo.com.au>
> wrote:
>> Hi Vincent,
>> if you're new to both python and numerical programming I'd suggest you
>> make yourself familiar with basic python first and then move on to the
>> numerical stuff - it'll probably be easier that way.
> Thank you for the advice.
>> To answer your question, there are two main ways in which Numpy and Scipy
>> help with numeric programming. The first (and simplest) of these is by
>> providing lots of pre-rolled algorithms to do useful things (e.g. computing
>> bessel functions, fourier transforms, and much more).
> Yes, I realize that. In that respect, NumPy+SciPy are like any other
> Python package, for me. If at some point I need something specific, I check
> whether a package for it already exists.

Depending on your POV, this may be true. But for many scientific
uses, an array capability is so fundamental that it has strong
consequences for all the dependent code (e.g. very little scientific
code in python uses the list as its core data structure). It is a
fundamental building block, if you will.
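A minimal sketch (mine, not from the original mail) of why arrays, not lists, end up as the core data structure: arithmetic operators act elementwise on numpy arrays, whereas on lists they mean something entirely different:

```python
import numpy as np

a_list = [1, 2, 3]
a_arr = np.array([1, 2, 3])

# "+" on lists concatenates...
concatenated = a_list + [10, 20, 30]      # [1, 2, 3, 10, 20, 30]

# ...while "+" on arrays adds elementwise, which is what numeric code wants.
summed = a_arr + np.array([10, 20, 30])   # array([11, 22, 33])

print(concatenated)
print(summed)
```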

> I understand this advantage. Sorry if this was already explained in the
> online documentation, but I was not able to find it...

I think the online documentation is organized for people who are
already familiar with those concepts - most people doing numerical
computation know at least one of R, matlab, IDL, or labview. I am
not sure we have documentation for people unfamiliar with those
concepts - that would certainly be nice to have.

> (1) Is there any point in maintaining a list and then create a temporary
> NumPy array just to perform calculations, and then "copy and paste" the
> results on the list?

That depends on whether you need a list for later computation: a list
generally takes much more memory if you only care about homogeneous
items (a numpy array takes only M * N bytes plus a small overhead,
where M is the size in bytes of one item - 4 for a 32-bit
integer - and N is the number of items). OTOH, if you keep resizing
your data, a list may make sense - and lists can be faster than arrays
for small sizes.

There is no single rule, but for computation on a lot of data, numpy
arrays are certainly a powerful data structure, useful in their own right.
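To make the M * N accounting above concrete, here is a quick comparison (my sketch; the exact list numbers vary by Python version, but the order of magnitude holds):

```python
import sys
import numpy as np

n = 1_000_000
lst = list(range(n))
arr = np.arange(n, dtype=np.int32)

# The array stores the raw values contiguously: 4 bytes per int32 item.
array_bytes = arr.nbytes

# The list stores n pointers, each referring to a full Python int object,
# so we count the list's own buffer plus every int object it points to.
list_bytes = sys.getsizeof(lst) + sum(sys.getsizeof(x) for x in lst)

print(array_bytes, list_bytes)
```

On a typical CPython build the list side comes out many times larger than the 4 MB the array needs.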

> (2) What about lists with different typed items within them?

Numpy arrays - and arrays in general - fundamentally rely on the
assumption that every item has the same type. Much of the performance
of arrays comes from this assumption (it means you can access any item
randomly without traversing any other item first, etc...).
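You can see this assumption at work in how numpy picks a dtype (my illustration; the exact default integer dtype is platform dependent):

```python
import numpy as np

# Homogeneous input gets a fixed-width numeric dtype.
a = np.array([1, 2, 3])
print(a.dtype)   # an integer dtype, e.g. int64 on most 64-bit platforms

# Mixing ints and floats silently upcasts everything to float.
b = np.array([1, 2.5, 3])
print(b.dtype)   # float64

# Truly mixed types fall back to dtype=object: an array of pointers,
# which loses the random-access performance benefit described above.
c = np.array([1, "two", 3.0], dtype=object)
print(c.dtype)   # object
```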

> (3) Can I perform operations over all the elements (scalars) in one given
> array that meet some given condition?? For example, in your previous
> example, "compute the sine only for those elements which are multiples of
> pi/4 (or whatever)".

Of course. For example, getting an array with all the positive numbers is:

b = a[a>0]

And this will be much faster than a list comprehension for relatively
large arrays.
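And the same boolean-mask idea answers the "apply sin only where a condition holds" question. A small sketch (mine, using a simple positivity condition rather than the pi/4 test, which is awkward with floats):

```python
import numpy as np

a = np.array([-2.0, 0.5, 1.0, 3.0])

# Boolean mask selecting the elements that satisfy the condition.
mask = a > 0

# Apply sin only to the selected elements, leaving the rest unchanged.
result = a.copy()
result[mask] = np.sin(a[mask])

# Alternative: np.where(mask, np.sin(a), a) builds the result in one
# expression, but note it evaluates np.sin over the whole array first.
print(result)
```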


More information about the SciPy-user mailing list