[Numpy-discussion] python reduce vs numpy reduce for outer product
Sat Sep 26 18:22:03 CDT 2009
On Sat, Sep 26, 2009 at 18:17, Erik Tollerud <firstname.lastname@example.org> wrote:
>> I'm sure you mean np.multiply.reduce().
> Yes, sorry - typo.
>>> Or, if there's a better way to just start with the first 3 1d
>>> vectors and jump straight to the broadcast product (basically, an outer
>>> product over an arbitrary number of dimensions...)?
>> Well, numpy doesn't support arbitrary numbers of dimensions, nor will
>> your memory. You won't be able to do more than a handful of dimensions
>> practically. Exactly what are you trying to do? Specifics, please, not
>> toy examples.
> Well, I'm not sure how to get much more specific than what I just
> described. I am computing moments of n-d input arrays given a
> particular axis ... I want to take a sequence of 1-d arrays and get an
> output that has as many dimensions as the input sequence's length, with
> each dimension's size matching the corresponding vector.
> Symbolically, A[i,j,k,...] = v0[i]*v1[j]*v2[k]*... A is then
> multiplied by the input n-d array (same shape as A), and that is
> summed to give the moment.
> And yes, practically, this will only work until I run out of memory,
> but the reduce method works for the n = 1, 2, 3, and 4 cases, and
> potentially in the future it will be needed for higher dimensions (up
> to maybe 8) that are small enough that they won't overwhelm the
> memory. So it seems like a bad idea to write custom versions for
> each potential dimensionality.
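The construction described above can be sketched with np.ix_, which reshapes each 1-d vector so it broadcasts along its own axis; the vector lengths here are just illustrative:

```python
import numpy as np
from functools import reduce

# Three 1-d vectors of different lengths (illustrative values)
vecs = [np.arange(1, n + 1, dtype=float) for n in (2, 3, 4)]

# np.ix_ gives views of shape (2,1,1), (1,3,1), (1,1,4), so the
# reduced product broadcasts to A[i,j,k] = v0[i] * v1[j] * v2[k]
A = reduce(np.multiply, np.ix_(*vecs))

print(A.shape)  # (2, 3, 4) -- one axis per input vector
assert A[1, 2, 3] == vecs[0][1] * vecs[1][2] * vecs[2][3]
```

Because np.ix_ accepts any number of vectors, the same two lines cover the n = 1 through n = 8 cases without dimension-specific code.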
Okay, that's the key fact I needed. When you said that you would have
a long list of vectors, I was worried that you wanted a dimension for
each of them.
You probably aren't going to be able to beat reduce(np.multiply, ...).
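As a hedged end-to-end sketch of the moment computation being discussed (the data array and weights are made up for illustration), the reduce-over-broadcast approach pairs naturally with a final elementwise multiply and sum; np.einsum gives an equivalent result without materializing A:

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(0)
vecs = [np.linspace(0.0, 1.0, n) for n in (3, 4, 5)]  # illustrative weights
data = rng.random((3, 4, 5))                          # illustrative n-d input

A = reduce(np.multiply, np.ix_(*vecs))  # A[i,j,k] = v0[i]*v1[j]*v2[k]
moment = (A * data).sum()               # weight the data and sum all axes

# Same quantity via einsum, avoiding the intermediate array A entirely:
alt = np.einsum('i,j,k,ijk->', *vecs, data)
assert np.isclose(moment, alt)
```

The einsum form trades the explicit A for a contraction, which may matter once the number of dimensions grows toward the 8-dimension cases mentioned above.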
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco