[Numpy-discussion] Optimized sum of squares

Anne Archibald peridot.faceted@gmail....
Tue Oct 20 17:59:42 CDT 2009


2009/10/20  <josef.pktd@gmail.com>:
> On Tue, Oct 20, 2009 at 3:09 PM, Anne Archibald
> <peridot.faceted@gmail.com> wrote:
>> 2009/10/20  <josef.pktd@gmail.com>:
>>> On Sun, Oct 18, 2009 at 6:06 AM, Gary Ruben <gruben@bigpond.net.au> wrote:
>>>> Hi Gaël,
>>>>
>>>> If you've got a 1D array/vector called "a", I think the normal idiom is
>>>>
>>>> np.dot(a,a)
>>>>
>>>> For the more general case, I think
>>>> np.tensordot(a, a, axes=something_else)
>>>> should do it, where you should be able to figure out something_else for
>>>> your particular case.
>>>
>>> Is it really possible to get the same as np.sum(a*a, axis) with
>>> tensordot if a.ndim == 2?
>>> Whatever I try for the "something_else", I get extra terms, as in np.dot(a.T, a).
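
A quick sketch of where those extra terms come from (the example array
here is just an illustration): with axes=([0], [0]) tensordot contracts
the shared axis completely, so you get the full Gram matrix
np.dot(a.T, a), and the column-wise sums of squares are only its
diagonal:

import numpy as np

a = np.arange(6.0).reshape(3, 2)

# 1-D case: np.dot on a vector really is the sum of squares.
a1 = a[:, 0]
assert np.dot(a1, a1) == np.sum(a1 * a1)

# 2-D case: tensordot contracts the shared axis completely, so it
# returns the full Gram matrix np.dot(a.T, a) ...
gram = np.tensordot(a, a, axes=([0], [0]))

# ... and np.sum(a*a, axis=0) is only its diagonal.
print(np.diag(gram))          # [ 20.  35.]
print(np.sum(a * a, axis=0))  # [ 20.  35.]
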
>>
>> It seems like this would be a good place to apply numpy's
>> higher-dimensional ufuncs: what you want seems to just be the vector
>> inner product, broadcast over all other dimensions. In fact I believe
>> this is implemented in numpy as a demo: numpy.umath_tests.inner1d
>> should do the job.
>
> Thanks, this works well; it needs "core" in the name.
> (I might have to learn how to swap or roll axes to use this for more than 2d.)
>
>>>> np.core.umath_tests.inner1d(a.T, b.T)
> array([12,  8, 16])
>>>> (a*b).sum(0)
> array([12,  8, 16])
>>>> (a*a).sum(0)
> array([126, 166, 214])
>>>> np.core.umath_tests.inner1d(a.T, a.T)
> array([126, 166, 214])
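
On the roll-an-axis point above: inner1d's signature is (i),(i)->(), so
it contracts over the last axis and broadcasts over all of the leading
ones. One way to handle more than 2d is to roll the reduction axis to
the end first; a sketch (sum_of_squares is just an illustrative name):

import numpy as np
from numpy.core.umath_tests import inner1d

def sum_of_squares(a, axis=-1):
    # inner1d contracts over the last axis and broadcasts over the
    # rest, so move `axis` to the end before calling it.
    a = np.asarray(a)
    a = np.rollaxis(a, axis, a.ndim)
    return inner1d(a, a)

a = np.arange(24.0).reshape(2, 3, 4)
print(np.allclose(sum_of_squares(a, axis=1), (a * a).sum(axis=1)))  # True
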
>
>
> What's the status on these functions? They don't show up in the docs
> or help, except for
> a brief mention in the c-api:
>
> http://docs.scipy.org/numpy/docs/numpy-docs/reference/c-api.generalized-ufuncs.rst/
>
> Are they meant for public consumption, and should they go into the docs?
> Or do they remain a hidden secret, to force users to read the mailing lists?

I think the long-term goal is to have a completely ufuncized linear
algebra library, and I think these functions are just tests of the
gufunc features. In principle, at least, it wouldn't actually be too
hard to fill out a full linear algebra library, since the per
"element" linear algebra operations already exist. Unfortunately the
code should exist for many data types, and the code generator scheme
currently used to do this for ordinary ufuncs is a barrier to
contributions. It might be worth banging out a doubles-only generic
ufunc linear algebra library (in addition to
numpy.linalg/scipy.linalg), just as a proof of concept.
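
For concreteness, this is roughly what that buys you, using the other
test gufunc that I believe ships next to inner1d,
numpy.core.umath_tests.matrix_multiply (signature (m,n),(n,p)->(m,p)):
the core operation applies per "element" and all leading axes
broadcast, which is what a ufuncized linalg call would look like:

import numpy as np
from numpy.core.umath_tests import matrix_multiply

# A stack of five independent 3x3 times 3x2 products.
A = np.random.randn(5, 3, 3)
B = np.random.randn(5, 3, 2)

# Today: loop in Python over the per-"element" operation.
looped = np.array([np.dot(A[i], B[i]) for i in range(len(A))])

# gufunc style: one call, the leading axis broadcasts.
stacked = matrix_multiply(A, B)

print(np.allclose(looped, stacked))  # True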

Anne

