[Numpy-discussion] Optimized sum of squares
Tue Oct 20 14:09:36 CDT 2009
> On Sun, Oct 18, 2009 at 6:06 AM, Gary Ruben <email@example.com> wrote:
>> Hi Gaël,
>> If you've got a 1D array/vector called "a", I think the normal idiom is
>> np.dot(a, a)
>> For the more general case, I think
>> np.tensordot(a, a, axes=something_else)
>> should do it, where you should be able to figure out something_else for
>> your particular case.
> Is it really possible to get the same as np.sum(a*a, axis) with
> tensordot if a.ndim == 2?
> Whichever way I try the "something_else", I get extra cross terms, as in
> np.dot(a.T, a).
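
(A small illustration of the cross-term problem, using a hypothetical 2D
example: tensordot contracts one pair of axes completely, so it yields the
full matrix np.dot(a.T, a) rather than just its diagonal, which is the
per-column sum of squares:)

    import numpy as np

    a = np.arange(6.0).reshape(3, 2)
    # Contracting axis 0 of both operands gives the 2x2 Gram matrix of the
    # columns, i.e. np.dot(a.T, a), cross terms included.
    g = np.tensordot(a, a, axes=([0], [0]))
    assert np.allclose(g, np.dot(a.T, a))
    # What we actually want is only the diagonal of that matrix:
    assert np.allclose(np.sum(a * a, axis=0), np.diag(g))
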
It seems like this would be a good place to apply numpy's
higher-dimensional ufuncs: what you want seems to just be the vector
inner product, broadcast over all other dimensions. In fact I believe
this is implemented in numpy as a demo: numpy.core.umath_tests.inner1d
should do the job.
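
(A minimal sketch, assuming a NumPy build of that era where the demo module
numpy.core.umath_tests is available -- it is not part of the public API, so
the import may need adjusting:)

    import numpy as np
    from numpy.core.umath_tests import inner1d

    a = np.arange(12.0).reshape(4, 3)
    # inner1d has gufunc signature (n),(n)->(): it takes the inner product
    # over the last axis and broadcasts over the rest, giving per-row sums
    # of squares here.
    assert np.allclose(inner1d(a, a), np.sum(a * a, axis=-1))

To sum over axis 0 instead, pass the transpose: inner1d(a.T, a.T).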