[Numpy-discussion] arrays of matrices
Robert Kern
robert.kern@gmail....
Thu Feb 28 18:55:11 CST 2008
On Thu, Feb 28, 2008 at 6:43 PM, Geoffrey Irving <irving@pixar.com> wrote:
> > The magic is in In[27]. We reshape the array of vectors to be
> > compatible with the shape of the array of matrices. When we multiply
> > the two together, it is as if we multiplied two (n,3,3) matrices, the
> > latter being the vectors repeated 3 times. Then we sum along the rows
> > of each of the product matrices to get the desired dot product.
>
> Thanks! That'll do nicely.
>
> For large matrices, that could be problematic due to the blowup in
> intermediate memory; on the other hand, at that size a loop over the
> top-level index wouldn't add much cost.
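The reshape-and-sum recipe quoted above can be sketched in a self-contained
way. The shapes and input values here are assumptions (the thread never shows
how A and b were built), chosen to reproduce the output printed below:

```python
import numpy as np

# Assumed inputs: n matrices A[i] of shape (3, 3) and n vectors b[i].
n = 5
A = np.arange(n * 9).reshape(n, 3, 3)
b = np.arange(0, n * 30, 10).reshape(n, 3)

# Broadcast b against the rows of each matrix, then sum along the rows:
# this computes all n matrix-vector products without a Python loop.
c = (A * b.reshape(n, 1, 3)).sum(axis=-1)

# Check against an explicit loop over the top-level index.
expected = np.array([A[i].dot(b[i]) for i in range(n)])
assert np.array_equal(c, expected)
```

The intermediate product A * b.reshape(n, 1, 3) is a full (n, 3, 3) array,
which is the memory blowup the quoted text worries about.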
If you really want to save memory and you can destroy A, then you
could do the multiplication in-place. If you really want to get fancy
and can destroy b, you can use it as storage for the summation output,
too.
In [11]: A *= b.reshape([n,1,3])
In [12]: c = A.sum(axis=-1, out=b)
In [13]: b
Out[13]:
array([[   50,   140,   230],
       [ 1220,  1580,  1940],
       [ 4010,  4640,  5270],
       [ 8420,  9320, 10220],
       [14450, 15620, 16790]])
In [14]: c is b
Out[14]: True
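For completeness, a self-contained version of the in-place session above. The
inputs are a guess at the original A and b (they are not shown in the thread),
picked so the result matches the printed Out[13]:

```python
import numpy as np

# Assumed inputs matching the session above.
n = 5
A = np.arange(n * 9).reshape(n, 3, 3)
b = np.arange(0, n * 30, 10).reshape(n, 3)

expected = np.array([A[i].dot(b[i]) for i in range(n)])  # reference result

A *= b.reshape(n, 1, 3)    # destroys A: scales each row of A[i] by b[i]
c = A.sum(axis=-1, out=b)  # destroys b: row sums written into b's storage

assert c is b              # no new array was allocated for the result
assert np.array_equal(c, expected)
```

The only extra allocation left is the reshaped view of b, which shares b's
memory, so this variant avoids the (n, 3, 3) temporary entirely.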
> > PS: Are you perchance the Geoffrey Irving I knew at CalTech, class of '03?
>
> Yep. That would answer the question I had when I started reading this email.
> However, it's spelled Caltech, not CalTech!
Yeah, yeah, yeah. The Wikis, they have taken over my finger ReFlexes.
NumPy Rudds += 1. Take that, Tim Hochberg! :-)
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco