[Numpy-discussion] result shape from dot for 0d, 1d, 2d scalar
Wed Nov 28 11:31:25 CST 2012
On Wed, 2012-11-28 at 11:11 -0500, Skipper Seabold wrote:
> On Tue, Nov 27, 2012 at 11:16 AM, Sebastian Berg
> <firstname.lastname@example.org> wrote:
> On Mon, 2012-11-26 at 13:54 -0500, Skipper Seabold wrote:
> > I discovered this because scipy.optimize.fmin_powell appears to
> > squeeze 1d argmin to 0d unlike the other optimizers, but that's a
> > different story.
> > I would expect the 0d array to behave like the 1d array, not the
> > 2d as it does below. Thoughts? Maybe too big of a pain to change
> > behavior if indeed it's not desired, but I found it to be
> I don't quite understand why it is unexpected. A 1-d array is
> a vector, a 0-d array is a scalar.
> When you put it like this I guess it makes sense. I don't encounter 0d
> arrays often and never think of a 0d array as truly a scalar like
> np.array(1.).item(). See below for my intuition.
I think you should see them as scalars for mathematical operations,
though. The differences are minor in any case, and numpy typically
silently converts scalars -> 0d arrays on function calls and back again
to return scalars.
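For illustration (not part of the original mail), a minimal sketch of the scalar <-> 0d round trip:

```python
import numpy as np

# A 0-d array holds a single value, just like a scalar:
a = np.array(2.0)
assert a.ndim == 0 and a.shape == ()

# Ufuncs accept the 0-d array but hand back a numpy scalar:
r = np.add(a, a)
print(type(r))   # a numpy scalar type, not ndarray
print(float(r))  # 4.0
```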
> Maybe I'm misunderstanding. How do you mean there is no broadcasting?
Broadcasting adds dimensions at the start. To handle a vector like a
matrix product, dot does not always add the dimension at the start: for
matrix.vector, the vector (N,) is treated much like (N,1). Also, the
result of dot is not necessarily 2-d, which it would have to be under
your reasoning, and under broadcasting terms.
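A quick sketch of that point (my example, not from the original mail): the (N,) vector acts like (N,1) inside the product, but the result is 1-d, which prepend-only broadcasting could not produce:

```python
import numpy as np

A = np.ones((3, 4))
v = np.ones(4)

# matrix.vector drops the trailing dimension in the result:
print(np.dot(A, v).shape)                 # (3,)
# with an explicit (4, 1) column vector the result stays 2-d:
print(np.dot(A, v.reshape(4, 1)).shape)   # (3, 1)
```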
> They're clearly not conformable. Is vector.scalar specially defined
> (I have no idea)? I recall arguing once and submitting a patch such
> that np.linalg.det(5) and np.linalg.inv(5) should be well-defined and
> work but the counter-argument was that a scalar is not the same as a
> scalar matrix. This seems to be an exception.
I do not see an exception: in all cases there is no implicit
(broadcasting-like) adding of extra dimensions (which leads to an error
in most linear algebra functions if the input is not 2-d), and that is
good since "explicit is better than implicit".
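To make that concrete (a sketch of mine, not from the mail): the linalg functions refuse a scalar rather than promoting it to a 1x1 matrix:

```python
import numpy as np

# A scalar is not implicitly treated as a scalar matrix:
try:
    np.linalg.inv(5.0)
except np.linalg.LinAlgError as err:
    print("raises LinAlgError:", err)

# The explicit 1x1 matrix works fine:
print(np.linalg.inv(np.array([[5.0]])))  # [[0.2]]
```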
> Here, I guess, following that counterargument, I'd expected the scalar
> to fail in dot. I certainly don't expect a (N,2).scalar -> (N,2). Or
If you say dot is strictly a matrix product, yes (though it should then
also throw errors for vectors). I think it is simply trying to be more
like the dot I would write down on paper, and thus special-cases
vectors and scalars; this generalization only replaces what would
otherwise be an error in a matrix product!
Maybe a strict matrix product would make sense too, but the dot
function's behavior cannot be changed in any case, so it's pointless to
argue about it. Just make sure your arrays are 2-d (or matrices) if you
want a matrix product, which will give the behavior you expect in a much
more controlled fashion anyway.
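A sketch of the difference (my example, assuming the shapes from later in this thread): dot with a 0-d array degenerates to scalar multiplication, while keeping everything 2-d gives the strict matrix product that fails loudly on non-conformable shapes:

```python
import numpy as np

arr = np.random.random((25, 2))

# 0-d second operand: dot acts like scalar multiplication:
print(np.dot(arr, np.array(2.0)).shape)   # (25, 2)

# Explicit 2-d operands: the non-conformable case errors out instead:
try:
    np.dot(arr, np.array([[2.0]]))        # (25,2).(1,1) is not aligned
except ValueError as err:
    print("raises ValueError:", err)
```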
> I'd expect it to follow the rules of matrix notation and be treated
> like the 1d scalar vector so that (N,1).scalar -> (N,). To my mind,
> this follows more closely to the expectation that (J,K).(M,N) ->
> (J,N), i.e., the second dimension of the result is the same as the
> second dimension of whatever is post-multiplying where the first
> dimension is inferred if necessary (or should fail if non-existent).
> So my expectations are (were)
> (N,).() -> (N,)
> (N,1).() -> (N,)
> (N,1).(1,) -> (N,)
> (N,1).(1,1) -> (N,1)
> (N,2).() -> Error
> > In [1]: arr = np.random.random((25,2))
> > In [2]: np.dot(arr.squeeze(), np.array(2.)).shape
> > Out[2]: (25, 2)
> > Skipper
> > _______________________________________________
> > NumPy-Discussion mailing list
> > NumPy-Discussion@scipy.org
> > http://mail.scipy.org/mailman/listinfo/numpy-discussion