[SciPy-user] array vs matrix, converting code from matlab
Ed Schofield
schofield at ftw.at
Fri Apr 21 05:17:34 CDT 2006
David Cournapeau wrote:
> To be more specific, I am trying to convert a function which computes
> multivariate Gaussian densities. It should be able to handle the scalar
> case, the case where the mean is a vector, and the case where va is a
> vector (diagonal covariance matrix) or a square matrix (full covariance
> matrix).
> So, in matlab, I simply do:
>
> function [n, d, K, varmode] = gaussd_args(data, mu, var)
>
> [n, d] = size(data);
> [dm0, dm1] = size(mu);
> [dv0, dv1]= size(var);
>
> And I check that the dimensions are what I expect afterwards. Using
> arrays, I don't see a simple way to do that while passing scalar
> arguments to the function. So either I should be using the matrix type
> (and calling asmatrix on the arguments), or I should never pass scalars
> to the function, and always pass arrays. But maybe I've used matlab too
> much, and there is a much simpler way to do that in scipy.
> To sum it up, what is the convention in scipy when a function
> handles both scalars and arrays? Is there an idiom to treat scalars and
> arrays of size 1 the same way, whatever the number of dimensions the
> arrays may have?
>
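For reference, NumPy's analogue of MATLAB's size() is the shape attribute, and asarray lets you inspect shapes uniformly whether the caller passed a scalar, a vector, or a matrix. A rough translation of the MATLAB checks above (the function name matches your MATLAB code; it assumes data is (n, d) array-like) might look like:

```python
import numpy as np

def gaussd_args(data, mu, va):
    # NumPy translation of the MATLAB size checks; assumes
    # `data` is a 2-D (n, d) array-like.
    data = np.asarray(data)
    n, d = data.shape
    dm = np.asarray(mu).shape  # () scalar, (d,) mean vector
    dv = np.asarray(va).shape  # () scalar, (d,) diagonal, (d, d) full
    return n, d, dm, dv
```

You'd then validate dm and dv against (n, d) afterwards, just as in the MATLAB version.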
You could use rank-0 arrays instead of scalars. For example, if your
function were to wrap the arguments up with 'asarray', they'd then have
the normal methods and attributes of arrays:
from numpy import asarray

def foo(x, mu, va):
    x = asarray(x)
    mu = asarray(mu)
    va = asarray(va)
    if mu.ndim == 0 and va.ndim == 0:
        # call the scalar implementation
        return scalar_implementation(x, mu, va)
    if mu.ndim == 1 and va.ndim == 1:
        # call the scalar implementation on each element
        # (diagonal covariance)
        ...
    if mu.ndim == 1 and va.ndim == 2:
        # call the matrix implementation (full covariance)
        ...
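To see why this works: asarray leaves arrays untouched but wraps a Python scalar as a rank-0 array, so every input ends up with an ndim attribute you can dispatch on. A quick sketch:

```python
import numpy as np

# asarray wraps scalars as rank-0 arrays and leaves arrays alone,
# so ndim distinguishes the three cases uniformly.
print(np.asarray(2.5).ndim)         # scalar case        -> ndim 0
print(np.asarray([0.0, 1.0]).ndim)  # diagonal covariance -> ndim 1
print(np.asarray(np.eye(2)).ndim)   # full covariance     -> ndim 2

# A size-1 array of any rank can still be reduced to a scalar:
v = np.asarray([[3.0]])
print(v.size == 1, v.item())
```

So you never need separate scalar and array entry points; one function handles all the cases.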
-- Ed