[SciPy-user] non-linear multi-variate optimization
Sat Jul 18 11:43:02 CDT 2009
I'm not quite understanding what you're saying. According to this:
the Gradient and Jacobian are one and the same.
what would be the form of the gradient in my case?
Thanks for the help!
On Wed, Jul 15, 2009 at 3:32 AM, Sebastian
> The gradient g is only defined for functions f: R^N --> R and is
> simply an array with shape (N,)
> what you sketched in your post is the Jacobian J of a function f: R^N --> R^M
> Typically, the Jacobian J is defined to have the shape (M,N), but
> there are exceptions.
> hope that helps a little
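As a concrete sketch of the shapes Sebastian describes (the objective here is the Rosenbrock function, chosen purely for illustration -- it is not from the original post):

```python
import numpy as np
from scipy.optimize import fmin_bfgs

# Scalar objective f: R^N --> R, here with N = 2 parameters
def f(p):
    x, y = p
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

# Its gradient is simply an array with shape (N,), one entry
# per parameter -- no second dimension
def grad_f(p):
    x, y = p
    return np.array([
        -2 * (1 - x) - 400 * x * (y - x ** 2),  # df/dx
        200 * (y - x ** 2),                     # df/dy
    ])

p_min = fmin_bfgs(f, x0=[0.0, 0.0], fprime=grad_f, disp=False)
```

Here `fprime` must return shape (2,) to match the two optimization variables; the minimum lands at (1, 1).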
> On Wed, Jul 15, 2009 at 2:00 AM, Chris Colbert<email@example.com> wrote:
>> The routines for non-linear optimization in scipy.optimize take an
>> argument for a function that computes the gradient.
>> What should be the format of the return value of this function? I am
>> assuming that it's the gradient of the function with respect to the
>> independent variables, in row-vector format.
>> for example say we have:
>> f(x,y,z; a1, a2, a3) where a1, a2, and a3 are the independent variables.
>> Should the gradient of N x,y,z points then be of the form:
>> df/da = [[df(X0)/da1, df(X0)/da2, df(X0)/da3],
>>          [df(X1)/da1, df(X1)/da2, df(X1)/da3],
>>          ...,
>>          [df(Xn)/da1, df(Xn)/da2, df(Xn)/da3]]
>> where Xn is the set of (xn, yn, zn) ?
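For the least-squares case that layout is exactly right: with `scipy.optimize.leastsq`, `Dfun` returns the Jacobian of the residuals with shape (n_points, n_params). A minimal sketch, assuming a linear model a1*x + a2*y + a3*z and synthetic data (both invented here for illustration):

```python
import numpy as np
from scipy.optimize import leastsq

# Synthetic data for the model f(x, y, z; a1, a2, a3) = a1*x + a2*y + a3*z
rng = np.random.RandomState(0)
X = rng.rand(50, 3)                    # N = 50 points (x, y, z)
a_true = np.array([2.0, -1.0, 0.5])
data = X.dot(a_true)

def residuals(a):
    return X.dot(a) - data             # shape (50,): one residual per point

def jac(a):
    # Jacobian of the residuals, shape (M, N) = (n_points, n_params):
    # row n is [dr_n/da1, dr_n/da2, dr_n/da3], matching the matrix
    # sketched above (for this linear model it is just X)
    return X

a_fit, ier = leastsq(residuals, x0=[1.0, 1.0, 1.0], Dfun=jac)
```

With the default `col_deriv=0`, `leastsq` expects that (M, N) row layout; set `col_deriv=1` if you return the transpose.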
>> SciPy-user mailing list