# [SciPy-user] non-linear multi-variate optimization

Robert Kern robert.kern@gmail....
Sat Jul 18 13:02:40 CDT 2009

```On Sat, Jul 18, 2009 at 12:08, Chris Colbert<sccolbert@gmail.com> wrote:
> I'm using fmin_l_bfgs_b.
>
> Here's the code of my objective function. It's the equation of a
> superquadric with 11 free variables (a1, a2, a3, e1, e2, px, py, pz,
> phi, theta, tsai); the variables (xw, yw, zw) are length-N vectors
> representing the world coordinates to which I'm fitting the
> superquadric.  (I know I spelled psi wrong, I need to change it :) )
>
> def superQuadricFit((a1, a2, a3, e1, e2, phi, theta, tsai, px, py, pz),
>                     *args):
>
>    a1 = float(a1)
>    a2 = float(a2)
>    a3 = float(a3)
>    e1 = float(e1)
>    e2 = float(e2)
>    phi = float(phi)
>    theta = float(theta)
>    tsai = float(tsai)
>    px = float(px)
>    py = float(py)
>    pz = float(pz)
>
>    xw = args[0]
>    yw = args[1]
>    zw = args[2]
>
>    cphi = math.cos(phi)
>    ctheta = math.cos(theta)
>    ctsai = math.cos(tsai)
>    sphi = math.sin(phi)
>    stheta = math.sin(theta)
>    stsai = math.sin(tsai)
>
>    nx = cphi * ctheta * ctsai - sphi * stsai
>    ny = sphi * ctheta * ctsai + cphi * stsai
>    nz = -stheta * ctsai
>    ox = -cphi * ctheta * stsai - sphi * ctsai
>    oy = -sphi * ctheta * stsai + cphi * ctsai
>    oz = stheta * stsai
>    ax = cphi * stheta
>    ay = sphi * stheta
>    az = ctheta
>
>    f1 = ((nx * xw + ny * yw + nz * zw - px * nx - py * ny - pz * nz) / a1)
>    f2 = ((ox * xw + oy * yw + oz * zw - px * ox - py * oy - pz * oz) / a2)
>    f3 = ((ax * xw + ay * yw + az * zw - px * ax - py * ay - pz * az) / a3)
>
>    F = ((f1**2)**(1/e2) + (f2**2)**(1/e2))**(e2/e1) + (f3**2)**(1/e1)
>
>    err = (math.sqrt(a1 * a2 * a3) * (F**(e1) - 1))**2
>    sumerr = err.sum()
>
>    print err
>
>    return sumerr
>
> So I would think the gradient should express the steepness of the
> function wrt the 11 variables, and that steepness will be different at
> every point (xw, yw, zw)_i. I can't see how to give any useful
> gradient information in a single N-length vector.

The gradient that fmin_l_bfgs_b needs is not of F, but of sumerr.
sumerr is a single scalar that is a function of the 11 free variables
and thus has an 11-vector as its gradient.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
```
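To illustrate Robert's point: `fmin_l_bfgs_b` can estimate that 11-vector itself if you pass `approx_grad=True`, or you can supply a finite-difference gradient yourself via `fprime`. A minimal sketch, using a toy 3-parameter sum-of-squares objective in place of `superQuadricFit` (the `num_grad` helper is illustrative, not part of SciPy):

```python
import numpy as np

def objective(p, xw, yw, zw):
    # Toy stand-in for superQuadricFit: per-point squared residuals
    # reduced to a single scalar, just as sumerr is.
    a, b, c = p
    err = (a * xw + b * yw + c * zw - 1.0) ** 2
    return err.sum()

def num_grad(f, p, args=(), h=1e-6):
    # Central-difference gradient of a scalar function: one entry per
    # free variable, regardless of how many data points (xw, yw, zw)_i
    # the objective sums over.
    p = np.asarray(p, dtype=float)
    g = np.empty_like(p)
    for i in range(p.size):
        step = np.zeros_like(p)
        step[i] = h
        g[i] = (f(p + step, *args) - f(p - step, *args)) / (2.0 * h)
    return g
```

You would then call `fmin_l_bfgs_b(objective, p0, fprime=lambda p, *a: num_grad(objective, p, a), args=(xw, yw, zw))`, or skip `fprime` entirely and pass `approx_grad=True` to let L-BFGS-B do the same differencing internally.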