[SciPy-user] optimization using fmin_bfgs with gradient information

Ernest Adrogué eadrogue@gmx....
Sun Jul 19 06:05:43 CDT 2009


Hi,
18/07/09 @ 18:36 (+0200), thus spake Sebastian Walter:
> I don't find it so hard to believe that you got your gradient function wrong.
> Could you post the code of your objective function?

Here it goes:

def fobj(self, x):

    # x = (alpha1...alphan, beta0..betan, gamma, rho)

    n = self.n

    # use absolute values for alphas and betas

    y = [abs(i) for i in x[:-2]]

    # alpha0 = n - sum(alpha1...alphan)

    y.insert(0, abs(n-sum(y[:n-1])))

    alpha = dict(zip(self.names, y[:n]))
    beta = dict(zip(self.names, y[n:]))

    gamma = abs(x[-2])
    rho = x[-1]

    pseudo_likelihood = 0

    for obs in self.observations:

        mu1 = alpha[obs.ht] * beta[obs.at] * gamma
        mu2 = alpha[obs.at] * beta[obs.ht]
        tau = self.tau(mu1, mu2, rho, obs.hg, obs.ag)

        # avoid log(0)
        mu1 = mu1 if mu1 > 0 else 1e-10
        mu2 = mu2 if mu2 > 0 else 1e-10
        tau = tau if tau > 0 else 1e-10

        pseudo_likelihood += math.log(tau)
        pseudo_likelihood += obs.hg * math.log(mu1) - mu1
        pseudo_likelihood += obs.ag * math.log(mu2) - mu2

    return -pseudo_likelihood
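
(For reference, a self-contained toy example of how an analytic gradient is
handed to fmin_bfgs through the fprime argument; the quadratic objective below
is only an illustration, not the model above.)

# Toy illustration (not the model above): fmin_bfgs with an analytic
# gradient passed through fprime.  If fprime is omitted, fmin_bfgs
# approximates the gradient numerically instead.
import numpy as np
from scipy.optimize import fmin_bfgs

def f(x):
    # simple quadratic with minimum at (1, 2)
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def grad_f(x):
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])

xopt = fmin_bfgs(f, np.zeros(2), fprime=grad_f, disp=0)
print(xopt)   # approximately [1. 2.]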

> Maybe you've just got the wrong sign. Near the optimum this would be OK,
> but a descent step taken away from the optimum is bound to fail.

Yes, it must be the gradient that is wrong. It occurs to me that it could be
related to the fact that I'm clamping the value of tau() in the objective
function when tau is not positive, and I don't think the tau_prime() used in
the gradient reflects this.
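
One way to pin this down is to compare the analytic gradient with a numerical
one using scipy.optimize.check_grad. Below is a self-contained sketch; f() and
g() are toy stand-ins rather than the actual fobj and its gradient, but f()
clamps its argument the same way the objective clamps mu1, mu2 and tau, while
g() is the derivative that ignores the clamp:

# Sketch: clamping inside the objective but not in the analytic gradient
# makes the two disagree wherever the clamp is active.
import numpy as np
from scipy.optimize import check_grad, approx_fprime

def f(x):
    t = x[0]
    t = t if t > 0 else 1e-10       # same kind of clamp as in the objective
    return -np.log(t)

def g(x):
    return np.array([-1.0 / x[0]])  # "textbook" derivative, ignores the clamp

x0 = np.array([-0.5])               # a point where the clamp is active
print(check_grad(f, g, x0))         # large error: the gradients disagree here
print(approx_fprime(x0, f, 1e-8))   # numerical gradient is ~0 (f is flat there)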

Thanks.


Ernest

