[SciPy-user] fsolve with sparse matrices

David Huard david.huard@gmail....
Fri Apr 17 21:11:22 CDT 2009


Have you looked at pyamg? It has a number of sparse matrix solvers based on
multigrid.

David

On Fri, Apr 17, 2009 at 12:28 PM, Pauli Virtanen <pav@iki.fi> wrote:

> Fri, 17 Apr 2009 17:29:29 +0200, Jan Wicijowski wrote:
> [clip]
> > I would like to ask if anyone has ever been confronted with solving a
> > nonlinear system with scipy.optimize.fsolve with a huge number of
> > equations, say 10000.
>
> Sure, but as you noticed, fsolve won't cut it, as it assumes dense
> matrices.
>
> [clip: fsolve & sparse jacobian]
> > Injecting a custom function in place of scipy.optimize.minpack.check_func
> > didn't work with fsolve either. This wasn't surprising, as I guessed
> > that the FORTRAN hybrj routine wouldn't be able to deal with scipy's
> > sparse matrices.
> >
> > So, am I left with rewriting the original FORTRAN hybrj source to
> > python, or is there somebody, who dealt with such problem?
>
> Translating hybrj directly to sparse matrices is a bit of work; it's a
> trust-region Newton method, so it doesn't simply invert the Jacobian, but
> solves a constrained linear subproblem involving it at each step. (I
> think some of the tricks it uses work well only with dense matrices.) In
> principle, writing something like this with sparse matrices should
> nevertheless be possible with the tools in Scipy (though Scipy does not
> have a sparse QR decomposition).
>
> If you write a sparse trust-region Newton algorithm in Python, I'm sure
> there's a place in Scipy for it :)
>
>    ***
>
> The easier way would be just to implement a Newton method combined with
> line search:
>
>        import numpy as np
>        from scipy.sparse.linalg import spsolve
>        from numpy.linalg import norm
>
>        x = x0
>        maxiter = 100
>        abs_tolerance = 1e-8
>        for k in xrange(maxiter):
>            J = jacobian(x)   # sparse Jacobian matrix
>            r = residual(x)
>            d = -spsolve(J, r)  # or an iterative solver, e.g. gmres
>
>            # check convergence
>            r_norm = norm(r)
>            if r_norm < abs_tolerance:
>                break
>
>            # naive backtracking line search; could consider using
>            # scipy.optimize.line_search instead
>            for alpha in [1.0, 0.5, 0.1, 0.01]:
>                x2 = x + alpha*d
>                r2 = residual(x2)
>                if norm(r2) < r_norm:
>                    break
>            x = x2
>        else:
>            raise RuntimeError("Didn't converge")
>
> It's very naive, but if your problem is easy enough, it works.
>
> --
> Pauli Virtanen
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user@scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
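To make the sketch in Pauli's mail concrete, here is a self-contained
version applied to a hypothetical toy problem (a 1-D Laplacian plus a
cubic nonlinearity; the problem, sizes, and tolerances are illustrative
choices, not anything from the original discussion):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve
from numpy.linalg import norm

n = 1000
# Toy nonlinear system:  F(x) = A x + x**3 - b = 0,
# where A is a sparse tridiagonal (1-D Laplacian) matrix.
A = sparse.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

def residual(x):
    return A.dot(x) + x**3 - b

def jacobian(x):
    # J = A + diag(3 x**2); the Jacobian stays sparse.
    return (A + sparse.diags(3.0 * x**2, 0)).tocsc()

x = np.zeros(n)
abs_tolerance = 1e-8
maxiter = 100
for k in range(maxiter):
    r = residual(x)
    r_norm = norm(r)
    if r_norm < abs_tolerance:
        break
    # Newton step via a sparse direct solve
    d = -spsolve(jacobian(x), r)
    # naive backtracking line search, as in the sketch above
    for alpha in [1.0, 0.5, 0.1, 0.01]:
        x2 = x + alpha * d
        if norm(residual(x2)) < r_norm:
            break
    x = x2
else:
    raise RuntimeError("Didn't converge")

print("converged in %d iterations, ||F(x)|| = %.2e"
      % (k, norm(residual(x))))
```

For this particular system the Jacobian is symmetric positive definite,
so the full Newton step is almost always accepted and only a handful of
iterations are needed; harder problems are exactly where the line search
(or a proper trust region) starts to matter.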