[SciPy-dev] SciPy improvements
Thu Apr 12 18:32:14 CDT 2007
Thanks to all of you for your thorough responses.
1) The DE code from Robert:
It looks incredibly simple; I'll check whether it behaves better than
my ugly one on my problem. But that's the thing - I didn't find any
mention of it in the documentation, otherwise I would have been using
your code, because it's simpler (= better). I also couldn't find it on
Google.
I'll also check the simplex method from SciPy - when I was using it, I
just needed a simple script, not a dependency on all of SciPy, and it
was very difficult to pull just the simplex algorithm out of SciPy.
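For the record, calling the simplex method through the SciPy interface
is only a few lines. A minimal sketch on the Rosenbrock function (just
to illustrate the interface, not my actual script):

    # Minimal call to SciPy's Nelder-Mead simplex, scipy.optimize.fmin,
    # shown on the 2-d Rosenbrock function.
    import numpy as np
    from scipy.optimize import fmin

    rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    xmin = fmin(rosen, np.array([-1.0, 1.0]), xtol=1e-6, disp=False)
    print(xmin)  # should end up close to [1.0, 1.0]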
2) I am sending my Broyden update methods in the attachment, together
with tests (you need py.test to run them). I am using a test from some
article, I think from Vanderbilt; however, my code gives slightly
different results. I'll polish it and send it as a patch to SciPy in
the Trac, as directed, so you can just look at how I do it. But when
you do, you will see there is really nothing to it - it's very trivial.
However, my code doesn't use a line search. I am also curious how the
BFGS method is going to work when I implement it - it's very similar
to Broyden, except the update of the Jacobian is a little different.
Could you, Michael, please also rewrite your code in Python?
It would be nice to have all of it in SciPy with the same interface.
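To give an idea, here is roughly what the update in Broyden's second
method looks like in NumPy - a bare sketch of the idea, not the code
from my attachment:

    import numpy as np

    def broyden2_sketch(R, x0, iters=50, alpha=0.1, tol=1e-9):
        # Keep an approximation G of the inverse Jacobian and update
        # it directly (Broyden's second method):
        #   G <- G + (dx - G*dF) * dF^T / (dF^T * dF)
        x = np.asarray(x0, dtype=float)
        F = R(x)
        G = -alpha * np.eye(len(x))   # initial guess for inv(J)
        for _ in range(iters):
            dx = -G @ F               # quasi-Newton step, no line search
            x = x + dx
            F_new = R(x)
            dF = F_new - F
            G = G + np.outer(dx - G @ dF, dF) / (dF @ dF)
            F = F_new
            if np.linalg.norm(F) < tol:
                break
        return x

The attachment contains the other update formulas and the tests.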
BTW - why are you using MATLAB at all? To me, Python with NumPy is
better than anything else I've programmed in, including MATLAB.
3) About the logistic transformation - I was sceptical too, until I
tried it and it converged faster by a factor of 7 on my problem
(chemev). So for me that's enough justification, but of course I am
not saying that it must converge faster for every problem. However, I
implemented it as a wrapper function around all the unconstrained
algorithms with the SciPy interface, so the user is not forced to use
it - he can just try it and see, as I did. I'll post it to the
cookbook.
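Roughly, the wrapper does something like this (a simplified sketch
with made-up names; the cookbook version will be more complete):

    import numpy as np
    from scipy.optimize import fmin

    def wrap_logistic(f, lo, hi):
        # Map an unconstrained y in R^n to x in (lo, hi) through a
        # logistic transform, so any unconstrained minimizer applies.
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        to_x = lambda y: lo + (hi - lo) / (1.0 + np.exp(-y))
        return (lambda y: f(to_x(y))), to_x

    # usage: minimize (x - 2)^2 subject to 0 < x < 1
    g, to_x = wrap_logistic(lambda x: (x[0] - 2.0)**2, [0.0], [1.0])
    print(to_x(fmin(g, np.zeros(1), disp=False)))  # presses against 1.0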
4) About PETSc - I know it's another dependency. However, I noticed
you are using UMFPACK in SciPy, so why not PETSc? I think it contains
many more (and sometimes better) solvers, depending on the problem. It
seems logical to me to either use nothing, or the best library
available, which I believe is PETSc.
5) Documentation: the front page is quite fine, but in my opinion the
documentation needs a complete redesign. First, I believe NumPy should
be separated from SciPy and have its own page (numpy.org), but if you
think it belongs under the hood of scipy.org,
So, I'll copy the page:
into some new one, redesign it the way I would like it to be, and then
you'll tell me what you think about it. The same with the other pages,
if I get a better idea for them. This way I shouldn't spoil anything
in case you didn't like it. Because I don't have just a couple of
On 4/12/07, Michael McNeil Forbes <firstname.lastname@example.org> wrote:
> On 11 Apr 2007, at 1:56 PM, Matthieu Brucher wrote:
> > 2) Nonlinear solvers
> > I have written these nonlinear solvers for the problem R(x) = 0, where
> > x and R have dimension "n":
> > broyden1 - Broyden's first method - is a quasi-Newton-Raphson method
> > updating an approximate Jacobian and then inverting it
> > broyden2 - Broyden's second method - the same as broyden1, but updates
> > the inverse Jacobian directly
> > broyden3 - Broyden's second method - the same as broyden2, but instead
> > of directly computing the inverse Jacobian, it remembers how to
> > construct it using vectors, and when computing inv(J)*F, it uses those
> > vectors to compute this product, thus avoiding the expensive NxN matrix
> > multiplication.
> > broyden_generalized - Generalized Broyden's method, the same as
> > broyden2, but instead of approximating the full NxN Jacobian, it
> > constructs it at every iteration in a way that avoids the NxN matrix
> > multiplication. This is not as precise as broyden3.
> > anderson - extended Anderson method, the same as broyden_generalized,
> > but adds w_0^2*I before taking the inverse, to improve stability
> > anderson2 - the Anderson method, the same as anderson, but formulated
> > differently
> > linear_mixing
> > exciting_mixing
> > I use them in the self-consistent cycle of Density Functional
> > Theory (so I use the terminology of the DFT literature in the names
> > of the methods).
> Could the part that computes the step be separated from the function itself
> and the optimizer? I'm trying to "modularize" nonlinear solvers so as to
> select more efficiently what is needed - kind of optimizer, kind of step,
> kind of stopping criterion, etc.
> It should be possible to modularize these with a step class that maintains
> the state (the Jacobian, or its inverse, etc.).
> (Where is the latest version of your optimization proposal? I have not had
> a chance to look at it yet, but have been meaning to and would like to
> look at the latest version.)
> Perhaps we should make a page on the Wiki to collect suggestions and code.
> I have been meaning to get a good Broyden-based algorithm coded for Python
> for a while. I have a MATLAB version of a globally convergent Broyden
> implementation using a linesearch as a base, with a couple of unusual
> features that might be useful (not specific to Broyden-based methods).
> (The code is based on the presentation of the algorithm given in Numerical
> Recipes, with some modifications suggested by Dennis and Schnabel's book,
> and is partially documented.)
> 1) Variable tolerances. The idea is to quickly estimate the starting point
> with low-tolerance calculations and then improve the tolerances as the
> iteration converges to the solution. This is useful if the function R(x)
> is computed with numerical integration or some similar technique that is
> quick at low tolerances but expensive for high-tolerance evaluations. (I
> have rarely seen this technique in generic optimization algorithms, but
> found it invaluable for several problems.)
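This sounds like a good idea. If I understand it correctly, in Python
it could look something like this (my guess at the interface, not your
MATLAB code - R(x, tol) is an assumed signature):

    import numpy as np

    def solve_variable_tol(R, step, x0, tol_start=1e-3, tol_final=1e-10,
                           iters=100):
        # R(x, tol) is assumed cheap for loose tol and expensive for
        # tight tol; tighten tol only as the residual actually shrinks.
        x = np.asarray(x0, float)
        tol = tol_start
        for _ in range(iters):
            F = R(x, tol=tol)        # low-accuracy evaluation at first
            x = step(x, F)           # one iteration of any solver
            tol = max(tol_final, min(tol, 0.1 * np.linalg.norm(F)))
        return x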
> 2) Real-time bailout. This allows you to compute for a specified length of
> time and then return the best solution found within that time frame. Most
> algorithms simply count function calls.
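This one would also be easy to express in Python, something like the
following (a hypothetical sketch, with names of my own):

    import time
    import numpy as np

    def solve_with_bailout(step, residual, x0, max_time=10.0, tol=1e-9):
        # Iterate until the wall-clock budget runs out and return the
        # best (smallest-residual) point found within that time frame.
        x = np.asarray(x0, float)
        best_x, best_norm = x.copy(), np.linalg.norm(residual(x))
        deadline = time.time() + max_time
        while time.time() < deadline and best_norm > tol:
            x = step(x)
            norm = np.linalg.norm(residual(x))
            if norm < best_norm:
                best_x, best_norm = x.copy(), norm
        return best_x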
> 3) Selective refinement of the Jacobian as the iteration proceeds. This
> amounts to monitoring the condition number of the Jacobian and recomputing
> parts of it selectively if it becomes ill-conditioned. (For this I update
> the QR factorization of J rather than maintaining inv(J).)
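If I read point 3 right, a crude version would monitor cond(J) and
refresh suspect columns by finite differences. A sketch only - the
column choice below is just a placeholder heuristic, and I keep J
itself rather than your QR factorization:

    import numpy as np

    def refresh_if_ill_conditioned(J, R, x, F, cond_max=1e8, eps=1e-7):
        # Leave a well-conditioned Jacobian alone.
        if np.linalg.cond(J) < cond_max:
            return J
        # Recompute one suspect column by finite differences; picking
        # the smallest-norm column is only a placeholder rule.
        j = int(np.argmin(np.linalg.norm(J, axis=0)))
        e = np.zeros_like(x)
        e[j] = eps
        J = J.copy()
        J[:, j] = (R(x + e) - F) / eps
        return J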
> These things all slow down the fundamental algorithm, but are very useful
> when the function R(x) is extremely expensive to evaluate.