[SciPy-user] PyDSTool and SciPy (integrators)
Fri Apr 18 12:06:01 CDT 2008
On 18/04/2008, Erik Sherwood <email@example.com> wrote:
> For the kinds of problems Rob and I deal with, high accuracy with
> reasonable speed is essential, and our function evaluations are not
> excessively expensive, hence our preference for Radau. We have found
> the HW dopri853 implementation to be fast and robust for non-stiff
> (and some moderately stiff) systems and the Radau5 implementation to
> be able to handle almost all stiff problems that stymie dopri.
Just to check: radau and dopri853 require you to know whether your
problem is stiff or not? An extremely convenient feature of the
ODEPACK integrator is that it can switch between stiff and non-stiff
solvers as needed.
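(For reference, that switching behaviour is what scipy.integrate.odeint exposes: it wraps ODEPACK's LSODA, which moves between Adams (non-stiff) and BDF (stiff) methods on its own. A minimal sketch on the Van der Pol oscillator, which is stiff for large mu:

```python
import numpy as np
from scipy.integrate import odeint

# Van der Pol oscillator: non-stiff for small mu, very stiff for large mu.
def vdp(y, t, mu):
    return [y[1], mu * (1.0 - y[0] ** 2) * y[1] - y[0]]

t = np.linspace(0.0, 100.0, 1001)
# odeint wraps ODEPACK's LSODA, which detects stiffness as it goes and
# switches between Adams and BDF methods without user intervention.
sol = odeint(vdp, [2.0, 0.0], t, args=(1000.0,))
```

No stiffness flag is passed anywhere; the same call handles mu = 1 and mu = 1000.)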
> 3. Stopping events, auxiliary functions
> PyDSTool's event functions, which can be set as 'terminal' (halting
> integration) or 'non-terminal' (not stopping integration, but saved
> and returned to the caller), may be any C function which returns a
> double. The event is considered to have happened when the event
> function returns 0.0. There is a layer of code between the interface
> to Python and the interface to the H&W codes where the events are
> checked. After each step of the integration, the event handling
> routines check if any event functions have changed sign; if so, the
> exact point at which the sign change (and hence event) occurred is
> found using bisection and the dense output function provided by the
> H&W integrator, to user specified accuracy, which can be set
> independently for each event function.
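> (In outline, the scheme is: after each accepted step, evaluate each event
> function at the step endpoints; on a sign change, bisect using the dense
> output. A rough sketch in Python — `locate_event` and the toy interpolant
> are illustrative names, not PyDSTool's actual API:

```python
def locate_event(g, dense, t0, t1, tol=1e-12):
    """Bisect for the time in [t0, t1] where g(dense(t)) changes sign.

    g     -- scalar event function of the state; event occurs where g = 0
    dense -- dense-output interpolant for the current step, dense(t) -> state
    """
    g0 = g(dense(t0))
    if g0 * g(dense(t1)) > 0.0:
        return None                      # no sign change over this step
    while t1 - t0 > tol:
        tm = 0.5 * (t0 + t1)
        if g0 * g(dense(tm)) <= 0.0:
            t1 = tm                      # crossing lies in [t0, tm]
        else:
            t0, g0 = tm, g(dense(tm))    # crossing lies in [tm, t1]
    return 0.5 * (t0 + t1)

# Toy check: scalar state x(t) = t - 0.3 on the step [0, 1]; event at x = 0.
t_event = locate_event(lambda x: x, lambda t: t - 0.3, 0.0, 1.0)
```

> Because the search uses the interpolant rather than new RHS evaluations,
> the integrator never steps past the event to find it.)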
> Anne: This feature of PyDSTool would seem to be what you wanted, and
> does not require restarting the integration with a smaller step size.
> However, it is possible that your problem involves other features that
> I didn't take into consideration when designing this part of PyDSTool.
> If you could provide us the example, we would find it useful.
This does seem like just what I wanted; the only concern is that it be
able to handle situations where the RHS cannot be evaluated outside
the domain. (The specific system I was using involved tracing rays
around a black hole. Far from the hole I used Schwarzschild
coordinates, but these are singular as you cross the event horizon. So
I wanted to stop and switch coordinates when I got to, say, R=3M.
Unfortunately the integrators kept trying to take steps down past
R=2M, where the RHS blew up.) The reason I suggested reducing step
sizes in such a situation is that, with most algorithms, there simply
isn't enough RHS information available to take a normal-sized step.
> At the end of an integration, the intermediate layer of the PyDSTool
> integration code will compute any function of the phase space
> variables, parameters, etc. you want on the computed trajectory and
> return the result. Performing this internally rather than in python
> makes it much faster.
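> (A minimal sketch of the idea in plain NumPy/SciPy, rather than
> PyDSTool's internal C layer: integrate, then evaluate an auxiliary
> function — here the energy of a harmonic oscillator — at every
> trajectory point after the fact:

```python
import numpy as np
from scipy.integrate import odeint

# Harmonic oscillator: y = [position, velocity].
def rhs(y, t):
    return [y[1], -y[0]]

t = np.linspace(0.0, 10.0, 201)
traj = odeint(rhs, [1.0, 0.0], t)

def energy(y):                  # auxiliary function of the state
    return 0.5 * (y[..., 0] ** 2 + y[..., 1] ** 2)

aux = energy(traj)              # one value per computed trajectory point
```

> Doing this inside the integration layer, in C, avoids a Python-level
> loop over the trajectory.)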
Does the integrator take auxiliary functions into account when
deciding on step sizes? That is, suppose my auxiliary function has
oscillatory behaviour for certain values; will the integrator notice
and take smaller steps there?
> Anne: Providing so much information at every trajectory point would
> require much revision of the current code, for which I see little
> additional benefit. Could you explain how you would use all of the
> additional information? Also, could you elaborate on your "running"
> mode? I do not understand what you are suggesting.
Most of the extra information I had in mind was intended to reduce
memory usage and improve smoothness of the (interpolated) trajectory.
Since the trajectory is the result of solving an ODE, it will normally
be at least differentiable, but linear interpolation will produce a
non-differentiable trajectory. I can envision many situations where
the derivative of the objective function is wanted. This can at worst
be obtained by evaluating the RHS, but if the integrator is internally
working with something that is locally a polynomial, ideally the
trajectory would follow that. Using polynomials, at least to take the
derivative into account, should reduce the density with which one must
sample the trajectory.
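The point is easy to see numerically. Using SciPy's CubicHermiteSpline
as a stand-in for an integrator's dense output: the RHS values at the
sample points supply derivatives for free, and the resulting C^1
interpolant beats linear interpolation on the same coarse grid.

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Samples of the solution of y' = cos(t), y(0) = 0, i.e. y = sin(t).
t = np.linspace(0.0, 2.0 * np.pi, 9)      # deliberately coarse sampling
y = np.sin(t)
dy = np.cos(t)                            # RHS values at the samples

hermite = CubicHermiteSpline(t, y, dy)    # differentiable interpolant

tt = np.linspace(0.0, 2.0 * np.pi, 1001)
err_hermite = np.max(np.abs(hermite(tt) - np.sin(tt)))
err_linear = np.max(np.abs(np.interp(tt, t, y) - np.sin(tt)))
```

With nine sample points the Hermite error is orders of magnitude below
the linear-interpolation error, which is what lets the trajectory be
stored at much lower density.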
The other information, on convergence, I suggested mostly so that
users could evaluate the trajectory and look for places where the
solution risked being inaccurate. The current ODEINT provides a fair
amount of such instrumentation, in its messy FORTRAN way.
The "running" mode I suggested is based on the way ODEPACK is designed
to be called from FORTRAN: one call initializes the integrator, then
it maintains internal state. It can be told to advance either one step
(of its choice) or a fixed amount (obtained by interpolation based on
the solution). The result can then be "walked" forward, keeping the
integrator's internal state (derivatives, appropriate step size,
knowledge about stiffness) at each step. This kind of calling
convention is useful for (say) particle systems, where for every frame
of an animation one wants to advance the system by a thirtieth of a
second, continuing indefinitely and forgetting the past.
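SciPy's scipy.integrate.ode class exposes roughly this calling
convention: one object holds the integrator's internal state, and each
call advances it by a fixed amount. A sketch of the particle-system /
animation use, advancing a simple decaying system by a thirtieth of a
second per frame:

```python
from scipy.integrate import ode

# Exponential decay y' = -y; one call per animation frame, with the
# integrator's internal state (step size, order, stiffness detection)
# carried over between calls.
def rhs(t, y):
    return [-y[0]]

r = ode(rhs).set_integrator('lsoda')
r.set_initial_value([1.0], 0.0)

frames = []
dt = 1.0 / 30.0
for _ in range(90):             # three seconds of frames
    r.integrate(r.t + dt)       # interpolated to exactly t + dt
    frames.append(r.y[0])
```

Nothing about the loop fixes its length in advance; it can continue
indefinitely, and no past trajectory is retained.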
> I would like to provide an interface for all of the H&W codes that
> have dense output. This includes other methods for stiff integration,
> a delay differential equation integrator, and possibly a symplectic
> integrator. However, these integrators require the systems to be input
> with particular forms which do not necessarily conform so nicely to
> the odeint interface, for example. I would like to settle on an API/
> calling format before moving ahead with this project.
The odeint interface is, frankly, a disaster. I have had to abuse it
every time I needed to solve an ODE. The underlying ODEPACK interface
is fairly horrible, but I think it's valuable to look at it because
it's seen a lot of use and has been adjusted to be able to do all the
various things that users want from it.