[SciPy-user] [OpenOpt] evaluation of f(x) and df(x)
Mon Jul 21 09:58:32 CDT 2008
No, OpenOpt will never hold more than one point's values in memory: checking
against all previously visited points would be too expensive, and storing
that much data is costly as well (sometimes nVars is very large). The
problem you mentioned is too specific, so the user is expected to take
care of the situation. As for the solvers (especially those that use
derivatives), they will hardly ever re-use previous points (if one does,
it may well be considered a bug).
To prevent such bugs I use the OpenOpt Point concept (in my ralg solver).
Instead of handling so many variables in the current workspace (f, df,
f_prev, df_prev, c_prev, h_prev, dh_prev, etc., plus any kinds of linear
ones, plus possible 2nd derivatives), I use something like this:
iterPoint = p.point(x)
If I need f(x), df(x), dc(x), maxResidual(x), etc., I use
iterPoint.f(), iterPoint.df(), iterPoint.dc(), iterPoint.mr(), etc.,
and I'm sure these values will not be recalculated.
Still, I'm not sure whether in my current Point implementation having f
will benefit from having df and vice versa (if some calculations with
other points have been done).
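The Point idea can be sketched in miniature like this (a hypothetical, simplified stand-in for OpenOpt's actual Point class, not its real implementation; the counter is only there to demonstrate the caching):

```python
# Minimal sketch of a lazily-evaluated iteration point: each quantity
# bound to the point is computed at most once, however often it is asked for.

class Point:
    def __init__(self, x, f_func, df_func):
        self.x = x
        self._f_func = f_func
        self._df_func = df_func
        self._cache = {}

    def f(self):
        # compute f(x) on first access only, then serve the cached value
        if 'f' not in self._cache:
            self._cache['f'] = self._f_func(self.x)
        return self._cache['f']

    def df(self):
        # same lazy-caching pattern for the gradient
        if 'df' not in self._cache:
            self._cache['df'] = self._df_func(self.x)
        return self._cache['df']

calls = {'f': 0}

def f(x):
    calls['f'] += 1
    return x * x

def df(x):
    return 2 * x

pt = Point(3.0, f, df)
pt.f(); pt.f(); pt.df()
print(calls['f'])  # f was evaluated only once despite two requests
```

The solver then only ever passes `pt` around; whichever routine first asks for a value pays for it, and everyone else gets it for free.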
As for you as a user, if you want to make use of the OO Point, it could be
p = NLP()
p.args = p
and then use p.point in your objFunc/constraints (though I'm not sure
it won't lead to infinite recursion).
Or, you could write something like this yourself in your code. The Point
class is located in /Kernel/Point.py.
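On the user side, the re-use Emanuele asks about below can also be done without the OO Point, by caching the shared intermediate result of the most recent call and keying it on x. All names here are hypothetical (expensive_intermediate stands in for whatever work f and df share); this is a sketch of the pattern, not OpenOpt code:

```python
import numpy as np

calls = {'n': 0}

def expensive_intermediate(x):
    # stands in for the computation shared by f and df
    calls['n'] += 1
    return np.exp(x)

# remember the last x seen and the intermediate result computed for it
_last = {'x': None, 'value': None}

def _get_intermediate(x):
    # recompute only when x differs from the previously cached point
    if _last['x'] is None or not np.array_equal(_last['x'], x):
        _last['x'] = np.array(x, copy=True)
        _last['value'] = expensive_intermediate(x)
    return _last['value']

def f(x):
    e = _get_intermediate(x)
    return float(np.sum(e))

def df(x):
    e = _get_intermediate(x)  # reused if f was just called at the same x
    return e

x0 = np.array([0.0, 1.0])
fx = f(x0)    # computes the intermediate once
gx = df(x0)   # same x, so the cached intermediate is reused
print(calls['n'])  # 1
```

Note this only helps if the solver really does call f and df at the same x back to back (the question below); the `np.array_equal` guard makes the cache safe either way, it just stops saving work when the sequence interleaves different points.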
Emanuele Olivetti wrote:
> Hi Dmitrey,
> I do not understand whether your message answers my question, so let me
> reformulate. I have the exact gradient df(x) implemented in my code,
> so I don't use finite differences.
> In my problem, in order to compute the gradient df(x=x1), I'd like to
> take advantage of intermediate results from the computation of f(x=x1).
> The re-use of these results is trivial to implement if
> the sequence of function calls made by OpenOpt is, e.g., like this:
> f(x0), df(x0), f(x1), f(x2), df(x2), f(x3), df(x3), ... . Instead, the
> implementation could become quite difficult if the sequence were
> like this: f(x0), f(x1), df(x0), f(x2), f(x3), f(x4), df(x3), ...
> (i.e., the sequence of f / df is not evaluated on the same values).
> Is OpenOpt working as in the first case?
> dmitrey wrote:
>> Hi Emanuele,
>> if df(x1) is obtained via finite-difference calculations, then f(x1) is
>> stored and compared during the next call to f / df, and vice versa: if f(x1)
>> is called, then the obtained value is stored and compared during the next
>> call to f and/or finite-difference df.
>> At least that is how it is intended; I can take a more precise look if
>> you have noticed it doesn't work properly.
>> Regards, D.
>> Emanuele Olivetti wrote:
>>> Dear All and Dmitrey,
>>> in my code the evaluation of f(x) and df(x) shares many
>>> intermediate steps. I'd like to re-use what is computed
>>> inside f(x) to evaluate df(x) more efficiently, during f(x)
>>> optimization. Then is it _always_ true that, when OpenOpt
>>> evaluates df(x) at a certain x=x^*, f(x) too was previously
>>> evaluated at x=x^*? And in case f(x) was evaluated multiple
>>> times before evaluating df(x), is it true that the last x at
>>> which f(x) was evaluated (before computing df(x=x^*))
>>> was x=x^*?
>>> If these assumptions hold (as it seems from preliminary
>>> tests on NLP using ralg), the extra code to take advantage
>>> of this fact is extremely simple.
>>> P.S.: if the previous assumptions are false in general, I'd
>>> like to know if they are true at least for the NLP case.
>>> SciPy-user mailing list