[SciPy-user] [OpenOpt] evaluation of f(x) and df(x)

Emanuele Olivetti emanuele@relativita....
Mon Jul 21 09:04:35 CDT 2008


Hi Dmitrey,

I do not understand if your message answers my question so let me
reformulate. I have the exact gradient df(x) implemented in my code
so I don't use finite differences.

In my problem, in order to compute the gradient df(x=x1), I'd like to
take advantage of intermediate results from the computation of f(x=x1).
The re-use of these results is trivial to implement if
the sequence of function calls made by OpenOpt is, e.g., like this:
f(x0), df(x0), f(x1), f(x2), df(x2), f(x3), df(x3), ... . The
implementation could become quite difficult, however, if the sequence
were like this: f(x0), f(x1), df(x0), f(x2), f(x3), f(x4), df(x3), ...
(i.e., f and df are not evaluated at the same values in sequence).
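The re-use described above can be sketched roughly as follows. This is only an illustration of the caching pattern, not OpenOpt code: the objective f(x) = sum(exp(x)) is a made-up example whose gradient happens to reuse the exp(x) vector, and the fallback recomputation guards against the case where the assumption about the call sequence does not hold.

```python
import numpy as np

class CachedObjective:
    """Cache intermediate results of f(x) for reuse in df(x).

    Hypothetical example: f(x) = sum(exp(x)), whose gradient is
    exp(x) itself, so the expensive intermediate (exp of x) is
    shared between f and df.
    """

    def __init__(self):
        self._last_x = None
        self._exp_x = None  # intermediate result shared by f and df

    def _compute_intermediate(self, x):
        # The expensive step, done once per distinct x.
        self._exp_x = np.exp(x)
        self._last_x = x.copy()

    def f(self, x):
        x = np.asarray(x, dtype=float)
        self._compute_intermediate(x)
        return self._exp_x.sum()

    def df(self, x):
        x = np.asarray(x, dtype=float)
        if self._last_x is None or not np.array_equal(x, self._last_x):
            # Assumption violated (df requested at an x where f was
            # not the most recent call): fall back to recomputing.
            self._compute_intermediate(x)
        return self._exp_x  # gradient of sum(exp(x)) is exp(x)

obj = CachedObjective()
x1 = np.array([0.0, 1.0])
fx = obj.f(x1)    # evaluates and caches exp(x1)
gx = obj.df(x1)   # reuses the cached exp(x1)
```

If the solver does interleave the calls as in the second sequence, the fallback branch simply recomputes, so correctness is preserved either way; the caching only saves time when the first call pattern holds.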

Does OpenOpt work as in the first case?

Thanks,

Emanuele

dmitrey wrote:
> Hi Emanuele,
>
> if df(x1) is obtained via finite-difference calculations, then f(x1) is 
> stored and compared during the next call to f / df, and vice versa: if f(x1) 
> is called, then the obtained value is stored and compared during the next 
> call to f and/or finite-difference df.
>
> At least that is the intent; I can take a more precise look if you have 
> noticed that it doesn't work properly.
>
> Regards, D.
>
> Emanuele Olivetti wrote:
>   
>> Dear All and Dmitrey,
>>
>> in my code the evaluation of f(x) and df(x) shares many
>> intermediate steps. I'd like to re-use what is computed
>> inside f(x) to evaluate df(x) more efficiently during the
>> optimization. So is it _always_ true that, when OpenOpt
>> evaluates df(x) at a certain x=x^*, f(x) was previously
>> evaluated at x=x^* too? And in case f(x) was evaluated multiple
>> times before evaluating df(x), is it true that the last x at
>> which f(x) was evaluated (before computing df(x=x^*))
>> was x=x^*?
>>
>> If these assumptions hold (as it seems from preliminary
>> tests on NLP using ralg), the extra code to take advantage
>> of this fact is extremely simple.
>>
>> Best,
>>
>> Emanuele
>>
>> P.S.: if the previous assumptions are false in general, I'd
>> like to know if they are true at least for the NLP case.
>>
>> _______________________________________________
>> SciPy-user mailing list
>> SciPy-user@scipy.org
>> http://projects.scipy.org/mailman/listinfo/scipy-user
>>
>>
>>
>
