[SciPy-user] [OpenOpt] lb issue

Emanuele Olivetti emanuele@relativita....
Fri Jul 4 11:23:04 CDT 2008


Yes Dmitrey, you are right. After some thinking I successfully
implemented your advice in my code, adding contol and
inf. It seems to work quite well: "ralg" now makes some attempts
in the forbidden area and then comes back. I hope that
generating the next 'x' (attempt) does not require much
computation from 'ralg', in the sense that wasting some attempts
does not cost too much time (especially for bigger problems,
like hundreds of variables).

I have another question: is there a way in OpenOpt to
tell the solver to optimize over just some of the
variables/dimensions and not all of them?
Example: assume my function takes 3 numbers and returns
1 value; I want to minimize it with respect to just the first 2
of the 3 input values; say the initial guess for the 3rd value
is already OK, or I can't change it.
I'd like to pass something like a boolean vector [True, True, False]
to p.solve() or to the NLP instance to say that the third parameter
should not be changed. In other words, it can be thought of as
another kind of constraint.

Obviously I can wrap my function to handle the
boolean vector and do the right thing, but I'm wondering
whether OpenOpt can handle this kind of request natively.
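
For the record, the wrapper I have in mind would look roughly like
this (a minimal sketch; `mask`, `f_full` and `f_reduced` are just
illustrative names, not OpenOpt API):

```python
import numpy as np

# Sketch: fix the third variable by optimizing a reduced problem and
# re-inserting the fixed value inside a wrapper.
mask = np.array([True, True, False])   # optimize x[0], x[1]; keep x[2] fixed
x_full0 = np.array([1.0, 2.0, 3.0])    # full initial guess; x[2] stays 3.0

def f_full(x):
    # stand-in for the real 3-variable objective
    return np.sum(x ** 2)

def f_reduced(x_free):
    # rebuild the full vector, substituting only the free variables
    x = x_full0.copy()
    x[mask] = x_free
    return f_full(x)

x0_reduced = x_full0[mask]             # the solver sees only 2 variables
# p = NLP(f_reduced, x0_reduced); p.solve('ralg')   # then proceed as usual
```

The solver never sees the fixed component, so no extra constraint is
needed; the wrapper just re-assembles the full vector on every call.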

Thanks,

Emanuele


dmitrey wrote:
> I don't see any difficulty in mapping my advice to more complex cases
> (just don't forget to shift your inequality constraints by contol, so
> that they are compared with zero rather than with contol, and use inf,
> not the huge value you mentioned). The current ralg implementation
> doesn't need the objective function value when a point is outside the
> feasible region (i.e. when *any* constraint exceeds p.contol, not only
> the lb-ub constraints). OO calls objFunc outside the feasible region
> only to check some stop criteria, produce the per-iteration text
> output, and possible graphics output.
>
> Regards, D.
>
> Emanuele Olivetti wrote:
>   
>> Unfortunately it is not so simple to map this advice to my
>> real situation, which is more complex than the
>> proof-of-concept example in my initial message. Returning
>> a big positive value when x is outside the bounds is an option
>> I considered some time ago but then discarded; I'll think
>> more about it now.
>>
>> Ciao,
>>
>> Emanuele
>>
>> dmitrey wrote:
>>   
>>     
>>> Hi Emanuele,
>>>
>>> I can propose a temporary solution (see below) that doesn't
>>> require updating oo from svn. However, ALGENCAN, ipopt and
>>> scipy_slsqp usually handle box-bound constrained problems (w/o
>>> other constraints) much better than the current ralg implementation.
>>> D.
>>>
>>> import numpy as N
>>> from scikits.openopt import NLP
>>> from numpy import any, inf
>>> size = 100
>>> dimensions = 2
>>> data = N.random.rand(size, dimensions) - 0.5
>>>
>>> contol = 1e-6
>>> lb = N.zeros(dimensions) + contol
>>>
>>> def f(x):
>>>     global data
>>>     if any(x < 0):
>>>         # the objective function is not defined here, so return inf;
>>>         # some iters will then show objFunVal = inf in the text output,
>>>         # and graphic output is currently unavailable for this case
>>>         return inf
>>>     return N.dot(data**2, x.T)
>>>
>>> x0 = N.ones(dimensions)
>>> p = NLP(f, x0, lb=lb, contol=contol)
>>> p.solve('ralg')
>>> print p.ff, p.xf
>>>
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user@scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>


