[Numpy-discussion] polyfit with fixed points
Thu Mar 7 10:22:30 CST 2013
On Thu, Mar 7, 2013 at 1:52 AM, Jaime Fernández del Río <
> On Tue, Mar 5, 2013 at 5:23 AM, Charles R Harris <
> firstname.lastname@example.org> wrote:
>> On Tue, Mar 5, 2013 at 12:41 AM, Jaime Fernández del Río <
>> email@example.com> wrote:
>>> On Mon, Mar 4, 2013 at 8:37 PM, Charles R Harris <
>>> firstname.lastname@example.org> wrote:
>>>> There are actually seven versions of polynomial fit, two for the usual
>>>> polynomial basis, and one each for Legendre, Chebyshev, Hermite, Hermite_e,
>>>> and Laguerre ;)
>>> Correct me if I am wrong, but the fitted function is the same regardless
>>> of the polynomial basis used. I don't know if there can be numerical
>>> stability issues, but chebfit(x, y, n) returns the same as
>>> poly2cheb(polyfit(x, y, n)).
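[Editor's sketch: the equivalence claimed here can be checked directly with the numpy.polynomial package; the data below is made up for illustration.]

```python
import numpy as np
from numpy.polynomial import polynomial as P
from numpy.polynomial import chebyshev as C

x = np.linspace(-1, 1, 9)
y = x**3 - 2 * x + 0.5          # an exact cubic, so both fits recover it

# Fit in the power basis, then convert the coefficients to the Chebyshev basis ...
c_via_convert = C.poly2cheb(P.polyfit(x, y, 3))
# ... and fit directly in the Chebyshev basis.
c_direct = C.chebfit(x, y, 3)

print(np.allclose(c_via_convert, c_direct))
```

Both routes solve the same least-squares problem, just expressed in different bases, so for a well-conditioned problem the coefficients agree to rounding.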
>>> In any case, with all the already existing support for these special
>>> polynomials, it wouldn't be too hard to set the problem up to calculate the
>>> right coefficients directly for each case.
>>>> How do you propose to implement it? I think Lagrange multipliers is
>>>> overkill, I'd rather see using the weights (approximate) or change of
>>>> variable -- a permutation in this case -- followed by qr and lstsq.
>>> The weights method is already in place, but I find it rather inelegant
>>> and unsatisfactory as a solution to this problem. But if it is deemed
>>> sufficient, then there is of course no need to go any further.
>>> I hadn't thought of any other way than using Lagrange multipliers, but
>>> looking at it in more detail, I am not sure it will be possible to
>>> formulate it in a manner that can be fed to lstsq, as polyfit does today.
>>> And if it can't, it probably wouldn't make much sense to have two different
>>> methods which cannot produce the same full output running under the same
>>> function.
>>> I can't figure out your "change of variable" method from the succinct
>>> description, could you elaborate a little more?
>> I think the place to add this is to lstsq as linear constraints. That is,
>> the coefficients must satisfy B * c = y_c for some set of equations B. In
>> the polynomial case the rows of B would be the powers of x at the points
>> you want to constrain. Then do an svd on B, B = u * d * v. Apply v to the
>> design matrix of the unconstrained points, A' = A * v^T, so that B becomes
>> B' = u * d. The coefficients are replaced by new variables c' = v * c,
>> with the constraints confined to the first columns. If there are, say, 2
>> constraints, u * d is effectively 2x2 (only its first two columns are
>> nonzero). Solve that square system for the first two transformed
>> coefficients, then multiply the first two columns of the design matrix A'
>> by the result and move them to the rhs, i.e.,
>> y = y - A'[:, :2] * c'[:2]
>> then solve the usual least squares problem
>> A'[:, 2:] * c'[2:] = y
>> to get the rest of the transformed coefficients c'. Put the coefficients
>> back together and multiply with v^T to get
>> c = v^T * c'
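[Editor's sketch of the change-of-variable recipe above; the function name and signature are mine, not an existing NumPy API.]

```python
import numpy as np
from numpy.polynomial import polynomial as P

def constrained_polyfit(x, y, x_f, y_f, deg):
    """Least-squares polynomial fit passing exactly through (x_f, y_f)."""
    A = P.polyvander(x, deg)        # design matrix for the free data points
    B = P.polyvander(x_f, deg)      # constraint equations B @ c = y_f
    k = len(x_f)
    # B = u @ diag(s) @ vt; change variables via c = vt.T @ c'
    u, s, vt = np.linalg.svd(B)
    Ap = A @ vt.T
    # The constraint block u @ diag(s) @ c'[:k] = y_f is square; solve it first.
    cp_head = (u.T @ y_f) / s
    # Move the now-known part to the right-hand side ...
    rhs = y - Ap[:, :k] @ cp_head
    # ... and solve the usual least squares for the remaining variables.
    cp_tail, *_ = np.linalg.lstsq(Ap[:, k:], rhs, rcond=None)
    # Transform back to the original coefficients.
    return vt.T @ np.concatenate([cp_head, cp_tail])
```

The returned coefficients are in the low-to-high order used by the numpy.polynomial package; the constraints are satisfied exactly because B @ c = u @ diag(s) @ c'[:k] = y_f by construction.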
> Very nice, and works beautifully! I have tried the method you describe,
> and there are a few relevant observations:
> 1. It gives the exact same result as the Lagrange multiplier approach,
> which is probably expected, but I wasn't all that sure it would be the case.
> 2. The result also seems to be what the sequence of fits giving
> increasing weight to the fixed points converges to. This image
> http://i1092.photobucket.com/albums/i412/jfrio/image.png is an example.
> In there:
> * blue crosses are the data points to fit to
> * red points are the fixed points
> * blue line is the standard polyfit
> * red line is the constrained polyfit
> * cyan, magenta, yellow and black are polyfits with weights of 2, 4,
> 8, 16 for the fixed points, 1 for the rest
> Given this last point, probably the cleanest, least disruptive
> implementation of this would be to allow np.inf values in the weights
> parameter, which would get filtered out and dealt with in the above manner.
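[Editor's sketch of the convergence observation above, using the w parameter of numpy.polynomial.polynomial.polyfit; the data is made up.]

```python
import numpy as np
from numpy.polynomial import polynomial as P

x = np.linspace(0, 1, 20)
y = x**2
x_f, y_f = np.array([0.5]), np.array([2.0])   # a "fixed" point off the curve

# Append the fixed point to the data and crank up its weight.
xs = np.concatenate([x, x_f])
ys = np.concatenate([y, y_f])
for W in (1e0, 1e2, 1e4, 1e6):
    w = np.concatenate([np.ones_like(x), [W]])
    c = P.polyfit(xs, ys, 3, w=w)
    # The fitted value at x_f approaches y_f as the weight grows.
    print(W, abs(P.polyval(x_f, c)[0] - y_f[0]))
```

With np.inf weights the limit of this sequence would be computed directly instead of approximated.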
Just to point out that a very simple approach is to multiply the constraint
equations by a big enough number M, like:

    import numpy as np
    import matplotlib.pyplot as plt
    from numpy.polynomial import Polynomial

    def V(x, n=None):
        """Polynomial package compatible Vandermonde 'matrix'."""
        return np.vander(x, n)[:, ::-1]

    def clsq(A, b, C, d, M=1e5):
        """A simple constrained least squares solution of Ax = b, s.t. Cx = d."""
        return np.linalg.solve(A.T @ A + M * (C.T @ C),
                               A.T @ b + M * (C.T @ d))

    x = np.linspace(-6, 6, 23)
    y = np.sin(x) + 4e-1 * np.random.rand(len(x)) - 2e-1
    x_f = np.linspace(-(3. / 2) * np.pi, (3. / 2) * np.pi, 4)
    y_f = np.array([1, -1, 1, -1])          # sin at the fixed points
    n, x_s = 5, np.linspace(-6, 6, 123)
    plt.plot(x, y, 'bo', x_f, y_f, 'bs', x_s, np.sin(x_s), 'b--')
    for M in 7 ** np.arange(5):
        p = Polynomial(clsq(V(x, n), y, V(x_f, n), y_f, M))
        plt.plot(x_s, p(x_s))
    plt.ylim([-2, 2])
    plt.show()
Obviously this is not a 'silver bullet' solution, but it is simple enough ;-)
My 2 cents,
> So I have two questions:
> 1. Does this make sense? Or will it be better to make it more explicit,
> with a 'fixed_points' keyword argument defaulting to None?
> 2. Once I have this implemented, documented and tested... How do I go
> about submitting it for consideration? Would a patch be the way to go, or
> should I fork?
> ( O.o)
> ( > <) This is Bunny. Copy Bunny into your signature and help him with his
> plans for world domination.