[Numpy-discussion] poly class question

Sebastian Walter sebastian.walter@gmail....
Mon Oct 5 10:39:56 CDT 2009


On Mon, Oct 5, 2009 at 4:52 PM,  <josef.pktd@gmail.com> wrote:
> On Mon, Oct 5, 2009 at 5:37 AM, Sebastian Walter
> <sebastian.walter@gmail.com> wrote:
>> On Fri, Oct 2, 2009 at 10:40 PM,  <josef.pktd@gmail.com> wrote:
>>> On Fri, Oct 2, 2009 at 3:38 PM, Charles R Harris
>>> <charlesr.harris@gmail.com> wrote:
>>>>
>>>>
>>>> On Fri, Oct 2, 2009 at 12:30 PM, <josef.pktd@gmail.com> wrote:
>>>>>
>>>>> On Fri, Oct 2, 2009 at 2:09 PM, Charles R Harris
>>>>> <charlesr.harris@gmail.com> wrote:
>>>>> >
>>>>> >
>>>>> > On Fri, Oct 2, 2009 at 11:35 AM, Charles R Harris
>>>>> > <charlesr.harris@gmail.com> wrote:
>>>>> >>
>>>>> >>
>>>>> >> On Fri, Oct 2, 2009 at 11:33 AM, Charles R Harris
>>>>> >> <charlesr.harris@gmail.com> wrote:
>>>>> >>>
>>>>> >>>
>>>>> >>> On Fri, Oct 2, 2009 at 11:30 AM, Charles R Harris
>>>>> >>> <charlesr.harris@gmail.com> wrote:
>>>>> >>>>
>>>>> >>>>
>>>>> >>>> On Fri, Oct 2, 2009 at 11:08 AM, <josef.pktd@gmail.com> wrote:
>>>>> >>>>>
>>>>> >>>>> Is there a way in numpy (or scipy) to get an infinite expansion for
>>>>> >>>>> the inverse of a polynomial (truncated to a finite number of terms)?
>>>>> >>>>>
>>>>> >>>>> np.poly1d([ -0.8, 1])**(-1)
>>>>> >>>>>
>>>>> >>>>> an application, for example, is the MA representation of an AR(1)
>>>>> >>>>>
>>>>> >>>>
>>>>> >>>> Hmm, I've been working on a Chebyshev class, and division of a
>>>>> >>>> scalar by a Chebyshev series is expressly forbidden, but it could
>>>>> >>>> be included if a good interface is proposed. The same would go for
>>>>> >>>> polynomials.
>>>>> >>>
>>>>> >>> In fact it isn't hard to get; for poly1d you should be able to
>>>>> >>> multiply the series by a power of x to shift it left, then divide.
>>>>> >>>
>>>>> >>
>>>>> >> That is, divide a power of x by the polynomial.
>>>>> >>
>>>>> >
>>>>> > You will also need to reverse the denominator coefficients...Chuck
>>>>>
>>>>> That's the hint I needed. However, the polynomial coefficients are then
>>>>> reversed and not consistent with other polynomial operations, aren't
>>>>> they?
>>>>>
>>>>> >>> from scipy.signal import lfilter
>>>>>
>>>>> >>> (np.poly1d([1, 0])**10)/np.poly1d([1, -0.8])
>>>>> (poly1d([ 1.        ,  0.8       ,  0.64      ,  0.512     ,  0.4096    ,
>>>>>        0.32768   ,  0.262144  ,  0.2097152 ,  0.16777216,
>>>>> 0.13421773]), poly1d([ 0.10737418]))
>>>>>
>>>>> >>> lfilter([1], [1,-0.8], [1] + [0]*9)
>>>>> array([ 1.        ,  0.8       ,  0.64      ,  0.512     ,  0.4096    ,
>>>>>        0.32768   ,  0.262144  ,  0.2097152 ,  0.16777216,  0.13421773])
>>>>>
>>>>> >>> (np.poly1d([1, 0])**10)/np.poly1d([1, -0.8, 0.2])
>>>>> (poly1d([ 1.        ,  0.8       ,  0.44      ,  0.192     ,  0.0656    ,
>>>>>        0.01408   , -0.001856  , -0.0043008 , -0.00306944]),
>>>>> poly1d([-0.00159539,  0.00061389]))
>>>>> >>> lfilter([1], [1,-0.8, 0.2], [1] + [0]*9)
>>>>> array([ 1.        ,  0.8       ,  0.44      ,  0.192     ,  0.0656    ,
>>>>>        0.01408   , -0.001856  , -0.0043008 , -0.00306944, -0.00159539])
>>>>>
>>>>>
>>>>> What I meant initially isn't necessarily division of a scalar.
>>>>>
>>>>> >>> np.poly1d([1])/np.poly1d([-0.8, 1])
>>>>> (poly1d([ 0.]), poly1d([ 1.]))
>>>>>
>>>>> I didn't find any polynomial division that does the expansion of the
>>>>> remainder. The same problem, I think, is inherited by scipy.signal.lti,
>>>>> and it took me a while to find the usefulness of lfilter in this case.
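>>>>>
>>>>> Roughly, this is a sketch of how I wrap it now (the helper name is
>>>>> made up):
>>>>>
>>>>> import numpy as np
>>>>> from scipy.signal import lfilter
>>>>>
>>>>> def expand_ratio(num, den, n):
>>>>>     # first n coefficients of the power series num(L)/den(L);
>>>>>     # coefficients are in lfilter order, e.g. [1, -0.8] is 1 - 0.8 L
>>>>>     impulse = np.zeros(n)
>>>>>     impulse[0] = 1.0
>>>>>     return lfilter(num, den, impulse)
>>>>>
>>>>> # MA representation of an AR(1): the expansion of 1/(1 - 0.8 L)
>>>>> print(expand_ratio([1.0], [1.0, -0.8], 10))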
>>>>>
>>>>> If it were possible to extend the methods of the polynomial class to
>>>>> do longer expansions, it would make them more useful for ARMA and
>>>>> lti.
>>>>>
>>>>> (In some areas, I'm still trying to figure out whether some
>>>>> functionality is just hidden from me, or is actually a limitation of
>>>>> the implementation or a missing feature.)
>>>>>
>>>>
>>>> Could you describe the sort of problems you want to solve? There are lots of
>>>> curious things out there we could maybe work with. Covariances, for
>>>> instance, are closely related to Chebyshev series.
>>>
>>> I am working on a discrete-time ARMA process of the form
>>>
>>> a(L) x_t = b(L) u_t,  where L is the lag operator L^k x_t = x_(t-k)
>>>
>>> what I just programmed using lfilter is
>>> x_t = b(L)/a(L) u_t  where  b(L)/a(L) is the impulse response function
>>> or moving average representation
>>>
>>>  a(L)/b(L)  is the autoregressive representation
>>>
>>> the extension is
>>> a(L)(1-L)^d x_t = b(L) u_t,  where d = 0, 1, 2, ...  (standard) or also
>>> continuous d < 1 (fractional integration)
>>>
>>> a(L)/b(L),  b(L)/a(L),  (1-L)^(-d)  or  (1-L)^d (0 < d < 1)  are
>>> infinite-dimensional lag polynomials in the general case.
>>> Initially I was looking for an easy way to do these calculations as
>>> polynomials.
>>> (The fractional case (1-L)^d (0 < d < 1) might be a pretty special case,
>>> and I only looked it up today, but it is a popular model class in
>>> econometrics: fractionally integrated ARMA processes.)
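>>>
>>> For the fractional case, the series coefficients of (1-L)^d follow
>>> from the binomial expansion; a minimal sketch (the function name is
>>> made up):
>>>
>>> import numpy as np
>>>
>>> def frac_diff_coeffs(d, n):
>>>     # first n series coefficients of (1-L)^d, via the binomial
>>>     # recursion c_k = c_{k-1} * (k - 1 - d) / k; a negative d gives
>>>     # the expansion of (1-L)^(-|d|)
>>>     c = np.empty(n)
>>>     c[0] = 1.0
>>>     for k in range(1, n):
>>>         c[k] = c[k-1] * (k - 1 - d) / k
>>>     return c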
>>>
>>> multiplication works well: np.poly1d([-1, 1])*np.poly1d([-0.8, 1])
>>> (with the poly coefficients reversed relative to scipy.signal, I think)
>>>
>>> the functions in scipy.signal for lti are only for continuous-time
>>> processes and use poly1d under the hood, which means, for example,
>>>
>>>>>> from scipy import signal
>>>>>> signal.impulse(([1, -0.8],[1]), N=10)
>>>    raise ValueError, "Improper transfer function."
>>> ValueError: Improper transfer function.
>>>
>>> while this works
>>>>>> signal.impulse(([1],[1, -0.8]), N=10)
>>>
>>> (It's been a while since I looked inside scipy.signal.lti)
>>>
>>> A separate issue would be the multivariate version, VARMA, or MIMO in
>>> system modeling: a(L), b(L) are matrix polynomials and x_t, u_t are
>>> 1d arrays evolving in time.
>>> But that is a different discussion.
>>
>
>
>> I'm working on something that requires truncated univariate/multivariate
>> operations on scalar, vector, and matrix polynomials. I'm pretty sure I'm
>> doing something different from what is used in MIMO. Still, do you have a
>> good reference for the implementation/complexity/stability of operations
>> on multivariate (matrix) polynomials?
>> Want to make sure I'm not reinventing the wheel ;)
>
> Sorry, but I'm no help here. I was hoping to benefit from the knowledge
> of others. I have used univariate and multivariate polynomials in special
> cases for function approximation and time series analysis, but I know
> little about the general numerical issues involved. For MIMO, I only looked
> at the Matlab systems toolbox, which would be an extension of scipy.signal.lti.
>
> Since these operations are, in my case, usually inside an optimization loop
> in a (supposedly) well-behaved problem, I usually cared more about
> speed and approximation errors in low-order polynomials than about
> numerical stability.
>
> If you invent the wheel, I would be very glad to use it.

Well, I'd be happy to have a user of my code :). However, I'm not sure
what I'm doing really helps you:
the package ALGOPY I'm working on computes on truncated Taylor
polynomials x(t) = x_0 + x_1 t + x_2 t^2 + ... + x_{D-1} t^{D-1}.
I have implemented most of the common functions, like

z(t) = x(t)/y(t)
z(t) = x(t) * y(t)
z(t) = exp(x(t))
z(t) = sin(x(t))

z(t) always has the same degree as x(t) and y(t) (which are assumed to have
the same degree, unless one of them is constant, i.e. z(t) = 1./x(t) works).
All this is implemented in a single class called UTPS in
http://github.com/b45ch1/algopy/blob/master/algopy/utp/utps.py
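
For illustration, here is a minimal sketch of the truncated
multiplication and division (the textbook coefficient recursions, not
necessarily the exact UTPS code):

import numpy as np

def mul_tps(x, y):
    # truncated product of coefficient arrays of equal length D:
    # z_d = sum_{k=0}^{d} x_k y_{d-k}
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    D = len(x)
    z = np.zeros(D)
    for d in range(D):
        z[d] = np.sum(x[:d+1] * y[d::-1])
    return z

def div_tps(x, y):
    # truncated quotient z = x/y, solved coefficient-wise from x = z*y:
    # z_d = (x_d - sum_{k=0}^{d-1} z_k y_{d-k}) / y_0
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    D = len(x)
    z = np.zeros(D)
    for d in range(D):
        z[d] = (x[d] - np.sum(z[:d] * y[d:0:-1])) / y[0]
    return z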

The actual reason for ALGOPY is the matrix polynomial class UTPM in
http://github.com/b45ch1/algopy/blob/master/algopy/utp/utpm.py

The rationale is that to compute derivatives of matrix-valued
functions, one should compute on truncated matrix polynomials.

One good example is to compute the Jacobian

J = [dy/dA ,dy/dx]

of
y = solve(A,x)

where y is an (N,) array,
A is an (N,N) array, and
x is an (N,) array.

The underlying method is to generalize the solve function to work on
polynomials,

i.e.
y(t) = solve(A(t), x(t))

where y(t) = y_0 + y_1 t + y_2 t^2 + ...
A(t) = A_0 + A_1 t + A_2 t^2 + ...
x(t) = x_0 + x_1 t + ...
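
Matching coefficients of t^d gives the recursion
A_0 y_d = x_d - sum_{k=1}^{d} A_k y_{d-k}. A rough sketch (assuming A_0
is nonsingular; this is not the actual UTPM code):

import numpy as np

def solve_tps(A, x):
    # A: (D, N, N) array of matrix coefficients A_0, ..., A_{D-1}
    # x: (D, N) array of vector coefficients x_0, ..., x_{D-1}
    D, N = x.shape
    y = np.zeros((D, N))
    for d in range(D):
        rhs = x[d].copy()
        for k in range(1, d + 1):
            rhs -= A[k].dot(y[d - k])
        y[d] = np.linalg.solve(A[0], rhs)
    return y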

You can have a look at
http://github.com/b45ch1/algopy/blob/master/algopy/utp/utpm.py#L257
I'm afraid the best tutorial I can give is a talk that I gave at a
small workshop:
http://github.com/b45ch1/algopy/raw/master/documentation/Seventh_EuroAd_Workshop-Sebastian_Walter-Higher_Order_Forward_and_Reverse_Mode_on_Matrices_with_Application_to_Optimal_Experimental_Design.pdf


Sebastian

>
> Josef
>
>>
>>
>>
>>>
>>> I'm not very familiar with Chebyshev polynomials; the last time I
>>> wanted to use them, I didn't see anything about their use as a basis for
>>> functions in several variables and gave up. I've seen papers that use
>>> them as a basis for functions in one variable, but I'm not doing anything
>>> like this right now.
>>>
>>> Thanks,
>>>
>>> Josef
>>>
>>>>
>>>> Chuck
>>>>

