[SciPy-user] Characteristic polynomial
Robert Kern
robert.kern@gmail....
Sun Dec 23 12:42:30 CST 2007
Nikolas Karalis wrote:
> Hello again. I return with another question regarding characteristic
> polynomials.
>
> I am computing the characteristic polynomials of matrices. While I get
> the right result for millions of them, I found some cases where I get
> the "wrong" one.
> Let me give you the examples, and then I will explain.
>
>>>> A=matrix([[290, 324, 323, 364, 340, 341, 365, 336, 342, 326],
> [324, 290, 322, 338, 366, 341, 365, 336, 344, 324],
> [323, 322, 286, 337, 337, 366, 364, 361, 320, 323],
> [345, 327, 326, 315, 343, 344, 342, 363, 343, 312],
> [329, 347, 325, 343, 319, 345, 343, 364, 327, 327],
> [329, 329, 347, 344, 345, 323, 344, 342, 348, 328],
> [347, 347, 346, 342, 343, 344, 321, 341, 326, 311],
> [325, 325, 343, 363, 364, 342, 341, 313, 324, 309],
> [342, 344, 320, 362, 338, 366, 338, 335, 286, 307],
> [340, 339, 338, 330, 361, 362, 333, 329, 316, 265]])
>
>>>> B=matrix([[290, 324, 336, 365, 341, 340, 364, 323, 342, 326],
> [324, 290, 336, 339, 367, 340, 364, 322, 344, 324],
> [325, 325, 313, 341, 342, 364, 363, 343, 324, 309],
> [346, 328, 341, 319, 346, 343, 342, 347, 345, 312],
> [330, 348, 342, 346, 321, 345, 344, 346, 329, 327],
> [328, 328, 364, 343, 345, 321, 341, 326, 346, 328],
> [346, 346, 363, 342, 344, 341, 317, 325, 324, 311],
> [323, 322, 361, 365, 365, 338, 336, 286, 320, 323],
> [342, 344, 335, 364, 340, 364, 336, 320, 286, 307],
> [340, 339, 329, 332, 363, 360, 331, 338, 316, 265]])
>
>>>> poly(A)
> array([ 1.00000000e+00, -3.00800000e+03, -1.10612600e+06,
> -1.55679754e+08, -1.10343405e+10, -4.15092661e+11,
> -7.89507268e+12, -6.51631023e+13, -1.80794492e+14,
> -2.21111889e+00, -2.95926167e-13])
>
>>>> poly(B)
> array([ 1.00000000e+00, -3.00800000e+03, -1.10612600e+06,
> -1.55679754e+08, -1.10343405e+10, -4.15092661e+11,
> -7.89507268e+12, -6.51631023e+13, -1.80794492e+14,
> 4.59224482e+00, 3.39308585e-14])
>
> As you can see, the two polynomials are the same (up to the last two
> terms). The last term is considered to be "almost" 0, so we can see that
> the difference is in the coefficient of x^1.
>
> If we compute the exact same thing with Maple and Sage, we get (for both
> matrices) :
>
> x^10 - 3008*x^9 - 1106126*x^8 - 155679754*x^7 - 11034340454*x^6 - 415092661064*x^5 - 7895072675601*x^4 - 65163102265268*x^3 - 180794492489124*x^2
>
> so, it is the same, since it doesn't "compute" the x^1 term.
>
> This also happens for a few other matrices...
>
> Can anybody help me with this? What is the right answer for the char.
> poly? (I guess it is the one from Sage and Maple, since they agree.)
Yes, also because they are using exact arithmetic rather than floating point.
> Why does this difference occur? Is it "safe" to ignore the last 2 terms?
Remember that floating point errors are usually relative, not absolute. A value
of 1e-14 is not always "almost" 0, and a value of 1e0 can sometimes be "almost"
0 when the other values are around 1e+14. It should make you suspicious that the
exponent steadily increases and then suddenly drops to 0.
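To make the scale argument concrete, here is a small sketch (the coefficient
magnitudes are taken from the polynomial above): with coefficients of order
1e+14, the absolute rounding noise to expect is machine epsilon times that
magnitude, so a spurious x^1 coefficient of a few units is well below the
noise floor in relative terms.

```python
import numpy as np

# Machine epsilon for double precision is about 2.2e-16.
eps = np.finfo(float).eps

# The largest coefficients in the example polynomial are around 1.8e+14,
# so the absolute noise floor for values computed at that scale is:
noise_floor = 1.8e14 * eps
print(noise_floor)  # a few hundredths

# Relative to 1.8e+14, a coefficient of about 2.2 is on the order of
# 1e-14 -- i.e. "almost" 0 in the relative sense.
print(2.21 / 1.8e14)
```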
You can do a little bit of cleanup on the floating-point results by looking at
the eigenvalues directly.
In [13]: from numpy import linalg, absolute, real_if_close, poly

In [14]: ev = linalg.eigvals(A)
In [15]: ev
Out[15]:
array([ 3.35212801e+03 +0.00000000e+00j,
-9.14584932e+01 +0.00000000e+00j,
-7.67326852e+01 +0.00000000e+00j,
-6.72710568e+01 +0.00000000e+00j,
-5.92400278e+01 +0.00000000e+00j,
-3.36474895e+01 +0.00000000e+00j,
-1.01081162e+01 +0.00000000e+00j,
-5.67013979e+00 +0.00000000e+00j,
9.77724356e-15 +8.22960451e-15j, 9.77724356e-15 -8.22960451e-15j])
You can see that the last two terms are "almost" 0 in the relative sense.
Therefore, you can set them to zero before you construct the polynomial:
In [24]: aev = absolute(ev)
In [25]: ev[(aev / aev.max()) < 1e-16] = 0.0
In [26]: poly(real_if_close(ev))
Out[26]:
array([ 1.00000000e+00, -3.00800000e+03, -1.10612600e+06,
-1.55679754e+08, -1.10343405e+10, -4.15092661e+11,
-7.89507268e+12, -6.51631023e+13, -1.80794492e+14,
0.00000000e+00, 0.00000000e+00])
But if you want really reliable results for largish integer matrices, you'll
want to use SAGE. Your matrices are large enough that the floating point error
will be significant even if there are no "almost" 0s to clean up.
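If pulling in SAGE is too heavy for your workflow, the exact answer for an
integer matrix can also be computed in pure Python. This is a sketch of the
Faddeev-LeVerrier recurrence (the function name `charpoly_exact` is made up
here; it is not part of NumPy or SciPy), using exact rational arithmetic so
there is no rounding at all:

```python
from fractions import Fraction

def charpoly_exact(A):
    """Exact characteristic polynomial coefficients of a square integer
    matrix via the Faddeev-LeVerrier recurrence, leading coefficient first."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    # M starts as the identity matrix; coeffs accumulates c_0 = 1, c_1, ...
    M = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    coeffs = [Fraction(1)]
    for k in range(1, n + 1):
        # AM = A @ M
        AM = [[sum(A[i][t] * M[t][j] for t in range(n)) for j in range(n)]
              for i in range(n)]
        # c_k = -trace(A @ M) / k  (exact division for integer matrices)
        c = -sum(AM[i][i] for i in range(n)) / k
        coeffs.append(c)
        # M <- A @ M + c_k * I for the next iteration
        M = [[AM[i][j] + (c if i == j else 0) for j in range(n)]
             for i in range(n)]
    return [int(c) for c in coeffs]
```

For example, `charpoly_exact([[2, 1], [1, 2]])` returns `[1, -4, 3]`, i.e.
x^2 - 4x + 3. For the 10x10 matrices in this thread it will reproduce the
Maple/Sage coefficients exactly, including the zero x^1 and x^0 terms.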
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco