Reik H. Börger
reikhboerger at gmx.de
Sun Dec 5 11:28:43 CST 2004
Tiziano, thanks for the link; I will check it soon.
Robert, thanks for the link about floating-point arithmetic. I am aware
that 1e-15 is zero for practical purposes. My problem is that other
routines in Python rely on linalg.eig or similar procedures, for
example when generating multivariate normal random variates. When I
pass my covariance matrix to such a routine and it comes back telling
me that my matrix is not positive definite, even though I can prove
that it is, that is a problem. There are some easy ways to check
whether a matrix is positive definite without using an eigenvalue
decomposition. I think such a check should be included in either the
linalg.eig routine or the stats.rv.multinormal routine.
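One such check, as a minimal sketch in present-day NumPy (the test
matrix here is illustrative): attempting a Cholesky factorization,
which succeeds exactly when a symmetric matrix is positive definite
and is cheaper than a full eigendecomposition.

```python
import numpy as np

def is_positive_definite(a):
    """Return True if the symmetric matrix `a` is positive definite.

    A Cholesky factorization exists exactly for symmetric positive
    definite matrices, so trying one is a cheap definiteness test.
    """
    try:
        np.linalg.cholesky(a)
        return True
    except np.linalg.LinAlgError:
        return False

# A covariance-like matrix, positive definite by construction
# (eigenvalues 1 and 3):
b = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(is_positive_definite(b))   # True
print(is_positive_definite(-b))  # False
```

This avoids the round-off problem entirely: no eigenvalues are
computed, so no spurious imaginary parts can appear.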
You say that Maple is basically rounding off the imaginary part. Are
you sure, or is that a guess? How does Maple know that this matrix has
no complex eigenvalues? Does it perform the checks mentioned above?
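For reference, the rounding Robert describes can be done explicitly on
the Python side. A minimal sketch using NumPy's real_if_close (the
eigenvalue-like values below are illustrative, standing in for the
output of linalg.eig on a symmetric matrix):

```python
import numpy as np

# Eigenvalues of a symmetric real matrix are mathematically real, but
# a general eigensolver can return them with tiny spurious imaginary
# parts on the order of machine precision (~1e-16).
vals = np.array([2.0 + 1e-16j, 0.5 - 3e-16j])

# real_if_close drops the imaginary part when it is negligible
# relative to floating-point precision, yielding a real array.
cleaned = np.real_if_close(vals)
print(np.iscomplexobj(cleaned))  # False
```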
> You may want to have a look at:
> The symeig module contains a Python wrapper for the LAPACK functions
> to solve the standard and generalized eigenvalue problems for
> symmetric (hermitian) positive definite matrices. Those specialized
> algorithms give an important speed-up with respect to the generic
> LAPACK eigenvalue problem solver used by SciPy (scipy.linalg.eig).
> You are running into the problems of finite-precision floating point
> arithmetic. Values around 1e-15 *are* zero for practical purposes. If
> you know that the output values should be real (up to numerical
> precision), you can just take the real part. That is more or less what
> Maple is doing.
> I also recommend reading "What Every Computer Scientist Should Know
> About Floating-Point Arithmetic":
> Robert Kern
> rkern at ucsd.edu
> "In the fields of hell where the grass grows high
> Are the graves of dreams allowed to die."
> -- Richard Harter