[Numpy-discussion] Solving Ax = b: inverse vs cholesky factorization

Anne Archibald aarchiba@physics.mcgill...
Mon Nov 8 16:40:57 CST 2010


On 8 November 2010 14:38, Joon <groups.and.lists@gmail.com> wrote:

> Oh I see. So I guess in the invA = solve(A, I) and then x = dot(invA, b)
> case, there are more places where numerical errors occur than in the
> x = solve(A, b) case.

That's the heart of the matter, but one can be more specific. You can
think of a matrix in terms of how it acts on vectors. Computing the
inverse amounts to solving Ax=b for each of the standard basis vectors
(0,...,0,1,0,...,0); multiplying by the inverse amounts to expressing
your vector in terms of these basis vectors, finding where each one
goes, and adding the images together. But it can happen that when you
break your vector up like that, the images of the components are large
but almost cancel. This sort of near-cancellation amplifies numerical
errors tremendously. In comparison, solving directly with a stable
algorithm avoids ever constructing these nearly-cancelling
combinations explicitly.
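To see the effect numerically, here is a small illustration (not from
the original thread) using the Hilbert matrix, a classic
ill-conditioned example, comparing multiplication by an explicit
inverse against np.linalg.solve:

```python
import numpy as np

# 10x10 Hilbert matrix: condition number around 1e13, so roundoff
# is strongly amplified.  Built directly so no SciPy is needed.
n = 10
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1)

# Known exact solution, and the matching right-hand side.
x_true = np.ones(n)
b = A @ x_true

# Route 1: form the inverse explicitly, then multiply.
x_inv = np.linalg.inv(A) @ b

# Route 2: solve the system directly (LU with partial pivoting).
x_solve = np.linalg.solve(A, b)

print("error via inv(A) @ b   :", np.linalg.norm(x_inv - x_true))
print("error via solve(A, b)  :", np.linalg.norm(x_solve - x_true))
```

On a typical machine the direct solve tends to give a smaller error,
though with a matrix this ill-conditioned both routes lose many digits.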

The standard reason for constructing an inverse is that you want to
solve equations for many right-hand sides with the same matrix. But
most solution methods are implemented as a matrix factorization
followed by a single cheap operation (e.g. triangular solves), so if
that is your goal, it's better to simply keep the factorization
around.
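Concretely, for a symmetric positive definite matrix (the Cholesky
case from the subject line) this pattern can be sketched with SciPy's
cho_factor/cho_solve; the matrix and sizes here are just made-up
illustration:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)  # symmetric positive definite by construction

# Pay the O(n^3) factorization cost once ...
c_and_lower = cho_factor(A)

# ... then each new right-hand side costs only O(n^2) triangular solves.
B = rng.standard_normal((n, 5))   # five right-hand sides
X = cho_solve(c_and_lower, B)

print(np.allclose(A @ X, B))
```

scipy.linalg.lu_factor/lu_solve play the same role for general
(non-symmetric) matrices.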

Anne

