[Numpy-discussion] lstsq functionality
Charles R Harris
Mon Jul 19 22:27:15 CDT 2010
On Mon, Jul 19, 2010 at 9:02 PM, Keith Goodman <firstname.lastname@example.org> wrote:
> On Mon, Jul 19, 2010 at 6:53 PM, Joshua Holbrook
> <email@example.com> wrote:
> > On Mon, Jul 19, 2010 at 5:50 PM, Charles R Harris
> > <firstname.lastname@example.org> wrote:
> >> Hi All,
> >> I'm thinking about adding some functionality to lstsq because I find
> >> myself making the same fixes over and over. List follows.
> >> Add weights so data points can be weighted.
> >> Use column scaling so condition numbers make more sense.
> >> Compute covariance approximation?
> >> Unfortunately, the last will require using the SVD, since there are no
> >> linear least-squares routines in LAPACK that also compute the
> >> covariance, at least none that Google knows about.
> >> Thoughts?
> > Maybe make 2 functions--one which implements 1 and 2, and another
> > which implements 3? I think weights is an excellent idea!
> I'd like a lstsq that did less, like not calculate the sum of squared
> residuals. That's useful in tight loops. So I also think having two
> lstsq makes sense. One as basic as possible; one with bells. How does
> scipy's lstsq fit into all this?
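The weighting and column scaling proposed above can be layered on top of the existing lstsq. A minimal sketch (the function name `weighted_lstsq` and its signature are hypothetical, not part of NumPy): weights enter by scaling each row of the design matrix and right-hand side by sqrt(w), and column scaling normalizes each column so the reported condition number reflects the scaled problem.

```python
import numpy as np

# Hypothetical helper sketching the proposed extensions; np.linalg.lstsq
# itself has neither a weights argument nor column scaling.
def weighted_lstsq(A, b, w=None):
    """Solve min ||sqrt(w) * (A x - b)|| with column scaling."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    if w is not None:
        sw = np.sqrt(np.asarray(w, dtype=float))
        A = A * sw[:, None]   # scale the rows of the design matrix
        b = b * sw            # and the right-hand side to match
    # Column scaling: normalize columns to unit norm so the singular
    # values (and hence the condition number) are better behaved.
    scale = np.linalg.norm(A, axis=0)
    scale[scale == 0] = 1.0
    x, res, rank, s = np.linalg.lstsq(A / scale, b, rcond=None)
    return x / scale, res, rank, s
```

For example, fitting a straight line with unit weights recovers the same coefficients as a plain lstsq call.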
I think the computation of the residues is cheap in lstsq. The algorithm
used starts by reducing the design matrix to bidiagonal form and reduces the
rhs at the same time. In other words, an m x n problem becomes an (n+1) x n
problem. That's why the summed square of residuals is available but not the
individual residuals; after the reduction there is only one residual, and its
square is the residue.
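The covariance approximation mentioned earlier does fall out of the SVD. Under the usual assumptions (full-rank design, i.i.d. errors), cov(x) ~ sigma^2 * V diag(1/s^2) V^T, with sigma^2 estimated from the residue. A sketch (the function `lstsq_with_cov` is illustrative, not an existing NumPy routine):

```python
import numpy as np

def lstsq_with_cov(A, b):
    """Least-squares solution plus the SVD-based covariance estimate."""
    m, n = A.shape
    u, s, vt = np.linalg.svd(A, full_matrices=False)
    x = vt.T @ ((u.T @ b) / s)          # minimum-norm solution
    resid = b - A @ x
    sigma2 = resid @ resid / (m - n)    # unbiased error-variance estimate
    # V diag(1/s**2) V^T, scaled by the estimated error variance
    cov = (vt.T / s**2) @ vt * sigma2
    return x, cov
```

Since (A^T A)^{-1} = V diag(1/s^2) V^T, the result agrees with the textbook normal-equations covariance while avoiding the explicit inverse.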