[SciPy-User] [ANN] scikits.statsmodels 0.2.0 release

Robert Kern robert.kern@gmail....
Thu Feb 18 17:28:41 CST 2010


On Thu, Feb 18, 2010 at 17:23,  <josef.pktd@gmail.com> wrote:
> hit the wrong button
>
> On Thu, Feb 18, 2010 at 5:34 PM,  <josef.pktd@gmail.com> wrote:
>> On Thu, Feb 18, 2010 at 5:30 PM, Gael Varoquaux
>> <gael.varoquaux@normalesup.org> wrote:
>>> On Thu, Feb 18, 2010 at 05:24:58PM -0500, David Warde-Farley wrote:
>>>
>>>> On 16-Feb-10, at 2:14 PM, Skipper Seabold wrote:
>>>
>>>> > * Added four discrete choice models: Poisson, Probit, Logit, and
>>>> > Multinomial Logit.
>
> They are still new, so some problems may still be lurking.
>
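For anyone who wants to kick the tires, here is a minimal sketch of
fitting one of the new discrete choice models on toy data. The import
below uses the modern statsmodels.api path; in the 0.2.0 release the
package lives in the scikits.statsmodels namespace, so treat the exact
import as an assumption and check the package docs:

  import numpy as np
  import statsmodels.api as sm  # 0.2.0 ships as scikits.statsmodels

  # toy data: binary outcome driven by a single regressor
  np.random.seed(0)
  x = np.random.normal(size=200)
  p = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x)))  # true logistic probabilities
  y = np.random.binomial(1, p)

  X = sm.add_constant(x)       # add an intercept column
  res = sm.Logit(y, X).fit()   # maximum likelihood fit
  print(res.params)            # fitted coefficients

Swapping sm.Logit for sm.Probit or sm.Poisson (with count data) should
work the same way.
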
>>>
>>>> Awesome. I look forward to trying these out (by the way, do you
>>>> support any regularization methods? L2/L1?)
>
> L2 Tikhonov-style penalization is planned for specific models, e.g.
> generalized/Bayesian ridge regression, vector autoregressions with
> dummy-variable priors, and other shrinkage estimators for cases where
> there are too many parameters to estimate.
>
> There are no plans for the lasso. I would like LARS as a complement
> to principal component regression, but it's not high on the priority
> list.
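
For concreteness, the L2/Tikhonov penalty mentioned above has a
closed-form solution. This is just the textbook formula in plain
NumPy, not statsmodels code:

  import numpy as np

  def ridge(X, y, lam):
      # Tikhonov / ridge estimate:
      #   argmin_b ||y - X b||^2 + lam * ||b||^2
      #   = (X'X + lam * I)^{-1} X'y
      p = X.shape[1]
      return np.linalg.solve(X.T.dot(X) + lam * np.eye(p), X.T.dot(y))

As lam goes to zero this recovers ordinary least squares; larger lam
shrinks the coefficients toward zero.
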

There's been a recent paper that might be of interest. The associated
code is GPLed, alas.

  http://www.jstatsoft.org/v33/i01

"""
Abstract:
We develop fast algorithms for estimation of generalized linear models
with convex penalties. The models include linear regression, two-class
logistic regression, and multinomial regression problems while the
penalties include L1 (the lasso), L2 (ridge regression) and mixtures
of the two (the elastic net). The algorithms use cyclical coordinate
descent, computed along a regularization path. The methods can handle
large problems and can also deal efficiently with sparse features. In
comparative timings we find that the new algorithms are considerably
faster than competing methods.
"""

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

