[SciPy-User] Classification using neural networks

David Cournapeau cournape@gmail....
Thu Jul 26 11:38:36 CDT 2012

On Thu, Jul 26, 2012 at 5:26 PM, Sturla Molden <sturla@molden.no> wrote:
> Den 26.07.2012 09:04, skrev Gael Varoquaux:
>> Well, it has a perceptron implementation:
>> http://scikit-learn.org/dev/modules/generated/sklearn.linear_model.Perceptron.html
>> but not any multilayer perceptron[*]. Thus, I don't really think that we can
>> claim that we have neural networks. That said, they are so 1990's :)
> Yeah, it seems that SVMs are more fashionable than ANNs these days. I
> don't know why that is: SVMs are slow to train and use, and I have yet
> to see them out-perform an ANN. Perhaps it's because the latest
> edition of Numerical Recipes favours them over ANNs, because SVMs
> supposedly are more transparent and easier to understand (I beg to
> differ). Multilayer ANNs trained with Levenberg-Marquardt and error
> backpropagation are among the most powerful non-linear regression and
> classification tools there are. And by the way, SciPy already has an
> LM engine to train one (scipy.optimize.leastsq); all it takes is the
> code to compute the Jacobian by backpropagation.
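To illustrate the point quoted above, here is a minimal sketch (not from the original post) of fitting a one-hidden-layer network with scipy.optimize.leastsq on a toy regression problem. For brevity the Jacobian is left to leastsq's built-in finite-difference approximation rather than computed by backpropagation as Sturla suggests; the network size and data are purely illustrative.

```python
import numpy as np
from scipy.optimize import leastsq

rng = np.random.RandomState(0)
n_in, n_hidden = 1, 5  # illustrative sizes, not from the original post

def unpack(params):
    """Split the flat parameter vector into weights and biases."""
    i = 0
    W1 = params[i:i + n_in * n_hidden].reshape(n_hidden, n_in)
    i += n_in * n_hidden
    b1 = params[i:i + n_hidden]
    i += n_hidden
    W2 = params[i:i + n_hidden].reshape(1, n_hidden)
    i += n_hidden
    b2 = params[i:i + 1]
    return W1, b1, W2, b2

def forward(params, X):
    """One hidden tanh layer, linear output."""
    W1, b1, W2, b2 = unpack(params)
    h = np.tanh(X @ W1.T + b1)
    return (h @ W2.T + b2).ravel()

def residuals(params, X, y):
    # leastsq minimizes the sum of squared residuals,
    # i.e. the usual least-squares training objective.
    return forward(params, X) - y

# Toy regression target: y = sin(3x) plus noise.
X = np.linspace(-1, 1, 50).reshape(-1, 1)
y = np.sin(3 * X).ravel() + 0.05 * rng.randn(50)

p0 = 0.1 * rng.randn(n_in * n_hidden + 2 * n_hidden + 1)
p_opt, ier = leastsq(residuals, p0, args=(X, y))
print("fit RMSE:", np.sqrt(np.mean(residuals(p_opt, X, y) ** 2)))
```

Passing an analytical backprop Jacobian via leastsq's Dfun argument would replace the finite-difference step and is the piece Sturla says is missing.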

I find Vapnik's work on structural risk minimization to be one of the
crown jewels of machine learning (or of statistics, for that matter), and
would like to believe it is one of the reasons why SVMs are/were popular.
ANNs also got a bad press because of their history - mentioning neural
networks in your publication was an almost-sure way to get your paper
viewed unfavourably until a couple of years ago, I think.

The focus on one technique in particular is fundamentally wrong, I
think (no free lunch and all that). It all depends on your data and
what you're doing; the "use technique X" advice, where X changes every
few years, is closer to pop culture than science, IMO.


More information about the SciPy-User mailing list