Fri Jul 23 12:36:49 CDT 2010
On Fri, Jul 23, 2010 at 12:27 PM, David Cournapeau <email@example.com>wrote:
> On Sat, Jul 24, 2010 at 2:19 AM, Benjamin Root <firstname.lastname@example.org> wrote:
> > Examining further, I see that SciPy's implementation is fairly simplistic
> > and has some issues. In the given example, the reason why 3 is never
> > returned is not because of the distortion metric, but rather
> > because the kmeans function never sees the distance for using 3. As a
> > matter of fact, the actual code that does the convergence is in vq
> > (vector quantization), and it tries to minimize the sum of squared errors.
> > kmeans just keeps retrying the convergence with random initial guesses
> > to explore different local minima.
> As one of the maintainers of kmeans, I would be the first to admit the
> code is basic, for good and bad. Something more elaborate for
> clustering may indeed be useful, as long as the interface stays simple.
> More complex needs should turn to scikits.learn or more specialized tools.
I agree; kmeans does not need to get very complicated, because k-means (the
general concept) is not well suited to very complicated situations.
As a thought, one possible way to improve the current implementation is to
ensure that unique guesses are made. Currently, several iterations are
wasted on guesses that have already been tried. Is there a way to do
sampling without replacement in numpy.random?
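For the question above, a couple of idioms work; note that `numpy.random.choice` (which accepts `replace=False`) was added to NumPy after this 2010 thread, so at the time the permutation-based approach was the usual answer. The data array here is illustrative.

```python
import numpy as np

data = np.arange(100)

# Modern API: Generator.choice with replace=False samples without replacement.
rng = np.random.default_rng(0)
sample = rng.choice(data, size=5, replace=False)
# All five draws are guaranteed distinct.

# Older idiom (available in 2010): permute the indices, take the first k.
idx = np.random.permutation(len(data))[:5]
unique_guesses = data[idx]
```

Either form would let kmeans draw k distinct observations as an initial codebook, avoiding repeated guesses.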