Tue Jun 1 03:22:08 CDT 2010
On Tue, Jun 1, 2010 at 4:09 AM, Travis Oliphant <email@example.com> wrote:
> On May 31, 2010, at 9:16 AM, firstname.lastname@example.org wrote:
>> This is more about the process than the content. stats.distributions was
>> Travis's baby (although unfinished), and most of his changes are very
>> good, but I don't want to hunt for the 5-10% (?) typos anymore.
> I'm really not sure what the difference is between looking at a timeline of changes and a formal "review" process. In either case you are "looking for someone's mistakes or problems". I do think your estimate of typos is a bit aggressive. Really, 5-10% typos? What is the denominator?
I just replied to most of this.
My test run in the middle of the weekend (before I gave up) had about
4 or 5 test failures in the new _logpdf and _logcdf methods.
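For context, the failures are consistency checks of this kind: a
distribution's logpdf must agree with log(pdf) wherever the density is
positive. A numpy-only sketch with a hand-written standard normal (an
illustrative stand-in, not the actual scipy test code):

```python
import numpy as np

# Hand-rolled standard-normal pdf/logpdf pair, standing in for a
# distribution's pdf and _logpdf methods (not scipy code).
def norm_pdf(x):
    return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

def norm_logpdf(x):
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

x = np.linspace(-5, 5, 101)
# The generic relationship every implementation must satisfy:
np.testing.assert_allclose(norm_logpdf(x), np.log(norm_pdf(x)), rtol=1e-12)
```

A buggy _logpdf (wrong constant, dropped parameter) shows up immediately
as a mismatch against log(pdf).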
Third and fourth moments (skew, kurtosis) might still return about 5%
incorrect numbers, which I accept since that code was written at a
different time. Same with many generic methods in stats.distributions
that I fixed two and a half years ago and which seem never to have
worked, from what I inferred from the history.
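The way I catch wrong skew/kurtosis values is to compare the reported
numbers against moments computed directly by numerical integration of
the pdf. A numpy-only sketch using a hand-written standard normal (my
own check, not the scipy test suite; for the standard normal, skew and
excess kurtosis should both come out near zero):

```python
import numpy as np

def norm_pdf(x):
    # standard normal density, written out by hand for the example
    return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

# Fine uniform grid; the tails beyond +-10 sigma are negligible.
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
pdf = norm_pdf(x)

def integrate(vals):
    # simple rectangle/trapezoid sum; endpoints are ~0 so both agree
    return vals.sum() * dx

mean = integrate(x * pdf)
var = integrate((x - mean) ** 2 * pdf)
skew = integrate((x - mean) ** 3 * pdf) / var ** 1.5
kurt = integrate((x - mean) ** 4 * pdf) / var ** 2 - 3  # excess kurtosis
```

Comparing these numerically integrated values against what a
distribution's stats() method reports flags the formulas that were
transcribed incorrectly from a reference.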
denominator: functions/methods that return numbers.
5-10% is just a guess; I never tried to measure it, and maybe it's only
3%. But each one requires an afternoon to hunt down the reference and
the correct formula.
SciPy-Dev mailing list