[SciPy-User] [OT] statistical test for comparing two measurements (with errors)
Tue Sep 13 17:54:15 CDT 2011
Hi all, seeing as there are a few stats gurus on the list, I thought someone might know the answer to this question:
I've got two distributions and want to compare each of their moments, determining for each moment the probability that the two are equal. What I've done so far is to calculate the moments and, using Monte Carlo sub-sampling, estimate an error for each calculation.
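For concreteness, here is a minimal sketch of the sub-sampling scheme I mean (the function name and the `frac`/`n_iter` parameters are illustrative, not fixed): repeatedly draw random sub-samples and recompute the moment, then take the spread of those estimates as the 'measurement error'.

```python
import numpy as np

rng = np.random.default_rng(0)

def moment_with_error(points, order=2, n_iter=500, frac=0.5):
    """Estimate a central moment of a 1-D sample plus a Monte Carlo error.

    The moment is recomputed on n_iter random sub-samples (drawn without
    replacement, each a fraction `frac` of the data); the standard
    deviation of those estimates is reported as the error.
    """
    n_sub = max(2, int(frac * len(points)))
    estimates = np.empty(n_iter)
    for i in range(n_iter):
        sub = rng.choice(points, size=n_sub, replace=False)
        estimates[i] = np.mean((sub - sub.mean()) ** order)
    full = np.mean((points - points.mean()) ** order)
    return full, estimates.std(ddof=1)

# Example: second central moment (variance) of a standard normal sample.
data = rng.normal(size=1000)
value, err = moment_with_error(data, order=2)
```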
This essentially gives a value and a 'measurement error' for each moment and distribution, and I'm looking for a test which will take these pairs and determine whether they're likely to be equal. One option I've considered is to use (or abuse) the t-test, since it compares two distributions with given means and standard deviations (analogous to the value-and-error scenario I have). What I'm struggling with is how to choose the degrees of freedom. I've contemplated using the number of Monte Carlo iterates, but this doesn't really seem right because I'm not convinced they are truly independent measures. The other option I've thought of is the reciprocal of the Monte Carlo selection probability; this gives results which 'feel' right, but I'm having a hard time finding a solid justification for it.
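To make the use/abuse concrete: scipy provides `scipy.stats.ttest_ind_from_stats`, which runs a two-sample t-test from summary statistics alone. The sketch below treats each moment estimate ± error as a mean ± standard deviation from some effective number of observations `n_eff`; choosing `n_eff` is exactly the open question, so here it is simply a parameter.

```python
from scipy import stats

def compare_moments(m1, err1, m2, err2, n_eff):
    """Compare two moment estimates given their Monte Carlo errors.

    Treats each (value, error) pair as a sample mean and standard
    deviation from n_eff 'observations' and runs Welch's t-test
    (equal_var=False, so the two errors need not match).
    """
    t, p = stats.ttest_ind_from_stats(m1, err1, n_eff,
                                      m2, err2, n_eff,
                                      equal_var=False)
    return t, p

# Example usage with made-up numbers: two second-moment estimates.
t_stat, p_value = compare_moments(1.02, 0.05, 1.10, 0.06, n_eff=50)
```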
If anyone could suggest either an alternative test, or a suitable way of estimating degrees of freedom I'd be very grateful.
To give a little more context, the underlying distributions from which I am calculating moments are 2D clouds of points and what I'm eventually aiming at is a way of quantifying shape similarity (and possibly also determining which moments give the most robust shape discrimination).
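In case it helps to see the quantities involved: the central moments of a 2D point cloud can be computed in a few lines (a minimal sketch; normalizing the moments for scale/rotation invariance would be a further step, and the function name is my own):

```python
import numpy as np

def central_moments_2d(points, max_order=3):
    """Central moments mu_pq of a 2-D point cloud (N x 2 array).

    Coordinates are centred on the cloud's mean, so mu_10 = mu_01 = 0
    by construction; only orders 2..max_order are returned.
    """
    x = points[:, 0] - points[:, 0].mean()
    y = points[:, 1] - points[:, 1].mean()
    return {(p, q): np.mean(x**p * y**q)
            for p in range(max_order + 1)
            for q in range(max_order + 1)
            if 2 <= p + q <= max_order}

# Example: moments of an isotropic Gaussian cloud.
pts = np.random.default_rng(1).normal(size=(500, 2))
mu = central_moments_2d(pts)
```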