[SciPy-User] Unit testing of Bayesian estimator
Mon Nov 9 12:14:36 CST 2009
> From the posterior probability S/(S+1), you could construct
> a decision rule similar to a classical test, e.g. accept null
> if S/(S+1) < 0.95, and then construct a MonteCarlo
> with samples drawn from either the uniform or the pulsed
> distribution in the same way as for a classical test, and
> verify that the decision mistakes, alpha and beta errors, in the
> sample are close to the posterior probabilities.
> The posterior probability would be similar to the p-value
> in a classical test. If you want to balance alpha and
> beta errors, a threshold S/(S+1)<0.5 would be more
> appropriate, but for the unit tests it wouldn't matter.
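The quoted Monte Carlo check could be sketched roughly as follows. This is a toy model of my own choosing, not the estimator from this thread: I use a *fixed* cosine-modulated alternative p(x) = 1 + a*cos(2*pi*x) for the pulse profile and equal prior odds, so S is just the likelihood ratio and the posterior is S/(S+1).

```python
import numpy as np
from scipy.special import expit

def pulsed_sample(n, a, rng):
    """Draw n phases from the toy pulsed density p(x) = 1 + a*cos(2*pi*x)
    on [0, 1) by rejection sampling against a uniform envelope."""
    out = np.empty(0)
    while out.size < n:
        x = rng.random(n)
        keep = rng.random(n) * (1.0 + a) < 1.0 + a * np.cos(2.0 * np.pi * x)
        out = np.concatenate([out, x[keep]])
    return out[:n]

def posterior_pulsed(x, a):
    """Toy posterior P(pulsed | x): simple-vs-simple with equal prior odds,
    so S is the likelihood ratio and S/(S+1) = expit(log S)."""
    log_lr = np.sum(np.log1p(a * np.cos(2.0 * np.pi * x)))  # uniform density is 1
    return expit(log_lr)

def error_rates(n_photons=200, a=0.5, n_trials=200, threshold=0.95, seed=0):
    """Monte Carlo alpha (declare 'pulsed' on uniform data) and beta
    (declare 'uniform' on pulsed data) for the rule S/(S+1) > threshold."""
    rng = np.random.default_rng(seed)
    alpha = np.mean([posterior_pulsed(rng.random(n_photons), a) > threshold
                     for _ in range(n_trials)])
    beta = np.mean([posterior_pulsed(pulsed_sample(n_photons, a, rng), a) <= threshold
                    for _ in range(n_trials)])
    return alpha, beta
```

Running this for uniform fake data already hints at the problem discussed below: the realized alpha is far below 1 - threshold rather than equal to it.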
Unfortunately this doesn't work. Think of it this way: if my data set
is 10000 photons, and I look at the fraction of uniformly-distributed
fake data sets that are assigned a probability > 0.95 of being pulsed,
that fraction is not 5% - it is essentially zero, since 10000 photons
are enough to give a very solid answer (experiment confirms this). So
I can't interpret my Bayesian posterior probability as a frequentist
probability of alpha error.
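The same point can be illustrated on a toy model (the fixed cosine-modulated alternative, the amplitude 0.5, and equal prior odds are my assumptions, not the estimator under discussion): as the number of photons grows, the fraction of uniform fake data sets that clear the 0.95 posterior threshold collapses toward zero instead of sitting at 5%.

```python
import numpy as np
from scipy.special import expit

def posterior_pulsed(x, a=0.5):
    # Toy posterior P(pulsed | phases x): uniform null vs a *fixed*
    # cosine-modulated alternative 1 + a*cos(2*pi*x), equal prior odds.
    log_lr = np.sum(np.log1p(a * np.cos(2.0 * np.pi * x)))
    return expit(log_lr)

def false_alarm_fraction(n_photons, n_trials=200, threshold=0.95, seed=1):
    # Fraction of *uniform* (null) fake data sets assigned
    # P(pulsed) > threshold -- a frequentist alpha would be 5% here.
    rng = np.random.default_rng(seed)
    post = np.array([posterior_pulsed(rng.random(n_photons))
                     for _ in range(n_trials)])
    return float(np.mean(post > threshold))

for n in (100, 10000):
    print(n, false_alarm_fraction(n))
```

With 10000 photons the log likelihood ratio under the null concentrates far below the threshold, so essentially no uniform data set is ever called pulsed.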
> Running the example a few times, it looks like the power
> is relatively low for distinguishing a uniform distribution from
> a pulsed distribution with fraction/binomial parameter 0.05
> and sample size <1000.
> If you have strong beliefs that the fraction is really this low,
> then an informative prior for the fraction might improve the power.
I really don't want to encourage my code to return reports of
pulsations. To be believed in this nest of frequentists I work with, I
need a solid detection in spite of very conservative priors.