[SciPy-User] Unit testing of Bayesian estimator

josef.pktd@gmai...
Mon Nov 9 12:44:50 CST 2009


On Mon, Nov 9, 2009 at 1:14 PM, Anne Archibald
<peridot.faceted@gmail.com> wrote:
> 2009/11/9  <josef.pktd@gmail.com>:
>
>> From the posterior probability S/(S+1), you could construct
>> a decision rule similar to a classical test, e.g. accept the null
>> if S/(S+1) < 0.95, and then run a Monte Carlo experiment
>> with samples drawn from either the uniform or the pulsed
>> distribution in the same way as for a classical test, and
>> verify that the decision mistakes, the alpha and beta errors, in the
>> sample are close to the posterior probabilities.
>> The posterior probability would be similar to the p-value
>> in a classical test. If you want to balance alpha and
>> beta errors, a threshold S/(S+1)<0.5 would be more
>> appropriate, but for the unit tests it wouldn't matter.
>
> Unfortunately this doesn't work. Think of it this way: if my data size
> is 10000 photons, and I'm looking at the fraction of
> uniformly-distributed data sets that have a probability > 0.95 that
> they are pulsed, this doesn't happen for 5% of my fake data sets - it
> will almost never happen, since 10000 photons are enough to give a
> very solid answer (experiment confirms this). So I can't interpret my
> Bayesian probability as a frequentist probability of alpha error.

Doesn't this mean that the Bayesian posterior doesn't have the
correct tail probabilities? If my posterior belief is that the
probability of making a mistake is 5% and I have the correct
model, but the real probability of making a mistake is only 0.1%,
then my updating should have taken into account how
informative the signal is and tightened my posterior distribution.

With 8000 photons in your example, I get
Probability the signal is pulsed: 0.999960
which makes it pretty obvious whether or not the signal is pulsed.

Do the tail probabilities work better for cases that are not so
easy to distinguish?
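
To make the Monte Carlo check concrete, here is roughly what I have
in mind (just a toy sketch I made up: a simple on-pulse-window
likelihood stands in for your actual model, and all the numbers are
placeholders, not your code or data):

import numpy as np
from scipy import special

rng = np.random.default_rng(0)

def pulsed_sample(n, frac, rng):
    # toy pulsed data: a fraction `frac` of the photon phases is
    # concentrated in a narrow window around phase 0.5
    phases = rng.random(n)
    pulsed = rng.random(n) < frac
    phases[pulsed] = (0.5 + 0.05 * rng.standard_normal(pulsed.sum())) % 1.0
    return phases

def posterior_pulsed(phases, frac_grid=np.linspace(0.01, 0.99, 99)):
    # stand-in for the real posterior S/(S+1): count photons in an
    # on-pulse window and compare a uniform model against a binomial
    # excess model, marginalized over the pulsed fraction
    # (the binomial coefficient cancels in the ratio and is omitted)
    n, w = len(phases), 0.2
    k = np.sum(np.abs(phases - 0.5) < 0.1)
    logl_uniform = k * np.log(w) + (n - k) * np.log(1 - w)
    p = w + frac_grid * (1 - w)   # all pulsed photons assumed in-window
    logl_pulsed = special.logsumexp(
        k * np.log(p) + (n - k) * np.log(1 - p)) - np.log(len(frac_grid))
    # with equal prior odds S is the Bayes factor, and the posterior
    # probability of "pulsed" is S/(S+1) = logistic(log S)
    return special.expit(logl_pulsed - logl_uniform)

def error_rates(n_photons=200, frac=0.05, n_rep=500, threshold=0.95):
    # alpha: uniform data sets called pulsed; beta: pulsed data sets missed
    alpha = np.mean([posterior_pulsed(rng.random(n_photons)) > threshold
                     for _ in range(n_rep)])
    beta = np.mean([posterior_pulsed(pulsed_sample(n_photons, frac, rng)) <= threshold
                    for _ in range(n_rep)])
    return alpha, beta

print(error_rates())

The interesting part for the unit test would then be whether the
alpha and beta errors at a given threshold line up with the posterior
probabilities in the borderline cases, rather than in the very large
samples where the answer is obvious anyway.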

Josef

>
>> Running the example a few times, it looks like the power
>> is relatively low for distinguishing the uniform distribution from
>> a pulsed distribution with fraction/binomial parameter 0.05
>> and sample size <1000.
>> If you have strong beliefs that the fraction is really this low,
>> then an informative prior for the fraction might improve the
>> results.
>
> I really don't want to encourage my code to return reports of
> pulsations. To be believed in this nest of frequentists I work with, I
> need a solid detection in spite of very conservative priors.

Once you have everything working, you could check the
sensitivity to the priors. For parameter estimation, I found it
interesting to see which parameters change a lot when I
vary the prior variance, and it helps in the defense against
frequentists.
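
For example, with a conjugate Beta toy model and made-up counts, the
comparison I mean would look something like this (none of these
numbers come from your data):

import numpy as np
from scipy import stats

# made-up counts: k photons attributed to the pulse out of n
k, n = 12, 200

# Beta priors of increasing strength, the last two centered on a 5% fraction
for a, b in [(1, 1), (1, 19), (5, 95)]:
    post = stats.beta(a + k, b + n - k)   # conjugate Beta posterior
    print("Beta(%d,%d) prior: posterior mean %.3f, 95%% interval (%.3f, %.3f)"
          % (a, b, post.mean(), post.ppf(0.025), post.ppf(0.975)))

If the posterior interval barely moves between the flat and the
informative priors, that is a useful argument to have ready for the
frequentists.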

Josef

>
> Anne
> _______________________________________________
> SciPy-User mailing list
> SciPy-User@scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

