[SciPy-User] "small data" statistics
Sat Oct 13 08:07:47 CDT 2012
On Thu, Oct 11, 2012 at 10:57 AM, <firstname.lastname@example.org> wrote:
> Most statistical tests and statistical inference in scipy.stats and
> statsmodels rely on large-sample assumptions.
> Everyone is talking about "Big Data", but is anyone still interested
> in doing small-sample statistics in Python?
> I'd like to know whether it's worth spending any time on general-
> purpose small-sample statistics.
> For example, a homework problem:
> Twenty participants were given a list of 20 words to process. The 20
> participants were randomly assigned to one of two treatment
> conditions. Half were instructed to count the number of vowels in each
> word (shallow processing). Half were instructed to judge whether the
> object described by each word would be useful if one were stranded on
> a desert island (deep processing). After a brief distractor task, all
> subjects were given a surprise free recall task. The number of words
> correctly recalled was recorded for each subject. Here are the data:
> Shallow Processing: 13 12 11 9 11 13 14 14 14 15
> Deep Processing: 12 15 14 14 13 12 15 14 16 17
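For a two-sample problem this small, exact inference is feasible by brute force: enumerate all C(20, 10) = 184,756 ways of assigning the pooled scores to two groups of 10 and count how often the group difference is at least as extreme as the observed one. A minimal stdlib-only sketch (using the difference in group sums as the test statistic is my choice for illustration, not something prescribed in the thread):

```python
from itertools import combinations

shallow = [13, 12, 11, 9, 11, 13, 14, 14, 14, 15]
deep = [12, 15, 14, 14, 13, 12, 15, 14, 16, 17]

pooled = shallow + deep
n = len(shallow)
total = sum(pooled)
# Observed statistic: difference in group sums (equivalent to the
# difference in means, since both groups have 10 subjects).
observed = sum(deep) - sum(shallow)

count = 0
n_perm = 0
# Enumerate every possible assignment of 10 of the 20 pooled scores
# to the "shallow" group; the rest form the "deep" group.
for idx in combinations(range(len(pooled)), n):
    group1 = sum(pooled[i] for i in idx)
    diff = total - 2 * group1  # (deep sum) - (shallow sum)
    n_perm += 1
    if abs(diff) >= abs(observed):
        count += 1

# Two-sided exact permutation p-value.
p_exact = count / n_perm
```

The identity assignment always reproduces the observed statistic, so the p-value is strictly positive by construction; this is essentially the kind of exact test that specialized packages implement far more efficiently.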
An example of existing work: the R package coin.
I came across this again while digging into an error in the p-values
from stats.wilcoxon in the presence of ties
https://github.com/scipy/scipy/pull/338
and possible enhancements for it.
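Ties are exactly where the normal approximation behind rank tests needs care: tied values share mid-ranks, which shrinks the variance of the rank statistic, and the standard adjustment involves the term sum(t**3 - t) over tied groups of size t. A small stdlib sketch of computing that term for the recall data above (the helper name is mine, not a scipy API):

```python
from collections import Counter

def tie_correction_term(values):
    """Return sum(t**3 - t) over groups of tied values, the quantity
    used to adjust the variance of rank statistics for ties."""
    counts = Counter(values)
    return sum(t**3 - t for t in counts.values() if t > 1)

# Pooled recall scores from the homework example: heavily tied.
pooled = [13, 12, 11, 9, 11, 13, 14, 14, 14, 15,
          12, 15, 14, 14, 13, 12, 15, 14, 16, 17]
correction = tie_correction_term(pooled)
```

With six 14s alone contributing 6**3 - 6 = 210, the correction is far from negligible here, which is why ignoring ties can noticeably distort p-values in samples this small.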