[SciPy-User] "small data" statistics

josef.pktd@gmai...
Thu Oct 11 09:57:23 CDT 2012


Most statistical tests and statistical inference in scipy.stats and
statsmodels rely on large-sample assumptions.

Everyone is talking about "big data", but is anyone still interested
in doing small-sample statistics in Python?

I'd like to know whether it's worth spending any time on
general-purpose small-sample statistics.

For example:

http://facultyweb.berry.edu/vbissonnette/statshw/doc/perm_2bs.html

```
Example homework problem:
Twenty participants were given a list of 20 words to process. The 20
participants were randomly assigned to one of two treatment
conditions. Half were instructed to count the number of vowels in each
word (shallow processing). Half were instructed to judge whether the
object described by each word would be useful if one were stranded on
a desert island (deep processing). After a brief distractor task, all
subjects were given a surprise free recall task. The number of words
correctly recalled was recorded for each subject. Here are the data:

Shallow Processing: 13 12 11 9 11 13 14 14 14 15
Deep Processing: 12 15 14 14 13 12 15 14 16 17
```
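
For reference, here is a minimal sketch of the exact permutation test
that page describes, applied to the data above. It is hand-rolled with
numpy and itertools (not an existing scipy.stats function); the
ttest_ind call at the end is only there to contrast with the usual
large-sample result:

```
import numpy as np
from itertools import combinations
from scipy import stats

shallow = np.array([13, 12, 11, 9, 11, 13, 14, 14, 14, 15])
deep = np.array([12, 15, 14, 14, 13, 12, 15, 14, 16, 17])

observed = deep.mean() - shallow.mean()
pooled = np.concatenate([shallow, deep])
n = len(shallow)

# Enumerate all C(20, 10) = 184756 ways to reassign the pooled scores
# to two groups of 10, and count how often the mean difference is at
# least as extreme as the observed one (two-sided).
count = 0
total = 0
for idx in combinations(range(len(pooled)), n):
    mask = np.zeros(len(pooled), dtype=bool)
    mask[list(idx)] = True
    diff = pooled[mask].mean() - pooled[~mask].mean()
    if abs(diff) >= abs(observed):
        count += 1
    total += 1

print("observed mean difference:", observed)
print("exact permutation p-value:", count / total)

# For comparison, the t-test, which relies on normality and
# large-sample assumptions:
print("ttest_ind p-value:", stats.ttest_ind(deep, shallow)[1])
```

With only 10 observations per group, the exhaustive enumeration is
cheap, which is exactly the regime where an exact small-sample test is
both feasible and preferable to the asymptotic approximation.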

Josef

