[Nipy-devel] rm 2-way ANOVA

Skipper Seabold jsseabold@gmail....
Sun Nov 22 12:17:41 CST 2009

On Sat, Nov 21, 2009 at 1:03 PM,  <josef.pktd@gmail.com> wrote:
> On Sat, Nov 21, 2009 at 7:02 AM, Christian Brodbeck
> <christianmbrodbeck@gmail.com> wrote:
>> Hi, thanks for the explanations.
>> I am performing the ANOVA for each electrode for each sample point, so
>> 129*150=19350 times
>> Ok, that's a lot and some speedup would be good.
>> Just out of curiosity, is there still a way to interpret the results of so
>> many
>> statistical estimations or tests?
>> Well, I think it is not really possible to interpret the individual
>> p-values, but these plots provide more spatial and temporal information.
>> This is what one of my plots would look
>> like: http://dl.dropbox.com/u/659990/eeg/IPhB_x_POS_anova.png
> The graph looks like a good summary of what would be a huge table
> of results.
>> There are not actually that many comparisons, because the data are
>> temporally smoothed (low pass) as well as spatially dependent (volume
>> conduction in the head). (So I think it should theoretically be possible
>> given the temporal and spatial intercorrelation of the sensors to find an
>> appropriate correction factor for p?)
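(An aside on the correction-factor question above: I don't know of a simple closed-form factor for arbitrarily correlated tests, but one standard pragmatic choice for this many tests is Benjamini-Hochberg false discovery rate control, which remains valid under the kind of positive dependence that smoothing tends to induce. A minimal numpy sketch, where `bh_fdr` is just an illustrative helper name, not anything in statsmodels:)

```python
import numpy as np

def bh_fdr(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns a boolean mask marking which hypotheses are rejected
    at false discovery rate q."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # compare the i-th smallest p-value against q * i / m
    passed = p[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.max(np.nonzero(passed)[0])  # largest rank meeting its threshold
        reject[order[:k + 1]] = True       # reject everything ranked at or below it
    return reject
```

(You would feed in the 19350 p-values at once and plot the rejection mask over electrodes and time, instead of thresholding each p-value at 0.05.)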
> Does this mean that you are selecting 150 time points for the analysis,
> out of a continuously observed time series?
> The statistical interpretation of so many correlated p-values looks
> pretty difficult, but they look very useful as descriptive statistics.
> In many cases in econometrics, robust error covariances for the
> parameter estimates are calculated.
> I don't know about the ANOVA in this case, but from the models
> I have seen, this would be panel data with spatial and intertemporal
> correlation, as used in spatial econometrics. With one big
> regression instead of individual regressions, it should be possible to
> calculate robust p-values.
> (However, I never did anything in this area, but it is on my wish list.)
> <2 hours later>
> Maybe the spatial econometrics analogy doesn't apply in your case,
> if there are no parameters in common across your 19350 regressions.
> I think this looks more like SUR, but (in a brief google search) I didn't
> find any information about robust standard errors for SUR.
> There is a large literature on robust standard errors for panel data; for
> cross-sections the standard White heteroscedasticity correction can be
> used. So framing your problem as panel data or as a
> system of equations might produce (intertemporal and spatial)
> correlation-robust standard errors.
> T=(subjects*conditions) by N=(electrode*timepoints)
> with theory for fixed T and large N
> Maybe Skipper knows more about this. I hopefully know more
> next year, when we have more of random effects and panel
> data in statsmodels. We already have heteroscedasticity-robust
> standard errors in statsmodels, but only for simple, single-equation
> OLS.
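(For reference, the White correction Josef mentions is a sandwich estimator around the usual OLS variance. A self-contained numpy sketch of both sets of standard errors; `ols_white_se` is an illustrative name, not the statsmodels API:)

```python
import numpy as np

def ols_white_se(X, y):
    """OLS coefficients with classical and White (HC0) robust standard errors."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    # classical: sigma^2 * (X'X)^-1, which assumes homoscedastic errors
    sigma2 = resid @ resid / (n - k)
    se_classical = np.sqrt(np.diag(sigma2 * XtX_inv))
    # White sandwich: (X'X)^-1 X' diag(e_i^2) X (X'X)^-1
    meat = X.T @ (resid[:, None] ** 2 * X)
    se_white = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))
    return beta, se_classical, se_white
```

(When the error variance really does depend on the regressors, the two sets of standard errors diverge and only the White ones are consistent.)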

I must confess that I'm not entirely sure what you're after.  I have
trouble generalizing from econometrics to statistics writ large until
I have more experience, but SUR might work here (for a look at how the
problem is set up: http://en.wikipedia.org/wiki/Seemingly_unrelated_regression).
SUR itself is like having robust standard errors: it takes into
account the cross-correlations between the equations' errors.  So if
you have y1 = X1*b1, y2 = X2*b2, and y3 = X3*b3, each equation with
its own outcome and design, and there isn't a reason to think there
is any causal relation among the equations but their errors are still
correlated (due to temporal or spatial relations, perhaps, in this
case), then SUR will take this into account (hence the *seemingly
unrelated* regression), like a systems-modeling analogue of
generalized least squares.
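(To make the mechanics concrete, here is a rough two-stage feasible-GLS sketch in plain numpy, not the statsmodels sandbox implementation; `sur_fgls` is a made-up helper. Stage one runs equation-by-equation OLS to estimate the cross-equation error covariance; stage two runs GLS on the stacked system.)

```python
import numpy as np

def sur_fgls(Xs, ys):
    """Feasible GLS for a seemingly unrelated regression system.

    Xs: list of (T, k_i) design matrices; ys: list of (T,) outcomes;
    all equations are observed over the same T rows."""
    T = len(ys[0])
    # stage 1: per-equation OLS, collect residuals
    resids = [y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
              for X, y in zip(Xs, ys)]
    E = np.column_stack(resids)          # T x M residual matrix
    Sigma = E.T @ E / T                  # cross-equation error covariance
    # stage 2: GLS on the stacked system with Omega = Sigma kron I_T
    ks = [X.shape[1] for X in Xs]
    Xbig = np.zeros((T * len(Xs), sum(ks)))
    col = 0
    for i, X in enumerate(Xs):           # block-diagonal stacked design
        Xbig[i * T:(i + 1) * T, col:col + X.shape[1]] = X
        col += X.shape[1]
    ybig = np.concatenate(ys)
    Omega_inv = np.kron(np.linalg.inv(Sigma), np.eye(T))
    A = Xbig.T @ Omega_inv @ Xbig
    b = np.linalg.solve(A, Xbig.T @ Omega_inv @ ybig)
    cov_b = np.linalg.inv(A)             # SUR covariance of the stacked coefficients
    return b, cov_b
```

(The gain over separate OLS fits comes exactly from Sigma's off-diagonal terms; if the errors were uncorrelated across equations, this collapses back to equation-by-equation OLS.)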

Let me know if you think this might be a useful pursuit, and I can
show you how to use the SUR model in statsmodels (it's still in the
sandbox, and I don't know if the docs are up to snuff).


> Just some thoughts,
> (I still have little idea what all this "neuro" data looks like, but
> there are even more/larger multivariate time series than in
> finance).
> Josef
>> The more traditional approach would be to choose a time window based on
>> visual inspection of the event-related potentials and then perform a single
>> ANOVA with data from the whole window (mean or peak). Also, in more
>> traditional experiments only around 16 electrodes would be used, and region
>> (e.g. anterior vs. posterior) might be included as a factor in the ANOVA. I
>> wanted to be able to visualize the additional data collected by 129
>> sensors. In order to confirm a certain effect I will then also pick a time
>> window and perform a single ANOVA for an electrode chosen out of theoretical
>> considerations.
>> --Christian
>> _______________________________________________
>> Nipy-devel mailing list
>> Nipy-devel@neuroimaging.scipy.org
>> http://mail.scipy.org/mailman/listinfo/nipy-devel
