Re: Experiments with large N
See related discussion just published in Empirical Musicology Review:
On 3 Dec 2007, at 19:42, Robert Zatorre wrote:
Huge samples are very nice if you can get 'em, though such is not
always the case, alas.
So one thing that I would like to see from people who do have
gigantic N is to do some analyses to determine at what point the
data reach some asymptote. In other words, if you've collected
data from 1,000,000 people, at what earlier point in your sampling
could you have stopped and still come to identical conclusions with
valid statistics?
Obviously, the answer to this question will be different for
different types of studies with different types of variance and so
forth. But having the large N allows one to perform this
calculation, so that next time one does a similar study, one could
reasonably stop after reaching a smaller, more manageable sample.
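The asymptote analysis suggested above could be sketched roughly as follows. This is only an illustration, not anything from the discussion itself: it assumes the statistic of interest is a simple mean, a hypothetical `tolerance` for "identical conclusions", and repeated random subsamples to guard against one lucky draw.

```python
import random
import statistics

def stopping_point(data, tolerance=0.05, step=1000, draws=20):
    """Smallest sample size (in increments of `step`) at which every one of
    `draws` random subsamples estimates the mean within `tolerance` of the
    full-sample value."""
    full_estimate = statistics.mean(data)
    for n in range(step, len(data) + 1, step):
        # Require all repeated subsamples of size n to agree with the
        # full-sample estimate before declaring the asymptote reached.
        if all(abs(statistics.mean(random.sample(data, n)) - full_estimate)
               <= tolerance for _ in range(draws)):
            return n
    return len(data)

# Simulated "huge N" data set: 100,000 noisy ratings around 3.5.
random.seed(0)
data = [random.gauss(3.5, 1.0) for _ in range(100_000)]
print(stopping_point(data))
```

The choice of tolerance and statistic would of course differ across the study types mentioned above; a variance-heavy measure would need a larger criterion sample than this toy mean does.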
Has anybody already done this for those large samples that were
recently discussed? It would be really helpful for those who cannot
always collect such samples.
Robert J. Zatorre, Ph.D.
Montreal Neurological Institute
3801 University St.
Montreal, QC Canada H3A 2B4
web site: www.zlab.mcgill.ca
Universiteit van Amsterdam