Re: Experiments with large N
The Wald statistic can give you an estimate of the number of observations
needed for a valid maximum likelihood estimate when testing a hypothesis
at an arbitrary significance level; the same applies to other maximum
likelihood regression models. It is rare that 1,000,000 subjects would be
necessary for testing any hypothesis, except perhaps the winner of the
2000 (US) presidential election.
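As a rough sketch of the kind of Wald-based sample-size calculation described above (the function name, the assumed per-observation standard deviation, and the default power level are all illustrative assumptions, not anything from the original post):

```python
import math
from statistics import NormalDist

def wald_sample_size(effect, sd, alpha=0.05, power=0.80):
    """Approximate smallest N for which a two-sided Wald z-test of
    H0: beta = 0 detects a true effect of size `effect`, assuming the
    standard error of the estimate shrinks as sd / sqrt(N)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value
    z_power = NormalDist().inv_cdf(power)          # power requirement
    # Solve (z_alpha + z_power) * sd / sqrt(N) <= effect for N.
    return math.ceil(((z_alpha + z_power) * sd / effect) ** 2)

# A medium effect (half a standard deviation) needs only about 32
# subjects at alpha = .05 and 80% power -- nowhere near 1,000,000.
print(wald_sample_size(effect=0.5, sd=1.0))  # → 32
```

Even a small effect of 0.1 standard deviations needs only on the order of 800 observations under these assumptions, which illustrates the point about million-subject samples.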
On 2/6/08 8:34 AM, "Henkjan Honing" <honing@xxxxxx> wrote:
> See related discussion just published in Empirical Musicology Review:
> On 3 Dec 2007, at 19:42, Robert Zatorre wrote:
>> Huge samples are very nice if you can get 'em, though such is not
>> always the case, alas.
>> So one thing that I would like to see from people who do have
>> gigantic N is to do some analyses to determine at what point the
>> data reach some asymptote. In other words, if you've collected
>> 1,000,000 people, at what earlier point in your sampling could you
>> have stopped, and come to the identical conclusions with valid
>> statistics? Obviously, the answer to this question will be different for
>> different types of studies with different types of variance and so
>> forth. But having the large N allows one to perform this
>> calculation, so that next time one does a similar study, one could
>> reasonably stop after reaching a smaller and more manageable sample.
>> Has anybody already done this for those large samples that were
>> recently discussed? It would be really helpful for those who cannot
>> always collect such samples.
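Zatorre's asymptote check can be sketched as a simple subsampling loop over an already-collected data set; the statistic (a running mean), the tolerance, and the step size below are illustrative assumptions:

```python
import random
from statistics import fmean

def asymptote_n(data, tolerance, step=1000):
    """Earliest sample size (in multiples of `step`) beyond which the
    running mean stays within `tolerance` of the full-sample mean,
    i.e. a crude estimate of where the data reach their asymptote."""
    full_mean = fmean(data)
    last_outside = 0  # largest n at which the running mean still strays
    for n in range(step, len(data) + 1, step):
        if abs(fmean(data[:n]) - full_mean) > tolerance:
            last_outside = n
    return min(last_outside + step, len(data))

# Simulated "gigantic N": 100,000 scores with mean 100, sd 15.
rng = random.Random(1)
scores = [rng.gauss(100, 15) for _ in range(100_000)]
n = asymptote_n(scores, tolerance=0.1)
print(n)  # typically far smaller than 100,000
```

In practice one would replace the running mean with whatever test statistic the study used, and shuffle or bootstrap the subsamples so the stopping point does not depend on collection order.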
>> Robert J. Zatorre, Ph.D.
>> Montreal Neurological Institute
>> 3801 University St.
>> Montreal, QC Canada H3A 2B4
>> phone: 1-514-398-8903
>> fax: 1-514-398-1338
>> e-mail: robert.zatorre@xxxxxxxxx
>> web site: www.zlab.mcgill.ca
> Henkjan Honing
> Universiteit van Amsterdam
> | http://www.hum.uva.nl/mmm/hh/
> | http://www.musiccognition.nl/blog