
*To*: AUDITORY@xxxxxxxxxxxxxxx
*Subject*: GLM fit or Cubic smoothing spline for categorical boundary data??
*From*: Noah Haskell Silbert <noahpoah@xxxxxxxxx>
*Date*: Tue, 8 May 2012 07:48:03 -0400
*List-archive*: <http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

The simpler model y(t) = 1/(1 + exp(-r(t - t0))) is a special case of the more complex model y(t) = k1/(1 + exp(-r(t - t0))) + k2 (with k1 = 1 and k2 = 0). With maximum likelihood estimation (MLE), the more complex model will fit at least as well as the simpler one, and MLE lets you test (e.g., with a likelihood-ratio test) whether the additional parameters are justified. It is also worth plotting the data against the model predictions to see which of the k parameters, if either, is likely to improve the fit: if the maximum categorization probabilities in your data fall short of one, k1 can scale the curve down to account for that; if the minimum categorization probabilities sit above zero, k2 can raise the curve, with k1 scaling it so the asymptote does not exceed one.
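A minimal sketch of that comparison in Python/SciPy, assuming binary categorization counts at each stimulus level (the simulated data, function names, and starting values below are illustrative, not from the original post). Both models are fit by maximizing a binomial likelihood, and the nested models are then compared with a likelihood-ratio test on two degrees of freedom:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Simulated categorization data: n trials per stimulus level t,
# with a true curve whose asymptotes are 0.05 and 0.90 (so the
# full model with k1, k2 should genuinely help).
t = np.linspace(-3, 3, 13)
n = 50
p_true = 0.05 + 0.85 / (1 + np.exp(-2.0 * (t - 0.3)))
counts = rng.binomial(n, p_true)  # "category A" responses per level

def curve(t, r, t0, k1=1.0, k2=0.0):
    """Scaled/shifted logistic; k1 = 1, k2 = 0 gives the simple model."""
    return k2 + k1 / (1 + np.exp(-r * (t - t0)))

def neg_log_lik(params, full):
    """Binomial negative log-likelihood for either model."""
    if full:
        r, t0, k1, k2 = params
    else:
        (r, t0), k1, k2 = params, 1.0, 0.0
    p = np.clip(curve(t, r, t0, k1, k2), 1e-9, 1 - 1e-9)
    return -np.sum(counts * np.log(p) + (n - counts) * np.log(1 - p))

simple = minimize(neg_log_lik, x0=[1.0, 0.0], args=(False,),
                  method="Nelder-Mead")
full = minimize(neg_log_lik, x0=[1.0, 0.0, 0.9, 0.05], args=(True,),
                method="Nelder-Mead")

# Likelihood-ratio statistic: 2 * (LL_full - LL_simple),
# compared to chi-squared with df = 2 (the two extra parameters).
lr = 2 * (simple.fun - full.fun)
p_value = chi2.sf(lr, df=2)
print(f"LR = {lr:.2f}, p = {p_value:.3g}")
```

Because the full model nests the simple one, the fitted negative log-likelihood of the full model can never be larger; the likelihood-ratio test asks whether the improvement is bigger than chance would predict for two added parameters.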
