
subtraction and distributed function [was Re: The Auditory Continuity Illusion/Temporal Induction]



Hello Al and list,

I'd just like to interject that the difficulty (folly?) of localizing
distributed functions in the brain is not lost on (all) fMRI
researchers. I've tried to focus attention on this issue when describing
to others the direction of my own work in auditory fMRI. There are
really two ways one could imagine using the subtraction
technique. The first depends on assuming modularity of
function in the brain. If that assumption holds, then two carefully
designed conditions---one that includes the process of interest, and one
identical to the first except for the lack of that process---can be used
via subtraction to identify the module in question. (This is of course
not a new idea and has been used to look at chronometry, etc. for quite
some time now.) Many of us, however, doubt the applicability of strict
modularity, and instead recognize that---even with perfectly designed
conditions---many things are likely to change in related modules (if one
expects modularity to hold weakly) or throughout an extremely
distributed system that subserves the process of interest along with
many others (if one does not). The second approach is to use subtraction
as a tool to examine the effects of specific manipulations upon the
"activations" observed in the brain, and interpret those effects mainly
in a descriptive sense. For example, we might look at tone-evoked
activations at different sound levels. We might use subtraction to
isolate sound-related activity ("sound" - "silence") which we then
compare across levels, or we might directly compare activations produced
at different levels ("intense" - "soft"). Either way, we try to describe
the sensitivity to tone level in different regions of the brain (which
might in turn tell us a lot about the kinds of computations each region
is likely to be involved in) rather than to localize the "module that
processes sound level" or the "module that processes intense sound." [It
might also be worth pointing out that "subtraction" in this case can be
replaced by correlation or another statistical technique for assessing
sensitivity to manipulation of the independent variable across
potentially many levels; I'm not sure how to interpret such data via the
modularity assumption, but it makes good sense for the descriptive
approach.]
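
[To make that distinction concrete, here is a minimal sketch in
Python/NumPy. The data, the voxel counts, and names like "levels_db"
and "gain" are entirely made up for illustration; the point is only
that the same per-voxel responses can be analyzed either as a single
"sound" - "silence" subtraction or, descriptively, as a per-voxel
correlation with tone level across several levels.

import numpy as np

# Entirely hypothetical per-voxel response amplitudes (one row per
# trial); nothing here comes from a real experiment.
rng = np.random.default_rng(0)
n_voxels = 100
levels_db = np.array([30.0, 50.0, 70.0, 90.0])  # tone levels, one per condition
silence = rng.normal(0.0, 1.0, (20, n_voxels))  # "silence" baseline trials
gain = rng.uniform(0.0, 0.05, n_voxels)         # some voxels grow with level
sound = {lvl: rng.normal(gain * lvl, 1.0, (20, n_voxels)) for lvl in levels_db}

# (1) Subtraction contrast at one level: "sound" - "silence".
#     Read modularly, this "localizes the module"; read descriptively,
#     it is just the activation map for 70-dB tones.
contrast_70 = sound[70.0].mean(axis=0) - silence.mean(axis=0)

# (2) Descriptive/parametric alternative: correlate mean response with
#     tone level across all levels, voxel by voxel.
mean_resp = np.stack([sound[lvl].mean(axis=0) for lvl in levels_db])  # levels x voxels
level_z = (levels_db - levels_db.mean()) / levels_db.std()
resp_z = (mean_resp - mean_resp.mean(axis=0)) / mean_resp.std(axis=0)
level_sensitivity = (level_z[:, None] * resp_z).mean(axis=0)          # Pearson r per voxel

print("most level-sensitive voxels:", np.argsort(level_sensitivity)[-5:])

Either analysis starts from the same raw numbers; the difference is
whether one reads the result as the location of a "sound-level module"
or simply as a description of how strongly each voxel's response
depends on level.]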

Personally, I think the evidence for distributed function (throughout
the auditory cortex at least) is pretty good, and strongly prefer the
second, descriptive, approach to interpreting fMRI data. So I agree with
your post, especially regarding concerns about designing appropriate
conditions for subtraction and about the interpretation of subtraction
results via an implicit assumption of modularity. But I want list
members to realize that the method of subtraction can be employed
without that assumption, even though the two are often encountered
together. The question, as you suggest, should be "how" distributed
functions take place; that question is unlikely to be answered by any
one method, but rather through the development of computational models
built on descriptive data about the brain and behavior.

-Chris Stecker