Re: looking for literature on human error correction
On Fri, 23 Feb 2007 j3cube@xxxxxxx wrote:
> In printed text where there is concern with letters, there is a lot of
> old stuff on deleting letters as a measure of redundancy (information)
> in printed materials. Deleting consonants is deadly; deleting vowels
> is not particularly handicapping. There was a lot of this stuff about
> the time that people were fascinated by Shannon's ideas about
> "information" (1960s).
Thanks, since then I have found the classic paper of Shannon
(Prediction and entropy of printed English) from 1951.
However, the experiment I imagine is different in at least
two respects. First, PREDICTION would be replaced by ERROR CORRECTION.
That is, instead of deleting letters (and denoting their position by a
dash), I would replace them with incorrect letters. This would make the
task definitely harder, but probably more realistic (I think that the
errors of some transmission channel will not appear as blank spaces but as
garbage...). Second, my experiment would be with SPOKEN and not
WRITTEN language. For example, I would phonemically transcribe the text,
put in the errors, and then have it read by a speech synthesizer. I
would guess that the results would be quite different from those with
printed text (for example, the importance of vowels would be much
higher...). Have you ever met such a version of the Shannon experiment?
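To make the error-injection step concrete, here is a minimal sketch of what I have in mind (my own illustrative code, not taken from Shannon or anyone else): instead of deleting letters and marking them with a dash, each letter is replaced, with some probability, by a random *different* letter, so the subject sees garbage rather than a blank.

```python
import random
import string

def corrupt(text, error_rate, rng=None):
    """Replace each letter with a random different letter with
    probability error_rate; non-letters (spaces etc.) are untouched.
    Illustrative only -- function name and setup are my own."""
    rng = rng or random.Random()
    out = []
    for ch in text:
        if ch.isalpha() and rng.random() < error_rate:
            # pick any letter except the original one, preserving case
            pool = [c for c in string.ascii_lowercase if c != ch.lower()]
            repl = rng.choice(pool)
            out.append(repl.upper() if ch.isupper() else repl)
        else:
            out.append(ch)
    return "".join(out)

# Subjects would then be asked to reconstruct the original:
garbled = corrupt("information theory", 0.2, random.Random(0))
```

For the spoken version, the same substitution would of course be done on the phonemic transcription (over the phoneme inventory rather than the alphabet) before synthesis.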
P.S.: I also wonder if the difference between the written and spoken
versions of the test would be bigger for English than for some language
that has a phonemic orthography? Are the cognitive processes behind
reading and speech comprehension more different for English than for,
say, Croatian? (E.g. is the strategy of "reading aloud in your head" more
probable for the latter then for the former?)
Hungarian Academy of Sciences *
Research Group on Artificial Intelligence * "Failure only begins
e-mail: tothl@xxxxxxxxxxxxxxx * when you stop trying"