Re: [AUDITORY] Question: same/different judgments across domains. (Sarah Hawkins)


Subject: Re: [AUDITORY] Question: same/different judgments across domains.
From:    Sarah Hawkins  <sh110@xxxxxxxx>
Date:    Mon, 10 May 2021 11:04:58 +0100

I agree with the points made so far. I'd already drafted this as adding 'some general points I think are compatible' before I read Mattson's msg. I'm sending it anyway, because one or two points have not yet been made, and while others are obvious or have now been said, it may be helpful to put them in one place. I can supply references for most of my points if you would like them, but much of this is easily found in literature that may be of more relevance to you.

- Listening strategies are unavoidable, so even if you try to produce an unbiased initial situation, participants are likely to develop a strategy during the experiment that is tuned to the particular stimuli (including their range of variation) and task. The strategy may or may not vary significantly between individuals, depending on stimulus construction and presentation.

- What do you want to generalise your results to? Responses to short sounds heard out of context may not generalise to responses to longer sounds, and the same sound can be interpreted very differently in different contexts. Ideally, your presentation context as well as your sound stimuli themselves will reflect the situations you want your experiment to be relevant to.

- Another consideration might be the definition of your categories. Is it the domains (e.g. speech, music, natural environment) you are interested in, or detection of different timbres? If it's the domains, then it would seem reasonable to let awareness of the domain be part of the experiment, since expectations tend to drive perceptions of ambiguous stimuli. But it sounds as though timbre rather than domain may be the point. If timbre, then timbres can change within a 250-ms excerpt in all three domains mentioned, so considering whether you want natural dynamical variation or not could be important. (And perhaps to use stimuli long enough for those functional categories to be meaningful.)

- Do you care about thresholds, or about what people normally do above threshold? Either way, exactly where and how in a sound chunk a particular change occurs is sometimes critical, and sometimes of no apparent importance at all. This is perhaps particularly true for speech, for example for f0 contours vis-a-vis the syllable structures that carry them, and for what the perceived function of the utterance is (which would normally require it to be heard in context). And though it sounds as though you are probably planning a psychoacoustic experiment, the fact you've asked the question suggests that you might in time want to find a functionally-meaningful task, perhaps in addition to a 4IAX task. If you do, this could affect stimulus choice now.

- Stimulus variation can strongly affect responses, presumably by helping or hindering attention to be focused on particular acoustic properties. You can assess this by blocking stimulus presentations, so that listeners hear only one type of stimulus in a block, or by presenting the full range within a block (a sketch of both orderings appears after this message). With subtle differences, you'll likely get different results.

- While stimuli of constant duration look nicely controlled, and can be the best for some experiments (probably including discrimination and threshold tasks), it could be worth considering the amount of information conveyed within a stimulus. Generalising across genres, musical notes are typically rather slower than spontaneous conversational speech. In normal-rate speech, 250 ms can (but does not always) involve more than one syllable and often more than one word. Fast music can involve several notes within 250 ms, but in much music, single notes are typically longer than 250 ms. (There is no 1:1 relation between phonemes, syllables, words, and notes and phrases.)

- In longer stretches of sound, temporal properties (e.g. amplitude envelope, factors that affect rhythm and metre) strongly affect perceptual responses, and how listeners hear them is culturally sensitive. (Relevance to generalisation to real life, again.)

- Relatedly, and following on from Bob's email, speech, music and environmental sounds can and typically do include harmonic, inharmonic and aperiodic sounds, and of course silence too (albeit often in different proportions). For speech, it is easy to stick to stimuli that have an f0, but generalisation to normal conditions may be somewhat limited. I'd predict, but don't know, the same for environmental sounds.

- Finally, where do singing and rap fit in?

Many of these issues cannot be resolved to produce perfectly controlled stimuli - you have to make (sometimes very tough) decisions about your focus and what's practical, after which other decisions are likely to be influenced by your earlier ones. Being aware that you are making the early ones before the design is finalised is useful, though!

I hope this helps, and good luck!

Sarah
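To make the blocking point concrete, here is a minimal sketch (Python; the file names, domains and block sizes are purely illustrative, not from the thread) of generating a blocked order, where listeners hear only one domain per block, versus a fully mixed order:

import random

stimuli = {
    "speech": ["speech_01.wav", "speech_02.wav", "speech_03.wav"],
    "music": ["music_01.wav", "music_02.wav", "music_03.wav"],
    "environment": ["env_01.wav", "env_02.wav", "env_03.wav"],
}

def blocked_order(stimuli, n_repeats=2, seed=0):
    """One domain per block; trials shuffled within each block."""
    rng = random.Random(seed)
    domains = list(stimuli)
    rng.shuffle(domains)
    order = []
    for domain in domains:
        block = stimuli[domain] * n_repeats
        rng.shuffle(block)
        order.append((domain, block))
    return order

def mixed_order(stimuli, n_repeats=2, seed=0):
    """All domains interleaved in one shuffled sequence."""
    rng = random.Random(seed)
    trials = [f for files in stimuli.values() for f in files] * n_repeats
    rng.shuffle(trials)
    return trials

print(blocked_order(stimuli))
print(mixed_order(stimuli))

Comparing performance across the two orderings is one way to see how much the stimulus context itself is shaping listeners' strategies.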
On 09/05/2021 16:14, Mattson Ogg wrote:
> Hi Max,
>
> I looked at this a bit in grad school, particularly with very brief sounds, though mostly focusing on onsets because I was interested in getting at "when" listeners can recognize what they hear, so as to subsequently engage any potentially different listening strategies (i.e., in the real world you more often hear and recognize something during what is basically a sound onset than by dropping in on the middle of an acoustic event).
>
> Anyway, I think the thread raises some very good points - I'd just add that it sort of depends what question you (they) are asking. I kept it fairly high level. At around 25 ms listeners can only barely tell different sound classes apart. But I think by 250 ms you do have different listening strategies, and the same acoustic dimension can carry different kinds of information for different classes, so it depends on what you're interested in (e.g., pitch is more variable within a given vowel and can cue different speakers or emotions, often doesn't vary as much within an instrument note and is not as useful for identifying instruments, and is basically absent for many noisy environmental sounds). So IMO the trickier thing in limited time windows is controlling things so the comparisons are meaningful for your question, because in my experience there's always a bit of compromise here due to how different those sound classes are. Note that speech, I think, is interesting and tricky here because it's particularly slippery: it's acoustically rich and variable from moment to moment.
>
> Anyhow, since you asked for some recs, here are links to a few papers of mine that dig into this and could be helpful - all looking at slightly different questions with multiple sound classes on limited time scales. Perhaps there's a better way to treat some of these issues, but this general approach seemed like a fairly straightforward starting place to me:
>
> https://asa.scitation.org/doi/abs/10.1121/1.5014057
>
> https://direct.mit.edu/jocn/article/32/1/111/95406/The-Rapid-Emergence-of-Auditory-Object
>
> (A follow-up to the two previous should be on some arxiv soonish? Whenever I can get around to it! heh)
>
> https://www.frontiersin.org/articles/10.3389/fpsyg.2019.01594/full
>
> https://www.sciencedirect.com/science/article/abs/pii/S1053811919300813?via%3Dihub
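One practical example of "controlling things so the comparisons are meaningful" is making sure that gross level differences and onset/offset clicks are not what listeners end up comparing across classes. A minimal sketch of cutting fixed 250-ms excerpts with simple RMS equalisation and short ramps (assuming the numpy and soundfile packages; the file names, duration and target level are illustrative, not anyone's actual protocol):

import numpy as np
import soundfile as sf

def excerpt(path, start_s=0.0, dur_s=0.250, target_rms=0.05, ramp_s=0.005):
    """Cut a fixed-duration excerpt, equalise RMS level, and apply short on/off ramps."""
    x, fs = sf.read(path)
    if x.ndim > 1:                                   # mix down to mono
        x = x.mean(axis=1)
    i0 = int(round(start_s * fs))
    seg = np.array(x[i0:i0 + int(round(dur_s * fs))], dtype=float)
    rms = np.sqrt(np.mean(seg ** 2))
    if rms > 0:
        seg *= target_rms / rms                      # simple RMS equalisation
    n = min(int(round(ramp_s * fs)), len(seg) // 2)
    if n > 0:                                        # raised-cosine ramps to avoid clicks
        win = 0.5 * (1 - np.cos(np.pi * np.arange(n) / n))
        seg[:n] *= win
        seg[-n:] *= win[::-1]
    return seg, fs

# seg, fs = excerpt("speech_01.wav", start_s=1.0)
# sf.write("speech_01_250ms.wav", seg, fs)

Even so, as the posts above note, where the 250 ms falls within a word, note or event can matter as much as its level.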
> On Sun, May 9, 2021 at 12:30 AM Jan Schnupp <000000e042a1ec30-dmarc-request@xxxxxxxx> wrote:
>
>> Same/different judgments are always a bad idea. Unless stimuli are actually identical, they are not the same, so the observer has to make some sort of "close enough" judgment, which always involves a bit of a fudge in their minds. Much better to play three sounds and ask which was the odd one out, or two pairs and ask which pair was more different. In those cases you have a much less ambiguous way of declaring a response objectively correct or incorrect. There is no internal "close enough" criterion that may vary from subject to subject or from domain to domain. Playing with duration is tricky. Certain categories of sounds have characteristic temporal envelopes, and if you make them "much shorter than they should be" then they are no longer good representatives of their domain or category.
>> Good luck with your experiment.
>> Jan
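For concreteness, here is a minimal sketch of the objectively scorable trial types Jan describes (a three-interval odd-one-out, plus a 4IAX pair comparison of the kind mentioned above); the stimulus labels are hypothetical and this is only one of many reasonable implementations:

import random

def make_oddity_trial(standard, comparison, rng=random):
    """Three-interval odd-one-out: return (intervals, position of the odd interval)."""
    odd_pos = rng.randrange(3)
    intervals = [standard] * 3
    intervals[odd_pos] = comparison
    return intervals, odd_pos

def make_4iax_trial(standard, comparison, rng=random):
    """4IAX: two pairs, one 'same' and one 'different'; return (pairs, index of the different pair)."""
    diff_pair = rng.randrange(2)
    pairs = [(standard, standard), (standard, standard)]
    pairs[diff_pair] = (standard, comparison) if rng.random() < 0.5 else (comparison, standard)
    return pairs, diff_pair

def score(response, answer):
    """1 if the listener picked the correct interval/pair, else 0 - no 'close enough' criterion."""
    return int(response == answer)

intervals, answer = make_oddity_trial("tone_440Hz.wav", "tone_444Hz.wav")
print(intervals, "-> odd one at position", answer, "; score:", score(2, answer))

Because the correct position is fixed when the trial is built, responses can be scored without any listener-internal sameness criterion, which is the point of Jan's suggestion.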
>> On Sat, May 8, 2021, 12:34 PM Max Henry <max.henry@xxxxxxxx> wrote:
>>
>>> Hi folks. Long time listener, first time caller...
>>>
>>> Some friends of mine are setting up an experiment with same/different judgements between pairs of sounds. They want to test sounds from a variety of domains: speech, music, natural sounds, etc.
>>>
>>> One of the researchers suggested that listeners will have different listening strategies depending on the domain, and this might pose a problem for the experiment: our sensitivity to differences in pitch, for example, might be very acute for musical sounds but much less so for speech sounds.
>>>
>>> I have a hunch that if the stimuli were short enough, this might sidestep the problem. I.e., if I played you 250 milliseconds of speech, or 250 milliseconds of music, you would not necessarily use any particular domain-specific listening strategy to tell the difference. It would simply be "sound."
>>>
>>> I suspect this is because a sound that's sufficiently short can stay entirely in echoic memory. For longer sounds, you have to consolidate the information somehow, and the way that you consolidate it has to do with the kind of domain it falls into. For speech sounds, we can throw away the acute pitch information.
>>>
>>> But that's just a hunch. I'm wondering if this rings true for any of you, that is to say, if it reminds you of any particular research. I'd love to read about it.
>>>
>>> It's been a pleasure to follow these e-mails. I'm glad to finally have an excuse to write. Wishing you all well.
>>>
>>> Max Henry (he/his)
>>> Graduate Researcher and Teaching Assistant
>>> Music Technology Area
>>> McGill University
>>> www.linkedin.com/in/maxshenry
>>> github.com/maxsolomonhenry
>>> www.maxhenrymusic.com/

