# Re: Frequency shift to alleviate acoustic feedback (Julius Smith)

```Subject: Re: Frequency shift to alleviate acoustic feedback
From:    Julius Smith  <jos@xxxxxxxx>
Date:    Sat, 26 Jan 2013 12:07:44 -0800
List-Archive:<http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

Hi All,

I don't know if this will help or confuse things, but here is a pretty
smooth frequency-shifting implementation in SuperCollider:

// Frequency-Shifting Example 5: add phase correction
//   MouseX = amplitude
//   MouseY = frequency shift (400 * (2 ** MouseY(-1,1)), i.e., in [200,800] Hz)
//   MouseButton = clear frequency shift
(
x = {
  var in, out, amp, f0 = 400, fftSize = 8192, winLen = 2048, hopFrac = 0.5,
      chain, mexp, fScaled, df, binShift, phaseShift,
      inWinType = 0, outWinType = 0;
  amp = MouseX.kr(-60, 10).dbamp;
  in = SinOsc.ar(f0, 0, amp);
  chain = FFT(LocalBuf(fftSize), in, hopFrac, inWinType, 1, winLen);
  mexp = MouseY.kr(-1.0, 1.0);
  mexp = mexp * (1 - MouseButton.kr);
  fScaled = f0 * (2.0 ** mexp);
  df = fScaled - f0;
  binShift = fftSize * (df / s.sampleRate);
  chain = PV_BinShift(chain, stretch: 1, shift: binShift, interp: 1);
  phaseShift = 2 * pi * binShift * hopFrac * (winLen / fftSize);
  chain = PV_PhaseShift(chain, phaseShift, integrate: 1);
  out = IFFT(chain, outWinType, winLen);
  Out.ar(0, out.dup);
}.play
)

- Julius

At 04:48 AM 1/25/2013, Steve Beet wrote:

> Dear Siping,
>
> I'd agree with Dick's simplification, except to note that *if* you can
> assume that the listeners are not sensitive to phase, then frequency
> shifting is actually very easy - you merely have to ensure phase
> continuity at block boundaries, or (my preferred approach) do the
> processing sample-by-sample using a direct analogue of the traditional
> EE approach: heterodyning followed by linear filtering.
>
> I've also just remembered one reference which is relevant to this, and
> should give you some idea of the issues involved in manipulating an
> audio signal in terms of the frequencies, amplitudes and phases of its
> components:
>
> R. J. McAulay and T. F. Quatieri, "Speech analysis/synthesis based on
> a sinusoidal representation," IEEE Trans. on Acoust., Speech, and
> Signal Proc., vol. ASSP-34, pp. 744-754, 1986.
>
> Good luck,
>
> Steve
>
> On Thu, 24 Jan 2013 22:41:05 -0800
> "Richard F. Lyon" <dicklyon@xxxxxxxx> wrote:
>
> > To put it more simply, the original assumption that frequency
> > shifting would be "the simplest method" was unfounded.
> > Frequency shifting is actually quite complicated, subtle, error
> > prone, and not so well defined.
> >
> > Dick

--
Julius O. Smith III <jos@xxxxxxxx>
Prof. of Music and Assoc. Prof. (by courtesy) of Electrical Engineering
CCRMA, Stanford University
http://ccrma.stanford.edu/~jos/
```
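The two key lines in the SuperCollider patch are the conversions from a frequency shift in Hz to an FFT bin shift, and from that bin shift to a per-hop phase correction. The arithmetic can be sketched in plain Python (a hypothetical helper, not from the original post; it assumes the patch's 44.1 kHz server sample rate and FFT parameters):

```python
import math

def shift_params(df, sample_rate=44100, fft_size=8192, win_len=2048, hop_frac=0.5):
    """Mirror the patch's arithmetic: convert a frequency shift df (Hz)
    into an FFT bin shift, and compute the per-hop phase correction that
    keeps the shifted partials phase-continuous across overlapping frames."""
    bin_shift = fft_size * (df / sample_rate)  # binShift in the patch
    phase_shift = 2 * math.pi * bin_shift * hop_frac * (win_len / fft_size)
    return bin_shift, phase_shift

# e.g. shifting the 400 Hz test tone up to 500 Hz (df = 100 Hz)
b, p = shift_params(100.0)
```

A non-integer `bin_shift` (as here) is why the patch enables interpolation in `PV_BinShift` and why the phase correction must accumulate (`integrate: 1` in `PV_PhaseShift`) rather than being applied once.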

This message came from the mail archive
/var/www/postings/2013/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University