In terms of normalization you need to be rather careful: if you normalize every sample (every vowel/consonant sound) individually, you will likely increase the amplitude of each of them by a different amount, which causes problems when calibrating their levels. So, if you have all the recordings in a single file, you can simply normalize it as a whole to 0 dBFS, which still gives you a general amplitude increase. If not, you can analyse all the recordings, find the one with the highest amplitude peak, and raise the level of all files by the same gain until that peak reaches 0 dBFS... but this is not always needed. You can do this with Adobe Audition, but I have found that SoundHack (Tom Erbe's software, available for free on the Internet) is very fast, precise and simple.
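Just to make the "same gain for all files" idea concrete, here is a minimal sketch in plain Python (the file names and sample values are illustrative placeholders, not from any real recording set): find the highest peak across all recordings, then scale every file by that one gain so the relative levels between samples are preserved.

```python
# "Common gain" normalization: compute ONE gain from the loudest file
# and apply it to every recording, so relative levels are preserved.

def peak(samples):
    """Absolute peak of one recording (samples in the range -1.0..1.0)."""
    return max(abs(s) for s in samples)

def common_gain(recordings, target_peak=1.0):
    """Gain that brings the loudest recording to target_peak (1.0 = 0 dBFS)."""
    loudest = max(peak(s) for s in recordings.values())
    return target_peak / loudest

# Placeholder data standing in for the vowel/consonant files:
recordings = {
    "vowel_a.wav": [0.10, -0.25, 0.40, -0.30],
    "cons_s.wav":  [0.05, -0.12, 0.20, -0.08],
}

g = common_gain(recordings)  # here 1.0 / 0.40 = 2.5
normalized = {name: [s * g for s in samples]
              for name, samples in recordings.items()}
```

After this, the loudest file peaks at 0 dBFS and every other file keeps its original level relative to it, which is exactly what per-file normalization would destroy.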
Regarding compression, and the amplitude of the signals in general, you need to establish at which level you want to perform the calibration: speech recordings used for vocal audiometry generally come with a sinusoidal signal whose level matches the RMS of the various samples, so you can use that signal to calibrate the playback system. This signal can have various levels, depending on the set you are using; the sets I have used are generally calibrated between -10 and -14 dBFS unweighted. So you will need to decide at which level to calibrate yours, and from that you will see whether compression or limiting is needed at all.
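As a sketch of what "a sine tone at a given RMS level in dBFS" means in practice, the snippet below generates a tone at one of the levels mentioned (-14 dBFS is just one value from that range, picked for illustration) and verifies its RMS level. A full-scale sine has an RMS of 1/sqrt(2), i.e. about -3 dBFS, so the amplitude has to be scaled accordingly.

```python
import math

def rms(samples):
    """Root-mean-square value of a list of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def dbfs(value):
    """Level of an RMS (or peak) value relative to full scale (1.0)."""
    return 20.0 * math.log10(value)

# To place a sine's RMS at -14 dBFS, its amplitude must be
# 10**(-14/20) * sqrt(2), since a sine's RMS is amplitude / sqrt(2).
target_dbfs = -14.0
amplitude = 10 ** (target_dbfs / 20.0) * math.sqrt(2)

# 1 second of a 1 kHz tone at 44.1 kHz (an integer number of cycles,
# so the discrete RMS matches the analytic value).
tone = [amplitude * math.sin(2 * math.pi * 1000 * n / 44100)
        for n in range(44100)]
level = dbfs(rms(tone))  # ~ -14.0 dBFS
```

Once the tone sits at the chosen level, the speech material is scaled so its RMS matches the tone's, and the playback chain is calibrated against that tone.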
In order to avoid clipping, a limiter with a 10 ms attack time and a threshold of -0.5 dBFS would do. Obviously, if you increase the level of the signal too much before the limiter, you will avoid clipping but introduce other problems/artifacts due to the dynamic processing.
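For what it's worth, a bare-bones version of such a limiter can be sketched as follows (this is a simplified didactic model, not Audition's algorithm; parameter names are mine, and it has no lookahead, so brief overshoots above the threshold are possible during the attack ramp):

```python
import math

def limit(samples, sample_rate=44100, threshold_db=-0.5, attack_ms=10.0):
    """Peak limiter sketch: ramp the gain down over ~attack_ms when the
    input exceeds the threshold; release here is instantaneous."""
    threshold = 10 ** (threshold_db / 20.0)  # -0.5 dBFS ~ 0.944 linear
    # One-pole smoothing coefficient giving a ~10 ms attack time constant.
    attack_coeff = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    gain = 1.0
    out = []
    for s in samples:
        # Gain that would bring this sample exactly to the threshold.
        wanted = min(1.0, threshold / abs(s)) if s != 0 else 1.0
        if wanted < gain:
            # Attack: approach the reduced gain gradually.
            gain = attack_coeff * gain + (1.0 - attack_coeff) * wanted
        else:
            # Instant release (a real limiter would smooth this too).
            gain = wanted
        out.append(s * gain)
    return out

limited = limit([1.5] * 44100)  # a constant over-threshold input
```

With a steady over-threshold input, the output settles at the -0.5 dBFS ceiling once the attack ramp has completed, which is the behaviour described above.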
I hope this helps!
Dr. Lorenzo Picinali
Senior Lecturer in Music/Audio Technology
Faculty of Technology
De Montfort University, The Gateway
Leicester, LE1 9BH
Tel 0116 207 8051
Date: Tue, 18 Dec 2012 12:05:16 +1300
From: Abin Kuruvilla Mathew <amat527@xxxxxxxxxxxxxxxxx>
Subject: Audio editing
I have a set of audio files (consonants and vowels) to be edited in Adobe
Audition and was wondering to what extent and how much normalization
(RMS) and dynamic compression (if necessary) would be needed so that
naturalness is preserved and clipping doesn't occur.
Abin K. Mathew
Department of Psychology (Speech Science)
Tamaki Campus, 261 Morrin Road, Glen Innes
The University of Auckland
Private Bag 92019