
Re: data resampling



Hi Jan, 

My point was that resample() does not work well: its anti-aliasing filtering is inadequate, an egregious flaw that decimate() appears not to have.  Normally the two should be equivalent.

I fully agree that the choice of processing depends on your goals and the nature of the signal.  Downsampling entails a loss of information, and implicitly assumes that the information lost is not essential.  Downsampling methods differ in how they enforce that assumption.  I see two main options.

The first preserves as much as possible the temporal shape of the waveform.  For this you simply replace each set of N samples by their mean.  For example:
x = mean(reshape(x, N, n/N));   % average each block of N consecutive samples
where the number n of original samples is assumed to be divisible by N.  This gives equal weight to each of the N original samples, and enforces the assumption that differences between them are unimportant.  Temporal smearing is minimized, but spectral properties are not great: the implicit low-pass filter is just an N-sample boxcar, so a fair amount of power beyond the new Nyquist frequency gets aliased in.
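
In case it is useful, here is a slightly fuller sketch of that block-mean approach (the variable names are mine, and I simply truncate when the length is not divisible by N):

N = 300;                              % downsampling factor, e.g. 300 Hz -> 1 Hz
n = floor(numel(x)/N) * N;            % truncate so the length divides evenly by N
y = mean(reshape(x(1:n), N, []));     % each column holds N consecutive samples; average them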

The second preserves spectral properties.  You apply a low-pass filter to ensure that power (in the original signal) beyond the Nyquist frequency (of the downsampled signal) is negligible.  The reason for doing so is to ensure that the spectrum of the new data unambiguously reflects that of the original data, at least within the 0 - Nyquist interval.  If you don't, power beyond Nyquist gets folded onto power below Nyquist (aliasing), so you can't easily interpret the spectrum of the downsampled signal.  decimate() applies this filter adequately, resample() doesn't.  The downside (compared to the first option) is that the low-pass filter has a long impulse response (several times N), so you may lose some temporal resolution.
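
For concreteness, a minimal sketch of this option for Juan's case (300 Hz down to 1 Hz); the factorization is my choice, since decimate() behaves better when a large factor is applied in small stages:

y = x;
for r = [10 10 3]            % 10*10*3 = 300, i.e. 300 Hz -> 1 Hz
    y = decimate(y, r);      % low-pass (anti-aliasing) filter, then keep every r-th sample
end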

There is a third course that may be worth considering if there is useful information beyond Nyquist (specifically, if the power beyond Nyquist fluctuates slowly).  Here you high-pass (rather than low-pass), demodulate (e.g. square or take the absolute value), then low-pass the demodulated signal so it has negligible power beyond Nyquist, and downsample.  This signal represents the slow fluctuations of the high-frequency power.  It's a way of saving some of the information that you'd normally discard when you low-pass the original waveform.
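
A rough sketch of what I mean (the cutoff, filter order and squaring step are illustrative choices, not a recipe), again assuming 300 Hz original data and a 1 Hz target rate:

fs = 300;                                  % original sampling rate
[b, a] = butter(4, 0.5/(fs/2), 'high');    % high-pass above the new Nyquist (0.5 Hz)
xh  = filtfilt(b, a, x);                   % zero-phase filtering, no added delay
env = xh.^2;                               % demodulate (square; abs() would also do)
y = env;
for r = [10 10 3]                          % 300 = 10*10*3: low-pass and downsample in stages
    y = decimate(y, r);
end

The result tracks the slow fluctuations of high-frequency power, sampled at 1 Hz.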

Sebastian suggested adjusting the order parameter of resample() as a bug-fix to reduce aliasing.  The order controls the steepness of the cutoff, and increasing it may indeed alleviate aliasing.  However, it also implies a longer impulse response, so temporal smearing and edge effects may be exacerbated.  It's no replacement for a proper choice of cutoff (as in decimate()).
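
For reference, the parameter in question is the optional fourth argument of resample(), which scales the length of its anti-aliasing filter (the values below are just for illustration):

y = resample(x, 1, 300);        % default filter-length parameter (n = 10)
y = resample(x, 1, 300, 20);    % larger n: steeper cutoff, but a longer impulse response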

Alain


On 15 Aug 2014, at 11:56, Jan Schnupp <jan.schnupp@xxxxxxxxxxxxx> wrote:

> Dear Juan,
> 
> you've had a lot of good input already on the important differences between downsample and resample from other colleagues. The only thing I would like to add is that you need to think about these things in the context of what sort of data you are dealing with, why you are downsampling in the first place, whether or why 1 Hz is really a good new sample rate for your data, etc. Clearly your original trace has a lot of "fine structure" at high frequencies, faster than 1 Hz. Saying you want to downsample implies either that you think this high-frequency stuff is noise, or that it may be signal but there are other reasons why you need to sacrifice some of that detail. The resample() function mentioned by Alain works pretty well if low-pass filtering at the new "Nyquist frequency" (in your case 0.5 Hz) is a good way to analyse your data. But there may be good reasons to filter at even lower frequencies (and after you've done that you could use downsample() without having to worry about aliasing). Or you may decide that there is "real signal" at or above 0.5 Hz, in which case you should not downsample to 1 Hz in the first place. What I'm really saying is that, to do the best quality work, you should filter your signal first, before you do any downsampling, and think carefully about the most appropriate filtering which will give you the best "signal to noise", and what these filters imply for what sort of downsampling can be done in a lossless manner. If that's too much work, then using the decimate() function Alain mentioned is the next best thing.
> 
> Cheers,
> 
> Jan
> 
> 
> On 14 August 2014 00:05, Juan Wu <wujuan22@xxxxxxxxx> wrote:
> List experts,
> 
> I am attempting to downsample my data from 300 Hz to 1 Hz. I just tried two Matlab functions: 1) a = resample(Data,1,300); 2) b = downsample(Data,300). The results are quite different from each other. 
> 
> <image.png>
> 
> Apparently, the result from "downsample" is close to the original data. However, quite a few people suggested using the resampling method - I got this from Google searches, and I very much agree with this view. Personally I think the downsample method is too simple and rather arbitrary. I also do not trust the "Nearest Neighbor" method. I assume that resample uses both an FIR interpolator and a decimator in its implementation - this is what I expected. Am I right? But now my output from the resampling method is really terrible. So I assume that the resample function in Matlab does not do a good job here. I am not sure whether I need to switch to other software or try other functions in Matlab. 
> 
> Any opinions or references are much appreciated.
> J
> 
> 
> 
> -- 
> Prof Jan Schnupp
> University of Oxford
> Dept. of Physiology, Anatomy and Genetics
> Sherrington Building - Parks Road
> Oxford OX1 3PT - UK
> +44-1865-282012
> http://jan.schnupp.net