
Re: data resampling



Hi Alain,

It is great that this is being taken up, and it is commendable that you filed the bug report! Didn't know that one could do that!!  The beauty of resample(), I find, is using it with rat(), as the MathWorks documentation mentions, to arrive at any strange fs that one desires.
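
(For anyone reading along, a minimal sketch of that trick; the rates here are made up, and x stands for the original signal:)

fsIn  = 44100;                 % original sampling rate
fsOut = 22321;                 % some "strange" target rate
[p, q] = rat(fsOut/fsIn);      % rational approximation of the rate ratio
y = resample(x, p, q);         % resample at p/q times the original rate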

Given the black magic that you seem to possess, is it defensible to do this - up, not down - with wideband signals?

Again, I'm glad this has come up, and sorry if my question is naive.  
David


________________________________________
From: AUDITORY - Research in Auditory Perception [AUDITORY@xxxxxxxxxxxxxxx] on behalf of Alain de Cheveigne [alain.de.cheveigne@xxxxxx]
Sent: Friday, August 15, 2014 9:26 AM
To: AUDITORY@xxxxxxxxxxxxxxx
Subject: Re: data resampling

Hi Hari,

Note that MATLAB's resample() has inadequate antialiasing. Try:
figure(1); clf;
x = randn(10000,1);                  % white noise: flat spectrum up to Nyquist
pwelch(decimate(x,2)); hold on       % downsample by 2 with decimate()
pwelch(resample(x,1,2));             % downsample by 2 with resample()
c = get(gca,'children'); set(c(1), 'color', [0 0 0]);   % recolor the last-plotted trace (resample) black
legend('decimate','resample');

decimate() does the right thing, which is to apply a lowpass filter with a sufficiently low cutoff that power is negligible at the Nyquist frequency and beyond.  resample() apparently applies a filter that attenuates by only 3 dB at Nyquist, so there may be serious aliasing if the signal is wideband.
I filed a bug report with MathWorks; they answered that they think it's OK (!). Annoyingly, decimate() works only for single-channel data.
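
One way to see this for yourself (a sketch, using resample()'s optional second output, which returns the filter it applied):
x = randn(10000,1);
[y, b] = resample(x, 1, 2);        % b is the anti-aliasing FIR filter resample() used
[h, w] = freqz(b, 1, 1024);        % its frequency response
plot(w/pi, 20*log10(abs(h)));      % the output Nyquist is at w/pi = 0.5 here,
                                   % where the response is only a few dB down
xlabel('Normalized frequency'); ylabel('Magnitude (dB)');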

It's an example of MATLAB's tendency to have lots of different functions for a similar purpose, each limited or buggy in a different way (see http://abandonmatlab.wordpress.com/2012/08/07/matlab-cant-read-plain-text-data-out-of-a-wet-paper-bag/ for an enlightening rant).

Concerning the original post, the best form of downsampling depends on your requirements.  If the goal is to preserve time-domain properties of the waveform, it may be best to simply replace each set of N samples by its average (where N is the downsampling factor).  If the goal is to preserve frequency-domain properties (within the band below the new Nyquist), decimate() is best.  Both are similar in that each new sample is a function of a set (window) of samples of the original series.  In the first case this set is as compact as makes sense (N samples).  In the second it is much more extended, so temporal features may be blurred.  In the example given, the window includes samples before and after the beginning and end of the signal (treated as zero), so the outcome is contaminated by the response to the onset and offset steps.  This effect is severe because of the large DC offset.
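
For the case in the original post (300 Hz down to 1 Hz), a minimal sketch of both options, assuming the data are in a single column vector x.  Removing the mean before decimating reduces the onset/offset artifact, and splitting the large factor into stages follows the advice in decimate()'s documentation:
N = 300;                                    % downsampling factor: 300 Hz -> 1 Hz
% Option 1, time domain: replace each block of N samples by its average
xTrim = x(1:floor(numel(x)/N)*N);           % drop leftover samples at the end
xAvg  = mean(reshape(xTrim, N, []), 1).';
% Option 2, frequency domain: lowpass and decimate; remove the DC offset first,
% then decimate in stages (10*10*3 = 300) and restore the offset
m    = mean(x);
xDec = decimate(decimate(decimate(x - m, 10), 10), 3) + m;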

downsample() is unlikely to be useful in any situation, as it ignores all but one in every N samples of the original data.  It's a rather silly function, because its functionality is easy to get by writing x(1:N:end).
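
(The equivalence is easy to check, e.g.:)
isequal(downsample(x, 300), x(1:300:end))   % true, with downsample's default offset of 0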


Alain


On 14 Aug 2014, at 15:41, Hari Bharadwaj <hari@xxxxxxxxxxxxxxxxxxx> wrote:

> Hi J,
>   downsample(.) just picks out every Nth sample (every 300th in your case). Though visually it may look like it worked well, it is quite likely that some of the high-frequency portions of the signal are now aliased into the low-frequency portions.
>
> resample(.) and decimate(.) guard against this aliasing by filtering first. But filtering causes artifacts at the edges of your signal, because you don't have the input for the infinite past (or, in your FIR case, enough samples that you don't care about for the filter transients to die out). However, the middle portions should be properly filtered and decimated. So if you throw out the first few samples (technically, the length of the filter used) and the last few samples, the rest should match reasonably with what you have in mind. This is not a problem with the software implementations but rather a fundamental limitation of filtering and time-frequency uncertainty. If you care about those first few seconds, your input should include extra samples at the beginning and end that you don't care about, to absorb the filter transients.
>
> I would recommend consulting a signal processing textbook for more on these issues (i.e., aliasing, filter design, resampling, etc.).
> Briefly, these might help:
> http://en.m.wikipedia.org/wiki/Anti-aliasing_filter
> https://ccrma.stanford.edu/~jos/fp/Transient_Response_Steady_State.html
> The help for resample(.) talks about what kind of filter is employed: http://www.mathworks.com/help/signal/ref/resample.html
>
>
> Hari
>
>> On Aug 13, 2014, at 7:05 PM, Juan Wu <wujuan22@xxxxxxxxx> wrote:
>>
>> List experts,
>>
>> I am attempting to downsample my data from 300 Hz to 1 Hz. I just tried two MATLAB functions: 1) a = resample(Data,1,300); 2) b = downsample(Data,300). The results are quite different from each other.
>>
>> <image.png>
>>
>> Apparently, the result from "downsample" is close to the original data. However, quite a few people have suggested using the resampling method - I got this information from Google searches, and I agree with this view. Personally I think the downsample method is too simple and rather arbitrary, and I also do not trust the "nearest neighbor" method. I assume that resampling uses both an FIR interpolator and decimator - this is what I expected. Am I right?  But now my output from the resampling method looks really terrible. So I assume that MATLAB's resample function does not do a good job here. I am not sure whether I need to switch to other software or try other functions in MATLAB.
>>
>> Any opinions or references are much appreciated.
>> J
>
>