signal-processing fft dft digital dct

Do the FFT and IFFT implicitly assume circular convolution? Does the DCT assume the same?


If I put zeros at the end of an input sequence, take its FFT, and then take an IFFT of greater length by appending zeros to the FFT output, will the zeros of the input sequence be reflected in the final IFFT output?

I tried this in MATLAB and Python, and the last part of the final IFFT output deviates from being nearly zero, as though the original data had no zero tail at all.

However, when I repeat the same process with the DCT and IDCT, the last part of the output is smoothed down to nearly zero.

I am taking a 512-point FFT/DCT and a 2048-point IFFT/IDCT for this.
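In code, the experiment looks roughly like this (a Python sketch; the step-shaped test signal is made up for illustration, and SciPy's `dct`/`idct` stand in for whichever routines were actually used):

```python
import numpy as np
from scipy.fft import fft, ifft, dct, idct

N1, N2 = 512, 2048
x = np.zeros(N1)
x[:384] = 1.0                      # data in the first 3/4, zeros in the tail

# FFT route: 512-point FFT, append zeros to the spectrum, 2048-point IFFT
X = fft(x)
y_fft = ifft(np.concatenate([X, np.zeros(N2 - N1)])) * (N2 / N1)

# DCT route: 512-point DCT-II, append zeros, 2048-point inverse transform
C = dct(x, norm='ortho')
y_dct = idct(np.concatenate([C, np.zeros(N2 - N1)]), norm='ortho') * np.sqrt(N2 / N1)

# Compare the ends of the outputs, where the input was zero
print(np.abs(y_fft[-100:]).max())  # well above zero: rise/ringing at the wrap-around
print(np.abs(y_dct[-100:]).max())  # close to zero: the mirror extension is continuous
```

The rescaling factors compensate for the length-dependent normalization of the inverse transforms, so the outputs stay on the input's amplitude scale.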

Could anyone give a mathematical explanation for this?

[Figure: output comparison between the DFT and DCT cases]


Solution

  • Conventional FFTs implicitly assume periodic repetition with period equal to the array length (circular wrap-around, if you like). This can be bad news when there is a discontinuity where the data wraps around, as there is in this example.

    The DCT has implicit mirror boundary conditions at the ends of its range, so the extended function is continuous across the array boundaries.

    The only exception I have encountered was the FFT used by Jodrell Bank for processing Merlin interferometer observations, which had reflection-symmetry boundary conditions. I'm assuming that neither package you used does any preconvolution or weighting of the input data to better approximate a DFT.

    If you look at the FFT (blue line), the right-hand end of the data is a nice sharp transition, but the wrap-around from the right-hand end of the array back to the origin value becomes a soft rise that makes the waveform periodic and continuous. A discontinuity combined with a finite maximum frequency produces Gibbs-phenomenon oscillations around that sharp rise.

    If you look at the DCT (red line), the right-hand end of the actual data has a soft corner going down to the baseline, which is then continuous across the right-hand boundary. The ringing is less obvious to the eye on the DCT, but it is still there. Here the function is continuous across both the left and right boundaries, which makes it cleaner.

    Padding data with zeroes is common in indirect imaging and is usually called oversampling; we typically work with factors of 1.5 to 2.0. It makes the resulting transform smoother and much easier for contouring codes to plot.

    If you have done it right for 512 zero-padded to 2048, then every 4th value in your output array should equal your input array to within rounding error. Used this way, it is an expensive way of interpolating data.

    Various methods are used to ameliorate these annoying FFT edge effects, either by multiplicative weighting or by preconvolution of the input data. The simplest are called window functions; each has advantages and disadvantages, so you have to compromise. Optimal regridding functions are still actively researched.
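The every-4th-sample claim is easy to check numerically; here is a quick NumPy sketch with a random test vector standing in for real data:

```python
import numpy as np

rng = np.random.default_rng(0)
N1, N2 = 512, 2048
x = rng.standard_normal(N1)

# Zero-pad the 512-point spectrum and take a 2048-point inverse transform.
X = np.fft.fft(x)
y = np.fft.ifft(np.concatenate([X, np.zeros(N2 - N1)])) * (N2 / N1)

# Every 4th output sample reproduces the input to within rounding error;
# the samples in between are trigonometric (band-limited) interpolates.
# They are complex in general, because appending zeros at the end of the
# spectrum breaks the conjugate symmetry of a real signal's DFT.
print(np.allclose(y[::4].real, x))       # True
print(np.abs(y[::4].imag).max() < 1e-8)  # True
```

The factor `N2 / N1` compensates for the `1/N` normalization of the longer inverse transform; without it, the recovered samples come out scaled by `N1 / N2`.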