From: "Jordan Rosenthal"
Subject: Re: Why Numerical Recipes in C stink
Date: Thu, 16 Dec 1999 09:48:41 -0500
Newsgroups: sci.math.num-analysis
Keywords: Fast Fourier Transform, Discrete Fourier Transform, differences

Hi,

I'd like to jump into this for a second.

In the signal processing literature (where the term FFT originated), the term FFT has come to mean, loosely, "any algorithm that can compute the DFT faster than the 'naive' approach of directly implementing the DFT equations". Typically, it refers to the O(m log m) algorithms, since these are the best known, but that is not an absolute. You can find MANY books that describe all the different Cooley-Tukey, split-radix, PFA, and Winograd algorithms under the blanket heading of FFT.

Here is a quote from one of the bibles of digital signal processing (Discrete-Time Signal Processing, Oppenheim & Schafer):

"The major focus of this chapter is a particularly efficient class of algorithms for the digital calculation of the N-point DFT. Collectively, these are called fast Fourier transform (FFT) algorithms, ..."

Incidentally, the chapter this was taken from is titled "Computation of the Discrete Fourier Transform".

A tendency for people to use the term FFT in place of the actual mathematical operation, the DFT, leads to some of the confusion around the term. It is the DFT operation that an algorithm implements. If it does so in a numerically fast way, it gets classified as an FFT algorithm.

Really, this discussion is just a matter of semantics, i.e., how you want to define the word FFT. Use whatever definition you'd like. The important thing is that we can communicate with each other and know what each of us is referring to when we say FFT. For example, my personal preference would be to use the term FDFT, for "fast discrete Fourier transform", since I think this makes it clearer that I am talking about a fast way of computing the DFT.
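[Editor's note: to make the distinction above concrete, here is a minimal sketch of both sides of it, not taken from the thread: a direct O(n^2) evaluation of the DFT definition, and a recursive radix-2 Cooley-Tukey FFT that computes exactly the same DFT, just faster. Function names are my own.]

```python
import cmath

def dft_naive(x):
    """Direct O(n^2) evaluation of the DFT definition: X[m] = sum_k x[k] e^{-2*pi*i*m*k/n}."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * m * k / n) for k in range(n))
            for m in range(n)]

def fft_radix2(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_radix2(x[0::2])      # DFT of even-indexed samples
    odd = fft_radix2(x[1::2])       # DFT of odd-indexed samples
    tw = [cmath.exp(-2j * cmath.pi * k / n) for k in range(n // 2)]  # twiddle factors
    return ([even[k] + tw[k] * odd[k] for k in range(n // 2)] +
            [even[k] - tw[k] * odd[k] for k in range(n // 2)])

# Same operation (the DFT), two algorithms: the results agree to rounding error.
x = [1.0, 2.0, 3.0, 4.0, 0.0, -1.0, -2.0, -3.0]
assert all(abs(a - b) < 1e-9 for a, b in zip(dft_naive(x), fft_radix2(x)))
```

The point of the sketch is exactly the one made above: `dft_naive` and `fft_radix2` implement the same mathematical operation, and only the second earns the label "fast".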
However, nobody would understand what I was saying, and it would lead to more confusion. Yes, just using the term FFT may lead to a modicum of confusion about the particular algorithm we are referring to. I'd argue: so what! If someone really wanted you to use a particular algorithm, they would mention the name of the particular FFT to you. If they didn't know what algorithm to use, but knew that they wanted you to avoid the direct 'naive' approach, they would just say "use an FFT" and leave the specifics of the implementation up to you.

Anyway, just some thoughts.

- Jordan

"E. Robert Tisdale" wrote in message news:3858AE2C.F256DB5E@netwood.net...
> stevenj@gil-galad.mit.edu wrote:
>
> > Putting aside the facts that you are neglecting
> > the difference in the log base,
> > and that performance is more complicated
> > than the difference in arithmetic complexity,
> > and that there are O(m log m) algorithms
> > for prime sizes (you don't have to use m^2),
> > and that the literature almost universally calls
> > any O(n log n) algorithm an "FFT",
> > I'd still be inclined to disagree with your (apparent) definition.
> >
> > It sounds like you only want to use "FFT" for the algorithm
> > with the smallest constant factors in its arithmetic complexity.
> > In this case, however, you can't even call a radix-2 algorithm an "FFT".
> > Split-radix, for example, has lower arithmetic complexity
> > than radix-2 for the same sizes.
> > And this doesn't even get into algorithms like PFA and Winograd
> > that are completely different from Cooley-Tukey (though still O(n lg n)).
> >
> > In some sense, you can't directly compare e.g. radix-3 with radix-2,
> > since they don't operate on the same transform sizes.
> > It is certainly the case, however, that you often lose performance
> > by padding your transform size up to the next power of two,
> > rather than using a mixed-radix algorithm with small prime factors.
> > For this reason alone, I would be reluctant
> > to deny the appellation of "fast" to mixed-radix algorithms.
>
> Good night Steven.
>
> I am impressed with your knowledge of taxonomy,
> but I don't favor the names given to each species.
> I don't like it when algorithms are named after people,
> because it doesn't help me remember what the algorithm is about.
> All of the so-called Fast Fourier Transforms involve
> a factorization of the length n, and a factorization
> into small primes is usually fastest, but the prime factor algorithm (PFA)
> actually only factors the length into mutually prime numbers,
> so I think its name is misleading.
>
> I think that most people think of the radix-2 Cooley-Tukey algorithm
> when someone mentions FFTs. I think they assume
> that you can always get what you need out of an FFT
> by simply padding with zeros out to the next power of 2 --
> you can't. Sometimes you must bite the bullet
> and compute an FFT of length n that is not a power of 2.
> Anyway, padding an array with zeros out to a power of 2
> so that you can use a radix-2 algorithm may or may not
> be faster than using a mixed-radix algorithm.
> It depends upon how many zeros you actually add to the array.
> Or you simply may not have enough extra memory
> for the zero-padded array.
> Or your mixed-radix algorithm may be too large
> to fit into program memory.
>
> But generally,
> I think it is nice to have a Discrete Fourier Transform
> of arbitrary length n in a numerical library,
> so that you can use it without considering these issues.
> You can always replace it later if your profiler indicates
> that it creates an unacceptable bottleneck in your program.
>
> E. Robert Tisdale
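[Editor's note: Tisdale's point that zero-padding does not give you the transform you wanted can be seen directly. The sketch below, my own illustration and not from the thread, compares the length-6 DFT of a signal with the length-8 DFT of its zero-padded copy: the padded transform samples the underlying frequency response on a different grid, so its bins are generally not the bins of the original DFT.]

```python
import cmath

def dft(x):
    """Direct evaluation of the DFT definition (works for any length n)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * m * k / n) for k in range(n))
            for m in range(n)]

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # length 6: not a power of 2
padded = x + [0.0, 0.0]              # zero-padded up to length 8

X6 = dft(x)
X8 = dft(padded)

# Bin 1 of the length-6 DFT samples the frequency 2*pi/6, while bin 1 of the
# length-8 DFT samples 2*pi/8 -- different frequencies, hence different values.
assert abs(X6[1] - X8[1]) > 1.0
```

If you really need the length-n DFT for a non-power-of-2 n, the options the quoted discussion points at are a mixed-radix algorithm (factoring n into small primes) or one of the O(m log m) prime-size algorithms Steven mentions; zero-padding computes a related but different transform.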