Spectra derived from fast Fourier transform (FFT) analysis of time-domain data intrinsically contain statistical fluctuations whose distribution depends on the number of accumulated spectra contributing to a measurement. The tail of this distribution, which is essential for separating the true signal from the statistical fluctuations, deviates noticeably from the normal distribution for a finite number of accumulations. In this paper, we develop a theory that properly accounts for these statistical fluctuations when fitting a model to a given accumulated spectrum. The method is implemented in software to automatically fit a large body of such FFT-derived spectra. We apply this tool to analyze a portion of a dense cluster of spikes recorded by our FASR Subsystem Testbed instrument during a record-breaking event that occurred on 2006 December 6. The outcome of this analysis is briefly discussed.
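The tail behavior described above can be illustrated numerically. The sketch below (not from the paper; the function names and sample sizes are illustrative choices) models each power-spectrum bin of Gaussian noise as an exponential random variable, so that averaging N independent spectra yields a Gamma-distributed value whose tail approaches the normal limit only as N grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def accumulated_power(n_accum, n_samples=200_000):
    """Per-bin power of Gaussian noise is exponentially distributed;
    averaging n_accum independent spectra gives a Gamma(n_accum)-shaped
    distribution for the accumulated bin value."""
    # each row is one accumulated measurement of a single spectral bin
    return rng.exponential(1.0, size=(n_samples, n_accum)).mean(axis=1)

def tail_excess(n_accum):
    """Fraction of samples beyond mean + 3*std, divided by ~0.00135,
    the one-sided 3-sigma exceedance probability of a normal law."""
    p = accumulated_power(n_accum)
    threshold = p.mean() + 3.0 * p.std()
    return (p > threshold).mean() / 0.00135

# the upper tail is far heavier than normal for few accumulations
# and relaxes toward the normal value (ratio -> 1) as N increases
print(tail_excess(1), tail_excess(10), tail_excess(100))
```

For a single accumulation the 3-sigma exceedance is an order of magnitude larger than the normal prediction, which is why a Gaussian noise model would misclassify many pure-noise excursions as signal; this is the effect the paper's fitting theory is designed to handle.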