Additive synthesis is a sound synthesis technique that creates timbre by adding sine waves together.<ref name="JOS_Additive">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref>Template:Cite journal</ref>
The timbre of musical instruments can be considered in the light of Fourier theory to consist of multiple harmonic or inharmonic partials or overtones. Each partial is a sine wave of different frequency and amplitude that swells and decays over time due to modulation from an ADSR envelope or low frequency oscillator.
Additive synthesis most directly generates sound by adding the output of multiple sine wave generators. Alternative implementations may use pre-computed wavetables or the inverse fast Fourier transform.
Explanation
The sounds that are heard in everyday life are not characterized by a single frequency. Instead, they consist of a sum of pure sine frequencies, each one at a different amplitude. When we hear these frequencies simultaneously, we recognize the sound. This is true for both "non-musical" sounds (e.g. water splashing, leaves rustling, etc.) and for "musical sounds" (e.g. a piano note, a bird's tweet, etc.). This set of parameters (frequencies, their relative amplitudes, and how the relative amplitudes change over time) is encapsulated by the timbre of the sound. Fourier analysis is the technique used to determine these exact timbre parameters from an overall sound signal; conversely, the resulting set of frequencies and amplitudes is called the Fourier series of the original sound signal.
In the case of a musical note, the lowest frequency of its timbre is designated as the sound's fundamental frequency. For simplicity, we often say that the note is playing at that fundamental frequency (e.g. "middle C is 261.6 Hz"),<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> even though the sound of that note consists of many other frequencies as well. The set of the remaining frequencies is called the overtones (or the harmonics, if their frequencies are integer multiples of the fundamental frequency) of the sound.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> In other words, the fundamental frequency alone is responsible for the pitch of the note, while the overtones define the timbre of the sound. The overtones of a piano playing middle C will be quite different from the overtones of a violin playing the same note; that's what allows us to differentiate the sounds of the two instruments. There are even subtle differences in timbre between different versions of the same instrument (for example, an upright piano vs. a grand piano).
Additive synthesis aims to exploit this property of sound in order to construct timbre from the ground up. By adding together pure frequencies (sine waves) of varying frequencies and amplitudes, we can precisely define the timbre of the sound that we want to create.
Definitions
Harmonic additive synthesis is closely related to the concept of a Fourier series, which is a way of expressing a periodic function as the sum of sinusoidal functions with frequencies equal to integer multiples of a common fundamental frequency. These sinusoids are called harmonics, overtones, or generally, partials. In general, a Fourier series contains an infinite number of sinusoidal components, with no upper limit to the frequency of the sinusoidal functions, and includes a DC component (one with frequency of 0 Hz). Frequencies outside the human audible range can be omitted, so only a finite number of sinusoidal terms with frequencies within the audible range are modeled in additive synthesis.
A waveform or function is said to be periodic if
- <math> y(t) = y(t+P) </math>
for all <math> t </math> and for some period <math> P </math>.
The Fourier series of a periodic function is mathematically expressed as:
- <math> \begin{align}
y(t) &= \frac{a_0}{2} + \sum_{k=1}^{\infty} \left[ a_k \cos(2 \pi k f_0 t ) - b_k \sin(2 \pi k f_0 t ) \right] \\ &= \frac{a_0}{2} + \sum_{k=1}^{\infty} r_k \cos\left(2 \pi k f_0 t + \phi_k \right) \\ \end{align} </math>
where
- <math>f_0 = 1/P</math> is the fundamental frequency of the waveform and is equal to the reciprocal of the period,
- <math>a_k = r_k \cos(\phi_k) = 2 f_0 \int_{0}^P y(t) \cos(2 \pi k f_0 t)\, dt, \quad k \ge 0</math>
- <math>b_k = r_k \sin(\phi_k) = -2 f_0 \int_{0}^P y(t) \sin(2 \pi k f_0 t)\, dt, \quad k \ge 1</math>
- <math>r_k = \sqrt{a_k^2 + b_k^2}</math> is the amplitude of the <math>k</math>th harmonic,
- <math>\phi_k = \operatorname{atan2}(b_k, a_k)</math> is the phase offset of the <math>k</math>th harmonic; atan2 is the four-quadrant arctangent function.
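For example, an ideal square wave alternating between <math>+1</math> and <math>-1</math> has <math>a_0 = 0</math> and contains only odd harmonics:

- <math>r_k = \begin{cases} \dfrac{4}{\pi k}, & k \text{ odd} \\ 0, & k \text{ even,} \end{cases} \qquad \phi_k = -\frac{\pi}{2},</math>

so that <math>y(t) = \frac{4}{\pi}\left( \sin(2 \pi f_0 t) + \tfrac{1}{3} \sin(2 \pi \, 3 f_0 t) + \tfrac{1}{5} \sin(2 \pi \, 5 f_0 t) + \cdots \right)</math>.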
Being inaudible, the DC component, <math>a_0/2</math>, and all components with frequencies higher than some finite limit, <math>K f_0</math>, are omitted in the following expressions of additive synthesis.
Harmonic form
The simplest harmonic additive synthesis can be mathematically expressed as:

- <math>y(t) = \sum_{k=1}^{K} r_k \cos\left(2 \pi k f_0 t + \phi_k \right),</math>

where <math>y(t)</math> is the synthesis output, <math>r_k</math>, <math>k f_0</math>, and <math>\phi_k</math> are the amplitude, frequency, and the phase offset, respectively, of the <math>k</math>th harmonic partial of a total of <math>K</math> harmonic partials, and <math>f_0</math> is the fundamental frequency of the waveform and the frequency of the musical note.
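As a concrete illustration, this sum can be evaluated directly in a few lines. The following is a minimal sketch, assuming NumPy; the <math>1/k</math> amplitude rolloff and zero phases are arbitrary choices that happen to approximate a sawtooth wave:

<syntaxhighlight lang="python">
import numpy as np

fs = 44100              # sample rate in Hz (assumed)
f0 = 220.0              # fundamental frequency
K = 30                  # number of harmonics (30 * 220 Hz stays below fs/2)
t = np.arange(fs) / fs  # one second of sample times

# r_k = 1/k, phi_k = 0: a sawtooth-like harmonic spectrum.
y = sum((1.0 / k) * np.cos(2 * np.pi * k * f0 * t) for k in range(1, K + 1))
y /= np.max(np.abs(y))  # normalize to [-1, 1]
</syntaxhighlight>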
Time-dependent amplitudes
File:Harmonic additive synthesis spectrum.png | Example of harmonic additive synthesis in which each harmonic has a time-dependent amplitude. The fundamental frequency is 440 Hz.
More generally, the amplitude of each harmonic can be prescribed as a function of time, <math>r_k(t)</math>, in which case the synthesis output is

- <math>y(t) = \sum_{k=1}^{K} r_k(t) \cos\left(2 \pi k f_0 t + \phi_k \right).</math>
Each envelope <math>r_k(t)</math> should vary slowly relative to the frequency spacing between adjacent sinusoids; that is, the bandwidth of <math>r_k(t)</math> should be significantly less than <math>f_0</math>.
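For instance, giving each harmonic its own slowly decaying envelope yields a plucked, piano-like tone. This is a minimal sketch, assuming NumPy; the decay rates are arbitrary but are chosen to vary far more slowly than the 220 Hz harmonic spacing, as required above:

<syntaxhighlight lang="python">
import numpy as np

fs, f0, K = 44100, 220.0, 20
t = np.arange(2 * fs) / fs      # two seconds of sample times

y = np.zeros_like(t)
for k in range(1, K + 1):
    # Higher harmonics decay faster; each envelope's bandwidth (a few Hz) is << f0.
    r_k = (1.0 / k) * np.exp(-1.5 * k * t)
    y += r_k * np.cos(2 * np.pi * k * f0 * t)
y /= np.max(np.abs(y))
</syntaxhighlight>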
Inharmonic form
Additive synthesis can also produce inharmonic sounds (which are aperiodic waveforms) in which the individual overtones need not have frequencies that are integer multiples of some common fundamental frequency.<ref name=smith05> Template:Cite book (online reprint)</ref><ref name=smith11> Template:Cite book</ref> While many conventional musical instruments have harmonic partials (e.g. an oboe), some have inharmonic partials (e.g. bells). Inharmonic additive synthesis can be described as
- <math>y(t) = \sum_{k=1}^{K} r_k(t) \cos\left(2 \pi f_k t + \phi_k \right),</math>
where <math>f_k</math> is the constant frequency of the <math>k</math>th partial.
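A minimal sketch of this form, assuming NumPy; the partial frequencies, amplitudes, and decay rates below are loosely bell-like but purely illustrative, not measurements of any real instrument:

<syntaxhighlight lang="python">
import numpy as np

fs = 44100
t = np.arange(3 * fs) / fs      # three seconds of sample times

# (f_k in Hz, amplitude, decay rate): non-integer frequency ratios make the sound inharmonic.
partials = [(220.0, 1.0, 0.5), (563.2, 0.6, 0.9), (921.7, 0.4, 1.4), (1523.0, 0.3, 2.2)]

y = sum(a * np.exp(-d * t) * np.cos(2 * np.pi * f * t) for f, a, d in partials)
y /= np.max(np.abs(y))
</syntaxhighlight>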
File:Inharmonic additive synthesis spectrum.png | Example of inharmonic additive synthesis in which both the amplitude and frequency of each partial are time-dependent.
Time-dependent frequencies
In the general case, the instantaneous frequency of a sinusoid is the derivative (with respect to time) of the argument of the sine or cosine function. If this frequency is represented in hertz, rather than in angular frequency form, then this derivative is divided by <math>2 \pi</math>. This is the case whether the partial is harmonic or inharmonic and whether its frequency is constant or time-varying.
In the most general form, the frequency of each non-harmonic partial is a non-negative function of time, <math>f_k(t)</math>, yielding

- <math>y(t) = \sum_{k=1}^{K} r_k(t) \cos\left(2 \pi \int_0^t f_k(u)\ du + \phi_k \right).</math>
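In discrete time the phase integral can be approximated by a running sum of the instantaneous frequency, as the Discrete-time equations section below makes precise. A minimal sketch, assuming NumPy, for a single partial gliding linearly from 220 Hz to 440 Hz (an arbitrary example):

<syntaxhighlight lang="python">
import numpy as np

fs = 44100
t = np.arange(2 * fs) / fs                  # two seconds of sample times

f_inst = 220.0 + 110.0 * t                  # instantaneous frequency f_k(t) in Hz
phase = 2 * np.pi * np.cumsum(f_inst) / fs  # running sum approximates the phase integral
y = np.cos(phase)
</syntaxhighlight>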
Broader definitions
Additive synthesis more broadly may mean sound synthesis techniques that sum simple elements to create more complex timbres, even when the elements are not sine waves.<ref> Template:Cite book</ref><ref name="MooreFoundationsCM"> Template:Cite book </ref> For example, F. Richard Moore listed additive synthesis as one of the "four basic categories" of sound synthesis alongside subtractive synthesis, nonlinear synthesis, and physical modeling.<ref name="MooreFoundationsCM"/> In this broad sense, pipe organs, which also have pipes producing non-sinusoidal waveforms, can be considered as a variant form of additive synthesizers. Summation of principal components and Walsh functions have also been classified as additive synthesis.<ref> Template:Cite book</ref>
Implementation methods
Modern-day implementations of additive synthesis are mainly digital. (See the section Discrete-time equations below for the underlying theory.)
Oscillator bank synthesis
Additive synthesis can be implemented using a bank of sinusoidal oscillators, one for each partial.<ref name="JOS_Additive"/>
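Such a bank can be modeled as an array of phase accumulators, one per partial. The following is a minimal, unoptimized sketch assuming NumPy (real-time implementations typically use recursive oscillators or vectorized block processing instead of per-sample calls):

<syntaxhighlight lang="python">
import numpy as np

class SineOscillatorBank:
    """One phase accumulator per partial; freqs and amps may be changed between samples."""
    def __init__(self, freqs, amps, fs=44100):
        self.freqs = np.asarray(freqs, dtype=float)
        self.amps = np.asarray(amps, dtype=float)
        self.phases = np.zeros(len(self.freqs))
        self.fs = fs

    def tick(self):
        sample = float(np.sum(self.amps * np.cos(self.phases)))
        self.phases = (self.phases + 2 * np.pi * self.freqs / self.fs) % (2 * np.pi)
        return sample

bank = SineOscillatorBank(freqs=[220.0, 440.0, 660.0], amps=[1.0, 0.5, 0.33])
y = np.array([bank.tick() for _ in range(44100)])   # one second of audio
</syntaxhighlight>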
Wavetable synthesis
In the case of harmonic, quasi-periodic musical tones, wavetable synthesis can be as general as time-varying additive synthesis, but requires less computation during synthesis.<ref name="Wavetable Synthesis 101">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref name="Wavetable Matching Synthesis of Dynamic Instruments with Genetic Algorithms">Template:Cite journal</ref> As a result, an efficient implementation of time-varying additive synthesis of harmonic tones can be accomplished by use of wavetable synthesis.
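A minimal sketch of the idea, assuming NumPy: the expensive sum over partials is computed once into a single-cycle table, after which each output sample is just a table lookup. The truncating lookup and fixed amplitudes are simplifications; practical wavetable synthesizers interpolate between table entries and cross-fade between tables to vary the timbre over time.

<syntaxhighlight lang="python">
import numpy as np

N = 2048                                   # wavetable length
k = np.arange(1, 16)                       # 15 harmonics
amps = 1.0 / k                             # sawtooth-like amplitudes
phase = 2 * np.pi * np.arange(N) / N
table = (amps[:, None] * np.cos(k[:, None] * phase)).sum(axis=0)  # summed once, up front

def play(table, f0, fs, num_samples):
    """Read the single-cycle table with a fractional phase increment."""
    idx = (np.arange(num_samples) * f0 * len(table) / fs) % len(table)
    return table[idx.astype(int)]          # truncating lookup; real code interpolates

y = play(table, f0=220.0, fs=44100, num_samples=44100)
</syntaxhighlight>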
Group additive synthesis
Group additive synthesis<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref> Template:Cite journal</ref><ref> Template:Cite book</ref> is a method of grouping partials into harmonic groups (each having its own fundamental frequency) and synthesizing each group separately with wavetable synthesis before mixing the results.
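A sketch of the grouping idea, assuming NumPy; the two groups, their fundamentals, and their harmonic amplitudes below are hypothetical (in practice the partials of an analyzed sound are assigned to groups automatically):

<syntaxhighlight lang="python">
import numpy as np

def harmonic_table(amps, N=2048):
    """Render one harmonic group into a single-cycle wavetable."""
    k = np.arange(1, len(amps) + 1)
    phase = 2 * np.pi * np.arange(N) / N
    return (np.asarray(amps)[:, None] * np.cos(k[:, None] * phase)).sum(axis=0)

def play(table, f0, fs, num_samples):
    idx = (np.arange(num_samples) * f0 * len(table) / fs) % len(table)
    return table[idx.astype(int)]

fs = 44100
group_a = play(harmonic_table([1.0, 0.5, 0.25]), f0=220.0, fs=fs, num_samples=fs)
group_b = play(harmonic_table([0.8, 0.3]), f0=311.1, fs=fs, num_samples=fs)
y = group_a + group_b                      # mix the separately synthesized groups
</syntaxhighlight>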
Inverse FFT synthesis
An inverse fast Fourier transform can be used to efficiently synthesize frequencies that evenly divide the transform period or "frame". By careful consideration of the DFT frequency-domain representation it is also possible to efficiently synthesize sinusoids of arbitrary frequencies using a series of overlapping frames and the inverse fast Fourier transform.<ref name="RodetDepalle_FFTm1"> Template:Cite journal </ref>
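For the simple case of frequencies that are exact integer multiples of the frame rate <math>f_\mathrm{s}/N</math>, one frame reduces to a single inverse FFT. A minimal sketch assuming NumPy (arbitrary, non-bin-centered frequencies require the overlapping-frame machinery of the cited method and are not shown):

<syntaxhighlight lang="python">
import numpy as np

fs, N = 44100, 4096
# (bin index, amplitude, phase); the partial frequency is bin * fs / N.
partials = [(10, 1.0, 0.0), (20, 0.5, 1.0), (35, 0.25, -0.5)]

spectrum = np.zeros(N // 2 + 1, dtype=complex)
for k, amp, phi in partials:
    spectrum[k] = (amp * N / 2) * np.exp(1j * phi)  # scaled so irfft yields amp*cos(...)

frame = np.fft.irfft(spectrum, n=N)        # all partials synthesized in one transform
</syntaxhighlight>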
Additive analysis/resynthesis
It is possible to analyze the frequency components of a recorded sound, giving a "sum of sinusoids" representation. This representation can be re-synthesized using additive synthesis. One method of decomposing a sound into time-varying sinusoidal partials is short-time Fourier transform (STFT)-based McAulay-Quatieri Analysis.<ref name=MQ1986> Template:Cite journal</ref><ref> {{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
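The following heavily simplified sketch, assuming NumPy and SciPy, conveys the analysis-resynthesis flow: pick the strongest spectral peaks in each STFT frame, then resynthesize them as windowed sinusoids. It omits the partial tracking, birth/death logic, and phase matching that make McAulay-Quatieri analysis work well, so it is only a crude approximation of that method:

<syntaxhighlight lang="python">
import numpy as np
from scipy.signal import stft, find_peaks

def crude_resynthesis(x, fs, nperseg=1024, max_peaks=20):
    """Per-frame peak picking followed by oscillator resynthesis (for clarity, not speed)."""
    freqs, times, Z = stft(x, fs=fs, nperseg=nperseg)   # 50% overlap by default
    hop_s = (nperseg // 2) / fs
    t = np.arange(len(x)) / fs
    y = np.zeros(len(x))
    for mag, t_c in zip(np.abs(Z).T, times):
        peaks, _ = find_peaks(mag)
        peaks = peaks[np.argsort(mag[peaks])[-max_peaks:]]    # keep the strongest peaks
        w = np.clip(1.0 - np.abs(t - t_c) / hop_s, 0.0, 1.0)  # triangular overlap-add fade
        for k in peaks:
            y += w * mag[k] * np.cos(2 * np.pi * freqs[k] * t)  # zero phase: crude
    return y / max(1e-12, np.abs(y).max())   # output amplitude is only approximate
</syntaxhighlight>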
By modifying the sum of sinusoids representation, timbral alterations can be made prior to resynthesis. For example, a harmonic sound could be restructured to sound inharmonic, and vice versa. Sound hybridisation or "morphing" has been implemented by additive resynthesis.<ref name="XSerraPhD"> Template:Cite thesis</ref>
Additive analysis/resynthesis has been employed in a number of techniques including Sinusoidal Modelling,<ref> {{#invoke:citation/CS1|citation |CitationClass=web }}</ref> Spectral Modelling Synthesis (SMS),<ref name="XSerraPhD"/> and the Reassigned Bandwidth-Enhanced Additive Sound Model.<ref> Template:Cite thesis</ref> Software that implements additive analysis/resynthesis includes: SPEAR,<ref>SPEAR Sinusoidal Partial Editing Analysis and Resynthesis for Mac OS X, MacOS 9 and Windows</ref> LEMUR, LORIS,<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> SMSTools,<ref>SMSTools application for Windows</ref> ARSS.<ref>ARSS: The Analysis & Resynthesis Sound Spectrograph</ref>
Products
The New England Digital Synclavier had a resynthesis feature in which samples could be analyzed and converted into "timbre frames" that fed its additive synthesis engine. The Technos acxel, launched in 1987, utilized the additive analysis/resynthesis model in an FFT implementation.
The vocal synthesizer Vocaloid has also been implemented on the basis of additive analysis/resynthesis: its spectral voice model, called the Excitation plus Resonances (EpR) model,<ref name=BonadaICMC01>Template:Cite journal (PDF)</ref><ref>Template:Cite thesis (PDF). See "Excitation plus resonances voice model" (p. 51)</ref> is an extension of Spectral Modeling Synthesis (SMS), and its diphone concatenative synthesis is processed using spectral peak processing (SPP),<ref>Template:Harvnb, "Spectral peak processing"</ref> a technique similar to the modified phase-locked vocoder<ref>Template:Harvnb, "Phase locked vocoder"</ref> (an improved phase vocoder for formant processing).<ref name=BonadaSMAC03>Template:Cite journal</ref> Using these techniques, spectral components (formants) consisting of purely harmonic partials can be transformed into the desired form for sound modeling, and a sequence of short samples (diphones or phonemes) constituting the desired phrase can be smoothly connected by interpolating matched partials and formant peaks in an inserted transition region between samples. (See also Dynamic timbres.)
Applications
Musical instruments
Additive synthesis is used in electronic musical instruments. It is the principal sound generation technique used by Eminent organs.
Speech synthesis
In linguistics research, harmonic additive synthesis was used in the 1950s to play back modified and synthetic speech spectrograms.<ref name="cooper1951" />
Later, in the early 1980s, listening tests were carried out on synthetic speech stripped of acoustic cues to assess their significance. Time-varying formant frequencies and amplitudes derived by linear predictive coding were synthesized additively as pure-tone whistles. This method is called sinewave synthesis.<ref name=remez81> Template:Cite journal</ref><ref name=rubin80>Template:Cite journal</ref> Composite sinusoidal modeling (CSM),<ref name=sagayama79a> Template:Citation</ref><ref name=sagayama79b> Template:Cite conference </ref> used in a singing-synthesis feature of the Yamaha CX5M (1984), is known to use a similar approach, which was developed independently during 1966–1979.<ref name=sagayama86> Template:Cite book</ref><ref name=itakura04> Template:Cite journal </ref> These methods are characterized by the extraction and recomposition of a set of significant spectral peaks corresponding to the several resonance modes that occur in the oral and nasal cavities, from an acoustic point of view. This principle was also utilized in a physical modeling synthesis method called modal synthesis.<ref name=adrien1991> Template:Cite book </ref><ref name=morrison&adrien1993> Template:Cite journal </ref><ref name=bilbao2009> Template:Citation (see also companion page) </ref><ref name=doel&pai2003> Template:Cite journal </ref>
History
Harmonic analysis was discovered by Joseph Fourier,<ref name="prestini2004">Template:Cite book</ref> who published an extensive treatise of his research in the context of heat transfer in 1822.<ref>Template:Cite book</ref> The theory found an early application in the prediction of tides. Around 1876,<ref name="miller1916" /> William Thomson (later ennobled as Lord Kelvin) constructed a mechanical tide predictor. It consisted of a harmonic analyzer and a harmonic synthesizer, as they were already called in the 19th century.<ref name="philmag1875">Template:Cite journalTemplate:Failed verification</ref><ref name="thomson1878">Template:Cite journal</ref> The analysis of tide measurements was done using James Thomson's integrating machine. The resulting Fourier coefficients were input into the synthesizer, which then used a system of cords and pulleys to generate and sum harmonic sinusoidal partials for prediction of future tides. In 1910, a similar machine was built for the analysis of periodic waveforms of sound.<ref name="cahan1993" /> The synthesizer drew a graph of the combination waveform, which was used chiefly for visual validation of the analysis.<ref name="cahan1993" />
Georg Ohm applied Fourier's theory to sound in 1843. The line of work was greatly advanced by Hermann von Helmholtz, who published eight years' worth of research in 1863.<ref name="helmholtz1863">Template:Cite book</ref> Helmholtz believed that the psychological perception of tone color is subject to learning, while hearing in the sensory sense is purely physiological.<ref name="christensen2002">Template:Cite book</ref> He supported the idea that perception of sound derives from signals from nerve cells of the basilar membrane and that the elastic appendages of these cells are sympathetically vibrated by pure sinusoidal tones of appropriate frequencies.<ref name="cahan1993">Template:Cite book</ref> Helmholtz agreed with the finding of Ernst Chladni from 1787 that certain sound sources have inharmonic vibration modes.<ref name="christensen2002" />
In Helmholtz's time, electronic amplification was unavailable. For synthesis of tones with harmonic partials, Helmholtz built an electrically excited array of tuning forks and acoustic resonance chambers that allowed adjustment of the amplitudes of the partials.<ref name="helmholtz1875" /> Built at least as early as 1862,<ref name="helmholtz1875" /> these were in turn refined by Rudolph Koenig, who demonstrated his own setup in 1872.<ref name="helmholtz1875">Template:Cite book</ref> For harmonic synthesis, Koenig also built a large apparatus based on his wave siren. It was pneumatic and utilized cut-out tonewheels, and was criticized for low purity of its partial tones.<ref name="miller1916" /> The tibia pipes of pipe organs also have nearly sinusoidal waveforms and can be combined in the manner of additive synthesis.<ref name="miller1916">Template:Cite book</ref>
In 1938, with significant new supporting evidence,<ref>Template:Cite book</ref> it was reported on the pages of Popular Science Monthly that the human vocal cords function like a fire siren to produce a harmonic-rich tone, which is then filtered by the vocal tract to produce different vowel tones.<ref>Template:Cite journal</ref> By that time, the additive Hammond organ was already on the market. Most early electronic organ makers thought it too expensive to manufacture the plurality of oscillators required by additive organs, and began instead to build subtractive ones.<ref name="comerford1993">Template:Cite journal</ref> In a 1940 Institute of Radio Engineers meeting, the head field engineer of Hammond elaborated on the company's new Novachord as having a "subtractive system" in contrast to the original Hammond organ in which "the final tones were built up by combining sound waves".<ref>Template:Cite journal</ref> Alan Douglas used the qualifiers additive and subtractive to describe different types of electronic organs in a 1948 paper presented to the Royal Musical Association.<ref name="Douglas1948">Template:Cite journal</ref> The contemporary wording additive synthesis and subtractive synthesis can be found in his 1957 book The electrical production of music, in which he categorically lists three methods of forming musical tone-colours, in sections titled Additive synthesis, Subtractive synthesis, and Other forms of combinations.<ref name="douglas1957">Template:Cite book</ref>
A typical modern additive synthesizer produces its output as an electrical, analog signal, or as digital audio, such as in the case of software synthesizers, which became popular around the year 2000.<ref name="pejrolo2007">Template:Cite book</ref>
Timeline
The following is a timeline of historically and technologically notable analog and digital synthesizers and devices implementing additive synthesis.
| Research implementation or publication | Commercially available | Company or institution | Synthesizer or synthesis device | Description | Audio samples |
|---|---|---|---|---|---|
| 1900<ref name="weidenaar1995">Template:Cite book</ref> | 1906<ref name="weidenaar1995" /> | New England Electric Music Company | Telharmonium | The first polyphonic, touch-sensitive music synthesizer.<ref name="moog1977">Template:Cite journal</ref> Implemented sinusoidal additive synthesis using tonewheels and alternators. Invented by Thaddeus Cahill. | no known recordings<ref name="weidenaar1995" /> |
| | 1935<ref name="harvey2011" /> | Hammond Organ Company | Hammond Organ | An electronic additive synthesizer that was commercially more successful than the Telharmonium.<ref name="moog1977" /> Implemented sinusoidal additive synthesis using tonewheels and magnetic pickups. Invented by Laurens Hammond. | Model A |
| 1950 or earlier<ref name="cooper1951">Template:Cite journal</ref> | | Haskins Laboratories | Pattern Playback | A speech synthesis system that controlled amplitudes of harmonic partials by a spectrogram that was either hand-drawn or an analysis result. The partials were generated by a multi-track optical tonewheel.<ref name="cooper1951" /> | samples |
| | | | ANS | An additive synthesizer<ref name="vail2002">Template:Cite magazine</ref> that played microtonal spectrogram-like scores using multiple multi-track optical tonewheels. Invented by Evgeny Murzin. A similar instrument that utilized electronic oscillators, the Oscillator Bank, and its input device Spectrogram were realized by Hugh Le Caine in 1959.<ref name="young1999a" /><ref name="young1999b" /> | 1964 model |
| 1963<ref name="luce1963">Template:Cite thesis</ref> | | MIT | | An off-line system for digital spectral analysis and resynthesis of the attack and steady-state portions of musical instrument timbres by David Luce.<ref name="luce1963" /> | |
| | | University of Illinois | Harmonic Tone Generator | An electronic, harmonic additive synthesis system invented by James Beauchamp.<ref name="beauchamp2009" /><ref name="beauchamp1966">Template:Cite journal</ref> | samples (info) |
| 1974 or earlier<ref name="synthmuseum-RMI" /><ref name="reid2001">Template:Cite journal</ref> | | RMI | Harmonic Synthesizer | The first synthesizer product that implemented additive<ref name="flint2008">Template:Cite journal</ref> synthesis using digital oscillators.<ref name="synthmuseum-RMI" /><ref name="reid2001" /> The synthesizer also had a time-varying analog filter.<ref name="synthmuseum-RMI" /> RMI was a subsidiary of Allen Organ Company, which had released the first commercial digital church organ, the Allen Computer Organ, in 1971, using digital technology developed by North American Rockwell.<ref name="fundinguniverse" /> | 1 2 3 4 |
| 1974<ref name="cosimi2009">Template:Cite journal</ref> | | EMS (London) | Digital Oscillator Bank | A bank of digital oscillators intended for use in analysis-resynthesis with the digital Analysing Filter Bank (AFB), also constructed at EMS.<ref name="cosimi2009" /><ref name="hinton2002" /> Also known as: DOB. | in The New Sound of Music<ref>Template:Cite AV media Includes a demonstration of DOB and AFB.</ref> |
| 1976<ref name="leete1999">Template:Cite journal</ref> | 1976<ref name="twyman2004">Template:Cite thesis</ref> | Fairlight | Qasar M8 | An all-digital synthesizer that used the fast Fourier transform<ref name="street2000" /> to create samples from interactively drawn amplitude envelopes of harmonics. | samples |
| 1977<ref name="Leider2004">Template:Cite book</ref> | | Bell Labs | Digital Synthesizer | A real-time, digital additive synthesizer<ref name="Leider2004" /> that has been called the first true digital synthesizer.<ref name="chadabe1997">Template:Cite book</ref> Also known as: Alles Machine, Alice. | sample (info) |
| 1979<ref name="chadabe1997" /> | 1979<ref name="chadabe1997" /> | New England Digital | Synclavier II | A commercial digital synthesizer that enabled development of timbre over time by smooth cross-fades between waveforms generated by additive synthesis. | Jon Appleton - Sashasonjon |
| | | Kawai | K5000 | A commercial digital synthesizer based on additive synthesis. | |
Discrete-time equations
In digital implementations of additive synthesis, discrete-time equations are used in place of the continuous-time synthesis equations. A notational convention for discrete-time signals uses brackets, i.e. <math>y[n]</math>, where the argument <math>n</math> can take only integer values. If the continuous-time synthesis output <math>y(t)</math> is expected to be sufficiently bandlimited (below half the sampling rate, <math>f_\mathrm{s}/2</math>), it suffices to directly sample the continuous-time expression to get the discrete synthesis equation. The continuous synthesis output can later be reconstructed from the samples using a digital-to-analog converter. The sampling period is <math>T=1/f_\mathrm{s}</math>.
Beginning with the most general continuous-time synthesis equation above,
- <math>y(t) = \sum_{k=1}^{K} r_k(t) \cos\left(2 \pi \int_0^t f_k(u)\ du + \phi_k \right)</math>
and sampling at discrete times <math> t = nT = n/f_\mathrm{s} \,</math> results in
- <math> \begin{align}
y[n] & = y(nT) = \sum_{k=1}^{K} r_k(nT) \cos\left(2 \pi \int_0^{nT} f_k(u)\ du + \phi_k \right) \\ & = \sum_{k=1}^{K} r_k(nT) \cos\left(2 \pi \sum_{i=1}^{n} \int_{(i-1)T}^{iT} f_k(u)\ du + \phi_k \right) \\ & = \sum_{k=1}^{K} r_k(nT) \cos\left(2 \pi \sum_{i=1}^{n} (T f_k[i]) + \phi_k \right) \\ & = \sum_{k=1}^{K} r_k[n] \cos\left(\frac{2 \pi}{f_\mathrm{s}} \sum_{i=1}^{n} f_k[i] + \phi_k \right) \\ \end{align} </math>
where
- <math>r_k[n] = r_k(nT) \,</math> is the discrete-time varying amplitude envelope
- <math>f_k[n] = \frac{1}{T} \int_{(n-1)T}^{nT} f_k(t)\ dt \,</math> is the discrete-time backward difference instantaneous frequency.
This is equivalent to
- <math> y[n] = \sum_{k=1}^{K} r_k[n] \cos\left( \theta_k[n] \right) </math>
where
- <math> \begin{align}
\theta_k[n] &= \frac{2 \pi}{f_\mathrm{s}} \sum_{i=1}^{n} f_k[i] + \phi_k \\ &= \theta_k[n-1] + \frac{2 \pi}{f_\mathrm{s}} f_k[n] \\ \end{align} </math> for all <math>n>0\,</math><ref name="RodetDepalle_FFTm1"/>
and
- <math> \theta_k[0] = \phi_k. \,</math>
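These recurrences translate almost line-for-line into code. A minimal sketch for a single gliding partial, assuming NumPy; the envelope and frequency trajectory are arbitrary examples:

<syntaxhighlight lang="python">
import numpy as np

fs = 44100
num = 2 * fs
f_k = np.linspace(220.0, 440.0, num)   # instantaneous frequency f_k[n] in Hz
r_k = np.linspace(1.0, 0.0, num)       # amplitude envelope r_k[n]
phi_k = 0.0

theta = phi_k                          # theta_k[0] = phi_k
y = np.empty(num)
y[0] = r_k[0] * np.cos(theta)
for n in range(1, num):
    theta += 2 * np.pi * f_k[n] / fs   # theta_k[n] = theta_k[n-1] + (2*pi/f_s) * f_k[n]
    y[n] = r_k[n] * np.cos(theta)
</syntaxhighlight>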