Additive synthesis
{{Short description|Sound synthesis technique}} {{Use dmy dates|date=January 2020}} {{Listen | filename = Additive_synthesis_bell.ogg | title = Additive synthesis example | description = A bell-like sound generated by additive synthesis of 21 inharmonic partials | pos = right }} '''Additive synthesis''' is a [[sound synthesis]] technique that creates [[timbre]] by adding [[sine]] waves together.<ref name="JOS_Additive"> {{cite web | author = Julius O. Smith III | title = Additive Synthesis (Early Sinusoidal Modeling) | url = https://ccrma.stanford.edu/~jos/sasp/Additive_Synthesis_Early_Sinusoidal.html | quote = The term "additive synthesis" refers to sound being formed by adding together many sinusoidal components | access-date = 14 January 2012 }}</ref><ref> {{cite journal | author = Gordon Reid | title = Synth Secrets, Part 14: An Introduction To Additive Synthesis | url = http://www.soundonsound.com/sos/jun00/articles/synthsec.htm | journal = Sound on Sound | issue = January 2000 | access-date = 14 January 2012 }}</ref> The timbre of musical instruments can be considered in the light of [[Fourier series|Fourier theory]] to consist of multiple [[harmonic]] or inharmonic ''[[Harmonic series (music)#Partial|partials]]'' or [[overtone]]s. Each partial is a sine wave of different [[frequency]] and [[amplitude]] that swells and decays over time due to [[modulation]] from an [[ADSR envelope]] or [[low frequency oscillator]]. Additive synthesis most directly generates sound by adding the output of multiple sine wave generators. Alternative implementations may use pre-computed [[Wavetable synthesis|wavetables]] or the inverse [[fast Fourier transform]]. == Explanation == The sounds that are heard in everyday life are not characterized by a single [[frequency]]. Instead, they consist of a sum of pure sine frequencies, each one at a different [[amplitude]]. When humans hear these frequencies simultaneously, we can recognize the sound. 
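The most direct implementation mentioned above, summing the output of multiple sine wave generators, can be sketched in a few lines of NumPy (the helper name `additive_tone` and the parameter values are illustrative only, not from any cited source):

```python
import numpy as np

def additive_tone(freqs, amps, duration=1.0, sr=44100):
    # Sum one sine-wave oscillator per partial: the most direct
    # implementation of additive synthesis.
    t = np.arange(int(duration * sr)) / sr
    return sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(freqs, amps))

# A sawtooth-like tone built from the first five harmonics of 220 Hz,
# with amplitudes falling off as 1/k.
tone = additive_tone([220.0 * k for k in range(1, 6)],
                     [1.0 / k for k in range(1, 6)])
```

Time-varying timbres would additionally scale each oscillator's amplitude by an envelope, as in the equations later in the article.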
This is true for both "non-musical" sounds (e.g. water splashing, leaves rustling, etc.) and for "musical sounds" (e.g. a piano note, a bird's tweet, etc.). This set of parameters (frequencies, their relative amplitudes, and how the relative amplitudes change over time) is encapsulated by the ''[[timbre]]'' of the sound. [[Fourier analysis]] is the technique used to determine these exact timbre parameters from an overall sound signal; conversely, the resulting set of frequencies and amplitudes is called the [[Fourier series]] of the original sound signal. In the case of a musical note, the lowest frequency of its timbre is designated as the sound's [[fundamental frequency]]. For simplicity, we often say that the note is playing at that fundamental frequency (e.g. "[[middle C]] is 261.6 Hz"),<ref>{{Cite web|url=http://www.liutaiomottola.com/formulae/freqtab.htm|title=Table of Musical Notes and Their Frequencies and Wavelengths|last=Mottola|first=Liutaio|date=31 May 2017}}</ref> even though the sound of that note consists of many other frequencies as well. The set of the remaining frequencies is called the [[overtone]]s (or the [[harmonic]]s, if their frequencies are integer multiples of the fundamental frequency) of the sound.<ref>{{Cite web|url=http://www.physicsclassroom.com/class/sound/Lesson-4/Fundamental-Frequency-and-Harmonics|title=Fundamental Frequency and Harmonics}}</ref> In other words, the fundamental frequency alone is responsible for the pitch of the note, while the overtones define the timbre of the sound. The overtones of a piano playing middle C will be quite different from the overtones of a violin playing the same note; this difference is what allows us to distinguish the sounds of the two instruments. There are even subtle differences in timbre between different versions of the same instrument (for example, an [[upright piano]] vs. a [[Grand Piano|grand piano]]).
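The Fourier-analysis step described above can be illustrated with a toy example: analyzing a one-second signal whose window length makes each FFT bin correspond to exactly 1 Hz recovers the partials' frequencies and amplitudes directly (all values here are made up for illustration):

```python
import numpy as np

sr = 8000
t = np.arange(sr) / sr   # a one-second analysis window, so FFT bin k == k Hz
# A toy "note": a 200 Hz fundamental plus two weaker overtones.
signal = (1.00 * np.sin(2 * np.pi * 200 * t)
          + 0.50 * np.sin(2 * np.pi * 400 * t)
          + 0.25 * np.sin(2 * np.pi * 600 * t))

magnitudes = np.abs(np.fft.rfft(signal)) / (sr / 2)  # recover partial amplitudes
partials = np.flatnonzero(magnitudes > 0.1)          # bins with significant energy
# partials is [200, 400, 600]: exactly the frequencies (and, via magnitudes,
# the amplitudes) that additive synthesis would need to reconstruct the note.
```

Real instrument tones are not bin-aligned like this, which is why practical analysis uses windowing and peak interpolation, but the principle is the same.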
Additive synthesis aims to exploit this property of sound in order to construct timbre from the ground up. By adding together pure frequencies ([[sine wave]]s) of varying frequencies and amplitudes, we can precisely define the timbre of the sound that we want to create. ==Definitions== {{See also|Fourier series|Fourier analysis}} [[File:Additive synthesis.svg|250px|thumb|right|Schematic diagram of additive synthesis. The inputs to the oscillators are frequencies <math>f_k</math> and amplitudes <math>r_k</math>.]] Harmonic additive synthesis is closely related to the concept of a [[Fourier series]] which is a way of expressing a [[periodic function]] as the sum of [[sine wave|sinusoidal]] functions with [[Frequency|frequencies]] equal to integer multiples of a common [[fundamental frequency]]. These sinusoids are called [[harmonic]]s, [[overtone]]s, or generally, [[Harmonic series (music)#Partial|partials]]. In general, a Fourier series contains an infinite number of sinusoidal components, with no upper limit to the frequency of the sinusoidal functions and includes a [[direct current|DC]] component (one with frequency of 0 [[Hertz|Hz]]). [[Equal-loudness contour|Frequencies outside of the human audible range]] can be omitted in additive synthesis. As a result, only a finite number of sinusoidal terms with frequencies that lie within the audible range are modeled in additive synthesis. A waveform or function is said to be [[periodic function|periodic]] if : <math> y(t) = y(t+P) </math> for all <math> t </math> and for some period <math> P </math>. 
The [[Fourier series]] of a periodic function is mathematically expressed as: : <math> \begin{align} y(t) &= \frac{a_0}{2} + \sum_{k=1}^{\infty} \left[ a_k \cos(2 \pi k f_0 t ) - b_k \sin(2 \pi k f_0 t ) \right] \\ &= \frac{a_0}{2} + \sum_{k=1}^{\infty} r_k \cos\left(2 \pi k f_0 t + \phi_k \right) \\ \end{align} </math> where * <math>f_0 = 1/P</math> is the [[fundamental frequency]] of the waveform and is equal to the reciprocal of the period, * <math>a_k = r_k \cos(\phi_k) = 2 f_0 \int_{0}^P y(t) \cos(2 \pi k f_0 t)\, dt, \quad k \ge 0</math> * <math>b_k = r_k \sin(\phi_k) = -2 f_0 \int_{0}^P y(t) \sin(2 \pi k f_0 t)\, dt, \quad k \ge 1</math> * <math>r_k = \sqrt{a_k^2 + b_k^2}</math> is the [[amplitude]] of the <math>k</math>th harmonic, * <math>\phi_k = \operatorname{atan2}(b_k, a_k)</math> is the [[phase (waves)|phase offset]] of the <math>k</math>th harmonic. [[atan2]] is the four-quadrant [[arctangent]] function, Being inaudible, the [[direct current|DC]] component, <math>a_0/2</math>, and all components with frequencies higher than some finite limit, <math>K f_0</math>, are omitted in the following expressions of additive synthesis. ===Harmonic form=== The simplest harmonic additive synthesis can be mathematically expressed as: {{NumBlk|:|<math>y(t) = \sum_{k=1}^{K} r_k \cos\left(2 \pi k f_0 t + \phi_k \right),</math>|{{EquationRef|1}}}} where <math>y(t)</math> is the synthesis output, <math>r_k</math>, <math>k f_0</math>, and <math>\phi_k</math> are the amplitude, frequency, and the phase offset, respectively, of the <math>k</math>th harmonic partial of a total of <math>K</math> harmonic partials, and <math>f_0</math> is the [[fundamental frequency]] of the waveform and the [[Piano key frequencies|frequency of the musical note]]. 
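Equation ({{EquationNote|1}}) translates almost line-for-line into code. The sketch below (the function name `harmonic_synth` is ad hoc) uses odd harmonics with <math>1/k</math> amplitudes and a <math>-\pi/2</math> phase offset, which is the classic Fourier-series approximation of a square wave:

```python
import numpy as np

def harmonic_synth(f0, r, phi, duration=1.0, sr=44100):
    # Eq. (1): y(t) = sum_{k=1}^{K} r_k * cos(2*pi*k*f0*t + phi_k)
    t = np.arange(int(duration * sr)) / sr
    y = np.zeros_like(t)
    for k, (r_k, phi_k) in enumerate(zip(r, phi), start=1):
        y += r_k * np.cos(2 * np.pi * k * f0 * t + phi_k)
    return y

# Odd harmonics with amplitude 4/(pi*k) and phase -pi/2 (turning each cosine
# into a sine) approximate a square wave at 110 Hz.
K = 9
r = [4 / (np.pi * k) if k % 2 else 0.0 for k in range(1, K + 1)]
phi = [-np.pi / 2] * K
y = harmonic_synth(110.0, r, phi)
```

Increasing <math>K</math> sharpens the approximation, apart from the Gibbs overshoot near the discontinuities.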
===Time-dependent amplitudes=== {|class=wikitable align=right width=420px |- | [[File:Harmonic additive synthesis spectrum.png|280px]] | <span style="font-size:85%;line-height:130%;">Example of harmonic additive synthesis in which each harmonic has a time-dependent amplitude. The fundamental frequency is 440 Hz.</span> [[File:Harmonic additive synthesis.ogg|noicon|150px]] <span style="font-size:70%;line-height:130%;font-style:italic;">Problems listening to this file? See [[Media help]]</span> |} More generally, the amplitude of each harmonic can be prescribed as a function of time, <math>r_k(t)</math>, in which case the synthesis output is {{NumBlk|:|<math>y(t) = \sum_{k=1}^{K} r_k(t) \cos\left(2 \pi k f_0 t + \phi_k \right)</math>.|{{EquationRef|2}}}} Each [[Envelope (waves)|envelope]] <math>r_k(t)\,</math> should vary slowly relative to the frequency spacing between adjacent sinusoids. The [[bandwidth (signal processing)|bandwidth]] of <math>r_k(t)</math> should be significantly less than <math>f_0</math>. ===Inharmonic form=== Additive synthesis can also produce [[Inharmonicity|inharmonic]] sounds (which are [[aperiodic]] waveforms) in which the individual overtones need not have frequencies that are integer multiples of some common fundamental frequency.<ref name=smith05> {{Cite book | last1 = Smith III | first1 = Julius O. 
| last2 = Serra | first2 = Xavier | year = 2005 | chapter = Additive Synthesis | chapter-url = https://ccrma.stanford.edu/~jos/parshl/Additive_Synthesis.html | title = PARSHL: An Analysis/Synthesis Program for Non-Harmonic Sounds Based on a Sinusoidal Representation | url = https://ccrma.stanford.edu/~jos/parshl/ | series = Proceedings of the International Computer Music Conference (ICMC-87, Tokyo), Computer Music Association, 1987 | publisher = [[Center for Computer Research in Music and Acoustics|CCRMA]], Department of Music, Stanford University | access-date = 11 January 2015 }} ([https://ccrma.stanford.edu/STANM/stanms/stanm43/stanm43.pdf online reprint])</ref><ref name=smith11> {{Cite book | last = Smith III | first = Julius O. | year = 2011 | chapter = Additive Synthesis (Early Sinusoidal Modeling) | chapter-url = https://ccrma.stanford.edu/~jos/sasp/Additive_Synthesis_Early_Sinusoidal.html | title = Spectral Audio Signal Processing | url = https://ccrma.stanford.edu/~jos/sasp/ | publisher = [[Center for Computer Research in Music and Acoustics|CCRMA]], Department of Music, Stanford University | isbn = 978-0-9745607-3-1 | access-date = 9 January 2012 }}</ref> While many conventional musical instruments have harmonic partials (e.g. an [[oboe]]), some have inharmonic partials (e.g. [[bell (instrument)|bells]]). Inharmonic additive synthesis can be described as : <math>y(t) = \sum_{k=1}^{K} r_k(t) \cos\left(2 \pi f_k t + \phi_k \right),</math> where <math>f_k</math> is the constant frequency of <math>k</math>th partial. {|class=wikitable align=right width=420px |- | [[File:Inharmonic additive synthesis spectrum.png|280px]] | <span style="font-size:85%;line-height:130%;">Example of inharmonic additive synthesis in which both the amplitude and frequency of each partial are time-dependent.</span> [[File:Inharmonic additive synthesis.ogg|noicon|150px]] <span style="font-size:70%;line-height:130%;font-style:italic;">Problems listening to this file? 
See [[Media help]]</span> |} ===Time-dependent frequencies=== In the general case, the [[instantaneous frequency]] of a sinusoid is the [[derivative]] (with respect to time) of the argument of the sine or cosine function. If this frequency is represented in [[hertz]], rather than in [[angular frequency]] form, then this derivative is divided by <math>2 \pi</math>. This is the case whether the partial is harmonic or inharmonic and whether its frequency is constant or time-varying. In the most general form, the frequency of each non-harmonic partial is a non-negative function of time, <math>f_k(t)</math>, yielding {{NumBlk|:|<math>y(t) = \sum_{k=1}^{K} r_k(t) \cos\left(2 \pi \int_0^t f_k(u)\ du + \phi_k \right).</math>|{{EquationRef|3}}}} ===Broader definitions=== ''Additive synthesis'' more broadly may mean sound synthesis techniques that sum simple elements to create more complex timbres, even when the elements are not sine waves.<ref> {{cite book | last = Roads | first = Curtis | author-link = Curtis Roads | year = 1995 | title = The Computer Music Tutorial | url=https://archive.org/details/computermusictut00road | url-access=limited | publisher = [[MIT Press]] | isbn = 978-0-262-68082-0 | page = [https://archive.org/details/computermusictut00road/page/n152 134] }}</ref><ref name="MooreFoundationsCM"> {{cite book | last = Moore | first = F. Richard | year = 1995 | title = Foundations of Computer Music | publisher = [[Prentice Hall]] | isbn = 978-0-262-68082-0 | page = 16 }} </ref> For example, F. Richard Moore listed additive synthesis as one of the "four basic categories" of sound synthesis alongside [[subtractive synthesis]], nonlinear synthesis, and [[physical modelling synthesis|physical modeling]].<ref name="MooreFoundationsCM"/> In this broad sense, [[pipe organ]]s, which also have pipes producing non-sinusoidal waveforms, can be considered as a variant form of additive synthesizers. 
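One partial of equation ({{EquationNote|3}}) can be sketched numerically by replacing the phase integral <math>\int_0^t f_k(u)\,du</math> with a running sum of the sampled frequency function (the helper name `partial` and the envelope/glide shapes are illustrative assumptions, not from the cited sources):

```python
import numpy as np

def partial(r, f, phi0=0.0, sr=44100):
    # One term of Eq. (3): amplitude envelope r(t) times a cosine whose
    # phase is 2*pi times the running integral of the frequency f(t),
    # approximated here by a cumulative sum of samples.
    phase = 2 * np.pi * np.cumsum(f) / sr + phi0
    return r * np.cos(phase)

sr = 44100
n = sr                                   # one second of audio
env = np.exp(-3.0 * np.arange(n) / sr)   # exponentially decaying amplitude
glide = np.linspace(440.0, 880.0, n)     # frequency sweeps up one octave
y = partial(env, glide, sr=sr)
```

Summing several such calls, one per partial, gives the full synthesis output of equation ({{EquationNote|3}}).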
Summation of [[Principal component analysis|principal components]] and [[Walsh functions]] have also been classified as additive synthesis.<ref> {{cite book | last = Roads | first = Curtis | author-link = Curtis Roads | year = 1995 | title = The Computer Music Tutorial | url=https://archive.org/details/computermusictut00road | url-access=limited | publisher = [[MIT Press]] | isbn = 978-0-262-68082-0 | pages = [https://archive.org/details/computermusictut00road/page/n168 150]–153 }}</ref> ==Implementation methods== Modern-day implementations of additive synthesis are mainly digital. (See section ''[[#Discrete-time equations|Discrete-time equations]]'' for the underlying discrete-time theory) ===Oscillator bank synthesis=== Additive synthesis can be implemented using a bank of sinusoidal oscillators, one for each partial.<ref name="JOS_Additive"/><!-- {{cite web | title = Additive Synthesis (Early Sinusoidal Modeling) | author = Julius O. Smith III | url = https://ccrma.stanford.edu/~jos/sasp/Additive_Synthesis_Early_Sinusoidal.html | access-date = 14 January 2012 }}</ref> --> ===Wavetable synthesis=== {{Main|Wavetable synthesis}} In the case of harmonic, quasi-periodic musical tones, [[wavetable synthesis]] can be as general as time-varying additive synthesis, but requires less computation during synthesis.<ref name="Wavetable Synthesis 101"> {{cite web |author = Robert Bristow-Johnson |date = November 1996 |title = Wavetable Synthesis 101, A Fundamental Perspective |url = http://www.musicdsp.org/files/Wavetable-101.pdf |access-date = 21 May 2005 |archive-url = https://web.archive.org/web/20130615202748/http://musicdsp.org/files/Wavetable-101.pdf |archive-date = 15 June 2013 |df = dmy-all }} </ref><ref name="Wavetable Matching Synthesis of Dynamic Instruments with Genetic Algorithms"> {{cite journal | author = Andrew Horner | date = November 1995 | title = Wavetable Matching Synthesis of Dynamic Instruments with Genetic Algorithms | journal = Journal of the Audio 
Engineering Society | volume = 43 | issue = 11 | pages = 916–931 | url = http://www.aes.org/e-lib/browse.cfm?elib=7923 }}</ref> As a result, an efficient implementation of time-varying additive synthesis of harmonic tones can be accomplished by use of ''wavetable synthesis''. ====Group additive synthesis==== Group additive synthesis<ref> {{cite web | author = Julius O. Smith III | title = Group Additive Synthesis | url = https://ccrma.stanford.edu/~jos/sasp/Group_Additive_Synthesis.html | publisher = [[CCRMA]], Stanford University | access-date = 12 May 2011 | archive-url = https://web.archive.org/web/20110606200135/https://ccrma.stanford.edu/~jos/sasp/Group_Additive_Synthesis.html | archive-date = 6 June 2011 | url-status= live}}</ref><ref> {{cite journal | author = P. Kleczkowski | title = Group additive synthesis | journal = [[Computer Music Journal]] | volume = 13 | issue = 1 | pages = 12–20 | year = 1989 | doi=10.2307/3679851 | jstor = 3679851 }}</ref><ref> {{cite book | author = B. Eaglestone and S. Oates | chapter = Analytical tools for group additive synthesis | title = Proceedings of the 1990 International Computer Music Conference, Glasgow | publisher = Computer Music Association | year = 1990 | chapter-url = http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.1990.015 }}</ref> is a method to group partials into harmonic groups (having different fundamental frequencies) and synthesize each group separately with ''wavetable synthesis'' before mixing the results. ===Inverse FFT synthesis=== An inverse [[fast Fourier transform]] can be used to efficiently synthesize frequencies that evenly divide the transform period or "frame". 
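For the bin-aligned case, one frame can be produced with a single inverse FFT: placing a complex value of magnitude <math>A \cdot N/2</math> and angle <math>\phi</math> in bin <math>k</math> of an <math>N</math>-point real spectrum yields the sinusoid <math>A\cos(2\pi k n/N + \phi)</math>. A minimal sketch (the bin numbers and amplitudes are arbitrary illustrations):

```python
import numpy as np

N = 2048          # frame length; bin k corresponds to frequency k * sr / N
sr = 44100

spectrum = np.zeros(N // 2 + 1, dtype=complex)
# Place three partials on exact bins. With numpy's irfft normalization,
# a bin value of amp * N/2 * exp(1j*phase) produces amp * cos(2*pi*k*n/N + phase).
for k, amp, phase in [(10, 1.0, 0.0), (20, 0.5, 0.0), (33, 0.25, 0.0)]:
    spectrum[k] = amp * (N / 2) * np.exp(1j * phase)

frame = np.fft.irfft(spectrum, n=N)   # all three sinusoids from one transform
```

The cost is one FFT per frame regardless of the number of partials, which is the efficiency advantage over a per-oscillator bank; arbitrary (non-bin) frequencies require the overlapping-frame refinements described next.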
By careful consideration of the [[Discrete Fourier transform|DFT]] frequency-domain representation it is also possible to efficiently synthesize sinusoids of arbitrary frequencies using a series of overlapping frames and the inverse [[fast Fourier transform]].<ref name="RodetDepalle_FFTm1"> {{cite journal | last1 = Rodet | first1 = X. | last2 = Depalle | first2 = P. | year = 1992 | title = Spectral Envelopes and Inverse FFT Synthesis | citeseerx = 10.1.1.43.4818 | journal = Proceedings of the 93rd Audio Engineering Society Convention }} </ref> ==Additive analysis/resynthesis== [[Image:Sinusoidal Analysis & Synthesis (McAulay-Quatieri 1988).svg|thumb|350px|Sinusoidal analysis/synthesis system for Sinusoidal Modeling (based on {{harvnb|McAulay|Quatieri|1988|p=161}})<ref name=MQ1988>{{cite journal |last1 = McAulay |first1 = R. J. |last2 = Quatieri |first2 = T. F. |author-link2 = Thomas F. Quatieri |year = 1988 |title = Speech Processing Based on a Sinusoidal Model |url = http://www.ll.mit.edu/publications/journal/pdf/vol01_no2/1.2.3.speechprocessing.pdf |journal = The Lincoln Laboratory Journal |volume = 1 |issue = 2 |pages = 153–167 |access-date = 9 December 2013 |archive-url = https://web.archive.org/web/20120521071601/http://www.ll.mit.edu/publications/journal/pdf/vol01_no2/1.2.3.speechprocessing.pdf |archive-date = 21 May 2012 |df = dmy-all }}</ref>]] It is possible to analyze the frequency components of a recorded sound giving a "sum of sinusoids" representation. This representation can be re-synthesized using additive synthesis. One method of decomposing a sound into time varying sinusoidal partials is [[short-time Fourier transform|short-time Fourier transform (STFT)]]-based McAulay-[[Thomas F. Quatieri|Quatieri]] Analysis.<ref name=MQ1986> {{cite journal | last1 = McAulay | first1 = R. J. | last2 = Quatieri| first2 = T. F. 
| date = Aug 1986 | title = Speech analysis/synthesis based on a sinusoidal representation | journal = IEEE Transactions on Acoustics, Speech, and Signal Processing | volume = 34 | issue = 4 | pages = 744–754 | doi = 10.1109/TASSP.1986.1164910 }}</ref><ref> {{cite web | title = McAulay-Quatieri Method | url = http://www.clear.rice.edu/elec301/Projects02/lorisFor/mqmethod2.html }}</ref> By modifying the sum of sinusoids representation, timbral alterations can be made prior to resynthesis. For example, a harmonic sound could be restructured to sound inharmonic, and vice versa. Sound hybridisation or "morphing" has been implemented by additive resynthesis.<ref name="XSerraPhD"> {{cite thesis | degree = PhD | last = Serra | first = Xavier | date = 1989 | title = A System for Sound Analysis/Transformation/Synthesis based on a Deterministic plus Stochastic Decomposition | url = http://mtg.upf.edu/node/304 | publisher = Stanford University | access-date = 13 January 2012 }}</ref> Additive analysis/resynthesis has been employed in a number of techniques including Sinusoidal Modelling,<ref> {{cite web | last1 = Smith III | first1 = Julius O. | last2 = Serra | first2 = Xavier | title = PARSHL: An Analysis/Synthesis Program for Non-Harmonic Sounds Based on a Sinusoidal Representation | url = https://ccrma.stanford.edu/~jos/parshl/Additive_Synthesis.html | access-date = 9 January 2012 }}</ref> [[Spectral modeling synthesis|Spectral Modelling Synthesis]] (SMS),<ref name="XSerraPhD"/> and the Reassigned Bandwidth-Enhanced Additive Sound Model.<ref> {{cite thesis | degree = PhD | last = Fitz | first = Kelly | date = 1999 | title = The Reassigned Bandwidth-Enhanced Method of Additive Synthesis | publisher = Dept. 
of Electrical and Computer Engineering, University of Illinois Urbana-Champaign | citeseerx = 10.1.1.10.1130 }}</ref> Software that implements additive analysis/resynthesis includes: SPEAR,<ref>[http://www.klingbeil.com/spear/ SPEAR Sinusoidal Partial Editing Analysis and Resynthesis for Mac OS X, MacOS 9 and Windows]</ref> LEMUR, LORIS,<ref>{{Cite web |url=http://www.hakenaudio.com/Loris/ |title=Loris Software for Sound Modeling, Morphing, and Manipulation |access-date=13 January 2012 |archive-url=https://web.archive.org/web/20120730195624/http://www.hakenaudio.com/Loris/ |archive-date=30 July 2012 |df=dmy-all }}</ref> SMSTools,<ref>[http://mtg.upf.edu/technologies/sms SMSTools application for Windows]</ref> and ARSS.<ref>[http://arss.sourceforge.net/ ARSS: The Analysis & Resynthesis Sound Spectrograph]</ref> ===Products=== {{multiple image |direction=vertical |align=right |width=165 | header = Additive re-synthesis using timbre-frame concatenation: | image1 = Wavesequence.svg | caption1 = Concatenation with crossfades (on Synclavier) | image2 = Vocaloid's phonemes crossfading - en.jpg | caption2 = Concatenation with spectral envelope interpolation (on Vocaloid) }} The New England Digital [[Synclavier]] had a resynthesis feature where samples could be analyzed and converted into "timbre frames", which were part of its additive synthesis engine. The [[Technos acxel]], launched in 1987, utilized the additive analysis/resynthesis model in an [[Additive synthesis#Inverse FFT synthesis|FFT]] implementation. The vocal synthesizer [[Vocaloid]] has also been implemented on the basis of additive analysis/resynthesis: its spectral voice model, called the [[Excitation plus Resonances]] (EpR) model,<ref name=BonadaICMC01> {{cite journal | last1 = Bonada | first1 = J. | last2 = Celma | first2 = O. | last3 = Loscos | first3 = A. | last4 = Ortola | first4 = J.|first5=X. |last5=Serra |first6=Y. |last6=Yoshioka |first7=H. |last7=Kayama |first8=Y. |last8=Hisaminato |first9=H.
|last9=Kenmochi | year = 2001 | title = Singing voice synthesis combining Excitation plus Resonance and Sinusoidal plus Residual Models | periodical = Proc. Of ICMC | citeseerx = 10.1.1.18.6258 }} ([http://mtg.upf.edu/files/publications/icmc2001-celma.pdf PDF])</ref><ref> {{cite thesis | degree = PhD | last = Loscos | first = A. | year = 2007 | title = Spectral processing of the singing voice | location = Barcelona, Spain | publisher = Pompeu Fabra University | hdl= 10803/7542 }} ([http://www.tdx.cat/bitstream/handle/10803/7542/talm.pdf?sequence=1 PDF]).<br/> See "''<!-- 2.4.2.5 -->Excitation plus resonances voice model''" (p. 51) </ref> is extended based on Spectral Modeling Synthesis (SMS), and its [[Diphone synthesis|diphone]] [[concatenative synthesis]] is processed using ''spectral peak processing'' (SPP)<ref>{{harvnb|Loscos|2007|p=44}}, "''<!-- 2.4.2.2 -->Spectral peak processing"''</ref> technique similar to modified [[phase-locked vocoder]]<ref>{{harvnb|Loscos|2007|p=44}}, "''<!-- 2.4.2.1.2 -->Phase locked vocoder''"</ref> (an improved [[phase vocoder]] for formant processing).<ref name=BonadaSMAC03> {{cite journal | last1 = Bonada | first1 = Jordi | last2 = Loscos | first2 = Alex | year = 2003 | title = Sample-based singing voice synthesizer by spectral concatenation: 6. Concatenating Samples | url = http://mtg.upf.edu/node/322 | periodical = Proc. of <!-- the Stockholm Music Acoustics Conference --> SMAC 03 | pages = 439–442 }}</ref> Using these techniques, spectral components (''[[formant]]s'') consisting of purely harmonic partials can be appropriately transformed into desired form for sound modeling, and sequence of short samples (''diphones'' or ''[[phoneme]]s'') constituting desired phrase, can be smoothly connected by interpolating matched partials and formant peaks, respectively, in the inserted transition region between different samples. 
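The analysis/resynthesis loop underlying these systems can be sketched in a deliberately simplified form: take a short-time Fourier transform, keep only the strongest spectral peaks in each frame, and drive a small oscillator bank from them. This is NOT the McAulay–Quatieri method itself (which tracks partials across frames and interpolates amplitude and phase); it is a toy illustration with an ad hoc function name, and it assumes a float input signal:

```python
import numpy as np

def analyze_resynthesize(x, sr, frame=1024, hop=512, n_peaks=5):
    # Toy STFT analysis/resynthesis: per frame, keep the n_peaks strongest
    # bins and resynthesize them as windowed cosines, overlap-added.
    window = np.hanning(frame)
    y = np.zeros_like(x)
    t = np.arange(frame) / sr
    for start in range(0, len(x) - frame, hop):
        spec = np.fft.rfft(x[start:start + frame] * window)
        mags = np.abs(spec)
        out = np.zeros(frame)
        for k in np.argsort(mags)[-n_peaks:]:      # strongest partials
            amp = 2 * mags[k] / window.sum()       # undo the windowing gain
            freq = k * sr / frame                  # bin center frequency
            out += amp * np.cos(2 * np.pi * freq * t + np.angle(spec[k]))
        y[start:start + frame] += out * window     # overlap-add
    return y
```

Timbral alterations of the kind described in this section amount to editing the `(amp, freq, phase)` triples between the analysis and resynthesis steps, e.g. detuning the bin frequencies to make a harmonic sound inharmonic.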
(See also [[Dynamic timbres]]) ==Applications== ===Musical instruments=== {{main|Synthesizer|Electronic musical instrument|Software synthesizer}} Additive synthesis is used in electronic musical instruments. It is the principal sound generation technique used by [[Eminent BV|Eminent]] organs. ===Speech synthesis=== {{main|Speech synthesis}} In [[linguistics]] research, harmonic additive synthesis was used in the 1950s to play back modified and synthetic speech spectrograms.<ref name="cooper1951" /> Later, in the early 1980s, listening tests were carried out on synthetic speech stripped of acoustic cues to assess their significance. Time-varying [[formant]] frequencies and amplitudes derived by [[linear predictive coding]] were synthesized additively as pure tone whistles. This method is called [[sinewave synthesis]].<ref name=remez81> {{cite journal | last = Remez | first = R.E. |author2= Rubin, P.E.|author3= Pisoni, D.B.|author4= Carrell, T.D. | s2cid = 13039853 | year = 1981 | title = Speech perception without traditional speech cues | journal = Science | volume = 212 | issue = 4497 | pages = 947–950 | doi=10.1126/science.7233191 | pmid=7233191 | bibcode = 1981Sci...212..947R }}</ref><ref name=rubin80>{{cite journal | last = Rubin | first = P.E. | year = 1980 | title = Sinewave Synthesis Instruction Manual (VAX) | url = http://www.haskins.yale.edu/featured/sws/SWSmanual.pdf | journal = Internal Memorandum | publisher = Haskins Laboratories, New Haven, CT | access-date = 27 December 2011 | archive-date = 29 August 2021 | archive-url = https://web.archive.org/web/20210829131739/http://www.haskins.yale.edu/featured/sws/SWSmanual.pdf | url-status = dead }}</ref> Also the composite sinusoidal modeling (CSM)<ref name=sagayama79a> {{citation | last1 = Sagayama | first1 = S. | author-link1 = :ja:嵯峨山茂樹 | last2 = Itakura | first2 = F. 
| author-link2 = Fumitada Itakura | year = 1979 |script-title=ja:複合正弦波による音声合成 | trans-title = Speech Synthesis by Composite Sinusoidal Wave | periodical = Speech Committee of Acoustical Society of Japan | id = S79-39 | publication-date = October 1979 }}</ref><ref name=sagayama79b> {{cite conference | last1 = Sagayama | first1 = S. | last2 = Itakura | first2 = F. | date = October 1979 |script-chapter=ja:複合正弦波による簡易な音声合成法 |trans-chapter=Simple Speech Synthesis method by Composite Sinusoidal Wave |title=Proceedings of Acoustical Society of Japan, Autumn Meeting | volume = 3-2-3 | pages = 557–558 }} </ref> used on a singing [[speech synthesis]] feature on the [[Yamaha CX5M]] (1984), is known to use a similar approach which was independently developed during 1966–1979.<ref name=sagayama86> {{cite book | last1 = Sagayama | first1 = S. | title = ICASSP '86. IEEE International Conference on Acoustics, Speech, and Signal Processing | last2 = Itakura | first2 = F. | year = 1986 | author-link2 = Fumitada Itakura | chapter = Duality theory of composite sinusoidal modeling and linear prediction | publisher = Acoustics, Speech, and Signal Processing, IEEE International Conference on ICASSP '86. | volume = 11 | pages = 1261–1264 | publication-date = April 1986 | doi = 10.1109/ICASSP.1986.1168815 | s2cid = 122814777 }}</ref><ref name=itakura04> {{cite journal | last = Itakura | first = F. | author-link = Fumitada Itakura | year = 2004 | title = Linear Statistical Modeling of Speech and its Applications -- Over 36-year history of LPC -- | url = http://www.icacommission.org/Proceedings/ICA2004Kyoto/pdf/We3.D.pdf | periodical = Proceedings of the 18th International Congress on Acoustics (ICA 2004), We3.D, Kyoto, Japan, Apr. 2004. | volume = 3 | pages = III–2077–2082 | publication-date = April 2004 | quote = 6. 
Composite Sinusoidal Modeling(CSM) In 1975, Itakura proposed the line spectrum representation (LSR) concept and its algorithm to obtain a set of parameters for new speech spectrum representation. Independently from this, Sagayama developed a composite sinusoidal modeling (CSM) concept which is equivalent to LSR but give a quite different formulation, solving algorithm and synthesis scheme. Sagayama clarified the duality of LPC and CSM and provided the unified view covering LPC, PARCOR, LSR, LSP and CSM, CSM is not only a new concept of speech spectrum analysis but also a key idea to understand the linear prediction from a unified point of view. ... }} </ref> These methods are characterized by extraction and recomposition of a set of significant spectral peaks corresponding to the several resonance modes occurring in the oral cavity and nasal cavity, in a viewpoint of [[acoustics]]. This principle was also utilized on a [[physical modeling synthesis]] method, called [[modal synthesis]].<ref name=adrien1991> {{cite book |last = Adrien |first = Jean-Marie |chapter = The missing link: modal synthesis |chapter-url = http://dl.acm.org/citation.cfm?id=131158 |editor= Giovanni de Poli |editor2=Aldo Piccialli |editor3=Curtis Roads |editor3-link=Curtis Roads |title = Representations of Musical Signals |url = https://archive.org/details/representationso0000unse_k7n4/page/269 |publisher = [[MIT Press]] |location = Cambridge, MA |date = 1991 |isbn = 978-0-262-04113-3 |pages = [https://archive.org/details/representationso0000unse_k7n4/page/269 269–298] |url-access = registration }} </ref><ref name=morrison&adrien1993> {{cite journal | last1 = Morrison | first1 = Joseph Derek (IRCAM) | last2 = Adrien | first2 = Jean-Marie | title = MOSAIC: A Framework for Modal Synthesis | journal = [[Computer Music Journal]] | volume = 17 | issue = 1 | publication-date = <!-- Spring, -->1993 | pages = 45–56 | doi=10.2307/3680569 | jstor = 3680569 | year = 1993 }} </ref><ref name=bilbao2009> 
{{citation | last = Bilbao | first = Stefan | chapter = Modal Synthesis | chapter-url= https://ccrma.stanford.edu/~bilbao/booktop/node14.html | title = Numerical Sound Synthesis: Finite Difference Schemes and Simulation in Musical Acoustics | publisher = John Wiley and Sons | location = Chichester, UK | date = October 2009 | isbn = 978-0-470-51046-9 | quote = ''A different approach, with a long history of use in physical modeling sound synthesis, is based on a frequency-domain, or modal description of vibration of objects of potentially complex geometry. Modal synthesis [1,148], as it is called, is appealing, in that the complex dynamic behaviour of a vibrating object may be decomposed into contributions from a set of modes (the spatial forms of which are eigenfunctions of the particular problem at hand, and are dependent on boundary conditions), each of which oscillates at a single complex frequency. ...<!-- (Generally, for realvalued problems, these complex frequencies will occur in complex conjugate pairs, and the ``mode" may be considered to be the pair of such eigenfunctions and frequencies.) -->'' }} (See also [http://www2.ph.ed.ac.uk/~sbilbao/nsstop.html companion page]) </ref><ref name=doel&pai2003> {{cite journal | last1 = Doel | first1 = Kees van den | last2 = Pai | first2 = Dinesh K. | title = Modal Synthesis For Vibrating Object | url = http://www.cs.ubc.ca/~kvdoel/publications/modalpaper.pdf <!-- | doi = 10.1.1.117.2576 --> | editor-last = Greenebaum | editor-first = K. | journal = Audio Anecdotes | publisher = AK Peter | location = Natick, MA | date = 2003 | quote = ''When a solid object is struck, scraped, or engages in other external interactions, the forces at the contact point causes deformations to propagate through the body, causing its outer surfaces to vibrate and emit sound waves. ... 
A good physically motivated synthesis model for objects like this is modal synthesis ...<!-- (Wawrzynek, 1989; Gaver, 1993; Morrison & Adrien, 1993; Cook, 1996; Doel & Pai, 1996; Doel, Kry, & Pai, 2001; O'Brien, Chen, & Gatchalian, 2002; Doel, Pai, Adam, Kortchmar, & Pichora-Fuller, 2002),--> where a vibrating object is modeled by a bank of damped harmonic oscillators which are excited by an external stimulus.'' }} </ref> ==History== {{multiple image |direction=horizontal |header = [[Lord Kelvin]]'s [[Tide-predicting machine]] |caption1 = Harmonic synthesizer |image1 = DSCN1739-thomson-tide-machine.jpg |width1=109 |caption2 = [[Differential analyser|Harmonic analyzer]] |image2 = Harmonic analyser.jpg |width2=224 }} [[Harmonic analysis]] was discovered by [[Joseph Fourier]],<ref name="prestini2004">{{Cite book |last=Prestini |first=Elena |url=https://books.google.com/books?id=fye--TBu4T0C |title=The Evolution of Applied Harmonic Analysis: Models of the Real World |publisher=Birkhäuser Boston |others=trans. |year=2004 |isbn=978-0-8176-4125-2 |location=New York, USA |pages=114–115 |access-date=6 February 2012 |orig-date=Rev. ed of: Applicazioni dell'analisi armonica. Milan: Ulrico Hoepli, 1996}}</ref> who published an extensive treatise of his research in the context of [[heat transfer]] in 1822.<ref>{{Cite book |last=Fourier |first=Jean Baptiste Joseph |author-link=Joseph Fourier |url=https://archive.org/details/thorieanalytiqu00fourgoog |title=Théorie analytique de la chaleur |publisher=Chez Firmin Didot, père et fils |year=1822 |isbn=9782876470460 |location=Paris, France |language=fr |trans-title=The Analytical Theory of Heat}}</ref> The theory found an early application in [[Theory of tides#Harmonic analysis|prediction of tides]]. Around 1876,<ref name="miller1916" /> William Thomson (later ennobled as [[Lord Kelvin]]) constructed a mechanical [[Tide-predicting machine|tide predictor]]. 
It consisted of a ''harmonic analyzer'' and a ''harmonic synthesizer'', as they were already called in the 19th century.<ref name="philmag1875">{{Cite journal |year=1875 |journal=The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science |publisher=Taylor & Francis |volume=49 |page=490}}{{failed verification|date=July 2024}}</ref><ref name="thomson1878">{{Cite journal |last=Thomson |first=Sir W. |year=1878 |title=Harmonic analyzer |url=https://zenodo.org/record/1432065 |journal=Proceedings of the Royal Society of London |publisher=Taylor and Francis |volume=27 |issue=185–189 |pages=371–373 |doi=10.1098/rspl.1878.0062 |jstor=113690 |doi-access=free}}</ref> The analysis of tide measurements was done using [[James Thomson (engineer)|James Thomson]]'s ''[[Differential analyser|integrating machine]]''. The resulting [[Fourier coefficient]]s were input into the synthesizer, which then used a system of cords and pulleys to generate and sum harmonic sinusoidal partials for the prediction of future tides. In 1910, a similar machine was built for the analysis of periodic waveforms of sound.<ref name="cahan1993" /> The synthesizer drew a graph of the combined waveform, which was used chiefly for visual validation of the analysis.<ref name="cahan1993" /> {{multiple image |direction=horizontal |caption1 = [[Helmholtz resonator]] |image1 = Helmholtz_Resonator.png |width1=100 |caption2 = Tone generator utilizing it <!-- utilizing electromagnetic vibrator, tuning fork, and Helmholtz resonator as amplifier --> |image2 = Helmholtz resonator 2.jpg |width2=148 }} [[Georg Ohm]] applied Fourier's theory to sound in 1843.
This line of work was greatly advanced by [[Hermann von Helmholtz]], who published eight years' worth of research in 1863.<ref name="helmholtz1863">{{Cite book |last=Helmholtz, von |first=Hermann |url=http://vlp.mpiwg-berlin.mpg.de/library/data/lit3483/index_html?pn=1&ws=1.5 |title=Die Lehre von den Tonempfindungen als physiologische Grundlage für die Theorie der Musik |publisher=Leopold Voss |year=1863 |edition=1st |location=Leipzig |pages=v |language=de |trans-title=On the sensations of tone as a physiological basis for the theory of music}}</ref> Helmholtz believed that the psychological perception of tone color is subject to learning, while hearing in the sensory sense is purely physiological.<ref name="christensen2002">{{Cite book |last=Christensen |first=Thomas Street |url=https://books.google.com/books?id=ioa9uW2t7AQC |title=The Cambridge History of Western Music |publisher=Cambridge University Press |year=2002 |isbn=978-0-521-62371-1 |location=Cambridge, United Kingdom |pages=251, 258}}</ref> He supported the idea that perception of sound derives from signals from nerve cells of the basilar membrane and that the elastic appendages of these cells are sympathetically vibrated by pure sinusoidal tones of appropriate frequencies.<ref name="cahan1993">{{Cite book |last=Cahan |first=David |url=https://books.google.com/books?id=lfdJNRgzKyUC |title=Hermann von Helmholtz and the foundations of nineteenth-century science |publisher=University of California Press |year=1993 |isbn=978-0-520-08334-9 |editor-last=Cahan |editor-first=David |location=Berkeley and Los Angeles, USA |pages=110–114, 285–286}}</ref> Helmholtz agreed with the finding of [[Ernst Chladni]] from 1787 that certain sound sources have inharmonic vibration modes.<ref name="christensen2002" /> {{multiple image |direction=horizontal |header = [[Rudolph Koenig]]'s sound analyzer and synthesizer |caption1 = sound synthesizer |image1 = Synthesizer after Helmholtz by Koenig 1865.jpg |width1=231 |caption2 =
sound analyzer |image2 = Koenig - klankanalysator purchased in 1996.jpg |width2=102 }} In Helmholtz's time, [[electronic amplifier|electronic amplification]] was unavailable. For synthesis of tones with harmonic partials, Helmholtz built an electrically [[Excitation (magnetic)|excited]] array of [[tuning fork]]s and acoustic [[Helmholtz resonator|resonance chambers]] that allowed adjustment of the amplitudes of the partials.<ref name="helmholtz1875" /> Built at least as early as 1862,<ref name="helmholtz1875" /> these were in turn refined by [[Rudolph Koenig]], who demonstrated his own setup in 1872.<ref name="helmholtz1875">{{Cite book |last=von Helmholtz |first=Hermann |url=https://archive.org/details/onsensationston00helmgoog |title=On the sensations of tone as a physiological basis for the theory of music |publisher=Longmans, Green, and co. |year=1875 |location=London, United Kingdom |pages=xii, 175–179}}</ref> For harmonic synthesis, Koenig also built a large apparatus based on his ''wave siren''.
It was pneumatic and utilized cut-out [[tonewheel]]s, and was criticized for the low purity of its partial tones.<ref name="miller1916" /> The [[tibia pipe]]s of [[pipe organ]]s also have nearly sinusoidal waveforms and can be combined in the manner of additive synthesis.<ref name="miller1916">{{Cite book |last=Miller |first=Dayton Clarence |author-link=Dayton Miller |url=https://archive.org/details/scienceofmusical028670mbp |title=The Science of Musical Sounds |publisher=The Macmillan Company |year=1926 |location=New York |pages=[https://archive.org/details/scienceofmusical028670mbp/page/n127 110], 244–248 |orig-date=1916}}</ref> In 1938, with significant new supporting evidence,<ref>{{Cite book |last=Russell |first=George Oscar |author-link=George Oscar Russell |url=https://archive.org/details/yearbookcarne35193536carn |title=Year book - Carnegie Institution of Washington (1936) |publisher=Carnegie Institution of Washington |year=1936 |series=Carnegie Institution of Washington: Year Book |volume=35 |location=Washington |pages=[https://archive.org/details/yearbookcarne35193536carn/page/359 359]–363}}</ref> it was reported on the pages of [[Popular Science Monthly]] that the human vocal cords function like a fire siren to produce a harmonic-rich tone, which is then filtered by the vocal tract to produce different vowel tones.<ref>{{Cite journal |last=Lodge |first=John E. |date=April 1938 |editor-last=Brown |editor-first=Raymond J. |title=Odd Laboratory Tests Show Us How We Speak: Using X Rays, Fast Movie Cameras, and Cathode-Ray Tubes, Scientists Are Learning New Facts About the Human Voice and Developing Teaching Methods To Make Us Better Talkers |url=https://books.google.com/books?id=wigDAAAAMBAJ&pg=PA32 |journal=Popular Science Monthly |location=New York, USA |publisher=Popular Science Publishing |volume=132 |issue=4 |pages=32–33}}</ref> By that time, the additive Hammond organ was already on the market.
Most early electronic organ makers thought it too expensive to manufacture the plurality of oscillators required by additive organs, and began instead to build [[subtractive synthesis|subtractive]] ones.<ref name="comerford1993">{{Cite journal |last=Comerford |first=P. |year=1993 |title=Simulating an Organ with Additive Synthesis |journal=Computer Music Journal |volume=17 |issue=2 |pages=55–65 |doi=10.2307/3680869 |jstor=3680869}}</ref> In a 1940 [[Institute of Radio Engineers]] meeting, the head field engineer of Hammond elaborated on the company's new ''Novachord'' as having a ''"subtractive system"'' in contrast to the original Hammond organ in which ''"the final tones were built up by combining sound waves"''.<ref>{{Cite journal |year=1940 |title=Institute News and Radio Notes |journal=Proceedings of the IRE |volume=28 |issue=10 |pages=487–494 |doi=10.1109/JRPROC.1940.228904}}</ref> Alan Douglas used the qualifiers ''additive'' and ''subtractive'' to describe different types of electronic organs in a 1948 paper presented to the [[Royal Musical Association]].<ref name="Douglas1948">{{Cite journal |last=Douglas |first=A. 
|year=1948 |title=Electrotonic Music |journal=Proceedings of the Royal Musical Association |volume=75 |pages=1–12 |doi=10.1093/jrma/75.1.1}}</ref> <!--Also, in the 1968 edition of his 1947 book ''The Electronic Musical Instrument Manual'', in the section ''Production and Mixing of Electrical Oscillations'' a distinction is made between ''additive tone-forming'' and ''subtractive tone-forming''.--> The contemporary wording ''additive synthesis'' and ''subtractive synthesis'' can be found in his 1957 book ''The electrical production of music'', in which he categorically lists three methods of forming musical tone-colours, in sections titled ''Additive synthesis'', ''Subtractive synthesis'', and ''Other forms of combinations''.<ref name="douglas1957">{{Cite book |last=Douglas |first=Alan Lockhart Monteith |url=https://archive.org/details/electricalproduc00doug |title=The Electrical Production of Music |publisher=Macdonald |year=1957 |location=London, UK |pages=[https://archive.org/details/electricalproduc00doug/page/140 140], 142 |url-access=limited}}</ref> A typical modern additive synthesizer produces its output as an [[electrical]] [[analog signal]] or as [[digital audio]], such as in the case of [[software synthesizers]], which became popular around the year 2000.<ref name="pejrolo2007">{{Cite book |last=Pejrolo |first=Andrea |title=Acoustic and MIDI orchestration for the contemporary composer |last2=DeRosa |first2=Rich |publisher=Elsevier |year=2007 |location=Oxford, UK |pages=53–54}}</ref> === Timeline === The following is a timeline of historically and technologically notable analog and digital synthesizers and devices implementing additive synthesis. {| class="wikitable" width="100%" ! width="50" | Research implementation or publication ! width="50" | Commercially available ! width="100" class="unsortable" | Company or institution ! width="50" class="unsortable" | Synthesizer or synthesis device ! class="unsortable" | Description !
width="85" class="unsortable" | Audio samples |- | 1900<ref name="weidenaar1995">{{Cite book |last=Weidenaar |first=Reynold |url=https://archive.org/details/bub_gb_Gr2kq-598-YC |title=Magic Music from the Telharmonium |publisher=Scarecrow Press |year=1995 |isbn=978-0-8108-2692-2 |location=Lanham, MD}}</ref> | 1906<ref name="weidenaar1995" /> | New England Electric Music Company | [[Telharmonium]] | The first polyphonic, touch-sensitive music synthesizer.<ref name="moog1977">{{Cite journal |last=Moog, Robert A. |date=October–November 1977 |title=Electronic Music |journal=Journal of the Audio Engineering Society |volume=25 |issue=10/11 |page=856}}</ref> Implemented sinusoidal additive synthesis using [[tonewheel]]s and [[alternator]]s. Invented by [[Thaddeus Cahill]]. | ''no known recordings''<ref name="weidenaar1995" /> |- | 1933<ref name="harvey2011">{{Cite web |last=Olsen |first=Harvey |date=14 December 2011 |editor-last=Brown, Darren T. |title=Leslie Speakers and Hammond organs: Rumors, Myths, Facts, and Lore |url=http://www.hammond-organ.com/History/hammond_lore.htm |archive-url=https://web.archive.org/web/20120901005950/http://hammond-organ.com/History/hammond_lore.htm |archive-date=1 September 2012 |access-date=20 January 2012 |website=The Hammond Zone |publisher=Hammond Organ in the U.K. |df=dmy-all}}</ref> | 1935<ref name="harvey2011" /> | [[Hammond Organ|Hammond Organ Company]] | [[Hammond Organ]] | An electronic additive synthesizer that was commercially more successful than the Telharmonium.<ref name="moog1977" /> Implemented sinusoidal additive synthesis using [[tonewheel]]s and [[Pickup (music technology)#Magnetic pickups|magnetic pickups]]. Invented by [[Laurens Hammond]]. | {{Audio|Hammond Organ - Model A Medley.ogg|Model A}} |- | 1950 or earlier<ref name="cooper1951">{{Cite journal |last=Cooper |first=F. S. |last2=Liberman |first2=A. M. |last3=Borst |first3=J. M.
|date=May 1951 |title=The interconversion of audible and visible patterns as a basis for research in the perception of speech |journal=Proc. Natl. Acad. Sci. U.S.A. |volume=37 |issue=5 |pages=318–25 |bibcode=1951PNAS...37..318C |doi=10.1073/pnas.37.5.318 |pmc=1063363 |pmid=14834156 |doi-access=free}}</ref> | | [[Haskins Laboratories]] | [[Pattern playback|Pattern Playback]] | A speech synthesis system that controlled amplitudes of harmonic partials by a spectrogram that was either hand-drawn or an analysis result. The partials were generated by a multi-track optical [[tonewheel]].<ref name="cooper1951" /> | [http://www.haskins.yale.edu/featured/sentences/ppsentences.html samples] |- | 1958<ref name="holzer2010">{{Cite web |last=Holzer |first=Derek |date=22 February 2010 |title=A brief history of optical synthesis |url=http://www.umatic.nl/tonewheels_historical.html |access-date=13 January 2012}}</ref> | | | [[ANS synthesizer|ANS]] | An additive synthesizer<ref name="vail2002">{{Cite magazine |last=Vail |first=Mark |date=1 November 2002 |title=Eugeniy Murzin's ANS – Additive Russian synthesizer |magazine=[[Keyboard Magazine]] |page=120}}</ref> that played microtonal [[spectrogram]]-like scores using multiple multi-track optical [[tonewheel]]s. Invented by [[Evgeny Murzin]]. 
A similar instrument that utilized electronic oscillators, the ''Oscillator Bank'', and its input device ''Spectrogram'' were realized by [[Hugh Le Caine]] in 1959.<ref name="young1999a">{{Cite web |last=Young |first=Gayle |title=Oscillator Bank (1959) |url=http://www.hughlecaine.com/en/oscbank.html}}</ref><ref name="young1999b">{{Cite web |last=Young |first=Gayle |title=Spectrogram (1959) |url=http://www.hughlecaine.com/en/spectro.html}}</ref> | {{Audio|The ANS Synthesizer playing doodles (live).ogg|1964 model}} |- | 1963<ref name="luce1963">{{Cite thesis |last=Luce |first=David Alan |title=Physical correlates of nonpercussive musical instrument tones |degree=Thesis |publisher=Massachusetts Institute of Technology |year=1963 |hdl=1721.1/27450 |location=Cambridge, Massachusetts, U.S.A.}}</ref> | | [[Massachusetts Institute of Technology|MIT]] | | An off-line system for digital spectral analysis and resynthesis of the attack and steady-state portions of musical instrument timbres by David Luce.<ref name="luce1963" /> | |- | 1964<ref name="beauchamp2009">{{Cite web |last=Beauchamp |first=James |date=17 November 2009 |title=The Harmonic Tone Generator: One of the First Analog Voltage-Controlled Synthesizers |url=http://cmp.music.illinois.edu/beaucham/htg.html |website=Prof. James W. Beauchamp Home Page}}</ref> | | [[University of Illinois]] | [[Experimental Music Studios|Harmonic Tone Generator]] | An electronic, harmonic additive synthesis system invented by James Beauchamp.<ref name="beauchamp2009" /><ref name="beauchamp1966">{{Cite journal |last=Beauchamp |first=James W. 
|date=October 1966 |title=Additive Synthesis of Harmonic Musical Tones |url=http://www.aes.org/e-lib/browse.cfm?elib=1129 |journal=Journal of the Audio Engineering Society |volume=14 |issue=4 |pages=332–342}}</ref> | [https://web.archive.org/web/20131228061841/http://ems.music.uiuc.edu/beaucham/htg_sounds/ samples] ([https://web.archive.org/web/20120322191551/http://ems.music.uiuc.edu/beaucham/htg.html info]) |- | 1974 or earlier<ref name="synthmuseum-RMI" /><ref name="reid2001"/> | 1974<ref name="synthmuseum-RMI">{{Cite web |title=RMI Harmonic Synthesizer |url=http://www.synthmuseum.com/rmi/rmihar01.html |url-status=live |archive-url=https://web.archive.org/web/20110609205852/http://www.synthmuseum.com/rmi/rmihar01.html |archive-date=9 June 2011 |access-date=12 May 2011 |publisher=Synthmuseum.com}}</ref><ref name="reid2001"> {{Cite journal |last=Reid |first=Gordon <!-- date = December 2011 --> |title=PROG SPAWN! The Rise And Fall of Rocky Mount Instruments (Retro) |url=http://www.soundonsound.com/sos/dec01/articles/retrozone1201.asp |journal=Sound on Sound |issue=December 2001 |archive-url=https://web.archive.org/web/20111225162843/http://www.soundonsound.com/sos/dec01/articles/retrozone1201.asp |archive-date=25 December 2011 |access-date=22 January 2012}}</ref> | [[Rocky Mount Instruments|RMI]] | Harmonic Synthesizer | The first synthesizer product that implemented additive<ref name="flint2008"> {{Cite journal |last=Flint |first=Tom <!-- |date=February 2008 --> |title=Jean Michel Jarre: 30 Years of Oxygene |url=http://www.soundonsound.com/sos/feb08/articles/jmjarre.htm |journal=Sound on Sound |issue=February 2008 |access-date=22 January 2012}}</ref> synthesis using digital oscillators.<ref name="synthmuseum-RMI" /><ref name="reid2001" /> The synthesizer also had a time-varying analog filter.<ref name="synthmuseum-RMI" /> RMI was a subsidiary of [[Allen Organ Company]], which had released the first commercial [[Electronic organ#Digital church organs|digital church 
organ]], the ''Allen Computer Organ'', in 1971, using digital technology developed by [[North American Rockwell]].<ref name="fundinguniverse">{{Cite web |title=Allen Organ Company |url=http://www.fundinguniverse.com/company-histories/Allen-Organ-company-company-History.html |website=fundinguniverse.com}}</ref> | [https://soundcloud.com/doombient-music/rmi-harmonic-drones 1] [https://soundcloud.com/doombient-music/rmi-harmonic-demos 2] [https://soundcloud.com/doombient-music/rmi-harmonic-arpeggiator-demo 3] [https://soundcloud.com/doombient-music/rmi-harmonic-intermodulation 4] |- | 1974<ref name="cosimi2009">{{Cite journal |last=Cosimi |first=Enrico |date=20 May 2009 |title=EMS Story - Prima Parte |trans-title=EMS Story - Part One |url=http://audio.accordo.it/articles/2009/05/23828/ems-story-prima-parte.html |journal=Audio Accordo.it |language=it |archive-url=https://web.archive.org/web/20090522022413/http://audio.accordo.it/articles/2009/05/23828/ems-story-prima-parte.html |archive-date=22 May 2009 |access-date=21 January 2012}}</ref> | | [[Electronic Music Studios|EMS]] (London) | Digital Oscillator Bank | A bank of digital oscillators with arbitrary waveforms, individual frequency and amplitude controls,<ref name="hinton2002">{{Cite web |last=Hinton |first=Graham |year=2002 |title=EMS: The Inside Story |url=http://www.ems-synthi.demon.co.uk/emsstory.html |archive-url=https://web.archive.org/web/20130521015858/http://www.ems-synthi.demon.co.uk/emsstory.html |archive-date=21 May 2013 |publisher=Electronic Music Studios (Cornwall)}}</ref> intended for use in analysis-resynthesis with the digital ''Analysing Filter Bank'' (AFB) also constructed at EMS.<ref name="cosimi2009" /><ref name="hinton2002" /> Also known as: ''DOB''. 
| in The New Sound of Music<ref>{{Cite AV media |title=The New Sound of Music |date=1979 |type=TV |publisher=BBC |place=UK}} Includes a demonstration of DOB and AFB.</ref> |- | 1976<ref name="leete1999"> {{Cite journal |last=Leete |first=Norm <!-- |date=April 1999 --> |title=Fairlight Computer – Musical Instrument (Retro) |url=http://www.soundonsound.com/sos/apr99/articles/fairlight.htm |journal=Sound on Sound |issue=April 1999 |access-date=29 January 2012}}</ref> | 1976<ref name="twyman2004">{{Cite thesis |last=Twyman |first=John |title=(inter)facing the music: The history of the Fairlight Computer Musical Instrument |date=1 November 2004 |access-date=29 January 2012 |degree=Bachelor of Science (Honours) |publisher=Unit for the History and Philosophy of Science, University of Sydney |url=http://www.geosci.usyd.edu.au/users/john/thesis/thesis_web.pdf}}</ref> | [[Fairlight (company)|Fairlight]] | [[Fairlight CMI|Qasar M8]] | An all-digital synthesizer that used the [[fast Fourier transform]]<ref name="street2000">{{Cite web |last=Street |first=Rita |date=8 November 2000 |title=Fairlight: A 25-year long fairytale |url=http://www.audiomedia.com/archive/features/uk-1000/uk-1000-fairlight/uk-1000-fairlight.htm |archive-url=https://web.archive.org/web/20031008201831/http://www.audiomedia.com/archive/features/uk-1000/uk-1000-fairlight/uk-1000-fairlight.htm |archive-date=8 October 2003 |access-date=29 January 2012 |website=Audio Media magazine |publisher=IMAS Publishing UK}}</ref> to create samples from interactively drawn amplitude envelopes of harmonics.<ref>{{Cite web |year=1978 |title=Computer Music Journal |url=http://egrefin.free.fr/images/Fairlight/CMJfall78.jpg |access-date=29 January 2012 |format=JPG}}</ref> | [http://anerd.com/fairlight/audioarchives/index.htm samples] |- | 1977<ref name="Leider2004">{{Cite book |last=Leider |first=Colby |title=Digital Audio Workstation |publisher=[[McGraw-Hill]] |year=2004 |page=58 |chapter=The Development of the Modern 
DAW}}</ref> | | [[Bell Labs]] | [[Bell Labs Digital Synthesizer|Digital Synthesizer]] | A [[real-time computing|real-time]], digital additive synthesizer<ref name="Leider2004" /> that has been called the first true digital synthesizer.<ref name="chadabe1997">{{Cite book |last=Joel |first=Chadabe |author-link=Joel Chadabe |url=http://www.pearsonhighered.com/educator/product/Electric-Sound-The-Past-and-Promise-of-Electronic-Music/9780133032314.page |title=Electric Sound |publisher=Prentice Hall |year=1997 |isbn=978-0-13-303231-4 |location=Upper Saddle River, N.J., U.S.A. |pages=177–178, 186}}</ref> Also known as: ''Alles Machine'', ''Alice''. | [http://retiary.org/ls/music/realaudio/ob_sys/05_alles_synth_improv.rm sample] ([http://retiary.org/ls/obsolete_systems/ info]) |- | 1979<ref name="chadabe1997" /> | 1979<ref name="chadabe1997" /> | [[New England Digital]] | [[Synclavier II]] | A commercial digital synthesizer that enabled development of timbre over time by smooth cross-fades between waveforms generated by additive synthesis. | {{Audio|Jon Appleton - Sashasonjon.oga|Jon Appleton - Sashasonjon}} |- | |1996<ref>{{Cite web |title=Kawai K5000 {{!}} Vintage Synth Explorer |url=https://www.vintagesynth.com/kawai/k5000 |access-date=2024-01-21 |website=www.vintagesynth.com}}</ref> |[[Kawai Musical Instruments|Kawai]] |[[Kawai K5000|K5000]] |A commercial digital synthesizer workstation capable of polyphonic, digital additive synthesis of up to 128 sinusoidal waves, as well as combining PCM waves.<ref>{{Cite web |title=Kawai K5000R & K5000S |url=https://www.soundonsound.com/reviews/kawai-k5000r-k5000s |access-date=2024-01-21 |website=www.soundonsound.com}}</ref> | |} ==Discrete-time equations== In digital implementations of additive synthesis, [[discrete signal|discrete-time]] equations are used in place of the continuous-time synthesis equations. A notational convention for discrete-time signals uses brackets, i.e.
<math>y[n]\,</math>, and the argument <math>n\,</math> can take only integer values. If the continuous-time synthesis output <math>y(t)\,</math> is sufficiently [[bandlimited]], with all frequency content below half the [[sampling rate]] <math>f_\mathrm{s}/2\,</math>, it suffices to directly sample the continuous-time expression to get the discrete synthesis equation. The continuous synthesis output can later be [[Nyquist–Shannon sampling theorem|reconstructed]] from the samples using a [[digital-to-analog converter]]. The sampling period is <math>T=1/f_\mathrm{s}\,</math>. Beginning with ({{EquationNote|3}}), : <math>y(t) = \sum_{k=1}^{K} r_k(t) \cos\left(2 \pi \int_0^t f_k(u)\ du + \phi_k \right)</math> and sampling at discrete times <math> t = nT = n/f_\mathrm{s} \,</math> results in :<math> \begin{align} y[n] & = y(nT) = \sum_{k=1}^{K} r_k(nT) \cos\left(2 \pi \int_0^{nT} f_k(u)\ du + \phi_k \right) \\ & = \sum_{k=1}^{K} r_k(nT) \cos\left(2 \pi \sum_{i=1}^{n} \int_{(i-1)T}^{iT} f_k(u)\ du + \phi_k \right) \\ & = \sum_{k=1}^{K} r_k(nT) \cos\left(2 \pi \sum_{i=1}^{n} (T f_k[i]) + \phi_k \right) \\ & = \sum_{k=1}^{K} r_k[n] \cos\left(\frac{2 \pi}{f_\mathrm{s}} \sum_{i=1}^{n} f_k[i] + \phi_k \right) \\ \end{align} </math> where : <math>r_k[n] = r_k(nT) \,</math> is the discrete-time varying amplitude envelope and : <math>f_k[n] = \frac{1}{T} \int_{(n-1)T}^{nT} f_k(t)\ dt \,</math> is the discrete-time [[Finite difference|backward difference]] instantaneous frequency. This is equivalent to : <math> y[n] = \sum_{k=1}^{K} r_k[n] \cos\left( \theta_k[n] \right) </math> where :<math> \begin{align} \theta_k[n] &= \frac{2 \pi}{f_\mathrm{s}} \sum_{i=1}^{n} f_k[i] + \phi_k \\ &= \theta_k[n-1] + \frac{2 \pi}{f_\mathrm{s}} f_k[n] \\ \end{align} </math> for all <math>n>0\,</math><ref name="RodetDepalle_FFTm1"/> and : <math> \theta_k[0] = \phi_k.
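\,</math>

The phase-accumulation recursion above maps directly to code. As an illustrative sketch (not drawn from any cited source; the <code>additive_synth</code> helper and its parameters are hypothetical), the discrete synthesis equation can be realized in a few lines of Python with NumPy, where a cumulative sum accumulates the per-sample phase increments <math>2 \pi f_k / f_\mathrm{s}</math>:

```python
import numpy as np

def additive_synth(fs, num_samples, partials):
    """Illustrative additive synthesis sketch (hypothetical helper).

    partials: list of (amplitude, frequency_hz, initial_phase) tuples,
    held constant over time here for simplicity; time-varying r_k[n]
    and f_k[n] arrays could be substituted directly.
    """
    y = np.zeros(num_samples)
    for amp, freq, phase in partials:
        # theta[n] = phase + (2*pi/fs) * sum_{i=1..n} f, i.e. the recursion
        # theta[n] = theta[n-1] + 2*pi*f/fs, realized with a cumulative sum.
        theta = phase + np.cumsum(np.full(num_samples, 2.0 * np.pi * freq / fs))
        y += amp * np.cos(theta)
    return y
```

Each oscillator needs only its previous phase and its current frequency, which is why the recursive form is preferred in real-time implementations over re-evaluating the full phase sum at every sample.

: <math>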
\,</math> ==See also== * [[Frequency modulation synthesis]] * [[Subtractive synthesis]] * [[Speech synthesis]] * [[Harmonic series (music)]] ==References== {{Reflist}} ==External links== * [http://users.ece.gatech.edu/lanterma/synergy/ Digital Keyboards Synergy] {{Sound synthesis types}} {{DEFAULTSORT:Additive Synthesis}} [[Category:Sound synthesis types]]