Digital waveguide synthesis
[[File:Make_Noise_-_Mysteron,_Erbe-Verb_-_2014_NAMM_Show.jpg | thumb | right | alt=A white panel that is covered with various blue and white dials, wires and empty sockets | A voltage-controlled digital waveguide]] '''Digital waveguide synthesis''' is the [[synthesizer|synthesis]] of [[Audio frequency|audio]] using a digital [[waveguide]]. Digital waveguides are efficient computational models of the physical media through which acoustic waves propagate. For this reason, digital waveguides constitute a major part of most modern [[physical modeling synthesis|physical modeling synthesizers]].

A lossless digital waveguide realizes the discrete form of [[d'Alembert's formula|d'Alembert's solution]] of the one-dimensional [[wave equation]] as the [[superposition principle|superposition]] of a right-going wave and a left-going wave,

: <math>y(m, n) = y^+(m - n) + y^-(m + n),</math>

where <math>y^+</math> is the right-going wave and <math>y^-</math> is the left-going wave. This representation shows that sampling the function <math>y</math> at a given position <math>m</math> and time <math>n</math> merely involves summing two delayed copies of its traveling waves. The traveling waves reflect at boundaries such as the suspension points of vibrating strings or the open or closed ends of tubes, so they travel along closed loops. Digital waveguide models therefore comprise [[digital delay line]]s, closed into loops by recursion, to represent the geometry of the waveguide; [[digital filter]]s to represent the frequency-dependent losses and mild dispersion in the medium; and often [[non-linear]] elements.
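The two-delay-line structure described above can be sketched in a few lines of Python. This is an illustrative toy, not any particular published implementation: all names (<code>pluck_string</code>, <code>loss</code>, <code>pickup</code>) are made up for the example. A string of <code>length</code> samples is modeled with a right-going and a left-going rail, both rigid terminations invert the reflected sample, and losses are lumped into a single <code>loss</code> factor applied once per reflection.

```python
def pluck_string(length, n_samples, loss=0.995):
    """Toy 1-D digital waveguide string: two delay lines carrying the
    right- and left-going traveling waves, rigid (inverting) ends."""
    # Triangular "pluck" displacement, split equally between the rails.
    half = [0.5 * min(i, length - 1 - i) / (length // 2) for i in range(length)]
    right = list(half)          # right-going wave y+
    left = list(half)           # left-going wave  y-
    pickup = length // 4        # position where the output is read
    out = []
    for _ in range(n_samples):
        # y(m, n) is the sum of the two traveling waves at the pickup.
        out.append(right[pickup] + left[pickup])
        end_r = right[-1]       # sample arriving at the right termination
        end_l = left[0]         # sample arriving at the left termination
        # Shift each rail one sample; rigid ends invert and attenuate.
        right = [-loss * end_l] + right[:-1]
        left = left[1:] + [-loss * end_r]
    return out
```

With <code>length = 50</code> the round trip is 2 × 50 = 100 samples, so the output repeats every 100 samples (scaled down by <code>loss</code> squared per period), giving a fundamental of the sample rate divided by 100.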
Losses incurred throughout the medium are generally consolidated so that they can be calculated once at the termination of a delay line, rather than many times throughout. Waveguides such as acoustic tubes are three-dimensional, but because their lengths are often much greater than their diameters, it is reasonable and computationally efficient to model them as one-dimensional waveguides. Membranes, as used in [[drum]]s, may be modeled using two-dimensional waveguide meshes, and reverberation in three-dimensional spaces may be modeled using three-dimensional meshes. [[Vibraphone]] bars, [[Bell (instrument)|bells]], [[singing bowl]]s and other sounding solids (also called [[idiophone]]s) can be modeled by a related method called [[Banded waveguide synthesis|banded waveguides]], in which multiple [[band-limited]] digital waveguide elements are used to model the strongly [[acoustic dispersion|dispersive]] behavior of waves in solids.

The term "digital waveguide synthesis" was coined by [[Julius O. Smith III]], who helped develop the technique and eventually filed the patent. It represents an extension of the [[Karplus–Strong algorithm]]. [[Stanford University]] owned the patent rights for digital waveguide synthesis and signed an agreement in 1989 to develop the technology with [[Yamaha Corporation|Yamaha]]; however, many of the early patents have now expired.

An extension to DWG synthesis of strings made by Smith is [[commuted synthesis]], wherein the excitation to the digital waveguide contains both the string excitation and the body response of the instrument. Because the digital waveguide is [[linear]], the instrument body's resonances need not be modeled separately after synthesizing the string output, which greatly reduces the number of computations required for a convincing resynthesis.
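Commuted synthesis works because linear time-invariant systems commute: convolving the excitation with the body's impulse response ''before'' the string's recursive delay loop gives the same result as filtering the string output ''afterwards''. The following sketch demonstrates this numerically with toy signals and made-up names (<code>feedback_loop</code>, <code>body_ir</code>); it is a minimal illustration of the principle, not an implementation from the literature.

```python
def feedback_loop(x, delay, g, n_out):
    """y[n] = x[n] + g * y[n - delay]: the recursive delay loop at the
    heart of a waveguide string model (a linear time-invariant system)."""
    y = []
    for n in range(n_out):
        xn = x[n] if n < len(x) else 0.0
        fb = g * y[n - delay] if n >= delay else 0.0
        y.append(xn + fb)
    return y

def convolve(a, b):
    """Plain direct-form convolution of two finite sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

excitation = [1.0, 0.5, -0.25]          # toy string excitation
body_ir = [0.8, 0.3, -0.1, 0.05]        # toy "instrument body" response
# Commuted order: filter the excitation first, then run the string loop.
commuted = feedback_loop(convolve(body_ir, excitation), 10, 0.9, 64)
# Direct order: run the string loop first, then filter its output.
direct = convolve(body_ir, feedback_loop(excitation, 10, 0.9, 64))[:64]
```

Both orderings produce the same 64 samples (up to rounding), which is why the body filtering can be folded into a precomputed excitation table.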
Prototype waveguide software implementations were done by students of Smith in the [[Synthesis Toolkit]] (STK).<ref>{{Cite web |url=https://ccrma.stanford.edu/~jos/wg.html |title=Digital Waveguide Synthesis Papers, Software, Sound Samples, and Links |website=Julius Orion Smith III Home Page |access-date=2019-07-17}}</ref><ref>{{Cite web |url=https://ccrma.stanford.edu/workshops/dsp2008/prc/NotYet/Week2Labs/PRCBookCode/STK/stk-4.0/doc/html/classPluckTwo.html |title=PluckTwo Class Reference |website=The Synthesis ToolKit in C++ (STK) |access-date=2019-07-17}}</ref> The first musical use of digital waveguide synthesis was in the composition "May All Your Children Be Acrobats" (1981) by [[David A. Jaffe]], followed by his "Silicon Valley Breakdown" (1982).