==Machine improvisation==
{{See also|Machine learning|Machine listening|Music and artificial intelligence|Computer models of musical creativity}}
Machine improvisation uses computer algorithms to create [[improvisation]] on existing music materials. This is usually done by sophisticated recombination of musical phrases extracted from existing music, either live or pre-recorded. To achieve credible improvisation in a particular style, machine improvisation uses [[machine learning]] and [[pattern matching]] algorithms to analyze existing musical examples. The resulting patterns are then used to create new variations "in the style" of the original music, developing a notion of stylistic re-injection. This differs from other methods of improvisation with computers that use [[algorithmic composition]] to generate new music without analyzing existing music examples.<ref>Mauricio Toro, Carlos Agon, Camilo Rueda, Gerard Assayag. "[http://www.jatit.org/volumes/Vol86No2/17Vol86No2.pdf GELISP: A Framework to Represent Musical Constraint Satisfaction Problems and Search Strategies]", ''Journal of Theoretical and Applied Information Technology'' 86, no. 2 (2016): 327–331.</ref>

===Statistical style modeling===
Style modeling implies building a computational representation of the musical surface that captures important stylistic features from data. Statistical approaches are used to capture the redundancies in terms of pattern dictionaries or repetitions, which are later recombined to generate new musical data. Style mixing can be realized by analysis of a database containing multiple musical examples in different styles. Machine improvisation builds upon a long musical tradition of statistical modeling that began with Hiller and Isaacson's ''Illiac Suite for String Quartet'' (1957) and Xenakis' use of [[Markov chains]] and [[stochastic processes]].
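The Markov-chain approach at the root of this tradition can be sketched in a few lines of Python: a first-order transition table is learned from an example note sequence and then sampled to produce a new sequence "in the style" of the source. This is a minimal illustration of the general idea, not any particular system's implementation; the note names and seed are invented for the example.

```python
import random

def learn_transitions(notes):
    """Build a first-order Markov model: each note maps to its observed successors."""
    table = {}
    for a, b in zip(notes, notes[1:]):
        table.setdefault(a, []).append(b)
    return table

def improvise(table, seed, length, rng=None):
    """Generate a new sequence by repeatedly sampling a successor of the last note."""
    rng = rng or random.Random(0)
    out = [seed]
    for _ in range(length - 1):
        successors = table.get(out[-1]) or [seed]  # dead end: restart from the seed
        out.append(rng.choice(successors))
    return out

# A toy source melody (invented for illustration).
melody = ["C", "E", "G", "E", "C", "E", "A", "G", "E", "C"]
model = learn_transitions(melody)
print(improvise(model, "C", 8))
```

Because successors are sampled in proportion to how often they occur in the source, the output reproduces the source's local statistics while recombining its material in new ways.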
Modern methods include the use of [[lossless data compression]] for incremental parsing, prediction [[suffix tree]]s, [[string searching]], and more.<ref>Shlomo Dubnov, Gérard Assayag, Olivier Lartillot, Gill Bejerano, "Using Machine-Learning Methods for Musical Style Modeling", ''[[Computer (magazine)|Computers]]'', 36 (10), pp. 73–80, October 2003. {{doi|10.1109/MC.2003.1236474}}</ref> Style mixing is made possible by blending models derived from several musical sources; the first style mixing was done by S. Dubnov in the piece ''NTrope Suite'', using a Jensen-Shannon joint source model.<ref>Dubnov, S. (1999). "Stylistic randomness: About composing NTrope Suite." ''[[Organised Sound]]'', 4(2), 87–92. {{doi|10.1017/S1355771899002046}}</ref> Later, the [[factor oracle]] algorithm (a ''factor oracle'' is a finite state automaton constructed incrementally in linear time and space)<ref>{{cite book |url=https://books.google.com/books?id=JtMYxwzUL00C&q=Factor+oracle%3A+a+new+structure+for+pattern+matching&pg=PA295 |editor1=Jan Pavelka |editor2=Gerard Tel |editor3=Miroslav Bartosek |quote=Lecture Notes in Computer Science 1725 |pages=291–306 |publisher=Springer-Verlag, Berlin |year=1999 |isbn=978-3-540-66694-3 |title=Factor oracle: a new structure for pattern matching; Proceedings of SOFSEM'99; Theory and Practice of Informatics |access-date=4 December 2013}}</ref> was adopted for music by Assayag and Dubnov<ref>"Using factor oracles for machine improvisation", G. Assayag, S. Dubnov, (September 2004) ''Soft Computing'' 8 (9), 604–610 {{doi|10.1007/s00500-004-0385-4}}</ref> and became the basis for several systems that use stylistic re-injection.<ref>"Memex and composer duets: computer-aided composition using style mixing", S. Dubnov, G. Assayag, ''Open Music Composers Book'' 2, 53–66</ref>

===Implementations===
The first implementation of statistical style modeling was the LZify method in Open Music,<ref>G. Assayag, S. Dubnov, O.
Delerue, "Guessing the Composer's Mind: Applying Universal Prediction to Musical Style", In Proceedings of International Computer Music Conference, Beijing, 1999.</ref> followed by the Continuator system, developed by [[François Pachet]] at Sony CSL Paris in 2002, which implemented interactive machine improvisation by interpreting LZ incremental parsing in terms of [[Markov models]] and using it for real-time style modeling.<ref>{{Cite web |url=http://francoispachet.fr/continuator/continuator.html |title=:: Continuator |access-date=19 May 2014 |archive-url=https://web.archive.org/web/20141101121138/http://francoispachet.fr/continuator/continuator.html |archive-date=1 November 2014 |url-status=dead}}</ref><ref>Pachet, F., [http://www.csl.sony.fr/downloads/papers/uploads/pachet-02f.pdf The Continuator: Musical Interaction with Style] {{Webarchive|url=https://web.archive.org/web/20120414183356/http://www.csl.sony.fr/downloads/papers/uploads/pachet-02f.pdf |date=14 April 2012 }}. In ICMA, editor, Proceedings of ICMC, pages 211–218, Göteborg, Sweden, September 2002. ICMA.</ref><ref>Pachet, F. [http://www.csl.sony.fr/downloads/papers/2002/pachet02b.pdf Playing with Virtual Musicians: the Continuator in practice] {{Webarchive|url=https://web.archive.org/web/20120414183418/http://www.csl.sony.fr/downloads/papers/2002/pachet02b.pdf |date=14 April 2012 }}. IEEE MultiMedia, 9(3):77–82, 2002.</ref> A Matlab implementation of factor oracle machine improvisation is available as part of the [[Computer Audition]] toolbox, and there is also an NTCC implementation.<ref>M. Toro, C. Rueda, C. Agón, G. Assayag. "NTCCRT: A concurrent constraint framework for soft-real time music interaction." ''Journal of Theoretical & Applied Information Technology'', vol. 82, issue 1, pp. 184–193. 2015</ref> OMax is a software environment developed at IRCAM that uses [[OpenMusic]] and Max.
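The factor oracle underlying these systems can be built online in linear time by maintaining forward transitions and suffix links, following the incremental construction given in the SOFSEM'99 paper cited above. The Python sketch below is illustrative only; it is not the Matlab or NTCC implementation mentioned here.

```python
def build_factor_oracle(word):
    """Incrementally build a factor oracle for `word`.

    States are 0..len(word); trans[i] maps a symbol to a target state,
    and sfx[i] is the suffix link of state i (None for the initial state).
    Each symbol is processed once, and each suffix-link chain walk only
    adds transitions, giving linear time and space overall.
    """
    trans = [{}]
    sfx = [None]
    for sigma in word:
        new = len(trans)
        trans.append({})
        trans[new - 1][sigma] = new          # transition along the word itself
        k = sfx[new - 1]
        while k is not None and sigma not in trans[k]:
            trans[k][sigma] = new            # extra forward transition
            k = sfx[k]
        sfx.append(0 if k is None else trans[k][sigma])
    return trans, sfx

trans, sfx = build_factor_oracle("abbab")
```

During improvisation, OMax-style systems walk forward transitions to replay the original material and occasionally jump along suffix links to recombine passages that share a common context, which realizes the stylistic re-injection described above.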
It is based on research on stylistic modeling carried out by Gerard Assayag and [[Shlomo Dubnov]] and on research on improvisation with the computer by G. Assayag, M. Chemillier and G. Bloch (a.k.a. the ''OMax Brothers'') in the Ircam Music Representations group.<ref>{{cite web|url=http://omax.ircam.fr/|title=The OMax Project Page|website=omax.ircam.fr|access-date=2018-02-02}}</ref> One of the problems in modeling audio signals with the factor oracle is the symbolization of features from continuous values to a discrete alphabet. This problem was solved in the Variable Markov Oracle (VMO), available as a Python implementation,<ref>C. Wang, S. Dubnov, "Guided music synthesis with variable Markov oracle", Tenth Artificial Intelligence and Interactive Digital Entertainment Conference, 2014</ref> which uses an information rate criterion to find the optimal or most informative representation.<ref>S Dubnov, G Assayag, A Cont, "Audio oracle analysis of musical information rate", IEEE Fifth International Conference on Semantic Computing, 567–557, 2011 {{doi|10.1109/ICSC.2011.106}}</ref>

=== Use of artificial intelligence ===
The use of [[artificial intelligence]] to generate new melodies,<ref>{{Cite web |date=2023-05-10 |title=Turn ideas into music with MusicLM |url=https://blog.google/technology/ai/musiclm-google-ai-test-kitchen/ |access-date=2023-09-22 |website=Google |language=en-us}}</ref> [[Cover version|cover]] pre-existing music,<ref>{{Cite web |date=2023-06-21 |title=Pick a voice, any voice: Voicemod unleashes "AI Humans" collection of real-time AI voice changers |url=https://tech.eu/2023/06/21/pick-a-voice-any-voice-voicemod-unleashes-ai-humans-collection-of-real-time-ai-voice-changers/ |access-date=2023-09-22 |website=Tech.eu |language=en-GB}}</ref> and clone artists' voices is a recent phenomenon that has been reported to disrupt the [[music industry]].<ref>{{Cite web |title='Regulate it before we're all finished': Musicians react to AI songs flooding the internet
|url=https://news.sky.com/story/ai-music-can-you-tell-if-these-songs-were-made-using-artificial-intelligence-or-not-12865174 |access-date=2023-09-22 |website=Sky News |language=en}}</ref>
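The symbolization problem described above for the Variable Markov Oracle, mapping continuous feature values to a discrete alphabet, can be illustrated with a simple threshold scheme: each new frame reuses the symbol of an earlier exemplar within a distance theta, and otherwise founds a new symbol. VMO goes further by searching over theta with the information rate criterion cited above; the fixed threshold in this sketch is an illustrative assumption, not the published implementation.

```python
def symbolize(features, theta):
    """Assign each continuous feature value a discrete symbol.

    A value reuses the symbol of the first earlier exemplar within
    distance theta; otherwise it starts a new symbol. (VMO selects
    theta by an information rate criterion; here it is fixed.)
    """
    exemplars = []   # one representative value per symbol
    labels = []
    for x in features:
        for i, e in enumerate(exemplars):
            if abs(x - e) <= theta:
                labels.append(i)
                break
        else:
            exemplars.append(x)
            labels.append(len(exemplars) - 1)
    return labels

print(symbolize([1.0, 1.05, 3.0, 1.02, 2.95], theta=0.2))  # → [0, 0, 1, 0, 1]
```

Too small a theta fragments similar frames into many symbols, while too large a theta merges distinct material; this trade-off is what the information rate criterion is designed to balance.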