== Applications ==
Music information retrieval is being used by businesses and academics to categorize, manipulate and even create music.

=== Music classification ===
One of the classical MIR research topics is genre classification: categorizing music items into one of a set of pre-defined genres such as [[Classical music|classical]], [[jazz]] or [[Rock music|rock]]. [[Emotion classification|Mood classification]], artist classification, instrument identification and music tagging are also popular topics.

=== Recommender systems ===
Several [[recommender systems]] for music already exist, but surprisingly few are based on MIR techniques; most instead rely on similarity between users or on laborious manual data compilation. [[Pandora Radio|Pandora]], for example, uses experts to tag music with particular qualities such as "female singer" or "strong bassline". Many other systems find users whose listening histories are similar and suggest unheard music from their respective collections. MIR techniques for [[musical similarity|similarity in music]] are now beginning to form part of such systems.

=== Music source separation and instrument recognition ===
Music source separation is the task of recovering the original signals from a mixture [[audio signal]]; instrument recognition is the task of identifying the instruments involved in a piece of music. Various MIR systems have been developed that can separate music into its component tracks without access to the master copy. In this way, for example, karaoke tracks can be created from ordinary recordings, though the process is not yet perfect because vocals occupy some of the same [[frequency]] space as the other instruments.

=== Automatic music transcription ===
Automatic [[Transcription (music)|music transcription]] is the process of converting an audio recording into symbolic notation, such as a score or a [[MIDI file#File formats|MIDI file]].<ref>A. Klapuri and M. Davy, editors. ''Signal Processing Methods for Music Transcription''. Springer-Verlag, New York, 2006.</ref> This process involves several audio analysis tasks, which may include multi-pitch detection, [[Onset detection#Onset detection|onset detection]], duration estimation, instrument identification, and the extraction of [[harmonic]], [[Rhythm|rhythmic]] or [[Melody|melodic]] information. The task becomes more difficult as the number of instruments and the [[Polyphony and monophony in instruments|polyphony level]] increase.

=== Music generation ===
The [[automatic generation of music]] is a goal held by many MIR researchers. Attempts have been made with limited success in terms of human appreciation of the results.
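Onset detection, one of the analysis tasks named above, is commonly approached by looking for frames where the magnitude spectrum grows sharply. The sketch below illustrates the idea with a positive spectral-flux detector over a synthetic signal; it is a minimal, hypothetical example (function name, frame sizes and threshold are illustrative choices, not from any particular MIR system).

```python
import numpy as np

def spectral_flux_onsets(x, sr, frame=1024, hop=512, threshold=0.5):
    """Detect note onsets as peaks in the positive spectral flux."""
    window = np.hanning(frame)
    n_frames = 1 + (len(x) - frame) // hop
    # Short-time magnitude spectra, one row per frame.
    mags = np.array([np.abs(np.fft.rfft(window * x[i * hop:i * hop + frame]))
                     for i in range(n_frames)])
    # Positive spectral flux: per-bin magnitude increases, summed per frame.
    flux = np.maximum(mags[1:] - mags[:-1], 0.0).sum(axis=1)
    flux /= flux.max() + 1e-12  # normalize to [0, 1]
    # Simple peak picking: local maxima above a fixed threshold.
    peaks = [i + 1 for i in range(1, len(flux) - 1)
             if flux[i] > threshold
             and flux[i] >= flux[i - 1] and flux[i] >= flux[i + 1]]
    return [p * hop / sr for p in peaks]  # onset times in seconds

# Synthetic test signal: two sine tones entering at 0.5 s and 1.0 s.
sr = 8000
t = np.arange(int(1.5 * sr)) / sr
x = np.zeros_like(t)
x[int(0.5 * sr):] += np.sin(2 * np.pi * 440 * t[:len(t) - int(0.5 * sr)])
x[int(1.0 * sr):] += np.sin(2 * np.pi * 660 * t[:len(t) - int(1.0 * sr)])
onsets = spectral_flux_onsets(x, sr)
```

Real transcription systems combine a detector like this with multi-pitch estimation and duration modelling; the polyphony problem mentioned above arises because overlapping notes blur both the flux peaks and the pitch estimates.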