==Methods used==

===Data source===
[[Sheet music|Scores]] give a clear and logical description of music from which to work, but access to [[sheet music]], whether digital or otherwise, is often impractical. [[MIDI]] music has also been used for similar reasons, but some data is lost in the conversion to MIDI from any other format, unless the music was written with the MIDI standards in mind, which is rare. [[Audio file format|Digital audio formats]] such as [[WAV]], [[mp3]], and [[ogg]] are used when the audio itself is part of the analysis. Lossy formats such as mp3 and ogg work well with the human ear but may be missing crucial data for study. Additionally, some encodings create artifacts which could mislead an automatic analyser. Despite this, the ubiquity of the mp3 format has meant that much research in the field uses these files as source material. Increasingly, [[metadata]] mined from the web is incorporated in MIR for a more rounded understanding of the music within its cultural context, most recently through the analysis of [[social tagging|social tags]] for music.

===Feature representation===
Analysis can often require some summarising,<ref>Eidenberger, Horst (2011). "Fundamental Media Understanding", atpress. {{ISBN|978-3-8423-7917-6}}.</ref> and for music (as with many other forms of data) this is achieved by [[feature extraction]], especially when the [[Computer audition|audio content]] itself is analysed and [[machine learning]] is to be applied. The purpose is to reduce the sheer quantity of data down to a manageable set of values so that learning can be performed within a reasonable time-frame. One common feature extracted is the [[Mel-frequency cepstral coefficient|Mel-Frequency Cepstral Coefficient]] (MFCC), which is a measure of the [[timbre]] of a [[Musical composition|piece of music]]. Other features may be employed to represent the [[Tonality#Computational methods to determine the key|key]], [[Chord (music)|chords]], [[Harmony|harmonies]], [[melody]], main [[Pitch (music)|pitch]], [[beats per minute]] or rhythm in the piece. A number of audio feature extraction tools are available.<ref>David Moffat, David Ronan, and Joshua D. Reiss. "An Evaluation of Audio Feature Extraction Toolboxes". In Proceedings of the International Conference on Digital Audio Effects (DAFx), 2016. [https://www.ntnu.edu/documents/1001201110/1266017954/DAFx-15_submission_43_v2.pdf Available here].</ref>
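As an illustration of this step, the following sketch extracts MFCCs and a tempo estimate and condenses them into a fixed-length feature vector. It assumes the open-source librosa library and an illustrative file name; other feature extraction toolboxes follow a similar pattern.

<syntaxhighlight lang="python">
import numpy as np
import librosa

# Load an audio file (the file name here is illustrative only).
y, sr = librosa.load("example.mp3", sr=22050, mono=True)

# Mel-Frequency Cepstral Coefficients: an (n_mfcc x frames) matrix
# summarising the timbre of the signal over time.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Collapse the frame dimension into a fixed-length vector (per-coefficient
# mean and standard deviation) so that learning operates on manageable data.
features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Estimate the tempo in beats per minute, another commonly used feature.
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
</syntaxhighlight>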
===Statistics and machine learning===
* Computational methods for classification, clustering, and modelling – musical feature extraction for mono- and [[polyphonic]] music, similarity and [[pattern matching]], retrieval (a sketch of the classification setting follows this list)
* [[Formal methods]] and [[Database|databases]] – applications of automated [[music identification]] and recognition, such as [[score following]], automatic accompaniment, routing and filtering for music and music queries, query languages, standards and other metadata or protocols for music information handling and [[information retrieval|retrieval]], [[multi-agent system]]s, distributed search
* Software for music information retrieval – [[Semantic Web]] and musical digital objects, [[Intelligent agent|intelligent agents]], [[collaborative software]], web-based search and [[semantic retrieval]], [[query by humming]] / [[Search by sound]], [[acoustic fingerprinting]]
* Music analysis and knowledge representation – [[automatic summarization]], citing, excerpting, downgrading, transformation, formal models of music, digital scores and representations, music indexing and [[metadata]].
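A minimal sketch of that classification setting, assuming fixed-length feature vectors (such as the MFCC summaries shown earlier) have already been extracted for a collection of tracks. The use of scikit-learn and the random placeholder data are assumptions for illustration only.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: one 26-dimensional feature vector per track and an
# integer genre label for each; a real pipeline would load extracted features.
rng = np.random.default_rng(0)
X = rng.random((200, 26))
y = rng.integers(0, 4, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Scale the features, fit a support-vector classifier, and report
# accuracy on the held-out tracks.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
</syntaxhighlight>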