Data-Driven Structural Sequence Representations of Songs and Applications
Nov 21, 2012, 10:00 AM to 12:00 PM
Where: Engr. IV Bldg., 57-124
Advisor: Prof. Vwani Roychowdhury
Content-based music analysis has attracted considerable attention owing to the rapidly growing digital music market. A number of specific functionalities, such as exact look-up of songs in an existing database or classification of music into well-known genres, can now be executed at large scale and are even offered as consumer services by several well-known social media and mobile companies. Despite these advances, robust representations of music that support tasks that seem simple to many humans, such as identifying cover songs (new recordings of existing songs) or segmenting a song into its constituent structural parts, have yet to be developed. Motivated by this challenge, we introduce a method for deriving approximate structural sequence representations purely from the chromagram of a song, without relying on any prior knowledge from musicology. Each song is represented by a sequence of states of an underlying Hidden Markov Model, where each state may capture a property of the song such as its harmony, chords, or melody. By adapting different versions of sequence alignment algorithms, the method is then applied to three problems: (i) exploring and identifying repeating parts within a song, (ii) identifying cover songs, and (iii) extracting similar sections from two different songs. The proposed method has several advantages, including elimination of the unreliable beat-estimation step and the ability to match parts of songs. Invariance to key transposition among cover songs is achieved by cyclically rotating the chromatic domain of the chromagram. Our data-driven method is shown to be robust to reordering, insertion, and deletion of song sections, and its performance on the cover song identification task is superior to that of other known methods.
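Two mechanisms mentioned in the abstract, cyclic rotation of the chromatic domain for key-transposition invariance and local sequence alignment for matching parts of songs, can be sketched roughly as follows. This is an illustrative reconstruction, not the speaker's implementation: the function names, the cosine-similarity score, and the unit match/mismatch/gap costs are assumptions made for the sketch.

```python
import numpy as np

def best_transposition_similarity(chroma_a, chroma_b):
    """Score two chromagrams (frames x 12 pitch classes) under all 12
    cyclic rotations of the chromatic axis; keeping the best rotation
    absorbs any key transposition between two versions of a song."""
    best = -np.inf
    for shift in range(12):
        rotated = np.roll(chroma_b, shift, axis=1)  # rotate pitch classes
        denom = np.linalg.norm(chroma_a) * np.linalg.norm(rotated)
        score = float(np.sum(chroma_a * rotated) / denom) if denom > 0 else 0.0
        best = max(best, score)
    return best

def local_alignment_score(a, b, match=1, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment over two discrete state sequences
    (e.g., decoded HMM states). Local, rather than global, alignment
    lets matching parts of songs score highly even when whole sections
    are reordered, inserted, or deleted."""
    m, n = len(a), len(b)
    H = [[0] * (n + 1) for _ in range(m + 1)]
    best = 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,  # match or mismatch
                          H[i - 1][j] + gap,    # gap in sequence b
                          H[i][j - 1] + gap)    # gap in sequence a
            best = max(best, H[i][j])
    return best
```

For example, a chromagram transposed by five semitones scores a perfect similarity of 1.0 at the compensating rotation, and a shared four-state section between two otherwise different state sequences yields an alignment score of 4 under the unit costs above.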
Chih-Li Wang is a Ph.D. candidate in the Electrical Engineering Department at UCLA. He received his B.S. degree in Electrical Engineering from National Tsing Hua University, Taiwan, and his M.S. degree in Electrical Engineering from National Taiwan University, Taiwan. His research focuses on signal processing for music analysis and music information retrieval.