Abstract—With the growing demand for automatic music analysis, music summarization, which aims to determine the most representative segment of a given piece of music, has received much attention in the music information retrieval field. In this paper, we propose a new approach to music summarization. Our goal is to identify the segment that listeners actually recognize as the most representative or memorable one. The strategy of the proposed approach is to learn the relationship between acoustic features and human-annotated information, instead of selecting a segment based on the self-structure of the given music clip. Our prediction model identifies the most representative part reliably, and the experimental results show that the proposed approach has significant potential for music summarization.
Index Terms—Human-intuitive, music summarization, supervised learning, prediction system.
The authors are with the Chung-Ang University, Seoul, Korea (e-mail: firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com).
Cite: Jaesung Lee, Jihae Yoon, Seongwon Lee, and Dae-Won Kim, "Supervised Music Summarization for Human-Intuitive Highlight Identification," International Journal of Machine Learning and Computing vol.6, no. 1, pp. 15-20, 2016.