Abstract—The art of composing music can be automated with deep learning, aided by a few implicit heuristics. In this paper, we aim to build a model that composes a Carnatic-oriented contemporary tune based on the features of a given training song. Automated musical composition is implemented as a pipeline in which the key features of the resulting tune are constructed separately, step by step, and finally combined into a complete piece. Four LSTM-based modules were built, namely the Motif, Tune, End Note, and Gamaka modules, which achieved training accuracies of 86%, 98%, 60%, and 72%, respectively. Our work focuses primarily on a user-friendly Carnatic music composer that accepts an initial user phrase and composes a sequence of notes and motifs for the required duration.
Index Terms—RNN-LSTM, composition, deep learning, Carnatic music, AI.
Hari Kumar is with the Ericsson Research Labs, Ericsson, India (e-mail: email@example.com).
P. S Ashwin and Haritha Ananthakrishnan are with the Computer Science and Engineering Department of SSN College of Engineering, Chennai, Tamil Nadu, India (e-mail: firstname.lastname@example.org, email@example.com).
Cite: N. Hari Kumar, P. S Ashwin, and Haritha Ananthakrishnan, "MellisAI - An AI Generated Music Composer Using RNN-LSTMs," International Journal of Machine Learning and Computing, vol. 10, no. 2, pp. 247-252, 2020. Copyright © 2020 by the authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited (CC BY 4.0).