RESEARCH OF MUSIC CLASSIFICATION BASED ON MOOD RECOGNITION
Keywords:
Music emotion recognition, Feature extraction, Two-level classification, Music mood classification

Abstract
Music emotion is a vital component in the field of multimedia database retrieval and computational musicology. Large online music datasets pose major challenges for searching, retrieving, and organizing music content. There is therefore a need for a robust automatic music emotion classification system that organizes diverse music pieces into classes according to their emotional content. Two basic components must be considered for music emotion classification: audio feature extraction and classifier design. In this work, we propose diverse audio features to precisely characterize the music content. The feature sets belong to four groups: dynamic, rhythmic, spectral, and harmonic. Four statistical parameters are considered as representatives, including up to the fourth-order central moments of each feature as well as covariance components. The number of redundant parameters is reduced by the minimum redundancy maximum relevance (MRMR) algorithm and principal component analysis (PCA). A Support Vector Machine (SVM) is used as the classifier for music mood recognition.
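The pipeline described above can be sketched in a few lines of Python. This is a minimal illustration under assumptions, not the authors' implementation: synthetic frame-level features stand in for real dynamic/rhythmic/spectral/harmonic audio features, a simple mutual-information ranking (via scikit-learn's `mutual_info_classif`) stands in for full MRMR, and the statistical summary uses mean, variance, skewness, and kurtosis as the four per-feature descriptors.

```python
# Hedged sketch of the described pipeline: statistical feature summarization,
# mutual-information feature ranking (a simplified stand-in for MRMR),
# PCA dimensionality reduction, and an SVM mood classifier.
# All data below is synthetic; a real system would extract frame-level
# features from audio (e.g. with a toolbox such as MIRtoolbox).
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def summarize(frames):
    """Collapse a (n_frames, n_features) matrix into four statistical
    descriptors per feature: mean, variance, skewness, kurtosis."""
    return np.concatenate([frames.mean(axis=0), frames.var(axis=0),
                           skew(frames, axis=0), kurtosis(frames, axis=0)])

# Synthetic "tracks": 2 mood classes, 40 tracks each, 100 frames x 10 raw features.
X, y = [], []
for label in (0, 1):
    for _ in range(40):
        frames = rng.normal(loc=0.5 * label, scale=1.0, size=(100, 10))
        X.append(summarize(frames))
        y.append(label)
X, y = np.array(X), np.array(y)

# Rank the 40 summary features by mutual information with the mood label
# and keep the 20 most relevant ones (a crude relevance-only proxy for MRMR).
mi = mutual_info_classif(X, y, random_state=0)
top = np.argsort(mi)[::-1][:20]

# PCA reduces the selected features further before the SVM classifier.
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X[:, top], y)
acc = clf.score(X[:, top], y)
print(f"training accuracy: {acc:.2f}")
```

In practice the feature ranking and PCA dimensionality would be tuned by cross-validation, and true MRMR also penalizes redundancy between selected features rather than ranking by relevance alone.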
References
I. S. Napiorkowski, "Music Mood Recognition: State of the Art Review", MUS-15, July 10, 2015.
II. R. Panda et al., "MIREX 2014: Mood Classification Tasks Submission", MIREX, 2014.
III. B. K. Baniya and J. Lee, "Importance of Audio Feature Reduction in Automatic Music Genre Classification", Multimedia Tools and Applications, Dec. 2014.
IV. S. Pouyanfar and H. Sameti, "Music Emotion Recognition Using Two Level Classification", IEEE, 2014.
V. B. K. Baniya, D. Ghimire, and J. Lee, "A Novel Approach of Automatic Music Genre Classification Based on Timbral Texture and Rhythmic Content Features", Int. Conference on Advanced Communication Technology (ICACT), pp. 96-102, 2014.
VI. K. Yoon, J. Lee, and M.-U. Kim, "Music Recommendation System Using Emotion Low-Level Features", IEEE Transactions on Consumer Electronics, vol. 58, no. 2, pp. 612-618, 2012.
VII. H. Peng, F. Long, and C. Ding, "Feature Selection Based on Mutual Information: Criteria of Max-Dependency, Max-Relevance, and Min-Redundancy", IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 8, pp. 1226-1238, Aug. 2005.
VIII. B. K. Baniya and C. S. Hong, "Nearest Multi-Prototype Based Music Mood Classification".
IX. J.-M. Ren, M.-J. Wu, and J.-S. R. Jang, "Automatic Music Mood Classification Based on Timbre and Modulation Features", IEEE Transactions on Affective Computing, DOI 10.1109/TAFFC.2015.2427836.
X. H. Peng and F. Long, "An Efficient Max-Dependency Algorithm for Gene Selection", 36th Symp. on the Interface: Computational Biology and Bioinformatics, May 2004.
XI. L. Smith, "A Tutorial on Principal Components Analysis", available: www.cs.otago.ac.nz/cosc453/student_tutorials/principal_components.pdf, 2000.
XII. J. Personality and Social Psychology, vol. 57, no. 3, pp. 493-502, 1989.
XIII. MIRtoolbox 1.6.1.
License
Copyright (c) 2021 International Education and Research Journal (IERJ)

This work is licensed under a Creative Commons Attribution 4.0 International License.