TuFoC: regional classification of Turkish folk music recordings using deep learning on Mel spectrograms


ABİDİN D.

PeerJ Computer Science, vol.11, 2025 (SCI-Expanded, Scopus)

  • Publication Type: Article
  • Volume: 11
  • Publication Date: 2025
  • DOI: 10.7717/peerj-cs.3233
  • Journal Name: PeerJ Computer Science
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Compendex, Directory of Open Access Journals
  • Keywords: Artificial Intelligence, Convolutional neural networks, Data Mining and Machine Learning, Deep learning, Mel spectrograms, Music information retrieval, Neural Networks, Regional classification, Turkish folk music
  • Manisa Celal Bayar University Affiliated: Yes

Abstract

The regional classification of Turkish folk music remains a relatively unexplored domain in music information retrieval, particularly when deep learning is applied to raw audio signals. This study addresses the gap by investigating how modern deep learning architectures can effectively classify regional folk recordings from limited original data using carefully designed spectrogram inputs. We present Turkish Folk Music Classification (TuFoC), a deep learning-based approach that classifies Turkish folk music recordings by region of origin using Mel spectrogram representations. We investigate two complementary architectures: a MobileNetV2-based convolutional neural network (CNN) for extracting spatial representations, and a long short-term memory (LSTM) network for modeling temporal dynamics. To engage critically with the state of the art, we replicate a classical machine learning pipeline that combines histogram of oriented gradients (HOG) features with a Light Gradient-Boosting Machine (LightGBM) classifier and Synthetic Minority Oversampling Technique (SMOTE)-based oversampling. This baseline achieves competitive results, particularly on the gently augmented dataset (GAD), reaching 92.53% in both accuracy and macro F1-score. However, our CNN architecture not only marginally outperforms the baseline on GAD (92.68% accuracy) but also demonstrates superior consistency across all dataset variants. While the LSTM model underperforms on the original dataset (OD) and the strongly augmented dataset (AD), it improves markedly on GAD, underscoring the importance of balanced data preparation. Experimental results, obtained through stratified 10-fold cross-validation repeated over 30 independent runs, demonstrate that the proposed CNN architecture delivers the highest and most stable classification performance. Beyond these performance gains, our approach offers key advantages in generalization, scalability, and automation, as it bypasses the need for domain-specific feature design. The findings confirm that deep neural models trained on Mel spectrograms constitute a flexible and robust alternative to classical pipelines, holding promise for future applications in computational ethnomusicology and music information retrieval.
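
To make the front end of the pipeline concrete, the sketch below shows how a recording could be converted into the log-scaled Mel spectrogram input the abstract describes, using librosa. The sampling rate, FFT size, hop length, and number of Mel bands are illustrative assumptions, not the paper's reported settings.

```python
import librosa
import numpy as np

def audio_to_mel(path, sr=22050, n_fft=2048, hop_length=512, n_mels=128):
    """Load a recording and return a log-scaled (dB) Mel spectrogram."""
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels
    )
    # Log (dB) compression is the usual input scaling for spectrogram CNNs.
    return librosa.power_to_db(mel, ref=np.max)
```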
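The MobileNetV2-based CNN could be assembled as below with tf.keras; the input resolution, pooling, dropout rate, and classification head are assumptions, since the abstract does not specify the exact architecture.

```python
import tensorflow as tf

def build_cnn(num_regions, input_shape=(224, 224, 3)):
    """MobileNetV2 backbone with a small softmax head over region labels."""
    # Single-channel spectrograms would be replicated to 3 channels to fit
    # the backbone's expected input (an assumption of this sketch).
    base = tf.keras.applications.MobileNetV2(
        include_top=False, weights=None, input_shape=input_shape
    )
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(num_regions, activation="softmax"),
    ])
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model
```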
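Finally, the classical baseline and the evaluation protocol (HOG features, SMOTE oversampling, a LightGBM classifier, and stratified 10-fold cross-validation repeated 30 times) could look roughly like the following; the HOG parameters and LightGBM defaults are illustrative assumptions.

```python
import numpy as np
from skimage.feature import hog
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from lightgbm import LGBMClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

def hog_features(images):
    """images: equally sized 2-D spectrogram images -> HOG feature matrix."""
    return np.array([
        hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for img in images
    ])

def evaluate_baseline(images, labels):
    X = hog_features(images)
    # SMOTE is fitted inside each CV fold via the imblearn pipeline,
    # so oversampled examples never leak into the validation split.
    pipe = Pipeline([("smote", SMOTE()), ("clf", LGBMClassifier())])
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=30)
    scores = cross_val_score(pipe, X, labels, cv=cv, scoring="f1_macro")
    return scores.mean(), scores.std()
```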