Advancing natural language processing (NLP) applications of morphologically rich languages with bidirectional encoder representations from transformers (BERT): an empirical case study for Turkish


ÖZÇİFT A., Akarsu K., Yumuk F., Söylemez C.

Automatika, vol. 62, no. 2, pp. 226-238, 2021 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 62 Issue: 2
  • Publication Date: 2021
  • DOI: 10.1080/00051144.2021.1922150
  • Journal Name: Automatika
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Applied Science & Technology Source, Central & Eastern European Academic Source (CEEAS), Computer & Applied Sciences, Directory of Open Access Journals
  • Pages: pp. 226-238
  • Keywords: Bidirectional encoder representations transformers, language pre-processing, morphologically rich language, natural language processing, Turkish
  • Manisa Celal Bayar University Affiliated: Yes

Abstract

Language model pre-training architectures have proven useful for learning language representations. Bidirectional encoder representations from transformers (BERT), a recent deep bidirectional self-attention representation learned from unlabelled text, has achieved remarkable results on many natural language processing (NLP) tasks after fine-tuning. In this paper, we demonstrate the efficiency of BERT for a morphologically rich language, Turkish. Traditionally, morphologically complex languages require dense language pre-processing steps to shape the data into a form suitable for machine learning (ML) algorithms. In particular, tokenization, lemmatization or stemming, and feature engineering are needed to obtain an efficient data representation that overcomes data sparsity and high-dimensionality problems. In this context, we selected five Turkish NLP research problems from the literature: sentiment analysis, cyberbullying identification, text classification, emotion recognition and spam detection. We then compared the empirical performance of BERT with baseline ML algorithms. Finally, we found improved results over the baseline ML algorithms on the selected NLP problems while eliminating the heavy pre-processing tasks.
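To illustrate the pre-processing burden the abstract refers to, the sketch below shows a traditional feature-engineering pipeline for an agglutinative language: tokenization, crude suffix-stripping in place of lemmatization, and bag-of-words vectorization. The suffix list and helper names are hypothetical simplifications for illustration only (real Turkish morphology needs a proper analyzer); the point is that distinct surface forms of one lemma inflate the vocabulary unless they are stemmed, which is the sparsity problem BERT's subword tokenizer sidesteps.

```python
import re
from collections import Counter

# Hypothetical, highly simplified Turkish suffix list -- illustration only.
# A real system would use a finite-state morphological analyzer instead.
SUFFIXES = ["lerin", "ların", "ler", "lar", "den", "dan", "de", "da"]

def tokenize(text):
    # Lowercase and keep runs of Latin + Turkish letters as tokens.
    return re.findall(r"[a-zçğıöşü]+", text.lower())

def stem(token):
    # Greedy longest-suffix stripping: a crude stand-in for lemmatization.
    for suf in sorted(SUFFIXES, key=len, reverse=True):
        if token.endswith(suf) and len(token) - len(suf) >= 3:
            return token[: -len(suf)]
    return token

def bag_of_words(texts):
    # Stem every token, build a shared vocabulary, then emit one
    # count vector per text (the classic sparse ML representation).
    stemmed = [[stem(t) for t in tokenize(x)] for x in texts]
    vocab = sorted({t for doc in stemmed for t in doc})
    vectors = []
    for doc in stemmed:
        counts = Counter(doc)
        vectors.append([counts.get(w, 0) for w in vocab])
    return vocab, vectors
```

For example, `stem("kitaplar")` and `stem("kitapların")` both reduce to `"kitap"`, merging two surface forms into one feature; without that step each inflected form would occupy its own dimension, which is exactly the high-dimension problem the paper's baseline pipelines must engineer around.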