A New Multi Objective Video Summarization Approach for Video Surveillance Analytics Applications on Smart Cities


Altundoğan T. G., Karakose M., Mert F.

IEEE Access, vol. 13, pp. 154353-154382, 2025 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 13
  • Publication Date: 2025
  • DOI: 10.1109/access.2025.3605259
  • Journal Name: IEEE Access
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Compendex, INSPEC, Directory of Open Access Journals
  • Pages: pp. 154353-154382
  • Keywords: smart city, transformer, video analytics, video classification, video summarization
  • Affiliated with Manisa Celal Bayar University: Yes

Abstract

Summarizing the surveillance videos used in smart city applications is important for reducing processing costs and improving sustainability. Although there is a considerable body of literature on video summarization, methods for summarizing the surveillance videos used in smart cities remain few and inadequate, because both the object and event features of such video data must be preserved. In this study, we integrated an object-centric and an event-centric summarization method with Apache Kafka for the effective summarization of such videos. The object-centric summarization module of the proposed method preserves the statistical and motion features of the objects in the videos, while the event-centric summarization module preserves the abnormal events they contain. We present detailed performance results for both modules and for the integrated system using several metrics, and we compare the performance of both modules with video summarization approaches from the literature. The object-centric summarization method preserves the statistical features of a video with a success rate of over 90% while shortening the video by more than 95%. The event-centric summarization approach produces summaries that capture the abnormal situations found in videos with a success rate of over 95%. The comparative results show that the proposed method outperforms existing studies across many different evaluations.
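To make the described architecture concrete, the sketch below shows one plausible way to wire two summarization modules to a surveillance video stream through Apache Kafka, using the kafka-python client. This is not the authors' implementation: the topic names (surveillance-frames, summary-frames), the two scoring functions, and the keep/drop threshold are all hypothetical placeholders standing in for the paper's object-centric and event-centric modules.

```python
# Minimal sketch, assuming a Kafka broker at localhost:9092 and the
# kafka-python package. Frames arrive as raw bytes (e.g., JPEG) on a
# hypothetical "surveillance-frames" topic; frames judged important by
# either module are forwarded to a hypothetical "summary-frames" topic.
from kafka import KafkaConsumer, KafkaProducer


def object_centric_score(frame_bytes: bytes) -> float:
    """Placeholder for the object-centric module: score how much the
    frame contributes to preserving object statistics and motion."""
    return 0.0


def event_centric_score(frame_bytes: bytes) -> float:
    """Placeholder for the event-centric module: score how likely the
    frame belongs to an abnormal event."""
    return 0.0


def run(bootstrap: str = "localhost:9092", threshold: float = 0.5) -> None:
    consumer = KafkaConsumer(
        "surveillance-frames",          # hypothetical input topic
        bootstrap_servers=bootstrap,
        value_deserializer=lambda v: v,  # keep raw frame bytes as-is
    )
    producer = KafkaProducer(bootstrap_servers=bootstrap)

    for msg in consumer:
        # A frame enters the summary if either module deems it important,
        # so both object features and abnormal events are preserved.
        score = max(object_centric_score(msg.value),
                    event_centric_score(msg.value))
        if score >= threshold:
            producer.send("summary-frames", msg.value)

    producer.flush()


if __name__ == "__main__":
    run()
```

Routing the stream through Kafka topics, as the abstract describes, lets each module consume and score frames independently and allows the pipeline to scale across brokers; the max-fusion rule above is only one simple way the two modules' decisions could be combined.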