Preprint Article, Version 1 (this version is not peer-reviewed)

Cooking is All About People: Comment Classification on Cookery Channels Using BERT and Classification Models (Malayalam-English Mix-Code)

Version 1 : Received: 15 June 2020 / Approved: 17 June 2020 / Online: 17 June 2020 (13:40:22 CEST)

How to cite: Kazhuparambil, S.; Kaushik, A. Cooking is All About People: Comment Classification on Cookery Channels Using BERT and Classification Models (Malayalam-English Mix-Code). Preprints 2020, 2020060223 (doi: 10.20944/preprints202006.0223.v1).

Abstract

The scope of a lucrative career promoted by Google through its video distribution platform YouTube has attracted a large number of users to become content creators. An important aspect of this line of work is the feedback received in the form of comments, which shows how well the content is being received by the audience. However, the volume of comments, coupled with spam and limited tools for comment classification, makes it virtually impossible for a creator to go through each and every comment and gather constructive feedback. Automatic classification of comments is a challenge even for established classification models, since comments are often of variable lengths and riddled with slang, symbols and abbreviations. This is an even greater challenge when comments are multilingual, as the messages are often rife with the respective vernacular. In this work, we have evaluated top-performing classification models and four different vectorizers for classifying comments that are different combinations of English and Malayalam (only English, only Malayalam, and a mix of the two). The statistical analysis of results indicates that Multinomial Naïve Bayes, K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Random Forest and Decision Trees offer a similar level of accuracy in comment classification. Further, we have also evaluated three multilingual variants of the novel NLP language model BERT and compared their performance to the conventional machine learning classification techniques. XLM was the top-performing BERT model with an accuracy of 67.31%. Random Forest with a Term Frequency Vectorizer was the top-performing model among all the traditional classification models, with an accuracy of 63.59%.
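As an illustrative sketch only (not the authors' code), the best-performing traditional pipeline described above — a term-frequency vectorizer feeding a Random Forest classifier — could be assembled in scikit-learn roughly as follows. The toy comments and labels here are invented stand-ins; the paper's real data mixes English and Malayalam and uses its own label set.

```python
# Hypothetical sketch of a term-frequency + Random Forest comment
# classifier, in the spirit of the pipeline evaluated in the paper.
# All data below is toy placeholder data, not the study's dataset.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline

comments = [
    "This recipe looks delicious, thank you!",
    "Subscribe to my channel for free gifts",
    "Please share the full ingredient list",
    "Click this link to win a prize now",
]
labels = ["feedback", "spam", "feedback", "spam"]

pipeline = Pipeline([
    # CountVectorizer produces raw term-frequency features
    ("tf", CountVectorizer()),
    ("clf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
pipeline.fit(comments, labels)

predictions = pipeline.predict(comments)
print(list(predictions))
```

In practice the paper reports this family of model at roughly 63.59% accuracy on its mixed-code data, well below a perfect fit; the trivial dataset above is only there to make the sketch runnable end to end.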

Subject Areas

BERT; Classification; Mix-Code; Language Model; YouTube; Parametric and Non-Parametric


