Sentence-Level Sentiment Classification: A Comparative Study Between Deep Learning Models

Authors

  • Sara Mifrah Laboratory of Information Processing and Modelling, Hassan II University of Casablanca, Faculty of Sciences Ben M’sik, Casablanca, Morocco
  • El Habib Benlahmar Laboratory of Information Processing and Modelling, Hassan II University of Casablanca, Faculty of Sciences Ben M’sik, Casablanca, Morocco

DOI:

https://doi.org/10.13052/jicts2245-800X.10213

Keywords:

Sentiment Classification, Sentence Level, Deep Learning, BiLSTM, LSTM, BiGRU, GRU, BERT

Abstract

Sentiment classification provides a means of analysing subjective information in text and extracting the opinion it expresses. Sentiment analysis is the method by which information is extracted from people's opinions, judgments and emotions about entities. In this paper we present a comparative study of the deep learning models most widely used in sentiment analysis: L-NFS (Linguistic Neuro-Fuzzy System), GRU (Gated Recurrent Unit), BiGRU (Bidirectional Gated Recurrent Unit), LSTM (Long Short-Term Memory), BiLSTM (Bidirectional Long Short-Term Memory) and BERT (Bidirectional Encoder Representations from Transformers). For this study we used a large corpus containing 1.6 million tweets, and we trained our models on a GPU (graphics processing unit). The best results, an accuracy of 87.36% and an F1-score of 0.87, were obtained by the BERT model.
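The accuracy and F1-score figures reported above can be illustrated with a minimal sketch (an assumption for illustration, not the authors' evaluation code): both metrics computed from gold labels and model predictions for a binary positive/negative sentiment task. The function names and the toy labels are hypothetical.

```python
# Minimal sketch of the two reported metrics for binary sentiment
# classification (1 = positive tweet, 0 = negative tweet).

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the gold labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy gold labels and predictions (hypothetical, for illustration only)
gold = [1, 0, 1, 1, 0, 0, 1, 0]
pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(round(accuracy(gold, pred), 2))  # 0.75
print(round(f1_score(gold, pred), 2))  # 0.75
```

In practice the same numbers would typically come from a library such as scikit-learn's `accuracy_score` and `f1_score`, computed over a held-out test split of the 1.6-million-tweet corpus.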


Author Biographies

Sara Mifrah, Laboratory of Information Processing and Modelling, Hassan II University of Casablanca, Faculty of Sciences Ben M’sik, Casablanca, Morocco

Sara Mifrah received the bachelor's degree in Mathematics and Computer Science from Hassan II University of Casablanca (FSBM) in 2014 and the master's degree in information science and engineering from the same university in 2016, and she is currently a PhD student in computer science. Her research areas include natural language processing, deep learning, and bibliometric analysis. She has served as a reviewer for several journals.

El Habib Benlahmar, Laboratory of Information Processing and Modelling, Hassan II University of Casablanca, Faculty of Sciences Ben M’sik, Casablanca, Morocco

El Habib Benlahmar received his PhD in computer science from the National School of Computer Science and Systems Analysis in 2007. He is currently a professor of higher education at the Faculty of Sciences Ben M'Sik, Laboratory of Computer Science and Modeling, Hassan II University, Casablanca, Morocco. He has published several papers in international journals and in national and international conferences. His research interests include machine learning, e-learning, cloud computing, data science, ontology, deep learning, the Internet of Things, the Semantic Web, bibliometric analysis, mathematics, Semantic Web technologies, mobile applications, educational technology, and human-computer interaction.


Published

2022-05-21

Issue

Section

Intelligent Systems for Smart Applications