User Guide
Why can I only view 3 results?
You can view all results only when you are connected from the network of a member institution. For non-member institutions, we offer a 1-month free trial when institution officials apply.
Why do I see so many results that aren't mine?
References in many bibliographies are cited as "Surname, I.", so the citations of academics who share the same surname and initials may occasionally be mixed together. This problem is common to citation indexes worldwide.
How can I see only citations to my article?
After searching for the title of your article, click on the details section of the selected article to see the references made to it.
Enhancing Deep Learning-Based Sentiment Analysis Using Static and Contextual Language Models
Year: 2023
Journal: Bitlis Eren Üniversitesi Fen Bilimleri Dergisi
Author:
Abstract:

Sentiment Analysis (SA) is an essential task of Natural Language Processing and is used in various fields such as marketing, brand reputation control, and social media monitoring. The various scores generated by users in product reviews are essential feedback sources for businesses to discover their products' positive or negative aspects. However, it takes work for businesses facing a large user population to accurately assess the consistency of the scores. Recently, automated methodologies based on Deep Learning (DL), which utilize static and especially pre-trained contextual language models, have shown successful performances in SA tasks. To address the issues mentioned above, this paper proposes Multi-layer Convolutional Neural Network-based SA approaches using Static Language Models (SLMs) such as Word2Vec and GloVe and Contextual Language Models (CLMs) such as ELMo and BERT that can evaluate product reviews with ratings. Focusing on improving model inputs by using sentence representations that can store richer features, this study applied SLMs and CLMs to the inputs of DL models and evaluated their impact on SA performance. To test the performance of the proposed approaches, experimental studies were conducted on the Amazon dataset, which is publicly available and considered a benchmark dataset by most researchers. According to the results of the experimental studies, the highest classification performance was obtained by applying the BERT CLM with 82% test and 84% training accuracy scores. The proposed approaches can be applied to various domains' SA tasks and provide insightful decision-making information.
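The abstract above describes multi-layer CNN classifiers whose inputs are static or contextual embeddings. The sketch below illustrates that general idea only; it assumes PyTorch and the Hugging Face transformers library with the bert-base-uncased checkpoint, and the layer counts, hidden sizes, and example text are hypothetical rather than taken from the paper.

```python
# Illustrative sketch: a multi-layer 1D CNN sentiment classifier over
# contextual (BERT) token embeddings. Architecture details are assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class CnnOverBert(nn.Module):
    def __init__(self, num_classes: int = 2, hidden: int = 128):
        super().__init__()
        # Frozen pre-trained contextual language model (CLM) as the embedding layer.
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        for p in self.bert.parameters():
            p.requires_grad = False
        dim = self.bert.config.hidden_size  # 768 for bert-base
        # Multi-layer 1D convolutions over the token dimension.
        self.convs = nn.Sequential(
            nn.Conv1d(dim, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        # (batch, seq_len, dim) contextual embeddings from BERT.
        emb = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask).last_hidden_state
        x = self.convs(emb.transpose(1, 2))   # (batch, hidden, seq_len)
        x = self.pool(x).squeeze(-1)          # (batch, hidden)
        return self.classifier(x)             # sentiment logits

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = CnnOverBert()
batch = tokenizer(["Great product, works as advertised."],
                  return_tensors="pt", padding=True, truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])
```

Freezing the BERT encoder keeps the sketch cheap to run; the paper may instead fine-tune the language model or swap in Word2Vec, GloVe, or ELMo vectors as the input representation.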

Keywords:

Citation Owners
Information: There is no citation to this publication.
Similar Articles

Bitlis Eren Üniversitesi Fen Bilimleri Dergisi

Field: Science and Mathematics; Engineering

Journal Type: National

Metrics
Articles: 948
Citations: 1,897