Research Scholar, Department of ECM, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Guntur, Andhra Pradesh, India
Professor & Coordinator (FED), Koneru Lakshmaiah Education Foundation, Vaddeswaram, Guntur, Andhra Pradesh, India
This paper proposes a hybrid multimodal sentiment analysis (MSA) model that improves the accuracy of sentiment prediction by combining textual, auditory, and visual information. Traditional sentiment analysis models often struggle with multimodal data because of redundant, overlapping features and weak fusion methods. To overcome these problems, we propose a supervised contrastive learning-based methodology that improves data representation and strengthens multimodal feature fusion. The technique pre-processes Twitter data through tokenization, stemming, and feature extraction, and classifies it with a Particle Swarm Optimization-Deep Learning Modified Neural Network (PSO-DLBMNN). The experimental results, evaluated on accuracy, precision, recall, and F1-score, show that the proposed model outperforms conventional deep learning approaches such as Bi-LSTM and Bi-GRU. In particular, the PSO-DLBMNN model achieved an accuracy of 95.48%, a precision of 96.57%, a recall of 94.87%, and an F1-score of 93.45%, a substantial improvement over the baseline models. These results indicate that the model can integrate multimodal data while mitigating redundancy and noise. The proposed method thus offers a fresh perspective on improving sentiment analysis through enhanced multimodal feature fusion. In summary, the model is applicable to real-time analysis in social media and human-computer interaction systems, and it shows how multimodal data can be used to strengthen sentiment prediction and emotional perception.
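To illustrate the text pre-processing stage named in the abstract (tokenization and stemming of tweets), the following is a minimal sketch using only the Python standard library. The regex tokenizer and the naive suffix-stripping stemmer are illustrative assumptions; the paper does not specify its exact tokenizer or stemming algorithm (a production pipeline would typically use an established stemmer such as Porter's).

```python
import re

def tokenize(text):
    # Lowercase and extract word-like tokens (hypothetical simplification;
    # the exact tokenizer used in the paper is not specified).
    return re.findall(r"[a-z0-9']+", text.lower())

def stem(token):
    # Naive suffix stripping as a stand-in for a real stemmer (e.g. Porter).
    for suffix in ("ing", "ed", "ly", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(tweet):
    # Tokenize, then stem each token, yielding the feature tokens
    # that would feed downstream feature extraction.
    return [stem(t) for t in tokenize(tweet)]

print(preprocess("Loving the new features, updates rolled out quickly!"))
# → ['lov', 'the', 'new', 'feature', 'update', 'roll', 'out', 'quick']
```

The stemmed token stream would then be vectorized (e.g. via embeddings) before fusion with the auditory and visual features and classification by the PSO-DLBMNN model.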
This is an open access article distributed under the Creative Commons Attribution Non-Commercial License (CC BY-NC) License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The statements, opinions, and data contained in the journal are solely those of the individual authors and contributors and not of the publisher and the editor(s). We remain neutral with regard to jurisdictional claims in published maps and institutional affiliations.