Submitted: 29 November 2023
Posted: 05 December 2023
Abstract
Keywords:
1. Introduction

- A primary contribution is the incorporation of syntactic information into deep learning models for TOWE. The approach exploits syntax-based opinion possibility scores and syntactic inter-word connections, both previously overlooked in deep learning approaches to the task. By leveraging word distances in the dependency tree and the syntactic connections between words, SEDLM enriches representation learning and more reliably identifies the opinion words associated with a given target (a minimal illustrative sketch of the distance-based scores follows this list).
- The study introduces a regularization technique that explicitly separates the representation vectors of target-oriented opinion words from those of the other words in a sentence. In addition, Ordered-Neuron Long Short-Term Memory networks (ON-LSTM) are used to compute model-based possibility scores; enforcing consistency between these scores and the syntax-based possibility scores guides the model toward the likelihood of each word being an opinion word for the given target (see the consistency-loss sketch after this list).
- Extensive experiments and analysis validate SEDLM on four standard benchmark datasets (14res, 14lap, 15res, 16res), where it outperforms existing approaches and establishes a new state of the art for TOWE within aspect-based sentiment analysis.
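To make the distance-based scoring in the first contribution concrete, the following is a minimal Python sketch of how syntax-based possibility scores could be derived from a dependency parse. It is our own simplified illustration, not the authors' released code: the head-index encoding of the tree, the 1/(1 + distance) weighting, and all function names are assumptions.

```python
from collections import deque

def dependency_distances(heads, target_idx):
    """Breadth-first distances from the target token to every token,
    treating the dependency tree (head indices, root = -1) as undirected."""
    n = len(heads)
    adj = [[] for _ in range(n)]
    for child, head in enumerate(heads):
        if head >= 0:
            adj[child].append(head)
            adj[head].append(child)
    dist = [float("inf")] * n
    dist[target_idx] = 0
    queue = deque([target_idx])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] == float("inf"):
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def syntax_scores(heads, target_idx):
    """Turn tree distances into a normalized syntax-based possibility
    distribution: words closer to the target receive higher scores."""
    dist = dependency_distances(heads, target_idx)
    raw = [1.0 / (1.0 + d) for d in dist]
    total = sum(raw)
    return [r / total for r in raw]

# Toy example: "The battery life is great", target head = "life" (index 2).
# heads[i] is the index of token i's syntactic head; the root has head -1.
heads = [2, 2, 4, 4, -1]
print(syntax_scores(heads, target_idx=2))
# The opinion word "great" (tree distance 1) scores higher than "is" (distance 2).
```

Only the basic distance-to-score mapping is shown here; in the paper these scores are further combined with the model-based scores described in the second contribution.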
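The second contribution combines two terms: a consistency loss that aligns model-based and syntax-based possibility scores, and a regularizer that pushes opinion-word representations away from the rest of the sentence. The PyTorch snippet below is a hedged sketch of such a combination under the paper's KL-style consistency idea; the hinge-style regularizer, the cosine-centroid comparison, and the loss weight are our own illustrative choices rather than the exact formulation.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model_logits, syntax_probs):
    """KL divergence pulling the model-based per-token distribution
    (e.g. produced from ON-LSTM hidden states) toward the syntax-based one.
    model_logits: raw scores, shape (seq_len,); syntax_probs: probabilities."""
    log_q = F.log_softmax(model_logits, dim=-1)
    return F.kl_div(log_q, syntax_probs, reduction="sum")

def representation_regularizer(hidden, opinion_mask, margin=1.0):
    """Hinge penalty discouraging high cosine similarity between opinion-word
    vectors and the centroid of the remaining words in the sentence.
    hidden: (seq_len, dim); opinion_mask: boolean tensor of shape (seq_len,)."""
    others = hidden[~opinion_mask].mean(dim=0, keepdim=True)   # (1, dim)
    opinions = hidden[opinion_mask]                            # (k, dim)
    sim = F.cosine_similarity(opinions, others, dim=-1)        # (k,)
    return F.relu(sim - (1.0 - margin)).mean()

# Toy usage on the five-token example above (hidden size 8).
hidden = torch.randn(5, 8)
model_logits = torch.randn(5)
syntax_probs = torch.tensor([0.18, 0.18, 0.35, 0.12, 0.17])  # e.g. from syntax_scores()
opinion_mask = torch.tensor([False, False, False, False, True])  # "great"
loss = consistency_loss(model_logits, syntax_probs) \
       + 0.1 * representation_regularizer(hidden, opinion_mask)
print(loss.item())
```

In practice these terms would be added to the sequence-labeling loss; the 0.1 weight here is purely illustrative.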
2. Related Work
3. Methodology
3.1. Sentence Encoding
3.2. Syntax-Model Consistency
3.3. Graph Convolutional Networks
3.4. Representation Regularization
4. Experiment
4.1. Comparing to the State of the Art
4.2. Model Analysis and Ablation Study
5. Conclusion
References
- Fan, Z.; Wu, Z.; Dai, X.; Huang, S.; Chen, J. Target-oriented opinion words extraction with target-fused neural sequence labeling. NAACL-HLT, 2019.
- Tang, D.; Qin, B.; Feng, X.; Liu, T. Effective LSTMs for target-dependent sentiment classification. COLING, 2016.
- Wang, Y.; Huang, M.; Zhu, X.; Zhao, L. Attention-based LSTM for aspect-level sentiment classification. EMNLP, 2016.
- Fei, H.; Ren, Y.; Zhang, Y.; Ji, D.; Liang, X. Enriching contextualized language model from knowledge graph for biomedical information extraction. Briefings in Bioinformatics 2021, 22.
- Xue, W.; Li, T. Aspect based sentiment analysis with gated convolutional networks. ACL, 2018.
- Wu, Z.; Zhao, F.; Dai, X.Y.; Huang, S.; Chen, J. Latent Opinions Transfer Network for Target-Oriented Opinion Words Extraction. arXiv preprint arXiv:2001.01989, 2020.
- Hu, M.; Liu, B. Mining and summarizing customer reviews. Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, 2004, pp. 168–177.
- Fei, H.; Wu, S.; Ren, Y.; Li, F.; Ji, D. Better Combine Them Together! Integrating Syntactic Constituency and Dependency Representations for Semantic Role Labeling. Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, 2021, pp. 549–559.
- Zhuang, L.; Jing, F.; Zhu, X.Y. Movie review mining and summarization. Proceedings of the 15th ACM international conference on Information and knowledge management, 2006, pp. 43–50.
- Fei, H.; Zhang, Y.; Ren, Y.; Ji, D. Latent Emotion Memory for Multi-Label Emotion Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 2020, pp. 7692–7699.
- Shen, Y.; Tan, S.; Sordoni, A.; Courville, A. Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks. ICLR, 2019.
- Kipf, T.N.; Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. ICLR, 2017.
- Qiu, G.; Liu, B.; Bu, J.; Chen, C. Opinion word expansion and target extraction through double propagation. Computational Linguistics 2011, 37, 9–27.
- Liu, P.; Joty, S.; Meng, H. Fine-grained opinion mining with recurrent neural networks and word embeddings. EMNLP, 2015.
- Poria, S.; Cambria, E.; Gelbukh, A. Aspect extraction for opinion mining with a deep convolutional neural network. Knowledge-Based Systems 2016, 108, 42–49.
- Yin, Y.; Wei, F.; Dong, L.; Xu, K.; Zhang, M.; Zhou, M. Unsupervised word and dependency path embeddings for aspect term extraction. IJCAI, 2016.
- Xu, H.; Liu, B.; Shu, L.; Yu, P.S. Double embeddings and cnn-based sequence labeling for aspect extraction. ACL, 2018.
- Htay, S.S.; Lynn, K.T. Extracting product features and opinion words using pattern knowledge in customer reviews. The Scientific World Journal 2013, 2013.
- Shamshurin, I. Extracting domain-specific opinion words for sentiment analysis. Mexican International Conference on Artificial Intelligence, 2012, pp. 58–68.
- Fei, H.; Wu, S.; Li, J.; Li, B.; Li, F.; Qin, L.; Zhang, M.; Zhang, M.; Chua, T.S. LasUIE: Unifying Information Extraction with Latent Adaptive Structure-aware Generative Language Model. Proceedings of the Advances in Neural Information Processing Systems, NeurIPS 2022, 2022, pp. 15460–15475.
- Li, J.; Fei, H.; Liu, J.; Wu, S.; Zhang, M.; Teng, C.; Ji, D.; Li, F. Unified Named Entity Recognition as Word-Word Relation Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 2022, pp. 10965–10973.
- Li, J.; Xu, K.; Li, F.; Fei, H.; Ren, Y.; Ji, D. MRN: A Locally and Globally Mention-Based Reasoning Network for Document-Level Relation Extraction. Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 2021, pp. 1359–1370.
- Fei, H.; Ren, Y.; Ji, D. Boundaries and edges rethinking: An end-to-end neural model for overlapping entity relation extraction. Information Processing & Management 2020, 57, 102311.
- Wang, F.; Li, F.; Fei, H.; Li, J.; Wu, S.; Su, F.; Shi, W.; Ji, D.; Cai, B. Entity-centered Cross-document Relation Extraction. Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022, pp. 9871–9881.
- Fei, H.; Li, F.; Li, B.; Ji, D. Encoder-Decoder Based Unified Semantic Role Labeling with Label-Aware Syntax. Proceedings of the AAAI Conference on Artificial Intelligence, 2021, pp. 12794–12802.
- Cao, H.; Li, J.; Su, F.; Li, F.; Fei, H.; Wu, S.; Li, B.; Zhao, L.; Ji, D. OneEE: A One-Stage Framework for Fast Overlapping and Nested Event Extraction. Proceedings of the 29th International Conference on Computational Linguistics, 2022, pp. 1953–1964.
- Fei, H.; Wu, S.; Ren, Y.; Zhang, M. Matching Structure for Dual Learning. Proceedings of the International Conference on Machine Learning, ICML, 2022, pp. 6373–6391.
- Wu, S.; Fei, H.; Ren, Y.; Ji, D.; Li, J. Learn from Syntax: Improving Pair-wise Aspect and Opinion Terms Extraction with Rich Syntactic Knowledge. Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, 2021, pp. 3957–3963.
- Wu, S.; Fei, H.; Li, F.; Zhang, M.; Liu, Y.; Teng, C.; Ji, D. Mastering the Explicit Opinion-Role Interaction: Syntax-Aided Neural Transition System for Unified Opinion Role Labeling. Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022, pp. 11513–11521.
- Wu, S.; Fei, H.; Qu, L.; Ji, W.; Chua, T.S. NExT-GPT: Any-to-Any Multimodal LLM, 2023.
- Fei, H.; Zhang, M.; Ji, D. Cross-Lingual Semantic Role Labeling with High-Quality Translated Training Corpus. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020, pp. 7014–7026.
- Liu, K.; Xu, H.L.; Liu, Y.; Zhao, J. Opinion target extraction using partially-supervised word alignment model. Twenty-Third International Joint Conference on Artificial Intelligence, 2013.
- Wang, W.; Pan, S.J.; Dahlmeier, D.; Xiao, X. Recursive neural conditional random fields for aspect-based sentiment analysis. EMNLP, 2016.
- Wang, W.; Pan, S.J.; Dahlmeier, D.; Xiao, X. Coupled multi-layer attentions for co-extraction of aspect and opinion terms. AAAI, 2017.
- Li, X.; Lam, W. Deep multi-task learning for aspect term extraction with memory interaction. EMNLP, 2017.
- Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. NAACL-HLT, 2019.
| Model | 14res P | 14res R | 14res F1 | 14lap P | 14lap R | 14lap F1 | 15res P | 15res R | 15res F1 | 16res P | 16res R | 16res F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Distance-rule [7] | 58.39 | 43.59 | 49.92 | 50.13 | 33.86 | 40.42 | 54.12 | 39.96 | 45.97 | 61.90 | 44.57 | 51.83 |
| Dependency-rule [9] | 64.57 | 52.72 | 58.04 | 45.09 | 31.57 | 37.14 | 65.49 | 48.88 | 55.98 | 76.03 | 56.19 | 64.62 |
| LSTM [14] | 52.64 | 65.47 | 58.34 | 55.71 | 57.53 | 56.52 | 57.27 | 60.69 | 58.93 | 62.46 | 68.72 | 65.33 |
| BiLSTM [14] | 58.34 | 61.73 | 59.95 | 64.52 | 61.45 | 62.71 | 60.46 | 63.65 | 62.00 | 68.68 | 70.51 | 69.57 |
| Pipeline [1] | 77.72 | 62.33 | 69.18 | 72.58 | 56.97 | 63.83 | 74.75 | 60.65 | 66.97 | 81.46 | 67.81 | 74.01 |
| TC-BiLSTM [1] | 67.65 | 67.67 | 67.61 | 62.45 | 60.14 | 61.21 | 66.06 | 60.16 | 62.94 | 73.46 | 72.88 | 73.10 |
| IOG [1] | 82.85 | 77.38 | 80.02 | 73.24 | 69.63 | 71.35 | 76.06 | 70.71 | 73.25 | 82.25 | 78.51 | 81.69 |
| LOTN [6] | 84.00 | 80.52 | 82.21 | 77.08 | 67.62 | 72.02 | 76.61 | 70.29 | 73.29 | 86.57 | 80.89 | 83.62 |
| SEDLM (Ours) | 83.23 | 81.46 | 82.33 | 73.87 | 77.78 | 75.77 | 76.63 | 81.14 | 78.81 | 87.72 | 84.38 | 86.01 |
| Model | 14res (F1) | 14lap (F1) | 15res (F1) | 16res (F1) |
|---|---|---|---|---|
| SEDLM | 82.33 | 75.77 | 78.81 | 86.01 |
| SEDLM - KL | 80.91 | 73.34 | 76.21 | 83.78 |
| SEDLM - ON-LSTM | 78.99 | 70.28 | 71.39 | 81.13 |
| SEDLM_wLSTM | 81.03 | 73.98 | 74.43 | 82.81 |
| Model | 14res (F1) | 14lap (F1) | 15res (F1) | 16res (F1) |
|---|---|---|---|---|
| SEDLM | 82.33 | 75.77 | 78.81 | 86.01 |
| SEDLM - | 80.98 | 73.05 | 75.51 | 83.72 |
| SEDLM - | 81.23 | 74.18 | 76.32 | 85.20 |
| Model | 14res (F1) | 14lap (F1) | 15res (F1) | 16res (F1) |
|---|---|---|---|---|
| SEDLM | 82.33 | 75.77 | 78.81 | 86.01 |
| SEDLM - REG | 80.88 | 73.89 | 75.92 | 84.03 |
| SEDLM_REG_wMP-GCN | 80.72 | 72.44 | 74.28 | 84.29 |
| SEDLM - GCN | 81.01 | 70.88 | 72.98 | 82.58 |
| SEDLM - GCN - REG | 79.23 | 71.04 | 72.53 | 82.13 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).