State-of-the-art models for aspect-targeted opinion word extraction (ATOWE) predominantly pair word-level BERT encoders with graph convolutional networks (GCNs) that inject syntactic trees, yet this combination has yielded only limited gains. Motivated by the ability of BERT subwords to represent rare or context-poor words, this study shifts from syntactic trees to BERT subwords and drops GCNs from the architecture. Our approach, the Aspect-Enhanced Wordpiece Extraction Model (AEWEM), instead focuses on strengthening the aspect representation during encoding: rather than the conventional single-sentence input, we feed the encoder a sentence-aspect pair. AEWEM outperforms prior models on benchmark datasets, providing a strong baseline for future work in this area.
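To make the paired sentence-aspect input concrete, the sketch below assembles a BERT-style two-segment sequence in which the sentence is segment A and the aspect term is appended as segment B. The function name `build_pair_input` and the example subword tokens are illustrative assumptions, not the paper's exact implementation; in practice a wordpiece tokenizer would produce the subwords.

```python
def build_pair_input(sentence_tokens, aspect_tokens):
    """Assemble a BERT-style sentence-aspect pair input (hypothetical sketch).

    Segment A holds the sentence subwords; segment B repeats the aspect term,
    so self-attention can condition every sentence subword on the aspect.
    """
    tokens = ["[CLS]"] + sentence_tokens + ["[SEP]"] + aspect_tokens + ["[SEP]"]
    # Token-type (segment) ids: 0 for the sentence half, 1 for the aspect half.
    segment_ids = [0] * (len(sentence_tokens) + 2) + [1] * (len(aspect_tokens) + 1)
    return tokens, segment_ids

# Example with illustrative wordpiece subwords; the aspect is "battery life".
sentence = ["the", "battery", "life", "is", "great", "##ly", "impressive"]
aspect = ["battery", "life"]
tokens, segment_ids = build_pair_input(sentence, aspect)
```

Running the example yields one token-type id per token, with the trailing aspect segment marked as 1, matching BERT's standard sentence-pair convention.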