In this paper, we propose a novel approach named EmbedHDP to enhance the evaluation models used to assess sentence suggestions in nursing care record applications. The key question is whether these suggestions receive assessments that align with those of caregivers acting as human evaluators. This is crucial because the information provided relates directly to the health and condition of the elderly. The motivation behind this proposal stems from challenges observed in previous models, such as BERTScore, which struggled to evaluate the nursing care record domain effectively, consistently assigning quality scores above 60% to generated sentence suggestions. Additionally, cosine similarity, although widely used, is insensitive to word order, which can lead to misjudgments of semantic differences between sentences that share similar word sets. Similarly, ROUGE, which relies on lexical overlap, tends to overlook semantic accuracy. Furthermore, despite its utility, BLEU neglects semantic coherence in its evaluations. EmbedHDP excels at evaluating nursing care records by handling a variety of sentence structures and medical terminology and by providing differentiated, contextually relevant assessments. We used a dataset comprising 320 sentence pairs of comparable length. The results show that EmbedHDP outperformed the other evaluation models, achieving a coefficient score of 61%, followed by cosine similarity at 59% and BERTScore at 58%. This demonstrates the effectiveness of our proposed approach in improving the evaluation of sentence suggestions in nursing care record applications.
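
For readers unfamiliar with the baseline metrics named above, the following minimal sketch illustrates how embedding-based cosine similarity and BERTScore can be computed for a single candidate/reference pair. It is not the EmbedHDP method itself; the example sentences, the `sentence-transformers` encoder, the `all-MiniLM-L6-v2` model choice, and the `bert-score` package are assumptions made purely for illustration.

```python
# Illustrative sketch of the baseline metrics discussed above (not EmbedHDP).
# Assumes the sentence-transformers and bert-score packages are installed;
# the example sentences and model name are hypothetical.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity
from bert_score import score as bertscore

# Hypothetical nursing care record pair (reference vs. generated suggestion)
reference = "Assisted the resident with breakfast; appetite was good."
candidate = "Helped the resident eat breakfast and noted a good appetite."

# Baseline 1: cosine similarity between sentence embeddings
# (insensitive to word order within similar word sets, as noted above)
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice
ref_vec, cand_vec = encoder.encode([reference, candidate])
cos_sim = cosine_similarity([ref_vec], [cand_vec])[0][0]

# Baseline 2: BERTScore F1 for the same pair
_, _, f1 = bertscore([candidate], [reference], lang="en")

print(f"Cosine similarity: {cos_sim:.3f}")
print(f"BERTScore F1:      {f1.item():.3f}")
```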