Article
Preserved in Portico. This version is not peer-reviewed.
A Simple Survey of Pre-trained Language Models
Version 1: Received: 11 August 2022 / Approved: 12 August 2022 / Online: 12 August 2022 (11:43:50 CEST)
How to cite: Zhu, Z. A Simple Survey of Pre-trained Language Models. Preprints 2022, 2022080238. https://doi.org/10.20944/preprints202208.0238.v1
Abstract
Pre-trained Language Models (PTLMs) now achieve remarkable performance on many NLP tasks. Previous researchers have produced numerous state-of-the-art models, which are covered in several long surveys (Qiu et al., 2020). We therefore conduct a simple, short survey on this topic to help researchers grasp the landscape of PTLMs more quickly and comprehensively. In this short survey, we provide a concise but comprehensive review of the techniques, benchmarks, and methodologies used in PTLMs, and we also introduce the applications and evaluation of PTLMs.
Keywords
NLP; PTLMs; benchmarks
Subject
Computer Science and Mathematics, Computer Science
Copyright: This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.