Preprint Article, Version 1 (not peer-reviewed). Preserved in Portico.

A Simple Survey of Pre-trained Language Models

Version 1 : Received: 11 August 2022 / Approved: 12 August 2022 / Online: 12 August 2022 (11:43:50 CEST)

How to cite: Zhu, Z. A Simple Survey of Pre-trained Language Models. Preprints 2022, 2022080238. https://doi.org/10.20944/preprints202208.0238.v1

Abstract

Pre-trained Language Models (PTLMs) now achieve remarkable performance on a wide range of NLP tasks. Prior work has produced many state-of-the-art models, which are covered in several long surveys (Qiu et al., 2020). We therefore conduct a short survey on this topic to help researchers grasp the landscape of PTLMs more quickly and comprehensively. This survey provides a concise but comprehensive review of the techniques, benchmarks, and methodologies used in PTLMs, and also introduces the evaluation of PTLM applications.

Keywords

NLP; PTLMs; benchmarks

Subject

Computer Science and Mathematics, Computer Science
