Preprint
Article

A Simple Survey of Pre-trained Language Models

This version is not peer-reviewed.

Submitted:

11 August 2022

Posted:

12 August 2022

Abstract
Pre-trained Language Models (PTLMs) have achieved remarkable success on a wide range of NLP tasks. Previous researchers have produced many state-of-the-art models, which are covered in several long surveys (Qiu et al., 2020). We therefore conduct a simple and short survey on this topic to help researchers grasp the overall landscape of PTLMs more quickly and comprehensively. In this short survey, we provide a concise but comprehensive review of the techniques, benchmarks, and methodologies of PTLMs, and we also introduce the applications and evaluation of PTLMs.
Keywords: 
Subject: Computer Science and Mathematics - Computer Science
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.