Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

WATS-SMS: A T5-based French Wikipedia Abstractive Text Summarizer for SMS

Version 1 : Received: 4 August 2021 / Approved: 16 August 2021 / Online: 16 August 2021 (08:21:23 CEST)
Version 2 : Received: 27 August 2021 / Approved: 27 August 2021 / Online: 27 August 2021 (12:56:05 CEST)
Version 3 : Received: 14 September 2021 / Approved: 14 September 2021 / Online: 14 September 2021 (10:48:55 CEST)

A peer-reviewed article of this Preprint also exists.

Fendji, J.L.E.K.; Taira, D.M.; Atemkeng, M.; Ali, A.M. WATS-SMS: A T5-Based French Wikipedia Abstractive Text Summarizer for SMS. Future Internet 2021, 13, 238.

Abstract

Text summarization remains a challenging task in Natural Language Processing despite its many applications in enterprise settings and daily life. One common use case is the summarization of web pages, which can provide an overview of their content to devices with limited features. Despite the increasing penetration rate of mobile devices in rural areas, most of these devices offer limited features, and such areas are often covered only by limited connectivity such as the GSM network. Summarizing web pages into SMS messages therefore becomes an important means of delivering information to such devices. This work introduces WATS-SMS, a T5-based French Wikipedia Abstractive Text Summarizer for SMS, built through a transfer-learning approach: the English pre-trained T5 model is fine-tuned on 25,000 French Wikipedia pages to obtain a French text-summarization model, which is then compared with different approaches from the literature. The objective is twofold: (1) to test the assumption made in the literature that abstractive models provide better results than extractive ones; and (2) to evaluate the performance of our model against other existing abstractive models. A score based on ROUGE metrics gave a value of 52% for articles up to 500 characters, against 34.2% for Transformer-ED and 12.7% for seq2seq-attention, and a value of 77% for larger articles, against 37% for Transformer-DMCA. Moreover, an architecture including a software SMS gateway has been developed to allow owners of mobile devices with limited features to send requests and receive summaries through the GSM network.
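The abstract's transfer-learning step can be illustrated with a minimal sketch using the Hugging Face Transformers library: the English T5 checkpoint is loaded and fine-tuned on French (article, summary) pairs. The checkpoint name, task prefix, sequence lengths, and learning rate below are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of fine-tuning the English pre-trained T5 model on French
# Wikipedia (article, summary) pairs, as the abstract describes.
# "t5-base", the prefix, and all hyperparameters are assumptions.
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def preprocess(article: str, summary: str):
    # T5 is a text-to-text model, so the source text gets a task prefix.
    inputs = tokenizer("summarize: " + article,
                       max_length=512, truncation=True, return_tensors="pt")
    labels = tokenizer(summary,
                       max_length=160, truncation=True, return_tensors="pt")
    inputs["labels"] = labels["input_ids"]
    return inputs

# One gradient step on a single pair; in practice this loop would run over
# the ~25,000 Wikipedia pages mentioned in the abstract.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
batch = preprocess("Texte de l'article Wikipédia ...", "Résumé court ...")
loss = model(**batch).loss
loss.backward()
optimizer.step()
```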
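The ROUGE-based percentages quoted above can be reproduced in spirit with the rouge-score package; the exact ROUGE variant and aggregation used in the paper are not stated here, so ROUGE-1 and ROUGE-L F-scores are shown as an assumption.

```python
# Sketch of scoring a generated summary against a reference with ROUGE.
# The choice of rouge1/rougeL F-measure is an assumption, not the paper's
# documented setup.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=False)
reference = "Résumé de référence de la page Wikipédia."
generated = "Résumé produit par le modèle."
scores = scorer.score(reference, generated)
print(scores["rouge1"].fmeasure, scores["rougeL"].fmeasure)
```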
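Finally, the SMS-gateway flow can be sketched as follows: an incoming SMS names a topic, the server produces a summary, and the reply is split into 160-character GSM segments. The function names summarize and send_sms are hypothetical stand-ins for the fine-tuned model and the gateway's send hook; nothing below is taken from the paper's implementation.

```python
# Illustrative sketch of the request/response flow through the SMS gateway
# described in the abstract; all names here are hypothetical.
from typing import Callable, List

SMS_LIMIT = 160  # classic single-message GSM payload (7-bit alphabet)

def split_into_sms(text: str, limit: int = SMS_LIMIT) -> List[str]:
    # Greedy split on word boundaries so no segment exceeds the SMS limit
    # (single words longer than the limit are left intact for simplicity).
    words, parts, current = text.split(), [], ""
    for w in words:
        candidate = (current + " " + w).strip()
        if len(candidate) <= limit:
            current = candidate
        else:
            parts.append(current)
            current = w
    if current:
        parts.append(current)
    return parts

def handle_request(topic: str,
                   summarize: Callable[[str], str],
                   send_sms: Callable[[str], None]) -> None:
    summary = summarize(topic)           # model inference on the requested page
    for segment in split_into_sms(summary):
        send_sms(segment)                # one GSM message per segment
```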

Keywords

Text summarization; Fine-tuning; Transformers; SMS; Gateway; French Wikipedia.

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
