Version 1: Received: 4 August 2021 / Approved: 16 August 2021 / Online: 16 August 2021 (08:21:23 CEST)
Version 2: Received: 27 August 2021 / Approved: 27 August 2021 / Online: 27 August 2021 (12:56:05 CEST)
Version 3: Received: 14 September 2021 / Approved: 14 September 2021 / Online: 14 September 2021 (10:48:55 CEST)
Fendji, J.L.E.K.; Taira, D.M.; Atemkeng, M.; Ali, A.M. WATS-SMS: A T5-Based French Wikipedia Abstractive Text Summarizer for SMS. Future Internet 2021, 13, 238.
Abstract
Text summarization remains a challenging task in Natural Language Processing despite its plethora of applications in enterprises and daily life. A common use case is the summarization of web pages, which can provide an overview of a page to devices with limited features. Indeed, despite the increasing penetration rate of mobile devices in rural areas, most of these devices offer limited features, and such areas are often covered only by limited connectivity such as the GSM network. Summarizing web pages into SMS therefore becomes an important way to deliver information to these devices. This work introduces WATS-SMS, a T5-based French Wikipedia Abstractive Text Summarizer for SMS, built through a transfer learning approach: the English pre-trained T5 model is retrained on 25,000 French Wikipedia pages to obtain a French text summarization model, which is then compared with different approaches from the literature. The objective is twofold: (1) to check the assumption made in the literature that abstractive models provide better results than extractive ones; and (2) to evaluate the performance of our model against other existing abstractive models. A score based on ROUGE metrics gave a value of 52% for articles up to 500 characters long, against 34.2% for transformer-ED and 12.7% for seq2seq-attention, and a value of 77% for longer articles, against 37% for transformer-DMCA. Moreover, an architecture including a software SMS gateway has been developed to allow owners of mobile devices with limited features to send requests and receive summaries over the GSM network.
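The abstract reports scores "based on ROUGE metrics". As an illustration only (the paper's exact ROUGE variant, tokenization, and averaging scheme are not specified on this page), here is a minimal ROUGE-1 F1 sketch: unigram overlap between a reference summary and a model-generated candidate.

```python
from collections import Counter


def rouge1_f1(reference: str, candidate: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall
    between a reference summary and a candidate summary.
    Tokenization here is naive whitespace splitting, for illustration."""
    ref_tokens = reference.lower().split()
    cand_tokens = candidate.lower().split()
    if not ref_tokens or not cand_tokens:
        return 0.0
    # Clipped unigram overlap: each token counts at most as often
    # as it appears in the reference.
    overlap = sum((Counter(ref_tokens) & Counter(cand_tokens)).values())
    precision = overlap / len(cand_tokens)
    recall = overlap / len(ref_tokens)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge1_f1("le chat dort", "le chat dort")` returns 1.0, while a candidate that only covers half of the reference's unigrams scores proportionally lower. Published implementations typically add stemming and report ROUGE-1, ROUGE-2, and ROUGE-L together.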
Keywords
Text summarization; Fine-tuning; Transformers; SMS; Gateway; French Wikipedia.
Subject
Computer Science and Mathematics, Algebra and Number Theory
Copyright: This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Received: 27 August 2021
Commenter: Jean Louis Fendji Kedieng Ebongue
Commenter's Conflict of Interests: Author
Comment: We added more text on the motivation of the work. We provided an extra table to comment on the results. We better described the dataset creation. We redid the last figure for better quality. We provided several appendices.