ARTICLE | doi:10.20944/preprints202011.0369.v2
Online: 27 November 2020 (16:43:48 CET)
Social media giants like Facebook are struggling to keep up with fake news, because disinformation diffuses at lightning speed. The COVID-19 (coronavirus) pandemic, for example, is testing citizens' ability to distinguish real news from disinformation. Cyber-criminals take advantage of platforms' inability to contain the diffusion of fake news: fake news is created as a means to manipulate readers into performing malicious IT activities, such as clicking on fraudulent links embedded in fake news posts. However, no previous study has investigated the strategies used to create fake news on social media. We therefore analysed five data-sets of online news articles (both fake and legitimate) using Machine Learning (ML) to investigate the strategies for creating fake news on social media platforms. Our findings yield a threat model that captures the strategies for crafting fake news that is highly likely to diffuse on social media platforms.
ARTICLE | doi:10.20944/preprints202111.0024.v1
Subject: Mathematics & Computer Science, Analysis Keywords: Fake news detection; Deep learning; Feature Engineering
Online: 1 November 2021 (15:34:46 CET)
The rapid infiltration of fake news is a flaw in the otherwise valuable internet, a virtually global network that allows for the simultaneous exchange of information. While a common, and normally effective, approach to such classification tasks is to design a deep learning-based model, the subjectivity behind the writing and production of misleading news undermines this technique. Deep learning models are unexplainable by nature: because they lack the explicit features used in traditional machine learning, contextualizing their results is impossible. This paper emphasizes the need for feature engineering to address this problem effectively: containing the spread of fake news at the source, not after it has become globally prevalent. Insights from the extracted features were used to manipulate the text, which was then tested on deep learning models. The study thereby demonstrates the previously unknown yet substantial impact that these explicit features have on deep learning models.
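The explicit, engineered features this abstract contrasts with opaque deep learning might look like the following minimal sketch; the feature set here (word counts, punctuation, capitalization) is purely illustrative and not the paper's actual feature list.

```python
# Illustrative stylistic features for a news headline. These particular
# features are hypothetical examples, not the study's engineered set.
def extract_features(text: str) -> dict:
    words = text.split()
    n = max(len(words), 1)  # avoid division by zero on empty text
    return {
        "n_words": len(words),
        "n_exclaim": text.count("!"),
        "all_caps_ratio": sum(w.isupper() for w in words) / n,
        "avg_word_len": sum(len(w) for w in words) / n,
    }

feats = extract_features("SHOCKING!! You WON'T Believe This Cure")
```

Unlike a deep network's hidden activations, each value in `feats` can be inspected and reasoned about directly, which is the explainability argument the abstract makes.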
ARTICLE | doi:10.20944/preprints202012.0023.v1
Online: 1 December 2020 (13:13:27 CET)
The infodiet of young Spanish adults aged 18 to 25 was analysed to determine their attitude towards fake news. The objectives were: to establish whether they have received any training on fake news; to determine whether they know how to identify fake information; and to investigate whether they spread it. The study employed a descriptive quantitative method: a survey of 500 representative interviews of the Spanish population aged between 18 and 25, conducted through a structured questionnaire. The results indicate that respondents are aware of the importance of training, although they generally do not know of any course and, when they do, they tend not to enrol, either through lack of interest or lack of time. These young adults feel that they know how to identify fake content, and indeed that they do so very well; however, they do not use the best tools. While they do not always verify information, they mainly question its credibility when it seems meaningless. Nevertheless, they do not tend to spread fake information. We conclude that media and information literacy training (MILT) focused on the main issues identified is necessary in educational centres.
ARTICLE | doi:10.20944/preprints202105.0477.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: emotions; sentiment analysis; machine learning; fake news; disinformation
Online: 20 May 2021 (10:29:52 CEST)
Researchers are concerned about the impact of fake news on democracy, and it can also escalate into life-threatening problems. As fake news continues to spread, so do people's behaviours and emotions around fake news on social media. This opens a back door for cyber-criminals to entice people, exploiting victims' emotional and behavioural traits, into clicking on fraudulent links (e.g. phishing links) embedded in fake news as they read. We therefore investigate how people's emotional and behavioural features influence the reading and diffusion of fake news on social media. We propose a classification model that incorporates people's behavioural features and emotions to better detect fake news on social media. Our results reveal that fake news carries more negative emotion than legitimate news, and that the title and the content of a news post are equally important. Furthermore, we identify strong correlations between some of the behavioural and emotional features. Finally, we conclude that emotional and behavioural features matter for fake news classification, as they improve detection accuracy, and the findings of our study can ultimately be used to develop a risk-score prediction model for fake news on social media.
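A toy version of the emotion-feature idea described above can be sketched with a lexicon-based negative-emotion score computed separately for title and content; the lexicon words and weighting here are invented placeholders, not the study's actual feature set or model.

```python
# Hypothetical negative-emotion lexicon (placeholder words, not the
# study's actual resources).
NEGATIVE = {"hoax", "danger", "fear", "scam", "outrage"}

def negative_score(text: str) -> float:
    """Fraction of words in the text that appear in the negative lexicon."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in NEGATIVE for w in words) / max(len(words), 1)

def news_features(title: str, body: str) -> list:
    # Title and body scored separately and weighted equally, mirroring
    # the study's finding that both are equally important.
    return [negative_score(title), negative_score(body)]
```

These two scores could then be concatenated with behavioural features (e.g. share or comment counts) as input to any standard classifier.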
ARTICLE | doi:10.20944/preprints202006.0243.v1
Online: 19 June 2020 (12:15:46 CEST)
All over the world, the growth of microblogs and other social platforms indicates that social media is now the focus of the Internet. Daily life, study, and work are all influenced by news on social media. The microblog is a relatively new type of media that spreads information rapidly through crowds. Suppose a user searches a microblog for information on a specific topic: he or she will easily find plenty of related information, and the problem becomes identifying which of it is correct. Normally, multi-document summarization handles a collection of documents on one topic by extracting the valuable points and discarding useless information; here, the topic content must additionally be extracted using topic factors and social patterns. A topic factor is lexical information related to the topic, while a social pattern reflects the interaction modes particular to online social networks, such as commenting and reposting. During the lockdown period, people encountered fake news on mobile devices and the internet; no doubt anyone with a social media account has seen at least one example. Detecting false information is one of humanity's great challenges. Fake news items were collected from 150 people using social media. The aim of this paper is to investigate the truthfulness of the news people share on social media using a K-Nearest Neighbour (KNN) classifier.
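The KNN classification named above can be sketched in a few lines: a query item is labelled by majority vote among its k closest training items. The feature vectors and labels below are invented examples; the paper's actual features for the 150 collected items are not specified in the abstract.

```python
# Minimal k-nearest-neighbour classifier over numeric feature vectors.
from collections import Counter
import math

def knn_predict(train, labels, query, k=3):
    # Sort training points by Euclidean distance to the query,
    # then take a majority vote among the k nearest labels.
    dists = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    top = [y for _, y in dists[:k]]
    return Counter(top).most_common(1)[0][0]

# Invented 2-D feature vectors and labels for illustration only.
X = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
y = ["real", "real", "fake", "fake"]
knn_predict(X, y, (0.85, 0.85))  # nearest neighbours vote "fake"
```

In practice the vectors would be derived from the news text (e.g. term frequencies), and k would be tuned on held-out data.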
CONCEPT PAPER | doi:10.20944/preprints201803.0238.v1
Subject: Social Sciences, Education Studies Keywords: applied behaviour analysis; autism; policy; randomised controlled trials; fake news
Online: 28 March 2018 (12:40:58 CEST)
Since autism was first recognised, prevalence has increased rapidly. The growing economic as well as social cost to society can only be mitigated by effective interventions and supports. It is therefore not surprising that most governments have developed public policy documents to address the management of autism. Over the past 40-50 years, meaningful evidence has accrued showing that interventions based on the scientific discipline of Applied Behaviour Analysis (ABA) can help people with autism reach their potential. In view of this, nearly all of North America has laws to mandate that ABA-based interventions are available through the health care systems. In contrast, across Europe there are no such laws. In fact, the National Institute for Health and Care Excellence (NICE), the body guiding health and social policy in the UK, concluded that it could not find any evidence to support ABA, and therefore could not recommend it. This paper addresses the reasons for these diametrically opposed perspectives. In particular, it examines what happens when health and social care policy is misinformed about effective autism intervention.
ARTICLE | doi:10.20944/preprints202102.0342.v1
Subject: Social Sciences, Other Keywords: Online Fake News; Interpersonal influence; Self-evaluation; Motivation for Change; Food Consumption.
Online: 17 February 2021 (07:39:34 CET)
In the Italian context, online fake news about food is spreading ever faster and more widely, making it more difficult for the public to recognize reliable information. Moreover, this phenomenon is deteriorating relations with public institutions and industry. The purpose of this article is to provide a more advanced understanding of the individual psychological factors and the social influences that contribute to belief in food-related online fake news, and of the aspects that can increase or mitigate this risk. Data were collected with a self-report questionnaire between February and March 2019. We obtained 1004 valid questionnaires filled out by a representative sample of the Italian population, extracted by stratified sampling. We used structural equation modelling (SEM) and multi-group analyses to test our hypotheses. The results show that self-evaluation negatively affects social influence, which in turn positively affects belief in online fake news. Moreover, this latter relationship is moderated by readiness to change. Our results suggest that individual psychological characteristics and social influence are important in explaining belief in online fake news in the food sector; however, a pivotal role is played by the motivation to change one's lifestyle. This should be considered when engaging people in clear and effective communication.
ARTICLE | doi:10.20944/preprints202104.0778.v1
Keywords: Covid-19; fake news; health protocols; belief
Online: 29 April 2021 (14:31:37 CEST)
Along with the increasing number of Covid-19 cases, false news and misinformation about Covid-19 are spreading ever more widely. This article aims to analyze public opinion about the various hoaxes that circulated widely in Indonesia during the pandemic. A mixed method was used: a literature review, searching related journals on the distribution of hoaxes during the pandemic, combined with an online survey conducted via Google Forms. The research indicates that fake news spread rapidly during the pandemic, with more than 45% of participants reporting that they often heard hoax news about Covid-19 in online media. This evidence also shows that hoax news can affect a person's belief about the Covid-19 virus.
ARTICLE | doi:10.20944/preprints202207.0179.v1
Subject: Social Sciences, Library & Information Science Keywords: Telegram; media; Spain; channels; bots; fake channels; rankings
Online: 12 July 2022 (09:13:54 CEST)
Background. Telegram, an Industry 4.0-style, Russian-born communication service, is one of the world's most widespread communication platforms despite Putin's censorship. The availability of channels and bots has made it a viable broadcast channel for any media outlet. Objectives. We asked the following questions: Do Spanish media outlets use Telegram channels? Which media outlets? Are they verified? What is their volume of subscribers? Can this information be used to rank media outlets? We identified the media channels and collected numerous variables from each one. Results and conclusions. Forty-two Spanish media outlets have Telegram channels, 26 of which are ranked in the directory. Less than half of these channels are verified by the platform, and only three are linked to from their websites. This lack of verification could lead to the proliferation of fake channels. We created a ranking; the top 10 includes two foreign, six national, and two regional media outlets.
ARTICLE | doi:10.20944/preprints202210.0374.v1
Subject: Social Sciences, Other Keywords: Influencer; Instagram; health promotion; followers; social media; fake content; media education
Online: 25 October 2022 (03:13:48 CEST)
The pandemic has accentuated the power that influencers wield over their followers. Various scientific approaches highlight the lack of moral and ethical responsibility of these creators when disseminating content under highly sensitive tags such as health. This article presents a correlational statistical study of 443 Instagram accounts with more than one million followers belonging to health-related categories. The study aims to describe the content of these profiles and their authors, to determine whether accounts that disseminate health-related content actually promote health, and to identify predictive factors of their content topics. In addition, it aims to portray their followers and establish correlations between the gender of the self-described health influencers, the characteristics of their audience, and the messages that these prescribers share. Health promotion is not the predominant narrative among these influencers, who tend to promote beauty and normative bodies over health matters. A correlation is observed between posting health content, the gender of the influencers, and the average age of their audiences. The study concludes with a discussion on the role of public media education and the improvement of ethical protocols on social networks to limit the impact of misleading and false content on sensitive topics, thereby increasing the influence of real health prescribers.
ARTICLE | doi:10.20944/preprints202210.0059.v1
Subject: Engineering, Control & Systems Engineering Keywords: Artificial Intelligence; Cybersecurity; Remote Control; Fake Signals; Replay Attack; Deep Learning; ResNet50; Transfer Learning
Online: 6 October 2022 (09:16:56 CEST)
Keyless systems have replaced the old-fashioned method of inserting a physical key into a keyhole to, for example, unlock a door, because physical keys are inconvenient and easy for threat actors to exploit. Keyless systems use radio frequency (RF) technology as the interface for transmitting signals from the key fob to the vehicle. However, keyless systems can be compromised by a threat actor who intercepts the transmitted signal and performs a replay attack. In this paper, we propose a transfer learning-based model to identify replay attacks launched against remote keyless-controlled vehicles. Specifically, the system uses a pre-trained ResNet50 deep neural network to classify the wireless remote signals used to lock or unlock the doors of a remote-controlled vehicle. The signals are classified into three classes: real signal, fake signal with high gain, and fake signal with low gain. We trained our model for 100 epochs (3800 iterations) on the KeFRA 2022 dataset, a modern dataset. The model achieved a final validation accuracy of 99.71% and a final validation loss of 0.29% at a low inference time of 50 ms with the SGD solver. The experimental evaluation demonstrated the strong performance of the proposed model.