1. Introduction: Framing the Challenge
In the digital age, the proliferation of misinformation and fake news has emerged as one of the most pressing challenges for individuals, institutions, and societies at large. With the rapid spread of information through social media platforms, blogs, and user-generated content, the line between fact and fiction has become increasingly blurred. The velocity at which misleading narratives, falsehoods, and conspiracy theories spread, often amplified by algorithms designed to maximize engagement, presents a significant threat to informed public discourse. This issue is not confined to fringe groups or isolated incidents but has become widespread across mainstream media, affecting everything from political outcomes to public health initiatives (Bradshaw and Howard).
Misinformation and fake news have profound implications for several critical facets of society (Wardle and Derakhshan). Public trust, the foundation of democratic processes, is severely undermined when citizens cannot reliably distinguish between credible sources and deceptive narratives (Cohen et al.). Political polarization is exacerbated as individuals become trapped in echo chambers, where confirmation bias and sensationalized content dominate. Moreover, the COVID-19 pandemic has underscored the devastating consequences misinformation can have on public health, where false claims about vaccines, treatments, and preventive measures have led to widespread confusion and mistrust, ultimately endangering lives (McCorkindale and Henry). The stakes have never been higher in addressing these pervasive threats to society's collective well-being (Islam).
In this context, libraries stand as one of the few remaining trusted institutions capable of guiding individuals through the overwhelming flood of digital content. Libraries, with their long-standing commitment to information literacy, are ideally positioned to foster critical thinking skills and promote digital literacy among the public (Agosto). Their role in combating misinformation is crucial, not just in providing access to verified and reliable information but also in equipping individuals with the tools to critically assess and engage with content (Copenhaver). However, traditional methods of teaching information literacy are no longer sufficient to confront the scale and sophistication of today's misinformation crisis. New, more dynamic approaches are needed.
One such approach involves the integration of artificial intelligence (AI) tools into the fight against misinformation. AI has the potential to augment traditional information literacy initiatives by providing scalable, real-time solutions for identifying and combating fake news. Technologies such as Explainable AI (XAI), Natural Language Processing (NLP), and Machine Learning (ML) offer powerful mechanisms for analyzing vast amounts of data, detecting patterns of misinformation, and flagging content that may be misleading or false. Unlike traditional AI models, Explainable AI provides transparent decision-making processes that help build trust in automated systems (Al-Asadi and Tasdemir). This is essential for ensuring that AI tools remain accountable, understandable, and ethically aligned with societal values (Iqbal et al.).
This paper aims to explore the intersection of AI technologies and media literacy, specifically focusing on how libraries can harness these tools to enhance their role in combating misinformation (Hodonu-Wusu). By integrating AI into information literacy programs, libraries can provide both technical solutions and critical thinking frameworks to help individuals navigate the complexities of today's digital information ecosystem. The goal is to build societal resilience against misinformation through a comprehensive, dual approach that leverages both technological innovation and human expertise (IFLA).
The following sections will examine the role of AI in misinformation detection, the integration of AI with established media literacy frameworks like SIFT and CRAAP, and the ethical considerations libraries must navigate when adopting these technologies. Through this exploration, we aim to shed light on the transformative potential of AI in creating a more informed, transparent, and resilient society.
2. Methodologies and Tools
This study adopts a narrative synthesis approach, integrating technological analysis with case-based illustrations to explore the role of artificial intelligence in library-led efforts to combat misinformation and foster critical information literacy. Rather than conducting a narrowly scoped systematic review, the focus here is on weaving together technical insights from AI research, established information literacy frameworks, and applied case studies from library practice. This methodology is particularly appropriate given the rapid evolution of misinformation tactics and the equally dynamic development of AI-based countermeasures.
2.1. Source Selection
The sources informing this analysis span peer-reviewed research on artificial intelligence (including Natural Language Processing, Machine Learning, Explainable AI, and Retrieval-Augmented Systems), media literacy frameworks such as SIFT and CRAAP, and institutional reports and case studies from libraries and educational settings. Particular emphasis was placed on works that:
Examine the technical foundations of AI systems for misinformation detection and content verification (e.g., NLP, ML, and RAS applications).
Evaluate the educational role of libraries in fostering digital literacy and resilience against misinformation through programs incorporating SIFT and CRAAP.
Address ethical dimensions such as algorithmic bias, data privacy, and accountability in AI systems deployed in public-facing institutions.
Provide real-world examples of AI integration in library programming, including community workshops and collaborations with technology providers.
By drawing from both the technical literature and practitioner case studies, the methodology ensures breadth across the domains of technology, pedagogy, and ethics.
2.2. Analytical Strategy
The analytical process unfolded in three stages:
Mapping – Identifying the technological tools (NLP, ML, XAI, RAS) and literacy frameworks (SIFT, CRAAP) most relevant to combating misinformation in library contexts.
Thematic Synthesis – Comparing how these tools and frameworks address challenges of misinformation detection, critical literacy education, and ethical governance.
Integration into Practice – Illustrating these findings through case-based examples of libraries implementing AI-driven misinformation detection and literacy programs.
This blended strategy allows for tracing the complementarities and tensions between technological and human-centered approaches.
2.3. Limitations
Several limitations shape this approach. First, while representative, the set of case studies is not exhaustive; many library-led initiatives remain under-documented in scholarly literature. Second, the inclusion of rapidly evolving technologies means that findings may be time-sensitive, as tools like RAS and XAI undergo continual refinement. Third, the reliance on institutional reports and practitioner accounts introduces variability in rigor compared with peer-reviewed studies. Nevertheless, the combination of sources provides a sufficiently robust basis for identifying trends and implications at the intersection of AI and library practice.
2.4. Use of Generative AI Tools
Generative AI systems were employed in a limited and supervised capacity in the preparation of this paper. ChatGPT-5 (OpenAI, 2025) and Gemini (Google DeepMind, 2025) were used to refine prose, restructure draft passages for clarity, and synthesize connections across literature and case examples. Outputs were treated as provisional drafts and verified against original sources to avoid fabricated or biased content. In addition, demonstrations of fact-checking algorithms and retrieval-augmented systems such as Facticity and VeraCT were consulted to illustrate how such tools may be integrated into library programs. Zotero 7.0 with optional AI-enabled plug-ins (Roy Rosenzweig Center for History and New Media, 2025) was used for reference management and bibliographic organization. At all times, the human author maintained responsibility for interpretation, verification, and final editing. No generative outputs were accepted without manual validation against primary literature.
3. Technological Approach
The technological tools developed to combat misinformation have revolutionized the ability to detect, track, and address the spread of fake news and misleading content. By leveraging artificial intelligence (AI), particularly Natural Language Processing (NLP), Machine Learning (ML), Explainable AI (XAI), and Retrieval-Augmented Systems (RAS), these technologies offer innovative ways to enhance content moderation and misinformation detection on a massive scale (A.B., Athira et al.; Berrondo-Otermin and Sarasa-Cabezuelo). This section provides an in-depth look at these AI technologies, examining how they are applied to fight misinformation, their scalability and adaptability, and the challenges they present (Al-Asadi and Tasdemir; Iqbal et al.).
3.1. Natural Language Processing (NLP) and Machine Learning (ML)
Natural Language Processing (NLP) and Machine Learning (ML) are pivotal technologies in the fight against misinformation. NLP, a subfield of AI, allows machines to understand and process human language. It plays a critical role in detecting fake news and moderating content by analyzing the linguistic patterns and structures in texts that may indicate misinformation. NLP algorithms are trained to identify features like sensational language, emotional appeals, or misleading headlines, which are often hallmarks of fake news. The process begins with text classification, where an NLP model categorizes content as either trustworthy or potentially deceptive based on patterns in language, syntax, and semantics. For example, research by Al-Asadi and Tasdemir (2022) highlighted how NLP techniques can be applied to identify subtle cues in language, such as exaggeration, logical fallacies, or lack of supporting evidence, which are common in fake news.
Machine Learning (ML), which is closely tied to NLP, is used to train models to recognize fake news patterns by learning from large datasets of labeled content. Over time, ML models become more accurate as they are exposed to new data and fine-tuned to detect ever-evolving tactics used in misinformation campaigns. For instance, a supervised learning algorithm might be trained on examples of false news stories and reliable reports, learning to discern subtle differences in word choice, sentiment, and credibility (Iqbal et al.).
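To make the classification step concrete, the sketch below (in Python, using the scikit-learn library) shows how a supervised model can learn to associate word-choice patterns with labels. The four example texts and their labels are purely illustrative; production systems are trained on large, curated corpora.

```python
# Minimal sketch of a supervised fake-news classifier; illustrative data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = potentially deceptive, 0 = trustworthy.
texts = [
    "SHOCKING: miracle cure doctors don't want you to know!",
    "Health agency reports vaccination rates rose this quarter.",
    "You won't BELIEVE what they are hiding from you!!!",
    "The city council approved the budget after a public hearing.",
]
labels = [1, 0, 1, 0]

# TF-IDF features capture word choice; logistic regression learns which
# patterns (sensational wording, emotional appeals) correlate with the labels.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Estimated probability that a new headline is deceptive, per the toy model.
print(model.predict_proba(["Miracle cure they don't want you to see!"])[0][1])
```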
As misinformation grows increasingly complex—incorporating multimedia elements, changing narratives, and advanced manipulations such as deepfakes—NLP and ML continue to evolve, adapting to the more sophisticated forms of misinformation. One of the significant advantages of ML-based systems is their ability to continually improve through exposure to more data, a process called adaptive learning. This scalability is vital in keeping pace with the fast-moving nature of fake news on social media and digital platforms (Berrondo-Otermin and Sarasa-Cabezuelo).
3.2. Explainable AI (XAI)
While AI holds significant promise in detecting and combating fake news, one of the challenges has been the "black-box" nature of many AI models. These systems often provide output without explaining the rationale behind their decisions. In the context of misinformation detection, this lack of transparency can lead to distrust, especially when content is flagged as false without clear justification. Explainable AI (XAI) addresses this issue by making AI's decision-making processes transparent and understandable to humans. XAI systems are designed to provide insights into how a particular decision was reached, allowing users to see which features of the content (e.g., words, phrases, context, or source) contributed to the AI’s judgment. For example, in the context of fake news detection, XAI might show that a particular article was flagged due to its lack of credible citations, the sensationalist tone of the headline, and discrepancies in the publication’s history (A.B., Athira et al.).
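One simple form of such an explanation can be sketched for a linear classifier by reporting the terms that contributed most to a "flagged" verdict, as below. Dedicated XAI toolkits such as LIME or SHAP generalize this idea to more complex models; the training data here is again illustrative.

```python
# Minimal sketch of a term-level explanation for a linear fake-news classifier.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "SHOCKING miracle cure doctors hide the truth",
    "Agency publishes quarterly vaccination report",
    "You won't believe this shocking secret cure",
    "Council approves budget after public hearing",
]
labels = [1, 0, 1, 0]  # 1 = flagged as potentially deceptive

vec = TfidfVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

def explain(text, top_k=3):
    """Return the terms that pushed the model hardest toward 'flagged'."""
    x = vec.transform([text]).toarray()[0]
    contrib = x * clf.coef_[0]            # per-term contribution to the score
    terms = vec.get_feature_names_out()
    top = np.argsort(contrib)[::-1][:top_k]
    return [(terms[i], round(float(contrib[i]), 3)) for i in top if contrib[i] > 0]

print(explain("shocking secret cure doctors hide"))
```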
XAI is particularly important for building trust in AI systems, which is crucial for widespread adoption. When people understand the reasoning behind AI-driven decisions, they are more likely to accept and rely on the system. For libraries and other institutions employing AI in misinformation detection, this transparency can be a valuable educational tool. By providing clear explanations of why a certain piece of content is flagged, XAI helps users develop critical thinking skills and become more discerning consumers of information (Iqbal et al.).
XAI plays a crucial role in ethics and accountability. It ensures that AI systems are not making decisions arbitrarily, but based on identifiable, justifiable criteria. This is important not only for public trust but also for legal and ethical considerations, as AI systems used in public-facing applications need to operate in a way that can be audited and understood by regulators, stakeholders, and users (Berrondo-Otermin and Sarasa-Cabezuelo).
3.3. Retrieval-Augmented Systems (RAS)
As AI continues to advance, one of the most innovative technologies in misinformation detection is the Retrieval-Augmented System (RAS). Unlike traditional AI systems that rely on pre-trained models or fixed datasets, RAS dynamically retrieves real-time information from a variety of trusted, up-to-date sources, such as academic databases, news archives, and fact-checking repositories. This capability allows RAS to provide contextual verification and cross-reference claims with the latest evidence available, making it a powerful tool for real-time fact-checking. For instance, VeraCT Scan, a cutting-edge RAS tool, queries multiple sources in real time to verify claims made in content. If a user submits a piece of content for evaluation, VeraCT retrieves relevant data from sources like peer-reviewed journals, government reports, and trusted media outlets, and uses this information to assess the accuracy of the claims made. By cross-referencing a claim against multiple sources, RAS tools can identify inconsistencies or provide confidence scores, giving users a clearer picture of the content's veracity (Niu et al.).
The key advantage of RAS is its ability to adapt to new forms of misinformation. As misinformation evolves, traditional AI models may struggle to keep up with the increasing sophistication of deepfakes, synthetic media, or disinformation campaigns. RAS tools, however, can continuously access new, relevant sources and adjust their evaluations based on the most up-to-date information. This ensures that fact-checking efforts remain current and relevant, providing a dynamic and scalable solution to misinformation detection. The scalability of RAS tools is another benefit. By leveraging real-time data retrieval, RAS systems can evaluate large volumes of content quickly and accurately, something that would be difficult for human fact-checkers to accomplish alone. The speed and efficiency with which RAS systems process information are crucial in the context of social media, where misinformation can go viral in minutes (A.B., Athira et al.).
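The retrieve-then-verify loop at the core of RAS can be outlined as follows. This is a hypothetical sketch, not the actual VeraCT implementation: `search_trusted_sources` stands in for a real retrieval backend, and the lexical-overlap score is a crude placeholder for the trained verification models such systems employ (Niu et al.).

```python
# Hypothetical retrieve-then-verify loop for retrieval-augmented fact-checking.
def search_trusted_sources(claim):
    """Stand-in for real-time retrieval from vetted databases and archives."""
    return [
        "Peer-reviewed trials found no evidence supporting the treatment.",
        "The health ministry states the treatment is not approved.",
    ]

def verify(claim):
    """Cross-reference a claim against retrieved passages."""
    passages = search_trusted_sources(claim)
    claim_terms = set(claim.lower().split())
    # Crude lexical-overlap confidence; real systems use entailment models
    # and return justifiable reasoning alongside the score.
    scores = [
        len(claim_terms & set(p.lower().split())) / len(claim_terms)
        for p in passages
    ]
    return {"claim": claim, "evidence": passages, "confidence": max(scores)}

print(verify("the treatment is approved and supported by trials"))
```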
3.4. Scalability, Adaptability, and Improvement Over Time
One of the most significant advantages of AI technologies like NLP, ML, XAI, and RAS is their scalability. These systems can be expanded to handle larger datasets and to incorporate more sophisticated algorithms as they evolve. For instance, NLP and ML models can be trained on increasing amounts of data, improving their accuracy over time (A.B., Athira et al.; Iqbal et al.). Similarly, RAS tools can access broader sources of information, allowing for better cross-referencing and more thorough fact-checking (Niu et al.). These systems are adaptive. As misinformation tactics become more advanced, AI tools can evolve to recognize new patterns and techniques used by creators of fake news. The feedback loops built into ML and RAS technologies ensure that these systems continue to improve as they are exposed to more data. For example, a fact-checking algorithm can become more adept at identifying fake news stories as it learns from previous instances and refines its understanding of what constitutes "false" content (Berrondo-Otermin and Sarasa-Cabezuelo).
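A minimal sketch of such a feedback loop, assuming scikit-learn's incremental-learning interface, is shown below: the model absorbs newly labeled batches rather than being retrained from scratch.

```python
# Minimal sketch of adaptive learning via partial_fit; illustrative data only.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vec = HashingVectorizer(n_features=2**18)  # fixed feature space for streaming
clf = SGDClassifier(loss="log_loss")

# Initial batch of labeled content (1 = flagged, 0 = reliable).
X0 = vec.transform(["miracle cure exposed!!!", "council approves budget"])
clf.partial_fit(X0, [1, 0], classes=[0, 1])

# Later: fact-checkers label new items, and the model updates incrementally.
X1 = vec.transform(["the secret they don't want you to know",
                    "agency publishes quarterly report"])
clf.partial_fit(X1, [1, 0])

print(clf.predict(vec.transform(["the secret cure they don't want you to know"])))
```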
The continuous improvement of these systems ensures that AI remains an effective tool against misinformation, even as the tactics behind false content constantly shift. This adaptability is essential for addressing the rapidly growing and evolving nature of digital misinformation (Al-Asadi and Tasdemir).
The application of AI technologies—Natural Language Processing, Machine Learning, Explainable AI, and Retrieval-Augmented Systems—in the fight against misinformation offers powerful tools for real-time content monitoring and fake news detection. These technologies are scalable, adaptive, and capable of improving over time as they learn from new data. While challenges remain, particularly in ensuring transparency and addressing ethical concerns, the integration of AI into libraries and educational institutions holds the potential to significantly enhance efforts to combat misinformation. By leveraging these tools, we can create a more informed, resilient, and accountable society.
4. Media Literacy Frameworks
As artificial intelligence tools play an increasingly central role in detecting and combating misinformation, the importance of media literacy frameworks in empowering individuals to critically evaluate content cannot be overstated. These frameworks provide structured approaches to help users discern credible information from falsehoods, guiding them to apply critical thinking skills in their daily media consumption. Among the most widely recognized frameworks for evaluating digital content are SIFT and CRAAP, both of which are essential tools for fostering digital literacy and enhancing the public’s ability to navigate an increasingly complex information ecosystem.
4.1. SIFT: A Quick and Effective Evaluation Tool
The SIFT method, developed by Mike Caulfield, is a highly effective, user-friendly framework designed to help individuals quickly evaluate online content and determine its credibility. SIFT consists of four steps: Stop, Investigate, Find, and Trace, which collectively empower users to engage with information critically, pausing to reflect and analyze before accepting or sharing it (Caulfield).
Stop: This first step encourages users to pause before reacting to or sharing content. In an age where digital content is often consumed impulsively, the act of stopping allows individuals to slow down, question the validity of the information, and avoid hasty judgment. It is crucial in helping to counter emotional manipulation, a key technique often used in the spread of misinformation (Flynn).
Investigate the Source: The second step in the SIFT method directs individuals to investigate the source of the content. It emphasizes the importance of assessing the credibility of the author or publisher. Users are encouraged to check whether the source is reputable, has a history of producing reliable content, or is affiliated with trustworthy organizations. This step helps users avoid falling victim to manipulated media or content from sources with known biases or hidden agendas (Carlin).
Find Better Coverage: The third step stresses the need to look for alternative sources that cover the same topic. Reliable information is often corroborated by multiple credible outlets. By cross-checking with other reputable sources, individuals can verify the accuracy of a claim, ensuring that the information they consume is consistent with widely accepted facts (Ruggeri).
Trace Claims, Quotes, and Media to Their Original Context: This final step emphasizes the importance of contextualizing information. It encourages users to trace claims, quotes, and media back to their original sources. This is particularly useful when images or videos are circulated without context, as it helps individuals determine whether the content has been manipulated or misrepresented (Hood).
By breaking down the process of critical evaluation into manageable steps, SIFT offers a straightforward approach for users to make more informed decisions about the content they encounter online. The simplicity of the method makes it particularly effective in educational settings, as it can be easily taught and applied by people of all ages and backgrounds (Caulfield).
4.2. CRAAP: A Comprehensive Framework for Information Evaluation
Expanding on the streamlined SIFT method, the CRAAP Test, developed by California State University, Chico, provides a more detailed approach to evaluating the credibility and relevance of information. The acronym CRAAP stands for Currency, Relevance, Authority, Accuracy, and Purpose, which are five critical criteria used to assess the quality of information. The CRAAP test is especially useful in academic and research contexts, where rigorous standards of credibility and evidence are paramount (Carlin).
Currency: This criterion asks whether the information is up-to-date. In fast-changing fields, such as technology or public health, outdated information can lead to incorrect conclusions or misguided decisions. For example, in the context of COVID-19 misinformation, the spread of old, inaccurate data has had real-world consequences for public health (Flynn). Users are encouraged to check the publication date and ensure that the information reflects the latest developments.
Relevance: The relevance criterion considers how pertinent the information is to the user’s needs. Information that is too general, too specific, or tangential to the topic at hand may not be useful. The CRAAP test encourages individuals to consider whether the content directly addresses their research questions or provides the insights needed for informed decision-making (Carlin).
Authority: Authority refers to the credentials of the author or organization behind the information. Reliable content typically comes from authors or institutions with recognized expertise in the subject matter. The CRAAP test guides users to check the author's qualifications, affiliations, and reputation in the field, helping them avoid pseudoscience or content produced by unqualified sources (Kampen).
Accuracy: Accuracy assesses whether the information is factually correct. Users are encouraged to look for errors in the content, such as inconsistencies, lack of citations, or unsupported claims. Accuracy is crucial in distinguishing between well-researched, credible content and false or misleading information that may be designed to deceive or manipulate (Ruggeri).
Purpose: The final criterion examines the intent behind the content. Is the information intended to inform, entertain, persuade, or sell a product? Content created for commercial purposes, or with a clear agenda, may not present a balanced view of the subject. By considering the purpose behind a piece of information, users can better assess whether it is objective or biased (Carlin).
The CRAAP Test provides a more nuanced and thorough evaluation, particularly suited for users engaging in detailed research or analysis. While it requires more time and attention than the SIFT method, it offers a comprehensive framework for assessing a source's credibility in depth.
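As a simple illustration of how a library workshop tool might operationalize these frameworks, the sketch below encodes SIFT and CRAAP as checklists whose prompts paraphrase the criteria described above. The structure is hypothetical rather than a standard implementation.

```python
# Hypothetical encoding of the SIFT and CRAAP checklists for a workshop tool.
from dataclasses import dataclass, field

@dataclass
class Checklist:
    name: str
    prompts: list
    answers: dict = field(default_factory=dict)

    def run(self):
        """Walk a patron through each criterion and record the responses."""
        for prompt in self.prompts:
            self.answers[prompt] = input(f"{self.name} | {prompt} ")

SIFT = Checklist("SIFT", [
    "Stop: have you paused before reacting to or sharing this?",
    "Investigate: who is the source, and are they reputable?",
    "Find: do other credible outlets corroborate the claim?",
    "Trace: what is the original context of the claim or media?",
])

CRAAP = Checklist("CRAAP", [
    "Currency: is the information up to date?",
    "Relevance: does it address your actual question?",
    "Authority: is the author qualified on this topic?",
    "Accuracy: are the claims supported and verifiable?",
    "Purpose: is the intent to inform, persuade, or sell?",
])
```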
4.3. Integrating SIFT and CRAAP into Libraries' Media Literacy Programs
Libraries play a pivotal role in fostering media literacy and critical thinking skills. As trusted community institutions, they are uniquely positioned to equip individuals with the tools needed to assess information effectively, especially as misinformation becomes more pervasive and sophisticated. By integrating frameworks like SIFT and CRAAP into library programs, libraries can provide essential resources for the public to better navigate the digital landscape (Mooney et al.).
Libraries can host workshops and training sessions focused on SIFT and CRAAP, helping patrons develop the skills necessary to evaluate online content. For instance, library staff can facilitate hands-on exercises where participants apply these frameworks to evaluate real-world examples of misinformation. By guiding users through the process of information evaluation, libraries can instill habits of skepticism, critical thinking, and informed engagement (Agosto). Libraries can create digital toolkits, guides, and educational materials that promote the use of SIFT and CRAAP. These resources can be made available online or in print, ensuring that library patrons have easy access to these frameworks whenever they need to evaluate content (Copenhaver).
As the fight against misinformation is not solely technological, educating the public on the importance of critical thinking and information evaluation is essential. Libraries serve as a bridge between technology and human expertise, helping patrons develop a deeper understanding of how misinformation spreads, how to resist manipulation, and how to critically evaluate content from a variety of sources (Mooney et al.; Agosto).
4.4. The Importance of Educating the Public and Encouraging Critical Thinking
The integration of SIFT and CRAAP into library programs is not just about imparting specific techniques for evaluating information; it is also about fostering a culture of critical thinking. As the digital landscape continues to evolve, it is essential to empower individuals with the skills to analyze and question the information they encounter, not only for academic purposes but also for making informed decisions in their daily lives. This critical approach to media consumption can significantly reduce the impact of misinformation and help individuals become more discerning, responsible participants in the digital world (Copenhaver; Mooney et al.).
By encouraging critical thinking, libraries can help individuals resist the allure of echo chambers, confirmation bias, and other cognitive shortcuts that often lead to the spread of misinformation (Flynn). Promoting open-mindedness, curiosity, and intellectual humility will ensure that individuals are not merely passive consumers of information but active, thoughtful participants in the information ecosystem (Agosto).
5. Ethical Considerations
The rapid adoption of artificial intelligence (AI) tools for combating misinformation brings with it several ethical considerations that must be addressed to ensure these technologies are deployed responsibly and equitably. While AI holds significant promise in mitigating the spread of fake news and enhancing the efficiency of misinformation detection, its implementation must be carefully managed to avoid unintended consequences, such as bias, lack of transparency, and violations of data privacy. This section will explore key ethical concerns related to AI’s use in combating misinformation and discuss the role of libraries in fostering inclusive, fair, and transparent deployment of AI technologies.
5.1. Accountability, Transparency, and Ethical Governance in AI Systems
One of the fundamental ethical challenges in the deployment of AI systems is ensuring that these systems operate with accountability and transparency. In the context of misinformation detection, it is crucial that AI algorithms can be scrutinized and understood by users, stakeholders, and regulators. The opaque nature of many AI systems, often referred to as the “black-box” problem, can lead to a lack of trust in the decisions made by these systems, particularly when those decisions affect public discourse (Al-Asadi and Tasdemir; A.B., Athira et al.).
Explainable AI (XAI) has emerged as a potential solution to this issue. By providing users with clear explanations for the decisions made by AI models, XAI enhances the transparency of these systems. When AI tools flag content as false or misleading, users should be able to understand why that decision was made, such as which linguistic features or sources of information contributed to the model’s judgment. This level of transparency is essential for building trust in AI-powered misinformation detection tools and for ensuring that users can make informed decisions about the reliability of flagged content (A.B., Athira et al.; Al-Asadi and Tasdemir).
However, transparency alone is not enough. Accountability is equally important. AI systems must have mechanisms in place to ensure that they can be held responsible for their actions. This includes addressing the potential harms that AI systems might cause, such as the misclassification of truthful content, amplification of bias, or undue censorship. Clear governance frameworks must be established to hold the creators and users of AI systems accountable for their outcomes, ensuring that AI technologies are used in ways that serve the public good without infringing on individual rights or freedoms (Iqbal et al.; Berrondo-Otermin and Sarasa-Cabezuelo).
5.2. Bias in AI Algorithms and Mitigation for Equity and Fairness
One of the most pressing ethical concerns in the deployment of AI in misinformation detection is the potential for bias in AI algorithms. AI systems are often trained on large datasets that may reflect the biases present in society, such as racial, gender, or political biases. These biases can be inadvertently encoded into the AI models, leading to discriminatory outcomes that disproportionately affect certain groups or viewpoints. For example, an AI system trained on biased data may flag content from specific political perspectives as misleading, while overlooking misinformation from other perspectives (A.B., Athira et al.; Iqbal et al.).
Bias in AI algorithms can perpetuate existing social inequalities and undermine the fairness of misinformation detection systems. To mitigate this risk, it is essential to implement diverse and representative training datasets that reflect the full spectrum of viewpoints and demographic groups (Al-Asadi and Tasdemir). AI developers must regularly audit and test their models to identify and address any potential biases. Ethical AI development practices emphasize the importance of fairness in algorithmic decision-making, ensuring that AI systems do not inadvertently discriminate against certain groups or individuals (Berrondo-Otermin and Sarasa-Cabezuelo).
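An audit of this kind can be as simple as comparing flag rates and false positives across groups in a labeled evaluation set, as the sketch below illustrates; the group names and records are hypothetical.

```python
# Hypothetical fairness audit: compare flag rates and false positives by group.
from collections import defaultdict

# Illustrative evaluation records: (group, model_flagged, actually_false).
records = [
    ("viewpoint_a", True, True), ("viewpoint_a", True, False),
    ("viewpoint_a", False, False), ("viewpoint_b", True, True),
    ("viewpoint_b", False, False), ("viewpoint_b", False, True),
]

stats = defaultdict(lambda: {"flagged": 0, "false_pos": 0, "total": 0})
for group, flagged, actually_false in records:
    stats[group]["total"] += 1
    stats[group]["flagged"] += int(flagged)
    stats[group]["false_pos"] += int(flagged and not actually_false)

for group, s in stats.items():
    # Large gaps between groups would prompt a review of the training data.
    print(group, f"flag rate={s['flagged'] / s['total']:.2f}",
          f"false positives={s['false_pos']}")
```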
In the context of combating misinformation, it is also essential to ensure that AI systems are politically neutral and do not favor one ideological stance over another. Libraries and other institutions deploying AI tools must advocate for equitable systems that treat all forms of misinformation with the same level of scrutiny, regardless of political affiliation or other demographic factors (Iqbal et al.; A.B., Athira et al.). This approach is critical to maintaining public trust in AI systems and ensuring that they serve the broader goal of promoting truthful information (Berrondo-Otermin and Sarasa-Cabezuelo).
5.3. Data Privacy and User Rights in Automated Decision-Making Systems
Another significant ethical consideration in the deployment of AI for misinformation detection is data privacy. AI systems often require access to large volumes of data, including personal data, to function effectively. This raises concerns about user privacy and data security, particularly when sensitive personal information is involved. In the case of misinformation detection, AI systems may analyze the content of users' social media posts, search histories, or browsing behaviors, raising questions about how this data is collected, stored, and used (Iqbal et al.; Al-Asadi and Tasdemir).
To address these concerns, data privacy must be a central consideration in the development and deployment of AI systems. Users must be informed about what data is being collected, how it will be used, and what rights they have over their data (Berrondo-Otermin and Sarasa-Cabezuelo). Additionally, AI systems should be designed with privacy protections in place, such as anonymizing or aggregating data to ensure that individual identities are not exposed (A.B., Athira et al.).
Furthermore, informed consent is critical. Users should have the ability to opt in or opt out of data collection and be provided with clear and accessible information about how their data will be used in AI-based systems (Al-Asadi and Tasdemir). Ethical governance of AI systems should also ensure that they comply with data protection laws, such as the General Data Protection Regulation (GDPR) in Europe, which imposes strict requirements on the collection and processing of personal data (Iqbal et al.).
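One concrete safeguard mentioned above, pseudonymizing identifiers before analysis data is stored, can be sketched as follows. The salted-hash approach and the values shown are illustrative; a real deployment would manage the salt as a rotated secret and follow applicable data-protection law.

```python
# Hypothetical pseudonymization of patron identifiers before logging.
import hashlib
import secrets

SALT = secrets.token_bytes(16)  # per-deployment secret, stored separately

def pseudonymize(user_id):
    """Return a stable, non-reversible token for a user identifier."""
    return hashlib.sha256(SALT + user_id.encode("utf-8")).hexdigest()[:16]

log_entry = {
    "user": pseudonymize("patron-12345"),      # no raw identifier retained
    "action": "submitted_url_for_fact_check",  # behavior logged, not content
}
print(log_entry)
```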
AI systems used for misinformation detection must respect user rights in terms of freedom of expression. While AI can be an effective tool for identifying and combating falsehoods, it must not be used to censor legitimate speech or suppress dissenting viewpoints. It is essential to strike a balance between the need for accuracy in information and the protection of individual freedoms (Berrondo-Otermin and Sarasa-Cabezuelo).
5.4. The Role of Libraries in Ensuring Inclusive, Fair, and Transparent AI Deployment
Libraries, as trusted community institutions, are uniquely positioned to play a critical role in the ethical deployment of AI tools in combating misinformation. Given their focus on information equity, accessibility, and education, libraries can help ensure that AI systems are used in ways that are inclusive, fair, and transparent (Copenhaver).
Libraries must advocate for the ethical development and deployment of AI tools. This includes promoting the use of diverse datasets, ensuring that AI systems are free from bias, and guaranteeing that these technologies are designed with transparency in mind (Iqbal et al.; Al-Asadi and Tasdemir). Libraries can also serve as community advocates, helping users understand how AI works and ensuring that these technologies are used in ways that respect individual rights and freedom of expression (Berrondo-Otermin and Sarasa-Cabezuelo).
The digital divide can be bridged by ensuring that marginalized communities have access to AI literacy programs and tools for evaluating misinformation. This is crucial in ensuring that AI-powered misinformation detection systems do not disproportionately benefit already-privileged groups while leaving others behind. Inclusive AI must be designed to serve diverse populations, with particular attention to the needs of underserved communities (Mooney et al.; Copenhaver). Libraries are ideal institutions to promote public trust in AI systems by serving as neutral intermediaries. By integrating AI technologies into media literacy programs, libraries can provide a space for the public to learn about AI tools, understand their ethical implications, and critically evaluate their effectiveness. This educational role is essential for ensuring that AI systems are deployed in a way that serves the public good and fosters a more informed, ethical, and equitable society (Agosto; Al-Asadi and Tasdemir).
6. Case Studies/Practical Applications
The integration of artificial intelligence (AI) tools and media literacy frameworks into efforts to combat misinformation is already taking place in various real-world applications. Libraries and educational institutions, in particular, have emerged as key players in deploying these technologies to foster critical thinking, digital literacy, and public trust. This section highlights case studies that demonstrate how AI and media literacy are being applied effectively to address misinformation, with a focus on practical examples from libraries, community programs, and collaborative efforts between educators, libraries, and technology companies.
6.1. Case Study 1: Seattle Public Library and VeraCT for Digital Literacy
The Seattle Public Library (SPL) has successfully integrated AI-powered misinformation detection tools into its digital literacy programs, particularly by using VeraCT Scan, a Retrieval-Augmented System (RAS). VeraCT dynamically retrieves and analyzes real-time information from trusted sources to verify claims, providing users with immediate feedback on the veracity of digital content (Niu et al.). SPL uses this tool in its community outreach programs to empower individuals to verify information before sharing it and to combat the spread of misleading content (A.B., Athira et al.).
As part of their media literacy workshops, SPL librarians teach patrons how to use VeraCT to check the accuracy of viral claims and social media posts. These workshops have been highly successful in helping library users, particularly those from underserved communities, gain the skills necessary to critically assess online content. Participants report a significant increase in confidence when it comes to evaluating the credibility of information encountered online (Copenhaver). The integration of real-time fact-checking through tools like VeraCT also helps reinforce the library’s role as a trusted institution that fosters informed decision-making (Mooney et al.).
SPL’s approach emphasizes hands-on learning. Librarians not only introduce VeraCT to patrons but also guide them through its practical applications. By walking patrons through the tool in live demonstrations, SPL ensures that participants leave the workshops equipped with the skills and tools necessary to continue applying these practices in their digital lives (Agosto).
6.2. Case Study 2: The New York Public Library’s Campaign Against Vaccine Misinformation
During the COVID-19 pandemic, the New York Public Library (NYPL) launched a series of initiatives aimed at combating vaccine misinformation. Recognizing the critical role of trusted information sources in shaping public health outcomes, the library partnered with public health organizations to address the surge of misleading information related to vaccines (Copenhaver).
NYPL developed a multi-faceted campaign that included fact-checking tools, workshops, and community-based outreach. Librarians facilitated workshops focused on helping community members critically evaluate vaccine-related content using the SIFT and CRAAP frameworks (Caulfield; Carlin). They also provided access to resources such as peer-reviewed journal articles, official health guidelines, and reliable news outlets (Mooney et al.).
NYPL also utilized social media channels to provide regular fact-checks and guidance on how to evaluate vaccine-related claims circulating online. By integrating AI-powered misinformation detection tools into the library’s online services, NYPL allowed patrons to access real-time fact-checking through platforms like FactCheck.org and Snopes, which helped counter misinformation as it emerged (Iqbal et al.; Berrondo-Otermin and Sarasa-Cabezuelo). The initiative not only improved the accuracy of information shared among community members but also demonstrated the library’s essential role in promoting public health literacy and community resilience in the face of misinformation (Al-Asadi and Tasdemir).
6.3. Case Study 3: Collaboration Between Educators, Libraries, and Tech Companies
A successful example of collaboration between educators, libraries, and tech companies can be seen in a pilot program launched by Carnegie Mellon University, the Pittsburgh Public Library, and Google’s Jigsaw. The initiative focused on leveraging AI-driven tools to help students and community members recognize digital manipulation and misleading narratives.
The partnership developed a curriculum that combined AI-powered misinformation detection tools with media literacy education. Google Jigsaw’s Perspective API, which scores text for attributes such as toxicity, was integrated into the educational materials (A.B., Athira et al.). Students and patrons were taught how to use the API to identify toxic or misleading language, especially in politically charged or polarizing content. This hands-on approach enabled participants to interact directly with the technology while also engaging with traditional critical thinking frameworks like SIFT and CRAAP (Caulfield; Carlin).
The collaborative nature of the project also included community-based workshops where educators and library staff worked together to teach digital literacy skills to local residents. These workshops were particularly impactful for marginalized populations, who often face greater challenges in accessing reliable information. The combination of AI tools and educational resources ensured that individuals were not only aware of the technologies available to fight misinformation but were also empowered to use these tools to actively participate in public discourse (Iqbal et al.; Berrondo-Otermin and Sarasa-Cabezuelo).
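For illustration, scoring a piece of text with the Perspective API follows the pattern below. The request shape reflects Perspective’s public documentation; the API key and sample text are placeholders, and error handling is omitted for brevity.

```python
# Sketch of a Perspective API toxicity request; key and text are placeholders.
import requests

API_KEY = "YOUR_API_KEY"  # issued through the Google Cloud console
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

payload = {
    "comment": {"text": "Example comment to score for toxicity."},
    "languages": ["en"],
    "requestedAttributes": {"TOXICITY": {}},
}

response = requests.post(URL, json=payload, timeout=10)
score = response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"Toxicity score: {score:.2f}")  # 0.0 (benign) to 1.0 (highly toxic)
```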
6.4. Case Study 4: The Role of Libraries in Building Public Trust and Resilient Communities
Libraries, as non-partisan institutions, have long been trusted repositories of information, and their role in building public trust through digital literacy is essential in the age of misinformation. One exemplary initiative is the Los Angeles Public Library’s (LAPL) digital inclusion program, which aims to bridge the digital divide and promote critical information literacy among underserved communities.
LAPL has developed a series of educational modules and workshops designed to teach library patrons how to recognize and resist the influence of misinformation. These workshops incorporate both traditional media literacy frameworks and AI-powered fact-checking tools (Copenhaver; Mooney et al.). The library has partnered with organizations like FactCheck.org to provide real-time verification tools for individuals to use as they assess online content (Berrondo-Otermin and Sarasa-Cabezuelo).
LAPL has worked with local schools, community centers, and nonprofit organizations to extend the reach of its programs. This outreach has proven to be highly effective in reaching groups that are often vulnerable to misinformation, such as older adults, low-income families, and immigrant communities. Through these initiatives, LAPL has fostered a sense of resilience in the face of digital manipulation and has empowered individuals to participate in informed public discourse (Iqbal et al.; Al-Asadi and Tasdemir).
The case studies presented above demonstrate the critical role that libraries play in the ongoing battle against misinformation. By leveraging AI tools like VeraCT and Facticity.ai, and integrating media literacy frameworks like SIFT and CRAAP, libraries are empowering individuals to take charge of their digital literacy and actively participate in the fight against the spread of fake news. The collaboration between educators, libraries, and tech companies further highlights the importance of interdisciplinary efforts to tackle misinformation. By working together, these institutions can ensure that the tools and resources needed to combat misinformation are accessible to all, especially those who are most at risk of being misled. As these case studies illustrate, libraries are uniquely positioned to build public trust and create resilient communities that are better equipped to navigate the complexities of today’s digital landscape. Through continued innovation and collaboration, libraries can remain at the forefront of combating misinformation, fostering a more informed, equitable, and resilient society.
7. Conclusion
As the digital landscape continues to evolve, the challenge of combating misinformation has become increasingly urgent. The synergistic role of AI and media literacy in addressing this problem has never been more apparent. AI technologies, such as Explainable AI (XAI), Retrieval-Augmented Systems (RAS), and fact-checking algorithms, provide powerful tools for identifying and mitigating the spread of false or misleading information. However, these technological innovations must be complemented by robust media literacy frameworks like SIFT and CRAAP, which empower individuals to critically assess the information they encounter in the digital world.
This paper has explored the integration of AI with media literacy in the context of libraries and educational institutions, demonstrating how these institutions are uniquely positioned to play a critical role in fostering information literacy and combating misinformation. Libraries, as trusted institutions within their communities, provide the perfect environment for both the adoption of AI technologies and the teaching of critical thinking skills. Through the integration of AI-powered tools for misinformation detection alongside established media literacy frameworks, libraries can help individuals become more discerning consumers of information, equipped to navigate the complexities of the digital age.
Despite the immense potential of AI, there are several key challenges that libraries and other institutions must address in their efforts to leverage these technologies. One of the most significant challenges is ensuring that AI tools are ethical, transparent, and fair. Bias in AI algorithms, concerns over data privacy, and the lack of explainability in AI models are serious obstacles to their widespread adoption. Ethical governance frameworks must be established to ensure that AI tools are deployed responsibly, maintaining accountability and fostering public trust. Libraries, with their commitment to accessibility and equity, are particularly well-placed to advocate for and help implement these frameworks.
As the future of AI continues to unfold, future directions for integrating AI with information literacy programs will likely involve a deeper collaboration between AI developers, educators, and librarians. Moving forward, we can expect to see more sophisticated AI tools that are better able to understand context, detect nuanced misinformation, and integrate with existing media literacy curricula. These advancements will require continuous research, innovation, and adaptation to stay ahead of the increasingly sophisticated tactics used by creators of misinformation. The importance of public trust, ethical AI, and collaboration cannot be overstated in ensuring that these tools benefit society as a whole. Building public trust in AI-powered misinformation detection tools requires transparency in how these systems work and explainability in how decisions are made. By fostering a culture of collaboration—involving libraries, tech companies, educators, and policy makers—we can ensure that AI technologies are developed and deployed in ways that serve the public good, rather than reinforcing existing inequalities or biases.
The success of AI in combating misinformation will depend not just on the technology itself, but on how well it is integrated into a broader framework of media literacy education and ethical governance. Libraries, with their role as educators, information stewards, and community hubs, are poised to play a central role in this ongoing effort. By combining AI tools with critical thinking and information literacy, we can create a more resilient, informed, and equitable society—one that is better equipped to navigate the complexities of the modern information ecosystem and resist the harmful effects of misinformation.
8. Acknowledgments
The author acknowledges the supervised use of generative artificial intelligence (GenAI) tools in the preparation of this manuscript. ChatGPT-5 (OpenAI, 2025) and Gemini (Google DeepMind, 2025) were employed to assist in refining prose, restructuring draft passages for clarity, and synthesizing thematic connections across the reviewed literature. Zotero (Version 7.0, Roy Rosenzweig Center for History and New Media, 2025), including AI plug-ins for annotation, was used for reference management.
All AI-assisted contributions were treated as provisional drafts and were carefully validated against original sources to avoid fabricated or biased content. The responsibility for interpretation, verification, and final editing rested exclusively with the human author.
No AI tools were used for study design, data collection, or independent interpretation of findings.
References
- A.B., Athira, S.D. Madhu Kumar, and Anu Mary Chacko. “A Systematic Survey on Explainable AI Applied to Fake News Detection.” Engineering Applications of Artificial Intelligence 122 (June 2023): 106087. [CrossRef]
- Agosto, Denise E., ed. Information Literacy and Libraries in the Age of Fake News. Santa Barbara: Libraries Unlimited, 2018.
- AI for Librarians. “AI Use Cases.” Accessed December 11, 2024. https://www.aiforlibrarians.com/ai-cases/.
- Alaphilippe, Alexandre, Alexis Gizkis, Clara Hanot, and Kalina Bontcheva. “Automated Tackling of Disinformation: Major Challenges Ahead.” Brussels: European Parliamentary Research Service, 2019. https://data.europa.eu/doi/10.2861/368879.
- Al-Asadi, Mustafa A., and Sakir Tasdemir. “Using Artificial Intelligence Against the Phenomenon of Fake News: A Systematic Literature Review.” In Combating Fake News with Computational Intelligence Techniques, edited by Mohamed Lahby, Al-Sakib Khan Pathan, Yassine Maleh, and Wael Mohamed Shaher Yafooz, 1001:39–54. Studies in Computational Intelligence. Cham: Springer International Publishing, 2022. [CrossRef]
- Alvermann, Donna E., and Margaret C. Hagood. “Critical Media Literacy: Research, Theory, and Practice in ‘New Times.’” The Journal of Educational Research 93, no. 3 (January 2000): 193–205. [CrossRef]
- “‘Devastating’: AI Is Set to Take a Dark Turn for Australian Kids.” News.Com.Au, December 9, 2024, sec. Innovation. https://www.news.com.au/technology/innovation/devastating-ai-is-set-to-take-a-dark-turn-for-australian-kids/news-story/6a5dbdab1d90cbe10c3788bfaa78c795.
- Ayoobi, Navid, Sadat Shahriar, and Arjun Mukherjee. “Seeing Through AI’s Lens: Enhancing Human Skepticism Towards LLM-Generated Fake News.” arXiv, 2024. [CrossRef]
- BECTA. “Digital Literacy. Teaching Critical Thinking in Our Digital World.” BECTA, 2010. https://itte.org.uk/wp/wp-content/uploads/2016/04/Digital-Literacy-Becta.pdf.
- Berrondo-Otermin, Maialen, and Antonio Sarasa-Cabezuelo. “Application of Artificial Intelligence Techniques to Detect Fake News: A Review.” Electronics 12, no. 24 (December 18, 2023): 5041. [CrossRef]
- Black, Joanna, and Cody Fullerton. “Digital Deceit: Fake News, Artificial Intelligence, and Censorship in Educational Research.” Open Journal of Social Sciences 08, no. 07 (2020): 71–88. [CrossRef]
- Blakeslee, Sarah. “The CRAAP Test.” LOEX Quarterly 31, no. 3 (Fall 2004): 6–7.
- Bogna, John. “How to Detect Fake News Generated by AI.” How To Geek, December 15, 2023. https://www.howtogeek.com/how-to-detect-fake-news-generated-by-ai/.
- Borchers, Callum. “‘Fake News’ Has Now Lost All Meaning.” Washington Post, February 9, 2017. https://www.washingtonpost.com/news/the-fix/wp/2017/02/09/fake-news-has-now-lost-all-meaning/.
- Bradshaw, Samantha, and Philip N. Howard. “The Global Disinformation Order 2019 Global Inventory of Organised Social Media Manipulation.” Oxford Internet Institute, 2019. https://demtech.oii.ox.ac.uk/wp-content/uploads/sites/12/2019/09/CyberTroop-Report19.pdf.
- Brisola, Anna Cristina, and Andréa Doyle. “Critical Information Literacy as a Path to Resist ‘Fake News’: Understanding Disinformation as the Root Problem.” Open Information Science 3, no. 1 (January 1, 2019): 274–86. [CrossRef]
- Buckingham, David. Beyond Technology: Children’s Learning in the Age of Digital Culture. Cambridge, UK: Polity Press, 2013.
- Carlin, Maralee. “Applying the CRAAP Test to Sources.” Accessed December 10, 2024. https://library.suu.edu/LibraryResearch/applying-craap.
- Caulfield, Mike. “SIFT (The Four Moves) –.” Hapgood. Accessed December 10, 2024. https://hapgood.us/2019/06/19/sift-the-four-moves/.
- Cohen, Raphael S., Nathan Beauchamp-Mustafaga, Joe Cheravitch, Alyssa Demus, Scott W. Harold, Jeffrey W. Hornung, Jenny Jun, Michael Schwille, Elina Treyger, and Nathan Vest. Combating Foreign Disinformation on Social Media: Study Overview and Conclusions. RAND Corporation, 2021. [CrossRef]
- Constitutional Rights Foundation. “Fact Finding in the Information-Age.” Constitutional Rights Foundation, n.d. https://teachdemocracy.org/images/pdf/fact_finding.pdf.
- Copenhaver, Kimberly. “Fake News and Digital Literacy: The Academic Library’s Role in Shaping Digital Citizenship.” The Reference Librarian 59, no. 3 (July 3, 2018): 107–107. [CrossRef]
- Crivellaro, Marco V. “The AI Edge: A Competitive Advantage in Educator Training and Action Research for International Schools.” In Igniting Excellence in Faculty Development at International Schools, edited by Peggy Pelonis and Thimios Zaharopoulos, 111–35. Cham: Springer Nature Switzerland, 2024. [CrossRef]
- “D3.2 Algorithms of Data Intelligence, Complex Network Analysis, Artificial Intelligence for the Observatory AI Driven.” Social Observatory for Disinformation and Social Media Analysis, October 31, 2019.
- Deloitte. “From Dating to Democracy, AI-Generated Media Creates Multifaceted Risks.” WSJ, June 13, 2024. https://deloitte.wsj.com/cmo/from-dating-to-democracy-ai-generated-media-creates-multifaceted-risks-ea864975.
- Department for Business, Innovation and Skills and Department for Media, Culture and Sport. “Digital Britain: Final Report.” Kew, Richmond, Surrey: Office of Public Sector Information, June 30, 2009. https://assets.publishing.service.gov.uk/media/5a7c70d9e5274a5590059e1c/7650.pdf.
- Dharani, Naila, Jens Ludwig, and Sendhil Mullainathan. “Can A.I. Stop Fake News?” Chicago Booth Review, January 18, 2023. https://www.chicagobooth.edu/review/can-ai-stop-fake-news.
- European Data Protection Supervisor. “Fake News Detection,” December 10, 2024. https://www.edps.europa.eu/press-publications/publications/techsonar/fake-news-detection.
- “Facticity AI.” Accessed January 17, 2025. https://facticity.ai/.
- Flynn, Terence. “10 Ways to Combat Misinformation: A Behavioral Insights Approach.” Institute for Public Relations. Accessed December 10, 2024. https://instituteforpr.org/10-ways-to-combat-misinformation/.
- Garcia-Milà, Pau. “Detecting Fake News with AI.” Founderz (blog), November 20, 2024. https://founderz.com/blog/detecting-fake-news-with-ai/.
- George, Tegan. “Applying the CRAAP Test & Evaluating Sources.” Scribbr, August 27, 2021. https://www.scribbr.com/working-with-sources/craap-test/.
- Gillham, Jonathan. “Grover AI Content Detection Review.” Originality.ai, August 8, 2024. https://originality.ai/blog/grover-ai-content-detection-review.
- Hodonu-Wusu, James Oluwaseyi. “The Rise of Artificial Intelligence in Libraries: The Ethical and Equitable Methodologies, and Prospects for Empowering Library Users.” AI and Ethics, February 19, 2024. [CrossRef]
- Hood, Sarah. “Putting S.I.F.T. To Work.” Association of College and Research Libraries. Accessed December 10, 2024. https://sandbox.acrl.org/resources/putting-sift-work.
- IFLA. “IFLA Statement on Libraries and Artificial Intelligence.” International Federation of Library Associations and Institutions, October 2020. https://repository.ifla.org/handle/20.500.14598/1646.
- IFLA. “How To Spot Fake News.” March 2017. https://repository.ifla.org/handle/20.500.14598/167.
- Iqbal, Abid, Khurram Shahzad, Shakeel Ahmad Khan, and Muhammad Shahzad Chaudhry. “The Relationship of Artificial Intelligence (AI) with Fake News Detection (FND): A Systematic Literature Review.” Global Knowledge, Memory and Communication, October 3, 2023. [CrossRef]
- Islam, Md Aranwul. “Media and Information Literacy in Combating Fake News.” Newagebd.net, November 25, 2024. https://www.newagebd.net/post/opinion/251197/media-and-information-literacy-in-combating-fake-news.
- Jack, Malcolm. “It’s Time for Students to Get Critical,” September 21, 2024. https://www.thetimes.com/uk/scotland/article/its-time-for-students-to-get-critical-qmxl3qp82.
- Kampen, Kaitlyn Van. “CRAAP Test.” Accessed December 10, 2024. https://guides.lib.uchicago.edu/c.php?g=1241077&p=9082343.
- ———. “The SIFT Method.” Accessed December 10, 2024. https://guides.lib.uchicago.edu/c.php?g=1241077&p=9082322.
- ———. “The SMART Check.” Accessed December 10, 2024. https://guides.lib.uchicago.edu/c.php?g=1241077&p=9082345.
- Leon, Esmeralda, and Damon Huss. “Understanding Fake News.” Constitutional Rights Foundation, March 30, 2017. https://teachdemocracy.org/images/pdf/UnderstandingFakeNews.pdf.
- Lopatto, Elizabeth. “Stop Using Generative AI as a Search Engine.” The Verge, December 5, 2024. https://www.theverge.com/2024/12/5/24313222/chatgpt-pardon-biden-bush-esquire.
- Machete, Paul, and Marita Turpin. “The Use of Critical Thinking to Identify Fake News: A Systematic Literature Review.” In Responsible Design, Implementation and Use of Information and Communication Technology, edited by Marié Hattingh, Machdel Matthee, Hanlie Smuts, Ilias Pappas, Yogesh K. Dwivedi, and Matti Mäntymäki, 12067:235–46. Lecture Notes in Computer Science. Cham: Springer International Publishing, 2020. [CrossRef]
- Malik, Amara. “D/Misinformation on Social Media and the Role of the LIS Profession: A South Asian Perspective.” Zenodo (CERN European Organization for Nuclear Research), March 27, 2023. https://www.academia.edu/113710177/D_Misinformation_on_Social_Media_and_the_Role_of_the_LIS_Profession_A_South_Asian_Perspective.
- Serra-Garcia, Marta, and Uri Gneezy. “Improving Human Deception Detection Using Algorithmic Feedback.” CESifo Working Papers. Munich: CESifo, 2023. https://www.cesifo.org/en/publications/2023/working-paper/improving-human-deception-detection-using-algorithmic-feedback.
- McCorkindale, Tina. “2020 IPR Disinformation in Society Report.” Gainesville, FL: Institute for Public Relations, August 2020. https://instituteforpr.org/wp-content/uploads/Disinformation-In-Society-2020-v6-min-1.pdf.
- McCorkindale, Tina, and Anetra Henry. “Third Annual Disinformation in Society Report.” Gainesville, FL: Institute for Public Relations, March 2022. https://instituteforpr.org/wp-content/uploads/Disinformation-Study-MARCH-2022-FINAL.pdf.
- Meriam Library. “Evaluating Information – Applying the CRAAP Test.” California State University, Chico, September 17, 2010. https://library.csuchico.edu/sites/default/files/craap-test.pdf.
- Mooney, Hailey, Jo Angela Oehrli, and Shevon Desai. “Cultivating Students as Educated Citizens: The Role of Academic Libraries.” In Information Literacy and Libraries in the Age of Fake News, 1st ed., 136–50. Santa Barbara: Libraries Unlimited, 2018. [CrossRef]
- Mueller, John Paul. Artificial Intelligence for Dummies. 3rd ed. Newark: John Wiley & Sons, Incorporated, 2024.
- Niu, Cheng, Yang Guan, Yuanhao Wu, Juno Zhu, Juntong Song, Randy Zhong, Kaihua Zhu, Siliang Xu, Shizhe Diao, and Tong Zhang. “VeraCT Scan: Retrieval-Augmented Fake News Detection with Justifiable Reasoning.” arXiv, 2024. [CrossRef]
- Osman, Magda. “How Close Are We to an Accurate AI Fake News Detector?” The Conversation, November 6, 2024. http://theconversation.com/how-close-are-we-to-an-accurate-ai-fake-news-detector-242309.
- Picton, Irene, and Anne Teravainen. “Fake News and Critical Literacy.” London: National Literacy Trust, 2017.
- Rogers, Reece. “Generative AI Hype Feels Inescapable. Tackle It Head On With Education.” Wired. Accessed December 9, 2024. https://www.wired.com/story/artificial-intelligence-hype-ai-snake-oil/.
- Rubin, C. M. “Combatting Misinformation: AI, Media Literacy, And Psychological Resilience For Business Leaders And Educators.” Forbes. Accessed December 9, 2024. https://www.forbes.com/sites/cathyrubin/2024/12/02/combatting-misinformation-ai-media-literacy-and-psychological-resilience-for-business-leaders-and-educators/.
- Ruggeri, Amanda. “The ‘Sift’ Strategy: A Four-Step Method for Spotting Misinformation.” BBC.com, May 10, 2024. https://www.bbc.com/future/article/20240509-the-sift-strategy-a-four-step-method-for-spotting-misinformation.
- Shahzad, Khurram, and Shakeel Ahmad Khan. “Relationship between New Media Literacy (NML) and Web-Based Fake News Epidemic Control: A Systematic Literature Review.” Global Knowledge, Memory and Communication 73, no. 6/7 (July 23, 2024): 956–83. [CrossRef]
- The University of Queensland. “How AI Is Being Used to Fight Fake News.” The Chronicle of Higher Education, 2020. https://sponsored.chronicle.com/how-ai-is-being-used-to-fight-fake-news/.
- Vandergriff, Michael. “Local Government Is Key to the Fight Against Disinformation.” Time.Com, September 11, 2024. https://time.com/7020151/local-government-disinformation-essay/.
- “VeraCT Scan.” Accessed January 17, 2025. https://gradio.app/.
- Wang, Fangyuan, and Huiting Xu. “Research on the Application and Frontier Issues of Artificial Intelligence in Library and Information Science.” Voice of the Publisher 10, no. 04 (2024): 357–68. [CrossRef]
- Wardle, Claire, and Hossein Derakhshan. “Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making.” Strasbourg: Council of Europe, October 2017. https://rm.coe.int/information-disorder-report-november-2017/1680764666.
- Washington, Jerry. “Combating Misinformation and Fake News: The Potential of AI and Media Literacy Education.” SSRN Electronic Journal, 2023. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).