Preprint
Article

This version is not peer-reviewed.

Conceptual Foundations of Knowledge in Philosophy, Science, Language, Education, and Artificial Intelligence

Submitted: 13 October 2025
Posted: 14 October 2025


Abstract
Humans leverage knowledge to solve problems, and so do embedded systems like programmable machines, robots, and digital twins. Consequently, understanding critical aspects of knowledge, that is, its type, nature, formation, application, and refinement, is not just essential, but also beneficial for advancing both human-centric and machine-centric systems. However, the notion of knowledge is continuously evolving as new examples and counterexamples emerge, adding layers of complexity to its understanding. In addition, knowledge-centric entities such as truth, belief, justification, data, probability, possibility, uncertainty, learning, and knowing create a rich and intricate ecosystem. By understanding these, we can unlock the practical benefits of knowledge in our systems. From this perspective, this article delves into the conceptual foundations of knowledge scattered across various disciplines, including philosophy, science, language, education, and artificial intelligence. By doing so, it aims to provide a cohesive framework for researchers and practitioners from diverse fields to identify vital issues before creating human- and machine-centric systems. The exploration begins with the theories of knowledge articulated by Hume and Kant, then transitions to a pragmatist viewpoint. It also investigates the principles of knowledge as articulated within the philosophy of science and language. Furthermore, the article reviews how knowledge is framed by diverse educational theories. Finally, it presents the foundations of digital knowledge, the cornerstone of artificial intelligence, focusing on propositional logic, modal logic, multi-valued (fuzzy) logic, and machine learning. This comprehensive examination invites all system developers to gain insights into the underlying principles governing human reasoning, interpretation, and learning, empowering them to design artificially intelligent systems that align with these fundamental principles.

1. Introduction

The entity known as knowledge has been important since ancient times, long before the English word was created. “Knowledge” comes from Middle English “knowleche,” which came from the Old English verb “cnāwan,” meaning to recognize or understand. It shares roots with the Greek word “gnosis” and the Latin word “cognoscere,” both related to knowing. In ancient Greece, philosophers like Plato distinguished between knowledge (“epistēmē”) and opinion (“doxa”). Aristotle linked knowledge to careful reasoning and observation. Later, medieval scholars used the Latin term “scientia” to mean proven knowledge, helping shape the modern meaning of the word. This history is reflected in the word’s dictionary meanings, of which three examples are cited as follows. According to one dictionary [1], knowledge is “information and understanding about a subject which a person has, or which all people have.” According to another dictionary [2], knowledge is “the state or fact of knowing; familiarity, awareness, or understanding gained through experience or study; and the sum or range of what has been perceived, discovered, or learned.” According to a third dictionary [3], knowledge is “understanding of or information about a subject that you get by experience or study, either known by one person or by people generally.” In addition to dictionaries, well-known views regarding knowledge can be found in encyclopedias, e.g., see [4].
However, in a field of philosophy called the theory of knowledge or simply epistemology, knowledge is critically studied. Epistemology provides a bird’s-eye view of knowledge across different fields like science, education, and language. It also articulates the definition, nature, scope, and limitations of knowledge and knowing [5]. A comprehensive description of the epistemological underpinnings of knowledge is presented in the following section. As preliminaries, this introductory section outlines the following broad considerations.
In epistemology, knowledge is traditionally defined as “justified true belief (JTB),” a formulation commonly attributed to Plato [6,7]. Epistemologists have been debating the JTB-view of knowledge, offering both supporting and opposing arguments. Notably, Gettier’s counterexamples demonstrated that JTB does not always constitute knowledge, sparking significant discourse [8,9]. In response, Goldman [10] proposed the causal theory of knowledge, asserting that for a belief to qualify as knowledge, it must have an appropriate causal connection to the corresponding fact. Similarly, Zagzebski considers that knowledge is a cognitive relation between a conscious subject and reality [11]. It can be direct (knowledge by acquaintance) or indirect (propositional knowledge). Knowledge by acquaintance involves direct experiential contact, such as knowing a person, object, or one’s own mental states. Propositional knowledge involves knowing true statements about the world, such as “Roger is a philosopher.” The distinction lies in the degree of directness, with self-knowledge often considered the most direct form of acquaintance. While both forms contribute to understanding reality, acquaintance knowledge is foundational, providing the basis for propositional knowledge by offering firsthand experience of the things we know. Sosa [12] argues that knowledge is an apt belief, a belief that is true because of the knower’s competence. He introduces the AAA model, which includes accuracy, adroitness, and aptness, to explain knowledge: accuracy denotes truth, adroitness denotes intellectual competence, and aptness denotes truth achieved through that competence. He addresses skepticism through reliabilism and emphasizes the role of reliable cognitive processes in knowledge, examining epistemic normativity, justification, and intellectual virtues. Sosa also contrasts his view with foundationalism and explores the relation between epistemology and cognitive science.
As described above, knowledge remains a complex and evolving concept. Philosophers’ continuous presentation of new examples and counterexamples refines the epistemological underpinnings of knowledge. At the same time, the advent of large language model-based generative artificial intelligence has created new realities of knowledge [13]. These emerging issues underscore the need for a comprehensive evaluation of knowledge. Accordingly, this article takes an interdisciplinary approach and examines the conceptual foundations of knowledge across epistemology, philosophy of science, philosophy of language, educational theory, and artificial intelligence. Such a broad and deep exploration can uncover how humans reason, interpret, and learn while using knowledge, and, thereby, can support the development of artificially intelligent systems that work in closer harmony with human thoughts.
The rest of this article is organized as follows. Section 2 presents the theory of knowledge as developed through the contributions of Hume and Kant. Section 3 presents a pragmatist perspective on knowledge. Section 4 presents the theory of knowledge within the philosophy of science. Section 5 presents the articulation of knowledge in the philosophy of language. Section 6 presents how knowledge is conceptualized within different educational theories. Section 7 presents the articulation of knowledge in artificial intelligence, with particular attention to digital knowledge. Finally, Section 8 presents the concluding remarks of this study.

2. Hume-Kant’s Theory of Knowledge

The philosophical field of the theory of knowledge, known as epistemology, examines the definitions, nature, scope, and limitations of knowledge and the process of knowing, as introduced in the previous section [5,6,7,8,9,10,11,12]. This section provides a detailed discussion based on the contributions of Hume and Kant. Their contributions are pivotal because they consolidate the epistemological developments preceding them and establish the foundational principles that guided subsequent developments.
The scientific revolution of the 16th–18th centuries transformed views on nature, science, and knowledge. Figures like Galilei, Kepler, and especially Newton introduced a new way of understanding the world based on mathematics, empirical observation, and universal laws. Newton’s Principia Mathematica, which described the laws of motion and universal gravitation, provided a systematic, predictive framework for understanding the physical universe. Philosophers wanted to understand the epistemological foundations of these discoveries by answering the following questions:
How is scientific knowledge possible? How can science achieve certainty and universality? What is the relationship between reason, experience, and knowledge? Can we trust scientific observation-driven knowledge, or is it simply a habit of thought? How could humans know the world so precisely and predictively?
The answers to the aforementioned questions collectively form the foundation of the theory of knowledge, or epistemology. Within this intellectual framework, the contributions of David Hume and Immanuel Kant stand as pivotal, consolidating prior epistemological developments and shaping subsequent theories. Hume’s empirical skepticism challenged the certainty of knowledge derived from experience, questioning the reliability of inductive reasoning and causal inference. In response, Kant synthesized rationalist and empiricist traditions, proposing his transcendental idealism, wherein the structure of human cognition imposes necessary conditions on how knowledge is acquired. Their philosophical insights continue to influence contemporary discussions on the nature, limits, and justification of human knowledge. This section presents a summary of the theories of knowledge of Hume and Kant. First, Hume’s theory of knowledge is presented, followed by Kant’s.

2.1. Hume’s Theory of Knowledge

While reading this sub-section, readers may refer to references [14,15,16,17], where the contributions of Hume are articulated.
David Hume’s theory of knowledge is grounded in the empiricism of Bacon, Locke, and Berkeley, emphasizing that all knowledge originates from sensory experience. He distinguishes between impressions (vivid sensory experiences) and ideas (fainter mental representations of these impressions), asserting through the Copy Principle that every idea derives from an impression. Hume famously questioned causality, arguing that our belief in cause and effect stems from psychological habit rather than rational justification. This led to the Problem of Induction, a significant challenge in his theory, where we cannot rationally infer that the future will resemble the past. He also divided knowledge into relations of ideas (necessary truths like mathematics, known a priori) and matters of fact (contingent truths known a posteriori), a distinction known as Hume’s Fork. This fork is summarized in Table 1. According to Hume, relations of ideas are statements that are necessarily true and known through reason alone, without the need for empirical evidence. Examples include mathematical and logical truths such as “3 + 9 = 12,” “All bachelors are unmarried men,” and “A rectangle has four sides.” These propositions are self-evident, and denying them results in a contradiction. On the other hand, matters of fact are contingent truths that hinge on experience and observation, implying that they could be otherwise. For instance, “The sun will rise tomorrow,” “Water boils at 100 °C at sea level,” and “The Eiffel Tower is in Paris.” Unlike relations of ideas, matters of fact cannot be proven purely through logic because their truth depends on empirical evidence, and it is always conceivable that they could be false under different circumstances. Furthermore, Hume’s skepticism about the concept of a unified self is radical, claiming the self is merely a “bundle” of perceptions without underlying substance.
This radical skepticism about induction and personal identity profoundly influenced later philosophers, particularly Immanuel Kant, and continues to shape modern discussions in epistemology and the philosophy of science, demonstrating the enduring impact of Hume’s work.
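The two prongs of Hume’s Fork can be made concrete computationally: a relation of ideas is checkable by reasoning (here, computation) alone, whereas a matter of fact can only be recorded as an observation that might have been otherwise. The following Python sketch is purely illustrative; the stored “facts” and the helper `known_a_posteriori` are assumptions introduced for this example, not part of Hume’s own formulation.

```python
# Relations of ideas: necessarily true and checkable by reason alone.
# Denying "3 + 9 = 12" yields a contradiction; no observation is required.
assert 3 + 9 == 12

# Matters of fact: contingent truths knowable only through observation.
# We can record them, but reason alone cannot establish them, and it is
# always conceivable that they could have been otherwise.
observed_facts = {
    "water boils at 100 C at sea level": True,
    "the Eiffel Tower is in Paris": True,
}

def known_a_posteriori(claim: str) -> bool:
    """A matter of fact counts as known only if it has actually been observed."""
    return observed_facts.get(claim, False)

assert known_a_posteriori("the Eiffel Tower is in Paris")
assert not known_a_posteriori("the sun will rise tomorrow")  # not yet observed
```

The asymmetry in the sketch mirrors the Problem of Induction: no lookup table of past observations can establish a claim about the future.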

2.2. Kant’s Theory of Knowledge

While reading this sub-section, readers may refer to references [18,19,20,21,22], wherein the contributions of Kant are articulated.
Immanuel Kant’s theory of knowledge seeks to bridge the gap between rationalism and empiricism, responding to Hume’s skepticism by proposing that while all knowledge begins with experience, not all knowledge arises from it. Kant distinguishes between analytic judgments, where the predicate is contained within the subject (e.g., “All bachelors are unmarried”), which are necessarily true and known a priori, and synthetic judgments, where the predicate adds new information (e.g., “The cat is on the mat”), which require empirical validation. His most significant contribution is the concept of synthetic a priori judgments, which are informative yet necessarily true and independent of experience, such as “7 + 5 = 12” and “Every event has a cause.” These judgments expand knowledge without empirical evidence, explaining how mathematics, science, and metaphysics possess universal and necessary truths. Kant’s theory of transcendental idealism asserts that we can only know the world as it appears to us (phenomena), structured by the mind’s innate categories (like causality, unity, and plurality) and forms of intuition (space and time), while the ultimate reality (noumena) remains unknowable. This synthesis allowed Kant to explain how objective knowledge is possible, countering Hume’s doubts about causality and induction. Table 2 summarizes Kant’s theory of knowledge as described above.
Kant’s Categories of Understanding are twelve fundamental concepts that the mind uses to organize and interpret sensory experiences, making coherent knowledge of the world possible. These categories are applied a priori, meaning they structure experience. Kant divided them into four groups, each containing three related categories. The first group, Quantity, includes Unity (viewing something as a single entity), Plurality (considering multiple entities), and Totality (understanding the whole in terms of its parts). The second group, Quality, concerns the nature of objects and includes Reality (presence of something), Negation (absence of something), and Limitation (the boundary between reality and negation). The third group, Relation, deals with how objects relate to each other: Inherence and Subsistence (understanding objects as substances with properties), Causality and Dependence (recognizing cause-and-effect relationships), and Community (acknowledging reciprocal interactions among objects). Lastly, Modality refers to the status of knowledge claims, encompassing Possibility–Impossibility (whether something can exist), Existence–Non-existence (whether something actually exists), and Necessity–Contingency (whether something must exist or could be otherwise). Together, these categories enable the mind to synthesize sensory data into structured, objective knowledge of the phenomenal world.

2.3. Integration

Note that Hume and Kant’s views on cause and effect represent a pivotal moment in the history of philosophy, where Kant sought to address the skepticism Hume raised. Hume argued that the concept of causality is not derived from reason or logical deduction but from habit or custom. After observing that one event (like a billiard ball striking another) is consistently followed by another (the second ball moving), the mind develops the expectation that the first event causes the second. However, Hume pointed out that there is no rational justification for this assumption; we never observe the causal connection itself, only the constant conjunction of events. Consequently, Hume believed that the idea of causality is not grounded in reason but is a psychological expectation formed through repeated experiences. This led to his famous Problem of Induction, where he questioned how we can justify beliefs about future events based on past experiences. In response, Kant incorporated the concept of causality into his framework of the Categories of Understanding, specifically under the category of Relation as Causality and Dependence (cause and effect). Kant agreed with Hume that causality cannot be derived from experience alone, but he rejected Hume’s conclusion that it was merely a habit. Instead, Kant argued that causality is an a priori category, i.e., an innate concept that the mind uses to organize sensory data. According to Kant, our experience of the world as a sequence of events would be impossible without the concept of cause and effect already functioning in the mind. For Kant, causality is a necessary precondition for the possibility of experience because it allows us to understand temporal sequences as connected rather than as random events. Thus, where Hume saw causality as a subjective projection based on habit, Kant viewed it as an objective, necessary structure imposed by the mind to make sense of experience. 
Consequently, the epistemologies of Hume and Kant can be integrated into a unified theory of knowledge, as shown in Table 3.
However, the concepts of “a posterioricity” and “analyticity” have been critically examined by numerous philosophers, some of whom argue for the existence of analytic a posteriori knowledge [23,24], a possibility that Kant denied. Others question the very function of, and distinction between, synthetic and analytic statements, and little agreement is seen among philosophers [25]. For instance, Carnap explores the concept of analyticity within formal languages, proposing that certain truths are grounded in semantic rules and are true by definition [26]. In contrast, Quine challenges the distinction between analytic and synthetic statements, thereby questioning the very foundation of analyticity [27].
Nevertheless, Hume and Kant’s theory of knowledge articulates three key types of knowledge: 1) analytic a priori, 2) synthetic a priori (relations of ideas), and 3) synthetic a posteriori (matters of fact). Their theory of knowledge demonstrates how these three types integrate rational thinking with real-world experience, making them not only theoretically significant but also practically applicable.

3. Pragmatism-Based Theory of Knowledge

Following Newtonian physics, the Darwinian theory of evolution is considered a major scientific breakthrough, and it helped give rise to pragmatism. The pragmatic underpinnings of knowledge offer a distinct perspective, focusing on the practical consequences and usefulness of knowledge in guiding action. Pragmatism emerged in the late 19th century and is primarily associated with thinkers like Peirce, James, and Dewey. This section provides an account of the pragmatism-based theory of knowledge. While reading this section, readers may refer to references [28,29,30,31,32], wherein the contributions of Peirce, James, and Dewey are articulated.
Pragmatism shifts the focus of epistemology from the traditional questions of truth as correspondence to reality, toward understanding truth and knowledge in terms of their practical utility and applicability in real-life situations. For pragmatists, knowledge is validated by its success in solving problems and guiding effective action. The meaning of concepts and propositions lies in their practical effects and implications. Truth is seen as what works in the long run or what is most satisfactory in practical terms.
Peirce introduced the pragmatic maxim, suggesting that to understand a concept, we must consider the practical consequences we expect from it. He thus viewed knowledge as a fallible yet self-correcting process driven by inquiry. James emphasized that truth should be understood in terms of practical consequences and the usefulness of beliefs. His version of pragmatism was more psychological, focusing on how beliefs satisfy human needs and adapt to experiences. Dewey, on the other hand, advanced a naturalistic and instrumentalist approach, viewing knowledge as a tool for addressing practical problems. He rejected the notion of knowledge as a static representation of reality, instead perceiving it as an ongoing process closely tied to experience and experimentation. Thus, the pragmatic view of knowledge is founded on the idea of its evolutionary and dynamic nature.
The remarkable thing is that methodologies to conduct research and extract actionable knowledge are heavily influenced by pragmatic epistemology. For example, in [33], the author discusses how pragmatism provides a philosophical basis for integrating qualitative and quantitative research methods, emphasizing practical consequences and real-world applications. Frega [34] explores Dewey’s view of knowledge as a tool for practical problem-solving, highlighting the role of judgment and rationality in human practices. Hothersall [35] examines how philosophical pragmatism can bridge the gap between theory, practice, and research in social work, advocating for a focus on outcomes and practical implications. Gillespie et al. [36] introduce pragmatism as a process philosophy grounded in human activity, offering a transdisciplinary framework for creating useful knowledge. Shusterman [37] discusses the role of experience in pragmatist philosophy, emphasizing its importance for personal growth and self-transformation. Hildebrand [38] examines Dewey’s pragmatic approach to truth and knowledge, arguing for a perspective that transcends traditional debates between realism and antirealism. Misak [39] offers an interpretation of Peirce’s conception of truth, linking it to the idea of inquiry as an ongoing, self-corrective process.
Contemporary pragmatic epistemology has developed further, blending pragmatism with linguistic analysis, social epistemology, and anti-foundationalism, and arguing that knowledge is inherently context-dependent and socially constructed. For example, Putnam [40] explores the evolution of pragmatism, addressing its implications for issues like truth, reality, and rationality. Rorty [41] critiques traditional epistemology and advocates for a pragmatist approach that views knowledge as a product of linguistic and social practices. Nevertheless, the single most significant idea underlying the pragmatic theory of knowledge is an inference called abduction, or inferring the best plausible explanation. Complementing deduction and induction, abduction initiates an iterative, self-corrective inquiry process where hypotheses are tested and refined through real-world interactions, aligning knowledge with practical needs and observations. Refer to [42,43] for more details on abductive reasoning. The above description of the pragmatic theory of knowledge is summarized in Table 4.
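One common computational reading of abduction, sketched below under stated assumptions, scores each candidate hypothesis by how well it would explain the evidence and selects the most plausible one. The hypothesis names and the likelihood and prior values are invented for illustration; this is one simple formalization, not the definitive account of abductive inference.

```python
def best_explanation(likelihoods, priors):
    """Abduction as inference to the best explanation: pick the hypothesis H
    that maximizes P(E | H) * P(H), a simple plausibility score."""
    return max(priors, key=lambda h: likelihoods[h] * priors[h])

# Evidence E: "the grass is wet". Candidate explanations (illustrative numbers):
likelihoods = {"rain": 0.9, "sprinkler": 0.8, "flood": 0.99}   # P(E | H)
priors      = {"rain": 0.3, "sprinkler": 0.2, "flood": 0.001}  # P(H)

assert best_explanation(likelihoods, priors) == "rain"
```

Revising the priors or likelihoods as new observations arrive, and re-running the selection, mirrors the iterative, self-corrective inquiry process that pragmatists describe.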

4. Knowledge in Philosophy of Science

Similar to the epistemological frameworks of Hume and Kant, as well as those grounded in pragmatism, the philosophy of science critically examines the nature and scope of knowledge. This section offers a concise overview of some of the most prominent theories of knowledge discussed in the literature. In particular, this section briefly highlights the contributions of Russell, Carnap, Quine, Popper, Hempel, and Salmon.
First, consider Russell’s philosophy of science, which is rooted in logical analysis, empiricism, and scientific realism [44,45,46]. His work bridges the gap between science and philosophy, emphasizing the role of logic in understanding the natural world. He believed that scientific knowledge should be grounded in empirical evidence, i.e., knowledge can be derived from sensory experience. At the same time, logically consistent entities can be considered as knowledge. Thus, meaningful statements are either logically provable or empirically verifiable. The remarkable thing is that mathematics is reducible to logic, ensuring the precision and clarity of scientific knowledge. As a proponent of scientific realism, he believed that science provides a potentially true description of reality, including unobservable entities like electrons. He held that the best explanation for the success of science is that its theories are at least approximately true. While recognizing the problem of induction from Hume, Russell suggested that inductive reasoning is a pragmatic necessity for science and introduced a probabilistic approach to scientific knowledge, acknowledging that scientific conclusions are often tentative and revisable. Russell also warned against treating scientific theories as absolute truths, believing that science progresses through critical inquiry, skepticism, and revision of earlier theories. Russell’s analytical approach paved the way for later philosophers like Carnap, Quine, and Popper. His insistence on clarity, logic, and empiricism remains foundational in contemporary discussions on the philosophy of science.
Carnap viewed knowledge [47,48,49] through the lens of logical positivism, emphasizing the logical structure of knowledge and the verification principle. For Carnap, meaningful knowledge consists of statements that can be empirically verified or are logically necessary (such as those in mathematics and logic). Carnap argued that all knowledge could be reduced to a basis of observational statements and logically derived from them. This project, known as reductionism, aimed to reconstruct scientific knowledge from basic sense-data reports using formal logic. Carnap believed that philosophical statements must be translated into logical terms connected to empirical observation; otherwise, they are meaningless. His verification principle held that for a proposition to be meaningful, it must be either logically provable or empirically verifiable. Carnap also contributed to the idea of confirmation theory, where knowledge claims are supported by accumulating confirming evidence rather than absolute proof. Although Carnap acknowledged that scientific knowledge is provisional, he maintained that the growth of knowledge occurs through logical analysis, empirical verification, and clarification of language.
However, while both Russell and Carnap played crucial roles in the development of analytic philosophy and shared commitments to logic and empiricism, their approaches diverged significantly. Russell was deeply engaged with realist metaphysics, epistemology, and the relationship between scientific theories and objective reality. In contrast, Carnap sought to eliminate metaphysics, focusing instead on the logical analysis of language, the formal reconstruction of scientific knowledge, and the verification principle. Russell’s work set the stage for logical analysis in philosophy, but Carnap’s rigorous application of logic and focus on linguistic frameworks gave rise to logical positivism, which would become a dominant philosophical movement in the early 20th century.
Quine revolutionized epistemology by rejecting many foundational assumptions of logical positivism [50,51,52]. He challenged the distinction between analytic (true by definition) and synthetic (true by empirical observation) knowledge, a distinction central to Carnap’s philosophy. Quine argued that knowledge is a holistic web, where beliefs and theories are interconnected and must be tested against experience as a whole. According to Quine’s epistemological holism, no single statement can be tested in isolation because empirical evidence affects the entire network of beliefs. This means that even fundamental principles of logic and mathematics are, in principle, revisable in light of empirical evidence. Quine further advanced the idea of naturalized epistemology, arguing that the study of knowledge should be part of empirical science, particularly psychology. For Quine, epistemology is not about seeking absolute foundations for knowledge but understanding how humans acquire and organize knowledge through empirical methods. He believed that the growth of knowledge occurs through pragmatic adjustments to our web of beliefs when confronted with new empirical data.
Popper rejected the traditional inductive view of knowledge, where knowledge grows by accumulating verified observations [53,54,55,56]. Instead, he proposed a critical rationalist approach, emphasizing falsifiability as the criterion for scientific knowledge. Popper argued that scientific knowledge advances not by proving theories true but by eliminating false ones. For Popper, knowledge is always provisional, i.e., conjectural, because no amount of empirical data can conclusively verify a universal scientific law. However, a single counterexample can refute it, making falsifiability the demarcation criterion for science. Popper believed that the growth of knowledge occurs through a cycle of conjectures (hypotheses) and refutations (testing and rejecting false hypotheses). His later work elaborated on this view, arguing that the best scientific theories are those that make bold predictions and are open to rigorous testing. Popper’s epistemology emphasizes critical thinking, open inquiry, and the recognition that all knowledge claims are subject to revision. Unlike Carnap, who focused on verification, and Quine, who emphasized holistic adjustment, Popper saw the rational testing of bold theories and their potential falsification as the engine of knowledge growth.
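Popper’s conjecture-and-refutation cycle can be sketched as a simple testing loop. The universal conjecture and the swan-sighting data below are invented for illustration; the point of the sketch is the asymmetry Popper identified: no number of passing tests verifies a universal claim, while a single counterexample refutes it.

```python
def refute(conjecture, observations):
    """Return the first counterexample that falsifies a universal conjecture,
    or None if the conjecture survives (corroborated, but never proven)."""
    for obs in observations:
        if not conjecture(obs):
            return obs  # a single counterexample suffices for refutation
    return None

# Illustrative conjecture: "all swans are white".
all_swans_are_white = lambda swan: swan["color"] == "white"

corroborating = [{"color": "white"}] * 1000            # many confirmations...
assert refute(all_swans_are_white, corroborating) is None   # ...prove nothing

sightings = corroborating + [{"color": "black"}]       # one black swan
assert refute(all_swans_are_white, sightings) == {"color": "black"}
```

In this reading, the growth of knowledge is the replacement of refuted conjectures by bolder ones that survive ever more severe tests.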
Popper’s three-world theory explains the interplay of knowledge with reality. World 1 is the physical world, which includes material objects and natural phenomena and is studied through empirical sciences. World 2 consists of mental states, where individuals process information and generate ideas. World 3 includes objective knowledge, such as scientific theories, mathematics, and art. Although it originates in World 2, once created, it exists independently. Popper emphasized the interdependence of these worlds. World 1 influences World 2 through sensory experience, while World 2 shapes World 1 through human action. World 3 informs understanding in World 2 and expands through new mental contributions. Staples [57,58] utilizes Popper’s three worlds ontological framework to propose a model of engineering theories. He provides an abstract logical view of these theories, analogous to the deductive-nomological view of scientific theories. Staples argues that engineering has a distinct ontological basis, as its theories address different entities and are evaluated by different criteria compared to scientific theories. This work lays the foundation for an objective understanding of knowledge in engineering. Building upon his previous work, Staples explores methodological issues in engineering epistemology through the lens of Popper’s critical rationalism and three worlds framework. He investigates error elimination and the growth of knowledge in engineering, discussing how engineering failures can result from the falsification of engineering theories. The paper presents taxonomies of the sources of falsification and responses to such falsifications in engineering, offering insights into the unique processes of knowledge development within the engineering discipline.
Hempel [59,60] argued that scientific knowledge must be grounded in empirical evidence and logical reasoning, with meaningful statements being either empirically verifiable or logically necessary. His Deductive-Nomological (D–N) model explains phenomena by logically deducing the explanandum (what is to be explained) from general laws and specific conditions, known as the explanans. For example, a planet’s orbital motion can be deduced from Newton’s laws of motion and gravitation, illustrating the predictive power of scientific theories. In contrast, Hempel’s Inductive-Statistical (I–S) model addresses probabilistic explanations, where events are explained through statistical correlations. For the I–S model, the probability must be close to unity, i.e., P(E | H) ≥ r, where P(E | H) is the probability of the event E (explanandum) given the hypothesis H (explanans) and r represents a probability threshold close to 1 (e.g., 0.95). For example, let the event (E) be “a patient develops lung cancer” and the hypothesis (H) be “the patient is a heavy smoker.” Statistical data show that P(Lung Cancer | Smoking) = 0.96. Therefore, “heavy smoking explains lung cancer” is a valid piece of scientific knowledge. Consequently, both deductive knowledge (D–N explanation) and statistically inductive knowledge (I–S explanation) can provide law-like entities.
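Hempel’s I–S criterion reduces to a simple threshold check on the conditional probability. The sketch below encodes it using the figures quoted in this paragraph; the function name and the 0.96 value are illustrative, not Hempel’s notation.

```python
def is_explains(p_e_given_h: float, r: float = 0.95) -> bool:
    """Hempel's Inductive-Statistical criterion: the explanans H explains the
    explanandum E only if it confers high probability, P(E | H) >= r."""
    return p_e_given_h >= r

# Smoking/lung-cancer figure from the text (illustrative):
assert is_explains(0.96)       # qualifies as an I-S explanation
assert not is_explains(0.50)   # a coin-flip correlation does not
```

Note that the choice of the threshold r is itself a pragmatic decision, which is precisely the point Salmon later pressed against the I–S model.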
Going one step further, Salmon [61,62] introduced an epistemological framework centered on understanding how scientific knowledge is structured through causal relationships rather than merely logical derivations. Salmon critiqued the Deductive-Nomological (D–N) model, arguing that while it explained the logical structure of scientific theories, it lacked causal depth. In response, Salmon developed the Causal-Mechanical (C–M) model of scientific explanation, asserting that understanding a phenomenon requires uncovering the causal mechanisms behind it. For example, explaining a chemical reaction involves identifying the molecular interactions that produce it, rather than just describing it through general laws. This model emphasizes that causal processes transmit physical quantities (like energy or momentum) and that causal interactions modify these processes. Additionally, Salmon contributed to probabilistic epistemology through his Statistical Relevance (S–R) model, which focuses on probabilistic causation. This model explains how certain factors statistically influence the likelihood of events. According to the S–R model, the condition P(E | C, B) ≠ P(E | B), where P(E | C, B) is the probability of event E given factor C and background conditions B and P(E | B) is the probability of event E given only the background conditions B, suffices for scientific knowledge, because the inequality indicates that C affects the probability of E, showing causal relevance. For example, let the event (E) be “a patient develops a rare genetic disorder,” the factor (C) be “the patient carries a specific gene mutation,” and the background (B) be “population without the mutation.” Statistical data show that P(Disorder | Mutation, B) = 0.05 and P(Disorder | B) = 0.0001. Therefore, “mutation causes disorder” is a valid conclusion, although the absolute probabilities are low (5% and 0.01%).
This means that Salmon relaxed the requirement of high probability imposed by the I–S explanation and made the probabilistic approach more pragmatic. Table 5 summarizes the different facets of the theory of knowledge associated with the philosophy of science.
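Salmon’s relevance condition can likewise be sketched with hypothetical counts (illustrative only, not from the article), showing that causal relevance does not require high absolute probabilities:

```python
# Hypothetical counts for Salmon's Statistical Relevance criterion:
# C is relevant to E iff P(E | C, B) != P(E | B).
carriers = 10_000
carriers_with_disorder = 500          # yields P(E | C, B) = 0.05
non_carriers = 1_000_000
non_carriers_with_disorder = 100      # yields P(E | B) = 0.0001

p_e_given_c_b = carriers_with_disorder / carriers
p_e_given_b = non_carriers_with_disorder / non_carriers

# The factor C changes the probability of E, establishing relevance
# even though both absolute probabilities are small.
causally_relevant = p_e_given_c_b != p_e_given_b
print(p_e_given_c_b, p_e_given_b, causally_relevant)  # 0.05 0.0001 True
```

Note that both probabilities are far below any I–S threshold r, yet the inequality alone establishes relevance under the S–R model.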

5. Knowledge in Philosophy of Language

The philosophy of language explores how language influences our understanding of knowledge, meaning, and communication. Key topics include the nature of meaning, reference, truth, and the relationship between language and thought. The contributions of selected philosophers of language are briefly described as follows.
Frege [63,64] focused on the relationship between language, thought, and reality. He introduced the distinction between sense and reference, explaining how language can convey knowledge about the world. Frege’s work on logic and meaning influenced epistemology by showing that understanding meaning is essential for grasping truth and making knowledge claims. Mauthner [65] critiqued the limitations of language in expressing knowledge. He argued that many philosophical problems arise from misunderstandings of language. For Mauthner, language cannot fully capture reality, which imposes epistemic limits on what can be known and communicated. His work emphasizes the need for careful linguistic analysis in epistemology. Revising his earlier view that knowledge is limited to what can be meaningfully expressed in logical form [66], Wittgenstein [67] sees knowledge as socially embedded in language use. He argues that meaning arises from practice rather than strict logical structure, emphasizing that knowledge depends on participation in shared linguistic and social activities. Unlike his previous belief in absolute logical foundations, he contends that knowledge lacks inherent certainty and is justified contextually through everyday use. Chomsky [68] views knowledge as innate and structured, particularly in language acquisition, where Universal Grammar provides inherent linguistic principles. He rejects empiricism and behaviorism, arguing that learning is rule-based rather than purely experience-driven. His poverty of the stimulus argument supports the idea that humans acquire language with pre-existing cognitive structures.
Austin’s speech act theory [69], later enhanced by Searle [70,71], explores how language is not only used to describe reality but also to perform actions, with profound implications for epistemology. Austin introduced a three-part framework of speech acts: locutionary acts (the literal meaning of an utterance), illocutionary acts (the intention behind the utterance, such as asserting, promising, or commanding), and perlocutionary acts (the effect on the listener). Searle expanded this framework by classifying speech acts into five categories: assertives (statements of fact), directives (requests or commands), commissives (promises), expressives (emotional expressions), and declarations (utterances that change reality, like wedding vows). Austin’s epistemology critiques sense-data theory, arguing that knowledge is based on ordinary language and practical linguistic actions rather than abstract perceptual experiences. Searle extends speech act theory into social epistemology, emphasizing that institutional facts, such as laws, money, and scientific knowledge, are created through collective speech acts and constitutive rules (“X counts as Y in context C”). This suggests that knowledge is not merely an individual mental state but an interactive, linguistic, and performative process, shaped by assertions, testimony, and institutional recognition. Their theories also have significant implications in education, where teachers and students engage in communicative acts that construct learning, reinforcing the idea that knowledge is both spoken into existence and socially maintained. In addition, Searle examined ontological and epistemic objectivity and subjectivity. He explained how objective facts and subjective experiences shape the formation and justification of knowledge. These ideas together show how language, meaning, truth, and knowledge are related in epistemology. Grice [72] developed conversational maxims, including quantity, quality, relation, and manner.
These maxims show that clarity, truth, and relevance in communication are important for justified knowledge claims. Kripke [73] developed the Causal Theory of Reference and discussed necessity and a posteriori truth. He explained how identity and necessity affect knowledge about the world. Davidson [74] explored the relationship between truth, meaning, and interpretation. He developed radical interpretation, emphasizing that knowledge and meaning emerge through linguistic interaction and shared truth conditions. Rejecting conceptual schemes, he argues for direct realism, where language and thought directly reflect reality rather than filtering it through subjective frameworks. His holistic approach sees knowledge as interconnected, challenging reductionist theories of meaning and understanding. Finally, Lycan [75,76] contributed to understanding how linguistic meaning and mental representation shape knowledge. He explored how language connects to the mind through representational systems, suggesting that knowledge depends on how accurately these systems correspond to the external world. Lycan’s work bridges the philosophy of language and epistemology by addressing how linguistic structures influence cognitive understanding.
However, Searle also introduced the well-known Chinese Room argument to question the claims of artificial intelligence (AI). In this thought experiment, a person who does not understand Chinese is placed in a room with a set of rules for manipulating Chinese symbols. By following these rules, the person can provide appropriate responses to Chinese questions without understanding the language. Searle argued that computers, like the person in the room, can process symbols and produce correct outputs but do not truly understand the content. This example challenges the idea of strong AI, which suggests that a computer running the right program could have a mind and genuine understanding. According to Searle, AI systems lack intentionality, the characteristic of mental states that are directed toward something. They only simulate understanding instead of truly possessing it. The main limitation of AI, highlighted by the Chinese Room argument, is the gap between syntactic processing (manipulating symbols based on rules) and semantic understanding (comprehending the meaning behind the symbols). While AI can process syntax efficiently, Searle believes it cannot achieve real semantic understanding. This distinction raises important questions in epistemology about what it means to truly know or understand something, emphasizing the difference between appearing knowledgeable and genuinely possessing knowledge. Nevertheless, generative artificial intelligence may not encounter the understanding problem described in the Chinese Room thought experiment, as the authors discuss in [13].
The above description is summarized in Table 6 showing the contribution of philosophy of language in the theory of knowledge.

6. Knowledge in Educational Sciences

Knowledge in educational sciences concerns how people acquire, process, and retain knowledge in educational settings through the notion of learning. It examines the cognitive, emotional, and social factors that affect the learning of three types of knowledge, known as declarative, procedural, and conditional knowledge. Declarative knowledge includes facts and concepts. Procedural knowledge is knowing how to perform tasks. Conditional knowledge means knowing when and why to apply knowledge. In learning, metacognition, i.e., being aware of and managing one’s own thinking, plays an important role. It helps learners plan, monitor, and evaluate their understanding. Knowing how knowledge is organized and used helps improve teaching methods, instructional design, and assessment strategies. This understanding leads to better learning outcomes in various educational environments.
Educational sciences introduce numerous epistemological theories integrating the concepts of knowledge, learning, and teaching. One of the remarkable theories is Piaget’s genetic epistemology [77,78,79] that explains how knowledge develops through stages of cognitive growth. It focuses on how individuals construct knowledge by interacting with their environment. Piaget believed that knowledge is built through two main processes called assimilation and accommodation. Assimilation happens when new information fits into existing knowledge structures. Accommodation occurs when existing knowledge structures change to include new information. These processes work together to achieve equilibration, a balance that drives cognitive development toward knowledge. Piaget identified four stages of cognitive development. The sensorimotor stage (0–2 years) involves learning through sensory experiences and actions. The preoperational stage (2–7 years) is marked by the development of language and symbolic thinking. The concrete operational stage (7–11 years) involves logical thinking about concrete objects. The formal operational stage (11 years and older) is when abstract reasoning and hypothetical thinking develop. Piaget’s theory shows that knowledge is not simply acquired but constructed through a process of growth and adaptation. It highlights that learning is an active process where learners build understanding based on their experiences and developmental stages.
Based on Piaget’s genetic epistemology, Ausubel [79] developed a theory called assimilation theory that emphasizes that meaningful learning occurs when new knowledge is connected to existing cognitive structures. He distinguished between meaningful learning, where understanding is built through integration, and rote learning, which relies on memorization without comprehension. A key element of his theory is the use of advance organizers, which serve as conceptual frameworks to help learners relate new information to what they already know. This approach ensures that new knowledge is meaningfully assimilated, leading to deeper understanding and better retention. Novak expanded Ausubel’s ideas by developing concept mapping as a practical tool to visualize knowledge relationships [80,81]. Concept maps highlight the networking relationships among concepts, showing how ideas are interconnected rather than arranged in strict hierarchies. By visually organizing information, concept maps help learners recognize connections between new and existing knowledge. This process not only aids meaningful learning but also supports long-term retention and the ability to apply knowledge in various contexts. Novak’s approach demonstrates that learning involves continuous restructuring of knowledge through active engagement. Thus, concept maps serve as dynamic tools that allow learners to revise and expand their understanding as they encounter new information [82]. The focus on networking relationships encourages critical thinking and problem-solving by making the structure of knowledge explicit. While Piaget’s genetic epistemology provides a general understanding of cognitive development, Ausubel’s assimilation theory and Novak’s concept mapping offer more specific strategies and tools for facilitating meaningful learning. 
Together, these approaches emphasize that knowledge construction is an active, connected, and evolving process where learners integrate and retain information through meaningful networks of concepts.
Apart from meaningful learning, there are other notions of learning that explicitly consider knowledge and its growth, for example, the notion of cumulative learning, described as follows. Bourdieu [83] outlined a theory of practice based on the concepts of habitus, field, and capital and explained how social structures shape knowledge, practices, and power in education. Bernstein [84] introduced vertical discourse (hierarchically structured academic knowledge) and horizontal discourse (everyday knowledge). Extending the work of Bernstein and Bourdieu, Maton developed Legitimation Code Theory (LCT) [85,86,87], a sociological framework that examines the structuring principles of knowledge-building across disciplines. It provides analytical tools to understand how knowledge is constructed, transmitted, and legitimized in education and research. LCT consists of multiple dimensions, including Semantics, which deals with meaning-making through Semantic Gravity (SG) (the extent to which knowledge is context-dependent) and Semantic Density (SD) (the degree of complexity and condensation of meaning). Another key dimension is Specialization, which distinguishes between Epistemic Relations (ER) (how knowledge is tied to the external world) and Social Relations (SR) (how knowledge is connected to the identity of the knower). Other dimensions include Autonomy, Density, and Temporality, each addressing how knowledge is organized, ranked, and structured over time. Within LCT, cumulative learning refers to the process by which learners build on previous knowledge in a structured and integrative manner, enabling deep understanding and knowledge transfer across contexts. In Semantics, cumulative learning is achieved through semantic waves, where learners shift between abstract and concrete understandings, ensuring knowledge is not merely memorized but deeply understood.
A strong semantic density allows learners to integrate complex ideas, creating more sophisticated conceptual frameworks. In Specialization, cumulative learning occurs when learners engage with epistemic relations, developing expertise in disciplinary knowledge rather than relying on personal opinions or social status. LCT shows how education can either foster or hinder cumulative learning by structuring knowledge in ways that either encourage integration and progression or leave students with isolated pieces of information. For instance, in assessments, a lack of semantic waves (where all questions remain either too abstract or too concrete) results in semantic flatlining, preventing students from developing flexible, transferable knowledge. By applying LCT, educators can design curricula, teaching methods, and assessments that promote cumulative learning by varying the semantic gravity and density of tasks, ensuring knowledge builds progressively rather than remaining fragmented. This approach supports deep learning, critical thinking, and the ability to apply knowledge in different situations.
The remarkable thing is that LCT categorizes knowledge based on semantic gravity (SG) (context dependence) and semantic density (SD) (complexity of meaning), forming four quadrants [88]. The Rhizomatic Code (SG−, SD+) represents abstract yet complex knowledge, such as theoretical physics or philosophy, where meaning is deeply condensed but broadly applicable (e.g., E = mc²). The Prosaic Code (SG+, SD−) consists of concrete and simple knowledge, like everyday instructions or recipes, where meaning is tied to a specific context but lacks complexity. The Worldly Code (SG+, SD+) contains both detailed and context-dependent knowledge, seen in fields like medicine or law, where case-based reasoning requires applying complex principles to specific situations. The Rarefied Code (SG−, SD−) includes generalized but simple ideas, such as basic definitions that apply across contexts without deep complexity. Understanding these quadrants allows educators to design semantic waves, facilitating knowledge transfer between abstract and concrete contexts to promote deeper learning. Based on the SD–SG scheme, Kinchin et al. [89] identified four types of knowledge embedded within concept maps: novice knowledge, theoretical knowledge, practical knowledge, and professional knowledge. Novice knowledge is characterized by SD−, SG−, representing loosely connected, basic knowledge with minimal integration of scientific principles. Theoretical knowledge has SD+, SG−, consisting of abstract and highly condensed disciplinary knowledge that is generalizable beyond specific contexts. Practical knowledge is defined by SD−, SG+, encompassing hands-on, context-dependent knowledge often linked to real-world applications and practical skills. Professional knowledge combines SD+, SG+, integrating both theoretical understanding and practical application, which is essential for expert-level knowledge.
These types of knowledge interact within concept maps, influencing how learners progress from basic understanding to professional expertise through structured learning experiences.
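The SG–SD quadrants above amount to a simple two-dimensional lookup. The following Python sketch (an illustrative encoding, not part of LCT itself) pairs each quadrant with its LCT code and the corresponding knowledge type from Kinchin et al.:

```python
# Illustrative lookup tables for the SG-SD quadrants described above.
# '+' and '-' denote stronger/weaker semantic gravity (SG) and density (SD).
LCT_CODES = {
    ("SG-", "SD+"): "Rhizomatic code",  # abstract yet complex (e.g., E = mc^2)
    ("SG+", "SD-"): "Prosaic code",     # concrete and simple (e.g., recipes)
    ("SG+", "SD+"): "Worldly code",     # detailed and context-dependent
    ("SG-", "SD-"): "Rarefied code",    # generalized but simple
}

# Kinchin et al.'s knowledge types on the same SG-SD axes.
KINCHIN_TYPES = {
    ("SG-", "SD-"): "novice knowledge",
    ("SG-", "SD+"): "theoretical knowledge",
    ("SG+", "SD-"): "practical knowledge",
    ("SG+", "SD+"): "professional knowledge",
}

quadrant = ("SG+", "SD+")
print(LCT_CODES[quadrant], "/", KINCHIN_TYPES[quadrant])
# Worldly code / professional knowledge
```

Laying the two schemes over the same coordinates makes their correspondence explicit, e.g., the Worldly code aligns with professional knowledge.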

7. Knowledge in Artificial Intelligence

In the latter half of the twentieth century, the introduction of microprocessors, personal computers, networking technologies such as the Internet and the World Wide Web, and numerical control accelerated the progress of so-called knowledge-based systems, expert systems, and artificially intelligent systems. These systems exist as standalone applications or reside in embedded platforms such as programmable machines, robots, and digital twins. They help solve problems related to automation, reasoning, decision making, and other cognitive tasks. Consequently, these systems require knowledge in much the same way humans do. The difference is that, this time, knowledge must be machine-readable, giving rise to a concept denoted as digital knowledge. This special form of knowledge must keep a balance between two essential properties. The first is that it must be soft enough to capture the fundamental aspects of knowledge described in Section 2, Section 3, Section 4, Section 5 and Section 6. The other is that it must be syntactically rich enough to remain programmable throughout its life cycle, from articulation to use and refinement. Like knowledge itself, digital knowledge rests on certain conceptual foundations. Digital knowledge is often represented by a set of well-formed formulas (WFFs), the logical equivalents of syntactically or grammatically correct content. Based on the underlying logical system, a WFF is articulated using predefined symbols, and a set of WFFs is processed by logical inference. This section examines these building blocks, focusing on propositional logic, modal logic, fuzzy logic, and machine learning.

7.1. Propositional Logic

A description of propositional logic and other relevant concepts can be found in [90,91,92]; drawing on these sources, the conceptual foundations of propositional logic are described as follows. In propositional logic, the basic symbols include negation (¬), which denotes logical “not”; conjunction (∧) for “and”; disjunction (∨) for “or”; implication (→) meaning “if... then…”; and biconditional (↔) for “if and only if.” Additionally, there are constants such as truth (⊤) and falsehood (⊥), and propositional variables (P, Q, R, …), which stand for whole statements or propositions. Predicate logic builds on propositional logic and adds the universal quantifier (∀) for “for all” and the existential quantifier (∃) for “there exists.” Statements can involve predicate symbols (P(x), Q(x, y), …) that express properties or relations, as well as variables (x, y, z, …) ranging over the domain of discourse, constants (a, b, c, …) for specific objects, function symbols (f(x), g(x, y), …) that map elements to other elements, and the equality symbol (=) to denote identity between terms.
Note that the symbol → is called material implication and is used in propositional and predicate logic to express a truth-functional conditional: “if P, then Q.” The statement P → Q is false only when P is true and Q is false. Similarly, the symbol ⊢ represents syntactic entailment or provability; it means that a conclusion can be derived from premises using formal inference rules. For example, from P and P → Q, we can derive Q, written as (P, P → Q) ⊢ Q. On the other hand, ⊨ stands for semantic entailment and indicates that a conclusion is true in all models where the premises are true. For example, ∀x Human(x) ⊨ ∃x Human(x) means that if everyone is human, then there exists someone who is human, in all interpretations. The symbol ⇒ is a general symbol for logical implication often used informally to indicate that one statement leads to another logically, e.g., “x is even ⇒ x² is even.” Although less precise than ⊢ or ⊨, it is widely used to suggest logical consequence. Finally, the symbol ⇔ represents logical biconditional or logical equivalence and is read as “if and only if” (often abbreviated as “iff”). It indicates that two statements are logically equivalent: each one implies the other. For example, the statement “x is even ⇔ x mod 2 = 0” means that x is even if and only if x is divisible by 2. In essence, A ⇔ B means both A implies B and B implies A. The remarkable thing is that the symbols ↔ and ⇔ are often used interchangeably, but their context and usage can differ slightly. For example, P ↔ Q means P is true exactly when Q is true; it is a truth-functional connective. On the other hand, (P ∧ Q) ⇔ ¬(¬P ∨ ¬Q) means that (P ∧ Q) and ¬(¬P ∨ ¬Q) always have the same truth value, i.e., they are logically equivalent. Every logical system, however, comes with certain laws and principles; for propositional logic, these are listed in Table 7.
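The truth-functional behavior of these connectives can be checked mechanically. The following minimal Python sketch (illustrative, with hypothetical function names) tabulates material implication and the biconditional, and verifies the logical equivalence (P ∧ Q) ⇔ ¬(¬P ∨ ¬Q) by exhaustion:

```python
from itertools import product

# Truth-functional definitions of the connectives described above.
def implies(p, q):   # material implication: false only when p is true and q is false
    return (not p) or q

def iff(p, q):       # biconditional: true exactly when p and q agree
    return p == q

# Enumerate the full truth table for -> and <->.
for p, q in product([True, False], repeat=2):
    print(p, q, implies(p, q), iff(p, q))

# Logical equivalence checked over all valuations:
# (P AND Q) <=> NOT(NOT P OR NOT Q)
assert all((p and q) == (not ((not p) or (not q)))
           for p, q in product([True, False], repeat=2))
```

Because propositional formulas have finitely many valuations, such brute-force enumeration decides any equivalence claim, which is precisely what the ⇔ relation asserts.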
Based on these laws and principles, a set of inference rules can be established. Table 8 shows the inference rules used in propositional logic. In Table 8, the symbol ⊢ can be replaced by ⇒ as far as inference is concerned.
The advancement of programmable computing machines has made automated reasoning and decision-making increasingly attainable through the use of digitally encoded well-formed formulas (WFFs) expressed in propositional or predicate logic. Consequently, systems such as knowledge-based systems (KBS), also known as expert systems, have emerged. These systems represent expert knowledge within a specific domain through sets of “if...then...” rules, which are processed by inference engines (e.g., using Modus Ponens) to derive conclusions. For example, a medical expert might formulate the following rule:
IF a patient has a high fever AND a cough, THEN suggest a COVID-19 test.
The formal representation is as follows:
∀x[Fever(x)∧Cough(x)→SuggestTest(x,COVID_19)].
Here, x denotes a patient. Suppose x = John has a fever and a cough. We can first conjoin these two facts to form the compound statement “Fever(John) ∧ Cough(John).” Then, we instantiate the general rule for John: “Fever(John) ∧ Cough(John) → SuggestTest(John, COVID-19).” Applying Modus Ponens to these two statements, we can validly infer the conclusion: “SuggestTest(John, COVID-19).”
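This instantiate-and-fire pattern is the essence of a forward-chaining inference engine. The following Python sketch is a minimal illustration of the COVID-19 rule (the fact and predicate names are assumptions for this example, not an actual expert-system shell):

```python
# Known facts, stored as (predicate, argument) pairs.
facts = {("Fever", "John"), ("Cough", "John")}

# Rule: Fever(x) AND Cough(x) -> SuggestTest(x, COVID_19)
def apply_rule(facts):
    derived = set(facts)
    # Candidate instantiations of the variable x.
    patients = {arg for pred, arg in facts if pred == "Fever"}
    for x in patients:
        if ("Fever", x) in facts and ("Cough", x) in facts:
            # Modus Ponens: the antecedent holds, so assert the consequent.
            derived.add(("SuggestTest", (x, "COVID_19")))
    return derived

print(apply_rule(facts))
# the result contains ('SuggestTest', ('John', 'COVID_19'))
```

A real inference engine generalizes this loop to many rules and iterates until no new facts are derivable, but the single rule firing shown here is exactly the Modus Ponens step described above.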
In the 1960s and 1970s, various expert systems, e.g., DENDRAL (chemical analysis) and MYCIN (medical diagnosis), were developed using the logical formalism of knowledge described above [93,94,95]. Each of these systems relies on a large set of rules. For instance, MYCIN employs approximately 450 “if…then…” rules to analyze symptoms, interpret test results, and evaluate clinical data in order to diagnose bacterial infections and recommend appropriate antibiotic treatments.

7.2. Modal Logic

Modal logic, as a formal extension of propositional logic, has been systematically introduced in [96,97]. The development of possible worlds semantics, pioneered by Kripke [98] and further expanded by Lewis in his account of counterfactuals [99], provides the semantic backbone for interpreting necessity and possibility. Building on this foundation, epistemic logic initiated by Hintikka [100] formalizes knowledge and belief through modal operators, an approach later refined in computer science and AI contexts by Fagin et al. [101]. The epistemological dimensions of modal logic intersect with metaphysics in [102], which shows how some necessary truths are accessible only a posteriori, in light of the relationship between conceivability and possibility discussed in [103]. Further developments in formal epistemology, including the logical omniscience problem and contextualist approaches to knowledge, are critically analyzed in [104]. Together, these works illustrate how modal logic underpins the study of knowledge, belief, necessity, and possibility, while also highlighting its philosophical challenges. In synopsis, modal logic incorporates modalities such as necessity and possibility in order to relate knowledge, belief, and justification. Here, modal operators represent knowledge (Kp, “it is known that P”) and belief (Bp, “it is believed that P”), allowing formal analysis of statements like “If P is known, then P is true,” reflecting the factivity of knowledge. Modal logic also explains concepts like a priori knowledge, associated with necessary truths (e.g., mathematical facts), and a posteriori knowledge, linked to empirical discoveries (e.g., “Water is H2O”). Counterfactual reasoning, essential for understanding causality, uses modal logic to evaluate statements like “If P had occurred, Q would have followed.” Possible worlds semantics provides a framework for understanding modal claims, where necessary truths hold in all possible worlds and possible truths hold in at least one.
In modal logic, it is acknowledged that some necessary truths are knowable only through empirical means. In formal epistemology, modal logic models epistemic justification, knowledge dynamics, and rational belief. However, challenges such as the logical omniscience problem, where agents are unrealistically assumed to know all logical consequences of their knowledge, and contextualism, which highlights shifts in the meaning of “knowing” depending on context, complicate its application. Despite these challenges, modal logic remains a crucial tool in epistemology, helping to formalize reasoning about knowledge, belief, and possibility while bridging the gap between logical structures and epistemic concepts.
However, as far as formal settings are concerned, what sets modal logic apart is the inclusion of necessity (□), which means “it is necessarily the case that,” and possibility (◇), which means “it is possibly the case that.” Modal logic also often references possible worlds (w, w′, …) and the accessibility relation (R(w, w′)), which describes which worlds are accessible from others in modal semantics. Further implication symbols also appear in modal contexts, such as strict implication, expressible as □(P → Q). Table 9 summarizes the laws and principles of modal logic that can also be used to build well-formed formulas (WFFs).
As shown in Table 9, the necessitation rule means that if a proposition is provable, then it is necessarily true. The distribution axiom means that if it is necessary that P implies Q, then if P is necessary, Q is also necessary. The axiom (reflexivity) means that whatever is necessary is true, while the axiom (transitivity) means that if something is necessary, then it is necessarily necessary. The axiom (Euclidean or symmetry) means that if something is possible, then it is necessarily possible. The duality principle means that necessity and possibility are dual concepts such that □P is equivalent to ¬◇¬P. The possibility of truth means that if something is true, then it is possible. The law of modal contradiction means that nothing can be both necessarily true and necessarily false, and the law of modal excluded middle means that every proposition is either necessarily true or necessarily false, depending on the strength of the modal system. Finally, the axiom of identity means that every proposition necessarily implies itself.
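Possible worlds semantics lends itself to direct computation. The following Python sketch evaluates □P and ◇P over a toy Kripke model (the worlds, relation, and valuation are hypothetical, chosen only for illustration) and checks the duality principle □P ⇔ ¬◇¬P at a world:

```python
# A toy Kripke model: a set of worlds, an accessibility relation R,
# and a valuation saying at which worlds each proposition is true.
worlds = {"w1", "w2", "w3"}
R = {("w1", "w2"), ("w1", "w3"), ("w2", "w2"), ("w3", "w3")}
val = {"P": {"w2", "w3"}}  # P holds at w2 and w3

def box(prop, w):
    # Box(P) holds at w iff P holds at EVERY world accessible from w.
    return all(v in val[prop] for (u, v) in R if u == w)

def diamond(prop, w):
    # Diamond(P) holds at w iff P holds at SOME world accessible from w.
    return any(v in val[prop] for (u, v) in R if u == w)

print(box("P", "w1"), diamond("P", "w1"))  # True True

# Duality: Box P is equivalent to NOT Diamond NOT P.
not_p_worlds = worlds - val["P"]
diamond_not_p = any(v in not_p_worlds for (u, v) in R if u == "w1")
assert box("P", "w1") == (not diamond_not_p)
```

Axioms such as reflexivity or transitivity then correspond to structural constraints on R; for instance, adding (w, w) for every world w makes the axiom □P → P hold at all worlds of the model.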

7.3. Fuzzy Logic

In addition to propositional (and predicate) and modal logic, multi-valued or fuzzy logic provides another paradigm that is well suited for articulating digital knowledge. Here, partial truth is allowed, bringing softness to knowledge-based computation. The foundation of fuzzy logic lies in fuzzy sets and their derivatives, such as fuzzy numbers, linguistic variables, and fuzzy “if...then...” rules. Furthermore, concepts such as possibility theory, imprecise probability, and fuzzy inference contribute to the epistemic basis of fuzzy logic. For detailed discussions, see [105,106,107,108,109,110,111,112]. The salient points are as follows.
In classical sets, an element either belongs or does not belong to a set; it is black or white. In fuzzy sets, elements can partially belong, with a degree of membership between 0 and 1. A fuzzy set A is a set of ordered pairs, i.e., A = {(x, μA(x)) | x ∈ X, μA(x) ∈ [0,1]}, where X is the universe of discourse (the points of interest) and μA(x) is the membership function that assigns a value from the interval [0,1] indicating how strongly each element x of X belongs to A. As such, μA: X → [0, 1]. The membership value is also interpreted as a degree of belief. For example, let a fuzzy set denoted Tall be Tall = {(150,0.0), (160,0.2), (170,0.5), (180,0.8), (190,1.0)}. The interpretation is as follows: a person who is 150 cm is not at all tall because the associated membership value is 0; a person who is 170 cm is somewhat tall because the associated membership value is 0.5; a person who is 190 cm is definitely tall because the associated membership value is 1.0. When the universe of discourse of a fuzzy set is a segment of the real line (ℝ), the set is considered a fuzzy number if its membership function satisfies the following conditions: (i) convexity (no dips or valleys; a single peak), (ii) normality (at least one membership value equals one), (iii) upper semi-continuity (no sudden upward jumps), and (iv) compact support (the nonzero membership region is bounded and closed). Fuzzy numbers help put linguistic terms (such as tall, short, high, moderate, slow, fast, very fast, likely, less likely, more or less warm, hot, and so on) into formal computation. Fuzzy numbers whose membership functions take the shape of a triangle, trapezoid, or Gaussian curve are extensively used in real-life knowledge-based systems (KBS) due to their simplicity and expressive power. For example, a triangular membership function for warm temperature can be defined as follows: μwarm(x) = max(0, min((x − 15)/(25 − 15), (35 − x)/(35 − 25))), where x is the temperature in degrees Celsius.
This means that the degree of belief that a temperature is “warm” begins to increase at 15 °C. At this point, the membership value is zero, indicating no belief that it is warm. As the temperature rises, the belief increases linearly, reaching its maximum value of 1 at 25 °C, the temperature most strongly associated with the concept of “warm.” Beyond 25 °C, the belief that the temperature is “warm” decreases linearly. At 35 °C and above, the membership value becomes zero again, indicating that such temperatures are no longer considered warm but are likely to be interpreted as “hot.” Similarly, a triangular membership function of hot temperature can be defined as μhot(x) = max(0, min((x − 30)/(40 − 30), (50 − x)/(50 − 40))). Fuzzy numbers can also serve as possibility distributions, where possibility is a somewhat less restricted mathematical entity than probability. Analogous to propositional/predicate logic, fuzzy sets and numbers give rise to a new logical framework called fuzzy logic (more precisely, multi-valued logic). This approach facilitates what is known as computing-with-words, allowing for greater flexibility and a more human-like way of processing knowledge and related entities in the digital realm. At a temperature of 27 °C, we can evaluate the membership degrees of the fuzzy numbers “warm” and “hot”. The membership value for warm is μwarm(27) = 0.8, while for hot it is μhot(27) = 0. Fuzzy negation (NOT) gives NOT warm = 1 − μwarm(27) = 0.2 and NOT hot = 1 − μhot(27) = 1. Using fuzzy conjunction (AND), warm AND hot = min(μwarm(27), μhot(27)) = min(0.8, 0) = 0; using fuzzy disjunction (OR), warm OR hot = max(μwarm(27), μhot(27)) = max(0.8, 0) = 0.8. Applying modifiers, very warm (an intensifier) is (μwarm(27))² = (0.8)² = 0.64, and somewhat warm (a weakening hedge) is √0.8 ≈ 0.894.
Finally, for an α-cut at α = 0.5, the warm fuzzy number yields the interval [20,30], representing the core range of temperatures considered “warm,” and for hot, the α-cut at α = 0.5 is [35,45], representing the core range of temperatures considered “hot.”
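The worked values above can be reproduced computationally. The following sketch (helper and variable names are ours) evaluates the fuzzy connectives, the hedges, and the α = 0.5 cut of “warm” scanned over an integer temperature grid:

```python
def tri(x, a, b, c):
    """Triangular membership function: feet at a and c, peak at b."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

mu_warm = lambda x: tri(x, 15.0, 25.0, 35.0)  # "warm": 15/25/35 degC
mu_hot = lambda x: tri(x, 30.0, 40.0, 50.0)   # "hot": 30/40/50 degC

x = 27.0
w, h = mu_warm(x), mu_hot(x)   # 0.8 and 0.0
not_warm = 1.0 - w             # fuzzy negation, approx. 0.2
warm_and_hot = min(w, h)       # fuzzy conjunction, 0.0
warm_or_hot = max(w, h)        # fuzzy disjunction, 0.8
very_warm = w ** 2             # intensifier, approx. 0.64
somewhat_warm = w ** 0.5       # weakening hedge, approx. 0.894

# alpha-cut of "warm" at alpha = 0.5, scanned over an integer grid
cut = [t for t in range(0, 51) if mu_warm(t) >= 0.5]
print(min(cut), max(cut))      # prints: 20 30
```

The α-cut of “hot” at α = 0.5 is obtained the same way with mu_hot, yielding the interval [35, 45].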
Table 10. Basic operations of fuzzy logic.
Operation Formula
NOT A 1 − μA(x)
A AND B min(μA(x), μB(x))
A OR B max(μA(x), μB(x))
Very A (μA(x))²
Somewhat A √(μA(x))
α-cut A(α) = {x ∈ X | μA(x) ≥ α}
Similar to propositional/modal logic, fuzzy logic also offers digitized knowledge using “if…then…” logical rules. For example, consider the following three rules: (a) if temperature is cold, then fan speed is low; (b) if temperature is warm, then fan speed is medium; and (c) if temperature is hot, then fan speed is high. To infer from these types of rules, fuzzy inference engines are used (e.g., Mamdani-type inference [111]); the step that converts the fuzzy result into a crisp output is referred to as defuzzification.
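A minimal Mamdani-style inference sketch for the three rules above can be written as follows; the triangular parameters for the fan-speed sets, and the function names, are illustrative assumptions of ours:

```python
def tri(x, a, b, c):
    """Triangular membership function: feet at a and c, peak at b."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

# Antecedent fuzzy numbers over temperature (degC) -- illustrative parameters
temp_sets = {"cold": (0.0, 10.0, 20.0), "warm": (15.0, 25.0, 35.0), "hot": (30.0, 40.0, 50.0)}
# Consequent fuzzy numbers over fan speed (%) -- illustrative parameters
speed_sets = {"low": (0.0, 25.0, 50.0), "medium": (25.0, 50.0, 75.0), "high": (50.0, 75.0, 100.0)}
rules = [("cold", "low"), ("warm", "medium"), ("hot", "high")]

def infer(temp):
    """Mamdani-style inference: clip each consequent by its rule's firing
    strength, aggregate with max, then defuzzify by the centroid method."""
    grid = [s / 2.0 for s in range(0, 201)]  # fan speed 0..100 in 0.5 steps
    agg = []
    for s in grid:
        clipped = [min(tri(temp, *temp_sets[t]), tri(s, *speed_sets[c]))
                   for t, c in rules]
        agg.append(max(clipped))
    den = sum(agg)
    return sum(m * s for m, s in zip(agg, grid)) / den if den else None

print(round(infer(27.0), 1))  # prints: 50.0 (only warm->medium fires; "medium" is centered at 50)
```

At 27 °C only rule (b) fires (with strength 0.8), so the centroid of the clipped “medium” set, 50% fan speed, is returned; at intermediate temperatures, two rules fire partially and the output interpolates between their consequents.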

7.4. Machine Learning

With the advent of propositional logic, modal logic, fuzzy logic, probability–possibility transformations, and other mathematical systems, such as well-formed formulas (WFFs), research on digital knowledge has advanced toward the automatic extraction of knowledge. The extracted knowledge may be explicit, as in rule-based representations, or implicit, as in trained artificial neural networks. This shift has resulted in the emergence of machine learning algorithms that enable digital systems to acquire, refine, and generalize knowledge through data-driven processes [113,114]. By now, machine learning algorithms have become a core component of modern artificial intelligence, supporting both domain-specific applications and emerging generative methods. In summary, a machine learning process can be represented as follows.
O1, …, On ⟶(MLA) (x → y)    (1)
In equation (1), O1, …, On represent n datasets, MLA is the machine learning algorithm used, and x → y represents the extracted knowledge. Here, “x → y” is used as a metaphor; the knowledge can take explicit forms, such as rule-based expressions, or implicit forms, such as trained artificial neural networks. The landscape of machine learning algorithms (MLAs) is rapidly evolving. A concise overview of representative MLAs, along with their respective tribes, origins, and characteristic examples, is presented in Table 11. As seen in Table 11, there are nine tribes, each associated with an origin, a theme, and exemplary algorithms. A brief description is presented as follows.
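As a minimal, hypothetical instance of the scheme in equation (1) (the dataset and function names are ours for illustration), the observations can be a list of (x, y) pairs and the MLA ordinary least squares, which returns the extracted knowledge as an explicit mapping x → ax + b:

```python
def least_squares(observations):
    """A toy "MLA": fit y = a*x + b to (x, y) pairs by ordinary least squares
    and return the learned mapping x -> y as a callable."""
    n = len(observations)
    sx = sum(x for x, _ in observations)
    sy = sum(y for _, y in observations)
    sxx = sum(x * x for x, _ in observations)
    sxy = sum(x * y for x, y in observations)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b  # the extracted knowledge, x -> y

data = [(0, 1), (1, 3), (2, 5), (3, 7)]  # generated by y = 2x + 1
f = least_squares(data)
print(f(10))  # prints: 21.0
```

Swapping the fitting routine for decision-tree induction or backpropagation changes the form of the extracted knowledge (a rule set or a weight matrix) without changing the overall schema.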
As seen in Table 11, the “symbolists” tribe originates from logic and philosophy, relies on the theme of inverse deduction (i.e., an ampliative inference consisting of either abduction or induction) [115,116], and offers algorithms such as ID3, C4.5, and C5.0 to extract knowledge for logical rule discovery represented as directed graphs or decision trees [117,118]. The “connectionists” tribe originates from neuroscience, relies on the theme of backpropagation of error, and offers algorithms such as the artificial neural network (ANN) and deep neural network (DNN) to extract knowledge for pattern recognition and feature learning from numerical, textual, and graphical datasets [119,120,121,122,123]. The “evolutionists” tribe originates from evolutionary biology, relies on the theme of self-organization and adaptation, and offers algorithms such as genetic algorithms and genetic programming to extract knowledge for optimal solutions and analytical expressions through heuristic selection [124,125,126,127,128]. The “Bayesianists” tribe originates from statistics, relies on the theme of probabilistic inference, and offers algorithms such as support vector machines (SVMs) and hidden Markov models (HMMs) to extract knowledge for reasoning and prediction using datasets subject to uncertainty [129,130,131,132]. The “analogists” tribe originates from psychology, relies on the theme of kernel-based analogy and similarity, and offers algorithms such as principal component analysis (PCA) to extract knowledge for identifying structural correspondences within high-dimensional datasets [133,134,135]. 
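To make the symbolists’ theme concrete, the following sketch (a toy dataset and helper names of our own devising) computes ID3’s information-gain criterion, the entropy reduction that drives attribute selection in decision-tree induction:

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attr):
    """Entropy reduction from splitting (features, label) pairs on attr."""
    labels = [lab for _, lab in examples]
    gain = entropy(labels)
    n = len(examples)
    for value in {f[attr] for f, _ in examples}:
        subset = [lab for f, lab in examples if f[attr] == value]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

# Toy data: "temp" separates the labels perfectly; "day" carries no information.
data = [({"temp": "hot", "day": "mon"}, "on"),
        ({"temp": "hot", "day": "tue"}, "on"),
        ({"temp": "cold", "day": "mon"}, "off"),
        ({"temp": "cold", "day": "tue"}, "off")]
print(information_gain(data, "temp"))  # prints: 1.0
print(information_gain(data, "day"))   # prints: 0.0
```

ID3 selects the attribute with the highest gain at each node, so “temp” would become the root of the induced decision tree here.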
The “possibilists” tribe originates from multi-valued (or fuzzy) logic, relies on the theme of naturalistic and soft computing, and offers algorithms based on probability–possibility transformation and fuzzy inference [136,137] to extract knowledge for reasoning with granular information, that is, information without sharp boundaries [138,139], in contrast to precise numerical data. The “informationists” tribe originates from the central dogma of molecular biology [140,141], relies on the theme of protein synthesis (DNA → RNA → proteins), and offers algorithms such as DNA-based computing (DBC) to extract knowledge for reasoning and decision-making under data-deficient conditions [142,143,144,145,146,147]. As the name of the tribe, “informationists,” suggests, DBC increases the Shannon information content [148] of input data while performing machine learning tasks. Finally, the “hybridists” tribe emerges from the convergence of multiple tribes and develops a range of hybrid machine learning algorithms that complement one another, for example, neuro-fuzzy algorithms [149] and neuro-DBC models [150], among many others.
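As an illustration of the possibilists’ probability–possibility transformation, the following sketch implements the ordering-based transformation in which the possibility of an outcome is the total probability of all outcomes no more probable than it (assuming no ties among probabilities; the function name is ours):

```python
def prob_to_poss(p):
    """Ordering-based probability -> possibility transformation.
    For p sorted in descending order, poss_i = sum of p_i, p_{i+1}, ..., p_n.
    Assumes the probabilities in p are distinct (no tie handling)."""
    order = sorted(range(len(p)), key=lambda i: p[i], reverse=True)
    poss = [0.0] * len(p)
    tail = sum(p)          # starts at 1.0 for a normalized distribution
    for i in order:
        poss[i] = tail     # possibility = probability mass not exceeding p[i]
        tail -= p[i]
    return poss

print(prob_to_poss([0.5, 0.3, 0.2]))  # prints: [1.0, 0.5, 0.2]
```

The result is a normalized possibility distribution (its maximum is 1), and each possibility dominates the corresponding probability, which is what allows granular, less restrictive reasoning than probability alone.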

8. Concluding Remarks

Although the preceding sections articulate the multifaceted nature of knowledge, tracing its evolution from philosophical origins to digital realization, the notion of knowledge continues to develop as new examples and counterexamples emerge. It forms a complex ecosystem encompassing truth, belief, justification, data, probability, possibility, uncertainty, learning, and knowing. Although the discourse has been presented in a holistic manner, a coherent foundation of knowledge is yet to be developed. Such a foundation is necessary to integrate philosophical, scientific, linguistic, educational, and computational perspectives into a unified framework that enables system developers to understand the principles governing human reasoning, interpretation, and learning and, thereby, to design artificial systems consistent with those principles.
Nevertheless, one of the ways to articulate a coherent foundation of knowledge is to adopt a succinct and circularity-free definition of knowledge. Based on the adopted definition, other aspects of knowledge, including its digitization, can be elucidated. In this respect, the following definitional scheme can be considered. Knowledge consists of three elements: knowledge claim, knowledge provenance, and knowledge inference. The interplay among these elements yields four fundamental types of knowledge: definitional (uncontroversial conceptual definitions), deductive (derived through logical reasoning), inductive (generalized from data or experience), and creative (generated through abductive or innovative reasoning). This framework unifies the logical, empirical, and inventive dimensions of knowledge, supporting systematic representation and digital formalization. A more formal treatment of this articulation of knowledge is presented in [151]. One of the remarkable characteristics of this articulation of knowledge is its softness, meaning that a knowledge claim (the means by which we represent knowledge) does not have to be perfectly true, and partial truth is allowed. In particular, creative knowledge, where knowledge provenance is absent, is neither true nor false at the point when it is conceived. This feature constitutes the essential ingredient of creativity, as several authors have explained in detail, e.g., see [116,152,153,154]. The next phase of this study thus delves into this unification of the conceptual foundations of knowledge and incorporates it into machine learning algorithms.
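As a deliberately simplified, illustrative digital formalization of this definitional scheme (the class name, field names, and classification rule are ours, not the formal treatment cited above), the three elements and four knowledge types can be sketched as follows:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Knowledge:
    claim: str                 # the knowledge claim; partial truth is allowed
    provenance: Optional[str]  # e.g., "definition", "proof", "data"; None if absent
    inference: Optional[str]   # e.g., "deduction", "induction", "abduction"

    def kind(self):
        """Classify into one of the four fundamental types."""
        if self.provenance is None:
            return "creative"      # neither true nor false when conceived
        if self.inference == "deduction":
            return "deductive"
        if self.inference == "induction":
            return "inductive"
        return "definitional"

k = Knowledge("A triangle has three sides", provenance="definition", inference=None)
print(k.kind())  # prints: definitional
```

Such a representation could carry a degree-of-belief field alongside the claim, linking this scheme back to the fuzzy (partial-truth) machinery of Section 7.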

Author Contributions

Conceptualization, L.Z. and S.U.; methodology, L.Z. and S.U.; software, L.Z. and S.U.; validation, L.Z. and S.U.; formal analysis, S.U.; investigation, L.Z.; resources, S.U.; data curation, S.U.; writing—original draft preparation, S.U.; writing—review and editing, L.Z.; visualization, L.Z.; supervision, S.U.; project administration, S.U.; funding acquisition, S.U. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article.

Acknowledgments

The authors used ChatGPT for proofreading, taking full responsibility for the final version.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. N, N. 2025. Knowledge. Collins Dictionary. Available online: https://www.collinsdictionary.com/dictionary/english/knowledge [Accessed on 2025, 7 March].
  2. N, N. 2025. Knowledge. American Heritage Dictionary. Available online: https://www.ahdictionary.com/word/search.html?q=knowledge [Accessed on 2025, 7 March].
  3. N, N. 2025. Knowledge. Cambridge Dictionary. Available online: https://dictionary.cambridge.org/dictionary/english/knowledge [Accessed on 2025, 7 March].
  4. N, N. 2025. Knowledge. Wikipedia, The Free Encyclopedia. Available online: https://en.wikipedia.org/wiki/Knowledge [Accessed on 2025, 7 March].
  5. Chisholm, R.M. 1989. Theory of Knowledge, 3rd ed.; Prentice Hall: Englewood Cliffs, NJ, USA.
  6. Plato. 1997. Theaetetus; Cooper, J.M., Ed.; Hackett Publishing: Indianapolis, IN, USA.
  7. Fine, G. 2003. Plato on Knowledge and Forms: Selected Essays; Oxford University Press: Oxford, UK.
  8. Gettier, E.L. Is justified true belief knowledge? Analysis 1963, 23, 121–123. [Google Scholar] [CrossRef]
  9. Zagzebski, L. The inescapability of Gettier problems. Philosophical Quarterly 1994, 44, 65–73. [Google Scholar] [CrossRef]
  10. Goldman, A.I. A causal theory of knowing. The Journal of Philosophy 1967, 64, 357–372. [Google Scholar] [CrossRef]
  11. Zagzebski, L. 2017. What is knowledge? In The Blackwell Guide to Epistemology; pp. 92–116.
  12. Sosa, E. 2018. Epistemology; Princeton University Press: Princeton, NJ, USA.
  13. Mugleston, J.; Truong, V.H.; Kuang, C.; Sibiya, L.; Myung, J. Epistemology in the age of large language models. Knowledge 2025, 5, 3. [Google Scholar] [CrossRef]
  14. Hume, D. 1739–1740. A Treatise of Human Nature; Project Gutenberg. Available online: https://www.gutenberg.org/ebooks/4705 [Accessed on 2025, 20 February].
  15. Hume, D. 2006. An Enquiry Concerning Human Understanding; Selby-Bigge, L.A., Ed.; Project Gutenberg EBook No. 9662. Available online: https://www.gutenberg.org/ebooks/9662 [Accessed on 2025, 20 February].
  16. Morris, W.E.; Brown, C.R. 2023. David Hume. In The Stanford Encyclopedia of Philosophy; Zalta, E.N.; Nodelman, U., Eds.; Winter 2023 Edition; Metaphysics Research Lab, Stanford University. Available online: https://plato.stanford.edu/archives/win2023/entries/hume/ [Accessed on 2025, 20 February].
  17. Fieser, J. 2025. David Hume: Epistemology. Internet Encyclopedia of Philosophy. Available online: https://iep.utm.edu/hume-epis/ [Accessed on 2025, 20 February].
  18. Kant, I. 2003. The Critique of Pure Reason; Meiklejohn, J.M.D., Translator; Charles Aldarondo and David Widger, Contributors; Project Gutenberg EBook No. 4280. Available online: https://www.gutenberg.org/ebooks/4280 [Accessed on 2025, 20 February].
  19. Rohlf, M. 2024. Immanuel Kant. In The Stanford Encyclopedia of Philosophy; Zalta, E.N.; Nodelman, U., Eds.; Fall 2024 Edition; Metaphysics Research Lab, Stanford University. Available online: https://plato.stanford.edu/archives/fall2024/entries/kant/ [Accessed on 2025, 20 February].
  20. Allison, H.E. 2004. Kant’s Transcendental Idealism: An Interpretation and Defense, Revised and Enlarged ed.; Yale University Press: New Haven, CT, USA. Available online: https://doi.org/10.2307/j.ctt1cc2kjc [Accessed on 2025, 20 February]. [CrossRef]
  21. Ameriks, K. 2000. Kant’s Theory of Mind: An Analysis of the Paralogisms of Pure Reason; Oxford University Press: Oxford, UK.
  22. Fieser, J. 2025. Immanuel Kant: Epistemology. Internet Encyclopedia of Philosophy. Available online: https://iep.utm.edu/kant-epis/ [Accessed on 2025, 20 February].
  23. Haddock, G.E.R. On analytic a posteriori statements: Are they possible? Logique & Analyse 2015, 58, 25–33. [Google Scholar]
  24. Wikforss, Å.M. An a posteriori conception of analyticity? Grazer Philosophische Studien 2003, 66, 119–139. [Google Scholar] [CrossRef]
  25. David, M. Analyticity, Carnap, Quine, and truth. Philosophical Perspectives, Metaphysics 1996, 10, 281–296. [Google Scholar] [CrossRef]
  26. Carnap, R. 1947. Meaning and Necessity: A Study in Semantics and Modal Logic; University of Chicago Press: Chicago, IL, USA.
  27. Quine, W.V.O. 1953. From a Logical Point of View: 9 Logico-Philosophical Essays; Harvard University Press: Cambridge, MA, USA.
  28. Peirce, C.S. What pragmatism is. Monist 1905, 15, 161–181. [Google Scholar] [CrossRef]
  29. Peirce, C.S. 1992. The Essential Peirce: Selected Philosophical Writings, Volume 1 (1867–1893); Houser, N.; Kloesel, C., Eds.; Indiana University Press: Bloomington, IN, USA.
  30. James, W. 1907. Pragmatism: A New Name for Some Old Ways of Thinking; Longmans, Green and Company: New York, NY, USA.
  31. Dewey, J. 1938. Logic: The Theory of Inquiry; Henry Holt and Company: New York, NY, USA.
  32. Talisse, R.B.; Aikin, S.F. 2008. Pragmatism: A Guide for the Perplexed; Continuum International Publishing Group: London, UK.
  33. Biesta, G. 2010. Pragmatism and the philosophical foundations of mixed methods research. In SAGE Handbook of Mixed Methods in Social & Behavioral Research; Tashakkori, A.; Teddlie, C., Eds.; SAGE Publications: Thousand Oaks, CA, USA; pp. 95–118.
  34. Frega, R. From judgment to rationality: Dewey’s epistemology of practice. Human Studies 2011, 34, 33–57. [Google Scholar] [CrossRef]
  35. Hothersall, S.J. Epistemology and social work: Enhancing the integration of theory, practice and research through philosophical pragmatism. European Journal of Social Work 2019, 22, 860–870. [Google Scholar] [CrossRef]
  36. Gillespie, A.; Glăveanu, V.; de Saint Laurent, C. 2024. Pragmatism. In Pragmatism and Methodology; Gillespie, A.; Glăveanu, V.; de Saint Laurent, C., Eds.; Cambridge University Press: Cambridge, UK; pp. 1–20.
  37. Shusterman, R. 1997. Experience and self-transformation. In Practicing Philosophy: Pragmatism and the Philosophical Life; Routledge: New York, NY, USA; pp. 1–20.
  38. Hildebrand, D.L. 2003. Beyond Realism and Antirealism: John Dewey and the Neopragmatists; Vanderbilt University Press: Nashville, TN, USA.
  39. Misak, C. 2004. Truth and the End of Inquiry: A Peircean Account of Truth; Oxford University Press: Oxford, UK.
  40. Putnam, H. 1995. Pragmatism: An Open Question; Blackwell Publishers: Cambridge, MA, USA.
  41. Rorty, R. 1979. Philosophy and the Mirror of Nature; Princeton University Press: Princeton, NJ, USA.
  42. Magnani, L. 2023. Introduction to abduction, creative cognition, and discovery. In Handbook of Abductive Cognition; Magnani, L., Ed.; Springer Nature: Cham, Switzerland; pp. 1–20.
  43. Gabbay, D.M.; Kruse, R. 2023. Abductive reasoning and learning. In Handbook of Abductive Cognition; Magnani, L., Ed.; Springer Nature: Cham, Switzerland; pp. 21–40.
  44. Russell, B. 1912. The Problems of Philosophy; Oxford University Press: Oxford, UK.
  45. Russell, B.; Whitehead, A.N. 1910–1913. Principia Mathematica; Cambridge University Press: Cambridge, UK.
  46. Russell, B. 1948. Human Knowledge: Its Scope and Limits; George Allen & Unwin: London, UK.
  47. Carnap, R. 1967. The Logical Structure of the World; University of California Press: Berkeley, CA, USA. (Original work published 1928).
  48. Carnap, R. 1947. Meaning and Necessity: A Study in Semantics and Modal Logic; University of Chicago Press: Chicago, IL, USA.
  49. Carnap, R. 1959. The elimination of metaphysics through logical analysis of language. In Logical Positivism; Ayer, A.J., Ed.; The Free Press: Glencoe, IL, USA; (Original work published 1932).
  50. Quine, W.V.O. Two dogmas of empiricism. The Philosophical Review 1951, 60, 20–43. [Google Scholar] [CrossRef]
  51. Quine, W.V.O. 1953. From a Logical Point of View; Harvard University Press: Cambridge, MA, USA.
  52. Quine, W.V.O. 1969. Epistemology naturalized. In Ontological Relativity and Other Essays; Columbia University Press: New York, NY, USA; pp. 69–90.
  53. Popper, K.R. 2002. The Logic of Scientific Discovery; Routledge: London, UK. (Original work published 1934).
  54. Popper, K.R. 1963. Conjectures and Refutations: The Growth of Scientific Knowledge; Routledge: London, UK.
  55. Popper, K.R. 1977. The worlds 1, 2 and 3. In Popper, K.R.; Eccles, J.C., Eds.; The Self and Its Brain: An Argument for Interactionism; Routledge: London, UK; pp. 36–50.
  56. Popper, K.R. 1959. The Logic of Scientific Discovery, 3rd ed.; Routledge: London, UK.
  57. Staples, M. Critical rationalism and engineering: Ontology. Synthese 2014, 191, 2255–2279. [Google Scholar] [CrossRef]
  58. Staples, M. Critical rationalism and engineering: Methodology. Synthese 2015, 192, 337–362. [Google Scholar] [CrossRef]
  59. Hempel, C.G.; Oppenheim, P. Studies in the logic of explanation. Philosophy of Science 1948, 15, 135–175. [Google Scholar] [CrossRef]
  60. Hempel, C.G. 1965. Aspects of Scientific Explanation and Other Essays in the Philosophy of Science; Free Press: New York, NY, USA.
  61. Salmon, W.C. 1984. Scientific Explanation and the Causal Structure of the World; Princeton University Press: Princeton, NJ, USA.
  62. Salmon, W.C. Statistical explanation. Synthese 1970, 22, 125–130. [Google Scholar] [CrossRef]
  63. Frege, G. 1879. Begriffsschrift: A Formula Language, Modeled upon that of Arithmetic, for Pure Thought; Nebert: Halle, Germany.
  64. Frege, G. On sense and reference. Zeitschrift für Philosophie und philosophische Kritik 1892, 100, 25–50. [Google Scholar] [CrossRef]
  65. Mauthner, F. 1901–1903. Contributions to a Critique of Language (Beiträge zu einer Kritik der Sprache); Meiner Verlag: Leipzig, Germany.
  66. Wittgenstein, L. 1921. Tractatus Logico-Philosophicus; Routledge & Kegan Paul: London, UK.
  67. Wittgenstein, L. 1953. Philosophical Investigations; Blackwell Publishing: Oxford, UK.
  68. Chomsky, N. 1957. Syntactic Structures; Mouton: The Hague, The Netherlands.
  69. Austin, J.L. 1962. How to Do Things with Words; Oxford University Press: Oxford, UK.
  70. Searle, J.R. Minds, brains, and programs. Behavioral and Brain Sciences 1980, 3, 417–424. [Google Scholar] [CrossRef]
  71. Searle, J.R. 1995. The Construction of Social Reality; Free Press: New York, NY, USA.
  72. Grice, H.P. 1975. Logic and conversation. In Syntax and Semantics, Volume 3; Cole, P.; Morgan, J.L., Eds.; Academic Press: New York, NY, USA; pp. 41–58.
  73. Kripke, S. 1980. Naming and Necessity; Harvard University Press: Cambridge, MA, USA.
  74. Davidson, D. 1984. Inquiries into Truth and Interpretation; Clarendon Press: Oxford, UK.
  75. Lycan, W.G. 1987. Consciousness; MIT Press: Cambridge, MA, USA.
  76. Lycan, W.G. 2000. Philosophy of Language: A Contemporary Introduction; Routledge: London, UK.
  77. Woolfolk, A. 2015. Educational Psychology, 14th ed.; Pearson Education: Boston, MA, USA.
  78. Piaget, J. 1970. Genetic Epistemology; Columbia University Press: New York, NY, USA.
  79. Ausubel, D.P. 1968. Educational Psychology: A Cognitive View; Holt, Rinehart and Winston: New York, NY, USA.
  80. Novak, J.D.; Gowin, D.B. 1984. Learning How to Learn; Cambridge University Press: Cambridge, UK.
  81. Novak, J.D.; Cañas, A.J. The origins of the concept mapping tool and the continuing evolution of the tool. Information Visualization 2006, 5, 175–184. [Google Scholar] [CrossRef]
  82. Maker, C.J.; Zimmerman, R.H. Concept maps as assessments of expertise: Understanding of the complexity and interrelationships of concepts in science. Journal of Advanced Academics 2020, 31, 254–297. [Google Scholar] [CrossRef]
  83. Bourdieu, P. 1977. Outline of a Theory of Practice; Cambridge University Press: Cambridge, UK.
  84. Bernstein, B. 1999. Vertical and horizontal discourse: An essay. British Journal of Sociology of Education, 20, 157–173. [CrossRef]
  85. Maton, K. 2014. Knowledge and Knowers: Towards a Realist Sociology of Education; Routledge: London, UK.
  86. Maton, K. Cumulative and segmented learning: Exploring the role of knowledge structures in education. British Journal of Sociology of Education 2013, 34, 1–19. [Google Scholar]
  87. Maton, K.; Doran, Y.J. Semantic waves as a pedagogic tool: Using Legitimation Code Theory to trace knowledge-building in classroom discourse. British Journal of Sociology of Education 2017, 38, 485–505. [Google Scholar]
  88. Rootman-le Grange, I.; Blackie, M.A.L. Assessing assessment: In pursuit of meaningful learning. Chemistry Education Research and Practice 2018, 19, 484–490. [Google Scholar] [CrossRef]
  89. Kinchin, I.M.; Möllits, A.; Reiska, P. Uncovering types of knowledge in concept maps. Education Sciences 2019, 9, 131. [Google Scholar] [CrossRef]
  90. Hurley, P.J. 2017. A Concise Introduction to Logic, 13th ed.; Cengage Learning: Boston, MA, USA.
  91. Copi, I.M.; Cohen, C.; McMahon, K. 2014. Introduction to Logic, 14th ed.; McGraw-Hill Education: New York, NY, USA.
  92. Enderton, H.B. 2001. A Mathematical Introduction to Logic, 2nd ed.; Academic Press: San Diego, CA, USA.
  93. Shortliffe, E.H. 1976. Computer-Based Medical Consultations: MYCIN; Elsevier/North-Holland: New York, NY, USA.
  94. Buchanan, B.G.; Shortliffe, E.H. 1984. Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project; Addison-Wesley: Reading, MA, USA.
  95. Shortliffe, E.H.; Buchanan, B.G. A model of inexact reasoning in medicine. Mathematical Biosciences 1975, 23, 351–379. [Google Scholar] [CrossRef]
  96. Hughes, G.E.; Cresswell, M.J. 1996. A New Introduction to Modal Logic; Routledge: London, UK.
  97. Chellas, B.F. 1980. Modal Logic: An Introduction; Cambridge University Press: Cambridge, UK.
  98. Kripke, S.A. Semantical considerations on modal logic. Acta Philosophica Fennica 1963, 16, 83–94. [Google Scholar]
  99. Lewis, D.K. 1973. Counterfactuals; Harvard University Press: Cambridge, MA, USA.
  100. Hintikka, J. 1962. Knowledge and Belief: An Introduction to the Logic of the Two Notions; Cornell University Press: Ithaca, NY, USA.
  101. Fagin, R.; Halpern, J.Y.; Moses, Y.; Vardi, M.Y. 1995. Reasoning About Knowledge; MIT Press: Cambridge, MA, USA.
  102. Kripke, S.A. 1980. Naming and Necessity; Harvard University Press: Cambridge, MA, USA.
  103. Chalmers, D.J. 2010. The Character of Consciousness; Oxford University Press: Oxford, UK.
  104. Williamson, T. 2000. Knowledge and Its Limits; Oxford University Press: Oxford, UK.
  105. Zadeh, L.A. Fuzzy sets. Information and Control 1965, 8, 338–353. [Google Scholar] [CrossRef]
  106. Zadeh, L.A. The concept of a linguistic variable and its application to approximate reasoning—I. Information Sciences 1975, 8, 199–249. [Google Scholar] [CrossRef]
  107. Zadeh, L.A. A new direction in AI: Toward a computational theory of perceptions. AI Magazine 2001, 22, 73–84. [Google Scholar]
  108. Zadeh, L.A. Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets and Systems 1978, 1, 3–28. [Google Scholar] [CrossRef]
  109. Dubois, D.; Prade, H. 1988. Possibility Theory: An Approach to Computerized Processing of Uncertainty; Plenum Press: New York, NY, USA.
  110. Zimmermann, H.-J. 2001. Fuzzy Set Theory—and Its Applications, 4th ed.; Springer: Boston, MA, USA.
  111. Mamdani, E.H.; Assilian, S. An experiment in linguistic synthesis with a fuzzy logic controller. International Journal of Man-Machine Studies 1975, 7, 1–13. [Google Scholar] [CrossRef]
  112. Takagi, T.; Sugeno, M. Fuzzy identification of systems and its applications to modeling and control. IEEE Transactions on Systems, Man, and Cybernetics 1985, 15, 116–132. [Google Scholar] [CrossRef]
  113. Domingos, P. 2018. The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World; Basic Books: New York, NY, USA.
  114. Wu, X.; Kumar, V., Eds. 2009. The Top Ten Algorithms in Data Mining; Chapman & Hall/CRC Press: Boca Raton, FL, USA.
  115. Kroll, E. 2023. Introduction to abduction and technological design. In Handbook of Abductive Cognition; Magnani, L., Ed.; Springer International Publishing: Cham, Switzerland; pp. 1319–1324.
  116. Ura, S. 2023. Logical processes underlying creative and innovative design. In Handbook of Abductive Cognition; Magnani, L., Ed.; Springer International Publishing: Cham, Switzerland; pp. 1363–1384.
  117. Quinlan, J.R. Induction of decision trees. Machine Learning 1986, 1, 81–106. [Google Scholar] [CrossRef]
  118. Quinlan, J.R. Improved use of continuous attributes in C4.5. Journal of Artificial Intelligence Research 1996, 4, 77–90. [Google Scholar] [CrossRef]
  119. Rosenblatt, F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review 1958, 65, 386–408. [Google Scholar] [CrossRef]
  120. Rosenblatt, F. 1962. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms; Spartan Books: Washington, DC, USA.
  121. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  122. Rumelhart, D.E.; McClelland, J.L., Eds. 1986. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations; MIT Press: Cambridge, MA, USA.
  123. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  124. Darwin, C. 1859. On the Origin of Species by Means of Natural Selection; John Murray: London, UK.
  125. Holland, J.H. 1975. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence; University of Michigan Press: Ann Arbor, MI, USA.
  126. Goldberg, D.E. 1989. Genetic Algorithms in Search, Optimization, and Machine Learning; Addison-Wesley: Reading, MA, USA.
  127. Koza, J.R. 1992. Genetic Programming: On the Programming of Computers by Means of Natural Selection; MIT Press: Cambridge, MA, USA.
  128. Kauffman, S. 1993. The Origins of Order: Self-Organization and Selection in Evolution; Oxford University Press: New York, NY, USA.
  129. Bayes, T. An essay towards solving a problem in the doctrine of chances. Philosophical Transactions of the Royal Society of London 1763, 53, 370–418. [Google Scholar] [CrossRef]
  130. Bishop, C.M. 2006. Pattern Recognition and Machine Learning; Springer: New York, NY, USA.
  131. Vapnik, V.N. 1999. The Nature of Statistical Learning Theory, 2nd ed.; Springer: New York, NY, USA.
  132. Baum, L.E.; Petrie, T. Statistical inference for probabilistic functions of finite state Markov chains. Annals of Mathematical Statistics 1966, 37, 1554–1563. [Google Scholar] [CrossRef]
  133. Spearman, C. General intelligence, objectively determined and measured. American Journal of Psychology 1904, 15, 201–293. [Google Scholar] [CrossRef]
  134. Hotelling, H. Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology 1933, 24, 417–441. [Google Scholar] [CrossRef]
  135. Hastie, T.; Tibshirani, R.; Friedman, J. 2009. The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed.; Springer: New York, NY, USA.
  136. Sharif Ullah, A.M.M.; Harib, K.H. A human-assisted knowledge extraction method for machining operations. Advanced Engineering Informatics 2006, 20, 335–350. [Google Scholar] [CrossRef]
  137. Sharif Ullah, A.M.M.; Shamsuzzaman, M. Fuzzy Monte Carlo simulation using point-cloud-based probability–possibility transformation. Simulation: Transactions of the Society for Modeling and Simulation International 2013, 89, 860–875. [Google Scholar] [CrossRef]
138. Zadeh, L.A. Toward a theory of fuzzy information granulation and its centrality in human reasoning and fuzzy logic. Fuzzy Sets and Systems 1997, 90, 111–127.
139. Sharif Ullah, A.M.M.; Noor-E-Alam, M. Big data driven graphical information based fuzzy multi-criteria decision making. Applied Soft Computing 2018, 63, 23–38.
140. Crick, F. Central dogma of molecular biology. Nature 1970, 227, 561–563.
141. Cobb, M. 60 years ago, Francis Crick changed the logic of biology. PLoS Biology 2017, 15, e2003243.
142. Sharif Ullah, A.M.M.S. A DNA-based computing method for solving control chart pattern recognition problems. CIRP Journal of Manufacturing Science and Technology 2010, 3, 293–303.
143. Sharif Ullah, A.M.M.S.; D’Addona, D.; Arai, N. DNA-based computing for understanding complex shapes. Biosystems 2014, 117, 40–53.
144. Iwadate, K.; Ullah, S. Determining outer boundary of a complex point-cloud using DNA-based computing. Transactions of the Japan Society for Evolutionary Computation 2020, 11, 1–8. (In Japanese)
145. Kubo, A.; Teti, R.; Sharif Ullah, A.S.; Iwadate, K.; Segreto, T. Determining surface topography of a dressed grinding wheel using bio-inspired DNA-based computing. Materials 2021, 14, 1899.
146. Ura, S.; Zaman, L. Biologicalization of smart manufacturing using DNA-based computing. Biomimetics 2023, 8, 620.
147. Ura, S. Machine learning using DNA-based computing. In Proceedings of the IEEE 13th Global Conference on Consumer Electronics (GCCE), Kitakyushu, Japan, 2024; pp. 1026–1029.
148. Shannon, C.E. A mathematical theory of communication. Bell System Technical Journal 1948, 27, 379–423.
149. Kosko, B. Neural Networks and Fuzzy Systems: A Dynamical Systems Approach to Machine Intelligence; Prentice Hall: Englewood Cliffs, NJ, USA, 1992.
150. Ghosh, A.K.; Ura, S. Leveraging DNA-based computing to improve the performance of artificial neural networks in smart manufacturing. Machine Learning and Knowledge Extraction 2025, 7, 96.
161. Ullah, A.S. What is knowledge in Industry 4.0? Engineering Reports 2020, 2, e12217.
162. Hatchuel, A.; Weil, B. C-K design theory: An advanced formulation. Research in Engineering Design 2008, 19, 181–192.
163. Sharif Ullah, A.M.M.; Rashid, M.M.; Tamaki, J. On some unique features of C-K theory of design. CIRP Journal of Manufacturing Science and Technology 2012, 5, 55–66.
164. Kutari, L.D.; Ura, S. Improving reverse engineering processes using C-K theory of design. Research in Engineering Design 2025, 36, 19.
Table 1. Hume’s fork of knowledge.

| Aspects | Relations of Ideas | Matters of Fact |
| --- | --- | --- |
| Definition | Statements that are necessarily true and known through reason alone (a priori). | Statements about the world known through experience (a posteriori). |
| Examples | 3 + 9 = 12; All bachelors are unmarried men; A rectangle has four sides. | The sun will rise tomorrow; Water boils at 100 °C at sea level; The Eiffel Tower is in Paris. |
| Truth Value | Necessarily true; denying them leads to a contradiction. | Contingently true; denying them does not lead to a contradiction. |
| Basis of Knowledge | Reason and logic. | Sensory experience and observation. |
| Mode of Verification | Demonstrated through logical proof or deduction. | Verified through empirical evidence and induction. |
| Certainty | Absolute certainty. | Probable but not certain; open to doubt. |
| Implication of Error | Impossible, as they are analytically true. | Possible, as they rely on induction and experience. |
Table 2. Kant’s theory of knowledge.

| Aspects | Analytic Judgments | Synthetic Judgments | Synthetic A Priori Judgments |
| --- | --- | --- | --- |
| Definition | Predicate contained in the subject; true by definition. | Predicate adds new information to the subject. | Informative judgments that are necessarily true and known independently of experience. |
| Examples | All bachelors are unmarried; A triangle has three sides. | The cat is on the mat; The sky is blue. | 7 + 5 = 12; Every event has a cause; Space and time are forms of intuition. |
| Basis of Knowledge | Reason alone (a priori). | Empirical observation (a posteriori). | Reason and necessity, yet informative (a priori). |
| Certainty | Absolutely certain. | Contingent, dependent on experience. | Absolutely certain and necessary, yet informative. |
| Role in Knowledge | Clarifies concepts without adding new knowledge. | Expands knowledge through experience. | Expands knowledge without empirical evidence. |
Table 3. Hume–Kant theory of knowledge.

| Domain | Analytic | Synthetic |
| --- | --- | --- |
| Rational (Ideal, A Priori) | Analytic A Priori (Kant): True by definition, necessarily true without experience (e.g., “All bachelors are unmarried”). | Synthetic A Priori (Kant; Hume’s relations of ideas): Informative and necessarily true, expanding knowledge without experience (e.g., “7 + 5 = 12”, “Every event has a cause”). |
| Real (Empirical, A Posteriori) | Analytic A Posteriori: Not possible according to Kant, as analytic truths do not rely on experience. | Synthetic A Posteriori (Kant; Hume’s matters of fact): Contingent truths known through experience (e.g., “The sun will rise tomorrow”, “Water boils at 100 °C at sea level”). |
Table 4. Pragmatism in the theory of knowledge.

| Aspect | Definition | Examples | Characteristics | Role in Knowledge |
| --- | --- | --- | --- | --- |
| Practical Knowledge (Knowing-How) | Knowledge applied to solve real-life problems through skills and experience. | Knowing how to ride a bicycle; Cooking a meal without a recipe; Operating complex machinery in a factory. | Context-dependent; gained through practice and experience. | Guides action; adapts knowledge to practical situations. |
| Contextual Knowledge | Knowledge whose validity depends on the situation and context. | Understanding local customs in international business; Tailoring medical treatments based on patient history. | Flexible; adapts to changing environments and needs. | Enhances relevance of knowledge in specific contexts. |
| Instrumental Knowledge | Knowledge valued for its usefulness in achieving specific goals. | Using statistical software to analyze data; Applying marketing strategies to boost sales. | Utility-driven; focused on outcomes and results. | Provides tools and methods for problem-solving. |
| Experiential Knowledge | Knowledge gained through personal or collective experience. | A firefighter’s knowledge of handling emergencies; An entrepreneur’s understanding of market dynamics after failures. | Derived from trial, error, and reflection. | Builds expertise; informs decision-making. |
| Adaptive Knowledge | Knowledge that evolves through problem-solving and learning from feedback. | Updating cybersecurity measures in response to new threats; Adjusting business strategies after customer feedback. | Dynamic and iterative; responsive to new information. | Ensures knowledge remains relevant and effective. |
| Socially Constructed Knowledge | Knowledge created and validated within communities or societies. | Legal systems and their evolution; Scientific paradigms accepted by research communities. | Emerges from collaboration, dialogue, and consensus. | Shapes collective understanding and shared practices. |
Table 5. Theory of knowledge in philosophy of science.

| Categories | Descriptions |
| --- | --- |
| Key Issue | Empiricism and logical analysis (Russell); logical positivism and verification (Carnap); naturalized epistemology and holism (Quine); critical rationalism and falsification (Popper); logical empiricism and explanation (Hempel); causal-mechanical explanation (Salmon). |
| Approach to Knowledge | Knowledge by acquaintance and description (Russell); empirical verification and logical reconstruction (Carnap); holistic web of beliefs, empirical revision (Quine); conjectures and refutations, no absolute certainty (Popper); deductive and probabilistic reasoning (Hempel); causal mechanisms and statistical relevance (Salmon). |
| Scientific Knowledge | Empirical observation and logical inference, supporting realism (Russell); logical syntax and the verification principle (Carnap); no sharp line between mathematics, logic, and empirical science (Quine); falsifiable hypotheses, provisional knowledge (Popper); logical structures (D-N and I-S models) (Hempel); captures causal structures (C-M model) (Salmon). |
| Role of Language | Requires logical analysis for clarity (Russell); formal languages to remove ambiguity (Carnap); part of the web of belief, no privileged language (Quine); essential for falsifiable statements (Popper); logical structures for verification (Hempel); describes causal processes and interactions (Salmon). |
| Role of Probability | Probabilistic reasoning recognized but not central (Russell); key in confirmation theory, logical probability (Carnap); relevant in testing, revisionary knowledge (Quine); central in falsification, tentative knowledge claims (Popper); high-probability explanations preferred (I-S model) (Hempel); probability indicates causal relevance (S-R model) (Salmon). |
| View on Causality | Focus on logical foundations, not causality (Russell); minimal focus, logical relations prioritized (Carnap); empirical regularities within frameworks (Quine); emphasis on refuting causal hypotheses (Popper); law-like generalizations, minimal causality focus (Hempel); strong emphasis on causal processes and interactions (Salmon). |
Table 6. Knowledge in philosophy of language.

| Philosophers of Language | Contribution to Epistemology |
| --- | --- |
| Frege | Introduced the distinction between sense and reference, influencing how language conveys knowledge. |
| Mauthner | Argued that misunderstandings of language lead to philosophical problems, highlighting language’s limits in conveying knowledge. |
| Wittgenstein | Emphasized that knowledge depends on the use of language in specific contexts and shared practices. |
| Chomsky | Suggested that knowledge of language is innate, with linguistic structures shaping understanding. |
| Austin | Demonstrated how language performs actions, shaping knowledge claims in social contexts. |
| Searle | Discussed how knowledge is shaped by objective facts, collective consensus, and intentionality; questioned AI’s understanding through the Chinese Room argument. |
| Grice | Showed that clarity, truth, and relevance in communication are key to justified knowledge claims. |
| Kripke | Explored how identity and necessity impact knowledge, distinguishing necessary and contingent truths. |
| Davidson | Linked meaning and truth, emphasizing the social nature of knowledge through rational interpretation. |
| Lycan | Examined how linguistic structures influence cognition and understanding of the external world. |
Table 7. Laws and principles of propositional logic.

| Principles/Laws | Formal Symbols | Meanings |
| --- | --- | --- |
| Law of Identity | P ⇒ P | Everything is identical to itself. |
| Law of Non-Contradiction | ¬(P ∧ ¬P) | Nothing can be both true and false. |
| Law of Excluded Middle | P ∨ ¬P | Every statement is either true or false. |
| Principle of Bivalence | Truth(P) ∈ {T, F} | There are only two truth values: true (T) and false (F). |
| Principle of Contraposition | (P → Q) ⇔ (¬Q → ¬P) | If P implies Q, then not-Q implies not-P. |
| Principle of Explosion | (P ∧ ¬P) ⇒ Q | From a contradiction, anything follows. |
| Double Negation | ¬(¬P) ⇔ P | The negation of a negation equals the original. |
| De Morgan’s Laws | ¬(P ∧ Q) ⇔ (¬P ∨ ¬Q); ¬(P ∨ Q) ⇔ (¬P ∧ ¬Q) | Negation distributes over conjunction/disjunction. |
| Commutativity | P ∧ Q ⇔ Q ∧ P; P ∨ Q ⇔ Q ∨ P | Order does not matter for AND/OR. |
| Associativity | (P ∧ Q) ∧ R ⇔ P ∧ (Q ∧ R); (P ∨ Q) ∨ R ⇔ P ∨ (Q ∨ R) | Grouping does not matter for AND/OR. |
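Because propositional logic has only two truth values, each law in Table 7 can be checked mechanically by enumerating every truth assignment. The following minimal sketch (the helper names `is_tautology` and `implies` are ours, not from the article) verifies a few of the laws in Python:

```python
from itertools import product

def is_tautology(formula, num_vars):
    """True iff `formula` holds under every truth assignment."""
    return all(formula(*vals) for vals in product([True, False], repeat=num_vars))

def implies(a, b):
    # Material implication: P -> Q is false only when P is true and Q is false.
    return (not a) or b

# Law of Excluded Middle: P ∨ ¬P
assert is_tautology(lambda p: p or not p, 1)
# Law of Non-Contradiction: ¬(P ∧ ¬P)
assert is_tautology(lambda p: not (p and not p), 1)
# Principle of Contraposition: (P → Q) ⇔ (¬Q → ¬P)
assert is_tautology(lambda p, q: implies(p, q) == implies(not q, not p), 2)
# De Morgan: ¬(P ∧ Q) ⇔ (¬P ∨ ¬Q)
assert is_tautology(lambda p, q: (not (p and q)) == ((not p) or (not q)), 2)
```

Enumeration scales as 2^n in the number of variables, which is exactly why such exhaustive checking is feasible for textbook laws but not for large knowledge bases.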
Table 8. Inference rules of propositional logic.

| Rule | Form | Example |
| --- | --- | --- |
| Modus Ponens | (P → Q, P) ⊢ Q | If it rains, then the ground gets wet. It rains. ∴ The ground must be wet. |
| Modus Tollens | (P → Q, ¬Q) ⊢ ¬P | If it is a dog, then it barks. It does not bark. ∴ It is not a dog. |
| Hypothetical Syllogism | (P → Q, Q → R) ⊢ (P → R) | If I study, then I pass. If I pass, then I graduate. ∴ If I study, then I graduate. |
| Disjunctive Syllogism | (P ∨ Q, ¬P) ⊢ Q | It is either coffee or tea. It is not coffee. ∴ It is tea. |
| Constructive Dilemma | (P → Q, R → S, P ∨ R) ⊢ (Q ∨ S) | If I exercise, then I will be healthy. If I eat well, then I will be energized. I either exercise or eat well. ∴ I will be healthy or energized. |
| Conjunction Introduction | (P, Q) ⊢ (P ∧ Q) | It is cold. It is raining. ∴ It is cold and raining. |
| Conjunction Elimination | (P ∧ Q) ⊢ P | I am tired and hungry. ∴ I am tired. |
| Addition (∨ Introduction) | P ⊢ (P ∨ Q) | It is Monday. ∴ It is Monday or Friday. |
| Double Negation | P ⊢ ¬¬P | It is sunny. ∴ It is not not sunny. |
| De Morgan’s Laws | ¬(P ∧ Q) ≡ (¬P ∨ ¬Q); ¬(P ∨ Q) ≡ (¬P ∧ ¬Q) | Not (hot and humid) ≡ not hot, or not humid. |
| Law of Excluded Middle | ⊨ (P ∨ ¬P) | It is either snowing or it is not. |
| Law of Non-Contradiction | ⊨ ¬(P ∧ ¬P) | It cannot be both raining and not raining. |
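An inference rule in Table 8 is valid precisely when no truth assignment makes all premises true while the conclusion is false. This semantic test can be sketched in a few lines of Python (the helper names `is_valid` and `implies` are illustrative, not from the article):

```python
from itertools import product

def implies(a, b):
    # Material implication.
    return (not a) or b

def is_valid(premises, conclusion, num_vars):
    """A rule is valid iff every assignment satisfying all premises
    also satisfies the conclusion."""
    return all(
        conclusion(*vals)
        for vals in product([True, False], repeat=num_vars)
        if all(p(*vals) for p in premises)
    )

# Modus Ponens: (P → Q, P) ⊢ Q
assert is_valid([lambda p, q: implies(p, q), lambda p, q: p],
                lambda p, q: q, 2)
# Modus Tollens: (P → Q, ¬Q) ⊢ ¬P
assert is_valid([lambda p, q: implies(p, q), lambda p, q: not q],
                lambda p, q: not p, 2)
# The fallacy of affirming the consequent is rejected: (P → Q, Q) ⊬ P
assert not is_valid([lambda p, q: implies(p, q), lambda p, q: q],
                    lambda p, q: p, 2)
```

The last check is worth noting: the same machinery that certifies valid rules also exposes invalid ones, which is the semantic counterpart of the turnstile ⊢ used in the table.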
Table 9. Principles and laws of modal logic.

| Principles/Laws | Formal Symbols | Meanings |
| --- | --- | --- |
| Necessitation Rule | ⊢ P ⇒ ⊢ □P | If a proposition is provable, it is necessarily true. |
| Distribution Axiom (K) | □(P → Q) → (□P → □Q) | If it is necessary that P implies Q, then if P is necessary, Q is also necessary. |
| Axiom T (Reflexivity) | □P → P | Whatever is necessary is true. |
| Axiom 4 (Transitivity) | □P → □□P | If something is necessary, then it is necessarily necessary. |
| Axiom 5 (Euclidean/Symmetry) | ◇P → □◇P | If something is possible, then it is necessarily possible. |
| Duality Principle | □P ⇔ ¬◇¬P | Necessity and possibility are duals of each other. |
| Possibility of Truth | P → ◇P | If something is true, then it is possible. |
| Law of Modal Contradiction | ¬(□P ∧ □¬P) | Nothing can be both necessarily true and necessarily false. |
| Law of Modal Excluded Middle | □P ∨ □¬P (in strong systems) | Every proposition is either necessarily true or necessarily false. |
| Axiom of Identity | □(P → P) | Every proposition necessarily implies itself. |
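The modal operators in Table 9 acquire precise meaning in Kripke semantics: □P holds at a world when P holds at every accessible world, and ◇P when P holds at some accessible world. The toy model below (the three worlds, the accessibility relation, and the valuation are invented purely for illustration) shows how Axiom T and the Duality Principle can be checked against such a model:

```python
# A tiny Kripke model: worlds, an accessibility relation, and a valuation
# saying which atomic propositions hold at which world.
worlds = {"w1", "w2", "w3"}
access = {"w1": {"w1", "w2"}, "w2": {"w2"}, "w3": {"w3"}}  # reflexive relation
val = {"w1": {"P"}, "w2": {"P"}, "w3": set()}              # atoms true per world

def box(w, pred):
    # Necessity: pred holds in every world accessible from w.
    return all(pred(v) for v in access[w])

def dia(w, pred):
    # Possibility: pred holds in some world accessible from w.
    return any(pred(v) for v in access[w])

P = lambda w: "P" in val[w]

# Axiom T (□P → P) holds everywhere because the relation is reflexive:
for w in worlds:
    assert (not box(w, P)) or P(w)

# Duality Principle (□P ⇔ ¬◇¬P) holds at every world of every model:
for w in worlds:
    assert box(w, P) == (not dia(w, lambda v: not P(v)))
```

The correspondence is general: Axiom T is valid exactly on reflexive frames, Axiom 4 on transitive frames, and Axiom 5 on Euclidean frames, which is why those frame conditions appear in the table's axiom names.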
Table 11. A landscape of Machine Learning Algorithms (MLAs).

| Tribes | Origins | Themes | Example Algorithms |
| --- | --- | --- | --- |
| Symbolists | Logic, Philosophy | Inverse Deduction | ID3, C4.5, C5.0 |
| Connectionists | Neuroscience | Backpropagation | Artificial Neural Networks (ANNs), Deep Neural Networks (DNNs) |
| Evolutionists | Evolutionary Biology | Self-Organization | Genetic Algorithms, Genetic Programming |
| Bayesianists | Statistics | Probabilistic Inference | Naive Bayes, Hidden Markov Models (HMMs) |
| Analogists | Psychology | Kernel Machines | Support Vector Machines (SVMs), k-Nearest Neighbors (k-NN) |
| Possibilists | Multi-valued Logic | Naturalistic Computing | Mamdani/Sugeno Fuzzy Models, Probability-Possibility Transformation |
| Informationists | Molecular Biology | Protein Synthesis | DNA-Based Computing |
| Hybridists | Multiple Tribes | Integration | Neuro-Fuzzy Systems |
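To make the "Possibilists" entry concrete, the sketch below implements a single-rule, Mamdani-style fuzzy inference (IF temperature is "hot" THEN fan speed is "high") with centroid defuzzification. It is a minimal illustration under our own assumed membership functions (`tri`, `fan_speed`, and all numeric ranges are hypothetical), not an implementation from the article:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a to its peak at b,
    falling back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(temp):
    """One Mamdani rule: IF temp is 'hot' THEN speed is 'high'."""
    fire = tri(temp, 20.0, 35.0, 50.0)            # degree to which temp is "hot"
    speeds = [s / 10 for s in range(0, 1001)]     # candidate speeds 0..100 (%)
    # Clip the "high" output set at the rule's firing strength (min operator),
    # then defuzzify by the centroid of the clipped set.
    mu = [min(fire, tri(s, 50.0, 100.0, 150.0)) for s in speeds]
    total = sum(mu)
    return sum(s * m for s, m in zip(speeds, mu)) / total if total else 0.0
```

A cool input such as `fan_speed(20.0)` fires the rule to degree zero and yields speed 0, while `fan_speed(35.0)` fires it fully and yields a speed near the centroid of the "high" set; intermediate temperatures interpolate smoothly between these, which is the graded behavior that separates fuzzy models from the bivalent logic of Table 7.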
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.