Preprint Article | Version 1 | Preserved in Portico | This version is not peer-reviewed

Shannon Holes, Black Holes and Knowledge: Can a Machine become a “Self-Aware” Teammate?

Version 1 : Received: 17 February 2024 / Approved: 18 February 2024 / Online: 19 February 2024 (14:31:26 CET)

How to cite: ---, W.L.; Moskowitz, A.I.S. Shannon Holes, Black Holes and Knowledge: Can a Machine become a “Self-Aware” Teammate?. Preprints 2024, 2024021035. https://doi.org/10.20944/preprints202402.1035.v1

Abstract

We develop new theory from a broad case-study approach to better understand what constitutes knowledge and its value as identified by human-machine teams. Building on past research, this line of exploration leads to these initial questions: What is the value of debate in the furtherance of knowledge? Will machines with AI be able to contribute to a debate if we humans cannot define debate or determine its value sufficiently for a machine’s understanding, contribution, exploration and identification? Like their human teammates, machines must be able to determine with AI what constitutes the usable knowledge that contributes to a team’s success in the field (e.g., testing “knowledge” in the field, identifying new knowledge, using knowledge to develop innovation) or to its failure (viz., troubleshooting; identifying weaknesses; discovering vulnerabilities; hiding by deception). Whether a debate is public, private or unexpressed by an individual human or machine agent alone does not matter; we speculate in this exploration that the process advances the science of autonomous human-machine teams and assists in interpretable machine learning. We conclude with questions and a speculation: How does a human become aware of, or express its awareness of, knowledge? Can a machine be as expressive as its human teammates? And how does a human-machine teammate become aware that its teammates possess sufficient knowledge to perform a task? We speculate that the structure of “knowledge,” once found, is resistant to alternatives (i.e., it is ordered); that its functional utility is generalizable; and that its applications are multifaceted (akin to maximum entropy production). The complexity of the team is taken into consideration in our search for knowledge, which can also serve as an information metric.
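As a rough illustration of the information metric alluded to above (our assumption; the abstract itself states no formula), the Shannon entropy of a team's state distribution is one candidate measure, with higher entropy indicating greater uncertainty remaining in the team's knowledge of its task:

H(X) = -\sum_{i=1}^{n} p(x_i) \log_2 p(x_i)

where p(x_i) denotes the probability that the team occupies state x_i; the notation and the choice of metric are hypothetical, offered only as a sketch of how such a measure might be computed.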

Keywords

debate; knowledge; entropy; interdependence; human-machine teammates; autonomy; complexity; embodied cognition; information; vulnerability

Subject

Computer Science and Mathematics, Robotics
