ARTICLE | doi:10.20944/preprints201709.0136.v1
Online: 27 September 2017 (04:45:28 CEST)
This paper builds on previous work introducing the Secure Remote Update Protocol (SRUP), a secure communications protocol for Command & Control applications in the Internet of Things, built on top of MQTT. It extends the original protocol with a number of additional message types, adding new capabilities. We also discuss the difficulty of proving that a physical device has an identity corresponding to a logical device on the network, and propose a mechanism within the protocol to overcome this.
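The abstract does not detail the identity-binding mechanism; as an illustrative sketch only (not SRUP's actual design), one way a physical device can prove it holds the key bound to a logical identity is to answer a fresh nonce challenge with an HMAC:

```python
import hashlib
import hmac
import secrets

def make_challenge() -> bytes:
    # Verifier issues a fresh random nonce (hypothetical helper name)
    return secrets.token_bytes(16)

def device_response(device_key: bytes, nonce: bytes) -> bytes:
    # Device proves possession of the key tied to its logical identity
    return hmac.new(device_key, nonce, hashlib.sha256).digest()

def verify_identity(expected_key: bytes, nonce: bytes, response: bytes) -> bool:
    # Recompute and compare in constant time to resist timing attacks
    expected = hmac.new(expected_key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

A replayed response fails because each challenge uses a new nonce; a device with the wrong key fails verification outright.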
ARTICLE | doi:10.20944/preprints202306.0520.v1
Subject: Physical Sciences, Acoustics Keywords: active noise control; phase-scheduled-command FXLMS; active sound profiling; phase error.
Online: 7 June 2023 (08:52:09 CEST)
In this paper, the Phase-Scheduled-Command FXLMS algorithm is analyzed in detail with respect to the phase error between the disturbance and the command signal. The influence of the phase error on the convergence time constant, convergence rate, and convergence performance is explained for both stationary and nonstationary disturbance signals. For a stationary disturbance, the phase error slightly increases the convergence rate but greatly increases the distance of the optimum vector from the initial value, leading to a poor convergence time constant. For a nonstationary disturbance, the phase error degrades convergence at every step, resulting in poor sound-profiling performance. A closed-form estimate of the influence of the phase error is also derived. Simulations are performed to demonstrate the validity of the analysis.
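For readers unfamiliar with the baseline algorithm the paper builds on, a minimal FXLMS loop can be sketched as follows. This is plain FXLMS for cancellation (i.e., a zero command signal), not the Phase-Scheduled-Command variant, and the tone frequency, secondary-path model, and step size are all made-up illustrative values:

```python
import math

def fxlms_demo(n_steps=2000, mu=0.005, taps=8):
    """Cancel a 100 Hz tone (fs = 1 kHz) through a known secondary path
    modeled as a one-sample delay with gain 0.9 -- all values illustrative."""
    omega = 2.0 * math.pi * 100.0 / 1000.0
    sec = [0.0, 0.9]              # secondary path S(z): delay + gain
    w = [0.0] * taps              # adaptive control filter W(z)
    xbuf = [0.0] * taps           # reference-signal history
    fxbuf = [0.0] * taps          # filtered-reference history
    ybuf = [0.0, 0.0]             # anti-noise history (input to S(z))
    errs = []
    for n in range(n_steps):
        x = math.sin(omega * n)   # reference equals disturbance here
        xbuf = [x] + xbuf[:-1]
        fx = sec[0] * xbuf[0] + sec[1] * xbuf[1]     # x filtered by S-hat
        fxbuf = [fx] + fxbuf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, xbuf))  # anti-noise output
        ybuf = [y] + ybuf[:-1]
        e = x + sec[0] * ybuf[0] + sec[1] * ybuf[1]  # residual at error mic
        # FXLMS update: w(n+1) = w(n) - mu * e(n) * x'(n)
        w = [wi - mu * e * fxi for wi, fxi in zip(w, fxbuf)]
        errs.append(e)
    return errs
```

Active sound profiling replaces the zero target with a nonzero command signal in the error computation, which is where the phase error analyzed in the paper enters.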
ARTICLE | doi:10.20944/preprints201904.0252.v1
Subject: Computer Science And Mathematics, Data Structures, Algorithms And Complexity Keywords: multi-agent; HTN; distributed architecture; command and control model; algorithm performance comparison
Online: 23 April 2019 (11:01:03 CEST)
For the task planning problem in command and control architectures, existing algorithms suffer from low efficiency and poor re-planning quality under abnormal conditions. Based on the requirements of current command and control architectures, this paper constructs a distributed command and control architecture model based on multi-agents, exploiting the strength of multi-agent systems in handling complex tasks. The concept of MultiAgents-HTN is proposed within this framework: the original hierarchical task network (HTN) planning algorithm is optimized, the multi-agent collaboration framework is redefined, and a coordination mechanism for local conflicts is designed. Taking the classical resource scheduling problem as the experimental background, the proposed algorithm is compared with the classical HTN algorithm. The experimental results show that the proposed algorithm produces higher-quality plans more efficiently than the existing algorithm, re-plans more effectively when anomalies occur during processing, and has better time complexity on the same problems, with good convergence and adaptability. These results indicate that the distributed command and control architecture proposed in this paper has high practicability in related fields and can solve the task planning problem of distributed command and control in a multi-agent environment.
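The MultiAgents-HTN algorithm itself is not reproduced in the abstract; for orientation, a minimal single-agent HTN decomposition planner in the SHOP style, with a toy resource-scheduling domain and hypothetical task names, looks like this:

```python
def htn_plan(state, tasks, operators, methods):
    """Return a list of primitive actions achieving `tasks`, or None."""
    if not tasks:
        return []
    head, rest = tasks[0], list(tasks[1:])
    if head in operators:                        # primitive task
        new_state = operators[head](dict(state))
        if new_state is None:                    # precondition failed
            return None
        tail = htn_plan(new_state, rest, operators, methods)
        return None if tail is None else [head] + tail
    for subtasks in methods.get(head, []):       # compound task: try methods
        plan = htn_plan(state, list(subtasks) + rest, operators, methods)
        if plan is not None:
            return plan
    return None

# Toy resource-scheduling domain (illustrative names only)
operators = {
    "load":   lambda s: {**s, "loaded": True} if s.get("at_depot") else None,
    "move":   lambda s: {**s, "at_depot": False} if s.get("loaded") else None,
    "unload": lambda s: {**s, "loaded": False} if not s.get("at_depot") else None,
}
methods = {"deliver": [["load", "move", "unload"]]}
```

The distributed version in the paper additionally partitions such tasks across agents and resolves local conflicts during decomposition.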
ARTICLE | doi:10.20944/preprints202007.0513.v1
Subject: Engineering, Control And Systems Engineering Keywords: C2; command and control; Identity; Internet of Things; IoT; MQTT; NFC; security; QR Code
Online: 22 July 2020 (10:17:33 CEST)
This paper examines dynamic identity as it pertains to the IoT, and explores the practical implementation of a mitigation for some of the key weaknesses of a conventional dynamic identity model. It examines human-centric and machine-based observer approaches for confirming device identity, permitting automated identity confirmation for deployed systems. It also assesses the advantages of dynamic identity in the context of identity revocation, permitting secure change of ownership for IoT devices. The paper explores use-cases for human- and machine-based observation for authenticating device identity when devices join a C2 network, and considers the relative merits of the two approaches for different types of system.
ARTICLE | doi:10.20944/preprints201805.0220.v1
Subject: Engineering, Control And Systems Engineering Keywords: integrated guidance and autopilot; neural network; extended state observer; command filter; back-stepping control
Online: 16 May 2018 (05:48:05 CEST)
This paper focuses on integrated guidance and autopilot design with control input saturation in the end-game phase of hypersonic flight. Firstly, an uncertain nonlinear integrated guidance and autopilot model is developed with third-order actuator dynamics, where the control surface deflection is magnitude-constrained. Secondly, a neural network is employed in the extended state observer (ESO) design to estimate the complex model uncertainty, nonlinearity, and state coupling. Thirdly, a command-filtered back-stepping controller is designed with hybrid sliding surfaces to improve terminal performance. In the process, different command filters are implemented to avoid the influence of disturbances and repeated differentiation, and to solve the problem of unknown control direction caused by saturation. The stability of the closed-loop system is proved by Lyapunov theory, and the principles governing the controller parameters follow from the proof. Finally, a series of 6-DOF numerical simulations is presented to show the feasibility and validity of the proposed controller.
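The specific command filters used in the paper are not given in the abstract; a generic second-order command filter with magnitude saturation, of the commonly used form x_c'' = -2*zeta*wn*x_c' - wn^2*(x_c - sat(u)), can be sketched with forward-Euler integration (all gains illustrative):

```python
def command_filter(cmd_seq, wn=20.0, zeta=0.9, dt=0.001, limit=1.0):
    """Track a saturated command with a smooth, differentiable signal.
    wn, zeta, dt, and limit are illustrative values, not the paper's."""
    x1, x2 = 0.0, 0.0                  # filtered command and its derivative
    out = []
    for u in cmd_seq:
        u_sat = max(-limit, min(limit, u))   # magnitude constraint
        x1 += dt * x2                        # forward-Euler step
        x2 += dt * (-2.0 * zeta * wn * x2 - wn * wn * (x1 - u_sat))
        out.append(x1)
    return out
```

In command-filtered back-stepping, x1 replaces the virtual control and x2 supplies its derivative, so the repeated analytic differentiation of standard back-stepping is avoided.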
ARTICLE | doi:10.20944/preprints202101.0621.v1
Subject: Engineering, Electrical And Electronic Engineering Keywords: Speech Command; MFCC; Tsetlin Machine; Learning Automata; Pervasive AI; Machine Learning; Artificial Neural Network; Keyword Spotting
Online: 29 January 2021 (13:01:47 CET)
The emergence of Artificial Intelligence (AI) driven Keyword Spotting (KWS) technologies has revolutionized human-to-machine interaction. Yet, the challenges of end-to-end energy efficiency, memory footprint, and system complexity in current Neural Network (NN) powered AI-KWS pipelines have remained ever present. This paper evaluates KWS using a learning-automata-powered machine learning algorithm called the Tsetlin Machine (TM). Through a significant reduction in parameter requirements and by choosing logic over arithmetic-based processing, the TM offers new opportunities for low-power KWS while maintaining high learning efficacy. In this paper we explore a TM-based KWS pipeline to demonstrate low complexity with a faster rate of convergence compared to NNs. Further, we investigate scalability with increasing keywords and explore the potential for enabling low-power on-chip KWS.
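A TM consumes Boolean literals rather than real-valued features, so the MFCC front end must be booleanized. One common scheme (assumed here for illustration; the paper's exact encoding may differ) is thermometer encoding of each coefficient against a ladder of thresholds:

```python
def thermometer_encode(value, vmin, vmax, bits):
    """Encode a scalar as `bits` Booleans: 1 for each threshold it reaches."""
    step = (vmax - vmin) / bits
    return [1 if value >= vmin + step * (i + 1) else 0 for i in range(bits)]

def booleanize_frame(mfcc_frame, vmin=-50.0, vmax=50.0, bits=8):
    # Flatten one frame of MFCC coefficients into a single literal vector;
    # vmin/vmax are assumed clipping bounds, not values from the paper.
    out = []
    for coeff in mfcc_frame:
        out.extend(thermometer_encode(coeff, vmin, vmax, bits))
    return out
```

The resulting literal vector (e.g., 13 coefficients x 8 bits = 104 literals per frame) is what TM clauses then AND together during learning.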
Subject: Social Sciences, Psychology Keywords: multimodal experiment; multisensory experiment; automatic device integration; open-source; PsychoPy; Unity; Virtual Reality (VR); Lab Streaming Layer; LabRecorder; LabRecorderCLI; Windows command line (cmd.exe)
Online: 12 October 2020 (07:06:28 CEST)
The human mind is multimodal. Yet most behavioral studies rely on century-old measures of behavior: task accuracy and latency (response time). Multimodal and multisensory analysis of human behavior creates a better understanding of how the mind works. The problem is that designing and implementing these experiments is technically complex and costly. This paper introduces versatile and economical means of developing multimodal-multisensory human experiments. We provide an experimental design framework that automatically integrates and synchronizes measures including electroencephalogram (EEG), galvanic skin response (GSR), eye-tracking, virtual reality (VR), body movement, mouse/cursor motion, and response time. Unlike proprietary systems (e.g., iMotions), our system is free and open-source; it integrates PsychoPy, Unity, and Lab Streaming Layer (LSL). The system embeds LSL inside PsychoPy/Unity to synchronize multiple sensory signals (gaze motion, EEG, GSR, mouse/cursor movement, and body motion) from low-cost consumer-grade devices, in a simple behavioral task designed in PsychoPy and a virtual reality environment designed in Unity. This tutorial shows a step-by-step process by which a complex multimodal-multisensory experiment can be designed and implemented in a few hours. When the experiment is run, all data synchronization and recording to disk are performed automatically.
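Conceptually, LSL time-stamps every sample from every device against a shared clock, and the recorder interleaves the streams chronologically. A stdlib-only sketch of that merge step (not the LSL API itself; stream names and samples are invented):

```python
import heapq

def merge_streams(streams):
    """streams: {name: [(timestamp, sample), ...]}, each sorted by timestamp.
    Returns one chronologically interleaved list of (timestamp, name, sample),
    mimicking what a recorder does with LSL's shared-clock timestamps."""
    tagged = ([(t, name, s) for t, s in data] for name, data in streams.items())
    return list(heapq.merge(*tagged))
```

In the real system, LabRecorder performs this alignment (plus clock-offset correction) across the PsychoPy/Unity streams automatically.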