Preprint Article, Version 1. Preserved in Portico. This version is not peer-reviewed.

Low-Power Audio Keyword Spotting using Tsetlin Machines

Version 1: Received: 28 January 2021 / Approved: 29 January 2021 / Online: 29 January 2021 (13:01:47 CET)

How to cite: Lei, J.; Rahman, T.; Shafik, R.; Yakovlev, A.; Granmo, O.; Kawsar, F.; Mathur, A. Low-Power Audio Keyword Spotting using Tsetlin Machines. Preprints 2021, 2021010621 (doi: 10.20944/preprints202101.0621.v1).

Abstract

The emergence of Artificial Intelligence (AI) driven Keyword Spotting (KWS) technologies has revolutionized human-machine interaction. Yet, the end-to-end energy efficiency, memory footprint and system complexity of current Neural Network (NN) powered AI-KWS pipelines remain a persistent challenge. This paper evaluates KWS using a learning-automata-based machine learning algorithm, the Tsetlin Machine (TM). By substantially reducing parameter requirements and choosing logic over arithmetic-based processing, the TM offers new opportunities for low-power KWS while maintaining high learning efficacy. We explore a TM-based KWS pipeline and demonstrate lower complexity and a faster rate of convergence compared to NNs. Further, we investigate scalability with an increasing number of keywords and explore the potential for enabling low-power on-chip KWS.
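
To make the pipeline described above concrete, the sketch below shows one plausible way to feed binarized MFCC features into a Tsetlin Machine classifier. It is not the paper's implementation: librosa for MFCC extraction, the pyTsetlinMachine package, the mean-threshold booleanization helper `booleanize_mfcc`, and all hyperparameters are assumptions chosen purely for illustration, and it presumes fixed-length (e.g. one-second) keyword clips.

```python
# Minimal sketch of an MFCC + Tsetlin Machine keyword-spotting pipeline.
# Assumptions (not taken from the paper): librosa for MFCC extraction,
# pyTsetlinMachine for the classifier, mean-threshold booleanization,
# and fixed-length (~1 s) clips. Hyperparameters are illustrative only.
import numpy as np
import librosa
from pyTsetlinMachine.tm import MultiClassTsetlinMachine

def booleanize_mfcc(wav_path, sr=16000, n_mfcc=13):
    """Load a clip, compute MFCCs, and threshold each coefficient
    at its per-clip mean to obtain the Boolean features a TM consumes."""
    y, _ = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape: (n_mfcc, frames)
    bits = (mfcc > mfcc.mean(axis=1, keepdims=True)).astype(np.uint32)
    return bits.flatten()                                     # one Boolean vector per clip

# Usage (train_paths / test_paths and integer keyword labels assumed):
# X_train = np.stack([booleanize_mfcc(p) for p in train_paths])
# tm = MultiClassTsetlinMachine(200, 15, 3.9)   # (clauses, T, s) -- illustrative values
# tm.fit(X_train, y_train, epochs=30)
# accuracy = (tm.predict(X_test) == y_test).mean()
```

The booleanization step is the key design choice: once the MFCC features are reduced to bits, training and inference reduce to clause evaluation over literals, i.e. logic rather than arithmetic, which is what enables the low-power operation the paper targets.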

Subject Areas

Speech Command; MFCC; Tsetlin Machine; Learning Automata; Pervasive AI; Machine Learning; Artificial Neural Network; Keyword Spotting
