Event-Driven ECG Processing for Diagnosis of Cardiac Diseases

– The aim of this paper is to develop an intelligent event-driven Electrocardiogram (ECG) processing module that provides an efficient solution for the diagnosis of cardiac diseases. The suggested method acquires the signal with an event-driven A/D converter (EDADC). The EDADC output is passed through the activity selection and interpolation blocks, which allow the system to focus only on the important signal parts and to resample them uniformly. The signal is then de-noised, and the autoregressive (AR) method is used to extract classifiable features from the de-noised signal. Afterwards, the output is classified by employing several robust classification techniques: support vector machines (SVMs), k-Nearest Neighbours (k-NN) and Artificial Neural Networks (ANNs). The event-driven front end adapts the system processing load to the temporal variations of the signal. This feature of the devised system yields a drastic reduction in its processing activity, and therefore in its power consumption, compared to traditional systems. A comparison of the performance of the different classifiers is also made in terms of accuracy. Results show that the proposed system is a potential candidate for the automatic diagnosis of cardiac diseases.


I. INTRODUCTION
One of the primary causes of death globally is stroke. To prevent such strokes, a variety of wearable Electrocardiogram (ECG) devices have lately been proposed [2,3,5]. They allow monitoring of the patient's health by observing the functionality of the cardiovascular system, which is realized by acquiring and analyzing the intended patient's ECG signal. The ECG is a complex signal that carries plenty of valuable information regarding the functionality of the cardiovascular system. Because of the complex nature of the ECG, advanced digital signal processing techniques are required to achieve a better interpretation of the information that lies in these signals. The results are used for the diagnosis of cardiac diseases [13,14].
The classical ECG systems are time invariant in nature [2,3]. This can lead to wasteful usage of the system resources and power [7,8]. A more efficient system can be realized for such signals by adjusting the acquisition, processing and transmission rates as a function of the temporal variations of the intended signal [10-12,21-25]. In this framework, an EDADC is employed for the acquisition of the intended ECG pulses. It works on the opportunistic sampling principle and therefore allows the devised system to overcome the downsides of its classical counterparts up to a certain extent [6]. It thus promises simplified, power-efficient front-end electronics along with real-time data compression [10-12,21-25].
The focus of this work is to intelligently employ event-driven signal processing and machine learning tools. This is done in order to contribute to the development of an efficient portable ECG acquisition and analysis system that can effectively diagnose the current status of a patient's cardiovascular system.
The proposed system principle is described in Section II, which also discusses the materials and methods used to realize the devised system. A summary of the system performance verification results is presented in Section III. Section IV concludes the article.

II. THE PROPOSED SOLUTION PRINCIPLE
Figure 1 shows the block-level diagram of the proposed solution. The intended ECG pulses are acquired via an EDADC. In the studied case, ECG pulses from the MIT-BIH arrhythmia database are employed as the EDADC input [1]. These pulses are first passed through an anti-aliasing band-pass filter with [F_Cmin = 0.05; F_Cmax = 60] Hz. The band-limited signal x(t) is acquired with a uniform-quantization-based 5-bit resolution EDADC [10,12]. The EDADC output is time-windowed with the Activity Selection Algorithm (ASA) [11,12]. The windowed signal is resampled uniformly and then de-noised with an appropriate adaptive-rate filtering algorithm [11]. Later on, the features of the de-noised signal are extracted and passed to the classification module. The classification decisions provide a diagnosis of the intended patient's cardiac health. Details of the different system modules are provided in the following subsections.

A. The Event-Driven A/D Converter (EDADC)
The EDADC is developed on the principle of Level Crossing Sampling (LCS) [10,12,16,19]. The ADC is an essential component of any signal processing chain [8,9] and dictates the overall system performance [9]. The Nyquist sampling and processing theory governs the functionality of traditional ADCs: the signal acquisition is performed at a constant frequency irrespective of the sporadic nature of the signal, i.e.
without exploiting the local signal variations [10,12,16,19,21-25]. Therefore, traditional ADCs are parameterized for the worst case [9]. Thus, they can be extremely ineffective, particularly in the case of random signals with a low percentage of activity, such as ECG, EEG, etc. In [10,12], this shortcoming is addressed up to a certain extent by employing event-driven ADCs. Based on the signal-driven sampling principle, these EDADCs can adapt their sampling period as a function of the input signal variations. Hence, only relevant information is acquired, which results in a drastic reduction in the activity and power consumption of the post-processing chain. In this case, the sampling frequency is not fixed but is piloted by the input signal. However, the signal reconstruction is guaranteed by respecting the Bernstein inequality [11], which is realized by an appropriate choice of the design parameters of the employed EDADC [10,12,16,19].
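As an illustration, the LCS principle can be sketched in a few lines. The code below is a simplified software emulation, not the mixed-signal EDADC of [10,12]; the signal, amplitude range and function names are illustrative assumptions. A sample is recorded only when the input moves into a new quantization level, so idle signal segments generate no samples.

```python
import numpy as np

def level_crossing_sample(t, x, dv, vmin=-1.0, vmax=1.0):
    """Record a (time, level) pair whenever the signal enters a new
    quantization level; levels are uniformly spaced by dv."""
    levels = np.arange(vmin, vmax + dv / 2, dv)
    samples = []
    last_idx = None
    for ti, xi in zip(t, x):
        # index of the quantization level the current amplitude falls in
        idx = int(np.floor((xi - vmin) / dv))
        idx = min(max(idx, 0), len(levels) - 1)
        if idx != last_idx:           # a level crossing occurred
            samples.append((ti, levels[idx]))
            last_idx = idx
    return samples

# A burst-like test signal: flat segments produce no samples,
# while the short active part produces many.
t = np.linspace(0.0, 1.0, 1000)
x = np.where((t > 0.4) & (t < 0.6), np.sin(2 * np.pi * 25 * t), 0.0)
s = level_crossing_sample(t, x, dv=2.0 / (2**5 - 1))  # 5-bit quantum
```

With this burst signal, far fewer samples are produced than the 1000 uniform observations, illustrating the activity-dependent acquisition rate described above.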

B. The Windowing, Resampling and De-Noising
The EDADC output can be employed for further non-uniform digital processing. However, a practical system realization necessitates finite-time partitioning of the acquired data, since a real system has limited resources such as memory, processing speed, etc. [7,8]. In this context, the ASA is employed for an effective windowing of the EDADC output. It exploits the non-uniformity of the sampling process to window only the relevant parts of the signal. Furthermore, the characteristics of each selected signal part are analyzed in order to extract its local parameters. Later on, these extracted parameters are used to adjust the parameters and processing activity of the devised system accordingly [11,12].
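The ASA internals are detailed in [11,12]; its gap-based windowing idea can however be sketched as follows. The max_gap threshold, data layout and function name are illustrative assumptions: a new window starts whenever consecutive non-uniform samples are separated by a long idle interval.

```python
import numpy as np

def select_activity_windows(times, max_gap):
    """Split a non-uniform sample stream into windows: a new window starts
    whenever consecutive samples are separated by more than max_gap,
    i.e. the level-crossing sampler was idle (no signal activity)."""
    windows = []
    start = 0
    for i in range(1, len(times)):
        if times[i] - times[i - 1] > max_gap:
            windows.append((start, i))   # half-open index range [start, i)
            start = i
    windows.append((start, len(times)))
    return windows

# Two bursts of activity separated by a long idle gap -> two windows.
times = np.array([0.00, 0.01, 0.02, 0.50, 0.51, 0.52, 0.53])
w = select_activity_windows(times, max_gap=0.1)
```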
The output of the ASA is resampled uniformly. The extracted window parameters are employed to decide the resampling frequency [11,12]. The resampler acts as a bridge between the non-uniform and the uniform signal processing domains. It enables the system to take advantage of both sides and allows smart solutions to be realized [7,8,12]. The resampling process requires interpolation, which introduces artefacts in the resampled signal compared to the original one [10]. In the proposed system, the resampling error depends on the interpolation technique used to resample the data and on the EDADC resolution and amplitude dynamics [10-12]. The worst-case error is bounded by the EDADC quantum q, defined as q = ΔV/(2^M − 1) [27]. Here, ΔV and M are respectively the EDADC amplitude dynamics and resolution. In the devised system, the Nearest Neighbour Resampling (NNR) interpolator is employed for the resampling. For the NNR interpolation method, the value of an interpolated sample xr_n corresponding to a resampling instant tr_n is set according to the algorithm described in Figure 2, where t_n and x_n are the instant and value of the n-th non-uniform sample, and t_{n-1} and x_{n-1} are those of the previous non-uniform sample.
The NNR is a simple interpolation method that uses only one non-uniform observation for each resampled one. Therefore, it keeps the suggested system efficient in terms of computational complexity compared to more complex methods [11].
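A sketch of the NNR rule follows: each uniform resampling instant takes the value of the closest non-uniform sample in time. The variable names t_nu, x_nu and t_r are illustrative, and this brute-force nearest-instant search stands in for the streaming comparison of Figure 2.

```python
import numpy as np

def nnr_resample(t_nu, x_nu, t_r):
    """Nearest Neighbour Resampling: each uniform instant tr_n takes the
    value of the non-uniform sample whose instant is closest to it."""
    t_nu = np.asarray(t_nu, dtype=float)
    x_nu = np.asarray(x_nu, dtype=float)
    out = np.empty(len(t_r))
    for n, tr in enumerate(t_r):
        k = np.argmin(np.abs(t_nu - tr))  # index of nearest instant
        out[n] = x_nu[k]
    return out

t_nu = [0.0, 0.3, 0.35, 1.0]              # non-uniform instants
x_nu = [1.0, 2.0, 3.0, 4.0]               # corresponding sample values
t_r = np.arange(0.0, 1.01, 0.25)          # uniform 4 Hz resampling grid
xr = nnr_resample(t_nu, x_nu, t_r)
```

No new amplitude values are created, so the amplitude error of each resampled point stays within the quantum bound q discussed above.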

Figure 2: The NNRI algorithm
The resampling frequency of each selected window is adapted by employing the window parameters extracted with the ASA [11,12]. Let Frs_i be the resampling frequency for the i-th selected window W_i; its choice depends on F_ref and Fs_i. Here, F_ref is the chosen reference sampling frequency for the system. It remains greater than and closest to F_Nyq = 2·f_max, where f_max is the bandwidth of x(t). Fs_i = N_i/L_i is the sampling frequency for W_i, where L_i and N_i respectively represent the length in time of W_i and the number of non-uniform samples it contains.
For the studied case, the bandwidth f_max is equal to 60 Hz, because of the employed F_Cmax = 60 Hz. Therefore, F_ref = 320 Hz is chosen. After resampling, there exist Nr_i uniformly placed samples for W_i. For the case Fs_i > F_ref, Frs_i is chosen equal to F_ref. Otherwise, when Fs_i ≤ F_ref, Frs_i is chosen equal to Fs_i. This allows W_i to be resampled closer to the Nyquist rate or at sub-Nyquist rates [11]. It therefore avoids unnecessary interpolations and filtering operations during the data resampling and de-noising processes. As a result, it improves the computational and power efficiency of the proposed approach.
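The per-window rate selection rule can be sketched directly. Here Fs_i is interpreted as the window's mean non-uniform sample rate, i.e. the sample count N_i divided by the duration L_i; the function name is illustrative.

```python
def choose_resampling_freq(L_i, N_i, F_ref=320.0):
    """Select Frs_i for window W_i: the window's own mean sampling
    frequency Fs_i = N_i / L_i, capped at the reference rate F_ref."""
    Fs_i = N_i / L_i
    return F_ref if Fs_i > F_ref else Fs_i
```

A dense window (e.g. 200 samples in 0.5 s, i.e. 400 Hz) is capped at 320 Hz, while a sparse one (150 samples in 1 s) is resampled at its own sub-Nyquist rate of 150 Hz.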
The uniformly resampled data is de-noised by applying the adaptive-rate filtering approach [11]. The idea is to design offline a bank of Finite Impulse Response (FIR) filters for a range of sampling frequencies, and then to choose an appropriate one for each selected window during online processing. Later on, the chosen filter response is adjusted in accordance with the resampling frequency of the concerned window. This is done by resampling the chosen filter response using NNR interpolation. For the studied case, the filter cut-off frequency is chosen as 60 Hz. The filter bank is designed offline by employing the Parks-McClellan algorithm for a range of sampling frequencies between 120 Hz and 320 Hz, with a uniform frequency step of 10 Hz [11].
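The offline filter-bank construction can be sketched as follows. For a dependency-free illustration, a windowed-sinc design replaces the Parks-McClellan algorithm employed in the actual system; the tap count is an assumed value.

```python
import numpy as np

def lowpass_fir(fc, fs, numtaps=31):
    """Windowed-sinc low-pass FIR design (Hamming window), cut-off fc,
    sampling rate fs; normalized to unit DC gain."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = np.sinc(2 * fc / fs * n)      # ideal low-pass impulse response
    h *= np.hamming(numtaps)          # taper to reduce ripple
    return h / h.sum()                # unit gain at DC

# Offline bank: one filter per candidate resampling frequency,
# cut-off 60 Hz, rates from 120 Hz to 320 Hz in 10 Hz steps.
bank = {fs: lowpass_fir(60.0, fs) for fs in range(120, 321, 10)}
```

During online processing, the window's resampling frequency Frs_i simply indexes the bank, so no filter design runs at run time.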
This online filter order adaptation feature, for each selected window, allows the intended signal de-noising to be achieved with a lower computational complexity than the corresponding time-invariant traditional solutions [11]. It adds to the performance and computational efficiency of the proposed system.
In order to achieve a better signal classification, the filtering stage output is passed to the Multiscale Principal Component Analysis (MSPCA) module [14], which combines the interesting features of both PCA and wavelet analysis [13]. Depending on the targeted application, a subset of wavelet coefficients and principal components is designated at each scale. This is done by means of thresholding, which allows the most relevant wavelet coefficients to be selected. Models developed on the basis of this approach achieve a more accurate estimation of the system parameters along with a better prediction ability [14].
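A heavily simplified, single-scale illustration of the MSPCA idea is sketched below: a Haar wavelet decomposition followed by principal-component truncation of the coefficient matrices. The real module of [14] operates over multiple scales with data-driven thresholds; the component count and toy data here are assumptions.

```python
import numpy as np

def haar_level1(X):
    """One-level Haar DWT along rows of X (shape: channels x even length):
    returns approximation and detail coefficients."""
    a = (X[:, 0::2] + X[:, 1::2]) / np.sqrt(2)
    d = (X[:, 0::2] - X[:, 1::2]) / np.sqrt(2)
    return a, d

def haar_level1_inv(a, d):
    """Inverse of the one-level Haar DWT above."""
    X = np.empty((a.shape[0], 2 * a.shape[1]))
    X[:, 0::2] = (a + d) / np.sqrt(2)
    X[:, 1::2] = (a - d) / np.sqrt(2)
    return X

def pca_truncate(C, keep):
    """Project the coefficient matrix C onto its 'keep' strongest
    principal components (thresholding by component strength)."""
    mu = C.mean(axis=0)
    U, s, Vt = np.linalg.svd(C - mu, full_matrices=False)
    s[keep:] = 0.0                    # drop the weak components
    return (U * s) @ Vt + mu

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 8))           # toy multi-channel data
a, d = haar_level1(X)
X_dn = haar_level1_inv(pca_truncate(a, 2), pca_truncate(d, 2))
```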

C. The Parameters Extraction
The classifiable parameters are extracted from the enhanced signal by employing the Autoregressive (AR) algorithm [14]. The AR model computes spectra that contain sharp peaks but no deep valleys. Its transfer function contains a constant in the numerator and a polynomial in the denominator; therefore, it is called an all-pole model. It provides a higher spectral resolution than the classical Fourier-transform-based spectra [7,8]. Therefore, a discrimination between closely spaced sinusoids in the input signals is achieved. This results in Power Spectral Density (PSD) estimates that are almost equal to the exact values.
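The AR coefficients can be estimated, for instance, from the Yule-Walker equations. The direct linear solve below is a sketch (Levinson-Durbin is the usual fast solver); the model order and test signal are illustrative.

```python
import numpy as np

def yule_walker_ar(x, order):
    """Estimate AR coefficients a_1..a_p by solving the Yule-Walker
    equations with biased autocorrelation estimates."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    N = len(x)
    # biased autocorrelation r[0..order]
    r = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(order + 1)])
    # Toeplitz system R a = r[1:]
    R = np.array([[r[abs(i - j)] for j in range(order)]
                  for i in range(order)])
    return np.linalg.solve(R, r[1:])

# A pure sinusoid is captured by an AR(2) model: x_n = 2cos(w)x_{n-1} - x_{n-2}.
t = np.arange(512)
x = np.cos(2 * np.pi * 0.1 * t)
a = yule_walker_ar(x, order=2)
```

The estimated coefficients approach [2cos(w), -1], whose poles lie on the unit circle at the sinusoid frequency, producing the sharp PSD peaks mentioned above.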

D. The Classification Methods
After parameter extraction, the following three methods are employed for the signal classification.

The Support Vector Machine (SVM)
Both linear and nonlinear data can be classified using support vector machines (SVMs). Concisely, an SVM transforms the original training data into a higher dimension by implementing a nonlinear mapping. The training time of SVMs is higher than that of other comparable algorithms [15]. However, they can provide a higher accuracy than competing solutions, thanks to their capability to model complex nonlinear decision boundaries [15].

The k-Nearest Neighbors (k-NN)
This algorithm learns by finding similarities. The principle is to compare given test data with the training data. Each training instance is defined by n characteristics and typifies an object in an n-dimensional space. This allows all training data to be kept in an n-dimensional pattern space. The k-NN classifier then examines the pattern space for the k closest training instances. The classification decisions are made on the basis of a vote count of the k nearest neighbours: the unknown object is allocated to the most common class among them [15].
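The voting rule above can be sketched in a few lines; the toy two-class data and the choice of Euclidean distance are illustrative assumptions (WEKA's default k-NN is used in the experiments).

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points,
    using Euclidean distance in the n-dimensional feature space."""
    d = np.linalg.norm(X_train - x, axis=1)   # distances to all points
    nearest = np.argsort(d)[:k]               # k closest training indices
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]          # most common class wins

# Toy two-class training data in a 2-D feature space.
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
y = np.array([0, 0, 1, 1, 1])
```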

The Artificial Neural Network (ANN)
The ANN is formed by a set of connected input/output units, where each connection has an associated weight. The network learns by adjusting its weights throughout the learning phase, in order to properly classify the test data. The ANN algorithms are inherently parallel, which allows parallelization techniques to be employed to accelerate the decision process [20]. The ANN requires a longer training period than other classifiers such as nearest neighbour, decision tree, random forest, etc. Advantages of neural networks over these competitors are a higher noise tolerance and the ability to classify unknown patterns. Therefore, the ANN is a good candidate for applications where the relationship between attributes and classes is not clearly defined [15].
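A minimal single-hidden-layer network trained by batch gradient descent illustrates the weight-adjustment principle. The architecture, toy data and learning rate are illustrative assumptions, not the WEKA ANN used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyMLP:
    """Single-hidden-layer network trained by batch gradient descent on a
    mean-squared-error loss; a minimal sketch of weight learning."""
    def __init__(self, n_in, n_hidden):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0, 0.5, (n_hidden, 1))

    def forward(self, X):
        self.H = sigmoid(X @ self.W1)
        return sigmoid(self.H @ self.W2)

    def step(self, X, y, lr=0.5):
        out = self.forward(X)
        err = out - y                      # loss gradient w.r.t. output
        g2 = err * out * (1 - out)         # back through output sigmoid
        gH = (g2 @ self.W2.T) * self.H * (1 - self.H)
        self.W2 -= lr * self.H.T @ g2 / len(X)   # weight updates
        self.W1 -= lr * X.T @ gH / len(X)
        return float(np.mean(err ** 2))

# An easily separable OR-like toy problem.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([[0.0], [1.0], [1.0], [1.0]])
net = TinyMLP(2, 4)
losses = [net.step(X, y) for _ in range(2000)]
```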

III. RESULTS AND DISCUSSION
In this paper, the ECG pulses obtained from the MIT-BIH arrhythmia database [1] are employed. These pulses are acquired with a 5-bit resolution EDADC. The signal is band-limited to 60 Hz. Each pulse is recorded for a time length of 0.94 seconds. In the classical case, the signal is sampled at a fixed rate of 320 samples per second (SPS); therefore, for a given time length of 0.94 seconds, each window is composed of 300 samples. The EDADC focuses only on the intended signal parts and adapts the sampling rate as a function of the signal variations [10,12,16,19]. Moreover, the ASA adapts the window length according to the characteristics of the non-uniformly time-repartitioned ECG pulse obtained at the EDADC output. The process is further clarified by Figure 3. The total number of samples obtained in the proposed case is 84906. In the classical case, the sampling frequency remains constant and the total number of samples for the intended 1500 ECG pulses remains 450000. This represents a 5.3-fold reduction in the number of samples obtained by the proposed system compared to the traditional one, which assures that the devised system will lead towards a drastic reduction in computational complexity and power consumption compared to its classical counterparts. The output of the ASA is uniformly resampled by employing the NNR interpolator. Afterwards, it is de-noised by employing the adaptive-rate FIR filtering technique [11]. This improves the SNR (Signal to Noise Ratio) of the intended signal and results in a better classification precision. The process is clear from Figure 4. The de-noised signal obtained with the adaptive-rate filtering module is further enhanced by employing the MSPCA module. The classifiable features of the enhanced signal are extracted by employing the AR model. A specifically developed MATLAB-based application is used for the ECG signal enhancement
and feature extraction [17]. The classification is realized with WEKA [18]. The classifiers are used with their default configurations. The training and testing sets are composed of five different classes. In total, 1500 instances are employed, with 300 instances from each class. The first class represents the ECG pulses of a normal patient. The second class represents the ECG pulses of a patient with Premature Ventricular Contractions (PVCs). The third class is composed of pulses of a patient with Atrial Premature Complexes (APCs). The fourth and fifth classes respectively represent patients with Left Bundle Branch Block (LBBB) and Right Bundle Branch Block (RBBB). The system accuracy is studied collectively for all five classes.
During the classification process, the data should be appropriately split into testing and training sets. The training set is employed to generate a classifier model, while the testing set is employed to confirm the performance of the generated classifier. The 10-fold cross-validation method is employed in all experiments. The classifiers' performance is quantified in terms of accuracy. The results are summarised in Table 1. Table 1 shows that, for the studied case, the best classification accuracy, 93.66%, is obtained with the SVM method. The ANN is second with 93% accuracy and the k-NN is last with 92.53% accuracy. In this case, the SVM performs better because of its ability to use only the most relevant points to find a linear separation. It can be concluded that, for the employed assembly of EDADC, ASA, adaptive-rate filtering, MSPCA and AR model, the SVM provides the best classification results for the studied case.
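The 10-fold split of the 1500 instances can be sketched as follows; the shuffling seed is an illustrative assumption, and WEKA performs this internally in the actual experiments.

```python
import numpy as np

def kfold_indices(n, k=10, seed=0):
    """Shuffle the n instance indices and split them into k disjoint
    folds; each fold serves once as the test set while the remaining
    k-1 folds form the training set."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n), k)

folds = kfold_indices(1500, k=10)
train_0 = np.concatenate(folds[1:])  # training set when fold 0 is held out
```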

IV. CONCLUSION
A novel method, based on event-driven signal acquisition and processing, is devised for the diagnosis of the intended patient's cardiovascular diseases. It is shown how the employment of the EDADC and the ASA avoids processing unnecessary samples. This results in a 5.3-fold reduction in the number of samples obtained by the proposed system compared to the traditional one, which assures that the devised system will lead towards a drastic reduction in computational complexity compared to its classical counterparts. This will lead towards the design and development of low-power and efficient wearable ECG devices. The output of this module is passed through the NNR interpolation and adaptive-rate FIR filtering based de-noising blocks. The EDADC and the ASA allow these modules to focus on and process only the important signal parts, at an adaptable processing rate. This further adds to the system's computational and processing efficiency. The application of low-pass filtering improves the signal SNR. The signal is further enhanced by employing the MSPCA. Afterwards, the AR method is applied to the de-noised signal to extract the classifiable features from it. The output of the AR module is classified by employing three different robust classification techniques. It is shown that, for the studied case, the SVM provides the best classification accuracy of 93.66%.
The suggested solution can be employed for the design and development of low-power and efficient ECG wearables. Medical practitioners can also use it in order to achieve an augmented diagnosis.
The performance of the devised system depends on the type of interpolator, de-noising stage, parameter extraction module and classification algorithm employed. A study of the devised system performance while employing linear or spline interpolators is a future work.

Figure 3: The number of samples per window in the classical case (3-a) and in the proposed case (3-b); the time length of the window in the classical case (3-c) and in the proposed case (3-d).

Figure 3 shows that, in the proposed solution, the sampling frequency and the length of each selected window are adjusted according to the signal characteristics. This allows the system to focus only on the interesting signal parts and results in a reduced number of samples for each selected window compared to the classical case.

Table 1: Classification performances for the 5-class ECG data

Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 30 October 2018 doi:10.20944/preprints201810.0718.v1
Moreover, a study of the system performance while employing other robust classification techniques, such as Rotation Forest, Random Forest, Naïve Bayes, etc., is another future task. The employment of ensemble classifiers can further improve the system classification accuracy; exploring this approach is another research axis.