IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, VOL. 25, NO. 2, FEBRUARY 2014

Quantum Neural Network-Based EEG Filtering for a Brain–Computer Interface

Vaibhav Gandhi, Girijesh Prasad, Senior Member, IEEE, Damien Coyle, Senior Member, IEEE, Laxmidhar Behera, Senior Member, IEEE, and Thomas Martin McGinnity, Senior Member, IEEE

Abstract— A novel neural information processing architecture inspired by quantum mechanics and incorporating the well-known Schrodinger wave equation is proposed in this paper. The proposed architecture, referred to as the recurrent quantum neural network (RQNN), can characterize a nonstationary stochastic signal as time-varying wave packets. A robust unsupervised learning algorithm enables the RQNN to effectively capture the statistical behavior of the input signal and facilitates the estimation of a signal embedded in noise with unknown characteristics. The results from a number of benchmark tests show that simple signals such as dc, staircase dc, and sinusoidal signals embedded within high noise can be accurately filtered, and that particle swarm optimization can be employed to select model parameters. The RQNN filtering procedure is applied in a two-class motor imagery-based brain–computer interface where the objective was to filter electroencephalogram (EEG) signals before feature extraction and classification to increase signal separability. A two-step inner–outer fivefold cross-validation approach is utilized to select the algorithm parameters subject-specifically for nine subjects. It is shown that the subject-specific RQNN EEG filtering significantly improves brain–computer interface performance compared to using only the raw EEG or Savitzky–Golay filtered EEG across multiple sessions.

Index Terms— Brain–computer interface (BCI), electroencephalogram (EEG), recurrent quantum neural network (RQNN).

I. INTRODUCTION

BRAIN–COMPUTER interface (BCI) technology is a means of communication that allows individuals with severe movement disability to communicate with external assistive devices using the electroencephalogram (EEG) or other brain signals. In motor imagery (MI)-based BCIs, the subject performs a mental imagination of specific movements. This MI is translated into a control signal by classifying the specific EEG pattern that is characteristic of the subject's imagined task, e.g., movement of hands and/or feet. These raw EEG signals have a very low signal-to-noise ratio (SNR) because of the interference from the electrical power line,

Manuscript received August 2, 2012; revised April 16, 2013 and July 13, 2013; accepted July 14, 2013. Date of publication August 6, 2013; date of current version January 10, 2014. This work was supported by the U.K.–India Education and Research Initiative under Grant “Innovations in Intelligent Assistive Robotics.” V. Gandhi is with the School of Science and Technology, Middlesex University, London NW4 4BT, U.K. (e-mail: [email protected]). G. Prasad, D. Coyle, and T. M. McGinnity are with the Intelligent Systems Research Centre, University of Ulster, Derry BT52 1SA, U.K. (e-mail: [email protected]; [email protected]; [email protected]). L. Behera is with the Department of Electrical Engineering, Indian Institute of Technology, Kanpur 208016, India (e-mail: [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TNNLS.2013.2274436

motion artifacts, and electromyogram (EMG)/electrooculogram interference. Preprocessing is carried out to remove such unwanted components embedded within the EEG signal; good preprocessing increases signal quality, resulting in better feature separability and classification performance. Very recently, integrated with the feature extraction stage, novel spatial filtering algorithms based on the Kullback–Leibler [1] common spatial pattern (CSP) [2] and Bayesian learning have been investigated to account for very low SNR EEG [3], [4]. The KLCSP-based approach is investigated on several EEG data sets in [3] and showed significant performance improvement compared to CSP and stationary CSP. Similarly, [4] reports an extensive study of a Bayesian learning-based spatial filtering approach and its application using publicly available EEG data. Neural networks and self-organizing fuzzy neural networks have also been applied to increase signal separability in motor imagery BCIs [5]–[7]. This paper focuses on EEG signal preprocessing utilizing the concepts of quantum mechanics (QM) and neural network theory in a framework referred to as the recurrent quantum neural network (RQNN). EEG signals can be considered a realization of a random or stochastic process [8]. When an accurate description of the system is unavailable, a stochastic filter can be designed on the basis of probabilistic measures. Bucy [9] states that every solution to a stochastic filtering problem involves the computation of a time-varying probability density function (pdf) on the state–space of the observed system. The architecture of the RQNN model is based on the principles of QM, with the Schrodinger wave equation (SWE) [10] playing a major part. This approach enables the online estimation of a time-varying pdf that allows estimating and removing the noise from the raw EEG signal.
In quantum terminology, the state is represented by ψ (a vector in the Hilbert space H) and referred to as a wave function or a probability amplitude function. The time evolution of this state vector ψ is governed by the SWE and is represented as

    iℏ ∂ψ(x, t)/∂t = H ψ(x, t)    (1)

where H is the Hamiltonian or the energy operator, given as iℏ(∂/∂t), and 2πℏ (i.e., h) is Planck's constant¹ [11]. Here ψ is the wave function

¹The Planck constant is an atomic-scale constant that denotes the size of the quanta in quantum mechanics. The atomic units are a scale of measurement in which the units of energy and time are defined so that the value of the reduced Planck constant is exactly one.

2162-237X © 2013 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


Fig. 1. Conceptual framework of the RQNN model. (Panel labels: neuronal lattice; unified response is a pdf or a wave packet; a quantum process predicts the average response of the wave packet.)

associated with the quantum object at spacetime point (x, t). Fig. 1 shows the basic architecture of the RQNN model, in which each neuron mediates a spatio-temporal field with a unified quantum activation function in the form of a Gaussian that aggregates pdf information from the observed noisy input signal. Thus the solution of the SWE (which is complex-valued and whose modulus square is the pdf that localizes the position of the quantum object in the vector space) gives us the activation function. From a mathematical point of view, the time-dependent single-dimension nonlinear SWE is a partial differential equation describing the dynamics of a wave packet (the modulus square of this wave is the pdf) in the presence of a potential field (or function), which is the force field in which the particles defined by the wave function are forced to move [12]. Thus the RQNN model is based on the novel concept that a quantum object mediates the collective response of a neural lattice (a spatial structure of an array of neurons where each neuron is a simple computational unit, as shown in Fig. 1 and explained in detail in Section II) [13], [14]. This model has been investigated here as a filtering mechanism in the preprocessing of the EEG signal for a synchronous MI-based BCI to improve signal quality and separability. A similar technique was reported in [15] and [16] for EEG signal filtering, where the error signal was used to stimulate the neurons within the network and the weights of the network were updated using the well-known Hebbian learning rule. Similar techniques have also been applied for robot control [17], eye tracking [13], and stock market prediction [18] applications. Neurons within the proposed RQNN are stimulated directly by the raw input signal. In addition, the learning rule for the weight updation process also utilizes a delearning scheme. Several important modifications have been made with reference to [19].
First, the selection of subject-specific RQNN model parameters using a two-step inner–outer fivefold cross-validation and a particle swarm optimization (PSO) [20], [21] technique, and second, the scaling of the input EEG signal, which reduces the range of movement of the wave packet as well as the number of spatial neurons. As discussed in Section IX, this model is demonstrated to produce a stable filtered EEG that results in a statistically significant enhancement in the performance of the BCI system, which is applicable across multiple sessions and also outperforms some of the existing filtering techniques in the field, including the Savitzky–Golay (SG) and Kalman filters.


The remainder of this paper is organized into nine sections. Section II describes the theoretical concepts of the RQNN model. Section III describes the RQNN signal filtering approach. Sections IV and V discuss the data sets and the methodology for EEG filtering with the RQNN model, respectively. Section VI details the feature extraction (FE) and classification methodology utilized in this paper. The parameter selection approach for the subject-specific RQNN model is discussed in Section VII. Section VIII discusses the Savitzky–Golay filtering methodology utilized for comparative analysis. The results are presented and discussed in Section IX. Section X concludes this paper.

II. CONCEPTUAL RQNN FRAMEWORK

QM theory is extremely successful in describing the processes we see in nature [22]. Dawes in [23] and [24] proposed a novel model—a parametric avalanche stochastic filter—using the concept of a time-varying pdf proposed by Bucy in [9]. This model was improved by Behera et al. [13], [14], [25] using maximum likelihood estimation (MLE) instead of an inverse filter in the feedback loop. Further, Ivancevic in [18] provided an analytical analysis of the nonlinear Schrodinger equation and used the closed-form solution for the concerned application. Because the RQNN approach does not make any assumption about the nature and shape of the noise that is embedded in the signal to be filtered, it is most suitable for signals where the characteristics of the embedded noise are not known. EEG signals are one such type of signal, and hence the work presented here on EEG signal filtering is strongly inspired by these works. A conceptual framework of the RQNN model is shown in Fig. 1. It is basically a 1-D array of neurons whose receptive fields are initially excited by the signal input reaching each neuron through the synaptic connections.
The neural lattice responds to the stimulus by actuating a feedback signal back to the input. The time evolution of this average behavior is described by the SWE [10]

    iℏ ∂ψ(x, t)/∂t = −(ℏ²/2m) ∇²ψ(x, t) + V(x, t)ψ(x, t)    (2)

where ψ(x, t) represents the quantum state, ∇² is the Laplacian operator, and V(x, t) is the potential energy. The neuronal lattice sets up the spatial potential energy V(x). A quantum process described by the quantum state ψ, which mediates the collective response of the neuronal lattice, evolves in this spatial potential V(x) according to (2). As V(x) sets up the evolution path of the wave function, any desired response can be obtained by properly modulating the potential energy. Such an RQNN filter used for stochastic filtering is discussed in [13], [14], and [25]. Although this filter is able to reduce noise, its stability is highly sensitive to the model parameters; in case of imperfect tuning, the system may fail to track the signal and its output may saturate to absurd values. In the architecture used in this paper (Fig. 2), the spatial neurons are excited by the input signal y(t). The difference between the output of the spatial neuronal network and the pdf


Fig. 2. Signal estimation using the RQNN model. (Block diagram: the scaled input y(t) excites the spatial neurons 1, 2, 3, …, N; their responses φ(x, t), weighted by W(x, t), drive the quantum activation function (SWE) yielding ρ(x, t) = |ψ(x, t)|²; an ML estimate produces ŷ(t), which is fed back recurrently.)

feedback |ψ(x, t)|² is weighted by a weight vector W(x) to get the potential energy V(x). The model can thus be seen as a Gaussian mixture model estimator of the potential energy with fixed centers and variances, where only the weights are variable. These weights can be trained using any learning rule. The parameters of the RQNN model have been selected using a two-step inner–outer fivefold cross-validation technique for filtering the EEG data sets, and using the PSO technique for the simple signals used to validate the method. There are several parameters to tune, and hence applying any optimization technique without knowledge of the multidimensional search space for filtering EEG can be time-consuming. In [19], the parameters were heuristically selected and kept the same for all the subjects. This leads to underfiltering or overfiltering for a few subjects without making the system unstable, but for optimal performance, the EEG signal preprocessing should preferably be carried out with a subject-specific choice of parameters.

III. RQNN SIGNAL FILTERING

This section describes the RQNN architecture (see Fig. 2). In the RQNN, we make the assumption that the average behavior of the neural lattice that estimates the signal is a time-varying pdf, which is mediated by a quantum object placed in the potential field V(x) and modulated by the input signal so as to transfer the information about the pdf. We use the SWE to recurrently track this pdf because it is a well-known fact that the square of the modulus of the ψ function, the solution of the wave equation (2), is also a pdf. The potential energy is calculated as

    V(x) = ζ W(x, t) φ(x, t)    (3)

where

    φ(x, t) = exp(−(y(t) − x)²/(2σ²)) − |ψ(x, t)|²    (4)

where y(t) is the input signal and the synapses are represented by the time-varying synaptic weights W(x, t). The variable ζ represents the scaling factor to actuate the spatial potential energy V(x, t), and σ is the width of the neurons in the lattice (taken here as unity). This potential energy modulates the nonlinear SWE described by (1). The filtered estimate is calculated using MLE as

    ŷ(t) = E[|ψ(x, t)|²] = ∫ x |ψ(x, t)|² dx    (5)

where x represents the different possible values that may be taken up by the random process y. The variable x can be interpreted as the discrete version of the quantum space, with the resolution within this discrete space being referred to as δx (taken as 0.1 in this paper). Thus all the possible values of x determine the number of spatial neurons N for the RQNN model. On the basis of the MLE, the weights are updated and a new potential V(x, t) is established for the next time evolution. It is expected that the synaptic weights W(x, t) evolve in such a manner as to drive the ψ function to carry the exact information of the pdf of the filtered signal ŷ(t). To achieve this goal, the weights are updated using the following learning rule:

    ∂W(x, t)/∂t = −βd W(x, t) + β φ(x, t)(1 + v(t)²)    (6)

where β is the learning rate and βd is the delearning rate. Delearning is used to forget previous information, as the input signal is not stationary but rather quasistationary in nature. The second right-hand-side term in the above equation may be purely positive, so in the absence of the delearning term, the value of the synaptic weights W may keep growing indefinitely. Delearning thus prevents an unbounded increase in the values of the synaptic weights W and does not let the system become unstable. The variable v(t) in the second term is the difference between the noisy input signal and the estimated filtered signal, thereby representing the embedded noise as

    v(t) = y(t) − ŷ(t).    (7)
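The update cycle defined by (3)-(7) can be sketched in a few lines of NumPy. This is an illustrative reading of the equations, not the paper's implementation: the lattice size, time step, and parameter values below are placeholders rather than the tuned subject-specific settings.

```python
import numpy as np

# Illustrative sketch of one RQNN learning step, combining (3)-(7).
# beta, beta_d, zeta, sigma, dx follow the paper's symbols; the numeric
# values are assumptions for demonstration only.

N, dx = 61, 0.1
x = (np.arange(N) - N // 2) * dx       # neuronal lattice positions
sigma = 1.0                            # neuron width (unity, as in the text)
zeta, beta, beta_d, dt = 1.75, 5.25, 1.0, 0.001

def rqnn_learning_step(y_t, W, psi):
    """Return the potential V, updated weights W, and the MLE estimate."""
    rho = np.abs(psi) ** 2
    rho = rho / (rho.sum() * dx)                            # |psi|^2 as a pdf
    phi = np.exp(-(y_t - x) ** 2 / (2 * sigma ** 2)) - rho  # (4)
    V = zeta * W * phi                                      # (3)
    y_hat = np.sum(x * rho) * dx                            # (5): MLE
    v = y_t - y_hat                                         # (7): embedded noise
    W = W + dt * (-beta_d * W + beta * phi * (1 + v ** 2))  # (6)
    return V, W, y_hat
```

In a full filter, each returned potential V would modulate the SWE to produce the next ψ, closing the recurrent loop shown in Fig. 2.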

If the statistical mean of the noise is zero, then this error-correcting signal v(t) has less impact on the weights, and it is the actual signal content in the input y(t) that influences the movement of the wave packet along the desired direction, which helps achieve the goal of signal filtering.

A. Numerical Implementation

The space variable x is uniformly spaced as x_n = nδx, n = −(N/2), …, +(N/2), and the time is spaced as t_k = kδt, k = 1, …, T. The potential function is approximated as V(x_n, t_k) = V_n^k. This potential function excites the nonlinear SWE to obtain the quantum wave function ψ_n^k. Various methods, both explicit and implicit, have been developed for solving the nonlinear SWE numerically on a finite-dimensional subspace [26]. The first approach uses the Crank–Nicholson method [27], an implicit scheme for solving the nonlinear SWE that requires a quasitridiagonal system of equations to be solved at each step [28]. This scheme, although accurate, requires computing the inverse of a huge N × N matrix, which is time-consuming. Hence the implementation was carried out using the explicit scheme

    i (ψ_n^{k+1} − ψ_n^k)/δt = −(ψ_{n+1}^k − 2ψ_n^k + ψ_{n−1}^k)/(2mδx²) + V_n^k ψ_n^k.    (8)

This method is linearly stable for δt/(δx)² ≤ 1/4, with a truncation error of the order of (O(δt²) + O(δx²)). Another point to note is that we need to maintain the normalized character of the pdf envelope |ψ|² by normalizing at every step, i.e., Σ_{n=1}^{N} |ψ_n^k|² δx = 1 for all k.
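The explicit scheme (8) together with the per-step normalization can be sketched as follows. The boundary handling (periodic, via np.roll) and the step sizes are illustrative assumptions; note that dt/dx² = 0.1 here, which satisfies the stability bound of 1/4 quoted above.

```python
import numpy as np

def swe_explicit_step(psi, V, m=0.5, dt=0.001, dx=0.1):
    """One explicit time step of the discretized SWE (8), followed by the
    normalization that keeps sum(|psi|^2) * dx equal to 1.
    Periodic boundaries via np.roll are an illustrative assumption."""
    # psi_{n+1} - 2 psi_n + psi_{n-1}
    lap = np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)
    # rearranging (8): psi^{k+1} = psi^k + i dt [lap/(2 m dx^2) - V psi]
    psi_next = psi + 1j * dt * (lap / (2 * m * dx ** 2) - V * psi)
    norm = np.sqrt(np.sum(np.abs(psi_next) ** 2) * dx)
    return psi_next / norm
```

Driving this step with the potential from the learning rule, one step per incoming sample (iterated once or several times, as the text describes), yields the evolving wave packet whose MLE is the filtered output.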


Fig. 3. Training scheme of the paradigm with smiley feedback [22].

IV. DATA SETS

The EEG data used in this analysis is data set 2b provided in BCI competition IV [29], with each subject contributing a single session referred to as *03T for the training phase and two sessions referred to as *04E and *05E for the evaluation phase. The data set was obtained using a cue-based paradigm consisting of two classes, namely MI of the left hand (class 1) and the right hand (class 2). Three EEG channels (C3, Cz, and C4) were recorded in bipolar mode with a sampling frequency of 250 Hz and were bandpass-filtered between 0.5 Hz and 100 Hz, with a notch filter at 50 Hz enabled. However, for this investigation, only the two channels C3 and C4 are utilized. As shown in Fig. 3, the trial paradigm started at 0 s with a gray smiley centered on the screen. At 2 s, a short warning beep (1 kHz, 70 ms) was given. The cue was presented from 3 to 7.5 s, during which the subjects were required to perform the specific imagination. At 7.5 s, the screen went blank and a random interval between 1.0 and 2.0 s was added to the trial so as to avoid user adaptation. More details of this EEG signal recording methodology are available in [29].

V. EEG FILTERING WITH RQNN

Fig. 4 shows the position of the RQNN model within the BCI system. The raw EEG signal is fed one sample at a time and an enhanced signal is obtained as a result of the filtering process. The raw EEG is first scaled to the range 0–2 before it is fed to the RQNN model. During the off-line classifier training process, all the trials from a particular channel of EEG are available. Therefore, the complete EEG is scaled using the maximum amplitude value from that specific channel. During the online process, the EEG signal is approximately scaled to the range 0–2 using the maximum amplitude value obtained from the off-line training data of that specific channel.
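The scaling just described can be sketched as below. The text does not spell out the exact arithmetic of the 0–2 mapping, so dividing by the channel-wise training maximum of the absolute amplitude and shifting by +1 is an assumption, not the paper's confirmed formula.

```python
import numpy as np

def fit_channel_max(train_eeg):
    """Channel-wise maximum absolute amplitude from the off-line training data."""
    return np.max(np.abs(train_eeg))

def scale_to_0_2(eeg, ch_max):
    """Map EEG approximately into [0, 2]: divide by the training-set maximum
    (giving roughly [-1, 1]) and shift by +1. Online data may slightly exceed
    this range, which the extended neuronal lattice is meant to absorb."""
    return eeg / ch_max + 1.0
```

During evaluation, ch_max is frozen at the value fitted on the *03T training session, mirroring how the online scaling reuses the off-line maximum.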
The net effect is that the input signal during the online process is also maintained approximately in the region 0–2, and this enables the tracking of each sample using a reduced range of movement of the wave packet. In addition, the number of spatial neurons along the x-axis has also been reduced from an earlier value of 401 to 61 in the present case.² The primary assumption in doing this is that the unknown nonstationary and evolving EEG signal during the evaluation stage will stay within the bound of the range of 61 spatial neurons, which can cover the

²If the range of the neuronal lattice is −2 to +2, then with a spacing of 0.1 between each neuron, the total number of neurons covering the range will be −2, −1.9, −1.8, …, −0.1, 0, +0.1, …, 1.9, 2, i.e., 41. However, to incorporate the behavior of the signal during the unknown evaluation stage, the range has been extended to cover the range up to +3 using 61 neurons.

Fig. 4. RQNN model framework for EEG signal enhancement. (Channels C3 and C4 → RQNN filtering → feature extraction (band power/Hjorth) → classification.)

input signal range up to three. If the scaling of the input signal were not implemented, the number of neurons required to cover the input signal range would be larger, leading to an increased computational expense. This is an important modification with respect to [19], and the scaling of the EEG is now dictated by the training data set. During the off-line training process, the complete set of scaled EEG signals (here, signals from channels C3 and C4, discussed in Section VI) is fed through the two RQNNs, respectively (see Fig. 4), and a filtered estimate of the signal is obtained for the samples from both these channels.

VI. FEATURE EXTRACTION AND CLASSIFICATION

The next task is to obtain the features from this RQNN-enhanced EEG signal, which in the present case are the Hjorth [30] and band power features. These combined features are then fed as an input to train the off-line classifier, which in this case is the linear discriminant analysis (LDA) classifier. Once the off-line analysis is complete and the classifier is trained, the parameters and weight vector are stored for use with the classifier to identify the unlabeled EEG data during the online analysis. It should be clarified here that, to capture the dynamic property of the continuous EEG signal, the weight updation process of the RQNN filter is continuous (to enhance the EEG signal) during both the off-line and online stages, while the classifier parameters are tuned off-line and then kept fixed for the online classification process. Various FE approaches, such as RQNN-generated features, band power, Hjorth, power spectral density (PSD), bispectrum (BSP), and time-frequency (t–f) features, have been utilized by various research groups [15], [16], [31]–[35] to produce a good practical BCI system. Most BCI research in signal processing is focused on the frequency domain. The band power FE method is based on calculating the squared amplitude of the signal over a small window.
This approach typically uses two frequency bands for FE: the μ band (8–13 Hz) and the β band (14–24 Hz), although the range of these frequency bands may vary from one subject to another. The μ and β bands are important as they are more reactive during cued motor imagery [8], [36]. There is a much larger difference in band power changes [event-related desynchronization (ERD) and event-related synchronization (ERS)] within these bands, which helps differentiate between hand versus foot MI or right versus left hand MI. In addition, it is also possible to convey relevant information about the EEG epochs with the trio of conventional time-domain descriptive statistics, the Hjorth parameters, namely


TABLE I
FIXED RQNN PARAMETERS BEFORE INITIALIZING THE VARIABLE PARAMETER SEARCH

TABLE II
VARIABLE PARAMETERS TO BE SELECTED WITHIN THE SEARCH SPACE

activity, mobility, and complexity [37]. The computational cost of calculating the Hjorth parameters is considered low, as this approach is based on variance [31]. Moreover, the Hjorth parameters, especially complexity, are sensitive to noise because their computation is based on numerical differences and their variances [38]. This prompted the authors to evaluate the RQNN preprocessing technique by utilizing a combination of Hjorth and band power features.

VII. RQNN PARAMETER SELECTION

This section discusses the possible ways of selecting the RQNN parameters to suit an individual subject. Four parameters in the RQNN model have been kept fixed and are explained in Table I. These were obtained heuristically, after suitable trial and experimentation over a small set of EEG data. The variable parameters are selected from the search space explained in Table II through the two-step inner–outer fivefold cross-validation method shown in Fig. 5. The first step is to vary the RQNN parameters within the search space shown in Table II and measure the overall performance of the classifier through an inner–outer cross-validation technique with a limited number of trials, using the Hjorth and band power features over the standard frequency bands of 8–13 Hz and 14–24 Hz. In this first step, the training data set of EEG is separated into five outer folds. Of these, the raw EEG is filtered using the RQNN on four folds using a specific set of parameters over the event-related MI period 3–7 s. Once the RQNN-enhanced signal is obtained, FE is performed. This feature set is then further divided into five inner folds. A normal fivefold cross-validation (CV) is performed on these inner folds to obtain the performance quantifiers [classification accuracy (CA) (i.e., the percentage of correct classifications)

and kappa³ value] for the specific parameter combination with the fixed frequency band. This complete step is repeated with all the different combinations of parameters within the search space mentioned in Table II. The five best RQNN parameter sets are chosen from this step as per the highest kappa value. Thus the output from the first step gives the five best RQNN parameter sets from each outer fold that have the potential to efficiently filter the raw EEG. The second step is to find the best subject-specific frequency band in accordance with the five best outer-fold RQNN parameter sets. Therefore, in this step, the raw EEG is filtered using the five best RQNN parameter sets, features are again extracted, and a normal fivefold CV is performed over the complete set of EEG training data. This stage thus gives one best RQNN parameter and frequency band combination, and the optimum time-point⁴ for performing the classification, as per the highest kappa value for each subject. Once these steps are complete, the classifier is chosen at the best time-point so that it can be applied to the unknown evaluation data sets. Another common approach to handle the parameter tuning/selection issue is to utilize optimization techniques such as PSO or a genetic algorithm (GA). However, the RQNN model has several parameters that should be varied in agreement with the frequency bands at the FE stage for EEG classification to suit an individual subject. Applying any optimization technique within a large multidimensional search space would be time-consuming. Therefore, PSO has been applied to select the

³Kappa is a measure of agreement between two estimators; since it considers chance agreement, it is regarded as a more robust measure in comparison to accuracy [58].

⁴The optimum time-point is an estimate of a point in time within the trial duration of 8 s that produces features with maximum separation, allowing classification with the lowest error.
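The two feature families evaluated inside each CV fold, band power and the Hjorth parameters, can be sketched as follows. The filter order and the 0.5 s smoothing window are illustrative assumptions, not the paper's stated choices.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # sampling rate of data set 2b

def band_power(eeg, lo_hz, hi_hz, fs=FS, win=125):
    """Band power: bandpass (4th-order Butterworth, an assumed design),
    square the output, then smooth with a 0.5 s moving average."""
    b, a = butter(4, [lo_hz / (fs / 2), hi_hz / (fs / 2)], btype="band")
    p = filtfilt(b, a, eeg) ** 2
    return np.convolve(p, np.ones(win) / win, mode="same")

def hjorth(x):
    """Hjorth activity, mobility, and complexity of a 1-D signal."""
    dx, ddx = np.diff(x), np.diff(np.diff(x))
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity
```

A per-trial feature vector for the LDA classifier would then concatenate the μ and β band powers with the three Hjorth values for each of C3 and C4.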


Fig. 6. DC, staircase dc, and sine signal filtering. (Three panels of amplitude versus time: dc + 0 dB noise, noiseless dc, and RQNN-filtered dc; staircase dc + 20 dB noise, noiseless staircase dc, and RQNN-filtered staircase dc; sine + 6 dB noise, noiseless sine, and RQNN-filtered sine.)

Fig. 5. Flowchart for the two-step inner–outer fivefold CV parameter selection (RQNN/frequency band).

RQNN parameters for filtering the simple example signals, while the two-step parameter selection approach has been applied for filtering EEG.

VIII. SAVITZKY–GOLAY FILTER

The performance of the RQNN has been compared with the unfiltered EEG as well as with the well-established SG technique [39]. The SG technique has been utilized as a noise removal approach (in this respect it is thus similar to the RQNN) in biological signals such as the ECG [40] and the EEG [41], [42]. SG filtering can smooth the signal without destroying its original properties. Hence, the SG approach has been utilized here for comparison with the RQNN model. The RQNN block shown in the EEG framework of Fig. 4 is simply replaced with the SG block.

IX. RESULTS AND DISCUSSION

A. Simple Example Signals

To validate the RQNN technique for filtering the complex EEG signals, we apply it to filter simple example signals in the form of dc, staircase dc, and sinusoidal signals that have been embedded with a known amount of noise. The dc signal of amplitude 2 is embedded with 0 dB noise (i.e., the SNR is 1), the

staircase dc with amplitude varying from 0 to 2 is embedded with 20 dB noise, and the sinusoidal signal of amplitude 3 is embedded with 6 dB noise. The parameters of the RQNN model to filter the input dc signal are β = 0.002, m = 0.5, ζ = 775.05, and N = 400, while each sample is iterated once so as to stabilize the SWE (Table I). The parameters β and ζ were obtained using the PSO technique [20], [21] by fixing the parameter m at 0.5. The parameters to filter the sinusoidal signal were obtained as β = 5.25, m = 0.25, ζ = 1.75, and N = 140, and each sample was iterated 60 times before the next sample was fed. The delearning parameter βd has been kept as one throughout. Fig. 6 shows the filtering of these signals using the RQNN approach. A video showing the movement of the wave packet for dc filtering is available at [43]. The root-mean-square error in filtering the dc signal of amplitude 2 with the proposed RQNN as well as with the Kalman filter [44] is shown in Table III (partially reproduced from [14]) and demonstrates that the RQNN performs better. It can thus be firmly stated from the plots and figures that the RQNN is able to effectively capture the statistical behavior of the input signal and appropriately track the true signal even when fed with a highly noisy input. It is worth highlighting here that the statistical behavior of the noise and signal in terms of the pdf is assumed a priori in the case of the Kalman filter and its variants. However, the proposed RQNN


TABLE III
PERFORMANCE COMPARISON FOR DC SIGNAL OF AMPLITUDE 2

Fig. 9. ERS for the RQNN-filtered and raw EEG, beta band, channel C4 (subject B0405E). (Curves: left MI (RQNN), right MI (RQNN), left MI (RAW), right MI (RAW).)

TABLE IV
SUBJECT-SPECIFIC PARAMETERS FROM INNER–OUTER FIVEFOLD CV

Fig. 7. Representative plot of RQNN-filtered and raw EEG.

Fig. 8. Snapshots of the wave packets and MLE that generate the representative plot of the RQNN-filtered EEG shown in Fig. 7.

directly estimates this probability density function without making any such assumption. Thus the proposed model can enhance the EEG signal much better, as the noise pdf is naturally non-Gaussian.
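The simple-signal benchmark setup of this subsection can be reproduced as a sketch: embed white Gaussian noise at a stated SNR in dB, then apply an SG baseline via SciPy's savgol_filter. The SG window length and polynomial order here are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np
from scipy.signal import savgol_filter

def embed_noise(signal, snr_db, rng):
    """Add white Gaussian noise so that SNR_dB = 10*log10(P_signal/P_noise);
    0 dB therefore means noise power equal to signal power (SNR = 1)."""
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / 10 ** (snr_db / 10)
    return signal + rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)

rng = np.random.default_rng(0)
dc = 2.0 * np.ones(2000)                  # dc signal of amplitude 2
noisy = embed_noise(dc, 0.0, rng)         # 0 dB noise, as in the benchmark
smoothed = savgol_filter(noisy, window_length=51, polyorder=2)
```

Comparing the root-mean-square error of `noisy` and `smoothed` against the clean dc gives the kind of baseline figure that Table III reports for the Kalman filter and the RQNN.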

B. EEG-Based BCI

1) Signal Wave Packets: Fig. 8 displays the tracking of the EEG signal in the form of snapshots of wave packets. The movement of the wave packet along the x-axis is shown at time instants t = 5.0 s, t = 5.2 s, t = 5.6 s, and t = 6.0 s. The MLE from the wave packet gives the filtered EEG shown in Fig. 7. This figure displays a representative plot of the raw EEG and the RQNN-enhanced EEG for the time interval between 5 and 6 s. The effect of filtering can be ascertained through ERD/ERS in the frequency domain as well as through an overall performance enhancement of the classifier outcome. 2) ERD/ERS: Fig. 9 shows a representative ERS obtained with the RQNN-filtered EEG signal and the raw EEG signal for subject four (evaluation set *05E). The

ERD/ERS were obtained for all channels by averaging band power change at each time-point across the time interval 4000–6000 ms (standard activity period) with respect to the reference period from 500 to 1500 ms for all the subjects. The improvements in ERD/ERS with the RQNN-filtered signals for both the evaluation data sets is statistically significant (p < 0.04) and enhances the overall BCI performance. 3) Performance Enhancement (CA/Kappa): The list of subject-specific parameters for the RQNN model obtained using the inner–outer fivefold CV (Section VII) is shown in Table IV. Fig. 10 displays the CA plot using the LDA

Classification accuracy plot (subject B0405E).

GANDHI et al.: QUANTUM NEURAL NETWORK-BASED EEG FILTERING

TABLE V P EAK CA W ITH D IFFERENT M ODELS

TABLE VI M AXIMUM OF KAPPA W ITH D IFFERENT M ODELS

classifier with the Hjorth and band power features using the raw EEG, the RQNN-filtered EEG, and the SG-filtered EEG signals for the evaluation data set for the subjects B0405E. Tables V and VI display the peak CA and the maximum of kappa values, respectively, for the training and the evaluation data sets for all the nine subjects The average improvement with the RQNN technique across all the nine subjects is more than 4% in CA [p < 0.0217]5 and 0.08 in kappa values (p < 0.0216) when compared with the raw approach by using the same combined Hjorth and band power feature setup and subject-specific frequency band (step 2 in Fig. 5). The average improvement with the RQNN technique is >7% in CA (p < 0.0001) and 0.14 in kappa values (p < 0.0001) when compared with the SG-filtered approach by using the same combined feature setup (and subject-specific frequency band obtained from step 2 in Fig. 5). These results also show a clear improvement of >9% in average CA (p < 0.0007) and >0.1 in average kappa value (p < 0.0006) when compared with the BCI design with PSD features extracted from raw EEG investigated in [35] on the same data set and training/evaluation setup. RQNN shows improvements of 4% in the average CA (p < 0.044) and >0.07 in average kappa (p < 0.044) when compared with the performance of BCI design with BSP features extracted from raw EEG investigated in [35] on the same data set and training/evaluation setup. Table VII displays the average maximum of kappa as well as the maximum of kappa computed from all nine subjects 5 Two-way analysis of variance (ANOVA2) test is performed with the results of the training and the evaluation stages for the RQNN filtered and the raw EEG approach.

285

at the evaluation stage to compare the performance using different methods. From the results displayed in Table VII, specifically observing the performance of subject B03, there seems to be a huge difference in the maximum of kappa values obtained with BSP (0.29)/PSD (0.27) compared to that with the raw (0.84) and the RQNN (0.89) approaches. This may be because, the BSP and PSD techniques are frequency-based, while the raw and the RQNN techniques in this paper have used a combination of frequency (band power) and temporalbased (Hjorth) features. To substantiate this, we implemented the inner–outer fivefold cross-validation using only the band power features for both the raw and the RQNN. The resulting average performance for evaluation stages in terms of CA (and maximum of kappa values) for subject B03 was 61.9 (0.25) and 58.12 (0.16), respectively, with the RQNN and the raw approaches. Thus it may be stated here that the RQNN filtering enhances the performance of BCI when compared to the raw EEG, but the increase in performance when compared to BSP and PSD may also be attributed to the use of a combination of frequency and temporal features. It can therefore be concluded from these results that the RQNN improves the average performance of BCI system for almost all the subjects during both the training and the evaluation stages when compared to the unfiltered EEG, SG-filtered EEG, and even PSD and BSP features-based approaches. The same data sets were also processed and classified by several renowned researchers as competitors of BCI Competition IV 2b-data set [45] which is also discussed in [35]. The performance of RQNN (Table VII) is also significantly better than the ones obtained by the winners of BCI competition in [45]6 The competition winner used the filter bank CSP technique for FE along with the Naive Bayes Parzen window classifier. The runner-up group used common spatial subspace decomposition technique for FE followed by LDA classifier. 
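The kappa values underlying these comparisons are Cohen's kappa, i.e., classification accuracy corrected for chance agreement [58]; for a balanced two-class problem with balanced predictions it reduces to 2·CA − 1. A minimal sketch with illustrative labels (not the paper's data):

```python
import numpy as np

def cohen_kappa(y_true, y_pred):
    """Cohen's kappa: observed accuracy corrected for chance agreement."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(np.concatenate([y_true, y_pred]))
    po = np.mean(y_true == y_pred)                     # observed agreement (CA)
    pe = sum(np.mean(y_true == c) * np.mean(y_pred == c)
             for c in classes)                         # chance agreement
    return (po - pe) / (1.0 - pe)

# Balanced two-class example: 80% CA -> kappa = 2 * 0.8 - 1 = 0.6
y_true = [0] * 50 + [1] * 50
y_pred = [0] * 40 + [1] * 10 + [1] * 40 + [0] * 10    # 80 correct of 100
print(round(cohen_kappa(y_true, y_pred), 3))          # -> 0.6
```

Because chance agreement is subtracted out, kappa permits a fairer comparison across subjects and sessions than raw CA when class proportions differ.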
The third group used a CSP followed by log-variance techniques for FE and the best (at the training stage) of the LDA and SVM classifiers. The fourth group used a wavelet technique followed by an LDA classifier, and the fifth group used spectral features before a neural network classifier. The sixth group estimated 75 band power features with a recursive feature elimination technique and a Bayesian LDA classifier [35]. Some of the competitors of Competition IV used only session 3 for training, while some combined sessions from the three training sessions (combining 1, 3, or 1, 2, or 1, 2, 3) differently for different subjects and evaluated on sessions 4 and 5 [46]–[51]. In this paper, only session 3 is used for training, while sessions 4 and 5 are used for evaluation. The results thus show that, without prior knowledge of the noise characteristics present in the EEG, the RQNN can be utilized to enhance EEG signal separability, and that the quantum approach-based filtering method can be used as a signal preprocessing method for BCI.

6 The average maximum of kappa (across nine subjects) obtained by the first six competitors is 0.6, 0.58, 0.46, 0.43, 0.37, and 0.25, respectively.

TABLE VII
EVALUATION STAGE (*4E, *5E) PERFORMANCE COMPARISON

4) Online Real-Time Implementation: The proposed RQNN methodology has also been utilized for online EEG filtering in a real-time MI-based robot control task using an intelligent adaptive user interface, as shown in the videos at [52]. A very important feature of the RQNN filtering methodology is that a single incoming sample (particle) is viewed as a wave packet which evolves as per the potential field (or function) under the influence of the SWE (video at [43]).

TABLE VIII
RQNN PERFORMANCE ON BCI COMPETITION IV 2A DATA SET

5) Investigation on BCI Competition IV 2a Data Set: The RQNN methodology has also been investigated on the BCI Competition IV 2a data set [53], as displayed in Table VIII. This data set consists of one training set and one evaluation set for nine subjects, with 22 channels and four different MI tasks, namely the imagination of movement of the left hand (class 1), right hand (class 2), both feet (class 3), and tongue (class 4). However, the RQNN approach has been applied, as before, using only two channels, namely C3 and C4, and only for a two-class classification (left hand versus right hand). Therefore, the data was separated into two classes: EEG with the left hand and the right hand mental imagination tasks. The same two-step procedure (Fig. 5) has been applied for the parameter selection. The average performance enhancement obtained is >2% in CA (p < 0.0027) and 0.04 in maximum of kappa (p < 0.0031) when compared with the raw EEG. More details about the subject-specific parameters for this data set are available in [54].

X. CONCLUSION

The RQNN was evaluated with case studies of simple signals, and the results show that the RQNN is significantly better than the Kalman filter when filtering a dc signal embedded in three different noise levels. The learning architecture

and the associated unsupervised learning algorithm of the RQNN have been modified to take into account the complex nature of the EEG signal. The basic approach is to ensure that the statistical behavior of the input signal is properly transferred to the wave packet associated with the response of the quantum dynamics of the network. At every computational sampling instant, the EEG signal is encoded as a wave packet which can be interpreted as the pdf of the signal at that instant. The subject-specific RQNN parameters have been obtained using a two-step inner–outer fivefold cross-validation, which results in an enhanced EEG signal that is subsequently used for the FE and classification processes. The CA and kappa values obtained from the RQNN-enhanced EEG signal show a significant improvement during both the training and the evaluation stages across multiple sessions. This performance enhancement through the RQNN model is superior when compared to that using the raw EEG, the Savitzky–Golay filtered EEG, or even the raw EEG with the PSD or the BSP-based features. Future work will involve developing automated computational techniques such as GA or PSO for selecting the subject-specific RQNN model parameters. Improving other stages of the signal processing framework, as highlighted in [55], will also increase the online performance of BCI for applications in stroke rehabilitation [56] and games [57], among others. The noteworthy feature of the proposed scheme is that, without introducing any complexity at the FE or the classification stages, the performance of the BCI can be significantly improved simply by enhancing the EEG signal at the preprocessing stage.

ACKNOWLEDGMENT

The authors would like to thank InvestNI and the Northern Ireland Integrated Development Fund under the Centre of Excellence in Intelligent Systems Project.

REFERENCES

[1] K. Fukunaga, Introduction to Statistical Pattern Recognition, 2nd ed. New York, NY, USA: Academic, 1990. [2] S. Lemm, B. Blankertz, G. Curio, and K. R.
Muller, “Spatio-spectral filters for improving the classification of single trial EEG,” IEEE Trans. Biomed. Eng., vol. 52, no. 9, pp. 1541–1548, Sep. 2005. [3] M. Arvaneh, C. Guan, K. K. Ang, and C. Quek, “Optimizing spatial filters by minimizing within-class dissimilarities in electroencephalogrambased brain–computer interface,” IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 4, pp. 610–619, Apr. 2013. [4] H. Zhang, H. Yang, and C. Guan, “Bayesian learning for spatial filtering in an EEG-based brain–computer interface,” IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 7, pp. 1049–1060, Jul. 2013. [5] D. Coyle, G. Prasad, and T. M. McGinnity, “Faster self-organizing fuzzy neural network training and a hyperparameter analysis for a brain–computer interface,” IEEE Trans. Syst., Man, Cybern., B, Cybern., vol. 39, no. 6, pp. 1458–1471, Dec. 2009. [6] D. Coyle, G. Prasad, and T. M. McGinnity, “A time-series prediction approach for feature extraction in a brain–computer interface,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 13, no. 4, pp. 461–467, Dec. 2005. [7] D. Coyle, “Neural network based auto association and time-series prediction for biosignal processing in brain–computer interfaces,” IEEE Comput. Intell. Mag., vol. 4, no. 4, pp. 47–59, Nov. 2009. [8] G. Pfurtscheller and F. H. Lopes da Silva, “Event-related desynchronization,” in Handbook of Electroencephalography and Clinical Neurophysiology, vol. 6. Amsterdam, The Netherlands: Elsevier, 1999.


[9] R. S. Bucy, “Linear and nonlinear filtering,” Proc. IEEE, vol. 58, no. 6, pp. 854–864, Jun. 1970. [10] R. Shankar, Principles of Quantum Mechanics. New York, NY, USA: Plenum, 1994. [11] M. Planck, Zur Theorie Des Gesetzes Der Energieverteilung Im Normalspectrum. Munich, Germany: Barth, 1900. [12] R. P. Feynman, “Quantum mechanical computers,” Found. Phys., vol. 16, no. 6, pp. 507–531, Jun. 1986. [13] L. Behera, I. Kar, and A. C. Elitzur, “A recurrent quantum neural network model to describe eye tracking of moving targets,” Found. Phys. Lett., vol. 18, no. 4, pp. 357–370, Aug. 2005. [14] L. Behera and I. Kar, “Quantum stochastic filtering,” in Proc. IEEE Int. Conf. Syst., Man Cybern., Oct. 2005, pp. 2161–2167. [15] V. Gandhi, V. Arora, G. Prasad, D. Coyle, and T. M. McGinnity, “A novel EEG signal enhancement approach using a recurrent quantum neural network for a brain–computer interface,” in Proc. 3rd Eur. Conf., Tech. Assist. Rehabil., Mar. 2011, pp. 1–8. [16] V. Gandhi, V. Arora, L. Behera, G. Prasad, D. Coyle, and T. M. McGinnity, “A recurrent quantum neural network model enhances the EEG signal for an improved brain–computer interface,” in Proc. Assist. Living, Inst. Eng. Technol. Conf., Apr. 2011, pp. 1–6. [17] L. Behera, S. Bharat, S. Gaurav, and A. Manish, “A recurrent network model with neurons activated by Schroedinger wave equation and its application to stochastic filtering,” in Proc. 9th Int. Conf. High-Perform. Comput., Workshop Soft Comput., Dec. 2002, pp. 1–8. [18] V. G. Ivancevic, “Adaptive-wave alternative for the black-scholes option pricing model,” Cognit. Comput., vol. 2, no. 1, pp. 17–30, Jan. 2010. [19] V. Gandhi, V. Arora, L. Behera, G. Prasad, D. Coyle, and T. McGinnity, “EEG denoising with a recurrent quantum neural network for a brain– computer interface,” in Proc. Int. Joint Conf. Neural Netw., Jul./Aug. 2011, pp. 1583–1590. [20] J. Kennedy and R. C. Eberhart, “Particle swarm optimization,” in Proc. IEEE Int. Conf. 
Neural Netw., Nov./Dec. 1995, pp. 1942–1948. [21] J. Kennedy, “The particle swarm: Social adaptation of knowledge,” in Proc. IEEE Int. ICEC, Apr. 1997, pp. 303–308. [22] J. Acacio de Barros and P. Suppes, “Quantum mechanics, interference, and the brain,” J. Math. Psychol., vol. 53, no. 5, pp. 306–313, 2009. [23] R. L. Dawes, “Quantum neurodynamics: Neural stochastic filtering with the Schroedinger equation,” in Proc. Int. Joint Conf. Neural Netw., Jun. 1992, pp. 133–140. [24] K. H. Pˇribram, Rethinking Neural Networks: Quantum Fields and Biological Data. Mahwah, NJ, USA: Lawrence Erlbaum Assoc., 1993. [25] L. Behera, I. Kar, and A. C. Elitzur, “Recurrent Quantum neural network and its applications,” in Proc. Emerging Phys. Consciousness, 2006, pp. 327–350. [26] T. R. Taha and M. I. Ablowitz, “Analytical and numerical aspects of certain nonlinear evolution equations. II. Numerical, nonlinear Schrödinger equation,” J. Comput. Phys., vol. 55, no. 2, pp. 203–230, 1984. [27] J. Crank and P. Nicolson, “A practical method for numerical evaluation of solutions of partial differential equations of the heat-conduction type,” in Proc. Math. Cambridge Phil. Soc., Jan. 1947, pp. 50–67. [28] J. Scheffel, “Does nature solve differential equations?” R. Inst. Technol., Stockholm, Sweden, Tech. Rep. TRITA-ALF-2002-02, May 2002. [29] (2009). BCI Competition IV [Online]. Available: http://www.bbci.de/competition/iv/desc_2b.pdf [30] B. Hjorth, “EEG analysis based on time domain properties,” Electroencephalogr. Clinical Neurophysiol., vol. 29, no. 3, pp. 306–310, 1970. [31] M. Vourkas, S. Micheloyannis, and G. Papadourakis, “Use of ANN and Hjorth parameters in mental-task discrimination,” in Proc. 1st Int. Conf. Adv. Med. Signal Inf. Process., Sep. 2000, pp. 327–332. [32] C. Vidaurre, A. Schlogl, R. Cabeza, R. Scherer, and G. Pfurtscheller, “Study of on-line adaptive discriminant analysis for EEG-based brain– computer interfaces,” IEEE Trans. Biomed. Eng., vol. 54, no. 3, pp. 
550–556, Mar. 2007. [33] P. Herman, G. Prasad, T. M. McGinnity, and D. Coyle, “Comparative analysis of spectral approaches to feature extraction for EEG-based motor imagery classification,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 16, no. 4, pp. 317–326, Aug. 2008. [34] D. Coyle, G. Prasad, and T. M. McGinnity, “A time-frequency approach to feature extraction for a brain–computer interface with a comparative analysis of performance measures,” EURASIP J. Appl. Signal Process., vol. 19, pp. 3141–3151, Feb. 2005.


[35] S. Shahid and G. Prasad, “Bispectrum-based feature extraction technique for devising a practical brain–computer interface,” J. Neural Eng., vol. 8, no. 2, pp. 025014-1–025014-12, Mar. 2011. [36] G. Pfurtscheller, R. Scherer, G. Müller-Putz, and F. H. Lopes da Silva, “Short-lived brain state after cued motor imagery in naive subjects,” EURASIP J. Neurosci., vol. 28, no. 7, pp. 1419–1426, Oct. 2008. [37] I. Bankman and I. Gath, “Feature extraction and clustering of EEG during anaesthesia,” Med. Biol. Eng. Comput., vol. 25, no. 4, pp. 474–477, 1987. [38] R. M. Rangayyan, Biomedical Signal Analysis: A Case-Study Approach. Piscataway, NJ, USA: IEEE Press, 2002. [39] A. Savitzky and M. J. E. Golay, “Smoothing and differentiation of data by simplified least squares procedures,” Anal. Chem., vol. 36, no. 8, pp. 1627–1639, Jul. 1964. [40] S. Hargittai, “Savitzky-Golay least-squares polynomial filters in ECG signal processing,” in Proc. 32nd Annu. Sci. Comput. Cardiol., Sep. 2005, pp. 763–766. [41] H. Hassanpour, “A time-frequency approach for noise reduction,” Digital Signal Process., vol. 18, no. 5, pp. 728–738, 2008. [42] A. Zehtabian and B. Zehtabian, “A novel noise reduction method based on Subspace Division,” J. Comput. Eng., vol. 1, no. 1, pp. 55–61, 2009. [43] V. Gandhi. (2013, Jul. 12). Evolving of the Wave Packet [Online]. Available: http://isrc.ulster.ac.uk/images/stories/Staff/BCI/Members/VGandhi/ Video_PhysicalRobotControl/wavepacket_evolves_according_to_swe.mp4 [44] R. E. Kalman, “A new approach to linear filtering and prediction problems,” Trans. ASME J. Basic Eng., vol. 82, pp. 35–45, Mar. 1960. [45] B. Blankertz. (2008). BCI Competitions IV [Online]. Available: http://www.bbci.de/competition/iv/ [46] Z. Chin, K. Ang, C. Wang, C. Guan, H. Zhang, K. Phua, B. Hamadicharef, and K. Tee. (2013, Jul. 12). BCI Competition IV Results [Online]. Available: http://www.bbci.de/competition/ iv/results/ds2b/ZhengYangChin_desc.pdf [47] H. Gan, L. Guangquan, and Z. 
Xiangyang. (2013, Jul. 12). BCI Competition IV Results [Online]. Available: http://www.bbci.de/competition/ iv/results/ds2b/HuangGan_desc.pdf [48] D. Coyle, A. Satti, and T. M. McGinnity. (2013, Jul. 12). BCI Competition IV Result [Online]. Available: http://www.bbci.de/competition/iv/ results/ds2b/DamienCoyle_desc.pdf [49] S. Lodder. (2013, Jul. 12). BCI Competition IV Results [Online]. Available: http://www.bbci.de/competition/iv/results/ds2b/ShaunLodder_desc. txt [50] J. Saa. (2013, Jul. 12). BCI Competition IV Results [Online]. Available: http://www.bbci.de/competition/iv / results / ds2b / JaimeFernandoDelgado Saa_desc.txt [51] Y. Ping, L. Xu, and D. Yao. (2013, Jul. 12). BCI Competition IV Results [Online]. Available: http://www.bbci.de/competition/ iv/results/ds2b/YangPing_desc.txt [52] V. Gandhi. (2013, Jul. 12). Robot Control Through Motor Imagery [Online]. Available: http://isrc.ulster.ac.uk/Staff/VGandhi/ VideoRobotControlThroughMI [53] C. Brunner, R. Leeb, G. R. Müller-Putz, A. Schlögl, and G. Pfurtscheller. (2009). BCI Competition 2008–Graz Data Set A [Online]. Available: http://www.bbci.de/competition/iv/desc_2a.pdf [54] V. Gandhi, “Quantum neural network based EEG filtering and adaptive brain-robot interfaces,” Ph.D. dissertation, Intell. Syst. Res. Centre, Univ. Ulster, Belfast, U.K., 2012. [55] D. J. Krusienski, M. Grosse-Wentrup, F. Galán, D. Coyle, K. J. Miller, E. Forney, and C. W. Anderson, “Critical issues in state-of-the-art brain–computer interface signal processing,” J. Neural Eng., vol. 8, no. 2, pp. 025002-1–025002-8, Apr. 2011. [56] G. Prasad, P. Herman, D. Coyle, S. McDonough, and C. Jacqueline, “Applying a brain–computer interface to support motor imagery practice in people with stroke for upper limb recovery: A feasibility study,” J. Neuroeng. Rehabil., vol. 7, no. 60, pp. 1–17, 2010. [57] D. Marshall, D. Coyle, S. Wilson, and M. Callaghan, “Games, gameplay, and BCI: The state of the art,” IEEE Trans. Comput. Intell. AI Games, vol. 
5, no. 2, pp. 82–99, Jun. 2013. [58] A. Schlögl, J. Kronegg, J. E. Huggins, and S. G. Mason, “Evaluation criteria for BCI research,” in Towards Brain-Computer Interfacing, G. Dornhege, J. Millán, T. Hinterberger, and D. McFarland, Eds. Cambridge, MA, USA: MIT Press, 2007.



Vaibhav Gandhi received the B.Eng. degree in instrumentation and control engineering from Bhavnagar University, Gujarat, India, in 2000, the M.Eng. degree in electrical engineering from the M.S. University of Baroda, Baroda, India, in 2002, and the Ph.D. degree in computing and engineering from the University of Ulster, Londonderry, U.K., in 2012. He was a recipient of the U.K.-India Education & Research Initiative scholarship for his Ph.D. research in the area of brain-computer interfaces for assistive robotics, carried out at the Intelligent Systems Research Centre, University of Ulster, and partly at IIT Kanpur, Kanpur, India. His Ph.D. research was focused on quantum mechanics-motivated EEG signal processing and an intelligent adaptive user-centric human-computer interface design for real-time control of a mobile robot by BCI users. His post-doctoral research involved work on shadow-hand multi-fingered mobile robot control using EMG/muscle signals, with contributions also in the 3-D printing aspects of a robotic hand. He joined the Department of Design Engineering & Mathematics, School of Science & Technology, Middlesex University, London, U.K., in 2013, where he is currently a Lecturer in robotics, embedded systems, and real-time systems. His current research interests include brain-computer interfaces, biomedical signal processing, quantum neural networks, computational intelligence, computational neuroscience, user-centric graphical user interfaces, and assistive robotics.

Girijesh Prasad (M'98–SM'07) received the B.Tech. degree in electrical engineering from NIT (formerly REC), Calicut, India, in 1987, the M.Tech. degree in computer science and technology from IIT (formerly UOR), Roorkee, India, in 1992, and the Ph.D. degree from Queen's University, Belfast, U.K., in 1997. He has been an Academic Staff Member with the University of Ulster, Derry, U.K., since 1999, and is currently a Professor of intelligent systems. He is an executive member of the Intelligent Systems Research Centre, Magee Campus, where he leads the Brain-Computer Interface and Assistive Technology Team. He has published over 150 research papers in international journals, books, and conference proceedings. His current research interests include self-organizing hybrid intelligent systems, statistical signal processing, adaptive predictive modelling, and control with applications in complex industrial and biological systems, including brain modelling, brain-computer interfaces and neuro-rehabilitation, assistive robotic systems, biometrics, and energy systems. Prof. Prasad is a Chartered Engineer and a fellow of the IET. He is a founding member of the IEEE SMC TCs on Brain-Machine Interface Systems and Evolving Intelligent Systems.

Damien Coyle (SM'12) received a first-class degree in computing and electronic engineering in 2002 and a doctorate in intelligent systems engineering in 2006, both from the University of Ulster, Londonderry, U.K. Since 2006, he has been a Lecturer/Senior Lecturer with the School of Computing and Intelligent Systems and a member of the Intelligent Systems Research Centre, University of Ulster, where he is a founding member of the brain-computer interface and computational neuroscience research teams. His current research interests include brain-computer interfaces, computational intelligence, computational neuroscience, neuroimaging, and biomedical signal processing, and he has co-authored several journal articles and book chapters. He is the 2008 recipient of the IEEE Computational Intelligence Society's Outstanding Doctoral Dissertation Award and the 2011 recipient of the International Neural Network Society's Young Investigator of the Year Award. He received the University of Ulster's Distinguished Research Fellowship Award in 2011 and a Royal Academy of Engineering/The Leverhulme Trust Senior Research Fellowship in 2013. He is an active volunteer in the IEEE Computational Intelligence Society.

Laxmidhar Behera (S'92–M'03–SM'03) received the B.Sc. and M.Sc. degrees in engineering from NIT Rourkela, Rourkela, India, in 1988 and 1990, respectively, and the Ph.D. degree from IIT Delhi, Delhi, India. He was an Assistant Professor at BITS Pilani, India, from 1995 to 1999, and pursued postdoctoral studies at the German National Research Center for Information Technology (GMD), Sankt Augustin, Germany, from 2000 to 2001. He is currently a Professor with the Department of Electrical Engineering, IIT Kanpur, Kanpur, India. He joined the Intelligent Systems Research Centre, University of Ulster, Londonderry, U.K., as a Reader on sabbatical from IIT Kanpur from 2007 to 2009. He was a Visiting Researcher/Professor at FHG, Germany, and ETH Zurich, Switzerland. He has more than 170 papers to his credit, published in refereed journals and presented at conference proceedings. His current research interests include intelligent control, robotics, information processing, quantum neural networks, and cognitive modeling.

Thomas Martin McGinnity (SM'09) received the First Class (Hons.) degree in physics and the Ph.D. degree from the University of Durham, Durham, U.K., in 1975 and 1979, respectively. He is a Professor of intelligent systems engineering with the Faculty of Computing and Engineering, University of Ulster, Derry, Northern Ireland. He is currently the Director of the Intelligent Systems Research Centre, which encompasses the research activities of over 100 researchers. He was formerly an Associate Dean of the Faculty and the Director of the university's technology transfer company, Innovation Ulster, and of a spin-off company, Flex Language Services. He is the author or co-author of over 300 research papers and has attracted over £24 million in research funding to the university. Prof. McGinnity is a fellow of the IET, a Senior Member of the IEEE, and a Chartered Engineer.