
IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, VOL. 55, NO. 5, MAY 2008

Single-Trial EEG Source Reconstruction for Brain–Computer Interface

Quentin Noirhomme*, Richard I. Kitney, Member, IEEE, and Benoît Macq, Senior Member, IEEE

Abstract—A new way to improve the classification rate of an EEG-based brain–computer interface (BCI) could be to reconstruct the brain sources of the EEG and to apply BCI methods to these derived sources instead of to the raw measured electrode potentials. EEG source reconstruction methods are based on electrophysiological information that could improve the discrimination between BCI tasks. In this paper, we present an EEG source reconstruction method for BCI. The results are compared with results from raw electrode potentials to enable a direct evaluation of the method. Features are based on frequency power changes and the Bereitschaft potential. The features are ranked with mutual information before being fed to a proximal support vector machine. Dataset IV of BCI Competition II and data from four subjects serve as test data. Results show that the EEG inverse solution improves the classification rate and can lead to results comparable to the best currently known methods.

Index Terms—Brain–computer interface (BCI), brain–machine interface, classification, electroencephalogram (EEG), inverse problem, source reconstruction.

I. INTRODUCTION

A BRAIN–COMPUTER interface (BCI) is “a communication system which enables the brain to send messages to the external world without using traditional pathways as nerve or muscle” [1] (Fig. 1). Generally, a BCI tries to translate patterns of brain activity into computer commands. The brain activity can be recorded either invasively (intracranially) or noninvasively (using brain functional imaging such as EEG). Invasive BCIs record the activity of either a single neuron or groups of neurons. Direct neuronal recording lends excellent spatial and temporal accuracy to these BCIs. Studies on nonhuman subjects have shown excellent results in terms of usability (simple training, many degrees of freedom) and accuracy (see Lebedev and Nicolelis [2] for a review). However, invasive surgery, problems of biological compatibility between electrodes and brain tissue during long-term recording, and risks of infection due to the wires that connect implanted electrodes to external hardware are serious drawbacks to their widespread use. Noninvasive BCIs, based on functional imaging, offer safer operation and permit a wider user base.


Manuscript received August 6, 2007; revised October 25, 2007. The work of Q. Noirhomme was supported by the Région Wallonne: POPSE under Grant EPI A320501R0322/215092. Asterisk indicates corresponding author.
*Q. Noirhomme was with the Communications and Remote Sensing Laboratory, Université catholique de Louvain, 1348 Louvain-la-Neuve, Belgium. He is now with the Coma Science Group, Cyclotron Research Center, University of Liège, B-4000 Liège, Belgium (e-mail: [email protected]).
R. I. Kitney is with the Department of Bioengineering, Imperial College London, London SW7 1LU, U.K.
B. Macq is with the Communications and Remote Sensing Laboratory, Université catholique de Louvain, 1348 Louvain-la-Neuve, Belgium.
Digital Object Identifier 10.1109/TBME.2007.913986

They record direct (changes in potential) or indirect (changes in blood flow) reflections of brain activity. Experiments have been done with fMRI [3], [4], but electroencephalography (EEG), with its excellent temporal resolution and usability, is the best choice for BCI research. EEG-based BCIs suffer from a loss in spatial resolution due to the overlapping of electrical activity from different brain areas and the low-pass filtering of the signal by the skull and skin. They are also more subject to artifacts from ocular or muscle movements and to recording apparatus noise. Despite these weaknesses, EEG techniques can detect changes in brain activity that correlate with cognitive states, voluntary intentions, or external stimuli [1]. EEG-based BCIs offer patients who lack sensory or motor control a new prosthesis for communicating with the external world. Recent research has shown the possibility of controlling 2-D movement [5], dialing a phone number [6], or spelling words [7]. In the following, we will concentrate on EEG-based BCIs only.

The usability of a BCI depends on its user workload, its effective bit rate (including speed and accuracy), and its portability. The user’s workload depends upon the difficulty of the task required to use the BCI. The portability depends upon the number of electrodes, the physical dimensions of the equipment, and the computing power needed to operate the BCI. The bit rate depends upon the algorithms used for preprocessing (artifact removal), feature extraction, and classification (Fig. 1). A BCI with a low bit rate will discourage the end user. Even though the bit rates of recent BCIs have improved greatly, they still need to be increased further to improve usability. Some possible ways to achieve this include better algorithms, better use of the available information, and the extraction of new salient information. Yet another approach consists of increasing the number of possible user tasks and choices [8]; that is, instead of a binary choice, the user faces a multiple choice. These approaches are all complementary. Better algorithms will improve all BCI stages (Fig. 1). For example, a change in frequency band power can be tracked using autoregressive modelling [9], the wavelet transform [10]–[12], or a multitaper method [13]. More salient information can be obtained by extracting features that are more representative of the tasks under study or by using additional methods to process the raw signals. For example, common spatial pattern (CSP) [14] or common spatial subspace decomposition (CSSD) [15] techniques can extract new time series from the signal that contain more discriminative information. The approach followed in this paper is to reconstruct the sources of the signal by adding biophysical information and solving the EEG inverse problem. In principle, a source-reconstructed signal should be easier to discriminate than a raw electrode signal.


Fig. 1. Brain–computer interface from signal acquisition to end application. 1) The signal is acquired either invasively or with a functional imaging modality. The good temporal resolution, usability, and noninvasiveness of EEG make it the first choice for BCI studies. 2) The raw signal is preprocessed to remove artifacts and filtered to remove the 50 (or 60) Hz power-line interference. 3) The sources of the signal can be reconstructed and used instead of the EEG signal. 4) Features representative of the task are extracted from the signal, e.g., the power change in a given frequency band at a given location or electrode. Optionally, a feature selection method selects the most representative features. 5) The features are classified according to the different tasks. 6) The classification is translated into a command for either a computer mouse, a wheelchair, a robotic arm, or a word speller.

Source reconstruction methods are known to improve the spatial resolution of the EEG. They could also be an alternative to intracranial recording. From a mathematical point of view, EEG source reconstruction can be expressed as a deterministic transformation of the data space. The data are transformed to a new space that should be more suitable for classification. The transformation helps to deblur signals from different sources. Information about the location of the activity can then be taken into account more properly.

The inverse problem consists of reconstructing the brain sources underlying the EEG. Brain sources are modeled by equivalent current dipoles with unknown location, strength, and orientation [16]. The main difficulty is that a given potential distribution on the scalp can be explained by many brain source configurations. There are two ways to manage this: a parametric approach and an imaging approach. The parametric approach assumes that one or a few areas are activated, their number being either hypothesized or estimated by the method. Each activated area is represented by an equivalent current dipole. Parametric methods look for the best location, orientation, and strength (six parameters) for each dipole by performing a nonlinear search over the whole possible solution space. The imaging approach, on the other hand, assumes that either parts of the brain or the full brain are activated. Imaging methods reconstruct a 3-D image of the brain, where every voxel contains three equivalent current dipoles with fixed orientations. The only unknown left is the dipole strength. The problem is then linear but underdetermined. To get a unique solution, additional constraints must be introduced (see [16] and [17] for reviews). Parametric methods have been proposed to serve as classifiers in BCI [18], [19]. Imaging methods can not only classify the data but also process the data before the feature extraction step [13], [20].


Using a source reconstruction method for single-trial classification demands powerful preprocessing [18], [19], as the reconstruction is greatly influenced by noise and background activity. Therefore, the advantage of using a source reconstruction method instead of any other classifier is questionable.

Both parametric and imaging methods are based on an electrophysiological model of the head, including source and electrode locations, and on Maxwell’s equations. The head model can range from a simple spherical approximation to a realistic head model derived from MRI [21]. Spherical approximations use one to five nested concentric homogeneous spheres of given size and conductivity. The spheres represent the scalp, the skull, the brain (grey and white matter), and, optionally, the cerebrospinal fluid (CSF). A more accurate geometry of each component can be segmented individually from MRI. Optimized analytic solutions exist for the spherical approximation, enabling fast computation. More geometrically accurate solutions can be computed using boundary element methods or finite-element methods (FEMs). The need for MR images of the subject’s head could considerably deter the use of accurate source reconstruction methods for BCI. They could be replaced by an atlas head, but the resulting model would only be an approximation.

The reconstructed source signal then replaces the EEG signal in the BCI, or the reconstruction serves as a classifier. Qin et al. [18] preprocess the data using bandpass filtering and independent component analysis, and use the inverse problem as a classifier. Both parametric and imaging methods are applied using a spherical head model. The classification criterion is the hemisphere with the strongest activity. The parametric method searches for the best location of one given dipole. Kamousi et al. [19] use the same preprocessing as Qin et al. [18] but with a classifier based on a two-dipole parametric method. The classification rate reported by both papers is about 80%. Grave de Peralta et al. [13] introduce a source reconstruction method into an existing BCI. An imaging method, based on a realistic head model and the ELECTRA source model [22], [23], processes the data before the feature extraction and classification. Power in the 8–30-Hz frequency band is extracted from either ten electrodes covering the motor cortex or 50 sources. The application of the inverse method improves the resulting classification for the two subjects from 88.4% to 96.3% and from 89.5% to 95.1%, respectively. Congedo et al. [20] apply an imaging method before a CSP filter to process the data. The classification is done by comparing source power magnitudes between two selected areas. This method is applied to dataset IV of the BCI competition 2003, and the classification rate is 83.65% for the training set and 83.00% for the test set. All these methods were applied to frequency features elicited by motor and motor imagery tasks. These results are encouraging. However, only a comparison between results obtained from EEG and from reconstructed sources, as given by Grave de Peralta et al. [13], can demonstrate the utility of the source reconstruction. In the present paper, we propose a new comparison. Sources are reconstructed with a simpler head model based on a spherical approximation and 400 dipoles. Different kinds of features, not only frequency features, are extracted.


Fig. 2. Four hundred dipoles located on an equiangular grid on a half-sphere on the cortex. The nose is toward the top. Dipoles are oriented perpendicular to the cortex. Black dipoles represent the extended motor cortex from which the features were extracted.

We built six BCIs based on frequency and Bereitschaft potential features to classify self-paced key typing. Three BCIs extracted features from electrode potentials. The other three reconstructed the sources first, before extracting the features. The head’s electrophysiological properties were approximated using a spherical head model, and imaging methods reconstructed the sources. There are far more reconstructed sources than electrodes. Therefore, the number of extracted features increases dramatically, and a feature or source selection method must be used. After extraction, the features were ranked with mutual information. The top-ranked features were sent to a proximal support vector machine (SVM) classifier. The BCI framework was the same for both EEG and source signals, enabling an accurate comparison between both approaches (Fig. 1). The comparison was done on dataset IV of the BCI competition 2003 [24] and on another dataset with four subjects recorded under identical conditions. We noted an improvement due to source reconstruction for all data. Furthermore, the presented results on the competition data are equivalent to those of Congedo et al. [20] and of Wang et al. [15] (the best submission to the competition, which achieved a classification rate of 84% on the test set and 93% on the training set). Wang’s method is based on Fisher discriminant analysis and CSSD. Features extracted from the Bereitschaft potential and frequency power are classified with a perceptron neural network. Results on the data of the four subjects confirmed the improvement. In the following, we first introduce the inverse problem and its application to BCI. We then discuss how we built our BCIs for an accurate comparison and present the results. Finally, we discuss our method and conclude.

II. INVERSE PROBLEM

Imaging methods estimate the amplitude of fixed dipoles distributed over either the whole head or part of it. In our application, we limited the brain source space to a half-sphere just below the cortex (Fig. 2). The half-sphere covers most of the cortex and is centered close to the region of interest. The dipoles were located 2 mm below the cortex surface, inside the grey matter. Four hundred dipoles were distributed on the half-sphere. Dipoles were located on an equiangular grid without poles. As the pyramidal cells that form the cortex are oriented perpendicular to the cortex surface [25], the dipoles were also oriented perpendicular to the surface.

Therefore, all sources were radial. This assumption is not physiologically plausible: the cortex is folded, not spherical. While sources on top of gyri can be considered radial, sources inside sulci cannot. Nevertheless, this model is a good compromise between spatial accuracy and computational cost, which increases with the number of dipoles, especially when one extracts more than one feature from each dipole. The head’s electromagnetic properties were approximated by a four-shell head model [21]. The shells represented, respectively, the brain, the CSF, the skull, and the scalp. The scalp shell radius was arbitrarily fixed at 100 mm to ease the computation. The other shell radii followed Stock’s proportions [26]. This model was chosen because it is not subject-dependent and is less computationally expensive. The EEG signal has a low-frequency spectrum, typically well below 1000 Hz. Therefore, the EEG physics can be described by the quasi-static approximation of Maxwell’s equations. The relationship between electrode potentials and source currents can be computed from the head model and the quasi-static approximation, and is expressed by the lead field matrix G [16], [21]. Each element of G represents the relation between one electrode and one source component, i.e., x, y, or z.

The imaging approach can be mathematically formulated as follows for EEG recorded using a common average reference. Other references can be used as long as they are correctly integrated into the model. Let us assume that Y is an n × t matrix containing the recorded scalp potentials, with n the number of electrodes and t the number of samples; S is an m × t matrix of dipole amplitudes, assumed normally distributed with zero mean and covariance matrix C_S, with m the number of dipoles, m ≫ n; and G is the n × m lead field matrix. We can write our model as

$Y = GS + \eta$   (1)

where η is the noise (ambient noise, recording apparatus noise, physiological noise), which is normally distributed with zero mean and covariance matrix C_η. As (1) is underdetermined, a priori information must be introduced into the problem to guarantee the stability and uniqueness of the solution. One common approach is the Tikhonov regularization method [27]. Equivalent results can be obtained through Bayesian inference or weighted minimum norm methods:

$\hat{S} = \arg\min_S \left\{ \|Y - GS\|_{C_\eta}^2 + \lambda^2 \|HS\|^2 \right\}$   (2)

where the first part of the right-hand side measures the fit to the data, and the second part stabilizes the solution. The C_η norm is defined as $\|Y - GS\|_{C_\eta}^2 = (Y - GS)' C_\eta^{-1} (Y - GS)$, where the prime denotes the transpose. λ is the regularization parameter and H is the regularization or prior matrix. H contains a priori information on the solution. The relationship between H and C_S is

$C_S^{-1} = \lambda H' H$   (3)

where λ gives the importance of the prior. H can be seen as a precision matrix (the inverse of a covariance) [29]. Indeed, for a given source, the greater the corresponding element of H, the smaller the source variance. Consequently, the variations of the source amplitude around its mean are smaller and the source is known with more precision. In our case, elements with high precision will stay close to zero and will therefore be inactive. The solution of (2) is found by matrix differentiation and is

$\hat{S} = (G' C_\eta^{-1} G + \lambda^2 H' H)^{-1} G' C_\eta^{-1} Y.$   (4)

Equation (4) can be partially computed in advance to speed up the computation:

$T = (G' C_\eta^{-1} G + \lambda^2 H' H)^{-1} G' C_\eta^{-1}.$   (5)

The pseudoinverse T does not depend on the data. Equation (4) then reduces to a simple matrix multiplication

$\hat{S} = T Y$   (6)

and can be computed online.

The general framework of (4) has led to various solutions based on different priors and different methods to select the regularization parameter (for reviews, see [16] and [17]). Several kinds of a priori information have been proposed for H.
1) The identity matrix, which produces the regularized minimum norm (RMN) solution [27]. All sources are assumed to be uncorrelated.
2) A coherence matrix. If the distribution of dipoles is sufficiently dense, we can assume that activations of neighbouring dipoles are correlated. The correlation of dipoles often takes the form of a Laplacian [30]. The coherence prior smoothes the solution and avoids sparse activity.
3) A depth-weighting matrix. The weights compensate for the tendency of the least-squares solution to favor superficial sources.
4) A location matrix. Dipole locations with a supposedly weak contribution to the solution receive a larger weight. Consequently, their variance is small and their activity stays close to their zero mean.
5) A temporal constraint, which assumes that dipole magnitudes evolve slowly with respect to the sampling frequency [31], [32]. This constraint smoothes the data from one time sample to the next.
The conjoint use of depth weights and a Laplacian leads to the well-known low-resolution brain electromagnetic tomography (LORETA) method [30].

In the proposed head model, all the dipoles were at the same depth, so we did not use depth weights. We also did not use temporal priors, as they require computing a new T for every time sample and are therefore not suited to real-time applications. We tested three priors: the identity, a spatial correlation (Laplacian), and a location-based prior. The location prior represented the central sulcus and was applied conjointly with the identity and Laplacian priors. It forced the activity of the sources on the central sulcus to be null, in order to impose a separation between the two hemispheres. Another approach would have been to remove these sources, but that would still allow activity to spread across both hemispheres.

The regularization parameter, which weights the priors, can considerably influence the final solution. Its selection is therefore of tremendous importance. Various methods have been proposed that are either metric based or estimation based. The metric-based methods select the lambda that best matches a metric: they compute the metric for a large number of lambdas and then select the best result. Examples are the L-curve [33], the cross-validation error [34], and criteria based on Bayesian inference [35]–[38]. In contrast, restricted maximum likelihood (ReML) [29], [39]–[41] estimates the parameter as the ratio between the data noise and the prior noise. In a BCI application, the regularization parameter can also be selected based on the error rate: the selected regularization parameter is the one giving the best error rate on the training set. In this paper, we applied this latter method to select the regularization parameter.
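As a concrete illustration of this pipeline (a sketch, not the authors' code), the following Python snippet builds an equiangular half-sphere grid of 400 radially oriented dipoles, forms the pseudoinverse T of (5) with the identity prior, and applies it per trial as in (6). The lead field G is a random placeholder standing in for the four-shell spherical model, and the grid spacing, sphere radius, and regularization value are assumptions made only for the example.

```python
"""Illustrative sketch of the Section II pipeline (not the authors' code)."""
import numpy as np


def halfsphere_dipole_grid(n_theta=20, n_phi=20, radius=0.078):
    """Equiangular grid (pole excluded) of radial dipoles on the upper
    half-sphere: n_theta x n_phi = 400 dipoles."""
    thetas = (np.arange(n_theta) + 0.5) * (np.pi / 2) / n_theta   # colatitude
    phis = np.arange(n_phi) * 2 * np.pi / n_phi                   # azimuth
    positions, orientations = [], []
    for th in thetas:
        for ph in phis:
            r = np.array([np.sin(th) * np.cos(ph),
                          np.sin(th) * np.sin(ph),
                          np.cos(th)])
            positions.append(radius * r)   # just below a hypothetical cortex sphere
            orientations.append(r)         # radial orientation
    return np.array(positions), np.array(orientations)


def inverse_operator(G, H=None, C_eta=None, lam=0.1):
    """Pseudoinverse T of (5): T = (G' Ceta^-1 G + lam^2 H'H)^-1 G' Ceta^-1.
    H defaults to the identity prior (regularized minimum norm)."""
    n, m = G.shape
    H = np.eye(m) if H is None else H
    C_eta_inv = np.eye(n) if C_eta is None else np.linalg.inv(C_eta)
    A = G.T @ C_eta_inv @ G + lam ** 2 * (H.T @ H)
    return np.linalg.solve(A, G.T @ C_eta_inv)     # m x n matrix T


# Toy usage: 28 electrodes, one 500-sample trial.
rng = np.random.default_rng(0)
positions, orientations = halfsphere_dipole_grid()
G = rng.standard_normal((28, len(positions)))      # placeholder lead field
Y = rng.standard_normal((28, 500))                 # one EEG trial
T = inverse_operator(G)                            # computed once, offline
S_hat = T @ Y                                      # (6): per-trial reconstruction
print(S_hat.shape)                                 # (400, 500)
```

A Laplacian or location prior would simply replace H in inverse_operator; T is computed once offline, and only the final matrix multiplication runs online.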


III. METHODS

A. Data

The data [42] were provided by Fraunhofer-FIRST, Intelligent Data Analysis Group, and Freie Universität Berlin, Department of Neurology, Neurophysics Group. The recordings were made with a NeuroScan amplifier and an Ag/AgCl electrode cap from ECI. Twenty-eight EEG channels were measured at positions of the international 10/20 system (F, FC, C, and CP rows, and O1, O2). Signals were recorded at 1000 Hz with a bandpass filter between 0.05 and 200 Hz.

The first dataset was part of the BCI competition 2003 [24]. It was recorded from a normal subject during a no-feedback session. The subject sat in a normal chair, relaxed, arms resting on the table, and fingers in the standard typing position on a computer keyboard. The task was to press, with the index and little fingers, the corresponding keys in a self-chosen order and timing: “self-paced key typing.” The experiment consisted of three sessions of 6 min each. All sessions were conducted on the same day with a few minutes’ break in between. Typing was done at an average speed of 1 key/s; 416 epochs of 500 ms length, each ending 130 ms before a keypress, form the final dataset. There are 208 trials of each class (left- and right-hand movement). The first 316 trials formed the training set for the competition. The last 100 served as the test set. At the time of the competition, the test set labels were unknown. To compare our method with other methods using the same data, we decided to keep this partition and to test only the final result on the test set.

The second dataset was recorded under the same conditions but from four subjects. More trials were available: 1127, 1091, 1056, and 945 trials for subjects 1–4, respectively. We had access to the full EEG recording, but we limited our analysis to the same conditions: 500 ms of data ending 130 ms before the keypress. To avoid learning effects, the trials were randomly permuted. We then extracted 100 trials to serve as a test set. The rest of the trials served as the training set. The randomization and splitting process was repeated before every test. Therefore, the test set serves primarily as a check that the method did not overlearn and to observe how the method works on new data. It cannot be used to compare methods, as every method had a different test set. All data were rereferenced to a common average reference.
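For concreteness, the sketch below shows one way to implement the epoch extraction and the random train/test split described above. It is not the original preprocessing code; the input names (raw, keypress_samples) are hypothetical, and only the window lengths and the 100-trial test split come from the text.

```python
"""Sketch of the epoching and splitting described in Section III-A."""
import numpy as np

FS = 1000                         # sampling rate (Hz)
WIN = int(0.5 * FS)               # 500-ms epochs
GAP = int(0.130 * FS)             # epochs end 130 ms before the keypress


def extract_epochs(raw, keypress_samples):
    """raw: (n_channels, n_samples) array; return (n_trials, n_channels, WIN)."""
    epochs = []
    for k in keypress_samples:
        stop = k - GAP
        start = stop - WIN
        if start >= 0:
            epochs.append(raw[:, start:stop])
    return np.stack(epochs)


def random_split(epochs, labels, n_test=100, seed=None):
    """Random permutation followed by a 100-trial test split,
    redone before every test as in the paper."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(epochs))
    test, train = order[:n_test], order[n_test:]
    return (epochs[train], labels[train]), (epochs[test], labels[test])
```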


The task elicits a Bereitschaft potential (BP) over the motor cortex area and a change in the Mu and Beta rhythms. A BP is a negative, slowly decreasing potential preceding voluntary movement [43]. The BP is more prominent over the contralateral motor cortex. The Mu and Beta rhythms are linked to movement. The Mu rhythm occurs in the 8–12-Hz frequency band over the motor and sensorimotor cortex. The Beta rhythm occurs in the 18–26-Hz frequency band and is usually associated with the Mu rhythm. A movement, the preparation for a movement, the planning of a movement, or even the imagination of a movement [44] is accompanied by a decrease in both rhythms. The decrease is most prominent over the contralateral motor cortex and is called event-related desynchronization (ERD) [45]. After the movement, both rhythms increase, or synchronize. Event-related synchronization (ERS) can also appear simultaneously with the movement in nonsolicited areas [45]. A left-hand movement can elicit ERD in the contralateral hand area but also ERS in the ipsilateral hand area or in the contralateral foot area.

B. Feature Extraction

Two kinds of features were extracted. A first set was based on the change in frequency power due to movement. We computed the power spectral density at 20 frequencies between 0 and 38 Hz with modern multitaper methods. These methods have been applied successfully in other BCI applications [13]. We used frequencies below 40 Hz only because they are the frequencies most related to movement. We did not restrict ourselves to the Mu and Beta bands, the limits of which are subject-dependent; instead, we chose a wider frequency band and let the feature selection method select the most appropriate features. A second set was based on the BP elicited by the hand movement. The slope of each trial was computed (BPs are characterized by a decreasing slope). Furthermore, as the BP should be most prominent in the last data samples, the last hundred milliseconds were split into four segments that were averaged, giving four averaged points. These points should have a lower value when a BP is elicited. A third set then conjointly used features from both previous sets.

As activity related to movement is predominant in the motor and sensorimotor cortex areas, we limited the extraction of features to electrodes above and near those areas and to dipoles in and near them (Fig. 2). The selected electrodes formed a rectangle with corners FC5–FC6–CP6–CP5. Dipoles were selected on the average of the BCI competition training set. Each class was averaged by trial and time. Dipoles showing the strongest response were selected first. Close dipoles were added by hand.
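The following sketch illustrates one possible implementation of the two feature sets. The paper does not specify the taper parameters, the exact 20 frequency bins, or the segment boundaries, so the choices below (DPSS tapers with NW = 3, frequencies between 2 and 38 Hz, four 25-ms averages over the last 100 ms) are assumptions for illustration only.

```python
"""Sketch of the frequency and Bereitschaft-potential features (Section III-B)."""
import numpy as np
from scipy.signal.windows import dpss

FS = 1000
FREQS = np.linspace(2, 38, 20)      # 20 frequencies below 40 Hz (assumed spacing)


def multitaper_psd(x, fs=FS, nw=3):
    """Average periodogram over DPSS tapers for a 1-D signal x."""
    n = len(x)
    tapers = dpss(n, NW=nw, Kmax=2 * nw - 1)          # (K, n)
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spectra.mean(axis=0)


def frequency_features(x):
    """PSD sampled at the 20 chosen frequencies."""
    freqs, psd = multitaper_psd(x)
    return np.interp(FREQS, freqs, psd)


def bp_features(x, fs=FS):
    """Bereitschaft-potential features: overall slope + four averaged points
    over the last 100 ms."""
    t = np.arange(len(x)) / fs
    slope = np.polyfit(t, x, 1)[0]
    last = x[-int(0.1 * fs):]                          # last 100 ms
    means = last.reshape(4, -1).mean(axis=1)           # four 25-ms averages
    return np.concatenate(([slope], means))


def trial_features(trial):
    """Concatenate both feature sets over all channels (or sources) of one
    trial of shape (n_channels, n_samples)."""
    feats = [np.concatenate((frequency_features(ch), bp_features(ch)))
             for ch in trial]
    return np.concatenate(feats)
```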

C. Feature Selection

When processing reconstructed sources instead of electrodes, we faced a dramatic increase in the number of available features. Therefore, we applied a ranking method to select the most appropriate features. Ranking methods are model-independent and enable a good interpretation of the selected features.


Feature ranking was done using mutual information (MI) [46]. MI is a concept from information theory. It measures the relationship between features, or between a feature (or set of features) and the classification. Unlike correlation, the measure computed with MI is not limited to linear relationships. Furthermore, MI makes it possible to compute the relationship between a selected set of features and the classification, rather than one feature at a time. After feature ranking, the best features were fed to the classifier. The number of features to send to the classifier was determined from the training set. The MI was estimated with a method based on nearest neighbours [47].

D. Classification

The data were divided into a training set and a test set. Ideally, we would further subdivide the training set into a smaller training set and a validation set, to have three separate sets. The training set would serve to estimate all the parameters and to train the classifier. The classification of the validation set would be based upon the results of the training set. The validation set would make it possible to choose among major options such as the number of features, the number of dipoles, and the priors. Finally, once every parameter was defined, the test set would give the result of the method on totally new data. Our training set was too small to be further divided into two sets of significant size. Instead, we split the training set into ten subsets and used a 10-fold cross validation: nine of the ten subsets were used to train the method and the last one was used as a validation set. We did a training and a validation for each of the ten subsets. The final training and validation errors are the averages of the errors over the subsets.

The classifier was a linear proximal SVM (pSVM) [48]. In a classical SVM, trials are classified by assigning them to one of two disjoint half-spaces. The SVM looks for the separating hyperplane with the greatest distance from both classes, i.e., from the closest point of each class [49]. In a pSVM, trials are classified by assigning them to the closest of two parallel planes that are pushed as far apart as possible [48]. A pSVM is nearly as accurate as an SVM but faster. Other SVMs, linear or not, could also be used for our application. They would probably slightly improve the final classification, but at a greater computational cost. Both stages are sketched in code at the end of this section.

E. Scalp Electrode Data

For comparison, we classified the electrode potentials with the same methods. After rereferencing to the common average reference, the features were extracted based on frequency band powers and the Bereitschaft potential. The best features were then selected with MI before being fed to the classifier. Three BCIs based on the three feature sets, as well as their reconstructed-source-based twins, were built (Table I).
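The sketch below illustrates the selection and classification stages of Sections III-C and III-D. Mutual information is estimated with scikit-learn's k-nearest-neighbour estimator, which follows the approach of Kraskov et al. [47], and the proximal SVM follows the linear-system formulation of Fung and Mangasarian [48] as we read it; treat both as illustrations rather than the authors' implementation.

```python
"""Sketch of MI-based feature ranking and a linear proximal SVM."""
import numpy as np
from sklearn.feature_selection import mutual_info_classif


def rank_features(X, y, n_keep):
    """Rank features by mutual information with the class labels and
    return the indices of the n_keep best ones."""
    mi = mutual_info_classif(X, y)           # k-NN based MI estimate
    return np.argsort(mi)[::-1][:n_keep]


class ProximalSVM:
    """Linear proximal SVM: classify by proximity to two parallel planes,
    obtained by solving a single linear system."""
    def __init__(self, nu=1.0):
        self.nu = nu

    def fit(self, X, y):                      # y in {-1, +1}
        E = np.hstack([X, -np.ones((len(X), 1))])
        D = np.diag(y.astype(float))
        A = np.eye(E.shape[1]) / self.nu + E.T @ E
        sol = np.linalg.solve(A, E.T @ D @ np.ones(len(X)))
        self.w, self.gamma = sol[:-1], sol[-1]
        return self

    def predict(self, X):
        return np.sign(X @ self.w - self.gamma)


# Usage on already-extracted features (hypothetical variable names):
# idx = rank_features(X_train, y_train, n_keep=140)
# clf = ProximalSVM(nu=1.0).fit(X_train[:, idx], y_train)
# accuracy = (clf.predict(X_test[:, idx]) == y_test).mean()
```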


TABLE I SIX BCIS

TABLE II RESULTS FOR THE THREE METHODS BASED ON ELECTRODE POTENTIALS FOR THE BCI COMPETITION DATASET

Fig. 3. Training, validation, and test-error rates as a function of the number of frequency features. The best value is selected from the training set.

TABLE III RESULTS FROM THE FOUR SUBJECTS FOR THE JOINT USE OF BOTH FEATURE SETS

IV. RESULTS

The classification rate was used to assess and compare the performance of the BCIs presented in the previous section. For the four subjects’ dataset, the test set was different for every test; therefore, it cannot be used to compare the results between methods, but rather to check that we did not overlearn and to see how the method works with new data. For the BCI competition data, we also concentrated our analysis on the validation set results because they are computed on more data than those of the test set.

A. Electrode Potentials

We computed the results of the three electrode-potential BCIs on the BCI competition data and on the four subjects’ datasets. For the BCI competition data, the results from the Bereitschaft potential and frequency feature sets were nearly equivalent (Table II). The frequency features used by Frel were more representative of the tasks on the test set: they yielded a better classification on the test set than on the training set (Fig. 3). The joint feature set improved on the results of both single-feature sets. The maximum validation rate was obtained with 140 features, of which 18% came from the Bereitschaft potential set, which accounts for 20% of all features. For the four subjects’ datasets, we tested only the method based on the joint use of both feature sets, as it gave the best results on the BCI competition dataset (Table III). The method gave very good results for subject 1, good results for subjects 2 and 3, and poor results for subject 4.

B. Imaging Methods

The inverse solution with priors was tested for a head model with 400 dipoles distributed on the cortical surface. For each prior, the best lambda was computed on the training set. For all tests, the noise covariance matrix was an identity matrix. Noise covariance matrices estimated from the first 200 ms of the training data

always gave worse results in preliminary tests and were therefore not used.

For the BCI competition, the results from reconstructed sources showed an improvement over the results from EEG data for most priors and methods (Table IV). Only Fris with the Laplacian and central sulcus priors, and Frebis with the identity and central sulcus priors, performed worse than their counterparts on the validation set. The identity prior showed the greatest improvement. However, this improvement was not significant: only the comparison FreBel versus FreBis had a p-value under 0.1. Again, the conjoint use of features gave the best results. For the four subjects’ dataset, we tested the identity and Laplacian priors conjointly with the central sulcus prior. Other priors were not tested. As for the electrode data, we only computed the results for the joint set of features. The conjoint use of the identity and central sulcus priors improved all the results (Table V), with a significant improvement for subjects 2 and 4 (p-value < 0.05). In contrast, the conjoint use of the Laplacian and central sulcus priors worsened the results.

We compared the bit rates of the BCIs based on electrodes and on reconstructed sources using the definition of Wolpaw et al. [50]

$\text{bits/trial} = \log_2(N) + p\log_2(p) + (1 - p)\log_2\left[(1 - p)/(N - 1)\right]$


TABLE IV CLASSIFICATION RATE FOR BCI COMPETITION DATASET

TABLE V CLASSIFICATION RATE FOR THE FOUR SUBJECTS’ DATASETS

TABLE VI BIT RATE COMPARISON BETWEEN ELECTRODE-BASED AND RECONSTRUCTED SOURCE-BASED BCIS FOR CONJOINT SET OF FEATURES. SOURCE RECONSTRUCTION BASED ON BEST PRIOR

where p is the probability that the desired selection will actually be selected and N is the number of possible selections. The bit rate was computed on the mean value of the validation set. The comparison between the bit rates of the BCIs with the most significant improvements showed an increase in information rate (Table VI).
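As a small helper, the bit-rate formula can be computed as follows; the example values (N = 2 classes, p = 0.85) are illustrative and not taken from the tables.

```python
"""Wolpaw bit-rate helper for comparisons such as Table VI."""
from math import log2


def bits_per_trial(p, n_classes=2):
    """bits/trial = log2(N) + p*log2(p) + (1 - p)*log2((1 - p)/(N - 1))."""
    b = log2(n_classes)
    if p > 0:
        b += p * log2(p)
    if p < 1:
        b += (1 - p) * log2((1 - p) / (n_classes - 1))
    return b


print(round(bits_per_trial(0.85), 3))   # ~0.39 bits/trial for a binary choice
```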

V. DISCUSSION

We have presented a source reconstruction method to improve the classification rate of a BCI application. This method was based on a spherical head model and a simple source distribution. The comparison between results from electrode potentials and from reconstructed sources showed an improvement of the classification rate due to source reconstruction. Furthermore, the results on the BCI competition dataset were equivalent to the best results on the same data [15], [20]. As the main goal of our BCIs was to compare techniques using raw electrodes against those using reconstructed sources, the proposed BCIs based on electrode potentials were not built using

the best known methods, nor were all the parameters selected in an optimal way. However, the three electrode-based BCIs would have been ranked between the second and the sixth position of the BCI competition 2003 had they competed (assertion based on the classification rate on the test set) [24].

The head model was based on a spherical approximation and 400 dipoles. The advantages of the spherical approximation over boundary element or finite-element methods are the simplicity of the computation and the absence of any need for an MRI of the subject’s head. A brain atlas could be used instead of the true MRI [13], but it is also an approximation and the lead field is more difficult to compute. More complex models with more dipoles or based on an atlas [13], [20] would have a better spatial accuracy, but this is only needed if a better localization is required for selection or classification. In such a case, the importance of the number of electrodes must be assessed. Indeed, accurate source localization requires uniform sampling of the scalp potential distribution [17]. Several studies recommend an interelectrode distance of around 2–3 cm to avoid a distortion of the scalp potential (see [17] for a review). Such a high number of electrodes is not mandatory as long as we are not looking for accurate source localization. The spherical head model, as well as more complex head models, needs to be computed only once when using an imaging method. Therefore, the source reconstruction requires only a matrix multiplication and can be applied in a real-time BCI.

The imaging method was tested with different priors. The simplest priors gave the best results. For the BCI competition dataset, the simplest prior was the identity prior. While the four subjects’ dataset was tested with only two priors, the best results were obtained with the conjoint use of the identity and central sulcus priors. Therefore, a simpler prior such as the identity prior alone should still improve the results. More advanced priors, e.g., spatial correlation, could give better results with more complex head models, as in Grave de Peralta et al. [13] and Congedo et al. [20]. Future work will explore the sensitivity of the classification rate to the complexity of the head model.

The regularization parameter depends on the head model, the recording apparatus, and the noise in the data. The head model, either spherical or based on an atlas, must be computed only once for a given recording configuration. Therefore, if there are no changes in these parameters, the regularization parameter should not change. Indeed, the computed regularization parameter was very consistent across all our experiments.

Spatial filter methods like common spatial patterns [14], [51] and common spatial subspace decomposition [15], [52] have been proposed and successfully applied to BCI. Such methods also try to improve the spatial accuracy of EEG. Although, in our results, the identity prior is not very specific and our head model is not very accurate, our method should not be regarded as a spatial filter. First, the inverse method transforms the data into a space of higher dimensionality, preserving the whole signal; neurophysiological information can then be applied to extract relevant features. Second, while spatial filters are based on statistical properties of the signal, inverse solutions are based on electroneurophysiological properties. Even our simple head model is based on properties like the scalp, skull, CSF, and brain conductivities. Third, the head model must be computed only once and not for every subject and experiment. As both approaches give different results, they can be used conjointly [20].

The MI feature selection method is based on the features only. A dipole selection method could be more suited to a source reconstruction approach. Congedo et al. [20] and Grave de Peralta et al. [13] apply such methods. In both cases, the selected dipoles correspond to functionally relevant areas. The use of such a method could improve the final results.

VI. CONCLUSION

In this paper, we proposed to include a source reconstruction method in a BCI framework to add information and improve classification. The source reconstruction method was based on an imaging approach and could be solved with one matrix multiplication, enabling real-time application. A simple spherical head model with 400 dipoles provided the electrophysiological information needed to reconstruct the sources. The imaging approach was tested with different priors on dataset IV of the BCI competition 2003 and on a four-subject dataset recorded under the same conditions. Six BCIs were built based on the same feature extraction and selection methods and the same classifier. They differ in the extracted features and in the original signal: three BCIs used the raw electrode signals directly, while the other three used the reconstructed sources. The comparison of the final results showed an improvement due to source reconstruction for the simplest priors. Furthermore, the improved results were equivalent to the best known results on the same datasets.

When reconstructing the sources, the complexity of the head model is of some importance. Using a simple spherical head model, we already observed an improvement in classification. The significance of the head model for the final classification results will be investigated in a future study. The priors are also of importance and should be carefully selected. In addition, due to the increased number of inferred sources, we had to deal with an increased number of features. We proposed here to select the best features based on their ranking with mutual information. However, a selection method working with dipoles could be more suited to a source reconstruction approach. Source reconstruction methods are a new building block that can be included in many current BCI implementations and can be used in tandem with most other BCI methods.


ACKNOWLEDGMENT The authors would like to thank C. Krier for her implementation of the MI estimator. The authors would also like to thank Prof. K. R. M¨uller and Dr. B. Blankertz from the FraunhoferFIRST, Intelligent Data Analysis Group, and Freie Universit¨at Berlin, Department of Neurology, Neurophysics Group for giving access to the data. The authors also wish to gratefully acknowledge R. Grave de Peralta, M. Verleysen and L. Jacques for fruitful discussions. Finally, the authors thank the anonymous reviewers for their comments which clarified and improved the present paper. REFERENCES [1] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, “Brain–computer interfaces for communication and control,” Clin. Neurophysiol., vol. 113, pp. 767–791, 2002. [2] M. A. Lebedev and M. A. Nicolelis, “Brain–machine interfaces: Past, present and future,” Trends Neurosci., vol. 29, no. 9, pp. 536–546, 2006. [3] S.-S. Yoo, T. Fairneny, N.-K. Chen, S.-E. Choo, L. P. Panych, H. Park, S.-Y. Lee, and F. A. Jolesz, “Brain–computer interface using fMRI: Spatial navigation by thoughts,” Neuroreport, vol. 15, no. 10, pp. 1591–1595, 2004. [4] N. Weiskopf, K. Mathiak, S. W. Bock, F. Scharnowski, R. Veit, W. Grodd, R. Goebel, and N. Birbaumer, “Principles of a brain–computer interface (BCI) based on real-time functional magnetic resonance imaging (fMRI),” IEEE Trans. Biomed. Eng., vol. 51, no. 6, pp. 966–970, Jun. 2004. [5] J. R. Wolpaw and D. J. McFarland, “Control of a two-dimensional movement signal by a noninvasive brain–computer interface in humans,” PNAS, vol. 101, no. 51, pp. 17849–17854, Dec. 2004. [6] M. Cheng, X. Gao, S. Gao, and D. Xu, “Design and implementation of a Brain–computer interface with high transfer rates,” IEEE Trans. Biomed. Eng., vol. 49, no. 10, pp. 1181–1186, Oct. 2002. [7] R. Scherer, G. R. Muller, C. Neuper, B. Graimann, and G. Pfurtscheller, “An asynchronously controlled EEG-based virtual keyboard: Improvement of the spelling rate,” IEEE Trans. Biomed. Eng., vol. 51, no. 6, pp. 979–984, Jun. 2004. [8] G. Dornhege, B. Blankertz, G. Curio, and K.-R. M¨uller, “Boosting bit rates in noninvasive EEG single-trial classifications by features combination and multiclass paradigms,” IEEE Trans. Biomed. Eng., vol. 51, no. 6, pp. 993–1002, Jun. 2004. [9] G. Pfurtscheller, C. Neuper, C. Guger, H. W., H. Ramoser, A. Schl¨ogl, B. Obermaier, and M. Pregenzer, “Current trends in graz brain–computer interface (BCI) research,” IEEE Trans. Rehabil. Eng., vol. 8, no. 2, pp. 216–219, Jun. 2000. [10] T. Hinterberger, A. K¨ubler, J. Kaiser, N. Neumann, and N. Birbaumer, “A brain–computer-interface (BCI) for the locked-in: Comparison of different EEG classifications for the thought translation device (TTD),” Clin. Neurophysiol., vol. 114, pp. 416–425, 2003. [11] V. Bostanov, “BCI competition 2003—data set Ib and IIb: Feature extraction from event-related brain potentials with the continuous wavelet transform and the t-value scalogram,” IEEE Trans. Biomed. Eng., vol. 51, no. 6, pp. 1057–1061, Jun. 2004. [12] S. Lemm, C. Sch¨afer, and G. Curio, “BCI competition 2003—data set III: Probalistic modeling of sensorimotor µ rhythms for classification of imaginary hand movements,” IEEE Trans. Biomed. Eng., vol. 51, no. 6, pp. 1077–1080, Jun. 2004. [13] R. Grave de Peralta Menendez, S. L. Gonz´alez Andino, L. Perez, P. W. Ferrez, and J. d. R. Mill´an, “Non-invasive estimation of local field potentials for neuroprosthesis control,” Cogn. Process, vol. 6, pp. 59–64, 2005. [14] H. Ramoser, J. 
M¨uller-Gerking, and G. Pfurtscheller, “Optimal spatial filtering of single trial EEG during imagined hand movement,” IEEE Trans. Rehabil. Eng., vol. 8, no. 4, pp. 441–446, Dec. 2000. [15] Y. Wang, Z. Zhang, Y. Li, X. Gao, S. Gao, and F. Yang, “BCI competition 2003—Data set IV: An algorithm based on CSSD and FDA for classifying single-trial EEG,” IEEE Trans. Biomed. Eng., vol. 51, no. 6, pp. 1081– 1086, Jun. 2004. [16] S. Baillet, J. C. Mosher, and R. M. Leahy, “Electromagnetic brain mapping,” IEEE Signal Process. Mag., vol. 18, no. 6, pp. 14–30, Nov. 2001.


[17] C. M. Michel, M. M. Murray, G. Lantz, S. Gonzalez, L. Spinelli, and R. Grave de Peralta, “EEG source imaging,” Clin. Neurophysiol., vol. 115, pp. 2195–2222, 2004. [18] L. Qin, L. Ding, and B. He, “Motor imagery classification by means of source analysis for brain–computer interface applications,” J. Neural Eng., vol. 1, pp. 133–141, 2004. [19] B. Kamousi, Z. Liu, and B. He, “An EEG inverse solution based brain– computer interface,” Int. J. Bioelectromag., vol. 7, no. 2, pp. 292–294, 2005. [20] M. Congedo, F. Lotte, and A. L´ecuyer, “Classification of movement intention by spatially filtered electromagnetic inverse solutions,” Phys. Med. Biol., vol. 51, pp. 1971–1989, 2006. [21] J. C. Mosher, R. M. Leahy, and P. S. Lewis, “EEG and MEG: Forward solutions for inverse methods,” IEEE Trans. Biomed. Eng., vol. 46, no. 3, pp. 245–259, Mar. 1999. [22] R. Grave de Peralta Menendez, S. L. Gonz´alez Andino, S. Morand, C. M. Michel, and T. Landis, “Imaging the electrical activity of the brain: ELECTRA,” Hum. Brain Mapp., vol. 9, pp. 1–12, 2000. [23] R. Grave de Peralta Menendez, M. M. Murray, C. M. Michel, R. Martuzzi, and S. L. Gonz´alez Andino, “Electrical neuroimaging based on biophysical constraints,” NeuroImage, vol. 21, pp. 527–539, 2004. [24] B. Blankertz, K.-R. M¨uller, G. Curio, T. M. Vaughan, G. Schalk, J. R. Wolpaw, A. Schl¨ogl, C. Neuper, G. Pfurtscheller, T. Hinterberger, M. Schr¨oder, and N. Birbaumer, “The BCI competition 2003: Progress and perspectives in detection and discrimination of EEG single trials,” IEEE Trans. Biomed. Eng., vol. 51, no. 6, pp. 1044–1051, Jun. 2004. [25] P. L. Nunez and R. B. Silberstein, “On the relationship of synaptic activity to macroscopic measurements: Does co-registration of EEG and fMRI make sense?,” Brain Topogr., vol. 13, no. 2, pp. 79–96, 2000. [26] P. Berg and M. Scherg, “A fast method for forward computation of multiple-shell spherical head models,” Electroencephalogr. Clin. Neurophysiol., vol. 90, pp. 58–64, 1994. [27] A. N. Tikhonov and V. Y. Arsenin, Solutions of Ill-Posed Problems. New-York: Wiley, 1977. [28] J. C. Mosher, S. Baillet, and R. M. Leahy, “Equivalence of linear approaches in bioelectromagnetic inverse solutions,” in Proc. 2003 IEEE Workshop Stat. Signal Process., St-Louis, MO, 2003, pp. 294–297. [29] C. Phillips, J. Mattout, M. D. Rugg, P. Maquet, and K. J. Friston, “An empirical Bayesian solution to the source reconstruction problem in EEG,” NeuroImage, vol. 24, pp. 997–1011, 2005. [30] R. D. Pascual-Marqui, C. M. Michel, and D. Lehmann, “Low resolution electromagnetic tomography: A new method for localizing electrical activity in the brain,” Int. J. Psychophysiol., vol. 18, pp. 49–65, 1994. [31] S. Baillet and L. Garnero, “A Bayesian approach to introducing anatomofunctional priors in the EEG/MEG inverse problem,” IEEE Trans. Biomed. Eng., vol. 44, no. 5, pp. 374–385, May 1997. [32] T. I. Alecu, S. Voloshynovskiy, and T. Pun, “Regularized two-step brain activity reconstruction from spatio-temporal EEG data,” in Proc. Image Reconstr. Incomplete Data III, Denver, CO, SPIE Int. Symp. Opt. Sci. Technol. Aug. 2004. [33] P. C. Hansen, “Analysis of discrete ill-posed problems by means of the L-curve,” SIAM Rev., vol. 34, pp. 561–580, 1992. [34] R. D. Pascual-Marqui, “Review of methods for solving the EEG inverse problem,” Int. J. Bioelectromag., vol. 1, no. 1, pp. 75–86, 1999. [35] N. J. Trujillo-Barreto, E. Aubert-V´azquez, and P. A. Vald´es-Sosa, “Bayesian model averaging in EEG/MEG imaging,” NeuroImage, vol. 21, pp. 
1300–1319, 2004. [36] J. Daunizeau, C. Grova, J. Mattout, G. Marrelec, D. Clonda, B. Goulard, M. Pelegrini-Issac, J.-M. Lina, and H. Benali, “Assessing the relevance of fMRI-based prior in the EEG inverse problem: A Bayesian model comparison approach,” IEEE Trans. Signal Process., vol. 53, no. 9, pp. 3461– 3472, Sep. 2005. [37] O. Yamashita, A. Galka, T. Ozaki, R. Biscay, and P. Valdes-Sosa, “Recursive penalized least squares solution for dynamical inverse problems of EEG generation,” Hum. Brain Mapp., vol. 21, pp. 221–235, 2004. [38] A. Galka, O. Yamashita, T. Ozaki, R. Biscay, and P. Vald´es-Sosa, “A solution to the dynamical inverse problem of EEG generation using spatiotemporal Kalman filtering,” NeuroImage, vol. 23, pp. 435–453, 2004. [39] H. Patterson and R. Thompson, “Recovery of inter-block information when block sizes are unequal,” Biometrika, vol. 58, no. 3, pp. 545–554, Dec. 1971. [40] D. A. Harville, “Bayesian inference for variance components using only error constrasts,” Biometrika, vol. 61, no. 2, pp. 383–385, Aug. 1974.


[41] C. Phillips, M. D. Rugg, and K. J. Friston, “Systematic regularization of linear inverse solutions of the EEG source localization problem,” NeuroImage, vol. 17, pp. 287–301, 2002. [42] B. Blankertz, G. Curio, and K.-R. M¨uller, “Classifying single trial EEG: Towards brain–computer interfacing,” in Proc. Adv. Neural Inf. Proc. Syst. (NIPS 01), 2002, T. G. Diettrich, S. Becker, and Z. Ghahramani, Eds., vol. 14. [43] E. Niedermeyer, “The normal EEG of the waking adult,” in Electroencephalography Basic Principles, Clinical Applications, and Related Fields, E. Niedermeyer and F. Lopes da Silva, Eds. Baltimore, MD: Williams and Wilkins, 1999. [44] G. Pfurtscheller and C. Neuper, “Motor imagery activates primary sensorimotor area in man,” Neurosci. Lett., vol. 239, pp. 65–68, 1997. [45] G. Pfurtscheller, “Event-related desynchronization (ERD) and eventrelated synchronization (ERS),” in Electroencephalography Basic Principles, Clinical Applications, and Related Fields, E. Niedermeyer and F. Lopes da Silva, Eds. Baltimore, MD: Williams and Wilkins, 1999. [46] F. Rossi, A. Lendasse, D. Francois, V. Wertz, and M. Verleysen, “Mutual information for the selection of relevant variables in spectrometric nonlinear modelling,” Chemom. Intell. Lab. Syst., vol. 80, pp. 215–226, 2006. [47] A. Kraskov, H. St¨ogbauer, and P. Grassberger, “Estimating mutual information,” Phys. Rev. E, Stat. Phys. Plasmas Fluids Relat. Interdiscip. Top., vol. 69, no. 6, pp. 066138–226, 2004. [48] G. Fung and O. L. Mangasarian, “Proximal support vector machine classifiers,” in Proc. KDD-2001: Knowl. Discov. Data Min., F. Provost and R. Srikant, Eds. San Francisco, CA, Aug. 26–29, 2001, New York: Asscociation for Computing Machinery, 2001, pp. 77–86. [49] C. J. C. Burges, “A tutorial on support vector machines for pattern recognition,” Data Min. Knowl. Discov., vol. 2, no. 2, pp. 121–167, 1998. [50] J. R. Wolpaw, N. Birbaumer, W. J. Heetderks, D. J. McFarland, P. H. Peckham, G. Schalk, E. Donchin, L. A. Quatrano, C. J. Robinson, and T. M. Vaughan, “Brain–computer interface technology: A review of the first international meeting,” IEEE Trans. Rehabil. Eng., vol. 8, no. 2, pp. 164– 173, Jun. 2000. [51] G. Blanchard and B. Blankertz, “BCI competition 2003–data set IIa: Spatial patterns of self-controlled brain rhythm modulations,” IEEE Trans. Biomed. Eng., vol. 51, no. 6, pp. 1062–1066, Jun. 2004. [52] Y. Wang, P. Berg, and M. Scherg, “Common spatial subspace decomposition applied to analysis of brain responses under multiple task conditions: A stimulation study,” Clin. Neurophysiol., vol. 110, pp. 604–614, 1999.

Quentin Noirhomme received the Graduate degree in applied mathematics engineering and the Ph.D. degree from the Université catholique de Louvain (UCL), Louvain-la-Neuve, Belgium, in 2001 and 2006, respectively. His Ph.D. dissertation was on the localization of brain functions using transcranial magnetic stimulation (TMS) and electroencephalography (EEG). In collaboration with the team of Prof. E. Olivier [UCL/Laboratoire de Neurophysiologie (NEFY)], he developed software to localize the TMS point on a subject's MRI in real time, thus enabling fast and accurate localization of the stimulation. He then turned to developing a brain–computer interface (BCI) based on a neurophysiological prior: the reconstructed sources of brain activity. From 2006 to 2007, he was a Postdoctoral Researcher in the Electrical NeuroImaging Group, Geneva University Hospitals, where he was engaged in research on a steady-state visually evoked potential BCI, which enabled the control of a virtual wheelchair. He was also involved in the detection of very high frequency oscillations in the EEG spectrum for diverse applications. He is currently with the Coma Science Group, University of Liège, Liège, Belgium. His current research interests include the processing of brain functional imaging (EEG, PET, fMRI, and TMS) and BCI.


Richard I. Kitney (M’99) received the Ph.D. degree from the Imperial College of Science Technology and Medicine, London, U.K., in 1972. He has been engaged in research on the study of arterial disease, cardiorespiratory control, biomedical image processing related to magnetic resonance imaging and ultrasound, the development of picture archiving and communications systems (PACS), and 3-D visualization techniques. He has worked extensively in the United States and has been a Visiting Professor at Massachusetts Institute of Technology (MIT) since 1991. He is currently a Professor of BioMedical Systems Engineering and Dean of the Faculty of Engineering in the Department of Bioengineering, Imperial College London, London, U.K., of which he is the Founding Head. He is a Co-Director of the Imperial College—MIT International Consortium for Medical Information Technology. His current research interests include virtual reality, visualisation, image and signal processing in either technically focused projects or medically focused projects. Prof. Kitney became a Fellow of the World Technology Network in 1999 for his innovative work in the fields of health and medicine. He became an Academician of the International Academy of BioMedical Engineering in September 2003 (this is the highest honor bestowed by the International Federation of BioMedical Engineering Societies). He recently became a Fellow of the College of Fellows of the American Institute for Medical and Biological Engineering (AIMBE), and a Fellow of the City and Guilds of London Institute (FCGI). In June 2001, he was the recipient of the Order of the British Empire (OBE) in the Queen’s Birthday Honours List for services to Information Technology in Healthcare.


Benoît Macq (S’83–M’84–SM’01) received the electrical engineering and Ph.D. degrees from the Université catholique de Louvain (UCL), Louvain-la-Neuve, Belgium, in 1984 and 1989, respectively. He did his Ph.D. thesis on perceptual coding for digital TV. He is a General Coordinator of the SIMILAR European Network of Excellence on Multimodal Interfaces and of the European Integrated Project on Digital Cinema called EDCine. He is the founder of seven spinoff companies. He is currently a Full Professor in the Communications and Remote Sensing Laboratory, Université catholique de Louvain (UCL), leading a team of 40 researchers and engineers involved in image processing, image communications, and multimodal interactions. His main research interests are image compression, image watermarking, and image analysis for medical and immersive communications. Prof. Macq is a member of the Image and Multidimensional Digital Signal Processing Technical Committee (IMDSP-TC) of the IEEE. He has been appointed as the General Chairman of ICIP 2011. He has been an Associate Editor of the IEEE TRANSACTIONS ON IMAGE PROCESSING and the IEEE TRANSACTIONS ON MULTIMEDIA, and a Guest Editor of Signal Processing: Image Communication, the PROCEEDINGS OF THE IEEE, and the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY.