G. Blanchard, B. Blankertz
Fraunhofer FIRST, IDA group

First, we filtered the signals with a linear band-pass filter for the beta and mu frequency bands (a preliminary study was done to check these values and to determine more precisely which spectral bands were the most discriminative for each subject). We then applied Common Spatial Pattern (CSP) analysis to both band-passed signals, discriminating only between the top and bottom classes. For each trial, this CSP analysis was performed only on the segment of recorded signal in the interval [1 s, 3 s], the period during which the cursor is visible on the screen (note: our reference time t = 0 for each trial is the time at which the target appears on the screen).

It turned out that, for all subjects, the CSPs of only one of the two classes were significant (as measured by the associated eigenvalue). For subjects AA and CC, the top class produced the significant CSPs; for subject BB, it was the bottom class. For each subject we kept the first two CSPs of the significant class, so in the end we retained four CSPs per subject (two for each relevant frequency band).

The features extracted for classifier training were the following: for each trial, we projected the signal in the interval [1 s, 4 s] onto the four CSPs. For each of these four projected signals, Fourier power coefficients were computed over overlapping windows of size 0.5 s, restricted to the frequency band associated with the CSP in question. This yielded a feature vector of size 132. A regularized linear discriminant was then trained on the full training set (with the regularization parameter determined by cross-validation), and this classifier was applied to the test set.

Remarks.
(1) Since the classifier depends only on the training data and gathers no information from the test data to update the classification function, it is causal.
(2) The version used here is an offline classifier (the whole recording of a trial is needed to perform the classification). However, we also built an online version that mimics actual feedback. It uses the same features as above, but computed on a single window of size 0.5 s. A linear classifier was trained only on the top and bottom classes, and its real-valued output was interpreted as a cursor movement (upwards or downwards, allowing these increments to be real numbers). By summing the classifier outputs over the different time windows of a trial and picking the class based on this summed value, we achieved comparable, albeit slightly worse, classification rates (in fact, only about 2 to 3% worse).
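
To make the preprocessing step concrete, here is a minimal Python sketch of band-pass filtering followed by CSP extraction via a generalized eigendecomposition. The array shapes, filter order, trace normalization, and function names are illustrative assumptions, not the code used for the submission.

import numpy as np
from scipy.linalg import eigh
from scipy.signal import butter, filtfilt

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter along the last axis."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def csp(trials_top, trials_bottom):
    """CSP via the generalized eigenproblem  S_top w = lambda (S_top + S_bot) w.

    Each trial is a (channels x samples) array, already band-passed and
    restricted to the [1 s, 3 s] segment.  Columns of the returned matrix
    are spatial filters; eigenvalues near 1 (resp. 0) mark filters whose
    output variance is large for the top (resp. bottom) class.
    """
    def mean_cov(trials):
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)

    s_top, s_bot = mean_cov(trials_top), mean_cov(trials_bottom)
    eigvals, eigvecs = eigh(s_top, s_top + s_bot)   # ascending eigenvalues
    return eigvals, eigvecs

Keeping "the first two CSPs of the significant class" then amounts to taking the two columns with the most extreme eigenvalues on the relevant side of the spectrum, once per frequency band.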
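The feature and classifier stage could look as follows. The window overlap, the FFT binning, and the use of shrinkage LDA as a stand-in for the regularized linear discriminant are assumptions; the text does not specify how the 132 dimensions decompose across windows and frequency bins.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GridSearchCV

def window_band_power(proj, fs, band, win=0.5, step=0.25):
    """Fourier power coefficients of one CSP-projected signal, computed on
    overlapping windows of length `win` seconds and restricted to `band` (Hz)."""
    n, hop = int(win * fs), int(step * fs)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    feats = [np.abs(np.fft.rfft(proj[i:i + n]))[mask] ** 2
             for i in range(0, proj.size - n + 1, hop)]
    return np.concatenate(feats)

# Shrinkage LDA as a proxy for the regularized linear discriminant, with the
# regularization parameter chosen by cross-validation as described above.
search = GridSearchCV(
    LinearDiscriminantAnalysis(solver="lsqr"),
    param_grid={"shrinkage": ["auto", 0.01, 0.1, 0.5]},
    cv=5)
# search.fit(X_train, y_train); y_pred = search.predict(X_test)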
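Finally, the online variant of remark (2) reduces to accumulating the real-valued classifier output window by window; in the sketch below, `w` and `b` denote the weights and bias of the single-window linear classifier and are placeholders.

import numpy as np

def simulate_trial(window_features, w, b):
    """Mimic online feedback: each 0.5 s window moves the cursor by the signed
    classifier output; the trial label is the sign of the accumulated sum."""
    cursor = 0.0
    for feat in window_features:        # one feature vector per window
        cursor += float(w @ feat + b)   # real-valued increment, up or down
    return "top" if cursor > 0 else "bottom"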