- Final Results -
[ remarks | winners | true labels | organizers ]
[ tübingen:Ia | tübingen:Ib | albany:IIa | albany:IIb | graz:III | berlin:IV ]
The announcement and the data sets of the BCI Competition II can be found
here.
For the potential organization of further BCI competitions, it would be
very helpful to get feedback, criticism, and suggestions about this
competition. We are interested in hearing from participants about their
experiences, as well as from non-participants about why they refrained
from taking part. Feedback may concern general issues or specific data
sets (e.g., pros and cons of each data set).
Write your feedback to Benjamin Blankertz
<benjamin.blankertz@tu-berlin.de>
[ top ]
There was a further change in the evaluation of
data set IIb.
This affected the ranking of submissions such that there are now
five winning teams!
All results are now final (May 2nd, 2003).
[ top ]
The results should not be taken too seriously. Of course, they do not
provide an objective ranking of quality:
- There is great variance in how much effort contributors put into
preparing their submissions.
- Given the time pressure of an approaching deadline, there are many
possible pitfalls in processing the test data that may lead to
bad or even "random" results. [This is the voice of almost-experience
talking, bb.]
- When test sets (and the number of classes) are relatively small,
luck plays a big role. Consider a fictitious example: suppose that in a
binary problem there are 15 methods which correctly classify 60% of the
ideal set of all trials and give random output on the remaining 40%.
The expected accuracy of each of these methods is 80%, but on a fixed
test set of 100 trials, the expected difference between the best and the
worst result is greater than 10% (assuming independence between
methods and test trials); the small simulation below illustrates this.
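A minimal Monte Carlo check of this fictitious example (plain NumPy; the
numbers 15 methods, 60 "sure" trials, and 100 test trials are taken from
the text above):

# Monte Carlo check of the fictitious example: 15 methods, each always correct
# on 60 of 100 test trials and purely random on the remaining 40.
import numpy as np

rng = np.random.default_rng(0)
n_methods, n_sure, n_random, n_runs = 15, 60, 40, 100_000

correct = n_sure + rng.binomial(n_random, 0.5, size=(n_runs, n_methods))
acc = correct / (n_sure + n_random)                  # accuracy on one sampled test set

spread = acc.max(axis=1) - acc.min(axis=1)           # best minus worst method per test set
print(f"mean accuracy:       {acc.mean():.3f}")      # close to 0.80
print(f"mean best-worst gap: {spread.mean():.3f}")   # a bit above 0.10, i.e. > 10%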
Nevertheless, we hope that
- something can be learned from this competition through the
presentation of methods that worked on real-world problems,
- contributors who did not have much success here can do a failure
analysis (we provided the labels of the test sets for that purpose)
and gain experience for the future,
- and, last but not least, that it was fun.
Watch out for the article on the competition in
IEEE Transactions on Biomedical Engineering,
in which each winning team will present a detailed description of their
algorithm.
[ top ]
Data set Ia
[Tübingen, ‹self-regulation of SCPs›, subject 1]
#. | contributor | error | research lab | co-contributors
1. | Brett Mensh | 11.3% | MIT | Justin Werfel, Sebastian Seung
2. | Guido Dornhege | 11.6% | Fraunhofer FIRST (IDA), Berlin | Benjamin Blankertz, Klaus-Robert Müller
3. | Kai-Min Chung | 11.9% | National Taiwan University, Taipei | Tzu-Kuo Huang, Chih-Jen Lin
4. | Tzu-Kuo Huang | 15.0% | National Taiwan University, Taipei | Kai-Min Chung, Chih-Jen Lin
5. | David Pinto | 15.7% | University of Florida |
6. | Juma Mbwana | 17.1% | Yale University | Mark Laubach
7. | Vladimir Bostanov | 17.4% | University of Tübingen |
8. | Ulrich Hoffmann | 17.8% | |
9. | Deniz Erdogmus | 19.1% | University of Florida | Yadu Rao, David Pinto, Kenneth Hild, Tue Lehn-Schioeler, Justin Sanchez
10. | Justin Sanchez | 19.8% | University of Florida | Deniz Erdogmus, Tue Lehn-Schioeler, Yadu Rao
11. | Amir Saffari | 23.5% | Sahand University of Technology, Tabriz | T. Emami, S. Ashkboos
12. | Michael Grabner | 24.6% | Technical University of Graz | Alois Schlögl
13. | Yadu Rao | 34.5% | University of Florida | David Pinto
14. | Kenneth Hild | 46.8% | University of Florida | Tue Lehn-Schioeler
15. | Fabien Torre | 49.1% | University of Lille, GRAppA |
Note: The expected error of classification by chance is 50%.
1. Brett Mensh, Massachusetts Institute of Technology, Cambridge
Features: DC-Level and features from spectral analysis
of high beta power band
Classification: Discriminant analysis
some details
[ txt ]
2. Guido Dornhege, Fraunhofer FIRST (IDA), Berlin
with Benjamin Blankertz, Klaus-Robert Müller
Preprocessing: Clustering
Features:
Intensity of the evoked response at the beginning of the trial, means of trials
Classification:
regularized discriminant analysis, linear programming machine
some details
[ pdf
| txt ]
3. Kai-Min Chung, National Taiwan University, Taipei
with Tzu-Kuo Huang, Chih-Jen Lin
Features:
time series after downsampling to 25Hz
Classification:
support vector machine (SVM)
some details
[ txt ]
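For illustration, a minimal sketch of a "downsample the time series and
classify with an SVM" pipeline as in entries 3 and 4 might look as follows;
scikit-learn is assumed, and the sampling rate, array shapes, and SVM
parameters are placeholders, not the contributors' actual settings:

# Sketch only: decimate the raw trials to roughly 25 Hz and classify with an
# SVM.  Sampling rate, trial shape, and SVM parameters are placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

fs_in, fs_out = 256, 25                     # assumed original / target rate
step = fs_in // fs_out                      # crude decimation (~25.6 Hz here)

def downsample(trials):
    """trials: (n_trials, n_channels, n_samples) -> flattened feature matrix."""
    return trials[:, :, ::step].reshape(len(trials), -1)

X_train = np.random.randn(100, 6, 896)      # placeholder EEG trials
y_train = np.random.randint(0, 2, 100)      # placeholder labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(downsample(X_train), y_train)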
4. Tzu-Kuo Huang, National Taiwan University, Taipei
with Kai-Min Chung, Chih-Jen Lin
Features:
After linear SVM training, select time series features with largest w_i
Classification: nonlinear SVM
some details
[ txt ]
5. David Pinto, University of Florida
Classification: Hidden Markov Model with 10 states
some details
[ txt ]
6. Juma Mbwana, Yale University
with Mark Laubach
Features: decimation of time series, discriminant analysis
Classification: SVM
some details
[ txt ]
7. Vladimir Bostanov, University of Tübingen
Features: Continuous Wavelet Transform, Scalogram Peak Detection
Classification: Linear Discriminant Analysis (LDA) with
stepwise or optimal selection of variables
some details
[ txt ]
8. Ulrich Hoffmann
Features: Fourier coefficients 0-7 for every channel (0-2 Hz)
Classification: regularized linear Fisher discriminant
some details
[ txt ]
9. Deniz Erdogmus, University of Florida
with Yadu Rao, David Pinto, Kenneth Hild, Tue Lehn-Schioeler,
Justin Sanchez
Classification: Majority vote of 5 different methods
some details
[ txt ]
10. Justin Sanchez, University of Florida
with Deniz Erdogmus, Tue Lehn-Schioeler, Yadu Rao
Classification: Recursive multi-layer perceptron (a fully-connected MLP
with 5 hidden processing elements)
some details
[ txt ]
11. Amir Saffari, Sahand University of Technology, Tabriz
with T. Emami, S. Ashkboos
Features: moving average downsampling, 25 features per channel
Classification: Neural Network
some details
[ pdf
| txt ]
12. Michael Grabner, Technical University of Graz
with Alois Schlögl
Features: time averaging
Classification: MLP neural network
some details
[ txt ]
13. Yadu Rao, University of Florida
with David Pinto
Classification:
Time-delay neural network predictor using an embedding of 5 (30 x 8 x 6).
some details
[ txt ]
14. Kenneth Hild, University of Florida
with Tue Lehn-Schioeler
Features:
three largest, length-five eigenvectors from PCA on time series
combined with information-theoretic feature reduction
Classification: non-parametric Bayes classifier
some details
[ txt ]
15. Fabien Torre, University of Lille
Method:
a stochastic algorithm (GloBo) learnt 2 rules for class 0 and 3 rules for class 1
some details
[ txt ]
[ top ]
Data set Ib
[Tübingen, ‹self-regulation of SCPs›, subject 2]
Remark:
It is not clear whether this data set contains any information that is
useful for the classification task. A sober look at the results
suggests that it does not. Some contributors commented on this fact.
#. | contributor | error | research lab | co-contributors
1. | Vladimir Bostanov | 45.6% | University of Tübingen |
2. | Tzu-Kuo Huang | 46.7% | National Taiwan University, Taipei | Kai-Min Chung, Chih-Jen Lin
2. | Juma Mbwana | 46.7% | Yale University | Mark Laubach
4. | Kai-Min Chung | 47.8% | National Taiwan University, Taipei | Tzu-Kuo Huang, Chih-Jen Lin
5. | Xichen Sun | 48.3% | Fraunhofer FIRST (IDA), Berlin | Qiansheng Cheng, Benjamin Blankertz
6. | Amir Saffari | 53.3% | Sahand University of Technology, Tabriz | T. Emami, S. Ashkboos
7. | Fabien Torre | 54.4% | University of Lille, GRAppA |
8. | Brett Mensh | 56.1% | MIT |
Note: The expected error of classification by chance is 50%.
1. Vladimir Bostanov, University of Tübingen
Features: Continuous Wavelet Transform, Scalogram Peak Detection
Classification: Linear Discriminant Analysis (LDA) with
stepwise or optimal selection of variables
some details
[ txt ]
2. Tzu-Kuo Huang, National Taiwan University, Taipei
with Kai-Min Chung, Chih-Jen Lin
Features:
After linear SVM training, select time series features with largest w_i
Classification: nonlinear SVM
some details
[ txt ]
2. Juma Mbwana, Yale University
with Mark Laubach
Features: decimation of time series, discriminant analysis
Classification: SVM
some details
[ txt ]
4. Kai-Min Chung, National Taiwan University, Taipei
with Tzu-Kuo Huang, Chih-Jen Lin
Features: time series after downsampling to 25Hz
Classification: SVM
some details
[ txt ]
5. Xichen Sun, Fraunhofer FIRST (IDA), Berlin
with Qiansheng Cheng, Benjamin Blankertz
Features: 100 features with high inter-class variance
Classification: Dynamic time warping / template matching
some details
[ pdf
| txt ]
6. Amir Saffari, Sahand University of Technology, Tabriz
with T. Emami, S. Ashkboos
Features:
ICA demixing, moving average downsampling, 25 features per channel
Classification: Neural Network
some details
[ pdf
| txt ]
7. Fabien Torre, University of Lille
Method:
a stochastic algorithm (GloBo) learnt 3 rules for each class
some details
[ txt ]
8. Brett Mensh, Massachusetts Institute of Technology, Cambridge
Features: DC-Level and features from spectral analysis of
high beta power band.
Classification: Discriminant analysis
some details
[ txt ]
[ top ]
Data set IIa
[Albany, ‹self-regulation of mu- and/or
central beta-rhythm›]
Remark:
The error shown is the test set error, averaged over all three subjects.
Contributors Jan Schleimer and Dominik Brugger submitted results for
only two subjects; for this reason, their submissions are officially
not valid.
#. | contributor | error | research lab | co-contributors
1. | Gilles Blanchard | 28.2% | Fraunhofer FIRST (IDA), Berlin | Benjamin Blankertz
2. | Xiaorong Gao | 34.1% | Tsinghua University | Ming Chang, Wenyan Jia, Shangkai Gao, Fusheng Yang
3. | Chunmao Wang | 69.6% | Harvard |
-  | Dominik Brugger | 73.2% | University of Tübingen | Fabian Sinz, Jan-Hendrik Schleimer
-  | Jan Schleimer | 75.5% | University of Tübingen | Dominik Brugger, Fabian Sinz
Note: The expected error of classification by chance is 75%.
1. Gilles Blanchard, Fraunhofer FIRST (IDA), Berlin
with Benjamin Blankertz
Features:
common spatial patterns for 'top' vs. 'bottom', band power by FFT
Classification:
regularized linear discriminant analysis (4 classes)
some details
[ pdf
| txt ]
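For orientation, the general CSP + log band-power + LDA recipe named above
can be sketched as follows; this is a generic binary-class illustration with
placeholder data, not the winners' actual four-class implementation:

# Generic CSP + log band-power + LDA sketch with placeholder data (not the
# winners' exact four-class method, which also used FFT band power and
# regularized LDA).
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def csp_filters(trials_a, trials_b, n_pairs=3):
    """trials_*: (n_trials, n_channels, n_samples) -> (2*n_pairs, n_channels)."""
    cov = lambda T: np.mean([t @ t.T / np.trace(t @ t.T) for t in T], axis=0)
    evals, evecs = eigh(cov(trials_a), cov(trials_a) + cov(trials_b))
    order = np.argsort(evals)                        # smallest and largest eigenvalues
    return evecs[:, np.r_[order[:n_pairs], order[-n_pairs:]]].T

def log_bandpower(trials, W):
    proj = np.einsum("fc,ncs->nfs", W, trials)       # spatially filtered trials
    return np.log(proj.var(axis=2))                  # log-variance ~ band power

rng = np.random.default_rng(0)
X_a = rng.standard_normal((40, 64, 200))             # placeholder class-A trials
X_b = rng.standard_normal((40, 64, 200))             # placeholder class-B trials
W = csp_filters(X_a, X_b)
X = np.vstack([log_bandpower(X_a, W), log_bandpower(X_b, W)])
y = np.r_[np.zeros(len(X_a)), np.ones(len(X_b))]
lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y)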
2. Xiaorong Gao, Tsinghua University
with Ming Chang, Wenyan Jia, Shangkai Gao, Fusheng Yang
Features:
common spatial subspace decomposition (CSSD),
spectral power in the mu band and a related time feature
Classification:
linear
some details
[ doc
| txt ]
3. Chunmao Wang, Harvard
Features:
time-frequency spectral power (TFSP) by wavelet transform
Classification:
n/a
some details
[ txt ]
-. Jan Schleimer, University of Tuebingen
with Dominik Brugger, Fabian Sinz
Features:
ICA, decimation
Classification:
multi-class support vector machine
some details
[ txt ]
-. Dominik Brugger, University of Tuebingen
with Fabian Sinz, Jan-Hendrik Schleimer
Features:
n/a
Classification:
SVM with optimized parameters
some details
[ txt ]
[ top ]
Data set IIb
[Albany, ‹P300 speller paradigm›]
Remark:
There was a change in the evaluation of this data set.
If you visited this page before, please note that this
affected the ranking of submissions.
The aim, as stated in the data set description, was to
predict letters using all 15 available stimulus repetitions.
Five contributors reached perfect classification in this setting.
The column 'rep' shows the self-reported minimum number of repetitions
needed to produce the same (i.e., perfect) result ('n/a' means that
only the result for 15 repetitions was reported).
These numbers were obtained in different ways, so please refer to the
contributors' descriptions below, or to the forthcoming article, for
details.
Since no procedure for evaluating the minimum number of repetitions
was specified beforehand, the ranking is based solely on the error
rate using all available information.
The organizers apologize for the confusion in the evaluation of this
data set; we have learned our lesson for the next competition.
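As a rough illustration of what the 'rep' column means, the following
sketch accumulates per-intensification classifier scores over an
increasing number of repetitions of the 6x6 speller matrix (12 row/column
intensifications per repetition) and returns the smallest repetition count
for which all letters are predicted correctly; the array layout and
function name are hypothetical:

# Illustrative sketch (hypothetical names): find the smallest number of
# repetitions for which the accumulated classifier scores predict every
# letter correctly in a 6x6 P300 speller.
import numpy as np

def min_repetitions(scores, true_rows, true_cols, max_rep=15):
    """scores: (n_letters, max_rep, 12) classifier outputs per intensification;
    indices 0-5 are assumed to code rows, 6-11 columns (both 0-based)."""
    for n in range(1, max_rep + 1):
        acc = scores[:, :n].sum(axis=1)          # accumulate the first n repetitions
        rows = acc[:, :6].argmax(axis=1)
        cols = acc[:, 6:].argmax(axis=1)
        if np.all(rows == true_rows) and np.all(cols == true_cols):
            return n                             # perfect prediction already reached
    return None                                  # not perfect even with all repetitions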
#. | contributor | error | rep | research lab | co-contributors
1. | Matthias Kaper | 0.0% | 5 | University of Bielefeld | Peter Meinicke, Ulf Grossekathoefer, Thomas Lingner, Helge Ritter
1. | Xiaorong Gao | 0.0% | 5-8 | Tsinghua University | Neng Xu, Xiaobo Miao, Bo Hong, Shangkai Gao, Fusheng Yang
1. | Vladimir Bostanov | 0.0% | 6 | University of Tübingen |
1. | Benjamin Blankertz | 0.0% | 6-11 | Fraunhofer FIRST (IDA), Berlin | Gabriel Curio
1. | David Tax | 0.0% | n/a | Fraunhofer FIRST (IDA), Berlin | Benjamin Blankertz
6. | Justin Werfel | 54.8% | n/a | MIT | Brett Mensh
7. | Elena Glassman | 64.5% | n/a | Central Bucks West High School |
Note: The expected error of classification by chance is 97%.
1. Matthias Kaper, University of Bielefeld
with Peter Meinicke, Ulf Grossekathoefer, Thomas Lingner, Helge Ritter
Features:
spatio-temporal [0-600ms] features of 0.5-30Hz band-pass filtered signals
Classification:
SVM trained on an equal number of positive and negative samples
some details
[ txt ]
1. Xiaorong Gao, Tsinghua University
with Neng Xu, Xiaobo Miao, Bo Hong, Shangkai Gao, Fusheng Yang
Features:
PCA, ICA, spatio-temporal filtering
Classification:
n/a
some details
[ doc
| txt ]
1. Vladimir Bostanov, University of Tuebingen
Features:
continuous wavelet transform (CWT), scalogram peak detection
Classification:
linear discriminant analysis (LDA) with stepwise or optimal
selection of variables
some details
[ txt ]
1. Benjamin Blankertz, Fraunhofer FIRST (IDA), Berlin
with Gabriel Curio
(Universitätsklinikum Benjamin Franklin, FU-Berlin)
Features:
spatio-temporal features from visual cortex, separately chosen parameters
for columns and rows
Classification:
regularized LDA on subtrials, averaged over available repetitions
some details
[ pdf
| txt ]
1. David Tax, Fraunhofer FIRST (IDA), Berlin
with Benjamin Blankertz
Features:
spatio-temporal features from central cortex
Classification:
regularized linear discriminant on subtrials, robustly averaged
some details
[ pdf
| txt ]
6. Justin Werfel, MIT
with Brett Mensh
Features:
samples from four channels between 250-400 ms
Classification:
minimizing squared error
some details
[ txt ]
7. Elena Glassman, Central Bucks West High School
Features:
discrete wavelet transform with the db4 wavelet
Classification:
12 Support Vector Machine classifiers, one for each column or row
some details
[ txt ]
[ top ]
Data set III
[Graz, ‹motor imagery›]
Remark:
The original aim, as stated in the data set description, was to measure
performance by mutual information (MI) divided by time. Dividing by time
favors algorithms that reach good classification results early. However,
evaluating the time delay would not have been fair, because not all
methods are based on causal algorithms. For this reason, the performance
measure was changed to the plain maximum MI.
The MI had to be re-evaluated (see the new version of the technical
report below), but the ranking stayed the same.
Details of the evaluation [ pdf ]
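For orientation only, a common Gaussian-approximation estimate of the
mutual information between a one-dimensional classifier output and a
binary class label looks like the sketch below; the exact criterion used
for this data set is the one specified in the evaluation PDF above:

# Sketch of a Gaussian-approximation MI estimate for a 1-D classifier output D:
# I = 0.5 * log2(total variance / mean within-class variance).  This is the
# general idea only; see the evaluation PDF for the criterion actually used.
import numpy as np

def mutual_information(D, labels):
    D, labels = np.asarray(D, float), np.asarray(labels)
    within = np.mean([D[labels == c].var() for c in np.unique(labels)])
    return 0.5 * np.log2(D.var() / within)

# toy example: well-separated class means give roughly 0.5 bit here
rng = np.random.default_rng(0)
d = np.r_[rng.normal(-1.0, 1.0, 500), rng.normal(1.0, 1.0, 500)]
y = np.r_[np.zeros(500), np.ones(500)]
print(round(mutual_information(d, y), 2))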
#. | contributor | MI | research lab | co-contributors
1. | Christin Schäfer | 0.61 | Fraunhofer FIRST (IDA), Berlin | Steven Lemm
2. | Akash Narayana | 0.46 | DaimlerChrysler Research & Technology India Pvt Ltd. | Mohan Sadashivaiah, Raveendran Rengaswamy, Shanmukh Katragadda
3. | Amir Saffari | 0.45 | Sahand University of Technology, Tabriz, Iran | T. Emami, S. Ashkboos
4. | Xiaorong Gao | 0.44 | Tsinghua University, Beijing | Wenyan Jia, Xianghua Zhao, Shangkai Gao, Fusheng Yang
5. | Mohan Sadashivaiah | 0.29 | DaimlerChrysler Research & Technology India Pvt Ltd. | Akash Narayana, Raveendran Rengaswamy, Shanmukh Katragadda
6. | Dan Rissacher | 0.26 | Winooski, VT |
7. | Thorsten Zander | 0.21 | Fraunhofer FIRST (IDA), Berlin | Guido Dornhege, Benjamin Blankertz
8. | Jorge del Río Vera | 0.09 | |
9. | Juma Mbwana | 0.00 | Yale University | Mark Laubach
Note: The expected MI of classification by chance is 0.
1. Christin Schäfer, Fraunhofer FIRST (IDA)
with Steven Lemm
(Universitätsklinikum Benjamin Franklin, FU-Berlin)
Features:
Morlet-Wavelets at 10 and 22 Hz in channels C3, C4
Classification:
estimation of a multivariate normal distribution for each class,
previous time instances weighted according to Bayes error
some details
[ ps
| txt ]
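As an illustration of the kind of feature named above, a complex
Morlet-wavelet power time course at a fixed frequency can be computed as
sketched below; the sampling rate and wavelet width are assumptions, and
this is not the winners' code:

# Sketch of complex Morlet-wavelet power at a fixed frequency (sampling rate
# and wavelet width are assumptions; illustration only, not the winners' code).
import numpy as np

def morlet_power(x, freq, fs=128.0, n_cycles=7):
    """x: 1-D signal (e.g. channel C3); returns the power time course at `freq`."""
    sigma_t = n_cycles / (2 * np.pi * freq)          # temporal width of the wavelet
    t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2)) # energy normalization
    return np.abs(np.convolve(x, wavelet, mode="same")) ** 2

x = np.random.randn(9 * 128)                         # 9 s of placeholder EEG at 128 Hz
p10, p22 = morlet_power(x, 10.0), morlet_power(x, 22.0)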
2. Akash Narayana, DaimlerChrysler Research & Technology India Pvt Ltd.
with Mohan Sadashivaiah, Raveendran Rengaswamy, Shanmukh Katragadda
Features:
calculate AR-spectral power in 4 frequency bands, ratio of those energies
in C4 and C3
Classification:
linear discriminant analysis
some details
[ doc
| txt ]
3. Amir Saffari, Sahand University of Technology, Tabriz, Iran
with T. Emami, S. Ashkboos
Features:
AAR parameters
Classification:
several Neural Networks trained on different time regions,
results on overlapping regions were averaged
some details
[ doc
| txt ]
4. Xiaorong Gao, Tsinghua University, Beijing
with Wenyan Jia, Xianghua Zhao, Shangkai Gao, Fusheng Yang
Features:
energy of C3, C4 in 10-12Hz band
Classification:
linear discriminant analysis
some details
[ doc
| txt ]
5. Mohan Sadashivaiah, DaimlerChrysler Research & Technology India Pvt Ltd
with Akash Narayana, Raveendran Rengaswamy, Shanmukh Katragadda
Features:
coefficients of an AR model of order 6
Classification:
linear discriminant analysis
some details
[ doc
| txt ]
6. Dan Rissacher, Winooski, VT
Features:
spectral entropy
Classification:
feed-forward neural network
some details
[ txt ]
7. Thorsten Zander, Fraunhofer FIRST (IDA), Berlin
with Guido Dornhege, Benjamin Blankertz
Features:
time course of mu-power calculated from AAR models, weighted by
an optimized weight vector in time
Classification:
linear classifier
some details
[ txt ]
8. Jorge del Río Vera, Spain
Features:
principal component analysis
Classification:
MLP neural network
some details
[ txt
| txt ]
9. Juma Mbwana, Yale University
with Mark Laubach
Features:
decimation, discriminant pursuit
Classification:
support vector machine
some details
[ txt ]
[ top ]
Data set IV
[Berlin, ‹self-paced 1s›]
#. | contributor | error | research lab | co-contributors
1. | Zhiguang Zhang | 16% | Tsinghua University, Beijing | Yijun Wang, Yong Li, Xiaorong Gao, Shangkai Gao, Fusheng Yang
2. | Radford Neal | 19% | University of Toronto |
3. | Ulrich Hoffmann | 23% | |
4. | Tzu-Kuo Huang | 25% | National Taiwan University, Taipei | Kai-Min Chung, Chih-Jen Lin
4. | Brett Mensh | 25% | Massachusetts Institute of Technology |
6. | Dominik Brugger | 27% | University of Tübingen | Rebecca Rörig, Michael Schröder
6. | Kai-Min Chung | 27% | National Taiwan University, Taipei | Tzu-Kuo Huang, Chih-Jen Lin
8. | Michael Schröder | 29% | University of Tübingen | Dominik Brugger, Rebecca Rörig
9. | Ray Smith | 31% | University College Dublin | Richard Reilly
10. | Rebecca Rörig | 32% | University of Tübingen | Dominik Brugger, Michael Schröder
11. | Juma Mbwana | 39% | Yale University | Mark Laubach
12. | Jorge Del Río Vera | 43% | |
13. | Daniel Rissacher | 45% | Clarkson University |
14. | Fabien Torre | 48% | GRAppA, University of Lille 3 |
15. | Amir Saffari | 49% | Sahand University of Technology, Tabriz | T. Emami, S. Ashkboos
Note: The expected error of classification by chance is 50%.
1. Zhiguang Zhang, Tsinghua University, Beijing
with Yijun Wang, Yong Li, Xiaorong Gao
Features:
3 features from combination of common subspace decomposition and
Fisher discriminant
Classification:
perceptron neural network
some details
[ doc
| txt ]
2. Radford Neal, University of Toronto
Features:
188 features of different types extracted and chosen
by exploratory data analysis (i.e. by hand)
Classification:
Bayesian logistic regression using Markov chain Monte Carlo
some details
[ txt ]
3. Ulrich Hoffmann
Features:
principal components of the first 8 Fourier coefficients (0-14 Hz)
Classification:
regularized Fisher discriminant
some details
[ txt ]
4. Tzu-Kuo Huang, National Taiwan University, Taipei
with Kai-Min Chung, Chih-Jen Lin
Features:
last 50ms of low-pass filtered (at 5Hz) signals
Classification:
Support Vector Machine
some details
[ txt ]
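A minimal sketch of a pipeline of this kind (low-pass filter at 5 Hz, keep
the last 50 ms, classify with an SVM) is given below; the sampling rate,
array shapes, and SVM settings are assumptions, not the contributors'
actual choices:

# Sketch only: low-pass filter at 5 Hz, keep the last 50 ms of each trial,
# classify with an SVM.  Sampling rate, shapes, and parameters are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC

fs = 1000                                    # assumed sampling rate in Hz
b, a = butter(4, 5 / (fs / 2), btype="low")  # 4th-order Butterworth low-pass at 5 Hz

def features(trials):
    """trials: (n_trials, n_channels, n_samples) -> last 50 ms, flattened."""
    filtered = filtfilt(b, a, trials, axis=-1)
    n_last = int(0.05 * fs)                  # number of samples in 50 ms
    return filtered[:, :, -n_last:].reshape(len(trials), -1)

X_train = np.random.randn(100, 28, 500)      # placeholder EEG epochs
y_train = np.random.randint(0, 2, 100)       # placeholder left/right labels
clf = SVC(kernel="rbf").fit(features(X_train), y_train)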
4. Brett Mensh, Massachusetts Institute of Technology
Features:
time domain: slopes between first and second segment;
frequency domain: high beta power of central channels
Classification:
linear classifier
some details
[ txt ]
6. Dominik Brugger, University of Tübingen
with Rebecca Rörig, Michael Schröder
Features:
first three principal components of channel-wise PCAs;
genetic algorithm with linear nu-SVM for channel selection
Classification:
sigmoid nu-SVM
some details
[ txt ]
6. Kai-Min Chung, National Taiwan University, Taipei
with Tzu-Kuo Huang, Chih-Jen Lin
Features:
several hundred features selected by a linear SVM
Classification:
non-linear SVM
some details
[ txt ]
8. Michael Schröder, University of Tübingen
with Dominik Brugger, Rebecca Rörig
Features:
for each of the 8 classifiers, two or three components from PCA or ICA/PCA;
genetic algorithm with linear nu-SVM for channel selection
(for each classifier separately)
Classification:
voting of 4 linear, 3 sigmoid and one rbf SVM
some details
[ txt ]
9. Ray Smith, University College Dublin
with Richard Reilly
Features:
a collection of time and frequency domain features
including an Autoregressive with Exogenous input (ARX) model
and Fourier techniques
Classification:
linear discriminant
some details
[ txt ]
10. Rebecca Rörig, University of Tübingen
with Dominik Brugger, Michael Schröder
Features:
first two principal components of channel-wise PCAs;
genetic algorithm using linear nu-SVM to select channels
Classification:
linear nu-SVM
some details
[ txt ]
11. Juma Mbwana, Yale University
with Mark Laubach
Features:
decimated channels separately processed with discriminant pursuit
Classification:
Support Vector Machine
some details
[ txt ]
12. Jorge Del Río Vera
Features:
12 component vector from second mode of PCA
Classification:
multi-layer perceptron (hidden layers with 10 and 5 neurons, respectively)
some details
[ doc
| txt ]
13. Daniel J. Rissacher
Features:
Spectral entropy and wavelets
Classification:
averaged outputs of feed-forward neural networks
some details
[ txt ]
14. Fabien Torre, GRAppA, University of Lille 3
Method:
a stochastic algorithm (GloBo) learnt 5 rules for each class
some details
[ txt ]
15. Amir Saffari,
Sahand University of Technology, Tabriz
with T. Emami, S. Ashkboos
Features:
normalized sources of an ICA model
Classification:
several Neural Networks, retrained to agree on the test set
some details
[ txt
| pdf ]
[ top ]
Winners
data set | contributor | research lab | co-contributors
Ia | Brett Mensh | Massachusetts Institute of Technology | Justin Werfel, Sebastian Seung
Ib | Vladimir Bostanov | University of Tübingen |
IIa | Gilles Blanchard | Fraunhofer FIRST (IDA), Berlin | Benjamin Blankertz
IIb | Matthias Kaper | University of Bielefeld | Peter Meinicke, Ulf Grossekathoefer, Thomas Lingner, Helge Ritter
IIb | Xiaorong Gao | Tsinghua University, Beijing | Neng Xu, Xiaobo Miao, Bo Hong, Shangkai Gao, Fusheng Yang
IIb | Vladimir Bostanov | University of Tübingen |
IIb | Benjamin Blankertz | Fraunhofer FIRST (IDA), Berlin | Gabriel Curio (UKBF, FU-Berlin)
IIb | David Tax | Fraunhofer FIRST (IDA), Berlin | Benjamin Blankertz
III | Christin Schäfer | Fraunhofer FIRST (IDA), Berlin | Steven Lemm
IV | Zhiguang Zhang | Tsinghua University, Beijing | Yijun Wang, Yong Li, Xiaorong Gao
Winners are asked to submit an article on their algorithm to
IEEE Transactions on Biomedical Engineering by July 1st, 2003.
We will contact the winners with further details.
[ top ]
Organizers
Albany:
Theresa M. Vaughan, Gerwin Schalk, Jonathan R. Wolpaw
Berlin:
Benjamin Blankertz, Gabriel Curio, Klaus-Robert Müller
Graz:
Alois Schlögl, Christa Neuper, Gernot Müller,
Bernhard Graimann, Gert Pfurtscheller
Tübingen:
Thilo Hinterberger, Michael Schröder, Niels Birbaumer
[ top ]