Classification Algorithm Description for Dataset V (mental imagery, multi-class) in BCI Competition III

J. Ignacio Serrano and M. D. del Castillo
Instituto de Automática Industrial, CSIC
Ctra. Campo Real, km 0.200, La Poveda
28500 Arganda del Rey, Madrid, Spain
(nachosm,lola)@iai.csic.es
The dataset was given in two formats: raw EEG data, containing the 32 EEG potentials acquired at a given moment (sample) with a sampling rate of 512 Hz, and precomputed data, consisting of 12 components of the power spectral density (PSD), estimated over the last second, for the 8 centro-parietal channels in the range 8-30 Hz, with a sampling rate of 16 Hz. The latter format was the one used for classification, so each example was composed of 96 attributes (8 channels x 12 PSD components) plus the class value (2, 3 or 7).
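As an illustration, a precomputed sample could be arranged in memory as sketched below; the file name and loading code are assumptions, since the exact file layout of the competition distribution is not described here.

    import numpy as np

    # Hypothetical loader: assumes a whitespace-separated file in which each
    # row holds the 96 PSD attributes followed by the class label (2, 3 or 7).
    data = np.loadtxt("subject1_precomputed.txt")
    X = data[:, :96]                 # 8 channels x 12 PSD components per sample
    y = data[:, 96].astype(int)      # class value: 2, 3 or 7

    # The 96 attributes can be viewed as an (n_samples, 8, 12) array: 12 PSD
    # components (8-30 Hz band) for each of the 8 centro-parietal channels.
    psd = X.reshape(-1, 8, 12)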
A supervised machine learning algorithm was used for the classification task. The algorithm selected was the Multilayer Perceptron Neural Network [1], [2]. The classification model is composed of a certain number of interconnected layers of neurons. The architecture used for this dataset is presented in Figure 1.
Figure 1. Architecture of the Multilayer Perceptron Neural Network used for the Dataset V classification task.
Each connection has an associated weight. The input to each neuron is the weighted sum, using the connection weights, of all the incoming values. The output of each neuron is the result of applying an activation function to this sum. In this case, a typical sigmoid function is implemented in all the neurons. Figure 2 shows the function expression and representation.
Figure 2. Expression and representation of the sigmoid function, f(x) = 1 / (1 + e^(-x)).
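A minimal sketch of the computation performed by a single neuron, with illustrative (not trained) weights:

    import numpy as np

    def sigmoid(x):
        # Typical sigmoid function: f(x) = 1 / (1 + e^(-x))
        return 1.0 / (1.0 + np.exp(-x))

    def neuron_output(inputs, weights, bias):
        # Weighted sum of the incoming values, then the sigmoid activation.
        return sigmoid(np.dot(weights, inputs) + bias)

    # Illustrative values only.
    print(neuron_output(np.array([0.2, 0.7, 0.1]),
                        np.array([0.5, -0.3, 0.8]),
                        bias=0.1))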
Thus, each of the attribute values from a sample of the dataset is entered into the corresponding neuron of the input layer, and the values spread through the network to the output layer, where the output value of the network is the predicted class.
Training consists of, given a set of initial weight values, entering each of the labeled examples of the dataset into the model and comparing the output value with the expected class. Depending on the error of the predicted class, the backpropagation algorithm changes the weights from the output layer back to the input layer, in order to make the predicted value more similar to the expected one. This process is carried out for a certain number of epochs, or iterations; in this case, 500. The amount by which the weights are changed in backpropagation, the so-called learning rate, is 0.3, and the momentum applied to the weights during updating is 0.2. If the backpropagation algorithm does not reach a good approximation to the expected output during an iteration, it resets the model and causes the learning rate to decrease.
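As a sketch, this training setup can be reproduced with scikit-learn's MLPClassifier. The hidden-layer size is an assumption (the actual architecture is shown in Figure 1), the training data below is a random stand-in, and the reset-and-decay behavior described above is not replicated here.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Stand-in training data (assumption): normalized 96-attribute samples.
    rng = np.random.default_rng(0)
    X_train = rng.random((1000, 96))
    y_train = rng.choice([2, 3, 7], size=1000)

    clf = MLPClassifier(
        hidden_layer_sizes=(20,),  # assumed size; see Figure 1 for the real one
        activation="logistic",     # sigmoid function in all neurons
        solver="sgd",              # backpropagation by gradient descent
        learning_rate_init=0.3,    # learning rate from the text
        momentum=0.2,              # momentum from the text
        max_iter=500,              # 500 epochs
    )
    clf.fit(X_train, y_train)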
Once the neural net is trained, each sample of the test set is labeled with the output prediction from the model. Then, every 8 non-overlapping samples (0.5 seconds) a class label is computed from the 16 last samples (1 second) by majority vote, i.e., the label that appears most often within those 16 samples.
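A minimal sketch of this smoothing step, assuming the per-sample predictions arrive at 16 Hz:

    import numpy as np

    def majority_vote(predictions, window=16, step=8):
        # Every `step` samples (0.5 s at 16 Hz), output the label appearing
        # most often within the last `window` samples (1 s); ties go to the
        # smallest label, since np.unique returns labels in sorted order.
        labels = []
        for end in range(window, len(predictions) + 1, step):
            values, counts = np.unique(predictions[end - window:end],
                                       return_counts=True)
            labels.append(int(values[np.argmax(counts)]))
        return labels

    # Example with per-sample predictions over classes 2, 3 and 7.
    print(majority_vote(np.array([2] * 10 + [7] * 14 + [3] * 8)))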
All the attribute values in the dataset were previously preprocessed by normalizing them to the range 0-1. The neural network was independently trained and tested for each subject.
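A sketch of the normalization, assuming a per-attribute min-max scaling (the text only states that values were mapped to the 0-1 range):

    import numpy as np

    def minmax_normalize(X):
        # Scale each attribute (column) to the 0-1 range; constant attributes
        # are left at zero to avoid division by zero.
        lo, hi = X.min(axis=0), X.max(axis=0)
        return (X - lo) / np.where(hi > lo, hi - lo, 1.0)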
[1]. Laurene Fausett. Fundamentals of Neural Networks (1st edition). Prentice Hall, 1994.
[2]. Simon Haykin. Neural Networks: A Comprehensive Foundation (2nd edition). Pearson Education, 1998.