Abstract:
The electroencephalogram (EEG) offers high temporal resolution and captures the asymmetric spatial activations that underlie emotional brain processes. Unlike voice signals or facial expressions, which are easily imitated, EEG-based emotion recognition has proven to be a reliable option. Because people react differently to the same emotional stimulus, EEG signals of emotion are not universal and can vary greatly from one individual to the next. EEG signals are therefore highly individual-dependent and have shown promising results in subject-dependent emotion identification. This research proposes ensemble learning with an advanced voting mechanism to capture the spatial asymmetry and temporal dynamics of EEG for accurate and generalizable emotion identification. Two feature extraction techniques, Variational Mode Decomposition (VMD) and Empirical Mode Decomposition (EMD), are applied to the pre-processed EEG data, and the Garra Rufa Fish Optimization Algorithm (GRFOA) is employed for feature selection. The ensemble model comprises a Temporal Convolutional Neural Network (TCNN), an Extreme Learning Machine (ELM), and a Multi-Layer Perceptron (MLP). The proposed method trains these classifiers on EEG data from individual subjects, and the final emotion prediction is derived via a voting classifier based on heterogeneous ensemble learning. The approach is validated on two publicly available datasets, DEAP and MAHNOB-HCI, under broad cross-validation settings.
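The final voting step described above can be sketched as a simple hard-vote over the per-sample labels emitted by the three base models. This is a minimal illustration, not the paper's implementation: the function name `hard_vote`, the tie-breaking rule, and the example label arrays are all assumptions for demonstration.

```python
import numpy as np

def hard_vote(predictions):
    """Majority vote across base-classifier predictions.

    predictions: sequence of shape (n_classifiers, n_samples) holding
    integer emotion labels from each base model (e.g. the TCNN, ELM,
    and MLP in a heterogeneous ensemble). Returns the per-sample
    majority label; ties resolve to the smallest label via argmax.
    """
    preds = np.asarray(predictions)
    n_classes = preds.max() + 1
    # Count votes per class for each sample (column), then pick the winner.
    votes = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes), 0, preds
    )
    return votes.argmax(axis=0)

# Hypothetical label outputs from three base models on four EEG segments.
tcnn = np.array([0, 1, 1, 2])
elm = np.array([0, 1, 2, 2])
mlp = np.array([1, 1, 2, 0])
print(hard_vote([tcnn, elm, mlp]))  # → [0 1 2 2]
```

A weighted or confidence-based ("soft") vote would instead average each model's class probabilities before taking the argmax; the hard vote shown here only needs discrete labels from each base model.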