Steady State Visually Evoked Potentials (SSVEPs) are intrinsic responses to specific visual stimulus frequencies. When the retina is stimulated at a frequency between 3.5 and 75 Hz, the brain produces electrical activity at the same frequency as the visual signal or at its harmonics. SSVEPs are therefore useful for identifying the preferred frequencies of neurocortical dynamic processes. However, lengthy calibration sessions limit the number of training trials and give rise to visual fatigue, and because there is significant variation across and within individuals over time, the effectiveness of individual training data is weakened. To address this issue, we propose a novel cross-subject classification method that exploits cross-subject similarity and variability to enhance the robustness of SSVEP classification. We employ an efficient Time Series Transformer (TST) to speed up calibration and improve classification precision for new users, and compare it against other deep learning approaches from the literature: EEGNet, FBtCNN, and C-CNN. The proposed framework is validated on two datasets with two different time-window lengths. The experimental results suggest that the cross-subject TST and EEGNet outperform the other state-of-the-art techniques for specific subjects, showing high potential for building high-speed BCIs.
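The frequency-following property described above (a response at the stimulus frequency or its multiples) can be illustrated with a minimal sketch. This is not the paper's transformer-based method; it is a hypothetical baseline that scores each candidate stimulus frequency by the spectral power at the frequency and its harmonics, with all names and parameters assumed for illustration:

```python
import numpy as np

def detect_ssvep_frequency(signal, fs, candidates, n_harmonics=2):
    """Return the candidate frequency whose fundamental and harmonics
    carry the most spectral power (illustrative baseline, not TST)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)     # bin frequencies (Hz)
    scores = []
    for f in candidates:
        score = 0.0
        for h in range(1, n_harmonics + 1):
            idx = np.argmin(np.abs(freqs - h * f))       # nearest FFT bin
            score += spectrum[idx]
        scores.append(score)
    return candidates[int(np.argmax(scores))]

# Simulated 1 s EEG epoch: a 12 Hz response plus its 24 Hz harmonic and noise.
fs = 250
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
eeg = (np.sin(2 * np.pi * 12 * t)
       + 0.5 * np.sin(2 * np.pi * 24 * t)
       + 0.5 * rng.standard_normal(fs))

print(detect_ssvep_frequency(eeg, fs, candidates=[8, 10, 12, 15]))  # → 12
```

Deep models such as the TST or EEGNet replace this hand-crafted spectral scoring with features learned directly from the raw time series.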
A Transformer-Based Deep Learning Architecture for Accurate Intracranial Hemorrhage Detection and Classification
Adel ElZemity, Maryam ElFdaly, Shorouk Abdelfattah, and 6 more authors
In 2023 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT), 2023