[MICCAI'24] Epileptic Seizure Detection in SEEG Signals using a Unified Multi-scale Temporal-Spatial-Spectral Transformer Model
Accepted by MICCAI 2024
High-performance methods for automated seizure detection in stereo-electroencephalography (SEEG) signals have important clinical research implications, improving diagnostic efficiency and reducing physician burden. However, few studies have been able to account for the process of seizure propagation, and thus fail to fully capture the deep representations and variations of SEEG signals across the temporal, spatial, and spectral domains. In this paper, we construct a novel long-term SEEG seizure dataset (LTSZ dataset) and propose a channel embedding temporal-spatial-spectral transformer (CE-TSS-Transformer) framework. Firstly, we design a channel embedding module that reduces feature dimensionality and adaptively constructs an optimal representation for subsequent analysis. Secondly, we integrate unified multi-scale temporal-spatial-spectral analysis to capture multi-level, multi-domain deep features. Finally, we utilize a transformer encoder to learn the global relevance of features, enhancing the network's ability to represent SEEG signals. Experimental results demonstrate state-of-the-art detection performance on the LTSZ dataset, achieving sensitivity, specificity, and accuracy of 99.48%, 99.80%, and 99.48%, respectively. Furthermore, we validate the scalability of the proposed framework on two public datasets with different signal sources, demonstrating the power of the CE-TSS-Transformer framework for capturing diverse temporal-spatial-spectral patterns in seizure detection. The code is available at https://github.com/lizhuoyi-eve/CE-TSS-Transformer.
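To make the three-stage pipeline concrete, below is a minimal PyTorch sketch of the described flow: channel embedding, multi-scale feature extraction, and a transformer encoder over the resulting features. All module names, dimensions, and kernel sizes here are illustrative assumptions (the multi-scale block is a simple stand-in using parallel 1-D convolutions), not the authors' implementation; refer to the repository linked above for the official code.

```python
# Hypothetical sketch of the CE-TSS-Transformer pipeline; shapes and
# hyperparameters are assumptions, not the published configuration.
import torch
import torch.nn as nn


class ChannelEmbedding(nn.Module):
    """Project the raw SEEG channel dimension to a compact learned representation."""
    def __init__(self, in_channels: int, embed_channels: int):
        super().__init__()
        self.proj = nn.Linear(in_channels, embed_channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_channels, time) -> (batch, embed_channels, time)
        return self.proj(x.transpose(1, 2)).transpose(1, 2)


class MultiScaleBlock(nn.Module):
    """Parallel 1-D convolutions with different kernel sizes as a simple
    stand-in for the unified multi-scale temporal-spatial-spectral analysis."""
    def __init__(self, channels: int, d_model: int, kernel_sizes=(3, 7, 15)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(channels, d_model // len(kernel_sizes), k, padding=k // 2)
            for k in kernel_sizes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate branch outputs along the feature dimension.
        return torch.cat([branch(x) for branch in self.branches], dim=1)


class CETSSTransformerSketch(nn.Module):
    def __init__(self, in_channels=64, embed_channels=16, d_model=96,
                 n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        self.channel_embed = ChannelEmbedding(in_channels, embed_channels)
        self.multi_scale = MultiScaleBlock(embed_channels, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_channels, time)
        x = self.channel_embed(x)            # (batch, embed_channels, time)
        x = self.multi_scale(x)              # (batch, d_model, time)
        x = self.encoder(x.transpose(1, 2))  # (batch, time, d_model)
        return self.head(x.mean(dim=1))      # seizure / non-seizure logits


if __name__ == "__main__":
    model = CETSSTransformerSketch()
    segment = torch.randn(8, 64, 1024)  # 8 segments, 64 SEEG channels, 1024 samples
    print(model(segment).shape)         # torch.Size([8, 2])
```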