Official code for the paper “Temporal convolutional transformer for EEG based motor imagery decoding.”
Paper: https://www.nature.com/articles/s41598-025-16219-7 (Nature Scientific Reports, 2025)
- Built upon ideas/code from EEG-ATCNet: https://github.com/Altaheri/EEG-ATCNet
- Training pipeline structure and several implementations adapted from channel-attention: https://github.com/martinwimpff/channel-attention

TCFormer fuses a Multi-Kernel CNN (MK-CNN) front-end, a Transformer encoder with Grouped-Query Attention (GQA) + RoPE, and a Temporal Convolutional Network (TCN) head. The model captures local (CNN), global (Transformer), and long-range (TCN) temporal dependencies in MI-EEG.
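At a glance, the data flow is MK-CNN → Transformer encoder → TCN → classifier. The sketch below is only a rough PyTorch approximation of that layout: kernel sizes, filter counts, and pooling are illustrative assumptions, and a standard `nn.TransformerEncoder` stands in for the paper's GQA + RoPE encoder. See the repository code for the actual blocks.

```python
import torch
import torch.nn as nn

class MKCNN(nn.Module):
    """Parallel temporal convolutions with different kernel lengths, followed by a
    depthwise spatial convolution. Sizes here are illustrative, not the repo's values."""
    def __init__(self, n_chans=22, f=16, kernels=(16, 32, 64)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(1, f, (1, k), padding="same", bias=False),      # temporal conv
                nn.BatchNorm2d(f),
                nn.Conv2d(f, f, (n_chans, 1), groups=f, bias=False),      # depthwise spatial conv
                nn.BatchNorm2d(f),
                nn.ELU(),
                nn.AvgPool2d((1, 8)),
            )
            for k in kernels
        )
        self.out_dim = f * len(kernels)

    def forward(self, x):                                 # x: (B, 1, n_chans, T)
        z = torch.cat([b(x) for b in self.branches], 1)   # (B, out_dim, 1, T // 8)
        return z.squeeze(2).transpose(1, 2)               # token sequence (B, T // 8, out_dim)

class TCNHead(nn.Module):
    """Two dilated causal conv layers with residual connections as a stand-in TCN."""
    def __init__(self, d, kernel=4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(d, d, kernel, dilation=dil, padding=(kernel - 1) * dil)
            for dil in (1, 2)
        )
        self.act = nn.ELU()

    def forward(self, x):                                 # x: (B, T, d)
        x = x.transpose(1, 2)                             # -> (B, d, T)
        for conv in self.convs:
            y = conv(x)[..., : x.size(-1)]                # trim right padding -> causal
            x = self.act(y) + x                           # residual connection
        return x[..., -1]                                 # last time step (B, d)

class TCFormerSketch(nn.Module):
    def __init__(self, n_chans=22, n_classes=4):
        super().__init__()
        self.cnn = MKCNN(n_chans)
        d = self.cnn.out_dim
        # NOTE: standard multi-head attention here; the paper's encoder uses
        # grouped-query attention (GQA) with rotary position embeddings (RoPE).
        layer = nn.TransformerEncoderLayer(d, nhead=4, dim_feedforward=2 * d,
                                           batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.tcn = TCNHead(d)
        self.classifier = nn.Linear(d, n_classes)

    def forward(self, x):                                 # x: (B, 1, n_chans, T)
        return self.classifier(self.tcn(self.encoder(self.cnn(x))))

if __name__ == "__main__":
    out = TCFormerSketch()(torch.randn(2, 1, 22, 1000))   # a 4 s IV-2a trial at 250 Hz
    print(out.shape)                                       # torch.Size([2, 4])
```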
Python 3.10 • PyTorch 2.6.0 • CUDA 12.4
Install dependencies from requirements.txt:
pip install -r requirements.txt
Tested on Ubuntu 24.04 with RTX A6000 GPUs (48 GB). Results may vary slightly by hardware and seeds.
Examples:
# BCI IV-2a, subject-dependent (within-subject), with augmentation
python train_pipeline.py --model tcformer --dataset bcic2a --interaug
# BCI IV-2b, subject-dependent (within-subject), no augmentation
python train_pipeline.py --model tcformer --dataset bcic2b --no_interaug
# HGD, cross-subject (LOSO), no augmentation
python train_pipeline.py --model tcformer --dataset hgd --loso --no_interaug
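The `--interaug` flag enables data augmentation during training. A common scheme in MI-EEG pipelines is class-conditional segmentation and recombination of training trials; the sketch below assumes that is roughly what the flag does and is not the repository's implementation.

```python
import torch

def segment_recombine(x, y, n_segments=8, generator=None):
    """Class-conditional segmentation-and-recombination augmentation (an assumption of
    what --interaug enables, not the repo's code).

    x: (n_trials, channels, time), y: (n_trials,) integer labels.
    Builds one synthetic trial per original trial by splitting the time axis into
    n_segments chunks and copying each chunk from a random trial of the same class.
    """
    g = generator or torch.Generator().manual_seed(0)
    t = x.shape[-1]
    seg = t // n_segments
    x_aug = x.clone()
    for cls in y.unique():
        idx = (y == cls).nonzero(as_tuple=True)[0]
        for s in range(n_segments):
            # for every trial of this class, pick a random donor trial of the same class
            donors = idx[torch.randint(len(idx), (len(idx),), generator=g)]
            sl = slice(s * seg, (s + 1) * seg if s < n_segments - 1 else t)
            x_aug[idx, :, sl] = x[donors, :, sl]
    return x_aug, y.clone()

# usage: append (x_aug, y_aug) to the training data each epoch
```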
Batch a full sweep:
A helper script enumerates models × datasets × seeds × {±augmentation} in both the subject-dependent and LOSO settings:
bash run_all.sh
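If run_all.sh needs adapting, the same grid can be enumerated directly in Python. The sketch below mirrors the example flags above; the `--seed` flag name is an assumption about the train_pipeline.py interface and may differ.

```python
import itertools
import subprocess

# Grid mirrored from the examples above; adjust to the actual CLI of train_pipeline.py.
models = ["tcformer"]
datasets = ["bcic2a", "bcic2b", "hgd"]
seeds = [0, 1, 2]
aug_flags = ["--interaug", "--no_interaug"]
settings = [[], ["--loso"]]  # subject-dependent vs. leave-one-subject-out

for model, dataset, seed, aug, setting in itertools.product(
        models, datasets, seeds, aug_flags, settings):
    cmd = ["python", "train_pipeline.py", "--model", model, "--dataset", dataset,
           aug, "--seed", str(seed), *setting]
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)
```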
Summaries (tables are written under your results directory):
# Per-subject summary (per subject and per seed)
python summarize_per_subject.py /results/
# Dataset-level aggregation (averaged across subjects; per-seed)
python summarize_results.py /results/TCFormer/2a
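Conceptually, the two scripts aggregate per-subject, per-seed accuracies at different levels. The pandas sketch below illustrates the idea under an assumed file layout (one CSV per run with `subject`, `seed`, and `accuracy` columns); the files actually written by the pipeline may be named and structured differently.

```python
from pathlib import Path
import pandas as pd

# Hypothetical layout: one CSV per run with "subject", "seed", "accuracy" columns.
results_dir = Path("/results/TCFormer/2a")
df = pd.concat(pd.read_csv(f) for f in results_dir.rglob("*.csv"))

per_subject = df.groupby("subject")["accuracy"].agg(["mean", "std"])  # per subject, over seeds
per_seed = df.groupby("seed")["accuracy"].mean()                      # per seed, over subjects
print(per_subject)
print(f"dataset mean: {per_seed.mean():.2f} ± {per_seed.std():.2f}")
```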
Dataset | Tasks (classes) | Channels | Sampling rate (Hz) | Split (sessions) | Notes |
---|---|---|---|---|---|
BCI Comp IV-2a | L/R hand, Feet, Tongue (4) | 22 EEG | 250 | S1 train, S2 test | Motor imagery |
BCI Comp IV-2b | L vs R hand (2) | 3 (C3, Cz, C4) | 250 | S1–S3 train, S4–S5 test | Motor imagery |
HGD (High-Gamma) | L/R hand, Feet, Rest (4) | 128 → 44 | 512 → 250 | S1 train, S2 test | Motor execution |
The three datasets above are downloaded automatically by the pipeline.
This repository also supports BCI Comp III-IVa and REH-MI. These two must be downloaded manually and placed in the directories defined in utils/load_bcic3.py and utils/load_reh_mi.py.
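For an independent sanity check of the data, BCI IV-2a can also be fetched through MOABB. This is not how the repository loads it, and the dataset class name differs across moabb versions (older releases expose it as `BNCI2014001`); the snippet is only a reference sketch.

```python
# Reference only: fetch BCI Competition IV-2a via MOABB (not the repository's loader).
from moabb.datasets import BNCI2014_001   # older moabb versions: BNCI2014001
from moabb.paradigms import MotorImagery

dataset = BNCI2014_001()
paradigm = MotorImagery(n_classes=4)
X, labels, meta = paradigm.get_data(dataset=dataset, subjects=[1])
print(X.shape, set(labels))  # (n_trials, 22, n_samples), {'left_hand', 'right_hand', 'feet', 'tongue'}
```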
Accuracy Summary (Subject-Dependent vs. LOSO, ± Augmentation)
The table reports mean accuracy (%) for all models across BCI IV-2a, BCI IV-2b, and HGD in both subject-dependent (Sub-Dep) and leave-one-subject-out (LOSO) settings, with (+aug) and without (–aug) augmentation, together with model parameter counts in thousands (k). Parameter counts refer to the IV-2a configuration and may vary slightly with dataset/channel count.
Model | Params (k) | IV-2a Sub-Dep –aug | IV-2a Sub-Dep +aug | IV-2a LOSO –aug | IV-2a LOSO +aug | IV-2b Sub-Dep –aug | IV-2b Sub-Dep +aug | IV-2b LOSO –aug | IV-2b LOSO +aug | HGD Sub-Dep –aug | HGD Sub-Dep +aug | HGD LOSO –aug | HGD LOSO +aug |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
EEGNet | 1.7 | 70.39 | 72.62 | 52.01 | 52.03 | 82.80 | 83.65 | 77.67 | 77.89 | 85.59 | 85.94 | 57.95 | 60.12 |
ShallowNet | 44.6 | 60.50 | 65.72 | 48.83 | 47.31 | 79.12 | 81.45 | 74.50 | 75.58 | 89.75 | 91.54 | 72.47 | — |
BaseNet | 3.7 | 76.45 | 78.58 | 57.82 | 56.89 | 84.51 | 86.11 | 78.55 | 78.61 | 93.64 | 95.40 | 68.55 | — |
EEGTCNet | 4.1 | 75.62 | 78.82 | 55.09 | 55.99 | 85.54 | 86.74 | 78.82 | 80.56 | 91.83 | 93.54 | 60.59 | — |
TS-SEFFNet | 334.8 | 76.65 | — | 56.74 | — | 84.18 | — | 77.82 | — | 92.45 | — | 69.99 | — |
CTNet, F1=20 | 152.7 | 78.08 | 81.91 | 59.67 | 60.09 | 86.81 | 86.91 | 79.44 | 80.29 | 93.53 | 94.21 | 64.87 | 64.60 |
CTNet, F1=8 | 27.3 | — | 79.24 | — | 56.17 | — | 87.50 | — | 80.15 | — | 92.22 | — | — |
MSCFormer | 150.7 | 75.25 | 79.16 | 52.04 | 54.27 | 85.57 | 87.60 | 78.88 | 79.20 | 91.33 | 94.31 | 61.06 | 61.19 |
EEGConformer | 789.6 | 70.70 | 75.39 | 45.44 | 45.59 | 79.46 | 81.89 | 73.44 | 75.25 | 93.60 | 94.67 | 69.21 | 69.92 |
ATCNet | 113.7 | 83.40 | 83.78 | 60.05 | 59.66 | 86.25 | 86.26 | 80.29 | 80.94 | 93.65 | 95.08 | 67.42 | — |
TCFormer (proposed) | 77.8 | 83.06 | 84.79 | 62.44 | 63.00 | 87.11 | 87.71 | 79.73 | 81.34 | 95.62 | 96.27 | 71.90¹ | 72.83¹ |
¹ Using a deeper TCFormer encoder (N = 5, ≈131 k params); see the paper for details.
Reported accuracies were averaged over 5 runs (BCI IV-2a/2b) or 3 runs (HGD) using the final (last-epoch) checkpoint; no early stopping or validation-based model selection.
Please cite the paper if you use this code:
@article{Altaheri2025,
  title   = {Temporal convolutional transformer for EEG based motor imagery decoding},
  author  = {Altaheri, Hamdi and Karray, Fakhri and Karimi, Amir-Hossein},
  journal = {Scientific Reports},
  year    = {2025},
  volume  = {15},
  number  = {1},
  pages   = {32959},
  issn    = {2045-2322},
  doi     = {10.1038/s41598-025-16219-7},
  url     = {https://doi.org/10.1038/s41598-025-16219-7}
}
This repository is released under the MIT License (see LICENSE).
Contact: Hamdi Altaheri