
TCFormer: Temporal Convolutional Transformer

Official code for the paper “Temporal convolutional transformer for EEG based motor imagery decoding.”
Paper: https://www.nature.com/articles/s41598-025-16219-7 (Scientific Reports, 2025)


Figure: TCFormer architecture blocks.

TCFormer fuses a Multi-Kernel CNN (MK-CNN) front-end, a Transformer encoder with Grouped-Query Attention (GQA) and rotary position embeddings (RoPE), and a Temporal Convolutional Network (TCN) head. The three stages capture local (CNN), global (Transformer), and long-range (TCN) temporal dependencies in MI-EEG.
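For orientation, here is a minimal PyTorch sketch of the three-stage composition. It is illustrative only: standard multi-head attention stands in for GQA + RoPE, and every size below is a placeholder rather than a paper hyperparameter; see the repository code for the actual model.

```python
import torch
import torch.nn as nn

class TCFormerSketch(nn.Module):
    """Illustrative MK-CNN -> Transformer -> TCN composition (not the paper's model).

    Standard nn.TransformerEncoder stands in for the GQA + RoPE encoder,
    and all layer sizes are placeholders.
    """
    def __init__(self, n_channels=22, n_classes=4, d_model=32):
        super().__init__()
        # MK-CNN front-end: parallel temporal convolutions with different
        # kernel sizes extract local features at several time scales.
        self.branches = nn.ModuleList([
            nn.Conv2d(1, d_model // 4, (1, k), padding=(0, k // 2))
            for k in (15, 31, 63, 125)
        ])
        self.spatial = nn.Conv2d(d_model, d_model, (n_channels, 1))
        self.pool = nn.AvgPool2d((1, 8))
        # Transformer encoder: global dependencies across the pooled sequence.
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # TCN-style head: dilated temporal convolutions for long-range structure.
        self.tcn = nn.Sequential(
            nn.Conv1d(d_model, d_model, 4, dilation=1), nn.ELU(),
            nn.Conv1d(d_model, d_model, 4, dilation=2), nn.ELU(),
        )
        self.classify = nn.Linear(d_model, n_classes)

    def forward(self, x):                 # x: (batch, channels, time)
        x = x.unsqueeze(1)                # -> (batch, 1, channels, time)
        x = torch.cat([b(x) for b in self.branches], dim=1)
        x = self.pool(self.spatial(x)).squeeze(2)   # -> (batch, d_model, T)
        x = self.encoder(x.transpose(1, 2))          # -> (batch, T, d_model)
        x = self.tcn(x.transpose(1, 2))              # dilated convs shrink T
        return self.classify(x[:, :, -1])            # last step -> class logits
```

A quick shape check: `TCFormerSketch()(torch.randn(2, 22, 1000))` returns logits of shape `(2, 4)`.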


Environment

Python 3.10 • PyTorch 2.6.0 • CUDA 12.4

Install dependencies from requirements.txt:

pip install -r requirements.txt

Tested on Ubuntu 24.04 with RTX A6000 GPUs (48 GB). Results may vary slightly by hardware and seeds.
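A quick way to confirm the installed stack matches (versions other than the tested ones may also work):

```python
# Sanity-check the installed stack against the tested versions above.
import torch

print(torch.__version__)           # tested with 2.6.0
print(torch.version.cuda)          # tested with 12.4
print(torch.cuda.is_available())   # True if a GPU is visible
```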


Training & Evaluation

Examples:

# BCI IV-2a, subject-dependent (within-subject), with augmentation
python train_pipeline.py --model tcformer --dataset bcic2a --interaug

# BCI IV-2b, subject-dependent (within-subject), no augmentation
python train_pipeline.py --model tcformer --dataset bcic2b --no_interaug

# HGD, cross-subject (LOSO), no augmentation
python train_pipeline.py --model tcformer --dataset hgd --loso --no_interaug

Batch a full sweep:

Helper script to enumerate models × datasets × seeds × {±augmentation} in both subject-dependent and LOSO settings:

bash run_all.sh
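If you prefer driving the sweep from Python, the loop below sketches the same enumeration. It is a hypothetical equivalent: the --seed flag and the seed values are assumptions, so check run_all.sh and train_pipeline.py for the flags actually accepted.

```python
# Hypothetical Python equivalent of run_all.sh: enumerate
# models x datasets x seeds x {±augmentation} x {subject-dependent, LOSO}.
# The --seed flag and seed list are assumptions; see run_all.sh for the
# authoritative flag set.
import itertools
import subprocess

models = ["tcformer"]
datasets = ["bcic2a", "bcic2b", "hgd"]
seeds = [0, 1, 2]                       # placeholder seeds
aug_flags = ["--interaug", "--no_interaug"]
loso_flags = [None, "--loso"]           # None = subject-dependent

for model, ds, seed, aug, loso in itertools.product(
        models, datasets, seeds, aug_flags, loso_flags):
    cmd = ["python", "train_pipeline.py", "--model", model,
           "--dataset", ds, "--seed", str(seed), aug]
    if loso:
        cmd.append(loso)
    subprocess.run(cmd, check=True)
```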

Summaries (tables are written under your results directory):

# Per-subject tables (per subject and per seed)
python summarize_per_subject.py /results/

# Dataset-level aggregation (averaged across subjects; per-seed)
python summarize_results.py /results/TCFormer/2a
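For a custom roll-up, the pandas sketch below shows the aggregation idea (average over subjects within each seed, then mean ± std across seeds). The results.csv name and its subject/seed/accuracy columns are assumptions about the output format, not a documented schema.

```python
# Hypothetical aggregation in the spirit of summarize_results.py.
# Assumes a CSV with columns: subject, seed, accuracy (not a documented schema).
import pandas as pd

df = pd.read_csv("/results/TCFormer/2a/results.csv")
per_seed = df.groupby("seed")["accuracy"].mean()   # average over subjects
print(per_seed)
print(f"mean ± std across seeds: {per_seed.mean():.2f} ± {per_seed.std():.2f}")
```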

Datasets

| Dataset | Tasks (classes) | Channels | Sampling rate (Hz) | Split (sessions) | Notes |
|---|---|---|---|---|---|
| BCI Comp IV-2a | L/R hand, feet, tongue (4) | 22 EEG | 250 | S1 train, S2 test | Motor imagery |
| BCI Comp IV-2b | L vs. R hand (2) | 3 (C3, Cz, C4) | 250 | S1–S3 train, S4–S5 test | Motor imagery |
| HGD (High-Gamma) | L/R hand, feet, rest (4) | 128 → 44 | 512 → 250 | S1 train, S2 test | Motor execution |

The three datasets above are downloaded automatically by the pipeline.
This repository also supports BCI Comp III-IVa and REH-MI. For these two, download them manually and place the files in the directories defined in
utils/load_bcic3.py and utils/load_reh_mi.py.
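To make the HGD row of the table concrete (128 → 44 channels, 512 → 250 Hz), the snippet below illustrates that kind of channel selection and resampling. It is a generic sketch with placeholder channel indices, not the repository's loader.

```python
# Generic illustration of HGD-style preprocessing: keep a channel subset
# and resample 512 Hz -> 250 Hz. Indices are placeholders, not the repo's
# actual 44-channel motor-cortex selection.
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 512, 250
eeg = np.random.randn(128, fs_in * 4)      # (channels, samples): 4 s dummy trial
keep = np.arange(44)                        # placeholder channel subset
eeg = resample_poly(eeg[keep], up=125, down=256, axis=1)  # 512 * 125/256 = 250 Hz
print(eeg.shape)                            # (44, 1000)
```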


Results (from the paper)

Accuracy Summary (Subject-Dependent vs. LOSO, ± Augmentation)

The table reports mean accuracy (%) for all models across BCI IV-2a, BCI IV-2b, and HGD in both subject-dependent (Sub-Dep) and Leave-One-Subject-Out (LOSO) settings, with (+aug) and without (–aug) augmentation, plus model parameter counts (k). Parameter counts are referenced from the IV-2a configuration and may vary slightly with dataset/channel count.

Each cell lists mean accuracy (%) as –aug / +aug; cells with a single value are reproduced as reported in the paper.

| Model | Params (k) | IV-2a Sub-Dep | IV-2a LOSO | IV-2b Sub-Dep | IV-2b LOSO | HGD Sub-Dep | HGD LOSO |
|---|---|---|---|---|---|---|---|
| EEGNet | 1.7 | 70.39 / 72.62 | 52.01 / 52.03 | 82.80 / 83.65 | 77.67 / 77.89 | 85.59 / 85.94 | 57.95 / 60.12 |
| ShallowNet | 44.6 | 60.50 / 65.72 | 48.83 / 47.31 | 79.12 / 81.45 | 74.50 / 75.58 | 89.75 / 91.54 | 72.47 |
| BaseNet | 3.7 | 76.45 / 78.58 | 57.82 / 56.89 | 84.51 / 86.11 | 78.55 / 78.61 | 93.64 / 95.40 | 68.55 |
| EEGTCNet | 4.1 | 75.62 / 78.82 | 55.09 / 55.99 | 85.54 / 86.74 | 78.82 / 80.56 | 91.83 / 93.54 | 60.59 |
| TS-SEFFNet | 334.8 | 76.65 | 56.74 | 84.18 | 77.82 | 92.45 | 69.99 |
| CTNet (F1=20) | 152.7 | 78.08 / 81.91 | 59.67 / 60.09 | 86.81 / 86.91 | 79.44 / 80.29 | 93.53 / 94.21 | 64.87 / 64.60 |
| CTNet (F1=8) | 27.3 | 79.24 | 56.17 | 87.50 | 80.15 | 92.22 | |
| MSCFormer | 150.7 | 75.25 / 79.16 | 52.04 / 54.27 | 85.57 / 87.60 | 78.88 / 79.20 | 91.33 / 94.31 | 61.06 / 61.19 |
| EEGConformer | 789.6 | 70.70 / 75.39 | 45.44 / 45.59 | 79.46 / 81.89 | 73.44 / 75.25 | 93.60 / 94.67 | 69.21 / 69.92 |
| ATCNet | 113.7 | 83.40 / 83.78 | 60.05 / 59.66 | 86.25 / 86.26 | 80.29 / 80.94 | 93.65 / 95.08 | 67.42 |
| TCFormer (proposed) | 77.8 | 83.06 / 84.79 | 62.44 / 63.00 | 87.11 / 87.71 | 79.73 / 81.34 | 95.62 / 96.27 | 71.90¹ / 72.83¹ |

¹ Using a deeper TCFormer encoder (N = 5, ≈131 k parameters); see the paper for details.
Reported accuracies are averages over 5 runs (BCI IV-2a/2b) or 3 runs (HGD) using the final (last-epoch) checkpoint; no early stopping or validation-based model selection was used.



Citation

Please cite the paper if you use this code:

@article{Altaheri2025,
  title   = {Temporal convolutional transformer for EEG based motor imagery decoding},
  author  = {Altaheri, Hamdi and Karray, Fakhri and Karimi, Amir-Hossein},
  journal = {Scientific Reports},
  year    = {2025},
  volume  = {15},
  number  = {1},
  pages   = {32959},
  issn    = {2045-2322},
  doi     = {10.1038/s41598-025-16219-7},
  url     = {https://doi.org/10.1038/s41598-025-16219-7}
}

Acknowledgements & License

This repository is released under the MIT License (see LICENSE).
Contact: Hamdi Altaheri
