Attention temporal convolutional network for EEG-based motor imagery classification
BabyGPT: build your own GPT large language model from scratch. A step-by-step guide to pre-training generative transformer models in PyTorch and Python.
A faster PyTorch implementation of multi-head self-attention
Transformer/Transformer-XL/R-Transformer examples and explanations
Transformer creation from scratch using JAX.
Official implementation of "HyPepTox-Fuse: An interpretable hybrid framework for accurate peptide toxicity prediction fusing protein language model-based embeddings with conventional descriptors"
A framework that includes a new Contextualization module (CARU) to enrich embedding data within a lightweight Multi-Head Cross-Attention architecture, plus a module for weighting BPR triplets (TIL)
This notebook builds a complete GPT (Generative Pre-trained Transformer) model from scratch using PyTorch. It covers tokenization, self-attention, multi-head attention, transformer blocks, and text generation, all explained step by step with a simple nursery-rhyme corpus.
EEG motor imagery classification using multi-head attention, a TCN, convolutional layers, and advanced preprocessing.
This work reveals how certain attention heads in LLMs support multilingual processing and leverages them to improve cross-lingual performance.
PyTorch implementation of transformers with multi-headed self attention
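As a rough illustration of the multi-head self-attention mechanism the projects above build on, here is a minimal PyTorch sketch. The class name, dimensions, and absence of masking are illustrative assumptions and are not taken from any of the listed repositories.

```python
# Minimal multi-head self-attention block in PyTorch (illustrative sketch only;
# names and sizes are hypothetical, not from the repositories listed above).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiHeadSelfAttention(nn.Module):
    def __init__(self, embed_dim: int = 64, num_heads: int = 4):
        super().__init__()
        assert embed_dim % num_heads == 0, "embed_dim must divide evenly across heads"
        self.num_heads = num_heads
        self.head_dim = embed_dim // num_heads
        # Single linear layer produces queries, keys, and values; plus an output projection.
        self.qkv = nn.Linear(embed_dim, 3 * embed_dim)
        self.out = nn.Linear(embed_dim, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, embed_dim)
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to (batch, heads, seq_len, head_dim) so each head attends independently.
        q, k, v = (z.view(b, t, self.num_heads, self.head_dim).transpose(1, 2) for z in (q, k, v))
        # Scaled dot-product attention per head.
        scores = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        weights = F.softmax(scores, dim=-1)
        ctx = weights @ v  # (batch, heads, seq_len, head_dim)
        # Concatenate heads and project back to embed_dim.
        ctx = ctx.transpose(1, 2).reshape(b, t, d)
        return self.out(ctx)


# Usage example with random data.
x = torch.randn(2, 10, 64)            # (batch=2, seq_len=10, embed_dim=64)
attn = MultiHeadSelfAttention(64, 4)
print(attn(x).shape)                  # torch.Size([2, 10, 64])
```

The sketch keeps the standard design choice of splitting the embedding dimension evenly across heads and concatenating the per-head outputs before the final projection; causal masking, dropout, and bias options vary across the implementations listed here.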