This is a supplementary repository for the paper titled Systematic Evaluation of Personalized Deep Learning Models for Affect Recognition.
- Install the necessary packages using `requirements.txt` and `setup.py`.
- Save the data in the `archives` folder.
- Run `ar_dataset_preprocessing.py` for the desired dataset preprocessing. The processed data will be saved in `mts_archive`.
- Run `./datasetnametuning.sh X` in the desired folder (`X`: ID of the GPU to use).
- Execute `datasetnameresults.py`.
- You need to create a `data` folder manually.
  - The top-level folders contain raw data, and `mts_archive` contains the data after each preprocessing step.
- All datasets have to be formatted into the same structure as the WESAD dataset.
  - In each `Si` folder, there is a `.pkl` file for each participant.
  - In each `.pkl` file, the labels and sensor signals are stored as `numpy.array` objects (see the loading sketch below).
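The following is a minimal sketch of how one participant's `.pkl` file can be inspected, assuming the WESAD-style layout (a pickled dict with `subject`, `signal`, and `label` keys); the file path is hypothetical.

```python
import pickle

import numpy as np

# Hypothetical path; adjust to your data folder layout.
with open("data/WESAD/S2/S2.pkl", "rb") as f:
    record = pickle.load(f, encoding="latin1")

print(record["subject"])                             # e.g. "S2"
print(type(record["label"]), record["label"].shape)  # labels as numpy.array

# Sensor signals are numpy arrays as well, grouped per device in WESAD.
for device, channels in record["signal"].items():    # e.g. "chest", "wrist"
    for name, signal in channels.items():
        assert isinstance(signal, np.ndarray)
        print(device, name, signal.shape)
```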
- When you run `ar_dataset_preprocessing.py`, the code inside this folder is executed.
- The main files are the `datasetname.py` scripts, which perform winsorization, filtering, resampling, normalization, and windowing, and also format each dataset for the deep learning models (an illustrative sketch of these steps follows below).
  - For datasets without user labels, we use `preprocessor.py` and `subject.py`; for those with labels, `preprocessorlabel.py` and `subjectlabel.py` are used.
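The sketch below illustrates the preprocessing steps named above (winsorization, filtering, resampling, normalization, windowing) on a single 1-D signal. The cut-offs, filter order, sampling rates, and window sizes are assumptions for illustration, not the values used in the `datasetname.py` scripts.

```python
import numpy as np
from scipy import signal as sps
from scipy.stats.mstats import winsorize


def preprocess(x, fs_in, fs_out, win_sec=60, step_sec=30):
    x = np.asarray(winsorize(x, limits=(0.01, 0.01)), dtype=float)  # winsorization: clip extreme values
    b, a = sps.butter(4, 0.4, btype="low")                          # low-pass filter (normalized cut-off)
    x = sps.filtfilt(b, a, x)
    x = sps.resample(x, int(len(x) * fs_out / fs_in))               # resample to the target rate
    x = (x - x.mean()) / (x.std() + 1e-8)                           # z-score normalization
    win, step = int(win_sec * fs_out), int(step_sec * fs_out)
    # Sliding windows ready to be fed to the deep learning models.
    return np.stack([x[i:i + win] for i in range(0, len(x) - win + 1, step)])


windows = preprocess(np.random.randn(4 * 60 * 700), fs_in=700, fs_out=4)
print(windows.shape)  # (n_windows, win_sec * fs_out)
```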
- Functions in the `multimodal_classifiers` folder are used for model training.
  - For each deep learning structure (i.e., Fully Convolutional Network (FCN), Residual Network (ResNet), and Multi-Layer Perceptron with LSTM (MLP-LSTM)), non-personalized models are implemented.
- For a detailed explanation of the model implementation, please refer to Section 3.3, Non-Personalized Model.
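As a rough orientation, the snippet below sketches a single-branch FCN in the spirit of dl-4-tsc. The actual classifiers in `multimodal_classifiers` use one input branch per modality, so the shapes and hyperparameters here are assumptions, not the paper's configuration.

```python
from tensorflow import keras


def build_fcn(n_timesteps, n_channels, n_classes):
    inp = keras.layers.Input(shape=(n_timesteps, n_channels))
    x = inp
    for filters, kernel in [(128, 8), (256, 5), (128, 3)]:  # FCN conv blocks
        x = keras.layers.Conv1D(filters, kernel, padding="same")(x)
        x = keras.layers.BatchNormalization()(x)
        x = keras.layers.Activation("relu")(x)
    x = keras.layers.GlobalAveragePooling1D()(x)
    out = keras.layers.Dense(n_classes, activation="softmax")(x)
    model = keras.models.Model(inp, out)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model


model = build_fcn(n_timesteps=240, n_channels=1, n_classes=3)
model.summary()
```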
- Functions in the `multimodal_classifiers_finetuning` folder are used for model training.
  - For each deep learning structure, personalized models with fine-tuning are implemented.
- For a detailed explanation of the model implementation, please refer to the Fine-Tuning part of Section 3.4.1, Unseen User-Dependent.
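Conceptually, fine-tuning starts from a trained generalized model and continues training on a small amount of the target user's data. The sketch below illustrates only that idea, not the exact procedure in `multimodal_classifiers_finetuning`; the model file name and the dummy data are assumptions.

```python
import numpy as np
from tensorflow import keras

# "generalized_fcn.hdf5" is a hypothetical file name for a saved generalized model.
model = keras.models.load_model("generalized_fcn.hdf5")

# Re-compile with a smaller learning rate so the pretrained weights move gently.
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Dummy calibration data standing in for the unseen user's windows and labels.
x_user = np.random.randn(32, 240, 1)
y_user = keras.utils.to_categorical(np.random.randint(0, 3, size=32), num_classes=3)
model.fit(x_user, y_user, epochs=20, batch_size=16, verbose=0)
```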
- Functions in the `multimodal_classifiers_hybrid` folder are used for model training.
  - For each deep learning structure, hybrid (partially personalized) models are implemented.
- For a detailed explanation of the model implementation, please refer to the Hybrid part of Section 3.4.1, Unseen User-Dependent.
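One common way to obtain a partially personalized model is to keep the pretrained feature extractor frozen and retrain only the final layers on the target user's data. The sketch below shows that generic scheme only as an illustration; it is not necessarily how the hybrid models are built, so refer to `multimodal_classifiers_hybrid` and the paper for the actual implementation. File names and data are assumptions.

```python
import numpy as np
from tensorflow import keras

model = keras.models.load_model("generalized_fcn.hdf5")  # hypothetical file name

# Freeze everything except the final dense (classification) layer.
for layer in model.layers[:-1]:
    layer.trainable = False
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Dummy data standing in for the target user's windows and labels.
x_user = np.random.randn(32, 240, 1)
y_user = keras.utils.to_categorical(np.random.randint(0, 3, size=32), num_classes=3)
model.fit(x_user, y_user, epochs=20, batch_size=16, verbose=0)
```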
- Functions in the `multimodal_classifiers` and `clustering` folders are used for model training.
  - As explained in Section 3.4.2, Unseen User-Independent, the difference between the generalized model and the cluster-specific personalized model lies in the data used for training, not in the model itself.
    - Therefore, we use the same functions in the `multimodal_classifiers` folder as for the generalized models.
  - Using functions in the `clustering` folder, trait-based clustering is performed and its result is used to select the training data (see the clustering sketch below).
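The sketch below illustrates the trait-based clustering idea: group participants by their trait scores and then train one model per cluster on that cluster's data only. The trait values, number of clusters, and use of k-means are assumptions for illustration; see the `clustering` folder for the actual procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# One row per participant, e.g. five trait scores (hypothetical values).
traits = np.array([
    [3.2, 4.1, 2.8, 3.9, 2.1],
    [2.1, 3.0, 4.2, 2.5, 3.8],
    [3.0, 4.0, 2.9, 3.7, 2.3],
    [2.2, 2.9, 4.0, 2.6, 3.9],
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(traits)
)
print(labels)  # cluster id per participant; used to select training data per cluster
```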
- Functions in the `multimodal_classifiers_mtl` and `clustering` folders are used for model training.
  - As explained in Section 3.4.2, Unseen User-Independent, multi-task learning personalized models differ from generalized models in both the data used for training and the model itself.
    - Therefore, we use the functions in the `multimodal_classifiers_mtl` folder.
  - Also, using functions in the `clustering` folder, trait-based clustering is performed for the multi-task learning models.
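As a rough illustration of a multi-task variant, the sketch below builds a shared convolutional trunk with one softmax head per trait cluster (task). The head count, shapes, and losses are illustrative assumptions, not the architecture in `multimodal_classifiers_mtl`.

```python
from tensorflow import keras


def build_mtl_fcn(n_timesteps, n_channels, n_classes, n_tasks):
    inp = keras.layers.Input(shape=(n_timesteps, n_channels))
    x = inp
    for filters, kernel in [(128, 8), (256, 5), (128, 3)]:  # shared FCN trunk
        x = keras.layers.Conv1D(filters, kernel, padding="same")(x)
        x = keras.layers.BatchNormalization()(x)
        x = keras.layers.Activation("relu")(x)
    x = keras.layers.GlobalAveragePooling1D()(x)
    # One classification head per cluster/task.
    outputs = [keras.layers.Dense(n_classes, activation="softmax", name=f"cluster_{t}")(x)
               for t in range(n_tasks)]
    model = keras.models.Model(inp, outputs)
    model.compile(optimizer="adam", loss=["categorical_crossentropy"] * n_tasks)
    return model


model = build_mtl_fcn(n_timesteps=240, n_channels=1, n_classes=3, n_tasks=2)
model.summary()
```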
The code for the non-personalized models, i.e., the `arpreprocessing`, `GeneralizedModel`, and `multimodal_classifiers` folders, is based on code provided in the "dl-4-tsc" GitHub repository: https://github.com/Emognition/dl-4-tsc
The datasets used are as follows, and they can be downloaded from the provided links:
- AMIGOS: AMIGOS: A Dataset for Affect, Personality and Mood Research on Individuals and Groups
- ASCERTAIN: ASCERTAIN: Emotion and Personality Recognition Using Commercial Sensors
- CASE: A dataset of continuous affect annotations and physiological signals for emotion analysis
- WESAD: WESAD: Multimodal Dataset for Wearable Stress and Affect Detection
- K-EmoCon: K-EmoCon, a multimodal sensor dataset for continuous emotion recognition in naturalistic conversations
- K-EmoPhone: K-EmoPhone, A Mobile and Wearable Dataset with In-Situ Emotion, Stress, and Attention Labels