Behavioral analysis via self-supervised pretraining of transformers
beast is a package for pretraining vision transformers on unlabeled data to provide backbones
for downstream tasks like pose estimation, action segmentation, and neural encoding.
See the preprint here.
First, check whether ffmpeg is installed by running the following in the terminal:
ffmpeg -version
If it is not installed, install it (on Ubuntu/Debian):
sudo apt install ffmpeg
Next, install Anaconda if you have not already.
Then create and activate a conda environment:
conda create --yes --name beast python=3.10
conda activate beast
Move to your home directory (or wherever you would like to download the code) and install beast either by cloning from GitHub or through PyPI.
To install from a GitHub clone:
git clone https://github.com/paninski-lab/beast
cd beast
pip install -e .
To install from PyPI:
pip install beast-backbones
beast comes with a simple command-line interface. For more information, run
beast -h
Extract training frames from a directory of videos:
beast extract --input <video_dir> --output <output_dir> [options]
Type "beast extract -h" in the terminal for details on the options.
To train a model, you will need to specify a config path; see the configs directory for examples.
beast train --config <config_path> [options]
Type "beast train -h" in the terminal for details on the options.
Inference on a single video or a directory of videos:
beast predict --model <model_dir> --input <video_path> [options]
Inference on (possibly nested) directories of images:
beast predict --model <model_dir> --input <image_dir> [options]
Type "beast predict -h" in the terminal for details on the options.