Full train/inference/submission pipeline adapted to the Data Science Bowl competition from https://github.com/matterport/Mask_RCNN. Kudos to @matterport, @waleedka and others for the code. It is well written, but also somewhat opinionated, which makes it hard to guess what's going on under the hood; hence this fork.
I made almost no changes to the original code, except for:

- Everything custom lives in `bowl_config.py` and `bowl_dataset.py`.
- `VALIDATION_STEPS` and `STEPS_PER_EPOCH` are now hardcoded to depend on the dataset size.
- `multiprocessing=False`, hardcoded.
- @John1231983's changes from this PR.
- Added a `RESNET_ARCHITECTURE` variable to the config (`resnet50` or `resnet101`, while 101 comes with the default config).
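The dataset-size-dependent step counts can be sketched roughly like this (a minimal illustration; `BowlConfig`, `BATCH_SIZE`, and the constructor arguments are hypothetical names, not the actual contents of `bowl_config.py`):

```python
class BowlConfig:
    """Hypothetical sketch: derive step counts from dataset size
    instead of using fixed constants."""
    RESNET_ARCHITECTURE = "resnet101"  # or "resnet50"
    BATCH_SIZE = 2

    def __init__(self, train_image_count, val_image_count):
        # One full pass over the data per epoch.
        self.STEPS_PER_EPOCH = max(1, train_image_count // self.BATCH_SIZE)
        self.VALIDATION_STEPS = max(1, val_image_count // self.BATCH_SIZE)

config = BowlConfig(train_image_count=664, val_image_count=10)
```

The point of tying the counts to the dataset is that adding or removing images never silently changes how much of the data an epoch actually sees.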
- First, you have to download the train masks. Thanks @lopuhin for bringing all the fixes into one place. You might want to clone it outside of this repo, so you can pull changes later, and use symlinks:

  ```bash
  git clone https://github.com/lopuhin/kaggle-dsbowl-2018-dataset-fixes ../kaggle-dsbowl-2018-dataset-fixes
  ln -s ../kaggle-dsbowl-2018-dataset-fixes/stage1_train stage1_train
  ```

- Download the rest of the official dataset and unzip it into the repo:

  ```bash
  unzip ~/Downloads/stage1_test.zip -d stage1_test
  unzip ~/Downloads/stage1_train_labels.csv.zip -d .
  unzip ~/Downloads/stage1_sample_submission.csv.zip -d .
  ```
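After unpacking, a quick sanity check can confirm the layout (a hypothetical helper, not part of the repo; each stage directory contains one subdirectory per image):

```python
import os

def count_samples(root):
    """Count per-image subdirectories in a stage directory.
    Returns 0 if the directory is missing, rather than raising."""
    if not os.path.isdir(root):
        return 0
    return sum(
        1 for name in os.listdir(root)
        if os.path.isdir(os.path.join(root, name))
    )

if __name__ == "__main__":
    for stage in ("stage1_train", "stage1_test"):
        print(stage, count_samples(stage))
```

A zero count for `stage1_train` usually means the symlink step above was skipped.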
- Install `pycocotools` and the COCO pretrained weights (`mask_rcnn_coco.h5`). The general idea is described here. Keep in mind that to install `pycocotools` properly, it's better to run `make install` instead of `make`.
- For single-GPU training, run:

  ```bash
  CUDA_VISIBLE_DEVICES="0" python train.py
  ```
- To generate a submission, run:

  ```bash
  CUDA_VISIBLE_DEVICES="0" python inference.py
  ```

  This will create `submission.csv` in the repo, overwriting the old one (you're welcome to fix this with a PR).
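The submission uses the competition's run-length encoding: pixels are numbered top-to-bottom, then left-to-right (column-major), starting from 1, and each mask becomes space-separated "start length" pairs. A minimal sketch of such an encoder (not the repo's actual implementation):

```python
import numpy as np

def rle_encode(mask):
    """Run-length encode a binary mask in the competition format:
    column-major pixel order, 1-indexed starts, "start length" pairs."""
    pixels = mask.flatten(order="F")
    # Pad with zeros so every run has a well-defined start and end.
    padded = np.concatenate([[0], pixels, [0]])
    # Indices (1-based) where the value changes: run starts and run ends.
    runs = np.where(padded[1:] != padded[:-1])[0] + 1
    # Turn "start, end" pairs into "start, length" pairs.
    runs[1::2] -= runs[::2]
    return " ".join(str(x) for x in runs)
```

For example, `rle_encode(np.array([[0, 1], [1, 1]]))` yields `"2 3"`: a single run starting at pixel 2 and covering 3 pixels.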
- Submit! You should get a score of around 0.361 on the LB after 100 epochs.
- Poor man's Exploratory Data Analysis -- to get a basic idea of the data.
- Test submission for errors -- reads the submission back and visualizes the separate masks.
- Visualize inference -- since there aren't too many masks in the test dataset, it's easy to look through all of them in one place.
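Reading masks back out of the submission means inverting the competition RLE. A minimal decoder sketch (an illustration, not the notebook's actual code):

```python
import numpy as np

def rle_decode(rle, shape):
    """Decode "start length" pairs (1-indexed, column-major order)
    back into a binary mask of the given (height, width) shape."""
    mask = np.zeros(shape[0] * shape[1], dtype=np.uint8)
    tokens = [int(t) for t in rle.split()]
    for start, length in zip(tokens[::2], tokens[1::2]):
        mask[start - 1:start - 1 + length] = 1
    # Fortran order matches the column-major pixel numbering.
    return mask.reshape(shape, order="F")
```

Round-tripping every row of `submission.csv` through decode and re-encode is a cheap way to catch malformed or overlapping runs before submitting.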
- Fix validation. For now, train data is used as a validation set.
- Normalize data.
- Move configuration to `argparse` for easier hyperparameter search.
- Parallelize data loading.
- Augmentations.
- External Data.
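The `argparse` TODO could look something like this (a hypothetical sketch; the flag names are illustrative, not existing repo options):

```python
import argparse

def build_parser():
    """Hypothetical CLI exposing the config knobs mentioned above."""
    parser = argparse.ArgumentParser(
        description="Train Mask R-CNN on Data Science Bowl 2018")
    parser.add_argument("--resnet-architecture", default="resnet101",
                        choices=["resnet50", "resnet101"],
                        help="backbone, mirrors RESNET_ARCHITECTURE")
    parser.add_argument("--learning-rate", type=float, default=0.001)
    parser.add_argument("--epochs", type=int, default=100)
    return parser

# Parse an explicit argument list (instead of sys.argv) for illustration.
args = build_parser().parse_args(["--resnet-architecture", "resnet50"])
```

With flags like these, a hyperparameter sweep is just a shell loop over `python train.py --learning-rate ...` invocations, instead of editing `bowl_config.py` between runs.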