This repository was archived by the owner on Jan 3, 2023. It is now read-only.
Adding VGG that works for neon v2.3 with MKL backend #25
Open
wei-v-wang wants to merge 1 commit into master from
Conversation
…ference), and GPU backend inference
wsokolow suggested changes on Nov 20, 2017
```diff
@@ -1,14 +1,19 @@
 #Overview
 
 This example VGG directory contains scripts to perform VGG training and inference using MKL backend and GPU backend
 
 ##Model
 
 ### Model script
-The model run script is included here [vgg_neon.py](./vgg_neon.py). This script can easily be adapted for fine tuning this network but we have focused on inference here because a successful training protocol may require details beyond what is available from the Caffe model zoo.
+The model run scripts included here, [vgg_neon_train.py] (./vgg_neon_train.py) and [vgg_neon_inference.py] (./vgg_neon_inference.py), perform training and inference respectively. We are providing both the training and the inference scripts; they can be adapted for fine tuning this network, but we have yet to test the training script because a successful training protocol may require details beyond what is available from the Caffe model zoo. The inference script takes the trained weight file as input: supply it with VGG_D_fused_conv_bias.p, VGG_E_fused_conv_bias.p, or models trained by running VGG training.
```
change "[vgg_neon_train.py] (./vgg_neon_train.py)" to "vgg_neon_train.py", so the hyperlink will work
change "[vgg_neon_inference.py] (./vgg_neon_inference.py)" to "vgg_neon_inference.py", so the hyperlink will work
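The README text above says the inference script takes a serialized weight file (e.g. VGG_D_fused_conv_bias.p) as input. neon's `.p` model files are Python pickles; a minimal sketch for inspecting one before passing it to the inference script — the top-level keys used here are hypothetical stand-ins, and the actual structure of neon's serialized models may differ:

```python
import pickle

def inspect_weights(path):
    """Load a .p weight file (assumed to be a plain Python pickle)
    and summarize its top level."""
    with open(path, "rb") as f:
        model = pickle.load(f)
    if isinstance(model, dict):
        return sorted(model.keys())
    return type(model).__name__

# Stand-in file for illustration; real inputs would be
# VGG_D_fused_conv_bias.p or VGG_E_fused_conv_bias.p.
with open("demo_weights.p", "wb") as f:
    pickle.dump({"model": {}, "epoch_index": 0}, f)

print(inspect_weights("demo_weights.p"))  # -> ['epoch_index', 'model']
```

A quick check like this catches a truncated download or a file saved in a different format before a long inference run starts.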
| Total | 1152 ms |

```
python -u vgg_neon_train.py -c vgg_mkl.cfg -vvv --save_path VGG16-model.prm --output_file VGG16-data.h5 --caffe
```

"numactl -i all" is our recommendation to get as much performance as possible on Intel architecture-based servers which
feature multiple sockets and have NUMA enabled. On such systems, please run the following:

```
numactl -i all python -u vgg_neon_train.py -c vgg_mkl.cfg -vvv --save_path VGG16-model.prm --output_file VGG16-data.h5 --caffe
```
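The choice between the plain and the `numactl`-prefixed invocation can be scripted. A minimal sketch (the helper below is ours, not part of this PR) that prepends `numactl -i all` only when the binary is actually installed:

```python
import shutil

def build_train_cmd(extra_args):
    """Build the training command line, prepending 'numactl -i all'
    when numactl is available (worthwhile on multi-socket NUMA systems)."""
    cmd = ["python", "-u", "vgg_neon_train.py"] + list(extra_args)
    if shutil.which("numactl"):  # interleave memory only if the tool exists
        cmd = ["numactl", "-i", "all"] + cmd
    return cmd

print(build_train_cmd(["-c", "vgg_mkl.cfg", "-vvv"]))
```

The resulting list can be handed directly to `subprocess.run`, which avoids shell quoting issues with the long argument string.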
To run with the GPU backend instead, modify the 'backend' entry in the vgg_mkl.cfg above, or simply use the following command:

If neon is installed into a `virtualenv`, make sure that it is activated before running the command below.

```
python -u vgg_neon_train.py -c vgg_mkl.cfg -b gpu -vvv --save_path VGG16-model.prm --output_file VGG16-data.h5 --caffe
```
Training and inference work fine with MKL.
GPU training works occasionally.
GPU inference works fine.
Includes an updated README regarding the new format of weights.