39 changes: 0 additions & 39 deletions demos/using_onnx_model/python/Makefile

This file was deleted.

40 changes: 6 additions & 34 deletions demos/using_onnx_model/python/README.md
@@ -4,7 +4,7 @@ Steps are similar to when you work with IR model format. Model Server accepts ONNX
Below is a complete functional use case using Python 3.7 or higher.
For this example, let's use a public [ONNX ResNet](https://github.com/onnx/models/tree/main/validated/vision/classification/resnet) model - resnet50-caffe2-v1-9.onnx.

This model requires additional [preprocessing function](https://github.com/onnx/models/tree/main/validated/vision/classification/resnet#preprocessing). Preprocessing can be performed in the client by manipulating data before sending the request. Preprocessing can be also delegated to the server by creating a [DAG](../../../docs/dag_scheduler.md) and using a custom processing node. Both methods will be explained below.
This model requires an additional [preprocessing function](https://github.com/onnx/models/tree/main/validated/vision/classification/resnet#preprocessing). Preprocessing can be performed in the client by manipulating data before sending the request. Preprocessing can also be delegated to the server by setting preprocessing parameters. Both methods are explained below.

[Option 1: Adding preprocessing to the client side](#option-1-adding-preprocessing-to-the-client-side)
[Option 2: Adding preprocessing to the server side](#option-2-adding-preprocessing-to-the-server-side)
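
Both options apply the same math; a minimal Python sketch of it is below (an illustrative approximation, assuming `numpy` and `Pillow`; the demo script uses its own `getJpeg` helper, and the exact resize and channel-order details follow the linked preprocessing description):

```python
import numpy as np
from PIL import Image

def normalize(path):
    # Illustrative only: resize to the 224x224 input of resnet50-caffe2-v1-9
    img = np.array(Image.open(path).resize((224, 224)), dtype=np.float32)
    # Per-channel mean/scale values used throughout this demo
    mean = np.array([123.675, 116.28, 103.53], dtype=np.float32)
    scale = np.array([58.395, 57.12, 57.375], dtype=np.float32)
    img = (img - mean) / scale
    # NHWC -> NCHW plus a batch dimension, the layout the model consumes
    return img.transpose(2, 0, 1)[np.newaxis, ...]
```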
@@ -17,9 +17,9 @@ git clone https://github.com/openvinotoolkit/model_server.git
cd model_server/demos/using_onnx_model/python
```

Prepare workspace with the model by running:
Download the classification model:
```bash
make client_preprocessing
curl --fail -L --create-dirs https://github.com/onnx/models/raw/main/validated/vision/classification/resnet/model/resnet50-caffe2-v1-9.onnx -o workspace/resnet50-onnx/1/resnet50-caffe2-v1-9.onnx
```

You should see `workspace` directory created with the following content:
@@ -55,29 +55,13 @@ Detected class name: bee

## Option 2: Adding preprocessing to the server side

Prepare workspace with the model, preprocessing node library and configuration file by running:
```bash
make server_preprocessing
```

You should see `workspace` directory created with the following content:
```bash
workspace/
├── config.json
├── lib
│   └── libcustom_node_image_transformation.so
└── resnet50-onnx
└── 1
└── resnet50-caffe2-v1-9.onnx
```

Start the OVMS container with a configuration file option:
Start the OVMS container with additional preprocessing options:
```bash
docker run -d -u $(id -u):$(id -g) -v $(pwd)/workspace:/workspace -p 9001:9001 openvino/model_server:latest \
--config_path /workspace/config.json --port 9001
--model_path /workspace/resnet50-onnx --model_name resnet --port 9001 --layout NHWC:NCHW --mean "[123.675,116.28,103.53]" --scale "[58.395,57.12,57.375]" --shape "(1,224,224,3)" --color_format BGR
```
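
Read together, these flags move the whole preprocessing pipeline into the server: `--shape` and `--layout NHWC:NCHW` let it accept a `(1,224,224,3)` image while feeding the model its native NCHW layout, and `--mean` and `--scale` configure per-channel normalization (`--color_format BGR` presumably fixes the channel order the values apply to). A rough Python sketch of the normalization those values describe, as an illustration rather than the server's actual code path:

```python
import numpy as np

mean = np.array([123.675, 116.28, 103.53], dtype=np.float32)
scale = np.array([58.395, 57.12, 57.375], dtype=np.float32)

def server_side_equivalent(nhwc_image):
    # (1, 224, 224, 3) image in, normalized (1, 3, 224, 224) tensor out
    normalized = (nhwc_image.astype(np.float32) - mean) / scale
    return normalized.transpose(0, 3, 1, 2)
```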

The `onnx_model_demo.py` script can run inference both with and without performing preprocessing. Since in this variant preprocessing is done by the model server (via custom node), there's no need to perform any image preprocessing on the client side. In that case, run without `--run_preprocessing` option. See [preprocessing function](https://github.com/openvinotoolkit/model_server/blob/main/demos/using_onnx_model/python/onnx_model_demo.py#L26-L33) run in the client.
The `onnx_model_demo.py` script can run inference both with and without performing preprocessing. Since in this variant preprocessing is done by the model server, there's no need to perform any image preprocessing on the client side. In that case, run without the `--run_preprocessing` option. See the [preprocessing function](https://github.com/openvinotoolkit/model_server/blob/main/demos/using_onnx_model/python/onnx_model_demo.py#L26-L33) that runs in the client.

Run the client without preprocessing:
```bash
@@ -86,15 +70,3 @@ Running without preprocessing on client side
Class is with highest score: 309
Detected class name: bee
```

## Node parameters explanation
The additional preprocessing step applies a subtraction and a division to each pixel value in the image. This calculation is configured by passing two parameters to the _image transformation_ custom node in [config.json](https://github.com/openvinotoolkit/model_server/blob/main/demos/using_onnx_model/python/config.json#L32-L33):
```
"params": {
...
"mean_values": "[123.675,116.28,103.53]",
"scale_values": "[58.395,57.12,57.375]",
...
}
```
For each pixel, the custom node subtracts `123.675` from the blue value, `116.28` from the green value and `103.53` from the red value. Next, it divides in the same color order by the `58.395`, `57.12` and `57.375` values. This way the image data matches the input required by the ONNX model.
72 changes: 0 additions & 72 deletions demos/using_onnx_model/python/config.json

This file was deleted.

4 changes: 2 additions & 2 deletions demos/using_onnx_model/python/onnx_model_demo.py
@@ -60,12 +60,12 @@ def getJpeg(path, size):
if args["run_preprocessing"]:
print("Running with preprocessing on client side")
img = getJpeg(args["image_path"], 224)
input_name = "gpu_0/data_0"
input_name = "data"
else:
print("Running without preprocessing on client side")
with open(args["image_path"], "rb") as f:
img = f.read()
input_name = "0"
input_name = "data"

client = make_grpc_client(args["service_url"])
output = client.predict({input_name: img}, "resnet")
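
With both branches now using the single `data` input, a minimal client call mirrors the snippet above (a sketch assuming the `ovmsclient` package, a server on port 9001 running the Option 2 configuration so that preprocessing happens server side, and a placeholder image path):

```python
import numpy as np
from ovmsclient import make_grpc_client

client = make_grpc_client("localhost:9001")
with open("path/to/image.jpeg", "rb") as f:
    img = f.read()  # raw JPEG bytes; the server decodes and normalizes them
output = client.predict({"data": img}, "resnet")
print("Class with highest score:", np.argmax(output))
```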
12 changes: 6 additions & 6 deletions spelling-whitelist.txt
@@ -9,12 +9,12 @@ src/shape.cpp:438: strIn
src/shape.cpp:488: strIn
src/shape.cpp:507: strIn
src/shape.hpp:121: strIn
src/test/modelconfig_test.cpp:473: OptionA
src/test/modelconfig_test.cpp:479: OptionA
src/test/modelconfig_test.cpp:485: OptionA
src/test/modelconfig_test.cpp:491: OptionA
src/test/modelconfig_test.cpp:497: OptionA
src/test/modelconfig_test.cpp:503: OptionA
src/test/modelconfig_test.cpp:565: OptionA
src/test/modelconfig_test.cpp:571: OptionA
src/test/modelconfig_test.cpp:577: OptionA
src/test/modelconfig_test.cpp:583: OptionA
src/test/modelconfig_test.cpp:589: OptionA
src/test/modelconfig_test.cpp:595: OptionA
src/test/modelinstance_test.cpp:1093: THROUGHTPUT
third_party/aws-sdk-cpp/aws-sdk-cpp.bz
WORKSPACE:98: thirdparty
3 changes: 3 additions & 0 deletions src/capi_frontend/server_settings.hpp
@@ -216,6 +216,9 @@ struct ModelsSettingsImpl {
std::string batchSize;
std::string shape;
std::string layout;
std::string mean;
std::string scale;
std::string colorFormat;
std::string modelVersionPolicy;
uint32_t nireq = 0;
std::string targetDevice;
27 changes: 27 additions & 0 deletions src/cli_parser.cpp
@@ -259,6 +259,18 @@ std::variant<bool, std::pair<int, std::string>> CLIParser::parse(int argc, char*
"Resets model layout.",
cxxopts::value<std::string>(),
"LAYOUT")
("mean",
"Resets model mean.",
cxxopts::value<std::string>(),
"MEAN")
("scale",
"Resets model scale.",
cxxopts::value<std::string>(),
"SCALE")
("color_format",
"Resets model color format.",
cxxopts::value<std::string>(),
"COLOR_FORMAT")
("model_version_policy",
"Model version policy",
cxxopts::value<std::string>(),
@@ -587,6 +599,21 @@ void CLIParser::prepareModel(ModelsSettingsImpl& modelsSettings, HFSettingsImpl&
modelsSettings.userSetSingleModelArguments.push_back("layout");
}

if (result->count("mean")) {
modelsSettings.mean = result->operator[]("mean").as<std::string>();
modelsSettings.userSetSingleModelArguments.push_back("mean");
}

if (result->count("scale")) {
modelsSettings.scale = result->operator[]("scale").as<std::string>();
modelsSettings.userSetSingleModelArguments.push_back("scale");
}

if (result->count("color_format")) {
modelsSettings.colorFormat = result->operator[]("color_format").as<std::string>();
modelsSettings.userSetSingleModelArguments.push_back("color_format");
}

if (result->count("model_version_policy")) {
modelsSettings.modelVersionPolicy = result->operator[]("model_version_policy").as<std::string>();
modelsSettings.userSetSingleModelArguments.push_back("model_version_policy");
3 changes: 3 additions & 0 deletions src/config.cpp
@@ -377,6 +377,9 @@ const std::string& Config::batchSize() const {
}
const std::string& Config::Config::shape() const { return this->modelsSettings.shape; }
const std::string& Config::layout() const { return this->modelsSettings.layout; }
const std::string& Config::means() const { return this->modelsSettings.mean; }
const std::string& Config::scales() const { return this->modelsSettings.scale; }
const std::string& Config::colorFormat() const { return this->modelsSettings.colorFormat; }
const std::string& Config::modelVersionPolicy() const { return this->modelsSettings.modelVersionPolicy; }
uint32_t Config::nireq() const { return this->modelsSettings.nireq; }
const std::string& Config::targetDevice() const {
22 changes: 21 additions & 1 deletion src/config.hpp
@@ -193,12 +193,32 @@ class Config {
const std::string& shape() const;

/**
* @brief Get the layout
*
* @return const std::string&
*/
const std::string& layout() const;

/**
* @brief Get means
*
* @return const std::string&
*/
const std::string& means() const;
/**
* @brief Get scales
*
* @return const std::string&
*/
const std::string& scales() const;

/**
* @brief Get color format
*
* @return const std::string&
*/
const std::string& colorFormat() const;

/**
* @brief Get the shape
*