
Commit 4dd46b7

Merge pull request #97 from suryam789/main

ASC age prediction and performance update.md

2 parents 2e18210 + 513e6ac

File tree

4 files changed: +195 −18 lines changed

docs_src/use-cases/automated-self-checkout/advanced.md

Lines changed: 18 additions & 0 deletions

````diff
@@ -96,6 +96,24 @@ The table below lists the environment variables (EVs) that can be used as inputs
 |`PIPELINE_SCRIPT` | Pipeline script to run. | yolo11n.sh, yolo11n_effnetb0.sh, yolo11n_full.sh |
 
 
+## Available Pipelines
+
+- `yolo11n.sh` - Runs object detection only.
+- `yolo11n_full.sh` - Runs object detection, object classification, text detection, text recognition, and barcode detection.
+- `yolo11n_effnetb0.sh` - Runs object detection and object classification.
+- `obj_detection_age_prediction.sh` - Runs two parallel streams:<br>
+&emsp;Stream 1: Object detection and classification on retail video.<br>
+&emsp;Stream 2: Face detection and age/gender recognition on age prediction video.
+
+### Models used
+
+- Age/Gender Recognition - `age-gender-recognition-retail-0013`
+- Face Detection - `face-detection-retail-0004`
+- Object Classification - `efficientNet-B0`
+- Object Detection - `YOLOv11n`
+- Text Detection - `horizontal-text-detection-0002`
+- Text Recognition - `text-recognition-0012`
+
 ## Using a Custom Model
 
 You can replace the default detection model with your own trained model by following these steps:
````
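The pipeline script names above are passed through the `PIPELINE_SCRIPT` EV, where a typo only surfaces at container startup. A pre-launch sanity check can be sketched in shell; the name list comes from the Available Pipelines section, but the check itself is illustrative, not project code:

```shell
# Validate the requested pipeline script against the documented options.
# Illustrative sketch only; the names are taken from the docs above.
PIPELINE_SCRIPT="${PIPELINE_SCRIPT:-yolo11n.sh}"   # documented default
case "$PIPELINE_SCRIPT" in
  yolo11n.sh|yolo11n_full.sh|yolo11n_effnetb0.sh|obj_detection_age_prediction.sh)
    echo "ok: $PIPELINE_SCRIPT" ;;
  *)
    echo "unknown pipeline script: $PIPELINE_SCRIPT" >&2
    exit 1 ;;
esac
```

With no override set, the default `yolo11n.sh` passes the check.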

docs_src/use-cases/automated-self-checkout/performance.md

Lines changed: 82 additions & 9 deletions

````diff
@@ -2,22 +2,66 @@
 
 The performance tools repository is included as a GitHub submodule in this project. The performance tools enable you to test the pipeline system performance on various hardware.
 
-## Benchmark specific number of pipelines
 
-Before running benchmark commands, make sure you already configured python and its dependencies. Visit the Performance tools installation guide [HERE]((../../performance-tools/benchmark.md#benchmark-a-cv-pipeline))
+## Benchmark Quick Start command
 
-You can launch a specific number of Automated Self Checkout containers using the PIPELINE_COUNT environment variable. Default is to launch `one` yolo11n.sh pipeline. You can override these values through Environment Variables.
+```bash
+make update-submodules
+```
+`update-submodules` ensures all submodules are initialized, updated to their latest remote versions, and ready for use.
 
-!!! Note
-    The first time running this command may take few minutes. It will build all performance tools containers
+```bash
+make benchmark-quickstart
+```
+The above command would:
+- Run headless (no display needed: `RENDER_MODE=0`)
+- Use the full pipeline (`PIPELINE_SCRIPT=obj_detection_age_prediction.sh`)
+- Target GPU by default (`DEVICE_ENV=res/all-gpu.env`)
+- Generate benchmark metrics
+- Run `make consolidate-metrics` automatically
 
-After running the following commands, you will find the results in `performance-tools/benchmark-scripts/results/` folder.
+## Understanding Benchmarking Types
+
+Before running benchmark commands, make sure you have already configured Python and its dependencies. Visit the Performance tools installation guide [HERE](../../performance-tools/benchmark.md#benchmark-a-cv-pipeline).
 
 
 ### Default benchmark command
 
+```bash
+make update-submodules
+```
+`update-submodules` ensures all submodules are initialized, updated to their latest remote versions, and ready for use.
+
 ```bash
 make benchmark
 ```
+Runs with:
+- `RENDER_MODE=0`
+- `PIPELINE_SCRIPT=yolo11n.sh`
+- `DEVICE_ENV=res/all-cpu.env`
+- `PIPELINE_COUNT=1`
+
+You can override these values through Environment Variables.
+
+List of EVs:
+
+| Variable | Description | Values |
+|:----|:----|:---|
+|`BATCH_SIZE_DETECT` | number of frames batched together for a single inference, used in the [gvadetect batch-size property](https://dlstreamer.github.io/elements/gvadetect.html) | 0-N |
+|`BATCH_SIZE_CLASSIFY` | number of frames batched together for a single inference, used in the [gvaclassify batch-size property](https://dlstreamer.github.io/elements/gvaclassify.html) | 0-N |
+|`RENDER_MODE` | for displaying the pipeline and overlaying CV metadata | 1, 0 |
+|`PIPELINE_COUNT` | number of Automated Self Checkout Docker container instances to launch | Ex: 1 |
+|`PIPELINE_SCRIPT` | pipeline script to run | yolo11n_effnetb0.sh, obj_detection_age_prediction.sh, etc. |
+|`DEVICE_ENV` | device to use for classification and detection | res/all-cpu.env, res/all-gpu.env, res/det-gpu_class-npu.env, etc. |
+
+> **Note:**
+> The higher the `PIPELINE_COUNT`, the higher the stress on the system.
+> Increasing this value will run more parallel pipelines, increasing resource usage and testing system limits.
+
+!!! Note
+    The first time running this command may take a few minutes. It will build all performance tools containers.
+
+After running the following commands, you will find the results in the `performance-tools/benchmark-scripts/results/` folder.
 
 ### Benchmark `2` pipelines in parallel:
 
````
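The documented defaults can be mirrored with shell `${VAR:-default}` expansion. The sketch below only composes and prints the effective benchmark invocation; the real defaults live in the project's Makefile:

```shell
# Mirror the documented defaults with ${VAR:-default} expansion and print the
# effective command. Illustrative only; the Makefile is authoritative.
RENDER_MODE="${RENDER_MODE:-0}"
PIPELINE_SCRIPT="${PIPELINE_SCRIPT:-yolo11n.sh}"
DEVICE_ENV="${DEVICE_ENV:-res/all-cpu.env}"
PIPELINE_COUNT="${PIPELINE_COUNT:-1}"
echo "make RENDER_MODE=$RENDER_MODE PIPELINE_SCRIPT=$PIPELINE_SCRIPT DEVICE_ENV=$DEVICE_ENV PIPELINE_COUNT=$PIPELINE_COUNT benchmark"
```

Exporting any of these variables (or passing them on the `make` command line) overrides the corresponding default.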
````diff
@@ -28,19 +72,48 @@ make PIPELINE_COUNT=2 benchmark
 ### Benchmark command with environment variable overrides
 
 ```bash
-make PIPELINE_SCRIPT=yolo11n_effnetb0.sh DEVICE_ENV=res/all-gpu.env PIPELINE_COUNT=2 benchmark
+make PIPELINE_SCRIPT=yolo11n_effnetb0.sh DEVICE_ENV=res/all-gpu.env PIPELINE_COUNT=1 benchmark
 ```
 
-Alternatively you can directly call the benchmark.py. This enables you to take advantage of all performance tools parameters. More details about the performance tools can be found [HERE](../../performance-tools/benchmark.md#benchmark-a-cv-pipeline)
+### Benchmark command for the full pipeline (age prediction + object classification) using GPU
 
 ```bash
-cd performance-tools/benchmark-scripts && python benchmark.py --compose_file ../../src/docker-compose.yml --pipeline 2
+make PIPELINE_SCRIPT=obj_detection_age_prediction.sh DEVICE_ENV=res/all-gpu.env PIPELINE_COUNT=1 benchmark
 ```
+`obj_detection_age_prediction.sh` runs TWO video streams in parallel even with `PIPELINE_COUNT=1`:
+
+Stream 1: Object detection + classification on retail video <br>
+Stream 2: Face detection + age/gender prediction on age prediction video
+
+
+## Create a consolidated metrics file
+
+After running the benchmark command, run this command to see the benchmarking results:
+
+```bash
+make consolidate-metrics
+```
+
+`metrics.csv` provides a summary of system and pipeline performance, including FPS, latency, CPU/GPU utilization, memory usage, and power consumption for each benchmark run.
+It helps evaluate hardware efficiency and resource usage during automated self-checkout pipeline tests.
 
 ## Benchmark Stream Density
 
 To test the maximum number of Automated Self Checkout containers/pipelines that can run on a given system, use the TARGET_FPS environment variable. The default is to find the container threshold above 14.95 FPS with the yolo11n.sh pipeline. You can override these values through Environment Variables.
 
+List of EVs:
+
+| Variable | Description | Values |
+|:----|:----|:---|
+|`TARGET_FPS` | threshold value for FPS to consider a valid stream | Ex. 14.95 |
+|`OOM_PROTECTION` | flag to enable/disable OOM checks before scaling the pipeline (enabled by default) | 1, 0 |
+
+> **Note:**
+>
+> An OOM crash occurs when a system or application tries to use more memory (RAM) than is available, causing the operating system to forcibly terminate processes to free up memory.<br>
+> If `OOM_PROTECTION` is set to 0, the system may crash or become unresponsive, requiring a hard reboot.
+
 ```bash
 make benchmark-stream-density
 ```
````
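Once `metrics.csv` exists, simple post-processing is possible. The sketch below generates a sample file with assumed column names (`pipeline,fps,cpu_util` — the real columns come from the performance tools) and averages the FPS column:

```shell
# Hypothetical metrics post-processing: the CSV header below is illustrative
# only; inspect your generated metrics.csv for the actual column names.
cat > /tmp/metrics_sample.csv <<'EOF'
pipeline,fps,cpu_util
yolo11n.sh,15.2,41.0
yolo11n.sh,14.8,43.5
EOF

# Average the fps column (field 2), skipping the header row.
awk -F, 'NR>1 {sum+=$2; n++} END {printf "avg fps: %.2f\n", sum/n}' /tmp/metrics_sample.csv
# → avg fps: 15.00
```

The same one-liner works on the real consolidated file once you substitute the correct path and column index.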

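The OOM check that `OOM_PROTECTION=1` enables can be illustrated with a simplified guard. This is not the performance tools' actual implementation, just the idea of verifying memory headroom before adding another stream:

```shell
# Simplified illustration of an OOM guard (Linux): check MemAvailable before
# scaling up. The 2 GiB threshold is an arbitrary example, not a project value.
avail_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
min_kb=$((2 * 1024 * 1024))   # require ~2 GiB of headroom in this sketch
if [ "$avail_kb" -ge "$min_kb" ]; then
  echo "headroom ok"
else
  echo "low memory: skip scaling"
fi
```

With the check disabled (`OOM_PROTECTION=0`), stream density testing scales without such a guard, which is why the note above warns about hard crashes.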
docs_src/use-cases/loss-prevention/advanced.md

Lines changed: 94 additions & 8 deletions

````diff
@@ -1,11 +1,97 @@
-### 1. Run benchmarking on CPU/NPU/GPU.
->*By default, the configuration is set to use the CPU. If you want to benchmark the application on GPU or NPU, please update the device value in workload_to_pipeline.json.*
+## Benchmark Quick Start command
+```bash
+make update-submodules
+```
+`update-submodules` ensures all submodules are initialized, updated to their latest remote versions, and ready for use.
 
-```sh
-make benchmark
+```bash
+make benchmark-quickstart
+```
+The above command would:
+- Run headless (no display needed: `RENDER_MODE=0`)
+- Target GPU by default (`WORKLOAD_DIST=workload_to_pipeline_gpu.json`)
+- Run 6 streams, each with a different workload (`CAMERA_STREAM=camera_to_workload_full.json`)
+- Generate benchmark metrics
+- Run `make consolidate-metrics` automatically
+
+
+## Understanding Benchmarking Types
+
+### Default benchmark command
+
+```bash
+make update-submodules
+```
+`update-submodules` ensures all submodules are initialized, updated to their latest remote versions, and ready for use.
+
+```bash
+make benchmark
 ```
+Runs with:
+- `RENDER_MODE=0`
+- `CAMERA_STREAM=camera_to_workload.json`
+- `WORKLOAD_DIST=workload_to_pipeline.json`
+- `PIPELINE_COUNT=1`
+
+You can override these values through the following Environment Variables.
+
+| Variable | Description | Values |
+|:----|:----|:---|
+|`RENDER_MODE` | for displaying the pipeline and overlaying CV metadata | 1, 0 |
+|`PIPELINE_COUNT` | number of Loss Prevention Docker container instances to launch | Ex: 1 |
+|`WORKLOAD_DIST` | defines how each workload is assigned to a specific processing unit (CPU, GPU, NPU) | workload_to_pipeline_cpu.json, workload_to_pipeline_gpu.json, workload_to_pipeline_gpu-npu.json, workload_to_pipeline_hetero.json, workload_to_pipeline.json |
+|`CAMERA_STREAM` | defines camera settings and their associated workloads for the pipeline | camera_to_workload.json, camera_to_workload_full.json |
+
+> **Note:**
+> The higher the `PIPELINE_COUNT`, the higher the stress on the system.
+> Increasing this value will run more parallel pipelines, increasing resource usage and testing system limits.
+
+### All CAMERA_STREAM options
+- `camera_to_workload.json`
+
+| Camera_ID | Workload |
+|:----|:---|
+| cam1 | items_in_basket + multi_product_identification |
+| cam2 | hidden_items, product_switching |
+| cam3 | fake_scan_detection |
+
+- `camera_to_workload_full.json`
+
+| Camera_ID | Workload |
+|:----|:---|
+| cam1 | items_in_basket |
+| cam2 | hidden_items |
+| cam3 | fake_scan_detection |
+| cam4 | multi_product_identification |
+| cam5 | product_switching |
+| cam6 | sweet_heartening |
+
+### All WORKLOAD_DIST options
+
+- `workload_to_pipeline_cpu.json` - All the workloads run on CPU.
+- `workload_to_pipeline_gpu.json` - All the workloads run on GPU.
+- `workload_to_pipeline_gpu-npu.json` -
+    - items_in_basket, hidden_items, multi_product_identification and product_switching run on GPU,
+    - fake_scan_detection and sweet_heartening run on NPU.
+- `workload_to_pipeline_hetero.json` -
+
+| Workload | gvadetect | gvaclassify | gvainference |
+|:---|:---|:---|:---|
+| items_in_basket | GPU | GPU | - |
+| hidden_items | GPU | CPU | - |
+| fake_scan_detection | GPU | CPU | - |
+| multi_product_identification | GPU | CPU | - |
+| product_switching | GPU | GPU | - |
+| sweet_heartening | NPU | - | NPU |
+- `workload_to_pipeline.json` -
+    - items_in_basket, multi_product_identification and sweet_heartening run on CPU,
+    - product_switching and hidden_items run on GPU,
+    - fake_scan_detection runs on NPU.
+
+!!! Note
+    The first time running this command may take a few minutes. It will build all performance tools containers.
 
-### 2. See the benchmarking results.
+### See the benchmarking results
 
 ```sh
 make consolidate-metrics
````
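The `CAMERA_STREAM` files map cameras to workloads as shown in the tables above. The JSON below is a hypothetical schema for illustration only; the real files in the project's `configs/` directory are authoritative:

```shell
# Write a demo camera-to-workload mapping mirroring the cam1-cam3 table above,
# then confirm it is well-formed JSON. The schema (keys "cameras", "id",
# "workloads") is an assumption for illustration, not the project's format.
cat > /tmp/camera_to_workload_demo.json <<'EOF'
{
  "cameras": [
    {"id": "cam1", "workloads": ["items_in_basket", "multi_product_identification"]},
    {"id": "cam2", "workloads": ["hidden_items", "product_switching"]},
    {"id": "cam3", "workloads": ["fake_scan_detection"]}
  ]
}
EOF
python3 -m json.tool /tmp/camera_to_workload_demo.json > /dev/null && echo "valid JSON"
# → valid JSON
```

Validating an edited config this way before a benchmark run catches syntax errors early.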
````diff
@@ -14,15 +100,15 @@ cat benchmark/metrics.csv
 ```
 
 
-## 3.🛠️ Other Useful Make Commands.
+## 🛠️ Other Useful Make Commands
 
 - `make validate-all-configs` — Validate all configuration files
 - `make clean-images` — Remove dangling Docker images
 - `make clean-containers` — Remove stopped containers
 - `make clean-all` — Remove all unused Docker resources
 
 
-## 4.⚙️ Configuration
+## ⚙️ Configuration
 
 The application is highly configurable via JSON files in the `configs/` directory:
 
````
````diff
@@ -72,4 +158,4 @@ The application is highly configurable via JSON files in the `configs/` director
 - `src/` — Main source code and pipeline runner scripts
 - `Makefile` — Build automation and workflow commands
 
----
+---
````

docs_src/use-cases/loss-prevention/getting_started.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -55,7 +55,7 @@
 
 8. Verify Results
 
-After starting Automated Self Checkout you will begin to see result files being written into the results/ directory. Here are example outputs from the 3 log files.
+After starting Loss Prevention you will begin to see result files being written into the results/ directory. Here are example outputs from the 3 log files.
 
 gst-launch_<time>_gst.log
 ```
````

0 commit comments