# docs_src/use-cases/automated-self-checkout/performance.md
The performance tools repository is included as a GitHub submodule in this project. The performance tools enable you to test pipeline system performance on various hardware.

## Benchmark Quick Start command

```bash
make update-submodules
```

`update-submodules` ensures all submodules are initialized, updated to their latest remote versions, and ready for use.
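
Under the hood, a target like `update-submodules` typically wraps the standard git submodule commands; the recipe below is a sketch of that idea (the repository's actual Makefile may differ). It runs in a scratch repository, so it is safe to try anywhere:

```shell
# Work in a scratch repository so this sketch is self-contained;
# in the real project you would run these from the repo root.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo

# Initialize any submodules that have not been cloned yet and sync
# each one to the commit recorded by the superproject.
git submodule update --init --recursive

# With no submodules registered this is a no-op; in the real repo it
# fetches performance-tools and any other submodules.
echo "registered submodules: $(git submodule status | wc -l | tr -d ' ')"
```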

```bash
make benchmark-quickstart
```

The above command will:

- Run headless (no display needed: `RENDER_MODE=0`)
- Use the full pipeline (`PIPELINE_SCRIPT=obj_detection_age_prediction.sh`)
- Target the GPU by default (`DEVICE_ENV=res/all-gpu.env`)
- Generate benchmark metrics
- Run `make consolidate-metrics` automatically
## Understanding Benchmarking Types
Before running benchmark commands, make sure you have already configured Python and its dependencies. Visit the performance tools installation guide [HERE](../../performance-tools/benchmark.md#benchmark-a-cv-pipeline).
### Default benchmark command

```bash
make update-submodules
```

`update-submodules` ensures all submodules are initialized, updated to their latest remote versions, and ready for use.

```bash
make benchmark
```

Runs with:

- `RENDER_MODE=0`
- `PIPELINE_SCRIPT=yolo11n.sh`
- `DEVICE_ENV=res/all-cpu.env`
- `PIPELINE_COUNT=1`

You can override these values through Environment Variables.

List of environment variables:

| Variable | Description | Values |
|:----|:----|:---|
|`BATCH_SIZE_DETECT`| number of frames batched together for a single inference, used as the [gvadetect batch-size property](https://dlstreamer.github.io/elements/gvadetect.html)| 0-N |
|`BATCH_SIZE_CLASSIFY`| number of frames batched together for a single inference, used as the [gvaclassify batch-size property](https://dlstreamer.github.io/elements/gvaclassify.html)| 0-N |
|`RENDER_MODE`| displays the pipeline and overlays CV metadata | 1, 0 |
|`PIPELINE_COUNT`| number of Automated Self Checkout Docker container instances to launch | Ex: 1 |
|`PIPELINE_SCRIPT`| pipeline script to run | yolo11n_effnetb0.sh, obj_detection_age_prediction.sh, etc. |
|`DEVICE_ENV`| device to use for classification and detection | res/all-cpu.env, res/all-gpu.env, res/det-gpu_class-npu.env, etc. |

> **Note:**
> The higher the `PIPELINE_COUNT`, the higher the stress on the system.
> Increasing this value runs more parallel pipelines, increasing resource usage and testing the system's limits.

!!! Note
    The first time you run this command it may take a few minutes, as it builds all of the performance tools containers.

After running the commands above, you will find the results in the `performance-tools/benchmark-scripts/results/` folder.
### Benchmark `2` pipelines in parallel:

```bash
make PIPELINE_COUNT=2 benchmark
```

### Benchmark command with environment variable overrides

```bash
make PIPELINE_SCRIPT=yolo11n_effnetb0.sh DEVICE_ENV=res/all-gpu.env PIPELINE_COUNT=1 benchmark
```
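
Overrides like the ones above work through ordinary `make` variable precedence: a variable given on the command line wins over a `?=` default inside the Makefile. A toy sketch of that mechanism (not this repository's actual Makefile):

```shell
# Build a throwaway Makefile with a ?= default, mirroring how variables
# such as PIPELINE_COUNT are commonly declared.
tmp=$(mktemp -d)
printf 'PIPELINE_COUNT ?= 1\n\nshow:\n\t@echo PIPELINE_COUNT=$(PIPELINE_COUNT)\n' > "$tmp/Makefile"

# The default applies when nothing is passed on the command line.
make --no-print-directory -C "$tmp" show                    # prints PIPELINE_COUNT=1

# A command-line assignment overrides the ?= default.
make --no-print-directory -C "$tmp" PIPELINE_COUNT=2 show   # prints PIPELINE_COUNT=2
```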

### Benchmark command for the full pipeline (age prediction + object classification) using GPU

```bash
make PIPELINE_SCRIPT=obj_detection_age_prediction.sh DEVICE_ENV=res/all-gpu.env PIPELINE_COUNT=1 benchmark
```

`obj_detection_age_prediction.sh` runs two video streams in parallel even with `PIPELINE_COUNT=1`:

- Stream 1: object detection + classification on a retail video
- Stream 2: face detection + age/gender prediction on an age-prediction video

## Create a consolidated metrics file

After running a benchmark command, run the following to see the benchmarking results:

```bash
make consolidate-metrics
```

`metrics.csv` provides a summary of system and pipeline performance, including FPS, latency, CPU/GPU utilization, memory usage, and power consumption for each benchmark run. It helps evaluate hardware efficiency and resource usage during automated self-checkout pipeline tests.
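
As a quick way to slice the consolidated results, standard shell tools work well; the column names in this sketch are made up for illustration (check the header of your actual `metrics.csv` first):

```shell
# An illustrative metrics file; the real consolidate-metrics output
# may use different column names and ordering.
cat > /tmp/metrics_demo.csv <<'EOF'
run,fps,cpu_util
1,15.2,62
2,14.8,71
3,15.0,68
EOF

# Average the fps column (field 2), skipping the header row.
awk -F, 'NR > 1 { sum += $2; n++ } END { printf "avg fps: %.1f\n", sum / n }' /tmp/metrics_demo.csv
# prints: avg fps: 15.0
```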
## Benchmark Stream Density
To find the maximum number of Automated Self Checkout containers/pipelines that can run on a given system, use the `TARGET_FPS` environment variable. The default is to find the number of containers that can each sustain at least 14.95 FPS with the `yolo11n.sh` pipeline. You can override these values through Environment Variables.

List of environment variables:

| Variable | Description | Values |
|:----|:----|:---|
|`TARGET_FPS`| FPS threshold for a stream to be considered valid | Ex: 14.95 |
|`OOM_PROTECTION`| enable/disable out-of-memory checks before scaling the pipeline (enabled by default) | 1, 0 |

> **Note:**
>
> An OOM (out-of-memory) crash occurs when a system or application tries to use more memory (RAM) than is available, causing the operating system to forcibly terminate processes to free up memory.
> If `OOM_PROTECTION` is set to 0, the system may crash or become unresponsive, requiring a hard reboot.
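
Conceptually, the stream-density search adds pipelines until per-stream FPS falls below `TARGET_FPS`. The sketch below simulates that loop with hard-coded FPS readings (the real tool measures live pipelines and also applies the OOM checks described above):

```shell
TARGET_FPS=14.95
count=0
# Simulated per-stream FPS as more pipelines are added; stand-ins
# for measurements the real benchmark would take.
for fps in 30.1 22.4 16.2 14.1; do
  # Stop at the first configuration that misses the target.
  if [ "$(awk -v f="$fps" -v t="$TARGET_FPS" 'BEGIN { print (f + 0 >= t + 0) ? 1 : 0 }')" -eq 0 ]; then
    break
  fi
  count=$((count + 1))
done
echo "max streams sustaining $TARGET_FPS FPS: $count"
# prints: max streams sustaining 14.95 FPS: 3
```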
# docs_src/use-cases/loss-prevention/advanced.md

## Benchmark Quick Start command

```bash
make update-submodules
```

`update-submodules` ensures all submodules are initialized, updated to their latest remote versions, and ready for use.

```bash
make benchmark-quickstart
```

The above command will:

- Run headless (no display needed: `RENDER_MODE=0`)
- Target GPU by default (`WORKLOAD_DIST=workload_to_pipeline_gpu.json`)
- Run 6 streams, each with a different workload (`CAMERA_STREAM=camera_to_workload_full.json`)
- Generate benchmark metrics
- Run `make consolidate-metrics` automatically

## Understanding Benchmarking Types
### Default benchmark command

```bash
make update-submodules
```

`update-submodules` ensures all submodules are initialized, updated to their latest remote versions, and ready for use.

```bash
make benchmark
```

Runs with:

- `RENDER_MODE=0`
- `CAMERA_STREAM=camera_to_workload.json`
- `WORKLOAD_DIST=workload_to_pipeline.json`
- `PIPELINE_COUNT=1`

You can override these values through the following Environment Variables.

| Variable | Description | Values |
|:----|:----|:---|
|`RENDER_MODE`| displays the pipeline and overlays CV metadata | 1, 0 |
|`PIPELINE_COUNT`| number of Loss Prevention Docker container instances to launch | Ex: 1 |
|`WORKLOAD_DIST`| defines how each workload is assigned to a specific processing unit (CPU, GPU, NPU) | workload_to_pipeline_cpu.json, workload_to_pipeline_gpu.json, workload_to_pipeline_gpu-npu.json, workload_to_pipeline_hetero.json, workload_to_pipeline.json |
|`CAMERA_STREAM`| defines camera settings and their associated workloads for the pipeline | camera_to_workload.json, camera_to_workload_full.json |

> **Note:**
> The higher the `PIPELINE_COUNT`, the higher the stress on the system.
> Increasing this value runs more parallel pipelines, increasing resource usage and testing the system's limits.
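
Since `WORKLOAD_DIST` and `CAMERA_STREAM` both point at JSON files, a malformed file is a common source of startup failures, and `python3 -m json.tool` makes a quick pre-flight check. The file content below is a made-up stand-in (the real schema is defined by the project's own JSON files):

```shell
# A stand-in config file; the real camera_to_workload.json schema
# may look different.
cat > /tmp/camera_demo.json <<'EOF'
{ "cameras": [ { "id": 1, "workload": "demo" } ] }
EOF

# json.tool exits non-zero on malformed JSON, so this catches syntax
# errors before the file is passed via CAMERA_STREAM.
python3 -m json.tool /tmp/camera_demo.json > /dev/null && echo "valid JSON"
# prints: valid JSON
```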
# docs_src/use-cases/loss-prevention/getting_started.md

8. Verify Results
After starting Loss Prevention, you will begin to see result files being written into the `results/` directory. Here are example outputs from the three log files.