From 35969c4c363c7104ad5384972c8f98d911aee1a7 Mon Sep 17 00:00:00 2001
From: Jan-Frederik Schulte
Date: Thu, 12 Dec 2024 11:52:20 -0500
Subject: [PATCH 1/3] add information about Vivado for part 7

---
 README.md              | 2 ++
 part7a_bitstream.ipynb | 4 ++--
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 45fd2bff..2963829e 100644
--- a/README.md
+++ b/README.md
@@ -27,6 +27,8 @@ conda activate hls4ml-tutorial
 source /path/to/your/installtion/Xilinx/Vitis_HLS/202X.X/settings64.(c)sh
 ```

+Note that part 7 of the tutorial makes use of the `VivadoAccelerator` backend of hls4ml, for which no Vitis equivalent is available yet. For this part of the tutorial it is therefore necessary to install and source Vivado HLS version 2019.2 or 2020.1, which can be obtained [here](https://www.xilinx.com/support/download/index.html/content/xilinx/en/downloadNav/vivado-design-tools/archive.html).
+
 ## Companion material

 We have prepared a set of slides with some introduction and more details on each of the exercises. Please find them [here](https://docs.google.com/presentation/d/1c4LvEc6yMByx2HJs8zUP5oxLtY6ACSizQdKvw5cg5Ck/edit?usp=sharing).

diff --git a/part7a_bitstream.ipynb b/part7a_bitstream.ipynb
index 1aa91da8..e0ed5a54 100644
--- a/part7a_bitstream.ipynb
+++ b/part7a_bitstream.ipynb
@@ -26,7 +26,7 @@
     "_add_supported_quantized_objects(co)\n",
     "import os\n",
     "\n",
-    "os.environ['PATH'] = os.environ['XILINX_Vivado'] + '/bin:' + os.environ['PATH']"
+    "os.environ['PATH'] = os.environ['XILINX_VIVADO'] + '/bin:' + os.environ['PATH']"
    ]
   },
   {
@@ -74,7 +74,7 @@
     "import hls4ml\n",
     "import plotting\n",
     "\n",
-    "config = hls4ml.utils.config_from_keras_model(model, granularity='name', backend='Vitis')\n",
+    "config = hls4ml.utils.config_from_keras_model(model, granularity='name')\n",
     "config['LayerName']['softmax']['exp_table_t'] = 'ap_fixed<18,8>'\n",
     "config['LayerName']['softmax']['inv_table_t'] = 'ap_fixed<18,4>'\n",
     "for layer in ['fc1', 'fc2', 'fc3', 'output']:\n",

From 15b60e20f1b755d0bb09680a0f48f4eaeac638b3 Mon Sep 17 00:00:00 2001
From: Jan-Frederik Schulte
Date: Thu, 12 Dec 2024 12:17:47 -0500
Subject: [PATCH 2/3] add a mention of Vivado to the part7a notebook

---
 part7a_bitstream.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/part7a_bitstream.ipynb b/part7a_bitstream.ipynb
index e0ed5a54..3381c192 100644
--- a/part7a_bitstream.ipynb
+++ b/part7a_bitstream.ipynb
@@ -7,7 +7,7 @@
    "source": [
     "# Part 7a: Bitstream Generation\n",
     "\n",
-    "In the previous sections we've seen how to train a Neural Network with a small resource footprint using QKeras, then to convert it to `hls4ml` and create an IP. That IP can be interfaced into a larger design to deploy on an FPGA device. In this section, we introduce the `VivadoAccelerator` backend of `hls4ml`, where we can easily target some supported devices to get up and running quickly. Specifically, we'll deploy the model on a [pynq-z2 board](http://www.pynq.io/)."
+    "In the previous sections we've seen how to train a Neural Network with a small resource footprint using QKeras, then to convert it to `hls4ml` and create an IP. That IP can be interfaced into a larger design to deploy on an FPGA device. In this section, we introduce the `VivadoAccelerator` backend of `hls4ml`, where we can easily target some supported devices to get up and running quickly. Specifically, we'll deploy the model on a [pynq-z2 board](http://www.pynq.io/). NOTE: This tutorial reuires on Vivado HLS instead of Vitis."
    ]
   },
   {

From d23d40ab3cbc4f5b494b80101b6a9c1b195e61ea Mon Sep 17 00:00:00 2001
From: Jan-Frederik Schulte
Date: Thu, 12 Dec 2024 13:17:30 -0500
Subject: [PATCH 3/3] fix typo

---
 part7a_bitstream.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/part7a_bitstream.ipynb b/part7a_bitstream.ipynb
index 3381c192..32fdcdc4 100644
--- a/part7a_bitstream.ipynb
+++ b/part7a_bitstream.ipynb
@@ -7,7 +7,7 @@
    "source": [
     "# Part 7a: Bitstream Generation\n",
     "\n",
-    "In the previous sections we've seen how to train a Neural Network with a small resource footprint using QKeras, then to convert it to `hls4ml` and create an IP. That IP can be interfaced into a larger design to deploy on an FPGA device. In this section, we introduce the `VivadoAccelerator` backend of `hls4ml`, where we can easily target some supported devices to get up and running quickly. Specifically, we'll deploy the model on a [pynq-z2 board](http://www.pynq.io/). NOTE: This tutorial reuires on Vivado HLS instead of Vitis."
+    "In the previous sections we've seen how to train a Neural Network with a small resource footprint using QKeras, then to convert it to `hls4ml` and create an IP. That IP can be interfaced into a larger design to deploy on an FPGA device. In this section, we introduce the `VivadoAccelerator` backend of `hls4ml`, where we can easily target some supported devices to get up and running quickly. Specifically, we'll deploy the model on a [pynq-z2 board](http://www.pynq.io/). NOTE: This tutorial requires Vivado HLS instead of Vitis."
    ]
   },
   {
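
For reference, the `VivadoAccelerator` flow these patches point to is driven from the notebook roughly as follows. This is a minimal sketch based on the hls4ml accelerator-backend API, assuming `model` is the QKeras model loaded earlier in part 7a; the output directory is illustrative.

```python
import hls4ml

# Per-layer configuration; note that backend='Vitis' is no longer passed
# (removed in PATCH 1/3), since this part targets Vivado HLS.
config = hls4ml.utils.config_from_keras_model(model, granularity='name')

# Select the VivadoAccelerator backend and one of its supported boards.
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    backend='VivadoAccelerator',
    board='pynq-z2',
    output_dir='model_3/hls4ml_prj_pynq',  # illustrative path
)

# bitfile=True runs Vivado HLS synthesis and then Vivado to produce a
# bitstream for the pynq-z2; this is why XILINX_VIVADO must point to a
# Vivado (HLS) 2019.2 or 2020.1 installation, as the README note explains.
hls_model.build(csim=False, export=True, bitfile=True)
```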