Commit d09cffc

Merge branch 'main' into pre-commit-ci-update-config
2 parents 29db5e8 + b1d2c30 commit d09cffc

9 files changed, +24 -16 lines changed

environment.yml

Lines changed: 1 addition & 0 deletions
@@ -3,6 +3,7 @@ channels:
   - conda-forge
 dependencies:
   - python=3.10.16
+  - notebook==6.4.12
   - jupyter_contrib_nbextensions
   - jupyterhub
   - jupyter-book

part1_getting_started.ipynb

Lines changed: 1 addition & 1 deletion
@@ -399,7 +399,7 @@
     "name": "python",
     "nbconvert_exporter": "python",
     "pygments_lexer": "ipython3",
-    "version": "3.10.16"
+    "version": "3.10.14"
    }
   },
  "nbformat": 4,

part2_advanced_config.ipynb

Lines changed: 5 additions & 2 deletions
@@ -148,7 +148,9 @@
     "metadata": {},
     "source": [
      "## Customize\n",
-     "Let's just try setting the precision of the first layer weights to something more narrow than 16 bits. Using fewer bits can save resources in the FPGA. After inspecting the profiling plot above, let's try 8 bits with 1 integer bit.\n",
+     "Let's just try setting the precision of the first layer weights to something more narrow than 16 bits. Using fewer bits can save resources in the FPGA. After inspecting the profiling plot above, let's try 8 bits with 2 integer bits.\n",
+     "\n",
+     "**NOTE** Using `auto` precision can lead to undesired side effects. In the case of this model, the bit width used for the output of the last fully connected layer is larger than can be reasonably represented with the look-up table in the softmax implementation. We therefore need to restrict it by hand to achieve proper results.\n",
     "\n",
     "Then create a new `HLSModel`, and display the profiling with the new config. This time, just display the weight profile by not providing any data '`X`'. Then create the `HLSModel` and display the architecture. Notice the box around the weights of the first layer reflects the different precision."
    ]
@@ -160,6 +162,7 @@
     "outputs": [],
     "source": [
      "config['LayerName']['fc1']['Precision']['weight'] = 'ap_fixed<8,2>'\n",
+     "config['LayerName']['output']['Precision']['result'] = 'fixed<16,6,RND,SAT>'\n",
      "hls_model = hls4ml.converters.convert_from_keras_model(\n",
      "    model, hls_config=config, output_dir='model_1/hls4ml_prj_2', part='xcu250-figd2104-2L-e'\n",
      ")\n",
@@ -395,7 +398,7 @@
     "name": "python",
     "nbconvert_exporter": "python",
     "pygments_lexer": "ipython3",
-    "version": "3.10.16"
+    "version": "3.10.14"
    }
   },
  "nbformat": 4,

part3_compression.ipynb

Lines changed: 1 addition & 1 deletion
@@ -312,7 +312,7 @@
     "name": "python",
     "nbconvert_exporter": "python",
     "pygments_lexer": "ipython3",
-    "version": "3.10.16"
+    "version": "3.10.14"
    }
   },
  "nbformat": 4,

part4.1_HG_quantization.ipynb

Lines changed: 1 addition & 1 deletion
@@ -474,7 +474,7 @@
     "name": "python",
     "nbconvert_exporter": "python",
     "pygments_lexer": "ipython3",
-    "version": "3.10.16"
+    "version": "3.10.14"
    }
   },
  "nbformat": 4,

part4_quantization.ipynb

Lines changed: 1 addition & 1 deletion
@@ -397,7 +397,7 @@
     "name": "python",
     "nbconvert_exporter": "python",
     "pygments_lexer": "ipython3",
-    "version": "3.10.16"
+    "version": "3.10.14"
    }
   },
  "nbformat": 4,

part6_cnns.ipynb

Lines changed: 12 additions & 8 deletions
@@ -658,7 +658,9 @@
     "\n",
     "![alt text](images/conv2d_animation.gif \"The implementation of convolutional layers in hls4ml.\")\n",
     "\n",
-    "Lastly, we will use ``['Strategy'] = 'Latency'`` for all the layers in the hls4ml configuration. If one layer would have >4096 elements, we sould set ``['Strategy'] = 'Resource'`` for that layer, or increase the reuse factor by hand. You can find examples of how to do this below."
+    "Lastly, we will use ``['Strategy'] = 'Latency'`` for all the layers in the hls4ml configuration. If one layer has >4096 elements, we should set ``['Strategy'] = 'Resource'`` for that layer, or increase the reuse factor by hand. You can find examples of how to do this below.\n",
+    "\n",
+    "**NOTE** Using `auto` precision can lead to undesired side effects. In the case of this model, the bit width used for the output of the last fully connected layer is larger than can be reasonably represented with the look-up table in the softmax implementation. We therefore need to restrict it by hand to achieve proper results.\n"
    ]
   },
   {
@@ -674,7 +676,7 @@
     "hls_config = hls4ml.utils.config_from_keras_model(\n",
     "    model, granularity='name', backend='Vitis', default_precision='ap_fixed<16,6>'\n",
     ")\n",
-    "\n",
+    "hls_config['LayerName']['output_dense']['Precision']['result'] = 'fixed<16,6,RND,SAT>'\n",
     "plotting.print_dict(hls_config)\n",
     "\n",
     "\n",
@@ -721,12 +723,13 @@
   },
   {
    "cell_type": "markdown",
-   "metadata": {
-    "deletable": false,
-    "editable": false
-   },
+   "metadata": {},
    "source": [
-    "The colored boxes are the distribution of the weights of the model, and the gray band illustrates the numerical range covered by the chosen fixed point precision. As we configured, this model uses a precision of ``ap_fixed<16,6>`` for all layers of the model. Let's now build our QKeras model"
+    "The colored boxes are the distribution of the weights of the model, and the gray band illustrates the numerical range covered by the chosen fixed-point precision. As we configured, this model uses a precision of ``ap_fixed<16,6>`` for the weights and biases of all layers of the model.\n",
+    "\n",
+    "Let's now build our QKeras model.\n",
+    "\n",
+    "**NOTE** Using `auto` precision can lead to undesired side effects. In the case of this model, the bit width used for the output of the last fully connected layer is larger than can be reasonably represented with the look-up table in the softmax implementation. We therefore need to restrict it by hand to achieve proper results."
    ]
   },
   {
@@ -737,6 +740,7 @@
    "source": [
     "# Then the QKeras model\n",
     "hls_config_q = hls4ml.utils.config_from_keras_model(qmodel, granularity='name', backend='Vitis')\n",
+    "hls_config_q['LayerName']['output_dense']['Precision']['result'] = 'fixed<16,6,RND,SAT>'\n",
     "\n",
     "plotting.print_dict(hls_config_q)\n",
     "\n",
@@ -1315,7 +1319,7 @@
     "name": "python",
     "nbconvert_exporter": "python",
     "pygments_lexer": "ipython3",
-    "version": "3.11.9"
+    "version": "3.10.14"
    }
   },
  "nbformat": 4,

part7a_bitstream.ipynb

Lines changed: 1 addition & 1 deletion
@@ -282,7 +282,7 @@
     "name": "python",
     "nbconvert_exporter": "python",
     "pygments_lexer": "ipython3",
-    "version": "3.10.16"
+    "version": "3.10.14"
    }
   },
  "nbformat": 4,

part8_symbolic_regression.ipynb

Lines changed: 1 addition & 1 deletion
@@ -492,7 +492,7 @@
     "name": "python",
     "nbconvert_exporter": "python",
     "pygments_lexer": "ipython3",
-    "version": "3.10.16"
+    "version": "3.10.14"
    }
   },
  "nbformat": 4,
