|
658 | 658 | "\n",
|
659 | 659 | "\n",
|
660 | 660 | "\n",
|
661 |
| - "Lastly, we will use ``['Strategy'] = 'Latency'`` for all the layers in the hls4ml configuration. If one layer would have >4096 elements, we sould set ``['Strategy'] = 'Resource'`` for that layer, or increase the reuse factor by hand. You can find examples of how to do this below." |
| 661 | + "Lastly, we will use ``['Strategy'] = 'Latency'`` for all the layers in the hls4ml configuration. If one layer would have >4096 elements, we sould set ``['Strategy'] = 'Resource'`` for that layer, or increase the reuse factor by hand. You can find examples of how to do this below.\n", |
| 662 | + "\n", |
| 663 | + "**NOTE** Using `auto` precision can lead to undesired side effects. In case of this model, the bit width used for the output of the last fully connected layer is larger than can be reasonably represented with the look-up table in the softmax implementation. We therefore need to restrict it by hand to achieve proper results.\n" |
662 | 664 | ]
|
663 | 665 | },
|
664 | 666 | {
|
|
674 | 676 | "hls_config = hls4ml.utils.config_from_keras_model(\n",
|
675 | 677 | " model, granularity='name', backend='Vitis', default_precision='ap_fixed<16,6>'\n",
|
676 | 678 | ")\n",
|
677 |
| - "\n", |
| 679 | + "hls_config['LayerName']['output_dense']['Precision']['result'] = 'fixed<16,6,RND,SAT>'\n", |
678 | 680 | "plotting.print_dict(hls_config)\n",
|
679 | 681 | "\n",
|
680 | 682 | "\n",
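The note in the hunk above says that a layer with more than 4096 elements should be switched to the ``Resource`` strategy or given a larger reuse factor. As a minimal sketch of what such a per-layer override could look like (the layer name `fc1` is a hypothetical placeholder; `Strategy` and `ReuseFactor` are the standard per-layer keys in the dict returned by `config_from_keras_model`):

```python
import hls4ml

# Per-layer config, as in the hunk above (assumes `model` is a Keras model in scope)
hls_config = hls4ml.utils.config_from_keras_model(
    model, granularity='name', backend='Vitis', default_precision='ap_fixed<16,6>'
)

# Hypothetical override for a large layer named 'fc1': switch it to the
# Resource strategy and reuse each multiplier 8 times instead of fully unrolling.
hls_config['LayerName']['fc1']['Strategy'] = 'Resource'
hls_config['LayerName']['fc1']['ReuseFactor'] = 8
```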
|
|
721 | 723 | },
|
722 | 724 | {
|
723 | 725 | "cell_type": "markdown",
|
724 |
| - "metadata": { |
725 |
| - "deletable": false, |
726 |
| - "editable": false |
727 |
| - }, |
| 726 | + "metadata": {}, |
728 | 727 | "source": [
|
729 |
| - "The colored boxes are the distribution of the weights of the model, and the gray band illustrates the numerical range covered by the chosen fixed point precision. As we configured, this model uses a precision of ``ap_fixed<16,6>`` for all layers of the model. Let's now build our QKeras model" |
| 728 | + "The colored boxes are the distribution of the weights of the model, and the gray band illustrates the numerical range covered by the chosen fixed point precision. As we configured, this model uses a precision of ``ap_fixed<16,6>`` for the weights and biases of all layers of the model. \n", |
| 729 | + "\n", |
| 730 | + "Let's now build our QKeras model. \n", |
| 731 | + "\n", |
| 732 | + "**NOTE** Using `auto` precision can lead to undesired side effects. In case of this model, the bit width used for the output of the last fully connected layer is larger than can be reasonably represented with the look-up table in the softmax implementation. We therefore need to restrict it by hand to achieve proper results." |
730 | 733 | ]
|
731 | 734 | },
|
732 | 735 | {
|
|
737 | 740 | "source": [
|
738 | 741 | "# Then the QKeras model\n",
|
739 | 742 | "hls_config_q = hls4ml.utils.config_from_keras_model(qmodel, granularity='name', backend='Vitis')\n",
|
| 743 | + "hls_config_q['LayerName']['output_dense']['Precision']['result'] = 'fixed<16,6,RND,SAT>'\n", |
740 | 744 | "\n",
|
741 | 745 | "plotting.print_dict(hls_config_q)\n",
|
742 | 746 | "\n",
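The plot described in the markdown cell above (weight distributions as colored boxes, representable fixed-point ranges as a gray band) is the kind of figure hls4ml's numerical profiling utility produces. A sketch of how such a plot is typically generated (assuming the notebook's `model` and a compiled `hls_model` are in scope; `X_test` is a hypothetical array of test inputs):

```python
import hls4ml.model.profiling

# Overlays the per-layer weight (and, when X is given, activation) distributions
# with the numerical range covered by each layer's fixed-point precision.
figs = hls4ml.model.profiling.numerical(model=model, hls_model=hls_model, X=X_test)
```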
|
|
1315 | 1319 | "name": "python",
|
1316 | 1320 | "nbconvert_exporter": "python",
|
1317 | 1321 | "pygments_lexer": "ipython3",
|
1318 |
| - "version": "3.11.9" |
| 1322 | + "version": "3.10.14" |
1319 | 1323 | }
|
1320 | 1324 | },
|
1321 | 1325 | "nbformat": 4,
|
|