overhaul elements of index.rst and inst.rst #497
base: develop
Conversation
merging changes to highlight automatic BF16 conversion
Pull Request Overview
This PR overhauls the documentation structure for index.rst and inst.rst to improve ease of use for developers. Key changes include reorganizing hardware configuration information, adding clearer installation instructions, and providing a structured development workflow overview.
- Moved hardware configuration and model support information from release notes to the main index page
- Enhanced installation instructions with specific links and PowerShell environment variable setup
- Added a 5-step development workflow overview for typical Ryzen AI usage
- Updated example titles and restructured documentation organization
Reviewed Changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| docs/relnotes_backup.rst | Created as backup containing original release notes content with hardware configurations |
| docs/relnotes.rst | Removed hardware configuration sections to focus on actual release notes |
| docs/inst.rst | Enhanced with detailed prerequisites table, PowerShell setup code, and clearer installation steps |
| docs/index.rst | Added comprehensive hardware support tables and development workflow overview |
| docs/getstartex.rst | Updated tutorial title and added more detailed explanations |
| docs/examples.rst | Simplified structure and updated example titles |
| docs/conf.py | Added sphinx_copybutton extension for code block copying |
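For context on the `docs/conf.py` row above: a minimal sketch of what enabling sphinx_copybutton looks like in a Sphinx configuration. Only the `sphinx_copybutton` entry is confirmed by this PR; the other entries shown are illustrative placeholders.

```python
# docs/conf.py (sketch; surrounding entries are assumed, not from the PR)
extensions = [
    "sphinx.ext.autodoc",   # placeholder for pre-existing extensions
    "sphinx_copybutton",    # new: adds a copy button to rendered code blocks
]
```

The package itself must also be available in the docs build environment (`pip install sphinx-copybutton`).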
```diff
  - Specify the name for the conda environment (default: ``ryzen-ai-1.6.0``)

- The Ryzen AI Software packages are now installed in the conda environment created by the installer.
+ The Ryzen AI Software packages should now installed in the conda environment created by the installer.
```
Copilot AI · Oct 15, 2025
Missing word 'be' in sentence. Should read 'should now be installed'.
Suggested change:
```diff
- The Ryzen AI Software packages should now installed in the conda environment created by the installer.
+ The Ryzen AI Software packages should now be installed in the conda environment created by the installer.
```
```
- Download and Install the NPU driver version: 32.0.203.280 or newer using the following links:
- Under "Task Manager" in Windows, go to Performance -> NPU0 to check the driver version.
- If needed, download the NPU driver version: 32.0.203.280 or the latest 32.0.203.304 here:
```
Copilot AI · Oct 15, 2025
The instruction mentions 'here:' but the actual download links are on the following lines. Consider rephrasing to 'download the NPU driver from one of the following links:' for better clarity.
Suggested change:
```diff
- - If needed, download the NPU driver version: 32.0.203.280 or the latest 32.0.203.304 here:
+ - If needed, download the NPU driver version: 32.0.203.280 or the latest 32.0.203.304 from one of the following links:
```
```
*The CIFAR-10 dataset consists of 60,000 32x32 colour images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images.* You can learn more about the CIFAR-10 dataset here: https://www.cs.toronto.edu/~kriz/cifar.html. This dataset is used in the subsequent steps for quantization and inference. The script also exports the provided PyTorch model into ONNX format. The following snippet from the script shows how the ONNX model is exported:
```
Copilot AI · Oct 15, 2025
[nitpick] The CIFAR-10 dataset description is formatted with asterisks instead of proper reStructuredText formatting. Consider using proper emphasis markup or a note directive for better presentation.
Suggested change:
```diff
- *The CIFAR-10 dataset consists of 60,000 32x32 colour images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images.* You can learn more about the CIFAR-10 dataset here: https://www.cs.toronto.edu/~kriz/cifar.html. This dataset is used in the subsequent steps for quantization and inference. The script also exports the provided PyTorch model into ONNX format. The following snippet from the script shows how the ONNX model is exported:
+ .. note::
+    The CIFAR-10 dataset consists of 60,000 32x32 colour images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images.
+
+ You can learn more about the CIFAR-10 dataset here: https://www.cs.toronto.edu/~kriz/cifar.html.
+ This dataset is used in the subsequent steps for quantization and inference. The script also exports the provided PyTorch model into ONNX format. The following snippet from the script shows how the ONNX model is exported:
```
```
The C++ source files, CMake list files, and related artifacts are provided in the ``cpp/resnet_cifar/*`` folder. The source file ``cpp/resnet_cifar/resnet_cifar.cpp`` takes 10 images from the CIFAR-10 test set, converts them to .png format, preprocesses them, and performs model inference. The example has onnxruntime dependencies that are provided in ``%RYZEN_AI_INSTALLATION_PATH%/onnxruntime/*``.

Run the following command to build the resnet example. Assign ``-DOpenCV_DIR`` to the OpenCV build directory.
```
Copilot AI · Oct 15, 2025
Path uses backslashes which are Windows-specific. Consider using forward slashes for cross-platform compatibility or noting this is Windows-specific.
Suggested change:
```diff
+ .. note::
+    The following command uses Windows-style backslashes and is intended for use in a Windows environment.
```
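For reference, CMake accepts forward-slash paths even on Windows, so a cross-platform form of the build command is possible. The sketch below is illustrative only; the PR excerpt does not show the full command or its flags:

```rst
.. code-block:: bash

   cd cpp/resnet_cifar
   cmake -S . -B build "-DOpenCV_DIR=C:/path/to/opencv/build"
   cmake --build build --config Release
```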
```
NPU
~~~

- :doc:`Getting Started Tutorial for INT8 models <getstartex>` - Uses a custom ResNet model to demonstrate:
```
Why remove "Getting Started" from the description?
```
     - 2025
     - ☑️
     -
   * - Ryzen Z2
```
I don't believe this device is supported.
```
****************

Ryzen AI 1.6 Software runs on AMD processors outlined below. For a more detailed list of supported devices, refer to the `processor specifications <https://www.amd.com/en/products/specifications/processors.html>`_ page (scroll to the "AMD Ryzen™ AI" column toward the right side of the table, and select "Available" from the pull-down menu). Support for Linux is coming soon in Ryzen AI 1.6.1.

.. list-table:: Supported Ryzen AI Processor Configurations
```
This table will only grow and will be hard to maintain as we add support for more platforms. It's redundant with the https://www.amd.com/en/products/specifications/processors.html page. And we risk creating inconsistencies. Case in point, see the comment about Z2 below.
I recommend we simply link to the official processor specification page.
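A minimal sketch of that replacement in the .rst source, reusing the link already present in this PR (wording is illustrative):

```rst
For the list of supported devices, refer to the
`processor specifications <https://www.amd.com/en/products/specifications/processors.html>`_
page and select "Available" in the "AMD Ryzen™ AI" column.
```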
```
     - GPU
     - NPU
     - Hybrid (NPU + iGPU)
   * - Ryzen AI 300
```
We support LLMs on all STX and KRK platforms, not all Ryzen AI 300. The first column in this table is not needed and should be removed.
```
.. list-table::
   :header-rows: 1

   * - Model Type
```
Why mention CPU and GPU for LLMs and not for other models? BF16 models can run on CPU and GPU on PHX/HPT.
The way LLMs and CNN/NLPs are presented is inconsistent. It would be preferable to find a common way of presenting the information.
```
*************************

The Ryzen AI development flow does not require any modifications to the existing model training processes and methods. The pre-trained model can be used as the starting point of the Ryzen AI flow.
A typical Ryzen AI flow might look like the following:
```
This is accurate for CNNs, but not for BF16 NLPs (no quantization step) or LLMs (OGA flow).
```
     - 2022 with `Desktop Development with C++` checked
   * - `cmake <https://cmake.org/download/>`_
     - >= 3.26
   * - `Python (Miniforge preferred) <https://conda-forge.org/download/>`_
```
Should we really say "Miniforge preferred"?
Internally to AMD, we need to use Miniforge. But other companies may have different requirements.
```
- Miniforge: ensure that the following path is set in the System PATH variable: ``path\to\miniforge3\condabin`` or ``path\to\miniforge3\Scripts\`` or ``path\to\miniforge3\`` (The System PATH variable should be set in the *System Variables* section of the *Environment Variables* window).

$existingPath = [System.Environment]::GetEnvironmentVariable('Path', 'Machine')

.. code-block:: powershell
```
Why not put all 3 lines in the same code block?
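For illustration, a single combined block could look like the sketch below. Only the first two lines appear in the PR excerpt; the final `SetEnvironmentVariable` call is an assumption based on the surrounding context:

```rst
.. code-block:: powershell

   # Read the current machine-level PATH
   $existingPath = [System.Environment]::GetEnvironmentVariable('Path', 'Machine')
   # Paths to append (adjust <user> for the local install)
   $newPaths = "C:\Users\<user>\miniforge3\Scripts;C:\Users\<user>\miniforge3\condabin"
   # Write back the combined PATH (assumed step; requires an elevated shell)
   [System.Environment]::SetEnvironmentVariable('Path', "$existingPath;$newPaths", 'Machine')
```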
```
.. code-block:: powershell

   $newPaths = "C:\Users\<user>\miniforge3\Scripts;C:\Users\<user>\miniforge3\condabin"
```
This will only work for miniforge. If people have Anaconda or Miniconda, this will not work.
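One way to make the snippet distribution-agnostic, as a sketch (the `$condaRoot` variable is illustrative, not from the PR):

```rst
.. code-block:: powershell

   # Point this at whichever distribution is installed:
   # e.g. miniforge3, miniconda3, or anaconda3
   $condaRoot = "C:\Users\<user>\miniforge3"
   $newPaths = "$condaRoot\Scripts;$condaRoot\condabin"
```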
ThomasXilinx left a comment
Some of the proposed changes need more discussion
Overhaul of index.rst and inst.rst for ease of use for developers