Title: Using LLM with Azure AI Foundry
Date: 2025-09-09
Category: Blog
Tags: azure
Slug: llm
Author: Guy Gregory
Summary: How to configure Simon Willison's popular LLM CLI tool with Azure AI Foundry models

### Getting Started with LLM

I'm a big fan of [Simon Willison's](https://simonwillison.net/) work, and recently I needed to do some simple inference using gpt-5 on Azure AI Foundry. In the past I've used GitHub Models for this kind of work, which works well up to a point:

```bash
gh models run openai/gpt-4.1 "Tell me a joke"
```

However, I was running into a few challenges with token and usage limits on GitHub Models, so I wanted to switch to Azure AI Foundry instead. It felt like a great opportunity to try out [Simon's hugely popular LLM CLI tool](https://github.com/simonw/llm).

```bash
llm "Tell me a joke"
```

### 1. Installation

Installation was a breeze, with a single command:

```bash
pip install llm
```
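
If you'd rather keep CLI tools isolated from your global Python environment, LLM's README also covers a couple of alternative installers; for example:

```bash
# Alternative installs that keep llm isolated from other Python packages
pipx install llm
# or, if you use uv:
uv tool install llm
```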

### 2. Storing an API key

Next, I needed to store my Azure AI Foundry API key, retrieved from the [Azure AI Foundry project overview](https://ai.azure.com/foundryProject/overview):

<img width="1025" height="425" alt="image" src="https://github.com/user-attachments/assets/60b03e48-598a-42fc-a447-9747559bae23" />

The LLM tool looks for an OpenAI API key by default, so storing this one under the name `azure` lets me differentiate it later:

```bash
llm keys set azure
```

When entering the key, the text isn't echoed back to the console, but you can check the `keys.json` file in the next step if you need to confirm.
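
Alternatively, the CLI itself can list the names of stored keys and print a stored value back; a quick sketch, assuming a reasonably recent version of LLM:

```bash
# List the names of the keys LLM has stored (values are not displayed)
llm keys list

# Print the value stored under the 'azure' name, if you need to double-check it
llm keys get azure
```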

### 3. Configuring the Azure AI Foundry Models

Before the next step, you'll want to check the path to your LLM configuration files:

```bash
PS C:\Users\gugregor> llm keys path
C:\Users\gugregor\AppData\Roaming\io.datasette.llm\keys.json
```

...so you can open that folder and create a new, blank text file called `extra-openai-models.yaml`. This file will be used to define the models from Azure AI Foundry.

<img width="978" height="474" alt="image" src="https://github.com/user-attachments/assets/942e1fcc-d1ae-44c1-8707-b70cb64b2aac" />

In this YAML file, you'll want to provide the details of your Azure AI Foundry model deployments:

```yaml
- model_id: aoai/gpt-5
  model_name: gpt-5
  api_base: https://<foundry resource>.openai.azure.com/openai/v1/
  api_key_name: azure

- model_id: aoai/gpt-5-mini
  model_name: gpt-5-mini
  api_base: https://<foundry resource>.openai.azure.com/openai/v1/
  api_key_name: azure

- model_id: aoai/gpt-5-nano
  model_name: gpt-5-nano
  api_base: https://<foundry resource>.openai.azure.com/openai/v1/
  api_key_name: azure

- model_id: aoai/gpt-4.1
  model_name: gpt-4.1
  api_base: https://<foundry resource>.openai.azure.com/openai/v1/
  api_key_name: azure
```

Some important Azure-specific considerations:
- `model_id` is essentially the 'friendly name' for the model within the LLM tool. I chose an `aoai/` prefix so I could differentiate between Azure models and OpenAI API models.
- `model_name` is the Azure deployment name, which _could_ be different from the model name (although it makes sense to keep it the same where possible).
- `api_base` needs to include the `openai/v1/` suffix, because the LLM tool can't pass the `api_version` parameter required by the legacy API. If you're not sure where to find the `<foundry resource>` name, check the [Azure AI Foundry project overview](https://ai.azure.com/foundryProject/overview).
- `api_key_name` is the name of the key you stored in step 2 (I used `azure`, but you can use whatever you like, as long as they match).

Don't forget to save the YAML file once you've added all the details above. You can then check that the new models are registered, as shown below.
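
As a quick sanity check (assuming the YAML file sits in the same configuration folder as `keys.json`), listing the available models should now include the new `aoai/` entries alongside the built-in ones:

```bash
# List every model LLM knows about; the aoai/ entries from the YAML file should appear
llm models list
```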

### 4. Testing the Azure AI Foundry model

Now that you've configured the extra models, you should be able to run `llm` using the following command:

```bash
llm "Tell me a joke" -m aoai/gpt-5-mini
```
The `-m` parameter specifies one of the models you defined in the YAML file in step 3.
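
LLM also accepts piped input as part of the prompt, which makes it handy for quick one-off tasks; a minimal sketch (the file name here is just an example):

```bash
# Pipe a file into the model and ask a question about it
cat app.py | llm "Explain what this script does" -m aoai/gpt-5-mini
```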

### 5. Setting the default model (optional, recommended)

If you're going to use Azure AI Foundry models regularly, you may want to change the default model like this:

```bash
llm models default aoai/gpt-5-mini
```

That way you don't need to specify the model with the `-m` parameter each time, and you're left with an easy-to-remember, concise command:

```bash
llm "Tell me a joke"
```
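
If you ever want to confirm which model is currently the default, running the same command without an argument should print it (a quick check; consult `llm models --help` if your version behaves differently):

```bash
# With no argument, this prints the current default model rather than setting one
llm models default
```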

### Conclusion

LLM is a super handy utility that can be configured to use Azure AI Foundry models with minimal effort. I can see myself using it for a range of simple command-line tasks, and potentially even for more advanced scripting. It doesn't have the power or advanced features of something like Semantic Kernel, but that level of functionality isn't always required. Try it out today!

LLM - GitHub.com<br>
https://github.com/simonw/llm