An unofficial Go client for the OpenRouter API.
This library provides a comprehensive, go-openai-inspired interface for interacting with the OpenRouter API, giving you access to a multitude of LLMs through a single, unified client.
This library's design and structure are heavily inspired by the excellent go-openai library.
While this library maintains a familiar, go-openai-style interface, it includes several key features and fields specifically tailored for the OpenRouter API:
- Multi-Provider Models: Seamlessly switch between models from different providers (e.g., Anthropic, Google, Mistral) by changing the `Model` string (see the sketch after this list).
- Cost Tracking: The `Usage` object in responses includes a `Cost` field, providing direct access to the dollar cost of a generation.
- Native Token Counts: The `GetGeneration` endpoint provides access to `NativePromptTokens` and `NativeCompletionTokens`, giving you precise, provider-native tokenization data.
- Advanced Routing: Use `Models` for fallback chains and `Route` for custom routing logic.
- Reasoning Parameters: Control and request "thinking" tokens from supported models using the `Reasoning` parameters.
- Provider-Specific `ExtraBody`: Pass custom, provider-specific parameters through the `ExtraBody` field for fine-grained control.
- Client Utilities: Built-in `ListModels`, `CheckCredits`, and `GetGeneration` methods are available directly on the client.
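For orientation, here is a minimal quickstart sketch showing a chat completion and the OpenRouter-specific `Cost` field. The constructor and type names (`NewClient`, `ChatCompletionRequest`, `ChatCompletionMessage`) are assumed here to mirror go-openai, as are the exact field layouts; defer to the examples/ directory for the real signatures.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	// Import path taken from the go get line below.
	openrouter "github.com/iamwavecut/gopenrouter"
)

func main() {
	// NewClient and the request/response types below are assumed to
	// follow go-openai conventions; see examples/ for the actual API.
	client := openrouter.NewClient(os.Getenv("OPENROUTER_API_KEY"))

	resp, err := client.CreateChatCompletion(context.Background(), openrouter.ChatCompletionRequest{
		// Switching providers is just a matter of changing this string,
		// e.g. "google/gemini-flash-1.5" or "mistralai/mistral-large".
		Model: "anthropic/claude-3.5-sonnet",
		Messages: []openrouter.ChatCompletionMessage{
			{Role: "user", Content: "Say hello in one sentence."},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(resp.Choices[0].Message.Content)
	// Cost is the OpenRouter-specific field on Usage: the dollar cost
	// of this generation.
	fmt.Printf("cost: $%f\n", resp.Usage.Cost)
}
```

Note that the same request shape works across providers; only the `Model` string changes.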
```bash
go get github.com/iamwavecut/gopenrouter
```

For complete, runnable examples, please see the examples/ directory. A summary of available examples is below:
| Feature | Description | 
|---|---|
| Basic Chat | Demonstrates the standard chat completion flow. | 
| Streaming Chat | Shows how to stream responses for real-time output. | 
| Vision (Images) | Illustrates how to send image data using the `MultiContent` field for vision-enabled models. |
| File Attachments | Shows how to attach files (e.g., PDFs) for models that support file-based inputs. | 
| Prompt Caching | Reduces cost and latency by using OpenRouter's explicit `CacheControl` for supported providers. |
| Automatic Caching (OpenAI) | Demonstrates OpenAI's automatic caching for large prompts, a cost-saving feature on OpenRouter. | 
| Structured Outputs | Enforces a specific JSON schema for model outputs, a powerful OpenRouter feature. | 
| Reasoning Tokens | Shows how to request and inspect the model's "thinking" process via OpenRouter's unified reasoning parameters. |
| Provider Extras | Uses the `ExtraBody` field to pass provider-specific parameters for fine-grained control. |
| Tool Calling (History) | End-to-end tool-calling loop with full-history resend and tool result messages. | 
| Logprobs | Request token logprobs and inspect per-token candidates. | 
| Streaming with Usage | Stream responses and receive a final usage chunk before the `[DONE]` marker. |
| List Models | A client utility to fetch the list of all models available on OpenRouter. | 
| Check Credits | A client utility to check your API key's usage, limit, and free tier status on OpenRouter. | 
| Get Generation | Fetches detailed post-generation statistics, including cost and native token counts. | 
Details on specific features and client utility methods are available in the examples listed above.
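As a rough sketch of the client utilities, the snippet below calls `ListModels`, `CheckCredits`, and `GetGeneration`. The method names come from the feature list above, but the signatures, return types, and field names shown here are assumptions; the Check Credits and Get Generation examples have the authoritative versions.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	openrouter "github.com/iamwavecut/gopenrouter"
)

func main() {
	client := openrouter.NewClient(os.Getenv("OPENROUTER_API_KEY"))
	ctx := context.Background()

	// List every model currently available on OpenRouter.
	// Assumes ListModels returns a slice of model descriptors.
	models, err := client.ListModels(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d models available\n", len(models))

	// Check this API key's usage, limit, and free-tier status.
	// The field names on the result are assumptions.
	credits, err := client.CheckCredits(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("usage: %v limit: %v free tier: %v\n", credits.Usage, credits.Limit, credits.IsFreeTier)

	// Fetch post-generation stats for a completion made earlier, using
	// the ID from that response. "gen-..." is a placeholder.
	gen, err := client.GetGeneration(ctx, "gen-...")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("cost: $%f native tokens: %d prompt / %d completion\n",
		gen.Cost, gen.NativePromptTokens, gen.NativeCompletionTokens)
}
```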