diff --git a/blog/2025-07-29_llm-d-v0.2-our-first-well-lit-paths.md b/blog/2025-07-29_llm-d-v0.2-our-first-well-lit-paths.md
new file mode 100644
index 0000000..47df5ab
--- /dev/null
+++ b/blog/2025-07-29_llm-d-v0.2-our-first-well-lit-paths.md
@@ -0,0 +1,20 @@
+---
+title: "llm-d 0.2: Our first well-lit paths (mind the tree roots!)"
+description: Announcing the llm-d 0.2 release with new features and improvements that light the way forward for large language model deployment
+slug: llm-d-v0.2-our-first-well-lit-paths
+
+authors:
+  - robshaw
+  - smarterclayton
+  - chcost
+
+tags: [release, announcement, llm-d]
+---
+
+# llm-d 0.2: Our first well-lit paths (mind the tree roots!)
+
+Our 0.2 release delivers three well-lit paths to accelerate deploying large-scale inference on Kubernetes: better load balancing, lower latency with disaggregation, and native vLLM support for very large Mixture of Experts models like DeepSeek-R1.
+
+We've also enhanced our deployment and benchmarking tooling, incorporating lessons from real-world infrastructure deployments and addressing key antipatterns. This release gives llm-d users, contributors, researchers, and operators clearer guides for efficient use in tested, reproducible scenarios.
+
+[rest of post to follow]
\ No newline at end of file