Repo for langfuse.com. Based on Nextra.
You can easily contribute to the docs using GitHub Codespaces. Just click on the "Code" button and select "Open with Codespaces". This will open a new Codespace with all the dependencies installed and the development server running.
Prerequisites: Node.js 22, pnpm v9.5.0
- Optional: Create an env file based on `.env.template`
- Run `pnpm i` to install the dependencies.
- Run `pnpm dev` to start the development server on localhost:3333
All Jupyter notebooks are in the cookbook/ directory. For JS/TS notebooks we use Deno; see the README in the cookbook folder for more details.
To render them within the documentation site, we convert them to markdown using `jupyter nbconvert` and move them to the right path in the pages/ directory, where they are rendered by Nextra (remark).
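Under the hood this is plain `jupyter nbconvert`; a sketch of the per-notebook step (the notebook filename and output directory here are placeholders, the real paths are handled by the script below):

```shell
# convert one notebook to markdown and place it where Nextra picks it up
# (example paths are assumptions for illustration)
jupyter nbconvert --to markdown cookbook/example_notebook.ipynb \
  --output-dir pages/cookbook
```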
Steps after updating notebooks:
- Ensure you have `uv` installed
- Run `bash scripts/update_cookbook_docs.sh` (uv will automatically handle dependencies)
- Commit the changed markdown files
Note: All .md or .mdx files in the pages/ directory that contain a "source:" field are automatically generated from Jupyter notebooks. Do not edit them manually, as they will be overwritten. Always edit the Jupyter notebooks and run the conversion script.
We store all images in the public/images/ directory. To use them in the markdown files, use the absolute path /images/your-image.png.
We use a bucket on Cloudflare R2 to store all videos. It is served from https://static.langfuse.com/docs-videos. Ping one of the maintainers to upload a video to the bucket and get the src.
To embed a video, use the Video component and set a title and fixed aspect ratio. Point src to the mp4 file in the bucket.
To embed a "gif", actually embed a video and use gifMode (`<Video src="" gifMode />`). This looks like a gif, but at a much smaller file size and with higher quality.
Interested in the stack behind the docs Q&A chatbot? Check out the blog post for implementation details (all open source).
The docs site includes four interconnected features designed to make documentation accessible to LLMs and AI tools:
- Markdown URL endpoints (`.md` suffix): Append `.md` to any URL (e.g., `/docs.md`) to get raw markdown. Built at compile time via `scripts/copy_md_sources.js`, which copies all `.mdx` files from `/pages` to `/public/md-src/` as static `.md` files with inlined MDX components.
- Copy as Markdown button: UI button on docs pages that fetches the `.md` endpoint and copies the markdown to the clipboard for pasting into ChatGPT/Claude/Cursor.
- Export as PDF links: API endpoint `/api/md-to-pdf` that fetches markdown from `.md` URLs and converts it to PDF using Puppeteer. Used on legal pages (terms, privacy, DPA, etc.).
- MCP Server: Model Context Protocol server at `/api/mcp` with three tools:
  - `searchLangfuseDocs`: RAG search via the Inkeep API
  - `getLangfuseDocsPage`: Fetches a specific page's markdown from `.md` URLs
  - `getLangfuseOverview`: Returns the `llms.txt` overview
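Since MCP's streamable HTTP transport is JSON-RPC over HTTP, the endpoint can be sanity-checked from the command line; this is a sketch, and the server may additionally require an MCP initialize handshake or a session header:

```shell
# list the tools exposed by the docs MCP server (hostname per the docs site;
# headers follow the MCP streamable HTTP convention)
curl -s -X POST "https://langfuse.com/api/mcp" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'
```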
All three user-facing features (Copy, PDF, MCP) depend on the same foundation of pre-built static markdown files, making them fast, cacheable, and reliable. See RESEARCH-LLM-FEATURES.md for implementation details.
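The `.md` suffix convention described above amounts to a trivial path mapping; a hypothetical helper (the real mapping is done at build time by `scripts/copy_md_sources.js`, not at request time):

```shell
# hypothetical helper: map a docs path to its raw-markdown URL,
# mirroring the ".md suffix" convention
to_md_url() {
  local path="${1%/}"        # drop a trailing slash, if any
  printf '%s.md\n' "$path"
}

to_md_url "/docs"              # → /docs.md
to_md_url "/docs/get-started"  # → /docs/get-started.md
```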
Run `pnpm run analyze` to analyze the bundle size of the production build using `@next/bundle-analyzer`.
