Video tutorial coming soon.
Deploy Langfuse on Ubuntu with Docker — an open-source LLM observability and evaluation platform. Trace every AI call, measure output quality, detect prompt regressions, and track costs across all your LLM applications from one self-hosted dashboard.
Grab the automated bash script from GitHub to follow along with the video.
wget https://raw.githubusercontent.com/mhmdali94/Docker/main/ai/langfuse/langfuse-ubuntu.sh
chmod +x langfuse-ubuntu.sh
sudo bash langfuse-ubuntu.sh
The script installs Docker and deploys Langfuse with a PostgreSQL database for storing traces, evaluations, and prompt versions.
Open your browser and navigate to Langfuse. Register an account, create a project, and generate your API keys:
http://<your-server-ip>:3000
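The SDK also needs to know where your self-hosted instance lives. A minimal configuration sketch, using the Langfuse SDK's standard environment variables (the key values below are placeholders; use the keys generated in your project settings):

```shell
# Placeholder keys; copy the real values from your project's settings page
export LANGFUSE_PUBLIC_KEY="pk-lf-..."
export LANGFUSE_SECRET_KEY="sk-lf-..."   # keep this out of version control

# Point the SDK at your self-hosted server instead of Langfuse Cloud
export LANGFUSE_HOST="http://<your-server-ip>:3000"
```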
Install the Langfuse SDK in your application and wrap your LLM calls. For Python with OpenAI, it's one pip install and a one-line import swap:
pip install langfuse
# In your code:
from langfuse.openai import openai # drop-in replacement
# All openai.chat.completions.create() calls are now traced automatically
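Conceptually, the drop-in module wraps each completion call, records the inputs, output, and latency, and forwards everything to the Langfuse server. A rough illustration of that pattern in plain Python (no Langfuse dependency; `traced` and `fake_completion` are made-up names for this sketch, not SDK APIs):

```python
import functools
import time

def traced(fn):
    """Record inputs, output, and latency around a call.

    Illustrative only: the real Langfuse wrapper ships these records
    to the server instead of keeping them in a local list.
    """
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = fn(*args, **kwargs)
        wrapper.traces.append({
            "name": fn.__name__,
            "input": kwargs,
            "output": result,
            "latency_s": round(time.time() - start, 3),
        })
        return result
    wrapper.traces = []
    return wrapper

@traced
def fake_completion(model, messages):
    # Stand-in for openai.chat.completions.create()
    return {"role": "assistant", "content": "Hello!"}

fake_completion(model="gpt-4o", messages=[{"role": "user", "content": "Hi"}])
print(len(fake_completion.traces))  # 1 trace recorded
```

The advantage of wrapping at the client level is that no application logic changes: every call site is traced the moment the import is swapped.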
Open the Langfuse dashboard to see your traces, review individual LLM calls, add human or LLM-based evaluation scores, and set up automated evaluators for continuous quality monitoring.
| Port | Purpose |
|---|---|
| 3000 | Langfuse Web UI & API |