🎬 Video tutorial coming soon.

🦙 Setup Ollama — Run LLMs Locally

Deploy Ollama on Ubuntu with Docker — run Llama 3, Mistral, Gemma, Phi, and hundreds of open-source language models locally. GPU and CPU supported. OpenAI-compatible API with zero cloud dependency after the initial model download.

⚠️ This script is provided for demo and testing purposes only. Not intended for production use.

📦 Resources & Setup Scripts

Grab the automated bash script from GitHub to follow along with the video.

Automated install — Ollama with Open WebUI in one command. Your GPU is detected automatically.
View on GitHub

Quick Install:

wget https://raw.githubusercontent.com/mhmdali94/Docker/main/ai/ollama/ollama-ubuntu.sh
chmod +x ollama-ubuntu.sh
sudo bash ollama-ubuntu.sh

Tutorial Steps

1 Download & Run the Script

The script installs Docker, deploys Ollama with Open WebUI, and auto-detects your NVIDIA GPU if available.

wget https://raw.githubusercontent.com/mhmdali94/Docker/main/ai/ollama/ollama-ubuntu.sh
chmod +x ollama-ubuntu.sh
sudo bash ollama-ubuntu.sh
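Once the script finishes, it is worth confirming that the deployment actually came up before moving on. A quick sanity check, assuming the script names the containers "ollama" and "open-webui" (adjust the filters if yours differ):

```shell
# List the running containers deployed by the script
# (container names are an assumption — check with a plain `docker ps` if unsure).
docker ps --filter "name=ollama" --filter "name=open-webui"

# Confirm the Ollama server answers on its default port.
curl http://localhost:11434/api/version
```

If the version check returns a JSON payload, the API is up and you can proceed to pulling a model.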

2 Pull Your First Model

Use the Ollama CLI to download a language model. Llama 3.2 (3B) is a good starting point for CPU-only servers:

docker exec -it ollama ollama pull llama3.2
# For a larger model with GPU:
docker exec -it ollama ollama pull llama3.1:8b
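After pulling, you can verify the download and try the model straight from the terminal, without opening the web UI:

```shell
# Show which models are available locally.
docker exec -it ollama ollama list

# Start an interactive chat session in the terminal (type /bye to exit).
docker exec -it ollama ollama run llama3.2
```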

3 Access Open WebUI

Open your browser and navigate to the Open WebUI interface to chat with your local models:

http://<your-server-ip>:3000
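If the page does not load, a quick check from the server itself tells you whether the UI is serving or the problem is network/firewall:

```shell
# Print only the HTTP status code from Open WebUI on port 3000.
# 200 means the UI is up; a connection error points at the container instead.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000
```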

4 Use the API

Ollama serves a REST API on port 11434; recent releases also expose OpenAI-compatible endpoints under /v1 on the same port. Connect any compatible app — AnythingLLM, Dify, or your own scripts:

curl http://<your-server-ip>:11434/api/generate \
  -d '{"model":"llama3.2","prompt":"Hello!"}'
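The same server can also be called through the OpenAI-style chat endpoint (available in recent Ollama releases), which is what most third-party tools use. A sketch, with streaming disabled so the response arrives as a single JSON object:

```shell
# OpenAI-compatible chat completion against the local Ollama server.
curl http://<your-server-ip>:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3.2",
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": false
      }'
```

Point an OpenAI client library at `http://<your-server-ip>:11434/v1` (any non-empty API key is accepted) and it will work against this endpoint.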

Ports Used

Port   Purpose
11434  Ollama REST API
3000   Open WebUI (chat interface)
