Mar 4, 2025

How To Self-Host DeepSeek Open-Source AI Models With Ollama And Docker On Ubuntu Linux

Run open-source AI models like DeepSeek R1 and Meta's Llama on Ubuntu Linux.


Looking to run your own AI on your private network? Well, Ollama and all the open-source models out there have made that possible for all of us. In today's tutorial, we are going to tackle how to self-host DeepSeek R1 with Docker on an Ubuntu Linux system. This can be any system that runs Ubuntu: a server, a mini or micro PC, a desktop machine, or even a Raspberry Pi. For this tutorial, we are going to run it on a Raspberry Pi 5. Let's get to it:

1. Docker Compose Script And Deployment

If you have read any of my Docker self-host tutorials before, I think it is fair to say that you know the drill by now. Copy the docker-compose script below, edit it if needed (for example, to change the ports), and place it in the directory where you keep your docker-compose files, either on a remote Docker host or on your local system.

services:
  ollama:
    container_name: ollama
    image: ollama/ollama
    volumes:
      # Persist downloaded models on the host
      - /docker/ollama/ollama-models:/root/.ollama
    ports:
      - 11434:11434
    environment:
      # Keep loaded models in memory for 24 hours between requests
      - OLLAMA_KEEP_ALIVE=24h
      # Listen on all interfaces so other containers and hosts can reach the API
      - OLLAMA_HOST=0.0.0.0
    networks:
      - ollama-docker

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    restart: unless-stopped
    depends_on:
      - ollama
    ports:
      # The web UI listens on container port 8080, exposed here on host port 3000
      - "3000:8080"
    environment:
      # Reach Ollama through the host's published port 11434
      - OLLAMA_BASE_URL=http://host.docker.internal:11434
      # Disables the login screen; only do this on a trusted private network
      - WEBUI_AUTH=False
      - WEBUI_NAME=Opensource Geeks AI
    extra_hosts:
      # Maps host.docker.internal to the Docker host's gateway IP
      - host.docker.internal:host-gateway
    volumes:
      - /docker/ollama/open-webui:/app/backend/data
    networks:
      - ollama-docker

networks:
  ollama-docker:
    external: false
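Before bringing the stack up, it doesn't hurt to create the bind-mount directories from the compose file on the host. Docker will usually create them automatically as root, but creating them up front avoids surprises; the paths below match the volumes above, so adjust them if you changed yours:

# Create the host directories used by the bind mounts above
sudo mkdir -p /docker/ollama/ollama-models /docker/ollama/open-webui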

Once you are done copying or creating the docker-compose file, run docker compose up -d. When the deployment completes, run docker ps to confirm that your containers are running.

[Image: Docker pulling the Ollama image]

2. Access Open WebUI And Download The DeepSeek R1 Model

Once your deployment is done and your Ollama containers are up and running, head over to your browser and access the web UI via http://localhost:3000, or, if you deployed your stack on a remote Docker host, replace localhost with the IP address of that host. Once the web UI is open in your browser, click the user section in the lower-left corner, then go to Settings -> Admin Settings -> Models and click on the download icon to open the popup window shown below.

[Image: Open WebUI model download dialog]
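If the UI can't see any models, it is worth checking that the Ollama API itself is reachable first. A quick sanity check from the Docker host, using Ollama's standard /api/version and /api/tags endpoints:

# Should return the Ollama version as JSON
curl http://localhost:11434/api/version

# Lists the models currently downloaded (an empty list on a fresh install)
curl http://localhost:11434/api/tags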

From this popup, head over to the Ollama models website and search for deepseek-r1, which will look similar to the example below. We are going to download the 1.5-billion-parameter version, which means that in the field where it asks you to enter the model tag, you should enter deepseek-r1:1.5b to download that LLM.

[Image: The deepseek-r1 page on the Ollama website]
[Image: Downloading the DeepSeek LLM in Open WebUI]
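If you prefer the command line over the web UI, you can pull the same model directly inside the Ollama container with docker exec; a minimal sketch:

# Pull the 1.5B-parameter DeepSeek R1 model via the Ollama CLI
docker exec -it ollama ollama pull deepseek-r1:1.5b

# Verify it is installed
docker exec -it ollama ollama list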

3. Let's Ask Our AI Its First Question

To give this a test, let's run our first query. You can ask it anything, I guess, within the bounds of the LLM. For this tutorial, we will ask a simple question: According to GitHub, which programming languages are the top 10 globally?. See the response in the example below:

[Image: DeepSeek query in Open WebUI]
[Image: DeepSeek generating results]
[Image: DeepSeek query results]
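You can also query the model outside the web UI via Ollama's HTTP API. A minimal sketch using the standard /api/generate endpoint (the prompt here is just an example):

# Ask DeepSeek R1 a question directly over the Ollama API
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "Which programming languages are the most popular on GitHub?",
  "stream": false
}'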

Conclusion

In conclusion, there are many more open-source LLMs to use and test; check out the Ollama website to see what is available. If you enjoyed this article, consider signing up for our newsletter, and don't forget to share it with people who would find it useful. Leave a comment below with a tutorial you would like us to cover.
