DeepSeek-R1 is an open-source large language model (LLM) developed by the Chinese AI company DeepSeek, founded in 2023 by Liang Wenfeng. Known for its advanced reasoning capabilities, DeepSeek-R1 rivals OpenAI's o1 model. This guide provides a comprehensive tutorial on how to install and run DeepSeek-R1 on Ubuntu, covering prerequisites, installation steps, and usage.
DeepSeek-R1 is DeepSeek's flagship reasoning model, designed for tasks such as answering questions and generating text. Its smaller distilled variants are built on the Qwen and Llama architectures, advanced neural network designs optimized for large-scale language modeling.
Before installing DeepSeek-R1, make sure your system meets the basic requirements: a supported Ubuntu release, Python 3.8 or higher, and enough RAM and disk space for the model size you choose.
Follow these steps to install and run DeepSeek-R1 on Ubuntu.
First, update your system before installing new packages:
sudo apt update && sudo apt upgrade -y
Install Python (version 3.8 or higher) and verify the version:
sudo apt install python3
python3 --version
Install Pip, the package manager for Python:
sudo apt install python3-pip
pip3 --version
Install Git to manage repositories:
sudo apt install git
git --version
Ollama simplifies running large language models locally. Install it using:
curl -fsSL https://ollama.com/install.sh | sh
Verify the installation:
ollama --version
Start Ollama, then enable it to run automatically on system boot:
sudo systemctl start ollama
sudo systemctl enable ollama
Check if Ollama is running:
systemctl is-active ollama.service
If inactive, manually start it:
sudo systemctl start ollama.service
Download and run the DeepSeek model with:
ollama run deepseek-r1:7b
Verify the downloaded models:
ollama list
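Beyond the interactive CLI, Ollama also serves a REST API on http://localhost:11434, which is useful for scripting. The sketch below builds the JSON body for the /api/generate endpoint and posts it with curl; the model tag and prompt are illustrative, and it assumes the default port and a running Ollama service:

```shell
# Build the JSON body for Ollama's /api/generate endpoint.
payload() {
    printf '{"model": "%s", "prompt": "%s", "stream": false}' "$1" "$2"
}

# Send a prompt to the local Ollama server (requires Ollama running).
ask() {
    curl -s http://localhost:11434/api/generate -d "$(payload "$1" "$2")"
}

# Example (uncomment on a machine with Ollama running):
#   ask "deepseek-r1:7b" "Explain systemd in one sentence."
```

With "stream": false the server returns one JSON object whose response field holds the full answer; omit it to receive incremental chunks instead.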
DeepSeek-R1 is available in several sizes: 1.5b, 7b, 8b, 14b, 32b, 70b, and 671b. Larger models produce better answers but need far more RAM and VRAM; pull a different size by changing the tag, for example ollama run deepseek-r1:1.5b.
To remove a model and free up disk space:
ollama rm deepseek-r1:70b
Replace 70b with the model size you want to delete.
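As a rough way to choose among the available sizes, the helper below maps free RAM (in GB) to a suggested deepseek-r1 tag. The thresholds are illustrative rules of thumb, not official requirements:

```shell
# Suggest a deepseek-r1 tag for a given amount of free RAM (in GB).
# Thresholds are illustrative rules of thumb, not official requirements.
suggest_model() {
    if [ "$1" -ge 48 ]; then echo "deepseek-r1:70b"
    elif [ "$1" -ge 24 ]; then echo "deepseek-r1:32b"
    elif [ "$1" -ge 12 ]; then echo "deepseek-r1:14b"
    elif [ "$1" -ge 8 ]; then echo "deepseek-r1:7b"
    else echo "deepseek-r1:1.5b"
    fi
}

suggest_model 16   # prints deepseek-r1:14b
# With the machine's actual available memory:
#   suggest_model "$(free -g | awk '/^Mem:/{print $7}')"
```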
For a user-friendly interface, use Ollama Web UI. First, create a virtual environment:
sudo apt install python3-venv
python3 -m venv ~/open-webui-venv
source ~/open-webui-venv/bin/activate
Install Open WebUI:
pip install open-webui
Start the server:
open-webui serve
Access the Web UI at http://localhost:8080. Select the DeepSeek model and begin interacting.
To make Open-WebUI start on boot, create a systemd service:
sudo nano /etc/systemd/system/open-webui.service
Add the following content:
[Unit]
Description=Open Web UI Service
After=network.target
[Service]
User=your_username
WorkingDirectory=/home/your_username/open-webui-venv
ExecStart=/home/your_username/open-webui-venv/bin/open-webui serve
Restart=always
Environment="PATH=/home/your_username/open-webui-venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
[Install]
WantedBy=multi-user.target
Replace your_username with your actual username. Reload and enable the service:
sudo systemctl daemon-reload
sudo systemctl enable open-webui.service
sudo systemctl start open-webui.service
Check the status:
sudo systemctl status open-webui.service
If you run into problems, try the following fixes.
If Ollama stops responding, restart the service:
sudo systemctl restart ollama
For GPU acceleration on NVIDIA hardware, install the proprietary driver:
sudo apt install nvidia-driver-535
If the model exhausts GPU memory, switch to a smaller size (such as 1.5b) or fall back to the CPU:
ollama run deepseek-r1:7b --numa # Uses CPU if GPU memory is full
If Python dependencies cause errors, upgrade them:
pip install --upgrade torch llama-cpp-python
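To check whether GPU memory is the bottleneck before falling back to the CPU, you can parse nvidia-smi's CSV output. The query flags below are standard nvidia-smi options; the 8000 MiB threshold is an illustrative assumption for a 7b model, not an official requirement:

```shell
# Decide whether GPU memory is too low, given one CSV line from:
#   nvidia-smi --query-gpu=memory.used,memory.total --format=csv,noheader,nounits
# Prints "yes" if free memory is below the needed amount (MiB), else "no".
gpu_memory_low() {
    line=$1; needed=$2
    used=$(echo "$line" | cut -d, -f1 | tr -d ' ')
    total=$(echo "$line" | cut -d, -f2 | tr -d ' ')
    if [ $((total - used)) -lt "$needed" ]; then echo yes; else echo no; fi
}

# Live check (uncomment on a machine with an NVIDIA GPU):
#   gpu_memory_low "$(nvidia-smi --query-gpu=memory.used,memory.total \
#       --format=csv,noheader,nounits | head -n1)" 8000
```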
For advanced issues, consult the DeepSeek Documentation.
For resource-intensive tasks or greater scalability, consider running DeepSeek on a cloud GPU provider instead of local hardware.
You’ve now installed DeepSeek-R1 on Ubuntu and can interact with it via CLI or a user-friendly Web UI. Whether you’re developing AI applications, automating workflows, or experimenting with NLP, DeepSeek-R1 offers enterprise-grade capabilities in an open-source package.
Need expert guidance? Connect with a top Codersera professional today!