The implementation of Large Language Models (LLMs) in a Linux environment necessitates a robust and optimized toolchain. This guide delineates a systematic approach to installing and executing LLMs utilizing industry-standard tools such as Ollama, Anaconda, and Intel's IPEX-LLM.
Linux offers unmatched flexibility, security, and performance for AI workloads, making it a natural platform for running LLMs locally.
Ollama facilitates the seamless deployment and execution of LLMs. The following steps outline its installation:
Install curl (if not pre-installed):
sudo apt-get install curl
Install Ollama using the official script:
curl https://ollama.ai/install.sh | sh
Once installed, an LLM model can be queried as follows:
ollama run mistral "What is the capital of France?"
This command invokes the model and retrieves a response.
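Beyond the CLI, Ollama also serves a local REST API (by default at http://localhost:11434), which is convenient for scripting. A minimal stdlib sketch of the same query over HTTP, assuming the server is running and the mistral model has been pulled:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the response text."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Usage (requires a running Ollama server):
# print(ask("mistral", "What is the capital of France?"))
```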
Although not a prerequisite for Ollama, Anaconda enhances AI and machine learning workflows by enabling efficient environment management.
Download Anaconda:
cd /tmp
sudo apt-get install wget
wget https://repo.anaconda.com/archive/Anaconda3-2023.09-0-Linux-x86_64.sh
Verify the integrity of the downloaded package:
sha256sum Anaconda3-2023.09-0-Linux-x86_64.sh
Execute the Anaconda installation script:
bash Anaconda3-2023.09-0-Linux-x86_64.sh
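The checksum verification can also be scripted. A minimal stdlib sketch; the expected hash is a placeholder you would copy from Anaconda's published hashes, not a value from this guide:

```python
import hashlib


def sha256_of(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def verify(path: str, expected_hex: str) -> bool:
    """Return True when the file's digest matches the published hash."""
    return sha256_of(path) == expected_hex.lower()


# Usage (EXPECTED is hypothetical -- take the real hash from repo.anaconda.com):
# verify("Anaconda3-2023.09-0-Linux-x86_64.sh", EXPECTED)
```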
With Anaconda installed, create and activate a dedicated environment:
conda create -n llm_env python=3.9
conda activate llm_env
pip install transformers torch
This configuration ensures an isolated environment optimized for LLM execution.
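Before loading any models, it helps to confirm the environment actually exposes the packages installed above. A small stdlib sketch using importlib:

```python
import importlib.util


def installed(package: str) -> bool:
    """Return True if the package can be imported in the current environment."""
    return importlib.util.find_spec(package) is not None


# Check the packages installed in the llm_env environment above.
for pkg in ("transformers", "torch"):
    status = "OK" if installed(pkg) else "MISSING"
    print(f"{pkg}: {status}")
```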
The following methodology applies to Intel Data Center GPU Flex Series and Max Series.
Update the package index and install the kernel module build dependencies:
sudo apt-get update
sudo apt-get -y install \
gawk \
dkms \
linux-headers-$(uname -r) \
libc6-dev
Install the Intel GPU driver packages:
sudo apt install intel-i915-dkms intel-fw-gpu
For Intel Iris Graphics:
sudo apt install intel-i915-dkms=1.24.2.17.240301.20+i29-1 intel-fw-gpu=2024.17.5-329~22.04
Install the compute and media runtime packages:
sudo apt-get install -y udev \
intel-opencl-icd intel-level-zero-gpu level-zero \
intel-media-va-driver-non-free libmfx1 libmfxgen1 libvpl2 \
libegl-mesa0 libegl1-mesa libegl1-mesa-dev libgbm1 libgl1-mesa-dev libgl1-mesa-dri \
libglapi-mesa libgles2-mesa-dev libglx-mesa0 libigdgmm12 libxatracker2 mesa-va-drivers \
mesa-vdpau-drivers mesa-vulkan-drivers va-driver-all vainfo
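Once the driver and runtime packages are in place, the GPU should appear as a DRM render node under /dev/dri. A minimal stdlib sketch to list them; an empty result suggests the driver is not loaded yet:

```python
import glob


def list_render_nodes() -> list:
    """Return the DRM render nodes exposed by loaded GPU drivers."""
    return sorted(glob.glob("/dev/dri/renderD*"))


nodes = list_render_nodes()
print("Render nodes:", nodes or "none found")
```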
Utilize the following Python script to ascertain GPU availability. Note that Intel GPUs are exposed to PyTorch as the xpu device, so torch.cuda.is_available() will report False even on a working setup:
import torch
import intel_extension_for_pytorch as ipex  # registers the xpu backend
print("Is XPU available:", torch.xpu.is_available())
Grant the current user access to the GPU render device:
sudo gpasswd -a ${USER} render
newgrp render
Confirm the GPU is detected:
sudo apt-get install -y hwinfo
hwinfo --display
Finally, reboot so the driver and group changes take effect:
sudo reboot
For users lacking Conda, Miniforge can be deployed:
wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-x86_64.sh
bash Miniforge3-Linux-x86_64.sh
source ~/.bashrc
Verify the installation:
conda --version
Create and activate a dedicated environment for IPEX-LLM:
conda create -n llm python=3.11
conda activate llm
For installations within the United States (the extras specifier is quoted so shells such as zsh do not treat the brackets as a glob):
pip install --pre --upgrade "ipex-llm[xpu]" --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
For installations within China (a mirror for faster downloads):
pip install --pre --upgrade "ipex-llm[xpu]" --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
Test the setup with a short generation script. On an Intel GPU, move the model and inputs to the xpu device; IPEX-LLM's drop-in AutoModelForCausalLM additionally loads the weights in low-bit form:
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_name = "facebook/opt-1.3b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True)
model = model.to("xpu")

input_text = "What is the meaning of life?"
inputs = tokenizer(input_text, return_tensors="pt").to("xpu")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
A practical application is prompting the model to draft SEO-friendly article outlines, for example:
Title: "How to Install LLMs on Linux"
- H1: Introduction to LLMs
- H2: Prerequisites
- H2: Step-by-Step Installation
- H3: Ollama Setup
- H3: IPEX-LLM for GPUs
- H2: Troubleshooting
- H1: Conclusion
If Ollama misbehaves, restart its service:
sudo systemctl restart ollama
If Conda issues arise, update the base installation:
conda update -n base -c defaults conda
Installing LLMs on Linux unlocks powerful AI capabilities for development and content creation. By leveraging Ollama for simplicity, Anaconda for environment control, and IPEX-LLM for Intel GPUs, you can optimize performance and efficiency.
Need expert guidance? Connect with a top Codersera professional today!