OpenClaw is an open-source AI agent platform that runs on your own hardware. The 2026.3.22 release turns it into a full "agent operating system" with plugins, multi-model support, and stronger security.
This guide explains what OpenClaw 2026.3.22 is, how to install it locally, and how to use it in real work. It also covers benchmarks, pricing, and how it compares to other self-hosted AI tools.
What Is OpenClaw 2026.3.22
OpenClaw is a self-hosted AI agent runtime that runs on your machine and connects to chat apps, email, and other services through a gateway. You can think of it as an operating system for AI agents rather than a single chatbot.
Version 2026.3.22 adds a plugin marketplace called ClawHub, support for new models like MiniMax M2.7 and GPT-5.4 mini and nano, and major security and sandbox updates.
OpenClaw uses large language models (LLMs) as the "brain" of each agent. An LLM is a model that reads text and generates new text based on patterns it learned during training.
OpenClaw manages long-running sessions, tools, memory files, and cross-channel routing so agents can work across tasks while you stay in control of data.
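The file-based memory idea can be illustrated with a minimal sketch. The `MEMORY.md` file name and bullet format here are hypothetical, not OpenClaw's actual on-disk layout:

```python
from pathlib import Path

def append_memory(agent_dir, note):
    """Append a note to an agent's Markdown memory file and return its full text.

    Memory lives as plain Markdown on disk, so it can be read and
    edited by hand between sessions.
    """
    memory_file = Path(agent_dir) / "MEMORY.md"  # hypothetical file name
    memory_file.parent.mkdir(parents=True, exist_ok=True)
    with memory_file.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")
    return memory_file.read_text(encoding="utf-8")
```

Because the store is plain text, diffing or version-controlling an agent's memory works with ordinary tools like Git.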
Key Features
- ClawHub plugin marketplace: Built-in marketplace to search, install, and update skills and plugins with the openclaw skills and openclaw plugins commands.
- Multi-model support: Built-in support for MiniMax M2.7, GPT-5.4 mini and nano, Anthropic Claude via Vertex, and many open-source models through providers like OpenRouter and Zhipu GLM.
- Per-agent reasoning controls: Different agents can use different reasoning depth and speed settings, instead of one global level for the whole system.
- Search and web tools: Integrations for Exa, Tavily, Firecrawl, and Firecrawl-based web fetch tools to browse and scrape sites from within agents.
- Sandboxed execution: Multiple sandbox backends including Docker-style containers, OpenShell, and SSH sandboxes with hardened exec and network policies.
- File-based memory: Agent memory stored as Markdown files on disk so you can inspect and edit what the agent "knows".
- Multi-channel gateway: One gateway service to connect agents with WhatsApp, Telegram, Slack, Discord, Email, WeChat, and more.
- ClawBox hardware option: Dedicated Jetson Orin Nano box that ships with OpenClaw pre-installed for plug-and-play local hosting.
How to Install or Set Up OpenClaw 2026.3.22 Locally
This section covers three main install paths: official script, npm package, and source from GitHub.
Note: The 2026.3.22 npm package has known issues with the Control UI and some plugins, especially WeChat integration. Check current issue status before using that specific tag and consider @latest if maintainers have patched it.
Prerequisites and Hardware Requirements
Minimum tested specs for a stable OpenClaw node include a 4-core CPU, 8 to 16 GB RAM, and 20 to 40 GB of SSD storage. Linux (Ubuntu 22.04 or 24.04), macOS, and Windows with WSL are all used in community guides.
For the ClawBox device, OpenClaw runs on an NVIDIA Jetson Orin Nano with 8 GB memory and 512 GB NVMe storage.
Basic steps before installation:
- Update your system packages (for example, sudo apt update && sudo apt upgrade -y on Ubuntu).
- Install Node.js (LTS), pnpm or npm, Git, and Python if they are not present.
- Ensure you can reach the internet from the host to pull packages and models.
Method 1: Official Install Script (Recommended for Most Users)
- Open a terminal on Linux or macOS.
- Run the official install script:

  ```bash
  curl -fsSL https://openclaw.ai/install.sh | bash
  ```

  This script detects your OS, installs Node.js if needed, installs the openclaw CLI, and launches the onboarding wizard.
- Follow the on-screen onboarding steps, choose your default model provider, and accept security prompts.
- At the end, the script offers to install OpenClaw as a background daemon; accept if you want it to start on boot.
Method 2: Install via npm Global Package
- Ensure Node.js and npm are installed on your system.
- Install OpenClaw globally:

  ```bash
  npm install -g openclaw@latest
  ```

  The npm registry lists recent versions including 2026.3.13 and 2026.3.22-beta.1.
- Run the onboarding and daemon install:

  ```bash
  openclaw onboard --install-daemon
  ```

  This command sets up the gateway service and guides you through model keys, channels, and basic security choices.
- Confirm that the openclaw service is active with the status command:

  ```bash
  openclaw gateway status
  ```
- If you need the 2026.3.22 tag for testing, pin the version carefully and verify known issues from the issue tracker and Reddit thread.
Method 3: Install from GitHub Source
This method gives full control and is useful for advanced setups or custom builds.
- Clone the repository:

  ```bash
  git clone https://github.com/openclaw/openclaw.git
  cd openclaw
  ```

- Check out the 2026.3.22 tag if available:

  ```bash
  git fetch --tags
  git checkout 2026.3.22
  ```

  The SourceForge mirror and release notes confirm this tag and describe its changes.
- Install dependencies using pnpm (preferred in many guides):

  ```bash
  corepack enable
  pnpm install
  ```

- Build the project:

  ```bash
  pnpm build
  ```

- Run the onboarding flow from the local build:

  ```bash
  pnpm openclaw onboard --install-daemon
  ```
A YouTube tutorial shows these steps for building from source and then calling the onboard command.
Special Case: ClawBox Appliance
If you use a ClawBox, OpenClaw comes pre-installed on an Orin Nano board. The usual steps are:
- Connect power and Ethernet to the ClawBox.
- Visit http://clawbox.local in your browser.
- Complete the setup wizard and log in to the web control UI.
How to Run or Use OpenClaw Locally
Once installed, OpenClaw runs as a background gateway plus one or more agents. You can control it from the CLI, the Control UI dashboard, or from chat channels like WhatsApp and Slack.
Starting and Stopping the Gateway
- To start or ensure the gateway is running:

  ```bash
  openclaw gateway start
  ```

- To stop the service:

  ```bash
  openclaw gateway stop
  ```

- To view status and logs:

  ```bash
  openclaw gateway status
  openclaw logs gateway
  ```
The refreshed dashboard in recent versions adds modular views for chat, config, agents, and sessions.
Running Your First Agent Session
- Open a terminal and run:

  ```bash
  openclaw chat
  ```

  This opens a local chat session with your default agent.
- Ask a question like "Summarize my latest email" after you have connected an email skill.
- The message flows through the gateway, which routes it to the correct agent, loads its memory, calls the chosen LLM, and returns a response.
- Use /btw for side questions that should not change the future session context.
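Conceptually, the gateway's routing step is a channel-to-agent lookup. The sketch below is illustrative only; the channel names and agent IDs are assumptions, not OpenClaw internals:

```python
def route_message(channel, routes, default_agent="assistant"):
    """Return the agent that should handle a message from a given channel."""
    return routes.get(channel, default_agent)

# Hypothetical channel-to-agent map
routes = {"whatsapp": "personal-assistant", "slack": "work-agent"}
```

Unmapped channels fall through to a default agent, which mirrors how a single gateway can serve many channels without per-channel configuration.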
Using ClawHub Plugins and Skills
ClawHub in 2026.3.22 turns skills into first-class plugins.
Common commands include:
- Search plugins:

  ```bash
  openclaw skills search email
  ```

- Install a ClawHub package:

  ```bash
  openclaw plugins install clawhub:email-inbox
  ```

- List installed plugins:

  ```bash
  openclaw plugins list
  ```

- Update skills and plugins:

  ```bash
  openclaw skills update
  openclaw plugins update
  ```
Skills define tools that agents can call, such as "read inbox", "create Jira issue", or "update calendar".
Connecting to Model Providers and Local Runtimes
During onboarding, you map agents to one or more model providers such as OpenAI GPT-5.4, Anthropic Claude, MiniMax, or open-source models via OpenRouter and Z.AI GLM.
Examples:
- Use GPT-5.4 mini as a fast default reasoning model, which one benchmark shows around 73 tokens per second compared with 46 tokens per second for a larger model on the same stack.
- Route heavy coding tasks to GLM-4.7 Flash running locally through vLLM or llama.cpp; community tests show 60 to 220 tokens per second on consumer GPUs depending on quantization.
For pure local models, many users attach Ollama or similar servers and then point OpenClaw to these endpoints. This keeps prompts and data on your hardware.
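As a sanity check on the throughput figures above, response time scales inversely with tokens per second:

```python
def generation_seconds(tokens, tokens_per_second):
    """Wall-clock seconds to stream a response of a given token length."""
    return tokens / tokens_per_second

# A 500-token reply at the two reported speeds:
fast = generation_seconds(500, 73)  # ~6.8 s
slow = generation_seconds(500, 46)  # ~10.9 s
```

For interactive agents, this difference compounds across multi-step tool loops, which is why a fast small model is a sensible default.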
Managing Tokens, Cost, and Session Health
OpenClaw sessions can consume many tokens because they send long histories, tool output, and a large system prompt. A tuning guide shows that context accumulation and tool output can account for over half of total token use.
Practical steps:
- Use cheaper models like GPT-5.4 mini, Claude Haiku, or local open-source models for routine tasks.
- Enable prompt caching for expensive models and keep the system prompt stable across requests.
- Limit Heartbeat intervals so background checks do not wake the agent every few minutes with full context.
- Monitor per-session cost and context usage with commands like openclaw /status.
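To see why context accumulation dominates spend, a back-of-the-envelope model helps. All prices and token counts below are placeholders, not real provider rates:

```python
def session_input_cost(rounds, system_tokens, avg_turn_tokens, price_per_1k):
    """Input-token cost of a session that resends full history each round.

    Round r resends the system prompt plus all r accumulated turns, so
    cumulative input tokens grow roughly quadratically with rounds.
    """
    total = sum(system_tokens + r * avg_turn_tokens for r in range(1, rounds + 1))
    return total / 1000 * price_per_1k

# 20 rounds, 2,000-token system prompt, 500-token turns, $0.15/1k (placeholder)
```

Trimming history or summarizing old turns attacks the quadratic term directly, which is why it saves more than simply switching models.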
Benchmark Results
The table below gathers reported metrics from public benchmarks and vendor tests that match typical OpenClaw setups.
| Setup | Model | Hardware | Throughput / Tokens per Second | Latency or Notes |
|---|---|---|---|---|
| OpenClaw agents on Tencent Cloud Lighthouse | GPT-4-class remote model | 2-core CPU, 4 GB RAM | 2,000 to 5,000 health-check requests per second; 500 to 1,200 echo requests per second | Health-check median latency under 5 ms; echo median 10 to 30 ms; P99 20 to 100 ms. |
| OpenClaw on ClawBox | Llama 3.1 8B (Q4) | Jetson Orin Nano, 8 GB | About 3.5 tokens per second | Rated "excellent" quality for chat and coding on edge hardware. |
| OpenClaw on ClawBox | Phi-3 Mini (Q4) | Jetson Orin Nano, 8 GB | About 6.8 tokens per second | Rated "good" quality and faster speed than larger models. |
| OpenClaw with OpenAI | GPT-5.4 mini | Cloud GPU (reference benchmark) | Around 73 tokens per second | About 60 percent faster than a larger GPT-5-class model at 46 tokens per second, with lower price. |
| OpenClaw with MiniMax | MiniMax M2.7 | Remote API | About 52.8 tokens per second | Token speed near the lower end among reasoning models in its price tier. |
| OpenClaw with local GLM-4.7 Flash | GLM-4.7 Flash | H200 or RTX-class GPUs | Between 60 and 220 tokens per second depending on GPU and quantization | Community benchmarks report 207 tokens per second on an H200 at single-user load and over 4,000 tokens per second at max throughput. |
These numbers show that OpenClaw overhead is usually small compared with the model speed and network latency. The main performance drivers are model type, hardware, and whether the model runs locally or behind a remote API.
Testing Details
Performance tests for OpenClaw often use synthetic workloads, such as HTTP health checks or echo endpoints, to measure the gateway and sandbox overhead. A Tencent Cloud guide describes testing on a 2-core, 4 GB Lighthouse instance and tracking throughput, median latency, P95 and P99 latencies, and failure rates for different levels of concurrent users.
In those tests:
- Health-check endpoints reached 2,000 to 5,000 requests per second with sub-5 ms median latency and sub-20 ms P99 latency.
- Echo tests with GPT-4-class backends showed median latencies from 2.1 seconds at 10 users to 5.2 seconds at 100 users, with failure rates under 5 percent up to 100 concurrent users.
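The P95 and P99 figures in such tests can be reproduced from raw latency samples; a minimal sketch using the nearest-rank method:

```python
import math

def percentile(samples, p):
    """p-th percentile of latency samples using the nearest-rank method."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-indexed nearest rank
    return ordered[rank - 1]

# Illustrative health-check latencies in milliseconds
latencies_ms = [3, 4, 4, 5, 5, 6, 7, 9, 14, 22]
```

Tail percentiles matter more than the median for agent workloads, since one slow hop in a multi-step tool chain delays the whole response.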
Model benchmarks come from providers and independent testers:
- MiniMax M2.7 analysts measure about 52.8 tokens per second on its own API.
- GLM-4.7 Flash tests on H200 GPUs reach up to 4,398 tokens per second at high concurrency and around 207 tokens per second for a single user.
- OpenAI data and third-party reporting show GPT-5.4 mini at roughly 2 times the speed of GPT-5 mini and about 60 percent faster than one larger GPT-5.4 variant in real use.
Token usage tests for OpenClaw also examine how much of a large context window the session history consumes, and how Heartbeat jobs and system prompts add to cost. One optimization guide breaks token use into context accumulation, tool output, system prompt, multi-round reasoning, model choice, and cache misses.
Comparison Table: OpenClaw vs Alternatives
The tools below all help run AI agents or assistants locally but have different goals and designs.
| Tool | Main Role | Hosting Style | Channels and UI | Plugin / Tool Ecosystem | Typical Use Cases |
|---|---|---|---|---|---|
| OpenClaw 2026.3.22 | AI agent operating system that runs long-lived agents on your hardware with a gateway and plugin system. | Self-hosted on Linux, macOS, Windows, or ClawBox; agents talk to cloud or local models. | Control UI dashboard, CLI, and integrations for WhatsApp, Telegram, Slack, Discord, email, and more. | ClawHub plugin marketplace with skills, channel adapters, web tools, and bundled runtimes. | Autonomous personal assistant, multi-channel inbox, calendar and task automation, research bots, and dev agents. |
| CrewAI | Python framework for building collaborative multi-agent crews with role-based agents. | Library that runs inside Python apps or services; can be self-hosted or cloud-hosted. | No native UI; integrates with notebooks, scripts, and external tooling. | Tools and integrations defined in Python; strong ecosystem through LangChain and other libraries. | Research pipelines, report generation, and internal multi-agent workflows where developers script flows. |
| AnythingLLM | Self-hosted and desktop app for chat-with-your-data and document RAG systems. | Desktop app and Docker images; local by default with optional cloud service. | Web or desktop chat UI with workspaces and document uploads. | RAG features, document connectors, and LLM provider integrations rather than a full agent OS. | Knowledge-base chat, internal FAQ bots, and small team search assistants. |
| Open Interpreter | Open-source AI terminal assistant that translates natural language into local code execution. | CLI tool that runs on Windows, macOS, and Linux; local code execution. | Terminal chat interface, plus Python API for embedding in apps. | Strong local code execution, browser control, and file manipulation tools. | Data analysis, automation scripts, and interactive coding sessions driven by natural language. |
Pricing Table
OpenClaw itself is open source. Costs come from hosting, model usage, and optional services.
| Product / Tier | License or Core Price | Main Cost Drivers |
|---|---|---|
| OpenClaw (self-hosted) | Free open-source software; no license fee. | Hardware (local server, ClawBox, or cloud VM), API usage for models such as GPT-5.4, Claude, MiniMax, or others, and storage for logs and memory files. |
| OpenClaw on ClawBox | Hardware purchase covers OpenClaw install and device; pricing depends on vendor and region. | Upfront ClawBox cost plus same ongoing model API or power usage, with 8 GB RAM and 512 GB SSD included. |
| CrewAI (open-source) | MIT-licensed framework with no license fee. | Underlying compute and model API usage in the environment where it runs; optional CrewAI Cloud with usage-based pricing not detailed in public docs. |
| AnythingLLM Desktop | Free desktop application for local use. | Local compute and disk; paid cloud tiers for team use and managed hosting. |
| Open Interpreter | Open-source project; free to install. | Local hardware and any remote model API costs if not using local-only models. |
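One way to reason about a dedicated box versus ongoing API spend is a simple breakeven calculation; the figures in the comment are placeholders, not vendor prices:

```python
def breakeven_months(hardware_cost, monthly_api_saving):
    """Months until a local-inference box pays for itself versus API spend."""
    return hardware_cost / monthly_api_saving

# e.g. a $600 edge box versus $50/month saved in API calls -> 12 months (placeholder figures)
```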
Unique Selling Proposition (USP)
OpenClaw stands out by treating AI agents as long-lived, multi-channel workers that live on your machine, not as single chats inside a browser tab. The 2026.3.22 release deepens this view with ClawHub, bundled web and search tools, pluggable sandboxes, and model-agnostic routing, so you can mix remote APIs with local models while keeping memory, tools, and channels in one coherent runtime.
Pros and Cons
Pros
- Self-hosted architecture that keeps memory and logs on hardware you control.
- Rich multi-channel gateway that connects to chat apps, email, and more from one agent system.
- ClawHub marketplace and plugin SDK for installing and updating skills without manual wiring.
- Strong sandbox and security hardening across exec, hooks, network, and device pairing in 2026.3.22.
- Per-agent reasoning settings and support for modern models like GPT-5.4 mini, MiniMax M2.7, and GLM-4.x series.
Cons
- Install and configuration complexity higher than desktop chat apps; requires comfort with terminals and configs.
- Token usage is high by default; without optimization, API bills can rise fast on remote models.
- Some 2026.3.22 packages have regressions, such as broken Control UI or WeChat plugin builds, which need patches or workarounds.
- Long-running agents and Heartbeat jobs demand stable hardware and uptime, closer to running a small server than a desktop app.
Quick Comparison Chart
| Aspect | OpenClaw 2026.3.22 | CrewAI | AnythingLLM | Open Interpreter |
|---|---|---|---|---|
| Primary focus | Self-hosted AI agent operating system with multi-channel gateway and plugins. | Python library for multi-agent orchestration. | Document-centric chat and RAG assistant for teams and individuals. | Terminal-based AI coding and automation assistant. |
| Install complexity | Medium to high; script, npm, or source plus onboarding wizard. | Medium; Python environment and library integration needed. | Low for desktop; medium for Docker self-hosting. | Low; pip install and terminal command. |
| Best for | Always-on personal or team agents that connect to many apps. | Developers who want code-level control over agent flows. | Users who want chat-with-documents with local or cloud models. | Technical users who want natural-language-driven code execution. |
| Model flexibility | Broad; supports OpenAI, Anthropic, MiniMax, GLM, local models, and more. | Broad; depends on how developers integrate providers. | Broad; supports many LLM providers but focused on chat and RAG. | Moderate; can call different LLMs but mainly emphasizes local execution. |
| UI options | Control UI, CLI, and chat channels. | No built-in UI; up to the host app. | Web and desktop UIs with workspaces. | Terminal UI and scripting hooks. |
Demo or Real-World Example: Email and Calendar Assistant on a Local Node
This example describes a simple workflow for a personal assistant agent on a Linux VPS or home server.
1. Install OpenClaw
- Use the official script:

  ```bash
  curl -fsSL https://openclaw.ai/install.sh | bash
  ```

- Complete onboarding, choose a default model like GPT-5.4 mini or Claude Haiku, and install the daemon.
2. Connect a Model and Optimize Costs
- Add API keys for OpenAI and Anthropic, or configure a local Ollama server and GLM-4.x model.
- For the main assistant agent, pick GPT-5.4 mini or a similar fast small model to keep latency and price low.
- For heavy research tasks, allow a larger model and enable prompt caching to cut token spend.
3. Install Email and Calendar Skills via ClawHub
- Search for skills:

  ```bash
  openclaw skills search email
  openclaw skills search calendar
  ```

- Install a packaged inbox skill and a calendar integration through ClawHub:

  ```bash
  openclaw plugins install clawhub:email-inbox
  openclaw plugins install clawhub:calendar-sync
  ```

- Configure OAuth or app passwords for your email and calendar services through the Control UI or config commands.
4. Define Agent Behavior and Memory
- Open the agent's configuration and memory Markdown files on disk.
- Write clear instructions for how it should sort email, label priorities, and schedule events.
- Keep the core system prompt stable to benefit from caching and predictable behavior.
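A stable system prompt pays off because cached prefix tokens are usually billed at a discount. A rough per-request model, with placeholder prices and token counts:

```python
def cached_input_cost(prompt_tokens, cached_fraction,
                      base_price_per_1k, cached_price_per_1k):
    """Per-request input cost when a stable prefix hits the prompt cache."""
    cached = prompt_tokens * cached_fraction
    fresh = prompt_tokens - cached
    return (cached * cached_price_per_1k + fresh * base_price_per_1k) / 1000

# 10k-token prompt, 80% cache hit, $0.15/1k fresh vs $0.015/1k cached (placeholders)
```

Every edit to the system prompt invalidates the cached prefix, so keeping it stable preserves both the discount and predictable agent behavior.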
5. Run the Workflow in Daily Use
- Keep the OpenClaw gateway running as a daemon on your server or ClawBox.
- Each morning, trigger a command or schedule a Heartbeat that asks the agent to "Review new email from the last 24 hours, mark urgent items, and propose a calendar plan."
- The agent reads your inbox via the email skill, writes notes into Markdown memory files, and proposes calendar entries through the calendar skill.
- You approve or adjust changes in your calendar and email client, while the agent keeps state between runs and refines behavior over time.
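The Heartbeat trigger in step 5 is essentially an interval check; a minimal sketch, not OpenClaw's actual scheduler:

```python
def heartbeat_due(last_run, now, interval_seconds):
    """True when enough time has passed to wake the agent for a background check."""
    return now - last_run >= interval_seconds
```

Longer intervals mean fewer wake-ups that each resend full context, which is the cost lever mentioned in the token-management section.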
This pattern shows how OpenClaw turns a set of tools and models into a stable automation agent that lives beside your regular apps, rather than inside a single chat window.
Conclusion
OpenClaw 2026.3.22 moves from a powerful tool to a full agent operating system, with plugins, sandboxes, and modern model support. Installing it locally requires some command-line work but rewards you with a self-hosted, multi-channel assistant that you control.
FAQ
1. Is OpenClaw 2026.3.22 stable enough for production?
The core gateway and agent runtime are mature, but the 2026.3.22 npm package has known Control UI and channel issues, so many users treat this version as an early upgrade and wait for patched builds for production.
2. Do I need a GPU to run OpenClaw?
OpenClaw runs fine on CPU-only servers, though you may use remote APIs or smaller local models. A GPU helps if you want fast, large local models like GLM-4.7 Flash or 8B-class Llama models.
3. Can I run OpenClaw only with local models and no external API calls?
Yes. Many users pair OpenClaw with local engines like Ollama or vLLM and attach open-source models, which keeps data and tokens on their own hardware.
4. How does OpenClaw compare with tools like CrewAI or LangChain?
CrewAI and LangChain are Python frameworks for building workflows inside applications, while OpenClaw is an always-on agent OS with gateways, sandboxes, and channels. They can work together, for example by calling CrewAI-based tools from OpenClaw skills.
5. What are the main ways to reduce OpenClaw token costs?
The most effective steps are using small, fast models for routine work, enabling prompt caching, trimming session history, controlling Heartbeat frequency, and routing heavy tasks to cheaper or local models, which one guide reports can reduce costs by around 95 percent in tuned setups.