Running Local Deep Researcher with Ollama on Ubuntu combines powerful AI-driven research capabilities with the privacy and control of local processing. This setup is ideal for users who need comprehensive, citation-backed reports without relying on cloud services.
Local Deep Researcher is an AI-powered assistant that transforms complex queries into detailed reports using iterative analysis. Key features include:

- Iterative query refinement over multiple research cycles
- Multi-source integration through configurable web search providers
- Citation-backed reports generated entirely with local model inference
Before beginning, ensure your system meets these requirements:

- A recent Ubuntu release (22.04 LTS or newer)
- Python 3 and Git (installed in the next step)
- Enough RAM to hold your chosen model; 12B-class models generally need 8 GB or more, and an NVIDIA GPU is optional but speeds up inference
Update your system packages and install core dependencies:
sudo apt update && sudo apt upgrade -y
sudo apt install python3 python3-pip git python3-venv -y
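You can verify the tools are in place before continuing:

python3 --version
git --version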
Install Ollama, the local LLM runtime, and enable it as a system service:
curl -fsSL https://ollama.com/install.sh | sh
sudo systemctl start ollama
sudo systemctl enable ollama
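Ollama listens on port 11434 by default; a quick check confirms the service is up:

ollama --version
curl http://localhost:11434/api/version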
Pull a suitable language model for research tasks (Gemma3 is recommended):
ollama pull gemma3:12b
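Once the download completes, list your installed models and run a quick smoke test:

ollama list
ollama run gemma3:12b "Explain nuclear fusion in one sentence."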
Clone the repository and set up the Python virtual environment:
git clone https://github.com/langchain-ai/local-deep-researcher
cd local-deep-researcher
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
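Note that the virtual environment is per-shell; reactivate it in any new terminal before running the commands that follow:

source .venv/bin/activate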
Edit the .env file to enable web searches:
SEARCH_PROVIDER=searxng
SEARXNG_INSTANCE=https://searx.example.com
# Alternative: BRAVE_API_KEY=your-key-here
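If you would rather not rely on a public SearXNG instance, one option is to self-host it. A minimal sketch using the official Docker image (assuming Docker is installed) looks like this, with SEARXNG_INSTANCE then pointed at http://localhost:8080:

docker run -d --name searxng -p 8080:8080 searxng/searxng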
Launch the browser-based control panel:
python -m local_deep_research.web.app
Access the control panel at http://localhost:5000 to manage and monitor your research projects.
For advanced usage, run research tasks directly from the command line:
python -m local_deep_research.main --topic "Fusion Energy Developments" --cycles 5
Optional Flags:

- --depth: Set the number of research iterations (default is 3)
- --sources: Specify or limit particular data repositories
- --format: Choose the output format (e.g., markdown or pdf)
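As an illustration, the flags can be combined in a single run (the topic and values here are arbitrary):

python -m local_deep_research.main --topic "Fusion Energy Developments" --depth 5 --format markdown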
To add new search engines, modify config/search_engines.yaml:
custom_engine:
  name: ArXiv
  url: https://arxiv.org/search/?query={query}
  parser: academic
  weight: 0.8
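The {query} placeholder is presumably substituted with the URL-encoded search terms at request time; you can sanity-check the resulting URL by expanding it manually:

curl -s "https://arxiv.org/search/?query=fusion+energy" | head -n 20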
If you have an NVIDIA GPU, improve performance by rebuilding llama-cpp-python with CUDA acceleration:
pip uninstall llama-cpp-python -y
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python --no-cache-dir
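Note that recent llama-cpp-python releases renamed the build flag; if the command above fails, the newer GGML variant is worth trying, and nvidia-smi confirms the GPU is visible to the driver:

CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python --no-cache-dir
nvidia-smi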
| Problem | Solution |
|---|---|
| Out-of-memory (OOM) | Use a smaller model: ollama pull gemma3:4b |
| Slow search performance | Cache results using Redis: sudo apt install redis-server |
| Citation errors | Ensure cite_sources: True is set in config.yaml |
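If you take the Redis route, enable the service so caching survives reboots:

sudo systemctl enable --now redis-server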
Running Local Deep Researcher with Ollama on Ubuntu empowers you to conduct deep, reliable research while maintaining full control over your data. Its iterative query refinement and multi-source integration produce comprehensive, citation-backed reports while all model inference stays on your own hardware.
This tool is particularly valuable for scientific research, policy analysis, and any application where data privacy is paramount. Combine it with academic database API keys and keep your models updated through Ollama for optimal performance.