AutoCodeRover is an AI-powered development assistant that streamlines software development by automating bug fixes, feature implementations, and code optimizations. Designed for developers working with complex systems, it supports three key workflows: GitHub issue mode, local issue mode, and SWE-bench mode.
This comprehensive guide walks through Linux installation methods, configuration best practices, and real-world use cases.
Before installation, ensure your system meets these requirements:
- conda (Miniconda3 or Anaconda) or Docker Engine 24.0+

There are two primary methods to set up AutoCodeRover on Linux: using Docker or a local installation with conda.
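Before choosing a method, you can confirm the prerequisites from a terminal (this assumes conda and/or the Docker CLI are already on your PATH):

conda --version    # prints the installed conda version (Miniconda3 or Anaconda)
docker --version   # should report Docker Engine 24.0 or newer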
Using Docker is the recommended method for running AutoCodeRover, as it simplifies dependency management and ensures a consistent environment.
Steps:
Build the Docker image:
docker build -f Dockerfile.minimal -t acr .
Set up API keys. Obtain an OpenAI API key and export it as an environment variable:
export OPENAI_KEY=sk-YOUR-OPENAI-API-KEY-HERE
For Anthropic models, set the Anthropic API key:
export ANTHROPIC_API_KEY=sk-ant-api...
Similarly, set the Groq API key.
Run the Docker container:
docker run -it -e OPENAI_KEY="${OPENAI_KEY:-OPENAI_API_KEY}" acr
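If you want more than one provider available inside the container, the exported keys can be forwarded in the same docker run call (a sketch; GROQ_API_KEY is an assumed variable name for the Groq key):

docker run -it \
  -e OPENAI_KEY="${OPENAI_KEY}" \
  -e ANTHROPIC_API_KEY="${ANTHROPIC_API_KEY}" \
  -e GROQ_API_KEY="${GROQ_API_KEY}" \
  acr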
Alternatively, you can set up AutoCodeRover locally by managing Python dependencies with environment.yml. This method is recommended for SWE-bench experiments.
Steps:
conda env create -f environment.yml
conda activate auto-code-rover
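End to end, the local setup looks roughly like this (a sketch; it assumes the code is cloned from the nus-apr/auto-code-rover GitHub repository):

git clone https://github.com/nus-apr/auto-code-rover.git   # fetch the source
cd auto-code-rover
conda env create -f environment.yml                        # create the conda environment
conda activate auto-code-rover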
AutoCodeRover can be run in three modes: GitHub issue mode, local issue mode, and SWE-bench mode.
This mode allows you to run AutoCodeRover on live GitHub issues by providing a link to the issue page.
Steps:
Prepare the following for the target issue:
- Clone link: the link used to clone the project (what you would pass to git clone ...).
- Commit hash: any project version that has the issue (the commit you would git checkout ...).
- Task id: replace <task id> with a string that identifies the issue.
The generated patch is written to selected_patch.json in the output directory. To start the run:
cd /opt/auto-code-rover
conda activate auto-code-rover
PYTHONPATH=. python app/main.py github-issue --output-dir output --setup-dir setup --model gpt-4o-2024-05-13 --model-temperature 0.2 --task-id <task id> --clone-link <link for cloning the project> --commit-hash <any version that has the issue> --issue-link <link to issue page>
Example:
PYTHONPATH=. python app/main.py github-issue --output-dir output --setup-dir setup --model gpt-4o-2024-05-13 --model-temperature 0.2 --task-id langchain-20453 --clone-link https://github.com/langchain-ai/langchain.git --commit-hash cb6e5e5 --issue-link https://github.com/langchain-ai/langchain/issues/20453
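When the run completes, the selected patch ends up in selected_patch.json under the output directory; since the exact subdirectory depends on the run (an assumption about the layout), a quick way to locate and inspect it is:

find output -name selected_patch.json                      # locate the generated patch file
cat "$(find output -name selected_patch.json | head -n 1)" # view its contents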
This mode allows you to run AutoCodeRover on a local repository and a file containing the issue description.
Steps:
The generated patch is written to selected_patch.json in the output directory. To start the run:
cd /opt/auto-code-rover
conda activate auto-code-rover
PYTHONPATH=. python app/main.py local-issue --output-dir output --model gpt-4o-2024-05-13 --model-temperature 0.2 --task-id <task id> --local-repo <path to the local project repository> --issue-file <path to the file containing issue description>
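As a concrete illustration (the repository path, issue file, and task id below are hypothetical placeholders):

PYTHONPATH=. python app/main.py local-issue --output-dir output --model gpt-4o-2024-05-13 --model-temperature 0.2 --task-id myrepo-local-1 --local-repo /path/to/myrepo --issue-file /path/to/issue.txt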
To run AutoCodeRover on SWE-bench task instances, a local setup of AutoCodeRover is recommended. Further details on this mode can be found in the AutoCodeRover documentation.
AutoCodeRover can resolve more issues when test cases are available; for a demonstration, see the provided video acr_enhancement-final.mp4.
| Symptom | Solution |
|---|---|
| Docker build fails | Ensure the Dockerfile uses FROM python:3.10-slim |
| ModuleNotFoundError | Run conda env update -f environment.yml |
| API key rejected | Verify key validity on the OpenAI dashboard |
| Patch generation fails | Increase --model-temperature to 0.3-0.5 |
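For the "API key rejected" row, one quick command-line check is to call OpenAI's public models endpoint with the key exported earlier (a sketch; a valid key returns a JSON list of models, an invalid one returns an authentication error):

curl -s https://api.openai.com/v1/models -H "Authorization: Bearer ${OPENAI_KEY}"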
A few additional tips:
- Add the --gpus all flag to docker run for faster LLM inference.
- Experiment with alternative models such as gpt-3.5-turbo or claude-3-opus-20240229.
- Clear the ~/.cache/acr cache directory periodically.
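The cache mentioned in the last tip can be removed with a single command (assuming AutoCodeRover's cache lives at ~/.cache/acr as noted above):

rm -rf ~/.cache/acr   # clear AutoCodeRover's local cache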
AutoCodeRover significantly enhances developer productivity by automating routine coding tasks. Setting up AutoCodeRover on Linux involves a few straightforward steps, whether you choose to use Docker or a local installation.
Need expert guidance? Connect with a top Codersera professional today!