Quick answer: Cursor slows on big repos because its index, file watcher, and AI context all scale with file count. The fastest wins are a stack-aware .cursorignore (cuts indexing from minutes to seconds), clearing the workspace index after branch swaps, tightening the repo map, and upgrading to Cursor 3 — its Apr 2 2026 release shipped a 2x faster indexer.
Why does Cursor slow down on big repos?
Cursor is a fork of VS Code with three extra background jobs bolted on: a semantic code index, a richer file watcher that feeds Tab completion, and an AI context selector that has to decide what to ship to the model on every request. All three scale with file count, not with the size of the file you happen to be looking at.
On a 50k-LOC Next.js project the cost is invisible. On a 1M-LOC monorepo with node_modules, generated protobufs, lockfiles, and a 200 MB .next folder, the same three jobs can saturate disk IO for 5 to 15 minutes after every open and eat 4 to 8 GB of RAM at steady state. The editor still works — it just feels like wading through glue. Forum reports of Cursor consuming 100 GB of RAM in extended sessions are almost always indexer churn plus a context window that has never been reset.
The seven fixes below are ordered roughly by leverage. The first one — a stack-aware .cursorignore — solves more than half of the slow-Cursor reports we see from engineers we work with. The rest tackle the cases where indexing alone isn't enough.
How do I write a .cursorignore per stack?
The single biggest performance lever is telling Cursor to stop indexing files that have no business being in semantic search. Cursor reads .cursorignore at the project root using gitignore syntax. Hierarchical .cursorignore (enabled in Settings) walks parent directories and is the right default for monorepos.
Generated code, dependency trees, build artifacts, lockfiles, and large binary fixtures should always be excluded. Test fixtures, snapshots, and database migrations are judgment calls — exclude them if they're noisy, keep them if you actually search them. Below are starting points for the five stacks where we most often see Cursor crawl.
Next.js / Node monorepo:
# .cursorignore — Next.js / Node
node_modules/
.next/
.turbo/
dist/
build/
coverage/
.vercel/
*.log
*.lock
package-lock.json
yarn.lock
pnpm-lock.yaml
# generated types
next-env.d.ts
**/*.generated.ts
**/__generated__/
# fixtures + snapshots
**/__snapshots__/
**/fixtures/**/*.json
Rails:
# .cursorignore — Rails
tmp/
log/
vendor/
public/assets/
public/packs/
storage/
node_modules/
coverage/
*.log
Gemfile.lock
yarn.lock
db/schema.rb
db/structure.sql
Go monorepo:
# .cursorignore — Go
vendor/
bin/
dist/
*.pb.go
*.pb.gw.go
**/mock_*.go
**/*_mock.go
# generated swagger / openapi
**/api/gen/
go.sum
Java / Maven (or Gradle):
# .cursorignore — Java / Maven
target/
build/
.gradle/
.idea/
.mvn/
*.class
*.jar
*.war
# generated sources
**/generated-sources/
**/generated/
Python:
# .cursorignore — Python
.venv/
venv/
__pycache__/
.pytest_cache/
.mypy_cache/
.ruff_cache/
*.pyc
*.pyo
dist/
build/
*.egg-info/
pip-wheel-metadata/
poetry.lock
uv.lock
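Before committing an ignore file, it helps to see where the bulk actually lives. A quick shell sketch, run from the repo root, that ranks top-level directories by file count so the big offenders float to the top (the 20-entry cutoff is arbitrary):

```shell
# Rank top-level directories by file count to find .cursorignore
# candidates; directories with tens of thousands of files are the
# usual suspects (node_modules, build output, generated code).
for d in */; do
  printf '%8d  %s\n' "$(find "$d" -type f | wc -l)" "$d"
done | sort -rn | head -20
```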
On a 1M-file monorepo we measured indexing dropping from roughly 12 minutes to under 3 once a well-scoped .cursorignore was committed. The community has reported similar numbers — indexing time falling from 15 minutes to 4 after excluding three unrelated sub-apps in an apps/packages monorepo.
When should I clear the workspace index and restart?
Cursor's index is durable across sessions. That's a feature most of the time and a problem after a few specific events: a large branch swap, a dependency bump that rewrote thousands of files, a Git LFS pull, or a generated-code regeneration that bloated the index without invalidating stale entries.
The fix is Settings → Indexing & Docs → Clear Workspace Index, then a full restart of Cursor. The next open will re-index from scratch, which costs you a few minutes but produces a clean, dense index. If autocomplete suggestions start referencing files you deleted three branches ago, that's the signal.
Clearing the index also unsticks the file watcher in rare cases where it has lost track of which files are still on disk. If you see the indexer status spinning forever in the status bar, the clear-and-restart is the right move.
Where this masks a deeper problem: if you find yourself clearing the index every few days, the real issue is almost always missing .cursorignore entries letting churning generated files back into the index. Fix the ignore file, not the symptom.
How do I reduce repo-map size and context-window pressure?
The advertised 200K context window of Claude Sonnet 4.6 in Cursor is not what you actually get. Cursor's system prompt, codebase index results, conversation history, and auto-included file contents all spend tokens before your request gets its turn. The usable space lands closer to 40K to 60K tokens. Tab completion takes another 2K to 4K.
That means every file Cursor's repo map decides to include shrinks the slice of context available for the actual change you want to make. On big repos this is the difference between a one-shot working diff and an agent that loses track of step one by step four.
Three levers, in order of impact:
- Tighten the repo map. Cursor's settings include knobs for repo-map size and codebase indexing scope. Reduce the repo map first; you'll notice immediately whether the agent still has enough context to navigate.
- Open the package, not the root. In an apps/packages monorepo, opening apps/web directly instead of the repo root cuts the index by an order of magnitude and keeps the AI focused.
- Start new chats when you switch packages. Conversation history is part of the budget. A stale 80-message thread about the mobile app is dead weight when you're now editing the API.
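To keep a request inside that budget, a rough chars-per-token heuristic is enough. A hedged sketch — the ~4 characters per token ratio is a common rule of thumb, and the 40-60K figure is the usable-window estimate above, not a value Cursor exposes:

```shell
# Estimate the token cost of files before @-mentioning them, using the
# rough ~4 characters/token heuristic. Compare the result against the
# ~40-60K tokens of usable context estimated above.
est_tokens() { echo $(( $(cat "$@" | wc -c) / 4 )); }
# usage: est_tokens src/api/*.ts
```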
Composer or Chat: which one handles big-file work?
Composer and Chat solve different problems. Chat is for streaming questions and small edits — you ask, the model answers, you accept or discard. Composer holds context across multiple files and produces a coordinated diff. On a big repo the cost profile is different.
Composer reads files into context as the agent runs, accumulating tool outputs across steps. A multi-step refactor touching eight files will fill the context window faster than a single-file edit, and by step three or four the agent is operating with degraded awareness of what it did in step one. The upside is that the diff is coherent — Composer thinks of the change as one unit.
Practical rule: use Chat for surgical edits inside one file, especially when you have the file open and can paste the relevant chunk. Use Composer for changes that cross file boundaries — adding a new API endpoint that touches the route, controller, service, and test file, for example. If Composer keeps running out of context on a refactor, that's a signal to scope the change smaller, not to push through.
How do I disable inline AI for noisy files?
Inline AI suggestions (Tab completion, ghost text) are great in source files and pointless in generated code. Worse, they prime the model with garbage tokens from minified bundles or 200-line JSON fixtures. Two ways to silence the noise.
The blunt instrument is .cursorignore — anything in there is invisible to the indexer and to inline AI. That's the right move for *.min.js, *.min.css, lockfiles, and machine-generated protobufs.
The surgical instrument is Settings → Features → Cursor Tab → Disabled file patterns, which silences inline completion on a glob without removing the file from semantic search. Use this when you want grep and chat to still see a file (a vendored SDK, say) but don't want Tab firing on every keystroke inside it.
Common patterns we disable inline AI on across most projects:
**/*.min.js
**/*.min.css
**/*.lock
**/*.snap
**/__snapshots__/**
**/fixtures/**/*.json
**/*.generated.*
How much RAM does Cursor actually need on an M-series Mac?
Apple Silicon's unified memory means Cursor's index, the renderer, all loaded extensions, and any local model assistants share the same pool with the OS, your browser, Docker, Slack, and everything else. There is no "swap is fine" answer — swapping on M-series is fast, but swapping under sustained pressure burns SSD write cycles and adds tail latency to every IDE action.
Rough numbers from what we see in the wild:
- 50k LOC project, 16 GB Mac: Cursor sits around 1.2 to 2 GB resident. No swap pressure, snappy.
- 250k LOC project, 16 GB Mac: 2 to 4 GB resident. Workable if Docker and Chrome aren't competing.
- 1M LOC monorepo, 16 GB Mac: 6 to 10 GB resident with the index loaded, plus 2 to 4 GB more during an active Composer run. Add a running Docker stack and you are swapping continuously. Activity Monitor's memory pressure graph stays yellow or red.
- 1M LOC monorepo, 32 GB Mac: the same 6 to 10 GB sits comfortably in RAM. Swap pressure drops to zero. This is the configuration we recommend for anyone working full-time in a monorepo of that size.
- 1M LOC monorepo, 64 GB Mac: overhead room for parallel agents and a local model on top.
If you're on a 16 GB Mac in a big repo and have done the .cursorignore work and still hit slowdowns, the bottleneck may not be Cursor. Open Activity Monitor, sort by memory, and look at swap used. Above 4 to 6 GB swap, hardware is the answer — either close half your apps or move to a 32 GB machine.
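If you prefer the terminal to Activity Monitor, here's a portable sketch; matching the process name "cursor" is an assumption about how the binary reports itself, so adjust the pattern if your process list differs:

```shell
# Sum resident memory (MB) across all processes whose name matches a
# pattern; works with both BSD (macOS) and GNU ps. On macOS, pair it
# with `sysctl vm.swapusage` to read swap in use.
rss_mb() {
  ps axo rss,comm | awk -v p="$1" \
    'tolower($0) ~ tolower(p) { s += $1 } END { print int(s / 1024) }'
}
rss_mb cursor   # prints 0 if no matching process is running
```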
Should I upgrade to Cursor 3?
Cursor 3 shipped on Apr 2 2026 with a redesigned interface centered on running agents in parallel across repos and environments — local, worktrees, cloud, and remote SSH. Less discussed but more relevant for big-repo users: the codebase indexer was rewritten, with semantic search and instant grep both materially faster. Real-world reports put indexing at roughly half the wall-clock time of Cursor 2 on the same repo.
If you're still on Cursor 2 and your team has been complaining about index times, this is the lowest-effort fix on the list. Update from the in-app updater or download from cursor.com. Re-index once on the new version; the new index format is built from scratch.
The parallel-agents flow is also worth knowing about for big repos: it lets you delegate the slow part of a refactor (e.g. "update every consumer of this deprecated helper") to a background agent in a worktree while you keep working in the foreground. The performance win comes from not having to serialize your own work behind the agent.
Companion guide
For everything Cursor — features, workflows, comparisons — see our Cursor IDE complete guide for 2026.
What if none of these fixes work?
If you've worked through all seven and Cursor is still unusable on your repo, the problem is either a specific bug or a tool mismatch. The decision tree we use:
- Reproduce with extensions off. Launch with cursor --disable-extensions. If the slowdown disappears, a VS Code extension is the culprit — re-enable one at a time until you find it. The usual suspects are heavy linters (ESLint on a 10k-file repo), Docker integrations, and any extension that itself runs a watcher.
- File a focused bug report. If a clean install with no extensions and a tight .cursorignore still chugs, post a reproduction on the Cursor forum with the repo size, file count, your hardware, and the exact action that's slow. Cursor's team is responsive and "big-repo slow" is on their radar.
- Downgrade temporarily. If a recent Cursor update made things worse, the release notes page lists previous versions. Pin to the last good build while you wait for a fix.
- Reach for a different tool for the heavy parts. For long, autonomous, multi-hour refactors on a million-line repo, Claude Code (running headless in a terminal) or a parallel agent in a worktree is often a better fit than the IDE-bound Cursor agent. Use Cursor for interactive editing, Claude Code for long batches, and don't try to force one tool to do both at the same time.
- Split the repo. If your monorepo has grown past what any IDE can comfortably hold, the long-term answer is package-level workspaces — open the package you're working in, not the root, and accept that whole-repo refactors need a different workflow.
Hire engineers who already know this stack
Fixing Cursor on a big repo is partly tooling and partly judgment — knowing which files actually matter to your search, which refactors to scope tight, when to give up and ship a smaller change. That judgment comes from time inside production codebases at this size, not from a tutorial.
If you're building or extending a team that works in a large monorepo with AI tooling in the loop, Codersera places vetted remote engineers who already work this way — comfortable with .cursorignore hygiene, repo-map tradeoffs, and the discipline of scoping AI-assisted changes so they actually merge. Risk-free trial, two-week engagement window, full technical fit before you commit.
FAQ
Why is Cursor using 100 GB of RAM?
Almost always a runaway indexer plus a long-lived context window. Quit Cursor, commit a .cursorignore that excludes node_modules, build outputs, lockfiles, and generated code, then reopen. If memory still climbs unbounded over hours, file a bug with the repo size and your hardware.
Does .cursorignore affect Git?
No. .cursorignore is read only by Cursor for indexing and inline AI; it doesn't touch Git tracking, your build, or any other tool. You can commit it to the repo or keep it local — committing is recommended so the whole team benefits.
Is Cursor slower than VS Code on big repos?
Yes, marginally, because of the extra indexer and AI context jobs. The gap closes to near zero once .cursorignore is tight and inline AI is disabled in noisy patterns. On Cursor 3 with a clean index, the overhead is usually well under 10 percent in CPU and a few hundred MB of RAM.
Should I disable codebase indexing entirely?
Only if you never use semantic search or @Codebase. Turning it off in Settings → Features saves roughly 800 MB of RAM on large repos but also removes the feature that makes Cursor smarter than vanilla VS Code with an LLM extension. For most teams the right call is a tight .cursorignore, not killing the index.
What files slow Cursor down the most?
Generated code (protobufs, GraphQL types, OpenAPI clients), minified bundles, lockfiles, large JSON fixtures, and dependency directories. Files over 5,000 lines also hurt — Cursor's tokenizer and the inline AI both struggle with them. Split or refactor very long files before letting an agent touch them.
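To find those oversized files before an agent trips over them, a quick sketch — the extensions and the 5,000-line threshold are just the examples from above, so swap in your own:

```shell
# List source files over 5,000 lines, skipping dependency dirs.
# These are candidates to split before letting an agent edit them.
find . -type f \( -name '*.ts' -o -name '*.py' -o -name '*.go' \) \
  -not -path '*/node_modules/*' -exec wc -l {} + \
  | awk '$1 > 5000 && $2 != "total" { print }'
```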
Does clearing the workspace index lose data?
No code is touched. You only lose the cached embeddings that power semantic search and Cursor's AI context selection. They rebuild on the next open, which takes a few minutes on a large repo but produces a cleaner index.
How much RAM do I need for a 1M-LOC repo?
32 GB is the floor for comfortable full-time work in Cursor on a million-line repo, especially if Docker, Chrome, and Slack are running. 16 GB will work for short sessions or after closing competing apps, but you'll spend time fighting swap. 64 GB is overhead for parallel agents and local model assistants on top.
Is Cursor 3 worth upgrading to for performance alone?
Yes. The Apr 2 2026 release ships a roughly 2x faster indexer and a redesigned agent flow that lets you push slow work into background worktrees. If your team is still on Cursor 2 and complaining about index times, this is the lowest-effort fix available.