The 15 MCP Servers Worth Wiring Into Claude Code and Cursor (2026)

A curated list of the 15 MCP servers worth installing for Claude Code and Cursor in 2026 — what each one does, the install command, when it shines, and how to stay under Cursor's 40-tool ceiling.

Quick answer. The 15 MCP servers worth installing in Claude Code or Cursor in 2026 cover seven jobs: code and repo access (filesystem, GitHub, Git), databases (Postgres, SQLite), web grounding (Brave Search, Fetch), reasoning aids (Memory, Sequential Thinking), team systems (Slack, Linear, Notion, Sentry), browser automation (Playwright), and date math (Time). Pick four to six, not all 15 — Cursor's 40-tool ceiling fills up faster than you think.

The Model Context Protocol catalogue went from a few dozen servers at the start of 2025 to more than 500 public servers by the time Google Colab shipped its own in April 2026. Most of them are noise. A few are infrastructure you will rely on every day — once you find them.

This is an opinionated, curated-by-stack list of the 15 servers we install on a typical engineering laptop running Claude Code or Cursor against a real codebase. For each one: what it does, the install command in JSON config form, when it earns its slot, what it costs, and the gotchas worth knowing before you wire it up.

What is MCP and why do you need a server?

The Model Context Protocol (MCP) is a small open standard that lets an AI agent talk to external tools and data sources through a uniform JSON-RPC interface. An MCP server is a process that speaks this protocol — it exposes a set of tools (functions the agent can call) and resources (data the agent can read) and the agent's host (Claude Code, Cursor, Claude Desktop, Codex, etc.) routes calls to it.
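To make "uniform JSON-RPC interface" concrete, here is a sketch of what a tool invocation looks like on the wire, per the MCP spec's tools/call method — the tool name and arguments are illustrative, not from any particular server:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "read_file",
    "arguments": { "path": "src/index.ts" }
  }
}
```

The server replies with a result payload containing content blocks (text, images, resources) that the host feeds back into the model's context. Every server speaks this same shape, which is what makes them interchangeable across hosts.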

Without MCP, every agent has to ship its own bespoke integration for every tool. With MCP, you install one server — say, GitHub — and every MCP-compatible agent on your machine immediately has issues, PR review, code search, and repo ops. The protocol is the integration; the server is the implementation. That is the whole pitch, and after six months of daily use it is genuinely how it feels in practice.

How does the 40-tool ceiling change which servers you install?

Cursor has a soft ceiling of roughly 40 active tools across all your MCP servers combined. Exceed it and two things happen: you get a warning, and the agent silently loses access to some tools. Even on Claude Code, which raised its ceiling further in early 2026, the underlying problem holds — past about 50 visible tools the model starts picking the wrong one, because tool descriptions all sit in the context window and the selection task gets noisier the more options there are.

This matters because most MCP servers expose 5–15 tools each, not one. The GitHub server alone is roughly 20 tools. Playwright is roughly 25. Install six well-chosen servers and you are at the ceiling; install ten and you are over it. The right mental model is not which servers exist but which four to six should be on at any given moment, with the rest behind a per-project enable flag.

Practical pattern: keep filesystem, GitHub, Git, Fetch, and one search server on globally. Add the rest per-project in .cursor/mcp.json or .mcp.json at the repo root.
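As a sketch of that pattern, a per-project .cursor/mcp.json for a hypothetical repo that needs browser automation and a local database might look like this — the connection string is a placeholder, and the five global servers stay in your user-level config:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["-y", "@playwright/mcp@latest"]
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost:5432/devdb"]
    }
  }
}
```

Cursor merges this with the global file, so the project gets its two heavyweight servers without inflating the tool count everywhere else.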

Which 15 MCP servers are worth installing?

1. Filesystem

The foundational server. Anthropic's official reference implementation gives the agent read and write access to a directory you whitelist — not your whole disk. Every coding workflow uses it. Tools: read_file, write_file, list_directory, search_files, move_file, edit_file, and a handful more.

Install.

{ "mcpServers": { "filesystem": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/code"] } } }

When it shines. Any time you want the agent to operate on local files without round-tripping through git. Gotcha. The path you pass is a hard whitelist — pick a parent directory that covers all your repos rather than fighting the agent every time it tries to read something one folder up.

2. GitHub

The original Anthropic reference server was archived in May 2025; GitHub took over and shipped github-mcp-server, rewritten in Go in collaboration with Anthropic. It is the version you want. Issues, pull requests, code search, repo metadata, code scanning, and a get_me tool that lets the agent resolve "me" in natural-language requests to the authenticated user.

Install.

{ "mcpServers": { "github": { "command": "docker", "args": ["run", "-i", "--rm", "-e", "GITHUB_PERSONAL_ACCESS_TOKEN", "ghcr.io/github/github-mcp-server"], "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_..." } } } }

When it shines. Triage workflows ("summarise the last 20 issues"), PR review prep, cross-repo code search. Gotcha. A fine-grained token scoped to one org is safer than a classic PAT — the agent will try to query everything it can see.

3. Git

Local repo operations: commits, branches, diffs, blame, log. Distinct from the GitHub server — this one operates on the working copy on disk, no network. Maintained as an active reference server.

Install.

{ "mcpServers": { "git": { "command": "uvx", "args": ["mcp-server-git", "--repository", "/Users/you/code/your-repo"] } } }

When it shines. "Show me what changed since main" / "who last touched this function" / "draft a commit message from the staged diff". Gotcha. Requires uv installed; if you do not have it, run pip install uv first. The Python server is faster to start than a node-based equivalent.

4. Postgres

SQL access to a Postgres database, read-only by default. The original reference server was archived in May 2025; the community-maintained fork on npm under the same package name remains the most-installed option, and several vendors (Supabase, Neon) now ship official servers tuned for their hosted Postgres.

Install.

{ "mcpServers": { "postgres": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://user:pass@localhost:5432/yourdb"] } } }

When it shines. Schema-aware debugging ("why does this query return zero rows?"), data-shape exploration before writing a migration. Gotcha. Default is read-only for a reason — never enable writes against production from an agent loop. Point at a read replica or a local snapshot.

5. SQLite

Local SQL against a SQLite file. The Anthropic reference SQLite server was archived; the community npm package mcp-server-sqlite-npx is the actively maintained replacement and the one most setup guides point to in 2026.

Install.

{ "mcpServers": { "sqlite": { "command": "npx", "args": ["-y", "mcp-server-sqlite-npx", "/Users/you/data/app.db"] } } }

When it shines. Inspecting local app databases (Chrome history, Slack cache, your own SQLite-backed tools), quick analytics on dump files. Gotcha. Read-only is the safe default; enable writes only against throwaway files.

6. Brave Search

Web grounding. The original Anthropic reference server was archived; Brave now publishes an official server at brave/brave-search-mcp-server that supports web search, local POI search, image, video, and news. The free tier covers most personal use (2,000 queries/month).

Install.

{ "mcpServers": { "brave-search": { "command": "npx", "args": ["-y", "@brave/brave-search-mcp-server"], "env": { "BRAVE_API_KEY": "BSA..." } } } }

When it shines. Any "what changed in the last six months" question — library upgrades, model releases, framework migrations. Gotcha. Pick one search server. Installing Brave plus Tavily plus Exa is the most common cause of the agent picking the wrong tool and getting confused.

7. Fetch

Plain HTTP fetching. Lightweight, official, maintained. Fetches a URL and returns markdown-converted content suitable for the model to read.

Install.

{ "mcpServers": { "fetch": { "command": "uvx", "args": ["mcp-server-fetch"] } } }

When it shines. Reading specific URLs the agent already knows about (a docs page, a GitHub gist, a webhook payload). Pairs well with Brave Search — search to find URLs, fetch to read them. Gotcha. Not a JavaScript renderer; for SPA-only sites you want Playwright instead.

8. Memory

Cross-session knowledge persistence. Two solid choices: Anthropic's reference server-memory (knowledge-graph based, local JSON store) or Mem0's OpenMemory MCP (richer, scales further, optional cloud sync). Start with the reference one; switch to OpenMemory when its limits start to bite.

Install (reference).

{ "mcpServers": { "memory": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-memory"] } } }

When it shines. Long-running projects where you want the agent to remember your stack, conventions, and ongoing decisions. Gotcha. Memory is only useful if you actively prompt the agent to record things ("remember that we use pnpm, not npm") — it does not learn passively.

9. Sequential Thinking

A reasoning tool that externalises chain-of-thought as explicit, revisable steps. The agent calls it to lay out a multi-step plan, then can branch or revise specific steps. Useful for architectural questions and tricky debugging.

Install.

{ "mcpServers": { "sequential-thinking": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-sequential-thinking"] } } }

When it shines. "Plan the refactor before you start." The structured-thinking pass routinely catches a missed edge case the model would have ploughed straight through. Gotcha. Not worth it for short tasks — it adds latency and tokens for problems the model can solve in one shot.

10. Slack

Anthropic's reference Slack server was archived in May 2025. The most-used replacement is korotovsky/slack-mcp-server — supports DMs, group DMs, channel search, history, posting, and runs without requiring a workspace bot install. Roughly 9,000 active users at last count.

Install.

{ "mcpServers": { "slack": { "command": "npx", "args": ["-y", "slack-mcp-server"], "env": { "SLACK_MCP_XOXC_TOKEN": "xoxc-...", "SLACK_MCP_XOXD_TOKEN": "xoxd-..." } } } }

When it shines. Pulling a thread's worth of context into a coding task, summarising overnight channel activity, posting a deploy summary. Gotcha. Token handling matters — never commit these to a repo. Use environment variables or a secret manager.

11. Linear

Linear shipped an official MCP server in 2026 that exposes tickets, projects, cycles, and sprint operations. Connects over HTTPS with browser-based OAuth, so no API key juggling.

Install.

{ "mcpServers": { "linear": { "command": "npx", "args": ["-y", "mcp-remote", "https://mcp.linear.app/sse"] } } }

When it shines. "Open a ticket for this bug," "what's still open in this cycle," "link this PR to LIN-1234." Ends a lot of context-switching. Gotcha. Grant access to a single team where you can — the agent will eagerly query the whole workspace otherwise.

12. Playwright

Microsoft's official @playwright/mcp. Browser automation: navigate, click, type, screenshot, scrape, run JS in the page. Replaces the older archived Puppeteer reference server.

Install.

{ "mcpServers": { "playwright": { "command": "npx", "args": ["-y", "@playwright/mcp@latest"] } } }

When it shines. End-to-end debugging ("reproduce the bug, screenshot every step"), scraping JS-rendered docs, smoke-testing a deploy. Gotcha. Exposes roughly 25 tools — the single biggest contributor to a Cursor tool-count blowup. Keep it disabled by default; enable only on projects that actually need it.

13. Sentry

Sentry's official server @sentry/mcp-server exposes issue and event queries against your org's Sentry projects. Great for "why is this thing crashing in production right now."

Install.

{ "mcpServers": { "sentry": { "command": "npx", "args": ["-y", "@sentry/mcp-server"], "env": { "SENTRY_AUTH_TOKEN": "sntrys_..." } } } }

When it shines. Closing the loop between an alert and a fix — the agent pulls the stack trace, opens the right file, drafts the patch. Gotcha. Token needs project:read at minimum; broader scopes let the agent pivot into org-level data you may not want it touching.

14. Notion

Notion's official server @notionhq/notion-mcp-server reads pages, databases, and properties. Best for teams that keep architecture docs and design notes in Notion rather than the repo.

Install.

{ "mcpServers": { "notion": { "command": "npx", "args": ["-y", "@notionhq/notion-mcp-server"], "env": { "NOTION_API_KEY": "secret_..." } } } }

When it shines. Pulling a design doc into context before the agent writes a feature. Gotcha. Notion's API is rate-limited per integration; a chatty agent run can blow through 3 requests/second fast. Cache locally if you query the same pages repeatedly.

15. Time

Timezone-aware date math. Small, official, maintained. Tools to get the current time in any IANA zone and convert timestamps between zones.

Install.

{ "mcpServers": { "time": { "command": "uvx", "args": ["mcp-server-time"] } } }

When it shines. Anything involving scheduling logic, log timestamps from multiple regions, or cron-like math. The agent gets dates wrong far less often when this is wired in. Gotcha. None worth listing — this is the cheapest install on the list and the one with the best surprise-to-cost ratio.

How do you configure these in Claude Code vs Cursor?

The JSON shape is the same on both: a top-level mcpServers object keyed by server name, each entry carrying command, args, and optional env. The difference is where the file lives and what scope it applies to.

Scope | Claude Code | Cursor
User-global (every project) | ~/.claude/settings.json under mcpServers | ~/.cursor/mcp.json
Project (per repo) | .mcp.json at the repo root | .cursor/mcp.json at the repo root

Two practical rules. First, keep secrets out of the project-scoped file — .mcp.json and .cursor/mcp.json are committed to the repo by most teams. Use the env block to reference variables that live in your shell or a .envrc, not inline tokens. Second, when you add a new server, restart the agent host completely — both Claude Code and Cursor read the file at startup and ignore changes until you reload.
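A sketch of the secret-free pattern: Claude Code documents ${VAR} expansion in .mcp.json (verify whether your Cursor version supports the same before committing), so the project file can reference a variable — GITHUB_PAT here is an illustrative name — that lives in your shell or .envrc rather than in the repo:

```json
{
  "mcpServers": {
    "github": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "-e", "GITHUB_PERSONAL_ACCESS_TOKEN", "ghcr.io/github/github-mcp-server"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_PAT}" }
    }
  }
}
```

The committed file then carries only the variable name; teammates export their own token locally and the same config works for everyone.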

For a quick health check, both clients expose a UI that shows which servers are connected and how many tools each exposes. Use it to keep an eye on the cumulative tool count before you cross the 40-tool ceiling.

Companion guide

For the full landscape of agentic coding tools, see our AI coding agents complete guide for 2026.

What MCP servers should you not install?

The 500+ public catalogue includes a lot of servers that look useful and aren't. A few patterns to avoid:

  • More than one web-search server. Brave, Tavily, Exa, Serper, You.com — pick one. With two installed, the agent flips between them based on tool-name lexical similarity and you spend tokens on wasted lookups.
  • Redundant filesystem or git servers. The official ones are good. Community variants with extra features (recursive search, fancy diff) usually trip the agent up because the tool descriptions are subtly different.
  • Servers that wrap a single curl command. If a server exposes one tool that does POST /api/foo, you do not need MCP for it — you need Fetch with the right headers. Every server has overhead in startup and context.
  • Unverified third-party servers handling secrets. Anything that asks for an AWS root key, a production database URL, or a payments token deserves a careful read of the source before install. The protocol does not sandbox the server process.
  • "All-in-one" mega-servers. A few servers in the catalogue claim to wrap 50+ tools across multiple SaaS products. They will single-handedly blow your tool budget and the descriptions are usually weaker than the vendor-official servers they replicate.

The throughline: every server you install is a tax on the agent's tool-selection accuracy and on your context budget. Keep the list short and the per-server quality high.

Who builds and deploys agentic MCP infrastructure?

Building production-grade MCP servers — or wiring agents into private infrastructure with the right auth, observability, and guardrails — is the kind of work that pays back within a quarter once it lands. Codersera matches you with vetted remote engineers who have shipped agentic AI and MCP-server tooling in production: protocol implementers, agent harness authors, and developer-tooling specialists. We run a risk-free trial so you can validate technical fit before committing.

FAQ

Do Claude Code and Cursor share MCP config?

No — each client reads its own file. Claude Code uses ~/.claude/settings.json and .mcp.json; Cursor uses ~/.cursor/mcp.json and .cursor/mcp.json. The JSON shape is identical though, so symlinking or maintaining a small sync script (chezmoi is a common pick) keeps both in lockstep without duplicating content.

What's the real cost of running these servers?

The servers themselves are free and run locally. Costs come from two places: API quotas on the underlying services (GitHub PAT limits, Brave Search's 2,000 free queries/month, Sentry/Linear's per-seat pricing), and the agent's token spend — tool descriptions sit in the context window every turn, so 50 tools across six servers adds roughly 4–6K input tokens per request before you have asked anything. That is the hidden cost of installing too many servers.

Can I write my own MCP server?

Yes, and it is surprisingly approachable. Anthropic publishes Python, TypeScript, and Go SDKs; a working stdio server for a single custom tool is around 50 lines of code. The cases worth the effort are internal services that have no official MCP yet (your company's auth system, an internal docs index, a private database) — not wrapping public APIs that already have a maintained server.
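To make the shape concrete without pulling in the SDK, here is a stdlib-only sketch of the request dispatch at the heart of a stdio server — one hypothetical tool, ping, and just the two methods an agent calls most. A real server should use the official SDK, which also handles initialization, notifications, and message framing:

```python
import json
import sys

# One hypothetical tool, described the way MCP expects: name, description, input schema.
TOOLS = [{
    "name": "ping",
    "description": "Echo a message back, uppercased.",
    "inputSchema": {
        "type": "object",
        "properties": {"message": {"type": "string"}},
        "required": ["message"],
    },
}]

def handle_request(req: dict) -> dict:
    """Dispatch a single JSON-RPC request to a response payload."""
    if req["method"] == "tools/list":
        result = {"tools": TOOLS}
    elif req["method"] == "tools/call" and req["params"]["name"] == "ping":
        msg = req["params"]["arguments"]["message"]
        result = {"content": [{"type": "text", "text": msg.upper()}]}
    else:
        return {"jsonrpc": "2.0", "id": req.get("id"),
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": req.get("id"), "result": result}

if __name__ == "__main__":
    # Newline-delimited JSON-RPC over stdio: read a request, write a response.
    for line in sys.stdin:
        if line.strip():
            print(json.dumps(handle_request(json.loads(line))), flush=True)
```

Swap the ping branch for a call into your internal service and you have the skeleton of a private-infrastructure server; the SDK versions mostly replace the dispatch boilerplate with decorators.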

How do I know if a server is archived or still maintained?

Check the repo. The official reference servers split in May 2025 — the active ones (filesystem, git, fetch, memory, sequential-thinking, time, everything) live at modelcontextprotocol/servers; the archived ones (GitHub, Postgres, Slack, Brave, Sentry, Puppeteer, Redis, others) moved to modelcontextprotocol/servers-archived. For the archived ones, the right move is to find the vendor-official replacement (GitHub's own, Brave's own, Sentry's own) or a well-starred community fork.

Why does Cursor have a 40-tool ceiling in the first place?

Two reasons. The practical one is context-window economics — every tool's name, description, and JSON schema sits in the prompt every turn, so unbounded tool counts directly inflate input cost. The deeper one is selection accuracy: past about 40 tools, LLMs measurably get worse at picking the right one. The ceiling is a guardrail, not an arbitrary number. The right response is to install fewer servers, not to fight the limit.

Should I use stdio or HTTP transport servers?

Default to stdio for anything running locally — lower latency, simpler auth, no port management. Use HTTP (--transport http) for vendor-hosted servers where the vendor handles auth in the browser and you do not want long-lived tokens on your laptop. Linear, Sentry, Notion, Figma, and Supabase have all moved their official servers toward HTTP transport with OAuth in 2026; stdio still dominates for purely local servers like filesystem and Playwright.

How often should I review my installed servers?

Quarterly is a sensible cadence. The catalogue moves fast — vendor-official servers replace community ones, archived servers get successors, and protocol versions bump. If your config was set up before March 2026 and you have not revisited it since, you are probably running at least one server that has been superseded by a faster, better-authenticated, better-maintained replacement.