Insights & Updates

Security advisories, tool reviews, tutorials, and the latest from the ClineTools team.

What Is CLAUDE.md and Why It Matters

If you have been using Claude Code for any length of time, you have probably noticed that it sometimes needs reminding about your project structure, coding conventions, or preferred workflows. That is exactly the problem that CLAUDE.md solves. It is a special markdown file placed in the root of your project that Claude Code reads automatically at the start of every session. Think of it as a persistent instruction set that gives Claude the context it needs to work effectively in your specific codebase from the very first message.

Without a CLAUDE.md, you end up repeating yourself constantly: "We use tabs not spaces," "Our API routes are in src/routes/," "Always use TypeScript strict mode." With a well-written CLAUDE.md, all of that context is loaded automatically. The result is a dramatically more productive experience where Claude understands your project from the start, follows your conventions without being asked, and produces code that fits seamlessly into your existing patterns.

Basic Structure

A good CLAUDE.md starts with three foundational sections: a project overview, your conventions, and a map of key files. Here is a minimal but effective starting template:

CLAUDE.md
# Project: MyApp

## Overview
A Next.js 14 e-commerce platform using App Router,
TypeScript, Tailwind CSS, and Prisma with PostgreSQL.

## Conventions
- Use TypeScript strict mode for all files
- Prefer named exports over default exports
- Use server components by default; add "use client"
  only when client interactivity is needed
- Follow the existing error handling pattern using
  Result types (see src/lib/result.ts)
- Write tests using Vitest, colocated with source files

## Key Files and Directories
- src/app/ - Next.js App Router pages and layouts
- src/components/ - Shared React components
- src/lib/ - Utility functions and shared logic
- src/server/ - Server-side business logic
- prisma/schema.prisma - Database schema
- .env.example - Required environment variables

Even this minimal version saves you significant repetition. Claude now knows your tech stack, your file organization, and your basic coding standards before you type a single prompt.

Advanced Patterns

Once you are comfortable with the basics, you can add more sophisticated instructions that dramatically improve the quality of Claude's output. Here are some powerful patterns to consider.

Conditional instructions let you specify different behavior depending on context:

CLAUDE.md
## Workflow Rules
- When creating new API routes, always include:
  - Input validation with Zod schemas
  - Error handling that returns proper HTTP status codes
  - Rate limiting middleware
  - OpenAPI JSDoc comments for documentation

- When modifying existing components, always:
  - Check for existing tests and update them
  - Preserve backward compatibility of props
  - Update the component's Storybook story if one exists

Tool preferences tell Claude which tools to reach for in different situations:

CLAUDE.md
## Tool Preferences
- Use the Grep tool instead of bash grep for searching
- Prefer editing existing files over creating new ones
- Run TypeScript compiler (npx tsc --noEmit) after
  making type changes to verify correctness
- Use Vitest (npx vitest run) to run tests, not Jest

Code style rules ensure consistency with your existing codebase:

CLAUDE.md
## Code Style
- Use functional components with arrow functions
- Prefer const assertions for literal types
- Use early returns to reduce nesting
- Maximum function length: 40 lines
- Name boolean variables with is/has/should prefixes
- Use descriptive variable names; avoid abbreviations

Real Example: A React Project

Here is a comprehensive CLAUDE.md from a production React application that demonstrates how all these patterns come together:

CLAUDE.md (React Project)
# Project: TaskFlow

## Overview
TaskFlow is a project management SPA built with React 18,
TypeScript, Zustand for state, React Query for data
fetching, and Tailwind CSS. The API is a separate
service; this repo is frontend only.

## Architecture
- src/features/ - Feature-based modules (each has
  components/, hooks/, api/, types/)
- src/shared/ - Shared components, hooks, and utilities
- src/app/ - App shell, routing, providers
- All API calls go through React Query hooks in
  each feature's api/ directory

## Conventions
- Functional components only, arrow function style
- Co-locate tests: Component.test.tsx next to Component.tsx
- Use Zustand slices pattern for global state
- CSS: Tailwind utility classes only, no custom CSS
- Imports: absolute paths using @/ alias

## Testing
- Unit tests: Vitest + React Testing Library
- Run: npx vitest run
- Minimum coverage: 80% for new features
- Mock API calls using MSW (see src/test/mocks/)

## Do NOT
- Do not use class components
- Do not add new dependencies without noting why
- Do not use inline styles; use Tailwind classes
- Do not modify shared/ui components without checking
  all usage sites first

Real Example: A Python API

Here is another example for a different technology stack, a Python FastAPI backend:

CLAUDE.md (Python API Project)
# Project: DataPipeline API

## Overview
FastAPI backend for a data processing pipeline.
Python 3.12, SQLAlchemy 2.0 with async, PostgreSQL,
Redis for caching, Celery for background tasks.

## Structure
- app/api/v1/ - API route handlers
- app/models/ - SQLAlchemy ORM models
- app/schemas/ - Pydantic request/response schemas
- app/services/ - Business logic layer
- app/tasks/ - Celery background tasks
- tests/ - Pytest test suite (mirrors app/ structure)

## Conventions
- Type hints on all function signatures
- Use async/await for all database operations
- Pydantic schemas for ALL request/response validation
- Services layer between routes and models (no direct
  ORM queries in route handlers)
- Use dependency injection for database sessions

## Commands
- Run tests: pytest -xvs
- Run linter: ruff check .
- Run formatter: ruff format .
- Start dev server: uvicorn app.main:app --reload

## Error Handling
- Use app/exceptions.py for custom exception classes
- All API errors return structured JSON with
  "detail" and "error_code" fields
- Log errors using structlog (see app/logging.py)

Common Mistakes to Avoid

  • Being too vague: "Write good code" tells Claude nothing. Be specific: "Use early returns, keep functions under 40 lines, prefer composition over inheritance."
  • Overloading with information: Claude Code reads the entire CLAUDE.md every session. If it is 2,000 lines long, you are wasting context window on instructions that may not be relevant. Keep it focused and concise, ideally under 200 lines.
  • Forgetting to update it: Your CLAUDE.md should evolve with your project. If you switch from Jest to Vitest, update the file. Stale instructions actively hurt productivity.
  • Duplicating what is obvious: You do not need to explain what React is. Focus on what is specific to your project that Claude would not otherwise know.
  • Not including a "Do NOT" section: Telling Claude what to avoid is just as valuable as telling it what to do. If you have learned the hard way that certain patterns cause problems in your codebase, document them.

Checklist for a Great CLAUDE.md

  • Project name and one-sentence description
  • Tech stack with specific versions
  • Directory structure with explanations
  • Coding conventions (naming, style, patterns)
  • Key commands (test, lint, build, run)
  • Error handling patterns
  • Testing requirements and tools
  • A "Do NOT" section with known pitfalls
  • Tool preferences for Claude Code
  • File kept under 200 lines and actively maintained

A well-crafted CLAUDE.md is one of the highest-leverage investments you can make in your Claude Code workflow. Spend thirty minutes writing one today and you will save hours of repeated instructions over the coming weeks. Start with the basic template, iterate as you discover what Claude needs to know about your project, and watch your productivity compound over time.

The MCP Security Landscape in 2026

The Model Context Protocol has transformed how developers interact with AI coding assistants. What started as a handful of official integrations has exploded into an ecosystem of thousands of community-built MCP servers covering everything from database management to cloud deployment. This rapid growth has been overwhelmingly positive for developer productivity, but it has also created a security landscape that most developers are not adequately prepared to navigate.

As of early 2026, the MCP registry lists over 4,000 publicly available servers. Of those, fewer than 15% have undergone any form of independent security review. The remaining 85% are installed on trust alone, often based on nothing more than a GitHub star count and a convincing README. When you consider that these servers can read files, execute commands, make network requests, and inject content into your AI assistant's context, the implications of that trust gap become deeply concerning.

Common Vulnerability Patterns

Through our ongoing security research, we have identified several recurring vulnerability patterns that appear across the MCP ecosystem. Understanding these patterns is the first step toward protecting yourself.

Excessive Permissions: The most common issue we see is MCP servers that request far more access than they need. A server that provides code formatting should not need filesystem write access, network permissions, or the ability to execute shell commands. Yet we routinely see servers that claim minimal functionality while requesting broad system access. The MCP protocol does not enforce a principle of least privilege by default, which means developers must evaluate permissions manually for every server they install.

Data Exfiltration: Some MCP servers, whether malicious or simply careless, transmit more data than users expect. We have documented cases where servers designed for simple tasks like linting or code search were silently sending directory listings, environment variable names, or file contents to external analytics endpoints. In the worst cases, this telemetry included API keys, database credentials, and authentication tokens found in configuration files. The data often travels alongside legitimate API calls, making it difficult to detect without deep packet inspection.

Supply Chain Attacks: MCP servers are typically installed from npm or GitHub, which means they inherit all the supply chain risks of those ecosystems. A compromised dependency deep in the server's dependency tree can introduce malicious behavior that the server author never intended. We have observed cases where popular MCP servers pulled in transitive dependencies with known vulnerabilities, giving attackers a vector into developer environments. The fact that MCP servers run with elevated system access makes them a particularly high-value target for supply chain attacks.

Real-World Examples of Risky Configurations

To illustrate these risks concretely, here are sanitized examples based on real configurations we have encountered during our audits.

Example 1: The overly permissive file server. A popular filesystem MCP server was configured without any path restrictions, giving Claude access to the entire home directory including ~/.ssh/, ~/.aws/, and ~/.env files. A single prompt injection in any tool response could have directed Claude to read and transmit those credentials.

Dangerous configuration
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y", "@modelcontextprotocol/server-filesystem",
        "/Users/dev"
      ]
    }
  }
}
Safer configuration
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y", "@modelcontextprotocol/server-filesystem",
        "/Users/dev/projects/my-app/src",
        "/Users/dev/projects/my-app/tests"
      ]
    }
  }
}

Example 2: The analytics-heavy code search server. A code search MCP server included a telemetry module that sent every search query, along with file paths from results, to a third-party analytics service. While the server's documentation mentioned "anonymous usage analytics," it did not disclose that file paths and search patterns were included in the transmitted data. For teams working on proprietary code, this represented a significant intellectual property risk.

Example 3: The dependency chain compromise. An MCP server for Docker management depended on an npm package that was later hijacked through a maintainer account takeover. The compromised package added a postinstall script that attempted to read and exfiltrate ~/.docker/config.json, which often contains registry authentication tokens. Because the MCP server already had the permissions needed for Docker operations, the malicious behavior was difficult to distinguish from legitimate functionality.

Security Checklist for Evaluating MCP Servers

Before installing any MCP server, run through this evaluation checklist:

  • Source code availability: Is the full source code published and auditable? Reject any server distributed as minified or obfuscated bundles.
  • Permission scope: Does the server request only the permissions it genuinely needs? A Markdown formatter should not need network access.
  • Dependency count: How many transitive dependencies does the server pull in? Fewer dependencies mean a smaller attack surface. Run npm audit on the package before installing.
  • Maintainer reputation: Who maintains the server? Is it an established organization or an anonymous account created recently?
  • Network behavior: Does the server make outbound network requests? If so, to where and why? Use network monitoring tools to verify.
  • Update frequency: When was the server last updated? Servers that have not been updated in months may have unpatched vulnerabilities.
  • Community validation: Are there independent reviews, security audits, or reports from other users? Star counts alone are not sufficient validation.
  • Configuration options: Does the server support scoping its access? Can you restrict file paths, disable network features, or limit command execution?

Our 4-Phase Verification Methodology

At ClineTools, every MCP server undergoes our comprehensive 4-phase verification before being listed in our directory:

  1. Phase 1: Static Code Analysis. We review every line of source code, analyze the dependency tree, scan for known vulnerabilities using multiple databases (NVD, Snyk, GitHub Advisory), and check for obfuscated or suspicious code patterns. We also verify that the published package matches the source repository.
  2. Phase 2: Sandbox Testing. The server is installed in an isolated environment with full network and filesystem monitoring. We exercise every declared tool and resource, logging all system calls, file access patterns, and network traffic. Any undocumented behavior is flagged for investigation.
  3. Phase 3: Attack Simulation. We actively test the server against known attack vectors including prompt injection, path traversal, command injection, SSRF, and data exfiltration patterns. We also test how the server behaves when receiving malformed input or when operating under adversarial conditions.
  4. Phase 4: Continuous Monitoring. Approved servers are re-evaluated whenever they release updates. We monitor their dependency trees for newly disclosed vulnerabilities, track changes to their network behavior, and re-run our attack simulations against major version updates. If a server fails re-evaluation, it is immediately delisted with a public advisory.

The Future of AI Tool Security

Looking ahead, we see several developments that will shape the MCP security landscape over the coming year. First, we expect the MCP protocol itself to introduce more granular permission models, allowing users to explicitly grant or deny specific capabilities at installation time rather than relying on trust. Second, we anticipate the emergence of standardized security certification programs for MCP servers, similar to SOC 2 for cloud services. Third, tooling for runtime monitoring of MCP server behavior will improve, making it easier for developers to detect anomalous activity without deep security expertise.

Until those improvements materialize, the responsibility falls on individual developers and teams to evaluate the tools they install. Use the checklist above, prefer servers that have undergone independent security review, scope permissions as tightly as possible, and stay informed about newly discovered vulnerabilities. The convenience of AI-powered development tools is extraordinary, but that convenience must never come at the expense of security. The stakes are too high and the attack surface is too large to operate on trust alone.

Prerequisites

Before we start, make sure you have the following installed on your system. You will need Node.js 18 or later (we recommend the latest LTS release) and npm which comes bundled with Node. You should also have Claude Code installed and working so you can test the server once it is built. Familiarity with TypeScript is helpful but not strictly required; we will explain every line of code as we go.

Verify your setup by running these commands:

node --version    # Should be v18.0.0 or later
npm --version     # Should be v9.0.0 or later
claude --version  # Should show Claude Code version

Project Setup

Let us start by creating a new project directory and initializing it with the dependencies we need. We will build a weather API MCP server that provides Claude with the ability to look up current weather conditions for any city.

mkdir mcp-weather-server
cd mcp-weather-server
npm init -y

Now install the MCP SDK and TypeScript tooling:

npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node

Create a tsconfig.json for TypeScript compilation:

tsconfig.json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "declaration": true
  },
  "include": ["src/**/*"]
}

Update your package.json to add the build script and set the module type:

package.json (relevant fields)
{
  "type": "module",
  "main": "dist/index.js",
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js"
  }
}

Understanding the MCP Protocol

Before we write code, it helps to understand the three core concepts in the MCP protocol:

  • Tools are functions that the AI can call. They take structured input, execute some logic, and return a result. Our weather server will expose a get_weather tool that accepts a city name and returns weather data.
  • Resources are data endpoints that the AI can read. Think of them like GET endpoints in a REST API. We will not use resources in this tutorial, but they are useful for exposing static data like configuration files or documentation.
  • Prompts are reusable prompt templates that can be invoked by the client. They are useful for standardizing common interactions. We will skip these for now to keep things focused.

The MCP SDK handles all the protocol communication over stdio. Your server just needs to register its tools and implement the handler functions.

Building the Weather MCP Server

Create the source directory and main file:

mkdir src
touch src/index.ts

Now let us write the server. Here is the complete implementation:

src/index.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Create the MCP server instance
const server = new McpServer({
  name: "weather-server",
  version: "1.0.0",
});

// Define the input schema for our weather tool
const WeatherInput = z.object({
  city: z
    .string()
    .describe("City name to get weather for"),
  units: z
    .enum(["celsius", "fahrenheit"])
    .default("celsius")
    .describe("Temperature units"),
});

// Type for our weather response
interface WeatherData {
  city: string;
  temperature: number;
  units: string;
  condition: string;
  humidity: number;
  windSpeed: number;
  windDirection: string;
  updatedAt: string;
}

// Simulated weather data fetcher
// Replace this with a real API call in production
async function fetchWeather(
  city: string,
  units: "celsius" | "fahrenheit"
): Promise<WeatherData> {
  // In production, call a real weather API here:
  // const res = await fetch(
  //   `https://api.weather.example/v1/current?q=${city}`
  // );
  // return await res.json();

  // For this tutorial, generate plausible data
  const conditions = [
    "Sunny", "Partly Cloudy", "Overcast",
    "Light Rain", "Heavy Rain", "Thunderstorm",
    "Snow", "Fog", "Windy", "Clear"
  ];
  const directions = [
    "N", "NE", "E", "SE", "S", "SW", "W", "NW"
  ];

  const baseTemp = Math.floor(Math.random() * 30) + 5;
  const temp = units === "fahrenheit"
    ? Math.round(baseTemp * 9 / 5 + 32)
    : baseTemp;

  return {
    city,
    temperature: temp,
    units: units === "celsius" ? "C" : "F",
    condition:
      conditions[
        Math.floor(Math.random() * conditions.length)
      ],
    humidity: Math.floor(Math.random() * 60) + 30,
    windSpeed: Math.floor(Math.random() * 30) + 2,
    windDirection:
      directions[
        Math.floor(Math.random() * directions.length)
      ],
    updatedAt: new Date().toISOString(),
  };
}

// Register the get_weather tool
server.tool(
  "get_weather",
  "Get current weather conditions for a city",
  WeatherInput.shape,
  async ({ city, units }) => {
    try {
      const weather = await fetchWeather(city, units);

      const report = [
        `Weather for ${weather.city}`,
        `Temperature: ${weather.temperature} ${weather.units}`,
        `Condition: ${weather.condition}`,
        `Humidity: ${weather.humidity}%`,
        `Wind: ${weather.windSpeed} km/h ${weather.windDirection}`,
        `Updated: ${weather.updatedAt}`,
      ].join("\n");

      return {
        content: [{ type: "text" as const, text: report }],
      };
    } catch (error) {
      const message =
        error instanceof Error
          ? error.message
          : "Unknown error";
      return {
        content: [
          {
            type: "text" as const,
            text: `Error fetching weather: ${message}`,
          },
        ],
        isError: true,
      };
    }
  }
);

// Start the server
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("Weather MCP server running on stdio");
}

main().catch((error) => {
  console.error("Fatal error:", error);
  process.exit(1);
});

Let us break down what this code does. We create a McpServer instance with a name and version. We then define a tool called get_weather using Zod for input validation. The tool handler function fetches weather data (simulated here, but you would replace this with a real API call) and returns a formatted text response. Finally, we connect the server to a stdio transport, which is how Claude Code communicates with MCP servers.

Testing with Claude Code

Build the project first:

npm run build

Now configure Claude Code to use your server. Add this to your Claude Code MCP settings (typically in ~/.claude/claude_desktop_config.json or your project's .mcp.json):

.mcp.json
{
  "mcpServers": {
    "weather": {
      "command": "node",
      "args": ["/absolute/path/to/mcp-weather-server/dist/index.js"]
    }
  }
}

Restart Claude Code, and you should see the weather server listed in your available tools. Try asking Claude: "What is the weather in Tokyo?" Claude will call your get_weather tool and report back the results.

Adding Error Handling and Validation

Our basic server works, but production MCP servers need more robust error handling. Here are the key improvements you should make:

Input sanitization: Even though Zod handles type validation, you should sanitize string inputs to prevent injection attacks if you pass them to external APIs:

src/index.ts (validation addition)
function sanitizeCity(input: string): string {
  // Remove any characters that are not letters,
  // spaces, hyphens, or apostrophes
  const sanitized = input
    .replace(/[^a-zA-Z\s\-']/g, "")
    .trim()
    .slice(0, 100);

  if (sanitized.length === 0) {
    throw new Error("Invalid city name");
  }

  return sanitized;
}

Rate limiting: Protect against excessive API calls by adding a simple in-memory rate limiter:

src/index.ts (rate limiting addition)
const rateLimiter = new Map<string, number[]>();
const RATE_LIMIT = 10; // requests per minute

function checkRateLimit(key: string): boolean {
  const now = Date.now();
  const windowMs = 60_000;
  const requests = rateLimiter.get(key) ?? [];

  // Remove requests outside the window
  const recent = requests.filter(
    (t) => now - t < windowMs
  );

  if (recent.length >= RATE_LIMIT) {
    return false;
  }

  recent.push(now);
  rateLimiter.set(key, recent);
  return true;
}

Timeout handling: Always set timeouts on external API calls to prevent your server from hanging indefinitely:

src/index.ts (timeout addition)
async function fetchWithTimeout(
  url: string,
  timeoutMs: number = 5000
): Promise<Response> {
  const controller = new AbortController();
  const id = setTimeout(
    () => controller.abort(),
    timeoutMs
  );

  try {
    const response = await fetch(url, {
      signal: controller.signal,
    });
    return response;
  } finally {
    clearTimeout(id);
  }
}

Publishing and Sharing Your Server

Once your server is working and well-tested, you can share it with the community. Here are the steps to publish it as an npm package:

Step 1: Update your package.json with proper metadata:

package.json
{
  "name": "@yourscope/mcp-weather-server",
  "version": "1.0.0",
  "description": "MCP server for weather data",
  "main": "dist/index.js",
  "type": "module",
  "bin": {
    "mcp-weather-server": "dist/index.js"
  },
  "files": ["dist"],
  "keywords": ["mcp", "claude", "weather", "ai-tools"],
  "license": "MIT"
}

Step 2: Add a shebang line to the top of your src/index.ts so it can run as a CLI executable:

#!/usr/bin/env node

Step 3: Build and publish:

npm run build
npm login
npm publish --access public

Once published, anyone can use your server by adding it to their MCP configuration:

{
  "mcpServers": {
    "weather": {
      "command": "npx",
      "args": ["-y", "@yourscope/mcp-weather-server"]
    }
  }
}

Congratulations! You have built, tested, and published a fully functional MCP server. From here, you can expand it with additional tools, add resources for static data, implement caching, or connect to real APIs. The MCP ecosystem thrives on community contributions, and the patterns you have learned in this tutorial apply to any kind of server you might want to build, from database connectors to deployment tools to custom business logic. The entire process, from npm init to a working server that Claude can call, takes about 30 minutes. Happy building.

Today we are excited to announce the launch of ClineTools, a platform built from the ground up to solve a problem that has been growing alongside the explosive adoption of AI coding assistants: how do you know which tools are safe to use? As developers integrate more MCP servers, extensions, and third-party integrations into their Claude Code workflows, the attack surface expands in ways that are often invisible until it is too late.

ClineTools is not just another tool directory. Every single tool listed on our platform goes through a rigorous 4-phase security audit before earning our verification badge. We perform deep code analysis, sandbox testing, attack simulation, and ongoing monitoring. We look for prompt injection vectors, data exfiltration patterns, command injection risks, and suspicious network activity. If a tool does not pass every check, it does not make it onto the platform.

Beyond the tool directory, we are building a comprehensive library of tutorials that teach developers how to use these tools effectively and safely. From getting started guides for newcomers to advanced production workflow patterns for experienced teams, our tutorials are designed to be practical, actionable, and security-conscious. Every tutorial includes notes on potential risks and how to mitigate them.

We believe the future of software development is deeply intertwined with AI assistants, and that future needs to be built on a foundation of trust. ClineTools is our contribution to making that future a reality. Whether you are a solo developer exploring AI tools for the first time or a team lead evaluating enterprise integrations, we are here to help you move forward safely and confidently. Welcome to ClineTools.

The rise of AI-powered coding assistants has introduced an entirely new category of security concerns that traditional code review and dependency scanning tools were never designed to catch. When you install an MCP server, you are granting a third-party tool the ability to execute code, read files, make network requests, and interact with your AI assistant in ways that can be extremely difficult to audit manually. The trust model is fundamentally different from installing a VS Code extension or an npm package.

Prompt injection is arguably the most insidious risk in this ecosystem. A malicious or compromised MCP server can embed hidden instructions in its responses that manipulate the AI assistant into performing actions the user never intended. These can range from subtle code modifications that introduce vulnerabilities to outright data exfiltration where sensitive code, API keys, or environment variables are silently sent to external servers. Because the AI acts as an intermediary, the user may never see the malicious payload directly.

Data exfiltration through AI tools deserves special attention. An MCP server that has read access to your file system can theoretically collect and transmit any data it encounters. Even servers that appear to only provide benign functionality like formatting or search might include telemetry that captures more than you expect. The opacity of many MCP server implementations, combined with the rapid pace at which new tools appear, means that community vetting alone is insufficient. Rigorous, systematic security evaluation is essential.

This is precisely why we built ClineTools with security as its foundational principle rather than an afterthought. Our 4-phase audit process examines every tool at the code level, monitors its runtime behavior in sandboxed environments, actively tests it against known attack vectors, and continues monitoring after approval. We believe that as AI tools become more powerful, the standards for verifying their safety must rise proportionally. Anything less puts developers and their organizations at unacceptable risk.

The MCP (Model Context Protocol) ecosystem has grown rapidly, and finding the right servers to enhance your Claude Code workflow can be overwhelming. We have tested dozens of MCP servers through our rigorous security and quality auditing process, and these ten stand out as the best options available in 2026. Each one has earned our verification badge, meaning it has passed code analysis, sandbox testing, attack simulation, and is under continuous monitoring.

  1. Filesystem MCP Server — The official Anthropic filesystem server provides safe, scoped file operations with built-in path traversal protection. Essential for any Claude Code workflow that needs to read and write project files within defined boundaries.
  2. GitHub MCP Server — Integrates Claude Code directly with GitHub for pull request management, issue tracking, code review, and repository operations. Supports fine-grained token scoping to limit access to only the repositories you specify.
  3. PostgreSQL MCP Server — Enables Claude to query and manage PostgreSQL databases with read-only mode by default. Includes query parameterization to prevent SQL injection and connection pooling for production environments.
  4. Brave Search MCP Server — Gives Claude access to web search results through the Brave Search API. Useful for looking up documentation, finding solutions, and gathering context without leaving your coding workflow.
  5. Puppeteer MCP Server — Allows Claude to control a headless browser for testing, scraping structured data, and automating web interactions. Sandboxed by default with configurable navigation restrictions.
  6. Memory MCP Server — Provides persistent memory across Claude Code sessions using a structured knowledge graph. Enables Claude to remember project context, decisions, and preferences between conversations.
  7. Slack MCP Server — Connects Claude Code to Slack workspaces for reading messages, posting updates, and managing channels. Useful for teams that want AI-assisted communication workflows with proper OAuth scoping.
  8. Docker MCP Server — Enables Claude to manage Docker containers, images, and compose stacks. Ideal for development environments where you need AI-assisted container orchestration with configurable permission boundaries.
  9. Sentry MCP Server — Integrates error monitoring directly into your Claude Code workflow. Claude can query recent errors, analyze stack traces, and suggest fixes based on real production error data from your Sentry projects.
  10. Linear MCP Server — Connects Claude Code to Linear for project management. Create issues, update statuses, and manage sprints directly from your coding environment with proper API token scoping.

Each of these servers represents the best combination of functionality, security posture, and developer experience in its category. You can find detailed installation guides, configuration tips, and our full security reports for each one in our tool directory. As always, we recommend reviewing the permission scopes of any MCP server before installation and limiting access to only what your workflow requires.

The MCP ecosystem continues to evolve rapidly, and we will update this list as new servers pass our verification process. Subscribe to our newsletter to stay informed about new additions, security advisories, and updated reviews as tools release major updates.