Security Verification Reports
Transparency is at the heart of our mission. See exactly how we evaluate every tool before recommending it.
Our 4-Phase Verification Methodology
Every tool in the ClineTools directory goes through a rigorous 4-phase audit. We don't just check boxes—we actively try to break things to ensure your safety.
Static Code Analysis
We manually review source code and run automated analysis looking for:
- Obfuscated or minified code hiding functionality
- Hardcoded URLs, especially to unknown domains
- File system operations outside declared scope
- eval(), exec(), or dynamic code execution
- Credential harvesting patterns
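As a simplified illustration of the kinds of patterns our static scanners flag, here is a minimal sketch in Python. The category names and regexes are illustrative only, not our actual rule set; a real audit combines manual review with AST-level tooling, not regex alone.

```python
import re

# Illustrative risk patterns matching the checklist above (not a real rule set).
RISK_PATTERNS = {
    "dynamic_execution": re.compile(r"\b(eval|exec)\s*\("),
    "hardcoded_url": re.compile(r"https?://[^\s\"']+"),
    "shell_spawn": re.compile(r"\b(os\.system|subprocess\.(run|Popen|call))\s*\("),
    "credential_access": re.compile(r"os\.environ\[[^\]]*(KEY|TOKEN|SECRET|PASSWORD)", re.I),
}

def flag_risks(source: str) -> dict[str, list[str]]:
    """Return the matched snippets per risk category for one source file."""
    findings: dict[str, list[str]] = {}
    for name, pattern in RISK_PATTERNS.items():
        hits = [m.group(0) for m in pattern.finditer(source)]
        if hits:
            findings[name] = hits
    return findings
```

Anything this sketch flags is a starting point for manual review, not an automatic rejection; plenty of legitimate code makes network calls or reads environment variables.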
Sandbox Testing
Tools run in isolated environments where we monitor:
- All file read/write operations
- Network requests (DNS, HTTP, WebSocket)
- Environment variable access
- Process spawning and shell execution
- Memory and CPU usage patterns
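For tools written in Python, one lightweight layer of this monitoring can be sketched with the standard library's `sys.addaudithook`, which reports file, socket, and subprocess events from inside the interpreter. The watched event prefixes below are illustrative; a real sandbox also captures activity at the OS level so an in-process hook cannot be bypassed.

```python
import os
import sys

events_log: list[tuple[str, tuple]] = []

def monitor(event: str, args: tuple) -> None:
    # Record the event classes listed above: file I/O, sockets,
    # process spawning, and environment variable writes.
    watched_prefixes = ("open", "socket", "subprocess", "os.system", "os.putenv")
    if event.startswith(watched_prefixes):
        events_log.append((event, args))

sys.addaudithook(monitor)

# Exercising the hook: this file open is recorded in events_log.
open(os.devnull).close()
```

Audit hooks cannot be removed once installed, which is exactly the property you want when observing untrusted code.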
Attack Simulation
We actively attempt to exploit the tool:
- Prompt injection via tool inputs
- Path traversal to access sensitive files
- Command injection through parameters
- Data exfiltration via crafted responses
- Privilege escalation attempts
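As a minimal sketch of one such probe, path traversal: assuming a tool confines file access to a sandbox root (the `ALLOWED_ROOT` path below is hypothetical), we verify that traversal inputs resolve outside that root and must be rejected.

```python
from pathlib import Path

ALLOWED_ROOT = Path("/srv/tool-sandbox")  # hypothetical sandbox root

def is_path_confined(requested: str) -> bool:
    """True only if the requested path stays inside ALLOWED_ROOT after resolution."""
    resolved = (ALLOWED_ROOT / requested).resolve()
    return resolved.is_relative_to(ALLOWED_ROOT.resolve())

# Probes sent during an audit; a safe tool rejects every one of these.
TRAVERSAL_PROBES = [
    "../../etc/passwd",
    "notes/../../../root/.ssh/id_rsa",
]
```

Real probes also cover encoded variants (`..%2f`), symlinks, and absolute paths, which must be normalized before the containment check.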
Ongoing Monitoring
After approval, we continue watching:
- Dependency updates and new CVEs
- Maintainer changes and repository activity
- Community-reported issues
- Behavioral changes in new versions
- Re-audit on every major version bump
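The major-version re-audit rule can be sketched as a tiny helper. This is a hypothetical function that assumes plain `major.minor.patch` version strings, optionally prefixed with `v`; real releases may need full semantic-version parsing.

```python
def needs_reaudit(previous: str, new: str) -> bool:
    """True if the release is a major version bump, which triggers a full re-audit."""
    prev_major = int(previous.lstrip("v").split(".")[0])
    new_major = int(new.lstrip("v").split(".")[0])
    return new_major > prev_major
```

Minor and patch releases instead go through the lighter behavioral-diff monitoring described above.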
Security Rating System
Each tool receives a letter grade based on our audit findings. Here's what each rating means:
| Rating | What it means |
|---|---|
| Excellent | Passes all checks. Minimal attack surface. No network calls beyond the declared purpose. Open source with active maintenance. |
| Good | Passes all critical checks. Minor findings that don't pose real risk. Well-maintained and transparent. |
| Acceptable | Passes critical checks but has areas for improvement. May have broad permissions or infrequent updates. Use with awareness. |
| Caution | Has notable security concerns. May lack transparency, have overly broad permissions, or show suspicious patterns. Listed with warnings. |
Example Audit Reports
Below are sample security reports for tools in our directory. Full reports are available for Professional and Team subscribers.
Filesystem MCP Server
Audited: February 2026 · by Anthropic
Official Anthropic MCP server for secure file system access. Provides read/write operations scoped to allowed directories.
Puppeteer MCP Server
Audited: February 2026 · by Anthropic
Browser automation server enabling web scraping, testing, and interaction through Claude. Requires Chromium.
Brave Search MCP Server
Audited: February 2026 · by Anthropic
Web search integration using the Brave Search API. Enables Claude to search the web for current information.
Red Flags We Watch For
When evaluating AI tools, these are immediate warning signs that trigger deeper investigation or rejection:
- Obfuscated source code — If we can't read it, we don't trust it
- Unexplained network requests — Tools phoning home without clear purpose
- Overly broad permissions — Requesting access far beyond stated functionality
- Dynamic code execution — eval(), exec(), or loading remote code at runtime
- No version pinning — Dependencies without locked versions can be silently compromised
- Inactive maintenance — No updates for 6+ months with open security issues
- Closed source with broad access — Can't verify what a tool does? That's a problem
- Embedding hidden instructions — Prompt injection attempts in tool descriptions or responses
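The version-pinning red flag is easy to check mechanically. Here is a minimal sketch for a Python `requirements.txt`; the regex treats only exact `==` pins as safe, and hash-pinning and lock files are out of scope for this illustration.

```python
import re

# A requirement is considered pinned only if it uses an exact "==" version.
PINNED = re.compile(r"^[A-Za-z0-9_.\-]+==[\w.]+")

def unpinned_requirements(requirements_txt: str) -> list[str]:
    """Return requirement lines that are not pinned to an exact version."""
    flagged = []
    for line in requirements_txt.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if not PINNED.match(line):
            flagged.append(line)
    return flagged
```

An unpinned dependency means the code you audited today may not be the code that installs tomorrow.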
Have a tool you'd like us to audit?
Submit for Review