MCP Integrations
Clients & LLM Providers
WHAT CONNECTS HERE
- AI coding IDEs — Cursor, VS Code Copilot, Windsurf, Continue, Cline, Zed, JetBrains ACP
- Desktop AI apps — Claude Desktop, Cherry Studio, AnythingLLM, Jan.ai, Msty
- LLM providers — Claude, GPT-4o, Gemini 2.0 natively; Mistral, Llama, DeepSeek, Grok via bridge
- Two transports — stdio for local tools, Streamable HTTP for cloud and remote
- One config — same API key, same 7 tools, any client
AI Coding IDEs
Editors and coding tools with native MCP support — connect anonym.legal in minutes
Cursor — AI-first code editor built on VS Code and one of the most widely used MCP clients. Configure via .cursor/mcp.json per project or via global settings. Supports both stdio subprocess and HTTP remote servers.
Config: .cursor/mcp.json
MCP since: January 2025
LLMs: GPT-4o, Claude 3.5, Gemini, custom
Visual Studio Code with GitHub Copilot Chat. MCP tools available in Agent mode. Configure via .vscode/mcp.json or workspace settings. Agent mode required — Copilot Chat inline suggestions do not call MCP.
Config: .vscode/mcp.json
MCP since: June 2025 (v1.99)
LLMs: GPT-4o, Claude 3.5, Gemini 2.0
Codeium's AI code editor with Cascade agent. Windsurf uses MCP to extend the Cascade agentic flow. Configure in Windsurf Settings → MCP Servers. Supports project-level and global server config.
MCP since: Late 2024
LLMs: Claude 3.5, GPT-4o, Codeium models
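For reference, a sketch of what a Windsurf server entry could look like. The path ~/.codeium/windsurf/mcp_config.json and the key names are assumptions (they mirror the common Claude-Desktop-style shape); verify against Settings → MCP Servers on your install:

```json
{
  "mcpServers": {
    "anonym-legal": {
      "command": "npx",
      "args": ["-y", "@anthropic-ai/mcp-server-anonym-legal"],
      "env": { "ANONYM_LEGAL_API_KEY": "your-api-key-here" }
    }
  }
}
```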
Continue — Open-source AI code assistant extension for VS Code and JetBrains. Highly configurable and supports any LLM provider. MCP tools are available in agent mode via config.json. Connects to local Ollama models or cloud APIs.
Config: ~/.continue/config.json
MCP since: 2024
LLMs: Any — Claude, GPT, Gemini, Llama, Mistral, DeepSeek
Autonomous AI coding agent VS Code extension. One of the most MCP-capable open-source clients. Manages its own MCP server registry via the Cline settings panel. Supports full agentic loops with tool calling.
MCP since: 2024
LLMs: Claude, GPT-4o, DeepSeek, Gemini, Llama
High-performance editor written in Rust with native AI assistant. MCP support added in 2025 for the Zed AI assistant and agent features. Configure via settings.json.
Config: ~/.config/zed/settings.json
MCP since: 2025
LLMs: Claude 3.5, GPT-4o
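A sketch of the corresponding Zed settings, assuming the "context_servers" key (Zed has changed this config shape across releases, so check your version's documentation before copying):

```json
{
  "context_servers": {
    "anonym-legal": {
      "command": {
        "path": "npx",
        "args": ["-y", "@anthropic-ai/mcp-server-anonym-legal"],
        "env": { "ANONYM_LEGAL_API_KEY": "your-api-key-here" }
      }
    }
  }
}
```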
IntelliJ IDEA, PyCharm, WebStorm, GoLand, Rider — all via JetBrains AI Assistant and the new Agent Client Protocol (ACP). MCP servers are available in the AI Assistant tool window across all JetBrains IDEs.
MCP since: January 28, 2026 (ACP GA)
LLMs: Claude, GPT-4o, Gemini
Cloud-based IDE and development platform. Replit Agent supports MCP servers, allowing tools like anonym.legal to be called from within agentic workflows. Configured via project settings or the Replit MCP panel.
LLMs: GPT-4o, Claude 3.5
Desktop AI Applications
Standalone AI tools with MCP tool-calling support
Anthropic's desktop app for Claude. The reference MCP client — MCP was designed here first. Uses stdio transport only (subprocess IPC). No network exposure. Configure via claude_desktop_config.json.
Config (macOS): ~/Library/Application Support/Claude/claude_desktop_config.json
Config (Win): %APPDATA%\Claude\claude_desktop_config.json
MCP since: November 2024 (MCP launch)
LLMs: Claude 3.5 Sonnet, Claude 3 Opus
Desktop AI chat app supporting 20+ LLM providers and full MCP integration. Configure MCP servers in the Cherry Studio settings panel. Supports both stdio and HTTP transports. Popular in China for DeepSeek + MCP workflows.
LLMs: Claude, GPT-4o, DeepSeek, Gemini, Qwen, Doubao, and 20+ more
AnythingLLM — Self-hosted AI workspace with RAG, document chat, and MCP agent tools. Connect the anonym.legal MCP server via the Agent Skills section. Supports local Ollama models and cloud APIs. Deployable on-premise for full data control.
LLMs: Any — OpenAI, Anthropic, Ollama, LM Studio, Mistral
Offline-first AI assistant and local model runner. MCP support added for tool-calling in chat sessions. Configure via the Jan settings panel. Fully local — ideal for air-gapped environments.
LLMs: Local models (Llama, Mistral, Phi, Gemma), OpenAI API
Desktop AI chat app with MCP integration. Supports multiple LLM providers simultaneously. Add anonym.legal as an MCP tool server in the Msty MCP panel. Simple UI-based configuration.
Goose — Open-source autonomous AI agent by Block (formerly Square). Built for agentic workflows with MCP extensions. Configure via ~/.config/goose/config.yaml. Strong support for developer toolchains.
Config: ~/.config/goose/config.yaml
LLMs: Claude 3.5, GPT-4o, Gemini
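A sketch of a Goose config.yaml entry. The extension keys (enabled, type, cmd, args, envs) are assumptions based on Goose's stdio extension format; verify against the Goose docs for your version:

```yaml
extensions:
  anonym-legal:
    enabled: true
    type: stdio
    cmd: npx
    args:
      - "-y"
      - "@anthropic-ai/mcp-server-anonym-legal"
    envs:
      ANONYM_LEGAL_API_KEY: your-api-key-here
```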
Cloud & Developer Platforms
SDKs, frameworks, and cloud platforms with MCP support
OpenAI's official Python SDK for building agents. Native MCP support added March 2025. Connect any MCP server to GPT-4o, GPT-4.5, o1, or o3. Use MCPServerStreamableHTTP or MCPServerStdio classes.
Package: openai-agents
MCP since: March 2025
LLMs: GPT-4o, GPT-4.5, o1, o3, o3-mini
Google's enterprise AI platform and Gemini API support MCP via the Agent Builder and Google ADK (Agent Development Kit). Connect anonym.legal as an MCP tool server to Gemini 2.0 Flash, Gemini 2.5 Pro, or Gemini 1.5 agents.
MCP since: February 2025
LLMs: Gemini 2.5 Pro, Gemini 2.0 Flash, Gemini 1.5 Pro
AWS's AI-powered coding assistant in the IDE and CLI. Amazon Q Developer added MCP support for custom tool integration. Connect anonym.legal to protect PII in AWS documentation queries and code generation workflows.
Config: ~/.aws/amazonq/mcp.json
LLMs: AWS foundation models (Amazon Nova, Claude on Bedrock)
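Amazon Q's mcp.json follows the common mcpServers shape; a sketch (key names assumed, verify against the Amazon Q Developer documentation):

```json
{
  "mcpServers": {
    "anonym-legal": {
      "command": "npx",
      "args": ["-y", "@anthropic-ai/mcp-server-anonym-legal"],
      "env": { "ANONYM_LEGAL_API_KEY": "your-api-key-here" }
    }
  }
}
```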
Python/TypeScript framework for building LLM applications and agents. MCP adapter package bridges LangChain tools to any MCP server. Use langchain-mcp-adapters to connect anonym.legal tools to any LangChain agent.
Package: langchain-mcp-adapters
LLMs: Any LangChain-supported model
Data framework for LLM applications with native MCP tool integration. LlamaIndex can call anonym.legal MCP tools as part of a RAG or agentic pipeline. The McpToolSpec class handles MCP server connections.
Package: llama-index-tools-mcp
LLMs: Any LlamaIndex-supported model
Open-source workflow automation platform. n8n AI Agent nodes support MCP tool servers. Add anonym.legal as an MCP Tool node to automatically anonymize PII in documents flowing through n8n workflows.
LLMs: OpenAI, Anthropic, Google, local via Ollama
LLM Providers
Native MCP support in the API — no bridge layer needed
MCP originated at Anthropic in November 2024. Claude Desktop uses stdio MCP natively. The Claude API (Messages API with tool_use) is the protocol foundation. All MCP tool calls ultimately execute as Claude tool_use content blocks.
MCP via: Claude Desktop (stdio), any HTTP MCP client
API: tool_use content blocks in Messages API
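For illustration, a Claude tool_use content block for a hypothetical anonymize_text call looks roughly like this (the field values are made up; the block shape with type, id, name, and input is the Messages API's):

```json
{
  "type": "tool_use",
  "id": "toolu_01A2B3C4",
  "name": "anonymize_text",
  "input": {
    "text": "Contact John Smith at john@example.com"
  }
}
```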
OpenAI added first-class MCP support to the OpenAI Agents SDK in March 2025. Connect anonym.legal directly to GPT-4o, GPT-4.5, o1, or o3 agent workflows. Both stdio and Streamable HTTP transports supported.
MCP via: openai-agents Python SDK
Class: MCPServerStreamableHTTP
Google added MCP support to Vertex AI Agent Builder and the Google ADK in February 2025. Gemini 2.0 Flash and Gemini 2.5 Pro are the primary models. Connect anonym.legal via Streamable HTTP to Gemini-powered agents.
MCP via: Vertex AI Agent Builder, Google ADK
Also: VS Code Copilot, Cursor, Cline
Meta's open-weight Llama models run locally via Ollama. Continue and Cline support calling MCP servers while using Llama as the backend. AnythingLLM connects local Llama models to MCP tool servers. Full local workflow — no data leaves your machine.
Runtime: Ollama, LM Studio, llama.cpp
MCP via: Continue, Cline, AnythingLLM
Mistral Large 2, Mistral Medium, and the open-weight Mixtral 8x22B support tool calling through MCP clients. Continue supports the Mistral API with MCP servers; the open models also run locally via Ollama. Ideal for EU deployments — Mistral's servers are EU-based.
MCP via: Continue, Cherry Studio, Cline
DeepSeek V3 and DeepSeek R1 are popular open-weight models with strong coding performance. Widely used in Cherry Studio and Cursor with MCP tool support. Anonym.legal is particularly valuable here — DeepSeek servers are outside the EU.
MCP via: Cherry Studio, Cursor, Cline, Continue
xAI's Grok 2 and Grok 3 are accessible via Cursor and VS Code with custom API endpoints. MCP tools are available when using Grok as the model backend in clients that support it. xAI is working on native MCP support.
MCP via: Cursor, VS Code (OpenAI-compatible endpoint)
Cohere Command R and Command R+ support tool calling via the Cohere API. Connect anonym.legal using the LangChain MCP adapter with Cohere as the LLM backend. Useful for RAG pipelines with PII-bearing documents.
MCP via: LangChain MCP adapters
Transport Protocols
How your client connects to the MCP server — choose based on deployment
The client launches the MCP server as a child process and communicates via standard input/output streams. JSON-RPC messages are exchanged over stdin/stdout. No TCP port, no HTTP, no network.
WHEN TO USE
- Claude Desktop (only supported transport)
- Air-gapped environments
- Local npm/npx servers
- Maximum security — no network attack surface
USED BY
- Claude Desktop
- Continue (local servers)
- Cline (local servers)
- Cursor (local servers)
{
  "mcpServers": {
    "anonym-legal": {
      "command": "npx",
      "args": ["@anthropic-ai/mcp-server-anonym-legal"],
      "env": {
        "ANONYM_LEGAL_API_KEY": "your-api-key"
      }
    }
  }
}
HTTP POST to a single endpoint with optional Server-Sent Events streaming for long-running operations. Defined in MCP Specification 2025-11-25 as the standard transport. anonym.legal's HTTP endpoint uses this protocol.
WHEN TO USE
- Cursor, VS Code, Windsurf, Cline
- Remote/cloud MCP servers
- Multi-user shared instances
- All non-Claude-Desktop clients
ENDPOINT
https://anonym.legal/mcp
Auth via Authorization header
Bearer token or API key
{
  "mcpServers": {
    "anonym-legal": {
      "url": "https://anonym.legal/mcp",
      "headers": {
        "Authorization": "Bearer your-api-key-here"
      }
    }
  }
}
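On the wire, Streamable HTTP is an ordinary authenticated POST. A stdlib-only sketch of constructing (not sending) such a request; the endpoint and Authorization header come from this page, while the JSON-RPC body is illustrative:

```python
import json
import urllib.request

def build_mcp_request(url: str, api_key: str, payload: dict) -> urllib.request.Request:
    """Build (but do not send) a Streamable HTTP POST carrying one JSON-RPC message."""
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        method="POST",
        headers={
            "Content-Type": "application/json",
            # Streamable HTTP servers may answer with plain JSON or an SSE stream:
            "Accept": "application/json, text/event-stream",
            "Authorization": f"Bearer {api_key}",
        },
    )

req = build_mcp_request(
    "https://anonym.legal/mcp",
    "your-api-key-here",
    {"jsonrpc": "2.0", "id": 1, "method": "tools/list"},
)
# urllib.request.urlopen(req) would perform the call; omitted here.
```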
The original HTTP transport from MCP's early spec. Used two endpoints: /sse (server-to-client stream) and /messages (client-to-server POST). Replaced by Streamable HTTP in the MCP 1.0 spec (November 2025). Still supported by many clients for backwards compatibility but not recommended for new deployments. anonym.legal supports Streamable HTTP only.
MCP 1.0 Protocol Spec
Model Context Protocol specification — 2025-11-25 release. What anonym.legal implements.
Tools — Functions the LLM can call. anonym.legal exposes 7 tools: analyze_text, anonymize_text, detokenize_text, get_balance, estimate_cost, list_sessions, delete_session. Each tool has a JSON Schema definition for its parameters.
Resources — URI-addressed data sources the LLM can read (files, database rows, API responses). Clients like Claude Desktop can subscribe to resource changes. Not currently used by anonym.legal — data flows through tool calls.
Prompts — Reusable prompt templates that clients can list and insert. Servers expose named templates with arguments. anonym.legal exposes a default anonymization workflow prompt that configures the AI to call the correct tools automatically.
Sampling — The MCP server requests the client to run an LLM inference. Enables server-side agentic loops where the MCP server itself can ask the AI to generate text. Requires explicit client opt-in. Not currently used by anonym.legal.
Roots — File system scope boundaries. Clients tell servers which directories are accessible. Servers that read files (like file system MCP servers) use Roots to stay within safe bounds. Not applicable to anonym.legal.
Elicitation — New in MCP 1.0: the server can request user input mid-call via the client UI, asking for missing parameters, confirmations, or clarifications without aborting the tool call. Currently experimental.
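Each tool advertises its parameters as JSON Schema under an inputSchema key. A hypothetical sketch of what an anonymize_text definition could look like in a tools/list response (the field names inside the schema are illustrative, not anonym.legal's actual schema):

```python
# Hypothetical tool definition; the real anonym.legal schema may differ.
anonymize_text_tool = {
    "name": "anonymize_text",
    "description": "Replace PII in the given text with reversible tokens.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "text": {"type": "string", "description": "Text to anonymize"},
            "entity_groups": {
                "type": "array",
                "items": {"type": "string"},
                "description": "e.g. UNIVERSAL, HEALTHCARE, FINANCIAL",
            },
        },
        "required": ["text"],
    },
}
```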
PROTOCOL VERSION
The current MCP specification version is 2025-11-25, released November 25, 2025 as MCP 1.0. anonym.legal MCP Server is v2.2.0 and implements the full Tools capability set with Streamable HTTP transport and stdio transport. The protocol is maintained at spec.modelcontextprotocol.io.
Configuration Examples
Ready-to-use config snippets for every major client
Edit ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows). Restart Claude Desktop after saving.
{
  "mcpServers": {
    "anonym-legal": {
      "command": "npx",
      "args": [
        "-y",
        "@anthropic-ai/mcp-server-anonym-legal"
      ],
      "env": {
        "ANONYM_LEGAL_API_KEY": "your-api-key-here"
      }
    }
  }
}
Transport: stdio (subprocess). The -y flag auto-confirms npx install. Replace your-api-key-here with your anonym.legal API key from anonym.legal.
Create or edit .cursor/mcp.json in your project root (project-level) or in ~/.cursor/mcp.json (global).
Streamable HTTP (remote server):
{
  "mcpServers": {
    "anonym-legal": {
      "url": "https://anonym.legal/mcp",
      "headers": {
        "Authorization": "Bearer your-api-key-here"
      }
    }
  }
}
Or as a local stdio server:
{
  "mcpServers": {
    "anonym-legal": {
      "command": "npx",
      "args": ["-y", "@anthropic-ai/mcp-server-anonym-legal"],
      "env": {
        "ANONYM_LEGAL_API_KEY": "your-api-key-here"
      }
    }
  }
}
Create .vscode/mcp.json in your workspace. Requires VS Code 1.99+ with GitHub Copilot Chat and Agent mode enabled.
{
  "servers": {
    "anonym-legal": {
      "type": "http",
      "url": "https://anonym.legal/mcp",
      "headers": {
        "Authorization": "Bearer your-api-key-here"
      }
    }
  }
}
Open Copilot Chat → Agent mode → the anonym-legal tools will appear in the tool picker. VS Code uses the key "servers" (not "mcpServers").
Edit ~/.continue/config.json. Add to the mcpServers array.
{
  "mcpServers": [
    {
      "name": "anonym-legal",
      "transport": {
        "type": "streamable-http",
        "url": "https://anonym.legal/mcp",
        "requestOptions": {
          "headers": {
            "Authorization": "Bearer your-api-key-here"
          }
        }
      }
    }
  ]
}
Cline stores MCP config in VS Code settings. Open Cline → MCP Servers tab → Add Server → paste this config.
{
  "anonym-legal": {
    "url": "https://anonym.legal/mcp",
    "disabled": false,
    "transportType": "streamable-http",
    "headers": {
      "Authorization": "Bearer your-api-key-here"
    }
  }
}
After saving, Cline will auto-discover all 7 anonym.legal tools. They appear in the tools list and can be called from any Cline conversation.
Using the OpenAI Agents SDK (Python). Install: pip install openai-agents
import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerStreamableHTTP

async def main():
    async with MCPServerStreamableHTTP(
        url="https://anonym.legal/mcp",
        headers={"Authorization": "Bearer your-api-key-here"},
        name="anonym-legal",
    ) as mcp_server:
        agent = Agent(
            name="Privacy Agent",
            model="gpt-4o",
            instructions="Use anonym_legal_anonymize_text before sending any document to external services.",
            mcp_servers=[mcp_server],
        )
        result = await Runner.run(
            agent,
            "Anonymize this report before archiving: John Smith (SSN 123-45-6789)...",
        )
        print(result.final_output)

asyncio.run(main())
Compatibility Matrix
Quick reference — transport support and key features per client
| Client | stdio | HTTP | Auto-call | Tool picker | Free tier |
|---|---|---|---|---|---|
| Claude Desktop | ✓ | — | ✓ | ✓ | ✓ |
| Cursor | ✓ | ✓ | ✓ | ✓ | limited |
| VS Code (Copilot) | ✓ | ✓ | ✓ | ✓ | Copilot required |
| Windsurf | ✓ | ✓ | ✓ | ✓ | limited |
| Continue | ✓ | ✓ | ✓ | ✓ | ✓ |
| Cline | ✓ | ✓ | ✓ | ✓ | ✓ |
| Zed | ✓ | ✓ | ✓ | partial | ✓ |
| JetBrains (ACP) | ✓ | ✓ | ✓ | ✓ | AI Pro required |
| Cherry Studio | ✓ | ✓ | ✓ | ✓ | ✓ |
| AnythingLLM | — | ✓ | ✓ | ✓ | ✓ |
| OpenAI Agents SDK | ✓ | ✓ | ✓ | code-only | ✓ |
| LangChain | ✓ | ✓ | ✓ | code-only | ✓ |
| n8n | — | ✓ | ✓ | ✓ | self-hosted |
Auto-call = AI automatically calls tools without user clicking. Tool picker = UI to select which tools to include. Free tier = client has a free usage tier.
Integration Patterns
How different teams connect anonym.legal to their AI stack
Document Review with Claude Desktop
Lawyers drop contracts, depositions, and client files into Claude Desktop. anonym.legal MCP intercepts every prompt: client names are hashed to tokens, SSNs are redacted, financial data is masked. Claude analyzes the structure; the law firm's data stays protected.
Operators: hash (names), redact (SSN), encrypt (account numbers)
Code Review with Real Data in Cursor
Developers paste production logs, database dumps, or API responses into Cursor for debugging. anonym.legal removes customer emails, IPs, and auth tokens before Cursor sends the context to any LLM — GPT-4o, Claude, or Gemini.
Operators: redact (API keys), replace (emails), keep (IPs for debugging)
Patient Data in AI Workflows
Healthcare teams use Continue or Cline to summarize clinical notes and discharge letters. anonym.legal's HEALTHCARE entity group detects names, dates-of-birth, diagnoses, and medical record numbers — all tokenized before any LLM sees the text.
Entity group: HEALTHCARE + UNIVERSAL
Operators: replace (person), redact (MRN)
LLM Pipelines on Production Data
Data teams run LlamaIndex or LangChain pipelines over real customer datasets. anonym.legal MCP is called programmatically via the OpenAI Agents SDK or LangChain adapter — anonymize before the LLM, detokenize after.
Entity group: FINANCIAL + UNIVERSAL
E2E mode: server never stores mapping
n8n Workflow Automation
Operations teams route customer tickets, emails, and CRM data through n8n AI workflows. The anonym.legal MCP Tool node sits before every LLM call — customer names, phone numbers, and account data are tokenized in transit.
Entity group: UNIVERSAL + CORPORATE
Operators: replace (person), mask (phone)
Llama + AnythingLLM On-Premise
Teams that run Llama or Mistral locally via Ollama + AnythingLLM can still protect PII before sending text to the local model. The anonym.legal API call goes to EU servers; the anonymized text stays local. Hybrid: cloud anonymization, local inference.
LLMs: Llama 3.3, Mistral, Phi-3
Security Checklist
Deploy anonym.legal MCP safely in regulated environments
✓ API Key Management
- Store the API key in an environment variable, not in the config file
- Never commit claude_desktop_config.json to git
- Add .cursor/mcp.json to .gitignore if it contains keys
- Use project-level configs with placeholders; inject keys at runtime
- Rotate API keys regularly via the anonym.legal dashboard
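One way to combine the placeholder and environment-variable rules is to render the real config from a template at install time. A stdlib-only sketch; the placeholder string and template are illustrative:

```python
import json
import os

def render_config(template: str, api_key: str) -> str:
    """Fill the API-key placeholder in a config template with a real key."""
    config = json.loads(template)
    config["mcpServers"]["anonym-legal"]["env"]["ANONYM_LEGAL_API_KEY"] = api_key
    return json.dumps(config, indent=2)

# The committed template contains no secret, only a placeholder:
template = """{
  "mcpServers": {
    "anonym-legal": {
      "command": "npx",
      "args": ["-y", "@anthropic-ai/mcp-server-anonym-legal"],
      "env": { "ANONYM_LEGAL_API_KEY": "__INJECTED_AT_RUNTIME__" }
    }
  }
}"""

# The real key is read from the environment, never written to the repo:
rendered = render_config(template, os.environ.get("ANONYM_LEGAL_API_KEY", "missing"))
```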
✓ Transport Security
- HTTP transport: always HTTPS — never plain HTTP
- Verify the endpoint: https://anonym.legal/mcp (TLS 1.3)
- stdio transport: process-isolated, no network attack surface
- E2E mode: use when the server must not store token mappings
✓ Session & Data Retention
- Use persistence: "session" (24h) for short workflows
- Use persistence: "persistent" (30d) only when detokenization is needed later
- Call anonym_legal_delete_session after completing a task (GDPR Art. 17)
- E2E mode: session token mapping is never stored server-side
✓ Regulated Industries
- Infrastructure: ISO 27001:2022 (Hetzner, Germany)
- All data processing within EU (GDPR compliant)
- Zero-knowledge auth option available
- HIPAA: use HEALTHCARE entity group + redact/replace operators
- PCI-DSS: use FINANCIAL group + encrypt or mask for card data
- GDPR: use delete_session for right-to-erasure workflows
IMPORTANT NOTE ON CLIENT TRUST
anonym.legal MCP intercepts PII before the text reaches the AI model. However, the AI tool (Claude Desktop, Cursor, etc.) still receives the anonymized text and the AI's response. Your data is protected from the LLM provider — it is not protected from the AI tool client itself. Choose AI clients that comply with your organization's data processing agreements.
Connect Your AI Tool Now
One API key. One endpoint. Works with every MCP client listed here.
Related Resources
MCP Server Reference
All 7 tools, 6 operators, 26 entity groups, and advanced config options documented with full parameter tables.
Operators Deep Dive
When to use replace, hash, encrypt, mask, redact, or keep — with real-world examples for legal, healthcare, and fintech.
REST API Docs
Full REST API reference, authentication, code examples in Python, Node.js, and cURL for non-MCP integrations.