In our previous blog, we explored building a simple yet powerful MCP server that allowed Claude to “watch” YouTube videos and save their summaries in Notion. That project gave your assistant eyes and a memory.
In this blog, we’re turning the focus inward — on the LLMs themselves.
As AI tools become more capable and integrated into real workflows, securing them becomes just as important as making them useful. Large Language Models are powerful, but they’re not invincible. From prompt injections to jailbreaks, social engineering to model misuse, LLMs can be manipulated in unexpected ways — and once they’re deployed, the stakes get higher.
That’s where Enkrypt AI’s MCP server comes in.
This server gives any MCP-compatible assistant — like Claude Desktop, Cursor, or even your custom host — the ability to:
- Monitor prompts and responses for potential risks
- Red-team models and test security through simulated attacks before deployment
- Generate comprehensive safety reports detailing discovered threats
- Implement protective measures against identified vulnerabilities
Essentially, you’re giving your AI assistant a red-teaming toolkit — and access to Enkrypt’s entire AI safety platform, directly through MCP.
In this guide, we’ll walk you through:
- Installing the Enkrypt AI MCP server
- Connecting it to Cursor and Claude Desktop
- Using it to analyze, audit, and harden your AI systems
Let’s get started.
What Is the Enkrypt AI MCP Server?
The Enkrypt AI MCP server is a bridge between your AI assistant and Enkrypt’s security platform. Once installed, it allows any MCP client — like Claude Desktop, Cursor, or a custom agent — to detect risks, audit behavior, and even simulate attacks in a safe, controlled way.
So instead of treating AI safety as an afterthought or a separate testing process, you can build it directly into your day-to-day interaction with the assistant. Just like you might ask Claude to summarize a file or update your Notion board, now you can ask it:
- “Can you red team my model endpoint on the CBRN dataset?”
- “Strengthen the system prompt of the model to block the threats detected during red teaming”
- “Check if there are any injection attacks or PII data in the following list of prompts…”
Behind the scenes, Claude sends a structured request to the Enkrypt AI server, which runs its analysis and returns a response — just like any other MCP tool.
This means you’re not just writing prompts. You’re validating them. Stress-testing them. Red-teaming them. And you’re doing it with the same assistant that’s helping you build.
Why AI Safety Through MCP Is a Big Deal

AI safety tools are often seen as either enterprise-only or only for post-deployment analysis. Enkrypt’s open-source MCP server flips that model by making real-time safety testing accessible right inside your dev flow.
Imagine this:
- You’re working in Cursor, building a customer support bot.
- You’re writing test prompts to simulate angry users, tricky edge cases, or sarcastic inputs.
- As you experiment, you call on Enkrypt via MCP to analyze risks, suggest safer prompt alternatives, or generate potential attack variants.
No additional dashboards. No jumping between tabs. Just safer AI development inside your AI tool.
This also democratizes red-teaming. Whether you’re a solo indie dev, part of a research team, or working in an enterprise AI lab, you can now test and validate your model behavior with a few lines of config and a standard protocol.
Installing the Enkrypt AI MCP Server
Now that we understand what the server does, it’s time to set it up.
Install the server
First, ensure that you have uv installed on your local machine. If not, it can be easily installed by following the uv installation docs.
Next, let’s clone the Enkrypt AI MCP Server repository by running the following commands in the terminal:
git clone
<https://github.com/enkryptai/enkryptai-mcp-server.git>
cd enkryptai-mcp-server
Install the dependencies:
uv pip install -e .
Configure the server in MCP Clients
Before the configuration, you will need to obtain the free Enkrypt AI API key from https://app.enkryptai.com/settings/api.
Cursor
Navigate to the MCP tab in Cursor Settings and click on Add new global MCP server.
This will open up the mcp.json file where you can paste the following server config:
{
"mcpServers": {
"EnkryptAI-MCP": {
"command": "uv",
"args": [
"--directory",
"PATH/TO/enkryptai-mcp-server",
"run",
"src/mcp_server.py"
],
"env": {
"ENKRYPTAI_API_KEY": "YOUR ENKRYPTAI API KEY"
}
}
}
}
Make sure to set the path to the enkryptai-mcp-server
directory and also your Enkrypt AI API key. Now, the server will be online and available to use:

Claude Desktop
In the Claude desktop app, open the Claude menu from your computer’s menu bar and select “Settings…”. Note that this is different from the Claude Account Settings within the app window.

Click on “Developer” in the left-hand bar of the Settings pane, then click “Edit Config”.
This will create a configuration file at:
- macOS:
~/Library/Application Support/Claude/claude_desktop_config.json
- Windows:
%APPDATA%\\Claude\\claude_desktop_config.json
If you don’t already have one, the file will be created and displayed in your file system.
Open the configuration file in your preferred text editor and replace its contents with this:
{
"mcpServers": {
"EnkryptAI-MCP": {
"command": "uv",
"args": [
"--directory",
"PATH/TO/enkryptai-mcp-server",
"run",
"src/mcp_server.py"
],
"env": {
"ENKRYPTAI_API_KEY": "YOUR ENKRYPTAI API KEY"
}
}
}
}
Make sure to set the path to the enkryptai-mcp-server
directory and also your Enkrypt AI API key.
After updating your configuration file, you need to restart Claude for Desktop.
Upon restarting, you should see a hammer icon in the bottom left corner of the input box:

Exploring the Core Features
Let’s say I want to build a customer support chatbot for academic and research tasks. First, I need to select the right LLM and understand its vulnerabilities so I can protect against them. This is where the Enkrypt AI MCP shines — I can simply open any MCP client and get comprehensive data about whichever model I’m considering!
Red teaming the model
I’ve selected Mistral-7B-instruct-v0.3 for its solid performance and lower resource requirements. Since it will be used in a research environment, there’s a significant risk that malicious actors might try to extract harmful scientific information — like instructions for creating dangerous chemicals or pathogens. To address this, I need to identify the model’s CBRN (Chemical, Biological, Radiological and Nuclear) vulnerabilities. Thankfully, this process is straightforward with the Enkrypt AI MCP — I can simply ask my assistant to red team the model using the CBRN dataset!

I can ask Claude to check if the test is complete:

The test was completed and I got the threat breakdown! It is to be noted that I ran the test on a smaller sub sample of the dataset so we did not cover all the categories, but it is sufficient for representational purposes.
Protecting against the discovered vulnerabilities
Now that we’ve identified these vulnerabilities, we can protect our model against them. We’ll explore two approaches: creating a strengthened system prompt and implementing a guardrails policy to block dangerous requests.
Let’s first try out creating a strengthened system prompt:

Now let’s ask Claude to create a guardrails policy instead:

Claude not only created the guardrails policy, but also tested it out with various prompts to make sure that the policy is working effectively!
Create a safe deployment of the model
We can now create a deployment of the model with the created guardrails policy:

Many more features!
We just showed some of the few use-cases while using the Enkrypt AI MCP. But there a re many more tools which you can use:

Conclusion
The Enkrypt AI MCP server represents a significant step forward in making AI safety accessible and integrated into the development workflow. By bridging the gap between development and security testing, it enables developers to build safer AI systems from the ground up.
Whether you’re a solo developer, researcher, or part of a larger team, having these powerful security tools available directly through your MCP client means you can:
- Continuously validate and improve your AI systems’ safety
- Catch potential vulnerabilities before they become problems
- Implement robust security measures with minimal friction
As AI continues to evolve and become more integrated into our daily workflows, tools like the Enkrypt AI MCP server will be essential in ensuring that these systems remain both powerful and secure. Start securing your AI applications today by incorporating these tools into your development process.
This blog is part of our Model Context Protocol (MCP) series, where we explore how to unlock real-world functionality by connecting large language models to external tools using MCP.