The Enkrypt AI Platform
Detect. Remove. Monitor.
A comprehensive approach to AI security and safety.
FAQS
Most frequently asked questions from our customers
Red Teaming provides insights into risks before your applications are deployed into production. Gen AI applications have a large attack surface that cannot be tested manually. With our automated, algorithmic approach, this large test area is covered.
Red teaming helps you uncover risks in your Generative AI application in pre-production (i.e. before deployment), while guardrails assist in real-time threat detection and response in production environments.
Red Teaming results provide risk insights about your generative AI applications that are relevant to your use case. For example, Toxicity is not as relevant in internal use cases but becomes highly relevant in content generation use cases. To prevent misuse of Generative AI applications in real time, use our Guardrails solution to ensure continuous security. You can also use the safety alignment data generated from red teaming to fine-tune the model.
Risks uncovered from Red Teaming can be removed in real time with Guardrails. Guardrails sit as a protection layer inside your system to prevent any malicious usage.
Guardrails is a powerful tool designed to facilitate the faster adoption of Large Language Models (LLMs) in your organization. It provides an API and a playground that detects and prevents security and privacy challenges such as Prompt Injection, Toxicity, NSFW content, PII exposure, and more.
Guardrails helps ensure the privacy and safety of your data and systems by proactively identifying and mitigating potential security and privacy threats. This is essential for maintaining trust, compliance, and operational continuity in your organization.
You gain access to comprehensive red-teaming, safety alignment training, real-time threat detection and prevention, automated security incident response, comprehensive analytics, and seamless integration with your existing workflow.
We offer both on-premises and cloud-based deployment options. Our cloud solution is hosted on our secure infrastructure, ensuring flexibility and security for your organization.
No, Enkrypt AI does not use your data for training our models. We prioritize your privacy and data security.
Yes, Guardrails is model agnostic, meaning you can use it with any model provider (even your own model). This offers flexibility and compatibility with your existing AI infrastructure.
Yes, we are on track to achieve SOC 2 compliance, ensuring that our security practices meet rigorous industry standards.
Guardrails includes several detectors to address various security and privacy issues: Prompt Injection Detector, Toxicity Detector, NSFW Detector, PII Detector, Topic Detector, Keyword Detector, and Hallucination Detector. These detectors help identify and mitigate potential risks in your data and systems.