Thought Leadership

How to Improve AI Security from Cloud Security Best Practices

Best practices from cloud security can greatly benefit Generative AI security.
January 8, 2025

How Can We Learn from Cloud Security Adoption When It Comes to AI Security?

Cloud security has grown significantly over the last ten years due to evolving cyber threats and legal requirements. The tech sector learned a painful lesson from the significant breaches and misconfigurations caused by the lag between cloud adoption and cloud security. For example, the 2017 AWS S3 bucket hack revealed over 100 million Americans' private information, underscoring the dangers of incorrect setups.


Businesses now are adopting generative AI even more quickly than they did cloud technology. Because of its quick adoption, generative AI poses security risks that must be proactively addressed. Organizations must give top priority to creating generative AI applications with security built in from the start as cyberattacks get more complex and regulations take shape.

The Shared Responsibility Model: A Blueprint for AI Security

The shared responsibility model is among the most crucial lessons to be learned from cloud security. In cloud settings, consumers oversee protecting their data, apps, and customizations, while cloud providers safeguard the infrastructure. This model translates well to AI.

Providers of Generative AI models (like OpenAI, Anthropic, and Hugging Face) ensure model integrity and basic safety of the Large Language models. Organizations using these models must secure the generative AI endpoints from misuse. OWASP Top 10 2025 defines a list of top security concerns for Generative AI applications. The MITRE Atlas Framework also offers a comprehensive analysis of the attack types that Generative AI applications are vulnerable. See the figure below.

MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) matrix.
MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) matrix.

Overlooking the risks of prompt injection, toxic content generation or sensitive data exposure can result in monetary and reputational loss. For instance, in early deployments of GPT-3, researchers demonstrated how prompt injections could bypass filters, generating harmful or biased content. Organizations must take responsibility for building their own layers of security and fine-tuning models to align with internal policies.

Additionally, shared responsibility necessitates organizations to not just focus on endpoint security but also on data provenance, access control, and fine-grained permissions around AI usage. It is crucial to monitor data flows, track model versions, and log interactions to identify potential breaches. Companies should treat their AI deployments as extensions of their cloud environments, inheriting the same strict security measures.

Key Takeaway: Organizations must understand their role in securing AI deployments, emphasizing data governance, adversarial testing, and output monitoring.

Zero Trust for AI Models and Pipelines

Cloud security introduced the concept of Zero Trust, where no user or system is trusted by default. Access is continuously verified. This principle applies directly to AI systems, where adversarial actors may attempt to manipulate prompts, extract model data, or disrupt operations. Applying Zero Trust to AI involves rigorous identity verification, monitoring API usage, and limiting access to AI models based on need and risk. AI pipelines should include strict guardrails that filter prompts and validate outputs to prevent harmful behavior. An example is Enkrypt AI’s Guardrails solution, which consistently detects and blocks any threats to Generative AI apps.

Zero Trust extends beyond access control to continuous model evaluation. Models should be periodically retrained and tested against evolving threats. Shadow deployments, where new models run parallel to production ones, allow for continuous assessment without risking live operations.

Furthermore, Zero Trust in AI can be strengthened through multi-factor authentication (MFA) for critical AI pipelines and the segmentation of AI workloads, ensuring that sensitive applications operate in isolated environments.

Key Takeaway: Adopt a Zero Trust mindset for AI, ensuring no input or output is automatically trusted. Continuously audit and control AI interactions.

Addressing the Complexity of Multi-Cloud and Multi-AI Environments

Many organizations manage multi-cloud environments, relying on AWS, Azure, and Google Cloud simultaneously. This complexity gave rise to cloud security posture management (CSPM) tools that offer centralized visibility and control across platforms. Similarly, organizations are increasingly deploying multiple AI models and frameworks (from different vendors) to solve various business problems. This introduces security gaps, misconfigurations, and inconsistencies. AI security must evolve to offer centralized visibility across models and deployments. A notable example is Enkrypt AI Model Deployments which are cloud agnostic and offer support with multiple AI frameworks, streamlining model governance and monitoring across hybrid environments.

Moreover, as AI adoption scales, federated learning and edge AI deployments will further complicate the security landscape. Organizations must adopt tools that provide consistent policy enforcement across all environments, regardless of where the AI models reside.

Implementing standardized APIs and uniform security policies across AI frameworks can mitigate the risks posed by fragmented environments. This reduces the likelihood of model drift and ensures all AI tools operate within a secured and monitored environment.

Key Takeaway: Deploy AI security solutions that span across different AI platforms, providing unified risk management and visibility.

Automation and Continuous Monitoring

Cloud security relies heavily on automation and continuous monitoring. AI security must adopt the same approach. Automated tools can detect unusual AI outputs, flag biases, and prevent adversarial attacks in real-time. This proactive defense reduces the risk of malicious actors exploiting AI vulnerabilities. For example, threat detection tools like Enkrypt AI Guardrails constantly scan model input for vulnerabilities and suspicious activity.

Beyond monitoring, automation can facilitate rapid patching of AI models in response to emerging threats. Automated rollback mechanisms ensure that if vulnerabilities are detected, affected models can revert to a known safe state.

Continuous integration and continuous deployment (CI/CD) pipelines should extend to AI, with automated tests assessing the ethical implications, performance, and security of AI updates before deployment.

Key Takeaway: Implement automated monitoring and real-time anomaly detection for AI systems, ensuring rapid response to emerging threats.

Compliance and AI Governance

Regulatory frameworks like GDPR, HIPAA, and CCPA forced organizations to rethink cloud data protection. Compliance became a driver for cloud security innovation. With new laws like the White House Executive Order on AI and EU AI Act, AI is now entering a similar stage. AI governance systems must be integrated by organizations to guarantee adherence to changing legal requirements.

AI-specific regulations often demand model transparency, explainability, and auditability. Businesses must document AI decision-making processes, providing regulators and users with insights into how and why models arrive at specific outcomes.

Organizations should establish AI governance boards that include cross-functional members from compliance, legal, and technical teams to oversee AI projects from inception to deployment.

Key Takeaway: Just as compliance drove cloud security advancements, AI governance frameworks will shape the future of AI security. Proactively align AI strategies with regulatory requirements.

Security by Design

Cloud security shifted left with the rise of DevSecOps, where they embedded security checks early in the development lifecycle. This prevents vulnerabilities from reaching production. AI development must learn from such best practices and do the same. AI security by design -means embedding security into model training, data pipelines, and prompt engineering. Rd-teaming AI models and conducting adversarial testing ensures robustness before deployment. Enkrypt AI Red Teaming can be used to surface vulnerabilities before deployment while Enkrypt AI Safety alignment can be used to safety train LLMs against harmful prompts.

Security by design also necessitates the use of synthetic data to train models, reducing the reliance on sensitive datasets and lowering the risk of data leaks.

Embedding security awareness into AI development teams through regular training and simulation exercises fosters a proactive approach to identifying and mitigating vulnerabilities.

Key Takeaway: Integrate security into the AI development lifecycle from the outset to prevent vulnerabilities during training and testing.

Conclusion

The parallels between cloud security and AI security are striking. Both technologies disrupt traditional paradigms, introduce new attack surfaces, and demand innovative approaches to risk mitigation. However, the adoption gap that once plagued cloud security is now emerging in AI security. By addressing this gap early, organizations can build resilient and trustworthy AI systems. As AI adoption grows, the industry must champion the same security principles that transformed cloud computing. Zero Trust, shared responsibility, automation, and compliance will be the cornerstones of secure AI ecosystems. Through rigorous planning, continuous monitoring, and proactive governance, organizations can ensure the secure deployment and management of generative AI systems, paving the way for responsible AI innovation.

FAQs

  1. What lessons can be learned from cloud security for AI security?

Answer: Cloud security has evolved significantly due to past breaches, highlighting the importance of proactive measures. Organizations must prioritize building security into generative AI applications from the start, like how they adapted to cloud security challenges after incidents like the 2017 AWS S3 bucket hack.

  1. What is the Shared Responsibility Model in the context of AI?

Answer: The Shared Responsibility Model delineates roles between cloud providers and users. In AI, providers ensure model integrity while organizations must secure their AI endpoints and manage data governance, access control, and compliance with internal policies.

  1. How does Zero Trust apply to AI systems?

Answer: Zero Trust means no user or system is trusted by default. For AI, this involves continuous verification of identities, monitoring API usage, and implementing strict access controls to prevent unauthorized manipulation of models and data.

  1. What are the key security concerns for generative AI applications?  

Answer: Key concerns include prompt injection, toxic content generation, and sensitive data exposure. Organizations must address these risks through robust security measures and continuous monitoring to mitigate potential monetary and reputational losses.

  1. How should organizations manage multi-cloud and multi-AI environments?  

Answer: Organizations should adopt centralized visibility tools to manage multiple cloud platforms and AI frameworks. This helps in addressing security gaps and ensuring consistent policy enforcement across diverse environments.

  1. What role does compliance play in AI governance?

Answer: Compliance with regulations like GDPR and HIPAA drives organizations to integrate governance frameworks that ensure transparency, explainability, and auditability of AI systems. Establishing cross-functional governance boards can help oversee adherence to these requirements.

  1. How can security be integrated into the AI development lifecycle?

Answer: Security should be embedded from the outset of AI development through practices like adversarial testing, using synthetic data, and regular training for development teams on security awareness.

Satbir Singh