Back to Blogs
CONTENT
This is some text inside of a div block.
Subscribe to our newsletter
Read about our privacy policy.
Thank you! Your submission has been received!
Oops! Something went wrong while submitting the form.
Industry Trends

Top 5 AI Security Trends Discussed at the Confidential Computing Summit 2024

Published on
June 13, 2024
4 min read

The 2-day conference featured discussions with the smartest minds in confidential computing and privacy-preserving (generative) AI. 

2024 CoCO Summit


In June, I had the pleasure to both attend and speak at the 2024 CoCo Summit in San Franciso. My talk entitled, “Strategies for Effectively Deploying Trustworthy Generative AI Solutions”, was a bit generalized for this audience, as confidential computing solves only one aspect of LLM security, specifically on deployment. 

Nonetheless, I received great feedback on my talk from the audience members, namely how impressed they were with the comprehensiveness of the Enkrypt AI platform. They appreciated its benchmarked and dynamic Red Teaming, Alignment, Guardrails, and continuous Monitoring capabilities. All of which can be done simultaneously in the platform. 

We are proud of building a product that can (among other things):

  1. Detect both security risks (jailbreak, malware, leakage) and model risks (toxicity, bias and hallucinations), and 
  2. Evaluate AI systems against operational and reputational risks throughout development and deployment.

The rest of the conference was filled with presentations from industry luminaries representing Microsoft, Nvidia, Google, and others. 

Top 5 AI Security Trends


Here are the top trends I came away with after digesting the jam-packed content:

  1. Internal threat actors are increasing, so protecting LLM IP is becoming critical. And in some cases, of national security importance. Jason Clinton, CISO, at Anthropic essentially made this point in his presentation.

  2. The technology for confidential computing for Generative AI is not yet mature – confidential GPUs are a year away.

  3. Despite their infancy, use cases for confidential computing are starting to pick up steam. One example is to port the workloads (AI training, data processing) into confidential computing. 
  1. Challenges abound at the CPU-GPU communication level when it comes to confidentiality.

  2. There is an obvious need to provide responsible and secure Generative AI. Threat actors know AI applications are currently an easy and profit-rich target to exploit. 

We look forward to attending next year’s event, as interest will only grow in this industry. 

Meet the Writer
Prashanth H
Latest posts

More articles

Industry Trends

The Clock is Ticking: EU AI Act's August 2nd Deadline is Almost Here

The EU AI Act’s key compliance deadline on August 2, 2025, marks a major shift for AI companies. Learn how this date sets new regulatory standards for AI governance, affecting general-purpose model providers and notified bodies across Europe. Prepare now for impactful changes in AI operations.
Read post
Industry Trends

An Intro to Multimodal Red Teaming: Nuances from LLM Red Teaming

As multimodal AI models evolve, continuous and automated red teaming across images, audio, and text is essential to uncover hidden risks. Collaboration among practitioners, researchers, and policymakers is key to building infrastructures that ensure AI systems remain safe, reliable, and aligned with human values.
Read post
Industry Trends

Uncovering Safety Gaps in Gemini: A Multimodal Red Teaming Study

A comprehensive red team assessment exposes critical vulnerabilities in Google’s Gemini 2.5 AI models, with over 50% success rates for CBRN attacks in some configurations. The findings highlight urgent risks in multimodal AI and call for immediate, industry-wide safety enhancements to prevent mass casualty scenarios and adversarial misuse.
Read post