Secure By Design: Architecting Defense for AI Systems
AI technologies are already deeply integrated into modern enterprises, powering tools, automating decisions, and shaping user experiences. But with rapid deployment comes a growing wave of security concerns. AI doesn’t behave like traditional software—it adapts, evolves in production, and often operates with limited transparency, making it uniquely difficult to secure.
In this webinar, Mike Burch, Director of Application Security at Security Journey, will explore the architectural challenges of building secure AI systems. You’ll gain insight into the emerging risks that set AI apart and walk away with practical strategies to address them at the design level.
In this session, we’ll explore:
- How AI’s dynamic, non-deterministic nature introduces unexpected attack vectors.
- Why AI assets often operate under the radar and what that means for risk visibility.
- Misconceptions about AI security that can lead to blind spots.
- An overview of AI-specific vulnerabilities like prompt injection, model manipulation, and insecure integrations.
- Concrete examples of AI-specific threats such as adversarial inputs, sensitive data exposure, and unsafe model tuning.
- A framework for AI threat modeling tailored to today’s evolving landscape.
If you’re involved in shaping your organization’s security, compliance, or AI strategy, this session will help you take a proactive stance and build AI systems with security in mind from the start.