The CSA Triangle chapter meets monthly, usually on the third Thursday of the month. Everyone is welcome regardless of membership status. Attendees are also encouraged to join the chapter in recognition and support of the value it brings to individuals, sponsors, and the community.
Speaker: Daniela Lulli
Title: Robots vs Robots – Securing AI Throughout the Data Lifecycle
Summary of Presentation:
As AI systems, copilots, and autonomous workflows proliferate, defenders must secure not only the data that fuels them but also the AI behaviors, access paths, and automation they introduce. Robots vs. Robots explores how organizations can protect AI systems end-to-end by controlling data exposure, governing AI access, and using automation to stay ahead of adversaries.
Speaker: TBD
Title: TBD
Summary of Presentation:
TBD
Speaker: Sif Baksh
Title: Automating the Cloud Controls Matrix: From Deterministic Audits to AI Agents
Subtitle: Modernizing GRC by evolving static risk workflows into intelligent AI agents that map security alerts directly to CSA standards
Summary of Presentation:
Despite the rise of AI, "manual work remains stubbornly high" for security teams. This session demonstrates how to bridge the gap between CSA Cloud Auditing (CCAK) and practical automation by building a three-stage GRC pipeline in Tines:
The Foundation (Deterministic): We begin by ingesting raw cloud alerts and creating a standardized Risk Register using Tines Records, ensuring every violation is captured without manual entry.
The Intelligence (AI Agent): We upgrade the workflow with an AI Agent action. Because "Agents shouldn’t be a black box", we explicitly instruct it to map alerts to the Cloud Controls Matrix’s "197 control objectives" and recommend remediation.
The Interaction (AI Chatbot): We conclude by deploying an AI interaction layer that allows auditors to query their compliance data using natural language (e.g., "Show me all GDPR risks"), effectively "unlocking AI’s full value" for the audit process.
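The deterministic foundation stage can be sketched outside of Tines as well. The Python sketch below is illustrative only: the alert shapes, the alert-type-to-control mapping, and the specific CCM control IDs are assumptions for demonstration, not the actual Tines workflow or an authoritative CCM mapping.

```python
# Illustrative sketch of the deterministic stage: normalize raw cloud
# alerts into risk-register entries mapped to Cloud Controls Matrix
# (CCM) control IDs, with no manual data entry.
from dataclasses import dataclass, field

# Hypothetical mapping from alert types to CCM control IDs (illustrative).
ALERT_TO_CCM = {
    "public_s3_bucket": "DSP-07",   # data security control (assumed ID)
    "mfa_disabled": "IAM-08",       # identity control (assumed ID)
    "no_audit_logging": "LOG-05",   # logging control (assumed ID)
}

@dataclass
class RiskRegister:
    entries: list = field(default_factory=list)

    def record(self, alert: dict) -> dict:
        """Normalize one alert into a standardized register entry."""
        entry = {
            "resource": alert["resource"],
            "alert_type": alert["type"],
            "ccm_control": ALERT_TO_CCM.get(alert["type"], "UNMAPPED"),
        }
        self.entries.append(entry)
        return entry

register = RiskRegister()
entry = register.record(
    {"resource": "s3://example-logs", "type": "public_s3_bucket"}
)
print(entry["ccm_control"])  # DSP-07
```

The deterministic lookup table is what makes this stage auditable; the AI agent stage then handles alerts that fall outside the table.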
Speaker: Jacob Graves
Title: Securing AI Use in the Cloud: Practical Guardrails for Enterprises
Summary of Presentation:
Enterprises are rapidly adopting AI across browsers, code assistants, and internal apps, but with that adoption comes new risks: shadow AI, data leakage, prompt injection, unsafe responses, and governance gaps. This session shares a practitioner’s view on how to put actionable guardrails in place across three core areas: employee use of AI tools, developers using AI code assistants, and homegrown AI applications. We’ll walk through real-world patterns for visibility and control (e.g., detection of unsanctioned AI tools, policy-based redaction and enforcement, content moderation), and discuss emerging challenges with agentic AI and MCP frameworks, plus how dynamic risk scoring and gateway patterns can help mitigate them in cloud environments.
Key Takeaways:
A simple framework to assess and prioritize AI security risks across employees, developers, and internal apps.
Pragmatic controls you can deploy now: observability for shadow AI, automatic data redaction, policy enforcement, and content safeguards.
How to approach agentic AI and MCP risk: what to monitor, how to score risk, and where to enforce controls (endpoint, proxy, or app layer).
Implementation patterns that balance developer velocity and user experience with enterprise-grade governance.