Course Overview
AI is transforming products, operations, and decision-making across industries. But as AI systems move into production, they open new attack paths through models, prompts, data pipelines, agent workflows, APIs, and integrations, creating vulnerabilities that adversaries are already targeting. Traditional penetration testing does not fully cover LLM vulnerabilities: prompt injection, data poisoning, and model manipulation require specialized offensive skills. Certified Offensive AI Security Professional (COASP) is the first credential built specifically for AI red teamers.
Who should attend
COASP is designed for security professionals who want to master offensive and defensive AI security techniques.
Prerequisites
Learners should have roughly three years of cybersecurity experience.
Course Objectives
- Think like an attacker inside AI systems
- Uncover weaknesses across models and pipelines
- Validate security controls
- Reduce operational risk before deployment
Outline: Certified Offensive AI Security Professional (COASP)
- Module 01 - Offensive AI and AI System Hacking Methodology
- Module 02 - AI Reconnaissance and Attack Surface Mapping
- Module 03 - AI Vulnerability Scanning and Fuzzing
- Module 04 - Prompt Injection and LLM Application Attacks
- Module 05 - Adversarial Machine Learning and Model Privacy Attacks
- Module 06 - Data and Training Pipeline Attacks
- Module 07 - Agentic AI and Model-to-Model Attacks
- Module 08 - AI Infrastructure and Supply Chain Attacks
- Module 09 - AI Security Testing, Evaluation, and Hardening
- Module 10 - AI Incident Response and Forensics