AI Security Research
AI cybersecurity and adversarial ML research. Securing the next generation of systems.
AI & ML Security
As organisations adopt AI and machine learning, new attack surfaces and threat classes emerge. Our research focuses on adversarial ML, model security, and secure AI development practices.
Scope
We study evasion, poisoning, and extraction attacks against ML systems, along with defences such as adversarially robust training and attack detection. We advise clients on securing AI-powered applications and the data pipelines that feed them.
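To make the evasion category concrete, here is a minimal sketch of a gradient-sign (FGSM-style) evasion attack against a toy logistic-regression classifier. The weights, input, and epsilon value are illustrative assumptions, not drawn from any real deployed system.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1."""
    return sigmoid(w @ x + b)

def fgsm_perturb(w, b, x, y, eps):
    """Shift x by eps in the sign of the loss gradient w.r.t. x
    (Fast Gradient Sign Method), increasing the loss for label y."""
    p = predict(w, b, x)
    # For binary cross-entropy, d(loss)/dx = (p - y) * w.
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical model and clean input (true label 1).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

clean = predict(w, b, x)                 # confident, correct prediction
adv = fgsm_perturb(w, b, x, y, eps=0.5)  # small, targeted perturbation
attacked = predict(w, b, adv)            # confidence drops
print(f"clean={clean:.3f} attacked={attacked:.3f}")
```

The same gradient signal, aggregated over a training set, underpins adversarially robust training: the defender generates perturbed examples like `adv` during training and optimises the model to classify them correctly.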
Outcomes
Practical guidance and tooling to help organisations deploy AI securely. Our research supports our advisory and testing services in this evolving space.