Fooling AI Into Believing Turtles Are Weapons
23.04, 10:00–10:25 (Europe/Vienna), HS i2
Language: English

In recent years, AI systems have shown incredible abilities, such as playing chess, driving cars, recognizing speech, diagnosing cancer, and identifying all kinds of objects. But there are also cases in which they fail in strange ways.

In this talk, we will explore adversarial attacks on neural networks. The goal is to create images that look perfectly ordinary to humans but trick an AI into believing it sees something entirely different. We will show how it is possible to make neural networks believe that turtles look like weapons or any other kind of object.
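One classic way such images are generated is the Fast Gradient Sign Method (FGSM): compute the gradient of the classification loss with respect to the input pixels and nudge every pixel a tiny step in the direction that increases that loss. The sketch below illustrates the idea with a standard pretrained PyTorch classifier; the model choice, the epsilon value, and the fgsm_attack helper are assumptions made for this example, not the exact setup presented in the talk.

```python
# Minimal FGSM sketch. The pretrained ResNet-18 and epsilon value are
# illustrative assumptions, not the talk's actual setup.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, true_label, epsilon=0.03):
    """Perturb `image` to push the classifier away from `true_label`.

    image:      tensor of shape (1, 3, H, W), normalized as the model expects
    true_label: tensor of shape (1,) holding the correct class index
    epsilon:    maximum per-pixel change, kept small to stay imperceptible
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.detach()
```

Practical attacks typically iterate this step while keeping the total perturbation small enough that a human still sees the original object.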

This talk will cover:

  • What kinds of adversarial attacks are there?
  • How do they work?
  • What are the consequences for security and safety in AI technology?
See also: Slides (1.3 MB)

I am a computer science student at TU Graz interested in the intersection of IT security and artificial intelligence. I am also part of the LosFuzzys CTF team.
