Confuse, Obfuscate, Disrupt: Using Adversarial Techniques for Better AI and True Anonymity

Description

In a world where algorithms dictate what we see, buy, and believe, understanding how to disrupt and manipulate them is as powerful as knowing how to build them. This session dives into adversarial techniques that challenge the assumptions of AI/ML models. By introducing noise, obfuscating data, and exploiting algorithmic learning cycles, we can uncover hidden biases and vulnerabilities in AI systems. These techniques push the boundaries of ML training, offering developers a toolkit for crafting more resilient models.

Beyond improving AI, the same techniques offer a unique avenue for achieving digital anonymity. Whether you want to obscure your online footprint or prevent data collectors from profiling you, adversarial inputs provide a practical path forward. In this session, attendees will learn how adversarial methods can disrupt vision and NLP models and how they empower individuals to take control of their privacy, with live demonstrations of these disruptive strategies in action.
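As a taste of the "introducing noise" idea above, here is a minimal sketch of one classic adversarial technique, the Fast Gradient Sign Method (FGSM), in PyTorch. The abstract does not specify which attacks the talk covers, so this is an illustrative example, not the speaker's material; the tiny linear model and the `epsilon` value are arbitrary placeholders.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Fast Gradient Sign Method: nudge the input a small step
    in the direction that most increases the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Move each pixel/feature by +/- epsilon along the loss gradient's sign,
    # then clamp back into the valid input range [0, 1].
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Toy demo: a stand-in "model" and a random input (hypothetical, for illustration)
model = nn.Linear(4, 2)
x = torch.rand(1, 4)
y = torch.tensor([0])
x_adv = fgsm_perturb(model, x, y)
```

The perturbation is bounded by `epsilon`, so to a human the adversarial input looks essentially unchanged while the model's prediction can flip, which is exactly the asymmetry the session's privacy use cases exploit.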

Session 🗣 Intermediate ⭐⭐ Track: AI, ML, Bigdata, Python

Voice AI

AI Assistants

PyTorch

ML

Machine Learning
