Author: Denisa Iepure

In the modern software world, Artificial Intelligence is no longer just a futuristic dream – it’s embedded in the very fabric of our technology, from autonomous vehicles to recommendation engines, cybersecurity tools, and even code generation platforms. But what happens when the machines we trust become vulnerable to manipulation? And how do we, as developers, navigate this evolving digital landscape?

Yossi Sassi’s engaging talk at Craft Conference 2025, “AI: The Basics”, offered a sobering yet inspiring look into the world of AI, adversarial attacks, and the future of code trust. With decades of experience in InfoSec and cyber-defense, Sassi provided not only a technical overview but also a human-centered lens on the rise of machine intelligence.

AI and the Human Touch

Sassi began with a powerful reminder: “Everything around us today, from culture to consumer products, is a product of intelligence.” But unlike traditional software, AI systems don’t just follow static rules – they learn, adapt, and generalize, which opens the door for both innovation and manipulation.

AI is shaped by research, regulation, industry pressure, geopolitics, and uncertainty. It evolves at the intersection of culture, ethics, and security. And in a world increasingly reliant on deep learning, it’s not enough for machines to be smart – they must also be safe.

Patch the Road, Trick the Car

One of the most eye-opening segments in Sassi’s talk focused on adversarial physical attacks—where small changes in the environment trick AI vision systems into misinterpreting the world.

In a powerful example, two workers lay down a simple physical road patch that looks innocuous to humans, but can confuse lane-keeping assistance systems, causing them to veer off course. These aren’t theoretical attacks – they are real-world exploits using basic tools like adhesive stickers or fabric patches to manipulate neural network perception.

“What the human sees as a pothole patch, the machine sees as a lane boundary.”

This leads to critical questions: How resilient are our models? How easily can our cars, drones, and robots be misled in the physical world?
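Sassi kept the discussion at the conceptual level, but the underlying mechanism of such perturbation attacks can be sketched with a toy example. The snippet below uses a stand-in linear "classifier" (not a real lane-detection model, and not from the talk) to show the FGSM-style idea: a tiny, bounded nudge to the input, chosen in the direction the model is most sensitive to, can move its decision score dramatically while looking negligible to a human.

```python
import numpy as np

# Toy linear "classifier": score = w . x; a high score might mean
# "lane marking here". This is a stand-in for a neural network.
rng = np.random.default_rng(0)
w = rng.normal(size=64)   # model weights
x = rng.normal(size=64)   # input "image" features

def score(v):
    return float(w @ v)

# FGSM-style perturbation: push each input feature in the direction
# that increases the score (the gradient of w.x w.r.t. x is just w),
# bounded by a small epsilon so the change stays visually subtle.
epsilon = 0.1
x_adv = x + epsilon * np.sign(w)

# The score shifts by exactly epsilon * ||w||_1 -- enough, in a real
# model, to flip a classification even though x_adv ~ x to a human.
print(score(x), score(x_adv))
```

The point of the sketch is the asymmetry Sassi described: the attacker does not need to change much of the input, only the right parts of it.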

Attack of the Drones (and the Datasets)

Sassi also highlighted a chilling scenario – projected traffic signs injected into the scene using drones, confusing advanced driver-assistance systems (ADAS) into thinking a stop sign or a speed change is real.

These semantic attacks exploit the gap between human logic and AI perception. They don’t just alter pixels; they alter meaning. Whether it’s by changing lighting, projecting fake signs, or manipulating angles, the adversary plays a psychological game – not with humans, but with the AI itself.

From Assistants to Authors: The Future of Coding

In another futuristic twist, Sassi explored how AI is changing not just how we see the world, but how we write it, especially in code.

From simple automation scripts to full-stack AI-generated applications, the trajectory looks something like this:

  1. Humans write code
  2. Humans + machines write code
  3. Machines write code; humans review
  4. Machines write everything; humans intervene only in critical areas

But there’s a caveat: as code becomes more machine-authored, it becomes harder for humans to understand. What happens when we can’t trace logic? When a tragedy occurs due to a “pure machine” error that no human can debug? As Sassi provocatively framed it:

“Code trust becomes the ultimate challenge of our time.”

Red vs. Blue: Cyber Tango in the Age of AI

The cybersecurity implications are vast. As offensive cyber operations integrate AI, machine learning, and automation, the balance between Red Teams (attackers) and Blue Teams (defenders) becomes more dynamic. We no longer fight just with exploits – we fight with training data, model drift, prompt injections, and inference manipulation.

In this new battleground, the tools are evolving faster than the doctrines. Developers must adopt time-focused, architecturally aware, and ethically grounded approaches to building safe AI systems.

What It Means for Developers

For developers, AI security means more than patching bugs. It means:

  • Understanding how models interpret the world
  • Anticipating edge cases and attack vectors
  • Writing explainable code
  • Building fail-safes and audit trails
  • Staying informed about the philosophical and political dimensions of AI

In short: We may no longer write all the code, but we are still responsible for what the code does.
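One item on the checklist above, building audit trails, can start very small. The sketch below (a hypothetical scheme, not something prescribed in the talk) wraps a model-inference function so that every decision is appended to a JSON-lines log, giving humans a trace to consult when a "pure machine" decision needs debugging:

```python
import json
import time
from functools import wraps

def audited(fn, log_path="model_audit.jsonl"):
    """Wrap an inference function so each call is appended to a
    JSON-lines audit log (illustrative scheme, not a standard)."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        record = {
            "ts": time.time(),
            "fn": fn.__name__,
            "inputs": repr((args, kwargs)),
            "output": repr(result),
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return result
    return wrapper

@audited
def classify(features):
    # Stand-in for a real model call.
    return "stop_sign" if sum(features) > 1.0 else "background"

print(classify([0.7, 0.6]))  # the decision is also written to the log
```

A real system would log model versions, input hashes, and confidence scores rather than raw `repr` strings, but the principle is the same: if we are responsible for what the code does, we need a record of what it did.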

Final Thoughts: Ask Better Questions

Yossi Sassi’s final message resonated deeply:

“Computers are getting better and better at answers, but humans still ask better questions.”

The future of AI will be shaped not by the power of machines alone, but by the curiosity, responsibility, and clarity of those who design, question, and safeguard them.

As we step into this AI-augmented future, we must remember: the real superpower is not intelligence – it’s understanding.