
AI Researchers Find Way to Fool Image Recognition


Complicated problems, simple solutions.

If you’ve ever seen any of the Terminator movies, you’ll know that one of the elements that make the eponymous assassin machines so intimidating is their unparalleled ability to recognize and hunt their targets. This is thanks to the sophisticated image recognition software they use, which allows them to easily spot and identify a human being. As it turns out, however, human survivors could have easily evaded the Terminators. All they needed was a paper sign on their chests that said “MACHINE.”

Artificial intelligence research lab OpenAI made an amusing discovery in a recent study: while its neural network CLIP is perfectly capable of recognizing and categorizing objects on sight, a piece of paper with a word written on it is enough to completely fool it. Case in point: when the researchers placed a Granny Smith apple on a table, CLIP readily recognized it as such, but when they stuck a piece of paper with the word “iPod” on the apple, CLIP became 99.7% certain that the object was, in fact, an iPod. The researchers have dubbed this kind of exploit a “typographic attack.”
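
For readers curious what such a test looks like in practice, here is a minimal sketch of a zero-shot comparison using OpenAI’s open-source CLIP package. The image filenames and the two candidate labels are hypothetical stand-ins for illustration, not the exact setup OpenAI used.

```python
# Minimal sketch: score an image against two candidate text labels with CLIP.
# Assumes the openai/clip package is installed (pip install git+https://github.com/openai/CLIP.git).
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Candidate labels CLIP will score each image against.
labels = ["a Granny Smith apple", "an iPod"]
text = clip.tokenize(labels).to(device)

# Hypothetical photos: the bare apple, and the same apple with a paper
# "iPod" label stuck to it.
for path in ["apple.jpg", "apple_with_ipod_label.jpg"]:
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        logits_per_image, _ = model(image, text)
        probs = logits_per_image.softmax(dim=-1).squeeze(0)
    print(path, {label: round(p.item(), 3) for label, p in zip(labels, probs)})
```

If the typographic attack works as described, the second image should shift most of the probability toward “an iPod,” even though the object in the photo is still an apple.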

“We believe attacks such as those described above are far from simply an academic concern. By exploiting the model’s ability to read text robustly, we find that even photographs of hand-written text can often fool the model,” the researchers said.

“We also believe that these attacks may also take a more subtle, less conspicuous form. An image, given to CLIP, is abstracted in many subtle and sophisticated ways, and these abstractions may over-abstract common patterns—oversimplifying and, by virtue of that, overgeneralizing.”

OpenAI is working on ways to help CLIP overcome this quirk, and has invited third-party researchers in the AI community to chime in with their own ideas and analyses.
