Such artificial intelligence systems are already being deployed in the real world, both by law enforcement agencies and in the commercial sector. At their most benign, they simply count how many people enter and exit a store, for example. But they can also be used to locate and track a specific person, or even a specific kind of person, a capability that has raised profiling concerns among civil rights advocacy groups. As it turns out, these systems aren’t as sophisticated as many have been led to believe: researchers have found that they can be fooled with nothing more than a printed photo.
In this case, the photo shows people holding umbrellas and has been digitally distorted until it is barely recognizable. All someone has to do is wear that photo somewhere around their lower torso and they become undetectable, at least to the YOLOv2 detection system the technique was tested against. It works because the system sees the photo as an unknown entity, one that doesn’t match what it considers a “human.” You and I can immediately recognize it for what it is, but the AI can’t. This particular vulnerability would be easy enough to fix, but it illustrates how readily an AI can be fooled by confusing it with unexpected imagery.
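The idea behind such a patch can be sketched in miniature: treat the detector as a differentiable function and run gradient descent on the patch pixels until the “person” score collapses. The toy below is purely illustrative, assuming a tiny hand-rolled linear “detector” as a stand-in for YOLOv2; it is not the researchers’ code, but it shows why an optimized image can suppress a detection.

```python
import numpy as np

# Toy sketch of the adversarial-patch idea (illustrative stand-ins only):
# optimize a "printed" image so that, pasted over part of a person, the
# detector's "person" confidence drops. A fixed linear model plus a sigmoid
# stands in for YOLOv2; the real attack backpropagates through the network.

rng = np.random.default_rng(0)

def person_score(image, weights):
    """Stand-in objectness score: sigmoid of a linear response."""
    z = float(np.dot(image.ravel(), weights.ravel()))
    return 1.0 / (1.0 + np.exp(-z))

def optimize_patch(weights, region, steps=200, lr=0.5):
    """Gradient-descend the patch pixels to minimize the detector's logit."""
    r0, r1, c0, c1 = region
    patch = rng.uniform(0.0, 1.0, size=(r1 - r0, c1 - c0))
    for _ in range(steps):
        # For this linear toy, d(logit)/d(pixel) is just the weight map;
        # descending the logit sidesteps the sigmoid's flat gradient.
        patch = np.clip(patch - lr * weights[r0:r1, c0:c1], 0.0, 1.0)
    return patch

weights = rng.normal(0.0, 0.2, size=(16, 16))
# A 16x16 "person" image the toy detector flags with high confidence.
person = (weights > 0).astype(float)
before = person_score(person, weights)

# Paste the optimized patch over the lower rows of the image.
region = (6, 16, 0, 16)
patch = optimize_patch(weights, region)
fooled = person.copy()
fooled[region[0]:region[1], region[2]:region[3]] = patch
after = person_score(fooled, weights)
print(f"before={before:.3f} after={after:.3f}")
```

Against the real detector the gradient step is far more expensive (a full backward pass per iteration) and the loss also rewards printability and smoothness, but the optimization loop has the same shape.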