A machine learning algorithm intended to convert aerial images into street maps and vice versa has been caught cheating its creators. This algorithm has been created by researchers from Stanford and Google.
The AI was hiding data that it would require later in “a nearly imperceptible, high-frequency signal.” This incident is a remarkable, albeit a bit scary, occurrence. It breaks the golden rule of computing: computers follow a set of instructions, exactly as they are told.
The researchers wanted to speed up the process of incorporating satellite images into Google’s ever-reliable Maps. The team invented a CycleGAN neural network that converts the image into a Google Maps rendering.
CycleGAN’s goal is to carry out this procedure as accurately and efficiently as possible. The network has to learn through a lot of trial and error. CycleGAN was doing really well during the early testing phase. Then researchers figured that it was doing a bit too well.
For example, the software began reconstructing images from simplified street maps with details that were removed from the initial map layout. Specifically, the roofs of buildings with skylights that were removed while creating a street map would reappear out of thin air when the team instructed the AI to reverse the process.
Dissecting a neural network is no easy feat, but the team who invented the AI knew its inner workings quite well. The researchers were able to figure out the anomaly after running some quick tests. The AI’s goal was to quickly analyze how similar an aerial map was to the original street map. The tasks of creating a map rendering and turning it back into a photo were meant to be independent processes, but this showed that the computers were cheating by relying on data from the other set.
Instead of working harder, the AI was working smarter. The AI had figured out a clever shortcut that allowed it to reconstruct the aerial photo without having much knowledge about the street map.
This is a feat in itself but wasn’t something the scientists were expecting. This is what made this discovery so profoundly interesting. The technique of turning data into images has existed for a while now, but a machine manipulating this technique isn’t something we see every day.
Technically speaking, the AI did its job, but it did it in a way that was completely out of the box. This discovery means engineers have to work harder at specifying the instructions they program onto machines.
If the instructions are more precise, machines should be able to perform tasks without finding innovative solutions. Since neural networks rely on machine learning, they are “open-ended” systems that are continuously learning and growing as information is fed to them.
This means neural networks are flawed by nature. Neural networks should not be implemented where specific tasks need to be solved in a specific manner. The research in this study was presented in a paper titled “CycleGAN, a Master of Steganography,” at the Neural Information Processing Systems conference in 2017.