From self-driving cars to robotic bartenders, AI has made exciting progress over the past decade. Now, Google claims that its in-house engineers are teaching artificial intelligence (AI) to predict the future.
According to the internet giant, its DeepMind Technologies division is working on giving its AI algorithms an imagination. While there won’t be Minority Report-style revelations, its AI agents may be able to start predicting how scenarios will play out. The research has already been published in two papers and has made big ripples in the scientific community.
“When placing a glass on the edge of a table, for example, we will likely pause to consider how stable it is and whether it might fall,” explains a DeepMind scientist in a recent blog post. “If our algorithms are to develop equally sophisticated behaviours, they too must have the capability to ‘imagine’ and reason about the future.”
AI trained to plan and predict
Created to “solve intelligence” and “use it to make the world a better place,” DeepMind Technologies is one of Google’s most exciting forays. It made headlines last year when its AlphaGo narrow AI computer program defeated some of the world’s best players at Go, the ancient Chinese board game. The company touted AlphaGo as a perfect example of how AI agents can be trained to plan and predict the future. Of course, it could only operate within the context of the game, which spurred scientists to apply the concept to real-world examples.
Introducing “imagination-augmented agents”
This sparked the creation of I2As, aka imagination-augmented agents. They’re designed with a neural network that learns to extract information useful for the decision-making process. They’re capable of weighing multiple imagined possibilities for a particular task, while simultaneously learning different strategies for constructing the best plans.
AI let loose on Sokoban
DeepMind tested its theory on the popular Japanese puzzle game Sokoban, in which players push boxes onto target squares, a task that demands forward planning because a careless move can be irreversible, as well as on a separate spaceship navigation game. The results were impressive, with DeepMind noting that “for both tasks, the imagination-augmented agents outperform the imagination-less baselines considerably.” According to the company, the agents “learn with less experience and are able to deal with the imperfections in modelling the environment.”
So what’s the next step? DeepMind has revealed that it plans to scale up the idea to other problems, and design AI agents that can use their imaginations to plan for a host of other scenarios.
From manual labour to quantum computing, AI is rapidly making its mark. Eventually it could play an important role in laboratory science, including food safety. For a glimpse at the latest industry developments, ‘How Safe is Safe? Analytical Tools for Tracing Contaminants in Food’ spotlights the latest analytical instrumentation technologies and methods used to guarantee food safety, alongside consumer, animal and plant protection.