Google Built A Learning Computer That Teaches Itself How To Play Games


If you’ve ever seen the 1983 Matthew Broderick film WarGames, you’ll know that playing games against a computer can have the unwanted side effect of nearly starting World War III. Google employees must not watch a lot of movies, because they’ve taken that premise a few steps further by building a learning computer that teaches itself how to play games. If that weren’t enough, it gets better at these games while it plays.

Thankfully, Google is only teaching it to play old Atari 2600 games at the moment – none of which include Global Thermonuclear War.

Maybe a good game of Chess really would be better…

The New York Times explains some of the results the researchers found:

The Google program successfully taught itself the rules to 49 Atari 2600 computer games from the 1980s, eventually figuring out strategies for victory. This included figuring out navigation, actions and positive outcomes, then using these for improved outcomes.

In 43 of the games, which included such classics as Space Invaders and Breakout, the D.Q.N. (deep Q-network) outperformed previous computational efforts to win, the paper said. In more than half the games, the new system could eventually play at least three-quarters as well as a professional human games tester.
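For the curious, here’s a rough sense of what “figuring out positive outcomes” means in reinforcement-learning terms. The sketch below shows plain tabular Q-learning, the classic algorithm that DQN builds on; the actions, parameter values, and function names are invented for illustration and have nothing to do with Google’s actual code.

```python
# A minimal sketch of tabular Q-learning, the idea DQN generalizes by
# swapping the lookup table for a deep neural network. Illustrative only:
# the actions and parameters below are stand-ins, not Google's code.
import random
from collections import defaultdict

ALPHA = 0.1    # learning rate: how fast new experience overwrites old estimates
GAMMA = 0.99   # discount factor: how much future reward matters
EPSILON = 0.1  # exploration rate: chance of trying a random action

ACTIONS = ["left", "right", "fire"]  # hypothetical Atari-style controls
q_values = defaultdict(float)        # maps (state, action) -> estimated value

def choose_action(state):
    """Usually pick the best-known action, occasionally explore at random."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_values[(state, a)])

def learn(state, action, reward, next_state):
    """Nudge the estimate for (state, action) toward the observed reward
    plus the best value we currently expect from the next state."""
    best_next = max(q_values[(next_state, a)] for a in ACTIONS)
    target = reward + GAMMA * best_next
    q_values[(state, action)] += ALPHA * (target - q_values[(state, action)])
```

DQN’s twist is that instead of a lookup table, a deep neural network estimates those values straight from the raw screen pixels, which is what lets the same basic recipe work across dozens of very different games.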

In some instances, the computer almost seemed to learn how to cheat. The New York Times also describes the results from Seaquest, where the computer learned to simply keep its submarine at the top of the screen for the entire game in order to win.

The short-term goal is for this technology to help improve search results and language translation, both of which Google has a bit of experience with. Further down the road, though, the hope is to improve this type of learning to the point where it can be used in a pair of Google’s other side projects: robots and self-driving cars.

If that sounds like a recipe for disaster, take comfort in the fact that this technology is still years away from gaining the same basic sorts of knowledge as a typical human. Again from The New York Times:

…the new findings also underline how far artificial intelligence research is from developing human-type intelligence. D.Q.N. is not capable of developing conceptual knowledge, like what a submarine actually is, or transferring what it learned from one game to excel at another.

We humans, Mr. Hassabis noted, “have prior knowledge that we bring from the real world.” To master abstractions or conceptual thought, he said, “we suspect it will require some new kind of algorithm.”

Let’s just refrain from calling this program WOPR, or giving it access to any sort of launch codes, like CPE1704TKS (I’d really like to hope that launch codes are more complex than that nowadays). If this research speeds us towards a future of better self-driving cars, or robots that can help us perform dangerous tasks, I’m all for it. But if I wake up one morning to find Skynet coming online, I’m going to be particularly upset with Google.

Tell us what you think! Great technology, or the first step towards our robot overlords?  Let us know in the comments or on your favorite social network.

Source: The New York Times – http://bits.blogs.nytimes.com/2015/02/25/a-google-computer-can-teach-itself-games/

