OK, so maybe we’re not one step closer to Skynet with this new MIT AI chip, but artificial-intelligence (AI) research is progressing at a decent rate. MIT researchers have introduced a new chip, dubbed Eyeriss, that implements neural networks and is ten times as efficient as a mobile GPU, making it ideal for running AI algorithms locally on a mobile device. With these algorithms, networked devices could make on-the-spot decisions without having to communicate with an Internet-based server. It’s all pretty amazing stuff.
“Deep learning is useful for many applications, such as object recognition, speech, face detection,” says Vivienne Sze, an assistant professor of electrical engineering at MIT whose group developed the new chip. “Right now, the networks are pretty complex and are mostly run on high-power GPUs. You can imagine that if you can bring that functionality to your cell phone or embedded devices, you could still operate even if you don’t have a Wi-Fi connection. You might also want to process locally for privacy reasons. Processing it on your phone also avoids any transmission latency, so that you can react much faster for certain applications.”
A neural network is typically organized into layers, and each layer contains a large number of processing nodes. Data come in and are divided up among the nodes in the bottom layer. Each node manipulates the data it receives and passes the results on to nodes in the next layer, which manipulate the data they receive and pass on the results, and so on. The output of the final layer yields the solution to some computational problem.
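To make the layered data flow above concrete, here’s a minimal sketch of a forward pass in plain Python. This is purely illustrative and not how Eyeriss works in hardware; the node structure (a weights/bias pair per node) and the sigmoid activation are assumptions for the example.

```python
import math

def forward(layers, inputs):
    """Propagate input values through a list of layers.

    Each layer is a list of nodes; each node is a (weights, bias) pair.
    Every node combines the values it receives from the previous layer
    and passes its result on to every node in the next layer.
    """
    values = inputs
    for layer in layers:
        values = [
            # weighted sum of the previous layer's outputs,
            # squashed by a sigmoid activation
            1.0 / (1.0 + math.exp(-(sum(w * v for w, v in zip(weights, values)) + bias)))
            for weights, bias in layer
        ]
    return values  # the final layer's output is the "solution"

# A toy network: 2 inputs -> 2 hidden nodes -> 1 output node
network = [
    [([0.5, -0.5], 0.0), ([1.0, 1.0], -1.0)],  # hidden layer
    [([1.0, -1.0], 0.0)],                      # output layer
]
result = forward(network, [1.0, 0.0])
```

Running many such multiply-accumulate operations per node is exactly the workload that usually lands on a high-power GPU, which is why a dedicated low-power chip for it is interesting.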
“This work is very important, showing how embedded processors for deep learning can provide power and performance optimizations that will bring these complex computations from the cloud to mobile devices,” says Mike Polley, a senior vice president at Samsung’s Mobile Processor Innovations Lab. “In addition to hardware considerations, the MIT paper also carefully considers how to make the embedded core useful to application developers by supporting industry-standard [network architectures] AlexNet and Caffe.”
MIT researchers had to design special circuits to make this chip work, so it goes without saying that this is a very specialized piece of hardware we’re talking about. MIT’s write-up on the new chip is extensive, and heading over to the source link will probably give you a better understanding of what the team is actually doing, along with more technical specifications.
What do you think of the new MIT AI chip? Let us know in the comments below or on Google+, Facebook and Twitter.

Source: MIT