
Deep Learning: Einstein or Savant?


Artificial intelligence is one of the hottest topics in analytics today. Currently the most popular member of the AI family, deep learning is solving some very difficult problems very well. Best known for image recognition, it is now being applied to a wide range of other problems.

Given the success of the approach, it is easy to forgive people for thinking that deep learning is incredibly intelligent. However, once you dig into deep learning, you’ll find that rather than being a generally brilliant algorithm akin to Albert Einstein, it is much more like a savant, such as the famous movie character from Rain Man. In other words, a deep learning process is really smart at a specific task or two, but not smart at all at anything else.

GENERAL VERSUS SPECIFIC INTELLIGENCE

First, let me define what I mean by general intelligence (i.e., Einstein) and specific intelligence (i.e., Rain Man). Einstein was a brilliant man who was able to solve a wide range of problems. More importantly, he was also able to identify entirely new problems to solve and entirely new ways to solve those new problems. That’s really smart! His gift was the ability to guide himself through problem formulation and solution at a level that 99.9999999% of humans simply aren’t capable of.

So what does specific intelligence mean? Savants like Rain Man have very low mental performance in most areas. However, they also have unusually high performance at a very small number of things. For example, Rain Man could do math problems in his head better than 99.9999999% of people, probably even better than Einstein. But outside of that narrow area, he was low-functioning. In other words, his extreme intelligence was very specific.

DEEP LEARNING CAN LEARN TO PLAY PONG!

Let’s use Pong to illustrate why deep learning is more a savant than an Einstein. One really cool application of deep learning was teaching it to play the video game Pong. Watching the model at work hitting the ball again and again, you’ll initially think, “Boy, that is really smart!” However, it isn’t as smart as you might expect.

Watch the video of the Pong-playing algorithm and you’ll notice how jerky the movement of the paddle is. While a person would identify where the ball will end up and move there fairly smoothly, the algorithm eventually gets to the right spot, but through a somewhat random set of moves that is biased toward the correct end point.

DEEP LEARNING’S BRUTE FORCE APPROACH

At its core, deep learning uses a brute force method to gain the appearance of generalized intelligence. The algorithm doesn’t really know how to play Pong in the way we think of it. It doesn’t know there is a ball. It doesn’t even know what a ball is. It has no idea that the paddle is a paddle or that it will “hit” the ball. It really doesn’t understand anything about Pong at all. Yet, it appears smart. How is that?

The algorithm is simply analyzing individual frames of the game and studying the pattern of the pixels. It is told only that it can move up or move down (it doesn’t understand the concept of “up” or “down” either; it just sees two choices). A “correct” move is one that prevents a “negative” pixel pattern from appearing in the near future: in this case, the ball’s circle of pixels passing beyond its paddle’s pixels. So, the algorithm starts randomly moving up and down. Over millions or billions of iterations, it lucks into identifying the right moves based on the pixel patterns.

That’s why the paddle is so jerky. The algorithm really doesn’t understand that it needs to get to a certain spot based on the ball’s trajectory. Rather, it has simply learned that for certain pixel patterns moving up is better than moving down. It will eventually get to the right spot to hit the ball, but with a lot of random movements along the way.
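To make that concrete, below is a minimal, hypothetical sketch of the kind of policy-gradient loop used in “learning Pong from pixels” demos. Everything here is an illustrative assumption rather than the actual setup behind the video: the tiny two-layer network, the 80x80 frame size, the hyperparameters, and especially the random arrays standing in for real game frames. The point is simply that the agent only ever maps raw pixel patterns to a probability of moving up, explores at random, and reinforces whatever choices happened to precede a reward.

```python
import numpy as np

rng = np.random.default_rng(0)

H, D = 64, 80 * 80                               # hidden units; flattened 80x80 frame
W1 = rng.standard_normal((H, D)) / np.sqrt(D)    # pixels -> hidden layer
W2 = rng.standard_normal(H) / np.sqrt(H)         # hidden -> "probability of UP"

def policy_forward(x):
    """Map a flattened frame to P(move up)."""
    h = np.maximum(0.0, W1 @ x)                  # ReLU hidden activations
    p_up = 1.0 / (1.0 + np.exp(-(W2 @ h)))       # sigmoid output
    return p_up, h

def discount_rewards(r, gamma=0.99):
    """Spread each point won or lost back over the moves that preceded it."""
    out, running = np.zeros_like(r), 0.0
    for t in reversed(range(len(r))):
        if r[t] != 0:
            running = 0.0                        # a point ends the rally; reset
        running = running * gamma + r[t]
        out[t] = running
    return out

# One (fake) episode: the agent never "sees" a ball or a paddle, only pixel
# values, its own up/down choices, and an occasional +1/-1 reward.
frames = rng.random((200, D))                    # stand-in for real game frames
xs, hs, grads, rewards = [], [], [], []
for x in frames:
    p_up, h = policy_forward(x)
    go_up = rng.random() < p_up                  # sample an action: pure exploration at first
    xs.append(x)
    hs.append(h)
    grads.append(float(go_up) - p_up)            # d(log-prob of chosen action)/d(logit)
    rewards.append(0.0)
rewards[-1] = 1.0                                # pretend the rally was eventually won

# REINFORCE-style update: reinforce whatever choices preceded the reward.
adv = discount_rewards(np.array(rewards))
adv = (adv - adv.mean()) / (adv.std() + 1e-8)    # normalize for stability
weighted = np.array(grads) * adv                 # per-step gradient weights
dW2 = np.array(hs).T @ weighted
dh = np.outer(weighted, W2) * (np.array(hs) > 0) # backprop through the ReLU
dW1 = dh.T @ np.array(xs)
learning_rate = 1e-3
W1 += learning_rate * dW1                        # gradient ascent on expected reward
W2 += learning_rate * dW2
```

In a real training run, this same loop would repeat over millions of frames from an actual game, which is exactly the brute force described above: nowhere does the program represent a ball, a paddle, or a trajectory, only weights that make rewarded pixel patterns more likely to trigger “up.”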

DEEP LEARNING’S INTELLIGENCE IS HIGHLY SPECIFIC

Contrary to popular opinion, therefore, deep learning is much like Rain Man and not at all like Einstein. While deep learning can be directed at many problems, any given deep learning process will only do the exact problem it is directed to do. In that sense, every deep learning model is a savant. It can identify cats in pictures, play Pong, or identify cancer in a medical scan. But it can’t teach itself to do all of those things without being specifically directed by a human to do so.

More interestingly, even once it is really good at playing Pong or identifying a cat, the algorithm has no abstract understanding of what it is identifying or why. It has simply learned that certain patterns lead it to get rewarded more often for choosing “up” or “cat”. Even as the algorithm becomes amazingly good at its task, it does so without any broader context. It really isn’t as smart as we perceive.

WHAT DOES THIS MEAN FOR DEEP LEARNING’S FUTURE?

Don’t take my comments here to imply that deep learning won’t continue to have a huge impact across a range of analytics problems. In cases where we have a specific problem to solve, deep learning’s status as a savant can be exactly what we need. However, as we look toward the future goal of more generalized artificial intelligence, it is important to understand that we have a long way to go.

One day we may achieve an Einstein-quality generalized artificial intelligence engine. As of today, we can help our savant engine learn to solve some very relevant problems for us. Deep learning will continue to drive amazing results. Just don’t forget that it is Rain Man and not Einstein!