The Potential Of Artificial Intelligence In The Future Of Sports


Another human has lost to the descendants of Deep Blue. First, world champion Garry Kasparov lost to IBM’s Deep Blue supercomputer in chess in 1997. Then Jeopardy’s all-time consecutive show winner Ken Jennings succumbed to IBM’s Watson computer system in 2010. Recently, Go’s “best player” Ke Jie was soundly defeated by AlphaGo, a computer system developed by DeepMind.

Ke’s loss to AlphaGo seemed like a particularly tough defeat for humans because Go seemed potentially too complex for a computer to master. As context, the number of possible board configurations in Go is about 10^170, while there are only approximately 10^80 atoms in the entire observable universe.

Yet, Ke was still soundly defeated in a recent match. As the Economist explains (referencing an earlier match with another human Go champion), “Until Mr. Lee’s defeat, Go’s complexity had made it resistant to the march of machinery. AlphaGo’s victory was an eye-catching demonstration of the power of a type of AI called machine learning, which aims to get computers to teach complicated tasks to themselves.”

What may be particularly fear-inducing to humans is that AlphaGo actually taught itself how to beat Ke rather than relying on humans to train it. What does that mean in practice? This version of AlphaGo was not the first DeepMind machine built for the task.


The original AlphaGo studied thousands of examples of human games, a process called supervised learning. Since human play reflects human understanding of such concepts, a computer exposed to enough of it can come to understand those concepts as well. Once AlphaGo had arrived at a decent grasp of tactics and strategy with the help of its human teachers, it kicked away its crutches and began playing millions of unsupervised training games against itself, improving its play with every game.
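The supervised stage can be pictured with a toy sketch. Instead of a deep neural network, imagine simply tallying which move human players most often chose in each position and imitating the majority choice. Everything below — the positions, the moves, the data — is invented for illustration; AlphaGo’s actual training used deep networks over millions of real game records.

```python
from collections import Counter, defaultdict

# Toy "supervised learning": learn a move policy by counting which move
# human players chose most often in each observed position.
# (Hypothetical data; AlphaGo actually used deep neural networks.)

def train_policy(game_records):
    """game_records: iterable of (position, human_move) pairs."""
    counts = defaultdict(Counter)
    for position, move in game_records:
        counts[position][move] += 1
    # The learned policy imitates the most common human choice per position.
    return {pos: moves.most_common(1)[0][0] for pos, moves in counts.items()}

# Hypothetical training examples: in an open-corner position humans
# usually play the "3-3" point; in an open center, the "tengen" point.
records = [
    ("corner-open", "3-3"), ("corner-open", "3-3"), ("corner-open", "4-4"),
    ("center-open", "tengen"), ("center-open", "tengen"),
]
policy = train_policy(records)
print(policy["corner-open"])  # the majority human move in that position
```

Because a policy like this can only echo the most common human choice, it inherits human habits — which is exactly the limitation the next version of AlphaGo removed.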

The newer AlphaGo, by contrast, did not train using human teachers. The DeepMind researchers created a “reward function” that told the machine what goal it was trying to achieve. The computer then experimented with different moves to determine the best strategy in each game. In only two days, AlphaGo learned to far outperform its earlier versions — and to vastly outperform the best human Go players.
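The reward-function idea can be sketched with a far simpler game than Go. The toy self-play learner below plays a take-1-or-2-stones game where whoever takes the last stone wins; the reward function is simply +1 for winning and -1 for losing, and the program discovers good moves purely by experimenting against itself. The tabular update here is an illustrative stand-in — AlphaGo’s real training combined deep networks with tree search.

```python
import random

# Toy reinforcement learning via self-play. Game: a pile of stones;
# players alternate taking 1 or 2; whoever takes the last stone wins.
# The "reward function" is +1 for a win, -1 for a loss.

random.seed(0)
Q = {}  # Q[(pile, action)] = estimated value of taking `action` stones

def best_action(pile, eps):
    actions = [a for a in (1, 2) if a <= pile]
    if random.random() < eps:          # occasionally explore a random move
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((pile, a), 0.0))

def update(history, winner):
    # Nudge every move made by the winner toward +1, the loser's toward -1.
    for player, pile, action in history:
        reward = 1.0 if player == winner else -1.0
        key = (pile, action)
        Q[key] = Q.get(key, 0.0) + 0.1 * (reward - Q.get(key, 0.0))

for episode in range(20000):           # self-play training games
    pile, player, history = 5, 0, []
    while pile > 0:
        action = best_action(pile, eps=0.2)
        history.append((player, pile, action))
        pile -= action
        if pile == 0:
            update(history, winner=player)
        player = 1 - player

# Faced with a pile of 4, the trained policy takes 1 stone, leaving the
# opponent a losing pile of 3 — a tactic no one ever taught it.
print(best_action(4, eps=0.0))
```

Note that no human strategy ever enters the program: good play emerges entirely from the reward signal, which is the essential point of the passage above.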

Perhaps the most “devastated” person in this whole experience would be Ke. He was the best human player of Go, and it was increasingly clear he would never be better than AlphaGo. Instead of being discouraged, Ke did the opposite: he studied what the machine was doing and applied those lessons to his own game. Because the machine did not learn to play by studying humans, it could “see” the game in ways humans could not. Ke went on to a 22-match winning streak against the world’s best human competition.

This is not the first time that big data analysis has produced insights humans were unlikely to develop themselves. The most well-known example involves defensive shifts in baseball. In his book Big Data Baseball: Math, Miracles, and the End of a 20-Year Losing Streak, Travis Sawchik explains how a team used something completely counter-intuitive to human thinking to gain a strategic advantage. The Pittsburgh Pirates would shift their infielders to one side of the field because their analysis of hitting patterns found that many hitters overwhelmingly hit to that side. The result was more ground ball outs and fewer runs scored by opposing teams.

The problem is that this strategy left large portions of the field uncovered. To humans, that seems absurd. Why not cover all parts of the field with your players? Why would hitters not simply adapt their swings and hit where there were no fielders? A manager employing this strategy risks looking very stupid. But for a variety of reasons, including the speed and spin of pitches at the Major League level, it is very difficult for hitters to change their natural swing. Therefore, Pittsburgh’s seemingly foolish strategy worked surprisingly well.
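The spray-chart logic behind the shift can be sketched in a few lines: count where a hitter’s ground balls go, and recommend a shift when one side of the field dominates. The data and the 70% threshold below are hypothetical; real front offices use far richer batted-ball models.

```python
from collections import Counter

# Toy spray-chart analysis: recommend an infield shift when a hitter
# pulls ground balls at or above a chosen rate. (Hypothetical data and
# threshold, for illustration only.)

def shift_recommendation(ground_balls, threshold=0.7):
    """ground_balls: list of 'pull' or 'opposite' field locations."""
    counts = Counter(ground_balls)
    pull_rate = counts["pull"] / sum(counts.values())
    alignment = "shift" if pull_rate >= threshold else "standard"
    return alignment, pull_rate

# A hypothetical pull-heavy hitter: 17 of 20 ground balls to the pull side.
batted = ["pull"] * 17 + ["opposite"] * 3
alignment, rate = shift_recommendation(batted)
print(alignment, rate)  # 17/20 = 0.85 pull rate, so shift
```

The counter-intuitive part is not the arithmetic — it is trusting a plan that deliberately leaves part of the field empty because the numbers say hitters rarely go there.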

The difference between the two discoveries is that humans spent hundreds of hours analyzing big data to determine that field shifting would benefit the Pirates, while AlphaGo determined the Go equivalent of field shifting on its own in two days. This is actually good news for humans. Companies are constantly looking for innovation and new insights from big data and technology. The concept of “disruption” is built on the idea that humans will discover the next big innovation that can provide companies (or sports organizations) with the next big competitive advantage.

What AlphaGo shows is that humans can now learn disruptive practices from machines rather than machines “waiting” for disruptive ideas to come from humans. Computers can think “outside of the box” (even though the machines are often contained in boxes) because they do not think like humans. Humans can and should leverage the power of machines to come up with disruptive ideas to gain competitive advantages.


Adam Grossman is the CEO and Founder of Block Six Analytics (B6A). In addition, he is a lecturer for Northwestern University’s Masters of Sports Administration. He is the co-author of The Sports Strategist: Developing Leaders For A High-Performance Industry. His work has been featured in publications including Forbes, The Washington Post, The Chicago Tribune, and Comcast SportsNet Chicago.