Before sophisticated AIs take over the world with their robotic underlings, they may help us stop cheaters in our favorite games. In the case of Overwatch, Blizzard has begun leveraging its own in-house AI to track trolls in the popular FPS, using machine learning to teach the system what toxic behaviour looks like.
"We’ve been experimenting with machine learning," Overwatch game director, Jeff Kaplan, told Kotaku during an interview at Blizzard’s offices. "We’ve been trying to teach our games what toxic language is, which is kinda fun. The thinking there is you don’t have to wait for a report to determine that something’s toxic. Our goal is to get it so you don’t have to wait for a report to happen."
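Kaplan doesn't describe how Blizzard's system actually works, but the core idea of "teaching" a program what toxic language is can be sketched with a toy bag-of-words classifier. Everything below is illustrative: the training phrases, function names, and scoring scheme are assumptions, not Blizzard's method, and a real system would train on millions of labelled chat lines rather than six.

```python
import math
from collections import Counter

# Toy labelled chat lines (purely illustrative).
TOXIC = ["you are trash uninstall", "worst player ever quit", "trash team report him"]
CLEAN = ["good game well played", "nice shot team", "group up on the point"]

def train(toxic, clean):
    """Count word frequencies per class, Naive Bayes style."""
    tox_counts = Counter(w for line in toxic for w in line.split())
    cln_counts = Counter(w for line in clean for w in line.split())
    vocab = set(tox_counts) | set(cln_counts)
    return tox_counts, cln_counts, vocab

def toxicity_score(message, tox_counts, cln_counts, vocab):
    """Log-odds that a message is toxic, with Laplace smoothing.

    Positive score -> leans toxic; negative -> leans clean.
    """
    tox_total = sum(tox_counts.values())
    cln_total = sum(cln_counts.values())
    score = 0.0
    for w in message.split():
        p_tox = (tox_counts[w] + 1) / (tox_total + len(vocab))
        p_cln = (cln_counts[w] + 1) / (cln_total + len(vocab))
        score += math.log(p_tox / p_cln)
    return score

tox, cln, vocab = train(TOXIC, CLEAN)
toxicity_score("trash player quit", tox, cln, vocab)      # > 0, leans toxic
toxicity_score("well played nice shot", tox, cln, vocab)  # < 0, leans clean
```

The point Kaplan is making maps onto the last two lines: once trained, the model can score any new message on its own, with no player report required to trigger a review.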
Overwatch's report system has been much maligned by the gaming community in the past, but Blizzard has taken firm steps to make it more functional over the past few months. Indeed, its usefulness appears to have improved dramatically, and gamers are now using it up to 20 percent more, according to Kaplan. That in turn has cut insulting language in games by 17 percent, a notable figure.
Kaplan believes that machine learning can take things a step further, though, highlighting trolls and in-game misbehaviour without the need for anyone to report them. Indeed, it could be even more accurate, as there would be fewer false positives and no retaliatory counter-reports when a troll is called out on their behaviour.
"With anything involving reporting and player punishments, you want to start with the most extreme cases and then find your way to ease up after that," he cautioned, though, suggesting that teaching an AI what toxic gameplay looks like is difficult. How do you distinguish legitimate tough play from someone being an ass in a competitive game? It's not always easy for humans to tell the difference, so an AI may find it even harder.
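Kaplan's "start with the most extreme cases, then ease up" approach reads like a confidence-threshold policy: act only when the model is very sure, then lower the bar once the false-positive rate is understood. The sketch below is a hypothetical illustration of that idea; the threshold values and names are invented for the example, not anything Blizzard has described.

```python
# Hypothetical moderation policy: a score from 0.0 (clean) to 1.0 (certainly
# toxic), and a threshold that starts strict and is eased over time.
def should_action(toxicity_score: float, threshold: float) -> bool:
    """Flag a player for action only when the model is confident enough."""
    return toxicity_score >= threshold

LAUNCH_THRESHOLD = 0.99  # at launch: only near-certain, extreme cases
EASED_THRESHOLD = 0.90   # later: ease up once false positives prove rare

should_action(0.995, LAUNCH_THRESHOLD)  # extreme case, flagged immediately
should_action(0.95, LAUNCH_THRESHOLD)   # ambiguous tough play, left alone at first
should_action(0.95, EASED_THRESHOLD)    # caught later, after the bar is lowered
```

Starting strict means ambiguous cases, such as legitimately aggressive but fair play, are left alone until the system has earned enough trust to judge them.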
Blizzard is working on it though.
How do you guys feel about an AI keeping an eye on your play?