An AI built to fight toxicity in online gaming by analysing text chat banned 20,000 Counter-Strike: Global Offensive players in its first six weeks.
The AI, called Minerva, was created by FACEIT, the online gaming platform that hosted 2018's CS:GO Major in London, in collaboration with Google Cloud and Jigsaw, a technology incubator owned by Google. In late August, Minerva began examining messages in CS:GO chat, flagging 7,000,000 messages as toxic in its first month and a half, issuing 90,000 warnings, and banning 20,000 players.
Minerva was first trained using machine learning to flag verbal abuse and spam. When it perceived a toxic message, it sent a warning or ban notice to the offending player within a couple of seconds of the match ending, with harsher penalties for repeat offenders.
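The escalation described above (a classifier flags a message, a first offence draws a warning, and repeat offences draw harsher penalties) can be sketched roughly as follows. This is an illustrative toy, not FACEIT's system: the word-list "classifier", the threshold, and the penalty ladder are all assumptions standing in for Minerva's trained model and undisclosed policy.

```python
# Hypothetical sketch of a Minerva-style moderation loop.
# The word list, threshold, and penalty ladder are illustrative
# assumptions, not FACEIT's actual model or policy.
from dataclasses import dataclass

TOXIC_WORDS = {"noob", "trash", "idiot"}  # stand-in for a trained classifier


def toxicity_score(message: str) -> float:
    """Toy classifier: fraction of words that look toxic."""
    words = message.lower().split()
    if not words:
        return 0.0
    return sum(w in TOXIC_WORDS for w in words) / len(words)


@dataclass
class Player:
    name: str
    strikes: int = 0


# Escalating penalties for repeat offenders (assumed ladder).
PENALTIES = ["warning", "24h ban", "7d ban", "permanent ban"]


def moderate(player: Player, message: str, threshold: float = 0.3):
    """Return the penalty issued for this message, or None if clean."""
    if toxicity_score(message) < threshold:
        return None
    penalty = PENALTIES[min(player.strikes, len(PENALTIES) - 1)]
    player.strikes += 1  # repeat offences climb the ladder
    return penalty
```

For example, a clean message returns no penalty, a first toxic message draws a warning, and the next one a temporary ban; the real system would replace the word list with a learned model scoring messages in near real time.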
Between August and September, the number of toxic messages decreased by 20 percent, and the number of players who received toxic messages decreased by eight percent.
The trial went live after months of refinement to eliminate false positives, and FACEIT describes chat moderation as only the first step in Minerva's growth. "In-game chat identification is just the first and most basic of the implementations of Minerva and more of a case study that acts as a first step toward our dream of this AI," says FACEIT in a blog post. "We are excited about this foundation because it is a strong basis that will enable us to improve Minerva until, in the coming weeks, we finally detect and address all sorts of abusive behaviours in real-time."