Riot Games and Ubisoft team up to tackle toxicity in gaming.
Riot Games and Ubisoft are working together to develop artificial intelligence (AI) tools that can identify and curb toxic conduct in online games. Toxicity is a significant problem in gaming, and the sense of anonymity players feel behind a screen frequently encourages bad behavior. Despite Riot Games’ years-long efforts to fight toxicity, many players continue to share stories on social media about people who ruin their matches.
To better train AI-based moderation tools that detect and address disruptive conduct in games, Riot and Ubisoft have formed a research partnership. The two companies are building a shared database of in-game chat data; any user-identifying information that could expose players to privacy risks will be removed.
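Neither company has published the details of its anonymization pipeline, but the general idea of stripping user-identifying information from chat logs before they enter a training dataset can be sketched as follows. This is purely illustrative; the patterns and placeholder names are assumptions, not Riot’s or Ubisoft’s actual method.

```python
import re

# Illustrative sketch only: redact obvious identifiers (emails, @mentions,
# long numeric account IDs) from a chat line before it enters a shared
# training dataset. Real pipelines would be far more thorough.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),  # email addresses
    (re.compile(r"@\w+"), "<MENTION>"),                   # @username mentions
    (re.compile(r"\b\d{6,}\b"), "<ID>"),                  # long numeric IDs
]

def anonymize(line: str) -> str:
    """Replace user-identifying tokens with neutral placeholders."""
    for pattern, placeholder in PATTERNS:
        line = pattern.sub(placeholder, line)
    return line

print(anonymize("report @Toxic99, he said mail me at foo@bar.com, id 12345678"))
```

The email pattern runs before the @-mention pattern so that an address is replaced whole rather than being partially matched as a mention.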
The research initiative, undertaken jointly by Riot Games and Ubisoft, is called “Zero Harm in Comms.” All gamers, not just those who play Riot or Ubisoft titles, are expected to benefit from it.
According to both companies, the gaming industry must “communicate, collaborate, and make cooperative efforts” to improve the social dynamics of online games. Thanks to Ubisoft’s extensive catalog of popular games and Riot’s highly competitive titles, the resulting database should cover a wide range of players and use cases, helping train AI systems to detect and mitigate harmful behavior.
The results of the collaboration will be made public next year. Riot Games has already begun deploying tools that help spot disruptive behavior across its games, and in the coming months it will extend its toxicity-monitoring tools to voice chat in League of Legends and Valorant.