Last week, Google announced the launch of Perspective, an API that helps online platforms and publishers host more civil conversations and keep away hateful trolls.

Many readers and publishers share the ideal of the internet as a global forum for comment, debate and the exchange of ideas. In practice, though, maintaining this ideal has not been easy. The daunting task of sifting through and moderating thousands of reader comments has driven platforms to turn off commenting altogether, and users to be wary of expressing themselves or engaging with contentious opinions. For publishers, this can mean less engaged readers and a diminished sense of reader community. It’s a serious business challenge.

It’s also a technological challenge that Google has decided to tackle. Thanks to advances in machine learning, Google has built models that can analyze text and score it, based on historical data, by its perceived impact on a conversation. The newly released API gives access to these models, starting with a “toxicity” score (i.e. how likely a comment is to be perceived as “toxic” to a discussion).
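To make that concrete, here is a minimal Python sketch of how a publisher’s backend might request a toxicity score. The endpoint and request shape follow Perspective’s publicly documented Comment Analyzer API, but the API key is a placeholder and response details may differ from what is shown.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: a real key is issued by Google
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity_score(text):
    """Return Perspective's toxicity score (0.0 to 1.0) for one comment."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],                      # English only at launch
        "requestedAttributes": {"TOXICITY": {}},  # the model being requested
    }
    response = requests.post(URL, json=payload)
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]

# A higher score means the comment looks more like ones labeled toxic.
print(toxicity_score("Thanks, that was a thoughtful reply."))
print(toxicity_score("Shut up, nobody cares what you think."))
```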

Publishers can then use this score to give real-time feedback to commenters, help moderators do their job, or allow readers to find relevant information more easily. Google’s hope is that Perspective will become a key part of publishers’ ability to analyze comment language at scale, and thus facilitate an open and constructive commenting community.

Google started with a handful of experimental partnerships (The New York Times, Wikipedia, The Guardian and The Economist) to explore how the API could be helpful. Today Google wants to give all DNI members privileged access, so that they can experiment and build their own tools. Google is dedicating a development team to the project, which will provide technical support and answer questions that might arise. The API is initially only available in English, but Google will be expanding it to other languages – and would value publishers’ help in doing so.


How it works

Perspective reviews comments and scores them based on how similar they are to comments people said were “toxic” or likely to make someone leave a conversation. To learn how to spot potentially toxic language, Perspective examined hundreds of thousands of comments that had been labeled by human reviewers. Each time Perspective finds new examples of potentially toxic comments, or is provided with corrections from users, it can get better at scoring future comments.
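To illustrate the general idea, the toy sketch below trains a classifier on a handful of hand-labeled comments and then scores new text by how much it resembles the “toxic” examples. This is emphatically not Google’s actual model or training data, just a simple stand-in for the supervised-learning approach described above, using scikit-learn.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-made training set standing in for the hundreds of thousands
# of human-reviewed comments described above (label 1 = reviewed as toxic).
comments = [
    "You are an idiot and should leave",
    "Great point, thanks for sharing",
    "Nobody wants people like you here",
    "I respectfully disagree with this article",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a deliberately simple stand-in
# for whatever models Perspective actually uses internally.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

# Score a new comment by how much it resembles the labeled-toxic examples.
print(model.predict_proba(["you are an idiot"])[0][1])
```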

Publishers can choose what they want to do with the information they get from Perspective. For example, a publisher could flag comments for its own moderators to review and decide whether to include them in a conversation. Or a publisher could provide tools that help its community understand the impact of what they are writing, for example by letting commenters see the potential toxicity of a comment as they write it. Publishers could even let readers sort comments by toxicity themselves, making it easier to find great discussions hidden under toxic ones.
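A moderation pipeline built on such scores might, for instance, hold high-scoring comments for human review and sort the rest for readers. The sketch below reuses the hypothetical toxicity_score helper from the earlier example; the 0.8 threshold is an arbitrary assumption that each publisher would tune to its own community.

```python
TOXICITY_THRESHOLD = 0.8  # hypothetical cutoff; each publisher would tune this

def triage(comments):
    """Split comments into an auto-published list (sorted least-toxic
    first) and a queue held for human moderator review."""
    scored = [(toxicity_score(text), text) for text in comments]
    held = [text for score, text in scored if score >= TOXICITY_THRESHOLD]
    published = [text for score, text in sorted(scored)
                 if score < TOXICITY_THRESHOLD]
    return published, held
```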

Source: Google