By hosting the results of our efforts and our current conclusions, we provide building blocks for future research around our central question: how to assess news credibility.

Below you can find papers and datasets that have been sponsored by the Coalition and published in a public format.

Questions about ongoing research and data under development can be directed to hello [at] credibilitycoalition [dot] org.


Amy Zhang, Aditya Ranganathan, Sarah Emlen Metz, Scott Appling, Connie Moon Sehat, Norman Gilmore, Nick B. Adams, Emmanuel Vincent, Jennifer 8. Lee, Martin Robbins, Ed Bice, Sandro Hawke, and David Karger. A Structured Response to Misinformation: Defining and Annotating Credibility Indicators in News Articles. The Web Conference, April 2018.

The proliferation of misinformation in online news and its amplification by platforms are a growing concern, leading to numerous efforts to improve the detection of and response to misinformation. Given the variety of approaches, collective agreement on the indicators that signify credible content could allow for greater collaboration and data-sharing across initiatives. In this paper, we present an initial set of indicators for article credibility defined by a diverse coalition of experts. These indicators originate from both within an article’s text as well as from external sources or article metadata. As a proof-of-concept, we present a dataset of 40 articles of varying credibility annotated with our indicators by 6 trained annotators using specialized platforms. We discuss future steps including expanding annotation, broadening the set of indicators, and considering their use by platforms and the public, towards the development of interoperable standards for content credibility.

Download the paper (PDF).

View the data.


This paper would not be possible without the valuable support and feedback of members of the Credibility Coalition, who have joined weekly calls and daily Slack chats to generously contribute their time, effort, and thinking to this project. In addition to the authors of this paper, this includes Nate Angell, Robyn Caplan, Renee DiResta, James P. Fairbanks, Dan Froomkin, Dhruv Ghulati, Vinny Green, Natalie Gyenes, Cameron Hickey, Stuart Myles, Aviv Ovadya, Karim Ratib, Evan Sandhaus, Heather Staines, Robert Stojnic, Sara-Jayne Terp, Jon Udell, Rick Weiss, and Dan Whaley.

We are also grateful for feedback and support from the attendees of our in-person meetings, including Jordan Adler, Erica Anderson, Dan Brickley, Mike Caulfield, Miles Campbell, Jeff Chang, Jason Chuang, Nic Dias, Mark Graham, Eric Kansa, Burt Herman, Mandy Jenkins, Olivia Ma, Sunil Paul, Aubrie Johnson, Sana Saleem, Wafaa Heikal, Tessa Lyons-Laing, Patricia Martin, Alice Marwick, Andrew Mullaney, Merrilee Proffitt, Zara Rahman, Paul Resnick, Prashant Prakashbhai Shiralkar, Joel Schlosser, Ivan Sigal, Dario Taraborelli, Tom Trewinnard, Paul Walsh, Rebecca Weiss, and Cong Yu. A special thanks to Sally Lehrmann and Subramaniam Vincent from the Trust Project for shared thinking and support.

We owe thanks to those who have hosted conversations and workshops and offered critical feedback, including First Draft and the Shorenstein Center on Media, Politics and Public Policy at Harvard University; the Brown Institute for Media Innovation at Columbia University; and Northwestern University. Thanks also to the conferences and events that have hosted workshops or presentations with us, including W3C TPAC, the Mozilla Festival, MisinfoCon, Newsgeist, the Knight Commission on Trust, Media and Democracy, and the Computation + Journalism Symposium.