Research

By hosting the results of our efforts and our current conclusions, we provide building blocks for future research on our central question: how to assess news credibility.

Below you can find papers and datasets that have been sponsored by the Coalition and made publicly available.

Questions about ongoing research and data under development can be directed to hello [at] credibilitycoalition [dot] org.

Papers

Amy Zhang, Aditya Ranganathan, Sarah Emlen Metz, Scott Appling, Connie Moon Sehat, Norman Gilmore, Nick B. Adams, Emmanuel Vincent, Jennifer 8. Lee, Martin Robbins, Ed Bice, Sandro Hawke, and David Karger. A Structured Response to Misinformation: Defining and Annotating Credibility Indicators in News Articles. The Web Conference, April 2018.

The proliferation of misinformation in online news and its amplification by platforms are a growing concern, leading to numerous efforts to improve the detection of and response to misinformation. Given the variety of approaches, collective agreement on the indicators that signify credible content could allow for greater collaboration and data-sharing across initiatives. In this paper, we present an initial set of indicators for article credibility defined by a diverse coalition of experts. These indicators originate from both within an article’s text as well as from external sources or article metadata. As a proof-of-concept, we present a dataset of 40 articles of varying credibility annotated with our indicators by 6 trained annotators using specialized platforms. We discuss future steps including expanding annotation, broadening the set of indicators, and considering their use by platforms and the public, towards the development of interoperable standards for content credibility.

Download the paper (PDF).

Dataset

View data.

Acknowledgements

This paper would not have been possible without the valuable support and feedback of members of the Credibility Coalition, who have joined weekly calls and daily Slack chats to generously contribute their time, effort, and thinking to this project. In addition to the authors of this paper, this includes Nate Angell, Robyn Caplan, Renee DiResta, James P. Fairbanks, Dan Froomkin, Dhruv Ghulati, Vinny Green, Natalie Gyenes, Cameron Hickey, Stuart Myles, Aviv Ovadya, Karim Ratib, Evan Sandhaus, Heather Staines, Robert Stojnic, Sara-Jayne Terp, Jon Udell, Rick Weiss, and Dan Whaley.

We are also grateful for feedback and support from the attendees of our in-person meetings, including Jordan Adler, Erica Anderson, Dan Brickley, Mike Caulfield, Miles Campbell, Jeff Chang, Jason Chuang, Nic Dias, Mark Graham, Eric Kansa, Burt Herman, Mandy Jenkins, Olivia Ma, Sunil Paul, Aubrie Johnson, Sana Saleem, Wafaa Heikal, Tessa Lyons-Laing, Patricia Martin, Alice Marwick, Andrew Mullaney, Merrilee Proffitt, Zara Rahman, Paul Resnick, Prashant Prakashbhai Shiralkar, Joel Schlosser, Ivan Sigal, Dario Taraborelli, Tom Trewinnard, Paul Walsh, Rebecca Weiss, and Cong Yu. A special thanks to Sally Lehrman and Subramaniam Vincent from the Trust Project for shared thinking and support.

We owe thanks to those who have housed conversations and workshops and offered critical feedback, including First Draft and the Shorenstein Center on Media, Politics and Public Policy at Harvard University; the Brown Institute for Media Innovation at Columbia University; and Northwestern University. Thanks to conferences and events that have hosted workshops or presentations with us, including W3C TPAC, the Mozilla Festival, MisinfoCon, Newsgeist, the Knight Commission on Trust, Media and Democracy, and the Computation and Journalism Symposium.

Md Momen Bhuiyan, Amy Zhang, Connie Moon Sehat, Tanushree Mitra. Investigating ‘Who’ in the Crowdsourcing of News Credibility. Computation+Journalism Symposium, March 2020.

Concerns about the spread of misinformation online via news articles have led to the development of many tools and processes involving human annotation of their credibility. However, much is still unknown about how different people judge news credibility, or about the quality and reliability of credibility ratings from populations of varying expertise. In this work, we consider credibility ratings from two “crowd” populations: 1) students within journalism or media programs, and 2) crowd workers on Upwork, and compare them with the ratings of two sets of experts, journalists and climate scientists, on a set of 50 climate-science articles. We find that both crowd groups’ credibility ratings correlate more strongly with the journalism experts’ than with the science experts’, with 10-15 raters needed to achieve convergence. We also find that raters’ gender and political leaning affect their ratings. Across article genres (news, opinion, analysis) and article source leanings (left, center, right), crowd ratings were most similar to the experts’ for opinion articles and for sources with a strong left leaning.

Download the paper (PDF).

Dataset

View data.

Acknowledgements

This paper would not have been possible without the valuable support of the Credibility Coalition, with special thanks to Caio Almeida, An Xiao Mina, Jennifer 8. Lee, Rick Weiss, Kara Laney, and especially Dwight Knell. Bhuiyan and Mitra were partly supported through National Science Foundation grant #IIS-1755547.