
How journalists can spot flawed research before citing it

Not all research is created equal—some studies hide bias, weak methods, or conflicts. Here's how journalists can separate fact from flawed science before hitting publish.

Image: a group of people seated around a table covered with papers, one person holding a paper, captioned "Libel Hunters on the Look Out, or Daily Examiners of the Liberty of the Press."


We updated this tip sheet, originally published in March 2017, to elaborate on key points and add links to new resources to help journalists vet research quality.

Academic research is one of journalists' best tools for covering public policy and holding public officials accountable. It's also a tool that takes skill to use.

Experienced journalists use research to ground their work and fact-check claims made by sources. Many journalists, however, have a tough time differentiating between a quality study and a questionable one.

We put together this list of questions to help you identify red flags in research. It can help you avoid relying on problematic findings. It can also help you scrutinize the studies that politicians and policymakers cite when defending their stances on issues.

It's important to note that many of these questions apply primarily to quantitative research, or research that involves the analysis of numerical data.

1. Is the research peer reviewed?

Peer review is a formal process through which researchers with expertise in a specific subject evaluate and provide feedback on one another's work. The process is designed for quality control - to weed out low-quality studies and strengthen others.

Keep in mind that peer reviewers are not fact checkers or fraud detectors, however. Their main focus is making sure the research questions are clear and the study's design, sampling methods and analysis are appropriate for answering those questions. Peer reviewers also assess whether a study's findings advance knowledge in the field and whether the authors complied with ethical standards, especially for studies involving human or animal subjects.

2. Is it published in a top-tier academic journal?

Top journals are more likely to feature high-quality research. They are more selective about the research they accept for publication, and their peer-review processes tend to be more rigorous. Many of the most reputable journals are affiliated with professional organizations such as the American Economic Association, National Academy of Sciences, American Association for the Advancement of Science, American Educational Research Association and American Medical Association.

One measure for gauging an academic journal's prestige is its impact factor, a number that represents how often the average article in that journal is cited across various types of scholarly work during a given period. Keep in mind, though, that newer journals might not yet have impact factors and that these ratings can be artificially inflated.

You can look up a journal's impact factor in the Journal Citation Reports database. The ratings range from zero to over 100. The Lancet, a leading academic journal featuring public health research, has an impact factor of 88.5, for example.
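For readers who want to see the arithmetic behind the rating: the standard two-year impact factor is the number of citations a journal receives in a given year to items it published in the two preceding years, divided by the number of citable items from those two years. The sketch below uses made-up figures for a hypothetical journal.

```python
def impact_factor(citations_this_year: int, citable_items_prior_two_years: int) -> float:
    """Two-year impact factor: citations received in year Y to articles
    published in years Y-1 and Y-2, divided by the number of citable
    articles published in Y-1 and Y-2."""
    return citations_this_year / citable_items_prior_two_years

# Hypothetical journal: 4,500 citations in 2024 to the 300 articles it
# published in 2022 and 2023 yields an impact factor of 15.0.
print(impact_factor(4500, 300))  # 15.0
```

Because the denominator is small for journals that publish few articles, a handful of heavily cited papers can inflate the number, which is one reason impact factors should be read with caution.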

3. Do other scholars trust this work?

One indicator of whether other scholars consider a study credible is the number of times they cite it in their own research. It can take years, though, for a study to generate a high citation count. Use Google Scholar, a free search engine, or Web of Science, a subscription-based service, to find a citation count for a specific paper.

Journalists can find out what researchers are saying about a particular paper on PubPeer, a website where scholars critique one another's work, often anonymously. They can also use the free Altmetric Bookmarklet to check a paper's Altmetric score, which measures how often it is viewed, downloaded, saved, bookmarked, cited, mentioned or discussed online. Another helpful resource: The Retraction Watch Database, which tracks retracted, withdrawn or corrected publications and is searchable by author.

4. Who funded the research?

It's important to know who sponsored the research as well as what role, if any, a sponsor played in designing or implementing a study or presenting its findings to the public. Researchers generally note the sources of funding for a particular study toward the end of an academic paper.

5. What are the authors' credentials?

Knowing where the authors work, their job titles and how often their work has been published in academic journals can help you gauge their level of expertise in a subject. Two strong indicators of expertise: Having a long history of publishing academic articles on a narrow topic and receiving prestigious research awards such as the Stockholm Prize in Criminology, Yidan Prize for Education Research and Johan Skytte Prize in Political Science.

6. How old is the study?

In fields such as technology and public opinion, a study that is several years old may no longer be reliable. The relevance of older research depends largely on the field, the topic and whether its methodology is still considered ethical and reliable.

7. Do the authors have a conflict of interest?

Be leery of research conducted by individuals or organizations that stand to gain from the findings. Although academic journals typically require authors to disclose conflicts of interest, some authors do not comply. One way to check for conflicts of interest is the Open Payments Database, a project of the U.S. Centers for Medicare & Medicaid Services. It collects data on payments that drug and medical device companies make to health care professionals, including public health researchers.

8. What's the sample size?

For studies based on samples, larger samples generally yield more accurate results than smaller samples. Researchers who study people - voters, students or motorcyclists, for instance - often aim for samples of at least 1,000 to 1,500 people.
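The common target of 1,000 to 1,500 respondents follows from the statistics of sampling. For a simple random sample, the 95% margin of error for a reported proportion shrinks with the square root of the sample size, so gains level off past roughly a thousand respondents. A minimal sketch of that calculation:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p estimated
    from a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Margin of error narrows slowly as samples grow past 1,000.
for n in (100, 1000, 1500):
    print(f"n={n}: about ±{margin_of_error(n) * 100:.1f} percentage points")
```

With 100 respondents the margin is roughly ±9.8 points; with 1,000 it is about ±3.1; with 1,500 it is about ±2.5. Quadrupling a sample only halves the margin of error, which is why pollsters rarely go far beyond that range.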

9. Does the study rely on survey data?

Researchers as well as government agencies, advocacy organizations and private companies conduct surveys to collect data on a range of topics. The quality of those surveys can vary considerably, however, and a survey's design can influence how people answer its questions. Survey results can be biased if, for example, respondents were not chosen by random selection. The order in which questions are asked can also affect results.

10. Can you follow the methodology?

Scholars should explain how they approached their research questions, where they got their data and how they used it. They also should clearly define key concepts and describe the statistical methods used in their analyses. This level of detail is necessary to allow other people to check and replicate their work.

11. Is statistical data presented?

Authors should share details about the data they examined and the numerical results of their analyses. This allows others to review their calculations and statistical models. In some fields, authors make their datasets publicly available.

12. Are the study's findings supported by the data?

Good researchers are very careful in describing their conclusions - because they want to convey exactly what they learned. They also point out weaknesses in their data, study design and findings. Sometimes, however, researchers exaggerate or minimize their findings, or there are discrepancies between what an author claims to have found and what the data shows.

If you have trouble making sense of the data, reach out to a statistician for help. Organizations such as SciLine, part of the American Association for the Advancement of Science, help journalists connect with scholars on deadline.

13. Is it a meta-analysis or randomized controlled trial?

Meta-analyses and randomized controlled trials are two of the most reliable forms of research. A study can still be high quality even if it is not one of these. But these two types of research, when conducted properly, provide particularly strong evidence.

To conduct a meta-analysis, also known as a meta-study, researchers analyze numerical data collected from multiple individual studies that focus on the same or a similar research question. By pooling the data, researchers can obtain more precise estimates to describe the relationship between certain variables - for example, the strength of the association between personality and cognitive ability.
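The core arithmetic behind many meta-analyses is inverse-variance weighting: each study's estimate is weighted by how precise it is, and the pooled estimate ends up more precise than any single study. The sketch below uses entirely made-up effect sizes and standard errors to show the idea.

```python
def pooled_estimate(effects, std_errors):
    """Fixed-effect, inverse-variance pooling: weight each study's
    effect by 1/SE^2, then return the weighted mean and the pooled
    standard error."""
    weights = [1 / se ** 2 for se in std_errors]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, effects)) / total
    return mean, (1 / total) ** 0.5

# Three hypothetical studies of the same question. The pooled standard
# error comes out smaller than any individual study's, which is why
# pooling yields more precise estimates.
mean, se = pooled_estimate([0.30, 0.10, 0.25], [0.10, 0.15, 0.12])
print(round(mean, 3), round(se, 3))
```

Real meta-analyses add further machinery (random-effects models, heterogeneity tests), but this weighted averaging is the foundation.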

Randomized controlled trials, on the other hand, are considered the gold standard for evaluating the effectiveness of a program, policy, medication or other intervention. By randomly assigning people to control or experimental groups, researchers reduce bias in the results. Random assignment also allows researchers to compare the groups to determine whether the intervention caused or contributed to any differences.
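Random assignment itself is simple: participants are shuffled and split, so that no personal trait can influence which group anyone lands in. A minimal sketch, with a hypothetical pool of 100 numbered participants:

```python
import random

def randomize(participants, seed=None):
    """Shuffle a list of participants and split it evenly into
    control and treatment groups, so group membership is unrelated
    to any participant characteristic."""
    rng = random.Random(seed)  # seed is optional; set it for reproducibility
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# 100 hypothetical participants split into two groups of 50.
control, treatment = randomize(range(1, 101), seed=42)
print(len(control), len(treatment))  # 50 50
```

Because chance alone decides the split, any systematic difference that later emerges between the groups can more plausibly be attributed to the intervention.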

The Journalist's Resource would like to thank these scholars for their help in creating this tip sheet: Adam Berinsky, the Mitsui Professor of Political Science at MIT; Marybeth Gasman, the Samuel DeWitt Proctor Endowed Chair in Education at Rutgers University's Graduate School of Education; Morgan Hazelton, professor of political science and law at Saint Louis University; Ivan Oransky, co-founder of Retraction Watch and a journalist in residence at New York University's Arthur Carter Journalism Institute; and Thomas E. Patterson, the Bradlee Professor of Government and the Press at Harvard Kennedy School.
