Key Questions to Ask While Reading a Research Paper

Here at Greater Good, we cover research into social and emotional well-being, and we try to help people apply findings to their personal and professional lives. We are well aware that our business is a tricky one.

Summarizing scientific studies and applying them to people's lives isn't just difficult for the obvious reasons, like understanding and then explaining scientific jargon or methods to non-specialists. It's also the case that context gets lost when we translate findings into stories, tips, and tools for a more meaningful life, especially when we push it all through the nuance-squashing machine of the Internet. Many people never read past the headlines, which intrinsically aim to overgeneralize and provoke interest. Because our articles can never be as comprehensive as the original studies, they almost always omit some crucial caveats, such as limitations acknowledged by the researchers. To get those, you need access to the studies themselves.

And it's very common for findings to seem to contradict each other. For example, we recently covered an experiment that suggests stress reduces empathy, after having previously discussed other research suggesting that stress-prone people can be more empathic. Some readers asked: Which one is correct? (You'll find my answer here.)

But probably the most important missing piece is the future. That may sound like a funny thing to say, but, in fact, a new study is not worth the PDF it's printed on until its findings are replicated and validated by other studies, studies that haven't yet happened. An experiment is merely interesting until time and testing turns its finding into a fact.

Scientists know this, and they are trained to react very skeptically to every new paper. They also expect to be greeted with skepticism when they present findings. Trust is good, but science isn't about trust. It's about verification.

However, journalists like me, and members of the general public, are often prone to treat every new study as though it represents the last word on the question addressed. This particular issue was highlighted last week by (wait for it) a new study that tried to reproduce 100 prior psychological studies to see if their findings held up. The result of the three-year initiative is chilling: The team, led by University of Virginia psychologist Brian Nosek, got the same results in only 36 percent of the experiments they replicated. This has led to some predictably provocative, overgeneralizing headlines implying that we shouldn't take psychology seriously.

I don't agree.

Despite all the mistakes and overblown claims and criticism and contradictions and arguments, or perhaps because of them, our knowledge of human brains and minds has expanded dramatically during the past century. Psychology and neuroscience have documented phenomena like cognitive dissonance, identified many of the brain structures that support our emotions, and proved the placebo effect and other dimensions of the mind-body connection, among other findings that have been tested over and over again.

These discoveries have helped us understand and treat the true causes of many illnesses. I've heard it argued that rising rates of diagnoses of mental illness are proof that psychology is failing, but in fact, the opposite is true: We're seeing more and better diagnoses of problems that would have compelled previous generations to dismiss people as "stupid" or "crazy" or "hyper" or "blue." The important thing to bear in mind is that it took a very, very long time for science to come to these insights and treatments, following much trial and error.

Science isn't a faith, but rather a method that takes time to unfold. That's why it's equally wrong to uncritically believe everything you read, including what you are reading on this page.

Given the complexities and ambiguities of the scientific endeavor, is it possible for a non-scientist to strike a balance between wholesale dismissal and uncritical belief? Are there red flags to look for when you read about a study on a site like Greater Good or in a popular self-help book? If you do read one of the actual studies, how should you, as a non-scientist, gauge its credibility?

I drew on my own experience as a science journalist, and surveyed my colleagues here at the UC Berkeley Greater Good Science Center. We came up with 10 questions you might ask when you read about the latest scientific findings. These are also questions we ask ourselves, before we cover a study.

1. Did the study appear in a peer-reviewed journal?

Peer review, submitting papers to other experts for independent review before acceptance, remains one of the best ways we have for ascertaining the basic seriousness of a study, and many scientists describe peer review as a truly humbling crucible. If a study didn't go through this process, for whatever reason, it should be taken with a much bigger grain of salt.

2. Who was studied, where?

Animal experiments tell scientists a lot, but their applicability to our daily human lives will be limited. Similarly, if researchers only studied men, the conclusions might not be relevant to women, and vice versa.

This was actually a huge problem with Nosek's effort to replicate other people's experiments. In trying to replicate one German study, for example, they had to use different maps (ones that would be familiar to University of Virginia students) and change a scale measuring aggression to reflect American norms. This kind of variance could explain the different results. It may also suggest the limits of generalizing the results from one study to other populations not included within that study.

As a rule, readers must remember that many psychological studies rely on WEIRD (Western, educated, industrialized, rich, and democratic) samples, mainly college students, which creates a built-in bias in the field's conclusions. Does that mean you should dismiss Western psychology? Of course not. It's just the equivalent of a "Caution" or "Yield" sign on the road to understanding.

3. How large was the sample?

In general, the more participants in a study, the more valid its results. That said, a large sample is sometimes impossible or even undesirable for certain kinds of studies. This is especially true in expensive neuroscience experiments involving functional magnetic resonance imaging, or fMRI, scans.

And many mindfulness studies have scanned the brains of people with many thousands of hours of meditation experience, a relatively small group. Even in those cases, however, a study that looks at 30 experienced meditators is probably more solid than a similar one that scanned the brains of just 15.
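
To get a feel for why 30 tends to beat 15, here is a minimal simulation sketch (not from the article; the effect size, noise level, and sample sizes are invented for illustration). The error in a study's estimate shrinks roughly with the square root of the number of participants.

```python
# Toy simulation with invented numbers: how much a study's estimate
# wobbles at different sample sizes.
import numpy as np

rng = np.random.default_rng(0)
true_effect, noise_sd = 0.5, 1.0  # hypothetical effect and person-to-person spread

for n in (15, 30, 120):
    # Simulate 10,000 studies, each averaging the scores of n participants.
    study_means = rng.normal(true_effect, noise_sd, size=(10_000, n)).mean(axis=1)
    # The spread of those study-level means is the standard error,
    # roughly noise_sd / sqrt(n).
    print(f"n = {n:>3}: typical estimation error ≈ {study_means.std():.3f}")
```

Doubling the sample from 15 to 30 cuts the typical error by about 30 percent in this sketch, which is why the larger scan study is on firmer ground, all else being equal.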

4. Did the researchers control for key differences?

Diversity or gender balance aren't necessarily virtues in a research study; it's actually a good thing when a study population is as homogenous as possible, because it allows the researchers to limit the number of differences that might affect the result. A good researcher tries to compare apples to apples, and control for as many differences as possible in her analysis.
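
One common way to "control for" a difference is to include it in the statistical model. Here is a minimal sketch with wholly invented data (hypothetical ages, treatment, and outcomes): a naive comparison of group means is misled by age, while a regression that includes age recovers the true effect.

```python
# Toy example with invented data: adjusting for a confound (age) by
# including it in a least-squares regression alongside the treatment.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
age = rng.uniform(20, 60, n)
treated = rng.binomial(1, (age - 20) / 50).astype(float)   # older people opt in more
outcome = 2.0 * treated + 0.1 * age + rng.normal(0, 1, n)  # true treatment effect: 2.0

# Naive comparison of group means mixes the treatment effect with the age gap.
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Adjusted: fit outcome ~ intercept + treated + age, "controlling for" age.
X = np.column_stack([np.ones(n), treated, age])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"naive difference: {naive:.2f}, age-adjusted estimate: {coef[1]:.2f} (truth: 2.0)")
```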

5. Was there a control group?

One of the first things to look for in methodology is whether the sample is randomized and involved a control group; this is especially important if a study is to suggest that a certain variable might actually cause a specific outcome, rather than just be correlated with it (see the next point).

For instance, were some people in the sample randomly assigned a specific meditation practice while others weren't? If the sample is large enough, randomized trials can produce solid conclusions. But, sometimes, a study will not have a control group because it's ethically impossible. (Would people still divert a trolley to kill one person in order to save five lives, if their decision killed a real person, instead of just being a thought experiment? We'll never know for sure!)

The conclusions may still provide some insight, but they need to be kept in perspective.
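
For a concrete picture of what random assignment looks like, here is a minimal sketch (the participant IDs and stress scores are made up; in a real trial the scores would come from measurement, not simulation):

```python
# Toy example: randomly splitting a sample into a meditation group and a
# control group, then comparing mean stress scores. All data are invented.
import random

random.seed(42)
participants = [f"p{i:02d}" for i in range(40)]  # hypothetical participant IDs
random.shuffle(participants)                     # the randomization step
treatment, control = participants[:20], participants[20:]

# Fake post-study stress scores for illustration (lower is better).
score = {p: random.gauss(4.0 if p in treatment else 5.0, 1.0) for p in participants}

def mean(group):
    return sum(score[p] for p in group) / len(group)

print(f"meditation group: {mean(treatment):.2f}, control group: {mean(control):.2f}")
```

Because chance alone decides who meditates, any systematic gap between the groups is plausibly due to the practice itself rather than to pre-existing differences.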

6. Did the researchers establish causality, correlation, dependence, or some other kind of relationship?

I often hear "Correlation is not causation" shouted as a kind of battle cry, to try to discredit a study. But correlation, the degree to which two or more measurements seem to change at the same time, is important, and is one step toward eventually finding causation, that is, establishing that a change in one variable directly triggers a change in another.

The important thing is to correctly identify the relationship.
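
A classic way to see the gap between correlation and causation is to simulate a hidden common cause. In this invented example, hot weather drives both ice cream sales and sunburns, so the two correlate strongly even though neither causes the other:

```python
# Toy example with simulated data: a strong correlation produced entirely
# by a third variable (a confound), with no causal link between the two.
import numpy as np

rng = np.random.default_rng(7)
heat = rng.normal(size=5_000)                  # hidden common cause: hot days
ice_cream = heat + rng.normal(0, 0.5, 5_000)   # heat drives ice cream sales
sunburns = heat + rng.normal(0, 0.5, 5_000)    # heat also drives sunburns

r = np.corrcoef(ice_cream, sunburns)[0, 1]
print(f"correlation between ice cream sales and sunburns: r = {r:.2f}")
# r is large, yet banning ice cream would not prevent a single sunburn.
```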

7. Is the journalist, or even the scientist, overstating the result?

Language that suggests a fact is "proven" by one study, or that promotes one solution for all people, is most likely overstating the case. Sweeping generalizations of any kind often indicate a lack of humility that should be a red flag to readers. A study may very well "suggest" a certain conclusion, but it rarely, if ever, "proves" it.

This is why we use a lot of cautious, hedging language at Greater Good, like "might" or "implies."

8. Is there any conflict of interest suggested by the funding or the researchers' affiliations?

A recent study found that you could drink lots of sugary beverages without fear of getting fat, as long as you exercised. The funder? Coca-Cola, which eagerly promoted the results. This doesn't mean the results are wrong. But it does suggest you should seek a second opinion.

9. Does the researcher seem to have an agenda?

Readers could understandably be skeptical of mindfulness meditation studies promoted by practicing Buddhists or experiments on the value of prayer conducted by Christians. Again, it doesn't automatically mean that the conclusions are wrong. It does, however, raise the bar for peer review and replication. For instance, it took hundreds of experiments before we could begin saying with confidence that mindfulness can indeed reduce stress.

10. Do the researchers acknowledge limitations and entertain alternative explanations?

Is the study focused on only one side of the story, or one interpretation of the data? Has it failed to consider or refute alternative explanations? Do the researchers demonstrate awareness of which questions are answered by their methods, and which aren't?

I summarize my personal stance as a non-scientist toward scientific findings like this: Curious, but skeptical. I take it all seriously, and I take it all with a grain of salt. I judge it against my experience, knowing that my experience creates bias. I try to cultivate humility, uncertainty, and patience. I don't always succeed; when I fail, I try to admit error and forgive myself. My own understanding is imperfect, and I remind myself that one study is only one step in understanding. Above all, I try to bear in mind that science is a process, and that conclusions always raise more questions for us to answer.

Source: https://greatergood.berkeley.edu/article/item/10_questions_to_ask_about_scientific_studies
