Critical thinking: An essential skill for scientists and science writers alike
When you look at the data, what you see matters just as much as what's not there.
Good morning, Fancy Comma newsletter readers! This month has been a hectic one for me, mostly because I (Sheeva) am delving back into the academic science world, writing up my Master’s work in the form of reviews. This means revisiting the literature, examining study methodologies, and generally approaching published research with a skeptical eye.
As scientists, we learn to approach data with caution and skepticism. My own Master’s research never made it to publication in the past 11 years because the reviewers of an article that came out of it — top scholars in my field who found the work interesting, useful, and novel — were wary of a few limitations. It’s kind of a bummer, but I’m now in the midst of writing an updated review that shares the insights I gained over my graduate career with other researchers in my field. So exciting!
So, I also approach my academic literature reviews by thinking about what’s not being said…what approaches are not being used…and why.
As scientists, we know that failure is part of science life. We look up to scientists who persevered despite it all, making great discoveries despite enormous setbacks.
So many great scientific studies (mine included) never make the cut for reasons that have little to do with quality. Sometimes they don’t furnish the expected findings, or sometimes reviewers don’t understand why the work is interesting. Sadly, this also creates a quandary: What studies are missing in science and why? What would these studies tell us if they could have been published?
One example of what goes unpublished in science is relevant to COVID-19 clinical trials. Clinical trial participants have historically been heavily male and white, so it is difficult to know whether results readily generalize to women and people of different races and ethnicities. Amid the pandemic, companies such as Moderna made their vaccine trials more diverse to better represent the general population. The idea here is that, by recruiting clinical trial participants who look like the general population, the biotech industry can better serve a larger swath of people.
In science, just as in journalism, we have to think about the evidence we have, as well as what’s not being said or observed. This type of critical thinking is just as essential to science writing, science communications, and science journalism as it is to the laboratory. So, this month for our newsletter, our resident quantitative sociologist, Kelly Tabbutt, writes about the nature of scientific findings, and the generalizability of research findings. Keep reading for her thoughts on how to critically analyze your scientific sources!
Discoveries and Generalizability
Whether you are researching in the hard or soft sciences, and regardless of the type of study you are conducting, the people and environments you choose to include in your study and which effects or causes you choose to analyze will shape the discoveries you make.
Specifically, who you include and what you focus on will shape how generalizable your findings are – that is, how reliably your findings can be applied to the general population or environment. Read on to learn more about how research samples impact research findings and generalizability.
Samples and Generalizability
When conducting a study, you have to make decisions about who or what to include in your “sample” — the set of participants in your study. For example, if you are conducting a human subjects study, you must decide who you will include. The criteria you use to select your participants are known as your study’s selection criteria; they determine who is eligible to be in your sample.
It is called a sample because you are not looking at every person, animal, or environment, but rather a select set – or a sampling. Who or what you choose to include in your sample shapes how confident you can be that what you found in your study is true in other populations or settings beyond what you included in your study.
One of the biggest factors affecting generalizability is sample bias: skewing your sample toward a particular group or characteristic. In studies of people, this often means recruiting a disproportionate number of participants of a particular gender, age, race or ethnicity, or social class, or oversampling participants with a particular physical or psychological characteristic.
Sometimes this is intentional and necessary. If a study is focused specifically on women, it would necessarily include only women, and its findings would apply only to women. However, if your study is meant to include everyone and you end up recruiting only women, this creates sample bias: you cannot be confident that your results generalize to, for example, men, or to the population as a whole.
Sample size is another key factor affecting generalizability. Generally, the larger your sample, the more generalizable your results. That said, you do not want a sample so large that it becomes unwieldy to collect and analyze.
Quantitative and qualitative research methods have different constraints on sample size. Many factors shape the ideal sample size, but there are online calculators that can work it out for you so that your statistical inferences have adequate statistical power (that’s science-speak for the probability that your study detects a real-world effect when one actually exists).
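As a rough illustration of what those calculators do under the hood, here is a sketch of the standard normal-approximation formula for the per-group sample size of a two-group comparison of means. The function name and defaults are our own choices for illustration, not taken from any particular tool.

```python
import math
from statistics import NormalDist

def sample_size_two_means(effect_size, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided, two-group comparison of means,
    using the normal approximation.

    effect_size: Cohen's d (difference in means divided by the pooled SD).
    alpha: acceptable false-positive rate (significance level).
    power: desired probability of detecting a real effect of that size.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for a two-sided test
    z_beta = z.inv_cdf(power)           # critical value for the desired power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)  # round up: you can't recruit a fraction of a person

# A "medium" effect (d = 0.5) at 80% power needs about 63 people per group;
# a larger effect (d = 0.8) needs far fewer.
print(sample_size_two_means(0.5))  # → 63
print(sample_size_two_means(0.8))  # → 25
```

Notice how the required sample size shrinks as the expected effect grows: subtle effects demand large samples, which is one reason underpowered small studies so often produce findings that fail to generalize.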
There are many issues associated with a sample size that is too small. When you are examining the outcome of your research, you are focusing on average effects. The larger your sample, the smaller the effect of outliers that could skew your findings. Furthermore, a larger sample generally means a more diverse sample.
How you determine and collect your sample has a huge impact on the generalizability of your findings. The gold standard is “random sampling.” In this method, subjects are chosen at random, using techniques that ensure you capture the level of diversity needed for your findings to generalize to the population relevant to your study, whether that is a certain group or everyone.
There are also other sampling techniques. Essentially, you want a sample that is diverse enough that you capture the range of variations that are represented in the full population: a “demographically representative” sample.
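One common technique for getting a demographically representative sample is stratified sampling: divide the population into groups (strata) and draw randomly from each group in proportion to its size. A minimal sketch in Python, with illustrative function and field names of our own invention:

```python
import random
from collections import defaultdict

def stratified_sample(population, key, n, rng=random):
    """Draw roughly n records so each stratum's share mirrors the population.

    population: list of records (e.g., dicts describing potential subjects).
    key: function mapping a record to its stratum (e.g., a demographic group).
    Note: per-stratum rounding means the final size can differ slightly from n.
    """
    strata = defaultdict(list)
    for record in population:
        strata[key(record)].append(record)
    sample = []
    for members in strata.values():
        # Each stratum contributes in proportion to its population share.
        k = round(n * len(members) / len(population))
        sample.extend(rng.sample(members, min(k, len(members))))
    return sample

# Hypothetical population: 60% women, 40% men.
population = [{"gender": "women"}] * 60 + [{"gender": "men"}] * 40
drawn = stratified_sample(population, lambda r: r["gender"], 10)
# The 60/40 split survives in the sample: 6 women, 4 men.
```

A simple random sample (`random.sample(population, n)`) is the gold standard described above; stratification is one way to guarantee that the population’s diversity shows up even when the sample is small.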
Research findings and discoveries move the world forward. However, research findings are only useful if we know how and when they apply. To be generalizable, your sample (of people, animals, plants, or environments) should be large enough, diverse enough, and representative of the “universe” of populations or settings you want your findings to speak to.
Generalizability does not only mean that something applies to everyone or every context. It is vital to determine, and be transparent about, who and what is reflected in your research – that is, to what degree it is generalizable. Generalizability is determined at the outset by the questions you ask, how you design your research, and how you draw your sample.
Thanks, Kelly! Readers, what are your thoughts on ways science journalists can use the above basic research methodologies to improve the way science is reported by the media? What are your questions about the interplay between science research methods and the way that science is communicated? We’d love to know in the comments!
Links from around the web — what we’ve been reading (and writing):
Andy Strote has a helpful blog post about contracts for freelancers. A contract is a legal document that defines the scope of your work arrangements with your client. It’s important to know about contracts as a freelancer, especially if you have no legal background (as is the case for most freelance writers).
Check out, also, Andy’s list of 10 things your freelancer website should have. His website is a great source of information for freelancers in general!
Subscribe to Mark Bayer’s weekly newsletter, One for the Week, to get one tip a week on ways to leverage science communication to get things done.
Tamsen Webster has a great blog post on ways to tie two pieces of content together.
On the Fancy Comma blog, we interviewed SciCommer Abdullah Iqbal and recapped our new YouTube video series on science news literacy. We have more videos in the works, so subscribe to our YouTube channel!
That’s it for this week! Thanks for reading the Fancy Comma, LLC Newsletter. If you found our newsletter useful, make sure to share it with your friends!