ChatGPT seems to be everywhere these days and I’ve been trying to make sense of the hype. I wrote an explainer for the Fancy Comma blog on how ChatGPT works and have been interviewed about the technology’s impact on my work as a science writer.
I should state that I’m not a technology futurist; I’m more of a technology realist. I love innovation — I’m an MIT grad, after all — but I’m also a bit of a Luddite (still nostalgic for my ’00s-era Nokia phone). So when it comes to ChatGPT, I am quick to caution people to examine its social, legal, cultural, and economic implications, as well as its technological limitations.
Despite all of this, ChatGPT has taken the world by storm. It has certainly changed the way I work in the short term, and perhaps will in the long term as well.
ChatGPT’s impact on my freelance writing career (and ways I helped develop it)
ChatGPT represents a form of generative artificial intelligence, or generative AI — a type of AI that uses algorithms to create content. I could probably ask ChatGPT to write the blog articles we published on Fancy Comma, as well as this newsletter, over the past few years — but that doesn’t make much sense, since ChatGPT is now drawing on those blogs (and other content published before its 2021 training cutoff) to serve up content to its users.
It seems clear to me that ChatGPT has changed my work as a writer. As the US economy recovers from the COVID-19 pandemic, clients are bringing more work “in-house” and looking to save money wherever they can. One way they do so is by handing some writing tasks to ChatGPT.
Back in the day, I could do a quick Google search and write an article like “Why Color Temperature Matters in Your Home” or “Challenges of Solar Panels for Firefighters” (both are articles I have actually written; click the links to read!) to make a quick $100 or $200. Now, ChatGPT can do that. That’s money saved, which is everything for marketers in a world fearful of yet another recession. That’s great for a company’s bottom line, but not so great for the financial stability of content writers like me.
The other day, a client reached out to me to write a white paper about ChatGPT’s potential uses. The outline for the white paper was drafted by ChatGPT. Normally, putting together a white paper outline would be a several-hour task for me, and I would bill them hundreds of dollars for my work — but when done with ChatGPT, it’s free.
I am happy that I don’t have to do that type of writing anymore; for one thing, I now have many more skills and can charge a lot more for my work. As a real human, I have real intelligence, expertise, and skills that robots would envy (if they were capable of feeling emotions). It’s strange to me that I haven’t gotten those types of blog article requests recently; maybe they will come back, but given that you can now write them with ChatGPT, probably not!
Since ChatGPT has become popular, I notice that the requests I get from clients are much more complex, involving actual subject matter expertise and analytic reasoning skills — stuff that people can’t get from ChatGPT.
People are obsessed with ChatGPT, so I think it will soon be everywhere. Increasingly, it’s becoming a normal part of our lives, just as smartphones are today. Sephora has been using an AI-based virtual assistant and, according to AIContentfy, uses ChatGPT in its chatbot. Shoppers in Reddit’s r/Sephora community have been complaining about confusing customer service communications, speculating that ChatGPT may be behind the miscommunication. Despite the implementation challenges, if generative AI is good enough for Sephora, it might soon be used everywhere.
Examining the interfaces between ChatGPT, SciComm, and society
As we’ve previously written here, new technologies change society, but society also influences new technologies. For the most part, we’ve been talking about the need to examine social implications of science in the context of the COVID-19 pandemic — but these lessons extend to novel challenges such as generative AI, too.
“Science leads to novel impacts which change society,” we wrote in our February 2023 newsletter titled “The Science-Technology-Society Connection.”
In our newsletter issue on the sociology of science, we wrote:
“Scientific and technological breakthroughs can actually change the way we live and relate as humans. It’s important to consider the social context of science. For science communicators, understanding this could help them communicate science with more nuance.”
In our May 2023 newsletter, Fancy Comma’s Kelly Tabbutt, a sociologist, talked about paradigm shifts, which can occur with the advent of new technologies like ChatGPT:
“Paradigm shifts are integral to the social construction of scientific knowledge. They not only create new knowledge, but they often do this by reinterpreting, or rebuking, previously held knowledge.”
Kelly has also written about bringing society into SciComm through “social SciComm.”
In what ways can we apply these lessons from sociology of science to ChatGPT and the challenges of generative AI? What does ChatGPT mean for society, and for science communicators?
Analyzing implications of the ‘Notorious GPT’
Recently, The STEM Advocacy Institute asked me to comment on a paper in the Journal of Science Communication. The paper, entitled “The Notorious GPT: science communication in the age of artificial intelligence,” was authored by Mike S. Schäfer, Professor of Science Communication at the University of Zurich in Switzerland.
After poring over the paper, a daunting 15 pages, I found myself in agreement with the author. His main message? As science communicators, we should be cognizant of the interface between ChatGPT, science communication, and society.
Schäfer made four recommendations for practitioners of science communication:
1. Researchers should analyze how people — and that means everyone — communicate about generative AI. That includes looking at how scholars, science communicators, higher education, Big Tech, regulators, journalists, non-governmental organizations, and other stakeholders (including the general public) communicate about generative AI. I would add lawmakers to this list; generative AI technologies are likely to face scrutiny in Congress, especially since they repurpose often-copyrighted content to serve up to users of the chatbot technology. OpenAI CEO Sam Altman testified before Congress about the technology on May 16, 2023.
2. Researchers should analyze how people communicate with AI. How do they use ChatGPT? I assume that includes what people ask it, and the ways in which they use ChatGPT to improve their lives.
3. Researchers should examine the impact of generative AI on science communication. Here, Schäfer talks about how AI affects SciComm on many issues, from the environment to health to agriculture to anything else involving science. He also talks about the need for researchers to investigate whether generative AI technologies affect and change the broader science communication ecosystem, and if so, in what ways.
4. Finally, Schäfer notes that the “emergence of generative AI is a conceptual and theoretical challenge” — in other words, it’s a game-changer that we still don’t fully understand. While ChatGPT can communicate, it is not human, which creates a need for more AI-centric study of communication. Most communication research examines communication between humans — think of newspapers, media, and other kinds of communication you interface with daily. Now robots have been thrown into the mix!
I feel that if people could better understand generative AI, they could more effectively and powerfully use ChatGPT. That’s one task SciCommers can take on in this new era in which we can communicate with humans and robots alike. However, even beyond that, we must consider social implications of ChatGPT for science communication and for society at large.
Links from around the web (what we’ve been reading and writing):
As we mentioned at the top of this newsletter: on the Fancy Comma blog, we’ve been busy unpacking the technology behind ChatGPT.
Check out Sheeva’s interview about ChatGPT at The Loft, too!
We’ve also been talking about math communication; interviewing SciComm professor Sam Illingworth; and talking about beauty copywriting as a form of science writing.
I loved reading this article from The Xylom about the dreaded term, “women in STEM.”
The Open Notebook is always a great source for insights on ways to improve one’s reporting skills. Did you know their blog has an entire self-care section?
That’s it for this week! If you liked this newsletter, please share it! Have a great summer, Fancy Comma newsletter readers. :)