Living in an AI-enabled world
We must understand the challenges posed by chatting with computers.
Hi Fancy Comma newsletter readers! By now, here in the United States, we are in the “dog days of summer” — hot, humid days that make it impossible to do anything. Even my waterproof mascara budges in the heat and humidity this time of year. Best to stay indoors and read, if you ask me.
I read a newsletter from NYU’s Center for Social Media and Politics (CSMaP) about the future of artificial intelligence (AI) and misinformation. According to CSMaP, AI will make the internet rife with misinformation. ChatGPT can be used to create misinformation, which can then be propagated by fake bot accounts on social media. Not quite the AI-enabled utopia we envisioned with “better living through technology,” right?
This got me thinking…what would an AI-enabled future look like?
A couple of months ago, we talked about the implications of artificial intelligence (AI) for SciComm. You can read that newsletter here. While I do not use ChatGPT as a SciCommer, many people find it valuable for brainstorming. I just saw a LinkedIn post about feeding ChatGPT the first page of Google results and asking it to recommend page titles with a similar structure and format. Whether that really works…I have no idea…but it seems like a clever use of the technology, even if it means that new content created by AI will basically be a rehash of old content.
My own view on ChatGPT (click here for my explainer blog on the generative AI tool!) is that it cannot replace me as a scientist or even as a science writer. It does not have subject matter expertise, and it does not know what resonates with its audience. Even if it gleans some of this from studying texts, it cannot have a conversation with someone or interview them. It cannot figure out what people are feeling or thinking, use intuition, or do any of the other things that are foundational to human communication. It can, however, attempt to make emotional appeals via its form of “fancy autocomplete,” as Dan Hollick has called it.
However, one thing ChatGPT is good at is creating content very quickly. Does anyone else remember working through formulaic content briefs as a science copywriter, writing about anything from different types of restaurant lighting to the types of concrete you can have on your garage floor? Those articles can now be done with ChatGPT, because they were basically a restatement of Google search results already (I’m not an expert in restaurant lighting or garage floors, though I know a bit more about them now, after having written about them).
Joshua Tucker of the NYU CSMaP writes in The Hill that misinformation could shape the political narratives of the 2024 election. For science communicators who experienced the misinformation on social media during the pandemic, this application of AI should come as no surprise.
I often say that technology is amoral: it doesn’t have any inherent morality. It’s up to the people who use it to determine the use cases, which can then be judged. The worry that we’re setting ourselves up to be deliberately fooled by AI seems part fearmongering, but also part reality. What makes misinformation so effective is the emotional aspect that people can relate to. It’s what convinced people during the COVID-19 pandemic that the jab gave them 5G, or whatever misinformation was trending that week.
I’m not super thrilled about the future of AI. Many people lack the familiarity with ChatGPT needed to use it critically, yet they now turn to it instead of Google. It rehashes content that has already been written, without attribution, using technology people do not understand, so they cannot accurately critique its output. What could possibly go wrong?
What we’ve been reading (and writing):
On the Fancy Comma blog, we’ve been talking about SciComm lessons learned from Taylor Swift, breaking stereotypes in STEM education, and the next 75 years of science policy.
Continuing with the AI theme, I found this article from Web Writing Advice about using apps vs. hiring an assistant interesting.
Hollywood actors have joined the writers on strike, reports the Associated Press. In the era of ChatGPT, perhaps movie and TV makers thought they could pay writers less, or fire them entirely, and use AI tools instead…and perhaps some writers, replaced by generative AI tools that can apparently write movies, won’t be invited back to their jobs after the strike. Who knows. To those on strike, thank you for advocating for a living wage for writing; for some reason, people tend to assume that writing is not a real job.
I liked this LinkedIn post from last November about freelancers as “recession-proof.”
Sheeva has been publishing reels on Instagram. Check out our latest reel with SciComm advice and follow us on Instagram @fancycomma for free advice on science writing, journalism, marketing, freelance life, and more.
That’s it for this month! If you liked our newsletter, please share it!