Is "just ChatGPT it, bro" good advice?
AI is both useful and creepy
We love to analyze things here at Fancy Comma, LLC. So does AI, apparently — or it would “love to,” if it were an actual person instead of a robot.
Our blog — and this very newsletter — is for humans, by humans. We do sometimes use generative AI in the writing process, but the end output is human-edited and tailored for human understanding. My pet peeve is encountering an AI-written article with its lists of items and “it’s not just X — it’s Y.” Sigh.
In fact, this month, we have been lucky to have another (human) writer on deck — Monet, our brilliant Georgetown VIEW student. Monet is a freshman at Georgetown University’s Edmund A. Walsh School of Foreign Service who cares about using science to solve tough policy problems through bipartisanship.
So, this month, while I wrote about participatory action research to solve public health crises, Monet tackled several topics. First, she discussed “America First” vs. multilateral US climate policy. Next, she moved on to a discussion (with solutions!) of bipartisan federal health policy narratives. After that, she wrote about US pharmaceutical industry reforms, and capped off the month talking about the aftereffects of dismantling the United States Agency for International Development, or USAID.
Personally, though, I do often use generative AI to answer quick online queries, including in the research phase — it’s kind of unavoidable.
I was having a stressed-out day the other month when I decided that, against my better instincts, I would do what it seems like many people out there are doing…I asked generative AI for help.
I hopped on over to perplexity.AI, my LLM of choice (ChatGPT is so overused, in my opinion — LOL — and now it’s getting ads!), and asked it one of those existential questions you ask your parents when you’re 5 years old. Like, we’re talking “why is the sky blue?” kind of stuff.
It kind of worked, or at least I read its response and felt better. The next week, I did it again.
And again.
One day, I decided to ask perplexity.AI how much it knew about me. Was it really paying attention? I was surprised to learn that it knew I lived in Oklahoma, that I was working on being less stressed out, and a few other things (all gleaned from my existential-crisis LLM usage).
I asked the website to forget everything it knew about me. Then I asked it what it knew about me, and it told me the same stuff. In other words, it had not deleted the information I wanted it to. I went to my perplexity.AI account’s search history, manually deleted the queries one by one, and asked it again. It still told me my location.
So, I deleted my account and don’t really use it anymore.
It seems to have forgotten me now, though it still serves up my location (presumably inferred from my IP address) when I ask it what it knows about me.

I can see how perplexity.AI could be useful for someone with more practical questions.
The truth is, though, that I actually like to learn and think about stuff, even when it’s painful to think because the world is too loud. (My new nemesis: key fobs that honk when they lock car doors. How am I supposed to do pencil-and-paper algebra problems at the coffee shop when all the people coming in are doing that loud remote-locking thing, sometimes multiple times in a row, just because they can? Is it just that nobody in this world does pencil-and-paper math anymore? Let me know in the comments…)
Apparently, AI is the solution to our complex and noisy world. It’s like the Big Tech companies are saying, “Don’t think! We can do it for you.” Who knows, maybe Mark Zuckerberg and Sam Altman both hate being distracted when they are deep in thought, and just want a way to keep the thoughts flowing in the event of a mass honking occurrence.
This innovation is now being taken to a new extreme, however.
“United States AI companies must be free to innovate without cumbersome regulation,” states President Trump’s Executive Order 14365, “Ensuring a National Policy Framework for Artificial Intelligence.”
The president says there should be no AI regulations at all, but a lot of things can break when you move fast without guardrails. This is so ill-advised in practice that I can only think this EO is the result of lobbying by Big Tech. The Big Tech companies all have DC-based policy offices…and their CEOs were all seated with the president at his second inauguration.
Another thing: if you feed an AI high-quality data, it’s more likely to be able to help you…but where else does that information go? Without guardrails, it can go anywhere.
Well, we know for sure that it’s used to improve the LLMs; beyond that, it’s not clear. So you’re volunteering your personal information to make these companies more profitable, on the theory that with enough information, the AI will become the most powerful supercomputer alive…without any real evidence that it can get there. What could possibly go wrong?
It’s cool to have fancy, impressive technology, but it’s not cool to use your user base as guinea pigs. We don’t allow that in human subjects research in neuroscience…so why should it be allowed with AI?
So, I’m done with LLMs, to the extent that I can be done with them (which is not that much, actually). Back to pen and paper and deep introspection and problem-solving and talking to actual people I go…even if it takes me more time and it feels like everyone out there is using AI. Maybe they don’t know these things…or don’t care. I care.
The effort to make AI work better than humans do
An MIT study predicts that about 12% of jobs could be replaced by AI…but I can’t imagine who would implement AI solutions in a world where robots still misunderstand you and regularly go rogue…I can only assume that most CEOs have no real science or tech background…or are willing to do anything to save money.
There’s a lot of excitement, and certainly a lot of hype, around the idea that AI will “get smarter.” When people say that, I assume they mean it will produce more factual answers, analyze things in more detail, and overall be more powerful and faster at responding to their many questions. Currently, “[AI] is not always that successful at evaluating, and reflecting can’t (yet) be outsourced to AI,” says Karen Thornber, a Harvard professor of Literature and of East Asian Languages and Civilizations. She’s certainly a lot more optimistic than I am about AI’s eventual ability to reflect on things. Even so, she and others at Harvard seem to believe it is “dulling” our ability to think critically. Maybe what we are wishing for is not a smarter AI, but a world where we can think better. A faculty member and an affiliate not too far away at Yale write that chatbots are prone to “manipulation, groupthink, and hallucination.”
Scientific American is way more excited about AI than these Ivy League scholars, who are widely regarded as the cream of the crop in scholarly work and critical thinking. “Today’s leading AI models can already write and refine their own software,” writes the publication in a December 6, 2025 article called “Are We Seeing the First Steps Toward AI Superintelligence?” The article boasts about the “reasoning” capabilities of current AI models. “If the systems already have these abilities, what, then, is the missing piece?” the article asks. It goes on: “One answer is artificial general intelligence (AGI), the sort of dynamic, flexible reasoning that allows humans to learn from one field and apply it to others.”
The Scientific American article mentions the opinions of a few top (human) thinkers on when they feel AI can reach AGI…and it sounds like it might take a while.
Assuming that AI can think for us within even a decade is a huge gamble. So many things could change between now and then, and the introduction of AI itself could render AGI a useless concept for our everyday lives. (Do we really need a high-powered supercomputer to generate a thank-you letter, put together a research plan, or even write a research paper?) Then again, such superintelligence could make a lot of things we don’t care about (and maybe don’t even care for) as individual AI users a lot easier.
Some job titles AI already has
To expand on that idea, here are some of my least favorite existing AI use cases, ranging from poorly executed to downright creepy. These could become even more powerful if AI ever reaches its hypothesized AGI, which, judging by the list below, it has not.
A retail customer service agent
I’ve heard working in retail is terrible…and it’s probably expensive to hire and train employees to handle such interactions with the proper nuance…so it makes sense that companies would want to replace customer service staff with AI.
Remember when ChatGPT was first launched and everyone said it would revolutionize all customer-service interactions? “This thing is crazy good at customer service,” wrote one business owner on Reddit who detailed closing more deals and managing difficult customers using the LLM.
While customer service chatbots may be saving companies money and hassle, they’re not really doing the human job of building trust, writes tech reporter Tom Snyder of WRAL: “In many cases, businesses are saving money while quietly eroding goodwill.”
Just take this thread on Reddit’s r/Sephora community: “Have you tried Sephora’s “new” AI beauty chatbot? Thoughts?”
“[N]ot sure if its just another marketing mechanism, or if it will actually offer real personalized recommendations,” wrote the post’s author.
Here are some snippets of the responses:
“I’m sick of AI in everything and I already spend too much without a robot chat encouraging me to spend more.”
“AI is horrible for the environment so I try to avoid it. Disappointed with how it’s being pushed everywhere.”
“AI lies. All the time. About everything. Why would I trust it to answer basic makeup questions when the internet exists and I can just research products myself with better results?”
“I asked it twice if a product had fragrance in it. Once it said no (even though it did) and the next time it said yes.”
A Reddit user who identified herself as a former Sephora chat employee stated that she believed the chatbot could make better recommendations “once it gets better,” but also stated that when she worked there, Sephora trained chat associates to “just look at what’s best selling, top reviewed, what tags match what the customer was looking for, just using the filters on the side.”
“[Y]ou're better off googling/reading old reddit threads for real recommendations or product help,” she wrote.
While a chatbot might be good for simple queries, it can’t replace deep research (such as the kind used in makeup shopping — because let’s be real — those are very serious decisions!).
A law enforcement officer and/or analyst
If you’ve ever been through the chaotic, stressful, bureaucratic, and time-consuming endeavor that is our legal immigration system, you know that it involves filling out lots of paperwork…and waiting. While the Immigration and Naturalization Service (INS) previously sat under the US Department of Justice, its functions now belong to the US Department of Homeland Security’s US Citizenship and Immigration Services (USCIS). One of USCIS’s stated goals on its website is to have “machine learning models eliminate redundant paperwork by pulling together customer information from disparate systems.”
It’s one thing to build a machine learning system on actual data (though the USCIS project surely has limitations that must be understood), but it is another to attempt predictive capabilities using AI. That brings a whole host of other issues into the frame, including not only the quality of the training data, but how it was collected and used — and whether it was even collected constitutionally. ProPublica reports that these AI systems can be biased against African Americans; in one case it examined, only 20% of the people predicted to commit violent crimes actually went on to do so.
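To see why training data quality matters so much here, consider a minimal sketch of “garbage in, garbage out.” All the rates and numbers below are invented for illustration and are not drawn from any real system: two simulated groups offend at the same rate, but one is policed twice as heavily, and a model trained on the resulting arrest records faithfully learns the policing pattern instead of the crime pattern.

```python
# A toy simulation of biased training data; none of these numbers reflect
# any real population or system. Two groups offend at the same 10% rate,
# but group A is policed twice as heavily, so twice as many of its
# offenses become arrest records.
import random

random.seed(42)

TRUE_OFFENSE_RATE = 0.10               # identical for both groups
POLICING_RATE = {"A": 0.8, "B": 0.4}   # chance an offense gets recorded

# Build a fake "historical arrests" dataset.
records = []
for group, policing in POLICING_RATE.items():
    for _ in range(10_000):
        offended = random.random() < TRUE_OFFENSE_RATE
        recorded = offended and random.random() < policing
        records.append((group, recorded))

# "Train" the simplest possible model: predicted risk = observed arrest rate.
for group in POLICING_RATE:
    arrests = [recorded for g, recorded in records if g == group]
    predicted_risk = sum(arrests) / len(arrests)
    print(f"group {group}: true offense rate 10.0%, predicted risk {predicted_risk:.1%}")
```

This prints roughly 8% for group A and 4% for group B. Any “predictive” system layered on top of records like these inherits whatever bias was baked into how the records were collected.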
It’s also becoming incredibly invasive to capture all this data to feed into AI. As the technology continues to advance, law enforcement agencies are doubling down on using AI to monitor people, even down to their body movements, which can get someone flagged as a potential criminal, the Marshall Project writes.
On November 20, 2025, the Associated Press reported on a United States Border Patrol program “monitoring millions of American drivers nationwide in a secretive program to identify and detain people whose travel patterns it deems suspicious.”
According to the AP article, cameras placed along the road capture locations and license plates, which are then passed through an algorithm that looks at “where they came from, where they were going and which route they took.” After federal agents process this information, they may ask local law enforcement to pull these drivers over.
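Based only on what the AP describes (origin, destination, route), here is a minimal sketch of how such a flagger could work. The rules, thresholds, camera names, and plates below are all hypothetical, not Border Patrol’s actual system; the point is how easily ordinary travel ends up labeled “suspicious.”

```python
# A hypothetical rule-based flagger over license plate sightings; this is
# an illustration, not Border Patrol's actual algorithm.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Sighting:
    plate: str
    camera: str   # which roadside camera read the plate
    hour: int     # hour of day, 0-23

def flag_suspicious(sightings: list[Sighting]) -> set[str]:
    """Flag plates that take an uncommon route or travel in the small hours."""
    camera_traffic = Counter(s.camera for s in sightings)
    flagged = set()
    for s in sightings:
        rare_route = camera_traffic[s.camera] < 3   # arbitrary threshold
        odd_hours = s.hour < 5                      # arbitrary threshold
        if rare_route or odd_hours:
            flagged.add(s.plate)
    return flagged

sightings = [
    Sighting("ABC123", "I-40 mile 152", 9),
    Sighting("XYZ789", "I-40 mile 152", 9),
    Sighting("DEF456", "I-40 mile 152", 9),
    Sighting("NITEOWL", "I-40 mile 152", 3),   # a night-shift commuter
    Sighting("BACKRD1", "County Rd 12", 14),   # lives off the main road
]

print(sorted(flag_suspicious(sightings)))  # ['BACKRD1', 'NITEOWL']
```

Neither flagged driver did anything wrong; they just drove at the wrong hour or on the wrong road. That is exactly the Fourth Amendment worry.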
There are more AI use cases over on the US Customs and Border Protection (CBP) website. According to HackerNoon, CBP is looking to use AI to build a “unified central operating system for all land, air, and subterranean surveillance technology,” among other goals, including reducing the number of human staff and, of course, tracking illegal border crossings as they happen.
Using AI, the Border Patrol “can monitor ordinary Americans’ daily actions and connections for anomalies instead of simply targeting wanted suspects,” the AP article states (man, they really did a good job of simplifying what that means for the everyday person — and I am grateful for that).
“Unreasonable searches and seizures” are banned by the Fourth Amendment. Is using AI to track law-abiding people driving on the road, to see if they might be criminals, considered “unreasonable”? It seems we’d have to ask the Supreme Court, which has no scientists or engineers, to answer that question. Would the justices be persuaded by legal arguments explaining how algorithms work, or by “garbage in, garbage out”? Honestly, it seems doubtful.
A writer (of disinformation!)
There was a funny story a while back when X started publishing the locations from which social media accounts were established, outing several US politics influencer accounts as coming from outside the United States.
Such accounts could easily spread misinformation or even conduct disinformation campaigns to disrupt US democracy in the age of LLMs. LLMs make it easier than ever to generate text, even social media text. They can create whatever words you need in whatever format — not to mention video, audio, and pictures.
The Government Accountability Office (GAO), the independent congressional watchdog agency, stated in a 2024 report that disinformation can “weaken democracies while increasing political instability and conflict among people.” The GAO explains that “tactics to create or spread disinformation include employing foreign actors behind fake social media accounts and using websites with both hidden operators and hidden connections to foreign governments.”
There are many examples of disinformation campaigns against nations throughout history, long before the age of LLMs. A famous one was executed by the Soviet Union’s intelligence services, which claimed that HIV/AIDS was the result of lab experiments. When this claim made its way to the US, it riled up Americans, who wrote about it even more, creating a flurry of earned media (that’s PR-speak for “shoutouts”) for the false information, which spread it further still. “Conspiracy theorists in America would cite [Soviet] sources, and vice versa,” writes NPR. This can (and probably does) happen at scale in the age of LLMs.
So, should you “just ChatGPT it”? You certainly can. Writing is fast and convenient with ChatGPT, though I find its output boring, frustrating, and at times hard to read. However, if you like prose full of lists, comma splices, and sentences that start with “but” and “and,” ChatGPT is not just your bot; it’s your writing ride-or-die and digital companion.
Just don’t be surprised when you surrender your deepest, darkest secrets to Big Tech and these companies get more and more profitable off the free labor you put in to improve their product. The AGI endgame (to create a supercomputer smarter than us) is fueled mostly by hype at this point. Who knows what will happen if we don’t reach the anticipated AGI and it all falls apart.
Surely, by the time we get to that point, AI will be more useful. But we’ve made our environment incompatible with valuing critical thought and set up our world in a way that devalues human connection. Is AGI really going to fix that, or make it worse?
It would be a hilarious set of catch-22s (and the good premise for a sci-fi novel) if it were not our real lives.
By the way, if you care about these issues, you can donate to the American Civil Liberties Union, which advocates for civil liberties and has a dedicated Privacy & Technology arm, or the Electronic Frontier Foundation, which advocates for commonsense digital laws.
That’s all for this post. If you liked it, please share it.
Until next month!
