Note: before we start, I’m not touching any of the visual generation tools in this piece. The things they produce are gholas1Is this my first Dune reference on the new blog? Wow. and I just plain hate them.
I work in SEO. More specifically, I work in content-focused SEO. I know that search engine optimization is a forbidden phrase among people who only associate it with spam farms and bad-faith attempts to drive traffic,2I am not going to pretend that those things don’t exist. Despite my slack-jawed appearance and habit of walking into doors, I am no fool. but it’s a discipline that I am quite happy to be employed in a lot of the time. Helping businesses and individuals drive interested parties to the right page on their website is a combination of science (how often should you use what words and where) and art (what do people actually want when they get there).
So, you can imagine that my whole industry flipped out when OpenAI gave us regular folks access to ChatGPT, their Large Language Model interface that was friendly and easy to use. Finally, you could just ask this magic machine with access to a massive amount of information to do things in plain English: list ten people who have played Hamlet in order of acclaim; describe the fundamentals of orbital mechanics in language a tenth grader could understand; recite three jokes by Mitch Hedberg. And it would do it!
Kind of.
Sort of.
It’s had problems, as you probably know. It will make up facts, and it’s weirdly bad at math, which is the one thing I thought computers were good at, no questions asked?
And, of course, there’s the whole “writing” thing.
I’m going to get this out of the way right now: ChatGPT does not actually write. You should not use it to replace actual writers who can provide insights or unique points of view. It chews up a lot of other people’s work and spits out something that approximates real writing. That may not be entirely bad, as it turns out, but more on that later.
First, I want to address the true elephant in the room:
ChatGPT Is Not A True AI, And I Will Not Tolerate People Acting Like It Is (Same for you, Bard and Gemini. Don’t think I don’t see you.)
Signal CEO Meredith Whittaker described ChatGPT as something that trains on content from “the darkest corners of the web” and then uses “a massive amount of computational power to predict what will be the next word in a sentence.” This is not wrong. This is, in fact, deeply correct.
ChatGPT’s interface is friendly and anyone can use it. For a lot of people, it feels like a genuine interaction with something instead of offering inputs into an algorithm and getting a pre-chewed response back. That’s how it’s programmed: to use its however many teraflops of processing power to give you a fast, human-seeming back and forth. But it’s running a program. A very complex program, to be sure, but it’s just a program.
ChatGPT has no desires, no wants, no innate curiosity. ChatGPT does not dream. It has no life experience to use when considering a subject. It can’t connect seemingly unrelated ideas. It has no emotional intelligence because it has no emotions. Most of all, it doesn’t create: it rehashes, remixes, and reworks a highly specific set of existing material in ways that are close to, but not quite analogous to what individuals do when they’re given the opportunity.
Individuals make; ChatGPT generates.
Okay, now that that’s off my chest…
What Should You Use ChatGPT For?
Not fiction, never fiction. (Yes, there’s a section called “What Shouldn’t You Use ChatGPT For?” right under this, but I wanted to get that out in the open immediately.) Stop asking it to plot your Star Trek novels3Even if I think a lot of modern Trek books feel like they’re the product of algorithmic, continuity-dependent plotting. or write a short story. Even setting aside the generally poor prose it generates, it’s never going to genuinely surprise a reader or allow them to connect with you. It’s a hard truth for ChatGPT Evangelists to acknowledge, but the fact that an LLM can generate better prose than you doesn’t mean that it’s a good writer on its own; it just means that it’s a better writer than you.
However, since I’m an SEO (which is how we started this whole rant), I will give you some use cases in which I’ve found it’s very useful.
- It’s very good at web page outlines and determining what topics and subtopics those pages should cover.
- It also can provide supporting phrases commonly associated with those topics.
- It can help you structure a silo of web pages quickly, with a parent page, child pages, and even grandchild pages.
- It can generate a list of FAQs around a product or service.
- It can generate a series of questions to ask an interview subject to glean insights on a topic at a very high level.
- If you go a bit deeper and give it some specific topics you’d like to cover, it plunders the web really effectively and makes you sound like you know what you’re talking about.
- It can analyze a piece of web content and give suggestions about additional topics to be included.
- This tends to be pretty hit or miss; it will sometimes recommend a topic you’ve already covered, just in different language than the LLM recognizes. Even then, you know to incorporate some of that language into your existing page.
- It can break down complex topics (particularly those around math and science) in a way that enables others to comprehend them better.
- My wife has used it to quickly take science lessons aimed at readers at an eighth-grade level and break them down into something appropriate for someone reading at a fifth-grade level. She polished the result a bit, but it has reduced her workload from a minimum of an hour per lesson to five minutes. Every minute like that a teacher gets back in their day is a minute that can be spent with a kid.
- It’s really good at rewriting product descriptions for individual web pages, which is key for trying to rank your ecommerce site.
I’m sure you can find more out there, but these are cases where it made sense to throw a lot of computing power at a task: the machine does in minutes what would have taken me much longer. (I’ll get to the energy use issue; stick with me.)
And as far as actual writing goes, I’m about to say something that will make some people mad. They’re going to go to the pitchfork store and ask for something extra pointy. Then they’re going to swing by the torchery and get something rated for castle-sieging.
For some web pages, with very specific prompts,4I’ve found it very useful to tell ChatGPT to avoid adverbs when generating copy, for example. you can create perfectly serviceable web copy. An example would be a plumber’s website. When you land on a page for, let’s say, bathroom plumbing repairs in your area, what do you do?
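To make “very specific prompts” concrete, here’s a minimal sketch of the kind of prompt assembly I mean. The function name, the example service, the city, and the exact constraint wording are all my own illustrations, not a recipe; the point is that the constraints (like banning adverbs) are where the specificity lives.

```python
def service_page_prompt(service, city, constraints):
    """Assemble a hypothetical prompt for serviceable local-service copy.

    The constraints list is where the "very specific" part lives:
    banning adverbs, capping reading level, requiring subheadings, etc.
    """
    rules = "\n".join(f"- {rule}" for rule in constraints)
    return (
        f"Write web copy for a {service} page serving {city}.\n"
        "Follow every rule below exactly:\n"
        f"{rules}"
    )

# Illustrative usage (the plumber example from above):
prompt = service_page_prompt(
    "bathroom plumbing repair",
    "Des Moines",
    [
        "Avoid adverbs entirely",
        "Use H2 subheadings and bulleted lists",
        "Write at an eighth-grade reading level",
    ],
)
```

You’d paste the resulting string into ChatGPT (or send it through an API); the template just keeps you from rewriting the same constraints for every page.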
You scan the page and look for hooks. You look through the subheadings on the page, scan any bulleted lists. You then look for signs of human life, the kind of stuff that ChatGPT and other LLMs can’t manage: testimonials from individuals, photos of actual humans, links to relevant content elsewhere on the web.
In other words, you’re not reading. You’re processing. You’re trying to see if they do the thing you need (fix a leaky faucet, unclog your toilet, replace your shower head) and if other people say they do a good job. You want to see if they operate in your area and if they can be at your house quickly. That’s it.
I understand the impulse some of my colleagues and peers have to make this kind of content really special and personal, but I think their energies are better spent elsewhere. You’re not looking to make a 1:1 human connection with this business; you’re looking to see if they do the thing. That’s it. That’s why this is the kind of page where you can use copy that’s not great; it just has to be highly functional and well-formatted.
What Shouldn’t You Use ChatGPT For?
This list may seem shorter, but each item is much broader.
- Fiction. Duh. (See above rant.)
- Writing that you want actual humans to read. Writing where you need to persuade them on that 1:1 level.
- I think it’d be awful for landing page copy, for example.
- Anything where you want to show your expertise on a topic.
- Related (and this could probably be a footnote, but I’m putting it here anyway): this is something that college professors in particular are going to need to figure out how to work with, because so much writing for higher education is actually pretty rote. How do you get a student to demonstrate core competency around a topic without basically saying “Write something that synthesizes a bunch of other people’s writings”?
- Anything journalistic, period. A journalist’s job is not to just spew out press release copy or say “so-and-so said X.” A journalist’s job is to find the actual, essential truth and contextualize what’s been said and acted upon around that.
- Even sports journalists, the ones you’d think could be most easily replaced because so much of their expertise is statistically driven, have a point of view and an ability to draw connections.
- Anything where the idea of LLM-created content is morally or ethically dubious.
- This is open to interpretation. Somebody told me that they used ChatGPT to write a condolence note and I blanched at how cold that seemed.
Now, finally, the thing that I have to mention because if I don’t mention it, people are gonna be even madder at me. (Also, because it needs to be discussed, obviously.)
All Of This Assumes That ChatGPT and Bard Are More Efficient Than They Actually Are
I was really struck by the statistic that making an image with Generative AI uses as much energy as charging your phone. ChatGPT slurps down 500 milliliters of water every time you ask it a series of prompts.5Where does this water go? Why can’t they just ask the giant computer to pee in a bucket so it can be sent back into the water cycle? These stats are roughly in line with how much energy and water many data centers use, mind, but it’s still concerning.
If LLMs are going to become the pervasive, large-scale societal change their investors want them to be, then they have to find more energy-efficient methods, period. I am not going to pretend that they do not.
Here’s the thing, though, and I’ve not found a good comparison: how much energy would a person use searching 20 different plumbing websites, writing up an outline based on what they found, conducting keyword research to support all those topics, and then writing the serviceable6Again: this needs to be good, not great copy?
Of course, it’s all a matter of scale. Say the per-page energy usage is 1:1, but the LLM means a page gets created 25 times faster. Someone using an LLM to produce 25 pages in the time it takes a human to write one still means 25 times the total energy usage.
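The scale arithmetic above can be written out in a few lines. Every number here is made up for illustration; it’s the back-of-the-envelope shape of the argument, not real measurements.

```python
# Back-of-the-envelope version of the scale problem.
# All numbers are illustrative, not measured.
energy_per_page = 1.0  # assume an LLM page costs the same energy as a human page (1:1)
speedup = 25           # pages an LLM user ships in the time a human writes one

human_energy = 1 * energy_per_page      # one hand-written page
llm_energy = speedup * energy_per_page  # 25 pages in the same wall-clock time

print(llm_energy / human_energy)  # -> 25.0: same cost per page, 25x the total draw
```

Per-page efficiency parity doesn’t save you anything if the sheer volume of output multiplies anyway.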
Businesses may not care about any of this because it’s more profitable, but there’s a cost to society as a whole that needs to be addressed.
Okay, This Is It. The End…For Now?
Thank you for reading this far. This is not meant to be any kind of definitive post about the topic of large language models and how they’re used; I just wanted there to be more nuance in the conversation. It seems like every platform I go on features two distinct groups of people: those who are bloviating about how life-altering and perfect LLMs are, and those who hiss “AI grifter” whenever someone brings up the fact that they use them. Since this is my platform, I can belong in neither camp quite happily.
And for the record: the algorithmically-fueled Yoast plugin7That I really should uninstall because this is a personal site, not something I really care about optimizing for search. says that this page is deeply unreadable. I hope you disagree.
Leave a Reply