Thoughts
April 1, 2024

ChatGPT is full of hot air

Did you go to a restaurant this weekend? If you did, you probably saw lots of vivid descriptions on the menu: flame-grilled, house-made, local, organic, zesty, tender, succulent. Maybe it was even “grandma’s recipe” if you got lucky.

Long lists of adjectives are good business in the restaurant world, with some studies showing that more descriptive labels can increase sales by 27% (and also increase repurchase rates down the line). But while painting an appetizing picture works for food, this flowery language isn’t always appropriate.

This is why it’s so odd to see an explosion of similar descriptors making their way into academic peer reviews. In the past year, usage of adjectives like “intricate,” “innovative,” “meticulous,” and “versatile” has skyrocketed, according to a recent paper by researchers at Stanford University, NEC Labs, and UC Santa Barbara.

Did these scientists suddenly get a steep group discount on thesauruses, or is something else at play? As with so many things in this moment, the culprit behind this “notable,” “substantive,” and “prevalent” change is an “invasive” and “sizeable” adoption of AI tools. (All those words also appeared in the top 100 adjectives disproportionately used by AI.)

The study’s authors looked at peer reviews from an annual AI conference and noticed that up to 16.9% of these reviews could have been “substantially modified” by LLMs like ChatGPT. And the modification that our current breed of AI is most adept at making? Using lots and lots of adjectives.
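The basic idea behind this kind of detection is simple: compare how often certain adjectives appear in reviews from before and after ChatGPT's release, and flag words whose relative frequency jumps. Here's a minimal sketch of that comparison; the two mini-corpora and the word list are made-up stand-ins, not the study's actual data or method.

```python
# Hypothetical sketch: compare an adjective's relative frequency in
# two corpora to spot "disproportionately used" words. The texts and
# watchlist below are invented examples, not the study's data.
from collections import Counter
import re

def adjective_rates(text, adjectives):
    """Return each watched adjective's frequency per word of text."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(words)
    total = max(len(words), 1)
    return {adj: counts[adj] / total for adj in adjectives}

before = "the method is sound and the results are solid and clear"
after = ("the meticulous and innovative method yields intricate "
         "versatile and meticulous results")

watchlist = ["meticulous", "innovative", "intricate", "versatile"]
rates_before = adjective_rates(before, watchlist)
rates_after = adjective_rates(after, watchlist)

for adj in watchlist:
    # A large jump in relative frequency flags a suspect word.
    print(adj, rates_before[adj], rates_after[adj])
```

The real paper estimates LLM involvement at the corpus level with a statistical model rather than flagging individual words like this, but the frequency shift is the signal it builds on.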

ChatGPT loves its fluffy language

If you’ve played around with ChatGPT yourself, you’ve probably noticed this too – the tool is notoriously verbose. Ask it a simple question, and you get a book report. Ask it to write a blog post, and it seems like it’s padding things out to reach your desired word count.

Somewhere lost to my TikTok feed, I once even saw somebody recommend a “hack” to get around this quirk: At the end of your prompt, drop in the phrase “no yapping” to get the AI to knock off the fluff.

This all happens because of the fundamental nature of how LLMs work. At least until we reach the next level of AI development, we must remember that these programs don’t know the answers; they know what an answer looks like. In the same way image generators like MidJourney know what colors and shapes represent a penguin, ChatGPT and Gemini know what phrases and formats represent a peer review or a book report. After that, it just fills in the blanks, often with a heavy dose of pretty-sounding adjectives and adverbs.

By the way, the researchers also found something else interesting – the “deadline effect.” In peer reviews submitted in the final days before the deadline, evidence of ChatGPT usage spikes. It turns out that we all love a little help when under the gun.

About the Author

Ben Guttmann ran a marketing agency for a long time. Now he teaches digital marketing at Baruch College, just wrote his first book (Simply Put), and works with cool folks on other projects in between all of that. He writes about how we experience a world shaped by technology and humanity, and how we can build a better one.

Get my new book. It just came out.
