Is ChatGPT Making You Stupid?

MIT conducted a new study that tracked how people performed with and without AI

This Week

AI won’t make you dumber, but letting it think for you will.

That’s the takeaway from new MIT research on cognitive performance and the starting point for this week’s deeper look at what it really means to be effective when working with large language models. Here's what we've got:

  • Your Brain on ChatGPT: MIT’s latest study reveals how over-relying on AI affects memory, creativity, and original thinking.

  • Prompt Engineering Is Overrated: Wharton’s Ethan Mollick says good prompting relies on curiosity, clarity, and iteration.

  • In Defense of Prompt Engineering: A new benchmarking study shows that in complex tasks, structure and strategy still win.

  • Your Ultimate Prompting Toolkit: Five free guides and libraries to improve your prompts—without wasting hours on prompt hacks.

Let’s go.

Using ChatGPT Might Be Making You Dumber

ChatGPT won’t make you slower. But letting it think for you will.

That’s what researchers at MIT’s Media Lab found in a new study that tracked how people write essays with and without AI. Participants were split into groups—one used ChatGPT, and another relied on their own brains. All were asked to write SAT-style essays while wearing EEG headsets that monitored their brain activity.

The ChatGPT group had the lowest cognitive engagement. Their essays were less original, more formulaic, and visibly weaker across neural, linguistic, and behavioral markers. By their third attempt, many had stopped trying entirely, copying and pasting ChatGPT’s answer without edits. When later asked to rewrite an earlier essay without help, most could barely recall what they’d written.

The brain-only group performed differently. Their EEGs showed stronger connectivity in areas linked to memory, creativity, and semantic processing. Their writing reflected more ownership and variation. And when given access to ChatGPT after writing unaided, they stayed engaged, using it to support their thinking, not replace it.

Some pundits have spun the study into a warning that ChatGPT rots your brain. That’s not what the data says. The problem isn’t the tool; it’s how you use it.

As Paul Roetzer of the Marketing AI Institute puts it, the real imperative is to teach people how to use AI to accelerate learning and thinking, not outsource it.

That means:

  • Starting with human comprehension

  • Using AI to test, refine, or expand ideas—not to generate entire outputs blindly

  • Creating environments that reward cognitive engagement, not just finished deliverables

If you want to learn how to work with ChatGPT, Section is hosting a free webinar on July 11 to give you the playbook for becoming an AI manager instead of an AI freeloader—and how to use AI to gain 30 IQ points.

Prompt Engineering is a Waste of Your Time

If you’re still trying to engineer the perfect prompt, you’re wasting your time. 

That’s one of the main findings from Ethan Mollick and his team at Wharton. They’ve been studying how people interact with large language models and testing what improves outputs.

The verdict? 

The latest models don’t need perfect phrasing or clever tricks. Being polite doesn’t help (I’m still going to say please and thank you regardless!). Step-by-step reasoning offers minimal gains. What matters most is treating the AI like a capable coworker who has a bad memory and doesn’t know your background. 

Instead of trying to reverse-engineer the “right” prompt, focus on giving AI something useful. Add context, upload a sample, or include a short intro about yourself and your goals. It also helps if you ask for 30 ideas, not three, or you can ask it to ask you questions. 

Use branching to explore alternatives. Claude, ChatGPT, and Gemini all let you edit prompts after the first output. This creates a new “branch” of the conversation. You can move between branches using the arrows that appear after you edit an answer. It’s a great way to see how tweaking a prompt changes the rest of the conversation.

My favorite advice is also the simplest: talk to the AI. Don’t overthink the prompt. If you’re willing to go back and forth, to push and clarify, the answers will get better.

You don’t need a perfect prompt library. According to Mollick, the best prompters aren’t the most precise—they’re just the most curious.

Want to read the full article? Head over to Mollick’s blog, One Useful Thing.

In Defense of Prompt Engineering

Ethan Mollick says prompt engineering is mostly a waste of time, but not everyone agrees.

According to a recent benchmarking study using GPT-3.5, the right kind of prompting can measurably improve performance, especially on complex tasks. Some of the top-performing techniques involve careful setup, structured reasoning, and even asking the model to critique its own answers.

So if you’re wondering whether all those prompt hacks are noise or signal, the truth might be: it depends on what you’re doing. 

Here's a quick breakdown of the prompt techniques that performed best:

The top performer was Few-Shot Chain-of-Thought: giving the AI a few examples with step-by-step explanations before asking it to solve a similar problem. That method had the highest accuracy overall.
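Here’s a minimal sketch of what a Few-Shot Chain-of-Thought prompt looks like in practice. The worked examples and the helper function are illustrative, not taken from the study:

```python
# Two worked examples with visible reasoning; the model is then asked to
# continue the pattern on a new question.
EXAMPLES = [
    ("Roger has 5 balls and buys 2 cans of 3 balls each. How many balls does he have?",
     "He buys 2 * 3 = 6 new balls. 5 + 6 = 11. The answer is 11."),
    ("A cafe had 23 apples, used 20 for lunch, then bought 6 more. How many now?",
     "After lunch: 23 - 20 = 3. Then 3 + 6 = 9. The answer is 9."),
]

def build_few_shot_cot_prompt(question: str) -> str:
    """Prepend worked examples (with reasoning) to the new question."""
    parts = [f"Q: {q}\nA: {reasoning}" for q, reasoning in EXAMPLES]
    parts.append(f"Q: {question}\nA:")  # the model continues from here
    return "\n\n".join(parts)

prompt = build_few_shot_cot_prompt(
    "I have 4 boxes of 6 pens and give away 5. How many pens are left?"
)
print(prompt)
```

The trailing `A:` invites the model to mimic the step-by-step style of the examples before stating its answer.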

Another approach that worked well is called Self-Consistency. Instead of asking the AI once, you ask it the same question multiple times in different ways, then pick the answer that comes up most often. This works especially well for math, logic, and common sense questions.
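The voting step at the heart of Self-Consistency is simple enough to sketch. The sampled answers below are hypothetical stand-ins for repeated model completions:

```python
from collections import Counter

def self_consistency(answers):
    """Pick the answer that appears most often across repeated samples."""
    return Counter(answers).most_common(1)[0][0]

# Hypothetical: five sampled completions to the same math question.
samples = ["42", "42", "41", "42", "40"]
print(self_consistency(samples))  # → "42"
```

In a real setup, each sample would come from a fresh model call at nonzero temperature; majority voting then filters out one-off reasoning slips.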

Other techniques that performed well include:

  • Decomposition, which means breaking a complex question into smaller, easier ones.

  • Self-Criticism, where the AI reviews and improves its own answers. One method, "Self-Refine," lets the AI revise its response using its own feedback.

  • Role, Style, and Emotion Prompting, where the AI is told to respond with a certain tone or mindset. These work best for creative tasks.
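The Self-Refine loop from the self-criticism family can be sketched in a few lines. The `model` callable and the stub below are hypothetical placeholders for a real LLM call:

```python
# A minimal Self-Refine loop: generate a draft, ask the model to critique
# it, then ask for a revision that incorporates the critique.
def self_refine(model, task, rounds=2):
    draft = model(f"Task: {task}\nWrite an answer.")
    for _ in range(rounds):
        critique = model(f"Task: {task}\nAnswer: {draft}\nCritique this answer.")
        draft = model(
            f"Task: {task}\nAnswer: {draft}\nCritique: {critique}\nRevise the answer."
        )
    return draft

# Stub standing in for a real LLM, so the sketch runs as-is.
def fake_model(prompt):
    return "revised" if "Revise" in prompt else "feedback"

print(self_refine(fake_model, "Summarize the study"))  # → "revised"
```

With a real model, the critique prompt would surface concrete flaws, and a round or two of revision is usually enough before returns diminish.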

In short: prompting matters, but the best technique depends on the task. Some methods help the model think more clearly, and others help it stay on track.

Your Ultimate Guide to Prompting

If you still want to know the ins and outs of prompting, here are a few good guides to get you started:

  • Justin Parnell’s A Beginner’s Guide to Prompt Engineering has a full breakdown of all the tips, tricks, and techniques you’ll need. As the founder of the AI automations agency Justin GPT, he knows what he’s talking about.

  • If you want super-dialed design prompts, check out this custom-made image prompt builder from none other than my sister, AI and design expert Toyah Perry.

  • Prompt Engineering Guide starts you off with the basic elements of prompting and then advances into more complex techniques like Chain-of-Thought and Meta prompting.

  • If you want to level up your text-to-image generation skills, Google’s guide is a great place to start.

  • Adobe’s text-to-video prompt guide includes a general outline for getting started, plus more technical references for advanced users.

And if you just want a bunch of prompts you can steal, check out this Medium article, 8 Best Prompt Libraries.

What did you think of today's email?

Your feedback helps me create better emails for you! Send your thoughts to [email protected].
