The AI ick is real. But so is this.

AI seems to be giving women the ick. And tbh, I get it. 

Disclaimer: This newsletter is longer than usual, but there's a reason this can't be summed up in a 30-second read. Stick with me…

A Harvard Business School working paper* synthesized 18 studies across more than 143,000 people in 25 countries and found that women have about 22% lower odds of using generative AI than men. A Federal Reserve Bank of New York survey put it in even plainer terms: half of men had used AI in the past year compared to only about a third of women. 

Ok, so we're hanging back. And recent headlines seem to validate this choice.

Oracle just laid off somewhere between 20,000 and 30,000 employees (by email, might I add). The same week, the company announced billions in new AI infrastructure investments. And it's not just Oracle. AI was behind roughly 55,000 layoffs in the U.S. in 2025 alone.** Paycom laid off 500+ employees after automating payroll and back-office functions, with staff told their roles had been replaced by AI-driven systems. IBM said AI agents had already eliminated hundreds of back-office roles. Microsoft cut 15,000 jobs over the course of 2025. The list keeps going.

So yeah, pretty scary.

On top of that, we keep getting hit with headlines screaming that AI is making us dumber. That people are engaging their brains significantly less when they use it. 

Terrifying. 

Very share-worthy. 

But stay with me, because there's a lot more going on here...


First. Let's zoom out.

This might feel like a totally new landscape but…we've actually been here before. 

We panicked about calculators rotting our math brains! We panicked about Google making us incapable of remembering anything!***

Each time, researchers eventually caught up to the panic and found that the reality was a lot more nuanced than the headlines suggested. 

I mean, the idea of being scared of a calculator is quite funny, isn't it? (People should really be scared of my math without a calculator…)

Research**** on how new technologies affect cognition (aka our thinking) consistently shows that fears of harm tend to be overstated, and that what actually matters is how you use the tool, not whether you use it. That context doesn't make the concerns about AI meaningless. But it's a useful reminder that "AI is melting our collective brain" is probably not the full story. 

Now let's look at what's actually happening, from two angles.


Angle 1: Companies are already having regrets.

A 2026 Careerminds survey of 600 HR professionals who had made AI-driven layoffs in the prior year found that two in three employers were already rehiring laid-off workers, often within months.***** The phenomenon even has a name now: "AI boomerangs." Companies moved fast, cut people, and are discovering that the gaps left behind aren't ones a chatbot can fill.

Klarna is the poster child for this. You know Klarna…the buy-now-pay-later app that makes that new pair of jeans suddenly feel like a "financially responsible" purchase. They dropped their headcount from 5,500 to 3,400, publicly declared AI could do all the jobs humans do, and celebrated $10 million in savings. Then customer satisfaction tanked, complaints mounted, and by mid-2025 the CEO was rehiring the exact roles he'd just cut. His own words: "there will always be a human available if a customer wants one." Expensive lesson.

Oh! Oh! And have you heard about what IKEA did?  

IKEA launched an AI chatbot called Billie that handled 47% of all customer inquiries. The conventional response to a 47% automation rate in a call center would have been to reduce headcount proportionally. Instead, the company launched a structured reskilling program and trained 8,500 call center workers as remote interior design advisers. That channel now has a target of reaching 10% of Ingka Group's total revenue by 2028.

The chatbot didn't generate that revenue. The humans did! The humans who got freed up from the repetitive stuff to do something the chatbot literally could not. 

The research backs this up. A field experiment****** looked at what happened when AI took over the scripted, repetitive parts of a sales job and left the complex, unscripted problem-solving to humans. The researchers found that workers got measurably more creative. They came up with better answers to questions they'd never been trained on. They performed better. And the workers who benefited most were the ones who already had strong skills in their domain and who leaned into the change rather than resisting it.


Angle 2: Okay, about that "AI makes us dumber" research. 

This headline drives me, in the words of Gwen Stefani, B-A-N-A-N-A-S.

The study behind those headlines is an MIT preprint******* (meaning it has not yet gone through peer review, which is worth noting) that hooked 54 people up to EEG brain scanners while they wrote essays. Some used ChatGPT, some used a search engine, some used nothing. Over four sessions across four months, the ChatGPT group showed the weakest brain connectivity, the lowest sense of ownership over their writing, and they couldn't even quote back what they'd just written minutes earlier. That's genuinely interesting. I'm not dismissing it.

But here's what's worth sitting with: 54 people. One task. Essay writing. And critically, everyone in the LLM group was told to "use ChatGPT to write an essay." 

Nobody was asked to push back on it, edit aggressively, treat it as a first draft to tear apart, or use it as a thinking partner rather than a ghostwriter. The study didn't vary how people used AI, so it simply cannot tell us whether using AI differently would have produced different results.

Here's why that matters. Hot off the press, a much larger and more nuanced study******** followed nearly 2,000 professionals using AI on real work tasks and found something more specific going on.

First, the study is explicit that its findings do not demonstrate cognitive harm, impairment, or decline. 

Second, what it found instead is variability in how people engage with AI. People who passively accepted whatever AI generated (no pushback, minimal edits) reported lower confidence in their own reasoning. But people who actively challenged and modified the AI's output? Higher confidence. Meaningfully higher. The more someone treated AI as a collaborator to interrogate rather than an oracle to accept, the better they felt about their own thinking.

So the question isn't really "does AI make you dumb?" The question is: how are you using it? Outsourcing your thinking wholesale and going along for the ride is probs not great for you. Using it as a tool you actively work with, push on, and override when it's wrong might just boost your confidence and your output.


I've babbled enough. Let's get to why you're here: what does all of this mean for you?

1. Uncover your eyes. The ick is understandable, but uninformed is not a safe place to be. AI is already reshaping what work looks like, and the workers navigating it best are the ones paying attention to what's actually happening - the icky bits, the scary bits, AND the empowering bits.

2. Be skeptical of the sexy, scary headlines. There's always more nuance underneath them, and the research is only just beginning. Your job is to stay curious. 

3. Know your value in an AI world. The IKEA story and the research are both pointing at the same thing from different directions. The gaps left by AI are deeply human ones. Relationship-building. Creativity. Complex judgment. Emotional intelligence. The stuff traditionally called "soft skills," which, let's be honest, is a deeply irritating name for skills that are genuinely hard and increasingly non-negotiable. 

4. Train accordingly. AI is absorbing the scripted, repetitive, codifiable work. What it cannot do well is the messy, contextual, human-centered work. That is your lane. The question is whether you're developing it.

Where in your work is there a gap that only a human can fill? And are you building the skills to fill it?


Let's go girls.

xoxo

Kelsey 

If you want to go deeper on any of this, I run a talk called The AI Reality Check that is basically this newsletter but with coffee and snacks and a lot more back and forth. No jargon, no hype, just let's actually figure out what this means for you and your people, whether that's your work team, your book club, or your girls' night that somehow turned into a two-hour conversation about AI.

This Week's Thing: 

Wherever you're at with all of this, pick one:

  1. Do a little reading or research on AI. Just start somewhere.

  2. Dig into the headlines a little more and interrogate what's actually being said. DM me if you want to chat about them.

  3. Reflect on your specific value in your role in an AI-focused world. Where do you bring something a chatbot simply can't?

  4. Identify one skill you can work on that sits squarely in the "human" column.
