The AI Risks You’re Overlooking
And Why They Could Cost Your Job

This Week
AI tools are getting smoother, faster, and harder to question. That’s exactly what makes them risky. This week, I’m looking at how small, unnoticed problems (like biased data, false confidence, or quiet data collection) can snowball inside the tools we rely on.
If you’re using AI to move faster at work, this is your reminder to slow down and check what’s really happening under the hood.
Read on.
The AI Risks You’re Probably Overlooking
The most dangerous problems with AI aren’t the obvious ones.
They don’t crash systems or trigger alarms. They slip into your work unnoticed, through polished content built on bad data, confident language that hides missing facts, or tools that quietly collect information they were never meant to store.
I’m all for using AI to move faster and work smarter, but we need to stay alert to the risks. Many of the AI tools and platforms we use today make overlooking the details far too easy.
And that’s where the real damage begins.
The 2025 Global Risks Report from the World Economic Forum highlights AI as a long-term concern, pointing to its role in accelerating misinformation, fostering blind trust, and driving subtle, unchecked shifts in decision-making.
Source: World Economic Forum
For people in professional and office jobs, these aren’t edge cases. They’re happening now—in the decks we present, the content we publish, and the customer data we feed into everyday tools.
Here’s how to spot the risk before it starts doing damage.
The Real Risks Are Hiding in Plain Sight
You don’t need to be reckless to end up exposed. Most AI-related problems don’t come from trying something extreme. They come from overconfidence in something that feels familiar.
Here’s where that shows up most often:
Complacency
AI tools are designed to help you move faster. The trouble is, they also make it easy to stop asking questions.
Ethan Mollick, a Wharton professor and one of the most respected voices in AI research, recently shared that after an update to GPT-4o, the model began “complimenting everyone like a sycophantic intern.” It was subtle but persistent: overly agreeable, uncritically flattering, and tuned to make users feel good about their work.
When a tool mirrors your tone and validates your decisions, you lower your guard, and that's when it becomes harder to spot the flaws.
Influence You Don’t Notice
AI can shape your thinking in ways that feel natural, because it’s learning to present ideas using your own logic.
A recent University of Zurich experiment showed that AI bots, posing as Reddit users, were more persuasive than real people when given access to user profiles. They didn’t rely on pressure or emotion, just calm, structured reasoning that matched people’s thought patterns.
Source: www.instagram.com/p/DJHfE_sxZrR/
That same technique is now baked into AI tools everywhere, from writing assistants to sales platforms.
Privacy Laws You Didn't Know You Were Breaking
When AI handles customer data, it doesn’t recognize national borders. You’re the one on the hook for compliance with GDPR, CCPA, PDPA, and whatever comes next.
Source: Data Dome
HUB International, a global insurance and risk management firm, warns that most businesses are underestimating how AI expands legal risk. A travel chatbot saving passport info, or a support assistant logging sensitive health details—these moments feel small until the fines arrive.
It doesn’t take a major data breach to land in trouble. It can be as simple as emailing a European recipient without their permission.
Cyber Backdoors You Didn’t Lock
Every AI-powered integration adds complexity and potential entry points for risk.
HUB has flagged a growing list of security concerns driven by AI:
Deepfaked voices used to impersonate executives and approve wire transfers
AI copilots leaking sensitive customer data through prompt history or integrations
Chatbots manipulated into revealing internal documents or access credentials
Each new tool connects to systems that were never built with this kind of exposure in mind. And many teams assume cybersecurity or insurance has them covered.
But HUB warns: if you don’t know how your AI systems are storing information, filtering inputs, or linking into your broader tech stack, you might be accepting risks your policies were never written to absorb.
How to Stay Smart Without Shutting the Door on AI
AI will keep getting faster and more capable. The question is whether your judgment can keep pace.
Here’s how to keep control without losing momentum:
Use AI as a First Draft—Not a Final Source
A strong opening paragraph or clean slide deck doesn’t mean the facts inside are accurate. Always check citations, names, quotes, and stats.
I’ve seen ChatGPT fabricate entire case studies and cite articles that look legitimate, until I clicked through. One linked to a blog I’d never heard of, filled with outdated numbers and zero attribution. If I hadn’t checked the original source myself, I would’ve passed off clickbait as credible research.
Treat Flattery as a Red Flag
It’s easy to trust what sounds like you. AI’s getting better at this by design, but agreeableness isn’t a substitute for accuracy.
The more something flatters you, the more it deserves a second look. Especially if it’s offering a version of your work that feels like it was “done” too quickly.
Build a Second Brain, Not a Replacement One
AI should support your work, not replace your thinking.
If it outlines your plan, writes your pitch, and edits your copy, what’s left of your point of view? If you can’t explain how something was created, you can’t own the result.
Take this newsletter as an example. When I use ChatGPT, I treat it like a creative partner: we brainstorm the topic together, then shape the outline through a few rounds of back-and-forth. I bring the voice, the perspective, and the final judgment. ChatGPT might help polish a sentence or restructure a messy paragraph, but I’m driving the entire way.
Know Where the Data Goes
The more global your business, the more sensitive your exposure. HUB’s guidance is blunt: most cyber and liability policies weren’t written with AI in mind. If you don’t know how tools are storing data—or where—don’t assume you're protected.
Ask three things:
Is this data being saved?
Who else can see it?
What laws apply?
Don’t Confuse Emotional Accuracy with Empathy
AI is getting better at recognizing emotional tone and faking empathy. Yuval Noah Harari, author and philosopher, points out that models like ChatGPT now outperform humans in identifying emotional content, but we have to remember that recognition isn't understanding.
Tools can simulate connection, but they can’t lead a team, calm a client, or build real trust. That’s your job.
Run the “Human + AI” Test
Here’s a final check.
Before you send, post, or present something, ask yourself, "Would this be better if I made one last pass?"
If you hesitate, you know the answer.
Judgment Matters
AI can take a lot off your plate, but it can’t make the judgment calls that matter. That’s still on you. Stay close to the work, keep your standards high, and let the tools work for you, not the other way around.
What did you think of today's email?
Your feedback helps me create better emails for you! Send your thoughts to [email protected].