Are We Trusting AI Too Much? | AI Insights San Francisco w/ Tiffany Saadé
I had the pleasure of recording an AI Insights San Francisco episode with Tiffany Saadé, a researcher at Stanford who lives at the crossroads of cybersecurity, public policy, and artificial‑intelligence ethics. Tiffany isn’t the kind of guest who stays in the safety of theory—she has helped draft her home country’s first national AI strategy and routinely red‑teams models for real‑world vulnerabilities. From the moment we hit “record,” I knew this would be a conversation that forces all of us—builders, policy folks, and everyday users—to slow down and ask: Have we started trusting AI a little too much?
Why “human‑AI teaming” can slip into dependence
We opened with a blunt observation: the efficiency of large models is intoxicating. Automating meeting notes? Great. Offloading scheduling? Even better. But Tiffany warns that, somewhere between “helpful” and “hands‑off,” we cross a line. If we cede our cognitive autonomy—our own judgment and skepticism—we risk letting hallucinations, malicious prompts, or poisoned datasets steer the ship while we’re asleep at the wheel.
I could feel the room get quieter when she put it this way:
“Descartes said ‘I think, therefore I am.’ What happens when we stop thinking?”
Security by design… or by regret
Tiffany’s security background showed up in every example. She compared today’s AI rush to building a skyscraper without fire codes—impressive until the first spark. Organizations love the productivity boost, but ransomware, jailbreak prompts, and membership‑inference attacks are growing just as fast. Her advice? Bake in red‑team drills, data minimization, and explainability from day one—before a breach drags you into million‑dollar compliance penalties.
Agents, feedback loops, and the vanishing human
When we drifted into agentic AI, Tiffany’s tone shifted from caution to outright concern. Multi‑agent systems swap data and refine one another’s behavior so quickly that a single glitch propagates before any human can hit pause. She’s researching ways to use agents for diplomacy and conflict‑resolution, but the same architecture can turbo‑charge misinformation or automate cyberattacks. The takeaway: “autonomy at scale” cuts both ways, and policy needs to keep pace.
A personal journey from Beirut to Silicon Valley
One of my favorite moments was hearing Tiffany describe leaving Beirut after the 2020 explosion and promising herself she’d return, armed with technology, security, and leadership. That promise now guides her Stanford work and her advisory role with Lebanon’s Ministry of IT & AI. It’s easy to talk abstractly about “global impact”; it’s rarer to meet someone turning that phrase into policy drafts and capacity‑building at home.
What startups (and the rest of us) should do next
Tiffany’s call to action is simple but demanding:
- Get an AI‑policy voice on your team early. Waiting until regulators knock is a losing strategy.
- Prioritize AI literacy—company‑wide. Engineers, marketers, and executives all need a shared language for risk.
- Think of security as a feature, not overhead. Users want speed, but they also want safety; build both.
Why this episode matters
Recording this conversation left me energized and a little uneasy—in the best possible way. I’m bullish on AI, but Tiffany reminded me that progress isn’t just bigger models and smarter apps. It’s also the messy, essential work of aligning incentives, guarding data, and making sure humans stay in the loop.
If you’re building with AI, regulating it, or simply curious about where the tech is headed, give the full episode a watch. I think you’ll walk away re‑examining your own “automation comfort zone” and, hopefully, adding a few guardrails before the next big leap.
Thanks for reading—and big thanks to Tiffany for bringing both expertise and heart to the table. See you in the next AI Insights SF episode.



