Most companies are asking the wrong question about AI safety.
They ask: “Is this AI tool secure?”
They should be asking: “What exactly are we protecting — and from whom?”
A recent Guardian investigation found serious flaws in hundreds of AI safety benchmarks — the very tests meant to prove AI systems are safe and reliable.
Flawed benchmarks matter, but for most companies they aren’t where the biggest risks lie. The real risks stem from everyday operational choices, like treating AI as “just another SaaS tool” instead of as a system with privileged access to your entire operation.
The companies getting this right aren’t moving slower — they’re asking sharper questions. Here’s how they’re thinking about it:
- Where does this data actually go? (“The cloud” isn’t an answer.)
- Who can access it — including employees at the AI provider?
- Is our data training someone else’s model?
- Can AI make decisions, or only recommendations?
It’s the same approach you’d take when onboarding a senior hire. Clear boundaries, limited permissions, earned trust over time.
That mindset builds confidence without slowing innovation. The path forward isn’t avoidance. It’s controlled experimentation.
Start small. Keep humans in the loop. Scale once the controls actually work.
If you’re building AI governance frameworks right now, we’ve documented what’s working (and what isn’t): Brim - AI Powered Business System