Do you trust AI services 100%?
Sep 29, 2025
I enjoy exploring and building new tools, so I’m usually open to trying early AI products.
But when using tools from small startups, I can’t help but ask myself a few questions.
Do they “really” care about security?
How can we be sure that a company takes risks like malware or prompt injection seriously, and that it is applying a genuine multi-layered defense strategy? Privacy policies often include a line like:
“We consider Input Sanitization, Sandbox, and the Principle of Least Privilege.”
But is one sentence like that really enough to earn trust?
Do they “really” not use my data?
How can we trust a service’s promise that it doesn’t store personal data or use it for training? Users are left anxious, relying only on lengthy privacy policies.
Service providers also face a dilemma
On the other side, early startups face a paradox:
they need trust to gain users,
but they need users before they can afford to invest in trust.
While chasing product-market fit, they’re also expected to prove their security posture, yet few early-stage companies have the budget for certifications like SOC 2 or the bandwidth to dedicate serious resources to security.
What are some realistic solutions?
I've been thinking deeply about this problem and searching for solutions, and I found inspiration in the standard that secured the internet: SSL.
Just as SSL gave the web its 🔒 lock icon, the AI era needs an intuitive, low-cost "AI Trust Protocol" that startups can actually adopt.

Imagine a world where:
🔧 For Builders:
Proving AI security is as easy as installing a verified, open-source toolchain.
🙋 For Users:
Checking a service's trustworthiness is as simple as looking for a trust signal while using the service (a rough sketch of what that could look like follows below).
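
To make that concrete, here is a minimal TypeScript sketch of what a user-side check could look like if services published a machine-readable trust manifest, similar in spirit to security.txt or an SSL certificate. Everything here is an assumption for illustration: the /.well-known/ai-trust.json path, the manifest fields, and the thresholds are all hypothetical, not an existing standard, and a real protocol would need signed attestations and independent verification rather than self-declared JSON.

```ts
// Hypothetical sketch of an "AI Trust Protocol" client check.
// The manifest path, field names, and thresholds below are assumptions
// for illustration only; no such standard exists today.

interface AiTrustManifest {
  service: string;
  trainsOnUserData: boolean;          // does the provider train on user inputs?
  dataRetentionDays: number;          // how long prompts/outputs are stored
  promptInjectionDefenses: string[];  // e.g. ["input-filtering", "sandboxed-tools"]
  attestationUrl?: string;            // link to a third-party audit or signed attestation
}

// Fetch the (hypothetical) manifest from a well-known location on the service's domain.
async function fetchTrustManifest(origin: string): Promise<AiTrustManifest | null> {
  try {
    const res = await fetch(`${origin}/.well-known/ai-trust.json`);
    if (!res.ok) return null;
    return (await res.json()) as AiTrustManifest;
  } catch {
    return null; // no manifest: nothing to verify, so no trust signal
  }
}

// Collapse the manifest into the kind of simple signal a client UI could render,
// the way a browser collapses a certificate chain into a single lock icon.
function trustSignal(manifest: AiTrustManifest | null): "🔒 verified" | "⚠️ unverified" {
  if (!manifest) return "⚠️ unverified";
  const ok =
    manifest.trainsOnUserData === false &&
    manifest.dataRetentionDays <= 30 &&
    manifest.promptInjectionDefenses.length > 0;
  return ok ? "🔒 verified" : "⚠️ unverified";
}

// Usage sketch:
// fetchTrustManifest("https://example-ai.app").then(m => console.log(trustSignal(m)));
```

The point is the shape of the experience, not the specific fields: the client fetches a standard artifact and reduces it to a single signal shown inside the product, just as browsers reduce a certificate chain to a lock icon.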
As both a user and a designer who advocates for users,
I believe AI security must move beyond lengthy privacy policies and static certification badges on a landing page.
It's time to prove trust as a real feature delivered inside the product itself.
The question that remains is:
Who leads the way?
Should a neutral body like the Linux Foundation take charge, or will this be a decentralized, community-driven effort?
What's your take?