The Trust Problem: What This Week's AI Policy Collision Means for Enterprise
This week gave me some serious AI Trust whiplash. Ready?
Monday: California signs the nation's first AI chatbot safety law. Protecting kids. Seems like a good move.
Tuesday: OpenAI announces it will allow erotica on ChatGPT starting December. (Oh, and just by the by? A day earlier, OpenAI inked a multi-billion dollar chip deal with Broadcom.)
This isn't coincidence. This is the moment when Consumer AI and Enterprise AI officially split apart. And if you're responsible for AI adoption in your organization, you need to understand what's happening here.
THE COLLISION
AI companies are facing two completely incompatible markets.
On one side, consumers want fewer restrictions and more personalization.
They want AI that feels human. There’s a huge demand for AI relationships. That's why OpenAI announced they'll allow erotica for verified adults starting December, calling it their "treat adult users like adults" principle.
On the other side, enterprises want control. Auditability. Most importantly: zero liability risk. You need predictable, governable tools that won't blow up in your face.
These demands cannot coexist in the same product. The companies know it. And this week, we watched it play out in real time.
California Governor Newsom signed SB 243 on October 13, requiring AI chatbot operators to implement suicide prevention measures and age verification. The law takes effect January 1, 2026. It's protective. It's cautious. It's designed to keep people safe.
Less than 24 hours later, OpenAI moved in the opposite direction for adults. Mark Cuban immediately called it out, warning the move would "backfire hard" because "no parent is going to trust that their kids can't get through your age gating."
Here's what that means for you: If your employees are using ChatGPT at work, and that same brand is now associated with adult content, you have a perception problem. Even if your enterprise version is completely locked down, the brand association matters.
WHY THIS KILLS ADOPTION
Here’s why this is tricky for our goal of AI adoption (it’s kinda what we obsess about around here at AI Mindset).
I work with hundreds of companies on AI adoption, and I see the same pattern everywhere. The barrier isn't technology. It’s behavioral.
But let’s just focus on one element for a second: Trust.
I hear it all the time – it’s the excuse tons of organizations use, especially in regulated industries. “We can’t use AI – we don’t trust it.”
Leadership doesn't trust that AI tools won't create liability. Every headline about inappropriate chatbot conversations reinforces this fear. And when Sam Altman announces looser content restrictions the day after a child safety law passes, that doesn't exactly inspire confidence in enterprise decision-makers.
IT doesn't trust that it can control what employees actually do with these tools. You need to update acceptable-use policies, logging, and data loss prevention (DLP) rules before December, especially on shared devices. But here's the reality: most organizations haven't figured out basic AI governance yet, and now the target keeps moving.
Employees don't trust that using AI won't get them in trouble. When the rules keep changing and the headlines keep getting worse, the safest move is often to just not use it. This is the hidden cost of the trust crisis: people quietly opt out rather than risk making a mistake.
The result? You can deploy all the enterprise AI tools you want. But if trust is the barrier, and leadership slows every behavioral shift because they don't trust AI, you end up with expensive shelfware and a workforce that's more confused than empowered.
WHAT TO DO NOW
Look, you can't wait for AI companies to solve this. They're making business decisions based on consumer demand and competitive pressure, not on what makes your compliance officer sleep better at night.
Here's what you need to do:
Separate Consumer and Enterprise AI in Your Policies
Stop treating "AI" as one thing in your organization.
Create two clear lists. Approved Enterprise Tools: Microsoft Copilot, Google Gemini Enterprise, Claude for Work, ChatGPT Enterprise. Make it crystal clear these are the only tools employees should use for work tasks.
Consumer AI: Everything else. Free ChatGPT, Character.AI, Replika, all the consumer chatbots. Your policy should state clearly: no company data, no customer information, no proprietary material in these tools. Ever.
The key here is clarity, not prohibition. Employees need to know exactly where the line is, not guess whether something is okay.
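If your IT team wants that line to be enforceable rather than aspirational, it helps to write the two lists down in a form a proxy or DLP rule can actually consume. Here's a minimal sketch in Python; the tool names and domains are illustrative placeholders, not an official allowlist, so swap in whatever your vendor contracts actually cover.

```python
# Illustrative sketch only: tool names and domains below are placeholders,
# not an official allowlist. Adapt them to your own approved-vendor list.

APPROVED_ENTERPRISE_TOOLS = {
    "chatgpt-enterprise": "chatgpt.com",        # only under an Enterprise workspace
    "microsoft-copilot": "copilot.microsoft.com",
    "gemini-enterprise": "gemini.google.com",
    "claude-for-work": "claude.ai",              # only under a Work/Team org
}

CONSUMER_AI_DOMAINS = {
    "character.ai",
    "replika.com",
    # free/personal chatbot domains go here
}

def classify_destination(domain: str) -> str:
    """Return 'enterprise', 'consumer', or 'unknown' for a given AI domain."""
    domain = domain.lower().strip()
    if domain in APPROVED_ENTERPRISE_TOOLS.values():
        return "enterprise"
    if domain in CONSUMER_AI_DOMAINS:
        return "consumer"
    return "unknown"
```

One caveat worth flagging to your team: a domain alone can't tell a free ChatGPT login apart from an Enterprise seat. That distinction has to come from SSO enforcement and tenant restrictions, not from network rules.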
Lock Down Data Before December
You have six weeks before OpenAI's changes roll out. Use them.
If you're using consumer ChatGPT accounts anywhere in your organization, migrate to ChatGPT Enterprise now. If you're using other consumer AI tools, same thing: move to the enterprise versions or cut them off entirely. This isn't optional anymore.
Audit your systems. Who in your organization has access to AI tools? What data can those tools access? Can employees paste customer emails, proprietary code, or confidential documents into consumer chatbots? If you don't know the answers, you have a problem that's about to get worse.
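If you want a concrete starting point for that audit, one low-tech approach is to pull a traffic export from your proxy, CASB, or DLP tool and count who is hitting consumer AI domains. The sketch below assumes a CSV export with user and destination_domain columns and a placeholder domain list; your tool's export format and your list will differ.

```python
import csv
from collections import Counter

# Placeholder list: replace with the consumer AI domains you actually block.
CONSUMER_AI_DOMAINS = {"character.ai", "replika.com"}

def audit_proxy_log(path: str) -> Counter:
    """Count hits to consumer AI domains per user from a proxy log export.

    Assumes each row has 'user' and 'destination_domain' columns; adjust the
    column names to match whatever your proxy or DLP tool actually produces.
    """
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["destination_domain"].lower().strip()
            if domain in CONSUMER_AI_DOMAINS:
                hits[row["user"]] += 1
    return hits

if __name__ == "__main__":
    for user, count in audit_proxy_log("proxy_export.csv").most_common():
        print(f"{user}: {count} consumer-AI hits")
```

Even a rough count like this tells you whether you have a handful of stragglers to migrate or a shadow-AI problem that needs a real remediation plan.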
Build Trust Through Transparency
Here's where most organizations get it wrong: they respond to trust problems by adding more rules and restrictions.
I get it. I do! But it backfires. Heavy-handed policies don't build trust. They build resentment and workarounds.
Instead, explain why you're making these changes. Show your teams this week's news: the California law, the OpenAI policy change, what it means for the organization. When people understand the why, they're much more likely to comply with the what.
Tell employees what they can and should do with AI, not just what's forbidden. Most people want to use AI effectively and responsibly. Give them clear guidance to do it right.
Choose Your AI Partners Carefully
Not all AI companies are prioritizing the same things. Some are optimizing for consumer engagement and growth. Others are building for enterprise trust and governance.
When you're evaluating AI tools, ask yourself: Are they building for your use case, or are you an afterthought to their consumer strategy? Look at where they're investing. Enterprise governance features and compliance capabilities? Or consumer engagement features that create headaches for IT?
And accept this: Your AI governance isn't a one-time project. The landscape is moving too fast. You need someone to monitor developments and update policies accordingly. Build relationships with your vendor enterprise teams so you hear about changes before they hit the headlines.
AI NEWS OF THE WEEK
Salesforce Launches Agentic Enterprise and New Slack
At Dreamforce, Salesforce unveiled the Agentic Enterprise vision, introducing Agentforce 360: a unified platform where AI agents manage routine and complex business tasks across all departments. Slack was redesigned as the central hub for collaboration between humans and AI agents, allowing users to communicate with AI effortlessly within their usual workflows.
Gemini Enterprise: AI at Work
Gemini Enterprise really wants to unify workplace AI under one roof. Google is calling it "the front door for workplace AI": one place that brings together chat, agents, and enterprise data. The idea is that you connect Google Workspace, Microsoft 365, Salesforce, and SAP to a single platform.
OpenAI and Broadcom Partner on Custom AI Chips
OpenAI and Broadcom announced on October 13, 2025, a collaboration to deploy 10 gigawatts of custom AI accelerators. Systems begin deployment in late 2026. Broadcom stock jumped 9.88% on the news as OpenAI pushes to control more of its infrastructure stack.