Summary
- Cloudflare One now gives IT teams visibility into how employees interact with generative AI tools such as ChatGPT, Claude, and Gemini, via agentless API integrations and real-time prompt controls.
- The features include a Shadow AI report that identifies every AI tool in use across an organization.
- It also includes AI Prompt Protection, which can alert on or block employees who try to upload sensitive data to AI platforms.
Employers may love generative AI right up until an employee inadvertently pastes internal financial data or proprietary code into ChatGPT, Claude, or Gemini, putting the company's confidential information at risk.
Cloudflare, whose technology underpins almost 20% of the web, has added AI monitoring to its enterprise security platform, Cloudflare One. The feature gives IT teams immediate visibility into who is using AI and what data they are sharing with it. The company pitches it as oversight of employees' AI interactions, built into the existing IT dashboard.
“Admins can now ask: What activities are our employees conducting in ChatGPT? What data is being uploaded in Claude? Is Gemini properly set up in Google Workspace?” the company noted in a blog post.
No More Shadow AI
Cloudflare says 75% of employees use ChatGPT, Claude, or Gemini at work for tasks ranging from text editing to data analysis and design. Yet sensitive information often flows into these AI tools without leaving any record. Cloudflare's offering plugs in at the API level to detect suspicious uploads.
The company notes that a single careless prompt can feed valuable confidential data into an external model, where it can then vanish beyond the company's reach for good.
Major competitors in enterprise security, like Zscaler and Palo Alto Networks, also offer AI monitoring. Cloudflare claims that its distinguishing feature is its hybrid, agentless model. This approach merges out-of-band API scanning (for posture, configuration, and data leaks) with inline prompt controls across ChatGPT, Claude, and Gemini, all without requiring software installations on endpoints.
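Cloudflare has not published the internals of these controls, but the general idea of an inline prompt control is straightforward: inspect an outbound prompt for sensitive patterns before it reaches the AI provider, then alert or block according to policy. The sketch below is a hypothetical, simplified illustration of that concept in Python; the patterns, function name, and policy modes are assumptions for demonstration, not Cloudflare's actual rules or API.

```python
import re

# Illustrative patterns an inline prompt control might flag.
# Generic examples only, not Cloudflare's actual detection rules.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def check_prompt(prompt: str, mode: str = "alert") -> dict:
    """Scan an outbound AI prompt and decide whether to allow, alert, or block.

    `mode` mimics a policy setting: "alert" records the match but lets the
    prompt through; "block" stops it before it reaches the AI provider.
    """
    matches = [name for name, pattern in SENSITIVE_PATTERNS.items()
               if pattern.search(prompt)]
    if not matches:
        return {"action": "allow", "matches": []}
    action = "block" if mode == "block" else "alert"
    return {"action": action, "matches": matches}

# Example: an employee pastes an AWS-style access key into a chatbot prompt.
result = check_prompt("Summarize this config: AKIAABCDEFGHIJKLMNOP", mode="block")
print(result)  # {'action': 'block', 'matches': ['aws_access_key']}
```

In a real gateway this kind of check would sit in the traffic path between the employee's browser and the AI service, which is why the agentless, proxy-based approach matters: nothing needs to be installed on the endpoint itself.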
Free Speech Stance
Cloudflare has long portrayed itself as a content-neutral infrastructure provider rather than a content moderator, and it typically declines to police what its clients publish unless legally compelled. The stance stretches back more than a decade: CEO Matthew Prince has emphasized that Cloudflare is not a host and does not decide which content is acceptable; it simply keeps websites of every ideology online and protected.
This "free speech absolutist" stance has drawn criticism. Detractors argue that Cloudflare has kept hateful, extremist, or otherwise harmful sites online, often simply because no formal demand to remove them was ever made. A 2022 Stanford study found that Cloudflare serves a disproportionate share of misinformation websites relative to its share of overall internet traffic.
Still, there have been rare exceptions. In 2017, Cloudflare terminated service for the white supremacist site The Daily Stormer, a contentious decision made only after the site falsely claimed that Cloudflare secretly endorsed its pro-Nazi stance. Prince later described the move as a reluctant exception forced by outside pressure.
Similarly, in 2019, Cloudflare severed ties with 8chan after the forum was linked to mass shootings, concluding that the community had become dangerously lawless.
Most recently, in 2022, Cloudflare dropped Kiwi Farms amid escalating harassment, doxxing, and threats to life, following sustained pressure from activists.