Sinaptic® AI
We asked 32 European companies about their AI adoption. Most were lying to themselves.
FIELD RESEARCH

The Real State of AI Adoption in Europe

We interviewed 32 European organizations across industries — from pharma to fintech, travel to retail. What we found challenges most assumptions about enterprise AI readiness.

Key findings

Four patterns emerged consistently across all 32 interviews.


Shadow AI Is Near-Universal

87% of organizations with active AI usage reported unsanctioned AI tool use.


The Readiness Gap Is Vast

From zero AI exploration to dedicated AI labs of 10–14 people.


Monitoring Is an Afterthought

Only 2 of 32 had AI monitoring as a foundation.


Productivity Gains Vary Wildly

Context determines value — not the tool itself.

Legacy: 0% · Average: ~15% · Greenfield: +40%

Abstract

Between late 2025 and early 2026, we conducted in-depth interviews with 32 European organizations spanning pharmaceuticals, healthcare, cancer research, data analytics, travel, fintech, retail, logistics, and professional services.

The findings paint a picture far more nuanced than typical survey data suggests. Shadow AI is effectively universal — even heavily regulated organizations cannot prevent it. The readiness spectrum is dramatically wider than expected. And the tools most companies reach for first are consistently deployed as afterthoughts rather than foundations.

Field insights

Six representative cases from the 32 interviews.

01 / 06
PharmaNordics

Nordic Pharmaceutical Association

A traditional industry body representing hundreds of member companies. Has not yet explored generative AI tools in any capacity.

The silent majority at absolute zero on the AI journey.

02 / 06
ResearchFrance

French Cancer Research Institute

Uses a 3-level confidentiality classification (C1/C2/C3). Runs local AI instances for sensitive research data. Views AI-specific DLP as a "false concern" — believes user responsibility and training matter more than technical controls.

User responsibility over technical controls — but human behavior has blind spots.
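The classification-based routing ResearchFrance describes can be sketched in a few lines. Only the C1/C2/C3 tier names come from the interview; everything else here (which tiers stay local, the endpoint names) is a hypothetical illustration, not their actual setup:

```python
# Hypothetical router for a C1/C2/C3 confidentiality scheme.
# Assumption: C2 and C3 are the sensitive tiers that must stay on local instances.
LOCAL_ONLY = {"C2", "C3"}

def route(data_class: str) -> str:
    """Pick an AI endpoint based on the confidentiality tier of the data."""
    if data_class not in {"C1", "C2", "C3"}:
        raise ValueError(f"unknown classification: {data_class}")
    return "local-instance" if data_class in LOCAL_ONLY else "external-api"

print(route("C1"))  # external-api
print(route("C3"))  # local-instance
```

The point of a rule this explicit is that it can be audited; the interview's "user responsibility" stance leaves the same decision to individual judgment.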

03 / 06
AnalyticsCentral EU

Central European Data Analytics

Actively uses ChatGPT, Cursor, and Claude Code for development. Shares code subsets and metadata with LLMs, but never customer data. Has internal AI usage policies but acknowledges significant enforcement gaps.

Rules exist on paper — nobody knows what data reaches external AI.

04 / 06
TravelPan-European

Major European Travel Conglomerate

A €20B+ enterprise with a dedicated AI lab of 10–14 people exploring agentic internet applications. No GRC or AI monitoring in place. Teams show blind trust in AI-generated outputs without verification.

Blind trust in AI outputs. Search traffic eroding as AI-powered search reshapes discovery. Even advanced adopters lack governance.

05 / 06
FintechEU-Regulated

EU-Regulated Fintech Platform

Enforces strict MDM security policies and DLP browser plugins for Chrome. Has deployed dedicated LLM security tooling. Operates under EU financial regulations with formal compliance requirements.

Employees bypass DLP via non-Chrome browsers. Shadow AI persists under strictest enforcement.

06 / 06
RetailEastern EU

Eastern European Retail Holding

Transitioned from in-house AI models to external LLM APIs. Speed of development prioritized over security. Mass shadow AI usage in engineering with zero monitoring or visibility.

Legacy teams: 0% gain. Greenfield: +40%. Context determines ROI.

Patterns we observed

Cross-cutting themes across industries, sizes, and geographies.

Blocking fails. Monitoring is absent. The middle ground is empty.

Organizations either try to block AI entirely (and fail) or allow it freely with no visibility. Almost nobody occupies the rational middle: allow AI, but monitor what data flows where.

Policies without enforcement create a false sense of security.

Multiple organizations have written AI usage policies. None could confirm those policies are consistently followed. Written policies without monitoring are theater.

Have policies: ~60% · Can verify compliance: 0%

AI maturity does not correlate with AI governance.

The travel conglomerate has a dedicated AI lab — yet has no GRC framework and no monitoring. Technical sophistication and governance maturity are on entirely separate tracks.

The "training over tools" argument has limits.

User responsibility matters — but it assumes perfect human behavior. Trained employees with clear policies still bypass controls when tools create friction.

AI is already disrupting revenue, not just operations.

AI-powered search engines are actively reshaping external business models. Organizations that focus solely on internal governance miss the larger strategic picture.

Shadow AI creates workslop — and nobody tracks the damage.

Across all organizations with active AI usage, shadow AI consistently leads to workslop — the practice of outsourcing a task to AI instead of doing it yourself, then using the output without verification. In most cases this leads to incorrect results due to hallucinations, or to rework that wastes more resources than the original task would have required. In multiple cases, hallucinated AI reports reached C-level board presentations without anyone questioning the data. The damage is real but invisible: no organization we interviewed tracks losses or harm caused by unverified AI outputs. There is no incident reporting, no quality audit trail, no feedback loop. The risk compounds silently.

Nobody thinks about AI's environmental footprint — but everyone wants to.

Not a single organization we interviewed actively tracks the carbon, water, or energy footprint of their AI usage. Yet when asked, 100% of respondents said they would find such functionality useful. It is not a priority and there is no budget for it — but the interest is universal. The gap exists because no existing tool makes it easy. This insight directly informed our decision to embed environmental impact tracking into Sinaptic® DROID+ — not as a premium add-on, but as a built-in feature.

Closing the gap

Every organization we spoke to confirmed the same structural gap: they have AI usage, but no AI visibility. Policies exist but nobody can verify compliance. Shadow AI produces hallucinated outputs that reach decision-makers unchecked. Environmental costs accumulate invisibly. And the organizations most advanced in AI capabilities are often the least advanced in governing them.

The solution is not more policies, stricter blocking, or better training alone. Each of those addresses a symptom. The root cause is the absence of an observation layer — infrastructure that makes AI activity visible before attempting to govern it.

What the market offers today

Several approaches exist, each with trade-offs:

Enterprise DLP vendors (Netskope, Zscaler, Forcepoint)

Strong network-level controls, but treat AI as just another SaaS app. No understanding of prompt content, no intent analysis, no AI-specific governance. Designed for data loss prevention, not AI governance.

AI-native security startups (Prompt Security, Lakera, Robust Intelligence)

Focus on prompt injection and output safety, but typically cover only the API layer. They don't address shadow AI in browsers, offer no organizational visibility, and provide limited GRC integration.

Internal governance frameworks (manual policies, training programs)

Necessary but insufficient. Our research shows 0% verified compliance across all organizations with written policies. Training helps but cannot prevent the friction-driven workarounds that sustain shadow AI.

Observation before regulation

The gap we observed is not a product gap — it is a methodology gap. Organizations don't fail at AI governance because they lack tools. They fail because they start with the wrong step: writing policies before understanding what is actually happening.

The M3 Framework was designed specifically to address this sequence problem. It is an open standard, not a proprietary product — anyone can implement it with any tooling. The methodology is simple: Mount observation infrastructure first, Monitor actual AI behavior to establish a factual baseline, then Manage with evidence-based policies that reflect reality rather than assumptions.

Regardless of which tooling an organization chooses, the methodology applies. The critical insight from this research is that observation must precede regulation. You cannot govern what you cannot see.
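As a sketch of what the Mount → Monitor → Manage sequence implies in practice, the following minimal Python shows the three steps: an observation sink that records events before any policy exists (Mount), a factual baseline derived from those events (Monitor), and a rule written only after the baseline exists (Manage). The event schema and every name here are hypothetical; M3 itself prescribes no particular implementation:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class AIEvent:
    """One observed AI interaction (hypothetical schema)."""
    user: str
    tool: str        # e.g. "chatgpt", "cursor"
    data_class: str  # e.g. "public", "internal", "confidential"

@dataclass
class ObservationLayer:
    """Mount: record events before any policy exists."""
    events: list = field(default_factory=list)

    def record(self, event: AIEvent) -> None:
        self.events.append(event)

    def baseline(self) -> Counter:
        """Monitor: a factual baseline of what actually happens."""
        return Counter((e.tool, e.data_class) for e in self.events)

def flag_confidential_flows(layer: ObservationLayer) -> list:
    """Manage: an evidence-based rule, written only after observing."""
    return [e for e in layer.events if e.data_class == "confidential"]

layer = ObservationLayer()
layer.record(AIEvent("alice", "chatgpt", "internal"))
layer.record(AIEvent("bob", "cursor", "confidential"))
print(layer.baseline())
print([e.user for e in flag_confidential_flows(layer)])  # ['bob']
```

Note the ordering: the flagging rule consumes recorded evidence rather than assumptions, which is the inversion of the policy-first approach the research found failing.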

For organizations ready to act on these findings, the observation layer can take multiple forms. Browser DLP addresses shadow AI at the browser level — where most unsanctioned AI usage actually happens. Sinaptic AI Intent Firewall® provides runtime verification for AI agents that are already sanctioned — ensuring that every action an agent takes is verified against organizational policy before execution. Together with the M3 methodology, they form a complete governance stack: see what AI does, verify what AI intends, manage based on evidence.
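The "verify every action before execution" idea can be illustrated with a tiny wrapper. This is a generic sketch of runtime action verification under an invented policy format, not the Intent Firewall's actual API:

```python
from typing import Callable

# Hypothetical policy: allowed (verb, resource-prefix) pairs.
POLICY = {
    ("read", "public/"),
    ("read", "internal/"),
    ("write", "drafts/"),
}

def allowed(verb: str, resource: str) -> bool:
    """Check an intended action against the policy before it runs."""
    return any(verb == v and resource.startswith(p) for v, p in POLICY)

def guarded(execute: Callable[[str, str], str]) -> Callable[[str, str], str]:
    """Wrap an agent's executor so every action is verified first."""
    def wrapper(verb: str, resource: str) -> str:
        if not allowed(verb, resource):
            return f"BLOCKED: {verb} {resource}"
        return execute(verb, resource)
    return wrapper

@guarded
def agent_execute(verb: str, resource: str) -> str:
    return f"OK: {verb} {resource}"

print(agent_execute("read", "public/report.txt"))   # OK: read public/report.txt
print(agent_execute("write", "finance/ledger.db"))  # BLOCKED: write finance/ledger.db
```

The design choice worth noting is that verification sits between intent and execution, so a hallucinated or out-of-policy action fails closed instead of silently succeeding.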

Want to see your own AI landscape?

The organizations in this study all had the same blind spot: no visibility. The M3 Framework fixes that in days, not months.