Consultants Are Key to Making AI-Driven Strategies Worth Trusting
AI systems are no longer experimental tools sitting on the edge of operations. They’re becoming central to how companies make decisions, deliver value, and create new business models.

But with that shift comes a bigger responsibility — one that goes far beyond technical accuracy or ROI.
In a recent session from The Consultant’s Playbook for AI-Driven Strategy, experts from Cambrian, Deloitte, KPMG, and Trendtracker came together to explore a topic that doesn’t always get the same attention as LLMs or agents: governance.
And yet, it’s governance that will determine whether organizations can deploy AI at scale, and whether their clients will trust the results.
Here’s what we learned.
AI governance begins with trust, not control
Frédérique Joos, Technology Partner and Founder at Cambrian, opened with a sharp reminder: legal frameworks like the EU AI Act aren’t just about compliance. They’re about creating trust.
Just as the GDPR aimed to restore trust in how companies handle personal data, the AI Act aims to do the same for intelligent systems. But in practice? These frameworks often create the opposite effect.
Instead of clarity, they provoke confusion. Instead of trust, they trigger anxiety.
Especially in organizations just beginning to explore AI. Are we considered high-risk? What if we misclassify a system? What tools do we need to assess that risk — and who’s responsible?
That uncertainty is exactly what governance should resolve.
When done well, governance doesn’t slow things down. It creates clarity. It helps organizations understand how AI fits into their operations, how suppliers are using it, and how outputs are validated and traced. Trustworthy systems — like trustworthy data policies — don’t emerge from individual tools or documents. They’re built into culture.
“We need to acknowledge that everyone will be working in cascades of technology,” Joos said. “So we need to challenge our suppliers, question our own tools, and understand how we implement AI across the full lifecycle.”
If Joos framed governance as a matter of trust, Louis Longeval — Commercial Law Consultant at Deloitte — brought it back to the legal core. His point? Trust needs structure. And in AI, structure isn’t simple.
Limiting governance to the AI Act alone overlooks key legal dimensions — from data protection and nondiscrimination to intellectual property. That complexity means governance can’t be siloed or static. It has to evolve alongside both regulation and technology.
“Governance is about changing structures and embedding a new culture,” Longeval said. “If leaders don’t actively communicate its importance, nothing will change.”
Regulators, he noted, are often playing catch-up. ChatGPT wasn’t even part of the conversation when the first draft of the AI Act was written — and there’s still no clear guidance for governing agents.
So what can companies do now?
Longeval offered a starting point: define a company-wide AI strategy, establish internal policies, and build an inventory of tools. Appoint “AI champions,” train teams, and make sure every use case aligns with broader business goals.
Compliance isn’t just about avoiding risk. It’s about building the foundation for AI that scales — and earns trust along the way.

But how do you govern what you can’t see?
It’s one of the most urgent — and least visible — challenges in AI strategy today. As systems evolve from dashboards to dynamic agents that learn, act, and operate across tools, the governance gap widens. The more autonomous the AI, the harder it becomes to track what it's doing — and why.
Bart Van Rompaye, Head of Advanced Analytics and Machine Learning at KPMG, put it bluntly: AI agents aren’t static models. They don’t just classify data or generate summaries. They take actions — often based on natural language prompts from business users who may not fully understand what they’ve activated.
“It’s not the IT teams setting up the agents,” Van Rompaye said. “It’s the business. And that leads to inconsistent interpretations, incomplete documentation, and a whole new class of shadow AI.”
In this new reality, traditional governance doesn’t scale. Policy PDFs go unread. Training sessions can’t keep up. The risks aren’t just technical — they’re embedded in everyday workflows.
Van Rompaye’s answer? Human-scale governance — systems where risks are surfaced contextually, embedded in the tools people actually use, and monitored by AI compliance agents.
It’s not a future ideal. It’s a design brief for right now.

One moment from the session captured the stakes:
How can consultants assure their clients that AI-generated recommendations — presented in familiar formats like strategic reports — are actually trustworthy?
Clients know that AI is shaping these insights. Some even know the consultants are using platforms like Trendtracker. So what guarantees can they give?
Helping clients ask better questions about AI? Let’s make sure you have the answers. Join the Trendtracker Partnership Program — a network for consultants building transparent, trusted AI strategies.
The transparency gap and why consultants must help close it
“Can we trust this?” It’s the question clients are starting to ask — and consultants can’t afford to brush it off. As AI-generated recommendations land in boardrooms, many know these insights were shaped by agents, tools, and teams their clients can’t see. And when trust is murky, strategy stalls.
In the session, a common thread emerged: transparency isn’t a side concern. It’s the foundation of AI governance.
Consultants — whether advising on strategy, building models, or deploying platforms — must be clear about what their systems do, where data comes from, and what limitations exist. Without that, the promise of AI quickly becomes a black box.
But the issue isn’t only technical. Many organizations are hesitant to fully disclose how much AI is being used — or whether their systems would stand up to scrutiny. As Joos put it, there’s a kind of “AI shame” quietly spreading inside companies.
That’s where consultants have a critical role. Not just to advise, but to create clarity. As Frédérique from Cambrian emphasized, human oversight is only effective when it’s informed. The “human in the loop” must understand what to assess — and when to intervene. But as AI becomes more embedded in workflows, that becomes harder to guarantee.
Louis from Deloitte added that responsibility alone isn’t enough. People need tools to interpret what AI is doing — and explain it. With agents now performing complex, multi-step tasks across systems, opacity is only increasing.
Which leads to a harder question: Can human oversight actually scale alongside AI?
That’s the challenge modern governance must solve. And consultants can help by working with clients to:
- Map AI use cases across the business
- Evaluate supplier reliability
- Translate legal requirements into operational guardrails
- Embed oversight in ways that are not only meaningful but scalable

Platforms like Trendtracker can help here by surfacing insights, tracing outputs, and adding visibility to decision flows.
Van Rompaye put it clearly:
“We’ve done a lot of good things. But they’re still not good enough for the future.”
Why governance is the foundation for strategic AI
If there was one message from the session, it was this: governance isn’t a side task. It’s the backbone of strategic AI.
It enables trust. It enables scale. And without it, even the most advanced systems risk becoming unmanageable or, worse, untrusted.
As companies adopt more agentic, autonomous tools, the old governance playbook falls apart. Static documentation. One-off trainings. Siloed accountability. None of it holds up.
Consultants, strategists, and legal advisors now face a critical role: shaping what comes next.
That means helping clients build systems that are not just compliant, but explainable. Not just safe, but scalable. Not just smart, but trusted.
Because the future of AI won’t hinge on what the technology can do — but on whether people believe in the systems that use it.
And belief doesn’t come from code. It comes from governance, adoption, and design that puts clarity at the center.
Want to explore the full session? Rewatch the conversation with experts from Cambrian, Deloitte, KPMG, and Trendtracker as they unpack what AI governance really requires, and how consultants can lead the way.
Ready to build trusted AI-powered strategies with your clients?
The Trendtracker Partnership Program is built for consultants and advisors working at the intersection of foresight and strategic intelligence.
Join a growing network of partners using Trendtracker to deliver insights that are transparent, scalable, and client-ready.