Facing the friction: the challenges of AI in BI 

This is the third article in our AI & BI series. After covering the evolution of BI and real-world applications, we now look at the challenges organizations face when adding AI to business intelligence.

The data behind the insight

It’s no secret that AI depends on good data. For years, we’ve followed the simple rule: garbage in, garbage out. Even with powerful models like large language models (LLMs), this rule still applies. These models can deliver helpful answers based on their training, but if you want them to work well with your own data, that data needs to be in great shape. 

When it comes to BI, that means your datasets and semantic models - the layer behind your dashboards - need to be well organized. These are the building blocks that tools like copilots or AI assistants rely on to answer questions. Unfortunately, this is often where things break down.

We’ve seen many cases where the data model is unclear, with vague field names, inconsistent metrics, missing relationships, or no metadata. These issues might not stop a dashboard from loading, but they can completely throw off an AI assistant. If the model doesn’t “speak” clearly, AI will misinterpret the data or generate wrong answers. 
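Many of these issues can be caught before an AI assistant ever touches the model. As a minimal sketch (the model structure, field names, and rules here are hypothetical, not tied to any specific BI tool), an automated lint over a semantic model definition might look like this:

```python
# Minimal lint for a semantic model definition (hypothetical structure).
# Flags vague column names, missing descriptions, and tables with no
# relationships - the issues that most often confuse an AI assistant.

VAGUE_NAMES = {"value", "data", "field1", "misc", "col1"}

def lint_semantic_model(model: dict) -> list[str]:
    issues = []
    # Collect every table that participates in at least one relationship.
    related = {t for rel in model.get("relationships", []) for t in rel}
    for table in model["tables"]:
        if table["name"] not in related:
            issues.append(f"{table['name']}: no relationships defined")
        for col in table["columns"]:
            if col["name"].lower() in VAGUE_NAMES:
                issues.append(f"{table['name']}.{col['name']}: vague name")
            if not col.get("description"):
                issues.append(f"{table['name']}.{col['name']}: missing description")
    return issues

model = {
    "tables": [
        {"name": "Sales", "columns": [
            {"name": "Revenue", "description": "Net revenue in EUR"},
            {"name": "value", "description": ""},
        ]},
        {"name": "Targets", "columns": [
            {"name": "TargetRevenue", "description": "Planned revenue"},
        ]},
    ],
    "relationships": [("Sales", "Calendar")],
}

for issue in lint_semantic_model(model):
    print(issue)
```

Running a check like this in CI, every time the model changes, turns "clean metadata" from a one-off cleanup into a standing quality gate.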

The best way to think about AI in this context is like a new team member. You wouldn’t expect someone to understand your business logic and tools on day one without any onboarding. The same goes for AI – it needs a well-built model and a clear structure to perform well.

Whether you’re using an out-of-the-box Copilot or a custom solution, your semantic model needs to be complete, consistent, and aligned with your business logic. If it’s not, the answers you get won’t just be off - they could be confusing, misleading, or even risky.

And speaking of risk: if the semantic model doesn’t include proper controls, you might also show users data they shouldn’t see. That’s why governance needs to go beyond data access. It should cover the structure, meaning, and logic that AI will use to generate answers.
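In practice, that means access rules must be enforced before any data reaches the AI layer, never after. A small illustrative sketch (the roles, rules, and data below are assumptions for the example, not a real API):

```python
# Sketch: apply row- and column-level access controls *before* any data
# is handed to the AI assistant as context. Roles and rules are illustrative.

ROW_RULES = {
    "sales_rep": lambda row: row["region"] == "EMEA",  # reps see their region
    "finance":   lambda row: True,                     # finance sees everything
}
HIDDEN_COLUMNS = {"sales_rep": {"salary"}}             # column-level control

def authorized_view(rows: list[dict], role: str) -> list[dict]:
    rule = ROW_RULES.get(role, lambda row: False)      # deny by default
    hidden = HIDDEN_COLUMNS.get(role, set())
    return [{k: v for k, v in row.items() if k not in hidden}
            for row in rows if rule(row)]

rows = [
    {"region": "EMEA", "revenue": 120, "salary": 80_000},
    {"region": "APAC", "revenue": 90,  "salary": 75_000},
]

# Only the authorized view is ever passed into the prompt context.
print(authorized_view(rows, "sales_rep"))
```

The key design choice is "deny by default": an unknown role sees nothing, so a gap in the rules fails closed rather than leaking data through the assistant.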

The risks of "AI-driven" without readiness

There’s a lot of buzz around AI right now. Many companies are rolling out copilots, buying licenses, and calling themselves “AI-driven.” But what we’ve seen is that without strong foundations, those efforts don’t go very far. 

If AI gives inconsistent or incorrect answers, trust disappears quickly. And once users lose trust, it’s tough to win them back. Most people will only give an AI assistant one chance to get it right. If it fails, they’ll go back to what they were using before.

The bigger issue is when people make important decisions based on flawed outputs. If AI suggests a next step or insight that’s wrong - and someone acts on it - there can be real business consequences. Often, the root cause is poor data structure. If the model is unclear or inconsistent, the AI can’t generate accurate results. This turns BI into a black box where no one can explain what’s going on.
 

That’s a huge problem for teams trying to build a data culture, and it’s unacceptable in regulated domains like finance, healthcare, or ESG reporting. If you can’t trace how a recommendation was made, you can’t rely on it - and your stakeholders won’t either.

The bottom line: if your BI isn’t ready for AI, adding AI won’t help. In fact, it can hurt your credibility and make people more skeptical of data tools overall. 

Building trust - governance, validation & oversight

One of the most common questions I hear is: How do I know the AI is giving me the right answer? And the honest truth is: you don’t - at least not always. AI models work with probabilities, not guarantees. Even in traditional machine learning, we accept some level of uncertainty. 
 

That’s why we need new ways to evaluate and explain AI results. Things like trust scores or confidence ratings can help business users judge if an output is solid or should be reviewed. These metrics should be part of the report, not hidden away. 
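One simple way to produce such a confidence signal is self-consistency: ask the model the same question several times and measure how often the answers agree. The sketch below uses a canned stub in place of a real LLM call, so the numbers are purely illustrative:

```python
# Sketch: a self-consistency "trust score" - sample the model repeatedly
# and report the agreement ratio alongside the answer.
# `ask_model` is a stand-in stub; in practice it would call your LLM.
from collections import Counter

def ask_model(question: str, sample: int) -> str:
    # Stub returning canned answers so the example runs without an LLM.
    canned = ["4.2M", "4.2M", "4.2M", "3.9M", "4.2M"]
    return canned[sample % len(canned)]

def answer_with_confidence(question: str, n: int = 5) -> tuple[str, float]:
    answers = [ask_model(question, i) for i in range(n)]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / n  # agreement ratio as a rough confidence signal

answer, confidence = answer_with_confidence("What was Q3 revenue?")
print(answer, confidence)  # a low ratio means the output needs review
```

Surfacing that ratio next to the answer in the report - rather than hiding it - is what lets a business user decide whether to act or escalate.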
 

Just as important is explainability. Users should know where a number came from, how it was calculated, and what logic was used. That’s only possible if the semantic model is clean and transparent. When it is, AI has the context it needs to give reliable answers - and users can follow the logic. 
 

Until AI systems mature further, humans need to stay in the loop. Business users should always have a chance to review outputs, flag issues, and provide feedback. That’s how we build trust and improve performance over time.
 

Finally, we have to think about drift and misuse. As data changes, models may start generating different results. If no one’s watching, this can lead to confusion or errors. That’s why guardrails are essential - they help us stay innovative while avoiding unintended consequences. 
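A drift guardrail doesn’t have to be elaborate to be useful. As a minimal sketch (thresholds and data are illustrative assumptions), compare a fresh batch of a key metric against a stored baseline and alert when the shift is too large:

```python
# Sketch of a basic drift guardrail: flag when a new batch of a metric
# moves too far from its baseline, measured in baseline standard deviations.
from statistics import mean, stdev

def drifted(baseline: list[float], current: list[float],
            z_limit: float = 3.0) -> bool:
    base_mean, base_sd = mean(baseline), stdev(baseline)
    if base_sd == 0:
        return mean(current) != base_mean
    z = abs(mean(current) - base_mean) / base_sd
    return z > z_limit

baseline = [100, 102, 98, 101, 99, 100, 103, 97]
stable   = [101, 99, 100, 102]
shifted  = [130, 128, 131, 127]

print(drifted(baseline, stable))   # within normal variation
print(drifted(baseline, shifted))  # alert a human before users see it
```

The point is not the statistic itself but the habit: when the check fires, a person reviews the change before AI-generated answers built on that data reach end users.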

Being ‘AI-ready’ isn’t about licenses or dashboards - it’s about having the right foundations. If your data isn’t trustworthy, your AI won’t be either. And you won’t get a second chance to convince your users.

  • AI in BI only works when powered by clear, well-structured semantic models. 
  • Poor data preparation leads to broken trust, hallucinations, and failed adoption. 
  • Buying AI tools isn’t enough - success depends on readiness, not branding. 
  • Human oversight and explainable outputs are essential for trust and governance. 

Before you scale AI in BI, ask yourself:
  • Is your data structured and ready for AI?
  • Can your users trust and explain the results?
  • Are the right guardrails in place for responsible adoption?

If you’re unsure, we’re happy to discuss your AI readiness approach and help you take the right next step.

Wiktor Zdzienicki

Senior Portfolio Lead D&AI

Inspired? Let’s Connect

If something sparked your interest, let’s keep the momentum going. Whether you’re facing a specific data challenge, looking to unlock the full potential of your analytics, or just curious how our expertise could support your business - we’re here to talk.
Leave your contact details below and one of our experts will get in touch to explore what’s possible together.

Providing contact information will allow Clouds on Mars to send information about products and services. You may unsubscribe at any time. For more information on our privacy policy please click on the link. Privacy policy.
