Why agentic analytics starts with a well-governed data layer
As AI changes how executives interact with data, analytics is moving out of the dashboard era and into a far more dynamic operational model. Natural-language interfaces, AI-driven insights, and agentic workflows promise broader access to intelligence, but they also expose a problem many organizations have lived with for years: fragmented definitions, inconsistent metrics, and governance models that were never designed for AI scale.
To unpack what that means in practice, I spoke with Nick Eayrs, Vice President of Field Engineering for Asia-Pacific and Japan at Databricks. With nearly 25 years of leadership experience across multiple regions, Eayrs has a broad view of how data and AI strategies play out across markets, operating models, and levels of enterprise maturity, and of what it takes to succeed in the new era of agentic analytics.
The throughline of our conversation was his conviction that AI is not eliminating the need for semantics and governance. It is making them far more important. In his view, organizations will not get trusted AI outcomes until they fix the data layer beneath them: the business definitions, lineage, access controls, and open standards that allow intelligence to scale without collapsing under cost and complexity.
AI Is Rewriting the Rules of Analytics
Catherine Brown: Why does AI put semantics and governance pressure on analytics in a way that legacy BI never had to deal with?
Nick Eayrs: Legacy BI was really a world of static dashboards and predefined reports. Business users had to navigate fairly complex interfaces, and if they had a follow-up question or wanted to explore something more deeply, they usually needed specialist support. There was very little true self-service.
The semantic layer underneath traditional BI was also relatively static and slow to change. If the business needed a new definition for revenue, churn, or customer lifetime value, that usually meant going back to IT or specialist teams to update the semantic layer and rebuild reports. It was a very predetermined model.
AI changes that completely. It no longer has to be static, and it no longer has to be purely descriptive. Traditional BI is often rear-view-mirror analytics. It tells you what happened. With AI, you can start to predict what might happen, ask why it happened, and understand what to do next. You can reason across much more data yourself and generate insights in real time.
But semantics do not go away in that world. If anything, they matter more. AI and agents are still informed by the data underneath them. That gets back to the old principle of garbage in, garbage out. The more trusted, high-quality data you have, with the right business context around your products, services, taxonomy, and terminology, the better the AI experience will be.
If someone asks, “Why did we miss our Q3 targets?” the system needs to understand what “targets” means in that organization, what period the user is referring to, and how those metrics are defined. Without that semantic context, the system is just guessing. It may produce generic answers, but not trusted ones.
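A minimal sketch of what that semantic resolution step might look like. The registry, field names, and fiscal-calendar assumption here are all hypothetical; in a real platform these definitions would live in a governed catalog, not in application code.

```python
from datetime import date

# Hypothetical semantic registry: maps a business term to a certified definition.
SEMANTIC_LAYER = {
    "targets": {
        "metric": "quarterly_revenue_target",
        "definition": "SUM(target_amount) FROM finance.targets",
        "grain": "fiscal_quarter",
    },
}

def resolve(term: str, asked_on: date) -> dict:
    """Map a business term plus the question date to a concrete, defined metric."""
    entry = SEMANTIC_LAYER[term]
    # "Q3" is ambiguous without a fiscal calendar; we assume a
    # calendar-year fiscal calendar purely for illustration.
    quarter = (asked_on.month - 1) // 3 + 1
    return {**entry, "resolved_quarter": f"Q{quarter} {asked_on.year}"}

print(resolve("targets", date(2025, 8, 1))["resolved_quarter"])  # Q3 2025
```

Without this lookup, a natural-language system has no grounded answer for what "targets" or "Q3" mean in this organization, which is exactly the guessing problem described above.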
There is another important point here as well. In the Databricks view, the semantic layer should be open and interoperable. Traditional BI vendors often lock the semantic model into their own tool, which means everything has to flow through that interface. That becomes a major constraint if you want AI and agentic experiences to scale. A strong customer example in APJ is Takeda. With the right data foundations and guardrails in place, they were able to build out multiple AI use cases across commercial, R&D, manufacturing, and back-office functions.
Catherine: Can you talk more specifically about the governance pressure AI puts on analytics?
Nick: On both the BI side and the AI side, governance comes down to trust, lineage, and traceability.
If you are producing dashboards or business intelligence insights, you need to understand how they were built. Which underlying data was used? How were the metrics defined? If you do not know that, then you cannot trust what you are looking at.
The same is true on the AI side. You are not going to trust the output from a model, an agent, or an agentic application if you cannot understand how that output was derived. Which table did it come from? Which features were used? Which model was serving the inference? That end-to-end lineage is essential.
There is also a compliance dimension. In highly regulated industries, organizations are increasingly going to be required to prove that traceability. If an AI-driven decision is being exposed externally to consumers, citizens, or patients, you have to be able to stand behind it and audit how it was created. AI is putting more pressure on analytics because the expectations around trust and traceability are rising.
Fragmented Metrics Are Slowing Decisions
Catherine: What are the most common conflicting metrics patterns you see, and what do they cost organizations?
Nick: The biggest issue is fragmentation. Most organizations have multiple BI tools in the estate, and each of those tools may have its own semantic model and its own interpretation of business metrics. That means you end up with no single source of truth and a lot of duplicated logic that may not align.
One dashboard might define revenue one way. Another tool may define it differently. Someone in finance may be working from another version in Excel. At that point, trust starts to erode very quickly. Decision-making slows down because people are no longer debating the decision itself. They are debating which number is right.
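The fragmentation Eayrs describes is easy to reproduce. In this illustrative sketch (the transaction fields and both "revenue" definitions are made up), two teams apply two plausible definitions to the same records and report different numbers:

```python
# Same transactions, two plausible "revenue" definitions, two different answers.
transactions = [
    {"gross": 100.0, "refund": 10.0, "booked": True},
    {"gross": 50.0,  "refund": 0.0,  "booked": False},  # deal not yet recognized
]

# Dashboard A: gross revenue on booked deals only.
revenue_a = sum(t["gross"] for t in transactions if t["booked"])

# Spreadsheet B: net of refunds, including unbooked pipeline.
revenue_b = sum(t["gross"] - t["refund"] for t in transactions)

print(revenue_a, revenue_b)  # 100.0 140.0 — which number is "right"?
```

Neither calculation is wrong on its own terms; the problem is that nothing forces the two consumers to share one certified definition.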
Why Legacy BI Models Break at AI Scale
Catherine: Why does dashboard logic, when it is trapped in tools, collapse under AI scale?
Nick: Traditional BI tools often extract data out of source systems, aggregate it for a specific reporting outcome, move it into proprietary storage, and then layer proprietary semantics and dashboards on top of that. Everything gets locked into the tool.
That becomes a real problem in an AI world because users always have follow-up questions. They want to go deeper. They want to expose that logic to other systems. They want data scientists or machine learning teams to build on it. If everything is trapped in one proprietary layer, that does not work well. You have to keep going back to the source, pulling more data, transforming it again, and rebuilding the logic. It becomes repetitive and expensive.
If, instead, everything is built on open data formats and open interfaces, then BI, AI, notebooks, agents, and data science teams can all work from the same governed foundation. You store and process the data once. Everyone can interact with it in natural language. Everyone can build on it. That is a much better model for scale.
There is also a significant engineering burden in the old way of doing it. You end up maintaining lots of synchronization pipelines and a lot of custom code just to keep fragmented systems aligned. That complexity becomes very hard to justify.
What a Machine-Readable Semantic Layer Looks Like
Catherine: What does a machine-readable semantic layer look like in practice?
Nick: First, business metrics have to be treated as a foundational pillar. That means the definitions of things like revenue, churn, or customer lifetime value need to be explicitly defined, certified, and reusable across the organization.
Second, those metrics need to be accessible through standard languages, primarily SQL, and they need to be consumable not just by BI tools but by AI interfaces, notebooks, and agents as well. If they are not accessible and reusable, you have not really solved the problem.
Third, you need openness and interoperability. You do not want to push all of your business logic into a system that you cannot get it back out of. Open standards matter because they give you optionality and a safe exit strategy if you ever need to change systems or providers.
You also need AI-enabled governance. In an agentic world, you may have thousands of models or agents interacting with the semantic layer all the time. Keeping metadata, comments, and business metrics current is a huge challenge if that is all done manually. AI can help generate and maintain that metadata so the semantic layer stays usable at scale.
And then, of course, you need conversational and contextual intelligence on top so that agents and applications can interact with that layer through APIs and natural-language interfaces.
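To make those requirements concrete, here is one possible shape for a machine-readable metric: a single certified definition that any consumer, whether a BI tool, a notebook, or an agent, can render into standard SQL. The schema, field names, and `COUNT_IF` expression are assumptions for illustration, not a real specification.

```python
# One certified metric definition, reusable by every consumer.
CHURN = {
    "name": "monthly_churn_rate",
    "certified": True,
    "expression": "COUNT_IF(status = 'churned') / COUNT(*)",
    "source": "analytics.customers",
    "dimensions": ["region", "plan"],
}

def to_sql(metric: dict, group_by: str) -> str:
    """Render the certified definition into SQL for any BI tool, notebook, or agent."""
    assert metric["certified"], "only certified metrics may be queried"
    assert group_by in metric["dimensions"], f"unknown dimension: {group_by}"
    return (
        f"SELECT {group_by}, {metric['expression']} AS {metric['name']} "
        f"FROM {metric['source']} GROUP BY {group_by}"
    )

print(to_sql(CHURN, "region"))
```

Because the definition lives in data rather than inside any one tool, changing it once changes it for every consumer, which is the reusability and openness Eayrs is pointing at.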
Catherine: Where does evaluation fit into this? Does the certification of the data happen first, and then the AI layer and evaluations come after?
Nick: Yes. The data foundations come first. You need the metadata, the business logic, the comments, and the business metrics in place before AI can use that data well.
Then you build the AI or agentic layer on top of it. After that, the evaluation frameworks come into play to validate whether the outputs are aligned with expectations and to refine what the system is doing. But the evaluation layer is not a substitute for getting the foundations right. It depends on those foundations.
Why Per-Seat BI Models Limit Adoption and Value
Catherine: Where are per-seat BI models actively limiting adoption and value creation?
Nick: The goal of data and AI democratization should be to put intelligence into the hands of every knowledge worker in the organization. A per-seat model works directly against that goal.
It constrains democratization because it forces organizations to choose which users, teams, or business units get access. It also constrains innovation because now you are deciding which projects are allowed to move forward based on license availability rather than business value.
That affects value creation too. The best outcomes often come when diverse teams come together around a business problem. If only a subset of those teams can access the system, you limit collaboration and you limit the organization’s ability to create value.
The other issue is efficiency. In a consumption-based model, you pay for what you use. If usage scales up, you pay for the increased usage. If it drops to zero, you pay zero. That is a much more rational model than paying for seat licenses that may be underused or overprovisioned.
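The cost asymmetry is simple arithmetic. This toy comparison uses entirely made-up prices, but it shows the structural difference: seat cost is fixed regardless of usage, while consumption cost tracks it, down to zero.

```python
def per_seat_cost(licensed_seats: int, price_per_seat: float) -> float:
    # You pay for every seat, whether or not it is used.
    return licensed_seats * price_per_seat

def consumption_cost(units_used: float, price_per_unit: float) -> float:
    # You pay only for what runs; zero usage costs zero.
    return units_used * price_per_unit

# 500 licensed seats, but only 120 people active this month:
print(per_seat_cost(500, 70.0))       # 35000.0, regardless of activity
print(consumption_cost(0, 0.15))      # 0.0 when nothing runs
print(consumption_cost(8000, 0.15))   # 1200.0 when usage scales up
```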
Catherine: Some organizations might argue that license limits are effectively acting as a governance layer. What would you say to that?
Nick: If you are trying to govern access to data by constraining licenses, you are going to fail. That is the wrong control point.
Good governance starts at the platform and data layer. It starts with role-based and attribute-based controls, with authentication and authorization tied into your identity systems, and with clear segregation and classification of data assets. You solve for entitlements and policy enforcement up front.
If you do that properly, then you can roll out access broadly while still ensuring that people only see what they are supposed to see. Using seat licenses as your governance mechanism is not scalable and it is not a substitute for doing the underlying governance work.
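A sketch of what moving the control point to the data layer might look like: an attribute-based check evaluated before any query runs, so access can be rolled out broadly while the policy decides what each user sees. The roles, classification tags, and policy table here are hypothetical.

```python
# Hypothetical policy table: data classification -> roles allowed to read it.
POLICIES = {
    "pii":       {"privacy_officer", "fraud_analyst"},
    "financial": {"finance", "executive"},
    "public":    {"finance", "executive", "analyst", "marketing"},
}

def can_read(user_roles: set, classification: str) -> bool:
    """Allow access only if one of the user's roles appears in the policy."""
    return bool(user_roles & POLICIES.get(classification, set()))

# Everyone can be onboarded; the data layer decides visibility per asset.
print(can_read({"analyst"}, "public"))     # True
print(can_read({"analyst"}, "financial"))  # False
```

With entitlements enforced here, adding a user costs nothing in governance terms, whereas a seat license caps headcount without saying anything about what each head may see.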
The Fastest Way to Improve Trust and Lower Cost
Catherine: What is the fastest architecture move organizations can make to improve trust and reduce analytics cost at the same time?
Nick: The most important move is to establish a unified semantic layer grounded in a strong governance foundation.
That starts with the catalog decision. How are you going to govern your data and AI assets? Once you have a catalog in place, you can define your semantics there, certify the business metrics there, and create a single source of truth. In the Databricks model, that source of truth is open and interoperable, which matters a lot.
Once you do that, a few things happen. You get trust because you have lineage, governance, auditability, and certified definitions. You get simplification because you avoid unnecessary duplication and repeated ETL. And you reduce the IT burden because you are no longer rebuilding logic every time someone asks a new question.
The implementation pattern is fairly clear. First, get the data foundations right. Second, build the semantic layer and certify the business metrics. Third, layer on AI and then use evaluation frameworks to monitor and refine those outputs. That sequence matters. NTT Docomo is a great example of this. Using the Databricks Lakehouse, Unity Catalog, and workflows to automate log analysis, they reduced manual processing time from 66 hours per month to 6 hours and improved analysis efficiency by 90 percent. That is a strong example of governance and foundation enabling much faster decision-making.
Why APJ Is Moving Faster on Data and AI Monetization
Catherine: What are APJ enterprises doing differently or faster when it comes to monetizing the data layer for AI?
Nick: APJ is a fascinating market because it is incredibly diverse. You are dealing with very different countries, languages, levels of maturity, and regulatory environments. But one of the common patterns is that organizations tend to move very quickly on digital transformation, and many governments across the region have clear national AI strategies in place.
What we see from customers is that they often start with the governance and data foundation layer, then move fast into AI-native applications once that base is in place. That sequencing matters.
We also see that pattern in industries like financial services, where customers are consolidating analytics on top of a governed data layer and then democratizing access.
Another example is Net One Systems in Japan. Once they had the foundation in place, they built an AI-infused knowledge tool integrated with other systems and achieved a 75 percent reduction in response time to support queries while saving 10,000 hours of labor per year.
One of the things that is especially unique in APJ is the multilingual dimension. Customers are building capabilities in Japanese, Mandarin, Cantonese, Thai, and other local languages. That is powerful, but it only works if the underlying data layer is governed and structured well enough to support it.
APJ customers tend to get the foundations right quickly, then pivot rapidly into AI-first application development on top of that. In many cases, they are moving faster than other regions.
Closing thoughts
Nick’s point is both technical and strategic. The organizations creating value from AI are not treating analytics, semantics, and governance as separate conversations. They are treating them as one foundation. For executives, that matters because the payoff is not just better architecture. It is faster decision-making, broader access to insight, and lower analytics cost at scale. AI will not fix a fragmented data layer. It will expose it. The companies that move fastest from experimentation to trusted intelligence will be the ones that define their metrics clearly, govern them centrally, and make them open enough for analytics and AI to build on the same truth.
To learn more about building an effective operating model, download the Databricks AI Maturity Model.
Databricks Blog
https://www.databricks.com/blog/why-agentic-analytics-starts-well-governed-data-layer
