Precisely and LeBow find that AI data integrity worries persist

The fourth Precisely 2026 State of Data Integrity and AI Readiness report (registration required) has been released. Conducted by the Drexel University LeBow College of Business, it shows that worries over data integrity persist. Compounding the problem, organisations often overestimate their state of AI readiness. Yet without trusted data, projects run the risk of delay and potential failure.

Dave Shuman, chief data officer, Precisely, said, “The research shows that confidence in AI does not automatically translate into ROI. Organisations are moving quickly, but many are doing so without the trusted, governed data foundations required to scale AI responsibly.

“That disconnect represents what we call the ‘agentic AI data integrity gap’, and it introduces significant risk. As AI systems become more autonomous, data integrity is no longer a nice-to-have; it’s a business imperative. Organisations that invest now in integrated, improved, governed and contextualised agentic-ready data will be best positioned to turn AI ambition into measurable business results.”

AI disconnect between boardroom and reality

The report highlights a significant disconnect between the boardroom and the reality that IT operates in. Boardroom confidence in the ability to deploy AI is extremely high: 87% are confident in their infrastructure, 87% in skills, and 86% in data readiness.

But, and here’s the disconnect, when it comes to obstacles, 42% point to infrastructure, 41% to skills, and 43% to data readiness. It raises a serious question: why is there such a disconnect? Is this Fear Of Missing Out (FOMO)? Are they desperately trying to keep up with competitors? Are they spending too much time listening to vendor sales teams and ignoring their own people?

The report also raises a question over the word “ready”. Respondents claim they are AI-ready but don’t define what that means. The report suggests that “ready” means they have the basic capability to deliver some degree of an AI project. Unfortunately, that does not translate into an enterprise capability. It ignores the need for AI maturity, data quality, data trust, and the ability to scale.

When it comes to scale, 30% say it is a challenge due to a lack of skills. They believe they have the infrastructure because they have scaled other apps and processes. What they don’t know is how to apply that experience to AI projects.

Maturity is not something that happens overnight

AI maturity is a complex issue. It is not just about having the right data and being able to scale. It requires processes that address data quality, infrastructure, business usage and the ability to monitor ROI.

The lack of AI maturity is shown in several of the responses. For example, 71% admit AI is not aligned with business goals. Without that alignment, the effectiveness of AI and the justification of spend have to be questioned.

When it comes to ROI, just 31% say they have actual metrics tied to key performance indicators. What is not clear is what those KPIs are or where they came from. Additionally, 32% say that they expect positive ROI from AI within 6-11 months. Given the critical lack of skills, data governance and data quality, there is no foundation for that optimism.

Organisations need to develop KPIs that are monitored in real-time. They need processes that detail what each KPI means, what it refers to and how it will be achieved. Those KPIs need to be agreed upon and co-monitored by business units and tied to AI projects. Without that, there can be no validation of where an AI project is, whether it is delivering or how to fix it.

Lack of AI maturity is a significant concern

Data maturity is a long-established and well-understood discipline. 83% of respondents have established practices, which is good. But how does that translate into AI maturity and governance?

According to respondents, 63% of organisations have established some form of AI governance, taking one of two approaches. The responses break down as follows:

  • 40% have expanded existing data governance to include AI governance
  • 23% have initiated AI governance as a separate effort from their data governance program
  • 31% are still planning or have yet to implement any AI governance measures

Of those organisations that have established AI governance programs, only about 34% have reached the performance monitoring or optimisation stages. How mature those are, or how the others are doing, is unclear.

The report does highlight some areas where organisations are focused. Data privacy and security leads with 39% of organisations monitoring performance or optimising their strategies. That this is just 39% is a surprise given the money spent on improving both of these.


What is important, especially in light of previous reports from Precisely, is the focus on data quality, bias prevention, and data attributes. These also sit at 35-36% monitoring or optimising, while 15-16% remain in pre-planning phases. There is much still to be done here, but it is moving in the right direction.

Additionally, these figures show that AI governance is being mixed with data governance. The two appear to be seen as part of the same issue, rather than as separate areas. That may be why 87% say they are either “very prepared” or “somewhat prepared” for AI initiatives regarding governance and compliance.

Location Intelligence creates a new privacy issue

According to the report, 96% of organisations are now investing in location intelligence. But that brings a range of concerns from data accuracy to privacy and security.

  • 41% use it for targeted marketing through customer demographics and segmentation
  • 41% use it for validating and standardising address data
  • 40% use it for optimising product and service delivery
  • 39% use it for risk assessment and claims processing

It is now part of core business processes, and with that come new issues. For example, 46% cite privacy and security concerns as the biggest obstacle when deploying location intelligence capabilities. Other obstacles include a lack of geocoding accuracy (29%), low-quality address data (30%) and the complexity of integration (44%).

There are also concerns over how this impacts data integrity. Take address data, for example. Sales and marketing teams rely heavily on the accuracy of that data. Get it wrong, and there are regulatory and cost implications. The 30% citing low-quality address data are likely to see an impact on existing data, which impacts data integrity.

Another risk is AI using location intelligence to extract data that reveals PII about a customer. That creates numerous issues for an organisation.

For those organisations that get it right, however, there are significant benefits. The report states, “The data reveals a transparent dependency chain: Data enrichment and location intelligence, combined with strong data governance, improve both AI readiness and AI outcomes.

“Organizations that successfully build a reliable, contextual understanding of their business environment while addressing privacy, quality, and integration challenges position themselves to extract maximum value from AI investments.”

Enterprise Times: What does this mean?

AI can bring significant benefits to an organisation if done correctly. But to be effective and scalable, it needs to go beyond a few projects that look good. There is a need for investment in the real foundations on which AI is built. Organisations also need to establish KPIs that they can trust and use them to detect challenges with AI.

Without any of this, AI will continue to suck up budgets and deliver little or no real value. Organisations claiming to be AI-enabled need to move beyond self-delusion and get those fundamentals right. Do that, and they will be able to show real ROI from the investment. Get it wrong and, like much of the poorly judged spending on cybersecurity, it will become another bottomless pit.

It’s time for management to ask themselves: are they AI deluded or AI realists?

The post Precisely and LeBow find that AI data integrity worries persist appeared first on Enterprise Times.
