Data bottlenecks, sovereignty are biggest challenges to scaling enterprise AI, say Uniphore and KPMG

Introduction
Enterprises around the world are racing to harness the power of artificial intelligence. Yet, many are finding that the biggest hurdles don’t come from the algorithms themselves but from the data needed to train and run those systems. A joint survey by conversational AI specialist Uniphore and professional services firm KPMG reveals that data bottlenecks and data sovereignty concerns top the list of challenges when it comes to scaling AI across large organizations. This article explores those findings, highlights why they matter, and outlines practical steps companies can take to move forward.

Body
Scaling AI in a large company is not as simple as flipping a switch. Even when leaders understand the potential for improved efficiency, smarter decision-making, and richer customer experiences, they often hit two roadblocks early in their journey: the flow of data into AI models, and the rules that govern where that data lives.

Uniphore and KPMG surveyed more than 500 senior executives across finance, healthcare, manufacturing, retail, and technology to understand their biggest pain points. The vast majority, over 70 percent, said that difficulty accessing clean, reliable data was slowing their AI efforts. Another 65 percent pointed to data sovereignty laws, which require certain data to remain within specified geographic boundaries, as a top concern.

Data bottlenecks arise when information needed for AI projects sits scattered across multiple legacy systems or in isolated departmental silos. As a result, data scientists spend weeks or even months just gathering and cleaning data before they can begin model training. In nearly half of surveyed firms, teams reported that existing IT architectures were simply not designed to support the high-volume data pipelines that modern AI demands.

Meanwhile, data sovereignty adds another layer of complexity. Countries and regions are introducing stricter data privacy rules. The European Union’s General Data Protection Regulation (GDPR) and similar laws in Asia and Latin America restrict how and where personal or sensitive information can be stored and processed. For global businesses, this means meticulously managing data flows to avoid hefty fines and legal risks.

These two issues are closely entwined. When data must stay in a specific location, some companies replicate it across multiple data centers or cloud providers to comply with local rules. That duplication further complicates data management and increases costs. Other firms resort to anonymizing or aggregating data, but these workarounds can degrade the quality and usefulness of the information AI systems need.
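To see concretely why aggregation can cost signal, consider a minimal illustration in Python (the records, field names, and regions below are made-up examples, not data from the survey): once individual purchases are collapsed into regional totals, a downstream model can no longer learn customer-level patterns.

```python
# Illustration of the aggregation workaround and its cost: record-level
# purchase data is collapsed to per-region totals, so customer-level
# behaviour is no longer available to downstream models.
# All records below are invented examples.
from collections import defaultdict

records = [
    {"customer": "A", "region": "EU", "spend": 120.0},
    {"customer": "B", "region": "EU", "spend": 15.0},
    {"customer": "C", "region": "US", "spend": 60.0},
]

aggregated = defaultdict(lambda: {"total_spend": 0.0, "customers": 0})
for r in records:
    aggregated[r["region"]]["total_spend"] += r["spend"]
    aggregated[r["region"]]["customers"] += 1

# Only region-level totals remain; who spent what is gone.
print(dict(aggregated))
```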

Despite the challenges, companies are not throwing in the towel. Nearly 80 percent of those surveyed say they plan to increase their AI budgets over the next 12 months. They recognize that overcoming data hurdles is critical if they want to remain competitive. The report outlines several best practices that leaders can adopt:

1. Invest in Modern Data Architectures
– Move away from fragmented databases toward unified data platforms that can handle large-scale ingestion, storage, and processing.
– Adopt data fabrics or data mesh approaches, which provide self-service access to well-governed data across the organization.
– Leverage cloud-native tools that support real-time data streaming, automated ingestion, and dynamic scaling as needed (a minimal ingestion sketch follows below).
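The report does not prescribe specific tools, so the sketch below is only a minimal illustration of the idea behind unified, automated ingestion: raw events are validated and cleaned on the way into a single store. The field names are hypothetical, and the in-memory SQLite table stands in for a real data platform fed by a streaming source.

```python
# Minimal ingestion-and-cleaning step feeding a unified store.
# Field names ("customer_id", "amount", "region") are hypothetical;
# a real pipeline would consume from a message queue or streaming
# service rather than an in-memory list.
import sqlite3
from datetime import datetime, timezone

RAW_EVENTS = [
    {"customer_id": "C-1001", "amount": "42.50", "region": "EU"},
    {"customer_id": "",       "amount": "17.00", "region": "EU"},    # missing ID -> rejected
    {"customer_id": "C-1002", "amount": "oops",  "region": "APAC"},  # bad amount -> rejected
]

def clean(event):
    """Return a normalized record, or None if it fails basic quality checks."""
    if not event.get("customer_id"):
        return None
    try:
        amount = float(event["amount"])
    except (TypeError, ValueError):
        return None
    return (event["customer_id"], amount, event.get("region", "UNKNOWN"),
            datetime.now(timezone.utc).isoformat())

conn = sqlite3.connect(":memory:")  # stand-in for a unified data platform
conn.execute("CREATE TABLE transactions "
             "(customer_id TEXT, amount REAL, region TEXT, ingested_at TEXT)")

accepted = [r for r in (clean(e) for e in RAW_EVENTS) if r is not None]
conn.executemany("INSERT INTO transactions VALUES (?, ?, ?, ?)", accepted)
print(f"Ingested {len(accepted)} of {len(RAW_EVENTS)} raw events")
```

The point is that validation happens once, at the platform boundary, instead of being repeated by every data science team downstream.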

2. Implement Robust Data Governance
– Establish clear policies around data ownership, access rights, and quality standards.
– Create cross-functional data governance councils that bring together IT, legal, compliance, and business teams.
– Use automation to enforce rules on data lineage, retention, and cataloging to ensure transparency and auditability (see the policy-check sketch below).
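As a rough illustration of what automated enforcement can look like (the policy fields, retention limits, and catalog entries below are hypothetical, not taken from the report), a small audit script can flag catalog entries that are missing required metadata or that retain data longer than policy allows.

```python
# Illustrative automated check of dataset catalog entries against a
# simple governance policy. Policy values and catalog entries are
# invented examples.
POLICY = {
    "required_metadata": {"owner", "classification", "retention_days"},
    "max_retention_days": {"personal": 365, "internal": 1825},
}

CATALOG = [
    {"name": "eu_customer_profiles", "owner": "crm-team",
     "classification": "personal", "retention_days": 730},
    {"name": "ops_sensor_readings", "owner": "plant-ops",
     "classification": "internal", "retention_days": 400},
]

def audit(entry):
    """Return a list of policy violations for one catalog entry."""
    issues = []
    missing = POLICY["required_metadata"] - set(entry)
    if missing:
        issues.append(f"missing metadata: {sorted(missing)}")
    limit = POLICY["max_retention_days"].get(entry.get("classification"))
    if limit is not None and entry.get("retention_days", 0) > limit:
        issues.append(f"retention {entry['retention_days']}d exceeds limit {limit}d")
    return issues

for entry in CATALOG:
    problems = audit(entry)
    print(f"{entry['name']}: {'OK' if not problems else '; '.join(problems)}")
```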

3. Embrace Federated and Hybrid Models
– For sensitive or regulated data, consider federated learning techniques, which allow AI models to train locally on distributed datasets without centralizing the underlying data (a toy sketch follows after this list).
– Combine on-premises and cloud resources in hybrid architectures to meet sovereignty requirements while still gaining the scalability benefits of public cloud.
– Explore partnerships with regional cloud providers that offer compliance certifications and data residency guarantees.
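To make the federated idea concrete, here is a toy sketch of federated averaging on synthetic data: each site fits a model on its own local records and shares only the fitted weights, never the underlying data. This illustrates the general technique, not any specific firm's setup; production systems add secure aggregation, sample weighting, and many training rounds.

```python
# Toy federated-averaging sketch with NumPy. Each "site" fits a linear
# model on synthetic local data; only the weights leave the site and
# are averaged into a global model.
import numpy as np

rng = np.random.default_rng(0)
TRUE_W = np.array([2.0, -1.0])  # ground-truth weights used to generate data

def local_fit(n_samples):
    """Simulate one site: generate local data, fit weights by least squares."""
    X = rng.normal(size=(n_samples, 2))
    y = X @ TRUE_W + rng.normal(scale=0.1, size=n_samples)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w  # only this summary leaves the site

site_weights = [local_fit(n) for n in (120, 80, 200)]  # e.g. three hospitals
global_w = np.mean(site_weights, axis=0)               # simple federated average
print("global model weights:", np.round(global_w, 3))
```

In this toy setting the averaged weights land close to the true model, which is why federated approaches appeal when raw data cannot leave its jurisdiction.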

4. Build a Culture of Data Literacy
– Train employees across all levels to understand the basics of data management, privacy, and ethics.
– Encourage collaboration between data engineers, analysts, and business users to ensure that AI use cases are grounded in real business needs.
– Reward teams that successfully deploy AI solutions with measurable impact on revenue, cost savings, or customer satisfaction.

Real-World Examples
In the financial services sector, a global bank revamped its data architecture by implementing a data mesh. Each business unit now manages its own domain data under a common governance framework. This approach reduced the time to access cleaned data from weeks to days. Meanwhile, the bank uses on-premises data processing in Europe to comply with GDPR and cloud resources in Asia for less sensitive workloads.

In healthcare, a medical device manufacturer struggled to apply AI to clinical records spread across multiple countries. By adopting federated learning, the company collaborated with local hospitals to build global diagnostic models without moving patient data across borders. The consortium improved diagnostic accuracy by 15 percent while fully respecting each country’s privacy regulations.

Three Key Takeaways
1. Data Quality and Access Come First
Without clean, unified data pipelines, AI projects stall before they start. Investing in modern architectures is non-negotiable.
2. Data Sovereignty Can’t Be Ignored
Local privacy laws are here to stay. Companies must design hybrid and federated solutions to meet compliance without sacrificing AI performance.
3. Governance and Culture Drive Success
Technology alone won’t solve bottlenecks. Strong data governance processes and a culture of data literacy are critical for sustained AI adoption.

Three-Question FAQ
Q1: What exactly is a data bottleneck, and why does it matter?
A1: A data bottleneck happens when the flow of information needed for AI projects slows down or stops due to legacy systems, siloed databases, or manual processes. It matters because data scientists spend too much time cleaning and wrangling data, delaying insights and increasing costs.

Q2: Can small and midsize enterprises face the same issues?
A2: Absolutely. While large corporations often struggle with complex legacy systems and global regulations, smaller businesses can also encounter siloed data and privacy requirements. Fortunately, cloud-based data platforms and managed services can help level the playing field.

Q3: How soon can a company expect results from modernizing its data infrastructure?
A3: Timelines vary by organization size and complexity. Some firms report measurable improvements in data access and AI project velocity within three to six months. Full transformation, including governance and culture changes, typically takes 12 to 18 months.

Call to Action
Don’t let data hurdles hold back your AI ambitions. Reach out to Uniphore or KPMG today to learn how to build a data architecture and governance framework that powers scalable, compliant, and high-impact AI solutions.
