Why Consumer AI Needs a Data Integrity Foundation—Not Just Better Models
The race to deploy consumer-facing personal AI assistants is accelerating, and for good reason. Enterprises across financial services, healthcare, retail, and media are investing heavily in AI personalization due to its enormous potential to transform customer experience, operational efficiency, and competitive positioning.
Unfortunately, there’s just as much potential for harm, and we’re already seeing AI agents fail in predictable ways. They push recommendations against consumer interests, contradict things that are known to be true, and can't distinguish between who someone is now and who they were years ago.
The problem isn't the AI itself. There's a foundational challenge that most AI strategies aren't addressing: data integrity.
The Business Risk That Nobody's Talking About
In a recent piece published in IEEE Security & Privacy, Inrupt's Chief of Security Architecture Bruce Schneier identifies why today's AI personalization is falling short: "we've mastered data availability, we're working on confidentiality, but we've never properly solved integrity."
As AI assistants become more sophisticated, they require access to more intimate personal data (interactions across every touchpoint, preferences over time, transaction history, and behavioral patterns) than email providers, social media platforms, or cloud storage ever needed.
This depth of personalization introduces new requirements that traditional data infrastructure wasn't designed to meet. Schneier outlines why integrity, the third leg of the “security triad” after availability and confidentiality, becomes critical at this level of AI personalization. The intimacy required for effective AI assistance creates many new ways for these systems to harm or disappoint consumers.
When AI systems operate on incomplete, inaccurate, or unverifiable data, the results range from poor customer experiences to regulatory exposure to eroded brand trust. For enterprises deploying AI at scale, these aren't edge cases; they're the predictable outcomes of systems built without integrity controls. Without verifiable data integrity, you're building customer-facing AI on an untrustworthy foundation.
Six Requirements for Trustworthy AI
As a solution to these problems, Schneier proposes that each customer have a personal data store that is separate from the AI system itself. He then outlines the capabilities these personal data stores must provide (a rough sketch of what this could look like follows the list):
- Broad accessibility as a data repository: encompassing personal data, transaction data, and inferred preferences
- Broad accessibility as a data source: working across multiple AI models and systems, not locked to a single vendor
- Provable accuracy: with audit capabilities for high-stakes interactions
- User control and audit: fine-grained permissions with the ability to grant, revoke, and review access over time
- Security against both read and write attacks: protecting against unauthorized access and data manipulation
- Ease of use: accessible to all users without specialized security training
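To make the control and audit requirements concrete, here is a minimal, purely hypothetical sketch of the surface such a personal data store might expose. None of these names (PersonalDataStore, AccessGrant, AuditEntry) come from Schneier's article or any particular product; they simply illustrate the shape of the requirements above.

```typescript
// Hypothetical sketch only: these types and method names are illustrative,
// not an actual API from Schneier's article or any vendor.

type Scope = "profile" | "preferences" | "transactions" | "inferences";

interface AccessGrant {
  grantee: string;        // e.g. the identity of an AI assistant or service
  scopes: Scope[];        // which slices of personal data it may read
  canWrite: boolean;      // write access is granted separately and sparingly
  expiresAt?: Date;       // grants can be time-boxed
}

interface AuditEntry {
  timestamp: Date;
  actor: string;                             // who read or wrote
  action: "read" | "write" | "grant" | "revoke";
  resource: string;                          // which record was touched
  signature: string;                         // tamper-evidence for high-stakes interactions
}

interface PersonalDataStore {
  // Fine-grained permissions: grant, revoke, and review access over time.
  grant(access: AccessGrant): Promise<void>;
  revoke(grantee: string, scopes?: Scope[]): Promise<void>;
  listGrants(): Promise<AccessGrant[]>;

  // Provable accuracy: every access leaves a verifiable trail the user can review.
  auditLog(since?: Date): Promise<AuditEntry[]>;
}
```

The names are disposable; the shape is the point: read and write rights are separate, every grant is reviewable and revocable over time, and every access leaves a tamper-evident trail.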
The Competitive Advantage Nobody’s Talking About
The enterprises that solve for data integrity proactively will gain a massive first-mover advantage in trustworthy AI, differentiating on customer confidence while competitors still compete primarily on model performance.
By separating personal data from the systems that use it, businesses gain compelling advantages:
- Risk transfer: When customers control their own personal data with verifiable integrity, enterprise liability drops dramatically. Your business is no longer a single point of failure for data completeness, correctness, or security.
- Competitive advantage in the AI era: While competitors invest in incremental model improvements, your brand differentiates on trustworthiness — an increasingly valuable positioning as AI becomes ubiquitous and both consumer and regulatory scrutiny intensifies.
- Future-proofing: Customer-controlled data can work across multiple AI models and vendors. Businesses are not locked into today's technology choices, and customers receive improved, trusted, and hyper-personal products or services.
- Compliance becomes differentiation: As frameworks like the CFPB’s Section 1033 rule in the US and others globally signal the direction toward consumer data rights, this architecture puts businesses ahead of compliance requirements rather than retrofitting compliance later and stifling innovation.
When customers have agency over their personal data and trust becomes the foundation of the brand relationship, businesses gain access to verified, authentic data across contexts while reducing exposure to the integrity risks that increasingly erode trust and create liability in the age of AI.
Inrupt's Approach: Built on Solid Foundations
At Inrupt, we've been working on exactly this separation by extending Tim Berners-Lee's Solid protocol into enterprise infrastructure for customer-centric data sharing with the integrity properties Schneier describes.
Our approach gives B2C enterprises a way to deliver personalized AI experiences while transferring data ownership back to consumers. Enterprises get the AI innovation and personalization advantages without the integrity risks, regulatory exposure, or customer trust erosion that come with centralized data control.
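As a flavor of what this separation can look like in code, the sketch below reads a customer preference from a Solid Pod and lets the customer grant an AI assistant read-only access to it, using Inrupt's open-source @inrupt/solid-client JavaScript library. The Pod URL, WebID, and vocabulary term are placeholders, the authenticated fetch would normally come from an @inrupt/solid-client-authn-* session, and the access-management call shown assumes a recent library version, so treat this as an illustrative sketch rather than a reference integration.

```typescript
import {
  getSolidDataset,
  getThing,
  getStringNoLocale,
  universalAccess,
} from "@inrupt/solid-client";

// Placeholder identifiers: in a real deployment these come from the
// customer's Pod and the assistant's registered identity.
const PREFERENCES_URL = "https://customer.example.pod/preferences";
const ASSISTANT_WEBID = "https://assistant.example.com/id#agent";

// `authFetch` would be the authenticated fetch from an
// @inrupt/solid-client-authn-* session; plain fetch is used here for brevity.
const authFetch = fetch;

async function readDietaryPreference(): Promise<string | null> {
  // The customer's data lives in their Pod, not in the enterprise's database.
  const dataset = await getSolidDataset(PREFERENCES_URL, { fetch: authFetch });
  const prefs = getThing(dataset, `${PREFERENCES_URL}#dietary`);
  // The vocabulary term is a placeholder; real data would use a shared vocabulary.
  return prefs
    ? getStringNoLocale(prefs, "https://example.com/vocab#dietaryPreference")
    : null;
}

async function grantAssistantReadAccess(): Promise<void> {
  // The customer, not the enterprise, decides that the assistant may read
  // but not modify this resource; the grant is revocable at any time.
  await universalAccess.setAgentAccess(
    PREFERENCES_URL,
    ASSISTANT_WEBID,
    { read: true, write: false },
    { fetch: authFetch },
  );
}
```

The enterprise's AI reads the preference from the customer's Pod at request time rather than from a copy in its own database, and the customer can revoke the grant whenever they choose.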
For brands to remain relevant and connected to their customers in the age of agentic AI, enterprise innovation and technology leaders building consumer AI must proactively prioritize data integrity. As Schneier writes: "Making this all work is a challenge, but it's the only way we can have trustworthy AI assistants."
Interested in how Inrupt's infrastructure enables trustworthy, transformative AI at scale?


