July 19, 2023

The Future of Software: Trust by Default and the Big Tech Pioneers Leading the Way

Privacy is no longer a feature -- it's the foundation. From Apple's ATT to Microsoft's Azure OpenAI and Salesforce's EinsteinGPT, the biggest names in tech are embedding Trust by Design into their platforms. Here's what that means for how software gets built, and where the responsibility still falls on your organization.

Patricia Thaine
Founder, Chairwoman, Thought Leader

For most of the internet's history, privacy was an afterthought in software design -- something bolted on after the fact, if it was addressed at all. That is changing fast. The combination of consumer pressure, landmark regulation, and the explosive arrival of generative AI has pushed privacy from a compliance checkbox to a foundational design principle. The organizations paying attention -- and building accordingly -- are not just the ones that will avoid regulatory penalties. They are the ones that will earn the trust that drives long-term growth.

This article traces how some of the world's largest technology companies are leading that shift, why the arrival of large language models accelerated the conversation in ways nothing before it had, and what the emerging architecture of Trust by Design means for every organization building or consuming software today.

How Apple and Meta Set the Stage for Privacy as a Business Imperative

The clearest early signal that privacy had moved from a niche concern to a market-moving force came in 2021, when Apple introduced App Tracking Transparency (ATT). The feature gave users a simple choice: allow an app to track them across other apps and websites, or decline. The majority declined. For Meta, whose business model was built on exactly that kind of cross-platform behavioral tracking, the consequences were severe. February 2022 opened with the largest single-day loss of market value in U.S. stock market history -- roughly $230 billion erased in one session -- and the company simultaneously reported its first-ever quarterly decline in daily active users. Meta itself estimated that ATT would cost it on the order of $10 billion in revenue in 2022 alone. The message to the market was unambiguous: privacy is not a soft preference. It is a hard business variable.

Microsoft had already read that writing on the wall. In 2018, CEO Satya Nadella publicly declared that privacy is a "human right" -- a framing that maps directly to Article 12 of the Universal Declaration of Human Rights, which states:

"No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks."

Treating privacy as a rights issue rather than a technical problem has downstream implications for how software is designed, what trade-offs are acceptable, and how accountability is assigned when things go wrong. Microsoft's willingness to stake out that position years before it became widely fashionable reflected a genuine strategic bet -- one that has increasingly been validated.

Why Generative AI Changed the Privacy Conversation Permanently

The arrival of ChatGPT did not just accelerate the privacy conversation. It restructured it entirely.

When OpenAI CEO Sam Altman testified before the Senate Judiciary Committee in May 2023, privacy was raised in the very first minutes of the hearing. That was not an accident. Privacy concerns around generative AI had become one of the defining technology policy conversations of the year, and for good reason. Across industries, companies began banning or restricting employee use of ChatGPT, with CISOs, CIOs, and Chief Privacy Officers scrambling to understand the exposure.

The situation created a genuine organizational paradox. Employees were increasingly treating access to large language models not as a convenience but as a competitive necessity -- a technology that could be the difference between hitting a deadline and missing it, between producing a good analysis and an excellent one. Leaders understood that companies failing to adopt it risked falling behind. And yet privacy concerns were, for many organizations, the single largest blocker to that adoption. The question stopped being "should we use AI?" and became "how do we use AI without creating unacceptable risk?"

What Is Driving the Shift Toward Privacy-First Software Design?

Three forces have converged to produce the current moment. The first is a measurable shift in consumer expectations around data privacy -- and a corresponding recognition by companies that how they handle data is now a differentiator, not merely a compliance burden. The second is the realization that large language models represent a qualitative leap in employee productivity, creating genuine organizational risk for those who sit on the sidelines. The third -- and perhaps least discussed -- is the expansion of what "privacy" means in practice.

The International Association of Privacy Professionals defines privacy as the right to be free from interference or intrusion and the right to control how personal information is collected and used. That definition was long understood to apply primarily to individuals. But the same logic applies to organizations. Companies use copyright, trade secrets, non-disclosure agreements, and business associate agreements to maintain control over their data. Generative AI -- particularly consumer-grade tools like ChatGPT -- complicates that control in ways legal frameworks were not designed to handle. High-profile incidents like Samsung's accidental leak of internal trade secrets to ChatGPT and alleged copyright violations surfacing from AI-generated outputs illustrated the stakes clearly. Privacy, in other words, is no longer just a consumer protection issue. It is an enterprise data governance issue.

This shift did not emerge overnight. It has roots in Edward Snowden's revelations in 2013, in the scramble organizations underwent when GDPR took effect in 2018, and in a gradual accumulation of data breach headlines that eroded public trust in digital institutions. But generative AI has compressed the timeline. What might have been a decade-long cultural shift is playing out in real time.

How Microsoft and Salesforce Are Building Trust by Design Into AI

What Is Microsoft Doing to Make LLMs Safe for Enterprise Use?

Microsoft moved quickly after the launch of ChatGPT to offer a secure enterprise path forward. The Azure OpenAI Service gave organizations access to OpenAI's models within a controlled, compliance-oriented cloud environment. Then, on June 19, 2023, Microsoft announced an additional capability: letting organizations use OpenAI models on their own proprietary data. Rather than requiring fine-tuning -- which creates its own data exposure risks -- this approach supplies internal data to the model as context for its responses.
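
To make that distinction concrete, here is a minimal sketch of the grounding pattern, written against the 2023-era openai Python SDK (pre-1.0). The resource URL, key, deployment name, and retrieval step are all placeholders; Microsoft's managed "on your data" feature wires the retrieval into the service itself, while this sketch shows the underlying idea by hand.

```python
import openai

# Azure OpenAI connection details (placeholders -- substitute your own resource).
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR-AZURE-OPENAI-KEY"

def answer_from_internal_docs(question: str, retrieved_passages: list[str]) -> str:
    """Ground the model in proprietary data by passing it as prompt context,
    rather than fine-tuning the model on that data."""
    context = "\n\n".join(retrieved_passages)  # e.g. fetched from your search index
    response = openai.ChatCompletion.create(
        engine="gpt-35-turbo",  # the name of your Azure model deployment
        messages=[
            {"role": "system",
             "content": "Answer using only the provided company context.\n\n"
                        f"Context:\n{context}"},
            {"role": "user", "content": question},
        ],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"]
```

The key property of this design is that proprietary data shapes the model's answers without ever becoming part of the model's weights.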

Limina's data de-identification solution integrates with this architecture to add a critical additional layer of protection. Before prompts are sent to Azure OpenAI, Limina can automatically identify and remove sensitive personal information -- preventing that data from being exposed to the model in the first place. The same capability applies to datasets used to provide context to the model, preventing sensitive information from being inadvertently reproduced in AI-generated outputs.
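
Where that de-identification step sits in the flow can be sketched as follows. Limina's actual API is not shown here; the `redact` function below is a deliberately toy stand-in for whatever a real de-identification service exposes, and the pipeline reuses the `answer_from_internal_docs` sketch above. The point is structural: both the prompt and the grounding context are cleaned before anything leaves the organization's boundary.

```python
import re

# Deliberately naive placeholder for a real de-identification API such as
# Limina's: it masks only email addresses and phone-like number sequences.
def redact(text: str) -> str:
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", text)
    return text

def safe_answer(question: str, passages: list[str]) -> str:
    """De-identify the user prompt and the grounding context before
    anything is sent to the model (see answer_from_internal_docs above)."""
    clean_question = redact(question)
    clean_passages = [redact(p) for p in passages]
    return answer_from_internal_docs(clean_question, clean_passages)
```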

How Is Salesforce Addressing the AI Trust Gap?

Salesforce's approach to the same challenge took shape through EinsteinGPT, announced shortly before Microsoft's update. The goal was to provide a privacy-preserving environment for running LLMs on Salesforce data and corporate data alike. Salesforce CEO Marc Benioff has described what he calls the "AI trust gap" -- the tension between every executive's desire to rapidly adopt AI and the legitimate concerns about what large language models do to data in corporate environments. As the Wall Street Journal reported, Benioff noted that every CEO he had spoken to had raised this exact issue.

The announced partnership between Salesforce and the Auto Club Group (AAA) -- one of the largest auto clubs in the United States -- signaled that Trust by Design LLM environments are not just a product category. They are fast becoming the expected standard for enterprise AI adoption.

That said, Salesforce's own terms make clear where organizational responsibility does not transfer. The Salesforce BAA is explicit that customers may not submit Protected Health Information (PHI) to certain Einstein features, and that enabling specific automation functions in a way that results in PHI exposure is the customer's liability, not Salesforce's. This matters enormously for organizations in regulated industries. It means that even within a platform that provides strong trust architecture, the obligation to sanitize sensitive data before it enters the system falls on the data controller -- the organization itself.
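
In practice, that customer-side obligation often reduces to a guard in application code. The sketch below is purely illustrative -- the record schema, field names, and downstream call are hypothetical, not Salesforce's API -- but it shows the shape of the control: refuse to forward anything flagged as PHI.

```python
class PHIPolicyError(Exception):
    """Raised when a record containing PHI would reach an AI feature."""

# Hypothetical schema: fields your compliance team has flagged as PHI.
PHI_FLAGGED_FIELDS = {"diagnosis", "medical_record_number", "treatment_notes"}

def send_to_platform(record: dict) -> None:
    """Stub for the actual platform call (not Salesforce's real API)."""

def submit_to_ai_feature(record: dict) -> None:
    """Customer-side gate: block PHI before it enters the platform,
    since that liability stays with the data controller."""
    phi_present = PHI_FLAGGED_FIELDS & set(record)
    if phi_present:
        raise PHIPolicyError(
            f"Record contains PHI fields {sorted(phi_present)}; "
            "de-identify or exclude them before submission."
        )
    send_to_platform(record)
```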

This is exactly the gap that purpose-built PII redaction and data de-identification tools are designed to fill. Platforms like Salesforce and Microsoft provide the infrastructure for trusted AI. Organizations need a layer underneath that ensures the data flowing into those platforms is already clean.

If your organization is navigating this challenge, talk to the Limina team about how automated de-identification fits into your AI and data workflows.

Why No Single Platform Can Solve the Entire Trust Problem

One of the most important takeaways from the work Microsoft and Salesforce have done is also a caution: even the most sophisticated enterprise AI platforms cannot handle every dimension of privacy on an organization's behalf.

Personal information identification and removal is one of those missing puzzle pieces. An organization can deploy Azure OpenAI Service, configure EinsteinGPT, and invest in robust access controls -- and still expose sensitive data if the content flowing into those systems contains unredacted names, medical record numbers, financial identifiers, or other personally identifiable information. The platforms provide the secure channel. The organization has to ensure the content traveling through that channel is appropriately handled before it gets there.

This is why Privacy by Design -- a concept developed by Dr. Ann Cavoukian and now embedded in frameworks like GDPR -- matters as a first-principles approach rather than a feature set. Privacy by Design asks organizations to treat privacy protection as an architectural requirement from the beginning of a project, not a filter applied at the end. It is the intellectual foundation for Trust by Design, and it is what the most forward-thinking technology companies are now building toward.

What Industries Face the Most Pressure to Adopt Privacy-First AI?

The urgency of this shift is not evenly distributed. Some industries operate under regulatory frameworks that make privacy in AI a legal obligation, not just a best practice.

In healthcare, any organization covered by HIPAA faces strict limitations on how PHI can be processed and shared. The use of AI tools that might inadvertently expose patient data to third-party model providers creates significant compliance exposure. The same applies to pharma and life sciences, where clinical trial data, patient records, and proprietary research data all carry heightened sensitivity.

In financial services, institutions face a combination of federal and state-level obligations around customer data that create significant liability for AI implementations that do not adequately protect account information, transaction records, and personally identifiable financial data. Insurance organizations face parallel obligations, particularly around claims data and sensitive health-related information used in underwriting.

Contact centers occupy a unique position in this landscape. They handle high volumes of sensitive personal information in unstructured formats -- voice transcripts, chat logs, free-text notes -- where PII is scattered throughout in unpredictable ways. Applying AI to contact center operations without a robust de-identification layer creates a significant risk of sensitive data leakage into model training or analysis pipelines.
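
As a rough illustration of what a de-identification layer over unstructured transcripts looks like, the sketch below uses spaCy's general-purpose NER as a stand-in for a purpose-built engine. It assumes the `en_core_web_sm` model has been installed, and an off-the-shelf model like this will miss much of what a production system must catch -- which is precisely why purpose-built de-identification exists.

```python
import spacy

# General-purpose NER as a stand-in for a purpose-built de-identification
# engine; assumes `python -m spacy download en_core_web_sm` has been run.
nlp = spacy.load("en_core_web_sm")

LABELS_TO_MASK = {"PERSON", "GPE", "ORG", "DATE"}  # extend per your risk model

def scrub_transcript(transcript: str) -> str:
    """Replace detected entities in free-text transcripts with placeholder
    tags before the text enters any training or analysis pipeline."""
    doc = nlp(transcript)
    pieces, last = [], 0
    for ent in doc.ents:
        if ent.label_ in LABELS_TO_MASK:
            pieces.append(transcript[last:ent.start_char])
            pieces.append(f"[{ent.label_}]")
            last = ent.end_char
    pieces.append(transcript[last:])
    return "".join(pieces)

# Expected output (model-dependent):
# "Hi, this is [PERSON] calling from [GPE] about my claim."
print(scrub_transcript("Hi, this is Maria Lopez calling from Toronto about my claim."))
```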

Across all of these sectors, the challenge is the same: how do you take advantage of the genuine productivity and intelligence gains that AI offers without creating unacceptable privacy, compliance, or reputational risk? The answer lies in building de-identification into the workflow before data reaches the model.

Reach out to Limina to learn how regulated industries are solving this challenge in practice.

The Longer Arc: Privacy, Bias Reduction, and Equitable Software

The push toward Trust by Design in LLM ecosystems is not the endpoint. It is the beginning of a much larger shift in how software is conceived and built.

Privacy becoming a core design requirement for AI will create pressure for the same standard to apply to the broader software ecosystem: ETL pipelines, API gateways, internal tooling, and consumer-facing products. The expectation of privacy by default, once established in high-visibility AI tools, will migrate to every layer of the stack.

Following closely behind is the question of algorithmic bias -- and there is a meaningful relationship between the two. The same attributes that can be used to identify an individual (origin, race, address, socioeconomic signals embedded in data) are also the attributes most likely to introduce bias into model outputs. Removing personal identifiers from training data and inference inputs is therefore not only a privacy intervention. It is also a bias-reduction measure. The two goals align more often than they conflict.

The third pillar of this shift is equitable access to software. Salesforce has launched an AI accelerator aimed specifically at extending the benefits of AI to nonprofits and underserved organizations, framing it as part of the same trust and equity agenda. Microsoft's venture arm, M12, partnered with GitHub to launch the GitHub Fund in November 2022, directing venture capital toward open-source software development -- helping to keep the infrastructure of the internet open and accessible.

The convergence of privacy, bias reduction, trust, and equitable access as core software development values represents a genuine maturation of the field. These are no longer soft commitments or aspirational statements. They are increasingly embedded in regulation, in product architecture, and in procurement decisions made by organizations that have learned, sometimes painfully, what happens when they are absent.

Where Does This Leave Organizations Building or Buying Software Today?

Software developers face a clear choice: adapt to these standards now, or be pushed to them later by regulators, enterprise customers, and a public that has internalized the expectation of privacy as a baseline right.

For organizations consuming software -- particularly in regulated industries -- the implication is just as clear. The platforms they deploy, whether that is Azure OpenAI, Salesforce, or anything else, are not automatically privacy-compliant simply by virtue of being enterprise-grade. The data that enters those platforms must be handled responsibly. That means understanding what sensitive information exists in the data, where it lives, and how to remove or mask it before it is processed by models that may expose it through outputs.

Limina is built for exactly this challenge. Its linguist-built data de-identification platform processes text, documents, audio, and images -- identifying and redacting over 50 entity types across more than 52 languages, at a throughput of 70,000 words per second, with accuracy exceeding 99.5%. Because the platform was built by linguists rather than around pattern matching alone, it understands context and entity relationships within documents in ways that rule-based or statistical-only approaches miss. That distinction matters enormously when the data is complex, idiosyncratic, or domain-specific -- as it almost always is in healthcare, financial services, pharma, and insurance environments.

Trust by Design is not a product feature. It is an architecture. And building that architecture requires the right foundation.
