The Colorado Privacy Act and the Connecticut Data Privacy Act: A Full Comparison
The Colorado Privacy Act and Connecticut Data Privacy Act share a common framework but differ in key ways. This guide breaks down both laws, compares their definitions of personal data, de-identification, and sensitive data, and explains how organizations can use Limina's technology to simplify compliance — or eliminate the applicability of these laws altogether.

While the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence has called for a federal privacy law, that legislation has yet to materialize. In the meantime, it is still the states that are shaping the data privacy landscape in the United States. The Colorado Privacy Act (CPA) and the Connecticut Data Privacy Act (CDPA) both came into effect in July 2023, following similar legislation in California and Virginia. Together, these state-level frameworks represent a growing patchwork of privacy obligations that organizations operating across state lines must understand and reconcile.
For businesses that collect, process, or share personal data, the arrival of the CPA and CDPA raises practical questions: What counts as personal information under each law? When does de-identification actually remove data from the scope of these acts? How do pseudonymization and de-identification differ legally, and what are the compliance implications of that distinction? And how should organizations structure their data practices to meet these obligations at scale?
This article addresses all of those questions. It examines the CPA and CDPA side by side, focusing on the definitions that matter most for compliance: personal information, de-identification, pseudonymization, and sensitive data. It also explains how Limina's data de-identification technology can help organizations satisfy the standards these laws set — or step outside their scope altogether.
How Do the CPA and CDPA Define Personal Information?
The starting point for any privacy compliance analysis is understanding what each law actually covers. Under both the CPA and the CDPA, personal information is defined as information that is linked or reasonably linkable to an identified or identifiable individual. Both laws explicitly exclude publicly available information from this definition, though that exclusion is narrowly drawn in both cases. They also exclude de-identified data, which is a significant carve-out that has real implications for how organizations manage their data pipelines.
One aspect that both acts share, and that is worth flagging for organizations in regulated industries, is the way they define "consumer." Both the CPA and the CDPA carve out individuals acting in their capacity as employees. This means that many of the consumer rights granted under each law — such as the right to access, correct, or delete personal data — do not extend to employees acting in that role. Employment records and job application data are not categorically excluded from the definition of personal information, but employees acting in a professional capacity do not receive the same protections as consumers under these statutes.
This distinction matters for healthcare organizations, financial services firms, and other enterprises that maintain large volumes of both consumer and employee data. The scope of applicable rights, and therefore the compliance obligations, differs depending on whose data is being processed and in what context.
What Is De-Identified Data Under the CPA and CDPA?
Both the CPA and the CDPA treat de-identified data as falling outside the definition of personal information. This is one of the most consequential provisions in either law: if data is properly de-identified, the acts do not apply to it. That means organizations that effectively de-identify their data can significantly reduce the regulatory burden associated with both statutes.
The two laws define de-identified data in closely aligned terms. Under both acts, de-identified data must satisfy two core requirements. First, the data must not reasonably be capable of being used to infer information about, or otherwise be linked to, an identified or identifiable individual or a device associated with that individual. Second, the controller processing the data must take active organizational and technical steps to support that status: implementing measures against re-identification, publicly committing to process the data only in de-identified form, and contractually requiring downstream data recipients to do the same.
This is not a passive standard. It is not enough to strip obvious identifiers and consider the job done. Controllers must maintain ongoing safeguards, demonstrate a public commitment to de-identified processing, and extend those obligations through their data sharing agreements. For organizations with complex data ecosystems — including those operating in pharma and life sciences or insurance — meeting this standard consistently requires purpose-built tooling, not ad hoc processes.
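The gap between stripping obvious identifiers and genuine de-identification is easy to see in code. The sketch below is purely illustrative — the patterns and sample record are invented for this example — and deliberately naive: it removes a few direct identifiers but leaves behind a quasi-identifier that keeps the record reasonably linkable to one person, which is exactly what the statutory standard prohibits.

```python
import re

# Naive, regex-only redaction of a few direct identifiers.
# Illustrative only: this falls short of the "not reasonably linkable"
# standard because quasi-identifiers survive untouched.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def naive_redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = ("Reach Jane at jane.doe@example.com or 555-867-5309; "
          "she is the only cardiologist in Ouray County.")
print(naive_redact(record))
# Direct identifiers are masked, but the name "Jane" and the phrase
# "the only cardiologist in Ouray County" still point to one individual.
```

Context-aware detection of names, roles, and indirect references — not just pattern matching — is what closes this gap.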
Limina's data de-identification platform is designed precisely for this kind of operational requirement. Built by linguists, Limina's technology is context-aware in a way that pattern-matching tools are not. It identifies and redacts over 50 types of personal data entities across structured and unstructured text, images, audio, and documents — and it does so with 99.5%+ accuracy at speeds of up to 70,000 words per second. That combination of precision and scale makes it possible to de-identify data in a way that genuinely meets the "reasonably linkable" standard these laws establish, rather than approximating it.
If your organization is working through how to apply these standards to your own data, speak with Limina's team to see the platform in action.
How Does Pseudonymization Differ from De-Identification?
The CPA and CDPA both define pseudonymized data, and those definitions are also closely aligned. Under both acts, pseudonymized data means personal data that cannot be attributed to a specific individual without the use of additional information, provided that additional information is kept separately and is subject to appropriate technical and organizational measures to prevent re-identification. The CPA uses the phrase "a specific individual" where the CDPA refers to "an identified or identifiable individual," but this is a minor textual variation rather than a substantive distinction.
The more important difference is the legal status of pseudonymized data compared to de-identified data. De-identification, when done correctly, removes data from the definition of personal information under both acts. Pseudonymization does not. Pseudonymized data remains personal data, because re-identification is still possible for anyone holding the separately stored additional information. The pseudonymized record still points to a real person; it is simply stored in a form that requires an extra step to connect back to that person.
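The structural idea behind both definitions — attribution is impossible without additional information held separately — can be sketched with a keyed hash. The key-handling details below are assumptions for demonstration, not a compliance recipe: the essential point is that the token alone reveals nothing, while the separately held key makes re-linking trivial.

```python
import hmac
import hashlib
import secrets

# The key is the "additional information" both acts describe: it must be
# kept apart from the pseudonymized data and protected by technical and
# organizational measures.
SECRET_KEY = secrets.token_bytes(32)

def pseudonymize(identifier: str) -> str:
    # Deterministic keyed hash: the same person always maps to the same
    # token, but the token cannot be attributed to the person without
    # SECRET_KEY.
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

token = pseudonymize("jane.doe@example.com")
# Anyone holding SECRET_KEY can recompute the mapping, which is why the
# record remains personal data under both acts.
```

Because the mapping survives as long as the key does, pseudonymized datasets stay inside the scope of both laws no matter how strong the hash is.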
This creates an interesting interpretive question about the relative standards the two definitions establish. The de-identification standard asks whether inferences about an individual are "reasonably" possible. The pseudonymization standard asks whether attribution "cannot be" achieved without additional information. Read literally, that would suggest pseudonymization requires a higher technical barrier than de-identification, even though de-identification is the more privacy-protective outcome. It remains to be seen how courts and regulators will interpret these provisions as enforcement of the CPA and CDPA matures. For now, organizations should not assume that pseudonymization provides the same compliance benefits as de-identification. The two techniques carry different legal consequences, and that distinction should inform how data is classified and processed.
What Counts as Sensitive Data Under the CPA and CDPA?
Both acts recognize a category of sensitive data that attracts additional protections beyond those applied to personal information generally. The two laws share substantial common ground on what qualifies as sensitive, but they diverge in one notable area.
Both the CPA and the CDPA treat the following as sensitive data: racial or ethnic origin, religious beliefs, mental or physical health conditions or diagnoses, sex life or sexual orientation, citizenship or immigration status, genetic or biometric data, and personal data collected from a known child.
The key difference is that the CDPA also includes precise geolocation data in its definition of sensitive data, while the CPA does not. For organizations operating in Connecticut, or processing data about Connecticut residents, this means that location-derived information requires the same heightened treatment as categories like health data or biometric identifiers. This is a meaningful distinction for industries like contact centers that may capture geolocation data incidentally through customer interactions, and for any organization that relies on location-aware applications or services.
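For teams encoding this divergence in a data governance layer, the difference reduces to a simple lookup. The category names and two-letter state codes below are illustrative labels chosen for this sketch, not statutory text.

```python
# Categories both acts treat as sensitive (paraphrased labels).
SHARED_SENSITIVE = {
    "racial_or_ethnic_origin", "religious_beliefs", "health_condition",
    "sex_life_or_sexual_orientation", "citizenship_or_immigration_status",
    "genetic_data", "biometric_data", "known_child_data",
}

SENSITIVE_BY_STATE = {
    "CO": SHARED_SENSITIVE,                            # CPA
    "CT": SHARED_SENSITIVE | {"precise_geolocation"},  # CDPA adds geolocation
}

def is_sensitive(category: str, state: str) -> bool:
    """Return whether a data category is sensitive in a given state."""
    return category in SENSITIVE_BY_STATE.get(state, set())

assert is_sensitive("precise_geolocation", "CT")
assert not is_sensitive("precise_geolocation", "CO")
```

A lookup like this makes the Connecticut-only treatment of geolocation explicit, so consent and processing rules can branch on jurisdiction rather than being hard-coded to one statute.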
Understanding which categories of data are sensitive under each applicable law is a prerequisite for building a compliant data governance framework. It affects what consent mechanisms are required, what processing restrictions apply, and what disclosures must be made to consumers.
How Limina Helps Organizations Comply with the CPA and CDPA
Meeting the requirements of the CPA and CDPA is not just a legal exercise. It requires technical infrastructure capable of identifying, classifying, and processing personal and sensitive data consistently and at scale. Limina's platform was built for exactly this kind of operational challenge.
Limina's de-identification technology supports compliance with both acts in several concrete ways. Its ability to detect and redact over 50 personal data entity types — including names, addresses, financial identifiers, health conditions, biometric references, and more — directly addresses the breadth of what each law defines as personal or sensitive information. Because the platform is built by linguists, it understands context and entity relationships within documents rather than relying on surface-level pattern matching. This matters because real-world data is rarely clean or predictable. A health record might reference a condition indirectly. A customer service transcript might contain sensitive information embedded in conversational language. Limina's contextual approach handles these cases with a level of accuracy that rules-based tools cannot match.
The platform also supports flexible deployment. Organizations that require maximum data control can deploy Limina on-premises, keeping data within their own infrastructure throughout the de-identification process. Those with different requirements can use Limina's API-based deployment. This flexibility is particularly relevant for organizations in regulated industries where data residency and security controls are themselves compliance requirements.
Limina also processes data in over 52 languages, which is critical for multinational organizations managing data about individuals across different jurisdictions. The CPA and CDPA apply based on the residency of the consumer, not the location of the controller, so organizations with a global customer base need de-identification capabilities that are not limited to English-language content.
For organizations using AI tools in their operations, Limina's integration capabilities allow sensitive data to be sanitized before it enters AI workflows, ensuring that outputs remain compliant with applicable privacy laws and that sensitive business or customer data is not inadvertently exposed through model interactions.
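The wiring for that pattern is simple to sketch. In the code below, `deidentify` and `call_model` are hypothetical placeholders — a stand-in redaction step and a stand-in LLM client, not Limina's actual API — and the point is the ordering: sanitize before the model ever sees the text.

```python
import re

def deidentify(text: str) -> str:
    # Placeholder redaction: in practice this would call your
    # de-identification platform; here it only masks email addresses.
    return re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)

def call_model(prompt: str) -> str:
    # Placeholder for an LLM API call.
    return f"(model response to: {prompt})"

def safe_completion(raw_text: str) -> str:
    # Sanitize first, so personal data never reaches the model, its logs,
    # or any downstream training pipeline.
    return call_model(deidentify(raw_text))

print(safe_completion("Summarize the complaint from jane.doe@example.com"))
```

Placing the sanitization step upstream of every model call means compliance does not depend on how each individual AI vendor handles retention.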
If your organization is evaluating how to align your data practices with the CPA, the CDPA, or both, connect with Limina's team to explore what's possible for your use case.
Why De-Identification Is the Most Valuable Compliance Strategy
Both the CPA and the CDPA are structured in a way that rewards de-identification. When data meets the legal standard for de-identification under either act, it falls entirely outside the scope of the law. That means no consent requirements, no access rights to honor, no deletion obligations to manage, and no sensitive data restrictions to navigate. De-identification does not just reduce compliance burden — it can eliminate it for the relevant data altogether.
This is why de-identification is not simply a technical checkbox but a strategic compliance decision. For organizations processing large volumes of personal data, the ability to de-identify effectively and consistently can determine whether entire data workflows fall inside or outside the regulatory perimeter. That has implications for product development, analytics, AI training, research, and any other use case that involves personal information.
The challenge, as both acts make clear, is that de-identification must be done rigorously. The standard is not cosmetic. Controllers must implement active safeguards, make public commitments, and extend de-identification obligations to third parties who receive the data. Meeting that standard requires a solution that is accurate enough to remove real risk of re-identification, fast enough to operate at production scale, and comprehensive enough to address the full range of personal and sensitive data categories the law covers.
Limina's data de-identification platform was built to meet that standard. For organizations operating in healthcare, financial services, pharma, insurance, or any other sector where personal data is central to operations, it represents a foundational layer of privacy infrastructure — one that supports compliance with the CPA and CDPA today, and with whatever state-level legislation follows.



