Data Privacy Regulation | Michael Adler

The NJDPA Takes Effect: A New Era of Data Privacy in New Jersey

The New Jersey Data Protection Act comes into force on January 15, 2025. Here's what you need to know.

The New Jersey Data Protection Act (NJDPA) officially comes into force on January 15th, 2025. This legislation marks a significant step in safeguarding the personal information of New Jersey residents and brings the state in line with a growing number of states enacting comprehensive data privacy laws.

Understanding the NJDPA's Core Principles:

The NJDPA centers around several key principles:

  • Consumer Control: Empowering New Jersey residents with greater control over their personal data.

  • Business Accountability: Placing clear obligations on businesses to handle personal data responsibly and transparently.

  • Risk-Based Approach: Requiring businesses to assess and mitigate the risks associated with their data processing activities.

Key Provisions for Businesses to Note:

  • Consumer Rights: The NJDPA grants New Jersey residents various rights, including the right to access, correct, delete, and obtain a copy of their personal data.

  • Data Security: Businesses must implement reasonable security measures to protect personal information from unauthorized access, use, or disclosure.

  • Sensitive Data: Processing sensitive data, such as health information or biometric data, requires explicit consumer consent.

  • Targeted Advertising and Profiling: Businesses engaged in targeted advertising or profiling must conduct data protection assessments to evaluate and mitigate risks.

  • Universal Opt-Out: Starting July 15th, 2025, businesses must recognize a universal opt-out mechanism, allowing consumers to easily opt out of the sale or sharing of their personal data (see the sketch below for one way a server can detect such a signal).
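One widely adopted universal opt-out signal is the Global Privacy Control (GPC): participating browsers send the request header Sec-GPC: 1 with every request. Whether GPC will be the mechanism recognized under the NJDPA is an assumption here, but as a minimal sketch, a Node.js server could detect and honor the signal like this:

```typescript
// Minimal sketch: detecting the Global Privacy Control (GPC) signal server-side.
// Assumption: GPC qualifies as a recognized universal opt-out mechanism under
// the NJDPA -- confirm against the final regulatory guidance.
import { createServer, IncomingMessage, ServerResponse } from "node:http";

// Returns true when the visitor's browser has sent the GPC opt-out header.
function hasUniversalOptOut(req: IncomingMessage): boolean {
  return req.headers["sec-gpc"] === "1";
}

const server = createServer((req: IncomingMessage, res: ServerResponse) => {
  if (hasUniversalOptOut(req)) {
    // Treat this visitor as opted out of the sale/sharing of personal data:
    // e.g., skip third-party ad-tech tags and persist the preference.
    res.setHeader("X-Opt-Out-Honored", "1"); // illustrative marker only
  }
  res.end("ok");
});

server.listen(3000);
```

On the client side, browsers that implement GPC expose the same preference to JavaScript as navigator.globalPrivacyControl.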

Preparing for the NJDPA:

Businesses subject to the NJDPA should take proactive steps to ensure compliance, including:

  • Reviewing and updating privacy policies.

  • Implementing data protection measures and conducting risk assessments.

  • Establishing procedures for responding to consumer rights requests.

  • Staying informed about the latest guidance and interpretations of the NJDPA.

White & Case has published a helpful article that goes into additional detail.

By understanding and complying with the NJDPA, businesses can demonstrate their commitment to protecting consumer privacy and fostering trust in the digital marketplace.

Data Privacy Regulation, Data Privacy | Michael Adler

CCPA Compliance in 2025: Updates to Fines & Penalties

CCPA fines increased on January 1, 2025. Here's what you need to know.

As of January 1st, 2025, businesses subject to the California Consumer Privacy Act (CCPA) must be aware of significant updates to the potential fines and penalties for non-compliance. These adjustments, mandated by California law and tied to the Consumer Price Index (CPI), reflect the state's ongoing commitment to protecting consumer data privacy.

Key Changes:

  • Increased Administrative Fines: Fines for non-compliance have increased to $2,663 per violation.

  • Higher Penalties for Intentional Violations: Intentional violations or those involving the mishandling of data from minors (under 16) now carry a penalty of $7,988 per violation.

Implications for Businesses:

These increased penalties underscore the importance of prioritizing CCPA compliance. Businesses that handle the personal information of California consumers should review their data privacy practices and ensure they have the necessary safeguards in place to protect consumer data.

What Businesses Should Do:

  • Perform compliance audits

  • Review your policies and how they are being implemented

  • Educate your employees on CCPA requirements and best practices

  • Engage in incident response planning



Is Your Business Ready for the EU AI Act? (Part 2)

The EU AI Act is hefty with:

  • 180 recitals

  • 113 articles

  • 13 annexes

  • 144 pages

  • And a partridge in a pear tree…

It gets even more complex: different rules within these 144 pages apply depending on the risk level of the AI system in question and the role your business plays. And, unlike GDPR enforcement, penalties are assigned based on the size of the business, its role, and the nature of the infraction.

So… How Does the EU AI Act Work?

The EU AI Act takes a risk-based approach to regulating AI systems and General Purpose AI (GPAI) models. It classifies them into four categories:

  • Minimal Risk

  • Limited Risk

  • High Risk

  • Unacceptable Risk

Each risk level comes with its own set of requirements, and these also vary depending on an organization's role (e.g., provider, deployer, importer). Most of the AI Act's obligations (and their associated risks) center around high-risk systems.


Unacceptable Risk: The No-Go Zone

Let's start with the most straightforward category: unacceptable risk. These AI systems are simply prohibited in the EU because they pose a serious threat to the rights of individuals.

Here are some examples of unacceptable risk AI systems:

  • Systems that use manipulative or subliminal techniques.

  • Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions).

  • Systems that apply social scoring and classify people based on their behavior, socio-economic status, or personal characteristics, leading to discriminatory treatment.

  • Predictive policing based solely on profiling or personal characteristics (sounds a bit like Minority Report, right?).

  • Systems used for untargeted scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases.

  • Systems that recognize emotions in workplaces or schools.

  • Systems that exploit vulnerabilities in individuals due to their age, physical or mental disabilities, socioeconomic status, or other traits to influence their behavior in harmful ways.

  • Systems that categorize individuals based on biometric data to deduce sensitive attributes like race, political opinion, or sexual orientation (excluding law enforcement with lawful datasets).

The potential for abuse in these cases is clearly high, and the consequences could be irreversible.

 

High-Risk Systems: Treading Carefully

High-risk systems are at the core of the AI Act's regulatory framework. An AI system is classified as high-risk based on its intended purpose. This means that careful analysis of each specific case is crucial to determine whether an AI system falls into this category.

An AI system is considered high-risk if:

  • It's used in products that already require a third-party conformity assessment under EU regulations, such as explosives, toys (which, by the way, you shouldn't combine!), and safety components in vehicles like self-driving cars.

  • It's listed in Annex III of the AI Act. This annex covers a wide range of applications, including biometric systems, critical infrastructure, education, employment, essential services, credit scoring, law enforcement, migration, asylum and border control, and the administration of justice and democratic processes.

Important exception: An AI system may not be considered high-risk if it doesn't pose a significant risk to individuals, such as when it's designed for a narrow, procedural task.

Providers of high-risk AI systems have several obligations under the AI Act, including:

  • Risk management

  • Data and data governance

  • Technical documentation

  • Record-keeping (using automated logs)

  • Transparency and provision of information to deployers

  • Human oversight

  • Accuracy, robustness, and cybersecurity

On top of these, additional obligations apply to providers, deployers, importers, and distributors of high-risk AI systems. These stakeholders must collaborate to uphold the principles of transparency, safety, and accountability to foster trust and innovation in the AI ecosystem.

For example, deployers must take appropriate measures to ensure they deploy high-risk AI systems according to instructions and maintain "AI literacy" to monitor the system and exercise human oversight. They must also report incidents and notify providers and distributors of any inappropriate or risky behavior by the AI system.

It's important to note that the AI Act's obligations often overlap with the GDPR, especially since AI systems frequently process personal data.

Importers and distributors also have their own set of obligations under the AI Act. (We'll save the finer details for a future deep dive!)

 

Limited Risk: Transparency is Key

Limited risk AI systems, also known as "Transparency Risk" systems, include those that interact directly with individuals or generate synthetic content like audio, images, videos, or text. These are common in B2C applications. Think AI chatbots, AI image generators, emotion recognition systems, deepfakes, and so on.

To prevent these systems from infringing on individual rights, the AI Act imposes transparency obligations on providers and deployers.

Providers must ensure that:

  • AI systems that interact directly with people (like ChatGPT) are designed to make users aware they are interacting with AI.

  • Systems that generate content (like audio or text files) clearly label their output as artificially generated or manipulated.

Deployers must:

  • Inform individuals when they are exposed to an emotion recognition or biometric categorization system and process their data in accordance with the GDPR.

  • Clearly disclose any deepfakes as artificially generated or manipulated.

  • Disclose when AI is used to create or manipulate text published to inform the public on matters of public interest.

 

Minimal Risk: Smooth Sailing (Mostly)

If your AI system doesn't fall into any of the higher-risk categories, congratulations! You have a "minimal risk" AI system and aren't subject to specific obligations under the AI Act. However, you still need to comply with other relevant laws, such as the GDPR (yes, it comes up a lot!).


Multi-Category AI Systems: The Juggling Act

What happens when an AI system meets the criteria for multiple risk categories? These are called "multi-category AI systems," and they can be tricky to navigate.

Organizations must comply with the requirements for each category their technology falls into. AI systems with biometric capabilities are a prime example.

  • Systems categorizing people based on biometric data to deduce sensitive attributes like race, political opinions, or sexual orientation fall under the Unacceptable Risk classification.

  • The definition of a high-risk AI system specifically includes biometric systems.

  • AI with biometric categorization is also considered limited risk.

As you can see, an AI system can be subject to multiple layers of obligations. It's also important to consider how a system might evolve. For example, a limited risk AI chatbot that learns to infer sensitive attributes for marketing purposes could become an Unacceptable Risk.
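To make the layering concrete, here is a minimal sketch of the idea in TypeScript. The trait flags and the mapping are illustrative only, not a legal test; a real classification requires case-by-case legal analysis.

```typescript
// Illustrative sketch: a system is classified into EVERY risk category it
// matches, and the organization must satisfy the obligations of each one.
type RiskCategory = "unacceptable" | "high" | "limited" | "minimal";

interface SystemTraits {
  deducesSensitiveAttributesFromBiometrics: boolean; // prohibited practice
  listedInAnnexIII: boolean;                         // high-risk use case
  interactsDirectlyWithPeople: boolean;              // transparency duties
}

function classify(t: SystemTraits): Set<RiskCategory> {
  const categories = new Set<RiskCategory>();
  if (t.deducesSensitiveAttributesFromBiometrics) categories.add("unacceptable");
  if (t.listedInAnnexIII) categories.add("high");
  if (t.interactsDirectlyWithPeople) categories.add("limited");
  if (categories.size === 0) categories.add("minimal");
  return categories;
}

// A biometric system can land in several categories at once:
const cats = classify({
  deducesSensitiveAttributesFromBiometrics: true,
  listedInAnnexIII: true,
  interactsDirectlyWithPeople: false,
});
console.log([...cats]); // -> [ "unacceptable", "high" ]
```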

 

GPAI Models: A Chapter of Their Own

The AI Act dedicates an entire chapter to regulating General Purpose AI models. A GPAI model:

  • Is an AI model, not a system (though it may be integrated into one).

  • Is trained on a massive amount of data with self-supervision (think GPT-3, trained on over 570 gigabytes of text data!).

  • Is general-purpose and can perform a wide variety of tasks.

GPAI models are regulated based on whether they pose a systemic risk. A model is presumed to pose systemic risk when it is trained using a very large amount of compute (the Act sets the threshold at 10^25 floating-point operations) or when the European Commission designates it as such.

Providers of GPAI models must:

  • Maintain and provide technical documentation to authorities and downstream providers.

  • Comply with EU copyright law.

  • Cooperate with the European Commission and national authorities.

  • Publish a summary of the training data.

  • Appoint an authorized representative in the EU.

Providers of GPAI models that present systemic risks have additional obligations, such as:

  • Performing model evaluation and adversarial testing.

  • Assessing and mitigating systemic risks at the EU level.

  • Ensuring adequate cybersecurity protection.


Enforcement and Penalties

Like the GDPR, the AI Act includes administrative fines for violations. These penalties are designed to be effective, proportionate, and dissuasive.

Maximum penalties under the AI Act:

  • €35 million or 7% of worldwide annual turnover for using a prohibited AI system.

  • €15 million or 3% of worldwide annual turnover for non-compliance with other obligations.

  • €7.5 million or 1% of worldwide annual turnover for providing incorrect or misleading information to regulators.

Unlike the GDPR, the AI Act considers business size when determining fines. For large organizations, the maximum fine is the greater of the percentage or amount. For startups and SMEs, it's the lower of the two.
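As a minimal sketch of that greater-of/lower-of logic, using the tier figures above (treating "SME" as a simple flag is a simplification of the Act's definitions):

```typescript
// Sketch of the AI Act's fine ceilings: a fixed amount or a percentage of
// worldwide annual turnover, combined differently for large firms vs. SMEs.
type FineTier = { fixedEur: number; turnoverPct: number };

const TIERS: Record<string, FineTier> = {
  prohibitedPractice: { fixedEur: 35_000_000, turnoverPct: 0.07 },
  otherObligations:   { fixedEur: 15_000_000, turnoverPct: 0.03 },
  misleadingInfo:     { fixedEur: 7_500_000,  turnoverPct: 0.01 },
};

function maxFine(tier: FineTier, worldwideTurnoverEur: number, isSme: boolean): number {
  const pctAmount = tier.turnoverPct * worldwideTurnoverEur;
  // Large organizations: whichever is GREATER. Startups/SMEs: whichever is LOWER.
  return isSme ? Math.min(tier.fixedEur, pctAmount) : Math.max(tier.fixedEur, pctAmount);
}

// A large provider with EUR 2B turnover using a prohibited system:
// max(EUR 35M, 7% of EUR 2B = EUR 140M) = EUR 140M.
console.log(maxFine(TIERS.prohibitedPractice, 2_000_000_000, false)); // 140000000
```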

 

Implementation Timeline

The AI Act entered into force on August 1, 2024, but its enforcement will roll out in phases:

  • February 2, 2025: Ban on prohibited AI systems.

  • August 2, 2025: Obligations for providers of GPAI models.

  • August 2, 2026: Remaining obligations.

  • August 2, 2027: Obligations for AI systems that are products requiring third-party conformity assessment and AI systems used as safety components in such products.

 

Your Unique Path to Compliance

This introduction to the AI Act provides a general overview. How the AI Act applies to you will depend on several factors, including the nature of your AI system, your role, its interaction with other laws like the GDPR, and your business model.

I hope this overview has been helpful! I look forward to diving deeper into this topic in the future. In the meantime, don't forget to subscribe to our YouTube channel for more video content or reach out to schedule a complimentary consultation today.


Is Your Business Ready for the EU AI Act? (Part 1)


What is the EU AI Act?

The EU AI Act is a new law designed to regulate the development and use of artificial intelligence (AI) within the European Union. This law has a broad reach, applying to anyone who:  

  • Provides AI systems within the EU.  

  • Deploys AI systems within the EU.  

  • Imports AI systems into the EU.  

  • Makes the output of their AI system available in the EU (regardless of where they are based).  

In essence, if your AI system or its output touches the EU in any way, you need to understand these regulations and integrate compliance into your business operations. Enforcement will be rolled out in phases, with the earliest provisions taking effect in February 2025.

By the Numbers

The EU AI Act is a substantial piece of legislation, comprising:

  • 180 recitals

  • 113 articles

  • 13 annexes

  • 144 pages

...and a partridge in a pear tree?

To add to the complexity, different rules apply depending on the risk level of the AI system and your business's role in its lifecycle. Unlike GDPR enforcement, penalties under the AI Act consider the size of the business, its role, and the nature of the infraction.  

 

AI Systems vs. GPAI Models

The AI Act governs both AI systems and general-purpose AI models (GPAI).  

An AI system is a machine-based system that:

  • Operates with some degree of autonomy and may even adapt after deployment.  

  • Infers how to generate outputs (predictions, content, recommendations, decisions) from its inputs.

  • Can influence physical or virtual environments.  

A GPAI model is an AI model that:

  • Is trained on a massive dataset with self-supervision.  

  • Can perform a wide range of tasks and has broad applicability.  

  • Can be integrated into various downstream systems or applications. 

While the AI Act applies to both, different obligations apply to each based on their potential risk. GPAI models, in particular, may pose a systemic risk. Non-compliance can lead to hefty fines, regulatory scrutiny, and damage to reputation and goodwill.  

 

Key Players in the AI Act

Understanding the different roles defined in the AI Act is crucial for compliance.

  • Provider: Develops the AI system or GPAI model (or has it developed on their behalf) and places it on the market under their name or trademark (e.g., OpenAI, Google, Anthropic). This also includes companies that use third-party large language models (LLMs) with tailored prompts to create specific outputs.

  • Deployer: Uses a provider's AI system for a specific purpose (e.g., using an AI chatbot for customer service).

  • Importer: A person or organization within the EU that imports an AI system from outside the EU.  

  • Distributor: Makes an AI system available in the EU without being a provider or importer.  

  • Product Manufacturer: Incorporates an AI system into their product. If the AI system is high-risk (e.g., a safety component in a car), the manufacturer takes on the role of a provider.  

It's important to remember that businesses can hold multiple roles under the AI Act and must fulfill the obligations associated with each role. Just like under the GDPR, an entity's role is determined by its actions in practice, not just contractual definitions.  

 

The Journey Ahead

This introduction to the AI Act provides a foundational understanding. There's much more to explore, such as the role of supervisory bodies and how the AI Act applies to public entities, research, and non-public models. As enforcement phases approach, we can expect further commentary and guidance.

How the AI Act applies to you will depend on your specific circumstances. We're here to help you navigate these complexities and ensure your AI initiatives are compliant and responsible.

If you'd like to learn more about how the AI Act relates to your business, schedule a complimentary consultation today.

Data Privacy Regulation, GDPR | Michael Adler

Privacy Principles by Design

An introduction to Privacy by Design, and how you can gain a strategic advantage by crafting a Privacy Principles by Design approach to compliance with the GDPR, the CCPA, and whatever data privacy regulations may come in the future.

"Privacy by design" is a concept that has been tossed around a lot lately, and it’s one that's becoming increasingly important in our data-driven world. It essentially means that when you're creating a new product, service, or system, you should consider and integrate privacy protections from the very beginning, rather than treating it as an afterthought, so really, it’s more like “privacy integrated into the design.”

Think of it like this: instead of building a house and then trying to add a security system later, you're incorporating things like strong locks, alarm systems, and maybe even a moat with sharks (okay, maybe not sharks) into the initial blueprints.

In the context of data privacy, this could mean things like:

  • Minimizing data collection: Only collect the data you absolutely need.

  • Giving users control: Allow users to access, correct, or delete their data.

  • Building in security: Use encryption and other security measures to protect data.

  • Being transparent: Be open about how you collect, use, and share data.

By incorporating privacy from the get-go, you can build trust with your users and avoid potential privacy issues down the road.
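To make the "minimizing data collection" bullet above concrete, here is a minimal sketch, assuming a hypothetical signup form, of enforcing an explicit allowlist so that fields you don't need are dropped before anything is stored:

```typescript
// Data minimization sketch: only an explicit allowlist of fields survives.
// The field names are hypothetical; the point is that anything a form or
// third-party SDK sends beyond what you need never reaches your database.
const ALLOWED_FIELDS = ["email", "displayName"] as const;
type AllowedField = (typeof ALLOWED_FIELDS)[number];

function minimize(input: Record<string, unknown>): Partial<Record<AllowedField, unknown>> {
  const out: Partial<Record<AllowedField, unknown>> = {};
  for (const field of ALLOWED_FIELDS) {
    if (field in input) out[field] = input[field];
  }
  return out;
}

// birthDate was never needed, so it is silently dropped:
console.log(minimize({ email: "a@example.com", displayName: "Ana", birthDate: "1990-01-01" }));
// -> { email: "a@example.com", displayName: "Ana" }
```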

Now, let’s go even deeper into the concept of Privacy by Design, with a particular focus on a practical, risk-based approach that I created and refer to as “Privacy Principles by Design.” This approach is particularly well-suited for startups, SMBs, and entrepreneurs who are navigating the complexities of data privacy regulations, such as the General Data Protection Regulation (known more commonly as GDPR).

Understanding the GDPR Challenge

The GDPR, as you may know, is a substantial piece of legislation. It's 261 pages long with 99 articles. That's a lot to digest! Traditionally, privacy by design meant building your entire data processing system with every single one of those GDPR requirements in mind. That's a daunting task for any organization, let alone a smaller, growing business. The sheer volume and complexity of the requirements can be overwhelming, leading to potential delays, increased costs, and the risk of non-compliance.

Introducing “Privacy Principles by Design”

This is where the “privacy principles by design” approach comes in. Instead of getting bogged down in the minutiae of specific requirements, we focus on the core principles of the GDPR. These principles, which are at the heart of the regulation, include:

  • Lawfulness, fairness, and transparency: Processing personal data in a lawful, fair, and transparent manner.

  • Purpose limitation: Collecting personal data only for specified, explicit, and legitimate purposes.

  • Data minimization: Collecting only the minimum amount of personal data necessary for the intended purpose.

  • Accuracy: Keeping personal data accurate and up-to-date.

  • Storage limitation: Limiting the storage of personal data to the necessary period.

  • Integrity and confidentiality (or security): Ensuring the security of personal data through appropriate technical and organizational measures.

  • Accountability: Demonstrating compliance with the GDPR principles.

By aligning your data processing activities with these principles, you're essentially building a strong foundation of compliance. It's a more achievable goal, especially for businesses with limited resources. And the risk-based approach that we apply in our strategic consulting process allows you to demonstrate a reasonable level of compliance early on, which is crucial for attracting investors, getting business from customers (especially enterprise customers), satisfying regulators, and avoiding the "technical debt" of non-compliance down the line.

Building a Strong Foundation

Going back to that house analogy: the GDPR requirements are like detailed blueprints with every tiny detail annotated but no key for interpreting the symbols you're looking at, while the principles of the GDPR are the fundamental building codes, the rules you follow during construction to make sure the final product is fundamentally safe. Focusing on the principles ensures that your foundation is strong, even if you haven't added all the finishing touches yet.

Advantages of the Privacy Principles by Design Approach

  • Sustainable Competitive Advantage: By proactively addressing privacy concerns and demonstrating compliance, we can help you differentiate yourself from competitors and build trust with customers.

  • Mitigation of Regulatory Risk: While startups and smaller businesses may not face the same level of scrutiny as large corporations, compliance is still essential. A principles-based approach helps reduce the risk of penalties.

  • Avoid a Regressive Tax: Unfortunately, GDPR applies to all businesses equally, with no allowance for differences in size or revenue. The financial cost of compliance for startups and SMBs can represent a much larger investment relative to their overall operating budget than it does for large corporations. A principles-based approach maximizes your compliance R.O.I. by keeping the "I" small while preserving the "R." In our house-building analogy, it's as if your town had one electrician who charged a flat rate no matter how big the building or how long the work: you're building a bungalow, but you're paying the same amount as the giant construction conglomerate downtown that's building a skyscraper.

  • Positive Impression for Investors and Customers: Demonstrating a commitment to privacy principles can attract investors and reassure customers, especially enterprise customers, that their data is being handled responsibly. Companies that demonstrate privacy compliance see significant increases in their valuations, especially where that compliance relates to their core business activities.

  • Solid Foundation for Future Growth: As your business grows and evolves, we can build upon this foundation and develop a more comprehensive privacy program that adapts to changing regulatory requirements and business needs, especially as you expand and become subject to new regulations. While GDPR applies to all businesses equally, the bigger your business gets, the more scrutiny you'll attract from regulators, and those regulators often hold larger businesses to a higher standard and expect greater sophistication in their privacy compliance.

GDPR's Global Impact

Remember, GDPR is not just a European regulation; it has global implications. First, due to what's known as "extraterritorial application," even if you're not located in the EU or UK, GDPR's rules still apply to your business as soon as you process the personal data of people in the EU or UK. Second, by adopting our Privacy Principles by Design approach, you're not just complying with GDPR; you're preparing your business for a global landscape of data privacy laws. Many other countries and regions have implemented, are implementing, or are considering similar regulations based largely on GDPR, and the principles enshrined in the GDPR already are, or are likely to be, reflected in those laws.

Strategic and Proactive Approach

In essence, Privacy Principles by Design is about being smart and strategic. It's about understanding the spirit of the law, not just the letter of the law. It's about building a culture of privacy within your organization. And it's about positioning your business for success in a world where data privacy is increasingly important.

We can work with your business to embrace the principles of privacy by design. Returning to our house analogy: even if you're a general contractor yourself, you can't just decide to break ground on a new building one day. You need experts, such as engineers, architects, and inspectors who check that everything is up to code, so you have a solid plan and a path forward to make sure what you're building will stand the test (or tests) of time.

By working with Aetos to create this strategic blueprint for your company, you're taking a proactive step towards protecting your business, your customers, and your future by building a foundation for sustainable growth in a privacy-conscious world. Remember, privacy is not just a compliance issue; it's a business opportunity.

By prioritizing privacy, you can:

  • Enhance Customer Trust: Demonstrating a commitment to protecting customer data fosters trust and loyalty. In an era where data breaches and privacy concerns are prevalent, prioritizing privacy can be a key differentiator for your business.  Enterprise customers, in particular, are sensitive to introducing risks from vendors or other businesses into their own privacy and security ecosystem, and your business’s ability to demonstrate a savvy level of compliance can provide you with a significant advantage in winning those deals.

  • Mitigate Legal and Financial Risks:  Proactive privacy measures help you navigate the complex and rapidly evolving regulatory landscape, reducing the risk of legal disputes, fines, and reputational damage.

  • Gain a Competitive Advantage: Businesses that prioritize privacy position themselves as leaders in their industry, attracting customers and investors who value data security and privacy. This is especially true for your core business activities. Regulators have turned to a new deterrent for businesses built on data that was processed in non-compliant ways: "algorithmic disgorgement," a scary, not-safe-for-work-sounding way to say that businesses that built their products, code, AI systems, or algorithms by processing data (even a little bit) in violation of privacy laws have been required to delete not only that data but also the resulting products, code, AI systems, and algorithms created with it. This type of penalty could quickly bring about the collapse of a business or scare away potential investors who don't want to inherit that risk.

  • Foster Innovation: A privacy-centric approach encourages innovation by promoting the development of new technologies and business models that respect and protect user privacy.

If you embrace privacy as a core business value and integrate it into your strategic planning, you can build a resilient and successful organization that is well-prepared for the future. Remember, privacy is not just a checkbox to tick; it's a fundamental aspect of building a sustainable and trustworthy business in the digital age.
