On December 10, 2021, the Federal Trade Commission (FTC) issued a Notice that it was “considering initiating a rulemaking” in February 2022 that would empower the agency to take punitive action against companies that commit data privacy and security violations. The FTC’s objective for this rule is “to curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does not result in unlawful discrimination.” Significantly, the agency may enforce the rule via fines even for first-time offenders, although the scope of this authority is still to be determined. The FTC is not the first, nor will it be the last, U.S. regulator to focus on companies’ data privacy and security practices, particularly vis-à-vis the use of artificial intelligence (AI) in algorithms. As a result, companies and their compliance functions may need to be even more vigilant in verifying that they are not engaged in “unfair or deceptive” practices involving inadequate protection of consumers’ data. What follows is our analysis of the FTC’s Notice of proposed rulemaking and what companies may do to prepare for this added scrutiny.


As background, the FTC is the “only federal agency with both consumer protection and competition jurisdiction” in the United States. Created in 1914 and operating with a $383 million budget in 2021, the FTC is an independent law enforcement agency whose stated mission is “protecting consumers and competition by preventing anticompetitive, deceptive, and unfair business practices” in broad sectors of the economy. In addition to its authority to investigate potential violations of the law by companies and individuals, the agency has federal rulemaking authority to issue industry-wide regulations and enforces a wide range of statutes and rules, including the Federal Trade Commission Act, the Clayton Act, the Telemarketing Sales Rule, the Fair Credit Reporting Act, the Identity Theft Act, the Equal Credit Opportunity Act, and more than 70 other laws.

Here, the FTC’s announcement of its Notice of proposed data privacy and security rulemaking coincided with its annual Statement of Regulatory Priorities, also issued on December 10, 2021. In its Statement, the agency outlined other potential rules, including proposed measures to (a) prevent “abuses stemming from surveillance-based business models,” (b) define “unfair methods of competition,” and (c) “define with specificity unfair or deceptive acts or practices” by companies. The goal of all these proposed rules appears to be to protect consumers’ data and to sanction companies that commit data privacy and security abuses. Moreover, the FTC looks to be targeting companies’ surveillance-based business models for intrusive surveillance practices, anti-competitive behavior, and AI-driven discriminatory decision-making.

Notably, in addition to the FTC’s Notice and Statement, the agency also updated its ongoing data privacy and security rulemakings, including those under the Children’s Online Privacy Protection Act (COPPA), the Privacy of Consumer Financial Information Rule, and the Safeguards Rule, which includes a proposed breach notification requirement. No doubt as a result of rapid changes in the marketplace involving unprecedented digital growth, big data, and data monetization, the FTC issued a request for public comment on the definitions, notices, parental-consent requirements/exceptions, and safe-harbor provisions in the above rules. All of these actions by the FTC evince an increasingly aggressive regulatory and enforcement stance by the agency.

Key Takeaways 

With respect to the Notice in particular, there are three key takeaways involving data security, data privacy, and AI practices the FTC may seek to regulate under its potential new rule. 

  1. Curbing Lax Security Practices: The Notice’s focus on security practices is instructive and a warning to companies that, in order to prevent breaches of their consumer data, they may need to allocate additional resources to the relevant business functions, including compliance, IT, legal, and risk, that work together to enhance a company’s enterprise-wide data security practices. Indeed, as we noted in Accenture’s State of Cybersecurity Resilience 2021, “there were on average 270 attacks (unauthorized access of data, applications, services, networks or devices) per company” in 2021, so lax cybersecurity practices are a pervasive problem across all industries regulated by the FTC.
  2. Limiting Privacy Abuses: Similarly, companies may need to review and validate that their uses of data and surveillance are transparent and include the necessary consumer consent, opt-out, and disclosure provisions, in order to prevent companies, their employees, and their stakeholders from violating consumer data privacy rights.
  3. Ensuring Algorithmic Decision-Making Does Not Result in Unlawful Discrimination: Finally, the Notice references “unlawful discrimination” in algorithmic decision-making, which indicates that the FTC intends to expand its current focus on AI. In particular, the agency previously proposed best practices around the use of AI, including: (a) utilizing complete and accurate data sets; (b) testing algorithms before and after use; (c) embracing transparency and independent standards; and (d) validating AI statements and promises. Future AI regulatory and enforcement actions by the FTC may generate additional standards and requirements for companies’ algorithmic decision-making practices.
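To make the second of those best practices concrete, below is a minimal sketch of one way a compliance team might test an algorithm’s decisions before and after deployment: a disparate-impact screen based on the “four-fifths rule” familiar from U.S. employment-discrimination guidance. This is an illustration only; the function names, the 0.8 threshold, and the sample data are our assumptions, not anything the FTC’s Notice prescribes, and a real program would pair such screens with legal review.

```python
# Illustrative sketch only: a simple disparate-impact screen ("four-fifths
# rule") that a compliance team might run on a model's decisions pre- and
# post-deployment. The names and the 0.8 threshold are illustrative
# assumptions, not an FTC requirement.

def selection_rate(decisions):
    """Fraction of favorable (True) outcomes among a group's decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group selection rate to the higher one (0 to 1)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def passes_four_fifths_rule(group_a, group_b, threshold=0.8):
    """Flag potential disparate impact when the ratio falls below 80%."""
    return disparate_impact_ratio(group_a, group_b) >= threshold

# Example: loan-approval decisions (True = approved) for two groups.
group_a = [True, True, True, False, True]    # 80% approval rate
group_b = [True, False, False, False, True]  # 40% approval rate

print(disparate_impact_ratio(group_a, group_b))   # 0.5
print(passes_four_fifths_rule(group_a, group_b))  # False -> warrants review
```

Running such a check on training data before launch and on live decisions afterward is one practical reading of “testing algorithms pre/post-use”; a failing ratio does not itself establish unlawful discrimination, but it flags decisions for closer human and legal scrutiny.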

Importantly, in addition to this Notice, the FTC has already been active in enforcing its current data privacy and security rules against companies through “compulsory processes” that empower the agency to conduct investigations utilizing subpoenas and other civil investigative processes. For example, since 2002 the FTC has pursued close to 100 cases against companies for violating the Fair Credit Reporting Act (FCRA) by engaging in unfair or deceptive practices involving inadequate protection of consumers’ personal data. Separately, since 2005 the FTC has also brought approximately 35 cases alleging violations of the consumer data privacy and security provisions of the Gramm-Leach-Bliley Act (GLBA). The agency has collected more than $65 million in civil penalties from these enforcement cases and, if this Notice is any indication, more cases and fines are in the offing.

Regarding the next steps for this Notice, it was initiated pursuant to “Section 18 of the FTC Act,” a section rarely used for rulemaking because of its significant constraints. Specifically, this section is reserved for rules that address “unfair or deceptive” practices and requires a finding that the practices are “prevalent.” While some question whether the FTC has the authority to utilize this section for its rulemaking and any final rule may be challenged in court, a provision in the Biden Administration’s Build Back Better Act, which faces obstacles to passage in the U.S. Congress, would explicitly provide the FTC the authority to enforce this rule. All this said, it is unlikely that any eventual rule will be approved and enforced for at least one year, given the requirements that draft rules be circulated for public comment and that public hearings be held, among other steps in the FTC’s lengthy rulemaking process.

In the end, however, the FTC’s rule will likely become law. When it does, companies will have yet another U.S. data privacy and security regulation to reckon with. And the stakes, including potentially significant fines for first-time violations, may be even higher. As such, it is imperative that companies’ data protection and AI practices comply with the FTC’s rules and the many other U.S. federal and state regulatory requirements. A best-in-class compliance function that protects a company’s consumer data from “unfair or deceptive” practices is a must. And we should emphasize that, even with a top-notch compliance function, companies need to remain vigilant against all manner of internal and external cybersecurity attacks and privacy abuses, as new forms arise almost daily.
