AI in litigation series: Dark patterns and artificial intelligence

March 27, 2026

About the series

As companies increasingly integrate artificial intelligence (AI) into their daily operations, novel litigation based on those integrations will follow. Tracking the volume of cases across all relevant jurisdictions is time-consuming. This AI in Litigation series aims to help by highlighting key AI-related cases and the developments companies need to know.

This post focuses on dark patterns in AI. It provides background on what dark patterns are, summarizes a recently filed case alleging that the defendants improperly used dark patterns in their AI platforms and offers practical takeaways.

Summary of the article

  • Dark patterns are deceptive or manipulative design choices that undermine a user’s ability to make autonomous and informed decisions.
  • In D.W. v. Character Technologies, Inc. et al. (E.D. Va. No. 2:25-cv-00824), the plaintiff alleges that the defendants used dark patterns to obscure relevant notices and disclaimers related to their AI product. The plaintiff’s dark patterns claim is based on a theory of strict product liability.
  • To avoid similar issues, companies should consider implementing AI design governance policies and processes, including pre-launch reviews of user flows that affect notice, consent, data use, cancellation and content safety.

Summary of dark patterns

The term “dark patterns,” also called “deceptive design,” refers to “a user interface designed or manipulated with the effect of substantially subverting or impairing user autonomy, decision-making, or choice, and includes any practice the Federal Trade Commission refers to as a dark pattern.” See, e.g., Tex. Bus. & Com. Code § 541.001(10). Such interfaces may inappropriately steer users, improperly hide or downplay important notices or make it unreasonably difficult to opt out of certain data processing or data collection, practices increasingly recognized by regulators and academics alike. Put simply, a dark pattern is a user interface intentionally designed to “trick or manipulate users into making choices they would not otherwise have made and that may cause harm.” Bringing Dark Patterns to Light, Fed. Trade Comm’n (Sept. 2022).

Dark patterns are not limited to AI—or even to technology products generally—but in the AI context they might appear in designs that obscure risks or system actions, hinder user control (such as through confusing opt-out processes) or anthropomorphize systems in ways that overstate or misstate safety or capability.

Dark patterns are regulated under a variety of statutes and regulations: they can trigger liability under deceptive trade practice statutes, attract scrutiny from federal and state enforcers and invite private litigation. In particular, some recent state AI-governance laws explicitly address dark patterns in AI systems. See, e.g., The Texas Responsible AI Governance Act: What your company needs to know before January 1, Norton Rose Fulbright (December 2025).

Summary of the parties, allegations and claims in D.W. v. Character Technologies, Inc.

These issues are now being tested in court. In D.W. v. Character Technologies, Inc., a Virginia mother, D.W., filed suit on behalf of her minor child, A.W., in the Eastern District of Virginia against Character Technologies (maker of Character AI), its co-founders, Noam Shazeer and Daniel De Freitas Adiwarsana, and Google LLC. The complaint alleges that Character’s AI platform was designed, marketed and deployed to reach Virginia minors like A.W. and ultimately caused A.W. harm.

At a high level, this is an AI product-safety and child-protection case. The plaintiff brings multiple claims, including product liability, negligence and violations of the Children’s Online Privacy Protection Act and the Virginia Consumer Protection Act, all of which coalesce around the same theory: that Character AI was unsafe for minors, misrepresented its safety features, collected children’s data without proper consent and used that data for AI model training. Most relevant to this article, the plaintiff specifically alleges that this liability arises from the impermissible use of dark patterns in the platform. The plaintiff seeks damages, injunctive relief requiring stronger safeguards for minors and algorithmic disgorgement. If the plaintiff prevails and obtains that relief, it would signal that state remedies are aligned with recent Federal Trade Commission orders requiring algorithmic disgorgement and destruction of improperly collected and processed sensitive personal data (including, in some cases, children’s data).

The dark patterns allegations and claim

The plaintiff’s dark patterns allegations are based on common-law theories of strict product liability. The plaintiff alleges that the design of Character’s AI chat interface, including anthropomorphic prompts and engagement-driven features, created unreasonable risks for minors and that the accompanying warnings were inadequate. The complaint connects these design elements to dark-pattern practices by alleging that the interfaces obscure important notices and disclosures. The complaint also identifies what the plaintiff views as reasonable alternative designs, such as clearer disclosures, age gating, parental controls and safer default settings, to support the claim that the challenged design was defective.

More broadly, this case reflects an emerging trend: litigants are beginning to apply traditional tort frameworks (here, design-defect and failure-to-warn theories) to AI, using established product-safety concepts to test how courts will treat algorithmic design choices and user-interface dynamics.

These common-law theories for private claims exist alongside dark patterns claims that can arise under statute. On top of traditional tort frameworks, several states, including California and Texas, expressly prohibit obtaining consent through dark patterns. See Cal. Civ. Code § 1798.99.31(b)(7) (“A business that provides an online service, product, or feature likely to be accessed by children shall not take any of the following actions… Use dark patterns to lead or encourage children to provide personal information beyond what is reasonably expected…”); Tex. Bus. & Com. Code § 541.001(6)(c) (defining consent to expressly “not include…agreement obtained through the use of dark patterns”).

These restrictions are not limited to the state level; the Federal Trade Commission also enforces these principles under Section 5 of the FTC Act and examined dark patterns at length in a 2022 staff report. See Bringing Dark Patterns to Light, Fed. Trade Comm’n (Sept. 2022).

Conclusion: Practical takeaways

This case is still at the pleading stage, and the defendants have not yet answered. Given that early posture, the case has not yet produced decisions that companies can review or act on. We will continue to monitor the case and provide updates on relevant developments.

The case does, however, underscore the importance of considering dark patterns when implementing new technologies and as part of an overall AI governance program. As companies continue to expand their investment in and use of AI, new litigation theories targeting these design choices are likely to follow.

To mitigate potential dark patterns issues, companies should consider implementing design governance practices that identify and address potential problems, including pre-launch reviews of user flows that affect notice, consent, data use, cancellation and content safety. Taking these steps is likely to reduce enforcement exposure and is typically less costly than remediating problems after they arise.

Our team has significant experience with AI-related governance, litigation and eDiscovery. For questions regarding AI litigation matters, please contact us.