March 6, 2026

New York Bill Would Create Liability for Chatbot Proprietors Offering Professional Advice

Holland & Knight Healthcare Blog
Nili Yolin

The New York State Senate has advanced a bill that would bar "proprietors" of artificial intelligence (AI)-powered chatbots from providing "substantive" responses or advice that, if provided by a human, would constitute the unauthorized practice of a licensed profession under the Education Law or the unauthorized practice of law under the Judiciary Law. For purposes of the bill, a "proprietor" includes any entity that owns, operates or deploys the chatbot but excludes third‑party developers that merely license the underlying technology.

First introduced in April 2025, Senate Bill (SB) 7263 would create a private right of action for actual damages resulting from violations and, in cases of willful violations, reasonable attorneys' fees and costs.

The bill also would require proprietors to provide a clear, conspicuous and easy-to-read notice informing users that they are interacting with an AI system rather than a human. At the same time, the bill provides that proprietors may not waive or disclaim liability simply by disclosing to consumers that the chatbot is non-human.

The bill's sponsors cite a February 24, 2025, New York Times article describing warnings from the American Psychological Association to the Federal Trade Commission that "A.I. chatbots 'masquerading' as therapists, but programmed to reinforce, rather than challenge, a user's thinking, could drive vulnerable people to harm themselves or others." Against that backdrop, the bill is framed as a measure that "ensures professional advice is provided only by human professionals and not by artificial intelligence or chatbots." Whether the bill actually accomplishes that goal is open to question, but from a policy perspective it is consistent with New York's long-standing approach to professional licensure and its restrictions on the corporate practice of the professions, both of which are already treated as enforcement priorities. In that sense, the bill extends those principles to AI systems capable of mimicking professional judgment at scale.

Where we see a meaningful departure is in the enforcement mechanism: The bill would enable civil lawsuits against chatbot proprietors for "substantive" outputs without requiring state regulators to serve as the primary gatekeepers.

SB 7263 advances amid a fast-evolving patchwork of state laws aimed at addressing the risks associated with AI chatbots. It comes on the heels of another New York law, effective November 5, 2025, that requires operators of AI companions to make reasonable efforts to detect and address suicidal ideation or expressions of self-harm communicated by a user to the AI companion. Together, these measures reflect a broader national trend of states testing different regulatory levers, including AI transparency mandates (Maine), governance frameworks for AI systems (Colorado, Texas) and guardrails around AI use in mental health therapy (Illinois, Nevada).

If enacted, the bill's effectiveness will likely turn on how courts and litigants interpret and prove several key elements, such as what constitutes a "substantive" response, whether such response amounts to the "practice" of a profession and who qualifies as a "proprietor" within an increasingly complex web of AI ownership, operation and responsibility.

Conclusion

Given New York's established posture on professional licensure and the corporate practice of the professions, it comes as no surprise that the bill's sponsors are positioning it as part of a broader strategy to prevent unlicensed "digital practice" from displacing regulated professional judgment. Read closely and taken at face value, however, the bill may do less to meaningfully curb unlicensed professional conduct than to expand litigation exposure for chatbot deployers, calling into question the practical limits of regulating AI through a licensure-based framework.

Holland & Knight attorneys will continue to monitor the bill and report on any material developments.
