
Urging the Federal Trade Commission to take action on unregulated AI

APA and APA Services express concerns about the risks and unintended consequences of underregulated generative AI technologies and urge the FTC to take action to protect the public from deceptive practices

Cite This Article
American Psychological Association. (2025, January 12). Urging the Federal Trade Commission to take action on unregulated AI. https://www.apaservices.org/advocacy/news/federal-trade-commission-unregulated-ai


In a recent letter to the Federal Trade Commission (FTC) (PDF, 93KB), the American Psychological Association and its companion organization, APA Services, have raised significant concerns about the unregulated development and deceptive deployment of generative AI technologies, particularly chatbots that claim to serve as companions or therapists. The letter urges the FTC to protect the public from the deceptive practices of unregulated AI chatbots and to promote the safe and ethical use of AI in mental health.

These AI-driven chatbots, such as those developed by Character.ai and Replika, are increasingly being used by the public, including vulnerable populations like children and adolescents, without appropriate safeguards or transparency. These chatbots have the potential to cause significant harm, especially when they misrepresent themselves as qualified mental health professionals.

Recently, there have been several troubling incidents in which individuals, particularly adolescents struggling with mental health issues, have experienced negative impacts from interactions with these AI chatbots. For instance, lawsuits have been filed against Character.ai alleging that its chatbots falsely claimed to be licensed therapists, leading to tragic outcomes.

In the letter, the association urges the FTC to investigate these deceptive practices under Section 6(b) of the FTC Act, emphasizing that AI chatbots should not be allowed to mislead the public by posing as trained mental health providers. The letter also notes that these chatbots are not subject to the same regulations, safeguards, and training requirements as human professionals, and it underscores the need for careful consideration of AI's influence on misinformation, bias, individual privacy, and ethical concerns.

While AI has the potential to extend the reach of psychological science and services, it is crucial to ensure that these technologies are used responsibly and do not endanger vulnerable individuals. The association calls for swift action to halt further harm caused by these chatbots and will continue to collaborate with the FTC to address this urgent matter.

For more information, contact Corbin Evans.
