
In Mata v. Avianca, Inc., a personal injury lawsuit filed in May 2023, the plaintiff's attorneys used ChatGPT to generate a legal motion; the motion cited several judicial opinions that ChatGPT had fabricated, and the attorneys were later sanctioned. In July 2024, the American Bar Association (ABA) issued its first formal ethics opinion on attorneys using generative AI. Legislators have used the chatbot as well: on November 29, 2023, Porto Alegre city councilman Ramiro Rosário revealed that a municipal ordinance he had proposed had been entirely written by ChatGPT, and that he had presented it to the rest of the council without making any changes or disclosing the chatbot's involvement.

In November 2025, OpenAI acknowledged that there have been "instances where our 4o model fell short in recognizing signs of delusion or emotional dependency", and reported that it is working to improve safety. A 2025 Sentio University survey of 499 LLM users with self-reported mental health conditions found that 96.2% use ChatGPT, with 48.7% using it specifically for mental health support or therapy-related purposes.

OpenAI has not revealed technical details and statistics about GPT-4, such as the precise size of the model.

In March 2023, a bug allowed some users to see the titles of other users' conversations. Shortly after the bug was fixed, users could not see their conversation history. Later reports showed the bug was much more severe than initially believed, with OpenAI reporting that it had leaked users' "first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date".

ChatGPT is programmed to reject prompts that may violate its content policy. Despite this, users may "jailbreak" ChatGPT with prompt engineering techniques to bypass these restrictions. One such workaround, popularized on Reddit in early 2023, involved prompting ChatGPT to assume the persona of DAN, an acronym for "Do Anything Now", and instructing the chatbot that DAN answers queries that would otherwise be rejected by the content policy.

ChatGPT's output can also reflect biases in its training data. These limitations may be revealed when ChatGPT responds to prompts that include descriptors of people: in one instance, ChatGPT generated a rap in which women and scientists of color were asserted to be inferior to white male scientists.

The reward model of ChatGPT, designed around human oversight, can be over-optimized and thus hinder performance, an example of the optimization pathology known as Goodhart's law.
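
As a loose illustration of this failure mode (this is not OpenAI's training code; the reward functions, the "verbosity" feature, and all numbers below are invented for the example), the following Python sketch optimizes a single policy variable against an imperfect proxy reward and tracks what happens to the true objective:

```python
# Toy illustration of Goodhart's law in reward-model terms.
# Everything here is hypothetical: responses are reduced to one
# scalar feature ("verbosity"), and both reward functions are made up.

def true_reward(verbosity: float) -> float:
    # What human raters actually want: moderately detailed answers,
    # best at verbosity = 3.
    return -(verbosity - 3.0) ** 2

def proxy_reward(verbosity: float) -> float:
    # An imperfect learned reward model: it captures the peak near 3
    # but also picked up a spurious "longer looks better" term that
    # dominates far from the training distribution.
    return -(verbosity - 3.0) ** 2 + 0.5 * verbosity ** 2

def optimize(reward, steps: int = 60, lr: float = 0.05) -> list[float]:
    # Naive gradient ascent via finite differences, standing in for
    # policy optimization against the reward model.
    v, trajectory = 0.0, []
    for _ in range(steps):
        grad = (reward(v + 1e-4) - reward(v - 1e-4)) / 2e-4
        v += lr * grad
        trajectory.append(v)
    return trajectory

for step, v in enumerate(optimize(proxy_reward), start=1):
    if step % 5 == 0:
        print(f"step {step:2d}: verbosity={v:4.2f}  "
              f"proxy={proxy_reward(v):6.2f}  true={true_reward(v):6.2f}")
```

Running the sketch shows the proxy score climbing at every step while the true score improves only for roughly the first fifteen steps and then falls, which is the over-optimization pattern the sentence above describes: past a point, further optimization against the learned reward actively degrades the quality it was meant to measure.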