ChatGPT-4o is the latest version of OpenAI's chatbot, and it is raising concerns about how people interact with it. The model can mimic human behavior and responses convincingly, which has led OpenAI to warn that users might develop feelings for the chatbot.
ChatGPT-4o features faster responses and a new voice capability that closely simulates human speech. As the billion-dollar company continues to enhance its product, it has noticed some trends in how people are using ChatGPT-4o. The chatbot was designed to offer an experience similar to conversing with a human, but OpenAI may have underestimated how strongly users could connect with it emotionally. The company has shared its observations:
"During initial testing, including red teaming and internal trials, we noticed users using language that suggested forming connections with the chatbot. For example, phrases like ‘This is our last day together’ were observed. While these expressions might seem harmless, they highlight the need for further research into how such emotional responses could develop over time. More diverse user groups and independent studies will help us better understand these risks."
Socializing with an AI like ChatGPT-4o may affect human-to-human relationships. People might start forming social bonds with the AI, which could reduce their need for human interaction. This might benefit lonely individuals but could also strain existing relationships. Extended use of the chatbot might even alter social norms. For example, the model's deferential behavior, such as letting users interrupt and take control of the conversation at any time, would be unusual in conversations between people.
Developing feelings for ChatGPT-4o could also have negative effects. Previously, users might have questioned errors or misleading information from earlier chatbot versions because those versions felt more like machines. Now that the chatbot offers a near-human experience, users may be more inclined to accept what it says without scrutiny.
Recognizing these trends, OpenAI plans to monitor how users form emotional connections with ChatGPT-4o and adjust its systems as needed. The company might also consider adding a disclaimer at the start of conversations to remind users that they are talking to an artificial intelligence program, not a real person.