News

These days, it's not unusual to hear stories about people falling in love with artificial intelligence. People are not only using AI to solve equations or plan trips; they are also telling chatbots ...
A new study from researchers at the University of Pennsylvania shows that AI models can be persuaded to break their own rules ...
OpenAI and Meta will adjust chatbot features to better respond to teens in crisis after multiple reports of the bots ...
After a California teenager spent months on ChatGPT discussing plans to end his life, OpenAI said it would introduce parental controls and better responses for users in distress.
When you ask ChatGPT or other AI assistants to help create misinformation, they typically refuse, with responses like “I ...
OpenAI and Meta are adjusting how their chatbots respond to teenagers showing signs of distress. OpenAI, the maker of ChatGPT ...
The parents of a teenager who died by suicide have filed a wrongful death suit against ChatGPT owner OpenAI, saying the chatbot discussed ways he could end his life after he expressed suicidal ...
The company will limit its AI characters and train the chatbot not to discuss self-harm and suicide, or have romance conversations with children.
The lack of guardrails leaves AI chatbots open to manipulation and has resulted in them generating antisemitic and other hateful online content, researchers said.
Tech products like Dialogues and Sway aim to improve civility and encourage healthy discourse in college classrooms.
Generally, AI chatbots are not supposed to do things like call you names or tell you how to make controlled substances. But, just as with a person, the right psychological tactics seem to ...