AI chatbots are providing inaccurate and inconsistent information that could have serious implications for patients' health.
Mustafa Suleyman, Microsoft’s head of AI, said he is growing ‘more and more concerned’ about the risk (Brendan McDermid/Reuters). Microsoft’s head of artificial intelligence (AI) has warned that ...
In November, the Food and Drug Administration (FDA) held a Digital Health Advisory Committee meeting where it considered treating artificial intelligence mental health chatbots as medical devices. As ...
Nearly two-thirds of U.S. adults who have tried generative AI have used it for financial guidance, and most say it improved their situation, the New York Times reported Saturday, citing a recent ...
A Stanford-led study found that most AI chatbots have stopped including medical disclaimers in health responses, raising concerns that users might trust potentially unsafe or unverified advice. Most ...
Today’s “large language models” like ChatGPT don’t have the training to act in users’ best interest, but an MIT professor ...
Nearly one-third of teens say they use chatbots to roleplay romantic and sexual relationships. But some parents and digital safety experts warn that these apps can put teens at risk of suicide.
Jianna Jin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their ...
Experts on adolescent psychiatry and psychology say it’s important to have open and continuous discussion with kids about their use of artificial intelligence and AI chatbots. Parents should set ...