A 60-year-old man ended up in the hospital after following dietary advice from ChatGPT, the popular AI chatbot. He wanted to eliminate table salt from his diet for health reasons, but the chatbot suggested sodium bromide as a replacement for sodium chloride, a substitution that proved seriously harmful to his health.
The case was reported in the Annals of Internal Medicine. Sodium bromide resembles table salt in appearance, but it is toxic and not intended for human consumption. It was historically used as a sedative; today, according to the National Institutes of Health, it is found mainly in cleaning and manufacturing products.
When the man arrived at the hospital, he presented with a range of troubling symptoms: extreme fatigue, insomnia, poor coordination, facial acne, red skin bumps, and excessive thirst. Together, these pointed to bromism, a condition caused by prolonged exposure to sodium bromide.
His condition then took a more alarming turn. He developed paranoia, believing his neighbor was trying to poison him, and suffered both auditory and visual hallucinations. After attempting to leave the hospital, he was placed on a psychiatric hold. Medical staff treated him with fluids, electrolytes, and antipsychotic medication, and he was discharged after three weeks.
The case's authors highlighted the dangers of relying on AI for health advice, warning that these tools can lead to "preventable adverse health outcomes" because they lack common sense and critical judgment. Because the researchers did not have access to the man's ChatGPT conversation log, it is difficult to determine precisely what advice he received.
Outside experts warned against substituting AI for professional medical advice. Dr. Jacob Glanville told Fox News Digital that AI systems are not equipped with common sense and can produce harmful suggestions if users fail to apply their own judgment. Dr. Harvey Castro echoed that sentiment, stressing that AI is a tool, not a doctor.
Dr. Castro pointed out that large language models like ChatGPT generate text based on statistical likelihood rather than verified fact. He called for stronger regulations and safeguards around AI-generated medical advice, and believes that integrating medical knowledge bases and automated risk flags could help prevent incidents like this one; a rough sketch of that risk-flag idea follows below.
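To make Dr. Castro's suggestion concrete, here is a minimal sketch of what a keyword-based risk flag might look like: a post-processing step that scans a chatbot's reply for substances that are unsafe to ingest and marks flagged replies for review. Everything in this sketch, including the function name, the substance list, and the review logic, is a hypothetical illustration, not a description of ChatGPT or of any real safeguard.

```
# Hypothetical illustration of a "risk flag" safeguard, not any real product's code.
# A post-processing step scans a model's reply for substances that are unsafe
# to ingest and marks the reply for review before it reaches the user.

# Illustrative (and deliberately tiny) list of flagged substances.
UNSAFE_SUBSTANCES = {
    "sodium bromide",
    "potassium bromide",
    "sodium fluoride",
}

def flag_risky_reply(reply: str) -> dict:
    """Attach risk flags to a reply using simple keyword matching."""
    text = reply.lower()
    flags = [s for s in UNSAFE_SUBSTANCES if s in text]
    return {
        "reply": reply,
        "risk_flags": flags,
        # In a real system this might trigger a medical knowledge-base
        # lookup or human review rather than a simple boolean.
        "needs_review": bool(flags),
    }

result = flag_risky_reply("You could replace sodium chloride with sodium bromide.")
print(result["risk_flags"])    # ['sodium bromide']
print(result["needs_review"])  # True
```

A production safeguard would be far more involved, handling synonyms, dosage context, and a vetted medical knowledge base, but even a crude filter like this shows where a risk flag could sit in the response pipeline.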
OpenAI, the company behind ChatGPT, stated that its AI is not intended for medical treatment and directed users to seek professional guidance, adding that it is continuing to work on safety improvements. Overall, the incident serves as a cautionary tale about the limitations and risks of using AI for health-related decisions.
Darnell Thompkins is a Canadian-born American and conservative opinion writer who brings a unique perspective to political and cultural discussions. Passionate about traditional values and individual freedoms, Darnell’s commentary reflects his commitment to fostering meaningful dialogue. When he’s not writing, he enjoys watching hockey and celebrating the sport that connects his Canadian roots with his American journey.