
WHO Calls for Caution in Using AI Language Models for Health

TEHRAN (Tasnim) – The World Health Organization (WHO) is urging caution in the use of artificial intelligence (AI)-generated large language model (LLM) tools in order to safeguard human well-being, safety, autonomy, and public health.

According to the WHO website, LLMs, such as ChatGPT, Bard, BERT, and others, mimic human communication and have rapidly gained popularity. Their potential to support health needs has sparked excitement, but WHO stresses the need for careful examination of the risks of using LLMs to improve access to health information, support decision-making, or expand diagnostic capacity in under-resourced settings, so as to protect people's health and reduce inequities.

While WHO acknowledges the value of technologies like LLMs in supporting healthcare professionals, patients, researchers, and scientists, concerns arise from the inconsistent exercise of caution, transparency, inclusion, public engagement, expert supervision, and rigorous evaluation typically applied to new technologies.

Hasty adoption of untested systems may lead to errors by healthcare workers, harm to patients, and erosion of trust in AI, undermining the long-term potential benefits of such technologies worldwide.

Key concerns demanding rigorous oversight for the safe, effective, and ethical use of LLMs include:

Biased data used for training AI, which can generate misleading or inaccurate health information, posing risks to equity and inclusiveness.

LLM-generated responses may appear authoritative but contain serious errors, particularly for health-related inquiries.

Use of data without prior consent and inadequate protection of sensitive user-provided data, including health data.

Potential misuse of LLMs to create and disseminate convincing disinformation, blurring the distinction between reliable health content and falsehoods.

While WHO is committed to leveraging new technologies to improve human health, it recommends that policy-makers ensure patient safety and protection while technology firms work to commercialize LLMs.

WHO proposes addressing these concerns and gathering substantial evidence of benefits before widespread incorporation of LLMs in routine healthcare, whether by individuals, care providers, or health system administrators and policy-makers.

The WHO emphasizes the significance of applying ethical principles and appropriate governance, as outlined in its guidance on the ethics and governance of AI for health. The six core principles identified by WHO are: protecting autonomy; promoting human well-being, safety, and the public interest; ensuring transparency, explainability, and intelligibility; fostering responsibility and accountability; ensuring inclusiveness and equity; and promoting responsive and sustainable AI.
