WHO outlines considerations for regulation of artificial intelligence for health

The World Health Organization (WHO) has released a new publication on artificial intelligence (AI) for health, highlighting the importance of safety, effectiveness, and dialogue among stakeholders. AI tools have the potential to transform the health sector by enhancing clinical trials, improving medical diagnosis and treatment, and supplementing healthcare professionals’ knowledge and skills.

AI’s Potential in Healthcare

With the increasing availability of health care data and advancements in analytic techniques, AI has the potential to revolutionize healthcare. WHO recognizes the benefits of AI in strengthening clinical trials, improving medical diagnosis and treatment, and enhancing person-centered care. In settings with a shortage of medical specialists, AI can be particularly beneficial in interpreting retinal scans and radiology images, among other applications.

Challenges and Risks

However, the rapid deployment of AI technologies, including large language models, raises concerns about their performance and potential harm to end-users. The use of health data by AI systems also raises privacy and security issues, necessitating robust legal and regulatory frameworks. Dr. Tedros Adhanom Ghebreyesus, WHO Director-General, acknowledges the challenges of AI, including unethical data collection, cybersecurity threats, and the amplification of biases or misinformation.

Regulating AI for Health

To address the responsible management of AI health technologies, the WHO publication outlines six areas for regulation:

1. Transparency and Documentation: Emphasizing the importance of documenting the entire product lifecycle and tracking development processes to foster trust (see the documentation sketch after this list).

2. Risk Management: Comprehensively addressing issues such as intended use, continuous learning, human interventions, training models, and cybersecurity threats to ensure safety.

3. External Validation and Intended Use: Clear communication about the intended use of AI and external validation of data to assure safety and facilitate regulation.

4. Data Quality: Rigorous evaluation of systems pre-release to ensure they do not amplify biases and errors.

5. Privacy and Data Protection: Addressing complex regulations such as the General Data Protection Regulation (GDPR) in Europe and the Health Insurance Portability and Accountability Act (HIPAA) in the United States to understand jurisdiction and consent requirements for safeguarding privacy.

6. Collaboration: Encouraging collaboration between regulatory bodies, patients, healthcare professionals, industry representatives, and government partners to ensure compliance throughout the lifecycle of products and services.
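
As an illustration of the first area, the sketch below shows one way lifecycle documentation for an AI health tool could be structured. It is a minimal example, not a schema prescribed by WHO; the class and field names are assumptions chosen for illustration.

```python
# Minimal sketch of structured lifecycle documentation (illustrative only,
# not a WHO-prescribed format). All field names are assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class LifecycleRecord:
    """One documented event in the product lifecycle (design, training, validation, update, ...)."""
    stage: str          # e.g. "data collection", "training", "external validation", "post-market update"
    date: str           # ISO date of the event
    description: str    # what was done and why
    evidence: List[str] = field(default_factory=list)  # links to test reports, approvals, audits


@dataclass
class ModelDocumentation:
    """Top-level record a developer could maintain and a regulator could review."""
    product_name: str
    intended_use: str                  # ties into area 3: a clearly stated intended use
    training_data_summary: str         # provenance and demographic coverage of the training data
    known_limitations: List[str] = field(default_factory=list)
    lifecycle: List[LifecycleRecord] = field(default_factory=list)

    def add_event(self, record: LifecycleRecord) -> None:
        """Append a lifecycle event so the development process remains traceable."""
        self.lifecycle.append(record)
```

In practice such documentation would live in a quality-management system rather than in application code; the point is only that every lifecycle stage leaves a reviewable record.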

Managing Risks of AI Biases

AI systems depend not only on the code they are built with but also on the data they are trained on. Better regulation can help mitigate the risks of AI amplifying biases in training data. For instance, regulations can ensure that attributes such as gender, race, and ethnicity are reported in training data to make datasets representative of diverse populations.
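
As a minimal sketch of what such a check could look like in practice (not a method from the WHO publication), the snippet below audits whether a training dataset reports the demographic attributes mentioned above and flags groups that appear under-represented. The column names and the 5% threshold are assumptions for illustration.

```python
# Hypothetical representativeness audit for a training dataset.
# Attribute names and threshold are illustrative assumptions.
import pandas as pd

REQUIRED_ATTRIBUTES = ["gender", "race", "ethnicity"]  # attributes that should be reported
MIN_GROUP_SHARE = 0.05                                 # hypothetical under-representation threshold


def audit_representativeness(df: pd.DataFrame) -> dict:
    """Report missing demographic columns and the share of each subgroup."""
    report = {"missing_attributes": [], "group_shares": {}}
    for attr in REQUIRED_ATTRIBUTES:
        if attr not in df.columns:
            report["missing_attributes"].append(attr)
            continue
        shares = df[attr].value_counts(normalize=True, dropna=False)
        report["group_shares"][attr] = shares.to_dict()
        for group, share in shares.items():
            if share < MIN_GROUP_SHARE:
                print(f"Warning: group '{group}' in '{attr}' covers only {share:.1%} of records")
    return report


if __name__ == "__main__":
    # Tiny synthetic example purely for illustration
    data = pd.DataFrame({
        "gender": ["female", "male", "female", "male", "female"],
        "race": ["A", "A", "A", "A", "B"],
    })
    print(audit_representativeness(data))
```

A regulator could require this kind of reporting as evidence that a dataset covers the population the tool is intended to serve, alongside the pre-release evaluation described in area 4 above.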

Guidance for Governments and Regulatory Authorities

The new WHO publication aims to provide key principles for governments and regulatory authorities to develop new guidance or adapt existing guidance on AI at national or regional levels. By following these principles, countries can effectively regulate AI, harness its potential, and minimize risks in various healthcare applications, from cancer treatment to tuberculosis detection.

These regulatory considerations form part of WHO's broader work on the ethical and governance challenges of AI for health, and build on the organization's role in setting global standards and guidance for health.

AI has the potential to revolutionize healthcare, making it more accessible and equitable. The technology has already been deployed to detect diseases quickly, monitor pandemics, and even provide therapeutic support to people with mental health conditions, all of which requires careful regulation and governance.

WHO’s considerations emphasize that ethical questions need to be addressed at every stage of the development and deployment of AI for health. These include: ensuring that the data underlying healthcare AI models are used for the public good and are free of bias and discrimination; formulating a clear framework for privacy and security; enhancing transparency in AI-assisted decision-making; and consistently applying ethical standards in the use of AI for health.

The considerations also address governance, setting out clear operational guidelines and direction for managing AI in healthcare. These include standards for implementation, such as the type of training required, the minimum resources and expertise that should be in place, how AI systems should be monitored and evaluated, and the accountability and reporting structures for decision-making.

The considerations provide a much-needed set of global principles to shape the use of AI in healthcare and mark an important milestone in addressing the ethical and governance challenges of AI for health. Following widespread consultation with stakeholders, further guidance and resources are expected in the coming months.

