Regulation of AI in healthcare is still in its infancy. Many countries have only recently issued national plans, guidelines, or codes—which often highlight essential principles for developing ethical AI—without having passed much substantive law. Notable examples include the European Parliament’s resolution on Civil Law Rules on Robotics (February 2017), the European Union’s Ethics Guidelines for Trustworthy AI (April 2019), the European Commission’s Proposal for a Regulation on a European approach for Artificial Intelligence (April 2021), and the OECD’s Council Recommendation on Artificial Intelligence (May 2019).
AI deployment in healthcare could drive game-changing improvements for underserved communities and developing countries in general. From enabling community health workers to better serve patients in remote rural areas to helping governments in developing countries prevent deadly disease outbreaks, there is growing recognition of the potential of AI tools to improve health access, quality, and cost. Health systems in many developing countries face obstacles including shortages of healthcare workers, medical equipment, and other medical resources. AI tools have the potential to optimize existing resources, help overcome these workforce shortages, and significantly improve healthcare delivery and outcomes in low-income settings.
However, the deployment of AI in resource-constrained settings has been surrounded by considerable hype, and more research is needed on how best to deploy and effectively scale AI solutions in health systems across developing countries. It is challenging to take disruptive technology innovations from developed countries and replicate them to address the unique needs of the developing world.
Unlike developed countries, where abundant and readily available data have driven healthcare decisions, governments and organizations in developing countries often lack reliable systems for data collection, verification, and aggregation. Because developing countries are deprived of the systems needed to generate and maintain robust, accurate, and relevant health data, using data to address disease prevention, intervention assessment, and community education remains challenging.
No single country or stakeholder has all the answers to these challenges. International cooperation and multi-stakeholder discussion are crucial to developing responses to guide the development and use of trustworthy AI for broader public health.
This paper identifies both the barriers to AI deployment at scale in developing countries and the types of regulatory and public policy actions that can best accelerate the appropriate use of AI to improve healthcare in developing-country contexts. While AI technologies hold great potential for improving healthcare around the globe, they cannot be considered a panacea for global health challenges. Scaling AI technologies carries risks and tradeoffs. Therefore, the adoption, acceleration, and use of AI should strengthen local health systems and be owned and driven by the needs and priorities of developing countries’ governments and stakeholders, helping them best serve their populations.
The paper starts with a landscape assessment of AI and big data analytics deployment in developing countries, considering three fields of AI deployment: diagnosis and clinical care, health research and drug development, and health systems management and planning. It then outlines the key challenges that regulators must address in governing AI in healthcare, such as data access, data quality, data privacy, and ethics. Lastly, the paper outlines key governance mechanisms for AI innovation in healthcare in developing countries, including data collection and management, data sharing, open-source solutions for data de-identification, open-source data banks, and data annotation.