ICMR’s Ethical Guidelines for AI in Healthcare and Biomedical Research

Artificial intelligence (AI) has revolutionized many sectors, including healthcare. Recognizing this potential, the Indian Council of Medical Research (ICMR) has recently released ethical guidelines for AI in healthcare and biomedical research. These guidelines aim to guide the effective and safe development, deployment, and adoption of AI-based technologies in healthcare.

Applications of AI in Healthcare

ICMR recognizes several applications of AI in healthcare, such as diagnosis and screening, therapeutics, preventive treatments, clinical decision-making, public health surveillance, complex data analysis, predicting disease outcomes, behavioural and mental healthcare, and health management systems. However, since AI cannot be held accountable for its decisions, an ethically sound policy framework is essential to guide the development and application of AI technologies in healthcare.

Patient-Centric Ethical Principles

ICMR has outlined ten key patient-centric ethical principles for AI application in the health sector. These principles are accountability and liability, autonomy, data privacy, collaboration, risk minimization and safety, accessibility and equity, optimization of data quality, non-discrimination and fairness, validity, and trustworthiness.

The autonomy principle ensures human oversight of the functioning and performance of the AI system. Before initiating any process, it is critical to obtain consent from the patient and inform them of the physical, psychological, and social risks involved. The safety and risk minimization principle covers, among other areas, preventing unintended or deliberate misuse, keeping anonymized data delinked from global technologies to guard against cyber attacks, and obtaining a favorable benefit-risk assessment from an ethics committee.

Stakeholders Involved

ICMR’s guidelines also outline a brief for relevant stakeholders, including researchers, clinicians/hospitals/public health systems, patients, ethics committees, government regulators, and industry. Developing AI tools for the health sector is a multi-step process involving all these stakeholders. Each step must follow standard practices to make AI-based solutions technically sound, ethically justified, and applicable to a large number of individuals with equity and fairness. All stakeholders must adhere to these guiding principles to make the technology more useful and acceptable to its users and beneficiaries.

Ethical Review Process

The ethical review process for AI in health falls under the domain of the ethics committee, which assesses factors such as data source, quality, safety, anonymization and/or data privacy, data selection biases, participant protection, payment of compensation, and the possibility of stigmatization, among others. The committee is responsible for assessing both the scientific rigor and the ethical aspects of all health research, ensuring that a proposal is scientifically sound and weighing all potential risks and benefits for the population where the research is being carried out.

Informed Consent and Governance

Informed consent and governance of AI tools in the health sector are other critical areas highlighted in the guidelines. Governance of AI tools is still in preliminary stages, even in developed countries. India has a host of frameworks that combine technological advances with healthcare, such as the Digital Health Authority for leveraging digital health technologies under the National Health Policy (2017), the Digital Information Security in Healthcare Act (DISHA) 2018, and the Medical Device Rules, 2017.
