ABSTRACT
Artificial Intelligence (AI) is rapidly transforming healthcare delivery, particularly in diagnostics, radiology, oncology, and surgical procedures. While these technologies enhance efficiency and accuracy, they also raise complex medico-legal questions when errors occur. The central issue remains who is accountable when machines err: the doctor, the hospital, or the developer? This study undertakes a doctrinal and comparative analysis of AI liability in healthcare. It examines global approaches, including the EU AI Act, U.S. Food and Drug Administration (FDA) guidelines, and the UK’s medical device framework, contrasting them with India’s existing negligence laws under tort, consumer protection, and medical jurisprudence. The research highlights the inadequacy of current Indian legal mechanisms, particularly the Bolam test and conventional negligence doctrine, in addressing AI-related harm. Through comparative study and legal reasoning, the paper argues that India requires a hybrid accountability model that balances professional responsibility, institutional oversight, and developer liability, along with tailored statutory reform. The study concludes by recommending the adoption of AI-specific liability provisions, integration with digital health regulations, and judicial guidelines to safeguard patient rights while fostering innovation.
KEYWORDS
Artificial Intelligence, Healthcare, Accountability, Medicine, Liability
INTRODUCTION
Artificial intelligence in healthcare is a rapidly evolving field that uses algorithms and cognitive technologies to mimic human intelligence for medical tasks such as diagnosing diseases and analysing data. AI promises better efficiency and patient outcomes, but its deployment in clinical settings introduces complex legal and ethical questions about liability when machines err. AI is transforming healthcare by enhancing diagnostic and treatment procedures, yet it raises difficult medical liability questions about who is accountable when an AI system causes patient harm. When AI is used, accountability for errors does not fall on a single person or group; it depends on the role each party plays. Where errors occur, doctors are liable for clinical judgement, developers are accountable for design, and hospitals are responsible for implementation. AI, particularly machine learning, analyses vast amounts of health data to improve diagnoses, predict outcomes, and personalise treatment. The traditional model of accountability, in which a single clinician is responsible, is blurred when AI is involved in the decision-making process.
EVOLUTION
The evolution of AI in medicine has progressed through several phases, beginning with an early exploration period from the 1950s to the 1970s. The term "artificial intelligence" was coined in 1956 at a conference at Dartmouth College, and the first AI applications in medicine appeared in the 1970s, with systems like MYCIN designed to identify blood infections. Research focused on creating systems that followed extensive "if-then" rules to aid clinical decision-making, but these systems were limited by their complexity and inability to adapt to new information. A machine learning renaissance began in the early 2000s to 2010, when increased computing power and the availability of large datasets shifted AI research towards machine learning and neural networks. From around 2015, deep learning and breakthroughs in computer vision led to more sophisticated applications and real-world adoption; AI has since moved beyond research and is now being deployed in clinical settings for diagnostics, drug discovery, and operational efficiency.
GOVERNMENT INITIATIVES
- The National Strategy for Artificial Intelligence, introduced by NITI Aayog, and the Prime Minister’s Science, Technology, and Innovation Advisory Council (PM-STIAC) have identified AI in healthcare as a priority area.
- The Ayushman Bharat Digital Mission (ABDM) aims to create a national digital health ecosystem using unique health IDs, professional registries, and healthcare facility repositories. A key component is a financial incentive scheme for providers adopting digital health solutions.
- Initiatives like e-Sanjeevani for remote consultations and the U-WIN Portal for digitizing vaccination services are examples of large-scale, technology-enabled healthcare delivery. The “Cough against TB” AI solution, for instance, has successfully improved tuberculosis screening in communities.
- The Indian Council of Medical Research (ICMR) has released ethical guidelines for using AI in biomedical research and healthcare, providing a framework for ethical decision-making.
- The government is exploring the establishment of an AI Safety Institute to provide technical expertise and develop safety standards for AI systems, including generative AI.
- While India currently lacks a comprehensive AI-specific law, regulations are evolving. The proposed Digital India Act and the Digital Personal Data Protection Act of 2023 include provisions that impact AI, particularly regarding data privacy, consent, and accountability.

FACTORS AFFECTING AI ADOPTION IN HEALTHCARE
- The growth of the digital health ecosystem, including electronic health records and telemedicine, provides the necessary data and infrastructure for AI systems.
- AI offers a solution for managing the increasing burden of chronic diseases and supporting an aging population with limited medical professionals.
- AI tools can automate repetitive administrative tasks, such as patient scheduling and clinical coding, to help address labour shortages and reduce costs.
- Comprehensive datasets are needed to train effective AI systems. Challenges include data availability, interoperability between different healthcare systems, and the risk of perpetuating bias from unrepresentative training data.
- Many rural and underserved areas still lack the necessary digital and technological infrastructure to fully support AI-enabled healthcare.
- The lack of clear, comprehensive, and evolving regulatory standards for AI in healthcare creates ambiguity regarding data privacy, security, and accountability.
- There is a shortage of healthcare professionals trained in AI. Medical schools and continuous professional development programs have yet to fully incorporate AI-related training.
- The “black box” nature of some AI algorithms, where the reasoning behind decisions is opaque, can lead to mistrust among clinicians and patients.
CURRENT TRENDS OF AI IN HEALTHCARE
Several trends are shaping the future of AI in healthcare, particularly in 2025 and beyond:
- Agentic medical assistance.
- Generative AI for personalized medicine.
- Advanced diagnostics.
- Wearables and remote monitoring, as these tools can predict health issues, assist with medication adherence, and provide alerts to providers.
- Improved workflows.
- A focus on transparency: as AI adoption increases, there is a greater push for explainable AI to help clinicians understand how an algorithm reached a decision, building trust and enabling critical oversight.
- Mental health support, where conversational AI chatbots provide accessible and private support to address the growing demand for care, especially where a shortage of therapists exists.
OBJECTIVES
- To examine the existing legal framework governing the use of Artificial Intelligence (AI) in healthcare and its implications for medical liability.
- To analyze judicial interpretations, statutory provisions, and ethical guidelines related to accountability in AI-assisted medical decision-making.
- To identify the gaps, ambiguities, and challenges in assigning liability when AI systems cause medical errors or harm to patients.
- To compare the approaches adopted by different jurisdictions (both national and international) in regulating AI accountability in healthcare.
REVIEW OF LITERATURE
Habli, I., et al. (2020) stated that the prospect of patient harm caused by the decisions of an artificial intelligence-based clinical tool is something to which current practices of accountability and safety worldwide have not yet adjusted. AI-based tools are challenging the standard clinical practices of assigning blame and assuring safety. Human clinicians and safety engineers have weaker control over the decisions reached by artificial intelligence systems and less knowledge and understanding of precisely how those systems reach their decisions.
Maliha, G., et al. (2021) analysed policy options that could ensure a more balanced liability system, including altering the standard of care, insurance, indemnification, special/no-fault adjudication systems, and regulation. Such liability frameworks could facilitate safe and expedient implementation of artificial intelligence and machine learning in clinical care. While prior work has focused on medical malpractice, the artificial intelligence ecosystem consists of multiple stakeholders beyond clinicians.
Naik, N., et al. (2022) examined the legal and ethical issues that confront society due to Artificial Intelligence (AI), including privacy and surveillance, bias or discrimination, and, as perhaps the deepest philosophical challenge, the role of human judgment. Concerns have arisen that newer digital technologies may become a new source of inaccuracy and data breaches. Mistakes in procedure or protocol in healthcare can have devastating consequences for the patient who is the victim of the error, and it is crucial to remember that patients come into contact with physicians at moments in their lives when they are most vulnerable.
RESEARCH METHODOLOGY
This study adopts a doctrinal research methodology, focusing on the analysis of existing laws, judicial decisions, and scholarly writings rather than empirical data collection. The research primarily seeks to interpret and evaluate the legal principles governing accountability in the use of Artificial Intelligence within healthcare. The research is qualitative and analytical in nature. It involves critical examination of statutory provisions, judicial pronouncements, and legal doctrines to understand how liability is determined when AI systems in healthcare cause harm or errors.
ANALYSIS AND DISCUSSION
In India, the use of artificial intelligence (AI) in healthcare is driven by significant government initiatives, though its development is shaped by multifaceted challenges and evolving trends. As AI systems become more autonomous, the question of accountability for medical errors introduces complex legal and ethical challenges for which a definitive framework has yet to be established. While these technologies enhance precision and efficiency, they also blur the lines of legal accountability when an AI-driven decision leads to harm.
Traditionally, medical negligence is assessed on the principles of the doctor’s duty of care, breach, causation, and damage. However, when AI participates in clinical decision-making, determining liability becomes complex. Questions arise such as: Is the physician liable for relying on the AI’s recommendation? Is the developer responsible for algorithmic errors? Or should liability rest with the hospital that deployed the system? Current laws in India and most jurisdictions do not provide explicit answers to these questions.
In India, liability in medical negligence cases continues to rely on precedents like Jacob Mathew v. State of Punjab (2005), which emphasize professional standards of care. However, these doctrines do not yet account for autonomous or semi-autonomous decision-making by machines. Similarly, under the Consumer Protection Act, patients may seek compensation for medical errors, but assigning fault to an AI system is legally untested.
Internationally, jurisdictions such as the European Union have begun developing frameworks like the EU Artificial Intelligence Act, which categorizes medical AI as “high-risk” and imposes compliance obligations on developers and users. In contrast, the United States primarily relies on product liability and FDA regulations for AI-based medical devices. These comparative insights reveal that India lacks a comprehensive policy on AI accountability in healthcare, creating a pressing need for legal clarity.

MEDICAL LIABILITY: WHO IS ACCOUNTABLE WHEN MACHINES ERR?
The issue of medical liability for AI errors is a complex and evolving area of law. Current legal and ethical frameworks have not yet fully adapted to address the role of AI in clinical decision-making, leaving questions of accountability unsettled. The potentially liable parties include:
- Physicians/healthcare providers: In most jurisdictions, including India, the final clinical decision-making authority rests with the human clinician. Physicians have a professional duty to exercise reasonable care, and they could be held liable for malpractice if they negligently follow a flawed AI recommendation or fail to identify a clear error.
- AI developers/manufacturers: Developers could face product liability claims if an AI tool causes harm due to a design flaw, manufacturing defect, inadequate testing, or a failure to warn users of its limitations. This depends on whether the AI is legally classified as a “medical device” or merely a “service”.
- Healthcare institutions: Hospitals and clinics can be held liable under theories of vicarious liability for the actions of their employees or direct liability for negligent implementation and oversight. This includes failing to provide proper training, testing the system, or having robust fail-safe protocols.
- Patients: If a patient is harmed because they did not follow a clinically reasonable AI recommendation, they might also bear some responsibility.
FINDINGS
Jurisdictions such as the European Union, United States, and United Kingdom have introduced structured and forward-looking frameworks to regulate Artificial Intelligence in healthcare. The EU AI Act treats medical AI as a “high-risk” system, ensuring strict compliance, transparency, and accountability. The U.S. FDA regulates AI under its Software as a Medical Device (SaMD) guidelines, focusing on validation, post-market surveillance, and algorithmic updates. Similarly, the UK’s MHRA emphasizes safety and continuous monitoring for AI-based medical devices.
The global mechanisms adopt a preventive regulatory model, emphasizing pre-deployment risk assessment and ethical compliance. In contrast, India’s legal response remains reactive, relying on post-harm remedies under tort and consumer law rather than establishing pre-emptive AI safety or accountability standards.
The Bolam test, a cornerstone of Indian medical negligence law, is outdated in the context of AI-driven care. It measures liability based on whether a medical professional’s conduct aligns with accepted medical practice — a standard that presupposes human judgment. Since AI systems function autonomously or semi-autonomously, applying the Bolam principle to algorithmic errors is conceptually and legally insufficient.
Indian law lacks clear guidelines on who bears responsibility when AI causes harm — the doctor, the hospital, the manufacturer, or the software developer. This absence of defined liability leads to doctrinal uncertainty and undermines patient trust in AI-assisted medical decisions.
SUGGESTIONS
- India should enact dedicated legislation or a regulatory framework for AI in healthcare, similar to the EU AI Act, classifying medical AI as “high-risk” and setting out clear safety, accountability, and data protection standards.
- Legal provisions must explicitly allocate responsibility among doctors, hospitals, manufacturers, and AI developers. A shared or hybrid liability model can ensure fair distribution of accountability when harm results from AI-assisted medical decisions.
- The Bolam principle should be updated to incorporate technological factors, distinguishing between human and algorithmic decision-making. Courts should recognize “algorithmic negligence” as a distinct category to address harm caused by faulty or biased AI systems.
- A specialized Medical AI Regulatory Authority, or an empowered body under the National Medical Commission (NMC), should oversee the approval, monitoring, and ethical compliance of AI-based medical tools, ensuring continuous evaluation of algorithmic performance and transparency.
CONCLUSION
Artificial Intelligence has emerged as a transformative force in modern healthcare, offering unprecedented accuracy, efficiency, and diagnostic capability. While global jurisdictions like the EU, U.S., and U.K. have proactively developed AI-specific governance structures, India continues to rely on traditional negligence principles, such as the Bolam test, which are inadequate for addressing the complexities of autonomous technologies. The study concludes that the absence of a specialized AI liability framework in India creates ambiguity, undermines patient safety, and deters responsible innovation. A forward-looking legal approach — combining statutory reform, regulatory oversight, and ethical accountability — is essential to balance technological progress with legal certainty. Ultimately, ensuring that AI serves humanity responsibly requires a clear, adaptive, and ethically grounded legal system capable of governing machines that think but do not bear moral responsibility.
REFERENCES
- Naik, N., Hameed, B. M., Shetty, D. K., Swain, D., Shah, M., Paul, R., … & Somani, B. K. (2022). Legal and ethical consideration in artificial intelligence in healthcare: Who takes responsibility? Frontiers in Surgery, 9, 862322.
- Habli, I., Lawton, T., & Porter, Z. (2020). Artificial intelligence in health care: accountability and safety. Bulletin of the World Health Organization, 98(4), 251.
- Maliha, G., Gerke, S., Cohen, I. G., & Parikh, R. B. (2021). Artificial intelligence and liability in medicine: balancing safety and innovation. The Milbank Quarterly, 99(3), 629.
