Ethical standards for AI clinical trials

 

Artificial intelligence (AI) systems are increasingly being investigated in clinical trials. However, many of these trials are deemed minimal risk and therefore do not require informed consent. A disconcerting feature of AI trials that obtain a waiver of informed consent is that, because the use of AI is often "silent" to patients, patients can become unwitting participants in trials that affect their care. In this manuscript, we argue that disclosure is a minimal standard when patients' data are being used in an AI clinical trial that may influence clinical decisions.

AI clinical trials need to be held to the same ethical requirements for participant disclosure and consent as all human subjects research, which are based on the risk to participants. There are, and will be, many AI trials that meet the criteria for a waiver of consent. However, because we are in the early days of clinical AI implementation, AI system risk assessment faces many unique and amplified challenges, including the unknown impact of human-AI interactions, interpretability limitations, and data limitations. Because these risks, and how to mitigate them, are incompletely understood, we believe that patients have a right to know when their clinical decisions are informed by an AI system under investigation. Disclosure as a minimal standard also ensures that patients retain their right to determine how, and by whom or what, their healthcare decisions are made. We provide recommendations for what information should be disclosed to patients.

 

AIM Investigators