Ethics lab © TU Delft

Staff shortages and the constant drive to provide high-quality medical care: these are the two main reasons why the application of artificial intelligence (AI) in healthcare will take off in the coming years. With the opening of the first AI ethics lab for healthcare, Erasmus MC and TU Delft are betting on “ethically sound and clinically relevant AI that has a positive impact on both healthcare and the healthcare worker,” according to a statement from the university.

Will doctors one day dare to stop a treatment based on information provided by a computer model? That is perhaps one of the most difficult questions surrounding the application of AI in healthcare. But there are, of course, plenty of less weighty questions as well. Can a patient safely be discharged a few days earlier after a surgical procedure than protocol dictates, which would be more pleasant for both patient and caregiver? Can an ICU nurse use AI to provide quality care to more patients?

With all these questions, the underlying AI models that support physicians in making such decisions must make ethically sound recommendations. “The World Health Organization has established six core principles for this, such as clarity of responsibility and ensuring fairness and applicability for each patient,” says Stefan Buijsman, associate professor of ethics at TU Delft. Jeroen van den Hoven, director of the TU Delft Digital Ethics Centre, contributed to the WHO principles. “The big challenge is that it is often not at all obvious what exactly it means for such a model to be fair, and how you then guarantee that in practice.”

Safe and with demonstrable added value

The Responsible and Ethical AI in Healthcare Lab (REAiHL), a collaboration between Erasmus MC, TU Delft, and software company SAS, aims to answer these questions. “Erasmus MC’s clinical expertise is leading in this – they come up with the questions and will eventually work with the models,” Buijsman says. “As TU Delft, we have been leaders in digital ethics – how do we translate ethical values into design requirements for engineers – for two decades.” In addition to responsible design, TU Delft will play an important role in demonstrating the clinical added value of the AI models that are developed.

REAiHL is an ICAI lab (Innovation Center for Artificial Intelligence): a research collaboration between industrial, government, or non-profit partners and knowledge institutes. ICAI labs must meet requirements for data, expertise, and capacity, and are expected to operationalize their outcomes for the real world. REAiHL is the ninth ICAI lab in which TU Delft collaborates with partners and other knowledge institutes.

“On the one hand, this concerns demonstrating the positive impact on patient care,” says Jacobien Oosterhoff, associate professor of Artificial Intelligence for Healthcare Systems at TU Delft. “For rockets to Mars, we know how to test them safely in a remote area, but with AI for patient care, we still have many open questions about how to test it safely. On the other hand, it is about effectively integrating AI models into the clinical workflow to support doctors and nurses. These are the open questions we hope to answer in the lab, with physicians, engineers, nurses, data scientists, and ethicists working together: a unique synergy.”

Developing a hospital-wide framework for AI

The new AI ethics lab was created at the initiative of internist-intensivist Michel van Genderen of Erasmus MC. Diederik Gommers, professor of Intensive Care Medicine at Erasmus MC, is also closely involved. “The initial focus of the new AI ethics lab is therefore on producing best practices for Intensive Care,” Buijsman says. “But the ultimate goal is to develop a general framework for how AI can be applied hospital-wide safely and ethically. So we expect to start working with use cases from other clinical departments soon.”