Robert Morris and Koko are Violators of International Standards

The Declaration of Helsinki is the bedrock of international principles in human research, and the foundation of governmental practices, including the ICH E6 Good Clinical Practice. The core principle is respect for the individual (Article 8), their right to self-determination and the right to make informed decisions (Articles 20, 21 and 22) regarding participation in research, both initially and during the course of the research. These are the principles Dr Robert Morris violated when his firm, Koko, used artificial intelligence to conduct medical research on uninformed participants. The man, and his company, deserve the full force of international censure, including disbarment by the NHS and all other international bodies with even a shred of oversight on health practices.

I’m infuriated by this. AI is already an ethically ambiguous area full of concerns, and for this callous individual and his company to waltz in and break a fundamental principle of human research is unconscionable.

Another reason why we need serious regulatory oversight of AI. We won’t see this from the US, so hopefully the EU gets their act together and pushes forward. GDPR may not be perfect, but we are in a better place with something rather than nothing, and as the actions of callous companies like Koko show, we are in desperate need of protection when it comes to the ‘promises’ of AI.

Also, shame on Stony Brook’s Institutional Review Board. While not a case of IRB shopping, they sure did their best to avoid grappling with the issues behind the study.

I am pretty sure this AI counts as software as a medical device, in which case a whole lot of regulations were broken.

“Move Fast and Break Things” is a horrible mantra, especially when health and well-being are involved. Robert Morris, like Elizabeth Holmes, is an example of why we need a strong oversight regime when it comes to scientific research and why technology on its own is never the solution.

AI/ML-Based SaMD Framework

The US Food and Drug Administration’s proposed regulatory framework for artificial intelligence- (AI) and machine learning- (ML) based software as a medical device (SaMD) is fascinating in what it exposes about the uncertainty around the near-term future of many Industry 4.0 initiatives in pharmaceuticals and medical devices.

While focused on medical devices, this proposal is an interesting read for folks interested in applying machine learning and artificial intelligence to other regulated areas, such as manufacturing.

We are seeing the early stages of consensus building around the concept of Good Machine Learning Practices (GMLP): the idea of applying quality system practices to the unique challenges of machine learning.