The Coalition for Health AI (CHAI) released its “Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare” (Blueprint). The Blueprint addresses the quickly evolving landscape of health AI tools by outlining specific recommendations to increase trustworthiness within the healthcare community, ensure high-quality care, and meet healthcare needs.
“Transparency and trust in AI tools that will be influencing medical decisions is absolutely paramount for patients and clinicians,” says Brian Anderson, a co-founder of the coalition and chief digital health physician at MITRE. “The CHAI Blueprint seeks to align health AI standards and reporting to enable patients and clinicians to better evaluate the algorithms that may be contributing to their care.”
The Blueprint builds upon the White House OSTP “Blueprint for an AI Bill of Rights” and the “AI Risk Management Framework (AI RMF 1.0)” from the U.S. Department of Commerce’s National Institute of Standards and Technology. OSTP acts as a federal observer to CHAI, as do the Agency for Healthcare Research and Quality, Centers for Medicare & Medicaid Services, U.S. Food and Drug Administration, Office of the National Coordinator for Health Information Technology, and National Institutes of Health.
“The needs of all patients must be foremost in this effort. In a world with increasing adoption of artificial intelligence for healthcare, we need guidelines and guardrails to ensure ethical, unbiased, appropriate use of the technology. Combating algorithmic bias cannot be done by any one organization, but rather by a diverse group. The Blueprint will follow a patient-centered approach in collaboration with experienced federal agencies, academia, and industry,” says John Halamka, president, Mayo Clinic Platform, and a co-founder of the coalition.