As artificial intelligence reshapes healthcare, the question of trust has never been more pressing. With algorithms increasingly guiding diagnoses, treatment decisions, and even patient monitoring, the need for clear ethical boundaries and robust oversight grows ever more acute. Against this backdrop, the Association for the Advancement of Medical Instrumentation (AAMI) has announced its alignment with the National Academy of Medicine’s (NAM) AI Code of Conduct, a move that signals both a recognition of responsibility and a commitment to shaping the future of medical technology.
For decades, AAMI has stood at the intersection of innovation and safety in medical devices, setting standards that underpin everything from pacemakers to infusion pumps. Its decision to embrace NAM’s AI Code of Conduct is notable not only for what it says about the organization’s priorities, but also for what it reveals about the wider medical technology community’s reckoning with the promises and perils of artificial intelligence.
NAM’s code is both ambitious and urgently needed. Developed in collaboration with clinicians, computer scientists, ethicists, and patient advocates, it lays out a framework designed to ensure that AI systems in health and medicine are deployed responsibly. The code addresses a host of concerns, from transparency and data privacy to algorithmic bias and the need for continuous monitoring of AI performance. Its ultimate aim is to foster trust: among clinicians who must rely on these new tools, among patients whose lives may depend on their accuracy, and among an increasingly wary public.
By aligning with the code, AAMI is lending its considerable weight to this effort and, in doing so, setting an example for others in the field. “AAMI’s alignment with the NAM AI Code of Conduct reflects our deep commitment to advancing safe, effective, and ethical use of AI in health technology,” said Robert Burroughs, AAMI’s chief learning and development officer, in a statement. “We believe this is essential to building and maintaining public trust as these technologies become more prevalent in health care settings.”
This alignment is not merely symbolic. AAMI’s role as a standards-setting body means that its endorsement will have practical ramifications, potentially influencing the design, validation, and regulation of AI-powered medical devices in the years to come. Manufacturers and developers who look to AAMI for guidance may soon find that adherence to the NAM code becomes a de facto requirement for gaining credibility — if not regulatory approval — in the marketplace.
The stakes could hardly be higher. Recent years have seen a proliferation of AI applications in healthcare, from radiology algorithms that flag potential tumors on scans to predictive tools that anticipate patient deterioration in intensive care units. The potential benefits are enormous: increased accuracy, earlier interventions, and the hope of democratizing access to high-quality care. But these advances are shadowed by a history of high-profile failures and ethical lapses. In some cases, AI systems have been found to perpetuate bias, producing less accurate results for women or people of color. In others, a lack of transparency has made it impossible for clinicians to understand — let alone challenge — an algorithm’s recommendations. For patients, the sense of being at the mercy of a “black box” can erode confidence in the very systems designed to help them.
NAM’s code seeks to address these concerns head-on. It calls for transparency in how algorithms are developed and validated, the inclusion of diverse data sets to minimize bias, and continuous post-deployment monitoring to catch unforeseen problems. It also emphasizes the importance of patient consent and data privacy, insisting that individuals must have a say in how their information is used and that their rights must be protected. Importantly, it recognizes that AI is not static — what works safely today may become hazardous tomorrow as data patterns shift or adversarial actors probe for weaknesses.
AAMI’s endorsement of the code is thus more than a box-ticking exercise; it is a recognition that the credibility of AI in healthcare depends on ongoing vigilance and a willingness to course-correct as new challenges emerge. It is also an invitation to other stakeholders — including regulators, hospital systems, and technology firms — to join in a shared project of ethical stewardship.
The timing of this announcement is significant. Governments around the world are grappling with how best to regulate AI, and the European Union’s landmark AI Act is set to impose strict requirements on high-risk systems in healthcare and other sectors. In the United States, the Food and Drug Administration (FDA) has issued guidance on the use of AI in medical devices, but a comprehensive regulatory framework remains a work in progress. In this context, industry-led initiatives like the NAM code, backed by organizations such as AAMI, could help fill the gaps and establish a baseline of expectations for safe and ethical conduct.
Yet, for all the promise of codes and standards, their impact will ultimately be measured by what happens in practice. Will developers invest the time and resources needed to root out bias and ensure transparency? Will hospitals and clinicians demand evidence that an AI tool performs well across diverse populations before adopting it? Will patients be given a genuine voice in decisions about how their data is used? The answers to these questions will determine whether AI fulfills its potential to improve healthcare — or becomes just another source of risk and inequity.
For now, AAMI’s decision to align with the NAM AI Code of Conduct is a welcome signal of intent. It recognizes both the promise and the peril of a technology that is fast becoming ubiquitous, and it places the values of safety, transparency, and equity at the center of the conversation. In an era when algorithms can save lives — or imperil them — such leadership is not just desirable. It is essential.