Comprehending and Implementing Responsible AI Practices in Healthcare and Aged Care Governance
As AI is progressively integrated into sectors such as healthcare and aged care, the need for its responsible application has become increasingly urgent. Responsible AI matters because it can guide decisions towards more beneficial and equitable results. By embracing responsible AI, organisations can guard against unconscious bias, validate AI outcomes, protect data security and privacy, and gain a competitive edge.
In the context of clinical care, responsible AI can be used to improve patient outcomes and streamline healthcare delivery. For instance, AI can be used to predict patient deterioration based on vital signs and lab results, enabling early intervention. However, it’s crucial that these predictions are transparent and can be explained to healthcare providers, ensuring trust in the system.
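As an illustration of the transparency point, a simple rule-based early-warning score is fully explainable to a clinician in a way a black-box model is not. The sketch below is loosely modelled on published early-warning scores such as NEWS2, but the thresholds are illustrative only and not clinically validated:

```python
# Illustrative sketch only: thresholds loosely follow published
# early-warning scores (e.g. NEWS2) but are NOT clinically validated.
def deterioration_score(resp_rate: int, spo2: int, sys_bp: int, heart_rate: int) -> int:
    """Return a deterioration risk score from four routine vital signs.

    Each abnormal vital sign contributes points; a higher total suggests
    earlier clinical review. Every point is traceable to a single rule,
    so the prediction can be explained to the care team.
    """
    score = 0
    if resp_rate >= 25 or resp_rate <= 8:       # severely abnormal breathing
        score += 3
    elif resp_rate >= 21:
        score += 2
    if spo2 <= 91:                              # oxygen saturation (%)
        score += 3
    elif spo2 <= 93:
        score += 2
    elif spo2 <= 95:
        score += 1
    if sys_bp <= 90:                            # systolic blood pressure (mmHg)
        score += 3
    elif sys_bp <= 100:
        score += 2
    if heart_rate >= 131 or heart_rate <= 40:   # beats per minute
        score += 3
    elif heart_rate >= 111:
        score += 2
    return score

print(deterioration_score(16, 98, 120, 80))   # normal vitals → 0
print(deterioration_score(26, 90, 85, 135))   # deteriorating patient → 12
```

Because each point maps to one stated rule, a provider can see exactly why the system flagged a patient, which is the kind of explainability the paragraph above calls for.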
Safeguarding patient data and enhancing care coordination with AI
Electronic medical records (EMRs) are a rich source of data that can be used to train AI models. However, the use of this data must respect patient privacy and consent. For example, de-identification techniques can be used to protect patient information while still allowing valuable insights to be gleaned from the data.
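A minimal sketch of the de-identification idea is shown below: direct identifiers are dropped and the patient ID is replaced with a salted hash so records can still be linked within a project. The field names are hypothetical, and real de-identification must follow a recognised standard (for example HIPAA's Safe Harbor or Expert Determination methods) rather than this illustration:

```python
import hashlib

# Hypothetical field names; a real identifier list comes from a
# recognised de-identification standard, not this sketch.
DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "medicare_no"}

def deidentify(record: dict, salt: str) -> dict:
    """Drop direct identifiers and replace the patient ID with a salted hash.

    The salt must be kept secret and scoped to one project, so records
    remain linkable within the project but not across datasets.
    """
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    digest = hashlib.sha256((salt + str(record["patient_id"])).encode()).hexdigest()
    cleaned["patient_id"] = digest[:12]  # short pseudonym for readability
    return cleaned

record = {"patient_id": "12345", "name": "Jane Doe", "hba1c": 6.9}
print(deidentify(record, salt="per-project-secret"))
```

Note that removing direct identifiers alone does not guarantee anonymity; combinations of quasi-identifiers (dates, postcodes, rare diagnoses) can still re-identify patients, which is why expert review remains essential.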
The Health Insurance Portability and Accountability Act (HIPAA) in the U.S. sets standards for protecting sensitive patient data. Any AI system used in U.S. healthcare must be HIPAA compliant, ensuring that all data is securely stored and transmitted. This includes using secure methods for data transfer and storage, and ensuring that only authorised individuals have access to the data.
Similarly, in Europe the General Data Protection Regulation (GDPR) sets the standard for protecting sensitive personal data, including health data. Any AI system used in European healthcare must be GDPR compliant, ensuring that all data is securely stored and transmitted. The GDPR also restricts solely automated decision-making and the processing of health data, subject to a few exemptions, such as the patient's explicit consent or a substantial public interest.
Fast Healthcare Interoperability Resources (FHIR) is an emerging HL7 standard for exchanging healthcare information electronically. AI systems can use FHIR to access and integrate data from different healthcare systems, improving the quality and coordination of care.
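To make the FHIR point concrete, the sketch below parses a minimal FHIR R4 Patient resource (the structure follows the HL7 FHIR specification, and the sample values mirror the spec's own example patient). In practice this JSON would be retrieved from an authorised FHIR server via `GET {base}/Patient/{id}`, not hard-coded:

```python
import json

# A minimal FHIR R4 Patient resource; structure per the HL7 FHIR spec.
# In production this would come from an authenticated FHIR REST call.
patient_json = """
{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"family": "Chalmers", "given": ["Peter", "James"]}],
  "birthDate": "1974-12-25"
}
"""

def display_name(patient: dict) -> str:
    """Render the first recorded name as 'Given(s) Family'."""
    name = patient["name"][0]
    return " ".join(name.get("given", []) + [name.get("family", "")])

patient = json.loads(patient_json)
assert patient["resourceType"] == "Patient"  # basic sanity check on the resource
print(display_name(patient))
```

Because every conformant system exposes the same resource shapes, an AI pipeline written against FHIR can integrate data from different vendors' systems without bespoke adapters for each one.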
However, it is important that these systems are designed to respect the principles of responsible AI: fairness, transparency, accountability, user consent, and privacy. Organisations must have confidence when building, deploying, managing, and supervising models throughout the AI lifecycle. This means board directors must understand the principles of responsible AI and how they apply within their organisation.
CEOs are under scrutiny to ensure their company's responsible use of AI, which extends beyond adhering to the letter of laws such as HIPAA and GDPR to honouring their spirit and intent. Even seemingly harmless uses of AI can have serious implications: numerous instances of AI bias, discrimination, and privacy breaches have made headlines in recent years. These incidents have rightfully raised concerns among leaders about the safe deployment of their AI systems.
The optimal solution is not to abstain from using AI—the potential value and early adoption benefits are too significant. Instead, organisations can ensure the responsible construction and application of AI by diligently confirming that AI outputs are fair, that increased personalisation does not result in discrimination, that data acquisition and usage do not compromise consumer privacy, and that they balance system performance with transparency into AI system predictions.
Is it my job?
While it may seem reasonable to delegate responsibility for AI to clinicians or data scientists, given their expertise in its emerging use in clinical care, board directors remain critical to ensuring the consistent delivery of responsible AI systems. Directors need at least a strong working knowledge of AI development so they can ask the right questions and head off potential ethical issues.
There are numerous benefits to adopting responsible AI. By developing and operating AI systems that serve the greater good while achieving transformative business impact, companies can think beyond mere algorithmic fairness and bias to identify potential secondary and tertiary effects on safety, privacy, and society at large.
Responsible AI should not be viewed merely as a risk-avoidance mechanism. Such a perspective overlooks the upside potential that companies can realise by pursuing it. Besides representing an authentic and ethical guiding principle for initiatives, responsible AI can generate financial returns that justify the investment.
It is vital for healthcare and aged care board directors to understand responsible AI as it can help guide decisions towards more beneficial and equitable results while avoiding potential harm. By embracing responsible AI principles and ensuring their consistent application within their organisation, board directors can help their company secure a competitive edge while also contributing positively to society at large.
About: Gary Morgan is an experienced board director, chief executive, consultant, and corporate advisor with deep expertise in strategy, innovation, and growth in the health tech, aged care, agtech, information security, and research sectors. He is a Fellow of the Governance Institute of Australia, Entrepreneur in Residence at The Allied Health Academy, and serves on the Griffith University Industry Advisory Board for the ICT School. Gary has co-authored papers and reports published in leading entrepreneurship and medical journals.
Acknowledgment: This article was composed in part using AI technology.