The Necessity of Responsible AI for Boardroom Leaders 

Understanding and Applying Responsible AI Practices in Corporate Governance

With AI progressively becoming a cornerstone in numerous sectors of business and society, the need for its responsible application has escalated significantly. Responsible AI is crucial because it can steer decisions towards more advantageous and equitable results. By embracing responsible AI, organisations can avoid unconscious bias, validate AI outcomes, safeguard data security and privacy, and secure a competitive edge.

The development of responsible AI requires fairness, transparency, accountability, user consent, and privacy. Organisations must be able to trust their models as they are built, deployed, managed, and monitored throughout the AI lifecycle. This means board members must understand the principles of responsible AI and how they are applied within their organisation.

CEOs are under growing pressure to ensure their companies use AI systems responsibly, which extends beyond adhering to the spirit and letter of relevant laws. Even seemingly harmless uses of AI can have serious consequences: numerous instances of AI bias, discrimination, and privacy breaches have made headlines in recent years. These incidents have rightfully raised concerns among leaders about ensuring the safe deployment of their AI systems.

The optimal solution is not to abstain from using AI; the potential value and early-adoption benefits are too significant. Instead, organisations can ensure that AI is built and applied responsibly by diligently confirming that AI outputs are fair, that increased personalisation does not result in discrimination, that data acquisition and usage do not compromise consumer privacy, and that system performance is balanced with transparency about how AI systems arrive at their predictions.
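To make the first of these checks concrete, the sketch below computes a demographic parity gap, one simple fairness measure: the difference in favourable-outcome rates between groups. The dataset, column names, and the idea of a board-agreed review threshold are hypothetical illustrations, not part of any standard; real programmes would select fairness measures appropriate to each use case.

```python
# Minimal sketch: checking a model's decisions for demographic parity.
# All names ("approved", "group") and the toy data are hypothetical;
# adapt to your own data and chosen fairness definition.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           outcome: str = "approved",
                           group: str = "group") -> float:
    """Largest difference in favourable-outcome rates between any
    two groups; 0.0 means all groups receive equal rates."""
    rates = df.groupby(group)[outcome].mean()
    return float(rates.max() - rates.min())

# Toy example: loan approvals across two groups.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})
print(f"Demographic parity gap: {demographic_parity_gap(decisions):.2f}")
# Prints 0.33 here; a board might agree that gaps above a set
# threshold trigger a mandatory review before deployment.
```

A check this simple is deliberately coarse; its value for a board lies less in the metric itself than in requiring that some agreed, repeatable test is run and reported before and after deployment.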

Responsible AI Principles

The AI Ethics Principles of Australia [1] and the OECD AI Principles [2] share a common goal of ensuring that AI is used in a manner that is beneficial, fair, and respects human rights. These principles aim to guide the development and use of AI in a way that is ethical, responsible, and beneficial to all:

1. Inclusive growth, sustainable development, and well-being: AI systems should contribute to growth and prosperity for all, advancing global development objectives, and benefiting individuals, society, and the environment.

2. Human-centred values and fairness: AI systems should respect the rule of law, human rights, democratic values, and diversity, with safeguards to ensure a fair and just society. They should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities, or groups.

3. Privacy protection and security: AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.

4. Reliability and safety: AI systems should reliably operate in accordance with their intended purpose and should function robustly, securely, and safely throughout their lifetimes, with continual risk assessment and management.

5. Transparency and explainability: AI systems should be transparent and responsibly disclosed, so people understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them.

6. Contestability: When an AI system significantly impacts a person, community, group, or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.

7. Accountability: Organisations and individuals involved in developing, deploying, or operating AI systems should be held accountable for their proper functioning in line with these principles. People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled (a minimal sketch of a system register supporting these principles follows this list).
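Principles 4 to 7 become concrete when every AI system an organisation operates is recorded somewhere a board can inspect. The sketch below shows one hypothetical shape such a register entry could take; the field names, example values, and 180-day review interval are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch of an AI-system register entry supporting principles
# 4-7: risk management, transparency, contestability, and identifiable
# accountability. Fields and example values are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    name: str                 # system being governed
    purpose: str              # disclosed, intended use (principle 5)
    owners: dict[str, str]    # accountable person per lifecycle phase (principle 7)
    last_risk_review: date    # supports continual risk assessment (principle 4)
    contest_channel: str      # where affected people can challenge outcomes (principle 6)

register = [
    AISystemRecord(
        name="loan-triage-model",
        purpose="Prioritise loan applications for human review",
        owners={"development": "J. Smith", "deployment": "A. Lee",
                "operation": "Risk & Compliance"},
        last_risk_review=date(2023, 3, 1),
        contest_channel="appeals@example.com",
    ),
]

# A board pack might surface systems overdue for risk review
# (the 180-day interval here is an assumed policy, not a standard):
overdue = [r.name for r in register
           if (date.today() - r.last_risk_review).days > 180]
print("Overdue for review:", overdue)
```

The design point is that accountability becomes checkable: a named owner exists for each lifecycle phase, and "when was this last risk-assessed?" is a query rather than an investigation.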

Whose role is it?

While it may seem reasonable to delegate these principles to data scientists, given their expertise in AI, board members play a critical role in ensuring the consistent delivery of responsible AI systems. Board members need at least a working knowledge of AI development so they can ask the right questions and head off potential ethical issues.

What are the benefits of Responsible AI?

There are numerous benefits to adopting responsible AI. Companies that think beyond mere algorithmic fairness and bias, and identify potential secondary and tertiary effects on safety, privacy, and society at large, can develop and operate AI systems that serve the greater good while achieving transformative business impact.

Responsible AI should not be viewed merely as a risk-avoidance mechanism. Such a perspective overlooks the upside potential that companies can realise by pursuing it. Besides serving as an authentic, ethical guiding principle for AI initiatives, responsible AI can generate financial returns that justify the investment.

Conclusion

It is crucial for board members to understand responsible AI as it can help steer decisions towards more advantageous and equitable results while avoiding potential harm. By embracing responsible AI principles and ensuring their consistent application within their organisation, board members can help their company secure a competitive edge while also contributing positively to society at large.

About: Gary Morgan is an experienced board director, chief executive, consultant, and corporate advisor with deep expertise in strategy, innovation, and growth in the health tech, aged care, agtech, information security, and research sectors. He is a Fellow of the Governance Institute of Australia, Entrepreneur in Residence at The Allied Health Academy, and serves on the Griffith University Industry Advisory Board for the ICT School. Gary has co-authored papers and reports published in leading entrepreneurship and medical journals.

Acknowledgment: I would like to thank Prof Dian Tjondronegoro for his input and valued feedback. This article was composed in part using AI technology.

References:

  1. “Australia’s AI Ethics Principles”. Department of Industry, Science and Resources. https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles
  2. “OECD AI Principles overview”. Organisation for Economic Co-operation and Development. https://oecd.ai/en/ai-principles
  3. “What is Responsible AI?” Microsoft Learn. https://learn.microsoft.com/en-us/azure/machine-learning/concept-responsible-ai?view=azureml-api-2
  4. “Six Steps to Bridge the Responsible AI Gap”. Boston Consulting Group. https://www.bcg.com/publications/2020/six-steps-for-socially-responsible-artificial-intelligence
  5. “Leading your organization to responsible AI”. McKinsey & Company. https://www.mckinsey.com/capabilities/quantumblack/our-insights/leading-your-organization-to-responsible-ai
  6. “The Responsible AI Network”. CSIRO. https://www.csiro.au/en/work-with-us/industries/technology/National-AI-Centre/Responsible-AI-Network
