Leading with AI

A Modular Program for Boards and C-Suite Accountability

Showcase of the Bonar Institute’s “Digital Responsible AI” program – endorsed by the GGA!

Join us for a dynamic showcase of the Bonar Institute’s Digital Responsible AI program.

Tailored for board members and C-suite executives, this webinar will equip you with the tools, frameworks, and ethical insights needed to navigate AI’s growing impact on governance and strategy.

  • Gain practical, industry-relevant knowledge
  • Explore tailored case studies
  • Understand AI’s role in shaping future-ready leadership
  • Featuring expert insights from Marc Morley, John Barker, and James Bonar

Don’t miss this opportunity to lead with purpose in a digitally transforming world.

Background information

As artificial intelligence (AI) rapidly transforms industries and redefines business models, the responsibility for guiding its ethical and strategic use increasingly falls on the shoulders of boards and executive leaders. Yet many decision-makers still lack the frameworks and confidence needed to govern AI effectively.

Recognizing this gap, the Bonar Institute has developed the Digital Responsible AI Program—a modular, tailored workshop designed specifically for board members and C-suite executives. This program goes beyond technical understanding, focusing on the strategic, ethical, and governance implications of AI within different organizational and industry contexts.

This webinar introduces the core principles and real-world application of the program. Participants will gain insights into how AI is reshaping governance expectations, the importance of aligning AI strategies with ethical standards, and how to cultivate an executive culture that is prepared for responsible AI leadership.

By showcasing practical tools and customized case studies, the session empowers leaders to make informed, forward-thinking decisions that harness AI’s potential while managing its risks.

Questions and Answers

What is the Good Governance Academy?

The Good Governance Academy (GGA) is a non-profit organisation established in 2019 by Professor Mervyn King. Its mission is to raise awareness of, and provide thought leadership on, critical business issues. It achieves this by working with organisations worldwide, across all sectors (public and private, large and small), to educate and empower leaders. The GGA organises webinars and colloquia, and endorses courses that offer cutting-edge, relevant content for contemporary governance challenges.

Why is responsible AI a governance imperative?

Responsible AI is a governance imperative because AI is no longer just a technical issue but a fundamental concern for leadership at board and C-suite level. The rapid evolution of AI, from predictive to generative and now agentic AI, means it is becoming pervasive and can profoundly affect organisations, potentially even destroying businesses if not managed correctly. While access to AI itself is no longer a competitive advantage, owing to its democratisation, the ability of senior leaders to be AI literate and to understand both its opportunities and inherent risks is what differentiates successful companies. Responsible AI ensures that AI is implemented ethically, sustainably, and with accountability, balancing innovation against the need to prevent negative consequences such as data breaches or societal harm.

How has AI evolved, and what makes today’s AI riskier?

AI has evolved rapidly from “old-fashioned” machine learning, which was deterministic and low-risk and served primarily as an efficiency tool within IT departments. Today, generative AI and multimodal generative AI are transformative and carry much higher risks: they can fundamentally change, or even destroy, a business if not handled responsibly. Conversely, they also present immense opportunities, enabling small teams to build multi-billion-dollar valuations. The key difference is that modern AI, especially agentic AI, can make independent decisions, necessitating robust governance, ethical frameworks, and an understanding of its potential to amplify existing issues within an organisation, such as a toxic culture.

Why must AI governance consider multiple stakeholders?

Considering multiple stakeholders is crucial in AI governance because the impact of AI extends far beyond a company’s immediate operations. “Responsible and trustworthy AI” principles, which are often legally enforceable (e.g., under the European Union AI Act), mandate this consideration. Key stakeholders include the business itself, its employees, customers, regulators, and society in general. Overlooking internal stakeholders, such as employees, can lead to demotivation, and even sabotage, if they feel threatened by AI. Furthermore, businesses have a duty to consider public policy and the broader societal implications of their AI systems. Effective AI governance must identify these diverse stakeholders and consider their needs and potential impacts from the outset of AI program development and deployment.

How can boards help employees thrive alongside AI?

Boards can ensure employees thrive alongside AI by treating AI transformation as a high-order change-management initiative. This involves clear messaging about the organisation’s purpose in adopting AI, along with its processes, timelines, and deliverables. Crucially, it must answer the “what’s in it for me” question from the employee’s perspective. Boards should support employees with the training needed to upskill their AI knowledge, and proactively identify which roles are best suited to human skills, which to AI automation, and where human-AI collaboration is most effective. Demonstrating ethical leadership, setting an “AI-first culture,” and openly upgrading their own AI skills can foster psychological safety. Actively collecting feedback from employees, suppliers, and customers also supports a successful AI rollout. The goal is augmentation, not just job reduction, making employees’ roles more exciting and engaging.

What safeguards and ethical frameworks should boards adopt?

Boards should establish proactive safeguards and ethical frameworks that go beyond traditional compliance. Several international frameworks provide guidance, including:

  • The NIST Artificial Intelligence Risk Management Framework (AI RMF) and its accompanying playbook: Designed to be industry-agnostic and usable by organisations of any size, focusing on governance.
  • The Asilomar Principles and OECD Trustworthy and Responsible AI Principles: These have influenced major regulations globally.
  • The European Union AI Act: Includes specific enforceable articles on ethics, accountability, trustworthiness, privacy, and data security, with significant fines for violations.
  • Voluntary frameworks from the UK, Japan, and Singapore.
  • ISO 42001: An international standard for AI management systems.
  • General Purpose AI Codes of Conduct (e.g., from the European Union).
  • Industry-specific model bulletins (e.g., National Association of Insurance Commissioners).

Adopting one or more of these frameworks and integrating their principles into AI governance processes is essential. This includes continuous auditing and monitoring for bias, especially in training data, to ensure fairness and prevent harm.

What is the “human firewall,” and why must boards address it?

The “human firewall” refers to the human element within an organisation that can either protect or undermine AI systems. It highlights the risk that employees, fearing job loss, might intentionally or unintentionally sabotage AI deployment. One cited survey found that 37% of Generation Z employees anonymously admitted to proactively sabotaging their businesses’ AI out of job-security concerns, and that a large percentage were using personal AI tools for work, posing security risks. Addressing the human firewall is vital because AI systems are trained on vast amounts of data, often including employee knowledge and interactions; if an organisation has a toxic culture, the AI can amplify it. Boards must therefore set a clear ethical tone, invest in upskilling employees, encourage a security-first and AI-first culture, and foster engagement to ensure successful and secure AI integration.

Will boards and human oversight be replaced by AI?

No, boards and human oversight are unlikely to be fully replaced by AI. Despite fantasies of full automation, current AI systems, including general-purpose AI, rely on vast amounts of data. As new data becomes scarce, AI models may start “hallucinating” or exhibiting unpredictable “emergent behaviour,” necessitating human oversight. Legal and regulatory systems, including stock exchange rules, are still geared towards human accountability. Liability for harm caused by AI agents (e.g., reputational damage or financial losses) is currently assigned to human beings, such as board members and C-suite executives, not to the AI itself. This area of law is evolving, but centuries-old principles of contract and tort law can still apply even without AI-specific legislation. Proactive measures, such as demonstrating good-faith efforts to comply with AI risk management frameworks (e.g., the NIST AI Risk Management Framework), are becoming crucial for mitigating directors’ potential liability.


Terms and Conditions

  • Neither The Good Governance Academy nor any of its agents or representatives shall be liable for any damage, loss or liability arising from the use of, or inability to use, this web site or the services or content provided from and through this web site.
  • This web site is supplied on an “as is” basis and has not been compiled or supplied to meet the user’s individual requirements. It is the sole responsibility of the user to satisfy itself prior to entering into this agreement with The Good Governance Academy that the service available from and through this web site will meet the user’s individual requirements and be compatible with the user’s hardware and/or software.
  • Information, ideas and opinions expressed on this site should not be regarded as professional advice or the official opinion of The Good Governance Academy and users are encouraged to consult professional advice before taking any course of action related to information, ideas or opinions expressed on this site.
  • When this site collects private information from users, such information shall not be disclosed to any third party unless agreed upon between the user and The Good Governance Academy.
  • The Good Governance Academy may, in its sole discretion, change this agreement or any part thereof at any time without notice.

Privacy Policy

Link to the policy: GGA Privacy Policy 2021

The Good Governance Academy (“GGA”) strives for transparency and trust in protecting your privacy, and we aim to explain clearly how we collect and process your information.

It’s important to us that you can enjoy using our products, services and website(s) without compromising your privacy in any way. The policy outlines how we collect and use different types of personal and behavioural information, and the reasons for doing so. You have the right to access, change or delete your personal information at any time; you can find out more about this and your rights by contacting the GGA, clicking on the “CONTACT” menu item, or using the details at the bottom of the page.

The policy applies to “users” (or “you”) of the GGA website(s) or any GGA product or service; that is, anyone attending, registering for, or interacting with any product or service from the GGA. This includes event attendees, participants, registrants, website users, app users and the like.

Our policies are updated from time to time. Please refer back regularly to keep yourself updated.