As Artificial Intelligence (AI) becomes deeply embedded in organisational life, questions around its ethical use are more urgent than ever.
This event explored the critical challenges and responsibilities organisations face in ensuring the ethical use of AI. Drawing on insights from a new guidebook designed for boards, executives, ethics and risk practitioners, and IT teams, it unpacked the societal debate around AI, identified the key ethical risks, and offered practical steps for governance.
Read on to explore how to lead your organisation in managing AI ethically, responsibly, and sustainably.
The launch of ChatGPT in November 2022 marked a turning point in global awareness of AI. While AI had already been embedded in various systems and processes for years, the public’s interaction with a conversational AI brought the technology’s potential, and its risks, into sharper focus. Since then, AI has been rapidly integrated into organisational tools, platforms, and daily operations, making it nearly impossible for businesses to remain unaffected.
As AI continues to evolve, many organisations are actively encouraging its adoption to maintain competitive advantage. However, this rapid uptake often occurs without the necessary ethical safeguards. Concerns are growing about how AI affects decision-making, privacy, accountability, and the future of work. Influential voices in the tech world have called for a pause in AI development until clearer guidelines are established.
Despite the growing awareness of AI’s ethical implications, many organisations still struggle with how to manage these risks effectively. There is a clear need for practical, accessible guidance tailored not to AI developers, but to organisational leaders, governance professionals, and technical teams. This webinar responds to that need, offering insights from a newly developed guidebook that outlines core concepts, ethical challenges, and actionable steps. It also introduces the EU AI Act as a benchmark for regulatory developments in this space.
The Ethics Institute defines ethics as the balance between what is good for ourselves and what is good for others. When applied to AI, this definition is extended to include the goal of leading to individual and collective flourishing, aiming to build societies that work for everyone while protecting individual rights and liberties. A crucial element is the concept of “human-centric AI,” which means developing AI that strengthens human dignity, capabilities, and decision-making, rather than weakening or replacing them. The concern is that AI could potentially prioritise what is good for machines or corporations over what is good for human beings.
The guidebook, freely available for download from The Ethics Institute, distinguishes between machine learning and generative AI. Machine learning is described as more analytical, used for tasks like identifying patterns in data (e.g., fraud detection in medical aid applications). Generative AI, exemplified by tools like ChatGPT, learns patterns in order to generate new information rather than simply analysing existing data. This distinction is crucial because the two present different types of ethical risk and require different management approaches. Generative AI poses a “diffuse risk” because of its widespread, everyday use by employees, necessitating systemic culture interventions such as broad training and clear policies. Machine learning, being more project-based, presents a “contained risk” that calls for a systemic risk-management intervention with specific processes, checks, and balances.
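To make the “analytical” side of this distinction concrete, the sketch below shows the kind of pattern-finding the guidebook describes: a model learns what normal records look like and flags deviations, as in fraud detection. The data, feature choices, and thresholds are invented for illustration; a real fraud-detection system would be considerably more involved.

```python
# A minimal sketch of analytical machine learning: learn the pattern of
# "normal" claims, then flag records that deviate from it.
# All data and feature names here are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated claims: [claim_amount, claims_submitted_this_year]
normal_claims = rng.normal(loc=[800.0, 4.0], scale=[200.0, 1.5], size=(500, 2))
suspicious_claims = np.array([[9500.0, 30.0], [12000.0, 25.0]])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_claims)

# predict() returns -1 for records flagged as anomalous, 1 for normal ones
print(model.predict(suspicious_claims))   # expect [-1, -1]
print(model.predict(normal_claims[:3]))   # mostly [1, 1, 1]
```

Because such a model is built and deployed as a discrete project, its risks can be contained by project-level checks, which is precisely the “contained risk” framing above.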
For Large Language Models (LLMs), a primary ethical risk is accuracy, as LLMs can “hallucinate” or generate false information, such as fake legal case law. This highlights the need for users to understand that LLMs predict logical words and phrases rather than necessarily providing accurate or truthful information. Another significant risk is data security, as employees often upload copyrighted or sensitive company information to generative AI tools without proper safeguards, risking intellectual property breaches and training AI on proprietary data.
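The point that LLMs predict plausible words rather than verify facts can be illustrated with a toy next-word predictor. The tiny “training corpus” below is invented, and real LLMs are vastly larger and more sophisticated, but the underlying principle is the same: the model generates statistically plausible continuations with no built-in notion of truth.

```python
# A toy bigram "language model": its entire knowledge is which word
# tends to follow which. The corpus is invented for illustration.
import random
from collections import defaultdict

corpus = (
    "the court ruled in favour of the plaintiff . "
    "the court dismissed the appeal with costs . "
    "the tribunal ruled in favour of the appellant ."
).split()

follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Repeatedly pick a statistically plausible next word; truth never enters."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

random.seed(3)
# The output is fluent and legally plausible, but it may describe a
# "ruling" that appears nowhere in the corpus and never happened.
print(generate("the"))
```

A fabricated-but-fluent sentence from this toy model is, in miniature, the same mechanism behind an LLM citing fake case law: recombination of learned patterns, not retrieval of verified facts.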
Project-based AI carries its own set of key ethical risks, and organisations should adopt a risk-based approach tailored to their level of AI use.
Human-centric AI is central to this ethical framework. AI should augment human decision-making, making people more reflective and capable, rather than replacing their wisdom or creative abilities, so that it serves human well-being and progress instead of diminishing them. The goal is for AI to contribute to individual and collective flourishing by fostering greater human consciousness, awareness, and intellectual ability. If AI is not human-centric, it could produce societies in which machine or corporate interests are prioritised over human needs, weakening human agency and leading to undesirable societal outcomes.
It is problematic to view AI, especially large language models, as having moral agency, because they lack the capacity for moral choice, an understanding of right and wrong, and accountability in the human sense. LLMs operate by predicting the “next logical word or phrase” based on patterns in their training data. They are not conscious entities attempting to lie or deceive; their “apologies” or “emotive” responses are simply generated from these predictive patterns. Human moral agency is deeply rooted in vulnerability: the understanding of one’s own and others’ susceptibility to harm. AI, not being vulnerable in the same way, cannot experientially comprehend this fundamental aspect of human ethics. Attributing moral agency to AI is therefore an unrealistic expectation that obscures its real capabilities and limitations, and risks a “dehumanisation or unhumanisation” of governance and decision-making.
Broad societal concerns related to AI include the digital divide, societal polarisation, job losses, environmental impact (owing to high energy consumption), and even existential threats to humanity. While AI offers opportunities to address some of these issues (e.g., better education, more meaningful work), there is a dilemma: the opportunities are often organisational (enhanced profitability, effectiveness, efficiency), while the risks are frequently societal. This creates an ethical challenge in which organisations might prioritise their immediate benefits over broader, long-term societal detriments, such as foreseeable job losses or significant energy use. The guidebook emphasises that failing to utilise AI is itself a risk, as it could mean falling behind and missing human-centric opportunities; this must, however, be balanced against a clear understanding and mitigation of the societal risks involved.
International standards and frameworks such as ISO/IEC 42001, the NIST AI Risk Management Framework, and the OECD AI Principles are crucial for governing AI ethics. While laws often lag behind technological advancement, these standards provide a “minimum acceptable societal norm” and practical guidance for organisations. The EU AI Act, for instance, offers a useful framework for categorising AI systems by risk (unacceptable, high, limited, and minimal) and calibrating the necessary governance accordingly. By aligning with these standards, organisations can ensure they consider ethical implications, such as human values and capabilities, throughout AI development and deployment. Leveraging these documents helps organisations establish robust governance policies, set a clear “tone from the top,” and manage AI mindfully and purposefully, ultimately building trust and avoiding negative impacts.
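As an illustration of calibrating governance to risk, the sketch below maps EU AI Act-style risk tiers to example controls. The tier names follow the Act’s categories, but the control lists are hypothetical examples of what an organisation’s own policy might require; they are not a statement of the Act’s actual legal obligations.

```python
# An illustrative sketch of a risk-based governance policy keyed to
# EU AI Act-style tiers. The controls listed are hypothetical examples.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring
    HIGH = "high"                  # e.g. AI in recruitment or credit scoring
    LIMITED = "limited"            # transparency duties, e.g. customer chatbots
    MINIMAL = "minimal"            # e.g. spam filters, spell checkers

CONTROLS = {
    RiskTier.UNACCEPTABLE: ["do not proceed"],
    RiskTier.HIGH: [
        "documented impact assessment",
        "human oversight of decisions",
        "bias and accuracy testing",
        "audit trail",
    ],
    RiskTier.LIMITED: ["disclose AI use to users", "acceptable-use policy"],
    RiskTier.MINIMAL: ["baseline acceptable-use policy"],
}

def required_controls(tier: RiskTier) -> list[str]:
    """Return the minimum governance checks this policy demands for a tier."""
    return CONTROLS[tier]

print(required_controls(RiskTier.HIGH))
```

The design point is simply that governance effort scales with risk: a minimal-risk tool needs only baseline policy, while a high-risk project triggers a heavier, auditable process, which is the calibration the Act’s tiering makes possible.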