Managing the ethics of AI in organisations


As Artificial Intelligence (AI) becomes deeply embedded in organisational life, questions around its ethical use are more urgent than ever.

This event explored the critical challenges and responsibilities organisations face in ensuring the ethical use of AI. Drawing on insights from a new guidebook designed for boards, executives, ethics and risk practitioners, and IT teams, it unpacked the societal debate around AI, identified the key ethical risks, and offered practical steps for governance.

Now explore how to lead your organisation in managing AI ethically, responsibly, and sustainably.

Background information

The launch of ChatGPT in November 2022 marked a turning point in global awareness of Artificial Intelligence (AI). While AI had already been embedded in various systems and processes for years, the public’s interaction with a conversational AI brought the technology’s potential—and its risks—into sharper focus. Since then, AI has been rapidly integrated into organisational tools, platforms, and daily operations, making it nearly impossible for businesses to remain unaffected.

As AI continues to evolve, many organisations are actively encouraging its adoption to maintain competitive advantage. However, this rapid uptake often occurs without the necessary ethical safeguards. Concerns are growing about how AI affects decision-making, privacy, accountability, and the future of work. Influential voices in the tech world have called for a pause in AI development until clearer guidelines are established.

Despite the growing awareness of AI’s ethical implications, many organisations still struggle with how to manage these risks effectively. There is a clear need for practical, accessible guidance tailored not to AI developers, but to organisational leaders, governance professionals, and technical teams. This webinar responds to that need, offering insights from a newly developed guidebook that outlines core concepts, ethical challenges, and actionable steps. It also introduces the EU AI Act as a benchmark for regulatory developments in this space.


Questions and Answers

The Ethics Institute defines ethics as the balance between what is good for ourselves and what is good for others. Applied to AI, this definition extends to the goal of individual and collective flourishing: building societies that work for everyone while protecting individual rights and liberties. A crucial element is the concept of “human-centric AI”, meaning AI that strengthens human dignity, capabilities, and decision-making rather than weakening or replacing them. The concern is that AI could otherwise prioritise what is good for machines or corporations over what is good for human beings.

The guidebook (freely available for download from The Ethics Institute) distinguishes between machine learning and generative AI. Machine learning is described as more analytical, used for tasks like identifying patterns in data (e.g., fraud detection in medical aid applications). Generative AI, exemplified by tools like ChatGPT, learns patterns in order to generate new information rather than just analysing existing data. This distinction is crucial because the two present different types of ethical risk and require different management approaches. Generative AI poses a “diffuse risk” due to its widespread, everyday use by employees, necessitating a systemic culture intervention such as widespread training and policies. Machine learning, being more project-based, presents a “contained risk” that requires a systemic risk management intervention with specific processes, checks, and balances.
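
As a loose illustration of the “analytical” machine-learning pattern described above, the sketch below flags unusual medical-aid claims with scikit-learn’s IsolationForest. It is a minimal sketch, not taken from the guidebook: the features, data, and contamination rate are invented for illustration.

    # Minimal sketch: "analytical" machine learning that flags unusual
    # medical-aid claims for review. All data and features are invented.
    from sklearn.ensemble import IsolationForest

    # Each row: [claim_amount, claims_this_year, days_since_last_claim]
    historical_claims = [
        [1200.0, 3, 40],
        [900.0, 2, 55],
        [1100.0, 4, 30],
        [950.0, 1, 90],
        [1050.0, 3, 45],
    ]

    model = IsolationForest(contamination=0.1, random_state=0)
    model.fit(historical_claims)

    new_claim = [[9800.0, 12, 2]]    # unusually large and frequent
    flag = model.predict(new_claim)  # -1 = anomaly, 1 = normal
    print("review manually" if flag[0] == -1 else "auto-process")

Such a model only learns statistical patterns from historical data, which is exactly why biased history produces biased flags, as discussed under project-based risks below.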

For Large Language Models (LLMs), a primary ethical risk is accuracy, as LLMs can “hallucinate” or generate false information, such as fake legal case law. This highlights the need for users to understand that LLMs predict logical words and phrases rather than necessarily providing accurate or truthful information. Another significant risk is data security, as employees often upload copyrighted or sensitive company information to generative AI tools without proper safeguards, risking intellectual property breaches and training AI on proprietary data.
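
To see why a system that “predicts logical words” is not a source of truth, consider the deliberately tiny next-word predictor below. Real LLMs use neural networks trained on vast corpora, but the objective is similar: choose a statistically plausible continuation, not a verified fact. The corpus and code are invented for illustration.

    # Toy next-word predictor: always picks the most frequent follower of
    # the current word. It produces fluent-looking output with no notion
    # of truth, which is the intuition behind LLM "hallucinations".
    from collections import Counter, defaultdict

    corpus = ("the court ruled in the case . "
              "the case set a precedent . "
              "the court cited the precedent .").split()

    followers = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        followers[current_word][next_word] += 1

    word, sentence = "the", ["the"]
    for _ in range(6):
        word = followers[word].most_common(1)[0][0]  # plausible, not true
        sentence.append(word)
    print(" ".join(sentence))  # grammatical-looking, never fact-checked

Nothing in this mechanism consults reality, which is why LLM output, however fluent, always needs human verification.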

For project-based AI, key ethical risks include:

  • Bias and Fairness: AI systems can exhibit bias if trained on biased data, leading to unfair outcomes, especially in sensitive areas like recruitment or parole decisions.
  • Data Privacy: Ensuring that data used by AI is sourced appropriately, is accurate, and is used only for its intended purpose, aligning with privacy regulations like GDPR.
  • Transparency and Explainability: The difficulty in explaining how AI makes certain decisions can be an ethical concern, especially when those decisions impact individuals.
  • Recourse: Individuals affected by AI decisions need clear avenues for human recourse, as seen in cases where people were unfairly penalised by automated systems with no human oversight.
  • Autonomy: AI should not be used to manipulate people into harmful actions, respecting their agency and personhood.
  • Accountability: Determining who is accountable for AI’s actions, as machines do not possess moral agency.
  • Job Replacement: The potential for AI to cause significant job losses, a concern that is particularly acute in developing countries.

Organisations should adopt a risk-based approach tailored to their level of AI use.

  1. Governance Oversight: At any level of AI use, there needs to be some form of governance oversight, with the board being at the forefront of managing emerging technologies.
  2. Commitment to Responsible AI: Before implementing any measures, organisations must decide if responsible AI use (human-centric AI) is a core value.
  3. Set Standards and Codify Guidance: For diffuse risks (e.g., employees using LLMs), simple guidelines are essential. A sample code of conduct might include ensuring accuracy, avoiding bias, safeguarding data security, and reminding employees that they remain accountable for their AI-assisted output.
  4. Socialise and Monitor: Policies need to be communicated and made accessible to everyone, followed by monitoring AI use and integrating feedback into reporting cycles.
  5. Project-based Risk Management (for contained risks):
  • Assign Responsibility: A senior human should be assigned overall ownership of each AI project.
  • Consider Team Diversity: Projects should involve diverse teams in terms of demographics, skills, and backgrounds, and testing should reflect this diversity.
  • Assess Against Standards: Projects must be evaluated against established ethical principles (e.g., beneficence, avoiding harm, safeguarding human autonomy, competence, transparency, explainability, oversight, and accountability).
  • Keep a Human in the Process: Maintain human involvement in decision-making, even when AI provides recommendations (see the sketch after this list).
  • Provide Recourse Avenues: Ensure clear and accessible human-led channels for individuals to seek redress if they are negatively impacted by AI decisions.
  • Regular Review: Continuously test and review AI models, for example, every two years or with every major revision, and actively solicit feedback from users.
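
To make controls such as “assign responsibility”, “keep a human in the process”, and “provide recourse avenues” concrete, here is a hypothetical sketch of an approval gate. The class, fields, and names are invented for illustration, not drawn from the guidebook.

    # Hypothetical sketch: every AI recommendation carries a named human
    # owner, requires explicit human sign-off before it takes effect, and
    # records a recourse contact for affected individuals.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AIRecommendation:
        project: str
        owner: str               # senior human accountable for the project
        recommendation: str
        recourse_contact: str    # where affected individuals can appeal
        approved_by: Optional[str] = None

        def approve(self, reviewer: str) -> None:
            self.approved_by = reviewer

        def can_act(self) -> bool:
            # No action is taken on AI output without human sign-off.
            return self.approved_by is not None

    rec = AIRecommendation(
        project="client-screening",
        owner="Head of Risk",
        recommendation="flag application #1042 for review",
        recourse_contact="appeals@example.org",
    )
    assert not rec.can_act()     # AI output alone is not a decision
    rec.approve(reviewer="J. Analyst")
    assert rec.can_act()         # a named human has taken ownership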

Human-centric AI is central to the ethical framework, aiming to develop AI that strengthens human dignity and capabilities. It ensures that AI serves human well-being and progress rather than diminishing them. AI should augment human decision-making, making people more reflective and capable, instead of replacing their wisdom or creativity. The goal is for AI to contribute to individual and collective flourishing: building societies that work for everyone, protecting individual rights and liberties, and fostering greater human consciousness, awareness, and intellectual ability. If AI is not human-centric, it could lead to societies in which machines or corporate interests are prioritised over human needs, weakening human agency and producing undesirable societal outcomes.

It is problematic to view AI, especially large language models, as having moral agency, because they do not possess the capacity for moral choice, an understanding of right and wrong, or accountability in the human sense. As explained, LLMs operate by predicting the “next logical word or phrase” based on patterns in their training data. They are not conscious entities attempting to lie or deceive; their “apologies” or “emotive” responses are simply generated from these predictive patterns. Human moral agency is deeply embedded in vulnerability—the understanding of one’s own and others’ susceptibility to harm. AI, not being vulnerable in the same way, cannot experientially comprehend this fundamental aspect of human ethics. Attributing moral agency to AI is therefore an unrealistic expectation that misrepresents its capabilities and limitations, and risks a “dehumanisation or unhumanisation” of governance and decision-making.

Broad societal concerns related to AI include the digital divide, societal polarisation, job loss, environmental impact (due to high energy consumption), and even existential threats to humanity. While AI offers opportunities to address these issues (e.g., better education, more meaningful work), there is a dilemma: the opportunities are often organisational (enhanced profitability, effectiveness, efficiency), while the risks are frequently societal. This creates an ethical challenge in which organisations might prioritise their immediate benefits over broader, long-term societal detriments, such as foreseeable job losses or significant energy use. The discussion also emphasised that not using AI is itself a risk, as organisations could fall behind and miss human-centric opportunities; this must be balanced against a clear understanding and mitigation of the societal risks involved.

International standards and frameworks such as ISO/IEC 42001, the NIST AI Risk Management Framework, and the OECD AI Principles are crucial for governing AI ethics. While laws often lag behind technological advancements, these standards provide a “minimum acceptable societal norm” and guidance for organisations. The EU AI Act, for instance, offers a useful framework for categorising AI systems by risk (unacceptable, high, limited, and minimal) and calibrating the necessary governance accordingly. By aligning with these standards, organisations can ensure they consider ethical implications, such as human values and capabilities, throughout AI development and deployment. Leveraging these documents helps organisations establish robust governance policies, set a clear “tone from the top”, and manage AI mindfully and purposefully, ultimately building trust and avoiding negative impacts.
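
As a purely illustrative sketch of calibrating governance to risk tier, the mapping below borrows the EU AI Act’s category names but pairs them with invented example controls; actual obligations come from the Act and an organisation’s own policies.

    # Illustrative only: tier names follow the EU AI Act; the control
    # lists are invented examples, not legal requirements.
    REQUIRED_CONTROLS = {
        "unacceptable": ["do not deploy"],
        "high": ["senior owner assigned", "bias and fairness testing",
                 "human oversight", "recourse channel", "periodic review"],
        "limited": ["transparency notice to users", "usage policy"],
        "minimal": ["baseline acceptable-use guidance"],
    }

    def governance_checklist(risk_tier: str) -> list[str]:
        """Return the example controls a project at this tier would need."""
        return REQUIRED_CONTROLS[risk_tier.lower()]

    for control in governance_checklist("high"):
        print("-", control)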


Key Terms

  • AI (Artificial Intelligence): The use of machines to perform tasks that typically require human intelligence, such as generating content or analysing data.
  • AI Ethics: A field of study concerned with the moral implications of developing and using AI technologies, focusing on principles like fairness, accountability, and transparency.
  • Augmented Decision-Making: The use of AI to enhance and support human decision-making, providing insights and recommendations without fully automating the decision process, thereby fostering human wisdom rather than replacing it.
  • Autonomy (Human Autonomy): The capacity of an individual to make their own independent choices and act on them. In AI ethics, it refers to respecting human agency and avoiding manipulation.
  • Beneficence: An ethical principle that states one should act to benefit others or do good. In AI, it suggests that AI systems should add value and include the consideration of all stakeholders.
  • Bias (in AI): Systematic and unfair prejudice in an AI system’s outputs, often due to biased training data or algorithmic design, leading to discriminatory outcomes (e.g., in recruitment or parole systems).
  • Contained Risk: An AI risk profile where the use of AI is specific, project-based, and often involves machine learning applications (e.g., fraud detection). It requires systemic risk management interventions with clear processes and checks.
  • Copyright Material: Original literary, dramatic, musical, and artistic works that are protected by intellectual property law, preventing unauthorised copying or use. Uploading such material to general AI models can pose a data security risk.
  • Diffuse Risk: An AI risk profile characterised by the widespread, informal use of general AI tools (like large language models) by employees across an organisation. It requires a systemic culture intervention with broad training and policies.
  • Digital Divide: The gap between those who have access to modern information and communications technology and those who do not, or who have restricted access. AI can exacerbate or help mitigate this divide.
  • Existential Threat: A risk that is capable of causing the extinction of humanity or the collapse of civilisation. Some people view advanced AI as a potential existential threat.
  • Explainability (in AI): The ability to understand and interpret how an AI system arrived at a particular decision or output. This is crucial for transparency and accountability.
  • Flourishing (Individual and Collective): A state of well-being and thriving, both for individual human beings and for society as a whole. The Ethics Institute views this as the ultimate goal of ethical practice.
  • Generative AI: A type of artificial intelligence that can create new content, such as text, images, or code, by learning patterns from existing data (e.g., ChatGPT, Copilot).
  • Hallucinations (in LLMs): Instances where large language models generate false, nonsensical, or made-up information, presenting it as factual.
  • Human-Centric AI: AI development and use that prioritises human well-being, dignity, rights, and capabilities, ensuring that AI serves and strengthens humanity rather than weakening it.
  • Intellectual Property (IP): Creations of the mind, such as inventions, literary and artistic works, designs, and symbols, names and images used in commerce. Protecting organisational IP from inadvertent sharing with AI is a key concern.
  • King V: A draft corporate governance code in South Africa that acknowledges the board’s role in overseeing emerging technologies.
  • Large Language Models (LLMs): A type of AI model trained on vast amounts of text data, capable of understanding, generating, and processing human language (e.g., ChatGPT, Claude).
  • Machine Learning (ML): A subset of AI that enables systems to learn from data, identify patterns, and make decisions with minimal human intervention; often more analytical in nature.
  • Moral Agency: An individual’s capacity to make moral choices based on a sense of right and wrong and to be held accountable for those actions. AI is currently not considered to possess moral agency.
  • Nudging: Subtle interventions that influence choices while preserving freedom of choice, often used in marketing or behavioural economics. In AI, it can become a form of manipulation if abused.
  • OECD AI Principles: A set of principles developed by the Organisation for Economic Co-operation and Development to guide responsible AI innovation and stewardship, typically at a government level.
  • Personal Information: Any information relating to an identified or identifiable natural person. Managing this effectively while using AI tools is a major data privacy concern.
  • Project-Based AI: AI applications developed for specific organisational tasks or initiatives (e.g., a chatbot for client interaction, an AI system for detecting fraud).
  • Recourse: The right or ability to appeal a decision or seek a remedy when harmed. In AI ethics, it means ensuring that individuals can challenge AI-driven decisions and reach a human for resolution.
  • Retrieval Augmented Generation (RAG): An AI technique that combines large language models with external knowledge retrieval systems to improve the accuracy and factual grounding of generated content.
  • Ring-Fencing Data: Isolating or segmenting specific data within an AI system to prevent it from being used for general training or shared publicly, often to protect sensitive or proprietary information.
  • Risk-Based Approach (to AI Ethics): Tailoring the ethical governance and management procedures for AI according to the level and type of risk posed by its use.
  • Shadow AI: The unofficial or unsanctioned use of AI tools by employees within an organisation, often bypassing official IT protocols and potentially leading to data security and compliance risks.
  • Societal Polarisation: The division of a society into two opposing groups or factions, which can be influenced by AI-driven content and algorithms.
  • Stakeholder Thinking: A management philosophy that considers the interests of all parties affected by an organisation’s actions (employees, customers, community, environment, etc.), rather than solely focusing on shareholders.
  • Transparency (in AI): The ability to openly communicate how an AI system works, its data sources, its limitations, and its intended use, both internally and externally.
  • Vulnerability (Human): The state of being exposed to harm or suffering. Kris Dobie argues that human moral agency is embedded in this shared sense of vulnerability, which AI lacks.

Terms and Conditions

  • Neither The Good Governance Academy nor any of its agents or representatives shall be liable for any damage, loss or liability arising from the use of, or inability to use, this web site or the services or content provided from and through this web site.
  • This web site is supplied on an “as is” basis and has not been compiled or supplied to meet the user’s individual requirements. It is the sole responsibility of the user to satisfy itself prior to entering into this agreement with The Good Governance Academy that the service available from and through this web site will meet the user’s individual requirements and be compatible with the user’s hardware and/or software.
  • Information, ideas and opinions expressed on this site should not be regarded as professional advice or the official opinion of The Good Governance Academy and users are encouraged to consult professional advice before taking any course of action related to information, ideas or opinions expressed on this site.
  • When this site collects private information from users, such information shall not be disclosed to any third party unless agreed upon between the user and The Good Governance Academy.
  • The Good Governance Academy may, in its sole discretion, change this agreement or any part thereof at any time without notice.

Privacy Policy

Link to the policy: GGA Privacy Policy 2021

The Good Governance Academy (“GGA”) strives for transparency and trust when it comes to protecting your privacy and we aim to clearly explain how we collect and process your information.

It’s important to us that you can enjoy using our products, services and website(s) without compromising your privacy in any way. The policy outlines how we collect and use different types of personal and behavioural information, and the reasons for doing so. You have the right to access, change or delete your personal information at any time; you can find out more about this and your rights by contacting the GGA, clicking on the “CONTACT” menu item or using the details at the bottom of the page.

The policy applies to “users” (or “you”) of the GGA website(s) or any GGA product or service; that is, anyone attending, registering for or interacting with any product or service from the GGA. This includes event attendees, participants, registrants, website users, app users and the like.

Our policies are updated from time to time. Please check back regularly to stay up to date.