Innovative Technology

Friend or Foe?

In a world driven by rapid technological advancement, one question keeps arising: is innovative technology our greatest ally or a potential threat?

This webinar is a thought-provoking exploration of the balance between the promise and peril of emerging technologies.

Join industry experts and thought leaders as we delve into the opportunities, challenges, and ethical implications of cutting-edge innovations such as AI, automation, and digital transformation.

Through insightful discussions and real-world examples, we’ll examine how technology empowers businesses and individuals while also addressing the risks of outpacing human adaptability. Don’t miss this chance to engage in a dynamic conversation about the evolving role of technology in shaping our future.

Background information

As technology continues to evolve at a rapid pace, it brings both immense opportunities and significant challenges. “Innovative Technology – Friend or Foe?” is a session that aims to explore the complex relationship between emerging technologies and their impact on our lives and businesses.

From artificial intelligence and automation to digital transformation, this webinar will examine the transformative power of innovation and how it’s reshaping industries. We’ll explore how technology is empowering individuals and organizations to reach new heights of productivity and efficiency, while also considering the risks of over-reliance and unintended consequences.

This session discusses the ethical considerations, potential dangers, and strategies for navigating the rapidly changing tech landscape. With a focus on balancing the promise of innovation against the risks it presents, it offers valuable insights on how to stay ahead in an ever-evolving world.

Short Explainer Video

Questions and Answers

BEN-Africa, the Business Ethics Network of Africa, was established in 1999 by academics. Its primary mission is to bridge the gap between academia, the business community, civil society, the public sector, and other stakeholders within the business ethics landscape. At its heart, BEN-Africa aims to unite individuals on the African continent who share a passion for business ethics.

The nature of work has undergone a fundamental shift, moving beyond the historical replacement of manual labour with machines, as seen in previous industrial revolutions. Now, it’s our intellect that is being challenged by AI and other innovative technologies. This means that the way we engage in labour is changing significantly. Professor Marie Noelle N’guessan distinguishes between “job” (a specific paid role) and “work” (a broader concept encompassing mental and physical activity for a purpose). While job-related impacts are more visible and may have affordable solutions, the influences on “work” are deeper, with less evident solutions, affecting mental health and creating a fast-changing environment that prioritises productivity and efficiency over emotional engagement.

The “Ghost in the Shell” concept, originating from a Japanese comic series, depicts a futuristic world where humans are constantly enhanced by technology (e.g., cyber brains, prostheses). This raises profound questions about our self-perception and how the concept of “self” fits within technology. The integration of AI also exposes humans to cyberattacks, raising questions about our responsibility towards technology. Dr. Suraj Juddoo expands on this, noting concerns about AI becoming self-aware and the ethical dilemmas surrounding the trustworthiness of data and the potential for harm. The key question is how to ensure technology remains ethical and does not harm humankind, particularly as AI systems become more capable of self-learning.

“Meta Barons” refers to individuals or entities who wield enormous power through emerging technologies like AI, similar to the archetype of powerful, patriarchal figures who enhance themselves with machines to dominate. The term also draws a parallel to Mark Zuckerberg’s “Meta” (formerly Facebook), evoking both virtual reality and concentrated power. Professor Kemi Ogunyemi describes Meta Barons as those who see opportunities to leverage platforms, such as social media, to shape public discourse, gain influence, and earn money. This concentration of power inspires fear, as it can be used for both good and harm, raising concerns about the ethics of those behind the scenes and the potential human cost of innovation.

Dr. Suraj Juddoo highlights several challenges in ensuring AI trustworthiness. Firstly, foundational models (like ChatGPT) are pre-trained on vast amounts of publicly available data, raising questions about data quality and the accuracy of responses (e.g., “hallucinations” in LLMs). Organizations need to fine-tune these models with their own data to make them more focused and safer. Secondly, there’s the ongoing challenge of controlling self-learning AI systems and implementing safeguards. Cybersecurity threats, particularly those focused on manipulating training data to produce incorrect results, are also a significant concern. Finally, the debate continues on whether regulations or policies are more effective in governing AI, as laws can be slow to adapt, and enforcement is complex.
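
To make the pre-training versus fine-tuning distinction concrete, here is a minimal sketch (not something shown in the session) using the open-source Hugging Face transformers and datasets libraries. The base model distilgpt2, the file org_policies.txt, and all hyperparameters are illustrative assumptions, not anything the panel referenced.

```python
# Minimal sketch, assuming the Hugging Face "transformers" and "datasets"
# libraries: fine-tune a small, publicly pre-trained foundational model on
# organisation-specific text so that its outputs become more focused.
# "distilgpt2" and "org_policies.txt" are hypothetical placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "distilgpt2"                        # small public base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 family defines no pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Domain text the base model never saw during its general pre-training.
dataset = load_dataset("text", data_files={"train": "org_policies.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="fine_tuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False keeps the causal language-modelling objective of the base model
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # updates the weights; persist them with trainer.save_model()
```

Fine-tuning narrows a general-purpose model to an organisation's domain; it does not by itself eliminate hallucinations or secure the training pipeline, which is why the data-quality and data-poisoning concerns described above still apply.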

The tech-enabled economy profoundly influences adult learning and work models. Adult learning, traditionally compliance and safety-driven, is now insufficient to meet the demands of fast-changing, tech-enabled labour markets, necessitating upskilling. The prevalence of hybrid and flexible work models, while offering benefits, also increases the need for reskilling, as reduced in-person interaction can hinder experience sharing among colleagues. This dynamic creates challenges for continuous learning and adaptation in the workforce.

The pursuit of profit in a tech-driven global context presents several significant risks. As Professor Kemi Ogunyemi points out, one major risk is the blurring of lines between truth and fiction due to deepfakes and social manipulation, leading to misinformation and reputational damage. There’s also a heightened threat to online security, with concerns about voice cloning, fake identities, data privacy breaches, and surveillance. Over-reliance on AI for emotional support can lead to increased mental health issues like depression. Furthermore, the immense energy consumption required for training AI models contributes negatively to the environment, posing a sustainability challenge.

Navigating the ethical challenges of AI and technology requires a multi-faceted approach. Professor Marie Noelle N’guessan emphasizes the need for critical thinking about how technology impacts human well-being and happiness, urging us to find solutions that improve work environments rather than merely creating suffering. Professor Kemi Ogunyemi stresses the importance of acknowledging both the opportunities and risks, advocating for courage and awareness. She suggests that technologists should adopt human-centered design, considering all stakeholders, and that society needs to ensure inclusive development, carrying all communities and generations along through upskilling initiatives. Dr. Suraj Juddoo highlights the crucial role of governance, urging deep discussions between regulators, technology developers, and consumers to shape a future where AI systems are “a friend and not a foe”.

Our guests

Key terms used during the session

  • Job: A specific paid role or occupation, a narrower concept compared to “work.”
  • Work: A broader concept encompassing any mental and physical activity performed to achieve a purpose or to express oneself.
  • Adult Learning: Education and training for adults, identified as currently compliance and safety-driven, and insufficient for the upskilling needs of tech-enabled markets.
  • Hybrid and Flexible Work Models: Work arrangements that combine in-office and remote work, increasing the need for reskilling due to reduced in-person experience sharing.
  • Mental Health Disorders in the Workplace: An increasing issue linked to fast-changing work environments, disengagement from emotions, and a focus on productivity and efficiency in tech-driven industries.
  • Artificial Intelligence (AI): Technology that performs tasks normally requiring human intelligence; its integration in the workplace adds complexity and increases the need for training while reducing the time available for it.
  • The Ghost in the Shell: A panel title referring to a Japanese manga and animated series, exploring futuristic human enhancement through technology and the ethical questions it raises about selfhood and cyber attacks.
  • Cyberbrain Attacks / Body Attacks: Refers to the vulnerability of technologically enhanced humans to malicious digital interference, as depicted in “The Ghost in the Shell.”
  • Data Governance: The overarching management of data availability, usability, integrity, and security, a key expertise of Dr. Suraj Juddoo.
  • Automated Algorithms: Systems that perform tasks based on predefined rules, encompassing AI and predating the recent “hype” around AI, facing similar challenges regarding trustworthiness and ethics.
  • Ethics: The moral principles that govern a person’s behaviour or the conducting of an activity; often involves dilemmas between right and wrong.
  • Disruption: The significant alteration or innovation caused by new technologies, seen by panelists as both negative and positive.
  • Foundational Models: AI models (like ChatGPT, LLMs) pre-trained on vast amounts of often publicly available data, forming the basis for more specific applications.
  • Hallucinations (of LLMs): Refers to instances where Large Language Models generate information that is plausible but incorrect or nonsensical.
  • Pre-training: The initial phase of training a foundational AI model on a large, general dataset.
  • Self-supervised Model: As used in the session, a foundational model that has been further trained (fine-tuned) on specific, often organizational, data to be more focused and safer.
  • Reinforcement Learning: A machine learning technique where an AI model learns to make decisions by performing actions and receiving rewards or penalties.
  • Bias (in AI data): Prejudices or inaccuracies in the data used to train AI systems, leading to skewed or unfair outcomes. It is seen as subjective and always existing to some degree.
  • Trustworthiness (of AI): The ability of an AI system to provide accurate, reliable, and ethical responses, free from errors, and with satisfactory data quality.
  • Cybersecurity Issues (for AI): New forms of cyber attacks specifically targeting AI systems, primarily by changing training data rather than accessing internal workings.
  • Technical Robustness: The resilience and reliability of AI systems, a key theme in international guidelines for ethical AI.
  • Accountability Measures (for AI): Mechanisms to determine responsibility for AI system actions and outcomes.
  • Transparency Measures (for AI): Methods to ensure the inner workings, data usage, and decision-making processes of AI systems are understandable.
  • Explainability (of AI systems): The ability to articulate the rationale behind an AI system’s decisions or outputs in an understandable way.
  • RAM (Readiness Assessment Methodology for AI): A framework issued by organizations like UNESCO to evaluate the ethical level of AI implementation.
  • Provenance of Data: The origin and history of data, important for controlling its source and ensuring its authenticity.
  • Application Layer: The user-facing interface of an AI system (e.g., web interface, mobile app), which can present security risks.
  • Societal Level Impact (of AI): The broader effects of AI on human society, including educational curricula and public understanding.
  • Technocrats: Individuals who are actively shaping the future with innovation, excited by tech, and focused on technological possibilities rather than power.
  • Meta Barons: Powerful individuals who leverage technology and platforms to wield power and earn money, often dominating public discourse. (Distinct from the “Metaverse” of Mark Zuckerberg, though related in theme).
  • Printing Press Burners / Luddites: A metaphorical term for individuals or groups resistant to new technology and change, often out of fear of its implications or human cost.
  • Deepfakes: Synthetic media in which a person in an existing image or video is replaced with someone else’s likeness, often used for misinformation or manipulation.
  • Social Manipulation: The use of technology to influence public opinion, beliefs, or actions, often through misinformation.
  • Data Privacy: The protection of personal data from unauthorised access, use, or disclosure.
  • Autonomous Weapons: Weapon systems that can select and engage targets without human intervention.
  • Over-reliance on AI: The potential pitfall of excessive dependence on AI systems, leading to a decrease in human critical thinking, social interaction, or problem-solving skills.
  • Human-centered Design: An approach to design that ensures products and services are developed with the human user’s needs, capabilities, and limitations as the primary focus.
  • Inclusive Development: Ensuring that technological advancements and their benefits are accessible to and shared by all communities and generations, without leaving anyone behind.

Terms and Conditions

  • Neither The Good Governance Academy nor any of its agents or representatives shall be liable for any damage, loss or liability arising from the use or inability to use this web site or the services or content provided from and through this web site.
  • This web site is supplied on an “as is” basis and has not been compiled or supplied to meet the user’s individual requirements. It is the sole responsibility of the user to satisfy itself prior to entering into this agreement with The Good Governance Academy that the service available from and through this web site will meet the user’s individual requirements and be compatible with the user’s hardware and/or software.
  • Information, ideas and opinions expressed on this site should not be regarded as professional advice or the official opinion of The Good Governance Academy and users are encouraged to consult professional advice before taking any course of action related to information, ideas or opinions expressed on this site.
  • When this site collects private information from users, such information shall not be disclosed to any third party unless agreed upon between the user and The Good Governance Academy.
  • The Good Governance Academy may, in its sole discretion, change this agreement or any part thereof at any time without notice.

Privacy Policy

Link to the policy: GGA Privacy Policy 2021

The Good Governance Academy (“GGA”) strives for transparency and trust when it comes to protecting your privacy and we aim to clearly explain how we collect and process your information.

It’s important to us that you can enjoy using our products, services and website(s) without compromising your privacy in any way. The policy outlines how we collect and use different types of personal and behavioural information, and the reasons for doing so. You have the right to access, change or delete your personal information at any time; you can find out more about this and your rights by contacting the GGA, clicking on the “CONTACT” menu item, or using the details at the bottom of the page.

The policy applies to “users” (or “you”) of the GGA website(s) or any GGA product or service; that is anyone attending, registering or interacting with any product or service from the GGA. This includes event attendees, participants, registrants, website users, app users and the like.

Our policies are updated from time to time. Please check back regularly to keep yourself up to date.