This is the third of four in the series “Bridging digital capability gaps: an AI perspective”.

Over the past decade, the rapid integration of AI has impacted our everyday lives. There has been an explosion of research and development in this field, which has influenced how we live, how we work and how we make decisions. 

On one hand, human nature tends to ask for ever more, and AI can push the frontiers of knowledge and scientific discovery across nearly all sectors and industries. On the other hand, techno-determinism holds that if a technology has been invented, then it must be deployed. 

However, AI misuse has also created significant risks and challenges. Where there is unity of purpose that peace and prosperity for people and the planet are desirable outcomes, we also share a responsibility to foster trustworthy AI systems.

AI systems that work in the service of good

It is no secret that as humans, we have greatly benefited from the vast array of AI applications. To be sure, the unfolding AI revolution holds out the promise of further breakthroughs and solutions to global issues, from climate change and world hunger, to chronic disease and poverty reduction, to financial inclusion and optimizing economic aid. 

It is widely documented that a hellbent drive towards AI solutions at all costs can also reduce welfare and cause unintended negative consequences. In domains such as law enforcement, criminal sentencing, healthcare delivery and employment, AI can replicate and amplify existing inequalities. Given the many concerns around data security, bias and privacy, AI techniques should be developed and deployed so that they perform equally well for everyone. 

As noted by Professor Sarvapali Gopal Ramchurn, “the pace of change in the AI field over the last decade had been too fast for those that use, operate and regulate systems that end up using AI-based solutions.”

Throughout 2021, we have seen a flurry of national and regional endeavours to push the agenda forward. 

For instance, the National Research Center of Canada is harnessing the power of AI for the global good. It aims to boost scientific discovery and engineering design to solve the most complex problems in a range of fields.

The UK government has published a transparency standard for algorithms that is geared towards managing risks and building trust. It requires a brief description of the algorithm and how and why it is used, as well as more detailed information about how the tool works, the training data and the level of human oversight. 
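To make the two tiers of that record concrete, here is a minimal sketch in Python. The field names and the example values are illustrative assumptions, not the standard's actual schema:

```python
from dataclasses import dataclass

@dataclass
class AlgorithmTransparencyRecord:
    """Hypothetical structure mirroring the standard's two tiers of disclosure."""
    # Tier 1: a brief, non-technical description
    name: str
    summary: str           # what the tool does, and how and why it is used
    # Tier 2: more detailed information
    how_it_works: str      # method or model family
    training_data: str     # provenance of the data used to train the tool
    human_oversight: str   # level of human review applied to its outputs

# Example record with invented values, for illustration only
record = AlgorithmTransparencyRecord(
    name="Application triage tool",
    summary="Ranks incoming applications for manual review",
    how_it_works="Gradient-boosted decision trees",
    training_data="Historical application outcomes, 2015-2020",
    human_oversight="Every automated ranking is reviewed by a caseworker",
)
```

Even a lightweight structure like this forces the publishing team to state the oversight level explicitly rather than leave it implicit.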

The European Commission released a proposal to regulate AI describing it as an attempt to ensure a “well functioning internal market for AI systems” that is based on “EU values and fundamental rights.”

In November 2021, UNESCO released a set of recommendations that feed into national and international policies and guidelines, promoting human rights while contributing to the achievement of the SDGs.

Its four key areas are:

  • Protecting data 
  • Banning social scoring and mass surveillance
  • Helping to monitor and evaluate
  • Protecting the environment 

There is broad consensus that human-centered AI approaches and regulatory frameworks are necessary to make sure that emerging technologies benefit humanity as a whole. Several guidelines have been established and many more are underway. 

Responsible AI at the firm level

According to Algorithm Watch, a Berlin-based non-profit organization, at least 173 sets of AI principles have been published around the world. 

At the business and enterprise level too, AI governance frameworks are numerous and diverse. Many companies have adopted AI principles to guide their actions, seeing them as kerbs on the roadside to keep themselves out of trouble. Yet the real hurdle lies in execution: crossing the bridge from principles to everyday practice.

In this regard, monitoring requirements (transparency and explainability) and control requirements (data) both add to the complexity of algorithms and projects. But they also reveal their limitations. 

Chief among them is acknowledging that it may not always be possible to train AI systems on all possible scenarios. There is a trade-off between efficiency and fairness. For critical systems, where there may be no time for human intervention, trust at the design stage is of paramount importance. In addition, several use cases show that transparency is not always as desirable as expected, because it can lead to worse results. In other cases, we ask ourselves: does the algorithm factor in randomness and human indecision? And who is harmed when algorithms become unfair? For instance, even though experiments under real conditions show that people's reaction is mostly to freeze or do nothing, the frequently discussed trolley problem (linked to autonomous vehicles) and its footbridge variant, although purely hypothetical, illustrate how problematic modern-day moral dilemmas can be. 

The diversity of data science teams and developers is also crucial for raising awareness of unconscious biases that creep in through the data or the decision-making process.

Outside the realms of design and process, another booming area of AI ethics focuses on outcomes. Impact assessments akin to human rights and sustainability reports, and audits similar to those of companies' financials, are two popular trends. Although these approaches will not iron out algorithmic issues, proponents argue that they will enforce legal accountability and make results reviewable by the public. This helps support innovation while managing risks such as security and compliance, as well as reputational risks including talent acquisition and employee retention.

Leadership roles and risk-based approach in the era of AI

A recent survey conducted by Edelman Research revealed that ethics is three times more important than competence when it comes to the trustworthiness of companies. Leadership endeavours to encourage ethical behaviour and align it with corporate values are therefore critical to producing positive outcomes and financial performance. According to Professor Nelson Phillips, the old leadership competencies are not disappearing, but new ones are appearing, and ethical competencies depend on leaders having deep technological and organizational competencies.

The Covid-19 pandemic has blurred the boundaries between work and life. With the power of social media and the scandals of deepfakes, it has also spurred the creation of a workforce that is even more curious and questioning, on top of the pressures for responsible innovation and the ethical adoption of new technology. No doubt AI will increase the demand for effective leadership that combines a good dose of humility with confidence under uncertainty, as well as the need for transparent communication. 

Organizations may also have to design AI governance frameworks appropriate to their risk profile and tolerance. These guidelines are particularly challenging for SMEs, given their limited resources. In “Building trustworthy AI solutions: a case for practical solutions for small businesses”, the authors argue that a business must question where responsibility (tasks and obligations) lies within its AI governance framework and must assign accountability (oversight and liability) to roles across the design, development and deployment lifecycle. Their analysis provides a mechanism for SMEs to select their own toolkits based on their current capacity, resources and ethical awareness, focusing initially on the conceptualization stage of the AI lifecycle and then extending throughout. 

Furthermore, innovation also requires collaboration and cross-functional integration. 

Last November, in the “Talks on Transdisciplinarity” organized by the Division of Arts & Humanities at the University of Kent, we learned more about how the field of transdisciplinarity is evolving among scholars. Likewise, on AI ethical issues, momentum is building within institutions and university circles, with researchers and educators from many disciplines collaborating with various stakeholders.

Governments also play a leading role in setting new standards of collaboration on ethical AI. For example, the Canadian government has developed a risk-based approach to AI adoption in the public sector which divides AI systems into different risk levels. The four factors used to determine the risk level are the impact on:

  • the rights of individuals or communities 
  • the health or well-being of individuals or communities 
  • the economic interests of individuals, entities, or communities 
  • the ongoing sustainability of an ecosystem
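The four-factor assessment above can be sketched as a small scoring function. The per-factor scale (1 to 4) and the rule that the most severe factor drives the overall level are assumptions made for illustration, not the Canadian government's actual methodology:

```python
def impact_level(scores):
    """Return an overall impact level (1-4) from per-factor assessments.

    `scores` maps each of the four factors to an assessed impact from
    1 (little impact) to 4 (very high impact). As a hypothetical
    aggregation rule, the overall level is set by the most severe factor.
    """
    required = {"rights", "health", "economic_interests", "ecosystem"}
    if set(scores) != required:
        raise ValueError("all four factors must be assessed")
    if any(not 1 <= s <= 4 for s in scores.values()):
        raise ValueError("each factor score must be between 1 and 4")
    return max(scores.values())

# Example: a system with a notable impact on economic interests
level = impact_level({
    "rights": 2,
    "health": 1,
    "economic_interests": 3,
    "ecosystem": 1,
})
```

In a risk-based framework, a higher overall level would trigger correspondingly stronger oversight requirements, such as peer review or mandatory human intervention points.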

And a study from the World Economic Forum, “Unlocking Public Sector AI: AI Procurement in a Box”, recommends a multistakeholder approach with collaboration among AI experts and developers, consultancies, prominent IT service providers, startups, universities, research institutes and citizen rights organizations. The guidelines aim to address specific business needs and lead to more efficient, responsible and sustainable outcomes for the public and private sectors. 

Closing remarks

There is real and proven potential for AI to transform our lives and work. Thus far, the scepticism surrounding algorithms, although well founded, does not outweigh the vast array of advantages AI applications bring to the table. The limitations of AI systems and the ethical concerns voiced are important considerations for channelling efforts towards future implementation success.

While there is agreement that AI needs human-centered regulatory frameworks, much of the practical detail remains open. In this new age of constant change and digital transformation, where technologies in general and AI in particular are ubiquitous, purpose matters. A new set of leadership skills is required to address business needs in both the public and private sectors. Forging a common understanding of the implications provides a massive opportunity to enable AI systems that work for all.