Comparing International Frameworks for the Development of Responsible AI

As AI becomes increasingly ubiquitous, policymakers around the world are grappling with its potential implications and exploring how to promote the benefits of the technology while safeguarding against unintended impacts. Because the AI landscape comprises a diverse set of underlying technologies, use cases, and stakeholders – each presenting its own opportunities and challenges – policymakers have tended to focus on non-regulatory structures that set out principles-based guidance applicable across this diverse ecosystem. One common approach is to develop a framework of principles to support the development of responsible AI.

In recent months, the European Commission, Japan, Singapore, Australia, and the Organisation for Economic Co-operation and Development have issued such frameworks. Despite surface-level differences, these frameworks share a similar conceptual foundation and aim to identify key principles that AI systems should adhere to. While each framework is uniquely structured and uses its own terminology, a meta-analysis reveals ten guiding principles against which each can be meaningfully evaluated. The AI Principles Framework Comparison Chart is a tool for measuring the consistency of existing (and future) AI ethics frameworks against these ten guiding principles.

Click on the icons in each column to read excerpts from the framework that address the principle.

Key

  • Satisfactory
  • Partial
  • Inconsistent with international best practices
  • Unaddressed
Values | Definitions | EU | Australia | Japan | Singapore | OECD
Human-Centered

AI systems should be designed to be inclusive, accommodating the needs of the individuals who interact with them, and used in a manner that is aligned with the values of the community in which they are deployed.

EU High Level Experts Group

Human-Centered

AI systems should support human autonomy and decision-making, as prescribed by the principle of respect for human autonomy. This requires that AI systems should both act as enablers to a democratic, flourishing and equitable society by supporting the user’s agency and foster fundamental rights, and allow for human oversight. (Pg. 15)

Australia

Human-Centered

“Although this AI Ethics Discussion Paper was developed in keeping with this concept, there are a few foundational assumptions that lie at the heart of the document—that we do have power to alter the outcomes we get from technology, and that technology should serve the best interests of human beings and be aligned with human values.”

Japan

Human-Centered

“Utilization of AI should not infringe upon fundamental human rights that are guaranteed by the Constitution and international norms. AI should be developed and utilized and implemented in society to expand the abilities of people and to pursue the diverse concepts of happiness of diverse people. In the AI utilized society, it is desirable that we implement appropriate mechanisms of literacy education and promotion of proper uses, so as not to over-depend on AI or not to ill-manipulate human decisions by exploiting AI.”

Singapore

Human-Centered

“AI solutions should be human-centric. As AI is used to amplify human capabilities, the protection of the interests of human beings, including their well-being and safety, should be the primary considerations in the design, development and deployment of AI.” (Pg. 3)

“Organisations operating in multiple countries should consider the differences in societal norms and values, where possible.” (Pg. 7)

OECD

Human-Centered

“1.2. Human-centred values and fairness: AI actors should respect human rights and democratic values, including freedom, dignity, autonomy, privacy, non-discrimination, diversity, fairness and social justice, and core labour rights throughout the AI system lifecycle.

To this effect, they should implement safeguards and consider mechanisms, such as capacity for human final determination, that are appropriate to the context and benefit from multidisciplinary and multi-stakeholder collaboration, and assess the effectiveness of these mechanisms on an ongoing basis.”

Source: OECD
Mitigate Risks and Promote Benefits

AI systems should be designed and deployed for the benefit of end-users and avoid unintended negative impacts on third parties.

EU High Level Experts Group

Mitigate Risks and Promote Benefits

“We also want producers of AI systems to get a competitive advantage by embedding Trustworthy AI in their products and services. This entails seeking to maximise the benefits of AI systems while at the same time preventing and minimising their risks.” (Pg. 4)

Australia

Mitigate Risks and Promote Benefits

Principle – “Do no harm. Civilian AI systems must not be designed to harm or deceive people and should be implemented in ways that minimize any negative outcomes.”

“This makes it all the more important to track and consider the implications of new technologies at the time they are emerging. If we accept that we have the ability to determine the outcomes we get from AI, then there is an ethical imperative to try to find the best possible outcomes and avoid the worst.” (Pg. 15)

Japan

Mitigate Risks and Promote Benefits

“To use AI effectively to contribute to the society and to avoid or reduce the negative effects beforehand, we should promote, along with the research and development of the technologies related to AI, the transformation into the “AI-Ready Society” in which we can effectively and safely utilize AI, by redesigning society in all aspects including Japan’s social system, industry structure, innovation system, governance, and its citizens’ character.” (Pg. 1)

Singapore

Mitigate Risks and Promote Benefits

AI can be used by organisations to provide new goods and services, boost productivity, enhance competitiveness, ultimately leading to economic growth and better quality of life. As with any new technologies, however, AI also introduces new ethical, legal and governance challenges. These include risks of unintended discrimination potentially leading to unfair outcomes, as well as issues relating to consumers’ knowledge about how AI is involved in making significant or sensitive decisions about them.

OECD

Mitigate Risks and Promote Benefits

“RECOGNISING that trust is a key enabler of digital transformation; that, although the nature of future AI applications and their implications may be hard to foresee, the trustworthiness of AI systems is a key factor for the diffusion and adoption of AI; and that a well-informed whole-of-society public debate is necessary for capturing the beneficial potential of the technology, while limiting the risks associated with it”

Fairness

Governance and technical safeguards are important to identify and mitigate risks of unfair biases, particularly in circumstances where an AI system could have a consequential impact on people.

EU High Level Experts Group

Fairness

“Identifiable and discriminatory bias should be removed in the collection phase where possible. The way in which AI systems are developed (e.g. algorithms’ programming) may also suffer from unfair bias. This could be counteracted by putting in place oversight processes to analyse and address the system’s purpose, constraints, requirements and decisions in a clear and transparent manner. Moreover, hiring from diverse backgrounds, cultures and disciplines can ensure diversity of opinions and should be encouraged.”

Australia

Fairness

Principle – “Fairness: The development or use of the AI system must not result in unfair discrimination against individuals, communities or groups. This requires particular attention to ensure the ‘training data’ is free from bias or characteristics which may cause the algorithm to behave unfairly.”

“Fairness” is a difficult concept to pin down and AI designers essentially have to reduce it to statistics. Researchers have come up with many dozens of mathematical definitions to define what fairness means in an algorithm and many of them perform extremely well when measured from one angle, but from a different angle can produce very different results.

Japan

Fairness

Under the AI design concept, all people must be treated fairly without unjustified discrimination on the grounds of diverse backgrounds such as race, sex, nationality, age, political beliefs, religion, etc.

Singapore

Fairness

As part of risk management and internal controls, companies should “[use] reasonable efforts to ensure that datasets used for AI model training are adequate for the intended purpose, and to assess and manage the risks of inaccuracy or bias, as well as reviewing exceptions identified during model training. Virtually no dataset is completely unbiased. Organisations should strive to understand the ways in which datasets may be biased and address this in their safety measures and deployment strategies.” (Pg. 6)

OECD

Fairness

2. Human-centred values and fairness: AI actors should respect human rights and democratic values, including freedom, dignity, autonomy, privacy, non-discrimination, diversity, fairness and social justice, and core labour rights throughout the AI system lifecycle.

To this effect, they should implement safeguards and consider mechanisms, such as capacity for human final determination, that are appropriate to the context and benefit from multidisciplinary and multi-stakeholder collaboration, and assess the effectiveness of these mechanisms on an ongoing basis.

Explainability

AI systems should be understandable; context will dictate the appropriate mechanisms for providing transparency about a particular system's decision-making processes.

EU High Level Experts Group

Explainability

Explainability concerns the ability to explain both the technical processes of an AI system and the related human decisions (e.g. application areas of a system). Technical explainability requires that the decisions made by an AI system can be understood and traced by human beings. Moreover, trade-offs might have to be made between enhancing a system’s explainability (which may reduce its accuracy) or increasing its accuracy (at the cost of explainability). Whenever an AI system has a significant impact on people’s lives, it should be possible to demand a suitable explanation of the AI system’s decision-making process. Such explanation should be timely and adapted to the expertise of the stakeholder concerned (e.g. layperson, regulator or researcher).

Australia

Explainability

Principle – “Transparency and explainability. People must be informed when an algorithm is being used that impacts them and they should be provided with information about what information the algorithm uses to make decisions.”

“Transparency is key, but not a panacea. Transparency and AI is a complex issue. The ultimate goal of transparency measures are to achieve accountability, but the inner workings of some AI technologies defy easy explanation. Even in these cases, it is still possible to keep the developers and users of algorithms accountable [26]. An analogy can be drawn with people: an explanation of brain chemistry when making a decision doesn’t necessarily help you understand how that decision was made—an explanation of that person’s priorities is much more helpful. There are also complex issues relating to commercial secrecy as well as the fact that making the inner workings of AI open to the public would leave them susceptible to being gamed [26].”

Japan

Explainability

Appropriate explanations should be provided, such as the fact that AI is being used, the method of obtaining and using the data used in AI, and the mechanism to ensure the appropriateness of the operation results of AI, according to the situation in which AI is used.

Singapore

Explainability

“Organisations using AI in decision-making should ensure that the decision-making process is explainable, transparent and fair. Although perfect explainability, transparency and fairness are impossible to attain, organisations should strive to ensure that their use or application of AI is undertaken in a manner that reflects the objectives of these principles. This helps build trust and confidence in AI.” (Pg. 3)

OECD

Explainability

“1.3. Transparency and explainability: Stakeholders should promote a culture of transparency and responsible disclosure regarding AI systems.

More specifically, AI actors should provide meaningful information, appropriate to the context and the state of art, to make stakeholders aware of their interactions with AI systems, including in the workplace, to enable those adversely affected by an AI system to understand and challenge the outcome, and to foster a general understanding of AI systems.”

Privacy and Security

AI systems should be secure and enable users to make informed choices regarding use of personal information.

EU High Level Experts Group

Privacy and Security

“AI systems must guarantee privacy and data protection throughout a system’s entire lifecycle. This includes the information initially provided by the user, as well as the information generated about the user over the course of their interaction with the system (e.g. outputs that the AI system generated for specific users or how users responded to particular recommendations).”

Australia

Privacy and Security

Principle – “Privacy protection: Any system, including AI systems must comply with all relevant international, Australian, Local, State/Territory and Federal government obligations, regulations and laws.”

[NOTE – No discussion of security]

Japan

Privacy and Security

“In society premised on AI, it is possible to estimate each person’s political position, economic situation, hobbies / preferences, etc. with high accuracy from data on the data subject’s personal behavior. This means, when utilizing AI, that more careful treatment of personal data is necessary than simply utilizing personal information.”

“Positive utilization of AI means that many social systems will be automated, and the safety of the systems will be improved. On the other hand, within the scope of today’s technologies, it is impossible for AI to respond appropriately to rare events or deliberate attacks. Therefore, there is a new security risk for the use of AI. Society should always be aware of the balance of benefits and risks, and should work to improve social safety and sustainability as a whole.”

Singapore

Privacy and Security

OECD

Privacy and Security

“AI actors should address risks related to AI systems, including digital security risk, by applying a systematic risk management cycle to each phase of the AI system lifecycle on a continuous basis.”

“AI actors should respect human rights and democratic values, including freedom, dignity, autonomy, privacy, non-discrimination, diversity, fairness and social justice, and core labour rights throughout the AI system lifecycle.”

Safety and Reliability

AI systems should be designed to mitigate foreseeable safety risks and adequately tested to ensure that they operate as intended.

EU High Level Experts Group

Safety and Reliability

“Even if an ethical purpose is ensured, individuals and society must also be confident that AI systems will not cause any unintentional harm. Such systems should perform in a safe, secure and reliable manner, and safeguards should be foreseen to prevent any unintended adverse impacts. It is therefore important to ensure that AI systems are robust. This is needed both from a technical perspective (ensuring the system’s technical robustness as appropriate in a given context, such as the application domain or life cycle phase), and from a social perspective (in due consideration of the context and environment in which the system operates).”

Australia

Safety and Reliability

“The assessment of AI is largely an exercise in accounting for and addressing risks posed by the use of the technology [201]. As such, consideration should be given to whether certain uses of AI require additional assessment, these may be considered to be threshold assessments. FATML have developed a Social Impact Statement that details requirements of developers of AI to consider who will be impacted by the algorithm and who is responsible for that impact [206]. Similar assessments may be well placed to identify high risk applications and uses of AI that require additional monitoring or review.”

Promotion of regular assessment of AI systems and how they are being used will be a key tool to ensure that all of the core ethical principles are being addressed. Although initial assessments before the deployment of AI are critical, they are unlikely to provide the scope needed to assess the ongoing impact of the AI in the changing world.

Japan

Safety and Reliability

To implement AI efficiently and securely in society, methods for confirming the quality and reliability of AI and for efficient collection and maintenance of data utilized in AI must be promoted. Additionally, the establishment of AI engineering should also be promoted. This engineering includes methods for the development, testing and operation of AI.

Singapore

Safety and Reliability

“Wherever possible, testing should reflect the dynamism of the planned production environment. To ensure safety, testing may need to assess the degree to which an AI solution generalises well and fails gracefully. For example, a warehouse robot tasked with avoiding obstacles to complete a task (e.g. picking packages) should be tested with different types of obstacles and realistically varied internal environments (e.g. workers wearing a variety of different coloured shirts). Otherwise, models risk learning regularities in the environment which do not reflect actual conditions (e.g. assuming that all humans that it must avoid will be wearing white lab coats). Once AI models are deployed in the real-world environment, active monitoring, review and tuning are advisable.”

OECD

Safety and Reliability

4. Robustness and safety: AI systems should be robust and safe throughout their entire lifecycle so that they can both withstand or overcome adverse conditions and avoid posing unreasonable safety risk in conditions of normal or foreseeable use or misuse.

To this end, AI actors should ensure traceability of the datasets, processes and decisions made during the AI system lifecycle to enable understanding of its outcomes and responses to inquiry, where appropriate.

Accountability

A lifecycle approach to AI accountability, including appropriate governance structures for the design phase and redress mechanisms following deployment, is important.

EU High Level Experts Group

Accountability

“The requirement of accountability complements the above requirements, and is closely linked to the principle of fairness. It necessitates that mechanisms be put in place to ensure responsibility and accountability for AI systems and their outcomes, both before and after their development, deployment and use.”

Australia

Accountability

Principle – “Contestability: When an algorithm significantly impacts a person there must be an efficient process to allow that person to challenge the use or output of the algorithm.”

Principle – “Accountability. People and organisations responsible for the creation and implementation of AI algorithms should be identifiable and accountable for the impacts of that algorithm.”

Japan

Accountability

In order for people to understand and judge AI proposals, there should be appropriate opportunities for open dialogue on the use, adoption and operation of AI, as needed.

Singapore

Accountability

“Organisations should have internal governance structures and measures to ensure robust oversight of the organisation’s use of AI. The organisation’s existing internal governance structures can be adapted, and/or new structures can be implemented if necessary. For example, risks associated with the use of AI can be managed within the enterprise risk management structure; ethical considerations can be introduced as corporate values and managed through ethics review boards or similar structures. Organisations should also determine the appropriate features in their internal governance structures. For example, when relying completely on a centralised governance mechanism is not optimal, a de-centralised one could be considered to incorporate ethical considerations into day-to-day decision-making at operational level, if necessary. The sponsorship, support and participation of the organisation’s top management and its Board in the organisation’s AI governance are crucial.” (Pg. 5)

OECD

Accountability

5. Accountability: AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and the state of art.

“More specifically, AI actors should provide meaningful information, appropriate to the context and the state of art, to make stakeholders aware of their interactions with AI systems, including in the workplace, to enable those adversely affected by an AI system to understand and challenge the outcome, and to foster a general understanding of AI systems.”

Risk-based and Proportionate

Risks are context-specific; stakeholders are encouraged to deploy risk management techniques that are tailored to specific use cases.

EU High Level Experts Group

Risk-based and Proportionate

“The level of safety measures required depends on the magnitude of the risk posed by an AI system, which in turn depends on the system’s capabilities. Where it can be foreseen that the development process or the system itself will pose particularly high risks, it is crucial for safety measures to be developed and tested proactively.” (Pg. 17)

Australia

Risk-based and Proportionate

“Implementing ethical AI. AI is a broad set of technologies with a range of legal and ethical implications. There is no one-size-fits all solution to these emerging issues. There are, however, tools which can be used to assess risk and ensure compliance and oversight. The most appropriate tools can be selected for each individual circumstance.” (Pg. 8)

Japan

Risk-based and Proportionate

Singapore

Risk-based and Proportionate

“The extent to which organisations adopt the recommendations in this Model Framework depends on several factors, including the nature and complexity of the AI used by the organisations; the extent to which AI is employed in the organisations’ decision-making; and the severity and probability of the impact of the autonomous decision on the individuals. To elaborate: AI may be used to augment a human decision-maker or to autonomously make a decision. The impact on an individual of an autonomous decision in, for example, medical diagnosis will be greater than in processing a bank loan. The commercial risks of AI deployment would therefore be proportional to the impact on individuals.”

OECD

Risk-based and Proportionate

“To this effect, they should implement safeguards and consider mechanisms, such as capacity for human final determination, that are appropriate to the context and benefit from multidisciplinary and multi-stakeholder collaboration, and assess the effectiveness of these mechanisms on an ongoing basis.”

Multiple Stakeholders

Multiple stakeholders have important roles to play in mitigating risks involved in the development, deployment, and use of AI.

EU High Level Experts Group

Multiple Stakeholders

“Different groups of stakeholders have different roles to play in ensuring that the requirements are met:

  1. Developers should implement and apply the requirements to design and development processes;
  2. Deployers should ensure that the systems they use and the products and services they offer meet the requirements;
  3. End-users and the broader society should be informed about these requirements and able to request that they are upheld.”

Australia

Multiple Stakeholders

“This chapter provides guidance for individuals or teams responsible for any aspect of the design, development and deployment of any AI-based system that interfaces with humans.” (Pg. 58)

Japan

Multiple Stakeholders

“We recognize that developers and user enterprises of AI should establish and comply with the development and utilization principles of AI based on the fundamental philosophies and “Social Principles of AI” mentioned above.”

Singapore

Multiple Stakeholders

“Prior to deploying AI solutions, organisations should decide on their commercial objectives of using AI, e.g. ensuring consistency in decision-making, improving operational efficiency and reducing costs, or introducing new product features to increase consumer choice.” (Pg. 7)

OECD

Multiple Stakeholders

“AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and the state of art.”

Promotes Innovation

Government is a key enabler of AI innovation and should promote a policy environment conducive to cross-border data flows, value-added data services, access to non-sensitive government data, R&D, and workforce development initiatives.

EU High Level Experts Group

Promotes Innovation

Australia

Promotes Innovation

“Quality research into addressing ethical AI by design and implementation is key to ensuring that Australia stays ahead of the curve. Without methods of accessible transfer of knowledge from theory to practice the impact is lost. Collaboration is increasingly important between researchers and the tech industry to ensure that AI is developed and used ethically and should be prioritised.” (Pg. 61)

Japan

Promotes Innovation

To ensure the sound development of AI technology, it is necessary to establish an accessible platform in which data from all fields can be mutually utilized across borders with no monopolies, while ensuring privacy and security. In addition, research and development environments should be created in which computer resources and high-speed networks are shared and utilized, to promote international collaboration and accelerate AI research.

Singapore

Promotes Innovation

OECD

Promotes Innovation

1. Investing in AI research and development: Governments should consider and encourage long-term investments in basic research and development, including inter-disciplinary efforts, to spur innovation in trustworthy AI that focus on challenging technical issues and on AI-related social implications and policy issues. They should also consider and encourage investment in open data sets that are representative and preserve privacy to support an un-biased environment for AI research and development.

2. Fostering an enabling digital ecosystem for AI: Governments should foster the development of, and access to, an enabling ecosystem for trustworthy AI, including digital technologies and infrastructure and mechanisms for sharing AI knowledge.

3. Shaping an apt policy environment for AI innovation: Governments should design a policy environment that supports, within a clear framework, an agile transition from the research and development stage to the deployment stage for trustworthy AI systems. To this effect, they should consider using experimentation, including regulatory sandboxes, innovation centres and policy labs, to provide a controlled environment in which AI systems can be tested. Governments should review and adapt, as appropriate, their regulatory framework and assessment mechanisms as they apply to AI systems to encourage innovation and competition.

4. Building human capacity and preparing for job transformation: Governments should work closely with social partners, industry, academia, and civil society to prepare for the transformation of the world of work and empower people to use, interact and work with AI by equipping them with the necessary competencies and skills. Governments should take steps to ensure that AI deployment in society goes hand in hand with a fair transition for workers and new opportunities in the labour markets. They should do so with a view to fostering entrepreneurship and the creation of quality jobs, to making human work safer, more productive and more rewarding, and to leaving no one behind.
