Introduction
Confronting Bias: BSA’s Framework to Build Trust in AI sets forth a first-of-its-kind framework for AI bias risk management that organizations can use to perform impact assessments to identify and mitigate risks of bias that may emerge throughout an AI system’s lifecycle.
What Is AI Bias?
AI bias refers to AI systems that systematically and unjustifiably yield less favorable, unfair, or harmful outcomes for members of specific demographic groups.
The Need for AI Risk Management
What Is Risk Management?
Risk management is a process for ensuring that systems are trustworthy by design: it establishes a methodology for identifying risks and mitigating their potential impact. Risk management processes are particularly important in contexts, such as cybersecurity and privacy, where the combination of quickly evolving technologies and highly dynamic threat landscapes renders traditional "compliance"-based approaches ineffective.
Rather than evaluating a product or service against a static set of prescriptive requirements that quickly become outdated, risk management integrates compliance responsibilities into the development pipeline to mitigate risks throughout a product or service's lifecycle. Effective risk management is anchored in a governance framework that promotes collaboration between an organization's development team and its compliance personnel at key points during the design, development, and deployment of a product.
Managing the Risk of Bias
Organizations that develop and use AI systems must take steps to prevent bias from manifesting in a manner that unjustifiably yields less favorable or harmful outcomes based on someone’s demographic characteristics. Effectively guarding against the harms that might arise from such bias requires a risk management approach because:
“BIAS” AND “FAIRNESS” ARE CONTEXTUAL
It is impossible to eliminate bias from AI systems entirely because there is no universally agreed-upon method for evaluating whether a system is operating in a manner that is "fair." In fact, as Professor Arvind Narayanan has famously explained, there are at least 21 different definitions of fairness (i.e., mathematical criteria) that can be used to evaluate whether a system is operating fairly, and it is impossible for an AI system to satisfy all of them simultaneously. Because no universal definition of fairness exists, developers must instead evaluate the nature of the system they are creating to determine which metric for evaluating bias is most appropriate for mitigating the risks it might pose.
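To make this concrete, the short Python sketch below (a hypothetical illustration with invented data and group labels, not an example from the Framework) computes three common fairness criteria on the same predictions: demographic parity (equal selection rates), equal opportunity (equal true positive rates), and predictive parity (equal precision). The predictions satisfy the first criterion while violating the other two, so a developer's conclusion about whether the system is "fair" depends on which metric the context calls for.

```python
# Hypothetical illustration: the same predictions judged by three
# different fairness criteria can pass one and fail the others.

def rates(y_true, y_pred):
    """Return (selection rate, true positive rate, positive predictive value)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    selection = (tp + fp) / len(y_pred)         # demographic parity compares this
    tpr = tp / (tp + fn) if (tp + fn) else 0.0  # equal opportunity compares this
    ppv = tp / (tp + fp) if (tp + fp) else 0.0  # predictive parity compares this
    return selection, tpr, ppv

# Invented outcomes and predictions for two demographic groups.
group_a = {"y_true": [1, 1, 1, 0, 0, 0], "y_pred": [1, 1, 0, 1, 0, 0]}
group_b = {"y_true": [1, 1, 0, 0, 0, 0], "y_pred": [1, 0, 1, 1, 0, 0]}

for name, g in [("A", group_a), ("B", group_b)]:
    sel, tpr, ppv = rates(g["y_true"], g["y_pred"])
    print(f"group {name}: selection={sel:.2f}  TPR={tpr:.2f}  PPV={ppv:.2f}")

# Output: both groups have selection=0.50 (demographic parity holds),
# but TPR is 0.67 vs. 0.50 and PPV is 0.67 vs. 0.33 (the other two fail).
```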
EFFORTS TO MITIGATE BIAS MAY INVOLVE TRADE-OFFS
Interventions to mitigate bias for one group can increase it for other groups and/or reduce a system’s overall accuracy. Risk management provides a mechanism for navigating such trade-offs in a context-appropriate manner.
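The hypothetical Python sketch below (invented scores and outcomes, with an intervention chosen purely for illustration) shows one way this can play out: lowering one group's decision threshold closes a selection-rate gap but reduces that group's accuracy.

```python
# Hypothetical illustration of a fairness/accuracy trade-off.

def evaluate(scores, labels, threshold):
    """Return (accuracy, selection rate) at the given decision threshold."""
    preds = [1 if s >= threshold else 0 for s in scores]
    accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
    selection = sum(preds) / len(preds)
    return accuracy, selection

# Invented model scores and true outcomes for two groups.
a_scores, a_labels = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2], [1, 1, 1, 0, 0, 0]
b_scores, b_labels = [0.8, 0.6, 0.4, 0.35, 0.3, 0.1], [1, 1, 0, 0, 0, 0]

# A single shared threshold is accurate but yields unequal selection rates.
for name, s, y in [("A", a_scores, a_labels), ("B", b_scores, b_labels)]:
    acc, sel = evaluate(s, y, threshold=0.5)
    print(f"shared threshold, group {name}: accuracy={acc:.2f} selection={sel:.2f}")

# Lowering group B's threshold equalizes selection rates (0.50 vs. 0.50)
# but drops group B's accuracy from 1.00 to 0.83.
acc, sel = evaluate(b_scores, b_labels, threshold=0.4)
print(f"adjusted threshold, group B: accuracy={acc:.2f} selection={sel:.2f}")
```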
BIAS CAN ARISE POST-DEPLOYMENT
Even if a system has been thoroughly evaluated prior to deployment, it may produce biased results if it is misused or deployed in a setting in which the demographic distribution differs from the composition of its training and testing data.
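A simple post-deployment safeguard is to monitor whether the demographic mix of live traffic has drifted from the training distribution. The Python sketch below is a minimal version of such a check (the group labels, counts, and drift tolerance are all invented for illustration; the Framework does not prescribe specific values).

```python
# Hypothetical post-deployment check: flag demographic drift between
# training data and live traffic that may warrant a fresh bias evaluation.

from collections import Counter

def group_shares(groups):
    """Map each group label to its share of the population."""
    counts = Counter(groups)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def max_share_drift(train_groups, live_groups):
    """Largest absolute change in any group's population share."""
    train, live = group_shares(train_groups), group_shares(live_groups)
    return max(abs(live.get(g, 0.0) - train.get(g, 0.0))
               for g in set(train) | set(live))

# Invented group labels for the training set and a window of live traffic.
train = ["A"] * 700 + ["B"] * 300
live = ["A"] * 450 + ["B"] * 550

DRIFT_TOLERANCE = 0.10  # assumed threshold; tune to the deployment context
drift = max_share_drift(train, live)
if drift > DRIFT_TOLERANCE:
    print(f"drift={drift:.2f}: demographic mix has shifted; re-run bias evaluation")
```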
AI Bias Risk Management Framework
We outline an AI Bias Risk Management Framework intended to aid organizations in performing impact assessments on systems that pose potential risks of AI bias. In addition to setting forth processes for identifying the sources of bias that can arise throughout an AI system's lifecycle, the Framework identifies best practices that can be used to mitigate those risks.
The Framework is an assurance-based accountability mechanism that can be used by AI Developer and AI Deployer organizations for purposes of:
- Internal Process Guidance. AI Developers and AI Deployers can use the Framework as a tool for organizing and establishing roles, responsibilities, and expectations for internal processes.
- Training, Awareness, and Education. AI Developers and AI Deployers can use the Framework to build internal training and education programs for employees involved in developing and using AI systems. In addition, the Framework may provide a useful tool for educating executives about the organization’s approach to managing AI bias risks.
- Assurance and Accountability. AI Developers and AI Deployers can use the Framework as a basis for communicating and coordinating about their respective roles and responsibilities for managing AI risks throughout a system’s lifecycle.
- Vendor Relations. AI Deployers may choose to use the Framework to guide purchasing decisions and/or to develop vendor contracts that ensure AI risks have been adequately accounted for.
- Trust and Confidence. AI Developers may wish to communicate information about a product’s features and its approach to mitigating AI bias risks to a public audience. In that sense, the Framework can help organizations communicate to the public about their commitment to building ethical AI systems.
- Incident Response. Following an unexpected incident, the processes and documentation set forth in the Framework provide an audit trail that can help AI Developers and AI Deployers identify the potential source of system underperformance or failure.
News & Events
- Blog: Introducing BSA’s AI Bias Risk Management Framework, June 8, 2021
- Press Release: BSA Releases Framework to Confront Bias in Artificial Intelligence and Calls for Legislation, June 8, 2021