Policymakers worldwide want to promote the responsible development and use of artificial intelligence (AI). Lawmakers in some countries are developing new AI-specific laws or providing guidance to companies across the AI value chain. Many are also leveraging existing laws – in areas like privacy, cybersecurity, and intellectual property – that are important components of a system that promotes responsible AI. Together, these laws, policies, and guidance can promote the benefits of AI and safeguard against potential unintended impacts.
BSA’s AI Heatmap identifies common areas of focus that will enable and enhance AI. When BSA released its first AI Heatmap five years ago, only a handful of governments had issued AI guidance, and there were no AI-specific laws. There has been substantial activity since then, and the 2024 AI Heatmap also identifies progress on key policy issues that go beyond AI-specific rules, including intellectual property, cross-border data transfers, cybersecurity, privacy, and workforce development.
What we found:
- Virtually every leading economy has established rules for cybersecurity risk management, policies to support workforce development and readiness, and rules for protecting privacy, including in connection with AI.
- There is a need for continued focus on AI transparency, copyright policies, and cross-border data transfers to ensure they enable trustworthy AI.
- Among many economies, there is consensus around using risk management frameworks and impact assessments to mitigate AI risks.
Related
- TechPost: AI Policy Readiness: 2024 Outlook, July 22, 2024
Key
- Yes
- Partial
- No
- Not Addressed / Not Applicable
Issue | Description | US | EU | UK | Australia | India | Japan | Singapore | G7 Hiroshima | ASEAN AI Guide |
---|---|---|---|---|---|---|---|---|---|---|
Risk-Based Approach to AI | Laws, policies, and guidance are calibrated to risk, by: | |||||||||
- Risk Management Framework | Encouraging or requiring companies that develop or deploy AI systems for high-risk uses to implement risk management programs and take steps to mitigate identified risks | |||||||||
- Unlawful bias and discrimination | Prioritizing mitigating the risk of harms, such as unlawful bias and discrimination, particularly in the context of making consequential decisions | |||||||||
- Role-based responsibilities | Reflecting that multiple stakeholders, such as AI developers and AI deployers, have important roles to play in mitigating risks involved in the AI life-cycle and should have obligations appropriate to their respective roles | |||||||||
- Impact Assessments | Recognizing that AI developers and deployers should mitigate risks by conducting impact assessments appropriate to their role | |||||||||
Privacy | Effective consumer privacy laws establish guardrails for how companies can collect and use personal information, including in connection with AI | |||||||||
Creativity and Innovation | Copyright laws protect rightsholders from infringing content and enable responsible AI training | |||||||||
Cybersecurity | Laws and policies improve cybersecurity risk management | |||||||||
Transparency | Laws and policies encourage watermarks or other disclosure methods for AI-generated content, promote content authentication, and require disclosure when AI is interacting with consumers | |||||||||
Cross-Border Data Transfers | Laws and policies promote responsible cross-border data transfers | |||||||||
R&D/Workforce | Policies include investments in AI research & development and investments in workforce development | |||||||||
Government Use | Laws, policies, and guidance promote responsible public sector use of AI while creating guardrails around government agencies' use of AI, particularly for high-risk uses |