In a remarkable move, the Group of Seven (G7) nations, representing some of the world's most influential economies, have called for the adoption of international technical standards for trustworthy artificial intelligence (AI). Meeting in Hiroshima, Japan, the leaders acknowledged the diverse paths to achieving "the common vision and goal of trustworthy AI," emphasizing the need for governance of the digital economy to align with their shared democratic values.
This bold announcement comes on the heels of the European Union edging closer to legislating AI technology, setting the stage for potentially the world's first comprehensive AI law. "We want AI systems to be accurate, reliable, safe and non-discriminatory, regardless of their origin," said European Commission President Ursula von der Leyen.
Within the AI sphere, the G7 leaders specifically highlighted generative AI, the subset thrust into the spotlight by ChatGPT, citing an urgent need to take stock of the opportunities and challenges this technology presents. They further agreed to establish a ministerial forum known as the "Hiroshima AI process" to address issues around generative AI tools, such as intellectual property rights and disinformation.
In response to this call for AI regulation, organizations need to ensure that their AI governance meets these potential regulatory requirements. Here are some key steps to achieve that:
1. Understand the Regulatory Landscape: Keep an eye on regulatory developments globally and in your industry. This includes understanding existing laws and staying informed about potential new regulations. (Check out this blog for the latest AI laws, regulations and proposals.)
2. Implement AI Governance Framework: Establish a robust AI governance framework that aligns with international technical standards. This framework should set clear guidelines on the ethical use of AI, data privacy, transparency, and accountability. (Explore these AI Governance Policy resources to help you get started.)
3. Invest in AI Ethics and Compliance: Hire or train an AI ethics and compliance team to monitor AI applications' compliance with internal policies and external regulations.
4. Prioritize Transparency and Explainability: As European Commission President von der Leyen put it, AI systems should be accurate, reliable, safe, and non-discriminatory. Meeting that bar requires an emphasis on transparency and explainability in your AI systems.
5. Commit to Continuous Learning and Adaptation: AI is a rapidly evolving field, so provide employees with regular training and updates on AI ethics, regulation, and best practices.
6. Engage in Industry Dialogue: Join industry forums and engage in dialogue about AI regulation. This helps you gain insights, share best practices, and influence policy development.
7. Assess and Mitigate Risk: Adopt a risk-based approach to AI governance. Regularly assess and mitigate the risks associated with the use of AI in your organization.
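The risk-based approach in step 7 can be sketched as a simple risk register. This is a minimal illustration, not a prescribed method: the risk entries, the likelihood-times-impact scoring, and the tier thresholds are all hypothetical assumptions, loosely echoing the risk-tier idea in regimes like the proposed EU AI Act.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One entry in a hypothetical AI risk register."""
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain) -- illustrative scale
    impact: int      # 1 (negligible) to 5 (severe)   -- illustrative scale
    mitigation: str

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring, a common risk-matrix convention.
        return self.likelihood * self.impact

    @property
    def tier(self) -> str:
        # Illustrative thresholds only; real tiers would come from your
        # governance framework or the applicable regulation.
        if self.score >= 15:
            return "high"
        if self.score >= 8:
            return "limited"
        return "minimal"

# Example register with hypothetical generative-AI risks from the article's themes.
register = [
    AIRisk("Training-data bias", 4, 4, "Bias audits before each release"),
    AIRisk("Generated disinformation", 3, 5, "Output labeling and human review"),
    AIRisk("IP infringement in outputs", 2, 3, "Provenance checks on training data"),
]

# Surface the highest-scoring risks first for mitigation planning.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.tier:>7}: {risk.name} (score {risk.score}) -> {risk.mitigation}")
```

Reassessing the register on a regular cadence, rather than once, is what makes the approach "risk-based" in practice: scores change as models, data, and regulations change.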
The unanimous call for AI regulation from the G7 marks a significant shift in global AI policy, reflecting growing recognition of AI's transformative power and potential risks. Organizations need to respond proactively to ensure their AI governance is prepared for the impending regulatory requirements. As this regulatory wave looms, a proactive and comprehensive approach to AI governance and compliance can help organizations ride the tide confidently and responsibly.