The International Association of Privacy Professionals (IAPP) recently held its inaugural AI Governance Global Conference in Boston, bringing together more than 1,200 risk management and privacy leaders focused on AI governance and responsible AI practices.
The IAPP AI Governance Conference offered a wealth of insights and takeaways for businesses looking to establish or enhance their AI governance programs. For those who couldn’t make it to the event, we’ve compiled a summary below of the key insights any business can act on.
AI Governance Key Elements
A major theme across the IAPP AI Governance speakers and panels was the key elements of a sound AI governance program. These elements centered on:
1. Cross-functional Stakeholder Committees:
Function: These committees play a crucial role in setting the company's risk tolerance levels, reviewing various AI use cases, and developing comprehensive protocols and policies.
Composition: Typically includes members from diverse departments such as IT, legal, compliance, and business units, ensuring a holistic view of AI governance.
Responsibility: Their main task is to ensure that AI implementations align with the company's ethical standards, legal requirements, and business objectives.
2. Guiding Principles and Policies:
Purpose: These serve as the backbone of an organization's AI strategy, offering a clear framework for the acquisition, development, integration, and application of AI technologies, both internally and externally.
Content: Typically includes ethical guidelines, compliance standards, and operational protocols that govern AI usage.
Function: They help in maintaining consistency in AI practices across the organization and in ensuring responsible and ethical AI use.
3. AI Impact Assessments:
Objective: These assessments are crucial for documenting and understanding the potential risks, mitigation strategies, and expected outcomes associated with AI implementations.
Process: It involves evaluating the impact of AI on various stakeholders, including customers, employees, and business operations, and assessing the potential for bias, privacy breaches, and other risks.
Outcome: The goal is to ensure that AI applications are beneficial, fair, and align with the organization’s values and objectives.
4. Fairness Testing Tools:
Tools: These can be internal processes or third-party solutions designed to evaluate and ensure the fairness of AI applications.
Application: They are used to detect and mitigate biases in AI algorithms, ensuring that the AI solutions are equitable and do not perpetuate existing inequalities.
Significance: Fairness testing is critical in maintaining public trust and adherence to regulatory standards regarding AI solutions.
5. Stakeholder Training:
Target Group: This includes individuals involved in AI development, procurement, and implementation within the organization.
Content: Training often covers topics like ethical AI use, understanding AI technologies, recognizing potential biases in AI, and compliance with relevant laws and regulations.
Objective: The aim is to equip stakeholders with the necessary knowledge and skills to make informed decisions about AI and to ensure responsible usage.
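Of the elements above, fairness testing (element 4) is the most directly code-adjacent. As a concrete illustration, one common fairness metric is the demographic parity difference: the gap in positive-outcome rates between groups. The sketch below is a hypothetical, minimal implementation; the function name and sample data are ours, not drawn from any specific tool or conference speaker.

```python
def demographic_parity_difference(predictions, groups):
    """Difference in positive-outcome rates between the best- and
    worst-treated groups (0.0 means perfectly equal rates).

    predictions: list of 0/1 model outputs
    groups: list of group labels, one per prediction
    """
    rates = {}
    for pred, group in zip(predictions, groups):
        totals = rates.setdefault(group, [0, 0])  # [positives, count]
        totals[0] += pred
        totals[1] += 1
    positive_rates = [pos / count for pos, count in rates.values()]
    return max(positive_rates) - min(positive_rates)

# Illustrative data: group "a" gets a positive outcome 3/4 of the time,
# group "b" only 1/4 of the time, so the disparity is 0.5.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

In practice a governance program would run checks like this (via internal processes or third-party fairness tooling) before deployment and on a recurring schedule, with thresholds set by the cross-functional committee.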
Incorporating these elements into an AI governance framework helps organizations manage the complexities of AI applications, ensuring they are used ethically, responsibly, and in compliance with regulatory requirements. These components, when integrated into an AI governance platform, can offer a centralized, streamlined approach to managing AI governance across an organization.
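To show how an AI impact assessment (element 3) might be documented in a centralized platform, here is a hypothetical sketch of an assessment record. The field names and example use case are illustrative assumptions on our part, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class AIImpactAssessment:
    """Illustrative record of one AI use case's documented risks."""
    use_case: str
    stakeholders: list        # e.g. customers, employees
    identified_risks: list    # e.g. bias, privacy breach
    mitigations: list         # one mitigation per identified risk, in order
    expected_outcome: str
    approved: bool = False

    def outstanding_risks(self):
        """Risks that do not yet have a documented mitigation."""
        return self.identified_risks[len(self.mitigations):]

assessment = AIImpactAssessment(
    use_case="Resume screening assistant",
    stakeholders=["job applicants", "HR team"],
    identified_risks=["demographic bias", "data retention"],
    mitigations=["fairness testing before deployment"],
    expected_outcome="Faster, auditable first-pass screening",
)
print(assessment.outstanding_risks())  # ['data retention']
```

Even a simple structure like this makes gaps visible: an assessment with outstanding risks should not reach `approved=True` until the committee signs off on mitigations.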
The Role of Privacy Teams
Another IAPP AI Governance takeaway concerned privacy teams and the cross-functional nature of their roles within an organization. Privacy teams are increasingly important in AI governance because of their expertise in risk assessment and applying controls. They often collaborate with legal and business stakeholders to calibrate risks and opportunities, ensuring business buy-in for AI governance programs. Connecting your privacy team's efforts with your AI efforts is a critical link that ensures proper coordination as you move forward on both fronts.
Challenges and Solutions
Resource Challenges: Many companies struggle with resources, especially when AI governance is viewed solely as a compliance issue. Success often comes from collaborating with business stakeholders to understand AI’s opportunities. As a benchmark, NIST suggests that roughly one-third of an AI budget should be devoted to AI governance, a useful guide for many companies as they allocate resources.
Leveraging Existing Frameworks: Companies often start AI governance by adapting existing processes and policies to account for AI-specific risks and opportunities. This is a good start, but AI brings unique risks that should be managed against standards built around AI risk factors. Standards like the NIST AI RMF and the OECD AI Principles are a good starting point for companies looking for guidance in this area.
Many of the speakers at the IAPP AI Governance Conference raised the issue of upcoming regulation and the need to think ahead of the actual passage of legislation. Global regulators, including privacy and data protection authorities, are focusing on AI. They are prioritizing holding companies accountable for AI uses whose risks have not been appropriately assessed and mitigated, but all companies should be aware of the current and proposed regulations on the near horizon. Regulatory frameworks like the EU AI Act, the White House AI Executive Order and Blueprint for an AI Bill of Rights, as well as state-level initiatives, should be closely monitored and understood.
1. Benchmarking: Companies are encouraged to benchmark with peers and collaborate on AI governance approaches.
2. Consulting Resources: Many organizations share resources on AI governance, including multinational principles and guidance from data protection authorities and civil society organizations. The IAPP works with many of the providers in this space and many were well represented at the IAPP AI Governance Conference.
The IAPP AI Governance Global Conference 2023 offered invaluable insights for companies striving to establish or enhance their AI governance frameworks. The key takeaways from the diverse set of speakers and panels are a great north star for companies looking to start or enhance their AI governance efforts today. AI implementation can be complex, but ensuring ethical, responsible, and compliant practices doesn’t have to be. By laying the foundation for strong AI governance, your organization will lower its risks and position itself well for the compliance standards of the years ahead.