SIFMA Comments to NTIA on AI Accountability Measures and Policies
Organizations around the world are drafting AI policies, offering a virtual buffet of ideas to include in any individual effort. This blog highlights the thoughtful comments from SIFMA (the Securities Industry and Financial Markets Association) to NTIA (the National Telecommunications and Information Administration) on its proposed AI Accountability Measures and Policies. The focus here is neither SIFMA nor NTIA, but rather the broadly applicable ideas and recommendations that a responsible trade organization has offered to policymakers.
In its response, SIFMA emphasizes a few key points:
Maintaining public trust in AI applications is essential to realizing the enormous benefits that AI has to offer
The importance of adopting a risk-based approach in AI governance
The need for a flexible and effective AI governance framework
I. A Risk-Based AI Governance Framework
SIFMA advocates for a risk-based approach to AI governance that is not overly prescriptive. At a high level, the framework should differentiate between AI applications based on the likelihood and severity of potential harm, and compliance obligations should vary with the risk level of the AI application.
Treating all AI applications the same would result in onerous risk assessments and audits, which would be impractical and inefficient. Instead, the focus should be on identifying and mitigating risks associated with high-risk AI applications, while allowing companies to innovate and manage low-risk applications efficiently.
Regulators should primarily guide companies to focus their efforts on identifying their highest-risk AI uses and on mitigating the risks they present. Such risks include legal and regulatory risk, as well as operational, reputational, contractual, discrimination, cybersecurity, privacy, consumer-harm, transparency, and confidentiality risks. This flexible framework would provide accountability mechanisms to reduce risk, while also giving companies space to innovate in collaboration with applicable sectoral regulators.
This type of approach provides accountability by balancing upside potential with downside risks. As an added benefit, a flexible, risk-based approach reduces the need to define "AI" precisely, which is proving more difficult than one might anticipate.
II. Risk-Based AI Governance Components
SIFMA outlines foundational components that should be included in a risk-based AI governance framework:
Scoping: Companies should determine which AI applications are within the scope of the governance framework as part of their overall governance programs.
Inventory: Maintaining an inventory of AI applications with sufficient detail is essential for risk rating and assessment purposes. (AI Guardian’s AI Registry helps easily manage AI projects)
Risk Rating: Companies should have a process to identify and assess their highest-risk AI applications. This process should consider legal and regulatory risks, as well as operational, reputational, contractual, discrimination, cybersecurity, privacy, consumer-harm, transparency, and confidentiality risks. (AI Guardian’s AI Risk Center scores AI project risk, flagging areas of concern)
Responsible Person(s) or Committee(s): Designating individuals or committees responsible for identifying, assessing, and mitigating risks associated with high-risk AI applications ensures accountability within the organization. (Explore sample roles & responsibilities for an AI Governance Board, AI Officer and AI Ethics Committee)
Training: Companies should develop training programs to educate stakeholders on the various risks associated with AI and the options for risk reduction.
Documentation: Maintaining documentation that supports an audit of the risk assessment program is crucial for transparency and accountability. (AI Guardian’s AI Registry includes this type of documentation)
Vendor Management: As companies increasingly use third-party AI applications, they should develop third-party AI risk management policies to address the unique risks associated with such applications. (AI Guardian’s AI Risk Center includes third-party compliance tracking)
To the extent regulation or guidance on the above components is issued, it should be sufficiently flexible to allow companies to incorporate these components in a manner that best fits their existing governance and compliance.
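The inventory and risk-rating components above can be sketched as a simple data model. This is an illustrative sketch only: the risk categories are drawn from SIFMA's list, but the 1–5 scoring scale, the "high-risk" threshold, and all names below are assumptions, not part of SIFMA's comments or any particular product.

```python
from dataclasses import dataclass, field

# Risk categories from the framework above; the 1-5 scores and the
# high-risk threshold of 4 are assumptions made for this sketch.
RISK_CATEGORIES = [
    "legal_regulatory", "operational", "reputational", "contractual",
    "discrimination", "cybersecurity", "privacy", "consumer_harm",
    "transparency", "confidentiality",
]

@dataclass
class AIApplication:
    """One entry in the AI inventory."""
    name: str
    owner: str                     # responsible person or committee
    third_party: bool = False      # flags vendor-management obligations
    scores: dict = field(default_factory=dict)  # category -> 1 (low) to 5 (high)

    def risk_rating(self) -> int:
        # Overall rating: the worst score across all assessed categories.
        return max(self.scores.values(), default=0)

    def is_high_risk(self, threshold: int = 4) -> bool:
        return self.risk_rating() >= threshold

# Usage: rate two applications and surface the high-risk one for review.
inventory = [
    AIApplication("customer_chatbot", owner="AI Ethics Committee",
                  scores={"privacy": 4, "reputational": 3}),
    AIApplication("spam_filter", owner="IT", third_party=True,
                  scores={"operational": 2}),
]
high_risk = [app.name for app in inventory if app.is_high_risk()]
# high_risk -> ["customer_chatbot"]
```

The design choice here mirrors the framework's logic: a single worst-case score across categories ensures an application is reviewed whenever any one risk dimension is severe, rather than letting a high privacy risk be averaged away by low scores elsewhere.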
III. Auditing and Third-Party Risk Management
Auditing: Companies should have the autonomy to determine the most effective way to audit their AI applications, including the timing and methodology. Audits should primarily focus on risk assessment programs rather than auditing each individual AI application. This avoids misallocation of resources and allows for efficient utilization of in-house expertise.
Third-Party Risk Management: Applying principles from cybersecurity risk management, companies should identify and mitigate risks associated with third-party AI applications through diligence, audits, and contractual terms. A flexible and principles-based approach, supplemented by specific requirements from applicable sectoral regulators, would be effective in managing third-party AI risks.
SIFMA's input to the NTIA on risk-based AI governance provides valuable insights for policymakers. By adopting a risk-based approach, policymakers can develop a flexible framework that balances the potential benefits and risks of AI systems. SIFMA's recommendations regarding governance, foundational components, avoiding over-prescriptiveness, and auditing and third-party risk management contribute to the development of an effective AI accountability ecosystem. This framework promotes accountability, trust, innovation, and resource efficiency, ultimately ensuring that high-risk AI applications are carefully reviewed and their risks mitigated, while avoiding undue burden on low-risk applications.