
AI in the Spotlight: SEC’s 2026 Examination Priorities Highlight Artificial Intelligence for the Third Year Running

  • laytonhurt
  • 4 days ago
  • 6 min read

Artificial Intelligence (AI) continues to revolutionize industries worldwide, and the financial sector is no exception. For the third consecutive year, the U.S. Securities and Exchange Commission (SEC) has included AI in its Fiscal Year 2026 Examination Priorities, underscoring the growing importance of this transformative technology in shaping the future of financial services.

As AI becomes more integrated into investment advisory services, trading platforms, and compliance tools, the SEC is taking proactive steps to ensure that its use aligns with regulatory standards and investor protection. This focus reflects the agency’s commitment to addressing the risks and opportunities associated with emerging technologies.

Why AI is Relevant to the SEC


The SEC recognizes that AI has the potential to enhance efficiency, improve decision-making, and optimize operations across the financial industry. From automated investment tools to trading algorithms, AI is enabling firms to deliver faster, more personalized services to investors. However, with these advancements come new challenges, including transparency, fairness, and robust oversight.

The SEC is tasked with protecting investors, maintaining fair and orderly capital markets, and facilitating capital formation. The Division of Examinations advances this mission "through risk-focused examinations that improve compliance, prevent fraud, monitor risk, and inform policy."

The rapid adoption of AI has introduced new challenges in managing information security and operational risks.


Examination Priorities and Key Areas of Scrutiny


In its 2026 priorities, the SEC has outlined specific areas where AI will be under scrutiny. These include:

  1. Fair and Accurate Representations: Firms must ensure that their claims about AI capabilities are truthful and not misleading. Investors need to understand the scope and limitations of AI-driven tools.

  2. Alignment with Investor Profiles: AI-generated advice and recommendations must be consistent with investors’ financial goals, risk tolerance, and investment strategies.

  3. Monitoring and Supervision: The SEC will assess whether firms have implemented adequate policies and procedures to oversee AI technologies, particularly in areas like fraud prevention, anti-money laundering (AML), and trading functions.

  4. RegTech Integration: The use of regulatory technology to automate compliance processes and enhance operational efficiency will also be reviewed.


The Growing Role of AI in Financial Services


The SEC’s continued focus on AI highlights its increasing prevalence in the financial industry. From meeting transcription to advisor screening, hiring, and onboarding, AI is reshaping how firms interact with clients, manage risks, and comply with regulations. For example:

  • Automated Investment Tools: AI-powered platforms are helping advisors make data-driven recommendations tailored to individual financial goals.

  • Compliance & Supervisory Automation: AI-enhanced RegTech is helping firms streamline compliance tasks, including communications review, surveillance, recordkeeping, KYC processes, onboarding checks, and mapping controls to regulatory requirements. These tools reduce manual effort and lower the risk of human oversight gaps.

  • Operational Efficiency Across the Firm: AI is playing a growing role in automating back-office tasks—such as data entry, workflow routing, reporting, policy updates, and searching large document repositories—freeing staff to focus on higher-value activities.


The Importance of Governance and Transparency


AI and related technologies—such as marketing content-generating models and meeting assistants—are significantly transforming financial services. These technologies are being incorporated not only into client-facing services (such as automated advisory services and recommendations) but also into internal processes, including fraud prevention, AML compliance, back-office functions, and regulatory technology (RegTech)-driven efficiency optimization.

The growing dependence on automated tools demands a strong focus on governance and transparency. For financial institutions, this involves ensuring that the outputs of their AI systems are consistent with their core regulatory responsibilities, particularly the fiduciary standards of conduct that require impartial advice.


Preparation for the Future: What Firms Should Do


As the financial industry continues to embrace AI, firms must stay ahead of regulatory expectations. The SEC’s priorities serve as a roadmap for organizations to evaluate their AI practices, identify potential risks, and implement measures to ensure compliance. By doing so, firms can not only meet regulatory requirements but also build trust with investors and stakeholders.

The SEC has clearly stated that it "will assess whether firms have implemented adequate policies and procedures to monitor and/or supervise their use of AI," not just for trading but also for back-office operations, regulatory compliance, and more. But what do adequate policies and procedures look like?

 

In our recent webinar, Jordan Wilcox emphasized 10 key things firms should consider:

  1. Ongoing monitoring of AI systems. Wilcox reminds us to ensure “ongoing monitoring, that they have a process in place to document their systems, that you have the authority either to receive active notice or request information.”

  2. Documenting an inventory of AI systems, use cases, and vendors. He currently manages “an overall tracker of all AI use cases, and who the vendor is,” something he “strongly recommends any organization to do.”

  3. Policies requiring vendors to disclose model details. “At a minimum, I want to make sure that they can provide me the details around what the AI models are, the licensing…who they’re getting it from, if they’re getting it from Azure or AWS.”

  4. Authority to request information and audit. Wilcox states he always wants “the ability to audit their system, at my request.” “If you are working with vendors and that vendor's going to have some level of access to your data directly, indirectly you're conveying it to them. I would strongly recommend having standard AI language that goes into those contracts, … based on your organization's risk tolerance. At minimum, you want to have some language around there that defines generative AI or what that technology looks like, the types of risks you would want to mitigate or want the vendor to be aware of, whether that's training on it, accuracy, bias, hallucinations, those very common AI risks.”

  5. Controls for unannounced AI features rolled out by vendors. “A lot of SaaS providers are constantly rolling out AI… It could go in a simple update over a weekend… all of a sudden your team logs in on Monday and there’s a ton of new AI features… You want to be cognizant of that.” This point directly supports the SEC’s language that firms must supervise all uses of AI—including when vendors enable AI “by default.”

  6. Policies tailored to risk tolerance (not one-size-fits-all). “It’s not about governing every use of AI with the same level of rigor. It’s tailoring the rigor to the risk.” “When I engage with a client, the first question I ask is: What are your AI risks? That frames what language I’ll propose.”

  7. Requirements for model drift, updates, and lifecycle. “I check again based on the risk level—quarterly or twice a year… Suppose that model has a public issue… I want to know exactly where I’m exposed.”

  8. Transparency expectations for vendors. “Make sure they have some level of transparency… through ongoing monitoring, testing, evaluation, transparency documentation.” “A vendor should feel comfortable sharing the risks… that is the way we identify, mitigate, and improve our AI.”

  9. Integration into vendor risk management workflows. “One of the first things I do is integrate AI into our existing procurement process, most organizations have some form of vendor risk management, that evaluates their vendors … Understanding what services the vendor is providing, what data they have access to, how they access it, and how critical that vendor is.”

  10. Maintenance of contractual rights. “Once we’ve aligned on mutually agreeable language…I have a process set up to exercise those rights so I can continue to evaluate the risk.” “Some clauses I have are very detailed…, but you can keep it more general, which is recommended as a starting place to give you more flexibility. You really want to evaluate based on the client relationship.”
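To make points 2 and 7 concrete, an AI use-case inventory with risk-based review cadences could be sketched as below. This is a minimal illustration only: the field names, risk tiers, and review intervals are assumptions chosen for the example, not a schema the SEC or Wilcox prescribes.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative review cadences keyed to risk level, echoing point 7's
# "quarterly or twice a year" depending on risk. Values are assumptions.
REVIEW_INTERVAL_DAYS = {"high": 90, "medium": 180, "low": 365}

@dataclass
class AIUseCase:
    """One row in an AI use-case inventory (all fields are illustrative)."""
    name: str            # e.g. "meeting transcription"
    vendor: str          # who supplies the model or service (point 2)
    model_source: str    # e.g. "Azure" or "AWS" (point 3)
    risk_level: str      # "high", "medium", or "low"
    last_reviewed: date  # when this use case was last evaluated

def due_for_review(inventory: list[AIUseCase], today: date) -> list[str]:
    """Return the names of use cases whose risk-based review window has lapsed."""
    overdue = []
    for item in inventory:
        interval = timedelta(days=REVIEW_INTERVAL_DAYS[item.risk_level])
        if today - item.last_reviewed > interval:
            overdue.append(item.name)
    return overdue

inventory = [
    AIUseCase("meeting transcription", "VendorA", "Azure", "low", date(2025, 1, 10)),
    AIUseCase("trade surveillance", "VendorB", "AWS", "high", date(2025, 1, 10)),
]
# High-risk items come due sooner than low-risk ones.
print(due_for_review(inventory, today=date(2025, 6, 1)))  # ['trade surveillance']
```

Even a simple tracker like this gives a firm a documented answer to the examiner's first questions: what AI is in use, who supplies it, and when it was last reviewed.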


Final Thoughts


The SEC’s ongoing focus on AI is a clear signal that the technology is here to stay—and that its impact on the financial industry will only grow. For firms leveraging AI, this is an opportunity to demonstrate their commitment to innovation, transparency, and investor protection. By aligning their practices with regulatory expectations, they can harness the power of AI to drive growth while safeguarding the integrity of the financial markets.

At AI Guardian, we’re dedicated to helping organizations navigate the complexities of AI governance and compliance. As the regulatory landscape evolves, we’re here to ensure that your AI systems are not only cutting-edge but also secure, ethical, and compliant.

What are your thoughts on the SEC’s focus on AI? How is your organization preparing for the future of AI in financial services? Let us know in the comments below!

 
 
 
