
Cybersecurity

July 1, 2025

Artificial Intelligence in Financial Services: Building Blocks and Safety Guards for Responsible Innovation

Author: Kathryn Konzen, Esq. is the Director of Operations and Counsel at Jacko Law Group, PC (“JLG”). With over 15 years of experience in the legal profession, she brings a diverse range of expertise in areas such as operations, eDiscovery consulting, business development, recruiting, and more. Her practice focuses on working closely with clients, assisting them with their cybersecurity and AI legal needs.

JLG works extensively with investment advisers, broker-dealers, investment companies, private equity and hedge funds, banks and corporate clients on securities and corporate counsel matters. For more information, please visit https://www.jackolg.com/.

This article may contain information that is confidential and/or protected by the attorney-client privilege and the attorney work product doctrine. It is not intended for transmission to, or receipt by, any unauthorized persons. Inadvertent disclosure of the contents of this article to unintended recipients is not intended to and does not constitute a waiver of attorney-client privilege or attorney work product protections.

The Risk Management Tip is published solely based on the interests of, and relationship between, the clients and friends of Jacko Law Group, P.C. (“JLG”) and should in no way be construed as legal advice. The opinions shared in this publication reflect those of the authors, and not necessarily the views of JLG. For more specific information on recent industry developments or particular situations, you should seek legal opinion or counsel.

These materials may be considered ATTORNEY ADVERTISING in some jurisdictions.


Artificial intelligence (AI) is rapidly transforming the financial industry. Whether streamlining compliance reviews, enhancing surveillance, or delivering personalized portfolio strategies, AI is driving a new wave of operational efficiency and insight. But as these tools become more integrated into core functions, so does the responsibility to implement them in a safe, transparent, and compliant manner.

For investment advisers (IAs) and broker-dealers (BDs), adopting AI means understanding its foundational elements and establishing strong safeguards for preserving regulatory compliance, managing risk, and maintaining client trust.

Regulatory Considerations for Use of AI in the Financial Industry

1. High-Quality Financial Data
In the financial services industry, AI output is determined by the quality of data input. For IAs and BDs, this includes structured data such as transaction records, portfolio performance, market activity, and client risk profiles. IAs and BDs are responsible for maintaining robust data governance to ensure accuracy, privacy protections, and safeguards pursuant to Regulation S-P and in accordance with regulatory requirements governing the business.

2. Advanced AI Models
Sophisticated AI systems are increasingly relied upon for fundamental operations, improving both efficiency and efficacy. It is vital, however, for firms to ensure that the coding underlying the AI model is accurate and to consider what surveillance techniques are deployed to detect errors.

3. Compliant and Scalable Infrastructure
Firms must evaluate and oversee their AI infrastructure and service providers to ensure they meet regulatory expectations for cybersecurity, data integrity, and business continuity. This includes conducting due diligence, maintaining oversight programs, and ensuring vendors comply with applicable SEC and FINRA standards.

4. Human Expertise and Governance
AI does not replace the need for human oversight. In fact, it requires professionals to provide knowledgeable oversight. Financial advisors, IT, risk officers, and compliance and legal teams must collaborate throughout the model lifecycle, ensuring fiduciary principles, supervisory controls, and compliance standards are maintained.

 

Guardrails for Responsible AI Use

1. Bias Mitigation and Fair Practices
When AI predictive analytics are used in areas such as portfolio recommendations or client risk scoring, it is essential to implement controls that mitigate bias and promote fair practices. AI tools should be evaluated against standards set by laws such as the Equal Credit Opportunity Act (ECOA) and the Fair Credit Reporting Act (FCRA). Doing so helps firms avoid unintentional bias.

  • Adopt and adhere to strict internal protocols for data input to ensure consistency and reduce the risk of bias.

2. Model Explainability and Regulatory Transparency
Firms must be able to interpret and explain the rationale behind AI-generated outputs, particularly those that influence client recommendations or compliance decisions. This is essential for regulatory reviews, internal audits, and client transparency.

  • Maintain thorough documentation of all uses and functions of the AI model. Ensure that the information in the documentation is consistently transparent and understandable.

3. Surveillance, Fraud Detection, and Cyber Resilience
AI enhances firms’ ability to detect trade manipulation, insider activity, and cyber threats in real time. But these tools can also become targets. Firms must work with cybersecurity teams and external providers to safeguard against AI manipulation, unauthorized access, and data breaches.

  • Work with internal and external IT and cybersecurity teams that understand the new cybersecurity threats associated with AI use and adjust policies and procedures to mitigate and address such threats.

4. Privacy and Confidentiality
Firms should work with providers that implement strong data privacy measures and ensure those practices meet standards under the GLBA, Regulation S-P, and firm privacy policies. All parties should have a strong understanding of how data is protected when used by AI systems.

  • Perform thorough due diligence on any third-party vendors to ensure their policies and procedures for handling client data also meet regulatory requirements.

5. Human Oversight for Critical Decisions
One of the most critical guardrails in AI implementation is human oversight. AI should support, not replace, human decision-making.

  • Designate qualified personnel to review and make key AI-related decisions that could have regulatory or fiduciary consequences.

6. Ethical AI Governance
Firms should establish cross-functional AI governance committees responsible for overseeing model selection, integration, monitoring, and retirement.

  • AI Governance teams should assess compliance risks, review vendor-provided models, and ensure ethical and fiduciary alignment.

7. Ongoing Monitoring and Model Lifecycle Management
AI models require continuous oversight and must be treated like any other high-risk asset. Performance should be regularly tested, validated, and adjusted for changing market conditions or client behavior.

  • AI management should include training and clear documentation of model implementation, all essential to maintaining the model’s relevance and reliability over time.

AI with Accountability

AI offers significant promise for IAs and BDs, from operational efficiency to personalized client service. However, failure to implement robust AI protocols can result in lapses in oversight, leading to regulatory violations, reputational harm, and loss of client confidence.

By grounding AI implementations in high-quality data, tested algorithms, and rigorous governance, and by enforcing safeguards around fairness, privacy, and explainability, firms can harness the power of AI responsibly. In an industry where trust and compliance are non-negotiable, responsible AI use isn’t just a technology choice; it’s a business imperative.


Regulatory Considerations and Applicability

As AI becomes more embedded in the financial services industry, firms must remain responsive to evolving regulatory expectations. Agencies such as the SEC and FINRA are increasingly focused on how AI affects supervision, suitability, communications, and operational risk. Whether operating under current requirements or preparing for new AI-related frameworks, firms should integrate compliance into their AI planning from the outset.

 
