
October 24, 2025

Privacy and Cybersecurity Considerations for AI Use in the Financial Industry

Evolving artificial intelligence (AI) tools are quickly reshaping every industry, including financial services. Undeniably, AI technology provides real benefits to businesses across sectors: AI tools automate previously tedious tasks and help tailor more personalized customer service experiences. However, AI also raises privacy and cybersecurity risks that must be recognized and addressed to protect firms and their users from problems down the road.

For the financial industry, adoption of AI tools for transcription and note-taking, to-do lists, client messaging, marketing analytics, risk modeling, and compliance exception reporting is increasingly the norm. Before adopting such technologies, it is prudent to identify the privacy and cybersecurity risks they present, along with the regulatory considerations relating to the maintenance of books and records. By proactively mapping the privacy, security, and books-and-records concerns associated with AI use to internal controls, you can protect your business from risks that could otherwise escalate into regulatory and legal battles and costly remediation later on.

In this blog, we explore steps you can take to safeguard your firm as it further deploys AI. We consider some of the top privacy and cybersecurity risks of using AI in the finance industry, discuss how to deploy effective AI risk management policies and techniques, and describe a governance structure you can put in place to mitigate these risks, along with the steps to take if a cyber incident occurs. It is also important to confer with experienced regulatory compliance counsel to consider state regulations and reporting requirements.

Key AI Privacy Risks in the Financial Services Industry

First, it is worth assessing the privacy concerns associated with AI use in the financial services industry. The primary risk stems from the way AI models rely on large volumes of data to learn and train, some of which can include highly sensitive or personally identifiable information (“PII”). Feeding confidential customer information into these models can expose it, raising security and privacy concerns. Moreover, customers may not have consented to having their data used. Take, for example, a Zoom or Microsoft Teams notetaker: depending on the jurisdiction, both parties may need to consent to the transcription service. Thus, before using such a service, firms need to understand the customer protections afforded by state rules and federal regulations, including the Gramm-Leach-Bliley Act (GLBA) and amended Regulation S-P, the General Data Protection Regulation (GDPR), and consumer laws such as the California Consumer Privacy Act (CCPA). Now is the best time to review your corporate governance practices to reduce the risk of unauthorized data use, inadvertent re-identification, or other security concerns associated with integrating AI tools into your firm’s operations or business practices.

Cybersecurity Risks Related to AI Tools

From a cybersecurity standpoint, AI tools that are not properly deployed can open your environment to malicious actors. A common vector is “data poisoning,” in which malware or corrupt data is inserted into an AI model’s training set, whether through a phishing attack, a compromised data steward, malware that takes over ingestion pipelines, or an employee who inadvertently approves a data feed or allows an attacker to bypass validation. The corrupted AI tools will then produce inaccurate or malicious outputs because of the bad data they were trained on.

In a financial services environment, the impact of corrupted data can be particularly severe. Corrupted or manipulated data can cascade into critical functions such as risk modeling, trading algorithms, client reporting, and regulatory submissions. This not only jeopardizes data integrity and internal controls, but also raises the risk of misstatements, compliance breaches, and financial loss. Remediation efforts may require extensive forensic review and revalidation of models and systems, increasing operational burden and potentially undermining stakeholder confidence in the firm’s governance and oversight framework.

AI tools raise other security concerns affecting data integrity, regulatory compliance, and operational resilience. Internal cybersecurity and risk teams may be unable to trace decisions or detect malicious interference, creating a lack of auditability across the firm’s systems and making it difficult to maintain a clear and accurate picture of how those systems work. Many AI systems rely on external platforms, which increases the firm’s dependence on third-party security controls. Cyber attackers may also attempt to steal or reverse engineer AI models, gaining access to sensitive client and firm information.

As you review your financial firm’s practices, it can be helpful to partner with a knowledgeable and highly skilled securities law attorney who can provide you with the customized guidance you need to make fully informed decisions, as well as assist with the development of effective internal AI controls. One such control, illustrated below, is technical validation of training data; the sections that follow describe several more risk mitigation strategies your firm can address in its updated governance guidelines and practices.
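The sketch below is a minimal, hypothetical Python illustration of that validation step: it verifies a feed-level checksum published by the data steward and applies schema and range checks before records reach a training set. The field names, ranges, and checksum workflow are assumptions for illustration only, not a prescribed control.

```python
import hashlib
import json

# Hypothetical schema for an incoming training record; a real pipeline
# would derive this from the model's documented feature set.
EXPECTED_FIELDS = {
    "client_id": str,
    "transaction_amount": (int, float),
    "risk_score": (int, float),
}

def record_is_valid(record: dict) -> bool:
    """Reject records with missing fields, wrong types, or out-of-range values."""
    for name, types in EXPECTED_FIELDS.items():
        if name not in record or not isinstance(record[name], types):
            return False
    # Example range check: risk scores outside [0, 1] suggest corruption or tampering.
    return 0.0 <= record["risk_score"] <= 1.0

def feed_is_untampered(raw_bytes: bytes, expected_sha256: str) -> bool:
    """Compare the feed's hash against a value published by the data steward."""
    return hashlib.sha256(raw_bytes).hexdigest() == expected_sha256

def ingest(raw_bytes: bytes, expected_sha256: str) -> list:
    """Return only records that pass integrity and schema checks; quarantine the rest."""
    if not feed_is_untampered(raw_bytes, expected_sha256):
        raise ValueError("Feed hash mismatch: possible tampering; do not train on this data.")
    records = json.loads(raw_bytes)
    accepted = [r for r in records if record_is_valid(r)]
    rejected = len(records) - len(accepted)
    if rejected:
        print(f"Quarantined {rejected} record(s) for manual review.")
    return accepted
```

In practice, quarantined records would route to human review, and the expected hash would arrive through a channel independent of the feed itself so an attacker cannot alter both at once.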

AI Use Policies and Procedures

Consider revising your current policies and procedures to address AI use. For instance, your firm should define the acceptable uses of AI, articulate which tools are approved for use (a minimal sketch of enforcing such an approved-tools list appears below), conduct robust due diligence on each tool, and beta test before deploying. Moreover, training must be provided to end users, focusing on how to minimize risks and protect customer privacy and security. When employees understand that using tools like ChatGPT or Copilot carries inherent risks (and that there will be consequences for the unauthorized use of these tools), your firm can further reduce potential security and privacy risks.
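To make an approved-tools policy operational rather than merely aspirational, some firms pair the written policy with a technical control. The Python sketch below is a hypothetical illustration of checking outbound requests against an allowlist of vetted AI tools, for example at an egress proxy; the tool names and domains are assumptions, not recommendations.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of AI tools the firm has vetted and approved.
APPROVED_AI_DOMAINS = {
    "copilot.example-approved-vendor.com",
    "notetaker.example-approved-vendor.com",
}

def is_request_allowed(url: str) -> bool:
    """Permit traffic only to AI tools on the firm's approved list."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_DOMAINS

# Example: an unapproved consumer AI tool is blocked and logged for compliance review.
for url in ("https://notetaker.example-approved-vendor.com/upload",
            "https://chat.unvetted-ai.example.com/api"):
    verdict = "allowed" if is_request_allowed(url) else "blocked"
    print(f"{verdict}: {url}")
```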

Privacy Impact Assessments (PIAs)

As part of ongoing privacy safeguard measures, conduct a privacy impact assessment that analyzes how your firm gathers, uses, and shares PII; which systems (including AI tools) have access to PII; and where that data is maintained. Then, map those findings to internal controls (such as policies and procedures, and IT data security safeguards) that address how the organization protects against potential privacy risks; a simple sketch of such a mapping follows. The PIA should also ensure compliance with all relevant regulatory requirements and align with the most current privacy best practices.
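As an illustration of the mapping step, the Python sketch below models a minimal PII inventory that relates each system (including AI tools) to the PII it touches, where that data lives, and the controls that cover it. The systems, categories, and control names are hypothetical placeholders, not a compliance checklist.

```python
from dataclasses import dataclass, field

@dataclass
class SystemEntry:
    """One row of a PII inventory: what a system touches and how it is controlled."""
    system: str
    pii_categories: list
    storage_location: str
    controls: list = field(default_factory=list)

# Hypothetical inventory entries; a real PIA would enumerate every system with PII access.
inventory = [
    SystemEntry("CRM", ["name", "email", "account number"], "on-prem database",
                controls=["encryption at rest", "role-based access"]),
    SystemEntry("AI meeting notetaker", ["name", "voice recording"], "vendor cloud",
                controls=["consent capture", "vendor data processing agreement"]),
    SystemEntry("marketing analytics tool", ["email", "browsing activity"], "vendor cloud"),
]

# Flag entries whose PII has no mapped control: these are the gaps a PIA surfaces.
gaps = [entry.system for entry in inventory if not entry.controls]
print("Systems lacking mapped controls:", gaps or "none")
```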

Incident Response and Monitoring

Another strategy for protecting customer data and reducing security risks is to create AI-specific incident response protocols tied to cyber incidents and data leakage. By taking proactive steps to create a detailed protocol for responding to incidents, your firm will be ready to address these events swiftly, appropriately, and efficiently.
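As a purely illustrative aid to such a protocol, the Python sketch below shows one way an AI-specific incident record might be structured so responders capture the AI-relevant facts (which tool, what data, which model version) alongside a timestamp and containment status. The fields are hypothetical assumptions, not a regulatory template.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIIncident:
    """Minimal record for an AI-specific incident; extend per your response plan."""
    tool: str                   # which AI tool was involved
    incident_type: str          # e.g., "data leakage", "poisoned training feed"
    data_categories: list       # PII or confidential data potentially exposed
    model_version: str          # which model or version to revalidate
    detected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    contained: bool = False

# Example: recording a suspected leak through an unapproved notetaker plugin.
incident = AIIncident(
    tool="meeting notetaker plugin",
    incident_type="data leakage",
    data_categories=["client names", "account discussions"],
    model_version="vendor-hosted, unknown",
)
print(incident)
```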

Legal and Regulatory Enforcement Trends

Regulatory bodies like the SEC, FINRA, and FTC are increasingly focusing on AI misuse among financial firms, scrutinizing firms that misuse or otherwise improperly secure AI technologies. For example, the SEC recently charged two investment advisers, Delphia (USA) Inc. and Global Predictions, Inc., with making false and misleading statements about their use of AI.1, 2 The protection of customer data remains the top priority for regulators throughout the financial services industry.

If you are ready to review your corporate governance structure, reach out to us today to get started. The experienced and dedicated legal team at Jacko Law Group, PC is here to provide you with solutions that are both practical and effective. We can assist you with a wide range of matters, from SEC securities law counsel, to AI risk mitigation and transactional law support, to general corporate counsel, M&A, and litigation support. Please reach out to our experienced attorneys today at (619) 298-2880 to get started, or visit us at our San Diego, California headquarters.

1. In the Matter of Delphia (USA) Inc., Investment Advisers Act Release No. 6573 (Mar. 18, 2024), U.S. Securities and Exchange Commission, https://www.sec.gov/newsroom/press-releases/2024-36.

2. In the Matter of Global Predictions, Inc., Investment Advisers Act Release No. 6574 (Mar. 18, 2024), U.S. Securities and Exchange Commission, https://www.sec.gov/newsroom/press-releases/2024-36.

About the author

Jacko Law Group, PC

Jacko Law Group provides tailored legal services and effective strategies for success, delivering exemplary solutions to complex legal and regulatory challenges to ensure that both business efforts and compliance obligations are satisfied.
