
DeepSeek AI: Adopting a Security-First Approach in Finance

Why and how financial firms must establish clear AI governance policies to ensure compliance and mitigate risks.

The growing use of generative AI presents both opportunities and risks for financial institutions. On the one hand, these tools and technologies provide far-reaching benefits for financial services firms, enabling them to increase their operational efficiency, achieve time and cost savings, and enhance their investment decisions.

By automating tasks like market research, report generation and data analysis, generative AI can free up time for financial services executives to focus on higher-value activities such as risk management, client engagement and the pursuit of new growth opportunities.

On the other hand, AI also introduces significant challenges, including regulatory uncertainty, data security risks, AI bias and accountability concerns. In light of all this, the recent discussions around DeepSeek AI highlight the importance of implementing strong governance, risk, and compliance (GRC) frameworks to ensure financial firms use AI responsibly.

After all, if they fail to put well-defined GRC structures in place, financial organisations risk inadvertently generating biased AI outputs and delivering financial misinformation, potentially leading to reputational damage, fines or penalties.

The use of AI sits, of course, on a spectrum, ranging from firms that rapidly and enthusiastically embrace the new technologies to others that ban them outright. With Statista estimating that AI spending across the industry will rise from US$35 billion in 2023 to a staggering US$126.4 billion in 2028, it is clear that most firms are electing to adopt the innovations.

The importance of good governance in a fast-changing regulatory environment

New regulations are setting important guidelines for the way financial services firms use AI. The European Union’s AI Act is expected to set a global precedent for governance, potentially influencing regulatory expectations in the U.S. and other markets.

Meanwhile, the U.S. Securities and Exchange Commission (SEC) and Federal Trade Commission (FTC) continue to monitor AI usage in financial decision-making, with increased pressure on firms to demonstrate transparency and accountability.

High-risk systems, such as those used for credit scoring and fraud detection, are subject to stringent requirements, including rigorous testing, transparency and human oversight. Additionally, existing laws like the GDPR impose further obligations on data privacy and protection, adding another layer of compliance for firms utilising AI technologies.

Beyond this, financial organisations need to start developing their own AI governance policies and putting them in place to provide oversight and accountability across the entire AI lifecycle, from development and deployment to monitoring and auditing.

This will typically include assessing AI models, aligning them with existing security controls and making sure they do not conflict with any regulatory obligations.
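To make this concrete, the sketch below shows one way such a policy could be expressed as code. It is a minimal, hypothetical example: the field names, the approved-vendor list and the 90-day audit window are illustrative assumptions rather than a prescribed standard.

```python
# A minimal policy-as-code sketch: every rule and field here is an
# illustrative assumption, not a regulatory requirement.
from dataclasses import dataclass

APPROVED_VENDORS = {"vendor-a", "vendor-b"}  # hypothetical allowlist

@dataclass
class AIModelRecord:
    name: str
    vendor: str
    handles_client_data: bool
    human_oversight: bool
    last_audit_days_ago: int

def governance_violations(model: AIModelRecord) -> list[str]:
    """Return a list of policy violations for one registered AI model."""
    violations = []
    if model.vendor not in APPROVED_VENDORS:
        violations.append("vendor not on approved list")
    if model.handles_client_data and not model.human_oversight:
        violations.append("client-facing model lacks human oversight")
    if model.last_audit_days_ago > 90:  # assumed quarterly audit cycle
        violations.append("audit overdue")
    return violations

record = AIModelRecord("research-summariser", "vendor-c", True, False, 120)
print(governance_violations(record))
```

Checks like these can run automatically against a model registry, turning the governance policy from a static document into something the firm can enforce and audit continuously.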

It is also important to highlight that these policies are easiest to put in place early in the AI adoption process. The best approach for firms is to build a governance framework before they implement AI in the first place; if they hold off until AI has already permeated their workflows, the whole process becomes much more challenging.

Even following ‘go-live’, these kinds of frameworks need to incorporate regular audits to ensure ongoing compliance with both internal policies and external regulations.

Strengthening security measures for AI in financial services

Once governance frameworks are in place, firms should prioritise enhanced security practices for AI adoption. This should begin with strict endpoint security controls to block unauthorised AI use and ensure that only approved systems can interact with sensitive data. Firms also need to ensure that third-party AI providers comply with internal data governance policies, lessening risks associated with external vendors.
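As a rough illustration of what an endpoint control might look like in practice, the sketch below checks outbound requests against an allowlist of sanctioned AI endpoints. The hostnames and the placement of the check (for example, in a forward-proxy plugin) are assumptions made for illustration.

```python
# A simplified sketch of an egress allowlist check, as might run in a
# forward-proxy plugin; the hostnames are illustrative placeholders.
from urllib.parse import urlparse

APPROVED_AI_ENDPOINTS = {
    "api.approved-ai-vendor.example",  # hypothetical sanctioned provider
}

def is_request_allowed(url: str) -> bool:
    """Permit traffic only to AI endpoints the firm has sanctioned."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_ENDPOINTS

# Traffic to an unapproved chatbot is blocked at the perimeter.
assert not is_request_allowed("https://chat.unapproved-tool.example/v1")
assert is_request_allowed("https://api.approved-ai-vendor.example/v1/chat")
```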

Regular security assessments should be carried out to identify potential vulnerabilities in AI models, while data loss prevention (DLP) strategies can help track and manage how AI interacts with confidential information.
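The fragment below sketches one simple DLP-style check: scanning an outbound prompt for patterns that resemble confidential identifiers before it reaches an external AI service. The regular expressions are deliberately simplistic examples, not production-grade detectors.

```python
# A minimal DLP-style pre-send check: the patterns are simplistic
# examples of sensitive-data detection, not production-grade rules.
import re

SENSITIVE_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the categories of sensitive data found in an outbound prompt."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

hits = flag_sensitive("Summarise account 4111 1111 1111 1111 performance")
if hits:
    print(f"Blocked: prompt contains {', '.join(hits)}")  # block or redact
```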

Additionally, educating employees on AI-related security threats, such as phishing scams and fraudulent AI tools and techniques, is crucial to prevent exploitation and foster a culture of cybersecurity awareness.

Recent cybersecurity incidents, like impersonation scams and the rise of AI-powered malware, underline the evolving threats facing financial firms. Attackers are increasingly targeting AI tools to exploit vulnerabilities, steal credentials and compromise proprietary data. Consequently, financial institutions must proactively manage AI security risks. 

A call to action: responsible AI adoption

The discussions around DeepSeek AI are a reminder that financial firms need to be thorough in managing AI-related risks. This doesn’t mean prohibiting AI adoption but rather ensuring that it fits within a compliant and secure operational framework.

Financial services organisations should take this opportunity to review their AI governance policies, reinforce security controls and train their employees on AI risks. Doing so will enable them to harness the benefits of AI while addressing potential threats to data security, regulatory compliance and business integrity. 


Travis DeForge, Director of Offensive Cybersecurity Engineering, Abacus Group
