The New York State Department of Financial Services (DFS) has issued new guidance emphasizing the role of multi-factor authentication (MFA) and biometric authentication in mitigating cybersecurity risks posed by artificial intelligence (AI). The guidance arrives amid growing concern about AI-enabled fraud, following recent FinCEN warnings about deepfake media fraud schemes targeting financial institutions.
Covered Entities, that is, the financial institutions and other organizations regulated under DFS’s Cybersecurity Regulation (23 NYCRR Part 500), must implement MFA for all authorized users accessing sensitive systems or non-public information (NPI) by November 2025. The requirement calls for two or more authentication factors, such as passwords, biometric traits, or possession-based tokens, and encourages defenses that resist AI-manipulated deepfakes, including digital certificates and physical security keys. The move aligns with broader industry trends, exemplified by Mastercard’s recent commitment to replace passwords with biometric authentication by 2030.
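Part 500 treats MFA as requiring at least two distinct factor types: something the user knows, has, or is. Purely as an illustration, and not a DFS-prescribed design, the sketch below layers an RFC 6238 time-based one-time password (a possession factor generated by an authenticator app) on top of an ordinary password check; the function names and the password_ok flag are hypothetical.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_login(password_ok: bool, submitted_code: str, secret_b32: str) -> bool:
    """Grant access only when both factors check out."""
    # Constant-time comparison avoids leaking code digits via timing.
    code_ok = hmac.compare_digest(submitted_code, totp(secret_b32))
    return password_ok and code_ok

secret = "JBSWY3DPEHPK3PXP"  # demo secret; real secrets come from enrollment
print(verify_login(True, totp(secret), secret))  # True when both factors pass
```

The phishing-resistant options the Guidance favors, such as digital certificates and physical security keys, would replace the shared-secret code above with public-key challenge-response.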
In tandem with the access control mandates, Covered Entities must periodically review and limit access privileges to NPI. Data governance measures must also be strengthened to control the collection, storage, and disposal of data, particularly for AI-enabled products. By November 2025, organizations are required to maintain comprehensive data inventories that prevent unauthorized access, underscoring a commitment to minimizing AI-related threats while maintaining regulatory compliance.
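The Guidance does not prescribe an inventory format. As a hedged sketch only, a record in such an inventory might capture what each data asset holds, where it lives, why it was collected, and when it must be disposed of; every field name below is an assumption, not a regulatory term.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DataAsset:
    """One entry in a hypothetical NPI data inventory."""
    name: str               # e.g. "customer_kyc_records"
    location: str           # system or store where the data lives
    contains_npi: bool      # does it hold non-public information?
    used_by_ai: bool        # is it consumed by an AI-enabled product?
    collected_for: str      # documented business purpose
    retain_until: date      # scheduled disposal date

inventory = [
    DataAsset("customer_kyc_records", "core-banking-db", True, False,
              "identity verification", date(2032, 1, 1)),
    DataAsset("chatbot_transcripts", "support-datalake", True, True,
              "customer service", date(2026, 6, 30)),
]

# Flag NPI stores that are past their scheduled disposal date.
overdue = [a.name for a in inventory
           if a.contains_npi and a.retain_until < date.today()]
```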
The DFS Guidance, issued last month, builds on the foundational Cybersecurity Regulation established in 2017. While it does not introduce new requirements, it provides a detailed framework for addressing emerging AI risks. Key recommendations include annual or event-driven updates to Risk Assessments to account for developments such as deepfakes, which have grown markedly more sophisticated; recent partnerships between major financial institutions and verification providers to combat deepfake fraud underscore the scale of the concern.
Vendor management is another critical area of focus. Covered Entities are advised to establish stringent policies for vetting third-party service providers (TPSPs), with minimum requirements for access controls and encryption. Contracts with TPSPs should mandate timely notification of any cybersecurity event affecting NPI and require enhanced privacy and security protections when TPSPs use AI technologies.
Cybersecurity training is also a priority under the new Guidance. Annual training for all personnel must now address AI-related risks, such as deepfake social engineering and AI-enhanced cyberattacks. Specialized training for cybersecurity staff and TPSP personnel is required to ensure AI systems are designed and operated securely. Covered Entities are also urged to monitor user activity for warning signs, such as unusual query behavior in AI-enabled tools.
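What counts as “unusual query behavior” is left to each entity. As a hedged sketch of one simple approach, not a DFS-endorsed method, the code below flags users whose latest hourly query volume against an AI tool far exceeds their own historical baseline; the threshold and field names are illustrative assumptions.

```python
from statistics import mean, stdev

def flag_unusual_users(query_counts: dict[str, list[int]],
                       sigma: float = 3.0) -> list[str]:
    """Flag users whose latest hourly query count is far above their baseline.

    query_counts maps a user ID to that user's hourly query totals,
    oldest first; the final entry is the hour under review.
    """
    flagged = []
    for user, counts in query_counts.items():
        history, latest = counts[:-1], counts[-1]
        if len(history) < 2:
            continue  # not enough baseline to judge
        baseline, spread = mean(history), stdev(history)
        if latest > baseline + sigma * max(spread, 1.0):
            flagged.append(user)
    return flagged

# Example: one user suddenly issuing roughly 40x their normal query volume.
activity = {
    "analyst_7": [12, 9, 11, 10, 14, 480],
    "teller_3": [3, 4, 2, 5, 3, 4],
}
print(flag_unusual_users(activity))  # ['analyst_7']
```

In practice such a heuristic would feed a review queue rather than block access outright, since legitimate workload spikes are common.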
While the Guidance provides actionable steps, it leaves room for interpretation, particularly concerning the definition of “material changes” that warrant updates to Risk Assessments. As AI technologies continue to evolve, DFS expects Covered Entities to routinely reassess their cybersecurity controls to remain compliant and reduce risks.
Source: Reuters
November 18, 2024 – by Cass Kennedy