Key Checklist for Credit Union TPRM and Security Teams on NCUA’s AI Guidance
the PromptArmor team
Aug 20, 2025
5 min read
If you are in IT or Security at a credit union, NCUA’s AI guidance is a clear signal: regulators are ramping up, and the ball is now in your court. Credit unions are adopting third-party AI in record numbers, and this guidance is here to help. Below is a brief walkthrough of the key points to keep in mind as you steer your program in the right direction. If you'd like this content as an Excel checklist, click the button below:
1. AI Risk Management and Governance
Strong governance starts with knowing what AI you’re using and how it’s being controlled. We’ve worked with the organizations behind the leading frameworks on a detailed mapping of risks to be wary of, but the general best practices are to:
Maintain an inventory – track all AI vendors, tools, and internal use cases in a living document.
Assess vendor AI features directly – identify which AI functions are in play (RAG, web search, MCP, etc.) and the risks each introduces.
Map subprocessors – document downstream models or services your vendors rely on.
Set governance standards – define how vendor risk will be tested, validated, and monitored over time.
You can’t manage AI risks if you don’t know what’s in your environment. Build visibility first, then layer governance and oversight on top.
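As a concrete starting point, an inventory entry can be a simple structured record. The sketch below (field names are our own illustration, not an NCUA-prescribed schema) shows one way to track vendors, their AI features, and subprocessors in Python, and to answer a quick governance question against the inventory:

```python
from dataclasses import dataclass

@dataclass
class AIVendorEntry:
    """One row in a living AI inventory (illustrative fields, not an official schema)."""
    vendor: str
    product: str
    ai_features: list[str]    # e.g. ["RAG", "web search", "MCP"]
    subprocessors: list[str]  # downstream models/services the vendor relies on
    data_scope: str           # what member data the tool can touch
    owner: str                # who is accountable internally
    last_reviewed: str        # ISO date of the last risk review

inventory = [
    AIVendorEntry(
        vendor="ExampleVendor",          # hypothetical vendor
        product="Support Chatbot",
        ai_features=["RAG", "web search"],
        subprocessors=["OpenAI GPT-4o"],
        data_scope="member support tickets",
        owner="TPRM team",
        last_reviewed="2025-08-01",
    )
]

# Quick governance query: which tools use web search (a common data-exfiltration vector)?
web_search_tools = [e.product for e in inventory if "web search" in e.ai_features]
```

A spreadsheet works just as well to start; the point is that each field above maps to a question a regulator or examiner can ask.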
Check out how PromptArmor maps the AI risk associated with each vendor to NIST AI, OWASP LLM Top 10, and MITRE AI.
2. AI Implementation and Scalability
Scaling AI should follow the same discipline as other enterprise risks. COSO’s guidance emphasizes aligning AI adoption with your credit union’s overall strategy and risk appetite:
Track use cases – track your AI use cases and tie them back to clear business objectives.
Trusted, Tried, True AI – ensure systems are transparent, tested, and aligned with credit union values.
Start small, scale smart – document the basics first (inventories, policies, ownership) and expand as adoption grows. Track an inventory of ongoing PoCs and their associated data scope.
Review regularly with continuous monitoring – monitor performance, risks, and member impact on an ongoing basis.
Business pressure to scale AI faster than you can govern it is mounting. Put guidance in place early, build policies that match your resources, then expand as maturity grows.
3. AI Data Security
AI outcomes are only as secure as the data behind them. The NSA/CISA guidance highlights a few core risks, and we’ve added some of the most important best practices to prioritize for credit unions:
Source reliable data and track data provenance for model use
Verify and maintain data integrity during storage and transport to and from LLMs
Leverage trusted infrastructure to host and serve models
Classify data and use access controls during the model lifecycle
Store training and prompt data securely
Leverage AI privacy-preserving techniques
Manage data deletion during offboarding, especially for trained models
Conduct ongoing data security risk assessments with frameworks like NIST AI RMF
Data is the fuel of AI, but also its biggest weakness. Secure it at every stage to keep trust in both the system and its outcomes.
Curious about more detail on potential security exploits in LLM systems? Check out the TPRA AI Security Executive course to learn more.
4. Deploying AI Systems Securely
AI systems need the same discipline as any other critical IT deployment. The NSA/CISA guidance highlights a few key priorities:
Secure the Deployment Environment
Ensure a Robust Deployment Environment Architecture
Harden Deployment Environment Configurations
Protect Deployment Networks from Threats
Continuously Protect the AI System
Secure Exposed APIs
Actively Monitor Model Behavior
Protect Model Weights
Secure AI Operation and Maintenance
Prepare for High Availability & Disaster Recovery
Plan Secure Delete Capabilities
For the full checklist under each of these plays, see this document.
Treat AI deployments as high-value assets. Lock them down, monitor them closely, and plan for failures before they happen.
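As one example of the “Secure Exposed APIs” play, an AI endpoint should at minimum authenticate callers and compare secrets in constant time to avoid timing side channels. This is a minimal sketch; the client names and keys are hypothetical, and in production the keys would live in a secrets manager, not in code:

```python
import hmac

# Hypothetical key store; real deployments pull keys from a secrets manager.
API_KEYS = {"tprm-dashboard": "s3cr3t-key"}

def is_authorized(client_id: str, presented_key: str) -> bool:
    """Constant-time key comparison via hmac.compare_digest."""
    expected = API_KEYS.get(client_id)
    if expected is None:
        return False
    return hmac.compare_digest(expected, presented_key)

assert is_authorized("tprm-dashboard", "s3cr3t-key")
assert not is_authorized("tprm-dashboard", "wrong-key")
assert not is_authorized("unknown-client", "s3cr3t-key")
```

Key checks like this sit in front of the model endpoint; they complement, not replace, network controls and rate limiting on the deployment environment.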
5. AI Uses and Opportunities in Financial Services
The Department of the Treasury’s Artificial Intelligence in Financial Services report highlights both the promise of AI and the risks that must be managed. For credit unions, the focus falls especially on third-party management: reliance on externally developed AI is unavoidable, but it demands stronger oversight. Treasury’s report stresses:
Update due diligence to specifically address AI-related risk (data quality, privacy, supply chain).
Enhance governance frameworks for LLM applications that access sensitive data.
Run ongoing risk assessments for the LLM application and the systems it connects to.
Test information security controls and business continuity plans.
Data privacy and security – ensure AI tools meet strict standards for protecting member data.
Review licensing, ownership, and secondary use restrictions for training and shared data.
Bias and explainability – require vendors to test for bias and provide clear explanations of model behavior.
Push vendors for access to impact assessments and model documentation needed for governance
Consumer protection – monitor AI-driven decisions (e.g., lending, credit scoring) to prevent harm or unfair treatment.
Human in the loop – require human review for contested or adverse outcomes (credit denials, account closures), and ensure a person can intervene.
Apply fair lending laws consistently – confirm that AI-driven tools (credit scoring, account screening, fraud detection) comply with existing consumer protection laws like ECOA and UDAAP.
Concentration risk – watch for over-reliance on a small number of vendors or models across the industry.
AI can expand member services and efficiency, but credit unions must embed safeguards around privacy, fairness, consumer protection, and vendor oversight to capture value responsibly.
6. AI-Specific Fraud Risks
Risks don’t come only from the vendor side of LLM applications. FinCEN’s November 2024 alert highlights how financial institutions are increasingly targeted by fraud schemes using AI-generated deepfake media, especially during identity verification and account opening processes:
Evaluate vendor phishing risk – ensure vendor setups don’t expose end users to LLM-driven phishing attacks.
Recognize red flags in user behavior – watch for fake IDs, mismatched documents, inconsistent identity information, or irregular account behavior.
Enhance identity checks – use liveness tests, biometric verification, and video/voice checks to confirm legitimacy.
Monitor account activity – flag rapid transactions, high volumes of chargebacks, odd IP address usage, and coordinated activity indicating potential synthetic fraud.
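The monitoring bullets above can start as simple scoring rules that route accounts to human review. This sketch is illustrative only; the thresholds and field names are our assumptions, not FinCEN’s:

```python
def fraud_risk_flags(activity: dict) -> list[str]:
    """Return red flags for an account's recent activity (illustrative thresholds)."""
    flags = []
    if activity.get("txns_last_hour", 0) > 20:
        flags.append("rapid transactions")
    if activity.get("chargebacks_30d", 0) > 5:
        flags.append("high chargeback volume")
    if len(set(activity.get("ip_countries", []))) > 3:
        flags.append("odd IP address usage")
    if activity.get("id_doc_mismatch", False):
        flags.append("mismatched identity documents")
    return flags

# Hypothetical account activity snapshot:
suspicious = {
    "txns_last_hour": 45,
    "ip_countries": ["US", "RU", "NG", "VN"],
    "id_doc_mismatch": True,
}
flags = fraud_risk_flags(suspicious)
# Any non-empty flag list should be escalated for human review.
```

Rules like these are a floor, not a ceiling: coordinated synthetic-fraud rings often stay under per-account thresholds, so cross-account correlation still needs dedicated tooling.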
Deepfake-powered fraud is already here and a growing share of the fraud landscape. Strengthen identity controls, monitor unusual behaviors, and escalate suspicious cases immediately.
Takeaways
AI is powerful, but inherently risky if unmanaged. NCUA is stepping in to provide guidance on how to approach AI security and risk, but guidance is always evolving.
That means credit unions must put a deeper focus on third-party risk management, and these teams must lead, not follow. Use this checklist as the foundation for a sensible, explainable, and compliant AI oversight program that protects both your institution and your members.
If you want to talk more deeply about best practices around AI Governance and Data Security, let us know below.