Why Third-Party Cyber Risk is a board-level concern in the age of AI

Aug 6, 2025

Read more

When your company rolls out a new AI-powered tool or integrates a slick third-party API, the move usually feels like a win – faster workflows, smarter automation, better customer experiences. But behind the scenes, every new vendor you adopt quietly extends your attack surface.

And in 2025, those vendor connections aren’t just risky – they’re potentially catastrophic.


The new normal: AI-powered and deeply integrated

Third-party risk management (TPRM) has long been the backbone of enterprise cybersecurity. These teams have helped companies stay resilient by thoroughly evaluating vendors and managing complex regulatory requirements.

But the pace of technological change – especially with AI – is now faster than ever. Vendors are embedding LLMs and launching AI-driven features with a frequency that outpaces traditional review cycles. What was secure and well-understood six months ago may look very different today.

In this new reality, even the most robust TPRM programs need to adapt. The fundamentals still matter – but the surface area has expanded.


How AI is complicating Third-Party Risk

AI isn't just another technology layer. It's a paradigm shift – and it comes with a new class of cyber and governance challenges:

  • Black-box behavior: Many AI vendors don’t fully disclose how their models work or where data flows, making due diligence harder.


  • Rapid AI feature releases: Vendors are shipping new AI-powered features at breakneck speed — often without notice. A tool that started as a simple SaaS product last quarter might now embed an LLM, use your data to fine-tune models, or rely on external APIs that introduce additional risk.


  • Data risk: LLMs and AI services often require access to sensitive files, systems, or communications – and it’s not always clear what’s logged, retained, or used for model training…let alone whether that data flows through additional third-party systems that haven’t been vetted.

A vendor you trust with scheduling meetings could, in theory, be training their LLM on your calendar metadata – or worse, customer communications.


Third-Party Risk has always been on C-Suite’s top of mind – AI just makes it less predictable

Executives and boards have long understood the importance of managing third-party risk. What’s changing now isn’t the importance of the problem – it’s the nature of the uncertainty.

One of the most serious is indirect prompt injection – a tactic where malicious instructions are hidden in data that an AI system might later ingest, such as emails, web content, PDFs, or even help desk tickets. If a vendor’s AI system (say, a summarization tool) encounters this poisoned input, it can be manipulated into leaking data, sending phishing emails, or performing unauthorized actions – all without the vendor or the user realizing it’s happening.

These attacks are fundamentally different from traditional vulnerabilities. They don’t exploit code – they exploit model behavior. And because many vendors now embed LLMs deep into their platforms, an indirect injection attack in one service can cascade into downstream systems, including your own.


New blind spots: Where TPRM is adapting for AI

Even the most well-established TPRM teams are being asked to adapt to a rapidly changing landscape. The fundamentals still apply, but the rise of AI introduces behaviors and risks that weren’t on the radar even a year ago – and many existing frameworks aren’t equipped to catch them yet.

Here are a few areas where traditional practices are evolving:

  • Surfacing novel types of risk — subtle, AI-specific signals that aren’t always easy to detect or define within a vendor’s offering.


  • Expansion of Questionnaires to cover AI-specific concerns – like model usage, fine-tuning practices, data retention, or third-party dependencies.


  • Monitoring needs to be continuous, especially as vendors frequently update their platforms with new AI-powered features — sometimes adding access to web search, new data integrations, or autonomous agents without prior notice.

This isn’t a matter of gaps in discipline – it’s a shift in context.

The TPRM programs that succeed in this new era will be the ones that adapt quickly – with the same rigor, equipped with the right intel.