Download Full Report

How do you protect your data when using AI in Microsoft Copilot?

Here’s what you need to know around the AI risk.

We assess each vendor’s AI risk posture using 26 risk vectors mapped to leading frameworks like OWASP for LLM. This page will show a high-level snapshot of some information for each vendor. For the full, vendor-specific AI risk report, click the image below.

Feature Overview

Each vendor provides a unique set of features, and implements AI in certain ways. Let's see what types of AI features they have to offer:

Google Workspace
Microsoft Copilot

Google AI Studio

Q&A interface for accessing Gemini models supporting options to provide fine-tuning data, upload files for analysis, and communicate via text or audio.

Power Apps

Build and customize no-code and low-code apps that automate processes and connect to data; integrate AI-powered components to analyze information and generate insights.

… see more in full report

Want to see all AI features?

Pricing Details

Let's dive into the relevant pricing details for accessing AI, as providers vary widely in their pricing models and cost drivers.

Here is the pricing model breakdown for Microsoft Copilot

Here is the pricing model breakdown for Microsoft Copilot

Here is the pricing model breakdown for Microsoft Copilot

Vendor
Description
Microsoft Copilot

Freemium

Offers free tiers

[x]

Per License/Seat

Charges per user or access point

[x]

Consumption-Based

Pay per taken, API call, inference, etc.

[x]

Outcome-Based

Pay only when certain results or performance goals are achieved


Some Quick facts about each vendor

Here are some facts about Microsoft Copilot

Microsoft Copilot

Microsoft Copilot was officially launched in 2023 as part of Microsoft’s broader push into AI productivity tools.

The Copilot branding now spans multiple Microsoft products, from Windows and Edge to GitHub, showing its central role in their AI strategy.

The name “Copilot” reflects Microsoft’s vision of AI tools acting as supportive partners for users, rather than replacing human decision-making.

Even well-secured apps can leak data

Even well-secured apps can leak data

If your app pulls in third-party content — like URLs, comments, or files — LLM features can be tricked into leaking private data through indirect prompt injection. Most teams don’t even realize it’s happening.

According to a 2025 Gartner survey,


73%

of enterprises have suffered an AI-related security breach in the last year



$4.8M

average cost per incident — with indirect prompt injection and data leakage via LLMs now among the top attack vectors for financial services and healthcare organizations



In recent incidents, platforms like ChatGPT and Microsoft 365 Copilot were exploited by attackers using hidden prompts and indirect content injection, leading to unintended data exposure

Click below to access the full AI Features, Security, and Risk report

Click below to access the full AI Features, Security, and Risk report

  • Global 1000 Financial Institution
    Fortune 50 Technology Company
    Fortune 50 Healthcare Company
    Top 10 Law Firm
    Fortune 100 Technology Company
    Top 20 Law Firm
    Cloud 100 Technology Company
    Public YC Company
    Am Law 100 Law Firm
    Global 2000 Software Company
    Cloud 100 Technology Company

We are trusted by top organizations to navigate AI security and risk. To see what this looks like, check out our platform below

Unlock full AI Risk report

Unlock full AI Risk report

Download Full Report

Download Full Report