How do you protect your data when using AI in Microsoft Copilot?
Here’s what you need to know around the AI risk.
We assess each vendor’s AI risk posture using 26 risk vectors mapped to leading frameworks like OWASP for LLM. This page will show a high-level snapshot of some information for each vendor. For the full, vendor-specific AI risk report, click the image below.
Feature Overview
Each vendor provides a unique set of features, and implements AI in certain ways. Let's see what types of AI features they have to offer:
Google Workspace | Microsoft Copilot |
---|---|
Q&A interface for accessing Gemini models supporting options to provide fine-tuning data, upload files for analysis, and communicate via text or audio. |
Build and customize no-code and low-code apps that automate processes and connect to data; integrate AI-powered components to analyze information and generate insights. |
… see more in full report |
Want to see all AI features?
Pricing Details
Let's dive into the relevant pricing details for accessing AI, as providers vary widely in their pricing models and cost drivers.
Vendor | Description | Microsoft Copilot |
---|---|---|
Freemium | Offers free tiers | |
Per License/Seat | Charges per user or access point | |
Consumption-Based | Pay per taken, API call, inference, etc. | |
Outcome-Based | Pay only when certain results or performance goals are achieved |
Some Quick facts about each vendor
Here are some facts about Microsoft Copilot
Microsoft Copilot |
---|
Microsoft Copilot was officially launched in 2023 as part of Microsoft’s broader push into AI productivity tools. |
The Copilot branding now spans multiple Microsoft products, from Windows and Edge to GitHub, showing its central role in their AI strategy. |
The name “Copilot” reflects Microsoft’s vision of AI tools acting as supportive partners for users, rather than replacing human decision-making. |
If your app pulls in third-party content — like URLs, comments, or files — LLM features can be tricked into leaking private data through indirect prompt injection. Most teams don’t even realize it’s happening.
According to a 2025 Gartner survey,
73%
of enterprises have suffered an AI-related security breach in the last year
$4.8M
average cost per incident — with indirect prompt injection and data leakage via LLMs now among the top attack vectors for financial services and healthcare organizations
In recent incidents, platforms like ChatGPT and Microsoft 365 Copilot were exploited by attackers using hidden prompts and indirect content injection, leading to unintended data exposure
We are trusted by top organizations to navigate AI security and risk. To see what this looks like, check out our platform below
This week's top 7 searches
ChatGPT for Enterprise: AI Features, Security & Risk Report
CoCounsel: AI Features, Security & Risk Report
Cursor: AI Features, Security & Risk Report
Github Copilot: AI Features, Security & Risk Report
Gradient AI: AI Features, Security & Risk Report
Harvey: AI Features, Security & Risk Report
Zoom: AI Features, Security & Risk Report