How can you protect your data when using AI in Harvey?
Here’s what you need to know around the AI risk.
We assess each vendor’s AI risk posture using 26 risk vectors mapped to leading frameworks like OWASP for LLM. This page will show a high-level snapshot of some information for each vendor. For the full, vendor-specific AI risk report, click the image below.
Feature Overview
Each vendor provides a unique set of features, and implements AI in certain ways. Let's see what types of AI features they have to offer:
Harvey |
---|
Gen AI search leveraging by real-time data pertaining to taxes, public company disclosures, the Court of Justice of the European Union, additional regional case law, a database of law firm memos, and web search results. |
… see more in full report |
Want to see all AI features?
Pricing Details
Let's dive into the relevant pricing details for accessing AI, as providers vary widely in their pricing models and cost drivers.
Vendor | Description | Harvey |
---|---|---|
Freemium | Offers free tiers | |
Per License/Seat | Charges per user or access point | |
Usage-Based | Pay per taken, API call, inference, etc. | |
Outcome-Based | Pay only when certain results or performance goals are achieved |
Some Quick facts about each vendor
Here are some facts about Harvey
Harvey |
---|
Rapidly scaled to become a global leader in legal AI, serving over 300 professional service firms across 53 countries. |
Raised significant recent funding ($300M Series E), now valued at $5B as of June 2025. |
Focus is on augmenting — not replacing — legal professionals, aiming for widespread daily use among lawyers. |
If your app pulls in third-party content — like URLs, comments, or files — LLM features can be tricked into leaking private data through indirect prompt injection. Most teams don’t even realize it’s happening.
According to a 2025 Gartner survey,
73%
of enterprises have suffered an AI-related security breach in the last year
$4.8M
average cost per incident — with indirect prompt injection and data leakage via LLMs now among the top attack vectors for financial services and healthcare organizations
In recent incidents, platforms like ChatGPT and Microsoft 365 Copilot were exploited by attackers using hidden prompts and indirect content injection, leading to unintended data exposure
We are trusted by top organizations to navigate AI security and risk. To see what this looks like, check out our platform below
This week's top 7 searches
ChatGPT for Enterprise: AI Features, Security & Risk Report
CoCounsel: AI Features, Security & Risk Report
Cursor: AI Features, Security & Risk Report
Github Copilot: AI Features, Security & Risk Report
Gradient AI: AI Features, Security & Risk Report
Microsoft Copilot: AI Features, Security & Risk Report
Zoom: AI Features, Security & Risk Report