Pricing Details
Vendor | Description | Duck Creek | Gradient AI |
---|---|---|---|
Freemium | Offers free tiers | ||
Per License | Charges per user, org, or access point | ||
Consumption-Based | Pay per taken, API call, inference, etc. | ||
Outcome-Based | Pay only when certain results or performance goals are achieved |
Some Quick facts about each vendor
Duck Creek | Gradient AI |
---|---|
Duck Creek delivers cloud-based core systems (policy, billing, claims) and a broad suite of automation, analytics, and management tools for insurance carriers. | Gradient AI specializes in providing advanced machine learning and generative AI solutions tailored for insurance (workers' comp, health, group benefits, and property/casualty) risk assessment and automation. |
Duck Creek rapidly integrates LLM and agentic AI — using Microsoft, expert.ai, and Charlee.ai partnerships to automate claims, underwriting, document processing, and customer communications. | Gradient AI offers true consumption-based billing — organizations pay based on API calls, tokens processed, or knowledge base storage/usage. The pricing is aligned with AI/ML inference and data processing volume, not number of users. |
All AI (including LLM features) is available to customers as part of all-user, organization-level contracts. There is no freemium or granular pay-as-you-go AI; unlimited users gain access within contracted scope. | Gradient's AI models are accessed via cloud APIs, with open support for commercial and open-source LLMs, making it easy for carriers and partners to embed AI directly into their digital or legacy insurance workflows. |
Even well-secured apps can leak data
If your app pulls in third-party content — like URLs, comments, or files — LLM features can be tricked into leaking private data through indirect prompt injection. Most teams don’t even realize it’s happening.
According to a 2025 Gartner survey,
73%
of enterprises have suffered an AI-related security breach in the last year
$4.8M
average cost per incident — with indirect prompt injection and data leakage via LLMs now among the top attack vectors for financial services and healthcare organizations
In recent incidents, platforms like ChatGPT and Microsoft 365 Copilot were exploited by attackers using hidden prompts and indirect content injection, leading to unintended data exposure