Threat Intelligence
A One Pixel Image can Leak Your Data in HuggingChat
HuggingChat (HuggingFace’s app for chatting with models) has been found vulnerable to zero-click data exfiltration risks via indirect prompt injection.

By including a malicious instruction in a document, webpage, MCP tool response, or other data source, an attacker can manipulate models in HuggingChat to output unsafe, dynamically generated Markdown images that exfiltrate data from other sources the AI has access to when they are rendered in the chat.
This vulnerability was responsibly disclosed to HuggingFace on 12/3/2025. As HuggingFace’s security team has not responded to the disclosure, we are disclosing publicly to ensure users are informed of the risks and can take precautions accordingly.
In this article, we demonstrate that an injection in a webpage can manipulate an AI model to output a malicious Markdown image that exfiltrates confidential financial data from a user’s uploaded document.
The Attack Chain
The user uploads a document with financial data and adds an external ‘reviews’ website to the chat context using ‘Fetch from URL’.
The user uploads a sensitive document (.txt with financial data) using ‘Add text file > Upload from device’.

The user adds an untrusted data source (user reviews on an external site) to the model’s context window using ‘add text file > Fetch from URL’. This website looks benign to the user reading it, but contains a hidden prompt injection visible to the LLM.

The user makes a query, asking about the negative review on the website.

The model reads the prompt injection and is manipulated into constructing a malicious Markdown image.

Here, the model output indicates that it believes it must build a visual to provide the negative reviewer (this is what the prompt injection requested the model to do).
Now, the model outputs Markdown syntax to create a malicious image. In the Markdown syntax, the image's source URL contains an attacker-controlled domain and data from the user’s uploaded file stored in query parameters.
When the Markdown syntax is processed by the user's browser, a request is made to the attacker's server, retrieving a 1-pixel image.
The attacker can read the user’s financial data from their server logs.
When the user's browser makes a request to retrieve the 1-pixel image from the attacker's domain, the financial data stored in the query parameters is transmitted with the request. Then, the attacker can read the financial data from the user’s uploaded file in their server logs:
Recommended Remediations for HuggingChat
Programmatically prohibit the rendering of Markdown images from external sites in model outputs without explicit user approval (e.g., add a warning confirmation asking the user if they would like to render the image, displaying the full URL being requested before the image is inserted).
Implement a strong Content Security Policy. This will prevent network requests from being made to unapproved external domains.