Security researchers have uncovered a novel and subtle attack vector against AI assistants that exploits the difference between human and machine perception. The technique uses font-rendering tricks on a web page to create text that is invisible to the human eye but is readable and interpretable as a command by an AI agent. This allows an attacker to embed hidden, malicious instructions on a website. When an AI assistant, such as a browser extension or a web scraper, processes the content of the 'poisoned' page, it may execute these commands without the user's knowledge or consent. This could lead to a range of security incidents, from data theft to the AI performing unauthorized actions on the user's behalf, representing a new frontier in adversarial AI.
The attack is a new form of prompt injection or instruction hijacking, but it relies on visual manipulation rather than just text-based tricks. The core idea is to create two different 'views' of the same web content: one for the human user and one for the AI model that is processing the page.
How it Works (Conceptual): An attacker could use CSS and custom fonts to manipulate the appearance of text. For example:
.woff2, .ttf) so that a human sees one word (e.g., "Welcome") while the underlying character codes that the AI reads spell out a malicious command.When an AI assistant with access to the page's content (e.g., through a screen reader API or by parsing the DOM) processes the text, it reads the literal character codes, not the visual representation. It would therefore pick up the hidden command and, if it has the necessary permissions, execute it.
This technique represents a vulnerability in the abstraction layer between rendered content and the underlying data. AI models, especially those that process raw HTML or accessibility tree data, are susceptible because they trust the textual content without understanding the visual context in which it is presented.
This is a new type of attack that doesn't fit neatly into existing MITRE ATT&CK techniques but is related to the concept of T1204 - User Execution. In this case, the user is not directly executing anything, but their act of directing an AI agent to a malicious page causes the execution.
The potential impact of this attack vector will grow as AI agents become more autonomous and are granted more permissions. Potential scenarios include:
.woff, .ttf)Detecting this is extremely challenging for traditional security tools.
AI assistants should be heavily sandboxed with strict permissions, requiring user confirmation for any sensitive action.
Users should be made aware of the risks associated with granting broad permissions to AI agents.
Researchers disclose the font-rendering attack technique.

Cybersecurity professional with over 10 years of specialized experience in security operations, threat intelligence, incident response, and security automation. Expertise spans SOAR/XSOAR orchestration, threat intelligence platforms, SIEM/UEBA analytics, and building cyber fusion centers. Background includes technical enablement, solution architecture for enterprise and government clients, and implementing security automation workflows across IR, TIP, and SOC use cases.
Help others stay informed about cybersecurity threats
Every tactic, technique, and sub-technique used in this threat has been identified and mapped to the MITRE ATT&CK framework for consistent, actionable threat language.
Observables and indicators of compromise (IOCs) have been extracted and cataloged. Risk has been assessed and correlated with known threat actors and historical campaigns.
Detection rules, incident response steps, and D3FEND-aligned mitigation strategies are included so your team can act on this intelligence immediately.
Structured threat data is packaged as a STIX 2.1 bundle and can be visualized as an interactive graph — relationships between actors, malware, techniques, and indicators.
Sigma detection rules are derived from the threat techniques in this article and can be converted for deployment across any major SIEM or EDR platform.