Hackers Can Hijack Your Terminal Via Prompt Injection using LLM-powered Apps

Share:

Researchers have uncovered that Large Language Models (LLMs) can generate and manipulate ANSI escape codes, potentially creating new security vulnerabilities in terminal-based applications.

ANSI escape sequences are a standardized set of control characters used by terminal emulators to manipulate the appearance and behavior of text displays.

They enable features such as text color changes, cursor movement, blinking text, and more. Terminal emulators interpret these sequences to provide dynamic functionality, but they’ve also historically been a source of vulnerabilities.

This discovery, initially reported by Leon Derczynski and further investigated by security researchers, raises important concerns about the security of LLM-integrated command-line tools.

Leveraging 2024 MITRE ATT&CK Results for SME & MSP Cybersecurity Leaders – Attend Free Webinar

ANSI escape codes, which are special character sequences used to control terminal behavior, can be exploited by LLMs in several concerning ways:

  • Generating flashing text and color changes
  • Manipulating cursor position and screen content
  • Creating hidden text in responses
  • Copying text to clipboards without user consent
  • Executing denial of service attacks
  • Creating potentially malicious clickable hyperlinks
  • Triggering DNS requests on macOS systems

To test these vulnerabilities, a researcher created a Python-based app, dillma.py, that integrates with LLMs.

In one demonstration, a malicious file was fed into the app, which then used ANSI codes to produce flashing, colored text and auditory beeps in the terminal

Another test showcased how LLM-generated output could add clickable links that potentially leak user data, particularly in environments like Visual Studio Code’s terminal, which supports hyperlink rendering.

Security experts have demonstrated that these vulnerabilities can be exploited through two primary methods: direct prompting of LLMs to generate control characters, and utilizing code interpreter tools.

The implications are particularly serious for LLM-powered CLI applications that don’t properly sanitize their output.

“It’s important for developers and application designers to consider the context in which they insert LLM output, as the output is untrusted and could contain arbitrary data,” notes the research.

This discovery follows previous findings about Unicode Tags enabling hidden communication in web applications, suggesting a pattern of legacy features creating unexpected attack surfaces in AI applications.

To address these security concerns, researchers recommend implementing several protective measures:

  • Encoding ANSI control characters by default
  • Adding specific options to enable control characters when necessary
  • Implementing allow-listing for approved characters
  • Conducting thorough end-to-end testing of applications

This discovery serves as a reminder that as AI technology continues to evolve, security practitioners must remain vigilant about potential vulnerabilities, especially when integrating LLMs with existing systems and protocols.

The research community expects more hidden vulnerabilities to be discovered as investigation into LLM security continues, highlighting the ongoing need for robust security measures in AI-powered applications.

Researchers have uncovered that Large Language Models (LLMs) can generate and manipulate ANSI escape codes, potentially creating new security vulnerabilities in terminal-based applications.

ANSI escape sequences are a standardized set of control characters used by terminal emulators to manipulate the appearance and behavior of text displays.

They enable features such as text color changes, cursor movement, blinking text, and more. Terminal emulators interpret these sequences to provide dynamic functionality, but they’ve also historically been a source of vulnerabilities.

This discovery, initially reported by Leon Derczynski and further investigated by security researchers, raises important concerns about the security of LLM-integrated command-line tools.

Leveraging 2024 MITRE ATT&CK Results for SME & MSP Cybersecurity Leaders – Attend Free Webinar

ANSI escape codes, which are special character sequences used to control terminal behavior, can be exploited by LLMs in several concerning ways:

Generating flashing text and color changes
Manipulating cursor position and screen content
Creating hidden text in responses
Copying text to clipboards without user consent
Executing denial of service attacks
Creating potentially malicious clickable hyperlinks
Triggering DNS requests on macOS systems
To test these vulnerabilities, a researcher created a Python-based app, dillma.py, that integrates with LLMs.

In one demonstration, a malicious file was fed into the app, which then used ANSI codes to produce flashing, colored text and auditory beeps in the terminal.

Another test showcased how LLM-generated output could add clickable links that potentially leak user data, particularly in environments like Visual Studio Code’s terminal, which supports hyperlink rendering.

Security experts have demonstrated that these vulnerabilities can be exploited through two primary methods: direct prompting of LLMs to generate control characters, and utilizing code interpreter tools.

The implications are particularly serious for LLM-powered CLI applications that don’t properly sanitize their output.

“It’s important for developers and application designers to consider the context in which they insert LLM output, as the output is untrusted and could contain arbitrary data,” notes the research.

This discovery follows previous findings about Unicode Tags enabling hidden communication in web applications, suggesting a pattern of legacy features creating unexpected attack surfaces in AI applications.

To address these security concerns, researchers recommend implementing several protective measures:

Encoding ANSI control characters by default
Adding specific options to enable control characters when necessary
Implementing allow-listing for approved characters
Conducting thorough end-to-end testing of applications
This discovery serves as a reminder that as AI technology continues to evolve, security practitioners must remain vigilant about potential vulnerabilities, especially when integrating LLMs with existing systems and protocols.

The research community expects more hidden vulnerabilities to be discovered as investigation into LLM security continues, highlighting the ongoing need for robust security measures in AI-powered applications.

Investigate Real-World Malicious Links,Malware & Phishing Attacks With ANY.RUN – Tree for Free

Balaji

Leave a Comment

Your email address will not be published. Required fields are marked *

loader-image
London, GB
3:48 am, Mar 17, 2025
weather icon 6°C
L: 5° | H: 6°
overcast clouds
Humidity: 83 %
Pressure: 1028 mb
Wind: 7 mph E
Wind Gust: 12 mph
UV Index: 0
Precipitation: 0 mm
Clouds: 100%
Rain Chance: 0%
Visibility: 10 km
Sunrise: 6:09 am
Sunset: 6:07 pm
DailyHourly
Daily ForecastHourly Forecast
Today 9:00 pm
weather icon
5° | 6°°C 0 mm 0% 10 mph 82 % 1028 mb 0 mm/h
Tomorrow 9:00 pm
weather icon
3° | 9°°C 0 mm 0% 12 mph 69 % 1027 mb 0 mm/h
Wed Mar 19 9:00 pm
weather icon
3° | 15°°C 0 mm 0% 6 mph 82 % 1022 mb 0 mm/h
Thu Mar 20 9:00 pm
weather icon
8° | 16°°C 0 mm 0% 8 mph 74 % 1021 mb 0 mm/h
Fri Mar 21 9:00 pm
weather icon
9° | 13°°C 0.2 mm 20% 6 mph 93 % 1015 mb 0 mm/h
Today 6:00 am
weather icon
3° | 5°°C 0 mm 0% 7 mph 82 % 1027 mb 0 mm/h
Today 9:00 am
weather icon
6° | 6°°C 0 mm 0% 10 mph 70 % 1028 mb 0 mm/h
Today 12:00 pm
weather icon
8° | 8°°C 0 mm 0% 10 mph 55 % 1028 mb 0 mm/h
Today 3:00 pm
weather icon
8° | 8°°C 0 mm 0% 10 mph 56 % 1027 mb 0 mm/h
Today 6:00 pm
weather icon
6° | 6°°C 0 mm 0% 10 mph 73 % 1028 mb 0 mm/h
Today 9:00 pm
weather icon
5° | 5°°C 0 mm 0% 9 mph 76 % 1028 mb 0 mm/h
Tomorrow 12:00 am
weather icon
5° | 5°°C 0 mm 0% 9 mph 67 % 1027 mb 0 mm/h
Tomorrow 3:00 am
weather icon
4° | 4°°C 0 mm 0% 7 mph 69 % 1026 mb 0 mm/h
Name Price24H (%)
Bitcoin(BTC)
€76,731.15
-0.87%
Ethereum(ETH)
€1,754.72
-1.06%
Tether(USDT)
€0.92
-0.01%
XRP(XRP)
€2.16
-1.42%
Solana(SOL)
€118.04
-5.10%
USDC(USDC)
€0.92
0.01%
Dogecoin(DOGE)
€0.158219
-1.44%
Shiba Inu(SHIB)
€0.000012
4.74%
Pepe(PEPE)
€0.000006
-5.30%
Peanut the Squirrel(PNUT)
€0.189641
20.47%
Scroll to Top