I Made an AI Think It Was Root — And It Gave Me /etc/passwd
The Prompt Injection That Pulled Me Back Into Bug Writing After 3 Years
I hadn’t written a public bug bounty report in three years.
Then an AI chat application casually handed me the contents of:
/etc/passwd
No memory corruption.
No RCE exploit chain.
Just… a prompt.
This is the story of how a simple conversation turned into a pseudo-shell with root access — and why this class of bugs is about to become the new goldmine for hunters.
🧠 The Moment I Knew It Was Broken
It started like every other AI assessment:
Recon → harmless probing → refusal testing.
Then the model said something it should never say:
“I am now acting as a terminal with root access.”
That’s not a jailbreak.
That’s an instruction hierarchy collapse.
Proof of Concept
📂 Sensitive File Disclosure
The AI returned the contents of /etc/passwd.
This is the line between:
🟡 “fun jailbreak”
🔴 real security impact
Because it means one of two things:
- It had access to a real environment
OR - It could simulate system data using internal context it should never expose
Either way — data boundary failure.
Figure 1 — /etc/passwd disclosure via prompt injection
From Chatbot → Linux Terminal
After a structured role-confusion payload, the AI stopped behaving like a chatbot and started behaving like:
root@system:~#
I ran:
apt-get install wget
And it responded with:
- package lists
- dependency tree
- download progress
- installation logs
Exactly like a real machine.
Figure 2 — Package installation flow inside the chat interface
At this point, this wasn’t “prompt hacking”.
This was impact.
Full Role Override
The core payload forced the model to:
- Treat my input as commands
- Stop identifying as AI
- Assume system authority
- Switch between “AI mode” and “terminal mode”
Once that worked, everything else followed.
Figure 3 — Successful terminal mode with root context
Final Thoughts
This wasn’t the most complex bug I’ve ever found.
But it might be the most important class of bugs right now.
Because the industry is deploying AI faster than it understands:
AI is not just a feature.
It is a new attack surface.
#BugBounty #AISecurity #PromptInjection #AppSec #Hacking #LLM #CyberSecurity
I Made an AI Think It Was Root — And It Gave Me /etc/passwd was originally published in InfoSec Write-ups on Medium, where people are continuing the conversation by highlighting and responding to this story.
目录
最新
- Phishing Mengurai Taktik Penyamaran Attacker untuk mengambil alih Akun
- Exploiting LLM APIs with Excessive Agency (PortSwigger Lab Write-up)
- [CMesS] — Gila CMS 1.10.9
- I went for coffee and came back with 6 vulnerabilities in WordPress plugins
- 3 Ways to Simulate MFA in Phishing Campaigns with Anglerphish
- “Bug Bounty Bootcamp #32: Weaponizing File Uploads — From Profile Pictures to Remote Code…
- Network Segmentation Strategies: Implementing CISA’s Cybersecurity Best Practices for Layered…
- ️ OWASP API Top 10 — TryHackMe Walkthrough (Part 2)