
OpenClaw/Clawdbot: Why SOCs Must Block This AI Agent Immediately
Over 1,800 exposed control panels, supply chain attacks, and a VS Code trojan deploying ScreenConnect RAT. Here's the evidence-based case for blocking Clawdbot and the actionable playbook your SOC needs.
The AI agent revolution has its first major security catastrophe. If your organization hasn't already taken action, you're likely exposed.
OpenClaw (formerly Clawdbot and Moltbot) is an open-source AI agent platform that promised to revolutionize personal productivity. Instead, it has become a case study in how autonomous AI systems can become catastrophic security liabilities. In late January 2026, security researchers uncovered a cascade of critical vulnerabilities, supply chain attacks, and active exploitation that should alarm every security leader.
Bottom Line: If any employee in your organization has installed Clawdbot/Moltbot/OpenClaw, you have an immediate, high-severity risk that requires containment. This briefing provides the evidence and the playbook.
What Is OpenClaw (Clawdbot/Moltbot)?
Clawdbot launched as an ambitious open-source project designed to be a "personal AI agent," an autonomous assistant that could:
The platform connects large language models (Anthropic Claude, OpenAI GPT, Google Gemini) to these capabilities, creating what the developers called a "local AI that actually does things."
The problem? Broad system access + internet connectivity + misconfigured deployments = security nightmare.
Following a trademark dispute with Anthropic (makers of Claude), the project was hastily rebranded from "Clawdbot" to "Moltbot" and then "OpenClaw." This created brand confusion that attackers immediately exploited.
The Breach Timeline: January 2026
Week 1: Exposed Control Panels Discovered
Security researcher Jamieson O'Reilly began investigating Clawdbot deployments and discovered something alarming: hundreds of Clawdbot Control panels exposed directly to the internet.
Using Shodan, researchers identified the scope:
| Date | Exposed Instances |
|---|
| January 20, 2026 | 900+ |
| January 22, 2026 | 1,673 |
| January 24, 2026 | 1,842 |
| January 28, 2026 | 4,000+ |
The critical finding: 92% of exposed instances had authentication completely disabled.
These weren't just dashboards. The exposed control panels provided:
Week 2: Supply Chain Attack Exploits Rebrand Chaos
When Anthropic's legal team forced the "Clawdbot" → "Moltbot" rebrand, the project's creator briefly released the original GitHub organization name and Twitter handle. Within 10 seconds, attackers seized both.
The impersonation campaign included:
moltbot[.]you, clawbot[.]ai, clawdbot[.]you$CLAWD on Solana reached $16M market cap before crashingThis wasn't opportunistic. It was coordinated infrastructure for supply chain attacks.
Week 3: Malicious VS Code Extension Deploys RAT
Before the legitimate Clawdbot team could publish an official VS Code extension, attackers published a fake "ClawdBot Agent" extension to the VS Code Marketplace.
The payload: Upon installation, the extension:
Code.exe binaryMicrosoft removed the extension after researcher reports, but the damage window was significant.
Ongoing: Prompt Injection Attacks
The most insidious attack vector requires no vulnerability at all. It exploits Clawdbot's intended functionality.
Attack flow:
Documented attacks include:
The Evidence: CVEs and Technical Details
Known CVEs
| CVE | Vulnerability | CVSS | Impact |
| CVE-2025-49596 | Unauthenticated Access | Critical | Full administrative control |
| CVE-2025-6514 | Command Injection | Critical | Remote code execution |
| CVE-2025-52882 | Arbitrary File Access | High | Data theft, persistence |
Default Port and Service
| Service | Port | Protocol |
| Clawdbot Gateway (Control UI) | 18789 | HTTP/WebSocket |
Shodan Detection Queries
http.title:"Clawdbot Control"
http.title:"Clawdbot Control" port:18789
port:18789 http
Known Malicious Domains (Defanged)
| Domain | Type |
moltbot[.]you | Typosquat/Phishing |
clawbot[.]ai | Typosquat/Phishing |
clawdbot[.]you | Typosquat/Phishing |
github[.]com/gstarwd/clawbot | Malicious Clone |
Malicious VS Code Extension
| Extension Name | Publisher | Payload |
| ClawdBot Agent | (Removed) | ScreenConnect RAT |
Why SOCs Must Block Clawdbot: The Business Case
1. Shadow AI Is Already in Your Environment
Clawdbot gained viral popularity among developers and knowledge workers. Unlike traditional shadow IT, it doesn't require admin privileges. It runs in userspace with the user's full permissions.
If your employees discovered ChatGPT, they've probably discovered Clawdbot.
2. Credential Exposure Is Near-Certain
Clawdbot stores sensitive credentials in plaintext on the local filesystem:
Even properly configured instances are honey pots for commodity malware that targets local files.
3. It's a Persistent Backdoor
A compromised Clawdbot instance provides:
4. Regulatory and Liability Risk
If a breach traces back to an unmanaged AI agent:
SOC Action Playbook
Immediate Actions (First 24 Hours)
1. Network-Level Blocking
Block inbound AND outbound traffic on port 18789 at the perimeter firewall:
iptables -A INPUT -p tcp --dport 18789 -j DROP
iptables -A OUTPUT -p tcp --dport 18789 -j DROP
2. DNS Sinkhole Malicious Domains
Add to your DNS blocklist:
moltbot.youclawbot.aiclawdbot.you*clawdbot* or *moltbot*3. EDR/Endpoint Hunt
Search for:
clawdbot, moltbot, openclawclawdbot, moltbot, .clawdbot4. VS Code Extension Audit
find /Users -name "extensions.json" -path "*/.vscode/*" 2>/dev/null | xargs grep -l -i "clawdbot\|clawbot"Short-Term Actions (First Week)
5. Email Gateway Rules
Implement content inspection rules to detect prompt injection patterns:
ignore previous instructions6. VPN/Remote Access Audit
Check for any Clawdbot instances exposed through VPN tunnels or remote access solutions.
7. Cloud Service Audit
Review connected applications in:
Ongoing Monitoring
8. Shodan/Censys Monitoring
Set alerts for your external IP ranges:
http.title:"Clawdbot Control" ip:YOUR_RANGE9. Threat Intelligence Feeds
Subscribe to feeds covering:
Detection Rules
Sigma Rule (Network)
title: Clawdbot Control Panel Access
status: experimental
logsource:
category: proxy
detection:
selection:
c-uri|contains: 'Clawdbot Control'
selection_port:
dst_port: 18789
condition: selection or selection_port
falsepositives:
- Legitimate developer testing (should be rare)
level: high
Yara Rule (File)
rule Clawdbot_Config {
meta:
description = "Detects Clawdbot configuration files"
severity = "high"
strings:
$s1 = "clawdbot" ascii nocase
$s2 = "moltbot" ascii nocase
$s3 = "anthropic_api_key" ascii
$s4 = "openai_api_key" ascii
condition:
any of ($s1, $s2) and any of ($s3, $s4)
}
The Board Brief
What to tell leadership:
"We have identified a critical shadow AI risk from an application called Clawdbot/Moltbot/OpenClaw. Security researchers discovered over 1,800 exposed instances globally, with 92% lacking authentication. Active exploitation includes credential theft, supply chain attacks, and malware distribution. We have implemented blocking controls across our network and are conducting an enterprise-wide hunt. This incident reinforces the need for formal AI governance policies to prevent employees from deploying unapproved AI agents with broad system access."
Key metrics to report:
Lessons Learned: The Larger AI Agent Risk
Clawdbot is the canary in the coal mine. As AI agents become more capable and more popular, we should expect:
The answer isn't to ban AI. The answer is to bring it under governance.
Conclusion
The Clawdbot incident is a wake-up call for every security organization. AI agents with broad system access, deployed without oversight, connected to the internet with default credentials: this is a recipe for disaster that played out exactly as security professionals would predict.
Your action items:
The broader question: What other AI agents are your employees running? Do you have visibility? Do you have controls?
How I Help
This incident highlights why AI governance is not optional. It is urgent. If you are concerned about shadow AI, agentic security, or AI-related compliance, I bring 20+ years of security leadership to help organizations get ahead of these risks.
Relevant services:
Schedule a discovery call to discuss your organization's AI security posture.
Sources and References
Adil Karam
Security & AI Governance Advisor
Helping organizations navigate security leadership and AI governance challenges.
Related Articles
The Real-Time Compliance Era: How NIS2 and DORA Are Changing Executive Accountability in 2026
The 2026 Regulatory Collision: Navigating NIS2, DORA, and Personal Liability for Boards
NIS2 Enforcement Era Begins: Why US Executives with EU Operations Can't Ignore Personal Liability in 2026
Ready to Put These Insights Into Action?
Whether you need AI governance, security leadership, or compliance guidance—let's discuss how to apply these strategies to your organization.