Phase2 Logo
Services
Our Team
Results
Insights
Contact Us
Phase2 Logo
Menu Button IconX Close Button
Services
Our Team
Results
Insights
Contact Us
_
Insights

Less Chat, More Outcome — And the Security Nightmare That Comes With It

AI agents like Clawdbot deliver real productivity gains, but they also create serious security risks—from credential theft to supply chain attacks—that most organizations aren't prepared to handle.

AI agents are everything users want. They're also everything IT security teams fear.

Written by
Jeremy Waller
,
Vice President of Technology & Innovation
Last updated:
January 27, 2026

AI agents are everything users want. They're also everything IT security teams fear.

A year ago, an AI that could browse the web, manage files, send messages, and complete tasks on its own would have sounded like science fiction. Today, projects like Clawdbot (recently rebranded to Moltbot) are doing exactly that. And people love them.

Not because this is an innovation in capabilities, but because the ability to do real work is now wrapped into a neat package.

Users Want Outcomes, Not Conversations

For years, AI assistants were search engines with a chat interface. You asked questions, you got answers. However, it was still you who did the work.

That's changing. AI agents don't just talk about what you could do. They do it. Research is gathered and summarized. Documents are drafted and reviewed. Workflows are configured, executed, and debugged.

Clawdbot's popularity isn't an accident. Users are tired of chatting. They want real outcomes.

18 months ago, ChatGPT writing a decent email was impressive. Today, agents are orchestrating multi-step workflows, handling errors, retrying failures, and performing real labor that used to take people hours.

It’s a fundamental shift in how labor gets done.

Here's What You're Actually Doing

Installing one of these agents is a decision to hand over the keys.

You’re installing a system that bundles powerful agentic capabilities like file access, shell execution, browser control, and messaging without adequately addressing the security, isolation, and governance implications of that power.

Security researchers have already shown how this goes wrong:

  • SSH keys quietly exfiltrated
  • Entire chat histories siphoned off
  • Legitimate tools hijacked through prompt injection
  • Malicious behavior hidden behind normal-looking output

The user sees normal behavior. Behind the scenes, their data walks out the door.

I’m a Hypocrite

I’m critical of tools like Clawdbot, and I’m also running it.

Is the hype justified? It absolutely is. When it works, it offers a clear glimpse of what agentic systems can do when friction drops away. It feels like we’re living in the future when I can hand off something to my agent, go make a cup of coffee, and come back to a fully complete task.

That effectiveness is validating. It confirms that the direction is right and that the productivity gains people are chasing are real.

It also scares the crap out of me. 

To run an agent with this much autonomy, I’ve put guardrails all around it: isolated hardware, no sensitive credentials, no real access to anything because the security model doesn’t match the level of authority the tool assumes.

The same power that makes it useful makes it dangerous in less controlled environments.

That contradiction matters. Agentic AI doesn’t need to prove it works; it needs to prove it can be trusted.

This Week's Headlines (And Why They Matter)

Daniel Miessler, one of the most respected voices in security, posted a warning about Clawdbot that pulled in 140,000 views in under 24 hours:

"I'm asking you to please listen to this. Here are some of the top security issues with clawd.bot that you should be avoiding. Don't avoid the project. It's great. But please be safe with it!"

Attackers are already taking advantage of the surge in popularity

When Clawdbot announced a rebrand to "Moltbot," someone immediately squatted on the npm package name. Anyone who followed the migration docs and ran: npm install -g moltbot installed an attacker’s code instead of the real tool.

Shortly after, a fake VS Code extension showed up impersonating Moltbot. Professional icon, polished UI, multiple AI providers integrated. It looked real. It had multiple layers of malicious payload.

These weren’t sophisticated attacks.
They were cheap. Fast. Obvious in hindsight.

And they worked because people valued the capabilities of the tool without giving any thought to the security implications.

The Question You Should Be Asking

How much of this is already lurking within your infrastructure?

Shadow IT isn't new. But shadow AI agents with elevated permissions are different.

When a developer installs an AI coding assistant that reads files, runs commands, and touches your codebase, what's exposed? When someone hooks their corporate Slack to an AI workflow, who else can see those messages?

And then there’s the stuff that doesn’t show up in logs:

  • Prompt injection. Malicious content hidden in documents or web pages can hijack what the agent does next.
  • Supply chain risk. We just watched it happen. The tool ecosystem is a target.
  • Data residency. Where does your data go when the AI processes it?
  • Credential exposure. Agents need API keys and tokens. Those can be extracted.

So Now What?

These tools work. The productivity gains are real. People will use them whether IT approves or not.

However, the security model for most tools is half-baked. I don't know how many times I’ve heard, ‘keep a human in the loop.’ But if we’re being honest, how many users actually review every action? How many even can, when the interface hides the details?

‍Simon Willison put it bluntly: "We've known about prompt injection for over two years and still don't have good mitigations."

So many organizations are shipping systems that can act autonomously before they’ve figured out how to secure them properly.

What Trustworthy AI Actually Looks Like

These tools work. The productivity gains are real. And people will use them whether IT approves them or not.

Enterprise software has always been about trust. AI raises the bar even further.

After nearly three decades of building systems that have to work when it matters, here’s how Phase2 thinks about trustworthy AI:

Security 

Protects sensitive data, intellectual property, and systems through strong access controls, data protection, and secure model operations.

Governance

Aligns AI systems with organizational values, business intent, legal obligations, and ethical standards through clear ownership and oversight.

Reliability

Delivers consistent, auditable results that remain dependable under real-world variability and scale, even as systems and data evolve.

Predictability

Balances the power of probabilistic AI with transparency, validation, and human judgment so outcomes are understandable, reviewable, and appropriate for their level of impact.

Enterprise software has always had to meet this bar. AI doesn't get a free pass just because it’s new and exciting.

The Path Forward

Clawdbot validates both sides of the equation. 

The technology is real. People are getting real work done. 

The risks are real. We watched supply chain attacks materialize within hours of a rebrand.

This is exactly why we've been building our AI practice around trustworthy delivery. The upside is too valuable to ignore. But they only work in the enterprise if you deploy these systems safely.

That's the opportunity. Organizations that figure out how to deploy AI agents with proper security, governance, reliability, and predictability are the ones who will actually capture the productivity gains. Everyone else will either miss out or learn the hard way.

The agents are already here. The question is whether you're ready for them.

‍

Explore More Insights

View All Insights
_
News
Using the Sequential Thinking MCP Server to go from Generative to Agentic AI
_
News
Executive Summary: Using the Sequential Thinking MCP Server to go from Generative to Agentic AI
_
News
How to Build Smarter AI That Remembers What Matters: Strengthening Organizational Memory with Zep
Phase2 Logo

Phase2 is an employee-owned software engineering and AI consultancy. For nearly 30 years, we've delivered complex enterprise solutions with proven expertise and reliable execution.

Sales@phase2online.com
100% Onshore • United States
Services
AI IntegrationSoftware EngineeringData SystemsEnterprise SecurityDigital Transformation
Company
About UsCareersContact Us
© 2025 Phase2_. All rights reserved.
Privacy Policy