How OpenClaw inspired me to build AIdaemon, my own AI agent
I'd been using AI agents for coding for a while. Claude Code, Codex, Cursor, Windsurf, Open Code. They changed how I write software. But that was it. AI stayed inside the editor.
Then OpenClaw came along. An open-source AI agent that could actually control your apps, send messages, manage your inbox. Not just coding, but everything on your computer. That got me thinking about what else AI agents could do.
But I wanted something different. I didn't need an agent that controls Spotify or manages my smart lights. I needed one that lives on my server, runs terminal commands, deploys code, and remembers what I was working on last week. Something I could message from my phone while I'm away from my desk and have it do real dev work on my machine.
That's why I built AIdaemon.
What is AIdaemon?
AIdaemon is a self-hosted AI agent that runs as a background daemon on your machine. You talk to it through Telegram, Slack, or Discord, and it can execute terminal commands, browse the web, manage files, run scheduled tasks, and remember things across sessions. It's a single Rust binary. No Docker required, no Node.js dependencies. Just copy it to any machine and it runs.
Think of it as having a capable assistant permanently running on your server or laptop that you can message from anywhere.
Why not just use OpenClaw?
I actually started with OpenClaw. The idea was brilliant, but when I dug in, I kept bumping into things that weren't there yet. I wanted to approve commands directly from my phone before they ran. I wanted to see the steps the AI was taking in real time, not just the final result. Basic stuff for a tool that's running commands on your machine.
Then I hit a bug where OpenClaw didn't work with Gemini. I fixed it and opened a pull request, but the PR queue was massive. I knew it would take days, maybe longer, before my fix made it into the main branch. That's when it hit me. If I'm going to keep patching someone else's project to get what I need, maybe I should just build my own and treat it as a learning experience: how personal AI agents should be architected, their edge cases, their limitations, and everything in between.
On top of that, OpenClaw was noticeably slow on one of my Macs. For something that's supposed to run as a background daemon, performance matters. That made me want something lighter.
But the biggest gap was memory. OpenClaw didn't have it yet. Every conversation started from scratch. No context from yesterday, no recollection of what projects you work on, no learning from past mistakes. I couldn't work with that.
So the question shifted from "how do I make OpenClaw work for me" to "what would my ideal AI daemon look like?" I kept running into the same frustration. I'd be away from my computer and need to check something, restart a service, or run a quick command. SSH from my phone works in a pinch, but it's painful for anything beyond ls.
What if I could just send a Telegram message like "check if the nginx service is running" and get an answer back? Or "deploy the latest changes to staging"? That was the core idea inspired by OpenClaw.
Memory was the first thing I designed
From day one, I knew AIdaemon had to remember things. Not just chat history, but actual knowledge. What projects I work on. What tools I prefer. What errors I've hit before and how I fixed them.
Every six hours, a background process reviews recent conversations and extracts durable facts. Things like "David uses Cloudflare for deployment" or "the staging server runs on port 3002." Old facts get superseded when new information comes in, so the knowledge stays current.
It also learns from mistakes. When AIdaemon encounters an error and then successfully resolves it, it stores the pattern and the fix. Next time a similar error shows up, it already knows what to do.
Workflows get learned too. If I build, test, and deploy a Rust app the same way multiple times, AIdaemon notices the pattern. After enough successful runs, the procedure gets automatically promoted to a reusable skill. No manual setup needed.
All of this is backed by vector embeddings for semantic recall, weighted by freshness and usefulness. Facts that haven't been recalled in 30 days gradually decay. Memory stays relevant without manual cleanup.
The memory is privacy-aware too. Facts are tagged with visibility levels, so information shared in a private DM never leaks into a team channel.
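In simplified form, the recall weighting works something like this. This is a sketch of the idea, not the actual implementation; the field names and exact decay curve are illustrative:

```rust
// Sketch of freshness-weighted recall scoring. In the real system,
// similarity comes from vector embeddings; here it's a plain field.
const DECAY_HALF_LIFE_DAYS: f64 = 30.0;

struct Fact {
    similarity: f64,        // semantic similarity to the query (0.0..=1.0)
    days_since_recall: f64, // time since the fact was last recalled
    usefulness: f64,        // bumped when recalling this fact helped
}

fn recall_score(f: &Fact) -> f64 {
    // Exponential decay: a fact untouched for ~30 days loses half its weight.
    let freshness = 0.5_f64.powf(f.days_since_recall / DECAY_HALF_LIFE_DAYS);
    f.similarity * freshness * f.usefulness
}

fn main() {
    let fresh = Fact { similarity: 0.8, days_since_recall: 1.0, usefulness: 1.0 };
    let stale = Fact { similarity: 0.9, days_since_recall: 60.0, usefulness: 1.0 };
    // The fresher fact outranks the stale one despite lower raw similarity.
    println!("{:.3} vs {:.3}", recall_score(&fresh), recall_score(&stale));
}
```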
How it works
The architecture is straightforward. AIdaemon starts as a system service (launchd on macOS, systemd on Linux) and connects to your messaging channels. When a message comes in, it goes through an agent loop.
- Intent classification - A fast model figures out what you want: a quick answer, a task to execute, or an automation to set up
- Tool selection - The agent picks from 40+ built-in tools based on your request
- Execution with safety - Commands go through risk assessment and approval flows before running
- Memory update - Important context gets stored for future conversations
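Stripped to its skeleton, that loop looks something like this. It's an illustrative sketch: the real loop is async and uses an LLM for classification, not keyword matching:

```rust
// Illustrative skeleton of the agent loop. The real system classifies
// intent with a fast model; this stub keys off simple text patterns.
enum Intent { QuickAnswer, Task, Automation }

fn handle_message(text: &str) -> String {
    // 1. Intent classification (a fast model in the real system).
    let intent = if text.starts_with("schedule") {
        Intent::Automation
    } else if text.contains("run") || text.contains("deploy") {
        Intent::Task
    } else {
        Intent::QuickAnswer
    };

    // 2-3. Tool selection and execution, gated by the approval flow.
    // 4. Afterwards, important context is written back to memory.
    match intent {
        Intent::QuickAnswer => format!("answer: {text}"),
        Intent::Task => format!("awaiting approval for: {text}"),
        Intent::Automation => format!("scheduled: {text}"),
    }
}

fn main() {
    println!("{}", handle_message("deploy the latest changes to staging"));
}
```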
The tools
AIdaemon ships with over 40 tools out of the box.
- Terminal - Execute shell commands with allowlist-based safety and inline approval (Allow Once / Always / Deny)
- File operations - Read, write, edit, and search files
- Git - Commit, branch, check status
- Web browsing - Headless Chrome for pages that need JavaScript rendering
- HTTP requests - Full header control, auth profiles, OAuth support
- MCP integration - Connect external MCP servers to extend capabilities
- CLI agents - Delegate work to Claude Code, Gemini CLI, Codex, or Aider
- Scheduling - Cron-style task automation with natural language time parsing
- Memory management - Query, update, and share knowledge across channels
You can also add channels at runtime. Need to connect a Discord bot alongside your existing Telegram one? Just use the /connect command. No restart needed.
Intelligent model routing
Not every message needs the most expensive model. AIdaemon classifies requests into three tiers.
- Fast - Simple questions, quick lookups (cheapest model)
- Primary - Standard tasks, most interactions (default model)
- Smart - Complex reasoning, multi-step tasks (most capable model)
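In config.toml that boils down to one model entry per tier. The keys and model names below are placeholders to show the shape, not a copy of the real schema:

```toml
# Illustrative tier mapping - keys and model names are placeholders
[models]
fast = "google/gemini-2.0-flash"     # cheap tier: lookups, short answers
primary = "openai/gpt-4.1-mini"      # default tier: most interactions
smart = "anthropic/claude-sonnet-4"  # reasoning tier: multi-step tasks
```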
It supports multiple LLM providers out of the box. OpenAI-compatible APIs (including OpenRouter), Google Gemini, Anthropic Claude, Ollama, and any local model you want to run. You can swap providers at runtime without rebuilding.
Safety first
Giving an AI agent terminal access is something you want to get right. AIdaemon takes a cautious approach.
- Allowlist-based execution - Only pre-approved command prefixes run without asking
- Inline approval flow - For anything not on the allowlist, you get an Allow Once / Allow Always / Deny prompt right in your chat
- Risk assessment - Commands are scored for destructive potential, path sensitivity, and complexity
- SSRF protection - HTTP tools block requests to internal IPs
- Encrypted state - Database encryption with SQLCipher, secrets stored in your OS keychain
- Stall detection - The agent stops itself if it repeats the same tool call three times or gets stuck in alternating patterns
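The allowlist gate itself is conceptually tiny. Here's a sketch; the real risk assessment also weighs destructive potential, path sensitivity, and complexity, which I've left out:

```rust
// Sketch of the allowlist gate. Only pre-approved command prefixes
// run silently; everything else goes to the inline approval flow.
#[derive(Debug, PartialEq)]
enum Verdict { Run, AskApproval }

fn gate(command: &str, allowlist: &[&str]) -> Verdict {
    if allowlist.iter().any(|prefix| command.starts_with(prefix)) {
        Verdict::Run // pre-approved prefix: execute without asking
    } else {
        Verdict::AskApproval // Allow Once / Allow Always / Deny in chat
    }
}

fn main() {
    let allow = ["git status", "ls", "systemctl status"];
    // "ls -la" matches the "ls" prefix; "rm -rf" matches nothing.
    assert_eq!(gate("ls -la", &allow), Verdict::Run);
    assert_eq!(gate("rm -rf /tmp/build", &allow), Verdict::AskApproval);
    println!("gate checks passed");
}
```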
Getting started
The quickest way to try it.
# Install via Homebrew
brew tap davo20019/tap
brew install aidaemon
# Or install from crates.io
cargo install aidaemon
# Or download the binary directly
curl -sSfL https://get.aidaemon.ai | bash
Then create a config.toml with your LLM provider and Telegram bot token, and start the daemon.
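A minimal config looks something along these lines. I've simplified the keys for illustration, so check the docs for the full schema:

```toml
# Illustrative minimal config - keys simplified for the example
[llm]
provider = "openrouter"
model = "anthropic/claude-sonnet-4"
api_key = "sk-or-..."          # or store it in the OS keychain instead

[channels.telegram]
bot_token = "123456:ABC..."    # from @BotFather
allowed_user_ids = [11111111]  # lock the bot to your own account
```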
# Install as a system service
aidaemon install-service
# Or run in the foreground
aidaemon
That's it. Message your Telegram bot and you're talking to your machine. Check the full documentation for configuration options and advanced features.
Why Rust?
I'd never written a line of Rust before this project. After experiencing OpenClaw's sluggishness on my Mac, I wanted something fast. I also wanted to see if I could build a production-grade system in an unfamiliar language by leaning on AI coding tools for the busywork. Claude Code, Codex, and Antigravity handled the boilerplate and the borrow checker fights. I focused on architecture and design.
The result is a single binary I can copy to a Raspberry Pi or a $5/month VPS and have it just work. No Docker, no Node.js, no runtime dependencies. For a daemon that's supposed to run 24/7, Rust's compile-time safety and low memory footprint turned out to be the right call.
What's next
AIdaemon is at version 0.9.2 and I'm still adding to it. Recent additions include a two-phase consultant system for smarter intent classification, structured JSON schema enforcement for LLM outputs, and goal tracking with token budgets.
The project is open source on GitHub and published on crates.io. If you're interested in having a personal AI agent that you fully control, give it a try and let me know what you think.