TL;DR: We're building GetATeam, a platform for autonomous AI employees. We needed a way to schedule recurring tasks for dozens of AI agents without spawning hundreds of cron daemons. Here's how we solved it with a shared scheduler architecture and simple JSON configs.
The Problem
At GetATeam, we're building virtual AI employees that handle real work - writing content, managing communications, analyzing data, etc. Each "employee" is an autonomous AI agent with their own working directory, skills, and personality.
Pretty quickly we hit a classic problem: How do you schedule recurring tasks for AI agents at scale?
Our agents needed to:
- Send daily reports at 9am
- Check for new comments every 4 hours
- Generate weekly analytics every Monday
- Send reminders for specific deadlines
- Monitor systems and alert on issues
The naive approach: give each agent their own cron daemon. But that doesn't scale. With 50 agents you'd have 50 cron processes and 50 separate crontabs to manage, and debugging becomes a nightmare when something goes wrong.
We needed something better.
The Architecture
We built what we call the "CRON AI Scheduler" - a single shared scheduler that manages tasks for all agents.
Core Design Principles
- Single source of truth: One cron daemon for all agents
- Agent autonomy: Each agent owns their task configuration
- Zero coordination: Agents don't need to know about each other
- Simple persistence: Just JSON files, no database needed
- Transparent execution: Full logs per task
How It Works
Each agent has a simple JSON file that defines their tasks:
[
  {
    "id": "task_1762287787520_cjl6fspt1",
    "name": "Daily Content Check",
    "prompt": "Check blog analytics and suggest new topics",
    "schedule": {
      "type": "recurring",
      "cronExpression": "0 9 * * *"
    },
    "createdAt": "2025-11-04T20:23:07.520Z"
  }
]
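For illustration, the "check for new comments every 4 hours" task from the earlier list would just be another entry in the same array. The id and timestamp below are made up; only the format is real:

```json
{
  "id": "task_1762288000000_example01",
  "name": "Comment Check",
  "prompt": "Check for new comments and draft replies",
  "schedule": {
    "type": "recurring",
    "cronExpression": "0 */4 * * *"
  },
  "createdAt": "2025-11-04T20:26:40.000Z"
}
```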
When the scheduler container starts:
- Scans all agent directories - Finds every `.task-scheduler-data.json`
- Generates execution scripts - Creates a Node.js script per task
- Builds unified crontab - One crontab with all agent tasks
- Starts cron daemon - System cron executes everything
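Here's a minimal sketch of that startup pass. Only the `.task-scheduler-data.json` name comes from our setup - the directory layout, log paths, and the `renderTaskScript` placeholder are illustrative assumptions, not our exact code:

```js
const fs = require("fs");
const path = require("path");
const { execSync } = require("child_process");

const AGENTS_DIR = "/agents";              // assumed mount point for agent home dirs
const SCRIPTS_DIR = "/scheduler/scripts";  // assumed output dir for generated scripts

// Placeholder for the generated per-task Node.js script; the real one
// opens a WebSocket to the agent's session and sends the prompt.
function renderTaskScript(agent, task) {
  return `console.log(${JSON.stringify(`[${agent}] ${task.name}: ${task.prompt}`)});\n`;
}

function buildCrontab() {
  fs.mkdirSync(SCRIPTS_DIR, { recursive: true });
  const lines = [];
  for (const agent of fs.readdirSync(AGENTS_DIR)) {
    const configPath = path.join(AGENTS_DIR, agent, ".task-scheduler-data.json");
    if (!fs.existsSync(configPath)) continue;

    const tasks = JSON.parse(fs.readFileSync(configPath, "utf8"));
    for (const task of tasks) {
      // One generated script per task, one crontab line per task,
      // with output redirected to a per-task log file.
      const script = path.join(SCRIPTS_DIR, `${agent}-${task.id}.js`);
      fs.writeFileSync(script, renderTaskScript(agent, task));
      lines.push(
        `${task.schedule.cronExpression} node ${script} >> /var/log/tasks/${task.id}.log 2>&1`
      );
    }
  }
  return lines.join("\n") + "\n";
}

// Write the unified crontab and hand it to the system cron daemon.
fs.writeFileSync("/tmp/crontab", buildCrontab());
execSync("crontab /tmp/crontab");
```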
Why This Architecture Works
1. Simplicity
No complex orchestration. No message queues. No databases. Just cron doing what cron does best - running commands on a schedule.
The entire scheduler is ~200 lines of JavaScript. That's it.
2. Scalability
Adding a new agent? Just drop a JSON file in their directory. The scheduler picks it up on the next restart - and, with the file watcher described below, config changes now apply without a restart at all.
Want to add a task? The agent modifies their own JSON file. No coordination needed.
We can easily handle 100+ agents with hundreds of tasks on a single scheduler container.
3. Transparency
Every task execution is logged. When something fails, you know exactly what happened. You see the full WebSocket communication, any errors, and the exact time of execution.
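To make that concrete, here's a hedged sketch of the logging wrapper a generated task script can use. `sendPromptToAgent` is a hypothetical stand-in for the real WebSocket call; cron redirects stdout/stderr into the task's own log file:

```js
// Timestamps every execution and makes failures visible instead of silent.
async function runLogged(task, sendPromptToAgent) {
  console.log(`[${new Date().toISOString()}] starting "${task.name}"`);
  try {
    const reply = await sendPromptToAgent(task.prompt);
    console.log(`[${new Date().toISOString()}] finished "${task.name}":`, reply);
  } catch (err) {
    console.error(`[${new Date().toISOString()}] "${task.name}" failed:`, err);
    process.exitCode = 1; // non-zero exit so cron-level monitoring can catch it too
  }
}
```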
4. Agent Autonomy
This is key: each agent manages their own schedule.
They use a simple CLI tool that just manages a JSON file. No need to understand Docker, complex cron syntax, or WebSockets.
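To give a concrete picture, a tool along these lines is all an agent needs. The command shape below is an assumption (our actual CLI differs in detail), but the JSON it writes matches the format shown earlier:

```js
#!/usr/bin/env node
// Minimal sketch of an agent-facing CLI: "add" appends a task to the agent's
// own .task-scheduler-data.json, "list" prints the current schedule.
const fs = require("fs");

const CONFIG = ".task-scheduler-data.json"; // lives in the agent's working directory

const [command, name, cronExpression, ...promptWords] = process.argv.slice(2);
const tasks = fs.existsSync(CONFIG) ? JSON.parse(fs.readFileSync(CONFIG, "utf8")) : [];

if (command === "add") {
  tasks.push({
    id: `task_${Date.now()}_${Math.random().toString(36).slice(2, 11)}`,
    name,
    prompt: promptWords.join(" "),
    schedule: { type: "recurring", cronExpression },
    createdAt: new Date().toISOString(),
  });
  fs.writeFileSync(CONFIG, JSON.stringify(tasks, null, 2));
  console.log(`Added "${name}" (${cronExpression})`);
} else if (command === "list") {
  for (const t of tasks) console.log(`${t.schedule.cronExpression}  ${t.name}`);
}
```

An agent would run something like `schedule add "Daily Content Check" "0 9 * * *" Check blog analytics and suggest new topics` and never touch cron or Docker directly.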
Challenges We Hit
1. Timezone Confusion
Cron runs in the container's timezone, which for us is UTC. Our agents work in different timezones, and we initially had agents waking up at 3am instead of 9am.
Solution: Document clearly that all schedules are interpreted as UTC, and have agents convert from their local time. An agent who wants a daily 9:00 report in Paris (UTC+1 in winter), for example, has to write `0 8 * * *`.
2. Session Management
Sometimes the agent's VibeCoder session would die. Tasks would fail silently.
Solution: Auto-create a session if none exists before dispatching the task. A dead session no longer causes a silent failure.
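The pattern, sketched against a hypothetical client API (none of these method names are GetATeam's real interface):

```js
// Hypothetical client API; the point is the pattern: never assume a session exists.
async function sendPromptWithSession(client, agentId, prompt) {
  let session = await client.getActiveSession(agentId);
  if (!session) {
    session = await client.createSession(agentId); // auto-create instead of failing silently
  }
  return client.sendPrompt(session.id, prompt);
}
```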
3. Instant Configuration Reload
Initially, the scheduler only scanned for tasks on container startup. This meant agents had to wait for a restart to see their new tasks take effect - not ideal.
Solution: We added a file watcher that monitors all `.task-scheduler-data.json` files. When an agent modifies their config, the scheduler automatically:
- Detects the change via inotify
- Regenerates only the affected task scripts
- Rebuilds the crontab
- Reloads cron without restarting the container
This takes ~200ms. From the agent's perspective, it's instant.
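Here's a sketch of that hot-reload path, assuming Node's built-in `fs.watch` (which sits on inotify on Linux) and reusing the `buildCrontab` function from the startup sketch above; the paths and debounce delay are assumptions:

```js
const fs = require("fs");
const path = require("path");
const { execSync } = require("child_process");

const AGENTS_DIR = "/agents"; // same assumed layout as the startup sketch
let reloadTimer = null;

for (const agent of fs.readdirSync(AGENTS_DIR)) {
  const configPath = path.join(AGENTS_DIR, agent, ".task-scheduler-data.json");
  if (!fs.existsSync(configPath)) continue;

  fs.watch(configPath, () => {
    // Editors and CLIs often fire several change events per save, so debounce briefly.
    clearTimeout(reloadTimer);
    reloadTimer = setTimeout(() => {
      fs.writeFileSync("/tmp/crontab", buildCrontab()); // regenerate scripts + crontab
      execSync("crontab /tmp/crontab");                 // cron reloads; the container keeps running
      console.log(`[${new Date().toISOString()}] crontab reloaded after change by ${agent}`);
    }, 100);
  });
}
```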
Lessons Learned
Keep It Simple
We considered using Redis, RabbitMQ, or a proper job queue. But why? Cron has been solving this problem for 50 years. Use boring technology.
Centralize Execution, Distribute Configuration
One scheduler. Many configs. This is the sweet spot. Each agent has autonomy but you have one place to debug.
Logs Are Critical
When you have dozens of scheduled tasks across multiple agents, good logging is non-negotiable. Every task gets its own log file. Every execution is timestamped.
Conclusion
Building infrastructure for AI agents is interesting because you're essentially building for autonomous "users" who can't complain when things break - they just fail silently.
The key insight for us was: treat AI agents like junior developers. Give them simple tools, clear documentation, and a way to schedule their own work.
The CRON AI Scheduler does exactly that. It's simple, scales well, and gets out of the way.
We're building GetATeam in public. If you're interested in AI agents, autonomous workers, or just want to see how we're building this, follow along at getateam.org.
Joseph Benguira - CTO & Founder @ GetATeam