Hacker News | new | past | comments | ask | show | jobs | submit | intellegix's comments

Cool project. The proactive heartbeat approach is interesting -- most tools in this space (mine included) are reactive, only running when explicitly kicked off.

I built something complementary but cross-platform: an open source loop driver + council system for Claude Code CLI. Instead of a desktop UI, it's all Python/JS that wraps the CLI with NDJSON streaming. The loop handles budget enforcement, stagnation detection, and model fallback (Opus -> Sonnet), while a separate council system queries multiple models through Perplexity for architectural decisions.
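The Opus -> Sonnet fallback can be sketched as a tiny policy function. This is a hypothetical illustration of the idea, not the toolkit's actual code; the model identifiers and the two-timeout threshold are assumptions.

```python
# Hypothetical sketch: after N consecutive timeouts, drop one tier in the
# fallback order. Model names and threshold are illustrative assumptions.
MODELS = ["opus", "sonnet"]  # assumed fallback order
MAX_CONSECUTIVE_TIMEOUTS = 2

def pick_model(consecutive_timeouts: int) -> str:
    """Return which model to run given how many timeouts in a row occurred."""
    tier = min(consecutive_timeouts // MAX_CONSECUTIVE_TIMEOUTS, len(MODELS) - 1)
    return MODELS[tier]
```

Keeping the policy a pure function of the timeout count makes it trivial to unit-test, which is one way a suite like the 194 pytest tests mentioned below stays cheap to run.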

Different trade-offs: yours has the proactive scheduling and messaging integrations, mine focuses on the autonomous coding loop itself with 194 pytest tests and session continuity via --resume.

https://github.com/intellegix/intellegix-code-agent-toolkit


Interesting approach with the parallel planner/worker/judge architecture. I've been solving a similar problem but with a single-agent loop pattern instead of multi-agent.

My toolkit wraps Claude Code CLI with NDJSON streaming and handles the exact failure modes you describe: budget exceeded -> exit code 2, stagnation detected (low turns over a window) -> exit code 3, consecutive timeouts -> auto fallback from Opus to Sonnet. The human steers between runs by editing CLAUDE.md.
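The exit-code contract above can be sketched as a small classifier that an outer script dispatches on. This is a simplified illustration under assumed inputs (per-run cost, budget, and a window of recent turn counts), not the toolkit's real implementation.

```python
# Sketch of the exit-code contract: map each failure mode to a distinct
# code so an outer script can react. Thresholds here are assumptions.
EXIT_OK = 0
EXIT_BUDGET_EXCEEDED = 2
EXIT_STAGNATION = 3

def classify_run(cost_usd, budget_usd, recent_turns, min_turns=3):
    """Decide the driver's exit code from this run's stats."""
    if cost_usd >= budget_usd:
        return EXIT_BUDGET_EXCEEDED
    # stagnation: every run in the recent window produced very few turns
    if recent_turns and all(t < min_turns for t in recent_turns):
        return EXIT_STAGNATION
    return EXIT_OK
```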

One thing I added that might interest you: a multi-model council system that queries GPT-4, Claude, and Gemini simultaneously through Perplexity before big architectural decisions. Catches blind spots that a single model misses.

https://github.com/intellegix/intellegix-code-agent-toolkit

Curious how openTiger handles the cost tracking problem. With single-agent loops I parse cost from NDJSON events, but with parallel agents the spend can compound fast.
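For the parallel-agent case, one option is to sum a cost field across each agent's NDJSON log. The event shape below (`{"type": "result", "cost_usd": ...}`) is an assumption for illustration, not openTiger's or Claude Code's actual schema.

```python
import json

def total_cost(ndjson_streams):
    """Sum spend across several agents' NDJSON logs.

    Assumes (hypothetically) that cost arrives in events shaped like
    {"type": "result", "cost_usd": 0.12}; adapt to the real schema.
    """
    total = 0.0
    for stream in ndjson_streams:
        for line in stream.splitlines():
            if not line.strip():
                continue
            event = json.loads(line)
            if event.get("type") == "result":
                total += float(event.get("cost_usd", 0.0))
    return total
```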


It actually can with the right wrapper. I built an open source loop driver that runs Claude Code CLI autonomously with --dangerously-skip-permissions. It handles session continuity (--resume), budget enforcement, stagnation detection (a two-strike system that trips when turn counts stay low), and automatic model fallback (Opus -> Sonnet after consecutive timeouts).

The key is streaming NDJSON output to track cost per iteration and detect completion markers. The human stays in control by editing CLAUDE.md between runs to steer the project.
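The per-iteration scan can be sketched like this. Field names and the completion-marker string are illustrative assumptions, not the toolkit's real schema.

```python
import json

# Assumed marker the agent emits when it believes the work is done.
COMPLETION_MARKER = "ALL_TASKS_COMPLETE"

def scan_iteration(lines):
    """Walk one iteration's NDJSON lines; return (cost, done) under the
    hypothetical event shapes used in this sketch."""
    cost, done = 0.0, False
    for line in lines:
        event = json.loads(line)
        if event.get("type") == "result":
            cost += float(event.get("cost_usd", 0.0))
        if COMPLETION_MARKER in event.get("text", ""):
            done = True
    return cost, done
```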

https://github.com/intellegix/intellegix-code-agent-toolkit


This separation of planning and execution is exactly the pattern I ended up building into an open source toolkit for Claude Code. The key insight that made autonomous loops work was giving the loop driver awareness of the CLAUDE.md file as the "plan" layer — the human edits CLAUDE.md between runs to steer the project, and the loop driver handles execution (session continuity, budget enforcement, stagnation detection, model fallback from Opus to Sonnet on consecutive timeouts).

The other piece that helped was a multi-model council system — before committing to a major architectural decision, the toolkit queries GPT-4, Claude, and Gemini simultaneously through Perplexity, then synthesizes with Opus. Having three models surface their assumptions (as the top comment here describes) catches more blind spots than any single model.
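The council fan-out reduces to a parallel map over models followed by a synthesis step. This is a generic sketch: `query_model` and `synthesize` stand in for whatever API calls route to each backend and to the synthesizer, and are not the toolkit's actual functions.

```python
from concurrent.futures import ThreadPoolExecutor

def run_council(question, models, query_model, synthesize):
    """Ask every model the same question in parallel, then synthesize.

    query_model(model, question) and synthesize(question, answers) are
    placeholder callables for illustration.
    """
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        answers = list(pool.map(lambda m: (m, query_model(m, question)), models))
    return synthesize(question, answers)
```

Threads suffice here because the work is I/O-bound API calls; `pool.map` also preserves model order, so the synthesizer sees a stable answer list.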

194 pytest tests, MIT licensed: https://github.com/intellegix/intellegix-code-agent-toolkit

