For a long time, the implicit message from the AI tooling industry has been: if you want to build agents, learn Python. The frameworks, the tutorials, the conference talks all pointed in the same direction. PHP developers who wanted to experiment with autonomous systems had two options: switch stacks, or stitch something together from raw API calls and hope it holds.
That’s the gap Neuron AI was built to close. And now, with Neuron v3 introducing a workflow-first architecture, I wanted to prove the point in the most direct way possible: build something that the ecosystem assumes can only be done in another language. That’s how Maestro was born—the first coding agent built entirely in PHP.

What a Coding Agent Actually Does
Before looking at the code, it’s worth being precise about what a coding agent is, because the term gets stretched a lot. Maestro isn’t a code completion tool. It’s an autonomous agent that runs in your terminal, reads your project files, reasons about your codebase, and proposes changes. It operates in a loop: you give it a task, it decides which tools to call (read a file, search for patterns, write changes), executes them in sequence, and reports back. The key word is proposes—before touching your filesystem, it asks for your approval.
That last part is not a nice-to-have. Any agent with write access to your codebase that doesn’t pause for confirmation is a liability. The tool approval mechanism in Maestro is one of the things I’m most satisfied with, and it maps directly to a feature that Neuron v3 introduced as a first-class concept: human-in-the-loop workflow interruption.
The Architecture
The repository structure reflects a clear separation of concerns. The entry point is bin/maestro, which bootstraps a Symfony Console command. From there, everything fans out cleanly:
bin/maestro
└─ MaestroCommand (Symfony Console)
   ├─ Settings (loads .maestro/settings.json)
   ├─ EventDispatcher + CliOutputListener
   └─ AgentOrchestrator
      └─ CodingAgent (extends NeuronAI Agent)
         ├─ ProviderFactory → AIProviderInterface
         ├─ FileSystemToolkit (read-only FS tools)
         └─ McpConnector[] (optional MCP servers)
The CodingAgent class extends Neuron’s Agent base and adds a tool approval middleware. This is the piece that intercepts execution before any filesystem write, fires a ToolApprovalRequestedEvent, and waits. The AgentOrchestrator catches the workflow interrupt thrown by the middleware, presents the approval prompt to the user via the CLI, and resumes or aborts execution based on the response.
This pattern—interrupt, present, resume—would have been painful to implement without a workflow-oriented framework underneath. With Neuron v3, it’s the natural way to build it.
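To make the interrupt-present-resume pattern concrete, here is a minimal sketch of the idea in plain PHP. All class and function names here are illustrative stand-ins, not Neuron AI's or Maestro's actual API: the write tool throws an interrupt instead of executing, and an orchestrator catches it, asks the user, and resumes.

```php
<?php
// Hedged sketch of the interrupt-present-resume pattern.
// Class and method names are illustrative, not Neuron's actual API.

final class ToolApprovalInterrupt extends \Exception
{
    public function __construct(
        public readonly string $toolName,
        public readonly array $arguments,
    ) {
        parent::__construct("Approval required for tool: {$toolName}");
    }
}

final class WriteFileTool
{
    public function execute(string $path, string $contents, bool $approved = false): string
    {
        // The middleware idea: any write is intercepted until approved.
        if (!$approved) {
            throw new ToolApprovalInterrupt('write_file', compact('path', 'contents'));
        }
        return "wrote {$path}";
    }
}

// The orchestrator catches the interrupt, asks the user, then resumes or aborts.
function runWithApproval(WriteFileTool $tool, string $path, string $contents, callable $askUser): string
{
    try {
        return $tool->execute($path, $contents);
    } catch (ToolApprovalInterrupt $interrupt) {
        if ($askUser($interrupt)) {
            return $tool->execute($path, $contents, approved: true);
        }
        return 'aborted by user';
    }
}

// Example: auto-approve to simulate the user pressing "Allow once".
echo runWithApproval(new WriteFileTool(), 'src/Foo.php', '<?php', fn () => true), PHP_EOL;
```

The real implementation rides on Neuron v3's workflow interruption rather than a bare exception, but the control flow is the same shape: execution halts at the tool boundary and only continues with an explicit user decision.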
Getting Started
Installation is standard Composer:
composer global require neuron-core/maestro
Make sure Composer’s global bin directory is in your PATH:
export PATH="$HOME/.config/composer/vendor/bin:$PATH"
For per-project use, install it as a dev dependency and run it via vendor/bin/maestro.
Configuration lives in .maestro/settings.json at the root of your project. At minimum, you need a provider and an API key:
{
  "provider": {
    "type": "anthropic",
    "api_key": "sk-ant-your-key-here",
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 8192
  }
}
Maestro supports Anthropic, OpenAI, Gemini, Cohere, Mistral, Ollama, Grok, and Deepseek out of the box—all routed through a ProviderFactory that maps the type field to the corresponding Neuron AI provider instance. If you want to run everything locally without sending data to an external API, point it at an Ollama instance:
{
  "provider": {
    "type": "ollama",
    "base_url": "http://localhost:11434",
    "model": "llama2"
  }
}
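The provider-mapping idea is easy to sketch. The following is an assumption about the shape of such a factory, not Neuron AI's actual classes: a match on the type field returns the corresponding provider instance.

```php
<?php
// Illustrative sketch of a ProviderFactory mapping the "type" field from
// settings.json to a provider instance. The interface and provider classes
// here are assumptions, not Neuron AI's real API.

interface AIProviderInterface
{
    public function name(): string;
}

final class AnthropicProvider implements AIProviderInterface
{
    public function __construct(private array $config) {}
    public function name(): string { return 'anthropic'; }
}

final class OllamaProvider implements AIProviderInterface
{
    public function __construct(private array $config) {}
    public function name(): string { return 'ollama'; }
}

final class ProviderFactory
{
    public static function make(array $settings): AIProviderInterface
    {
        $config = $settings['provider'] ?? [];
        return match ($config['type'] ?? null) {
            'anthropic' => new AnthropicProvider($config),
            'ollama'    => new OllamaProvider($config),
            default     => throw new \InvalidArgumentException('Unknown provider type'),
        };
    }
}

$settings = json_decode(
    '{"provider": {"type": "ollama", "base_url": "http://localhost:11434", "model": "llama2"}}',
    true
);
echo ProviderFactory::make($settings)->name(), PHP_EOL; // prints "ollama"
```

Adding a new backend is then a one-line change in the match arm, which is what makes supporting eight providers behind one config key tractable.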
Giving the Agent Context About Your Project
By default, Maestro tries to load the Agents.md file from the project directory. Alternatively, you can point it at a different markdown file in your repo that describes your project's architecture, coding standards, and any conventions the agent should follow: set the context_file property in the settings file to the path of that file.
{
  "provider": { "..." },
  "context_file": "CLAUDE.md"
}
The agent appends the content of that file to its system instructions before the conversation starts. This is a simple mechanism, but it makes a real difference in practice. An agent that knows your project uses PSR-12, that controllers shouldn’t contain business logic, and that you prefer dependency injection over service locators will produce more relevant suggestions from the first message.
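The mechanism can be sketched in a few lines. The function name and prompt layout below are illustrative, not Maestro's actual code; only the context_file key and the Agents.md default come from the article.

```php
<?php
// Sketch of the context-file mechanism: load a markdown file (Agents.md by
// default, or whatever context_file points at) and append its contents to
// the system prompt. The function name and heading are illustrative.

function buildSystemInstructions(string $basePrompt, array $settings, string $projectRoot): string
{
    $contextFile = $settings['context_file'] ?? 'Agents.md';
    $path = $projectRoot . DIRECTORY_SEPARATOR . $contextFile;

    if (is_readable($path)) {
        $basePrompt .= "\n\n## Project context\n" . file_get_contents($path);
    }

    return $basePrompt;
}

// Example usage with a temporary project directory.
$root = sys_get_temp_dir() . '/maestro-demo';
@mkdir($root);
file_put_contents($root . '/CLAUDE.md', 'Use PSR-12. Prefer dependency injection.');

echo buildSystemInstructions(
    'You are a coding agent.',
    ['context_file' => 'CLAUDE.md'],
    $root
), PHP_EOL;
```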
The Tool Approval Flow
When the agent wants to modify a file, execution doesn’t just proceed. It stops, and you see something like this:
The agent wants to write changes to src/Service/UserService.php
[1] Allow once
[2] Allow for session
[3] Always allow
[4] Deny
“Allow for session” is the option I use most in practice. It means I approve write operations on a given file type or tool once per session, without having to confirm each individual change. “Always allow” persists the preference to .maestro/settings.json under an allowed_tools key, so future sessions skip the prompt entirely for that operation.
This granularity matters. You probably want to approve the first few changes in an unfamiliar session to build confidence, then let the agent run more freely once it’s demonstrated it understands what you’re asking.
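The "Always allow" persistence can be sketched as a small settings update. The allowed_tools key comes from the article; the helper function and JSON layout are assumptions about how such a write might look, not Maestro's actual code.

```php
<?php
// Sketch of "Always allow" persistence: append the tool name to the
// allowed_tools array in .maestro/settings.json so future sessions skip
// the prompt. The helper itself is illustrative.

function persistAlwaysAllow(string $settingsPath, string $toolName): void
{
    $settings = is_file($settingsPath)
        ? json_decode(file_get_contents($settingsPath), true)
        : [];

    $allowed = $settings['allowed_tools'] ?? [];
    if (!in_array($toolName, $allowed, true)) {
        $allowed[] = $toolName;
    }
    $settings['allowed_tools'] = $allowed;

    file_put_contents(
        $settingsPath,
        json_encode($settings, JSON_PRETTY_PRINT | JSON_UNESCAPED_SLASHES) . "\n"
    );
}

// Example against a temporary file instead of a real project.
$path = sys_get_temp_dir() . '/settings-demo.json';
@unlink($path);
persistAlwaysAllow($path, 'write_file');
echo file_get_contents($path);
```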
The Event System
Maestro uses a lightweight PSR-14-compatible event dispatcher with three events: AgentThinkingEvent (fires before each AI call), AgentResponseEvent (fires when the model returns), and ToolApprovalRequestedEvent (fires when a tool needs approval). The CliOutputListener subscribes to these and handles all terminal rendering.
This design keeps the agent logic clean. The CodingAgent doesn’t know anything about how output is displayed—it just fires events. If you wanted to build a web interface on top of the same agent, you’d swap out the listener and leave everything else untouched.
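A stripped-down dispatcher makes the decoupling visible. The event class names follow the article; the dispatcher below is a simplified PSR-14-style stand-in, not Maestro's actual implementation.

```php
<?php
// Minimal PSR-14-style dispatcher sketch: the agent fires events, and a
// listener handles all rendering. This is a simplified stand-in, not
// Maestro's real dispatcher.

final class AgentThinkingEvent {}

final class AgentResponseEvent
{
    public function __construct(public readonly string $text) {}
}

final class SimpleDispatcher
{
    /** @var array<string, list<callable>> */
    private array $listeners = [];

    public function listen(string $eventClass, callable $listener): void
    {
        $this->listeners[$eventClass][] = $listener;
    }

    public function dispatch(object $event): object
    {
        foreach ($this->listeners[$event::class] ?? [] as $listener) {
            $listener($event);
        }
        return $event;
    }
}

// A CLI listener only cares about rendering; the agent just dispatches.
$dispatcher = new SimpleDispatcher();
$dispatcher->listen(AgentThinkingEvent::class, fn () => print("thinking...\n"));
$dispatcher->listen(AgentResponseEvent::class, fn (AgentResponseEvent $e) => print($e->text . "\n"));

$dispatcher->dispatch(new AgentThinkingEvent());
$dispatcher->dispatch(new AgentResponseEvent('Refactor complete.'));
```

Swapping the CLI for a web UI would mean registering different listeners against the same three events; the agent code never changes.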
MCP Integration
For teams that want to extend the agent’s capabilities beyond filesystem operations, Maestro supports Model Context Protocol servers in the configuration:
{
  "mcp_servers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token"
      }
    }
  }
}
Each entry in mcp_servers spins up a subprocess and connects it to the agent as an additional tool source. The agent can then call GitHub operations, search the web, or access any MCP-compatible service alongside its native filesystem tools.
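The subprocess side of this can be sketched with proc_open. This is a hedged illustration of launching one mcp_servers entry over stdio; Neuron AI's McpConnector handles the actual JSON-RPC protocol and may work quite differently.

```php
<?php
// Hedged sketch: each mcp_servers entry could be launched as a subprocess
// speaking over stdio. This uses proc_open directly; Maestro's actual
// McpConnector is part of Neuron AI and may differ.

function launchMcpServer(array $entry): array
{
    $command = array_merge([$entry['command']], $entry['args'] ?? []);

    $process = proc_open(
        $command,
        [0 => ['pipe', 'r'], 1 => ['pipe', 'w'], 2 => ['pipe', 'w']],
        $pipes,
        null,
        $entry['env'] ?? null // null inherits the parent environment
    );

    if (!is_resource($process)) {
        throw new \RuntimeException('Failed to start MCP server');
    }

    return ['process' => $process, 'stdin' => $pipes[0], 'stdout' => $pipes[1]];
}

// Example with a harmless command instead of a real MCP server.
$server = launchMcpServer(['command' => 'cat']);
fwrite($server['stdin'], "ping\n");
fclose($server['stdin']);
echo stream_get_contents($server['stdout']); // cat echoes back "ping"
proc_close($server['process']);
```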
What This Demonstrates
Maestro is working proof that the patterns the rest of the industry has been building in Python and TypeScript are fully expressible in PHP today. The workflow architecture that makes tool approval possible, the event-driven rendering pipeline, the multi-provider abstraction, the MCP integration: none of this required stepping outside the PHP ecosystem.
The framework doing the heavy lifting here is Neuron AI, specifically the workflow architecture introduced in v3. Without the ability to interrupt execution mid-agent-loop and resume it based on user input, the tool approval system would require significantly more scaffolding to build and maintain.
If you want to explore the code, the repository is at github.com/neuron-core/maestro. The Neuron AI documentation lives at docs.neuron-ai.dev. Questions, issues, and pull requests are open.


