Replication Guide
What can be copied directly vs. what must be adapted. Common replication mistakes.
Purpose of This Section
This section explains how the architecture described in this paper can be replicated responsibly by others. Replication is not cloning. It is informed adaptation: an effort to reuse the structure of this system without copying assumptions that may not hold in a different context. A design that cannot be adapted safely should not be replicated at all, and this guide exists to make the boundary between portable principles and context-dependent choices as clear as possible.
What Can Be Copied Directly
Certain elements of the architecture are intentionally generic and can be reused with minimal modification.
The structural principles — treating the assistant as a coworker rather than a plugin, separating identity from authority from execution, defaulting to fail-closed behavior, requiring human approval for irreversible actions, and using documentation as a safety control — are context-agnostic. They apply regardless of which AI assistant is deployed, which cloud or on-premises environment hosts it, or which specific tools are in use. These principles form the conceptual backbone of the architecture, and they are the most valuable component to carry forward.
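To make two of these principles concrete, fail-closed defaults and human approval for irreversible actions, the following sketch shows a dispatcher that refuses anything it cannot classify and asks a human before acting irreversibly. The action names and the approval prompt are illustrative assumptions, not the mechanism this paper's deployment uses.

    # A sketch of fail-closed dispatch, with illustrative action names.
    # Anything unclassified is refused outright, and irreversible actions
    # run only after an explicit human approval.

    REVERSIBLE = {"draft_email", "open_pull_request"}   # undoable via review
    IRREVERSIBLE = {"send_email", "delete_branch"}      # require a human

    def dispatch(action: str, execute) -> None:
        if action in REVERSIBLE:
            execute()
        elif action in IRREVERSIBLE:
            answer = input(f"Approve irreversible action '{action}'? [y/N] ")
            if answer.strip().lower() == "y":
                execute()
            else:
                print(f"Denied: {action}")
        else:
            # Fail closed: unknown actions are refused, never attempted.
            print(f"Refused unclassified action: {action}")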
The governance mechanisms are similarly portable. Pull requests as approval gates, skill pre-ingestion analysis pipelines, alert-driven state transitions, hard API spend caps, and explicit update approval workflows can all be reused in most environments. Their value lies in the process they enforce, not in implementation details that might vary across platforms.
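As one example, a hard API spend cap needs very little machinery to enforce locally. The sketch below assumes the caller can estimate the cost of each call before making it; the class and field names are illustrative, and provider-side billing limits should still back the local cap wherever they are available.

    # A sketch of a locally enforced hard spend cap, assuming the caller
    # can estimate the cost of each external call before making it.
    # Class and field names are illustrative.

    class SpendCapExceeded(RuntimeError):
        pass

    class SpendCap:
        def __init__(self, cap_usd: float):
            self.cap_usd = cap_usd
            self.spent_usd = 0.0

        def charge(self, estimated_cost_usd: float) -> None:
            # Reserve budget before the call; refuse rather than overspend.
            if self.spent_usd + estimated_cost_usd > self.cap_usd:
                raise SpendCapExceeded(
                    f"spent ${self.spent_usd:.2f} of ${self.cap_usd:.2f}"
                )
            self.spent_usd += estimated_cost_usd

    cap = SpendCap(cap_usd=25.00)
    cap.charge(0.12)  # proceeds; raises once the cap would be exceeded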
The documentation patterns — plain-text Markdown, Git-backed memory, decision logs with rationale capture, and explicit documentation of rejected alternatives — can be adopted directly in nearly any technical environment. They require no specialized tooling and impose no vendor dependency. Transparency, as a practice, scales better than any particular tool for achieving it.
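As an illustration, rationale capture can be as small as an append-and-commit helper. The sketch below assumes a Git repository and a conventional log location; the path and the entry layout are illustrative choices, not prescriptions.

    # A sketch of rationale capture: append a decision entry, including
    # rejected alternatives, and commit it so the record is versioned.
    # The file path and entry layout are assumed conventions.

    import datetime
    import subprocess

    LOG_PATH = "docs/decisions.md"  # assumed location inside a Git repo

    def log_decision(title: str, rationale: str, rejected: list[str]) -> None:
        stamp = datetime.date.today().isoformat()
        lines = [
            f"\n## {stamp}: {title}\n",
            f"Rationale: {rationale}\n",
            "Rejected alternatives:\n",
        ] + [f"- {alt}\n" for alt in rejected]
        with open(LOG_PATH, "a", encoding="utf-8") as f:
            f.writelines(lines)
        subprocess.run(["git", "add", LOG_PATH], check=True)
        subprocess.run(["git", "commit", "-m", f"decision: {title}"], check=True)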
What Must Be Adapted
Some elements of the architecture are inherently contextual and cannot be copied without reassessment.
The deployment environment is the most obvious example. This paper describes an on-premises VM on physical hardware controlled by the operator. A different operator might deploy to a VPS, a cloud provider, or a containerized environment. Each of these choices changes the physical access risk profile, the network topology, the power and connectivity assumptions, and the failure modes that need to be designed for. The principle — that the assistant’s execution should be isolated and inspectable — is portable. The specific deployment choice is not, and should be revisited against the replicator’s own threat model.
Identity providers and accounts must also be adapted. Identity separation is mandatory; specific providers are not. The platforms available for email, source control, calendar, and API access vary across individuals and organizations. What matters is that the assistant holds its own accounts on whatever platforms are used, that authentication is strong, and that isolation between the assistant’s identity and the operator’s identity is maintained. The principle survives; the implementation details change.
The human oversight model requires particular attention. This architecture assumes a single accountable human operator with direct availability for acknowledgment and approval. If the replicating environment involves multiple operators, shift-based supervision, or organizational governance structures, the judgment, escalation, and accountability mechanisms must be explicitly redefined. A system designed for one supervisor does not automatically become safe under shared supervision — the question of who holds authority at any given moment must be answered clearly, or the review gates that depend on human judgment will degrade into ambiguity.
Common Replication Mistakes
Several errors recur when others attempt to replicate this architecture, and they are worth naming explicitly.
The most common mistake is reverting to the plugin model. Sharing credentials with the assistant, allowing it to write directly to human repositories, or bypassing review for tasks deemed routine or low-risk undoes most of the safety guarantees the architecture provides. Each of these shortcuts feels reasonable in isolation, but collectively they collapse the identity separation and review boundaries that the architecture depends on. Convenience creep is the fastest failure mode — not because any single shortcut is catastrophic, but because each one erodes the discipline that makes the overall system trustworthy.
The second common mistake is over-automating safeguards. Automating approval decisions, risk acceptance, or escalation resolution defeats the purpose of human judgment in the architecture. Safeguards exist to interrupt automation, not to extend it. A review gate that auto-approves after a timeout is not a review gate. An escalation that resolves itself without human input is not an escalation. If the friction feels burdensome, the correct response is to evaluate whether the underlying process is calibrated appropriately, not to remove the friction.
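The distinction is easy to state in code. In the sketch below, a gate polls for an explicit human decision and treats a timeout as denial rather than approval; the poll_decision hook, which might read a pull request review status, is an assumed name.

    # A sketch of a gate that cannot auto-approve. poll_decision() is an
    # assumed hook that returns True or False once a human has decided,
    # and None until then; a timeout is treated as denial.

    import time

    def review_gate(poll_decision, timeout_s: float = 3600.0,
                    poll_interval_s: float = 5.0) -> bool:
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            decision = poll_decision()
            if decision is not None:
                return decision        # explicit human approval or denial
            time.sleep(poll_interval_s)
        return False                   # fail closed: deny and escalate

The important property is the final line: when no human answers, nothing is approved.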
The third mistake is chasing completeness. Attempting to cover every conceivable threat, eliminate all risk, or achieve enterprise-grade guarantees in a personal deployment usually results in brittle complexity that is harder to understand, harder to maintain, and paradoxically less secure than a simpler system with well-understood gaps. This architecture is intentionally incomplete and honest about its incompleteness. Replicators should be equally honest about theirs.
The fourth mistake is ignoring end-of-life design. Failing to plan for inactivity, operator absence, or intentional shutdown leads directly to zombie automation — systems that persist beyond their useful life, acting on stale authority with no one watching. If the replication plan does not include a credible answer to “what happens when I stop using this,” the system will linger in ways that may eventually cause harm.
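A minimal defense is an inactivity check run on a schedule. The sketch below assumes a check-in file that the operator touches periodically; the path, the threshold, and the halt behavior are illustrative, and a real halt would also disable schedulers and revoke credentials.

    # A sketch of an inactivity check, run on a schedule such as a daily
    # cron job. The check-in file and the halt behavior are assumptions.

    import datetime
    import pathlib
    import sys

    CHECKIN_FILE = pathlib.Path("state/last_operator_checkin")  # assumed path
    MAX_ABSENCE = datetime.timedelta(days=14)

    def operator_absent() -> bool:
        if not CHECKIN_FILE.exists():
            return True  # no record at all: fail closed
        last = datetime.datetime.fromtimestamp(CHECKIN_FILE.stat().st_mtime)
        return datetime.datetime.now() - last > MAX_ABSENCE

    if operator_absent():
        print("Operator absent beyond threshold; halting automation.")
        sys.exit(1)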
Minimal Viable Secure Setup
For those seeking the lowest-friction starting point, the following is sufficient: a dedicated execution environment such as a VM or container, a separate assistant identity with its own accounts, no shared credentials between the operator and the assistant, one explicit control channel for communication, Git-based documentation for memory and audit, and hard API spend caps on all external integrations. This minimal setup eliminates the most dangerous failure modes (identity collapse, silent authority escalation, unbounded external API usage, and opaque memory) without requiring the full governance apparatus described in the rest of this paper.
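One way to keep these invariants from drifting is to state them as a checked configuration rather than a mental checklist. The sketch below is illustrative only; every field name is an assumption rather than a prescribed schema.

    # A sketch of the minimal setup as a checked configuration. Every
    # field name is an illustrative assumption, not a prescribed schema.

    from dataclasses import dataclass

    @dataclass
    class MinimalSetup:
        dedicated_environment: bool   # VM or container of its own
        assistant_identity: str       # accounts owned by the assistant alone
        shared_credentials: int       # must be zero
        control_channels: int         # exactly one explicit channel
        memory_repo: str              # Git-backed documentation and audit
        api_spend_cap_usd: float      # hard cap on external integrations

        def validate(self) -> None:
            assert self.dedicated_environment, "needs its own environment"
            assert self.shared_credentials == 0, "no shared credentials"
            assert self.control_channels == 1, "one control channel"
            assert self.api_spend_cap_usd > 0, "spend cap must be set"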
Additional capabilities — new tools, external integrations, deeper automation, performance optimization — should be added only after the minimal setup is stable and the operator is comfortable with the review and oversight processes. Security debt compounds faster than feature debt, and an operator who has not yet developed the discipline of reviewing pull requests and evaluating alerts is not ready to manage a more complex system.
Replication Philosophy
A replication is successful when the system can be stopped cleanly, authority is always explicit, failures are boring and recoverable, and the operator understands why each constraint exists — not merely that it exists. Understanding the reasoning is essential because the operator will inevitably face situations where a constraint feels unnecessary or counterproductive. If they understand why the constraint was designed, they can make an informed decision about whether to maintain, modify, or remove it. If they only know the rule without the reasoning, they will either follow it blindly or discard it casually, and neither response is appropriate.
Summary
By distinguishing between reusable structure and context-specific implementation, this guide makes it possible to share the architecture without misapplying it. The structural principles, governance mechanisms, and documentation patterns are designed to survive adaptation. The deployment choices, identity providers, and oversight models are designed to be replaced. What should not change is the discipline: explicit authority, bounded risk, legible memory, and the willingness to stop when trust degrades.
This section provides guidance for replicating the architecture. The next section offers concluding reflections on the design as a whole.