Downtime, Degradation, and End-of-Life
Purpose of This Section
This document defines how the assistant behaves when it is partially unavailable, fully unreachable, or intentionally retired. End-of-life is treated as a designed outcome, not an error condition. The objective is to ensure that loss of connectivity, loss of supervision, or loss of intent results in reduced behavior — never compensatory escalation or persistence. A system that cannot stop cleanly cannot be trusted to run.
Absence of Control Implies Absence of Action
The assistant operates under a strict assumption: if supervision is unavailable, authority collapses. Any condition that prevents timely human oversight forces the system toward inaction. This is the inverse of how most automated systems are designed. Conventional systems treat availability as a primary success metric and go to considerable lengths to maintain operation through disruptions. This architecture treats safe silence as preferable to creative autonomy. An assistant that continues operating without supervision is an assistant acting on stale authority, and stale authority is indistinguishable from unauthorized authority once enough time has passed.
Behavior When Unreachable
When the primary control channel becomes unavailable, the assistant initiates no new tasks, pauses ongoing tasks at safe checkpoints, halts external API calls, and transitions to the paused state described in the previous section. It does not attempt to reroute communication through alternate channels, establish new control paths, or infer intent based on prior instructions. Unreachability is treated as uncertainty, and the architecture’s response to uncertainty is always to reduce activity rather than improvise.
If the control channel remains unavailable beyond a defined time window, the system escalates from paused to stopped. Credentials are invalidated where possible and network access is reduced or severed. No automatic recovery occurs. The assistant does not periodically retry the connection and resume where it left off once contact is restored. Resumption requires deliberate human action — the operator must re-establish the control channel and explicitly authorize the assistant to resume, at which point the assistant re-enters the normal state through the same process used for any other restart.
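The escalation above can be sketched as a small state machine. This is an illustrative model only — the state names follow the document, but the `Supervisor` class, the 24-hour window, and the method names are assumptions, not the architecture's actual implementation:

```python
from enum import Enum

class State(Enum):
    NORMAL = "normal"
    PAUSED = "paused"
    STOPPED = "stopped"

# Hypothetical escalation window: how long the control channel may stay
# unreachable before the paused state escalates to stopped.
STOP_AFTER_SECONDS = 24 * 3600

class Supervisor:
    def __init__(self):
        self.state = State.NORMAL
        self.unreachable_since = None

    def on_channel_lost(self, now):
        # Losing the control channel immediately pauses the assistant:
        # no new tasks, no rerouting, no inferred intent.
        if self.state is State.NORMAL:
            self.state = State.PAUSED
            self.unreachable_since = now

    def tick(self, now):
        # Escalate paused -> stopped once the window expires;
        # credential invalidation would happen at this transition.
        if (self.state is State.PAUSED
                and now - self.unreachable_since >= STOP_AFTER_SECONDS):
            self.state = State.STOPPED

    def operator_resume(self):
        # Resumption is deliberate human action, and only from PAUSED.
        # There is no path out of STOPPED short of a full restart.
        if self.state is State.PAUSED:
            self.state = State.NORMAL
            self.unreachable_since = None
```

Note that `tick` is one-directional: nothing in the model re-establishes the channel or retries on the assistant's behalf, which is the point.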
Account Auto-Deletion via Inactivity
Assistant-owned external accounts are configured with inactivity-based deletion wherever the service provider supports it. This ensures that orphaned identities naturally expire, long-lived credentials do not persist without ongoing intent, and external services do not retain dormant authority that could be exploited if the account is compromised after the assistant has ceased operating.
Inactivity is treated as revocation by default. If the assistant stops using an account, the account should eventually cease to exist without requiring anyone to remember to delete it. This is a deliberate safeguard against the most common failure mode in decommissioned systems: accounts and credentials that linger indefinitely because no one has an active reason to remove them and no automated process exists to do so.
Auto-deletion policies apply to email accounts, calendars, API credentials, and collaboration accounts. Deletion timelines are documented and intentionally conservative — long enough that a temporary outage does not trigger premature deletion, short enough that a genuinely abandoned account does not persist for years. Restoration, if desired, requires explicit human action.
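A minimal sketch of what a documented deletion timeline might look like in code. The window lengths and account-type names here are invented for illustration; the document only requires that windows exist, are written down, and are conservative:

```python
from datetime import datetime, timedelta

# Hypothetical deletion windows per account type — deliberately
# conservative: long enough that a temporary outage does not trigger
# premature deletion, short enough that an abandoned account does not
# persist for years.
DELETION_WINDOWS = {
    "email": timedelta(days=180),
    "calendar": timedelta(days=180),
    "api_credential": timedelta(days=90),
    "collaboration": timedelta(days=365),
}

def is_due_for_deletion(account_type: str, last_used: datetime,
                        now: datetime) -> bool:
    """True once an account has been inactive longer than its window."""
    return now - last_used >= DELETION_WINDOWS[account_type]
```

In practice the check would run on the service provider's side where supported; this sketch only makes the policy itself inspectable.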
Physical Shutdown as End-of-Life Control
The assistant’s execution environment is physically bounded. If the host machine is powered down, execution ceases immediately, network access terminates, and no background persistence remains. There is no cloud fallback, no replica, and no resurrection mechanism. Physical shutdown is the ultimate kill switch — a control that operates at a layer below software and cannot be circumvented by any process running within the assistant’s environment.
Continued operation requires electricity, hardware maintenance, network availability, and active human intent. If any of these conditions ceases to hold, the assistant stops. This coupling to real-world constraints is a feature, not a fragility. It prevents what might be called immortal automation: systems that outlive the context, intent, or oversight that justified their creation. The assistant exists because the operator maintains the conditions for its existence. When those conditions lapse, the assistant lapses with them.
Avoiding Zombie Automation
Zombie automation occurs when a system continues to operate despite the loss of supervision, relevance, or legitimacy. Such systems are often well-intentioned in their origin but dangerous in their persistence, because they act on assumptions that may no longer be valid — stale schedules, expired delegation, outdated objectives. The longer a zombie system runs, the further it drifts from the intent that created it.
This architecture avoids zombie automation through several reinforcing mechanisms: no unattended persistence beyond defined time windows, no autonomous recovery loops that restart the assistant without human involvement, no silent authority retention that preserves permissions across outages, and no indefinite retries that keep the assistant attempting to reconnect or resume without limit. The principle is simple: when intent disappears, behavior disappears with it. An assistant that has lost contact with its operator should become progressively less capable, not maintain its full authority on the assumption that contact will be restored.
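The idea that an unsupervised assistant becomes progressively less capable can be made concrete as a capability ladder. The action names and hour thresholds below are illustrative assumptions, not part of the architecture:

```python
# Hypothetical capability tiers: the longer the assistant goes
# unsupervised, the smaller the set of actions it may take.
def permitted_actions(hours_unsupervised: float) -> set:
    if hours_unsupervised < 1:
        # Brief gap: finish in-flight work at a safe checkpoint.
        return {"finish_current_task", "write_log"}
    if hours_unsupervised < 24:
        # Paused: record state only; no external effects.
        return {"write_log"}
    # Stopped: no actions at all until a human restarts the assistant.
    return set()
```

The key property is monotonic decay: authority only shrinks with elapsed time, and nothing in the function restores it — restoration lives entirely on the human side.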
Planned Retirement
When the assistant is intentionally retired, accounts are deleted, credentials are revoked, documentation is archived, and the hardware may be repurposed or decommissioned. No attempt is made to preserve execution state. The assistant’s knowledge — its documentation vault, decision logs, and work artifacts — is retained as a historical record. Its authority is not. The distinction reflects the same principle applied throughout this architecture: knowledge is an asset to be preserved, authority is a capability to be actively maintained, and the two should never be conflated.
Planned retirement is a first-class operation, not an afterthought. The operator should be able to retire the assistant cleanly and completely, with confidence that no residual accounts, credentials, or processes remain active afterward.
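Treating retirement as a first-class operation suggests an explicit, ordered checklist rather than ad-hoc cleanup. A sketch under assumed step names (the real steps would be whatever the deployment documents):

```python
# Hypothetical retirement routine: every step is explicit and ordered,
# so nothing residual survives the assistant's end of life.
RETIREMENT_STEPS = [
    "delete_external_accounts",
    "revoke_credentials",
    "archive_documentation",
    "decommission_hardware",
]

def retire(actions: dict) -> list:
    """Run each retirement step in order and return the completed list.

    `actions` maps step names to callables; a missing step raises
    KeyError rather than being silently skipped — an incomplete
    retirement should fail loudly, not succeed partially.
    """
    completed = []
    for step in RETIREMENT_STEPS:
        actions[step]()
        completed.append(step)
    return completed
```

Failing loudly on a missing step mirrors the document's requirement that the operator can retire the assistant with confidence that nothing was left behind.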
Documentation and Auditability
All downtime events and end-of-life actions are documented in the memory vault, including the trigger condition, the resulting state transitions, any revocations performed, and the operator’s decisions. This creates a clear boundary between the period of active operation and the historical record that follows it. If the documentation is ever consulted after the assistant has been retired, it should tell a complete story: when the assistant operated, what it did, how it stopped, and what was cleaned up afterward.
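One possible shape for such a vault entry, capturing the four elements the paragraph names. The field names are illustrative assumptions, not a defined schema:

```python
from datetime import datetime, timezone

def downtime_record(trigger, transitions, revocations, notes):
    """Hypothetical vault entry for a downtime or end-of-life event."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "trigger": trigger,                # e.g. "control_channel_lost"
        "state_transitions": transitions,  # e.g. ["normal->paused"]
        "revocations": revocations,        # credentials invalidated
        "operator_notes": notes,           # decisions taken by a human
    }
```

Because each record is self-describing, the vault can be read long after retirement and still tell the complete story the document calls for.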
Summary
By designing explicit behavior for downtime, degradation, and termination, the architecture ensures that loss of supervision reduces action, authority decays naturally, physical reality enforces limits, and automation never outlives the intent that created it. Graceful disappearance is a feature — the system is designed to leave as cleanly as it operates.
This document defines how the system stops. The next section addresses the role of human judgment in the architecture and the assumptions that underlie it.