Designing Systems That Remember Their Own Decisions
This article is in reference to:
Your System Isn’t Broken. It Was Built That Way
As seen on: cfcx.work
The quiet reason teams say “the system can’t”
This piece exists because “the system can’t do that” is almost never about software. It is about memory.
The original article is not really about NetSuite, saved searches, or audit exports. It is about what happens when an organization forgets the reasons behind its own choices, and how that amnesia hardens into constraint. The author is pointing at a simple but uncomfortable claim: most systems are not broken; they are faithfully enforcing decisions no one remembers making.
That is why it matters. If leaders misdiagnose the problem as missing features or bad tools, they reach for new platforms and bigger projects. If they see it instead as lost intent and unmanaged workarounds, the remedy shifts toward something quieter: designing systems that can explain themselves over time.
From temporary detours to structural constraint
At the core of the original post is a first-principles view: an operating system for a business is just accumulated choices. People, tools, handoffs, controls: all of it is decisions layered over time.
Most of those decisions are reasonable in the moment. A partner cannot consume the standard file format. An integration slips. A compliance deadline arrives before the training is ready. The workaround is not a failure of discipline; it is a rational response to a real constraint.
The tension appears later. The detour becomes infrastructure. The original constraint expires, but the workaround does not. What was once a conscious “we’ll do it this way for now” decays into a permanent “we can’t do it the other way.”
The post names this phenomenon operational debt, distinct from technical debt. That distinction is important to the author's purpose:
- Technical debt is usually visible to specialists. It lives in code, backlogs, and performance metrics.
- Operational debt is encoded in how work actually happens: checklists, spreadsheet bridges, saved searches, and unwritten know-how.

By naming operational debt, the author is widening the frame. The limiting factor in many ERPs is not what the software can do, but what the operating patterns allow people to see and safely change. The real risk is illegibility: when no one can distinguish between intentional design and historical accident, every improvement feels like surgery without an x-ray.
Workarounds as signals, not shame
A key move in the article is to neutralize the moral charge around workarounds. The examples are deliberately ordinary: a custom search because native reporting is hard to use, a manual approval because roles are not designed yet, a CSV import instead of a finished integration.
The message is that none of these are inherently wrong. The system is not being “abused.” People are doing the best they can under constraints.
This reframing serves two purposes:
- It reduces blame. Teams can talk honestly about exceptions without defending them.
- It turns workarounds into signals. Each one is evidence of a missing capability, an unresolved risk, or a design gap.

The author's intent is not to eliminate temporary fixes. It is to give organizations a way to treat them as managed exceptions rather than invisible sediment. The language of "deprecation triggers" and "temporary" labels is less about documentation hygiene and more about teaching the system to surface its own unfinished decisions.
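As a minimal sketch of what "surfacing unfinished decisions" could look like in practice, the snippet below assumes a hypothetical labeling convention ("TEMP until <event>" written into an artifact's description) and invented artifact names; neither appears in the original post:

```python
import re

# Hypothetical convention: temporary artifacts carry a "TEMP until <event>"
# marker in their description, so a routine scan can list every exception
# that is still waiting for its constraint to expire.
descriptions = {
    "customsearch_aud_export": "TEMP until audit firm accepts native export",
    "customsearch_sales_by_rep": "Standard monthly sales report",
    "manual_po_checklist": "TEMP until approval roles are configured",
}

TEMP_MARKER = re.compile(r"\bTEMP until (?P<trigger>.+)", re.IGNORECASE)

def unfinished_decisions(items: dict) -> dict:
    """Return every artifact still flagged as a managed exception,
    mapped to the event that should retire it."""
    found = {}
    for name, desc in items.items():
        match = TEMP_MARKER.search(desc)
        if match:
            found[name] = match.group("trigger")
    return found

for name, trigger in unfinished_decisions(descriptions).items():
    print(f"{name}: retire when '{trigger}'")
```

The payoff is that "temporary" stops being a verbal promise and becomes something the system can enumerate on demand.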
Embedding memory where work actually happens
The practical guidance about where to store rationale (saved search descriptions, script comments, SOP lines, runbooks) is not just a set of tips for better notes. It reflects a deeper design principle: memory must live on operational surfaces.
The author is pushing against a familiar pattern. Context gets trapped in implementation artifacts: project plans, chat threads, email chains. Once the go-live moment passes, those artifacts sink into archives where operators never look. The result is a system that executes steps but cannot explain why those steps exist.
By embedding “why this exists” and “review when” directly into the objects people touch to do the work, the organization gains two things:
- Local legibility: the person running a report or following a checklist can see that they are in a temporary lane, not the main road.
- Recoverable intent: future maintainers can reconstruct the original constraint and decide whether it still applies.

This is the systems-level goal behind the concrete NetSuite examples. The tool becomes a medium for institutional memory, not just transaction processing.
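One way to embed "why this exists" and "review when" directly on an operational surface is to standardize the block of text that goes into a description field or script header. The sketch below is illustrative only; the field names, example constraint, and owner address are assumptions, not anything the original article prescribes:

```python
from dataclasses import dataclass

@dataclass
class Rationale:
    """Context worth carrying inside the artifact itself, such as a
    saved search description, a script comment, or an SOP line."""
    why: str          # the constraint that made this necessary
    review_when: str  # the event that should trigger reconsideration
    owner: str        # who answers for this exception

    def as_description(self) -> str:
        """Render a block suitable for pasting into a description field."""
        return (
            f"WHY THIS EXISTS: {self.why}\n"
            f"REVIEW WHEN: {self.review_when}\n"
            f"OWNER: {self.owner}"
        )

# Hypothetical example in the spirit of the post's auditor scenario.
note = Rationale(
    why="Auditor cannot consume the native report format",
    review_when="Audit firm changes or native export is fixed",
    owner="controller@example.com",
)
print(note.as_description())
```

The point is not the data structure but the discipline: the rationale travels with the object people actually touch, so local legibility survives staff turnover.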
Turning exceptions into a managed inventory
The other structural idea in the post is to treat exceptions like inventory. That framing signals a trade-off the author wants leaders to recognize.
Most organizations handle accumulated workarounds through occasional, high-effort cleanups: every year or two, someone sponsors a “process rationalization” or “field cleanup” project. These projects tend to be expensive, demoralizing, and incomplete. They depend on bursts of heroism rather than a stable habit.
By contrast, inventory thinking is quiet and continuous. You do not wait 18 months to see what is on the shelf. You count, label, and rotate stock as part of the job.
Transposed into operations, this yields the author’s pattern:
- Keep a simple register of workarounds, with owners and triggers.
- Make review routine, attached to existing governance cadences.
- Require a retirement trigger whenever someone proposes a manual or temporary step.

The purpose of these steps is not bureaucratic control. It is to reduce the perceived risk of change. When every exception has an owner, a reason, and a clear event that forces review, teams can alter workflows without feeling like they are disturbing a fragile mystery.
This is the deeper “why” behind the auditor example. The point is not that one should name a saved search carefully. It is that the organization is choosing to embed conditions under which today’s reasonable compromise will be questioned tomorrow.
What this signals to executives
The article also carries a subtle message for leaders deciding whether to buy new tools or redesign existing ones.
When a team says, “the system can’t,” executives often interpret it as a capability gap: the platform is inadequate; we need something more modern. The author suggests a different diagnostic path: ask, “what constraint are we still honoring that no longer exists?”
Two specific indicators are highlighted:
- Process knowledge is concentrated in a small group that acts as translators.
- Change feels unsafe because no one can predict side effects.

Both are symptoms of high operational debt and low legibility. The proposed remedy is not necessarily a new system, but a system that can explain itself. That is what pairing workarounds with deprecation triggers is trying to build: a way for the organization to see its own logic, not just its outputs.
For executives, the post is an invitation to shift what they listen for. “The system can’t” might be a request for capital expenditure. It might also be an expression of uncertainty: “we no longer understand how our own decisions are wired together.” The underlying argument is that the second problem is more dangerous and less visible than the first.
Designing systems that can change
Ultimately, this article is about the kind of systems organizations want to live with over time. A system that works today but cannot safely change is fragile, even if its feature list is impressive.
The author is making a quiet bet: the path to resilient ERPs and operating environments does not depend on perfect design at go-live. It depends on an explicit practice of retiring exceptions, preserving intent, and making workarounds visible as temporary choices rather than invisible fate.
Looking ahead, the practical question for any team is simple: where are we treating yesterday's "for now" as today's "forever," and how could we make those boundaries visible again? The techniques in the original post (deprecation triggers, embedded rationale, workaround registers) are small levers. Their real purpose is to restore the organization's ability to see what it has already decided, so it can decide differently.
In the end, the claim that “your system isn’t broken; it was built that way” is not an accusation. It is an invitation to design systems that remember their own history, so that change feels like maintenance instead of excavation.