Finishing the Design, Not Blaming the User
This article is in reference to:
Your System Isn’t Broken. It’s Incomplete.
As seen on: cfcx.work
Operational failures as symptoms, not causes
This piece exists because most organizations are misdiagnosing their pain. They experience a report that will not run, an integration that rejects records, a reconciliation that never quite ties, and they reach for the nearest story: something broke, someone erred, some change slipped through.
The original post argues that these visible failures are rarely the real defect. They are surface signals of an unfinished design: systems that record activity but do not fully carry the decisions, context, and guardrails the business actually relies on.
That distinction matters. If a system is “broken,” the remedy is technical heroics and user discipline. If it is “incomplete,” the remedy is conceptual: finish the thinking about where truth lives, how it propagates, and which decisions belong to software versus people.
Shifting the frame: from users as glue to systems as containers of decisions
The post is, at its core, a challenge to a pervasive but rarely examined assumption: that people will reliably remember and execute the invisible parts of a process.
Modern ERP implementations often treat the system as a sophisticated data entry surface. Transactions are captured, but the reasoning that should connect master data, rules, and outcomes is left floating in training decks, emails, and institutional memory.
To bridge that gap, teams introduce “user tasks”: extra fields, checkboxes, reminders, and SOPs that ask individuals to act as the missing logic layer. The system becomes a ledger while humans serve as real-time compilers of unencoded business rules.
The post’s deeper point is that this is not neutral. Each manual step is a design decision that transfers cognitive load and risk from the system to the user. Over time, that transfer shapes the culture:
- Errors are framed as individual failures rather than predictable outcomes of ambiguity.
- Training is treated as an evergreen fix, regardless of how many rules accumulate.
- “Good” performers are those who remember and patch gaps, not those who question the design.

Seen from this angle, the essay is not just about NetSuite or ERP configuration. It is about who the organization expects to think, and where that thinking is allowed to live.
Design gaps as structural debt
Calling incomplete logic a “user task” is, in the author’s framing, a form of debt. It functions now, but it creates obligations that intensify under scale, turnover, and complexity.
The operational symptoms it describes—ambiguity, inconsistency, latency, fragility—are not random failure modes. They are what happens when a system is asked to run on unstated rules:
- Ambiguity arises when the system allows multiple interpretations of the same scenario because the rule has never been made explicit.
- Inconsistency follows naturally when different people apply their own local heuristics.
- Latency appears when mistakes are detected only downstream, turning clean-up into a periodic crisis.
- Fragility emerges when the process functions only under ideal conditions: full context, perfect memory, low volume.

By naming these issues as design outcomes rather than human shortcomings, the post reframes “user error” as an artifact of governance. The debt is not in misclicks; it is in leaving core decisions unmodeled.
This is the trade-off the article is trying to surface: every time an organization chooses a quick manual patch over a structural rule, it is borrowing against future stability. The cost shows up later as reconciliation work, integration firefights, and the quiet normalization of heroics.
Completeness as knowing what the system already knows
At a more fundamental level, the essay is about epistemology inside systems: what does the system know, and how does it prove that it knows it?
In the NetSuite example, the platform already holds a rich web of context—subsidiaries, tax profiles, item behaviors, transaction states. Yet implementations routinely ask users to restate that context in parallel fields: compliance categories, routing codes, flags that mirror attributes elsewhere.
The author’s argument is that parallel truths are not just redundant; they are unstable. When the same concept is represented in multiple places and kept in sync only by memory and training, drift is inevitable. Drift then becomes work: reconciliation meetings, adjustment journals, exception scripts.
By advocating “derive, don’t request,” the post is pushing for a more principled stance:
- If a value can be computed deterministically from existing data, it should be.
- If a value is the output of a rule, it should be locked or at least controlled.
- If auditability matters, the reasoning should be recorded once, rather than reinvented per transaction.

This is less about automation for its own sake and more about integrity. A system that “knows itself” does not ask people to retype its own context. It carries that context forward, and it exposes only the decisions that truly require human judgment.
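To make “derive, don’t request” concrete, here is a minimal sketch of the pattern. All names are illustrative (they are not actual NetSuite fields or APIs): the point is that a value like a compliance category is computed deterministically from master data the system already holds, rather than retyped by a user on each transaction.

```python
from dataclasses import dataclass

# Hypothetical master data the system already carries.
# Field names are illustrative, not real NetSuite attributes.
@dataclass(frozen=True)
class Item:
    item_type: str    # e.g. "inventory" or "service"
    tax_profile: str  # e.g. "standard" or "exempt"

def compliance_category(item: Item) -> str:
    """Derive the category from existing attributes instead of
    asking the user to restate it in a parallel field."""
    if item.tax_profile == "exempt":
        return "EXEMPT"
    if item.item_type == "service":
        return "SVC-STANDARD"
    return "INV-STANDARD"

# Same inputs always yield the same output: nothing to remember,
# nothing to drift out of sync.
print(compliance_category(Item("service", "standard")))  # -> SVC-STANDARD
```

Because the rule lives in one place, changing it means changing one function, not retraining everyone who fills in a field.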
Decision inventories and the politics of responsibility
The proposed tool—a decision inventory—is deceptively simple: list the decisions that must be correct for a process to succeed; then ask where those decisions live, what inputs they need, and when they should be made.
On the surface, this is a practical method for taming fragile workflows. Underneath, it is a subtle reallocation of responsibility:
- From frontline users back to system designers and process owners.
- From ad hoc fixes to explicit modeling of how the business actually works.
- From blaming individuals to interrogating structures when things go wrong.

By forcing each critical decision to have a defined source of truth and derivation point, the organization is asked to confront where it has been relying on “remember to” as governance. Integration failures that “look random,” for example, are revealed as the logical result of undecided ownership: no one has formally chosen where the external ID should come from.
The broader signal here is about maturity. Mature systems are not those with the most automation, but those where each important decision has an intentional home. The inventory is a way to surface where intent is currently distributed across emails, training sessions, and unwritten expectations.
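A decision inventory can be as plain as a small table. The sketch below, with hypothetical entries, shows the shape: each decision records where it lives, what inputs it needs, and when it is made, which makes it trivial to surface the decisions currently resting on user memory.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    name: str            # what must be decided correctly
    lives_in: str        # "rule", "user", or "upstream data"
    inputs: list[str]    # data the decision depends on
    made_when: str       # point in the process where it happens

# Illustrative inventory entries, not taken from a real implementation.
inventory = [
    Decision("external ID assignment", "user",
             ["customer record"], "at integration sync"),
    Decision("tax code selection", "rule",
             ["item tax profile", "subsidiary"], "at order entry"),
]

# Surface the structural debt: decisions governed by "remember to".
relying_on_memory = [d.name for d in inventory if d.lives_in == "user"]
print(relying_on_memory)  # -> ['external ID assignment']
```

Even this toy version makes the politics visible: every entry whose home is “user” is a candidate for redesign, and the conversation shifts from who forgot to why the decision was never given a home.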
Where rules end and judgment begins
The post is careful to draw a boundary. Not everything should be derived. Some data reflects negotiated reality or situational judgment: promise dates, exception approvals, reasons for write-offs.
The proposed test—if two qualified people have the same inputs, should they always choose the same value?—is an attempt to keep systems honest about their limits. It acknowledges that not all complexity can be reduced to rules without creating “false certainty,” where the system appears precise but is, in fact, wrong.
This distinction matters because it prevents a common overcorrection: mistaking completeness for total automation. The author’s intent is narrower and more disciplined. The goal is to remove humans from acting as glue between facts the system already has, not to erase human agency where reality is genuinely ambiguous.
In doing so, the piece is sketching a division of labor: let systems handle consistency where rules apply; let people handle trade-offs where judgment is needed; and make the boundary visible.
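That division of labor can itself be written down as a rule. The sketch below applies the post’s test as a routing function (the categories and names are my own, offered as one possible encoding): deterministic values with inputs already in the system get derived; deterministic values whose inputs are missing point at an upstream data-model gap; everything else stays with human judgment.

```python
def route_decision(deterministic: bool, inputs_in_system: bool) -> str:
    """Apply the post's test: if two qualified people with the same
    inputs should always choose the same value, it belongs to a rule."""
    if deterministic and inputs_in_system:
        return "derive"            # system owns consistency
    if deterministic:
        return "capture upstream"  # fix the data model first
    return "human judgment"        # negotiated or situational value

# A promise date reflects negotiation, not computation.
print(route_decision(deterministic=False, inputs_in_system=True))
# -> human judgment
```

The middle branch matters: when a value is rule-like but its inputs do not exist yet, the answer is not another user field but a change to the upstream structure, exactly as the closing advice suggests.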
In the end, finishing the design
This article is, at bottom, less about ERP configuration than about organizational honesty. It asks leaders and designers to admit when they have stopped halfway, and to recognize that a functioning screen and workflow do not guarantee that the system is carrying the real work of the process.
Ultimately, the claim that “your system isn’t broken, it’s incomplete” is an invitation to change questions. Instead of asking “who missed this field?” or “what changed last night?”, the more useful questions become: “What decision is the system failing to make?” and “Given what the system already knows, why are we asking a person to remember this?”
Looking ahead, the practical next step is modest but powerful: treat every recurring exception and new manual field request as a design review prompt. Before adding another user task, ask whether the necessary inputs already exist somewhere in the data model. If they do, encode the rule. If they do not, adjust the upstream structure so they can.
As organizations adopt that habit, the heroics and blame that currently surround operational failures can give way to a quieter discipline: finishing the design so that systems bear the weight they are built for, and people can focus on the decisions that truly require them.