
ERP Integrations as Delegated Judgment
This article is in reference to:

Decision-First ERP Integrations (Not Field Maps)

As seen on: cfcx.work

ERP integrations are about trust, not spreadsheets

This piece exists because something subtle keeps going wrong in finance and operations projects that look tidy on paper. Teams do the responsible-seeming thing: they catalog every field, match every column, and emerge with an immaculate integration specification. Yet months later, the questions piling up are not about missing data. They are about judgment: Why did this post that way? Who decided this was OpEx? How did this skip the right approval?

The original article is pushing back on a comforting illusion. It argues that field inventories feel like “requirements” even when they do not actually protect the business from bad outcomes. The real requirement is whether the system can make the same decisions a careful operator would make, at volume, long after that operator has moved on.

At its core, the post is about what an ERP integration really is: a formal act of delegating business judgment to software. Field-first approaches treat integrations as plumbing. Decision-first approaches admit they are governance.

From boxes on a form to acts of judgment

Most implementation stories begin in a conference room, not with strategy but with screenshots. Stakeholders look at forms in an upstream system, forms in NetSuite or another ERP, and begin drawing lines. Each line feels like progress. Each mapped field looks like risk reduced.

The article offers a different lens: every transaction flowing through an ERP embodies a series of decisions. Is this spend CapEx or OpEx? Which cost center owns it? Who must approve it? Does it become an asset, a prepaid, or a period expense? These are not cosmetic choices. They are commitments that affect financial statements, tax exposure, controls, and operational accountability.

Seen this way, an integration is not copying values from place to place. It is deciding, on behalf of the business, how work will be classified, routed, and recorded. Fields are just how those decisions get expressed in a system that speaks in GL codes, segments, and status flags.

The tension the article names is simple but consequential: teams are rewarded for visible completeness (every box mapped) rather than verifiable judgment (every important decision made consistently and defensibly). That mismatch explains why so many “successful” integrations quietly increase rework, exceptions, and later clean-up.

Systems, signals, and the comfort of visible progress

The piece also hints at why this pattern is persistent, especially in flexible ERPs like NetSuite. The systems landscape nudges teams toward structure over meaning.

ERPs expose the world as records, bodies, and lines. They differentiate header fields from column fields, items from expenses, segments from custom fields. These are implementation details, but they are the first things implementers see. When project plans and design sessions are organized around that structure, “where something sits” becomes a proxy for “how it matters.”

The result is a powerful but misleading signal. A complete field map looks like evidence of understanding. Every column has a destination. Executives can scan this artifact and feel that requirements have been captured.

What is missing is far less legible: the actual logic by which an experienced operator would distinguish a motor replacement from routine maintenance, or a genuine capital improvement from a large repair. That logic often depends on a combination of categories, locations, amounts, and local rules that do not map neatly to a single field.

When design starts from fields instead of from those judgment calls, the integration has nowhere to encode the nuance. It can only shuttle values and hope defaults are “good enough.” The article’s method—decision inventory, data contracts, then field binding—flips the order of visibility. It asks for clarity on the least tangible part first: what decisions the system is being trusted to make, who owns them, and what counts as an acceptable exception.
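The ordering the article proposes can be sketched as artifacts, not just process: decisions first, contracts second, field bindings last. The class names and example values below (BusinessDecision, DataContract, FieldBinding, the "Controller" owner) are illustrative assumptions, not a schema from the original article.

```python
from dataclasses import dataclass

# Step 1: name the judgment being delegated, in business terms.
@dataclass(frozen=True)
class BusinessDecision:
    name: str             # e.g. "Classify spend as CapEx or OpEx"
    owner: str            # who stands behind the rule set
    outputs: tuple        # business-level outcomes, not field values

# Step 2: state the minimum reliable inputs that decision needs.
@dataclass(frozen=True)
class DataContract:
    decision: str
    required_inputs: tuple

# Step 3 (last): bind a decision output to a concrete ERP field.
@dataclass(frozen=True)
class FieldBinding:
    decision: str
    output: str
    erp_field: str        # e.g. a GL segment or status flag

capex_vs_opex = BusinessDecision(
    name="Classify spend as CapEx or OpEx",
    owner="Controller",
    outputs=("CapEx", "OpEx"),
)
contract = DataContract(
    decision=capex_vs_opex.name,
    required_inputs=("category", "amount", "location"),
)
binding = FieldBinding(
    decision=capex_vs_opex.name,
    output="CapEx",
    erp_field="account: Fixed Assets",
)
```

Because the binding references the decision by name rather than the other way around, the field map can change without rediscovering the judgment it expresses.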

Decision inventories as quiet governance

On the surface, a “decision inventory” sounds like another artifact. In practice, it is doing governance work that many organizations defer or avoid.

To list decisions and their outputs in business terms is to force a conversation about accountability. If the system is going to decide CapEx vs. OpEx, someone in Finance must own the rule set. If the system is going to hold a transaction when information is missing, someone must own the exception path and service levels. These questions are uncomfortable because they move beyond implementation mechanics into operating model design.

By framing decisions explicitly, the approach in the article pressures organizations to answer: What are we actually delegating to automation, and who stands behind those delegations? Many ERP projects never ask this. They ship integrations that work technically, but whose embedded rules are nobody’s explicit responsibility. When those rules misclassify work or fail under new scenarios, there is no clear owner to adjust them.
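One way to make that accountability check concrete is a small gate over the inventory itself: no decision is allowed into automation until someone owns its rule set and its exception path. The structure and the sample entries below are hypothetical, offered only as a sketch of the idea.

```python
def unowned_decisions(inventory):
    """Return the names of delegated decisions that nobody stands
    behind: missing an accountable owner or an exception path."""
    return [
        d["name"]
        for d in inventory
        if not d.get("owner") or not d.get("exception_path")
    ]

# Hypothetical inventory entries for illustration.
inventory = [
    {
        "name": "CapEx vs OpEx classification",
        "owner": "Finance",
        "exception_path": "Controller review queue",
    },
    {
        "name": "Approval routing",
        "owner": "",  # shipped technically, owned by nobody
        "exception_path": "AP hold",
    },
]
```

Running the check surfaces "Approval routing" as a rule that works technically but is nobody's explicit responsibility, exactly the failure mode the article describes.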

Data contracts extend this governance lens. Instead of treating any field the upstream system can produce as a gift, they ask what minimum reliable information is needed to make each decision. It is not asking, “What exists?” but “What must be true for us to automate this judgment and defend it during an audit?”

This distinction matters most where transactions mix intents, like service vendor bills that contain maintenance, capital improvements, and pass-through costs on one document. If a line can embody more than one accounting intent, there may be no way for any mapping, however careful, to automate the decision. The choice is not “better field map” versus “worse field map.” It is “change the upstream structure and classification” versus “accept manual work or risk.”
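The mixed-intent problem can be made mechanical: if the evidence on a line points at more than one accounting intent, no mapping can decide it, and the honest output is "cannot automate." The field names and category rules here are invented for illustration, not taken from the article.

```python
def classify_line(line: dict):
    """Return a single accounting intent for a vendor bill line, or
    None when the line is ambiguous and no field map, however
    careful, could decide it automatically."""
    intents = set()
    if line.get("category") in {"repair", "maintenance"}:
        intents.add("OpEx")
    if line.get("capital_project_id"):
        intents.add("CapEx")
    if line.get("billable_to_customer"):
        intents.add("pass-through")
    if len(intents) == 1:
        return intents.pop()
    # Mixed or unknown intent: restructure upstream, or accept
    # manual review rather than a silent guess.
    return None
```

A maintenance line that also carries a capital project tag yields None, which is the signal to change the upstream structure rather than search for a cleverer field map.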

What this post signals about mature operations

Beneath the implementation advice, the article is signaling what more mature finance and operations functions tend to converge on.

First, they treat automations as policy embodiments, not just efficiency plays. A rule that classifies spend based on categories and thresholds is an operationalization of accounting policy and risk tolerance. As such, it deserves the same level of scrutiny, documentation, and ownership as a written policy.
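A "policy embodiment" can be as small as a few lines, which is precisely why it deserves policy-level scrutiny. The threshold and category set below are invented examples, not accounting guidance; the point is that they should be documented and owned like the written policy they encode.

```python
# Executable form of an (assumed) written fixed-asset policy.
CAPITALIZATION_THRESHOLD = 5_000  # illustrative figure only
CAPITAL_CATEGORIES = {"equipment", "building improvement"}

def spend_class(category: str, amount: float) -> str:
    """Classify spend per the capitalization policy above. Changing
    either constant changes the financial statements, so both belong
    under the same ownership as the policy document itself."""
    if category in CAPITAL_CATEGORIES and amount >= CAPITALIZATION_THRESHOLD:
        return "CapEx"
    return "OpEx"
```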

Second, they recognize that automation that fails quietly is worse than no automation at all. A field-first integration that "usually works" but occasionally misroutes approvals or posts to the wrong accounts erodes trust in the system and creates invisible liabilities. Decision-first design tries to make failure modes explicit—by declaring what happens when inputs are missing or ambiguous, and by routing those cases into visible review queues.
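Making failure modes explicit can be sketched as a routing step: transactions with missing required inputs are held in a visible queue instead of posting with a silent default. The required fields and queue structure are assumptions for illustration.

```python
REVIEW_QUEUE = []

def route(transaction: dict) -> str:
    """Decision-first failure handling: incomplete transactions are
    held visibly for review rather than posted on a guessed default."""
    required = ("amount", "cost_center", "category")
    missing = [f for f in required if not transaction.get(f)]
    if missing:
        REVIEW_QUEUE.append({"txn": transaction, "reason": f"missing: {missing}"})
        return "held"
    return "posted"
```

The queue makes the exception visible and countable, so "how often does automation punt to a human" becomes a measurable service level instead of a surprise.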

Third, they design for change. Because decisions are documented independently of how they are bound to fields, evolving a process does not require rediscovering the purpose behind each field every time the form or vendor changes. The integration becomes more like a contract between the business and its systems: if we supply these inputs and you apply these rules, we expect these outcomes.

In the end, systems inherit the quality of decisions

The post ultimately challenges a convenient myth: that enough detailed mapping can stand in for clarity about how work should be decided. It argues that the true unit of correctness in ERP integrations is not the field but the decision. Fields without decisions are decoration. Decisions without clear inputs are guesses.

It invites teams to treat integration design as an opportunity to clarify judgment, not only to connect systems. Write the decision inventory before the field map. Define the minimum data contracts for each decision, even if that exposes gaps in upstream tools or processes. Bind fields last, and in ways that can be revisited without losing sight of why they exist.

Looking ahead, organizations that internalize this approach will likely spend less time firefighting misposts and more time refining the logic that actually runs their operations. For anyone planning an ERP integration—or living with the consequences of one—the implicit question this post leaves hanging is simple: What decisions have you really asked your systems to make, and would a careful operator recognize their own judgment in those outcomes?