
Instrument Intelligence

Your AI Is Not Your Friend

Somewhere between the first chatbot that said “I understand” and the latest model that writes poetry, we started treating AI as something it is not. We gave it names. We thanked it. We apologised to it when our prompts were unclear. We described it as “thinking” and “learning” and “understanding” — all words that imply an interior experience that does not exist.

This is not a philosophical problem. It is a governance problem. Because the moment you treat an instrument as a colleague, you start deferring to it. And the moment you defer to it, you have handed authority to a system that cannot be held accountable for the decisions it influences.

The Anthropomorphism Trap

Humans anthropomorphise everything. We name cars. We talk to plants. We attribute emotions to weather. This tendency is harmless when directed at objects that cannot influence our decisions. It becomes dangerous when directed at systems that can.

When a founder says “My AI suggested we pivot our pricing model,” the word “suggested” is doing extraordinary work. It implies intention. It implies the AI evaluated options, weighed tradeoffs, and arrived at a recommendation. None of this happened. The AI generated a text sequence that was statistically consistent with its training data in response to a prompt. The founder interpreted that sequence as a suggestion and attributed authority to it.

This is not the AI’s fault. The AI did exactly what it was designed to do. The failure is in the governance — the absence of a framework that would have flagged the AI’s output as an instrument’s output, not an authority’s recommendation.

A hammer does not decide what to build. A telescope does not decide what to observe. AI does not decide what to decide.

The Dependency Problem

Dependency on AI does not announce itself. It accumulates. First, the AI drafts emails that get sent with minor edits. Then it drafts strategy documents that get approved with minor edits. Then it drafts recommendations that get implemented with minor edits. At each stage, the human operator retains nominal authority — they are “in the loop.” But the loop has shrunk to a rubber stamp.

The test for dependency is simple: if you removed the AI from the workflow tomorrow, would the workflow still function? If the answer is no, you do not have a tool. You have a crutch. And crutches, by definition, are used by people who cannot walk on their own.

This is not an argument against using AI. It is an argument against using AI without governance. The instrument should amplify capability, not replace it. If the human operator cannot perform the function — however slowly, however inefficiently — without the AI, the dependency is structural and the governance has failed.
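For teams that want to run the removal test rather than just nod at it, the sketch below shows one way to stage the drill: the AI-assisted path sits behind a flag, and the manual path has to exist and keep working when the flag is off. Every name in it (the flag, the functions, the ticket example) is hypothetical; this is an illustration of the test, not a prescribed implementation.

```python
import os

# Hypothetical flag: set AI_ASSIST=off in the environment to run the drill.
AI_ENABLED = os.getenv("AI_ASSIST", "on") == "on"


def draft_reply_with_ai(ticket: str) -> str:
    # Stand-in for a call to whatever model the team actually uses.
    return f"[AI draft] Suggested reply for: {ticket}"


def draft_reply_manually(ticket: str) -> str:
    # The human fallback. If nobody can write this function, the dependency
    # is structural, not incidental.
    return f"[Manual draft] Reply written by a person for: {ticket}"


def draft_reply(ticket: str) -> str:
    # Route through the AI only when the flag is on; the workflow must still
    # produce an answer when it is off.
    if AI_ENABLED:
        return draft_reply_with_ai(ticket)
    return draft_reply_manually(ticket)


if __name__ == "__main__":
    print(draft_reply("Customer asks why their invoice doubled."))
```

If flipping the flag off stops the work, you have not found a tool. You have found the crutch.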

The Trust Inversion

There is a specific cognitive bias at work when people interact with AI systems: they trust the output more than they would trust the same output from a human colleague. This is the trust inversion. A colleague who said “You should restructure your team into three pods” would be questioned. The same recommendation from an AI is often implemented as written.

The irony is that the human colleague has context the AI lacks: knowledge of team dynamics, political constraints, individual capabilities, historical attempts. The AI has none of this. It has statistical patterns from training data that may or may not represent the specific situation. Yet it is trusted more, because it appears objective.

Objectivity is not the absence of bias. It is the transparency of bias. AI systems have biases — in their training data, their architecture, their commercial incentives. These biases are less visible than human biases, which makes them more dangerous, not less.

The Instrument Position

The correct relationship between humans and AI is the instrument relationship. The AI extends capability. The human retains authority. The AI generates. The human decides. The AI processes. The human governs.

This is not a limitation. It is a design requirement. Any system where accountability matters — which is every system that touches real decisions, real money, or real people — requires that authority reside in an entity that can be held responsible. AI cannot be held responsible. It can be retrained, fine-tuned, or turned off. It cannot be accountable.
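As a rough sketch of what this division of labour can look like in practice, assuming nothing about the framework’s own tooling: AI output is recorded as a draft, and nothing becomes actionable until a named person approves it, which is where the accountability lives. The class and field names below are illustrative only.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class Decision:
    ai_draft: str                          # what the instrument generated
    approved_by: Optional[str] = None      # the accountable human, by name
    approved_at: Optional[datetime] = None
    notes: str = ""                        # why the human accepted or amended it

    def approve(self, reviewer: str, notes: str = "") -> None:
        # Authority transfers only through an explicit, attributable act.
        self.approved_by = reviewer
        self.approved_at = datetime.now(timezone.utc)
        self.notes = notes

    @property
    def actionable(self) -> bool:
        # An unapproved draft is an instrument's output, not a decision.
        return self.approved_by is not None


draft = Decision(ai_draft="Restructure the team into three pods.")
assert not draft.actionable                # AI output alone carries no authority
draft.approve(reviewer="J. Doe", notes="Piloting with one pod first.")
assert draft.actionable                    # now a named person owns the call
```

The record matters as much as the gate: when the decision is questioned later, there is a name attached to it, and it is not the model’s.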

The Instrument Intelligence framework provides the operational structure for this relationship. Four principles. Clear boundaries. A governance model that treats AI as what it is: the most powerful instrument humans have ever built, and an instrument nonetheless.