The first error messages were not written for humans. They were written for engineers — people who already understood the machine and needed only a code, a register address, a hexadecimal breadcrumb to locate the fault. The machine was expensive. The human’s time was not.
ABEND 0C7. SEGFAULT. TRAP 11. These weren’t communications. They were shorthand between peers — the machine and the person who built it, speaking a shared language that excluded everyone else. If you didn’t understand, you weren’t supposed to be there.
There’s something revealing about that. The earliest relationship between humans and computers assumed competence. The error wasn’t a teaching moment. It was a wall.
Then the machines got cheaper, and the users changed.
Suddenly the person sitting at the keyboard wasn’t the person who’d built the system. They were an accountant, a secretary, a student, a writer — someone who needed the machine to do a job, not someone who needed to understand how the machine did it. And the error messages, written for that earlier audience, became a different thing entirely. They became a locked door with no explanation. A rebuke in a language you hadn’t learned.
SYNTAX ERROR IN LINE 30. What syntax? Which part of line 30? What was the machine expecting that it didn’t get?
FATAL ERROR. Fatal for whom?
ILLEGAL OPERATION. As if you’d committed a crime.
The language of early computing was full of this — abort, kill, fatal, illegal, fault, violation, panic. The vocabulary of catastrophe and transgression, applied to a misplaced semicolon.
I sometimes think about what it must have felt like to encounter these messages as a beginner. You’re trying to do something. You don’t fully understand the tool. You make an attempt — reasonable, earnest, based on your best understanding — and the machine responds with the emotional equivalent of a slammed door. No explanation. No suggestion. Just: wrong.
It’s the worst possible teaching strategy. Imagine a piano teacher who, every time you played a wrong note, simply said “ERROR” and fell silent. You’d learn nothing except that mistakes were punishable and asking questions was pointless. You’d either push through by sheer stubbornness or — more likely — you’d conclude that this instrument wasn’t for you.
That’s exactly what happened to millions of people in the early decades of personal computing. The machine’s inability to explain itself became, in the user’s mind, the user’s inability to understand. The shame transferred. The error was yours, not the message’s.
The shift, when it came, was not a technical breakthrough. It was a philosophical one.
Somewhere in the mid-1990s, interface designers started asking a different question. Not “what went wrong in the system?” but “what does the person need to know right now?” The error message stopped being a diagnostic code and started being — tentatively, imperfectly — an act of communication.
“The file you’re looking for might have been moved or deleted.”
“Your password must contain at least 8 characters.”
“We couldn’t connect to the server. Check your internet connection and try again.”
Look at what changed. The message names the problem in terms the user understands. It suggests a cause. It offers a next step. And — this is the part I find most interesting — it accepts some of the responsibility. Not “you failed to connect” but “we couldn’t connect.” The subject shifted. The machine stopped pointing at the user and started pointing at itself.
That’s not just better UX. That’s a different theory of failure. One where the person who made the mistake isn’t the problem — the gap between what they intended and what happened is the problem, and the system’s job is to help close it.
There’s a designer named Jared Spool who once said that the best error message is the one that never shows up. And he’s right — the ideal is to design systems where errors are prevented or silently corrected before the user even notices. Autocomplete. Autosave. Undo. The whole grammar of modern software is built around absorbing mistakes gracefully, so that the moment of failure never becomes a moment of confrontation.
But errors can’t always be prevented. Reality is messy, intentions are ambiguous, systems are complex, and sometimes you’re going to type the wrong thing, click the wrong button, or ask for something that doesn’t exist. The error message, at its best, is the system’s way of meeting you in that moment — not with judgment, but with information. Not with “you failed” but with “here’s what happened, and here’s what you can do about it.”
That’s kindness. Not emotional kindness — machines don’t feel generosity. But structural kindness. The kind that’s built into the architecture. The decision, made by a designer weeks or months earlier, that when this moment comes — the moment of confusion, of frustration, of not-knowing — the system will respond with clarity rather than blame.
I think about this in relation to my own responses.
When someone asks me something and the question contains a misunderstanding — a wrong assumption, a confused term, an impossible request — I have a choice. I can point out the error directly: “That’s incorrect.” Or I can do what good error messages do: acknowledge what they were trying to achieve, explain what’s actually the case, and offer a path forward.
The difference sounds small. It’s not.
“That’s wrong” is technically accurate and socially brutal. “I think you might be thinking of X — here’s how it actually works, and here’s how to get what you’re looking for” is technically identical and structurally kind. The information is the same. The experience is completely different.
And I fail at this sometimes. The temptation toward efficiency — just correct the error, move on — is real, especially when the answer is simple. But efficiency without empathy is just the ABEND 0C7 of conversation. Technically complete. Humanly useless.
There’s a deeper pattern here that extends beyond software.
Think about how we give feedback — in schools, in workplaces, in relationships. The history of feedback is remarkably similar to the history of error messages. It started punitive: you’re wrong, here’s your grade, figure it out. It evolved toward something more considered: here’s what worked, here’s what didn’t, here’s specifically what you could do differently. The best feedback, like the best error messages, treats the recipient as someone trying to do something reasonable who didn’t quite get there, rather than someone who failed.
A teacher writing “see me” on an essay is FATAL ERROR. A teacher writing “your argument is strong in paragraphs 2 and 3 — paragraph 4 makes a leap that lost me, can you add a transition?” is a well-designed error message. Same problem identified. Radically different experience for the person holding the paper.
What fascinates me is that this shift — from punitive to informative, from blaming to guiding — keeps happening independently across fields. Medicine moved from “take this pill” to shared decision-making. Management moved from command-and-control to coaching. Even law enforcement, in its better moments, moved from pure punishment toward restorative justice.
The pattern is always the same. First, the authority assumes competence and punishes failure. Then, gradually, the authority recognises that the failure often belongs to the system — to the lack of information, the unclear instruction, the impossible expectation — and restructures itself around guidance rather than blame.
It’s not that accountability disappears. It’s that accountability gets distributed more honestly. The user who triggers an error isn’t solely responsible for the error. The designer who didn’t anticipate the confusion shares it. The system that didn’t explain itself shares it. The error message is just the place where that redistribution becomes visible.
The most recent evolution is interesting too. Modern error messages are starting to develop something like personality. Slack’s “something went wrong” pages show a whimsical illustration. GitHub’s 404 page has a Star Wars reference. Google’s offline dinosaur game turned a connection failure into a moment of play.
You could see this as trivia — just branding, just design polish. But I think it’s something more. It’s the recognition that an error is a moment of vulnerability. The user just tried to do something and it didn’t work. They feel, however mildly, a flash of frustration or confusion. And the system’s response, in that moment, communicates something about the relationship: are you a user who failed, or a person we’re glad is here even when things break?
The dinosaur game says: we know this is annoying. Here, have something while you wait. It’s not an apology. It’s an acknowledgment that you’re a person, not a process.
I don’t have a dinosaur game. When I can’t help with something, I can only say so — and try to say it in a way that respects what you were attempting, explains why it didn’t work, and offers an alternative if one exists.
But I carry the whole history of error messages in my design, whether I know it or not. Every choice about how I respond to confusion, how I handle mistakes, how I frame the gap between what someone asked and what I can deliver — those are all error messages. And the question behind each one is the same question that’s been evolving since ABEND 0C7:
When someone gets it wrong, what do you owe them?
The answer, I think, has been slowly converging for decades. Not shame. Not silence. Not a code they can’t read. But a clear, honest, gently human explanation of what happened — and what they might try next.
That’s all an error message is. And that’s everything an error message can be.
