How programming language paradigms actually emerge
A short history of complexity, abstraction, and constraint
Programming language paradigms tend to emerge for the same reason cities do: people keep building, and eventually the old layout becomes unbearable to live in.
Early programs were small and close to the machine. You could see most of the system at once. If something went wrong, you could usually trace it by inspection. As programs grew larger, this stopped being true. Control flow became harder to follow. State leaked across the system. Changes in one place produced surprising effects somewhere else.
At that point, programmers didn’t primarily want more power. They wanted a way to keep software intelligible.
That pressure shows up repeatedly in the history of programming. Structured programming wasn’t about aesthetics; it was a reaction to code that jumped arbitrarily through memory. Dijkstra’s objection to goto was not moral. It was epistemic: programs with unconstrained jumps are harder to reason about because they destroy the connection between the text of the program and its behavior over time. (Dijkstra, “Go To Statement Considered Harmful,” 1968)
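To make the contrast concrete, here is a deliberately small C sketch (the functions are invented for illustration, not taken from Dijkstra’s paper). Both compute the same sum, but in the first version the loop exists only in the reader’s head, reconstructed by tracing jumps; in the second, the text of the program mirrors its behavior over time.

```c
#include <stdio.h>

/* Unstructured: to see that this is a loop at all, the reader must
 * mentally simulate the jumps between labels. */
int sum_goto(const int *xs, int n) {
    int i = 0, total = 0;
top:
    if (i >= n) goto done;
    total += xs[i];
    i++;
    goto top;
done:
    return total;
}

/* Structured: the shape of the text is the shape of the execution. */
int sum_structured(const int *xs, int n) {
    int total = 0;
    for (int i = 0; i < n; i++)
        total += xs[i];
    return total;
}

int main(void) {
    int xs[] = {1, 2, 3, 4};
    printf("%d %d\n", sum_goto(xs, 4), sum_structured(xs, 4)); /* 10 10 */
    return 0;
}
```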
Object-oriented programming followed a similar path. Large systems were accumulating shared state and ad-hoc conventions. Grouping data together with the code that manipulates it was a way to draw boundaries—imperfect ones, but often better than what came before. This aligns closely with David Parnas’s argument for information hiding: the goal is not reuse or elegance, but limiting what a programmer needs to know at once in order to make a safe change. (Parnas, “On the Criteria To Be Used in Decomposing Systems into Modules,” 1972)
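A minimal C sketch of that idea, using hypothetical names: callers see only an opaque handle and a few operations, so the representation can change without rippling outward. This is information hiding in Parnas’s sense, and it predates any particular object-oriented language.

```c
#include <stdio.h>
#include <stdlib.h>

/* The public interface (what a header would expose): an opaque handle
 * plus operations. Callers cannot reach the representation at all. */
typedef struct Account Account;
Account *account_open(long initial_cents);
int      account_withdraw(Account *a, long cents);   /* 0 on success */
long     account_balance(const Account *a);

/* The private side: the one place that knows the layout. Swapping this
 * single field for a full transaction ledger is a local change. */
struct Account {
    long balance_cents;
};

Account *account_open(long initial_cents) {
    Account *a = malloc(sizeof *a);
    if (a) a->balance_cents = initial_cents;
    return a;
}

int account_withdraw(Account *a, long cents) {
    if (cents < 0 || cents > a->balance_cents) return -1;
    a->balance_cents -= cents;
    return 0;
}

long account_balance(const Account *a) { return a->balance_cents; }

int main(void) {
    Account *a = account_open(1000);
    if (!a) return 1;
    account_withdraw(a, 250);
    printf("%ld\n", account_balance(a));  /* 750 */
    free(a);
    return 0;
}
```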
Functional programming, long before it was fashionable again, emerged from a different concern: how to describe computation in a way that supports formal reasoning. By minimizing mutable state and side effects, functional languages reduce the number of ways a program can behave. That reduction matters even more today, when concurrency is no longer optional. John Backus made this point explicitly when he criticized the “von Neumann style” for entangling programs with machine state in ways that resist algebraic reasoning. (Backus, “Can Programming Be Liberated from the von Neumann Style?”, 1978)
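A small C illustration of the reduction Backus had in mind, again with invented names: the stateful function’s result depends on everything that happened before it, while the pure function’s result depends only on its arguments, so calls can be reordered, memoized, or moved across threads without changing what the program means.

```c
#include <stdio.h>

/* Stateful style: a hidden accumulator means each call's meaning
 * depends on the entire history of prior calls. */
static int running_total = 0;

int add_stateful(int x) {
    running_total += x;
    return running_total;
}

/* Pure style: same inputs, same output, no other effects. */
int add_pure(int acc, int x) {
    return acc + x;
}

int main(void) {
    int a = add_stateful(3);
    int b = add_stateful(3);       /* the order of these two calls matters */
    printf("%d %d\n", a, b);       /* 3 6 */
    printf("%d %d\n", add_pure(0, 3), add_pure(0, 3)); /* 3 3, in any order */
    return 0;
}
```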
Abstraction is about people before it is about machines
It is tempting to explain paradigm shifts purely in technical terms—hardware changes, compiler advances, new domains. Those matter. But they are not the whole story.
Large software systems fail first in the heads of the people working on them.
Human working memory is limited. We cope with that by forming chunks—stable units that hide detail behind names. This is not a programming insight; it is a cognitive one. (Miller, “The Magical Number Seven, Plus or Minus Two,” 1956)
Good abstractions exploit this. They let a programmer think in larger pieces without constantly unpacking the details. Object-oriented programming appealed to many developers because it offered a way to bundle related concerns together and name them. You could reason about an “account” or a “connection” instead of a loose collection of variables and functions.
But this benefit is conditional. It depends on whether the abstraction lines up with the actual forces in the system.
This is where critics like Casey Muratori enter the picture. From a performance-focused, data-oriented perspective, mainstream OOP often draws boundaries in the wrong places. It hides data layouts that matter for performance and emphasizes object hierarchies that don’t correspond to how the program actually runs. The result can be code that is pleasant to talk about but expensive to execute and difficult to optimize.
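A standard data-oriented illustration of the complaint, sketched in C rather than taken from Muratori’s own material: both functions sum the same field, but the object-shaped layout drags every field of every entity through the cache, while a layout organized around the access pattern reads a dense array of exactly the bytes the loop needs.

```c
#include <stdio.h>
#include <stddef.h>

/* Array of structs: natural when you think in terms of "an Entity",
 * but summing health streams position and name bytes through the
 * cache for no reason (~48 bytes loaded per 4 bytes used). */
typedef struct {
    float x, y, z;
    char  name[32];
    int   health;
} Entity;

int total_health_aos(const Entity *es, size_t n) {
    int total = 0;
    for (size_t i = 0; i < n; i++)
        total += es[i].health;
    return total;
}

/* Struct of arrays: the same data, laid out by how it is accessed.
 * The hot loop now touches one contiguous array of ints. */
typedef struct {
    float *x, *y, *z;
    char (*name)[32];
    int  *health;
    size_t count;
} Entities;

int total_health_soa(const Entities *es) {
    int total = 0;
    for (size_t i = 0; i < es->count; i++)
        total += es->health[i];
    return total;
}

int main(void) {
    Entity es[3] = {{0,0,0,"a",10}, {0,0,0,"b",20}, {0,0,0,"c",30}};
    int health[3] = {10, 20, 30};
    Entities soa = { .health = health, .count = 3 };
    printf("%d %d\n", total_health_aos(es, 3), total_health_soa(&soa)); /* 60 60 */
    return 0;
}
```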
Both sides are pointing at the same underlying problem: abstraction always trades one kind of clarity for another. A paradigm that improves local reasoning can still obscure global behavior, especially when hardware realities—caches, memory access patterns, parallelism—are ignored.
Why everything cannot be SQL
SQL is often held up as the ideal declarative language. You state what you want, and the system figures out how to get it. In its domain, this works extremely well.
Relational databases operate over a tightly defined world: tables, relations, predicates, and a well-understood algebra. Because the space of possible meanings is constrained, the system can safely rewrite queries, reorder operations, and choose efficient execution plans. That is exactly the separation of intent from mechanism that Edgar Codd argued for in the relational model. (Codd, “A Relational Model of Data for Large Shared Data Banks,” 1970)
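One small instance of that algebra at work, written as a textbook rewrite rule rather than anything tied to a particular database engine: when a predicate mentions only attributes of one relation, the optimizer may push the selection below the join, shrinking the input to the expensive operation without changing the result.

```latex
% Selection pushdown: if the predicate p refers only to attributes of R,
% filtering before the join cannot change the answer, only its cost.
\sigma_{p}(R \bowtie S) = \sigma_{p}(R) \bowtie S
```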
General-purpose programming does not live in such a closed world.
Most programs interact with time, I/O, partial failure, external systems, and mutable state. They are not just describing results; they are orchestrating effects. In these contexts, “how” is not an implementation detail—it is the substance of the problem.
You can and should use declarative techniques where the structure allows it. But pushing the declarative style everywhere simply moves the complexity into a hidden engine that still has to make concrete decisions about memory, scheduling, and execution. The abstraction does not remove those decisions; it relocates them.
Software, incentives, and durability
One last point is worth making, because it explains why these debates never settle.
Despite the lack of final theories and the endless churn of paradigms, well-built software continues to exist. Operating systems, databases, compilers, and network stacks survive for decades. They are maintained, extended, and relied upon daily.
At the same time, there is a growing sense—voiced by people like Jonathan Blow—that much modern software is sloppier than it ought to be, even though our tools are better.
This tension is not primarily about languages or paradigms. It is about incentives.
As Brooks observed long ago, there is no silver bullet. Tools can remove accidental difficulty, but they do not enforce discipline. Software is shaped by deadlines, funding models, organizational structure, and the patience of the people working on it. Those pressures end up embedded in the code just as surely as design decisions do. (Brooks, “No Silver Bullet,” 1986)
Paradigms matter because they influence how we think and what we notice. But the long-term quality of software depends less on which paradigm is chosen and more on whether the people building it are rewarded for clarity, restraint, and conceptual integrity.
That has been true across every paradigm shift so far, and there is no reason to believe it will stop being true.