## The wonder and promise of general-purpose computers

As I argue in the intro to this book, the advent of general-purpose computers has altered the limitations of the human condition in ways both unforeseen and still poorly understood.

In the middle of the 20th century, something exceptional in the history
of mankind happened: a *universal tool*, the general-purpose computer,
was invented. For the first time, any human could potentially become an
inventor and solve arbitrarily complex problems by projecting their
thought processes as information patterns, i.e. programs, without the
cost of manufacturing new physical artifacts.

Although Charles Babbage can be acknowledged as the forefather of this
artifact, the full scope of its generality was only understood by Turing
in the 1930s. Indeed, our current general-purpose computers
approximating Turing’s abstract machine are one of *only two ways* that
we currently know to make a computer that *can compute most of what a
human may want to compute*; the other is the queue machine, invented
later.
Furthermore, when the computer is connected to input/output interfaces,
it becomes *interactive* and can convert to and from phenomena of the
physical world. Conversely, we know of no other way to assemble
artifacts that can compute most of what humans can think of, in a way
that can influence, or be influenced by, physical phenomena, than
artifacts that can be described as interactive Turing or queue machines.

Obviously, one does not need to invoke Alan Turing nor his thoughts when
building a device that accomplishes a specific task. Census-counting
machines reading punched cards were built and used successfully long
before Turing was born. What made Turing’s contribution remarkable is
that it created *theoretical confidence* that a general-purpose hardware
platform could be successfully reused, after it is fabricated, to solve
problems not *yet* defined, thus guaranteeing perpetual employment for
software creators.

This confidence stems from the formal thesis of Alonzo Church and Alan
Turing on computability (although it really comes from collective work
with Post, Kleene and Gödel; cf. *`The
undecidable <http://isbndb.com/d/book/the_undecidable.html>`__* by
Martin Davis for the full story), which establishes that all expressible
formulas of arithmetic, which by definition are all possible
computations that humans can ever phrase intelligibly, can be computed
by an abstract Turing machine, by Church’s λ-calculus, or by partial
recursive mathematical functions. (NB: computations, i.e. computable
functions, are a subset of all functions that can be phrased by humans.
In particular, there exist intelligibly phraseable non-computable
functions, such as the `busy beaver
function <http://www.alcatel-lucent.com/bstj/vol41-1962/articles/bstj41-3-877.pdf>`__.)
The thesis further establishes that none of these formalisms can compute
anything *more*: everything we can compute using any of them can be
described by an intelligible, valid formula of arithmetic; but this
point is irrelevant here. (See *Gödel, Escher, Bach* by D. Hofstadter
for an accessible review of the theoretical and practical consequences.)
Moreover, when the machine is *universal*, the function it computes can
become a run-time input, i.e. *software*, while preserving the full
generality of the model. Because of this, a hardware platform that
resembles a universal Turing machine gives us confidence that it can be
reused in the future by new software to solve problems that have not
been formulated yet. Since the only conceptual difference between a
universal Turing machine and a concrete implementation is the finiteness
of storage capacity (vs. the infinite length of Turing’s tape), it is
possible to approximate the abstract machine increasingly accurately by
simply adding more addresses to the storage, which seems to be
technically tractable for the foreseeable future.
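The idea that the computed function can become a run-time input can be
illustrated with a minimal sketch (the function and program names below
are illustrative, not from any real system): the fixed simulation loop
plays the role of the hardware, while the transition table supplied at
run time plays the role of the software.

```python
def run_turing_machine(table, tape, state="start", steps=10_000):
    """Simulate a one-tape Turing machine.

    table maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left), 0 (stay) or +1 (right). A dict
    emulates the unbounded tape: unwritten cells read as blank "_".
    """
    cells = {i: s for i, s in enumerate(tape)}
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        new_symbol, move, state = table[(state, symbol)]
        cells[head] = new_symbol
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# "Software": a program that flips every bit of its input, then halts.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}

print(run_turing_machine(flip, "1011"))  # -> 0100
```

The same loop runs any transition table we hand it later, which is
precisely the reuse-after-fabrication property the text describes;
only the dict standing in for the infinite tape betrays the finiteness
of a concrete implementation.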

This is the crux of what *general-purpose* computing is about: design
and build a hardware platform now, with reasonable confidence, founded
in logic, that it can be used to solve future problems in software.

Since then, other formalisms distinct from Turing machines and
λ-calculus have been developed and subsequently proven to be *Turing
complete*, that is, at least as powerful as Turing machines. The models
that have received the most attention are queue machines (mentioned
above), graph reduction machines able to reduce terms of the λ-calculus,
register-based variations on the Turing machine, the π-calculus, and
specific initial states of cellular automata. Any of these models can be
made universal, i.e. programmable via software, by modeling a
single-function machine whose behavior is to read a program from an
input device and then evaluate it. They can furthermore be made
interactive, i.e. connected to the physical world, by introducing basic
actions in their evaluation rules that effect external interactions.
However, the only computing artifacts built so far have been either
Turing-like (register machines) or queue-like (stack machines). All
implementations of the other, formally as powerful, models have only
existed as simulations of their semantics in programs running on
hardware devices that can be primarily described as register or stack
machines. It is the Turing-completeness of these two specific models
that guarantees the *future utility* of the constructed artifacts.
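For concreteness, the queue machines mentioned above can be sketched in
a few lines (this is a simplified illustration under common textbook
conventions, not a description of any built artifact): each step
dequeues one symbol and, based on the current state and that symbol,
enqueues a word and changes state.

```python
from collections import deque

def run_queue_machine(table, word, state="start", steps=10_000):
    """Simulate a queue machine.

    table maps (state, symbol) -> (word_to_enqueue, new_state).
    The machine stops in the "halt" state or on an empty queue,
    and the final queue contents are returned as the result.
    """
    q = deque(word)
    for _ in range(steps):
        if state == "halt" or not q:
            break
        symbol = q.popleft()
        word_out, state = table[(state, symbol)]
        q.extend(word_out)
    return "".join(q)

# A sample program: duplicate every symbol up to the end marker "$".
dup = {
    ("start", "a"): ("aa", "start"),
    ("start", "b"): ("bb", "start"),
    ("start", "$"): ("", "halt"),
}

print(run_queue_machine(dup, "ab$"))  # -> aabb
```

As with the Turing machine, the transition table is data read at run
time, so the same fixed evaluation loop can later run programs not yet
written, which is the sense in which such a model can be made universal.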
