General-purpose computers are the stem cells of computing

General-purpose computers are, like stem cells for living organisms, key to the perpetuation of computer engineering. Here’s why.

Sometime between 1992 and 1996, CALC was written. CALC was a graphing program: the user would interactively enter the definition of a function and the coordinates of a view window at the keyboard, and the program would plot the function, one point per column of the graphical display. As the story goes, CALC was written in BASIC over the course of several months; a few months afterwards, the only extant copy of CALC was lost.

We resurrect the memory of CALC here to highlight the role of general-purpose computing. Indeed, CALC would allow the user to enter any function definition that was valid in BASIC. The syntax allowed integer and floating-point arithmetic, grouping parentheses, operator precedence, and calls to any built-in function. It would then plot that function interactively, i.e. without having to stop and re-run the program. In other words, the program would understand a phrase expressed in a human language, that of mathematics, and act upon it automatically. Yet, implementing that feature was trivial: CALC would simply write the text of the user-supplied expression into a file, then load the file back into the BASIC interpreter as an additional program fragment (using the CHAIN MERGE statement).
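
CALC’s actual source is lost, so the details below are reconstructed: the line numbers, the file name FN.BAS and the function name FNF are hypothetical, and a GW-BASIC-style interpreter is assumed. Only the technique itself, writing the user’s expression into a text file and merging it back with CHAIN MERGE, comes from the account above. A minimal sketch:

    10 REM hypothetical reconstruction, not CALC's actual code
    20 LINE INPUT "f(x) = "; E$
    30 REM write the expression out as a numbered program line, in ASCII
    40 OPEN "FN.BAS" FOR OUTPUT AS #1
    50 PRINT #1, "100 DEF FNF(X) = " + E$
    60 CLOSE #1
    70 REM merge the new line over line 100 below, keeping all
    80 REM variables (ALL), then resume execution at line 100
    90 CHAIN MERGE "FN.BAS", 100, ALL
    100 REM placeholder: overwritten by the merged DEF FNF line
    110 REM from here on, FNF can be evaluated like a built-in
    120 FOR X = -10 TO 10 STEP .5
    130 PRINT X, FNF(X)
    140 NEXT X
    150 GOTO 20

The jump back to line 20 is what makes such a program interactive: each new definition merges over line 100 in place, with no need to stop and re-run anything.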

To understand how this is relevant here, one needs to consider this anecdote as a parable. What really happened is that an uneducated person was empowered to create by a programming environment which was, through its naive simplicity and despite its flaws, intentionally devoid of any specific purpose. A simple general feature, namely the ability to read a user-defined program text from input and evaluate it, was key to overcoming the most complex theoretical aspect of the task at hand. This parable illustrates that general-purpose computing platforms are, like the stem cells of living organisms, key to the perpetuation of computer engineering. They empower practitioners, both amateur and seasoned, to express their creativity beyond the bounds suggested by current applications and uses, and to solve new problems in revolutionary ways.

There are two reasons why this generality is strongly desirable. The first is that innovation and major advances in the field are a creative process by humans for humans, as highlighted above. Human creativity usually flourishes only in unbounded conceptual frameworks and playgrounds. Therefore, computing science, as a field, will keep needing flexible and generic platforms for new developments and innovation. These platforms might be isolated, conceptually or physically, from the products available to the general public, but even under such pressure they will continue to exist as an essential and durable niche market for computer science practitioners themselves.

The second reason is that all current trends converge towards a second era of separated computing, in which processing is again split between the devices at hand and remote, networked facilities, with visible and much-awaited benefits in terms of energy and cost management.

The visible tip of this iceberg, on the network side, is perhaps the ongoing rise of social networks and online sharing platforms. But even in corporate environments, more and more responsibility, in particular for the safeguarding and consolidation of data, is pushed away from workstations to networked locations and accessed remotely. This setup principally enables sharing the infrastructure costs (security, cooling, storage, failure management) of the compute-intensive parts of networked applications. It reduces synchronization and communication latencies in large applications by literally increasing locality, namely by grouping the communication-intensive parts at a single geographical location. Through careful over-subscription of shared computers, it also distributes the energy investment more equally across heterogeneous applications. Finally, this setup has become technically practical, essentially because of lower network latencies; this is in contrast to the last part of the previous century, when the client-server model somewhat waned.

Meanwhile, and perhaps paradoxically, the devices at the human-computer interface become increasingly powerful. Current low-end gaming devices already offer full virtual immersion through rich auto-stereoscopic images. Reality-augmenting glasses with on-demand, real-time streaming of contextual data are on the verge of becoming mainstream. All futuristic visions of human-centric computing include pervasive and seamless human-computer interaction with incredible (by today’s standards) amounts of signal processing.

To maintain control over power usage and locality, most of this signal processing will need to be performed physically at the site of perception. What we currently call high-performance computing equipment will find its way into the wearable miniature sensors of our future selves. However, for equally obvious reasons, the processed data will flow between the individual and the collective self, through distributed networked applications, because only there can the sense-data receive the meaning necessary for its processing.

Without speculating further on the nature of these dual computing systems made of intelligent sensors and networked applications, it seems reasonable to assume they will be based on hardware components responsible for transforming information. These future systems may bear little resemblance to our current technology; yet, regardless of their exact nature, one of their characteristics seems inevitable: adaptability.

Adaptability is the feature that will support technological evolution under the selective pressure of market effects. Indeed, unless disaster strikes and totalitarian regimes become the norm, the free exchange of ideas and objects will force a dynamic, fast-paced adaptation of technology to evolving human interests. Even assuming a stabilization of human demographics, increasing access to technology and networks will cause the market for computing systems to become even more segmented than today, with entire verticals rising and falling faster than education systems can adapt. Moreover, since the knowledge required to comprehend and maintain systems will be increasingly dense, and thus decreasingly accessible, there will not be enough manpower to design and implement entire new systems to cater for new verticals. And since there is not yet any confidence that new designs can be reached via autonomous artificial intelligence, we should assume instead that guided adaptation of existing concepts to new uses and new requirements, by humans, will be the norm.

Evolutionary theory suggests that adaptation works best if the system keeps a healthy reserve of repurposable stem cells. It seems conceptually difficult to repurpose the programmable controller of a washing machine into a car navigation system, whereas the computer scientist today clearly sees a specialization path from a general-purpose computer to both devices. Actually, the specialization of computing elements, like cell differentiation in organisms, is an unavoidable phenomenon, required to support the increased complexity of their applications. However, efficient specialization is a repeating phenomenon, with each generation stemming from non-specialized components rather than from previous generations of specialized systems. This applies to hardware design and software design alike.

In light of this perspective, one could accept the doom of commodity, one-size-fits-all “all-purpose” computer designs. Individual devices that would truly satisfy any computing need in a practical or economical way have never really existed. Besides, the immediate human-computer interface is best served by specialized devices. However, general-purpose, specializable computing systems must continue to exist, at least for those humans who, through their creativity and inventiveness, will be responsible for future innovation.

(This argument is part of the introduction to this book.)