December Update

This month was somewhat quieter. I spent quality time with friends, and started to work on a side quest in my home: setting up a reading corner. I’ve noticed that I have become more sensitive to my surroundings: the bed is for sleeping, the couch is for relaxing, the kitchen is for discussing and preparing adventures with friends, and the desk is for work. None of them makes me want to read when I am there, but I do want to read!

I am thus in the market for a reading chair. If you have any suggestions, I would welcome them.

❦❦❦

Even without a dedicated reading spot, there was a lot of reading this month, with much to learn from and reflect on. I’ll start with the practical, actionable stuff and go up from there.

In Lullaby language, Jerry Weinberg reflects on the problematic words that people add to their sentences to put the listener’s mind to sleep. For example, the word “just” severely discounts the actual complexity of a task, as in “we can automate this process, we just have to design a spreadsheet with the right formulas.”

I had already built my own “translation dictionary” for corporate speak (see below), but I had never managed to properly describe what was wrong with these words. I like the term “lullaby” a lot.

My own definitions differ from Jerry Weinberg’s, however:

| Word | Actual meaning | Example |
| --- | --- | --- |
| Just | With unpredictable complexity and delay | To stabilize my half rack, we just have to screw it to the wall. (It actually took two months.) |
| Should | Probably won’t | I should mow my lawn this month before the winter really starts. (Every year. Never happened.) |
| Soon | Only after you come back telling me it’s now urgent | Responding to my architect asking when I’d communicate my design preferences: Soon. (Two months later. I was procrastinating because it’s hard.) |
| Only / Simply | Let’s put aside a lot of unpleasant and boring details | I only had to submit a form to inform the city council there was no asbestos remaining. (Putting aside the expensive investigation and removal work required to obtain the form, and the back-and-forth six months later because the form hadn’t been properly submitted.) |
| Basically | I’m not sure I fully understand this myself and I’d rather you didn’t ask | Basically, the main difference between the Raft and Paxos algorithms is that Raft requires more coordination for consensus, but also has better support for membership changes. (I would never dare to talk about either without first re-reading a textbook.) |

❦❦❦

In the short video You’re doing zoom calls wrong, communication expert Vinh Giang explains how most people do not set up their video calls properly: the face is too close to the camera, and the other side experiences stress from the excessive impression of closeness. I have definitely experienced that. Another advantage of placing the camera further away is that it frees up the upper body and conveys much more body language.

❦❦❦

Next, I encountered two unrelated personal takes on John Ousterhout’s A Philosophy of Software Design (a book I also recommend).

In Cognitive load is what matters, Artem Zakirullin reflects on the apparent tension and trade-offs between designing many small components versus a few large ones. His conclusion is that focusing on this tension is missing the forest for the trees: regardless of the selected architecture, there is just one thing to optimize for, namely cognitive load, which balloons when smart developers write code to the measure of their own intelligence and quirks instead of making sure that other, less skilled people remain able to maintain it.
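To make “cognitive load” concrete, here is a toy sketch of my own (not an example taken from Zakirullin’s post): the same access check written first as one compound condition the reader must juggle whole, then with named intermediate facts.

```python
from dataclasses import dataclass

EU_COUNTRIES = {"BE", "FR", "DE", "NL"}  # toy subset, for illustration only

@dataclass
class User:
    age: int
    country: str
    suspended: bool
    email_verified: bool

def grant_access(user: User) -> None:
    print(f"access granted ({user.country})")

user = User(age=30, country="BE", suspended=False, email_verified=True)

# High cognitive load: the reader must hold the entire compound
# condition in working memory to understand what it gates.
if (user.age >= 18 and user.country in EU_COUNTRIES
        and not user.suspended and user.email_verified):
    grant_access(user)

# Lower cognitive load: name the intermediate facts once,
# then the final condition reads like prose.
is_adult = user.age >= 18
is_in_supported_region = user.country in EU_COUNTRIES
is_in_good_standing = not user.suspended and user.email_verified
if is_adult and is_in_supported_region and is_in_good_standing:
    grant_access(user)
```

The code does the same thing either way; what changes is how much the next maintainer has to keep in their head.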

Meanwhile, in Ideas from “A Philosophy of Software Design”, Eliran Turgeman provides a few practical examples of the above. The blog post itself is a bit light on content, but I appreciated the pointer to his Telegram channel, where he publishes his reading notes and learnings. That one is very good.

❦❦❦

Moving on, I was delighted to discover Matthias Felleisen’s Developing Developers, where he explains how he and others built Northeastern University’s systematic course on software engineering.

The unique feature of this course is its emphasis on “systematic design”. In the author’s words, traditional programming courses teach programming implicitly, with students picking it up by mimicking and experimenting. This approach may appeal to students who love to tinker with gadgets and video games, but it also turns off many others who might be equally talented at engineering actual software, or who would benefit just as much from a properly taught course on programming and problem solving.

It’s not often that an experienced teacher offers a deep dive into his reasoning for creating a course. I also knew that Northeastern had a good program, given the high caliber of its alumni, and this document gives me more insight into why.

One challenge I would raise against this approach, however, is the fundamental complexity that emerges from working with pre-existing systems. For those, an empirical, scientific method is often more appropriate: engineers develop partial, imperfect models of the system, act on the system according to that model, refine the model, and iterate. I would be curious to know how Matthias Felleisen thinks about this.

❦❦❦

My challenge to systematic design above is probably going to become even more relevant in the coming era of LLM-generated software.

Speaking of which, it is becoming increasingly hard to ignore the roaring stream of intellectual noise around LLMs. I found the following bits useful and/or insightful.

On the practical side, Abishek Muthian explains in How I run LLMs locally how to reduce our dependence on online APIs. Complementarily, in Narrative jailbreaking for fun and profit, Matt Webb explains a fun approach to working around model censorship rules^W^Wsystem prompts and peeking into their internal knowledge graph.
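To give a flavor of what running locally looks like, here is a minimal sketch of my own (not taken from the article). Many local runners, Ollama among them, expose an OpenAI-compatible HTTP endpoint, so the usual client library works unchanged; the port and model name below assume a default Ollama install with the llama3.2 model already pulled.

```python
# Minimal sketch: querying a locally served model through an
# OpenAI-compatible endpoint. Assumes an Ollama server on its
# default port (11434) with the "llama3.2" model pulled.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local server, no cloud API involved
    api_key="unused",  # the client requires a value; a local server ignores it
)

response = client.chat.completions.create(
    model="llama3.2",
    messages=[{"role": "user", "content": "Summarize the Fermi paradox in one sentence."}],
)
print(response.choices[0].message.content)
```

The appeal is that only base_url changes between the cloud and your own machine, which is exactly what makes reducing the dependence cheap.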

On another practical side, but at a higher level, Phil Calçado shares his experience designing complex systems that use LLMs under the hood in Building AI Products—Part I: Back-end Architecture.

Meanwhile, Dennis Schubert points out in this Mastodon post how websites are currently experiencing excessive inbound traffic from LLM content crawlers that are poorly designed and ignore internet standards.

On the skeptical side, I chuckled at John Gruber’s short piece OpenAI’s Board, Paraphrased: ‘To Succeed, All We Need Is Unimaginable Sums of Money’, with a hint of worry at the magnitude of the unavoidable eventual collapse of this economic bubble.

I also chuckled at Eerke Boiten’s Does current AI represent a dead end?, where he makes some really good points, such as (paraphrasing): will we be able to maintain and extend complex systems in the longer term if we give up our ability to understand them properly? His tone is a bit dramatic, however, and I wonder how much of that comes from the vulnerability of his own field of research (formal system analysis) to LLM slop.

Paradoxically, I was not able to chuckle at the intended humor of Lionel Dricot’s (a.k.a. Ploum) My colleague Julius. It hit a little too close to home, as I already see individuals stepping over others in ways amplified by misguided LLM (ab)use.

Besides these specific points, I also strongly recommend skimming through Simon Willison’s Things we learned about LLMs in 2024, which provides a good overview of the main themes of the past year.

❦❦❦

I also randomly encountered this CACM editorial by Moshe Vardi, a well-respected and much-awarded computer scientist: I Was Wrong about the Ethics Crisis.

Choice quote:

I bemoaned that humanity seems to be serving technology rather than the other way around. […] I pointed out that Big Tech’s business models are unethical. I explained how technology increases societal polarization. […] About two years ago, I started giving talks on how to be an ethical computing technologist.

But I have yet, until now, to point at the elephant in the room and ask whether it is ethical to work for Big Tech, taking all of the above into consideration. […]

“It is difficult to get a man to understand something, when his salary depends on his not understanding it,” said the writer and political activist Upton Sinclair. By and large, Big Tech workers do not seem to be asking themselves hard questions, I believe, hence my conclusion that we do indeed suffer from an ethics crisis.

This resonates with a line of inquiry that has kept me busy for a couple of years now: how is it that the brightest minds of this century are not working on the hardest problems? What of inequality, the housing crisis, or global corporate tax evasion? Why are the brightest minds so often allocated to optimizing yet another algorithm for delivering ads to teenagers, instead of putting their skills towards policy making or building better incentive systems?

Are we facing an “ethics crisis”, or more generally a tragedy of the commons caused by systemic misallocation of human intelligence? What are the moral components to this?

❦❦❦

Coincidentally, I am currently active in an online community where folks study philosophy and generally seek to build skills (both physical and intellectual) to become better people. I find this stimulating and quite rewarding.

In a recent conversation there, one of the members wondered how we could ever claim to teach morals, as David Brooks suggests we do in this excellent opinion piece. Who can legitimately decide what goes into an education program on morals? This led us to investigate objectivity versus subjectivity in moral matters.

In turn, this helped me discover an intriguing bit of anthropological research: Is It Good to Cooperate? Testing the Theory of Morality-as-Cooperation in 60 Societies by O.S. Curry, D.A. Mullins, and H. Whitehouse. According to the authors, there are at least seven general moral principles relating to cooperation that seem to be universally shared across human populations. I liked reading how this experiment tends to empirically invalidate some extreme forms of moral relativism. I also wonder whether we would find a larger set of shared moral principles if we narrowed the scope of the research to a single region. Food for thought.

❦❦❦

Another thing that happens in that online community is that members recommend stuff for each other to listen to. This is how I discovered Alexander McKechnie (Exurb1a), who publishes quality monologues on various anthropological and philosophical topics. They are an ideal length to serve as podcasts during a workout.

I haven’t explored all his stories yet, but this particular one really struck a nerve: The Answer is not a Hut in the Woods. In it, Alexander explains how he went on a quest to “find himself” as a writer alone in the wilderness, only to discover in the end that what makes him truly happy is hanging out with his friends at home. The tale is as old as history, but the quality of his delivery is monumental. Listening to this story was a delight, and I felt privileged to experience it.

❦❦❦

Last but not least, I would like to share two special pieces that forced some humility on me.

In All objects and some questions (copy), Charles H. Lineweaver and Vihan M. Patel capture the entire history of our universe, its physics, and everything in two simple graphs. One graph displays the history of all the composite objects that condensed out of the background as the universe expanded and cooled. The other plots the masses and sizes of all the objects in the universe. The construction of these two graphs is math-heavy, but the exposition is clear and I was able to follow along. To me, the fact that our scientific method has been able to comprehend our universe’s history so well and so precisely that we can capture all that has ever existed in such a succinct mathematical form is nothing less than awe-inspiring.
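To give a flavor of the second graph: if I recall the physics correctly (the notation below is mine, not necessarily the paper’s), all composite objects in the mass–radius plane are hemmed in between two simple limits.

```latex
% Two limits bounding all objects in a mass--radius diagram
% (my own sketch of the standard physics; notation is mine,
% not necessarily the paper's).

% Gravitational limit: squeeze a mass M below its Schwarzschild
% radius and it collapses into a black hole.
\[ R_{\mathrm{Schwarzschild}} = \frac{2GM}{c^{2}} \]

% Quantum limit: below its reduced Compton wavelength, an object
% of mass M can no longer be localized as a composite thing.
\[ \lambda_{\mathrm{Compton}} = \frac{\hbar}{Mc} \]
```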

Meanwhile, in Advanced Civilizations Could be Indistinguishable from Nature, Evan Gough provides an engaging analysis of a denser article by Lukáš Likavčan, The Grass of the Universe: Rethinking Technosphere, Planetary History, and Sustainability with Fermi Paradox. (You can read either, but I recommend the former: it’s easier to read and contains more pictures.) Both articles revisit the Fermi Paradox, which asks why we haven’t yet made contact with extraterrestrial intelligence given how many stars there are in the sky, and add one more explanation: that the most likely way for civilizations to expand is to grow more in harmony with their environment, so that from a large distance their worlds become indistinguishable from uninhabited planets.

I found the idea oddly beautiful, and far more appealing than the Dark Forest theory.

❦❦❦

References: