March Update

Buckle up: we need to talk about serious stuff.

❦❦❦

Before that, a brief recap of the past few weeks. I was in Austria for a bit more snowboarding. The weather did not look great initially: far too warm and sunny. Afternoon snow was akin to the slush ice you slurp on a summer day; so bad that my board literally got stuck in the wet snow a couple of times. Thankfully, it was white and crisp(er) at the top, and we even received fresh snowfall on the last two days.

view from the lift

The first lift up did not look too promising.

landscape

Groomed runs resemble white rivers flowing from the clouds.

snow on board

Enough snow to be enjoyable.

In an extremely out-of-character move for the me of two years ago, I excitedly joined a pub crawl on my second day, and I found that experience thoroughly enjoyable. Later in the week, I also partook in Austrian “après-ski” at a goat-farm-turned-slopeside-bar.

apres ski

I did not know Jägermeister had different flavours.

It has been only one week since my return and I have been busy. I got rid of some more furniture I was not using, and took back my piano (loaned out more than a year ago; I was itching to play again). I cleaned up the yard (in preparation for new spring planting) and invested in outdoor seating.

outdoor seating

The spot receives morning sunshine and afternoon shade. Ideal for G&T.

Also finally made the executive decision to invest in a higher-end reading chair (after three months of procrastination). Already had multiple folk over for dinner (with more already lined up). Braved some more preconceived notions about my limits and enjoyed another late-night 5-hour meetup with random strangers (likely to turn into delightful new friends; I was even enthusiastically invited to the next gathering). Joined a random meetup in town to play “trampoline dodgeball” and surprised myself by enjoying it (exactly what it says on the tin: one hour of intensive dodgeball while jumping in a trampoline pit; also booked for the next run).

It’s still a bit difficult to juggle all the things together. My shoulders are still not right and I struggle to balance recovery exercises with strength training. My tennis elbow issues now flare on both sides (from just the right side previously), with some days worse than others. My sleep quality is still not where I want it to be. All in all I currently need at least 1.5-2 full days of recovery per week, which is slightly more than I’d like to allocate. Improving this remains a project for the coming few months.

❦❦❦

On the long-form reading front, last month and this one I purposefully went for more fiction. Idle time before bed was dedicated to old works by Neil Gaiman: A Calendar of Tales, M Is for Magic, and Smoke and Mirrors. On a rest day earlier this month, I also felt an urge to sample the current state of the art of queer literature. I was surprised and perhaps a little disappointed to see that the large majority of recent works were written by female authors, with a vibe that tells me they are writing for a mostly female audience. To sample the contrarian stream, I allocated some time to absorb The Music of What Happens (lightweight, not too memorable) and A Little Bit Country (cute and funny, but way too “country” for my personal taste). I mistook Wolfsong for a short read: it took me a few days instead. Its overuse of “alpha” and “beta” role stereotypes irked me; nonetheless, I found it intriguing overall and I might read the sequels.

❦❦❦

Some short-form learnings this month too.

In In Praise of “Normal” Engineers (IEEE Spectrum), Charity Majors argues against the myth of the “10x engineer”: when a company mistakenly focuses on “hiring the top 0.1 percent” at the expense of everyone else, it might get an initial boost while getting things started, but it is ultimately setting itself up for failure, because the resulting organization is less future-proof. I have been making this same argument in my consulting for the last eight years.

At the end of last year, Emily Omier delivered a talk at the Open Source Experience conference, titled Code vs products, where she drew the line between infrastructure (lumber) and applications (chairs) in software, and discussed their associated business models and customer interests. She made the point that there is a large economic advantage to sharing infrastructure software in public, but much less so for application software. More recently, in Sovereign Lumber, Dmitry Sagalovskiy expanded on the topic to discuss what this distinction means in the context of the EU’s recent push for more “software sovereignty”. The argument is intriguing, and caught my attention by suggesting that new business models are likely to become viable in the coming few years.

There’s this random person called George Hotz who identified some problem with the US economy and then offered a braindead solution. This, on its own, is a non-event; what is special about it is that it prompted Rachel Wolford, CEO of Otherbranch, to share the following nugget of wisdom:

Users almost always understand their problems better than you do. If the metrics say things are going well, and your users say everything sucks, your metrics are probably wrong. But the complement to that is that users mostly suck at solutions: you understand the constraints and difficulties of your product better than they do, so they tend to suggest things that are infeasible, overly specific, or prohibitively difficult to build.

When the public (or random bloggers) gives a damn about economics, it’s a sign the economy isn’t working. Of course they don’t have useful solutions - they’re not economists - but that feels a little beside the point: you don’t have to be a plumber to recognize that your house is full of sewage. And since no one can be an expert in everything, life demands the ability to identify and call attention to problems you cannot personally solve.

This succinctly rephrases an intuition I had about the bogus corporate principle of “don’t raise problems unless you have solutions”: raising problems and finding solutions are different skills, and it’s unreasonable to expect the same people to excel at both.

In Reality has a surprising amount of detail, an older piece from 2017 which I only noticed last week, John Salvatier explains how hard work is often hard because of details that are not visible before we start the work. And so it’s very easy to embark on tasks that turn out to be impossible. His advice is to talk to experts and understand which details are important to them that do not seem important at first.

Separately, I enjoyed 50 things we’ve learned about building successful products from Ian Vanagas (PostHog) both as a narrative of what worked for them and as a practical handbook of things to care about for a new product hire. This is something I found was missing from many books I read last year.

Likewise, SOC2: The Screenshots Will Continue Until Security Improves from Thomas Ptacek (Fly.io) can be seen both as an indictment of the security theater of SOC2 and as a rather pragmatic roadmap of how to prepare for SOC2 without sacrificing quality and execution speed. I had been looking for something like that for my next venture.

In Why layoffs don’t work, Mark Dent explains that, more often than not, companies that react to economic downturns with layoffs run into more problems (both short and long term) than those that don’t. He suggests alternatives, including temporarily shorter working hours and furloughs.

I also stumbled upon Dave Kellogg’s blog (originally via Career Development: What It Really Means to be a Manager, Director, or VP). Overall, the content is about the day-to-day decisions encountered in executive teams for mid-size tech companies. Most of his content is gold. Check out e.g. The Ten Most Read Posts in 2024 and his Predictions for 2025 (bold and informative!).

❦❦❦

In “Wait, not like that”: Free and open access in the age of generative AI, Molly White explains how AI companies should think seriously about how to compensate their data sources, for otherwise the training pipeline is likely to dry up. I found the argument pretty solid.

In AI Killed The Tech Interview. Now What?, Kane Narraway identifies how deepfakes are killing the remote tech interview and what is going to happen next. There will be more in-person interviews, for sure, but she also predicts an uptick in probation periods. I like that idea. She also warns that the change is making it harder for junior people to find jobs, which is by far the part that alarms me the most.

In Please Stop Inviting AI Notetakers to Meetings (Bloomberg; archive), Chris Stokel-Walker highlights that “AI note takers” negatively influence the authenticity of meeting participants. This compounds my own reservations, which I started developing a few months ago, that AI notetakers remove a key instrument of corporate governance: the ability for leaders to (re)shape internal narratives.

On a more upbeat note, in Here’s How Gen Z and Gen Alpha Are Actually Using ChatGPT in Schools Steffi Cao shows that higher-agency pupils actually care about their learning skills and voluntarily opt out of using generative AI as a shortcut in their studies.

❦❦❦

An eye-opening argument in Discworld Rules; And LOTR is brain-rot for technologists by Venkatesh Rao kept me awake for a couple of days. To summarize, Venkatesh highlights that The Lord of the Rings is essentially full of Chosen Ones doing Great Man things, which “as an extended allegory for society and technology it absolutely sucks and is also ludicrously wrong-headed.” The article, overall, argues that technologists who let their subconscious be animated through their passion for Tolkien’s work end up making bad choices for societies. It then directs us to consider the works of Terry Pratchett (Discworld) and Iain M. Banks (Culture) as an alternative. Choice quote:

If you’re an actual, serious technologist, Discworld is where you should look for clues about how the world works, how it evolves in response to technological forces, and how humans should engage with those forces. It is catnip for actual technological curiosity, as opposed to validation of incuriously instrumental approaches to technology. If on the other hand, you’re really just a fantasist larping Chosen One stories bolstered by specious Straussian conceits, trying to meme your hyperstitional theory fictions into existence for a while, looting the commons with private-equity extraction engines until you get your Girardian comeuppance — by all means go for it.

This mirrors the main life lesson that I also share through my coaching: that our environment, from books to people, conditions our choices; and that we have a moral duty to assert agency in the (re)shaping of that environment to ensure we remain on the right path.

❦❦❦

This was a convenient segue into the main serious topic I wanted to share today.

For context, over the last ten years I have ruthlessly optimized my information intake to insulate myself from direct commercial interests: I fully avoid broadcast television, I blocked nearly all online advertisement, and I trained myself to subconsciously reject branded online stores and paid product reviews in favor of open marketplaces (with manual product comparisons across marketplaces) and discussion forums. Likewise, when it comes to learning about the state of the world, I purposefully trained myself to fence my attention off from branded news feeds, television programs and newspapers with corporate holdings subject to political influence.

What I sacrificed in awareness of the cultural zeitgeist, I gained back in sensitivity to bullshit and propaganda.

Every day, I use this trained intuition to filter and evaluate the few remaining things I let myself read or watch. With this skill (combined with the aforementioned aggressive blocking of ads and disabling of personal recommendations), news aggregators remained somewhat usable: I could still mine them for the occasional nugget of fresh information or knowledge, something like 10-20% of the things I skim daily (that’s one item in every 5 or 10).

With this context, the thing I want to share is this: in the past few months, my bullshit meter has been alarmingly off the chart for nearly everything on public aggregators. I can spend an hour skimming a hundred news items and not see one that isn’t obviously or subtly a product of broken views, propaganda, random AI slop or incomplete/ignorant interpretations of reality.

This was not a slow transition either; I am noticing it precisely because it was so sudden. Until around last July, I could still “read the news” and find a few things to learn, enough that I would spend at least a few hours per week scanning public sources for new material. Today, I simply cannot allocate this time anymore: the ROI has fallen way too low.

The public internet is broken, even when you know what to watch out for.

It’s not just me who’s noticing. After I came to this conclusion, I was delighted to see Adam Aleksic, of “Etymology nerd” fame, reach the same observation/conclusion in slop capitalism and dead internet theory.

❦❦❦

One specific thing that bothers me even more is that folk around me seem to have a hard time even perceiving this as a problem. To me, it feels as if most of the attention in my social circles has been captured by algorithmic feeds (Instagram, TikTok, etc.), leaving very little of it for classical news, and thus starving even the smartest individuals of the signals that things are amiss overall.

Speaking of how algorithmic feeds capture our attention and twist our perception of reality, I like to recommend the following two explanatory angles.

In Algorithms are breaking how we think (YT), the guy from Technology Connections (whose name is hard to determine) explains that as we get more and more used to being delivered information, we lose the skill to search for it, and this robs us of the opportunity to discover unexpected knowledge as a byproduct of the search.

Meanwhile, in People Born After 1995 Are Living In ‘Ghost Time’ (YT), the guy from Wobbleverse (equally anonymous) explains the mechanisms that prevent our brains from forming new memories while consuming short-form content.

The result of both effects is that we become collectively and increasingly unable to form the intellectual context necessary to realize, scrutinize and criticize the current onslaught of misinformation.

This is horrifying.

❦❦❦

As to why and how this happens, I might let history walk its course and reveal (in time, maybe) what we are dealing with. Is this the fruit of organized cyber warfare? Systemic failures caused by gradual erosion of guardrails? Runaway growth upon misaligned incentives? Something else? I just don’t know.

But I also feel it’s not strictly necessary to understand the causes to take personal corrective measures.

My current first line of defense has been to forgo public aggregation feeds altogether, and switch over to curated subscriptions to private newsletters by authors with a legible background. Closing the one faucet and opening the other metaphorically feels like switching from drinking sewage water to clear water.

The other change that is proving interesting: I now moderate my openness to folk who share “things they’ve learned online” with me based on how much time they allocate to algorithmic feeds each day. It’s my own version of rejecting a trial when I can tell the jury’s been tampered with. Conversely, asking folk more systematically how they learn what they know, and which curated subscriptions they would recommend to me, has opened some learning doors I hadn’t even perceived were shut, perhaps at the risk of sounding excessively skeptical during casual social encounters.

I don’t exactly like how this change turns me into an “old” person. I kinda liked the earlier, more naive version of me.