We don't usually think about where stuff comes from, until a supply chain _goes wrong._ Chicken prices go up when meat packers go on strike, or bird flu goes around again. Oil prices are currently elevated because the current U.S. president is making foreign policy decisions according to a Ouija board conversation with Dick Cheney. And RAM? Don't get me started on RAM.

In the current landscape of supply chain attacks within the software world, it's worth a bit of reflection to understand how the supply chain _was_ working, and why it's starting to _not_ work now.
## Sure, I'll execute that on my computer.

Software has always fit oddly into the economy, which is a bug in the economy, not in software. Because software is (materially) pure information, it might take a lot of labor to make, but it's trivial to copy. And much as 1950s optimism about automation promised, this cheapness would just be a straightforward win for humanity, if humanity had not organized itself socially around toll gates and rent-seeking being necessary to feed yourself.

Because of their trivial-copy nature, computers have (for most of their existence) drawn a crowd of intelligent folks who _want_ to contribute code for free. In the early days of the internet, especially (before it became a handful of websites competing to be the on-screen distraction you crash your SUV to), some of the most influential voices in the community - the techbros of their time - really understood the internet as an opportunity to build a parallel, alternative civilization that traditional empires would need to reckon with. A societal model that actually held up to post-scarcity realities as they came up one by one, the most immediate of those realities being software itself. A lot of those guys sold out eventually, especially when the dot com bubble burst, out of necessity.

Writing FOSS was generally not something you could make money doing. People did it for "love of the game" reasons, AKA intrinsic motivation. If you found a random library on the internet, well, it might or might not be good code. But you knew that whoever wrote it was _trying_, and bore you no ill intent, which meant something. People caught on pretty quickly that the open source ecosystem was a social project that became more valuable with each freely available library. Contributing to FOSS was something you could do to raise the tide underneath all boats, including your own. This worldbuilding would have been dead in the water if open-source development wasn't such a good [shibboleth](https://en.wikipedia.org/wiki/Shibboleth): the only reason to be a FOSS dev was because you believed in FOSS principles, which meant you could trust libraries on the internet by default, which meant those libraries could actually contribute to the network effect.

I don't consider this to be a communist economy, even if it appealed to many communists, myself included. It's something much older, a gift economy like what existed between many ancient and recent tribes (including a lot of Native American ones). You don't have to like them (and their social pressure to give back as you have been given), but they do _work,_ just for a set of goals that aren't tightly aligned to a capitalist economy. [Ginger Bill and I will just have to agree to disagree](https://www.gingerbill.org/article/2025/04/22/unstructured-thoughts-on-oss/).

It's probably worth saying that this model worked for a very long time, was deeply influential on how most package managers work (especially language package managers like `npm` and `cargo`), and we're still swimming in an ocean of implicit assumptions from those halcyon days. Times are not changing instantly, but they _are_ changing.
## Where there's a fat moose, the mosquitoes will gather

As per Guru Khaled, you cannot have exorbitant success without suffering. FOSS was so effective that it became the backbone of most proprietary software. You simply could not compete with other proprietary software offerings if you were spending all your time managing the costs and logistics of software licensing for your dependencies. And of course, living purely in the FOSS ecosystem kept becoming a better proposition over time.

This meant that FOSS was in everything, not always in an up-to-date form, in a world where the value of vulnerabilities kept going up. The percentage of our lives that we experience or manage digitally has never been higher. Crypto wallets have been a thing for a while. There were finally reasons to contribute to open source that _weren't_ founded in mutualism, that were cannibalizing the trust that FOSS was built on. While this has been a gradual shift rather than a sudden inflection, it's to a point that we frogs find ourselves asking each other: "hey, it's pretty warm in this pot these days, innit?"

Even though it was only two years ago, [Jia Tan's attack on XZ utils](https://en.wikipedia.org/wiki/XZ_Utils_backdoor) already feels a little quaint. It required enough human labor over a long enough time that Jia Tan was almost certainly a persona played by multiple people on a team of state-sponsored attackers. The attack involved badgering the original developer into giving up project maintainership using sock puppet accounts. The backdoor had to be very well hidden, which was inefficient, leading to the ironic way it was discovered (a performance regression) - but that hiding effort was also why it took so much time and labor to compromise a compression library.

The same attack today would involve using LLMs to grind out reasonable-looking compromised code and Github comments, probably by one person using multiple accounts. Or it would take advantage of [LLM integrations with some legitimate project's CI service](https://snyk.io/blog/cline-supply-chain-attack-prompt-injection-github-actions/). Or, because LLMs produce junior-level code and are rarely used under tight supervision, you might just miss a genuine whoopsie-doodle while using such a tool to speed up development. Or, you might be a maintainer with a bug bounty program, and be overwhelmed with false-positive hallucinations clogging up the review process and wasting human time. It's only half the story to say that hacks have more value than they used to - they're also a fraction of their old price.

None of this is to declare FOSS dead. But if we want it to survive, we might need to revisit what this social project is about, from first principles, with a specific eye towards what the last couple years have to teach us.
## Humans make software

This has been an unpopular reality for a long time. In the early days of video games, the first easter eggs were _credits_ because [Atari were such flaming assholes at the time that they refused to credit developers of their games](https://en.wikipedia.org/wiki/Adventure_(1980_video_game)). That attitude might feel shocking today, but credits in video games are one of those things we only have because labor pressure won a war against the unbounded greed of the owner class. It is normal for companies to want to obscure who actually makes things you buy. It's from _Atari!_ The consumer doesn't need to know who Warren Robinett is. And your bananas are from Dole, your ARM chips are from Apple, etc.

But when I say that humans make software, I don't mean that they just generate it and throw it over a wall. As we all know, software does not stand still. It needs updates, and somebody needs to make those. Have you ever chosen not to use a library because it isn't maintained anymore? I have. Someone with deep knowledge of the code, opinions about how to change it, and an intrinsic motivation to publish good code needs to be there, at the wheel, or that repository is _dead._ I could gripe about Discord being a bad place to organize, but I think it's no surprise to hear that a lot of code is only broadly useful because there's a Discord server where the devs hang out and offer support.

What we're talking about here is not _generation_ but _maintainership._ An AI cannot maintain a codebase, and generation is not enough. If you found a repo today for some random thing, let's say an IRC client, and the "maintainer" was Claude - would you want to use that? Probably not. But this also goes beyond AI.

There are certain projects that are very large, and as a response to that scope, they're "maintained" by a foundation or organization or somesuch. These usually don't try to hide individual attribution for specific commits, or other credits. But I've found that a lot of projects organized this way struggle with maintaining clear direction, handling interpersonal conflict, and the like. In fact, we're seeing an example of this right now with The Document Foundation.
### Wait, what now with TDF?

Quick catch-up. LibreOffice is a highly popular free software office suite with best-in-class compatibility with Microsoft Office. Once upon a time, [it was forked off of OpenOffice when Oracle acquired the project](https://freesoftwaremagazine.com/articles/openoffice_org_dead_long_live_libreoffice/), since Oracle had a track record of neglect and malfeasance in the FOSS space. The fork breathed new life into the project, and there was an influx of new developers and optimism. A lot of those developers were even getting paid, because a company called Collabora (a FOSS-centric consulting company) invested heavily into LO, with multiple employees dedicated to contributing to LO specifically.

Details are still unfolding, but The Document Foundation (which was created to be the management structure behind LO) somewhat recently revived an old project for web-based office software (which directly competes with some Collabora paid software - something LO has every right to do, but is a bit hostile to many core contributors), and then rushed a vote to eject every developer with a legal conflict of interest against TDF, which is... all the employees Collabora was paying to work on LO. Strategically absurd. Michael Meeks was a bit of a figurehead for the LO project because of all his direct contributions, blog posts, and more - [if you read his response to these moves](https://www.collaboraonline.com/blog/tdf-ejects-its-core-developers/), you can really feel the pain he's in, after investing so much of his life into LO.

Be careful about the people breathlessly commentating on this story while a lot of details are still missing. I've seen some baseless finger-pointing at TDF for not fully publicly elaborating some legal concerns, where they probably legitimately _can't_ talk in detail yet. That said, it's hard to look at TDF's choices and see something that represents the good of the LibreOffice project, or respects the people who were actually making the dang thing. If you alienate your contributors, they will stop contributing! And then you're the king of a tiny dirt pile instead of a kingdom.
### That's gotta be an outlier!

I wish. Not every project with a committee or foundation in the maintainer position is rapidly going to shit or anything. But there are plenty that are struggling. Docker's reputation goes a bit further downhill every year as they struggle to find a monetization strategy that doesn't drive their users away. Firefox is a vitally important browser in the current landscape, and it's scary to watch them make a series of terrible feature choices (from Pocket to LLM integration) that undercut the feeling of safety that their better paid options (VPN, Relay) depend on.

The alternative is obvious, but for some, it'll be uncomfortable: stuff's just better when there's a human's screen name in the URL. Even if that person has a light touch and trusts most day-to-day work to a wider group of maintainers (for example, Linus Torvalds and the Linux Kernel), there's still a singular person in charge of deciding who to trust, and how much, based on a position of maximum available information.

It all ends up being about trust, because no, you can't live a practical digital life in the current world _only_ running code you've carefully vetted by source. You need to trust people who trust people.

On the development side, it's not just about trust, which makes the argument for human-centric maintainership even stronger.
### Pulling in the same direction

Organizing volunteer labor can have a lot in common with herding cats, but plenty of people have done it. How? Well, one of the most important things is that the people working on a project need to have a strong overlap in terms of goals and values. Diversity among the people working on a project is very, very good. But you don't want diversity _of vision_ within a team - that will pull people in contradictory directions, and they'll fight about it, and if you're really unlucky you'll have some decentralized leaderless conflict resolution infrastructure. Unfortunately, committees exist to "unite" disparate stakeholders with contradictory visions into a single battle arena with decentralized leaderless conflict resolution infrastructure, making them unsuitable for managing software development.

![[accountable.png]]
> The infamous ["A computer can never be held accountable, therefore a computer must never make a management decision"](https://simonwillison.net/2025/Feb/3/a-computer-can-never-be-held-accountable/) image. Committees are subject to the same problem, because groupthink is largely an exercise in avoiding being remembered as the one responsible for a bad idea.

Having a human maintainer as a figurehead is very valuable here, because that person can - and must - be a symbol of the values that direct the project. It's a feature, not a bug, that you can look at a maintainer and make a decision whether you want to work with that person, and their framework for charting the project's future. If the answer is no, it's time to fork or use something else, and that freedom is critical.

I'm not a fan of the term BDFL for this lead maintainer role, because I think every letter is a bit wrong. Benevolence is impossible to rigidly define in a way that everyone agrees with. Dictatorship implies a level of micromanagement that would push reasonable people away. And "For Life" is pure hubris - as a maintainer, you might get hit by the proverbial bus, or take a demanding job, or retire, or lose interest in the project, or be so unpopular that your fork becomes the dead one. Memento Mori is a more valuable attitude - you are only the leader provisionally, you should act like it! I consider myself the Interim Captain of the Prone project. I will probably maintain my fork as long as I'm alive and able, and that fork will be strongly infused with my values, but it may not be the only or most popular fork forever.

A committee cannot just avoid the problems of strong dogmatic leaders by being the opposite - flavorless, and without individual accountability. There has to be a named person with whom _the buck stops._ There has to be a united technical direction. You can give people a lot of freedom if you know they're all aligned on common goals.
## Aligning on humanity

So, what are the best choices you can make, if you want to bet on the future of open-source software?

1. Don't use LLMs, even the free ones. You know how the (brick-and-mortar) library benefits from being able to point to readership numbers, even when you're not paying to rent books, and you can support your local library just by using it? Okay, same rules apply here, but for evil.[^1]
2. Prefer to use well-established software that explicitly has a human primary maintainer.
3. Just generally use fewer dependencies where it's practical to do so.
4. Verify checksums and signatures for the software you download.
5. Apply these values to software you publish.
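Point 4 deserves a quick illustration, since it's the one with an actual command line attached. Here's a minimal sketch of checksum verification - the filenames are made up for illustration, but the shape is the same for most real projects, which publish a checksum file (and often a detached GPG signature for it) alongside their release artifacts:

```shell
# Stand-in for a downloaded release artifact.
echo "pretend this is a release tarball" > release-1.0.tar.gz

# The publisher ships a checksum file like this next to the artifact.
sha256sum release-1.0.tar.gz > SHA256SUMS

# As the downloader, recompute and compare. Exit status 0 means the bytes
# you received match what was published; tampering in transit changes the hash.
sha256sum --check SHA256SUMS && echo "checksum OK"

# A signature goes one step further: it proves *who* published the checksum
# file, not just that the file matches. With the maintainer's public key
# imported, verification looks like:
#   gpg --verify SHA256SUMS.sig SHA256SUMS
```

Note that a checksum alone only protects against corruption or a tampered mirror - if an attacker controls the same server as the artifact, they can regenerate the checksum file too. That's what the signature (and knowing whose key to trust, which is a human-maintainer question again) is for.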

These aren't always easy, but the network effect that stoked open source from the beginning is still alive. The more of us making these kinds of choices, the easier it is for the next person to make those kinds of choices. We are making the world we want to live in.

[^1]: Don't make me write a whole blog post for why a product explicitly marketed to investors as the final solution to intellectual labor and copyleft licensing is a bad beast to feed.