On the social media site formerly known as Twitter, ESR lays out a pre-computer (pre-electronics) argument that open source is more secure than closed source:

(Image: “How university open debates and discussions introduced me to open source” by opensourceway, licensed under CC BY-SA 2.0)
There’s an old, bad idea that’s been trying to resurrect itself on X in the last couple of days. Which makes it time for me to explain exactly why, in the age of LLMs, open-sourcing your code is an even more important security measure than it was before we had robot friends.
The underlying principle was discovered in the 1880s by an expert on military cryptography, a man named Auguste Kerckhoffs, writing long before computers were a thing.
To start with, you need to focus on the fact that cryptosystems have two parts. They have methods, and they have keys. You feed a key and a message to a method and get encrypted output that, you hope, only someone else with the same method and key can read.
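The method/key split can be sketched in a few lines of Python. This is a toy for illustration only, not a real cipher: the method (a SHA-256-derived keystream XORed with the message) is entirely public, all of the secrecy lives in the key, and the key string here is of course hypothetical.

```python
import hashlib
from itertools import count

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom byte stream from the key. This is the *method*:
    fully public, and assumed known to every adversary."""
    out = bytearray()
    for block in count():
        out.extend(hashlib.sha256(key + block.to_bytes(8, "big")).digest())
        if len(out) >= length:
            return bytes(out[:length])

def encrypt(key: bytes, message: bytes) -> bytes:
    # XOR with the keystream; the same call decrypts, since XOR is its own inverse.
    return bytes(m ^ k for m, k in zip(message, keystream(key, len(message))))

key = b"correct horse battery staple"           # hypothetical secret key
ct = encrypt(key, b"attack at dawn")
assert encrypt(key, ct) == b"attack at dawn"    # same method + key recovers it
assert encrypt(b"wrong key", ct) != b"attack at dawn"
```

Publishing `keystream` and `encrypt` costs the defender nothing; only disclosure of `key` does.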
What Kerckhoffs noticed was this: military cryptosystems in normal operation leak information about their methods. Code books and code machines get captured, stolen, betrayed, or lost in simple accidents and found by people you don’t want to have them. This was the pre-computer equivalent of an unintended source-code disclosure.
Cryptosystems also leak information about their keys — think post-it notes with passwords stuck to a monitor. What Kerckhoffs noticed is that these two different kinds of compromising leakage happen at very different base rates. It is almost impossible to prevent leakage of information about methods, but just barely possible to prevent leakage of information about keys.
Why? Keys have fewer bits. This makes them easier to keep secret.
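To put rough, illustrative numbers on that, compare the bits you must guard under each strategy: a single 256-bit key versus a hypothetical 10 MiB source tree whose secrecy you would otherwise depend on.

```python
# Illustrative arithmetic: how many secret bits each strategy has to protect.
key_bits = 256                        # one 256-bit key (e.g. an AES-256 key)
source_bits = 10 * 1024 * 1024 * 8    # a hypothetical 10 MiB source tree
print(source_bits // key_bits)        # -> 327680: that many times more bits to guard
```

A post-it note's worth of secret is a tractable custody problem; a codebase's worth is not.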
Remember: this was something an intelligent man could notice in the 1880s, well before even vacuum tubes. Which is your first clue that the power of this observation hasn’t changed just because we’re in the middle of a freaking Singularity.
Security through obscurity — closed source code — means you’re busted if either the source code or the keys get leaked. Open source is a preemptive strike — it’s a way to force the property that your security depends *only* on keeping the keys secret.
What you’re doing by designing under the assumption of open source is preventing source code leakage from being a danger. And that’s the kind of leakage with a high base rate.
As far back as 1949 Claude Shannon applied this to electronic security in print, having done critical wartime work on the voice scramblers used for secure telephone communications between heads of state during World War II. Shannon said one should always design as though “the enemy knows the system”. The US’s National Security Agency still uses this as a guiding principle in computer-based cryptosystems.
If you’re doing software security, always design as though the enemy can see your source code. I’m still a little puzzled that I was apparently the first person to notice that this was a general argument for open source; as soon as I did, my first thought was more or less “Duh? Somebody should have noticed this sooner?”
Now let’s consider how LLMs change this picture. Or…don’t.
An LLM is like a cryptanalyst with a superhuman attention span that never sleeps. If your system leaks information that can compromise it, that compromise is going to happen a hell of a lot faster than if your adversary has to rely on Mark 1 meatbrains.
But it gets worse. With LLMs, decompilation is now fast and cheap. You have to assume that if an adversary can see your executable binary, they can recover the source code. If you were relying on that to be secret, you are *screwed*.
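You do not even need an LLM-grade decompiler to make the point: a strings(1)-style scan over a shipped binary already surfaces any embedded secret. The blob below is a hypothetical stand-in for a real executable, and the key name is invented for illustration.

```python
import re

# Minimal strings(1)-style scan: extract printable ASCII runs of 6+ characters.
# The "binary" is a hypothetical stand-in blob, not a real ELF file.
binary = b"\x7fELF\x02\x01\x00" + b"api_key=hunter2" + b"\x00\x91\x83"

def embedded_strings(blob: bytes) -> list[str]:
    return [run.decode() for run in re.findall(rb"[ -~]{6,}", blob)]

print(embedded_strings(binary))   # -> ['api_key=hunter2']
```

If a five-line scan recovers the secret, assume a model that reconstructs whole source files from machine code will too.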
Leakage control — limiting the set of bits that can yield a compromise — is more important than ever. So security by code obscurity is an even more brittle and dangerous strategy than it used to be.
Anybody who tries to tell you differently is either deeply stupid or trying to sell you something that you should not by any means buy.