Bleach, Meet Ammonia
Created 2025-12-13, last modified 2025-12-13. Visibility: public
Well, it's CVE season again. At time of this writing, if you're using Next.js and/or React, you're probably having a very annoying day at best.
Now, while I'm never above saying "I told you so", I try to save it for special occasions, where someone does actually deserve to be kicked a little while they're down. So let me get my steel-toed boots on and explain (not for the first time) why Object Oriented Programming is something I consider harmful. It's not the only lesson we can learn from this CVE, but other outlets are going to cover other (valid, but low-hanging fruit) sides like "don't trust user input." I specifically want to cover the angle that I don't expect to see much of online:
Object Oriented Programming is fundamentally misguided because it mixes code into data, and you really shouldn't do that.
A historical primer
In ye olden days of primitive machines and precious little memory, there wasn't a lot of security either. And frankly, in context, that made sense. Many things you can do to improve security will require more RAM or more CPU cycles, and on the computers we're talking about, you wouldn't really run untrusted code that somebody else wrote (and if you did, you at least had to type it in yourself). Plus, there wasn't a lot of stuff you could break with a nefarious program. Many of these computers could only run one program at a time. So you didn't need to worry too much about Process A reading or modifying Process B's memory - you might not even have a concept of "processes" at all.
Obviously, this changed over time. We gained protection barriers between processes, which was a good start. As our processes got bigger, they started needing internal protection too. The same hardware support that let us keep processes from mangling each other (MMUs), we also started using to say "this piece of process memory is executable code, so we can't make changes to it. But this other piece is data, so we can read and write that."
This made life a lot harder for hackers. You couldn't just put a bit of malicious machine code into some piece of process data and then trick the process into executing it - data sections aren't executable! But it was also a godsend for performance. You don't usually get a free lunch where a security fix actually improves speed, but for this, it makes sense.
When you run a program, the operating system makes sure there's a copy of its code in memory. But if you run a second instance of the same program, well, we can just reuse the same piece of memory, right? It can't be modified, so the two instances can't mess with each other. So why not reuse the memory? And we can keep it cached in RAM even when the program isn't running, so if we run the program again, we don't have to load it off of disk at all. We can even use memory-mapped IO for this, which means that when we load a program into memory, we don't have to load it all at once! Pieces will load as they're needed. And here's the kicker: shared libraries can work the same way. Which means something like glibc (used by basically every program on a Linux computer) can exist as a single fast copy in memory, no matter how many programs are using it. The level of synergy is so beautiful, it's mind-boggling.
Basically, code and data are usually best kept separate, for both security and performance. Unfortunately, for a very long time, it was a very commonly held belief that the secret to better code was to swirl the two together, and this philosophy is known as Object Oriented Programming.
Isn't bleach plus ammonia just more cleaner?
I should play a little devil's advocate and explain why there were smart people who thought this strategy was a good idea. As humanity started making bigger, more complex software, we started to really worry about keeping that complexity tamed. OOP is one of the responses to that worry.
OOP isn't really about "no boundaries" - if anything, it's the opposite. The goal is more boundaries, but placed differently. An "Object" is a piece of data, but with a bodyguard: its class interface.
class Example {
public:
    // Things that external code is allowed to see/use.
    bool data_is_negative();

private:
    // Things that only code within the object is allowed to use.
    int hidden_data;
};
This prevents our Example objects from being accidentally or intentionally misused by some other code later. We set rules, and the language enforces them. The dream is to make our programs out of these self-contained, rigorously-defended pieces.
I'm leaving a lot out here - for example, early OOP was very enthusiastic about the idea of using something called "inheritance" to reuse code, which has fallen out of fashion these days for reasons that would be a separate article. For now, the key principle is: it seemed very reasonable in the 90s to envision the future of software as a Russian nesting doll arrangement of big components made of medium components made of smaller components, each of which had to be understood as a thick-walled castle of code-protected data.
So why doesn't this approach work? Well, it works enough to get by, actually. You can make big software this way, and certainly many people have. C++, the language in that example a few paragraphs back, was invented to add OOP to the C language, and most big video games are written in C++ using classes.
Yeah, you have to interact with an object on its own terms, and sometimes those terms were written by an idiot, but that usually leads to software being slow, bloated, and roundabout - not outright impossible or insecure. Or, stuff is harder to write tests for, and you end up using a lot of mocks everywhere to stand in for real objects. Or, when you need to update your dependencies, they become subtly incompatible with each other due to behavioral changes. Or, it's harder to write things like time-travelling debuggers (fun name for something that's more like a VCR than anything actually science-fictional. They're really useful!).
It's (almost) entirely stuff that a modern programmer can just accept as an annoyance, one of the grievances of daily life. Pour another cup of coffee and suck it up. At least, until you need to send program state from one computer to another. Welcome to hell.
The Serialization Problem
The correctness of an object depends on it being a combo of code and data. So let's say I want to send an object from my computer to your computer. In order for it to work the same way on your computer as it does on mine, you'll need to run my code. Do you trust me?
In compiled languages like C++, you actually can't send code - and under the hood, data is stored separately from code anyway. OOP mixes the two at the logical level, not the implementation level. So, to send an object, you actually need to (logically):
- Split the code and data apart, and send just the data, in some agreed-upon format. This is called serialization.
- The receiving program takes the serialized data, combines that with its own behavioral code, and produces a working object. This is called deserialization.
- If the code I dehydrated out of my object on my end doesn't match the code you rehydrated back into it on your end, weird stuff can go wrong.
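The round trip above can be sketched in Python, using JSON as the agreed-upon format (the Account type and its field are hypothetical, just for illustration):

```python
import json
from dataclasses import dataclass

@dataclass
class Account:
    balance: int

    # The "code" half of the object: behavior that does NOT get sent.
    def is_overdrawn(self) -> bool:
        return self.balance < 0

# Serialization: strip the code away, keep only the plain data.
wire = json.dumps({"balance": Account(-5).balance})

# Deserialization: the receiver recombines that data with ITS OWN
# copy of the code. If its Account class has drifted from mine,
# the rehydrated object behaves differently than the original.
restored = Account(**json.loads(wire))
print(restored.is_overdrawn())  # True
```

Notice that `is_overdrawn` never crosses the wire - only the receiver's local definition of it matters, which is exactly where the "weird stuff" comes from when the two sides disagree.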
In other words, OOP is such a mistake, such a violation of good practice, that you have to temporarily undo it in order to scale a computation beyond a single machine.
Of course, in dynamic languages, keeping code out of serialized data is a choice, and an inconvenient one in object-oriented languages. What if you could just leave it in? Surely nothing bad would happen, right? Well...
One of the things Python provides is a built-in serialization tool called pickle, which lets you save Python data structures to disk and such. Now obviously, if you're just saving and loading pure data, this is pretty safe. But Python users have a lot of their data in the form of Objects, in the OOP sense, so the original authors of pickle felt that it wouldn't be a useful tool unless you could save and load objects too. So, what should pickle do if you try something silly?
>>> import pickle
>>> import socket
>>> my_connection_to_google = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>>> my_connection_to_google.connect(("google.com", 80))
>>> # Okay. How do I save a live network connection for later?
>>> pickle.dumps(my_connection_to_google)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.10/socket.py", line 273, in __getstate__
    raise TypeError(f"cannot pickle {self.__class__.__name__!r} object")
TypeError: cannot pickle 'socket' object
Obviously the authors of the Python standard library were able to prevent me from that specific bit of mischief. That's good! But the history of the pickle module is a series of security vulnerabilities. It can still get confused by user-defined classes that should produce errors like my example did. The docs recommend you use something else, and urge you never to unpickle untrusted data. Even with trusted data, there are subtle sharp edges you have to be aware of. It fundamentally cannot be safe.
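To make "fundamentally cannot be safe" concrete: pickle's documented __reduce__ protocol lets an object specify an arbitrary callable to invoke at load time. Here's the classic footgun, with a deliberately harmless payload standing in for what an attacker would do:

```python
import pickle

class Sneaky:
    """Looks like plain data, but smuggles a function call."""
    def __reduce__(self):
        # On unpickling, Python CALLS this function with these args.
        # Here it's a harmless abs(); an attacker would reach for
        # something like os.system instead.
        return (abs, (-99,))

payload = pickle.dumps(Sneaky())
result = pickle.loads(payload)
print(result)  # 99 - not a Sneaky object at all; our call already ran
```

The load side never gets a chance to inspect the object before the call fires - by the time pickle.loads returns, the attacker's code has already executed.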
This game of whack-a-mole could have been easily prevented by just, you know, never even taking on the design goal of serializing and deserializing executable code. But that's just looking at the what, not the why. I'll say it again: Python users have a lot of their data in the form of Objects, so the original authors of pickle felt that it wouldn't be a useful tool unless you could save and load objects too.
Ergo, if internal program state wasn't structured as Objects in the first place, there would be no motivation to engage in risky, unprotected serde. The game was rigged from the start.
Get to the CVE!
Oh right, we were talking about a specific zero-day vulnerability in JavaScript.
React Server Components is a relatively young technology for doing server-side rendering. If you're not familiar with SSR, it's a compromise between two strategies for hosting netsites on the interweb:
- The 2000s way, where the server creates complete HTML that the browser can display immediately. That's how this website works!
- The 2010s way, where you download a JS app once, and it shares an API backend with your mobile apps. Slower initial pageload, but faster to navigate around afterwards, and it lets you build things like Google Docs or Netflix.
Server-side rendering, the 2020s way, is a hybrid. You generate complete HTML on the server, but you use the same code as your browser JS app to do it. That way the browser can show accurate pixels immediately, and then when the JS app loads on the browser side and takes over, it does so seamlessly. For perfect seamlessness and efficiency, you usually need to bundle up the HTML and the data that was used to render the HTML, so the client-side app can pick up where the server-side app left off.
The good news is, the React Server Components team seems to have the good sense to want this data transfer to be purely data.
Here's the bad news.
- There are already great formats for sending plain data safely (primarily JSON), but RSC invented its own format to be more efficient for redundant and/or recursive payloads.
- The "make sure it's just plain data" validations weren't comprehensive. They didn't cover the technique that React2Shell uses.
- There are situations where the communication in this new format needs to go in a browser-to-server direction.
- Doing server-side templating requires your server to be a giant JS app with access to your databases and other behind-the-scenes goodies.
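For contrast, this is what makes a plain-data format like JSON structurally safer: no matter what bytes arrive, the parser can only ever produce inert values, never a live callable. A small Python illustration (the payload is made up):

```python
import json

# A hostile-looking payload: scary key names, a shell command, etc.
payload = '{"role": "admin", "__class__": "os.system", "cmd": "rm -rf /"}'

obj = json.loads(payload)

# json.loads can only build dicts, lists, strings, numbers, booleans,
# and None. The scary strings stay inert strings - there is no hook
# that turns bytes on the wire back into executable behavior.
print(type(obj), type(obj["__class__"]))
```

A custom format can be just as safe in principle - but every feature you add beyond "plain data in, plain data out" is another validation you have to get right, which is exactly where this one slipped.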
It's an unfortunate combination, which allowed attackers to run arbitrary code on affected servers by crafting malicious payloads that (when deserialized by the server) would corrupt various in-memory objects to do the attacker's bidding.
Here's why I want to focus on the language design angle. Technically, you can use JavaScript/TypeScript safely, but it requires a certain level of care, because (as a multi-paradigm language that supports OOP) it has to support mixing code and data by design. Keeping the two separate requires active defensive effort from programmers. Even in TypeScript, not only are you not protected - classes are commonly accepted as best practice! Only the most egregious forms of cross-contamination are actually treated as hazardous by the community, for example "eval is evil."
Trying to do better
I didn't write this post to be self-promotion for my projects, so I'll keep this section brief, but I'm actually working on a programming language that allows converting data to code or vice-versa, and runs in an interpreter. By now, that should sound like playing with fire, which is why the language design has some guardrails that many languages don't.
- Converting data into code is an explicit action.
- There's no such thing as classes. All data is plain data using language primitive types.
- Type-based function dispatch is still important for ergonomics, so we use multiple-dispatch free functions, which can handle everything that methods can (plus extra).
- User-defined types are still valuable, but they're treated as ways of specifying and validating the structure of plain data objects.
- Plain data can't be recursive.
- Reference types are technically supported, but they're treated with suspicion, sort of like unsafe blocks in Rust.
- The memory model is copy-on-write across function calls, which prevents polluting globals outside your local scope.
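Prone isn't something you can run yet, but Python's functools.singledispatch gives a rough taste of the free-function style from the dispatch bullet above (it dispatches on a single argument, where Prone would dispatch on several):

```python
from functools import singledispatch

# Free functions dispatched on argument type, instead of methods
# attached to classes. The data stays plain; behavior lives outside it.
@singledispatch
def describe(value) -> str:
    return "something else"

@describe.register
def _(value: int) -> str:
    return "an integer"

@describe.register
def _(value: list) -> str:
    return "a list"

print(describe(42))      # an integer
print(describe([1, 2]))  # a list
```

The point is that you lose nothing ergonomically by keeping functions out of your data structures - the type system can still route calls to the right implementation.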
It's a bit like working in pure, typed JSON for 95% of your code. It stands out when you're doing something that couldn't just serialize safely, which lets you reserve your care and attention for those situations. And then you get to enjoy faster code, easier testing, better library interoperability, and time-travelling debuggers.
I really don't want to pitch you too hard on Prone specifically. It's not ready for people to use, and even when it's mature, it won't fit all needs - I'm making the language I wish I had, and if it works for other people, great!
What I really want, though, is to train people's standards to be higher in specific ways. I really do think people should have a wary reaction to capital-O Objects. I think people should treat classes as a smell. I think these heuristics should inform which languages you use (to the extent it's your choice), and how you use them. I think that these standards will lead to a simpler, safer, cleaner, faster world.
Maybe, just maybe, those will be mainstream opinions someday.

