Chronologication
Created 2025-11-29, last modified 2025-11-30
Historically, my website has been organized in a topical way, the way you'd put together documentation. My articles have names, not numbers. It's added some useful back-pressure against my impulse to just write stuff on a whim: I need to pick a name for the thing I'm writing, and think about whether it's redundant with the other named things I've written.
Well, to make my site easier to follow online without platform lock-in, I'm switching to an RSS model with chronological feeds, so y'all are stuck with an unchecked form of my technobabble now.
Crafting a changeling
So how does this work? Well, first of all, I knew a blog would be easier to implement if I had a statically generated website. I wrote the original version of this site as a Rocket.rs app (as part of a larger goal of learning Rust), and I came away from that project feeling really dissatisfied with the tech stack I was using. It would have been very difficult to extend to support the needs of a blog. On the other hand, adding blog features would be easy in an offline site-building process, and I could replace the site generator with a Prone equivalent someday as part of the dogfooding process.
That said, I didn't want it to be obvious or visible that I'd changed the underlying tech: no broken features (at least none I was still using; the /stream page was forfeit), no changes to layout, the same bytes shipped to the consumer from a different factory process.
This dictated some of the stuff that stayed the same. I'm using the same Handlebars-based template, and the same Markdown documents. However, it only took a couple hours to start a new uv project and get all the HTML pages of my website generating accurately from a Python 3 script, starting with the index page (so I could focus on generating one page and static resources) and adding logic to walk through my whole Obsidian vault to find which pages to generate.
The hard part came from replicating one specific feature, which had to be handled by the webserver (in this case nginx) and not the static site generator: every page of my website will provide HTML to browsers, but original Markdown to CLI tools, based on the Accept header sent in the request saying what MIME types the client supports. This is such a rare, odd behavior that it took ages of poring through nginx docs and guides to figure out how to get the server to do what I wanted. In the end, it required a couple things:
First, a "map" that creates a new variable $ext based on the contents of the Accept header. A lot of the time spent on this was just figuring out how to access the header.
map $http_accept $ext {
    default md;
    ~.*text/html.* html;
}
So if text/html is present anywhere in the Accept header, like a browser would send, $ext is set to html, but otherwise it's set to md.
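In Python terms, the map behaves like this little sketch (the regex is the same one from the config; the function name is mine, not anything nginx-related):

```python
import re

def pick_ext(accept_header: str) -> str:
    """Mimic the nginx map: html if text/html appears anywhere, else md."""
    return "html" if re.search(r"text/html", accept_header) else "md"

# A typical browser Accept header matches; a bare curl request doesn't.
print(pick_ext("text/html,application/xhtml+xml;q=0.9,*/*;q=0.8"))  # html
print(pick_ext("*/*"))  # md
```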
If you think that "ext" might be short for "extension", like the suffix of a file name, you'd be right. The cool thing is we can literally just use this directly in our try_files directive.
location / {
    root /usr/share/nginx/html;
    try_files $uri $uri/index.$ext $uri.$ext =404;
    error_page 404 /404.$ext;
}
This means if a client specifies the extension of a page, for example a request to /software.html or /index.md, well, we have those on disk, generated by main.py. Every page has a .md and a .html version living next to each other. Nginx can just serve that. If you don't specify an extension, nginx will pick one based on your Accept header. In fact, other than having .$ext in place of .html, this is an extremely bog-standard try_files directive that should read like boilerplate to any long-time nginxer.
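The lookup order try_files implies can be sketched in Python, first hit wins (nginx does this internally, of course; the function and paths here are just illustrative):

```python
from pathlib import Path
from typing import Optional

def resolve(root: Path, uri: str, ext: str) -> Optional[Path]:
    """Mirror `try_files $uri $uri/index.$ext $uri.$ext`: return the first existing file."""
    name = uri.lstrip("/")
    candidates = [
        root / name,                  # exact file, e.g. /software.html
        root / name / f"index.{ext}", # directory index, e.g. / -> /index.html
        root / f"{name}.{ext}",       # extensionless page, e.g. /software -> /software.md
    ]
    return next((c for c in candidates if c.is_file()), None)
```

So a CLI request for /software (mapped to ext "md") falls through to software.md, while an explicit /software.html hits the first candidate directly.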
Finally, the built-in mime.types file that ships with the Nginx Docker container is missing Markdown, so I had to inline a copy of it into my custom nginx.conf but with Markdown added.
default_type text/html;
types {
    text/html      html htm shtml;
    text/markdown  md;
    # ...
}
That last bit of work isn't a huge deal, but it means we send the right Content-Type header in the very common case that the file being served is a .md.
Subjectively, my website feels a bit faster now that it's static, but it's also quicker to deploy, since I no longer have to sync my entire Obsidian vault to my site-hosting laptop. It just didn't make sense to clone over a bunch of large files that my website software didn't even allow people to see, like the full album Meliora and a bunch of high-res wallpapers. It's a perfect impersonation of the old site, at a fraction of the weight.
Generating the New Stuff
Once I had a static site generator working for my existing website, I needed to extend it to generate things like the RSS feed itself, but also an index page for this blog.
For the RSS feed, the RSS 2.0 spec was a vital reference, but I'll be honest, the most useful thing was starting with the example feed on the official website and slowly customizing it to have more and more fields sourcing from real data specific to my site. The fun thing is I'm actually using a Handlebars template to render my RSS feed, the way I would any page on my website, but it's specifically an RSS template.
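One detail worth calling out: RSS 2.0 wants RFC 822-style dates for fields like pubDate and lastBuildDate, not ISO 8601. In Python, the stdlib's email.utils handles that format, so a sketch of preparing a date for the template might look like:

```python
from datetime import datetime, timezone
from email.utils import format_datetime

# RSS 2.0 dates follow RFC 822, e.g. "Sat, 29 Nov 2025 00:00:00 +0000".
created = datetime(2025, 11, 29, tzinfo=timezone.utc)
pub_date = format_datetime(created)
print(pub_date)  # Sat, 29 Nov 2025 00:00:00 +0000
```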
One of the things this project prompted me to do was install the Frontmatter Modified Date plugin for Obsidian, by Alan Grainger. I haven't historically tracked the creation or edit dates of pages on my website, and I needed this information for RSS fields like pubDate at the channel and item levels. It's also something I always meant to display on the website normally. Now I actually have the automation to track these dates in a more reliable way than filesystem metadata. And now that I have the data, it's pretty easy to display in the main template of my site.
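Pulling those dates back out of a page's frontmatter is simple enough to do with the stdlib. This is a minimal sketch, assuming the plugin writes keys like created and modified (the exact key names depend on how the plugin is configured, and my real code may differ):

```python
import re

def read_frontmatter_dates(markdown: str) -> dict:
    """Extract created/modified values from a YAML frontmatter block, if present."""
    m = re.match(r"\A---\n(.*?)\n---\n", markdown, re.DOTALL)
    dates = {}
    if m:
        for line in m.group(1).splitlines():
            key, _, value = line.partition(":")
            if key.strip() in ("created", "modified"):
                dates[key.strip()] = value.strip()
    return dates

doc = """---
created: 2025-11-29
modified: 2025-11-30
---
# Chronologication
"""
print(read_frontmatter_dates(doc))  # {'created': '2025-11-29', 'modified': '2025-11-30'}
```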
When I finally got to the index page, I realized how committed I was to the pages on my site consistently having simplified Markdown versions as well as browser-fancy HTML versions, so the way I make the index page is a little funny. I actually generate a Markdown version of the page first, and then convert that to HTML in the same way as any other page. The code to do this is currently inefficient, janky, and redundant, but I don't really care because:
- I'm planning to polish it up later.
- Polishing too early means you're guessing what's a pattern vs a one-off. If you wait and endure the jank for a while, it eventually becomes obvious how to clean things up correctly.
I'm planning to provide 3+ different blog feeds, so I'm intentionally applying some restraint and not making my code pretty until I have multiple feeds. And of course, it doesn't need to be that fast, as it's an offline process.
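The md-first index generation amounts to something like the following (function and field names here are illustrative, not my actual code; the Markdown this emits then goes through the same md-to-html path as any other page):

```python
def render_index_md(posts: list) -> str:
    """Build the blog index as Markdown, newest post first."""
    lines = ["# Blog", ""]
    for post in sorted(posts, key=lambda p: p["created"], reverse=True):
        lines.append(f"- [{post['title']}]({post['url']}) ({post['created']})")
    return "\n".join(lines) + "\n"

posts = [
    {"title": "Chronologication", "url": "/blog/chronologication", "created": "2025-11-29"},
]
print(render_index_md(posts))
```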
Finally, I had to edit my site to link to the new resources. This included the <link rel="alternate" ...> magic to make the RSS feed detectable by the browser, plus a PNG version of the site favicon, since blog icons can't be .ico files. I also edited the site header and homepage to link to the blog area.
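For reference, the feed-discovery tag goes in the page's head and looks like this (the title and href here are placeholders, not necessarily my real paths):

```html
<!-- Lets browsers and feed readers auto-discover the RSS feed. -->
<link rel="alternate" type="application/rss+xml" title="Blog" href="/blog/feed.xml">
```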
There's more I need to do next, like the ability for different blogs to have different site background colors, making the site's local preview mode work better with self-links, massive code cleanup, and copying over content from Tumblr, where I used to post technical content. But this is enough for one day! And I'd like to get back to C programming soon. Hopefully, that's what the next post on this blog will be about. See you then!

