How the site works

Fundamentally, the entire website is a docker-compose configuration running on a desktop computer in my bedroom. I don't need massive international redundancy. Cheap hosting and simple administration are higher priorities for this project. And that allows some cool stuff. Let's dig into the file, actually.

name: maddiem4-cc
services:
  nginx: # Not currently used, likely to be removed
    image: nginx
    ports:
      - '8080:80/tcp'
  web:
    build: ./containers/web
    volumes:
      # These can probably be simplified
      - ./containers/web/templates:/usr/src/web/templates
      - ./containers/web/pages:/usr/src/web/pages
      - ./containers/web/static:/usr/src/web/static
    environment:
      # Likewise, this could just be a better Rocket.toml
      - ROCKET_ADDRESS=0.0.0.0
      - ROCKET_PORT=8082
      #- ROCKET_PROFILE=debug
      #- ROCKET_LOG_LEVEL=debug
    ports:
      - 8082:8082
  cloudflared-tunnel:
    build: ./containers/tunnel
    secrets:
      - tunnel-token
  ntfy:
    image: binwiederhier/ntfy
    command: ["serve"]
    environment:
      - TZ=PT
    volumes:
      - ./volumes/ntfy-cache:/var/cache/ntfy
      - ./volumes/ntfy-config:/etc/ntfy
    ports:
      - 8081:80
    restart: unless-stopped

secrets:
  tunnel-token:
    file: secrets/tunnel-token.txt
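
With that file (and the token secret) in place, running the whole site is just the standard Compose workflow; presumably nothing more exotic than:

docker compose up --build -d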

The centerpiece is the web service, which is a custom website written in rocket.rs. I'll go into more detail on that later, but when you look at most things on maddiem4.cc, like this page, you're getting served pages by a Rocket server. Check the response headers! Er... well, those'll say Cloudflare actually, which brings us to the next point.

Next we have a cloudflared container. This allows us to use Cloudflare ZeroTrust Tunnels to host a real website from some random machine in a home network with a dynamic public IP. The tunnel daemon identifies itself to Cloudflare, which allows Cloudflare to send traffic to the tunnel according to configuration I set up on their control panel. Now here's what's real fuckin' cool and synergistic: my Cloudflare configuration references hostnames like ntfy and web, because the tunnel daemon is running in an environment where Docker DNS is set up. So the tunnel is just talking to peer containers by their names (and forwarding to CF, of course).
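
The hostname-to-service mapping itself lives in the Zero Trust dashboard, since this is a token-based, remotely-managed tunnel. But cloudflared's config-file syntax for locally-managed tunnels expresses the same idea, and it's a handy way to picture it. Something roughly like this (the subdomain is illustrative, not necessarily my real setup):

ingress:
  - hostname: maddiem4.cc
    service: http://web:8082
  - hostname: ntfy.maddiem4.cc
    service: http://ntfy:80
  - service: http_status:404

The service URLs are just the compose service names plus their container ports, courtesy of that Docker DNS.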

Finally, we have ntfy. This is a neat little service that lets me easily send notifications to my phone or desktop from the command line. I have it set up in a way that my loved ones can use it, but random strangers can't. Should be good for my nerdy household.
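
If you haven't seen ntfy before, the interaction model is plain HTTP: you POST a message to a topic URL, and anything subscribed to that topic (the phone app, a browser tab, another script) gets it pushed immediately. A sketch with a made-up hostname and topic:

curl -d "Dinner's ready" https://ntfy.maddiem4.cc/household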

Let's dive into the web server

Lots of stuff going on there. For example, I'm able to update the site from anywhere in the world with Obsidian Sync. How does that work?

This web server is some Rust source code (for generating the server program) and associated config like Rocket.toml, combined with three directories used at runtime (as you saw volume-mounted for the container):

- templates: the Handlebars templates that pages get rendered through
- pages: the Markdown content itself, i.e. my Obsidian vault
- static: files served as-is

While templates and static are part of the git repo that I maintain this entire project in, pages is maintained by Obsidian; I just symlink it appropriately on whatever computer needs to run the site software. In fact, pages is in my .gitignore, so not only is it untracked by git, the symlink can point somewhere different and appropriate... per machine!
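
Concretely, pointing a machine at its local copy of the vault is just a symlink, something like this (the vault path is made up; it's wherever Obsidian Sync lands on that machine):

# pages is listed in .gitignore, so git never tracks it.
# Point it at the locally-synced Obsidian vault (or the right folder inside it):
ln -s ~/Obsidian/MyVault containers/web/pages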

I'm not ready to publish the source code quite yet. But I think after a little auditing, that should be fine. I intended for it to be publishable from the start, so there's been stuff like untracked secrets files keeping sensitive info out of the commit history all along.

The Rocket server doesn't do anything terribly fancy. It does static file hosting, and requests that aren't known to be static are treated as dynamic and assumed to be pages in my Markdown vault. Info from the page (including the Markdown compiled to HTML) is then fed into the Handlebars template engine.

But this is a personal vault that I'm retrofitting public access onto, so probably the most complicated thing it currently does is read some metadata from the frontmatter of pages. Most pointedly, the visibility attribute in the frontmatter must be the string public or the page won't show up on the site. More nuanced scopes validated with Rocket's private cookie system are coming, but not quite yet. The upshot is that my entire vault is private by default, and pages must very explicitly opt into publication in order to be visible. A visibility check failure is exposed as a 404.
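
To make that concrete, here's a rough sketch of that whole flow. This is not my actual code: the route shape, the template name, the string-match frontmatter check, and the crate choices (rocket_dyn_templates for the Handlebars side, pulldown-cmark for the Markdown pass) are stand-ins, but the request path is the one described above.

use std::path::PathBuf;
use rocket::fs::FileServer;
use rocket::{get, launch, routes};
use rocket_dyn_templates::{context, Template};

// Anything that isn't a known static file falls through to this route
// (rank 11 puts it after FileServer's default rank of 10) and gets
// treated as a page in the Markdown vault.
#[get("/<path..>", rank = 11)]
fn page(path: PathBuf) -> Option<Template> {
    let file = PathBuf::from("pages").join(path).with_extension("md");
    let source = std::fs::read_to_string(file).ok()?;

    // Grossly simplified stand-in for real frontmatter parsing: the page
    // must explicitly opt in with `visibility: public`, or it 404s
    // (returning None here is what produces the 404).
    if !source.contains("visibility: public") {
        return None;
    }

    // Compile the Markdown to HTML...
    let mut body = String::new();
    pulldown_cmark::html::push_html(&mut body, pulldown_cmark::Parser::new(&source));

    // ...and feed it to a Handlebars template.
    Some(Template::render("page", context! { body: body }))
}

#[launch]
fn rocket() -> _ {
    rocket::build()
        .mount("/", FileServer::from("static"))
        .mount("/", routes![page])
        .attach(Template::fairing())
}

Returning Option means a failed visibility check and a genuinely missing page look identical from the outside, which is exactly the 404 behavior described above.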

Hey wait a minute. You're not using the upstream Docker image for Cloudflare Tunnels!

You're right! Well, not directly. I make a custom image with very few adjustments FROM the upstream. It's because, for security reasons, I really do want to pass the tunnel token by file. And since Cloudflare doesn't seem to want to implement that natively (dare I even link the patronizing GitHub issue threads where people point out that it's the industry standard in the Docker world?), I basically made a little wrapper that reads the token from a file and hands it to the underlying tunnel program.

It's obnoxious that I had to do this, and if they ever fix the problem upstream, I'd love to stop making a workaround that pulls stuff like cat into a scrupulously minimal upstream image. I feel like I'm desecrating something pure here! I don't want to!

Dockerfile:

FROM busybox:1.35.0-uclibc AS busybox
FROM cloudflare/cloudflared

COPY --from=busybox /bin/sh /bin/sh
COPY --from=busybox /bin/cat /bin/cat
COPY --from=busybox /bin/ping /bin/ping

COPY entrypoint.sh /bin/
ENTRYPOINT ["/bin/entrypoint.sh"]

Entrypoint:

#!/bin/sh
set -e
# exec so cloudflared replaces the shell and receives signals (e.g. from docker stop) directly
exec cloudflared --no-autoupdate tunnel run --token "$(cat /run/secrets/tunnel-token)"

Oh by the way hi Clarissa!

Love you!