This website is no longer on AWS
Because it doesn’t need to be, and in fact it could have been doomed because it was.
NOTE: This describes an older iteration of this website, back when it was still a static page. The logic still kinda tracks today, though.
For a long while this website - and several others like it I operate - was hosted on AWS. I used a combination of S3, S3 Websites, and CloudFront. It’s not a bad setup for a static website: it’s relatively simple to set up once you learn all the magic words to make it not 403 every other request, and it costs pennies at my scale. I would imagine that at the point where it would start costing actual money, it’d be some form of A Good Problem To Have.
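For the curious, the deploy dance boiled down to roughly the following - the bucket name and distribution ID are placeholders here, not my actual setup, and the one-time bucket-policy incantation that stops the 403s is left as an exercise:

```sh
# Push the freshly built site to the bucket backing the S3 website endpoint
aws s3 sync ./public s3://my-blog-bucket --delete
# Then ask (well, pay) CloudFront to actually notice the new files
aws cloudfront create-invalidation --distribution-id EXAMPLE123 --paths "/*"
```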
It wasn’t a bad UX, either - for the website visitors, I mean. I absolutely cheaped out on CloudFront, only replicating my blog to a few of their cache regions. Still, the access times from where most of my readers historically were, for a bit of text, some CSS and the odd webp, were fantastic.
I say “historically were” because I no longer run any analytics on this blog. This article puts the reasoning behind it more eloquently than I could. In any case, analytics are mostly unneeded here - a side effect is that I don’t know where the readership comes from, or indeed whether there is any. Nor do I really care.
Those access times are objectively worse now, unless you’re in the right bit of the European Union. I’m only sorta in that bit, so mine are just okay. That’s because this site is now hosted via nginx running on somebody else’s computer[1] in Germany.
While the UX for the readers may be slightly worse, my own… development experience, I guess? Is miles better now.
Firstly, I could replicate this setup anywhere. I could run it on a Raspberry Pi in somebody else’s basement[2]. I could run it from a Linux box in someone’s garage[3]. My point is that I’m no longer beholden to a very specific type of someone else’s computer which happens to speak the AWS API. The makefile I use for this site is no longer mostly calls to the AWS CLI.
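The whole thing is now, give or take, one rsync. A minimal sketch of what the deploy target amounts to these days - the host and path are stand-ins, not my real server:

```sh
# Build the site with whatever static site generator is in use, then
# push the output to the box running nginx. -a keeps permissions and
# timestamps, --delete removes files that no longer exist locally.
rsync -avz --delete public/ deploy@some-box-in-germany:/var/www/blog/
```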
I no longer have to wait between an hour and twenty hours for the inspired tractates herein proffered to become available to the uncaring maw of the Information Superhighway[4]. I may write on average one post per month, but by Jove, if I wrote it, I want it to be out now, now, NOW.
I also no longer need to use a separate box for storing the raw media going into this blog. (The blog needs a lot of files which could be smaller, but I’d have to actually make an effort for once, and we can’t have that.) I can just store them in git, where the rest of this site lives. You kids and your LFSes. Get off my porch. It’s not like I don’t know how to configure GitLab for LFS, silly, I just don’t want to. Yeah.
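(For completeness, the LFS setup I keep not doing is only a couple of commands - assuming plain git-lfs, nothing GitLab-specific:)

```sh
git lfs install                  # one-time hook setup per machine
git lfs track "*.webp" "*.png"   # records the patterns in .gitattributes
git add .gitattributes
git commit -m "Track heavy media with LFS"
```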
It’s not like I started hating AWS or something. It’s kinda like GitHub to me: I still use it for work and won’t start hissing whenever I need to touch it. I just looked back on all the server moves/provider moves I had to do before and decided that vendor lock-in kinda sucks. I like this website. If I have to suddenly move it, it probably won’t be a priority - I’ve got stuff to do. So I won’t move it immediately if it’s not super easy to do, and with vendor lock-in it never is. Therefore something as minor as a credit card bouncing was an existential threat to my blog.
Now it’s more expensive and slower. But it can be more expensive and slower almost anywhere. And that was the point.
[1] Yes, I do mean The Cloud. Just a smaller cloud owned by somebody other than Jeff Bezos.
[2] Incredibly, mine isn’t wired for Ethernet. Or power.
[3] I wish I had one of those. I could wire it for Ethernet!
[4] Or pay Jeff a king’s ransom for a “cache invalidation”. Why did it never give me pause that my blogging came with microtransactions?