
Fighting the Server Conspiracy

(One service at a time)

Jul 17, 2018


Have you ever noticed how much of the discourse is about slicing and dicing the hardware, power and networking resources that data centers have? Whatever you need, you can get it from multiple vendors, with excellent support and exciting features, as long as you keep bringing the money every month. The notions of cloud, AaaS (Anything as a Service) and serverless are firmly and unquestionably at the center of it all.

While this is no collusion (at least I don't think it is), it is clearly in the best interests of those who invest a lot of money in their operations to keep you renting more and more of their capacity. Look, even Microsoft is all about services now. Services bring recurring revenue.

There's a whole ton of use cases where this makes perfect sense: anything bandwidth-intensive, anything that requires short response times, and so on. But I wonder: where can we get away with simpler setups?

One specific case was particularly on my mind. Bootstrapping the ecosystem for SIT, it always annoyed me that in order for an early adopter to use SIT's issue tracking, they pretty much have to put together a git hook I developed, a web server, and a bunch of glue and magic, and then run all of this somewhere the general public can git push to. Yeah, as you might have noticed, this would require them to shell out a few bucks for a slice of a server in one of those data centers. Altogether, it's a pretty steep entry barrier, and it is clearly not helping adoption.

I procrastinated around this issue for a few months, until a very simple idea popped into my head. Reflecting on what decentralization really means (hint: it's not about blockchains!) and how Linux and a few other projects use Git (another hint: they aren't using a centralized service like GitHub to collaborate), I realized that the right technology had been right under my nose the whole time.

Ubiquitously available, cheap-to-free, not requiring you to be online all the time.

E-mail.

Yeah, right, e-mail.

Sure, it still needs a server that's online 24/7 (well, not really, as other servers will queue e-mails for up to a few days, but it helps if it is online as much as possible). But that server is usually operated by somebody else, or you already run your own setup (however unlikely that is in 2018). Heck, you already have at least one e-mail address anyway.

So, what if we just polled e-mail inbox(es) locally, on a maintainer’s machine, applied the patches contained in them and pushed them out automatically?
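Roughly, the loop looks like this. Here is a minimal sketch in Python, and it is only a sketch: sit-inbox itself is glued together from existing tools, and the host name, credentials, repository path and the choice of git am / git push below are stand-ins I picked for illustration.

```python
# Illustrative sketch only: poll an IMAP inbox, try to apply each unseen
# message as a patch, push if it applied cleanly. All settings are made up.
import imaplib
import subprocess
import time

IMAP_HOST = "imap.example.org"          # hypothetical mail server
IMAP_USER = "maintainer@example.org"    # hypothetical account
IMAP_PASSWORD = "app-password"
REPO = "/home/maintainer/project"       # working copy with a push remote configured

def poll_once():
    imap = imaplib.IMAP4_SSL(IMAP_HOST)
    imap.login(IMAP_USER, IMAP_PASSWORD)
    imap.select("INBOX")
    _, data = imap.search(None, "UNSEEN")
    for num in data[0].split():
        _, parts = imap.fetch(num, "(RFC822)")
        raw_message = parts[0][1]
        # `git am` understands the format-patch mailbox layout, so feed it
        # the raw message on stdin and only push if it applied cleanly.
        applied = subprocess.run(["git", "-C", REPO, "am"], input=raw_message)
        if applied.returncode == 0:
            subprocess.run(["git", "-C", REPO, "push"], check=True)
        else:
            subprocess.run(["git", "-C", REPO, "am", "--abort"])
    imap.logout()

while True:
    poll_once()
    time.sleep(300)  # check the inbox every five minutes
```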

sit-inbox in the wild

Turns out, this is a great approach. There's no need to rent a server. E-mail can enable complex workflows at nearly no cost (asking new contributors to sign a CLA, anyone?). The mailbox serves as a log. It works from (almost) anywhere on the planet. Multiple maintainers can run it simultaneously. It would totally work with mailing lists. It's that ultimate lightweight flexibility that comes with simplicity.
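To make the CLA aside concrete: one way to do it would be to gate the apply step on a local list of signers and send instructions to anybody who isn't on it yet. The helper below is a hypothetical illustration, not how sit-inbox actually does it; the file name and its one-address-per-line layout are made up.

```python
# Hypothetical CLA gate: only apply patches whose sender has signed.
import email
import email.utils

CLA_SIGNERS_FILE = "cla-signers.txt"  # assumed format: one e-mail address per line

def cla_signed(raw_message: bytes) -> bool:
    """Return True if the sender of this message is on the local CLA list."""
    msg = email.message_from_bytes(raw_message)
    _, sender = email.utils.parseaddr(msg.get("From", ""))
    with open(CLA_SIGNERS_FILE) as signers_file:
        signers = {line.strip().lower() for line in signers_file if line.strip()}
    return sender.lower() in signers
```

In the polling loop sketched earlier, you would call cla_signed(raw_message) before running git am and send a canned reply asking for a signature otherwise.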

In a matter of a few days, I put together sit-inbox, in the form of a Docker container, packaging a bunch of old and new free and open source software and gluing it all together with tiny scripts. This is how things were before the cloud. This feels good.

This is one less server deployed.

P.S. I do plan to add other transports to sit-inbox (including ones that rely on being online), of course — but that’s not the point of this short article :)


Yurii Rashkovskii

Tech entrepreneur, open source developer. Amateur runner, skier, cyclist, sailor.