It seems my annual website-hosting reboot trend is continuing. Every year I get bored with how my websites and utilities are hosted and rewrite the book. The past several hosts were all based on CentOS or Ubuntu Linux. When I worked for a very large, but now defunct, public cloud host, I used several instances to provide a fully redundant, scalable hosting environment for my 10 visitors. Most recently I was on a single Ubuntu 16.04 LTS host running on a dirt-cheap OpenVZ instance out of Dallas, but all that's behind me now because I'm going back to my roots!
From 2008 until 2012 I worked full time on FreeBSD servers as a systems administrator for a high-performance web hosting and content management development firm. While there I was introduced to what has since taken the DevOps world by storm: containers. Not Docker, LXC, or OpenVZ, though. I was working with jails, which date back to 2000. By comparison, Solaris Containers were first released in 2004, OpenVZ in 2005, and LXC in 2008.
One thing I decided I wanted in my new hosting environment was a good level of process separation. That way, if my wife requests that her website run WordPress, with its long list of security issues, I don't have to worry as much about the contents of my git server being exploited. One way to get this separation would be to run multiple hosts and put the risky content on its own machines. Unfortunately I don't have an unlimited source of funds, and none of these sites has enough traffic to warrant adding hosts like that. Another way to provide the required separation is containers. The catch is that containers aren't a security boundary by themselves, although they can provide some important levels of separation. Below is an early, proposed drawing of what I intended to deploy.
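To give a feel for the one-jail-per-service layout, here is a hypothetical jail.conf sketch. All of the jail names, paths, and addresses are assumptions for illustration, not my actual configuration:

```
# Hypothetical /etc/jail.conf: one jail per service, so a compromised
# WordPress can't touch the git server's files or processes.
exec.start  = "/bin/sh /etc/rc";
exec.stop   = "/bin/sh /etc/rc.shutdown";
mount.devfs;
path = "/usr/local/jails/$name";

wordpress {
    host.hostname = "wordpress.example.com";
    ip4.addr = "lo1|10.0.0.30/32";
}
git {
    host.hostname = "git.example.com";
    ip4.addr = "lo1|10.0.0.40/32";
}
```

Each jail gets its own filesystem tree and its own address on an internal interface, so a break-in stays confined to that service's little world.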
At this point, if you're reading this page, you have passed through the PF firewall on the host machine. PF passed your connection through to the jail running nginx, which terminated SSL and then proxied you on to either Varnish or directly to the Ghost instance for this site. As an additional level of protection, each jail also runs its own PF firewall providing ingress and egress filters. So the Ghost instances, which don't need access to the PostgreSQL server, can't reach it: not just because of connection rules in PostgreSQL, but because it's blocked at the network level.
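A per-jail ruleset along these lines illustrates the idea (this assumes VNET jails, since each jail needs its own network stack to run PF; the addresses and ports are made up for the example):

```
# Hypothetical pf.conf inside a Ghost jail. Default deny both ways,
# then allow only what the application actually needs.
block all

# Accept proxied traffic from the nginx/Varnish jail (assumed 10.0.0.10)
# to Ghost's listening port (Ghost defaults to 2368).
pass in proto tcp from 10.0.0.10 to any port 2368 keep state

# Allow DNS lookups and outbound HTTPS (e.g. for update checks).
pass out proto { tcp, udp } to any port 53 keep state
pass out proto tcp to any port 443 keep state

# Note: nothing passes traffic to the PostgreSQL jail's port 5432,
# so even a fully compromised Ghost process can't open that connection.
```

The point is that the database becomes unreachable from the jails that don't need it, independent of whatever pg_hba.conf says.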
The last technology I have yet to deploy, but am planning to add, is Capsicum. Capsicum is a lightweight OS capability and sandbox framework developed at the University of Cambridge Computer Laboratory, and it fills a role similar to tools such as SELinux or AppArmor on Linux. However, being capability-based, Capsicum operates much differently than those two, which are both MAC (Mandatory Access Control) systems. Capsicum is similar to the framework Google uses in ChromeOS and the Chrome browser to sandbox each tab. I intend to sandbox each application (Apache, nginx, Varnish, etc.) as applicable within its individual jail.
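The basic Capsicum pattern looks like this: acquire the file descriptors you need up front, limit their rights, then enter capability mode, after which the process can no longer reach any global namespace (no new open(2), socket(2), and so on). This is a minimal FreeBSD-only sketch; the log path is a made-up example:

```c
/* Minimal Capsicum sketch (FreeBSD only; the path is hypothetical). */
#include <sys/capsicum.h>
#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	/* Acquire descriptors before sandboxing. */
	int fd = open("/var/log/example.log", O_RDONLY);
	if (fd < 0)
		err(1, "open");

	/* Restrict this descriptor to read and seek only. */
	cap_rights_t rights;
	cap_rights_init(&rights, CAP_READ, CAP_SEEK);
	if (cap_rights_limit(fd, &rights) < 0)
		err(1, "cap_rights_limit");

	/* Enter capability mode: attempts to open new files,
	 * create sockets, etc. now fail with ECAPMODE. */
	if (cap_enter() < 0)
		err(1, "cap_enter");

	char buf[128];
	ssize_t n = read(fd, buf, sizeof(buf));	/* still permitted */
	printf("read %zd bytes inside the sandbox\n", n);
	return (0);
}
```

Even if the application is exploited after cap_enter(), the attacker is limited to the descriptors it already holds, which stacks nicely on top of the jail and PF layers.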
There will be more to come once I get a little further down the road.