Privilege Separation on Linux

I have been following a little-known Linux distribution for a while now called Qubes OS.  This distro is based on Fedora 18 and uses Xen to create strong privilege separation between different kinds of tasks.  Specifically, you create numerous DomU virtual machines (e.g. Work, Banking, Personal, Untrusted), and an application like Firefox running in the “banking” domain is completely pristine and firewalled off from Firefox in the “untrusted” domain.  This is a very neat idea, as Qubes also uses a few “utility” instances to tie it all together (networking, firewall, etc.) and lets you tune whether or not applications in, say, the “Work” domain can talk to applications in the “Personal” domain.

Credit: Pete Massas

The problem with Qubes OS is that it’s based on an older build of Fedora, and its team is apparently too small to keep up.  There are a lot of bugs and not enough people working to squash them all.  Xen virtualization is also heavy.  The idea behind Qubes is solid, but its execution is perhaps sub-optimal.

I haven’t done very much work yet, but I’m looking at whether or not I can use Docker on a modern Linux system to launch userland applications rather than daemons, and then tie those applications into the host X session using something like xpra.  Nat Meysenburg has a good article about using xpra with a unique user per application for privilege separation.
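
The rough shape of the idea can be sketched as follows.  This is only a sketch: the image name “isolated-firefox” is hypothetical (it would need Firefox and xpra installed inside it), and the port and display numbers are arbitrary.

```shell
#!/bin/sh
# Sketch: run a browser inside a container, pull its window into the
# host X session via xpra. Image name, port, and display are
# placeholders, not a tested setup.
CONTAINER="untrusted-firefox"

if command -v docker >/dev/null 2>&1 \
   && docker image inspect isolated-firefox >/dev/null 2>&1; then
    # Start the app under an xpra server inside the container,
    # exposing xpra's TCP port to the loopback interface only.
    docker run -d --name "$CONTAINER" -p 127.0.0.1:14500:14500 \
        isolated-firefox \
        xpra start :100 --start-child=firefox --bind-tcp=0.0.0.0:14500
    # Attach from the host: the window joins the local X session, but
    # the browser never touches the host filesystem.
    xpra attach tcp:127.0.0.1:14500
else
    echo "docker and/or the isolated-firefox image are unavailable; sketch only"
fi
```

The appeal over full Xen domains is weight: a container plus xpra is far cheaper than a whole DomU per trust level.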

Credit: Philippe Amiot

I have also considered whether using a tool like Open vSwitch makes sense in this context, or whether it will just make things unnecessarily complex.  More to come on this topic after I sit down at a whiteboard with the author of http://www.therandomsecurityguy.com.

Cheers!

Austin Sportbike Riders Maintenance Day At South Austin Motorcycle Co-Op

On Feb. 23, 2014 I went down to the South Austin Motorcycle Co-Op for the Austin Sportbike Riders Maintenance day with the intent of replacing my front forks.

On the stand ready to go

A couple of the guys from the Austinmotorcycles subreddit offered to help me with the project, which was great!  We got the bike onto the stand and unloaded the front wheel, using the center stand and a tie-down strap to hold the rear wheel down.

Front wheel removed

We had no difficulty removing the front wheel.  Things were progressing nicely.

Right up until we went to remove the brake caliper from the right fork.  It appears the original owner of my Ninja 500R destroyed the caliper bolts.  They were almost completely round with no surface for the wrenches to bite.

So we went to reassemble the front end, since trying to ride home with mangled caliper bolts didn’t sound like a good idea.  The problem was that my bike is also in desperate need of new brake pads, so when we went to re-install the wheel, one of the pads kept falling out.

We did eventually get it, with the help of a couple extra hands, and I rode the bike home with the “Round Head” security bolts still installed.

Monday I picked up a pair of M10x1.25, 40mm, class 10.9 bolts with 17mm heads from American Bolt here in Austin, TX, and managed to replace the top caliper bolt after a few arguments with my tools.

One Down, One to go.

I have new brake pads, and stainless steel braided brake lines on order.  I picked up a bottle of brake cleaner & DOT-4 brake fluid.  Once the rest of the parts are in I’m going to re-attempt the fork upgrade, and will work in a brake system upgrade while I’m at it.

With Forks Like These, Who Needs Spoons?

My Kawasaki Ninja 500R has damping rod forks from the factory.  These are notorious for being both harsh and prone to bottoming out far too easily.  As it turns out, this is one area where the Ninja 500R really benefits from aftermarket parts.  A company called Racetech makes a product called “Gold Valve Cartridge Emulators,” which makes damping rod forks perform like well-tuned cartridge forks.  These emulators, coupled with springs appropriate for the rider’s weight, won’t quite make the Ninja 500R handle like a supersport that costs four times as much, but it’s close.

Racetech Gold Valve Cartridge Emulators

So I had been considering picking up the parts to rebuild and upgrade my forks when I stumbled across a deal that was far too good to pass up: a full set of forks, rebuilt less than 500 miles ago, with springs appropriate for my weight and the Racetech cartridge emulators already installed, for less money than the emulators alone would have cost.

Racetech Pimped Ninja 500R Forks

Today those forks arrived.  I want to touch up the paint on the lower fork arms, and then Saturday I will be meeting up with some guys I went riding through Texas Hill Country with at South Austin Motorcycle Co-Op to install my new forks.

Can’t wait to report back how they work!

Improved Website Performance

I finally took some time to do a little tuning on this website and server.  While recovering from the loss of my previous server the focus was on getting everything up and running as quickly as possible, and not so much “in the best way possible.”

Credit: Sean Rozekrans

The first step was to replace all the images I was displaying.  In the name of expediency I had been hosting all of the images on object storage in the cloud, which introduced two problems: I had no control over the HTTP headers the cloud service sent, and, being lazy, I had never resized the images, so they were all huge.  To fix this I installed a gallery application that automatically generates resized images.  I used Piwigo and have now migrated all of the images to my own hosting.

The second step was to install APC and tune a couple of settings in the web server itself.  These changes only yield minor improvements, but they pave the way for something bigger down the road.  I enabled gzip compression, configured Expires headers for content that doesn’t change, and tuned some of the other settings.
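
For reference, the gzip and Expires pieces look roughly like this in Apache config (a sketch, not my exact configuration; mod_deflate and mod_expires must be enabled):

```apache
# Compress text responses on the way out.
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>

# Let browsers cache static content that rarely changes.
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/jpeg "access plus 1 month"
    ExpiresByType image/png  "access plus 1 month"
    ExpiresByType text/css   "access plus 1 week"
</IfModule>
```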

Finally, I installed a caching plugin into the blog software itself.  This allows the blog to generate and link to static files, along with several other performance enhancements.

There’s still a long way to go before I’ll have my sites flying, but the groundwork has been laid.  Now I just need to keep improving.

Cheers!

Web Statistics Gathering

Now that we’ve mostly recovered from the Great Website Massacre of 2013, it’s time to start improving things.  The first thing I’ve done is set up a new web statistics gathering utility.  Previously I’ve used Google Analytics and tools like AWStats.  However, I always disliked the idea of sending all of that information out to Google.

I started looking for alternatives and found a utility called Piwik.  Piwik appears to cover most of what Google Analytics provides, but allows me to host it myself.  It uses a small amount of JavaScript attached to the footer of your pages to send metrics to the stats server.  So far I’ve added three of the WordPress blogs I’m hosting to the Piwik environment.  Time will tell how well this utility works.

Piwik Dashboard

I still intend to install and configure AWStats to review the actual access logs as well.  I will likely use my Jenkins installation to trigger building the stats pages a few times a day.  Typically this is done via cron, but I think Jenkins is a better tool in this case, as it will provide some useful statistics surrounding the builds.
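
The cron approach Jenkins would replace is a single entry like the one below (the script path and config name vary by distro; `example.com` is a placeholder):

```
# Rebuild AWStats data every six hours
0 */6 * * *  /usr/lib/cgi-bin/awstats.pl -config=example.com -update
```

A Jenkins job wrapping that same command gets you build history, duration trends, and failure notifications for free.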

More details to come after I’ve gathered some metrics.

Cheers!

Things To Consider

I currently have a GitLab installation I’m using for source code change management.  This has been working great, as it keeps my laptop(s) and desktop(s) in sync: whatever project I’m working on is available to me, because I always commit and check out my changes at whichever computer I happen to be using.

I have also been using Salt as a configuration management tool for much of my personal environment.  This has been great, as setting up a new workstation has become a fire-and-forget operation rather than requiring a lot of time and attention.

In a previous post I mentioned that I’m now using Jenkins both to back up my web server and sites and to test that those backups are complete.

So moving forward I think I’m going to look at setting up Gerrit.  From the Gerrit website: “Gerrit is a web based code review system, facilitating online code reviews for projects using the Git version control system.”  I’m interested in setting up Gerrit specifically for Zuul, which acts as a middleman between Gerrit and Jenkins.  My idea is that I should be able to set up triggers so that when I commit changes to my Salt state files, Gerrit and Zuul will run a series of automated tests against those changes in Jenkins.  Provided all of the tests pass, Jenkins can then instruct the Salt master server to run a highstate against all of the nodes it manages.
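
As a sketch of how the pieces might wire together, a minimal Zuul layout (Zuul v2 `layout.yaml` format; the project and job names here are hypothetical) could look like:

```yaml
# Sketch only: "salt-states" and "salt-states-validate" are placeholder
# names for my state repo and its Jenkins test job.
pipelines:
  - name: check
    manager: IndependentPipelineManager
    trigger:
      gerrit:
        - event: patchset-created
    success:
      gerrit:
        verified: 1
    failure:
      gerrit:
        verified: -1

projects:
  - name: salt-states
    check:
      - salt-states-validate
```

The Jenkins job could validate with something like `salt-call --local state.show_highstate`, and a post-merge job would have the master run `salt '*' state.highstate`.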

If you have any experience with any of the above technologies, or suggestions on ways to further improve the setup don’t hesitate to drop me a comment below.

Cheers!

Improved Backup System

Due to “The Great Website Massacre of 2013,” where all the sites I hosted that required MySQL databases were lost because the SQL dump files were zero bytes, I took the time to re-engineer my backup solution.

I still have a few Bash scripts that are used to generate my backups.  These scripts gather all the files I need from the server, tar them up, and upload them to HP Cloud object storage.  The backup process runs every night at about 1 AM as a Jenkins job.  Several of the parameters used by the scripts are passed in by Jenkins itself, which makes adding sites and folders to the backup easy.
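
The core of such a script is small.  This is a sketch, not my actual script: `SITE_NAME`, `SITE_PATH`, and the “backups” container name are placeholders Jenkins would supply, and the database dump and upload steps are shown commented out.

```shell
#!/bin/sh
# Sketch of a nightly backup job. SITE_NAME/SITE_PATH stand in for
# Jenkins build parameters; the container name "backups" is hypothetical.
set -eu

SITE_NAME="${SITE_NAME:-example-site}"
SITE_PATH="${SITE_PATH:-/tmp/${SITE_NAME}}"
STAMP="$(date +%Y%m%d)"
ARCHIVE="/tmp/${SITE_NAME}-${STAMP}.tar.gz"

mkdir -p "$SITE_PATH"   # demo only: make sure the site directory exists

# Dump the database alongside the web files so one archive restores both.
# mysqldump --single-transaction "$SITE_NAME" > "${SITE_PATH}/db.sql"

# Bundle the whole site directory into one dated archive.
tar -czf "$ARCHIVE" -C "$(dirname "$SITE_PATH")" "$(basename "$SITE_PATH")"

# HP Cloud object storage speaks OpenStack Swift, so the standard
# client can push the archive up:
# swift upload backups "$ARCHIVE"
echo "created $ARCHIVE"
```

Parameterizing on `SITE_NAME`/`SITE_PATH` is what lets Jenkins reuse one script for every site.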

As I found out during the disaster, backups are only good if you can actually restore from them.  And like most systems administrators, I failed to maintain my personal environment to the same standard as my environment at work, which is to say my backups had gone untested for over a year.  Restore testing is something you typically do when you first set up a system, and I admit I was guilty of assuming that backups which used to work would continue to work.

Jenkins in action

Now I have a trio of weekly Jenkins jobs that perform the following steps:

Job 1: Weekly Restore Test

  • Determine the latest backup in object storage
  • Download and extract that backup
  • Set up the MySQL databases, users, and permissions
  • Insert the data from the SQL dump files into the MySQL server
  • Move the web files from the backup into the appropriate Apache folders

Job 2: Weekly Restore Function Test (only executes if Job 1 completes successfully)

  • Tests each site using a “restore.$Domain.com” URL
  • Looks for specific keywords that confirm both database connectivity and web content
  • Reports success or failure

Job 3: Weekly Restoration Cleanup (only executes if Job 2 completes successfully)

  • Deletes all files extracted during the restoration
  • Drops all of the databases from MySQL
  • Deletes all of the users and permissions from MySQL
  • Reports success or failure
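
The heart of Job 2 is just a fetch-and-grep.  In this sketch the URL and keyword are placeholders, and a stand-in page replaces the real `curl` fetch so the logic is visible on its own:

```shell
#!/bin/sh
# Sketch of the Job 2 keyword check. URL and KEYWORD are placeholders;
# the real job would curl the restore.$Domain.com vhost.
set -u

URL="${URL:-http://restore.example.com/}"
KEYWORD="${KEYWORD:-Powered by WordPress}"

# PAGE="$(curl -fsS "$URL")"   # real job: fetch the restored site
PAGE="<html><body>Powered by WordPress</body></html>"   # stand-in page

# A keyword rendered from the database proves both MySQL and Apache
# came back from the backup.
if printf '%s' "$PAGE" | grep -q "$KEYWORD"; then
    echo "restore OK: found keyword for $URL"
    RESULT=0
else
    echo "restore FAILED: keyword missing for $URL" >&2
    RESULT=1
fi
```

A nonzero `RESULT` would fail the Jenkins build, which in turn stops Job 3 from cleaning up, leaving the evidence in place for debugging.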

With this setup I now know each week that at least one of my backup files can actually restore the sites to proper function.  I intend to improve the setup further by using the Apache jclouds plugin for Jenkins to provision the build slave for the restoration test on demand, rather than leaving it idle for most of the week.  I would also like to improve the backup and restoration scripts so that less manual intervention is required to add new sites to the system.

All in all, though, this system has dramatically improved my confidence that if I lose my server again I will have useful backups.  If you have any questions or suggestions about what I am doing, or how I might do it even better, feel free to hit the comment button and drop me a line.

Cheers!

Cheap Amazon Motorcycle Gloves

Prior to taking the MSF Basic Rider safety class I purchased an inexpensive pair of motorcycle gloves from Amazon. (This Link)

Carbon Fiber Knuckles. Reflective Piping

Leather Palms

Initially I was impressed with the gloves: real leather palm and back, real carbon fiber knuckle sliders, double-thickness leather on the outside of the glove along the pinky and palm, and perforated fingers that breathe well.

However, the adage “you get what you pay for” is absolutely true in this case.  The leather in the palm of the glove is so thin that even brand new it inspired zero confidence in this glove’s ability to protect me in a crash.  Things got worse from there.  Originally it was a theoretical “these might not work when it matters”; today I found out they simply won’t.  The following picture was taken after pulling the glove on this morning.  I used only my left thumb and pointer finger, and the tear started in the middle of the panel, not at one of the seams.  “Cheap leather” doesn’t do justice to just how bad these gloves actually are.

The Ugly

I would avoid these gloves like the plague.  At $10 they might make a nice set to hand to someone you don’t like.  Anyone actually requiring protection would be better off in a pair of Mechanix work gloves from Home Depot.