Friday, March 25, 2016

Easy Rake-based Deployment for Git-hosted Rails Apps

I searched a lot of places for an easy way to automate my deployments for OpenStrokes, a website I've been working on. Some things were just too complex (Capistrano) and some were way too simple (StackOverflow answers that didn't do everything, or didn't check for errors).

So, as most people do, I wrote my own. Hopefully this short rake task can help you as well. This assumes that your application server has your app checked out as a clone of some git repo you push changes to, and that you are running under Passenger. When I want to deploy, I log in to my production server, cd to my app repo, and then run:

rake myapp:deploy

For strictly view-only updates, it completes in 3 seconds or less. In addition to checking for errors, it does several things:

  • Checks to make sure the app's git checkout isn't dirty from any local edits.
  • Fetches the remote branch and checks if there are any new commits, exits if not.
  • Tags the current production code base before pulling the changes.
  • Does a git pull with fast-forward only (to avoid unexpected merging).
  • Checks if there are any new gems to install via bundle (checks for changes in Gemfile and Gemfile.lock).
  • Checks if there are any database migrations that need to be done (checks for changes to db/schema.rb and db/migrate/*).
  • Checks for possible changes to assets and precompiles if needed (checks Gemfile.lock and app/assets/*).
  • Restarts Passenger to pick up the changes.
  • Does a HEAD request on / to make sure it gets an expected 200 showing the server is running without errors.

The script can also take a few arguments:

  • :branch Git branch, defaults to master
  • :remote Git remote, defaults to origin
  • :server_url URL for HEAD request to check server after completion
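For example, using hypothetical branch/remote/URL values, either of these invocations would work (explicit arguments take precedence over the environment variables):

```shell
# Quote the task name so the shell doesn't try to glob the brackets.
rake 'myapp:deploy[develop,upstream,https://staging.example.com/]'

# Or set the same options via environment variables.
DEPLOY_BRANCH=develop DEPLOY_REMOTE=upstream rake myapp:deploy
```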

Note: if the task encounters an error partway through, you have to complete the deploy manually. Do not simply rerun the task.
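For instance, if the bundle step were the one that failed, finishing by hand just means running the remaining steps from the task yourself (adjust to wherever it stopped):

```shell
bundle install
rake db:migrate
rake assets:precompile && rake assets:clean
touch tmp/restart.txt    # restart Passenger
```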

And finally, here is the task itself. You can save it as lib/tasks/myapp.rake (Rake only auto-loads files ending in .rake from lib/tasks).

# We can't use Rake::Task because it can fail when things are mid
# upgrade

require "date"
require "fileutils"
require "net/http"

def do_at_exit(start_time)
  puts "Time: #{(Time.now - start_time).round(3)} secs"
end

def start_timer
  start_time = Time.now
  at_exit { do_at_exit(start_time) }
end

namespace :myapp do
  desc 'Deployment automation'
  task :deploy, [:branch, :remote, :server_url] do |t, args|
    start_timer

    # Arg supersedes env, which supersedes default
    branch = args[:branch] || ENV['DEPLOY_BRANCH'] || 'master'
    remote = args[:remote] || ENV['DEPLOY_REMOTE'] || 'origin'
    server_url = args[:server_url] || ENV['DEPLOY_SERVER_URL'] || 'http://localhost/'

    puts "II: Starting deployment..."

    # Check for dirty repo
    unless system("git diff --quiet")
      puts "WW: Refusing to deploy on a dirty repo, exiting."
      exit 1
    end

    # Update from remote so we can check for what to do
    unless system("git fetch -n #{remote}")
      puts "EE: Failed to fetch from #{remote}"
      exit 1
    end

    # See if there's anything new at all
    if system("git diff --quiet HEAD..#{remote}/#{branch} --")
      puts "II: Nothing new, exiting"
      exit
    end

    # Tag this revision...
    tag = "prev-#{DateTime.now.strftime("%Y%m%dT%H%M%S")}"
    system("git tag -f #{tag}")

    # Pull in the changes
    if ! system("git pull --ff-only #{remote} #{branch}")
      puts "EE: Failed to fast-forward to #{branch}"
      exit 1
    end

    # Base command to check for differences
    cmd = "git diff --quiet #{tag}..HEAD"

    if system("#{cmd} Gemfile Gemfile.lock")
      puts "II: No updates to bundled gems"
    else
      puts "II: Running bundler..."
      Bundler.with_clean_env do
        if ! system("bundle install")
          puts "EE: Error running bundle install"
          exit 1
        end
      end
    end

    if system("#{cmd} db/schema.rb db/migrate/")
      puts "II: No db changes"
    else
      puts "II: Running db migrations..."
      # We run this as a sub process to avoid errors
      if ! system("rake db:migrate")
        puts "EE: Error running db migrations"
        exit 1
      end
    end

    if system("#{cmd} Gemfile.lock app/assets/")
      puts "II: No changes to assets"
    else
      puts "II: Running asset updates..."
      if ! system("rake assets:precompile")
        puts "EE: Error precompiling assets"
        exit 1
      end
      system("rake assets:clean")
    end

    puts "II: Restarting Passenger..."
    FileUtils.touch("tmp/restart.txt")

    puts "II: Checking HTTP response code..."

    uri = URI.parse(server_url)
    res = nil

    Net::HTTP.start(uri.host, uri.port, :use_ssl => uri.scheme == 'https') do |http|
      req = Net::HTTP::Head.new(uri, {'User-Agent' => 'deploy/net-check'})
      res = http.request req
    end

    if res.code != "200"
      puts "EE: Server returned #{res.code}!!!"
      exit 1
    else
      puts "II: Everything appears to be ok"
    end
  end
end

Here's an example of the command output:

$ rake myapp:deploy
II: Starting deployment...
remote: Counting objects: 15, done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 8 (delta 6), reused 0 (delta 0)
Unpacking objects: 100% (8/8), done.
From /home/user/myapp
   efee45c..e5468c1  master     -> origin/master
From /home/user/myapp
 * branch            master     -> FETCH_HEAD
Updating efee45c..e5468c1
Fast-forward
 app/views/users/_display.html.erb     | 7 +++++--
 public/svg/badges/caretakers-club.svg | 1 -
 2 files changed, 5 insertions(+), 3 deletions(-)
 delete mode 100644 public/svg/badges/caretakers-club.svg
II: No updates to bundled gems
II: No db changes
II: No changes to assets
II: Restarting Passenger...
II: Checking HTTP response code...
II: Everything appears to be ok
Time: 3.031 secs

Friday, August 21, 2015

Encryption is not for the bad guys

I've been reading a lot of articles about presidential candidates and their stances on encryption for online privacy. It befuddles me how ridiculous their arguments are. You hear things like "only evildoers use encryption" or "if you have nothing to hide, you shouldn't be against this."

These are the same arguments that gave us our Miranda rights, protection against illegal search and seizure, etc. Just because a citizen exercises their right to privacy does not mean they are hiding something, nor can it be taken as probable cause to remove their rights.

Saying that law enforcement needs encryption removed so they can find bad guys is like saying houses need open windows so police can see inside. It's like saying we should do away with home security systems because only people with illegal items use them.

Any candidate that wants to keep me from protecting my privacy and security on the grounds of protecting me from some would-be terrorist is on a power trip and does not stand for the same constitution that I do.

Monday, July 27, 2015

How to Alienate Your Customers And Drive Your Online Service Into the Ground

So you've spent months, maybe years, building an online service. You have lots of customers and people are starting to rely on your service for personal and/or business purposes. It has a great reputation and it's basically the only one of its kind. Now what?

If you're like one particular service that I started using a couple of months ago, you ruin all of your good will and hard work inside of a week. I'm not about to call out who this service is, but I will gladly tell you exactly how they went about this epic failure of standard practices. So let's get started.

Step One: Plan a maintenance window


You've got to keep these customers happy with new features, not to mention, you've got to make room for the unexpected growth you've seen.

So let's do it all at once:

  • Lots of new customer facing features? Check
  • Lots of backend features to handle unexpected growth? Check
  • Move to new hardware? Check
  • Database updates across billion-row tables? Check
  • Test plan using mirrored copy of current data set? Ain't nobody got time for dat.
  • Backout plan in case something goes wrong? Pshaw, we won't need one!
  • Timetable for how long this operation will take? Eh, can't take too long, right?

Here's the thing, folks: never, and I mean never, run a maintenance window that involves multiple moving parts. If you can't perform these actions individually, then you've messed up somewhere in building this thing out from the start. If your upgrades require moving to new hardware, then do that separately from the rest.

Move to new hardware with the existing application and data. Don't mix features that aren't interdependent and make sure to test any database migrations on a full backup data set (or at least a good portion of it) before doing it on the live data set.

Next, always have a way to go back. If you're upgrading a database, have some way to revert back to the original in an instant. Whether it's a backup, a snapshot, or whatever. Don't depend on being able to back out the change you've made (i.e. running more SQL commands on the live data). You want a pristine place to draw from.
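As a concrete sketch (the database name here is hypothetical), a MySQL-backed app can get an instant-revert point with a single dump taken right before the migration:

```shell
# Take a consistent dump before touching the live schema...
mysqldump --single-transaction myapp_production > pre-migration.sql

# ...so reverting is one command instead of hand-written backout SQL.
mysql myapp_production < pre-migration.sql
```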

But now that you've screwed that up, let's continue on.

Step Two: Don't ever take the blame


Now that you've pushed this "New and Improved" version of your service in the most obscenely unprofessional way, something will definitely go wrong. It's not an if, it's a when. First off, don't bother checking to make sure everything went well. Just go to bed and pat yourself on the back.

When you wake up in the morning and see things aren't working as you expect, don't bother replying to the customers' cries for help just yet. Let them know who's boss and who runs this joint. You, that's right. After a while, give a little update. Remember what your English teacher taught you: less is more. Something like this will do just fine:

We're working hard to make our service better. Please bear with us while we continue to do so.

Some companies forget that their customers are not stupid. We know when something's wrong, and in cases like this, we know you screwed up somehow. Don't make it worse by glossing over it. Give us the straight poop, so to speak. The key word here is transparency. Most companies have a knee-jerk reaction of trying to make it look like "oh, we just had some bad luck, we didn't do anything wrong."

Believe me, even if you aren't transparent, the fact that you aren't is pretty transparent to us. Just like when my kids were 3 years old, I could tell when they were lying ("But dad, I swear, it was the dog that ate all the cookies mommy just baked"...as cookie crumbs fell from his face).

Now that you're busy ignoring the flames on the customer front, let's take care of this problem.

Step Three: Take your time


Who's in a hurry? Not this guy! Amiright? It's already broken, you have no capability to revert all of this crap, and you have to get these new features to the masses else what was it all for? Just keep pushing forward like an angry crowd at a music festival.

What you need to remember is that all of this work is for naught if your customers all leave. You have to be able to suck it up and back all of this out to get back to a stable base and come back at it later. The company behind the particular failure I saw this past week may well have been able to revert the mess they started. I don't know (they didn't talk much). I can only assume that a) they couldn't revert (bad planning) or b) they were so consumed with making this work, they decided to push forward.

It's hard to say which scenario is worse, but if you do find yourself in a position where an upgrade has broken things and you are able to revert back to a known good state, don't let your ego get the best of you. Just REVERT, go back to the drawing board, perform some postmortem, and start up again on a fresh day.

Lastly, above all else, talk to your customers regularly through the process. Waiting 4-15 hours between updates is a super bad idea. Making these status updates vague and absolving yourself of any responsibility is mistake number two.

Just remember, if you own a particularly new market, the only time a competitor will even think of jumping in is when you lose the trust of your customers. They will capitalize on your mistakes.

Thursday, March 7, 2013

Power Up

As a POWER architecture hardware vendor, we've definitely run into quite a few wish-list items for software we want to have on our platform. Whether it's for customers or just to have a feature complete set of packages in everyday distributions, we want to see things build everywhere, and run just as well as the x86 counterparts.

Starting soon, we are kicking off a PowerUp (my cool label) initiative in order to direct POWER developers toward software that needs a little love on our POWER platforms. Software targets range from the completely unsupported (e.g. Google's V8 JavaScript engine, the D language's Phobos) to optimizing specifically for POWER (e.g. OpenJDK).

To collect these initiatives together, we will be starting a new PowerUp portal. For now, we have begun a GitHub Team where we have forked relevant repositories. Forums for discussion and participation will also follow. Feel free to clone and hack away. Email me if you have any questions (or wait until the forums and portal open).

NOTE: PowerUp is just my initial name. That may or may not change.

I'll update this blog post when more information is available.

Wednesday, March 6, 2013

Ubuntu Rolling Releases Vs. Hardware Companies

So I have to speak out on this whole issue. I work for Servergy, and for almost two years I've been working on Ubuntu's PowerPC port in order for our new hardware platform, the CTS-1000, to have an out-of-the-box solution for our customers. We've been hedging on Ubuntu, since it was able to provide us a known quantity for release dates and an open community that we could participate in (especially being able to take advantage of my core-developer status).

Now, after so much work, so much planning, we are worried about 13.04 never being properly released. This would leave us with no stable Linux distribution for our hardware, basically yanking the rug out from under all of our work. Having a stable release every two years also enlarges the support gap for our followup platforms. Now I realize most hardware vendors are x86-based, and their issues are likely limited to supporting peripherals, so this affects us more than most. The issue we face is supporting entirely new hardware platforms and SoCs with a completely new kernel (likely requiring lots of supporting patches). This is the type of thing that, historically, isn't allowed to be added to an LTS release.

So I have to wonder, if Ubuntu does adopt this rolling release schedule, how viable is it for us? I would still be happy if Ubuntu had one release per year, with every other release becoming an LTS. However, the two year window is just entirely too large to depend on for quick moving hardware bring up and release.

Wednesday, November 7, 2012

Reflecting on 14 years of free software

14 years ago last month, I created my first PGP key to sign up to be a Debian developer. I recall what brought me to that place. I had been trying to improve my skill-set for my resume and wanted to learn to program.

Considering Linux was free compared to development software on Windows (and it ran on my Pentium 90MHz CPU when BSD didn't), it was an easy choice. However, I had no idea what I was getting into.

At the time, I was on a steep learning curve. This command line thing was nothing like the Apple //e BASIC prompt I was used to from my youth, and not even close to Mac/Windows. I was literally reinstalling my Linux OS 2-5 times a week because I would dig around into things that I had no business checking into. I tried several distributions of the time including RedHat, Slackware and Debian. I settled on Debian because it had the largest software repository, and I wanted no limitations to my journey into the realm of a software developer.

Back then, configuring my network meant first configuring the serial port and modem and then PPP and related software, in addition to chat scripts (used to provide username/password). Luckily I worked as a web designer for a local ISP, so the *nix gurus there gave me plenty of help.

As happens with free software, it isn't too long before you start finding "bugs." These annoying little things that stand in the way of you and your Linux Adulthood. At first, you just kick it around, try to avoid irritating the little thing, but eventually, you find yourself on IRC or a bug tracking system trying to find help.

I immersed myself into providing feedback to hackers and coders to test what could be wrong with my system. Surely, I thought, this was not just a problem I was having but sat amazed at how intuitive these programmers were and how steadfast in wanting to help me fix the issue. Their tireless efforts inspired me to return as much as I could.

I decided to join this group of lively lads known as Debian Developers, and submitted my PGP key and new-maintainer request. I got a call from Ian Jackson, while at work, and verified information by faxing a few identification-proving materials to him in London. This was an exhilarating experience. I had never talked to a Brit before, much less one that was in Britain (yes, I was a little sheltered and naive). Now I just needed a way to give back to this group of about 800 developers and its thousands of users.

As luck would have it, I got quite familiar with the inner workings of Debian's package system (DPKG/DEB) and how it worked on UltraSPARC computers. Working at NASA, I had access to all sorts of SPARC hardware, and, at the time, Debian's SPARC port was a fledgling of hope, without any guidance. I began automatic builds of Debian's vast software repository on my UltraSPARC II desktop system at work. I'd come in in the morning, verify the builds, PGP sign them, and upload the lot to the repository. I was king of SPARC!

Yes, this did get to my head. I was young, eager and worst of all, blinded by the slightest recognition. I thrived on acknowledgement and was empowered by the adulation of my peers. I dove into Debian work like Michael Phelps at a community pool; head first and with no purpose. I spent all of my spare time working on SPARC build failures, taking over things like glibc, PAM and OpenLDAP maintenance. I was hooked and my ego took me to the next logical step, running for Debian Project Leader. However, my arrogant and harsh online persona left me with few supporters, the first time around...and the second time around too.

Two years later, wiser and tempered by humility, I ran for DPL again. This time with a clear vision of what I wanted to accomplish and a vague image of my future legacy. You can read the whole thing here. As I read it, after all these years, I'm reminded of how little I knew of the real world, but I'm aghast at my own confidence and ambitious attitude. Time has a way of dulling that drive. During this DPL election, I had a clear win, and so began my 15 minutes of fame.

My newfound leadership was longing for things to "fix." I started with Debian's non-US archive. We wanted encryption to be mainline, but US export restrictions were a hassle. We took on a pro-bono lawyer to help us with the specifics and finally figured out how to abide by such restrictions without opening ourselves up to legal action. The cool part was that we had to email and snail-mail notifications for each and every upload of a package that fell under these restrictions. Each notification looked something like this.

If you know anything about Debian, you know packages get uploaded by the dozens a day, if not more. We were basically flooding the bureau (with the remote hope that they would realize how ridiculous this all was). The original mailing was 2 reams of paper, double sided in a single package. This occurred about once a month. It was sheer insanity, but it got us one step closer to what we wanted...world domination!

My next step was to build up our infrastructure. Debian is heavily reliant on donations -- equipment and money. We had a good chunk of money, but we never spent it. We had decent donated hardware and bandwidth, but the main donor at the time would whine and cry and make threats that left us wondering if he would yank it all away some day. Either way, we got new hardware with large disk space at my local ISP. Chucked down $5000 for a Sun RAID array with about 320Gigs of disk. Pretty damn expensive.

How I loved this time of my life. I was well known and in the headlines of Slashdot and Linux Gazette on a regular basis. I remember being able to pick up a book or magazine or two at Barnes and Noble that had my name and/or picture in it. I would be lying if I said I didn't miss that.

But deep down, I'm a developer. It didn't take long before I had that yearning to "do some real work" and by that I mean staying up all night in front of a shell prompt trying to figure out why that oops disappears when I add in some debug printk's or reverse engineer the endianness of an OHCI-1394 packet on sparc64. Anyways, on to better things I went and many more adventures awaited me.

As I moved through my career, I became more and more focused on Linux kernel work. From embedded to server, from network drivers to mpeg drivers, from MMUs to CPUs. I've never regretted a single step of the journey. As I sit here now, working from home at my newest job, I reflect, not with a sense of accomplishment, but with a sense of humility, knowing that there were many greater, smarter and harder working folks that traversed those same years making it all happen and enabling the opportunities that I've had.

So for anyone who stumbles upon this lonely blog entry, wondering what this whole free software thing is; take a seat, pour a cup of tea, and relax for a few minutes. It's probably the last time you will have that brief illusion of a normal life, but you won't miss it one bit.

Cheers

Sunday, November 4, 2012

Follow-up: Power Architecture Related Tracks Proposed for UDS-r

A few weeks ago I posted about some tracks at UDS concerning PowerPC. Here are links to the session results.


I need to clean up the items. The main takeaway is that the PowerPC kernels will be maintained separately from the mainline kernels, which means we will be getting support for some new architectures. It will probably be a couple of weeks before I get this set up, but expect it nonetheless.

The other side is the boot loader. This is a tricky and complex implementation point. Details are in the session notes, but this may have implications based on relevant work being done on ARM as well.

That's the extent of it at this point. Looking forward to great things with Raring.

Saturday, November 3, 2012

Servergy Announces New PowerPC Developer Board

DISCLAIMER: I work for Servergy, Inc.

This week, at UDS, Servergy announced that it will be designing and selling a PowerPC-based developer board like no other currently on the market. Typical Power dev kits use outdated, feature-poor CPUs. As a follow-up, they made this formal announcement.

Servergy plans many needed features, including:
  • Multi-core processor
  • Hardware virtualization (via Linux kernel KVM)
  • Gigabit ethernet
  • HDMI video output
  • Network offloading engines
  • SATA controller
  • Audio output
  • USB Ports
  • SD Card slot

Servergy has dubbed this board P-Cubed.

They are planning a wide range of software support including firmware/boot-loader source code and pre-built images for creating bootable SD cards. Support for major Linux distributions will include Debian, Ubuntu, Fedora and openSUSE.

The platform is geared toward making modern Power systems available to developers for a fraction of the cost of full fledged server systems (Servergy's primary market). While the board is aimed at increasing the ecosystem and community around Linux-on-Power, the pricing is sure to attract hobbyists and students as well.

While Servergy did not give an exact price, they are aiming at a sub-$200 system. Keep an eye on Servergy's website for news and the pre-order form.

Thursday, October 25, 2012

How to Setup an Alternative Linux Distro Chroot Under Ubuntu - Part 2

NOTE: This is part 2 of a 2 part series.

In my last post, we set up a chroot environment for our RPM-based distribution in /srv/chroots/rhel6-ppc. Now we'll configure schroot so we can access this environment as if it were the booted system, using a union-type scheme (an schroot configuration option) so that we always have a pristine environment for builds and such.

So let's look at /etc/schroot/schroot.conf and add the following entry:

[rhel6]
type=directory
union-type=overlayfs
description=RedHat Enterprise Linux 6
groups=adm,root
directory=/srv/chroots/rhel6-ppc
profile=default

This is the basic setup. The key here is that it uses overlayfs for union mounting the original. This means that after you exit a newly created schroot for this entry, it will be purged and the original chroot will not be changed.

Also, the profile=default means it will use configurations from /etc/schroot/default/. Make sure to add yourself to the adm group or run schroot as root.

In order to try it out, use the following command:

schroot -c rhel6

From here, you can do whatever it is you like to do in your new environment!
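For example, you can watch the overlay get purged between runs, and use a named session when you need state to persist across several commands (flags per the schroot man page; exact paths and packages here are just illustrative):

```shell
# Each one-shot invocation gets a fresh overlay; the file does not survive.
schroot -c rhel6 -- touch /var/tmp/scratch
schroot -c rhel6 -- ls /var/tmp/scratch    # fails: overlay was purged on exit

# Begin a named session to keep changes alive across commands, then end it.
SESSION=$(schroot -b -c rhel6)
schroot -r -c "$SESSION" -- yum -y install gcc
schroot -r -c "$SESSION" -- gcc --version
schroot -e -c "$SESSION"
```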

Wednesday, October 24, 2012

How to Setup an Alternative Linux Distro Chroot Under Ubuntu - Part 1

NOTE: This is part 1 of a 2 part series.

Developing a new server product requires me to test all sorts of things, including multiple distributions. As an Ubuntu developer, my main platform is, of course, Ubuntu.

It's a PITA to run multiple distributions from one system (and not very productive to do it from multiple machines), so I decided to set up chroots for each one. My production system has three environments outside of the main Ubuntu install: RHEL5, RHEL6 and SLES11.

Fortunately, Ubuntu has a nice tool called schroot. I like it because it's based on the original tool called dchroot, which I wrote back in 1999 (wow). It was mainly to allow people to use the UltraSPARC developer systems with more than one release of Debian without me having to set up multiple machines.

Fast forward 13 years, and now we have schroot. This tool has come a long way, and even includes support for filesystem snapshots so you can always start with a pristine environment. This is useful to me because I want to make sure that when I build a package, only the required dependencies are installed, and I don't want to worry about screwing up the original environment. Not to mention, I can start more than one session and they won't bother each other.

In addition to schroot, we will need the rinse package. For anyone familiar with debootstrap, it's basically the same thing, but for RPM based systems. It will download and bootstrap all the required RPMs needed for a particular distro, in a manner suitable for a chroot environment.

sudo apt-get install schroot rinse

Now, if you look in /etc/rinse/rinse.conf, you will see several RPM distributions already configured. If you want to set one up for RHEL, you will need to either use CentOS instead, or duplicate the matching CentOS entry and rename it, being sure to change the mirror URL to match your location. For my RHEL distros, I have a local RPM repository, so I use this entry:

[rhel-6]
mirror       = file:///srv/rhel6-ppc/RPMS/media/Packages

You will also need to copy the matching .packages file in /etc/rinse/ naming it the same as your entry.
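Assuming you duplicated the centos-6 entry, that copy looks something like this (exact file names vary between rinse versions):

```shell
sudo cp /etc/rinse/centos-6.packages /etc/rinse/rhel-6.packages
```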

Now decide where you want your chroot to be located. I've decided to put mine in /srv/chroots/rhel6-ppc. Create this directory, and then you can run the rinse command as follows:

sudo rinse --arch ppc --directory /srv/chroots/rhel6-ppc --distribution rhel-6

NOTE: I have my rinse script hacked a little to allow ppc as an arch. You will probably use i386 or amd64. Also, the --distribution argument is the same as the entry name.

In my next post, we'll move on to configuring schroot with a union type backing.

Monday, October 22, 2012

Where to Obtain PowerPC Dev Kits

I was asked this today on #ubuntu-kernel. It's a good question, and one I hear often. Most people can go with old Mac hardware, but those things are kind of obsolete and largish for many people. Not to mention they don't do much justice to the modern CPUs you see today (multi-core, hardware virtualization, SATA, DDR2/3, etc.). Unfortunately, you aren't going to find many cheap modern Power kits like you would in the ARM world, but here are some quick links that I was able to put together. I would stick with Freescale, but IBM has some dev kits too.

  • Micetek - These are actually really nice
  • Embedded Planet - Very low CPU speeds
  • IBM 750FX - 64-bit but unsure about pricing
  • Emerson - Unsure of pricing, but very broad list of CPUs for IBM and Freescale

If you know of more, please comment and I'll add them to this post.

Friday, October 19, 2012

Can Canonical Make Skunkwork Work?

This post is in regards to Mark's recent announcement that Ubuntu 13.04 will be using a skunkworks approach to some of its more wow-factor features.

I know some people are going to cry out from their basement, screaming "community, community, community!" Let's face it though, one thing Linux is missing is being able to release something and people say "holy shitballs, batman!"

Anyway, I totally applaud Mark and Canonical for this decision. If it all goes well, here's my vote for naming 13.10 the Snazzy Skunk.

Tuesday, October 2, 2012

The GOP/DEM Circlejerk

This is slightly off from my normal blog topics, but it is election season. As I stare off with an uncontrolled dazed look on my face, pondering which candidate to vote for, a deep depression has set in.

I could choose Obama. At least I'll know what I'm getting, even though I disagree with his wealth distribution policies and bailing out too-big-to-fail companies that are sending us in a spiral of debt.

I could choose Romney, since I'm Republican, but I honestly don't like him as a person, and I feel he will serve money hungry people more than a strict capitalist agenda.

I could vote Libertarian, but, let's be honest, that will just take votes away from one of the other candidates. At this point, a vote for anything other than Romney is like voting for Obama.

My depression begins when I consider that the two primary political parties are only interested in preserving themselves. The Democratic and Republican platforms morph every election campaign into whatever they think will get them voted into power, or will separate them from one another, so I can never be sure that the party I choose based on my beliefs really fits who I'm voting for.

Every election it's the same thing: "Your problems are because of <insert opposite party>." Neither party will say "our platform allowed these problems to happen" or "the bills we voted caused a downturn in the economy, we'll revert that and try something else."

Let's consider this. Say you work at a company. You and another co-worker handle a particular product or service together. You work well together, bouncing ideas and patting each other on the back. One day, your manager quits, and since your company promotes from within, you know that you or your co-worker is going to replace him. You'll get more money, access to the executive bathroom, more vacation days, etc.

So during the manager's last couple weeks, you work your ass off. You make sure that the higher ups responsible for the decision see your hard work. You are basically campaigning for this new position. Your co-worker, being prudent, does the same thing.

Now, instead of working closely together, you are both trying to make yourself look more important and more manager-like. During these few weeks, arguments ensue, and instead of trying to build a great product or service, you are positioning, spreading distrust against your co-worker and generally being unproductive, all for the power of that promotion.

In the end, your co-worker wins out. You blame it on him bringing the top-shelf bottle of liquor to the Christmas party and taking the boss out to golf with some no-name NBA player that his cousin is towel-boy for. It's definitely not because he's better.

Now you stew in your hatred toward him as he becomes YOUR boss. At first, you continue to do your job, until one day he tells you that he thinks the product would be better if you changed one aspect. This doesn't sit well with you. "Who does he think he is?" It wasn't your idea, so it must be crap. You toss all logic aside and just hate his suggestion, regardless of the fact that it will save tons of money and is generally better at solving a certain problem than your solution would have been.

You do as you're told, but you don't have it in you to put in 100% effort. You don't necessarily sabotage the product, but the solution he asked you to implement is definitely half-assed and you don't care.

Now the product goes into production, and sure enough, the implementation falls flat, the one that you worked on. However, it was his idea, so you don't care. When the firestorm comes down from above, you blame the problem on your back-stabbing co-worker that took yer job. "He wanted it implemented this way, but I wanted to do it different." Sure enough, he's canned and now you have the manager job.

You got what you wanted. The power courses through your veins like an espresso hopped up on Redbull. You are the man. Now, two people work for you, one of whom was there when you did the poor implementation, and overheard you complain about your manager at the water cooler. He knows what you did, and has a distaste for how you got your new job. He thinks you are incompetent and undeserving: He is just like you used to be.

You get my meaning here? How can we expect Congress or the President to do anything useful if their entire intent is to make their affiliated party look good, not to mention serve their own lust for power? How is the President supposed to get anything done when his counterparts in Congress are trying to set him up for failure so they have bullet points for the next presidential election?

I wish I had some sort of proposal to fix this, but alas, I'm not a political scientist, so all I can do is bring the bad news.

Friday, September 28, 2012

Power Architecture Related Tracks Proposed for UDS-r

On my ramp-up toward UDS-r, I've created some blueprints and pinged some related folks to get them into the proper tracks.

I'm hoping to get a lot of interest and discussion around there, so here they are:


So this covers a wide range of topics. The most in-depth one is the Virtualization blueprint. So far, I've not seen a lot of broad support for non-x86 in OpenStack and related software. While it works (I've set it up), it just doesn't do a lot to make me happy.

The boot loaders blueprint is basically an RFC. The idea of the Power architecture on a non-embedded system not having OpenFirmware is about on par with Dell selling an Intel system without a BIOS. The Power systems do have U-Boot (Das U-Boot), but it's not as robust as it needs to be. I'm thinking something like GRUB 2 compiled against the U-Boot API that U-Boot can load modularly, or perhaps something like the kexec-based loader that the PS3 used.

Finally, the kernel development is a hot discussion that needs to be hammered out with the Canonical Kernel Team so we can all be happy and not step on the primary architectures, while still being able to spread some support for newer Power equipment.

Cheers and see you in Denmark!

EDIT: Updated link for boot loaders blueprint

Wednesday, September 26, 2012

The Rack Revolution

As I sit here in my cozy home on my comfy couch, I am bewildered and amazed at just how far things have come in the last decade.

Let's take a quick inventory of my immediate surroundings:

  • Laptop
  • WiFi
  • Smart Phone
  • HD TV
  • High Speed Internet
  • Server Farm

Hmm...that last one's a bit different from the old days. I used to have a nice collection of loudly humming, room-warming servers in my garage. As a telecommuter, I needed it. My blog was running on it, my email was running on it and my firewall was running on it.

What happened? Well, we all know the answer to that question: things consolidated into the "Cloud." Instead of under-the-table boxes running our local services, we now have providers doing the heavy (literally) lifting for us.

So what do they run on? Practically the same loudly humming room-warmers that we used to keep under our desks. However, in recent years the move has been on to lower the operating costs of these rack farms by turning them into quiet, low-powered, self-cooling, maintainable animals.

While most places have just tried to tone things down, or spread them thin, some have been making the move to efficiency. Enter the reverse revolution of the CPU toward something more applicable to today's computing needs. Instead of powering everything with high-wattage x86 chips, many are dipping their toes into the shallow end of the alternative-processor kiddie pool.

And with that I introduce an amazing NEW and WILD CPU: PowerPC!

Oh, you've heard of it? It's legacy and old hat, you say? I must be thinking of a different PowerPC CPU, then. The company I've been gainfully employed with for the past year and a half seems to be using something quite different from your grandmother's Power chip. Not quite the behemoth of the IBM Power7 iron (in size or noise), but not the weakling of your old PowerMac either.

We're talking multiway SoCs with full floating-point, running at a fraction of the wattage of just about anything else on the market. Add to that full hardware virtualization (via KVM), and you begin to see where in the market this is headed.

We've already been engaging multiple Linux and software vendors to deliver a complete, first-rate experience on this new class of hardware. You'll have multiple choices when it comes to support and administration, whether it's one system or a room full of racks.

So here's my not-so-humble way of introducing you to Servergy, Inc. They've been around for 3 years, but expect to hear a lot more about us in the coming months. If you're going to be at a Linux or Cloud/Server related event in the near future, chances are you will run into one of us. I'll actually be at Ubuntu's UDS-r in Copenhagen at the end of October. I'm hoping to have a live demonstration while I'm there.

Cheers

NOTE: In this article I am speaking solely on my behalf. None of what I've said can be taken as a statement by the company I work for: Servergy, Inc.

Tuesday, July 17, 2012

The Community Conundrum: PowerPC

In my recent work, I've been dealing a lot with PowerPC. As an old Mac user, I've had a soft spot for PowerPC for ages. Like most people, until recently I've considered PowerPC an aging and dying architecture. Even with IBM selling PowerLinux systems, the lack of cheap hardware for developers has left a hole not easily filled, no matter how many old PowerMacs you buy on eBay.

However, there are a lot of PowerPC platforms that do fill this gap left by PowerMac. Some are even 32-bit platforms that can compete in today's markets.

So why have you never heard of them? Why can't you download Fedora or Ubuntu to install on your PowerPC of today? Several reasons:

  • Distributions don't really support it.
  • The "community" behind it is driven at the kernel and low-level, not at the distribution level (see the previous bullet).

This circle of support appears to be the hold up. Convincing even community supported architectures like Ubuntu and Fedora to support these different kernel flavors is met with archaic skepticism, and is usually concluded with "there is no community" to which I usually respond "because there is no support."

Something has to give here. Linux and Open Source isn't where we want the chicken-and-egg scenario to happen. You can't walk up to a Linux distro with a community and say "Here we are, let's do this" in much the same way as you can't go to a community and say "Come over here with us. We don't support you yet, but we'd like you to prove that you're worth it."

So where to begin...

Friday, December 9, 2011

Reviving the Ubuntu PowerPC community effort

This is just a heads-up to some people who may be interested. I am trying to breathe some life back into the Ubuntu PowerPC community. My interest extends from my current job and its focus on the server market. That's not to say that I don't think PowerPC should have a desktop life (though most people would only want it for the legacy ppc Mac systems out there), just that my personal focus won't cover much of that beyond perhaps some CD creation and fixing fails-to-build-from-source problems.

So, come one, come all. I've sent a quick note out to the Ubuntu PowerPC LaunchPad Team. There's also a mailing list now.

I'll be posting more of a road-map soonish.

Thursday, June 30, 2011

Setting up minicom and ckermit for u-boot serial file transfers

It took a while to get this simple bit of legacy working. I'm working over a USB serial console to a dev board and needed to update some parts of flash, but I don't have working Ethernet yet. U-Boot allows for kermit binary file transfers, but the default ckermit settings don't appear to work well. So, to speed others along, here's what I did. Quite simply, add this to (or create) ~/.kermrc:

set carrier-watch off
set handshake none
set flow-control none
robust
set file type bin
set rec pack 1000
set send pack 1000
set window 5

Otherwise, ckermit expects some modem line signals and CONNECT before it starts the transfer. Now you can use minicom's send command.
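For context, the U-Boot side of the transfer looks roughly like this (a sketch only; the RAM and flash addresses here are hypothetical and board-specific, so check your board's memory map before erasing anything):

```
=> loadb 0x100000                        # U-Boot waits for a kermit binary download
                                         # (now press Ctrl-A S in minicom, pick kermit)
=> erase 0xfe000000 +${filesize}         # erase enough flash for the received image
=> cp.b 0x100000 0xfe000000 ${filesize}  # copy the RAM buffer into flash
```

After a successful transfer, loadb sets the filesize environment variable, which is why the erase and copy commands can reuse it.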

Wednesday, June 29, 2011

Bluecherry is hiring an Ubuntu developer!

My old employer, Bluecherry, is looking to hire an Ubuntu developer with the following experience:

  • Extensive knowledge of the Video4Linux2 and ALSA sound API
  • Prior experience in Linux based software design / implementation
  • Prior knowledge of Ubuntu, including building / maintaining Debian packages
  • Prior experience with gstreamer and RTSP
  • Prior experience with MySQL
  • Prior experience with Matroska file containers and video encoding
  • Excellent verbal and written communication skills
  • Strong knowledge of C
  • Previous work with and understanding of working with video / audio formatting / codecs including MPEG4 and H.264
  • Internet and operating system security fundamentals
  • Sharp analytical abilities and proven design skills
  • Strong sense of ownership, urgency, and drive
  • Demonstrated ability to achieve goals in a highly innovative and fast paced environment

Yes, it's a long list, but you are replacing me, so get ready for high expectations and a highly rewarding job. You don't need to know all of this, but you should have a good enough foundation that you can embrace the current codebase and not shy away from a steep learning curve. Visit Bluecherry's job posting for more information.

Wednesday, June 8, 2011

Stripping an Ubuntu system to just the basics...

WARNING: I am not responsible for you trashing your system. Use this guide with care. No attempt was made to ensure your intelligence level (nor mine).

UPDATE: Please read the comments about using apt-mark instead of the heavy-handed script in the main post. It should lessen the chance of hosing your system.
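For reference, the apt-mark route mentioned in the update would look roughly like this (an untested sketch; adjust the keep list to whatever you actually need):

```
# Flag every manually-installed package as an automatic dependency
sudo apt-mark auto $(apt-mark showmanual)
# Re-flag the packages you actually want to keep as manually installed
sudo apt-mark manual ubuntu-standard grub-pc vim build-essential git
# Sweep away everything that is no longer depended on
sudo apt-get --purge autoremove
```

This achieves the same end state as the script below without hand-editing APT's state files.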

While working on a cross-build system inside an Ubuntu 10.10 virtual machine instance, I decided I didn't want all the fluff of the desktop version. However, instead of just going through the entire package list on the machine, I came up with a quick way to have APT automatically handle it for me.

Ubuntu is nice in that it has meta-packages for the different levels of their system. The top-level meta-packages -- ubuntu-minimal, ubuntu-standard and ubuntu-desktop -- let APT know which package group to install (ubuntu-standard is generally used for server installs). APT also has the nice functionality of automatically being able to remove packages which were only installed because they were depended on by some other package. For example, if you install ffmpeg, you get a ton of libraries with it. If you then remove ffmpeg, you can run "apt-get autoremove" to also remove the libraries that are no longer needed because they were only installed to satisfy ffmpeg's dependencies.

So now, how to abuse this functionality. First, we find out how APT tracks these implicit/explicit package states (packages we installed directly vs. packages that were only installed to satisfy dependencies). We find /var/lib/apt/extended_states. The format is similar to /var/lib/dpkg/status and has stanzas in the form of:

Package: foo
Architecture: i386
Auto-Installed: 1

So here's a quick script that will mark all of your currently installed packages as auto-installed:

#!/bin/bash
# Emit an extended_states stanza marking every currently installed
# package as auto-installed.

arch=$(dpkg --print-architecture)
# Only packages in the "install" state; skip deinstall/purge leftovers
dpkg --get-selections | awk '$2 == "install" {print $1}' | \
while read -r pkg; do
        echo "Package: $pkg"
        echo "Architecture: $arch"
        echo "Auto-Installed: 1"
        echo
done

Here's how we use it (ai-all.sh is the shell script from above):

sudo cp /var/lib/apt/extended_states /var/lib/apt/extended_states.bak
bash ai-all.sh | sudo tee /var/lib/apt/extended_states > /dev/null 2>&1
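To sanity-check the stanza format before overwriting anything, you can run the same loop over a couple of placeholder names (foo and bar are stand-ins, not real packages):

```shell
# Preview the extended_states stanzas the script would produce
printf '%s\n' foo bar | while read -r pkg; do
    printf 'Package: %s\nArchitecture: %s\nAuto-Installed: 1\n\n' "$pkg" amd64
done
```

Each stanza matches the format shown earlier, one per package, separated by a blank line.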

Now, we have to tell APT that we want to keep some things. Personally, I went with ubuntu-standard as a baseline and added a few necessities for good measure. You could go with ubuntu-minimal instead, and also add packages here that you specifically want to keep (otherwise the commands later will remove them). Note, I specifically added grub-pc because a boot loader is not a required package (think EC2, diskless installs, etc.). Be sure to add your boot loader to this command if you require it.

sudo apt-get install ubuntu-standard grub-pc vim build-essential git

This most likely won't do much, since these packages are already installed. However, APT will mark them as "Auto-Installed: 0" so that it knows we explicitly installed them. Next, time to ditch a few hundred megs:

sudo apt-get --purge autoremove

This will take some time and finally spew out a huge list of things to remove. You may want to give it a quick once-over to make sure you aren't tossing something important. If you see a package you need, ^C out and run the apt-get install command again with the new package(s).

So now you should be clean and clear. Note that the above --purge option is meant to completely remove things like configuration files that were installed with the packages you are removing. If that scares you, then remove that option.