I've been reading a lot of articles about presidential candidates and their stances on encryption for online privacy. It befuddles me how ridiculous their arguments are. You hear things like "only evildoers use encryption" or "if you have nothing to hide, you shouldn't be against this."
These are the same arguments that gave us our Miranda rights, protection against illegal search and seizure, etc. Just because a citizen exercises their right to privacy does not mean they are hiding something, nor can it be taken as probable cause to remove their rights.
Saying that law enforcement needs to remove encryption so they can find bad guys is like saying houses need open windows so police can see inside. It's like saying we should do away with home security systems because only people with illegal items use them.
Any candidate that wants to keep me from protecting my privacy and security on the grounds of protecting me from some would-be terrorist is on a power trip and does not stand for the same Constitution that I do.
Thursday, March 7, 2013
Power Up
As a POWER architecture hardware vendor, we've definitely run into quite a few wish-list items for software we want to have on our platform. Whether it's for customers or just to have a feature complete set of packages in everyday distributions, we want to see things build everywhere, and run just as well as the x86 counterparts.
Starting soon, we are kicking off a PowerUp (my cool label) initiative in order to direct POWER developers toward software that needs a little love on our POWER platforms. Software targets range from the completely unsupported (e.g. Google's V8 JavaScript engine, the D language's Phobos library) to optimizing specifically for POWER (e.g. OpenJDK).
To collect these initiatives together, we will be starting a new PowerUp portal. For now, we have begun a GitHub Team where we have forked relevant repositories. Forums for discussion and participation will also follow. Feel free to clone and hack away. Email me if you have any questions (or wait until the forums and portal open).
NOTE: PowerUp is just my initial name. That may or may not change.
I'll update this blog post when more information is available.
Wednesday, March 6, 2013
Ubuntu Rolling Releases Vs. Hardware Companies
So I have to speak out on this whole issue. I work for Servergy, and for almost two years I've been working on Ubuntu's PowerPC port in order for our new hardware platform, the CTS-1000, to have an out-of-the-box solution for our customers. We've been betting on Ubuntu, since it was able to provide us a known quantity for release dates and an open community that we could participate in (especially being able to take advantage of my core-developer status).
Now, after so much work, so much planning, we are worried about 13.04 never being properly released. This would leave us with no stable Linux distribution for our hardware, basically yanking the rug out from under all of our work. Having a stable release every two years also enlarges the support gap for our followup platforms. Now I realize most hardware vendors are x86-based, and their issues are likely limited to supporting peripherals, so this affects us more than most. The issue we face is supporting entirely new hardware platforms and SoCs with a completely new kernel (likely requiring lots of supporting patches). This is the type of thing that, historically, isn't allowed to be added to an LTS release.
So I have to wonder, if Ubuntu does adopt this rolling release schedule, how viable is it for us? I would still be happy if Ubuntu had one release per year, with every other release becoming an LTS. However, the two-year window is just entirely too large to depend on for quick-moving hardware bring-up and release.
Wednesday, November 7, 2012
Reflecting on 14 years of free software
14 years ago last month, I created my first PGP key to sign up to be a Debian developer. I recall what brought me to that place. I had been trying to improve my skill-set for my resume and wanted to learn to program.
Considering Linux was free compared to development software on Windows (and it ran on my Pentium 90MHz CPU when BSD didn't), it was an easy choice. However, I had no idea what I was getting into.
At the time, I was on a steep learning curve. This command line thing was nothing like the Apple //e BASIC prompt I was used to from my youth, and not even close to Mac/Windows. I was literally reinstalling my Linux OS 2-5 times a week because I would dig around in things that I had no business touching. I tried several distributions of the time, including Red Hat, Slackware and Debian. I settled on Debian because it had the largest software repository, and I wanted no limitations on my journey into the realm of a software developer.
Back then, configuring my network meant first configuring the serial port and modem and then PPP and related software, in addition to chat scripts (used to provide username/password). Luckily I worked as a web designer for a local ISP, so the *nix gurus there gave me plenty of help.
As happens with free software, it isn't too long before you start finding "bugs." These annoying little things that stand in the way of you and your Linux Adulthood. At first, you just kick it around, try to avoid irritating the little thing, but eventually, you find yourself on IRC or a bug tracking system trying to find help.
I immersed myself in providing feedback to hackers and coders to test what could be wrong with my system. Surely, I thought, this was not just a problem I was having. I sat amazed at how intuitive these programmers were and how steadfast they were in wanting to help me fix the issue. Their tireless efforts inspired me to give back as much as I could.
I decided to join this group of lively lads known as Debian Developers, and submitted my PGP key and new-maintainer request. I got a call from Ian Jackson, while at work, and verified information by FAXing a few identification-proving materials to him in London. This was an exhilarating experience. I had never talked to a Brit before, much less one that was in Britain (yes, I was a little sheltered and naive). Now I just needed a way to give back to this group of about 800 developers and its thousands of users.
As luck would have it, I got quite familiar with the inner workings of Debian's package system (DPKG/DEB) and how it worked on UltraSPARC computers. Working at NASA, I had access to all sorts of SPARC hardware, and, at the time, Debian's SPARC port was a fledgling of hope, without any guidance. I began automatic builds of Debian's vast software repository on my UltraSPARC II desktop system at work. I'd come in in the morning, verify the builds, PGP sign them, and upload the lot to the repository. I was king of SPARC!
Yes, this did get to my head. I was young, eager and worst of all, blinded by the slightest recognition. I thrived on acknowledgement and was empowered by the adulation of my peers. I dove into Debian work like Michael Phelps at a community pool; head first and with no purpose. I spent all of my spare time working on SPARC build failures, taking over things like glibc, PAM and OpenLDAP maintenance. I was hooked and my ego took me to the next logical step, running for Debian Project Leader. However, my arrogant and harsh online persona left me with few supporters, the first time around...and the second time around too.
Two years later, wiser and tempered by humility, I ran for DPL again. This time with a clear vision of what I wanted to accomplish and a vague image of my future legacy. You can read the whole thing here. As I read it, after all these years, I'm reminded of how little I knew of the real world, but I'm aghast at my own confidence and ambitious attitude. Time has a way of dulling that drive. During this DPL election, I had a clear win, and so began my 15 minutes of fame.
My newfound leadership was longing for things to "fix." I started with Debian's non-US archive. We wanted encryption to be mainline, but US export restrictions were a hassle. We took on a pro-bono lawyer to help us with the specifics and finally figured out how to abide by such restrictions without opening ourselves up to legal action. The cool part was that we had to email and snail-mail notifications for each and every upload of a package that fell under these restrictions. Each notification looked something like this.
If you know anything about Debian, you know packages get uploaded by the dozens a day, if not more. We were basically flooding the bureau (with the remote hope that they would realize how ridiculous this all was). The original mailing was 2 reams of paper, double sided in a single package. This occurred about once a month. It was sheer insanity, but it got us one step closer to what we wanted...world domination!
My next step was to build up our infrastructure. Debian is heavily reliant on donations -- equipment and money. We had a good chunk of money, but we never spent it. We had decent donated hardware and bandwidth, but the main donor at the time would whine and cry and make threats that left us wondering if he would yank it all away some day. Either way, we got new hardware with large disk space at my local ISP. Chucked down $5000 for a Sun RAID array with about 320Gigs of disk. Pretty damn expensive.
How I loved this time of my life. I was well known and in the headlines of Slashdot and Linux Gazette on a regular basis. I remember being able to pick up a book or magazine or two at Barnes and Noble that had my name and/or picture in it. I would be lying if I said I didn't miss that.
But deep down, I'm a developer. It didn't take long before I had that yearning to "do some real work" and by that I mean staying up all night in front of a shell prompt trying to figure out why that oops disappears when I add in some debug printk's or reverse engineer the endianness of an OHCI-1394 packet on sparc64. Anyways, on to better things I went and many more adventures awaited me.
As I moved through my career, I became more and more focused on Linux kernel work. From embedded to server, from network drivers to mpeg drivers, from MMUs to CPUs. I've never regretted a single step of the journey. As I sit here now, working from home at my newest job, I reflect, not with a sense of accomplishment, but with a sense of humility, knowing that there were many greater, smarter and harder working folks that traversed those same years making it all happen and enabling the opportunities that I've had.
So for anyone who stumbles upon this lonely blog entry, wondering what this whole free software thing is; take a seat, pour a cup of tea, and relax for a few minutes. It's probably the last time you will have that brief illusion of a normal life, but you won't miss it one bit.
Cheers
Sunday, November 4, 2012
Follow-up: Power Architecture Related Tracks Proposed for UDS-r
A few weeks ago I posted about some tracks at UDS concerning PowerPC. Here are links to the session results.
- Virtualization support for Power architecture
- Power Architecture Kernel Development
- PowerPC Bootloader Options
I need to clean up the items. The main takeaway is that the PowerPC kernels will be maintained separately from the mainline kernels, which means we will be getting support for some new architectures. It will probably be a couple of weeks before I get this set up, but expect it nonetheless.
The other side is the boot loader. This is a tricky and complex implementation point. Details are in the session notes, but this may have implications based on relevant work being done on ARM as well.
That's the extent of it at this point. Looking forward to great things with Raring.
Saturday, November 3, 2012
Servergy Announces New PowerPC Developer Board
DISCLAIMER: I work for Servergy, Inc.
This week, at UDS, Servergy announced that it will be designing and selling a PowerPC based developer board like no other currently on the market. Typical Power dev kits use outdated and feature-poor CPUs. As a follow-up, they made this formal announcement.
Servergy plans many needed features, including:
- Multi-core processor
- Hardware virtualization (via Linux kernel KVM)
- Gigabit ethernet
- HDMI video output
- Network offloading engines
- SATA controller
- Audio output
- USB Ports
- SD Card slot
Servergy has dubbed this board P-Cubed.
They are planning a wide range of software support including firmware/boot-loader source code and pre-built images for creating bootable SD cards. Support for major Linux distributions will include Debian, Ubuntu, Fedora and openSUSE.
The platform is geared toward making modern Power systems available to developers for a fraction of the cost of full fledged server systems (Servergy's primary market). While the board is aimed at increasing the ecosystem and community around Linux-on-Power, the pricing is sure to attract hobbyists and students as well.
While Servergy did not give an exact price, they are aiming at a sub-$200 system. Keep an eye on Servergy's website for news and a pre-order form.
Thursday, October 25, 2012
How to Setup an Alternative Linux Distro Chroot Under Ubuntu - Part 2
NOTE: This is part 2 of a 2 part series.
In my last post, we set up a chroot environment for our RPM based distribution in /srv/chroots/rhel6-ppc. Now we'll set up schroot so we can access this environment as if it were the booted system, using a union type scheme (an schroot configuration setup) so that we will always have a pristine environment for builds and such.

So let's look at /etc/schroot/schroot.conf and add the following entry:

[rhel6]
type=directory
union-type=overlayfs
description=RedHat Enterprise Linux 6
groups=adm,root
directory=/srv/chroots/rhel6-ppc
profile=default

This is the basic setup. The key here is that it uses overlayfs for union mounting the original. This means that after you exit a newly created schroot for this entry, it will be purged and the original chroot will not be changed.

Also, profile=default means it will use configurations from /etc/schroot/default/. Make sure to add yourself to the adm group or run schroot as root.

In order to try it out, use the following command:

schroot -c rhel6

From here, you can do whatever it is you like to do in your new environment!
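If you want the opposite of a throw-away environment, schroot's named sessions let the same overlay persist across several commands. A minimal sketch (the session id is whatever schroot prints back to you):

# begin a session, run commands inside it, then end it (discarding overlayfs changes)
SESSION=$(schroot -b -c rhel6)
schroot -r -c "$SESSION" -- rpm -qa | head
schroot -e -c "$SESSION"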
Wednesday, October 24, 2012
How to Setup an Alternative Linux Distro Chroot Under Ubuntu - Part 1
NOTE: This is part 1 of a 2 part series.
Developing a new server product requires me to test all sorts of things, including multiple distributions. As an Ubuntu developer, my main platform is, of course, Ubuntu.
It's a PITA to run multiple distributions from one system (and not very productive doing it from multiple machines), so I decided to setup chroots for each one. My production system has three environments outside of the main Ubuntu install: RHEL5, RHEL6 and SLES11.
Fortunately, Ubuntu has a nice tool called schroot. I like it because it's based off the original tool called dchroot, which I wrote back in 1999 (wow). It was mainly to allow people to use the UltraSPARC developer systems with more than one release of Debian without me having to set up multiple machines.

Fast forward 13 years, and now we have schroot. This tool has come a long way, and even includes support for snapshots of file systems so you can always start with a pristine environment. This is useful to me because I want to make sure when I build a package, only the required dependencies are installed, and I don't want to worry about screwing up the original environment. Not to mention, I can start more than one session and they won't bother each other.

In addition to schroot, we will need the rinse package. For anyone familiar with debootstrap, it's basically the same thing, but for RPM based systems. It will download and bootstrap all the required RPMs needed for a particular distro, in a manner suitable for a chroot environment.

sudo apt-get install schroot rinse

Now, if you look in /etc/rinse/rinse.conf, you will see several already configured RPM distributions. If you want to do one for RHEL, you will need to either use CentOS instead, or duplicate the matching CentOS entry and rename it, being sure to change the mirror URL to match your location. For my RHEL distros, I have a local RPM repository, so I use this entry:

[rhel-6]
mirror = file:///srv/rhel6-ppc/RPMS/media/Packages

You will also need to copy the matching .packages file in /etc/rinse/, naming it the same as your entry.

Now decide where you want your chroot to be located. I've decided to put mine in /srv/chroots/rhel6-ppc. Create this directory, and then you can run the rinse command as follows:

sudo rinse --arch ppc --directory /srv/chroots/rhel6-ppc --distribution rhel-6

NOTE: I have my rinse script hacked a little to allow ppc as an arch. You will probably use i386 or amd64. Also, the --distribution argument is the same as the entry name.
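Once rinse finishes, a quick sanity check is worthwhile before wiring up schroot. A sketch, assuming the chroot's architecture is runnable on your host and a standard RHEL layout inside the new root:

# confirm the bootstrap produced a usable root file system
sudo chroot /srv/chroots/rhel6-ppc /bin/cat /etc/redhat-release
sudo chroot /srv/chroots/rhel6-ppc /bin/rpm -qa | head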
In my next post, we'll move on to configuring schroot with a union type backing.
Monday, October 22, 2012
Where to Obtain PowerPC Dev Kits
I was asked this today on #ubuntu-kernel. It's a good question, and one which I hear often. Most people can go with old Mac hardware, but those things are kind of obsolete and largish for many people. Not to mention it doesn't do much justice for the modern CPUs you see today (multi-core, hardware virtualization, SATA, DDR2/3, etc.). Unfortunately, you aren't going to find many cheap modern Power kits like you would expect in the ARM world, but here are some quick links that I was able to put together. I would stick with Freescale, but IBM has some dev kits too.
- Micetek - These are actually really nice
- Embedded Planet - Very low CPU speeds
- IBM 750FX - 64-bit but unsure about pricing
- Emerson - Unsure of pricing, but very broad list of CPUs for IBM and Freescale
If you know of more, please comment and I'll add them to this post.
Friday, October 19, 2012
Can Canonical Make Skunkwork Work?
This post is in regards to Mark's recent announcement that Ubuntu 13.04 will be using a skunkworks approach to some of its more wow-factor features.
I know some people are going to cry out from their basement, screaming "community, community, community!" Let's face it though, one thing Linux is missing is being able to release something and people say "holy shitballs, batman!"
Anyway, I totally applaud Mark and Canonical for this decision. If it all goes well, here's my vote for naming 13.10 the Snazzy Skunk.
Friday, September 28, 2012
Power Architecture Related Tracks Proposed for UDS-r
On my ramp-up toward UDS-r, I've created some blueprints and pinged some related folks to get them into the proper tracks.
I'm hoping to get a lot of interest and discussion around there, so here they are:
- Virtualization support for Power architecture
- Power Architecture Kernel Development
- PowerPC Bootloader Options
So this covers a wide range of topics. The most in-depth one is the Virtualization blueprint. As of yet, I've not seen a lot of broad support for non-x86 in OpenStack and related software. While it works (I've set it up), it just doesn't do a lot to make me happy.
The boot loaders blueprint is basically an RFC. The idea of Power architecture on a non-embedded system not having OpenFirmware is about on par with Dell selling an Intel system without a BIOS. The Power systems do have U-Boot (Das U-Boot), but that's not as robust as it needs to be. I'm thinking something like grub2 being compiled against the U-Boot API so that U-Boot can load it modularly, or perhaps something like the kexec based loader that the PS3 used.
Finally, the kernel development blueprint is a discussion that needs to be hammered out with the Canonical Kernel Team so we can all be happy and not step on the primary architectures, while still being able to spread some support for newer Power equipment.
Cheers and see you in Denmark!
EDIT: Updated link for boot loaders blueprint
Wednesday, September 26, 2012
The Rack Revolution
As I sit here in my cozy home on my comfy couch, I am bewildered and amazed at just how far things have come in the last decade.
Let's take a quick inventory of my immediate surroundings:
- ✔ Laptop
- ✔ WiFi
- ✔ Smart Phone
- ✔ HD TV
- ✔ High Speed Internet
- ✖ Server Farm
Hmm...that last one's a bit different from the old days. I used to have a nice collection of loudly humming, room-warming servers in my garage. As a telecommuter, I needed it. My blog was running on it, my email was running on it and my firewall was running on it.
What happened? Well, we all know the answer to that question: things consolidated into the "Cloud." Instead of under-the-table boxes running our local services, we now have providers doing the heavy (literally) lifting for us.
So what do they run on? Practically the same loudly humming room-warmers that we used to keep under our desks. However, in recent years, the move is being made to lower the operating costs of these rack farms, turning them into quiet, low-powered, self-cooling, maintainable animals.
While most places have tried to just tone things down, or spread them thin, some have been making the move to efficiency. Enter the reverse revolution of the CPU to something more applicable to today's computing needs. Instead of powering with high-wattage x86 chips, many are dipping their toes into the shallow end of the alternative-processor kiddy-pool.
And with that I introduce an amazing NEW and WILD CPU: PowerPC!
Oh, you've heard of it? It's legacy and old-hat, you say? I must be thinking of a different PowerPC CPU then. The company I've been gainfully employed with for the past 1.5 years seems to be using something quite different than your grandmother's Power chip. Not quite the behemoth of the IBM Power7 iron (in size or noise), but not the wussy of your old PowerMac either.
We're talking multiway SoCs with full floating-point, running at a fraction of the wattage of just about anything else on the market. Add to that full hardware virtualization (via KVM), and you begin to see where in the market this is headed.
We've already been engaging multiple Linux and software vendors to give a complete and first-rate experience on this new class of hardware. You'll have multiple choices when it comes to support and administration, whether it's one system or a room full of racks.
So here's my not-so-humble way of introducing you to Servergy, Inc. They've been around for 3 years, but expect to hear a lot more about us in the coming months. If you're going to be at a Linux or Cloud/Server related event in the near future, chances are you will run into one of us. I'll actually be at Ubuntu's UDS-r in Copenhagen at the end of October. I'm hoping to have a live demonstration while I'm there.
Cheers
NOTE: In this article I am speaking solely on my behalf. None of what I've said can be taken as a statement by the company I work for: Servergy, Inc.
Tuesday, July 17, 2012
The Community Conundrum: PowerPC
In my recent work, I've been dealing a lot with PowerPC. As an old Mac user, I've had a soft spot for PowerPC for ages. Like most people, until recently I've considered PowerPC an aging and dying architecture. Even with IBM selling PowerLinux systems, the lack of cheap hardware for developers has left a hole not easily filled, no matter how many old PowerMacs you buy on eBay.
However, there are a lot of PowerPC platforms that do fill this gap left by PowerMac. Some are even 32-bit platforms that can compete in today's markets.
So why have you never heard of them? Why can't you download Fedora or Ubuntu to install on your PowerPC of today? Several reasons:
- Distributions don't really support it.
- The "community" behind it is driven at the kernel and low-level, not at the distribution level (see last bullet item).
This circle of support appears to be the hold up. Convincing even community supported architectures like Ubuntu and Fedora to support these different kernel flavors is met with archaic skepticism, and is usually concluded with "there is no community" to which I usually respond "because there is no support."
Something has to give here. Linux and Open Source isn't where we want the chicken-and-egg scenario to happen. You can't walk up to a Linux distro with a community and say "Here we are, let's do this" in much the same way as you can't go to a community and say "Come over here with us. We don't support you yet, but we'd like you to prove that you're worth it."
So where to begin...
Friday, December 9, 2011
Reviving the Ubuntu PowerPC community effort
This is just a heads up to some people who may be interested. I am trying to breathe some life back into the Ubuntu PowerPC community. My interest extends from my current job and focus on the server market. That's not to say that I don't think PowerPC should have a desktop life (though most people would only like it for the legacy ppc Mac systems out there), just that my personal focus won't cover much of that beyond perhaps some CD creation and fixing fails-to-build-from-source problems.
So, come one, come all. I've sent a quick note out to the Ubuntu PowerPC LaunchPad Team. There's also a mailing list now.
I'll be posting more of a road-map soonish.
Thursday, June 30, 2011
Setting up minicom and ckermit for u-boot serial file transfers
Took a while to get this simple bit of legacy working. I'm working over a USB serial console to a dev board and needed to update some parts of flash, but I don't have working ethernet yet. U-Boot allows for kermit binary file transfers, but the default ckermit settings don't appear to work well. So to speed others along, here's what I did. Quite simply, add this to (or create) ~/.kermrc:
set carrier-watch off
set handshake none
set flow-control none
robust
set file type bin
set rec pack 1000
set send pack 1000
set window 5

Otherwise, ckermit expects some modem line signals and CONNECT before it starts the transfer. Now you can use minicom's send command.
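For the full round trip, here's a minimal sketch of how a transfer typically goes (the load address is just an example; use whatever is right for your board):

# at the U-Boot prompt on the board: wait for a kermit upload into RAM
loadb 0x1000000

# then, in minicom on the host: press Ctrl-A S, choose kermit, pick the file;
# U-Boot reports the transferred size once the upload completes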
Wednesday, June 29, 2011
Bluecherry is hiring an Ubuntu developer!
My old employer, Bluecherry, is looking to hire an Ubuntu developer with the following experience:
- Extensive knowledge of the Video4Linux2 and ALSA sound API
- Prior experience in Linux based software design / implementation
- Prior knowledge of Ubuntu, including building / maintaining Debian packages
- Prior experience with gstreamer and RTSP
- Prior experience with MySQL
- Prior experience with Matroska file containers and video encoding
- Excellent verbal and written communication skills
- Strong knowledge of C
- Previous work with and understanding of working with video / audio formatting / codecs including MPEG4 and H.264
- Internet and operating system security fundamentals
- Sharp analytical abilities and proven design skills
- Strong sense of ownership, urgency, and drive
- Demonstrated ability to achieve goals in a highly innovative and fast paced environment
Wednesday, June 8, 2011
Stripping an Ubuntu system to just the basics...
WARNING: I am not responsible for you trashing your system. Use this guide with care. No attempt was made to ensure your intelligence level (nor mine).
UPDATE: Please read the comments about using apt-mark instead of the man-handler of a script that I have in the main post. Should lessen the chance of hosing your system.
While working on a cross-build system inside an Ubuntu 10.10 virtual machine instance, I decided I didn't want all the fluff of the desktop version. However, instead of just going through the entire package list on the machine, I came up with a quick way to have APT automatically handle it for me.
Ubuntu is nice in that it has meta-packages for the different levels of their system. The top-level meta-packages -- ubuntu-minimal, ubuntu-standard and ubuntu-desktop -- let APT know which package group to install (ubuntu-standard is generally used for server installs). APT also has the nice functionality of automatically being able to remove packages which were only installed because they were depended on by some other package. For example, if you install ffmpeg, you get a ton of libraries with it. If you then remove ffmpeg, you can run "apt-get autoremove" to also remove the libraries that are no longer needed because they were only installed to satisfy ffmpeg's dependencies.
So now, how to abuse this functionality. First, we find out how APT tracks these implicit/explicit package states (packages we installed directly vs. packages that were only installed to satisfy dependencies). We find /var/lib/apt/extended_states. The format is similar to /var/lib/dpkg/status and has stanzas in the form of:
Package: foo
Architecture: i386
Auto-Installed: 1

So here's a quick script that will mark all of your currently installed packages as auto-installed:
#!/bin/bash
# emit an extended_states stanza for every installed package,
# marking each one as auto-installed
arch=$(dpkg --print-architecture)
dpkg --get-selections | awk '{print $1}' | \
(while read pkg; do
    echo "Package: $pkg"
    echo "Architecture: $arch"
    echo "Auto-Installed: 1"
    echo
done)

Here's how we use it (ai-all.sh is the shell script from above):
sudo cp /var/lib/apt/extended_states /var/lib/apt/extended_states.bak
bash ai-all.sh | sudo tee /var/lib/apt/extended_states > /dev/null 2>&1

Now, we have to tell APT that we want to keep some things. Personally, I went with the ubuntu-standard as a base line and added a few necessities for good measure. You could go with ubuntu-minimal, and also add packages here that you specifically want to keep (otherwise the commands later will remove them). Note, I specifically added grub-pc because a boot-loader is not a required package (think EC2, diskless installs, etc.). Be sure to add your boot-loader to this command if you require it.
sudo apt-get install ubuntu-standard grub-pc vim build-essential git

This most likely won't do much, since these packages are already installed. However, APT will mark them as "Auto-Installed: 0" so that it knows we explicitly installed them. Next, time to ditch a few hundred megs:
sudo apt-get --purge autoremove

This will take some time and finally spew out a huge list of things to remove. You may want to give it a quick once-over to make sure you aren't tossing something important. If you see a package you need, ^C out and run the apt-get install command again with the new package(s).
So now you should be clean and clear. Note that the above --purge option is meant to completely remove things like configuration files that were installed with the packages you are removing. If that scares you, then remove that option.
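As for the apt-mark route mentioned in the update above, a rough equivalent of the whole procedure looks like this (a sketch; it assumes a release where apt-mark supports the auto, manual and showmanual subcommands):

# flip every manually-installed package to auto-installed
apt-mark showmanual | xargs sudo apt-mark auto

# re-mark the keepers as manual, then let autoremove do the rest
sudo apt-mark manual ubuntu-standard grub-pc vim build-essential git
sudo apt-get --purge autoremove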
Friday, November 12, 2010
Bluecherry Releases beta1 of their new DVR product
After much delay, and much anticipation, we at Bluecherry have finally released our first public beta of version 2 of our DVR product.
For extensive details, please see the announcement.
I'm honored to have worked on this project. While I've been dealing with the driver (solo6x10) and backend server for over a year, the final parts of the product have fallen into place in just the last 6 months. It was an overwhelming effort by just 4 developers on a project that spanned from hardware to UI.
Wednesday, August 25, 2010
Solo6x10: Recording from video
I've finally gotten around to writing an example program for recording from Solo6x10 devices to a file. This program is very basic. It leaves the video device in its default state (resolution, frame rate, etc.), so you can modify those settings separately, and then use this program to record at those settings.
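If you want to change those defaults first, v4l2-ctl from the v4l-utils package works for this. A quick sketch, with an illustrative resolution and frame rate (whether the encoder accepts these exact values depends on your solo6x10 setup):

# set the capture format and frame rate on the encoder node
v4l2-ctl --device=/dev/video1 --set-fmt-video=width=704,height=480
v4l2-ctl --device=/dev/video1 --set-parm=30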
I also did not put motion detection examples in this, mainly because I have not satisfied my desire to create a decent API in v4l2 for that yet.
Next step, I will add sound recording into this.
You can find the example source here.
To compile it, run:
gcc bc-record.c -o bc-record -lavformat

Execute it with a single command line option, the device for a solo6x10 encoder (e.g. /dev/video1). It will record until you hit ^C.
Happy recording!
Wednesday, July 14, 2010
Cross compiling the Linux kernel from Mac OS X
So I picked up a 13" MacBook and have been fiddling around with it. I like it, sue me.
One of the first things I did (as any Linux developer would) was to install darwin ports. I noticed some interesting things in there. A few that I needed (git) and a few that completely surprised me (dpkg and apt).
One thing that was missing was a Linux cross-compiler. So I did what any self-respecting Linux developer on a Mac would do: I built one.
Don't get too excited. I've only built one worthy of compiling a kernel (which means no C library, no userspace, etc).
The result of my work is here (built on 10.6.3):
- http://www.swissdisk.com/~bcollins/macosx/gcc-4.3.3-x86_64-linux-gnu.tar.bz2
- http://www.swissdisk.com/~bcollins/macosx/binutils-2.20.1-x86_64-linux-gnu.tar.bz2
- http://www.swissdisk.com/~bcollins/macosx/elf.h
You may notice the extra elf.h file, which is needed in /usr/include/elf.h for some programs in the kernel to compile natively on the host (e.g. modpost). The gcc and binutils will unpack in /opt/local/.
In order to cross-compile, you will need to add a few things to your kernel make command line:
make ARCH=x86_64 CROSS_COMPILE=x86_64-linux-gnu- ...

You may notice, like I did, scripts/genksyms/parse.c has a #include for malloc.h which is not on Darwin. You may safely delete that line.
Note that you must already have /opt/local/bin in your PATH. Using ARCH=i386 will also work and compile 32-bit kernels. One last point, the sources for gcc/binutils came from Ubuntu's Jaunty.
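Putting it all together, a minimal sketch of the whole flow (this assumes the tarballs carry opt/local/ relative paths as described above, and uses defconfig as a stand-in for your own kernel config):

# unpack the toolchain into /opt/local and put it on PATH
sudo tar -C / -xjf gcc-4.3.3-x86_64-linux-gnu.tar.bz2
sudo tar -C / -xjf binutils-2.20.1-x86_64-linux-gnu.tar.bz2
export PATH=/opt/local/bin:$PATH

# configure and build from inside the kernel source tree
make ARCH=x86_64 CROSS_COMPILE=x86_64-linux-gnu- defconfig
make ARCH=x86_64 CROSS_COMPILE=x86_64-linux-gnu- vmlinux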
Happy hacking...