Saturday, August 23, 2008

Intrepid Ibex (8.10) Moves to 2.6.27

After considerable discussion on the ubuntu-devel mailing list, and in the Ubuntu Kernel Team's last IRC meeting, we've made the move to 2.6.27 for Intrepid in the hope that it will provide a more robust experience for our users.

The source package was just uploaded to the archive for building, so in about 24 hours we should see it on the mirrors.

Tuesday, August 19, 2008

The Linux Ecosystem...Changes Ahead

So I've been privy to, and sometimes involved in, many conversations about the Linux ecosystem: how it evolved, how it is now, and where it goes from here. The most important factor has been how Linux kernel development has been funded over the years, and what needs to happen to ensure it remains funded.

Given that Linux is not owned by anyone (not even by us, the developers), it is hard to say who should and will fund its future. The tides of money are constantly shifting as companies involved in this ecosystem decide where they fit in, what they want from it, and ultimately, how much they are willing to spend and for how long.

So let's go through some history. I'd like to remind people that I am not an expert on Linux kernel history in this sense. This is all from my recollection over 10+ years of being involved.

In the beginning (the past)



Well, there wasn't much. Let's face it, at the start it was a hobby for almost everyone. No company took it seriously. The device vendors that did write drivers did so of their own free will, and usually poorly. Volunteers still had to munge the code and cram it into the main kernel tree. It was a much simpler time. Folks did things for vanity and sheer enjoyment.

We can relate this time to when things got done because some individual wanted it done. Corporations were still on the sidelines waiting to see what happened (if they were even looking at all).

At this point, people were working on the kernel in their free time. Lots of them were in college, which meant no families to support, no mortgage, no worry about retirement. I'm sure a lot of people (including myself) thought, "This looks good on a resume, plus I get to do things I like."

The corporations emerge (still the past)



So at some point, people decided they wanted to make a living off this thing called Linux. Everyone knew that for the hardware vendors to care about Linux, it had to have a corporate entity to talk to and users demanding it. The boom of venture-capital-backed Linux vendors (aka distributions) emerged. We know they have come and gone over the years, with only a few ending up in the black.

These companies positioned themselves in many different ways. Some tried to become service oriented, while others relied on licensing to make their money. I won't delve into this topic much, but let's look at how these Linux vendors advanced.

Remember, we are just coming out of the previous stage. No corporations are yet seriously funding Linux development. Linux itself is still a long way off from having all the features that users really want in an OS. How do these distributions get those features? Easy: they hire the developers who have been doing this all along.

This is the initial way things get done. You pay someone to do it.

So how did this fall on the distributions? Simply because the corporate distributions required it in order to compete in the market. The OEMs and hardware vendors didn't care. Their stuff was selling on Windows and Unix platforms without problems. They had no financial requirement or user demand pushing them to support Linux at this point. If their hardware was popular enough with Linux users, someone would write a driver for it.

Enter the hardware vendors (sort of past and up to now)



Now that Linux is starting to be a commercial "thing", hardware vendors are taking notice. Not only because of the press around it, but because their own customers are starting to demand it. Big customers.

In addition, companies are starting to see a way for them to piggyback on all the hype and press coverage. If you're "Linux Friendly", you've got a whole bunch of geeks with purchasing leverage behind your company.

Large hardware vendors are starting to take notice. Companies devote whole groups of engineers to supporting Linux. And not just in some odd way; they are doing it our way: open, and in the community. They work with distributions to get early adoption of drivers. They work with upstream to integrate these drivers and features into the kernel. They participate in steering the process, and drive a lot of what we do. OEMs can finally lay down the requirement "Must be supported in Linux" to their ODMs.

Where did these engineers come from? Right from the Linux kernel community. Most people hired by a company to work on the Linux kernel cut their teeth for zero money in the community. Some of them have also been hired away from the distributions.

As a former hiring manager for Ubuntu's kernel team, I can tell you personally that I generally skipped the CV and went straight to the kernel commit logs and the linux-kernel mailing list to vet someone. The CV was just a backup.

This is one of the beauties of our process. On a CV, most people look good. Even their references (which they choose) are all likely to tell you what you want to hear. But nothing tells you about a person's character like the thread from six months ago where they tried to get a feature accepted upstream on lkml. From the CV alone, you wouldn't have seen whether this person defended themselves valiantly or wussed out just because Alan Cox had some harsh words. You wouldn't have known whether they spent months reworking their original idea to address the issues raised on submission, or let the idea die because they couldn't handle the criticism.

Back to the topic though... I'll repeat: these hardware vendors are hiring kernel developers. But it isn't just the hardware manufacturers. You also have companies like Oracle, Google and VMware hiring them. Some companies even have enough cash flow that they can hire a high-level upstream kernel developer for pure bragging rights, and there are consortiums sponsored by many companies that hire people just to keep doing what they had been doing for nothing.

This is definitely where the shift comes in. More and more, we see hardware vendors developing Linux drivers that are released at the same time the device goes public. This development is occurring in-house, and not out in the community. Sure, the community still integrates it and goes through the code review process, but how many new drivers are coming from someone not associated with the vendor that made the device? Fewer and fewer.

The road ahead...(now and into the future)



So as things move ahead, there will be less for the distributions to do for hardware support. Most vendors will produce the driver, and the community and distributions will play a big part in integrating these drivers. New subsystems will emerge to support new ranges of devices. It's not too hard to see vendors working together in the community to solidify the features and APIs that their drivers need (e.g. mac80211, GPU, other wireless technologies, multi-core features, memory management, etc).

Most developers will cut their teeth on helping to integrate and enhance these things from the vendors. The community will revolve around major restructuring of the kernel to ease development and improve stability.

So where does that leave the distributions? With the majority of the kernel work being handled by vendors, the distributions will fall into a level of consumption. Let's face it, distributions are best at integration (which is part development, so let's not get confused). Distributions are also good at noticing trends, which are fed upstream. Yes, they will still drive new ideas, and possibly even develop these ideas in-house, but they won't be the ones driving the bulk of the work because they won't be the ones creating the new hardware that requires it.

The idea that distributions should ultimately be responsible for funding the kernel is not sustainable. In the current ecosystem, a distro is not required to invest heavily in upstream kernel work, and because Linux is open and free, there is nothing forcing it to. If a distribution instead invests heavily in integration and usability, it will produce a better product for the masses, beat the other distributions in the end, and leave the kernel developers hired by those other distributions without jobs.

In the end...(entirely made up)



If a distribution is popular enough, the hardware vendors will want it to run on their goods. OEM's and hardware vendors who work together to help bring support for their hardware to the kernel will ultimately beat out competitors. The age where Linux is in-demand enough to create this ecosystem is close at hand, and in some ways, already exists.

Nothing is written in stone. No one can predict what will happen; we can only speculate. However, we can probably be assured that the funding to keep Linux around will come from many places, maybe even ones we haven't thought of yet.

I, for one, look forward to what's ahead.

Thursday, August 14, 2008

2.6.27-rc3 Kernel images for Hardy/8.04 and Intrepid/8.10

I've built some Ubuntu kernels based on 2.6.27-rc3. They are available here.

Please feel free to test. Report bugs directly to the kernel-team mailing list.

If you are running Hardy and want to use the iwlagn driver (4965 11n support), make sure to get the linux-restricted-modules-common package from Intrepid as well.

Thursday, August 7, 2008

Ubuntu Kernel Next

Normally in Ubuntu's development cycle, we don't begin work on the kernel for a release until that release opens for development.

We are starting something new this time around. Now that 2.6.26 is released, and the kernel in Intrepid/8.10 (our current development cycle) is pretty stable, we have opened up a new git tree called ubuntu-next. Do not confuse this with linux-next; they are different concepts.

We are not spending a lot of time adding features to this tree. It is basically a rebase of all of our patches on top of the latest kernel in linux-2.6 upstream git. Our patches are consolidated and given some consistency (and a few pushed upstream).
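
For the curious, the maintenance is roughly "replay our consolidated patches on top of Linus' latest". Here is a rough sketch of that kind of workflow, with the remote and branch names made up for illustration (this is not the actual layout of the ubuntu-next tree):

# illustrative rebase workflow; remote/branch names are examples only
git remote add torvalds git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
git fetch torvalds
git checkout ubuntu-patches            # hypothetical branch carrying our delta
git rebase torvalds/master             # replay the patches on top of upstream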

At regular intervals, binary packages of this tree will be made available (usually at -rc milestones from upstream). In fact, the first installment is now available at:

http://kernel.ubuntu.com/pub/next/2.6.27-rc2/

I've built them for Hardy/8.04 and Intrepid/8.10. The only difference between Intrepid and Hardy is the compiler, which means your dkms and other third-party module recompiles should work :)

A few points to remember when using these:


  • They are not guaranteed stable, or even to be able to boot. Keep away from small children.

  • These images will not be supplied in a PPA or any other APT type repository. We need a barrier to prevent people from just adding these to their systems wholesale.

  • We will not provide respins of linux-restricted-modules or any other modules package in the Ubuntu archive with these. Headers are provided, so use them if you need to (see the sketch after this list).

  • DO NOT (and I can't say this enough) file bugs on these packages. They will be ignored.

  • DO (and I can't say this enough either) feel free to try reproducing bugs from the official kernels with these kernels. It's a good data point to know whether a bug has been fixed by a newer kernel.

  • DO NOT (and I will make physical threats on this one) pump these packages into a public APT repo and tell the world so that it is "easier" to use bleeding edge stuff. If folks don't know how to install these packages, they surely don't need to be running them (we won't help them fix things after trying them).

  • DO NOT ask us to put this into Intrepid/8.10 3 weeks before RC just because it makes your mouse stop squeaking whenever a cat is near the computer. Honestly, this kernel isn't going to get the same testing, and we have deadlines for a reason.
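
Since the headers came up above, here is a minimal sketch of building an out-of-tree module against them by hand. The package and version strings are examples, not the exact names these builds use:

# hypothetical example; substitute the actual package and version you installed
sudo dpkg -i linux-image-2.6.27-rc2-generic_*.deb linux-headers-2.6.27-rc2-generic_*.deb
cd ~/src/my-module                     # your module source with a kbuild Makefile
make -C /lib/modules/2.6.27-rc2-generic/build M=$PWD modules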



Other than that, enjoy.

Sunday, August 3, 2008

Ubuntu Kernel Team IRC Meeting

Our next team meeting is August 5th (Tue) at 16:00 UTC. We invite any and all community members to participate. Read our agenda and see what we've done in the past. Quick note: we've been slacking on this important event for a while now, and are trying to get things rolling again.

Thursday, July 17, 2008

Ubuntu DKMS Presentation

At the Ubuntu Kernel Sprint in Lexington, Mass. this week, I did a presentation on how to create a simple package which utilizes DKMS. For those that don't know, DKMS is a distribution-agnostic way to deliver driver source that is not prone to breakage when the kernel ABI changes.

When a new kernel is installed or booted, the driver is actually rebuilt against it automatically.
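
To give a feel for it, here is a stripped-down sketch of what a DKMS-managed driver looks like. The module name, version and paths are made up for illustration (a real dkms.conf often carries MAKE/CLEAN commands as well); the presentation walks through a complete example package:

# /usr/src/hello-1.0/dkms.conf -- minimal, illustrative configuration
PACKAGE_NAME="hello"
PACKAGE_VERSION="1.0"
BUILT_MODULE_NAME[0]="hello"
DEST_MODULE_LOCATION[0]="/updates"
AUTOINSTALL="yes"

# register, build and install the module for the running kernel
sudo dkms add -m hello -v 1.0
sudo dkms build -m hello -v 1.0
sudo dkms install -m hello -v 1.0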

This is a 20-minute presentation, complete with an example package. There is a video of the talk in OGG format. It will be uploaded to the Ubuntu Dev video channel soon (in case you don't want the 1.4G file I have).

Have fun, and please don't hesitate to ask any questions.

http://kernel.ubuntu.com/~bcollins/dkms-presentation/

NOTE: I am in the process of uploading the ogg, so be patient:

-rw-r--r-- 1 bcollins bcollins 1444484439 2008-07-16 15:29 dkms-presentation.ogg
94a9c16c6fdb2593a19737ae6f7b9aac dkms-presentation.ogg


UPDATE: Here's a smaller file. I hope the quality is good enough to read the projector screen:

-rw-r--r-- 1 bcollins bcollins 246894559 2008-07-17 12:48 dkms-presentation-small.ogg
61248c1baf4f708e3b040361e977eee0 dkms-presentation-small.ogg

Tuesday, July 15, 2008

Canonical and the Linux kernel

There's been a lot of discussion recently about Canonical and its contributions to upstream, mainly due to an off-the-cuff comment by GregKH in a Google video, and mainly in relation to the Linux kernel. Greg's statement, while consistent with his own data, was incorrect because of improper data collection.

Greg did some data gathering to show where kernel contributions come from, based on the history in the git logs, and associating email addresses of authors with companies.

During this presentation, Greg said that "Canonical does not give back to the community". While I could write pages about how this blanket comment is completely baseless, given that he is using one numeric value from one bit of history, I won't go into that now.

What I want to clarify are the numbers, since I can prove them with facts. I don't make any claims that this data means we do huge amounts of work, or that we compare favorably to other companies. I just want people to know that the numbers you heard are wrong. I also want people to realize that these numbers, while important, are not a good metric of how much a company such as Canonical, SUSE or even Red Hat contributes back to the community as a whole. Just remember, Ubuntu is a community-driven distribution. It's not something we put out to appease the developers and community; it's the heart and soul of everything we do.

So, to get some corrections out:

GregKH: "Canonical only contributed 6 patches in 5 years"

BenC: First off, Canonical hasn't even been around for 5 years, so expressing the numbers this way leads to some incorrect conclusions. Second, a check for ^Author with a canonical.com or ubuntu.com email address in the v2.6.25 tag of the upstream kernel tree shows 91 commits (I should know the numbers, since 63 of those were from me). Granted, Red Hat and SUSE outnumber us considerably, but then we don't have > 100 kernel developers on staff (we have fewer than 10).
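
For anyone who wants to reproduce that kind of count, it boils down to a one-liner of roughly this shape, run inside a clone of the upstream tree (the revision range is the part that is easy to get wrong):

# count commits reachable from v2.6.25 authored from either domain
git log v2.6.25 | grep -Ec '^Author:.*@(canonical|ubuntu)\.com'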

So how did Greg make this mistake? After talking with him, it seems he was only checking for canonical.com addresses. It is only recently that we made a habit of using canonical.com for upstream work (we used to use ubuntu.com).

Seems people like to harp on numbers, as inconclusive as they are. All I can say is, take a step back and look at the bigger picture. We're all working toward the same end, and we all have our contributions.

Monday, June 23, 2008

Kernel ABI and why it matters

The kernel ABI is the exported binary interface provided by the kernel and its accompanying modules. Most Linux distributions have a way of tracking this ABI to detect when the kernel's exported interfaces have changed, and thus when modules compiled for it may need recompiling. Usually this results in the incrementing of an ABI number (on Ubuntu, it would be the "-2-" in 2.6.24-2-generic).

There are many extremes in how this ABI is handled, from taking expensive measures to ensure it doesn't change in a released distribution, to just not caring at all (not tracking it at all is an idiotic extreme I won't even cover).

In Ubuntu's case, we fall somewhere in the middle. We track it, and bump the ABI fairly reliably if _anything_ in our kernel ABI changes. We also try not to bump it in released kernels too often (security updates being an exception), but we don't put much effort into preventing it, or guaranteeing a subset of symbols will never change over the lifetime of a product.
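
Conceptually, the tracking amounts to saving the exported symbol/CRC list from each build and diffing it against the previous one. Here is a stripped-down sketch, with made-up file names (the real check lives in the kernel packaging scripts):

# illustrative only: derive a symbol/CRC dump from the Module.symvers the build produces,
# then compare it with the dump saved from the previous ABI
sort Module.symvers > abi.new/generic
diff -u abi.prev/generic abi.new/generic || echo "exported symbols changed: ABI bump needed"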

The general consensus (at least for me; I can't speak for Canonical on this point) has been that the only things that cared about ABI bumps were proprietary and/or third-party drivers. For open-source third-party drivers, we've recently been pushing the use of DKMS. For proprietary drivers, I've been all for letting the vendors deal with keeping up with us.

However, kABI has more ramifications than just proprietary modules. Many certifications are based on the kABI the certified software runs under (think Oracle, VMware, etc). When we bump the kABI, we in effect force at least a minimal recertification. This isn't good for an enterprise Linux distribution.

So it seems that things will change. For Ubuntu to be considered in certain installations, kABI will need to have strict guidelines and criteria. We'll need to justify post-release kABI bumps and do more work to rewrite security patches to avoid changing the kABI.

A bit of preparation for this has already taken place in Intrepid's kernel (based on 2.6.26). A new kABI checker has been implemented. A checker has always been there, running at build time, but it was very basic and dumb. The new one does much finer-grained checks, and will be expanded to support white/black lists so that we only bump the kABI when a change affects major parts of the kernel (no more kABI bumps for appletalk, sorry guys).

You won't notice much when this happens. You probably wouldn't have known if I hadn't told you. Rest assured that it won't adversely affect our range of hardware support. Perhaps it will mean you have to do less compiling for DKMS packages on your system :)

Sunday, June 22, 2008

Keeping the last successfully booted kernel

It's always been possible to boot to an older kernel on Ubuntu if a new kernel didn't work. However, it was never possible to boot to the last kernel that was known to work, nor was there ever a guarantee that an old kernel would be around.

Let's say someone installs Intrepid 8.10 Alpha 1 with kernel 2.6.26-2-generic. After installing (which presumably went well), they upgrade and get a new 2.6.26-2-generic kernel (remember, newer kernels don't always bump the "2" ABI). Upon rebooting, they discover this new kernel doesn't work for them. What now? There is no longer an older kernel to boot to.

Enter "last-good-boot". A new mechanism now in development that will save away the kernel+initrd+modules for the last time your system booted successfully.

It will add this as an entry in grub for visibility.
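
To make the idea concrete, the effect is roughly the following, although this is only a conceptual sketch (the real hook lives in the packaging, and the paths and grub handling are simplified):

# conceptual sketch of what a successful-boot hook would preserve
KVER=$(uname -r)
mkdir -p /boot/last-good-boot /lib/modules/last-good-boot
cp /boot/vmlinuz-$KVER    /boot/last-good-boot/vmlinuz
cp /boot/initrd.img-$KVER /boot/last-good-boot/initrd.img
cp -a /lib/modules/$KVER/. /lib/modules/last-good-boot/
# a matching "last good boot" entry would then be added to /boot/grub/menu.lst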

Currently there are packages in a PPA:


deb http://ppa.launchpad.net/ben-collins/ubuntu intrepid main
deb-src http://ppa.launchpad.net/ben-collins/ubuntu intrepid main


The PPA includes the module-init-tools and grub changes needed to test. After installing these packages, you will need to reboot in order to get your first last-good-boot.

Feedback welcome.

Thursday, June 5, 2008

Video conversion for PS3

So I was messing around with my PS3 this week. I decided I wanted to finally take advantage of it as a media center, since my laptop is getting too full with movies, home videos and photos.

Much to my dismay, there are a whole lot of people asking how to use Linux to convert videos for the PS3, and not a lot of success stories out there. Finally, after several days of trial and error, I've gotten it down pretty well, encoding all of my home videos and movies to H.264.

First off, I did replace the weak 20GB drive in my PS3. It's actually quite easy. The drive in the PS3 is a standard laptop-type SATA drive. If you want to back up your current drive, simply go to Settings->System Settings->Backup Utility and use a decent-sized flash/pen drive to do it (I was able to back mine up onto a 4GB USB pen drive). After the backup is done, open the side (you're responsible if this breaks your system, not me), swap in your new drive, and power up. It will ask if you want to format the drive; choose "Yes", of course. Then use the same utility to restore your data (saved games and such).

So, now on to the encoding part. This ended up being extremely easy, but there was a lot of missing information I had to figure out. First, you will need the avidemux program. On my Ubuntu 8.04 system, it was available from the universe repository (apt-get or Synaptic will find it by default).

Start avidemux and open your video. Go to the Auto menu, select "PSP (H.264)", and use the 720x480 video mode when the dialog box pops up. That isn't quite it, though. Next, select "Configure" under the Video section and change the encoding to "2 Pass - Bitrate" mode. Close the configure dialog. Finally, click the "Format" dropdown and change it to MP4.
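
If you prefer the command line, a roughly equivalent encode can be done with ffmpeg instead of avidemux. This is only a sketch for a reasonably current ffmpeg, and the quality settings are my own guesses, not what avidemux's PSP preset uses:

# command-line alternative; settings are assumptions, tune to taste
ffmpeg -i home-video.avi \
    -c:v libx264 -profile:v main -crf 20 -vf scale=720:480 \
    -c:a aac -b:a 160k \
    home-video.mp4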

That's it, now click save, and let it process. You will get nice, compact, high quality video suitable for your PS3.

WAIT. This is the part I could not find documented anywhere. When transferring your video to your PS3, you need to follow some key steps, or else the PS3 won't "see" it on your USB device. Your USB device should be FAT32 formatted. On the device, create a directory called "VIDEO" (all caps, no quotes). This is where you will copy your video files. This part took me a while to figure out. If you have videos you want to group together (an "Album" in PS3 speak), create a subdirectory in "VIDEO" (e.g. "VIDEO/Action Movies/") and place your videos there. You can then copy the entire "Album" over in one go.
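
In other words, the layout on the FAT32 stick ends up looking like this (the mount point and names are just examples):

# example layout on the USB stick, assumed to be mounted at /media/usbdisk
mkdir -p "/media/usbdisk/VIDEO/Action Movies"
cp home-video.mp4 "/media/usbdisk/VIDEO/Action Movies/"
# the PS3 then shows "Action Movies" as an album under Video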

Well, that's it. Hope I saved you a lot of time and frustration.

Monday, May 5, 2008

Give me a funky ass bass line...

Started my bass lessons a few weeks ago. I've had my bass rig since November, and have just been strumming on it, practicing fretting, plucking and muting techniques. Finally learning some songs now. The first song is Black Dog by Led Zeppelin, and now I'm picking up Bulls on Parade by Rage Against The Machine. Lots of fun. My teacher is kick ass, so that makes it a lot easier.

Tuesday, April 22, 2008

How to play freerolls

So in my quest to duplicate Chris Ferguson's feat, I've been playing a lot of freerolls. It's been awhile since I've played so many, and I've had a few thoughts on changing my game plan to accommodate the craziness involved in these types of tournaments.


The main thing about freerolls is that they're freerolls: people have nothing at stake when they lose. Many players believe they need to build their stack very quickly. This is wrong in many ways. In general, there is little to gain from building up a quick stack. If you tighten up after building it up quickly, the blinds will grow too fast for you to make use of those chips you just got. If you continue your reign of terror (playing big pots aggressively with weak hands), players at the table will catch on and use you to build their own stacks. Yes, you may suck out on AA every once in a while with 95s, but that doesn't mean you should make it your game plan.


So, here are some things I've been doing. We'll start off with how to play when the big blind is less than 150.


  • Don't raise with anything other than AA, KK, and AKs. The normal reason for raising with other hands is to reduce the field and protect your hand. You can't do this on a loose freeroll. Almost any ace is going to see a flop.

  • Limp a lot with mediocre hands. If you can see a flop cheap with things like J8s and K9s, then do it. Be prepared to dump them on a weak flop, but play them hard if you flop a monster. You will get paid off.

  • Be prepared to fold your big hands when there is too much action. If you are sitting pre-flop with QQ and 5 people have gone all-in, and it costs you all (or most) of your chips to make the call, you are better off folding. Sure, you are still going to win around 30% of the time, but that's not enough to make it worth it. Likely, you will lose. Even if you win, those chips won't be worth the risk you took to get them.

  • Be patient. The all-in-junkies will be gone soon.

  • Don't limp to someone who is pushing in every hand if you aren't willing to call.

  • Take advantage of the action players. If you see someone making a move at every pot, trap them with your big hands, and monster flops.

  • Don't bluff. This includes huge semi-bluffs (open-ended straight-flush draws, for example). Most times you will get called and have to hit your outs. I would only suggest semi-bluffs when you think you have 12 or more outs, are short stacked, and need the chips badly. Sometimes you can try a minimum raise on the flop against a single tight player to see a cheap river, but most of these players will call and then bet the turn.


So, now you've made it to the bigger blinds. You've outlasted the jokers; now on to the poker. Usually at this point, the field is already down to less than a third of where it started. But that's still a long way off from payday. Now that you have gotten deeper into the tournament, though, a normal game plan starts paying off. Being closer to the money (or prize) makes people less likely to take risks. The big stacks, however, will be calling with mediocre hands to try to eliminate the short stacks (e.g. calling an all-in with A5o).


Hope this helps! Good luck, and see you at the tables.

Thursday, April 10, 2008

Zero-2-10k: Day 1

I won't be posting about this every day, but just to get things rolling, here's a little insight into my first day. Final money: $0.00, FTPs: 23. I played only one NL freeroll. I don't have time to play the Razz freeroll coming up in 5 minutes (have to pack for tomorrow). I played a shitload of the Poker After Dark freerolls. I came in second in one (the .NET one, where you have to come in 1st out of 360). Oh well. I didn't expect day 1 to be too great.


I'm going to be in San Francisco till Tuesday, so maybe I'll pick back up when I get back.

Zero-2-10k

As many people know, I'm an avid poker player. I enjoy the game not so much for the money (though I can't complain about it), but mostly because I love the interaction with other people. Practicing people skills, reading their thoughts, playing on their most basic instincts (greed, fear and anger) is very enjoyable to me (sounds sadistic, but I assure you, it's just for fun).


So a lot of people are following in Chris Ferguson's footsteps. The idea: use strict bankroll management to turn nothing into something. The goal: go from zero to $10,000 using this management and good poker skills. You can read more about what Chris did on his site. Here are the basic rules:


  • I'll never buy into a cash game or a Sit & Go with more than 5 percent of my total bankroll (there is an exception for the lowest limits: I'm allowed to buy into any game with a buy-in of $2.50 or less).

  • I won't buy into a multi-table tournament for more than 2 percent of my total bankroll, but I'm allowed to buy into any multi-table tournament that costs $1.

  • If at any time during a No-Limit or Pot-Limit cash-game session the money on the table represents more than 10 percent of my total bankroll, I must leave the game when the blinds reach me.


So, I'm taking up this challenge, and I'll be blogging my progress here. I'll be playing on FullTiltPoker's site, and playing a lot of freerolls to start. In addition to normal freerolls, I will also be using my FTPs to try to earn extra cash and such. For example, there are FTP SnGs where you can win $26 satellite tokens to MTTs. Also, the Poker After Dark freerolls (which start every 10 minutes or so) win you a 100 FTP buy-in to the round 2 MTT. I will generally cash these out in favor of the 100 FTP, so I can use them for more cash-oriented games.

LUGRadio Live 2008 - USA

So, this is my first post, but not the only one for today. I thought it best to pimp the excellent LUGRadio Live show in this first post. I'll be giving a quick lightning talk there. The event will be held at the Metreon Theater in San Francisco this weekend, April 12-13.

Lots of other speakers, including Ian Murdock and Miguel de Icaza. Should be lots of fun, so come out and party with us. Especially check out the live LUGRadio session!