
Thursday, July 1, 2010

Oldies but goodies: libugci

So I went digging around my old software, and ran across some interesting stuff that doesn't get a lot of attention. I've picked out one in particular that I haven't heard much about in a long time, but I consider it very useful for people who love MAME: libugci

I had built an arcade cabinet a long time ago that was making use of a USB control interface called Happ UGCI. It allows you to directly interface real video game style controls (buttons, joysticks, trackball, coin door, etc) with a PC.

It was perfect for what I was doing, except that it sucked on Linux. Back then, I actually had to write some patches for the kernel to get it all working correctly (firmware bugs in the UGCI). In addition, much of it was not accessible via the USB input layer, and so I wrote a library called libugci that took advantage of the HID interface to the board.

The board allows you to connect a real coin-door, and libugci+mame will convert that to coin-door events in the software. I always wanted to complete my MAME cabinet with a coin-door so I could make money off my friends :)

The MAME code has built-in support for libugci (thanks to my patches submitted all those years ago). Installing libugci and recompiling MAME with this support enabled will make use of it. There are also some programs for accessing the EEPROM on the board, as well as for mapping UGCI events to HID keyboard strokes (in cases where you would want that as opposed to real joystick/mouse events).

So if you're a MAME junkie like I am, and need that extra set of features that only the UGCI can offer, you can download the library from:

  • Tarball
  • git://github.com/benmcollins/libugci.git

Good luck and happy gaming!

Sunday, June 27, 2010

Why Linux will (has?) hit a wall in popularity with normal users...

So this is one of the few times I decide to get political and/or rational. Most of my career has been spent on Linux. And while the gettin's good, I don't subscribe to the notion that Linux, as a desktop, will take over the World.

Let me make one thing clear. I do believe Linux, as a core, will succeed in many forms: on the server, and on mobile products (where it is the core and not exposed directly to the user, a la Android).

So here's the problem as I see it. Too many choices. Yes, this has been beaten to death, and to some extent, many Linux vendors have taken note. Debian used to be a free-for-all where all of the choices were exposed to the user. People who used Debian loved the choices, but the truth isn't that they loved the choices themselves; they just loved that their own choice was among them.

If a user doesn't have a preference, for example with a desktop environment, then choices are bad: they don't know how to pick one. So now there is a default. That's great, but across Linux distributions, even if the default is Gnome, the little nuances of each system differentiate the whole thing so much that no Gnome desktop is truly the same as another distribution's.

So why are choices bad? I want to take an example from a book I was reading recently called The 4-Hour Workweek. It tells of a watch company that wanted to advertise in a magazine. The watch company had many different styles of watches and wanted to put a full page ad that showed off 6 of them. The advertising executive said they should pick one watch and show off that one. To settle the dispute, they had two full page ads: one with the 6 watch layout, and the other with a single watch. Don't you know that the single watch ad out-performed the 6-watch ad by a factor of 6? Interesting...

So anyway, choices are bad for consumers. They would rather have one choice, even if it may limit them in some way. Apple figured this out when they almost went the way of the Commodore by making so many damn types of Macintoshes (when Jobs wasn't at the helm). Microsoft also learned this when they had an extensive list of Windows variants (Full/Pro/Home/Home-Pro/Server/etc/etc), but I don't think they've recovered from that very well.

Now on to the meat of the problem. Linux, too many choices...what can be done? Well, as a developer, not much. It's not our job to make these decisions. We are the ones that give all the choices. Drivers for every device, apps that do anything you want, themes, icons, documentation, hardware support. The real issue at stake is some company needs to break out of the "We're a Linux distribution" mold.

Let's take Dell's Ubuntu Linux offering as an example (I'm not knocking this effort, I helped start it when I was working for Canonical, and it's a great offering). If a normal user somehow gets to the Dell Linux page and says "wow, what is this Linux thing?", they will surely go to Google and start checking. Bad? Hell yes. The huge amount of information, choices and decisions quickly becomes apparent to them. They start asking questions like "Is Ubuntu the _right_ Linux for me?", "Should I try other Linuxes as well?" and "Why does Dell only offer Ubuntu?".

Indeed, these are good questions, but for which there is no answer that is going to take the average user from "I've always used Windows/MacOS and know how it works" to "I'm going to try this thing called Linux."

In my opinion, as unfortunate as it may sound, in the end, some company will deliver a product that makes no mention of Linux other than in the copyright attributions and source code, and will call it something completely different. Maybe they will call it Chrome OS?

Saturday, June 19, 2010

Using your new Bluecherry MPEG-4 codec card and driver...

Now that the dust has settled and people are taking notice of the new driver for Bluecherry's MPEG-4 codec cards, here's a quick How-To for using it.

You will notice that there are two types of v4l2 devices created for each card. One device for the display port that produces uncompressed YUV and one for each input that produces compressed video in either MPEG-4 or MJPEG.

We'll start with the display port device. When loading the driver, a display port is created as the first device for that card. You can see in dmesg output something like this:

solo6010 0000:03:01.0: PCI INT A -> GSI 22 (level, low) -> IRQ 22
solo6010 0000:03:01.0: Enabled 2 i2c adapters
solo6010 0000:03:01.0: Initialized 4 tw28xx chips: tw2864[4]
solo6010 0000:03:01.0: Display as /dev/video0 with 16 inputs (5 extended)
solo6010 0000:03:01.0: Encoders as /dev/video1-16
solo6010 0000:03:01.0: Alsa sound card as Softlogic0

This is for a 16-port card. The output for a 4-port card would show "Encoders as /dev/video1-4" and similarly for 8-port show /dev/video1-8.

The display port allows you to view and configure what is shown on the video out port of the card. The device has several inputs, depending on which card you have installed:

  • 4-port: 1 input per port and 1 virtual input for all 4 inputs in 4-up mode.
  • 8-port: 1 input per port and 2 virtual inputs for 4-up on inputs 1-4 and 5-8 respectively.
  • 16-port: 1 input per port and 5 virtual inputs: 4-up on inputs 1-4, 5-8, 9-12 and 13-16, plus 16-up on all inputs.

You do not have to open this device for the video output of the card to work. If you open the device, set the input to input 2, and close it (without viewing any of the video), it will continue to show that input on the video out of the card. So you can change inputs using nothing but v4l2 ioctls.

This is useful if you want to have a live display on a CRT and use a simple program that rotates through the inputs (or multi-up virtual inputs) at a few-second intervals.
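
Here's a minimal sketch of such a rotator for a 4-port card (the device path, input count and delay are assumptions; untested against real hardware):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
        int input = 0;
        int fd = open("/dev/video0", O_RDWR);

        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* 4 real inputs plus the 4-up virtual input = 5 total */
        for (;;) {
                if (ioctl(fd, VIDIOC_S_INPUT, &input) < 0)
                        perror("VIDIOC_S_INPUT");
                input = (input + 1) % 5;
                sleep(5);
        }
}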

You can still use vlc, mplayer or whatever to view this device (you can open it multiple times).

Now for the encoder devices. There's obviously one device for each physical input on the card. The driver will allow you to record MPEG-4 and MJPEG from the same device (but you must open it twice, once for each feed). The video format cannot be reconfigured once recording starts. So if you open the device for MPEG-4 at full D1 resolution and 30fps, that's also what you're going to get if you open a simultaneous record for MJPEG.

However, it's good to note here that MJPEG will automatically skip frames when recording. This allows you to pipe the output to a network connection (e.g. MJPEG over HTTP) with no worry of the remote connection being overloaded on bandwidth.

However, this isn't so for MPEG-4. It is possible, if you are too slow at reading (not likely), to fall behind the card's internal buffer. I was not able to make this happen even while writing the full frames to disk on 44 records (4 cards of 16, 16, 8 and 4 ports).

Unlike any card supported by v4l2 before this, the Bluecherry cards produce containerless MPEG-4 frames. Most v4l2 applications expect some sort of MPEG-2 stream, such as program or transport. Since these programs do not expect raw MPEG-4 frames, I don't know of any that are capable of playing the encoders directly (much less recording from them). You can do something simple like 'cat /dev/video1' and somehow pipe it to vlc (I haven't tested this), or write a program that just writes the frames to disk (I have tested this; most programs can play the raw m4v files produced by the driver).

Since most people will want to record to disk anyway, the easiest approach is to write the video frames straight out to disk, as in the sketch below.
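
Here's roughly what that looks like (a hedged sketch; the device path, output file and buffer size are placeholders):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        static char buf[256 * 1024];
        int in = open("/dev/video1", O_RDONLY);
        int out = open("capture.m4v", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        ssize_t len;

        if (in < 0 || out < 0) {
                perror("open");
                return 1;
        }

        /* read() compressed data from the encoder, append it to disk */
        while ((len = read(in, buf, sizeof(buf))) > 0)
                if (write(out, buf, len) != len)
                        break;

        return 0;
}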

Now on to the audio. The cards produce what is known as G.723, which is a voice codec typically found on phone systems (especially VoIP).

Since Alsa currently doesn't have a format for G.723, the driver shows it as unsigned 8-bit PCM audio. However, I can assure you that it isn't. I have sent a patch that was included in alsa-kernel (hopefully getting synced to mainline soon). But this only defines the correct format, it doesn't change the way you handle it at all.

You must convert G.723-24 (3-bit samples at 8kHz) yourself. The example program I provide in my next post will show you how to do this, as well as how to convert it to MP2 audio and record all of this to a container format on disk for later playback.
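
Until then, here's a rough idea of just the unpacking step. I'm assuming LSB-first bit packing here (verify against real data), and turning the 3-bit codes into PCM still requires a proper G.723 decoder:

#include <stdint.h>

/* Unpack one 48-byte block (128 samples) of G.723-24 into one
 * 3-bit code per output byte. */
static void unpack_g723_24(const uint8_t in[48], uint8_t codes[128])
{
        for (int i = 0; i < 128; i++) {
                int bit = i * 3;
                unsigned word = in[bit >> 3];

                /* the 3 bits may straddle a byte boundary */
                if ((bit & 7) > 5)
                        word |= (unsigned)in[(bit >> 3) + 1] << 8;

                codes[i] = (word >> (bit & 7)) & 0x7;
        }
}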

Wednesday, June 16, 2010

Softlogic 6010 4/8/16 Channel MPEG-4 Codec Card Driver Released

As I've talked about before, the company I work for has been dedicated to producing stable video surveillance products based on Linux.

Bluecherry's primary device for their video surveillance applications is the Softlogic based MPEG-4 codec card, which is available in 4, 8 and 16 channel models. The original driver for this card, although available as Open Source, was pretty pathetic to say the least. Most of it was just a kludge of the Windows driver, exposing all of the functionality, but with little effort to make it Linux savvy.

That's where I came in. I've since rewritten the driver so that it makes use of Linux's Video4Linux2 and Alsa driver APIs. It's currently 90% functional, and many times more efficient than the original OEM driver.

Here is a quick run-down of some of the features and plus-ones against the original driver:

  • Video4Linux2 interface allows easy use of existing capture software
  • Alsa interface allows for easy audio capture (however, see G.723 caveats from my previous posts)
  • Zero-copy in the driver. The original driver DMA'd and then copied the MPEG frames to userspace. The new driver makes use of v4l2 buffers and can DMA directly to an MMAP buffer for userspace.
  • Simultaneous MPEG/MJPEG feed per channel, selectable via v4l2 format
  • Standard v4l2 uncompressed video YUV display with multi-channel display format (4-up)

Now that the driver is nearing completion, it's about time to release it. I've done so via Launchpad.


If you are on an Ubuntu system, you can install the DKMS package from the PPA archive using these commands:

sudo add-apt-repository ppa:ben-collins/solo6x10
sudo apt-get update
sudo apt-get install solo6010-dkms

Note, I've only supplied this for Lucid right now, but if you download the .deb or the .tar.gz you should be able to install it on any recent kernel.

Friday, June 11, 2010

Feedburner: Adding Flattr to your FeedFlare (Part: 2)

This is a follow up to my previous post: Feedburner: Adding Flattr to your FeedFlare.

I've been wrestling with FeedBurner's FeedFlare API for a few nights now. Most notably, I've had trouble getting some of the documented XPath functions to work, and dealing with what appears to be delays in updating the flare after you add it.

My goal was to add categories to the DynamicFlare href so you could pass those along to Flattr. The problem is that if you add something like ${a:category[1]/@term} to the href, and a:category[1] doesn't exist in your feed, it will not add the flare to your feed (sort of like a filter if the attribute proves false()).

In a final decision of anger, I decided to drop any passing of information from the DynamicFlare href other than the feedUrl. This in itself proved difficult, since ${feedUrl} doesn't work as advertised. I instead opted to pass ${a:link[@rel="self"]/@href} which appears to work on my feed. YMMV.

I've gotten rid of the files I linked to in my last post so people don't use them. For the quick and dirty, here's the URL to use for Personal FeedFlare now:

http://www.swissdisk.com/~bcollins/feedflare/flattr-me-dynamic-v2.php

There are two options you can pass to this script:

  • uid: Your Flattr UID (required)
  • lng: Your preferred language (defaults to en_GB, aka English)

I used this for mine:

http://www.swissdisk.com/~bcollins/feedflare/flattr-me-dynamic-v2.php?uid=17833&lng=en_GB

That's it! The second script will parse the feed and pass up to 980 characters as the desc, up to 80 characters of the title and all of the categories as tags.

You can also check here for all the PHP-Source files so you can modify to your liking.

Tuesday, June 8, 2010

Feedburner: Adding Flattr to your FeedFlare

Update 2010-06-11: This article and the information within are superseded by Feedburner: Adding Flattr to your FeedFlare (Part: 2)

I've added Flattr to my blog and also wanted to add it to my feedburner FeedFlare, but alas, no one has created one yet. So I've gone through the trouble of doing it for you :)

First, I went to the Feedburner FeedFlare API documentation. I won't go into the details of writing your own flare, but I opted for the dynamic type, since it would allow me to show how many times one of my blog posts had been flattered.

Second, I dove into the Flattr JavaScript API. I don't think they recommend this, but it's the only way I could get to the button information contained in their default IFrame.

Third, I downloaded the PHP Simple HTML DOM Parser. There's probably a simpler way to parse the IFrame sent back from Flattr, but I opted for this method since it was pretty straight forward.

For the lazy, you can use my existing FeedFlare URLs as your own. You will need to go to your feedburner page, login, select the feed you want to add this to, click on "Optimize" and then "FeedFlare". Below the stock list you will see a place to enter a URL. Enter the URL below and BE SURE to replace "your_uid" with your Flattr UID, else you won't get the money.

http://www.swissdisk.com/~bcollins/flattr-me-dynamic.php?uid=your_uid

For the interested, here are the two files I've created. First is the dynamic PHP FeedFlare file:

<FeedFlareUnit>
  <Catalog>
    <Title>Flattr Me</Title>
    <Description>
      Adds a Flattr link including flattr count for each feed unit.
    </Description>
    <Link href="http://www.swissdisk.com/~bcollins/flattr-me-dynamic.php?uid=flattr_uid"/>
    <Author email="benmcollins13@gmail.com">Ben Collins</Author>
  </Catalog>
  <DynamicFlare href="http://www.swissdisk.com/~bcollins/flattr-me-static.php?uid=<?
        print $_GET['uid']; ?>&title=${title}&link=${link}"/>
  <Sample>Flattr (11)</Sample>
</FeedFlareUnit>

Note that the <Link> element references another PHP script, and that this is in fact PHP. This allows us to pass along the Flattr UID to the second script, which is the one that actually produces the FeedFlare (feedburner periodically checks the second URL it gets from this file for updates to the FeedFlare).

Now, the second script is the one that uses the simple_html_dom.php library I spoke of. You will see it referenced in the file below. Basically I pack the data just like the original Flattr load.js script does, and request the Flattr button, and then rip a few bits of information from it:

<?
include_once("simple_html_dom.php");

$btn_url = "http://api.flattr.com/button/view/";

$data = "button=compact&uid=" . $_GET['uid'] .
        "&url=" . $_GET['link'] . "&lng=en_US&hide=0&title=" .
        $_GET['title'] . "&cat=text&tag=&desc=";

$html = file_get_html($btn_url . bin2hex($data));

$els = $html->find("span.flattr-count");
$count = $els[0]->innertext;

$els = $html->find("a.flattr-pop");
$link = $els[0]->href;

$els = $html->find("span.flattr-link");
$txt = $els[0]->innertext;

?>
<FeedFlare>
  <Text><? print "$txt ($count)"; ?></Text>
  <Link href="<? print $link; ?>"/>
</FeedFlare>

Those familiar with Flattr will note that I did not pass in the description, which could probably be added in the first script (or at least a shortened version of it) and then passed to the button. Usually the description is the first few hundred characters of the post in this case.

Hope all works well. Please post back if you take the time to add the description to this!

Friday, June 4, 2010

PHP: Sending Motion-JPEG

As you may know from past posts, I was trying to send Motion-JPEG from a PHP script. This proved (for many reasons) not so easy. After I conquered writing PHP extension modules, I was still left with nuances in PHP that made it difficult to send MJPEG from my script.

Here's the basic run-down of difficulties:
  • PHP buffers output to the client and this keeps you from doing continuous streams of data easily
  • PHP doesn't allow you to send headers after it thinks the headers have already been sent
  • Apache has some other handlers that also cause buffering
  • Apache does some client negotiation that conflicts with MJPEG (mod_gzip)

Searching the Eentarnets did not produce good results on how to handle this. At least, not in a single, easily findable place. So here's my solution for others to use:

<?
# Used to separate multipart
$boundary = "my_mjpeg";

# We start with the standard headers. PHP allows us this much
header("Cache-Control: no-cache");
header("Cache-Control: private");
header("Pragma: no-cache");
header("Content-type: multipart/x-mixed-replace; boundary=$boundary");

# From here out, we no longer expect to be able to use the header() function
print "--$boundary\n";

# Set this so PHP doesn't timeout during a long stream
set_time_limit(0);

# Disable Apache and PHP's compression of output to the client
@apache_setenv('no-gzip', 1);
@ini_set('zlib.output_compression', 0);

# Set implicit flush, and flush all current buffers
@ini_set('implicit_flush', 1);
while (ob_get_level() > 0)
    ob_end_flush();
ob_implicit_flush(1);

# The loop, producing one jpeg frame per iteration
while (true) {
    # Per-image header, note the two new-lines
    print "Content-type: image/jpeg\n\n";

    # Your function to get one jpeg image
    print get_one_jpeg();

    # The separator
    print "--$boundary\n";
}
?>

That's it in a nutshell. Make sure that your PHP script does not contain any newlines or data before or after the PHP enclosures (<? ... ?>).

The joy of writing a php5 module

As a follow up to my last post, I wanted to give a quick update.

As it turns out, I ended up writing a php5 Zend module to wrap up some functions I use to access v4l2 devices. I have to say that writing a php5 module was pretty straight forward. Big thanks to the Extension Writing tutorial I found, which was well written, and did not leave me with any questions.

I was able to get the module ready in a few hours, and spent the rest of this morning cleaning it up and tweaking it a bit.

Now I'm completely able to read my v4l2 devices and mjpeg stream them from my PHP script :)

Wednesday, June 2, 2010

Request-for-help: PHP and Video4Linux

As it turns out, I do not like writing web applications. Give me registers and DMA, keep the CSS and JS...thanks.

Anyway, I have to find a way to feed an MJPEG output from a PHP script. WAIT! I know this sounds easy, and if not for all the caveats of what I have to adhere to, it would be very simple...maybe.

It seems that PHP doesn't have a way to use ioctl()'s. I can open() a v4l2 device just fine, and read() from it with ease, but you're SOL if you want to do something cool with that file handle in PHP.

I need to be able to do ioctl()'s because my v4l2 device requires at least one so I can put it into MJPEG format (as opposed to the default MPEG format). I can serve up these JPEG/MJPEG's through apache perfectly using a C program I've written.
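
For the curious, the ioctl in question looks roughly like this on the C side (a sketch; the exact fields depend on the driver):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int set_mjpeg(int fd)
{
        struct v4l2_format fmt;

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_MJPEG;

        /* A real program would VIDIOC_G_FMT first and change only
         * the pixelformat field. */
        return ioctl(fd, VIDIOC_S_FMT, &fmt);
}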

However, I have to use PHP because the rest of the web application is written in PHP and there are already authentication mechanisms storing credentials in PHP sessions. I don't want to have to parse PHP sessions in a C program.

The normal method I found for doing JPEG from a PHP script worked like this:

<?
header("Content-Type: application/jpeg");
passthru("/usr/bin/grabjpeg");
?>

This works perfectly. Except I want to be able to do a MJPEG feed which requires sending one image after the next with headers in between. PHP doesn't like this much, nor does it like the fact that I want all of these headers to come directly from my C program and not from the PHP script. I also do not want to call grabjpeg for each frame, since that's too much overhead in between frames.

What ends up happening is that my headers from the C program are sent as part of the content that the client thinks is the JPEG file.

Right now, I can only see one way to handle this, and that's to write a PHP module to expose libv4l, but I'm open to suggestions on being able to call an ioctl() from within a PHP script.

Friday, May 28, 2010

Dear Users: Do not withhold information from developers, kthxbye

I accidentally stumbled upon a debugging case today with a user that seems to be a common problem. I won't call this user out directly, but he was a case study in what not to do when you want help from a developer like myself.

The basic volley started off with the usual chit chat in an IRC channel:

<User> Can someone help me compile a module for my kernel?
<Me> Sure, what seems to be the trouble?

So off we went with some IRC and PasteBin exchanges of his compile problem. I looked at the source code for the driver he was trying to compile, and it was a one-line obvious fix to get it working with a newer kernel such as the one found on the Ubuntu 10.04 Lucid system he was working on.

So now the module compiled, and he tried loading it. Hmm...the module disagreed with symbols from modules on his running system, videodev to be exact.

Weird. That shouldn't happen. I asked him if he had compiled or installed different versions of v4l than what his system came with. He didn't recall. However, after getting him to pastebin "ls -lR" of his modules directory, it was apparent that 3 days ago, he did in fact completely replace the drivers/media install.

This meant that those modules didn't match the stock headers that came with his running kernel. This took a very short time for him, but considerable time for me (volunteer time) to find out. After finding out, he admitted to replacing the modules.

Now it was obvious that he was embarrassed to admit he had junked up his system, and even more embarrassed that I had caught him in a lie. He could have saved time for both of us. If I had given up after helping with his initial problem, he would have been stuck, not knowing how to fix it.

So the moral of the story here is, don't hide information from people trying to help you. Tell all the gross details. If you fed your cat buttermilk waffles off your keyboard, it might help to know that if your 'H' is stuck.

Wednesday, May 26, 2010

Debugging: The elusive deadlock

It's very infrequent that I come across a deadlock bug in my code that 1) isn't easy to find, and 2) is very easy to reproduce.

I had a user report a bug in my solo6010 driver where he has two cards installed in the system. He is on a Core2Duo. If he starts mplayer up on each display for the two cards he has installed (2 mplayer instances), his machine instantly deadlocks and spews to the console.

At first I wasn't able to easily reproduce this. I'm on a Core2Quad, but since I have 4 cards installed I decided to start an mplayer instance for each display device for each card (4 mplayer instances). Oddly enough, I also deadlocked and spewed softlockup messages to the console.

Do you see where this is going? I decided, for clarity, to disable two of my cores:

echo 0 | sudo tee /sys/devices/system/cpu/cpu2/online
echo 0 | sudo tee /sys/devices/system/cpu/cpu3/online

Sure enough, it only took two mplayer instances to deadlock my machine this time. Weird! My driver is currently able to pull 44 MPEG feeds from 4 cards at once. Here, in this case, I was deadlocking with just two YUV feeds from the uncompressed video of the card. This code is much less complex, and the locking even less so. No parts of the driver share data between card instances (each card instance has its own data and locks).

Upon further investigation I've noticed that this deadlock appears to happen in spin_unlock_irqrestore() during wake_up().

After carefully tracing the code, it was vaguely apparent that my logic around the wakeup routine, for when it tries to grab a frame from the hardware, was a little off. I was using a different wake struct for each file handle, when I should have been using one per card. Not to mention, I was not taking advantage of the video sync IRQ to send a wakeup to the thread so that it knew a new frame was ready to grab (this allowed me to spin less, and guaranteed the threads would be awoken when a new frame was ready).

Reworking this logic just a bit cleared the deadlock. Honestly, I'm not entirely sure how the scenario caused a deadlock. It appears to be something in the underlying logic of the wait/wake_up routines. I won't argue with the fact that it's fixed now, and my code is cleaner and more efficient, so I won't ask too many questions.

Saturday, May 22, 2010

Review: Softlogic 6010 based MPEG-4/G.723 compression cards

So the company I work for (Bluecherry, LLC) is busy developing some products around the Softlogic 6010 based compression card. My job there has been to rewrite the driver from scratch in order to make it more Linux friendly. So to make things clear, I am writing this review from a programmer's perspective. I want to point out that I am not an MPEG expert, so I may skimp on some of the encoder details.

Let's start off with some specs. The base card supports full D1-quad compression of video into the MPEG-4 video format. What this means is that it can encode 704x480 sized video at a rate of 120fps for NTSC, or 704x576 at a rate of 100fps for PAL. This breaks down to 4 full streams at 30fps and 25fps respectively. Alternately it can do CIF encoding (1/4 the size of D1) at 4 times that frame rate, or for the math-lazy, 16 channels at 30fps at a 320x240 frame size.

The card can be purchased in 4, 8 or 16 channel input models. So to take advantage of all 16 channels on the top model, you would either have to record in CIF mode (320x240) or reduce the frame interval to get 7.5fps per channel for full D1 mode (704x480). I will be speaking mostly in NTSC, but the card does support PAL, so do the conversions as we go.

The card allows for the usual MPEG encoding settings including GOP (Group of Pictures), Quantization and Intervals. Intervals are sort of the opposite of frames-per-second, but correlate the same way. An interval of 1 means that the encoder captures every frame, while an interval of 3 means it skips 2 frames between every frame it encodes. The video muxer on the card performs at 30fps, so the interval setting will decide how many of these frames get encoded.

The encoder itself performs quite well. It performs all encoding to an on-board SDRAM chip, and can DMA the frames directly to host memory, which is great for performance. The original driver did not take advantage of this, since it copied the frames to user space. The new driver I've written makes use of v4l2 and its videobuf-dma-contiguous framework, and thus allows for memory-mapped buffers shared with userspace. This gives us zero-copy to userspace.

The encoder also supports side-by-side MJPEG compression of video frames. So while you can be recording the compressed MPEG-4 to disk, you can also frame grab JPEG images. This is useful for tools that want to do such frame grabbing for video analytics, or for live viewing over a web server (it's very easy to send frame grabs via an MJPEG cgi script).

All of this is built properly now on top of Linux's v4l2 API. Unfortunately the API does not expect compression cards to pipe MPEG-4 video, so most clients using v4l2 expect compressed video to be either MJPEG or MPEG-1/2 streams of some sort.

Currently the only drawback from the MPEG encoder is that the frames are self-standing MPEG-4 video frames. I have to add a header to the key frames for them to be usable by most decoders.

Overall the video capture is great. I've run 44 simultaneous records (16, 16, 8, 4 channel cards) on a Core2Duo with a system load average of 1.65, and only about 10% CPU usage. Most of the load is disk I/O.

Each encoder input also supports a graphical overlay that can be programmed at pixel level with varying colors. This is great for textual overlays. Currently we use it to place a descriptive name on the recording along with a timestamp.

In addition to the encoders, the card supports one uncompressed display port. It's currently exposed via v4l2 as a standard analog YUV device. It can be configured to show any of the inputs ports in tons of configurations. So you can do things like a 4-up display. This live display also supports a graphical overlay.

The display is sent to the video-out port on the card (hard-wired), so it can be hooked to a monitor as well (good for surveillance applications such as what Bluecherry offers).

Finally, we'll discuss my least favorite part of this card. While it's not a killer, it is just odd that the card supports sound only in G.723 format. For surveillance applications this is just fine. It delivers 3-bit samples at an 8kHz sample rate, which is 24kbps. While this is good for bandwidth, it's bad for anything that needs better audio quality. Not to mention that storing the audio and video together in any sane format requires converting G.723 to linear PCM.

However, the G.723 to linear PCM conversion isn't much overhead on performance, and neither is the encoding to 16Khz MP2 audio, which is how we store it for our surveillance products. Overall, our format is MPEG-4/Video and MP2/Audio in a Matroska/mkv container. This is exactly how it was stored in my 44 stream example above.

So Pros:
  • Fast and efficient
  • Can handle multiple inputs easily
  • The new driver works well with v4l2 and alsa
  • Perfect for security applications
  • Nice OSD capabilities
  • Motion detection supported per input
  • Side-by-side MPEG-4 and JPEG capture modes per input
Cons:
  • Only MPEG-4 video (the SOLO-6110 will support H.264)
  • Low quality audio is not great for anything other than special applications (no TV DVR)
  • G.723 audio format has been obsoleted twice since it was introduced. Nothing uses it so you must always re-encode it.

Friday, May 21, 2010

Video4Linux2 Hardware Motion Detection Support

In about a week or so I'll be making a proposal for V4L2 to have API support for hardware that offers motion detection. Since my experience with this is limited to only one type of hardware, I'm hoping to gain feedback on making sure that the approach I'm offering is as generic as possible.

I'll describe the hardware that I'm working with, which is a Softlogic 6010 MPEG4/G.723 encoder board supporting 4, 8 and 16 input channels (and all of them being able to be encoded at once). Note that all of this applies to the SOLO-6110 card as well (h.264 variant).

The motion detection exposed by the SOLO-6010 is on a per-input basis. It can be configured, when motion detection is enabled, to either signal start of motion events only, or signal start and stop events with a configurable delay after actual motion has stopped (i.e. it will not send the stop signal until there is no motion for n amount of seconds).

Next, SOLO-6010 allows you to set a threshold for when the hardware will detect an event. In my case, the higher the threshold, the less sensitive. It has a range of 0 (anal) to 65535 (off) with a default of 768.

Exposing this via v4l2 controls is quite simple. In my current version of the solo6010 driver, I expose this via private CIDs (Control IDs) which can be easily converted to native CIDs in v4l2.

#define V4L2_CID_MOTION_ENABLE    (V4L2_CID_PRIVATE_BASE+0)
#define V4L2_CID_MOTION_THRESHOLD (V4L2_CID_PRIVATE_BASE+1)
#define V4L2_CID_MOTION_MODE      (V4L2_CID_PRIVATE_BASE+2)
#define V4L2_CID_MOTION_EASE_OFF  (V4L2_CID_PRIVATE_BASE+3)

In this case, V4L2_CID_MOTION_ENABLE is a boolean to turn motion detection on or off, V4L2_CID_MOTION_THRESHOLD is the threshold value I spoke of (slider with said range), V4L2_CID_MOTION_MODE is a menu control for "Start events only" and "Start and stop events" and V4L2_CID_MOTION_EASE_OFF is the seconds of non-motion required before the stop event is triggered.
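
Hypothetical userspace usage would look something like this, using nothing but standard VIDIOC_S_CTRL calls (the CIDs are the private ones defined above):

#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int enable_motion(int fd, int threshold)
{
        struct v4l2_control ctrl;

        /* Turn motion detection on for this input */
        ctrl.id = V4L2_CID_MOTION_ENABLE;
        ctrl.value = 1;
        if (ioctl(fd, VIDIOC_S_CTRL, &ctrl) < 0)
                return -1;

        /* Set the sensitivity, e.g. the default of 768 */
        ctrl.id = V4L2_CID_MOTION_THRESHOLD;
        ctrl.value = threshold;
        return ioctl(fd, VIDIOC_S_CTRL, &ctrl);
}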

Now, I could combine V4L2_CID_MOTION_ENABLE and V4L2_CID_MOTION_MODE as just a menu control with "Disabled" as one option, but I'm not sure what the consensus would be. It would be confusing as a standard control for hardware that only supported on/off tuning of this feature.

Note that in "Start events only" mode, my hardware will continually produce motion events as long as the card sees motion, and thus I can emulate V4L2_CID_MOTION_EASE_OFF and a stop event in software.

Whether it is a good idea to always have this support from the control, and have the v4l2 middle layer take care of using the hardware or its own software to handle it, is up for discussion. I'm all for implementing it as transparent to the user, with the middle layer handling the guts and the drivers deciding whether they allow the middle layer to do it or expose their hardware support for it.

Now we just need to make userspace aware of these events. I've found the easiest way for me was to define some extra flags for struct v4l2_buffer that get set during dqbuf.

#define V4L2_BUF_FLAG_MOTION_ON         0x00000400
#define V4L2_BUF_FLAG_MOTION_START      0x00000800
#define V4L2_BUF_FLAG_MOTION_STOP       0x00001000

The reason for V4L2_BUF_FLAG_MOTION_ON is because we need userspace to be able to tell that motion detection is on without querying the controls every second or two. Remember that controls can be changed even while a recorder is on (and in the case of motion detection, I suspect that's a wanted feature).

So if userspace is reading packets, it knows that motion detection is on or off depending on that flag, and can act accordingly. The start and stop flags are self-explanatory.
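
To illustrate, a recorder's dequeue loop might act on the flags like this (start_recording/stop_recording being whatever your application does; the flags are the proposed defines above):

#include <linux/videodev2.h>

/* Hypothetical application hooks */
void start_recording(void);
void stop_recording(void);

static void handle_motion_flags(const struct v4l2_buffer *buf)
{
        if (!(buf->flags & V4L2_BUF_FLAG_MOTION_ON))
                return;  /* motion detection is off; record as usual */

        if (buf->flags & V4L2_BUF_FLAG_MOTION_START)
                start_recording();
        else if (buf->flags & V4L2_BUF_FLAG_MOTION_STOP)
                stop_recording();
}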

Now, this is a good reason to promote software side (perhaps libv4l2?) ease-off on motion detection. Without creating another flag, there's no way to know if motion detection is in a start-only or start-stop mode. If we always implement the ease-off, then we know we'll get a stop event eventually, whether or not the hardware supports it.

Moving back to threshold values... SOLO-6010 actually supports a motion detection grid with block sizes of 32x32 pixels. The SOLO-6010's NTSC viewable field is 704x480 (704x576 for PAL). So that's either a 22x15 or 22x18 grid of blocks, each of which can have an individual threshold setting. I'm still up in the air about how to do this in a standard v4l2 API. For SOLO-6010, I am using the low 16 bits of the control value to pass a threshold level, and the high 16 bits to determine the block being affected (0xff000000 being the x value and 0x00ff0000 being the y value on the grid). This works well in practice but is obviously not generic enough.
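
In code form, the packing works out to something like this (macro names are mine, purely illustrative):

/* x block in bits 24-31, y block in bits 16-23, threshold in the
 * low 16 bits */
#define MOTION_BLOCK(x, y)       ((((x) & 0xff) << 24) | (((y) & 0xff) << 16))
#define MOTION_CTRL_VAL(x, y, t) (MOTION_BLOCK(x, y) | ((t) & 0xffff))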

Well that's all I have for today.

Tuesday, May 11, 2010

Writing an ALSA driver: PCM handler callbacks

So here we are on the final chapter of the ALSA driver series. We will finally fill in the meat of the driver with some simple handler callbacks for the PCM capture device we've been developing. In the previous post, Writing an ALSA driver: Setting up capture, we defined my_pcm_ops, which was used when calling snd_pcm_set_ops() for our PCM device. Here is that structure again:

static struct snd_pcm_ops my_pcm_ops = {
        .open      = my_pcm_open,
        .close     = my_pcm_close,
        .ioctl     = snd_pcm_lib_ioctl,
        .hw_params = my_hw_params,
        .hw_free   = my_hw_free,
        .prepare   = my_pcm_prepare,
        .trigger   = my_pcm_trigger,
        .pointer   = my_pcm_pointer,
        .copy      = my_pcm_copy,
};

First let's start off with the open and close methods defined in this structure. This is where your driver gets notified that someone has opened the capture device (file open) and subsequently closed it.

static int my_pcm_open(struct snd_pcm_substream *ss)
{
        ss->runtime->hw = my_pcm_hw;
        ss->private_data = my_dev;

        return 0;
}

static int my_pcm_close(struct snd_pcm_substream *ss)
{
        ss->private_data = NULL;

        return 0;
}

This is the minimum you would do for these two functions. If needed, you would allocate private data for this stream and free it on close.

For the ioctl handler, unless you need something special, you can just use the standard snd_pcm_lib_ioctl callback.

The next three callbacks handle hardware setup.

static int my_hw_params(struct snd_pcm_substream *ss,
                        struct snd_pcm_hw_params *hw_params)
{
        return snd_pcm_lib_malloc_pages(ss,
                         params_buffer_bytes(hw_params));
}

static int my_hw_free(struct snd_pcm_substream *ss)
{
        return snd_pcm_lib_free_pages(ss);
}

static int my_pcm_prepare(struct snd_pcm_substream *ss)
{
        return 0;
}

Since we've been using standard memory allocation routines from ALSA, these functions stay fairly simple. If you have some special exceptions between different versions of the hardware supported by your driver, you can make changes to the ss->runtime->hw structure here (e.g. if one version of your card supports 96kHz, but the rest only support 48kHz max).

The PCM prepare callback should handle anything your driver needs to do before alsa-lib can ask it to start sending buffers. My driver doesn't do anything special here, so I have an empty callback.

This next handler tells your driver when ALSA is going to start and stop capturing buffers from your device. Most likely you will enable and disable interrupts here.

static int my_pcm_trigger(struct snd_pcm_substream *ss,
                          int cmd)
{
        struct my_device *my_dev = snd_pcm_substream_chip(ss);
        int ret = 0;

        switch (cmd) {
        case SNDRV_PCM_TRIGGER_START:
                // Start the hardware capture
                break;
        case SNDRV_PCM_TRIGGER_STOP:
                // Stop the hardware capture
                break;
        default:
                ret = -EINVAL;
        }

        return ret;
}

Let's move on to the handlers that are the work horse in my driver. Since the hardware that I'm writing my driver for cannot directly DMA into memory that ALSA has supplied for us to communicate with userspace, I need to make use of the copy handler to perform this operation.

static snd_pcm_uframes_t my_pcm_pointer(struct snd_pcm_substream *ss)
{
        struct my_device *my_dev = snd_pcm_substream_chip(ss);

        return my_dev->hw_idx;
}

static int my_pcm_copy(struct snd_pcm_substream *ss,
                       int channel, snd_pcm_uframes_t pos,
                       void __user *dst,
                       snd_pcm_uframes_t count)
{
        struct my_device *my_dev = snd_pcm_substream_chip(ss);

        /* For U8 mono data, one frame is one byte, so pos and count
         * map directly to byte offsets in our buffer. */
        if (copy_to_user(dst, my_dev->buffer + pos, count))
                return -EFAULT;

        return 0;
}

So here we've defined a pointer function, which gets called so that userspace can find out where the hardware is in writing to the buffer.

Next, we have the actual copy function. You should note that count and pos are in frames, not bytes (for our 8-bit mono format they happen to be the same). The buffer shown here is assumed to have been filled during interrupt.

Speaking of interrupt, that is where you should also signal to ALSA that you have more data to consume. In my ISR (interrupt service routine), I have this:

snd_pcm_period_elapsed(my_dev->ss);
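
For context, that call sits in an ISR shaped roughly like this (a sketch; the real handler also acknowledges the hardware and fills the buffer):

#include <linux/interrupt.h>
#include <sound/core.h>
#include <sound/pcm.h>

static irqreturn_t my_isr(int irq, void *dev_id)
{
        struct my_device *my_dev = dev_id;

        /* ... copy the new period of samples into my_dev->buffer
         * and advance my_dev->hw_idx ... */

        snd_pcm_period_elapsed(my_dev->ss);

        return IRQ_HANDLED;
}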

And I think we're done. Hopefully now you have at least the stubs in place for a working driver, and will be able to fill in the details for your hardware. One day I may come back and write another post on how to add mixer controls (e.g. volume).

Hope this series has helped you out!

<< Prev

Tuesday, May 4, 2010

Writing an ALSA driver: PCM Hardware Description

Welcome to the fourth installment in my "Writing an ALSA Driver" series. In this post, we'll dig into the snd_pcm_hardware structure that will be used in the next post which will describe the PCM handler callbacks.

Here is a look at the snd_pcm_hardware structure I have for my driver. It's fairly simplistic:

static struct snd_pcm_hardware my_pcm_hw = {
        .info = (SNDRV_PCM_INFO_MMAP |
                 SNDRV_PCM_INFO_INTERLEAVED |
                 SNDRV_PCM_INFO_BLOCK_TRANSFER |
                 SNDRV_PCM_INFO_MMAP_VALID),
        .formats          = SNDRV_PCM_FMTBIT_U8,
        .rates            = SNDRV_PCM_RATE_8000,
        .rate_min         = 8000,
        .rate_max         = 8000,
        .channels_min     = 1,
        .channels_max     = 1,
        .buffer_bytes_max = (32 * 48),
        .period_bytes_min = 48,
        .period_bytes_max = 48,
        .periods_min      = 1,
        .periods_max      = 32,
};

This structure describes how my hardware lays out the PCM data for capturing. As I described before, it writes out 48 bytes at a time for each stream, into 32 pages. A period basically describes an interrupt. It sums up the "chunk" size that the hardware supplies data in.

This hardware only supplies mono data (1 channel) and only an 8000Hz sample rate. Most hardware seems to work in the range of 8000 to 48000, and there is a define for that: SNDRV_PCM_RATE_8000_48000. This is a bit-masked field, so you can add whatever rates your hardware supports.

My hardware driver describes this data as unsigned 8-bit format (it's actually signed 3-bit G.723-24, but ALSA doesn't support that, so I fake it). Most common PCM data is signed 16-bit little-endian (S16_LE). You would use whatever your hardware supplies, which can be more than one type. Since the format is a bit mask, you can define multiple data formats.

Lastly, the info field describes some middle layer features that your hardware/driver supports. What I have here is the base for what most drivers will supply. See the ALSA docs for more details. For example, if your hardware has stereo (or multiple channels) but it does not interleave these channels together, you would not have the interleave flag.

Next post will give us some handler callbacks. It will likely be split into two posts.

<< Prev | Next >>

Sunday, May 2, 2010

Writing an ALSA driver: Setting up capture

Now that we have an ALSA card initialized and registered with the middle layer we can move on to describing to ALSA our capture device. Unfortunately for anyone wishing to do playback, I will not be covering that since my device driver only provides for capture. If I end up implementing the playback feature, I will make an additional post.

So let's get started. ALSA provides a PCM API in its middle layer. We will be making use of this to register a single PCM capture device that will have a number of subdevices depending on the low level hardware I have. NOTE: All of the initialization below must be done just before the call to snd_card_register() in the last posting.

struct snd_pcm *pcm;
ret = snd_pcm_new(card, card->driver, 0, 0, nr_subdevs,
                  &pcm);
if (ret < 0)
        return ret;

In the above code we allocate a new PCM structure. We pass the card we allocated beforehand. The second argument is a name for the PCM device, which I have just conveniently set to the same name as the driver. It can be whatever you like. The third argument is the PCM device number. Since I am only allocating one, it's set to 0.

The fourth and fifth arguments are the number of playback and capture streams associated with this device. For my purpose, playback is 0 and capture is the number I have detected that the card supports (4, 8 or 16).

The last argument is where ALSA allocates the PCM device. It will associate any memory for this with the card, so when we later call snd_card_free(), it will cleanup our PCM device(s) as well.

Next we must associate the handlers for capturing sound data from our hardware. We have a struct defined as such:

static struct snd_pcm_ops my_pcm_ops = {
        .open      = my_pcm_open,
        .close     = my_pcm_close,
        .ioctl     = snd_pcm_lib_ioctl,
        .hw_params = my_hw_params,
        .hw_free   = my_hw_free,
        .prepare   = my_pcm_prepare,
        .trigger   = my_pcm_trigger,
        .pointer   = my_pcm_pointer,
        .copy      = my_pcm_copy,
};

I will go into the details of how to define these handlers in the next post, but for now we just want to let the PCM middle layer know to use them:

snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE,
                &my_pcm_ops);
pcm->private_data = mydev;
pcm->info_flags = 0;
strcpy(pcm->name, card->shortname);

Here, we first set the capture handlers for this PCM device to the one we defined above. Afterwards, we also set some basic info for the PCM device such as adding our main device as part of the private data (so that we can retrieve it more easily in the handler callbacks).

Now that we've made the device, we want to initialize the memory management associated with the PCM middle layer. ALSA provides some basic memory handling routines for various functions. We want to make use of it since it allows us to reduce the amount of code we write and makes working with userspace that much easier.

ret = snd_pcm_lib_preallocate_pages_for_all(pcm,
                     SNDRV_DMA_TYPE_CONTINUOUS,
                     snd_dma_continuous_data(GFP_KERNEL),
                     MAX_BUFFER, MAX_BUFFER);
if (ret < 0)
        return ret;

The MAX_BUFFER is something we've defined earlier and will be discussed further in the next post. Simply put, it's the maximum size of the buffer in the hardware (the maximum size of data that userspace can request at one time without waiting on the hardware to produce more data).
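
For this hardware, given the layout we'll see in the next post (32 periods of 48 bytes per stream), the define is simply:

#define MAX_BUFFER (32 * 48)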

We are using the simple continuous buffer type here. Your hardware may support direct DMA into the buffers, and in that case you would use SNDRV_DMA_TYPE_DEV along with snd_dma_pci_data() for your PCI device to initialize this. I'm using standard buffers because my hardware will require me to handle moving data around manually.

Next post we'll actually define the hardware and the handler callbacks.

<< Prev | Next >>

Saturday, May 1, 2010

Writing an ALSA driver: The basics

In my last post I described a bit of hardware that I am writing an ALSA driver for. In this installment, I'll dig a little deeper into the base driver. I won't go into the details of the module and PCI initialization that was already present in my driver (I developed the core and v4l2 components first, so all of that is taken care of).

So first off I needed to register with ALSA that we actually have a sound card. This bit is easy, and looks like this:

struct snd_card *card;
ret = snd_card_create(SNDRV_DEFAULT_IDX1, "MySoundCard",
                      THIS_MODULE, 0, &card);
if (ret < 0)
        return ret;

This asks ALSA to allocate a new sound card with the name "MySoundCard". This is also the name that appears in /proc/asound/ as a symlink to the card ID (e.g. "card0"). In my particular instance, I actually name the card with an ID number, so it ends up being "MySoundCard0". This is because I can, and typically do, have more than one of this type of device installed at a time. I notice some other sound drivers do not do this, probably because they don't expect more than one to be installed at a time (think HDA, which is usually embedded on the motherboard, and so won't have two or more inserted into a PCIe slot). Next, we set some of the properties of this new card.

strcpy(card->driver, "my_driver");
strcpy(card->shortname, "MySoundCard Audio");
sprintf(card->longname, "%s on %s IRQ %d", card->shortname,
        pci_name(pci_dev), pci_dev->irq);
snd_card_set_dev(card, &pci_dev->dev);

Here, we've assigned the name of the driver that handles this card, which is typically the same as the actual name of your driver. Next is a short description of the hardware, followed by a longer description. Most drivers seem to set the long description to something containing the PCI info. If you have some other bus, then the convention would follow to use information from that particular bus. Finally, set the parent device associated with the card. Again, since this is a PCI device, I set it to that.

Now to set this card up in ALSA along with a decent description of how the hardware works. We add the next bit of code to do this:

static struct snd_device_ops ops = { NULL };
ret = snd_device_new(card, SNDRV_DEV_LOWLEVEL, mydev, &ops);
if (ret < 0)
        return ret;

We're basically telling ALSA to create a new card that is a low level sound driver. The mydev argument is passed as the private data that is associated with this device, for your convenience. We leave the ops structure as a no-op here for now.

Lastly, to complete the registration with ALSA:

if ((ret = snd_card_register(card)) < 0)
        return ret;

ALSA now knows about this card, and lists it in /proc/asound/ among other places such as /sys. We still haven't told ALSA about the interfaces associated with this card (capture/playback). This will be discussed in the next installment. One last thing, when you cleanup your device/driver, you must do so through ALSA as well, like this:

snd_card_free(card);

This will cleanup all items associated with this card, including any devices that we will register later.

<< Prev | Next >>

Friday, April 30, 2010

Writing an ALSA driver

Over the past week I've been writing an ALSA driver for an MPEG-4 capture board (4/8/16 channel). What I discovered is there are not many good documents on the basics of writing a simple ALSA driver. So I wanted to share my experience in the hopes that it would help others.

My driver needed to be pretty simple. The encoder produced 8kHz mono G.723-24 ADPCM. To save you the Wikipedia trip, that's 3 bits per sample, or 24000 bits per second. The card produced this at a rate of 128 samples per interrupt (48 bytes) for every channel available (you cannot disable individual channels).

The card delivered this data in a 32kbyte buffer, split into 32 pages. Each page was written as 48*20 channels, which took up 960 bytes of the 1024 byte page (it could do up to this number, but for my purposes I was only using 4, 8 or 16 channels of encoded data depending on the capabilities of the card).

Now, let's set aside the fact that ALSA does not have a format spec for G.723-24, so my usage entails dumping out the 48 bytes to userspace as unsigned 8-bit PCM (and my userspace application handles the G.723-24 decoding, knowing that it is getting this data).

First, where to start in ALSA. I had to decide how to expose these capture interfaces. I could have exposed a capture device for each channel, but instead I chose to expose one capture interface with a subdevice for each channel. This made programming a bit easier, gave a better overview of the devices as perceived by ALSA, and kept /dev/snd/ less cluttered (especially when you had multiple 16-channel cards installed). It also made programming userspace easier since it kept channels hierarchically under the card/device.

Next post, I'll discuss how the initial ALSA driver is setup and exposed to userspace.

Next >>

Saturday, August 23, 2008

Intrepid Ibex (8.10) Moves to 2.6.27

After considerable discussion on ubuntu-devel mailing list, and in the Ubuntu Kernel Team's last IRC meeting, we've made the move to 2.6.27 for Intrepid in the hopes that it will provide a more robust experience for our users.

The source package was just uploaded to the archive for building, so in about 24 hours, we should see it on mirrors.

Tuesday, August 19, 2008

The Linux Ecosystem...Changes Ahead

So I've been privy to, and sometimes involved in, many conversations about the Linux ecosystem. How it evolved, how it is now, and where it will go from here. The most important factor has been how Linux kernel development has been funded over the years and what needs to happen to ensure it remains funded.

Given that Linux is not owned by anyone (not even by us, the developers) it is hard to say who should and will fund its future. The tides of money are constantly shifting as companies involved in this ecosystem decide where they fit in, what they want from it, and ultimately, how much they are willing to spend and for how long.

So let's go through some history. I'd like to remind people that I am not an expert on Linux kernel history in this sense. This is all from my recollection over 10+ years of being involved.

In the beginning (the past)



Well, there wasn't much. Let's face it, in the start, it was a hobby for most everyone. No company took it seriously. The device vendors that did write drivers did so of their own free will, and usually poorly. Volunteers still had to munge the code and cram it into the main kernel tree. It was a much simpler time. Folks did things for vanity and sheer enjoyment.

We can relate this time to when things got done because some individual wanted it done. Corporations were still on the sidelines waiting to see what happened (if they were even looking at all).

At this point, people were working on the kernel in their free time. Lots of them were in college, which means no families to support, no mortgage, no worry about retirement. I'm sure a lot of people (including myself) thought "This looks good on a resume, plus I get to do things I like".

The corporations emerge (still the past)



So at some point, people decided they wanted to make a living off this thing called Linux. Everyone knew that for hardware vendors to care about Linux, there had to be a corporate entity to talk to and users demanding it. The boom of venture-capital-backed Linux vendors (aka distributions) emerged. We know they have come and gone over the years, with only a few emerging in the black.

These companies positioned themselves in many different ways. Some tried to become service oriented, while others relied on licensing to make their money. I won't delve into this topic much, but let's look at how these Linux vendors advanced.

Remember, we are just coming out of the previous stage. No corporations are yet seriously funding development in Linux. Linux itself is still a long way off from having all the features that users really want in an OS. How do these distributions get those features? Easy: they hire developers that have been doing this all along.

This is the initial way things get done. You pay someone to do it.

So how did this fall on the distributions? Simply because the corporate distributions required it in order to compete in the market. The OEM's and hardware vendors didn't care. Their stuff was selling on Windows and Unix platforms without problems. They had no financial requirement or user demand to worry about supporting Linux at this point. If their hardware was popular enough with Linux, someone would write a driver for it.

Enter the hardware vendors (sort of past and up to now)



Now that Linux is starting to be a commercial "thing", hardware vendors are taking notice. Not only because of the press around it, but because their own customers are starting to demand it. Big customers.

In addition, companies are starting to see a way for them to piggyback on all the hype and press coverage. If you're "Linux Friendly", you've got a whole bunch of geeks with purchasing leverage behind your company.

Large hardware vendors are starting to take notice. Companies devote whole groups of engineers to supporting Linux. And not just in some odd way; they are doing it our way. Open, and in the community. They work with distributions to get early adoption of drivers. They work with upstream to integrate these drivers and features into the kernel. They participate in steering the process, and drive a lot of what we do. OEM's can finally lay down the requirement "Must be supported in Linux" to their ODM's.

Where did these engineers come from? Right from the Linux kernel community. Most people hired to work on the Linux kernel by a company, cut their teeth for zero money in the community. Some of them have also been hired away from the distributions.

As a previous hiring manager for Ubuntu's kernel team, I can tell you personally, I generally skipped the CV and went straight to the kernel commit logs and linux-kernel mailing list to verify someone. The CV was just a backup.

This is one of the beauties of our process. On a CV, most people look good. Even their references (which they choose) are all likely to tell you what you want to hear. But nothing can tell you about a person's personality like the thread from 6 months ago where they tried to get a feature accepted upstream on lkml. You wouldn't have seen how this person either defended themselves valiantly, or wussed out just because Alan Cox had some harsh words. You wouldn't have known how they worked for months to take their original idea and rework it to suit the issues brought up on submission, or whether they let the idea die because they couldn't handle the criticism.

Back to the topic though... I'll repeat, these hardware vendors are hiring kernel developers. But it isn't just the hardware manufacturers. You also have companies like Oracle, Google and VMWare hiring them. Some companies even have enough cashflow that they can hire a high-level upstream kernel developer for pure bragging rights, or a consortium sponsored by many companies can hire people to just keep doing what they were already doing for nothing.

This is definitely where the shift comes in. More and more, we see hardware vendors developing Linux drivers that are released at the same time the device goes public. This development is occurring in-house, and not out in the community. Sure, the community still integrates it, and goes through the code review process, but how many new drivers are coming from someone not associated with the vendors that made the device? Fewer and fewer.

The road ahead...(now and into the future)



So as things move ahead, there will be less for the distributions to do for hardware support. Most vendors will produce the driver, and community+distributions will play a big part in integrating these drivers. New subsystems will emerge to support new ranges of devices. It's not too hard to see vendors working together in the community to solidify features and API's that their drivers need (e.g. mac80211, GPU, other wireless technologies, multi-core features, memory management, etc).

Most developers will cut their teeth on helping to integrate and enhance these things from the vendors. The community will revolve around major restructuring of the kernel to ease development and improve stability.

So where does that leave the distributions? With the majority of the kernel work being handled by vendors, the distributions will fall into a level of consumption. Let's face it, distributions are best at integration (which is part development, so let's not get confused). Distributions are also good at noticing trends, which are fed upstream. Yes, they will still drive new ideas, and possibly even develop these ideas in-house, but they won't be the ones driving the bulk of the work, because they won't be the ones creating the new hardware that requires it.

The idea that distributions should ultimately be responsible for kernel funding is not sustainable. In the current ecosystem, a distro is not required to invest heavily in upstream kernel work. Because Linux is open and free, there is nothing forcing them to do so. A company that instead invests heavily in integration and usability will produce a better product for the masses, beating the other distributions in the end, and leaving the kernel developers hired by those other distributions without jobs.

In the end...(entirely made up)



If a distribution is popular enough, the hardware vendors will want it to run on their goods. OEM's and hardware vendors who work together to help bring support for their hardware to the kernel will ultimately beat out competitors. The age where Linux is in-demand enough to create this ecosystem is close at hand, and in some ways, already exists.

Nothing is written in stone. No one can predict what will happen, we can only speculate. However, we can probably be assured that the funding to keep Linux around will come from many places, maybe even ones we haven't thought of yet.

I, for one, look forward to what's ahead.