Sunday, June 27, 2010
So this is one of the few times I've decided to get political and/or rational. Most of my career has been spent on Linux, and while the gettin's good, I don't subscribe to the notion that Linux, as a desktop, will take over the world.
Let me make one thing clear. I do believe Linux, as a core, will succeed in many forms. On the server, on mobile products (where it is the core and not exposed directly to the user, a la Android).
So here's the problem as I see it: too many choices. Yes, this has been beaten to death, and to some extent many Linux vendors have taken note. Debian used to be a free-for-all where all of the choices were exposed to the user. People who used Debian loved the choices, but the truth is they didn't love the choices themselves so much as the fact that their own choice was among them.
If a user doesn't have a preference, for example with a desktop environment, then choices are a burden: they don't know how to pick one. So now there is a default. That's great, but across Linux distributions, even when the default is Gnome, the little nuances of each system differentiate the whole thing so much that no Gnome desktop is truly the same from one distribution to another.
So why are choices bad? I want to take an example from a book I was reading recently called The 4-Hour Workweek. It tells of a watch company that wanted to advertise in a magazine. The company had many different styles of watches and wanted to run a full-page ad that showed off 6 of them. The advertising executive said they should pick one watch and show off that one. To settle the dispute, they ran two full-page ads: one with the 6-watch layout, and the other with a single watch. Don't you know that the single-watch ad outperformed the 6-watch ad by a factor of 6? Interesting...
So anyway, choices are bad for consumers. They would rather have one choice, even if it may limit them in some way. Apple figured this out when they almost went the way of the Commodore by making so many damn types of Macintoshes (when Jobs wasn't at the helm). Microsoft also learned this when they had an extensive list of Windows variants (Full/Pro/Home/Home-Pro/Server/etc/etc), but I don't think they've recovered from that very well.
Now on to the meat of the problem. Linux has too many choices...so what can be done? Well, as a developer, not much. It's not our job to make these decisions. We are the ones who provide all the choices: drivers for every device, apps that do anything you want, themes, icons, documentation, hardware support. The real issue is that some company needs to break out of the "We're a Linux distribution" mold.
Let's take Dell's Ubuntu Linux offering as an example (I'm not knocking this effort; I helped start it when I was working for Canonical, and it's a great offering). If a normal user somehow gets to the Dell Linux page and says "wow, what is this Linux thing?", they will surely go to Google.com and start checking. Bad? Hell yes. The huge amount of information, choices and decisions quickly becomes apparent to them. They start asking questions like "Is Ubuntu the _right_ Linux for me?", "Should I try other Linuxes as well?" and "Why does Dell only offer Ubuntu?".
Indeed, these are good questions, but there is no answer to them that is going to take the average user from "I've always used Windows/MacOS and know how it works" to "I'm going to try this thing called Linux."
In my opinion, as unfortunate as it may sound, in the end, some company will deliver a product that makes no mention of Linux other than in the copyright attributions and source code, and will call it something completely different. Maybe they will call it Chrome OS?
Saturday, June 19, 2010
Using your new Bluecherry MPEG-4 codec card and driver...
Now that the dust has settled and people are taking notice of the new driver for Bluecherry's MPEG-4 codec cards, here's a quick How-To for using it.
You will notice that there are two types of v4l2 devices created for each card. One device for the display port that produces uncompressed YUV and one for each input that produces compressed video in either MPEG-4 or MJPEG.
We'll start with the display port device. When loading the driver, a display port is created as the first device for that card. You can see in dmesg output something like this:
solo6010 0000:03:01.0: PCI INT A -> GSI 22 (level, low) -> IRQ 22
solo6010 0000:03:01.0: Enabled 2 i2c adapters
solo6010 0000:03:01.0: Initialized 4 tw28xx chips: tw2864[4]
solo6010 0000:03:01.0: Display as /dev/video0 with 16 inputs (5 extended)
solo6010 0000:03:01.0: Encoders as /dev/video1-16
solo6010 0000:03:01.0: Alsa sound card as Softlogic0
This is for a 16-port card. The output for a 4-port card would show "Encoders as /dev/video1-4", and an 8-port card would show /dev/video1-8.
The display port allows you to view and configure what is shown on the video out port of the card. The device has several inputs, depending on which card you have installed:
- 4-port: 1 input per port and 1 virtual input for all 4 inputs in 4-up mode.
- 8-port: 1 input per port and 2 virtual inputs for 4-up on inputs 1-4 and 5-8 respectively.
- 16-port: 1 input per port, 4 virtual inputs for 4-up on inputs 1-4, 5-8, 9-12 and 13-16, and 1 virtual input for 16-up on all inputs.
You do not have to open this device for the video output of the card to work. If you open the device, set the input to input 2, and close it (without viewing any of the video), it will continue to show that input on the video out of the card. So you can simply change inputs using v4l2 ioctls.
This is useful if you want to have a live display to a CRT and use a simple program that rotates through the inputs (or multi-up virtual inputs) at a few-second intervals.
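For illustration, here's a minimal C sketch of such a rotation program. The device path, the 5-second interval and the simple index-order rotation are just assumptions for the example, so adapt them to taste:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
    /* Display device for the first card (see the dmesg output above) */
    int fd = open("/dev/video0", O_RDWR);
    int ninputs = 0;
    int cur;

    if (fd < 0)
        return 1;

    /* Count the available inputs (physical plus virtual multi-up) */
    for (;;) {
        struct v4l2_input inp = { .index = ninputs };
        if (ioctl(fd, VIDIOC_ENUMINPUT, &inp) < 0)
            break;
        ninputs++;
    }
    if (ninputs == 0)
        return 1;

    /* Rotate through the inputs at 5-second intervals */
    for (cur = 0; ; cur = (cur + 1) % ninputs) {
        if (ioctl(fd, VIDIOC_S_INPUT, &cur) < 0)
            break;
        sleep(5);
    }

    close(fd);
    return 0;
}

Since the driver remembers the last selected input even after you close the device, you could also run this as a one-shot that sets a single input and exits.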
You can still use vlc, mplayer or whatever to view this device (you can open it multiple times).
Now for the encoder devices. There's one device for each physical input on the card. The driver will allow you to record MPEG-4 and MJPEG from the same device (but you must open it twice, once for each feed). The video format cannot be reconfigured once recording starts. So if you open the device for MPEG-4 at full D1 resolution and 30fps, that's what you're going to get if you also open a simultaneous record for MJPEG.
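As a rough sketch of what that second open might look like, here's how you could request MJPEG on the extra handle via VIDIOC_S_FMT. Note that V4L2_PIX_FMT_MJPEG is my assumption for the fourcc the driver expects; check VIDIOC_ENUM_FMT on your system to be sure:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Open one encoder channel a second time and ask for MJPEG on that
 * handle.  The resolution/framerate negotiated by the first (MPEG-4)
 * open is kept; only the compression is changed for this handle. */
int open_mjpeg_feed(const char *dev)
{
    int fd = open(dev, O_RDWR);
    struct v4l2_format fmt;

    if (fd < 0)
        return -1;

    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(fd, VIDIOC_G_FMT, &fmt) < 0)
        goto fail;

    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_MJPEG;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
        goto fail;

    return fd;
fail:
    close(fd);
    return -1;
}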
However, it's good to note here that MJPEG will automatically skip frames when recording. This allows you to pipe the output to a network connection (e.g. MJPEG over HTTP) with no worry of the remote connection being overloaded on bandwidth.
However, this isn't so for MPEG-4. It is possible, if you are too slow at reading (not likely), to fall behind the card's internal buffer. I was not able to make this happen while writing full frames to disk across 44 simultaneous recordings (4 cards: 16, 16, 8 and 4 ports).
Unlike any card previously supported by v4l2, the Bluecherry cards produce containerless MPEG-4 frames. Most v4l2 applications expect some sort of MPEG-2 stream, such as program or transport. Since these programs do not expect raw MPEG-4 frames, I don't know of any that are capable of playing the encoders directly (much less recording from them). You can do something simple like 'cat /dev/video1' and somehow pipe it to vlc (I haven't tested this), or write a program that just writes the frames to disk (I have tested this; most programs can play the raw m4v files produced from the driver).
However, since most people will record to disk, the easiest way is to write the video frames straight out to disk.
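A bare-bones sketch of that approach is below. It assumes the device is left in its default MPEG-4 format and that each read() hands you one complete frame; the output filename is arbitrary:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[512 * 1024];
    int in = open("/dev/video1", O_RDONLY);
    int out = open("record.m4v", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    ssize_t len;

    if (in < 0 || out < 0) {
        perror("open");
        return 1;
    }

    /* Each read() is assumed to return one complete MPEG-4 frame */
    while ((len = read(in, buf, sizeof(buf))) > 0)
        write(out, buf, len);

    close(in);
    close(out);
    return 0;
}

The resulting raw record.m4v should play back in most players, as noted above.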
Now on to the audio. The cards produce what is known as G.723, which is a voice codec typically found on phone systems (especially VoIP).
Since ALSA currently doesn't have a format for G.723, the driver shows it as unsigned 8-bit PCM audio. However, I can assure you that it isn't. I have sent a patch that was included in alsa-kernel (hopefully getting synced to mainline soon), but this only defines the correct format; it doesn't change the way you handle it at all.
You must convert G.723-24 (3-bit samples at 8 kHz) yourself. The example program I provide in my next post will show you how to do this, as well as how to convert it to MP2 audio and record all of this to a container format on disk for later playback.
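Until then, pulling the raw (still-encoded) bytes out of ALSA is just ordinary capture code. The sketch below assumes the card registered as Softlogic0 exposes a plain mono 8 kHz capture PCM; the device string and channel layout are assumptions on my part, and keep in mind that what you read is G.723 that still needs decoding, even though it's labeled as U8:

#include <stdio.h>
#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;
    unsigned char buf[1024];
    FILE *out = fopen("audio.g723", "wb");

    if (!out)
        return 1;

    /* Card name taken from the dmesg output above; layout is an assumption */
    if (snd_pcm_open(&pcm, "hw:Softlogic0", SND_PCM_STREAM_CAPTURE, 0) < 0)
        return 1;

    /* The driver advertises U8, but the bytes are really G.723 */
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_U8,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           1, 8000, 1, 500000) < 0)
        return 1;

    for (;;) {
        snd_pcm_sframes_t n = snd_pcm_readi(pcm, buf, sizeof(buf));
        if (n < 0) {
            if (snd_pcm_recover(pcm, n, 0) < 0)
                break;
            continue;
        }
        fwrite(buf, 1, n, out);
    }

    snd_pcm_close(pcm);
    fclose(out);
    return 0;
}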
Wednesday, June 16, 2010
Softlogic 6010 4/8/16 Channel MPEG-4 Codec Card Driver Released
As I've talked about before, the company I work for has been dedicated to producing stable video surveillance products based on Linux.
Bluecherry's primary device for their video surveillance applications is the Softlogic based MPEG-4 codec card, which is available in 4, 8 and 16 channel models. The original driver for this card, although available as Open Source, was pretty pathetic to say the least. Most of it was just a kludge of the Windows driver, exposing all of the functionality, but with little effort to make it Linux savvy.
That's where I came in. I've since rewritten the driver so that it makes use of Linux's Video4Linux2 and ALSA driver APIs. It's currently 90% functional, and many times more efficient than the original OEM driver.
Here is a quick run-down of some of the features and improvements over the original driver:
- Video4Linux2 interface allows easy use of existing capture software
- Alsa interface allows for easy audio capture (however, see G.723 caveats from my previous posts)
- Zero-copy in the driver. The original driver DMA'd and then copied the MPEG frames to userspace. The new driver makes use of v4l2 buffers and can DMA directly to an mmap'd buffer for userspace (see the sketch after this list).
- Simultaneous MPEG/MJPEG feed per channel, selectable via v4l2 format
- Standard v4l2 uncompressed video YUV display with multi-channel display format (4-up)
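To make the zero-copy point concrete, here's a sketch of the stock v4l2 mmap streaming loop a capture application would run against one of the encoder devices. Nothing in it is specific to this driver; it's just the standard REQBUFS/QBUF/DQBUF dance, with the device and output paths chosen arbitrarily:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev2.h>

#define NBUFS 4

int main(void)
{
    int fd = open("/dev/video1", O_RDWR);
    int out = open("record.m4v", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    void *map[NBUFS];
    struct v4l2_requestbuffers req;
    enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    unsigned int i;

    if (fd < 0 || out < 0)
        return 1;

    /* Ask the driver for a handful of DMA-able buffers */
    memset(&req, 0, sizeof(req));
    req.count = NBUFS;
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
        return 1;
    if (req.count > NBUFS)
        req.count = NBUFS;

    /* Map each buffer into our address space and queue it */
    for (i = 0; i < req.count; i++) {
        struct v4l2_buffer buf;
        memset(&buf, 0, sizeof(buf));
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = i;
        if (ioctl(fd, VIDIOC_QUERYBUF, &buf) < 0)
            return 1;
        map[i] = mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, buf.m.offset);
        if (map[i] == MAP_FAILED || ioctl(fd, VIDIOC_QBUF, &buf) < 0)
            return 1;
    }

    if (ioctl(fd, VIDIOC_STREAMON, &type) < 0)
        return 1;

    /* Dequeue a filled buffer, write out the frame, requeue it */
    for (;;) {
        struct v4l2_buffer buf;
        memset(&buf, 0, sizeof(buf));
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)
            break;
        write(out, map[buf.index], buf.bytesused);
        if (ioctl(fd, VIDIOC_QBUF, &buf) < 0)
            break;
    }

    ioctl(fd, VIDIOC_STREAMOFF, &type);
    return 0;
}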
Now that the driver is nearing completion, it's about time to release it. I've done so via Launchpad.
If you are on an Ubuntu system, you can install the DKMS package from the PPA archive using these commands:
sudo add-apt-repository ppa:ben-collins/solo6x10
sudo apt-get update
sudo apt-get install solo6010-dkms
Note: I've only supplied this for Lucid (Ubuntu 10.04) right now, but if you download the .deb or the .tar.gz you should be able to install it on any recent kernel.
Friday, June 11, 2010
Feedburner: Adding Flattr to your FeedFlare (Part: 2)
This is a follow up to my previous post: Feedburner: Adding Flattr to your FeedFlare.
I've been wrestling with FeedBurner's FeedFlare API for a few nights now. Most notably, I've had trouble getting some of the documented XPath functions to work, and dealing with what appear to be delays in updating the flare after you add it.
My goal was to add categories to the DynamicFlare href so you could pass those along to Flattr. The problem is that if you add something like ${a:category[1]/@term} to the href, and a:category[1] doesn't exist in your feed, it will not add the flare to your feed (sort of like a filter if the attribute proves false()).
In a final fit of frustration, I decided to drop any passing of information from the DynamicFlare href other than the feedUrl. This in itself proved difficult, since ${feedUrl} doesn't work as advertised. I instead opted to pass ${a:link[@rel="self"]/@href}, which appears to work on my feed. YMMV.
I've gotten rid of the files I linked to in my last post so people don't use them. For the quick and dirty, here's the URL to use for Personal FeedFlare now:
http://www.swissdisk.com/~bcollins/feedflare/flattr-me-dynamic-v2.php
There are two options you can pass to this script:
- uid: Your Flattr UID (required)
- lng: Your preferred language (defaults to en_GB, aka English)
I used this for mine:
http://www.swissdisk.com/~bcollins/feedflare/flattr-me-dynamic-v2.php?uid=17833&lng=en_GB
That's it! The second script will now parse the feed and pass up to 980 characters as the desc, up to 80 characters of the title, and all of the categories as tags.
You can also check here for all the PHP-Source files so you can modify to your liking.
Tuesday, June 8, 2010
Feedburner: Adding Flattr to your FeedFlare
Update 2010-06-11: This article and the information within are superseded by Feedburner: Adding Flattr to your FeedFlare (Part: 2)
I've added Flattr to my blog and also wanted to add it to my feedburner FeedFlare, but alas, no one has created one yet. So I've gone through the trouble of doing it for you :)
First, I went to the Feedburner FeedFlare API documentation. I won't go into the details of writing your own flare, but I opted for the dynamic type, since it would allow me to show how many times one of my blog posts had been flattered.
Second, I dove into the Flattr JavaScript API. I don't think they recommend this, but it's the only way I could get to the button information contained in their default IFrame.
Third, I downloaded the PHP Simple HTML DOM Parser. There's probably a simpler way to parse the IFrame sent back from Flattr, but I opted for this method since it was pretty straight forward.
For the lazy, you can use my existing FeedFlare URLs as your own. You will need to go to your feedburner page, log in, select the feed you want to add this to, click on "Optimize" and then "FeedFlare". Below the stock list you will see a place to enter a URL. Enter the URL below and BE SURE to replace "your_uid" with your Flattr UID, else you won't get the money.
http://www.swissdisk.com/~bcollins/flattr-me-dynamic.php?uid=your_uid
For the interested, here are the two files I've created. First is the dynamic PHP FeedFlare file:
<FeedFlareUnit>
<Catalog>
<Title>Flattr Me</Title>
<Description>
Adds a Flattr link including flattr count for each feed unit.
</Description>
<Link href="http://www.swissdisk.com/~bcollins/flattr-me-dynamic.php?uid=flattr_uid"/>
<Author email="benmcollins13@gmail.com">Ben Collins</Author>
</Catalog>
<DynamicFlare href="http://www.swissdisk.com/~bcollins/flattr-me-static.php?uid=<?
print $_GET['uid']; ?>&title=${title}&link=${link}"/>
<Sample>Flattr (11)</Sample>
</FeedFlareUnit>
Note that the <Link> element references another PHP script, and that this is in fact PHP. This allows us to pass along the Flattr UID to the second script, which is the one that actually produces the FeedFlare (feedburner periodically checks the second URL it gets from this file for updates to the FeedFlare).
Now, the second script is the one that uses the simple_html_dom.php library I spoke of. You will see it referenced in the file below. Basically I pack the data just like the original Flattr load.js script does, and request the Flattr button, and then rip a few bits of information from it:
<?
include_once("simple_html_dom.php");
$btn_url = "http://api.flattr.com/button/view/";
$data = "button=compact&uid=" . $_GET['uid'] .
"&url=" . $_GET['link'] . "&lng=en_US&hide=0&title=" .
$_GET['title'] . "&cat=text&tag=&desc=";
$html = file_get_html($btn_url . bin2hex($data));
$els = $html->find("span.flattr-count");
$count = $els[0]->innertext;
$els = $html->find("a.flattr-pop");
$link = $els[0]->href;
$els = $html->find("span.flattr-link");
$txt = $els[0]->innertext;
?>
<FeedFlare>
<Text><? print "$txt ($count)"; ?></Text>
<Link href="<? print $link; ?>"/>
</FeedFlare>
Those familiar with Flattr will note that I did not pass in the description, which could probably be added in the first script (or at least a shortened version of it) and then passed to the button. Usually the description is the first few hundred characters of the post in this case.
Hope all works well. Please post back if you take the time to add the description to this!
Friday, June 4, 2010
PHP: Sending Motion-JPEG
As you may know from past posts, I was trying to send Motion-JPEG from a PHP script. This proved (for many reasons) not so easy. After I conquered writing PHP extension modules, I was still left with nuances in PHP that made it difficult to send MJPEG from my script.
Here's the basic run-down of difficulties:
- PHP buffers output to the client and this keeps you from doing continuous streams of data easily
- PHP doesn't allow you to send headers after it thinks the headers have already been sent
- Apache has some other handlers that also cause buffering
- Apache does some client negotiation that conflicts with MJPEG (mod_gzip)
Searching the Eentarnets did not produce good results on how to handle this. At least, not in a single place and easily findable. So here's my solution for others to use:
<?
# Used to separate multipart
$boundary = "my_mjpeg";
# We start with the standard headers. PHP allows us this much
header("Cache-Control: no-cache");
header("Cache-Control: private");
header("Pragma: no-cache");
header("Content-type: multipart/x-mixed-replace; boundary=$boundary");
# From here out, we no longer expect to be able to use the header() function
print "--$boundary\n";
# Set this so PHP doesn't timeout during a long stream
set_time_limit(0);
# Disable Apache and PHP's compression of output to the client
@apache_setenv('no-gzip', 1);
@ini_set('zlib.output_compression', 0);
# Set implicit flush, and flush all current buffers
@ini_set('implicit_flush', 1);
while (ob_get_level() > 0)
ob_end_flush();
ob_implicit_flush(1);
# The loop, producing one jpeg frame per iteration
while (true) {
# Per-image header, note the two new-lines
print "Content-type: image/jpeg\n\n";
# Your function to get one jpeg image
print get_one_jpeg();
# The separator
print "--$boundary\n";
}
?>
That's it in a nutshell. Make sure that your PHP script does not contain any newlines or data before or after the PHP enclosures (<? ... ?>).
The joy of writing a php5 module
As a follow up to my last post, I wanted to give a quick update.
As it turns out, I ended up writing a php5 Zend module to wrap up some functions I use to access v4l2 devices. I have to say that writing a php5 module was pretty straightforward. Big thanks to the Extension Writing tutorial I found, which was well written and did not leave me with any questions.
I was able to get the module ready in a few hours, and spent the rest of this morning cleaning it up and tweaking it a bit.
Now I'm completely able to read my v4l2 devices and mjpeg stream them from my PHP script :)
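For anyone curious what the guts of such a module look like, here's a rough, hypothetical sketch of one wrapper function using the PHP 5 Zend API. The function name, the idea of returning a raw fd, and the MJPEG fourcc are my own illustration rather than the actual module, and the usual extension boilerplate (config.m4, module entry, MINIT) is left out:

#include "php.h"
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Hypothetical wrapper: open a v4l2 device, switch it to MJPEG and
 * return the raw fd so other wrapper functions can read() frames. */
PHP_FUNCTION(v4l2_open_mjpeg)
{
    char *path;
    int path_len, fd;
    struct v4l2_format fmt;

    if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "s",
                              &path, &path_len) == FAILURE)
        RETURN_FALSE;

    fd = open(path, O_RDWR);
    if (fd < 0)
        RETURN_FALSE;

    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(fd, VIDIOC_G_FMT, &fmt) < 0) {
        close(fd);
        RETURN_FALSE;
    }

    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_MJPEG;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) {
        close(fd);
        RETURN_FALSE;
    }

    RETURN_LONG(fd);
}

static const zend_function_entry v4l2_functions[] = {
    PHP_FE(v4l2_open_mjpeg, NULL)
    { NULL, NULL, NULL }
};

From there, the rest of the module only needs thin wrappers around read() and close() so the PHP script never has to touch the fd directly.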
Wednesday, June 2, 2010
Request-for-help: PHP and Video4Linux
As it turns out, I do not like writing web applications. Give me registers and DMA, keep the CSS and JS...thanks.
Anyway, I have to find a way to feed an MJPEG output from a PHP script. WAIT! I know this sounds easy, and if not for all the caveats of what I have to adhere to, it would be very simple...maybe.
It seems that PHP doesn't have a way to use ioctl()s. I can open() a v4l2 device just fine, and read() from it with ease, but you're SOL if you want to do something cool with that file handle in PHP.
I need to be able to do ioctl()s because my v4l2 device requires at least one so I can put it into MJPEG format (as opposed to the default MPEG format). I can serve up these JPEG/MJPEG streams through Apache perfectly using a C program I've written.
However, I have to use PHP because the rest of the web application is written in PHP and there are already authentication mechanisms storing credentials in PHP sessions. I don't want to have to parse PHP sessions in a C program.
The normal method I found for doing JPEG from a PHP script worked like this:
<?
header("Content-Type: application/jpeg");
passthru("/usr/bin/grabjpeg");
?>
This works perfectly. Except I want to be able to do a MJPEG feed which requires sending one image after the next with headers in between. PHP doesn't like this much, nor does it like the fact that I want all of these headers to come directly from my C program and not from the PHP script. I also do not want to call grabjpeg for each frame, since that's too much overhead in between frames.
What ends up happening is that my headers from the C program are sent as part of the content that the client thinks is the JPEG file.
Right now, I can only see one way to handle this, and that's to write a PHP module to expose libv4l, but I'm open to suggestions on being able to call an ioctl() from within a PHP script.