After much delay, and much anticipation, we at Bluecherry have finally released our first public beta of version 2 of our DVR product.
For extensive details, please see the announcement.
I'm honored to have worked on this project. While I've been dealing with the driver (solo6x10) and backend server for over a year, the final parts of the product have fallen into place in just the last 6 months. It was an overwhelming effort by just 4 developers on a project that spanned from hardware to UI.
Friday, November 12, 2010
Tuesday, September 14, 2010
HDCP Master Key
This is a mirror of the HDCP master key. As reported on Slashdot, this key is used to make source and sink keys, essentially making HDCP useless as a DRM mechanism. It sounds like it will leave all HDCP sources open to attack (e.g. Blu-ray players, satellite and cable boxes, etc.). Enjoy, and hopefully this is real.
HDCP MASTER KEY (MIRROR THIS TEXT!)
This is a forty times forty element matrix of fifty-six bit
hexadecimal numbers.
To generate a source key, take a forty-bit number that (in
binary) consists of twenty ones and twenty zeroes; this is
the source KSV. Add together those twenty rows of the matrix
that correspond to the ones in the KSV (with the lowest bit
in the KSV corresponding to the first row), taking all elements
modulo two to the power of fifty-six; this is the source
private key.
To generate a sink key, do the same, but with the transposed
matrix.
6692d179032205 b4116a96425a7f ecc2ef51af1740 959d3b6d07bce4 fa9f2af29814d9
82592e77a204a8 146a6970e3c4a1 f43a81dc36eff7 568b44f60c79f5 bb606d7fe87dd6
1b91b9b73c68f9 f31c6aeef81de6 9a9cc14469a037 a480bc978970a6 997f729d0a1a39
b3b9accda43860 f9d45a5bf64a1d 180a1013ba5023 42b73df2d33112 851f2c4d21b05e
2901308bbd685c 9fde452d3328f5 4cc518f97414a8 8fca1f7e2a0a14 dc8bdbb12e2378
672f11cedf36c5 f45a2a00da1c1d 5a3e82c124129a 084a707eadd972 cb45c81b64808d
07ebd2779e3e71 9663e2beeee6e5 25078568d83de8 28027d5c0c4e65 ec3f0fc32c7e63
1d6b501ae0f003 f5a8fcecb28092 854349337aa99e 9c669367e08bf1 d9c23474e09f70
3c901d46bada9a 40981ffcfa376f a4b686ca8fb039 63f2ce16b91863 1bade89cc52ca2
4552921af8efd2 fe8ac96a02a6f9 9248b8894b23bd 17535dbff93d56 94bdc32a095df2
cd247c6d30286e d2212f9d8ce80a dc55bdc2a6962c bcabf9b5fcbe6f c2cfc78f5fdafa
80e32223b9feab f1fa23f5b0bf0d ab6bf4b5b698ae d960315753d36f 424701e5a944ed
10f61245ebe788 f57a17fc53a314 00e22e88911d9e 76575e18c7956e c1ef4eee022e38
f5459f177591d9 08748f861098ef 287d2c63bd809e e6a28a6f5d000c 7ae5964a663c1b
0f15f7167f56c6 d6c05b2bbe8800 544a49be026410 d9f3f08602517f 74878dc02827f7
d72ef3ea24b7c8 717c7afc0b55a5 0be2a582516d08 202ded173a5428 9b71e35e45943f
9e7cd2c8789c99 1b590a91f1cffd 903dca7c36d298 52ad58ddcc1861 56dd3acba0d9c5
c76254c1be9ed1 06ecb6ae8ff373 cfcc1afcbc80a4 30eba7ac19308c d6e20ae760c986
c0d1e59db1075f 8933d5d8284b92 9280d9a3faa716 8386984f92bfd6 be56cd7c4bfa59
16593d2aa598a6 d62534326a40ee 0c1f1919936667 acbaf0eefdd395 36dbfdbf9e1439
0bd7c7e683d280 54759e16cfd9ea cac9029104bd51 436d1dca1371d3 ca2f808654cdb2
7d6923e47f97b5 70e256b741910c 7dd466ed5fff2e 26bec4a28e8cc4 5754ea7219d4eb
75270aa4d3cc8d e0ae1d1897b7f4 4fe5663e8cb342 05a80e4a1a950d 66b4eb6ed4c99e
3d7e9d469c6165 81677af04a2e15 ada4be60bc348d dfdfbbad739248 98ad5986f3ca1f
971d02ada31b46 2adab96f7b15da 9855f01b9b7b94 6cef0f65663fbf eb328e8a3c6c5d
e29f0f0b1ef2bf e4a30b29047d31 52250e7ae3a4ac fe3efc3b8c2df1 8c997d15d6078b
49da8b4611ff9f b1e061bc9be995 31fd68c4ad6dc6 fd8974f0c506dd 90421c1cd2b26c
53eec84c91ed17 5159ba3711173b 25e318ddceea6a 98a14125755955 2bb97fd341cea2
3f8404769a0a8e bce5c7a45fb5d4 9608307b43f785 2a98e5856afe75 b4dbead4815cac
d1118af62c964a 3142667a5b0d14 6c6f90933acd3d 6b14a0052e2be4 1b1811fda0f554
12300aa7f10405 1919ca0bff56ea d3e2f3aad5250c 4aeeea5101d2ec 377fc499c07057
6cb1a90cdb7b11 3c839d47a4b814 25c5ac14b5ec28 4ef18646d5b9c2 95a98cc51ebd3b
310e98028e24de 092ffc76b79f44 0740a1ca2d4737 b9f38966257c99 a75afc7454abe4
a6dd815be8ccbf ec2cac2df0c675 41f7636aa4080f 30e87b712520fd d5dfdc6d3266ac
ee28f5479f836f 0bf8ee2112173f 43ae802fa8d52d 4e0dffd36c1eac 3cbda974bb7585
fb60a4700470e3 d9f6b6083ef13d 4a5840f02d0130 6c20ef5e35e2bf dad2f85c745b5b
61c5ddc65d3fc9 7f6ec395d4ae22 2b8906fb3996e2 e4110f59eb92ac 1cb212b44128bb
545afda80a4fd1 b1ffea547eab6b fac3d9166afce8 3fe35fe17586f2 9d082667026a4c
17ffaf1cb50145 24f27b316acfff b6bb758ec4ad60 995e8726359ef7 c44952cb424035
5ec53461dbd248 40a1586f04aee7 49ea3fa4474e52 c13e8f52c51562 30a1a70162cfb8
ccbada27b91c33 33661064d05759 3388bb6315b036 0380a6b43851fb 0228dadb44ad3d
b732565bc37841 993c0d383cfaae 0bea49476758ac accc69dbfcde8b f416ab0474f022
2b7dbcc3002502 20dc4e67289e50 0068424fde9515 64806d59eb0c18 9cf08fb2abc362
8d0ee78a6cace9 b6781bd504d105 af65fab8ee6252 64a8f8dd8e2d14 cb9d3354e06b5b
53082840d3c011 8e080bedab3c4c e30d722a455843 24955a20397c17 82495c1c5114e8
656e71c31d813d 1f0a6d291823a1 6327f9534353fa b89529c2f034fb 70e9b12205c7b3
a06c87969407a2 520bfa2fe80f90 da1efc3d345c65 313936ec023811 a8cc87128be2fa
4cd0e8645ee141 be7975519e2b63 9543d23113c2a8 3d87b0da033f22 df0464c704e9d4
7e1a30947e867e 014ae464b37935 5c4babf689fa4e c4aec0cb01cc35 328c0e4a0230e4
fdacb93b419594 26deefc8a553e6 6e75a2d790cb55 2c4554518f7396 94b77184cb145d
95f883f620a8bb edff42866a2783 7b4ee6304b711d ed56e077a4b9fb c4e60e687ff6c3
0cbf144b8f64d5 023dd10a35eddd beaa3323e999c6 d2e016b31c38c4 8d2917a888f799
18c3abd28e736b 8d38e69b4966cc 624db0143dd2e7 5e2fa510f632b7 ee6e64d45b139a
a1c6d852e74be7 429843b9e6bb7e db9ab07c8dc267 9efa092299f071 dcca9e0e61e960
94406fac95f1d8 d19122f3f88782 1b11a662e9c83f d161fd6fb7f032 89f7d984da9d48
a3583fea45fe58 885e2c4839e254 47e87235f713b1 f4732e05b71aee ae026d063f4349
0a481d2db197af abfce1039d4ac0 4a6b89d2d1aeac 0842eb7178cc53 b82ce2835f1937
3b4002ca21d6b6 e64a78a78abb27 8bd6142ad04526 e035dacb23624a 4cf80110135771
7a52fafc92745e efa28a290ea782 735617cd8b0221 b095e9f4b286a5 021e9ba0727645
3e58e9ec16ed1c d7732bb5ba99a6 374bde43fa89a9 cb83e5ef2e4d04 1da4f73566d134
e01da194625c25 d62018764d7473 64643721313d24 5a01badd970941 481c9578781414
a4d3faa92d1fef bd4b247d37862a 5332a7ca3c2ca6 393ee51989d5a9 01a6e564040d37
390c472ee27892 f0217fe009e9b4 5d3f04da415b35 612ecd5b8e4eac 757e27d2169f2d
92853b737b7526 9ac837c86476df e956c2b45ebd5e d4fa6da687ac39 60f4343669ddd3
64b8d778e72e78 f86cd55efe92b8 a9adbf2e728440 966c8282cee1f9 ea195972b883f4
46ac03b37e7f24 744df253954ae5 22e3f9a0adbc58 6add7c7d8a2961 ba963e4912d17c
2840ac28fcfad9 8d8ec3ad6dfc32 a3c788dd094910 e65ebb61dabb5f b50e906b28c881
003b11eb83e6a9 a2fac0595b138d 3d55a28f915330 c343bd1849a085 54c786629d2b42
1d465cb22ccbc2 d8f87fd52aded1 ecb34f46656b71 b4cbe50f839f2c 2df6a553cc3698
40b2dd25f26d51 492f3c5c6fa566 f80dd453864548 d4be786d8735d9 e364511a0fb62d
3c2df64d6d1c9f f640e4ef4186be 41773025d6ff57 6147e75d7df3f5 49809548639d16
01067ef6034247 4e7c1b20deb154 3f8172a6b98ea0 b0691d4b575801 136a88607a3e5b
0180058ca8742e 972bc2ca1c4cb6 7b05bbc57e63df 5f01049697eaa2 c537f3121384dc
edb1fa0b34f132 689b1374cafe25 802d7bca5c6674 f8e01e75e9eb3d a59c2d9126d85d
f10f603f8c4fd9 d5a358aa84b2d5 f8320f2a3bd078 019bcf0dabb5c3 43dd8dd5e173f0
45169f788a0233 d62daee0e9839c 7d673cf77a53d2 008730faf272d0 3c08080778ae8d
920e40fad87d7e bf118230ffb194 692baf40b951b4 83549affe4e382 68e172f86a40b3
aa5e2c1b74636d c3d7809ac68aae 33c344fd9bcc33 6e6057dc7d71f0 bceef547db57fa
ec91cc1056e4b5 8153f00c8ef4f8 a2ca943ab03915 079a070121782d d592dcec23dd3f
44ba5fe5078279 e6f8ed790ffa59 e7877e834b4391 d1ca3db32bccd7 b382e35bff1ba1
96cb3b9ef8671e 70342fff9216a5 d635530148dcc6 bf40909f72ba4b e3697761ac11f1
f2a77a5f435c5c a57729bb9aaf37 14f78a30f9bf6f 1a7fe7f0271b01 0b224bc83ef07b
0d409ce2157473 adefa793287d48 a6b13ce8e00a7f 74d735fd54a00b e2dc16285d1b5a
8b3d55371ce703 bb3909153586b6 03c8c622aa53e9 89ee3322e069aa 325ce41fbd0175
2cd1326421cd83 3c47eed2daadda 87c2177de0c63f 39b496d688c971 179359349f5e0e
3cfa9ea9345dbc 47b1948cbfe45f 2a13b18cf3a0d1 00b03fc13e6cde 656ef26757f5d1
7c584630c27fb2 02f2e14ca8a67e fcfec527978154 4ec09910379625 e90fc0a898a5b7
5beb0f3ee5d03a 2383832708cfb7 6905747e27453e 1714e418f0f0a3 53bcdef0965e8d
2c9b5813b90c3c bb9a20c8ebb80e 045e04f3d57918 6fe6ffb0718731 201760abf11c27
e289872adda7e1 233e7ef2b2c83b 423b4c0ba711db 334b15e5bd4c01 034d1e41bff0e8
58a436cce28ea3 e6ef4d94b49962 ec8728db63716b 8c8ffc95c21b06 0beb50502d9acb
c1eb732268091a e45e0c30cfed36 31d58c384bc3e4 8a26ae8b7a5c60 83991e11e8a21e
e4f193c0183e07 691fbbf9ccb4c2 4e5214fae905d8 2052c969e9699d f6cea5a6157de3
fd84477a6bad8e 04f37758724bc3 a491d0fd8f084e 19933cec5f51f0 93794e76e1f29b
ebd1f1c057b30c 7ec220fa6d31d9 867d711c9a7674 a700cf5f177e37 cf3fae5da3ddc4
4e8030990c7917 553a5ce2abaaa4 c2296c42e2dcea 19ae4f9b654581 66d5fff1163703
bb5085e0e7d595 12605df8a35f9f 35c6d572c28ea5 5099437e5f5595 fb45cdaa8872f1
6e012db5feedc3 1ba0e5515be76f b793b687fbf1dd 9d2c01063d4ca1 c2e6fde5bc3a1c
c17b11e1a33418 436fcacef170c5 e4c3cbc3066618 2063665d2a1b84 a8b5b4f2e58850
ce74bcbc892d71 b312d96806cdc8 82d9c95678fff1 5d8a0120206c3c 621f13db39bd6e
4a5db4815f181d 8dae6e596cebd5 1b8b1681dd4918 1dbcbd79f8e5ff 135064b0968c4e
d81e91507c1e96 ce08e072644e54 e1648d32befadc d0b7f41fca118d 7b9291b680b18a
10ab9a2fb4f9a0 9f462d2370dd03 bb453f4b48b2ea b3c3e6d63c2559 be4aa3d8e8f129
90af78e01d25c9 2e06a8715063da 988dbf792de669 17eabe5b043c41 b1f700946e4ad2
e329ae8a66581e 4a5bda0ff2a313 79577080aaac8c 0dd34f4f929df3 0f5e87f82b9b1f
1ead67333c42d5 ebac8fb8797375 dc26965e625abb 953ce074d8c84c 2edd54991b2104
a45196065c2bca 98f56533f328bf 8560a1a390e921 37d2506aff3d7b f88576a47d273e
562b7c9592ffdc 2d0ff0ba59787b 4dd89971bd39a6 7a4a778d69a4cc 58bad18bf5fc74
5cac8d53dcc72c ba7e9c7a2b57d7 ff544acc98f08f 1d22f503712081 cf868290f04def
ba48ab7c61a8ab 3ca439f055f713 2401e3a43338e0 b7c4b19cf1edc8 37db6b0d8991a7
10ede95c9c35e6 a8f021fc870126 6e5909a7f3217b 33772e647266ff a5c8fd0c786e0f
04f0bb34025c67 cc33c6a49bf101 45c563f33f807d 6e95e9c2b5e349 3a0e55d42d44b7
611138d0e928dd 24d7958e8e6149 c66faf12b50f45 eaa5eb19337961 e68c81cb35d5d3
ed1fe1f1b8d443 612ca593de8afe 6c15ee22ffb8b0 c27152ca5a1e77 0133b8165e3ed1
608c9c1a6ca4aa df5272bd1b6425 6f7efc5b2bbfa0 b49b5f0c67ee30 f4ef0e7ed820cb
4b14d077b672ce 3a60f2386c0218 9e8d6e5f6caddf a53ccecbae8684 d3183beeba0cef
4cd21e6afc08e8 5db41995d15a93 6afe570246af77 d0994bc305b27f 2de99a0885c909
1629a47aaa161f 0f6b6d45ff8967 cfc4e83f5b469c cc22586cab3936 29e6b3f94d122e
83f00e419d8980 bb282b6f3efdef 30d80463fb25e0 1846f8f1b935d3 3c03ed5243b7b4
cb6b0e6e4c770f 8bc2856390163a 73a332bc2ebabd b3aeafedbc8c08 74ff7726398cd4
0071d5d3644b97 45dd1ae0369e9a c1f518cd384512 b933bc25cb3402 9377c50007d647
e609eb009c9245 7d99fff828ba6e 9f0adcca6cd0a9 5c5cf8366b699f f00f513ad9e29d
7c2ecfdb5afe40 1f131691f0677b 30e1df0cce8710 f3c52df030e941 b2bb6b650cf2d7
012a5a2d11f1b4 4699b78e898918 977b2e06972b36 674e2619e6be97 93007948f99eee
af2b5b80b81bb3 417446ac93bc16 14fb20c6ab0e24 3ffc77d1672771 36580afea2edec
48942ed95911c5 fa312a7aca8f83 992e36a47ef1db 3937ff39b1a9b5 2af79ef5c48c64
6c88d58111a0b7 b6fa6dc5f7c8dc b1acc64f2b083d 332baac65b4feb e58dae530ad4af
0fbdb072d0ba36 e2607b065b6fe4 f803ae22cb2a6c 9b639dd91166cc f5e430b9cece8c
687c1dc2ac5898 b429122b168f1c 4248f91ae51605 1c24d7f1578ba6 1dec5a6c003598
e3c04b01a812a7 2df7909352cece de31efaffdd0d4 e4a7f11873ec87 4768f7b8d77583
23b6f7bae4521f 8fbf571e568d5a 577ad8b71f3721 718b68ac1ada36 e10689cc83ea91
43f73798b295f7 6e2b078c8d68e5 613c3bb265ca36 d25d07032b8c80 843fe3783b5959
e918f7789f0d33 afac1cb1534684 0fb3c6c442a94b 167f58645b56c2 76132472470129
590ae9be533d39 75adfeba5e6230 30dea290d933d7 08cc4d30a4af39 09bc69be193a2f
f7f8ff9f03af3b 3ad1a453e9dde4 a534709b6e15c7 c6ce7d4efd42e9 5e947977595b68
ca674d0c7541e9 97f178a43b6057 137a6483c7653a 49f1eec3082cc7 70824eb5bebf04
cf95519563f7c7 cef140efdaa431 4f8ddc5fb70009 27710736a485cd 41b05dfead9e7a
dcbf8e83a3a89a 23e46b5a421a08 84f0fb922099a4 120b226eedd549 cf4706582b36f4
e3b718cabb9c11 03db1daab9520a 3a29a8c65c45f6 0219e82dbeb36b b351c498a8dda8
0ba2a5607f3bf6 0b95be14721f63 62d3b4d2b1fc16 f46a95de23a55e b70c2f136e83eb
a0b215f5837e73 d76368870bd5bc 0372cf15e7ff03 c992d958598014 1fb03e9712f2c4
a73b9107699fb2 239ad1d706b5f3 3623dab66fefc1 8b5e04ac40e7ed 77eaadd7c4d35c
b3ba11dde839a2 621e7ab334235d 29f2ed9f1990d9 e0d731952272a4 d31f58d8cfad64
57690ff74579fe e78fb0fe43c6cf b127e3c5c7da88 1765c8883fcd01 dc0028f618172d
07d8f79c0e5b79 bdff41e18ee3b3 0990bd1c710888 b0ef52eb6da5bd b790ff7419e17d
22ab4221d42b9a 35bec4ded01a53 6a2f35fd63b686 db66f3c21b9291 165a5fd321d034
f2ea034bd3a6b6 4d47388e2680b7 018dd250cfd53b 53babaed27080a 73c54d98e4a365
6a77f2e71cfab6 4f9539f7e67a64 c35beaa6ab5528 1698a8ee44d10d 01e623ff7096e8
96a68072d59c56 6baba4b0d232ee 725a1f9e0fbeb1 97728ef73b9a8e 16ecfe23a3bdb6
f035aac743b427 202c094281f68b 1c8be9e39e4591 0959fad0920ae6 15a97f475dc632
a3fc9e9363688a 89cea147f0339b d1ffe6e68570d2 329a0b16c32fa2 cbd5818383dd8f
c26f57abe7c8cd 4d680e55e8a77d feefbd47b284a3 41bc9077e7df69 1c32ea11a0df3c
2ea8501eab0c69 63dff30ea51c9f 8de69a045d957b 4036f90d8e90b7 5886f2e5059e5d
7341e707011eca 8d6006677dabf1 2c6f2040741941 5058a43d3958d2 29eee2b01178b8
eb9e382e6ea2e5 62e44ce8f6b19e a5b4444f78d77d c12755f1de34c7 8fd001eb8d0d91
8a3ece83c541b5 659f736aca9076 1c1864cc5b30f1 1b9f901459a142 f5571fc19f94a3
39e842e17176ca ed2a1659a97f8e 625e74d131b3da bdbdfeaa0366bd 95ebf86c33a687
4a09faea206cd1 29f59174377238 908e6c956350cb 686a225a26548c a45140d1ed5b76
75e9ea2087732c 14dd568be007bf 3668e3791bdd4b 56f9aa39df5785 e7b37c964271c9
c5211e837c726e 374513cd4cd34f a5c71ff1a4195e 4e234c5adc13b4 75093fc66c8faf
2ec02dd6ea2715 d8676bb21e7f0b b4c22ceadbd907 9ccaf78857ea36 a28da605bbf2d8
723651fb07c86a 07039b49d2fa32 40dbb6dc2ef93d da48f7e9d5eb92 45bc6190b3a9e4
fc84b55352b994 25f44b36a3fb83 d09a8f4ab7d78e 0829201a523b21 966e0098395656
5984c4e317d930 581dd2ab677c99 a92a70424c5aae 4ea1dbaca67de1 e45918a0d6d560
1e5c75efdd907f 99a6e56cbb015f 04fd11c8ae4d05 83a72f3e967bb6 2ddf23b892d1e5
d648bbe9e5f8d3 d4b128d667ff6a 781dcd435b03f4 1a1cb99fc298e1 69d80c51941a26
5263476c788bb7 db0b584b59ec8d d95a4e9a6a95c5 5263b0eb0cc8d4 98e62e5116ab09
97564c79d4b733 39d708c3284fb2 d2cd596efe674a a9e3b1f33b4473 70b30aa67c0c2c
3532c9874c8ce5 680a796f9db4b3 64e5825663090f eb0a67604f3f9b 7c4716c88afa20
cecf4b6b1467f8 342600406fe556 200290eea56903 36562b6cff764c b02d3847d68f8f
a26c2ab20fe063 5de36be096db8d ac5998b94e3c17 4c8808ebb9bf53 4bbf0a436470da
d3875253f7b0a9 a99369bfede348 8c3391fd3a5f95 5005f88c89d735 acd8196d21d41b
5ba2ce34f48817 da3e7f4332994f 8cfe88c8ae18af e4df8b64d16e61 b0f200ab8229f9
5a15b4ad681a60 350a1bb85a5708 f5731809fe17da 9da29858778783 e496533ffbda6c
a590c76b953dff edbf61ba227191 f7fd713fd0b4bf 4a5e6df9905845 42ed273f1fee88
e56d34cbb2866d cc76209f9773ec 4c21238f991ec6 7adff263cb22b1 4fb41d94f97f42
f26d90e0b24a1a 37fe90421cee92 5cd69e29e95550 bec2bff0431bc0 6acc812fa97ad4
4f19e44dd33a0e d9280b1ae70cff 6575a036db7f1e 7bf2ed31bcef8e 45dfb49b8dc51d
e1fd10fb1b59b8 092da05f342c0a 01fa56a0375319 c1f5ad03dc627a cb1f2c96f11444
5d67a093467a43 a832f56266f0bf 7a464d7fab7c48 42561af703a045 c1c9b270211af3
edcaf802cfd336 6f9ba5cc39c3dc 585554fa4224ca 4a7216b8d2dd3e 16c2d8b31e6fa9
e9ae301e1bfa98 ac8389842b368a 158c5060209885 c01a2c3f5b7bca d20124920faa1c
a2217820d1fa40 803272c88d1844 c2554237c9ecc8 d25f509a6db1de 325148c1726f18
398c66b1339048 8c8c43dd7f2c26 24cf4ec93ee498 54618829620375 eb494db615a50f
69e1fb949b4215 3e02e353426513 bf6ea2adefdded fbbb781d40e52c d6ebec825d94a0
3f84de44b6fd50 0b466ea0458290 3a77f7804e0c62 b0ce750e2b2078 69f346f188a43a
24ef26f7c284a3 544ea716d5498b 3e1f23b1154dcf 6d5c580dbec7f0 120302c7a16ee0
bae4ae638ee502 60cd112182bd84 dbc443744789a8 7faefcebed3a2c 579c0f77cfa536
0d920b050cb068 fb2fc616ee5eb8 3b7082e645d419 40df3b620a8474 df360190d74ec7
28f0d33396ee1e 3c007bfb335325 ac5c5327fcfbe9 9daecd75584e11 770aecaa7200f5
ef955be6081878 8c906f9fbbd9a8 f16d11b5a2980c f837a8f49c0378 33efbbae308e71
0bda652822a309 8990e49a4320ce 8bf60c5517e853 0b0f2a3d47d09b b07d28e7903ac9
5009b61262ab9c 0161bb90668bf4 a314e46c502058 447250d9698fed c3e4ceaa255d41
5ba4045c2fdba8 17b0720f52e736 0eb0036d8439d0 9e15116b8245e2 3dad88738ceab0
260986d154e9a9 56cd13e67e508e 9895906f7a2bc2 4970647a63ed02 5e192810f2e040
02e7f4cad9b4cb 18d5850dc181a2 05204ea9653f18 2d3b188124823d f9b34ca3d2c93b
2e5ba515010f68 7308114d65f874 acbf4d6286131d 46681d439816a8 15fc07b05c47fe
f0ef6a332c3132 c4630529dd2021 a743a1e9423e63 b12af7fe3d806e 0cb7d03c2afdae
7abe068af28323 fe75b567a2c0c7 069313cf6c1f44 a39aeec0ddcc87 747c3bd20c1471
876af6b8558b0f eb0b357c5d8f97 c64ac9dcac22f2 856e4341b42b50 663b16ec5eb01f
0d31dd990e70cc f7203530ab3d19 6d42eb5412ec69 dc9e4fcaf97880 e0dcd2d94a10fc
b5f39a9e831217 4b084adf9c02c4 d3cabf53a97846 4c331980146846 3c9f7c840833be
b0cb542c3108b1 9dcf7401e6f79a c1f27ed5dd4e0e 509cf69e83c56c 15ca00d43e1758
5948602f5bf14d 1d129ae6b9f4ee 2b58f973ae2956 6a6c792feb0c13 62474058c00758
caab48f22b2e6a ed88328618842d 0418ebd349eb34 846eda10087342 e8b6c21b95cbf9
cc90523ed0cb59 4c9374718e79ca 60c8fa29dd489a 41f2190a03e88f 8ac12bebb17c5e
3195835960d662 2317a3d2d90ead 5f5aeb6d34f4e1 7a39957a01179f 3f88d79fc83f9c
edb1049a771b1e 30a85067c640ed 06cac8047923de 59bdda0f1b1b9d 7a014eaecf61b7
292e8b0f865638 4dc1de3d7f5dda d9b1b7557b4db8 54813ab90c75a3 9b35f03246f1e4
20f760465bc347 0da41ba5991181 a6a49de8fdf505 60b1ea116f81a6 ce2716aa9919a7
e3fce68f208dd3 05d5b9594f643d ded74364c812db 16b6e7e4269696 ad975ff975a727
4d6e503b6ae9a9 9ce664850ed1da a714650763250b 944b7b251c3e6b 0d37d4e4854c4c
06c7e1c3d4b917 5602bc69558908 92f5ddd9a20bbd 84d12a16b5963d d1426dd7f44f09
06cca7d8cd71ba 710072c1b4ea7a ebabe1e8242f72 69960c6c0d5bf3 2084edb90ada1d
235ed7d8a9fe39 3b133ed8a3fec1 132c4509579af1 203ca5447787a5 ca938128fcd756
ca569d31b6f05d edec4129270543 ff17078079c2aa f642caa8568a3b 8d1f6c3bf9b5e7
c947c61701ce12 1a3808b18cb73c d1d7543be23892 9917eefd8b4b7d 0eabef30f24b08
b72c10d49c60a3 c01344f22cc2a2 b97c57f2a37b00 f82a2f9338e520 5a8b9c9ce0dc1d
8a4d7e7260e257 62046c5551c0e9 19811c1011cf28 dc158db4a957c2 b516e794206aa4
4a9e535622d8df bea44b252b2ab8 7284568528acd5 239ab1d64c7025 bad538907922d6
57fb163fcb9eca ad97c1507e480a 78e8cfc81ca935 14eee2413bbe9d e349073d92ab5c
8ed191d530d9af f3a72b6e194e41 d26925b22eb6c5 f709c6088bb419 8527923aa6f4f3
1345fcb8916f88 9f82d7a298174b b0a41e5d16d9d4 28c7eab4098fe2 f34abb591392d4
a5084515586118 71f3fcfcee19ff 180d1b40c23b7c c18c22be085cc3 edeb86d04f3c78
c56c61899b8011 2cf78b1bcd5b77 99247be60f0cdb 4c8a9aa7a58409 e2bf0ad4cfe9b6
f79b501f91d364 5fd2c40e48e881 c650973fb8e681 7c8ae6d3aca02c 7a01c329e3bf17
3b126f2cda1e76 229d405bbc41d3 5e028a9f388566 97e13e1dfee5e2 aa02da00a5271a
be2abd92296fc2 e380153ffa1a5d dc3c184ca2fd9a 8dd7381eccc7e0 55a7fef2252572
76da25ac98ef00 3e12a21d43ef92 28c5f1d9e71a96 b7cd9a47a9c9d8 aaf77a03539742
9f8854a9983a9c 2bcde940d64350 6986616ba3f75f e80cc522c68b65 f03f78b91d9f6c
fdf9170e4ac9f1 c84c3819797def 03bbfca0340880 2893d145bda408 df07456e5388bf
cbadcc8ac22dc9 365807ab820d70 29da8be4c0de87 756ee3a7865bfe 46439df366b70b
ab960b51e728db e2e3c346921e4e 74c6317baa49fb b3efd421fb40bd 979d2df24bca93
98d5bd5de71195 bc030746a50c59 02cf2a4b1b9812 467af79145cfaa 0ed643c7b530e4
181ef7d406026a f6ea606e325377 a302d06af1f7cc c7658f6ae6defa bbe5314d959e1f
bb5757386c8799 8759670183f618 58e0cc3816f883 c113183a0578ee ce5456e86ee96b
c04285b8c56bb1 74e5fb66d586e6 9d8eea215e70c8 f4a00feb7bc2f6 369c2bf470063d
5b267be08f0594 c26fb2440b1ac6 8610ef5a140769 bb3d5b50a536d4 df6c30bc09f971
74e572ca84d171 2deb91e812d860 b17ac9ae5be211 c95a0e3f542c78 46397245b13a99
1806ab9ceb6646 1b4161b0ec2edc fc536e2a24abf8 9f7207bf519f1b abf95b0d0d3cae
d9e17ba1bf7678 6526f524fad677 ec243ad271d0cb 9b1c06cf737605 0a36697c74beaa
fa0f0056a6bd7e 9f2d03db497a93 027d76e6e8692a 72ceb29c5913a7 55eb297dae3330
eb676e7345fb39 7021192efd5b47 462906905e7511 e005f52fd8da5e 1288c01960d735
3460b18eafd2d8 faa9b1c3caf426 5035e585d9fd2d 85636dff1d4e42 600c4b7f664267
02b21e6a8c7a03 79ce25c264e2f4 035a7f32c227ea bf8f711445a7d7 d0b5e3b336f71e
c454a416321483 bdc1a7a9d20dea e1ee4744e83143 5b6969f2864529 17a6b42d6346ab
ff0fb6edf2265a ba75b0991f6dfb 6638c1d7243ff8 e7806af6600486 dfe3bc58f31717
b0c4adc2717922 c11abff0b4a290 43598e076f60be 2ef17ad2f77605 3a41a09d974da6
ee787846e7ff26 ce05d869fecf18 fde916d95f9357 4c1b4dd723b90d b1f024400d61a2
f51dfebc71b770 461e7f725d9637 2b1587ff40035e a2cafbcd0c6b17 2e9efaf6986045
80e339a823ea56 febfaa02609bb2 a33955624e1602 a137b84639ef0c 6e2ecf420a6d6b
69f13acbea8f97 b4d36c41e3a867 1352aee4798c08 e3ec254ddf35cb ab600d90f13919
d00cc1d401fd2c 1c629e621756d2 090f8d6e0895fa 701bd1b0a355ce a53c7c91b15eb4
dd8579d4dd92e4 03d1c960c63d55 215a8fbc09cc85 59c1e6069b6dd1 a0428bfb223cca
46d131153e9982 c5dab0c9ffb93d 682db866d6503c a481c48384a087 a417c564567258
2ec7b9722b5c5f 2d491f9cf79086 30cd268b1088d0 f02e69b1441963 d9841b5339d18f
a26deb7b957527 337f3bd67d3c51 e6839a4d5fe4cf 1619c18889be68 d971f0f57d1016
56213ebf152a2d 9c0e0394832c92 9e6fc90ca28ba5 9c5151dfbb8394 f49fe4cf2a3f7a
97f4db054b2b34 2a4c21abf6406f eb941a80bee3f8 7615468e80e77a 0f935ebe8e8842
959f2b3ba1f50c d6bc8b614e39de 3c43d13746983a 7956e617131247 56de3547cf1010
c16d5d1fce2bcc 3e73e5ef9fd691 1211c1a27803ad f1c9644aac4ba2 8d67134e3be189
d8aee617c607cb f62677b30d8ece e7df69402b2291 6ec102f220e09f a6223e874c3e53
fb474983ebdb9f 806832bde2f4e6 7c25ef688134b3 4aadca3409a6b3 bdccd638f3b19a
2b01f18625fdd1 0f5e91c28af081 f28e4dcd9077cd 9229d87caebbfb 072b846b4d2ce0
fd7a25e195d67a ec9546899268d2 ff3068a2e9d0c2 af9f2fc2de9978 01b47566d0faef
fc5a8eab966720 4b981c9fe7ff10 b4a0aa0873484e 25a8b544ed8801 c72530c2e5d37f
94b0483e74e4fa bc5ac97d82cd68 1a23d34cce0d52 6e4d17a8475b19 63493b14551149
d36db24ae5ced3 a24a53ea6aada0 230cbe502aa32e 2aa07485f281e6 66777be0d719c7
5d3ab65be78916 06076f42e71bd2 273e56dd3eccfd 5ef1c9394b6a9e 42f1f49590ffa2
1e6ab6994e56a8 8d54a339e301c2 efff698c46e74e 6721df7c5334d0 f36cf6a93bf3ff
91d8979d8fe631 321dc8b5eccceb a237eb1423c395 4623a16cc50b79 83f616f60d8114
32c15a65536b82 e4a00d384d99cd 369206bbef6fb4 42a720e294a9e4 768a1c77e94dac
31d4798dffbd75 da46bbd77e908b 0fc027a69fcccc 4204ee745159f5 c14155873d42b2
7ce0c031527eae 22fb1c9d6da9a7 33c940531510c9 d938e52464ce71 385b73fa95a2d3
597bf6362dd268 f9901921654409 7c8d064cd5b4ea 80e8fe2f1b3288 ee188609ef2cdf
beee34a1c48fe2 459cefca35857f 33b5320fbbca79 7789297027b6a4 f1debe5a09d013
fd5d818a56bf63 19a092fb1ec45c 526f5d3ff64331 9b8295291aef56 d6963c3a92c34c
065482a033fbbb 9b9ab43410d764 44ebbd99c86a86 4c087234311b85 db6e5a803ba13f
760c159ce2a619 d58d83243fe0ba 1c1e5e83aa79b5 75c8bbae9baabf 2ed91bdb632ae8
8e46b443cfffe5 afa4f53f148577 0be538701c4afa 3fd89c44ef7ced 060dcdc21e9368
0a5b9e2ba4a53b 63db0419a96d30 f68e038377a61e aa15b78389ec74 5f532809ff80f1
f2892acc49ca4e e2d68174c08e81 378254a38f5138 0b060222bb20b1 8013f6f4745c47
ef08e87e2c197d b69d5ddaadf417 7bc66fe482b730 f4bd76d3bb1dfc 09492b63f2935b
43875dd685ab00 e15a528f666486 aad1fac042ef90 b3bb7b8ef9c2e9 c78967b9392a1c
2f05fd5ca0230b 1008345afdd18c dac73c313ace60 346d535e500b62 12e6357496459b
ecccdac5a34926 3b880f7098608f e66c3352a9cdd6 049b176ff1a04d 897d9569948066
806230e9740d6e 179ebf2b7952ab 3a2c5079b5bbea 73261b85c35fb1 5b917dc1bf7e57
9c55b95581c1d5 e1fb86e6219639 c72a22d8404448 95dc5d7b966027 457f3fec730d5d
469cd82a2b3cec 021d9de560b8d6 85b4d126933886 c8265dafb87325 8741d99af7f420
1329e52d3e66f8 7d37c458a6ad05 1241c5ccbc355b 7fdcad3c3c269d 05f04a0a23acc7
ce076aba97cc18 74b36afc4645b2 cd7adaad8f5b91 bd0651649dc722 3d9b6437c667fb
6827ec09eed45e 8ad6102faa934f 1a80658c0bfe1e ef1749235ab59f 75478ad5949a06
49ce6e19841851 8df41dc39cc628 bd9dc9bec89c8b 7771a21fe8028c 65082929b051f3
c657798a3aebd1 ce9c37c494cfc8 156efce8330e42 d0d95860d39040 dfb0fb66f814c4
4b421540a0aece 9e767cbf7e9c49 eeea5b5c866a9e e2026ca4bfb067 ec9fff1a5d41c0
290ed4da32d333 65208d00dba471 99e1d15a83b736 5585401976a265 1668daeb4aad72
cdabffa646baca 7a6c7bb29875f7 1a87a53a2139c1 1ecf7ae823158a b1087bf595f7f2
d8e668fe076e4e b0794cf137a863 81e2a419a320ac 8090b1d39e5171 813903d5f1a68c
ba31f8211e56bb 52413eefa30a66 af88d053eb8a4a 4d2235fdb9bf91 69a947973d3ed3
0dbcf583c26ffd b6bfbee58458b7 5a12f3e625e5d8 53ae22c2b1bc6a 8e6f7e7bbde691
2b692a6746d3be 686438e4ac66e4 aa77be21178471 36aed3fc3ff079 8c9373ebb1c2f7
bb99fd5bfaebc1 665902d08610e8 8af0db91b80b6e 00d4995f9a6ce8 1d503d26442bd9
e61b181dfd0949 2a02f5075a277e 76caa7ddc435fe 5cedae4cae7a57 4b5f7c58c4f214
1446a05c1023ab 459c93a9f5ad37 1ea5f5aa4060b2 a48215350fcd60 ac21e2a4729a51
f18cf87ecf430f 0fd48c241ac6bc 33964cd2e3d9ec 8bebeffc2e848e bb84a5c2ea9b0e
311f20ff7ce601 752573f4effe98 79dbe4d184dce5 6bb04def99d322 7d2d5f23c2475b
7f93f7dbddf04e fff303e751fffe 08d205f8999a3d 5750d14f75e056 1107a3f96ca8e9
f62b50c8ad9f20 53e7fcc55d72d8 6345dec1054cb6 01d52c21dc654a aa0bd78e39c594
2265b675381cd5 57cc03dd65f821 1fa373049059a8 9885886b48085f 8357ab98192a14
5bb6cfbf84048b 5cf862f25ff6ab c9382e36ab2dbd 2357b5ade91fcf 2db77558ceef24
d4a0cb3ba50a2a 12c3cb633dfe47 db805410168807 a5e635ac766e1a 25252810f49fad
cae296fce18ed4 b9932d5822c519 4b7006cc54ea84 2546d761d284cf 2346d0a11b1ed9
81ce0d028c4474 c8002fd0315372 8670db1a6ad6eb 4c7f942260e9c9 822bb2c423cc53
e3b67febea3672 59c24223d913c3 6f4b196f69400f 51bfb6cc7f3603 fb9fbef84ffaf4
7c1632636806f6 a50ec42076931f f68b2be9e5e7ad 7603302a518bd4 d7cd9bb97ffa3c
acf1faaebf7412 f55d55d548bd86 5b34112ed53d06 1b58692e1e33b7 cc7e3cb6d32fe2
8f7b35c14a744f 9a4ed599399554 8eb369e71641af d4a6d1a5c74123 8cc7ec376acf04
ec0a470647b248 2fd9e8eea1f10e 94439285677960 4d11f6e6a426e0 06378817230b68
ec14f2df152cb7 199a8c0bd5f05d ecad5aab44ac2b ca87ab2ba6e905 69c0bf2acdb36c
d66279737bc807 4dd946eb19d81b 4e9c473b5e9846 5a016f7ca86f9d d02c2b7dca744a
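The derivation described at the top of the key text is straightforward to put in code. Below is a sketch in Python; note that `MATRIX` here is a toy stand-in (the real rows are the forty lines of fifty-six-bit values above), and the example KSV is just an arbitrary valid value with exactly twenty ones. A sink key would be computed the same way against the transposed matrix (columns instead of rows).

```python
# Sketch of the source-key derivation described in the key text.
# MATRIX is a toy 40-row stand-in; the real matrix is the 40x40
# block of 56-bit hex values mirrored above.
MATRIX = list(range(1, 41))

def source_key(rows, ksv):
    # A KSV is a 40-bit number with exactly twenty ones and twenty zeroes.
    assert ksv < (1 << 40) and bin(ksv).count("1") == 20
    key = 0
    for i in range(40):
        if (ksv >> i) & 1:          # lowest KSV bit corresponds to the first row
            key = (key + rows[i]) % (1 << 56)
    return key

# Example KSV: the twenty low bits set.
ksv = (1 << 20) - 1
print(f"{source_key(MATRIX, ksv):014x}")
```

With the real matrix loaded in place of the toy rows, this yields the 56-bit private key for any valid KSV.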
Thursday, September 9, 2010
The Facebook Like button PITA: fb_xd_fragment=
So I have been using Facebook's Like button for a while now. I have it on a couple of blogs and a site I maintain using Drupal. I started noticing requests with a query string of "?fb_xd_fragment=". This is a problem because those requests render a blank page.
I did some investigating, and this seems to be caused by two bugs (as far as I can tell): first, Facebook's API triggers a bug in IE that sends this query string in the first place; and second, my website, because of the Facebook API, renders the page visibly blank when it receives it.
So, what to do? I decided to kill the query string at the source: my Apache web server, using mod_rewrite.
So here's what I added to the Apache config for just this web server:
RewriteEngine On
RewriteCond %{QUERY_STRING} fb_xd_fragment=
RewriteRule (.*) $1? [R=301]
This basically sends a 301 HTTP code (Moved Permanently) for the same request, minus the query string. Note that the trailing "?" in the RewriteRule is needed so that mod_rewrite doesn't append the old query string back on. It seems to be doing the trick, and now I'm not losing traffic to a Facebook API bug.
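Outside of Apache, the effect of the rule is easy to model: the redirect target is the same URL with the query string dropped. A small Python sketch (the URL is a made-up example):

```python
from urllib.parse import urlsplit, urlunsplit

def redirect_target(url):
    """Where the 301 sends the client: the same URL minus the query string."""
    parts = urlsplit(url)
    if "fb_xd_fragment=" not in parts.query:
        return None  # the RewriteCond doesn't match, so no redirect happens
    # Rebuild the URL with an empty query and fragment, like "$1?" does.
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

print(redirect_target("http://example.com/blog/post/?fb_xd_fragment="))
# -> http://example.com/blog/post/
```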
Wednesday, August 25, 2010
Solo6x10: Recording from video
I've finally gotten around to writing an example program for recording from Solo6x10 devices to a file. This program is very basic. It leaves the video device in its default state (resolution, frame rate, etc.), so you can modify those settings separately and then use this program to record with them.
I also did not put motion detection examples in this, mainly because I have not yet settled on a decent v4l2 API for it.
Next step, I will add sound recording into this.
You can find the example source here.
To compile it, run:
gcc bc-record.c -o bc-record -lavformat
Execute it with a single command-line argument, the device node for a solo6x10 encoder (e.g. /dev/video1). It will record until you hit ^C.
Happy recording!
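For a rough idea of the simplest thing a recorder like this does, here is a sketch in Python rather than C. Unlike the real bc-record, it does no muxing with libavformat; it just dumps whatever the encoder node produces into a file until you interrupt it. The device and output paths are examples.

```python
import sys

def record(dev_path, out_path, chunk=65536):
    # Read from the device node and append to a file until EOF or
    # Ctrl-C. With a solo6x10 encoder node (e.g. /dev/video1), each
    # read returns a piece of the compressed bitstream.
    with open(dev_path, "rb", buffering=0) as dev, open(out_path, "wb") as out:
        try:
            while True:
                data = dev.read(chunk)
                if not data:
                    break
                out.write(data)
        except KeyboardInterrupt:
            pass  # stop cleanly on ^C

if __name__ == "__main__" and len(sys.argv) > 1:
    record(sys.argv[1], "out.raw")
```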
Tuesday, August 24, 2010
Mac OS X Internet Sharing Problems, resolved
I was having problems with Internet Sharing on Mac OS X Snow Leopard. It was working at one time, and then it stopped. I would see from my Mac that it was getting a DHCP DISCOVER from my other computer, but the Mac would not give out a network address.
At long last, I figured out the problem. If you run this command:
$ sudo launchctl list | grep bootp
1550 - com.apple.bootpd
You will see that bootpd is actually running, but the label com.apple.bootpd means it is running as the normal system bootpd server. You don't want that.
First, disable Internet Sharing. Then run this command:
$ sudo launchctl remove com.apple.bootpd
Then, re-enable Internet Sharing. You should now see something like:
$ sudo launchctl list | grep bootp
3423 - 0x100101440.anonymous.bootpd
This is the anonymous Internet Sharing version of bootpd. Now it all works.
Tuesday, August 17, 2010
Booting your iBook G4 from a USB stick
So I spent the better part of the night trying to figure out how to make my iBook G4 boot from a USB stick. I have the Leopard ISO, but none of my DVD drives will burn dual-layer discs, so I needed to go the USB route.
However, none of the nonsense I found on the Internet gave me proper instructions. The system itself doesn't directly support it (holding down Option during power-on doesn't show the stick either).
So I had to dig into my OpenFirmware roots and do it the old-fashioned way. Here's the quick tip for the rest of you out there (hopefully Google will eventually pull this up in the rankings so it gets hit first and saves people time).
- Plug in the USB device where you have copied your bootable system (I do not cover this part since it's well covered already; Google is your friend).
- Power on your iBook and hold down Command+Option+O+F. This will take you into OpenFirmware. Scary looking if you're not a computer type person.
- Once you see the screen go white with some text on it, you can release the keys from the previous step.
- Type "boot ud:,\\:tbxi" and if you're lucky, it will start booting from your USB device. If not, continue on.
- Type "dev usb0" at the little ">" prompt and hit return.
- Type "ls". If you see something like "/disk@1", continue; otherwise go back to the previous step and use "usb1" instead.
- If you get here and you haven't seen something like "/disk@1", then you're likely screwed, sorry.
- Type "dev disk@1" and hit return, and then "pwd" and hit return again. You should see something that looks like "/pci@f2000000/usb1b,1/disk@1". You will use this in the next step.
- Type "boot /pci@f2000000/usb1b,1/disk@1:,\\:tbxi". This is the device path you got in the last step after typing "pwd", with ":,\\:tbxi" added to the end.
- Moment of truth: hit enter. You should now be booting into your USB drive. IT WILL BE SLOW, SO BE PATIENT.
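Put together, a successful session at the OpenFirmware prompt looks roughly like this (the device path shown is the one from my iBook; yours will likely differ, so use whatever "pwd" prints for you):

```
> dev usb0
> ls
/disk@1
> dev disk@1
> pwd
/pci@f2000000/usb1b,1/disk@1
> boot /pci@f2000000/usb1b,1/disk@1:,\\:tbxi
```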
Wednesday, July 14, 2010
Cross compiling the Linux kernel from Mac OS X
So I picked up a 13" MacBook and have been fiddling around with it. I like it, sue me.
One of the first things I did (as any Linux developer would) was to install DarwinPorts. I noticed some interesting things in there: a few that I needed (git) and a few that completely surprised me (dpkg and apt).
One thing that was missing was a Linux cross-compiler. So I did what any self-respecting Linux developer on a Mac would do: I built one.
Don't get too excited. I've only built one worthy of compiling a kernel (which means no C library, no userspace, etc).
The result of my work is here (built on 10.6.3):
- http://www.swissdisk.com/~bcollins/macosx/gcc-4.3.3-x86_64-linux-gnu.tar.bz2
- http://www.swissdisk.com/~bcollins/macosx/binutils-2.20.1-x86_64-linux-gnu.tar.bz2
- http://www.swissdisk.com/~bcollins/macosx/elf.h
You may notice the extra elf.h file, which needs to be installed as /usr/include/elf.h so that some of the kernel's host programs (e.g. modpost) compile natively. The gcc and binutils tarballs will unpack into /opt/local/.
In order to cross-compile, you will need to add a few things to your kernel make command line:
make ARCH=x86_64 CROSS_COMPILE=x86_64-linux-gnu- ...
You may notice, like I did, scripts/genksyms/parse.c has a #include for malloc.h which is not on Darwin. You may safely delete that line.
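If you'd rather script that edit than open the file, a portable one-liner works too. Here's an illustrative run against a scratch copy; the path and file contents below are stand-ins for the real scripts/genksyms/parse.c:

```shell
# Illustrative: strip the malloc.h include from a scratch copy of parse.c.
mkdir -p /tmp/genksyms && cd /tmp/genksyms
printf '#include <stdlib.h>\n#include <malloc.h>\nint main(void) { return 0; }\n' > parse.c
# grep -v is portable across BSD (Darwin) and GNU userlands, unlike sed -i
grep -v '#include <malloc.h>' parse.c > parse.c.new && mv parse.c.new parse.c
cat parse.c
```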
Note that you must already have /opt/local/bin in your PATH. Using ARCH=i386 will also work and compile 32-bit kernels. One last point, the sources for gcc/binutils came from Ubuntu's Jaunty.
Happy hacking...
Thursday, July 1, 2010
Oldies but goodies: libugci
So I went digging around my old software, and ran across some interesting stuff that doesn't get a lot of attention. I've picked out one in particular that I haven't heard much about in a long time, but I consider it very useful for people who love MAME: libugci
I had built an arcade cabinet a long time ago that was making use of a USB control interface called Happ UGCI. It allows you to directly interface real video game style controls (buttons, joysticks, trackball, coin door, etc) with a PC.
It was perfect for what I was doing, except that it sucked on Linux. Back then, I actually had to write some patches for the kernel to get it all working correctly (firmware bugs in the UGCI). In addition, much of it was not accessible via the USB input layer, so I wrote a library called libugci that took advantage of the HID interface to the board.
The board allows you to connect a real coin-door, and libugci+mame will convert that to coin-door events in the software. I always wanted to complete my MAME cabinet with a coin-door so I could make money off my friends :)
The MAME code has support for libugci built-in (thanks to my patches submitted all those years ago). Installing libugci, and recompiling MAME with this support enabled will make use of it. There are also some programs for accessing the EEPROM on the board, as well as mapping events from UGCI to HID keyboard strokes (in cases where you would want that as opposed to real joy/mouse events).
So if you're a MAME junkie like I am, and need those extra features that only the UGCI can offer, you can download the library from:
- Tarball
- git://github.com/benmcollins/libugci.git
Good luck and happy gaming!
Sunday, June 27, 2010
Why Linux will (has?) hit a wall in popularity with normal users...
So this is one of the few times I decide to get political and/or rational. Most of my career has been spent on Linux. And while the gettin's good, I don't subscribe to the notion that Linux, as a desktop, will take over the World.
Let me make one thing clear. I do believe Linux, as a core, will succeed in many forms. On the server, on mobile products (where it is the core and not exposed directly to the user, a la Android).
So here's the problem as I see it. Too many choices. Yes, this has been beaten to death, and to some extent, many Linux vendors have taken note. Debian used to be a free-for-all where all of the choices were exposed to the user. People who used Debian loved the choices; but the truth isn't that they loved the choices, they just loved that their own choice was among them.
If you don't have a preference, for example for a desktop environment, then choices are bad: you don't know how to pick one. So now there is a default. That's great, but across many Linux distributions, even if the default is Gnome, the little nuances of each system differentiate the whole thing so much that no Gnome desktop is truly the same from one distribution to another.
So why are choices bad? I want to take an example from a book I was reading recently called The 4-Hour Workweek. It tells of a watch company that wanted to advertise in a magazine. The watch company had many different styles of watches and wanted to put a full page ad that showed off 6 of them. The advertising executive said they should pick one watch and show off that one. To settle the dispute, they had two full page ads: one with the 6 watch layout, and the other with a single watch. Don't you know that the single watch ad out-performed the 6-watch ad by a factor of 6? Interesting...
So anyway, choices are bad for consumers. They would rather have one choice, even if it may limit them in some way. Apple figured this out when they almost went the way of the Commodore by making so many damn types of Macintoshes (when Jobs wasn't at the helm). Microsoft also learned this when they had an extensive list of Windows variants (Full/Pro/Home/Home-Pro/Server/etc/etc), but I don't think they've recovered from that very well.
Now on to the meat of the problem. Linux, too many choices...what can be done? Well, as a developer, not much. It's not our job to make these decisions. We are the ones that give all the choices. Drivers for every device, apps that do anything you want, themes, icons, documentation, hardware support. The real issue at stake is some company needs to break out of the "We're a Linux distribution" mold.
Let's take Dell's Ubuntu Linux offering as an example (I'm not knocking this effort, I helped start it when I was working for Canonical, and it's a great offering). If a normal user somehow gets to the Dell Linux page, and they say "wow, what is this Linux thing?", they will surely go to Google.com and start checking. Bad? Hell yes. The huge amount of information, choices and decisions quickly becomes apparent to them. They start asking questions like "Is Ubuntu the _right_ Linux for me?" and "Should I try other Linuxes as well?" and "Why does Dell only offer Ubuntu?".
Indeed, these are good questions, but there is no answer to them that is going to take the average user from "I've always used Windows/MacOS and know how it works" to "I'm going to try this thing called Linux."
In my opinion, as unfortunate as it may sound, in the end, some company will deliver a product that makes no mention of Linux other than in the copyright attributions and source code, and will call it something completely different. Maybe they will call it Chrome OS?
Saturday, June 19, 2010
Using your new Bluecherry MPEG-4 codec card and driver...
Now that the dust has settled and people are taking notice of the new driver for Bluecherry's MPEG-4 codec cards, here's a quick How-To for using it.
You will notice that there are two types of v4l2 devices created for each card. One device for the display port that produces uncompressed YUV and one for each input that produces compressed video in either MPEG-4 or MJPEG.
We'll start with the display port device. When loading the driver, a display port is created as the first device for that card. You can see in dmesg output something like this:
solo6010 0000:03:01.0: PCI INT A -> GSI 22 (level, low) -> IRQ 22
solo6010 0000:03:01.0: Enabled 2 i2c adapters
solo6010 0000:03:01.0: Initialized 4 tw28xx chips: tw2864[4]
solo6010 0000:03:01.0: Display as /dev/video0 with 16 inputs (5 extended)
solo6010 0000:03:01.0: Encoders as /dev/video1-16
solo6010 0000:03:01.0: Alsa sound card as Softlogic0
This is for a 16-port card. The output for a 4-port card would show "Encoders as /dev/video1-4" and similarly for 8-port show /dev/video1-8.
The display port allows you to view and configure what is shown on the video-out port of the card. The device has several inputs, and how many depends on which card you have installed:
- 4-port: 1 input per port and 1 virtual input for all 4 inputs in 4-up mode.
- 8-port: 1 input per port and 2 virtual inputs for 4-up on inputs 1-4 and 5-8 respectively.
- 16-port: 1 input per port, 4 virtual inputs for 4-up on inputs 1-4, 5-8, 9-12 and 13-16, and 1 virtual input for 16-up on all inputs.
You do not have to keep this device open for the video output of the card to work. If you open the device, set the input to input 2, and close it (without viewing any of the video), it will continue to show that input on the video out of the card. So you can change inputs simply by using v4l2 ioctls.
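As a quick sketch of that, the v4l2-ctl tool from the v4l-utils package can drive those ioctls from the command line. The device node and input number here are illustrative, and the snippet skips itself when no capture card is present:

```shell
# Switch the card's video output to input 2, then exit; the selection sticks
# because the driver keeps the setting after the device is closed.
if [ -c /dev/video0 ]; then
    v4l2-ctl --device=/dev/video0 --set-input=2
    v4l2-ctl --device=/dev/video0 --get-input
else
    echo "no capture card present; skipping"
fi
```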
This is useful if you want a live display on a CRT and a simple program that rotates through the inputs (or multi-up virtual inputs) at a few-second intervals.
You can still use vlc, mplayer or whatever to view this device (you can open it multiple times).
Now for the encoder devices. There's obviously one device for each physical input on the card. The driver will allow you to record MPEG-4 and MJPEG from the same device (but you must open it twice, once for each feed). The video format cannot be changed once recording starts. So if you open the device for MPEG-4 at full D1 resolution and 30fps, that's what you're going to get if you also open a simultaneous record for MJPEG.
However, it's good to note here that MJPEG will automatically skip frames when recording. This allows you to pipe the output to a network connection (e.g. MJPEG over HTTP) with no worry of the remote connection being overloaded on bandwidth.
However, this isn't so for MPEG-4. It is possible, if you are too slow at recording (not likely), to fall behind the card's internal buffer. I was not able to make this happen while writing the full frames to disk across 44 simultaneous records (4 cards of 16, 16, 8 and 4 ports).
Unlike any card previously supported by v4l2, the Bluecherry cards produce containerless MPEG-4 frames. Most v4l2 applications expect some sort of MPEG-2 stream, such as program or transport. Since these programs do not expect raw MPEG-4 frames, I don't know of any that are capable of playing the encoders directly (much less recording from them). You can do something simple like 'cat /dev/video1' and somehow pipe it to vlc (I haven't tested this), or write a program that just writes the frames to disk (I have tested this; most programs can play the raw m4v files produced from the driver).
However, since most people will record to disk, the easiest way is to write the video frames straight out to disk.
Now on to the audio. The cards produce what is known as G.723, which is a voice codec typically found on phone systems (especially VoIP).
Since Alsa currently doesn't have a format for G.723, the driver exposes it as unsigned 8-bit PCM audio. However, I can assure you that it isn't. I have sent a patch that was included in alsa-kernel (hopefully getting synced to mainline soon), but it only defines the correct format; it doesn't change the way you handle the data at all.
You must convert the G.723-24 data (3-bit samples at 8kHz) yourself. The example program I provide in my next post will show you how to do this, as well as how to convert it to MP2 audio and record all of it to a container format on disk for later playback.
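For a sense of what that involves, here is a sketch of just the unpacking step using standard Unix tools. The LSB-first bit order is my assumption, not something the driver documents, and real playback additionally needs a G.723 ADPCM decode to PCM, which is not shown:

```shell
# Sketch: unpack packed 3-bit G.723-24 codes from a byte stream.
# LSB-first bit order is an assumption; adjust for the real hardware layout.
printf '\321\377' > /tmp/g723.raw           # two sample bytes: 0xD1 0xFF
od -An -v -tu1 /tmp/g723.raw | tr -s ' ' '\n' | awk '
  NF {
      acc += $1 * (2 ^ nbits); nbits += 8   # append 8 new bits to the accumulator
      while (nbits >= 3) {                  # emit each complete 3-bit code
          printf "%d\n", acc % 8
          acc = int(acc / 8); nbits -= 3
      }
  }' > /tmp/g723.codes
cat /tmp/g723.codes                         # the five unpacked codes: 1 2 7 7 7
```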
Wednesday, June 16, 2010
Softlogic 6010 4/8/16 Channel MPEG-4 Codec Card Driver Released
As I've talked about before, the company I work for has been dedicated to producing stable video surveillance products based on Linux.
Bluecherry's primary device for their video surveillance applications is the Softlogic based MPEG-4 codec card, which is available in 4, 8 and 16 channel models. The original driver for this card, although available as Open Source, was pretty pathetic to say the least. Most of it was just a kludge of the Windows driver, exposing all of the functionality, but with little effort to make it Linux savvy.
That's where I came in. I've since rewritten the driver so that it makes use of Linux's Video4Linux2 and Alsa driver APIs. It's currently 90% functional, and many times more efficient than the original OEM driver.
Here is a quick run-down of some of the features and plus-ones against the original driver:
- Video4Linux2 interface allows easy use of existing capture software
- Alsa interface allows for easy audio capture (however, see G.723 caveats from my previous posts)
- Zero-copy in the driver. The original driver DMA'd and then copied the MPEG frames to userspace. The new driver makes use of v4l2 buffers and can DMA directly to an MMAP buffer for userspace.
- Simultaneous MPEG/MJPEG feed per channel, selectable via v4l2 format
- Standard v4l2 uncompressed video YUV display with multi-channel display format (4-up)
Now that the driver is nearing completion, it's about time to release it. I've done so via Launchpad.
If you are on an Ubuntu system, you can install the DKMS package from the PPA archive using these commands:
sudo add-apt-repository ppa:ben-collins/solo6x10
sudo apt-get update
sudo apt-get install solo6010-dkms
Note: I've only supplied this for Lucid right now, but if you download the .deb or the .tar.gz, you should be able to install it on any recent kernel.
Friday, June 11, 2010
Feedburner: Adding Flattr to your FeedFlare (Part: 2)
This is a follow up to my previous post: Feedburner: Adding Flattr to your FeedFlare.
I've been wrestling around with FeedBurner's FeedFlare API for a few nights now. Most notably I've had trouble getting some of the documented xPath functions to work, and dealing with what appears to be delays in updating the flare after you add it.
My goal was to add categories to the DynamicFlare href so you could pass those along to Flattr. The problem is that if you add something like ${a:category[1]/@term} to the href, and a:category[1] doesn't exist in your feed, it will not add the flare to your feed (sort of like a filter if the attribute proves false()).
In a final fit of anger, I decided to drop any passing of information from the DynamicFlare href other than the feedUrl. This in itself proved difficult, since ${feedUrl} doesn't work as advertised. I instead opted to pass ${a:link[@rel="self"]/@href}, which appears to work on my feed. YMMV.
I've gotten rid of the files I linked to in my last post so people don't use them. For the quick and dirty, here's the URL to use for Personal FeedFlare now:
http://www.swissdisk.com/~bcollins/feedflare/flattr-me-dynamic-v2.php
There are two options you can pass to this script:
- uid: Your Flattr UID (required)
- lng: Your preferred language (defaults to en_GB, aka English)
I used this for mine:
http://www.swissdisk.com/~bcollins/feedflare/flattr-me-dynamic-v2.php?uid=17833&lng=en_GB
That's it! The new script will parse the feed in the second script and pass up to 980 characters as the desc, up to 80 characters of the title and all of the categories as tags.
You can also check here for all the PHP-Source files so you can modify to your liking.
Tuesday, June 8, 2010
Feedburner: Adding Flattr to your FeedFlare
Update 2010-06-11: This article and information with-in are superseded by Feedburner: Adding Flattr to your FeedFlare (Part: 2)
I've added Flattr to my blog and also wanted to add it to my feedburner FeedFlare, but alas, no one has created one yet. So I've gone through the trouble of doing it for you :)
First, I went to the Feedburner FeedFlare API documentation. I won't go into the details of writing your own flare, but I opted for the dynamic type, since it would allow me to show how many times one of my blog posts had been flattered.
Second, I dove into the Flattr JavaScript API. I don't think they recommend this, but it's the only way I could get to the button information contained in their default IFrame.
Third, I downloaded the PHP Simple HTML DOM Parser. There's probably a simpler way to parse the IFrame sent back from Flattr, but I opted for this method since it was pretty straight forward.
For the lazy, you can use my existing FeedFlare URLs as your own. You will need to go to your feedburner page, log in, select the feed you want to add this to, click on "Optimize" and then "FeedFlare". Below the stock list you will see a place to enter a URL. Enter the URL below and BE SURE to replace "your_uid" with your Flattr UID, else you won't get the money.
http://www.swissdisk.com/~bcollins/flattr-me-dynamic.php?uid=your_uid
For the interested, here are the two files I've created. First is the dynamic PHP FeedFlare file:
<FeedFlareUnit>
<Catalog>
<Title>Flattr Me</Title>
<Description>
Adds a Flattr link including flattr count for each feed unit.
</Description>
<Link href="http://www.swissdisk.com/~bcollins/flattr-me-dynamic.php?uid=flattr_uid"/>
<Author email="benmcollins13@gmail.com">Ben Collins</Author>
</Catalog>
<DynamicFlare href="http://www.swissdisk.com/~bcollins/flattr-me-static.php?uid=<?
print $_GET['uid']; ?>&title=${title}&link=${link}"/>
<Sample>Flattr (11)</Sample>
</FeedFlareUnit>
Note that the <Link> element references another PHP script, and that this is in fact PHP. This allows us to pass along the Flattr UID to the second script, which is the one that actually produces the FeedFlare (feedburner periodically checks the second URL it gets from this file for updates to the FeedFlare).
Now, the second script is the one that uses the simple_html_dom.php library I spoke of. You will see it referenced in the file below. Basically I pack the data just like the original Flattr load.js script does, and request the Flattr button, and then rip a few bits of information from it:
<?
include_once("simple_html_dom.php");
$btn_url = "http://api.flattr.com/button/view/";
$data = "button=compact&uid=" . $_GET['uid'] .
"&url=" . $_GET['link'] . "&lng=en_US&hide=0&title=" .
$_GET['title'] . "&cat=text&tag=&desc=";
$html = file_get_html($btn_url . bin2hex($data));
$els = $html->find("span.flattr-count");
$count = $els[0]->innertext;
$els = $html->find("a.flattr-pop");
$link = $els[0]->href;
$els = $html->find("span.flattr-link");
$txt = $els[0]->innertext;
?>
<FeedFlare>
<Text><? print "$txt ($count)"; ?></Text>
<Link href="<? print $link; ?>"/>
</FeedFlare>
Those familiar with Flattr will note that I did not pass in the description, which could probably be added in the first script (or at least a shortened version of it) and then passed to the button. In this case, the description would usually be the first few hundred characters of the post.
Hope all works well. Please post back if you take the time to add the description to this!
Friday, June 4, 2010
PHP: Sending Motion-JPEG
As you may know from past posts, I was trying to send Motion-JPEG from a PHP script. This proved (for many reasons) not so easy. After I conquered writing PHP extension modules, I was still left with nuances in PHP that made it difficult to send MJPEG from my script.
Here's the basic run-down of difficulties:
- PHP buffers output to the client and this keeps you from doing continuous streams of data easily
- PHP doesn't allow you to send headers after it thinks the headers have already been sent
- Apache has some other handlers that also cause buffering
- Apache does some client negotiation that conflicts with MJPEG (mod_gzip)
Searching the Eentarnets did not produce good results on how to handle this. At least, not in a single place and easily findable. So here's my solution for others to use:
<?
# Used to separate multipart
$boundary = "my_mjpeg";
# We start with the standard headers. PHP allows us this much
header("Cache-Control: no-cache");
header("Cache-Control: private");
header("Pragma: no-cache");
header("Content-type: multipart/x-mixed-replace; boundary=$boundary");
# From here out, we no longer expect to be able to use the header() function
print "--$boundary\n";
# Set this so PHP doesn't timeout during a long stream
set_time_limit(0);
# Disable Apache and PHP's compression of output to the client
@apache_setenv('no-gzip', 1);
@ini_set('zlib.output_compression', 0);
# Set implicit flush, and flush all current buffers
@ini_set('implicit_flush', 1);
for ($i = 0; $i < ob_get_level(); $i++)
ob_end_flush();
ob_implicit_flush(1);
# The loop, producing one jpeg frame per iteration
while (true) {
# Per-image header, note the two new-lines
print "Content-type: image/jpeg\n\n";
# Your function to get one jpeg image
print get_one_jpeg();
# The separator
print "--$boundary\n";
}
?>
That's it in a nutshell. Make sure that your PHP script does not contain any newlines or data before or after the PHP enclosures (<? ... ?>).
The joy of writing a php5 module
As a follow-up to my last post, I wanted to give a quick update.
As it turns out, I ended up writing a php5 Zend module to wrap up some functions I use to access v4l2 devices. I have to say that writing a php5 module was pretty straightforward. Big thanks to the Extension Writing tutorial I found, which was well written, and did not leave me with any questions.
I was able to get the module ready in a few hours, and spent the rest of this morning cleaning it up and tweaking it a bit.
Now I'm completely able to read my v4l2 devices and mjpeg stream them from my PHP script :)
Wednesday, June 2, 2010
Request-for-help: PHP and Video4Linux
As it turns out, I do not like writing web applications. Give me registers and DMA, keep the CSS and JS...thanks.
Anyway, I have to find a way to feed an MJPEG output from a PHP script. WAIT! I know this sounds easy, and if not for all the caveats of what I have to adhere to, it would be very simple...maybe.
It seems that PHP doesn't have a way to use ioctl()s. I can open() a v4l2 device just fine, and read() from it with ease, but you're SOL if you want to do something cool with that file handle in PHP.
I need to be able to do ioctl()'s because my v4l2 device requires at least one so I can put it into MJPEG format (as opposed to the default MPEG format). I can serve up these JPEG/MJPEG's through apache perfectly using a C program I've written.
However, I have to use PHP because the rest of the web application is written in PHP, and there are already authentication mechanisms storing credentials in PHP sessions. I don't want to have to parse PHP sessions in a C program.
The normal method I found for doing JPEG from a PHP script worked like this:
<?
header("Content-Type: application/jpeg");
passthru("/usr/bin/grabjpeg");
?>
This works perfectly, except I want to be able to do an MJPEG feed, which requires sending one image after the next with headers in between. PHP doesn't like this much, nor does it like the fact that I want all of these headers to come directly from my C program and not from the PHP script. I also do not want to call grabjpeg for each frame, since that's too much overhead in between frames.
What ends up happening is that my headers from the C program are sent as part of the content that the client thinks is the JPEG file.
Right now, I can only see one way to handle this, and that's to write a PHP module to expose libv4l, but I'm open to suggestions on being able to call an ioctl() from within a PHP script.
Friday, May 28, 2010
Dear Users: Do not withhold information from developers, kthxbye
I accidentally stumbled upon a debugging case today that seems to be a common problem. I won't call the user out directly, but he was a case study in what not to do when you want help from a developer like myself.
The basic volley started off with the usual chit chat in an IRC channel:
<User> Can someone help me compile a module for my kernel?
<Me> Sure, what seems to be the trouble?
So off we went with some IRC and PasteBin exchanges of his compile problem. I looked at the source code for the driver he was trying to compile, and it was a one-line obvious fix to get it working with a newer kernel such as the one found on the Ubuntu 10.04 Lucid system he was working on.
So now the module compiled, and he tried loading it. Hmm...the module disagreed with symbols from modules on his running system, videodev to be exact.
Weird. That shouldn't happen. I asked him if he had compiled or installed different versions of v4l than what his system came with. He didn't recall. However, after getting him to pastebin "ls -lR" of his modules directory, it was apparent that 3 days ago, he did in fact completely replace the drivers/media install.
This meant that those modules didn't match the stock headers that came with his running kernel. This took a very short time for him, but considerable time for me (volunteer time) to find out. After finding out, he admitted to replacing the modules.
Now it was obvious that he was embarrassed to admit he had junked up his system, and even more embarrassing that I caught him in a lie. He could have saved time for both of us. If I had given up after helping with his initial problem, he would have been stuck not knowing how to fix it.
So the moral of the story here is: don't hide information from people trying to help you. Tell all the gross details. If you fed your cat buttermilk waffles off your keyboard, it might help to mention that if your 'H' key is stuck.
Wednesday, May 26, 2010
Debugging: The elusive deadlock
It's very infrequent that I come across a deadlock bug in my code that 1) isn't easy to find, and 2) is very easy to reproduce.
I had a user report a bug in my solo6010 driver where he has two cards installed in the system. He is on a Core2Duo. If he starts mplayer up on each display for the two cards he has installed (2 mplayer instances), his machine instantly deadlocks and spews to the console.
At first I wasn't able to easily reproduce this. I'm on a Core2Quad, but since I have 4 cards installed I decided to start an mplayer instance for each display device for each card (4 mplayer instances). Oddly enough, I also deadlocked and spewed softlockup messages to the console.
Do you see where this is going? I decided, for clarity, to disable two of my cores:
echo 0 | sudo tee /sys/devices/system/cpu/cpu2/online
echo 0 | sudo tee /sys/devices/system/cpu/cpu3/online
Sure enough, it only took two mplayer instances to deadlock my machine this time. Weird! Now, my driver is currently able to pull 44 feeds from 4 cards at once for the MPEG feeds. Here, in this case, I am deadlocking with just two YUV feeds from the uncompressed video of the card. This code is much less complex, and the locking even less so. No parts of the driver share data between card instances (each card instance has its own data and locks).
Upon further investigation I've noticed that this deadlock appears to happen in spin_unlock_irqrestore() during wake_up().
After carefully tracing the code, it was vaguely apparent that my logic around the wakeup routine, used when a thread tries to grab a frame from the hardware, was a little off. I was using a different wait structure for each file handle, when I should have been using one per card. Not to mention, I was not taking advantage of the video sync IRQ to send a wakeup to the thread so that it knew a new frame was ready to grab (this let me spin less, and guaranteed the threads would be woken when a new frame was ready).
Reworking this logic just a bit cleared the deadlock. Honestly, I'm not entirely sure how the scenario caused a deadlock; it appears to be something in the underlying logic of the wait/wake_up routines. I won't argue, though: it's fixed now, and my code is cleaner and more efficient, so I won't ask too many questions.
Saturday, May 22, 2010
Review: Softlogic 6010 based MPEG-4/G.723 compression cards
So the company I work for (Bluecherry, LLC) is busy developing some products around the Softlogic 6010 based compression card. My job there has been to rewrite the driver from scratch in order to make it more Linux friendly. So to make things clear, I am writing this review from a programmer's perspective. I want to point out that I am not an MPEG expert, so I may skimp on some of the encoder details.
Let's start off with some specs. The base card supports full D1-quad compression of video into MPEG-4 video format. What this means is that it can encode 704x480 sized video at a rate of 120fps for NTSC, or 704x576 at a rate of 100fps for PAL. This breaks down to 4 full streams at 30fps and 25fps respectively. Alternately, it can do CIF encoding (1/4 the size of D1) at 4 times that frame rate, or, for the math-lazy, 16 channels at 30fps at a 320x240 frame size.
The card can be purchased in 4, 8 or 16 channel input models. So to take advantage of all 16 channels on the top model, you would either have to record in CIF mode (320x240) or reduce the frame interval to get 7.5fps per channel for full D1 mode (704x480). I will be speaking mostly in NTSC, but the card does support PAL, so do the conversions as we go.
The card allows for the usual MPEG encoding settings including GOP (Group of Pictures), Quantization and Intervals. Intervals are sort of the opposite of frames-per-second, but correlate the same way. An interval of 1 means that the encoder captures every frame, while an interval of 3 means it skips 2 frames between every frame it encodes. The video muxer on the card performs at 30fps, so the interval setting will decide how many of these frames get encoded.
The encoder itself performs quite well. It performs all encoding to an on-board SDRAM chip, and can DMA the frames directly to host memory, which is great for performance. The original driver did not take advantage of this, since it copied the frames to user space. The new driver I've written makes use of v4l2 and its videobuf-dma-contiguous framework, and thus allows for memory-mapped buffers shared with userspace. This gives us zero-copy to userspace.
The encoder also supports side-by-side MJPEG compression of video frames. So while you can be recording the compressed MPEG-4 to disk, you can also frame grab JPEG images. This is useful for tools that want to do such frame grabbing for video analytics, or for live viewing over a web server (it's very easy to send frame grabs via an MJPEG cgi script).
All of this is built properly now on top of Linux's v4l2 API. Unfortunately the API does not expect compression cards to pipe MPEG-4 video, so most clients using v4l2 expect compressed video to be either MJPEG or MPEG-1/2 streams of some sort.
Currently the only drawback from the MPEG encoder is that the frames are self-standing MPEG-4 video frames. I have to add a header to the key frames for them to be usable by most decoders.
Overall the video capture is great. I've run 44 simultaneous records (16, 16, 8, 4 channel cards) on a Core2Duo with a system load average of 1.65, and only about 10% CPU usage. Most of the load is disk I/O.
Each encoder input also supports a graphical overlay that can be programmed at pixel level with varying colors. This is great for textual overlays. Currently we use it to place a descriptive name on the recording along with a timestamp.
In addition to the encoders, the card supports one uncompressed display port. It's currently exposed via v4l2 as a standard analog YUV device. It can be configured to show any of the input ports in tons of configurations, so you can do things like a 4-up display. This live display also supports a graphical overlay.
The display is sent to the video-out port on the card (hard-wired), so it can be hooked to a monitor as well (good for surveillance applications such as what Bluecherry offers).
Finally, we'll discuss my least favorite part of this card. While it's not a killer, it is just odd that the card supports sound only in G.723 format. For surveillance applications this is just fine. It delivers 3-bit samples at an 8kHz sample rate, which works out to 24kbps. While this is good for bandwidth, it's bad for anything that needs better audio quality. Not to mention that storing the audio and video together in any sane format requires converting G.723 to linear PCM.
However, the G.723 to linear PCM conversion isn't much overhead on performance, and neither is the encoding to 16kHz MP2 audio, which is how we store it for our surveillance products. Overall, our format is MPEG-4 video and MP2 audio in a Matroska/mkv container. This is exactly how it was stored in my 44 stream example above.
So Pros:
- Fast and efficient
- Can handle multiple inputs easily
- The new driver works well with v4l2 and alsa
- Perfect for security applications
- Nice OSD capabilities
- Motion detection supported per input
- Side-by-side MPEG-4 and JPEG capture modes per input
And Cons:
- MPEG-4 video only (the newer SOLO-6110 will support H.264)
- Low quality audio is not great for anything other than special applications (no TV DVR)
- G.723 audio format has been obsoleted twice since it was introduced. Nothing uses it, so you must always re-encode it.
Friday, May 21, 2010
Video4Linux2 Hardware Motion Detection Support
In about a week or so I'll be making a proposal for V4L2 to have API support for hardware that offers motion detection. Since my experience with this is limited to only one type of hardware, I'm hoping to gain feedback on making sure that the approach I'm offering is as generic as possible.
I'll describe the hardware that I'm working with, which is a Softlogic 6010 MPEG-4/G.723 encoder board supporting 4, 8 and 16 input channels (all of which can be encoded at once). Note that all of this applies to the SOLO-6110 card as well (the H.264 variant).
The motion detection exposed by the SOLO-6010 is on a per-input basis. It can be configured, when motion detection is enabled, to either signal start of motion events only, or signal start and stop events with a configurable delay after actual motion has stopped (i.e. it will not send the stop signal until there is no motion for n amount of seconds).
Next, SOLO-6010 allows you to set a threshold for when the hardware will detect an event. In my case, the higher the threshold, the less sensitive. It has a range of 0 (anal) to 65535 (off) with a default of 768.
Exposing this via v4l2 controls is quite simple. In my current version of the solo6010 driver, I expose this via private CIDs (Control IDs) which can be easily converted to native CIDs in v4l2.
#define V4L2_CID_MOTION_ENABLE (V4L2_CID_PRIVATE_BASE+0)
#define V4L2_CID_MOTION_THRESHOLD (V4L2_CID_PRIVATE_BASE+1)
#define V4L2_CID_MOTION_MODE (V4L2_CID_PRIVATE_BASE+2)
#define V4L2_CID_MOTION_EASE_OFF (V4L2_CID_PRIVATE_BASE+3)
In this case, V4L2_CID_MOTION_ENABLE is a boolean to turn motion detection on or off, V4L2_CID_MOTION_THRESHOLD is the threshold value I spoke of (slider with said range), V4L2_CID_MOTION_MODE is a menu control for "Start events only" and "Start and stop events" and V4L2_CID_MOTION_EASE_OFF is the seconds of non-motion required before the stop event is triggered.
Now, I could combine V4L2_CID_MOTION_ENABLE and V4L2_CID_MOTION_MODE into a single menu control with "Disabled" as one option, but I'm not sure what the consensus would be. It could be confusing as a standard control for hardware that only supports an on/off toggle for this feature.
Note that in "Start events only" mode, my hardware will continually produce motion events for as long as the card sees motion, so I can emulate V4L2_CID_MOTION_EASE_OFF and a stop event in software.
Whether it is a good idea to always offer this behavior from the control, and have the v4l2 middle layer decide between using the hardware or its own software implementation, is up for discussion. I'm all for making it transparent to the user, with the middle layer handling the guts and each driver deciding whether to let the middle layer do it or to expose its hardware support.
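To make this concrete, here is a hedged sketch of how a userspace application might drive these private controls. The make_ctrl helper and the mirrored struct definition are illustrative only; a real program would include <linux/videodev2.h> and follow each call with ioctl(fd, VIDIOC_S_CTRL, &ctrl) on the capture node.

```c
#include <assert.h>

/* Mirrors of <linux/videodev2.h> definitions, reproduced so this
 * sketch compiles standalone. */
#define V4L2_CID_PRIVATE_BASE 0x08000000
struct v4l2_control { unsigned int id; int value; };

/* The driver's private controls, as defined above. */
#define V4L2_CID_MOTION_ENABLE    (V4L2_CID_PRIVATE_BASE+0)
#define V4L2_CID_MOTION_THRESHOLD (V4L2_CID_PRIVATE_BASE+1)
#define V4L2_CID_MOTION_MODE      (V4L2_CID_PRIVATE_BASE+2)
#define V4L2_CID_MOTION_EASE_OFF  (V4L2_CID_PRIVATE_BASE+3)

/* Fill the argument for a VIDIOC_S_CTRL call (hypothetical helper). */
struct v4l2_control make_ctrl(unsigned int id, int value)
{
	struct v4l2_control ctrl;

	ctrl.id = id;
	ctrl.value = value;
	return ctrl;
}
```

Enabling detection at the default threshold would then be two calls, make_ctrl(V4L2_CID_MOTION_ENABLE, 1) and make_ctrl(V4L2_CID_MOTION_THRESHOLD, 768), each handed to VIDIOC_S_CTRL.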
Now we just need to make userspace aware of these events. I found the easiest way was to define some extra flags for struct v4l2_buffer that get set during dqbuf.
#define V4L2_BUF_FLAG_MOTION_ON 0x00000400
#define V4L2_BUF_FLAG_MOTION_START 0x00000800
#define V4L2_BUF_FLAG_MOTION_STOP 0x00001000
The reason for V4L2_BUF_FLAG_MOTION_ON is that we need userspace to be able to tell that motion detection is on without querying the controls every second or two. Remember that controls can be changed even while a recorder is running (and in the case of motion detection, I suspect that's a wanted feature).
So if userspace is reading packets, it knows that motion detection is on or off depending on that flag, and can act accordingly. The start and stop flags are self-explanatory.
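As an illustration, here is how an application might classify a buffer it has just dequeued; the classify helper and the enum are my own, not part of any API:

```c
#include <assert.h>

/* Buffer flags as defined earlier in this post. */
#define V4L2_BUF_FLAG_MOTION_ON    0x00000400
#define V4L2_BUF_FLAG_MOTION_START 0x00000800
#define V4L2_BUF_FLAG_MOTION_STOP  0x00001000

enum motion_event { MOTION_DISABLED, MOTION_NONE, MOTION_START, MOTION_STOP };

/* Classify the flags of a buffer returned by VIDIOC_DQBUF. */
enum motion_event classify(unsigned int flags)
{
	if (!(flags & V4L2_BUF_FLAG_MOTION_ON))
		return MOTION_DISABLED;	/* detection is off */
	if (flags & V4L2_BUF_FLAG_MOTION_START)
		return MOTION_START;	/* start recording */
	if (flags & V4L2_BUF_FLAG_MOTION_STOP)
		return MOTION_STOP;	/* stop recording */
	return MOTION_NONE;		/* on, but no event this buffer */
}
```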
Now, this is a good reason to promote a software-side (perhaps in libv4l2?) ease-off for motion detection. Without adding another flag, there's no way to know whether motion detection is in start-only or start-stop mode. If we always implement the ease-off, then we know we'll eventually get a stop event, whether or not the hardware supports it.
Moving back to threshold values: the SOLO-6010 actually supports a motion detection grid with a block size of 32x32 pixels. The SOLO-6010's NTSC viewable field is 704x480 (704x576 for PAL), so that's either a 22x15 or 22x18 grid of blocks, each of which can have an individual threshold setting. I'm still up in the air about how to do this in a standard v4l2 API. For the SOLO-6010 I am using the low 16 bits of the control value to pass a threshold level, and the high 16 bits to select the block being affected (0xff000000 masking the x value and 0x00ff0000 the y value on the grid). This works well in practice but is obviously not generic enough.
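The bit layout just described can be sketched as follows; the helper names are mine, not the driver's:

```c
#include <assert.h>

/* Pack a grid block's coordinates and threshold into one control
 * value: top byte = x, next byte = y, low 16 bits = threshold. */
unsigned int pack_block_threshold(unsigned int x, unsigned int y,
				  unsigned int thresh)
{
	return ((x & 0xff) << 24) | ((y & 0xff) << 16) | (thresh & 0xffff);
}

unsigned int block_x(unsigned int v)      { return (v >> 24) & 0xff; }
unsigned int block_y(unsigned int v)      { return (v >> 16) & 0xff; }
unsigned int block_thresh(unsigned int v) { return v & 0xffff; }
```

For the 22x15 NTSC grid, x ranges over 0-21 and y over 0-14, both of which fit comfortably in a byte.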
Well that's all I have for today.
Tuesday, May 11, 2010
Writing an ALSA driver: PCM handler callbacks
So here we are on the final chapter of the ALSA driver series. We will finally fill in the meat of the driver with some simple handler callbacks for the PCM capture device we've been developing. In the previous post, Writing an ALSA driver: Setting up capture, we defined my_pcm_ops, which was used when calling snd_pcm_set_ops() for our PCM device. Here is that structure again:
static struct snd_pcm_ops my_pcm_ops = {
.open = my_pcm_open,
.close = my_pcm_close,
.ioctl = snd_pcm_lib_ioctl,
.hw_params = my_hw_params,
.hw_free = my_hw_free,
.prepare = my_pcm_prepare,
.trigger = my_pcm_trigger,
.pointer = my_pcm_pointer,
.copy = my_pcm_copy,
};
First let's start off with the open and close methods defined in this structure. This is where your driver gets notified that someone has opened the capture device (file open) and subsequently closed it.
static int my_pcm_open(struct snd_pcm_substream *ss)
{
ss->runtime->hw = my_pcm_hw;
ss->private_data = my_dev;
return 0;
}
static int my_pcm_close(struct snd_pcm_substream *ss)
{
ss->private_data = NULL;
return 0;
}
This is the minimum you would do for these two functions. If needed, you would allocate private data for this stream and free it on close.
For the ioctl handler, unless you need something special, you can just use the standard snd_pcm_lib_ioctl callback.
The next three callbacks handle hardware setup.
static int my_hw_params(struct snd_pcm_substream *ss,
struct snd_pcm_hw_params *hw_params)
{
return snd_pcm_lib_malloc_pages(ss,
params_buffer_bytes(hw_params));
}
static int my_hw_free(struct snd_pcm_substream *ss)
{
return snd_pcm_lib_free_pages(ss);
}
static int my_pcm_prepare(struct snd_pcm_substream *ss)
{
return 0;
}
Since we've been using standard memory allocation routines from ALSA, these functions stay fairly simple. If there are differences between versions of the hardware supported by your driver, you can adjust the ss->hw structure here (e.g. if one version of your card supports 96kHz, but the rest only support 48kHz max).
The PCM prepare callback should handle anything your driver needs to do before alsa-lib can ask it to start sending buffers. My driver doesn't do anything special here, so I have an empty callback.
This next handler tells your driver when ALSA is going to start and stop capturing buffers from your device. Most likely you will enable and disable interrupts here.
static int my_pcm_trigger(struct snd_pcm_substream *ss,
int cmd)
{
struct my_device *my_dev = snd_pcm_substream_chip(ss);
int ret = 0;
switch (cmd) {
case SNDRV_PCM_TRIGGER_START:
// Start the hardware capture
break;
case SNDRV_PCM_TRIGGER_STOP:
// Stop the hardware capture
break;
default:
ret = -EINVAL;
}
return ret;
}
Let's move on to the handlers that are the work horse in my driver. Since the hardware that I'm writing my driver for cannot directly DMA into memory that ALSA has supplied for us to communicate with userspace, I need to make use of the copy handler to perform this operation.
static snd_pcm_uframes_t my_pcm_pointer(struct snd_pcm_substream *ss)
{
struct my_device *my_dev = snd_pcm_substream_chip(ss);
return my_dev->hw_idx;
}
static int my_pcm_copy(struct snd_pcm_substream *ss,
int channel, snd_pcm_uframes_t pos,
void __user *dst,
snd_pcm_uframes_t count)
{
struct my_device *my_dev = snd_pcm_substream_chip(ss);
/* One frame is one byte here (8-bit mono), so pos and count can be
 * used directly as byte offsets. copy_to_user() returns the number
 * of bytes it could NOT copy, so map any shortfall to -EFAULT. */
if (copy_to_user(dst, my_dev->buffer + pos, count))
return -EFAULT;
return 0;
}
So here we've defined a pointer function, which is called by the ALSA middle layer (on behalf of userspace) to find out where the hardware is in writing to the buffer.
Next, we have the actual copy function. Note that count and pos are measured in frames, not bytes (for this driver one frame is one byte, so no conversion is needed). The buffer shown here is assumed to have been filled during the interrupt.
Speaking of interrupt, that is where you should also signal to ALSA that you have more data to consume. In my ISR (interrupt service routine), I have this:
snd_pcm_period_elapsed(my_dev->ss);
And I think we're done. Hopefully now you have at least the stubs in place for a working driver, and will be able to fill in the details for your hardware. One day I may come back and write another post on how to add mixer controls (e.g. volume).
Hope this series has helped you out!
Tuesday, May 4, 2010
Writing an ALSA driver: PCM Hardware Description
Welcome to the fourth installment in my "Writing an ALSA Driver" series. In this post, we'll dig into the snd_pcm_hardware structure that will be used in the next post which will describe the PCM handler callbacks.
Here is a look at the snd_pcm_hardware structure I have for my driver. It's fairly simplistic:
static struct snd_pcm_hardware my_pcm_hw = {
.info = (SNDRV_PCM_INFO_MMAP |
SNDRV_PCM_INFO_INTERLEAVED |
SNDRV_PCM_INFO_BLOCK_TRANSFER |
SNDRV_PCM_INFO_MMAP_VALID),
.formats = SNDRV_PCM_FMTBIT_U8,
.rates = SNDRV_PCM_RATE_8000,
.rate_min = 8000,
.rate_max = 8000,
.channels_min = 1,
.channels_max = 1,
.buffer_bytes_max = (32 * 48),
.period_bytes_min = 48,
.period_bytes_max = 48,
.periods_min = 1,
.periods_max = 32,
};
This structure describes how my hardware lays out the PCM data for capturing. As I described before, it writes out 48 bytes at a time for each stream, into 32 pages. A period basically describes an interrupt: it sums up the "chunk" size that the hardware supplies data in.
This hardware only supplies mono data (1 channel) at an 8000Hz sample rate. Most hardware seems to work in the range of 8000 to 48000, and there is a define for that: SNDRV_PCM_RATE_8000_48000. This is a bitmask field, so you can add whatever rates your hardware supports.
My driver describes this data as unsigned 8-bit (it's actually 3-bit signed G.723-24, but ALSA doesn't support that, so I fake it). The most common PCM data is signed 16-bit little-endian (S16_LE). You would use whatever your hardware supplies, which can be more than one type; since the formats field is a bitmask, you can define multiple data formats.
Lastly, the info field describes some middle layer features that your hardware/driver supports. What I have here is the base for what most drivers will supply. See the ALSA docs for more details. For example, if your hardware has stereo (or multiple channels) but it does not interleave these channels together, you would not have the interleave flag.
Next post will give us some handler callbacks. It will likely be split into two posts.
Sunday, May 2, 2010
Writing an ALSA driver: Setting up capture
Now that we have an ALSA card initialized and registered with the middle layer we can move on to describing to ALSA our capture device. Unfortunately for anyone wishing to do playback, I will not be covering that since my device driver only provides for capture. If I end up implementing the playback feature, I will make an additional post.
So let's get started. ALSA provides a PCM API in its middle layer. We will be making use of this to register a single PCM capture device that will have a number of subdevices depending on the low level hardware I have. NOTE: All of the initialization below must be done just before the call to snd_card_register() in the last posting.
struct snd_pcm *pcm;
ret = snd_pcm_new(card, card->driver, 0, 0, nr_subdevs,
&pcm);
if (ret < 0)
return ret;
In the above code we allocate a new PCM structure. We pass the card we allocated beforehand. The second argument is a name for the PCM device, which I have just conveniently set to the same name as the driver. It can be whatever you like. The third argument is the PCM device number. Since I am only allocating one, it's set to 0.
The fourth and fifth arguments are the number of playback and capture streams associated with this device. For my purposes, playback is 0 and capture is the number I have detected that the card supports (4, 8 or 16).
The last argument is where ALSA allocates the PCM device. It will associate any memory for this with the card, so when we later call snd_card_free(), it will clean up our PCM device(s) as well.
Next we must associate the handlers for capturing sound data from our hardware. We have a struct defined as such:
static struct snd_pcm_ops my_pcm_ops = {
.open = my_pcm_open,
.close = my_pcm_close,
.ioctl = snd_pcm_lib_ioctl,
.hw_params = my_hw_params,
.hw_free = my_hw_free,
.prepare = my_pcm_prepare,
.trigger = my_pcm_trigger,
.pointer = my_pcm_pointer,
.copy = my_pcm_copy,
};
I will go into the details of how to define these handlers in the next post, but for now we just want to let the PCM middle layer know to use them:
snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE,
&my_pcm_ops);
pcm->private_data = mydev;
pcm->info_flags = 0;
strcpy(pcm->name, card->shortname);
Here, we first set the capture handlers for this PCM device to the one we defined above. Afterwards, we also set some basic info for the PCM device such as adding our main device as part of the private data (so that we can retrieve it more easily in the handler callbacks).
Now that we've made the device, we want to initialize the memory management associated with the PCM middle layer. ALSA provides some basic memory handling routines for various functions. We want to make use of them, since doing so reduces the amount of code we write and makes working with userspace that much easier.
ret = snd_pcm_lib_preallocate_pages_for_all(pcm,
SNDRV_DMA_TYPE_CONTINUOUS,
snd_dma_continuous_data(GFP_KERNEL),
MAX_BUFFER, MAX_BUFFER);
if (ret < 0)
return ret;
The MAX_BUFFER is something we've defined earlier and will be discussed further in the next post. Simply put, it's the maximum size of the buffer in the hardware (the maximum size of data that userspace can request at one time without waiting on the hardware to produce more data).
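For reference, given the layout described in the first post (32 pages of 48 bytes per channel), a plausible definition would be the one below. The intermediate macro names are my assumption, but the total matches the .buffer_bytes_max value used in the hardware description.

```c
#include <assert.h>

/* One period is one interrupt's worth of data for a channel (48
 * bytes), and the hardware ring holds 32 of them. */
#define PERIOD_BYTES 48
#define PERIODS_MAX  32
#define MAX_BUFFER   (PERIODS_MAX * PERIOD_BYTES)
```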
We are using the simple continuous buffer type here. Your hardware may support DMAing directly into the buffers, in which case you would use something like snd_dma_pci_data() along with your PCI device to initialize this. I'm using standard buffers because my hardware requires me to move the data around manually.
Next post we'll actually define the hardware and the handler callbacks.
Saturday, May 1, 2010
Writing an ALSA driver: The basics
In my last post I described a bit of hardware that I am writing an ALSA driver for. In this installment, I'll dig a little deeper into the base driver. I won't go into the details of the module and PCI initialization that was already present in my driver (I developed the core and v4l2 components first, so all of that is taken care of).
So first off I needed to register with ALSA that we actually have a sound card. This bit is easy, and looks like this:
struct snd_card *card;
ret = snd_card_create(SNDRV_DEFAULT_IDX1, "MySoundCard",
THIS_MODULE, 0, &card);
if (ret < 0)
return ret;
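One aside on the second argument: since I typically have more than one of these cards installed, I give each instance a unique ID. A hedged sketch of how that string might be built (the helper and the index variable are illustrative, not from the actual driver):

```c
#include <stdio.h>

/* Build a per-instance card ID such as "MySoundCard0"; the result
 * would be passed as the second argument to snd_card_create(). */
void make_card_id(char *buf, size_t len, int index)
{
	snprintf(buf, len, "MySoundCard%d", index);
}
```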
This asks ALSA to allocate a new sound card with the name "MySoundCard". This is also the name that appears in /proc/asound/ as a symlink to the card ID (e.g. "card0"). In my particular instance I actually append an ID number to the name, so it ends up being "MySoundCard0". This is because I can, and typically do, have more than one of these cards installed at a time. I notice some other sound drivers do not do this, probably because they don't expect more than one to be installed at a time (think HDA, which is usually embedded on the motherboard, and so won't have two or more inserted into PCIe slots). Next, we set some of the properties of this new card.
strcpy(card->driver, "my_driver");
strcpy(card->shortname, "MySoundCard Audio");
sprintf(card->longname, "%s on %s IRQ %d", card->shortname,
pci_name(pci_dev), pci_dev->irq);
snd_card_set_dev(card, &pci_dev->dev);
Here, we've assigned the name of the driver that handles this card, which is typically the same as the actual name of your driver. Next is a short description of the hardware, followed by a longer description. Most drivers seem to set the long description to something containing the PCI info. If you have some other bus, then the convention would follow to use information from that particular bus. Finally, set the parent device associated with the card. Again, since this is a PCI device, I set it to that.
Now to set this card up in ALSA along with a decent description of how the hardware works. We add the next bit of code to do this:
static struct snd_device_ops ops = { NULL };
ret = snd_device_new(card, SNDRV_DEV_LOWLEVEL, mydev, &ops);
if (ret < 0)
return ret;
We're basically telling ALSA to create a new card that is a low level sound driver. The mydev argument is passed as the private data that is associated with this device, for your convenience. We leave the ops structure as a no-op here for now.
Lastly, to complete the registration with ALSA:
if ((ret = snd_card_register(card)) < 0)
return ret;
ALSA now knows about this card, and lists it in /proc/asound/ among other places such as /sys. We still haven't told ALSA about the interfaces associated with this card (capture/playback). This will be discussed in the next installment. One last thing, when you cleanup your device/driver, you must do so through ALSA as well, like this:
snd_card_free(card);
This will cleanup all items associated with this card, including any devices that we will register later.
Friday, April 30, 2010
Writing an ALSA driver
Over the past week I've been writing an ALSA driver for an MPEG-4 capture board (4/8/16 channel). What I discovered is there are not many good documents on the basics of writing a simple ALSA driver. So I wanted to share my experience in the hopes that it would help others.
My driver needed to be pretty simple. The encoder produced 8kHz mono G.723-24 ADPCM. To save you the Wikipedia trip: that's 3 bits per sample, or 24,000 bits per second. The card produced this at a rate of 128 samples per interrupt (48 bytes) for every channel available (individual channels cannot be disabled).
The card delivered this data in a 32KB buffer, split into 32 pages. Each page was written as 20 channel slots of 48 bytes each, which took up 960 bytes of the 1024-byte page (the hardware can fill up to that many slots, but for my purposes only 4, 8 or 16 channels of encoded data were present, depending on the capabilities of the card).
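The arithmetic for finding one channel's data inside that ring can be sketched as follows; the helper name is hypothetical:

```c
#include <assert.h>

#define HW_PAGE_BYTES  1024	/* one hardware page */
#define SLOT_BYTES     48	/* one channel's samples per interrupt */
#define SLOTS_PER_PAGE 20	/* 20 * 48 = 960 of the 1024 bytes used */

/* Byte offset of a channel's slot within the 32-page ring buffer. */
unsigned int slot_offset(unsigned int page, unsigned int channel)
{
	return page * HW_PAGE_BYTES + channel * SLOT_BYTES;
}
```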
Now, let's set aside the fact that ALSA does not have a format spec for G.723-24, so my usage entails dumping out the 48 bytes to userspace as unsigned 8-bit PCM (and my userspace application handles the G.723-24 decoding, knowing that it is getting this data).
First, where to start in ALSA. I had to decide how to expose these capture interfaces. I could have exposed a capture device for each channel, but instead I chose to expose one capture interface with a subdevice for each channel. This made programming a bit easier, gave a better overview of the devices as perceived by ALSA, and kept /dev/snd/ less cluttered (especially when you had multiple 16-channel cards installed). It also made programming userspace easier since it kept channels hierarchically under the card/device.
Next post, I'll discuss how the initial ALSA driver is setup and exposed to userspace.