Well, thank the heavens. It finally happened. Google saved the web.

The Register reports that Google has open-sourced the VP8 video codec, which it gained last year through its $124M acquisition of the web video business On2.

On2 had been producing video codecs for years. It open-sourced VP3 around 2003, if memory serves, which then became the basis for the Theora codec, the preferred choice of the open source community. Theora is a royalty- and patent-free codec whose use many open source advocates – myself included – have promoted because of its free nature (free as in freedom… but that’s another issue).

However, with Steve Jobs recently hinting that a patent pool was being assembled to destroy Theora (and ultimately line his pockets further), Google has done just what Microsoft and Apple probably feared: pulled the rug out.

So, all YouTube video will be re-encoded to use VP8 rather than H.264 (the proprietary codec backed by Apple and Microsoft), and browser makers Mozilla and Opera have already come out in support of it. As has Adobe. And, of course, Chrome will support it too.

And VP8, being open source and royalty-free, can also be supported by Microsoft and Apple. All source code and documentation is available online, so there really is no excuse not to support it.

Having installed CentOS on a server here, I was surprised to find that the source repositories are not enabled by default.
Below are the source repo definitions I use. Simply create a file called “Centos-Source.repo” in /etc/yum.repos.d/, make it world-readable (chmod 644), and enable repositories as required (using enabled=1). Please note that this example is for CentOS version 5 and may differ from any official versions out there. I offer no warranty… it just works for me. ;-)

[base-SRPMS]
name=CentOS-$releasever - Base SRPMS
baseurl=http://mirror.centos.org/centos/$releasever/os/SRPMS/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5
priority=1
enabled=1
#released updates
[update-SRPMS]
name=CentOS-$releasever - Updates SRPMS
baseurl=http://mirror.centos.org/centos/$releasever/updates/SRPMS/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5
priority=1
enabled=1
#packages used/produced in the build but not released
[addons-SRPMS]
name=CentOS-$releasever - Addons SRPMS
baseurl=http://mirror.centos.org/centos/$releasever/addons/SRPMS/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5
priority=1
enabled=0
#additional packages that may be useful
[extras-SRPMS]
name=CentOS-$releasever - Extras SRPMS
baseurl=http://mirror.centos.org/centos/$releasever/extras/SRPMS/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5
priority=1
enabled=0
#additional packages that extend functionality of existing packages
[centosplus-SRPMS]
name=CentOS-$releasever - Plus SRPMS
baseurl=http://mirror.centos.org/centos/$releasever/centosplus/SRPMS/
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5
priority=1
#contrib - packages by CentOS users
[contrib-SRPMS]
name=CentOS-$releasever - Contrib SRPMS
baseurl=http://mirror.centos.org/centos/$releasever/contrib/SRPMS/
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5
priority=1
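If you want to sanity-check the file before pointing yum at it, here is a minimal sketch. It writes just the first stanza (to /tmp purely for illustration; the real destination is /etc/yum.repos.d/), sets the permissions, and confirms that exactly one repo is enabled:

```shell
#!/bin/sh
# Minimal sketch: write one stanza of the source-repo file, set world-readable
# permissions, and sanity-check the enabled flag. The /tmp path is purely for
# illustration; on a real box use /etc/yum.repos.d/Centos-Source.repo.
f=/tmp/Centos-Source.repo
cat > "$f" <<'EOF'
[base-SRPMS]
name=CentOS-$releasever - Base SRPMS
baseurl=http://mirror.centos.org/centos/$releasever/os/SRPMS/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5
priority=1
enabled=1
EOF
chmod 644 "$f"
grep -c '^enabled=1' "$f"   # prints 1
```

Once the real file is in place, `yum repolist` should show the enabled source repos, and – with the yum-utils package installed – `yumdownloader --source <package>` will pull down the corresponding SRPM.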

News abounds today of Google’s statement relating to its operations in China. The statement indicated that Google would consider exiting China completely if it could not operate, with government approval, in an unrestricted manner. The post is here: http://googleblog.blogspot.com/2010/01/new-approach-to-china.html

In business, to turn away just under 20% of your potential revenue to comply with your own principles must be a hard call to make. But Google is global, and perhaps 4.8 billion people in the rest of the world is a sufficient number to target with AdWords campaigns…

But what is really happening here? It’s difficult to believe that Google would invest so much time and effort, installing services in 2006, and then expect that within four years Beijing would accede to Google’s “wisdom” and suddenly allow freedom of speech. Within four years? After thousands of years of dynastic and, latterly, communist rule – occasionally even tyrannical? No, somehow this seems unlikely.

It’s a surprising move by Google; one that could incite anything from a murmur of disquiet amongst the ranks of young Chinese teens, avidly seeking knowledge and understanding, to full-blown protests, perhaps even riots. It’s something of a political move, too: reading between the lines, it would appear that Google suspects Beijing of orchestrating the cyber-attacks on it and the twenty or so other organisations, as mentioned in their blog. By saying “play fair or don’t play at all”, Google may be vocalising the sentiments of the underclasses, still struggling to be heard from within the provinces.

Something that has not been mentioned (to my knowledge) so far in the press is the opportunity to expose Hong Kong. Under Chinese rule, but with special provisions (such as more liberal allowances on internet services), Hong Kong would present a potential new base for Google’s Chinese operation. But perhaps that’s a step too far?

The question remains whether it’s a viable exercise – and for viability, read “bottom line”. Implementing the censorship and publishing restrictions required by the Chinese government has likely been more technical trouble than it’s worth for Google, which elsewhere in the world runs hands-down probably the most advanced information and revenue infrastructure to be found.

But information and revenue go hand in hand in Google’s business model. Less information means less dynamism on-site, and in turn less interest and less uptake over time. Google works in the west because there are virtually no limits, within the law, on trading ideas and services. In the far east, Google may have just observed a synergy that works to the detriment of its model. It may also be outgunned by larger powers at work; Beijing’s insurance.

We shall see if Google’s gambit, encouraging closer but more open ties with Beijing, will pay off.

Hot off the press is v1.4.5 of Mark Hershberger’s weblogger, an extension to GNU Emacs / XEmacs which allows blogging from within the Emacs editor environment.

Early indications are good – for me at least. I have found the process of setting up and using weblogger a bit tricky, at times, so it’s encouraging to see that I can at least add this blog entry fairly easily.

Now, which is that “publish blog” keystroke…? 😉

I’ve never been one for uploading my images in different places. I don’t upload images to albums in Facebook or into Blogger itself. Instead, I prefer to centralise all my image storage – which used to mean Flickr, and now means Picasa.

The main reason for this was that Flickr has been around a long time, is a veteran Yahoo web application, and has a great JavaScript-based uploader which works flawlessly on Linux browsers – well, Firefox at least. Unlike that stupid Java-applet attempt courtesy of Facebook’s programming team. Sorry guys, “almost, but no cigar”.

However, given that Yahoo charges for something that is now an added detour from Google+, which is essentially free, it no longer seems necessary to use it.

So, when we see another wintry spell in the UK, perhaps I’ll take the aging Pentax *istDL out for another burn somewhere.

Or maybe I’ll cling on to the Samsung Galaxy S (mk1) and the ease of Android 🙂

I have two blogs hosted by Google/Blogger (a blog for work, life and general stuff that interests me) and WordPress (a blog just for work).  I differentiate these on the basis of content type as opposed to areas of interest.  That is, purely commercial (or tech-commercial) stuff goes to the WordPress one.

And yet, I wonder, what is the point? With the ability to group, tag, label and so on, I can collect similar articles together in a variety of ways. Anyone with half a brain, left or right, would be able to see that any articles I have labelled “business” are probably more commercially-oriented than ones labelled “may contain nuts”.

The problem is, I don’t want to miss the party – anywhere. WordPress blogs seem, by some opinion, so popular that it makes me wonder if WordPress is more of a writer’s platform than Blogger, and that Blogger is something more akin to MySpace for the blogosphere – a kind of scrawly, messy, throw-together-but-informative creative jumble. Perhaps I’m being harsh on others’ Blogger blogs, even if I’m being slightly too kind to my own… 😉

Conversely, the opinions cited in various threads (1, 2, 3) would suggest that Blogger is the way to go, at least for feedback options and template customisability.

Regardless, I am not entirely convinced that either system is, actually, tremendously brilliant. Maybe I’d be a better judge once I’ve committed a thousand or two more articles to cyberspace and can then regret/celebrate making the wrong/right choice.

Then everyone would really thank me for my opinion.  Then disregard it.  😉

It’s been a very busy start to 2010 but I have finally managed to get myself into gear with use of Emacs. I’m using it in console-only guise as far as I can, simply to learn the keystrokes as quickly as possible.

One feature that I’ve been very happy to stumble across is this weblogger.el extension. It means you can simply open a new buffer in Emacs, blog and save – all in minutes, if not seconds! Much better than opening a web page every time you want to blog about something.

The inspiration to really use Emacs in earnest comes from my new hero(ine): Sacha Chua. A hugely popular and influential personality, Sacha is a true geek (in the best possible sense, of course) and a rising star for 2010 and beyond. I highly recommend reading Sacha’s blog at sachachua.com.

Happy reading!

3 Nov 2009
I have recently been conducting a little research into hosting companies/ISPs/data centres to understand more about their speed.

One hosting provider in the UK, UKFast, has recently been marketing speed as a prime selling point. Consequently, they have allegedly invested 25% of their profits year on year into improving their internet connectivity, while at the same time ensuring that they never exceed [by that I infer “sell more than”] 40% of the total bandwidth available*. Fair play – we all like stuff to be faster. I was also pointed to a third-party web site that provides speed measurements of UK-based web hosting providers – WebCop.
* I was told this by a UKFast sales representative.

I was interested by WebCop’s claims, namely that by distributing their testing servers across UK-based hosting centres, they eliminate bias towards one or another datacentre and concentrate instead on the actual, average throughput delivered by them. It’s a fair claim, but there could be issues. Today, I sent them this message:

Hi,

I’m interested by your web hosting speed statistics, for two main reasons.

Firstly, there isn’t much info on your site about how you conduct tests – e.g. which web sites are used to measure the hosting companies’ relative speed. This concerns me, as hosting companies can easily make the most prominent web sites the fastest, putting them on the front line of the data centre, while allocating less bandwidth to smaller web sites.

Secondly, you don’t mention from where you test each hosting company’s sites/servers.  So, for example, you could be testing a London-based server using servers in Manchester and Leeds, but the contention in one direction may be significantly higher than in the other direction.  Therefore, you could have skewed results.  In addition to this, if one hosting provider/ISP has a faster network, how can you prove this by testing on their competitors’ slower networks?

I’m looking forward to hearing back from them.  Currently UKFast appears to have leapt ahead in terms of the speed ratings, according to WebCop.

Whois WebCop?

Good question. I ran whois on webcop.co.uk and found that the domain is registered by a company in the Netherlands that has a PO Box address in Gibraltar! Because whois output is subject to Nominet copyright, I cannot redistribute it here. But if you want to see it, try www.123-reg.co.uk.

I have tried to dig a little deeper; the web is very unrevealing of a company that seemingly wants to stay hidden. I did find out that UKFast’s sister brand, GraphiteRack.com, registered their domain name through ENom, the same registrar that WebCop used, but nothing more.

The public-facing WebCop server seems to be hosted by Tagadab.com, a Clara.net Group company. Interesting that a company (WebCop) with testing servers supposedly distributed across the UK uses a London-based ISP with only 6 IP addresses allocated from IANA and some very “competitive” prices. Perhaps they want to keep their web traffic well away from the testing servers…

Stay tuned…

 5 Nov 2009
Not heard anything from WebCop yet…

 9 Nov 2009
I got a reply from WebCop:

Our testing servers are located on many different networks and effort has been taken to ensure that they are also geographically evenly located throughout the country. This means that if we test a server located in London it will be tested from all over the country and the average result given. This allows us to display results that are averaged not only across different provider’s networks but also  across different geographical locations.

As for your first point, we are currently addressing this and looking to find the best way to ensure that providers don’t cheat in the same way we know they do for Webperf testing. Currently for the larger providers we test a server located in the standard customer space and not their main website, and for smaller providers we test their company website server. We are looking for a way to make this fairer and are working with some of the larger providers to do this.

On the surface this is a fair strategy. However, it’s very, very easy for a data centre to prioritise traffic to/from a particular machine.  My feeling is that this could be happening already although, of course, I can prove nothing.

My gut instinct tells me that if the majority of datacentres in the UK felt they could increase sales by claiming the fastest network connectivity, they would.

However, every UK datacentre (apart from one) seems to hover around the same speed of connectivity, which suggests either that the system of tests is not respected amongst the datacentre community (in other words, it isn’t perceived as being particularly accurate), or that the service provided by that one really is much faster than the bigger ISPs with which it peers… which seems rather unlikely.

I respect the WebCop team for this endeavour, but strongly feel that until the testing methodology is properly published for the networking and datacentre community, there can be little value in its findings.
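For what it’s worth, the kind of probe I imagine they run – and this is purely my assumption, since WebCop publish nothing – is easy to sketch: time a complete fetch of the same URL from each vantage point and average the results. curl exposes the timing splits directly:

```shell
#!/bin/sh
# Sketch of a single speed probe (my assumed method, not WebCop's published
# one): time a complete fetch of a URL with curl's built-in timers. Run it
# from several vantage points and average the numbers.
probe() {
    curl -o /dev/null -s -w '%{time_total}\n' "$1"
}
# The target below is a local file purely so the demo runs anywhere;
# swap in the hosted site under test, e.g. probe http://www.example.co.uk/
tmp=$(mktemp)
echo "hello" > "$tmp"
probe "file://$tmp"          # prints the elapsed seconds, e.g. 0.000412
```

Even this toy shows the problem I raised: the number you get depends as much on where you run it from as on the server under test, which is exactly why the vantage points need to be published.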

I recently found myself needing to revoke an old GPG key. The steps are actually quite straightforward, but you do need to have your old revocation certificate to hand.

For more info, visit the GNU Privacy Guard site: http://www.gnupg.org/gph/en/manual.html

Simply follow these steps. In a terminal, issue:

  • gpg --import rev.asc (where rev.asc is the revocation certificate for the key – in my case 0x712AC328)
  • gpg --keyserver certserver.pgp.com --send-key 712AC328

That’s it!
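If you want to see the whole round trip without risking a real key, here is a self-contained sketch using a throwaway key in a temporary homedir. It assumes GnuPG 2.1+, which drops a ready-made revocation certificate into openpgp-revocs.d/ at key-creation time; the address and keyserver below are placeholders, not the ones from my key above:

```shell
#!/bin/sh
# Sketch: generate a throwaway key, use the revocation certificate GnuPG 2.1+
# creates automatically, and apply it locally. demo@example.com and the
# keyserver are placeholders.
set -e
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key demo@example.com default default never 2>/dev/null
FPR=$(gpg --list-keys --with-colons demo@example.com | awk -F: '/^fpr/{print $10; exit}')
# The auto-generated certificate is guarded with a leading colon so it cannot
# be imported by accident; strip it to arm the certificate.
sed 's/^:-----BEGIN/-----BEGIN/' \
    "$GNUPGHOME/openpgp-revocs.d/$FPR.rev" > rev.asc
gpg --import rev.asc 2>/dev/null        # the "revoke" step
gpg --list-keys --with-colons "$FPR" | awk -F: '/^pub/{print $2}'   # r = revoked
# Finally, publish: gpg --keyserver hkps://keyserver.example.com --send-keys "$FPR"
```

The lesson, as ever: generate and stash the revocation certificate the day you create the key, not the day you need it.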

Yes, it seems too good to be true.

Well, guess what?! It IS!!

That’s right. Your old tat (or, you could say, my old tat) is just about as worthless to everyone else as it is to me. I’ve spent ages on ebay and sold almost nothing. And what I have sold, I sold for 99p.

Give to charity instead, that’s what I should have done! Pah!

🙂