Apple’s attempt to sell me an iPad
(the image has now been deleted, but it depicted Apple’s QuickTime-only web site with the plugin not working – or failing over gracefully – in my browser)

So, I can’t quite work out why I might want or need an iPad. Amusingly, a friend of mine posted a link on Facebook to Apple’s “TV” adverts on its website.

What I saw was the image opposite.

Hmm, strange. Is this product only for people who already use Windows and/or a Mac? Being unable to install QuickTime (which is available for a “PC” or Mac only) means I am unable to view the adverts for this product. Apple are unable to do the most basic thing in sales: actually demonstrate to me why this product is good.

Which then led me to think, perhaps it isn’t.

Hot off the press is v1.4.5 of Mark Hershberger’s weblogger, an extension to GNU Emacs / XEmacs which allows blogging from within the Emacs editor environment.

Early indications are good – for me at least. I have found the process of setting up and using weblogger a bit tricky, at times, so it’s encouraging to see that I can at least add this blog entry fairly easily.

Now, which is that “publish blog” keystroke…? 😉

I love Linux.  Sure, it ain’t perfect; there’s still some things that could “feel” a bit more modern.  But at the same time, there is so much to its credit that it’s hard to ignore.

Take, for instance, virtual memory.  All modern computers have it.  Mobile phones use it.  Basically any computer-based device probably uses virtual memory paging rather than direct physical address allocation.  It’s just more flexible and safer to leave all the memory management to the operating system kernel.

The nice thing about the open source OS, however, is that you can determine just how “swappy” Linux is.  It’s a feature which allows incredible flexibility.

For example, a recent filesystem and partition resizing operation that I undertook had the odd side-effect of rendering my swap partition ineffective.  Being able to tune the swappiness of the kernel has allowed me to test and fix the problem in situ.
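
For anyone curious, this is roughly what that tuning looks like from a shell, run as root – a minimal sketch assuming the standard sysctl interface (the value 10 is just an example, not a recommendation):

  # Check the current value (the default on most distros is 60)
  cat /proc/sys/vm/swappiness

  # Try a lower value temporarily; lower makes the kernel more reluctant
  # to swap application pages out to disk
  sysctl -w vm.swappiness=10

  # Record the setting in /etc/sysctl.conf so it survives a reboot
  echo "vm.swappiness = 10" >> /etc/sysctl.conf

A quick way to test the effect is to re-run whatever workload was hitting swap and compare the Swap line of free -m before and after.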

I’ve never been one for uploading my images to lots of different places.  I don’t upload images to albums in Facebook or into Blogger itself.  Instead, I prefer to centralise all my image storage in one place – formerly Flickr, now Picasa.

The main reason for this was that Flickr has been around a long time, is a veteran Yahoo web application, and has a great JavaScript-based uploader which works flawlessly on Linux browsers – well, Firefox at least.  Unlike that stupid Java-applet attempt courtesy of Facebook’s programming team.  Sorry guys, “almost, but no cigar”.

However, given that Yahoo charges for what amounts to an extra detour away from something (Google+) that is essentially free, it no longer seems necessary to use it.

So, when we see another wintry spell in the UK, perhaps I’ll take the aging Pentax *istDL out for another burn somewhere.

Or maybe I’ll cling on to the Samsung Galaxy S (mk1) and the ease of Android 🙂

I have two blogs: one hosted by Google/Blogger (for work, life and general stuff that interests me) and one on WordPress (just for work).  I differentiate these on the basis of content type as opposed to areas of interest.  That is, purely commercial (or tech-commercial) stuff goes to the WordPress one.

And yet, I wonder, what is the point?  With the ability to group, tag, label and so on, I can collect similar articles together in a variety of ways.  Anyone with half a brain, left or right, would be able to see that any articles I have labelled “business” are probably more commercially-oriented than ones labelled “may contain nuts”.

The problem is, I don’t want to miss the party – anywhere.  WordPress blogs seem, in some people’s opinion, so popular that it makes me wonder if WordPress is more of a writer’s platform than Blogger, and that Blogger is something more akin to MySpace for the blogosphere – a kind of scrawly, messy, thrown-together-but-informative kind of creative jumble.  Perhaps I’m being harsh on others’ Blogger blogs, even if I’m being slightly too kind to my own… 😉

Conversely, the opinions cited in various threads (1, 2, 3) would suggest that Blogger is the way to go, at least for feedback options and template customisability.

Regardless, I am not entirely convinced that either system is, actually, tremendously brilliant.  Maybe I’d be a better person to judge once I’ve committed a thousand or two more articles to cyberspace and can then regret/celebrate making the wrong/right choice.

Then everyone would really thank me for my opinion.  Then disregard it.  😉

Short one today – I was looking for a way of converting all my ripped CDs to an alternative format for portable audio use.

Here’s a useful link for doing scripted, recursive audio format conversion.

Now you can rip all those CDs to FLAC (which is lossless, unlike lossy MP3, whether CBR or VBR) and then convert the lot to MP3 for the iPod, car, etc.
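
In case that link ever goes stale, here’s a rough sketch of the idea as a shell script – assuming the flac and lame packages are installed, and using ~/Music/flac and ~/Music/mp3 purely as example paths:

  #!/bin/bash
  # Recursively convert a FLAC library to MP3, mirroring the directory tree.
  # Note: this minimal version does not copy the tags across.
  SRC=~/Music/flac
  DST=~/Music/mp3

  find "$SRC" -name '*.flac' | while read -r f; do
      out="$DST${f#$SRC}"        # same relative path under $DST
      out="${out%.flac}.mp3"
      mkdir -p "$(dirname "$out")"
      # Decode the FLAC to stdout and pipe it straight into LAME (VBR, roughly 190 kbps)
      flac -dc "$f" | lame -V2 - "$out"
  done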

Oh, and a copy of Fedora or Ubuntu would probably be handy too 😉

Of course, you could pay for a commercial alternative or even – heaven forbid – “upgrade” your iTunes for DRM-de-restricted AAC files (which are still lossy-format files anyway).

So, why bother, when a CD costs the same and has better sound quality?

Forget digital downloads, until they respect your freedom.  Buy CDs!!

Or, if you are 100% sure your data will always be safe and/or don’t have a hi-fi CD player (in addition to a CD/DVD-ROM drive) to justify getting physical media, investigate these forward-looking alternatives:

Enjoy!

It’s been a very busy start to 2010 but I have finally managed to get myself into gear with use of Emacs. I’m using it in console-only guise as far as I can, simply to learn the keystrokes as quickly as possible.

One feature that I’ve been very happy to stumble across is this weblogger.el extension. It means you can simply open a new buffer in Emacs, blog and save – all in minutes, if not seconds! Much better than opening a web page every time you want to blog about something.

The inspiration to really use Emacs in earnest comes from my new hero(ine): Sacha Chua. A hugely popular and influential personality, Sacha is a true geek (in the best possible sense, of course) and a rising star for 2010 and beyond. I highly recommend reading Sacha’s blog at sachachua.com.

Happy reading!

The problem: you cannot boot a paravirtualised machine from a CD-ROM in order to install a virtual machine. You may also be on a wireless link set up by NetworkManager, and wlan0 isn’t a bridged interface.

Here’s the solution:

  1. Download the ISO of your favourite distro and burn it to DVD, then mount it on your machine (this will probably happen just by inserting the disc in your drive).  If a window opens on your desktop, highlight the path in the address bar and copy it to the system clipboard (CTRL-C).
  2. Install Apache and start the apache/httpd service
  3. In /var/www/html (/var/www on debian, I believe) simply create a symbolic link to the directory where the DVD is mounted.  In this example, I am using CentOS:
     #  ln -s /media/CentOS_5.4_Final/ centos
  4. Now create the virtual machine, by starting up virt-manager, ensuring that it can connect to Dom0 and select New…
  5. In the Installation Source section of the virtual machine creation dialog,  specify the following parameter: Installation media URL: http://localhost/centos (the path to the installer repository)
  6. In the “type of network” selection, select Virtual Interface.
  7. Click through the rest of the set up – but BEFORE YOU COMPLETE IT, GET READY TO PAUSE THE VM. The virtual machine will start up automatically when you finish the set-up steps.
  8. As soon as you start the VM, the initial bootstrapping files should load and the distribution’s kernel should start up.   Only when the console window opens should you pause it!
  9. If you are using CentOS, you now need to modify the configuration file that’s been created, following these steps:
  1. Download the Xen kernel and initial ramdisk from here: http://mirror.centos.org/centos/5/os/x86_64/images/xen/ (change the path if you’re using an i386 host)
  2. Save them somewhere sensible: I made /var/lib/xen/boot and put them in there.
  3. Un-pause and Shutdown the virtual machine.
  4. Modify the config file, to include the paths to the xen-aware kernel and initrd (put these entries at the top, adjusting for your path as necessary):
    kernel = "/var/lib/xen/boot/vmlinuz"
    ramdisk = "/var/lib/xen/boot/initrd.img"
    
    

     

  5. IMPORTANT – also comment out the line for pygrub, so: = “/usr/bin/pygrub”
  6. Save the config and run the virtual machine. Nearly there!  Now open up the console to the virtual machine…
  • If you are prompted for a network address or DHCP, try DHCP.
  • If you are prompted for an installation path, stick to http. In a network interface dialog that may appear, choose a manual address that doesn’t conflict with other hosts on your real network (but make sure it’s valid for your network!!)
  • Because the VM now has a virtual network interface, http://localhost/centos is a meaningless path.  If the installer identifies this and prompts for an alternative path to the stage2.img file [true in CentOS, at least], then do the following on your host (real) machine:
    # ifconfig wlan0
    (substitute eth0 for wlan0 if you’re using a wired ethernet connection)
  • Paste/type the IP address from the output of ifconfig into the path dialog of the halted installer, but keep the /centos/ directory.
  • The installer should then run through the rest of the motions and voila – a paravirtualized virtual machine installed from local CD/DVD-ROM.
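
For reference, here’s roughly how the kernel/ramdisk steps above hang together on the host – a sketch only, using the mirror URL given earlier and /var/lib/xen/boot as the (arbitrary) location I chose; your VM’s config file name will differ:

  # Fetch the Xen-aware kernel and initrd from the CentOS mirror
  mkdir -p /var/lib/xen/boot
  cd /var/lib/xen/boot
  wget http://mirror.centos.org/centos/5/os/x86_64/images/xen/vmlinuz
  wget http://mirror.centos.org/centos/5/os/x86_64/images/xen/initrd.img

  # Quick sanity check that Apache is serving the installer repository
  curl -I http://localhost/centos/

Then, at the top of the VM’s config file (e.g. /etc/xen/<vm-name>):

  kernel = "/var/lib/xen/boot/vmlinuz"
  ramdisk = "/var/lib/xen/boot/initrd.img"
  # ...and comment out the pygrub bootloader line for the duration of the install:
  #bootloader = "/usr/bin/pygrub"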

When the installer has finished running, uncomment the pygrub line in the config file.

If you spot any errors with this process, please let me know so I can correct the procedure.

Happy Christmas!   *<(:-##

I recently came across this slightly bizarre issue.  I was trying to mount an NFS share from one server to another server, using very loose permissions (I was basically sharing a DVD to a machine which had no DVD-ROM drive).

So, what was happening?  Well, basically nothing.  On the NFS server (the machine with the DVD exported) I ran tcpdump to see what traffic was being received (the server was IP 192.168.10.200):

  # tcpdump -nn | grep 192.168.10.1
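
As an aside, tcpdump can do this filtering itself, which avoids the pipe through grep – a sketch using the same client address as above (the interface name is an assumption; substitute your own):

  # Only show traffic to or from the NFS client, with no name resolution
  tcpdump -nn -i eth0 host 192.168.10.1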

No output was displayed when I was trying to mount the share on the client. None at all.  Well, almost none.  The one bit of output that got me wondering was a broadcast packet which was received from the client.

  10:14:45.651572 IP 192.168.10.1 > 192.168.10.255 arp who has 192.168.10.201 tell 192.168.10.1

The IP address 192.168.10.201 was a typo made by me the day before.  I’d meant to type in .200 in my mount string.  My incorrect mount command thus read:

  # mount -t nfs 192.168.10.201:/mnt/share /mnt/dvd

It seemed strange that an incorrect mount command that I’d typed in yesterday (and then hit CTRL-C on) might still be working in the background.

Back on the client, I realised that perhaps the mount command worked in a queue/serial-like way.  Therefore, each mount command would have to complete – either successfully or not, so long as it finally returned – before the next one was attempted.  Checking out this theory, I investigated local processes:

  # ps ax | grep mount

Sure enough, there were lots of mount entries pointing to the wrong IP address.  These were all my attempts to mount a non-existent server’s share to a local directory.  Dumb mistake, eh.  Still, CTRL-C didn’t cancel the mount request, which continued to run in the background.

The easiest solution was to reboot the server, but in situations where that’s not practical, killing the rogue processes should suffice.
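
If anyone needs the clean-up itself, it’s roughly this – the PIDs below are purely illustrative, so substitute the ones shown by your own ps output:

  # Find the stuck mount attempts (the [m] trick stops grep matching itself)
  ps ax | grep '[m]ount'

  # Kill them by PID
  kill 4321 4322

  # If they ignore that, try SIGKILL – though a process stuck in
  # uninterruptible I/O may refuse to die until the I/O times out
  kill -9 4321 4322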

3 Nov 2009
I have recently been conducting a little research into hosting companies/ISPs/data centres to understand more about their speed.

One hosting provider in the UK, UKFast, has recently been marketing the advantages of speed as a prime factor.  Consequently, they have allegedly invested 25% of their profits year on year into improving their internet connectivity, while at the same time ensuring that they never exceed [by that I infer “sell more than”] 40% of the total bandwidth available*.  Fair play – we all like stuff to be faster.  I was also pointed to a third-party web site that provides speed measurement of UK-based web hosting providers – WebCop.
* I was told this by a UKFast sales representative.

I was interested by WebCop’s claims, namely that by distributing their testing servers across UK-based hosting centres, they eliminate bias towards any one datacentre and concentrate instead on the actual, average throughput delivered by them.  It’s a fair claim, but there could be issues.  Today, I sent them this message:

Hi,

I’m interested by your web hosting speed statistics, for two main reasons.

Firstly, there isn’t much info on your site about how you conduct tests – e.g. which web sites are used to measure the hosting companies’ relative speed.  This concerns me, as hosting companies can easily make the most prominent web sites the fastest, putting them on the front line of the data centre while allocating less bandwidth to smaller web sites.

Secondly, you don’t mention from where you test each hosting company’s sites/servers.  So, for example, you could be testing a London-based server using servers in Manchester and Leeds, but the contention in one direction may be significantly higher than in the other direction.  Therefore, you could have skewed results.  In addition to this, if one hosting provider/ISP has a faster network, how can you prove this by testing from their competitors’ slower networks?

I’m looking forward to hearing back from them.  Currently UKFast appears to have leapt ahead in terms of the speed ratings, according to WebCop.

Whois WebCop?

Good question.  I ran a whois query on webcop.co.uk and found that the domain is registered by a company in the Netherlands which has a PO Box address in Gibraltar!  Because whois output is subject to Nominet copyright, I cannot redistribute it here.  But if you want to see it, try www.123-reg.co.uk.
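
For anyone who wants to repeat the lookup, it’s just the standard client (the output remains subject to Nominet’s terms):

  # Query the .uk registry for the domain’s registration details
  whois webcop.co.uk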

I have tried to dig a little deeper; the web is very unrevealing of a company that seemingly wants to stay hidden. I did find out that UKFast’s sister brand, GraphiteRack.com, registered their domain name through ENom, the same registrar that WebCop used, but nothing more.

The public-facing WebCop server seems to be hosted by Tagadab.com, a Clara.net Group company. Interesting that a company (WebCop) with testing servers distributed across the UK uses a London-based ISP with only 6 IP addresses allocated from IANA and some very “competitive” prices.  Perhaps they want to keep their web traffic well away from the testing servers…

Stay tuned…

5 Nov 2009
Not heard anything from WebCop yet…

9 Nov 2009
I got a reply from WebCop:

Our testing servers are located on many different networks and effort has been taken to ensure that they are also geographically evenly located throughout the country. This means that if we test a server located in London it will be tested from all over the country and the average result given. This allows us to display results that are averaged not only across different provider’s networks but also across different geographical locations.

As for your first point, we are currently addressing this and looking to find the best way to ensure that providers don’t cheat in the same way we know they do for Webperf testing. Currently for the larger providers we test a server located in the standard customer space and not their main website, and for smaller providers we test their company website server. We are looking for a way to make this fairer and are working with some of the larger providers to do this.

On the surface this is a fair strategy. However, it’s very, very easy for a data centre to prioritise traffic to/from a particular machine.  My feeling is that this could be happening already although, of course, I can prove nothing.

My gut instinct tells me that if the majority of datacentres in the UK felt they could increase sales by claiming the fastest network connectivity, they would.

However, every UK datacentre (apart from one) seems to hover around the same speed of connectivity, which suggests that either the system of tests is not respected amongst the datacentre community (in other words, it isn’t perceived as being particularly accurate), or the service provided by one is much faster than the bigger ISPs with which it peers… which seems rather unlikely.

I respect the WebCop team for this endeavour, but strongly feel that until the testing methodology is properly published for the networking and datacentre community, there can be little value in its findings.