Old News: Google Killing The Web

A few days or weeks ago, Gawker Media changed their websites’ layout to a fully dynamic, AJAXified … thing. For what it’s worth, I share the opinion expressed in Penny Arcade’s cartoon comment on the design change.

But that’s not my point right now. Today, I noticed something else about the new design that I hadn’t seen before. Yes, I’m slow with that sort of thing. I use an RSS feed reader; it protects me from a lot of the web abominations out there.

The thing I noticed is that the new Gawker design now also uses URI fragments to encode what content you’re supposed to see, e.g. http://kotaku.com/#!5762887/rage-may-be-a-wasteland-but-in-this-new-trailer-its-a-pretty-one [1].
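For those who haven’t looked behind the curtain: everything after the #! never reaches the server. A script on the page reads the fragment and fetches the actual article over AJAX. Here’s a minimal sketch of that pattern in TypeScript; the /ajax/article/ endpoint and the "content" element id are invented for illustration, not Gawker’s actual code:

```typescript
// Minimal sketch of hashbang routing, the pattern Gawker and Twitter use.
// The "/ajax/article/" endpoint and the "content" element id are invented
// placeholders for illustration; they are not Gawker's actual API.
function loadFromFragment(): void {
  const fragment = window.location.hash;    // e.g. "#!5762887/rage-may-be-..."
  if (!fragment.startsWith("#!")) {
    return;                                 // nothing to route
  }
  const path = fragment.slice(2);           // strip the "#!"
  fetch("/ajax/article/" + encodeURIComponent(path))
    .then((res) => res.text())
    .then((html) => {
      const container = document.getElementById("content");
      if (container) {
        container.innerHTML = html;
      }
    });
}

// The fragment is never sent to the server in the HTTP request; only this
// client-side script ever sees it, which is exactly why crawlers get nothing.
window.addEventListener("hashchange", loadFromFragment);
loadFromFragment();
```

The crucial detail is in that last comment: request http://kotaku.com/ with anything that doesn’t execute JavaScript and all you get is an empty shell, because the fragment never leaves the browser. That is where Google’s proposal comes in.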

It bothered me when I saw Twitter do that, but I didn’t really pay attention to it, because I don’t twit. Seeing the same thing again, though, suggests there’s some method behind this madness, so I set out to figure out the reasoning behind it.

Looks as if it’s related to a proposal by Google to make AJAX content crawlable. It’s from 2009, so very old by intarwebs standards.

And it’s a pretty abysmal idea, from where I’m standing.

See, you can argue all day long about whether the proposal is a good idea from a technical point of view. I don’t care about that, for once. I care about why this is a bad idea from a, well, let’s say political point of view.

The proposal suggests — amongst other things — that website publishers who want to publish AJAX-heavy pages should run a headless browser on their servers for the sole purpose of making it easier for Google’s search engine crawler to find their content. In other words, website owners are supposed to pay for Google’s failure.
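For context, the mechanics go roughly like this: when Googlebot sees a #! URL, it rewrites the fragment into an _escaped_fragment_ query parameter and requests that from your server, and your server is expected to answer with a pre-rendered HTML snapshot of what the JavaScript would have produced. Here’s a rough sketch of the server side, using Express purely as an illustration; renderSnapshot is a hypothetical stand-in for the headless browser you’re supposed to operate:

```typescript
// Sketch of the server-side half of Google's AJAX crawling scheme.
// Express is used only for illustration; renderSnapshot() is a hypothetical
// stand-in for the headless browser the proposal expects publishers to run.
import express from "express";

const app = express();

// In the proposal's world this would drive a headless browser, execute the
// page's JavaScript for the given fragment, and serialize the resulting DOM.
async function renderSnapshot(fragment: string): Promise<string> {
  return `<html><body>Pre-rendered content for ${fragment}</body></html>`;
}

app.get("/", async (req, res) => {
  const fragment = req.query["_escaped_fragment_"];
  if (typeof fragment === "string") {
    // Googlebot saw "#!<fragment>" and asked for "?_escaped_fragment_=<fragment>".
    res.send(await renderSnapshot(fragment));
  } else {
    // Everyone else gets the normal AJAX-driven page.
    res.sendFile("index.html", { root: "." });
  }
});

app.listen(3000);
```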

To make things worse, the proposal isn’t phrased like that on the face of it. Instead, it suggests that using AJAX on your website is a choice that breaks how the internet is supposed to work, and that you must therefore face the consequences: either your site won’t be crawled, or you pay extra (effort, server CPU cycles, etc.).

The simple truth is that Google could easily crawl such pages; they just choose not to. I suspect the extra effort required on their side would outstrip the benefit to them. So they try to offload that effort onto website owners instead, making it appear as if it were the site owners’ fault in the first place.

  [1] Kotaku is part of the Gawker network.