I don't understand why they spent time on this at all. It's basically a solved problem. Elo has been around since the 1960s; it's used for a bunch of serious competitive games (chess, Go, even some MLB player ratings). Microsoft's TrueSkill[1], a Bayesian multiplayer successor, has plenty of solid open-source implementations[2]. I think we all know this because isotropic used it.
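To give a sense of how simple the baseline is: the core of Elo is a few lines. This is a minimal sketch using the conventional chess constants (K=32, 400-point scale), not any particular game's implementation:

```python
# Minimal sketch of the classic Elo update. K and the 400-point
# scale are the conventional chess values; both are tunable.
def elo_update(r_a, r_b, score_a, k=32):
    """Return player A's new rating after one game against B.

    score_a is 1 for a win, 0.5 for a draw, 0 for a loss.
    """
    # Expected score for A under the logistic model.
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    # Move A's rating toward the actual result.
    return r_a + k * (score_a - expected_a)

# Two equally rated players: the winner gains exactly k/2 points.
print(elo_update(1500, 1500, 1))  # 1516.0
```

TrueSkill's contribution on top of this is tracking uncertainty (a sigma per player) and handling teams and multiplayer, but the basic "nudge ratings toward observed results" idea is the same.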
But even if TrueSkill doesn't suit your fancy, skill-estimation rating systems are well studied and there are tons of them. You need a good reason to invent a new one. If their argument were "we decided TrueSkill didn't suit our needs, and here's a careful discussion of the differences," then I'd be totally open to blazing some trails. Blizzard, for example, didn't use it for the StarCraft 2 ladders[3]: they wanted matchmaking to build a sort of camaraderie among frequent players, so they traded off precision of skill estimation in favor of making the humans have more fun. That seems reasonable.
But that's not what we get. We're told that math is hard and everybody should just chill out or go shopping. In fact, we're explicitly told the system wasn't there to make us happy, which seems suspicious. Mostly it sounds to me like "we didn't do our homework (also we can't keep our servers up, haha suckers, have some lag!!1)". I, for one, do not find this argument compelling.
[1] http://research.microsoft.com/en-us/projects/trueskill/
[2] https://github.com/search?q=trueskill -- the implementation isotropic used: https://github.com/dougz/trueskill
[3] http://www.sirlin.net/blog/2010/7/24/analyzing-starcraft-2s-ranking-system.html