Kaggle ran a chess prediction competition, and the winner's entry was based on TrueSkill.
http://people.few.eur.nl/salimans/chess.html
But he basically won because he milked side information: the data contained the number of matches per participant, and a high match count came from advancing further in tournaments, doh!
It's not clear to me that the logistic or normal distribution is more natural than the other for player skills, though. Nor that either is a particularly good fit.
Many uses of the logistic I've seen are motivated by its very simple derivative, which makes it convenient to optimize with gradient descent.
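(For reference, the convenience is the identity sigma'(x) = sigma(x) * (1 - sigma(x)): the gradient reuses the forward value for free. A minimal sketch, function names my own, checked against a finite difference:)

```python
import math

def sigmoid(x):
    """Logistic function: sigma(x) = 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    """Derivative via the identity sigma'(x) = sigma(x) * (1 - sigma(x))."""
    s = sigmoid(x)
    return s * (1.0 - s)

# Sanity check against a central finite difference
h, x = 1e-6, 0.7
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
print(abs(sigmoid_grad(x) - numeric) < 1e-8)  # True
```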
I don't think you understood that correctly: it wasn't so much that people with more games went further in tournaments, but that people who played stronger opponents were likely to have done better. In particular, if a player was way outrated, it probably means the event was a Swiss your player had done well in, hence you overpredict him a bit. (Incidentally, if you know much about chess, there are very few knockout tournaments...) On the other hand, his closest competitors all did some version of the same thing, too.
The method he used was not deemed practical as a rating system for FIDE (the world chess federation) to adopt.
(I am already acquainted with this stuff from my knowledge of the chess world; for a more personable write-up, see say this article:
http://www.chessbase.com/newsdetail.asp?newsid=7277 )
Logistic seems much more natural to me than normal - you can get the logistic out of the odds ratio really simply, and that is not too hard to give some justification for. Normal... I don't see any particular reasoning for. Again, not that either is super hot in all contexts.
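(I take the odds-ratio derivation to be the Bradley-Terry one: model the odds of A beating B as the ratio of exponentiated ratings, and the win probability falls out as exactly the logistic of the rating difference. A small sketch, assuming that reading, with function names of my own:)

```python
import math

def win_prob_logistic(r_a, r_b):
    """Logistic in the rating difference: 1 / (1 + e^-(r_a - r_b))."""
    return 1.0 / (1.0 + math.exp(-(r_a - r_b)))

def win_prob_odds(r_a, r_b):
    """Bradley-Terry form: odds(A beats B) = s_a / s_b with s = exp(rating),
    so P(A beats B) = s_a / (s_a + s_b). Algebraically the same as above."""
    s_a, s_b = math.exp(r_a), math.exp(r_b)
    return s_a / (s_a + s_b)

print(abs(win_prob_logistic(1.2, 0.4) - win_prob_odds(1.2, 0.4)) < 1e-12)  # True
```

(Elo's standard formula is this with base 10 and ratings scaled by 400.)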