TL;DR: Isotropish and Isotropic use an identical updating algorithm, but with different parameter values. I think it's these different values that give Isotropic lower uncertainties.
I haven't looked at your code, but you said previously that you just used the vanilla TrueSkill algorithm, which does increase the variance (by GAMMA^2) once for each game:
So, what is going on here? Between any two games of a gamer, the TrueSkill ranking system assumes that the true skill of a gamer, that is, μ, can have changed slightly either up or down; this property is what allows the ranking system to adapt to a change in the skill of a gamer. Technically, this is achieved by a small increase in the σ of each participating gamer before the game outcome is incorporated.
sigma=sqrt(pl.skill[1] ** 2 + GAMMA ** 2)
...
(Bold by me.) So you apply GAMMA once per game, while Doug (according to qmech's quote above) ended up applying it only once per day, not per game.
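To make the per-game vs. per-day difference concrete, here is a small sketch; the GAMMA value and function names are mine, purely for illustration:

```python
import math

GAMMA = 0.3  # illustrative value only, not taken from either implementation

def inflate_per_game(sigma, n_games):
    """Vanilla TrueSkill: add GAMMA^2 to the variance once before every game."""
    return math.sqrt(sigma ** 2 + n_games * GAMMA ** 2)

def inflate_per_day(sigma, n_days):
    """Doug's later variant (per qmech's quote): add GAMMA^2 once per
    elapsed day, no matter how many games were played on that day."""
    return math.sqrt(sigma ** 2 + n_days * GAMMA ** 2)

# Ten games played on a single day:
# per-game inflation adds 10 * GAMMA^2 to the variance,
# per-day inflation adds only 1 * GAMMA^2.
busy_day_per_game = inflate_per_game(1.0, 10)
busy_day_per_day = inflate_per_day(1.0, 1)
assert busy_day_per_game > busy_day_per_day
```

The two rules coincide for a player who plays exactly one game per day; they diverge (per-day giving lower uncertainties) as soon as several games fall on the same day.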
I understand now. I think our confusion is Microsoft's fault.
I agree, but in another way, I think.
This is vanilla TrueSkill updating as described on Microsoft's TrueSkill detail page. So I think what Microsoft means by the rather misleading "... small increase in the σ of each participating gamer before the game outcome is incorporated" is that the squared scaling factor c² is not just the sum of the players' variances, but also includes 2β². That's the sense in which the variance is increased: the σ variable itself isn't increased by β², but its effect on the update is.
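For reference, here is a minimal sketch of the two-player, no-draw update from the Herbrich et al. paper (variable names are mine); note that β enters only through c, and σ itself is never inflated by it:

```python
import math

BETA = 25.0 / 6  # Herbrich et al. default: INITIAL_SIGMA / 2, with sigma = 25/3

def pdf(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def cdf(x):
    """Standard normal cumulative distribution."""
    return (1 + math.erf(x / math.sqrt(2))) / 2

def update_win(mu_w, sigma_w, mu_l, sigma_l):
    """Vanilla TrueSkill update for a two-player game with no draw;
    player "w" beat player "l". Beta only widens c, damping the update."""
    c = math.sqrt(sigma_w ** 2 + sigma_l ** 2 + 2 * BETA ** 2)
    t = (mu_w - mu_l) / c
    v = pdf(t) / cdf(t)          # mean-shift factor
    w = v * (v + t)              # variance-shrink factor, 0 < w < 1
    mu_w_new = mu_w + sigma_w ** 2 / c * v
    mu_l_new = mu_l - sigma_l ** 2 / c * v
    sigma_w_new = sigma_w * math.sqrt(1 - sigma_w ** 2 / c ** 2 * w)
    sigma_l_new = sigma_l * math.sqrt(1 - sigma_l ** 2 / c ** 2 * w)
    return mu_w_new, sigma_w_new, mu_l_new, sigma_l_new
```

With a larger β, c grows, t shrinks, and both the μ shift and the σ reduction are damped; that is the only way β affects the update.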
I don't think this interpretation of β is correct; according to DougZ's code documentation, β is a measure of how random the game is, and is clearly distinct from gamma:
beta is a measure of how random the game is. You can think of it as
the difference in skill (mean) needed for the better player to have
an ~80% chance of winning. A high value means the game is more
random (I need to be *much* better than you to consistently overcome
the randomness of the game and beat you 80% of the time); a low
value is less random (a slight edge in skill is enough to win
consistently). The default value of beta is half of INITIAL_SIGMA
(the value suggested by the Herbrich et al. paper).
[...]
gamma is a small amount by which a player's uncertainty (sigma) is
increased prior to the start of each game. This allows us to
account for skills that vary over time; the effect of old games
on the estimate will slowly disappear unless reinforced by evidence
from new games.
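As a sanity check on the beta description above: if each player's performance is drawn from N(skill, β²), the performance difference is N(gap, 2β²), and the better player's win probability is Φ(gap / (√2·β)). A sketch (the σ uncertainty is ignored here for simplicity):

```python
import math

def win_probability(skill_gap, beta):
    """P(better player wins), with performances ~ N(skill, beta^2),
    so the performance difference is ~ N(skill_gap, 2 * beta^2).
    Phi(x) = (1 + erf(x / sqrt(2))) / 2, here at x = gap / (sqrt(2) * beta)."""
    return (1 + math.erf(skill_gap / (2 * beta))) / 2

# A skill gap of exactly beta gives about a 76% win chance,
# presumably what the docstring rounds to "~80%".
p = win_probability(1.0, 1.0)
assert 0.75 < p < 0.77
```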
Now the Microsoft formulas quoted don't even mention gamma. On the Details page, they seem to imply that the increase in uncertainty happens before these formulas are used: "Before starting to determine the new skill beliefs of all participating players for a new game outcome, the TrueSkill ranking system assumes that the skill of each player may have changed slightly between the current and the last game played by each player."
So I think they just neglected to give the corresponding equations for the uncertainty increase, and only give the more "interesting" change based on the game's outcome.
I also notice that DougZ does talk about an uncertainty increase per game here; so if qmech was right, this would be DougZ's old TrueSkill implementation, before he switched to an uncertainty increase per day. If that's the case, it's not so surprising that it produces the same output as your script, because both implementations apply the increase once per game...