Chess's AI-friendly task is checkmate-puzzle solving. But the way chess works, the really hard part happens before that even comes up: one player has to build a strong advantage before mating puzzles start to present themselves. So mostly chess was a matter of making the AI smart enough to win, with no calculator-based advantages at all. I don't know Go well, but I expect it's a pretty similar story. It may be that the amount of time the AI has to avoid losing, before it can start checkmate-puzzling off its informational advantage, is more generous in chess than in Go; that is possibly the meaningful way of looking at it. Or it could just be that Go midgames are flat-out harder.
I don't think that this is how chess engines work. They have mainly become better due to increases in calculation speed, and only partially due to better design of the evaluation function. That's where humans are still far better.
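To make the two levers concrete, here is a minimal sketch of the classic engine recipe: depth-limited negamax search with alpha-beta pruning, where raw calculation speed buys search depth, and a hand-crafted evaluation function scores positions at the depth limit. It is demonstrated on tic-tac-toe rather than chess to keep it self-contained, and the heuristic here is a toy assumption of mine, not taken from any real engine.

```python
# Depth-limited negamax with alpha-beta pruning on tic-tac-toe.
# The board is a list of 9 cells holding "X", "O", or None.

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def evaluate(board, player):
    # Toy heuristic used at the depth limit: lines still open for
    # `player` minus lines still open for the opponent. A real engine
    # would weigh material, king safety, mobility, etc. here.
    opp = "O" if player == "X" else "X"
    score = 0
    for line in LINES:
        cells = [board[i] for i in line]
        if opp not in cells:
            score += 1
        if player not in cells:
            score -= 1
    return score

def negamax(board, player, depth, alpha=-999, beta=999):
    opp = "O" if player == "X" else "X"
    w = winner(board)
    if w:                      # previous mover just won
        return -100
    if all(board):             # draw
        return 0
    if depth == 0:             # out of depth: fall back on the heuristic
        return evaluate(board, player)
    best = -999
    for i in range(9):
        if board[i]:
            continue
        board[i] = player
        best = max(best, -negamax(board, opp, depth - 1, -beta, -alpha))
        board[i] = None
        alpha = max(alpha, best)
        if alpha >= beta:      # alpha-beta cutoff: more speed, more depth
            break
    return best

def best_move(board, player, depth):
    opp = "O" if player == "X" else "X"
    scored = []
    for i in range(9):
        if board[i]:
            continue
        board[i] = player
        scored.append((-negamax(board, opp, depth - 1), i))
        board[i] = None
    return max(scored)[1]
```

The point of the split is visible in the code: making `negamax` faster (or pruning harder) lets you raise `depth`, which is where most engine strength historically came from, while improving `evaluate` only sharpens the judgment applied at the leaves.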
The impressive thing about AlphaZero is that the machine taught itself to play chess well without any human guidance, and the resulting play is more human (e.g. less materialistic, sacrificing material for long-term positional advantages that are hard to evaluate and impossible to calculate to the end) than that of an ordinary chess engine.
Dominion seems far trickier than a deterministic abstract game like chess to me. It has stochastic elements, there are far more "pieces", and every game is different. I guess DeepMind would have to play one Kingdom a zillion times over before it could move on to the next one. Then it would have to learn to evaluate how the strength of a card changes during a game, partly depending on what the opponents do, and the "metagame", i.e. how the strength of a card varies among Kingdoms.
I guess it is possible, but this seems like something a human mind can learn much faster, albeit less perfectly.