In the absence of a Qvist card ranking this year, over at the Discord we had the idea for a different approach.

**Vote**

A small website for submitting pairwise card comparisons is here:

**https://sites.google.com/view/domrating**

There is an optional field for your (user) name. You can always skip a comparison and it will simply not enter the results. You can also call cards similar, which enters as a tie result. I'll leave it up to you to decide whether to call it a tie or rank one card above the other. (For Hexes, the idea is to vote for the ones that hurt the most.)

**Results**

Leaderboards can be calculated the same way as the player leaderboard, where cards either win or tie against each other. The results are regularly updated and can be found here:

**https://docs.google.com/spreadsheets/d/1CaVOd1pgAgmjJHXPM1tVMVnlJOLDZaq8BxjW4I1NI1E/edit#gid=0**

Everybody can submit as much or as little as they want. (There could eventually be alternative leaderboard calculations using weighted or selected responses only.)

I used all the Qvist categories, adding Ways as a new one. This rating approach allows for pooling all cost categories. On the spreadsheet with the leaderboard, there is a column for the Qvist cost that you can use to filter down to the subset of $3-costs etc. ("temporary filter"; you can also download the sheet).

Kingdom cards and Landscapes are not mixed (so far), but Events and Projects are pooled.

The idea is to have it up for some time to get an initial rating. Afterwards, I could leave it open for further submissions or (temporarily) shut it down - not sure what the best approach is. After some time and/or when new cards get added, there could be a boost to cards' uncertainty and a new communal effort to update ratings (similar to the yearly edition of the Qvist rating). The advantage is that we could then start from the existing ratings, giving closer match-ups from the start as well as better initial ratings.

**Interpreting the results**

The rating that each card gets follows similar Glicko rules as the Dominion Online leaderboard (rating = mu there).

This means that the difference in ratings between two cards indicates how often one is expected to win in a (hypothetical) match-up. If everyone agreed on the exact order of the cards, you'd expect to see big gaps between each card. Groups of cards that sit close together indicate that opinions vary or that "similar" is used often.

The formula is approximately: `win_probability = 1/(1+exp(-(rating1-rating2)))`
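As a quick illustration, that formula is just the logistic function of the rating difference. A minimal sketch (the ratings below are made up for the example):

```python
import math

def win_probability(rating1: float, rating2: float) -> float:
    """Approximate probability that card 1 beats card 2 in a pairwise vote."""
    return 1 / (1 + math.exp(-(rating1 - rating2)))

# Equal ratings -> 50/50; a 1-point gap -> roughly a 73% expected win rate.
print(win_probability(0.0, 0.0))  # 0.5
print(round(win_probability(1.0, 0.0), 2))  # 0.73
```

So a card rated one point above another would be expected to win about three votes out of four against it.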

There are 3 columns that calculate this win probability of a card compared to the best, the median, and the worst card in the list.

phi is the uncertainty of the rating. It goes down with more submissions, and it shows whether the rating difference between two cards is (statistically) significant. (You can add about ±2*phi to the rating to get a rough 95% confidence interval.)