While Isotropic was down I finally finished my "aiPlayValue" branch (remember when I said I was working on this sometime in, oh, 2011?) and merged it in. The AI's default preference for playing each card is now defined on the card itself, by a function called "ai_playValue" that returns a number from -100 (avoid playing at all costs) to 1000 (OMG PLAY IT NOW).
This is a mechanism that generalizes: cards can define an "ai_xValue" function for some decision x, and that function will be consulted by default when making decision x about that card. The decision named 'play' replaces the old 'action' and 'treasure' decisions.
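As a rough sketch of the idea (in Python, with names I've made up for illustration; this is not the project's real API), the per-card lookup might work like this:

```python
class Card:
    """A card that carries its own default decision values, keyed by
    decision name -- the ai_xValue idea ('play', 'discard', ...)."""
    def __init__(self, name, **values):
        self.name = name
        self.values = values

    def ai_value(self, decision, state):
        # A value may be a constant or a function of the game state.
        v = self.values.get(decision)
        return v(state) if callable(v) else v

def choose(decision, cards, state):
    """Pick the card whose value for this decision is highest,
    treating -100 ('avoid at all costs') as unplayable."""
    scored = [(c.ai_value(decision, state), c) for c in cards]
    scored = [(v, c) for v, c in scored if v is not None and v > -100]
    return max(scored, key=lambda vc: vc[0])[1] if scored else None

village = Card('Village', play=lambda state: 820)
curse = Card('Curse', play=lambda state: -100)
```

With cards set up this way, `choose('play', hand, state)` replaces one giant priority function with a lookup on each card.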
I used this mechanism for Throne Room and King's Court too, with a decision named "multiplied" that returns a value over 1000 if it's preferable to play the card with a multiplier. We could in fact define all the default decisions for a card on the card object, which is great for experimenting with variant cards!
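A hypothetical sketch of how the "multiplied" decision could be consulted (again, these names are mine, not the project's):

```python
def best_multiplier_target(hand, multiplied_values):
    """Given a map of card name -> ai_multipliedValue result, return the
    card worth playing under Throne Room / King's Court, if any.
    Values over 1000 signal 'better with a multiplier than played
    normally'; anything at or below that threshold is skipped."""
    candidates = [(v, name) for name, v in multiplied_values.items()
                  if name in hand and v > 1000]
    return max(candidates)[1] if candidates else None
```

So if Village's multiplied value is 1300 and Copper's is 0, Throne Room targets the Village, and a hand with no value over 1000 yields no multiplier play at all.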
Part of the point here was also to make the AI play faster, since it no longer has to evaluate the enormous actionPriority function all the time. In fact, the result is about a 10% speedup.
The bigger bottleneck is actually the "gainsToEndGame" function, which lots of different decisions call all the time. Its value is cached, but apparently the cache is cleared far too often, so that would be the next thing to look at for efficiency. Other nice things to accomplish would be:
- Implement and update the Optimized Big Money + action strategies
- Create a meta-strategy that chooses the best OBM strategy
- Make a nice way to add a variant card to the game
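On the caching point above: one way to avoid clearing the gainsToEndGame cache too eagerly is to key it on a revision counter that only changes when the supply changes. A hypothetical sketch (the names, and the min-of-Province-pile-and-three-lowest-piles heuristic, are assumptions, not the project's actual code):

```python
class State:
    def __init__(self, supply):
        self.supply = dict(supply)   # card name -> count remaining
        self.revision = 0            # bumped whenever the supply changes
        self._gains_cache = None     # (revision, value) pair

    def gain(self, card):
        self.supply[card] -= 1
        self.revision += 1           # this is the only cache invalidation

    def gains_to_end_game(self):
        """Estimate how many gains until the game ends, recomputing
        only when the supply has changed since the last call."""
        if self._gains_cache is None or self._gains_cache[0] != self.revision:
            provinces = self.supply.get('Province', 0)
            three_lowest = sum(sorted(self.supply.values())[:3])
            self._gains_cache = (self.revision, min(provinces, three_lowest))
        return self._gains_cache[1]
```

The point is that decisions can call `gains_to_end_game()` as often as they like within a turn; the expensive recomputation only happens when a gain actually changes the supply.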