Once trained, the model produces two kinds of output. One displays the learned buy rules for a specific card; for the ad hoc decision-tree classifier I wrote, it gives something like this:
>>> print mymodel["Ironworks"]
* Gardens:
  * Bishop:
    - No Buy (16%)
  * No Bishop:
    * Monument:
      - No Buy (16%)
    * No Monument:
      * Young Witch:
        - No Buy (28%)
      * No Young Witch:
        (snipped for readability; it goes on to basically say you usually buy Ironworks if Gardens is around)
* No Gardens:
  * Great Hall:
    * Trade Route:
      - Buy (60%)
    * No Trade Route:
      (snipped for readability)
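For anyone curious how a display like that might be generated: here's a rough sketch of a recursive pretty-printer. The nested-tuple tree encoding, the `render` function, and the toy tree below are all my own inventions for illustration, not the actual code or learned model.

```python
def render(node, indent=0):
    """Render a decision tree as an indented rule listing.

    A node is either ('leaf', buy_probability) or
    (card_name, present_subtree, absent_subtree).
    """
    pad = '  ' * indent
    lines = []
    if node[0] == 'leaf':
        verdict = 'Buy' if node[1] >= 0.5 else 'No Buy'
        lines.append('%s- %s (%d%%)' % (pad, verdict, round(node[1] * 100)))
    else:
        card, present, absent = node
        lines.append('%s* %s:' % (pad, card))
        lines.extend(render(present, indent + 1))
        lines.append('%s* No %s:' % (pad, card))
        lines.extend(render(absent, indent + 1))
    return lines

# Tiny made-up tree, just to mirror the shape of the printout above.
tree = ('Gardens',
        ('Bishop', ('leaf', 0.16), ('leaf', 0.72)),
        ('leaf', 0.60))
print('\n'.join(render(tree)))
```

With that toy tree the printer emits "* Gardens:", then the Bishop split indented under it, then the "* No Gardens:" branch, matching the layout shown above.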
The other thing it does is take a list of kingdom cards, convert it to a boolean input vector, run the appropriate ten classifiers to produce an output vector, and convert that back into something human-readable:
>>> mymodel.classify("Highway, Nomad Camp, Harvest, Cartographer, Golem, Talisman, Vineyard, Island, Haggler, and University.")
'Cartographer', 'Island', 'Haggler', 'Talisman', 'Nomad Camp', 'Highway'
{'Golem': 0.083333333333333329, 'Cartographer': 0.90000000000000002, 'Island': 0.72727272727272729, 'University': 0.14285714285714285, 'Haggler': 0.66666666666666663, 'Vineyard': 0.4375, 'Talisman': 0.5625, 'Nomad Camp': 0.97222222222222221, 'Harvest': 0.023809523809523808, 'Highway': 0.69999999999999996}
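In case the pipeline isn't clear, here's a minimal sketch of that classify step under my own assumptions: parse card names out of the kingdom string, build a boolean presence vector, ask each card's classifier for a buy probability, and recommend anything above 0.5. The `ALL_CARDS` list is just the ten cards from this example, and the stand-in classifiers return the fixed probabilities from the run above instead of actually inspecting the vector.

```python
ALL_CARDS = ['Cartographer', 'Golem', 'Haggler', 'Harvest', 'Highway',
             'Island', 'Nomad Camp', 'Talisman', 'University', 'Vineyard']

# Stand-ins for the ten trained models: each maps a presence vector to a
# buy probability. Real classifiers would branch on the vector; these just
# return the scores from the example run above.
FAKE_SCORES = {'Golem': 0.083, 'Cartographer': 0.900, 'Island': 0.727,
               'University': 0.143, 'Haggler': 0.667, 'Vineyard': 0.438,
               'Talisman': 0.563, 'Nomad Camp': 0.972, 'Harvest': 0.024,
               'Highway': 0.700}
classifiers = {card: (lambda vec, p=p: p) for card, p in FAKE_SCORES.items()}

def parse_kingdom(text):
    """Pull known card names out of a free-form kingdom description,
    so commas, 'and', and trailing punctuation don't matter."""
    return [card for card in ALL_CARDS if card in text]

def to_vector(cards):
    """Boolean presence vector over the full card list."""
    present = set(cards)
    return [card in present for card in ALL_CARDS]

def classify(text, threshold=0.5):
    """Return (recommended cards, per-card buy probabilities)."""
    cards = parse_kingdom(text)
    vec = to_vector(cards)
    scores = {card: classifiers[card](vec) for card in cards}
    picks = [card for card in cards if scores[card] > threshold]
    return picks, scores
```

Running `classify` on the kingdom string above then selects the same six cards the model recommended (Cartographer, Island, Haggler, Talisman, Nomad Camp, Highway).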
...which is a mix of good and bad advice, to say the least. My gut says the approach of predicting each card separately is fundamentally flawed, and I have some ideas for how to improve on it, but I haven't had the chance to try them out.
@rspeer, what sort of input and output did you guys try?