I'd be interested to see what kind of Dominion cards something like this might come up with.
The fundamental problem is that the set of all Dominion cards printed (256, and that's if you include Events and the like) is dramatically smaller than the set of Magic: the Gathering cards. A recurrent neural network (henceforth RNN) simply isn't going to have a large enough body of training data to produce much in the way of interesting new things.
Which is not to say that you might not generate some inspired nonsense.
If someone wants to actually take this up, I strongly recommend taking the time to do some normalization of the data. In particular, you'll want to tokenize just about everything in a manner that makes it easy for the RNN to make connections; while they're a significant step up from plain Markov chain generators, RNNs still suck at "memory," and so get lost easily in large amounts of text. Turning common phrases/styles/etc. into simple, short tokens will make it easier to train the RNN, and will result in higher-quality nonsense. For example, some simple cards:
Smithy|$4|#T=A|#C+3
Province|$8|#T=V|#V+6
Market|$5|#T=A|#C+1 #A+1 #B+1 #$+1
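As a rough sketch of what that normalization pass might look like, here's a toy tokenizer built on regex substitution. The token scheme (`#C+n`, `#A+n`, `#B+n`, `#$+n`, `#V+n`, `#T=`) is just the made-up one from the examples above, not any established format:

```python
import re

# Hypothetical phrase-to-token rules; real Dominion text has far more
# patterns ("trash a card", "each other player...", etc.) that would
# each get their own short token.
PHRASE_TOKENS = [
    (r"\+(\d+) Cards?",   r"#C+\1"),
    (r"\+(\d+) Actions?", r"#A+\1"),
    (r"\+(\d+) Buys?",    r"#B+\1"),
    (r"\+\$(\d+)",        r"#$+\1"),
    (r"(\d+) VP",         r"#V+\1"),
]

TYPE_CODES = {"Action": "A", "Victory": "V", "Treasure": "T"}

def tokenize(name, cost, card_type, text):
    """Flatten one card into a short pipe-delimited training line."""
    for pattern, token in PHRASE_TOKENS:
        text = re.sub(pattern, token, text)
    # Replace the card's own name with #SELF so the RNN doesn't
    # have to learn to reproduce arbitrary names mid-text.
    text = text.replace(name, "#SELF")
    return "|".join([name, f"${cost}", "#T=" + TYPE_CODES[card_type], text])

print(tokenize("Smithy", 4, "Action", "+3 Cards"))
print(tokenize("Market", 5, "Action", "+1 Card +1 Action +1 Buy +$1"))
```

The point isn't this particular implementation; it's that every card collapses into a short, regular string, so the RNN spends its limited "memory" on card structure instead of English phrasing.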
and any card's internal references to itself would become something like #SELF. This actually looks a lot like the effort that someone... here? was making to give every card a parseable representation.
Obviously you could just throw the card text at the RNN and see what happens, but a little effort in cleaning it up would go a long way to improving the output.