The "war" function was completely left out for simplicity. It's just: each turn the next two cards go head to head; if one is higher than the other, that player gets a point, and if they tie, neither player does. No cards are "stolen" either; after the deck is drawn, the player with the most points wins. You could make the tie rule more interesting if you wanted, or make the gradation finer/continuous to discourage ties. Negative numbers really shouldn't be allowed.
So, for example, that deck of one 1000 and 99 zeroes would lose badly against a deck of 100 tens.
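That example is easy to check directly. Here's a minimal sketch of the variant in Python (function and variable names are my own; the shuffle doesn't actually matter in this matchup, since one deck is all tens):

```python
import random

def play(deck_a, deck_b):
    """One game: shuffle both decks, pair the cards up, higher card scores a point."""
    a_cards = random.sample(deck_a, len(deck_a))
    b_cards = random.sample(deck_b, len(deck_b))
    a = b = 0
    for x, y in zip(a_cards, b_cards):
        if x > y:
            a += 1
        elif y > x:
            b += 1
        # tie: neither player scores
    return a, b

spike = [1000] + [0] * 99   # one huge card, 99 zeroes
flat = [10] * 100           # 100 tens
print(play(spike, flat))    # -> (1, 99): the spike deck loses badly
```

The 1000 wins its one battle no matter where it lands, and every zero loses, so the 1-to-99 result is deterministic here.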
Interesting. Clearly no one deck is optimal for this game, because in each battle you're best off either winning by a little or losing by a lot: if you know your opponent's next card is going to be N, then N+1 (for small N) and 0 (for large N) are both strong plays. So any optimal strategy has to be randomized.
I feel it should be possible to figure out the optimal strategy for this game, maybe with computer help, but I don't know how to do it yet.
Playing to maximize average points (i.e. expected value), rather than playing to win, turns out to be pretty straightforward. (Sorry that this is a bit off topic.)
First observe that you might as well uniformly randomly permute your cards. Any possible advantage you could gain by not doing this could be countered by your opponent uniformly randomly permuting _his_ cards. So all that matters is how many cards you select of each value. Given that, the expected value can be calculated using a double sum.
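That double sum can be sketched like so (hypothetical names; a deck is given as a value -> count map, which is all that matters once both sides shuffle uniformly):

```python
def expected_score(counts_a, counts_b):
    """Expected number of battles A wins, out of n, when both players shuffle
    uniformly: A's card x meets B's card y with probability
    counts_a[x] * counts_b[y] / n**2, over n battles total."""
    n = sum(counts_a.values())
    assert n == sum(counts_b.values())
    total = 0.0
    for x, ca in counts_a.items():       # double sum over value pairs
        for y, cb in counts_b.items():
            if x > y:                    # A wins this matchup
                total += ca * cb
    return total / n

# The spike deck from earlier expects just 1 win against the flat deck
print(expected_score({1000: 1, 0: 99}, {10: 100}))  # -> 1.0
```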
It's easiest to reason through the rest using a continuous version of the game, where the double sum becomes a double integral. In the continuous game, you're asked to pick a non-negative, non-decreasing function f on [0,1] whose integral must be 1/2. Your opponent simultaneously picks a function g satisfying the same conditions. Your score is the integral over [0,1] of (the integral over [0,1] of H(g(y) - f(x)) dy) dx, where H is the Heaviside step function with H(0) = 1/2, so a tie splits the point (see http://mathworld.wolfram.com/HeavisideStepFunction.html). Observe that the score is at least 0 and at most 1. (Edit: Whoops, checking this the next day, the double integral I gave there is actually your opponent's score, not yours. Since 1 - 1/2 = 1/2, the reasoning later on still works.)
In this continuous game, the best strategy is f(x) = x. Then the inner integral over x becomes min(g(y), 1), so the score (given by the double integral) is at most the integral of g, which is 1/2, because we required its integral to be 1/2 as part of the game. So against f(x) = x your opponent can never expect more than half, and given that this is a zero-sum symmetric game, nothing better is possible.
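Here's a rough numeric check of that claim (my own helper names; midpoint Riemann sums stand in for the integrals, and H(0) = 1/2 so a tie splits the point):

```python
def H(t):
    # Heaviside step with the H(0) = 1/2 convention, so ties split the point
    return 0.0 if t < 0 else (0.5 if t == 0 else 1.0)

def score(f, g, n=400):
    # midpoint Riemann approximation of the double integral of H(g(y) - f(x))
    pts = [(i + 0.5) / n for i in range(n)]
    return sum(H(g(y) - f(x)) for x in pts for y in pts) / (n * n)

f = lambda x: x                          # the claimed optimal strategy
g = lambda y: 0.0 if y < 0.5 else 1.0    # some valid opponent: integral is 1/2
print(score(f, f), score(f, g))          # both come out to 0.5
```

Any g you try, as long as it's non-negative, non-decreasing, bounded by 1, and integrates to 1/2, should land on 0.5 against f(x) = x.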
Bringing that idea back to the original discrete game, the intuition is that you should choose about the same number of cards of each value, starting from 0 and going up to whatever value makes the point total work out. (Let's score 2 for a card win and 1 for a card tie to make the arithmetic cleaner.) The rounding is a bit annoying. It'd be easier if the game had 950 points to distribute among 100 cards, because then you could play 5 cards each of 0, 1, 2, ..., 19. Against that strategy, a card of value N from 0 to 19 contributes an expected 2(N/20) + (1/20) to your opponent's score, and a card of 20 or above contributes at most 2(N/20). So over a 100-card deck totaling 950, his expected score is at most 2(950/20) + 100(1/20) = 95 + 5 = 100, which is exactly half the available total score of 200.
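With that scoring convention the bound can be checked exactly in integer arithmetic, working in twentieths of a point (a sketch; `exp_points_x20` is a name I made up):

```python
def exp_points_x20(v):
    """Expected points (times 20, as an exact integer) of one card of value v
    against the deck holding 5 copies each of 0..19: 2 per win, 1 per tie."""
    wins = min(v, 20)              # values in 0..19 that v beats
    ties = 1 if v <= 19 else 0     # v ties only its own value, if it's in range
    return 2 * wins + ties

# Uniform vs. uniform: exactly half of the 200 available points
print(sum(5 * exp_points_x20(v) for v in range(20)) / 20)  # -> 100.0

# Every card value obeys the bound 2(N/20) + 1/20, i.e. 2N + 1 in twentieths,
# so any 100-card deck summing to 950 scores at most (2*950 + 100)/20 = 100
assert all(exp_points_x20(v) <= 2 * v + 1 for v in range(1000))
```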
When playing to win rather than playing to maximize score, this analysis still leaves open the door to strategies that try to tilt the score distribution so that they usually win by a little and occasionally lose by a lot. I haven't thought that through.