You can also combine MCTS with an evaluation function. In plain MCTS, you pick simulation moves uniformly at random, which (perhaps surprisingly) gives you good move decisions once you simulate enough games. However, if you already have a decent evaluation function, you can use it to bias the random move selection: if a move has a higher value, you probably want to pick it more often during the simulations.
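As a minimal sketch of what I mean by biasing the playouts, here's one common way to do it: weight each move by a softmax over its evaluation score, so higher-valued moves are sampled more often but every move keeps some probability. The `moves` list and `evaluate` function are placeholders for whatever your game representation provides.

```python
import math
import random

def biased_rollout_move(moves, evaluate, temperature=1.0):
    """Pick a simulation move with probability proportional to
    exp(eval / temperature) -- a softmax over move values.

    `moves` and `evaluate` are illustrative stand-ins: a list of
    legal moves and a heuristic scoring function for them.
    """
    weights = [math.exp(evaluate(m) / temperature) for m in moves]
    return random.choices(moves, weights=weights, k=1)[0]
```

The `temperature` knob controls how strongly the playouts trust the evaluation function: a high temperature approaches uniform random (plain MCTS), a low one approaches greedy play.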
In my experience, minimax search isn't very good without a strong evaluation function, but this biased simulation lets you get good play out of a merely okay evaluation function. Once you've decided which features your evaluation function looks at, you may also want to look into methods that find good weights for those features automatically. Stochastic gradient descent and temporal difference learning are the two I know of.
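To make the weight-tuning idea concrete, here's a rough sketch of a single TD(0) update for a linear evaluation function (value = dot product of weights and features). All the names and parameter values are illustrative, not a specific library's API:

```python
def td0_update(weights, features, reward, next_value, alpha=0.01, gamma=1.0):
    """One TD(0) step for a linear evaluation function.

    Nudges the weights so that the current position's value moves
    toward reward + gamma * next_value. `alpha` is the learning rate.
    """
    value = sum(w * f for w, f in zip(weights, features))
    delta = reward + gamma * next_value - value  # the TD error
    return [w + alpha * delta * f for w, f in zip(weights, features)]
```

You'd call this after each move in a self-play game, using the evaluation of the resulting position as `next_value` (and the game outcome as `reward` at the end).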
(I am kinda-sorta doing research here so I've been fiddling with similar things...)
For Ultimate Tic-Tac-Toe in particular, you should probably play the variant where you cannot be sent to an already-won square. In the other variant there's a pretty simple strategy that wins 100% of the time as first player, and once someone points it out to you, it's hard not to play it, or not to bias your evaluation function toward following it.