I managed to jump on board for this right at the start, so I've played around with a few different ideas. I'll try to give a bit of an outline below...
First, to explain how I represented things: a "move" consisted of a vector of fleets, all with the same destination planet, but the fleets could be sent from anywhere and at any time (provided they arrived no later than the map's maximum distance into the future). The main types of moves I made were for expanding, defending, attacking, redistributing and redirecting. The first four should be pretty self-explanatory; redirecting consisted of sending fleets added at time zero to a friendly planet with a closer enemy planet, such that the ships could still make it to their final destination in time.
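For illustration, that representation could be sketched like this (a minimal Python sketch; the type and field names are my guesses for exposition, not the actual code):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Fleet:
    source: int   # planet id the ships depart from
    ships: int    # number of ships in this fleet
    depart: int   # turn the fleet is launched (0 = immediately)
    arrive: int   # turn it reaches the shared destination

@dataclass
class Move:
    destination: int     # every fleet in a move shares one destination
    fleets: List[Fleet]  # but they may leave different planets at different times
```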
As with any other AI contest, what one really wanted was a game tree with minimax. The problem with this game is that the true game tree is impossibly large to do anything with, so one has to make huge guesses at what the reasonable moves actually are, and even then the number of moves one can test is extremely limited.
I first wanted to use dynamic programming for move selection, but my test implementations ran unbearably slowly (over 15 seconds a turn at some points with a basic heuristic bot, so that was never going to work for move selection inside a tree), so I quickly ditched it. I then considered linear programming, but I never worked out a decent way to formulate it (it would also need to be set up as an integer program, which I've never actually played around with).
My final move selection consists of running through the planets, picking out moves up to the maximum map distance to each planet, and treating each as a possible move. It's horribly inefficient and a huge bottleneck on generating much of a game tree, but alas, it's the best thing I ended up with.
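That brute-force generation loop might look something like this (an illustrative Python sketch under assumed data structures, not my actual code):

```python
import math

def distance(a, b):
    # Travel time between two planets, rounded up as in Planet Wars
    return math.ceil(math.dist(a, b))

def candidate_moves(planets, max_distance):
    # planets: list of (x, y, owner, ships); planet ids are list indices.
    # For each target, consider sending every spare ship from each of my
    # planets (owner == 1) whose travel time is within max_distance.
    moves = []
    for dst, (dx, dy, _, _) in enumerate(planets):
        for src, (sx, sy, owner, ships) in enumerate(planets):
            if src == dst or owner != 1 or ships == 0:
                continue
            d = distance((sx, sy), (dx, dy))
            if d <= max_distance:
                moves.append((src, dst, ships, d))
    return moves
```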
As for my macro strategy, I tried modelling it both with a game tree and with a payoff matrix. I was never able to get much success with the payoff matrix, but I liked that idea the most. I constructed my state evaluation to be zero-sum, which made things easier in both cases.
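One way to get a zero-sum evaluation in that spirit is to score both players' material with the identical formula and take the difference, so that my score is the exact negative of the opponent's by construction. The weighting of growth here is an assumption for illustration, not the actual evaluation:

```python
def evaluate(planets, fleets, me=1, them=2):
    # Zero-sum by construction: evaluate(..., me, them) == -evaluate(..., them, me)
    def material(player):
        ships = sum(p["ships"] for p in planets if p["owner"] == player)
        ships += sum(f["ships"] for f in fleets if f["owner"] == player)
        growth = sum(p["growth"] for p in planets if p["owner"] == player)
        return ships + 10 * growth  # growth weighted by an assumed horizon
    return material(me) - material(them)
```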
The issues with the simultaneous game were that it didn't consider strings of moves, and that I couldn't generate a very big game matrix within the one-second limit (it was also very hard to pick strategies for each player, create the needed payoff matrix and time the whole thing to stay near one second). The idea was to pick sets of moves that were jointly feasible and treat those as pure strategies for each player. To pick my pure strategy to play, I would pick out my favourite pure-strategy Nash equilibrium that I could play (or even try some kind of minimax-regret method, although I never worked out the specifics for that); if no such equilibrium existed, I would try to obtain (or approximate) a mixed-strategy Nash equilibrium (MSNE) and play that.
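For a zero-sum payoff matrix, finding a pure-strategy Nash equilibrium amounts to finding a saddle point: a cell that is a maximum in its column (I can't improve by deviating) and a minimum in its row (neither can the opponent, since their payoff is the negative of mine). A small sketch, not my actual code:

```python
def pure_nash(A):
    # A[i][j] = my payoff when I play row i and the opponent plays column j.
    for i, row in enumerate(A):
        for j, v in enumerate(row):
            col_max = max(A[k][j] for k in range(len(A)))
            if v == col_max and v == min(row):
                return i, j  # saddle point: neither side wants to deviate
    return None  # no pure equilibrium; fall back to a mixed strategy
```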
There are a couple of ways I would have tried to get the MSNE. The first was to compute an actual MSNE using the simplex method; the cool thing is that simplex iteratively approaches the optimal solution through feasible solutions, so if you ran out of time you could just use whatever had been calculated so far as an approximation. The second would have been to approximate it numerically somehow. I found a few ideas for that online; the best was a method of having each player respond optimally in pure strategies (I think) to the opponent's current mixed strategy, then accumulating the results to update those mixed strategies. Given I was having trouble getting a simple version running in the required time with just pure strategies, I didn't see much point in implementing the full idea, so it never eventuated.
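That numerical method sounds like fictitious play: each player best-responds in pure strategies to the opponent's empirical mixture of past play, and in zero-sum games the accumulated frequencies converge to a mixed equilibrium. A sketch of how it might go, assuming the same payoff-matrix convention as above:

```python
def fictitious_play(A, iterations=2000):
    # A[r][c] = row player's payoff; illustrative, not the bot's code.
    m, n = len(A), len(A[0])
    row_counts = [1] + [0] * (m - 1)  # seed each side with one arbitrary play
    col_counts = [1] + [0] * (n - 1)
    for _ in range(iterations):
        # My best pure response to the opponent's empirical column mix
        i = max(range(m), key=lambda r: sum(A[r][c] * col_counts[c] for c in range(n)))
        # Their best pure response (minimising my payoff) to my empirical row mix
        j = min(range(n), key=lambda c: sum(A[r][c] * row_counts[r] for r in range(m)))
        row_counts[i] += 1
        col_counts[j] += 1
    return ([x / sum(row_counts) for x in row_counts],
            [x / sum(col_counts) for x in col_counts])
```

Like the simplex idea, this is anytime: stop whenever the clock runs out and use the frequencies accumulated so far.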
Finally, I came back to the normal approach for these kinds of problems: minimax search on a game tree. I tried a few different methods for this (I only implemented my final submission this morning, so I have no idea whether it will flop, but it seemed to work better in the few test games I managed to play). The first was to have each player pick moves for the current game state, then generate two layers and repeat the process. The problem I had with this was that my last move completely ignored what moves the opponent might respond with, so the bot would often make moves that would be great if there were no opponent responding, but alas, this is a game and that is not the case.
The second was to have the opponent respond to my moves in each of my children, which removes the above issue of them not getting to respond to every move I consider, but it also means generating their moves far more often and processing many more states for the same number of nodes in the tree.
What I implemented this morning was like the second, except that after generating each two layers it would pick the minimax solution and keep extending from there. The benefit is that I actually end up considering and making more "actual" moves each turn, but I don't think it's quite as smart about the moves it does make.
(Edit: upon looking at my code again, I didn't do what I thought this morning; the bot is doing pretty much what is described as the second option. I also just had a crash recently, so hopefully I don't get too many of those.)
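A stripped-down sketch of that second scheme, where the opponent gets to respond to each of my candidate moves before the state is scored (the move-generation, state-update and evaluation functions are placeholders passed in by the caller, not the actual bot's internals):

```python
def minimax(state, my_moves, their_moves, apply_moves, evaluate, depth):
    # Returns (value, my_move). Each level of depth is one pair of moves,
    # resolved as if the opponent saw my move before replying.
    if depth == 0:
        return evaluate(state), None
    best_val, best_mv = float("-inf"), None
    for mine in my_moves(state):
        worst = float("inf")  # opponent picks the reply worst for me
        for theirs in their_moves(state):
            child = apply_moves(state, mine, theirs)
            val, _ = minimax(child, my_moves, their_moves, apply_moves,
                             evaluate, depth - 1)
            worst = min(worst, val)
        if worst > best_val:
            best_val, best_mv = worst, mine
    return best_val, best_mv
```

Running it on a toy game (state is just a number, moves add or subtract) shows the shape: with moves `[1, 2]` for both sides and `evaluate = identity`, I pick the larger move because the opponent will always subtract as much as possible.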
One last thing: I was never sure how many ships to send when redistributing. If I sent all unused ships, too many went out at first; if I sent just the planet's growth amount, some got stuck at the back, and planets receiving redistributions from other planets built up as well. My simple hack was to send up to four times the planet's growth in each redistribution move; that way the backline slowly drains without sending too many ships into the air at once, and there isn't as much of an issue with build-ups on the path to the front line.
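The hack boils down to one line; the factor of 4 is the value mentioned above, and the function name is mine:

```python
def redistribution_size(surplus_ships, growth_rate, factor=4):
    # Send at most `factor` times the planet's growth per redistribution,
    # so backline planets drain steadily without flooding the air.
    return min(surplus_ships, factor * growth_rate)
```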
Edit: You can get a copy of my code here: https://github.com/amstan/antimatroid
and, if you're so inclined, watch it play here: http://ai-contest.com/profile.php?user_id=3945