With the competition drawing to a close, I guess it's time to reveal the design of my bot. It's not exactly a Xathis killer, but I do expect it to make the top 50. Source code is included at the bottom.
I decided to use an influence-based approach. Each possible move is given a score. These scores are then fed into a giant constraint solver I wrote, which minimizes total combat and noncombat scores subject to the hard constraint that ants are not allowed to collide. I also implemented a symmetry tracker to infer hill locations and unseen parts of the map.
Symmetry

This is one of the parts I'm most proud of, as I believe my symmetry code is likely the best of any bot, or at least very close. It essentially works by brute force, but a few extra techniques set it apart from the brute-force solver previously described on the forums.
On turn 0, it generates all possible symmetries of order at most maxplayers, i.e. min(10, area/900). On a typical map there are 10-60k possible transforms at the start, depending on size and on whether the map is square. The solver also keeps track of the group of all player transforms found so far. Each time I see a new square, I go through all transforms and remove the ones inconsistent with the newly seen square. For each removed transform, I also remove its inverse and, if the group is nonempty, its product with every group member. If the current turn is less than 20, I also remove any transform that would predict a hill at a visible spot, since the earliest possible razing is turn 21 due to the minimum hill distance. When the set of possible transforms has been reduced to minplayers (or some other similar conditions hold), I can safely conclude that all remaining transforms represent player transforms.
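The post doesn't include code, so here is a minimal sketch of the consistency-filtering step, assuming transforms are represented as affine maps (a, b, c, d, dr, dc) acting on toroidal coordinates; the representation and all names are my own, not necessarily the bot's.

```python
def apply_transform(t, pos, rows, cols):
    """Apply an affine transform (a, b, c, d, dr, dc) to (r, c) on a torus."""
    a, b, c, d, dr, dc = t
    r, q = pos
    return ((a * r + b * q + dr) % rows, (c * r + d * q + dc) % cols)

def filter_transforms(transforms, known, rows, cols):
    """Keep only transforms consistent with every known square.

    `known` maps (r, c) -> terrain; a transform is inconsistent if it
    maps a known square onto a known square of different terrain.
    """
    kept = []
    for t in transforms:
        if all(known.get(apply_transform(t, pos, rows, cols), terrain) == terrain
               for pos, terrain in known.items()):
            kept.append(t)
    return kept
```

The follow-up bookkeeping described above (also removing each filtered transform's inverse and its products with group members) is omitted here for brevity.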
Filtering the transforms initially takes a lot of time, and it is often impossible to do it all in one turn. I keep a stack of seen squares and process it at the end of each turn until I'm done or run out of time.
When a transform is known to be in the group, I add it, then add products until the group is once again closed. I calculate the newly inferred parts of the map and add them to the seen-squares stack.
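To illustrate the closure step, here is a sketch with transforms reduced to index permutations of the map's squares (a simplification; the bot's actual representation is not shown in the post):

```python
def close_group(generators):
    """Close a set of permutations under composition, in both orders,
    until no new products appear -- the group-closure step above."""
    group = set(generators)
    frontier = list(generators)
    while frontier:
        t = frontier.pop()
        for h in list(group):
            # Compose t with h in both orders; permutations compose as
            # (t . h)[i] = t[h[i]].
            for prod in (tuple(t[i] for i in h), tuple(h[i] for i in t)):
                if prod not in group:
                    group.add(prod)
                    frontier.append(prod)
    return group
```

For example, closing over a single 4-cycle yields the cyclic group of order 4, identity included.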
Lastly, there is one other check I do, though it only rarely helps: order filtering. For each transform, I calculate the hypothetical new group that would result if it were added. If any of the members are impossible, I can remove that transform. If the group would be too big, or if there is no multiple of the group size between minplayers and maxplayers, I can also filter the transform out.
For each square, I keep track of whether there could be an enemy there. I assume all enemies belong to the same player. The method is fairly conservative: everything is true at the start, and an invisible square is true if any of its neighbors was true last turn or if a hill is located there. One extra check: if I manage to narrow down possible enemy hill locations within the first 49 turns, I do a BFS and mark any square that is not within turns-1 distance of a possible hill as false.
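A minimal sketch of one turn of that conservative update, on a toroidal grid (the data layout and names are invented; the 49-turn hill-distance pruning is omitted):

```python
def update_possible_enemy(prev, visible, enemy_seen, hills, rows, cols):
    """One turn of the possible-enemy update.

    prev[r][c]  -- could an enemy have been at (r, c) last turn?
    visible     -- set of squares we can see this turn
    enemy_seen  -- squares where we actually see an enemy
    hills       -- possible enemy hill squares (always possible)
    """
    nxt = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if (r, c) in visible:
                nxt[r][c] = (r, c) in enemy_seen
            else:
                # Invisible: an enemy may have stayed, spawned at a hill,
                # or stepped in from any neighbor possible last turn.
                nxt[r][c] = (
                    prev[r][c]
                    or (r, c) in hills
                    or any(prev[(r + dr) % rows][(c + dc) % cols]
                           for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))
                )
    return nxt
```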
I calculate an influence value for each square from my ants and from enemy ants. The kernel I use for my own ants is large (squared radius 10), while enemy influence only extends to immediate neighbors (radius 1). This is because I want to prevent clustering of my own ants, but I only care about enemies if they are blocking the way with a dense formation.
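A sketch of accumulating such an influence field on a torus; the post only specifies the radii, so the 1/(1+d²) falloff weight here is my own assumption:

```python
def influence_map(ants, rad2, rows, cols):
    """Sum a distance-weighted influence for each ant over all squares
    within squared radius `rad2`, with toroidal wraparound."""
    field = [[0.0] * cols for _ in range(rows)]
    span = int(rad2 ** 0.5)
    for (ar, ac) in ants:
        for dr in range(-span, span + 1):
            for dc in range(-span, span + 1):
                d2 = dr * dr + dc * dc
                if d2 <= rad2:
                    # Hypothetical falloff; the real kernel weights
                    # are not described in the post.
                    field[(ar + dr) % rows][(ac + dc) % cols] += 1.0 / (1 + d2)
    return field
```

With rad2=10 this covers the large own-ant kernel; rad2=1 gives the neighbors-only enemy kernel.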
Exploration and food gathering
This is one area where my bot is less sophisticated than most. The food-influence portion of the noncombat score depends on distance from the nearest food. Distances are increased when propagating through squares with ant influence. If the path propagates through an unexplored square, I add that square as a pretend food; this is the closest my bot comes to explicit exploration code.
Food distance scores are based on a single multi-source A* search. I do not attempt to prevent ants from going for the same target (though the influence reduces that risk). I also don't look into the future at all to plan routes involving multiple food; every ant greedily goes toward the closest. While this may be suboptimal, it is simple and efficient. Technically, I do a separate A* for every ant, but since each search reuses the distances and open set of the previous one, it is effectively a single search and almost as efficient. This only works because every ant sees the same distance for each square and ants don't try to block each other.
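The shared-distance idea can be sketched as a multi-source Dijkstra (A* without a heuristic) seeded from every food square at once; the influence surcharge weight below is invented, matching only the qualitative "distances are increased through squares with ant influence" rule:

```python
import heapq

def food_distances(foods, passable, ant_influence, rows, cols):
    """Multi-source shortest-path distances from all food squares,
    where stepping into a square costs 1 plus its friendly influence."""
    dist = {f: 0.0 for f in foods}
    heap = [(0.0, f) for f in foods]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist.get((r, c), float('inf')):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = (r + dr) % rows, (c + dc) % cols
            if not passable[nr][nc]:
                continue
            nd = d + 1.0 + ant_influence[nr][nc]
            if nd < dist.get((nr, nc), float('inf')):
                dist[(nr, nc)] = nd
                heapq.heappush(heap, (nd, (nr, nc)))
    return dist
```

Every ant can then read its own score out of the same `dist` map, which is what makes the "one search for all ants" reuse possible.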
Food location tracking: my bot assumes that all food it sees remains uneaten until it sees otherwise. Also, when it sees a newly spawned food, it adds all conjugates of that position (from detected player transforms) if there is no possible enemy at that square. PossibleEnemies is used as a proxy for territory control, because there is no sense worrying about food that spawned in enemy territory. One weakness is that my bot will never try to re-explore previously seen tiles. I tried making it do so at one point, but it only seemed to hurt my bot, since too many ants went for the same square. What would be ideal here is something like Xathis's visibility covering, but I never tried that.
Hill distance is calculated similarly to food distance, but ant influence is weighted less strongly. It also applies ant influence and squared enemy influence, though only if exactly one of the two is present in the square. The idea is to encourage reinforcements toward a sparse battle line, but to redirect ants once the blockage becomes thick.
One extra feature is controlled enemy hill detection. If I have at least 2 ants in range of a hill and at most one enemy, I mark the hill as controlled and no longer treat it as a target. Any ants which are already there are left to guard it while enemy reserves drain out of the hill, but no new ants will be sent towards the hill.
Putting it together
Noncombat square scores are a combination of ant influence, food score, hill score, and food penalty. The food penalty is a quantity which depends on the number of ants, the number of visible enemies, turns elapsed, and the amount of stored food. Its purpose is to act as a weight between collecting food and attacking hills: the higher it is, the farther ants will go out of their way to attack a hill instead of collecting food. The overall score is the minimum of (food score + food penalty + ant influence) and the hill score. Ant influence is applied when collecting food to reduce clustering, but not when attacking hills, for obvious reasons.
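The combination rule is simple enough to state directly (a one-line restatement of the paragraph above; the component values themselves come from the searches already described):

```python
def noncombat_score(food_score, hill_score, food_penalty, ant_influence):
    """Per-square noncombat score: the cheaper of the food option
    (penalised by influence and the food/hill tradeoff weight) and
    the raw hill option. Lower is better."""
    return min(food_score + food_penalty + ant_influence, hill_score)
```

Raising `food_penalty` makes the hill branch win the minimum more often, which is exactly how the weight steers ants from food toward hills.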
In early versions, I experimented with static defences, but I quickly found them to be a terrible idea. I instead use a dynamic defence system: roughly a fifth of the ants are chosen as guards. These guards are restricted to remain at least as close to the hill as the closest possible enemy, but within that distance they can gather food like normal. When enemies get close, the guards naturally converge on the hill to defend it.
One defect of this system is that my bot is not smart enough to expand the "perimeter". If there is an invisible square close to the hill which may contain an enemy, then the guards will be forced to stick close to the hill, but they aren't smart enough to just go explore that square instead.
Combat

And now for the part everyone's been waiting for.
My bot considers five scenarios of possible enemy moves: one for each direction (plus staying put), in which all enemies make that same move if possible. I figured this is a good heuristic because ants on an attack line will often all move forward or all move backward. Each enemy position is assigned a "bounty", usually worth half an ant; this is the penalty for letting it live. To evaluate the combat score of a candidate set of moves, the number of casualties plus the bounties of surviving enemies is added up for each scenario, and the score of the worst-case scenario is taken.
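A sketch of the worst-case evaluation, with the game's actual combat resolution abstracted behind a `resolve` callback (all names here are invented, and the bounty default is the half-an-ant value from the text):

```python
# The five all-enemies-move-the-same-way scenarios: stay, N, S, E, W.
SCENARIOS = [(0, 0), (-1, 0), (1, 0), (0, 1), (0, -1)]

def combat_score(my_moves, enemies, resolve, bounty=0.5):
    """Worst-case cost of a candidate move set over the five scenarios.

    `resolve(my_positions, enemy_positions)` stands in for the game's
    combat rules and returns (my_dead, enemy_dead). Lower is better,
    so the worst case is the maximum.
    """
    worst = float('-inf')
    for dr, dc in SCENARIOS:
        enemy_pos = [(r + dr, c + dc) for (r, c) in enemies]
        my_dead, enemy_dead = resolve(my_moves, enemy_pos)
        # Each of my losses costs a full ant; each surviving enemy
        # keeps its bounty (the cost of letting it live).
        score = my_dead + bounty * (len(enemies) - enemy_dead)
        worst = max(worst, score)
    return worst
```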
All of the scores are fed into a giant constraint solver. I apply several hard constraints to ant movement. Ants are not allowed to occupy the same square. Pairs of ants cannot exchange positions (this reduces the search space without affecting solutions). Ants are forbidden to move to a possible enemy destination unless that square is a hill we own. Lastly, ants are prevented from moving onto one of our hills if we have enough food to spawn at every hill.
After applying arc consistency and pruning dominated moves, all of the ants are split into islands, consisting of all ants that could possibly affect each other.
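The island split is a connected-components computation over a "could affect each other" relation; here is a small sketch where that relation is passed in as a predicate (the quadratic scan is for clarity, not efficiency):

```python
def split_islands(ants, interacts):
    """Group ants into islands: connected components of the
    `interacts` relation, found by depth-first search."""
    seen, islands = set(), []
    for a in ants:
        if a in seen:
            continue
        stack, comp = [a], []
        seen.add(a)
        while stack:
            x = stack.pop()
            comp.append(x)
            for y in ants:
                if y not in seen and interacts(x, y):
                    seen.add(y)
                    stack.append(y)
        islands.append(comp)
    return islands
```

Each island can then be handed to the greedy depth-first solver independently.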
The islands are solved using what is effectively a depth-first greedy search. This finds decent solutions quickly, though exhausting the entire search space takes a while. The solver seeks to minimize total combat score, followed by total noncombat score. It takes turns working on each island until every island is solved or time runs out.
Hill blocking

Aka the ChrisH tactic. My implementation calculates the noncombat score of each hill (see above). It also keeps track of when each hill was last touched, so that given any set of blocked and unblocked hills, it can predict where new ants will be spawned (in case of ties, it assumes the worst case). For each possible set of blocked and unblocked hills, it estimates the total score of the spawned ants. The difference is the blocking incentive, which is added to the move score of any ant moving onto that hill. If there are more ants than twice the hill count, it will try to keep a blocker ant next to each hill. There is a large penalty for failing to spawn an ant, so my bot will always try to spawn as many ants as possible each turn. It also keeps blockers within distance 1 of the hill instead of always on the hill itself.
- Source code of my final submission (37.75 KiB)
Last edited by Parasprites on Wed Dec 21, 2011 7:15 pm, edited 1 time in total.