by montanalow » Tue Mar 02, 2010 6:37 am
My bot uses about 32KB of instructions and about 4KB of data per move, all on the stack, to maximize my minimax depth. I used the standard iterative-deepening alpha-beta minimax + Voronoi approach, but realized that at any given minimax depth, 99% of the processing time was spent in my flood-fill Voronoi territory calculation. So...
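For the curious, that territory evaluation looks roughly like the following. This is a minimal sketch, not the code from the attachment: names like W, H, wall, and voronoiScore are illustrative, and the board size is assumed. Each empty square counts for whichever player can reach it first by BFS; squares reached simultaneously count for neither.

[code]
#include <cstring>

const int W = 15, H = 15;     // board size (assumed)
bool wall[H][W];              // true where a wall or a player trail exists

struct Pos { int x, y; };

// (my squares) - (their squares), where a square belongs to whoever
// reaches it first by BFS; ties count for neither player.
int voronoiScore(Pos me, Pos them) {
    static const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    int dist[2][H][W];
    std::memset(dist, -1, sizeof dist);   // -1 means unreached

    Pos start[2] = {me, them};
    for (int p = 0; p < 2; ++p) {
        Pos q[W * H];                     // array-backed BFS queue, on the stack
        int head = 0, tail = 0;
        dist[p][start[p].y][start[p].x] = 0;
        q[tail++] = start[p];
        while (head < tail) {
            Pos c = q[head++];
            for (int d = 0; d < 4; ++d) {
                int nx = c.x + dx[d], ny = c.y + dy[d];
                if (nx < 0 || nx >= W || ny < 0 || ny >= H) continue;
                if (wall[ny][nx] || dist[p][ny][nx] >= 0) continue;
                dist[p][ny][nx] = dist[p][c.y][c.x] + 1;
                q[tail++] = {nx, ny};
            }
        }
    }

    int score = 0;
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            int a = dist[0][y][x], b = dist[1][y][x];
            if (a < 0 && b < 0) continue;              // unreachable by both
            if (b < 0 || (a >= 0 && a < b)) ++score;   // I get there first
            else if (a < 0 || b < a) --score;          // they get there first
        }
    return score;
}
[/code]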
To find the optimal minimax depth, I used iterative deepening, but without calculating the flood-fill Voronoi territory. I could measure how many clock cycles one Voronoi territory calculation cost, then iteratively deepen minimax for essentially no CPU until I found the deepest depth that still required fewer than my maximum affordable number of Voronoi calculations. At that point I reran minimax to that depth and performed all the Voronoi calculations. Typically all of the iterative deepening took under 5ms, and I only allowed 800ms of total time to avoid timeout problems.
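The shape of that trick, as a hedged sketch (the names and the budget figure are made up, and a real bot would also apply/undo moves and alpha-beta prune):

[code]
#include <algorithm>
#include <climits>

const int MOVES = 4;                  // up/down/left/right
const long MAX_VORONOI_EVALS = 20000; // time budget / measured cycles per
                                      // flood fill (illustrative figure)
long leafCount;

int evaluate() {
    // the expensive flood-fill Voronoi score would run here
    return 0;
}

// Bare minimax skeleton; countOnly swaps the evaluator for a counter.
int minimax(int depth, bool maximizing, bool countOnly) {
    if (depth == 0) {
        if (countOnly) { ++leafCount; return 0; }
        return evaluate();
    }
    int best = maximizing ? INT_MIN : INT_MAX;
    for (int m = 0; m < MOVES; ++m) {
        int v = minimax(depth - 1, !maximizing, countOnly);
        best = maximizing ? std::max(best, v) : std::min(best, v);
    }
    return best;
}

// Deepen with the free evaluator until the leaf count would blow the
// budget, then return the last depth that fit.
int pickDepth() {
    int depth = 0;
    long leaves = 0;
    while (leaves <= MAX_VORONOI_EVALS) {
        ++depth;
        leafCount = 0;
        minimax(depth, true, /*countOnly=*/true);
        leaves = leafCount;
    }
    return depth - 1;
}

int main() {
    int depth = pickDepth();
    if (depth > 0)
        minimax(depth, true, /*countOnly=*/false); // the real, Voronoi-priced search
}
[/code]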
After clearing hundreds if not thousands of trial games with this time calculation, my bot timed out in a match the day before the contest, so I frantically shoehorned in a wall-clock interrupt *big thanks ebrahim*. Unfortunately I was leaving for my vacation and didn't realize I had a subtle bug in the new timeout code, so 10 minutes before the deadline I uploaded a version with the wall-clock interrupt stripped back out.
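For reference, the usual POSIX pattern for that kind of wall-clock interrupt is a SIGALRM handler that sets a flag the search polls. This is a sketch of the general technique, not the code I shipped (or un-shipped):

[code]
#include <csignal>
#include <sys/time.h>

volatile std::sig_atomic_t timeUp = 0;

void onAlarm(int) { timeUp = 1; }   // async-signal-safe: only sets a flag

// Arm a one-shot wall-clock timer; SIGALRM fires after ms milliseconds
// of real time, regardless of how the CPU time accounting looks.
void armTimeout(long ms) {
    std::signal(SIGALRM, onAlarm);
    itimerval t = {};                        // zeroed: one-shot, no repeat
    t.it_value.tv_sec = ms / 1000;
    t.it_value.tv_usec = (ms % 1000) * 1000;
    setitimer(ITIMER_REAL, &t, nullptr);
}

// Inside the search loop, poll the flag and unwind:
//   if (timeUp) return bestMoveSoFar;
[/code]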
Here's that final version:
PS. I tried many tie-breaker strategies for when minimax couldn't differentiate, and found that moving to the walliest square beat a random tie-breaker 57-43 on my 100 test boards, as well as beating every other idea I came up with.
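"Walliest" here just means the destination square adjacent to the most walls. A sketch, reusing the board representation from the Voronoi sketch above (counting board edges as walls is my assumption; my guess at why it wins is that hugging walls burns squares that were already half-enclosed, preserving open territory):

[code]
const int W = 15, H = 15;   // board size, as in the Voronoi sketch (assumed)
bool wall[H][W];            // true where a wall or a player trail exists

// Count walls next to a square; board edges count as walls here.
int wallsAround(int x, int y) {
    static const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    int n = 0;
    for (int d = 0; d < 4; ++d) {
        int nx = x + dx[d], ny = y + dy[d];
        if (nx < 0 || nx >= W || ny < 0 || ny >= H || wall[ny][nx]) ++n;
    }
    return n;
}

// When several moves tie on minimax score, pick the destination that
// maximizes wallsAround() instead of choosing at random.
[/code]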
Attachments:
cpp.zip (17.05 KiB)