Here's mine - in Python/Cython/C - the main entry point is in the .py files, which import an extension module built from the .pyx, .h, and .c files. chamberNode.c/h are linked in, but I didn't have time to finish that part.
I seem to have a bug or two, given how badly I've done compared to others using almost the same technique (it was really frustrating to hunt for bugs for hours and not find anything) - but here's what I was going for:
* If the bots are separated, generate a game tree (using only my moves) and choose the path that maximises the area left at the end. I normally manage 15-25 levels deep.
* If the bots aren't separated, build the game tree for both players' moves and assign scores to the nodes as follows:
  * for most nodes: min(score({their_moves})) on their turn, or max(score({my_moves})) on mine
  * for the deepest nodes: -100 if we lose, +100 if we win, 0 if we draw, and otherwise delta := (my_area - their_area), clamped so that -100 < delta < 100
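Roughly, the two cases look like this (a hedged sketch in plain Python rather than my actual Cython/C - the grid representation and every name here are made up, and a plain flood fill stands in for the fancier area estimates described below):

```python
from typing import Set, Tuple

Cell = Tuple[int, int]
STEPS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def neighbours(pos: Cell, walls: Set[Cell], width: int, height: int):
    """Free cells adjacent to pos."""
    x, y = pos
    for dx, dy in STEPS:
        n = (x + dx, y + dy)
        if 0 <= n[0] < width and 0 <= n[1] < height and n not in walls:
            yield n

def flood_area(pos: Cell, walls: Set[Cell], width: int, height: int) -> int:
    """Free cells reachable from pos - a crude stand-in for the Voronoi /
    chamber estimates described below."""
    seen, stack = {pos}, [pos]
    while stack:
        for n in neighbours(stack.pop(), walls, width, height):
            if n not in seen:
                seen.add(n)
                stack.append(n)
    return len(seen) - 1

def best_separated(me: Cell, walls: Set[Cell], width: int, height: int,
                   depth: int) -> int:
    """Separated case: search over my moves only, maximising the area still
    reachable at the end of the path."""
    if depth == 0:
        return flood_area(me, walls, width, height)
    return max((best_separated(n, walls | {me}, width, height, depth - 1)
                for n in neighbours(me, walls, width, height)), default=0)

def minimax(me: Cell, them: Cell, walls: Set[Cell], width: int, height: int,
            depth: int, my_turn: bool = True) -> int:
    """Not-separated case: alternating minimax with the leaf scores listed
    above (head-on collisions and simultaneous moves are glossed over)."""
    my_moves = [n for n in neighbours(me, walls, width, height) if n != them]
    their_moves = [n for n in neighbours(them, walls, width, height) if n != me]
    if not my_moves and not their_moves:
        return 0                                    # draw
    if not my_moves:
        return -100                                 # we lose
    if not their_moves:
        return +100                                 # we win
    if depth <= 0:
        delta = (flood_area(me, walls, width, height)
                 - flood_area(them, walls, width, height))
        return max(-99, min(99, delta))             # keep -100 < delta < 100
    if my_turn:
        return max(minimax(m, them, walls | {me}, width, height,
                           depth - 1, False) for m in my_moves)
    return min(minimax(me, m, walls | {them}, width, height,
                       depth - 1, True) for m in their_moves)
```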
If the bots were in the same section at that node then I used Voronoi territory. If they were not, I split the region into chambers, and for each chamber I used the checkerboard trick - this gave a better upper bound than flood filling, but not as good as it would have been if I'd got the chamber graph working.
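For the checkerboard trick itself, here's a minimal sketch (again the cell-set representation and the name are invented, not lifted from my code): colour the grid like a checkerboard; every move flips colour, so a path through a chamber with B black and W white free cells can fill at most 2*min(B, W) of them, plus one extra if it enters on the majority colour.

```python
def checkerboard_bound(chamber_cells, entry):
    """Upper bound on how many cells of a chamber one snake can actually fill.
    chamber_cells is a set of (x, y) cells; entry is the first cell stepped
    onto inside the chamber (and is counted in chamber_cells)."""
    black = sum(1 for (x, y) in chamber_cells if (x + y) % 2 == 0)
    white = len(chamber_cells) - black
    # Each move flips colour, so a path uses the two colours almost equally;
    # the odd extra cell is only available if you start on the majority colour.
    starts_on_black = (entry[0] + entry[1]) % 2 == 0
    bonus = 1 if (black != white and starts_on_black == (black > white)) else 0
    return 2 * min(black, white) + bonus
```

For a 2x3 chamber entered at a corner, checkerboard_bound({(0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1)}, (0, 0)) returns 6, which is exact; for awkwardly shaped chambers it's still optimistic, which is where the chamber graph would have helped.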
At each game-state node I also passed back which move I predicted they were going to make, and what their worst possible move would be (along with the scores each would lead to).
I tried using this information to decide when it's worth picking a move that isn't the best one (i.e. when the move I predict they'll make - assuming I make my best move - is actually the worst move they could make against some other move of mine).
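As a sketch of that decision, building on the toy minimax() above (MARGIN and the rest of the plumbing are my invention here - the real decision should have been driven by the prediction hit ratio mentioned below):

```python
MARGIN = 10   # made-up tolerance; ideally tuned off a prediction hit ratio

def choose_move(me, them, walls, width, height, depth):
    """For each of my moves, score every opponent reply, remembering both the
    reply I predict (their best) and the reply that would suit me best."""
    candidates = []
    for m in neighbours(me, walls, width, height):
        if m == them:
            continue
        replies = [(minimax(m, r, walls | {me, them}, width, height,
                            depth - 2, True), r)
                   for r in neighbours(them, walls | {me}, width, height)
                   if r != m]
        if not replies:
            candidates.append((100, 100, None, None, m))   # they're boxed in
            continue
        guaranteed, predicted = min(replies)     # the reply minimax assumes
        windfall, their_worst = max(replies)     # the reply I'd love to see
        candidates.append((guaranteed, windfall, predicted, their_worst, m))
    if not candidates:
        return None                              # nowhere left to go
    guaranteed, _, predicted_reply, _, best_move = max(candidates,
                                                       key=lambda c: c[0])
    # The gamble: if another move turns the reply I already predict them to
    # make into their worst possible reply, and doesn't give up too much
    # guaranteed score, play that instead of the plain minimax move.
    for g, windfall, _, their_worst, m in candidates:
        if (their_worst == predicted_reply and windfall > guaranteed
                and g >= guaranteed - MARGIN):
            return m
    return best_move
```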
This last trick seemed to work well against some bots and really badly against others - I guess it depends on how similar our evaluations are. If I'd had time I would have stored a hit ratio for the move predictions and used it to decide when such a gamble was worth taking. In the end I commented it out a few minutes before the deadline, on the strength of a very narrow empirical test.