@Learnbot: I'm glad to hear about someone else's results with neural networks. Can you elaborate on your approach to the problem? You mention reinforcement learning, so I gather you haven't trained your network to approximate some sort of "correct" / "desirable" play as I did, but rather let the crude network try its hand at playing the game and used reinforcement to strengthen behaviors which produced favorable outcomes.
What input is given to the network? Is it handled per-ant? Does it handle both foraging / exploring and combat situations? Does the network receive information about the map which isn't strictly visible to individual ants, like how the other ants have traveled, location of enemy hills that are out of sight, et cetera?
An example game where my neural-net combat module produced some nice-looking walling-off and engulfing tactics:
http://aichallenge.org/visualizer.php?g ... 1&user=396 ... I think that if I'd given my ants their entire visual field as input and trained on that, I would have seen more emergent "pincer movement" behavior when enemies are surrounded...
An observation from my side: my neural-network approach is computationally expensive while training the network (offline). Training the net that is playing in the finals used my high-end desktop CPU (one core at a time, unfortunately) for more than 24 hours. Playing, on the other hand, is computationally cheap: a forward pass is only a few thousand float multiplications, which modern computers handle with ease.
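To make the "few thousand multiplications" point concrete, here is a minimal sketch of a small feedforward net's forward pass. This is not my actual network; the layer sizes (25 inputs, say a 5x5 view around an ant, 16 hidden units, 5 move outputs) are hypothetical, chosen only to show how small the per-ant inference cost is.

```python
import math
import random

def forward(x, w1, b1, w2, b2):
    """One forward pass: dense layer with tanh activation, then a linear output layer."""
    h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    return [sum(wi * hi for wi, hi in zip(row, h)) + b
            for row, b in zip(w2, b2)]

# Hypothetical sizes: 25 inputs (e.g. a 5x5 view), 16 hidden, 5 outputs
# (stay + four move directions).
n_in, n_hid, n_out = 25, 16, 5
rng = random.Random(0)
w1 = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
w2 = [[rng.uniform(-1, 1) for _ in range(n_hid)] for _ in range(n_out)]
b2 = [0.0] * n_out

x = [rng.uniform(0.0, 1.0) for _ in range(n_in)]
scores = forward(x, w1, b1, w2, b2)

# Multiplications per ant per turn: 25*16 + 16*5 = 480, well under a
# thousand, so even hundreds of ants per turn stay far inside the time limit.
mults = n_in * n_hid + n_hid * n_out
```

All the cost lives in training, where you run this forward pass (plus weight updates) millions of times; at play time each ant pays only the few hundred multiplications above.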
Regards / Claes