
Ignoring path finding and using collaborative diffusion

Share and discuss ideas for your entries here.

Re: Ignoring path finding and using collaborative diffusion

Postby Equinoxe » Fri Dec 23, 2011 5:43 pm

An MDP value function ranks squares (states), while Q-learning ranks actions (state-action pairs). So two actions that move an ant onto the same square can have different values under Q-learning, but not under an MDP value function.
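A minimal sketch of the distinction (the coordinates, actions, and numbers here are all made up for illustration):

Code: Select all
# A state value function scores squares: one number per square,
# however the ant got there.
V = {(3, 4): 0.7, (3, 5): 0.2}

# Q-learning scores (square, action) pairs. Two actions that both land
# an ant on (3, 4) can still carry different values, e.g. if one of
# them walks through an enemy's attack radius on the way.
Q = {
    ((3, 3), 'E'): 0.9,  # step east onto (3, 4)
    ((2, 4), 'S'): 0.4,  # step south onto the same square (3, 4)
}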

Re: Ignoring path finding and using collaborative diffusion

Postby zaphod » Fri Dec 23, 2011 6:19 pm

Thanks for clearing up the confusion. So what I have used is basically an MDP. But what makes a diffusion approach different from an MDP? Is it that diffusion reduces values geometrically while a simple MDP reduces them arithmetically, or is it some other factor? Right now the two seem very similar to me. Sorry for all the annoying questions; I was probably a bit too lazy to search for it on Google!
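Here is roughly how I picture the two updates side by side, in case it helps clarify the question (a sketch only; DECAY, STEP_COST, and the helper names are made up, not from any starter kit):

Code: Select all
from statistics import mean

DECAY, STEP_COST = 0.5, 1.0  # made-up parameters, purely illustrative

def neighbours(square, grid):
    r, c = square
    return [(r + dr, c + dc)
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if (r + dr, c + dc) in grid]

def diffusion_step(scent, sources, grid):
    # Diffusion: each square becomes a decayed average of its
    # neighbours, so the scent falls off geometrically with distance
    # from a source (food, hill, ...).
    return {s: sources.get(s, DECAY * mean(scent[n] for n in neighbours(s, grid)))
            for s in grid}

def value_iteration_step(value, goals, grid):
    # Deterministic value iteration: each square takes its best
    # neighbour's value minus a fixed step cost, so values fall off
    # arithmetically, like a BFS distance map.
    return {s: goals.get(s, max(value[n] for n in neighbours(s, grid)) - STEP_COST)
            for s in grid}

Iterating either of these to a fixed point gives a field the ants can climb greedily; the difference seems to be only in how the values decay away from the sources.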

Re: Ignoring path finding and using collaborative diffusion

Postby Equinoxe » Fri Dec 23, 2011 6:40 pm


Re: Ignoring path finding and using collaborative diffusion

Postby zaphod » Fri Dec 23, 2011 8:22 pm

Not sure if I have understood it correctly, but it seems like a repulsive scent would stop the ant from moving towards the enemy and make it take a longer path to the food in your example, which might be the better choice. I took the Stanford online AI class and recall a policy function that accounted for a bad guy between the agent and a goal state. So the repulsive scent in the diffusion value looks like a good approximation of such a policy function...
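Something like this is what I have in mind (a sketch; fear is a made-up tuning knob, and food_scent/enemy_scent are assumed to be precomputed diffusion fields):

Code: Select all
def neighbours(square, grid):
    r, c = square
    return [(r + dr, c + dc)
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if (r + dr, c + dc) in grid]

def choose_move(ant, food_scent, enemy_scent, grid, fear=2.0):
    # Greedily climb the combined field: attractive food scent minus a
    # weighted repulsive enemy scent. Raising 'fear' makes the ant
    # accept longer detours around the enemy.
    return max(neighbours(ant, grid),
               key=lambda n: food_scent[n] - fear * enemy_scent[n])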

But again, the enemy seems closer to the food than my ant, so this case might need to be evaluated differently depending on the scenario. If it is a two-player game (which might be difficult to detect in the first place), then I guess attacking the enemy would be a better choice than trying to path to the food around it. A greedy heuristic (without predicting future moves) of denying food to the enemy and preferring a 1 v 1 exchange might look like the better strategy at first. However, it might be a bad idea after all, especially considering that the top bots often prefer to avoid 1 v 1 exchanges!

Re: Ignoring path finding and using collaborative diffusion

Postby Equinoxe » Sat Dec 24, 2011 1:18 am

As far as I remember, in the Stanford AI class the bad guy and the goal appeared in an exercise about choosing good features, not in a policy function, because value-iteration-based planning on deterministic maps always considers only the best neighbor.

And for your second paragraph, I chose a bad example. Replace the food with a B hill and it works fine!
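To spell that out: on a deterministic map the Bellman backup collapses to a single best-neighbour lookup, so a bad guy can only influence the plan through the reward (feature) term. A sketch, with gamma as the usual discount factor, not notation from the class:

Code: Select all
def bellman_backup(V, square, reward, neighbours, gamma=0.9):
    # Deterministic transitions: no expectation over outcomes, just the
    # single best neighbour. The bad guy can only enter through
    # reward(n), e.g. as a penalty feature on dangerous squares.
    return max(reward(n) + gamma * V[n] for n in neighbours(square))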

Re: Ignoring path finding and using collaborative diffusion

Postby zaphod » Sat Dec 24, 2011 3:30 am

Thanks again for the explanations. The long wait is finally over. I just read the post saying GreenTea was placed above xathis within the last two hours. However, xathis still wins (and deserved to win, having dominated the server for two months)!
