Killbots Aftermath
On Thursday the class took part in a double-elimination tournament between the bots we had developed while studying AI. Each round pitted two randomly selected peers against each other, with the winner going on to face the next randomly drawn peer and the loser taking on the next losing peer. We were given a basic interface in which to develop our bot AI, with control over movement, physical stats and events such as shooting. A simple C++ template with bare functionality was provided, which compiles into a dynamically linked library.
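To give a feel for the setup, here is a minimal sketch of what that kind of interface might look like. The names and structure below are my own assumptions for illustration, not the actual template, which exposes its movement, stats and shooting hooks in its own way.

```cpp
// Hypothetical sketch of a bot interface like the one described above.
// The engine would load the compiled library and call these hooks each tick.

struct Vec2 { float x = 0, y = 0; };

struct BotStats {            // physical stats the engine reads at spawn
    int speed     = 5;
    int armour    = 3;
    int firepower = 2;
};

// Commands a bot is allowed to issue during its update.
struct BotCommands {
    Vec2 moveTarget;
    Vec2 fireTarget;
    bool wantsToFire = false;
};

class Bot {
public:
    virtual ~Bot() = default;
    virtual BotStats configure() = 0;                      // choose stats
    virtual BotCommands update(float dt) = 0;              // per-tick AI
    virtual void onEnemySighted(const Vec2& /*pos*/) {}    // vision event
    virtual void onHit() {}                                // damage event
};

// A trivial bot: charge and shoot at the last place an enemy was seen.
class MyBot : public Bot {
    Vec2 lastSeen;
public:
    BotStats configure() override {
        BotStats s;
        s.speed = 7; s.armour = 2; s.firepower = 1;   // fast, lightly armoured build
        return s;
    }
    void onEnemySighted(const Vec2& pos) override { lastSeen = pos; }
    BotCommands update(float) override {
        BotCommands cmd;
        cmd.moveTarget  = lastSeen;
        cmd.fireTarget  = lastSeen;
        cmd.wantsToFire = true;
        return cmd;
    }
};

// Exported factory so the engine can instantiate the bot from the DLL.
extern "C" Bot* createBot() { return new MyBot(); }
```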
Most of us had kept our strategies and bot behaviors fairly quiet during the weeks leading up to the tournament; however, two others and I decided to share our bots with each other a couple of days beforehand for testing purposes. What I found was that, despite having vastly different behaviors, both bots were vulnerable in similar ways. As such, I spent quite some time testing and tweaking the various values I had hard-coded for tracking, prediction and movement, so that my bot could take down both of theirs with an almost perfect kill ratio.
Unfortunately, on the day of the tournament I found that I had severely underestimated my other peers' bots, having not counted on such variance in behavior. This caught me by surprise, particularly since the two bots I had tested against were quite similar in that respect. Because my bot was so finely tuned for those particular opponents, it failed miserably against most of the others. The failure came down to my AI being quite static: much of it was hard-coded, so it could not adapt well to unfamiliar opponents.
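To show what "hard-coded" meant in practice, here is the sort of fixed tuning the old bot relied on. The names and numbers are illustrative, not the actual code, but they capture the problem: every constant encodes an assumption about how the enemy moves.

```cpp
// Illustrative only: constants fitted against two specific opponents.
// Any bot that moves at a different speed or pattern breaks the prediction.

struct Vec2 { float x = 0, y = 0; };

constexpr float ASSUMED_ENEMY_SPEED = 4.0f;   // units per second
constexpr float LEAD_TIME           = 0.35f;  // seconds to aim ahead
constexpr float STRAFE_RADIUS       = 60.0f;  // orbit distance while firing

// Aim point that assumes every enemy moves at the same fixed speed.
Vec2 predictAimPoint(Vec2 enemyPos, Vec2 enemyHeading) {
    return { enemyPos.x + enemyHeading.x * ASSUMED_ENEMY_SPEED * LEAD_TIME,
             enemyPos.y + enemyHeading.y * ASSUMED_ENEMY_SPEED * LEAD_TIME };
}
```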
I have since started rewriting my bot, reusing as much code as I can while stripping out the predefined values its previous behavior relied on. Tracking has been somewhat improved: the prediction system now watches opponents and estimates their movement from what it actually sees rather than from hard-coded values, roughly along the lines of the sketch below. Our second tournament is just over a week away and will differ in that our bots will have to navigate a map, requiring pathfinding rather than the mostly random movement that sufficed in the large open map the first tournament took place in.
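As a rough sketch of the direction the rewrite is heading (assumed names, not the actual code), the tracker keeps a short history of sightings and derives a velocity estimate from them, so the aim point comes from observation rather than a baked-in enemy speed.

```cpp
// Sketch of observation-based prediction: estimate velocity from recent
// sightings instead of assuming a fixed enemy speed.
#include <deque>

struct Vec2 { float x = 0, y = 0; };

class Tracker {
    struct Sighting { Vec2 pos; float time; };
    std::deque<Sighting> history;

public:
    void observe(Vec2 pos, float time) {
        history.push_back({pos, time});
        if (history.size() > 8) history.pop_front();   // keep a short window
    }

    // Velocity estimated from the oldest and newest sighting in the window.
    Vec2 estimatedVelocity() const {
        if (history.size() < 2) return {};
        const Sighting& a = history.front();
        const Sighting& b = history.back();
        float dt = b.time - a.time;
        if (dt <= 0.0f) return {};
        return { (b.pos.x - a.pos.x) / dt, (b.pos.y - a.pos.y) / dt };
    }

    // Lead the target by however long the shot needs to arrive.
    Vec2 predict(float timeAhead) const {
        if (history.empty()) return {};
        Vec2 v = estimatedVelocity();
        Vec2 p = history.back().pos;
        return { p.x + v.x * timeAhead, p.y + v.y * timeAhead };
    }
};
```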