david_a Posted April 22, 2016 The 2016 IEEE Computational Intelligence and Games Conference will include a very interesting AI competition - Visual Doom! Participants will create a Doom bot (hooking into a custom version of ZDoom) that works purely off the same input that human players have. The competition will involve good ol' deathmatch, with the winner having the highest frag count. More details about entering the competition (for you brave souls out there) can be found at the Visual Doom competition page.
printz Posted April 22, 2016 Yipe. This looks cool as hell. All this is happening while I'm sitting here contemplating life and wasting time on the internet.
kuchitsu Posted April 22, 2016 Wow. Hard to believe that the bots will be any good at all, but sure sounds intriguing.
AD_79 Posted April 22, 2016 visual doom page linking to wikia smh cool though
david_a Posted April 22, 2016 AD_79 said: "visual doom page linking to wikia smh cool though" I noticed that. One of us should probably correct them :) I don't know how plugged in to the community the people behind this are, but since they based their custom port on ZDoom and linked to a wiki (even if it's the wrong one), they seem to know a little bit about the game.
scifista42 Posted April 22, 2016 david_a said: "Participants will create a Doom bot (hooking into a custom version of ZDoom) that works purely off the same input that human players have." Has anyone managed to run ViZDoom on their computer (I haven't)? If so, could you confirm whether the automap is available, and whether the IDDT cheat works? I bet the bots would rely on it almost exclusively instead of normal 3D vision.
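For anyone who wants to try it: a minimal sketch of driving ViZDoom from Python, assuming the publicly distributed vizdoom package and its bundled basic.cfg scenario (the exact competition build may differ; older releases named the frame image_buffer rather than screen_buffer):

```python
# Minimal ViZDoom smoke test - a sketch, assuming the public `vizdoom`
# Python package; the 2016 competition build may differ in details.
import random
import vizdoom as vzd

game = vzd.DoomGame()
game.load_config("basic.cfg")   # bundled example scenario (assumed path)
game.init()

# One-hot actions over whichever buttons basic.cfg declares.
actions = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

while not game.is_episode_finished():
    state = game.get_state()
    frame = state.screen_buffer  # raw pixels; `image_buffer` in old releases
    game.make_action(random.choice(actions))

game.close()
```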
Jon Posted April 23, 2016 david_a said: "I noticed that. One of us should probably correct them :)" I've left a note on their github.
printz Posted April 23, 2016 I wish this was for vanilla Doom or PrBoom+, or maybe even Zandronum, but not ZDoom.
sheridan Posted April 23, 2016 scifista42 said: "Has anyone managed to run ViZDoom on their computer (I haven't)? If so, could you confirm whether the automap is available, and whether the IDDT cheat works? I bet the bots would rely on it almost exclusively instead of normal 3D vision." Well, apparently the competition involves deathmatch, so cheats like IDDT will presumably be disabled.
scifista42 Posted April 23, 2016 OK, but what about the plain automap, just for navigating around the map's layout?
esselfortium Posted April 23, 2016 This is cool. scifista42 said: "Has anyone managed to run ViZDoom on their computer (I haven't)? If so, could you confirm whether the automap is available, and whether the IDDT cheat works? I bet the bots would rely on it almost exclusively instead of normal 3D vision." That'd defeat the purpose of the exercise, plus it'd mean not being able to tell anything about where height changes are. Relying on the automap view might make basic navigation easier in the short term, but it'd limit the bot's ability to improve.
scifista42 Posted April 23, 2016 esselfortium said: "That'd defeat the purpose of the exercise" And that's exactly why I'm asking whether the engine makes it possible or not: because if it does, the participants will surely exploit it, if not entirely then at least to some extent.
printz Posted April 23, 2016 I don't see what's cheating here. If you can parse the automap more easily than the player view, just do it. You'll still lose some information, but you can tell the bot to press tab periodically and record the player view. Can you also rely on sounds with this? Or is ViZDoom totally silent?
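If the engine does expose it, polling the map view could look like the sketch below. It leans on an automap buffer (set_automap_buffer_enabled / state.automap_buffer) that later ViZDoom releases provide instead of literally pressing Tab; whether the 2016 competition build had anything like this is exactly the open question:

```python
# Hedged sketch: grab a top-down automap image alongside the player view.
# The automap-buffer API exists in later ViZDoom releases; the 2016
# competition build may not have exposed it at all.
import vizdoom as vzd

game = vzd.DoomGame()
game.load_config("basic.cfg")                  # assumed example scenario
game.set_automap_buffer_enabled(True)          # top-down render per frame
game.set_automap_mode(vzd.AutomapMode.NORMAL)  # no IDDT-style full reveal
game.init()

while not game.is_episode_finished():
    state = game.get_state()
    view = state.screen_buffer      # what the player sees
    layout = state.automap_buffer   # top-down map image for navigation
    # ...feed both images to the bot's policy here...
    game.make_action([0, 0, 0])

game.close()
```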
kuchitsu Posted April 23, 2016 Uhh, if they are going to use the automap then it's not interesting at all... I want to see them analyze a mess of pixels at 320x200 resolution and try to get some useful info from it. I hope they clarify this.
VGA Posted April 23, 2016 This reminds me of that Starcraft 2 bot that someone created. It actually analysed the screen for information and nothing else, if I understood it correctly. Anyway, I want to see more info as this progresses. It seems so hard to me to just rely on the player's viewport. Using the automap too would help with navigating. It should be allowed IMO.
Linguica Posted April 23, 2016 Apparently they are allowed to use a depth buffer as well, which seems to defeat the purpose?
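For reference, the depth data is handed over directly rather than inferred; a sketch of requesting it, using setter and field names from later ViZDoom releases (the 2016 build may have differed):

```python
# Sketch: asking ViZDoom for the per-pixel depth buffer. These names
# exist in later releases; the competition build may have differed.
import vizdoom as vzd

game = vzd.DoomGame()
game.load_config("basic.cfg")         # assumed example scenario
game.set_depth_buffer_enabled(True)   # depth delivered alongside the frame
game.init()

state = game.get_state()
depth = state.depth_buffer            # 2D array, one depth value per pixel
```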
andrewj Posted April 24, 2016 Linguica said: "Apparently they are allowed to use a depth buffer as well, which seems to defeat the purpose?" Yeah, a big part of the challenge would be figuring out how close a nearby wall or object is. Using the depth buffer from the rendered scene is a massive cop-out. It makes it much easier to figure out what is a wall (same depth going down), what is floor or ceiling (same depth going across), and what is an object (same depth going down and across).
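As a rough illustration of that heuristic (my sketch, not anything from the competition), pixels from a depth buffer can be classified by checking where the depth stays constant vertically versus horizontally, assuming the buffer arrives as a 2D numpy array:

```python
# Sketch of the heuristic above: label depth-buffer pixels by the
# direction in which depth stays constant. Assumes `depth` is a 2D
# numpy array such as state.depth_buffer from a later ViZDoom release.
import numpy as np

def classify_depth(depth, eps=1.0):
    d = depth.astype(np.float32)
    dy = np.abs(np.diff(d, axis=0))[:, :-1]  # change toward the pixel below
    dx = np.abs(np.diff(d, axis=1))[:-1, :]  # change toward the pixel right

    flat_down = dy < eps     # same depth going down
    flat_across = dx < eps   # same depth going across

    labels = np.full(flat_down.shape, "unknown", dtype=object)
    labels[flat_down & ~flat_across] = "wall"           # vertical surface
    labels[flat_across & ~flat_down] = "floor/ceiling"  # horizontal surface
    labels[flat_down & flat_across] = "object"          # sprite at one depth
    return labels
```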
Linguica Posted April 24, 2016 kuchitsu said: "depth buffer = ?" http://vizdoom.cs.put.edu.pl/user/pages/01.home/depthbuffer.png
andrewj Posted April 25, 2016 Jon said: "I guess that implies GL-only too." I guess so, though Quake's software renderer uses a depth buffer.
printz Posted April 25, 2016 I was able to approximate a depth buffer in Wolf4SDL by calculating the pixel color change density per area. I took advantage of the pixellated graphics. However, this is not effective at low resolutions, and can be fooled by textureless areas.
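A sketch of that idea, assuming the frame is a 2D array of palette indices: close-up surfaces show magnified texels (few colour changes per block), while distant surfaces pack more texture detail into the same area, so the local change density works as a crude relative-depth proxy:

```python
# Sketch of the colour-change-density trick described above. Assumes
# `frame` is a 2D uint8 array of palette indices (e.g. a Wolf4SDL or
# software-rendered Doom frame). Higher density ~ farther surface.
import numpy as np

def depth_proxy(frame, block=8):
    h = frame.shape[0] - frame.shape[0] % block
    w = frame.shape[1] - frame.shape[1] % block
    f = frame[:h, :w].astype(np.int16)

    # 1 wherever the colour differs from the pixel to the right.
    changes = (np.diff(f, axis=1) != 0).astype(np.float32)
    changes = np.pad(changes, ((0, 0), (0, 1)))  # restore full width

    # Mean change density per block, as a coarse relative-depth map.
    blocks = changes.reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3))
```

As the post says, this breaks down on textureless areas, where zero density reads as "very close" no matter the real distance.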
RestlessRodent Posted April 25, 2016 Does prize money constitute commercial usage? This would probably have been more interesting if it were, for example, Chocolate Doom playing with just the screen buffer and sound channels, compared to, say, an OpenGL port with a depth buffer.
boris Posted April 25, 2016 Why the hate for the depth buffer? Isn't that basically what other real-world systems use? Like autonomous cars using Lidar.
scifista42 Posted April 25, 2016 boris said: "Why the hate for the depth buffer?" Because the competition's own premise is: "Participants will create a Doom bot that works purely off the same input that human players have. Can AI effectively play Doom using only raw visual input?"
boris Posted April 25, 2016 But the human brain has a built-in depth buffer, too, so to speak. Giving this information to the program is a more realistic scenario IMO.
Linguica Posted April 25, 2016 boris said: "But the human brain has a built-in depth buffer, too, so to speak. Giving this information to the program is a more realistic scenario IMO." Oh come on, monitors display a 2D image. If the point is to research a computer program that can reconstruct a 3D understanding from a 2D image, then giving access to a depth buffer is totally missing the point. If the point is to navigate an environment using a depth buffer as pseudo-lidar, then why put any emphasis on a comparison to human play?
scifista42 Posted April 25, 2016 Linguica said: "If the point is to navigate an environment using a depth buffer as pseudo-lidar, then why put any emphasis on a comparison to human play?" To be fair, the bots are going to suck even with access to the depth buffer, so the comparison to human play would be interesting and worth researching/showcasing even then. But it still contradicts the premise the challenge was proposed with: "only raw visual input".
VGA Posted April 25, 2016 Yes, when put like that, the bots should have access only to the displayed frame and the sounds being played. And they should use the player's available controls in any way they see fit - opening the automap every few milliseconds, for example. But it's still very interesting. I want to see what progress will be made in these short few months. Maybe printz can get some ideas or code for his AutoDoom project (although it is different).
Katamori Posted April 26, 2016 kuchitsu said: "Wow. Hard to believe that the bots will be any good at all, but sure sounds intriguing." You vastly underestimate the power of these systems.