AirRaid

Bloom! HDR! Buzzwords! Blah!


As some of you are probably aware, I'm a reasonably avid graphics fanatic. I generally keep up with the latest technologies and stuff, and I get excited about pretty new tech demos. Yes, I'm a geek.

Recently however, I've had an increasingly large bone to pick with the latest batch of rendering "features" that have seen the light of day. I'm talking about such things as (have you guessed yet?) Bloom and HDR.
Let's take bloom. Lovely bloom. Which everyone and his grandmother appear to have leapt upon as the latest thing to include to make their lighting 'more realistic!' Er. Wait. Realism. Making it blurry round the edges? ...'Kay. It's supposed to simulate the glare you get when you look at a bright light. However it fails miserably in that respect, and all you get is some blurred shit around the light, which invariably makes a standard fluorescent light look like a lightsaber. Mhmm. Very realistic. Or in an outdoor scenario, you get crappy blurry edges around anything that creates a horizon with the sky. The only way you'd get that in real life is if you overexposed a picture with a camera.

Which brings me to HDR... ugh. It's worse than bloom, really. Let's take Valve's description of what it does (paraphrasing, since I can't find the exact quote): "High Dynamic Range lighting allows the game to render bright and dark areas as if they were being seen through the player's eyes; for instance, when moving from a dark area to a very bright area the player would be temporarily blinded while their eyes adjust." Now, that's sort of true. If you go outside and it's sunny you do get a bit blinded. Thing is, it's impossible to render overbrightness on a computer screen. All you can do is draw white. This does NOT WORK. You don't entirely lose your sight when you look at the damn sky. The effect is far too exaggerated, and the only thing it succeeds in doing is annoying the player because he can't see what the fuck is shooting him.

To summarize, all the graphical effects I've seen recently that claim to improve realism just don't. Look around you: as long as your eyesight is good, everything is sharp, focused. If you look at a lightbulb you don't see a fancy glow, you see a bright bit and a not-so-bright bit around it. All these fancy shenanigans do is manage to blur shit into oblivion so you can't see a damn thing. Yet somehow it's supposed to be more real.

Anyway you're probably bored of this already. I just felt like ranting. Comments?


I don't get bloom. I mean, sure it looks cool, but you never see that in real life. Honestly, I think everyone's latching onto that stuff because we're running out of "cool" things to do to improve graphics. Hi-res textures, lotsa polygons? Yeah, got those. Realistic shadows? Check. Ragdoll physics? Not perfect, but we have it. Apparently every new game needs new technology to keep people interested. Frankly, I'd like to see people perfect the technology we've got. I'm sick of the whole "games as technology demos" mentality. I'm also sick of games with Hollywood budgets. It's killing the gaming industry. Oh, and for the love of God, somebody kill plot as an essential game element. I like to know why I'm doing what I'm doing, I guess, but games heavy in cinematics and cutscenes have no replayability value. And when plot is a crucial element to a game, once you know how the story ends, it's kinda pointless to go back and play again.

And the gameplay should be fun. Don't make playing the game feel like a chore, like so many RPGs. And games need decent AI. Not for all monsters. It's quite enjoyable to mow down legions of the undead as they walk right into your crosshairs. But some enemies should be smarter. Like going for cover and whatnot. And not scripted! Scripted AI sucks. Once you figure out the pattern, it's all over, in terms of the enemy being a challenge. Sure, scripting works the first time through, but it becomes too easy.

And I just totally went off on a rant of my own.


Couldn't agree more.

Now, I like pretty graphics like most people. Nothing gives me a boner like subtle bumpmapping, high-res textures and insane polycounts on a virtual opponent getting blown through a window, complete with realistic ragdoll and breaking-glass physics. Preferably in slow motion as well. That's the reason I bought myself a $3500 computer 6 months ago.

However, like you said, these new "effects" that everyone and their mother seems to come up with really make no sense to me either. The first time I played Brothers in Arms: Road to Hill 30, I wondered WTF was wrong with my configuration. That was, until I found out that the shit on my screen was not due to faulty drivers; someone had thought up the brilliant idea of smearing digital vaseline all over the place to make the game look better!

As far as HDR goes, I can't say I remember having experienced it. Guess I'll have to install Half-Life 2 again then :)

Another favorite of mine is "lens flares", which make as much sense as middle-aged men dressing up in Japanese schoolgirl uniforms.


Nothing gives me a boner like a perfect composition of pixels in the lowest resolution possible at which the essential information about the depicted object can be conveyed.


See, I have no idea why, in the grand scheme of more realistic graphics, no steps have been taken to simulate motion blur. It's one of the few things that we ACTUALLY EXPERIENCE through our vision-- one of the few things that would make ANYTHING look real no matter what you were rendering-- and it seems still to be only a diversion for tinkering tech demo devs.

The upcoming PS2 game Shadow of the Colossus reportedly has fantastic motion blur, though-- we'll see next month.

Numbermind said:

no steps have been taken to simulate motion blur

There are plenty of upcoming games with motion blur, and non-real-time graphics has had it for a long time. So how can no steps have been taken?


I think Burnout 3 is an excellent example of a game which used motion blur to good effect, to simulate the more insane speeds you could gain.

Looked great too, and it didn't distract your attention from the gameplay.


Alright.

HDR is a programmatic feature more than a visually flashy thing for you to pick on. HDR is, above anything else, simply an environment or mode where you are allowed more bits per channel, be it the standard RGB components that make up the color information of a single pixel (not all there is to describing a pixel these days, but all that's visually explicit to the user), or (perhaps not implemented right now, but possibly in the future) the alpha channel (the translucency of the RGB color) or the X channel (unused memory space, potentially useful for very low-level coding).

Take a standard true-color image; don't even worry about the embedded alpha channel (if any). Each pixel is made up of three color components, each one with 8 bits of precision. This means, for anyone who has done some combinatorics, 256 different states for each channel, and a total of 16,777,216 possible colors. While this seems like a lot, and it generally is, it is nowhere near enough to correctly simulate the following scenario:

Consider that you are a visual effects and rendering pipeline coder, and that your main designers wish you to add specular bloom to the engine. Specular bloom is a visual effect where overpowered, ultra-bright sources of light 'bleed' beyond the boundary set by their profile from your point of view. Rampant photons scatter wildly and fall in the vicinity of the affected photosensitive cells of your eye, exciting even more cells around those that would normally "define" the shape, color and texture of the object in question.

Anyway, your designers want bloom coming from the sun, the torches in a dungeon, the reflection of the torches in the blade of the sword, the cold cruel full moon, and the reflection of the moon on a sunken, flooded tile in the streets of a rainy fantastical city.

So how do you normally do bloom? Well, pretend that you scan your image looking for "bright" pixels. You know colors can be described as RGB combinations: red, green and blue. Another way to represent them is HLS: Hue, Luminance, Saturation. Hue means where the color is located in the gradient spectrum (like a rainbow), Luminance tells you how close to white or black it is, and Saturation dictates how close to the pure hue or to gray the color is. So let's pretend that you have a fancy way of looking for high luminance values on your screen, say in the 240-255 range. Seems alright. You grab these pixels and store them on top of a black background image that you keep in memory, you perform a blurring algorithm over it, in order to extend the colored pixels beyond their original "shapes" and create a nice, diffuse look, and finally, you overlay this image on top of the original in an additive fashion, thereby "lighting up" the surroundings (and insides) of these "highly luminous" spots of the world.
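In code, the whole naive pipeline is roughly this (a quick Python/NumPy mock-up for illustration only; the 240 cutoff, the crude box blur and the Rec. 601 luma weights are arbitrary choices here, not any shipping engine's implementation):

import numpy as np

def bloom(rgb, threshold=240, blur_passes=3):
    # Work in float so the additive pass can't wrap around.
    img = rgb.astype(np.float32)
    # Per-pixel luminance (Rec. 601 weights).
    luma = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    # Keep only the "bright" pixels, on a black background.
    bright = np.where(luma[..., None] >= threshold, img, 0.0)
    # Crude blur, repeated so the glow spreads past the original shapes.
    for _ in range(blur_passes):
        bright = (bright
                  + np.roll(bright, 1, axis=0) + np.roll(bright, -1, axis=0)
                  + np.roll(bright, 1, axis=1) + np.roll(bright, -1, axis=1)) / 5.0
    # Additive composite, clipped back into the displayable 0-255 range.
    return np.clip(img + bright, 0, 255).astype(np.uint8)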

And that's when things get stupid. Your stars, which the designers obviously made as a texture with dots ranging from luminances of 90 to 255, now look all milky thanks to bloom. The teeth of that NPC you meet at the beginning glow like a motherfucking monster's; those were done with extremely light shades of grey in the, oops, 240-246 luminance range. Some guy decides to put a really obnoxious green light in the middle of a dark, damp forest? Everything glows now, instead of being pale and dark and confusing to look at.

Why doesn't this happen in real life?

Because luminance is described in much bigger terms. If, say, this computer screen is as bright as 25 zaldrons (a completely made-up unit for the example), the sun will be 60,000 zaldrons.

In a 24/32-bit (A)RGB mode, if you had the sun as a circle of white (RGB: 255,255,255), the screen, something 2400 times paler, would have to be described by (RGB: 255/2400, 255/2400, 255/2400). Of course, that only leaves us a bunch of zeros: absolute black.

But hey! I can kind of see a screen and the sun in the same place; sure, it will be kind of hard/annoying, but I can see something.

That's because your brain is smart. Your brain works out the darkest luminance you can see, the brightest, and how everything maps in between. It always strives to let you see as much as possible, using the whole range of "colours" you recognize. If a street is lit up with orange lights, your brain will translate these bright oranges to something closer to white, so that you can see more colours. If there are only very pale lights, your iris expands, lets in more photons, and you go into night vision, relying more on absolute luminance than on combinations of colors. If you go outside into the blazing, scorching desert sun, your brain and eyes will adjust until you see the deep blue of the sky and can distinguish the cracks in the baked earth.

If we had HDR, and we could describe many kinds of luminance for whites, such as a sun with a value of 300,000,000, stars with something like 100,000, and white paper at 45, then we could easily avoid rampant blooming, because only things beyond a specific limit would need to be bloomed.

As a coder, you would have to devise a simple yet useful eye-like behaviour. If it's night and there are only very minimal lights (the light of the moon, about 1/30 of the sun's; the stars, about 1/2000 of the sun's), you would have to devise a method such that the highest luminance available (the moon) maps to a white pixel on the screen, and everything in between becomes some blueish/brownish shade of its former color (to emulate our strange perception of clear/overcast nights). Things that will bloom? Anything in the realm of the moon's luminance. The mage casts the spell Light from the tip of his scepter and makes pixels with luminance values like 30,000 when the moon is something like 5,000. You'll be fucking blinded, and you know why? Because suddenly you have to remap your eye such that 30,000 means white, and everything else, like the moon at 5,000 and the darkened plains at something like 80, falls to very, very dark shades.
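That eye-like remapping is simple enough to sketch (using the made-up zaldron-scale numbers from above; this is a bare-bones linear exposure, nothing like a production tone mapper):

import numpy as np

def expose(scene_luminance, white_point):
    # Whatever the brightest relevant source is maps to display white;
    # everything dimmer scales down linearly, everything brighter clips.
    return np.clip(scene_luminance / white_point * 255.0, 0, 255).astype(np.uint8)

night = np.array([5000.0, 80.0])         # moon, darkened plains (made-up units)
print(expose(night, white_point=5000))   # [255, 4]  -> moonlit scene, readable
print(expose(night, white_point=30000))  # [42, 0]   -> Light spell: you're blinded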

A monitor can only display roughly three hundred different levels of brightness/intensity, a tad over the 256 possible luminance values coming straight out of the video card's output, and that's only because you have such things as brightness and contrast adjustments.

So no, until someone solves the problem of coming up with a cost-effective, non-damaging, power-friendly, whole new kind of display device that allows for ultra-bright dots, HDR will not fully flesh out as a visual feature by itself.

Another use for this thing is when you try to pile up pixel shader effects on top of each other.

Suppose, for whatever reason, the final pixels of your screen will be dictated by this equation:

pixel_screen(x,y) = pixel_source(x,y)*0.6

That would make it 60% of what it used to be, and 40% darker, right?
What if we did this:

pixel_screen(x,y) = pixel_source(x,y)*0.01

That would be even much darker, but suppose this is only the first part, that now something else affects the outcome of these pixels:

pixel_temp(x,y) = pixel_source(x,y)*0.01
pixel_screen(x,y) = pixel_temp(x,y)*100

That should put it back where it started, right? I mean, it's 1/100 and then multiplied by 100. If only. See, this is what happens for some randomly picked colors.

for a dark red:

(127, 0, 0) * 0.01 = (127*0.01, 0*0.01, 0*0.01)

Ideally that's (1.27, 0, 0), but because these are integers (0 to 255), the value has to be rounded. In this case: (1, 0, 0).

So our dark red becomes extremely dark red; in fact, the darkest red possible. But is this alright? After all, if we multiply by 100, we'll get 100, and not 127. That is about four shades of red wrong, shades perceptible by you and me. 128 will suffer the same fate, and 129, 130, all of them up to 151, where finally one gets processed differently (if the conversion rounds to nearest, rather than always truncating down to the whole number below). This is when you get the so-called banding effect, where many source values that should map to different output values return the same value, because you do not have enough bits to describe the intermediate steps.
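You can watch this happen in a few lines of NumPy (hypothetical values, same divide-by-100-then-multiply-by-100 round trip as above):

import numpy as np

src = np.arange(100, 200)              # 100 distinct red values, 100..199
tmp = (src * 0.01).astype(np.uint8)    # darken in 8-bit: everything truncates to 1
out = tmp.astype(np.int32) * 100       # brighten again: every value becomes 100
print(np.unique(out))                  # [100] -> one flat band where a ramp was

# The same round trip with enough precision survives intact:
out_f = np.round(src * 0.01 * 100).astype(np.int32)
print(np.unique(out_f).size)           # 100 -> all the original shades preserved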

In modern engines, for example, coders have to do fancy things if they want normal mapping and specular at the same time on a surface, simply because basing the specular on top of the normal mapped results can lead to lots of data loss.

If you simply had more bits per channel, you could discard this ancient capped-integer model and use fractional numbers ranging from 0 to 1 (from none of this colour channel to all of it). Depending on how many extra bits you're working with, it could mean the difference between having numbers such as 0.3457211 and 0.34572113462 to work with. For now that can seem like overkill, but hey, in ten years' time each pixel will be described with hundreds of different operations (additions, divisions, multiplications, etc.).


I thought this was going to be about Harold Bloom, the literary critic.

The fact that they are selling something as realistic is irrelevant unless you're one of those people who look for "realism" all over the place; something so common it's appalling, especially when you consider the makeup of most entertainment that has to have that bent on being "something that could have happened", and thus always ends up being a cardboard-like waste of whatever it's supposed to be, if only because of how hard it tries to pretend to be something, as opposed to being something in itself.

Does the game play well? If it does, great.
Does it look good? If it does, cool.
Does it look realistic? If it does, I probably have issues distinguishing reality from a game.

Designers make effects to distinguish their games; realistic compliance is a secondary element that is always there since all these games have an emulating quality; but the main focus is to create a distinctive (hopefully catchy) effect to give the game character.

Perhaps these games aren't any good, so people start nitpicking totally retarded things like their level of realism in order to judge them, or perhaps some people are just bored.


Good post Zaldron. Just to illustrate a little bit:
Below is a depth of field mask derived from a 32-bit floating point channel embedded in a rendered image. These 32-bit floating point images contain more information than can usually be displayed on the screen, but here is the resulting mask when mapped to its Black/White points. It contains more than enough data to do very accurate Depth of Field effects.

[image: the DoF mask rendered from the 32-bit floating point channel]

Now, here we have the same DoF mask, also mapped to its B/W points, but instead of deriving it from 32-bit floating point data, the source has been converted to 8-bit integer. The result is loss of data: you can see a lot of banding in the foreground, and some of the information in the background has been lost altogether.

[image: the same DoF mask from the 8-bit integer source, with visible banding]

Effects like HDR rendering, motion blur, depth of field, and lots of types of filtering can take advantage of the extra data. I have yet to see HDR rendering in action, but there are HDR monitors coming out, which should make it actually worthwhile. As for bloom, I think it's misused 99% of the time and looks freaking horrible (the Unreal Engine 3 stuff, for example).
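For what it's worth, the "mapped to its B/W points" step is just a min/max stretch of the channel, which makes the precision loss easy to demonstrate (a hypothetical NumPy sketch with fabricated depth values, not the actual pipeline behind the images above):

import numpy as np

def remap_to_bw_points(channel):
    # Stretch the channel so its minimum maps to black and its maximum to white.
    lo, hi = channel.min(), channel.max()
    return (channel - lo) / (hi - lo)

depth = np.linspace(0.0, 1.0, 4096, dtype=np.float32) ** 4   # fake depth falloff
mask_float = remap_to_bw_points(depth)                  # from float data
mask_8bit = remap_to_bw_points(np.round(depth * 255))   # via an 8-bit source
print(np.unique(mask_float).size)   # 4096 -> smooth gradient, accurate DoF
print(np.unique(mask_8bit).size)    # at most 256 -> visible banding steps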


The reason I hate bloom, HDR, and any other technical effect that is popular at the moment is because everybody talks about them like they will make games better. Okay, if that were the case, would we still be playing the same FPS game for the umpteenth time? I'd rather we focus on gameplay and see how far we can push it long before we even consider making everything glow and smear and appear fuzzy for the sake of "realism". Same goes for bump maps, unified lighting, and any other buzzword the industry will throw at us in the hope that we will forget how average and mediocre games are and will continue to be.

And besides, graphics aren't the only way to represent realism. We hardly have any games that can even come close to mimicking life's complexity beyond how it appears, and the very few that have tried to simulate life as we know it have been dubbed "extreme survival sims" or "pretentious" and then forgotten about. You don't even need the best graphics to represent something as complex as life, anyway! The brain has this beautiful thing called an imagination that fills in the cracks where our perception stops. You don't need to keep making graphics as real as possible if our brain and eyes are okay with a 2-bit image that conveys just enough information to be immersive. If I want to see a real bird, that's what reality is for. When I want to escape reality, I would like it to at least appear not quite real.

Addendum: Frankly, if the industry is going to be obsessed with every graphical gimmick out there, I'd like to see what would happen if we tried to push text-mode graphics to their very limit. I can just see it now: Text-Based Video Cards, just for the latest Text-Crunching FPS game!

Numbermind said:

See, I have no idea why, in the grand scheme of more realistic graphics, no steps have been taken to simulate motion blur. It's one of the few things that we ACTUALLY EXPERIENCE through our vision-- one of the few things that would make ANYTHING look real no matter what you were rendering-- and it seems still to be only a diversion for tinkering tech demo devs.

http://projectoffset.com/downloads.html

Post-processing over the entire world. It still has bloom and HDR, but it looks like they've been used more diligently here.

Zaldron said:

Alright.
<awesome post>


Well, see, that's exactly what I'm talking about, really. I'm not having a go at the actual methods such as HDR and bloom; I'm complaining that designers seem to use them everywhere when, as explained by your post, it's virtually impossible for them to look right, simply because of the hardware they're being displayed on. The result is a bunch of blurry white crap. :P

Fiend said:

Yes, bloom is pretty useless. But look what I did with it one time in Transfusion (using DarkPlaces for Quake).
I have yet to be able to recreate this:

http://members.cox.net/alarrivee4/tfonweed.jpg


That is the most horrid screenshot I've seen in a long time. 'Hey let's make everything seem blurry and out of focus yet still manage to draw jaggies!'


Search for HL2 Lost Coast. You'll find screens and a video. It's one of those things you kinda need to see in motion.

Snarboo said:

I'd rather we focus on gameplay and see how far we can push it long before we even consider making everything glow and smear and appear fuzzy for the sake of "realism". Same goes for bump maps, unified lighting, and any other buzzword the industry will throw at us in the hope that we will forget how average and mediocre games are and will continue to be.

I am an extremely jaded gamer but I see these features as perfectly valid doors to improved gameplay.

Unified lighting systems, as proven by Thief 3 and the upcoming Oblivion, are ALL about gameplay, really, providing you with clear and extremely intuitive environments and shadow volumes: where to hide and where you'll be caught. You understand the concept of lights affecting all items in real time from real life, and as such it maps exceedingly well into a game, for the sake of both suspension of disbelief and sheer gameplay.

HDR would allow us to make continuous worlds that, lighting-wise, map right for us. These days, if we had a cave with torches inside and a clear midday outside, the sun would be strong, in the realm of 250 to 255, whereas the torches would be about 170. So if we looked into the cave from outside, under the scorching sun, we could still see inside perfectly well, and that is extremely wrong. It should be extremely dim, as it would be with floating-point intensity values of, say, 0.99 for the sun and 0.025 for the torches.

Tactical FPSs, as an example, could benefit from sun glare, rushing from/into exteriors to/from interiors, real flashbang grenades, etc.

Bump maps help us better understand the direction of lighting, thereby providing more suspension of disbelief. When, in older engines, we bake lighting information into textures and scatter them in random directions, the sense of spatial volume and surface orientation is lost. Picture a concrete texture with an embossed-out rectangle in the middle with a cross, suitable for drawing cathedrals. If you also had to make a satanic shrine and wanted upside-down crosses everywhere, then by rotating this image 180 degrees and putting it next to the other one, this one would look embossed in rather than out, and that will instantly break your perception of the environment. If you can judge the number of lights, the size, the overall shape and the materials involved effectively, you can better tackle a situation. For now it's still up to the developers to make your visibility matter, to make the sound propagation of your movements matter (given the materials, size and clutter of the room), and the potential ricocheting/shrapnel tendency of the surfaces you grenade/shoot and miss, but when we do have those gameplay elements, it's images and sounds that will help us be prepared; sight and sound are the keys to our visceral, hunting mechanisms, and as such we have to pay special attention to them. Even our inductive/deductive reasoning largely depends on what we see and how it changes when we step to the left or the right, or get a closer look.

Specular maps, for example, give us an absolutely easy way to tell what is matte and what is shiny; what is metal, what is polished wood, and what is wet stone. I generally tend to dislike the simple specularity we have these days, but that is more attributable to lowest-common-denominator hardware than to any failure on the vendors' part to get nice effects going.

You have this notion that the brain fills in the cracks, but to tell the truth, that only goes so far, because eventually the brain starts to -notice- the cracks. If I had, for example, the most advanced and amazing text-to-speech parser in the world, plus an incredibly sophisticated AI that can translate any input of mine into concepts it understands, but the engine had no facial animation features, your brain would go "wtf, these people are like dummies with speakers hidden somewhere under their clothes".

If you had facial animation on, but did not have nice pixel shader 3.0 skin shaders on their faces, your brain would go "wtf, this guy is dead and someone prepared him for the funeral with orange makeup".

As we move towards more advanced gameplay mechanics, immersive stories, emergent gameplay, drama, and start expecting fairly good common sense or rational deduction from all characters and creatures, our brain will begin fretting about the jaggedness, ugliness and wrongness of an otherwise perfectly acceptable and real world. We'll really care about the characters, really, really pay attention to them. They will behave scarily realistically compared to what we have right now, and it'll be sheer fun in the coming decades when this gets implemented. And because everything will so immediately make more sense to us from the beginning, a lack of proper visual information will impair our ability to respond to the gameworld.

The industry is, for the large part, not obsessed with gimmicks. I'm sorry, but you have to be realistic: companies don't have the time or the resources to add random crap just because. Something might be added because it's fairly easy to implement, doesn't detract from the game's core, and is largely supported by hardware, sure, but you have to understand that game development is as much a science as an art, and it takes several iterations of "poking" or "scratching the surface" of an effect before it can be fully incorporated into gameplay mechanics. Sometimes you have to play the consumer catch-up waiting game, sometimes you have to wait for better APIs or implementations from software/hardware vendors, and sometimes it's a matter of reading the right papers and knowing how to translate them into customizable, efficient code. There is much research from the last decades left to implement, and that means fairly complex maths and a strong sense of abstraction.


AirRaid said:
That is the most horrid screenshot I've seen in a long time. 'Hey let's make everything seem blurry and out of focus yet still manage to draw jaggies!'

It reminds me of how you see things when you just wake up, or perhaps walk out of a dark room.

AirRaid said:

Search for HL2 Lost Coast. You'll find screens and a video. It's one of those things you kinda need to see in motion.


Well I did play+beat HL2 a while ago.

BlackFish said:

Well I did play+beat HL2 a while ago.


Well, you haven't played Lost Coast, have you?

BlackFish said:

I need to reinstall HL2. :o

Lost Coast is an add-on map for HL2 that Valve is working on. Its main purpose is showing off some new graphical effects, but it isn't released yet.

I don't have the latest Nvidia card, so I can't really comment on the looks of HDR in Far Cry and such.
But I don't understand what everyone is complaining about; it's not like you're being forced to run the game with HDR. If you don't like it, switch it off.


Graphics in general are focused on far too much. I'd prefer that the source of a game's visual appeal be well-drawn textures and detailed models rather than crazy post-processing effects applied to mask the fact that the models really suck.

To this day, I think that Medal of Honor: Allied Assault looks far better than any World War 2 game released afterwards, including Call of Duty, Brothers in Arms, even the screenshots for Call of Duty 2. The graphics have a sharp, well-defined look to them, and since the game doesn't waste your CPU cycles on pixel shading, bloom effects and such, you can afford to turn up the screen resolution and really enjoy it. The development team really did a good job on the models and animations.

If we could convince the companies that be to focus more on believable environments, objects and characters rather than the next big thing in technology, we'd probably see a greater flow of good games. But fat chance of that happening - there's always somebody looking to push the envelope.

I shudder to think how much the next generation of games is going to piledrive my computer into next week.


In response to Zaldron's Post:
I didn't read all of your post, so I could be mistaken, but it seems like you're confusing immersion with gameplay. You bring up unified lighting as not only being something pretty to look at, but also a great way to enhance gameplay, which I agree with, but only if it's used properly and not just as a graphical gimmick or enhancement. For example, you bring up Thief 3 as a good example of how unified/dynamic lighting enhances gameplay. However, bringing up Thief 3 is somewhat moot, especially if you've played the first two games, which used a more primitive lighting model and still pulled off the stealth aspect just fine, showing that while lighting can enhance gameplay, you don't need the latest, most advanced rendering method to do it. Sure, the first Thief games had dynamic lighting to an extent, but nowhere near as complex or good-looking as Thief 3's lighting system, and they still played fine. That pretty much proves that a technology that renders shadows better doesn't enhance gameplay one iota, especially when the previous games in the series managed to pull off the same gameplay tricks without better graphics. The only thing Thief 3 does better than the previous games is look better, letting you feel a little more immersed, or like you are really hiding in the shadows, if the old Thief games turned you off.

You also mention how things like bump mapping and specularity can make something appear more realistic, which somehow makes a game play better. However, bump mapping and specularity are just more advanced forms of shading. Games in the past didn't need bump mapping to fool you into thinking that a texture had depth or was a grate or window or floor panel or whatever. Instead, they used tried and true shading methods on the textures themselves, which still work well for making a person believe something has actual depth or detail, rather than have the engine elevate the texture in game with some rendering process. Same goes for specularity. Why does making a texture, say metal, appear more shiny or realistic enhance gameplay? It doesn't, because knowing that you are probably looking at metal doesn't make the game play better. Specularity would actually be pointless to use if you were making a game with a sort of muted, watercoloresque theme or artistic style to it, where things would not shine or appear sharp, but smear together and be very dim. All effects like these do is make the game appear more realistic, and for people who need that realism, it enhances the immersion factor, which is important when you're playing a game.

However, anybody who has played a game like chess and checkers can tell you that it would be futile to enhance the game by making it look prettier when the gameplay is already solid. The only thing a graphical representation of chess needs to do to allow the player to get into the game and be immersed is to have a way of distinguishing between a pawn and a knight and so forth, and also give the pieces the right moves. That is what I mean when I say that making a game look prettier doesn't necessarily make it more realistic or even enhance the gameplay. There are other ways to simulate realism other than eye candy, which can only be appreciated by people with sight anyway. A blind person's life is just as real and impressive and complex as someone who can see, so don't tell me making a game appear more realistic makes it play better or even makes it realistic.

Everybody talks about how graphics are pushing the envelope of realism when we're still playing games like Half-Life 2 that play like a 7-year-old game that was popular and new and fresh in 1998 but really hasn't brought anything new to the table now. If you're going to make the graphics appear more realistic, why not go all out and make the game more realistic, or at least play differently than what we have been playing for at least 10 years now? If you're going to be making a game that plays like one from 10 years ago, why even bother with better graphics when you could just use the graphics of yesteryear? Until a game designer can actually make good use of more realistic rendering methods, I'm going to look at all this graphical bullshit with disdain.

Also, just because a game's level of graphical realism doesn't match the realism of its gameplay doesn't make it bad or jarring or even unable to immerse a player or seem believable. Why should a game with realistic gameplay have to have realistic graphics, just so they both match? That doesn't make any sense to me, I'm sorry. That's like saying the words or font on a page of a book has to look better if the book is realistic or whatever.

However, please don't interpret my tirade as saying that games shouldn't strive to be more realistic or appear more realistic. Not at all, it just irks me when people can honestly confuse how a game plays with how a game looks. That must explain why we are still playing games that play like one that came out 10 years ago but looks better.

Snarboo said:

In response to Zaldron's Post:
I didn't read all of your post, so I could be mistaken, but it seems like you're confusing immersion with gameplay.

I see them as the same, basically. As long as the input I give the game produces output that matches what I expect to happen (not in a predictable fashion, but a consistent one), I deem the game as having great gameplay.

Bear in mind that it might NOT be my cup of tea, but as long as the mechanics of the game, combined with the graphical and aural assets, react in a way that doesn't instantly brainfuck you with a sudden disconnection from the flow of another world's valid natural and social laws, then yes, I consider that a game where they nailed down the gameplay. A rather simple accomplishment in the more 'abstract' gameworlds of ten and twenty years ago, and something that gets progressively more difficult because, to put it bluntly, we see the chair in 3D and all of its polished wood glory, but we cannot break off a leg and use it to bash someone's head, nor move/rotate it to where we want it to be, nor even sit on it. Right now we're beholders of worlds that don't react like what our eyes are telling us. Do you not try, for example, to 'use' every decorative object in the world to see if it's "one of those games" where you can trigger simple scripts from them? Do you not, after trying to do this, try to break these objects, to see if we're allowed? Or how well they break, for that matter?

It simply never occurs to us that maybe the fire hydrant could be used to stock up on water, or to direct jets of water in a random direction. That maybe we could call people with phones rather than get silly easter eggs, that we could read road signs that point us at places rather than just being stuff to blow up, that I should be able to turn on the gas in ovens and blow up ramshackle apartments, that wood and fabric furniture should catch fire and let it spread. That after a shootout at a shoddy restaurant I could perhaps sit at a table and read a whole menu, or go to the kitchen and make myself a burger? It's a daunting task to provide such a level of interactivity with the gameworld, such a level of "this is this and I know it can be used for stuff like this".

Snarboo said:

You bring up unified lighting as not only being something pretty to look at, but also a great way to enhance gameplay, which I agree with, but only if it's used properly and not just as a graphical gimmick or enhancement.

Any examples of a gimmicky implementation of unified lighting? It's quite heavy to code, you know, and it involves different art assets than conventional baked or vertex lighting models.

As for Thief 3, it's sadly something I have to bring up, because there are few games out there in the PC market at the moment to prove my points; this is a transition period where pre-GF3 chipsets and such must be phased out. It's like the jump to OpenGL when Quake 3 came out.

I prefer the visual quality of Thief 2, and its gameplay, to Thief 3 by a big, big margin. I never even bothered to finish Thief 3, simply out of a "it's not like Looking Glass would have done it" vein in me. Although the shadows did work wonders in Thief 3, I still think they dropped the ball on the gameplay, because it just wasn't up to par with all the possibilities that were -opened- by actually having a unified lighting system, so no, I won't defend it; I agree with you. I simply used the game as a reference, because the game is, after all, built upon the concept of hiding in dark places that might not be dark all the time.

Snarboo said:

You also mention how things like bump mapping and specularity can make something appear more realistic... It doesn't, because knowing that you are probably looking at metal doesn't make the game play better.

Not in today's games, of course. At least, not to any appreciable margin. To quote myself:

I am an extremely jaded gamer but I see these features as perfectly valid doors to improved gameplay.

A time will come when seeing specular hotspots will be useful, believe me. There will come a day when we light up a torch in an RPG and find ourselves in a big ancient tomb, wide and long enough that the torch casts no light on its walls, and in the foreboding black horizon ahead of us we'll see hotspots shining over rounded, softly moving shapes, and we'll know there are some sneaking bastards behind those shields, ready to attack.
There'll be a day when we run into an almost pitch-black tunnel, chased by extremely fast rabid zombies, counting the last bullets in our handgun, and specular bloom will reveal the exit to us. We'll look back to gauge distance, because the echoes make it too hard to have any spatial awareness. We can't see; looking ahead at that bright, glaring half-circle of hope has fucked up our night vision for a little while. But it's okay, we're almost there, but oh shit, too late: you saw it but had no time to react. That patch of road was shiny... that patch of road was wet. You slip and knock yourself on the back of the head. You slowly try to stagger back to your feet, vision now blurred, still seeing some dancing, diminishing stars from the impact. But it's too late, they're already on top of you. You shoot blindly and wildly, as exultant as your chasers, it seems.

Fuck, there'll be a day when games don't necessarily contain cleavage, ninjas, nazis, aliens or monsters. Some will simply be dramas where the player must confront the tensions and magnetism of incredibly realistic characters.

Snarboo said:

Specularity would actually be pointless to use if you were making a game with a sort of muted, watercoloresque theme or artistic style to it, where things would not shine or appear sharp, but smear together and be very dim.

Well, duh, but see, it's not like graphics vendors make features only for realistic games. Pixel shader fragments, for example, would allow me to make all kinds of non-realistic renderers. And that doesn't mean fucking cel renderers with thick black lines on the edges and simple gradients instead of textures. Fuck that, I mean stuff like watercolour looks in real time, or oil paintings, or crazy, impossible-to-describe-with-words surreal looks. But this will take time, as people need to get the hardware, and companies have to play with fragments a bit before nailing such complicated uses. After all, it is easier to mimic nature, because we have the maths to render it. Other styles have much more obscure maths behind them.

Snarboo said:

However, anybody who has played a game like chess and checkers can tell you that it would be futile to enhance the game by making it look prettier when the gameplay is already solid.

That is because chess is a far more abstract universe. It is only about patterns and movements and several different "types" with their allowed movements. You have some global rules on top, and that's it. The gameworld "chess" comes with no real graphical component. We make graphical -representations- or models for chess, and yes, putting shadows and complex shaders on the pieces does not really add anything to the game. We're simply adding things on top of the thing we use to experience chess. Like making your game controller or CPU case a cooler color, with LEDs and translucent parts and shit.

Snarboo said:

A blind person's life is just as real and impressive and complex as someone who can see, so don't tell me making a game appear more realistic makes it play better or even makes it realistic.

Not in today's games, but I would very much like to get there, and that takes some steps.

Snarboo said:

Everybody talks about how graphics are pushing the envelope of realism when we're still playing games like Half-Life 2 that play like a 7-year-old game that was popular and new and fresh in 1998 but really hasn't brought anything new to the table now.

Have I said that? I enjoy two games a year nowadays. I only said that gameplay can and will be capped by graphics in the future, unless we take these steps to create tools that are highly customizable and modular.

Snarboo said:

If you're going to make the graphics appear more realistic, why not go all out and make the game more realistic, or at least play differently than what we have been playing for at least 10 years now? If you're going to be making a game that plays like one from 10 years ago, why even bother with better graphics when you could just use the graphics of yesteryear?

It's not exactly the developers' fault that hardware is better and artists are not as limited as before. Palettes? 16-bit approximations? 500 triangles, for God's sake? This corrodes the artists more than anything else. Visual execution impaired by technology? In the realm of -computer- games? Come on.

If artists and render programmers have had more luck improving their field than their counterparts, it's not their fucking fault. It's their achievement. Go rag on the designers, the game content coders, people like that. It's not like producers force the puny little game companies to make only flashy products; most of the time they just ask for something that doesn't look like a turd, or that looks really cool if the design is meh for starters.

But then again, it's easier to make 2048x2048 textures than to "simply" code a feature where all NPCs can be given gifts based on their hobbies and preferences and be "touched" by your attentiveness and friendship. It's easy to see why one thing grows much faster than the other for now.

Snarboo said:

Until a game designer can actually make good use of more realistic rendering methods, I'm going to look at all this graphical bullshit with disdain.

The guys who made Facade could very well couple their system with 3D lip-synching and facial gestures and get something incredibly cool. Will Wright, who I don't normally give too much credit, is making Spore, done entirely, visuals-wise, it seems, with vertex programs and pixel shader fragments, allowing for the evolutionary, never-the-same mutation and customization of all lifeforms.

Snarboo said:

Why should a game with realistic gameplay have to have realistic graphics, just so they both match? That doesn't make any sense to me, I'm sorry. That's like saying the words or font on a page of a book has to look better if the book is realistic or whatever.

Bad example, because if the 'realistic gameplay', the unlimited interactivity, cannot be accurately displayed in the image, you cannot react to it accordingly. If you cannot see the worn-down rust in the little crevices of an item, you cannot tell if it's old or new. If you can't see the extremely unnatural shine and deep colours of things like rocks outside, you can't tell that it rained overnight while you were fast asleep. If you can't see the tree branches and weeds dancing in the wind, you can't tell its direction. Perhaps you see these things as inconsequential, but imagine if they were not. Imagine a gameworld so complex and yet impaired by a lack of proper information for the player.

Have you ever played pencil-and-paper roleplaying games? Do you know the difference between a good gamemaster, who spins a tale with immense detail and takes your actions upon the world into account, and one who just describes things in half-sentences and proceeds in the most abstract of fashions?

Snarboo said:

However, please don't interpret my tirade as saying that games shouldn't strive to be more realistic or appear more realistic. Not at all, it just irks me when people can honestly confuse how a game plays with how a game looks.

Do I? As I said, I still prefer the 16-bit-precision, 32-texels-per-sample lightmaps of Thief 2 to the new incarnation of the series. And I also like Thief 2's gameplay better. So what am I raving about?

Games fucking need good visuals if we're going to build deeper gameplay. How else are you going to tell that you are indeed making a fucking difference in the gameworld, if it's something that can't be spelled out for you via a sound file or a text window?

Snarboo said:

That must explain why we are still playing games that play like one that came out 10 years ago but looks better.

Not really; you just haven't noticed the dramatic change compared to games ten years ago. I think you are expecting too much. Have you done any coding? I ask simply because I would be interested in knowing what things you expect in gameplay circa 2005. I have some ideas myself that I desperately want to see in a game (especially one of my own making), but the process of turning perfectly reasonable and cool gameplay ideas into code is... well, sometimes it's a daunting task.

AirRaid said:

rant

Look for HDRIBL; it's a demo that uses REAL HDR, not the half-assed crap that Valve is using.

geekmarine said:

Oh, and for the love of God, somebody kill plot as an essential game element. I like to know why I'm doing what I'm doing, I guess, but games heavy in cinematics and cutscenes have no replayability value. And when plot is a crucial element to a game, once you know how the story ends, it's kinda pointless to go back and play again.


It would be great if people didn't think a plot was an essential game element. A videogame plot isn't nearly as good as a story in a book. I go to other message boards and the people there have hated games just because of the story. Gameplay is important for videogames, nothing else.

