phi108

Every texel antialiasing / Super-duper-sampling?

Recommended Posts

This post may be nonsensical, but is there any official term for averaging the colors of all the texture pixels/texels contained in one pixel of the display? This would make mipmapping and antialiasing unneeded (if they aren't already...), but would probably require many times more processing. The 3D engine would look at all parts of all textures that are behind one pixel (sometimes located on multiple polygons/surfaces) and average the R/G/B values.
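As a rough sketch of what that per-pixel averaging might look like in C (the hard part, gathering every texel that ends up behind the pixel, is left as a hypothetical step that's assumed to have happened already):

```c
/* Sketch of the idea: average every texel that falls behind one screen
 * pixel. Collecting those texels (walking all visible surfaces and
 * projecting their texels into the pixel) is the hard, hypothetical
 * part; here it's assumed to have been done already. */
typedef struct { unsigned char r, g, b; } Texel;

Texel average_texels(const Texel *texels, int count)
{
    unsigned long r = 0, g = 0, b = 0;
    for (int i = 0; i < count; i++) {
        r += texels[i].r;
        g += texels[i].g;
        b += texels[i].b;
    }
    /* caller must guarantee count > 0 */
    Texel out = { (unsigned char)(r / count),
                  (unsigned char)(g / count),
                  (unsigned char)(b / count) };
    return out;
}
```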

For better performance in wide-open areas, maybe the average color of each single texture could be precomputed, so a pixel that covers many textures of a far-away area takes a few precomputed averages instead of averaging that many individual texels.

I'm guessing even a 320x200 display would be laggy with maps as big as Doom's, but maybe a non-realtime engine could use something similar for prerendered scenes, or the view distance could be drastically reduced.

Maes

There's technically no difference between what you described and mipmapping, which also has the advantage of creating recognizable shapes at various scales, not just single pixels. Every one of the pixels of the smaller scaled images is made from averaging the RGB values of the full-scale pic, and the most appropriate one is chosen depending on view distance.
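For reference, a minimal sketch of how one mip level is usually built from the previous one, using a plain 2x2 box filter (assuming even dimensions and tightly packed RGB8 data):

```c
/* Build mip level N+1 from level N with a 2x2 box filter: each output
 * texel is the average of the four source texels it covers. */
void downsample_mip(const unsigned char *src, int w, int h,
                    unsigned char *dst /* (w/2) x (h/2) x 3 bytes */)
{
    for (int y = 0; y < h / 2; y++)
        for (int x = 0; x < w / 2; x++)
            for (int c = 0; c < 3; c++) {
                int sum = src[((2*y    )*w + 2*x    )*3 + c]
                        + src[((2*y    )*w + 2*x + 1)*3 + c]
                        + src[((2*y + 1)*w + 2*x    )*3 + c]
                        + src[((2*y + 1)*w + 2*x + 1)*3 + c];
                dst[(y*(w/2) + x)*3 + c] = (unsigned char)(sum / 4);
            }
}
```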

Unfortunately, you can't easily create mipmaps between different textures for each possible scale/position/texture pair, as the number of combinations would grow exponentially.

Now, you COULD quickly do RGB averages based on the 1x1 or 2x2 mipmaps for multiple textures in real time, but if two or more textures are rendered so far away as to be 1 pixel combined in size, do you think it would really make a difference if you took their average?

GooberMan

Maes said:

What you describe is called mipmapping

There are a few things wrong with your description. More often than not, I've seen higher mips generated from point sampling rather than from any kind of filtering over the texture. Averaging the values is only turned on when linear filtering between mips or anisotropic filtering is enabled, and even then "average" is the wrong word, as it's a lerp rather than a plain average. Trilinear and anisotropic filtering are expensive, though, which is why most modern material systems allow them to be defined per-material.
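To illustrate the lerp point, here's a sketch of trilinear sampling; sample_bilinear() is an assumed helper that does the bilinear fetch within a single mip level:

```c
/* Trilinear filtering: lerp between the bilinear samples of two adjacent
 * mip levels. Not a plain average -- the weight is the fractional part
 * of the computed mip level. */
typedef struct { float r, g, b; } ColorF;

ColorF sample_bilinear(int level, float u, float v); /* assumed helper */

static ColorF lerp_color(ColorF a, ColorF b, float t)
{
    ColorF out = { a.r + (b.r - a.r) * t,
                   a.g + (b.g - a.g) * t,
                   a.b + (b.b - a.b) * t };
    return out;
}

/* level = 2.3 means 30% of the way from mip 2 towards mip 3 */
ColorF sample_trilinear(float u, float v, float level)
{
    int   lo = (int)level;
    float t  = level - (float)lo;
    return lerp_color(sample_bilinear(lo,     u, v),
                      sample_bilinear(lo + 1, u, v), t);
}
```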

But more to the point: he hasn't really described mipmapping. What he's describing is closer to alpha blending.

But I think what he's really trying to get at is how triangles themselves are rasterized. Graphics hardware is a bit brainless. It decides whether to draw a pixel to the backbuffer based on whether the poly/line covers it: effectively an on/off approach. Antialiasing is really a bit of a hack to get around that - render at a higher resolution, then downsample with a bilinear filter of some description to get rid of those jagged edges. If you want to get rid of the hack, you'd need a hardware solution that stores which polys overlap which pixels and blends the corresponding colors in order and with the correct transparency percentage.
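The on/off rule in sketch form: a pixel gets drawn if and only if its centre passes the triangle's edge tests, with no notion of partial coverage.

```c
/* Draw the pixel iff its centre lies inside the triangle. No partial
 * coverage -- which is exactly where the jagged edges come from. */
typedef struct { float x, y; } Vec2;

static float edge(Vec2 a, Vec2 b, Vec2 p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

int pixel_covered(Vec2 v0, Vec2 v1, Vec2 v2, int px, int py)
{
    Vec2 p = { px + 0.5f, py + 0.5f };   /* pixel centre */
    float e0 = edge(v0, v1, p);
    float e1 = edge(v1, v2, p);
    float e2 = edge(v2, v0, p);
    /* accept either winding order */
    return (e0 >= 0 && e1 >= 0 && e2 >= 0) ||
           (e0 <= 0 && e1 <= 0 && e2 <= 0);
}
```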

To be honest though - it's possible. The PowerVR chipsets in the iPhone and many Android devices are tile-based deferred renderers. What this means is that each polygon sent to the renderer is tested against a grid (my own tests suggest 16x16-pixel cells). If it fits wholly within a cell, it gets put in that cell's list. If it overlaps, it gets clipped and put into each cell it touches. Rendering to the backbuffer only occurs when the buffers get full or an operation that requires a backbuffer resolve is called. The advantage there is that you get free alpha sorting thanks to the way the hardware stores the polys. Now, from here, rather than following the usual on/off rule, it would be trivial to determine how much of a pixel the poly overlaps and blend accordingly.
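A sketch of that binning step, using a screen-space bounding-box test (real hardware clips the polygon; the bounding box is the simple conservative version, and none of this is actual PowerVR behaviour):

```c
#include <stdlib.h>

/* Bin a triangle into a grid of 16x16-pixel tiles by its screen-space
 * bounding box (assumed already clamped to the screen). Each tile keeps
 * a list of the triangles touching it; rendering later walks one tile
 * at a time. Error handling omitted for brevity. */
#define TILE 16

typedef struct TriNode { int tri_id; struct TriNode *next; } TriNode;

void bin_triangle(TriNode **grid, int tiles_x,
                  float x0, float y0, float x1, float y1, int tri_id)
{
    int tx0 = (int)(x0 / TILE), ty0 = (int)(y0 / TILE);
    int tx1 = (int)(x1 / TILE), ty1 = (int)(y1 / TILE);
    for (int ty = ty0; ty <= ty1; ty++)
        for (int tx = tx0; tx <= tx1; tx++) {
            TriNode *n = malloc(sizeof *n);
            n->tri_id = tri_id;
            n->next   = grid[ty * tiles_x + tx];
            grid[ty * tiles_x + tx] = n;
        }
}
```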

This would be substantially more difficult on non-deferred renderers like your desktop ATI/nVidia cards. But hardware that operates using standard APIs could easily be adapted to such an idea.

Maes

If there are mipmaps generated by point sampling rather than averaging or filtering, then they were simply shoddily made. Since they are precomputed anyway, there's no reason not to use the most rigorous method available. Point sampling is just wrong from a DSP point of view, and essentially negates the advantages of using a mipmap.

The "problem" is that mipmaps apply to ONE texture, there are no cross-texture mimaps , aka "what would happen if I averaged texture A at distance X with texture B at distance Y, so that texture A appears at posizion Z relative to B?"

BTW, the OP didn't mention transparency, he just asked if what sounds like a glorified cross-texture mipmapping (averaging averages based on position/overlapping) would be possible, down to the single pixel level.

If (proper) mipmapping alone doesn't do it, then a combination of 1x1 or 2x2 ... nxn mipmaps and alpha blending could. But there's no guarantee that the result would be theoretically superior to a properly executed hardware AA. Keeping anally detailed information over what overlaps what etc. would require equally powerful hardware, and would really just be another kind of non-oversampling filter, only less practical to apply.

Hardware AA works well and is cheap because it's a "brute force" massively-parallelizable affair, and doesn't really need to know about what textures/objects/polys you're displaying, as it works on raster data.

An anally nitpicking super-average filter that took 3D objects into account would require a full-blown CPU, intimate coupling with the rendering hardware and primitives, and a lot of inherently serial operations. Guess which one the GPU industry would rather implement.

Carnevil

Multisampling I think may be the term that you're looking for, and it does exist. Sadly though, I don't think it's powerful enough yet (or rather graphics cards aren't powerful enough yet) to make other forms of anti-aliasing irrelevant.

GooberMan

Maes said:

If there are mipmaps generated by point sampling rather than averaging or filtering, then they were simply shoddily made.

Bullshit. Your blanket statement is not applicable. Ever tried to generate good mips on a grille texture where many pixels are completely transparent? It looks like ass if you use any kind of filter that averages pixels in any way, shape or form. Ever tried downsampling 2D cel art with a blending function? There go your well-defined edges.

Maes said:

BTW, the OP didn't mention transparency, he just asked if what sounds like a glorified cross-texture mipmapping (averaging averages based on position/overlapping) would be possible, down to the single pixel level.

Uh.... wut?

but is there any official term for averaging the colors of all the texture pixels/texels contained in one pixel of the display? This would make mipmapping and antialiasing unneeded


OP is well aware of what mipmapping is and wants to get rid of the need to do AA. OP is also asking about blending together every texture rendered to a single pixel. He's not asking about selecting a texel to render to the backbuffer; these are already-selected texels that he wants to blend together. As such, it is much like alpha blending, except that alpha blending only works on two values at a time instead of on every texel rendered to a pixel.
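For concreteness, the blend in question is the classic "over" operator; it only ever combines two colours per operation, so covering N texels means chaining it N-1 times in depth-sorted order:

```c
/* Classic alpha blend ("over" operator): out = src*a + dst*(1-a).
 * It combines exactly two values at a time; blending every texel behind
 * a pixel means applying it repeatedly in sorted order. */
typedef struct { float r, g, b; } ColorF;

ColorF blend_over(ColorF src, float alpha, ColorF dst)
{
    ColorF out = { src.r * alpha + dst.r * (1.0f - alpha),
                   src.g * alpha + dst.g * (1.0f - alpha),
                   src.b * alpha + dst.b * (1.0f - alpha) };
    return out;
}
```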

Maes said:

Keeping anally detailed information over what overlaps what etc. would require equally powerful hardware, and would really just be another kind of non-oversampling filter, only less practical to apply.

Why are you arguing totally wrong points against a graphics programmer? Do you know how much it'd need on an SGX or an MBX? An extra byte per pixel for each polygon stored in a deferred renderer.
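One possible reading of that extra byte, sketched as a 0-255 coverage value used as a blend weight instead of the on/off rule (hypothetical, not documented SGX/MBX behaviour):

```c
/* Hypothetical use of one extra coverage byte per pixel per polygon:
 * blend the polygon's colour against what's behind it, weighted by how
 * much of the pixel the polygon covers (0..255), instead of on/off. */
unsigned char blend_by_coverage(unsigned char poly_c,
                                unsigned char behind_c,
                                unsigned char coverage)
{
    return (unsigned char)((poly_c * coverage +
                            behind_c * (255 - coverage)) / 255);
}
```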

Maes said:

An anally nitpicking super-average filtering that took 3D objects into account would require a full blown CPU, intimate coupling with the rendering hardware and primitives and a lot of inherently serial operations. Guess which would the GPU industries rather implement.

Just shut up and actually research the hardware I'm talking about. It's been out there for years.

EDIT:

Carnevil said:

Multisampling I think may be the term that you're looking for, and it does exist. Sadly though, I don't think it's powerful enough yet (or rather graphics cards aren't powerful enough yet) to make other forms of anti-aliasing irrelevant.

Aye, it's pretty much why I think a deferred renderer is the best immediate option for doing something like that, as all the data it needs to multisample is already there and sorted, just waiting to render.

phi108

From what I understand, "multisample anti-aliasing" applies an antialiasing effect to pixels that cover 2 or more polygons, like what GooberMan said:

GooberMan said:

...a hardware solution that stores which polys overlap which pixels and blends the corresponding colors in order...

That solution leaves pixels that cover one polygon unaltered, so it performs better than the "render the whole output at 2x-16x resolution and downsample" kind.

What I described would affect every pixel of the output, unless that pixel only covers one texel of a close-up texture (maybe keeping all texture filtering off would make it less complex).

Here is an example of normal Doom on the left and 4x anti-aliasing on the right, sort of showing what the result would be. The absence of mipmapping isn't as noticeable anymore (at 320x240), and 4x might give good performance with a 320x200 display, but I described something that takes into account every single pixel/texel of every texture that faces the camera, probably killing performance.

GooberMan

phi108 said:

From what I understand, "multisample anti-aliasing" applies an antialiasing effect to pixels that cover 2 or more polygons

Nope. MSAA basically means that for the number of pixels in the backbuffer, there are <x> times as many samples rendered and sampled from. A 1280x720 display with 2x MSAA renders to the equivalent of 1810x1018 and downsamples the output pixels from that. 4x MSAA renders the equivalent of 2560x1440.
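The arithmetic behind those figures: the sample count scales linearly with the MSAA factor, so each axis scales by its square root.

```c
#include <math.h>

/* Equivalent resolution of an x-sample MSAA target: total samples scale
 * by x, so each axis scales by sqrt(x).
 * 1280x720 at 2x -> ~1810x1018; at 4x -> 2560x1440. */
void msaa_equivalent(int w, int h, int x, int *ew, int *eh)
{
    double s = sqrt((double)x);
    *ew = (int)(w * s + 0.5);
    *eh = (int)(h * s + 0.5);
}
```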

phi108 said:

The absense of mipmapping isn't as noticeable anymore

Now that you've said that, I don't think you quite understand what mipmaps are. Mipmaps are a hardware-friendly feature: rather than always picking a texel from the full-size texture, the hardware picks from a lower-resolution copy when the area being rendered to on screen isn't big enough to warrant a higher-resolution one.
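Roughly, the selection rule picks the level whose texel density matches the on-screen footprint, i.e. log2 of the texel-to-pixel ratio; a sketch:

```c
#include <math.h>

/* Pick a mip level from how many texels the pixel's footprint spans.
 * texels_per_pixel > 1 means the texture is minified, so a smaller mip
 * is appropriate. Clamped to the available levels. */
int select_mip_level(float texels_per_pixel, int num_levels)
{
    if (texels_per_pixel <= 1.0f)
        return 0;                 /* magnified: use the full-res level */
    int level = (int)(log2f(texels_per_pixel) + 0.5f);
    return level < num_levels ? level : num_levels - 1;
}
```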

Hardware Doom renderers that try to look like the original Doom renderer (for example, the Xbox renderer) don't turn on bilinear filtering. The hardware samplers are set to point (or nearest) sampling: they just pick the one texel from the texture that's closest to what the transformed polygon represents. For comparison, bilinear filtering picks that texel and some texels around it, and averages the result based on where in the pixel that transformed point lies.
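Side by side, a sketch of the two sampling modes over a packed RGB8 texture with normalized coordinates (edge handling kept minimal; the bilinear version assumes an interior texel for brevity):

```c
/* Point (nearest) sampling vs bilinear filtering, u and v in [0,1). */
typedef struct { unsigned char r, g, b; } Texel;

Texel sample_nearest(const Texel *tex, int w, int h, float u, float v)
{
    int x = (int)(u * w); if (x > w - 1) x = w - 1;
    int y = (int)(v * h); if (y > h - 1) y = h - 1;
    return tex[y * w + x];   /* one texel, no averaging: pixellated look */
}

Texel sample_bilinear(const Texel *tex, int w, int h, float u, float v)
{
    float fx = u * w - 0.5f, fy = v * h - 0.5f;
    int   x0 = (int)fx,      y0 = (int)fy;
    float tx = fx - x0,      ty = fy - y0;
    const Texel *a = &tex[ y0      * w + x0];
    const Texel *b = &tex[ y0      * w + x0 + 1];
    const Texel *c = &tex[(y0 + 1) * w + x0];
    const Texel *d = &tex[(y0 + 1) * w + x0 + 1];
    Texel out;   /* weight the four neighbours by the sub-texel offset */
    out.r = (unsigned char)((a->r*(1-tx) + b->r*tx)*(1-ty) + (c->r*(1-tx) + d->r*tx)*ty);
    out.g = (unsigned char)((a->g*(1-tx) + b->g*tx)*(1-ty) + (c->g*(1-tx) + d->g*tx)*ty);
    out.b = (unsigned char)((a->b*(1-tx) + b->b*tx)*(1-ty) + (c->b*(1-tx) + d->b*tx)*ty);
    return out;
}
```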

The MSAA downsampling there is, in effect, negating the point sampling effect. It's basically bilinearly filtering those textures, killing the old school pixellated effect in the process.

A deferred multisampler would get rid of the jagged edges whilst keeping the pixellated look. It'd save pixel processing time for sure (which is an issue on the iPhone; its fillrate sucks compared to its vertex processing rate).


This just reminds me of the time someone averaged all the palette colors from the id games. I remember Doom's was some sort of green.

Maes

All I see is that we're using rather convoluted terms to describe what is the same thing under the hood, no matter how many extra fancy buzzwords you throw at it to make it sound more important than it actually is.

Don't try to argue vs a DIP & DSP MSc graduate, "graphics programmer" guy. Atten-shun and about-face!


I've always hated anti-aliasing in some games. If there's anti-aliasing that has to happen, the game isn't moving fast enough. Some console games actually switched anti-aliasing off when moving, because it wasn't needed, then gradually switched it back on when standing still. Heck, some N64 games could pick and choose which lines to put under the algorithm, if I recall correctly. But that's not FSAA.
