invictius

Does using an AGP video card in a 2D port have an advantage over PCI?


The only thing I could think of is that a WAD with super complex architecture could benefit from an AGP card, as I assume it delivers faster 2D rendering, despite the performance focus being on D3D/OGL games.

Also, does the same logic apply under DOS?


If you're talking about software rendering, no, it won't make any difference at all. The time to transfer the video data is the same for simple and complex maps.

Why do you even ask about an obsolete hardware standard? Nobody uses AGP anymore.


The amount of video data being transferred is the same no matter how complex the scene is. The scene calculation (for the traditional renderer) happens entirely on the CPU.


Well, if he meant conventional PCI (not PCI-e), AGP was developed precisely to transfer data more quickly in one direction (CPU/main RAM to video card), and even the first revision of AGP was specced at twice the frequency and bandwidth of PCI, so there will be a speed advantage, even for plain framebuffer transfers. Of course, this will only be apparent at ridiculously large resolutions, large enough to hog the conventional PCI bus at Doom's typical frame rates.

If we assume the base 133 MB/sec PCI speed, that's good enough for moving about 2 Mpixels @ 70 fps. That would be enough for resolutions up to about 1440 * 1080 (highest standard 4:3 resolution) but it would consume nearly 80% of ALL available PCI bandwidth. Not really something you'd like to happen if you want the system to function smoothly.

Luckily, AGP bypasses that, and allows up to 16x that bandwidth (with AGP 8x).
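The arithmetic above can be sketched in a few lines. This assumes an 8-bit (1 byte per pixel) framebuffer, as in vanilla Doom's palette mode, and uses theoretical peak bus rates; the `framebuffer_rate` helper is just illustrative:

```python
# Back-of-the-envelope check of the PCI bandwidth figures above.
# Assumes 8-bit (1 byte/pixel) frames; bus rates are theoretical peaks.
PCI_BW = 133 * 10**6        # bytes/sec, 32-bit / 33 MHz PCI
AGP_8X_BW = PCI_BW * 16     # AGP 8x is roughly 16x base PCI bandwidth

def framebuffer_rate(width, height, fps, bytes_per_pixel=1):
    """Bytes per second needed to push raw frames over the bus."""
    return width * height * bytes_per_pixel * fps

rate = framebuffer_rate(1440, 1080, 70)
print(f"1440x1080 @ 70 fps: {rate / 1e6:.0f} MB/s "
      f"({100 * rate / PCI_BW:.0f}% of PCI)")
# -> 1440x1080 @ 70 fps: 109 MB/s (82% of PCI)
```

So 1440x1080 at 70 fps fits on plain PCI, but only barely, which is the "nearly 80% of ALL available PCI bandwidth" point.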

Now who's using AGP (besides me and hex11)? I dunno. But it certainly beats using an old S3 PCI video card ;-)

Maes said:

If we assume the base 133 MB/sec PCI speed, that's good enough for moving about 2 Mpixels @ 70 fps. That would be enough for resolutions up to about 1440 * 1080 (highest standard 4:3 resolution) but it would consume nearly 80% of ALL available PCI bandwidth. Not really something you'd like to happen if you want the system to function smoothly.



And yet...
Even modern CPUs are struggling to churn out that much data on anything but the most simplistic maps.

Which leads to the conclusion that for AGP vs. original PCI (and not only here) the real bottleneck lies entirely elsewhere than the video bus.


I was referring to pure framebuffer data: when using software rendering you're pushing the same number of pixels to the video card for every frame, regardless of scene complexity. There's no variable geometry or texture data to push to the card in this case, just a fixed-resolution frame, one per refresh.

And at the example resolution I posted, it's obvious that saturating that bandwidth is quite easy. At 16-bit color or higher, the "standard" HD resolution of 1920 x 1080 would exceed what the PCI bus is capable of already at 35 Hz, let alone 60 or 70.

In practice, you would encounter problems well before reaching saturation: overheads reduce the maximum attainable transfer rate, not to mention contention from other devices, CPU overhead during DMA transfers (which adds up to a significant amount at those volumes), etc.

So of course you want your video card to be as fast as possible: ideally, taking zero time to transfer a fully rendered frame from main RAM to video RAM, not interfering with other system devices, and, if it's directly CPU-driven, not introducing significant transfer overheads/delays. In this scenario, PCIe > AGP > PCI > ISA.

I think integrated/shared-RAM video cards might have a slight advantage in DirectDraw applications such as software-rendered Doom ports: in theory, there are no RAM-to-VRAM or RAM-to-RAM transfers to perform.

However, the OP's question erroneously assumes that a card's performance depends on a Doom map's architecture in "2D" (software rendering?) mode. Strictly speaking, nope. The card performs the same no matter what you render; it's the CPU that determines maximum scene complexity. At really high resolutions, though, in addition to a powerful CPU you'll also need a video card with a reasonably fast bus to main RAM/the CPU, in order to handle the large data volume being moved.
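The saturation argument can be checked numerically. A rough sketch, assuming 32-bit (4 bytes/pixel) frames, which is what makes 1920x1080 exceed PCI even at Doom's 35 Hz tic rate; bus figures are theoretical peaks, and the PCIe entry assumes a 1.0-era x16 link:

```python
# Rough saturation check for the bus hierarchy PCIe > AGP > PCI > ISA,
# using theoretical peak rates in MB/s. Assumes 32-bit (4 bytes/pixel)
# frames, typical of modern software-rendered ports.
BUS_MB_S = {"ISA": 16, "PCI": 133, "AGP 8x": 2133, "PCIe x16 (1.0)": 4000}

def needed_mb_s(width, height, fps, bytes_per_pixel=4):
    """MB/s of raw framebuffer traffic for a given mode."""
    return width * height * bytes_per_pixel * fps / 1e6

demand = needed_mb_s(1920, 1080, 35)   # "HD" at Doom's 35 Hz
for bus, cap in BUS_MB_S.items():
    verdict = "ok" if cap >= demand else "saturated"
    print(f"{bus:>14}: {cap:>5} MB/s vs {demand:.0f} MB/s needed -> {verdict}")
```

At ~290 MB/s of demand, both ISA and plain PCI come out saturated while AGP and PCIe have headroom, matching the ordering above.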


A little off-topic, but is there any validity to the phrase "2D rendering" for software rendering and "3D rendering" for hardware-accelerated rendering? To me it seems there isn't, as both types just render a 3D (or rather 2.5D) world onto a 2D screen, but I'm not sure, and that's why I'm asking: not to nitpick, but to know.


Indeed, the OP should have used the term "software rendering". The opposite is "hardware accelerated rendering".

As for the second question (whether the same logic would apply in DOS...), a faster video card, even in the era of simple VGAs, DID matter: not all VGAs were created equal, and even if everything was CPU-driven, there could be significant differences in "dumb" framebuffer performance, even between ISA video cards.

VESA and PCI buses were pretty much mandatory for playing anything in SVGA and above resolutions, simply because the ISA bus could not move that much "dumb" framebuffer data around (the theoretical limit was 16 MB/sec, in practice hardly more than 11 MB/sec). And that was BEFORE the first mainstream 2D/3D accelerators even hit the market.

However, for vanilla Doom in particular, there seems to be a bottleneck tied to the ISA bus's speed that doesn't go away even with a PCI or better video card installed.
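To put the ISA figures above in perspective, here's a sketch of the frame-rate ceiling that the practical ~11 MB/sec number implies for plain 8-bit framebuffer pushes (the resolutions are just illustrative):

```python
# How far does ISA's practical ~11 MB/s go for "dumb" 8-bit framebuffer
# pushes? Frame-rate ceilings implied by bus bandwidth alone.
ISA_PRACTICAL = 11 * 10**6  # bytes/sec, practical ceiling cited above

def max_fps(width, height, bus_bytes_per_sec, bytes_per_pixel=1):
    """Upper bound on fps if the bus were the only limit."""
    return bus_bytes_per_sec / (width * height * bytes_per_pixel)

for w, h in [(320, 200), (640, 480), (800, 600)]:
    print(f"{w}x{h}: ~{max_fps(w, h, ISA_PRACTICAL):.0f} fps ceiling on ISA")
```

Vanilla Doom's 320x200 mode sails through, but 640x480 lands right around Doom's 35 Hz tic rate, which is why SVGA play effectively required VESA or PCI.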

Maes said:

And with the example resolution I posted, it's obvious that it's quite easy to saturate that bandwidth. The "standard" HD resolution of 1920 x 1080 would exceed what the PCI bus is capable of, already at 35 Hz, let alone 60 or 70.



That may be correct, but let's not forget that any system limited by a PCI bus would be even more limited by the weak CPU it was equipped with.

I consider it pointless to state that today's screen resolutions would overtax hardware that was never used with them.

Let's not forget that AGP, PCI's successor, is already 20 years old (original PCI is 24 years old, btw), and back then people could only dream of such hi-res displays being usable in a game. Even 1024x768 was way too much for most of those systems due to CPU limits alone.

Graf Zahl said:

That may be correct, but let's not forget that any system limited by a PCI bus would be even more limited by the weak CPU it was equipped with.


Well, it's still technically possible to install a PCI video card in a modern system, so while extremely unlikely, it's still a feasible scenario. For example, as a fallback in case of some major system fuckup, or on servers where you'll need at most to display a login prompt once in a while, there's no integrated option, and you just happen to have some old PCI S3s lying around.

But wrap your head around this: there's a market for video cards with modern, multi-core GPUs on a plain PCI bus. Yes, that means plain old 133 MB/sec PCI. No 'Express'.

Graf Zahl said:

That may be correct, but let's not forget that any system limited by a PCI bus would be even more limited by the weak CPU it was equipped with.

I consider it pointless to state that today's screen resolutions would overtax hardware that was never used with them.

Let's not forget that AGP, PCI's successor, is already 20 years old (original PCI is 24 years old, btw), and back then people could only dream of such hi-res displays being usable in a game. Even 1024x768 was way too much for most of those systems due to CPU limits alone.


It's funny how we started out with high-resolution CRTs but nothing, not even a Radeon 9800, was powerful enough to game on them... then we got to the point of running most things at 1080p... now 4K is out and most people can't use it, once again.

Maes said:

Well, it's still technically possible to install a PCI video card in a modern system, so while extremely unlikely, it's still a feasible scenario. For example, as a fallback in case of some major system fuckup, or on servers where you'll need at most to display a login prompt once in a while, there's no integrated option, and you just happen to have some old PCI S3s lying around.

But wrap your head around this: there's a market for video cards with modern, multi-core GPUs on a plain PCI bus. Yes, that means plain old 133 MB/sec PCI. No 'Express'.


Yes, there are always some people hell-bent on sticking with obsolete technology. Nothing can be done about the hopeless cases but to ignore them... :D

invictius said:

It's funny how we started out with high-resolution CRTs but nothing, not even a Radeon 9800, was powerful enough to game on them... then we got to the point of running most things at 1080p... now 4K is out and most people can't use it, once again.


And once they can use 4K, there will be 8K or 16K, and the same thing will start all over again. The tech industry needs to sell its stuff, after all... :D

invictius said:

It's funny how we started out with high-resolution CRTs but nothing, not even a Radeon 9800, was powerful enough to game on them... then we got to the point of running most things at 1080p.


And how far was that from running games at e.g. 1024x768 or 1280x1024, the typical 4:3 and 5:4 resolutions of the 2001-2006 era in gaming? Unless you went overboard with antialiasing or maxed out the settings on something like Far Cry, any mid-range video card of the time could handle such resolutions just fine. Computers, especially post-SVGA PCs, were always ahead of (SD)TV in terms of display resolution; (HD)TV was merely catching up ;-)

Graf Zahl said:

Yes, there are always some people hell-bent on sticking with obsolete technology. Nothing can be done about the hopeless cases but to ignore them... :D


Well... putting such a PCI card as an "upgrade" into a system that has an AGP mobo would be sad indeed, when there are still plenty of decent used AGP offerings (up to the level of a Radeon HD 4650 or nVidia 7900 GS) with DX10 capability. I don't know if it would make sense to "upgrade" a post-AGP system that has no PCIe slots at all (typical of some barebone/office PCs, which have integrated video and maybe one or two plain PCI slots). Depending on the case, it might really be an upgrade compared to the integrated graphics.

Graf Zahl said:

And yet...
Even modern CPUs are struggling to churn out that much data on anything but the most simplistic maps.

Which leads to the conclusion that for AGP vs. original PCI (and not only here) the real bottleneck lies entirely elsewhere than the video bus.


For 2D, throw-a-framebuffer-at-the-video-card use, sure. But the PCI bus was a real bottleneck for actual 3D games. Source: coincidentally, I was talking about exactly this just yesterday with a colleague at $work who used to work for 3dfx.

