Sorry, I was not trying for historical accuracy in my comments, but pointing out why precalculated tables have such an advantage over run-time RGB math. This is long-winded because this forum has viewers other than experts, and I tried to give details.
The hand tuning in the palette will not have left visible traces, and I really don't care about whether they actually did this or not (it does not matter for this discussion). Some palette color choices will change hue when converted to darker colors using integer math.
This is because unequal proportions are lost when the low-order bits are dropped. There are fewer choices of hue among the darker colors. To get a good palette for a game, select the dark colors first, and then multiply them up to get the bright colors for your palette. This gives palette colors that retain their hue when darkened. It does depend upon exactly what operation is used to darken a color, and that can be hand-tuned too. This hand-tuning of the palette is necessary given the integer RGB representation.
The hue error (hue shift) is NOT identical to the round-off error of a single RGB component. It is due to the difference in round-off between the RGB components. Using simple math with no rounding, there is a larger truncation error instead, but with a very similar hue error.
This effect is made worse in Doom because it is synchronous over a large area. Due to a limited palette, when a worst case error occurs for one color combination, then it occurs identically wherever the same colors combine, over the whole texture or sprite.
This can be minimized by using an RGB framebuffer, because any individual pixel differences are preserved (the color quantization of a palette-based framebuffer makes the effect far worse). However, because of the source color quantization, there is limited opportunity to introduce differences. It may actually help to introduce a deliberate random least-significant bit (dither).
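A minimal sketch of that dither idea, assuming the darkening operation is a one-bit right shift (the function name is mine):

```c
#include <stdlib.h>

/* Darken a component by one bit, adding a random bit first so the
   truncation direction varies from pixel to pixel instead of being
   identical everywhere the same color pair combines. */
static int darken_dithered(int c) {
    return (c + (rand() & 1)) >> 1;
}
```

An odd input such as 5 comes out as 2 or 3 at random, breaking up the synchronous error; an even input is unaffected.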
The gamma calculations that create the gamma table are all done in float in DoomLegacy (I did not look at other ports). The Doom palette does a lookup in the gamma table to create the video card palette. There is no reason to use integer in gamma table generation, as it would introduce even more error into the base palette.
Overall, this does not create much error.
All the colormaps are loaded, and are Doom-color to Doom-color translations. This is subject to color quantization errors. If the calculations that created the colormaps were very simple, then float would not be needed. The round-off error of a float-to-int conversion is about the same as one integer operation. Replacing any of the colormap lookups with RGB math can eliminate the color quantization error, but will introduce round-off error instead, in proportion to how many adds, multiplies, and shifts are used.
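For readers less familiar with the mechanism: a colormap shade is a single table lookup per pixel, with zero run-time RGB math. The 32x256 layout below is illustrative (the actual Doom COLORMAP lump holds 34 maps of 256 entries each).

```c
/* Precalculated Doom-color to Doom-color translation tables:
   one map per light level, 256 palette indices per map. */
static unsigned char colormap[32][256];

/* Shading a pixel is one lookup; all the error was baked into the
   table when it was generated, and never accumulates per frame. */
static unsigned char shade(int light, unsigned char src) {
    return colormap[light][src];
}
```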
That leaves the RGB operations as a source of hue error, which is where all this leads. The concepts of accuracy, precision, and significant digits complicate the whole issue of how much error is in an RGB calculation. I trust interval arithmetic the most; the others are estimates. Because RGB is not subject to measurement errors, the basic statistical equations for combining accuracies do not apply.
To shorten this, let UE be the uncertainty error. If the number of bits in intermediate values is not constant, then it would be preferable to track the number of significant bits.
I tried to verify this stuff from my numerical analysis notebooks, but they are not readily available. I expect there will be alternative error analysis argued. This is a very rough estimate.
A value from a table has at least a 1/2 bit UE.
The result of adding two values has 1 bit of UE.
A double-length multiply result does not itself have additional UE, but shifting off the low-order fractional bits does. A division is complicated, but if you are looking for speed you won't be using any. Multiplying by a constant multiplies the possible error by the constant.
Doing one transparency RGB calculation using integer math takes 1 add and a shift (minimum). This gives an estimated 2 bits of UE, leaving 6 bits of accurate color.
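That minimal integer transparency blend looks like this per component (a sketch; ports differ in the exact formula):

```c
/* 50% transparency blend of one RGB component: one add, one shift.
   Each table input carries ~1/2 bit of UE, the add combines them to
   ~1 bit, and the shift truncates a fractional bit: roughly 2 bits
   of UE total, leaving about 6 accurate bits of color. */
static unsigned char blend50(unsigned char a, unsigned char b) {
    return (unsigned char)(((unsigned)a + b) >> 1);
}
```

Note the truncation: blending 100 with 101 gives 100, not the true value 100.5, and each RGB component truncates independently.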
The hue shift occurs because the actual error direction in each component is independent of the others.
Even with 32-bit color, there are only 8 bits in each component. In Doom this is effectively only 7 bits, because it mostly uses the darker colors (using INTEGER, darker colors have less available range, and less tolerance for UE, than bright colors do).
There are only two apparent ways to prevent this: use FLOAT or FIXED POINT to do all RGB component math and round the result back to INTEGER once at the end, or minimize the number of integer operations on RGB components, preferably to fewer operations than 1/4 of the bits available.
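A sketch of the first option, assuming a simple per-component alpha blend (the parameterization is mine, not any particular port's):

```c
/* Alpha blend one RGB component entirely in float, rounding back to
   integer exactly once. 'alpha' is the opacity of 'a'. However many
   adds and multiplies the float expression contains, the integer
   result carries only the single final 1/2-bit rounding. */
static unsigned char blend_f(unsigned char a, unsigned char b, double alpha) {
    double v = alpha * a + (1.0 - alpha) * b;
    return (unsigned char)(v + 0.5);   /* single rounding step */
}
```

Because all three components round from the same float precision in the same way, the differential error between components, and hence the hue shift, stays within one least-significant bit.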