Most appropriate Linux distro for Chocolate Doom

http://www.archlinux.org/

Minimalist, simple, and fast, without having to compile every package as in Gentoo. It combines some of the best ideas of BSD and Linux, and the documentation is very easy to read and, more importantly, to learn from. (The Beginner's Guide can literally take you through installation step by step, letting you build a quick and powerful system even if you know little.)


I sincerely hope that it's already been established that there is no "most appropriate" or "best" distro for Chocolate Doom. Though, as the current maintainer of the Chocolate Doom package in the AUR, I don't mind people using it :P

MP2E said:

http://www.archlinux.org/

Minimalist, simple, and fast, without having to compile every package as in Gentoo. It combines some of the best ideas of BSD and Linux, and the documentation is very easy to read and, more importantly, to learn from. (The Beginner's Guide can literally take you through installation step by step, letting you build a quick and powerful system even if you know little.)


I swear to god ONLY Linux nerds can advertise a "minimalist" OS and keep a straight face.

Anywho, I really don't think he wants to "learn" that much, and to be honest, he won't learn anything if he's just following a bunch of steps. The only thing I learned on Arch was how short my patience can be when ndiswrapper refused to work, but hey, that's just my experience.


In other words: easy to set up, works, small. Choose two.


There isn't really any distribution that will specifically make you learn how to use Linux better (not even Gentoo); the only thing you're ever really forced to learn is the distro itself (i.e., the package management tools). Pretty much no distribution is any better or worse suited as a platform for learning how to use Linux.

Also, for what it's worth, I find Arch fits all three of your categories, CSG.


Well, yeah, obviously Chocolate Doom isn't going to run any better or worse from distro to distro, but I figured I might as well put in a word for Arch. As for minimalism being something only for "Linux geeks", I disagree. But then again, I AM a Linux geek. I can see how it wouldn't matter to the typical person, but I want my hardware running exactly what I want it to, and nothing more.

Csonicgo said:

I swear to god ONLY Linux nerds can advertise a "minimalist" OS and keep a straight face.


Not really; there are plenty of non-nerds on the Puppy Linux forums (for example) who just want a usable, unbloated system for various reasons. By non-nerds I mean people who aren't all that technically savvy.


In other words: easy to set up, works, small. Choose two.


OpenBSD = works, small, and easy to set up for real Unix fans. :)

Porsche Monty said:

I'm guessing it would only make sense to optimize for PIII's with integrated gpu's and below?

Definitely not. AMD Geode and ARM CPUs are becoming popular for low-powered systems. Even taking Atom into consideration, the future of low-powered computing points anywhere but Intel.

Csonicgo said:

I still didn't learn a thing on Arch

Really? I learnt heaps, and not just Arch-specific things. Being forced to manage runlevels, modules and services manually was a great help to me when going back to larger distros like Ubuntu and Fedora.

I agree with chungy that Arch does fit into all three of your categories. The problem with Arch is that you have to *keep* setting it up, spending up to a quarter of your time just maintaining the distro instead of playing Doom or looking at cat pictures or whatever you do with your computer.

hex11 said:

Puppy Linux forums

And a moderator who censors any criticism of the distro, even genuine constructive discourse about improvements.

OpenBSD = works, small, and easy to set up for real Unix fans. :)

You forgot to mention the comparatively small library of ported packages and the lack of hardware support.

Super Jamie said:

I agree with chungy that Arch does fit into all three of your categories. The problem with Arch is that you have to *keep* setting it up, spending up to a quarter of your time just maintaining the distro instead of playing Doom or looking at cat pictures or whatever you do with your computer.


That's exactly what I was going for. It's not hard as much as it's frustrating.


You realize that you are describing systems an order of magnitude or two more powerful (this includes practically all modern smartphones and anything with an ARM CPU > 50 MHz) than the ones that had to run doom.exe, right? And that the intention is to run a vanilla-resolution, "no-frills" type of port.

It seems weird how performance issues could even exist at that level.


Issues can arise at very high screen resolutions in Chocolate Doom: while it always has the appearance of 320x200, scaling that up to larger screen resolutions takes quite a bit of CPU horsepower. So far Chocolate Doom doesn't use OpenGL or anything like that to accelerate the scaling. Generally, though, it doesn't take a whole lot to run it at 320x200 natively, or even 640x480 (which is unfortunately quite blurry, but not *too* bad); I've run it on a 100MHz Pentium just fine.


I really need to get round to implementing that whole OpenGL/hardware scaling thing.


Even if you use pure software, there are some memory access patterns that work better than others.

From my understanding of the Chocolate Doom code, as of now you do "square scaling", i.e. you try to draw 4 (2x), 9 (3x), 16 (4x), n^2 (nx) etc. pixels at once in a square/rectangular fashion, using individual pixel addressing.

This is terribly inefficient, as it fucks up cache locality: you force 2, 3, 4 etc. different scanlines to be cached each time you scale a single pixel, and then the next pixel causes the same lines to be fetched again, and so on.

Instead, by doing only horizontal integer scaling for one scanline (using only horizontal pixel doubling, tripling etc.) all the writes stay in one scanline and cache coherency is preserved. This is the so-called "master scanline".

After you've completed the horizontal scaling, you can just memcpy the master scanline as many times as the vertical scaling requires, and move on to the next one.

An even better access pattern is to do as above, but only do the vertical multiplication memcpys when you have completed all of the master scanlines (think about doing all the time-consuming horizontal scaling first, then vertical using super-efficient memcpy).

You can see an implementation of this in Mocha Doom's SoftwareVideoRenderer.java:

	/**
	 * Pretty crude in-place scaling. It's fast, but only works full-screen
	 * Width needs to be specific, height is implied.
	 * */

	protected final void scaleSolid(int m, int n, int screen, int width) {
		int height = screens[screen].length / width;
		for (int i = 0; i < height; i += n) {

			for (int j = 0; j < n - 1; j++) {

				System.arraycopy(screens[screen], (i + j) * width,
						screens[screen], (i + j + 1) * width, width);
			}
		}

	}
This snippet only does the final memcpy-based scaling, but I only use it for scaling full-screen "solid" stuff like title screen, help pages etc.

Compare with the part of the code that does 4x scaling in Mocha Doom (it actually works on patches/columns, but the reasoning is the same):
// Scales a pixel of a particular column 4x times horizontally ONLY
					for (int j = 0; j < column.postlen[i]; j++) {
						dest[destPos] = data[ptr++];
						dest[destPos + 1] = dest[destPos];
						dest[destPos + 2] = dest[destPos];
						dest[destPos + 3] = dest[destPos];
						destPos += n * this.width;
					}
After everything has been horizontally scaled, solidScale is called to complete the job with bulk vertical scaling.

In Chocolate Doom:
        for (x=x1; x<x2; ++x)
        {
            *sp++ = *bp;  *sp++ = *bp;  *sp++ = *bp;  *sp++ = *bp;
            *sp2++ = *bp; *sp2++ = *bp; *sp2++ = *bp; *sp2++ = *bp;
            *sp3++ = *bp; *sp3++ = *bp; *sp3++ = *bp; *sp3++ = *bp;
            *sp4++ = *bp; *sp4++ = *bp; *sp4++ = *bp; *sp4++ = *bp;
            ++bp;
        }
The former does only 4 expensive pointer-based accesses for scaling a pixel 4x, and leaves vertical scaling to the efficient bulk System.arraycopy == memcpy, while the latter does 16 pointer accesses every time.
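The whole pattern can be sketched in C roughly as follows. This is a hypothetical illustration of the master-scanline technique described above, not Chocolate Doom's or Mocha Doom's actual code; all names and constants are assumptions, using an 8bpp buffer and a fixed 4x factor for clarity:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative sketch: scale an 8bpp SRC_W x SRC_H source buffer by
 * FACTOR, doing horizontal scaling per row and vertical scaling by
 * bulk memcpy. Names here are assumptions, not real identifiers. */
#define SRC_W  320
#define SRC_H  200
#define FACTOR 4

void scale_scanlines(const uint8_t *src, uint8_t *dest)
{
    const int dest_w = SRC_W * FACTOR;
    int x, y, k;

    for (y = 0; y < SRC_H; ++y)
    {
        uint8_t *master = dest + y * FACTOR * dest_w;

        /* Horizontal scaling: build the "master scanline" with
         * individual writes, all landing in one destination row,
         * so the cache only ever holds that single row. */
        for (x = 0; x < SRC_W; ++x)
            for (k = 0; k < FACTOR; ++k)
                master[x * FACTOR + k] = src[y * SRC_W + x];

        /* Vertical scaling: bulk-copy the finished master scanline
         * FACTOR-1 more times directly below it. */
        for (k = 1; k < FACTOR; ++k)
            memcpy(master + k * dest_w, master, dest_w);
    }
}
```

Each destination pixel is still written exactly once; the difference is purely the order of the writes, which keeps them sequential within a row instead of hopping between scanlines.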

Maes said:

From my understanding of the chocolate doom code, as of now you do "square scaling" aka trying to draw 4 (2x), 9 (3x), 16 (4x) n^2 (n) etc. pixels at once in a square/rectangular fashion, by using individual pixel addressing.

This is terribly inefficient, as it fucks up cache locality: you force 2, 3, 4 etc. different scanlines to be cached each time you scale a single pixel, and then the next pixel causes the same lines to be fetched again, and so on.

Yeah, I'm aware of this. For example, a while back I rewrote the Eternity Engine's screen wipe code because it worked by drawing a bunch of vertical slices onto the screen.


In Chocolate Doom:

        for (x=x1; x<x2; ++x)
        {
            *sp++ = *bp;  *sp++ = *bp;  *sp++ = *bp;  *sp++ = *bp;
            *sp2++ = *bp; *sp2++ = *bp; *sp2++ = *bp; *sp2++ = *bp;
            *sp3++ = *bp; *sp3++ = *bp; *sp3++ = *bp; *sp3++ = *bp;
            *sp4++ = *bp; *sp4++ = *bp; *sp4++ = *bp; *sp4++ = *bp;
            ++bp;
        }

The former does only 4 expensive pointer-based accesses for scaling a pixel 4x, and leaves vertical scaling to the efficient bulk System.arraycopy == memcpy, while the latter does 16 pointer accesses every time.

Yeah, this would probably be better off drawing a whole scanline at a time, though I'd expect modern CPUs have enough cache that it probably isn't too much of a problem. As for the pointer dereference, I'd hope the compiler is smart enough to do it only once.

You'll notice that the aspect ratio-corrected versions of the scale-up functions are scanline-based, and everyone should be using those anyway :-)


Does that speedup technique still work for drawing in highcolor, 24bpp, or 32bpp? I have been working on fixing some of the half-done highcolor support. While trying to get DoomLegacy to work on the Mac, we discovered that the Mac buffer width is a power of 2, not the video width. This has forced fixing every draw routine to increment y screen positions by a new field, vid.ybytes. Our X11 support uses the other technique: draw everything in 8bpp palette, and convert to 15/16/24/32bpp during the video buffer copy operation.
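A minimal sketch of the stride idea, assuming an 8bpp buffer for simplicity. The struct and function names are illustrative, not DoomLegacy's actual code; "ybytes" mirrors the vid.ybytes field described above:

```c
#include <stdint.h>

/* Illustrative: a video buffer whose in-memory row stride (ybytes)
 * differs from the visible width, as with the Mac's power-of-two
 * buffer. Names here are assumptions. */
typedef struct
{
    uint8_t *buffer;
    int width;   /* visible pixels per row */
    int ybytes;  /* bytes per row in memory; may exceed width */
} video_t;

/* Draw a vertical line of `len` pixels starting at (x, y).
 * Every y step advances by the stride, never by the width. */
void draw_vline(video_t *vid, int x, int y, int len, uint8_t color)
{
    uint8_t *dest = vid->buffer + y * vid->ybytes + x;

    while (len-- > 0)
    {
        *dest = color;
        dest += vid->ybytes;  /* stride, not width */
    }
}
```

Any draw routine that hardcodes `width` as the row step will shear or wrap on such a buffer, which is why every routine has to be touched.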


Since the point of the technique is replacing individual (and cache-unfriendly) pointer accesses with cache-friendly bulk memory copying whenever possible, the bit depth is a secondary consideration.

You will only draw one (horizontally scaled) column with expensive pointer operations, regardless of what the dereferenced pointers actually are, and do the vertical scaling with "dumb and direct" memory copying, which cheerfully ignores the actual bit depth. (Of course, with bigger bit depths you'll have more data to copy around, but that's still better than using n^2 individual accesses per pixel with little to no help from the cache.)
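To see why bit depth is secondary, here is a hedged sketch of just the vertical-duplication step: the only thing that changes with depth is the byte count per row, not the access pattern. Names are illustrative assumptions:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative: duplicate a completed master scanline factor-1 more
 * times directly below it. row_bytes = width * bytes_per_pixel, so
 * the exact same routine serves 8, 16, 24 or 32bpp buffers; deeper
 * formats just mean more bytes per bulk sequential copy. */
void duplicate_rows(uint8_t *master, int row_bytes, int factor)
{
    int k;

    for (k = 1; k < factor; ++k)
        memcpy(master + k * row_bytes, master, row_bytes);
}
```

For a 320-wide row at 32bpp and 4x vertical scaling, that's `duplicate_rows(row, 320 * 4 * 4, 4)`: three memcpys instead of thousands of per-pixel writes.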


2 days have gone by since I installed this...don't wanna sound rude or anything, but whoever invented xorg should get dragged out of his house and shot in the back of the neck cuban cdr style, with wife and kids watching, should he not be burning in hell already.


What are you using? What's the problem you're having? What steps have you done to get there?

You don't need an xorg.conf with most distros now; you can configure display resolutions and such through the settings menu in the desktop environment.

Super Jamie said:

What are you using? What's the problem you're having? What steps have you done to get there?

You don't need an xorg.conf with most distros now; you can configure display resolutions and such through the settings menu in the desktop environment.


Yes, and most don't do low resolutions or (sadly) widescreen ones, in my experience.
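For what it's worth, missing low or widescreen modes can often be added at runtime with xrandr rather than an xorg.conf. A sketch, assuming an output named VGA-1 (run plain `xrandr` first to see your real output names; the modeline below is the standard output of `cvt 1280 720`):

```shell
# List real output names and the modes X currently exposes
xrandr

# Generate CVT timings for a mode the driver didn't list
cvt 1280 720
# -> Modeline "1280x720_60.00" 74.50 1280 1344 1472 1664 720 723 728 748 -hsync +vsync

# Register the mode and attach it to the (assumed) VGA-1 output
xrandr --newmode "1280x720_60.00" 74.50 1280 1344 1472 1664 720 723 728 748 -hsync +vsync
xrandr --addmode VGA-1 "1280x720_60.00"
xrandr --output VGA-1 --mode "1280x720_60.00"
```

This only lasts until the X session restarts; a permanent fix still needs a config snippet or a startup script.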


It's not so much a question of which distro (X.org is largely the same no matter what) as of which drivers. The NVIDIA drivers require an xorg.conf, primarily to load the nvidia driver rather than vesa, nv, or nouveau (the xorg.conf it generates itself is very bare-bones); I wouldn't be surprised if AMD Catalyst needs the same.

Beyond that, I haven't had any issues (with auto-detected open source drivers or the proprietary NVIDIA driver) getting every resolution the video card and monitor can support exposed (hell, too often it lists modes my monitor flat-out can't do, in addition to all the ones it can).


Proprietary driver installs are improving as time goes on. I haven't looked into how, but Fedora 16 with the Catalyst drivers from RPMFusion can run without an xorg.conf.

Of course, you don't need this for Doom. Even Darkplaces (the Quake engine) can do 60+fps with high settings at 1920x1200 on the latest free radeon drivers.


To make a long story short, I finally managed to get Chocolate Doom working the way I wanted, save for a minor gamma problem. But as it turns out, the OPL music wasn't what I had been expecting as far as accuracy is concerned (not that I wasn't warned): it screws up in exactly the way emulated OPL does. So in the end, if I have to dual-boot just for this, I'd rather boot into real DOS and get perfect OPL straight out of vanilla.

What a disheartening journey this was, but I learned enough to keep myself from trying anything Linux for the rest of my life :)

Porsche Monty said:

the OPL music wasn't what I had been expecting as far as accuracy is concerned (not that I wasn't warned): it screws up in exactly the way emulated OPL does

Just to be certain: you're sure that you're actually using hardware OPL and not still using the emulator, right? It should say this in the startup messages:

OPL_Init: Using driver 'Linux'.


Yep, pretty sure, but just to be even more sure: is emulated OPL filtered in Linux under any circumstance?

