
Doom Wiki


First off, I just want to go ahead and voice my support for the wiki move, for whatever that's worth (probably not much :p)

Secondly, and most importantly: naming this wiki anything that doesn't contain the phrase "Doom Wiki" is a bad, bad idea.

I've read in this thread that some want to change the name in order to avoid direct competition with the Wikia Doom wiki. The problem with this idea is that this wiki will (obviously) cover the same topics the Wikia one currently does, and that alone makes the Wikia Doom Wiki a competitor, whether you want it to be or not.

The thing is that a) many people on the internet know what a wiki is, b) when these people want to find specific information about Doom, they are going to go to Google to find a wiki on Doom, and c) the search term they are going to use is "doom wiki". Not "doomwiki", not "doom database", and most certainly not "cacopedia". Keep in mind that as a search string, "doomwiki" is NOT the same thing as "doom wiki" (unless it is in a domain name, where no spaces can exist, obviously).

When it comes to SEO, your name is one of the most important factors determining which search results you'll appear in and where you'll rank; naming the wiki anything that doesn't contain the phrase "Doom Wiki" will make SEO that much harder (and it will take much, much longer before the new wiki appears above the old one for the most common search strings).

I could go on and on here about proper SEO practices - how PageRank plays into it all, the proper ways to have other pages link to the new wiki, etc. - but considering where this is going, it might be a waste of time. I can always go over this at a later time if need be.

So instead of babbling on, I'll just point you to the Transformers Wiki, a successful case of defeating Wikia in search results. They really had the right idea there, and I think doing something similar for the Doom Wiki would be a great idea. Much like their title is "Transformers Wiki - TFWiki.net", ours should be "The Doom Wiki - DoomWiki.org".

---
Sigvatr said:

Are we going to need a new web design for the new wiki, or should we have one?

I designed a few well-known sites, like http://www.soldat.pl

Well, the design of the site will be dictated by whatever skins are available for MediaWiki. By default, AFAIK, the current version of MediaWiki comes with Monobook and Vector (IIRC that's what it's called, anyway - it's the current Wikipedia skin).

Obviously it's possible to develop custom skins for MediaWiki, but I dunno how dedicated anybody is to that purpose - I am sure it is a fair bit of work. I've heard that attempts to export the Wikia skin (the current, sane one) were unsuccessful due to some kind of obfuscation or server-side hacks they have going on, probably to ensure their look can't be borrowed.

However, it is of course possible to customize the existing skins as well - changing the colors and some basic elements of the layout - without having to invest the amount of work needed to develop an entire custom skin.
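
For that lighter customization, most of it is a few lines in LocalSettings.php rather than actual skin work - a minimal sketch (these are standard MediaWiki settings; the values are just placeholders):

    # LocalSettings.php - basic look-and-feel tweaks
    $wgSitename    = "The Doom Wiki";  # appears in page titles
    $wgDefaultSkin = "monobook";       # or "vector"
    $wgLogo        = "$wgScriptPath/images/wiki-logo.png";  # sidebar logo

Color and layout changes beyond that can live on the wiki itself, in the MediaWiki:Common.css page, which every skin loads - no custom skin needed for the basics.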

---

I'd say that Monobook is the sane skin; but then again I never liked Monaco.

---

I've made skins for lots of shit - forums, blogs, MySpace - shouldn't be a problem at all.

I'll just have to get MediaWiki running off of my box so I can test it while I work on it.
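
For reference, a throwaway local copy is quick to stand up on a LAMP box - a rough sketch (the version number, paths, and database name are just examples; the web installer handles the actual configuration):

    # grab and unpack the current MediaWiki release
    wget http://download.wikimedia.org/mediawiki/1.16/mediawiki-1.16.0.tar.gz
    tar -xzf mediawiki-1.16.0.tar.gz -C /var/www
    mv /var/www/mediawiki-1.16.0 /var/www/wiki

    # give the installer an empty database to work with
    mysql -u root -p -e "CREATE DATABASE wikidb;"

    # then browse to http://localhost/wiki/ - the web installer
    # walks you through the rest and writes LocalSettings.php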

But if there is a landslide response in the affirmative, I can get started soon.

---

Nuxius does make a good point. The title of the fork could very well be "The Doom Wiki - DoomWiki.org", as he suggested. This title would also include the actual vote-winning pick, "DoomWiki.org", so it would be a reasonable compromise in that sense too. Taken as a whole, the title would both step on Wikia's toes in the search results and still differ slightly from the original wiki's name.

As for the other tasks at hand, I think we really need to start thinking about who is going to take care of what in order to get the fork project rolling in practice. For example, if there are any volunteers for handling the database dump and fork-building duties, please let us know here.

For one, I would love to see Sigvatr's take on a possible new Monobook skin for the fork.

---

Sorry for stating the obvious, but if no one makes a move, it's not going to happen. I think someone mentioned that the first step was to obtain a copy of the database from Wikia - has anyone done that yet?

---
DooMAD said:

Sorry for stating the obvious, but if no one makes a move, it's not going to happen. I think someone mentioned that the first step was to obtain a copy of the database from Wikia - has anyone done that yet?

Yeah, that's easy - you can download them right from the page. We also need an image dump and I don't know how to get that.

Once we have both of those, we need MediaWiki set up on Mancunet. Nobody's volunteering to help with this, so I don't know how or when that will happen. I don't think I know enough about it to do it all myself - especially not the heavy PHP needed to import and set up stuff like user accounts.
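
For what it's worth, the mechanical side of the import is mostly canned - MediaWiki ships maintenance scripts for it. A sketch of the usual sequence, run from the wiki's install directory (the dump filenames are placeholders):

    # import page text and history from an XML dump
    php maintenance/importDump.php doomwiki-pages.xml

    # register a directory of dumped image files in the new wiki
    php maintenance/importImages.php /path/to/image-dump

    # rebuild recent changes so the imported edits actually show up
    php maintenance/rebuildrecentchanges.php

User accounts are the one thing none of those scripts handle, so that part would still be custom work.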

---

I would be surprised if some kind of software like Fantastico wasn't able to flip open a new MediaWiki in 2 seconds.

---
Quasar said:

We also need an image dump and I don't know how to get that.

Just finished it an hour ago. It's current to 5:14pm 21-Nov-2010, keeps the relative directory structure, includes a big list (3006 entries) in HTML, and AFAIK is missing only one file - but at 350MB it's too big for my Mediafire account. If we're starting to get serious about this, could someone point me towards an alternate file store that doesn't suck?

---
GreyGhost said:

If we're starting to get serious about this, could someone point me towards an alternate file store that doesn't suck?


Check your PMs. :)

---
GreyGhost said:

Just finished it an hour ago. It's current to 5:14pm 21-Nov-2010, keeps the relative directory structure, includes a big list (3006 entries) in HTML, and AFAIK is missing only one file - but at 350MB it's too big for my Mediafire account. If we're starting to get serious about this, could someone point me towards an alternate file store that doesn't suck?

I don't think you understand, or else I'm misunderstanding what you've pulled. We don't need HTML files, etc. We need the "image" files from the site, such as pics - not a wget mirror, just the binary data in a form that can be reimported. The pages themselves have to come in the form of a database file, or else MediaWiki will not be able to import them.
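
To be concrete about "a form that can be reimported": MediaWiki's importImages.php maintenance script just wants the original full-size binaries under their original filenames, so a clean zip of those would slot in like this - a sketch; the zip name and paths here are assumptions:

    # unpack the image dump and register every file in the new wiki
    unzip "image dump.zip" -d image-dump
    php maintenance/importImages.php --search-recursively image-dump

The catch, as Gez notes below, is that this re-registers everything as fresh uploads - the per-file history is lost.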

---

Right now there are 2,688 articles on the wiki. I'm sure half of them are one-sentence articles about some mapper who added himself to the wiki.

Ruling out all of that crap, there honestly isn't too much content that definitely needs to be moved over.

---
DooMAD said:

Check your PMs. :)

Done and done, upload complete. :-)

Quasar said:

I don't think you understand, or else I'm misunderstanding what you've pulled. We don't need HTML files, etc.

It's a minor miscommunication - the file list is the only thing in HTML; it's the language I'm most familiar with. Here's a CSV-formatted directory listing of what I've downloaded. For some odd reason, the non-image files haven't kept their original datestamps, so they're all clustered at the bottom of the list amongst the more recent uploads.
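
If anyone wants to regenerate that kind of listing themselves, GNU find can emit one directly - a sketch, with the dump directory name assumed:

    # path, size in bytes, last-modified date - one file per line
    find image-dump/ -type f -printf '%p,%s,%TY-%Tm-%Td\n' > listing.csv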

---

Yeah, but what this archive means is that we can only rebuild the database by reuploading all the images... and we lose the history and previous versions in doing so.

Without the database dump, the files are going to be considered missing, since they won't be in the database. I remember the UESP wiki had some problems with images when it moved to a multi-server setup to spread the load, because the single server was getting hammered by requests. A configuration error kept the image database from staying in sync across the servers, so you'd upload an image and it still wouldn't show in the article, because the image went to server A while the article was served from server B. The file was technically there on both, but the image page was redlinked anyway - if you went to it, you saw the pic along with a message telling you the file did not exist.


So yeah. Database dump or nothing.

---
Gez said:

So yeah. Database dump or nothing.


The people have called. Who will man up and help get this done? My resources are available as needed, but please don't depend on me to get it all done on my own - I'm just one guy :)

---
Quasar said:

I don't think you understand, or else I'm misunderstanding what you've pulled. We don't need HTML files, etc. We need the "image" files from the site, such as pics - not a wget mirror, just the binary data in a form that can be reimported. The pages themselves have to come in the form of a database file, or else MediaWiki will not be able to import them.

The uploaded file is called "image dump.zip", so I'm assuming that's what you wanted. If there's any doubt, download it and find out, heh.

---

Yeah, I would like to see some action on this before the end of the month; otherwise I don't feel it's actually going to happen.

---
DooMAD said:

The uploaded file is called "image dump.zip", so I'm assuming that's what you wanted. If there's any doubt, download it and find out, heh.

I attempted to download this and your server cut me off at 119 MB. I don't really feel like trying this 500 times, so is there any way to get it onto a reliable download location?

EDIT: Cut off a second time at 120 MB. I think the server is simply refusing to send more than that.

---
Blzut3 said:

This page may be able to help. I don't have everything needed to run the MediaWikiDumper script, though.


I spent some time playing with MediaWikiDumper yesterday, and here's the result:

[screenshot of the imported wiki]

MediaWikiDumper is basically a scraper that uses the MediaWiki API, so if you did use this tool to copy the Wikia version of the Doom Wiki, you'd have to find some way of linking people back up with their user accounts. The passwords in the MediaWiki database that MediaWikiDumper creates are all blank, as are the e-mails, because neither can be scraped via the MediaWiki API. That means people couldn't use the "e-mail me a new password" link to get access to their accounts as they were on Wikia - there are no e-mail addresses in the user database.

The good news is that MediaWikiDumper appears to be incremental, i.e. it will only scrape the pages/files that are new since the last time you ran it. It also puts all the images back into the proper directory structure, so that when you restore the content by inserting it into the database and copying the image folder MediaWikiDumper creates into the MediaWiki folder, things will Just Work, provided your MediaWiki is set up correctly.
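
In other words, the restore boils down to two operations: load the database, copy the tree. A sketch, assuming the dumper's output is a SQL file, a MySQL-backed wiki, and the default images/ upload location (all names here are placeholders):

    # load the scraped content into the new wiki's database
    mysql -u wikiuser -p wikidb < mediawikidumper-dump.sql

    # drop the scraped files into place; MediaWiki serves uploads from images/
    cp -r dumper-output/images/* /var/www/wiki/images/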

I also found out how to download database backups of the Wikia content without bugging the Wikia people: the links are near the bottom of the Special:Statistics page on the Wikia Doom Wiki.

You will need to find a way to remove the Wikia-specific junk from any dump you take - you can see an example in my screenshot: the <mainpage-leftcolumn-start /> tag, as well as the left-hand navigation bar. It's easier, IMO, to remove that junk before you import it into MediaWiki than to try to clean it up by hand later on.
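
That pre-import cleanup can be scripted. A crude sketch that strips just the one tag mentioned above (note that in an XML dump the wikitext is escaped, so you match the escaped form; a real pass would need one pattern per Wikia-ism, and the filename is a placeholder):

    # remove the Wikia-only main page tag everywhere it appears
    sed -i 's|&lt;mainpage-leftcolumn-start /&gt;||g' doomwiki-pages.xml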

It took me some time to work out an Apache config that would make MediaWiki happy; I think I have it nailed now, and I can pastebin it if someone wants to use it as a base.

My mailserver wasn't working on this host either, so I blew my chance to have a new password e-mailed to me, and I don't know enough about MediaWiki's guts to reset it. By default you only get one password reset per day through the MediaWiki web interface. I tried using the PHP password-changing script that MediaWiki comes with, but it didn't work, and I haven't had the time to hack on it further.
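
For reference, the stock script for this is maintenance/changePassword.php - presumably the one that misbehaved for me. The usual invocation, with a placeholder user name and password, is:

    # reset a user's password from the command line, no e-mail round trip needed
    php maintenance/changePassword.php --user=SomeAdmin --password=newpassword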

If anyone wants a copy of the dumpfile I created with MediaWikiDumper, please PM me and I'll send you the URL - probably not today, but tonight or tomorrow my time (San Diego). The dump is almost 400M. It's not difficult to create your own with MediaWikiDumper, however.

Oh yeah, happy turkey day y'all.

---
spicyjack said:

I spent some time playing with MediaWikiDumper yesterday, and here's the result:
*snip*

This should really go straight to Manc, because he's the one who controls the server we want to host the wiki on. Thanks for all this work - I'm quite impressed.

---

External Sponsor Links
Click here!
Buy a sponsored link and description for your website on this page. Act quickly, the few sponsorship slots sell out fast!


I saw this at the bottom of individual Wikia Doom wiki pages this evening. Is this a new thing, or have I not been paying enough attention?

Example: http://doom.wikia.com/wiki/Foreverhood

If it's a new thing, FFFFUUUU Wikia.

---
