WadArchive

Closing Wad Archive

Recommended Posts

Sucks to see you go, dude. Thanks for years of help.

 

does this mean WadHosting will be going too?


Echoing others in that I'm sad to see it go. Pretty sure Wad Archive is the first place I ever downloaded wads. Hope you're doing OK. Thank you for making the data available.


Real shame, it was an excellent site. Especially loved the screenshots feature! Thanks for running it all this time and passing the data on.


Now I'll have to update my two PWAD Sourcing projects to include links to Internet Archive's ISO Contents explorer feature, or links to rare PWADs on my Discord server, since many Doom Levels CDs are only available in BIN/CUE format. Not that there were any Red Book audio tracks on those discs anyway.

5 minutes ago, Wadmodder Shalton said:

Now I'll have to update my two PWAD Sourcing projects to include links to Internet Archive's ISO Contents explorer feature, or links to rare PWADs on my Discord server, since many Doom Levels CDs are only available in BIN/CUE format. Not that there were any Red Book audio tracks on those discs anyway.

Can I get a link to this server?


One of my favorite features of Wad Archive was the screenshots and map views it generated; they displayed a lot of useful information before playing. Sad to see it go. Hopefully we'll get some sort of WAD IMDb in the future, near or far.


Well, the archive is 260 files totalling just shy of 1TB (zipped), and from the OP it looks to be a bunch of MongoDB documents. I'm familiar with MongoDB already, though I haven't used the Atlas service, so it should be possible to extract that info from the data dump. Once I have found a big enough HDD to drop it all onto...

:-) maybe not..

The first one is downloading now, so I can have a look and see what is there. My smeghammer site is backed by GitHub, so no server-side stuff, but it should be straightforward to extract the WAD data and host it as flat files fronted by some JavaScript (like I did for the R667 mirror last year), and probably extract all filenames as a text file too.

 

Watch this space.

Edited by smeghammer

25 minutes ago, Doomkid said:

Just tossing this out there - it would be helpful to have a .txt file with a simple listing of every single wad/pk3/pk7 that was ever on Wad-Archive, even better if paired with a SHA-1 for each. This way, the many files not on /idgames can at least be “confirmed to exist” when someone searches for it. (Wad-Archive seems to have by far the best SEO of all wad sites and could easily bring up any file even with a Google search).
 

A simple list of every site Wad-Archive scraped would be fantastic too. I know it hit most of the big ones, but it also checked a whole bunch of somewhat more obscure ones which aren't as well known. It would be awesome, and would save so many hours, to have that info!

I can generate a list of WADs and upload it when I get home from work. The SEO part was probably due to what my job was at the time, glad it was working haha.

As for the list of sites that are scraped, I could list them here, but I'm not sure if I will include it in the Internet Archive upload.

 

7 hours ago, Doomkid said:

Sucks to see you go, dude. Thanks for years of help.

 

does this mean WadHosting will be going too?

Most likely


It would be very useful if said text file(s) could map the UUID filename to the wad/map name. If it is structured, I can render it programmatically with a bit of AJAX and JavaScript.


I have attached a list of all the sites that are scraped. Some may no longer exist or may not provide WADs anymore.

1 minute ago, smeghammer said:

It would be very useful if said text file(s) could map the UUID filename to the wad/map name. If that is structured, I can render that programmatically with a bit of AJAX and javascript.

If you want a mapping from ID to map name, just parse wads.json. Being JSON, it is easy to handle with JavaScript.
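As a sketch of that parsing step — note the `id` and `name` field names are assumptions about the dump's schema, so check the actual document structure first:

```javascript
// Build an id -> name lookup from the parsed wads.json documents.
// NOTE: "id" and "name" are assumed field names, not confirmed schema.
function buildIdToName(wads) {
  const lookup = new Map();
  for (const doc of wads) {
    if (doc && doc.id && doc.name) lookup.set(doc.id, doc.name);
  }
  return lookup;
}
```

In a browser this pairs naturally with `fetch('wads.json').then(r => r.json())` before calling `buildIdToName`.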

sites.zip


Ah, I see. The lumps.json... needs some pre-processing, I think :-) 18GB probably can't be used in-memory...

 

Is this a straight mongodump, or have the documents been collected into a single JSON root? I'd like to push this into Mongo if possible - it'll be easier to process. I'll see if VSCode opens it tomorrow (Notepad++ just crashed...).


Yes, it is a dump from MongoDB - have a look at README.MD. Yeah, lumps.json is a biggie, which is why I uploaded it compressed.


Thank you for uploading everything to archive.org instead of letting potentially hundreds of wads become lost to time. You've done a great service to the Doom community.


Thanks a ton for the site list. I'm looking forward to the wad list (when and if you get time), but this will be so helpful in making the transition into the post-WadArchive era.


Sharing the sentiments: sorry to see the site shut down, but thanks to WadArchive for years of great community service, and for now helping organize the archival/transition process (instead of disappearing in a puff of smoke).

 

The Doom Wiki has nearly 100 links to wad-archive.com. The normal convention is to change them into Wayback Machine links, and this can be automated via a XymphBot script. But the links need to exist in the archive first, or be saved now. With so many dozens of links, verifying and doing this manually would be exhausting for me. Can someone assist with this process?
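For the verification half of that job, the Wayback Machine's public availability API (`https://archive.org/wayback/available?url=...`) can be queried per link. A rough sketch of the two helper steps, assuming the standard JSON response shape:

```javascript
// Build the availability-API query URL for a target page.
function availabilityUrl(target) {
  return 'https://archive.org/wayback/available?url=' + encodeURIComponent(target);
}

// Extract the closest snapshot URL from the API's JSON response,
// or null if the page was never archived.
function closestSnapshot(apiResponse) {
  const closest = apiResponse.archived_snapshots &&
                  apiResponse.archived_snapshots.closest;
  return closest && closest.available ? closest.url : null;
}
```

Usage (Node 18+ or a browser): `const snap = closestSnapshot(await (await fetch(availabilityUrl(link))).json());` — links that come back null would then need to be saved before the wiki's URLs are rewritten.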


WadArchive has been a critical and under-appreciated resource for this community for so long. Thank you for maintaining it as long as you did.

1 hour ago, Xymph said:

But the links need to exist in the archive first, or be saved now. With so many dozens of links, verifying and doing this manually would be exhausting for me. Can someone assist with this process?

A friend of the wiki took care of this already. Thanks mate.

