spicyjack

Members
  • Content count: 61

Everything posted by spicyjack

  1. spicyjack

    Software renderer in source ports

    What fabian said. Mesa 7.5 removed all of the old drivers (3dfx/Matrox/older ATI cards) and switched to a new driver framework (Gallium) for the cards it still supports (newer ATI/NVidia/VMware/Intel), so if your distro ships version 7.x or 8.x of the Mesa 3D libs, expect some things to be broken. Debian Wheezy (7.x) comes with Mesa 8.0.5; the ATI drivers in that version work in software render mode only, but the NVidia drivers are hardware accelerated. Like you said, you're probably going to have to wait until your distro updates, since the distros are the ones doing all of the testing of combinations of Linux kernel + X + Mesa. Or you can compile the latest Mesa and be your own guinea pig ;) List of currently supported cards in Mesa: http://mesa3d.sourceforge.net/systems.html
  2. spicyjack

    Conceptual Doom arcade cabinet

    It's actually sort of already been done before, in the Ultimart... For those of you scratching your heads, the movie is Grosse Pointe Blank (Wikipedia), with John Cusack, Dan Aykroyd, and Minnie Driver.
  3. spicyjack

    The largest WAD on /idgames

    Nope, that's close enough.

    Calculating runtime statistics...
    - Total files found in archive: 34738
    - Total size of files in archive: 33.16G
    - Total files currently in /newstuff directory: 58
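If anyone's curious how the totals get crunched, here's a rough Python sketch of the counting pass. This is illustrative only, not my actual tool; the function names and the size formatting are my own choices.

```python
import os

def archive_stats(root):
    """Walk a local /idgames mirror, totalling file count and bytes used."""
    total_files = 0
    total_bytes = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            total_files += 1
            total_bytes += os.path.getsize(os.path.join(dirpath, name))
    return total_files, total_bytes

def human_size(num_bytes):
    """Format a byte count the way the stats above are shown (e.g. 33.16G)."""
    n = float(num_bytes)
    for unit in ("B", "K", "M", "G", "T"):
        if n < 1024 or unit == "T":
            return "%.2f%s" % (n, unit)
        n /= 1024.0
```

Point it at the mirror root and print `human_size(total_bytes)` to get the "G" figure.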
  4. spicyjack

    The largest WAD on /idgames

    Yeah, the file paths are not shown because I'm still working on the bits that cross-correlate between the two databases I'm using: a local copy of the /idgames database info as queried via the API, and a database of WAD information I'm generating with scripts that touch all of the files in an unofficial mirror of /idgames that I keep. FWIW, my unofficial mirror of /idgames is at: http://phobos.xaoc.org/ Please to be exercising my bandwidth quota. It's updated every 12 hours, and anyone is free to abuse^H^H^H^H^Hdownload from it. If I don't hit any snags, at some point I would like to ask Ty about becoming an official mirror.
  5. spicyjack

    The largest WAD on /idgames

    !!BEGIN DISCLAIMER!! This is old data, from the beginning of February, prior to Ty's purges. !!END DISCLAIMER!!

    sqlite> SELECT filename, size FROM wads ORDER BY size DESC LIMIT 20;
    filename        size
    bbfourty.wad    448402705
    Damnation.wad   256445935
    DVII-1i.wad     177678822
    vg.wad          161965805
    action2.wad     143208125
    rod2.wad        126580515
    DOOM80.wad      121200022
    CIF3.wad        112680042
    chemdept.wad    104857600
    Tartaru3.wad    98724881
    UACMN2.wad      95408720
    3057hub1.wad    89378569
    AVGNDOOM.wad    82540404
    mangskin.wad    78205429
    SCmusic.wad     74362635
    dimxmus.wad     72589848
    cchest4.wad     72442794
    NDCP2.wad       69450467
    2MusVids.wad    67907838
    NeoDoom.wad     66536230

    This data is from a set of WAD|/idgames tools that I'm working on: things like syncing /idgames, and going inside zip files to figure out what's there. Note that my tools don't handle *.pk3 files yet; that's on the TODO list.

    Edit: made the disclaimer stand out a bit more. Does vB code have [blink] tags?
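The query above is plain SQLite, so it's easy to replicate. Here's a self-contained Python sketch against a toy in-memory table; the two-column schema is my guess at the minimum needed, not my real schema.

```python
import sqlite3

# Tiny stand-in for the real wads table (schema is an assumption).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wads (filename TEXT, size INTEGER)")
conn.executemany(
    "INSERT INTO wads (filename, size) VALUES (?, ?)",
    [
        ("bbfourty.wad", 448402705),
        ("Damnation.wad", 256445935),
        ("small.wad", 1024),
    ],
)

def largest_wads(conn, limit=20):
    """Return (filename, size) rows, largest first; same query as above."""
    cur = conn.execute(
        "SELECT filename, size FROM wads ORDER BY size DESC LIMIT ?", (limit,)
    )
    return cur.fetchall()
```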
  6. spicyjack

    doom reading material

    There are tools to dump wikis; a quick Google got me: https://code.google.com/p/wikiteam/ When I helped migrate the Doom Wiki from Wikia to manc's server, the tool that someone said worked was MediaWikiDumper, so that's what I used. I was able to get a dump of all of the pages and images from Wikia, with the pages going into a database and the images into a directory structure. At the time I did the dump, it was about 400M of data. MediaWikiDumper has since morphed into http://www.mediawiki.org/wiki/Manual:Grabbers, if you want to go that route.
  7. spicyjack

    Post your Doom picture! [post in Part 2 instead]

    Doom IRL. These are outside my office building. Every time I see them, I think of TLMP, or maybe a cross between TLMP and COLU.
  8. I went through all of the API entries again, both in XML and JSON, and they both parsed out fine. Thanks for all of the fixes.
  9. spicyjack

    Level->SVG exporter?

    Colin Phipps did a wad2svg script in Perl; he has it on his doombsp project page on Sourceforge. Homepage: http://doombsp.sourceforge.net/wad2svg/ Tarball: http://sourceforge.net/projects/doombsp/files/wad2svg/0.1/wad2svg-0.1.tar.gz The only bad part about the script is that it's dirt old; I don't think it's been updated since about 2002. I did a few tests with it, if you want to see what it outputs. Here's the example SVG that's listed on his page (125Kb): http://doom.spicyjack.com/pix/example.svg Not sure why Chrome won't render this as an SVG in the browser for me. Here's a try at rendering Mek Velapax... WARNING: this SVG file is about 5.7M, so you might want to download it first and open it in a dedicated SVG program like Inkscape or Adobe Illustrator; it takes Chrome forever to render it. http://doom.spicyjack.com/pix/mek-velapax-beta1.svg The SVG Perl module should be available as a package on any modern Linux system; for example, on Debian/Ubuntu the package is named libsvg-perl. It may also be installable on Strawberry Perl for Windows via cpanm.
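If anyone wants to roll their own, the core of a level-to-SVG exporter is tiny. Here's a toy Python sketch, not wad2svg itself; a real exporter needs to parse the linedefs out of the WAD first, and handle scaling and y-axis flipping, which I've left out.

```python
def linedefs_to_svg(linedefs, width=1024, height=1024):
    """Render (x1, y1, x2, y2) line segments as a minimal SVG document.

    Toy sketch: assumes coordinates already fit the viewport.
    """
    parts = [
        '<svg xmlns="http://www.w3.org/2000/svg" width="%d" height="%d">'
        % (width, height)
    ]
    for x1, y1, x2, y2 in linedefs:
        parts.append(
            '<line x1="%d" y1="%d" x2="%d" y2="%d" stroke="black"/>'
            % (x1, y1, x2, y2)
        )
    parts.append("</svg>")
    return "\n".join(parts)
```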
  10. I vote for an API version bump because of the change in API version attribute from a stringy-quasi-float to an integer. That way, no apologies would be required ;) Also, a changelog of all of these changes somewhere on the API page itself would be awesome, so people who work with the API can match API versions to different functionality in the API. Maybe list all of the changes, major and minor, but major changes (where the behavior of the API changed) get highlighted in a different color? I'll go through all of the files via JSON API requests again in the next few days to verify they can be parsed by my parser, and will update this thread when I'm done.
  11. spicyjack

    /idgames database ID numbers

    Can it be replaced with some kind of URL shortening service then? A service that is read-only to end users and generates the same URL every time, and not just for /idgames, but also for, say, the shovelware CDs stored on archive.org (in case someone actually does go through and catalog them at some point), or any other third-party site with WADs. There was another thread where someone had a giant list of sites they were going through and downloading. For the URL shortening, I was thinking of something along the lines of generating a checksum from domain + path + filename and turning that into base32; the first files indexed get the shortest URLs possible, and longer URLs are generated from the hashes as needed to prevent collisions. For /idgames files, use fullsort.gz to generate URLs from the oldest file to the newest. Edit: added mention of fullsort.gz for determining the indexing order for /idgames
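To make the scheme concrete, here's a rough Python sketch of the hash-to-base32 idea. The digest algorithm, minimum code length, and function name are all my own assumptions, just to show the collision-handling part.

```python
import base64
import hashlib

def short_code(domain, path, filename, taken, min_len=4):
    """Checksum domain + path + filename, base32-encode it, and issue the
    shortest prefix that doesn't collide with an already-issued code.
    Earlier-indexed files therefore get the shortest URLs."""
    key = "%s%s%s" % (domain, path, filename)
    digest = hashlib.sha1(key.encode("utf-8")).digest()
    full = base64.b32encode(digest).decode("ascii").rstrip("=").lower()
    for n in range(min_len, len(full) + 1):
        candidate = full[:n]
        if candidate not in taken:
            taken.add(candidate)
            return candidate
    raise ValueError("hash exhausted; use a longer digest")
```

Feeding files in fullsort.gz order (oldest first) into `taken` would give the oldest /idgames files the shortest codes.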
  12. spicyjack

    /idgames database ID numbers

    No, sorry, my question was for using the idgames:// URL syntax with a file ID, for example, idgames://12345. If file IDs can change in the future, what's the point of referring to files by file ID in an idgames:// URL? Will you get some kind of redirect to another idgames:// URL if the file ID changes at a later date, or ???
  13. spicyjack

    /idgames database ID numbers

    Does that mean that using an idgames://<file ID> URL to refer to a file shouldn't be relied on as well?
  14. Only if you use the server for all searches. If you have a copy of the data, or even a subset of the data, then you can "solve" that problem however you want.
  15. The issues I found would make excellent test cases for whatever implementation you decide to work out ;) As floating point values, if your version is currently 1.9 and you bump to 1.10, the trailing 0 may^H^H^Hmost likely will get truncated, making the resulting version 1.1. The same thing happens when moving from 1.99 to 1.100. The trailing zeros at the end of a floating point number are unnecessary, and would most likely be stripped as soon as the first math operation took place, depending on the implementation of the math library. If the version were a string, then to the computer all of the digits are significant, and they would never be silently stripped off. No rush on the fixes; I'm currently using the XML for everything, so this is more for people who write things in the future and want to consume the JSON.
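You can watch the truncation happen with any float parser. A quick Python demonstration; the tuple-compare helper at the end is just one common way out of the problem, not something the API does.

```python
# As floats, "1.10" and "1.1" are literally the same value, so the trailing
# zero is unrecoverable; as strings, every digit survives.
assert float("1.10") == float("1.1") == 1.1
assert "1.10" != "1.1"

def version_tuple(s):
    """Compare dotted versions part-by-part as integers, so 1.10 > 1.9."""
    return tuple(int(part) for part in s.split("."))

# Float comparison gets 1.9 vs 1.10 backwards; integer parts get it right.
assert float("1.10") < float("1.9")
assert version_tuple("1.10") > version_tuple("1.9")
```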
  16. spicyjack

    Power outages

    All of San Diego County lost power in 2011 (http://en.wikipedia.org/wiki/2011_Southwest_blackout). A power company employee at a substation in Arizona zigged when he should have zagged, and about 3 million people went dark for about 12 hours. Twelve hours is nothing compared to having power out for a few days in the winter, but around here it's like when it rains: people just don't know how to cope. During the day it was probably 85-90F (30-33C), and when it's that warm for a long stretch, most houses become hotboxes at night; it's almost impossible to sleep without fans or A/C, both of which require power.

    Power dropped at 3:30 in the afternoon and didn't come back on for me until 12:30am. I was at work when the power cut out. There was a huge line of cars trying to get onto the freeway near where I work, maybe 2 or 3 miles long. People pull into intersections when the power is out and then get stuck there, because the people ahead of them get stuck and everybody stops moving. I usually ride my motorcycle to work, and there's a special carpool onramp near my office, in the opposite direction from the normal non-carpool onramp, so when I left work I was able to boogie up to the carpool entrance. It only took me 5 minutes to get on the freeway, and I was home within about 20 minutes.

    After I got home, the UPS on my computer lasted about 2-2.5 hours; then I shut down all of my computer equipment and went hiking up into the hills above my house. I wanted to see and photograph the area between the hill behind my house and the Pacific Ocean, about 7 miles (~12 km) to the west. Usually at dusk and at night it's very bright out there: lots of street lamps, houses, and stoplights. That night, all I could see were the headlights of the cars on the freeway. Here's one of the pix I took.

    The lights you see in the picture are cars on Interstate 56 in San Diego County, heading eastbound, and a few cars on surface streets. The Pacific Ocean is visible in the background, below the line of the horizon but above the hills. Probably 20,000-30,000 people live within the frame of this picture, and they were all dark when I took it.

    After I got home from hiking in the hills, I went swimming; the condo complex I live in has a common pool. I was the only person who thought of going swimming that night. As warm as it was, nobody else wanted to cool off in the pool, so I had the place to myself. It was nice being able to look up and see the stars and the moon for once, instead of the light pollution from the city; everything was super bright. The best part was that my water heater still had moderately hot water, so I was able to get a hot shower after swimming, then crashed out until the power got switched on at 12:30am.

    It sucks that you have no power, but it sounds like you're coping with it fine. I hope you get power back soon.
  17. (Note: I also e-mailed this directly to MTrop, but I'm posting it here so everyone can see it.) I've been parsing the contents of the idGames API over the last couple of weeks, and I found some more issues with the JSON. The XML requests are parsing fine, so no worries there. My methodology was to start at file_id=1 and parse every file up to the last file ID listed in a latestfiles query. I haven't tried to parse any of the new entries since idGames Archive was re-enabled after the hardware swap. Here's what I've found:

1) http://www.doomworld.com/idgames/api/api.php?action=get&id=1243&out=json
One of the rating objects in that record has an unquoted value of >+1<; unquoted text is usually treated as a number, and >+1< is not a valid JSON number. A plus sign is only valid in an exponent, so something like >1E+12345< is valid JSON.

2) http://www.doomworld.com/idgames/api/api.php?action=get&id=5159&out=json
The credits field has an unquoted value of >.4<, which is not a valid float value; it's missing the leading zero. It should either be quoted to make it text, or have the leading zero added to make it a number. Since the credits field is usually text, I think you'll want to quote it.

3) http://www.doomworld.com/idgames/api/api.php?action=get&id=15306&out=json
One of the vote objects has a text field with an unquoted value of >0.<, which is not a valid float value. That value should be quoted, or made into a valid floating point number.

Maybe you could go through all of the fields in all of the API requests, decide beforehand whether each field is a string or a number, and always apply quotes to the fields that should be strings in the JSON response?

Also, one other thing: the JSON >"meta":{"version":1.0}< is somewhat ambiguous; my parser is truncating the number and turning it into the integer 1. If the version ever gets incremented, are you expecting people who consume the API to compare versions as strings, or as floating point numbers? If you're expecting string comparison, you need to quote the version so JSON parsers treat the value as a string. If you're expecting float comparison, things could get real interesting when the last digit of your floating point version number is a >0<; based on what I've seen so far, I expect most parsers to drop that trailing >0<. FWIW, the version value in the XML is quoted, so it's treated as a string. JSON RFC for reference: http://tools.ietf.org/html/rfc4627
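For reference, here's how a strict JSON parser (Python's json module in this case) treats the three malformed values above; the corrected forms parse fine.

```python
import json

def parses(text):
    """Return True if text is valid JSON, False otherwise."""
    try:
        json.loads(text)
        return True
    except ValueError:
        return False

# The three malformed values reported above all fail a strict parser:
assert not parses('{"rating": +1}')   # leading '+' is not a JSON number
assert not parses('{"credits": .4}')  # missing leading zero
assert not parses('{"text": 0.}')     # missing fractional digits

# Corrected forms are fine:
assert parses('{"rating": 1}')
assert parses('{"credits": "0.4"}')
assert parses('{"text": 0.0}')
```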
  18. spicyjack

    Program for weeding out duplicate files?

    I'm guessing you're running Windows; you don't mention it in your original post. If you were on Linux/Mac, I would say fdupes: http://code.google.com/p/fdupes/ Debian/Ubuntu has a package for it. It will delete the duplicate files that it finds, replacing them with hardlinks (basically a UNIX-y pointer to a file), saving you a bunch of space. There's also teh googles and wikipaedias: https://www.google.com/search?q=fdupes%20windows http://en.wikipedia.org/wiki/List_of_duplicate_file_finders For what it's worth, /idgames currently holds 34418 files, at around 32.56G of space.
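If anyone wants the poor man's version, duplicate detection is basically "group files by content hash". A minimal Python sketch; the hash choice and chunk size are arbitrary, and real tools like fdupes compare sizes first to avoid hashing everything.

```python
import hashlib
import os
from collections import defaultdict

def find_duplicates(root):
    """Group files under root by content hash; any group with more than
    one path is a set of byte-identical duplicates."""
    by_hash = defaultdict(list)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            by_hash[h.hexdigest()].append(path)
    return [paths for paths in by_hash.values() if len(paths) > 1]
```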
  19. spicyjack

    Revision control and the Internet

    That goes back to what fraggle was saying about having to think about what you're going to say in your commit messages when you make a commit. Think of it as your present self leaving breadcrumbs for your future self to follow. Another thing I find very helpful is keeping a project journal. I use Markdown files, but plain text files work too; plus, VCSes will diff Markdown/plaintext with no problems.
  20. spicyjack

    Revision control and the Internet

    Generally, the only way a VCS can show you the differences between binary files is if the VCS is written to understand the binary format of the files in question. Neither Git nor Mercurial (nor any other VCS that I know of) will show changes between successive versions of WAD files; but then, they won't show you changes to Excel/Word files, PNGs, JPGs, or any other binary format either, they'll just say that the two files (old and new) are different. AFAIK there's not even a standalone tool that will 'diff' two WAD files, although in theory it could be done, as the WAD file format is well-documented. Have you thought about switching to a different map format? UDMF (http://zdoom.org/wiki/UDMF) is text-based, so any VCS tool would easily be able to show you the changes you make during editing.
  21. spicyjack

    Revision control and the Internet

    I think this is the most important thing: use something that does some kind of change tracking, as opposed to using nothing at all or rolling your own system (say, copying files and adding dates/extensions to "keep track" of your changes). Being able to ask your VCS what changed over a period of time can save huge amounts of headache down the road when you have to go figure things out.

    My first maxim is: blame the artist, not the tool. All VCS tools do pretty much the same thing; the differences lie in how the tools are used, and how a tool integrates into how you do whatever you're trying to do (your "workflow"). I still have repos in CVS, mostly because the effort required to convert them to something else isn't justified by how I use them (mostly read-only at this point).

    If you're learning a VCS from scratch, I would recommend any of the DVCS tools, such as Git or Mercurial; being able to work offline, and not having to administer a central repo (SVN), is IMO worth the price of the somewhat steeper learning curve. As far as Git vs. Mercurial, I recommend the same thing I recommend for choosing a Linux distro: find out what the people you know and trust use, then use that. The reason being, if you ever have questions, you'll already know a few people using the same tool, and it's much easier to ask one of them than to try to find the right Google magick that will answer your problem.
  22. spicyjack

    Revision control and the Internet

    Both Git and Mercurial (distributed version control systems, or DVCSes as fraggle called them) come bundled with tools that can help you migrate code from older VCSes, and if you don't like the supplied tools, other people have written their own; fraggle wrote one that converts from SVN to Git differently from the Git-supplied tool. How do you want a version control tool to "optimize for non-text files"? Compression? 'diff' output between revisions of binary formats? It would be hard to answer your question without a use case that explains what you're looking to do.
  23. Cool, thanks for the super-quick turnaround; I see the updated script live now. Here's what I get now:

    wget -O - "http://www.doomworld.com/idgames/api/api.php?action=get&id=17259&out=json" | xxd | less
    {snip}
    00000a0: 7574 686f 7222 3a22 5a6f 6c74 c3a1 6e20  uthor":"Zolt..n 
    00000b0: 53c3 b366 616c 7669 2028 5a38 3629 222c  S..falvi (Z86)",

So the 0xe1 is now 0xc3 0xa1, and the 0xf3 is now 0xc3 0xb3, both of which are the valid UTF-8 sequences for the offending ISO-8859-1 characters (according to http://www.utf8-chartable.de/). This was actually a good lesson for me; it reminded me that I ALWAYS need to validate my input from external sources :) If I find anything else wonky, I'll be sure to let you know.

As far as the headers:

    $ wget --spider --server-response "http://www.doomworld.com/idgames/api/api.php?action=get&id=17259&out=json"
    Spider mode enabled. Check if remote file exists.
    --2013-09-21 11:17:25--  http://www.doomworld.com/idgames/api/api.php?action=get&id=17259&out=json
    Resolving doomworld.com... 38.68.5.148
    Connecting to doomworld.com|38.68.5.148|:80... connected.
    HTTP request sent, awaiting response...
      HTTP/1.1 200 OK
      Date: Sat, 21 Sep 2013 18:17:22 GMT
      Server: Apache/2.2.23 (Unix) PHP/5.4.10
      X-Powered-By: PHP/5.4.10
      Keep-Alive: timeout=15, max=500
      Connection: Keep-Alive
      Content-Type: application/json
    Length: unspecified [application/json]

No, no encoding type is mentioned in the headers. For what it's worth, the XML output also doesn't have an encoding type in its headers, but since the character encoding is declared in the document itself, it's not such a big deal. I'll keep in mind the contact e-mail address mentioned above in the about call; however, any chance of posting the API script somewhere public, preferably some place with an issues/bug queue, like GitHub/Bitbucket/Sourceforge/etc.? Thanks again!
  24. I have a bug to report. For this entry in idgames: http://www.doomworld.com/idgames/index.php?id=17259

The author entered some characters using ISO-8859-1; notice the 'small letter a acute' and 'small letter o acute' in the author's name. The hexadecimal bytes for these letters are 0xe1 and 0xf3, respectively. The XML API call [1] converts these bytes to the proper UTF-8 byte sequences (0xc3 0xa1 for the acute 'a' and 0xc3 0xb3 for the acute 'o'):

    curl "http://www.doomworld.com/idgames/api/api.php?action=get&id=17259" \
      | xxd - | less
    ...
    00000f0: 332d 3036 2d32 393c 2f64 6174 653e 3c61  3-06-29</date><a
    0000100: 7574 686f 723e 5a6f 6c74 c3a1 6e20 53c3  uthor>Zolt..n S.
    0000110: b366 616c 7669 2028 5a38 3629 3c2f 6175  .falvi (Z86)</au
    0000120: 7468 6f72 3e3c 656d 6169 6c3e 7a65 7261  thor><email>zera

But the JSON [2] is not converted to valid UTF-8; the bytes remain encoded in ISO-8859-1, and those bytes are invalid sequences in UTF-8, so parsers most likely won't parse it. Valid JSON per RFC 4627 is Unicode: UTF-8, UTF-16, or UTF-32 (http://www.ietf.org/rfc/rfc4627.txt).

    curl "http://www.doomworld.com/idgames/api/api.php?action=get&id=17259&out=json" \
      | xxd - | less
    ...
    0000090: 3a22 3230 3133 2d30 362d 3239 222c 2261  :"2013-06-29","a
    00000a0: 7574 686f 7222 3a22 5a6f 6c74 e16e 2053  uthor":"Zolt.n S
    00000b0: f366 616c 7669 2028 5a38 3629 222c 2265  .falvi (Z86)","e

The parser I was using was unhappy until I manually scanned for the offending bytes and converted them to UTF-8 prior to unserializing the JSON into objects. I can switch to using the XML version of the API, but I would prefer to use the JSON version if possible.

For what it's worth, I found a few decent pages [3][4] showing byte encodings for UTF-8, if anyone needs them for future reference:

[1] http://www.doomworld.com/idgames/api/api.php?action=get&id=17259
[2] http://www.doomworld.com/idgames/api/api.php?action=get&id=17259&out=json
[3] http://www.utf8-chartable.de/
[4] http://dev.networkerror.org/utf8

[edit] added valid encoding types for JSON per the RFC and a link to the RFC[/edit]
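To make the byte sequences concrete, here's the round trip in Python. Decoding the broken bytes as Latin-1 and re-encoding as UTF-8 is essentially the manual fix I described above.

```python
# The two offending characters, round-tripped through both encodings:
# Latin-1 yields the single bytes 0xE1/0xF3 seen in the broken JSON,
# UTF-8 yields the two-byte sequences the XML output correctly uses.
assert "á".encode("latin-1") == b"\xe1"
assert "ó".encode("latin-1") == b"\xf3"
assert "á".encode("utf-8") == b"\xc3\xa1"
assert "ó".encode("utf-8") == b"\xc3\xb3"

# Recovering mis-encoded bytes: decode as Latin-1, re-encode as UTF-8.
broken = b"Zolt\xe1n S\xf3falvi"
fixed = broken.decode("latin-1").encode("utf-8")
assert fixed == b"Zolt\xc3\xa1n S\xc3\xb3falvi"
```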
  25. spicyjack

    Best Linux for old computer

    IMO, the best OS/distro to run is the one you can get support for when you run into problems. I don't know whether you know other Linux people who could talk you through problems on the phone or come over to your place, or whether the only people you know who run Linux are on the internet. That being said, from what I've seen, Ubuntu has a good community, as does Fedora/CentOS. I belong to a local Linux Users Group, and I think most of the people in the group run one of [Ubuntu|Fedora|CentOS]; we have, I think, one Ubuntu dev and one Debian dev in our group. I personally run Debian, because I like starting with the bare minimum install and adding to it from there, but I understand it's not for everybody (higher overhead for some admin tasks), and if I run into problems I may have to figure them out myself in order to fix them. I'm alright with that, though.