Quasar

SDL Win x64?


Anybody know how to get SDL and its supporting libraries in a Windows x64 flavor, in a manner not involving MinGW? Do the projects included with the libraries support building for this target already?


Do you want to allocate more than 2GB in Eternity? :)

SDL compiles fine for me with MSVC. Some versions of prboom-plus did not need SDL at all (just glboom-plus.exe), because of static linking. I did not try to compile for x64, though.


Would just like to make the jump now before 64-bit becomes the de facto platform and everybody gets whiny because we don't support it yet ;)

There have already been numerous requests for x64 Linux support, I figure Windows won't be far behind.


The difference being that Windows doesn't care whether a compiled .EXE (which is what most people use) is 32- or 64-bit, whereas most Linux users compile the source themselves, so they are much more likely to run into problems.

Quasar said:

Would just like to make the jump now before 64-bit becomes the de facto platform and everybody gets whiny because we don't support it yet ;)

There have already been numerous requests for x64 Linux support, I figure Windows won't be far behind.

Keep in mind the 64bit programming models are different between Unix and Windows. Unix uses the LP64 model, while Win64 uses LLP64.

Standard porting concerns apply - i.e. don't use int, long and pointers interchangeably on the assumption that they are all the same size.
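To make the difference concrete, here is a minimal C sketch; the sizes noted in the comment assume a typical LP64 Unix toolchain versus an LLP64 Win64 one.

```c
#include <stdio.h>

/* Sizes under the two 64-bit models (illustrative):
     LP64 Unix:   int=4 long=8 ptr=8
     LLP64 Win64: int=4 long=4 ptr=8
   The trap is code that stuffs a pointer into a long - it truncates
   silently on Win64. */
static void print_model(void)
{
    printf("int=%zu long=%zu ptr=%zu\n",
           sizeof(int), sizeof(long), sizeof(void *));
}
```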

The Unix rationale is at http://www.unix.org/version2/whatsnew/lp64_wp.html and the Win64 one is at http://blogs.msdn.com/oldnewthing/archive/2005/01/31/363790.aspx

It appears to me the Win64 model was chosen specifically to keep 32-bit apps running unless you have a compelling need for a larger address space. On Unix, it appears that if you designed well and didn't mix int, long and pointers interchangeably, you can rebuild for a larger address space and, if needed, the extended range of 64-bit arithmetic.

Is there a need to port to Win64 other than as a general code cleanup opportunity?

Yagisan said:

The Unix rationale is at http://www.unix.org/version2/whatsnew/lp64_wp.html and the Win64 one is at http://blogs.msdn.com/oldnewthing/archive/2005/01/31/363790.aspx

It appears to me the Win64 model was chosen specifically to keep 32-bit apps running unless you have a compelling need for a larger address space. On Unix, it appears that if you designed well and didn't mix int, long and pointers interchangeably, you can rebuild for a larger address space and, if needed, the extended range of 64-bit arithmetic.



Too bad that it happened. But the best part is the geeks' responses on the Windows blog. Somehow these people persistently fail to comprehend the realities of commercial software development...

Graf Zahl said:

Too bad that it happened. But the best part is the geeks' responses on the Windows blog. Somehow these people persistently fail to comprehend the realities of commercial software development...

They shouldn't have to. Bad technology standards are bad, regardless of whether there are commercial reasons behind them or not. As a matter of fact, usually commercially pushed standards ARE bad, because they focus less on standardizing a correct principle and more on "tossing it in with the old stuff" so they have to do less work.


On the flip side, many standards that are guided solely by high ideals are doomed to fail commercially, because the commercial implications are ignored - and those can make or break a standard.

If you ignore any such consideration you'll end up with empty hands despite claiming to have done everything right.

Let's make this simple: A standard nobody uses because it involves too much hassle is not worth anything.

Microsoft would have risked developers simply ignoring the 64-bit system because porting existing software to it would be too much work.

It may not have been the ideal solution but I have to agree that with all things factored in it's the right one.

The bigger problem is that MS's compilers still don't support the C99 integer types. Now that's something that really irks me.


Most Windows applications are binary only. The companies that developed them might not exist anymore. The original source code might be lost, or unportable.

And there are still millions of people who need to use this old software for their job, daily.

Windows 64 Good Standards Only edition arrives. Old apps no longer work at all. People stay with Windows 32, since it's what they have to use. Windows 64 is a total commercial flop. Commercial software developers keep targeting the Win32 platform, since it's good enough for most uses and that's the largest market. The transition never happens.



Windows 64 Bad Standards Make Geeks Cry edition arrives. Old apps still work. People move to Windows 64, since it's what's new and it doesn't prevent them from working. Commercial software developers start targeting the Win64 platform.



Heh. Doom itself is a perfect example of a very bad standard. It has plenty of bugs and limitations that are considered features and that necessitate complicated workarounds to be maintained. If a source port doesn't handle some frequent hacks, everybody here gangs up on it, saying it's complete crap; nobody lauds it for standardizing a correct principle.


I'd just like to point out this is the second time Microsoft has ported Windows to a 64bit platform. They missed a golden opportunity to pick a better model with _no_ loss of compatibility with existing 32bit applications, as they run on the WoW layer anyway.

As it stands, the only reason to port to "64-bit" Windows is for a larger address space. It is, for all intents and purposes, a 32-bit system.


So? What other reason would there be to port to 64 bit than to be able to address more memory? I can see no other reason for it and if that necessity did not exist there'd be no real need for any 64 bit system.


I really don't understand what the problem is. Why is the size of the 'long' type that important? It's something nobody should use anyway. Really safe code should only use explicitly sized types for data structures.
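As a sketch of that point (the struct and its field names are made up for illustration), a record declared with exact-width types keeps the same layout on 32- and 64-bit builds, which plain int/long cannot guarantee:

```c
#include <stdint.h>

/* A hypothetical on-disk record: every field has an exact width, so
   the layout is identical regardless of the compiler's data model. */
typedef struct {
    int32_t  x, y;      /* coordinates: always 4 bytes each */
    uint16_t flags;     /* always 2 bytes                   */
    uint16_t type;      /* always 2 bytes                   */
} demo_record_t;
```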

Graf Zahl said:

So? What other reason would there be to port to 64 bit than to be able to address more memory? I can see no other reason for it and if that necessity did not exist there'd be no real need for any 64 bit system.

There is another example. Almost all chess engines use bitboards (8*8 = 64) and x64 versions are ~1.9x faster than x32. That gives ~60-70 ELO :)
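As a rough sketch of why bitboards benefit: the whole board fits in one 64-bit word, so per-square loops become single-register bit tricks (a minimal example, not taken from any particular engine):

```c
#include <stdint.h>

typedef uint64_t Bitboard;   /* one bit per square of the 8x8 board */

/* Count occupied squares. On a 64-bit build this operates on a single
   register; a 32-bit build must split the board into two halves. */
static int popcount64(Bitboard b)
{
    int n = 0;
    while (b) {
        b &= b - 1;   /* clear the lowest set bit */
        n++;
    }
    return n;
}
```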

entryway said:

Almost all chess engines use bitboards (8*8 = 64) and x64 versions are ~1.9x faster than x32 :) Doubled speed is ~70 ELO

They can already beat human champions (Deep Blue vs. Kasparov was 12 years ago), so what's the point? :P

Graf Zahl said:

So? What other reason would there be to port to 64 bit than to be able to address more memory?

Because your target machine is a native 64-bit system, perhaps? You need the extended calculation range of native 64-bit types? You've been ordered to do a code audit and cleanup so marketing can slap "OS X Snow Leopard, Windows 7, 64-bit OK" stickers on the box and the boss can justify a 30% increase in the RRP of your shovelware application?

Graf Zahl said:

I can see no other reason for it and if that necessity did not exist there'd be no real need for any 64 bit system.

No one forces you to port to any system you don't see a need for.

Graf Zahl said:

I really don't understand what's the problem. Why is the size of the 'long' type that important? It's something nobody should use anyway.

Certain relevant standards say that short <= int <= long. Previously, the only safe way to perform pointer arithmetic on a compiler that does not support C99 was to use long to manipulate the pointers. Now, on Win64, you need to use a new type - long long. This is a reality that people need to deal with: it took until 2004 to ratify C99, and the industry-standard compiler for the Windows platform does not support it. Without 3rd-party support for C99 types, there isn't a safe way to perform pointer arithmetic or implement function pointers that is portable to other systems without resorting to (more) ifdefs and typedefs in your code.

Graf Zahl said:

Really safe code should only use explicitly sized types for data structures.

Really safe code in that case needs to be built by a compiler that supports C99 and the optional exact-width integer types defined in section 7.18. You yourself noted that the Microsoft compilers do not support this.

You may have missed it, but as far as I'm concerned a port to Win64 isn't worth it unless you want to verify that your conversion to C99 types and your third-party C99 headers work. At best you'll get a small speed-up from the additional registers in the amd64 architecture, and that may be negated by the size-override prefixes needed to stipulate 32-bit access to the native 64-bit registers.

Yagisan said:

Certain relevant standards say that short <= int <= long. Previously, the only safe way to perform pointer arithmetic on a compiler that does not support C99 was to use long to manipulate the pointers. Now, on Win64, you need to use a new type - long long.



And here's where you are very, very wrong. Nobody should *EVER* use int, long or (God forbid) long long to do pointer arithmetic in new code. Long has never been a safe type, and anyone treating it with such assumptions is making a big mistake.

The only safe types for this are 'ptrdiff_t', 'intptr_t' and 'size_t' and they are the only thing that should be used for this. (MS supports these types unlike the C99 sized ints.) Code using the standard integer types for pointer arithmetic is one of the biggest problems people face now with the advent of 64 bit and no decision how large to make ints and longs is going to change this. If you need to migrate use the proper types and you won't run into problems. Continue to use longs and when the next transition arrives the same problems will resurface.
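A minimal sketch of the safe idiom described above (the helper names are made up for illustration):

```c
#include <stdint.h>
#include <stddef.h>

/* intptr_t can round-trip a pointer under both LP64 and LLP64;
   long cannot, because it stays 32-bit on Win64. */
static int roundtrip_ok(int *p)
{
    intptr_t bits = (intptr_t)p;   /* pointer -> integer */
    return (int *)bits == p;       /* integer -> pointer, lossless */
}

/* Pointer subtraction belongs in ptrdiff_t, never in int or long. */
static ptrdiff_t distance(const int *a, const int *b)
{
    return b - a;
}
```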


This is a reality that people need to deal with: it took until 2004 to ratify C99, and the industry-standard compiler for the Windows platform does not support it. Without 3rd-party support for C99 types, there isn't a safe way to perform pointer arithmetic or implement function pointers that is portable to other systems without resorting to (more) ifdefs and typedefs in your code.

#ifdef _MSC_VER
/* MSVC has no stdint.h; map its built-in sized types yourself, e.g.: */
typedef __int32          int32_t;
typedef unsigned __int32 uint32_t;
typedef __int64          int64_t;
typedef unsigned __int64 uint64_t;
/* ...and likewise for the 8- and 16-bit variants. */
#else
#include <stdint.h>
#endif

Save that as mystdint.h and don't bother any further with compiler-dependent stuff. Aside from these types, C99 doesn't contain anything that's really needed for type-safe programming.

For pointer arithmetic, see my first response above.


Really safe code in that case needs to be built by a compiler that supports C99 and the optional exact-width integer types defined in section 7.18. You yourself noted that the Microsoft compilers do not support this.



The only thing MS doesn't provide is the stdint header. The types are there but named differently.


You may have missed it, but as far as I'm concerned a port to Win64 isn't worth it unless you want to verify that your conversion to C99 types and your third-party C99 headers work.


You are seeing a problem where none exists.
If you insist on continuing to use int and long where you should use size specific types I can't help you.


At best you'll get a small speed-up from the additional registers in the amd64 architecture, and that may be negated by the size-override prefixes needed to stipulate 32-bit access to the native 64-bit registers.


I don't know precisely how 32-bit access works in 64-bit mode, but since it's still the size of a normal int, and therefore very important, I doubt it is implemented with a performance hit. IMO it'd be stupid to implement this sub-optimally in the hardware.

Graf Zahl said:

And here's where you are very, very wrong. Nobody should *EVER* use int, long or (God forbid) long long to do pointer arithmetic in new code. Long has never been a safe type, and anyone treating it with such assumptions is making a big mistake.

I don't see why you are assuming I am advocating using int, long and long long in new code. The discussion was about a port to Win64 which implies an existing codebase.

Graf Zahl said:

The only safe types for this are 'ptrdiff_t', 'intptr_t' and 'size_t' and they are the only thing that should be used for this. (MS supports these types unlike the C99 sized ints.)

intptr_t is an optional part of the C99 standard (that should be defined in stdint.h) - the same section I referred to earlier.

Graf Zahl said:

Code using the standard integer types for pointer arithmetic is one of the biggest problems people face now with the advent of 64 bit and no decision how large to make ints and longs is going to change this. If you need to migrate use the proper types and you won't run into problems. Continue to use longs and when the next transition arrives the same problems will resurface.

You basically echo the point of my first post in this thread: don't use int, long and pointers interchangeably.

Graf Zahl said:

#ifdef _MSC_VER
/* MSVC has no stdint.h; map its built-in sized types yourself, e.g.: */
typedef __int32          int32_t;
typedef unsigned __int32 uint32_t;
typedef __int64          int64_t;
typedef unsigned __int64 uint64_t;
/* ...and likewise for the 8- and 16-bit variants. */
#else
#include <stdint.h>
#endif

Save that as mystdint.h and don't bother any further with compiler-dependent stuff. Aside from these types, C99 doesn't contain anything that's really needed for type-safe programming.

Thank you for that example of additional ifdefs and typedefs for Win64.

Graf Zahl said:

The only thing MS doesn't provide is the stdint header. The types are there but named differently.

That may be so, but the standard requires the header. It's not optional.

Graf Zahl said:

You are seeing a problem where none exists.
If you insist on continuing to use int and long where you should use size specific types I can't help you.

I've only pointed out that porting an existing codebase to Win64 will require adjustments in certain areas. No more, no less. I've not discussed greenfield applications at all.

Graf Zahl said:

I don't know precisely how 32-bit access works in 64-bit mode, but since it's still the size of a normal int, and therefore very important, I doubt it is implemented with a performance hit. IMO it'd be stupid to implement this sub-optimally in the hardware.

It depends specifically on the _type_ of 32-bit access. A re-read of volume 3 of the amd64 Architecture Programmer's Manual indicates that the prefix I was thinking of is only relevant when accessing memory, not registers, in 32-bit words. That would make any possible performance hit on Win64 application-dependent.

It feels very odd that, on one of the few occasions we are more or less in agreement, GZ, you still read into my posts things I haven't said.

Quasar said:

Anybody know how to get SDL and its supporting libraries in a Windows x64 flavor, in a manner not involving MinGW? Do the projects included with the libraries support building for this target already?


MinGW-w64 works well (I use it for ReMooD's SDL and its 64-bit compilation), but since you don't want any MinGW, you might be able to compile it using the VC++ projects with little modification.

GhostlyDeath said:

MinGW-w64 works well (I use it for ReMooD's SDL and its 64-bit compilation), but since you don't want any MinGW, you might be able to compile it using the VC++ projects with little modification.

The main issue with MinGW is that SDL's support for it is horrible (configs and makefiles regularly end up broken) and the resulting DLLs have broken behaviors, such as hard-mapping my stdio to a file - if I wanted that, I would explicitly code it; this is not something for a library to decide for me.

In response to the various arguments and discussions going on previously, I must add my own comments:

  • EE has already stopped using short or long types in any place where the size of an integer variable matters, with the exception of some map data structures which I don't believe I've gotten around to changing yet.
  • EE uses a 3rd-party header to provide C99 types for Visual C++. Believe it or not, GCC has given us more trouble in the transition to C99 types, as its headers are a grotesque non-standard mess that contain conflicting and incorrect definitions of the types in unnecessary/redundant places.
  • Fundamental DOOM data types such as byte, fixed_t, and angle_t are typedefs for the proper C99 fixed-size type now.
  • EE has eliminated all instances of pointer coercion into integers (there were VERY few of these to begin with, as they are terrible practice) outside of the Small virtual machine, which has already been identified as having intractable 64-bit portability issues and will simply not be supported there at all (it is already considered deprecated on 32-bit and will be replaced by our currently-in-planning Aeon API on ECMAScript).
  • I have taken steps to clean up any Win32-specific code to work on Win64 as well, excluding the SEH which isn't meant to be portable anyway.
  • I just eliminated the undefined behavior with respect to va_list which was causing compilation failure on GCC 64-bit.
So as you can see, we are already well on the way. There are benefits to compiling any program as 64-bit, one of the key ones already having been mentioned being that the porting process cleans up stuff that is already a problem in the code (non-portable idioms).
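As an illustration of such typedefs (the type names match the engine's, but the exact header contents here are assumed), pinning Doom's fundamental types to C99 widths might look like:

```c
#include <stdint.h>

typedef uint8_t  byte;      /* raw data                    */
typedef int32_t  fixed_t;   /* 16.16 fixed point           */
typedef uint32_t angle_t;   /* binary angle, wraps at 2^32 */

#define FRACBITS 16
#define FRACUNIT (1 << FRACBITS)

/* A 16.16 multiply needs a 64-bit intermediate so it behaves the
   same under every data model. */
static fixed_t FixedMul(fixed_t a, fixed_t b)
{
    return (fixed_t)(((int64_t)a * b) >> FRACBITS);
}
```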

I may be wrong, but don't x64 processors use SSE instructions for floating point? If this is the case, it may make EE's Cardboard rendering faster. I do not know if it requires major reorganization of the code to take advantage of that, or if the compiler's optimizer is smart enough to apply tweaks on its own.

At any rate, I don't feel that 64-bit integer math is the only reason to port up. 64-bit integers are rarely necessary for anything. They represent a scale of numbers that is largely outside of daily experience. How often do you even deal with 4 billion of something, for that matter? :)

Quasar said:

The main issue with MinGW is that SDL's support for it is horrible (configs and makefiles regularly end up broken) and the resulting DLLs have broken behaviors, such as hard-mapping my stdio to a file - if I wanted that, I would explicitly code it; this is not something for a library to decide for me.

Excluding mingw, the only 64bit compilers I'm aware of are the full versions of msvc2005, msvc2008 and the upcoming msvc2010 - the express editions appear to be 32bit only.

Quasar said:

I may be wrong, but don't x64 processors use SSE instructions for floating point? If this is the case, it may make EE's Cardboard rendering faster. I do not know if it requires major reorganization of the code to take advantage of that, or if the compiler's optimizer is smart enough to apply tweaks on its own.

The amd64 ABI does state SSE is the default for floating-point code, but the x87 unit is still available if you want to use it. I think the second implicit question here is: do the compilers turn your floating-point code into vectorised SSE code? The newer versions of GCC and the Intel compiler do support automatic vectorisation in their "release" modes - typically -O3 or similar builds. I'm unsure of the status of MSVC in this area. There is generally no trouble vectorising simple loops, but the compilers need help with more complex code - you may need to re-arrange the code a bit to help them. Both the Intel and GCC compilers can provide very verbose diagnostics on what does and doesn't vectorise, and in my experience, the Intel compiler is far better at producing automatically vectorised code with little to no change.
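A sketch of the kind of loop auto-vectorisers handle with no help: unit stride, no loop-carried dependency, and restrict to rule out aliasing (the exact flags needed vary by compiler - typically -O3 on GCC):

```c
/* Scale an array: the canonical auto-vectorisable loop shape.
   'restrict' promises the compiler that dst and src don't overlap,
   so it can safely emit packed SSE multiplies. */
static void scale(float *restrict dst, const float *restrict src,
                  float k, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * k;
}
```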

Quasar said:

EE uses a 3rd-party header to provide C99 types for Visual C++. Believe it or not, GCC has given us more trouble in the transition to C99 types, as its headers are a grotesque non-standard mess that contain conflicting and incorrect definitions of the types in unnecessary/redundant places.



... and that from the compiler that otherwise tortures its users with literal standards compliance that more often gets in the way than helps...



I may be wrong, but don't x64 processors use SSE instructions for floating point? If this is the case, it may make EE's Cardboard rendering faster. I do not know if it requires major reorganization of the code to take advantage of that, or if the compiler's optimizer is smart enough to apply tweaks on its own.



I think so, yes. Since all x64 CPUs support SSE2, there's no reason not to use it. On the other hand, where's the gain? The performance differences from x87 math are only noticeable with expensive floating-point calculations, like in a node builder. I did some profiling with GZDoom and can't detect any significant speed difference. I don't think it's much different for a software renderer, because most of the time is spent doing things other than such calculations.


At any rate, I don't feel that 64-bit integer math is the only reason to port up. 64-bit integers are rarely necessary for anything. They represent a scale of numbers that is largely outside of daily experience. How often do you even deal with 4 billion of something, for that matter? :)


Correct - but imagine Doom having a fixed point type that doesn't constantly overflow... (on the other hand, I'd rather port the engine to using doubles instead of 32.32 fixed point. ;) )

Yagisan said:

Excluding mingw, the only 64bit compilers I'm aware of are the full versions of msvc2005, msvc2008 and the upcoming msvc2010 - the express editions appear to be 32bit only.

It is possible to cross-compile 32/64-bit using the Express editions of Visual C++; however, it takes a bit of coercing (registry surgery). I am currently using VC2008 Express to compile 64-bit Doomsday binaries.

KuriKai said:

A comparison of Ubuntu 9.10 32bit, 32bit PAE and 64bit was done here
http://www.phoronix.com/scan.php?page=article&item=ubuntu_32_pae&num=1

Results were: 64-bit was better/faster.

It is unclear from that benchmark whether all they changed was the kernel, or the userland as well, when they ran the 64-bit tests.

Switching from the x86 to the amd64 architecture is a known anomaly. You suddenly go from having (best case, Pentium 4-class or later architecture) 8 32-bit "general purpose*" registers, an 8-entry 80-bit floating-point register stack, and 8 128-bit SSE registers, to 16 "general purpose" registers, the same 8-entry 80-bit floating-point register stack, and 16 128-bit SSE registers. The sheer doubling of the number of registers is responsible for a lot of application speedup, due to reduced register pressure.

* While they are listed as general purpose, in actual fact some have very definite purposes, which often means you cannot use all 8 of them: typically you'll lose one for a frame pointer; on security-enhanced Linux/BSD systems you'll lose another for stack-smashing protection; and you'll lose another one for position-independent code such as libraries, leaving you in effect 5 "general purpose" registers to play with. Contrast that with the PowerPC architecture, which has a large number of registers and didn't gain any more in the move to 64-bit, and it's not surprising that on most PowerPC systems userland is still 32-bit, as the switch to 64-bit decreased performance through extra cache-line evictions and the like.

DaniJ said:

It is possible to cross-compile 32/64-bit using the Express editions of Visual C++; however, it takes a bit of coercing (registry surgery). I am currently using VC2008 Express to compile 64-bit Doomsday binaries.

But out of the box, on install it doesn't actually give you the option to do 64bit development - it's not officially supported.

