tigertoddy

Jdoom speed with (semi) modern PCs?

I'm trying to run jDoom with high-res textures, MD2 models and so on, but I'm getting sub-par framerates in large open areas and with lots of onscreen models. I'm not sure whether it's my setup or not. My setup is:

P4 3GHz Prescott
4x 512MB DDR1 RAM
GeForce 6800 GT (although it may be as low as an FX5600 - I have a feeling that the vendor just flashed the BIOS) - 512MB DDR2 RAM

I have no problems running any other games, but then again, none of those are custom-made.

Should I be seeing decent performance with this system?

Turn down or turn off lens flares and dynamic lights, as that should help a bit with the framerate, and reduce the model view distance so models are rendered as sprites from further away. You may want to turn off shadows as those eat up FPS like crazy.

Mr. Chris said:

You may want to turn off shadows as those eat up FPS like crazy.

Do you mean those simple circles on the floor under sprites? I do not believe they can be the reason for the slowdown, because I implemented them in glboom-plus some time ago in a very similar way and they do not eat framerate at all. For nuts.wad, with ~5000 shadows in a frame, I get 385 fps instead of the 400 I get without shadows.
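
For reference, the whole effect boils down to something like this (a simplified sketch of the general approach, not the actual glboom-plus code):

#include <GL/gl.h>

/* Draw a simple blob shadow: an alpha-blended quad lying on the floor
 * plane under a map object. shadow_tex is assumed to be a radial
 * gradient texture uploaded elsewhere; x/y are map coordinates and
 * floorz is the floor height at that spot. */
void draw_blob_shadow(GLuint shadow_tex, float x, float y,
                      float floorz, float radius, float alpha)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glBindTexture(GL_TEXTURE_2D, shadow_tex);
    glDepthMask(GL_FALSE);              /* decals don't write depth */
    glColor4f(0.0f, 0.0f, 0.0f, alpha); /* dark and translucent */

    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex3f(x - radius, floorz, y - radius);
    glTexCoord2f(1.0f, 0.0f); glVertex3f(x + radius, floorz, y - radius);
    glTexCoord2f(1.0f, 1.0f); glVertex3f(x + radius, floorz, y + radius);
    glTexCoord2f(0.0f, 1.0f); glVertex3f(x - radius, floorz, y + radius);
    glEnd();

    glDepthMask(GL_TRUE);
}

That is four vertices and a couple of state changes per shadow, which is why even thousands of them barely register.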

The current 1.9 betas of Doomsday have several speed bottlenecks because of temporary code in place while things are being rewritten.

In most cases the 1.9 betas are generally faster than 1.8.6, and they would be considerably faster all the time without the above-mentioned bottlenecks. In some cases, however, those bottlenecks can make things notably slower than 1.8.6.

tigertoddy said:
Should I be seeing decent performance with this system?


Not until they fix the renderer. Well, strictly speaking, the problem is more that NVidia drivers in particular no longer optimise for "immediate" mode, which Doomsday uses heavily. This is further compounded by issues with NVidia's driver threaded optimisations, which further penalise applications that use "immediate" mode.

Yagisan said:

Not until they fix the renderer. Well, strictly speaking, the problem is more that NVidia drivers in particular no longer optimise for "immediate" mode, which Doomsday uses heavily. This is further compounded by issues with NVidia's driver threaded optimisations, which further penalise applications that use "immediate" mode.



When did they change this? I haven't updated my driver in a long time and since I don't play modern games I wouldn't want to update and screw myself with a useless driver.

At least this would explain one strange bug report concerning frame rate drops I got some time ago.

The reports started coming in for some time before I left the Doomsday project. That would be sometime before January 2008.

Ok, my driver is newer so that's not the cause then.

Interestingly, when I bought my current computer in September 2007 with a GeForce 8600, it exhibited horrendous performance issues with certain constructs, and it took more than half a year until NVidia's drivers fixed these problems. Maybe they undid some of their changes because they were too costly for some applications.

One thing that was particularly bad was vertex arrays that were changed frequently. The initial driver broke down to less than 10 fps with such a construct, and the first few driver updates in late 2007 did not improve this at all. The driver I am using now (dated summer 2008) no longer has problems with this.

Regarding immediate mode: as much as I'd like to get rid of it, I just don't see how that would be possible with an engine like Doom, where the geometry can be dynamic due to moving floors and ceilings - not to mention the sprites.
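
To illustrate, a GL Doom wall quad ends up being drawn roughly like this (a simplified sketch, not GZDoom's actual code, with the texture coordinates simplified). Since the plane heights can change every tic, the vertices are simply re-emitted each frame:

#include <GL/gl.h>

typedef struct {
    float x1, y1, x2, y2;   /* wall endpoints in map space            */
    float floorz, ceilz;    /* current plane heights - these can move */
    GLuint texture;
} wall_t;

/* Draw one wall quad in immediate mode. Because floorz/ceilz may have
 * changed since the last tic (lifts, doors, crushers), the vertex data
 * is simply re-emitted every frame. */
void draw_wall_immediate(const wall_t *w)
{
    glBindTexture(GL_TEXTURE_2D, w->texture);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex3f(w->x1, w->ceilz,  w->y1);
    glTexCoord2f(1.0f, 0.0f); glVertex3f(w->x2, w->ceilz,  w->y2);
    glTexCoord2f(1.0f, 1.0f); glVertex3f(w->x2, w->floorz, w->y2);
    glTexCoord2f(0.0f, 1.0f); glVertex3f(w->x1, w->floorz, w->y1);
    glEnd();
}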

Looks like I'd better try to clear things up. There is a fair amount of half-truth and general misinformation in this thread.

Firstly, let's put aside the temporary, naive resource search routines. They are indeed a big (unnecessary) bottleneck that we are well aware of but haven't addressed yet, because we are still radically redesigning the engine architecture. For example, systems that were game-side have moved through the engine and are now in plugins, and conversely, some plugins have been swallowed up by the engine. In other words, the architecture is in a state of flux, so spending too much time on optimization at this point would (largely) be a waste.

Threaded Optimization:
The problem here is that NVidia's threaded optimization quite simply does not get on with applications that frequently update vertex arrays (as Graf encountered). Way back around the 1.5.x era (I think), Doomsday's renderer was enhanced so that rather than sending a stream of "immediate mode" GL API calls, our renderer would maintain (large) arrays and then upload large chunks of data for a given render frame in one go. Back then this was a pretty good idea and it noticeably improved frame rates. Fast forward to 1.9.0-beta5 (a year ago) and the situation is very different: what was once a good optimization is no longer so. Now, I'm no NVidia driver developer, but I would suppose that vertex arrays are now rather low on the list of priorities, given that there are better alternatives which most current applications will be using anyway.
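
For those unfamiliar with the approach, the pattern looked roughly like this (a simplified sketch of the general idea, not our actual renderer code):

#include <GL/gl.h>

#define MAX_VERTS 65536

typedef struct { float pos[3]; float uv[2]; } vertex_t;

/* One big client-side array, refilled from scratch every render frame. */
static vertex_t frame_verts[MAX_VERTS];
static int      frame_num_verts;

void frame_begin(void) { frame_num_verts = 0; }

void frame_add_quad(const vertex_t q[4])
{
    int i;
    if (frame_num_verts + 4 > MAX_VERTS) return;  /* overflow guard */
    for (i = 0; i < 4; ++i)
        frame_verts[frame_num_verts++] = q[i];
}

/* Hand the whole frame's geometry to the driver in one go. */
void frame_flush(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(vertex_t), frame_verts[0].pos);
    glTexCoordPointer(2, GL_FLOAT, sizeof(vertex_t), frame_verts[0].uv);
    glDrawArrays(GL_QUADS, 0, frame_num_verts);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}

Rewriting and re-submitting a large client-side array like this every frame is exactly the kind of usage the threaded optimization seems to handle badly.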

So, a couple of versions ago I began work on removing the use of global vertex arrays, and as of 1.9.0-beta6 we no longer upload data using them. The exception is the model renderer, which still uses them because it is quite convenient to do so.

Therefore, if you are not using 3D models with Doomsday, you won't be affected by the threaded optimization issue. If you do use them and are affected by this problem, then you currently have two options:

a) Disable the transfer of models using vertex arrays in Doomsday using the -novtxar command line option.
b) Disable Nvidia's threaded optimization when playing Doomsday via the Nvidia driver control panel.

Come the scheduled renderer rewrite, this problem will be addressed by redesigning how models are handled so that all T&L is done on the GPU.
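
Roughly speaking, the idea is to keep each model's vertex data resident in a buffer object on the GPU instead of re-uploading a client-side array every frame, with the keyframe blending eventually moving into a vertex program. Treat the following as a simplified sketch of that direction rather than the actual design (the buffer object entry points are core from OpenGL 1.5; on Windows they must be fetched through an extension loader):

#define GL_GLEXT_PROTOTYPES 1
#include <GL/gl.h>
#include <GL/glext.h>

typedef struct { float pos[3]; float uv[2]; } mvertex_t;

/* Upload a model's vertices once; the data then lives GPU-side. */
GLuint upload_model(const mvertex_t *verts, int num_verts)
{
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, num_verts * sizeof(mvertex_t),
                 verts, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    return vbo;
}

/* Draw from the buffer object; no per-frame data transfer. */
void draw_model(GLuint vbo, int num_verts)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(mvertex_t), (void *)0);
    glTexCoordPointer(2, GL_FLOAT, sizeof(mvertex_t),
                      (void *)(3 * sizeof(float)));
    glDrawArrays(GL_TRIANGLES, 0, num_verts);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}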

"Immediate mode" in GL DOOM renderers:
Strictly speaking, there is nothing at all wrong with immediate mode drawing. The notion that drivers might not handle it all that well I find somewhat ridiculous, so let's put that argument to rest. Yes, most modern games do very little drawing via immediate mode, but if an implementation were to seriously handicap it there would be uproar.

Depending on what you want to accomplish with your GL DOOM port renderer, immediate mode drawing could be all you need. Traversing the BSP from front to back, clipping solid segs and generating a handful of polygons just-in-time can be done fast enough to maintain a playable framerate on even the slowest contemporary hardware. The costly part, i.e. the BSP generation, is done offline, and the tessellation of strictly convex polygons can be done very quickly. In fact, I dare say there is not one GL DOOM renderer that tessellates subsectors just-in-time; since they do not change shape at all, it would be very wasteful not to cache the results.
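
For illustration, the caching amounts to something like this (a deliberately simplified sketch, not any particular port's code): the XY outline is built once at load time, and only the plane height varies at draw time.

#include <GL/gl.h>

typedef struct { float x, y; } fanvert_t;

typedef struct {
    fanvert_t *verts;       /* cached convex outline, built at load time */
    int        num_verts;
} subsector_cache_t;

/* Draw the cached fan as the subsector's floor at the current height.
 * The flat texture is assumed to be bound already; flats are 64x64
 * world units, hence the /64 texture coordinates. */
void draw_subsector_floor(const subsector_cache_t *ss, float floorz)
{
    int i;
    glBegin(GL_TRIANGLE_FAN);
    for (i = 0; i < ss->num_verts; ++i) {
        glTexCoord2f(ss->verts[i].x / 64.0f, ss->verts[i].y / 64.0f);
        glVertex3f(ss->verts[i].x, floorz, ss->verts[i].y);
    }
    glEnd();
}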

Using immediate mode for drawing only really becomes a problem when the complexity of your renderer grows or you wish to embellish the scene with added effects, detail and complexity. Soon you find you are caching all manner of data - for example, what I refer to as "vertex z poles" (a sorted list of z coordinates per world vertex, generated from the plane interfaces at that XY location) to eliminate T-junction render artefacts.
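
In sketch form (simplified, not the actual Doomsday data structure), a vertex z pole is little more than a per-vertex sorted set of heights:

#define MAX_POLE_HEIGHTS 16

typedef struct {
    float heights[MAX_POLE_HEIGHTS];   /* sorted ascending, no duplicates */
    int   num_heights;
} zpole_t;

/* Record a plane height that touches this vertex's XY location. */
void zpole_insert(zpole_t *pole, float z)
{
    int i, j;
    for (i = 0; i < pole->num_heights; ++i) {
        if (pole->heights[i] == z) return;      /* already present */
        if (pole->heights[i] > z) break;        /* insertion point */
    }
    if (pole->num_heights == MAX_POLE_HEIGHTS) return;
    for (j = pole->num_heights; j > i; --j)     /* shift up */
        pole->heights[j] = pole->heights[j - 1];
    pole->heights[i] = z;
    pole->num_heights++;
}

When a wall edge is drawn, a vertex is emitted at every cached height between its bottom and top, so neighbouring polygons share vertices exactly and no T-junctions appear.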

As far as I am concerned, there is nothing "broken" that needs fixing in the current Doomsday renderer. There are certainly some issues to be worked out, but given that its fundamental design is geared towards hardware which 99.9% of users won't be using anyway, resolving those issues would be wasted effort. It makes far more sense to spend that time on a new renderer designed to take advantage of modern systems, which is what we intend to do.

EDIT: Spelling (whoops).

DaniJ said:

Looks like I'd better try to clear things up. There is a fair amount of half-truth and general misinformation in this thread.

Ouch - that's harsh.

DaniJ said:

Firstly, let's put aside the temporary, naive resource search routines. They are indeed a big (unnecessary) bottleneck that we are well aware of but haven't addressed yet, because we are still radically redesigning the engine architecture. For example, systems that were game-side have moved through the engine and are now in plugins, and conversely, some plugins have been swallowed up by the engine. In other words, the architecture is in a state of flux, so spending too much time on optimization at this point would (largely) be a waste.

Profiler logs, please. Doomsday has way too much premature optimisation in it. Relatively speaking, how can you be sure this is the big bottleneck?

DaniJ said:

Threaded Optimization:
The problem here is that NVidia's threaded optimization quite simply does not get on with applications that frequently update vertex arrays (as Graf encountered). Way back around the 1.5.x era (I think), Doomsday's renderer was enhanced so that rather than sending a stream of "immediate mode" GL API calls, our renderer would maintain (large) arrays and then upload large chunks of data for a given render frame in one go. Back then this was a pretty good idea and it noticeably improved frame rates. Fast forward to 1.9.0-beta5 (a year ago) and the situation is very different: what was once a good optimization is no longer so. Now, I'm no NVidia driver developer, but I would suppose that vertex arrays are now rather low on the list of priorities, given that there are better alternatives which most current applications will be using anyway.

So, a couple of versions ago I began work on removing the use of global vertex arrays, and as of 1.9.0-beta6 we no longer upload data using them. The exception is the model renderer, which still uses them because it is quite convenient to do so.

I'll try to ignore the bit that feels like NVidia fanboyism (why justify the changes they make? Just accept that they do change and that sometimes - much like Doomsday releases - it's not for the better). What you're saying is that only the models use vertex arrays, and the rest of the engine doesn't. So how does the rest of the engine talk to OpenGL? I don't see any newer methods in the codebase such as GLSL, therefore, with a little deductive reasoning, you must be using something like, say, immediate mode.

DaniJ said:

Therefore, if you are not using 3D models with Doomsday, you won't be affected by the threaded optimization issue. If you do use them and are affected by this problem, then you currently have two options:

a) Disable the transfer of models using vertex arrays in Doomsday using the -novtxar command line option.
b) Disable Nvidia's threaded optimization when playing Doomsday via the Nvidia driver control panel.


This looks awfully like you are in fact confirming what I said.

DaniJ said:

Come the scheduled renderer rewrite, this problem will be addressed by redesigning how models are handled so that all T&L is done on the GPU.


And on the vast majority of GPUs (which, by volume shipped, are Intel chips), how will this work?

DaniJ said:

"Immediate mode" in GL DOOM renderers:
Strictly speaking, there is nothing at all wrong with immediate mode drawing. The notion that drivers might not handle it all that well I find somewhat ridiculous, so let's put that argument to rest. Yes, most modern games do very little drawing via immediate mode, but if an implementation were to seriously handicap it there would be uproar.

I've never said there is anything wrong with immediate mode, just that a certain driver vendor no longer optimises their drivers for it. It really is an easy test: benchmark on the exact same system, just changing the driver versions between runs. Watch the frame rate start to dip, then plateau out in recent releases. This particular vendor tends to optimise for the hardware they are currently selling, drops support for older yet still working GPUs from their drivers, and is known to have their drivers cheat in benchmarks by detecting popular benchmarking applications and changing internal settings. If you were this vendor, would you want to optimise for older applications, or for your new expensive hardware no one needs (yet)? I'm sure they get a lot of sales from people who feel their application is now too "slow".
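
If anyone wants numbers rather than impressions, logging average frame times is enough. Something along these lines would do (a rough sketch using the POSIX monotonic clock; on Windows you would use QueryPerformanceCounter instead):

#include <stdio.h>
#include <time.h>

/* Call once per rendered frame; prints the average frame time roughly
 * once a second so runs with different drivers can be compared. */
void frame_mark(void)
{
    static struct timespec window_start;
    static int frames;
    struct timespec now;
    double elapsed;

    clock_gettime(CLOCK_MONOTONIC, &now);
    if (frames == 0)
        window_start = now;
    frames++;

    elapsed = (now.tv_sec - window_start.tv_sec)
            + (now.tv_nsec - window_start.tv_nsec) / 1e9;
    if (elapsed >= 1.0) {
        printf("avg frame time %.2f ms (%.1f fps)\n",
               1000.0 * elapsed / frames, frames / elapsed);
        frames = 0;
    }
}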

DaniJ said:

As far as I am concerned, there is nothing "broken" that needs fixing in the current Doomsday renderer. There are certainly some issues to be worked out, but given that its fundamental design is geared towards hardware which 99.9% of users won't be using anyway, resolving those issues would be wasted effort. It makes far more sense to spend that time on a new renderer designed to take advantage of modern systems, which is what we intend to do.

EDIT: Spelling (whoops).

Issues are brokenness. I argued that the problem was not in Doomsday; you posted and basically argued the problem is in Doomsday. Which is it?

Yagisan said:

And on the vast majority of GPUs (which, by volume shipped, are Intel chips), how will this work?


Frankly, they are shit and that's what you get for using them. They may be the majority in volume shipped (BTW, not here in Germany where it's nearly impossible to get a system with an Intel GPU aside from cheap notebooks.) but the vast majority of that majority is never used for gaming.


I've never said there is anything wrong with immediate mode, just that a certain driver vendor no longer optimises their drivers for it. It really is an easy test: benchmark on the exact same system, just changing the driver versions between runs. Watch the frame rate start to dip, then plateau out in recent releases. This particular vendor tends to optimise for the hardware they are currently selling, drops support for older yet still working GPUs from their drivers, and is known to have their drivers cheat in benchmarks by detecting popular benchmarking applications and changing internal settings. If you were this vendor, would you want to optimise for older applications, or for your new expensive hardware no one needs (yet)? I'm sure they get a lot of sales from people who feel their application is now too "slow".



Still better than another vendor that also should not be named which has been incapable of delivering a bug free OpenGL driver for years and still doesn't seem to be able to.

BTW I haven't experienced any frame rate dips on my system with the last driver updates. But of course I don't use that outdated hardware and none of the frame rate values I have seen posted recently can confirm your statements that driver performance has dropped. So to quote you: Benchmark logs, please! I want to see proof of this.

Graf Zahl said:

Frankly, they are shit and that's what you get for using them. They may be the majority in volume shipped (BTW, not here in Germany where it's nearly impossible to get a system with an Intel GPU aside from cheap notebooks.) but the vast majority of that majority is never used for gaming.

I'm quite sure the casual gamer uses them. I have them in my primary system. If the choice of Doomsday is to require a high-end gaming system to run, so be it - but if that is the way they choose to go, it should be fair to expect Doom 3/Quake 4/Crysis quality out of it - not the Quake 2 we get.

Graf Zahl said:

Still better than another vendor that also should not be named which has been incapable of delivering a bug free OpenGL driver for years and still doesn't seem to be able to.

I agree, that vendor, while their hardware is decent, makes shit drivers. Thankfully on my preferred platform we have other vendors writing drivers for that hardware.

Graf Zahl said:

BTW I haven't experienced any frame rate dips on my system with the last driver updates. But of course I don't use that outdated hardware and none of the frame rate values I have seen posted recently can confirm your statements that driver performance has dropped. So to quote you: Benchmark logs, please! I want to see proof of this.

My hardware is considered obsolete by that vendor (the newer drivers don't work with my hardware, and the older drivers don't work with my kernel), nor do I run the platform that exhibits the problem. I switched to the vendor that can't write drivers to save their life, and to integrated Intel GPU systems. I got burned once by the vendor that decided to obsolete my working hardware, so I voted with my wallet and switched to vendors that will not obsolete my hardware. I'm happy to run the benchmarks - but someone else will need to supply the systems they want me to benchmark.

Yagisan said:

Profiler logs, please. Doomsday has way too much premature optimisation in it. Relatively speaking, how can you be sure this is the big bottleneck?

Simple; I don't need profiler logs in this instance because I have intimate knowledge of how Doomsday works.

As for premature optimization what exactly are you referring to?

So how does the rest of the engine talk to OpenGL? I don't see any newer methods in the codebase such as GLSL, therefore, with a little deductive reasoning, you must be using something like, say, immediate mode.

Correct.

If the choice of Doomsday is to require a high-end gaming system to run, so be it - but if that is the way they choose to go, it should be fair to expect Doom 3/Quake 4/Crysis quality out of it - not the Quake 2 we get.

That is indeed the way we plan to go in the future, but these things take time. You can't expect us to instantly leapfrog a decade, technology-wise.

Yagisan said:

My hardware is considered obsolete by that vendor



That must be some really old stuff then...

DaniJ said:

Simple; I don't need profiler logs in this instance because I have intimate knowledge of how Doomsday works.

As for premature optimization what exactly are you referring to?

For example, moving the OpenGL calls out of the main thread in svn 6534, and the general tendency to optimise various functions at the expense of fixing bugs. It appears to happen a lot on this project. Did you really need to replace the LOS code recently with the algorithm you acquired from Eternity in svn 6613? I'd argue that at the time, no, you didn't need to optimise those. 6534 in fact turned out to be a mistake that led to crashes. 6613 at least mixed in some fixes, although perhaps that re-release of 1.9.0beta6.3 (really, it should have been 1.9.0beta6.4) might have been out sooner had the precious free time been spent more on hunting that bug than on making Doomsday faster.

DaniJ said:

That is indeed the way we plan to go in the future, but these things take time. You can't expect us to instantly leapfrog a decade, technology-wise.

What I said was: if the choice of Doomsday is to require a high-end gaming system to run, so be it - but if that is the way they choose to go, it should be fair to expect Doom 3/Quake 4/Crysis quality out of it. Nothing more, nothing less.

Graf Zahl said:

That must be some really old stuff then...

Not particularly - they use extremely popular GPUs that can in fact run Doom 3, although it is AGP hardware. If it isn't broken, why fix it?

Yagisan said:

For example, moving the OpenGL calls out of the main thread in svn 6534

Actually you are quite wrong here. We only ever call OpenGL from the main thread.

Did you really need to replace the LOS code recently with the algorithm you acquired from Eternity in svn 6613? I'd argue that at the time, no, you didn't need to optimise those.

Wrong again. The reason the LOS algorithm was replaced was very simple: there were a couple of bugs in the existing implementation, and rather than fix them I decided to replace the whole algorithm entirely. What's the point in fixing bugs in an algorithm when a better alternative is available anyway?

perhaps that re-release of 1.9.0beta6.3 (really, it should have been 1.9.0beta6.4) might have been out sooner had the precious free time been spent more on hunting that bug than on making Doomsday faster.

Void argument, given my preceding comment and the purpose of said change. The optimization came free ;)

DaniJ said:

Actually you are quite wrong here. We only ever call OpenGL from the main thread.

No, I'm not. In svn 6534 you moved the texture reset into the busy thread. The code path goes GL_ClearTextureMemory() -> GL_ClearRuntimeTextures() -> glDeleteTextures.

That's quite clearly executing in a different thread.

Skyjake backed it out in svn 6647. Perhaps that could be an indication that it was not such a good idea?

DaniJ said:

Wrong again. The reason the LOS algorithm was replaced was very simple: there were a couple of bugs in the existing implementation, and rather than fix them I decided to replace the whole algorithm entirely. What's the point in fixing bugs in an algorithm when a better alternative is available anyway?

The reason you and skyjake always gave me was that the release of 1.9.0 is imminent. In this case, that "bug" had nothing to do with the re-release of the brown-paper-bag release of 1.9.0-beta6.3 - it could have waited, could it not?

DaniJ said:

Void argument, given my preceding comment and the purpose of said change. The optimization came free ;)

No optimisation ever comes free. There is a cost at some point. It may be time, space, or clarity of code, but there is always a cost. Of course, I never did see any open bug reports saying the LOS code was broken, and you did replace the LOS code in 6611 while the optimisation was in 6613, so I see it wasn't as free as you make it out to be.

None of this really helps the OP, however - the general consensus here wants to blame Doomsday for the performance issue, which means that until Doomsday rectifies it, the OP is out of luck. OP, perhaps you may want to try a different open-source engine such as Risen3D, GZDoom, or one of the many others here.

Yagisan said:

No, I'm not. In svn 6534 you moved the texture reset into the busy thread. The code path goes GL_ClearTextureMemory() -> GL_ClearRuntimeTextures() -> glDeleteTextures.

That's quite clearly executing in a different thread.

Skyjake backed it out in svn 6647. Perhaps that could be an indication that it was not such a good idea?

The reason it was backed out is that not all OpenGL implementations are thread-safe. It was quite simply a mistake that was fixed. We should only ever be calling OpenGL from the main thread.
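
The usual way to handle this (a sketch of the general pattern, not our actual code) is to have worker threads merely queue the texture names and let the main thread, which owns the GL context, make the real glDeleteTextures call:

#include <pthread.h>
#include <GL/gl.h>

#define MAX_PENDING 256

static pthread_mutex_t pending_lock = PTHREAD_MUTEX_INITIALIZER;
static GLuint pending_tex[MAX_PENDING];
static int    num_pending;

/* May be called from any thread: no GL here, just bookkeeping. */
void queue_texture_delete(GLuint tex)
{
    pthread_mutex_lock(&pending_lock);
    if (num_pending < MAX_PENDING)
        pending_tex[num_pending++] = tex;
    pthread_mutex_unlock(&pending_lock);
}

/* Called once per frame from the main thread, which owns the GL context. */
void flush_texture_deletes(void)
{
    pthread_mutex_lock(&pending_lock);
    if (num_pending > 0) {
        glDeleteTextures(num_pending, pending_tex);
        num_pending = 0;
    }
    pthread_mutex_unlock(&pending_lock);
}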

The reason you and skyjake always gave me was that the release of 1.9.0 is imminent. In this case, that "bug" had nothing to do with the re-release of the brown-paper-bag release of 1.9.0-beta6.3 - it could have waited, could it not?

I fail to see your logic here. It was a bug that skyjake found and so fixed. Why not fix it then and there when it was so trivial to do so?

No optimisation ever comes free. There is a cost at some point. It may be time, space, or clarity of code, but there is always a cost. Of course, I never did see any open bug reports saying the LOS code was broken, and you did replace the LOS code in 6611 while the optimisation was in 6613, so I see it wasn't as free as you make it out to be.

That's right, butterfly effect and all that.

Whether you saw them or not is irrelevant, to be frank. The bug report in question was, and still is, in our tracker.

None of this really helps the OP, however...

Absolutely. I wasn't the one who decided to turn this thread into a "let's bash the development practices of the Doomsday dev team" thread.

Good to see Yagisan as assertive as ever.

OP, what version of Doomsday are you using? And what WAD(s) are you playing? If it's something like Alien Vendetta or Hell Revealed, both of which have some large maps with lots of textures and things onscreen at once, they'll stress a lot of systems out with the jDTP and jDRP. If it's just the stock IWADs, try disabling a couple of the models or lowering their visibility range, etc.

DaniJ said:

The reason it was backed out is that not all OpenGL implementations are thread-safe. It was quite simply a mistake that was fixed. We should only ever be calling OpenGL from the main thread.

Exactly - that was a premature optimisation to move it out.

DaniJ said:

I fail to see your logic here. It was a bug that skyjake found and so fixed. Why not fix it then and there when it was so trivial to do so?

Of course you fail; you deliberately confuse the points. Skyjake did the correct thing. If you re-read, you'll see I'm referencing the LOS changes.

DaniJ said:

Whether you saw them or not is irrelevant, to be frank. The bug report in question was, and still is, in our tracker.


http://sourceforge.net/search/?group_id=74815&words=%22line+of+sight%22&Search=Search gave zero results. Excuse me for being dense, but wouldn't it be logical to have the words "line of sight" in a bug report for line-of-sight issues?

DaniJ said:

Absolutely. I wasn't the one who decided to turn this thread into a "let's bash the development practices of the Doomsday dev team" thread.

You should never ask me to answer a question you don't want to hear the answer to. You of all people should know I'll answer it, even if it isn't what you want to hear. I'm certainly not bashing the development practices; I'm just stating a fact. If I wanted to bash, I have other, more valid flame-worthy topics I could post on.

So, in conclusion, to make peace, I'll agree with you that Doomsday is at fault, and not the NVidia driver. Are we all happy now?

Yagisan said:

http://sourceforge.net/search/?group_id=74815&words=%22line+of+sight%22&Search=Search gave zero results. Excuse me for being dense, but wouldn't it be logical to have the words "line of sight" in a bug report for line-of-sight issues?

If bug reports were always logical we would be living in an ideal world :) See here: http://sourceforge.net/tracker/?func=detail&aid=2655883&group_id=74815&atid=542099

You should never ask me to answer a question you don't want to hear the answer to. You of all people should know I'll answer it, even if it isn't what you want to hear. I'm certainly not bashing the development practices; I'm just stating a fact. If I wanted to bash, I have other, more valid flame-worthy topics I could post on.

No project is perfect or ever runs smoothly. You know I too have plenty of issues with our own development practices, and the current situation does not make dealing with them any easier. As you well know, there are all kinds of issues with our current codebase, and I am working as quickly and effectively as I can (my opinion, of course) to rectify them whilst still maintaining a regular release cycle and pushing towards the stated goals for the 1.9.0 architecture. Naturally, because I am working hastily, there will be temporary regressions and, on occasion, silly mistakes will be made. The truth of the situation is that doing all the things we want to do with Doomsday requires a serious amount of work, and getting there in an _acceptable_ time frame requires such an approach.

So, in conclusion, to make peace, I'll agree with you that Doomsday is at fault, and not the NVidia driver. Are we all happy now?

I don't believe I ever did blame NVidia's driver but yes, I'm happy.

JDoom ran great with my 1999 Sony VAIO until I shot the super shotgun, the weapon sparked, and Windows 98SE blue-screened.

GhostlyDeath said:

and Windows 98SE blue-screened

You just have to get used to it. That's typical of mustdie.

I have almost the same system as the OP; however, I ditched my old FX5200 (2:4:4, 250/200MHz) for an AGP HD3850 (320:16:16, 668/826MHz). I went from something like 1002 in 3DMark03 to over 25000.

I know nothing about coding an OpenGL renderer, but I think a little old NV30/NV40 is going to struggle running the jDTP, MD2s and all the fancy lighting no matter what you do. Turn the settings down or get a better card.

jDoom isn't Crysis, but it's not doom2.exe either.

Super Jamie said:

I have almost the same system as the OP; however, I ditched my old FX5200 (2:4:4, 250/200MHz) for an AGP HD3850 (320:16:16, 668/826MHz). I went from something like 1002 in 3DMark03 to over 25000.


The FX5200 is probably the worst POS I've ever seen. Back in 2004, when I bought a computer with one preinstalled, it was even slower than my old system with a GF3. Needless to say, I replaced that card within a week with something more powerful.


jDoom isn't Crysis,


No, it isn't. But what many tend to forget is that Doom - no matter what flavor it comes in - is very hard to optimize for 3D hardware. All the things that would really speed up rendering are hindered by some of the stuff the engine does with its geometry.

Essentially, a Quake 2 engine will probably run much better with a hardware renderer than a Doom engine.

Yagisan said:

Not until they fix the renderer. Well, strictly speaking, the problem is more that NVidia drivers in particular no longer optimise for "immediate" mode, which Doomsday uses heavily. This is further compounded by issues with NVidia's driver threaded optimisations, which further penalise applications that use "immediate" mode.



Sorry to come back to this so late.

Yesterday I had to update my driver so that I could run an OpenGL 3.0 application which the old one didn't support.

So I took the chance to do some performance comparisons with GZDoom to see if your claims could be proven.

The results were quite interesting:

1. I got a 5-10% performance increase across the board, despite using immediate mode exclusively for rendering.
2. I also checked threaded optimization. Again the results were the opposite of your claims: with threaded optimization on, the game ran 10-15% faster.

So where is the degrading driver performance you claim to be present? Seems to me that you just forwarded some flawed information.
