
The future of programming.

Recommended Posts

While programming languages keep advancing and developing, they remain blind to a problem that will hurt many of us in the future.

The problem is the lack of a straightforward way to program for multiple cores. The hardware keeps advancing, but neither the software nor the programmers can catch up, so tons of power is wasted as a result.

Programming languages need to invent a straightforward way to tell the CPU which core/SPU the work should go to; for now we have to put up with tons of complex messes that do this. And it's even worse when you program for the Cell, which is an absolute clusterfuck to get the hang of!

The trouble is, since this isn't a perfect world, nobody will help programmers of other languages come up with a straightforward way to program the cores/SPUs. Too many try to hit the jackpot on their own and leave everyone else behind. We already have 7 cores on CPUs at this point, and it's not far off until we get 16 cores, 32 cores and so on. The technology keeps advancing; neither the software nor the programmers can catch up with it.

It is a major problem that is already affecting us visibly (read: poorly optimized multi-core games), and it will only get worse in the future. We have so much ridiculous power, yet we can't make use of it because we don't have good enough tools.

What's your opinion, guys, on this hell of a futuristic mess?


Depends; the level of software advancement doesn't have to be in phase with hardware development. For example:

software tries to reach 125 fps
hardware (along with its own software) supports 670 fps

This ensures that there shouldn't be any fps drops at all, and the number of frustrated Quake Live players would sink by perhaps at least 25%.

And isn't hardware today mostly developed with a machine doing at least 50% of the work? Software takes at least around 70% of the manpower, I guess.


However, the gaming leagues will meet new challenges: new game consoles and new games, all for the sake of technology.

However, StarCraft II is one example of the future of mapping for many people; its editor is even more advanced than the Warcraft III editor.

But StarCraft II has the same growing problem as many other modern games: it's totally shit, or at least totally shit compared to its predecessors.

the perfection of DOOM keeps DOOM alive


Whilst I've only grazed the surface of the subject in my studies, the threading system seems a simple enough way to distribute work to multiple cores/threads. How this translates to consoles I couldn't say, as it's something I have no actual experience of.

alterworldruler said:

Programming languages need to invent a straightforward way to tell the CPU which core/SPU the work should go to; for now we have to put up with tons of complex messes that do this. And it's even worse when you program for the Cell, which is an absolute clusterfuck to get the hang of!



Wrong, wrong! Wrong!!!

Software should never have to bother with such low level stuff. What's really needed are programming languages that can natively implement parallel algorithms.

What's also needed, in a world where graphics hardware offers a huge number of parallel processing units, is to move away from compiling directly to assembly code and instead target some intermediate code that can be compiled at run time into machine code for the processors it's going to run on. Only with a greater level of abstraction from the hardware will the true power of parallel systems ever be usable. OpenCL and similar approaches are a good start, but they're far too cumbersome to use. What's needed is a language that seamlessly integrates this stuff, instead of going through a complex layer to a different programming language just for the parallel parts.

Direct control over certain processing resources is utterly counterproductive here and will do more harm than good because nobody knows if said resources will still be available in future computing systems.

Phobus said:

Whilst I've only grazed the surface of the subject in my studies, the threading system seems a simple enough way to distribute work to multiple cores/threads. How this translates to consoles I couldn't say, as it's something I have no actual experience of.


It could still use some improvement, though. I can imagine a system based on 'jobs' which are attached to a job queue and then processed by the OS transparently across multiple cores/CPUs, without ever bothering the user with the gory details of thread handling.
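A minimal sketch of such a job system, in Python for illustration (the process_jobs helper and its worker count are my own assumptions, not an existing API): jobs are attached to a queue and worker threads drain it, so the caller never touches a thread or a core assignment directly.

```python
import queue
import threading

def process_jobs(jobs, workers=4):
    # Attach each job (a zero-argument callable) to a queue; worker
    # threads pull and run them, so the caller never handles threads.
    q = queue.Queue()
    for job in jobs:
        q.put(job)

    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                job = q.get_nowait()
            except queue.Empty:
                return  # queue drained, this worker is done
            r = job()
            with lock:
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(results)  # completion order is nondeterministic
```

The point of the sketch is that the caller only ever sees "jobs in, results out"; which thread (or core, via the OS scheduler) runs each job is hidden.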

Parallel computing has just entered the mainstream, and I'm confident we haven't even scratched the surface of the changes it will bring to programming.


I'm with Graf here. Software is more reliable and easier to reuse when you keep programmers working on the problem they're trying to solve and let computers worry about hardware details. Most programmers are bad at writing threaded code, and letting them deal with hardware directly will likely breed unreliable code, as well as failing to guarantee it will run on future hardware, as Graf mentioned.

Other areas have already abstracted the hardware away from the guys writing applications. Think about things like DirectX letting game devs write hardware-independent code or Java and C# hiding pointers and memory allocation from sloppy programmers. The result is programs that work better -- at least when there is no pressing reason for the programmer to write low-level stuff.


There are a lot of programming languages, extensions and libraries that offer a certain degree of automated parallelization for constructs such as loops, scatter/gather operations, etc.

However, they are more oriented toward the "number crunching" type of programming (e.g. OpenMP for SMP and MPI for distributed environments) than toward the sort of multithreading found in, say, a web browser or a web application.

If, however, you restrict your application domain to number crunching, image processing, etc., then parallelizing a loop is often as easy as adding a

#pragma forall

or a similar directive before the loop, and the compiler will construct the necessary multithreaded or distributed message-passing code for it. This of course means you get a fairly rigid parallelism based on scatter/gather and fork/join control flow, but that's exactly what you need for most number-crunching apps. Even Doom itself could benefit from this sort of "hard and dumb" parallelism in some parts of its renderer ;-)
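For reference, the real OpenMP directive is spelled #pragma omp parallel for. The fork/join shape it produces can be sketched in Python with a thread pool; this parallel_for helper is my own illustration of the pattern, not OpenMP itself:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_for(body, items, workers=4):
    # Fork: each independent iteration is handed to the pool.
    # Join: map() returns the results in order once every one is done.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(body, items))
```

Used as parallel_for(lambda x: x * x, range(8)), it behaves like the serial loop but may run iterations concurrently, exactly the "hard and dumb" scatter/gather described above, and only safe when the loop body has no side effects.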

Now if you want to code some fucked up thing with 1000 threads sending messages to each other and blocking on semaphores in OpenMP...your bad ;-)


Parallel programming is undoubtedly the future of programming; the question is how it is to be done. Clearly the traditional method of threads and locking is much too heavyweight (and fragile). Really, the problem is with the whole locking aspect: I've found that even after reading through threaded code for hours to check for correctness, it's still easy to end up making mistakes like this.

So in my mind the real question is what we can use instead of locking. Approaches like OpenMP don't really address this problem at all, so I don't see them as particularly interesting. Locking is really a way of serializing access, so that only one thread touches the data at a time. I rather like Go's approach of message passing through channels between threads: it's a conceptually simple mechanism that does the job of mutexes much better. I recently saw this, which is sort of similar in a way.
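A rough sketch of the channel idea in Python (standing in for Go's native channels; channel_demo is purely illustrative): the two threads never share mutable state directly, they only pass messages over a queue, so no explicit locking appears in user code.

```python
import queue
import threading

def channel_demo():
    # The queue plays the role of a channel: producer sends,
    # consumer receives, and the queue itself handles synchronization.
    ch = queue.Queue()

    def producer():
        for n in range(5):
            ch.put(n)      # send a message
        ch.put(None)       # sentinel: no more messages

    received = []

    def consumer():
        while True:
            msg = ch.get() # receive; blocks until a message arrives
            if msg is None:
                return
            received.append(msg)

    p = threading.Thread(target=producer)
    c = threading.Thread(target=consumer)
    p.start()
    c.start()
    p.join()
    c.join()
    return received
```

With one producer and one consumer the FIFO ordering makes the result deterministic, which is exactly the "don't communicate by sharing memory; share memory by communicating" style the post is describing.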

I've recently been learning Haskell, which although clearly a difficult language to learn, is unique in that functions aren't allowed to have any side effects. While this has the advantage of avoiding bugs that side effects might introduce, it also has the additional benefit that programs can just be automatically parallelized by the compiler/runtime. Talk about the holy grail of programming!

While I certainly don't think Haskell is going to become a popular mainstream programming language any time soon, I do wonder if there is the possibility that as the number of cores increases, we might end up with a niche for pure functional languages for certain applications. At the very least I wouldn't be surprised if we end up seeing ideas appearing in mainstream languages such as the concept of distinguishing between pure and non-pure functions. Ideas from functional languages have been slowly making their way into mainstream languages for decades now, so it wouldn't be a new thing.


Approaches like Go or Haskell just make "inherently parallel" problems easier or, at the limit, automatic to code. But in the end, at the lowest level, these must be translated into traditional threading constructs: semaphores, mutexes, barriers, etc., no matter how abstractly and elegantly they may be represented in whatever language.

And once you exceed a certain degree of complexity or control-flow unpredictability (such as user interface or network interaction), or simply have a problem that isn't so obviously parallelizable, you'll have to take care of the low-level details yourself, and every modern programming language should provide at least the equivalent of Java's java.util.concurrent package (I still remember how slow .NET was to catch up on this front).

I also see a lot of confusion of terms here: if parallel "jobs" (in the Unix sense) are all you want, hey presto! That's easy on any SMP machine (which includes multi-CPU, multi-core and even Hyper-Threading machines): run two separate applications ("jobs"), each in its own memory space, and let the OS do the scheduling for you.

You need more control, as in splitting the workload without spawning new processes with separate memory spaces? Then you need threads, available in any modern POSIX-compliant OS. How exactly they are handled depends on the OS itself: in most they are treated just like ordinary processes (but with a shared memory space), and in some they have a special sub-process status. In any case, they are generally not strictly queued in the sense of an old batch-processing system: any modern OS multitasks them in round-robin fashion, just like ordinary processes.

A major exception to this model is MPI, which actually spawns multiple independent processes, each with its private memory, in a master-slave hierarchy; they can only communicate and exchange data through the MPI interface, even over a network. Again, this is only really used for number crunching on computing clusters.

OpenMP is not suitable for GUI or network multithreading because it's ill-suited to handling sparse, random events: it's more useful for those cases where you have a big chunk of easily parallelizable work that you simply want to split across as many cores as you have available, hoping that the OS's scheduling won't let you down.

Then there were older multi-CPU systems, e.g. the Sega 32X with its twin CPUs... those too had an internal master-slave hierarchy and would split specific tasks and sync up through ad-hoc constructs, but the programmers had full control over them.

Multithreading is actually a great parallel programming model: you can create as many threads as you want, and at the very least you'll get round-robin execution, which in itself is no small thing, e.g. for handling async network traffic, GUI input, etc.

The traditional single-tasking way (DOS, older arcade games) would be to use periodic interrupts.

Back to multithreading: IF the hardware you're running on has multiple execution units, your threads will be transparently scheduled on different CPUs, potentially giving you a speed increase (depending on how the threads are synced up, though). You have no guarantees, but if I split a job into 4 threads and fire it up on 4 cores, chances are that, on average, each core will be dedicated to one portion of the job (no under-used or over-taxed ones).

Maybe I make it sound too easy, and a thesis specializing in parallel programming sure helps, but threads ain't half as bad as they're made out to be, and, dulcis in fundo, any other way of "parallel programming" is simply a frontend for threads, not a truly alternative system.

fraggle said:

Parallel programming is undoubtedly the future of programming

I'm not totally convinced of that; it seems to me that CPU speed, RAM and hard disk sizes have already reached a point where they are sufficient for nearly all the day-to-day tasks people use computers for (not counting games here).


It's also for the everyday user. There is always some way to add more features, make programs anticipate users, perform better searches, aggregate more data, etc.


Parallelization might lead to artificial brains, the 'last invention that man need ever make'. I also heard that functional programming with no side effects, like Haskell, is good for it. That was already mentioned, but I just wanted to appear smart or something even though I'm not. Thanks for reading.


Once parallel programming becomes as easy as declaring a class in C++ it will get used for everyday applications.

Saying it's not needed because current hardware is sufficient is incredibly shortsighted.

Maes said:

...
Maybe I make it sound too easy, and a thesis specializing in parallel programming sure helps, but threads ain't half as bad as they're made out to be, and, dulcis in fundo, any other way of "parallel programming" is simply a frontend for threads, not a truly alternative system.


The problem right now is that all the multithreading/synchronization constructs are part of the libraries, not the language. To harness the full power of modern CPUs, we need a programming language designed to help the programmer parallelize tasks, not make it a cumbersome endeavour where he has to manage all the gory synchronization details himself.

Contrary to what you seem to believe, the way most current languages are constructed makes them inherently ill suited for parallel programming. A far greater amount of abstraction is needed to make it usable to the average developer.

Graf Zahl said:

Once parallel programming becomes as easy as declaring a class in C++

Assuming that a way to make parallel programming easy exists is incredibly naive. If people much smarter than you and me haven't been able to crack this nut after decades of trying, then maybe there is no silver bullet.


One shouldn't underestimate the potential for applications to require ever more power. I wouldn't be surprised if in 2020, you will need multiprocessing to run notepad.exe.

andrewj said:

Assuming that a way to make parallel programming easy exists is incredibly naive. If people much smarter than you and me haven't been able to crack this nut after decades of trying, then maybe there is no silver bullet.


I'm not so sure about that. Until now all this was limited by the available hardware. We are still in an age where 90% of all software is distributed as system-specific compiled binaries. To be honest, I have no hope that these problems will ever be resolved if this method of distribution persists. How could such programs ever use the power available on modern graphics cards if their code can't run on them?

So far, any research into this matter has been more theoretical in nature and never entered the programming mainstream - simply because, until recently, there was no benefit in it. But now, with modern graphics cards getting ever more powerful and being designed for parallelization, there is increasing demand for a proper solution, and I'm certain it will eventually come. Not this year, probably not in the next 5 years, but it will come.

Gez said:

One shouldn't underestimate the potential for applications to require ever more power. I wouldn't be surprised if in 2020, you will need multiprocessing to run notepad.exe.


I wouldn't be surprised if in 10 years from now software as it is written today is considered as obsolete as assembly written DOS programs are now.

andrewj said:

Assuming that a way to make parallel programming easy exists is incredibly naive. If people much smarter than you and me haven't been able to crack this nut after decades of trying, then maybe there is no silver bullet.

I don't agree. There already are programming languages that make parallel programming easy, e.g. Haskell (which has already been mentioned in this thread). The problem is that most programmers don't want to use some strange, incomprehensible (to them) language; they want something they already know, probably with C-like syntax and semantics.

Graf Zahl said:

Contrary to what you seem to believe, the way most current languages are constructed makes them inherently ill suited for parallel programming. A far greater amount of abstraction is needed to make it usable to the average developer.


Yeah, using a vectorized "forall" directive in Fortran is truly hardcore stuff.

But I guess it falls under the "number crunching" and "easily parallelizable loops" case.

Reading between the lines of many posts in this thread, many of you seem to wish for some magic way to automatically and transparently split ANY source file you throw at it into many tiny, independent chunks of code that will be dynamically executed in parallel on an arbitrary number of available cores, be there 2 or 1000, all without ANY programmer effort, with data and timing dependencies somehow automatically resolved and/or guessed by the compiler.

If traditional sequential programming is like a personal effort, parallel programming is like managing a team of workers: you have more "hands" at your disposal, but you must tell them exactly what to do, when to start working, and in what order certain tasks must finish, or else they won't even start doing anything or, worse, they'll get in each other's way.

I strongly recommend that anyone wishing to seriously take up parallel programming, with ANY language or platform, at least glance through High-Performance Java Platform Computing. While its code examples are tied to a particular, now obsolete Java parallel code library, it explains the basic techniques very well, and they are general enough to carry over to any language and parallel library. If you can understand the (admittedly not hard) examples and concepts in this book, then you can take on any "traditional" parallel programming method with minimal effort. It's no wonder that java.util.concurrent was based on the book author's work (the book came out in 2001). Hint: it's what I used to prepare for my parallel programming thesis, and I still find it valid after all these years.

Graf Zahl said:

I wouldn't be surprised if in 10 years from now software as it is written today is considered as obsolete as assembly written DOS programs are now.


Hardly. Beginner programmers will still start with a non-parallel "Hello world", and some will never move beyond this model due to the nature of their future tasks (enterprise/web applications, etc.). At most, numerical calculus will be taught with preferential weight on automatic/directive-based parallel tools (probably some future dialect of Fortran and/or MATLAB), and more weight will be given in theoretical classes to parallel computing and the concepts of data dependencies, problems well-suited and ill-suited to parallelization, etc.

Then, if you want to REALLY nitpick, everybody and the cat can be a "parallel programmer" nowadays. Why? Well, just slap together a simple Java applet and fire it up: voila, there are already at least two threads running (one for the GUI and one for your code) without you even knowing!


I should mention that high schools in Poland (the IT-oriented ones) don't teach any programming past Java, though teachers do whine about that. Yeah, we only get to learn Pascal, C/C++ and Java, and that's it. I'm about to reach the C++ part of the programming course in 2 weeks as we speak (bleh, we took WAY too long on such simplicity that is Pascal).

Still, this is a worthwhile discussion.


Maes, I'm sorry, but I have to disagree. Your argument could just as well be used to dismiss object-oriented languages, because you can recreate all the OOP stuff in C, albeit in a much more verbose way that's more prone to errors.

The same logic applies here. A programming language is a tool that's supposed to help the programmer get the most out of his software. Once the language gets in the way of doing things it becomes a burden rather than an asset. That's why no sane programmer uses C to create OOP structures.

With parallel algorithms it's no different. The way current languages are structured they more often stand in the way of implementing parallelization. And no library is going to change that.

Personally, I don't see much of a future for C++, Java, etc. in parallel programming, because they are based on concepts that are inherently non-parallel, with all multithreading features tacked on. The clumsy synchronization objects the programmer has to maintain, which more often than not result in a deadlock, are ample proof of that.

A powerful parallelizable language needs to be based on a different set of basic concepts. The way threads are currently defined should have no place in it whatsoever. It will require some radical changes to how programs are written, way beyond anything people in this forum can imagine. It may well be comparable in magnitude to the transition from assembly language to C.


Well, provide a concrete example of what you wish for, then, because I really can't understand what you're after (unless you wish for a language that automatically makes everything "parallel" for you, whether you want it or not).

In the case of a loop iterating over a set of N elements and doing some trivial task with no side effects, you can parallelize it with a forall statement, and extensions like OpenMP or parallelized Fortran/C/C++/Java compilers already provide that. ONE statement. For a guy who has coded something like GZDoom, that can't be too hard now, can it? ;-)

If you need to somehow run two arbitrary independent functions in parallel (again, assuming they have no side effects on each other), you can do that very easily even in Java/C/C++/C# without caring when they end: just fire them up, add whatever syntactic sugar you want, and don't give a shit about when they finish.

Here, I even propose such a construct:

MagicParallelRun(void* your_function)

However, you often need to be SURE that a task has ended, or know how far it has progressed, and then you NEED some sort of construct to get your bearings.
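Futures are one standard construct for exactly this "need to be SURE a task has ended" problem. A Python sketch (the run_both helper is my own illustration): submitting returns immediately, and result() is the point at which you know for certain the task has finished.

```python
from concurrent.futures import ThreadPoolExecutor

def run_both(f, g):
    # Fire both functions up in parallel; the futures are the handles
    # that let us later block until each one has definitely ended.
    with ThreadPoolExecutor(max_workers=2) as pool:
        fut_f = pool.submit(f)
        fut_g = pool.submit(g)
        return fut_f.result(), fut_g.result()
```

Between submit() and result() the caller is free to do other work, or poll a future's done() method to check progress, which is precisely the "get your bearings" construct described above.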

Again, give a concrete example of how you'd implement, say, summing N elements of an array in Haskell: there MUST be an entry and an exit point to whatever approach you're using. It may be syntactically as sugar-coated as you wish, and it may look "easier" than in C, but conceptually it will be the same thing.

In MPI, I'd call it a "gather" operation. In OpenMP, a "cumulative forall". In Java, I'd probably fire up two threads and use an "Accumulator Future". In parallel Fortran, I'd use a "cumulative forall" similar to OpenMP's, etc.

Maybe in Haskell you can just write a normal loop with an accumulating variable and the compiler may parallelize that for you. If that's what you want, you can get that same functionality with OpenMP, although perhaps not as sugary.

Now, if you think of a more complex function that depends on a lot of intermediate values computed in parallel in separate processes/functions/magic unicorn rainbow threads... I challenge ANYONE to come up with an automatic parallelizing compiler that could automate that without at least some analysis annotations thrown in by the programmer. At best, it will be some sort of NP-complete problem.
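For what it's worth, the gather operation discussed a few lines up (summing N elements of an array) can be sketched in Python (parallel_sum is my own illustrative helper, not any library's API): scatter the array into chunks, sum each chunk concurrently, then gather the partial sums, with an explicit entry and exit point just as the post argues there must be.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(xs, parts=4):
    # Scatter: split the array into chunks, each summed on its own thread.
    step = max(1, len(xs) // parts)
    chunks = [xs[i:i + step] for i in range(0, len(xs), step)]
    with ThreadPoolExecutor(max_workers=parts) as pool:
        partials = pool.map(sum, chunks)
    # Gather: combine the partial sums once every chunk has finished.
    return sum(partials)
```

Whatever the language, the shape is the same: a well-defined fork point, independent partial work, and a join point where the results are combined.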

Maes said:

Well, provide a concrete example of what you wish for, then, because I really can't understand what you're after (unless you wish for a language that automatically makes everything "parallel" for you, whether you want it or not).



If you want to put it in such broad terms, yes. All you say may be true, but it's still approaching the problem from the wrong side (i.e. how to force parallelism onto a mostly serial programming language).

These are merely stopgap measures to work around the lack of a proper solution.

You are still thinking in the concepts of today's serial programming and trying to extrapolate from that. But what's needed here is not language extensions to make parallel computing easier in current languages; it's completely new programming concepts that make the implementation of parallel processes a natural thing coming natively out of the programming language - just like OOP languages make a concept like polymorphism natural instead of burdening programmers with maintaining the virtual tables all by themselves. That's precisely the state we're mostly in with parallel programming right now.

I am fully aware that such things don't come into existence overnight. It's all years away, I think, but how else would you make progress if you don't imagine things first?


So, something like tacit, automatic parallelization no matter what code you write?

Surely that will require A LOT of post-code analysis no matter how expressive a language you use. Since most programming tasks are mostly mathematical in nature, you'll never come up with a language that expresses something totally different (though lambda calculus and functional programming are fucked up enough to be niches of their own).

E.g. if I write two consecutive loops or function calls that don't affect one another, it would be nice if the compiler determined that they are free of mutual side effects and quietly made them into separate threads for me, which in turn would scale automatically to the number of available cores (although that could potentially fuck up cache coherency, so an option to ensure that DOESN'T happen, or to limit the degree of parallelism, would be nice too).

Maybe languages that have guaranteed side-effect free functions are good candidates for this sort of "omni-threading" model, but then again, many common "enterprise" and even numerical programming tasks are actually harder to express in these languages, so it kinda defeats the purpose.

So you'd need a combination of a specially restricted non-procedural language and a powerful intention and dependency analyzer (essentially, a compiler that understands what you were trying to do, what the consequences of running things in parallel will be, and what precautions need to be taken), which amounts to asking the computer to program for you, taking it to a whole other level compared to merely generating "glue" or "template" code.

In standard procedural programming however, you really need all those constructs because YOU, the programmer, are the only responsible entity for what happens to YOUR variables, to shared memory, to global vars etc. so you need the most anal, petty, complete and strict degree of control that you can possibly have.

Functional programming is alien enough as it is, it may have its uses, but sometimes you really need to just be able to tell the machine "Do action X, then action Y, then action Z, if foo==true then do bar etc. etc."


I think Graf has the right idea about the future, where parallel programming is concerned. To get proper use out of the concept, languages and compilers will have to be geared towards it. My guess is that a language with a C-style syntax would probably prove most popular, due to the large amount of familiarity with it. However, what's going on underneath will be very different - it will most likely end up transparent to the programmers, just like many other processes and functions currently are now.

(Excuse me if there's any incorrect terms here. I'm very happy to concede I'm still warming up with programming)


The human brain seems to be proof that parallelization can work. It's interesting that, aside from savants, we suck at math/number crunching but are general enough to learn chess, riding a bike, etc., while computers are kind of the opposite. Evolution is a genius that doesn't even 'try', pumping out insane complexity like a human brain, all the way down to the behavior of sperm and egg (just saw National Geographic's 'In the Womb' series). Nothing is too small or insignificant to avoid being selected and working as part of the whole system; gecko feet use van der Waals forces, etc. So maybe some sort of self organization like evolution is the only realistic way to make very complex things, rather than have a brain trying to orchestrate it all such as in typical modern programs. Brains of neurons, colonies of ants and flocks of birds seem to be organizations of individuals that are mostly symmetrical to each other, operating in parallel. Maybe you could somehow make a software 'ant' and, once you have that, make a colony out of them. I guess that's sort of what OOP does, though I've never even used it.


'I wouldn't be surprised if in 2020, you will need multiprocessing to run notepad.exe.'

Yeah, because it'll be bogged down with web 8.0 with a side window filled with a virtual world of unremovable search companions and a paperclip rendered as a 3d volume that pops up telling you to feed them every 5 minutes. Every letter you type is spoken by a realistic robot voice and animated, and genuine advantage 13.2 phones home every second to make sure you can legally use notepad, plus some secret microsoft root kit calculates prime numbers in the background for secret purposes.

gggmork said:

So maybe some sort of self organization like evolution is the only realistic way to make very complex things, rather than have a brain trying to orchestrate it all such as in typical modern programs.

Yeah. It may be a sort of chicken-and-egg conundrum, in that to create a complex system you'd need a super-intelligent entity already. But we don't know how to design one of those, so the most tempting option is to let it arise of its own accord, as it already did in nature - through evolution.

Of course, evolution is the process probably the least likely to lead to "Friendly" AI. So we might solve the problems of parallel processing and then all get turned into gray goo :P


The brain works in a way totally different from current computer architectures, which are mostly extensions and variations of the classic Turing, Von Neumann and Harvard architectures.

The human brain, OTOH, is more similar to neural networks, which in the current state of the art are simulated rather than actually built in hardware, and they aren't even programmed in the traditional sense in any formal language: they are just "self-organized" and/or "trained" to fit a specific problem. So the day when an easily programmable computer can be made purely out of neural networks is still far, far away (and not practical anyway: a traditional computer would beat it in cost and hardware simplicity for most tasks). Compare, e.g., how microcontrollers are used in lieu of more sophisticated embedded systems in many devices, even today.

There are also other computing constructs, e.g. systolic arrays, which may do some particular tasks in parallel more intuitively than traditional parallel computing, but, once again, they are not easily programmable (the most correct term would be "reconfigurable"), and designing one that performs a specific nontrivial task requires MSc-level specialization, compared to how easily you can teach any McProgrammer worth a damn to use threads.

The Haskell language that Graf seems to be pimping so much is one of the so-called functional languages. While such languages may be more suited to "free-form" unconditional parallelization, they have restrictions and expressive difficulties of their own. Easier threading, sure, and fucking harder to do pretty much anything else that matters commercially, such as OSes, web browsers, video games, etc. ;-)


Haskell? What is such a ridiculous language supposed to accomplish? It's a typical niche product with absolutely no chance of accomplishing anything substantial.

Haskell strikes me as something that is still firmly locked into the old way of thinking, just with more limitations that happen to work in favor of parallelization.


Ah sorry, it was fraggle and gggmork that were pimping Haskell.

