Koko Ricky

Arguments for and against the tech singularity


If you haven't caught wind of the prediction that the middle of this century will undergo a paradigm shift known as the "technological singularity," then here's the scoop: this is supposedly what will happen when worldwide AI intellectual output outweighs worldwide human intellectual output. Having beings smarter than us involved in our affairs would make it impossible to predict the direction technology will go; it's akin to trying to imagine the colors of wavelengths of light your eyes have not evolved to detect. Whether or not it will actually happen is almost as interesting as its implications, so I thought I'd present some arguments for and against this theory coming to fruition and get some feedback.

Why it might happen:
· Like novels, Hollywood films, and video games, the claims of the singularity movement can serve as catalysts for self-fulfilling prophecies.
· Tech acceleration over the decades suggests exponential rather than linear growth, leading to unexpected breakthroughs (see the short sketch after this list).
· With the AI Watson having beaten the world's top Jeopardy! champions, AI has cleared a few of the hurdles involved in communicating through human language.
· AI research is more active than ever, with massive corporations like Google getting involved (for better or worse).
· Brain scans are becoming less invasive while producing higher-resolution images; meanwhile, new innovations in microscopy are allowing for more intimate (and also less invasive) examinations of cell behavior. Such breakthroughs should eventually make the study of consciousness more accessible.
· AI research has slowly moved from trying to create incredibly smart machines taught to perform specific intellectual tasks, toward relatively low-intellect machines that can gradually enhance their own ability to learn.
· Number crunching has been faster in computers than in the human mind for a long time, so an AI of relatively low intellect (such as that of a toddler) could prove incredibly effective simply because it processes information at far greater speed.
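
As a rough illustration of the exponential-versus-linear point above (a toy sketch with arbitrary numbers, not a model of any real trend), here are a few lines of Python comparing a quantity that doubles every two years with one that grows by a fixed amount per year:

# Toy comparison of linear vs. exponential growth over 40 years.
# The starting value of 1 and the two-year doubling period are assumptions,
# picked only to show how quickly the two curves diverge.
for t in range(0, 41, 10):
    linear = 1 + t               # grows by one unit per year
    exponential = 2 ** (t / 2)   # doubles every two years
    print(f"year {t:2d}: linear = {linear:3.0f}, exponential = {exponential:9.0f}")

After 40 years the linear curve has reached 41 while the doubling curve has passed a million; that gap is what the "exponential rather than linear" argument rests on.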

Why it might not happen:
· Pandemic illness could wipe out the vast majority of the human race.
· A nuclear war and its fallout could effectively force society to start over, if not extinguish humanity.
· Global climate change could cause enormous amounts of destruction, either halting or at least pushing back the singularity.
· An asteroid could collide with Earth, extinguishing most life.
· The hard problem of consciousness may be impossible, or at least extremely difficult, to solve, preventing AIs from attaining true awareness and the ability to make voluntary choices.
· AI consciousness may be too limited to surpass human-level intellect.
· Humanity may lose interest in AI research if progress slows down significantly.
· Something similar to Dune's Butlerian Jihad could occur, whereby all devices capable of higher-order thinking are banned from use.
· A "Terminator" or "Matrix" scenario might lead to AIs becoming hostile and extinguishing/enslaving humanity, assuming they have the physical capacity to do so.

I personally think the "against" arguments are a bit silly, but I tried to provide as many as I could think of. The only one I think is legitimate is the hard problem of consciousness, but I strongly suspect we can figure it out.


We're never going to use AI for fundamental research; we're only going to use it to spy and spam. Don't be silly.

Also, the singularity isn't going to happen, because the exponential progress it predicts would require, to bring itself about, an exponential consumption of resources. Computers are not a low-end manufacturing product. The major problems we face, ultimately, all come down to our overconsumption of resources. The singularity is a tech geek's dream, cut off from hard reality. It's an idea dreamed up by people who hex-edited their savegames to give themselves 32767 of each resource type in their strategy games, but you cannot do that in reality, not even with a supercomputer.


It's cheaper not to have machines. People will go with people. Cars will drive themselves until the first lawsuit happens. Cars will never fly, because it's cheaper not to, and idiots would plunge to their deaths.

Machines can beat us, but no one will want or need them when there are always humans around. Sure, there are automatic beer pourers out there, but beyond being a novelty, no one gives a fuck when we can do it ourselves.

What's the purpose of having machines smarter than us? To prove that we can? So they can solve any problem before it happens? Someone has to program them. Not just that, but what if a new problem shows up that wasn't programmed in there? Someone has to program a solution.

Once machines get programs, the viruses come. Suddenly the world's smartest, most efficient machine, robot, or android becomes useless because of a computer virus. It's cool though, because humans will fix them.

Life will always go on.

Game makers can't even get AI right. People spend more freely on entertainment than on necessity.

Sidenote: about the Terminator scenario, why would the machines build giant metal exterminators when they could just poison the air or create small insects to murder people before anyone knew what was going on? That wouldn't make a good movie, though.


I personally find that comic to be a rather poor argument against the singularity, because it oversimplifies the role of technology in human development. Also, I don't know about other nerds, but I am interested in determining whether or not the singularity will occur not because life sucks and the singularity will make it better, but because my research has pointed toward it being a strong possibility.

Resource management is not a problem, because new ways of computing (such as DNA computing, superconductive computing, and quantum computing) inevitably end up using less energy than the tech that came before them. Compare integrated circuits to vacuum tubes. New innovations in computing will be low energy and high efficiency.

There seems to be a vested interest in AI research in making machines as smart as, and eventually smarter than, us, in order to accelerate progress that much faster. Now, why these engineers and scientists want it to be faster is hard to say; perhaps it's endless curiosity. I for one think it will happen, because the trends of the last few decades have favored smaller, cheaper, faster, smarter computers, with AI research slowly tackling the hurdles needed for machines to attain sentience.

The argument about people being cheaper than machines only holds true outside of industrial settings. As AIs become more sophisticated, an increasingly broad range of occupations will be cheaper to handle via machine. That goes back to my argument that the first version of something is often a novelty, or even impractical.

Lastly, I think geo is assuming that AIs would not eventually be able to learn on their own, perform self-correcting scans, cure themselves of viruses, improve their own code, and so on. That is the goal of AI, and there are rudimentary, early-stage simulations suggesting that such self-improvement should be possible.


Isn't it rather fallacious to just handwave away all the issues with "the future will solve it for us in ways we cannot imagine yet"? Which is exactly what the comic Gez and Ling posted was mocking. Extrapolating our current development into the far future absolutely does not work.

GoatLord said:

Resource management is not a problem, because new ways of computing (such as DNA computing, superconductive computing, and quantum computing) inevitably end up using less energy than the tech that came before them. Compare integrated circuits to vacuum tubes. New innovations in computing will be low energy and high efficiency.

And yet, if you compare the energy and resources poured into making and operating vacuum tube computers in the '50s with those poured into making and operating integrated circuit computers nowadays, you'd quickly see that far more resources and energy are being used today.

If you make a new gizmo that consumes ten times less X than the old version, but you end up having a thousand times more new gizmos than you had old gizmos, you're gonna spend a hundred times more X than before.
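
To spell out the arithmetic (writing N for the old number of gizmos, a symbol introduced just for this example): the old total was N × X, while the new gizmos use X/10 each but number 1,000 × N, for a new total of 1,000 × N × (X/10) = 100 × N × X, a hundredfold increase.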

And that's what you see with technology. Computers are everywhere now. There are desktop and laptop computers, sure, but now your phone is a computer, your car is a computer, your watch is a computer, your TV is a computer, your glasses are a computer, your fridge is a computer, your toilet seat is a computer, your motherfucking kitchen faucet is a computer. Every single little thing has to be "smart" and "connected" so that Google and co. can make perfectly-tailored advertising and the NSA will know everything it wants to know about you.

Do you think this trend would stop with the singularity? Or would it instead accelerate even further?


The energy efficiency issue seems to be more of a problem than I suspected. However, I don't think it's "hand-wavy" to assume that radical new forms of computing, such as the ones I mentioned earlier (quantum and superconductive computing being particularly noteworthy), could result in drastic decreases in energy consumption. If the idea is to create machines that are as clever about energy consumption as the human brain, then surely that technology is something we will inevitably arrive at, since we are trying to scan the human brain down to the molecular level, perhaps smaller.

Gez, no, I don't think that trend will stop with the singularity. I think what you're saying represents one of the great dystopic outcomes of the singularity, whereby a nightmare beyond Orwellian proportions makes everyone's thoughts and actions available to higher authorities. That is indeed a disturbing thought.


Screw AI. Can you imagine not having freedom, but being a vegetable? Even Carmack, even if he's an asshole, should be left alone coding if that's all he cares about (I don't think he'd want that taken away). Everyone should be left alone doing their thing, even if others hate it.

If someone is a Satanist, then so what, that's his/her choice. But if he/she breaks into someone's house to rape kids or something, or pulls a 9/11, then yeah, something should be done about it. If Jehovah's Witnesses come knocking on people's doors without a phone call first (and not minding their own business), thinking they know what's best for you, I can understand being annoyed by that. I just don't see the problem with freedom unless it imposes on and alters somebody's personal space.

I honestly think AI is technically possible at some point. But I don't think it'd be reliable in the long run. It'd be like the Matrix movie (everything would be in a self-destructive state). I think even a single nuke over the entire planet's surface might be less shitty, TBH. People are imperfect, but I don't think that's a good enough reason for AI. AI should be controlled by imperfect humans if anything.

Some people have a problem with obesity, slothfulness, and overconsumption. I don't know what to say about that! Lol. There are people who honestly get beaten down in life and become depressed, but then there's the kind that proudly and joyfully use others no matter what.

If people want to use Windows, great. But don't butt in and take my Linux away! That goes for Mac users too lol.


I suspect the Butlerian Jihad is more likely than we realize. Barring everything else, let's say sophisticated AI is possible and, if we work hard enough, will inevitably become a reality. I think we're simply too litigious a society to allow AI to be involved in any real decision-making with any real consequences. For example, we already have drones that could easily bomb targets into oblivion for us, but we still require a human at the controls to make the final call. Whatever progress we're making toward self-driving cars, action is already being taken to halt it, because if a self-driving car is involved in an accident, we don't know whom to sue. I imagine that, 50 or 100 years down the line, we will have incredibly brilliant, advanced AI, but if things keep going the way they have been, that AI won't be allowed to participate in society in any meaningful way. It will be relegated to being a curiosity for researchers and nothing more.


^ Not sure I agree. It does sound feasible when you describe things in that particular narrative, but look how different our progress is compared to the visionary sci-fi. Our entire set of morals is changing along with technological progress, so fleshy things getting upset about robots taking their jerbs and kicking them out of the factories and plane cockpits is totally a 20th-century concept. Gez mentioned smart devices invading literally every aspect of a first-world person's life; that's only going to get worse. There won't be super-computers telling nations what to do; there will be increasingly smart AIs whispering in your ear (and everyone else's), and you will accept it, because that particular AI will adapt to your habits and suggest really cool things to you. It will totally know you better than your best bud, and you will trust it, because a bit of advertisement is just a necessary evil when it makes you look smarter and generally a better person all the time by advising you on what to say and what to buy and what to like. Helper AIs, not replacement or ruler AIs... how do you fight that?

