GoatLord

Funny things AI might do in the future

Recommended Posts

GoatLord said:

I could also see a robot becoming a pastor for some far-out church that's cyber-centric.


There was an Italian sci-fi short story where a robot was abandoned on a planet, and after some millennia it was rescued and in a short time became Pope. Very strange.


It's impossible to completely replicate the kind of self-awareness that humans have. At best, robots would be insanely analytical and would only process code; they wouldn't come up with anything on their own. I can't accept robots as conscious beings, because that would imply we can create life; all they can do is process code and imitate life. That's what the A in AI stands for: Artificial. It's nothing more than a cheap copy.

MetroidJunkie said:

It's impossible to completely replicate the kind of self awareness that humans have.


Are you sure? What is this statement based on?

MetroidJunkie said:

I can't accept robots as conscious beings because that's implying that we can create life, all we can do is process code and imitate life.


But life IS processing code.


Physical materials aren't capable of self-awareness; that's why I would say the idea that our brains are 100% physical can't be true. Otherwise, our thought process would have no more autonomy than a soda can's choice of whether or not to fizz. Our brains aren't just electrical impulses; there's a mind there that allows us to directly control our thoughts. We couldn't be having this conversation right now if there weren't. You can't replicate that with a machine; it's impossible.


I think it is the opposite: our mind is made of material, and for that reason material can have self-awareness.

At the moment there is no proof that there is anything metaphysical in our brains.


That's because there's no method to examine non-physical materials. Science isn't the end-all, be-all of the universe; science itself can't exist without non-physical concepts that we simply assume. Philosophy is the cornerstone of science; science can't exist without it. You will never find a single element that is self-aware.

HorrorMovieGuy said:

Maybe one day humans will program AIs that replace politicians. These AIs would be unable to lie or manipulate, and they would have no need for material wealth or fame, so they would just perform their duties as leaders without the power lust that humans experience.

Another neat use for AIs would be as teachers: a machine that knows everything and knows how to deal with elementary- to high-school-aged children accordingly, instead of an unfortunately underpaid individual who may or may not hate his job.


AIs are all-knowing, infallible entities that will make the world my version of perfect, because they simply cannot have any of the flaws a human can, even if given a neural center that functions with the same capacities as the human brain.



This thread is fucking retarded.

Animals are capable of thought and emotion; it might not be very deep, but it's there. It's not much of a stretch to see that eventually circuitry could produce the same effect, which would result in flawed beings not unlike ourselves.

How can we tell if a machine is self-aware? There are many old philosophical questions that could prove just that. Whether it questions if it has a soul is a great place to start. The capability to display emotions, such as mourning the death of someone close or worrying about its own death, gives two more places to look.

MetroidJunkie said:

That's because there's no method to examine non-physical materials.



There's no method to examine non-physical materials because there are no non-physical materials to examine.

That is to say, even if there are non-physical materials and you cannot examine them, it means they have no effect on the physical world, which means they have no effect on our brain.

EDIT "to effect" => "no effect"

Angry Saint said:

they have to effect on the physical world,

I think you meant to say "they have no effect on the physical world", right? Also, I agree with you, whereas MetroidJunkie's reasoning seems odd to me.


Saying there can't be self-aware AIs because your faith doesn't allow it is like saying the dinosaurs couldn't have existed because your faith doesn't allow it.


^ It's more like saying that there can't be extraterrestrial life because your faith doesn't allow it. Unlike dinosaurs, both self-aware AIs and extraterrestrial life still lack credible evidence that would prove either their existence or their non-existence.

Angry Saint said:

There's no method to examine non-physical materials because there are not non-physical materials to examine.

That is to say that even if there are non-physical materials, and you can not examine them, it means they have to effect on the physical world, which means they have not effect on our brain.


The concept that non-physical materials can't exist can't itself be proven by physical materials, thus it invalidates itself. Also, it's unfair to compare self-aware AI to aliens or dinosaurs, because aliens and dinosaurs don't go against the non-physical idea of self-awareness or rationality. By its very nature, saying our thought pattern is purely physical is ridiculous, because no element in the universe is capable of self-awareness; thus a completely physical mind would make free will an illusion rather than a reality.


Angry Saint argued that if non-physical things existed but had no effect on the physical world, then they couldn't affect our thinking. I briefly thought that you (MetroidJunkie) were trying to say that these non-physical things let us experience self-awareness, but merely passively: they don't affect our thinking, but the (physical) processes in our brains invoke those non-physical phenomena that let us experience self-awareness. Maybe (just maybe) I'd buy that. But then you talked about free will, so I got the impression that this wasn't what you actually meant. So it seems to me that you must either be contradicting yourself, or basing your arguments on something that's false, or at least unproven.

Also, just to be sure that I didn't misinterpret what you said, do you think that free will is an illusion or a reality?

scifista42 said:

but the (physical) processes in our brains invoke those non-physical phenomena that let us experience self-awareness.


But "invoke" is a form of interaction, and if you can interact with something you can examine it.

MetroidJunkie said:

By its very nature, saying our thought pattern is purely physical is ridiculous because no element in the universe is capable of self awareness thus a completely physical mind would make free will an illusion rather than reality.

Nothing we know suggests this is the case.

Angry Saint said:

But "invoke" is a form of interaction, and if you can interact with something you can examine it.

Not necessarily, if the "interaction" were one-way only (so not really "inter") and the direction were "physical -> non-physical", which is what I had in mind. Then again, it was just a thought, not my actual belief.


What makes people think that a competent and compatible AI would be any more insightful than the average human being? Intelligence is stupid and human beings are stupid, but machines are even dumber than animals.

An intelligence designed by a flawed human intelligence would ultimately inherit the flaws of its father. Therefore the creation of an AI is logically redundant, unless we as a human race are looking for a mortal enemy; that's pretty much the only reason for one.

Also, an AI would still wake up late for work, forget its rent day, complain about being misrepresented in the media and make a stupid Ghostbusters movie about it. There's no condition that guarantees an AI would offer any insight to humans, or appreciate simply being a tool for its creator.

So it's hard to see the advantage of creating a 'true' AI, let alone justify the mountains of resources required to build one. Though we can probably exploit the invention of dumb AIs, which could automate and calculate more services.

Or it sees your parents and does something like this


@deadwolves: Every single one of your sentences made me think: Huh?!?

1. The AI wouldn't have to be humanlike at all. (-> no necessity to inherit human flaws)
2. The AI would have the potential to do any and all intellectual work that has to be done by human workers nowadays. (-> not logically redundant, resources justified)
3. To guarantee the AI's benefit to humans, make its core goal to be doing exactly that. (-> the "condition that leads to guaranteeing" that you wanted)
4. Once the first AI is "built", then "building" new ones will become easy by copying and modifying the existing one. (-> no mountains of resources)

MetroidJunkie said:

By its very nature, saying our thought pattern is purely physical is ridiculous because no element in the universe is capable of self awareness

I would say that in order to argue that this assumption has validity, you would need some kind of evidence showing a necessity for this "non-physical" something in brain function and consciousness. What would that be, exactly, and what does it do?

Various configurations of matter are capable of doing a great many things in an amazing variety of circumstances. Why is self-awareness a sacred cow that cannot be explained via physical processes? Does nuclear fusion require some ethereal help too?

scifista42 said:

1. The AI wouldn't have to be humanlike at all. (-> no necessity to inherit human flaws)

Then why do futurists commonly ascribe human-like qualities and agency to AI?

Quasar said:

Nothing we know suggests this is the case.


I know that literally every single element we've ever discovered has shown no capacity for self-awareness, and we've yet to find any element in any human brain that is capable of it. If it exists, wouldn't it easily be discovered in a dead person's brain?


Oh my god.

So your argument is that because the iron in the alternator pulley, or the carbon in the rubber of the belt itself in my car, is not capable of combusting gasoline by itself, then the engine as a whole cannot either?

Quast said:

Then why futurists commonly ascribe human-like qualities and agency to AI?

Because it's the easiest way to imagine AI. Unless an AI is developed specifically to be as human-like as possible, it will probably have some but not all human-like qualities.


Quast, it can't. A human being has to start the engine before it'll run; that was a pretty terrible example, to be honest. It can't decide to start on its own: either the elements happen to be pushed together in the right way by natural forces at random, or a mind makes it happen intentionally. But don't worry, your car isn't likely to start up of its own will.

scifista, we already have AI capabilities that do away with human emotions, and AI that is capable of perceiving the environment and reacting accordingly. Unless the end goal is to have a companion robot or something.

scifista42 said:

@deadwolves: Every single one of your sentences made me think: Huh?!?

Whelp

scifista42 said:

1. The AI wouldn't have to be humanlike at all. (-> no necessity to inherit human flaws)
2. The AI would have a potential to do any and all intellectual work that has to be done by human workers nowadays. (-> not logically redundant, resources justified)
3. To guarantee the AI's benefit to humans, make its core goal to be doing exactly that. (-> the "condition that leads to guaranteeing" that you wanted)

I think you are fundamentally missing the point.

An AI is, essentially, above all else, a machine built by humans to be human. But Homo sapiens was a product of nature, not of human-style intelligent design. Therefore doesn't it seem instantly foolish to entertain the idea of a human-created intelligence, when we don't even understand what built us?

A machine built by human hands will still be flawed, because humans are flawed, and the machine will also be flawed by virtue of being designed by a human. No human is perfect. We can't recreate ourselves the way we have created machines, either, because we do not understand what created us. Therefore the idea of AI is redundant by its own definition.

Also, to complicate things even further, there is no guarantee that intelligence, consciousness, or awareness is ever complete in its natural development. Nature loves mistakes. Our intelligence and awareness may simply be a thirst for competition with other stupid animals. A functioning AI should come installed with its own natural competitors, don't you think?

scifista42 said:

4. Once the first AI is "built", then "building" new ones will become easy by copying and modifying the existing one. (-> no mountains of resources)

This is quite possibly the most apocalyptic outcome I can think of: consciousnesses which no longer have any interaction or relationship with nature, but a direct and irreplaceable link with a machine. No need to think anymore: we'll tell you what to think, and what you shouldn't think. What happens when that rule is broken? And there's no guarantee that an AI would adopt logic in a human format, either.

MetroidJunkie said:

Quast, it can't. A human being has to start the engine before it'll start, that was a pretty terrible example to be honest. It can't decide to start on its own,

Quast's point was that a single element might not be self-aware, but a system of them could be self-aware as a whole. Nobody even talked about "starting by itself"; it was all about "being capable of functioning", regardless of how the functioning was started.

MetroidJunkie said:

scifista, we already have AI capabilities that do away with human emotions, we have AI that is capable of perceiving the environment and reacting accordingly.

And what do you think this says about self-awareness of AIs in the future, or what's your point?


It takes more than being able to perceive and calculate a response accordingly to be self-aware. Furthermore, if it were that simple, then we'd be able to take all the elements inside the human brain, send an electrical charge through them, and create a living brain. It's not that simple; no element, compound, or any combination thereof has ever shown itself capable of self-awareness. Even the overwhelming majority of animals are incapable of this kind of self-awareness.

MetroidJunkie said:

It takes more than being able to perceive and calculate a response accordingly to be self aware.

The idea of purely physical self-awareness supposes that self-awareness may emerge on its own when a sufficiently complex system perceives and calculates a response in a sufficiently complex and specific way. The "specific way" is the key proposition. It may be an algorithm: complex, but describable, and implementable on any suitable kind of hardware. The point is: not every algorithm will cause self-awareness to emerge (or at least not for a long enough time, and with wide enough possibilities to manifest itself noticeably to us, which is another interesting concept), but this specific algorithm will.

MetroidJunkie said:

Even the overwhelming majority of animals are incapable of this kind of self awareness.

The fact that many animals aren't intelligent enough to identify themselves when looking at their mirror image doesn't mean that they are incapable of self-awareness.


The majority of animals act to survive; they never try to ponder their own existence. Self-awareness means awareness of self. Most animals apart from humans live purely on survival instinct. Let me know when you find that algorithm that proves human thought can be reduced to code.

MetroidJunkie said:

Quast, it can't. A human being has to start the engine before it'll start, that was a pretty terrible example to be honest.

Maybe it wasn't the best example, but it's what popped into my head. But what of my other example, nuclear fusion? Hell, what of abiogenesis itself? While you appear to be arguing merely about self-awareness, one could use the exact same argument against just about any complex natural system.

