Can you construct sentience?

Bank

After hearing that a new, remastered, complete-footage version of Blade Runner is to be released in January, I decided to watch the movie again to refresh my memory of one of my favorite films of all time. While I love Blade Runner's atmosphere and action, the layers of symbolism and the moral questions it raises have always been my favorite parts. That, in turn, got me thinking about what it means to be human, or more appropriately, sentient.

The replicants, the human-imitating robots in Blade Runner, were built with the capacity to develop emotion. They were also given a short lifespan of four years as a fail-safe, and would often act unstably because of the short time they had to build those emotions, much as one might during puberty. To address this problem, their makers built a replicant with false memories, to give it a cushion. That replicant never knew it wasn't human. But this raises the question: is the replicant programmed to act human, and to act as though it didn't know it wasn't human, or does it genuinely feel these emotions? Is it an imitation of sentience, or the creation of a completely new sentience?

When we think of robots, we think of AI. This artificial intelligence is merely a mathematical evaluation of the effects of a situation on the robot itself and on the others around it. This means that, for a given situation, it will always react the same way as long as the affected parties remain in the same state. It will never react irrationally. But what if you built in, much as a programmer would with software, a list of requirements that must always be satisfied? Something like Asimov's Three Laws of Robotics, but for emotions.
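
To make this concrete, here is a minimal Python sketch (the situation, actions, and weights are all invented for illustration) of what such a purely evaluative AI amounts to: a fixed scoring of each action's effects, so the same situation always yields the same choice.

[code]
# A toy utility-evaluating agent: it scores each available action's
# effect on itself and on the parties around it, then picks the best.
# Because the scoring is a fixed calculation, identical situations
# always produce identical choices; nothing irrational can happen.

SITUATION = {
    "flee":   {"self": +0.9, "others":  0.0},
    "fight":  {"self": -0.4, "others": -0.8},
    "freeze": {"self": -0.1, "others":  0.0},
}

def utility(effects):
    # A fixed weighting of self-interest against harm to others.
    return 0.6 * effects["self"] + 0.4 * effects["others"]

def choose(situation):
    return max(situation, key=lambda action: utility(situation[action]))

print(choose(SITUATION))  # always "flee" for this input, on every run
[/code]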

It was stated in the movie that a replicant's only crime was wanting to be human. This means that at some point they were programmed with a need for self-preservation. When you think about it, that is also a human's only instinctual goal: we are driven to stay alive as long as possible and to pass on our genes through reproduction. So if you gave a robot a goal such as self-preservation, would it in turn develop sentience in order to achieve it?

We may not think of it this way, but every action we take is an assessment of the situation and of how it affects us and others, much like a robot following its programming. So if you programmed enough choices to assess, would it become sentient? Or would it merely imitate sentience?

Discuss.

Can you program intuition, the sixth sense, a soul or whatever you want to call it? If yes, then we can construct a sentience. If not, it'll never be even close to the real thing.

I don't think that programming a bunch of choices will create sentience. That's how insects behave: they have a bunch of preprogrammed behaviors and that's it. I don't know the answer to sentience, but I bet it has something to do with parallel operations.

Shtbag667 said:

I don't think that programming a bunch of choices will create sentience. That's how insects behave: they have a bunch of preprogrammed behaviors and that's it.

Wikipedia said:
Sentience refers to possession of sensory organs, the ability to feel or perceive, not necessarily including the faculty of self-awareness.

I'm pretty sure insects are sentient. And it's not programmed behavior; the only thing that is programmed is a directive. How to accomplish that directive is open to interpretation (a rough sketch of the difference follows below).

I'm assuming that the machine is self-aware as well, but it does not necessarily have to be in order to be sentient.
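
A rough Python sketch of that distinction (the toy world and the goal are invented): nothing below enumerates behaviors; only a directive is given, and the steps that satisfy it are found by search at run time.

[code]
from collections import deque

# Directive vs. behavior: no state-by-state behavior is programmed
# here. Only the goal is given; how to reach it is "open to
# interpretation", worked out by breadth-first search at run time.

def achieve(start, goal, neighbors):
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])

# Toy world: states 0..9, where a step adds 1 or 3. Directive: reach 7.
print(achieve(0, 7, lambda s: [s + 1, s + 3] if s < 10 else []))
[/code]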

Yeah, if sentience were intelligence, perhaps we'd be Homo sentiens instead of Homo sapiens.

Shtbag667 said:
That's how insects behave: they have a bunch of preprogrammed behaviors and that's it.

In a way, an insect is to a man what a calculator is to a computer. There's a difference in degree of autonomy and complexity, but we're like "brothers" in design. Humans are also relatively conditioned, in any case. We often overrate our autonomous intelligence, deluded by our personal consciousness.

Automated devices (the ones with a relatively complex response to stimuli) are already sentient to a degree, compared to a rock, at least. I'm pretty sure the level of complexity of "artificial intelligence" can still be increased considerably, as long as nothing catastrophic happens to humanity in the foreseeable future.

All I know is I hate emotions, philosophy, and other such weaknesses and wastes of time. If you ask me, the sooner we develop into Daleks/Cybermen, the better.

GooberMan said:

Only if its buttons responded positively to my soft caress.

If only the buttons responded to frustrated smashes faster...

Once psychologists truly discover how the brain works, sentience will very much be possible. And after all, isn't the brain just a biological machine that each of us relies on to get through life? And maybe these sentient machines will develop Alzheimer's as they age and the RAM in them disintegrates!

doom2day said:

If only the buttons responded to frustrated smashes faster...

Once psychologists truly discover how the brain works, sentience will very much be possible. And after all, isn't the brain just a biological machine that each of us relies on to get through life? And maybe these sentient machines will develop Alzheimer's as they age and the RAM in them disintegrates!


Old people's home for Robots :)

"Oh no, I forgot how to process what my visual sensors are detecting -- I'm blind!!!"

There is no evidence the brain is anything but a big neural network that has been optimally trained. Each neuron presents a certain resistance to the current it receives, depending on which dendrite is excited, modifying the resulting output along the axon: essentially a simple multiplication by a coefficient.

Take a look at "simple" 12-neuron networks and you'll see that, after some careful connection planning and layering, you can distinguish between faces, provided you simplify the input a bit. My father works on this. Artificial intelligence is not a matter of creating endlessly complex finite-choice machines; these problems are actually calculated by a weighted mesh of multiplications, and pretty much nothing more (plus some moderately complex error-prevention methods).
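
For the curious, a forward pass through such a network really is just a weighted mesh of multiplications. A minimal Python/NumPy sketch (the layer sizes echo the 12-neuron example, but the weights are random placeholders; a real face-recognizing network would be trained):

[code]
import numpy as np

# A tiny two-layer feedforward pass: each neuron multiplies its
# inputs by per-connection weights (the "resistances"), sums them,
# and squashes the result. Nothing more exotic happens inside.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(12, 64))  # 64 simplified inputs -> 12 neurons
W2 = rng.normal(size=(2, 12))   # 12 neurons -> 2 outputs ("face A/B")

def forward(inputs):
    hidden = sigmoid(W1 @ inputs)  # weighted sums, then squashing
    return sigmoid(W2 @ hidden)    # the same again for the output layer

print(forward(rng.random(64)))  # two scores, e.g. which face it "sees"
[/code]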

It's just a matter of ramping up processing power. Intelligence holds no special place in Nature; even though it looks scarce and unique to us, it's built upon principles we have started to master.

Bank said:

Wikipedia said:
Sentience refers to possession of sensory organs, the ability to feel or perceive, not necessarily including the faculty of self-awareness.


I think you actually mean sapience, as you seem to be talking about judgement and not sensation, though I'm surprised you missed this because it was only two sentences away from your quote:

Wikipedia said:
The word sentient is often confused with the word sapient, which can connote knowledge, consciousness, or apperception.


and there is also this:

Wikipedia said:
Some science fiction uses the term sentience to describe a species with human-like intelligence, but a more appropriate term for intelligent beings would be 'sapience'.

Zaldron said:
Artificial intelligence is not a matter of creating endlessly complex finite-choice machines,

Which would naturally be very uneconomical (possibly requiring complexity rivaling or exceeding that of the environment it interacts with). Instead, economical complexity is created through the interaction of simpler parts (just as society is made of people, who are made of cells, which are composed of molecules, which are made of atoms, and so on).

Mindless Rambler said:

I think you actually mean sapience, as you seem to be talking about judgement and not sensation


No, I'm referring to the ability to sense or feel, not imitate that ability. I'm open to the idea that a robot could develop sapience, but my question regards sentience.

And what does feeling mean to you? It is only input, again, perceived through electrical stimulation. Our feelings are nothing but the perceived "sensation" of a chemical rushing over the body and modifying certain mechanisms while the brain subconsciously misbehaves due to the "almost anything matters" nature of neural networks.

The fact that you are an organic machine doesn't give you any true advantages beyond the millions of years of refinement through evolution. In fact, you're made of commoner elements with inherent limitations to their strength, hardness, electrical conductivity and reliability. Can you picture an artificial neural network made with semiconductors? With quantum principles? Minds that work tens of thousands of times faster than ours?

Zaldron said:

The fact that you are an organic machine doesn't give you any true advantages beyond the millions of years of refinement through evolution. In fact, you're made of commoner elements with inherent limitations to their strength, hardness, electrical conductivity and reliability. Can you picture an artificial neural network made with semiconductors? With quantum principles? Minds that work tens of thousands of times faster than ours?


Yes, I can. But if a robot only truly imitates and never develops sentience or even sapience, then it may operate on a completely different level than our minds and still be inferior to a thinking human. It wouldn't necessarily mean copying a human brain; it could mean finding out exactly what makes us react the way we do and transferring that to a series of programmed situations where a choice is made much as a human makes a choice.

If a robot became sentient and could feel, that would put it one step closer to making irrational decisions. If it were sapient, it would most definitely have that capacity.

Does a robot hand truly feel the hand it shakes, or does it merely imitate feeling by knowing where every part of that hand is at all times and reacting accordingly?

You've brought up a very interesting point: if the human body is just a series of mechanisms, then does a robot gain actual sapience by having those same mechanisms?

Bank said:
If a robot became sentient and could feel, that would put it one step closer to making irrational decisions. If it were sapient, it would most definitely have that capacity.

Irrationality can be seen as error (like "bugs"), or as the coherent but not evident workings of a more complex body (systematic or gregarious behavior). Both cases already apply, or can apply, to what can be called artificial intelligence.

Isn't "sentience" already the event-based mechanism of most systems existing? It feels, it acts.

One great discovery remains: how the brain works and how it is made. Once we know that, we shall be able to do to the real world the same things we now do to virtual, imaginary computer graphics and sounds. The shaping of reality, hehehe.

printz said:

Isn't "sentience" already the event-based mechanism of most systems existing? It feels, it acts.

One great discovery remains: how the brain works and how it is made. Once we know that, we shall be able to do to the real world the same things we now do to virtual, imaginary computer graphics and sounds. The shaping of reality, hehehe.


Do robots feel? I would assume that they calculate how far away the surface is from their own body in order to "touch" it effectively.

And holographic displays are not too far away, Discovery Channel says 50 years. I'd like to see it personally.

Some of you seem to make an arbitrary distinction between robots and humans that doesn't exist. Humans and robots are both arrangements of matter that exhibit certain behaviours we label as intelligence, sentience, whatever. They are both kinds of machines: one biological and built by evolution, the other electronic and built by design. If you accept that one arrangement of matter can manifest sentience, then why not another? Is it not just a prejudice of your birth?

Imagine a race of spacefaring robots whose creators are so long extinct that they have passed out of even their creations' memory. They come to Earth and encounter biological life for the first time. Might they not look at human life and say, "Well, it may seem sentient, but it's just a clump of mud and slime trained to mimic intelligent behaviour by millions of years of trial and error. Everyone knows intelligence is a trait peculiar to electronic circuitry"? Would they really be any more right or wrong to make such pronouncements than we are?

When we speak about "sentience", "consciousness" or "intelligence", we are not talking science, we're talking philosophy. And speaking philosophically, these things are just observed behaviours, divorced from the physical how and why of their arising. If a thing can exhibit those behaviours to the same extent that we can, or more, then that thing is also intelligent, sentient, whatever. We do not have any yardstick by which to say "this method of achieving intelligent action is valid, this method is not".

So here's my two cents:
The brain is a series of linked neurons. (I don't really know much about its physical anatomy.) But here's the thing: when you get down far enough, a synapse either triggers or doesn't trigger. At that level, there's no mind to make up. The same structure of a mind should always act the same way in the same situation. However, because of reality's ever-changing nature, it is impossible for us to perfectly duplicate any situation. As far as I can tell, a mind is just a series of "if this happens, do this" rules, but with so many variables that it is nearly impossible to duplicate.
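
That all-or-nothing firing is easy to model. A minimal Python sketch of a threshold unit (weights and threshold invented): an identical structure given identical inputs produces an identical output, every time.

[code]
# At the bottom level, a unit either fires or it doesn't: if the
# weighted sum of its inputs crosses a threshold, output 1, else 0.

def neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Same structure, same situation, same result -- every single time.
print(neuron([1, 0, 1], weights=[0.5, 0.9, 0.4], threshold=0.8))  # -> 1
print(neuron([1, 0, 1], weights=[0.5, 0.9, 0.4], threshold=0.8))  # -> 1
[/code]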

[offtopic]When I saw the topic, I swear it said "Can you construct a Sentence?"[/offtopic]

In relation to sentience, I feel that it would be nearly impossible to achieve.

There was a TV show back in 2002 or so, set around 2010, in which the internet starts playing up and sending porn to government ministers and the like. It turns out that the programs people sent out "into the internet" to search for things and organise them better have joined together, and the combined entity has used the vast computing power of every connected PC to become conscious. And then Rebus came on the other side, and that was better.

In the far future, I suppose it may be possible to build a computer that uses the tiniest quantum particles as its switches, allowing billions and billions of them to be put into a humanoid-shaped machine, which could theoretically have human-like intelligence. I imagine programming such a machine would be a bitch, though; you'd really have to build millions of them and let them loose somewhere to "evolve" their own intelligence and ways of doing things, a process which would at first result in the destruction of nearly all of them as they try to find out what happens when you leap off a cliff. Still, maybe they could use wi-fi or something to "share" their "findings", so that the ones that kill themselves would be transmitting what they were doing to the others, and the survivors would know not to do the same thing. That would speed the process up a bit, but it could still take centuries.

Bank said:

Yes, I can. But if a robot only truly imitates

And don't you think true imitation is simply the exact same thing? If we take inspiration from Nature's approach to make our own version, but fundamentally keep it conceptually the same, what is the difference? Is our electricity any different from that of lightning, simply because we muster it artificially and don't go around rubbing millions of cubic meters of clouds together?

If we conceive in vitro, is the resulting creature not alive or sentient, given that we only mimicked a natural phenomenon?

For example, when an AI comes to the point where it must choose to keep sustaining structural damage for some kind of longer-term benefit, despite full knowledge of the dreaded implications of that damage, who are you to tell it that it doesn't feel pain? It must look hard and deep to keep dealing with it; it must have very good reasons which, were they not present, would compel it to immediately get out of harm's way. Exactly like us. The fact that something feels "ouchy" is just a nervous report that you have, over time, associated with a bad feeling, so as not to underrate its malignancy. How is that any different?
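
As a sketch of that tradeoff (in Python, with all numbers invented): "pain" here is just a heavily weighted cost term, and the agent keeps enduring the damage only while the discounted future benefit outweighs it, exactly as described above.

[code]
# Pain as a cost term: the agent keeps sustaining damage only while
# the discounted future benefit outweighs the accumulated harm. Drop
# the benefit, and it gets out of harm's way at once -- just like us.

def should_endure(damage_per_step, future_benefit, steps_to_payoff,
                  discount=0.9):
    pain = sum(damage_per_step * discount**t for t in range(steps_to_payoff))
    return future_benefit * discount**steps_to_payoff > pain

print(should_endure(1.0, future_benefit=50.0, steps_to_payoff=5))  # True
print(should_endure(1.0, future_benefit=0.0, steps_to_payoff=5))   # False
[/code]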

If the only directive is survival, and the means are not programmed, then the mind and the machine would both have to simulate possible courses of action to determine what's best. I think the big difference is in the level of awareness. A machine properly programmed to understand a mechanical model of the world (physics and sociology, most likely: you have to withstand the elements as much as you have to avoid being lynched by people) should be able to analyze a situation and act appropriately, possibly simulating outcomes internally. I know that's how I do it.

The way humans do it may seem irrational to observers, but observers are not privy to the full, inexpressible experience and internal simulation of the thing. Retrospect is also always more accurate than the view from the ground.

Unprotected sex, eating junk food, having a cigarette, driving while intoxicated: all have roots in the survival instinct, I'd say. Every "wrong" thing we do could probably be traced back to how we think we can best affect our own survival. Unprotected sex, for instance, might be explained as reproductive instinct (a sort of survival). Junk food is immediately available in response to a feeling of urgency caused by hunger pangs. Smoking a cigarette, at least the first one ever, is for a lot of people a response to social pressure. We're social creatures, probably for survival reasons: not just reproductive drive, but protection by friends from others. So when you trace it back, though cigarettes can kill you, you smoke anyway in response to a more immediate perceived threat to survival.

The thing about analyzing input and simulating courses of action is that you can't just sit and run simulations all day. You have to simulate and take input at the same time, on the fly, whether to respond to new threats to your survival or to add new variables to whatever simulation you're running. You might not find the right answer in the time available, so you terminate the simulation and commit to whatever course of action gave the best results in simulation (whether the originally intended success or not). But people don't fully commit: they never found the right answer, and now they're acting on a hunch. I guess we could introduce faith and hope here, but they seem mostly like distractions from full commitment to action, at least in this model.
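
That "simulate until time runs out, then commit to the best result so far" loop can be written down directly. A rough Python sketch (the actions and their outcome distributions are invented):

[code]
import random
import time

# An "anytime" decision loop: run quick simulations of candidate
# courses of action until the deadline, then commit to whichever
# looked best so far -- even if no simulation found a "right" answer.

def simulate(action):
    # One imagined outcome of the action (toy model: noisy payoffs).
    mean_payoff = {"hide": 0.2, "run": 0.5, "talk": 0.4}[action]
    return random.gauss(mean_payoff, 0.3)

def decide(actions, deadline_seconds=0.05):
    best_action, best_score = None, float("-inf")
    stop = time.monotonic() + deadline_seconds
    while time.monotonic() < stop:      # time's up? stop simulating
        action = random.choice(actions)
        score = simulate(action)
        if score > best_score:
            best_action, best_score = action, score
    return best_action                  # commit: the best guess wins

print(decide(["hide", "run", "talk"]))
[/code]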

There are some very mechanical ways to think about human action here. The trick is having the right programming language to feed it sequentially through a processor (and sequentially is probably just plain not the best way).

I like this as a model for selecting a course of action: http://www.thebigview.com/buddhism/eightfoldpath.html - it's a set of unspecific directives that can be applied to any goal, really. They're... variable? All a matter of continuing to analyze input.

What I've heard is that the human brain makes neural connections based on experience. So, let's say you exercise right view, intention, speech, action, etc. You won't get it right all the time, I'm sure, but you'll have neural connections in place for doing certain things better in the future.

It's all quite logical, in fact.

Somebody mentioned feelings, and somebody said they're a matter of reacting to chemical sensations. Well, I think they might be more primarily logical. Pain is feedback intended to tell you that you're at risk of injury or death (and hopefully not too late). Fear of rejection goes back to that security-within-society thing.

"Does a robot hand truly feel," I saw. Do you truly feel? Or do you take tactile input, recieve a stimulus telling you it's there, and then act accordingly to established neural procedures? If this hand in your hand is offered in friendship, it's social security, and it's a good feeling. You may note other things about it, such as that the hand is rough, or cold, that aren't of any particular noteworthiness, but they're brought to attention because they're either unusual features (based on pre-established neurology) or because they're familiar but not common and might become relevant, concievably, in some subconscious simulation.

(As an aside, I think the conscious/subconscious split is just like dedicating less processor time to processes that are deemed less important.)

"Irrationality can be seen as error," I also saw. I think actions percieved as irrational are just decisions that were made in situations where conditions were not right to perform a satisfactory simulation or take in complete input, resulting in a "best guess" scenario. So yes, there's an error, but it's not that the mind is leaving the correct process for determining what the best concievable course of action is, it's just that the simulation didn't work out.

As for robots gaining sapience by having the same mechanisms as a human: I don't think you necessarily need the same bits and pieces to achieve the same ends along the same general model.

If all things are in constant motion, and this constitutes both life and death, where the ending of one construct is the beginning of another, then life and death are both constant. Could intelligence be definitively a conscious push for survival of one's construct? Fighting the life and death cycle? Is fighting for life the same as fighting for stagnation and death, on some level? (Peripherally on topic, but barely.)

Dan: Only if it were a SEXY robot body. Cyberskin.

I think we run into problems when people ask questions like "is exact mimicry of life the same as life itself?" and then declare "Life IS" this or that. I prefer to leave the questions open, and to see that we just keep experimenting and attempting to create. Creation is the generally agreed-upon divine act, no?

pre-emptive tl;dr

If you could make a machine as intricate as a human being, it would behave exactly like a human being and appear to be sentient. However, sentience is being aware of yourself. The machine is not sentient, because it consists only of very complex yet purely mechanical behaviour. The machine is capable of sensing the world, just like us, but it isn't aware of it.

However, life (sentience) may come to it. Before human beings are born, we are merely the lifeless matter we are made of. Our bodies in turn are comparable to an intricate machine, just made of organic matter instead of metal. At some point in our development, sentience comes to us. Some people (including myself) believe this to be the soul.

Therefore, if we ever develop an AI at a level comparable to our own or to an animal's, sentience may come to it. Not because we made the intricate machine sentient, but because sentience can manifest itself in a complex 'thing' and make it alive.

In short, life isn't made, it makes itself.

Terra-jin said:

If you could make a machine as intricate as a human being, it would behave exactly like a human being and appear to be sentient. However, sentience is being aware of yourself. The machine is not sentient, because it consists only of very complex yet purely mechanical behaviour. The machine is capable of sensing the world, just like us, but it isn't aware of it.

However, life (sentience) may come to it. Before human beings are born, we are merely the lifeless matter we are made of. Our bodies in turn are comparable to an intricate machine, just made of organic matter instead of metal. At some point in our development, sentience comes to us. Some people (including myself) believe this to be the soul.

Therefore, if we ever develop an AI at a level comparable to our own or to an animal's, sentience may come to it. Not because we made the intricate machine sentient, but because sentience can manifest itself in a complex 'thing' and make it alive.

In short, life isn't made, it makes itself.

so basically the life fairy sprinkles magic pixy dust on it and it's like "omg im alive now i kin feal it"

srsly thx 4 poastin

You expounded brilliantly on your points.

Terra-jin said:
but because sentience can manifest itself in a complex 'thing' and make it alive.

There's no evidence of such a thing. On the contrary, intricate relations create systems such as lifeforms. Life is a result (or aggregation) of combined simpler bodies, not something extra that is placed upon the smaller-scale relations to make them all alive once they are together.

The concept of a soul (especially when dissociated from the material) is equivalent to our mechanism of abstract reasoning; with it we are pointing out something in the way we see things, rather than in the things that we see.

Something that is aware of itself is something that can react to its own actions. You are "aware" of yourself because your bodily and active functions must be coordinated and centralized in order to function successfully as a relative unit in the environment.
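
In that minimal sense, self-awareness is just a feedback loop. A toy Python sketch (the behavior rules are invented): the agent's own previous action and internal state are part of what it reacts to on the next step.

[code]
# "Aware of itself" in the minimal sense: the agent's own internal
# state and previous action are fed back in as part of the next
# decision, so it reacts to what it itself just did.

class FeedbackAgent:
    def __init__(self):
        self.last_action = None
        self.energy = 10

    def step(self):
        if self.energy < 5:               # reacting to its own state...
            action = "rest"
        elif self.last_action == "work":  # ...and to its own last action
            action = "check_results"
        else:
            action = "work"
        self.energy += 2 if action == "rest" else -1
        self.last_action = action
        return action

agent = FeedbackAgent()
print([agent.step() for _ in range(8)])
[/code]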

Danarchy said:

If you could put your brain into a robot body, would you?


The Butlerian Jihad series of Dune books had brains in robot bodies that would stimulate each other's pleasure centers, thereby having BRAIN SEX.
