Reaper978

When might this technology be available?

Recommended Posts

I often find myself thinking and feeling in ways that can't be expressed in words, or in any other medium for that matter. Other times I have ideas or "conceptualizations" for a work of art or piece of music, but there's no way to express those ideas or record them. I'd never be able to paint or draw the kinds of things I imagine, because they would take very advanced drawing skill to create.

Could it be possible to have a technology that reads the thoughts of an artist and then "paints" what they're thinking? I, who have no skills in physically drawing things, would benefit immensely from such a device. But without such technology, I can only imagine places and images, not paint them.

I'm reminded of the part in the Final Fantasy movie when the main character's dream is recorded and saved by a machine. That is another hypothetical technology that I'd be interested in trying out, as my dreams are interesting but hard to remember.

I'd like to see my thoughts and feelings actually recorded, the same way sound is recorded onto a vinyl record.

I typically have tons of ideas for art and music flash by in my mind without any way to express them. I wish there were some kind of computer that could take my ideas and put them down in some concrete way. I guess that's all I have to say.

Share this post


Link to post

Interesting stuff right there. I kinda doubt there will ever be a machine that could go inside your thoughts and imagination and record or draw what it saw, but hey, technology is moving fast. It all kinda sounds like something out of a sci-fi movie.

Share this post


Link to post

I had a discussion about just this maybe a month ago. I still think that information derived directly from the brain would have to be 'translated' and 'packaged' (to use non-technical terms) in order to be readable to anyone else. So you'd still have to filter it through a semi-external language, even if it's not as exterior to your thoughts as, say, your spoken language. Think of disagreements over things like style and colour, or the taste of food. Sure, you can express how much you love broccoli (whether verbally, pictorially, aurally, or even telepathically), but you're still going to have to somehow convey why it is you love broccoli so much. The reason why is probably beyond even your own conscious knowing, and likely has more to do with your genetics than anything you decided for yourself. And even if you did convey it perfectly, people who receive your impressions on love of broccoli will still only comprehend those feelings from the perspective of their own ego.

The only way to fully grok what it's like to be, for instance, me, is to literally be me.

tl;dr I believe people 'think' in different modes from each other, and this would still present a barrier to communication.

EDIT: clarified my thoughts. X-D

Share this post


Link to post

If such a device were feasible, it would imply that everybody -quite literally- a) thinks alike, and b) is capable of fully picturing what they want, in order for a device to "read" it and extract it. Neither is true.

It might work after training/adaptation to its user's thought patterns and the user learning to think in a way that's easier for the machine to decipher, but that in itself would require conscious effort, determination and skill (and again there would be discrimination between those who would be able to use such an instrument proficiently, and those who would not even be able to pass the calibration test).

Kinda like it's easier to build excavators with a bunch of straight levers and train a human operator to handle them correctly, than to design some sort of "glove" or advanced interface which would make digging as easy as making scooping motions with your hand or thinking about digging ;-)

Share this post


Link to post

Heck, I don't even know why I love Doom so much. I just saw it on a computer and started playing.

Probably something to do with my mother; fuck if I know!

EDIT: There's also the fact that humans come essentially pre-wired to learn and express complex language. As far as I know, we have no such analogue for brain-to-brain communication. Essentially, it's as if two ants were philosophising "wouldn't it be great if we could yell at each other over distances to communicate rather than having to smell each other's ass-trails? I would be so happy."

Bad analogy, but maybe you get what I'm saying. God, I wish you could just read my thoughts...

Share this post


Link to post

Since humans are -apparently- unable to assimilate new, arbitrary knowledge any faster than their senses can read, hear or see it (NB: this does NOT cover inductive/associative thoughts/calculations, which can cover very broad topics with relatively little effort/input), any "brain to brain" communication would probably need to "piggyback" on visual, auditory or textual cues, with all the limitations on the recipient's observational ability, language expressiveness/knowledge and listening & reading comprehension that this would imply.

It would indeed be a major breakthrough in neurology if a mechanism were discovered that DOES NOT rely on being able to speak/see/read/hear/feel something in order to understand it. Bonus points if it were independent of the individual's mental development, education or cultural background, for it would mean that there is indeed a universal "human programming language".

Share this post


Link to post

That sounds more like the work of a very advanced AI, one that approximates information from one brain to another so that the recipient can understand, as closely as possible, what the sender intended to convey. Which would actually be pretty neat...

I could see something like that making 'actual' communication obsolete. Especially if the AI had some sort of directive, like 'maximise efficiency' or 'increase global net happiness.'

Share this post


Link to post
schwerpunk said:

That sounds more like the work of a very advanced AI, one that approximates information from one brain to another so that the recipient can understand, as closely as possible, what the sender intended to convey. Which would actually be pretty neat...

I could see something like that making 'actual' communication obsolete. Especially if the AI had some sort of directive, like 'maximise efficiency' or 'increase global net happiness.'


Both these goals rely on very strong and so far unproven cultural/linguistic/neurological/psychological assumptions.

The most significant is that it's possible to express knowledge or convey specific emotions/thoughts in a culture- and language-independent way, which is simply not possible except for very basic and primal concepts. E.g. a deep growling sound or wild arm gesturing would probably not be interpreted as "friendly" by any culture, so as long as you keep concepts to caveman level, mutual understanding is probably possible.

But once you throw language- and culture-dependent things into the mix, things will diverge a lot. E.g. my GF tried to make me take a test where I was to assign a round, fluffy shape and an angular, thorny shape to one (artificial, nonsensical) word each, chosen so that they would be nonsensical in my language (Greek). Much to her surprise, I assigned them "contrary to norm", simply because they reminded me of very specific concepts in another language I knew, so they automatically were not nonsensical to me. E.g. the word which was supposed to sound "harsh, aggressive" sounded ridiculous to me, so I didn't assign it to the angular shape, as I "should have" ;-)

I argued that, IMO, there was no way such a test, which involves the use of language, could be made totally culture-independent and equally effective on test subjects of arbitrary education levels.

Similarly, unless it's discovered that there's indeed a common human "language behind the language" which every human possesses and which can be "spoken to" directly, it will be nearly impossible to achieve non-verbal "direct" communication that is totally culture- and education- independent.

The closest thing there is, Sign Language, IS in fact TOTALLY culture- and education-dependent, despite what one might think about signs -signs for concepts should be instinctive and similar for every human being, right? (Hint: they are not. Not even saying "Yes" and "No" by nodding/shaking your head is universal.)

Share this post


Link to post
Reaper978 said:

Could it be possible to have a technology that reads the thoughts of an artist and then "paints" what they're thinking? I, who have no skills in physically drawing things, would benefit immensely from such a device. But without such technology, I can only imagine places and images, not paint them.

I strongly suspect that such a device would be fundamentally impossible to create, because although it sounds like a simple concept, in practice it's the details that make the artistic work, and you probably don't actually have the imagination to fill them all in. I don't mean that in an insulting way, merely to say that an actual artist who has painted hundreds of paintings knows how to add particular detailing to achieve a particular goal (whether that's realism or something more abstract). Good artists have spent many hours experimenting with different techniques and seeing how particular colours or shading might affect the final work. If you haven't done that, not only do you not know how to paint, you don't know "what" to paint either.

The same is true of other forms of artistic work. A simple example is music. I remember a friend of mine who was learning to play guitar telling me how he could hear the tune in his head; he just didn't have the skill to play it properly! But who knows if it would have been any good anyway? The division is more clear-cut in music: there's the skill of playing an instrument and the skill of being a composer. Even if you could somehow make a machine that would do one for you, it still wouldn't do the other.

Share this post


Link to post
Aliotroph? said:

This isn't a completely wacky idea. Here's the closest I've seen.


The most interesting aspect is how the models used need to be "calibrated" to each subject -again, proof that we don't all "think alike", and that direct-to-brain communication may prove an even greater challenge than overcoming natural language barriers.

Each of us has his own "machine language", so to speak, and for all we know some could "function" in totally non-standard ways while looking "normal" (more or less) on the outside.

Share this post


Link to post

It's an interesting idea.

You need to distinguish between, on the one hand, a machine which, on the basis of a brain scan, could produce an image that reproduces a piece of mental imagery, and - on the other hand - a machine which, on the basis of a brain scan, could produce a sentence that captured the propositional content of a thought or judgement. Think of this as tracking the distinction between visualising a typical tomato, and thinking to yourself, 'Tomatoes are typically red and spherical' - the first essentially involves imagery, but is not evaluable as true or false, whereas (arguably) the second does not essentially involve imagery, and is evaluable as true or false.

The theory behind the machine (I guess) would be that specific sorts of brain activity - that is, electrical activity at certain locations - can be reliably correlated with a subject's thinking a thought with a certain content, or visualising an image of a certain sort, and so if we can learn enough about how differences in brain activity track differences in content/image, then, on the basis of a brain scan, we can produce a sentence or an image which accurately reflects what the subject thought/visualised on a given occasion.

An immediate problem is restricted generality - the same function can be performed by different areas of the brain in different people, and a single function can be taken on by different areas of the brain as subjects age and/or suffer injuries. Of course, there are, in most of us, broad similarities, but probably not enough for translation rules to apply generally. So, the machine would always have to be calibrated to individuals, hence the calibration process in the Berkeley study which Maes picked up on.
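
(To make the calibration point concrete, here's a minimal sketch of what per-subject calibration could look like, assuming - purely for illustration - that a scan is reduced to a vector of voxel activations and that a simple linear decoder is fitted separately for each person. The names, sizes and simulated data are made up; this is not the actual pipeline from the Berkeley study.)

```python
# Toy illustration only: per-subject decoder calibration with a linear model.
# All sizes and "encoding" matrices are simulated assumptions, not real data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trials, n_voxels, n_features = 300, 400, 8   # hypothetical experiment sizes

# The same stimuli are shown to every subject...
stimuli = rng.normal(size=(n_trials, n_features))

def simulate_subject():
    """...but each subject's brain encodes them with its own random 'wiring'."""
    encoding = rng.normal(size=(n_features, n_voxels))
    return stimuli @ encoding + 0.1 * rng.normal(size=(n_trials, n_voxels))

voxels_a, voxels_b = simulate_subject(), simulate_subject()

# Calibrate a decoder on subject A's first 200 trials.
decoder_a = Ridge(alpha=1.0).fit(voxels_a[:200], stimuli[:200])

# It generalises to held-out trials from the SAME subject...
print("A decoded with A's model:", decoder_a.score(voxels_a[200:], stimuli[200:]))
# ...but fails on another brain with different wiring, hence per-person calibration.
print("B decoded with A's model:", decoder_a.score(voxels_b[200:], stimuli[200:]))
```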

But, with respect to thought, I think the calibration process might be insurmountably complicated, since there is no upper limit to the kinds of thoughts that a subject can think - even with a modest store of concepts (and our stores are typically far from modest) and syntactic structures, you can produce an unlimited number of thoughts that differ in their contents. This creates a lot of problems for producing something that might function in the way that you envision. (There are some other complicated reasons for being skeptical here, but this post is already too long!)

With respect to mental imagery, I'd echo something that fraggle said. Visualise a small group of frogs - now, is there an answer to the question of how many frogs you just visualised? Or visualise a ripe tomato - is there an answer to the question of which specific shade of red you imagined it to have? I suspect the answer to each of these questions is 'no'. The point is that mental imagery is - typically - indeterminate in various respects, and it's unclear how we could accurately translate this to a concrete representation of such imagery - the machine could not fail to fill in detail that, strictly speaking, wasn't there - I can't paint a small group of frogs without painting a specific number, nor can I paint a ripe tomato without painting it a specific shade of red - so the output of the machine would, necessarily, lack fidelity.

Share this post


Link to post
durian said:

The theory behind the machine (I guess) would be that specific sorts of brain activity - that is, electrical activity at certain locations - can be reliably correlated with a subject's thinking a thought with a certain content, or visualising an image of a certain sort, and so if we can learn enough about how differences in brain activity track differences in content/image, then, on the basis of a brain scan, we can produce a sentence or an image which accurately reflects what the subject thought/visualised on a given occasion.


That is the theory, more or less, but what is severely lacking here is resolution and capture accuracy: the brain is estimated to contain about 100 billion neurons, and a much higher number of synapses and possible "states", even if it were revealed to use binary or ternary "logic".

How can a "snapshot" which, at best, is limited to a few thousand data points (the "imaging" process) even hope to capture everything, or even just enough data? Those methods are doing a very low-resolution, low-speed capture of what's going on in the brain, based only on a few "inspection points".

To use an audio analogy, it's like sampling bird chirps at baseband with a 3 kHz sampling rate: you'll record an arbitrary sequence of something, with plenty of aliasing, but you will have forever lost most if not all of the original information, and reconstruction would be impossible without additional assumptions.
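
(A quick numpy sketch of that aliasing point, with arbitrary illustrative numbers: a 5 kHz tone captured at only 3 kHz shows up as a 1 kHz tone, and nothing in the recorded samples can tell you which of the infinitely many compatible tones was really there.)

```python
# Undersampling demo: a 5 kHz tone sampled at 3 kHz aliases down to 1 kHz.
# Numbers are arbitrary, chosen only to illustrate the "bird chirp" analogy.
import numpy as np

fs = 3_000                       # sampling rate (Hz), well below what the signal needs
f_tone = 5_000                   # actual tone frequency (Hz)
t = np.arange(0, 0.01, 1 / fs)   # 30 samples
samples = np.sin(2 * np.pi * f_tone * t)

spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(len(samples), d=1 / fs)
print("apparent frequency:", freqs[np.argmax(spectrum)], "Hz")   # ~1000 Hz, not 5000
```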

Share this post


Link to post

Indeed - that all seems absolutely correct.

I guess my thought was that even if fMRI imaging were improved to a point at which these issues could be overcome, there would still be massive barriers to moving from a brain scan to a sentence that accurately reflected the content of a thought.

Share this post


Link to post

I remember seeing an article about an experiment where these peeps wore these brain-wave detector deals and were asked to focus on an image or a line of text, and the computer was supposed to decipher what they were looking at purely from the brain-wave patterns detected by the headset deal. While the interpreted image wasn't crystal clear, it WAS clear that the image produced by deciphering the brain waves was what they were looking at.

I don't think it's much of a stretch to use similar technology to interpret what the user is imagining rather than what they're directly observing.

[edit]Just noticed Alio's post. That's pretty similar to the article I saw, only the one I saw was just interpreting still images (and it wasn't with an MRI machine, it was with some kind of custom made halo-lookin' device that sat on their heads)

Share this post


Link to post

I've had the exact same thought a few times myself, especially the thing with recording dreams. I often find myself remembering bits and pieces of dreams, but not entire ones. It would be like a feature-length movie.

Share this post


Link to post

Nobody said it was impossible, just that the resolution required to get anything close to even a static web-resolution image (compressed 20 times with lossy JPEG) is 2-3 orders of magnitude above what can be done today.

And a full "brain status scan" requires a resolution of at least 6-7 orders of magnitude higher than what MRI can provide.
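
(A back-of-the-envelope check of those orders of magnitude, using commonly cited ballpark figures - treat every number below as an assumption rather than a measurement.)

```python
# Rough orders-of-magnitude comparison; all figures are ballpark assumptions.
import math

neurons     = 1e11   # commonly cited estimate: ~100 billion neurons
synapses    = 1e14   # on the order of thousands of synapses per neuron
fmri_voxels = 1e5    # a whole-brain fMRI volume is on the order of 10^5 voxels

print("voxels -> neurons :", round(math.log10(neurons / fmri_voxels)), "orders of magnitude")   # ~6
print("voxels -> synapses:", round(math.log10(synapses / fmri_voxels)), "orders of magnitude")  # ~9
```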

The ideal would be to wire sensors to individual neurons and monitor signals directly, but of course this can't be done for medical and ethical reasons, so an indirect approach is the only viable one.

What's interesting is that this form of indirect signal gathering has a lot in common with TEMPEST procedures -there too, you have to deal with very weak, indirectly-gathered signals or weakly-correlated external manifestations (often illegally/covertly), and try to guess the "state" of the system behind them by coupling low-accuracy observations with other intelligence.

Share this post


Link to post
Nomad said:

I don't think it's much of a stretch to use similar technology to interpret what the user is imagining rather than what they're directly observing.

Indeed. I think that in the case of mental imagery things are much more feasible, although the issue of indeterminacy that I raised above would still pose a problem.

Still, if we can imagine subjects that can visualise things to maximal determinacy, then I see no problem – in principle – with making a machine that produces pictures accurately depicting the things that subjects visualise, in the way that they visualise them.

As Nomad suggests, if we can do it for bona fide visual experiences, then we should be able to do it for experiences of visualising – FWIW some of the areas of the brain that are critically involved in seeing are also involved in visualising, although the implications of this for the relationship between seeing and visualising are unclear.

But note that bona fide visual experiences are typically (so long as your eyesight is good) unlike episodes of visualising in that the things that are presented to us in vision are presented to a high degree of determinacy, and so there are definite answers to questions of the 'how many?', 'what shade?' sort that I posed earlier for visualising. To the extent that it's indeterminate what we visualise, it's unclear to me how we're to produce an image which accurately depicts it.

Share this post


Link to post

This is a thought experiment I've had many times; it's always very stimulating to discuss. I see the differences in brain activity as being akin to the differences in operating systems. Each OS has its idiosyncrasies, but all are based on programming languages that are fundamentally similar enough to allow for cross-compatibility. So a "dream machine" could potentially understand the fundamental structure of all human brains, then adjust itself to each individual's nuances. I think it would only require human intervention at first. Any AI operating on a near-human level should be able to handle it if a human can.
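
(If you wanted to sketch that "shared core, individual adjustment" idea in code, it might be organised something like the toy class below. The class name, the alignment step and all the shapes are hypothetical - an illustration of the architecture, not a description of any real system.)

```python
# Hypothetical "shared core + per-person calibration" layout, for illustration only.
import numpy as np

class DreamDecoder:
    def __init__(self, shared_weights):
        self.shared = shared_weights   # the common "fundamental structure", learned once from many brains
        self.align = None              # the per-individual adjustment, learned at calibration time

    def calibrate(self, voxels, template):
        # Least-squares map from this person's voxel space onto the template
        # space that the shared weights were trained in.
        self.align, *_ = np.linalg.lstsq(voxels, template, rcond=None)

    def decode(self, voxels):
        # Individual nuances are handled by `align`; everything else is shared.
        return voxels @ self.align @ self.shared
```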

Thoughts and dreams are difficult to visually describe. There's this unmistakable sense that they're flat, transparent sheets overlaying our vision, with lots of missing and constantly changing information. Sometimes they seem to have more solidity than other times. If you've ever tried to explore a dream environment, it will behave like a glitchy 3D rendering. Could this information be displayed volumetrically with voxels? And what about the missing information? Would the machine just fill it in with black? And what about hallucinations? Would the machine record your "real" vision in addition to the hallucination?

Technology is accelerating exponentially, not linearly. Undoubtedly there are advances that will take longer than expected, and others that will arrive next year. A dream/thought recorder doesn't appear to violate physical or quantum laws. It should be possible, but it has terrifying privacy issues if taken advantage of by the government, and endless sources of creativity/perversion if made a consumer product.

Share this post


Link to post

Oh, and to address the thread's question: I believe that in the next few decades we'll see very crude, low-resolution reconstructions of some thoughts. The ideal level of detail we all want to see is, I would assume, nearly a century away.

Share this post


Link to post
GoatLord said:

Oh, and to address the thread's question: I believe that in the next few decades we'll see very crude, low-resolution reconstructions of some thoughts. The ideal level of detail we all want to see is, I would assume, nearly a century away.


Even at low detail, however, the implications for psychology/sociology may have a far greater impact than the CS or electrical engineer's wet dream of being able to produce "high quality" dream recordings or interpret them.

If thought patterns are revealed to be completely unique for each individual -kinda like fingerprints-, this will have major implications in medical, religious and sociological fields -quite literally, no two people will be alike, so religious leaders will be more than happy and savour the "triumph" of Man's uniqueness, Free Will and inscrutability to Anyone But God. Then again, neurologists and psychiatrists/psychologists won't be too happy: this will mean that universal diagnoses and cures for mental illnesses or neurological damage won't be possible.

If, however, patterns are revealed to be predictable and common between different individuals -or worse, if specific thought groups are proven to exist (groups of people who think alike but differently from other groups)- this will have nefarious consequences for mind control, brainwashing, manipulation and discrimination (if you think in way X, then you must be Y), though it will of course make medical treatment of neural damage and psychological problems easier. Most religions will also be shaken, as the not-so-uniqueness of the Individual (and even the non-existence of Free Will) will have been partially proven.

Share this post


Link to post

I'm betting on the latter, Maes. Ever notice how often in history two people invent the same thing independently of each other, sometimes even in the same year? Or how you and a friend might utter the same exclamatory phrase in perfect unison? People are genetically very nearly identical. I strongly suspect whatever electrochemical processes allow for consciousness are fundamentally similar across the globe.

Share this post


Link to post
Maes said:

If thought patterns are revealed to be completely unique for each individual -kinda like fingerprints-, this will have major implications in medical, religious and sociological fields -quite literally, no two people will be alike, so religious leaders will be more than happy and savour the "triumph" of Man's uniqueness, Free Will and inscrutability to Anyone But God. Then again, neurologists and psychiatrists/psychologists won't be too happy: this will mean that universal diagnoses and cures for mental illnesses or neurological damage won't be possible.

If, however, patterns are revealed to be predictable and common between different individuals -or worse, if specific thought groups are proven to exist (groups of people who think alike but differently from other groups)- this will have nefarious consequences for mind control, brainwashing, manipulation and discrimination (if you think in way X, then you must be Y), though it will of course make medical treatment of neural damage and psychological problems easier. Most religions will also be shaken, as the not-so-uniqueness of the Individual (and even the non-existence of Free Will) will have been partially proven.

I'm afraid I don't really understand what you're saying here, and so it's difficult to evaluate your claims about the implications of the possibilities that you canvass. What precisely do you mean by 'thought patterns'?

Share this post


Link to post

Kinda like there are different but codifiable blood groups or face shapes.

Share this post


Link to post

Oh I understand that part - grouping people in terms of their similarities in certain respects - what I don't understand is what the respect is in which, on your consideration, people might be found to be similar/dissimilar. You say 'thought patterns', but I don't know what you mean by that.

Share this post


Link to post

Brains "wired" in particular ways, or the expression of thoughts or behaving in characteristic ways -kinda like some people can move their ears, some people are left-handed, some people just can't "get" maths while some can paint paintings, people with dyslexia crawling camel-style etc.

Some of these are already observable with traditional macroscopic psychiatric/neurological means, but MRI would be far more revealing, and would allow one, e.g., to screen candidates for a job ("this guy X has a thought pattern associated with laziness, don't hire him").

Share this post


Link to post

I see. It's an interesting point.

I think it might be useful here to distinguish between, on the one hand, bona fide psychological phenomena, including (but not limited to) conscious psychological episodes - like thoughts and sensory experiences - and, on the other hand, neural events and processes. It's clear that, in individual cases, the occurrence of neural events and processes of a certain sort constitutes a condition on the occurrence of psychological events and processes of a certain sort, but the relationship between the two is, to say the least, unclear.

As such, it's not clear that what's revealed by MRI are characteristic patterns of thought, rather than neurological patterns that are associated, statistically, with thoughts, or (more generally) with psychological events or processes of certain sorts.

That said, it's clear - without peering into people's brains - that we can type, or sort, people in terms of their dispositions, given their background beliefs, to think and act in certain ways in certain circumstances - or, more generally, by their psychological abilities - and it would certainly not be unreasonable to expect that when two people are thinking in similar ways, similar neurological events are occurring in the brains of each (although this is obviously an empirical hypothesis that would need to be tested - an issue would be to determine the respect in which neurological events are to be counted as similar - one option here is to look for functional similarities, rather than mere brute physical similarities, which seems to me to be sensible).

Now if, through performing such scans, certain ways of thinking could be reliably correlated with certain patterns of neurological activity - which seems to be within the realms of possibility - then it's plausible that, just by looking at the results of a scan, we could accurately judge that a subject was thinking in one way rather than another - e.g., engaging in spatial reasoning rather than performing mental arithmetic, or planning what to eat for dinner rather than reminiscing about yesterday's meal - although there's an issue concerning how fine-grained our judgements could really be, just on the basis of the scan, but we can bracket that.
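
(In machine-learning terms, that correlation step amounts to a classifier. Here is a toy, simulated version - the voxel "signatures", labels and sizes are all invented for illustration, so only the shape of the argument matters.)

```python
# Toy "which mode of thought is this scan showing?" classifier on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_scans, n_voxels = 200, 100

# Pretend "spatial reasoning" (0) and "mental arithmetic" (1) each have a
# characteristic voxel signature, plus per-scan noise.
labels = rng.integers(0, 2, size=n_scans)
signatures = rng.normal(size=(2, n_voxels))
scans = signatures[labels] + 0.5 * rng.normal(size=(n_scans, n_voxels))

clf = LogisticRegression(max_iter=1000).fit(scans[:150], labels[:150])
print("held-out accuracy:", clf.score(scans[150:], labels[150:]))   # high, by construction
```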

Still, scans like this - that record neurological activity through some period of time - are not obviously going to tell you that someone's brain is 'wired' to produce neurological activity of a certain sort - all they can tell you is that it is producing activity of that sort. Of course, if something is acting in a certain way, then it's capable of acting in that way, but that doesn't tell you that it's 'wired' (whatever that means) to act in that way.

To elaborate: all the scan can tell you is whether or not neurological activity, statistically associated with a subject's thinking, at that time, in a certain way, is occurring. If the activity is occurring, then you'd have a reason to judge that the subject is thinking in the relevant way, and if it's not occurring, you have a reason to judge that the subject isn't. But, if the activity is occurring, insofar as this gives you a reason to judge that they're thinking in the relevant way, then - plausibly - it would also give you a reason to judge that they're capable of thinking in that way. So, you'd be in a position to infer, on the basis of the scan, the presence of a certain psychological capacity that was being exercised by the subject on that occasion. But this doesn't get you to the claim that the subject is 'wired' to exercise that capacity, or that they're characteristically disposed to do so, and so it wouldn't be much of a basis on which to infer their likelihood of behaving in certain ways in the future, or to infer the presence of something approaching a trait.

I guess if you had a scan which monitored a subject for an extended period of time - say, at least a day - you'd be in a better position to make these kinds of predictions, but it's not obvious to me that you'd thereby be in a tremendously better position than you would if you simply observed them for a day. Of course, you might be able to pick up on some things that you can't just by observing, but really you can learn a lot about what people are thinking, and the ways that they're disposed to think, by studying the ways in which they're disposed to act in certain circumstances.

So anyway, I think it would be unreasonable to expect that the psychological similarities - that is, similarities in occurrent psychological states and processes - that we already recognise, don’t have some reflection at the neurological level, but I'm not sure that this fact, in and of itself, would be of far reaching significance.

Share this post


Link to post

If projecting someone's thoughts (or indeed, even understanding them) is so impossible, how is it that I can see my own thoughts and understand them so perfectly?

Share this post


Link to post
Reaper978 said:

If projecting someone's thoughts (or indeed, even understanding them) is so impossible, how is it that I can see my own thoughts and understand them so perfectly?

You don't actually 'see' them so much as understand them. The idea of an image that exists in the brain is, I believe, a fallacy. The truth is probably more along the lines of an overlapping plane of personally relevant symbols and associations.

For instance, how you see a tractor would be drastically different from how someone without any knowledge of what a tractor is would see the same thing. Optically, you'd be looking at something identical, but in the language of your brains, two very different conversations would be going on.

That's how I see it anyway. Heh.

Share this post


Link to post
