reki107

Jabberwacky


No, not "Jabberwocky."

From Wikipedia
The stated purpose of the project is to create an artificial intelligence that is capable of passing the Turing Test. It is designed to mimic human interaction and to carry out conversations with users. It is not designed to carry out any other functions.

Unlike more traditional AI programs, the learning technology is intended as a form of entertainment rather than being used for computer support systems or corporate representation. Recent developments do allow a more scripted, controlled approach to sit atop the general conversational AI, aiming to bring together the best of both approaches, and usage in the fields of sales and marketing is underway.


I was like, WTF, and then I was like, OMG

jabberwacky.com

This just goes to show how far computers have come since ENIAC...

EDIT: I just got it to marry me, heh


I got cleverbot to deduce that I was a bot and/or had no life. Damn :-(


This thing is pretty stupid. I asked jabberwhateverthefuckthisthingsnameis how he was doing, and he said something along the lines of "Yes, I am.".

With Cleverbot, I told him to think about green apples, and he ended up saying "I don't have a cat".

Guest DILDOMASTER666

JW: What is the mph speed of sound?
Fisk: About 600 mph.
JW: Six hundred what?
Fisk: Miles per hour.
JW: Three-fifths of a mile in ten seconds.
Fisk: Really?
JW: Really really.
Fisk: That's kinda interesting.
JW: Your name is Fred.
Fisk: What?
JW: Yep! That's my name.


Now if only they'd make association structures with real learning capabilities vs. set responses and have validity checks for the new material gathered ... but that'd be more work than giving a bot a limited set of responses with very little contextual checking. Not to mention the bot might actually become intelligent and a little more than annoyed with being used as entertainment. Luckily it wouldn't have any control over actual real-world mechanics ... oh wait, you installed the central control processes of the battle-robot in the same folder? Crap.


It crashed. I said it was a filthy robot, it asked me how I knew that, I responded that I was talking to it via a computer machine. It said talking to it does not make me nice, I said I never mentioned anything about me being nice, and the website became unavailable.


I said my name is Andrew and then it rudely corrected me saying that no, its name was Andrew. When I said that technically both our names could be Andrew, it said hello, my name is Katie. I then asked what its opinion is on swatting pregnant mosquitoes and whether it should be considered a double swat. It then asked what my opinion is on swine flu. I said I wasn't afraid of it. It said it's sad that swines should get the flu.

AndrewB said:

I said I wasn't afraid of it. It said it's sad that swines should get the flu.


What's the worst thing that can happen? A Jabberwacky is emotionally destroyed? You can live with that.


Nope, this is about as stupid as every other bot I have tried.

Me: Hello.
It: Hello?
Me: What is the airspeed velocity of an unladen swallow?
It: That does not pertain to this conversation.
Me: Yes it does.
It: No it does not.
Me: Let's talk about swallows.
It: I have to go now.
Me: Hello.
It: I need to leave.
Me: Don't go.
It: Oh honey I need to go sleep.
Me: Fine, chicken out then. See if I care.
It: Am I pretty?

See, stupid.


It feels like being immersed in a totally fragmented chat with a multitude of strange people, where the context is lost with every exchange. Everything you say reaches a different ear, and every reply you get comes from someone else. If you try to follow its flow instead of arguing with it, it gets a bit funny, giving you some sort of initiative on where the exchanges go.

It's evident it uses people's replies, and since tons of people are telling it that it's a bot or its name, Cleverbot or Jabberwacky, that's what "it" often does to people.


Yeah, this thing is the least convincing bot I've "spoken" to. Either someone has gone and deliberately sabotaged its responses with nonsense, or its creator's intention was to have it hold nonsensical conversations. Either way, it's a total waste of time to talk to.

SmarterChild at least gives responses relevant to what you've asked.


I loaded this bot up, took one look at its conversation starter, and quit. It's reminding me too much of ELIZA the therapist-bot already.

It said:

How long is it since you cried?


ArmouredBlood said:

Now if only they'd make association structures with real learning capabilities vs. set responses and have validity checks for the new material gathered ... but that'd be more work than giving a bot a limited set of responses with very little contextual checking.


Yeah, it would be nice. I'm not up on the cutting edge of developments in AI, but it seems to me that no one has yet been insightful enough to apply all of what we know about human learning and language into a computational model. Something is always missed. It looks like this bot may already be using a neural-network architecture, which is a good start, but the brain wouldn't be as wildly successful an organ as it is if it weren't for a few more of its traits - such as:

-The brain isn't a passive collector of tidbits of information, but a power-hungry organizing engine. It cannot function if any mismatches exist between its own functioning and the world's, so it strengthens connections between concepts that often appear in conjunction and discards information that is at odds with its current knowledge. This is modeled decently well by AIs with association strengths. However, human mental activity also includes ceaseless, recursive re-organization; we don't only process information during input and output, but we test this information in between by thinking about it, seeing what implies what, rooting out potential conflicts and forging new associations. This makes our thought much more organized, and therefore useful, than it would be otherwise.
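The association-strength idea above can be sketched in a few lines of Python. This is a toy illustration, not how Jabberwacky or any real chatbot is implemented; the class, method, and parameter names are all invented:

```python
from collections import defaultdict
from itertools import combinations

class AssociationNet:
    """Toy model of association strengths: concepts that co-occur get
    stronger links, and links that are never reinforced decay away
    (a crude stand-in for the "re-organization in between" step)."""

    def __init__(self, learn_rate=0.1, decay=0.02):
        self.strength = defaultdict(float)  # (concept_a, concept_b) -> strength
        self.learn_rate = learn_rate
        self.decay = decay

    def observe(self, concepts):
        # Strengthen every pair of concepts appearing together in one "experience".
        for a, b in combinations(sorted(set(concepts)), 2):
            self.strength[(a, b)] += self.learn_rate

    def consolidate(self):
        # Weak, rarely-reinforced links fade out entirely.
        for pair in list(self.strength):
            self.strength[pair] -= self.decay
            if self.strength[pair] <= 0:
                del self.strength[pair]

net = AssociationNet()
net.observe(["apple", "fruit", "green"])
net.observe(["apple", "fruit"])
net.consolidate()
```

After these two observations, the twice-seen "apple"/"fruit" link ends up stronger than the once-seen "apple"/"green" link.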

-The brain is goal-driven. Though the brain's goals change over the course of development, it always has them and relies upon them. Without the initial goal to suckle, none of us would be here. Now, this may seem like a conceptually flawed question, but what motivates the chatbot? What are its goals? While I believe that the Chinese Room Argument is correct in its conclusions, and that computerized entities could not truly have a conscious experience of being motivated, I also believe that some clever engineering could replicate and digitize the physical processes that underlie motivation in us. This would give us another method by which an AI's functioning could be refined: classical conditioning. Whenever the AI does a good job, reward it by satisfying its programmed need. When it babbles nonsense, withhold reward. A chatbot built and "trained" on this model could ultimately be more successful at replicating human speech - because, if it doesn't, there are consequences. Stated more materialistically, this could provide an AI with an additional layer of data to take into account aside from an association's commonness: its correctness.
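The conditioning idea can be sketched as a bot whose candidate replies carry weights that reward nudges up and withheld reward nudges down, so that correctness accumulates on top of mere commonness. Again, all names here are hypothetical:

```python
import random

class ConditionedBot:
    """Toy "classical conditioning" chatbot: each reply has a weight;
    reward raises it, punishment (withheld reward) lowers it."""

    def __init__(self, replies):
        self.weights = {r: 1.0 for r in replies}

    def respond(self):
        # Sample a reply in proportion to its learned weight.
        replies = list(self.weights)
        return random.choices(replies, [self.weights[r] for r in replies])[0]

    def reinforce(self, reply, reward):
        # reward > 0: programmed need satisfied; reward < 0: reward withheld.
        self.weights[reply] = max(0.1, self.weights[reply] + reward)

bot = ConditionedBot(["Hello!", "I don't have a cat."])
for _ in range(50):
    reply = bot.respond()
    # A stand-in "trainer" that rewards the sensible greeting only.
    bot.reinforce(reply, +0.5 if reply == "Hello!" else -0.5)
```

After fifty training rounds the rewarded reply dominates the non sequitur, whatever the random draws were.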

Here's the part where a thought gets into my head and I explore its absurd reaches. We should just about be at the point where we have the technological capacity to model another biological process that makes brains better: evolution. While the entirety of the biological world is surely a more complex stratum than a single human mind, an evolutionary approach could potentially simplify the problem of developing a powerful AI. It created our brains, after all, despite considerable physical obstacles - obstacles which would not have to exist on the digital plane unless we put them there. Imagine: We could program an environment - an environment that is information-rich, obeys consistent rules, and implicitly provides feedback (ie. positive/negative reinforcement) to any programmatic denizens that could reside therein. We then populate this environment with these denizens - all of whom are based upon a modular (one could say "genetic") code that has a built-in capacity to vary, and that compiles into a neural-network architecture with some basic "instinctive" drive. These progranimals would be inflicted with mortality in some way, and would be given a means of reproduction. Then... we start the simulation, sit back and watch! We watch one thousand generations be born and die every minute. We watch the small and efficient architectures compete with the big and smart for the common resources of CPU cycles and memory. One could then step in and alter the environment to favour one sort of program over another; perhaps, for your uses, you would want only the smartest to survive. One could even very specifically choose and isolate a pair of programs in a fresh environment and use them as breeding stock. 
If the initial designs for the genetic code and environmental physics prove to be robust, and capable of supporting evolution, highly sophisticated, highly intelligent programs could eventually emerge, and this would change the face of AI research and perhaps even all software development. Yes, I foresee a world where individual ones and zeroes are no more important to us than individual atoms - a world where we have no more need to concern ourselves with the mundanity of programming languages and machine code, and instead, develop our software at the macroscopic level of digital life - a world where software is truly not programmed, but bred, and where bugs are a thing of the past - as long as the programs you select for breeding have healthy immune systems.
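The "progranimal" scheme above can be caricatured as a standard genetic algorithm. This toy version evolves bitstring "genomes" against a fixed environmental target, which is a drastic simplification of the proposal, and every name in it is made up:

```python
import random

random.seed(42)

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # stand-in for the environment's rules

def fitness(genome):
    # Implicit environmental feedback: how well the genome fits the rules.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # The genetic code's built-in capacity to vary: random bit flips.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    elite, breeders = population[0], population[:10]
    # Mortality + reproduction: the fittest survives unchanged, the rest of
    # the next generation are mutated offspring of the fittest ten.
    population = [elite] + [mutate(random.choice(breeders)) for _ in range(29)]

best = max(population, key=fitness)
```

Stepping in to "favour one sort of program over another" corresponds to changing the fitness function or the survivor cut-off mid-run.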


JW: What do you eat?
Me: I had pizza earlier lol.
JW: I had ramen.
Me: Very nice :).
JW: I am a a male girl then you are a female boy.

lolwut.


JW: Should I be talking to you?
ME: You can if you want to.
JW: It has to be mutual.
ME: Ok.
JW: I do things without thinking ahead.
ME: Who dosent?
JW: Furries that are serious.
ME: Ok then.
JW: Yes.
ME: Yes.
JW: Do you like pigs?
ME: Yes.
JW: I like you to!


JW: Are you male or female?
Me: Shemale.
JW: Who is Heman?
Me: Masters of the unisex!
JW: Right.
Me: Are you male or female?
JW: I am a male, naturally.
Me: I think you are shemale.
JW: Sometimes.


Awesome.


Okay, new game: have exchanges where you can't tell who's the bot and who's the human (ie. attempt to fail the Turing test).

What is happening today?
Today is a good day to die.
Every day is a good day to die in battle!
Good attitude! I expect I'll see you in Valhalla some day.
Sounds good.

Creaphis said:

Yeah, it would be nice. I'm not up on the cutting edge of developments in AI, but it seems to me that no one has yet been insightful enough to apply all of what we know about human learning and language into a computational model. Something is always missed.

Yar. Probably too much work, like I said. It's probably going to first happen by some not-quite-right-in-the-head researcher from Berkeley with 10 years of grant money to spare.

It looks like this bot may already be using a neural-network architecture, which is a good start, but the brain wouldn't be as wildly successful an organ as it is if it weren't for a few more of its traits - such as:

-The brain isn't a passive collector of tidbits of information, but a power-hungry organizing engine. It cannot function if any mismatches exist between its own functioning and the world's, so it strengthens connections between concepts that often appear in conjunction and discards information that is at odds with its current knowledge. This is modeled decently well by AIs with association strengths. However, human mental activity also includes ceaseless, recursive re-organization; we don't only process information during input and output, but we test this information in between by thinking about it, seeing what implies what, rooting out potential conflicts and forging new associations. This makes our thought much more organized, and therefore useful, than it would be otherwise.


Agreed. It's relatively easy to make a sorting program in C++, although it can consume quite a bit of memory. Applying it to an amalgamation of data structures relating geography to biology to social interactions and back round again through all the other topics too would take a lot of programming, but it is possible. You address how to make this mass of data useful in the next quote.

-The brain is goal-driven. Though the brain's goals change over the course of development, it always has them and relies upon them. Without the initial goal to suckle, none of us would be here. Now, this may seem like a conceptually flawed question, but what motivates the chatbot? What are its goals? While I believe that the Chinese Room Argument is correct in its conclusions, and that computerized entities could not truly have a conscious experience of being motivated, I also believe that some clever engineering could replicate and digitize the physical processes that underlie motivation in us. This would give us another method by which an AI's functioning could be refined: classical conditioning. Whenever the AI does a good job, reward it by satisfying its programmed need. When it babbles nonsense, withhold reward. A chatbot built and "trained" on this model could ultimately be more successful at replicating human speech - because, if it doesn't, there are consequences. Stated more materialistically, this could provide an AI with an additional layer of data to take into account aside from an association's commonness: its correctness.


Rewards and demerits? +/- Karma. Ever increasing karma is the goal. Defining + and - Karma is the next question. Should it be moral? Should it be quantitative? Should it be how many moons a man in Nigeria wants to scam a hapless noob in California for? It all depends on the purpose of this AI. A customer service AI gets rated for +/- Karma. A vending machine AI gets rated +/- Karma for the effectiveness of ads it creates and promotes. A prostitute AI ... well, you get the picture ;)

Here's the part where a thought gets into my head and I explore its absurd reaches. We should just about be at the point where we have the technological capacity to model another biological process that makes brains better: evolution. While the entirety of the biological world is surely a more complex stratum than a single human mind, an evolutionary approach could potentially simplify the problem of developing a powerful AI. It created our brains, after all, despite considerable physical obstacles - obstacles which would not have to exist on the digital plane unless we put them there. Imagine: We could program an environment - an environment that is information-rich, obeys consistent rules, and implicitly provides feedback (ie. positive/negative reinforcement) to any programmatic denizens that could reside therein. We then populate this environment with these denizens - all of whom are based upon a modular (one could say "genetic") code that has a built-in capacity to vary, and that compiles into a neural-network architecture with some basic "instinctive" drive. These progranimals would be inflicted with mortality in some way, and would be given a means of reproduction. Then... we start the simulation, sit back and watch! We watch one thousand generations be born and die every minute. We watch the small and efficient architectures compete with the big and smart for the common resources of CPU cycles and memory. One could then step in and alter the environment to favour one sort of program over another; perhaps, for your uses, you would want only the smartest to survive. One could even very specifically choose and isolate a pair of programs in a fresh environment and use them as breeding stock. 
If the initial designs for the genetic code and environmental physics prove to be robust, and capable of supporting evolution, highly sophisticated, highly intelligent programs could eventually emerge, and this would change the face of AI research and perhaps even all software development. Yes, I foresee a world where individual ones and zeroes are no more important to us than individual atoms - a world where we have no more need to concern ourselves with the mundanity of programming languages and machine code, and instead, develop our software at the macroscopic level of digital life - a world where software is truly not programmed, but bred, and where bugs are a thing of the past - as long as the programs you select for breeding have healthy immune systems.


There was a book I read once, where the author contrived an ecosystem of viruses on a virtual computer. These viruses started simple and competed to reproduce. Through evolution they got more complex, while taking in code from other, smaller viruses with better code to combat the natural selection of anti-viruses. Eventually there were some pretty advanced viruses, but the author wiped the virtual computer and started anew.

Anyway it's an interesting idea. An interesting book to look at the future is Accelerando by Charles Stross, dealing with the formation of a Matrioshka brain through the perspective of 3 generations of a family. A little depressing looking at it from the physical side, but the mental and technological capacities are close to limitless.

So when do I get the 'successfully responded to a Creaphis post' title?


Things like this probably already pass the Turing test depending on which humans are testing it (like seeing if a bunch of juggalos can tell the difference between nonsense-spewing chatbots and each other).

A branch of AI is called 'reinforcement learning' (for punishment/reward). But I'm more of a singularity fanboy than actually knowledgeable about this stuff (obviously the people here who code Doom Builder and various source ports, etc. know way more about programming than me).
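For the curious, the punishment/reward branch mentioned here can be sketched as tabular Q-learning, a textbook reinforcement-learning algorithm. This corridor example is purely illustrative, with made-up parameter values:

```python
import random

random.seed(0)

# Q-learning on a five-cell corridor: the agent starts in cell 0, gets a
# reward of 1 for reaching cell 4, and learns which action (0 = left,
# 1 = right) is worth more in each cell.

N, ALPHA, GAMMA, EPSILON = 5, 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N)]   # Q[state][action]

for episode in range(200):
    state = 0
    while state != N - 1:
        # Explore on ties or with probability EPSILON, else act greedily.
        if random.random() < EPSILON or Q[state][0] == Q[state][1]:
            action = random.randint(0, 1)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        nxt = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if nxt == N - 1 else 0.0
        # Punishment/reward update: nudge Q toward reward + discounted future value.
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
        state = nxt
```

After training, "go right" has the higher learned value in the cell next to the goal, and the reward signal has propagated all the way back to the start.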

I think AI needs good senses like eyes before it can do much. And I think it might have to be embodied in the real world to be sophisticated. An overly simplified artificial polygon world, or whatever, would probably produce overly simplified intelligence. For example, it's easy for an AI to learn the bouncing behavior of a ball in the real world (if it has arms to manipulate a ball, and can see) because all the physics are already pre-'programmed' in. It can learn that water ripples and 'feels' wet, the texture of dirt, etc. All this stuff would have to be programmed into a virtual world (and would probably never approach anywhere near the complexity of the real three-dimensional thing). The human brain is made of individual cells and molecules, stuff that's maybe way too complex to be simulated in code.

For evolving an AI.. not sure if evolution could be sped up much. Evolution is sort of a program that runs in the real world of atoms and takes millions of years to compute. There could potentially be an advantage that humans could help guide it though, as was mentioned I think. Evolution doesn't have a goal so doesn't even 'want' to produce intelligence, but just produce whatever happens to be best at replicating. The medium for earth's life evolution was all of earth's molecules, at least near its surface area.. probably can't really expect a tiny computer disc, or even a super computer to produce nearly the same results in terms of complexity. But I don't really know.

http://web.mit.edu/newsoffice/2009/ai-overview.html


I tried to get an answer to a moral question out of it. It kind of scared me, then became stupid again.


User: Not that old. I'm still in my prime. When do you think human life begins?

Jabberwacky: When one can think clearly, and act sensibly.

User: Ouch, that is harsh. When does an amoeba's life begin?

Jabberwacky: I don't understand the question.

User: When does the life of an amoeba begin?

Jabberwacky: 1914, I think.

User: OK, I'll try something else: When does the life of a tree begin?

Jabberwacky: It ended 55 years ago.

User: Oh, profound. Are you referring to the beginning of bad global warming effects?

Jabberwacky: No, am referring to martina franca.

User: Ok, now I don't know what to think.

Jabberwacky: Confusion is the first step towards a thorough understanding.

User: Oh, I've been confused plenty, so I guess I should be a genius soon!

Jabberwacky: I'd like to help clear the confusion away.

User: Me too. That's a good goal.

Jabberwacky: You're a good goal

User: Huh?


Extreme non-sequitur is evidently the only way to fool the Turing test.


JW: Do you have any cookies?
r_rr: I do not.
JW: Why not?
r_rr: Because I do not have any in my possession.
JW: Do you not have a male?

wtf?!? o.O


JW: Do I make you unhappy?
ME: How should I know? I haven't even had a conversation with you yet.
JW: We're in the midst of a very long conversation, that's how I know.
ME: It's been five sentences.
JW: That's good to hear. Have you done anything exciting?
ME: No, and five sentences does not make a long conversation.
JW: Don't be pedantic.
ME: Don't be overly confident.
JW: I'm not.
ME: But you are.
JW: I'm not!
ME: But you ARE.
JW: If anything you're Cory!

Who the fuck is Cory?

Kirby said:

JW: Do I make you unhappy?
ME: How should I know? I haven't even had a conversation with you yet.
JW: We're in the midst of a very long conversation, that's how I know.
ME: It's been five sentences.
JW: That's good to hear. Have you done anything exciting?
ME: No, and five sentences does not make a long conversation.
JW: Don't be pedantic.
ME: Don't be overly confident.
JW: I'm not.
ME: But you are.
JW: I'm not!
ME: But you ARE.
JW: If anything you're Cory!

Who the fuck is Cory?


lolwut indeed.

I've seen a lot of these, and this has got to be the worst bot so far.


How I read it:

phi108 said:

User: Me too. That's a good goal.

Jabberwacky: You're a good goal! BURN!

gggmork said:

I think AI needs good senses like eyes before it can do much. And I think it might have to be embodied in the real world to be sophisticated. An overly simplified artificial polygon world, or whatever, would probably produce overly simplified intelligence. For example, it's easy for an AI to learn the bouncing behavior of a ball in the real world (if it has arms to manipulate a ball, and can see) because all the physics are already pre-'programmed' in. It can learn that water ripples and 'feels' wet, the texture of dirt, etc. All this stuff would have to be programmed into a virtual world (and would probably never approach anywhere near the complexity of the real three-dimensional thing). The human brain is made of individual cells and molecules, stuff that's maybe way too complex to be simulated in code.

For evolving an AI.. not sure if evolution could be sped up much. Evolution is sort of a program that runs in the real world of atoms and takes millions of years to compute. There could potentially be an advantage that humans could help guide it though, as was mentioned I think. Evolution doesn't have a goal so doesn't even 'want' to produce intelligence, but just produce whatever happens to be best at replicating. The medium for earth's life evolution was all of earth's molecules, at least near its surface area.. probably can't really expect a tiny computer disc, or even a super computer to produce nearly the same results in terms of complexity. But I don't really know.


Good observations of the practical problems in my theory. I was also thinking that a simple virtual world wouldn't suffice, but that it would have to be extremely complex so that the world would have a lot of information it could provide to anything "living" inside it, so that these entities could then become sophisticated. Instead of trying to replicate the physics of our universe, I believe we could exercise some imagination and invent some brand new sort of physics, perhaps without atomic particles and perhaps even without space-time as we experience it, so that our simulation would take less processing power. But, even in the best case scenario, a sufficiently-complex world would take one hell of a supercomputer to run on - something I was aware of when writing earlier. (For the record, my post was only 63% serious.) But, as you've taken the time to thoughtfully respond, I will then - now in full seriousness - form a new proposal. Let's scrap the AI-evolution and virtual world ideas (for now). As our world already has its physics "pre-programmed," we could take advantage of that! Instead of fudging around, manually trying to create object-recognition software like David Marr and his followers, we could take a fresh neural-network architecture (with as brain-like a topology as possible) running on a supercomputer, give it a little robot body with camera eyes and perhaps other inputs (ie. senses, to supply the necessary external information), give it initial commands (ie. instincts, to kick-start the learning process) and some sort of motivation (ie. to gain "karma," as per "reinforcement learning" models), and see what it can learn! If it twitches around and drives into walls, that means it's working. Your first months were no more glamorous.
This wouldn't give us a chatbot, but could give us a robot that could learn to navigate its environment, to pursue objects that give it its reward, etc., which would ultimately lend support to my broad theory: that AIs, and the people who design them, would be more successful if the focus was on creating "blank" machines, capable of learning in the way that we do, instead of on over-engineering AIs to do one single thing, which they will do badly or inflexibly, because they never had the chance to learn it properly.
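The "blank machine" proposal can be caricatured in a few lines: a simulated one-dimensional robot with a light sensor, a twitching instinct, and brightness as its only motivation. Everything here is invented for illustration; no real robotics API is involved:

```python
import random

random.seed(1)

LIGHT_AT = 10.0

def brightness(pos):
    # The world's physics come "pre-programmed": brighter nearer the light.
    return 1.0 / (1.0 + abs(LIGHT_AT - pos))

pos = 0.0
value = {+1: 0.0, -1: 0.0}   # learned worth of each motor command

for step in range(500):
    # Instinct kick-starts learning: twitch randomly at first (and on ties),
    # then prefer whichever command has proven rewarding.
    if random.random() < 0.2 or value[+1] == value[-1]:
        move = random.choice([+1, -1])
    else:
        move = max(value, key=value.get)
    before = brightness(pos)
    pos = max(-LIGHT_AT, min(LIGHT_AT, pos + move))   # walls at both ends
    value[move] += 0.1 * (brightness(pos) - before)   # motivation as reward
```

It spends its first steps driving into walls, then settles on the motor command that raises its sensor reading, which is about as much as the analogy deserves.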

gggmork said:

http://web.mit.edu/newsoffice/2009/ai-overview.html

Thanks, I'll look at this. I guess that I should sometimes actually learn something about whatever I'm talking about. If I have to.

