Thread 'BOINC and Artificial Intelligence: Discuss'

kinhull · Joined: 30 Aug 05 · Posts: 101 · United Kingdom
Message 7360 - Posted: 10 Jan 2007, 1:21:35 UTC
Last modified: 10 Jan 2007, 1:25:33 UTC

A fellow Team Mate posed a question recently in the Team ACC Forum:
...It got me to thinking about Artificial Intelligence and BOINC. I don't really mean the trained expert systems that are quite widely used nowadays. I mean the concept much discussed in SF of an emergent intelligence in a huge distributed computer system.

Could BOINC provide the IT / processing / network infrastructure to create an experimental environment that could nurture the emergence of machine intelligence?...


I think it's an interesting question, and since it relates to BOINC as opposed to any particular project, I thought I would raise the question here to see what people thought.
JOIN Team ACC
kinhull · Joined: 30 Aug 05 · Posts: 101 · United Kingdom
Message 7361 - Posted: 10 Jan 2007, 1:22:48 UTC

First off, I'm not sure that we would be able to recognize what intelligence is.
It is rather an ill-defined and nebulous concept.

I define myself as intelligent, and since other humans are similar to me, I also define them as intelligent (their actions, motives, speech, etc). As for animals, hard as I try, I'm not able to get inside the mind of an animal, and even if I ever did I would never be sure that I had done so correctly.

Intelligence is, I believe, an emergent phenomenon. At some point in the evolutionary process, the brain went from automatic responses to external stimuli to choosing between responses to stimuli; hence thought and mind emerged.
In that early stage of emergence it would be difficult to measure intelligence (if it can be measured at all), but the kernel of it would be there - would we be able to recognise it?

And so with computers: would we recognise emergent intelligence or consciousness, and how would it manifest itself?

The only things that we know of, at the moment, that have intelligence are organic biological brains. It might be the case that the only way we can make an artificial intelligence is by growing a biological brain of some sort, and providing appropriate input/output mechanisms as well as food and a suitable environment.
Computer networks, on the other hand, are just made up of wires, hard disks and electrons; they work by following computer code written by intelligent people. All computers do is what they are programmed to do - they don't make 'real' decisions.

We don't know enough (yet) about how the human brain works to mimic it in a computer, and even if we could mimic it, it would still be made of completely different material and put together completely differently.

That said, if it could be done at all, BOINC or something similar could be a very useful first step.

I don't think it is a stupid idea, but a difficult one - where do you start?
JOIN Team ACC
mo.v · Joined: 13 Aug 06 · Posts: 778 · United Kingdom
Message 7425 - Posted: 11 Jan 2007, 23:31:00 UTC

Bill Gates seems to think that 'sometime soon' we'll have domestic (and I hope domesticated) robots doing the housework for us. Cooking meals and ironing our clothes. At the moment a small proportion of the robots finish a race across the Mojave desert and seem unable to do much else that's useful, while quite a lot of humans can't cook a decent meal either.

I think we might make faster progress trying to teach the humans.
KSMarksPsych · Joined: 30 Oct 05 · Posts: 1239 · United States
Message 7430 - Posted: 12 Jan 2007, 4:07:53 UTC - in response to Message 7425.  

Bill Gates seems to think that 'sometime soon' we'll have domestic (and I hope domesticated) robots doing the housework for us. Cooking meals and ironing our clothes. At the moment a small proportion of the robots finish a race across the Mojave desert and seem unable to do much else that's useful, while quite a lot of humans can't cook a decent meal either.

I think we might make faster progress trying to teach the humans.



I'm in the process of reading his article in Scientific American. I've just started it, but so far it seems interesting.
Kathryn :o)
kinhull · Joined: 30 Aug 05 · Posts: 101 · United Kingdom
Message 7435 - Posted: 12 Jan 2007, 19:07:25 UTC - in response to Message 7425.  
Last modified: 12 Jan 2007, 19:12:07 UTC

Bill Gates seems to think that 'sometime soon' we'll have domestic (and I hope domesticated) robots doing the housework for us. Cooking meals and ironing our clothes. At the moment a small proportion of the robots finish a race across the Mojave desert and seem unable to do much else that's useful, while quite a lot of humans can't cook a decent meal either.

I think we might make faster progress trying to teach the humans.

There are lots of robots and machines 'out there' that engage in ostensibly complex tasks, following instructions that they, in the main, cannot deviate from. I'm not sure that this would make them intelligent, even at an emergent level.

Humans too can engage in complex tasks, sometimes following a set of instructions, but more importantly following a set of guidelines.

I think it is this capability of ours that is the key: making real-world (and real-time) decisions in possibly unfamiliar situations, interpreting the guidelines to a best-fit scenario, and then doing error checking or problem solving if results (or perceived results) begin to deviate from our expectations.

Consider a robot that could go to my washing machine, open it up, take out my laundry, inspect it to tell whether it has been washed correctly (I've forgotten to put in the washing powder on more than one occasion), check for damage that renders clothes unwearable by my particular standard of unwearable (I've had clothes damaged in washing machines in the past), and then iron everything at the correct settings for the different fabric types. Such a robot, I believe, is not coming any time soon; though I would love to have one if I could afford it! That all said and done, would we class this as intelligent behaviour? Would it be a real Artificial Intelligence?

Sometimes when I talk to people about these sorts of things, a number of terms get bandied about and sometimes conflated:

Intelligence, Consciousness, Life

Can we have any of the above without one or more of the others?

What requirements would we need to describe something as intelligent, conscious or alive?

What kind of BOINC experiments could be set up that might test a few ideas?

Any thoughts?
JOIN Team ACC
kinhull · Joined: 30 Aug 05 · Posts: 101 · United Kingdom
Message 7436 - Posted: 12 Jan 2007, 19:09:07 UTC - in response to Message 7430.  
Last modified: 12 Jan 2007, 19:09:22 UTC

Bill Gates seems to think that 'sometime soon' we'll have domestic (and I hope domesticated) robots doing the housework for us. Cooking meals and ironing our clothes. At the moment a small proportion of the robots finish a race across the Mojave desert and seem unable to do much else that's useful, while quite a lot of humans can't cook a decent meal either.

I think we might make faster progress trying to teach the humans.



I'm in the process of reading his article in Scientific American. I've just started it, but so far it seems interesting.


Is this the one?

A Robot in Every Home



JOIN Team ACC
KSMarksPsych · Joined: 30 Oct 05 · Posts: 1239 · United States
Message 7437 - Posted: 12 Jan 2007, 19:26:05 UTC - in response to Message 7436.  

Is this the one?

A Robot in Every Home


That's the one. Christmas of 2005 my gift from my brother was a subscription to Scientific American. So it's sitting on my desk waiting to be finished.
Kathryn :o)
mo.v · Joined: 13 Aug 06 · Posts: 778 · United Kingdom
Message 7444 - Posted: 13 Jan 2007, 3:38:46 UTC

The whole question of what constitutes intelligence is quite tricky, but isn't there a big cash prize waiting for us somewhere if we can solve it by our collective efforts?

An experiment I read about compared the memory of various animals for foods they'd previously eaten, and found that pigs remembered foods better than children in a school for the severely mentally handicapped. I think the average IQ was about 40 or maybe less. On a practical level it means, for example, that pigs are unlikely to try eating mud twice, whereas some of the children will try again and again. But the children all had UK National Curriculum targets - to reach Level 1 by age 18.

I'm not, of course, claiming that pigs and humans are at different points on a single linear scale. Even most of these children have some language and a degree of self-awareness.

I'm sure that a machine that could do the laundry and ironing task would be very intelligent in one sense (it might perform better than many careless or lazy humans), but it wouldn't be intelligent in the full human sense because it would have no self-awareness.




kinhull · Joined: 30 Aug 05 · Posts: 101 · United Kingdom
Message 7446 - Posted: 13 Jan 2007, 10:00:45 UTC

Self-awareness is, I believe, the crux of the matter.

How do biological organisms, such as humans, go from being a small collection of cells to something that is independently thinking?

What are the processes and stages involved?

How might one go about modelling such processes in a computer (or even a BOINC) environment?
JOIN Team ACC
mo.v · Joined: 13 Aug 06 · Posts: 778 · United Kingdom
Message 7456 - Posted: 13 Jan 2007, 17:34:28 UTC

I'm sure that human language is part of self-awareness. For example, some cats and most domesticated dogs show some degree of self-awareness when they are scolded and are visibly ashamed. Or when they're jealous, i.e. they show they're aware of their place in the social setup.

But this self-awareness doesn't seem to develop much further without language. Whether it needs to be a human language isn't clear to me.

And even language per se isn't enough. I heard about an Australian researcher who was teaching a computer English in much the same way as HAL learned in 2001: A Space Odyssey - like teaching a baby. She was teaching simple phrases to start with and letting the computer deduce the rules. Every time it made a mistake she corrected it, whereupon it had to redefine its rules and exceptions, as human babies do. But even if the researcher succeeds in teaching it to converse competently, we still wouldn't call this human intelligence. What would the computer talk about? It would have no experience of anything that isn't stored within itself or within other computers.

So machine intelligence and human intelligence are for the time being two different things.
KSMarksPsych · Joined: 30 Oct 05 · Posts: 1239 · United States
Message 7457 - Posted: 13 Jan 2007, 20:03:51 UTC

Mo, interesting you brought that example of computer learning up.

My MA is in developmental psych and my adviser had some interest in using computers to model how babies learn. He spent a semester at UC-San Diego with Jeff Elman.

I took a class of his where we used a program called TLearn to model different aspects of learning. I actually took the experiment from my masters and used TLearn to try to replicate the data from the actual babies.

The book we used in the class was called "Rethinking Innateness: A Handbook for Connectionist Simulations" by K. Plunkett and J. Elman.

If anyone is interested I can dig through my old floppies and see if I can find a copy of the paper I wrote for that class.
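For anyone who hasn't seen a connectionist simulation before, here's a very rough sketch in Python of the sort of error-driven network a tool like TLearn trains. The task (an XOR-style pattern), the network size and the learning rate are made up purely for illustration; this isn't TLearn itself and isn't from my actual project:

import numpy as np

# Toy training set: a small XOR-style pattern (made up for illustration).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)  # input -> hidden
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(10000):
    hidden = sigmoid(X @ W1 + b1)                       # forward pass
    output = sigmoid(hidden @ W2 + b2)
    err_out = (output - y) * output * (1 - output)      # how wrong was it?
    err_hid = (err_out @ W2.T) * hidden * (1 - hidden)  # push the error back
    W2 -= lr * hidden.T @ err_out; b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_hid;      b1 -= lr * err_hid.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))  # usually ends up near [0, 1, 1, 0]

The network is never told the rule; it just keeps nudging its weights whenever its output is wrong, which is the 'correct the learner and let it revise its own rules' idea in miniature.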
Kathryn :o)
kinhull · Joined: 30 Aug 05 · Posts: 101 · United Kingdom
Message 7506 - Posted: 15 Jan 2007, 19:30:30 UTC - in response to Message 7456.  

I'm sure that human language is part of self-awareness ...... self-awareness doesn't seem to develop much further without language. Whether this needs to be a human language isn't clear to me.
...
And even language per se isn't enough. What would the computer talk about? It would have no experience of anything that isn't stored within itself or within other computers.


I agree that language (of some description) is probably not enough.

For a machine (or a biological organism) to be self-aware, it would also have to be aware of its external environment (the physical, real-world environment). The only way to be aware of the external environment is to interact with it. It would also need to communicate its internal representation of the external world back to that world (say, to another computer or organism), and then obtain some form of feedback (error correction and additional information). I believe this is called learning.

I don't know much (or anything, actually) about AI, but I certainly get the impression that researchers in this field concentrate on one or another aspect of AI rather than looking at multiple areas at the same time: for example, voice recognition and reproduction, visual (face) recognition, mobility (walking), manual dexterity (handling things), or answering questions (expert systems, databases, 'communication-language').

What I think AI researchers should do is take a leaf out of their own childhood experiences. Most (if not all) of the above was learned by most people in early childhood, all at the same time (or at least not completely separately). So I reckon a good starting point would be a multi-disciplinary approach to all of the above aspects as a whole, rather than concentrating on each individual area on its own, piecemeal, divorced from the others.

This possibly puts it outside the ability of a BOINC project, but then again...
JOIN Team ACC
kinhull · Joined: 30 Aug 05 · Posts: 101 · United Kingdom
Message 7509 - Posted: 15 Jan 2007, 19:37:31 UTC - in response to Message 7457.  

Mo, interesting you brought that example of computer learning up.

My MA is in developmental psych and my adviser had some interest in using computers to model how babies learn. He spent a semester at UC-San Diego with Jeff Elman.

I took a class of his where we used a program called TLearn to model different aspects of learning. I actually took the experiment from my masters and used TLearn to try to replicate the data from the actual babies.

The book we used in the class was called "Rethinking Innateness: A Handbook for Connectionist Simulations." by K. Plunkett and J. Elman.

If anyone is interested I can dig through my old floppies and see if I can find a copy of the paper I wrote for that class.


I did a search on Rethinking Innateness, and eventually came to this link to the publications of Jeffrey Elman; some of the publications are available to download/view in PDF format.

I'm curious as to how TLearn was used. What aspects of learning did it model, and how well do you think it worked?
JOIN Team ACC
KSMarksPsych · Joined: 30 Oct 05 · Posts: 1239 · United States
Message 7517 - Posted: 16 Jan 2007, 1:28:38 UTC - in response to Message 7509.  

I did a search on Rethinking Innateness, and eventually came to this link of the publications of Jeffrey Elman, some of the publications are available to download/view in pdf format.

I'm curious as to how Tlearn was used. What aspects of learning did it model and how well do you think it worked?



In all honesty I don't really remember what we did in the class. I know there were exercises in the book that we had to work through to learn how to use the software. Everyone in the class was required to do a final project with TLearn and we all did different stuff. I vaguely remember one of my lab mates doing something based on her masters thesis study (her name was Cara Cashon). It's been a while since I played with the software and I don't even know if I have the book anymore.

If I remember right, it didn't do a half-bad job of replicating the data from my masters. But it's been a few years... so my memory might not be the best. And now my curiosity is piqued. I'm gonna have to search my floppies for that paper.

Kathryn :o)
kinhull · Joined: 30 Aug 05 · Posts: 101 · United Kingdom
Message 7714 - Posted: 23 Jan 2007, 1:10:17 UTC
Last modified: 23 Jan 2007, 1:10:52 UTC

I've put a post in the SETI@Home Cafe that might be of interest to some in this thread: Modelling Evolution

The Weasel Applet
(this links to a Java applet)
The program was never intended to model evolution accurately, but only to demonstrate the power of cumulative selection as compared to random selection.


More info here: Weasel program
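For anyone who'd rather poke at the idea in code than run the applet, here's a quick sketch in Python of the cumulative-selection scheme the Weasel program demonstrates. The target phrase, mutation rate and brood size below are just the commonly quoted illustrative values, not taken from the applet itself:

import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
MUTATION_RATE = 0.05   # chance that each character is miscopied
BROOD_SIZE = 100       # offspring produced per generation

def score(candidate):
    # Fitness = how many characters already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent):
    # Copy the parent, occasionally miscopying a character at random.
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in parent)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)  # start from gibberish
generation = 0
while parent != TARGET:
    generation += 1
    # Cumulative selection: breed a brood of copies, keep only the best one.
    parent = max((mutate(parent) for _ in range(BROOD_SIZE)), key=score)

print("Reached the target in", generation, "generations")

Purely random selection (generating whole strings at random and hoping to hit the target) would essentially never get there, while cumulative selection typically reaches it within a few hundred generations at most - which is exactly the point the program makes.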



JOIN Team ACC
