Computers surpassing humans

That's not quite true. You may not have been referring specifically to neural nets, but perhaps neural nets are the closest thing we have to true intelligence - they can "learn" and invent imprecise, seemingly "guesstimated" algorithms. However, there's actually a pretty big difference between the way a neural network on a computer works and the way a real brain works.

Neural networks use a smooth, nonlinear activation function, something like 1/(1+e^(-kwx)); real neurons are all or nothing.
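To make the contrast concrete, here's a minimal sketch (my own illustration, not from any particular library) comparing the smooth sigmoid used in artificial nets with a crude all-or-nothing threshold standing in for a spiking neuron:

```python
import math

def sigmoid(x, k=1.0):
    """Smooth, differentiable activation typical of artificial neural nets."""
    return 1.0 / (1.0 + math.exp(-k * x))

def all_or_nothing(x, threshold=0.0):
    """Crude caricature of a real neuron: it either fires or it doesn't."""
    return 1.0 if x >= threshold else 0.0

for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"x={x:+.1f}  sigmoid={sigmoid(x):.3f}  spiking={all_or_nothing(x):.0f}")
```

The sigmoid gives a graded response you can take derivatives of; the threshold just snaps between 0 and 1, which matters for the training argument further down.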

Neural networks learn by adjusting weights between nodes. No one really knows what happens to a brain when it learns.

But most importantly, the learning methods employed in neural networks can't possibly be anything like real learning, and aren't nearly as useful.

The method I've used and am most familiar with (and still the most common method for most applications) is backward propagation. You set up the network with random weights, then input some information. Now you, as the trainer, have to know what the desired output of your network is. Your code then takes a vector-like object (a one-form, actually; CS people will tell you it's a vector, but it's not) that describes the difference between the actual output and the desired output, and you transform it from output space to weight space.

Now you know how to adjust the weights of the network to get the desired output, and you do that. I could get into the reasons that you have to do this multiple times on multiple samples, but suffice it to say it has to do with dimensionality and the non-linearity of the coordinates. (BTW, CS people will tell you that it's a gradient descent, which is also true.)
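Here's a minimal single-neuron sketch of that loop, just to make the mechanics concrete - my own toy example (squared error, logical AND as the training set), not anyone's production backprop; a real implementation pushes the same chain-rule step back through many layers of weights:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy training set: teach a single sigmoid unit the logical AND function.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]   # random initial weights
b = random.uniform(-1, 1)                            # bias term
lr = 0.5                                             # learning rate

for epoch in range(5000):
    for (x1, x2), desired in samples:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)     # actual output
        err = out - desired                          # actual minus desired
        grad = err * out * (1 - out)                 # chain rule through the sigmoid
        w[0] -= lr * grad * x1                       # step each weight "downhill"
        w[1] -= lr * grad * x2
        b -= lr * grad

for (x1, x2), desired in samples:
    print((x1, x2), round(sigmoid(w[0] * x1 + w[1] * x2 + b), 3), "desired:", desired)
```

Note how every update needs the desired output in hand - which is exactly the first objection raised below.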

There are also so-called "unsupervised" learning methods, but as far as I know, the applications of such training are much more limited than with BP. They can learn to pick out patterns, and give a different cluster of outputs for each input pattern.
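For flavour, here's the plainest possible illustration of that "clustering" idea - a tiny k-means routine on 1-D data. It isn't one of the neural unsupervised methods (like a self-organising map), just a stand-in showing what "group similar inputs together" amounts to:

```python
import random

def kmeans(points, k, iterations=20, seed=0):
    """Tiny k-means on 1-D data: group inputs around k moving centres."""
    random.seed(seed)
    centres = random.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centres[i]))
            clusters[nearest].append(p)
        # Move each centre to the mean of its cluster (keep it if the cluster is empty).
        centres = [sum(c) / len(c) if c else centres[i] for i, c in enumerate(clusters)]
    return centres, clusters

points = [0.1, 0.3, 0.2, 5.1, 4.9, 5.3, 9.8, 10.2, 10.0]
centres, clusters = kmeans(points, k=3)
print(centres)   # three centres, one per obvious clump
```

Notice there's no "desired output" anywhere - the structure comes entirely from the inputs themselves.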

Obviously, BP can't be how real brains learn, because:
1) it requires always knowing the desired output for a given input;
2) there is only one specific change in weights that can occur for one input/desired output at each training stage;
3) in the end, when training is complete, every possible set of inputs will produce exactly the same output every time you run your network, which never happens with people;
4) it just seems too mathematically well defined. Not that that's really a problem, but I doubt your brain really takes derivatives and goes through coordinate transformations to maximize/minimize values.

The unsupervised learning can't be the same as real learning, because all it can do is cluster input into groups.

Finally, these training methods won't work with a non-differentiable activation function. The activation function of real neurons is non-differentiable (all or nothing). That seems like a big hint that something totally different is going on in brains and NNets.
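A quick sketch of why (again my own illustration): every backprop weight update gets multiplied by the activation function's derivative, and for a hard threshold that derivative is zero everywhere it exists, so the updates all come out zero:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_slope(x):
    s = sigmoid(x)
    return s * (1 - s)   # nonzero everywhere, so gradient descent has something to follow

def step_slope(x):
    return 0.0           # flat on both sides of the jump; the jump itself has no derivative

for x in (-2.0, 0.5, 3.0):
    print(f"x={x:+.1f}  sigmoid slope={sigmoid_slope(x):.4f}  step slope={step_slope(x):.1f}")
```

(Real neurons are of course subtler than a bare threshold - the point is only that you can't run plain gradient descent through a discontinuous jump.)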

Additionally, I'm not even sure if the entire structure of nodes connected by adjustable weights is actually anything like a biological brain at all.

The term "artificial intellegence" seems accurate to me.

I wouldn't be surprised if flies are moderately smarter than anything we've ever created.
 
That's currently absolutely true. In fact, when doing AI (or more specifically, evolving network programs), researchers run them inside a software "shell" so that they really are "airtight" to the outside world - simply because, more or less by chance, they might evolve some code that turns out to be, for example, a really nasty virus.
 
As long as Microsoft dominates the software world we have nothing to worry about. If a psychopathic Microsoft AI takes over the world, it's only a matter of time before it crashes.
 
did you ever wonder whether Artificial Intelligence is better than Natural Stupidity?
 
Stop trying to scare the natives - I spend lots of time telling people "The computer knows nothing - it is an overgrown calculator. If something goes wrong it is human error - either the operator or the programmer who wrote the instructions it is following"

Although I've seen home PC capability develop over the past 25 years, for most of us it is still a calculator with pretty colours (Medi and comaboy sound like they have 'specialist' knowledge)

I still get plenty of students that seem to expect computers to 'think for themselves', but I stand by the old line "Computers do what you tell them to do.... Not what you want them to do, what you TELL THEM to do".

I do have a few instinctive objections to 'wireless' technology of any kind - give me a plug I can PULL OUT!

{Yes, I have read and seen 'Demon Seed' }
 
With all current commercial computers that's true... but by definition, the purpose of AI is that the computer will do things you HAVEN'T told it to do, so that you don't have to program it with a solution for every scenario it will face.

So in essence the quest for AI is a quest for a computer that is unpredictable.
 
Real-world AI research is still a long way from things like 'Terminator' or 'Blade Runner'.

In realistic terms, there is a decent chance we will see things in our lifetimes like vehicles which 'drive themselves' after being given a route on a roadmap, or military vehicles which don't have a pilot/driver following simple orders to move around terrain and shoot at things. These things are already under development.

It's hard to compare how 'intelligent' they are with an animal, but the stuff that's being built at the moment is certainly less sophisticated than any higher mammal (like a dog or cat) in terms of communication and flexible learning. Perhaps on a par with an insect or lower reptile.

We're just about getting to grips with the 'intelligence' involved with things like 'walking' or 'recognise the same face from two slightly different angles and lighting conditions' or 'following the white line in the centre of the road whilst avoiding other vehicles'. The type of intelligence involved in things like 'come up with a philosophical reason why humans should be eliminated and then devise a plan to do it' is as far beyond current AI as playing poker is beyond ants. And it's not just a question of running the same software on faster hardware, so you can't assume that Moore's law will make it happen by itself.
 
Hehe
I've just been reading through this thread as it's something that has interested me for quite a while.
Personally, put 'technology' and 'intelligence' together and I find it worrying. As much as it may not be possible now, I'm quite sure that one day in the distant future it will be. It could be the result of much research or simply an accident. But I think it will happen. It's true machines can't learn without having the information given/found (just like humans?) but when the internet is thrown into the mix...


I second that.

I was reading in a newspaper a couple of months ago about this technology for the home that will be in new houses in the near future, where items such as the TV, microwave, stereo etc. could be linked together on an intranet, communicate and even recognise you. It seemed the idea was that you could get in from work, the 'house' would recognise it was you, perhaps welcome you, stick some music/TV on automatically and set about heating your dinner. All before you've even taken your shoes off. This must have been developed with safety in mind, but will these machines/networks have safety choices to make in future? Such as whether or not the house is on fire? Machines making conscious choices...
As exciting and convenient as this sounds, I find it very worrying. Personally I wouldn't want my household appliances gossiping about me.
 
A couple of thousand pounds and I could do that for you now without AI; AI would know what mood you're in and put the right record on. As for knowing whether the house is on fire, I think that might be called a fire alarm system.

Having a system that makes the choice that you're not worth saving might be a little worrying though
 
Is it not true, in fact, that it's a completely different kind of AI? e.g. a currently state-of-the-art robot car that can drive itself, while very complex, is still programmed... there isn't anything in the coding that wasn't put there by a human being. I don't think it qualifies as "intelligence".
 
Haha!!!


Ok, maybe fire was a very bad choice of example. It is interesting though. Could you really do that??


I just had a bad image of, in future, some people inviting AI and living in great cities where everything runs itself - and some rebelling and living in caves in secluded areas of the world where there's no machines...
Maybe I've played too many video games
 
Defining intelligence might be harder than creating AI. Do you want to replicate a human (what's the point in that? loads of us) or do you want something/someone? that knows everything and can come to a logical conclusion based on all known available facts.

"Did you ever wonder whether Artificial Intelligence is better than Natural Stupidity?" (quote from a programmer via AI)
 
Doesn't sound as interesting when you know the truth

1 Get PC (optional, but easier)
2 fully network the house - easiest way is wireless switching
3 alarm, doors, electric curtain racks, lights, stereo (does anyone still have one of these?), TV (does anyone still have one of these?), kettle etc: BASICALLY IF YOU CAN SWITCH IT ON YOU CAN REMOTE IT
4 preprogramme for timers, heating, lighting etc
5 GPS your car or phone
6 sit back and enjoy (don't even have to lift the seat or flush again)

This is how Bill Gates lives, even his pictures change to suit the occasion (I can do that as well)


By the way, £2000 is the cost of materials, about £15,000 if you want me to fit it


 
So that would have everything on a network which you had a remote for...? I take it these appliances would be fully programmed by yourself and not able to make choices or communicate with each other?
Sorry, I know a little, but computers are not my strong point

ps Love the 15k price tag
 
Computers can crunch numbers and retrieve data quickly.
But they ain't intelligent yet!
AI is very targeted - it can do specific formulaic tasks.
They can't do speech recognition/generation worth diddleysquat.
You can't just switch on a computer, bung it in the jungle, and expect to come back a year or two later and find a sentient being.
I'm sure it will come, but not in my lifetime.
 