That's not quite true. You may not have been referring specifically to neural nets, but perhaps neural nets are the closest thing we have to true intelligence - they can "learn" and invent imprecise, seemingly "guesstimated" algorithms. However, there's actually a pretty big difference between the way a neural network on a computer works and the way a real brain works.
Neural networks use a smooth nonlinear activation function, like the sigmoid 1/(1+e^(-kwx)); real neurons fire all or nothing.
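To make the contrast concrete, here's a minimal sketch of the two activation styles (the constant k and the names are just illustrative, not anything specific from the post):

```python
import math

def sigmoid(x, k=1.0):
    # Smooth, differentiable activation used in classic neural nets
    return 1.0 / (1.0 + math.exp(-k * x))

def step(x):
    # All-or-nothing firing, closer to a biological neuron's spike
    return 1.0 if x > 0 else 0.0

print(sigmoid(0.5))  # a graded value between 0 and 1
print(step(0.5))     # exactly 1.0 - fires or it doesn't
```

The sigmoid gives graded, in-between outputs, which is exactly what makes gradient-based training possible.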
Neural networks learn by adjusting weights between nodes. No one really knows what happens in a brain when it learns.
But most importantly, the learning methods employed in neural networks can't possibly be anything like real learning, and aren't nearly as useful.
The method I've used and am most familiar with (and still the most common method for most applications) is backpropagation. You set up the network with random weights, then feed in some input. As the trainer, you have to know the desired output of your network. Your code then takes a vector-like object (a one-form, actually; CS people will tell you it's a vector, but it's not) that describes the difference between the actual output and the desired output, and transforms it from output space to weight space.
Now you know how to adjust the weights of the network to get closer to the desired output, and you do that. I could get into the reasons you have to do this multiple times on multiple samples, but suffice it to say it has to do with the dimensionality and non-linearity of the coordinates. (BTW, CS people will tell you it's gradient descent, which is also true.)
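A minimal sketch of that loop for a single sigmoid unit learning OR, trained by gradient descent on squared error (a toy illustration; the learning rate, dataset, and epoch count are all my own assumptions, not anything from the discussion):

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training data: inputs and desired outputs for logical OR
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]  # random initial weights
b = random.uniform(-1, 1)
lr = 2.0  # learning rate (arbitrary choice)

for epoch in range(3000):
    for x, target in data:
        out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        # Chain rule: d(error^2)/dw = (out - target) * sigmoid'(z) * x,
        # with sigmoid'(z) = out * (1 - out)
        grad = (out - target) * out * (1 - out)
        w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
        b -= lr * grad

for x, target in data:
    out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
    print(x, round(out))
```

Note the point from the post: each input/desired-output pair determines one specific weight update, and it has to be repeated over many samples before the network converges.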
There are also so-called "unsupervised" learning methods, but as far as I know, the applications of such training are much more limited than with BP. They can learn to pick out patterns, giving a different cluster of outputs for each input pattern.
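For a flavor of what "clustering inputs into groups" means, here's a bare-bones k-means sketch (k-means is just one common unsupervised method I'm using as a stand-in; the data and parameters are made up for illustration):

```python
import random

def kmeans(points, k, iters=20):
    # Pick k of the points as initial cluster centers
    random.seed(1)
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0])**2 + (p[1] - centers[c][1])**2)
            clusters[i].append(p)
        # Move each center to the mean of its assigned points
        centers = [(sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
                   if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Two obvious blobs: one near (0, 0), one near (5, 5)
points = [(0, 0), (0.1, 0.2), (0.2, 0.1), (5, 5), (5.1, 4.9), (4.9, 5.2)]
centers, clusters = kmeans(points, 2)
print(clusters)
```

No desired outputs are ever supplied: the algorithm just discovers that the inputs fall into groups, which is both the power and the limitation the post describes.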
Obviously, BP can't be how real brains learn because: 1) it requires always knowing the desired output for a given input; 2) there is only one specific change in weights that can occur for a given input/desired-output pair at each training stage; 3) once training is complete, a given input will produce exactly the same output every time you run your network, which never happens with people; 4) it just seems too mathematically well defined. Not that that's really a problem, but I doubt your brain really takes derivatives and goes through coordinate transformations to maximize/minimize values.
Unsupervised learning can't be the same as real learning either, because all it can do is cluster inputs into groups.
Finally, these training methods won't work with a non-differentiable activation function, and the activation function of real neurons is non-differentiable (all or nothing). That seems like a big hint that something totally different is going on in brains and in NNets.
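You can see the problem numerically: away from the threshold, a step function's derivative is zero everywhere, so a gradient-based weight update has nothing to follow (a tiny illustration; the helper names are my own):

```python
def step(x):
    # All-or-nothing activation, like a real neuron's firing
    return 1.0 if x > 0 else 0.0

def numeric_grad(f, x, h=1e-6):
    # Central-difference estimate of the derivative of f at x
    return (f(x + h) - f(x - h)) / (2 * h)

print(numeric_grad(step, 0.5))   # 0.0 - no gradient signal
print(numeric_grad(step, -0.5))  # 0.0 - no gradient signal here either
```

Wherever the estimate isn't zero (right at the threshold), it blows up instead, so gradient descent gets either no signal or a useless one.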
Additionally, I'm not even sure if the entire structure of nodes connected by adjustable weights is actually anything like a biological brain at all.
The term "artificial intellegence" seems accurate to me.
I wouldn't be surprised if flies are moderately smarter than anything we've ever created.