Posted by they4kman on Saturday, December 17, 2011 at 2:26 a.m. (1 year, 5 months ago)
This text was crafted in an email to my good friend, Christina, to explain a model of an abstracted neural network in reference to the human brain. I quickly realized I had a lot of groundwork to lay, and it got really long. Sorry. Enjoy. But sorry.
Enter at ye own riske!
Alright, here's a quick history lesson to recap the first 10.2 billion years: the Big Bang spits out the universe, then about 9.2 billion years later, the Earth technically becomes a planet. A short while after that, it's rife with all the elements needed to kick off this theory's explanation!
It's important to note just how big Earth is. It's fucking big. And molecules are fucking tiny. That's a ripe recipe for an awful lot of molecules. Plus, things are kinda warm on Earth — there's a lot of energy. All those molecules are flying everywhere. Most of the time, nothing happens. It's important to know, but that's all. It's cooler when things happen.
Eventually, things happen. Molecules connect with each other. They interact with each other. They begin to grow larger and more complex. They seem to have moving parts. Some of them seem to "create" other molecules, like a factory. After a long fucking time — I think around 500 million years (it's somewhere within 1 billion :P) — even those factories become complex.
The factories are now capable of using strings of molecules connected together in a chain pattern to create completely new factories: factories that produce both more factories and another copy of the pattern. That chain pattern in turn goes to another factory, and so on. Changes to how the chain is laid out affect how the factory produces its own chain.
What used to just be random chemical reactions now becomes changes to this chain. Because a change only occurs when the chain gets passed to a factory, one pass is called a generation. The actual change in the chain is what's referred to as a mutation. Just a change in how the chain is laid out.
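That copy-with-occasional-changes loop can be sketched in a few lines of Python. This is just a toy model of the idea above, not real biochemistry; the chain is a string, the "factory" is a copy function, and every name here is made up for illustration:

```python
import random

ALPHABET = "ACGT"  # the few simple building blocks the factories accept

def next_generation(chain: str, mutation_rate: float = 0.05) -> str:
    """Pass the chain through the factory: copy it link by link,
    occasionally mutating a link along the way."""
    out = []
    for link in chain:
        if random.random() < mutation_rate:
            out.append(random.choice(ALPHABET))  # a mutation: the link changes
        else:
            out.append(link)                     # usually, a faithful copy
    return "".join(out)

# each pass through the factory is one generation
chain = "ACGTACGTACGT"
for generation in range(5):
    chain = next_generation(chain)
    print(generation, chain)
```

Run it a few times and you'll see the chain slowly drift, one generation per pass, which is all a mutation is in this model.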
This really jumpstarts the evolution of new life. Instead of relying on pure chance on the reactions between any of the some hundred different elements on Earth, these factories could slim down the input to just a few simple chemical structures, known as nucleotides, glued together.
(Side note: While the function of those chemical structures is simple, the actual layout is fucking complicated and amazing. It was a fantastic feat of nature that took an awesome amount of time to create. Just beautiful. I'm sure I told you about the DNA book I checked out at UCF. It's really magical stuff, and it was always hard to explain my excitement about DNA :P)
Anyway! With the new chain, which of course is DNA, changing the way new generations of life worked was slimmed down to linking a few colour-coded magnets together, whereas it used to involve playing a symphony on 100 pianos. So in a short amount of time, a mutation to the chain could occur, the chain could be run through the latest generation of the factory (the cell), and out would come the next-generation cell. The term for this cycle of change, test, repeat is "rapid turnaround." Not very important, just a nice succinct term for yer vocabulary.
After a gazillion generations (plus or minus a jillion), the cells have started to interact with each other to form organs — large gatherings of cells working together to perform a common task. This is past single- or even many-cell organisms; this is million- or billion-cell organisms with many organs even interacting with each other. The entire species tree begins with these organisms.
One of the organs created is a collection of neurons. A neuron by itself is a very simple concept: it first learns a signal, then whenever that same signal is sent to the neuron, it fires its own signal. When you combine a shitton of neurons, you get a neural network, and that's when things start becoming cool.
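That learn-a-signal-then-fire-on-it idea is simple enough to sketch in Python. This is my toy abstraction of a neuron, not how biological neurons actually work:

```python
class Neuron:
    """A toy neuron: it learns one signal, then fires whenever
    that same signal is sent to it again."""

    def __init__(self):
        self.learned = None

    def receive(self, signal) -> bool:
        if self.learned is None:
            self.learned = signal          # first signal in: learn it
            return False                   # nothing to fire on yet
        return signal == self.learned      # fire only on the learned signal

n = Neuron()
n.receive("mom")         # learns "mom", doesn't fire yet
print(n.receive("dad"))  # False -- not the learned signal
print(n.receive("mom"))  # True  -- fires!
```

One of these by itself is boring; the interesting behaviour only shows up when a shitton of them are wired into each other, which is the neural network part.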
Neural networks need some initial information in order to work. Because if the neural network is blank, it means none of the neurons have learned a signal, so none of them will fire, and nothing interesting will happen. (Please note I have a lot of research to do on biological neural networks; don't take my word as authority, just an idea for now)
Luckily, because that initial information was created by pure chance when neural networks first came into existence, we don't have to worry about that. The important thing to take from this is: DNA knows how to produce the initial information for the neural network.
This is where DNA mutation becomes really, really fucking cool. The DNA chain is no longer ten, a hundred, or even ten thousand base pairs (one link in the chain) — that mother fucker is around 3.2 billion base pairs in Homo sapiens. Most of it's useless drivel never read directly, but it needs to be copied because the rest of the DNA contents need to be in specific positions along the chain. That's how one strand of DNA carries the information of the whole body: it's essentially a 5-subject notebook, containing mostly scribbles, but very detailed instructions on some pages.
So this same strand of DNA, slightly mutated by the parent, gets passed to the child. If the portion of the DNA containing the neural network's initial information remains unmutated, the child will begin to think just like his parent (assuming the womb experience is exactly the same). But if it's mutated, the child will think and learn differently than the parent.
Well, we made it to natural selection. Let me do a quick recap here: molecules collide, creating factories, becoming cells, using DNA, forming organs, working together as organisms, eventually developing neural networks (simple brain) affected by instructions present in DNA.
Some of the mutations create initial information in the brain's neural network that is so poorly suited to the conditions of childhood that the child dies. That line is a failure, because the mutation did not help the child adapt to its conditions well enough to reproduce. To natural selection, that is all that matters: reproduction. If you reproduce, your DNA experiment continues for future generations to enjoy. Otherwise, it gets cancelled. No big deal.
Now we must go back to neural networks to understand how they learn collectively. Singularly, a neuron can learn just one thing. Whenever that one thing is shoved through the "in" end of the neuron, it fires through the "out" end. I believe humans have enough initial information in their brains to learn how to recognize differences in colours in images from the eye. Underneath that, I believe we come "preloaded" with the idea of pleasure and pain — a group of neurons that when fired produce a "good" or a "bad" feeling. If you combine that with a group of neurons connecting mom-like sounds from the ear to the pleasure neurons, you have a neural network that will eventually learn all the shapes in its environment by detecting colour changes in the optical image. And if a shape is present when mom-like sounds are heard, it will connect that shape with the pleasure neurons.
So let's explain how the neural network actually works with that example. The pleasure center is a group of neurons that produce a sensation of pleasure when fired. The optical nerve takes the image from the eye and converts it into signals that run through the neural network. In order for the signals from the optical nerve to be useful, they need to connect to the pleasure center somehow. That "initial information" I talked about before is exactly that: it's the connections between neurons that are already made before the brain starts processing outside information.
Let's say, for instance, that the connection was made between mom-like sounds coming from the ear neurons to the pleasure center. Now, when mom-like sounds are heard, the mom-like sound neurons fire, which in turn fire the pleasure center neurons. If the optical nerve neurons for a certain face fire at the same time as the mom-like sound neurons, a little connection is made between all the neurons in the middle of the two neuron groups. The more it happens, the stronger the connection.
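That fire-together, wire-together idea can be sketched like this. The neuron groups are just names and the strengths are just numbers I picked; it's an illustration of the rule, not a claim about real brains:

```python
from collections import defaultdict

# connection strength between pairs of neuron groups, starting at zero
connections = defaultdict(float)

def co_fire(group_a: str, group_b: str, step: float = 0.1):
    """When two groups fire at the same time, strengthen the
    little connection between them."""
    key = tuple(sorted((group_a, group_b)))
    connections[key] += step

# mom's face and mom-like sounds keep showing up together...
for _ in range(10):
    co_fire("mom_face", "mom_sound")
co_fire("mom_face", "stranger_sound")  # ...versus a one-off coincidence

print(round(connections[("mom_face", "mom_sound")], 2))       # 1.0 (strong)
print(round(connections[("mom_face", "stranger_sound")], 2))  # 0.1 (weak)
```

The more often two groups co-fire, the bigger the number between them gets, which is exactly the "the more it happens, the stronger the connection" rule above.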
(I'm not very sure how the memory function of the brain works. This is purely uneducated speculation/guesstimation.) I believe memory works by storing the locations of neurons to fire. The memory is then connected to a neuron. Whenever that neuron fires, all the locations of neurons in the memory are fired, reproducing whatever was remembered in the brain. The stronger the connections when the memory was being created, the more vivid the memory.
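The stored-locations idea reads like this in code. Same caveat as above, this is pure speculation made concrete: "locations" are just neuron names here, and a trigger neuron replays the whole recorded set:

```python
# a memory: a recorded set of neuron locations, keyed by a trigger neuron
memories = {}

def record_memory(trigger: str, firing_neurons: list):
    """Store which neurons were firing, attached to one trigger neuron."""
    memories[trigger] = list(firing_neurons)

def recall(trigger: str) -> list:
    """Firing the trigger re-fires every neuron stored in the memory,
    reproducing whatever was remembered."""
    return memories.get(trigger, [])

record_memory("smell_of_cookies",
              ["grandma_face", "warm_kitchen", "pleasure_center"])
print(recall("smell_of_cookies"))
```

In this model, "vividness" would just mean the replayed neurons have strong enough connections to cascade, the same strengths built up during recording.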
When we put together the chain reactions of the neural network with the initial connections and hook it up to the memory, we get learning. We're wired to notice similarities between new things and things we've already seen. Every time we have a new experience, it builds on the connections created before. When neurons fire at the same time, they begin to form new connections between each other.
Cool thing to note: with that simplified model of the brain, ideas are simply a set of neurons firing a certain way.
Let's do a quick recap on the learning process according to this model: the brain begins with a pleasure center hooked up to some ideas, and those ideas hooked up to the five senses. Neurons firing at the same time create connections to each other. The more often they fire together, the stronger the connections. That's the human brain learning, at the functional level.
The repercussions of that process are very interesting. For instance, when you're young, your neural network is much blanker, so stronger connections are easier to make. Which means that yes, the young are impressionable. However, it doesn't mean new connections can't be made once we grow older. It just means there are fewer blank neurons (i.e. neurons still learning), so firing neurons are less likely to connect when a new experience doesn't already fit the existing neuronal pathways, because the way it fires may differ from what was learned.
I'll work on writing down some more of the interesting consequences when I come up with ways to explain it just right (wrong).
Well, that's kinda the end of the explanation of my theory.