Our mushy brains may seem a far cry from the solid silicon chips in computer processors, but scientists have a long history of comparing the two. As Alan Turing put it in 1952: "We are not interested in the fact that the brain has the consistency of cold porridge." In other words, the medium doesn't matter, only the computational ability.
Today, the most powerful artificial intelligence systems use a type of machine learning called deep learning. Their algorithms learn by processing massive amounts of data through many layers of interconnected nodes, known as deep neural networks. As their name suggests, deep neural networks were inspired by the real neural networks in the brain, with nodes modeled after real neurons, or, at least, after what neuroscientists knew about neurons back in the 1950s, when an influential neuron model called the perceptron was born. Since then, our understanding of the computational complexity of single neurons has expanded dramatically, and biological neurons are now known to be more complex than artificial ones. But how much more?
To find out, David Beniaguev, Idan Segev and Michael London, all at the Hebrew University of Jerusalem, trained an artificial deep neural network to mimic the computations of a simulated biological neuron. They showed that a deep neural network requires between five and eight layers of interconnected "neurons" to represent the complexity of a single biological neuron.
Even the authors did not anticipate such complexity. "I thought it would be simpler and smaller," said Beniaguev. He expected that three or four layers would be enough to capture the computations performed within the cell.
Timothy Lillicrap, who designs decision-making algorithms at the Google-owned AI company DeepMind, said the new result suggests that it might be necessary to rethink the old tradition of loosely comparing a neuron in the brain to a neuron in the context of machine learning. "This paper really helps force the issue of thinking about that more carefully and grappling with to what extent you can make those analogies," he said.
The most basic analogy between artificial and real neurons involves how they handle incoming information. Both kinds of neurons receive incoming signals and, based on that information, decide whether to send their own signal on to other neurons. While artificial neurons rely on a simple calculation to make this decision, decades of research have shown that the process is far more complicated in biological neurons. Computational neuroscientists use an input-output function to model the relationship between the inputs received by a neuron's long treelike branches, called dendrites, and the neuron's decision to send out a signal.
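The "simple calculation" of an artificial neuron can be sketched in a few lines: a weighted sum of the inputs, plus a bias, passed through a threshold. The weights and inputs below are hypothetical values chosen purely for illustration, not numbers from the study.

```python
def artificial_neuron(inputs, weights, bias):
    """A perceptron-style unit: fire (return 1) if the weighted
    sum of inputs plus bias crosses zero, otherwise stay silent (0)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Two incoming signals, the second weighted more strongly:
# 1.0*0.4 + 0.5*1.2 - 0.8 = 0.2 > 0, so the unit fires.
print(artificial_neuron([1.0, 0.5], [0.4, 1.2], bias=-0.8))  # -> 1
```

A biological neuron's input-output function, by contrast, depends on nonlinear processing along its dendrites, which is exactly the complexity the researchers set out to measure.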
This input-output function is what the authors of the new work taught an artificial deep neural network to imitate in order to gauge its complexity. They started by creating a massive simulation of the input-output function of a type of neuron with distinct trees of dendritic branches at its top and bottom, known as a pyramidal neuron, from a rat's cortex. They then fed the simulation into a deep neural network that had up to 256 artificial neurons in each layer. They kept increasing the number of layers until the network could predict, with 99% accuracy at the millisecond level, the relationship between the simulated neuron's inputs and outputs. The deep neural network successfully reproduced the neuron's input-output function with at least five, but no more than eight, layers. In most of the networks, that equated to about 1,000 artificial neurons for just one biological neuron.
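The search procedure described above can be sketched schematically: fix the layer width, then grow the network one layer at a time until the accuracy target is met. The accuracy figures below are hypothetical placeholders; the real values came from training on the simulated pyramidal-neuron data, which this sketch does not reproduce.

```python
def count_units(n_layers, width=256):
    """Total artificial neurons in a network of n_layers hidden
    layers, each `width` units wide (256, as in the study)."""
    return n_layers * width

def find_depth(accuracy_by_depth, target=0.99):
    """Return the smallest depth whose measured accuracy meets
    the target, or None if no tested depth suffices."""
    for depth in sorted(accuracy_by_depth):
        if accuracy_by_depth[depth] >= target:
            return depth
    return None

# Hypothetical accuracy-vs-depth results, for illustration only.
accuracies = {3: 0.91, 4: 0.95, 5: 0.96, 6: 0.982, 7: 0.991}
depth = find_depth(accuracies)
print(depth, count_units(depth))  # smallest sufficient depth and its unit count
```

The point of the sketch is only the shape of the experiment: depth is the variable, width is held fixed, and the stopping rule is a 99% fit to the simulated neuron.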
"[The result] forms a bridge from biological neurons to artificial neurons," said Andreas Tolias, a computational neuroscientist at Baylor College of Medicine.
But the authors caution that it is not a straightforward correspondence yet. "The relationship between how many layers you have in a neural network and the complexity of the network is not obvious," said London. So we cannot really say how much complexity is gained by moving from, say, four layers to five. Nor can we say that the need for 1,000 artificial neurons means that a biological neuron is exactly 1,000 times as complex. Ultimately, it is possible that using exponentially more artificial neurons within each layer would eventually match a biological neuron in a single layer, but it would likely require much more data and time for the algorithm to learn.