Imitate or be inspired by the human brain
There is no unanimous definition of AI. For purists, a simple neural network or image recognition system is not, strictly speaking, AI. Everything depends on how AI is defined, and in particular on whether the definition is anthropocentric or not.
It’s a bit like magic: as long as we don’t know the trick, it is magic, even art. Once we know it, it’s a technique, often a very simple, if not obvious one. Human intelligence is much the same as long as we don’t know how it works. It retains a mysterious, inimitable, almost immaterial side, like a soul with no physical existence.
With the discoveries in neurobiology and cognitive science, this magic gradually loses its luster. After all, man is only a highly sophisticated biological machine resulting from evolution. Certainly a complex machine, one whose functioning depends on a very large number of environmental parameters and accumulated experiences, but a machine all the same. And it is the first machine able to understand its own internal functioning!
Should we absolutely try to copy or imitate the human brain to create digital solutions? In which cases is imitation useful, and in which is inspiration alone sufficient? Should we try to create machines that are more intelligent than a human in all dimensions?
The example of aviation
Aviation can serve as a good basis for reflection. The airplane is inspired by the bird but does not imitate it. What they share is having wings and using speed and lift to fly.
The concept then diverges quickly: airplanes do not have mobile wings made of feathers! Their wings are generally fixed, and propulsion comes from propellers or jet engines. The plane far exceeds the bird in speed (supersonic for military aircraft), size (B747, A380, Galaxy C5, Antonov 124), carrying capacity (measured in tens of tons), altitude (10 kilometers for an airliner), and resistance to cold (about -50°C at that altitude, which a developed biological organism can hardly bear for long, even with good plumage). On the other hand, airplanes are much inferior to birds in energy efficiency and flexibility, even if the energy density of animal fat is close to that of kerosene (37 vs 43 megajoules/kg).
Biomimicry was useful in the beginning to conceptualize the airplane, whether in Leonardo da Vinci’s sketches or Clément Ader’s airplane, which was very close to the bird. While the motorization of the airplane is very different from that of birds flapping their wings, the feathers that unfold at landing and takeoff have nevertheless reappeared in the form of high-lift flaps, invented by Boeing for its 707 launched in the late 1950s, whose most elaborate form was integrated into the Boeing 747, which first flew in 1969.
The eagle is one of the fastest birds in the world, reaching 120 km/h. A conventional airliner reaches 1,000 km/h and touches down with its flaps extended at about 200 km/h. An A380 takes off in 2,700 meters and lands in 1,500 meters. An eagle lands in a few seconds and almost anywhere! It’s power versus flexibility. Pocket drones have some of the flexibility of birds, but their endurance is generally much more limited, especially compared with migratory birds that can fly for hours at a time before resting on the ground.
AI follows a similar path in biomimicry
Some characteristics of mammalian brains are imitated in neural networks, machine learning, and deep learning. But there are fundamental differences between human and machine intelligence: in their inputs and outputs as well as in the structure of their memory and reasoning. For the moment, the machine is distinguished by its capacity to store and analyze immense volumes of information and by its raw computing power.
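The imitation is indeed loose: the "formal neuron" of a neural network reduces its biological counterpart to a weighted sum followed by a non-linearity. A minimal sketch in Python, with purely illustrative weights and inputs:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """Crude abstraction of a biological neuron: dendritic inputs are
    reduced to a weighted sum, and the firing decision to a sigmoid."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid "firing rate"

# Arbitrary illustrative values
inputs = [0.5, 0.3, 0.9]
weights = [0.8, -0.2, 0.4]
print(round(artificial_neuron(inputs, weights, bias=-0.1), 3))  # → 0.646
```

Everything that makes the biological neuron complex, such as neurotransmitter chemistry, myelin, and glial regulation, is absent from this abstraction.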
Man has sensory receptors in astronomical quantities that no connected object can match at this stage. Associated with the cortex, they provide him with a sensory memory that accumulates memories throughout his existence, fed by the optic, auditory, and olfactory nerves, as well as by those that manage touch: millions of neurons irrigating our sensory memory in parallel. It is both a strength and a weakness, as the emotions linked to this sensory memory generate fear of certain risks and decisions that may be irrational. On top of this, the level of complexity of the brain is beyond comprehension.
Nevertheless, by brute force, AI already surpasses human beings in a whole bunch of areas, especially when it comes to “crunching” large volumes of data that are completely beyond our comprehension. When it has access to large volumes of data as in oncology or by exploiting data from connected objects, AI can do wonders.
In fact, AI is rather inoperative without data. It does not yet know what to look for or how to take initiative. And algorithms are still very limited because the data of our lives are, fortunately, not yet consolidated. This explains the limits of recommendation algorithms, which do not know what I have already seen or done and have no way to find out; therefore, they cannot make a totally relevant recommendation. Things may be different the day our whole lives are tracked by objects connected since birth.
What about human reasoning?
It does not seem to be beyond the reach of machines. Little by little, we manage to model it for very specialized tasks. But AI still lacks flexibility and adaptability to a wide variety of situations. In short, there is still much to do! But it is not inconceivable to provide generic intelligence to a machine. It will be achieved by trial and error, by integrating disparate algorithmic and software bricks, and not only through the brute force of the machine.
Major research initiatives seek to decode the brain
However, understanding the brain by modeling its functioning remains a goal for many researchers. The idea is not necessarily to copy it, but at least to better understand how it works in order to discover treatments for certain neurodegenerative pathologies.
Numerous national and international research initiatives have been launched in this direction. They come from Europe and the USA, but also from Japan, Australia, Israel, Korea, and India.
The European Human Brain Project aims to digitally simulate the functioning of a brain. It was launched after a call for proposals won by Henry Markram of EPFL Lausanne, the researcher behind the Blue Brain Project, launched in 2005 to create a synthetic mammalian brain. Built on an IBM Blue Gene supercomputer running Michael Hines’ neural network software, that project aims to simulate neurons as realistically as possible.
With an EU budget of 1 billion dollars over five years, the Human Brain Project aims to improve understanding of brain function as broadly as possible, with a focus on the treatment of neuro-cerebral pathologies and on creating technological advances in AI. It has drawn its share of criticism. Its scattered structure is somewhat reminiscent of Quaero. French laboratories raised 78 million dollars in funding, notably at the CEA, while those in Germany and Switzerland took the lion’s share with 266 million and 176 million dollars respectively. One wonders who will do the integration!
In practice, it has become more of a big data project that is moving away from the brain: its simulation models no longer rely on up-to-date biological knowledge of how neurons function in the brain.
The USA is not to be outdone, with the Brain Initiative announced by Barack Obama in 2013. It aims to better understand the functioning of the brain. Its stated objective seems more operational than the Europeans’: to better understand Alzheimer’s and Parkinson’s diseases as well as various neuronal disorders. The annual budget is on the order of 100 million dollars, ultimately the same order of magnitude as the Human Brain Project. Among the projects are nanotechnology initiatives to measure the activity of individual nerve cells, starting with those of fruit flies.
We can also mention the Human Connectome Project, launched in 2009, another American project funded, like the Brain Initiative, by the NIH, which aims to accurately map the different regions of the brain and its main internal nerve connections.
For its part, the Allen Brain Atlas project focuses on mapping the brain of different species, including humans and mice, at the level of gene expression of its different nerve cells. The platform and associated data are open.
On the neurobiology side
We also still need to understand the learning process of young people up to the age of 20. How does the brain wire itself during the learning phases? How do we separate the innate from the acquired in learning processes? Mice can be dissected, but obviously not infants, so we don’t know, and an MRI is insufficient. The Chinese and Japanese are working on an intermediate path by mapping the brains of monkeys, which are closer to humans than rodents.
In short, a lot of research is focused on brain function, with an intersection with artificial intelligence research.
Copying the brain is not for tomorrow, and fortunately so
In “The Singularity is Near”, Ray Kurzweil fantasizes about the future ability to transplant a brain into a machine and thus attain immortality, the ultimate embodiment of technological solutionism, which seeks to find a technological solution to all human problems or fantasies.
Dumping the contents of a brain into a computer, however, faces some major technological obstacles. And fortunately so!
What are they? First of all, it is not yet possible to describe precisely how information is stored in the brain. Is it located in the neurons themselves or in the synapses that connect neurons to the axons of other neurons? In “Memory May Not Live in Neurons’ Synapses”, published in Scientific American in 2015, it is argued that information is stored in neurons rather than at synapses. Is this storage of the same kind in the cortex and in the cerebellum? What about the limbic brain, which manages emotions, happiness, and fear by interacting with both the cortex and the hormone-producing organs? We’re still looking!
In any case, the information is stored in the form of chemical and ionic gradients, probably not in binary form (“on” or “off”) but with intermediate levels. In computer terms, it seems that neurons may store whole numbers or even floating-point numbers instead of individual bits. It is also not excluded that neurons can store several pieces of information in different places (dendrites, synapses, axons). And there are only a few nanometers between dendrites and axon endings! The communication between the two is chemical, via the potentials of calcium, sodium, and potassium ions, and is regulated by neurotransmitters such as acetylcholine, dopamine, and adrenaline, or amino acids such as glutamate and GABA (γ-aminobutyric acid), which block or promote the transmission of nerve impulses. To this complexity must be added the state of the glial cells, which regulate the whole and condition in particular the performance of the axons via the myelin that surrounds them. The amount of myelin around the axons varies from place to place in the brain and modulates both the intensity and the speed of nerve transmission. This makes the brain’s functioning even more complex!
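If neurons do store graded levels rather than a single on/off bit, the information per storage site grows only logarithmically with the number of distinguishable levels. A back-of-envelope sketch, where the level counts are purely illustrative:

```python
import math

def bits_per_site(n_levels):
    """Information (in bits) carried by one storage site that can
    reliably hold one of n_levels distinguishable values."""
    return math.log2(n_levels)

print(bits_per_site(2))    # binary on/off: 1.0 bit
print(bits_per_site(16))   # 16 graded chemical levels: 4.0 bits
print(bits_per_site(256))  # 256 levels, i.e. one byte per site: 8.0 bits
```

Graded storage would thus multiply the brain’s capacity per synapse or dendrite, but only modestly compared with the sheer number of storage sites.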
What if memory is only made up of rules and methods of approximation?
What if knowledge was in fact encoded both in the neurons and in the connections between neurons? In any case, the brain is a gigantic chemical puzzle that is constantly reconfiguring itself. Neurons do not reproduce themselves, but their connections and the biological soup in which they are bathed are constantly evolving.
How can we detect these chemical potentials that are found in trillions of places in the brain, either within neurons or in the interneuronal connections? How can this be done with a non-destructive and non-invasive analysis system?
There are not many options: we have to go through electromagnetic waves, with nanometer-scale precision. Today, scanners generally use three technologies: CT scanners, which measure the density of matter with X-rays; PET scanners, which detect radioactive biological tracers by photon emission; and MRI, which detects soft tissue by nuclear magnetic resonance and does not irradiate the brain but must plunge it into an intense magnetic field. These scanners have a resolution that does not exceed the order of a millimeter, and it does not progress at all along Moore’s exponential law! Moreover, it would be interesting to evaluate the amount of energy that would have to be sent into the brain (X-rays, magnetism, etc.) to carry out this kind of detection 100 to 1,000 trillion times, the number of synapses in the brain, at a resolution on the order of a nanometer. In all likelihood, this electromagnetic energy would be enough to cook the brain as in a microwave oven, which is not at all the desired effect! Except perhaps to treat some morons in a somewhat radical way.
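Leaving the energy question aside, simple arithmetic gives a feel for the scale; the rate of one billion nanoscale measurements per second is a purely hypothetical, and generous, assumption:

```python
# Synapse counts from the text; the measurement rate is a hypothetical assumption
synapses_low, synapses_high = 100e12, 1000e12  # 100 to 1,000 trillion synapses
rate = 1e9        # hypothetical: one billion nanoscale measurements per second
seconds_per_day = 86400

days_low = synapses_low / rate / seconds_per_day
days_high = synapses_high / rate / seconds_per_day
print(f"{days_low:.1f} to {days_high:.1f} days")  # → 1.2 to 11.6 days
```

Even at that implausible rate, a single full scan would take days, during which the living brain would keep reconfiguring itself.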
It is a known law in physics: the deeper we explore the infinitely small, the more energy it takes. The LHC at CERN near Geneva enabled the detection of the Higgs boson. It consumes a trifling 200 megawatts, with peaks of 1.3 gigawatts, more than the power generated by a nuclear power plant unit! The LHC cost 9 billion dollars, and the price of this kind of scientific instrument does not follow Moore’s law at all, unlike DNA sequencing!
Electroencephalogram (EEG) sensors do exist.
They are placed on the head, at the periphery of the cortex, and capture the activity of large psychomotor control areas of the brain with a low level of accuracy. It is very “macro”, while memory and reasoning work at the “pico” level. What’s more, if we can roughly map the functional areas of the brain, we are quite unable to capture the role of each neuron individually. Will we ever know precisely the position of all the synapses in the entire brain and to which neurons they belong? Not easy! Another solution: map the cortex to identify thought patterns. If we think of a given object, it may activate distinct, recognizable macro-zones in the brain.
The brain of a fetus is thought to contain more than a thousand billion neurons, many of which die quickly. Neurons are in fact lost from birth, as if a matrix were being hollowed out, gradually taking shape as we learn. A child’s brain is thought to include more than 100 billion neurons, more than 15 trillion synapses, and 150 billion dendrites.
An adult brain comprises about 85 billion neurons, 30 billion of which are in the cortex, 10 trillion synapses (neuron/neuron connections via multiple axon terminations that exit from neurons and connect to dendrites close to the nuclei of other neurons), and 300 billion dendrites (the structures of neurons on which synapses are found). It consumes about 20 watts supplied in the form of carbohydrates (glucose) via the bloodstream, making it a very efficient “machine” in terms of energy consumption. In its development from birth, the brain loses neurons but gains connections between them, and this throughout life, although the process slows down with age, even without neurodegenerative diseases.
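From the figures above, a quick computation shows just how frugal this “machine” is:

```python
# Figures taken from the text above
power_watts = 20.0   # brain power consumption, supplied as glucose
neurons = 85e9       # neurons in an adult brain
synapses = 10e12     # synapses in an adult brain

print(f"{power_watts / neurons:.2e} W per neuron")    # → 2.35e-10 W per neuron
print(f"{power_watts / synapses:.2e} W per synapse")  # → 2.00e-12 W per synapse
```

A fraction of a nanowatt per neuron and about two picowatts per synapse, orders of magnitude below what today’s processors spend per elementary operation.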
A neurotransmitter arriving via a synapse can trigger a cascade of chain reactions in the target neuron that will regulate the expression of genes and produce regulatory proteins that will modify the behavior of dendrites in receiving signals from axons. Moreover, dendrites – the receptors in neurons – have variable shapes and behaviors. In short, we have a most complex regulatory system that has not been integrated at all in Kurzweilian models!
More than 50% of the brain’s neurons are located in the cerebellum. It manages learned automatisms such as walking, grasping, sports, driving, cycling, dancing, and mastering musical instruments. A neuron in the cerebellum contains about 25,000 synapses connecting it to the axon endings of other neurons. Those in the cortex, which manage the senses and intelligence, each contain between 5,000 and 15,000 synapses.
The brain is also filled with glial cells, which feed neurons and control their function via the myelin that surrounds axons and various other regulatory mechanisms. There are at least as many of them as there are neurons, adding another level of complexity. To this we must add the role of the hippocampus as a memory buffer, and the emptying of this buffer during sleep, which reminds us that sleep of good quality and duration helps maintain memory. Finally, via the sympathetic and parasympathetic nervous systems, the brain is connected to the rest of the organs, including the digestive system, as well as to all the senses, especially touch.
The brain is unbeatable in its density, compactness, and parallelism. On the other hand, computers surpass us in their capacity to store and process large volumes of data. Although scanning a brain at the neuronal level will remain out of reach for a long time, it is still possible to understand how it works by trial and error. Neuroscience continues to make steady progress in this area. Little by little, we are understanding how the different levels of abstraction in the brain work, even if the associated scientific methods of verification remain quite empirical, most often performed on mice.
But it is not necessary to master the lowest level of abstraction in the brain to simulate its higher levels of abstraction, without going through cloning. Just as it is not necessary to master the Higgs boson to do chemistry, or to understand how DNA is used to make proteins within cells!
In any case, whatever happens, a hyper-intelligent machine will not have an intelligence similar to man’s. It will probably be colder, less emotional, and more global. Artificial intelligence will be superior to human intelligence in many areas and not in others. It will simply be different and complementary, at least within a reasonable timeframe of a few decades.