If we had to highlight only one major discovery of the last decade, it would be that the brain is inherently highly organized, and that it is so from the very beginning. All the necessary functions are innately present. From the first months of life it contains what we could describe as algorithms. Learning simply activates and recycles them for cultural and educational purposes.
For example, children are endowed with deep intuitions regarding the perception of numbers. Before any formal instruction in counting, they are able to assess and anticipate quantities. Learning to count and calculate consists simply in taking advantage of these existing circuits and, thanks to their plasticity, in recycling them. Formal instruction in arithmetic is “grafted” onto the “sense of numbers” that small children already possess. As a matter of fact, it uses the same area of the brain.
The key word is therefore brain plasticity, because this is precisely what allows us to learn. Plasticity must be understood as a remarkable ability to recycle circuits that have been there from the beginning. Most importantly, it is a lifelong ability.
The brain circuits that support learning are not so varied. MRI studies have shown that reading always uses a small area of the left hemisphere. This very precise area is recruited regardless of the reader's cultural background, although some scripts, such as Chinese, also involve a small part of the right hemisphere. Most amazingly, this “letter box” can be identified only in people who have already learned to read. When MRI was used to scan illiterate adults while written words were shown to them, this area showed no activity at all. Former illiterates who learned to read as adults, on the contrary, activate exactly the same area as those who learned during childhood. Conversely, a lesion in this area can destroy the ability to read without any other effect on the subject's intelligence. A striking case is that of patients who, after a brain injury, can still write but cannot read what they have written.
Learning to read activates this specific area, but it also recruits and trains others. Reading develops the so-called early visual areas, those that react fastest when we see something. It also activates and improves areas devoted to spoken language, because through reading we sharpen our ability to encode speech sounds.
The reading area recycles an existing “algorithm,” that of face recognition: the scanner clearly shows the activation of the same area. Instead of recognizing faces, it is used to recognize letters and words.
But recycling is not simple reuse. Plasticity is also about reorganizing algorithms – about reprogramming, so to speak. During the first months in which children learn to write, they make the touching mistake of writing letters or words backwards, indiscriminately. We now know that this mistake is natural. It stems from a visual phenomenon: left-right invariance. This invariance, which plays a crucial role in the recognition of faces – a face remains the same face whether seen from the left or from the right – operates until it is “unlearned,” precisely because learning to read recycles the area responsible for face recognition.
The findings of neuroscience complement and support those of educational science. In particular, they help us understand why the “global” (whole-word) approach to reading is bound to fail. This approach expects the child to recognize whole words – “chair,” “cow,” “rabbit” – rather than their components, the grapheme-phoneme associations that the child must learn to decompose into letters and sounds. But the brain works otherwise: it operates on these segments, starting with the letters, when it runs its recycled recognition algorithms. The traditional ABC approach, so often mocked, is actually the most appropriate way to activate and recycle the brain areas involved in reading.
How does one evolve from halting to fluent reading? By a process of automation in which sleep plays a crucial role. Once the initial correspondences are established, a self-teaching mechanism comes into play: the child deciphers, recognizes, and gains access to a “second reading” which, in turn, allows new automatisms to develop.
The limits of the global method were revealed by asking literate adults to learn a series of stylized patterns corresponding to words in an imaginary language worthy of Star Trek's Klingon. While they started very fast, even the best minds quickly reached their limit at around thirty exotic signs. A second group, however, was told that the patterns follow a precise logic: they are made of letters. For this group, after a harder initial stage, things became much more fluid and the available vocabulary grew exponentially. Instead of a “package” recognition method – in other words, a global one – they had deciphered an alphabet.
Building on these neuroscience findings, a Finnish team has developed a serious game for learning to read. This was also an opportunity to learn about optimal learning rhythms. Scanning children's brains revealed effects even in the short term, with the formation of a distinct area in the left hemisphere. This happens very quickly: in barely eight weeks, at 15 minutes a day, the reading area is established. Fifteen minutes per day may look surprisingly short, but this is one of the discoveries of neuroscience: it is far better to allocate regular, short sessions over a long period than a lot of time over a short one.
Thanks to medical imaging, neuroscientists have been able to verify that learning works best when knowledge acquisition alternates with repeated testing – something well suited to the structure of games. For example, an eight-week course with a single final test will not internalize knowledge as well as a test every two weeks. It is necessary to test yourself – to make the model work: tests help us determine whether we have understood and, if not, to realize that we don't know. In a way, they are the best kind of learning. This is called metacognition – cognition observing itself, detecting failure and turning it into progress. The simplest case is that of a child who pushes a stack of cubes: the fall or stability of the stack informs the brain about the accuracy of its predictions. Feedback and repetition are essential to secure knowledge or know-how.
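The advantage of spacing tests can be sketched with a toy model. Everything below is illustrative: the exponential forgetting curve, the stability values, and the assumption that each test multiplies memory stability are simplifying assumptions, not measured parameters.

```python
import math

def recall_probability(days_since_last_test: float, stability: float) -> float:
    """Simple exponential forgetting curve: recall decays with elapsed time."""
    return math.exp(-days_since_last_test / stability)

def end_of_course_recall(test_days, check_day=70.0,
                         stability=7.0, boost=2.0) -> float:
    """Recall probability at `check_day`, assuming each test multiplies
    memory stability by `boost` (a crude stand-in for the testing effect)."""
    last_test = 0.0
    for day in sorted(test_days):
        stability *= boost   # each retrieval strengthens the trace
        last_test = day
    return recall_probability(check_day - last_test, stability)

# Eight-week course, recall checked two weeks after it ends:
single_final_test = end_of_course_recall([56])            # one test at the end
biweekly_tests = end_of_course_recall([14, 28, 42, 56])   # a test every two weeks
# Under this toy model, biweekly testing retains markedly more.
```

The point of the sketch is only the ordering: with the same total course length, repeated testing leaves the memory trace far more stable than a single final exam.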
Cognitive science has identified four main factors of successful learning: attention, active engagement, feedback, and finally, consolidation.
1. Attention is a filter that one must know how to capture and channel. It is the mechanism that allows us to select information and modulate its processing. To do so, it eliminates in order to concentrate – which is precisely why the term “concentration” is so appropriate. Attention comprises three systems: alerting, orienting, and executive control.
Attention massively modulates brain activity: the key issue for the purveyor of knowledge – parent, teacher or trainer – is therefore to direct attention to the right target. The learner must stay alert. Attention, however, has limits of its own. First of all, it cannot be divided: performing two tasks simultaneously is very difficult, and a bottleneck phenomenon has been observed in the prefrontal cortex. When “juggling” between tasks, we are not really performing two actions at the same time; we are switching from one task to the other, temporarily dropping the first, at the expense of signal acquisition.
Second, when we concentrate, stimuli irrelevant to the task at hand simply become... invisible! This is perfectly illustrated by the famous “invisible gorilla” video, in which viewers are asked to count the exact number of passes made by the players dressed in white – and, absorbed in counting, most fail to notice a gorilla walking across the scene.
In light of this experiment, it is clear that attention, selective by nature, leads to overconfidence: we are prepared to argue that what slipped under the radar of our perception simply never existed, because the filter is somehow one-way. That is a lesson that can be applied to many areas of life.
The challenge, therefore, is to focus attention. Here the so-called “teacher effect” is crucial: some teachers are able to capture attention, whereas others leave it adrift or draw it to irrelevant things. This is a common flaw of many school and training books, where an excess of illustrations and colors is laid out in an attractive yet chaotic way. Instead of an overdose of information, attention should be focused.
Executive control, as a lever of attention, plays a crucial role: it inhibits undesirable behavior, such as leaving the activity to do something else, starting to talk to a neighbor, and so on. Progress is particularly visible in children from families in which certain behaviors are not much regulated – for example, sitting still at the dinner table. The teachings of cognitive science shed new light on the issue of discipline, but also on inequalities between social classes – and they provide tools to fight those inequalities.
2. Active engagement. The guiding principle could not be clearer: a passive organism does not learn. Engagement must therefore be promoted, and the teacher can achieve it only if the child or learner engages himself. This goes hand in hand with testing: without testing the reliability of one's knowledge, one remains under an illusion of knowledge – and it is more than likely that each of us is affected in one area or another. The child, or the learner, must be able to test himself. Making learning conditions (reasonably) more difficult paradoxically leads to increased engagement and cognitive effort, which means improved attention.
3. Feedback. To err is human, but also... necessary. Activity, rather than passive listening, is crucial, but it still isn't enough. The cortex is currently believed to be a kind of machine that generates predictions and integrates prediction errors: it issues a prediction, receives sensory information in return, and compares the two. The difference produces an error signal that propagates through the brain and helps correct and improve the next prediction. Feedback is essential.
The brain works iteratively, through cycles of four successive stages: prediction, feedback, correction, new prediction. This is called the Bayesian (after the inference of the same name) or statistical approach to brain function: the brain organically internalizes statistics. It continually adjusts itself thanks to feedback – in other words, errors are fundamental! Since error signals are what allow predictions to be adjusted, learning is triggered only when there is an error signal; otherwise, nothing changes.
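The predict-compare-correct cycle can be sketched as a minimal error-driven estimator. This is an illustration of the idea, not a model of the cortex; the learning rate and the streams of observations are arbitrary assumptions.

```python
def learn(observations, learning_rate=0.2):
    """Iterate the cycle: predict -> observe -> error -> correct."""
    prediction = 0.0
    for observed in observations:
        error = observed - prediction        # feedback: the error signal
        prediction += learning_rate * error  # correction proportional to surprise
        # If the error is zero, the prediction does not move:
        # no error signal, no learning.
    return prediction

# The estimate converges toward the signal it keeps mispredicting:
final = learn([1.0] * 50)      # approaches 1.0
# A stream that is perfectly predicted from the start changes nothing:
unchanged = learn([0.0] * 50)  # stays at 0.0
```

The second call makes the article's point concrete: when every prediction is already correct, the error signal is zero and nothing is learned.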
Transposed to pedagogy, this means that error is normal, inevitable and... fertile – provided it is actively noticed by the learner, who must then overcome it rather than ignore it. To be useful, however, errors must not be punished too severely: stress is a learning inhibitor. Worse, a feeling of helplessness can kill future efforts in the bud. So what is the optimal way to overcome errors and achieve success? Motivation through positive reinforcement and immaterial reward should always be favored. Paying children for good grades is, of course, out of the question. Instead, since human beings are social animals, success can be rewarded by social reinforcement: approval, validation, encouragement.
4. Consolidating achievements. Remember your first driving lessons: a conscious effort, a huge number of signals to process in real time, the feeling that you won't make it, that you are overwhelmed. Terrifying! Yet this is a typical example of what is called an explicit process: a situation, or rather a stage, during which the prefrontal cortex is strongly mobilized by executive attention. To complete learning, the challenge is to transfer processing from explicit to implicit.
Indeed, by shifting gradually to faster, more efficient, unconscious networks, the brain achieves automation. The prefrontal cortex is freed, much as resources are freed in a slow, overloaded computer: once the unnecessary background tasks are closed, the computer can run smoothly again. Our cortex likewise shows a bottleneck very similar to computer RAM: it acts as a buffer that cannot handle more than a certain volume of information at once.
Let's come back to our first example, reading. At the very beginning, a child must consciously remember each association between letter and sound and apply them one by one, like the adults who had to learn an “alien” language in our previous example: learn that the letter “t” is pronounced as in “tongue,” and so forth for each letter. Children in their early literacy phase, and children with dyslexia, reveal the cost: the more letters a word contains, the more time it takes to read. It's linear and serial! This never happens for adults and children from third grade onwards: an eight-letter word is read as fast as a three-letter word, because processing is no longer serial but massively parallel – all letters are read at the same time! It is easy for an adult, or a teacher, to overlook this initial difficulty and forget what the child needs.
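The contrast can be stated as a back-of-the-envelope model. The per-letter and fixed times below are made-up numbers; only the shape of the two curves – linear versus flat – reflects the finding.

```python
def beginner_reading_time_ms(word: str, per_letter_ms: int = 300) -> int:
    """Serial decoding: each letter is processed one by one,
    so time grows linearly with word length."""
    return per_letter_ms * len(word)

def expert_reading_time_ms(word: str, fixed_ms: int = 250) -> int:
    """Parallel processing: all letters at once,
    so time is roughly independent of word length."""
    return fixed_ms

# "cat" vs "elephant": the beginner's time grows with length, the expert's does not.
short_beginner, long_beginner = beginner_reading_time_ms("cat"), beginner_reading_time_ms("elephant")
short_expert, long_expert = expert_reading_time_ms("cat"), expert_reading_time_ms("elephant")
```

The gap between the two functions is what automation closes: with practice, reading time stops depending on word length.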
Moreover, as long as the full “percentage of resources” of our “CPU” is devoted to deciphering, we cannot focus on the meaning of the text. Automation is crucial because it releases high-level resources.
Before concluding, a word on an unexpected yet crucial element in the consolidation of learning: sleep. It has been shown that sleep – even a single nap – improves learning performance without any re-learning. The brain, in fact, works during sleep: it “files” the new information, probably by replaying it in fast-forward. This increased speed allows the brain to detect patterns, to consolidate episodic memory (the memory of experiences) and, through its algorithms, to establish generalizations or even to make discoveries. The journal Nature published an article on a frequent report by mathematicians: they wake up in the morning with the solution to a problem they had stumbled on the night before – an effect that has been reproduced and verified in laboratory tests.
For children with attention disorders or learning difficulties, increasing sleep time has been shown to give very positive results, possibly better than medication. At a time when Ritalin is almost universally prescribed for attention-deficit “hyperactivity,” it would seem wise to turn the problem around: if an increasing number of children are affected by attention deficit disorders, the cause might simply be a lack of sleep.
In the end, the results of cognitive science on this issue are crystal clear: sleep plays a crucial role in learning, and it is far better to spread learning time across every day than to concentrate it in a single one. Better a quarter of an hour every day than an hour a few days a week, especially for long-term memory: the brain is not made to learn only half the week. This is where serious games could play a significant role and help establish a virtuous circle, including on weekends. This is especially true in poor families, where 15 minutes of cognitive work every evening would be far more stimulating than leaving the child in passive mode for two or three days in a row.
What can we learn from this presentation? First of all, that our brain is structured from birth and endows us with very deep intuitions. It runs powerful learning algorithms that no one has yet been able to duplicate in a machine. A baby is undoubtedly the best supercomputer we know: an organized system that produces Bayesian statistical inferences from the first months of life. And even though individual differences exist – from one brain to another, the extent of a given area can vary by a factor of two – the respective shares of nature and nurture in these differences are still unknown.
Clearly, the capacities of all newborns are huge. Thanks to generalization algorithms, the different brain areas and the different types of intelligence (verbal, memory-related, spatial, interpersonal, etc.), far from growing at each other's expense, benefit from one another's development. The specialization of a mind, in fact, comes from the time we spend, or do not spend, on particular fields.
The school must provide this wonderful human machine with a structured, rich and demanding environment – while strategically showing both generosity and tolerance toward errors. Learning at school is, ultimately, an extension of evolution. It uses a system of symbols to process numbers, colors, sounds, persons – in short, the whole environment – as precise entities, and helps avoid approximations. In doing so, it takes advantage of the unique capabilities we have inherited from natural evolution, those we use to consolidate knowledge and, above all, understanding.
In adult life, these capabilities are of direct concern to the corporate world. The potential of cognitive science is enormous if we can transpose what it teaches about the human brain in early life. First, through software: in the near future, serious games could train anyone in a virtually limitless range of areas. Then, in management: when you give an instruction, you are actually asking for a learning process. We should always keep in mind our first steps inside a car: we need to channel the other person's willingness and executive control to trigger their “learning device” – make them feel alert and involved, prevent counterproductive feelings of helplessness, let them internalize their progress. Marketing, for its part, has already benefited greatly: why pay for a thirty-second spot if MRI shows that the brain stops paying attention to a given content after only fifteen seconds? We have also begun to measure the impact of keywords. Finally, neuroeconomics uses brain imaging to study the cognitive and emotional factors in the decisions of economic players.