Based on the results of a study by scientists at the Scientific Research Technological Centre for Neurotechnology of Southern Federal University (SFU), speech 'prostheses' could one day be developed to restore communication for people with disabilities, enabling them to reproduce their inner (mental) speech through a smart speaker.
Everyday life is hard to imagine without speech: it is our most important means of social communication. However, as a result of head injuries, strokes or neurodegenerative diseases, a person may lose the ability to express themselves verbally. While fully conscious, often with intact intelligence and preserved inner (mental) speech, such a person finds themselves in social isolation.
It is this problem that prompted SFU scientists to develop non-muscular communication channels that can be used by paralysed people. Valery Kiroi, Oleg Bakhtin, Elena Krivko, Dmitry Lazurenko, and Elena Aslanian carried out the research under the supervision of Dmitry Shaposhnikov, leading researcher at the Scientific and Technological Centre for Neurotechnology of Southern Federal University. The project is supported by the Russian Science Foundation (RSF).
According to the scientists, recognising speech from brain activity is rather difficult. Speech can be reconstructed in detail if electrodes are implanted directly into brain tissue, allowing the activity of individual neurons to be recorded. However, electroencephalography (EEG) makes it possible to study certain features of brain signals during overt (oral) and inner speech without surgical intervention. EEG is inexpensive, safe and non-invasive enough to be widely used to restore patients' lost ability to communicate with the outside world.
Because speech is a complex cognitive process that requires the coordinated activity of a number of cortical structures across the cerebral hemispheres, the researchers studied EEG coherence indices. These indices make it possible to evaluate the degree of interaction between different brain areas during the real and mental pronunciation of words denoting directions in space: up, down, right, left, forward, backward.
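To give a sense of what a coherence index measures, here is a minimal illustrative sketch, not the authors' actual pipeline: it estimates spectral coherence between two hypothetical EEG channels that share a common gamma-band component, using `scipy.signal.coherence`. The sampling rate, channel construction and noise levels are all assumptions for demonstration.

```python
import numpy as np
from scipy.signal import coherence

# Assumed parameters for illustration only (not from the study)
fs = 500                       # hypothetical sampling rate, Hz
t = np.arange(0, 10, 1 / fs)   # 10 s of synthetic data
rng = np.random.default_rng(0)

# Two synthetic "channels" sharing a common 60 Hz component plus
# independent noise, mimicking coupled activity between cortical areas.
common = np.sin(2 * np.pi * 60 * t)
ch_a = common + rng.standard_normal(t.size)
ch_b = 0.8 * common + rng.standard_normal(t.size)

# Welch-based magnitude-squared coherence, ~1 Hz frequency resolution
f, coh = coherence(ch_a, ch_b, fs=fs, nperseg=fs)

# Mean coherence inside the gamma-2 band (55-70 Hz) mentioned in the text
band = (f >= 55) & (f <= 70)
gamma2_coherence = coh[band].mean()
print(f"mean gamma-2 coherence: {gamma2_coherence:.2f}")
```

Coherence ranges from 0 (no linear coupling at that frequency) to 1 (perfect coupling), so comparing band-averaged values between conditions, e.g. overt versus mental pronunciation, quantifies how strongly two areas interact.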
At the first stage, undergraduate and graduate students took part in the research conducted by the Southern Federal University scientists. None of them had prior experience of psychophysiological examinations; all were right-handed and had no health problems. The experiment was conducted in accordance with the recommendations of the Bioethics Commission of Southern Federal University, which are based on the Declaration of Helsinki.
The results showed that during the real pronunciation of different words, the level of synchronisation and interaction between different brain structures increased significantly, most of all at gamma-2 frequencies (55-70 Hz). These EEG frequencies are believed to play the leading role in the brain's cognitive functions.
It was also shown that during the mental pronunciation of the same words, specific spatial coherence patterns formed in the left, speech-dominant hemisphere of the brain, reflecting the connections between projection and speech areas of the neocortex, primarily Broca's and Wernicke's areas. Application of machine learning and neural-network classification models demonstrated significant similarity between the brain mechanisms underlying overt (oral) and inner (covert) speech.
This, in particular, points to the high promise of mental speech for devices based on brain-computer interface (BCI) technology, which couples human brain activity directly to external devices such as motorised wheelchairs, prosthetic limbs and speech communicators.