
I, Science

The science magazine of Imperial College

Timoleon Fourfaro explores cutting-edge research attempting to visualise dreams

(By Timoleon Fourfaro on 13th December 2023)

Imagine a world where you could record your dreams and replay them as a video when you wake up. It sounds more like a deleted scene from a Christopher Nolan blockbuster than reality. Yet, on the frontier of neuroscience, a team of Japanese researchers is attempting to turn this cinematic daydream into a scientific breakthrough. 

In 2013, Tomoyasu Horikawa and his colleagues from Yukiyasu Kamitani’s lab at ATR Computational Neuroscience Laboratories in Kyoto, Japan, gained worldwide recognition after using brain scanning to decode the visual content of people’s dreams. Specifically, they used non-invasive functional magnetic resonance imaging (fMRI) to measure brain activity in sleeping participants while simultaneously recording their brain waves through electroencephalography (EEG).

During non-rapid eye movement (NREM) sleep, which occurs right after falling asleep, the body enters a state of relaxation and reduced activity, marked by slower brain waves and a general slowing of physiological functions. Later in the sleep cycle, the brain transitions to rapid eye movement (REM) sleep, which is crucial for memory consolidation and is the stage where voluntary muscles become immobilised and most vivid dreaming occurs. There is now consensus that dreaming can occur during both REM and NREM sleep, but it remains uncertain whether the dreams in these phases differ qualitatively.

To address the gaps in our understanding of dream physiology, Horikawa hypothesised that the visuals experienced during sleep may be reflected, to some extent, in the patterns of activity within the visual cortex. This hypothesis finds support in the work of the neuroscientist David Eagleman, who, in his 2020 book “Livewired”, discussed the role brainstem-generated waves play in shaping visual experiences during dreaming. He notes that while we dream, “waves of spikes travel from the brainstem to the occipital cortex”, where, upon arrival, they manifest as visual perception. Through ponto-geniculo-occipital (PGO) waves, the brainstem plays a particularly important role in generating visual content during sleep by stimulating the occipital cortex, where the visual cortex is located. Additionally, the brainstem sends signals to relax the muscles essential for limb movements, so that we do not act out our dreams.

Figure 1: Table S1 from Horikawa et al. supplementary material. It shows an example of a verbal report, specifically the 114th report of participant 3.

Building on this, Horikawa and his colleagues focused on the visual experiences reported by participants during the N1 and N2 stages of sleep, both of which fall within the NREM period. Using real-time EEG monitoring to track participants’ sleep rhythms, the researchers gently roused them during the N1-N2 stages, asked them to describe their experiences, and recorded their responses (see Figure 1 for an example). Sampling dreams from NREM sleep allowed a large volume of data to be collected from each participant, since awakenings could be made more frequently. This process was repeated to create a comprehensive dataset capturing at least 200 dream experiences per individual.

Figure 2: Illustrative depiction of the Google image database used for algorithm training based on the 114th dream report of participant 3. The images used are taken from Unsplash and do not represent the actual database.

After collecting the verbal reports, Horikawa and his team identified the most frequently used keywords from each participant. Using the WordNet lexical database, they grouped these keywords into sets of synonymous concepts, known as synsets (Figure 1). They then labelled the fMRI data collected before each awakening with a “vector” indicating the presence or absence of specific synsets in the subsequent dream report. Following this, the researchers built a database using images from Google that corresponded to the reported dream contents. For example, the database corresponding to the dream report from Figure 1 might look something like Figure 2. 
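For the technically curious, here is a minimal Python sketch, not the authors’ actual code, of how this labelling step might work: a report’s keywords are mapped onto a binary synset vector using NLTK’s WordNet interface. The base synsets and keywords below are invented purely for illustration.

```python
# Minimal sketch of keyword -> synset labelling (illustrative only).
# Requires: pip install nltk, then nltk.download("wordnet").
from nltk.corpus import wordnet as wn

# Hypothetical base synsets, standing in for a participant's frequent
# dream-report categories (the study derives these per participant).
BASE_SYNSETS = [wn.synset(n) for n in
                ("male.n.02", "book.n.01", "street.n.01", "furniture.n.01")]

def is_under(synset, base):
    """True if `synset` is `base` itself or one of its descendants."""
    return any(base in path for path in synset.hypernym_paths())

def report_to_vector(keywords):
    """Return a 0/1 vector marking which base synsets the keywords hit."""
    hits = set()
    for word in keywords:
        for s in wn.synsets(word, pos=wn.NOUN):
            for i, base in enumerate(BASE_SYNSETS):
                if is_under(s, base):
                    hits.add(i)
    return [1 if i in hits else 0 for i in range(len(BASE_SYNSETS))]

# A report mentioning a man holding a book on a street:
print(report_to_vector(["man", "book", "street"]))  # typically [1, 1, 1, 0]
```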

Participants then underwent another fMRI scan while awake, during which they viewed images from the previously created dream-based dataset. This scan captured their regular brain activity associated with specific visual scenes and served as a reference for decoding the sleep data. The key idea is that the brain responds to visual stimuli with distinct and reproducible activity patterns. Because these patterns are distinctive, an algorithm can be trained to associate each fMRI pattern with a corresponding visual experience. Using machine learning, the researchers built a pattern-recognition model based on linear support vector machines (SVMs) applied to the fMRI patterns. Essentially, this decoder matched brain activity during sleep with the brain activity measured during wakeful image viewing to predict the visual content of dreams (see video below). While current dream decoding research has not reached the point of reconstructing detailed dream images, this study successfully identified broad categories of objects and scenes, such as cars, books, and buildings, from neural activity during sleep.
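To give a flavour of this decoding step, the toy sketch below trains a linear SVM on simulated “awake viewing” scans and applies it to simulated “sleep” scans using scikit-learn. The random arrays merely stand in for real fMRI patterns; this mirrors the general approach, not the study’s actual pipeline.

```python
# Toy decoder: one binary SVM per synset, trained on awake-viewing scans.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 1000                   # illustrative sizes only
X_awake = rng.normal(size=(n_trials, n_voxels))  # awake image-viewing scans
y_awake = rng.integers(0, 2, size=n_trials)      # synset present (1) / absent (0)

clf = LinearSVC(C=1.0, max_iter=5000)
clf.fit(X_awake, y_awake)                        # learn pattern -> label mapping

X_sleep = rng.normal(size=(10, n_voxels))        # scans taken just before awakening
print(clf.predict(X_sleep))                      # predicted synset presence in dreams
```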

While dream decoders are yet to master detailed dream images, deep neural networks (DNNs) hold strong promise. We can imagine DNNs as virtual brains with multiple layers of interconnected nodes. Each node processes information and transmits it to the next layer. This sequential processing allows the network to learn and store hierarchical representations of data. In the context of brain decoding and reconstruction, researchers train DNNs not only to correlate brain activity patterns with external stimuli but also to discover patterns and relationships between them. When trained well, DNNs are flexible enough to process highly complex datasets, recognise objects more accurately, and make predictions.
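The layered idea is easy to make concrete. The toy PyTorch network below (sizes arbitrary, weights untrained, not any published architecture) shows how an input pattern flows through successive layers, each building on the features extracted by the last, toward a handful of category scores.

```python
# A toy layered network: voxels -> features -> features -> category scores.
import torch
import torch.nn as nn

decoder = nn.Sequential(
    nn.Linear(1000, 256),  # voxel pattern -> low-level features
    nn.ReLU(),
    nn.Linear(256, 64),    # -> more abstract features
    nn.ReLU(),
    nn.Linear(64, 4),      # -> scores for four hypothetical categories
)

fmri_pattern = torch.randn(1, 1000)  # one synthetic fMRI pattern
print(decoder(fmri_pattern))         # raw category scores (untrained)
```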

By learning to recognise patterns in a dataset, DNNs can reconstruct images from brain activity. This means the final piece is not a replica of the individual’s thought, but a reconstruction based on the closest matches from the image dataset. While the accuracy is not perfect, statistical tests indicate that it significantly exceeds chance, creating the potential to link brain activity with complex visual representations. If this method were applied to sleeping individuals, with a focus on high-level semantic decoding, it could plausibly predict detailed dream features and eventually extend to REM sleep imagery.
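One way to picture the “closest match” step: compare the features decoded from brain activity against the features of every candidate image, and return the nearest. The sketch below does this with cosine similarity over synthetic vectors; in the actual studies, the features themselves are learned by the networks.

```python
# "Closest match" reconstruction via cosine similarity (synthetic data).
import numpy as np

def closest_image(decoded, image_feats):
    """Index of the candidate image whose features best match the decoded ones."""
    a = decoded / np.linalg.norm(decoded)
    b = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    return int(np.argmax(b @ a))              # highest cosine similarity wins

rng = np.random.default_rng(1)
decoded = rng.normal(size=64)                 # features decoded from fMRI
candidates = rng.normal(size=(500, 64))       # features of 500 dataset images
print(closest_image(decoded, candidates))     # index of the best-matching image
```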

So, is it possible to record and play back our dreams?

Hypothetically, yes. However, building dream decoders with this method faces many limitations. In principle, a highly trained algorithm equipped with a massive dataset covering all conceivable perceptual processes could decode and reconstruct dreams flawlessly. Yet achieving this would require extensive exposure of individuals to scanning machines, possibly involving invasive and expensive techniques. Current knowledge and technology fall short: building an all-purpose dream decoder would demand a practically limitless database and a deeper understanding of basic brain function than we have today. Even with a highly trained decoder, its effectiveness would be individual-specific, since the intricacies of how each human brain processes information are unique, even between identical twins!

To address these challenges, a 2019 study from the same Kyoto research group, led by Shen and colleagues, evaluated the potential of a one-step, end-to-end approach to create a more direct and seamless connection between brain activity and reconstructed images. This innovative study incorporated three DNNs: one dedicated to generating reconstructed images; another for discriminating the reconstructed images from the originals; and a third for evaluating the similarity of features between the two (Figure 3). This training scheme optimised the model, significantly reducing visual information loss and enhancing its adaptability to diverse stimuli, even those unrelated to the initial training dataset (see video below). Importantly, although the model was trained only on natural images, it successfully reconstructed both natural scenes and artificial shapes, showcasing its impressive adaptability (Figures 4a and 4b). This ground-breaking study brings us one step closer to linking brain activity with reconstructed images. By reconstructing diverse stimuli, it paves the way for a more accurate mapping between our neural patterns and the potential playback of dreams.

Figure 3: Image taken from Shen et al., 2019. Schematic of the end-to-end DNN reconstruction training.

Figure 4: Images from Shen et al., 2019. Reconstruction of natural (a) and artificial (b) images.
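For readers wondering how three networks can be trained as one, the compressed PyTorch sketch below wires a generator, a discriminator, and a feature comparator into a single loss through which gradients flow end to end. Every architecture, size, and loss weight here is a placeholder, vastly simpler than the convolutional networks Shen et al. actually used.

```python
# Skeleton of an end-to-end three-network setup (placeholder sizes).
import torch
import torch.nn as nn

generator = nn.Sequential(        # fMRI features -> flattened image pixels
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 32 * 32 * 3), nn.Tanh(),
)
discriminator = nn.Sequential(    # image -> "real vs reconstructed" score
    nn.Linear(32 * 32 * 3, 128), nn.ReLU(),
    nn.Linear(128, 1),
)
comparator = nn.Sequential(       # image -> feature vector for comparison
    nn.Linear(32 * 32 * 3, 128), nn.ReLU(),
)

brain_features = torch.randn(1, 64)        # synthetic stand-in for fMRI input
fake = generator(brain_features)           # reconstructed image
real = torch.rand(1, 32 * 32 * 3) * 2 - 1  # a "real" image scaled to [-1, 1]

adv_loss = -discriminator(fake).mean()                           # fool the critic
feat_loss = ((comparator(fake) - comparator(real)) ** 2).mean()  # match features
loss = feat_loss + 0.01 * adv_loss  # combined objective (weights arbitrary)
loss.backward()                     # gradients reach the generator end to end
print(float(loss))
```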

As we navigate the intersection of artificial intelligence and dreams, ongoing research continues to probe the mysteries of cognition and consciousness. Similar techniques could be extended to REM sleep, and the implications for sleep-related disorders are vast. The technology also holds potential for understanding altered perceptions, such as the neural basis of hallucinations and other psychotic disorders. Horikawa’s study not only provides insights into the machinery of dream representation but also sparks curiosity about the future of mind-reading technologies and unexplored dimensions of human-computer interaction. However, we still have much to discover before we can fully capture dream experiences. So, next time you find yourself salsa dancing with penguins on an overcooked-bacon dancefloor, rest assured, your adventure remains your secret. No dream detective can invade your fantastical world yet. But the journey into dream decoding continues, so stay tuned.

Movie recommendation:

Inception (2010) 

Further reading: 

Horikawa, T., Tamaki, M., Miyawaki, Y. and Kamitani, Y. (2013). Neural Decoding of Visual Imagery During Sleep. Science, 340(6132), pp. 639–642. https://doi.org/10.1126/science.1234330

Shen, G., Dwivedi, K., Majima, K., Horikawa, T. and Kamitani, Y. (2019). End-to-End Deep Image Reconstruction From Human Brain Activity. Frontiers in Computational Neuroscience, 13. https://doi.org/10.3389/fncom.2019.00021