Neural Oscillations

A breakdown of the temporal structure of neural communication.

Beyond Logic Gates

Popular comparisons between brains and current (von Neumann) computers fall short. While neurons might appear transistor-like in that they either fire or don’t – like a binary logic gate – the chemical processing at synapses, its modulation via gliotransmitters and second messengers, and the indirect influences of nitric oxide and hormones in the bloodstream all paint a more complicated picture of a highly parallel and fuzzy analog architecture. In “A Primer on the Brain” I went into some detail about all of them. In this article I want to focus on another way neurons encode information, one that goes a step beyond simple binary logic gates and towards a multi-valued logic: the firing rate of a neuron can itself transmit information through its timing. Because of this, neurons wire up into networks firing at characteristic rates.

The study of the temporal structure of the brain began in 1929, when Hans Berger attempted to find a scientific explanation for the phenomenon of telepathy (belief in which was not uncommon at the time). He realised that he could record the electrical activity of the brain from the scalp via electroencephalography (EEG), and thus found a noninvasive way to monitor the collective pulsing of large-scale neural networks.

Neuronal oscillations measured with EEG reflect the degree of synchronisation of the activity of underlying neural ensembles (groups of thousands of neurons embedded in a network, firing in unison). There are a few reasons for this emergent rhythmicity of firing. On the level of a single neuron, the refractory period (a time window after a neuron fires, in which it is unable to fire again) brings about rhythmic firing. This, however, does not explain why the neural ensemble fires rhythmically as well, rather than excitatory neurons producing run-away neural excitation or inhibitory neurons causing run-away neural inhibition. The reason this doesn’t happen is the coupling between excitatory and inhibitory neurons. Once excitatory neurons in the network fire, they activate inhibitory neurons, which prevents infinite positive feedback loops and run-away excitation. The coupling also works in the other direction: as the inhibition decays, excitatory neurons are released to fire again, bringing about the next pulse of neural activity and preventing an infinite negative feedback loop of run-away inhibition. In this way, the coupling between excitatory and inhibitory neurons plays an important part in balancing the excitability of neural networks. When this balance fails, serious neuropathologies like epilepsy or Alzheimer’s can result.

When measured as electrical currents from the scalp, neural ensembles are characterised by three properties: frequency, power, and phase. Frequency refers to the oscillator’s speed (cycles per second), described in hertz (Hz). Power is the squared amplitude of the oscillation within a frequency band (for EEG, typically given in μV²). Phase is the oscillator’s position in its cycle at a given time point, in radians or degrees.
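
To make these three quantities concrete, here is a small sketch (my illustration, not from the article) that recovers the frequency, power, and phase of a simulated 10 Hz alpha oscillation from its Fourier spectrum; the sampling rate and signal parameters are arbitrary assumptions.

```python
import numpy as np

fs = 250.0                       # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)    # 2 s of data (500 samples)
# a 10 Hz "alpha" oscillation, amplitude 3, phase offset 0.5 rad
signal = 3.0 * np.sin(2 * np.pi * 10 * t + 0.5)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

peak = int(np.argmax(np.abs(spectrum[1:]))) + 1   # strongest bin, skipping DC
peak_freq = freqs[peak]                           # frequency: 10 Hz
amplitude = 2 * np.abs(spectrum[peak]) / len(signal)
power = amplitude ** 2                            # power: squared amplitude = 9
phase = np.angle(spectrum[peak])                  # phase of the bin; for a sine
                                                  # input this equals its phase
                                                  # offset minus pi/2
```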

The EEG signal is generally grouped into five main frequency bands: delta, theta, alpha, beta and gamma. Following a theoretical extrapolation by Pletzer, Kerschbaum and Klimesch (2010), in-vitro measurements by Roopun et al. (2008a,b) and electrocorticogram measurements by Groppe et al. (2013), I will define the frequency bands as delta 1 (1-2 Hz), delta 2 (2-3 Hz), theta 1 (3-5 Hz), theta 2 (5-8 Hz), alpha (8-12 Hz), beta 1 (12-20 Hz), beta 2 (20-30 Hz), gamma 1 (30-50 Hz), and gamma 2 (50-80 Hz).

Within those bands, most people tend towards a center frequency which is their preferred (or dominant) mode of operation. Across bands, these center frequencies (manifested as distinct peaks in the signal) are scaled by the factor phi, which generates a fractal-like harmonic chain. Common center frequencies are approximately 1.5 Hz (delta 1), 2.5 Hz (delta 2), 4 Hz (theta 1), 6.5 Hz (theta 2), 10 Hz (alpha), 16 Hz (beta 1), 25 Hz (beta 2), 40 Hz (gamma 1), and 65 Hz (gamma 2), with their power standing in inverse relationship to their frequency.
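
The phi-scaled harmonic chain described above can be sketched in a few lines, anchoring it (as an assumption) at the 10 Hz alpha peak; the band names follow the definitions given earlier.

```python
phi = (1 + 5 ** 0.5) / 2          # the golden ratio, ~1.618

alpha = 10.0                      # anchor the chain at the alpha peak
chain = {
    "theta1": alpha / phi ** 2,   # ~3.8 Hz  (article: ~4 Hz)
    "theta2": alpha / phi,        # ~6.2 Hz  (article: ~6.5 Hz)
    "alpha":  alpha,              # 10 Hz
    "beta1":  alpha * phi,        # ~16.2 Hz (article: ~16 Hz)
    "beta2":  alpha * phi ** 2,   # ~26.2 Hz (article: ~25 Hz)
    "gamma1": alpha * phi ** 3,   # ~42.4 Hz (article: ~40 Hz)
}
```

Each adjacent pair of center frequencies stands in the ratio phi, which is what makes the chain "fractal-like": zooming by a factor of phi maps the chain onto itself.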

It is thought that this relationship allows for cross-frequency phase synchronisation (CFS) – a temporary coupling of phase which allows information to transfer between bands. Specific frequency bands cannot plausibly map to discrete cognitive and perceptual functions, as the number of identified cognitive and perceptual mechanisms far outnumbers the number of identified frequency bands. Instead, it is massively parallel communication between different (groups of) neurons, enabled by distinct firing rates, that gives rise to complex states. More precisely, CFS allows for multiplexing, where multiple signals are incorporated into a single functional system. From this perspective, it makes more sense to think of the individual amplitude, firing rate, and phase of specific frequency bands as constituting a state-space of a local network (a discrete part of the functional system in the brain) within a global network (a larger part of the brain or the brain as a whole). Somewhat analogous to how just 26 letters can produce all of human literature, a limited number of distinct frequency bands can, together with other forms of (non-)neural communication in the brain, encapsulate all the complexities of the human mind – it is the contextual combination of frequency bands in specific parts of the brain that matters.

To take full advantage of multiplexing, the brain evolved to keep the ratio of adjacent frequency components roughly constant at phi (the golden mean). This is simply the most efficient way of maximising the number of frequency bands while minimising their temporal interference with each other: for CFS to work, most of the time there should not be any synchronisation between frequency bands. After all, if neural systems were coupled at all times, this would effectively reduce the state of the brain to a binary oscillator. This is why the brain scales its frequency bands based on an irrational number rather than an integer. If the relationship were exactly 2, for example, all frequencies would be multiples of each other. This would make the brain immensely inefficient, as it’d be impossible to tell whether the action potential of a neuron occurs at the peak of 10, 20, or 40 Hz – they’d all line up perfectly. Rather, CFS is built on top of a meta-stable chaotic pattern that allows for temporary coupling and hence information exchange between functional networks.
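
A quick way to see the interference argument in action (a toy illustration of mine, with the tolerance and duration chosen arbitrarily): count how often the peaks of two oscillators coincide when their frequency ratio is an integer versus phi.

```python
import numpy as np

phi = (1 + 5 ** 0.5) / 2

def coincidences(f1, f2, duration=10.0, tol=0.002):
    """Count slow-oscillator peaks that land within `tol` seconds of a fast peak."""
    peaks1 = np.arange(0, duration, 1 / f1)   # peak times of the slow oscillator
    peaks2 = np.arange(0, duration, 1 / f2)   # peak times of the fast oscillator
    # distance from each slow peak to its nearest fast peak
    nearest = np.min(np.abs(peaks1[:, None] - peaks2[None, :]), axis=1)
    return int(np.sum(nearest < tol))

# integer ratio (10 vs 20 Hz): every single slow peak coincides with a fast one
integer_hits = coincidences(10.0, 20.0)
# phi ratio (10 vs ~16.18 Hz): coincidences are rare and never settle into
# a repeating pattern
phi_hits = coincidences(10.0, 10.0 * phi)
```

With the integer ratio all 100 slow peaks line up; with the phi ratio only a handful do, which is exactly the property that keeps the bands distinguishable most of the time.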

In line with empirical findings, Pletzer, Kerschbaum and Klimesch (2010) have developed a proof that the need for desynchronisation of brain activity (a prerequisite for the functional advantage of multiplexing) makes it necessary for the individual frequency bands to be related to each other based on the golden mean ratio. 

Simplified, the proof goes as follows:

  1. Only irrational numbers (all numbers which cannot be expressed as a ratio of two whole numbers) would ensure that different frequencies do not meet routinely.
  2. In the resting state of the brain, these coincidental meetings of frequencies should be kept to a minimum to maximise the number of functional parallel frequency bands and hence the number of states that can be encoded at once.
  3. Among irrational numbers, phi is the hardest to approximate by a ratio of two whole numbers (its continued-fraction expansion consists entirely of 1s) and is hence, in this sense, the most irrational number.
  4. Given the above, for resting-state brains, center frequencies should be scaled with phi.

With this, they also argue that coupling between frequencies m and n only becomes possible when their ratio deviates from the golden ratio (m:n ≠ 1.618). In other words, when CFS occurs, it is a consequence of the simultaneous firing of neurons oscillating at different frequencies, so that their peaks align momentarily. This can be achieved with a temporary down-regulation of oscillatory amplitudes below a firing threshold and/or a shift in frequencies (oscillatory speed). Hence, although these meta-stable couplings might appear as noise in the EEG signal, they are a direct reaction to input and contextual demands and serve a functional purpose.

A System near Criticality

The interplay of neural ensembles oscillating at different frequencies leads to a complex, nonlinear system. Individual oscillators attract and repel each other as a consequence of their perpetual engagement and disengagement, with slower frequency perturbations rippling through all upper scales. The background activity that results from transient coupling-related shifts in oscillatory speed and amplitude appears as noise but has a functional purpose. It can be isolated for closer analysis by simply removing the oscillatory peaks of the center frequencies from the EEG power spectrum. What remains is a hyperbolic shape commonly referred to as 1/f or pink noise.

Pink noise is situated right between statistically generated flat white noise (in which individual states of the system have no correlation in time) and 1/f^2 brown noise (akin to a random walk, whose increments are uncorrelated but whose states are correlated over time). Specifically, the 1/f spectrum consists of a broadband pattern described by the power law function P(f) = Af^(-ple), with f being the frequency, ple the power law exponent, and A the initial amplitude. The smaller the power law exponent, the closer the 1/f spectrum is to white noise. Conversely, the higher the power law exponent, the more it approximates brown noise.
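
The power law P(f) = Af^(-ple) can be illustrated with a short sketch (mine, not from the cited papers) that recovers the exponent from a synthetic 1/f spectrum by fitting a straight line in log-log space – the same basic idea behind spectral-exponent analyses of real EEG.

```python
import numpy as np

# synthetic spectrum P(f) = A * f**(-ple) over the 1-80 Hz range used above;
# A and ple are assumed ground-truth values
A, ple = 50.0, 1.0
freqs = np.arange(1.0, 80.0, 0.5)
psd = A * freqs ** (-ple)

# a power law is a straight line in log-log coordinates:
# log P = log A - ple * log f
slope, intercept = np.polyfit(np.log10(freqs), np.log10(psd), 1)
fitted_ple = -slope           # recovers ple
fitted_A = 10 ** intercept    # recovers A
```

On real data the peaks at the center frequencies would first have to be removed (as described above) before the broadband slope is fit.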

As should be expected given the likely origin of 1/f noise in meta-stable cross-frequency couplings, variations in the power law exponent should also be predictive of overall brain function. Indeed, a higher power law exponent is linked to increased cognitive processing speed (Ouyang et al., 2020). It is also predictive of degrees of consciousness – even under the influence of sedatives (Colombo et al., 2019). This makes 1/f spectrum analysis of potential relevance for a number of applications, from assessing general cognitive ability to gauging the effectiveness of anaesthetics or the degree of awareness in patients with disorders of consciousness.

With this approximation of brown noise (and hence a random walk dynamic), the brain puts itself right at the edge of order and chaos through a controlled introduction of randomness. This is a result of phase synchronisation stability being a function of finding a balance between low and high coupling strength. Put simply, physical systems can be organised according to the coupling strength of their individual elements, ranging from low to high. The middle ground between the two is, according to criticality theory, the critical region that allows for complex systems.

Crudely, imagine the highest degree of coupling as every neuron being synchronised with every other neuron. This brings about a binary on/off state of the brain, which can encode only two states. The lowest degree of coupling, on the other hand, would consist of no neuron-synchronisation at all, with individual neurons simply firing randomly, resulting in a Gaussian noise activity pattern which again encodes very little meaningful information. At a medium level of coupling, neurons fire in meta-stable, complex, coordinated rhythms and can encode the maximum number of states at once. In sum, there’s a balance to be found between oscillatory rigidity and neural firing without temporal structure. Consequently, coupled systems have to be dissipative, with the resulting background activity approximating – but not reaching – a Brownian motion of activity patterns, while otherwise following strong deterministic rules.
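
The low/medium/high coupling argument is commonly illustrated with the Kuramoto model. The following toy sketch (my illustration, with arbitrarily chosen parameters, not a claim about the article’s own modelling) shows the order parameter r jumping from near 0 (incoherence) at weak coupling to near 1 (full synchrony) at strong coupling.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
# natural frequencies spread around a 10 Hz "alpha" rhythm (in rad/s)
omega = rng.normal(2 * np.pi * 10, 2 * np.pi, N)
theta0 = rng.uniform(0, 2 * np.pi, N)   # random initial phases

def order_parameter(K, dt=0.001, steps=5000):
    """Euler-integrate the mean-field Kuramoto model at coupling strength K
    and return r = |mean(exp(i*theta))| after 5 simulated seconds."""
    theta = theta0.copy()
    for _ in range(steps):
        z = np.mean(np.exp(1j * theta))           # complex order parameter
        # dtheta_i/dt = omega_i + K * r * sin(psi - theta_i)
        theta = theta + dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return float(np.abs(np.mean(np.exp(1j * theta))))

r_low = order_parameter(K=0.5)    # weak coupling: phases stay incoherent
r_high = order_parameter(K=50.0)  # strong coupling: phases lock into one rhythm
```

The interesting regime for the argument above is neither extreme but the region around the critical coupling, where partial, transient synchrony appears and disappears.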

At this point, the brain operates at criticality – between order and disorder – which affords both maximal flexibility and stability at the same time (Pastukhov et al., 2013; Hellyer et al., 2014). 

Principles of Connectivity

Although the mean frequency within a band does vary between people and also changes as a function of age or the task at hand, all brains – even nonhuman ones – generate similar peak frequencies and oscillatory bands. Indeed, neural oscillations are remarkably similar across species as different as cats, bees, bats and dogs – and even spontaneously emerge in cerebral organoids grown from stem cells in vitro (Trujillo et al., 2019; Kay, 2015). This points to the oscillatory architecture of the brain being an evolutionarily stable design, important enough to be preserved throughout time and across species. It is so stable that once we have measured one peak in the EEG signal, we can use the golden mean rule to roughly predict all other peaks in the signal. The extent to which the center frequencies can vary between healthy individuals is itself a function of frequency. Specifically, slower oscillations do not allow the same variance in frequency (although their loop time is longer, so the variance is bigger in ms). As frequencies speed up, the loop time variance decreases, but the frequency variance increases. This gives rise to an increasing frequency spectrum in each frequency band, creating a Fibonacci sequence, with the width of frequency band n being approximately the sum of the widths of the two preceding bands (width(n) = width(n-1) + width(n-2)).

Aging and EEG
As a consequence of aging, the overall amplitude of lower frequencies diminishes, while faster center frequencies speed up. This follows from the continual pruning of neural connections as the brain ages and the resulting decrease in grey matter.

The above suggests that there is no typical frequency domain for the brain, i.e. that the frequency architecture is not a result of overall oscillatory bias in the structure, but an outcome solely of band/state maximisation. If this is true, we can assume that:

  1. Each network we identified (from delta to gamma) has approximately the same percentage of connections per frequency.
  2. Conduction latency is approximately constant irrespective of fiber length (myelin thickness is positively correlated with fiber length, so that is not an unrealistic assumption to make). 
  3. Loop time in each network is equal to the period of its center frequency (667 ms for delta 1, 400 ms for delta 2, 250 ms for theta 1, 154 ms for theta 2, 100 ms for alpha, 62.5 ms for beta 1, 40 ms for beta 2, 25 ms for gamma 1 and 15.4 ms for gamma 2).
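
Assumption 3 is just the statement T = 1/f applied to the center frequencies listed earlier; a minimal check:

```python
# center frequencies (Hz) as defined earlier in the article
centers = {"delta1": 1.5, "delta2": 2.5, "theta1": 4.0, "theta2": 6.5,
           "alpha": 10.0, "beta1": 16.0, "beta2": 25.0,
           "gamma1": 40.0, "gamma2": 65.0}

# loop time = period of the center frequency, converted to milliseconds
loop_times_ms = {band: 1000.0 / f for band, f in centers.items()}
# e.g. delta1 -> ~667 ms, alpha -> 100 ms, gamma2 -> ~15.4 ms
```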

From this, we can make a number of deductions which are in line with empirical findings (Hagmann et al., 2008):

  1. As each band widens in scope (from delta at 1-3 Hz to gamma at 30-80 Hz) as center frequencies accelerate, assumption 1 implies that the number of fibers across networks follows a unimodal, right-tailed (inverted-U-shaped) distribution, with a progressively greater number of fibers in networks oscillating at faster frequencies. 
  2. Given assumption 2, the distribution of myelinated fibres within each network should be right-tailed as well. 
  3. Following assumption 2, the frequency profile of a network is a consequence of the preferred length of its connections, with, given assumption 3, faster-oscillating networks being increasingly local and slower-oscillating networks increasingly global. 

Hence, local networks are more abundant than global networks, which means, following deduction 1, that global networks have, proportionally speaking, a greater number of neuron connections than local networks. We can also conclude, given assumptions 2 and 3, that the more global a connection, the slower the frequency at which it oscillates. Importantly, this leads to the conclusion that the distribution of neural connections is not random, but follows directly from maximising the number of functional frequency bands, with meta-stable coherence (signal similarity) in oscillations allowing for functional cooperation of distinct – and regionally separated – brain regions.

In network-science terms, following the deductions above, brain networks exhibit small-world connectivity: densely coupled local clusters of neurons with short path lengths, linked by a smaller number of long-range (more distributed) connections (Sporns and Honey, 2006). With this, the brain strikes a balance between local and global connectivity, given the wiring costs. As this maximises information processing and transmission capabilities, this wiring strategy is largely preserved across frequency bands.

Brain state-space attractor plane
We can think of the oscillatory architecture of the brain as a multidimensional energy state-space. To simplify, imagine a two-dimensional plane. On it, we place a ball representing the state of an oscillator. As the ball moves on the plane, the oscillator it represents changes its frequency, phase, or amplitude. The plane features basins which give stability to the oscillator – depending on their depth – and repellers in the form of hills, which, as the oscillator approaches them, increasingly perturb the system. Aside from its endogenous rhythms, the oscillator is driven by random energy (noise or larger-scale system dynamics). Just like our two-dimensional plane, the brain is multistable, with a neural oscillator settling (at least temporarily) into a number of potential state-spaces depending on context-specific demands reflected in system dynamics. Some state-spaces are less optimal than others, though. In other words, a functionally pathological state-space might be stable enough that the oscillator gets stuck. On the flipside, this means that a healthy brain, which, in this framework, we assume to be maximally stable, is hard to disrupt. This is something I observed during my own research as well: trying to modulate the neural oscillations of healthy participants resulted in immediate homeostatic correction, in an attempt to bring the oscillator back to more stable (normal) baseline levels. I speculate that the remarkably consistent center frequencies we observe in EEG profiles in healthy populations (Fingelkurts et al., 2006) could be something akin to a global minimum (the maximally deep basin in the two-dimensional brain-state attractor plane). So, while state-space movement at the margins might very well follow Hebbian rules, the further the oscillator moves from its global minimum, the more likely it is for homeostatic mechanisms to supplant them and ensure stability of the system.
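
The attractor-plane picture can be reduced to a one-dimensional toy model (my illustration, not the article’s): an oscillator state rolling downhill on an energy landscape with a deep “healthy” basin and a shallower, potentially pathological one. Where the state ends up depends on where it starts – and a deeper basin takes a correspondingly bigger perturbation to escape.

```python
# energy landscape V(x) = (x^2 - 1)^2 + 0.3*x: a double well with a deep
# minimum near x = -1 and a shallower one near x = +1 (arbitrary toy choice)
def grad_V(x):
    return 4 * x * (x ** 2 - 1) + 0.3

def settle(x0, dt=0.01, steps=5000):
    """Overdamped (gradient-descent) dynamics: the state rolls downhill
    until it comes to rest in whichever basin contains x0."""
    x = x0
    for _ in range(steps):
        x -= dt * grad_V(x)
    return x

deep = settle(-0.5)    # starts left of the hill: settles in the deep basin
shallow = settle(0.5)  # starts right of the hill: settles in the shallow basin
```

Adding a noise term to `settle` would let the state occasionally hop between basins, which is the analogue of the random driving energy mentioned above.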

Hierarchies of Cognition

The trend of the speed of oscillation being indicative of path length holds on a global level between distinct brain regions, as well as within a single brain region, depending on the level of cortical hierarchy. Each step in the processing hierarchy represents a progression from low-level input to percepts, to simulators, and then to concepts. To make this more concrete, let’s look at the example of vision, as humans are so reliant on this sense. Indeed, sight is so important to humans that the two brain areas most crucial for it, the parietal and nonprimary visual cortex, are enlarged by a factor of 2.5 compared to the same areas in a chimpanzee brain.

The way our visual system builds up a representation of the world is, in part, not unlike how an artificial neural network makes sense of an image (but it’s also very different, as I’ll elaborate on shortly). First, the primary visual cortex (forming the lowest part of the hierarchy, henceforth referred to as V1) features a retinotopic map which keeps track of which part of the retina is stimulated by light. Subsequent nonprimary areas (V2-3) then further process the individual basic elements of perception – boundaries and edges. This means that lower-order areas (V1-3) are active for specific views of objects, as the boundaries and edges of an object differ based on its orientation to the perceiver. Higher-order areas (V4-6), on the other hand, integrate basic elements of perception into percepts (or objects of perception). Consequently, neurons in V4-6 respond to the object itself, independently of its rotation. Individual variations in basic elements of visual perception change more rapidly than the objects in our visual field. Imagine, for example, a person entering the room. While they keep moving, the boundaries and edges that make up the person’s appearance constantly change, yet they are still perceived as one person. This means that V1-3 need a higher refresh rate than V4-6. Indeed, lower-order layers of the cortical hierarchy which deal with lower-level input oscillate at a faster rate, as also indicated by their shorter path length. Once neural connections extend into higher levels of the cortical hierarchy, oscillations slow down. This makes sense not just as a function of path length, but also as a consequence of these areas having integrated information from lower-order areas into more stable, view-invariant percepts.

The formation of percepts is the first step in forming a higher-order mental representation based on regularities in the world. From this, the brain starts to build up a model of the world surrounding it, as well as of the body it inhabits. Specifically, it forms a percept of itself, of its body, of its bodily state, and then models the world based on the way the body can act upon it. In its most complete sense, this leads the brain not to perceive the world, but to simulate it instead from the vantage point of action. In this way, perception, cognition, and the phenomenological effects of being (themselves a consequence of the mental simulation the brain engages in) all serve the coordination of action. In other words, action is primary: its outcomes are what shaped the development of the brain to begin with (via evolutionary pressures).

Once a representational model is formed, the brain can make predictions about future states of the world, only actively processing discrepancies between prediction and sensory input (Hohwy, 2013). Practically, this means that layers of cortical hierarchy, such as the hierarchy of the visual cortex described above, do not follow a linear bottom-up mechanism in which the scene that presents itself to the observer is built up from information channeled through a feedforward network that incrementally enriches its detail. Rather, neural communication is dynamic from the start, with higher-order areas influencing the activity of lower-order networks through top-down predictions (or priors). This makes human perception very different from how an artificial neural network processes an image.

Concepts – and indeed thought – form neural representations with the help of the semantic system (specifically the middle temporal and ventral prefrontal cortex), which deals in stored semantic knowledge, and the perceptual system (especially the parietal cortex), which deals with relational reasoning (the distance and closeness of things, both conceptually and physically). Specifically, more abstract concepts rely mostly on the semantic system, whereas more concrete concepts are represented primarily in the perceptual system (Wang et al., 2010). With this said, thought is not based on speech proper, as there is no activation in the language-generating (Broca’s area) or language-processing (Wernicke’s area) parts of the brain (Monti et al., 2009).

From the formation of simulations of specifics (percepts embedded in a specific context), the next step is abstraction of generalities of various contexts and representation of a scene on a conceptual level. Put differently, a number of individual scenes (conceptual microstates) are compressed into a concept that describes them all (conceptual macrostate). Indeed, at the highest order of organisation the brain treats conceptual (or imagined) and physical (or actual) representations of things or actions comparably, with context- or task-relevant areas of the brain responding similarly – but not identically – to imagined, seen, or performed actions, for example (Macuga and Frey, 2012).

On a practical level, the move from low level inputs to concepts reduces information entropy and compresses data, which in turn reduces the required computational resources to process them. In information theory, information entropy tells us how much information there is in an event. In general, the more uncertain or random the event is, the more entropy it will contain. By reducing data load and integrating it in a model, the brain reduces randomness, and hence entropy.

To again take the visual system as an example, the retina receives data at around 10 Gb/s. As the brain already has a model of the light points hitting the retina, it only needs to process points of light which diverge from its prediction. This reduces the data rate at the optic nerve to about 1 Mb/s. At the level of perception of boundaries and edges in the fourth layer of V1, the data stream has been compressed further, to roughly 1 kb/s, with only an estimated <100 b/s making up the content of conscious awareness when all that needs to be processed are prediction errors in an otherwise complete conceptual model of the world (Raichle, 2010).

Put differently, this means that most information about the world does not reach consciousness because it does not need to. Conscious awareness is only necessary as a fallback when predictions fail and automatic processes are ill-equipped to deal with sufficiently novel situations. What might be called “free will” functions as a debugging mode for the mind. This mechanism is at the core of the move from conscious incompetence to unconscious competence; the latter supplants conscious awareness with a flow state in which the body moves in ways far too complex for conscious control.

Free Will
Some neuroscientists are fond of pointing out that, in experiments such as the famous one by Libet et al. (1983), where participants are asked to press a button whenever they feel like it, neural activity indicating their movement precedes their conscious awareness of having made the decision to move. They argue that this speaks against the existence of free will. I don’t think the broader literature supports this claim. Specifically, more recent studies (e.g. Schmidt et al., 2016) attempting to replicate the experiment by Libet et al. put forward an alternative explanation of the data, in which the readiness potential does not constitute a preparatory signal, but rather reflects endogenous fluctuations of the cortical potential of motor areas which make voluntary (free-willed) movement more likely. Put differently, the initiation of movement was conscious, but its timing a consequence of ongoing unconscious motor cortex oscillations. Having said this, not all action (mental or otherwise) is under voluntary, conscious control. Rather, there is a threshold for any stimulus that decides whether it enters conscious awareness. I argue that free will is an emergent phenomenon of mental simulation, which allows for the debugging of responses to novel situations. While the lower-level physical processes which give rise to consciousness are themselves unconscious, I argue consciousness can affect its constituent parts via top-down causation, constituting a form of free will – something I will expand on in a future article.

Indeed, the complexity of automatic processes, such as the functions subserved by the autonomic nervous system, makes conscious control of them highly impractical. Evolution has, over time, imbued us with highly evolved sensory and motor brain functions, which make the difficult seem easy. Conversely, conscious processes such as abstract reasoning are simple in comparison – they just seem impressive to us because of how evolutionarily novel they are. This is known to machine learning researchers as Moravec’s paradox: it is, surprisingly, the low-level mechanisms that are more computationally expensive than the high-level ones – algorithms that mimic abstract reasoning require less powerful hardware than those that allow robots to perform even basic motor interactions with their environment.

Reducing the world to a conceptual space and only actively processing prediction errors also has the benefit of allowing the brain to deal with high-complexity situations and be ready for events before they actually happen. The readiness potential of neurons in the motor cortex which, in highly predictable situations – perhaps as a consequence of having attained unconscious mastery – precedes any conscious decision to move is an example of this (Libet et al., 1983).

While this highlights how mental models guide unconscious action, they also enable conscious ones. Specifically, the ability to generate mental models allows us to engage in counterfactual thinking and reflection on thinking (meta-cognition), which some have suggested to be at the root of phenomenological consciousness. To paraphrase Robert Anton Wilson: “Who is the master that makes the grass green?”.

In Short

In sum, the temporal structure of the brain seems to be governed by a few simple rules that result in striking complexity. In order to maximise the number of states it can represent in parallel, the brain has to scale its individual frequency bands by a factor approximating phi. In consequence, small-world networks result whose connections are meta-stable. This in turn gives rise to broadband background activity keeping the brain near criticality, which maximises system complexity. As shown, all of this can be derived from first principles: maximise the number of frequency bands, minimise their interference, and avoid both oscillatory rigidity and a loss of temporal organisation. This then allows for complex processes such as the formation of percepts and ultimately simulators, which aid data compression, with the brain acting as an action-oriented prediction machine that regulates itself through endogenous feedback loops, employing cognition only when automatic modes fail.

So that, in a nutshell, is my perspective on the brain at this time. A lot of the ideas expressed here are controversial and speculative. Our understanding of the brain is still in its infancy, and based on the history of cognitive science in general and neuroscience specifically I wouldn’t be surprised to see some ideas overturned and revised in the near future. Because of this, I want to remind any reader that “the map is not the territory”, as Korzybski used to say. That is, the framework I propose here is just that – a, to my knowledge, empirically adequate rough model (or map) of the temporal structure of the brain and not a claim about its true mechanisms. As it stands, I think a lot of the ideas are of interest to people who want to either take inspiration from contemporary models of the brain to build better engineering tools (in machine learning), or who want to build diagnostic or neuromodulatory technology to assess and treat neuropathologies, for which even an incomplete or partially wrong understanding of the brain might prove “good enough” to bring about a betterment of conditions in patients.


Berger, H., 1929. Über das Elektroenkephalogramm des Menschen. Archiv für Psychiatrie und Nervenkrankheiten, 87(1), pp.527-570.

Colombo, M.A., Napolitani, M., Boly, M., Gosseries, O., Casarotto, S., Rosanova, M., Brichant, J.F., Boveroux, P., Rex, S., Laureys, S. and Massimini, M., 2019. The spectral exponent of the resting EEG indexes the presence of consciousness during unresponsiveness induced by propofol, xenon, and ketamine. NeuroImage, 189, pp.631-644.

Groppe, D.M., Bickel, S., Keller, C.J., Jain, S.K., Hwang, S.T., Harden, C. and Mehta, A.D., 2013. Dominant frequencies of resting human brain activity as measured by the electrocorticogram. NeuroImage, 79, pp.223-233.

Fingelkurts, A.A., Fingelkurts, A.A., Ermolaev, V.A. and Kaplan, A.Y., 2006. Stability, reliability and consistency of the compositions of brain oscillations. International Journal of Psychophysiology, 59(2), pp.116-126.

Hagmann, P., Cammoun, L., Gigandet, X., Meuli, R., Honey, C.J., Wedeen, V.J. et al., 2008. Mapping the structural core of human cerebral cortex. PLoS Biology, 6, e159. doi: 10.1371/journal.pbio.0060159

Hellyer, P.J., Shanahan, M., Scott, G., Wise, R.J., Sharp, D.J. and Leech, R., 2014. The control of global brain dynamics: opposing actions of frontoparietal control and default mode networks on attention. Journal of Neuroscience, 34(2), pp.451-461.

Hohwy, J., 2013. The predictive mind. Oxford University Press.

Kay, L.M., 2015. Olfactory system oscillations across phyla. Current Opinion in Neurobiology, 31, pp.141-147.

Libet, B., Gleason, C. A., Wright, E. W., Pearl, D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential): The unconscious initiation of a freely voluntary act. Brain, 106, 623–642.

Macuga, K.L. and Frey, S.H., 2012. Neural representations involved in observed, imagined, and imitated actions are dissociable and hierarchically organized. NeuroImage, 59(3), pp.2798-2807.

Ouyang, G., Hildebrandt, A., Schmitz, F. and Herrmann, C.S., 2020. Decomposing alpha and 1/f brain activities reveals their differential associations with cognitive processing speed. NeuroImage, 205, p.116304.

Pastukhov, A., García-Rodríguez, P.E., Haenicke, J., Guillamon, A., Deco, G. and Braun, J., 2013. Multi-stable perception balances stability and sensitivity. Frontiers in Computational Neuroscience, 7, p.17.

Pletzer, B., Kerschbaum, H. and Klimesch, W., 2010. When frequencies never synchronize: the golden mean and the resting EEG. Brain Research, 1335, pp.91-102.

Raichle, M.E., 2010. Two views of brain function. Trends in Cognitive Sciences, 14(4), pp.180-190.

Roopun, A.K., Kramer, M.A., Carracedo, L.M., Kaiser, M., Davies, C.H., Traub, R.D., Kopell, N.J. and Whittington, M.A., 2008a. Temporal interactions between cortical rhythms. Frontiers in Neuroscience, 2, pp.145-154.

Roopun, A.K., Kramer, M.A., Carracedo, L.M., Kaiser, M., Davies, C.H., Traub, R.D., Kopell, N.J. and Whittington, M.A., 2008b. Period concatenation underlies interactions between gamma and beta rhythms in neocortex. Frontiers in Cellular Neuroscience, 2, p.1.

Schmidt, S., Jo, H.G., Wittmann, M. and Hinterberger, T., 2016. ‘Catching the waves’ – slow cortical potentials as moderator of voluntary action. Neuroscience & Biobehavioral Reviews, 68, pp.639-650.

Sporns, O. and Honey, C.J., 2006. Small worlds inside big brains. Proceedings of the National Academy of Sciences, 103(51), pp.19219-19220.

Trujillo, C.A., Gao, R., Negraes, P.D., Gu, J., Buchanan, J., Preissl, S., Wang, A., Wu, W., Haddad, G.G., Chaim, I.A. and Domissy, A., 2019. Complex oscillatory waves emerging from cortical organoids model early human brain network development. Cell Stem Cell.