Project 4
Representation and processing of constructions in the brain
(Third Party Funds Group – Sub project)
Abstract:
The main objective of this project is to study the representational and computational principles of construction storage and processing, and their structural and functional implementation in the brain. Recent experimental findings in neuroscience provide converging evidence for the neural and psychological plausibility of network-like representations of hierarchical linguistic structures such as constructions (Herbst 2018a, 2020) on abstract cognitive maps. Cognitive maps are mental representations that enable an organism to acquire, code, store, recall, and decode information about the relative locations and features of objects (Tolman 1948, O’Keefe & Nadel 1978).
Electrophysiological research in rodents suggests that the hippocampus (O’Keefe & Nadel 1978) and the entorhinal cortex (Moser, Moser & McNaughton 2017) constitute the neurological basis of cognitive maps. There, highly specialised neurons, including place cells (O’Keefe & Dostrovsky 1971) and grid cells (Hafting et al. 2005), support map-like spatial codes and thus enable spatial navigation (Moser, Kropff & Moser 2008). Furthermore, human fMRI studies during virtual navigation tasks have shown that the hippocampal and entorhinal spatial codes, together with areas in the frontal lobe, enable route planning during navigation (Spiers & Maguire 2006, Spiers & Gilbert 2015, Hartley et al. 2003, Balaguer et al. 2016) based on distance-preserving representations (Morgan et al. 2011). Recent human fMRI studies even suggest that these map-like representations extend beyond physical space to more abstract social and conceptual spaces (Epstein et al. 2017), thereby contributing broadly to other cognitive domains (Schiller et al. 2015) and enabling navigation and route planning in arbitrary abstract cognitive spaces (Bellmund et al. 2018). Besides contributing to spatial navigation, the hippocampus also plays a crucial role in episodic and declarative memory (Tulving & Markowitsch 1998), receiving highly processed information via direct and indirect pathways from a large number of multi-modal areas of the cerebral cortex (Battaglia et al. 2011), including language-related areas (Hickok & Poeppel 2004).
Finally, some findings indicate that the hippocampus even contributes to the coding of narrative context (Milivojevic et al. 2016). Together, these findings suggest a novel theoretical framework for language representation and processing, in which hippocampal coding would enable flexible representational mapping of linguistic structures across a wide range of scales and hierarchical levels, from phonemes, single words and collocations (Evert 2008), through valency patterns (Herbst & Uhrig 2019, Herbst et al. 2004), to idioms and abstract argument structure constructions (Herbst 2018, Herbst & Uhrig 2019).
In particular, the project will explore the hypothesis that linguistic constructions are represented as multi-scale network-like maps, and that constructions are combined in the process of formulating an utterance by navigating on these maps, i.e. a given utterance would correspond to a particular route. Furthermore, overlap and blending of constructions (Herbst 2018b) would correspond to, and be guided by, switching between different levels of the multi-scale maps. In contrast to the brain, computational models have the decisive advantage that they are fully accessible, i.e. the temporal evolution of all model variables can be read out for further analysis at any time. In addition, we can perform arbitrary experimental manipulations on these models. Therefore, starting from contemporary machine learning approaches to the representation and processing of natural language (e.g. Devlin et al. 2019), computational models will be constructed that can build and navigate hierarchical, map-like, multi-scale representations of language, and that are thereby capable of representing natural language input, transforming these representations according to pre-defined tasks (e.g. re-phrasing), and producing modified natural language output.

Using evolutionary optimization, the biological fidelity of the initially constructed models will then be increased iteratively through variation and selection. In order to select those candidate models that best fit measured brain activity, internal model activity and brain activity will be compared using an advanced methodology comprising multivariate statistics (Kriegeskorte, Mur & Bandettini 2008, Krauss et al. 2018, Schilling et al. 2021) and Bayesian model selection (Mark et al. 2018). After each selection step, new “child models” will be created by adjusting their architectures and parameters using machine learning approaches such as neural architecture search (Elsken, Metzen & Hutter 2018, Wistuba, Rawat & Pedapati 2019, Liu et al. 2018, Zoph & Le 2016, Chen et al. 2019) and genetic algorithms (Gerum et al. 2020).

So far, neuroscientific studies of language have mostly employed over-simplified experimental paradigms, e.g. by focussing on single-word processing or sentences in isolation. Very recently, the advantages of using natural, connected language such as narratives for neuroimaging studies have been discussed (Willems, Nastase & Milivojevic 2020, Jääskeläinen et al. 2020, Hamilton & Huth 2020, Hauk & Weiss 2020, Schilling et al. 2021). Therefore, multi-modal (fMRI, MEG, EEG, iEEG and ECoG) measurements during continuous speech perception (listening to audiobooks) will be performed, as described in detail in our preliminary work (Schilling et al. 2021).
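To illustrate the model-to-brain comparison step described above, the following is a minimal sketch of a representational similarity analysis in the spirit of Kriegeskorte, Mur & Bandettini (2008): activation patterns of a candidate model and brain response patterns for the same set of stimuli are each summarised as representational dissimilarity matrices, whose rank correlation serves as a similarity score. All variable names, array shapes and the random example data are illustrative assumptions, not the project's actual analysis pipeline.

```python
# Minimal RSA sketch: compare model activations and brain responses to the same
# stimuli via their representational dissimilarity matrices (RDMs).
# Shapes and data below are illustrative assumptions only.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr


def rdm(patterns: np.ndarray) -> np.ndarray:
    """Condensed RDM (correlation distance, 1 - Pearson r) over stimuli.

    patterns: array of shape (n_stimuli, n_features), one activation vector per stimulus.
    """
    return pdist(patterns, metric="correlation")


def rsa_score(model_patterns: np.ndarray, brain_patterns: np.ndarray) -> float:
    """Spearman correlation between the model RDM and the brain RDM."""
    rho, _ = spearmanr(rdm(model_patterns), rdm(brain_patterns))
    return rho


# Example with random data: 40 stimuli, a model layer with 768 units,
# and a brain region of interest with 500 voxels.
rng = np.random.default_rng(0)
model_acts = rng.normal(size=(40, 768))   # hypothetical hidden-layer activations per stimulus
brain_acts = rng.normal(size=(40, 500))   # hypothetical voxel patterns for the same stimuli
print(rsa_score(model_acts, brain_acts))
```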
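Similarly, the variation-and-selection loop used to increase the models' biological fidelity can be sketched as a simple genetic algorithm over candidate model configurations. The configuration space and the placeholder fitness function below are illustrative only; in the project, the fitness would be the fit between internal model activity and measured brain activity (e.g. an RSA score as above), and variation would correspond to the neural architecture search and parameter adjustments applied to the “child models”.

```python
# Schematic variation-and-selection loop over candidate model configurations.
# The configuration space and fitness function are placeholders, not the project's models.

import random


def fitness(config: dict) -> float:
    """Placeholder fitness: in the project this would be the RSA/Bayesian fit
    between internal model activity (for this configuration) and brain activity."""
    return -abs(config["n_layers"] - 6) - abs(config["hidden_size"] - 512) / 256


def mutate(config: dict) -> dict:
    """Create a 'child' configuration by slightly perturbing the parent."""
    child = dict(config)
    child["n_layers"] = max(1, child["n_layers"] + random.choice([-1, 0, 1]))
    child["hidden_size"] = max(64, child["hidden_size"] + random.choice([-64, 0, 64]))
    return child


# Initial population of 20 random candidate configurations.
population = [{"n_layers": random.randint(2, 12),
               "hidden_size": random.choice([128, 256, 512, 1024])}
              for _ in range(20)]

for generation in range(10):
    # Selection: keep the best-fitting half of the population.
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    # Variation: refill the population with mutated copies of the parents.
    population = parents + [mutate(random.choice(parents)) for _ in range(10)]

print(max(population, key=fitness))
```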
This project is carried out by Pegah Ramezani and supervised by Dr. Patrick Krauß and Prof. Dr. Ewa Dabrowska.