31

Entropy Production and Phase Transitions far from Equilibrium with Emphasis on Wet Granular Matter / Entropieproduktion und Phasenübergänge fern vom Gleichgewicht mit Betonung feuchter granularer Materie

Hager-Fingerle, Axel 11 December 2007 (has links)
No description available.
32

Faking the Implicit Association Test (IAT): Predictors, Processes, and Detection

Röhner, Jessica 05 February 2014 (has links) (PDF)
Resistance to faking is an important quality criterion for psychological tests. This criterion is considered met if the construction of the test gives participants no way to control or distort their test scores (cf. Moosbrugger & Kelava, 2012). In contrast to direct measures (e.g., questionnaires and interviews), in which a trait is assessed through participants' self-reports and faking (e.g., socially desirable responding) cannot be ruled out, indirect measures (e.g., the Implicit Association Test; IAT; Greenwald, McGhee, & Schwartz, 1998) were long assumed to be immune to faking attempts. This assumption rests, among other things, on the premise that indirect measures capture implicit traits. Implicit traits differ from the "more classical" explicit traits, which are predominantly measured with direct procedures. A key difference is that participants do not necessarily know their standing on implicit traits and cannot control it (cf. De Houwer, 2006; De Houwer & Moors, 2007, in press). The theoretical assumptions about implicit traits and their measurement suggest two implications. First, implicit traits can only be assessed through indirect approaches, because they are not necessarily conscious, so self-report seems impossible. Second, people cannot control their implicit test results and consequently cannot fake them. Presumably for this reason, a veritable boom a few years ago led to the development of a multitude of indirect measures of implicit traits. Whether the results of these measures really are implicit, and thus unfakeable, must not merely be assumed theoretically but has to be tested empirically (cf. De Houwer, 2006). The IAT is regarded as the best-known, most reliable, and most valid indirect measure (Bosson, Swan, & Pennebaker, 2000; Rudolph, Schröder-Abé, Schütz, Gregg, & Sedikides, 2008). For this reason, my dissertation is devoted to the empirical examination of the fakeability of the IAT. The dissertation comprises five chapters. Chapter 1 gives a theoretical introduction to faking in diagnostic contexts and to the IAT, and presents basic findings and open questions concerning the fakeability of the IAT. Chapters 2 to 4 are empirical contributions, each focusing on a different aspect of the fakeability of the IAT. Chapter 2 examines the conditions under which the IAT can be faked. To date, the few existing studies have painted a highly contradictory picture of the fakeability of the IAT. One reason may be that potentially relevant factors influencing the fakeability of the measure had never been examined together in a single study. The study presented here was designed and conducted with precisely this aim. The results point to a complex interplay of several factors and show under which conditions the IAT can be faked. Implications of these findings are critically discussed. Chapter 3 answers the questions of how people fake the IAT and whether faking on the IAT can be detected. Research has so far paid little attention to what fakers actually do to shift their test results in the desired direction, and it has not yet been examined whether participants apply different strategies under different conditions (e.g., faking goal: high vs. low scores). Nevertheless, indices have been proposed that are claimed to detect faking on the IAT (Agosta, Ghirardi, Zogmaister, Castiello, & Sartori, 2011; Cvencek, Greenwald, Brown, Gray, & Snowden, 2010). In the study presented here, I examined which strategies fakers use and whether they switch strategies depending on the condition, which of these strategies are actually associated with successfully faking the IAT, and whether the previously proposed indices are in fact able to detect successful fakers. My results show that fakers use different strategies to reach their goal and, relatedly, that detecting successful fakers on the IAT is harder than previously assumed. Implications of these findings are critically discussed. Chapter 4 asks whether cognitive abilities facilitate successful faking of the IAT. Until now, such abilities had been linked only to faking success on direct measures (cf. Hartshorne & May, 1928; Nguyen, Biderman, & McDaniel, 2005; Ones, Viswesvaran, & Reiss, 1996; Pauls & Crost, 2005; Snell, Sydell, & Lueke, 1999; Tett, Freund, Christiansen, Fox, & Coaster, 2012; Weiner & Gibson, 2000). In the study presented here, I examined whether they also play a role in faking the IAT, focusing on the g factor of intelligence, processing speed, and the ability to concentrate. The results show that some of these predictors indeed influence faking success on the IAT. Implications of these findings are critically discussed. Chapter 5 integrates the findings of my research into existing theory and offers an outlook for future research as well as recommendations for practice.
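The abstract does not reproduce the scoring algebra, so as a minimal sketch: the following Python snippet implements a simplified IAT D score (in the spirit of Greenwald, Nosek, & Banaji, 2003, not necessarily the exact scoring used in the thesis) and illustrates why one faking strategy discussed in this literature, deliberately slowing responses in the compatible block, shifts the score. All latencies are simulated and purely illustrative.

```python
import numpy as np

def iat_d_score(rt_compatible, rt_incompatible):
    """Simplified IAT D score: mean latency difference between the
    incompatible and compatible blocks, divided by the pooled standard
    deviation of all latencies (cf. Greenwald, Nosek, & Banaji, 2003)."""
    rt_c = np.asarray(rt_compatible, dtype=float)
    rt_i = np.asarray(rt_incompatible, dtype=float)
    pooled_sd = np.concatenate([rt_c, rt_i]).std(ddof=1)
    return (rt_i.mean() - rt_c.mean()) / pooled_sd

rng = np.random.default_rng(0)
honest_c = rng.normal(700, 100, 40)   # compatible block latencies, ms
honest_i = rng.normal(850, 120, 40)   # incompatible block latencies, ms
print("honest D:", round(iat_d_score(honest_c, honest_i), 2))

# A faking strategy reported in this literature: deliberately slowing
# down in the compatible block shrinks (or even reverses) the D score.
faked_c = honest_c + 300
print("faked D:", round(iat_d_score(faked_c, honest_i), 2))
```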
33

Classical vs. Quantum Decoherence

Helm, Julius 12 March 2012 (has links) (PDF)
Based on the superposition principle, any two states of a quantum system may be coherently superposed to yield a novel state. Such a simple construction is at the heart of genuinely quantum phenomena such as interference of massive particles or quantum entanglement. Yet these superpositions are susceptible to environmental influences, eventually leading to a complete disappearance of the system's quantum character. In principle, two distinct mechanisms responsible for this process of decoherence may be identified. In a classical decoherence setting, on the one hand, stochastic fluctuations of classical, ambient fields are the relevant source. This approach leads to a formulation in terms of stochastic Hamiltonians; the dynamics is unitary, yet stochastic. In a quantum decoherence scenario, on the other hand, the system is described in the language of open quantum systems. Here, the environmental degrees of freedom are to be treated quantum mechanically, too. The loss of coherence is then a direct consequence of growing correlations between system and environment. The purpose of the present thesis is to clarify the distinction between classical and quantum decoherence. It is known that there exist decoherence processes that are not reconcilable with the classical approach. We deem it desirable to have at hand a simple, feasible model that is known to defy any description in terms of fluctuating fields. Indeed, we find such an example of true quantum decoherence. The calculation of the norm distance to the convex set of classical dynamics allows for a quantitative assessment of the results. In order to incorporate genuine irreversibility, we extend the original toy model by an additional bath. Here, the fragility of the true quantum nature of the dynamics under increasing coupling strength is evident. The geometric character of our findings offers remarkable insights into the geometry of the set of non-classical decoherence maps. We give a very intuitive geometric measure, a volume, for the quantumness of dynamics. This enables us to identify the decoherence process of maximum quantumness, that is, the process with maximal distance to the convex set of dynamics consistent with the stochastic, classical approach. In addition, we observe a distinct correlation between the decoherence potential of a given dynamics and its achievable quantumness. In a final step, we study the notion of quantum decoherence in the context of a bipartite system that couples locally to the subsystems' respective environments. A simple argument shows that in the case of a separable environment the resulting dynamics is of classical nature. Based on a realistic experiment, we analyze the impact of entanglement between the local environments on the nature of the dynamics. Interestingly, despite the variety of entangled environmental states scrutinized, not a single instance of true quantum decoherence is encountered. In part, the identification of the classical nature relies on numerical schemes; for a large class of dynamics, however, we are able to rule out a true quantum nature analytically.
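The classical (random-unitary) mechanism described above can be made concrete with a short numerical sketch. The following Python snippet is not taken from the thesis; it shows a qubit in an equal superposition subjected to random phase kicks from a fluctuating classical field. Every single realization evolves unitarily, yet the ensemble average loses its off-diagonal coherence, matching the analytic Gaussian-noise prediction exp(-sigma^2/2).

```python
import numpy as np

rng = np.random.default_rng(1)

# |+> state: equal superposition with maximal coherence rho_01 = 0.5.
rho0 = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)

def dephase(rho, sigma_phi, n_samples=10_000):
    """Average the state over random phase kicks exp(-i*phi*sigma_z/2)
    with phi ~ N(0, sigma_phi^2). Each realization is unitary; only the
    ensemble average decoheres ('classical' decoherence)."""
    out = np.zeros_like(rho)
    for phi in rng.normal(0.0, sigma_phi, n_samples):
        u = np.diag(np.exp([-0.5j * phi, 0.5j * phi]))
        out += u @ rho @ u.conj().T
    return out / n_samples

for s in (0.0, 1.0, 3.0):
    coherence = abs(dephase(rho0, s)[0, 1])
    # Analytic prediction for Gaussian phase noise: 0.5 * exp(-s^2 / 2).
    print(f"sigma={s}: |rho_01| = {coherence:.3f}, "
          f"predicted {0.5 * np.exp(-s**2 / 2):.3f}")
```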
34

Finite element method for coupled thermo-hydro-mechanical processes in discretely fractured and non-fractured porous media

Watanabe, Norihiro 26 February 2013 (has links) (PDF)
Numerical analysis of multi-field problems in porous and fractured media is an important subject for various geotechnical engineering tasks such as the management of geo-resources (e.g. engineering of geothermal, oil, and gas reservoirs) as well as waste management. For practical use, e.g. in geothermal engineering, simulation tools are required that take into account both coupled thermo-hydro-mechanical (THM) processes and the uncertainty of geological data, i.e. the model parametrization. For modeling fractured rocks, equivalent porous medium or multiple continuum approaches are currently often the only option, owing to the difficulty of handling geomechanical discontinuities. However, they are not applicable for predicting flow and transport in subsurface systems where a few fractures dominate the system behavior. Modeling coupled problems in discretely fractured porous media is therefore desirable for more precise analysis. The subject of this work is the development of a finite element method (FEM) framework for modeling coupled THM problems in discretely fractured and non-fractured porous media, including thermal water flow, advective-diffusive heat transport, and thermoporoelasticity. Pre-existing fractures are considered. Systems of discretely fractured porous media can be regarded as a problem of interacting multiple domains, i.e. a porous medium domain and a discrete fracture domain, for hydraulic and transport processes, and as a discontinuous problem for mechanical processes. The FEM has to account for both kinds of problems. In addition, this work develops a methodology for treating data uncertainty with the FEM model and investigates the impact of that uncertainty on the evaluation of coupled THM processes. All necessary code developments were carried out within the scientific open-source project OpenGeoSys (OGS). In this work, fluid flow and heat transport problems in interacting multiple domains are solved assuming continuity of the field variables (pressure and temperature) across the two domains. The assumption is reasonable if the fractures contain no infill material. The method has been successfully applied to several numerical examples, e.g. modeling three-dimensional coupled flow and heat transport processes in discretely fractured porous media at the Groß Schönebeck geothermal site (Germany), and three-dimensional coupled THM processes in porous media at the Urach Spa geothermal site (Germany). To solve the mechanically discontinuous problems, lower-dimensional interface elements (LIEs) with local enrichments have been developed for coupled problems in domains including pre-existing fractures. The method permits the use of existing flow simulators and an identical mesh for both processes, and it enables a monolithic formulation of the coupled problems for robust computation. Moreover, it offers the practical advantage that one can use existing standard FEM codes for groundwater flow and easily couple the mechanical and hydraulic computations. An example of 2D fluid injection into a single fracture demonstrated that the proposed method produces results in strong agreement with semi-analytical solutions. An uncertainty analysis of coupled THM processes has been carried out for a typical geothermal reservoir in crystalline rock based on the Monte Carlo method. Fracture and matrix are treated conceptually as an equivalent porous medium, and the model is applied to available data from the Urach Spa and Falkenberg sites (Germany). Reservoir parameters are treated as spatially random variables, and their realizations are generated using conditional Gaussian simulation. Two reservoir modes (undisturbed and stimulated) are considered to construct a stochastic model of the permeability distribution. The most significant factors in the analysis turn out to be permeability and heat capacity. The study demonstrates the importance of taking parameter uncertainties into account in geothermal reservoir evaluation in order to assess the viability of numerical modeling.
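The two ingredients combined in this work, FEM forward solves and Monte Carlo sampling of uncertain parameters, can be illustrated with a deliberately minimal Python sketch. This is a toy stand-in, not the OpenGeoSys implementation: a 1D linear FEM for steady heat conduction with Dirichlet ends, through which a log-normally distributed elementwise conductivity (all moments are assumptions) is propagated to the mid-domain temperature.

```python
import numpy as np

def fem_heat_1d(conductivity, length=100.0, t_left=120.0, t_right=60.0):
    """Linear FEM for steady 1D heat conduction -d/dx(k dT/dx) = 0 with
    Dirichlet ends; `conductivity` holds one k value per element."""
    k = np.asarray(conductivity, dtype=float)
    n_el = k.size
    h = length / n_el
    A = np.zeros((n_el + 1, n_el + 1))
    for e in range(n_el):                 # assemble element stiffness
        ke = (k[e] / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        A[e:e + 2, e:e + 2] += ke
    b = np.zeros(n_el + 1)
    A[0, :], A[0, 0], b[0] = 0.0, 1.0, t_left       # Dirichlet BCs via
    A[-1, :], A[-1, -1], b[-1] = 0.0, 1.0, t_right  # row replacement
    return np.linalg.solve(A, b)

rng = np.random.default_rng(42)
n_el, n_mc = 50, 2000
# Monte Carlo over log-normal conductivity fields (hypothetical moments),
# recording the temperature at the mid-domain node each time.
samples = [fem_heat_1d(rng.lognormal(np.log(2.5), 0.5, n_el))[n_el // 2]
           for _ in range(n_mc)]
print(f"mid-domain temperature: {np.mean(samples):.1f} "
      f"+/- {np.std(samples):.1f}")
```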
35

Der E-Learning Redaktionsleitstand

Böhme, Rico, Kalb, Hendrik, Petzoldt, Oliver, Schoop, Eric 15 December 2014 (has links) (PDF)
Recent years have seen a marked rise in demand for online, multimedia education and training offerings, both in universities and in the private sector. Compared with traditional teaching and learning arrangements, developing such offerings involves considerably more time and cost. New concepts are needed to reduce this effort. Besides parallelizing production and revision processes, the reuse of learning content once it has been created plays the decisive role. These are among the most important requirements from the perspective of information systems research (Wirtschaftsinformatik).
36

Die Relevanz semiotischer Dimensionen als "System der möglichen Fehler" für die Usability

Schwarzfischer, Klaus 19 July 2017 (has links) (PDF)
From Section 1: "Why is semiotics worthwhile precisely in the field of usability and design? More than that, semiotics as an overarching perspective cannot be avoided here. (What can be avoided, at most, is the sociolect or technolect of academic semiotics, but not semiotic work itself.) The thinking of many designers is predominantly visual. This orientation toward non-verbal forms and actions may seem to run counter to semiotics. Semiotics does have a strong tradition in linguistics, but linguistics is only one of several equally valid approaches: consider medicine (where visual and other symptoms are interpreted as signs), painting (where representations exist for aesthetic, social, and political correspondences), gesture (where every movement of a muscle, large or small, is linked to meaning), or music (where highly abstract tone sequences are tied to emotional dynamics); see Eco (2002), Nöth (2000), Hucklenbroich (2003), Mazzola (2003), and Grammer (2004). ..."
37

Modellierung stochastischer Mortalitätsraten zur Verbriefung von Langlebigkeitsrisiken

Lovász, Enrico 14 December 2011 (has links)
This thesis analyzes the securitization of mortality risks, focusing on the modeling of longevity risk under extreme events. After outlining the advantages and disadvantages of existing mortality-linked securities, the hypothetical longevity bond used in this work is presented. The central component of this bond is a parametric model that combines a jump process with extreme value theory to project future mortality rates. This approach to securitizing mortality risks is new. It offers the advantages of better capturing the rise in survival probabilities over past years and of accounting for rare (extreme) events that have a significant impact on mortality rates.
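The abstract names the two model ingredients, a jump process and extreme value theory, without giving equations. The following Python sketch shows one plausible reading (all parameter values are illustrative assumptions, not values from the thesis): a Lee-Carter-style mortality index following a random walk with drift, plus rare shocks whose severities are drawn from a generalized Pareto distribution.

```python
import numpy as np

rng = np.random.default_rng(7)

def gpd_sample(shape, scale):
    """Draw one generalized Pareto variate by inverse-transform sampling:
    F^-1(u) = (scale/shape) * ((1 - u)^(-shape) - 1) for shape != 0."""
    u = rng.random()
    return (scale / shape) * ((1.0 - u) ** (-shape) - 1.0)

def simulate_kt(k0=0.0, years=30, drift=-0.5, sigma=0.6,
                jump_rate=0.05, gpd_shape=0.3, gpd_scale=2.0):
    """Random-walk-with-drift mortality index (Lee-Carter style k_t)
    with rare upward jumps whose severities follow a generalized Pareto
    distribution: the extreme-value-theory ingredient."""
    k = np.empty(years + 1)
    k[0] = k0
    for t in range(1, years + 1):
        # With small probability, add a pandemic-type mortality shock.
        jump = gpd_sample(gpd_shape, gpd_scale) if rng.random() < jump_rate else 0.0
        k[t] = k[t - 1] + drift + sigma * rng.normal() + jump
    return k

# Monte Carlo of terminal index values, e.g. as input to valuing a
# longevity bond whose payoff depends on realized mortality.
paths = np.array([simulate_kt()[-1] for _ in range(5000)])
print(f"mean k_T = {paths.mean():.2f}, "
      f"99% quantile = {np.quantile(paths, 0.99):.2f}")
```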
40

Processing and Integration of Sensory Information in Spatial Navigation

Goeke, Caspar 10 February 2017 (has links)
For hundreds of thousands of years, humans lived as nomads, constantly moving and relocating. Individuals or small groups had to navigate over very long distances in order to survive. Successful spatial navigation was thus one of the key cognitive abilities ensuring our survival. Although navigation has nowadays become less life-threatening, exploring our environment and efficiently navigating between places are still very important aspects of our everyday life. However, in order to navigate efficiently, our brain has to perform a series of spatial cognitive operations. This dissertation is structured into three sections, which explore these cognitive operations from three different perspectives. In the first section I elaborate on the role of reference frames in human spatial navigation. Specifically, in an online navigation study (study one) I show that humans have distinct but stable reference frame proclivities. Furthermore, this study demonstrates the existence of a spatial strategy in which the preference for a particular reference frame depends on the axis of rotation (horizontal vs. vertical). In a follow-up study (study two) I then analyze the factors underlying performance differences in navigation, as well as individual preferences for one or another spatial strategy. Interestingly, the results suggest that performance measures (reaction time and error rate) are influenced mostly by gender and age. Even more importantly, I show that the most influential factor in the choice of an individual navigation strategy is the participant's cultural background. This underlines the importance of socio-economic aspects in human spatial navigation. In the second part of this thesis I discuss aspects of learning and memorizing spatial information. Here, the alignment study (study three) shows that humans are able to recall object-to-object relations (e.g. how to get from A to B) in a very brief time, indicating that such information is directly stored in memory. This supports an embodied (action-oriented) perspective on human spatial cognition. Following this approach, in the feelSpace study (study four) I investigate long-term training effects with a sensory augmentation device. Most importantly, the results demonstrate substantial changes in the subjective perception of space, in sleep stage architecture, and in neural oscillations during sleep. In the third and final section I describe the importance of multimodal processes in spatial cognitive operations. In the platform study (study five) I combine the topics of sensory augmentation and Bayesian cue combination. The results of this study show that untrained adult participants alternate between augmented and native sensory information rather than integrating them. Interestingly, this alternation is based on a subjective evaluation of cue reliability. In summary, this thesis presents relevant new findings for better understanding spatial strategy formation, the learning and representation of spatial relations in memory, and multimodal cue combination. An important and overarching aspect of this thesis is the characterization of individual differences in the context of human spatial navigation.
Specifically, my research revealed individual differences in three areas: first, in the use of egocentric or allocentric reference frames for spatial updating; second, in individual qualitative changes of space perception during long-term sensory augmentation; and third, in preferences for native or augmented information in a cue combination task. Most importantly, I provide a better definition and understanding of these individual differences by combining qualitative and quantitative measures and by using the latest technologies, such as online data recordings and interactive experimental setups. In the real world, humans are very active beings who follow individualized spatial cognitive strategies. Studying such interactive and individualized behavior will ultimately lead to more coherent and meaningful insights within the human sciences.
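The contrast between integrating and alternating cues can be illustrated with the standard reliability-weighted (inverse-variance) cue-combination rule. The following Python sketch uses hypothetical noise levels, not data from the study: it compares the variance of a fused estimate against a strategy that picks one cue per trial in proportion to its judged reliability. Integration yields the lower spread, which is what makes pure alternation statistically suboptimal.

```python
import numpy as np

rng = np.random.default_rng(3)
true_heading = 30.0                        # degrees, hypothetical trial
sigma_native, sigma_augmented = 8.0, 4.0   # cue noise (illustrative)

n_trials = 10_000
native = rng.normal(true_heading, sigma_native, n_trials)
augmented = rng.normal(true_heading, sigma_augmented, n_trials)

# Optimal integration: weight each cue by its inverse variance.
w_nat = sigma_native**-2 / (sigma_native**-2 + sigma_augmented**-2)
integrated = w_nat * native + (1 - w_nat) * augmented
# Predicted fused precision: 1/sigma_fused^2 = 1/sigma_nat^2 + 1/sigma_aug^2,
# so the fused estimate is less variable than either cue alone.

# Alternation: pick one cue per trial with probability proportional to
# its judged reliability (the pattern reported for untrained adults).
p_aug = sigma_augmented**-2 / (sigma_native**-2 + sigma_augmented**-2)
alternated = np.where(rng.random(n_trials) < p_aug, augmented, native)

for name, est in [("integration", integrated), ("alternation", alternated)]:
    print(f"{name}: sd = {est.std():.2f} deg")
```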
