1

Manipulations of spike trains and their impact on synchrony analysis

Pazienti, Antonio January 2007 (has links)
The interaction between neuronal cells can be identified as the computing mechanism of the brain. Neurons are complex cells that do not operate in isolation but are organized in a highly connected network structure. There is experimental evidence that groups of neurons dynamically synchronize their activity and carry out brain functions at all levels of complexity. A fundamental step to prove this hypothesis is to analyze large sets of single neurons recorded in parallel. Techniques to obtain these data are now available, but advances are needed in the pre-processing of the large volumes of acquired data and in data analysis techniques. Major issues include extracting the signals of single neurons from the noisy recordings (referred to as spike sorting) and assessing the significance of the synchrony. This dissertation addresses these issues with two complementary strategies, both founded on the manipulation of point processes under rigorous analytical control. On the one hand, I modeled the effect of spike sorting errors on correlated spike trains by corrupting them with realistic failures, and studied the corresponding impact on correlation analysis. The results show that correlations between multiple parallel spike trains are severely affected by spike sorting, especially by erroneously missing spikes. When this happens, sorting strategies that classify only "good" spikes (conservative strategies) lead to less accurate results than "tolerant" strategies. On the other hand, I investigated the effectiveness of methods for assessing significance that create surrogate data by displacing spikes around their original position (referred to as dithering). I provide analytical expressions for the probability of coincidence detection after dithering. The effectiveness of spike dithering in creating surrogate data strongly depends on the dithering method and on the method of counting coincidences. Closed-form expressions and bounds are derived for the case where the dither equals the allowed coincidence interval. This work provides new insights into the methodologies of identifying synchrony in large-scale neuronal recordings and of assessing its significance. / Information processing in the brain is carried out largely by the interactions of nerve cells, so-called neurons, which exhibit complex dynamics in their chemical and electrical properties. There is strong evidence that groups of synchronized neurons can ultimately explain brain function at all levels. To answer the difficult question of how exactly the brain works, it is therefore necessary to measure the activity of many neurons simultaneously. The technical prerequisites for this have been established over recent decades by multi-electrode systems, which are now in wide use and allow the simultaneous extracellular recording of up to several hundred channels. A prerequisite for the correlation analysis of many parallel recordings is the correct detection and assignment of the action potentials of individual neurons, a procedure known as spike sorting. A further challenge is the statistically sound assessment of empirically observed correlations.
In this dissertation I present a theoretical study devoted to the pre-processing of the data by spike sorting and its influence on the accuracy of statistical analysis methods, as well as to the effectiveness of surrogate data for estimating the statistical significance of correlations. I use two complementary strategies, both based on the analytical treatment of point-process manipulations. In a detailed study I modeled the effect of spike sorting on correlated spike trains corrupted with realistic errors. To compare the results of two different correlation-analysis methods on the corrupted and on the uncorrupted processes, I derive the corresponding analytical formulas. My results show that coincident activity patterns across multiple spike trains are strongly affected by spike classification. This is the case when spikes are erroneously assigned to a neuron although they belong to other neurons or are noise artifacts (false-positive errors). However, false-negative errors (spikes that are erroneously unclassified or misclassified) have a far greater impact on the significance of the correlations. In a further study I investigate the effectiveness of a class of surrogate methods, so-called dithering procedures, which destroy pairwise correlations by displacing coincident spikes from their original positions within a small time window. It turns out that the effectiveness of spike dithering for generating surrogate data depends both on the dithering method and on the method used to count coincidences. I provide analytical formulas for the probability of coincidence detection after dithering. The present work offers new insights into methods for correlation analysis of multivariate point processes, with a detailed examination of the different statistical influences on significance estimation. For practical applications, it yields guidelines for handling data in synchrony analyses.
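As a rough illustration of the two ingredients discussed in this abstract, the Python/NumPy sketch below implements one simple variant of uniform spike dithering and a nearest-neighbour coincidence count with an allowed coincidence interval. The window sizes, rates, and function names are illustrative assumptions, not the dissertation's actual procedures.

import numpy as np

def dither_spikes(spike_times, dither=0.005, rng=None):
    """Displace each spike uniformly within +/- dither (seconds) of its
    original position -- one simple variant of spike dithering."""
    rng = np.random.default_rng() if rng is None else rng
    return np.sort(spike_times + rng.uniform(-dither, dither, size=spike_times.size))

def count_coincidences(train_a, train_b, window=0.005):
    """Count spikes of train_a that have at least one spike of train_b
    within +/- window (an 'allowed coincidence interval' style count).
    Both inputs are assumed to be sorted NumPy arrays of spike times."""
    idx = np.searchsorted(train_b, train_a)
    left = np.abs(train_a - train_b[np.clip(idx - 1, 0, train_b.size - 1)])
    right = np.abs(train_a - train_b[np.clip(idx, 0, train_b.size - 1)])
    return int(np.sum(np.minimum(left, right) <= window))

# Toy significance test: compare the observed coincidence count against the
# distribution of counts obtained after dithering one of the trains.
rng = np.random.default_rng(0)
a = np.sort(rng.uniform(0, 10, 100))                               # neuron A
b = np.sort(np.concatenate([a[:30], rng.uniform(0, 10, 70)]))      # 30 injected coincidences
observed = count_coincidences(a, b)
surrogate = [count_coincidences(a, dither_spikes(b, rng=rng)) for _ in range(1000)]
p_value = np.mean(np.array(surrogate) >= observed)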
2

Spatiotemporal Organization of Atrial Fibrillation Using Cross-Bicoherence with Surrogate Data

Jaimes, Rafael 19 May 2011 (has links)
Atrial fibrillation (AF) is a troublesome disease that is often overshadowed by more serious conditions such as myocardial infarction. Until now, higher-order spectral techniques have seen little or no use in evaluating the organization of the atrium during AF. A cross-bicoherence algorithm can be used alongside a surrogate-data threshold to determine significant phase-coupling interactions, giving rise to an organization metric. The proposed algorithm is used to show that rotigaptide, a gap-junction coupling drug, significantly increases the organization of the atria during episodes of AF by improving cell-to-cell coupling.
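A hedged sketch of how such an analysis might be set up is given below: a direct (segment-averaged) cross-bicoherence estimate plus a phase-randomized surrogate generator that could supply a significance threshold. The segment length, window, and normalization are assumptions for illustration and are not taken from the thesis.

import numpy as np

def cross_bicoherence(x, y, nperseg=256):
    """Segment-averaged cross-bicoherence estimate between signals x and y.
    Returns a (nperseg//2, nperseg//2) matrix indexed by (f1, f2)."""
    nseg = min(len(x), len(y)) // nperseg
    half = nperseg // 2
    num = np.zeros((half, half), dtype=complex)
    d1 = np.zeros((half, half))
    d2 = np.zeros((half, half))
    win = np.hanning(nperseg)
    for k in range(nseg):
        X = np.fft.fft(x[k*nperseg:(k+1)*nperseg] * win)
        Y = np.fft.fft(y[k*nperseg:(k+1)*nperseg] * win)
        for f1 in range(half):
            for f2 in range(half - f1):
                prod = X[f1] * Y[f2]
                num[f1, f2] += prod * np.conj(Y[f1 + f2])
                d1[f1, f2] += np.abs(prod) ** 2
                d2[f1, f2] += np.abs(Y[f1 + f2]) ** 2
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.abs(num) / np.sqrt(d1 * d2)

def phase_randomized(sig, rng):
    """Surrogate with the same amplitude spectrum but randomized Fourier phases."""
    spec = np.fft.rfft(sig)
    phases = rng.uniform(0, 2 * np.pi, spec.size)
    phases[0] = 0.0
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=sig.size)

One plausible organization metric would then be the fraction of (f1, f2) bins whose cross-bicoherence exceeds, say, the 95th percentile of values obtained from phase-randomized surrogates.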
3

Generating Surrogates from Recurrences

Thiel, Marco, Romano, Maria Carmen, Kurths, Jürgen, Rolfs, Martin, Kliegl, Reinhold January 2006 (has links)
In this paper we present an approach to recover the dynamics from the recurrences of a system and then generate (multivariate) twin surrogate (TS) trajectories. In contrast to other approaches, such as linear-like surrogates, this technique produces surrogates which correspond to an independent copy of the underlying system, i.e., they induce a trajectory of the underlying system visiting the attractor in a different way. We show that these surrogates are well suited to test for complex synchronization, which makes it possible to systematically assess the reliability of synchronization analyses. We then apply the TS to study binocular fixational movements and find strong indications that the fixational movements of the left and right eye are phase synchronized. This result indicates that there might be only one centre in the brain that produces the fixational movements in both eyes, or a close link between two centres.
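A condensed sketch of the twin-surrogate idea described above might look as follows. The recurrence threshold eps, the restart rule at the end of the trajectory, and the function names are illustrative choices; the delay-embedding step is omitted, and the full pairwise distance matrix is only practical for short trajectories.

import numpy as np

def twin_surrogate(traj, eps, rng=None):
    """Generate one twin surrogate of an embedded trajectory `traj`
    (shape: n_points x dim). Points with identical recurrence-matrix
    columns ('twins') are treated as interchangeable."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(traj)
    # Recurrence matrix: R[i, j] = 1 if points i and j are closer than eps
    dists = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=-1)
    R = (dists < eps).astype(np.int8)
    # Twins share an identical column of R
    keys = [R[:, j].tobytes() for j in range(n)]
    twins = {}
    for j, k in enumerate(keys):
        twins.setdefault(k, []).append(j)
    # Walk: at each step jump to a random twin of the current point,
    # then advance one step along the original trajectory
    i = rng.integers(n - 1)
    out = []
    for _ in range(n):
        out.append(traj[i])
        i = rng.choice(twins[keys[i]]) + 1
        if i >= n:                      # fell off the end: jump to a random point and continue
            i = rng.integers(n - 1)
    return np.array(out)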
4

Complexity of the Electroencephalogram of the Sprague-Dawley Rat

Smith, Phillip James 27 July 2010 (has links)
No description available.
5

Linear And Nonlinear Analysis Of Human Postural Sway

Celik, Huseyin 01 September 2008 (has links) (PDF)
Human upright posture exhibits a persistent oscillatory behavior of complex nature, called human postural sway. Variations in the position of the Center-of-Pressure (CoP) were used to describe the sway. In this study, CoP data experimentally collected from 28 subjects (14 males and 14 females, with ages ranging from 6 to 84), divided into 4 groups according to age, were analyzed. Data collection from each subject consisted of 5 successive trials, each lasting 180 seconds. Linear analysis methods such as variance/standard deviation, the Fast Fourier Transform, and Power Spectral Density estimates were applied to the detrended CoP signal of human postural sway. The Run test and ensemble-average methods were used to assess the stationarity and ergodicity of the CoP signal, respectively. Furthermore, to reveal the nonlinear characteristics of human postural sway, its dynamics were reconstructed in m-dimensional state space from the CoPx signals, and correlation dimension (D2) estimates were calculated from the embedded dynamics. The statistical and dynamical measures were also checked for significant changes that may occur with aging. The results of the study suggest that human postural sway is a stationary process when 180-second biped quiet-stance data are considered. In addition, it exhibits a variable dynamical structure of complex nature (112 deterministic-chaos versus 28 stochastic time series of human postural sway) across the five successive trials of the 28 subjects. Moreover, the groups differed significantly in the correlation dimension (D2) measure (p ≤ 0.0003). Finally, the behavior of the experimental CoPx signals was checked against two types of linear processes using the surrogate data method. The shuffled CoPx signals (Surrogate I) suggested that the temporal order of CoPx is important; however, phase randomization (Surrogate II) did not change the behavioral characteristics of the CoPx signal.
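As an illustration of the correlation dimension (D2) estimation mentioned in this abstract, the sketch below computes a Grassberger-Procaccia correlation sum for a delay-embedded scalar series. The embedding dimension, delay, and radii are placeholders rather than the study's actual parameters, and the O(n^2) distance computation is only practical for short records.

import numpy as np

def correlation_sum(x, m, tau, radii):
    """Grassberger-Procaccia correlation sum C(r) for a scalar series x,
    delay-embedded with dimension m and delay tau (placeholders here)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau                      # number of embedded points
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(m)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    pairs = d[np.triu_indices(n, k=1)]              # all distinct point pairs
    return np.array([(pairs < r).mean() for r in radii])

# D2 is then estimated as the slope of log C(r) versus log r in the scaling
# region, e.g.:  slope, _ = np.polyfit(np.log(radii), np.log(C), 1)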
6

Analýza surogát pro určení významnosti interakce mezi kardiovaskulárními signály / Surrogate data analysis for assessing the significance of interaction between cardiovascular signals

Javorčeková, Lenka January 2019 (has links)
The aim of this diploma thesis was to become familiar with methods for generating surrogate data and with their application to cardiovascular signals. The first part of the thesis describes the basic theory of baroreflex function and methods for generating surrogate data. Surrogate data were generated from records acquired from the database using three different methods. In the next part, the significance of the coherence between blood pressure and heart intervals was assessed using the surrogates. Finally, two hypotheses were defined and tested to determine whether the orthostatic change of measurement position affects the causal coherence and baroreflex function.
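One plausible way to implement such a surrogate-based coherence significance test is sketched below: the coherence between blood pressure and RR intervals is compared against the pointwise 95th percentile of coherences obtained with phase-randomized surrogates. The sampling rate, segment length, surrogate count, and percentile are illustrative assumptions, not the thesis's actual settings.

import numpy as np
from scipy.signal import coherence

def coherence_threshold(sbp, rri, fs=4.0, n_surr=200, nperseg=256, seed=0):
    """Pointwise 95th-percentile significance threshold for the coherence
    between blood pressure (sbp) and RR intervals (rri), obtained from
    phase-randomized surrogates of one of the signals."""
    rng = np.random.default_rng(seed)
    f, coh = coherence(sbp, rri, fs=fs, nperseg=nperseg)
    null = np.empty((n_surr, coh.size))
    for i in range(n_surr):
        spec = np.fft.rfft(rri)
        phases = rng.uniform(0, 2 * np.pi, spec.size)
        phases[0] = 0.0
        surr = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=rri.size)
        _, null[i] = coherence(sbp, surr, fs=fs, nperseg=nperseg)
    return f, coh, np.percentile(null, 95, axis=0)

Frequencies at which the observed coherence exceeds the surrogate threshold would then be taken as showing significant coupling.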
7

Developing Efficient Strategies for Automatic Calibration of Computationally Intensive Environmental Models

Razavi, Seyed Saman January 2013 (has links)
Environmental simulation models have been playing a key role in civil and environmental engineering decision-making processes for decades. The utility of an environmental model depends on how well the model is structured and calibrated. Model calibration is typically automated: the simulation model is linked to a search mechanism (e.g., an optimization algorithm) that iteratively generates many parameter sets (e.g., thousands of parameter sets) and evaluates them by running the model, in an attempt to minimize differences between observed data and the corresponding model outputs. The challenge arises when the environmental model is computationally intensive to run (with run-times of minutes to hours, for example), as any automatic calibration attempt then imposes a large computational burden. Such a challenge may lead model users to accept sub-optimal solutions and fail to achieve the best model performance. The objective of this thesis is to develop innovative strategies to circumvent the computational burden associated with automatic calibration of computationally intensive environmental models. The first main contribution of this thesis is a strategy called “deterministic model preemption”, which opportunistically evades unnecessary model evaluations in the course of a calibration experiment and can save a significant portion of the computational budget (as much as 90% in some cases). Model preemption monitors the intermediate simulation results while the model is running and terminates (i.e., pre-empts) the simulation early if it recognizes that running the model further would not guide the search mechanism. This strategy is applicable to a range of automatic calibration algorithms (i.e., search mechanisms) and is deterministic in that it leads to exactly the same calibration results as when preemption is not applied. Another main contribution of this thesis is developing and utilizing the concept of “surrogate data”, which is a reasonably small but representative proportion of a full set of calibration data. This concept is inspired by existing surrogate modelling strategies, where a surrogate model (also called a metamodel) is developed and utilized as a fast-to-run substitute for an original computationally intensive model. A framework is developed to efficiently calibrate hydrologic models to the full set of calibration data while running the original model only on surrogate data for the majority of candidate parameter sets, a strategy which leads to considerable computational saving. To this end, mapping relationships are developed to approximate the model performance on the full data based on the model performance on surrogate data. This framework is applicable to the calibration of any environmental model for which appropriate surrogate data and mapping relationships can be identified. As another main contribution, this thesis critically reviews and evaluates the large body of literature on surrogate modelling strategies from various disciplines, as they are the most commonly used methods to relieve the computational burden associated with computationally intensive simulation models. To reliably evaluate these strategies, a comparative assessment and benchmarking framework is developed which presents a clear, computational-budget-dependent definition of the success or failure of surrogate modelling strategies.
Two large families of surrogate modelling strategies are critically scrutinized and evaluated: “response surface surrogate” modelling, which involves statistical or data-driven function approximation techniques (e.g., kriging, radial basis functions, and neural networks), and “lower-fidelity physically-based surrogate” modelling strategies, which develop and utilize simplified models of the original system (e.g., a groundwater model with a coarse mesh). This thesis raises fundamental concerns about response surface surrogate modelling and demonstrates that, although they might be less efficient, lower-fidelity physically-based surrogates are generally more reliable, as they preserve, to some extent, the physics involved in the original model. Five different surface water and groundwater models are used across this thesis to test the performance of the developed strategies and to elaborate on the discussions. However, the strategies developed are typically simulation-model-independent and can be applied to the calibration of any computationally intensive simulation model that has the required characteristics. This thesis leaves the reader with a suite of strategies for efficient calibration of computationally intensive environmental models, while providing guidance on how to select, implement, and evaluate the appropriate strategy for a given environmental model calibration problem.
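The deterministic model preemption idea can be illustrated for the common case of a monotonically accumulating objective such as a sum of squared errors: once the partial error of a candidate parameter set exceeds the best total error found so far, the rest of the simulation cannot change the candidate's ranking and the run can be terminated. The sketch below is a minimal illustration under that assumption; simulate_step is a hypothetical callback standing in for one time step of the environmental model, not an interface from the thesis.

def preemptive_sse(simulate_step, observed, best_so_far):
    """Evaluate one candidate parameter set with deterministic preemption.

    simulate_step(t) advances the already-configured model one time step and
    returns the simulated value; observed is the calibration series;
    best_so_far is the lowest total SSE of any candidate evaluated so far.
    Because the SSE can only grow as the simulation proceeds, stopping once it
    exceeds best_so_far cannot change which candidate wins."""
    sse = 0.0
    for t, obs in enumerate(observed):
        sse += (simulate_step(t) - obs) ** 2
        if sse > best_so_far:            # preemption: this candidate is already dominated
            return float("inf"), t + 1   # flag as dominated, report steps actually run
    return sse, len(observed)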
