91

Stochastic and Multi-scale Modeling in Biology and Immunology

Tabbaa, Omar Peter January 2014 (has links)
No description available.
92

Particle subgrid scale modeling in large-eddy simulation of particle-laden turbulence

Cernick, Matthew J. 04 1900 (has links)
This thesis is concerned with particle subgrid scale (SGS) modeling in large-eddy simulation (LES) of particle-laden turbulence. Although most particle-laden LES studies have neglected the effect of the subgrid scales on the particles, several particle SGS models have been proposed in the literature. In this research, the approximate deconvolution method (ADM) and the stochastic models of Fukagata et al. (2004), Shotorban and Mashayek (2006) and Berrouk et al. (2007) are analyzed. The particle SGS models are assessed by conducting both a priori and a posteriori tests of a periodic box of decaying, homogeneous and isotropic turbulence with an initial Reynolds number of Re = 74. The model results are compared with particle statistics from a direct numerical simulation (DNS). Particles spanning a large range of Stokes numbers are tested using various filter sizes and stochastic model constant values. Simulations with and without gravity are performed to evaluate the ability of the models to account for the crossing-trajectory and continuity effects. The results show that ADM improves results but is only capable of recovering a portion of the SGS turbulent kinetic energy. Conversely, the stochastic models are able to recover sufficient energy, but show a large spread of results depending on Stokes number and filter size. The stochastic models generally perform best at small Stokes numbers. Due to their random component, the stochastic models are unable to predict preferential concentration. / Master of Applied Science (MASc)
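For readers unfamiliar with this class of models, the sketch below illustrates the general shape of a stochastic (Langevin-type) particle SGS model of the kind analyzed here: an Ornstein-Uhlenbeck process supplies the subgrid fluid velocity seen by each particle, which then enters the Stokes drag. This is a minimal illustration, not the thesis's code; all parameter values and variable names are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative placeholder parameters (assumptions, not values from the thesis)
dt    = 1e-3                  # time step
T_sgs = 0.05                  # subgrid timescale seen by the particle
k_sgs = 0.1                   # unresolved (SGS) kinetic energy
tau_p = 0.02                  # particle response time
sigma = np.sqrt(2.0 * k_sgs / 3.0)  # per-component SGS velocity r.m.s.

u_sgs = np.zeros(3)                 # modeled subgrid fluid velocity at the particle
v_p   = np.zeros(3)                 # particle velocity
u_les = np.array([1.0, 0.0, 0.0])   # filtered (resolved) velocity, assumed known

for _ in range(1000):
    # Ornstein-Uhlenbeck update: relax toward zero over T_sgs, force with
    # white noise scaled so the stationary per-component variance is sigma**2
    dW = rng.normal(0.0, np.sqrt(dt), 3)
    u_sgs += -u_sgs / T_sgs * dt + sigma * np.sqrt(2.0 / T_sgs) * dW
    # Stokes drag on the particle uses resolved + modeled subgrid velocity
    v_p += (u_les + u_sgs - v_p) / tau_p * dt

print("particle velocity after 1 s:", v_p)
```

The OU form guarantees a stationary subgrid velocity variance, which is how such stochastic models can "recover" SGS kinetic energy that deconvolution alone cannot.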
93

ASSESSMENT AND PREDICTION OF CARDIOVASCULAR STATUS DURING CARDIAC ARREST THROUGH MACHINE LEARNING AND DYNAMICAL TIME-SERIES ANALYSIS

Shandilya, Sharad 02 July 2013 (has links)
In this work, new methods of feature extraction, feature selection, stochastic data characterization/modeling, variance reduction, and measures for parametric discrimination are proposed. These methods have implications for data mining, machine learning, and information theory. A novel decision-support system is developed to guide intervention during cardiac arrest. The models are built upon knowledge extracted with signal-processing, nonlinear-dynamics, and machine-learning methods. The proposed ECG characterization, combined with information extracted from PetCO2 signals, shows viability for decision support in clinical settings. The approach, which focuses on integrating multiple features through machine-learning techniques, is well suited to the inclusion of multiple physiologic signals. Ventricular fibrillation (VF) is a common presenting dysrhythmia in the setting of cardiac arrest; its main treatment is defibrillation through direct-current countershock to achieve return of spontaneous circulation. However, defibrillation is often unsuccessful and may even lead to the transition of VF to more malignant rhythms such as asystole or pulseless electrical activity. Multiple methods have been proposed for predicting defibrillation success based on examination of the VF waveform. To date, however, no analytical technique has been widely accepted. For a given desired sensitivity, the proposed model provides significantly higher accuracy and specificity than the state of the art. Notably, within the sensitivity range of 80-90%, the method provides about 40% higher specificity. This means that when trained to the same level of sensitivity, the model yields far fewer false positives (unnecessary shocks). Also introduced is a new model that predicts recurrence of arrest after a successful countershock is delivered. To date, no other work has sought to build such a model. I validate the methods by reporting multiple performance metrics calculated on (blind) test sets.
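As a hedged illustration of the operating-point idea described above (train a classifier, then fix the decision threshold to hit a target sensitivity and read specificity off at that point), here is a minimal sketch on synthetic data. The features and data are invented stand-ins, not the thesis's ECG/PetCO2 feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def waveform_features(x):
    """Simple illustrative waveform features (not the thesis's feature set):
    amplitude spread, mean absolute slope, and a coarse spectral centroid."""
    slope = np.abs(np.diff(x)).mean()
    spec = np.abs(np.fft.rfft(x))
    centroid = (np.arange(spec.size) * spec).sum() / spec.sum()
    return np.array([x.std(), slope, centroid])

# Synthetic stand-in data: 200 pre-shock segments with a binary shock outcome
X = np.array([waveform_features(rng.normal(size=500)) for _ in range(200)])
y = rng.integers(0, 2, 200)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
scores = clf.predict_proba(X)[:, 1]

# Pick the decision threshold that meets a target sensitivity (e.g. 85%);
# in practice this would be done on held-out validation data, not the
# training set used here for brevity.
target_sens = 0.85
pos_scores = np.sort(scores[y == 1])
thr = pos_scores[int(np.floor((1 - target_sens) * (y == 1).sum()))]
sens = (scores[y == 1] >= thr).mean()
spec = (scores[y == 0] < thr).mean()
print(f"threshold={thr:.3f}  sensitivity={sens:.2f}  specificity={spec:.2f}")
```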
94

Struktura populace a modelování jejích změn: Neolitická demografická tranzice ve střední Evropě. / Modelling population structure and its changes: Neolithic demographic transition in Central Europe.

Galeta, Patrik January 2011 (has links)
Neolithic dispersal in Europe has been explained alternatively through the spread of farmers (the migrationist position) or through the adoption of farming by Mesolithic foragers (the indigenist position). Mixed explanations consider a combination of both processes. Neolithic dispersal in Central Europe was traditionally viewed as a migrationist process: it was believed that farmers colonized the area and replaced indigenous foragers. During the last decade, authors have adhered to the integrationist view, as they have observed continuity between Mesolithic and Neolithic technologies. Interestingly, the most recent genetic analyses have again invoked the idea of colonization. Surprisingly little attention has been paid to demographic modeling. Farming spread quickly across Central Europe between 5600 and 5400 calBC. Assuming colonization, Neolithic dispersal in Central Europe would have to be associated with a high fertility rate among farmers. Our goal was to test whether the fertility rate of farmers was high enough to allow them to colonize Central Europe without admixture with local foragers. We produced four stochastic models of the population dynamics of farmers during their colonization of Central Europe. Models 1-3 are based on population projection methods; Model 4 stems from the wave of advance...
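A minimal sketch of the kind of stochastic projection such models involve follows (this is not the thesis's Models 1-4; every number below is a placeholder assumption): simulate many growth trajectories of a founding farmer population over the roughly two-century expansion window and ask how often it reaches a target size without admixture.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder demographic parameters (assumptions, not the thesis's values)
n_sims = 10_000
years = 200              # ~5600-5400 calBC expansion window
n0 = 200.0               # founding farmer population
n_target = 50_000.0      # population implied by the settlement record
r_mean, r_sd = 0.012, 0.01  # mean and s.d. of the annual growth rate

final = np.empty(n_sims)
for i in range(n_sims):
    r = rng.normal(r_mean, r_sd, years)  # stochastic year-to-year growth
    final[i] = n0 * np.exp(r.sum())      # cumulative exponential growth

# Fraction of runs in which farmers alone reach the target population
print("P(colonization without admixture):", (final >= n_target).mean())
```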
95

Propriétés lagrangiennes de l'accélération turbulente des particules fluides et inertielles dans un écoulement avec un cisaillement homogène : DNS et nouveaux modèles de sous-maille de LES / Lagrangian properties of fluid and inertial particles moving in a homogeneous shear flow : DNS and new LES subgrid models

Barge, Alexis 12 June 2018 (has links)
Ce travail de thèse porte sur l’étude de l’accélération de particules fluides et inertielles en déplacement dans une turbulence soumise à un gradient de vitesse moyen. L’objectif est de récupérer des données de référence afin de développer des modèles LES stochastiques pour la prédiction de l’accélération de sous-maille et l’accélération de particules inertielles dans des conditions inhomogènes. La modélisation de l’accélération de sous-maille est effectuée à l’aide de l’approche LES-SSAM introduite par Sabel’nikov, Chtab et Gorokhovski [EPJB 80:177]. L’accélération est modélisée à l’aide de deux modèles stochastiques indépendants : un processus log-normal d’Ornstein-Uhlenbeck pour la norme d’accélération et un processus stochastique d’Ornstein-Uhlenbeck basé sur le calcul de Stratonovich pour les composantes du vecteur d’orientation de l’accélération. L’approche est utilisée pour la simulation de particules fluides et inertielles dans le cas d’une turbulence homogène isotrope et dans un cisaillement homogène. Les résultats montrent une amélioration des statistiques à petites échelles par rapport aux LES classiques. La modélisation de l’accélération des particules inertielles dans le cisaillement homogène est effectuée avec l’approche LES-STRIP introduite par Gorokhovski et Zamansky [PRF 3:034602] et est modélisée avec deux modèles stochastiques indépendants de manière similaire à l’accélération de sous-maille. Nos calculs montrent une amélioration de l’accélération et de la vitesse des particules lorsque le modèle STRIP est utilisé. Enfin dans une dernière partie, nous présentons une équation pour décrire la dynamique de particules ponctuelles de taille supérieure à l’échelle de Kolmogorov dans une turbulence homogène isotrope calculée par DNS. Les résultats sont comparés avec l’expérience et montrent que cette description reproduit bien les propriétés dynamiques des particules. / The main objective of this thesis is to study the acceleration of fluid and inertial particles moving in a turbulent flow under the influence of a homogeneous shear, in order to develop LES stochastic models that predict the subgrid acceleration of the flow and the acceleration of inertial particles. Subgrid acceleration modelling is done in the framework of the LES-SSAM approach introduced by Sabel’nikov, Chtab and Gorokhovski [EPJB 80:177]. Acceleration is predicted with two independent stochastic models: a log-normal Ornstein-Uhlenbeck process for the norm of the acceleration and an Ornstein-Uhlenbeck process expressed in the sense of Stratonovich calculus for the components of the acceleration orientation vector. The approach is used to simulate fluid and inertial particles moving in homogeneous isotropic turbulence and in homogeneous sheared turbulence. Our results show that small-scale statistics of the particles are better predicted in comparison with the classical LES approach. Modelling of inertial particle acceleration is done in the framework of LES-STRIP, introduced by Gorokhovski and Zamansky [PRF 3:034602], with two independent stochastic models, in a manner similar to the subgrid fluid acceleration. Computations of inertial particles in the homogeneous shear flow show good predictions of particle acceleration and velocity when the STRIP model is used. In the last chapter, we present an equation describing the dynamics of point-like particles whose size is larger than the Kolmogorov scale, moving in a homogeneous isotropic turbulence computed by direct numerical simulation. The results are compared with experiments and indicate that this description reproduces the dynamical properties of the particles well.
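To make the two-process structure concrete, here is a minimal sketch of that type of surrogate acceleration model: an Ornstein-Uhlenbeck process on log|a| (hence a log-normal norm) plus a crude orientation walk on the unit sphere. The renormalization step stands in for the Stratonovich formulation used in the thesis, and all parameters are placeholder assumptions rather than DNS-calibrated values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder parameters (assumptions; the thesis calibrates these from DNS)
dt, n_steps = 1e-3, 5000
tau_a = 0.1     # correlation time of the acceleration norm
sigma_l = 0.5   # intensity of log-norm fluctuations
a_mean = 1.0    # mean acceleration magnitude
tau_e = 0.05    # correlation time of the orientation components

log_a = np.log(a_mean)
e = np.array([1.0, 0.0, 0.0])  # unit orientation vector

for _ in range(n_steps):
    # Log-normal process for the norm: OU dynamics on log|a|
    dW = rng.normal(0.0, np.sqrt(dt))
    log_a += -(log_a - np.log(a_mean)) / tau_a * dt \
             + sigma_l * np.sqrt(2.0 / tau_a) * dW
    # Crude surrogate for the Stratonovich orientation process: OU-type
    # kicks on the components followed by renormalization onto the sphere
    e += -e / tau_e * dt \
         + np.sqrt(2.0 / (3.0 * tau_e)) * rng.normal(0.0, np.sqrt(dt), 3)
    e /= np.linalg.norm(e)
    a = np.exp(log_a) * e  # modeled subgrid acceleration vector

print("final modeled acceleration:", a)
```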
96

Modelagem estocástica de sequências de disparos de um conjunto de neurônios / Stochastic modeling of spike trains of a set of neurons

Arias Rodriguez, Azrielex Andres 13 August 2013 (has links)
O presente trabalho constitui um primeiro esforço por modelar disparos de neurônios usando cadeias estocásticas de memória de alcance variável. Esses modelos foram introduzidos por Rissanen (1983). A ideia principal deste tipo de modelos consiste em que a definição probabilística de cada símbolo depende somente de uma porção finita do passado e o comprimento dela é função do passado mesmo, tal porção foi chamada de "contexto" e o conjunto de contextos pode ser representado através de uma árvore. No passado vários métodos de estimação foram propostos, nos quais é necessário especificar algumas constantes, de forma que Galves et al. (2012) apresentaram o "critério do menor maximizador" (SMC), sendo este um algoritmo consistente que independe de qualquer constante. De outro lado na área da neurociência vem tomando força a ideia de que o processamento de informação do cérebro é feito de forma probabilística, por esta razão foram usados os dados coletados por Sidarta Ribeiro e sua equipe, correspondentes à atividade neuronal em ratos, para estimar as árvores de contextos que caracterizam os disparos de quatro neurônios do hipocampo e identificar possíveis associações entre eles, também foram feitas comparações de acordo com o estado comportamental do rato (Vigília / Sono), em todos os casos foi usado o algoritmo SMC para a estimação das árvores de contexto. Por último, é aberta uma discussão sobre o tamanho de amostra necessário para a implementação deste tipo de análise. / This work describes an initial effort to model spike trains of neurons using variable-length Markov chains (VLMC). These models were introduced by Rissanen (1983). The principal idea of this kind of model is that the probabilistic definition of each symbol depends only on a finite part of the past, and the length of this relevant portion is a function of the past itself. This portion is called the "context", and the set of contexts can be represented as a rooted labeled tree. Several estimation methods have been proposed in the past, all of which require fixing certain constants; for this reason Galves et al. (2012) introduced the "smallest maximizer criterion" (SMC), a consistent and constant-free model selection procedure. On the other hand, the idea that information processing in the brain is probabilistic has been gaining strength in neuroscience. For this reason, data collected by Sidarta Ribeiro and his team on neuronal activity in rats were used to estimate the context trees describing the spike trains of four neurons of the hippocampal region and to identify possible associations between them; comparisons were also made according to the behavioural state of the rat (wake/sleep). In all cases the SMC algorithm was used to estimate the context trees. Finally, a discussion is opened on the sample size required for this kind of analysis.
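To illustrate what a context tree is, the sketch below encodes a tiny variable-length Markov chain for a binary spike train: the next-symbol distribution depends on the longest suffix of the past that appears in the tree. This is a toy example, not a tree estimated from the Sidarta Ribeiro data or by the SMC algorithm.

```python
import numpy as np

# Toy context tree for a binary spike train (1 = spike, 0 = silence).
# Contexts are tuples with the most recent symbol last; values are the
# next-symbol distributions. All probabilities here are invented.
context_tree = {
    ("1",):     {"0": 0.8, "1": 0.2},  # right after a spike: refractoriness
    ("0", "0"): {"0": 0.7, "1": 0.3},
    ("1", "0"): {"0": 0.5, "1": 0.5},
}

def longest_context(past):
    """Return the longest suffix of `past` that is a context in the tree."""
    for k in range(len(past), 0, -1):
        suffix = tuple(past[-k:])
        if suffix in context_tree:
            return suffix
    raise KeyError("no matching context")

def next_symbol_probs(past):
    return context_tree[longest_context(past)]

rng = np.random.default_rng(4)
past = ["0", "0"]
for _ in range(20):  # generate a short synthetic spike train
    p = next_symbol_probs(past)
    past.append(rng.choice(list(p), p=list(p.values())))
print("".join(past))
```

Note that the memory length varies with the past itself: a spike one step back suffices to determine the distribution, while a silence requires looking one symbol further, which is exactly the variable-length property of these chains.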
98

Redundant Input Cancellation by a Bursting Neural Network

Bol, Kieran G. 20 June 2011 (has links)
One of the most powerful and important applications that the brain accomplishes is solving the sensory "cocktail party problem:" to adaptively suppress extraneous signals in an environment. Theoretical studies suggest that the solution to the problem involves an adaptive filter, which learns to remove the redundant noise. However, neural learning is also in its infancy and there are still many questions about the stability and application of synaptic learning rules for neural computation. In this thesis, the implementation of an adaptive filter in the brain of a weakly electric fish, A. Leptorhynchus, was studied. It was found to require a cerebellar architecture that could supply independent frequency channels of delayed feedback and multiple burst learning rules that could shape this feedback. This unifies two ideas about the function of the cerebellum that were previously separate: the cerebellum as an adaptive filter and as a generator of precise temporal inputs.
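The adaptive-filter idea can be illustrated with the classic LMS noise-cancellation scheme (a generic sketch, not the biophysical burst-learning model studied in the thesis): tapped delayed copies of the redundant signal play the role of the delayed feedback channels, and the learning rule shapes the weights so the predictable component is subtracted out while novel events survive. All signals and parameters below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in signals: a weak "novel" stimulus buried in a strong
# redundant signal that is also available (delayed) as a reference.
n, taps, mu = 5000, 32, 1e-3
redundant = np.sin(2 * np.pi * 0.01 * np.arange(n)) + 0.1 * rng.normal(size=n)
novel = (rng.random(n) < 0.002).astype(float)  # sparse events to preserve
sensory = redundant + novel

w = np.zeros(taps)   # adaptive filter weights (one per delay channel)
out = np.zeros(n)
for t in range(taps, n):
    x = redundant[t - taps:t][::-1]  # delayed feedback channels
    pred = w @ x                     # prediction of the redundant component
    err = sensory[t] - pred          # residual = novel signal + noise
    w += mu * err * x                # LMS learning rule
    out[t] = err

# After convergence, `out` retains the sparse novel events while the
# predictable component has been cancelled.
print("input var:", sensory[-1000:].var(), " residual var:", out[-1000:].var())
```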
100

Sequence stratigraphic interpretation methods for low-accommodation, alluvial depositional sequences: applications to reservoir characterization of Cut Bank field, Montana

Ramazanova, Rahila 15 May 2009 (has links)
In the South Central Cut Bank Sand Unit (SCCBSU) of Cut Bank field, primary production and waterflood projects have recovered only 29% of the original oil in place from heterogeneous, fluvial sandstone deposits. Using high-resolution sequence stratigraphy and geostatistical analysis, I developed a geologic model that may improve the ultimate recovery of oil from this field. In this study, I assessed sequence stratigraphic concepts for continental settings and extended the techniques to analyze the low-accommodation alluvial systems of the Cut Bank and Sunburst members of the lower Kootenai Formation (Cretaceous) in Cut Bank field. Identification and delineation of five sequences and their bounding surfaces led to a better understanding of reservoir distribution and variability. Recognition of stacking patterns allowed prediction of reservoir rock quality. Within each sequence, the best-quality reservoir rocks are strongly concentrated in the lowstand systems tract. Erosional events associated with falling base level resulted in stacked, communicating (multistory) reservoirs. The lowermost Cut Bank sandstone, a braided-stream parasequence, has the highest reservoir quality; its average net-to-gross ratio (0.6) is greater than in the other reservoir intervals. Little additional stratigraphically untapped oil is expected in the lowermost Cut Bank sandstone. Over most of the SCCBSU, the Sunburst and upper Cut Bank strata are valley-fill complexes with interfluves that may laterally compartmentalize reservoir sands. The basal Sunburst sand (Sunburst 1, average net-to-gross ratio ~0.3) has better reservoir quality than the other Sunburst or upper Cut Bank sands, but its reservoir quality is significantly less than that of the lower Cut Bank sand. Geostatistical analysis provided equiprobable representations of reservoir heterogeneity. Simulated reservoir geometries resulted in an improved description of reservoir distribution and connectivity, as well as of the occurrence of flow barriers. The models resulting from this study can be used to improve reservoir management and well placement and to predict reservoir performance in Cut Bank field. The technical approaches and tools from this study can be applied to improve descriptions of other oil and gas reservoirs in similar depositional systems.
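As a generic illustration of "equiprobable representations of heterogeneity" (not the geostatistical workflow of this study; the grid size and correlation length are placeholders, and only the 0.6 net-to-gross is taken from the abstract), the sketch below thresholds smoothed random fields to generate sand/shale realizations that honor a target net-to-gross ratio.

```python
import numpy as np

# Minimal sketch: equiprobable sand/shale realizations obtained by
# thresholding spatially correlated random fields at a target net-to-gross.
nx, nz = 200, 50
net_to_gross = 0.6  # target for the lowermost Cut Bank interval (from text)

def realization(seed, smoothing_passes=10):
    r = np.random.default_rng(seed).normal(size=(nz, nx))
    # Crude spatial correlation via repeated box-filter smoothing
    for _ in range(smoothing_passes):
        r = (r + np.roll(r, 1, 0) + np.roll(r, -1, 0)
               + np.roll(r, 1, 1) + np.roll(r, -1, 1)) / 5.0
    cutoff = np.quantile(r, 1.0 - net_to_gross)
    return r > cutoff  # True = reservoir sand, False = shale

models = [realization(s) for s in range(10)]  # 10 equiprobable realizations
print("realized net-to-gross:", np.mean(models[0]))
```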
