
Adaptation of the hybrid Eulerian/Lagrangian stochastic model of the CFD code Code_Saturne to pollutant atmospheric dispersion at the micro-meteorological scale and comparison with the Eulerian method

Bahlali, Meïssam 19 October 2018 (has links)
This Ph.D. thesis is part of a project that aims at modeling pollutant atmospheric dispersion with the computational fluid dynamics code Code_Saturne. The objective is to simulate the atmospheric dispersion of pollutants in complex environments, that is to say around power plants, industrial sites, or in urban areas. In this context, the focus is on modeling dispersion at the micro-scale, that is, for distances of the order of a few meters to a few kilometers, corresponding to time scales of the order of a few tens of seconds to a few tens of minutes: this is also called the near-field area. The approach followed in this thesis is a hybrid Eulerian/Lagrangian formulation, where the mean dynamical fields of the carrier fluid (pressure, velocity, temperature, turbulence) are calculated through an Eulerian approach and then provided to the Lagrangian solver. This type of formulation is commonly used in the atmospheric literature for its numerical efficiency. The Lagrangian stochastic model considered in our work is the Simplified Langevin Model (SLM), developed by Pope (1985, 2000). This model belongs to the methods commonly referred to as PDF (probability density function) methods and, to our knowledge, has not been used before in the context of atmospheric dispersion. First, we show that the SLM meets the so-called well-mixed criterion (Thomson, 1987). This criterion, essential for any Lagrangian stochastic model to be regarded as acceptable, states that if particles are initially uniformly distributed in an incompressible fluid, they must remain so.
We verify that the well-mixed criterion is satisfied for three cases of inhomogeneous turbulence representative of a wide range of practical applications: a mixing layer, an infinite plane channel, and an atmospheric-like case involving an obstacle within a neutral boundary layer. We show that satisfying the well-mixed criterion simply requires the correct introduction of the pressure-gradient term as the mean drift term in the Langevin model (Pope, 1987; Minier et al., 2014; Bahlali et al., 2018c). We also discuss the importance of consistency between Eulerian and Lagrangian fields in the framework of such hybrid Eulerian/Lagrangian formulations. Then, we validate the model in the case of a continuous point-source pollutant release under uniform wind and homogeneous turbulence. In these conditions, an analytical solution is available, allowing a precise verification. We observe that in this case the Lagrangian model correctly distinguishes the near-field and far-field diffusion regimes, which is not the case for an Eulerian model based on the eddy-viscosity hypothesis (Bahlali et al., 2018b). Finally, we work on validating the model against several experimental campaigns in the real atmosphere, taking into account atmospheric thermal stratification and the presence of buildings. The first experimental program considered in our work was conducted on the SIRTA site (Site Instrumental de Recherche par Télédétection Atmosphérique), in the southern suburbs of Paris, and involves a stably stratified surface layer. The second campaign studied is the MUST (Mock Urban Setting Test) experiment. Conducted in the United States, in the Utah desert, this experiment aims at representing an idealized city through several rows of containers. Two cases are simulated and analyzed, corresponding to neutral and stable atmospheric stratifications, respectively (Bahlali et al., 2018a).
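The well-mixed criterion described in this abstract can be illustrated with a toy simulation. The sketch below is an assumption-laden demo, not the thesis's implementation: a 1-D Ornstein-Uhlenbeck (simplified Langevin) velocity model in stationary homogeneous turbulence, where the mean pressure-gradient drift term vanishes, so initially uniform particles should remain uniformly distributed on a periodic domain. The constants `T_L`, `sigma`, and the domain are illustrative choices.

```python
import numpy as np

# Toy 1-D well-mixed check for a simplified Langevin particle model.
# All parameter values are illustrative assumptions, not thesis data.
rng = np.random.default_rng(0)

n, L = 50_000, 1.0        # particle count, periodic domain length
T_L = 0.5                 # Lagrangian velocity time scale (assumed)
sigma = 0.3               # rms turbulent velocity (assumed)
dt = 0.01

x = rng.uniform(0.0, L, n)        # uniform initial positions
u = rng.normal(0.0, sigma, n)     # equilibrium Gaussian velocities

for _ in range(2000):
    # Langevin velocity update: relaxation toward zero mean plus
    # white-noise forcing; no mean drift term in homogeneous turbulence.
    u += -u / T_L * dt + sigma * np.sqrt(2.0 * dt / T_L) * rng.normal(size=n)
    x = (x + u * dt) % L          # advect with periodic wrapping

counts, _ = np.histogram(x, bins=20)
print(counts.max() - counts.min())  # stays small: particles remain well mixed
```

If the drift term were mis-specified in inhomogeneous turbulence, the analogous histogram would drift away from uniformity, which is exactly what the criterion forbids.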

Stochastic model to support decision-making in renewable energy generation investments based on the investor's portfolio and risk aversion

Freitas, Renan Alves de 05 April 2018 (has links)
Brazil's energy matrix differs from the world average: most of its electricity is generated from renewable resources. However, the expected load growth, the problems caused by burning fossil fuels, and the environmental impacts of large hydropower plants imply the need to increase the installed capacity of other renewable sources. This increase in capacity requires investments in renewable generation, and the investment decision is affected by different aspects such as variability in generation, uncertainties in the energy market, investor risk aversion, and the company's current portfolio, among others. This work presents a stochastic decision-support model for investments in renewable energy that considers these factors and maximizes the expected return for a given level of risk aversion. To represent the uncertainties of the problem, the Conditional Value-at-Risk (CVaR) is used to model the portfolio risk with respect to the most unfavorable future revenue scenarios. The scenarios are generated based on past generation data and on the output of NEWAVE, the medium-term operation planning model of the system.
The simulations show how the current portfolio and the investment option are related in terms of energy complementation. The evaluation of the CVaR also makes explicit the risk that an intermittent source entails for the company; at the same time, the company's current portfolio can serve as a hedge for the investment, thus reducing the risk of the project. The results show that diversifying the company's installed generation capacity and composing complementary generating sources reduce the financial risks of the investor's portfolio. The decision maker's level of risk aversion also influences the market position the company should adopt: the model tends towards more conservative solutions as the degree of risk aversion increases. This confirms the trade-off between risk aversion and expected return reported in the literature.
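The risk measure this abstract relies on, CVaR (expected shortfall), can be sketched in a few lines. The revenue distribution and the 95% confidence level below are illustrative assumptions, not the thesis's data or scenario model.

```python
import numpy as np

# Sample CVaR of simulated revenue scenarios (illustrative numbers).
rng = np.random.default_rng(1)
revenue = rng.normal(100.0, 25.0, 10_000)   # hypothetical revenue scenarios

def cvar(samples, alpha=0.95):
    """Mean revenue over the worst (1 - alpha) fraction of scenarios."""
    cutoff = np.quantile(samples, 1.0 - alpha)   # the VaR threshold
    return samples[samples <= cutoff].mean()

print(round(cvar(revenue), 1))  # mean revenue in the worst 5% of scenarios
```

Maximizing expected return subject to a CVaR constraint (or a weighted combination of the two) is the standard way such a measure enters a stochastic investment model; a higher risk aversion puts more weight on this tail average, which is why the solutions become more conservative.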

Stochastic modeling of aggregation and flocculation processes in chemistry

Paredes Moreno, Daniel 27 October 2017 (has links)
We center our interest on the Population Balance Equation (PBE). This equation describes the time evolution of systems of colloidal particles in terms of their number density function (NDF) when processes of aggregation and breakage are involved. In the first part, we investigated the formation of groups of particles and the relative importance of the available variables in the formation of these groups, using the data in (Vlieghe 2014) and exploratory techniques such as principal component analysis, cluster analysis, and discriminant analysis. We applied this scheme of analysis to the initial population of particles as well as to the resulting populations under different hydrodynamic conditions. In the second part, we studied the use of the PBE in terms of the standard moments of the NDF, together with the Quadrature Method of Moments (QMOM) and Generalized Minimal Extrapolation (GME), in order to recover the time evolution of a finite set of standard moments of the NDF. The QMOM method uses an application of the Product-Difference algorithm, while GME recovers a discrete non-negative measure given a finite set of its standard moments. In the third part, we proposed a discretization scheme to find a numerical approximation to the solution of the PBE. We used three cases where the analytical solution is known (Silva et al. 2011) in order to compare the theoretical solution to the approximation found with the discretization scheme. The last part concerns the estimation of the parameters involved in modeling the aggregation and breakage processes in the PBE. We proposed a method to estimate these parameters using the numerical approximation found, together with the extended Kalman filter. The method estimates the parameters iteratively at each time step, using a nonlinear least-squares estimator.
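The standard moments m_k = ∫ x^k n(x) dx that QMOM and GME evolve instead of the full NDF can be computed directly for a simple case. The exponential NDF below is an illustrative assumption chosen because its moments are known exactly (m_k = k!), which makes the numerical check easy; it is not a distribution used in the thesis.

```python
import numpy as np

# Standard moments of an assumed exponential NDF, n(x) = e^{-x},
# computed by a midpoint-rule quadrature on [0, 50].
dx = 1e-3
x = (np.arange(50_000) + 0.5) * dx   # midpoint grid
ndf = np.exp(-x)

moments = [(x**k * ndf).sum() * dx for k in range(5)]
print([round(m, 3) for m in moments])  # ≈ [1.0, 1.0, 2.0, 6.0, 24.0] = 0!..4!
```

Moment methods close the PBE on a small vector like `moments` and evolve it in time; the Product-Difference algorithm then reconstructs quadrature nodes and weights from such a vector whenever the aggregation and breakage integrals must be evaluated.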

Stochastic Modeling and Simulation of the TCP protocol

Olsén, Jörgen January 2003 (has links)
The success of the current Internet relies to a large extent on a cooperation between the users and the network. The network signals its current state to the users by marking or dropping packets. The users then strive to maximize the sending rate without causing network congestion. To achieve this, the users implement a flow-control algorithm that controls the rate at which data packets are sent into the Internet. More specifically, the Transmission Control Protocol (TCP) is used by the users to adjust the sending rate in response to changing network conditions. TCP uses the observation of packet loss events and estimates of the round trip time (RTT) to adjust its sending rate.

In this thesis we investigate and propose stochastic models for TCP. The models are used to estimate network performance measures like throughput, link utilization, and packet loss rate. The first part of the thesis introduces the TCP protocol and contains an extensive TCP modeling survey that summarizes the most important TCP modeling work. Reviewed models are categorized as renewal theory models, fixed-point methods, fluid models, processor sharing models, or control theoretic models. The merits of each category are discussed and guidelines are given for which framework to use in future TCP modeling.

The second part of the thesis contains six papers on TCP modeling. Within the renewal theory framework we propose single-source TCP-Tahoe and TCP-NewReno models. We investigate the performance of these protocols in both a DropTail and a RED queuing environment. The aspects of TCP performance that depend inherently on the actual implementation of the flow-control algorithm are singled out from those that depend on the queuing environment.

Using the fixed-point framework, we propose models that estimate packet loss rate and link utilization for a network with multiple TCP-Vegas, TCP-SACK, and TCP-Reno on/off sources. The TCP-Vegas model is novel and is the first model capable of estimating the network's operating point for TCP-Vegas sources sending on/off traffic. All TCP and network models in the contributed research papers are validated via simulations with the network simulator ns-2.

This thesis serves both as an introduction to TCP and as an extensive orientation about state-of-the-art stochastic TCP models.
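To give a flavor of what renewal-theory TCP models produce, the sketch below evaluates the classic "square-root formula" for steady-state TCP throughput, B ≈ (MSS/RTT)·√(3/(2p)) with loss probability p. This is the well-known textbook result from that literature, not necessarily the exact single-source models proposed in the thesis; the example traffic parameters are assumptions.

```python
from math import sqrt

def tcp_throughput(mss_bytes: float, rtt_s: float, loss_prob: float) -> float:
    """Classic renewal-theory estimate of steady-state TCP throughput,
    in bytes per second: (MSS / RTT) * sqrt(3 / (2 p))."""
    return (mss_bytes / rtt_s) * sqrt(3.0 / (2.0 * loss_prob))

# Example: 1460-byte segments, 100 ms RTT, 1% packet loss.
print(round(tcp_throughput(1460, 0.100, 0.01)))  # roughly 1.8e5 bytes/s
```

The fixed-point models mentioned above couple such a per-source throughput expression with a queue model: the sources' aggregate rate determines the loss probability, which in turn determines the rates, and the network operating point is the fixed point of that loop.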

Reaction Constraints for the Pi-Calculus - A Language for the Stochastic and Spatial Modeling of Cell-Biological Processes

John, Mathias 26 August 2010 (has links) (PDF)
For cell-biological processes, it is the complex interaction of their biochemical components, affected by both stochastic and spatial considerations, that creates the overall picture. Formal modeling provides a method to overcome the limits of experimental observation in the wet-lab by moving to the abstract world of the computer. The limits of the abstract world in turn depend on the expressiveness of the modeling language used to formally describe the system under study. In this thesis, reaction constraints for the pi-calculus are proposed as a language for the stochastic and spatial modeling of cell-biological processes. The goal is to develop a language with sufficient expressive power to model dynamic cell structures, like fusing compartments. To this end, reaction constraints are augmented with two language constructs: priority and a global imperative store, yielding two different modeling languages with both non-deterministic and stochastic semantics. Through several modeling examples, e.g. of Euglena's phototaxis, and extensive expressiveness studies, e.g. an encoding of the spatial modeling language BioAmbients including a proof of its correctness, the usefulness of reaction constraints, priority, and a global imperative store for the modeling of cell-biological processes is shown. Besides dynamic cell structures, different modeling styles, e.g. individual-based vs. population-based modeling, and different abstraction levels, e.g. as provided by reaction kinetics following the law of mass action or the Michaelis-Menten theory, are considered.
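The stochastic semantics mentioned in this abstract ultimately assigns exponentially distributed waiting times to mass-action reactions, which is the regime a Gillespie-style simulation samples. The sketch below is an illustrative assumption, not the thesis's language or tooling: a minimal stochastic simulation of the single reaction A + B → C with rate constant k, chosen to show how a propensity drives the next-reaction time.

```python
import random

# Minimal Gillespie-style simulation of one mass-action reaction.
# The network (A + B -> C, rate k) and all numbers are assumptions.
random.seed(0)

k = 0.005                        # stochastic rate constant (assumed)
a, b, c, t = 100, 100, 0, 0.0    # molecule counts and simulated time

while a > 0 and b > 0:
    propensity = k * a * b               # mass-action propensity
    t += random.expovariate(propensity)  # exponential waiting time
    a, b, c = a - 1, b - 1, c + 1        # fire A + B -> C

print(c, round(t, 2))  # every A/B pair eventually reacts
```

With several competing reactions, the same loop would sum the propensities to draw the waiting time and pick the reaction to fire in proportion to its propensity; a stochastic pi-calculus process term induces exactly such a reaction set.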

Stochastic Analysis Of Flow And Solute Transport In Heterogeneous Porous Media Using Perturbation Approach

Chaudhuri, Abhijit 01 1900 (has links)
The analysis of flow and solute transport problems in porous media is affected by uncertainty built into both the boundary conditions and the spatial variability of the system parameters. Experimental investigation reveals that the parameters may vary across scales by several orders of magnitude. This affects the solute plume characteristics in field-scale problems and causes uncertainty in the prediction of concentration. The main focus of the present thesis is to analyze the probabilistic behavior of solute concentration in three-dimensional (3-D) heterogeneous porous media. The framework for the probabilistic analysis has been developed using a perturbation approach, for both a spectral-based analytical method and a finite element based numerical method. The results of the probabilistic analysis are presented either in terms of solute plume characteristics or in terms of the prediction uncertainty of the concentration. After a brief introduction on the role of stochastic analysis in subsurface hydrology in chapter 1, a detailed review of the literature is presented in chapter 2 to establish the existing state of the art in research on the probabilistic analysis of flow and transport in simple and complex heterogeneous porous media. The literature review focuses mainly on methods of solution of the stochastic differential equation. The perturbation-based spectral method is often used for the probabilistic analysis of flow and solute transport problems. Using this analytical method, a nonlocal equation is solved to derive the expression of the spatial plume moments. The spatial plume moments represent the solute movement and spreading in an average sense. In chapter 3 of the present thesis, the local dispersivity is also assumed to be a random space function, along with the hydraulic conductivity. For various correlation coefficients of the random parameters, results in terms of the field-scale effective dispersivity are presented to demonstrate the effect of local dispersivity variation in space.

The randomness of the local dispersivity is found to reduce the effective field-scale dispersivity. The transverse effective macrodispersivity is affected more than the longitudinal effective macrodispersivity by the random spatial variation of local dispersivity. The reduction in effective field-scale longitudinal dispersivity is larger for a positive correlation coefficient. The applicability of the analytical method discussed in the earlier chapter is limited to simple boundary conditions, and the solution by the spectral method, in terms of statistical moments of concentration as a function of space and time, requires higher-dimensional integration. The perturbation-based stochastic finite element method (SFEM) is an alternative method for performing the probabilistic analysis of concentration; its use is not common in the literature of stochastic subsurface hydrology. The perturbation-based SFEM, which uses the FEM for spatial discretization of the steady-state flow equation and a Laplace transform for the solute transport equation, is developed in chapter 4. The SFEM is formulated using a Taylor series of the dependent variable up to the second-order term. This results in a second-order accurate mean and a first-order accurate standard deviation of concentration. In this study the governing medium properties, viz. hydraulic conductivity, dispersivity, molecular diffusion, porosity, sorption coefficient, and decay coefficient, are considered to vary randomly in space. The accuracy of the results and the computational efficiency of the SFEM are compared with the Monte Carlo simulation method (MCSM) for both 1-D and 3-D problems. The comparison of results obtained by SFEM and MCSM indicates that SFEM is capable of providing reasonably accurate mean and standard deviation of concentration.

The Laplace transform based SFEM is simpler and advantageous since it does not require any stability criterion for choosing the time step. However, it is not applicable to nonlinear transport problems or unsteady flow conditions. In those situations, a finite difference method is adopted for the time discretization. The first part of chapter 5 deals with the formulation of the time-domain SFEM for the linear solute transport problem. The SFEM is then extended to a problem that involves uncertainty in both the system parameters and the boundary/source conditions. For the flow problem, the randomness in the boundary condition is attributed to the random spatial variation of recharge at the top of the domain. The random recharge is modeled using a mean, a standard deviation, and a 2-D spatial correlation function. It is observed that even for the deterministic recharge case, the behavior of the prediction uncertainty of concentration in space is affected significantly by the variation of the flow field. When the effect of randomness of the recharge condition is included, the standard deviation of concentration increases further. For solute transport, the concentration input at the source is modeled as a time-varying random process. Two types of random source condition are considered: first, the amount of solute mass released at uniform time intervals is random; second, the source is treated as a Poisson process. For the case of multiple random mass releases, the stochastic response function due to the stochastic system is obtained using the SFEM. Comparing the results for the two types of random sources, it is found that the prediction uncertainty is larger when the source is modeled as a Poisson process. The probabilistic analysis of nonlinear solute transport problems using MCSM often requires a large computational cost.

The formulation of the alternative, more efficient method, SFEM, for the nonlinear solute transport problem is presented in chapter 6. A general Langmuir-Freundlich isotherm is considered to model the equilibrium mass transfer between the aqueous and sorbed phases. In the SFEM formulation, which uses the Taylor series expansion, the zeroth-order derivatives of concentration are obtained by solving a nonlinear algebraic equation, while the higher-order derivatives are obtained by solving linear equations. During transport, nonlinear sorbing solutes are characterized by sharp solute fronts with a traveling-wave behavior, and because of this the prediction uncertainty is significantly higher. The comparison of the accuracy and computational efficiency of SFEM with MCSM for 1-D and 3-D problems reveals that the performance of SFEM for the nonlinear problem is good and similar to that for the linear problem. In chapter 7, the nonlinear SFEM is extended to the probabilistic analysis of a biodegrading solute, which is modeled by a set of PDEs coupled with nonlinear Monod-type source/sink terms. In this study, a biodegradation problem involving a single solute degraded by a single class of microorganisms, coupled with dynamic microbial growth, is attempted using this method. The temporal behavior of the mean and standard deviation of substrate concentration is not monotonic; both show peaks before reaching a lower steady-state value. A comparison between SFEM and MCSM for the mean and standard deviation of concentration is made for various stochastic cases of the 1-D problem. In most of the cases the results compare reasonably well. The probabilistic behavior of substrate concentration is analyzed for different correlation coefficients between the physical parameters (hydraulic conductivity, porosity, dispersivity, and diffusion coefficient) and the biological parameters (maximum substrate utilization rate and the coefficient of cell decay).

It is observed that a positive correlation between the two sets of parameters results in a lower mean and a significantly higher standard deviation of substrate concentration. In the previous chapters, the stochastic analysis of the prediction uncertainty of concentration was presented for simple problems where the system parameters are modeled as statistically homogeneous random fields. Experimental investigations in a small watershed point towards a complex geological substratum. It has been observed through 2-D electrical resistivity imaging that the interface between the layers of the highly conductive weathered zone and the low-conductivity clay is very irregular and complex in nature. In chapter 8, a theoretical model based on a stochastic approach is developed to simulate the complex geological structure of the weathered zone, using the 2-D electrical image. The statistical parameters of the hydraulic conductivity field are estimated using data obtained from the Magnetic Resonance Sounding (MRS) method. Due to the large complexity in the distribution of the weathered zone, the stochastic analysis of seepage flux has been carried out using MCSM. A better characterization of the domain, based on sufficient experimental data and a suitable model of the random conductivity field, may allow the more efficient SFEM to be used. The flow domain is modeled as (i) an unstructured random field consisting of a single material with spatial heterogeneity, and (ii) a structured random field, using 2-D electrical imaging, composed of two layers with different heterogeneous random hydraulic properties. The simulations show that the prediction uncertainty of the seepage flux is comparatively smaller when the structured modeling framework is used rather than the unstructured one. Finally, chapter 9 summarizes the important conclusions drawn from the various chapters.
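The perturbation idea behind the SFEM described above, a Taylor expansion of the output about the mean parameter giving a second-order mean and a first-order standard deviation, can be illustrated on a scalar toy model and checked against Monte Carlo. The model f(K) = 1/K (a travel-time-like quantity) and the parameter statistics are assumptions for the demo, not the thesis's transport equations.

```python
import numpy as np

# Second-order-mean / first-order-std perturbation estimates for a toy
# model f(K) = 1/K with K ~ N(mu_K, sigma_K^2), compared with Monte Carlo.
rng = np.random.default_rng(2)

mu_K, sigma_K = 10.0, 0.5
f = lambda K: 1.0 / K
fp = lambda K: -1.0 / K**2        # f'
fpp = lambda K: 2.0 / K**3        # f''

# Perturbation (Taylor-series) estimates about the mean parameter.
mean_pert = f(mu_K) + 0.5 * fpp(mu_K) * sigma_K**2
std_pert = abs(fp(mu_K)) * sigma_K

# Monte Carlo reference.
K = rng.normal(mu_K, sigma_K, 200_000)
mean_mc, std_mc = f(K).mean(), f(K).std()

print(round(mean_pert, 5), round(mean_mc, 5))  # very close for small sigma_K
```

As in the thesis's comparisons with MCSM, the perturbation estimates are accurate when the coefficient of variation of the input is modest, at a tiny fraction of the Monte Carlo cost; for strongly nonlinear outputs (sharp sorbing fronts) the agreement degrades, which is where the higher prediction uncertainty noted above comes from.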
78

Stochastic Modeling and Analysis of Power Systems with Intermittent Energy Sources

Pirnia, Mehrdad 10 February 2014 (has links)
Electric power systems continue to increase in complexity because of the deployment of market mechanisms, the integration of renewable generation and distributed energy resources (DER) (e.g., wind and solar), and the penetration of electric vehicles and other price-sensitive loads. These revolutionary changes, and the consequent increase in uncertainty and dynamic behavior, call for significant modifications to power system operation models, including unit commitment (UC), economic load dispatch (ELD) and optimal power flow (OPF). Planning and operation of these "smart" electric grids are expected to be impacted significantly because of the intermittent nature of the various supply and demand resources that have recently penetrated the system. The main focus of this thesis is the application of the Affine Arithmetic (AA) method to power system operational problems. The AA method is a very efficient and accurate tool for incorporating uncertainties, as it takes into account all the information shared among dependent variables by considering their correlations, and hence provides less conservative bounds than the Interval Arithmetic (IA) method. Moreover, the AA method does not require assumptions to approximate the probability density function (pdf) of random variables. In order to take advantage of the AA method in power flow analysis, a novel formulation of the power flow problem within an optimization framework that includes complementarity constraints is first proposed. The power flow problem is formulated as a mixed complementarity problem (MCP), which can take advantage of robust and efficient state-of-the-art nonlinear programming (NLP) and complementarity problem solvers. Based on the proposed MCP formulation, it is formally demonstrated that the Newton-Raphson (NR) solution of the power flow problem is essentially a step of the traditional Generalized Reduced Gradient (GRG) algorithm. 
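The Newton-Raphson power flow iteration mentioned above can be sketched on a toy two-bus network (a slack bus at 1.0 p.u. feeding a PQ load bus over a lossless line). The solver, network values, and the finite-difference Jacobian below are illustrative assumptions, not the thesis's MCP formulation:

```python
import math

def newton_raphson_2d(f, x0, tol=1e-10, max_iter=50, h=1e-7):
    """Newton-Raphson for a 2-equation nonlinear system, using a
    finite-difference Jacobian and Cramer's rule for the update step."""
    x = list(x0)
    for _ in range(max_iter):
        fx = f(x)
        if max(abs(v) for v in fx) < tol:
            return x
        jac = [[0.0, 0.0], [0.0, 0.0]]
        for j in range(2):                      # Jacobian column j
            xp = list(x)
            xp[j] += h
            fp = f(xp)
            for i in range(2):
                jac[i][j] = (fp[i] - fx[i]) / h
        det = jac[0][0] * jac[1][1] - jac[0][1] * jac[1][0]
        x[0] += (-fx[0] * jac[1][1] + fx[1] * jac[0][1]) / det
        x[1] += (-fx[1] * jac[0][0] + fx[0] * jac[1][0]) / det
    raise RuntimeError("Newton-Raphson did not converge")

def mismatch(x, p_load=0.5, q_load=0.2, b=10.0):
    """Active/reactive power mismatches at the load bus of a two-bus
    system with a purely reactive line of susceptance b (p.u. values,
    invented for illustration). x = [voltage angle, voltage magnitude]."""
    theta, v = x
    dp = b * v * math.sin(theta) + p_load               # active mismatch
    dq = b * v * v - b * v * math.cos(theta) + q_load   # reactive mismatch
    return [dp, dq]

theta, v = newton_raphson_2d(mismatch, [0.0, 1.0])      # flat start
```

Driving both mismatches to zero from the flat start recovers the usual high-voltage operating point (here roughly v ≈ 0.98 p.u. with a small negative angle).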
The solution of the proposed MCP model is compared with that of the commonly used NR method on a variety of small-, medium-, and large-sized systems in order to examine the flexibility and robustness of the approach. The MCP-based approach is then applied to a power flow problem under uncertainty, in order to obtain operational ranges for the variables based on the AA method, considering active and reactive power demand uncertainties. The proposed approach does not rely on the pdfs of the uncertain variables and is therefore shown to be more efficient than traditional solution methodologies such as Monte Carlo Simulation (MCS). Also, because of the characteristics of the MCP-based method, the resulting bounds take into consideration the limits on real and reactive power generation. The thesis furthermore proposes a novel AA-based method to solve the OPF problem with uncertain generation sources and hence determine the operating margins of the thermal generators in systems under these conditions. In the AA-based OPF problem, all state and control variables are treated in affine form, comprising a center value and corresponding noise magnitudes that represent forecast error, model error, and other sources of uncertainty, without the need to assume a pdf. The AA-based approach is benchmarked against MCS-based intervals and is shown to obtain bounds close to those obtained with the MCS method, although slightly more conservative. Furthermore, the proposed algorithm for solving the AA-based OPF problem is shown to be efficient, as it needs neither pdf approximations of the random variables nor iterations to converge to a solution. The applicability of the suggested approach is tested on a large real European power system.
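The contrast between AA and IA bounds described in this abstract can be demonstrated with a minimal affine form supporting only addition and subtraction. This is a sketch, not the thesis's implementation; production AA libraries also handle nonlinear operations through affine approximations:

```python
class Affine:
    """Minimal affine form x = x0 + sum_i x_i * eps_i, with eps_i in [-1, 1].
    Shared noise symbols eps_i are what let AA track correlations."""
    _next_symbol = 0

    def __init__(self, center, terms=None):
        self.center = center
        self.terms = dict(terms or {})          # noise symbol -> coefficient

    @classmethod
    def from_interval(cls, lo, hi):
        """Wrap an interval [lo, hi] as an affine form with one fresh symbol."""
        sym = cls._next_symbol
        cls._next_symbol += 1
        return cls((lo + hi) / 2.0, {sym: (hi - lo) / 2.0})

    def __add__(self, other):
        terms = dict(self.terms)
        for sym, coef in other.terms.items():
            terms[sym] = terms.get(sym, 0.0) + coef
        return Affine(self.center + other.center, terms)

    def __sub__(self, other):
        terms = dict(self.terms)
        for sym, coef in other.terms.items():
            terms[sym] = terms.get(sym, 0.0) - coef
        return Affine(self.center - other.center, terms)

    def interval(self):
        """Tightest enclosing interval of the affine form."""
        radius = sum(abs(c) for c in self.terms.values())
        return (self.center - radius, self.center + radius)

x = Affine.from_interval(0.9, 1.1)   # e.g. an uncertain power demand, in p.u.
d = x - x                            # AA knows both operands are the same quantity
```

Here `d.interval()` is exactly (0.0, 0.0), because the shared noise symbol cancels; plain interval arithmetic on [0.9, 1.1] − [0.9, 1.1] would return the much looser [−0.2, 0.2]. This cancellation of correlated uncertainty is the source of the less conservative bounds reported in the abstract.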
79

Stochastic modeling and decision making in two healthcare applications: inpatient flow management and influenza pandemics

Shi, Pengyi 13 January 2014 (has links)
Delivering health care services in an efficient and effective way has become a great challenge for many countries due to the aging population worldwide, rising health expenses, and increasingly complex healthcare delivery systems. It is widely recognized that models and analytical tools can aid decision-making at various levels of the healthcare delivery process, especially when decisions have to be made under uncertainty. This thesis employs stochastic models to improve decision-making under uncertainty in two specific healthcare settings: inpatient flow management and infectious disease modeling. In Part I of this thesis, we study patient flow from the emergency department (ED) to hospital inpatient wards. This line of research aims to develop insights into effective inpatient flow management to reduce the waiting time for admission to inpatient wards from the ED. Delayed admission to inpatient wards, also known as ED boarding, has been identified as a key contributor to ED overcrowding and is a major challenge for many hospitals. Part I consists of three main chapters. In Chapter 2 we present an extensive empirical study of the inpatient department at our collaborating hospital. Motivated by this empirical study, in Chapter 3 we develop a high-fidelity stochastic processing network model to capture inpatient flow with a focus on the transfer process from the ED to the wards. In Chapter 4 we devise a new analytical framework, two-time-scale analysis, to predict time-dependent performance measures for some simplified versions of our proposed model. We explore both exact Markov chain analysis and diffusion approximations. Part I of the thesis makes contributions in three dimensions. First, we identify several novel features that need to be built into our proposed stochastic network model. 
With these features, our model is able to capture inpatient flow dynamics at hourly resolution and reproduce the empirical time-dependent performance measures, whereas traditional time-varying queueing models fail to do so. These features include unconventional non-i.i.d. (independent and identically distributed) service times, an overflow mechanism, and allocation delays. Second, our two-time-scale framework overcomes a number of challenges faced by existing analytical methods in analyzing models with these novel features, including time-varying arrivals and extremely long service times. Third, analyzing the developed stochastic network model generates a set of useful managerial insights, which allow hospital managers to (i) identify strategies to reduce the waiting time and (ii) evaluate the trade-off between the benefit of reducing ED congestion and the cost of implementing certain policies. In particular, we identify early discharge policies that can eliminate the excessively long waiting times for patients requesting beds in the morning. In Part II of the thesis, we model the spread of influenza pandemics with a focus on identifying factors that may lead to multiple waves of outbreak. This line of research aims to provide insights and guidelines to public health officials in pandemic preparedness and response. In Chapter 6 we evaluate the impact of seasonality and viral mutation on the course of an influenza pandemic. In Chapter 7 we evaluate the impact of changes in social mixing patterns, particularly mass gatherings and holiday traveling, on the disease spread. In Chapters 6 and 7 we develop agent-based simulation models to capture disease spread across both time and space, where each agent represents an individual with certain socio-demographic characteristics and mixing patterns. 
A key contribution of our models is that the viral transmission characteristics and social contact patterns, which determine the scale and velocity of disease spread, are no longer static. By simulating the developed models, we study the effect of the starting season of a pandemic, the timing and degree of viral mutation, and the duration and scale of mass gatherings and holiday traveling on the disease spread. We identify possible scenarios under which multiple outbreaks can occur during an influenza pandemic. Our study can help public health officials and other decision-makers predict the entire course of an influenza pandemic based on emerging viral characteristics at the initial stage, determine what data to collect, foresee potential multiple waves of attack, and better prepare response plans and intervention strategies, such as postponing or cancelling public gathering events.
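As a toy illustration of how seasonality and viral mutation can jointly produce multiple waves, the sketch below uses a deterministic SIR model with a seasonally switched transmission rate, a trickle of imported cases, and a one-off mutation that returns part of the recovered pool to susceptibility. This is far simpler than the thesis's agent-based models, and every parameter value is invented for illustration:

```python
def simulate_waves(days=500, n=1_000_000, beta_winter=0.30, beta_summer=0.15,
                   gamma=0.125, mutation_day=180, escape_fraction=0.5,
                   imports=0.5):
    """Deterministic SIR toy model. Returns the daily new-infection series.
    Transmission is higher in 'winter' (first/last ~90 days of each year);
    at mutation_day, a fraction of the recovered pool loses immunity."""
    s, i, r = n - 100.0, 100.0, 0.0
    daily_infections = []
    for day in range(days):
        season = day % 365
        beta = beta_winter if (season < 90 or season > 275) else beta_summer
        if day == mutation_day:      # antigenic change: immunity partly lost
            s, r = s + escape_fraction * r, (1.0 - escape_fraction) * r
        new_inf = beta * s * i / n + imports   # imports keep the virus circulating
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        daily_infections.append(new_inf)
    return daily_infections
```

With these illustrative values, a first wave burns through the initial winter, incidence collapses over the low-transmission summer, and the immunity lost at `mutation_day` lets a second wave build when winter transmission returns — the qualitative multiple-wave scenario the abstract describes.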
80

Modeling and uncertainty quantification in the nonlinear stochastic dynamics of horizontal drillstrings

Barbosa Da Cunha Junior, Americo 11 March 2015 (has links)
Oil prospecting uses equipment called a drillstring to drill the soil down to the reservoir level. This equipment is a long rotating column, composed of a sequence of connected drill pipes and auxiliary equipment. The dynamics of this column is very complex because, under normal operational conditions, it is subjected to longitudinal, lateral, and torsional vibrations, which present nonlinear coupling. This structure is also subjected to friction and shock effects due to the mechanical contacts between the pairs drill-bit/soil and drill-pipes/borehole. This work presents a mechanical-mathematical model to analyze a drillstring in horizontal configuration. The model uses a beam theory that accounts for rotary inertia, shear deformation, and the nonlinear coupling between the three mechanisms of vibration. The model equations are discretized using the finite element method. The uncertainties in the bit-rock interaction model parameters are taken into account through a parametric probabilistic approach, and the probability distributions of the random parameters are constructed by means of the maximum entropy principle. Numerical simulations are conducted in order to characterize the nonlinear dynamic behavior of the structure, especially the drill-bit. Inherently nonlinear dynamical phenomena, such as stick-slip and bit-bounce, are observed in the simulations, as well as shocks. A spectral analysis shows, surprisingly, that the stick-slip and bit-bounce phenomena result from the lateral vibration mechanism, and that the shock phenomena come from the torsional vibration. 
Seeking to increase the efficiency of the drilling process, an optimization problem that aims to maximize the rate of penetration of the column into the soil, while respecting its structural limits, is proposed and solved.
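The stick-slip phenomenon discussed above can be illustrated with a one-degree-of-freedom torsional model: a rotor driven through an elastic shaft by a constant-speed motor, with a static friction torque that exceeds the kinetic one. This is a sketch with invented parameters, far simpler than the thesis's coupled finite element model:

```python
import math

def simulate_stick_slip(t_end=20.0, dt=1e-3, inertia=1.0, stiffness=10.0,
                        damping=0.05, drive_speed=1.0,
                        t_static=8.0, t_kinetic=5.0):
    """Explicit-Euler simulation of a driven torsional oscillator with
    stick-slip friction. Returns the rotor speed history: during 'stick'
    phases the rotor is held at rest while the shaft winds up; once the
    shaft torque exceeds the static threshold, the rotor 'slips'."""
    theta = 0.0      # rotor angle
    omega = 0.0      # rotor angular speed
    phi = 0.0        # motor-end angle (imposed at constant speed)
    speeds = []
    for _ in range(int(t_end / dt)):
        phi += drive_speed * dt
        torque = stiffness * (phi - theta) - damping * omega
        if abs(omega) < 1e-2 and abs(torque) < t_static:
            omega = 0.0                                   # stick: friction holds
        else:
            friction = math.copysign(t_kinetic, omega) if omega else 0.0
            omega += (torque - friction) / inertia * dt   # slip phase
        theta += omega * dt
        speeds.append(omega)
    return speeds
```

With these values the rotor settles into a limit cycle alternating rest phases with bursts faster than the imposed drive speed — the same qualitative signature the thesis reports for the drill-bit's angular velocity.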
