11

Data assimilation for parameter estimation in coastal ocean hydrodynamics modeling

Mayo, Talea Lashea 25 February 2014 (has links)
Coastal ocean models are used for a vast array of applications. These applications include modeling tidal and coastal flows, waves, and extreme events, such as tsunamis and hurricane storm surges. Tidal and coastal flows are the primary application of this work as they play a critical role in many practical research areas such as contaminant transport, navigation through intracoastal waterways, development of coastal structures (e.g. bridges, docks, and breakwaters), commercial fishing, and planning and execution of military operations in marine environments, in addition to recreational aquatic activities. Coastal ocean models are used to determine tidal amplitudes, time intervals between low and high tide, and the extent of the ebb and flow of tidal waters, often at specific locations of interest. However, modeling tidal flows can be quite complex, as factors such as the configuration of the coastline, water depth, ocean floor topography, and hydrographic and meteorological impacts can have significant effects and must all be considered. Water levels and currents in the coastal ocean can be modeled by solving the shallow water equations. The shallow water equations contain many parameters, and the accurate estimation of both tides and storm surge is dependent on the accuracy of their specification. Of particular importance are the parameters used to define the bottom stress in the domain of interest [50]. These parameters are often heterogeneous across the seabed of the domain. Their values cannot be measured directly and relevant data can be expensive and difficult to obtain. The parameter values must often be inferred and the estimates are often inaccurate, or contain a high degree of uncertainty [28]. In addition, as is the case with many numerical models, coastal ocean models have various other sources of uncertainty, including the approximate physics, numerical discretization, and uncertain boundary and initial conditions.
Quantifying and reducing these uncertainties is critical to providing more reliable and robust storm surge predictions. It is also important to reduce the resulting error in the forecast of the model state as much as possible. The accuracy of coastal ocean models can be improved using data assimilation methods. In general, statistical data assimilation methods are used to estimate the state of a model given both the original model output and observed data. A major advantage of statistical data assimilation methods is that they can often be implemented non-intrusively, making them relatively straightforward to implement. They also provide estimates of the uncertainty in the predicted model state. Unfortunately, with the exception of the estimation of initial conditions, they do not contribute to the information contained in the model. The model error that results from uncertain parameters is reduced, but information about the parameters in particular remains unknown. Thus, the other commonly used approach to reducing model error is parameter estimation. Historically, model parameters such as the bottom stress terms have been estimated using variational methods. Variational methods formulate a cost functional that penalizes the difference between the modeled and observed state, and then minimize this functional over the unknown parameters. Though variational methods are an effective approach to solving inverse problems, they can be computationally intensive and difficult to code as they generally require the development of an adjoint model. They also are not formulated to estimate parameters in real time, e.g. as a hurricane approaches landfall. The goal of this research is to estimate parameters defining the bottom stress terms using statistical data assimilation methods. In this work, we use a novel approach to estimate the bottom stress terms in the shallow water equations, which we solve numerically using the Advanced Circulation (ADCIRC) model. 
In this model, a modified form of the 2-D shallow water equations is discretized in space by a continuous Galerkin finite element method, and in time by finite differencing. We use the Manning’s n formulation to represent the bottom stress terms in the model, and estimate various fields of Manning’s n coefficients by assimilating synthetic water elevation data using a square root Kalman filter. We estimate three types of fields defined on both an idealized inlet and a more realistic spatial domain. For the first field, a Manning’s n coefficient is given a constant value over the entire domain. For the second, we let the Manning’s n coefficient take two distinct values, letting one define the bottom stress in the deeper water of the domain and the other define the bottom stress in the shallower region. And finally, because bottom stress terms are generally spatially varying parameters, we consider the third field as a realization of a stochastic process. We represent a realization of the process using a Karhunen-Loève expansion, and then seek to estimate the coefficients of the expansion. We perform several observation system simulation experiments, and find that we are able to accurately estimate the bottom stress terms in most of our test cases. Additionally, we are able to improve forecasts of the model state in every instance. The results of this study show that statistical data assimilation is a promising approach to parameter estimation. / text
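The parameter-estimation idea described above, updating an uncertain Manning's n from water-elevation data, can be sketched with a toy ensemble Kalman update. This is a stochastic-EnKF stand-in for the square root filter and ADCIRC model used in the work; the `water_level` response function and all numerical values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the hydrodynamic model: water elevation at a
# gauge as a decreasing function of the Manning's n friction coefficient.
def water_level(n):
    return 2.0 / (1.0 + 5.0 * n)   # higher friction -> lower elevation

n_true = 0.03
obs = water_level(n_true) + rng.normal(0, 0.01)   # synthetic observation
R = 0.01 ** 2                                     # observation error variance

# Prior ensemble of Manning's n values (deliberately biased high)
ens = rng.normal(0.05, 0.02, size=200)

# Ensemble Kalman update of the parameter via the joint
# parameter/predicted-observation covariance
pred = water_level(ens)                 # predicted observations
C_ny = np.cov(ens, pred)[0, 1]          # cov(parameter, prediction)
C_yy = np.var(pred, ddof=1) + R
K = C_ny / C_yy                         # Kalman gain
perturbed = obs + rng.normal(0, np.sqrt(R), ens.size)
ens_post = ens + K * (perturbed - pred)

print(ens.mean(), ens_post.mean())      # posterior mean moves toward n_true
```

The same augmented-state idea carries over to distributed fields, where the ensemble members are vectors of KL coefficients rather than a single scalar.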
12

Stochastic Modeling of the Equilibrium Speed-Density Relationship

Wang, Haizhong 01 September 2010 (has links)
The fundamental diagram, a graphical representation of the relation among traffic flow, speed, and density, has been the foundation of traffic flow theory and transportation engineering for many years. For example, the analysis of traffic dynamics relies on input from this fundamental diagram to find when and where congestion builds up and how it dissipates; traffic engineers use a fundamental diagram to determine how well a highway facility serves its users and how to plan for new facilities in case of capacity expansion. Underlying a fundamental diagram is the relation between traffic speed and density, which roughly corresponds to drivers’ speed choices under varying car-following distances. First rigorously documented by Greenshields some seventy-five years ago, such a relation has been explored in many follow-up studies, but these attempts are predominantly deterministic in nature, i.e., they model traffic speed as a function of traffic density. Though these functional speed-density models are able to coarsely explain how traffic slows down as more vehicles are crowded on highways, empirical observations show a wide scattering of traffic speeds around the values predicted by these models. In addition, functional speed-density models lead to deterministic predictions of traffic dynamics, which lack the power to address the uncertainty brought about by random factors in traffic flow. Therefore, it appears more appropriate to view the speed-density relation as a stochastic process, in which a certain density level gives rise not only to an average value of traffic speed but also to its variation, because of the randomness of drivers’ speed choices. The objective of this dissertation is to develop such a stochastic speed-density model to better represent empirical observations and provide a basis for a probabilistic prediction of traffic dynamics. It would be ideal if such a model were formulated with both mathematical elegance and empirical accuracy.
Mathematical elegance requires a single-equation (single-regime) model with physically meaningful parameters that is easy to implement. The interpretation of empirical accuracy is twofold: on the one hand, the mean of the stochastic speed-density model should statistically match the average behavior of the empirical equilibrium speed-density observations; on the other hand, the magnitude of traffic speed variance is controlled by the variance function, which is dependent on the response. Ultimately, it is expected that the stochastic speed-density model is able to reproduce the wide-scattering speed-density relation observed at a highway segment after being calibrated by a set of local parameters and, in return, the model can be used to perform probabilistic prediction of traffic dynamics at this location. The emphasis of this dissertation is on the former (i.e., the development, calibration, and validation of the stochastic speed-density model), with a few numerical applications of the model to demonstrate the latter (i.e., probabilistic prediction). Following the seminal Greenshields model, a great variety of deterministic speed-density models have been proposed to mathematically represent the empirical speed-density observations which underlie the fundamental diagram. A common trait of the existing speed-density models is their deterministic nature, striving to balance two competing goals: mathematical elegance and empirical accuracy. As the latest development of such a pursuit, we show that a stochastic speed-density model can be developed by discretizing a random traffic speed process using the Karhunen-Loève expansion. The stochastic speed-density relationship model is largely motivated by the prevalent randomness exhibited in empirical observations, which mainly comes from drivers, vehicles, roads, and environmental conditions.
In a general setting, the proposed stochastic speed-density model has two components: deterministic and stochastic. For the deterministic component, we propose to use a family of logistic speed-density models to track the average trend of empirical observations. In particular, the five-parameter logistic speed-density model arises as a natural candidate due to the following considerations: (1) the shape of the five-parameter logistic speed-density model can be adjusted through its physically meaningful parameters to match the average behavior of empirical observations (statistically, the average behavior is modeled by the mean of empirical observations); (2) three-parameter and four-parameter logistic speed-density models can be obtained by removing the shape or scale parameter from the five-parameter model, but at the cost of empirical accuracy; (3) the five-parameter model yields the best accuracy compared to the three-parameter and four-parameter models. The magnitude of the stochastic component is dominated by the variance of traffic speeds indexed by traffic density. The empirical traffic speed variance increases as density increases to around 25-30 veh/km, then starts decreasing as traffic density grows larger. Empirical evidence verifies that traffic speed variation shows a parabolic shape, which makes the proposed variance function a suitable formula for modeling this variation. The variance function is dependent on the logistic speed-density relationship with varying model parameters. A detailed analysis of empirical traffic speed variance can be found in Chapter 6. Modeling results show that, by taking care of second-order statistics (i.e., variance and correlation), the proposed stochastic speed-density model is suitable for describing the observed phenomenon as well as for matching the empirical data. Following these results, a stochastic fundamental diagram of traffic flow can be established.
On the application side, the stochastic speed-density relationship model can potentially be used for real-time, on-line prediction and to explain traffic phenomena in a similar manner. This enables dynamic control and management systems to anticipate problems before they occur rather than simply reacting to existing conditions. Finally, we summarize our findings and discuss future research directions.
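As a reference point for the deterministic component discussed above, one common five-parameter logistic speed-density form can be sketched as follows; the parameter values are illustrative defaults, not the calibrated ones from the dissertation:

```python
import numpy as np

def logistic_5p(k, vf=107.0, vb=5.0, kt=25.0, theta1=10.0, theta2=1.0):
    """Five-parameter logistic speed-density curve (illustrative values).

    vf: free-flow speed (km/h), vb: lower speed bound in heavy congestion,
    kt: turning-point density (veh/km), theta1: scale, theta2: shape.
    """
    return vb + (vf - vb) / (1.0 + np.exp((k - kt) / theta1)) ** theta2

k = np.linspace(0, 150, 301)   # density, veh/km
v = logistic_5p(k)             # monotonically decreasing from ~vf toward vb
```

A stochastic version would add, at each density level, a zero-mean speed fluctuation whose variance follows the parabolic variance function described in the abstract.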
13

Practical Analysis Tools for Structures Subjected to Flow-Induced and Non-Stationary Random Loads

Scott, Karen Mary Louise 14 July 2011 (has links)
There is a need to investigate and improve upon existing methods to predict the response of sensors due to flow-induced vibrations in a pipe flow. The aim was to develop a tool which would enable an engineer to quickly evaluate the suitability of a particular design for a certain pipe flow application, without sacrificing fidelity. The primary methods of simple sensor response prediction, found in guides published by the American Society of Mechanical Engineers (ASME), were found to be lacking in several key areas, which prompted development of the tool described herein. A particular limitation of the existing guidelines deals with complex stochastic stationary and non-stationary modeling and required much further study, therefore providing direction for the second portion of this body of work. A tool for response prediction of fluid-induced vibrations of sensors was developed which allowed for analysis of low aspect ratio sensors. Results from the tool were compared to experimental lift and drag data, recorded for a range of flow velocities. The model was found to perform well over the majority of the velocity range, showing superiority in prediction of response as compared to ASME guidelines. The tool was then applied to a design problem given by an industrial partner, showing several of their designs to be inadequate for the proposed flow regime. This immediate identification of unsuitable designs no doubt saved significant time in the product development process. Work to investigate stochastic modeling in structural dynamics was undertaken to understand the reasons for the limitations found in fluid-structure interaction models. A particular weakness, non-stationary forcing, was found to be the most lacking in terms of use in the design stage of structures. A method was developed using the Karhunen-Loève expansion as its base to close the gap between prohibitively simple (stationary only) models and those which require too much computation time.
Models were developed from SDOF through continuous systems and shown to perform well at each stage. Further work is needed in this area to bring this work full circle such that the lessons learned can improve design level turbulent response calculations. / Ph. D.
14

Minimally Corrective, Approximately Recovering Priors to Correct Expert Judgement in Bayesian Parameter Estimation

May, Thomas Joseph 23 July 2015 (has links)
Bayesian parameter estimation is a popular method for addressing inverse problems. However, since prior distributions are chosen based on expert judgement, the method can inherently introduce bias into the understanding of the parameters. This can be especially relevant in the case of distributed parameters, where it is difficult to check for error. To minimize this bias, we develop the idea of a minimally corrective, approximately recovering prior (MCAR prior) that generates a guide for the prior and corrects the expert-supplied prior according to that guide. We demonstrate this approach for a one-dimensional elliptic partial differential equation and observe how the method behaves in cases with significant expert bias and in cases with none. In the case of significant expert bias, the method substantially reduces the bias; in the case with no expert bias, the method introduces only minor errors. The cost of introducing these small errors under good judgement is worth the benefit of correcting major errors under bad judgement. This is particularly true when the prior is determined using only a heuristic or an assumed distribution. / Master of Science
15

[en] INTEGRITY OF AN OFFSHORE STRUCTURE SUBJECTED TO WAVES / [pt] INTEGRIDADE DE UMA ESTRUTURA OFFSHORE SUJEITA À ONDAS

VICTOR FERNANDO DEORSOLA SACRAMENTO 11 April 2019 (has links)
[pt] Este trabalho apresenta um método para calcular a resistência à fadiga de uma torre de perfuração considerando a elevação da superfície do mar, a dinâmica da plataforma na qual a torre está instalada e a dinâmica da própria torre. Modelos de ordem reduzida são utilizados para obter a elevação da superfície do mar e a dinâmica da torre, e as incertezas nos parâmetros dos componentes do sistema podem ser incluídas na análise também. As análises podem ser feitas para vários estados de mar, conforme sua distribuição de probabilidade, e nenhuma hipótese sobre a distribuição de probabilidade precisa ser feita inicialmente. O histograma de distribuição de ciclos de tensão para toda vida útil do equipamento é obtido usando um procedimento de contagem de ciclos Rainflow. Os resultados e as incertezas nos mesmos são discutidos. / [en] This work presents a method for evaluation of the fatigue resistance of a drilling tower considering the sea surface elevation, the dynamics of the platform on which the tower is installed, and the dynamics of the tower itself. Reduced order models are used for obtaining the sea surface elevation and the dynamics of the tower, and the uncertainties on the parameters of the components of the system can be included in the analysis as well. The analysis can be done for several sea states, according to their probability distribution, and no assumption about the probability distribution of the stress ranges has to be made in advance. The histogram for the distribution of stress ranges for the entire working life of the equipment is obtained using a Rainflow cycle-counting technique. The results and the uncertainties on them are discussed.
16

[en] AN INTRODUCTION TO MODEL REDUCTION THROUGH THE KARHUNEN-LOÈVE EXPANSION / [pt] UMA INTRODUÇÃO À REDUÇÃO DE MODELOS ATRAVÉS DA EXPANSÃO DE KARHUNEN-LOÈVE

CLAUDIO WOLTER 10 April 2002 (has links)
[pt] Esta dissertação tem como principal objetivo estudar aplicações da expansão ou decomposição de Karhunen-Loève em dinâmica de estruturas. Esta técnica consiste, basicamente, na obtenção de uma decomposição linear da resposta dinâmica de um sistema qualquer, representado por um campo vetorial estocástico, tendo a importante propriedade de ser ótima, no sentido que dado um certo número de modos, nenhuma outra decomposição linear pode melhor representar esta resposta. Esta capacidade de compressão de informação faz desta decomposição uma poderosa ferramenta para a construção de modelos reduzidos para sistemas mecânicos em geral. Em particular, este trabalho aborda problemas em dinâmica estrutural, onde sua aplicação ainda é bem recente. Inicialmente, são apresentadas as principais hipóteses necessárias à aplicação da expansão de Karhunen-Loève, bem como duas técnicas existentes para sua implementação, com domínios distintos de utilização. É dada especial atenção à relação entre os modos empíricos fornecidos pela expansão e os modos de vibração intrínsecos a sistemas vibratórios lineares, tanto discretos quanto contínuos, exemplificados por uma treliça bidimensional e uma placa retangular. Na mesma linha, são discutidas as vantagens e desvantagens de se usar esta expansão como ferramenta alternativa à análise modal clássica. Como aplicação a sistemas não-lineares, é apresentado o estudo de um sistema de vibroimpacto definido por uma viga em balanço cujo deslocamento transversal é limitado por dois batentes elásticos. Os modos empíricos obtidos através da expansão de Karhunen-Loève são, então, usados na formulação de um modelo de ordem reduzida, através do método de Galerkin, e o desempenho deste novo modelo investigado. / [en] This dissertation has the main objective of studying applications of the Karhunen-Loève expansion or decomposition in structural dynamics.
This technique consists basically in obtaining a linear decomposition of the dynamic response of a general system represented by a stochastic vector field. It has the important property of optimality, meaning that for a given number of modes, no other linear decomposition is capable of better representing this response. This information compression capability makes this decomposition a powerful tool for the construction of reduced-order models of mechanical systems in general. In particular, this work deals with structural dynamics problems, where its application is still quite new. Initially, the main hypotheses necessary for the application of the Karhunen-Loève expansion are presented, as well as two existing techniques for its implementation that have distinct domains of use. Special attention is paid to the relation between the empirical eigenmodes provided by the expansion and the mode shapes intrinsic to linear vibrating systems, both discrete and continuous, exemplified by a two-dimensional truss and a rectangular plate. Furthermore, the advantages and disadvantages of using this expansion as an alternative tool to classical modal analysis are discussed. As a nonlinear application, the study of a vibroimpact system consisting of a cantilever beam whose transversal displacement is constrained by two elastic barriers is presented. The empirical eigenmodes provided by the Karhunen-Loève expansion are then used to formulate a reduced-order model through Galerkin projection, and the performance of this new model is investigated.
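The empirical-mode construction described above has a compact numerical sketch: for sampled data, the Karhunen-Loève (POD) modes are the left singular vectors of the centered snapshot matrix. The two-mode synthetic "response" below is an illustrative stand-in for real simulation output:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic dynamic response: snapshots of a field that truly lives in a
# two-dimensional subspace, plus small noise (stand-in for simulation data).
x = np.linspace(0, 1, 100)                         # spatial grid
t = np.linspace(0, 10, 400)                        # time instants
snapshots = (np.outer(np.sin(np.pi * x), np.sin(2 * t))
             + 0.3 * np.outer(np.sin(3 * np.pi * x), np.cos(5 * t))
             + 0.01 * rng.standard_normal((100, 400)))

# Empirical KL/POD modes = left singular vectors of the centered snapshot matrix
mean = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)

# Number of modes capturing 99% of the "energy" (squared singular values)
energy = np.cumsum(s**2) / np.sum(s**2)
n_modes = int(np.searchsorted(energy, 0.99)) + 1
print(n_modes)   # the two planted modes dominate
```

The retained columns of `U` would then serve as the basis for a Galerkin reduced-order model, as in the abstract's vibroimpact example.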
17

A Study Of Natural Convection In Molten Metal Under A Magnetic Field

Guray, Ersan 01 September 2006 (has links) (PDF)
The interaction between thermal convection and magnetic fields is of interest in geophysical and astrophysical problems as well as in metallurgical processes such as casting or crystallization. A magnetic field may act in such a way as to damp the convective velocity field in the melt or to reorganize the flow aligned with the magnetic field. This ability to manipulate the flow field is of technological importance in industrial processes. In this work, a direct numerical simulation of three-dimensional Boussinesq convection in a horizontal layer of electrically conducting fluid, confined between two perfectly conducting horizontal plates heated from below in a gravitational and magnetic field, is performed using a spectral element method. Periodic boundary conditions are assumed in the horizontal directions. The numerical model is then used to study the effects of the imposed magnetic field. Finally, a low-dimensional representation scheme is presented based on the Karhunen-Loève approach.
18

New Algorithms for Uncertainty Quantification and Nonlinear Estimation of Stochastic Dynamical Systems

Dutta, Parikshit 2011 August 1900 (has links)
Recently there has been growing interest in characterizing and reducing uncertainty in stochastic dynamical systems. This drive arises out of the need to manage uncertainty in complex, high-dimensional physical systems. Traditional techniques of uncertainty quantification (UQ) use local linearization of dynamics and assume Gaussian probability evolution. But several difficulties arise when these UQ models are applied to real-world problems, which generally are nonlinear in nature. Hence, to improve performance, robust algorithms which can work efficiently in a nonlinear, non-Gaussian setting are desired. The main focus of this dissertation is to develop UQ algorithms for nonlinear systems, where uncertainty evolves in a non-Gaussian manner. The algorithms developed are then applied to state estimation of real-world systems. The first part of the dissertation focuses on using polynomial chaos (PC) for uncertainty propagation, and then achieving the estimation task by the use of higher order moment updates and Bayes rule. The second part mainly deals with Frobenius-Perron (FP) operator theory, how it can be used to propagate uncertainty in dynamical systems, and then using it to estimate states by the use of Bayesian updates. Finally, a method to represent the process noise in a stochastic dynamical system using a finite-term Karhunen-Loève (KL) expansion is proposed. The uncertainty in the resulting approximated system is propagated using the FP operator. The performance of the PC-based estimation algorithms was compared with the extended Kalman filter (EKF) and unscented Kalman filter (UKF), and the FP operator based techniques were compared with particle filters, when applied to a Duffing oscillator system and the hypersonic reentry of a vehicle in the atmosphere of Mars. It was found that the accuracy of the PC-based estimators is higher than that of the EKF or UKF, and the FP operator based estimators were computationally superior to the particle filtering algorithms.
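A minimal sketch of the non-intrusive polynomial chaos projection mentioned above, unrelated to the specific reentry application: a toy nonlinear map of a standard Gaussian input is projected onto probabilists' Hermite polynomials by Gauss-Hermite quadrature, and the mean and variance are read off the coefficients:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Illustrative nonlinear transformation of a standard Gaussian input x ~ N(0,1)
f = lambda x: np.exp(0.5 * x)

order = 8
nodes, weights = hermegauss(40)            # quadrature for weight e^{-x^2/2}
weights = weights / np.sqrt(2 * np.pi)     # normalize to the N(0,1) density

# Project f onto probabilists' Hermite polynomials He_k: c_k = E[f He_k] / k!
coeffs = []
for k in range(order + 1):
    Hk = hermeval(nodes, np.eye(order + 1)[k])   # He_k at the nodes
    coeffs.append(np.sum(weights * f(nodes) * Hk) / math.factorial(k))

mean_pc = coeffs[0]                                          # E[f(x)]
var_pc = sum(math.factorial(k) * coeffs[k]**2
             for k in range(1, order + 1))                   # Var[f(x)]
print(mean_pc, math.exp(0.125))    # exact mean of e^{x/2} is e^{1/8}
```

For this map the exact moments are known in closed form, which makes it a convenient sanity check before applying the same projection to an actual dynamical system.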
19

Numerical Methods For Solving The Eigenvalue Problem Involved In The Karhunen-Loeve Decomposition

Choudhary, Shalu 02 1900 (has links) (PDF)
In structural analysis and design it is important to consider the effects of uncertainties in loading and material properties in a rational way. Uncertainty in material properties, such as heterogeneity in elastic and mass properties, can be modeled as a random field. For computational purposes, it is essential to discretize and represent the random field. For a field with known second-order statistics, such a representation can be achieved by the Karhunen-Loève (KL) expansion. Accordingly, the random field is represented in a truncated series expansion using a few eigenvalues and associated eigenfunctions of the covariance function, and corresponding random coefficients. The eigenvalues and eigenfunctions of the covariance kernel are obtained by solving a Fredholm integral equation of the second kind. A closed-form solution for the integral equation, especially for arbitrary domains, may not always be available. Therefore an approximate solution is sought. In finding an approximate solution, it is important to consider both the accuracy of the solution and the cost of computing it. This work is focused on exploring a few numerical methods for estimating the solution of this integral equation. Three different methods are implemented and numerically studied: (i) using finite element bases (Method 1), (ii) mid-point approximation (Method 2), and (iii) the Nyström method (Method 3). The methods and results are compared in terms of accuracy, computational cost, and difficulty of implementation. In the first method an eigenfunction is first represented as a linear combination of a set of finite element bases. The resulting error in the integral equation is then minimized in the Galerkin sense, which results in a generalized matrix eigenvalue problem. In the second method, the domain is partitioned into a finite number of subdomains.
The covariance function is discretized by approximating its value over each subdomain locally, and thereby the integral equation is transformed into a matrix eigenvalue problem. In the third method the Fredholm integral equation is approximated by a quadrature rule, which also results in a matrix eigenvalue problem. The first part of the numerical study involves comparing these three methods. This numerical study is first done in a one-dimensional domain. Then, for the study in two dimensions, a simple rectangular domain (referred to as Domain 1) is taken with an uncertain material property modeled as a Gaussian random field. For the chosen covariance model and domain, the analytical solutions are known, which allows verifying the accuracy of the numerical solutions. Thereby these three numerical methods are studied and compared for a chosen target accuracy and different correlation lengths of the random field. It was observed that Method 2 and Method 3 are much faster than Method 1. On the other hand, for Methods 2 and 3, the additional cost of discretizing the domain into nodes should be considered, whereas for a mechanics-related problem, Method 1 can use the finite element mesh already available for solving the mechanics problem. The second part of the work focuses on studying the effect of the geometry of the model on realizations of the random field. The objective of the study is to assess the possibility of generating the random field for a complicated domain from the KL expansion for a simpler domain. For this purpose, two KL decompositions are obtained: one on Domain 1, and another on the same rectangular domain modified with a rectangular hole inside it (referred to as Domain 2). The random process is generated and realizations are compared.
It was observed from these studies that the probability density functions at the nodes of both domains, that is, Domain 1 and Domain 2, are similar. This observation suggests that a complicated domain can be replaced by a corresponding simpler domain, thereby reducing the computational cost.
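Method 3 above (the Nyström method) admits a compact sketch for the standard exponential covariance kernel on [0, 1]: a quadrature rule turns the Fredholm eigenproblem into a symmetric matrix eigenvalue problem. The kernel and correlation length below are illustrative choices, not necessarily those used in the thesis:

```python
import numpy as np

# Nystrom discretization of the Fredholm eigenproblem
#   integral_0^1 C(x, y) phi(y) dy = lambda * phi(x)
# for the exponential covariance C(x, y) = exp(-|x - y| / ell).
n = 200
x, w = np.polynomial.legendre.leggauss(n)   # Gauss-Legendre on [-1, 1]
x = 0.5 * (x + 1.0)                         # map nodes to [0, 1]
w = 0.5 * w                                 # weights now sum to 1

ell = 0.5                                   # correlation length (illustrative)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

# Symmetrize: B = W^{1/2} C W^{1/2} has the same eigenvalues as the
# quadrature-discretized integral operator, but is symmetric.
sw = np.sqrt(w)
B = sw[:, None] * C * sw[None, :]
lam, V = np.linalg.eigh(B)
lam = lam[::-1]                             # largest KL eigenvalues first
phi = V[:, ::-1] / sw[:, None]              # eigenfunctions at the nodes

print(lam[:3])   # rapidly decaying spectrum, as expected for a smooth kernel
```

A quick consistency check: the eigenvalues must sum to the kernel's trace, here the integral of C(x, x) = 1 over [0, 1], i.e. exactly 1.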
20

Compression of Hyperspectral Images

Cheng, Kai-Jen January 2013 (has links)
No description available.
