261

Data assimilation for parameter estimation in coastal ocean hydrodynamics modeling

Mayo, Talea Lashea 25 February 2014 (has links)
Coastal ocean models are used for a vast array of applications. These applications include modeling tidal and coastal flows, waves, and extreme events, such as tsunamis and hurricane storm surges. Tidal and coastal flows are the primary application of this work as they play a critical role in many practical research areas such as contaminant transport, navigation through intracoastal waterways, development of coastal structures (e.g. bridges, docks, and breakwaters), commercial fishing, and planning and execution of military operations in marine environments, in addition to recreational aquatic activities. Coastal ocean models are used to determine tidal amplitudes, time intervals between low and high tide, and the extent of the ebb and flow of tidal waters, often at specific locations of interest. However, modeling tidal flows can be quite complex, as factors such as the configuration of the coastline, water depth, ocean floor topography, and hydrographic and meteorological impacts can have significant effects and must all be considered. Water levels and currents in the coastal ocean can be modeled by solving the shallow water equations. The shallow water equations contain many parameters, and the accurate estimation of both tides and storm surge depends on the accuracy of their specification. Of particular importance are the parameters used to define the bottom stress in the domain of interest [50]. These parameters are often heterogeneous across the seabed of the domain. Their values cannot be measured directly, and relevant data can be expensive and difficult to obtain. The parameter values must often be inferred, and the estimates are often inaccurate or contain a high degree of uncertainty [28]. In addition, as is the case with many numerical models, coastal ocean models have various other sources of uncertainty, including the approximate physics, numerical discretization, and uncertain boundary and initial conditions. Quantifying and reducing these uncertainties is critical to providing more reliable and robust storm surge predictions. It is also important to reduce the resulting error in the forecast of the model state as much as possible. The accuracy of coastal ocean models can be improved using data assimilation methods. In general, statistical data assimilation methods are used to estimate the state of a model given both the original model output and observed data. A major advantage of statistical data assimilation methods is that they can often be implemented non-intrusively, making them relatively straightforward to apply. They also provide estimates of the uncertainty in the predicted model state. Unfortunately, with the exception of the estimation of initial conditions, they do not contribute to the information contained in the model. The model error that results from uncertain parameters is reduced, but information about the parameters themselves remains unknown. Thus, the other commonly used approach to reducing model error is parameter estimation. Historically, model parameters such as the bottom stress terms have been estimated using variational methods. Variational methods formulate a cost functional that penalizes the difference between the modeled and observed state, and then minimize this functional over the unknown parameters. Though variational methods are an effective approach to solving inverse problems, they can be computationally intensive and difficult to code, as they generally require the development of an adjoint model.
They are also not formulated to estimate parameters in real time, e.g. as a hurricane approaches landfall. The goal of this research is to estimate parameters defining the bottom stress terms using statistical data assimilation methods. In this work, we use a novel approach to estimate the bottom stress terms in the shallow water equations, which we solve numerically using the Advanced Circulation (ADCIRC) model. In this model, a modified form of the 2-D shallow water equations is discretized in space by a continuous Galerkin finite element method, and in time by finite differencing. We use the Manning’s n formulation to represent the bottom stress terms in the model, and estimate various fields of Manning’s n coefficients by assimilating synthetic water elevation data using a square root Kalman filter. We estimate three types of fields defined on both an idealized inlet and a more realistic spatial domain. For the first field, a Manning’s n coefficient is given a constant value over the entire domain. For the second, we let the Manning’s n coefficient take two distinct values, letting one define the bottom stress in the deeper water of the domain and the other define the bottom stress in the shallower region. And finally, because bottom stress terms are generally spatially varying parameters, we consider the third field as a realization of a stochastic process. We represent a realization of the process using a Karhunen-Loève expansion, and then seek to estimate the coefficients of the expansion. We perform several observation system simulation experiments, and find that we are able to accurately estimate the bottom stress terms in most of our test cases. Additionally, we are able to improve forecasts of the model state in every instance. The results of this study show that statistical data assimilation is a promising approach to parameter estimation.
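To make the third test case concrete, the sketch below builds a truncated Karhunen-Loève expansion of a spatially varying field on a 1-D grid; the domain, squared-exponential covariance kernel, mean value, and truncation level are illustrative assumptions, not taken from the dissertation. In a filtering context, the handful of KL coefficients would be estimated in place of the full discretized field.

```python
import numpy as np

# 1-D spatial grid over an assumed domain [0, 1]
n = 200
x = np.linspace(0.0, 1.0, n)

# Assumed squared-exponential covariance kernel for the random field
ell, sigma2 = 0.2, 0.05
C = sigma2 * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell**2)

# Karhunen-Loève expansion: eigendecomposition of the covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)
idx = np.argsort(eigvals)[::-1]          # sort modes by decreasing variance
eigvals, eigvecs = eigvals[idx], eigvecs[:, idx]

# Truncate to the m leading modes; a filter would estimate these m
# coefficients instead of the full n-dimensional field
m = 10
xi = np.random.default_rng(0).standard_normal(m)   # KL coefficients
mean_field = 0.03 * np.ones(n)           # assumed prior mean of the field
field = mean_field + eigvecs[:, :m] @ (np.sqrt(eigvals[:m]) * xi)

print(f"variance captured by {m} modes: {eigvals[:m].sum() / eigvals.sum():.3f}")
```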
262

Channel, spectrum, and waveform awareness in OFDM-based cognitive radio systems

Yücek, Tevfik 01 January 2007 (has links)
The radio spectrum is becoming increasingly congested every day with emerging technologies and the increasing number of wireless devices. Considering the limited bandwidth availability, accommodating the demand for higher capacity and data rates is a challenging task, requiring innovative technologies that can offer new ways of exploiting the available radio spectrum. Cognitive radio has arisen as a tempting solution to the spectral crowding problem by introducing the notion of opportunistic spectrum usage. Because of its attractive features, orthogonal frequency division multiplexing (OFDM) has been successfully used in numerous wireless standards and technologies. We believe that OFDM will play an important role in realizing the cognitive radio concept as well, by providing a proven, scalable, and adaptive technology for the air interface. The goal of this dissertation is to identify and address some of the challenges that arise from the introduction of cognitive radio. Specifically, we propose methods for obtaining awareness about channel, spectrum, and waveform in OFDM-based cognitive radio systems. Parameter estimation for enabling adaptation, spectrum sensing, and OFDM system identification are the three main topics discussed. The OFDM technique is investigated as a candidate for cognitive radio systems. Cognitive radio features and requirements are discussed in detail, and OFDM's ability to satisfy these requirements is explained. In addition, we identify the challenges that arise from employing OFDM technology in cognitive radio. Algorithms for estimating various channel-related parameters are presented. These parameters are vital for enabling adaptive system design, which is a key requirement for cognitive radio. We develop methods for estimating root-mean-square (RMS) delay spread, Doppler spread, and noise variance. The spectrum opportunity and spectrum sensing concepts are re-evaluated by considering different dimensions of the spectrum, known as the multi-dimensional spectrum space. The spectrum sensing problem in a multi-dimensional space is addressed by developing a new sensing algorithm termed partial match filtering (PMF). Cognitive radios are expected to recognize different wireless networks and to have the capability of communicating with them. Algorithms for the identification of multi-carrier transmissions are developed. Within the same work, methods for blindly detecting the transmission parameters of an OFDM-based system are developed. Blind detection is also very helpful in reducing system signaling overhead in the case of adaptive transmission, where transmission parameters are changed depending on the environmental characteristics or spectrum availability.
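Spectrum sensing in its most basic form can be illustrated with a plain energy detector; the sketch below is a generic stand-in for illustration, not the partial match filtering algorithm proposed in the dissertation, and the signal model, tone amplitude, and false-alarm target are all assumed.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def energy_detect(samples, noise_var, pfa=0.01):
    """Declare the band occupied if the average energy exceeds a threshold.

    The threshold uses the Gaussian approximation of the noise-only
    energy statistic, valid for a large number of samples.
    """
    n = len(samples)
    energy = np.mean(np.abs(samples) ** 2)
    thresh = noise_var * (1.0 + norm.ppf(1.0 - pfa) / np.sqrt(n))
    return energy > thresh

# Complex baseband noise vs. noise plus a weak tone (stand-in for a used band)
n, noise_var = 1000, 1.0
noise = np.sqrt(noise_var / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))
signal = noise + 0.5 * np.exp(2j * np.pi * 0.1 * np.arange(n))

print("empty band flagged occupied?", energy_detect(noise, noise_var))
print("used band flagged occupied? ", energy_detect(signal, noise_var))
```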
263

Systematic optimization and experimental validation of simulated moving bed chromatography systems for ternary separations and equilibrium limited reactions

Agrawal, Gaurav 21 September 2015 (has links)
Simulated Moving Bed (SMB) chromatography is a separation process where the components are separated due to their varying affinity towards the stationary phase. Over the past decade, many modifications have been proposed in SMB chromatography in order to effectively separate a binary mixture. However, the separation of multi-component mixtures using SMB is still one of the major challenges. Although many different strategies have been proposed, previous studies have rarely performed comprehensive investigations to find the best ternary separation strategy among the various possible alternatives. Furthermore, the concept of combining reaction with SMB has been proposed in the past for driving equilibrium-limited reactions to completion by separating the products from the reaction zone. However, the design of such systems is still challenging due to the complex dynamics of simultaneous reaction and adsorption. The first objective of the study is to find the best ternary separation strategy among various alternative SMB designs. The performance of several ternary SMB operating schemes proposed in the literature is compared in terms of the optimal productivity obtained and the amount of solvent consumed. A multi-objective optimization problem is formulated which simultaneously maximizes the SMB productivity and the purity of the intermediate-eluting component. Furthermore, the concept of optimizing a superstructure formulation is proposed, where numerous SMB operating schemes can be incorporated into a single formulation. This superstructure approach has the potential to find more advantageous operating schemes than those existing in the literature. The second objective of the study is to demonstrate the Generalized Full Cycle (GFC) operation experimentally for the first time, and compare its performance to the JO process. A Semba Octave™ chromatography system is used as an experimental SMB unit to implement the optimal operating schemes. In addition, a simultaneous optimization and model correction (SOMC) scheme is used to resolve the model mismatch in a systematic way. We also show a systematic comparison of the JO and GFC operations by presenting a Pareto plot of the productivity achieved against the desired purity of the intermediate-eluting component experimentally. The third objective of the study is to develop a simulated moving bed reactor (SMBR) process for an industrial-scale application, and demonstrate the potential of the ModiCon operation for improving the performance of the SMBR compared to the conventional operating strategy. A novel industrial application involving the esterification of acetic acid and 1-methoxy-2-propanol is considered to produce propylene glycol methyl ether acetate (PMA) as the product. A multi-objective optimization study is presented to find the best reactive separation strategy for the production of the PMA product. We also present a Pareto plot comparing the ModiCon operation, which allows periodic changes of the feed composition, with the conventional operating strategy in terms of the optimal production rate of PMA that can be achieved against the desired conversion of acetic acid.
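The flavor of the multi-objective formulation can be shown with a toy productivity-versus-purity trade-off traced out by the epsilon-constraint method; the one-variable surrogate objectives below are invented for illustration and stand in for a full SMB model.

```python
import numpy as np
from scipy.optimize import minimize

# Toy surrogate with one decision variable u (e.g., a normalized feed rate).
# Productivity rises with u while purity of the intermediate-eluting
# component falls; both functional forms are assumed for illustration only.
def productivity(u):
    return u

def purity(u):
    return 1.0 - 0.4 * u**2

# Epsilon-constraint method: maximize productivity subject to purity >= eps,
# sweeping eps to trace out the Pareto front.
pareto = []
for eps in np.linspace(0.80, 0.99, 10):
    res = minimize(lambda u: -productivity(u[0]), x0=[0.5],
                   bounds=[(0.0, 1.0)],
                   constraints=[{"type": "ineq",
                                 "fun": lambda u, e=eps: purity(u[0]) - e}])
    pareto.append((productivity(res.x[0]), purity(res.x[0])))

for p, q in pareto:
    print(f"productivity {p:.3f} at purity {q:.3f}")
```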
264

A method for parameter estimation and system identification for model-based diagnostics

Rengarajan, Sankar Bharathi 16 February 2011 (has links)
Model-based fault detection techniques utilize functional redundancies in the static and dynamic relationships among system inputs and outputs for fault detection and isolation. Analytical models based on the underlying physics of the system can capture the dependencies between different measured signals in terms of system states and parameters. These physical models of the system can be used as a tool to detect and isolate system faults. As a machine degrades, system outputs deviate from desired outputs, generating residuals defined by the error between sensor measurements and corresponding model-simulated signals. These error residuals contain valuable information for interpreting system states and parameters. Setting up the measurements from a faulty system as the baseline, the parameters of the idealistic model can be varied to minimize these residuals. This process is called “parameter tuning”. A framework to automate this parameter tuning process is presented, with a focus on DC motors and 3-phase induction motors. The parameter tuning module presented is a multi-tier module designed to operate on real system models that are highly non-linear. The tuning module combines artificial intelligence techniques like Quasi-Monte Carlo (QMC) sampling (Hammersley sequencing) and a genetic algorithm (the Non-dominated Sorting Genetic Algorithm) with an Extended Kalman Filter (EKF), which utilizes the system dynamics information available via the physical models of the system. A tentative Graphical User Interface (GUI) was developed to simplify the interaction between a machine operator and the module. The tuning module was tested with real measurements from a DC motor. A simulation study was performed on a 3-phase induction motor by suitably adjusting parameters in an analytical model. The QMC sampling and genetic algorithm stages worked well even on measurement data with the system operating in steady-state condition, but the downside was computational expense and the inability to estimate the parameters online (they form a ‘batch estimator’). The EKF module enabled online estimation, where updates are made based on incoming measurements, but the observability of the system from the incoming measurements posed a major challenge when dealing with state estimation filters. Implementation details and results are included, with plots comparing real and faulty systems.
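The parameter tuning idea, varying model parameters to minimize residuals against measurements from a possibly faulty system, can be sketched as nonlinear least squares; the first-order DC motor model, input profile, and parameter values below are assumptions for illustration, not the multi-tier module described in the thesis.

```python
import numpy as np
from scipy.optimize import least_squares

# Assumed first-order DC motor speed model: J*dw/dt = Kt*i - b*w,
# driven by a known current profile i(t); theta = (J, b).
def simulate(theta, t, current, Kt=0.05):
    J, b = theta
    w = np.zeros_like(t)
    dt = t[1] - t[0]
    for k in range(1, len(t)):          # forward-Euler integration
        w[k] = w[k - 1] + dt * (Kt * current[k - 1] - b * w[k - 1]) / J
    return w

t = np.linspace(0, 2, 400)
current = np.where(t < 1.0, 2.0, 0.5)   # assumed step input profile

# Synthetic "faulty system" measurements: true parameters differ from nominal
true_theta = (0.012, 0.025)
meas = simulate(true_theta, t, current) \
       + np.random.default_rng(1).normal(0, 0.2, t.size)

# Tune the nominal model parameters to minimize the residuals
fit = least_squares(lambda th: simulate(th, t, current) - meas,
                    x0=[0.010, 0.020], bounds=([1e-4, 1e-4], [1.0, 1.0]))
print("estimated J, b:", fit.x)
```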
265

Vehicle-terrain parameter estimation for a small-scale robotic tracked vehicle

Dar, Tehmoor Mehmoud 02 August 2011 (has links)
Methods for estimating vehicle-terrain interaction parameters for small-scale robotic vehicles have been formulated and evaluated using both simulation and experimental studies. A model basis was developed, guided by experimental studies with an iRobot PackBot. The intention was to demonstrate whether a nominally instrumented robotic vehicle could be used as a test platform for generating data for vehicle-terrain parameter estimation. A comprehensive skid-steered model was found to be sensitive enough to distinguish between various forms of unknown terrain. This simulation study also verified that the Bekker model for large-scale vehicles adopted for this research was applicable to the small-scale robotic vehicle used in this work. This fact was also confirmed by estimating coefficients of friction and establishing their dependence on forward velocity and turning radius as the vehicle traverses different terrains. Having established that mobility measurements for this robotic vehicle were sufficiently sensitive, it was found that estimates could be made of key dynamic variables and vehicle-terrain interaction parameters. Four main contributions are described for reliably and robustly using PackBot data for vehicle-terrain property estimation. These estimation methods should contribute to efforts to improve the mobility of small-scale tracked vehicles on uncertain terrains. The approach is embodied in a multi-tiered algorithm based on the dynamic and kinematic models for skid-steering as well as tractive force models parameterized by key vehicle-terrain parameters. In order to estimate and characterize the key parameters, nonlinear estimation techniques such as the Extended Kalman Filter (EKF), the Unscented Kalman Filter (UKF), and a General Newton-Raphson (GNR) method are integrated into this multi-tiered algorithm. A unique idea of using an EKF with an added state noise compensation algorithm is presented, and shown to be robust and consistent in estimating slip variables and other parameters for deformable terrains. In the multi-tiered algorithm, a kinematic model of the robotic vehicle is used to estimate slip variables and turning radius. These estimated variables are stored in a truth table and used in a skid-steered dynamic model to estimate the coefficients of friction. The total estimated slip on the left and right tracks, along with the total tractive force computed using a motor model, are then used in the GNR algorithm to estimate the key vehicle-terrain parameters. These estimated parameters are cross-checked and confirmed against the EKF estimation results. Further, these simulation results verify that the tracked vehicle's tractive force does not depend on cohesion for frictional soils. This sequential algorithm is shown to be effective in estimating vehicle-terrain interaction properties with relatively good accuracy. The estimated results obtained from the UKF and EKF are verified and compared with available experimental data, and tested on a PackBot traversing specified terrains at the Southwest Research Institute (SwRI) Small Robotics Testbed in San Antonio, Texas. Finally, based on the development and evaluation of small-scale vehicle testing, the effectiveness of on-board sensing methods and estimation techniques is also discussed for potential use in real-time estimation of vehicle-terrain parameters.
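The GNR stage can be illustrated with a scalar Newton-Raphson solve for a friction parameter from a measured tractive force; the simplified Janosi-Hanamoto-style tractive force model (zero cohesion, as for frictional soils) and all numerical values below are assumptions, not the dissertation's models or data.

```python
import numpy as np

# Simplified tractive-force model for a frictional soil:
#   F(phi) = W * tan(phi) * (1 - K/(s*L) * (1 - exp(-s*L/K)))
# Solve for the internal friction angle phi given a measured force.
W, s, L, K = 180.0, 0.15, 0.45, 0.025   # weight [N], slip, track length [m], shear modulus [m]

def tractive_force(phi):
    slip_term = 1.0 - (K / (s * L)) * (1.0 - np.exp(-s * L / K))
    return W * np.tan(phi) * slip_term

def newton_raphson(F_meas, phi0=0.4, tol=1e-10, max_iter=50):
    phi = phi0
    for _ in range(max_iter):
        f = tractive_force(phi) - F_meas
        h = 1e-6                         # central-difference derivative
        df = (tractive_force(phi + h) - tractive_force(phi - h)) / (2 * h)
        step = f / df
        phi -= step
        if abs(step) < tol:
            break
    return phi

phi_true = 0.52                          # assumed ground truth (~30 degrees)
F_meas = tractive_force(phi_true)
print("recovered friction angle [rad]:", newton_raphson(F_meas))
```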
266

Process measurements and kinetics of unseeded batch cooling crystallization

Li, Huayu 08 June 2015 (has links)
This thesis describes the development of an empirical model of focused beam reflectance measurement (FBRM) and the application of the model to monitoring batch cooling crystallization and extracting information on crystallization kinetics. Batch crystallization is widely used in the fine chemical and pharmaceutical industries to purify and separate solid products. The crystal size distribution (CSD) of the final product greatly influences the product characteristics, such as purity, stability, and bioavailability. It also has a great effect on downstream processing. To achieve a desired CSD of the final product, batch crystallization processes need to be monitored, understood, and controlled. FBRM is a promising technique for in situ determination of the CSD. It is based on scattering of laser light and provides a chord-length distribution (CLD), which is a complex function of crystal geometry. In this thesis, an empirical correlation between CSDs and CLDs is established and applied in place of existing first-principles FBRM models. Built from experimental data, the empirical mapping of CSD and CLD is advantageous in representing some effects that are difficult to quantify by mathematical and physical expressions. The developed model enables computation of the CSD from measured CLDs, so that the evolution of the crystal population can be followed during batch cooling crystallization processes. Paracetamol, a common drug product also known as acetaminophen, is selected as the model compound in this thesis study. The empirical model was first established and verified in a paracetamol-nonsolvent (toluene) slurry, and later applied to the paracetamol-ethanol crystallization system. Complementary to the FBRM measurements, solute concentrations in the liquid phase were determined by in situ infrared spectra, and the two were jointly used to monitor the crystallization process. The framework of measuring the CSD and the solute concentration allows the estimation of crystallization kinetics, including those for primary nucleation, secondary nucleation, and crystal growth. These parameters were determined simultaneously by fitting the full population balance model to process measurements obtained from multiple unseeded paracetamol-ethanol crystallization runs. The major contributions of this thesis study are (1) providing a novel methodology for using FBRM measurements to estimate the CSD; (2) development of an experimental protocol that provided data sets rich in information on crystal growth and primary and secondary nucleation; (3) interpretation of kinetics so that appropriate model parameters could be extracted from fitting population balances to experimental data; and (4) identification of the potential importance of secondary nucleation relative to primary nucleation. The protocol and methods developed in this study can be applied to other systems for evaluating and improving batch crystallization processes.
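One way to picture the empirical CSD-CLD correlation is as a linear map fitted from paired measurements and inverted under a non-negativity constraint; in the sketch below the "true" mapping matrix and the synthetic calibration pairs are placeholders for experimental data, and the linear form itself is a simplifying assumption rather than the thesis's model.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_csd, n_cld, n_pairs = 20, 30, 200

# Stand-in for the unknown physics: a smooth matrix mapping a crystal
# size distribution (CSD) to a chord-length distribution (CLD)
A_true = np.exp(-0.1 * (np.arange(n_cld)[:, None]
                        - 1.5 * np.arange(n_csd)[None, :]) ** 2)

# "Calibration data": paired CSD/CLD measurements with noise
CSD = rng.uniform(size=(n_pairs, n_csd))
CLD = CSD @ A_true.T + 0.01 * rng.normal(size=(n_pairs, n_cld))

# Fit the empirical map by least squares: CLD ~= CSD @ A.T
A_fit, *_ = np.linalg.lstsq(CSD, CLD, rcond=None)
A_fit = A_fit.T

# Recover a CSD from a new measured CLD, enforcing non-negativity
csd_true = np.exp(-0.5 * (np.arange(n_csd) - 8) ** 2 / 4)
cld_meas = A_true @ csd_true + 0.01 * rng.normal(size=n_cld)
csd_est, _ = nnls(A_fit, cld_meas)
print("relative CSD recovery error:",
      np.linalg.norm(csd_est - csd_true) / np.linalg.norm(csd_true))
```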
267

Ensemble Filtering Methods for Nonlinear Dynamics

Kim, Sangil January 2005 (has links)
Standard ensemble filtering schemes such as the Ensemble Kalman Filter (EnKF) and Sequential Monte Carlo (SMC) do not properly represent states of low prior probability when the number of samples is too small and the dynamical system is high-dimensional with highly non-Gaussian statistics. For example, when the standard ensemble methods are applied to two well-known simple but highly nonlinear systems, a one-dimensional stochastic diffusion process in a double-well potential and the well-known three-dimensional chaotic dynamical system of Lorenz, they produce erroneous results in tracking the transitions of the systems from one state to the other. In this dissertation, a set of new parametric resampling methods is introduced to overcome this problem. The new filtering methods are motivated by a general H-theorem for the relative entropy of Markov stochastic processes. The entropy-based filters first approximate a prior distribution of a given system by a mixture of Gaussians, with the Gaussian components representing different regions of the system. Then the parameters in each Gaussian, i.e., weight, mean, and covariance, are determined sequentially as new measurements become available. These alternative filters yield a natural generalization of the EnKF method to systems with highly non-Gaussian statistics; they reduce to the EnKF when the mixture model consists of a single Gaussian and measurements are taken on full states. In addition, the new filtering methods give the quantities of relative entropy and log-likelihood as by-products with no extra cost. We examine the potential usage and qualitative behaviors of the relative entropy and log-likelihood for the new filters; corresponding results for EnKF and SMC are also included. We present results of the new methods applied to the above two ordinary differential equations and one partial differential equation, with comparisons to the standard filters, EnKF and SMC. These results show that the entropy-based filters correctly track the transitions between likely states in both highly nonlinear systems even with a small sample size of N = 100.
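For reference, the stochastic EnKF analysis step that the entropy-based filters generalize fits in a few lines; the state dimension, observation operator, and noise levels below are assumed for illustration.

```python
import numpy as np

def enkf_update(ensemble, y_obs, H, obs_var, rng):
    """Stochastic EnKF analysis step.

    ensemble : (N, d) array of forecast members
    y_obs    : (m,) observation vector
    H        : (m, d) linear observation operator
    """
    N = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)              # anomalies
    P = X.T @ X / (N - 1)                             # sample covariance
    R = obs_var * np.eye(len(y_obs))
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
    # perturbed observations keep the analysis spread statistically correct
    perturbed = y_obs + rng.normal(0, np.sqrt(obs_var), (N, len(y_obs)))
    innov = perturbed - ensemble @ H.T
    return ensemble + innov @ K.T

rng = np.random.default_rng(3)
N, d = 100, 3                                         # sample size as in the abstract
ens = rng.normal(0.0, 2.0, (N, d))                    # forecast ensemble
H = np.array([[1.0, 0.0, 0.0]])                       # observe first component only
analysis = enkf_update(ens, np.array([1.5]), H, obs_var=0.5, rng=rng)
print("prior mean:", ens.mean(axis=0), "-> posterior mean:", analysis.mean(axis=0))
```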
268

Towards improved identification of spatially-distributed rainfall runoff models

Pokhrel, Prafulla January 2010 (has links)
Distributed rainfall-runoff hydrologic models can be highly effective in improving flood forecasting capabilities at ungauged, interior locations of a watershed. However, their implementation in operational decision-making is hindered by the high dimensionality of the state-parameter space and by a lack of methods and understanding for properly exploiting and incorporating available spatio-temporal information about the system. This dissertation is composed of a sequence of five studies whose overall goal is to improve understanding of problems relating to parameter identifiability in distributed models and to develop methodologies for their calibration. The first study proposes and investigates an approach for calibrating catchment-scale distributed rainfall-runoff models using conventionally available data. The process, called regularization, uses spatial information about soils and land use that is embedded in prior parameter estimates (Koren et al. 2000), together with knowledge of watershed characteristics, to constrain and reduce the dimensionality of the feasible parameter space. The methodology is further extended in the second and third studies to improve the extraction of 'hydrologically relevant' information from the observed streamflow hydrograph. Hydrological relevance is provided by using signature measures (Yilmaz et al. 2008) that correspond to major watershed functions. While the second study applies a manual selection procedure to constrain parameter sets from the subset of post-calibrated solutions, the third develops an automatic procedure based on a penalty-function optimization approach. The fourth study investigates the relative impact of the commonly used multiplier approach to distributed model calibration in comparison with other spatial regularization strategies, and also investigates whether calibration to data at the catchment outlet can provide improved performance at interior locations. The model calibration study, conducted for three mid-sized catchments in the US, led to the important finding that basin outlet hydrographs might not generally contain information regarding the spatial variability of the parameters, and that calibration of the overall mean of the spatially distributed parameter fields may be sufficient for flow forecasting at the outlet. This was the motivation for the fifth study, which investigates to what degree the spatial characteristics of parameter and rainfall fields are observable in catchment outlet hydrographs.
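The regularization idea of the first and fourth studies, calibrating a low-dimensional adjustment of a prior parameter field rather than every cell independently, can be sketched with a single multiplier; the toy linear-reservoir watershed and synthetic data below are assumptions for illustration, not the dissertation's models or catchments.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
n_cells, n_steps = 50, 300

# Prior spatially distributed storage coefficients (e.g., from soils data)
k_prior = rng.uniform(0.02, 0.08, n_cells)
rain = rng.exponential(1.0, n_steps) * (rng.uniform(size=n_steps) < 0.2)

def outlet_flow(k):
    """Each cell is a linear reservoir; outlet flow is the sum of releases."""
    S = np.zeros(n_cells)
    q = np.empty(n_steps)
    for t in range(n_steps):
        S = S + rain[t]                  # uniform rainfall onto every cell
        release = k * S
        S = S - release
        q[t] = release.sum()
    return q

# Synthetic "observed" hydrograph generated with a true multiplier of 1.4
q_obs = outlet_flow(1.4 * k_prior) + rng.normal(0, 0.05, n_steps)

# Regularized calibration: one multiplier instead of n_cells free parameters
res = minimize_scalar(lambda m: np.sum((outlet_flow(m * k_prior) - q_obs) ** 2),
                      bounds=(0.2, 5.0), method="bounded")
print("estimated multiplier:", res.x)
```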
269

Chest Observer for Crash Safety Enhancement

Blåberg, Christian January 2008 (has links)
Feedback control of chest acceleration or chest deflection is believed to be a good way of minimizing the risk of injury. In order to implement such a controller in a car, an observer estimating these responses is needed. The objective of this study was to develop a model of the dummy's chest capable of estimating chest acceleration and chest deflection during frontal crashes in real time. The sensor data used come from a car accelerometer and the spindle rotation sensor of the belt; the data were collected from dummies during crash tests. The study accomplished these aims using a simple linear model of the chest built from masses, springs, and dampers, with the model parameters estimated through system identification. Two types of black-box models were also studied: an ARX model and a state-space model. The models were tested and validated against data from different crash setups. The results show that all of the studied models can be used to estimate the dummy responses, the physical grey-box model and the black-box state-space model in particular.
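Of the identified model structures, the ARX form is the simplest to reproduce: a linear regression of the current output on past outputs and inputs, solvable by ordinary least squares. The sketch below fits a second-order ARX model to synthetic data; the model orders and the synthetic system are assumptions, not the thesis's crash-test data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in for a crash signal: a known 2nd-order system driven by
# an input u (e.g., belt spindle rotation), output y (e.g., chest deflection)
n = 500
u = rng.normal(size=n)
y = np.zeros(n)
for k in range(2, n):
    y[k] = (1.5 * y[k-1] - 0.7 * y[k-2]
            + 0.5 * u[k-1] + 0.2 * u[k-2]
            + 0.02 * rng.normal())

# ARX(2, 2) least-squares fit:
#   y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2]
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print("estimated [a1, a2, b1, b2]:", theta)   # expect ~[1.5, -0.7, 0.5, 0.2]
```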
270

An Option Pricing Model with Regime-Switching Economic Indicators

Ma, Zongming Jr 23 August 2013 (has links)
Although the Black-Scholes (BS) model and its alternatives have been widely applied in finance, their flaws have drawn the attention of many investors and risk managers. The BS model fails to explain the volatility smile, and its alternatives, such as the BS model with a Poisson jump process, fail to explain volatility clustering. Based on the literature, a novel dynamic regime-switching option-pricing model is developed in this thesis to overcome the flaws of the traditional option pricing models. Five macroeconomic indicators are identified as the drivers of economic states over time. Two regimes are selected among all likely numbers of regimes under the Bayesian Information Criterion (BIC). Both in-sample and out-of-sample tests are constructed to examine the predictive power of the model. Empirical results show that the two-state regime-switching option-pricing model exhibits significant predictive power.
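A stripped-down regime-switching pricer: simulate a two-state Markov chain for the volatility regime and Monte Carlo price a European call under it. The transition matrix, the two volatilities, and the plain Markov chain (in place of the thesis's macroeconomic-indicator-driven regimes) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Two volatility regimes (calm / turbulent) with an assumed transition matrix
sigmas = np.array([0.15, 0.40])
P = np.array([[0.98, 0.02],
              [0.05, 0.95]])            # illustrative persistence, not fitted

S0, K, r, T = 100.0, 100.0, 0.02, 1.0
n_steps, n_paths = 252, 20_000
dt = T / n_steps

# Simulate the Markov regime chain for all paths at once
states = np.zeros((n_steps, n_paths), dtype=int)
u = rng.uniform(size=(n_steps, n_paths))
for t in range(1, n_steps):
    stay = u[t] < P[states[t-1], states[t-1]]   # probability of staying put
    states[t] = np.where(stay, states[t-1], 1 - states[t-1])

# Risk-neutral GBM increments with the regime-dependent volatility
sig = sigmas[states]
z = rng.standard_normal((n_steps, n_paths))
log_paths = np.cumsum((r - 0.5 * sig**2) * dt + sig * np.sqrt(dt) * z, axis=0)
ST = S0 * np.exp(log_paths[-1])

price = np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()
print(f"regime-switching MC call price: {price:.3f}")
```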
