321

A method for parameter estimation and system identification for model based diagnostics

Rengarajan, Sankar Bharathi 16 February 2011
Model-based fault detection techniques utilize functional redundancies in the static and dynamic relationships among system inputs and outputs for fault detection and isolation. Analytical models based on the underlying physics of the system can capture the dependencies between different measured signals in terms of system states and parameters, so these physical models can be used as a tool to detect and isolate system faults. As a machine degrades, system outputs deviate from desired outputs, generating residuals defined by the error between sensor measurements and the corresponding model-simulated signals. These error residuals contain valuable information for interpreting system states and parameters. Taking the measurements from a faulty system as the baseline, the parameters of the idealized model can be varied to minimize these residuals, a process called "parameter tuning". A framework to automate this parameter-tuning process is presented, with a focus on DC motors and 3-phase induction motors. The parameter tuning module is a multi-tier module designed to operate on real system models that are highly nonlinear. It combines artificial intelligence techniques, namely Quasi-Monte Carlo (QMC) sampling (Hammersley sequencing) and a genetic algorithm (the Non-Dominated Sorting Genetic Algorithm), with an Extended Kalman Filter (EKF), which exploits the system dynamics information available through the physical models. A tentative Graphical User Interface (GUI) was developed to simplify interaction between a machine operator and the module. The tuning module was tested with real measurements from a DC motor, and a simulation study was performed on a 3-phase induction motor by suitably adjusting parameters in an analytical model. The QMC sampling and genetic algorithm stages worked well even on measurement data from the system operating in steady state, but at the cost of computational expense and the inability to estimate parameters online; together they act as a batch estimator. The EKF module enabled online estimation, with updates made as new measurements arrive, but observability of the system from the incoming measurements posed a major challenge for the state estimation filters. Implementation details and results are included, with plots comparing the real and faulty systems.
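A minimal sketch of the first stage of such a tuning module — Hammersley quasi-Monte Carlo sampling of candidate parameters followed by residual minimization against an idealized DC-motor model — is given below. The motor model, the parameter names (R, K, b), the bounds, and the synthetic measurements are illustrative assumptions, not the thesis's actual implementation, and the genetic-algorithm and EKF stages are omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

# --- Hammersley quasi-Monte Carlo sequence (first stage of the tuning module) ---
def radical_inverse(i, base):
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def hammersley(n, dim):
    primes = [2, 3, 5, 7, 11]
    pts = np.empty((n, dim))
    pts[:, 0] = np.arange(n) / n                       # regular first coordinate
    for j in range(1, dim):
        pts[:, j] = [radical_inverse(i, primes[j - 1]) for i in range(n)]
    return pts

# --- Idealized DC-motor model: states are armature current i and speed w ---
def dc_motor(t, x, V, R, L, K, J, b):
    i, w = x
    return [(V - R * i - K * w) / L, (K * i - b * w) / J]

def simulate_speed(theta, t, V=12.0, L=0.05, J=0.01):
    R, K, b = theta
    sol = solve_ivp(dc_motor, (t[0], t[-1]), [0.0, 0.0],
                    t_eval=t, args=(V, R, L, K, J, b))
    return sol.y[1]                                    # simulated speed trajectory

# --- "Parameter tuning": pick the QMC sample that minimizes the residual norm ---
def tune(measured_speed, t, bounds, n_samples=256):
    lo, hi = np.array(bounds).T
    candidates = lo + hammersley(n_samples, len(bounds)) * (hi - lo)
    residuals = [np.sum((measured_speed - simulate_speed(theta, t)) ** 2)
                 for theta in candidates]
    return candidates[int(np.argmin(residuals))]

if __name__ == "__main__":
    t = np.linspace(0.0, 2.0, 200)
    true_theta = (1.2, 0.05, 0.001)                    # R, K, b of the "faulty" machine
    measured = simulate_speed(true_theta, t) + np.random.normal(0, 0.5, t.size)
    est = tune(measured, t, bounds=[(0.5, 2.0), (0.01, 0.1), (1e-4, 5e-3)])
    print("estimated R, K, b:", est)
```

In the multi-tier scheme described in the abstract, the best QMC candidates would seed the genetic-algorithm stage, and the EKF would then refine the estimates online as new measurements arrive.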
322

Vehicle-terrain parameter estimation for small-scale robotic tracked vehicle

Dar, Tehmoor Mehmoud 02 August 2011
Methods for estimating vehicle-terrain interaction parameters for small-scale robotic vehicles have been formulated and evaluated using both simulation and experimental studies. A model basis was developed, guided by experimental studies with an iRobot PackBot. The intention was to demonstrate whether a nominally instrumented robotic vehicle could be used as a test platform for generating data for vehicle-terrain parameter estimation. A comprehensive skid-steered model was found to be sensitive enough to distinguish between various forms of unknown terrain. The simulation study also verified that the Bekker model for large-scale vehicles adopted for this research was applicable to the small-scale robotic vehicle used in this work. This was also confirmed by estimating coefficients of friction and establishing their dependence on forward velocity and turning radius as the vehicle traverses different terrains. Having established that mobility measurements for this robotic vehicle were sufficiently sensitive, it was found that estimates could be made of key dynamic variables and vehicle-terrain interaction parameters. Four main contributions are described for reliably and robustly using PackBot data for vehicle-terrain property estimation. These estimation methods should contribute to efforts to improve the mobility of small-scale tracked vehicles on uncertain terrains. The approach is embodied in a multi-tiered algorithm based on the dynamic and kinematic models for skid steering as well as tractive force models parameterized by key vehicle-terrain parameters. To estimate and characterize the key parameters, nonlinear estimation techniques such as the Extended Kalman Filter (EKF), Unscented Kalman Filter (UKF), and a General Newton-Raphson (GNR) method are integrated into this multi-tiered algorithm. A unique idea of using an EKF with an added State Noise Compensation algorithm is presented, showing robustness and consistency in estimating slip variables and other parameters for deformable terrains. In the multi-tiered algorithm, a kinematic model of the robotic vehicle is used to estimate slip variables and turning radius. These estimated variables are stored in a truth table and used in a skid-steered dynamic model to estimate the coefficients of friction. The total estimated slip on the left and right tracks, along with the total tractive force computed using a motor model, are then used in the GNR algorithm to estimate the key vehicle-terrain parameters. These estimated parameters are cross-checked and confirmed with EKF estimation results. Further, the simulation results verify that the tracked vehicle's tractive force does not depend on cohesion for frictional soils. This sequential algorithm is shown to be effective in estimating vehicle-terrain interaction properties with relatively good accuracy. The estimated results obtained from the UKF and EKF are verified and compared with available experimental data, and tested on a PackBot traversing specified terrains at the Southwest Research Institute (SwRI) Small Robotics Testbed in San Antonio, Texas. Finally, based on the development and evaluation of small-scale vehicle testing, the effectiveness of on-board sensing methods and estimation techniques for potential use in real-time estimation of vehicle-terrain parameters is also discussed.
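The GNR stage described above can be sketched as a Gauss-Newton fit of a Bekker/Wong-type tractive-force relation to slip-force pairs, estimating a friction angle and a shear deformation modulus. The specific force expression, load, contact geometry, and noise level below are placeholder assumptions for illustration, not PackBot values from the dissertation.

```python
import numpy as np

def tractive_force(slip, phi, K, W=200.0, A=0.04, c=0.0, l=0.45):
    """Bekker/Wong-style tractive force for a track of length l under load W.
    W, A, c, and l are illustrative placeholders, not measured PackBot data."""
    shear = 1.0 - (K / (slip * l)) * (1.0 - np.exp(-slip * l / K))
    return (A * c + W * np.tan(phi)) * shear

def gauss_newton(slip, F_meas, theta0, n_iter=20):
    """Gauss-Newton (GNR-style) iteration on the tractive-force residuals."""
    theta = np.array(theta0, dtype=float)
    for _ in range(n_iter):
        r = F_meas - tractive_force(slip, *theta)
        # numerical Jacobian of the residual with respect to (phi, K)
        J = np.empty((slip.size, theta.size))
        for j in range(theta.size):
            d = np.zeros_like(theta); d[j] = 1e-6
            J[:, j] = -(tractive_force(slip, *(theta + d)) -
                        tractive_force(slip, *theta)) / 1e-6
        theta -= np.linalg.lstsq(J, r, rcond=None)[0]
    return theta

if __name__ == "__main__":
    slip = np.linspace(0.05, 0.6, 30)
    F_meas = tractive_force(slip, phi=np.deg2rad(30), K=0.025) \
             + np.random.normal(0, 1.0, slip.size)
    phi_hat, K_hat = gauss_newton(slip, F_meas, theta0=[np.deg2rad(20), 0.01])
    print("friction angle [deg]:", np.degrees(phi_hat), " shear modulus K [m]:", K_hat)
```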
323

Process measurements and kinetics of unseeded batch cooling crystallization

Li, Huayu 08 June 2015
This thesis describes the development of an empirical model of focused beam reflectance measurement (FBRM) and the application of the model to monitoring batch cooling crystallization and extracting information on crystallization kinetics. Batch crystallization is widely used in the fine chemical and pharmaceutical industries to purify and separate solid products. The crystal size distribution (CSD) of the final product greatly influences product characteristics such as purity, stability, and bioavailability, and it also has a great effect on downstream processing. To achieve a desired CSD of the final product, batch crystallization processes need to be monitored, understood, and controlled. FBRM is a promising technique for in situ determination of the CSD. It is based on scattering of laser light and provides a chord-length distribution (CLD), which is a complex function of crystal geometry. In this thesis, an empirical correlation between CSDs and CLDs is established and applied in place of existing first-principles FBRM models. Built from experimental data, the empirical mapping of CSD and CLD is advantageous in representing some effects that are difficult to quantify by mathematical and physical expressions. The developed model enables computation of the CSD from measured CLDs, so that the evolution of the crystal population can be followed during batch cooling crystallization. Paracetamol, a common drug product also known as acetaminophen, is selected as the model compound in this thesis study. The empirical model was first established and verified in a paracetamol-nonsolvent (toluene) slurry, and later applied to the paracetamol-ethanol crystallization system. Complementary to the FBRM measurements, solute concentrations in the liquid phase were determined from in situ infrared spectra, and the two measurements were used jointly to monitor the crystallization process. This framework of measuring the CSD and the solute concentration allows the estimation of crystallization kinetics, including those for primary nucleation, secondary nucleation, and crystal growth. These parameters were determined simultaneously by fitting the full population balance model to process measurements obtained from multiple unseeded paracetamol-ethanol crystallization runs. The major contributions of this thesis study are (1) a novel methodology for using FBRM measurements to estimate the CSD; (2) an experimental protocol that provided data sets rich in information on crystal growth and primary and secondary nucleation; (3) an interpretation of the kinetics such that appropriate model parameters could be extracted from fitting population balances to experimental data; and (4) identification of the potential importance of secondary nucleation relative to primary nucleation. The protocol and methods developed in this study can be applied to other systems for evaluating and improving batch crystallization processes.
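A hedged sketch of the kind of empirical CSD-to-CLD mapping and inversion described above is shown below. It assumes, for illustration only, that the map can be represented by a single linear matrix fitted from paired calibration measurements and inverted with a nonnegativity constraint; the thesis's actual empirical model need not take this form.

```python
import numpy as np
from scipy.optimize import nnls

def fit_map(csd_cal, cld_cal):
    """Fit a linear empirical map M so that cld ≈ M @ csd, using paired
    (CSD, CLD) calibration runs. csd_cal: (n_runs, n_size_bins),
    cld_cal: (n_runs, n_chord_bins). A small ridge term stabilizes the fit."""
    lam = 1e-3
    A = csd_cal.T @ csd_cal + lam * np.eye(csd_cal.shape[1])
    return np.linalg.solve(A, csd_cal.T @ cld_cal).T      # shape (n_chord, n_size)

def invert_cld(M, cld_meas):
    """Recover the CSD from a measured CLD with a nonnegativity constraint."""
    csd, _ = nnls(M, cld_meas)
    return csd

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_size, n_chord, n_runs = 20, 25, 60
    M_true = np.abs(rng.normal(size=(n_chord, n_size)))   # stand-in "geometry" map
    csd_cal = np.abs(rng.normal(size=(n_runs, n_size)))
    cld_cal = csd_cal @ M_true.T + rng.normal(0, 0.01, (n_runs, n_chord))
    M = fit_map(csd_cal, cld_cal)
    csd_est = invert_cld(M, cld_cal[0])
    print("recovered CSD bins:", np.round(csd_est, 2))
```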
324

Ensemble Filtering Methods for Nonlinear Dynamics

Kim, Sangil January 2005 (has links)
Standard ensemble filtering schemes such as the Ensemble Kalman Filter (EnKF) and Sequential Monte Carlo (SMC) do not properly represent states of low prior probability when the number of samples is too small and the dynamical system is high dimensional with highly non-Gaussian statistics. For example, when the standard ensemble methods are applied to two well-known simple but highly nonlinear systems, a one-dimensional stochastic diffusion process in a double-well potential and the well-known three-dimensional chaotic dynamical system of Lorenz, they produce erroneous results when tracking transitions of the systems from one state to the other. In this dissertation, a set of new parametric resampling methods is introduced to overcome this problem. The new filtering methods are motivated by a general H-theorem for the relative entropy of Markov stochastic processes. The entropy-based filters first approximate the prior distribution of a given system by a mixture of Gaussians, whose components represent different regions of the system. The parameters of each Gaussian, i.e., weight, mean, and covariance, are then determined sequentially as new measurements become available. These alternative filters yield a natural generalization of the EnKF method to systems with highly non-Gaussian statistics, reducing to the EnKF when the mixture model consists of a single Gaussian and measurements are taken on the full state. In addition, the new filtering methods give the relative entropy and log-likelihood as by-products at no extra cost. We examine the potential usage and qualitative behavior of the relative entropy and log-likelihood for the new filters; the corresponding results for EnKF and SMC are also included. We present results of the new methods applied to the two ordinary differential equations above and to one partial differential equation, with comparisons to the standard filters, EnKF and SMC. These results show that the entropy-based filters correctly track the transitions between likely states in both highly nonlinear systems, even with a small sample size of N = 100.
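For reference, a minimal sketch of the standard stochastic EnKF analysis step that the entropy-based filters generalize is given below; the observation operator, noise covariance, and the toy double-well-style prior are placeholders, not the dissertation's test problems.

```python
import numpy as np

def enkf_update(ensemble, y_obs, H, R, rng=np.random.default_rng()):
    """One stochastic EnKF analysis step.
    ensemble : (n_state, n_members) forecast ensemble
    y_obs    : (n_obs,) observation vector
    H        : (n_obs, n_state) linear observation operator
    R        : (n_obs, n_obs) observation-error covariance
    """
    n_state, n_members = ensemble.shape
    x_mean = ensemble.mean(axis=1, keepdims=True)
    X = (ensemble - x_mean) / np.sqrt(n_members - 1)      # state anomalies
    Y = H @ X                                             # observation-space anomalies
    K = X @ Y.T @ np.linalg.inv(Y @ Y.T + R)              # Kalman gain
    # perturbed observations, one per member
    y_pert = y_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(y_obs)), R, n_members).T
    return ensemble + K @ (y_pert - H @ ensemble)

if __name__ == "__main__":
    # Double-well-like toy: 100 members, scalar state, direct noisy observation.
    rng = np.random.default_rng(1)
    ens = rng.normal(-1.0, 0.3, size=(1, 100))            # prior clustered in one well
    H = np.array([[1.0]])
    R = np.array([[0.1]])
    ens_a = enkf_update(ens, np.array([1.0]), H, R, rng)
    print("posterior mean:", ens_a.mean())
```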
325

TOWARDS IMPROVED IDENTIFICATION OF SPATIALLY-DISTRIBUTED RAINFALL RUNOFF MODELS

Pokhrel, Prafulla January 2010 (has links)
Distributed rainfall-runoff hydrologic models can be highly effective in improving flood forecasting capabilities at ungauged, interior locations of a watershed. However, their implementation in operational decision-making is hindered by the high dimensionality of the state-parameter space and by a lack of methods and understanding of how to properly exploit and incorporate available spatio-temporal information about the system. This dissertation is composed of a sequence of five studies whose overall goal is to improve understanding of problems relating to parameter identifiability in distributed models and to develop methodologies for their calibration. The first study proposes and investigates an approach for calibrating catchment-scale distributed rainfall-runoff models using conventionally available data. The process, called regularization, uses spatial information about soils and land use that is embedded in prior parameter estimates (Koren et al. 2000), together with knowledge of watershed characteristics, to constrain and reduce the dimensionality of the feasible parameter space. The methodology is further extended in the second and third studies to improve extraction of 'hydrologically relevant' information from the observed streamflow hydrograph. Hydrological relevance is provided by using signature measures (Yilmaz et al. 2008) that correspond to major watershed functions. While the second study applies a manual selection procedure to constrain parameter sets drawn from the subset of post-calibration solutions, the third develops an automatic procedure based on a penalty-function optimization approach. The fourth study investigates the relative impact of the commonly used multiplier approach to distributed model calibration in comparison with other spatial regularization strategies, and also investigates whether calibration to data at the catchment outlet can provide improved performance at interior locations. The model calibration study, conducted for three mid-sized catchments in the US, led to the important finding that basin outlet hydrographs might not generally contain information regarding the spatial variability of the parameters, and that calibration of the overall mean of the spatially distributed parameter fields may be sufficient for flow forecasting at the outlet. This motivated the fifth study, which investigates to what degree the spatial characteristics of parameter and rainfall fields are observable in catchment outlet hydrographs.
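A sketch of the multiplier-style regularized calibration discussed above is given below: each a priori parameter field keeps its spatial pattern and only one scalar multiplier per field is calibrated against the outlet hydrograph. The toy linear-reservoir "distributed model", the field name, and the bounds are assumptions for illustration, not the dissertation's models or data.

```python
import numpy as np
from scipy.optimize import minimize

def simulate_outlet_flow(param_fields, forcing):
    """Placeholder for a distributed rainfall-runoff model run; returns an
    outlet hydrograph from a trivial linear-reservoir stand-in per grid cell."""
    k = param_fields["storage_coeff"]                  # (n_cells,) parameter field
    q = np.zeros(forcing.shape[0])
    s = np.zeros(k.size)
    for t, p in enumerate(forcing):
        s = s + p                                      # add rainfall to each cell store
        out = s / k                                    # linear-reservoir release
        s = s - out
        q[t] = out.sum()                               # route everything to the outlet
    return q

def calibrate_multipliers(prior_fields, forcing, q_obs):
    """Regularized calibration: adjust one multiplier per a priori parameter field
    (spatial pattern from the priors is kept fixed)."""
    names = list(prior_fields)

    def objective(m):
        fields = {n: m[i] * prior_fields[n] for i, n in enumerate(names)}
        return np.sum((q_obs - simulate_outlet_flow(fields, forcing)) ** 2)

    res = minimize(objective, x0=np.ones(len(names)),
                   bounds=[(0.2, 5.0)] * len(names), method="L-BFGS-B")
    return dict(zip(names, res.x))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    prior = {"storage_coeff": rng.uniform(5.0, 15.0, 50)}   # a priori gridded field
    rain = rng.exponential(2.0, 200)
    q_obs = simulate_outlet_flow({"storage_coeff": 1.6 * prior["storage_coeff"]}, rain)
    print(calibrate_multipliers(prior, rain, q_obs))
```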
326

Chest Observer for Crash Safety Enhancement

Blåberg, Christian January 2008 (has links)
Feedback control of chest acceleration or chest deflection is believed to be a good way of minimizing the risk of injury. In order to implement such a controller in a car, an observer estimating these responses is needed. The objective of the study was to develop a model of the dummy's chest capable of estimating the chest acceleration and the chest deflection during frontal crashes in real time. The sensor data used come from a car accelerometer and a belt spindle rotation sensor, and were collected from dummies during crash tests. The study accomplished these aims using a simple linear model of the chest built from masses, springs, and dampers; the parameters of the model were estimated through system identification. Two types of black-box models were also studied, an ARX model and a state-space model. The models were tested and validated against data coming from different crash setups. The results show that all of the studied models can be used to estimate the dummy responses, the physical grey-box model and the black-box state-space model in particular. / By using feedback of chest acceleration and chest deflection, the risk of injury in passenger-car crashes is expected to be reduced. Implementing this requires an observer for these quantities. The goal of this study is to develop a model that can estimate chest acceleration and chest deflection in real time during frontal crashes. The sensor data used came from an accelerometer and a sensor measuring the rotation of the belt spindle. The chest was modelled with linear springs and dampers, whose parameters were estimated from dummy crash-test data. Two black-box models were also developed, an ARX model and one in state-space form. The models were tested and validated using data from different kinds of crash tests. The results show that all of the studied models can be used to estimate the above-mentioned quantities; the physical model and the black-box state-space model performed best.
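A minimal sketch of identifying one of the black-box structures mentioned above — an ARX model fitted by ordinary least squares — is shown below; the synthetic input/output data and the model orders are illustrative, not the crash-test signals used in the thesis.

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Least-squares fit of an ARX model
        y[t] = a1*y[t-1] + ... + a_na*y[t-na] + b1*u[t-1] + ... + b_nb*u[t-nb]
    y : measured output (e.g. chest acceleration), u : input (e.g. belt spindle rate)."""
    n0 = max(na, nb)
    rows = []
    for t in range(n0, len(y)):
        rows.append(np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]]))
    Phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[n0:], rcond=None)
    return theta[:na], theta[na:]                      # AR and input coefficients

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    u = rng.normal(size=500)
    y = np.zeros(500)
    for t in range(2, 500):                            # synthetic "true" system
        y[t] = 1.5 * y[t - 1] - 0.7 * y[t - 2] + 0.5 * u[t - 1] + rng.normal(0, 0.01)
    a, b = fit_arx(y, u)
    print("AR coefficients:", a, " input coefficients:", b)
```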
327

An Option Pricing Model with Regime-Switching Economic Indicators

Ma, Zongming Jr 23 August 2013
Although the Black-Scholes (BS) model and its alternatives have been widely applied in finance, their flaws have drawn the attention of many investors and risk managers. The BS model fails to explain the volatility smile, and its alternatives, such as the BS model with a Poisson jump process, fail to explain volatility clustering. Based on the literature, a novel dynamic regime-switching option-pricing model is developed in this thesis to overcome the flaws of the traditional option-pricing models. Five macroeconomic indicators are identified as the drivers of economic states over time. Two regimes are selected from among the candidate numbers of regimes under the Bayesian Information Criterion (BIC). Both in-sample and out-of-sample tests are constructed to examine the predictions of the model. Empirical results show that the two-state regime-switching option-pricing model exhibits significant predictive power.
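A hedged sketch of regime selection by BIC is shown below, using a static Gaussian mixture over an indicator panel as a stand-in for the full Markov regime-switching dynamics; the synthetic indicators and the scikit-learn mixture model are illustrative assumptions, not the thesis's estimation procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_regimes(indicators, max_regimes=4, seed=0):
    """Pick the number of regimes by BIC, fitting Gaussian mixtures to a
    macroeconomic indicator panel (a static stand-in for Markov switching)."""
    bics = {}
    for k in range(1, max_regimes + 1):
        gm = GaussianMixture(n_components=k, covariance_type="full",
                             random_state=seed).fit(indicators)
        bics[k] = gm.bic(indicators)
    best = min(bics, key=bics.get)
    return best, bics

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    # synthetic 5-indicator panel drawn from two "economic states"
    calm = rng.normal(0.0, 1.0, size=(300, 5))
    stress = rng.normal(3.0, 2.0, size=(120, 5))
    k, bics = select_regimes(np.vstack([calm, stress]))
    print("regimes selected by BIC:", k, bics)
```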
328

An Examination of the Lagrangian Length Scale in Plant Canopies using Field Measurements in an Analytical Lagrangian Equation

Brown, Shannon E 02 January 2013
Studies of trace gas fluxes have advanced the understanding of bulk interactions between the atmosphere and ecosystems. Micrometeorological instrumentation is currently unable to resolve vertical scalar sources and sinks within plant canopies. Inverted analytical Lagrangian equations provide a non-intrusive method to calculate source distributions. These equations are based on Taylor's (1921) description of scalar dispersion, which requires a measure of the degree of correlation between turbulent motions, defined by the Lagrangian length scale (L). Inverse Lagrangian (IL) analyses can be unstable, and the uncertainty in L leads to uncertainty in source predictions. A review of the literature on studies using IL analysis with various scalars in a multitude of canopy types found that parameterizations in which L reduces to zero at the ground produce better results in the IL analysis than those that increase closer to the ground, but no individual L parameterization gives consistently better results than any other. The review also found that the relationship between L and the measurable Eulerian length scale (Le) may be more complex in plant canopies than the linear scaling investigated in boundary-layer flows. The magnitude and profile shape of L were investigated within a corn and a forest canopy using field measurements to constrain an analytical Lagrangian equation. Measurements of net CO2 flux, soil-to-atmosphere CO2 flux, and in-canopy profiles of CO2 concentration provided the information required to solve for L with a global optimization algorithm for each half-hour interval. For dates when the corn was a strong CO2 sink, and for the majority of dates for the forest, the optimization frequently located L profiles that follow a convex shape. A constrained optimization then smoothed the profile shape to a sigmoidal equation. Inputting the optimized L profiles into the forward and inverse Lagrangian equations leads to strong correlations between measured and calculated concentrations (corn canopy: C_calc = 1.00 C_meas + 52.41 μmol m^-3, r^2 = 0.996; forest canopy: C_calc = 0.98 C_meas + 276.5 μmol m^-3, r^2 = 0.99) and fluxes (corn canopy: F_soil = 0.67 F_calc - 0.12 μmol m^-2 s^-1, r^2 = 0.71; F_net = 1.17 F_calc + 1.97 μmol m^-2 s^-1, r^2 = 0.85; forest canopy: F_soil = 0.72 F_calc - 1.92 μmol m^-2 s^-1, r^2 = 0.18; F_net = 1.24 F_calc + 0.65 μmol m^-2 s^-1, r^2 = 0.88). In the corn canopy, the coefficients of the sigmoidal equation were specific to each half hour and did not scale with any measured variable. Coefficients of the optimized L equation in the forest canopy scaled weakly with variables related to the stability above the canopy. Plausible L profiles for both canopies were associated with negative bulk Richardson number values. / Funding from NSERC.
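A simplified sketch of optimizing a sigmoidal L profile against measured in-canopy concentrations is given below. For tractability it uses the far-field gradient-diffusion limit (F = -sigma_w * L * dC/dz) in place of the full near-field/far-field analytical Lagrangian equations, and the profiles, source strengths, and coefficient bounds are illustrative assumptions rather than the thesis's data or formulation.

```python
import numpy as np
from scipy.optimize import differential_evolution

def L_profile(z, h, c1, c2, c3):
    """Sigmoidal Lagrangian length-scale profile (small near the ground,
    saturating toward the canopy top for suitable coefficients)."""
    return c1 / (1.0 + np.exp(-(z / h - c2) / c3))

def concentration(z, L, sigma_w, F_soil, S, C_top):
    """Far-field approximation: F(z) = -K(z) dC/dz with K = sigma_w * L;
    integrate the concentration downward from the canopy top."""
    dz = np.diff(z, prepend=0.0)
    F = F_soil + np.cumsum(S * dz)               # cumulative flux from the ground up
    K = np.maximum(sigma_w * L, 1e-6)
    C = np.empty_like(z)
    C[-1] = C_top
    for i in range(len(z) - 2, -1, -1):          # dC = (F / K) dz, integrated downward
        C[i] = C[i + 1] + F[i] / K[i] * (z[i + 1] - z[i])
    return C

def optimise_L(z, h, sigma_w, F_soil, S, C_meas):
    def cost(p):
        C = concentration(z, L_profile(z, h, *p), sigma_w, F_soil, S, C_meas[-1])
        return np.sum((C - C_meas) ** 2)
    res = differential_evolution(cost, bounds=[(0.05, 2.0), (0.0, 1.0), (0.02, 0.5)])
    return res.x

if __name__ == "__main__":
    z = np.linspace(0.1, 2.5, 12)                # measurement heights in a 2.5 m canopy
    sigma_w = 0.2 + 0.3 * z / z[-1]
    S = np.where(z < 2.0, -4.0, 0.0)             # illustrative canopy CO2 sink (umol m-3 s-1)
    C_meas = concentration(z, L_profile(z, 2.5, 0.8, 0.5, 0.15), sigma_w, 2.0, S, 15000.0)
    print("optimised sigmoid coefficients:", optimise_L(z, 2.5, sigma_w, 2.0, S, C_meas))
```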
329

Parameter estimation in nonlinear continuous-time dynamic models with modelling errors and process disturbances

Varziri, M. Saeed 25 June 2008
Model-based control and process optimization technologies are becoming more commonly used by chemical engineers. These algorithms rely on fundamental or empirical models that are frequently described by systems of differential equations with unknown parameters. It is, therefore, very important for modellers of chemical engineering processes to have access to reliable and efficient tools for parameter estimation in dynamic models. The purpose of this thesis is to develop an efficient and easy-to-use parameter estimation algorithm that can address difficulties that frequently arise when estimating parameters in nonlinear continuous-time dynamic models of industrial processes. The proposed algorithm has desirable numerical stability properties that stem from using piece-wise polynomial discretization schemes to transform the model differential equations into a set of algebraic equations. Consequently, parameters can be estimated by solving a nonlinear programming problem without requiring repeated numerical integration of the differential equations. Possible modelling discrepancies and process disturbances are accounted for in the proposed algorithm, and estimates of the process disturbance intensities can be obtained along with estimates of model parameters and states. Theoretical approximate confidence interval expressions for the parameters are developed. Through a practical two-phase nylon reactor example, as well as several simulation studies using stirred tank reactors, it is shown that the proposed parameter estimation algorithm can address difficulties such as: different types of measured responses with different levels of measurement noise, measurements taken at irregularly-spaced sampling times, unknown initial conditions for some state variables, unmeasured state variables, and unknown disturbances that enter the process and influence its future behaviour. / Thesis (Ph.D., Chemical Engineering) -- Queen's University, 2008-06-20.
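A minimal sketch of the spline/collocation idea described above — representing the state with a piecewise polynomial and penalizing model-equation residuals so that no repeated ODE integration is required — is given below for a single first-order model with one unknown parameter. The model, knot placement, and penalty weight are illustrative choices, not the algorithm developed in the thesis.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Example model: a single CSTR-like ODE  dx/dt = -k * x + u(t), with unknown k.
def model_rhs(t, x, k, u):
    return -k * x + u(t)

def objective(dec, t_knots, t_meas, y_meas, u, lam):
    """Decision variables: the parameter k plus state values at the spline knots.
    The objective trades off measurement fit against model-equation residuals
    evaluated at collocation points, avoiding repeated numerical integration."""
    k, x_knots = dec[0], dec[1:]
    spline = CubicSpline(t_knots, x_knots)
    meas_res = y_meas - spline(t_meas)
    t_col = np.linspace(t_knots[0], t_knots[-1], 80)          # collocation points
    model_res = spline(t_col, 1) - model_rhs(t_col, spline(t_col), k, u)
    return np.sum(meas_res ** 2) + lam * np.sum(model_res ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    u = lambda t: 1.0 + 0.5 * np.sin(t)
    k_true = 0.8
    # synthetic "plant" data generated once from the true model
    t_meas = np.linspace(0, 10, 40)
    sol = solve_ivp(model_rhs, (0, 10), [0.0], t_eval=t_meas, args=(k_true, u))
    y_meas = sol.y[0] + rng.normal(0, 0.02, t_meas.size)

    t_knots = np.linspace(0, 10, 15)
    dec0 = np.concatenate([[0.3], np.interp(t_knots, t_meas, y_meas)])
    res = minimize(objective, dec0, args=(t_knots, t_meas, y_meas, u, 10.0),
                   method="L-BFGS-B")
    print("estimated k:", res.x[0])
```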
330

Modeling of Molecular Weight Distributions in Ziegler-Natta Catalyzed Ethylene Copolymerizations

Thompson, Duncan 29 May 2009
The objective of this work is to develop mathematical models to predict molecular weight distributions (MWDs) of ethylene copolymers produced in an industrial gas-phase reactor using a Ziegler-Natta (Z-N) catalyst. Because of the multi-site nature of Z-N catalysts, models of Z-N catalyzed copolymerization tend to be very large and have many parameters that need to be estimated. It is important that the data available for parameter estimation be used effectively, and that a suitable balance be achieved between modeling rigour and simplification. In the thesis, deconvolution analysis is used to gain an understanding of how the polymer produced by various types of active sites on the Z-N catalyst responds to changes in the reactor operating conditions. This analysis reveals which reactions are important in determining the MWD and also shows that some types of active sites share similar behavior and can therefore share some kinetic parameters. With this knowledge, a simplified model is developed to predict MWDs of ethylene/hexene copolymers produced at 90 °C. Estimates of the parameters in this isothermal model provide good initial guesses for parameter estimation in a subsequent, more complex model. The isothermal model is extended to account for the effects of butene and temperature. Estimability analysis and cross-validation are used to determine which parameters should be estimated from the available industrial data set. Twenty model parameters are estimated so that the model provides good predictions of MWD and comonomer incorporation. Finally, D-, A-, and V-optimal experimental designs for improving the quality of the model predictions are determined. Difficulties with local minima are addressed and a comparison of the optimality criteria is presented. / Thesis (Ph.D., Chemical Engineering) -- Queen's University, 2009-05-28.
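A brief sketch of the deconvolution step described above is given below: the measured MWD (on a log10 M axis) is represented as a nonnegative combination of Flory most-probable distributions, one per assumed site type. The site Mn values are fixed here for illustration; in practice they would be estimated along with the mass fractions.

```python
import numpy as np
from scipy.optimize import nnls

def flory(log10_M, Mn):
    """Weight-based Flory most-probable distribution on a log10(M) axis."""
    M = 10.0 ** log10_M
    return np.log(10.0) * (M ** 2 / Mn ** 2) * np.exp(-M / Mn)

def deconvolve(log10_M, mwd, Mn_sites):
    """Find nonnegative mass fractions of each assumed site type so that the
    weighted sum of Flory components reproduces the measured MWD."""
    basis = np.column_stack([flory(log10_M, Mn) for Mn in Mn_sites])
    w, resid = nnls(basis, mwd)
    return w / w.sum(), resid

if __name__ == "__main__":
    x = np.linspace(2.5, 6.5, 200)                     # log10(molecular weight) axis
    Mn_sites = [5e3, 2e4, 8e4, 3e5]                    # illustrative site-type Mn values
    true_w = np.array([0.15, 0.35, 0.35, 0.15])
    mwd = np.column_stack([flory(x, Mn) for Mn in Mn_sites]) @ true_w
    w_hat, resid = deconvolve(x, mwd, Mn_sites)
    print("site mass fractions:", np.round(w_hat, 3))
```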
