1

Time-invariant, Data-based Modeling and Control of Batch Processes

Corbett, Brandon January 2016 (has links)
Batch reactors are often used to produce high quality products because any batch that does not meet quality specifications can be easily discarded. However, for high-value products, even a few wasted batches constitute substantial economic loss. Fortunately, databases of historical data that can be exploited to improve operation are often readily available. Motivated by these considerations, this thesis addresses the problem of direct, data-based quality control for batch processes. Specifically, two novel data-driven modeling and control strategies are proposed. The first approach addresses the quality modeling problem in two steps. To begin, a partial least squares (PLS) model is developed to relate complete batch trajectories to resulting batch qualities. Next, the so-called missing-data problem, encountered when using PLS models partway through a batch, is addressed using a data-driven, multiple-model dynamic modeling approach relating candidate input trajectories to future output behavior. The resulting overall model provides a causal link between inputs and quality and is used in a model predictive control scheme for direct quality control. Simulation results for two different polymerization reactors are presented that demonstrate the efficacy of the approach. The second strategy presented in this thesis is a state-space motivated, time-invariant quality modeling and control approach. In this work, subspace identification methods are adapted for use with transient batch data, allowing state-space dynamic models to be identified from historical data. Next, the identified states are related through an additional model to batch quality. The result is a causal, time-independent model that relates inputs to product quality. This model is applied in a shrinking horizon model predictive control scheme. Significantly, inclusion of batch duration as a control decision variable is permitted because of the time-invariant model. Simulation results for a polymerization reactor demonstrate the superior capability and performance of the proposed approach. / Thesis / Doctor of Philosophy (PhD) / High-end chemical products, ranging from pharmaceuticals to specialty plastics, are key to improving quality of life. For these products, production quality is more important than quantity. To produce high quality products, industries use a piece of equipment called a batch reactor. These reactors are favorable over alternatives because if any single batch fails to meet a quality specification, it can be easily discarded. However, given the high-value nature of these products, even a small number of discarded batches is costly. This motivates the current work, which addresses the complex topic of batch quality control. This task is achieved in two steps: first, methods are developed to model prior reactor behavior. These models can be applied to predict how the reactor will behave under future operating policies. Next, these models are used to make informed decisions that drive the reaction to the desired end product, eliminating off-spec batches.
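As an illustration of the first modeling step this abstract describes — a PLS model relating complete (unfolded) batch trajectories to final quality — the following minimal sketch uses synthetic data and scikit-learn; the batch dimensions, noise level, and number of latent components are illustrative assumptions, not values from the thesis.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    n_batches, n_samples, n_vars = 50, 100, 3
    # Batch-wise unfolding: each row holds one batch's complete trajectory.
    X = rng.standard_normal((n_batches, n_samples * n_vars))
    w = rng.standard_normal(n_samples * n_vars)
    y = X @ w + 0.1 * rng.standard_normal(n_batches)   # final batch quality

    pls = PLSRegression(n_components=5)                # latent-variable model
    pls.fit(X, y)
    print("R^2 on training batches:", pls.score(X, y))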
2

Scalable and Robust Designs of Model-Based Control Strategies for Energy-Efficient Buildings

Agbi, Clarence 01 May 2014 (has links)
In the wake of rising energy costs, there is a critical need for sustainable energy management of commercial and residential buildings. Buildings consume approximately 40% of the total energy consumed in the US, and current methods to reduce this level of consumption include energy monitoring, smart sensing, and advanced integrated building control. However, the building industry has been slow to replace current PID and rule-based control strategies with more advanced strategies such as model-based building control. This is largely due to the additional cost of accurately modeling the dynamics of the building and general uncertainty as to whether model-based controllers can be reliably used under real conditions. The first half of this thesis addresses the challenge of constructing accurate grey-box building models for control using model identification. Current identification methods estimate building model parameters poorly because of the complexity of the building model structure, and fail to do so quickly because these methods do not scale to large buildings. Therefore, we introduce the notion of parameter identifiability to determine those parameters in the building model that may not be accurately estimated, and we use this information to strategically improve the identifiability of the building model. Finally, we present a decentralized identification scheme to reduce the computational effort and time needed to identify large buildings. The second half of this thesis discusses the challenge of using uncertain building models to reliably control building temperature. Under real conditions, building models may not match the dynamics of the building, which directly causes increased building energy consumption and poor thermal comfort. To reduce the impact of model uncertainty on building control, we pose the model-based building control problem as a robust control problem using well-known H∞ control methods. Furthermore, we introduce a tuning law to reduce the conservativeness of a robust building control strategy in the presence of high model uncertainty, in both a centralized and a decentralized building control framework.
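The grey-box identification problem described here can be pictured with a first-order resistance-capacitance (RC) zone model; the sketch below fits R and C to temperature data by output-error minimization with SciPy. The single-zone structure, parameter values, and forcing signals are illustrative assumptions, not the building models of the thesis.

    import numpy as np
    from scipy.optimize import least_squares

    dt = 300.0                                  # time step [s]
    t = np.arange(0, 24 * 3600, dt)
    Ta = 10.0 + 5.0 * np.sin(2 * np.pi * t / 86400.0)    # outdoor temp [degC]
    Q = 1000.0 * (np.sin(2 * np.pi * t / 43200.0) > 0)   # heater power [W]

    def simulate(params, T0=20.0):
        R, C = params                # thermal resistance [K/W], capacitance [J/K]
        T = np.empty_like(t)
        T[0] = T0
        for k in range(len(t) - 1):  # Euler step of C*dT/dt = (Ta - T)/R + Q
            T[k + 1] = T[k] + dt / C * ((Ta[k] - T[k]) / R + Q[k])
        return T

    rng = np.random.default_rng(1)
    T_meas = simulate([0.01, 5e6]) + 0.05 * rng.standard_normal(len(t))
    fit = least_squares(lambda p: simulate(p) - T_meas,
                        x0=[0.02, 1e7], bounds=([5e-3, 1e6], [0.1, 1e8]))
    print("estimated R, C:", fit.x)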
3

Gröbner Basis and Structural Equation Modeling

Lim, Min 23 February 2011 (has links)
Structural equation models are systems of simultaneous linear equations that are generalizations of linear regression, and have many applications in the social, behavioural and biological sciences. A serious barrier to applications is that it is easy to specify models for which the parameter vector is not identifiable from the distribution of the observable data, and it is often difficult to tell whether a model is identified or not. In this thesis, we study the most straightforward method to check for identification – solving a system of simultaneous equations. However, the calculations can easily get very complex. The Gröbner basis is introduced to simplify the process. The main idea of checking identification is to solve a set of finitely many simultaneous equations, called identifying equations, which can be transformed into polynomials. If a unique solution is found, the model is identified. A Gröbner basis reduces the polynomials into simpler forms, making them easier to solve. Also, it allows us to investigate the model-induced constraints on the covariances, even when the model is not identified. With the explicit solution to the identifying equations, including the constraints on the covariances, we can (1) locate points in the parameter space where the model is not identified, (2) find the maximum likelihood estimators, (3) study the effects of mis-specified models, (4) obtain a set of method of moments estimators, and (5) build customized parametric and distribution-free tests, including inference for non-identified models.
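A toy version of the procedure described here — identifying equations turned into polynomials and triangularized with a Gröbner basis — can be written with SymPy. The one-factor, three-indicator model below is a standard textbook example chosen for illustration, not an example from the thesis.

    import sympy as sp

    a, b, c = sp.symbols('a b c')               # factor loadings (parameters)
    s12, s13, s23 = sp.symbols('s12 s13 s23')   # observable covariances
    # Identifying equations of a one-factor, three-indicator model with unit
    # factor variance: sigma_ij = loading_i * loading_j.
    eqs = [a*b - s12, a*c - s13, b*c - s23]

    G = sp.groebner(eqs, a, b, c, order='lex')  # triangular form of the system
    print(G)
    # Back-substitution (or sp.solve) yields two solutions differing only in
    # sign, so the loadings are identified up to reflection of the factor.
    print(sp.solve(eqs, [a, b, c], dict=True))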
4

Identifikace modelů finančních časových řad / Financial time series model identification

Fučík, Jan January 2016 (has links)
This thesis deals with financial time series model identification. The univariate and multivariate ARMA models and their identification criteria are described. Procedures using the correlation structure of the time series and several information criteria are presented. The functioning of the criteria is verified on simulated AR, MA and ARMA time series. The criteria are then compared in terms of reliability and simplicity of use. Finally, two examples of univariate and multivariate ARMA model identification for real financial time series are given. The data and the R program source code are enclosed on a CD.
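A minimal sketch of order selection by an information criterion, in the spirit of the procedures described in this abstract, using statsmodels; the simulated ARMA(2,1) series and the small order grid are illustrative choices.

    import numpy as np
    from statsmodels.tsa.arima_process import ArmaProcess
    from statsmodels.tsa.arima.model import ARIMA

    np.random.seed(0)
    # Simulate ARMA(2,1); statsmodels takes the AR polynomial as [1, -phi1, -phi2].
    y = ArmaProcess(ar=[1, -0.6, 0.2], ma=[1, 0.4]).generate_sample(500)

    # Fit every (p, q) on a small grid and keep the AIC-minimizing order.
    aic = {(p, q): ARIMA(y, order=(p, 0, q)).fit().aic
           for p in range(3) for q in range(3)}
    best = min(aic, key=aic.get)
    print("selected (p, q):", best, "AIC:", round(aic[best], 1))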
5

Are Artificial Neural Networks the Right Tool for Modelling and Control of Batch and Batch-Like Processes?

Rashid, Mustafa January 2023 (has links)
The prevalence of batch and batch-like operations, in conjunction with the continued resurgence of artificial intelligence techniques for clustering and classification applications, has increasingly motivated the exploration of the applicability of deep learning for modeling and feedback control of batch and batch-like processes. To this end, the present study seeks to evaluate the viability of artificial intelligence in general, and neural networks in particular, for process modeling and control via a case study. Nonlinear autoregressive with exogenous input (NARX) networks are evaluated in comparison with subspace models within the framework of model-based control. A batch polymethyl methacrylate (PMMA) polymerization process is chosen as a simulation test-bed. Subspace-based state-space models and NARX networks identified for the process are first compared for their predictive power. The identified models are then implemented in model predictive control (MPC) to compare the control performance of both modeling approaches. The comparative analysis reveals that the state-space models performed better than the NARX networks in both predictive power and control performance. Moreover, the NARX networks were found to be less versatile than state-space models in adapting to new process operation. The results of the study indicate that further research is needed before neural networks may become readily applicable for the feedback control of batch processes. / Thesis / Master of Applied Science (MASc)
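A NARX model in the sense used here regresses the next output on lagged outputs and lagged exogenous inputs; the sketch below uses a small feedforward network from scikit-learn on a synthetic single-input plant. The plant, lag orders, and network size are chosen purely for illustration.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n = 2000
    u = rng.uniform(-1, 1, n)                    # exogenous input sequence
    y = np.zeros(n)
    for k in range(2, n):                        # mildly nonlinear test plant
        y[k] = 0.7*y[k-1] - 0.1*y[k-2] + 0.5*np.tanh(u[k-1])

    lags = 2                                     # regress y[k] on past y and u
    X = np.array([np.r_[y[k-lags:k], u[k-lags:k]] for k in range(lags, n)])
    target = y[lags:]

    narx = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    narx.fit(X, target)
    print("one-step-ahead R^2:", round(narx.score(X, target), 4))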
6

Continental Scale Diagnostic Evaluation of Monthly Water Balance Models for the United States

Martinez Baquero, Guillermo Felipe January 2010 (has links)
Water balance models are important for the characterization of hydrologic systems, to help understand regional scale dynamics, and to identify hydro-climatic trends and systematic biases in data. Because existing models have, to date, only been tested on data sets of limited spatial representativeness and extent, it has not yet been established that they are capable of reproducing the range of dynamics observed in nature. This dissertation develops systematic strategies to guide selection of water balance models, establish data requirements, estimate parameters, and evaluate performance. Through a series of three papers, these challenges are investigated in the context of monthly water balance modeling across the conterminous United States. The first paper reports on an initial diagnostic iteration to evaluate relevant components of model error, and to examine details of its spatial variability. We find that to conduct a robust model evaluation it is not sufficient to rely upon conventional NSE (Nash–Sutcliffe efficiency) and/or r^2 aggregate statistics of performance; to have reasonable confidence that the model can provide hydrologically consistent simulations, it is also necessary to examine measures of water balance and hydrologic variability. The second paper builds upon the results of the first, and evaluates the suitability of several candidate model structures, focusing specifically on snow-free catchments. A diagnostic Maximum-Likelihood model evaluation procedure is developed to incorporate the notion of 'Hydrological Consistency' and to control for structural complexity. The results confirm that the evaluation of hydrologic consistency, based on benchmark comparisons and on stringent analysis of residuals, provides a robust basis for guiding model selection. The results also reveal strong spatial persistence of certain model structures that needs to be understood in future studies. The third paper focuses on understanding and improving the procedure for constraining model parameters to provide hydrologically consistent results. In particular, it develops a penalty-function-based modification of the Mean Squared Error estimator to help ensure proper reproduction of system behaviors by minimizing interaction of error components and by facilitating inclusion of relevant information. The analysis and results provide insight into the identifiability of model parameters, and further our understanding of how performance criteria should be applied during model identification.
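A single-store monthly water balance model and the NSE statistic mentioned in this abstract can be sketched in a few lines; the bucket structure, parameter values, and synthetic forcing below are illustrative assumptions, not any of the model structures evaluated in the dissertation.

    import numpy as np

    def bucket_model(P, PET, smax=150.0, k=0.3, s0=50.0):
        # One-store monthly balance: ET limited by storage, then saturation
        # excess spill plus linear drainage producing runoff.
        s, Q = s0, []
        for p, pet in zip(P, PET):
            s += p
            et = min(pet, s)
            s -= et
            spill = max(0.0, s - smax)
            s = min(s, smax)
            Q.append(spill + k * s)
            s -= k * s
        return np.array(Q)

    def nse(obs, sim):
        # Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    rng = np.random.default_rng(0)
    P = rng.gamma(2.0, 40.0, 120)                            # precipitation [mm]
    PET = 60 + 40 * np.sin(np.linspace(0, 20 * np.pi, 120))  # potential ET [mm]
    obs = bucket_model(P, PET) * (1 + 0.1 * rng.standard_normal(120))
    print("NSE of the true model:", round(nse(obs, bucket_model(P, PET)), 3))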
7

Identification Of Low Order Vehicle Handling Models From Multibody Vehicle Dynamics Models

Saglam, Ferhat 01 January 2010 (has links) (PDF)
Vehicle handling models are commonly used in the design and analysis of vehicle dynamics. Especially with the advances in vehicle control systems, the need for accurate and simple vehicle handling models has increased. These models have parameters, some of which are known or easily obtainable, yet some of which are unknown or difficult to obtain. These parameters are obtained by system identification, which is the study of building models from experimental data. In this thesis, identification of vehicle handling models is based on data obtained from the simulation of a complex vehicle dynamics model from ADAMS representing the real vehicle, and a general methodology has been developed. The identified vehicle handling models are the linear bicycle model and vehicle roll models with different tire models. Changes in the sensitivity of the model outputs to model parameters with steering input frequency have been examined by sensitivity analysis to design the test input. To show that the unknown parameters of the model can be identified uniquely, structural identifiability analysis has been performed. By minimizing the difference between the data obtained from the simulation of the ADAMS vehicle model and the data obtained from the simulation of the simple handling models with mathematical optimization methods, the unknown parameters have been estimated and the handling models identified. The estimation task has been performed using the MATLAB Simulink Parameter Estimation Toolbox. Model validation has shown that the identified handling models represent the vehicle system successfully.
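The output-error identification scheme described here can be illustrated with the linear bicycle model the thesis names. In the sketch below, the reference data that would come from the ADAMS model is generated by the bicycle model itself, and all vehicle parameters are hypothetical.

    import numpy as np
    from scipy.optimize import least_squares

    dt, n, vx = 0.01, 1000, 20.0             # step [s], samples, speed [m/s]
    t = np.arange(n) * dt
    delta = 0.05 * np.sin(2*np.pi*0.5*t)     # steering input [rad]

    def simulate(params):
        Cf, Cr = params                      # cornering stiffnesses [N/rad]
        m, Iz, lf, lr = 1500.0, 2500.0, 1.2, 1.6
        beta, r, out = 0.0, 0.0, []
        for d in delta:                      # Euler step of lateral/yaw dynamics
            Fyf = Cf * (d - beta - lf*r/vx)
            Fyr = Cr * (-beta + lr*r/vx)
            beta += dt * ((Fyf + Fyr) / (m*vx) - r)
            r += dt * (lf*Fyf - lr*Fyr) / Iz
            out.append(r)                    # yaw rate is the measured output
        return np.array(out)

    r_ref = simulate([80000.0, 90000.0])     # stand-in for the reference data
    fit = least_squares(lambda p: simulate(p) - r_ref, x0=[50000.0, 50000.0])
    print("estimated Cf, Cr:", fit.x)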
8

Mise en oeuvre d'une régulation thermique sur une machine de mesure dimensionnelle de très haute exactitude. Utilisation d'un modèle d'ordre faible en boucle fermée / Implementation of a Thermal Regulation on a High Accuracy Dimensional Measuring Machine : Use of a Reduced Model for Feedback Control

Bouderbala, Kamélia 16 December 2015 (has links)
This thesis describes the modelling and real-time regulation of the temperature inside an apparatus developed to validate the design principles of a new cylindricity measurement machine at the Laboratoire Commun de Métrologie du Laboratoire national de métrologie et d'essais – Conservatoire national des arts et métiers. To study the effect of internal and external thermal perturbations on the behaviour of the system, the apparatus is equipped with 19 platinum resistance thermometers calibrated with respect to the national standard. The effect of these perturbations on the measurements made with capacitive displacement sensors has also been studied. The internal perturbations, simulating the power dissipated by the mechanical guide elements, were generated using three resistive film heaters. Finite element modelling of the system temperature was carried out and the numerical results compared with experiments performed under the same conditions. The offsets obtained, about 0.1 °C, are too large for the model to be used for real-time temperature control. Subsequently, a reduced model was developed from the experimental data using the modal identification method (MIM). The residual obtained when comparing its results with experiment is less than 0.003 °C. Finally, temperature regulation to better than one hundredth of a degree was implemented using predictive control combined with a Kalman filter.
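The Kalman filter component mentioned at the end of this abstract can be illustrated in scalar form; the random-walk temperature model and noise variances below are illustrative assumptions, not the machine's identified model.

    import numpy as np

    rng = np.random.default_rng(0)
    q, r = 1e-6, 1e-4            # process / measurement noise variances [degC^2]
    T_true, x, P = 20.0, 20.0, 1.0
    for _ in range(100):
        T_true += rng.normal(0.0, q ** 0.5)     # slowly drifting temperature
        z = T_true + rng.normal(0.0, r ** 0.5)  # noisy thermometer reading
        P += q                                  # predict (random-walk model)
        K = P / (P + r)                         # Kalman gain
        x += K * (z - x)                        # measurement update
        P *= 1 - K                              # covariance update
    print(f"truth {T_true:.4f} degC, estimate {x:.4f} degC")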
9

Interpretation, identification and reuse of models: theory and algorithms with applications in predictive toxicology

Palczewska, Anna Maria January 2014 (has links)
This thesis is concerned with developing methodologies that enable existing models to be effectively reused. Results of this thesis are presented in the framework of Quantitative Structure-Activity Relationship (QSAR) models, but their application is much more general. QSAR models relate chemical structures with their biological, chemical or environmental activity. There are many applications that offer an environment to build and store predictive models. Unfortunately, they do not provide advanced functionalities that allow for efficient model selection and for interpretation of model predictions for new data. This thesis aims to address these issues and proposes methodologies for dealing with three research problems: model governance (management), model identification (selection), and interpretation of model predictions. The combination of these methodologies can be employed to build more efficient systems for model reuse in QSAR modelling and other areas. The first part of this study investigates toxicity data and model formats and reviews some of the existing toxicity systems in the context of model development and reuse. Based on the findings of this review and the principles of data governance, a novel concept of model governance is defined. Model governance comprises model representation and model governance processes. These processes are designed and presented in the context of model management. As an application, minimum information requirements and an XML representation for QSAR models are proposed. Once a collection of validated, accepted and well annotated models is available within a model governance framework, they can be applied to new data. It may happen that there is more than one model available for the same endpoint. Which one to choose? The second part of this thesis proposes a theoretical framework and algorithms that enable automated identification of the most reliable model for new data from the collection of existing models. The main idea is based on partitioning of the search space into groups and assigning a single model to each group. The construction of this partitioning is difficult because it is a bi-criteria problem. The main contribution in this part is the application of Pareto points for the search space partition. The proposed methodology is applied to three endpoints in chemoinformatics and predictive toxicology. After having identified a model for the new data, we would like to know how the model obtained its prediction and how trustworthy it is. An interpretation of model predictions is straightforward for linear models thanks to the availability of model parameters and their statistical significance. For nonlinear models this information can be hidden inside the model structure. This thesis proposes an approach for interpretation of a random forest classification model. This approach allows for the determination of the influence (called feature contribution) of each variable on the model prediction for an individual data point. In this part, three methods are proposed that allow analysis of feature contributions. Such analysis might lead to the discovery of new patterns that represent a standard behaviour of the model and allow additional assessment of the model reliability for new data. The application of these methods to two standard benchmark datasets from the UCI machine learning repository shows the great potential of this methodology.
The algorithm for calculating feature contributions has been implemented and is available as an R package called rfFC.
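The feature-contribution idea implemented in rfFC can be sketched for a forest in scikit-learn: walk each tree's decision path and credit the change in node mean to the splitting feature. This is an illustrative reimplementation, not the rfFC code, and it uses the regression rather than classification variant for brevity.

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor

    X, y = make_regression(n_samples=200, n_features=4, random_state=0)
    forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

    def feature_contributions(forest, x):
        # Decompose a prediction into a bias term plus one contribution per
        # feature, by attributing each split's change in node mean to the
        # feature that the split tests.
        contrib = np.zeros(x.shape[0])
        bias = 0.0
        for est in forest.estimators_:
            tree = est.tree_
            node = 0
            bias += tree.value[0, 0, 0]               # root mean = tree bias
            while tree.children_left[node] != -1:     # descend until a leaf
                f = tree.feature[node]
                nxt = (tree.children_left[node] if x[f] <= tree.threshold[node]
                       else tree.children_right[node])
                contrib[f] += tree.value[nxt, 0, 0] - tree.value[node, 0, 0]
                node = nxt
        n = len(forest.estimators_)
        return bias / n, contrib / n

    bias, contrib = feature_contributions(forest, X[0])
    print("forest prediction:", forest.predict(X[:1])[0])
    print("bias + contributions:", bias + contrib.sum())  # equal by construction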
