11

Dynamic Model Formulation and Calibration for Wheeled Mobile Robots

Seegmiller, Neal A. 01 October 2014 (has links)
Advances in hardware design have made wheeled mobile robots (WMRs) exceptionally mobile. To fully exploit this mobility, WMR planning, control, and estimation systems require motion models that are fast and accurate. Much of the published theory on WMR modeling is limited to 2D or kinematics, but 3D dynamic (or force-driven) models are required when traversing challenging terrain, executing aggressive maneuvers, and manipulating heavy payloads. This thesis advances the state of the art in both the formulation and calibration of WMR models. We present novel WMR model formulations that are high-fidelity, general, modular, and fast. We provide a general method to derive 3D velocity kinematics for any WMR joint configuration. Using this method, we obtain constraints on wheel-ground contact point velocities for our differential algebraic equation (DAE)-based models. Our “stabilized DAE” kinematics formulation enables constrained, drift-free motion prediction on rough terrain. We also enhance the kinematics to predict nonzero wheel slip in a principled way based on gravitational, inertial, and dissipative forces. Unlike ordinary differential equation (ODE)-based dynamic models, which can be very stiff, our constrained dynamics formulation permits large integration steps without compromising stability. Some alternatives, like the Open Dynamics Engine, also use constraints but can only approximate Coulomb friction at contacts; in contrast, we can enforce realistic, nonlinear models of wheel-terrain interaction (e.g., empirical models for pneumatic tires, terramechanics-based models) using a novel force-balance optimization technique. Simulation tests show our kinematic and dynamic models to be more functional, stable, and efficient than common alternatives. Simulations run 1,000 to 10,000 times faster than real time on an ordinary PC, even while predicting articulated motion on rough terrain and enforcing realistic wheel-terrain interaction models. In addition, we present a novel Integrated Prediction Error Minimization (IPEM) method to calibrate model parameters that is general, convenient, online, and evaluative. Ordinarily, system dynamics are calibrated by minimizing the error of instantaneous output predictions. IPEM instead forms predictions by integrating the system dynamics over an interval; benefits include reduced sensing requirements, better observability, and accuracy over a longer horizon. In addition to calibrating out systematic errors, we simultaneously calibrate a model of stochastic error propagation to quantify the uncertainty of motion predictions. Experimental results on multiple platforms and terrain types show that parameter estimates converge quickly during online calibration and that uncertainty is well characterized. Under normal conditions, our enhanced kinematic model can predict nonzero wheel slip as accurately as a full dynamic model at a fraction of the computation cost. Finally, odometry is greatly improved when using IPEM vs. manual calibration, and when using 3D vs. 2D kinematics. To facilitate their use, we have released open-source MATLAB and C++ libraries implementing the model formulation and calibration methods in this thesis.
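A minimal sketch of the IPEM idea for a hypothetical differential-drive odometry model is shown below (parameter names and data are illustrative assumptions, not the released MATLAB/C++ implementation): the motion model is integrated over each measured interval, and the end-pose error, rather than an instantaneous velocity error, is minimized.

```python
# Minimal sketch of Integrated Prediction Error Minimization (IPEM) for a
# differential-drive odometry model. Hypothetical data and parameter names;
# this illustrates the idea only, not the thesis's released libraries.
import numpy as np
from scipy.optimize import least_squares

def integrate_pose(pose, wl, wr, dt, radius, track):
    """Integrate unicycle kinematics over one interval of wheel speed samples."""
    x, y, th = pose
    for l, r in zip(wl, wr):
        v = radius * (l + r) / 2.0          # forward speed
        w = radius * (r - l) / track        # yaw rate
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        th += w * dt
    return np.array([x, y, th])

def residuals(params, segments):
    """Stack end-pose prediction errors over all measured intervals."""
    radius, track = params
    err = []
    for seg in segments:
        pred = integrate_pose(seg["start"], seg["wl"], seg["wr"], seg["dt"], radius, track)
        err.extend(pred - seg["end"])
    return np.asarray(err)

# Synthetic "ground truth" segments generated with radius = 0.10 m, track = 0.50 m.
rng = np.random.default_rng(0)
segments = []
for _ in range(20):
    wl = rng.uniform(2.0, 6.0, 50)
    wr = rng.uniform(2.0, 6.0, 50)
    start = np.zeros(3)
    end = integrate_pose(start, wl, wr, 0.02, 0.10, 0.50) + rng.normal(0, 1e-3, 3)
    segments.append({"start": start, "end": end, "wl": wl, "wr": wr, "dt": 0.02})

fit = least_squares(residuals, x0=[0.08, 0.60], args=(segments,))
print("estimated radius, track:", fit.x)   # should approach (0.10, 0.50)
```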
12

The Identification Of A Bivariate Markov Chain Market Model

Yildirak, Sahap Kasirga 01 January 2004 (has links) (PDF)
This work is an extension of the classical Cox-Ross-Rubinstein discrete-time market model, in which only one risky asset is considered. We introduce a second risky asset into the model. Moreover, the random structure of the asset price sequence is generated by a bivariate finite-state Markov chain, and the interest rate varies over time as a function of the generating sequences. We discuss how the model can be adapted to real data. Finally, we illustrate sample implementations to give a better idea of how the model can be used.
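A toy sketch of such a model is given below (the state values, transition matrix, and interest-rate rule are illustrative assumptions, not those identified in the thesis): two asset price sequences are driven by a four-state bivariate Markov chain, and the short rate is a function of the generating chain.

```python
# Toy sketch of a two-asset discrete-time market driven by a bivariate
# finite-state Markov chain (illustrative states, returns, and transition
# matrix; not the parameters used in the thesis).
import numpy as np

states = [(1.10, 1.20), (1.10, 0.90), (0.95, 1.20), (0.95, 0.90)]  # (gross return asset 1, asset 2)
P = np.array([[0.40, 0.20, 0.20, 0.20],
              [0.25, 0.35, 0.20, 0.20],
              [0.20, 0.20, 0.35, 0.25],
              [0.20, 0.20, 0.20, 0.40]])   # transition matrix, rows sum to 1

def simulate(n_steps, s0=(100.0, 100.0), rng=np.random.default_rng(1)):
    s1, s2 = s0
    state = 0
    path = [(s1, s2, 0.02)]
    for _ in range(n_steps):
        state = rng.choice(len(states), p=P[state])
        u1, u2 = states[state]
        s1, s2 = s1 * u1, s2 * u2
        # the short rate is allowed to depend on the generating chain
        r = 0.01 + 0.01 * (u1 > 1.0) + 0.005 * (u2 > 1.0)
        path.append((s1, s2, r))
    return path

for s1, s2, r in simulate(5):
    print(f"S1={s1:8.2f}  S2={s2:8.2f}  r={r:.3f}")
```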
13

Using Pareto points for model identification in predictive toxicology

Palczewska, Anna Maria, Neagu, Daniel, Ridley, Mick J. January 2013 (has links)
Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of the toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design and food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature on best practice for model generation and data integration, but the management and automated identification of relevant models from available collections is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show great potential for automated model identification methods in predictive toxicology.
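The selection step can be illustrated with a short sketch (the two criteria and the scores below are hypothetical; the paper defines its own criteria): candidate models are scored on two objectives for the query compound, and only the Pareto-optimal, i.e. non-dominated, models are retained as candidates.

```python
# Sketch of Pareto-based model selection: each candidate model is scored on two
# criteria for the query compound (here, hypothetically, a validation error and
# a distance-to-training-data measure; lower is better for both), and only
# non-dominated models are kept as candidates for the prediction.
import numpy as np

def pareto_front(scores):
    """Return indices of non-dominated rows (minimization on every column)."""
    idx = []
    for i, s in enumerate(scores):
        dominated = any(np.all(t <= s) and np.any(t < s)
                        for j, t in enumerate(scores) if j != i)
        if not dominated:
            idx.append(i)
    return idx

# Hypothetical scores for five models in the collection.
scores = np.array([[0.20, 0.9],   # dominated by model 4
                   [0.15, 1.4],
                   [0.30, 0.3],
                   [0.25, 1.0],   # dominated by models 0 and 4
                   [0.18, 0.8]])
front = pareto_front(scores)
print("Pareto-optimal models:", front)  # -> [1, 2, 4]; pick one of these for the prediction
```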
14

Interpretation, Identification and Reuse of Models. Theory and algorithms with applications in predictive toxicology.

Palczewska, Anna Maria January 2014 (has links)
This thesis is concerned with developing methodologies that enable existing models to be effectively reused. The results of this thesis are presented in the framework of Quantitative Structure-Activity Relationship (QSAR) models, but their application is much more general. QSAR models relate chemical structures to their biological, chemical or environmental activity. There are many applications that offer an environment to build and store predictive models. Unfortunately, they do not provide advanced functionalities that allow for efficient model selection and for interpretation of model predictions for new data. This thesis aims to address these issues and proposes methodologies for dealing with three research problems: model governance (management), model identification (selection), and interpretation of model predictions. The combination of these methodologies can be employed to build more efficient systems for model reuse in QSAR modelling and other areas. The first part of this study investigates toxicity data and model formats and reviews some of the existing toxicity systems in the context of model development and reuse. Based on the findings of this review and the principles of data governance, a novel concept of model governance is defined. Model governance comprises model representation and model governance processes. These processes are designed and presented in the context of model management. As an application, minimum information requirements and an XML representation for QSAR models are proposed. Once a collection of validated, accepted and well-annotated models is available within a model governance framework, they can be applied to new data. It may happen that there is more than one model available for the same endpoint. Which one to choose? The second part of this thesis proposes a theoretical framework and algorithms that enable automated identification of the most reliable model for new data from a collection of existing models. The main idea is based on partitioning the search space into groups and assigning a single model to each group. The construction of this partitioning is difficult because it is a bi-criteria problem. The main contribution in this part is the application of Pareto points to the search space partition. The proposed methodology is applied to three endpoints in chemoinformatics and predictive toxicology. After having identified a model for the new data, we would like to know how the model obtained its prediction and how trustworthy it is. Interpretation of model predictions is straightforward for linear models thanks to the availability of model parameters and their statistical significance. For nonlinear models this information can be hidden inside the model structure. This thesis proposes an approach for interpreting a random forest classification model. This approach allows the determination of the influence (called the feature contribution) of each variable on the model prediction for an individual data point. Three methods are proposed that allow analysis of feature contributions. Such analysis might lead to the discovery of new patterns that represent standard behaviour of the model and allow additional assessment of model reliability for new data. The application of these methods to two standard benchmark datasets from the UCI machine learning repository shows the great potential of this methodology.
The algorithm for calculating feature contributions has been implemented and is available as an R package called rfFC. / BBSRC and Syngenta (International Research Centre at Jealott’s Hill, Bracknell, UK).
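The feature-contribution decomposition can be sketched as follows (shown in Python with scikit-learn purely for illustration, not the rfFC R implementation): each prediction is split into the forest's bias plus per-feature contributions by walking every tree's decision path and attributing each change in predicted class probability to the feature split on.

```python
# Illustrative sketch of per-instance feature contributions for a random forest
# classifier: prediction = bias + sum of feature contributions, obtained by
# following each tree's decision path. Not the rfFC package itself.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

def feature_contributions(forest, x):
    """Decompose one prediction into a bias term plus per-feature contributions."""
    contributions = np.zeros(x.shape[0])
    bias = 0.0
    for tree in forest.estimators_:
        t = tree.tree_
        node = 0
        prev = t.value[node][0] / t.value[node][0].sum()   # class probabilities at the root
        bias += prev[1]                                    # track the positive class
        while t.children_left[node] != -1:                 # descend until a leaf
            feat = t.feature[node]
            node = t.children_left[node] if x[feat] <= t.threshold[node] else t.children_right[node]
            cur = t.value[node][0] / t.value[node][0].sum()
            contributions[feat] += cur[1] - prev[1]        # credit the change to the split feature
            prev = cur
    n_trees = len(forest.estimators_)
    return bias / n_trees, contributions / n_trees

X, y = load_breast_cancer(return_X_y=True)
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
bias, contrib = feature_contributions(rf, X[0])
print(bias + contrib.sum(), rf.predict_proba(X[:1])[0, 1])  # the two values should agree
```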
15

Automatic Model Structure Identification for Conceptual Hydrologic Models

Spieler, Diana 01 August 2024 (has links)
Hydrological models play a crucial role in forecasting future water resource availability and water-related risks. It is essential that they realistically represent and simulate the processes of interest. However, which model structure is most suitable for a given task, catchment and data situation is often difficult to determine. There are only a few tangible guidelines for model structure selection, and comparing multiple models simply to choose one for further work is a cumbersome process. It is therefore not surprising that the hydrological community has spent considerable effort on improving model parameter estimation, which can be treated as an automated process, while the selection of a suitable model structure (i.e., the specific set of equations describing catchment function) has received comparatively little attention. To facilitate easier testing of different model structures, this thesis introduces an approach for Automatic Model Structure Identification (AMSI), which allows for the simultaneous calibration of model structural choices and model parameters. In this approach, model structural choices are treated as integer decision variables while model parameters are treated as continuous variables. By combining the modular modelling framework Raven with the mixed-integer optimization algorithm DDS, the testing of different structural hypotheses can thus be automated. AMSI then makes it possible to search a vast number of model structure and parameter choices effectively and to identify the most suitable model structures for a specific objective function. This thesis uses four experiments to test and benchmark AMSI's performance and capabilities. First, a synthetic experiment generates “observations” with known model structures and tests AMSI’s ability to re-identify these same structures. Second, AMSI is used in a real-world application on twelve diverse MOPEX catchments to test the feasibility of the approach. Third, a comprehensive benchmark study explores how reliably AMSI searches the available model space by comparing AMSI’s outcomes to a brute-force approach that calibrates all feasible model structures in the available model search space. Fourth, the model space AMSI searches is compared to a much wider model hypothesis space, as defined by 45 diverse and commonly used model structures taken from the MARRMoT toolbox. This evaluation of AMSI’s performance is based on mathematical accuracy (tested via statistical metric performance) and hydrological adequacy (tested via performance on several hydrological signatures) to assess the advantages and limitations of the method. The re-identification experiments showed that process choices with little impact on the hydrograph are difficult to re-identify due to near-equivalent diagnostic measures. The real-world experiment showed that AMSI is capable of identifying feasible and avoiding infeasible model structures for the twelve tested MOPEX catchments. The performance of the identified models was compared to that of eight other models configured for the MOPEX catchments. AMSI's performance is in the top half of the performance range found by these eight, in part more complex, models and is therefore considered satisfactory. However, the high variance among identified model structures with comparable objective function values reflects substantial model equifinality. This was also seen in the benchmark study.
While AMSI reliably identifies the most accurate model structures in a given model hypothesis space, the equifinality in model choice, as measured through an aggregated metric such as KGE, is considerable. In some catchments up to 30% of the tested model choices obtain comparable KGE scores. These models, however, exhibit significantly different behaviour in their internal storages, showing that a wide range of simulated hydrologic conditions can lead to comparable efficiency scores; a wide ensemble of different model structures may therefore appear suitable. Using AMSI with aggregated statistical metrics therefore provides only limited insight into which models are most suitable for a given catchment. Further investigations showed that the large number of identified mathematically accurate models (as measured through good metric performance) could hardly ever also be considered hydrologically adequate models (as measured through good signature performance). In nine out of twelve catchments none of the accurate models was also considered adequate, while only between one (0.1%) and 49 (0.7%) of all tested model structures met the defined adequacy requirements in the other three catchments. This glaring disconnect between mathematical accuracy and hydrological adequacy applies to all model selection approaches tested in the benchmark experiments. Neither AMSI, nor the brute-force search, nor the MARRMoT models are able to provide accurate as well as adequate model structures when calibrated against the aggregated statistical metric KGE. Therefore, no distinct advantage of commonly used, expert-developed conceptual model structures over the data-derived AMSI models could be identified, as long as model performance is assessed only with aggregated efficiency scores. This has relevant implications for all modelling studies because, despite many papers suggesting otherwise, assessing model performance only through mathematical accuracy (i.e., with scores such as NSE or KGE) has remained standard practice. The extensive empirical evidence provided in this thesis of the inherent constraints of aggregated metrics such as KGE may help convey the message that relying on these scores alone cannot guarantee hydrologically adequate model structures, owing to the equifinality of the combined model and parameter selection problem. The results also indicate that the AMSI method is able to identify model structures that are just as mathematically accurate, and just as hydrologically inadequate, as those yielded by previously developed model selection methods, but at a reduced workload for the modeller. Multivariate datasets and better model performance metrics are often mentioned as ways to reduce equifinality. If such improved methods are implemented in the calibration procedure, AMSI's ability to discriminate between more granular process equations will increase. AMSI could then be a promising way forward to reduce the subjectivity in model selection and to explore the connections between suitable model structures and catchment characteristics.
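For reference, the aggregated metric at the centre of this equifinality discussion, the Kling-Gupta efficiency (Gupta et al., 2009), combines correlation, variability bias and mean bias into one score; a minimal sketch with synthetic flows is given below.

```python
# Minimal sketch of the Kling-Gupta efficiency used as the aggregated
# calibration metric discussed above; synthetic flows for illustration only.
import numpy as np

def kge(sim, obs):
    r = np.corrcoef(sim, obs)[0, 1]          # linear correlation
    alpha = np.std(sim) / np.std(obs)        # variability ratio
    beta = np.mean(sim) / np.mean(obs)       # bias ratio
    return 1.0 - np.sqrt((r - 1.0) ** 2 + (alpha - 1.0) ** 2 + (beta - 1.0) ** 2)

rng = np.random.default_rng(0)
obs = np.exp(rng.normal(0.0, 1.0, 365))       # synthetic "observed" daily flows
sim = 0.9 * obs + rng.normal(0.0, 0.2, 365)   # an imperfect simulation
print(f"KGE = {kge(sim, obs):.3f}")
```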
16

Control and Model Identification on Renewable Energy Systems / Commande et identification de modèles pour des systèmes d’énergie renouvelables

Jaramillo López, Fernando 26 September 2014 (has links)
The compromising situation of the environment due to pollution and the high costs of fossil fuels have given rise to new policies and regulations and have stimulated interest in alternative energy sources. Many countries around the world have significantly increased the penetration of these energy sources. Two of the most widely used renewable energy systems are wind turbine systems (WTS) and photovoltaic systems (PVS). 
WTS convert wind energy into electric energy by means of an electromechanical process, and PVS convert solar energy directly into electric energy by means of a semiconductor process. These systems present many challenges that need to be solved in order to gain ground on traditional energy systems. One of these challenges is to increase the overall system efficiency by controlling the power-conditioning elements. To achieve this, it is necessary to better understand the dynamic behavior of these systems and to develop new mathematical models and new control techniques. These techniques often require system information that is not available, or is too expensive to measure. To solve this problem, it is necessary to create algorithms that are able to estimate this information; however, this is not an easy task, because the signals of the energy sources in WTS and PVS (i.e., wind speed, irradiance, temperature) enter the mathematical models through nonlinear relations. These algorithms have to be able to estimate these signals, or the signals that depend on them, with good precision. It is also necessary to design control laws that operate the systems at their maximum power point. In this work, we propose novel estimation algorithms and control laws related to increasing the energy efficiency of WTS and PVS. Previous works on the estimation of the aforementioned signals considered them constant; the estimation algorithms proposed in this thesis account for their time-varying nature. For all of these new propositions, uniform asymptotic stability is proved using Lyapunov theory. The control laws are derived using the full nonlinear models of the systems. In addition, some of these solutions are extended to the general case and can be used on a large class of nonlinear systems. The first is a novel parameter estimator for nonlinear systems that allows time-varying nonlinear parameters to be estimated. The second general proposition is a framework for a class of adaptive nonlinear systems that allows uncertainties and perturbations satisfying the matching condition to be compensated.
17

Modélisation des équilibres entre phases et simulation de la distillation des eaux-de-vie en vue d’une meilleure compréhension du comportement des composés volatils d’arôme / Modeling of phase equilibria and simulation of spirits distillation for a better understanding of volatile aroma compounds behavior.

Puentes Mancipe, Cristian 13 December 2017 (has links)
The quality of spirits is a parameter related to the composition of volatile aroma compounds. This composition results from the combined production process of raw material extraction, subsequent fermentation, distillation and, in many cases, ageing. Distillation is a very old and one of the most important industrial separation technologies. However, in spirits production, this operation relies essentially on empirical knowledge. The aim of this PhD was to contribute to a better understanding of the behaviour of volatile aroma compounds in spirits distillation and to provide a scientific basis for the process through computer simulation. The study was focused on Armagnac and Calvados production by continuous multistage distillation. The simulation modules were built using the software ProSimPlus®. 
The first part of this research was dedicated to the acquisition of vapor-liquid equilibrium data for the volatile aroma compounds in ethanol-water solutions, in order to estimate the binary interaction parameters of the NRTL model. Three complementary approaches to data acquisition were used: literature compilation, experimental measurements, and predictions with the UNIFAC and COSMO models. According to their relative volatilities with respect to ethanol and water, the volatile aroma compounds can be classified into three groups: light compounds, intermediary compounds and heavy compounds. The second part of this research dealt with the creation and validation of the simulation modules, using reconciled experimental data from the distillation units. The results prove that simulation is a powerful engineering tool in spirits distillation. The simulation data enable a more precise classification of the intermediary compounds into three categories, by considering their composition profiles in the distillation column and their recovery ratios from feed to distillate. Finally, the analysis of some operating parameters, including the ethanol concentration in the distillate as well as tails extraction, demonstrates that the distillate composition can be modified by virtue of a selective separation of intermediary and heavy compounds with respect to ethanol.
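For reference, the relative volatilities used for this classification follow the standard low-pressure vapor-liquid equilibrium relations sketched below (modified Raoult's law; the exact working equations of the thesis are not reproduced here), with the activity coefficients supplied by the fitted NRTL model.

```latex
% Standard low-pressure (modified Raoult's law) definitions underlying the
% volatility-based classification; the activity coefficients \gamma_i come from
% the NRTL model fitted to the vapor-liquid equilibrium data described above.
\begin{align}
  K_i &= \frac{y_i}{x_i} = \frac{\gamma_i \, P_i^{\mathrm{sat}}(T)}{P}, &
  \alpha_{i/\mathrm{EtOH}} &= \frac{K_i}{K_{\mathrm{EtOH}}}
    = \frac{\gamma_i \, P_i^{\mathrm{sat}}(T)}{\gamma_{\mathrm{EtOH}} \, P_{\mathrm{EtOH}}^{\mathrm{sat}}(T)}
\end{align}
```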
18

Model Li-ion akumulátoru / Li-ion battery model

Loucký, Vojtěch January 2021 (has links)
This diploma thesis deals with the description of the operating principle of Li-ion cells, a literature review of mathematical models of Li-ion cells, and the creation of a selected mathematical model in MATLAB that can simulate the voltage and state-of-charge profiles over time for different ambient conditions and different degrees of battery aging. The construction of the model and the procedure for identifying the parameters needed to build it are described, along with different parameter identification options. The selected Thevenin model is then compared with the measured voltage profile, and the accuracy of the model is evaluated with respect to that measurement.
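The first-order Thevenin equivalent circuit referred to here is an open-circuit-voltage source in series with an ohmic resistance and one RC polarisation branch; a minimal simulation sketch is given below (in Python with purely illustrative parameter values, whereas the thesis implements and identifies the model in MATLAB from measured data).

```python
# Minimal sketch of a first-order Thevenin cell model: OCV(SOC) in series with
# an ohmic resistance R0 and one RC polarisation branch. Parameter values are
# illustrative only, not those identified in the thesis.
import numpy as np

Q = 2.5 * 3600                       # capacity [As]
R0, R1, C1 = 0.015, 0.020, 1800.0    # ohmic resistance [Ohm], RC branch [Ohm, F]

def ocv(soc):
    """Illustrative open-circuit voltage curve [V] as a function of SOC (0..1)."""
    return 3.2 + 1.0 * soc - 0.15 * np.exp(-20.0 * soc)

def simulate(current, dt=1.0, soc0=1.0):
    """Simulate terminal voltage for a current profile (positive = discharge)."""
    soc, v1 = soc0, 0.0
    out = []
    for i in current:
        soc -= i * dt / Q                           # coulomb counting
        v1 += dt * (-v1 / (R1 * C1) + i / C1)       # RC branch state (forward Euler)
        out.append((soc, ocv(soc) - i * R0 - v1))   # terminal voltage
    return out

profile = np.concatenate([np.full(600, 2.5), np.zeros(300)])  # 2.5 A pulse, then rest
results = simulate(profile)
for k in (0, 599, 899):
    soc, v = results[k]
    print(f"t={k:4d} s  SOC={soc:.3f}  V={v:.3f}")
```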
19

Simulation of Strong Ground Motions in Mashiki Town, Kumamoto, Based on the Seismic Response Analysis of Soils and the Dynamic Rupture Modeling of Sources / 地盤応答解析および動力学的震源モデルに基づく熊本県益城町における強震動シミュレーション

Sun, Jikai 23 March 2021 (has links)
Kyoto University / New-system doctoral course / Doctor of Engineering / 甲第23188号 / 工博第4832号 / 新制||工||1755 (University Library) / Department of Architecture, Graduate School of Engineering, Kyoto University / (Chief examiner) Prof. Shinichi Matsushima, Prof. Izuru Takewaki, Prof. Yasuhiro Hayashi / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
20

Bayesian Identification of Nonlinear Structural Systems: Innovations to Address Practical Uncertainty

Alana K Lund (10702392) 26 April 2021 (has links)
The ability to rapidly assess the condition of a structure in a manner which enables the accurate prediction of its remaining capacity has long been viewed as a crucial step in allowing communities to make safe and efficient use of their public infrastructure. This objective has become even more relevant in recent years as both the interdependency and state of deterioration in infrastructure systems throughout the world have increased. Current practice for structural condition assessment emphasizes visual inspection, in which trained professionals will routinely survey a structure to estimate its remaining capacity. Though these methods have the ability to monitor gross structural changes, their ability to rapidly and cost-effectively assess the detailed condition of the structure with respect to its future behavior is limited.

Vibration-based monitoring techniques offer a promising alternative to this approach. As opposed to visually observing the surface of the structure, these methods judge its condition and infer its future performance by generating and updating models calibrated to its dynamic behavior. Bayesian inference approaches are particularly well suited to this model updating problem as they are able to identify the structure using sparse observations while simultaneously assessing the uncertainty in the identified parameters. However, a lack of consensus on efficient methods for their implementation to full-scale structural systems has led to a diverse set of Bayesian approaches, from which no clear method can be selected for full-scale implementation. The objective of this work is therefore to assess and enhance those techniques currently used for structural identification and make strides toward developing unified strategies for robustly implementing them on full-scale structures. This is accomplished by addressing several key research questions regarding the ability of these methods to overcome issues in identifiability, sensitivity to uncertain experimental conditions, and scalability. These questions are investigated by applying novel adaptations of several prominent Bayesian identification strategies to small-scale experimental systems equipped with nonlinear devices. Through these illustrative examples I explore the robustness and practicality of these algorithms, while also considering their extensibility to higher-dimensional systems. Addressing these core concerns underlying full-scale structural identification will enable the practical application of Bayesian inference techniques and thereby enhance the ability of communities to detect and respond to the condition of their infrastructure.
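As a toy illustration of the kind of Bayesian updating involved (not one of the algorithms studied in the thesis), the sketch below identifies the linear and cubic stiffness of a nonlinear spring from noisy force measurements with a random-walk Metropolis sampler, yielding both point estimates and a quantification of their uncertainty.

```python
# Toy illustration of Bayesian parameter identification for a nonlinear element:
# a cubic spring F = k1*x + k3*x**3 observed with Gaussian noise, sampled with a
# random-walk Metropolis algorithm. All values and priors are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-0.05, 0.05, 40)                                 # imposed displacements [m]
F_obs = 2.0e4 * x + 5.0e7 * x**3 + rng.normal(0, 5.0, x.size)    # noisy forces [N]
sigma = 5.0                                                      # assumed known noise std [N]

def log_post(theta):
    k1, k3 = theta
    if k1 <= 0:                                  # weak positivity prior on k1
        return -np.inf
    resid = F_obs - (k1 * x + k3 * x**3)
    return -0.5 * np.sum((resid / sigma) ** 2)   # Gaussian likelihood, flat prior otherwise

theta = np.array([1.0e4, 0.0])                   # initial guess
step = np.array([5.0e2, 5.0e6])                  # random-walk proposal scales
samples, lp = [], log_post(theta)
for _ in range(20000):
    prop = theta + step * rng.normal(size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples[5000:])               # discard burn-in
print("posterior mean:", samples.mean(axis=0))   # should approach (2.0e4, 5.0e7)
print("posterior std :", samples.std(axis=0))    # uncertainty of the identified parameters
```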
