351
Modelling of a Variable Venturi in a Heavy Duty Diesel Engine / Modellering av variabel venturi i en dieselmotor för tung lastbil. Torbjörnsson, Carl-Adam, January 2002.
The objective of this thesis is to present a model of a variable venturi in an exhaust gas recirculation (EGR) system located in a heavy-duty diesel engine. New legislation, EURO 4, comes into force in 2005; it affects truck development and requires an On-Board Diagnostic system in the truck. If model-based diagnostic systems are used, a model of the variable venturi can increase the performance of the diagnostic system.

Three models of different complexity are compared in ten experiments. The experiments are performed in a steady-flow rig at different percentages of EGR gas and different venturi areas. Each model predicts the mass flow through the venturi. The results show that the first model, the one with the fewest simplifications, performs better and has smaller errors than the other two. The simplifications that differ between the models are the initial velocity before the venturi and the assumption of incompressible flow.

The model that shows the best results is not the one proposed in the known literature in this area. This thesis shows that further work on this model, the one with the fewest simplifications, can be advantageous.
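The two simplifications the abstract names (neglecting the velocity upstream of the venturi, and assuming incompressible flow) correspond to standard one-dimensional flow equations. The minimal sketch below illustrates the two resulting mass-flow models; the function signatures, the isentropic-nozzle form, and the gas properties (gamma near 1.35 and R near 288 J/(kg K) for exhaust gas) are textbook assumptions for illustration, not equations or values taken from the thesis.

```python
import math

def mdot_incompressible(A_t, A_1, p_1, p_2, rho):
    """Bernoulli venturi flow: the sqrt(1 - (A_t/A_1)**2) factor accounts
    for the approach velocity upstream of the throat; dropping it gives
    the simpler model that assumes zero initial velocity."""
    return (A_t * math.sqrt(2.0 * rho * (p_1 - p_2))
            / math.sqrt(1.0 - (A_t / A_1) ** 2))

def mdot_compressible(A_t, p_1, p_2, T_1, gamma=1.35, R=288.0):
    """Isentropic compressible flow through the throat, neglecting the
    upstream velocity; gamma and R are rough exhaust-gas values."""
    pr_crit = (2.0 / (gamma + 1.0)) ** (gamma / (gamma - 1.0))
    pr = max(p_2 / p_1, pr_crit)  # clamp the pressure ratio at choked flow
    psi = math.sqrt(2.0 * gamma / (gamma - 1.0)
                    * (pr ** (2.0 / gamma) - pr ** ((gamma + 1.0) / gamma)))
    return A_t * p_1 / math.sqrt(R * T_1) * psi
```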
352
An investigation of factors impacting life-cycle application of Civil Integrated Management (CIM). Sankaran, Bharathwaj, 02 February 2015.
Highway projects are delivered in a complex environment that involves the participation of diverse stakeholders with different objectives. Technological advancements have provided better tools and techniques that, if incorporated, can lead to effective project delivery that complies with this multitude of objectives. Projects are often cost-driven, schedule-driven, or both. The presence of ongoing traffic poses an additional challenge for developers, as it impacts the safety and comfort of both commuters and construction workers. A wide variety of tools, techniques, and work processes are adopted across projects, depending on project and agency requirements, to make project management efficient across the life-cycle. Civil Integrated Management (CIM) is a term that encompasses all such tools and technologies that facilitate digital project delivery and asset management. This study examines the current state of practice for CIM through surveys conducted at the agency and project levels. The results of these surveys are summarized to provide an understanding of the organizational and contractual issues related to CIM implementation, of how the technologies are implemented at the project level, and of the associated performance benefits. Significant factors impacting successful life-cycle CIM utilization are elicited through the surveys and follow-up interviews and are investigated further under four main categories: technology implementation planning, model-based workflows and processes, design for construction automation, and information management. Specific examples are provided for each of these factors to demonstrate their utility on projects. The findings of this study provide practitioners with a list of key issues to consider for profitable and effective implementation of CIM technologies across a project's life-cycle.
353
A Comparative Study of Automated Test Explorers. Gustavsson, Johan, January 2015.
With modern computer systems becoming more and more complicated, the importance of rigorous testing to ensure the quality of the product increases. This, however, means that the cost of performing tests also increases. To address this problem, much research has been conducted in recent years to find more automated ways of testing software systems. In this thesis, different algorithms for automatically exploring and testing a system have been implemented and evaluated. In addition, a second set of algorithms has been implemented with the objective of isolating which interactions with the system were responsible for a failure. These algorithms were also evaluated and compared against each other. In the first evaluation, two explorers, here called DeBruijn and LStarExplorer, were considered superior to the others. The first used a De Bruijn sequence to brute-force a solution, while the second used the L* algorithm to build a finite state machine (FSM) over the system under test. This FSM could then be used to provide a more accurate description of when the failure occurred. The result of the second evaluation was two reducers, both of which tried to recreate a failure by first applying the interactions performed just before the failure occurred. If this was not successful, they tried interactions further and further away, until the failure was triggered. In addition, the thesis describes the framework used to run the different strategies.
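The DeBruijn explorer's core idea is easy to sketch: generate a De Bruijn sequence over the system's input alphabet, so that every length-n interaction window occurs exactly once, and replay it against the system under test. The sketch below uses the standard Lyndon-word construction; the `system.step` and `system.failed` hooks are assumed placeholders for illustration, not part of the thesis's actual framework.

```python
def de_bruijn(alphabet, n):
    """De Bruijn sequence containing every length-n string over `alphabet`
    exactly once (cyclically), built by concatenating Lyndon words."""
    k = len(alphabet)
    a = [0] * k * n
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return [alphabet[i] for i in seq]

def explore(system, alphabet, n):
    """Replay every length-n interaction window against the system under
    test and return the index of the first failing step (None if none)."""
    seq = de_bruijn(alphabet, n)
    seq += seq[:n - 1]  # unroll the cyclic sequence for full linear coverage
    for i, action in enumerate(seq):
        system.step(action)
        if system.failed():
            return i
    return None
```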
354
Nonlinear model-based fault detection and isolation: improvements in the case of single/multiple faults and uncertainties in the model parameters. Castillo, Iván, 15 June 2011.
This dissertation addresses model-based fault detection and isolation (FDI) for nonlinear systems using two different approaches. The first approach detects and isolates single and multiple faults, particularly when there are restrictions on which process variables can be measured. This model-based FDI method relies on nonlinear state estimators, whose estimates are computed under heavy filtering, and on a high-fidelity residual model obtained from the difference between measurements and estimates. In the second approach, a robust fault detection and isolation (RFDI) system that handles both parameter estimation and parameter uncertainty is proposed, in which complex models are simplified with nonlinear functions so that they can be formulated as differential-algebraic equations (DAE). Within this framework, faults are identified through statistical analysis. Finally, comparisons with existing data-driven approaches show that the proposed model-based methods are capable of distinguishing a fault from the diverse array of possible faults that commonly arise in complex processes.
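The first approach rests on a generic residual-generation scheme: compare filtered estimates against measurements and flag statistically large residuals. The minimal sketch below illustrates only that generic scheme; the exponential smoother stands in for the dissertation's nonlinear state estimators, and the calibration window and 3-sigma threshold are illustrative assumptions.

```python
import numpy as np

def smoothed_estimate(y, alpha=0.05):
    """Heavily filtered estimate of a measured variable; a small alpha
    means strong filtering (a stand-in for a proper state estimator)."""
    x = np.empty_like(y, dtype=float)
    x[0] = y[0]
    for k in range(1, len(y)):
        x[k] = (1.0 - alpha) * x[k - 1] + alpha * y[k]
    return x

def detect(y, y_hat, n_calib=50, threshold=3.0):
    """Flag samples whose residual (measurement minus estimate) exceeds
    `threshold` standard deviations of the residual over an assumed
    fault-free calibration window of the first n_calib samples."""
    r = y - y_hat
    sigma = r[:n_calib].std(ddof=1)
    return np.abs(r) > threshold * sigma
```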
355
Model-Based Test Case Generation for Real-Time Systems. Hessel, Anders, January 2007.
Testing is the dominant verification technique used in the software industry today. The use of automatic test case execution is increasing, but the creation of test cases remains manual and thus error-prone and expensive. To automate the generation and selection of test cases, model-based testing techniques have been suggested. In this thesis two central problems in model-based testing are addressed: how to formally specify coverage criteria, and how to generate a test suite from a formal timed system model such that the test suite satisfies a given coverage criterion. We use model checking techniques to explore the state space of a model until a set of traces is found that together satisfy the coverage criterion.

A key observation is that a coverage criterion can be viewed as a set of items, which we call coverage items, and that each coverage item can be treated as a separate reachability problem. Based on this view of coverage items we define a language, in the form of parameterized observer automata, to formally describe coverage criteria. We show that the language is expressive enough to describe a variety of common coverage criteria from the literature.

Two algorithms for test case generation with observer automata are presented. The first algorithm returns a trace that satisfies all coverage items with minimum cost; we use this algorithm to generate a test suite with minimal execution time. The second algorithm explores only states that may increase the already found set of coverage items, and it works well together with observer automata.

The developed techniques have been implemented in the tool CoVer. The tool has been used in a case study with Ericsson in which a WAP gateway was tested. The case study shows that the techniques have industrial strength.
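The key observation, treating each coverage item as a separate reachability problem, can be sketched directly. The minimal example below explores an untimed transition system breadth-first and returns one trace per coverage item; the `transitions` and `step_items` callbacks are assumed interfaces for illustration. Unlike CoVer, this untimed sketch cannot minimize execution time; the thesis's first algorithm instead searches a timed model for a cost-minimal trace.

```python
from collections import deque

def cover(initial, transitions, step_items, wanted):
    """For each coverage item in `wanted`, find one trace (action sequence)
    reaching it, treating every item as its own reachability target.
    `transitions(state)` yields (action, next_state) pairs;
    `step_items(state, action)` yields the items that step satisfies."""
    traces, seen = {}, {initial}
    queue = deque([(initial, [])])
    while queue and len(traces) < len(wanted):
        state, trace = queue.popleft()
        for action, nxt in transitions(state):
            new_trace = trace + [action]
            for item in step_items(state, action):
                if item in wanted and item not in traces:
                    traces[item] = new_trace
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, new_trace))
    return traces  # coverage item -> witnessing trace
```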
356
Mixture Model Averaging for Clustering. Wei, Yuhong, 30 April 2012.
Model-based clustering is based on a finite mixture of distributions, where each mixture component corresponds to a different group, cluster, subpopulation, or part thereof. Gaussian mixture distributions are most often used. Criteria commonly used for choosing the number of components in a finite mixture model include the Akaike information criterion, the Bayesian information criterion, and the integrated completed likelihood; the best model is taken to be the one with the highest (or lowest) value of a given criterion. This approach is problematic because it is practically impossible to decide what to do when the difference between the best values of two models under such a criterion is 'small'. Furthermore, it is not clear how such values should be calibrated across situations with different sample sizes and different variables in the model, and the approach does not take the magnitude of the likelihood into account. It is therefore worthwhile to consider a model-averaging approach. We consider averaging the top M mixture models and consider applications in clustering and classification. In the course of model averaging, the top M models often have different numbers of mixture components; we therefore propose a method of merging Gaussian mixture components in order to obtain the same number of clusters for all top M models. The idea is to list all combinations of components for merging and then choose the combination with the largest adjusted Rand index (ARI) with respect to a 'reference model'. A weight is defined to quantify the importance of each model. The effectiveness of mixture model averaging for clustering is demonstrated on simulated and real data using the pgmm package, where the ARI from mixture model averaging is greater than that of the corresponding best model. An attractive feature of mixture model averaging is its computational efficiency: it uses only the conditional membership probabilities. Gaussian mixture models are used herein, but the approach could be applied without modification to other mixture models.
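A minimal sketch of the averaging step follows. It deliberately simplifies the thesis's approach: rather than merging components and matching on ARI, it averages only those top-M models that share the reference model's number of components, matches component labels by linear assignment on shared hard labels, and uses simple BIC-difference weights. All of these simplifications are assumptions for illustration, not the thesis's method.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.mixture import GaussianMixture

def averaged_responsibilities(X, ks=(2, 3, 4, 5), top_m=3):
    """Average conditional membership probabilities of the top-M
    Gaussian mixtures ranked by BIC (lower is better in sklearn)."""
    fits = sorted((GaussianMixture(n_components=k, random_state=0).fit(X)
                   for k in ks), key=lambda g: g.bic(X))
    ref = fits[0]                                   # BIC-best reference model
    pool = [g for g in fits[:top_m] if g.n_components == ref.n_components]
    bic = np.array([g.bic(X) for g in pool])
    w = np.exp(-0.5 * (bic - bic.min()))            # BIC-difference weights (assumption)
    w /= w.sum()
    ref_lab = ref.predict(X)
    avg = np.zeros((X.shape[0], ref.n_components))
    for g, wi in zip(pool, w):
        lab = g.predict(X)
        # match this model's components to the reference via shared hard labels
        conf = np.array([[np.sum((ref_lab == i) & (lab == j))
                          for j in range(g.n_components)]
                         for i in range(ref.n_components)])
        _, perm = linear_sum_assignment(-conf)
        avg += wi * g.predict_proba(X)[:, perm]
    return avg
```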
357
Integrating remotely sensed data into forest resource inventories / The impact of model and variable selection on estimates of precision. Mundhenk, Philip Henrich, 26 May 2014.
The last twenty years have shown that integrating airborne laser scanning (Light Detection and Ranging; LiDAR) into forest resource assessments can help increase the precision of estimates. To make this possible, field data must be combined with LiDAR data, and various modelling techniques offer ways to describe this relationship statistically. While the choice of method generally has little influence on point estimates, it yields different estimates of precision.

This study investigated the influence of different modelling techniques and variable selection methods on the precision of estimates, with a focus on LiDAR applications in forest inventories. The variable selection methods considered were the Akaike information criterion (AIC), the corrected Akaike information criterion (AICc), and the Bayesian (or Schwarz) information criterion. In addition, variables were selected based on the condition number and the variance inflation factor. Further methods considered in this study were ridge regression, the least absolute shrinkage and selection operator (lasso), and the random forest algorithm. The stepwise variable selection methods were investigated within both model-assisted and model-based inference, whereas the remaining methods were investigated only within model-assisted inference.

In an extensive simulation study, the influence of the type of modelling method and the type of variable selection on the precision of estimates of population parameters (above-ground biomass in megagrams per hectare) was determined. Five different populations were used: three artificial populations were simulated, and two further populations were based on forest inventory data collected in Canada and Norway. Canonical vine copulas were used to generate synthetic populations from these forest inventory data. Simple random samples were drawn repeatedly from each population, and for each sample the mean and the precision of the mean estimate were estimated. While only one variance estimator was investigated for the model-based approach, three different estimators were investigated for the model-assisted approach.

The results of the simulation study showed that naively applying stepwise variable selection methods generally leads to an overestimation of precision in LiDAR-assisted forest inventories. This biased estimation of precision mattered above all for small samples (n = 40 and n = 50); for larger samples (n = 400), the overestimation of precision was negligible. Ridge regression, the lasso, and the random forest algorithm showed good results with respect to coverage rates and empirical standard errors. From the results of this study it can be concluded that the latter methods should be considered in future LiDAR-assisted forest inventories.
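The mechanism behind the overestimated precision can be reproduced in a few lines: selecting variables on the sample and then estimating the variance from that same sample's residuals understates uncertainty, so nominal 95% intervals cover the true mean less often than advertised. The sketch below is a toy version of such a simulation, with a wholly synthetic population and a textbook model-assisted (difference) estimator; the population size, number of candidate metrics, coefficients, and replication count are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
N, P, n, reps = 10_000, 10, 50, 500
X = rng.normal(size=(N, P))                       # 10 candidate LiDAR metrics
y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(scale=2.0, size=N)  # 2 informative

def forward_aic(Xs, ys):
    """Naive forward selection by in-sample AIC."""
    chosen, best = [], np.inf
    while len(chosen) < Xs.shape[1]:
        scores = []
        for j in set(range(Xs.shape[1])) - set(chosen):
            cols = chosen + [j]
            m = LinearRegression().fit(Xs[:, cols], ys)
            rss = np.sum((ys - m.predict(Xs[:, cols])) ** 2)
            scores.append((len(ys) * np.log(rss / len(ys)) + 2 * (len(cols) + 2), j))
        aic, j = min(scores)
        if aic >= best:
            break
        best, chosen = aic, chosen + [j]
    return chosen

hits = 0
for _ in range(reps):
    s = rng.choice(N, size=n, replace=False)
    sel = forward_aic(X[s], y[s])
    m = LinearRegression().fit(X[s][:, sel], y[s])
    e = y[s] - m.predict(X[s][:, sel])
    mu = m.predict(X[:, sel]).mean() + e.mean()   # model-assisted (difference) estimator
    half = 1.96 * np.sqrt((1 - n / N) * e.var(ddof=1) / n)
    hits += abs(mu - y.mean()) <= half
print(f"empirical coverage of nominal 95% intervals: {hits / reps:.2f}")
```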
358
Model Predictive Control for Automotive Engine Torque Considering Internal Exhaust Gas Recirculation. Hayakawa, Yoshikazu; Jimbo, Tomohiko.
The 18th World Congress of the International Federation of Automatic Control, Milano (Italy), August 28 - September 2, 2011.
359
Multi-Model Heterogeneous Verification of Cyber-Physical Systems. Rajhans, Akshay H., 01 May 2013.
Complex systems are designed using the model-based design paradigm, in which mathematical models of systems are created and checked against specifications. Cyber-physical systems (CPS) are complex systems in which the physical environment is sensed and controlled by computational or cyber elements, possibly distributed over communication networks. Various aspects of CPS design, such as physical dynamics, software, control, and communication networking, must interoperate correctly for the system as a whole to function correctly. Modeling formalisms, analysis techniques, and tools for designing these different aspects have evolved independently, and they remain dissimilar and disparate. There is no unifying formalism in which all these aspects can be modeled equally well. Model-based design of CPS must therefore make use of a collection of models in several different formalisms and use the respective analysis methods and tools together to ensure correct system design. To enable doing this in a formal manner, this thesis develops a framework for multi-model verification of cyber-physical systems based on behavioral semantics.

Heterogeneity arising from the different interacting aspects of CPS design must be addressed in order to enable system-level verification. In current practice, there is no principled approach that deals with this modeling heterogeneity within a formal framework. We develop behavioral semantics to address heterogeneity in a general yet formal manner. Our framework makes no assumptions about the specifics of any particular formalism, so it readily supports various formalisms, techniques, and tools. Models can be analyzed independently in isolation, supporting separation of concerns. Mappings across heterogeneous semantic domains enable associations between analysis results. Interdependencies across different models and specifications can be formally represented as constraints over parameters, and verification can be carried out in a semantically consistent manner. Composition of analysis results is supported both hierarchically, across different levels of abstraction, and structurally, into interacting component models at a given level of abstraction. The theoretical concepts developed in the thesis are illustrated with a case study on the hierarchical heterogeneous verification of an automotive intersection collision avoidance system.
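As a toy illustration of representing interdependencies as constraints over parameters, the sketch below checks a shared parameter valuation against the region each per-formalism analysis has verified; a system-level design is consistent only if some valuation satisfies all constraints at once. The parameter names and bounds are invented for illustration and are not taken from the thesis's collision avoidance case study.

```python
# Each analysis exports the parameter region it verified; system-level
# correctness requires a valuation satisfying every model's constraint.
constraints = {
    "hybrid_dynamics":  lambda p: p["brake_delay"] <= 0.10,             # seconds
    "control_design":   lambda p: p["sample_period"] <= p["brake_delay"] / 2,
    "network_analysis": lambda p: p["sample_period"] >= 0.02,           # seconds
}

def consistent(p):
    """Check a parameter valuation against every verified region."""
    return all(ok(p) for ok in constraints.values())

print(consistent({"brake_delay": 0.08, "sample_period": 0.03}))  # True
print(consistent({"brake_delay": 0.08, "sample_period": 0.05}))  # False
```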
360
Modeliais paremtas testavimas: testavimo įrankių tyrimas / Model-based testing: analysis of model-based testing tools. Adomaitis, Ernestas, 27 June 2014.
Model-based testing has become increasingly popular in recent years. Major reasons include the need for quality assurance for increasingly complex systems and the emerging model-centric development paradigm. This thesis analyses model-based testing tools and presents possibilities for their improvement. More than a dozen criteria are applied in analysing the selected tools. The thesis analyses the application of test coverage criteria and presents possible improvements, examines the problems that occur in the model-based testing process, and proposes solutions. It contains an in-depth analysis of test coverage criteria and model traversal algorithms. Based on this analysis, the advantages of combining test criteria are presented, and a solution is proposed for managing the test-case generation process and pursuing one hundred percent model coverage. Finally, the thesis presents and analyses in detail a methodology for constructing test criteria that could be applied in testing tools.
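To make the notions of model traversal and coverage criteria concrete, the sketch below runs a random-walk traversal over a toy finite state machine and measures all-transitions coverage, one common stopping criterion for test generation. The FSM, its states, and its actions are invented for illustration and do not come from the thesis or any of the tools it surveys.

```python
import random

# Toy FSM of a login dialog: state -> {action: next_state}.
FSM = {
    "logged_out":  {"enter_pin": "pin_entered", "help": "logged_out"},
    "pin_entered": {"submit": "logged_in", "clear": "logged_out"},
    "logged_in":   {"logout": "logged_out"},
}

def random_walk(fsm, start, steps, seed=0):
    """Random traversal that records which (state, action) transitions
    were exercised and reports all-transitions coverage."""
    rng = random.Random(seed)
    covered, state, trace = set(), start, []
    for _ in range(steps):
        action = rng.choice(list(fsm[state]))
        covered.add((state, action))
        trace.append(action)
        state = fsm[state][action]
    total = sum(len(v) for v in fsm.values())
    return trace, len(covered) / total

trace, cov = random_walk(FSM, "logged_out", 20)
print(f"all-transitions coverage after 20 steps: {cov:.0%}")
```

A guided traversal (for instance, the breadth-first reachability search sketched for entry 355 above) reaches full transition coverage with far shorter traces than a random walk, which is one reason combining criteria with directed search pays off.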