391
Compositional Multi-objective Parameter Tuning. Husak, Oleksandr (07 July 2020)
Multi-objective decision-making is critical for everyday tasks and engineering problems. Finding the ideal trade-off that maximizes all of a solution's criteria requires considerable experience or a significant amount of resources, which makes such decisions difficult to reach for expensive problems such as those in engineering. Most of the time, when solving such expensive problems, we are limited by time, resources, and available expertise. It is therefore desirable to simplify or approximate the problem where possible before solving it. The state-of-the-art approach to this simplification is model-based, or surrogate-based, optimization. These approaches use approximation models of the real problem that are cheaper to evaluate. The models are, in essence, simplified hypotheses of cause-effect relationships that replace expensive evaluations with cheap approximations. In this thesis, we investigate surrogate models as wrappers for the real problem and apply multi-objective evolutionary algorithms (MOEAs) to find Pareto-optimal decisions.
The core idea is the combination and stacking of several models, each of which describes an independent objective. When combined, these independent models describe the multi-objective space, and this space is optimized as a single surrogate hypothesis: the compositional surrogate model. Combining multiple models makes it possible to approximate more complicated problems, and stacking valid surrogate hypotheses speeds up convergence. Consequently, a better result is obtained at lower cost.
We combine several possible surrogate variants and use those that pass validation. After valid single-objective surrogates are recombined into a multi-objective surrogate hypothesis, several MOEA instances provide several Pareto-front approximations. The modular structure of the implementation allows us to avoid a static sampling plan and to use self-adaptable models in a customizable portfolio. In numerous case studies, our methodology finds solutions comparable to those of standard NSGA-II while using considerably fewer evaluations. We recommend the present approach for parameter tuning of expensive black-box functions. (A minimal sketch of the compositional-surrogate idea follows the table of contents below.)

1 Introduction
1.1 Motivation
1.2 Objectives
1.3 Research questions
1.4 Results overview
2 Background
2.1 Parameter tuning
2.2 Multi-objective optimization
2.2.1 Metrics for multi-objective solution
2.2.2 Solving methods
2.3 Surrogate optimization
2.3.1 Domain-specific problem
2.3.2 Initial sampling set
2.4 Discussion
3 Related Work
3.1 Comparison criteria
3.2 Platforms and frameworks
3.3 Model-based multi-objective algorithms
3.4 Scope of work
4 Compositional Surrogate
4.1 Combinations of surrogate models
4.1.1 Compositional Surrogate Model [RQ1]
4.1.2 Surrogate model portfolio [RQ2]
4.2 Sampling plan [RQ3]
4.2.1 Surrogate Validation
4.3 Discussion
5 Implementation
5.1 Compositional surrogate
5.2 Optimization orchestrator
6 Evaluation
6.1 Experimental setup
6.1.1 Optimization problems
6.1.2 Optimization search
6.1.3 Surrogate portfolio
6.1.4 Benchmark baseline
6.2 Benchmark 1: Portfolio with compositional surrogates. Dynamic sampling plan
6.3 Benchmark 2: Inner parameters
6.3.1 TutorM parameters
6.3.2 Sampling plan size
6.4 Benchmark 3: Scalability of surrogate models
6.5 Discussion of results
7 Conclusion
8 Future Work
A Appendix
A.1 Benchmark results on ZDT, DTLZ, and WFG problems
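
The sketch below illustrates the compositional-surrogate idea described in the abstract: one model is fitted per objective, each is validated independently on held-out data, and only the valid ones are stacked into a multi-objective predictor that a cheap search then exploits. The two-objective test problem, the random-forest surrogates, and the validation threshold are illustrative assumptions, not the thesis' implementation (which uses a portfolio of model types and actual MOEA instances).

```python
# Minimal sketch of a compositional surrogate: one model per objective,
# each validated on held-out data, then stacked into a multi-objective
# predictor that a cheap search exploits. The two-objective test problem
# and the validation threshold are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

def expensive_problem(x):
    # Stand-in for the costly black-box function (two objectives).
    f1 = np.sum(x ** 2, axis=1)
    f2 = np.sum((x - 2.0) ** 2, axis=1)
    return np.column_stack([f1, f2])

rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(60, 3))   # small initial sampling plan
Y = expensive_problem(X)               # the only expensive evaluations

# Fit and validate one surrogate per objective; keep only valid hypotheses.
# In the full method, an invalid objective would trigger further sampling.
surrogates = []
for j in range(Y.shape[1]):
    X_tr, X_va, y_tr, y_va = train_test_split(X, Y[:, j], random_state=0)
    model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
    if r2_score(y_va, model.predict(X_va)) > 0.6:  # validation gate
        surrogates.append(model)

def surrogate_front(candidates):
    # Stack single-objective predictions into one multi-objective hypothesis,
    # then apply a simple non-dominated filter in place of a full MOEA run.
    F = np.column_stack([m.predict(candidates) for m in surrogates])
    keep = [i for i in range(len(F))
            if not any(np.all(F[k] <= F[i]) and np.any(F[k] < F[i])
                       for k in range(len(F)))]
    return candidates[keep], F[keep]

pareto_x, pareto_f = surrogate_front(rng.uniform(-5, 5, size=(500, 3)))
print(f"{len(pareto_x)} non-dominated candidates found on the surrogate")
```

Only the initial sample touches the real problem; the Pareto search runs entirely on the cheap composition, which is what allows the reported saving in expensive evaluations.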
392
Fault Detection and Diagnosis for Brine to Water Heat Pump Systems. Abuasbeh, Mohammad (January 2016)
The overall objective of this thesis is to develop fault detection and diagnosis methods for ground source heat pumps that servicemen can use to accurately detect and diagnose faults during heat pump operation. The aim is focused on two such methods: a sensitivity-ratio method and a data-driven method using principal component analysis. For the sensitivity-ratio method, two semi-empirical models of the heat pump unit were built to simulate fault-free and faulty conditions; both models were cross-validated against fault-free experimental data, and the fault-free model serves as a reference. A fault trend analysis is then performed to select a pair of uniquely sensitive and insensitive parameters from which the sensitivity ratio for each fault is calculated. When the sensitivity-ratio value for a certain fault drops below a predefined threshold, that fault is diagnosed and an alarm message identifying it appears. Simulated fault data were used to test the model, which successfully detected and diagnosed the tested fault types under different operating conditions. In the second method, principal component analysis is used to derive linear combinations of the original variables and compute the principal components, reducing the dimensionality of the system. A simple clustering technique then handles operating-condition classification and the fault detection and diagnosis process. Each fault is represented by four clusters connected by three lines, where each cluster represents a different fault intensity level. Fault detection is performed by measuring the shortest orthogonal distance between the test point and the lines connecting the fault clusters. Simulated fault-free and faulty data were used to train the model; a new set of simulated fault data was then used to test it, and the model successfully detected and diagnosed all fault types and intensity levels under different operating conditions. Both models use only seven temperature measurements, two pressure measurements (from which the condensation and evaporation temperatures are calculated), and the electrical power as inputs, which reduces cost and makes implementation more convenient. Finally, a user-friendly graphical user interface was built for each model to facilitate its operation by the serviceman.
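
As an illustration of the second method, the sketch below projects the ten measurements into a low-dimensional principal-component space, represents each fault as a polyline through four intensity-level clusters, and diagnoses a test point by its shortest orthogonal distance to those polylines. All data, cluster positions, and fault names are synthetic placeholders rather than the heat pump data used in the thesis.

```python
# Sketch of the PCA-plus-clustering diagnosis idea: reduce the ten
# measurements to two principal components, model each fault as a
# polyline through its intensity-level clusters, and diagnose a test
# point via its shortest orthogonal distance to those polylines.
import numpy as np
from sklearn.decomposition import PCA

def point_to_segment(p, a, b):
    # Orthogonal distance from point p to the segment from a to b.
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

rng = np.random.default_rng(1)
train = rng.normal(size=(200, 10))   # 7 temps + 2 pressures + power
pca = PCA(n_components=2).fit(train)

# Cluster centres per fault: four intensity levels connected by three lines.
fault_lines = {
    "refrigerant_leak": pca.transform(rng.normal(0.5, 0.1, size=(4, 10))),
    "fouled_exchanger": pca.transform(rng.normal(-0.5, 0.1, size=(4, 10))),
}

def diagnose(measurement):
    z = pca.transform(measurement.reshape(1, -1))[0]
    return min(
        (point_to_segment(z, c[i], c[i + 1]), name)
        for name, c in fault_lines.items()
        for i in range(len(c) - 1)
    )  # (shortest distance, most likely fault)

dist, fault = diagnose(rng.normal(0.5, 0.1, size=10))
print(f"closest fault signature: {fault} (distance {dist:.3f})")
```

A small distance to a fault polyline both diagnoses the fault type and, via the nearest segment, indicates its intensity level.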
393
Technologies and Evaluation Metrics for On-Board Over the Air Control. Datta, Aneysha (January 2022)
This project was carried out at the Electronic Embedded Systems Architecture Department at Volvo Construction Equipment (VCE), Eskilstuna, Sweden. It forms the baseline for a stepwise, systematic research initiative to convert the wired technologies used for certain in-vehicle control and communication components to wireless technologies. In-vehicle wireless networks are being increasingly researched and improved to minimize the manufacturing and maintenance cost of the total wiring harness within a vehicle. Fault tracking and maintenance become more convenient within a wireless network, and wireless intra-vehicular communication provides an open architecture that can accommodate new components and applications. One such use has been studied in this thesis for the Display Control Unit of Volvo paver machines. Newly designed hardware demands new technologies to ensure operator safety, security, comfort, convenience, and information. The research takes into account five probable use-cases for control and communication around the new hardware and studies suitable wireless technologies that could replace the planned wired technology. From a detailed literature study and the specifications provided by VCE, WAVE/IEEE 802.11p and DMG/IEEE 802.11ad were selected as optimal candidates, and both were modelled at the physical layer. After comparing the results of WAVE for four channel models and eight coding and modulation schemes, 1/2 BPSK (3 Mbps) and 1/2 QPSK (6 Mbps) were found to be optimal for the three Rician fading channels of rural LOS, urban LOS, and highway LOS. For use-cases that involve larger distances and a large exchange of control signals, WAVE is a good choice. DMG has 19 modulation schemes for its single-carrier modes, some of which are extremely robust at low SNR; around an SNR of 20, DMG shows fewer packet errors than WAVE, and the lower error rate is also evident from the BER values. For use-cases that involve smaller distances and a lot of image data, DMG is preferable. The work, however, does not study the safety and security aspects. The analysis and modelling are theoretical, being based on literature studies and the parameters provided by VCE; the model needs to be evaluated against field studies and practical measurements with a prototype before implementation inside the paver.
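
The following Monte-Carlo sketch illustrates the kind of physical-layer comparison reported above: the bit error rate of uncoded BPSK over a flat Rician fading channel versus AWGN. The K-factor, SNR grid, and flat-fading assumption are illustrative simplifications; they do not reproduce the VCE channel models or the full WAVE/DMG transceiver chains evaluated in the thesis.

```python
# Monte-Carlo BER of uncoded BPSK over a flat Rician fading channel
# (LOS plus scattered component, unit average power) versus AWGN,
# with ideal coherent detection. Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n_bits, K = 200_000, 6.0   # bits per SNR point, Rician K-factor

def ber_bpsk(snr_db, rician=True):
    bits = rng.integers(0, 2, n_bits)
    s = 2.0 * bits - 1.0   # BPSK mapping: {0,1} -> {-1,+1}
    if rician:
        los = np.sqrt(K / (K + 1))
        nlos = np.sqrt(1 / (2 * (K + 1)))
        h = los + nlos * (rng.normal(size=n_bits) + 1j * rng.normal(size=n_bits))
    else:
        h = np.ones(n_bits)
    snr = 10 ** (snr_db / 10)
    noise = (rng.normal(size=n_bits) + 1j * rng.normal(size=n_bits)) / np.sqrt(2 * snr)
    r = h * s + noise
    detected = (np.real(r * np.conj(h)) > 0).astype(int)  # coherent detection
    return np.mean(detected != bits)

for snr_db in (0, 5, 10, 15, 20):
    print(f"SNR {snr_db:2d} dB: Rician BER {ber_bpsk(snr_db):.2e}, "
          f"AWGN BER {ber_bpsk(snr_db, rician=False):.2e}")
```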
394
Towards patient selection for cranial proton beam therapy – Assessment of current patient-individual treatment decision strategies. Dutz, Almut (27 November 2020)
Proton beam therapy shows dosimetric advantages in sparing healthy tissue compared to conventional photon radiotherapy. Those patients who stand to experience the greatest reduction in side effects should preferentially be treated with proton beam therapy, and one option for this patient selection is the model-based approach, whose feasibility in patients with intracranial tumours is investigated in this thesis. First, normal tissue complication probability (NTCP) models for early and late side effects were developed and validated in external cohorts based on data from patients treated with proton beam therapy. Acute erythema as well as acute and late alopecia were associated with high-dose parameters of the skin, while late mild hearing loss was related to the mean dose of the ipsilateral cochlea. Second, neurocognitive function, a relevant side effect for brain tumour patients, was investigated in detail using subjective and objective measures; it remained largely stable during recurrence-free follow-up until two years after proton beam therapy. Finally, potential toxicity differences were evaluated based on an individual proton versus photon treatment-plan comparison as well as on models predicting various side effects. Although proton beam therapy achieved a high relative reduction of dose exposure in contralateral organs at risk, the associated reduction of side-effect probabilities was less pronounced. Using a model-based selection procedure, the majority of the examined patients would have been eligible for proton beam therapy, mainly due to the predictions of a model of neurocognitive function. (An illustrative NTCP formulation follows the outline below.)

1. Introduction
2. Theoretical background
2.1 Treatment strategies for tumours in the brain and skull base
2.1.1 Gliomas
2.1.2 Meningiomas
2.1.3 Pituitary adenomas
2.1.4 Tumours of the skull base
2.1.5 Role of proton beam therapy
2.2 Radiotherapy with photons and protons
2.2.1 Biological effect of radiation
2.2.2 Basic physical principles of radiotherapy
2.2.3 Field formation in radiotherapy
2.2.4 Target definition and delineation of organs at risk
2.2.5 Treatment plan assessment
2.3 Patient outcome
2.3.1 Scoring of side effects
2.3.2 Patient-reported outcome measures – Quality of life
2.3.3 Measures of neurocognitive function
2.4 Normal tissue complication probability models
2.4.1 Types of NTCP models
2.4.2 Endpoint definition and parameter fitting
2.4.3 Assessment of model performance
2.4.4 Model validation
2.5 Model-based approach for patient selection for proton beam therapy
2.5.1 Limits of randomised controlled trials
2.5.2 Principles of the model-based approach
3. Investigated patient cohorts
4. Modelling of side effects following cranial proton beam therapy
4.1 Experimental design for modelling early and late side effects
4.2 Modelling of early side effects
4.2.1 Results
4.2.2 Discussion
4.3 Modelling of late side effects
4.3.1 Results
4.3.2 Discussion
4.4 Interobserver variability of alopecia and erythema assessment
4.4.1 Patient cohort and experimental design
4.4.2 Results
4.4.3 Discussion
4.5 Summary
5. Assessing the neurocognitive function following cranial proton beam therapy
5.1 Patient cohort and experimental design
5.2 Results
5.2.1 Performance at baseline
5.2.2 Correlation between subjective and objective measures
5.2.3 Time-dependent score analyses
5.3 Discussion and conclusion
5.4 Summary
6. Treatment plan and NTCP comparison for patients with intracranial tumours
6.1 Motivation
6.2 Treatment plan comparison of cranial proton and photon radiotherapy
6.2.1 Patient cohort and experimental design
6.2.2 Results
6.2.3 Discussion
6.3 Application of NTCP models
6.3.1 Patient cohort and experimental design
6.3.2 Results
6.3.3 Discussion
6.4 Summary
7. Conclusion and further perspectives
8. Zusammenfassung
9. Summary
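
As an illustration of the kind of normal tissue complication probability model fitted and validated in this work, the sketch below implements the standard Lyman-Kutcher-Burman (LKB) formulation, in which the generalized equivalent uniform dose (gEUD) of an organ's dose-volume histogram is mapped to a complication probability through a probit function. The parameter values and the example DVH are placeholders, not the fitted values reported in the thesis.

```python
# Lyman-Kutcher-Burman NTCP model: NTCP = Phi((gEUD - TD50) / (m * TD50)),
# with gEUD = (sum_i v_i * d_i^(1/n))^n over the DVH bins.
import numpy as np
from scipy.stats import norm

def ntcp_lkb(dose_bins, volume_fractions, td50, m, n):
    """dose_bins        -- dose per DVH bin (Gy)
    volume_fractions -- fractional organ volume per bin (sums to 1)
    td50             -- dose giving 50% complication probability (Gy)
    m                -- slope parameter
    n                -- volume-effect parameter (n = 1: mean-dose organ)"""
    geud = np.sum(volume_fractions * dose_bins ** (1 / n)) ** n
    return norm.cdf((geud - td50) / (m * td50))

# Hypothetical cochlea DVH; late hearing loss is driven by mean dose (n = 1).
dose = np.array([10.0, 25.0, 40.0])
vol = np.array([0.5, 0.3, 0.2])
print(f"NTCP = {ntcp_lkb(dose, vol, td50=45.0, m=0.3, n=1.0):.1%}")
```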
395
A Test Framework for Executing Model-Based Testing in Embedded Systems. Iyenghar, Padma (25 September 2012)
Model-Driven Development (MDD) and Model-Based Testing (MBT) are individually gaining inroads in embedded software engineering projects. However, their full-fledged, integrated usage in real-life embedded software engineering projects (e.g. industrially relevant examples) and the execution of MBT in resource-constrained embedded systems (e.g. a 16-bit system with 64 KiByte of memory) are emerging fields.
Addressing the aforementioned gaps, this thesis proposes an integrated model-based approach and test framework for executing model-based test cases, with minimal overhead, in embedded systems. Given a chosen System Under Test (SUT) and the system design model, a test framework generation algorithm generates the necessary artifacts (i.e., the test framework) for executing the model-based test cases. The main goal of the test framework is to enable test automation and test-case execution on the host computer (which runs the test harness), so that only the test input data is executed on the target. The significant overhead involved in interpreting the test data on the target is eliminated, as the test framework makes use of a target debugger (communication and decoding agent) on the host and a target monitor (a software-based runtime monitoring routine) in the embedded system. In the prototype implementation of the proposed approach, the corresponding standardized languages, the Unified Modeling Language (UML) and the UML Testing Profile (UTP), are used for the MDD and MBT phases, respectively. The applicability of the proposed approach is demonstrated through an experimental evaluation of the prototype on real-life examples.
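
The host/target split described above can be pictured with the following schematic sketch: the host keeps the full test harness, ships only compact, encoded test input data to the target monitor, then decodes the replies and issues the verdict. The three-byte framing and the loopback `Transport` stub are illustrative assumptions, not the protocol or debugger interface used in the thesis.

```python
# Schematic host-side view of the proposed split: the test harness and
# oracle stay on the host; only encoded stimuli cross the debugger link
# to the target monitor. Framing and transport are illustrative stubs.
import struct

class Transport:
    # Placeholder for the debugger link to the target (e.g. a serial port).
    def __init__(self):
        self._frame = b""
    def send(self, frame: bytes):
        self._frame = frame                     # loopback stub for the demo
    def receive(self) -> bytes:
        op, arg = struct.unpack("<BH", self._frame)
        return struct.pack("<BH", op, arg + 1)  # pretend the target ran it

def run_test_case(link, steps):
    """Execute one model-based test case: send each encoded stimulus,
    decode the target's reply on the host, and verdict against the oracle."""
    for op_code, argument, expected in steps:
        link.send(struct.pack("<BH", op_code, argument))  # 3-byte frame
        _, result = struct.unpack("<BH", link.receive())
        if result != expected:
            return f"FAIL at op {op_code}: got {result}, expected {expected}"
    return "PASS"

# Test steps derived (in the real framework) from UML sequence diagrams.
print(run_test_case(Transport(), [(1, 41, 42), (2, 99, 100)]))
```

Because the verdict logic lives on the host, the target-side footprint is fixed by the monitor alone, which is the property behind the memory-overhead results reported next.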
The empirical results indicate that the total time spent executing the test cases on the target (runtime complexity) comprises only the time the target monitor spends decoding the test input data and executing it in the embedded system. Similarly, the only memory requirement on the target for executing the model-based test cases is that of the software-based target monitor. A quantitative comparison of the percentage change in memory overhead (runtime memory complexity) between the existing approach and the proposed approach indicates that the existing approach (e.g. in the MDD/MBT tool Rhapsody) introduces approximately 150% to 350% additional memory overhead for executing the test cases. In the proposed approach, on the other hand, the target monitor is independent of the number of test cases to be executed and of their complexity. Hence, the percentage change in memory overhead for the proposed approach shows a declining trend with respect to increasing code size for equivalent application scenarios (approximately 17% down to 2%).
Thus, the proposed test automation approach provides the essential benefit of executing model-based tests without downloading the test harness to the target. It is demonstrated that executing test cases specified at higher abstraction levels (e.g. as UML sequence diagrams) in resource-constrained embedded systems is feasible, and how this may be realized using the proposed approach. Further, as the proposed runtime monitoring mechanism is time- and memory-aware, the overhead parameters can be accommodated in earlier phases of the embedded software development cycle (if necessary), and the target monitor can be included in the final production code. These advantages highlight the scalability, applicability, reliability, and superiority of the proposed approach over existing methodologies for executing model-based test cases in embedded systems.
396
Visual Tracking and Motion Estimation for an On-orbit Servicing of a Satellite. Oumer, Nassir Workicho (28 September 2016)
This thesis addresses visual tracking of a non-cooperative as well as a partially cooperative satellite, to enable close-range rendezvous between a servicer and a target satellite. Visual tracking and estimation of the relative motion between a servicer and a target satellite are critical abilities for rendezvous and proximity operations such as repairing and deorbiting. For this purpose, lidar has been widely employed in cooperative rendezvous and docking missions. Despite its robustness to harsh space illumination, lidar is heavy, has rotating parts, and consumes considerable power, thus straining the stringent requirements of satellite design. Inexpensive on-board cameras, on the other hand, can provide an effective solution that works over a wide range of distances. However, space lighting conditions are particularly challenging for image-based tracking algorithms because of direct sunlight exposure and because the glossy surface of the satellite creates strong reflections and image saturation, which complicate tracking procedures. To address these difficulties, the relevant literature is examined in the fields of computer vision and satellite rendezvous and docking. Two classes of problems are identified, and solutions implemented on a standard computer are provided. First, in the absence of a geometric model of the satellite, the thesis presents a robust feature-based method with prediction capability for the case of insufficient features, relying on a point-wise motion model. Second, we employ a robust model-based hierarchical position localization method to handle the change of image features over a range of distances and to localize an attitude-controlled (partially cooperative) satellite. Moreover, the thesis presents a pose tracking method that addresses ambiguities in edge matching, and a pose detection algorithm based on appearance model learning. The methods are validated with real camera images and ground-truth data generated on a laboratory test bed that reproduces space-like conditions. The experimental results indicate that camera-based methods provide robust and accurate tracking for the approach of malfunctioning satellites despite the difficulties associated with specularities and direct sunlight. Exceptional lighting conditions associated with the sun angle are also discussed, with the aim of achieving a fully reliable localization system for a given mission.
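
The prediction capability mentioned for the feature-based method can be illustrated with a constant-velocity Kalman filter that bridges frames in which glare or saturation leaves too few features to measure the target's motion. The sketch below, using OpenCV's `cv2.KalmanFilter`, tracks a synthetic 2D trajectory with periodic measurement dropouts; the motion model, noise covariances, and dropout pattern are illustrative, not those of the thesis.

```python
# Constant-velocity Kalman filter bridging frames where image features
# drop out (e.g. lost to sun glare): on dropout frames the filter's
# prediction stands in for the missing measurement.
import numpy as np
import cv2

kf = cv2.KalmanFilter(4, 2)                    # state: x, y, vx, vy
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

for t in range(20):
    predicted = kf.predict()[:2].ravel()
    if t % 4 != 3:                             # features available this frame
        measurement = np.array([[2.0 * t], [1.5 * t]], np.float32)
        kf.correct(measurement)
        print(f"t={t:2d} measured  ({2.0 * t:5.1f}, {1.5 * t:5.1f})")
    else:                                      # dropout: rely on prediction
        print(f"t={t:2d} predicted ({predicted[0]:5.1f}, {predicted[1]:5.1f})")
```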
397
Pattern Based System Engineering (PBSE) - Product Lifecycle Management (PLM) Integration and Validation. Gupta, Rajat (17 November 2017)
Indiana University-Purdue University Indianapolis (IUPUI)

Mass customization, small lot sizes, reduced cost, high variability of product types, and a changing product portfolio are characteristics of modern manufacturing systems during their life cycle. A direct consequence of these characteristics is a more complex system and supply chain. Product lifecycle management (PLM) and model-based systems engineering (MBSE) are tools that have been proposed and implemented to address different aspects of this complexity and the resulting challenges. Our previous work successfully implemented an MBSE model in a PLM platform. More specifically, pattern-based system engineering (S* pattern) models of systems are integrated with TEAMCENTER to link and interface the system level with the component level and to streamline the lifecycle across disciplines. The benefit of the implementation is twofold. On one side, it helps system engineers using system engineering models shift from learning how to model to implementing the model, which leads to more effective systems definition, design, integration, and testing. On the other side, the PLM platform provides a reliable database to store legacy data for future use and to track changes during the entire process, including one of the most important tools a systems engineer needs: an automatic report generation tool. In the current work, we have configured a PLM platform (TEAMCENTER) to support the automatic generation of reports and requirements tables using a generic oil filter system lifecycle. Three tables have been configured for automatic generation: the Feature Definitions table, the Detail Requirements table, and the Stakeholder Feature Attributes table. These tables were specifically chosen because they describe all the requirements of the system and cover all physical behaviours the oil filter system shall exhibit during its physical interactions with external systems. The requirements tables represent core content for a typical systems engineering report. With the help of the automatic report generation tool, it is possible to prepare the entire report within one single system, the PLM system, ensuring a single reliable data source for an organization. Automatic generation of these contents can save systems engineers time, avoid duplicated work and human errors in report preparation, and train future generations of the workforce in the lifecycle, all while encouraging standardized documents in an organization.
398
Sensor Placement for Diagnosis of Large-Scale, Complex Systems: Advancement of Structural Methods. Rahman, Brian M. (02 October 2019)
No description available.
399
Hybrid Electric Vehicle Model Development and Design of Controls Testing Framework. Satra, Mahaveer Kantilal (January 2020)
No description available.
400
Model-Based Development Methods as Enablers of Smart Services in the Context of Industrie 4.0 (original title: Modellbasierte Entwicklungsmethoden als Enabler von Smart Services im Kontext von Industrie 4.0). Kampfmann, Rüdiger; Menager, Nils (29 May 2018)
Constantly increasing demands on industrial plants, such as higher throughput or greater flexibility, lead to increased complexity of these systems. In addition, more and more functionality is shifting from hardware to software, so that the importance of software is growing steadily. Meeting this change with competitive development times is one of the most important challenges in the automation sector. One approach is the use of model-based development methods. While model-based methods are being used increasingly often in the early phases of the development process, there is considerable room for improvement in the later development phases and, above all, in the operation phase. This contribution first presents the methods already used in practice today, using the example of a complex robot kinematics. It then focuses on the operation phase and shows the added value that results from the use of so-called Smart Services built on the already existing physical simulation models.