91

Applying an information system framework to the Army's simulation support plan process

Boerjan, Robert A. 01 April 2003 (has links)
No description available.
92

A Knowledge Framework for Integrating Multiple Perspective in Decision-Centric Design

Mocko, Gregory Michael 11 April 2006 (has links)
Problem: Engineering design decisions require the integration of information from multiple, disparate sources. However, this information is often independently created, limited to a single perspective, and not formally represented, making it difficult to formulate decisions. Hence, the primary challenge is the development of computational representations that facilitate the exchange of information for decision support.

Approach: First, the scope of this research is limited to representing design decisions as compromise decision support problems (cDSP). To address this challenge, the primary hypothesis is that a formal language will enable the semantics of the cDSP to be captured, thus providing a digital interface through which design information can be exchanged. The primary hypothesis is addressed through the development of a description logic (DL) based formal language. The primary research question is decomposed into four sub-questions. The first two relate to the development of a vocabulary for representing the semantics of the cDSP; the hypothesis used to answer them is that formal information modeling techniques can be used to explicitly capture the semantics and structure of the cDSP. The second research question is focused on the realization of a computer-processable representation; the hypothesis used to answer it is that DL can be used for developing computational representations. The third research question is related to the organization and retrieval of decision information; the hypothesis used to answer it is that DL reasoning algorithms can be used to support organization and retrieval.

Validation: The formal language developed in this dissertation is theoretically and empirically validated using the validation square approach. Validation of the hypotheses is achieved by systematically building confidence through example problems, including the cDSP construct, analysis support models, the design of a cantilever beam, and the design of a structural fin array heat sink.

Contributions: The primary contribution of this dissertation is a formal language for capturing the semantics of cDSPs and analysis support models, comprising: (1) a systematic methodology for decision formulation, (2) a cDSP vocabulary, (3) a graphical information model, and (4) a DL-based representation. Collectively, these components provide a means for exchanging cDSP information.
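To make the cDSP vocabulary above more concrete, the following minimal Python sketch shows the kind of structured information a compromise Decision Support Problem carries (find, satisfy, bounds, goals, and a deviation-minimizing objective). It is an illustration only, not the dissertation's DL-based representation; the CompromiseDSP and Goal classes and the cantilever-beam values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    target: float
    weight: float = 1.0   # priority weight in the deviation function

@dataclass
class CompromiseDSP:
    """Minimal structural sketch of a compromise Decision Support Problem."""
    find: list             # system and deviation variables
    satisfy: list          # constraints and goal relations (as strings here)
    bounds: dict           # variable name -> (lower, upper)
    goals: list = field(default_factory=list)
    minimize: str = "weighted sum of deviation variables"

# Hypothetical cantilever-beam formulation, echoing the example problems above.
beam = CompromiseDSP(
    find=["width_b", "height_h", "d_minus", "d_plus"],
    satisfy=["stress(b, h) <= sigma_allow",
             "deflection(b, h) + d_minus - d_plus == target_deflection"],
    bounds={"width_b": (0.01, 0.20), "height_h": (0.01, 0.40)},
    goals=[Goal("deflection", target=0.002), Goal("mass", target=5.0)],
)
print(beam.find, "->", beam.minimize)
```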
93

Metamodeling strategies for high-dimensional simulation-based design problems

Shan, Songqing 13 October 2010 (has links)
Computational tools such as finite element analysis and simulation are commonly used for system performance analysis and validation. It is often impractical to rely exclusively on a high-fidelity simulation model for design activities because of high computational costs. Mathematical models are therefore constructed to approximate the simulation model and support the design activities. Such models are referred to as “metamodels,” and the process of constructing one is called “metamodeling.” Metamodeling, however, faces considerable challenges arising from the high dimensionality of the underlying problems, in addition to the high computational costs and unknown function properties (that is, black-box functions) of the analysis or simulation. The combination of these three challenges defines the so-called high-dimensional, computationally expensive, and black-box (HEB) problems, for which practical methods are currently lacking. Through a survey of existing techniques, this dissertation finds that the major deficiency of current metamodeling approaches lies in the separation of the metamodeling from the properties of the underlying functions. The survey also identifies two promising approaches, mapping and decomposition, for solving HEB problems. A new analytic methodology, radial basis function–high-dimensional model representation (RBF-HDMR), is proposed to model HEB problems. RBF-HDMR decomposes the effects of variables or variable sets on system outputs. Compared with other metamodels, RBF-HDMR has three distinct advantages: 1) it fundamentally reduces the number of calls to the expensive simulation needed to build a metamodel, thereby alleviating the exponentially increasing computational difficulty; 2) it reveals the functional form of the black-box function; and 3) it discloses intrinsic characteristics (for instance, linearity or nonlinearity) of the black-box function. RBF-HDMR has been extensively tested on mathematical and practical problems chosen from the literature, and the methodology has also been successfully applied to the power transfer capability analysis of the Manitoba-Ontario Electrical Interconnections with 50 variables. The test results demonstrate that RBF-HDMR is a powerful tool for modeling large-scale simulation-based engineering problems. The RBF-HDMR model and its construction approach therefore represent a breakthrough in modeling HEB problems and make it possible to optimize high-dimensional simulation-based design problems.
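As a rough illustration of the decomposition idea behind RBF-HDMR, the sketch below builds a first-order cut-HDMR surrogate, f(x) ≈ f0 + Σ_i f_i(x_i), with Gaussian RBF component functions fitted from samples taken along each coordinate through a cut point. The test function, cut point, and sample counts are assumptions for demonstration, not values from the dissertation; higher-order component functions and adaptive sampling, which the actual methodology relies on, are omitted.

```python
import numpy as np

def expensive_black_box(x):
    # Stand-in for an expensive simulation (hypothetical test function).
    return np.sum(x**2) + 0.5 * x[0] * np.sin(3 * x[1])

def fit_rbf_1d(xs, ys, eps=2.0):
    """Fit a 1-D Gaussian RBF interpolant through (xs, ys); returns a callable."""
    K = np.exp(-(eps * (xs[:, None] - xs[None, :]))**2)
    w = np.linalg.solve(K + 1e-10 * np.eye(len(xs)), ys)
    return lambda x: np.exp(-(eps * (x - xs))**2) @ w

def first_order_rbf_hdmr(f, cut, lo, hi, n_per_dim=7):
    """Cut-HDMR truncated at first order: f(x) ~ f0 + sum_i f_i(x_i)."""
    d = len(cut)
    f0 = f(cut)
    comps = []
    for i in range(d):
        xs = np.linspace(lo[i], hi[i], n_per_dim)
        ys = []
        for xi in xs:
            x = cut.copy()
            x[i] = xi
            ys.append(f(x) - f0)          # sample of the i-th component function
        comps.append(fit_rbf_1d(xs, np.array(ys)))
    return lambda x: f0 + sum(comps[i](x[i]) for i in range(d))

cut, lo, hi = np.zeros(2), -np.ones(2), np.ones(2)
surrogate = first_order_rbf_hdmr(expensive_black_box, cut, lo, hi)
x_test = np.array([0.3, -0.6])
print("true:", expensive_black_box(x_test), "surrogate:", surrogate(x_test))
```

Note that the surrogate is built from only 1 + d * n_per_dim calls to the expensive function, which is the source of the sample savings the abstract refers to.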
95

Analysis of material flow and simulation-based optimization of transportation system : The combination of simulation and Lean to evaluate and design a transportation system

Vuoluterä, Fredrik, Carlén, Oliver January 2018 (has links)
The thesis has been performed in cooperation with a Swedish manufacturing company. The manufacturing site of the company is currently implementing a new machine layout in one of its workshops. The new layout will increase the product flow to another workshop on the site. The goal of the thesis was to evaluate the current transportation system and suggest viable alternatives for the future product flow. By means of discrete event simulation, these alternative solutions would be modelled and subsequently optimized to determine whether their performance is satisfactory. An approximate investment cost for the solutions would also be estimated. By performing a literature review and creating a frame of reference, a set of relevant methodologies was selected to provide a foundation for the project. Following these methodologies, the current state of transportation was identified and mapped using Value Stream Mapping. Necessary data from the current flow was identified and collected from the company computer systems. This data was deemed partly inaccurate, and further verification was needed. To this end, a combination of Genchi Genbutsu, assistance from onsite engineers, and a time study was used to verify the unreliable data points. The data sets from the time study, together with the company data deemed valid, were represented by statistical distributions to provide input for the simulation models. Two possible solutions were selected for evaluation: an automated guided vehicle system and a tow train system. With the help of onsite personnel, a Kaizen Event was performed in which possible new routings for the future flow were evaluated. A set of simulation models portraying the automated guided vehicle system and the tow train system was developed with the aid of simulation software, and the results from these models showed a low utilization of both systems. A new set of models was therefore developed, which included all the product flows between the workshops; the new flows were modelled as generic pallets with arrival distributions based on historical production data. This set of models was then optimized with regard to the work in process and the lead time of the system. The results from the optimization indicate that the overall work in process can be reduced by reducing certain buffer sizes while still maintaining the required throughput. These solutions were not deemed ready for implementation due to the low utilization of the transportation systems; the authors instead recommend expanding the scope of the system and including other product flows to reach a higher utilization.
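The thesis used commercial simulation software; purely as an illustration of the discrete event simulation approach described above, the sketch below models a simplified tow-train loop between two workshops with the open-source SimPy library. The inter-arrival time, route time, and train capacity are assumed placeholder values, and the lead-time and fill-rate statistics are the kind of outputs the thesis models were evaluated and optimized against.

```python
import random
import simpy  # pip install simpy

RANDOM_SEED = 42
PALLET_INTERARRIVAL = 12.0   # minutes between pallets (assumed)
TOW_TRAIN_ROUND_TRIP = 25.0  # minutes per tour (assumed)
TRAIN_CAPACITY = 6           # pallets per tour (assumed)

def pallet_source(env, buffer):
    """Generate pallets arriving from the upstream workshop."""
    while True:
        yield env.timeout(random.expovariate(1.0 / PALLET_INTERARRIVAL))
        buffer.append(env.now)           # remember arrival time of each pallet

def tow_train(env, buffer, stats):
    """Periodically pick up waiting pallets and deliver them."""
    while True:
        yield env.timeout(TOW_TRAIN_ROUND_TRIP)
        picked = buffer[:TRAIN_CAPACITY]
        del buffer[:TRAIN_CAPACITY]
        for arrival in picked:
            stats["lead_times"].append(env.now - arrival)
        stats["tours"] += 1
        stats["fill_rates"].append(len(picked) / TRAIN_CAPACITY)

random.seed(RANDOM_SEED)
env = simpy.Environment()
buffer, stats = [], {"lead_times": [], "tours": 0, "fill_rates": []}
env.process(pallet_source(env, buffer))
env.process(tow_train(env, buffer, stats))
env.run(until=8 * 60)  # simulate one 8-hour shift

print("tours:", stats["tours"])
print("mean lead time:", sum(stats["lead_times"]) / max(len(stats["lead_times"]), 1))
print("mean fill rate:", sum(stats["fill_rates"]) / max(len(stats["fill_rates"]), 1))
```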
96

Dynamic Resampling for Preference-based Evolutionary Multi-objective Optimization of Stochastic Systems : Improving the efficiency of time-constrained optimization

Siegmund, Florian January 2016 (has links)
In preference-based Evolutionary Multi-objective Optimization (EMO), the decision maker is looking for a diverse, but locally focused, non-dominated front in a preferred area of the objective space, as close as possible to the true Pareto-front. Since solutions found outside the area of interest are considered less important or even irrelevant, the optimization can focus its efforts on the preferred area and find the solutions that the decision maker is looking for more quickly, i.e., with fewer simulation runs. This is particularly important if the available time for optimization is limited, as is the case in many real-world applications. Although previous studies in using this kind of guided search with preference information, for example with the R-NSGA-II algorithm, have shown positive results, only very few of them considered the stochastic outputs of simulated systems. In the literature, this phenomenon of stochastic evaluation functions is sometimes called noisy optimization. If an EMO algorithm is run without any countermeasure to noisy evaluation functions, its performance will deteriorate compared with the case in which the true mean objective values are known. While, in general, static resampling of solutions to reduce the uncertainty of all evaluated design solutions can allow EMO algorithms to avoid this problem, it will significantly increase the required simulation time or budget, as many samples will be wasted on inferior candidate solutions. In comparison, a Dynamic Resampling (DR) strategy allows the exploration and exploitation trade-off to be optimized, since the required accuracy of the objective values varies between solutions. In a dense, converged population, it is important to know the accurate objective values, whereas noisy objective values are less harmful when an algorithm is exploring the objective space, especially early in the optimization process. Therefore, a well-designed Dynamic Resampling strategy, which resamples solutions carefully according to their resampling need, can help an EMO algorithm achieve better results than a static resampling allocation. While there are abundant studies in simulation-based optimization that consider Dynamic Resampling, the survey done in this study found no related work that considers how combinations of Dynamic Resampling and preference-based guided search can further enhance the performance of EMO algorithms, especially when the problems under study involve computationally expensive evaluations, such as production systems simulation. The aim of this thesis is therefore to study, design, and compare new combinations of preference-based EMO algorithms with various DR strategies, in order to improve the solution quality found by simulation-based multi-objective optimization with stochastic outputs under a limited function evaluation or simulation budget. Specifically, based on the advantages and flexibility offered by interactive, reference point-based approaches, the thesis studies the performance enhancements of R-NSGA-II when augmented with various DR strategies of increasing statistical sophistication, as well as with several adaptive features in terms of optimization parameters. The research results clearly show that optimization results can be improved if a hybrid DR strategy is used and adaptive algorithm parameters are chosen according to the noise level and problem complexity.
In the case of a limited simulation budget, the results support the conclusion that decision-maker preferences and DR should be used together to achieve the best results in simulation-based multi-objective optimization.
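As a minimal sketch of the dynamic resampling idea discussed above, assuming a simple hybrid allocation rule rather than the thesis's exact DR strategies, the code below gives each solution a sampling budget that grows with its closeness to the decision maker's reference point and with optimization progress, and averages the noisy objective over that budget.

```python
import random
import statistics

def noisy_objective(x):
    """Stand-in for a stochastic simulation output (hypothetical)."""
    return (x - 0.3) ** 2 + random.gauss(0.0, 0.05)

def dynamic_samples(distance_to_ref, progress, b_min=1, b_max=20):
    """Allocate a per-solution sampling budget.

    More samples go to solutions close to the reference point and to
    evaluations late in the run (a simple hybrid DR rule for illustration).
    """
    closeness = max(0.0, 1.0 - distance_to_ref)   # assumed normalized distance
    share = 0.5 * closeness + 0.5 * progress       # blend of both criteria
    return b_min + int(round(share * (b_max - b_min)))

def evaluate(x, distance_to_ref, progress):
    n = dynamic_samples(distance_to_ref, progress)
    samples = [noisy_objective(x) for _ in range(n)]
    return statistics.mean(samples), n

random.seed(1)
for progress in (0.1, 0.9):            # early vs. late in the optimization run
    for dist in (0.05, 0.8):           # near vs. far from the reference point
        mean, n = evaluate(0.4, dist, progress)
        print(f"progress={progress} dist={dist}: {n:2d} samples, mean={mean:.3f}")
```

The point of the allocation is that cheap, noisy estimates suffice while the algorithm is still exploring, whereas converged, preferred solutions earn the larger share of the limited simulation budget.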
97

A life cycle assessment and process system engineering integrated approach for sustainability : application to environmental evaluation of biofuel production / Approche intégrée en analyse de cycle de vie et génie des procèdes pour la durabilité : application à l'évaluation environnementale du système de production de biocarburants

Gillani, Sayed Tamiz ud din 26 September 2013 (has links)
In recent years, Life Cycle Assessment (LCA) has become an environmental decision-support tool for evaluating the impact of products and their associated processes. LCA practice is documented as a tool for impact assessment, comparison, and product-oriented decision making. The use of this method for processes in the bio-physico-chemical industries has recently gained popularity, and there remain many avenues for improving and extending its implementation for the evaluation of industrial processes. The study addresses biofuel production from the plant Jatropha curcas L. following an attributional approach. It presents the environmental evaluation of an agro-process and discusses the opportunity of coupling the concepts, methods, and tools of LCA and Computer-Aided Process Engineering (CAPE). The first part presents LCA applied to agro-chemistry. A review of the literature draws lessons from the various studies that highlight the role and importance of LCA for products and for different agro-processes. Substituting conventional fuels with biofuels is considered a potential alternative to fossil energy, and their production processes must be evaluated with respect to environmental impact and the sustainable development paradigm, in addition to the usual economic and political criteria. The second part presents our LCA study of biofuel production from the Jatropha plant. The assessment covers cultivation and harvesting in Africa, oil extraction, and the biofuel production phase, up to its use in an internal combustion engine; the ISO 14040 and 14044 standards are followed throughout. Based on a midpoint perspective with the Impact 2002+ and CML impact-assessment methods, we provide the first results of the interpretation phase (greenhouse gases, resource depletion, ozone layer, eutrophication, and acidification). The study demonstrates the potential of second-generation biofuel production to reduce environmental impact, and at the same time reveals that the transesterification unit has the greatest impact. We identify the limits of our application under a "pure" LCA approach. In the third part, we discuss the expected benefits of coupling LCA with the modeling and simulation methods of process engineering, and suggest an improved environmental approach to production systems. We provide a framework integrating the system, process, and operation viewpoints in order to evaluate the environmental performance of the product. A software tool, SimLCA, is developed on top of the Excel environment and validated against the SimaPro LCA solution and the Prosim Plus process simulator. SimLCA enables an LCA-simulation coupling for the environmental evaluation of the complete biofuel production system. This multi-level integration allows a dynamic interaction between data, parameters, and simulation results. Different configurations and scenarios are discussed in order to study the influence of the functional unit and of a process parameter. The fourth part gives the general conclusion and outlines perspectives.
/ With the rise of global warming issues due to the increase in greenhouse gas emissions, and more generally with the growing importance granted to sustainable development, process system engineering (PSE) has turned to thinking more and more environmentally. Indeed, the chemical engineer must now take into account not only the economic criteria of a process, but also its environmental and social performance. LCA, in turn, is a method used to evaluate the potential impacts on the environment of a product, process, or activity throughout its life cycle. The research here focuses on coupling the PSE domain with the environmental analysis of agricultural and chemical activities and abatement strategies for agro-processes, with the help of computer-aided tools and models. Among many approaches, the coupling of PSE and LCA is investigated here because it is viewed as a good instrument to evaluate the environmental performance of individual unit processes and of the whole process; the coupling can be of different natures depending on the focus of the study. The main objective is to define an innovative LCA-based approach for a deep integration of the product, process, and system perspectives. We selected a PSE-embedded LCA and proposed a framework that would lead to improved eco-analysis, eco-design, and eco-decision of business processes and resulting products for researchers and engineers. First, we evaluate biodiesel production for environmental analysis with the help of field data, background data, and impact methodologies. Through this environmental evaluation, we identify the hotspot in the whole production system. To complement the experimental data, this hotspot (i.e., transesterification) is selected for further modeling and simulation. For results validation, we also implement the LCA in a dedicated tool (SimaPro) and the simulation in a PSE simulation tool (Prosim Plus). Finally, we develop a tool (SimLCA) dedicated to LCA using PSE tools and methodologies. The SimLCA framework is a step forward towards the determination of sustainability and eco-efficient design.
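As a sketch of how process-simulation results can feed an LCA calculation, the example below performs a matrix-based LCA (scaling vector s = A^(-1) f, inventory g = B s) on a toy two-process biodiesel system, where one technosphere coefficient is imagined to come from a process simulator. All coefficients and flows are hypothetical and are not taken from the SimLCA study.

```python
import numpy as np

# Toy two-process system: a transesterification process producing biodiesel
# and an upstream methanol supply process. Columns = processes,
# rows = product flows (biodiesel, methanol). All numbers are invented.
methanol_per_kg_biodiesel = 0.11   # kg; imagine this coefficient comes from a process simulator

A = np.array([
    [1.0,                        0.0],   # biodiesel produced by each process
    [-methanol_per_kg_biodiesel, 1.0],   # methanol consumed / produced
])

# Elementary flows per unit of process output (rows: CO2 in kg, water in kg).
B = np.array([
    [0.45, 0.70],
    [2.00, 5.00],
])

f = np.array([1000.0, 0.0])        # functional unit: 1000 kg of biodiesel
s = np.linalg.solve(A, f)          # scaling factor for each process
g = B @ s                          # life cycle inventory for the functional unit

print("process scaling:", s)
print("inventory [CO2 kg, water kg]:", g)
```

In a coupled setup of the kind the thesis describes, re-running the process simulation with a different operating parameter would update coefficients such as methanol_per_kg_biodiesel, and the inventory g would be recomputed automatically.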
98

Modelem řízený návrh konferenčního systému / Model Based Design of the Conference System

Caha, Matěj January 2013 (has links)
This thesis deals with the topic of model-based design and the application of simulation in system design. In the introduction, the thesis discusses the history of the software development process and outlines its current status. The aim is to demonstrate model-driven design on a case study of a conference system. The DEVS and OOPN formalisms are presented, together with the experimental tools PNtalk and SmallDEVS that allow working with these formalisms. The resulting model of the conference system is deployed as part of a web application using the Seaside framework in the Squeak environment.
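To illustrate the structure of the DEVS formalism mentioned above, here is a small, self-contained Python sketch of an atomic DEVS model with its four characteristic functions (time advance, external transition, output, internal transition) and a hand-rolled coordinator. The registration-desk model and its timing are invented for the example; the thesis itself works with PNtalk and SmallDEVS rather than plain Python.

```python
INFINITY = float("inf")

class RegistrationDesk:
    """Toy atomic DEVS model of a conference registration desk."""
    def __init__(self, service_time=2.0):
        self.service_time = service_time
        self.queue = []                       # state: waiting attendees
        self.phase = "idle"

    def time_advance(self):                   # ta(s)
        return self.service_time if self.phase == "busy" else INFINITY

    def external(self, elapsed, attendee):    # delta_ext(s, e, x)
        self.queue.append(attendee)
        self.phase = "busy"

    def output(self):                         # lambda(s), called before delta_int
        return f"badge for {self.queue[0]}"

    def internal(self):                       # delta_int(s)
        self.queue.pop(0)
        self.phase = "busy" if self.queue else "idle"

# Simplified root coordinator stepping the single atomic model.
desk, t = RegistrationDesk(), 0.0
for arrival_time, attendee in [(0.0, "alice"), (1.0, "bob")]:
    desk.external(arrival_time - t, attendee)
    t = arrival_time
while desk.time_advance() != INFINITY:
    t += desk.time_advance()
    print(t, desk.output())
    desk.internal()
```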
99

Schedulability in Mixed-criticality Systems / Ordonnancement des systèmes avec différents niveaux de criticité

Kahil, Rany 26 June 2019 (has links)
Real-time safety-critical systems must complete their tasks within a given time limit. Failure to successfully perform their operations, or missing a deadline, can have severe consequences such as destruction of property and/or loss of life. Examples of such systems include automotive systems, drones and avionics, among others. Safety guarantees must be provided before these systems can be deemed usable; this is usually done through certification performed by a certification authority. Safety evaluation and certification are complicated and costly even for smaller systems. One answer to these difficulties is the isolation of the critical functionality. Executing tasks of different criticalities on separate platforms prevents non-critical tasks from interfering with critical ones, provides a higher guarantee of safety, and simplifies the certification process by limiting it to only the critical functions. But this separation, in turn, introduces undesirable results portrayed by inefficient resource utilization and an increase in cost, weight, size, and energy consumption, which can put a system at a competitive disadvantage. To overcome the drawbacks of isolation, Mixed Criticality (MC) systems can be used. These systems allow functionalities with different criticalities to execute on the same platform. In 2007, Vestal proposed a model to represent MC-systems in which tasks have multiple Worst Case Execution Times (WCETs), one for each criticality level. In addition, correctness conditions for scheduling policies were formally defined, allowing lower criticality jobs to miss deadlines or even be dropped in cases of failure or emergency situations. The introduction of multiple WCETs and different conditions for correctness increased the difficulty of the scheduling problem for MC-systems. Conventional scheduling policies and schedulability tests proved inadequate, and the need for new algorithms arose. Since then, a lot of work has been done in this field. In this thesis, we contribute to the study of schedulability in MC-systems. The workload of a system is represented as a set of jobs that can describe the execution over the hyper-period of tasks or over a duration in time. This model allows us to study the viability of simulation-based correctness tests in MC-systems. We show that simulation tests can still be used in mixed-criticality systems, but in this case the schedulability of the worst-case scenario is no longer sufficient to guarantee the schedulability of the system, even for the fixed-priority scheduling case. We show that scheduling policies are not predictable in general, and define the concept of weak-predictability for MC-systems.
We prove that a specific class of fixed-priority policies is weakly predictable and propose two simulation-based correctness tests that work for weakly-predictable policies. We also demonstrate that, contrary to what was believed, testing for correctness cannot be done only through a linear number of preemptions. The majority of the related work focuses on systems of two criticality levels due to the difficulty of the problem. But for automotive and airborne systems, industrial standards define four or five criticality levels, which motivated us to propose a scheduling algorithm that schedules mixed-criticality systems with, theoretically, any number of criticality levels. We show experimentally that it has higher success rates compared to the state of the art. We illustrate how our scheduling algorithm, or any algorithm that generates a single time-triggered table for each criticality mode, can be used as a recovery strategy to ensure the safety of the system in case of certain failures. Finally, we propose a high-level concurrency language and a model for designing an MC-system with coarse-grained multi-core interference.
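As an illustration of the Vestal-style job model and of scenario-based (simulation) correctness checking discussed above, the sketch below simulates one execution scenario of two jobs with per-criticality-level budgets under fixed-priority preemptive scheduling, with a simplified LO-to-HI mode switch on budget overrun. The job set, the single mode switch, and the dropping of LO jobs after the switch are simplifying assumptions for the example, not the thesis's algorithm or tests.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    release: int
    deadline: int
    priority: int        # lower number = higher priority
    criticality: int     # 0 = LO, 1 = HI
    wcet: tuple          # (C_LO, C_HI), Vestal-style per-level budgets

def simulate(jobs, exec_times, horizon):
    """Simulate one execution scenario under fixed-priority preemptive
    scheduling with a single LO -> HI mode switch on budget overrun."""
    remaining = {j.name: exec_times[j.name] for j in jobs}
    spent = {j.name: 0 for j in jobs}
    mode, finished, missed = 0, {}, []
    for t in range(horizon):
        ready = [j for j in jobs
                 if j.release <= t and remaining[j.name] > 0
                 and j.criticality >= mode]          # LO jobs dropped in HI mode
        if not ready:
            continue
        job = min(ready, key=lambda j: j.priority)
        remaining[job.name] -= 1
        spent[job.name] += 1
        if spent[job.name] > job.wcet[mode]:         # budget overrun triggers mode switch
            mode = 1
        if remaining[job.name] == 0:
            finished[job.name] = t + 1
    for j in jobs:
        if j.criticality >= mode and finished.get(j.name, horizon + 1) > j.deadline:
            missed.append(j.name)
    return mode, missed

jobs = [
    Job("hi_task", 0, 10, priority=1, criticality=1, wcet=(3, 6)),
    Job("lo_task", 0, 8,  priority=2, criticality=0, wcet=(4, 4)),
]
# A scenario in which the HI job needs its HI-level budget.
print(simulate(jobs, exec_times={"hi_task": 6, "lo_task": 4}, horizon=20))
```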
100

Metody akcelerace verifikace logických obvodů / New Methods for Increasing Efficiency and Speed of Functional Verification

Zachariášová, Marcela January 2015 (has links)
In the development of contemporary digital systems, e.g. embedded systems and computer hardware, it is necessary to find ways to increase their reliability. One option is to increase the effectiveness and speed of the verification processes performed in the early phases of the design. This dissertation focuses on the verification approach called functional verification. Several challenges and problems concerning the effectiveness and speed of functional verification are identified and then addressed in the goals of the dissertation. The first goal targets the reduction of simulation time during the verification of complex systems. The reason is that the simulation of an inherently parallel hardware system takes very long compared to its run in real hardware. An optimization technique is therefore proposed that places the verified system into an FPGA accelerator while part of the verification environment still runs in simulation; this relocation makes it possible to significantly reduce the simulation overhead. The second goal deals with manually prepared verification environments, which represent a significant limitation of verification productivity. This overhead is not necessary, because most verification environments have a very similar structure: they use components of standard verification methodologies, and these components are only adapted to the verified system. The second optimization technique therefore analyzes a description of the system at a higher level of abstraction and automates the construction of verification environments by generating them from this high-level description. The third goal examines how verification completeness can be achieved through intelligent automation. Completeness is typically measured by various coverage metrics, and verification ends when a high coverage level is reached. The third optimization technique therefore steers the generation of inputs for the verified system so that these inputs activate as many coverage points as possible at the same time and so that the speed of convergence towards maximum coverage is as high as possible. The main optimization tool is a genetic algorithm that is tailored to functional verification and whose parameters are tuned for this domain. It runs in the background of the verification process, analyzes the achieved coverage, and dynamically adjusts the constraints of the input generator based on it. These constraints are represented by probabilities that determine the selection of suitable values from the input domain. The fourth goal discusses whether inputs from functional verification can be reused for regression testing and optimized so that the testing is as fast as possible. In functional verification it is common for the inputs to be highly redundant, since they are produced by a generator. For regression tests this redundancy is not needed and can therefore be eliminated, while ensuring that the coverage achieved by the optimized set is the same as that of the original one. The fourth optimization technique reflects this and again uses a genetic algorithm, but this time it is not integrated into the verification process; it is applied after verification has finished. It removes redundancy from the original set of inputs very quickly, and the resulting simulation time is thus considerably reduced.
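As a toy sketch of the coverage-driven input generation described above, the code below uses a small genetic algorithm to evolve the probability weights of a constrained-random stimulus generator so that the generated inputs hit as many coverage bins as possible. The coverage model, weight encoding, and GA parameters are invented for illustration and do not reflect the dissertation's actual tool flow or device under test.

```python
import random

# Toy coverage model: 8 coverage bins, each activated by certain input values
# in 0..15 (purely illustrative).
def covered_bins(inputs):
    bins = set()
    for x in inputs:
        bins.add(x % 4)                # "opcode" bins
        if x > 11:
            bins.add(4 + x % 4)        # "corner case" bins
    return bins

def generate(weights, n=40):
    """Constrained-random generator: weights bias the choice of input values."""
    return random.choices(list(range(16)), weights=weights, k=n)

def fitness(weights):
    return len(covered_bins(generate(weights)))

def evolve(pop_size=12, generations=25):
    pop = [[random.random() for _ in range(16)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(16)
            child = a[:cut] + b[cut:]                               # one-point crossover
            i = random.randrange(16)
            child[i] = max(1e-3, child[i] + random.gauss(0, 0.2))   # mutation
            children.append(child)
        pop = parents + children
    pop.sort(key=fitness, reverse=True)
    return pop[0]

random.seed(0)
best = evolve()
print("bins covered by best weights:", len(covered_bins(generate(best))))
```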
