41

Summarizing the Results of a Series of Experiments : Application to the Effectiveness of Three Software Evaluation Techniques

Olorisade, Babatunde Kazeem January 2009 (has links)
Software quality has become, and persistently remains, a major concern among software users and developers, so the importance of software evaluation can hardly be overemphasized. It is an accepted fact in software engineering that software must undergo an evaluation process during development to ascertain and improve its quality. There are more evaluation techniques than any single developer could master, yet it is impossible to be certain that software is free of defects, and it may not be realistic or cost-effective to remove all defects prior to release. It is therefore crucial for developers to be able to choose, from the available evaluation techniques, the one most suitable and most likely to yield optimum quality for a given product: it comes down to choosing the most appropriate technique for each situation. However, little knowledge is available on the strengths and weaknesses of the available techniques; most of the related information focuses on how to apply them, not on their applicability conditions: practical information, suitability, strengths, weaknesses, and so on. This research contributes to the available applicability knowledge of software evaluation techniques. More precisely, it focuses on code reading by stepwise abstraction as a representative of the static techniques, and on equivalence partitioning (a functional technique) and decision coverage (a structural technique) as representatives of the dynamic techniques. The specific aim is to summarize the results of a series of experiments conducted to investigate the effectiveness of these techniques, among other factors. By effectiveness we mean the potential of each technique to generate test cases capable of revealing software faults, in the case of the dynamic techniques, or the ability of the static technique to generate abstractions that aid fault detection. The experiments used two versions of three different programs, with seven faults seeded into each program. This work draws on the results of eight experiments that were originally performed and analyzed separately. Their analysis results were pooled and jointly summarized to extract common knowledge from the experiments, using a qualitative deduction approach created in this work, since it was decided not to use formal aggregation at this stage. Because the experiments were performed by different researchers, in different years, and in some cases at different sites, several problems had to be tackled before the results could be summarized: the data files existed in different languages, the file structures differed, different names were used for the same data fields, and the analyses were done at different confidence levels. The first step, taken at the inception of this research, was to apply all the techniques to the programs used in the experiments in order to detect the faults. The purpose of this hands-on experience was to become acquainted with the faults, failures, programs, and experimental situations in general, and to better understand the data recorded from the experiments. Afterwards, the data files were recreated to conform to a uniform language, data meaning, file style, and structure.
A well-structured directory was created to keep all the data, analysis, and experiment files for every experiment in the series. These steps paved the way for a feasible synthesis of the results. Using our method, technique, program, and fault were selected as main effects, and program-technique, program-fault, and technique-fault as interaction effects, each carrying knowledge relevant to the summary. The result, as reported in this thesis, indicates that the functional and structural techniques are equally effective as far as the programs and faults in these experiments are concerned, and both perform better than code reading. The analysis also revealed that the effectiveness of the techniques is influenced by the fault type and the program type: some faults were more visible in certain programs, some were better detected by certain techniques, and the techniques themselves yielded different results on different programs. / I can alternatively be contacted through: qasimbabatunde@yahoo.co.uk
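
As a hedged illustration of the two dynamic techniques compared in this record, the Python sketch below derives test cases for a toy function; the program and the partitions are invented for illustration, not taken from the experiments.

```python
# Illustrative sketch (not from the thesis): deriving test cases for a toy
# function with the two dynamic techniques compared in the study.

def classify_triangle(a: int, b: int, c: int) -> str:
    """Toy program under test."""
    if a <= 0 or b <= 0 or c <= 0:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Equivalence partitioning (functional): one representative input per
# equivalence class derived from the specification.
functional_cases = {
    "invalid class":     (0, 1, 1),
    "equilateral class": (2, 2, 2),
    "isosceles class":   (2, 2, 3),
    "scalene class":     (2, 3, 4),
}

# Decision coverage (structural): inputs chosen so that every decision in
# the code evaluates to true and to false at least once.
structural_cases = [
    (0, 1, 1),   # first decision true
    (2, 2, 2),   # first false, second true
    (2, 2, 3),   # third decision true
    (2, 3, 4),   # all decisions false -> scalene
]

for name, args in functional_cases.items():
    print(name, "->", classify_triangle(*args))
```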
42

Semantic interoperability between image processing tools in neuroimaging

Wali, Bacem 21 June 2013 (has links)
The field of neuroimaging research requires the ability to share, reuse, and compare image processing tools coming from different laboratories. However, sharing processing tools as services and composing them into workflows is usually a difficult and complex task. This is due in most cases to the heterogeneity of services and platforms with regard to their design and implementation. We work within the NeuroLOG project, which aims at developing a middleware to federate data repositories and to facilitate the sharing and reuse of processing tools to analyze the shared images. It adopts an ontological approach for data and tool mediation and for sharing resources. This work aims to extend that mediation to the sharing and composition of image processing tools, and to provide both expert and non-specialist users in the neuroimaging field with an ergonomic, easy-to-use service composition platform. We use Semantic Web techniques to address the various problems of interoperability and consistency of the resources used and produced. The first proposed solution is based on an extension of the OWL-S framework, adapted to the various web services of our neuroimaging platform. We concluded that tools that are not exposed as web services with a WSDL-conformant description cannot be chained into workflows.
We therefore proposed a second approach to composing image processing services. It is based on a new ontological model of service composition that meets the requirements of the neuroimaging domain, articulates well with the domain ontology OntoNeuroLOG, and addresses the problems encountered with the first approach. This work solved two major problems at once: the heterogeneity of service descriptors and the interoperability of services under the constraints of neuroimaging within the NeuroLOG platform.
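
To give a flavor of ontology-based service composition as described here, below is a minimal Python/rdflib sketch; the vocabulary (EX.consumes, EX.produces) and the service names are invented stand-ins, not OntoNeuroLOG terms.

```python
# Minimal illustrative sketch (assumptions: ontology terms and service
# names below are invented; the actual OntoNeuroLOG vocabulary differs).
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/neuro#")
g = Graph()

# Describe two hypothetical processing tools by their input/output data types.
g.add((EX.SkullStripping, RDF.type, EX.Service))
g.add((EX.SkullStripping, EX.consumes, EX.T1Image))
g.add((EX.SkullStripping, EX.produces, EX.BrainMask))

g.add((EX.TissueSegmentation, RDF.type, EX.Service))
g.add((EX.TissueSegmentation, EX.consumes, EX.BrainMask))
g.add((EX.TissueSegmentation, EX.produces, EX.TissueMap))

def composable(g: Graph, first, second) -> bool:
    """Two services can be chained if some output type of the first
    matches some input type of the second."""
    outputs = set(g.objects(first, EX.produces))
    inputs = set(g.objects(second, EX.consumes))
    return bool(outputs & inputs)

print(composable(g, EX.SkullStripping, EX.TissueSegmentation))  # True
```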
43

Simulation Based Virtual Testing for Perceived Safety and Comfort of Advanced Driver Assistance Systems and Automated Driving Systems

Singh, Harnarayan January 2020 (has links)
No description available.
44

Simulation product fidelity : a qualitative & quantitative system engineering approach

Ponnusamy, Sangeeth saagar 26 September 2016 (has links)
When using modeling and simulation for system verification and validation activities, the difficulty often lies in finding and implementing abstractions that model the simulated system consistently with the simulation requirements. A proposition is presented for the unified design and implementation of modeling abstractions consistent with the simulation objectives, based on concepts from computer science, control theory, and systems engineering. It addresses two fundamental problems of fidelity in simulation: for a given system specification and some properties of interest, how to extract modeling abstractions that define a simulation product architecture, and how far the behaviour of the simulation model represents the system specification. A general notion of this simulation fidelity, both architectural and behavioural, is explained in terms of the established notion of the experimental frame and discussed in the context of modeling abstractions and inclusion relations. A semi-formal, ontology-based domain-model approach to building and defining the simulation product architecture is proposed and demonstrated on a real industrial-scale study. A formal approach based on game-theoretic quantitative system refinement notions is proposed for different classes of system and simulation models, with a prototype tool and case studies. Challenges in researching and implementing this formal and semi-formal fidelity framework, especially in an industrial context, are discussed.
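
As a rough illustration of behavioural fidelity, the sketch below computes a classic simulation relation between a specification and a simulation model, both given as finite labelled transition systems; the states and labels are invented, and the thesis' game-theoretic refinement notions are considerably richer.

```python
# Illustrative sketch (not the thesis' tool): does the specification
# simulate the model, i.e. does the model's behaviour stay within it?
from itertools import product

def largest_simulation(spec, model):
    """Greatest relation R with (s, m) in R iff every step m -a-> m' can
    be matched by some s -a-> s' with (s', m') in R.
    spec, model: dict state -> set of (label, next_state)."""
    states_s = set(spec) | {t for v in spec.values() for _, t in v}
    states_m = set(model) | {t for v in model.values() for _, t in v}
    R = set(product(states_s, states_m))
    changed = True
    while changed:
        changed = False
        for (s, m) in list(R):
            for (a, m2) in model.get(m, set()):
                if not any(a == b and (s2, m2) in R
                           for (b, s2) in spec.get(s, set())):
                    R.discard((s, m))
                    changed = True
                    break
    return R

# Tiny example: the simulation model refines the specification.
spec  = {"S0": {("start", "S1")}, "S1": {("stop", "S0"), ("start", "S1")}}
model = {"M0": {("start", "M1")}, "M1": {("stop", "M0")}}
print(("S0", "M0") in largest_simulation(spec, model))  # True
```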
45

Decision Support Elements and Enabling Techniques to Achieve a Cyber Defence Situational Awareness Capability

Llopis Sánchez, Salvador 15 June 2023 (has links)
This doctoral thesis performs a detailed analysis of the decision elements necessary to improve cyber defence situational awareness, with special emphasis on the perception and understanding of the analyst in a cybersecurity operations center (SOC). Two different architectures based on network flow forensics of data streams (NF3) are proposed. The first architecture uses ensemble machine learning techniques, while the second is a machine learning variant of greater algorithmic complexity (lambda-NF3) that offers a more robust defence framework against adversarial attacks. Both proposals seek to effectively automate the detection of malware and the subsequent incident management, showing satisfactory results in approximating what has been called a next-generation cognitive computing SOC (NGC2SOC). The supervision and monitoring of events for the protection of an organisation's computer networks must be accompanied by visualisation techniques. Here, the thesis addresses the generation of three-dimensional representations based on mission-oriented metrics and procedures that use an expert system based on fuzzy logic. The state of the art shows serious deficiencies when it comes to implementing cyber defence solutions that reflect the relevance of the mission, resources, and tasks of an organisation for a better-informed decision. The research finally provides two key areas to improve decision-making in cyber defence: a solid and complete verification and validation framework to evaluate solution parameters, and a synthetic dataset that univocally references the phases of a cyber-attack against the Cyber Kill Chain and MITRE ATT&CK standards. / Llopis Sánchez, S. (2023). Decision Support Elements and Enabling Techniques to Achieve a Cyber Defence Situational Awareness Capability [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/194242
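
A minimal sketch of the ensemble idea behind the first NF3 architecture, assuming invented flow features and synthetic labels (the actual pipeline and the lambda-NF3 variant are far more elaborate):

```python
# Illustrative sketch: heterogeneous ensemble classifying network flows.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Synthetic flow features: duration, bytes, packets, distinct ports.
X = rng.random((1000, 4))
y = (X[:, 1] + X[:, 3] > 1.1).astype(int)   # toy "malicious" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Ensemble of different learners combined by majority vote.
ensemble = VotingClassifier([
    ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
    ("dt", DecisionTreeClassifier(max_depth=5, random_state=0)),
    ("lr", LogisticRegression(max_iter=1000)),
])
ensemble.fit(X_tr, y_tr)
print(f"flow classification accuracy: {ensemble.score(X_te, y_te):.2f}")
```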
46

Composable, Sound Transformations for Nested Recursion and Loops

Kirshanthan Sundararajah (16647885) 26 July 2023 (has links)
Programs that use loops to operate over arrays and matrices are generally known as regular programs. These programs appear in critical applications such as image processing, differential equation solvers, and machine learning. Over the past few decades, extensive research has been done on composing, verifying, and applying scheduling transformations like loop interchange and loop tiling for regular programs. As a result, we have general frameworks such as the polyhedral model to handle transformations for loop-based programs. Similarly, programs that use recursion and loops to manipulate pointer-based data structures are known as irregular programs. Irregular programs also appear in essential applications such as scientific simulations, data mining, and graphics rendering. However, there is no analogous framework for recursive programs. In the last decade, although many scheduling transformations have been developed for irregular programs, they are ad-hoc in various aspects, such as being developed for a specific application and lacking portability. This dissertation examines principled ways to handle scheduling transformations for recursive programs through a unified framework resulting in performance enhancement.

Finding principled approaches to optimize irregular programs at compile-time is a long-standing problem. We specifically focus on scheduling transformations that reorder a program's operations to improve performance by enhancing locality and exploiting parallelism. In the first part of this dissertation, we present PolyRec, a unified general framework that can compose and apply scheduling transformations to nested recursive programs and reason about the correctness of composed transformations. PolyRec is a first-of-its-kind unified general transformation framework for irregular programs consisting of nested recursion and loops. It is built on solid theoretical foundations from the world of automata and transducers and provides a fundamentally novel way to think about recursive programs and scheduling transformations for them. The core idea is designing mechanisms to strike a balance between the expressivity in representing the set of dynamic instances of computations, transformations, and dependences and the decidability of checking the correctness of composed transformations. We use multi-tape automata and transducers to represent the set of dynamic instances of computations and transformations, respectively. These machines are similar yet more expressive than their classical single-tape counterparts. While in general decidable properties of classical machines are undecidable for multi-tape machines, we have proven that those properties are decidable for the class of machines we consider, and we present algorithms to verify these properties. Therefore these machines provide the building blocks to compose and verify scheduling transformations for nested recursion and loops. The crux of the PolyRec framework is its regular string-based representation of dynamic instances that allows to lexicographically order instances identically to their execution order. All the transformations considered in PolyRec require different ordering of these strings representable only with additive changes to the strings.

Loop transformations such as skewing require performing arithmetic on the representation of dynamic instances. In the second part of this dissertation, we explore this space of transformations by introducing skewing to nested recursion. Skewing plays an essential role in producing easily parallelizable loop nests from seemingly difficult ones due to dependences carried across loops. The inclusion of skewing for nested recursion to PolyRec requires significant extensions to representing dynamic instances and transformations that facilitate performing arithmetic using strings. First, we prove that the machines that represent the transformations are still composable. Then we prove that the representation of dependences and the algorithm that checks the correctness of composed transformations hold with minimal changes. Our new extended framework is known as UniRec, since it resembles the unimodular transformations for perfectly nested loop nests, which consider any combination of the primary transformations interchange, reversal, and skewing. UniRec opens possibilities of producing newly composed transformations for nested recursion and loops and verifying their correctness. We claim that UniRec completely subsumes the unimodular framework for loop transformations since nested recursion is more general than loop nests.
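
The toy sketch below illustrates PolyRec's core idea of string-labelled dynamic instances whose lexicographic order matches execution order; the labelling scheme is an invented, drastic simplification of the multi-tape machinery.

```python
# Illustrative sketch: tag each dynamic instance of a nested
# recursion-plus-loop computation with a string so that lexicographic
# order on the labels equals execution order.

instances = []

def traverse(node, path):
    """Binary-tree recursion: 'l'/'r' record the call path."""
    if node is None:
        return
    for i in range(2):                    # inner loop at each call
        instances.append(path + str(i))   # label of this dynamic instance
    traverse(node[0], path + "l")
    traverse(node[1], path + "r")

tree = ((None, None), (None, None))       # tiny example tree
traverse(tree, "")

# Execution order is append order; lexicographic order agrees because
# the loop digits '0','1' sort before the recursion letters 'l','r'.
print(instances == sorted(instances))     # True
```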
47

Test systems for naval defence materiel : A study in applying product development methods to the development of Verification and Validation resources for naval defence materiel

Nilsson, Albin, Molander, Josefin January 2022 (has links)
Having well-functioning equipment for the soldiers and sailors of the Swedish Armed Forces to defend the nation's territory has been a priority since the catastrophic sinking of the flagship Vasa almost 400 years ago. The quality assurance of defence materiel procured for the Swedish Armed Forces by the Swedish Defence Materiel Administration (SDMA) is conducted through qualified verification and validation activities, which are supported by complex resources that either create the conditions for tests or gather data from tests. The department Test and Evaluation Marine (T&E Marine) in the SDMA, which works with verification and validation (VoV) of naval defence materiel, needs to identify and develop new VoV resources for future naval acquisitions. The purpose of this study is to identify the department's needs regarding test resources and, from those needs, to develop a specification for a possible VoV resource. In cooperation with T&E Marine, the study applied the Design Thinking methodology in a double-diamond context and, through interviews, observations, and a workshop, produced an analysis of the current state of VoV resources: which exist today, which should be further developed, and which new resources the department could acquire to support future test and evaluation. The analysis of the current state shows that one of the department's needs is a modular, remotely piloted aerial system. The result also includes a conceptual design of this VoV resource, which the department could procure to support its future verification and validation work. The conclusion is that several different VoV resources are needed; one of them could be a remotely piloted aerial system with a high degree of mission adaptability that is both easy to use and easy to carry.
48

Physically Motivated Internal State Variable Form Of A Higher Order Damage Model For Engineering Materials With Uncertainty

Solanki, Kiran N 13 December 2008 (has links)
Many experiments demonstrate that isotropic ductile materials used in engineering applications develop anisotropic damage and show significant variation in elongation to failure. This anisotropic damage is manifested by material microstructural heterogeneities and morphological changes during deformation. The variation in elongation to failure can be attributed to uncertainties in the material microstructure and loading conditions. To study this deformation-induced anisotropy arising from initial material heterogeneities, we first performed an uncertainty analysis using the current form of an internal state variable plasticity and isotropic damage model (Bammann, 1984; Horstemeyer, 2001) to quantify the effect of variations in material microstructure and loading conditions on elongation to failure. We then extend the isotropic damage form of the theory into an anisotropic damage form for ductile materials, in which material heterogeneities are introduced through damage distribution functions converted into a damage tensor of second rank. The outcome of this research is a physically motivated, uncertainty-based, anisotropic damage constitutive model that links microstructural features to mechanical properties. This was accomplished by pursuing three subgoals: (1) develop and quantify uncertainty related to material heterogeneities, (2) develop a methodology for a higher-order tensorial rank of damage for void nucleation and void growth, and (3) integrate thermodynamically constrained damage with a rate-dependent plasticity constitutive material model. Finally, we also propose a new ISV theory that physically and strongly couples deformation due to damage-related internal defects to metal plasticity.
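
For orientation, the effective-stress relations below show the step from a scalar damage variable to a rank-two damage tensor; these are textbook continuum damage mechanics forms (here a common symmetrized variant), not necessarily the exact equations of the thesis.

```latex
% Isotropic damage uses a scalar D; the anisotropic extension replaces it
% with a rank-two tensor \mathbf{D} built from the damage distribution.
\[
\tilde{\sigma} = \frac{\sigma}{1 - D}
\qquad\longrightarrow\qquad
\tilde{\boldsymbol{\sigma}}
  = \tfrac{1}{2}\left[
      (\mathbf{I}-\mathbf{D})^{-1}\boldsymbol{\sigma}
      + \boldsymbol{\sigma}(\mathbf{I}-\mathbf{D})^{-1}
    \right]
\]
```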
49

Using Event-Based and Rule-Based Paradigms to Develop Context-Aware Reactive Applications.

Le, Truong Giang 30 September 2013 (has links) (PDF)
Context-aware pervasive computing has attracted significant research interest from both academia and industry worldwide. It covers a broad range of applications that support many manufacturing and daily-life activities. For instance, industrial robots detect changes in the factory's working environment and adapt their operations to the requirements. Automotive control systems may observe other vehicles, detect obstacles, and monitor the fuel level or the air quality in order to warn the driver in case of emergency. Another example is power-aware embedded systems that need to work based on current power/energy availability, since power consumption is an important issue. Such systems can also be considered smart applications. In practice, successful implementation and deployment of context-aware systems depend on mechanisms to recognize and react to changes happening in the environment. In other words, we need a well-defined and efficient adaptation approach so that a system's behavior can be dynamically customized at runtime. Moreover, concurrency should be exploited to improve the performance and responsiveness of the systems. All these requirements, along with the need for safety, dependability, and reliability, pose a big challenge for developers. In this thesis, we propose a novel programming language called INI, which supports both the event-based and rule-based programming paradigms and is suitable for building concurrent, context-aware reactive applications. In our language, both events and rules can be defined explicitly, stand-alone or in combination. Events in INI run in parallel (synchronously or asynchronously) in order to handle multiple tasks concurrently, and may trigger the actions defined in rules. Besides, events can interact with the execution environment to adjust their behavior if necessary and respond to unpredictable changes. We apply INI in both academic and industrial case studies, namely an object-tracking program running on the humanoid robot Nao and an M2M gateway. This demonstrates the soundness of our approach as well as INI's capabilities for constructing context-aware systems. Additionally, since context-aware programs are widely applicable and more complex than regular ones, they pose a higher demand for quality assurance. We therefore formalize several aspects of INI, including its type system and operational semantics. Furthermore, we develop a tool called INICheck, which can convert a significant subset of INI to Promela, the input modeling language of the model checker SPIN. Hence, SPIN can be applied to verify properties or constraints that INI programs must satisfy, allowing programmers to gain assurance about their code and its behavior.
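
Since the record shows no INI code, the Python sketch below only mimics the combined paradigm described here, asynchronous event sources feeding condition-action rules; the event names, payloads, and mini rule engine are all invented.

```python
# Illustrative sketch of event-based plus rule-based reactive behavior.
import queue
import threading
import time

events = queue.Queue()

# Rules: (condition on an event, action to run when it matches).
rules = [
    (lambda e: e["type"] == "obstacle" and e["distance"] < 1.0,
     lambda e: print(f"brake! obstacle at {e['distance']} m")),
    (lambda e: e["type"] == "battery" and e["level"] < 0.2,
     lambda e: print("switch to low-power mode")),
]

def sensor(kind, payload):
    """Event source running in its own thread (asynchronous event)."""
    events.put({"type": kind, **payload})

threading.Thread(target=sensor, args=("obstacle", {"distance": 0.5})).start()
threading.Thread(target=sensor, args=("battery", {"level": 0.1})).start()

time.sleep(0.2)                 # let the event threads publish
while not events.empty():       # rule engine: match events against rules
    e = events.get()
    for cond, action in rules:
        if cond(e):
            action(e)
```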
50

Conception d’un système avancé de réacteur PWR flexible par les apports conjoints de l’ingénierie système et de l’automatique / Conception of an advanced flexible PWR reactor system using systems engineering and control theories

Lemazurier, Lori 02 February 2018 (has links)
Devant l’augmentation de la part des énergies renouvelables en France, cette thèse propose d’étudier l’augmentation de la flexibilité des réacteurs à eau pressurisée en croisant deux disciplines pour, chacune, atteindre des objectifs complémentaires : l’Ingénierie Système (IS) et l’Automatique.Dans le contexte de l’ingénierie de systèmes complexes et du Model Based Systems Engineering, ce travail propose dans un premier temps une méthode de conception se fondant sur les principes normatifs de l’IS et respectant les habitudes et les pratiques courantes en ingénierie de Framatome. Cette méthode a pour vocation de formaliser et assurer le passage des exigences aux architectures et d’améliorer les capacités de vérification des modèles développés lors de la conception. Elle s’organise autour de langages de modélisation interopérables, couvrant l’ensemble des processus promus par l’IS. La méthode proposée est appliquée sur le système dont les performances sont les plus limitantes dans le contexte de l’augmentation de flexibilité : le Core Control. Ce composant algorithmique du réacteur assure le contrôle des paramètres de fonctionnement du cœur : la température moyenne, la distribution axiale de puissance et la position des groupes de grappes.La thèse propose ensuite des contributions techniques relevant du champ de l’Automatique. Il s’agit de concevoir un système de régulation répondant aux exigences issues de la formalisation IS évoquée ci-dessus. La solution proposée repose sur une stratégie de commande hiérarchisée, utilisant la complémentarité des approches dites de commande multi-objectif, de séquencement de gains et enfin de commande prédictive. Un modèle de réacteur nucléaire simplifié innovant est développé à des fins de conception du système de régulation et de simulations intermédiaires. Les résultats obtenus ont montré les capacités d’adaptation de la démarche proposée à des spécifications diverses. Les performances atteintes sont très encourageantes lorsque évaluées en simulation à partir d’un modèle réaliste et comparées à celles obtenues par les modes de pilotages classiques. / In the event of increasing renewable energies in France, this thesis proposes to study the flexibility increase of pressurized water reactors (PWR) throughout two different engineering disciplines aiming at complementary objectives: Systems Engineering (SE) and Control theory.In a first phase, within the frame of complex systems design and Model Based Systems Engineering, this work proposes a SE method based on SE standard principles and compliant with Framatome’s practices and addressing the revealed issues. This SE contribution is twofold: formalize and ensure the path from requirements to system architectures and enhance the capabilities of models verification. The method revolves around interoperable modeling languages, covering the SE processes: from requirement engineering to system architecture design. The method is applied to the system, which performances are the most limiting in the context of flexibility increase: the Core Control. This algorithmic reactor component ensures the control of: the average coolant temperature, the axial offset and the rod bank position, three of the core main functioning parameters.In order to provide a technical contribution relying on some advanced control methodologies. It consists in designing a control system meeting the requirements defined by the SE method application. 
The proposed solution is in a two-layer control strategy using the synergies of multi-objective control, gain-scheduling and predictive control strategies. A simplified innovative nuclear reactor model is employed to conceive the control algorithm, simulate and verify the developed models. The results obtained from this original approach showed the ability to adapt to various specifications. Compared to conventional core control modes, the simulation results showed very promising performances, while meeting the requirements, when evaluated on a realistic reactor model.
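
As a hedged illustration of one layer of such a strategy, the sketch below gain-schedules a PI temperature loop on reactor power; the plant model and all numbers are invented and far simpler than the thesis' hierarchical design.

```python
# Illustrative sketch: gain-scheduled PI control of average coolant
# temperature, with gains interpolated on the operating point (power).
def pi_gains(power_frac):
    """Hypothetical schedule: gentler control at low power."""
    kp = 0.5 + 1.5 * power_frac
    ki = 0.02 + 0.08 * power_frac
    return kp, ki

def simulate(setpoint=305.0, t_end=200, dt=1.0):
    temp, integ, power_frac = 300.0, 0.0, 0.5   # initial state
    for _ in range(int(t_end / dt)):
        kp, ki = pi_gains(power_frac)           # re-schedule each step
        err = setpoint - temp
        integ += err * dt
        u = kp * err + ki * integ               # rod command (arbitrary units)
        # First-order toy plant: temperature responds to the rod command.
        temp += dt * (0.05 * u - 0.01 * (temp - 300.0))
    return temp

print(f"final average temperature: {simulate():.2f} degC")
```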
