11

A Conceptual Framework to Incorporate Complex Basic Events in HiP-HOPS

Kabir, Sohag, Aslansefat, K., Sorokos, I., Papadopoulos, Y., Gheraibia, Y. 18 October 2019 (has links)
Reliability evaluation to ensure uninterrupted system operation is an integral part of dependable system development. Model-based safety analysis (MBSA) techniques such as Hierarchically Performed Hazard Origin and Propagation Studies (HiP-HOPS) have made the reliability analysis process less expensive in terms of the effort and time required. HiP-HOPS uses an analytical modelling approach to fault tree analysis to automate the reliability analysis process, where each system component is associated with its failure rate or failure probability. However, such non-state-space analysis models cannot capture more complex failure behaviour of components, such as failure/repair dependencies, e.g., spares, shared repair, imperfect coverage, etc. State-space-based paradigms like Markov chains can model complex failure behaviour, but their use can lead to state-space explosion, undermining the overall analysis capacity. Therefore, to maintain the benefits of MBSA without compromising on modelling capability, in this paper we propose a conceptual framework to incorporate complex basic events in HiP-HOPS. The idea is demonstrated via an illustrative example. / This conference paper is available to view at http://hdl.handle.net/10454/17423.
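The non-state-space model the abstract refers to can be sketched in a few lines: each basic event carries a fixed failure probability, and gate probabilities combine them under an independence assumption (which is exactly what breaks down for the repair dependencies the paper targets). All event names and numbers below are illustrative, not from the paper.

```python
# Sketch of classic fault tree evaluation over basic events with fixed
# failure probabilities, assuming statistically independent events --
# the non-state-space model that cannot capture failure/repair
# dependencies such as spares or shared repair.

def or_gate(probs):
    # P(A or B or ...) = 1 - product of (1 - p_i) for independent events
    result = 1.0
    for p in probs:
        result *= (1.0 - p)
    return 1.0 - result

def and_gate(probs):
    # P(A and B and ...) = product of p_i for independent events
    result = 1.0
    for p in probs:
        result *= p
    return result

# Hypothetical system: the top event occurs if the power supply fails
# OR both redundant controllers fail.
p_power = 0.01
p_ctrl_a = 0.05
p_ctrl_b = 0.05
p_top = or_gate([p_power, and_gate([p_ctrl_a, p_ctrl_b])])
```

If the two controllers shared a repair crew, their failures would no longer be independent and this gate-by-gate calculation would no longer be valid — which is the motivation for incorporating complex basic events.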
12

Share-Driven Scheduling of Embedded Networks

Nolte, Thomas January 2006 (has links)
Many products are built from more or less independently developed subsystems. For instance, a car consists of subsystems for transmission, braking, suspension, etc. These subsystems are frequently controlled by an embedded computer system. In the automotive industry, as well as in other application domains, there is currently a trend from an approach where subsystems have dedicated computer hardware and other resources (a federated approach) to an approach where subsystems share hardware and other resources (an integrated approach). This is motivated by strong pressure to reduce product cost at the same time as an increasing number of subsystems are being introduced.

When integrating subsystems, it is desirable that guarantees valid before integration remain valid after integration, since this eliminates the need for costly reverification. The computer network is a resource that is typically shared among all subsystems. Hence, a central issue when integrating subsystems is to provide efficient scheduling of message transmissions on the network. There are essentially three families of schedulers: priority-driven schedulers, which assign priorities to messages; time-driven schedulers, which assign specific time slots for the transmission of specific messages; and share-driven schedulers, which assign shares of the available network capacity to groups of messages.

This thesis presents a framework for share-driven scheduling, to be implemented and used in embedded networks, with the aim of facilitating subsystem integration by reducing the risk of interference between subsystems. The framework is applied in the automotive domain.

The initial parts of the thesis give an overview of the systems, subsystems and network technologies found in the automotive domain. Then the share-driven scheduling framework is presented, analytically investigated and proven, and evaluated in a simulation study. Finally, it is shown how the framework is configured and used in the context of subsystem integration. The results show that the framework allows flexible and efficient scheduling of messages with real-time constraints, facilitating the integration of subsystems from a network point of view.
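The share-driven idea can be illustrated with a toy virtual-time scheduler: each subsystem's message group is assigned a share of the network capacity, and the scheduler always transmits from the group that is furthest behind its share. The selection rule, group names, and shares below are illustrative, not the thesis's actual framework.

```python
# Toy sketch of share-driven scheduling: each message group gets a
# share of network capacity; at each slot the scheduler picks the
# group with the lowest normalized consumption, so long-run traffic
# converges to the assigned shares.

def pick_next(groups):
    # groups: dict name -> {"share": float, "sent": int}
    return min(groups, key=lambda g: groups[g]["sent"] / groups[g]["share"])

groups = {
    "transmission": {"share": 0.5, "sent": 0},
    "braking":      {"share": 0.3, "sent": 0},
    "suspension":   {"share": 0.2, "sent": 0},
}

# Simulate 100 fixed-size message slots.
for _ in range(100):
    g = pick_next(groups)
    groups[g]["sent"] += 1
```

After the simulation, each group's slot count is close to its share of the 100 slots, regardless of how the other groups behave — the isolation property that makes share-driven scheduling attractive for subsystem integration.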
14

Detection of Gamma-Ray Bursts with the ECLAIRs instrument onboard the space mission SVOM

Antier-Farfar, Sarah 29 November 2016 (has links)
Discovered in the early 1970s, gamma-ray bursts (GRBs) are fascinating cosmic phenomena that appear randomly on the sky and release large amounts of energy, mainly as gamma-ray emission. Although their origin is still debated, they are believed to be produced by some of the most violent explosions in the Universe, leading to the formation of stellar black holes. GRBs are detected through their prompt emission, an intense short burst of gamma-rays (lasting from a few milliseconds to a few minutes), followed by a long-lived afterglow observed on longer timescales from the X-ray to the radio domain. My thesis contributes to the development of SVOM, a Chinese-French mission scheduled for launch in 2021 and devoted to the study of GRBs, combining space and ground instruments. My work focuses on the main instrument, ECLAIRs, a hard X-ray coded-mask imaging camera in charge of the near-real-time detection and localization of the prompt emission of GRBs. During my thesis, I studied the scientific performance of ECLAIRs, in particular the number of GRBs it is expected to detect and their characteristics. For this purpose, I performed simulations using prototypes of the on-board trigger algorithms combined with a model of the ECLAIRs instrument. The input data of the simulations include a background model and a synthetic population of gamma-ray bursts generated from existing catalogues of earlier missions (CGRO, HETE-2, Fermi and Swift). As a result, I precisely estimated the detection efficiency of the algorithms and predicted the number of GRBs ECLAIRs should detect: 40 to 70 per year. Moreover, the study showed that ECLAIRs will be particularly sensitive to the still poorly known population of X-ray-rich GRBs. My thesis also provides complementary studies of the localization performance, the false-alarm rate and the characteristics of the algorithms' triggers. Finally, I propose two new GRB detection methods whose preliminary results are very promising: they show that the sensitivity of ECLAIRs to short GRBs (a population of particular interest because of its expected association with gravitational waves) can be further improved.
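The simplest building block of an on-board GRB trigger is a count-rate test: flag time bins whose counts significantly exceed the expected background. The sketch below shows only that rate test, with made-up numbers; the actual ECLAIRs algorithms are far more elaborate (multiple timescales, energy bands, and coded-mask imaging of the excess).

```python
import math

# Simplified count-rate trigger: flag time bins whose counts exceed
# the expected background by more than `threshold` standard deviations,
# using the Gaussian approximation (N - B) / sqrt(B), valid for large B.

def significant_bins(counts, background, threshold=5.0):
    return [i for i, n in enumerate(counts)
            if (n - background) / math.sqrt(background) > threshold]

# Hypothetical light curve: steady background of ~100 counts per bin
# with a burst in bins 5 and 6.
counts = [98, 103, 101, 97, 100, 220, 180, 102, 99, 100]
burst_bins = significant_bins(counts, background=100.0)
# burst_bins -> [5, 6]
```

In practice the background itself must be estimated from the data (it varies along the orbit), which is one reason the false-alarm rate studied in the thesis matters.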
15

Implementation of a Test Platform for End-to-End Streaming Telemetry in Networks

Erlandsson, Niklas January 2020 (has links)
The goals of this study are to implement a test environment for streaming telemetry and to compare two alternatives for analysing the collected data in real time: the Python libraries PyKafka and Confluent-Kafka-Python. The comparison focused on three areas: documentation, amount of code and memory usage. The test environment was set up with a router running Cisco IOS XR software sending data to a Cisco Pipeline collector, which in turn forwards the data to a Kafka cluster. The comparison of the two client libraries was carried out in Python. The results showed that both libraries have well-written documentation and differ negligibly in amount of code, while memory usage was considerably lower with Confluent-Kafka-Python. The study shows that streaming telemetry combined with real-time analysis works well as a complement to, or a replacement for, SNMP. Given its lower memory usage, the study recommends Confluent-Kafka-Python for production deployments, particularly in networks with a large number of devices.
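Once the telemetry messages land in Kafka, the real-time analysis step boils down to decoding each message and differencing counters between samples. The sketch below shows that step on a JSON payload; the field names are hypothetical, since the actual structure depends on the sensor paths configured on the IOS XR router and on the Pipeline output format.

```python
import json

# Sketch of the analysis step applied to telemetry messages consumed
# from the Kafka cluster. Field names ("interfaces", "name",
# "out-octets") are hypothetical placeholders for the real sensor-path
# data emitted by the router via Pipeline.

def interface_octets(message):
    # Extract {interface name: out-octets counter} from one message.
    data = json.loads(message)
    return {row["name"]: row["out-octets"] for row in data["interfaces"]}

def throughput(prev, curr, interval_s):
    # Bytes per second per interface between two consecutive samples.
    return {name: (curr[name] - prev[name]) / interval_s
            for name in curr if name in prev}

sample_t0 = '{"interfaces": [{"name": "Gi0/0/0/0", "out-octets": 1000}]}'
sample_t1 = '{"interfaces": [{"name": "Gi0/0/0/0", "out-octets": 61000}]}'
rates = throughput(interface_octets(sample_t0),
                   interface_octets(sample_t1), interval_s=60)
# rates["Gi0/0/0/0"] -> 1000.0 bytes/s
```

In the study's setup these messages would arrive through a Confluent-Kafka-Python consumer rather than as string literals; the differencing logic is the same either way.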
16

Preparation and Detailed X-Ray Photoelectron Spectroscopy and Spectroscopic Ellipsometry Analysis of Ultrathin Protective Coatings

Johnson, Brian Ivins 01 October 2019 (has links)
Ultra-thin films (UTFs) are important in many applications in the semiconductor industry, chromatography, sensing, microfluidics, aerospace, and robotics. They also protect materials from corrosion, change surface energies, limit water intrusion, allow material self-cleaning and self-healing, provide scratch resistance, and impart other specific chemical properties. In many cases, UTFs drastically alter surface properties and therefore their applications, so proper and consistent characterization of coatings is imperative to confirm and understand their desired properties. In Chapter 2, Al oxidation under MgF2 protective layers is studied using real-time X-ray photoelectron spectroscopy (XPS) and spectroscopic ellipsometry (SE). These tools allowed me to monitor Al oxidation over both short (hours) and long (months) periods of time. XPS revealed the chemical changes that took place in these materials as a function of time, and these changes were verified with SE. These studies help increase understanding of how aluminum changes under MgF2 protective layers. Chapter 3 demonstrates ab initio calculation-guided XPS analysis of surfaces functionalized with fluorinated silanes. This study addresses deficiencies in the literature, where CF2:CF3 ratios from experimental XPS data do not match theoretical CF2:CF3 ratios. In a systematic approach, I developed semi-empirical models directed both by ab initio calculations and by adjustable, empirical parameters. These models were effective in describing the raw data and exceeded the fitting methods used in the literature. In Chapter 4, SiO2 UTFs of variable thickness deposited on Eagle XG® glass substrates are characterized. Challenges in this work included the similar optical functions of the film and substrate, as well as backside reflections from the substrate. These obstacles were addressed using multi-sample analysis (MSA), a variable-angle spectroscopic ellipsometry approach, and mechanical abrasion/roughening of the substrate backside. With these approaches, I developed a model that precisely fit the data collected from all the samples and gave the correct optical function of the material along with thickness values for each film. Surface characterization represents a commitment of resources: it takes time to make measurements, and it takes time to analyze and understand the results. As presented in this work, I increase understanding of ultra-thin films at interfaces using a multi-tool approach and multiple analytical methods on the data collected from each tool.
17

Interactive Visualization of Statistical Data using Multidimensional Scaling Techniques

Jansson, Mattias, Johansson, Jimmy January 2003 (has links)
This study has been carried out in cooperation with Unilever and partly with the EC-funded project Smartdoc (IST-2000-28137).

In areas of statistics and image processing, both the amount of data and the number of dimensions are increasing rapidly, and an interactive visualization tool that lets the user perform real-time analysis can save valuable time. Real-time cropping and drill-down considerably facilitate the analysis process and yield more accurate decisions.

In the Smartdoc project, there has been a request for a component for smart filtering in multidimensional data sets. As the project aims to develop smart, interactive components for low-end systems, an implementation of the self-organizing map algorithm proposes which dimensions to visualize.

Together with Dr. Robert Treloar at Unilever, the SOM Visualizer - an application for interactive visualization and analysis of multidimensional data - has been developed. The analytical part of the application is based on Kohonen's self-organizing map algorithm. In cooperation with the Smartdoc project, a component has been developed for smart filtering in multidimensional data sets. Microsoft Visual Basic and components from the graphics library AVS OpenViz were used as development tools.
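Kohonen's self-organizing map, the algorithm at the heart of the SOM Visualizer, can be sketched in a few lines: find the grid unit whose weight vector best matches the input, then pull every unit toward the input with a strength that decays with grid distance from that winner. Grid size, learning rate, and neighbourhood radius below are illustrative (and the original tool was written in Microsoft Visual Basic, not Python).

```python
import math
import random

# One training step of a self-organizing map on a small 2-D grid.

def som_step(weights, x, lr=0.5, radius=1.0):
    # weights: dict (i, j) -> weight vector; x: input vector
    # 1) best-matching unit = unit with the closest weight vector
    bmu = min(weights, key=lambda ij: sum((w - xi) ** 2
              for w, xi in zip(weights[ij], x)))
    # 2) move every unit toward x, weighted by a Gaussian of its
    #    grid distance to the BMU
    for ij, w in weights.items():
        d2 = (ij[0] - bmu[0]) ** 2 + (ij[1] - bmu[1]) ** 2
        h = math.exp(-d2 / (2 * radius ** 2))
        weights[ij] = [wi + lr * h * (xi - wi) for wi, xi in zip(w, x)]
    return bmu

random.seed(0)
grid = {(i, j): [random.random(), random.random()]
        for i in range(3) for j in range(3)}
bmu = som_step(grid, [0.9, 0.1])
```

Repeating this step over a data set (while shrinking `lr` and `radius`) arranges the grid so that nearby units respond to similar inputs, which is what makes the map useful for proposing dimensions to visualize.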
18

Rheological and sensory analysis of goat milk yogurt with umbu pulp

MARINHO, Maria Verônica Monteiro. 03 July 2018 (has links)
The consumption of goat milk and its derivatives is growing on the strength of its nutritional quality. However, the milk's strong, distinctive flavor reduces acceptability among a considerable share of consumers. Producing yogurt from goat milk combined with fruit pulp can improve its sensory characteristics by masking the characteristic odor and improving nutritional content, thereby increasing commercialization and market share. This study therefore aimed at the development and the sensory, chemical, physical, physicochemical and rheological evaluation of goat milk yogurt with added umbu (Spondias tuberosa) pulp. The samples were prepared from goat milk yogurt, umbu pulp added in proportions of 10, 15 and 20%, and sucrose. The umbu pulp used in sample preparation was characterized chemically, physically and physicochemically, and the results expressed as simple means. For the yogurts, the chemical, physical and physicochemical data were analyzed under a completely randomized design with three replications using the Assistat program. Sensory evaluation of the yogurts used a nine-point hedonic scale assessing consistency, flavor, color, aroma, appearance, acidity and sweetness, together with acceptance tests; the data were processed with the Consensor 1.1 and Assistat programs, with means compared by Tukey's test. The rheological behavior was studied at 5, 10 and 15 °C, with readings taken in a viscometer and the data fitted to the Ostwald-de Waele (power-law), Casson and Mizrahi-Berk models. The umbu pulp showed pH, titratable acidity, total soluble solids, ascorbic acid and moisture content close to values reported in the literature and, in terms of color, a predominantly yellow hue. Among the yogurts with different pulp concentrations, the one with 10% umbu pulp obtained the highest mean scores in all sensory parameters. Based on pH, the yogurts were classified as acidic foods; for titratable acidity and ascorbic acid, the yogurt with 10% umbu pulp showed the highest values; there was no significant difference between formulations in total soluble solids (°Brix), proteins, lipids or water activity. The goat milk yogurts with umbu pulp behaved as non-Newtonian fluids with pseudoplastic characteristics, and the Ostwald-de Waele, Casson and Mizrahi-Berk models represented their rheological behavior satisfactorily. Increasing the percentage of added umbu pulp reduced the yogurts' apparent viscosity.
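The simplest of the three rheological models, the Ostwald-de Waele (power-law) model tau = K * shear_rate**n, becomes a straight line after a log-log transform, so it can be fitted by ordinary least squares. The sketch below uses synthetic data chosen to lie exactly on the model; the study's actual viscometer data and fitted K, n values are not reproduced here.

```python
import math

# Fit the Ostwald-de Waele (power-law) model  tau = K * rate**n
# via linear least squares on  log(tau) = log(K) + n * log(rate).
# n < 1 indicates pseudoplastic (shear-thinning) behaviour: the
# apparent viscosity K * rate**(n - 1) falls as the shear rate rises.

def fit_power_law(shear_rates, stresses):
    xs = [math.log(g) for g in shear_rates]
    ys = [math.log(t) for t in stresses]
    m = len(xs)
    xbar = sum(xs) / m
    ybar = sum(ys) / m
    n = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    k = math.exp(ybar - n * xbar)
    return k, n

# Synthetic pseudoplastic fluid with K = 2.0, n = 0.4.
rates = [1.0, 2.0, 5.0, 10.0, 50.0]
taus = [2.0 * r ** 0.4 for r in rates]
K, n = fit_power_law(rates, taus)
```

Fitting each temperature's data this way and comparing the recovered n against 1 is how pseudoplastic behaviour like that reported for the yogurts is typically confirmed.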
20

Real-time anomaly detection with in-flight data: streaming anomaly detection with heterogeneous communicating agents

Aussel, Nicolas 21 June 2019 (has links)
With the rise in the number of sensors and actuators in an aircraft and the development of reliable data links from the aircraft to the ground, it becomes possible to improve aircraft security and maintainability by applying real-time analysis techniques. However, given the limited availability of on-board computing and the high cost of the data links, current architectural solutions cannot fully leverage all the available resources, limiting their accuracy.

Our goal is to provide a distributed algorithm for failure prediction that could be executed both on board the aircraft and on a ground station, and that would produce on-board failure predictions in near real time under a communication budget. In this approach, the ground station holds fast computation resources and historical data, while the aircraft holds limited computational resources and the current flight's data.

In this thesis, we study the specificities of aeronautical data and the methods that already exist to produce failure predictions from them, and we propose a solution to the stated problem. Our contribution is detailed in three main parts.

First, we study the problem of rare-event prediction created by the high reliability of aeronautical systems. Many learning methods for classifiers rely on balanced datasets. Several approaches exist to correct a dataset imbalance, and we study their efficiency on extremely imbalanced datasets.

Second, we study the problem of log parsing, as many aeronautical systems do not produce easy-to-classify labels or numerical values but full-text log messages. We study existing methods, based on statistical approaches and on deep learning, to convert full-text log messages into a form usable as input by learning algorithms for classifiers. We then propose our own method based on natural language processing and show how it outperforms the other approaches on a public benchmark.

Last, we offer a solution to the stated problem by proposing a new distributed learning algorithm that relies on two existing learning paradigms, Active Learning and Federated Learning. We detail our algorithm and its implementation, and provide a comparison of its performance with existing methods.
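One of the simplest imbalance-correction approaches of the kind the first part of the thesis evaluates is random oversampling: duplicate minority-class (failure) examples until the classes are balanced. The sketch below uses a synthetic dataset; it illustrates the technique generically, not the thesis's specific experiments.

```python
import random

# Random oversampling of the rare class: resample minority examples
# (with replacement) until both classes have equal counts.

def oversample_minority(data, rng):
    # data: list of (features, label) pairs, labels 0 (normal) / 1 (failure)
    pos = [d for d in data if d[1] == 1]
    neg = [d for d in data if d[1] == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    resampled = minority + [rng.choice(minority)
                            for _ in range(len(majority) - len(minority))]
    balanced = majority + resampled
    rng.shuffle(balanced)
    return balanced

rng = random.Random(0)
# Heavily imbalanced synthetic dataset: 990 normal flights, 10 failures.
data = [([rng.random()], 0) for _ in range(990)] + \
       [([rng.random()], 1) for _ in range(10)]
balanced = oversample_minority(data, rng)
```

At extreme imbalance ratios like this one, naive duplication risks overfitting to the few failure examples, which is one reason comparing several correction strategies (as the thesis does) matters.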
