241 |
Fault Detection and Identification in Computer Networks: A Soft Computing Approach. Mohamed, Abduljalil. January 2009.
Governmental and private institutions rely heavily on reliable computer networks for their everyday business transactions. Downtime of these infrastructure networks can cost millions of dollars. Fault management systems are used to keep today's complex networks running without significant downtime cost, using either active or passive techniques. Active techniques impose excessive management traffic, whereas passive techniques often ignore the uncertainty inherent in network alarms, leading to unreliable fault identification performance. In this research work, new algorithms are proposed for both types of techniques so as to address these handicaps.
Active techniques use probing technology so that the managed network can be tested periodically and suspected malfunctioning nodes can be effectively identified and isolated. However, the diagnostic probes introduce extra management traffic and consume storage space. To address this issue, two new algorithms based on the Constraint Satisfaction Problem (CSP) are proposed to minimize management traffic while effectively maintaining the same diagnostic power of the available probes. The first algorithm is based on the standard CSP formulation and aims at significantly reducing the available dependency matrix as a means of reducing the number of probes. The obtained probe set is used for both fault detection and fault identification. The second algorithm is a fuzzy CSP-based algorithm. It is adaptive in the sense that an initial reduced fault detection probe set is used to determine the minimum set of probes needed for fault identification. In the extensive experiments conducted in this research, both algorithms demonstrated advantages over existing methods in terms of the overall management traffic needed to successfully monitor the targeted network system.
Passive techniques employ alarms emitted by network entities. However, the fault evidence provided by these alarms can be ambiguous, inconsistent, incomplete, and random. To address these limitations, alarms are correlated using a distributed Dempster-Shafer Evidence Theory (DSET) framework, in which the managed network is divided into a cluster of disjoint management domains. Each domain is assigned an intelligent agent that collects and analyzes the alarms generated within that domain. These agents are coordinated by a single higher-level entity, an agent manager, which combines the partial views of the agents into a global one. Each agent employs a DSET-based algorithm that uses the probabilistic knowledge encoded in the available fault propagation model to construct a local composite alarm. Dempster's rule of combination is then used by the agent manager to correlate these local composite alarms.
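Dempster's rule of combination itself is standard; a minimal Python sketch follows (the node names and mass values are illustrative assumptions, not the thesis's implementation):

    from itertools import product

    def dempster_combine(m1, m2):
        """Dempster's rule of combination for two basic mass
        assignments over subsets of a frame of discernment.
        m1, m2: dicts mapping frozenset(hypotheses) -> mass.
        """
        combined, conflict = {}, 0.0
        for (a, ma), (b, mb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
        if conflict >= 1.0:
            raise ValueError("total conflict; evidence cannot be combined")
        # Normalize by 1 - K, where K is the total conflict.
        return {s: m / (1.0 - conflict) for s, m in combined.items()}

    # Two agents reporting evidence about which node is faulty.
    m_agent1 = {frozenset({"n1"}): 0.6, frozenset({"n1", "n2"}): 0.4}
    m_agent2 = {frozenset({"n2"}): 0.5, frozenset({"n1", "n2"}): 0.5}
    print(dempster_combine(m_agent1, m_agent2))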
Furthermore, an adaptive fuzzy DSET-based algorithm is proposed that uses the fuzzy information provided by the observed cluster of alarms to accurately identify the malfunctioning network entities. In this way, inconsistency among the alarms is removed by weighing each received alarm against the others, while the randomness and ambiguity of the fault evidence are addressed within a soft computing framework. The effectiveness of this framework has been investigated through extensive experiments.
The proposed fault management system is able to detect malfunctioning behavior in the managed network with considerably less management traffic. Moreover, it effectively manages the uncertainty intrinsic to network alarms, thereby reducing its negative impact and significantly improving the overall performance of the fault management system.
|
242 |
Identifiering av parametrar för tillståndsbedömning av en vattenkraftstation (Identification of Parameters for Condition Assessment of a Hydropower Station). Carlsson, Magnus. January 2004.
The report begins with a general inventory of possible technical faults in a hydropower plant and of possible fault-indicating measurements. An investigation is then made of a few different faults. Based on this investigation, seal box condition and water leakage are chosen as the problem for a more thorough examination, which concludes that the turbine water leakage is larger when the turbine is in operation. The examination ultimately results in a computer alarm for faults related to the seal boxes. Finally, a few remarks are made about flow measurement and pressure measurement in relation to the project as a whole.
|
244 |
Diagnostics of Air Gap Eccentricity in Closed-Loop Drive-Connected Induction Motors. Huang, Xianghui. 01 April 2005.
The trend toward ever-expanding variable-speed induction motor applications creates a need for reliable condition monitoring and detection schemes for closed-loop motor-drive systems. This research focuses on the detection of air gap eccentricity in induction motors supplied by a vector-controlled drive. The majority of existing eccentricity detection techniques are based on monitoring fault harmonics in the stator current. This research analyzes the distribution of the eccentricity-related fault harmonics between the stator voltage and current, and points out that monitoring only the stator current is insufficient. When the motor is supplied by a vector-controlled drive, both voltage and current become modulated signals and contain fault harmonics. Either the stator voltage or the current can contain the larger fault harmonics, depending on the influence of the drive controllers and the mechanical load. Therefore, monitoring both variables in combination is necessary to ensure good detection reliability. Furthermore, with an AC drive the motor speed and load vary widely, which changes the fault harmonics as well. A new detection scheme using an artificial neural network is proposed to incorporate the influence of changing operating conditions into the fault detection, making the scheme more reliable and cost-efficient. In addition, a new off-line, non-invasive eccentricity detection method based on the surge test is proposed. Simulations and experiments are conducted to validate the feasibility of the proposed detection schemes.
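For readers unfamiliar with such fault harmonics, a small sketch of the classical mixed-eccentricity sideband frequencies around the supply frequency may help (a textbook relation, not the thesis's own harmonic analysis; the motor figures below are assumed):

    def eccentricity_sidebands(f_supply, rpm, k_max=3):
        """Low-frequency sidebands that mixed air-gap eccentricity
        induces in stator quantities: f_ecc = |f_s +/- k * f_r|,
        where f_r is the rotor's mechanical rotation frequency.
        Shown only to illustrate the kind of fault harmonics a
        detection scheme would monitor."""
        f_rot = rpm / 60.0  # mechanical rotation frequency in Hz
        bands = []
        for k in range(1, k_max + 1):
            bands += [abs(f_supply - k * f_rot), f_supply + k * f_rot]
        return sorted(bands)

    # Hypothetical 4-pole motor on a 50 Hz supply, near 1470 rpm.
    print(eccentricity_sidebands(50.0, 1470.0))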
|
245 |
Performance and Implementation Aspects of Nonlinear Filtering. Hendeby, Gustaf. January 2008.
In many situations it is important to extract as much and as good information as possible from the available measurements. Extracting information about, for example, the position and velocity of an aircraft is called filtering; here, position and velocity are examples of states of the aircraft, which in turn is a system. Typical examples of problems of this kind are surveillance systems, but the same need is becoming increasingly common in ordinary consumer products such as mobile phones (which report where the phone is), car navigation aids, and the placement of augmenting graphics in films and TV shows. A standard tool for extracting the needed information is nonlinear filtering, with methods especially common in positioning, navigation, and target tracking applications. This thesis studies several questions related to nonlinear filtering in depth: How does one evaluate how well a filter or detector performs? What distinguishes different methods, and what does this mean for their properties? How should the computers used to extract the information be programmed? The measure most often used to describe how effectively a filter works is the root mean square error (RMSE), essentially a measure of how far from the correct state the obtained estimate can be expected to lie on average. An advantage of the RMSE is that it is bounded from below by the Cramér-Rao lower bound (CRLB). The thesis presents methods for determining the effect different noise distributions have on the CRLB. Noise comprises the disturbances and errors that always occur when measuring or describing a behavior, and a noise distribution is a statistical description of how the noise behaves. The study of the CRLB leads to an analysis of intrinsic accuracy (IA), the inherent accuracy of a noise distribution. For linear systems the results are straightforward and can be used to determine whether stated goals can be met; the same method can also indicate whether nonlinear methods such as the particle filter can be expected to outperform linear methods such as the Kalman filter. Corresponding IA-based methods can be used to evaluate detection algorithms, which are used to discover faults or changes in a system. Evaluating filtering algorithms with the RMSE captures only one aspect of the filtering result, while many other properties can be of interest. Simulations in the thesis show that even if two filtering methods give the same RMSE performance, the state distributions they produce can differ greatly depending on the noise affecting the studied system, and these differences can matter in some cases. As an alternative, the Kullback divergence, a statistical measure of how much two distributions differ, is used here and clearly exposes the shortcomings of relying on RMSE analyses alone. Two filtering algorithms are analyzed in more detail: the Rao-Blackwellized particle filter (RBPF) and the unscented Kalman filter (UKF). The analysis of the RBPF leads to a new way of presenting the algorithm that makes it easier to implement and can give a better understanding of how it works. The study of the UKF focuses on the underlying unscented transform, which describes what happens to a noise distribution when it is transformed, for example by a measurement; the results consist of a number of simulation studies showing the behavior of the different methods, together with a comparison between the UT and first- and second-order Gauss approximation formulas. The thesis also describes a parallel implementation of a particle filter and an object-oriented framework for filtering in the programming language C++. The particle filter has been implemented on a graphics card, an example of inexpensive hardware present in most modern computers that is mostly used for computer games and is therefore rarely used to its full potential. A parallel particle filter, that is, a program that runs several parts of the particle filter simultaneously, opens up new applications where speed and good performance are important. The object-oriented filtering framework achieves the flexibility and performance needed for large-scale Monte Carlo simulations through modern software design, and it can also simplify moving from a prototype of a signal processing system to a finished product. / Nonlinear filtering is an important standard tool for information and sensor fusion applications, e.g., localization, navigation, and tracking. It is an essential component in surveillance systems and of increasing importance for standard consumer products, such as cellular phones with localization, car navigation systems, and augmented reality. This thesis addresses several issues related to nonlinear filtering, including performance analysis of filtering and detection, algorithm analysis, and various implementation details. The most commonly used measure of filtering performance is the root mean square error (RMSE), which is bounded from below by the Cramér-Rao lower bound (CRLB). This thesis presents a methodology to determine the effect different noise distributions have on the CRLB. This leads up to an analysis of the intrinsic accuracy (IA), the informativeness of a noise distribution. For linear systems the resulting expressions are direct and can be used to determine whether a problem is feasible or not, and to indicate the efficacy of nonlinear methods such as the particle filter (PF). A similar analysis is used for change detection performance analysis, which once again shows the importance of IA. A problem with RMSE evaluation is that it captures only one aspect of the resulting estimate, while the distributions of the estimates can differ substantially. To address this, the Kullback divergence has been evaluated, demonstrating the shortcomings of pure RMSE evaluation. Two estimation algorithms are analyzed in more detail: the Rao-Blackwellized particle filter (RBPF), by some authors referred to as the marginalized particle filter (MPF), and the unscented Kalman filter (UKF). The RBPF analysis leads to a new way of presenting the algorithm, making it easier to implement; in addition, the presentation can give new intuition for the RBPF as a stochastic Kalman filter bank. The analysis of the UKF focuses on the unscented transform (UT); the results include several simulation studies and a comparison with the first- and second-order Gauss approximations in the limit case. This thesis presents an implementation of a parallelized PF and outlines an object-oriented framework for filtering. The PF has been implemented on a graphics processing unit (GPU), i.e., a graphics card. The GPU is an inexpensive parallel computational resource available in most modern computers and is rarely used to its full potential. Being able to implement the PF in parallel makes new applications possible where speed and good performance are important. The object-oriented filtering framework provides the flexibility and performance needed for large-scale Monte Carlo simulations using modern software design methodology. It can also be used to help efficiently turn a prototype into a finished product.
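A small illustration of the abstract's point that equal RMSE can hide very different estimate distributions (a sketch with assumed scalar Gaussian posteriors, not code from the thesis):

    import numpy as np

    def kl_gauss(mu0, var0, mu1, var1):
        """Kullback-Leibler divergence KL(N0 || N1) between two
        scalar Gaussians, in closed form."""
        return 0.5 * (np.log(var1 / var0)
                      + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

    def rmse(mu, var):
        # RMSE about truth x = 0: sqrt(variance + bias^2).
        return np.sqrt(var + mu ** 2)

    # Two estimators of a true state x = 0 with identical RMSE:
    # A is unbiased but noisy, B is biased but tight.
    mu_a, var_a = 0.0, 1.0           # RMSE^2 = 1.0 + 0.0
    mu_b, var_b = 0.8, 1.0 - 0.64    # RMSE^2 = 0.36 + 0.64
    print(rmse(mu_a, var_a), rmse(mu_b, var_b))  # both 1.0
    print(kl_gauss(mu_a, var_a, mu_b, var_b))    # clearly nonzero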
|
246 |
Power System Fault Detection and Classification by Wavelet Transforms and Adaptive Resonance Theory Neural Networks. Kasinathan, Karthikeyan. 01 January 2007.
This thesis aims at detecting and classifying power system transmission line faults. To deal with an extremely large data set covering different fault situations, a three-step optimized neural network approach is proposed. The approach uses the discrete wavelet transform for detection and two different types of self-organizing, unsupervised Adaptive Resonance Theory neural networks for classification. The fault scenarios are simulated using the Alternative Transients Program, and the performance of the scheme is compared with existing techniques. The simulation results show that the proposed technique handles large data sets more efficiently and that its time of operation is considerably less than that of existing methods.
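As a hedged sketch of wavelet-based transient detection (a single-level Haar transform with an assumed threshold and test signal; the thesis's actual feature extraction and ART networks are more elaborate):

    import numpy as np

    def haar_dwt(x):
        """One level of the Haar discrete wavelet transform.
        Returns (approximation, detail) coefficients."""
        x = np.asarray(x, dtype=float)
        a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        return a, d

    def detect_transient(signal, threshold):
        """Flag a fault-like transient when the detail-coefficient
        energy exceeds a threshold calibrated on healthy waveforms."""
        _, d = haar_dwt(signal)
        return float(np.sum(d ** 2)) > threshold

    # A clean 50 Hz waveform versus one with an impulsive disturbance.
    t = np.arange(0, 0.1, 1e-4)
    clean = np.sin(2 * np.pi * 50 * t)
    faulty = clean.copy()
    faulty[500] += 2.0  # sharp spike mid-window
    print(detect_transient(clean, 1.0), detect_transient(faulty, 1.0))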
|
247 |
A model-based reasoning architecture for system-level fault diagnosis. Saha, Bhaskar. 04 January 2008.
This dissertation presents a model-based reasoning architecture with a twofold purpose: to detect and classify component faults from observable system behavior, and to generate fault propagation models so as to make a more accurate estimation of current operational risks. It incorporates a novel approach to system-level diagnostics by addressing the need to reason about low-level, inaccessible components from observable high-level system behavior. In the field of complex system maintenance it can be invaluable as an aid to human operators.

The first step is the compilation of a database of functional descriptions and associated fault-specific features for each of the system components. The system is then analyzed to extract structural information, which, together with the functional database, is used to create the structural and functional models. A fault-symptom matrix is constructed from the functional model and the features database. The fault threshold levels for these symptoms are based on nominal baseline data. From the fault-symptom matrix and these thresholds, a diagnostic decision tree is formulated in order to intelligently query the system health. For each fault candidate, a fault propagation tree is generated from the structural model. Finally, the overall system health status report includes both the faulty components and the associated at-risk components, as predicted by the fault propagation model.
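A minimal sketch of querying a fault-symptom matrix (the faults, symptoms, and nearest-match rule below are invented for illustration; the dissertation itself builds a decision tree over learned thresholds):

    # Hypothetical fault-symptom matrix: rows are candidate faults,
    # columns are thresholded symptoms (1 = symptom expected).
    FAULT_SYMPTOMS = {
        "pump_bearing_wear": (1, 1, 0, 0),
        "valve_stiction":    (0, 1, 1, 0),
        "sensor_bias":       (0, 0, 1, 1),
    }

    def diagnose(observed):
        """Rank fault candidates by how well the observed symptom
        vector matches each row of the matrix (a simple Hamming
        distance standing in for the decision-tree traversal)."""
        def distance(expected):
            return sum(e != o for e, o in zip(expected, observed))
        return sorted(FAULT_SYMPTOMS, key=lambda f: distance(FAULT_SYMPTOMS[f]))

    print(diagnose((0, 1, 1, 1)))  # best-first list of candidates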
|
248 |
Κατασκευή διαγνωστικού συστήματος βλαβών αυτοκινήτου με επεξεργασία ακουστικού σήματος (Construction of a Car Fault Diagnosis System Using Acoustic Signal Processing). Παναγιώτου, Δημήτριος. 03 October 2011.
In this diploma thesis, a new technique for fault diagnosis of a four-stroke engine is explored. The technique is based on piezoelectric materials and sensors, exploiting their ability to convert mechanical stress or pressure into electrical voltage. The sensors are mounted on the engine, and the generated signal is sampled and processed in order to reach a diagnosis of the engine's condition and to monitor it continuously. In the course of the thesis, different engine fault detection techniques are applied, and different sampling and processing algorithms are examined in order to determine the most efficient one for this application. Furthermore, the use and capabilities of modern embedded systems are investigated, specifically the Analog Devices ADuC7020 microprocessor, on which this thesis is based, for the implementation and programming of the fault detection device.

In the simulation phase, a sinusoidal signal was used to approximate the engine sound. In the signal processing stage, a finite impulse response (FIR) digital filter and a modified DFT algorithm were used for sampling and processing the digitized signal and for frequency identification. More specifically, the optimized Goertzel algorithm was implemented, preferred for achieving higher speeds. Finally, oversampling was also implemented to achieve greater accuracy. The programming was done in the C language, and the software used was Keil's μVision4. / In this thesis we examine automatic fault detection in petrol engines using piezoelectric sensors and embedded systems.
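The Goertzel algorithm mentioned above is standard; a Python sketch of its usual single-bin form follows (the thesis implements a C variant on the ADuC7020; the sampling rate and test tone below are assumptions):

    import math

    def goertzel_power(samples, sample_rate, target_freq):
        """Goertzel algorithm: power of one frequency bin, computed
        with a single real coefficient per sample. This is the
        textbook form, cheaper than a full DFT when only a few
        frequencies are of interest."""
        n = len(samples)
        k = int(0.5 + n * target_freq / sample_rate)  # nearest bin
        w = 2.0 * math.pi * k / n
        coeff = 2.0 * math.cos(w)
        s_prev, s_prev2 = 0.0, 0.0
        for x in samples:
            s = x + coeff * s_prev - s_prev2
            s_prev2, s_prev = s_prev, s
        return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

    # Detect a 440 Hz component in a test tone sampled at 8 kHz.
    fs, n = 8000, 400
    tone = [math.sin(2 * math.pi * 440 * i / fs) for i in range(n)]
    print(goertzel_power(tone, fs, 440.0) > goertzel_power(tone, fs, 1000.0))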
|
249 |
An analysis and comparison of two methods for UAV actuator fault detection and isolation. Odendaal, Hendrik Mostert. 2012.
Thesis (MScEng)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: Fault detection and isolation (FDI) is an important aspect of effective fault-tolerant control architectures. The Electronic Systems Laboratory at Stellenbosch University identified the need to study viable methods of FDI. In this research, two FDI methods for actuator failures on the Meraka Modular UAV are investigated.
The Meraka Modular UAV is an unmanned aircraft developed by the CSIR. A simple six-degree-of-freedom, non-linear mathematical model is developed as a platform on which the two FDI methods are formulated. The theoretical model is used in a simulation environment to extensively test and compare the performance of the proposed FDI methods in different flight conditions.
The first method investigated is a multiple model adaptive estimator (MMAE), which incorporates a bank of Kalman filters. Each Kalman filter in the MMAE is conditioned on one expected actuator fault scenario. The limitations of using linear Kalman filters are explained, and they are replaced by extended Kalman filters, whose associated advantages and disadvantages are discussed. Each filter in the bank produces a residual vector and a residual covariance matrix. This information is passed to a Bayes classifier to determine the fault scenario with the highest likelihood of being active.
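A sketch of the Bayes step such an MMAE performs (the residuals and hypothesis set below are assumed, not the thesis's code):

    import numpy as np

    def mmae_update(priors, residuals, covariances):
        """One Bayes step of a multiple model adaptive estimator:
        weight each fault-hypothesis filter by the Gaussian
        likelihood of its residual, then renormalize.

        priors:      current model probabilities, shape (M,)
        residuals:   innovation of each Kalman filter, shape (M, d)
        covariances: innovation covariance of each filter, (M, d, d)
        """
        posts = np.empty_like(priors)
        for i, (r, S) in enumerate(zip(residuals, covariances)):
            d = r.shape[0]
            norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(S))
            like = np.exp(-0.5 * r @ np.linalg.solve(S, r)) / norm
            posts[i] = priors[i] * like
        return posts / posts.sum()

    # Three hypotheses: nominal, elevator stuck, rudder stuck.
    p = np.array([1 / 3, 1 / 3, 1 / 3])
    res = np.array([[0.1, -0.2], [2.5, 1.8], [0.3, 2.2]])
    cov = np.tile(np.eye(2), (3, 1, 1))
    print(mmae_update(p, res, cov))  # nominal model dominates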
The second method studied uses the parity space approach to FDI. The parity space consists of the parity relations that quantify all the analytical redundancies available between the sensor outputs and actuator inputs of a system. A transformation matrix is then optimised to transform these parity relations into residuals that are especially sensitive to specific actuator faults. Actuator faults cause the variance of the parity space residuals to increase. A cumulative summation procedure is used to determine when the residual variance has changed sufficiently to indicate an actuator fault. A pseudo-inverse actuator estimation scheme is used to extract the actuator deflections from the parity relations.
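A minimal sketch of a one-sided CUSUM change detector of the kind described (the drift and threshold values are assumed):

    def cusum_alarm(residuals, drift, threshold):
        """One-sided CUSUM test on a residual magnitude sequence:
        accumulate deviations above an allowed drift and raise an
        alarm when the running sum crosses the threshold."""
        g = 0.0
        for k, r in enumerate(residuals):
            g = max(0.0, g + abs(r) - drift)
            if g > threshold:
                return k  # sample index at which the alarm fires
        return None

    healthy = [0.1, -0.2, 0.15, -0.05, 0.1]
    faulty = healthy + [0.9, 1.1, 0.8, 1.2]  # variance jump after a fault
    print(cusum_alarm(healthy, drift=0.3, threshold=2.0))  # None
    print(cusum_alarm(faulty, drift=0.3, threshold=2.0))   # fires near the end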
The FDI performance is tested by deliberately failing specific actuators of the Meraka Modular UAV in flight. The flight test data is then used to analyse and compare the performance of the two FDI methods investigated in the research. It is found that, for the specific Meraka Modular UAV, the FDI performs as expected, with disturbance effects and actuator excitation influencing the FDI effectiveness. The research shows that the bank of Kalman filters creates fewer false alarms, whereas the parity space FDI is more sensitive to faults. It is illustrated that FDI can be improved with active actuator excitation and process noise estimation techniques, delivering promising results.
|
250 |
Problèmes d'identification dans les graphes / Identification problems in graphs. Parreau, Aline. 05 July 2012.
In this thesis, we study problems of vertex identification in graphs. To identify the vertices of a graph is to assign to each vertex an object that makes it unique. We are especially interested in identifying codes: dominating sets of vertices such that the closed neighbourhood of each vertex has a unique intersection with the set. The vertices of the identifying code can be seen as sensors and each vertex of the graph as the location of a potential fault. We first classify all finite graphs for which all but one of the vertices are needed in any identifying code. Finding an optimal identifying code, i.e., an identifying code of minimum size, is an NP-hard problem, so we study it in restricted classes of graphs. Depending on the class considered, we are able to solve the problem completely (for Sierpiński graphs), to give better bounds on the size of an identifying code than the general ones (for interval graphs, line graphs, and the king grid), or to prove that the problem remains NP-hard even within the restricted class (for line graphs). We then consider variations of identifying codes that give flexibility to the sensors. For example, we study sensors in the plane able to detect faults within a known radius, with a tolerated error; we give constructions of such codes and bounds on their size for fixed and asymptotic values of the radius and the tolerance. Finally, we introduce identifying colourings of graphs: vertex-colourings such that each vertex is identified by the set of colours in its closed neighbourhood. We compare this colouring with proper vertex-colouring and give bounds on the number of colours required to identify a graph, for several classes of graphs.
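A small sketch of the defining property (a brute-force check that a vertex set is an identifying code; the graph below is an assumed toy example, not from the thesis):

    def is_identifying_code(adj, code):
        """Check whether `code` is an identifying code of the graph:
        every closed neighbourhood N[v] must meet the code
        (domination), and the sets N[v] & code must be pairwise
        distinct (identification).

        adj:  dict vertex -> set of neighbours (undirected)
        code: iterable of vertices
        """
        code = set(code)
        signatures = {}
        for v, nbrs in adj.items():
            sig = frozenset((nbrs | {v}) & code)
            if not sig:                      # v is not dominated
                return False
            if sig in signatures.values():   # v is not separated
                return False
            signatures[v] = sig
        return True

    # A 4-cycle: {0, 1, 2} identifies every vertex, {0, 2} does not.
    c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
    print(is_identifying_code(c4, {0, 1, 2}))  # True
    print(is_identifying_code(c4, {0, 2}))     # False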
|