31

Patterns and quality of object-oriented software systems

Khomh, Foutse 04 1900 (has links)
Maintenance costs during the past decades have reached more than 70% of the overall costs of object-oriented systems, because of many factors, such as changing software environments, changing users' requirements, and the overall quality of systems. One factor on which we have control is the quality of systems. Many object-oriented software quality models have been introduced in the literature to help assess and control quality. However, these models usually use metrics of classes (such as number of methods) or of relationships between classes (for example, coupling) to measure internal attributes of systems. Yet, the quality of object-oriented systems does not depend solely on class metrics: it also depends on the organisation of classes, i.e. the system design, which concretely manifests itself through design styles such as design patterns and antipatterns. In this dissertation, we propose the DEQUALITE method to systematically build quality models that take into account not only the internal attributes of systems (through metrics) but also their design (through design patterns and antipatterns). This method uses a machine learning approach based on Bayesian Belief Networks and builds on the results of a series of experiments aimed at evaluating the impact of design patterns and antipatterns on the quality of systems. These experiments, performed on 9 large object-oriented open-source systems, enable us to draw the following conclusions:
• Counter-intuitively, design patterns do not always improve the quality of systems; tangled implementations of design patterns, for example, significantly affect the structure of classes and negatively impact their change- and fault-proneness.
• Classes participating in antipatterns are significantly more likely to be subject to changes and to be involved in fault-fixing changes than other classes.
• A non-negligible percentage of classes participate in co-occurrences of antipatterns and design patterns in systems. On these classes, design patterns have a positive effect in mitigating antipatterns.
We apply and validate our method on three open-source object-oriented systems to demonstrate the contribution of system design to quality assessment.
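As a concrete picture of the kind of Bayesian Belief Network a method like DEQUALITE could learn, the sketch below relates design evidence (antipattern participation, a tangled pattern implementation) to the fault-proneness of a class. The structure, variable names, and probabilities are illustrative assumptions, not the thesis's actual model, and the pgmpy 0.x API (`BayesianNetwork`, `TabularCPD`, `VariableElimination`) is assumed.

```python
# A minimal sketch (not the actual DEQUALITE model) of a Bayesian network
# relating design evidence to fault-proneness. All names and probabilities
# are illustrative assumptions.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Structure: antipattern participation and tangled pattern implementations
# both influence the fault-proneness of a class.
model = BayesianNetwork([("Antipattern", "FaultProne"),
                         ("TangledPattern", "FaultProne")])

cpd_ap = TabularCPD("Antipattern", 2, [[0.8], [0.2]])     # P(no)=0.8, P(yes)=0.2
cpd_tp = TabularCPD("TangledPattern", 2, [[0.9], [0.1]])
cpd_fp = TabularCPD(
    "FaultProne", 2,
    # columns: (AP=0,TP=0), (AP=0,TP=1), (AP=1,TP=0), (AP=1,TP=1)
    [[0.95, 0.70, 0.60, 0.30],   # P(FaultProne = no)
     [0.05, 0.30, 0.40, 0.70]],  # P(FaultProne = yes)
    evidence=["Antipattern", "TangledPattern"], evidence_card=[2, 2])

model.add_cpds(cpd_ap, cpd_tp, cpd_fp)
assert model.check_model()

# Query: fault-proneness of a class known to participate in an antipattern.
posterior = VariableElimination(model).query(
    variables=["FaultProne"], evidence={"Antipattern": 1})
print(posterior)
```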
32

Security and privacy model for association databases

Kong, Yibing Unknown Date (has links)
With the rapid development of information technology, data availability has improved greatly. Data may be accessed at any time by people from any location. However, threats to data security and privacy have arisen as one of the major problems in the development of information systems, especially information systems that contain personal information. An association database is a personal information system that contains associations between persons. In this thesis, we identify the security and privacy problems of association databases. In order to solve these problems, we propose a new security and privacy model for association databases equipped with both direct access control and inference control mechanisms. In this model, multiple criteria, including not only confidentiality but also privacy and other aspects of security, are used to classify associations. The direct access control method is based on the mandatory model; the inference control method is based on both logic reasoning and probabilistic reasoning (belief networks). My contributions to the security and privacy model for association databases and to inference control in the model include: identification of security and privacy problems in association databases; a formal definition of the association database model; representation of association databases as directed multigraphs; development of axioms for direct access control; specification of the unauthorized inference problem; and a method for unauthorized inference detection and control that includes the development of logic inference rules and a probabilistic inference rule, and the application of belief networks as a tool for unauthorized inference detection and control.
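To make the probabilistic half of such inference control concrete, here is a minimal sketch of the kind of check it could perform: refuse a release when the released fact would push an attacker's belief in a protected association above a threshold. The single-fact Bayes update and all numbers are illustrative assumptions, not the thesis's actual rules.

```python
# A minimal sketch (hypothetical numbers, not the thesis's model) of
# probabilistic inference control: a release is refused when the released
# fact would raise an attacker's posterior belief in a protected
# association above a tolerated threshold.
def posterior(prior, likelihood_given_secret, likelihood_given_no_secret):
    """Bayes' rule: P(protected association | released fact)."""
    num = likelihood_given_secret * prior
    den = num + likelihood_given_no_secret * (1.0 - prior)
    return num / den

PRIOR_SECRET = 0.10   # attacker's prior belief in the protected association
THRESHOLD = 0.50      # maximum tolerated posterior belief

# Assumption: the fact "A and B attended the same meeting" is much more
# likely when the protected association "A knows B" actually holds.
p = posterior(PRIOR_SECRET, likelihood_given_secret=0.9,
              likelihood_given_no_secret=0.2)
print(f"posterior belief = {p:.3f}")
print("release denied" if p > THRESHOLD else "release allowed")
```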
34

Statistical learning methodology for the prognosis of B-chronic lymphocytic leukemia (B-CLL) using flow cytometry data

Λακουμέντας, Ιωάννης 20 April 2011 (has links)
B-Chronic Lymphocytic Leukemia (B-CLL) is known to be the most common type of leukemia in the Western world. Its prognosis remains one of the most interesting decision problems in clinical research and practice. Various clinical and laboratory factors are known to be associated with the evolution of the disease. However, whether the parameters obtained by flow cytometry analysis, which traditionally form the cornerstone of the diagnosis procedure, offer additional prognostic information remains an open issue. In this dissertation, we propose a decision support system for hematologists that provides multiparametric prognosis of B-CLL patients, combining diverse heterogeneous factors (clinical, laboratory, and flow cytometry) associated with the disease. B-CLL diagnosis is primarily derived from the study of the antigenic phenotype of the patients' blood cells, which is performed with flow cytometry analysis. Although the method of analysis is well defined, the process traditionally followed by laboratory experts is characterized by inexactness and subjectivity. As flow cytometry technology advances rapidly, the need for adequate automated (computer-assisted) methodologies for analyzing the data it produces is increasing accordingly. In this context, we present a useful example of automated analysis of flow cytometry data for B-CLL diagnosis that does not require the direct supervision of an expert. The values of the flow cytometry parameters extracted by applying the proposed methodology are afterward incorporated into the prognostic system mentioned above. By reducing the B-CLL prognosis problem to an instance of pattern classification, and by simulating each step of the B-CLL diagnosis procedure with an instance of data clustering, we addressed both problems by applying statistical learning techniques. We focused on Bayesian network methodologies and used the naïve-Bayes model in both cases, in its supervised and unsupervised versions, respectively. The characteristics of the data (especially the flow cytometry data) generated by a pathological underlying mechanism, such as the disease's, did not encourage the direct use of the above model. Therefore, we combined the naïve-Bayes model with suitable heuristic algorithmic procedures to obtain better results, judged not only by some commonly used algorithm evaluation metrics but also by the experts' opinion. Thanks to their ability to incorporate expert knowledge as a priori information initializing their learning methods, Bayesian methodologies are considered the most appropriate for such applications.
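As a concrete picture of the two learning settings paired here, the sketch below trains a supervised naïve-Bayes classifier (the prognosis side) and its unsupervised analogue, a diagonal-covariance Gaussian mixture (the clustering steps of diagnosis, where conditional independence of features given the latent component is exactly the naïve-Bayes assumption). The synthetic data and the scikit-learn models are illustrative assumptions, not the thesis's pipeline.

```python
# A minimal sketch of supervised and unsupervised naive Bayes on synthetic
# data (not B-CLL data): GaussianNB for prognosis-style classification and
# a diagonal-covariance Gaussian mixture for clustering-style diagnosis.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)   # e.g. indolent vs. progressive course

clf = GaussianNB().fit(X, y)                      # supervised version
print("P(progressive | x):", clf.predict_proba(X[:1])[0, 1])

# Unsupervised version: each mixture component plays the role of the
# (latent) class, with features independent given the component.
gm = GaussianMixture(n_components=2, covariance_type="diag",
                     random_state=0).fit(X)
print("cluster assignments:", gm.predict(X[:5]))
```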
36

Comparative Study of Score Metrics for the Structural Learning of Bayesian Networks

Pifer, Aderson Cleber 30 August 2006 (has links)
Bayesian networks are powerful tools, as they represent probability distributions as graphs and can handle the uncertainties of real systems. Since the last decade there has been special interest in learning network structures from data. However, learning the best network structure is an NP-hard problem, so many heuristic algorithms for generating network structures from data have been created. Many of these algorithms use score metrics to generate the network model. This thesis compares three of the most widely used score metrics. The K2 algorithm and two standard benchmarks, ASIA and ALARM, were used to carry out the comparison. Results show that, for both the Heckerman-Geiger and the modified MDL metrics, hyperparameters that strengthen the tendency to select simpler network structures perform better than those with a weaker tendency to do so. The Heckerman-Geiger Bayesian score metric works better than MDL with large datasets, and MDL works better than Heckerman-Geiger with small datasets. The modified MDL gives results similar to Heckerman-Geiger for large datasets and close to MDL for small datasets when its parameters restrict the creation of edges.
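To show what a decomposable score of the MDL/BIC family looks like in practice (the per-node quantity a K2-style search maximizes), here is a small sketch. The dataset and variable names are illustrative, not the ASIA or ALARM benchmarks, and the exact penalty term of the thesis's modified MDL may differ.

```python
# A minimal sketch of an MDL/BIC-style node score: log-likelihood of a node
# given its parents, minus a penalty that grows with the number of free
# parameters (the term that pushes the search toward simpler structures).
import numpy as np
import pandas as pd

def mdl_node_score(data: pd.DataFrame, node: str, parents: list[str]) -> float:
    n = len(data)
    r = data[node].nunique()                      # states of the node
    q = data[parents].drop_duplicates().shape[0] if parents else 1
    ll = 0.0
    groups = data.groupby(parents)[node] if parents else [(None, data[node])]
    for _, col in groups:
        counts = col.value_counts().to_numpy(dtype=float)
        ll += (counts * np.log(counts / counts.sum())).sum()
    penalty = 0.5 * np.log(n) * q * (r - 1)       # free parameters
    return ll - penalty

data = pd.DataFrame({"Smoker": np.random.default_rng(1).integers(0, 2, 500)})
data["Bronchitis"] = (data["Smoker"] &
                      (np.random.default_rng(2).random(500) < 0.7)).astype(int)
# The dependent structure should score higher than the empty one.
print(mdl_node_score(data, "Bronchitis", ["Smoker"]),
      mdl_node_score(data, "Bronchitis", []))
```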
37

Reliability analysis of mechatronic systems

Ben Said Amrani, Nabil 01 July 2019 (has links)
Reliability analysis of mechatronic systems is a relatively recent field of research. These studies must be conducted as early as possible in the design phase, in order to predict, model, and design reliable, available, and safe systems and to reduce the costs and the number of prototypes needed to validate a system. After defining mechatronic systems and the notions of dependability and reliability, we present an overview of existing approaches (quantitative and qualitative) for modeling and assessing reliability, and we highlight the points to improve and the directions to develop. The main difficulties in reliability studies of mechatronic systems are the multi-domain combination (mechanical, electronic, software) and the different functional and dysfunctional aspects (hybrid, dynamic, reconfigurable, and interactive); new approaches to reliability estimation therefore become necessary. We propose a methodology for predictive reliability assessment in the design phase of a mechatronic system that takes into account multi-domain interactions between components, using modeling tools such as Petri nets, dynamic Bayesian networks, and belief functions. Reliability assessment in the development phase must be robust, with sufficient confidence, and must account both for epistemic uncertainties on the random input variables of the model and for uncertainty about the assumed model itself; to this end, an evidential network adapted to our model is used. The proposed approach was applied to the "intelligent actuator" of the Pack'Aero company.
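As an illustration of the time-sliced computation a dynamic Bayesian network performs for reliability, the sketch below propagates a system-state distribution through a simple transition model and reads off reliability at each step. The states and rates are invented for illustration, they are not Pack'Aero data, and the thesis's model additionally handles evidential (belief-function) uncertainty.

```python
# A minimal sketch of slice-by-slice reliability propagation, the core
# computation behind a dynamic Bayesian network reliability model.
# Failure rates are illustrative assumptions.
import numpy as np

# States: 0 = OK, 1 = degraded, 2 = failed (absorbing).
T = np.array([[0.990, 0.008, 0.002],   # P(next state | OK)
              [0.000, 0.970, 0.030],   # P(next state | degraded)
              [0.000, 0.000, 1.000]])  # P(next state | failed)

state = np.array([1.0, 0.0, 0.0])      # system starts in the OK state
for hour in range(1, 1001):
    state = state @ T                  # one time slice of the DBN
    if hour % 250 == 0:
        print(f"t={hour:4d} h  reliability={1.0 - state[2]:.4f}")
```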
38

Study and development of a road collision avoidance system based on ultra-wide-band radar for obstacle detection and identification, dedicated to vulnerable road users

Sadli, Rahmad 12 March 2019 (has links)
In this thesis, we present our work on the identification of targets by an Ultra-Wide-Band (UWB) short-range radar, in particular targets with a low Radar Cross Section (RCS) such as pedestrians and cyclists. This work is composed of two stages: detection and recognition. In the first approach of the detection stage, we proposed and studied a robust UWB radar detector that works on one-dimensional (1-D) radar data (A-scan). It relies on a combination of Higher Order Statistics (HOS) and the well-known CA-CFAR (Cell-Averaging Constant False Alarm Rate) detector. The combination is performed by first applying HOS to the received radar signal in order to suppress a large part of the noise, and then applying the CA-CFAR detector. The result is a UWB radar detector that is robust against noise and works with an adaptive threshold. In order to enhance detection performance, we evaluated the approach of using two-dimensional (2-D) radar data (B-scan). In this 2-D approach, we first proposed a new noise-suppression method that works on B-scan data: a combination of Wavelet Shrinkage Denoising (WSD) and HOS. To evaluate the performance of this method, we carried out a comparative study with other noise-removal techniques in the literature, including Principal Component Analysis (PCA), Singular Value Decomposition (SVD), WSD alone, and HOS alone. The signal-to-noise ratio (SNR) of the final result was computed to compare the effectiveness of the individual techniques. The combination of WSD and HOS removes noise better than the other techniques; in particular, it allows pedestrians and cyclists to be distinguished efficiently from noise and clutter, whereas the other techniques do not show significant results. In the recognition phase, we exploited the data from the 1-D and 2-D approaches obtained from the detection stage. In the first 1-D approach, Support Vector Machines (SVM) and Deep Belief Networks (DBN) were used and evaluated to identify targets based on their radar signatures. The results show that SVM gives good performance for the proposed system, with an average overall recognition rate of 96.24% (96.23%, 95.25%, and 97.23% for the cyclist, the pedestrian, and the car, respectively). In the second 1-D approach, the performance of several DBN architectures composed of different numbers of layers was evaluated and compared. We found that the DBN architecture with four hidden layers performs better than those with two or three hidden layers, achieving up to 97.80% accuracy. This result shows that DBN outperforms SVM (96.24%) for this UWB radar target recognition system using 1-D radar signatures. In the 2-D approach, Convolutional Neural Networks (CNN) were exploited and evaluated. We proposed and investigated three CNN architectures: the first is a modified AlexNet model, the second is an architecture with three convolutional layers and one fully connected layer, and the third is an architecture with five convolutional layers and two fully connected layers. After evaluating and comparing the performance of these three architectures, we found that the third offers the best performance, achieving up to 99.59% accuracy. Finally, we carried out a comparative study of the performance obtained with CNN, DBN, and SVM. The results show that CNN gives the best accuracy, which means that using CNN on 2-D radar data makes it possible to classify UWB radar targets correctly, in particular targets with low RCS and SNR such as cyclists and pedestrians.
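As a concrete rendering of the CA-CFAR stage described above, here is a minimal sketch applied to a synthetic 1-D A-scan. The window sizes, false-alarm rate, and signal are illustrative assumptions, and the HOS preprocessing step is omitted.

```python
# A minimal sketch of a CA-CFAR detector on a 1-D power signal: each cell
# is compared to a threshold scaled from the average of its training cells,
# with guard cells excluded around the cell under test.
import numpy as np

def ca_cfar(x, num_train=16, num_guard=2, pfa=1e-3):
    """Return a boolean detection mask for a 1-D power signal."""
    n = len(x)
    alpha = num_train * (pfa ** (-1.0 / num_train) - 1.0)  # CA-CFAR scaling
    det = np.zeros(n, dtype=bool)
    half = num_train // 2
    for i in range(half + num_guard, n - half - num_guard):
        lead = x[i - num_guard - half : i - num_guard]
        lag = x[i + num_guard + 1 : i + num_guard + 1 + half]
        noise = (lead.sum() + lag.sum()) / num_train       # cell averaging
        det[i] = x[i] > alpha * noise                      # adaptive threshold
    return det

rng = np.random.default_rng(0)
signal = rng.exponential(1.0, 512)   # noise power samples
signal[200] += 30.0                  # a strong echo, e.g. a pedestrian return
print(np.flatnonzero(ca_cfar(signal)))
```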
39

Application of Information Theory and Learning to Network and Biological Tomography

Narasimha, Rajesh 08 November 2007 (has links)
Studying the internal characteristics of a network using measurements obtained from end hosts is known as network tomography. The foremost challenge in measurement-based approaches is the large size of a network, where only a subset of measurements can be obtained because of the inaccessibility of the entire network. As the network becomes larger, a question arises as to how rapidly the monitoring resources (number of measurements or number of samples) must grow to obtain a desired monitoring accuracy. Our work studies the scalability of the measurements with respect to the size of the network. We investigate the issues of scalability and performance evaluation in IP networks, specifically focusing on fault and congestion diagnosis. We formulate network monitoring as a machine learning problem using probabilistic graphical models that infer network states from path-based measurements. We consider the theoretical and practical management resources needed to reliably diagnose congested or faulty network elements and provide fundamental limits on the relationships between the number of probe packets, the size of the network, and the ability to accurately diagnose such network elements. We derive lower bounds on the average number of probes per edge using the variational inference technique proposed in the context of graphical models under noisy probe measurements, and then propose an entropy lower (EL) bound by drawing similarities between the coding problem over a binary symmetric channel and the diagnosis problem. Our investigation is supported by simulation results. For the congestion diagnosis case, we propose a solution based on decoding linear error control codes on a binary symmetric channel for various probing experiments. To identify the congested nodes, we construct a graphical model and infer congestion using the belief propagation algorithm. In the second part of the work, we focus on the development of methods to automatically analyze the information contained in electron tomograms, which is a major challenge since tomograms are extremely noisy. Advances in automated data acquisition in electron tomography have led to an explosion in the amount of data that can be obtained about the spatial architecture of a variety of biologically and medically relevant objects with sizes in the range of 10–1000 nm. A fundamental step in the statistical inference of large amounts of data is to segment relevant 3D features in cellular tomograms. Procedures for segmentation must work robustly and rapidly in spite of the low signal-to-noise ratios inherent in biological electron microscopy. This work evaluates various denoising techniques and then extracts relevant features of biological interest in tomograms of HIV-1 in infected human macrophages and in Bdellovibrio bacterial tomograms recorded at room and cryogenic temperatures. Our approach represents an important step in automating the efficient extraction of useful information from large datasets in biological tomography and in speeding up the process of reducing gigabyte-sized tomograms to relevant byte-sized data. Next, we investigate automatic techniques for segmentation and quantitative analysis of mitochondria in MNT-1 cells imaged using an ion-abrasion scanning electron microscope, and of tomograms of liposomal doxorubicin formulations (Doxil), an anticancer nanodrug, imaged at cryogenic temperatures. A machine learning approach is formulated that exploits texture features; joint image block-wise classification and segmentation is performed by histogram matching, using a nearest-neighbor classifier with the chi-squared statistic as the distance measure.
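To make the path-measurement inference concrete, the sketch below computes, by brute-force enumeration over a toy 3-link topology, the posterior probability that each link is congested given noisy probe outcomes. The topology, prior, and noise level are illustrative assumptions; realistic instances replace enumeration with the belief propagation algorithm mentioned above.

```python
# A minimal sketch of congestion diagnosis from path probes: enumerate all
# joint link states, weight each by prior times noisy-probe likelihood, and
# marginalize to get per-link posteriors. Enumeration stands in for belief
# propagation on this toy example.
from itertools import product

LINKS = ["e1", "e2", "e3"]
PATHS = {"p1": ["e1", "e2"], "p2": ["e2", "e3"]}   # links each probe traverses
OBSERVED = {"p1": 1, "p2": 0}                      # 1 = probe saw congestion
PRIOR = 0.1                                        # P(a link is congested)
FLIP = 0.05                                        # probe measurement noise

def joint_prob(state):
    """P(link states, observations): prior times noisy-probe likelihood."""
    p = 1.0
    for e in LINKS:
        p *= PRIOR if state[e] else 1.0 - PRIOR
    for path, links in PATHS.items():
        truth = int(any(state[e] for e in links))  # path congested iff any link is
        p *= 1.0 - FLIP if OBSERVED[path] == truth else FLIP
    return p

posterior = {e: 0.0 for e in LINKS}
z = 0.0
for bits in product([0, 1], repeat=len(LINKS)):
    state = dict(zip(LINKS, bits))
    p = joint_prob(state)
    z += p
    for e in LINKS:
        if state[e]:
            posterior[e] += p

for e in LINKS:
    print(f"P({e} congested | probes) = {posterior[e] / z:.3f}")
```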
