71

A new model for worm detection and response: development and evaluation of a new model based on knowledge discovery and data mining techniques to detect and respond to worm infection by integrating incident response, security metrics and apoptosis

Mohd Saudi, Madihah January 2011 (has links)
Worms have evolved to incorporate a range of sophisticated techniques, which makes detection and response much harder and slower than in the past. This thesis therefore builds the STAKCERT (Starter Kit for Computer Emergency Response Team) model to detect worm attacks and respond to them more efficiently. The novelty and strength of the STAKCERT model lie in its method, which consists of the STAKCERT KDD processes together with the STAKCERT worm classification, the STAKCERT relational model and the STAKCERT worm apoptosis algorithm. The new concept introduced in this model, apoptosis, is borrowed from human immunology and mapped onto a security perspective. The results are validated by applying security metrics that assign the weight and severity values used to trigger apoptosis. To optimise performance, the model combines standard operating procedures (SOPs) for worm incident response involving static and dynamic analysis, knowledge discovery in databases (KDD) techniques for modelling, and data mining algorithms. The STAKCERT model produced encouraging results and outperformed comparable existing work on worm detection, achieving an overall accuracy of 98.75% with a false positive rate of 0.2% and a false negative rate of 1.45%. Worm response achieved an accuracy of 98.08%, which other researchers can use as a baseline for comparison in future work.
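The apoptosis mechanism described above is triggered by security metrics that assign weight and severity values to observed behaviour. As a rough illustration only, the sketch below shows how such a weighted-severity gate could be coded; the behaviour names, weights and threshold are hypothetical and are not the values defined in the STAKCERT model.

```python
# Illustrative sketch only: a weighted-severity gate for an automated
# "apoptosis" response. Behaviour names, weights and the threshold are
# hypothetical, not the security metrics defined in the STAKCERT model.

WEIGHTS = {                       # hypothetical security-metric weights
    "modifies_registry_run_key": 0.2,
    "mass_mailing_activity": 0.3,
    "opens_backdoor_port": 0.3,
    "disables_security_tools": 0.2,
}
APOPTOSIS_THRESHOLD = 0.6         # hypothetical severity threshold


def severity_score(observed: dict) -> float:
    """Sum the weights of the worm behaviours actually observed on the host."""
    return sum(w for name, w in WEIGHTS.items() if observed.get(name, False))


def should_trigger_apoptosis(observed: dict) -> bool:
    """Trigger the containment/self-termination response above the threshold."""
    return severity_score(observed) >= APOPTOSIS_THRESHOLD


incident = {"mass_mailing_activity": True, "opens_backdoor_port": True}
print(severity_score(incident))            # 0.6
print(should_trigger_apoptosis(incident))  # True
```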
72

Behavioral study of interestingness measures for knowledge extraction

Grissa, Dhouha 02 December 2013 (has links)
The search for interesting association rules is an important and active field in data mining. Since the algorithms used in knowledge discovery in databases (KDD) tend to generate a large number of rules, it is difficult for users to select the genuinely interesting knowledge themselves. To address this problem, automatic post-filtering of the rules is essential to significantly reduce their number. Hence, many interestingness measures have been proposed in the literature to filter and/or sort the discovered rules. As interestingness depends on both user preferences and the data, these measures are classified into two categories: subjective measures (user-driven) and objective measures (data-driven). We focus on the study of objective measures. Nevertheless, there is a plethora of objective measures in the literature, which makes it harder for the user to choose an appropriate one. Our goal is therefore to assist this choice by proposing groups of similar measures through categorization approaches. The thesis develops two approaches to help the user choose an objective measure: (1) a formal study based on the definition of a set of measure properties that lead to a sound evaluation of the measures; (2) an experimental study of the behaviour of the various interestingness measures from a data-analysis point of view. For the first approach, we perform a thorough theoretical study of a large number of measures against several formal properties. To do so, we first formalize these properties in order to remove any ambiguity about them, and then examine, for each objective interestingness measure, the presence or absence of the relevant characteristic properties. This evaluation of the measures then serves as the starting point for their categorization. Different clustering methods are applied: (i) non-overlapping methods (agglomerative hierarchical clustering, CAH, and k-means), which yield disjoint groups of measures, and (ii) an overlapping method (Boolean factor analysis), which yields overlapping groups of measures. For the second approach, we propose an empirical study of the behaviour of about sixty measures on datasets of different natures, with an experimental methodology that seeks to identify groups of measures with empirically similar behaviour. We then confront the two classifications, formal and empirical, in order to validate and strengthen the first approach. The two approaches are complementary and together help the user make the right choice of interestingness measure for a given application.
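To make the notion of objective measures, and of grouping them by behaviour, concrete, here is a small sketch that computes four classical measures on toy rule counts and clusters the measures by how similarly they rank the rules. The rules, the choice of measures and the clustering settings are illustrative assumptions, not the sixty measures or the exact protocol studied in the thesis.

```python
# Illustrative sketch: four classical objective interestingness measures are
# computed from the contingency counts (n, n_a, n_b, n_ab) of rules a -> b,
# then the measures are grouped by how similarly they rank the rules
# (1 - Spearman correlation as a distance, average-link hierarchical clustering).
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.stats import spearmanr

def support(n, n_a, n_b, n_ab):    return n_ab / n
def confidence(n, n_a, n_b, n_ab): return n_ab / n_a
def lift(n, n_a, n_b, n_ab):       return n * n_ab / (n_a * n_b)
def leverage(n, n_a, n_b, n_ab):   return n_ab / n - (n_a / n) * (n_b / n)

MEASURES = {"support": support, "confidence": confidence,
            "lift": lift, "leverage": leverage}

# Toy contingency counts for four rules: (n, n_a, n_b, n_ab).
rules = [(1000, 300, 400, 200), (1000, 150, 500, 120),
         (1000, 600, 200, 90), (1000, 250, 250, 100)]

# One row per measure, one column per rule.
values = np.array([[f(*r) for r in rules] for f in MEASURES.values()])

corr, _ = spearmanr(values, axis=1)          # rank agreement between measures
dist = np.clip(1.0 - corr, 0.0, None)        # turn agreement into a distance
condensed = dist[np.triu_indices_from(dist, k=1)]
groups = fcluster(linkage(condensed, method="average"), t=2, criterion="maxclust")
print(dict(zip(MEASURES, groups)))           # e.g. support and leverage fall together
```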
73

Evaluation of total voltage harmonic distortion at the industrial point of common coupling using a measurement-based KDD process

OLIVEIRA, Edson Farias de 27 March 2018 (has links)
In recent decades, manufacturing industry has introduced increasingly fast and energy-efficient products for residential, commercial and industrial use. Because of their non-linearity, however, these loads have contributed significantly to the rise in voltage harmonic distortion levels driven by the currents they draw, as reflected in the power quality indicators of the Brazilian electricity distribution system. The steady increase in distortion levels, especially at the point of common coupling, is now a major concern for utilities and consumers alike, because of the power quality losses it causes both in the supply and in consumer installations, and it has motivated numerous studies on the subject. As a contribution to this topic, this thesis proposes a procedure based on the Knowledge Discovery in Databases (KDD) process to identify the loads responsible for voltage harmonic distortion at the point of common coupling. The proposed methodology uses computational intelligence and data mining techniques to analyse data collected by power quality meters installed on the main loads and at the consumer's point of common coupling, and thus to establish the correlation between the harmonic currents of the non-linear loads and the harmonic distortion at that point. The process consists of analysing the loads and the layout of the site where the methodology will be applied, choosing and installing the power quality meters, and applying the complete KDD process, including data collection, selection, cleaning, integration, transformation and reduction, mining, interpretation and evaluation. Decision Tree and Naïve Bayes data mining techniques were applied, and several algorithms were tested to find the one giving the most significant results for this type of analysis. The results show that the KDD process is applicable to the analysis of total voltage harmonic distortion at the point of common coupling, and the thesis contributes a complete description of each step of the process. The procedure was evaluated with different data-balancing ratios, training/test splits and scenarios covering different analysis shifts, and it performed well, indicating that it can be applied to other types of consumers and to distribution utilities. For the chosen application and across the different scenarios, the most impactful load for the collected data set was the seventh current harmonic of the air-conditioning units.
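As a rough illustration of the mining step, the sketch below trains a decision tree and a Naïve Bayes classifier to relate per-load harmonic-current features to a high/low voltage-THD class at the point of common coupling. The data are synthetic and the feature names (including the air-conditioning seventh harmonic) are hypothetical stand-ins for the real meter recordings used in the thesis.

```python
# Illustrative sketch of the mining step only: classify voltage THD at the
# point of common coupling (high/low) from per-load harmonic-current features.
# Data are synthetic and feature names hypothetical; the thesis uses real
# power-quality meter recordings.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: 5th and 7th harmonic currents (A) of two load groups.
X = rng.uniform(0, 10, size=(n, 4))
# Synthetic label: THD is "high" when the air-conditioning 7th harmonic dominates.
y = (0.6 * X[:, 3] + 0.2 * X[:, 1] + rng.normal(0, 1, n) > 4.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
nb = GaussianNB().fit(X_tr, y_tr)
for name, model in (("decision tree", tree), ("naive Bayes", nb)):
    print(name, round(accuracy_score(y_te, model.predict(X_te)), 3))

# The tree's feature importances hint at which load/harmonic drives the THD class.
print(dict(zip(["load1_h5", "load1_h7", "ac_h5", "ac_h7"],
               tree.feature_importances_.round(2))))
```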
74

Noise identification and estimation in DSL networks: an approach based on computational intelligence

FARIAS, Fabrício de Souza 25 January 2012 (has links)
This work proposes the use of computational intelligence techniques to identify and estimate noise power in Digital Subscriber Line (DSL) networks in real time. A methodology based on Knowledge Discovery in Databases (KDD) is used for real-time noise detection and estimation: KDD is applied to select, pre-process and transform the data before the data mining step. For noise identification, the traditional backpropagation algorithm for Artificial Neural Networks (ANN) is applied to identify the predominant noise type in the information collected from the user's modem and the DSL Access Multiplexer (DSLAM). For noise estimation, linear regression and a hybrid algorithm combining fuzzy logic with linear regression are applied to estimate the noise power in watts. The results show that computational intelligence algorithms such as ANNs are promising for noise identification in DSL networks, and that algorithms such as linear regression and fuzzy with linear regression (FRL) are promising for noise estimation in DSL networks. / CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico
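The identification/estimation split described above can be illustrated with a minimal sketch, assuming synthetic line diagnostics: an MLP trained with backpropagation classifies the predominant noise type, and a linear regression estimates crosstalk power. The features, labels and data are invented; the dissertation works on real modem/DSLAM measurements and also evaluates a fuzzy-plus-linear-regression hybrid, which is not shown.

```python
# Illustrative sketch only: an MLP (backpropagation) classifier for the
# predominant noise type and a linear regression for crosstalk power.
# Features, labels and data are synthetic stand-ins for modem/DSLAM diagnostics.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n = 600
X = rng.normal(size=(n, 6))                              # hypothetical line diagnostics
noise_type = (X[:, 2] > 0).astype(int) + (X[:, 3] > 1)   # toy noise classes 0/1/2
crosstalk_w = 1e-6 * np.abs(X[:, 0] + 0.5 * X[:, 1]) + 1e-8 * rng.random(n)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)
clf.fit(X, noise_type)                                   # identification step

reg = LinearRegression().fit(X, crosstalk_w)             # estimation step (watts)

sample = X[:1]
print("predicted noise class:", clf.predict(sample)[0])
print("estimated crosstalk power (W):", reg.predict(sample)[0])
```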
75

An agent to support multicriteria decision making in participatory public management

Amorim, Leonardo Afonso 26 September 2014 (has links)
Decision making in public management involves a high degree of complexity because financial resources are insufficient to meet all the demands coming from the various sectors of society, and economic activities are often in conflict with social or environmental causes. Another important aspect of decision making in public management is the inclusion of diverse stakeholders, e.g. public management experts, small business owners, shopkeepers, teachers, representatives of social and professional groups, and citizens themselves. The goal of this master's thesis is to present two computational agents that aid decision making in public management as part of the ADGEPA project: a Miner Agent (MA) and a Decision Support Agent (DSA). The MA uses data mining techniques and the DSA uses multicriteria analysis to point out relevant issues. ADGEPA (Digital Assistant for Participatory Public Management) is an innovative project to support participatory decision making in the management of public resources. The main contribution of this thesis is the ability to assist in discovering patterns and correlations between socio-environmental aspects that are not obvious and that can vary from community to community. This can help the public manager make systemic decisions that, besides attacking the main problem of a given region, also reduce or solve other problems. Validation of the results depends on real data and on analysis by public managers; in this work, the data were simulated. / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
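As one possible reading of the decision-support agent's multicriteria ranking, here is a minimal weighted-sum sketch; the criteria, weights and scores are invented, and the actual multicriteria method and data used by the ADGEPA agents may differ.

```python
# Illustrative sketch: rank socio-environmental problems with a simple
# weighted-sum multicriteria score. Criteria, weights and scores are invented;
# the ADGEPA agents may use a different multicriteria method and real data.

CRITERIA_WEIGHTS = {"population_affected": 0.40, "urgency": 0.35, "low_cost": 0.25}

# Scores normalised to [0, 1]; "low_cost" is higher when the problem is cheaper to solve.
problems = {
    "open_sewage_district_A": {"population_affected": 0.9, "urgency": 0.8, "low_cost": 0.3},
    "deforestation_area_B":   {"population_affected": 0.4, "urgency": 0.6, "low_cost": 0.7},
    "waste_collection_C":     {"population_affected": 0.7, "urgency": 0.5, "low_cost": 0.9},
}

def weighted_score(scores: dict) -> float:
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

for name in sorted(problems, key=lambda p: weighted_score(problems[p]), reverse=True):
    print(f"{name}: {weighted_score(problems[name]):.2f}")
```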
76

Methodology for center-of-gravity analysis of published international standards: an innovative recommendation approach

Peoples, Bruce E. 08 April 2016 (has links)
“Standards make a positive contribution to the world we live in. They facilitate trade, spread knowledge, disseminate innovative advances in technology, and share good management and conformity assessment practices.” There is a multitude of standards organizations and standards consortia producing market-relevant standards, specifications, and technical reports in the domain of Information and Communication Technology (ICT). With ICT-related standards and specifications numbering in the thousands, it is not readily apparent to users how these standards inter-relate to form the basis of technical interoperability. There is a need to develop and document a process to identify how standards inter-relate to form a basis of interoperability in multiple contexts: at a general horizontal technology level that covers all domains, and within specific vertical technology domains and sub-domains. By analyzing which standards inter-relate through normative referencing, key standards can be identified as technical centers of gravity, allowing identification of the specific standards required for the successful implementation of the standards that normatively reference them, and forming a basis for interoperability across horizontal and vertical technology domains. This thesis focuses on defining a methodology to analyze ICT standards and identify normatively referenced standards that form technical centers of gravity, using Data Mining (DM) and Social Network Analysis (SNA) graph technologies as the basis of analysis. As a proof of concept, the methodology focuses on the International Standards (IS) published by the International Organization for Standardization/International Electrotechnical Commission, Joint Technical Committee 1, Sub-committee 36, Learning, Education, and Training (ISO/IEC JTC1 SC36). The process is designed to be scalable to larger document sets within ISO/IEC JTC1, covering all JTC1 sub-committees, and possibly to other Standards Development Organizations (SDOs). Chapter 1 reviews the literature on previous standards-analysis projects and on the components used in this thesis, such as data mining and graph theory. Chapter 2 focuses on identifying a dataset of published International Standards needed for the analysis, used to test the developed methodology, and on forming specific technology domains and sub-domains. Chapter 3 describes the methodology developed to analyze published International Standards documents and to create and analyze the graphs that identify technical centers of gravity. Chapter 4 presents the analysis of the data, which identifies technical center-of-gravity standards for the ICT learning, education, and training standards produced by ISO/IEC JTC1 SC36. Conclusions of the analysis are contained in Chapter 5, and recommendations for further research using the output of the developed methodology are contained in Chapter 6.
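A minimal sketch of the center-of-gravity idea, assuming an invented edge list: each normative reference is a directed edge, and standards that many others depend on stand out under simple graph centrality measures (in-degree, PageRank). The real analysis is built from the ISO/IEC JTC1 SC36 document set and a fuller SNA toolbox.

```python
# Illustrative sketch: treat "A normatively references B" as a directed edge and
# flag heavily referenced standards as candidate centres of gravity. The edge
# list is invented; the thesis derives it from the ISO/IEC JTC1 SC36 corpus.
import networkx as nx

references = [  # (referencing standard, normatively referenced standard) - hypothetical
    ("ISO/IEC 19796-1", "ISO 9000"),
    ("ISO/IEC 19788-2", "ISO/IEC 19788-1"),
    ("ISO/IEC 19788-3", "ISO/IEC 19788-1"),
    ("ISO/IEC 29140-2", "ISO/IEC 19788-1"),
    ("ISO/IEC 29187-1", "ISO/IEC 15944-1"),
]

g = nx.DiGraph(references)
in_degree = dict(g.in_degree())   # how many standards depend on each one
pagerank = nx.pagerank(g)         # a smoother centrality over the same graph

for std in sorted(pagerank, key=pagerank.get, reverse=True)[:3]:
    print(std, "in-degree:", in_degree[std], "pagerank:", round(pagerank[std], 3))
```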
77

Document modeling and search for commonalities - Proposal of a framework for managing anomaly reports to facilitate corrective and preventive maintenance

Claude, Grégory 16 May 2012 (has links) (PDF)
The daily practice of an activity generates a body of knowledge that takes the form of know-how, mastery, and competence that a person acquires over time. To preserve it, knowledge capitalization has become an essential activity in companies. Our research aims to model and implement a system that extracts and formalizes the knowledge arising from anomalies occurring in an industrial production context, and integrates it into a framework that facilitates corrective and preventive maintenance. This framework structures knowledge in the form of groups of anomalies. These groups can be likened to patterns: they represent a problem with which one or more solutions are associated. They are not defined a priori; it is the analysis of past anomalies that generates relevant groups, which can evolve as new anomalies are added. To identify these patterns, which carry the knowledge, a complete knowledge extraction and formalization process is followed: Knowledge Discovery in Databases. This process has been applied in a wide variety of domains. We give it a new dimension here: the processing of anomalies, and more particularly those that occur during industrial production processes. Its generic steps, from simple data selection to the interpretation of the patterns that carry the knowledge, are each given a specific treatment relevant to our application context.
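One way to picture the "groups of anomalies" step is to cluster anomaly reports by textual similarity. The sketch below uses TF-IDF and k-means on invented reports purely as an illustration; the framework in the thesis may rely on a different representation and grouping technique within the KDD process.

```python
# Illustrative sketch only: group free-text anomaly reports by similarity
# (TF-IDF + k-means) as one possible instantiation of the "groups of anomalies"
# step. The reports are invented.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

reports = [
    "conveyor motor overheats after two hours of operation",
    "overheating of the conveyor motor, thermal cutoff triggered",
    "paint nozzle clogged, irregular spray pattern on body panels",
    "spray pattern irregular, nozzle partially blocked with dried paint",
]

X = TfidfVectorizer(stop_words="english").fit_transform(reports)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for report, label in zip(reports, labels):
    print(label, report)   # similar anomalies should land in the same group
```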
78

The contribution of image fusion and classification techniques to aiding non-cooperative radar target recognition

Jdey Aloui, Imen 23 January 2014 (has links)
The automatic recognition of non-cooperative targets is very important in various fields, notably for applications in uncertain aerial and maritime environments. It is therefore necessary to introduce innovative methods for processing and identifying radar targets. The proposed methodology is based on the Knowledge Discovery from Data (KDD) process to develop a complete recognition chain from radar images, attempting to optimize every step of the processing chain. The experimental set-up relies on an ISAR image acquisition system in the anechoic chamber of ENSTA Bretagne, which allows the quality of the inputs to the recognition (KDD) process to be controlled. We studied the stages of the system from acquisition to the interpretation and evaluation of recognition results, focusing on the central stage, data mining, considered the heart of the process. This stage comprises two main phases: classification, and the combination of classifier outputs, called decision fusion. We showed that this last phase plays an important role in improving the results used for decision making, by taking into account the imperfections of radar data, notably uncertainty and imprecision. The results obtained with different classification techniques in a first step (kNN, SVM and MLP) and with decision-fusion techniques in a second step (Bayes, majority vote, belief theory, fuzzy fusion) are the subject of an analytical and comparative study in terms of performance.
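As a small illustration of the decision-fusion phase, the sketch below combines kNN, SVM and MLP classifiers by majority vote on synthetic feature vectors standing in for ISAR image features. The Bayesian, belief-function and fuzzy fusion schemes also studied in the thesis are not reproduced here.

```python
# Illustrative sketch of the decision-fusion phase only: a majority vote over
# kNN, SVM and MLP classifiers on synthetic data standing in for ISAR features.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, n_informative=6,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

members = [("knn", KNeighborsClassifier(n_neighbors=5)),
           ("svm", SVC(kernel="rbf", gamma="scale")),
           ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                 random_state=0))]

for name, clf in members:                      # individual classifier accuracies
    clf.fit(X_tr, y_tr)
    print(name, round(accuracy_score(y_te, clf.predict(X_te)), 3))

fusion = VotingClassifier(estimators=members, voting="hard").fit(X_tr, y_tr)
print("majority vote", round(accuracy_score(y_te, fusion.predict(X_te)), 3))
```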
79

Options for presenting KDD results on the Web

Koválik, Tomáš January 2015 (has links)
This diploma thesis covers KDD analysis of data and options for presenting KDD results on the Web. The thesis is divided into three main sections that follow its overall process. The first section presents the theoretical background needed to understand the problem: the notions of data matrix and domain knowledge, the CRISP-DM methodology, the GUHA method, the LISp-Miner system, and the implementation of the GUHA method in LISp-Miner, including a description of its core procedures 4ft-Miner and CF-Miner. The second section is dedicated to the first goal of the thesis: it briefly summarizes the analysis carried out during the pre-analysis phase and then describes the analysis of domain knowledge in a given data set. The third section addresses the second goal, the presentation of KDD results on the Web. It gives a brief theoretical overview of the technologies used and then describes the development of an export script for automatically generating a website from the results found with LISp-Miner, including a description of the output structure and recommendations for working with the LISp-Miner system.
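For readers unfamiliar with the 4ft-Miner procedure mentioned above, the sketch below evaluates a GUHA-style founded-implication quantifier on a fourfold (4ft) contingency table; the counts and thresholds are made up, and LISp-Miner's own engine and the export script developed in the thesis are not reproduced.

```python
# Illustrative sketch: a GUHA-style "founded implication" quantifier evaluated
# on a fourfold (4ft) table, the kind of verification 4ft-Miner performs.
# Counts and thresholds are made up.

def founded_implication(a: int, b: int, c: int, d: int,
                        p: float = 0.9, base: int = 50) -> bool:
    """4ft table: a = rows satisfying antecedent and succedent, b = antecedent
    only, c = succedent only, d = neither. The rule holds if its confidence
    a / (a + b) reaches p and its support count a reaches base."""
    return a >= base and a / (a + b) >= p

print(founded_implication(a=120, b=10, c=200, d=670))  # True
print(founded_implication(a=30, b=2, c=300, d=668))    # False: support too low
```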
80

A new model for worm detection and response. Development and evaluation of a new model based on knowledge discovery and data mining techniques to detect and respond to worm infection by integrating incident response, security metrics and apoptosis.

Mohd Saudi, Madihah January 2011 (has links)
Worms have evolved to incorporate a range of sophisticated techniques, which makes detection and response much harder and slower than in the past. This thesis therefore builds the STAKCERT (Starter Kit for Computer Emergency Response Team) model to detect worm attacks and respond to them more efficiently. The novelty and strength of the STAKCERT model lie in its method, which consists of the STAKCERT KDD processes together with the STAKCERT worm classification, the STAKCERT relational model and the STAKCERT worm apoptosis algorithm. The new concept introduced in this model, apoptosis, is borrowed from human immunology and mapped onto a security perspective. The results are validated by applying security metrics that assign the weight and severity values used to trigger apoptosis. To optimise performance, the model combines standard operating procedures (SOPs) for worm incident response involving static and dynamic analysis, knowledge discovery in databases (KDD) techniques for modelling, and data mining algorithms. The STAKCERT model produced encouraging results and outperformed comparable existing work on worm detection, achieving an overall accuracy of 98.75% with a false positive rate of 0.2% and a false negative rate of 1.45%. Worm response achieved an accuracy of 98.08%, which other researchers can use as a baseline for comparison in future work. / Ministry of Higher Education, Malaysia and Universiti Sains Islam Malaysia (USIM)
