41

Aplicação do processo de descoberta de conhecimento em dados do poder judiciário do estado do Rio Grande do Sul / Applying the Knowledge Discovery in Database (KDD) Process to Data of the Judiciary Power of Rio Grande do Sul

Schneider, Luís Felipe January 2003 (has links)
The search for useful, previously unknown knowledge and information in large sets of stored data opened a field that was formalized in 1989 under the name Knowledge Discovery in Databases (KDD). KDD is a process made up of iterative and interactive stages or phases; this work is based on the CRISP-DM methodology. Regardless of the methodology used, the process has a phase that may be considered the core of KDD, data mining (or modeling, in CRISP-DM terms), with which the notion of a problem-type class is associated, as well as the techniques and algorithms that may be employed in a KDD application. The study highlights the association and clustering classes, the techniques related to them, and the Apriori and K-means algorithms, all within the chosen data mining tool, Weka (Waikato Environment for Knowledge Analysis). The research plan applies the KDD process to the core activity of the Judiciary Power of Rio Grande do Sul, the judgment of court cases, looking for findings on how the procedural classification relates to the incidence of proceedings, processing time, the kinds of sentences handed down and the presence of a hearing. It also searches for defendant profiles in criminal proceedings, using characteristics such as sex, marital status, education, occupation and race. Chapters 2 and 3 present the theoretical grounding of KDD and detail the CRISP-DM methodology; Chapter 4 covers the application carried out on the Judiciary Power data; and Chapter 5 presents the conclusions.
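As a rough illustration of the two algorithms this abstract names, the sketch below runs Weka's Apriori and SimpleKMeans implementations on an ARFF file. The file name, support/confidence thresholds and number of clusters are assumptions for the example, not values taken from the thesis.

```java
import weka.associations.Apriori;
import weka.clusterers.SimpleKMeans;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CourtCasesMining {
    public static void main(String[] args) throws Exception {
        // Load a dataset in ARFF format (file name is hypothetical).
        Instances data = new DataSource("court_cases.arff").getDataSet();

        // Association rules with Apriori (expects nominal attributes):
        // mine the 20 best rules above 10% support and 80% confidence.
        Apriori apriori = new Apriori();
        apriori.setNumRules(20);
        apriori.setLowerBoundMinSupport(0.10);
        apriori.setMinMetric(0.8);            // minimum confidence
        apriori.buildAssociations(data);
        System.out.println(apriori);          // prints the discovered rules

        // Clustering with K-means: group cases into 4 clusters (k chosen arbitrarily here).
        SimpleKMeans kMeans = new SimpleKMeans();
        kMeans.setNumClusters(4);
        kMeans.setSeed(1);
        kMeans.buildClusterer(data);
        System.out.println(kMeans);           // prints centroids and cluster sizes
    }
}
```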
42

Uso de medidas de desempenho e de grau de interesse para análise de regras descobertas nos classificadores / Use of performance and interestingness measures for the analysis of rules discovered by classifiers

Rocha, Mauricio Rêgo Mota da 20 August 2008 (has links)
The process of knowledge discovery in databases has become necessary because of the large amount of data now stored in company databases; properly explored, these data can help managers with decision-making in organizations. The process consists of several steps, among which data mining stands out: the step where techniques are applied to obtain knowledge that cannot be reached through traditional methods of analysis. Besides the technique, the data mining step is also where the mining task to be used is chosen. Data mining usually produces a large number of rules that are often not important, relevant or interesting to the end user, which makes it necessary to analyze the discovered knowledge in the post-processing step. In post-processing, both performance measures and interestingness measures are used to point out the most interesting, useful and relevant rules. In this work, the decision tree and classification rule mining techniques were applied with the Weka tool (Waikato Environment for Knowledge Analysis), through the classification algorithms J48 and PART respectively. For post-processing, a package of functions and procedures was implemented to compute both performance measures and rule interestingness measures, and queries were developed to select the most important rules according to those measures.
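The abstract does not list which measures the implemented package computes; as a minimal sketch of that kind of post-processing, the class below derives three standard rule measures (support, confidence, lift) from a 2x2 contingency table. The counts in the example are illustrative, not taken from the thesis data.

```java
/** Minimal sketch of rule measures computed from a 2x2 contingency table
 *  for a rule A -> B: n(A and B), n(A and not B), n(not A and B), n(not A and not B). */
public final class RuleMeasures {
    private final double ab, aNotB, notAb, notAnotB, n;

    public RuleMeasures(long ab, long aNotB, long notAb, long notAnotB) {
        this.ab = ab; this.aNotB = aNotB; this.notAb = notAb; this.notAnotB = notAnotB;
        this.n = ab + aNotB + notAb + notAnotB;
    }

    public double support()    { return ab / n; }                            // P(A and B)
    public double confidence() { return ab / (ab + aNotB); }                 // P(B | A)
    public double lift()       { return confidence() / ((ab + notAb) / n); } // P(B|A) / P(B)

    public static void main(String[] args) {
        // Illustrative counts only.
        RuleMeasures m = new RuleMeasures(120, 30, 200, 650);
        System.out.printf("support=%.3f confidence=%.3f lift=%.3f%n",
                m.support(), m.confidence(), m.lift());
    }
}
```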
43

Využití data miningových metod při zpracování dat z demografických šetření / Using data mining methods for demographic survey data processing

Fišer, David January 2015 (has links)
The goal of the thesis was to describe and demonstrate the principles of the process of knowledge discovery in databases, i.e. data mining (DM). The theoretical part describes selected data mining methods and the basic principles of those DM techniques. In the second part, a DM task is carried out in accordance with the CRISP-DM methodology. The practical part is divided into two sub-parts and uses data from the American Community Survey. The first sub-part contains a classification task whose goal was to determine whether the selected DM techniques can be used to fill in missing values in survey data; the success rate of the classification and the subsequent prediction of values for the selected attributes ranged from 55 to 80%. The second sub-part focuses on finding interesting knowledge using association rules and the GUHA method. Keywords: data mining, knowledge discovery in databases, statistical surveys, missing values, classification, association rules, GUHA method, ACS
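The classification-based treatment of missing values described here can be sketched as follows: treat the incomplete attribute as the class, train a classifier on the complete rows, and predict the rest. The classifier (Weka's J48), the file name and the attribute "education" are assumptions for the example, not details given in the abstract.

```java
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class MissingValueImputation {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("acs_sample.arff").getDataSet(); // hypothetical file
        // Treat the attribute with missing entries as the class to be predicted.
        data.setClassIndex(data.attribute("education").index());         // hypothetical attribute

        Instances complete = new Instances(data, 0);   // rows where the value is known
        Instances incomplete = new Instances(data, 0); // rows where it is missing
        for (int i = 0; i < data.numInstances(); i++) {
            if (data.instance(i).classIsMissing()) incomplete.add(data.instance(i));
            else complete.add(data.instance(i));
        }

        J48 model = new J48();            // any classifier could be plugged in here
        model.buildClassifier(complete);

        for (int i = 0; i < incomplete.numInstances(); i++) {
            double predicted = model.classifyInstance(incomplete.instance(i));
            incomplete.instance(i).setClassValue(predicted); // fill in the missing value
        }
        System.out.println(incomplete);
    }
}
```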
44

Porovnatelnost dat v dobývání znalostí z databází / Data comparability in knowledge discovery in databases

Horáková, Linda January 2017 (has links)
The master thesis is focused on the analysis of data comparability and commensurability in datasets used for obtaining knowledge with data mining methods. Data comparability is one aspect of data quality and is crucial for correct and applicable results of data mining tasks. The aim of the theoretical part of the thesis is to briefly describe the field of knowledge discovery, define the specifics of mining aggregated data, and discuss the notions of comparability and commensurability; the main part then focuses on the process of knowledge discovery. These findings are applied in the practical part, whose main goal is to define a general methodology for discovering potential data comparability problems in analyzed data. The methodology is based on the analysis of a real dataset containing daily product sales and is finally applied to data from the field of public budgets.
45

Génération de connaissances à l’aide du retour d’expérience : application à la maintenance industrielle / Knowledge generation using experience feedback : application to industrial maintenance

Potes Ruiz, Paula Andrea 24 November 2014 (has links)
The research work presented in this thesis relates to knowledge extraction from past experiences in order to improve the performance of industrial processes. Knowledge is nowadays considered an important strategic resource providing a decisive competitive advantage to organizations. Knowledge management (especially experience feedback) is used to preserve and enhance the information related to a company's activities in order to support decision-making and create new knowledge from the intangible heritage of the organization. In that context, advances in information and communication technologies play an essential role in gathering and processing knowledge. The generalised implementation of industrial information systems such as ERPs (Enterprise Resource Planning) makes available a large amount of data related to past events or historical facts, whose reuse is becoming a major issue. However, these fragments of knowledge (past experiences) are highly contextualized and require specific methodologies to be generalized. Given the great potential of the information collected in companies as a source of new knowledge, we suggest in this work an original approach to generate new knowledge based on the analysis of past experiences, drawing on the complementarity of two scientific threads: Experience Feedback (EF) and Knowledge Discovery in Databases (KDD) techniques. The proposed EF-KDD combination focuses mainly on: i) modelling the collected experiences using a knowledge representation formalism in order to facilitate their future exploitation, and ii) applying data mining techniques in order to extract new knowledge in the form of rules. These rules must necessarily be evaluated and validated by experts of the industrial domain before their reuse and/or integration into the industrial system. Throughout this approach, we have given a privileged position to Conceptual Graphs (CGs), the knowledge representation formalism chosen to facilitate the storage, processing and understanding of the extracted knowledge by the user, with a view to future exploitation. The thesis is divided into four chapters. The first chapter is a state of the art addressing the generalities of the two scientific threads that contribute to our proposal: EF and KDD. The second chapter presents the proposed EF-KDD approach and the tools used for the generation of new knowledge from the available information describing past experiences. The third chapter suggests a structured methodology for interpreting and evaluating the usefulness of the extracted knowledge during the post-processing phase of the KDD process. Finally, the last chapter discusses real case studies from the industrial maintenance domain to which the proposed approach has been applied.
46

Použití metod dobývání znalostí v oblasti kardiochirurgie / Application of knowledge discovery methods in the field of cardiac surgery

Čech, Bohuslav January 2014 (has links)
This thesis demonstrates the practical use of knowledge discovery in the field of cardiac surgery. Tasks posed by the Department of Cardiac Surgery of University Hospital Olomouc are solved with the GUHA method and the LISp-Miner system, using mitral valve surgery data from clinical practice between 2002 and 2011. The theoretical part includes a chapter on KDD -- types of tasks, methods and methodology -- and a chapter on cardiac surgery -- the anatomy and function of the heart, mitral valve disease, and diagnostic methods including their quantification. The practical part presents solutions to the tasks, and the whole process is described in the spirit of CRISP-DM.
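LISp-Miner's internals are not shown in the abstract; as a rough illustration of the GUHA idea it builds on, the sketch below evaluates a candidate hypothesis on a four-fold (4ft) contingency table using the founded-implication quantifier. The thresholds and counts are illustrative, not values from the thesis or from LISp-Miner.

```java
/** Sketch of the core GUHA idea: a hypothesis "antecedent => succedent" is evaluated
 *  on a four-fold (4ft) contingency table:
 *      a = rows satisfying both, b = antecedent only,
 *      c = succedent only,      d = neither. */
public final class FourFoldTable {
    final long a, b, c, d;

    FourFoldTable(long a, long b, long c, long d) {
        this.a = a; this.b = b; this.c = c; this.d = d;
    }

    /** Founded implication: confidence a/(a+b) >= p with at least `base` supporting rows. */
    boolean foundedImplication(double p, long base) {
        return a >= base && (double) a / (a + b) >= p;
    }

    public static void main(String[] args) {
        // Illustrative counts only.
        FourFoldTable t = new FourFoldTable(84, 16, 40, 360);
        System.out.println("holds: " + t.foundedImplication(0.8, 50)); // 84/100 = 0.84 -> true
    }
}
```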
47

Meta-učení v oblasti dolování dat / Meta-Learning in the Area of Data Mining

Kučera, Petr January 2013 (has links)
This paper describes the use of meta-learning in the area of data mining. It describes the problems and tasks of data mining where meta-learning can be applied, with a focus on classification, and gives an overview of meta-learning techniques and their possible application in data mining, especially for model selection. It then describes the design and implementation of a meta-learning system to support classification tasks in data mining. The system uses statistics and information theory to characterize data sets stored in a meta-knowledge base; a meta-classifier built from that base predicts the most suitable model for a new data set. The conclusion discusses the results of experiments with more than 20 data sets representing classification tasks from different areas and suggests possible extensions of the project.
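The abstract says the datasets are characterized using statistics and information theory; the sketch below computes a few common meta-features of that kind (size, dimensionality, number of classes, class entropy) with Weka's Instances API. The specific meta-features used in the system are not listed in the abstract, so these are assumptions, as is the file name.

```java
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class MetaFeatures {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("dataset.arff").getDataSet(); // hypothetical file
        data.setClassIndex(data.numAttributes() - 1);                 // assumes a nominal class

        int n = data.numInstances();
        int numAttributes = data.numAttributes() - 1;

        // Class entropy: an information-theoretic meta-feature.
        int[] classCounts = data.attributeStats(data.classIndex()).nominalCounts;
        double entropy = 0.0;
        for (int count : classCounts) {
            if (count == 0) continue;
            double p = (double) count / n;
            entropy -= p * (Math.log(p) / Math.log(2));
        }

        System.out.printf("instances=%d attributes=%d classes=%d classEntropy=%.3f%n",
                n, numAttributes, data.numClasses(), entropy);
    }
}
```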
48

Získávání znalostí z marketingových dat / Knowledge discovery in marketing data

Kazárová, Marie January 2020 (has links)
Data mining techniques are used by companies to gain competitive advantages. In today's marketplace they are also used by marketers, mainly to personalize advertising and to maintain long-term relationships with customers. Progress in knowledge discovery in databases and the availability of computational power brings not only benefits but also challenges. The practical part of the thesis explores and describes data mining techniques applied to an e-commerce dataset consisting of transaction and web analytics data. The goal of the experimental application is to select the users who are most likely to react to marketing communication and to identify the factors that influence them. The target segment of users is obtained with clustering, and a classification model based on a decision tree algorithm predicts whether a user will submit a transaction, with an accuracy of 75%. The results are useful for optimizing marketing and business strategy.
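The abstract does not name the tool used; as one possible reading of the evaluation step behind the 75% figure, the sketch below trains a decision tree (Weka's J48) on a hold-out split and reports accuracy. The file name, class attribute position and split ratio are assumptions.

```java
import java.util.Random;

import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class TransactionPrediction {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("ecommerce_users.arff").getDataSet(); // hypothetical file
        data.setClassIndex(data.numAttributes() - 1); // class: transaction yes/no (assumed last)

        // Hold-out split: 70% training, 30% test (ratio is an assumption).
        data.randomize(new Random(1));
        int trainSize = (int) Math.round(data.numInstances() * 0.7);
        Instances train = new Instances(data, 0, trainSize);
        Instances test  = new Instances(data, trainSize, data.numInstances() - trainSize);

        J48 tree = new J48();
        tree.buildClassifier(train);

        Evaluation eval = new Evaluation(train);
        eval.evaluateModel(tree, test);
        System.out.printf("accuracy = %.1f%%%n", eval.pctCorrect());
    }
}
```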
49

Vytvoření nových klasifikačních modulů v systému pro dolování z dat na platformě NetBeans / Creation of New Clasification Units in Data Mining System on NetBeans Platform

Kmoščák, Ondřej January 2009 (has links)
This diploma thesis deals with data mining and the creation of a mining module for the data mining system being developed at FIT. The system is a client application consisting of a kernel with a graphical user interface and independent mining modules, and it uses the support of Oracle Data Mining. It is implemented in Java and its graphical user interface is built on the NetBeans platform. The work introduces the field of knowledge discovery, presents the chosen Bayesian classification method, and then describes the implementation of a stand-alone data mining module for that method.
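The module itself builds on Oracle Data Mining inside the FIT system, whose API is not shown in the abstract; as a stand-in illustration of the Bayesian classification idea only, the sketch below trains Weka's NaiveBayes classifier and cross-validates it. It is not the thesis module's code, and the file name is hypothetical.

```java
import java.util.Random;

import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class BayesianClassificationDemo {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("training_data.arff").getDataSet(); // hypothetical file
        data.setClassIndex(data.numAttributes() - 1);

        // Naive Bayes: class posterior proportional to the prior times per-attribute likelihoods.
        NaiveBayes bayes = new NaiveBayes();
        bayes.buildClassifier(data);

        // 10-fold cross-validation as a quick check of the model.
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(bayes, data, 10, new Random(1));
        System.out.println(eval.toSummaryString());
    }
}
```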
50

Data Mining with Newton's Method.

Cloyd, James Dale 01 December 2002 (has links) (PDF)
Capable and well-organized data mining algorithms are essential and fundamental to helpful, useful, and successful knowledge discovery in databases. We discuss several data mining algorithms, including genetic algorithms (GAs). In addition, we propose a modified multivariate Newton's method (NM) approach to data mining of technical data. Several strategies are employed to stabilize Newton's method against pathological function behavior. NM is compared to GAs and to the simplex evolutionary operation algorithm (EVOP). We find that GAs, NM, and EVOP all perform efficiently for well-behaved global optimization functions, with NM providing an exponential improvement in convergence rate. For local optimization problems, we find that GAs and EVOP do not provide the desired convergence rate, accuracy, or precision compared to NM for technical data. We find that GAs are favored for their simplicity, while NM would be favored for its performance.
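The abstract does not spell out which stabilization strategies the thesis uses; the sketch below shows one common safeguard, a damped (backtracking) Newton iteration for one-dimensional minimization, applied to an illustrative objective rather than to the thesis's technical data.

```java
import java.util.function.DoubleUnaryOperator;

/** Minimal sketch of a damped Newton iteration for one-dimensional minimization:
 *      x_{k+1} = x_k - t * f'(x_k) / f''(x_k),
 *  halving the step length t whenever the objective fails to decrease. */
public class DampedNewton {

    static double minimize(DoubleUnaryOperator f, DoubleUnaryOperator df,
                           DoubleUnaryOperator d2f, double x, int maxIter) {
        for (int k = 0; k < maxIter; k++) {
            double g = df.applyAsDouble(x);
            double h = d2f.applyAsDouble(x);
            if (Math.abs(g) < 1e-10 || Math.abs(h) < 1e-12) break; // converged or flat curvature
            double step = g / h;
            double t = 1.0;
            // Backtrack: shrink the step until the objective actually decreases.
            while (t > 1e-8 && f.applyAsDouble(x - t * step) >= f.applyAsDouble(x)) {
                t *= 0.5;
            }
            if (t <= 1e-8) break; // no descent direction found; stop
            x -= t * step;
        }
        return x;
    }

    public static void main(String[] args) {
        // Example objective (illustrative): f(x) = x^4 - 3x^3 + 2, minimized near x = 2.25.
        double xStar = minimize(
                x -> Math.pow(x, 4) - 3 * Math.pow(x, 3) + 2, // f
                x -> 4 * Math.pow(x, 3) - 9 * x * x,          // f'
                x -> 12 * x * x - 18 * x,                     // f''
                3.0, 100);
        System.out.println("minimum near x = " + xStar);
    }
}
```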
