
Detecção de fraudes em cartões: um classificador baseado em regras de associação e regressão logística / Card fraud detection: a classifier based on association rules and logistic regression

Paulo Henrique Maestrello Assad Oliveira, 11 December 2015
Credit and debit cards are highly utilized payment methods, a fact that attracts the interest of fraudsters. The card market treats fraud as an operating cost, which is passed on to consumers and to society at large. Moreover, the high volume of transactions and the need to combat fraud open space for Machine Learning techniques, among them classifiers. Rule-based classifiers are widely used in this domain, but in practice they are highly dependent on domain experts: professionals who detect the patterns of fraudulent transactions, transform them into rules, and implement those rules in the classification systems. Recognizing this scenario, the aim of this work is to propose an architecture based on association rules and logistic regression, techniques studied in Machine Learning, to mine rules from data, produce rule sets for detecting fraudulent transactions, and make them available to domain experts. These professionals thus gain computational support for discovering and generating the rules that underpin the classifier, reducing the chance of unrecognized fraudulent patterns and making rule generation and maintenance more efficient. To test the proposal, the experimental part of the work used about 7.7 million real card transactions provided by a company in the card market. Since the classifier can make errors (false positives and false negatives), cost-sensitive analysis was applied so that most of these errors incur a lower cost. After extensive analysis of the database, 141 features were combined using the FP-Growth algorithm to generate 38,003 rules which, after filtering and selection, were grouped into five rule sets, the largest containing 1,285 rules. Each of the five sets was submitted to logistic regression modelling so that its rules could be validated and weighted by statistical criteria. At the end of the process, the goodness-of-fit metrics revealed well-adjusted models, and the performance indicators showed, overall, very good classification power (AUC between 0.788 and 0.820). In conclusion, the combined application of the statistical techniques, cost-sensitive analysis, association rules, and logistic regression, proved conceptually and theoretically cohesive and coherent, and the experiment and its results demonstrated the technical and practical feasibility of the proposal.
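
Editor's note: to make the mine-then-weight pipeline above concrete, here is a minimal sketch using mlxtend's FP-Growth and scikit-learn's logistic regression. The thesis confirms FP-Growth and logistic regression, but the specific libraries, file name, thresholds, and the class-weight stand-in for cost-sensitive analysis are illustrative assumptions, not the thesis's actual configuration.

```python
import pandas as pd
from mlxtend.frequent_patterns import fpgrowth, association_rules
from sklearn.linear_model import LogisticRegression

# Hypothetical one-hot transaction table: one row per transaction,
# binary feature columns plus a 'fraud' label (names are illustrative).
df = pd.read_csv("transactions_onehot.csv")
X, y = df.drop(columns="fraud").astype(bool), df["fraud"]

# Mine frequent itemsets and derive candidate rules (the thesis reports
# 38,003 rules from 141 features; these thresholds are placeholders).
itemsets = fpgrowth(X, min_support=0.01, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)

# One binary "rule fires" feature per rule antecedent ...
fires = pd.DataFrame({
    f"rule_{i}": X[list(ant)].all(axis=1).astype(int)
    for i, ant in enumerate(rules["antecedents"])
})

# ... then logistic regression validates and weights the rules jointly;
# class_weight="balanced" is a crude stand-in for cost-sensitive analysis.
clf = LogisticRegression(max_iter=1000, class_weight="balanced").fit(fires, y)
```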

Data mining of temporal sequences for the prediction of infrequent failure events : application on floating train data for predictive maintenance / Fouille de séquences temporelles pour la maintenance prédictive : application aux données de véhicules traceurs ferroviaires

Sammouri, Wissam, 20 June 2014
In order to meet mounting social and economic demands, railway operators and manufacturers are striving for greater availability and reliability of railway transportation systems. Commercial trains are equipped with state-of-the-art on-board intelligent sensors that monitor various subsystems across the train. These sensors provide a real-time flow of data, called floating train data, consisting of georeferenced events along with their spatial and temporal coordinates. Once ordered with respect to time, these events can be considered long temporal sequences that can be mined for possible relationships. This has created a necessity for sequential data mining techniques to derive meaningful association rules or classification models from these data. Once discovered, these rules and models can be used to perform an on-line analysis of the incoming event stream in order to predict the occurrence of target events, i.e., severe failures that require immediate corrective maintenance actions. The work in this thesis tackles this data mining task: we investigate and develop methodologies to discover association rules and classification models that can help predict rare tilt and traction failures in sequences using past events that are less critical. The investigated techniques constitute two major axes: association analysis, which is temporal, and classification techniques, which are not. The main challenges confronting the task, and increasing its complexity, are the rarity of the target events to be predicted, the heavy redundancy of some events, and the frequent occurrence of data bursts. The results obtained on real datasets collected from a fleet of commercial trains highlight the effectiveness of the proposed approaches.
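
Editor's note: as a toy illustration of mining precursor rules for rare target events, the sketch below scans a time-ordered event sequence, collects the labels seen in a fixed window before each event, and scores candidate antecedents by their confidence toward the target. The window length, event codes, and the naive O(n²) scan are illustrative assumptions, not the thesis's actual method.

```python
from collections import Counter
from itertools import combinations

def mine_precursor_rules(events, target, window=600.0, min_conf=0.5):
    """events: time-sorted list of (timestamp, label) pairs.
    For every event, look at the labels seen in the preceding `window`
    seconds; confidence(pattern -> target) = P(target | pattern seen)."""
    seen, hits = Counter(), Counter()
    for i, (t, lbl) in enumerate(events):
        # naive rescan of the prefix; fine for a sketch, too slow at scale
        recent = {l for (u, l) in events[:i] if u >= t - window}
        for r in (1, 2):  # antecedent patterns of size 1 and 2
            for pattern in combinations(sorted(recent), r):
                seen[pattern] += 1
                if lbl == target:
                    hits[pattern] += 1
    return {p: hits[p] / seen[p] for p in hits if hits[p] / seen[p] >= min_conf}

# toy usage with hypothetical on-board event codes
seq = [(0, "door_warn"), (30, "hvac_err"), (50, "traction_fail"),
       (400, "door_warn"), (900, "hvac_err"), (920, "traction_fail")]
print(mine_precursor_rules(seq, "traction_fail", window=100))
```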

Génération de connaissances à l’aide du retour d’expérience : application à la maintenance industrielle / Knowledge generation using experience feedback : application to industrial maintenance

Potes Ruiz, Paula Andrea, 24 November 2014
The research work presented in this thesis concerns the exploitation of knowledge derived from past experiences in order to improve the performance of industrial processes. Knowledge is nowadays considered an important strategic resource that can provide a decisive competitive advantage to organizations. Knowledge management (and experience feedback in particular) makes it possible to preserve and exploit information related to a company's activities in order to support decision-making and create new knowledge from the organization's intangible heritage. In this context, advances in information and communication technologies play an essential role in gathering and processing knowledge. The widespread implementation of industrial information systems such as ERP (Enterprise Resource Planning) systems makes available a large volume of information derived from past events or facts, whose reuse is becoming a major issue. However, these fragments of knowledge (past experiences) are highly contextualized and require specific methodologies to be generalized. Given the potential of the information collected in companies as a source of new knowledge, this work proposes an original approach to generate new knowledge from the analysis of past experiences, relying on the complementarity of two scientific threads: Experience Feedback (EF) and Knowledge Discovery in Databases (KDD). The proposed EF-KDD combination focuses mainly on: i) modelling the collected experiences using a knowledge representation formalism in order to facilitate their future exploitation, and ii) applying data mining techniques to extract new knowledge from the experiences in the form of rules. These rules must be evaluated and validated by domain experts before their reuse and/or integration into the industrial system. Throughout this approach, a privileged position is given to Conceptual Graphs (CGs), the knowledge representation formalism chosen to facilitate the storage, processing, and understanding of the extracted knowledge by the user, with a view to future exploitation. The thesis is organized in four chapters. The first is a state of the art covering the two scientific threads that contribute to the proposal: EF and KDD. The second presents the proposed EF-KDD approach and the tools used to generate new knowledge from the available information describing past experiences. The third presents a structured methodology for interpreting and evaluating the usefulness of the knowledge extracted during the post-processing phase of the KDD process. Finally, the last chapter discusses real case studies in the industrial maintenance domain to which the proposed approach has been applied.

Vizualizace asociačních pravidel ve webovém prostředí / Association rule visualization in web environment

Škrabal, Radek, January 2011
The aim of this thesis is to implement a web interface for the LISp-Miner academic system, which provides association rule mining capability. Knowledge discovery in databases applications are currently being transformed from desktop applications to web applications, a trend that raises both new opportunities and issues. This thesis describes a new interactive approach to association rule mining in which the user is an essential part of the algorithm and can alter the task setting as mining proceeds. Users can also collaborate by building a domain knowledge repository, which helps in finding new and interesting information in the data.

Product allocation for an automated order picking system in an e-commerce warehouse : A data mining approach

Dahl, Alexander, January 2020
Warehouse automation is a measure e-commerce companies can take to achieve a more streamlined flow through their warehouse. Order picking is the most labor-intensive task in a warehouse, so by automating the order picking process companies can lower their costs and improve their response times. This thesis studies the A-frame, an automated order picking system, at a large online pharmacy, Apotea AB. An A-frame has dispensing channels on its sides and a conveyor belt that runs through the entire machine. Products for an order are ejected from the channels onto the conveyor belt, and at the end of the machine they are dropped into a box; the box is then sealed, labeled, and sent to the customer. For the automatic flow to function correctly, all orders picked by the A-frame need to be complete orders, i.e., orders with no products missing. Maximizing the throughput of the A-frame requires an appropriate product allocation, but due to the vast number of combinations it is extremely difficult to identify an optimal one. This study examined three different approaches to the product allocation problem for an A-frame. The first two methods are based on ranking the products by their quantities sold. The last method uses association rule learning, a machine learning technique for finding interesting patterns in a data set, to determine which products are associated with each other; these associations were then placed in a graph structure and solved using a heuristic (sketched below). To evaluate the different allocation methods, a simulation model was created: the A-frame was simulated using discrete event simulation, so that all methods could be tested on the same data and the performance of each allocation compared fairly. The study showed that the heuristic using association rules gave the highest number of picks for the tested period, but it was only marginally better than the method that first removed orders that could not be picked from the A-frame and then ranked all products by their quantities sold. The study's conclusion is that while association rule learning resulted in the highest number of picked orders, the gain does not justify its complexity; a simpler approach of ranking products by their quantities sold should be used instead. Warehousing in the era of e-commerce has to be fast, correct, and cheap.
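
Editor's note: a minimal sketch of the association-graph allocation idea described above, assuming a simple order list and a fixed number of dispensing channels. The greedy heuristic and all names are illustrative, not the thesis's actual algorithm.

```python
from collections import Counter
from itertools import combinations

def allocate(orders, n_channels):
    """orders: list of sets of product ids. Assign the most frequently
    sold products to channels, placing strongly associated products
    together so that co-ordered items are dispensed by the same machine."""
    sold = Counter(p for order in orders for p in order)
    together = Counter(frozenset(pair) for order in orders
                       for pair in combinations(sorted(order), 2))
    candidates = [p for p, _ in sold.most_common(n_channels)]
    channels, placed = [], set()
    while candidates and len(channels) < n_channels:
        if not channels:
            nxt = candidates[0]  # seed with the best seller
        else:
            # greedy: pick the candidate most co-ordered with what is placed
            nxt = max(candidates,
                      key=lambda p: sum(together[frozenset({p, q})]
                                        for q in placed))
        channels.append(nxt)
        placed.add(nxt)
        candidates.remove(nxt)
    return channels

orders = [{"a", "b"}, {"a", "b", "c"}, {"c", "d"}, {"a", "c"}]
print(allocate(orders, 3))  # e.g. ['a', 'c', 'b']
```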

An Approach to Extending Ontologies in the Nanomaterials Domain

Leshi, Olumide, January 2020
Over the last decade or two, data-driven science workflows have become increasingly popular, and semantic technology has been relied on to align often parallel research efforts across domains and to foster interoperability and data sharing. A key challenge, however, is the size of the data and the pace at which it is generated, so much so that manual procedures lag behind, prompting the automation of most workflows. This study continues the investigation of ways in which tasks performed by experts in the nanotechnology domain, specifically in ontology engineering, could benefit from automation. An approach featuring phrase-based topic modelling and formal topical concept analysis, together with formal implication rules, is motivated as a way to uncover new concepts and axioms relevant to two nanotechnology-related ontologies. A corpus of 2,715 nanotechnology research articles shows that the approach can scale, as seen in a number of experiments conducted. The usefulness of document text ranking as an alternative form of input to topic models is highlighted, as is the benefit of implication rules to the task of concept discovery. In all, the approach uncovers a total of 203 new concepts to extend the referenced ontologies.
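
Editor's note: to make the implication-rule idea concrete, here is a small sketch of checking an attribute implication over a binary object-attribute context, in the spirit of classical formal concept analysis. The thesis's "formal topical concept analysis" may differ substantially; the context and attribute names below are invented for illustration.

```python
def holds(context, antecedent, consequent):
    """An implication A -> B holds in a formal context if every object
    that has all attributes in A also has all attributes in B."""
    objs_with_a = [attrs for attrs in context.values() if antecedent <= attrs]
    return all(consequent <= attrs for attrs in objs_with_a)

# hypothetical context: documents -> sets of extracted topical attributes
context = {
    "paper1": {"nanoparticle", "silver", "antibacterial"},
    "paper2": {"nanoparticle", "silver", "coating"},
    "paper3": {"nanotube", "carbon"},
}
print(holds(context, {"silver"}, {"nanoparticle"}))  # True
print(holds(context, {"nanoparticle"}, {"silver"}))  # True here, but only
# because this tiny context happens to contain no counterexample object
```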

DRFS: Detecting Risk Factor of Stroke Disease from Social Media Using Machine Learning Techniques

Pradeepa, S., Manjula, K. R., Vimal, S., Khan, Mohammad S., Chilamkurti, Naveen, Luhach, Ashish Kr., 01 January 2020
Humans are social animals, yet on the vast internet it is difficult to find useful information about a medical illness. Pending more definitive studies of a causal relationship between stroke risk and social networks, it would be valuable to help individuals detect their risk of stroke. In this work, a DRFS methodology is proposed to find, from social media content, the various symptoms associated with stroke disease and the preventive measures against it. We define an architecture that clusters tweets based on their content using spectral clustering in an iterative fashion. Class labels are detected using the words with the highest TF-IDF values. The clusters produced by spectral clustering are given as input to a Probabilistic Neural Network (PNN) to obtain the appropriate class labels and their probabilities, and frequent word sets are then extracted from the clusters using a support count measure to identify the risk factors of stroke. We found that the proposed approach is able to recognize new symptoms and causes that are not listed by the World Health Organization (WHO), the Mayo Clinic, or the National Health Survey (NHS), and that the factors it identifies are associated with outcomes that reflect real statistics. Such experiments will enable health organizations, doctors, and government agencies to keep track of stroke disease. Experimental results show the causes, preventive measures, and high- and low-risk factors of stroke disease.
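
Editor's note: a minimal sketch of the clustering-and-labelling step described above, using scikit-learn with a precomputed cosine-similarity affinity. The tweets, cluster count, and top-term labelling are illustrative assumptions rather than the paper's exact pipeline, and the PNN stage is omitted.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import SpectralClustering

tweets = ["sudden numbness in arm", "face drooping and slurred speech",
          "high blood pressure risks", "hypertension and smoking",
          "numbness one side of body", "quit smoking lower pressure"]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(tweets)

# spectral clustering on a precomputed cosine-similarity affinity matrix
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(cosine_similarity(X))

# label each cluster with its highest mean TF-IDF terms
terms = np.array(vec.get_feature_names_out())
for c in sorted(set(labels)):
    centroid = X[np.flatnonzero(labels == c)].mean(axis=0).A1
    print(c, terms[centroid.argsort()[-3:][::-1]])
```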

A comparative analysis of database sanitization techniques for privacy-preserving association rule mining / En jämförande analys av tekniker för databasanonymisering inom sekretessbevarande associationsregelutvinning

Mårtensson, Charlie, January 2023
Association rule hiding (ARH) is the process of modifying a transaction database to prevent sensitive patterns (association rules) from being discovered by data miners. An optimal ARH technique successfully hides all sensitive patterns while leaving all nonsensitive patterns public. In practice, however, many ARH algorithms cause undesirable side effects, such as failing to hide sensitive rules or mistakenly hiding nonsensitive ones. Evaluating the utility of ARH algorithms therefore involves measuring the side effects they cause. There is a wide array of ARH techniques in use, with evolutionary algorithms in particular gaining popularity in recent years. Previous research in the area has, however, focused on incremental improvement of existing algorithms; no work was found that compares the performance of ARH algorithms without the incentive of promoting a newly suggested algorithm as superior. To fill this research gap, this project compares three ARH algorithms developed between 2019 and 2022 (ABC4ARH, VIDPSO, and SA-MDP) using identical and unbiased parameters. The algorithms were run on three real databases and three synthetic ones of various sizes, in each case given four different sets of sensitive rules to hide. Their performance was measured in terms of side effects, runtime, and scalability (i.e., performance as database size increases). The performance of the algorithms varied considerably depending on the characteristics of the input data, with no algorithm consistently outperforming the others at mitigating side effects. VIDPSO was the most efficient in terms of runtime, while ABC4ARH maintained the most robust performance as the database size increased. However, results matching the quality of those in the papers originally describing each algorithm could not be reproduced, showing a clear need to validate the reproducibility of research before its results can be trusted.
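
Editor's note: as a sketch of how the side effects discussed above are typically measured, the function below compares the rule sets mined before and after sanitization. The metric names follow common usage in the ARH literature, and the toy rule sets are invented.

```python
def side_effects(before, after, sensitive):
    """before/after: sets of rules mined from the original and the
    sanitized database; sensitive: the rules that were meant to be hidden."""
    return {
        "hiding_failures": len(sensitive & after),        # sensitive rules still visible
        "lost_rules": len((before - sensitive) - after),  # nonsensitive rules accidentally hidden
        "ghost_rules": len(after - before),               # spurious rules introduced
    }

before = {"a->b", "b->c", "c->d", "secret->x"}
after = {"a->b", "c->d", "ghost->y"}
print(side_effects(before, after, sensitive={"secret->x"}))
# {'hiding_failures': 0, 'lost_rules': 1, 'ghost_rules': 1}
```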

Mining High Impact Combinations of Conditions from the Medical Expenditure Panel Survey

Mohan, Arjun, 14 November 2023
Multimorbidity, the presence of two or more medical conditions in an individual, is a growing phenomenon worldwide. In the United States, multimorbid patients represent more than a third of the population, and the trend is steadily increasing in an already aging population. There is thus a pressing need to understand the patterns in which multimorbidity occurs and the nature of the care such patients require. In this thesis, we use data from the Medical Expenditure Panel Survey (MEPS) from the years 2011 to 2015 to identify combinations of multiple chronic conditions (MCCs). We first quantify the significant heterogeneity observed in these combinations and how often they are observed across the five years. Next, using two criteria associated with each combination, (a) the annual prevalence and (b) the annual median expenditure, along with the concept of non-dominated Pareto fronts, we determine the degree of impact each combination has on the healthcare system. Our analysis reveals that combinations of four or more conditions are often mixtures of diseases belonging to different clinically meaningful groupings, such as metabolic disorders (diabetes, hypertension, hyperlipidemia); musculoskeletal conditions (osteoarthritis, spondylosis, back problems, etc.); respiratory disorders (asthma, COPD, etc.); heart conditions (atherosclerosis, myocardial infarction); and mental health conditions (anxiety disorders, depression, etc.). Next, we use unsupervised learning techniques such as association rule mining and hierarchical clustering to visually explore the strength of the associations between different conditions and condition groupings. This interactive framework gives epidemiologists and clinicians (in particular primary care physicians) a systematic approach to understanding the relationships between conditions and to building a longer-term strategy for screening, diagnosis, and treatment, especially for individuals at risk of further complications. The findings from this study create a foundation for future work toward a more holistic view of multimorbidity.
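
Editor's note: a brief sketch of the non-dominated Pareto front idea used above. A condition combination sits on the front if no other combination is at least as high on both prevalence and median expenditure and strictly higher on one; the data points below are invented for illustration.

```python
def pareto_front(points):
    """points: list of (prevalence, expenditure) tuples, both maximized.
    Returns the non-dominated subset (distinct points assumed)."""
    front = []
    for p in points:
        dominated = any(q[0] >= p[0] and q[1] >= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# hypothetical (annual prevalence, annual median expenditure) pairs
combos = [(0.12, 5400.0), (0.08, 9100.0), (0.05, 4800.0), (0.12, 3000.0)]
print(pareto_front(combos))  # [(0.12, 5400.0), (0.08, 9100.0)]
```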

Data Mining in a Multidimensional Environment

Günzel, Holger, Albrecht, Jens, Lehner, Wolfgang, 12 January 2023
Data mining and data warehousing are two hot topics in the database research area. Until recently, conventional data mining algorithms were developed primarily for a relational environment, whereas a data warehouse database is based on a multidimensional model. In this paper we use this model as the basis for a seamless integration of data mining into the multidimensional environment, taking the discovery of association rules as our example. We further propose the method as a user-guided technique, owing to the clear structure of both the model and the data. We present the theoretical basis as well as efficient algorithms for data mining in the multidimensional data model. Our approach directly exploits the dimensions, classifications, and sparsity of the cube, and we additionally give heuristics for optimizing the search for rules.
