1

CUDIA: a probabilistic cross-level imputation framework using individual auxiliary information

Park, Yubin 17 February 2012
In healthcare-related studies, individual patient or hospital data are often not publicly available due to privacy restrictions, legal issues, or reporting norms. However, such measures may be provided at a higher, more aggregated level, such as state- or county-level summaries, or averages over health zones such as Hospital Referral Regions (HRR) or Hospital Service Areas (HSA). Such levels constitute partitions over the underlying individual-level data, which may not match the groupings that would have been obtained by clustering the data on individual-level attributes. Moreover, treating aggregated values as representatives for the individuals can result in the ecological fallacy. How can one run data mining procedures on data where different variables are available at different levels of aggregation or granularity? In this thesis, we seek a better utilization of variably aggregated datasets, possibly assembled from different sources. We propose a novel "cross-level" imputation technique that models the generative process of such datasets using a Bayesian directed graphical model. The imputation is based on the underlying data distribution and is shown to be unbiased. The imputed values can be further utilized in subsequent predictive modeling, yielding improved accuracy. Experimental results on a simulated dataset and the Behavioral Risk Factor Surveillance System (BRFSS) dataset illustrate the generality and capabilities of the proposed framework.
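To make the cross-level idea concrete, here is a minimal sketch, not the thesis's Bayesian graphical model: cluster individuals on their fully observed attributes, recover per-cluster means of the aggregated variable by least squares against the published region averages, and impute each individual with its cluster's estimate. All names (`cross_level_impute`, `region_of`, `region_means`) are illustrative assumptions, not from the thesis.

```python
import numpy as np
from sklearn.cluster import KMeans

def cross_level_impute(X_ind, region_of, region_means, k=3, seed=0):
    """Toy cross-level imputation.

    X_ind: (n, d) individual-level attributes.
    region_of: length-n array of region ids (each region assumed non-empty).
    region_means: dict mapping region id -> published aggregate mean.
    """
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X_ind)
    regions = sorted(region_means)
    # A[r, c] = fraction of region r's individuals falling in cluster c
    A = np.zeros((len(regions), k))
    for r_idx, r in enumerate(regions):
        members = labels[np.asarray(region_of) == r]
        for c in range(k):
            A[r_idx, c] = np.mean(members == c)
    # Solve region_mean = A @ cluster_means in the least-squares sense,
    # then impute each individual with its cluster's estimated mean.
    b = np.array([region_means[r] for r in regions])
    cluster_means, *_ = np.linalg.lstsq(A, b, rcond=None)
    return cluster_means[labels]
```

Unlike naively assigning each individual its region's average (the ecological-fallacy trap the abstract warns about), the imputed value here varies with the individual's own attributes through the cluster assignment.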
2

Improving the Scalability of an Exact Approach for Frequent Item Set Hiding

LaMacchia, Carolyn 01 January 2013
Technological advances have led to the generation of large databases of organizational data, recognized as an information-rich, strategic asset for internal analysis and for sharing with trading partners. Data mining techniques can discover patterns in large databases, including relationships considered strategically relevant to the owner of the data. The frequent item set hiding problem is an area of active research that studies approaches for hiding sensitive knowledge patterns before disclosing the data outside the organization. Several methods address hiding sensitive item sets, including an exact approach that generates an extension to the original database that, when combined with the original database, limits the discovery of sensitive association rules without impacting other nonsensitive information. To generate the database extension, this method formulates a constraint optimization problem (COP). Solving the COP formulation is the dominant factor in the computational resource requirements of the exact approach. This dissertation developed heuristics that address the scalability of the exact hiding method. The heuristics are directed at improving the performance of the COP solver by reducing the size of the COP formulation without significantly affecting the quality of the solutions generated. The first heuristic decomposes the COP formulation into multiple smaller problem instances that are processed separately by the COP solver to generate partial extensions of the database; the smaller database extensions are then combined to form a database extension close to the one generated from the original, larger COP formulation. The second heuristic evaluates the revised border used to formulate the COP and reduces the number of variables and constraints by selectively substituting multiple item sets with composite variables. Solving the COP with fewer variables and constraints reduces the computational cost of the processing. Results of heuristic processing were compared with an existing exact approach based on the size of the database extension, the ability to hide sensitive data, and the impact on nonsensitive data.
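As a rough illustration of the decomposition idea behind the first heuristic: sensitive item sets that share no items do not interact in the support constraints, so the COP can plausibly be split along connected components of an item-overlap graph and each component solved separately. This is a sketch of that grouping step only, not the dissertation's actual formulation; `decompose` is an illustrative name.

```python
def decompose(sensitive_itemsets):
    """Group sensitive item sets into independent subproblems.

    Two item sets interact if they share an item; each connected
    component of the overlap graph can then be handled as a
    separate, smaller COP instance.
    """
    groups = []  # list of components, each a set of frozensets
    for s in map(frozenset, sensitive_itemsets):
        overlapping = [g for g in groups if any(s & member for member in g)]
        merged = {s}
        for g in overlapping:
            merged |= g
            groups.remove(g)
        groups.append(merged)
    return groups

# Example: {a,b} and {b,c} share item b, so they stay together; {x,y} is separate.
print(decompose([{"a", "b"}, {"b", "c"}, {"x", "y"}]))
```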
3

An Architecture for High-Performance Privacy-Preserving and Distributed Data Mining

Secretan, James 01 January 2009
This dissertation discusses the development of an architecture and associated techniques to support Privacy Preserving and Distributed Data Mining. The field of Distributed Data Mining (DDM) attempts to solve the challenges inherent in coordinating data mining tasks over geographically distributed databases, through the application of parallel algorithms and grid computing concepts. The closely related field of Privacy Preserving Data Mining (PPDM) adds the dimension of privacy to the problem, seeking ways for organizations to collaborate in mining their databases collectively while preserving the privacy of their records. Developing data mining algorithms for DDM and PPDM environments can be difficult, and there is little software to support it. In addition, because these tasks can be computationally demanding, taking hours or even days to complete, organizations should be able to take advantage of high-performance and parallel computing to accelerate them. Unfortunately, no existing framework provides all of these services easily for a developer. This dissertation develops such a framework, called APHID (Architecture for Private, High-performance Integrated Data mining), to support the creation and execution of DDM and PPDM applications. The architecture allows users to flexibly and seamlessly integrate cluster and grid resources into their DDM and PPDM applications. The architecture is scalable, and is split into highly decoupled services to ensure flexibility and extensibility. The dissertation first develops a comprehensive example algorithm, a privacy-preserving Probabilistic Neural Network (PNN), which serves as a basis for analyzing the difficulties of DDM/PPDM development. The privacy-preserving PNN is the first such PNN in the literature, and provides not only a practical algorithm ready for use in privacy-preserving applications, but also a template for other data-intensive algorithms and a starting point for analyzing APHID's architectural needs. After analyzing the difficulties in the PNN algorithm's development, as well as the shortcomings of the systems surveyed, the dissertation presents the first concrete programming model joining high-performance computing resources with a privacy-preserving data mining process. Unlike many existing PPDM development models, the platform of services is language independent, allowing layers and algorithms to be implemented in popular languages (Java, C++, Python, etc.). An implementation of a PPDM algorithm is developed in Java utilizing the new framework. Performance results are presented, showing that APHID can enable highly simplified PPDM development while speeding up resource-intensive parts of the algorithm.
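For reference, a plain (non-private) PNN classifier is just a per-class Gaussian kernel density score; in the privacy-preserving setting, those per-class sums would be computed with secure multiparty techniques rather than on pooled raw data. The snippet below is a minimal sketch of the standard PNN only, with illustrative names, not APHID's implementation.

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=0.5):
    """Plain Probabilistic Neural Network classifier.

    Scores each class by the average Gaussian kernel between x and
    that class's training points, then returns the best-scoring class.
    """
    scores = {}
    for c in np.unique(y_train):
        diffs = X_train[y_train == c] - x
        scores[c] = np.mean(np.exp(-np.sum(diffs**2, axis=1) / (2 * sigma**2)))
    return max(scores, key=scores.get)
```

Because the classifier reduces to sums of kernel evaluations over each party's records, it is a natural candidate for secure aggregation, which is presumably what makes it a useful template for other data-intensive algorithms.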
4

Geometric Methods for Mining Large and Possibly Private Datasets

Chen, Keke 07 July 2006
With the wide deployment of data-intensive Internet applications and continued advances in sensing technology and biotechnology, large multidimensional datasets, possibly containing privacy-sensitive information, have been emerging. Mining such datasets has become increasingly common in business integration, large-scale scientific data analysis, and national security. The proposed research aims at exploring the geometric properties of the multidimensional datasets utilized in statistical learning and data mining, and providing novel techniques and frameworks for mining very large datasets while protecting the desired data privacy. The first main contribution of this research is the development of the iVIBRATE interactive visualization-based approach for clustering very large datasets. The iVIBRATE framework uniquely addresses the challenges in handling irregularly shaped clusters, domain-specific cluster definition, and cluster-labeling of the data on disk. It consists of the VISTA visual cluster rendering subsystem and the Adaptive ClusterMap Labeling subsystem. The second main contribution is the development of the "Best K Plot" (BKPlot) method for determining the critical clustering structures in multidimensional categorical data. The BKPlot method uniquely addresses two challenges in clustering categorical data: how to determine the number of clusters (the best K) and how to identify the existence of significant clustering structures. The method consists of the basic theory, the sample BKPlot theory for large datasets, and the testing method for identifying no-cluster datasets. The third main contribution of this research is the development of the theory of geometric data perturbation and its application in privacy-preserving data classification involving single-party or multiparty collaboration. The key to geometric data perturbation is finding a well-chosen, randomly generated rotation matrix and an appropriate noise component that together provide a satisfactory balance between privacy guarantee and data quality, considering possible inference attacks. When geometric perturbation is applied to collaborative multiparty data classification, it is challenging to unify the different geometric perturbations used by different parties. We study three protocols under the data-mining-service-oriented framework for unifying the perturbations: 1) the threshold-satisfied voting protocol, 2) the space adaptation protocol, and 3) the space adaptation protocol with a trusted party. The tradeoffs between the privacy guarantee, the model accuracy, and the cost are studied for the protocols.
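The rotation-plus-noise core of geometric perturbation is straightforward to sketch. A random orthogonal matrix preserves Euclidean distances and inner products, which is why distance- and kernel-based learners still work on the perturbed data, while the noise component hardens it against inference attacks. This is a minimal sketch of that general idea with assumed names (`geometric_perturbation`, `noise_scale`), not the thesis's attack-aware optimized perturbation.

```python
import numpy as np

def geometric_perturbation(X, noise_scale=0.1, seed=0):
    """Rotate the dataset by a random orthogonal matrix and add noise.

    X: (n_samples, d) array. Returns the perturbed data and the
    rotation matrix, which the data owner keeps secret.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Random orthogonal matrix via QR decomposition of a Gaussian matrix;
    # the sign fix makes the rotation uniformly distributed (Haar measure).
    Q, R = np.linalg.qr(rng.standard_normal((d, d)))
    Q *= np.sign(np.diag(R))
    noise = rng.normal(scale=noise_scale, size=X.shape)
    return X @ Q.T + noise, Q
```

Since ||Qx - Qy|| = ||x - y||, a classifier trained on the rotated data sees the same geometry as the original; the privacy/utility tradeoff is then governed mainly by the noise scale.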
5

Novel frequent itemset hiding techniques and their evaluation

Καγκλής, Βασίλειος 20 May 2015
Advances in data collection and storage technologies have paved the way for the establishment of transactional databases among companies and organizations, as they allow enormous volumes of data to be stored efficiently. Most of the time, these vast amounts of data cannot be used as they are; the data must first be processed to extract useful knowledge. Once mined, that knowledge can be used in several ways depending on the nature of the data. Quite often, companies and organizations are willing to share data for mutual benefit. However, these benefits come with risks, as privacy problems might arise as a result of the sharing. Sensitive data, along with sensitive knowledge inferred from these data, must be protected from unintentional exposure to unauthorized parties. One form of inferred knowledge is frequent patterns, discovered while mining frequent itemsets from transactional databases. The problem of protecting such patterns is known as the frequent itemset hiding problem. In this thesis, we review several techniques for protecting sensitive frequent patterns in the form of frequent itemsets. After presenting a wide variety of techniques in detail, we propose a novel approach towards solving this problem: a method that combines heuristics with linear programming. We evaluate the proposed method on real datasets, using a number of performance metrics presented for this purpose. Finally, we compare the results of the newly proposed method with those of other state-of-the-art approaches.
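To illustrate how linear programming can enter itemset hiding (a sketch of the general idea, not this thesis's formulation): relax each transaction's "sanitize or not" decision to a weight in [0, 1], minimize the total modification, and constrain each sensitive itemset's support to drop to an acceptable threshold. Rounding the fractional solution back to whole transactions is where heuristics come in. Names (`lp_hiding`, `max_support`) are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def lp_hiding(db, sensitive, max_support):
    """LP relaxation for itemset hiding.

    db: list of transactions (sets of items).
    sensitive: iterable of sensitive itemsets.
    x[t] = 1 means "sanitize transaction t" (e.g., remove a sensitive item).
    Minimizes total modifications subject to each sensitive itemset's
    support dropping to at most max_support.
    """
    n = len(db)
    A_ub, b_ub = [], []
    for S in map(set, sensitive):
        row = [-1.0 if S <= t else 0.0 for t in db]  # -1 marks supporting transactions
        support = -sum(row)
        # Encode: sum(x[t] for t supporting S) >= support - max_support
        A_ub.append(row)
        b_ub.append(-(support - max_support))
    res = linprog(c=np.ones(n), A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)
    return res.x  # fractional weights, to be rounded to transactions heuristically
```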
6

Context Aware Privacy Preserving Clustering and Classification

Thapa, Nirmal 01 January 2013
Data are valuable assets to any organization or individual, and a source of useful information that underpins decision making; commerce, health, and research are some of the fields that have benefited from data. On the other hand, the availability of data makes it easy for anyone to exploit it, and in many cases the data are private and confidential. It is therefore necessary to preserve the confidentiality of the data. We study two categories of privacy: Data Value Hiding and Data Pattern Hiding. Privacy is a major concern, but data utility is equally important: data should resist privacy breaches yet remain usable. Although these two objectives are contradictory and achieving both at the same time is challenging, knowing the purpose for which the data will be used, and the manner in which it will be utilized, helps. In this research, we focus on particular situations for clustering and classification problems and strive to balance the utility and privacy of the data. In the first part of this dissertation, we propose Nonnegative Matrix Factorization (NMF) based techniques that incorporate explicitly defined constraints into the update rules. These constraints determine how the factorization takes place, leading to favorable results. The methods are designed to alter the matrices so that user-specified cluster properties are introduced, and they can be used to preserve data values as well as data patterns. As NMF and K-means are proven to be equivalent, NMF is an ideal choice for pattern hiding in clustering problems. In addition to the NMF-based methods, we propose methods that take into account the data structures and the attribute properties for classification problems; the work is separated into two parts, linear classifiers and nonlinear classifiers, with a different solution proposed for each. We also study the effect of distortion on the utility of data, proposing three distortion measurement metrics that demonstrate better characteristics than the traditional metrics. The effectiveness of the measures is examined on different benchmark datasets; the results show that the measures have desirable properties such as invariance to translation, rotation, and scaling.
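For context, the unconstrained baseline that such NMF-based techniques build on is the classic multiplicative-update factorization of Lee and Seung; the thesis's methods add privacy constraints into these update rules. Below is a minimal sketch of the baseline only, with illustrative parameter names.

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    """Unconstrained NMF via Lee-Seung multiplicative updates.

    Factorizes a nonnegative matrix V (n x m) into W (n x k) and
    H (k x m), minimizing the Frobenius reconstruction error.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-4  # small offset keeps factors positive
    H = rng.random((k, m)) + 1e-4
    for _ in range(iters):
        # Multiplicative updates; the epsilon guards against division by zero.
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H
```

Constrained variants of the kind the dissertation describes would modify these two update lines so the recovered factors respect user-specified cluster properties.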
7

A comparative analysis of database sanitization techniques for privacy-preserving association rule mining

Mårtensson, Charlie January 2023
Association rule hiding (ARH) is the process of modifying a transaction database to prevent sensitive patterns (association rules) from discovery by data miners. An optimal ARH technique successfully hides all sensitive patterns while leaving all nonsensitive patterns public. In practice, however, many ARH algorithms cause undesirable side effects, such as failing to hide sensitive rules or mistakenly hiding nonsensitive ones. Evaluating the utility of ARH algorithms therefore involves measuring the side effects they cause. There is a wide array of ARH techniques in use, with evolutionary algorithms in particular gaining popularity in recent years. However, previous research in the area has focused on incremental improvement of existing algorithms; no work was found that compares the performance of ARH algorithms without the incentive of promoting a newly suggested algorithm as superior. To fill this research gap, this project compares three ARH algorithms developed between 2019 and 2022 (ABC4ARH, VIDPSO, and SA-MDP) using identical and unbiased parameters. The algorithms were run on three real databases and three synthetic ones of various sizes, in each case given four different sets of sensitive rules to hide. Their performance was measured in terms of side effects, runtime, and scalability (i.e., performance on increasing database size). The performance of the algorithms varied considerably depending on the characteristics of the input data, with no algorithm consistently outperforming the others at mitigating side effects. VIDPSO was the most efficient in terms of runtime, while ABC4ARH maintained the most robust performance as the database size increased. However, results matching the quality of those in the papers originally describing each algorithm could not be reproduced, showing a clear need to validate the reproducibility of research before its results can be trusted.
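The side-effect measurements used in this kind of evaluation typically reduce to set comparisons between the rules mined before and after sanitization. A minimal sketch under that assumption (names are illustrative; the project's exact metric definitions may differ):

```python
def side_effects(sensitive, before, after):
    """Count standard ARH side effects.

    sensitive: set of sensitive rules to hide.
    before/after: sets of rules mined from the database before and
    after sanitization.
    """
    hiding_failure = len(sensitive & after)          # sensitive rules still exposed
    lost_rules = len((before - sensitive) - after)   # nonsensitive rules hidden by mistake
    ghost_rules = len(after - before)                # spurious rules introduced
    return hiding_failure, lost_rules, ghost_rules
```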
8

Anonymous Opt-Out and Secure Computation in Data Mining

Shepard, Samuel Steven 09 November 2007
No description available.
