181
Privacy preserving in serial data and social network publishing. January 2010 (has links)
Liu, Jia. "August 2010." Thesis (M.Phil.)--Chinese University of Hong Kong, 2010. Includes bibliographical references (p. 69-72). Abstracts in English and Chinese.
Contents:
  1 Introduction --- p.1
  2 Related Work --- p.3
  3 Privacy Preserving Network Publication against Structural Attacks --- p.5
    3.1 Background and Motivation --- p.5
      3.1.1 Adversary Knowledge --- p.6
      3.1.2 Targets of Protection --- p.7
      3.1.3 Challenges and Contributions --- p.10
    3.2 Preliminaries and Problem Definition --- p.11
    3.3 Solution: K-Isomorphism --- p.15
    3.4 Algorithm --- p.18
      3.4.1 Refined Algorithm --- p.21
      3.4.2 Locating Vertex Disjoint Embeddings --- p.30
      3.4.3 Dynamic Releases --- p.32
    3.5 Experimental Evaluation --- p.34
      3.5.1 Datasets --- p.34
      3.5.2 Data Structure of K-Isomorphism --- p.37
      3.5.3 Data Utilities and Runtime --- p.42
      3.5.4 Dynamic Releases --- p.47
    3.6 Conclusions --- p.47
  4 Global Privacy Guarantee in Serial Data Publishing --- p.49
    4.1 Background and Motivation --- p.49
    4.2 Problem Definition --- p.54
    4.3 Breach Probability Analysis --- p.57
    4.4 Anonymization --- p.58
      4.4.1 AG Size Ratio --- p.58
      4.4.2 Constant-Ratio Strategy --- p.59
      4.4.3 Geometric Strategy --- p.61
    4.5 Experiment --- p.62
      4.5.1 Dataset --- p.62
      4.5.2 Anonymization --- p.63
      4.5.3 Evaluation --- p.64
    4.6 Conclusion --- p.68
  Bibliography --- p.69
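
The Chapter 3 solution concept, k-isomorphism, publishes a network as k pairwise-isomorphic, vertex-disjoint pieces, so any subgraph an adversary knows matches at least k candidate embeddings. Below is a minimal sketch of checking that property, assuming the networkx library; the record above gives only the table of contents, and the thesis supplies its own algorithms and data structures:

```python
import networkx as nx

def is_k_isomorphic(graph, blocks):
    """Check whether a vertex partition into k blocks yields k
    pairwise-isomorphic, mutually disconnected subgraphs -- the
    anonymity property the thesis calls k-isomorphism."""
    if (sum(len(b) for b in blocks) != graph.number_of_nodes()
            or set().union(*map(set, blocks)) != set(graph.nodes)):
        return False  # blocks must partition the vertex set
    # an edge crossing two blocks breaks the disjoint-pieces requirement
    for i, a in enumerate(blocks):
        for b in blocks[i + 1:]:
            if any(graph.has_edge(u, v) for u in a for v in b):
                return False
    pieces = [graph.subgraph(block) for block in blocks]
    return all(nx.is_isomorphic(pieces[0], g) for g in pieces[1:])

# Two disjoint triangles form a valid 2-isomorphic release:
g = nx.Graph([(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)])
print(is_k_isomorphic(g, [[0, 1, 2], [3, 4, 5]]))  # True
```

Checking the property is cheap; the hard part, which the Algorithm sections address, is editing a real network into such a form while preserving utility.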

182
APPLICATION OF BLOCKCHAIN NETWORK FOR THE USE OF INFORMATION SHARING. Unknown Date (has links)
The Blockchain concept was originally developed to provide security in the Bitcoin cryptocurrency network, where trust is achieved through the provision of an agreed-upon and immutable record of transactions between parties.
The use of a Blockchain as a secure, publicly distributed ledger is applicable to fields beyond finance and is an emerging area of research across many industries.
This thesis considers the feasibility of using a Blockchain to facilitate secured information sharing between parties, where a lack of trust and absence of central control are common characteristics.
An implementation of a Blockchain information sharing system will be designed on an existing Blockchain network, with party members communicating and sharing secured information over it. The benefits and risks associated with using a public Blockchain for information sharing will also be discussed. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2019. / FAU Electronic Theses and Dissertations Collection
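
The "agreed-upon and immutable record" the abstract builds on can be shown in a few lines. The following is a minimal sketch of a generic hash-chained ledger in Python, illustrating the mechanism rather than the information-sharing system designed in the thesis:

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash a block's canonical JSON encoding."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class Ledger:
    """Append-only chain: each block commits to its predecessor's hash,
    so altering any earlier record invalidates every later block."""

    def __init__(self):
        self.chain = [{"index": 0, "data": "genesis",
                       "prev": "0" * 64, "time": time.time()}]

    def append(self, data):
        self.chain.append({"index": len(self.chain), "data": data,
                           "prev": block_hash(self.chain[-1]),
                           "time": time.time()})

    def verify(self):
        """True only if no block has been edited after the fact."""
        return all(self.chain[i]["prev"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = Ledger()
ledger.append({"from": "A", "to": "B", "doc": "report-7"})
ledger.append({"from": "B", "to": "C", "doc": "invoice-3"})
print(ledger.verify())                        # True
ledger.chain[1]["data"]["doc"] = "report-8"   # tamper with a shared record
print(ledger.verify())                        # False
```

In a real deployment the chain is replicated across the distrusting parties and extended by consensus, which is what removes the need for central control.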

183
Strategies for Improving Data Protection to Reduce Data Loss from Cyberattacks. Cannon, Jennifer Elizabeth. 01 January 2019 (has links)
Accidental and targeted data breaches threaten sustainable business practices and personal privacy, exposing all types of businesses to increased data loss and financial impacts. This single case study was conducted in a medium-sized enterprise located in Brevard County, Florida, to explore the successful data protection strategies employed by its information systems and information technology business leaders. Actor-network theory was the conceptual framework for the study, with a graphical syntax to model data protection strategies. Data were collected from semistructured interviews of 3 business leaders, archival documents, and field notes. Data were analyzed using thematic, analytic, and software analysis, and methodological triangulation. Three themes materialized from the data analyses: people--inferring security personnel, network engineers, system engineers, and qualified personnel who know how to monitor data; processes--inferring the activities required to protect data from data loss; and technology--inferring the scientific knowledge used by people to protect data from data loss. The findings are indicative of successful application of data protection strategies and may be modeled to assess vulnerabilities from technical and nontechnical threats impacting risk and loss of sensitive data. The implications of this study for positive social change include the potential to alter attitudes toward data protection, creating a better environment for people to live and work; reduce recovery costs resulting from Internet crimes, improving social well-being; and enhance methods for the protection of sensitive, proprietary, and personally identifiable information, which advances privacy rights for society.

184
Your Data Is My Data: A Framework for Addressing Interdependent Privacy Infringements. Kamleitner, Bernadette; Mitchell, Vince. January 2019 (has links) (PDF)
Everyone holds personal information about others. Each person's privacy thus critically depends on the interplay of multiple actors. In an age of technology integration, this interdependence of data protection is becoming a major threat to privacy. Yet current regulation focuses on the sharing of information between two parties rather than multiactor situations. This study highlights how current policy inadequacies, illustrated by the European Union General Data Protection Regulation, can be overcome by means of a deeper understanding of the phenomenon. Specifically, the authors introduce a new phenomenological framework to explain interdependent infringements. This framework builds on parallels between property and privacy and suggests that interdependent peer protection necessitates three hierarchical steps, "the 3Rs": realize, recognize, and respect. In response to observed failures at these steps, the authors identify four classes of intervention that constitute a toolbox addressing what can be done by marketers, regulators, and privacy organizations. While the first three classes of interventions address issues arising from the corresponding 3Rs, the authors specifically advocate for a fourth class of interventions that proposes radical alternatives that shift the responsibilities for privacy protection away from consumers.

185
A new Canadian intellectual property right: the protection of data submitted for marketing approval of pharmaceutical drugs. Stoddard, Damon. January 2006 (has links)
No description available.

186
Construction and formal security analysis of cryptographic schemes in the public key setting. Baek, Joonsang, 1973-. January 2004 (has links)
Abstract not available

187
Long term preservation of textual information in the AEC sector. Bader, Refad. University of Western Sydney, College of Health and Science, School of Computing and Mathematics. January 2007 (has links)
As we live in a fast-changing technological era, the hardware and software required to read electronic documents continue to evolve, and the technology of the near future may be so different that it no longer works on older documents. Preserving information over the long term is a well-known problem. This research investigates the potential of using XML to improve the long term preservation of textual information resulting from AEC (Architectural, Engineering and Construction) projects. It identifies and analyses the issues involved in handling information over a long period of time in this sector and maps out a strategy to solve those issues. The main focus is not the centralized preservation of documents, but rather the preservation of segments of information scattered between different decision makers in the AEC sector. In the end, a methodology based on the use of XML is presented for exchanging information between different decision makers, collecting related information from them, and preserving such information for long term purposes. / Master of Science (Hons)
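
As a toy version of the XML-based strategy the abstract argues for, the sketch below wraps one decision-maker's textual segment together with its provenance in self-describing XML. The element names and example fields are invented for this illustration, not taken from the thesis:

```python
import xml.etree.ElementTree as ET

def preserve_segment(text, author, project, date_iso):
    """Build a self-describing XML record for one textual segment.
    Plain-text XML needs only a parser, not the original authoring
    software, which is the point of using it for long-term preservation."""
    record = ET.Element("segment")
    prov = ET.SubElement(record, "provenance")
    ET.SubElement(prov, "author").text = author
    ET.SubElement(prov, "project").text = project
    ET.SubElement(prov, "date").text = date_iso
    ET.SubElement(record, "content").text = text
    return ET.tostring(record, encoding="unicode")

print(preserve_segment("Slab thickness revised to 250 mm.",
                       "J. Smith", "Bridge A12", "2007-03-14"))
```

Because the markup carries its own structure, segments scattered between different decision makers can later be collected and merged without the applications that produced them.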

188
Knowledge based anomaly detection. Prayote, Akara. Computer Science & Engineering, Faculty of Engineering, UNSW. January 2007 (has links)
Traffic anomaly detection is a standard task for network administrators, who with experience can generally differentiate anomalous traffic from normal traffic. Many approaches have been proposed to automate this task. Most of them attempt to develop a sufficiently sophisticated model to represent the full range of normal traffic behaviour. There are significant disadvantages to this approach. Firstly, a large amount of training data for all acceptable traffic patterns is required to train the model. For example, it can be perfectly obvious to an administrator how traffic changes on public holidays, but very difficult, if not impossible, for a general model to learn to cover such irregular or ad-hoc situations. In contrast, in the proposed method, a number of models are gradually created to cover a variety of seen patterns while in use. Each model covers a specific region in the problem space, so any novel or ad-hoc pattern can be covered easily. The underlying technique is a knowledge acquisition approach named Ripple Down Rules. In essence, we use Ripple Down Rules to partition the domain and add new partitions as new situations are identified. Within each supposedly homogeneous partition, we use fairly simple statistical techniques to identify anomalous data. The special feature of these statistics is that they are reasonably robust with small amounts of data, and this critical situation occurs whenever a new partition is added. We have developed a two-knowledge-base approach: one knowledge base partitions the domain, and within each partition statistics are accumulated on a number of different parameters. The resultant data are passed to a second knowledge base, which decides whether enough parameters are anomalous to raise an alarm. We evaluated the approach on real network data. The results compare favourably with other techniques, but with the advantage that the RDR approach allows new patterns of use to be rapidly added to the model. We also used the approach to extend previous work on prudent expert systems -- expert systems that warn when a case is outside their range of experience. Of particular significance, we were able to reduce the false positive rate to about 5%.
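
A rough sketch of the two-knowledge-base design: the first knowledge base routes traffic into partitions (the hard-coded rules below stand in for a Ripple Down Rules tree), and the second raises an alarm when enough parameters look anomalous for their partition. The median/MAD test and all thresholds are illustrative stand-ins, not the thesis's statistics:

```python
from collections import defaultdict
import statistics

def context_of(sample):
    """First knowledge base (sketch): route a traffic sample to a
    partition. RDR would let an administrator refine these rules
    whenever a new situation, e.g. a public holiday, is identified."""
    if sample["is_holiday"]:
        return "holiday"
    if sample["hour"] < 6:
        return "night"
    return "workday"

history = defaultdict(list)  # (partition, parameter) -> observed values

def is_anomalous(sample, params=("packets", "bytes"),
                 z=3.0, min_n=10, alarm_at=2):
    """Second knowledge base (sketch): count parameters that fall
    outside a robust per-partition range; alarm when enough do. With
    fewer than min_n observations the partition is still new, so we
    stay silent -- mirroring the caution with small samples."""
    ctx, hits = context_of(sample), 0
    for p in params:
        seen = history[(ctx, p)]
        if len(seen) >= min_n:
            med = statistics.median(seen)
            mad = statistics.median([abs(x - med) for x in seen]) or 1.0
            if abs(sample[p] - med) / (1.4826 * mad) > z:
                hits += 1
        seen.append(sample[p])
    return hits >= alarm_at
```

Because each partition only has to model one homogeneous slice of traffic, simple statistics suffice, and a genuinely new pattern is handled by adding a partition rather than retraining a global model.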

190
Linkability of communication contents: Keeping track of disclosed data using Formal Concept Analysis. Berthold, Stefan. January 2006 (has links)
A person who is communicated about (the data subject) has to keep track of all of his revealed data in order to protect his right of informational self-determination. This is important when data is processed in an automatic manner and, in particular, in the case of automatic inquiries. A data subject should, therefore, be enabled to make sound decisions about data disclosure using only the data available to him.

For the scope of this thesis, we assume that a data subject is able to protect his communication contents and the corresponding communication context against a third party by using end-to-end encryption and Mix cascades. The objective is to develop a model for analyzing the linkability of communication contents by using Formal Concept Analysis. In contrast to previous work, only the knowledge of the data subject is used for this analysis instead of a global view on the entire communication contents and context.

As a first step, the relation between disclosed data is explored. It is shown how data can be grouped by type and how data implications can be represented. As a second step, the behavior, i.e. the actions and reactions, of the data subject and his communication partners is included in the analysis in order to find critical data sets which can be used to identify the data subject.

Typical examples are used to verify this analysis, followed by a conclusion about the pros and cons of this method for measuring anonymity and linkability. The results can later be used to develop a similarity measure for human-computer interfaces.
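
Formal Concept Analysis makes the linkability question computable through its derivation operators: the closure A'' of a disclosed attribute set A is everything that becomes linkable once A is out. A toy sketch over an invented disclosure context (which communication partner has seen which data items):

```python
# Invented formal context: partner -> data items disclosed to it.
context = {
    "shop":  {"name", "email", "address"},
    "forum": {"email", "nickname"},
    "bank":  {"name", "address", "account"},
}

def extent(attrs):
    """A': all partners holding every attribute in attrs."""
    return {o for o, held in context.items() if attrs <= held}

def intent(objects):
    """B': attributes shared by every partner in objects
    (by convention, all attributes when objects is empty)."""
    if not objects:
        return set().union(*context.values())
    return set.intersection(*(context[o] for o in objects))

def closure(attrs):
    """A'': the data items linkable once attrs are disclosed together."""
    return intent(extent(attrs))

# Revealing name and email together already links the postal address:
print(closure({"name", "email"}))  # {'address', 'email', 'name'} (order varies)
```

A set whose closure pins down identifying attributes is exactly the kind of critical data set the thesis looks for, and it is computed from the data subject's own records alone.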