  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
211

APPLICATION OF BLOCKCHAIN NETWORK FOR THE USE OF INFORMATION SHARING

Unknown Date (has links)
The Blockchain concept was originally developed to provide security in the Bitcoin cryptocurrency network, where trust is achieved through an agreed-upon and immutable record of transactions between parties. The use of a Blockchain as a secure, publicly distributed ledger is applicable to fields beyond finance and is an emerging area of research across many other industries. This thesis considers the feasibility of using a Blockchain to facilitate secure information sharing between parties, where a lack of trust and an absence of central control are common characteristics. An implementation of a Blockchain information-sharing system will be designed on an existing Blockchain network, with party members communicating to share secured information. The benefits and risks associated with using a public Blockchain for information sharing will also be discussed. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2019. / FAU Electronic Theses and Dissertations Collection
212

Strategies for Improving Data Protection to Reduce Data Loss from Cyberattacks

Cannon, Jennifer Elizabeth 01 January 2019 (has links)
Accidental and targeted data breaches threaten sustainable business practices and personal privacy, exposing all types of businesses to increased data loss and financial impacts. This single case study was conducted in a medium-sized enterprise located in Brevard County, Florida, to explore the successful data protection strategies employed by the information system and information technology business leaders. Actor-network theory was the conceptual framework for the study with a graphical syntax to model data protection strategies. Data were collected from semistructured interviews of 3 business leaders, archival documents, and field notes. Data were analyzed using thematic, analytic, and software analysis, and methodological triangulation. Three themes materialized from the data analyses: people--inferring security personnel, network engineers, system engineers, and qualified personnel to know how to monitor data; processes--inferring the activities required to protect data from data loss; and technology--inferring scientific knowledge used by people to protect data from data loss. The findings are indicative of successful application of data protection strategies and may be modeled to assess vulnerabilities from technical and nontechnical threats impacting risk and loss of sensitive data. The implications of this study for positive social change include the potential to alter attitudes toward data protection, creating a better environment for people to live and work; reduce recovery costs resulting from Internet crimes, improving social well-being; and enhance methods for the protection of sensitive, proprietary, and personally identifiable information, which advances the privacy rights for society.
213

Your Data Is My Data: A Framework for Addressing Interdependent Privacy Infringements

Kamleitner, Bernadette, Mitchell, Vince January 2019 (has links) (PDF)
Everyone holds personal information about others. Each person's privacy thus critically depends on the interplay of multiple actors. In an age of technology integration, this interdependence of data protection is becoming a major threat to privacy. Yet current regulation focuses on the sharing of information between two parties rather than multiactor situations. This study highlights how current policy inadequacies, illustrated by the European Union General Data Protection Regulation, can be overcome by means of a deeper understanding of the phenomenon. Specifically, the authors introduce a new phenomenological framework to explain interdependent infringements. This framework builds on parallels between property and privacy and suggests that interdependent peer protection necessitates three hierarchical steps, "the 3Rs": realize, recognize, and respect. In response to observed failures at these steps, the authors identify four classes of intervention that constitute a toolbox addressing what can be done by marketers, regulators, and privacy organizations. While the first three classes of interventions address issues arising from the corresponding 3Rs, the authors specifically advocate for a fourth class of interventions that proposes radical alternatives that shift the responsibilities for privacy protection away from consumers.
214

Construction and formal security analysis of cryptographic schemes in the public key setting

Baek, Joonsang, 1973- January 2004 (has links)
Abstract not available
215

Long term preservation of textual information in the AEC sector

Bader, Refad, University of Western Sydney, College of Health and Science, School of Computing and Mathematics January 2007 (has links)
As we are living in a fast-changing technological era, the hardware and software required to read electronic documents continue to evolve, and the technology may become so different in the near future that it no longer works on older documents. Preserving information over the long term is a well-known problem. This research investigates the potential of using XML to improve the long-term preservation of textual information resulting from AEC (Architectural, Engineering and Construction) projects. It identifies and analyses the issues involved in handling information over a long period of time in this sector and maps out a strategy to solve them. The main focus is not the centralized preservation of documents, but rather the preservation of segments of information scattered between different decision makers in the AEC sector. Finally, an XML-based methodology is presented for exchanging information between different decision makers, collecting related information from them, and preserving such information in the AEC sector over the long term. / Master of Science (Hons)
216

Knowledge based anomaly detection

Prayote, Akara, Computer Science & Engineering, Faculty of Engineering, UNSW January 2007 (has links)
Traffic anomaly detection is a standard task for network administrators, who with experience can generally differentiate anomalous traffic from normal traffic. Many approaches have been proposed to automate this task. Most of them attempt to develop a sufficiently sophisticated model to represent the full range of normal traffic behaviour. There are significant disadvantages to this approach. Firstly, a large amount of training data for all acceptable traffic patterns is required to train the model. For example, it can be perfectly obvious to an administrator how traffic changes on public holidays, but very difficult, if not impossible, for a general model to learn to cover such irregular or ad hoc situations. In contrast, in the proposed method, a number of models are gradually created to cover a variety of seen patterns while in use. Each model covers a specific region in the problem space, so any novel or ad hoc patterns can be covered easily. The underlying technique is a knowledge acquisition approach named Ripple Down Rules. In essence, we use Ripple Down Rules to partition the domain and add new partitions as new situations are identified. Within each supposedly homogeneous partition we use fairly simple statistical techniques to identify anomalous data. The special feature of these statistics is that they are reasonably robust with small amounts of data; this critical situation occurs whenever a new partition is added. We have developed a two-knowledge-base approach: one knowledge base partitions the domain, and within each partition statistics are accumulated on a number of different parameters. The resultant data are passed to a second knowledge base, which decides whether enough parameters are anomalous to raise an alarm. We evaluated the approach on real network data. The results compare favourably with other techniques, but with the advantage that the RDR approach allows new patterns of use to be rapidly added to the model.
We also used the approach to extend previous work on prudent expert systems, i.e. expert systems that warn when a case is outside their range of experience. Of particular significance, we were able to reduce the false positive rate to about 5%.
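The partition-then-test idea described above can be sketched as follows. This is a toy stand-in, assuming hypothetical routing rules and thresholds; the thesis's actual RDR machinery grows its rule base incrementally rather than hard-coding it:

```python
from statistics import mean, stdev

class Partition:
    """Per-region statistics for one monitored traffic parameter."""
    def __init__(self):
        self.samples = []

    def check_and_record(self, value, min_samples=5, k=3.0):
        # Withhold alarms until enough samples exist -- the critical case
        # right after a new partition has been added.
        flagged = False
        if len(self.samples) >= min_samples:
            m, s = mean(self.samples), stdev(self.samples)
            flagged = s > 0 and abs(value - m) > k * s
        self.samples.append(value)
        return flagged

def partition_key(obs):
    # Stand-in for the first (partitioning) knowledge base; RDR would
    # refine these rules as new situations are identified in use.
    return "weekend" if obs["day"] in ("sat", "sun") else "weekday"

partitions = {}
def check(obs):
    p = partitions.setdefault(partition_key(obs), Partition())
    return p.check_and_record(obs["mbps"])

for mbps in (100, 101, 99, 102, 100, 98):       # weekday baseline traffic
    check({"day": "mon", "mbps": mbps})
assert check({"day": "tue", "mbps": 500})       # weekday spike: flagged
assert not check({"day": "sat", "mbps": 500})   # fresh partition: no alarm yet
```

In the thesis's design, a second knowledge base would then decide, across all monitored parameters, whether enough of them are flagged to raise an alarm.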
218

Linkability of communication contents: Keeping track of disclosed data using Formal Concept Analysis

Berthold, Stefan January 2006 (has links)
A person who is communicated about (the data subject) has to keep track of all of his revealed data in order to protect his right of informational self-determination. This is important when data is processed automatically and, in particular, in the case of automatic inquiries. A data subject should therefore be enabled to make sound decisions about data disclosure using only the data available to him.

For the scope of this thesis, we assume that a data subject is able to protect his communication contents and the corresponding communication context against a third party by using end-to-end encryption and Mix cascades. The objective is to develop a model for analyzing the linkability of communication contents using Formal Concept Analysis. In contrast to previous work, only the knowledge of the data subject is used for this analysis rather than a global view of the entire communication contents and context.

As a first step, the relation between disclosed data is explored. It is shown how data can be grouped by type and how data implications can be represented. As a second step, the behaviour, i.e. the actions and reactions, of the data subject and his communication partners is included in the analysis in order to find critical data sets which can be used to identify the data subject.

Typical examples are used to verify this analysis, followed by a conclusion about the pros and cons of this method for anonymity and linkability measurement. The results can later be used to develop a similarity measure for human-computer interfaces.
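The core FCA operation such an analysis builds on, deriving formal concepts from a binary context via the extent/intent Galois connection, can be sketched on a toy context (object and attribute names are illustrative only, not the thesis's data):

```python
from itertools import combinations

# Toy formal context: which disclosed data items (attributes) appear in
# which communications (objects).
context = {
    "msg1": {"name", "email"},
    "msg2": {"email", "location"},
    "msg3": {"name", "email", "location"},
}

def intent(objs):
    # Attributes shared by every object in the set.
    sets = [context[o] for o in objs]
    if not sets:
        return {a for attrs in context.values() for a in attrs}
    return set.intersection(*sets)

def extent(attrs):
    # Objects possessing every attribute in the set.
    return {o for o, s in context.items() if attrs <= s}

# Enumerate formal concepts (extent, intent) by closing each object subset
# under the Galois connection extent(intent(.)).
concepts = set()
for r in range(len(context) + 1):
    for combo in combinations(context, r):
        e = extent(intent(set(combo)))
        concepts.add((frozenset(e), frozenset(intent(e))))

# Each concept groups communications that are linkable through the same
# set of disclosed data items.
assert (frozenset({"msg1", "msg3"}), frozenset({"name", "email"})) in concepts
```

In the linkability setting, a concept whose intent uniquely characterizes the data subject marks a critical data set of the kind the thesis searches for.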
219

Designing for Privacy in Interactive Systems

Jensen, Carlos 29 November 2005 (has links)
People are increasingly concerned about online privacy and how computers collect, process, share, and store their personal information. Such concerns are understandable given the growing number of privacy invasions and the pervasiveness of information capture and sharing between IT systems. This situation has led to an increasingly regulated environment, limiting what systems may do, and what safeguards they must offer users. Privacy is an especially important concern in the fields of computer supported collaborative work (CSCW), Ubiquitous Computing, and e-commerce, where the nature of the applications often requires some information collection and sharing. In order to minimize risks to users it is essential to identify privacy problems early in the design process. Several methods and frameworks for accomplishing this have been proposed in the last decades. These frameworks, though based on hard-earned experience and great insight, have not seen widespread adoption despite the high level of interest in this topic. Part of the reason for this is likely the lack of evaluation and study of these frameworks. In our research we examine the key design and analysis frameworks and their elements, and compare these to the kinds of problems users face and are concerned with in terms of privacy. Based on this analysis of the relative strengths and weaknesses of existing design frameworks we derive a new design framework; STRAP (STRuctured Analysis of Privacy). In STRAP we combine light-weight goal-oriented analysis with heuristics to provide a simple yet effective design framework. We validate our analysis by demonstrating in a series of design experiments that STRAP is more efficient and effective than any one of the existing design frameworks, and provide quantitative and qualitative evidence of the value of using such frameworks as part of the design process.
220

Timed-Release Proxy Conditional Re-Encryption for Cloud Computing

Chen, Jun-Cheng 30 August 2011 (has links)
Mobile technology is developing very fast, and it is now common for people to fetch or edit files over the Internet using mobile devices such as notebooks, smart phones, and so on. Because a user may own several devices, keeping a file synchronized across them can be inconvenient, making it hard to edit the same file from every device. Recently, cloud technology has become more and more popular, and some new business models have been launched. One of them is the storage platform Dropbox, which synchronizes users' files across their own devices and also allows users to share their files with others. However, Dropbox has been criticized for not protecting the privacy of stored files well. Many encryption schemes have been proposed in the literature, but most of them do not support secret file sharing when deployed in a cloud environment. Even the schemes that do support it only allow a file owner to share all of his files with others. In some situations, the file owner may want to ensure that the receiver cannot decrypt the ciphertext until a specified time arrives. The existing encryption schemes cannot achieve these goals simultaneously. Hence, in order to cope with these problems, we propose a timed-release proxy conditional re-encryption scheme for cloud computing. Not only are users' files stored safely, but each user can also freely share a desired file with another user. Furthermore, the receiver cannot obtain any information about the file until the chosen time arrives. Finally, we demonstrate the security of our proposed scheme via formal proofs.
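The timed-release property can be illustrated with a deliberately simplified escrow model: a trusted time server withholds the file key until the release time. This is a conceptual stand-in, not the paper's proxy re-encryption construction, and the XOR stream cipher below is a toy for illustration, not a secure primitive:

```python
import hashlib
import os

def xor_cipher(key, data):
    # Toy stream cipher derived from SHA-256 (illustration only, NOT secure).
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ s for b, s in zip(data, stream))

class TimeServer:
    """Trusted party that escrows a file key and refuses to release it
    before the agreed time (a simulated clock stands in for real time)."""
    def __init__(self, now=0):
        self.now = now
        self._escrow = {}  # file id -> (key, release_time)

    def escrow(self, fid, key, release_time):
        self._escrow[fid] = (key, release_time)

    def release(self, fid):
        key, t = self._escrow[fid]
        if self.now < t:
            raise PermissionError("release time not reached")
        return key

server = TimeServer(now=5)
key = os.urandom(32)
ciphertext = xor_cipher(key, b"quarterly report")  # owner encrypts the file
server.escrow("doc-1", key, release_time=10)       # and escrows the key

try:
    server.release("doc-1")       # receiver asks too early: refused
except PermissionError:
    pass
server.now = 10                   # the chosen time arrives
plaintext = xor_cipher(server.release("doc-1"), ciphertext)
assert plaintext == b"quarterly report"
```

The paper's scheme removes the need to escrow the key itself: a proxy re-encrypts the owner's ciphertext for the receiver, and the time component keeps it undecryptable until the chosen moment.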
