741

Security in Practice: Examining the Collaborative Management of Sensitive Information in Childcare Centers and Physicians' Offices

Vega, Laurian 06 May 2011 (has links)
Traditionally, security has been conceptualized as rules, locks, and passwords. More recently, security research has explored how people interact in secure (or insecure) ways as part of a larger socio-technical system. Socio-technical systems comprise people, technology, relationships, and interactions that work together to create safe praxis. Because information systems are not just technical but also social, the scope of privacy and security concerns must include social and technical factors. Clearly, computer security is enhanced by developments in the technical arena, where researchers are building ever more secure and robust systems to guard the privacy and confidentiality of information. However, when the definition of security is broadened to encompass both human and technical mechanisms, how security is managed with and through day-to-day social work practices becomes increasingly important. In this dissertation I focus on how sensitive information is collaboratively managed in socio-technical systems by examining two domains: childcare centers and physicians' offices. In childcare centers, workers manage the enrolled children and also each enrolled child's personal information. In physicians' offices, workers manage the patients' health along with the patients' health information. My dissertation presents results from interviews and observations at these locations. The data collected consist of observation notes, interview transcriptions, pictures, and forms. From these data, I identified breakdowns related to security and privacy. After using Activity Theory to structure, categorize, and analyze the observed breakdowns, I applied phenomenological methods to understand the context and experience of security and privacy. The outcomes of this work are three themes, along with corresponding future scenarios. The themes discussed are security embodiment, communities of security, and zones of ambiguity. 
These themes extend the literature in the areas of usable security, human-computer interaction, and trust. I use future scenarios to examine the complexity of developing secure systems for the real world. / Ph. D.
742

Achieving Security and Privacy in the Internet Protocol Version 6 Through the Use of Dynamically Obscured Addresses

Dunlop, Matthew William 24 April 2012 (has links)
Society's increased use of network applications, such as email, social networking, and web browsing, creates a massive amount of information floating around in cyberspace. An attacker can collect this information to build a profile of where people go, what their interests are, and even what they are saying to each other. For certain government and corporate entities, the exposure of this information could risk national security or loss of capital. This work identifies vulnerabilities in the way the Internet Protocol version 6 (IPv6) forms addresses. These vulnerabilities provide attackers with the ability to track a node's physical location, correlate network traffic with specific users, and even launch attacks against users' systems. A Moving Target IPv6 Defense (MT6D) that rotates through dynamically obscured network addresses while maintaining existing connections was developed to prevent these addressing vulnerabilities. MT6D is resistant to the IPv6 addressing vulnerabilities since addresses are not tied to host identities and continuously change. MT6D leverages the immense address space of IPv6 to provide an environment that is infeasible to search efficiently. Address obscuration in MT6D occurs throughout ongoing sessions to provide continued anonymity, confidentiality, and security to communicating hosts. Rotating addresses mid-session prevents an attacker from determining that the same two hosts are communicating. The dynamic addresses also force an attacker to repeatedly reacquire the target node before he or she can launch a successful attack. A proof of concept was developed that demonstrates the feasibility of MT6D and its ability to seamlessly bind new IPv6 addresses. Also demonstrated is MT6D's ability to rotate addresses mid-session without dropping or renegotiating sessions. This work makes three contributions to the state of the art in IPv6 research. 
First, it fully explores the security vulnerabilities associated with IPv6 address formation and demonstrates them on a production IPv6 network. Second, it provides a method for dynamically rotating network addresses that defeats these vulnerabilities. Finally, a functioning prototype is presented that proves how network addresses can be dynamically rotated without losing established network connections. If IPv6 is to be globally deployed, it must not provide additional attack vectors that expose user information. / Ph. D.
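The address rotation described above can be sketched in outline: both endpoints derive the same obscured address by hashing a host identifier, a pre-shared secret, and the current time slot, so the address changes every interval without per-rotation coordination. The function names and parameters below are illustrative assumptions, not taken from the actual MT6D implementation.

```python
import hashlib

def mt6d_interface_id(host_iid: bytes, shared_key: bytes, slot: int) -> bytes:
    """Hash the host identifier, pre-shared key, and time slot; the first
    64 bits of the digest become the IPv6 interface identifier."""
    digest = hashlib.sha256(host_iid + shared_key + slot.to_bytes(8, "big")).digest()
    return digest[:8]

def mt6d_address(prefix: str, host_iid: bytes, shared_key: bytes, slot: int) -> str:
    """Form a full IPv6 address from a /64 network prefix and the
    time-varying interface identifier."""
    iid = mt6d_interface_id(host_iid, shared_key, slot)
    groups = [iid[i:i + 2].hex() for i in range(0, 8, 2)]
    return prefix + ":" + ":".join(groups)

# In deployment, both endpoints would compute slot = int(time.time()) // interval,
# so the shared address rotates automatically every `interval` seconds.
```

Because the derivation is deterministic given the key and slot, both hosts agree on each new address without exchanging it, while an outside observer sees only unrelated-looking addresses drawn from the 2^64 interface-ID space.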
743

Improving the Security, Privacy, and Anonymity of a Client-Server Network through the Application of a Moving Target Defense

Morrell, Christopher Frank 03 May 2016 (has links)
The amount of data that is shared on the Internet is growing at an alarming rate. Current estimates state that approximately 2.5 exabytes of data were generated every day in 2012. This rate is only growing as people continue to increase their online presence. As the amount of data grows, so too do the number of people who are attempting to gain access to the data. Attackers try many methods to gain access to information, including a number of attacks that occur at the network layer. A network-based moving target defense is a technique that obfuscates the location of a machine on the Internet by arbitrarily changing its IP address periodically. MT6D is one of these techniques that leverages the size of the IPv6 address space to make it statistically impossible for an attacker to find a specific target machine. MT6D was designed with a number of limitations that include manually generated static configurations and support for only peer-to-peer networks. This work presents extensions to MT6D that provide dynamically generated configurations, a secure and dynamic means of exchanging configurations, and, with these new features, an ability to function as a server supporting a large number of clients. This work makes three primary contributions to the field of network-based moving target defense systems. First, it provides a means to exchange arbitrary information in a way that provides network anonymity, authentication, and security. Second, it demonstrates a technique that gives MT6D the capability to exchange configuration information by only sharing public keys. Finally, it introduces a session establishment protocol that clients can use to establish concurrent connections with an MT6D server. / Ph. D.
744

Security and Performance Issues in Spectrum Sharing between Disparate Wireless Networks

Vaka, Pradeep Reddy 08 June 2017 (has links)
The United States Federal Communications Commission (FCC) in its recent report and order has prescribed the creation of the Citizens Broadband Radio Service (CBRS) in the 3.5 GHz band to enable sharing between wireless broadband devices and incumbent radar systems. This sharing will be enabled by use of a geolocation database with supporting infrastructure termed the Spectrum Access System (SAS). Although using the SAS for spectrum sharing has many pragmatic advantages, it also raises potentially serious operational security (OPSEC) issues. In this thesis, we explore OPSEC, location privacy in particular, of incumbent radars in the 3.5 GHz band. First, we show that adversarial secondary users can easily infer the locations of incumbent radars by making seemingly innocuous queries to the database. Then, we propose several obfuscation techniques that can be implemented by the SAS for countering such inference attacks. We also investigate the obfuscation techniques' efficacy in minimizing spectral efficiency loss while preserving incumbent privacy. Recently, 3GPP Rel. 13 has specified a new standard to provide wide-area connectivity for IoT, termed Narrowband IoT (NB-IoT). NB-IoT achieves excellent coexistence with legacy mobile standards and can be deployed in any of the 2G/3G/4G spectrum (450 MHz to 3.5 GHz). Recent industry efforts show deployment of IoT networks in unlicensed spectrum, including shared bands (e.g., the 3.5 GHz band). However, operating NB-IoT systems in the 3.5 GHz band can result in significant block error rate (BLER) and coverage loss. In this thesis, we analyze results from extensive experimental studies on the coexistence of NB-IoT and radar systems, and demonstrate the coverage loss of NB-IoT in shared spectrum. / Master of Science
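One simple form of location obfuscation of the kind evaluated in such work (an illustrative sketch only; the specific techniques proposed in the thesis may differ) is for the database to snap the protected radar's location to a coarse grid cell and inflate the reported exclusion radius, so query responses never pinpoint the true coordinates:

```python
def obfuscate_exclusion_zone(x, y, radius, cell=10.0):
    """Return a coarsened exclusion zone: the radar's true location is
    replaced by its grid-cell center, and the radius is inflated by half
    the cell diagonal so the reported zone still covers the true one.
    Privacy improves as `cell` grows, at the cost of spectral efficiency."""
    gx = (x // cell) * cell + cell / 2.0
    gy = (y // cell) * cell + cell / 2.0
    return gx, gy, radius + (2 ** 0.5) * cell / 2.0
```

Every location inside a cell maps to the same reported zone, so an adversary issuing repeated queries learns only the cell, not the radar's position; the inflated radius is the spectral-efficiency loss the obfuscation trades for that privacy.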
745

Privacy-Preserving Synthetic Medical Data Generation with Deep Learning

Torfi, Amirsina 26 August 2020 (has links)
Deep learning models have demonstrated good performance in various domains such as Computer Vision and Natural Language Processing. However, the utilization of data-driven methods in healthcare raises privacy concerns, which creates limitations for collaborative research. A remedy to this problem is to generate and employ synthetic data to address privacy concerns. Existing methods for artificial data generation suffer from different limitations, such as being bound to particular use cases. Furthermore, their generalizability to real-world problems is controversial regarding the uncertainties in defining and measuring key realistic characteristics. Hence, there is a need to establish insightful metrics (and to measure the validity of synthetic data), as well as quantitative criteria regarding privacy restrictions. We propose the use of Generative Adversarial Networks to help satisfy requirements for realistic characteristics and acceptable values of privacy metrics, simultaneously. The present study makes several unique contributions to synthetic data generation in the healthcare domain. First, we propose a novel domain-agnostic metric to evaluate the quality of synthetic data. Second, by utilizing 1-D Convolutional Neural Networks, we devise a new approach to capturing the correlation between adjacent diagnosis records. Third, we employ Convolutional Autoencoders for creating a robust and compact feature space to handle the mixture of discrete and continuous data. Finally, we devise a privacy-preserving framework that enforces Rényi differential privacy as a new notion of differential privacy. / Doctor of Philosophy / Computer programs have been widely used for clinical diagnosis but are often designed with assumptions limiting their scalability and interoperability. 
The recent proliferation of abundant health data, significant increases in computer processing power, and superior performance of data-driven methods enable a trending paradigm shift in healthcare technology. This involves the adoption of artificial intelligence methods, such as deep learning, to improve healthcare knowledge and practice. Despite the success in using deep learning in many different domains, in the healthcare field, privacy challenges make collaborative research difficult, as working with data-driven methods may jeopardize patients' privacy. To overcome these challenges, researchers propose to generate and utilize realistic synthetic data that can be used instead of real private data. Existing methods for artificial data generation are limited by being bound to special use cases. Furthermore, their generalizability to real-world problems is questionable. There is a need to establish valid synthetic data that overcomes privacy restrictions and functions as a real-world analog for healthcare deep learning data training. We propose the use of Generative Adversarial Networks to simultaneously overcome the realism and privacy challenges associated with healthcare data.
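As a rough illustration of how Rényi differential privacy is commonly enforced in gradient-based training (a sketch of the general technique, not the thesis's actual framework), each gradient is clipped to a norm bound and perturbed with calibrated Gaussian noise; one application of the Gaussian mechanism satisfies Rényi DP of order α with ε(α) = α / (2σ²):

```python
import numpy as np

def dp_gradient(grad, clip_norm, sigma, rng):
    """Clip the gradient to L2 norm `clip_norm`, then add Gaussian noise
    with standard deviation sigma * clip_norm (the Gaussian mechanism)."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=grad.shape)

def rdp_epsilon(alpha, sigma):
    """Renyi DP of order alpha for a single application of the Gaussian
    mechanism with noise multiplier sigma."""
    return alpha / (2.0 * sigma ** 2)
```

Clipping bounds any one record's influence on the update, and the noise scale is tied to that bound, which is what makes the privacy guarantee composable across training steps.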
746

Privacy Preserving Authentication Schemes and Applications

Asokan, Pranav 23 June 2017 (has links)
With the advent of smart devices, the Internet of Things, and cloud computing, the amount of information collected about an individual is enormous. Using this metadata, a complete profile of a person could be created: professional information, and personal information such as his/her choices, preferences, likes/dislikes, etc. The concept of privacy is all but lost amid this gamut of technology. Separating one's online identity from one's personal identity is nearly impossible. The conflicting interests of the two parties, the service providers' need for authentication and the users' privacy needs, are the cause of this problem. Privacy Preserving Authentication (PPA) could help solve both problems by creating valid and anonymous identities for users. Simply by proving the authenticity and integrity of this anonymous identity (without revealing/exposing it), users can obtain services whilst protecting their privacy. In this thesis, I review and analyze the various types of PPA schemes, leading to the discussion of our new scheme, 'Lightweight Anonymous Attestation with Efficient Revocation'. Finally, the scenarios where these schemes are applicable are discussed in detail. / Master of Science
747

Inclusion of Priority Access in a Privacy-preserving ESC-based DSA System

Lu, Chang 21 August 2018 (has links)
According to the Federal Communications Commission's rules and recommendations set forth for the 3.5 GHz Citizens Broadband Radio Service, a three-tiered structure shall govern the newly established shared wireless band. The three tiers correspond to three different levels of spectrum access: Incumbent Access, Priority Access, and General Authorized Access. In accordance with this dynamic spectrum access framework, we present the inclusion of the Priority Access tier into a two-tiered privacy-preserving ESC-based dynamic spectrum access system. / Master of Science / With the development of wireless communication technologies, the number of wireless-communication-reliant applications has been increasing. Most of these applications require dedicated spectrum frequencies as communication channels. As such, the radio frequency spectrum utilized and allocated for these wireless applications is depleting. This problem can be alleviated by adopting dynamic spectrum access schemes. The current static spectrum allocation scheme assigns designated spectrum frequencies to specific users. This static frequency management approach leads to inefficient frequency utilization, as the occupation of frequency channels may vary depending upon time periods. Dynamic spectrum access schemes allow unlicensed users opportunistic access to vacant spectrum spaces. Thus, the adoption of these spectrum sharing schemes will increase the efficiency of spectrum utilization and slow down spectrum depletion. However, the design and implementation of these schemes face different challenges. These spectrum sharing systems need to guarantee the privacy of the involved parties while maintaining specific functionalities required and recommended by the Federal Communications Commission. In this thesis, we present the inclusion of a three-tiered framework, approved by the Federal Communications Commission, into a privacy-preserving dynamic spectrum system.
748

Breaking Privacy in Model-Heterogeneous Federated Learning

Haldankar, Atharva Amit 14 May 2024 (has links)
Federated learning (FL) is a communication protocol that allows multiple distrustful clients to collaboratively train a machine learning model. In FL, data never leaves client devices; instead, clients only share locally computed gradients or model parameters with a central server. As individual gradients may leak information about a given client's dataset, secure aggregation was proposed. With secure aggregation, the server only receives the aggregate gradient update from the set of all sampled clients without being able to access any individual gradient. One challenge in FL is the systems-level heterogeneity that is quite often present among client devices. Specifically, clients in the FL protocol may have varying levels of compute power, on-device memory, and communication bandwidth. These limitations are addressed by model-heterogeneous FL schemes, where clients are able to train on subsets of the global model. Despite the benefits of model-heterogeneous schemes in addressing systems-level challenges, the implications of these schemes on client privacy have not been thoroughly investigated. In this thesis, we investigate whether the nature of model distribution and the computational heterogeneity among client devices in model-heterogeneous FL schemes may result in the server being able to recover sensitive information from target clients. To this end, we propose two novel attacks in the model-heterogeneous setting, even with secure aggregation in place. We call these attacks the Convergence Rate Attack and the Rolling Model Attack. The Convergence Rate Attack targets schemes where clients train on the same subset of the global model, while the Rolling Model Attack targets schemes where model parameters are dynamically updated each round. We show that a malicious adversary is able to compromise the model and data confidentiality of a target group of clients. 
We evaluate our attacks on the MNIST dataset and show that using our techniques, an adversary can reconstruct data samples with high fidelity. / Master of Science / Federated learning (FL) is a communication protocol that allows multiple distrustful users to collaboratively train a machine learning model. In FL, data never leaves user devices; instead, users only share locally computed gradients or model parameters (e.g., weight and bias values) with an aggregation server. As individual gradients may leak information about a given user's dataset, secure aggregation was proposed. Secure aggregation is a protocol that users and the server run together, where the server only receives the aggregate gradient update from the set of all sampled users instead of each individual user update. In FL, users often have varying levels of compute power, on-device memory, and communication bandwidth. These differences between users are collectively referred to as systems-level (or system) heterogeneity. While there are a number of techniques to address system heterogeneity, one popular approach is to have users train on different subsets of the global model. This approach is known as model-heterogeneous FL. Despite the benefits of model-heterogeneous FL schemes in addressing systems-level challenges, the implications of these schemes on user privacy have not been thoroughly investigated. In this thesis, we investigate whether the nature of model distribution and the differences in compute power between user devices in model-heterogeneous FL schemes may result in the server being able to recover sensitive information. To this end, we propose two novel attacks in the model-heterogeneous setting with secure aggregation in place. We call these attacks the Convergence Rate Attack and the Rolling Model Attack. 
The Convergence Rate Attack targets schemes where users train on the same subset of the global model, while the Rolling Model Attack targets schemes where model parameters may change each round. We first show that a malicious server is able to obtain individual user updates, despite secure aggregation being in place. Then, we demonstrate how an adversary can utilize those updates to reverse engineer data samples from users. We evaluate our attacks on the MNIST dataset, a commonly used dataset of handwritten digits and their labels. We show that by running our attacks, an adversary can accurately identify what images a user trained on.
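The secure aggregation idea these attacks circumvent can be sketched as follows: each pair of clients agrees on a random mask that one adds and the other subtracts, so individual uploads look random to the server but the masks cancel in the sum. This toy version uses scalar updates and a shared seed for brevity; real protocols derive the pairwise masks from per-pair key agreement.

```python
import random

def masked_updates(updates, seed):
    """Return each client's update plus its pairwise masks. Client i adds
    mask m_ij for every j > i and subtracts m_ji for every j < i, so each
    mask appears exactly once with + and once with -, and the server's
    sum over all masked uploads equals the true aggregate."""
    n = len(updates)
    rng = random.Random(seed)
    masks = {(i, j): rng.uniform(-1, 1) for i in range(n) for j in range(i + 1, n)}
    masked = []
    for i in range(n):
        m = updates[i]
        for j in range(n):
            if i < j:
                m += masks[(i, j)]
            elif j < i:
                m -= masks[(j, i)]
        masked.append(m)
    return masked
```

The server learns only the aggregate; the attacks in this thesis exploit the model-heterogeneous setting, where different clients hold different model subsets, to undo exactly this hiding.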
749

Online Learning for Resource Allocation in Wireless Networks: Fairness, Communication Efficiency, and Data Privacy

Li, Fengjiao 13 December 2022 (has links)
As the Next-Generation (NextG, 5G and beyond) wireless network supports a wider range of services, optimization of resource allocation plays a crucial role in ensuring efficient use of the (limited) available network resources. Note that resource allocation may require knowledge of network parameters (e.g., channel state information and available power level) for packet scheduling. However, wireless networks operate in an uncertain environment where, in many practical scenarios, these parameters are unknown before decisions are made. In the absence of network parameters, a network controller, who performs resource allocation, may have to make decisions (aimed at optimizing network performance and satisfying users' QoS requirements) while learning. To that end, this dissertation studies two novel online learning problems that are motivated by autonomous resource management in NextG. Key contributions of the dissertation are two-fold. First, we study reward maximization under uncertainty with fairness constraints, which is motivated by wireless scheduling with Quality of Service constraints (e.g., a minimum delivery ratio requirement) under uncertainty. We formulate a framework of combinatorial bandits with fairness constraints and develop a fair learning algorithm that successfully addresses the tradeoff between reward maximization and fairness constraints. This framework can also be applied to several other real-world applications, such as online advertising and crowdsourcing. Second, we consider global reward maximization under uncertainty with distributed biased feedback, which is motivated by the problem of cellular network configuration for optimizing network-level performance (e.g., average user-perceived Quality of Experience). We study both linear-parameterized and non-parametric global reward functions, which are modeled as distributed linear bandits and kernelized bandits, respectively. 
For each model, we propose a learning algorithmic framework that can be integrated with different differential privacy models. We show that the proposed algorithms can achieve near-optimal regret in a communication-efficient manner while protecting users' data privacy "for free". Our findings reveal that our developed algorithms outperform the state-of-the-art solutions in terms of the tradeoff among regret, communication efficiency, and computation complexity. In addition, our proposed models and online learning algorithms can also be applied to several other real-world applications, e.g., dynamic pricing and public policy making, which may be of independent interest to a broader research community. / Doctor of Philosophy / As the Next-Generation (NextG) wireless network supports a wider range of services, optimization of resource allocation plays a crucial role in ensuring efficient use of the (limited) available network resources. Note that resource allocation may require knowledge of network parameters (e.g., channel state information and available power level) for packet scheduling. However, wireless networks operate in an uncertain environment where, in many practical scenarios, these parameters are unknown before decisions are made. In the absence of network parameters, a network controller, who performs resource allocation, may have to make decisions (aimed at optimizing network performance and satisfying users' QoS requirements) while learning. To that end, this dissertation studies two novel online learning problems that are motivated by resource allocation in the presence of uncertainty in NextG. Key contributions of the dissertation are two-fold. First, we study reward maximization under uncertainty with fairness constraints, which is motivated by wireless scheduling with Quality of Service constraints (e.g., a minimum delivery ratio requirement) under uncertainty. 
We formulate a framework of combinatorial bandits with fairness constraints and develop a fair learning algorithm that successfully addresses the tradeoff between reward maximization and fairness constraints. This framework can also be applied to several other real-world applications, such as online advertising and crowdsourcing. Second, we consider global reward maximization under uncertainty with distributed biased feedback, which is motivated by the problem of cellular network configuration for optimizing network-level performance (e.g., average user-perceived Quality of Experience). We consider both linear-parameterized and non-parametric (unknown) global reward functions, which are modeled as distributed linear bandits and kernelized bandits, respectively. For each model, we propose a learning algorithmic framework that integrates different privacy models according to different privacy requirements or scenarios. We show that the proposed algorithms can learn the unknown functions in a communication-efficient manner while protecting users' data privacy "for free". Our findings reveal that our developed algorithms outperform the state-of-the-art solutions in terms of the tradeoff among regret, communication efficiency, and computation complexity. In addition, our proposed models and online learning algorithms can also be applied to several other real-world applications, e.g., dynamic pricing and public policy making, which may be of independent interest to a broader research community.
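A minimal sketch of how a bandit learner can respect fairness constraints (illustrative only; the dissertation's algorithm is more sophisticated and handles combinatorial actions): serve any arm that has fallen behind its minimum selection-fraction quota before exploiting the usual upper-confidence-bound index.

```python
import math

def fair_ucb_select(counts, means, t, quotas):
    """Select an arm at round t: first enforce each arm's minimum
    selection fraction (quotas[i] * t pulls by round t), then fall
    back to the standard UCB1 index for reward maximization."""
    for i in range(len(counts)):
        if counts[i] < quotas[i] * t:  # arm i is behind its fairness quota
            return i
    def ucb(i):
        if counts[i] == 0:
            return float("inf")
        return means[i] + math.sqrt(2 * math.log(max(t, 2)) / counts[i])
    return max(range(len(counts)), key=ucb)
```

The quota check captures the tradeoff the abstract describes: pulls spent satisfying fairness are pulls not spent on the empirically best arm, so the achievable regret degrades gracefully as the quotas tighten.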
750

Deidentification of Face Videos in Naturalistic Driving Scenarios

Thapa, Surendrabikram 05 September 2023 (has links)
The sharing of data has become integral to advancing scientific research, but it introduces challenges related to safeguarding personally identifiable information (PII). This thesis addresses the specific problem of sharing drivers' face videos for transportation research while ensuring privacy protection. To tackle this issue, we leverage recent advancements in generative adversarial networks (GANs) and demonstrate their effectiveness in deidentifying individuals by swapping their faces with those of others. Extensive experimentation is conducted using a large-scale dataset from ORNL, enabling the quantification of errors associated with head movements, mouth movements, eye movements, and other human factors cues. Additionally, qualitative analysis using metrics such as PERCLOS (Percentage of Eye Closure) and human evaluators provides valuable insights into the quality and fidelity of the deidentified videos. To enhance privacy preservation, we propose the utilization of synthetic faces as substitutes for real faces. Moreover, we introduce practical guidelines, including the establishment of thresholds and spot checking, to incorporate human-in-the-loop validation, thereby improving the accuracy and reliability of the deidentification process. In addition, this thesis presents mitigation strategies to effectively handle reidentification risks. By considering the potential exploitation of soft biometric identifiers or non-biometric cues, we highlight the importance of implementing comprehensive measures such as robust data user licenses and privacy protection protocols. / Master of Science / With the increasing availability of large-scale datasets in transportation engineering, ensuring the privacy and confidentiality of sensitive information has become a paramount concern. One specific area of concern is the protection of drivers' facial data captured by the National Driving Simulator (NDS) during research studies. 
The potential risks associated with the misuse or unauthorized access to such data necessitate the development of robust deidentification techniques. In this thesis, we propose a GAN-based framework for the deidentification of drivers' face videos while preserving important facial attribute information. The effectiveness of the proposed framework is evaluated through comprehensive experiments, considering various metrics related to human factors. The results demonstrate the capability of the framework to successfully deidentify face videos, enabling the safe sharing and analysis of valuable transportation research data. This research contributes to the field of transportation engineering by addressing the critical need for privacy protection while promoting data sharing and advancing human factors research.
