351

Enhancing Security and Privacy in Head-Mounted Augmented Reality Systems Using Eye Gaze

Corbett, Matthew 22 April 2024 (has links)
Augmented Reality (AR) devices are set apart from other mobile devices by the immersive experience they offer. Specifically, head-mounted AR devices can accurately sense and understand their environment through an increasingly powerful array of sensors such as cameras, depth sensors, eye gaze trackers, microphones, and inertial sensors. The ability of these devices to collect this information presents both challenges and opportunities to improve existing security and privacy techniques in this domain. In particular, eye gaze tracking is a ready-made capability for analyzing user intent, emotions, and vulnerability, and for serving as an input mechanism. However, modern AR devices lack systems to address their unique security and privacy issues. Problems such as the absence of local pairing mechanisms usable while immersed in AR environments, inadequate bystander privacy protections, and the increased vulnerability to shoulder surfing while wearing AR devices all lack viable solutions. In this dissertation, I explore how readily available eye gaze sensor data can be used to improve existing methods for assuring information security and protecting the privacy of those near the device. My research presents three new systems, BystandAR, ShouldAR, and GazePair, each of which leverages user eye gaze to improve security and privacy expectations in or with Augmented Reality. As these devices grow in power and number, such solutions are necessary to prevent the perception failures that hindered earlier devices. The work in this dissertation is presented in the hope that these solutions can improve and expedite the adoption of these powerful and useful devices. / Doctor of Philosophy / Augmented Reality (AR) devices are set apart from other mobile devices by the immersive experience they offer. The ability of these devices to collect information presents challenges and opportunities to improve existing security and privacy techniques in this domain. In this dissertation, I explore how readily available eye gaze sensor data can be used to improve existing methods for assuring security and protecting the privacy of those near the device. My research presents three new systems, BystandAR, ShouldAR, and GazePair, each of which leverages user eye gaze to improve security and privacy expectations in or with Augmented Reality. As these devices grow in power and number, such solutions are necessary to prevent the perception failures that hindered earlier devices. The work in this dissertation is presented in the hope that these solutions can improve and expedite the adoption of these powerful and useful devices.
352

Building trustworthy machine learning systems in adversarial environments

Wang, Ning 26 May 2023 (has links)
Modern AI systems, particularly with the rise of big data and deep learning in the last decade, have greatly improved our daily life and at the same time created a long list of controversies. AI systems are often subject to malicious and stealthy subversion that jeopardizes their efficacy. Many of these issues stem from the data-driven nature of machine learning. While big data and deep models significantly boost the accuracy of machine learning models, they also create opportunities for adversaries to tamper with models or extract sensitive data. Malicious data providers can compromise machine learning systems by supplying false data and intermediate computation results. Even a well-trained model can be deceived into misbehaving by an adversary who provides carefully designed inputs. Furthermore, curious parties can derive sensitive information about the training data by interacting with a machine learning model. These adversarial scenarios, known as poisoning attacks, adversarial example attacks, and inference attacks, have demonstrated that security, privacy, and robustness have become more important than ever for AI to gain wider adoption and societal trust. To address these problems, we proposed the following solutions: (1) FLARE, which detects and mitigates stealthy poisoning attacks by leveraging latent space representations; (2) MANDA, which detects adversarial examples by utilizing evaluations from diverse sources, i.e., model-based prediction and data-based evaluation; (3) FeCo, which enhances the robustness of machine learning-based network intrusion detection systems by introducing a novel representation learning method; and (4) DP-FedMeta, which preserves data privacy and improves the privacy-accuracy trade-off in machine learning systems through a novel adaptive clipping mechanism. / Doctor of Philosophy / Over the past few decades, machine learning (ML) has become increasingly popular for enhancing efficiency and effectiveness in data analytics and decision-making. Notable applications include intelligent transportation, smart healthcare, natural language generation, and intrusion detection. While machine learning methods are often employed for beneficial purposes, they can also be exploited for malicious intents. Well-trained language models have demonstrated generalizability deficiencies and intrinsic biases; generative ML models used for creating art have been repurposed by fraudsters to produce deepfakes; and facial recognition models trained on big data have been found to leak sensitive information about data owners. Many of these issues stem from the data-driven nature of machine learning. While big data and deep models significantly improve the accuracy of ML models, they also enable adversaries to corrupt models and infer sensitive data. This leads to various adversarial attacks, such as model poisoning during training, adversarially crafted data in testing, and data inference. It is evident that security, privacy, and robustness have become more important than ever for AI to gain wider adoption and societal trust. This research focuses on building trustworthy machine learning systems in adversarial environments from a data perspective. It encompasses two themes: securing ML systems against security or privacy vulnerabilities (security of AI) and using ML as a tool to develop novel security solutions (AI for security).
For the first theme, we studied adversarial attack detection in both the training and testing phases and proposed FLARE and MANDA to secure machine learning systems in the two phases, respectively. Additionally, we proposed a privacy-preserving learning system, DP-FedMeta, to defend against privacy inference attacks, achieving a good trade-off between accuracy and privacy through an adaptive data clipping and perturbing method. For the second theme, the research focuses on enhancing the robustness of intrusion detection systems through data representation learning.
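The abstract describes MANDA only at a high level. The sketch below illustrates the general idea of combining a model-based signal with a data-based signal to flag suspicious inputs; the classifier interface, the k-nearest-neighbor check, and the thresholds are assumptions for illustration, not the actual MANDA design.

```python
# Illustrative sketch (not the actual MANDA algorithm): flag an input as a
# suspected adversarial example when the model-based evaluation (softmax
# confidence) and the data-based evaluation (labels of nearby training
# samples) disagree, or when confidence is low.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def detect_adversarial(model, x, X_train, y_train, conf_threshold=0.9, k=5):
    # Model-based evaluation: predicted label and its confidence.
    probs = model.predict_proba(x.reshape(1, -1))[0]
    model_label = int(np.argmax(probs))
    confidence = float(probs[model_label])

    # Data-based evaluation: label suggested by the k nearest training samples.
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    knn_label = int(knn.predict(x.reshape(1, -1))[0])

    # Disagreement between the two sources, or low confidence, raises a flag.
    return model_label != knn_label or confidence < conf_threshold
```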
353

Privacy-aware Federated Learning with Global Differential Privacy

Airody Suresh, Spoorthi 31 January 2023 (has links)
There is an increasing need for low-power neural systems as neural networks become more widely used in embedded devices with limited resources. Spiking neural networks (SNNs) are proving to be a more energy-efficient alternative to conventional artificial neural networks (ANNs), which are recognized as computationally heavy. Despite their significance, not enough attention has been paid to training SNNs with large-scale distributed machine learning techniques such as Federated Learning (FL). As federated learning involves many energy-constrained devices, there is a significant opportunity to take advantage of the energy efficiency offered by SNNs. However, the real-world communication constraints of an FL system must be addressed, which is done here with three communication reduction techniques: model compression, partial device participation, and periodic aggregation. Furthermore, the convergence of federated learning systems is also affected by data heterogeneity. Federated learning systems are capable of protecting the private data of clients from adversaries; however, by analyzing the uploaded client parameters, confidential information can still be revealed. To combat privacy attacks on FL systems, various attempts have been made to incorporate differential privacy within the framework. In this thesis, we investigate the trade-offs between communication costs and training variance in a Federated Learning system with Differential Privacy applied at the parameter server (curator model). / Master of Science / Federated Learning is a decentralized method of training neural network models; it employs several participating devices to independently learn a model on their local data partitions. These local models are then aggregated at a central server to achieve the same performance as if the model had been trained centrally. Federated Learning systems, however, accumulate a communication overhead, and various communication reduction techniques can be used to lower these costs. Spiking Neural Networks, as a more energy-efficient alternative to Artificial Neural Networks, can be utilized in Federated Learning systems, since FL systems consist of a network of energy-constrained devices. Federated learning systems help preserve the privacy of the data in the system; however, an attacker can still obtain meaningful information from the parameters that are transmitted during a session. To this end, differential privacy techniques are utilized to address privacy concerns in Federated Learning systems. In this thesis, we compare and contrast different communication costs and parameters of a federated learning system with differential privacy applied to it.
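As a rough illustration of the curator model described above, the sketch below clips each client update and adds Gaussian noise at the parameter server before the aggregate is applied. The clipping norm, noise multiplier, and update shapes are assumptions for illustration and are not taken from the thesis.

```python
# Minimal sketch of global (curator-model) differential privacy in federated
# learning: the server clips each client update to a fixed L2 norm, averages
# the clipped updates, and adds Gaussian noise before applying the result.
import numpy as np

def dp_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        scale = min(1.0, clip_norm / (norm + 1e-12))  # enforce L2 norm <= clip_norm
        clipped.append(update * scale)
    mean_update = np.mean(clipped, axis=0)
    # Noise scale is calibrated to the clipping norm and the number of clients.
    sigma = noise_multiplier * clip_norm / len(client_updates)
    noise = rng.normal(0.0, sigma, size=mean_update.shape)
    return mean_update + noise

# Example: aggregate three synthetic client updates of dimension 4.
updates = [np.random.default_rng(i).standard_normal(4) for i in range(3)]
new_global_delta = dp_aggregate(updates)
```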
354

At Home In the City: an exploration of the relationship between density, privacy, and flexibility in urban housing

Knowlson, Byron James 14 December 2011 (has links)
"When the immediate vicinity is neither visible nor audible, the city apartment integrated into the urban fabric can be far more luxurious than the detached country home, provided both alternatives offer identical, house-like qualities of living: in the interior and at the transition to the appropriate exterior space - a small yard, a winter garden or a roof patio... ...the decision to opt for home ownership beyond the city boundaries, a voluntary choice it would seem, is in truth a flight from the insufficient housing options in the city, and less a rejection of the city as a place to live." Klaus-Dieter Weiss / Master of Architecture
355

Detecting Hidden Wireless Cameras through Network Traffic Analysis

Cowan, KC Kaye 02 October 2020 (has links)
Wireless cameras dominate the home surveillance market, providing an additional layer of security for homeowners. Cameras are not limited to private residences; retail stores, public bathrooms, and public beaches represent only some of the possible locations where wireless cameras may be monitoring people's movements. When cameras are deployed in an environment, one would typically expect the user to disclose the presence of the camera as well as its location, which should be outside of any private area. However, adversarial camera users may withhold this information and prevent others from discovering the camera, forcing others to determine on their own whether they are being recorded. To uncover hidden cameras, a wireless camera detection system must be developed that recognizes the camera's network traffic characteristics. We monitor the network traffic within the immediate area using a separately developed packet sniffer, a program that observes and collects information about network packets. We analyze and classify these packets based on how well their patterns and features match those expected of a wireless camera. Using a Support Vector Machine classifier and a secondary level of classification to reduce false positives, we designed and implemented a system that uncovers the presence of hidden wireless cameras within an area. / Master of Science / Wireless cameras may be found almost anywhere, whether they are used to monitor city traffic and report on travel conditions or to act as home surveillance when residents are away. Regardless of their purpose, wireless cameras may observe people wherever they are, as long as a power source and Wi-Fi connection are available. While most wireless camera users install such devices for peace of mind, some take advantage of cameras to record others without their permission, sometimes in compromising positions or places. Because of this, systems are needed that can detect hidden wireless cameras. We develop a system that monitors network traffic packets, specifically their packet lengths and direction, and determines whether the properties of the packets mimic those of a wireless camera stream. A double-layered classification technique is used to uncover hidden wireless cameras and filter out non-wireless-camera devices.
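The abstract outlines the detection pipeline only in broad strokes. The following is a minimal sketch, assuming pre-extracted per-flow statistics such as mean packet length, packet-length variance, upstream byte fraction, and packet rate, of how a Support Vector Machine could be trained to separate camera streams from other traffic. The feature names, training values, and parameters are illustrative assumptions, not the thesis's actual feature set.

```python
# Minimal sketch: classify network flows as "wireless camera" vs. "other"
# from simple packet-length and direction statistics using an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical per-flow features: [mean packet length (bytes), std of packet
# length, fraction of upstream (device-to-server) bytes, packets per second].
X_train = np.array([
    [1100.0, 180.0, 0.95, 240.0],   # camera-like: large, steady upstream video
    [1050.0, 200.0, 0.93, 260.0],
    [300.0,  450.0, 0.40, 30.0],    # non-camera: bursty, mixed-direction traffic
    [250.0,  500.0, 0.35, 20.0],
])
y_train = np.array([1, 1, 0, 0])    # 1 = wireless camera, 0 = other device

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)

# A second classification stage (e.g., re-scoring borderline flows) could be
# added here to reduce false positives, as the thesis describes at a high level.
new_flow = np.array([[1080.0, 190.0, 0.94, 250.0]])
print(clf.predict(new_flow), clf.decision_function(new_flow))
```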
356

Exploiting Update Leakage in Searchable Symmetric Encryption

Haltiwanger, Jacob Sayid 15 March 2024 (has links)
Dynamic Searchable Symmetric Encryption (DSSE) provides efficient techniques for securely searching and updating an encrypted database. However, efficient DSSE schemes leak some sensitive information to the server. Recent works have implemented forward and backward privacy as security properties to reduce the amount of information leaked during update operations. Many attacks have shown that leakage from search operations can be abused to compromise the privacy of client queries. However, the attack literature has not rigorously investigated techniques to abuse update leakage. In this work, we investigate update leakage under DSSE schemes with forward and backward privacy from the perspective of a passive adversary. We propose two attacks based on a maximum likelihood estimation approach, the UFID Attack and the UF Attack, which target forward-private DSSE schemes with no backward privacy and Level 2 backward privacy, respectively. These are the first attacks to show that it is possible to leverage the frequency and contents of updates to recover client queries. We propose a variant of each attack that allows the update leakage to be combined with search pattern leakage to achieve higher accuracy. We evaluate our attacks against a real-world dataset and show that using update leakage can improve the accuracy of attacks against DSSE schemes, especially those without backward privacy. / Master of Science / Remote data storage is a ubiquitous application made possible by the prevalence of cloud computing. Dynamic Searchable Symmetric Encryption (DSSE) is a privacy-preserving technique that allows a client to search and update a remote encrypted database while greatly restricting the information the server can learn about the client's data and queries. However, all efficient DSSE schemes have some information leakage that can allow an adversarial server to infringe upon the privacy of clients. Many prior works have studied the dangers of leakage caused by the search operation, but have neglected the leakage from update operations. As such, researchers have been unsure about whether update leakage poses a threat to user privacy. To address this research gap, we propose two new attacks which exploit leakage from DSSE update operations. Our attacks are aimed at learning what keywords a client is searching and updating, even in DSSE schemes with forward and backward privacy, two security properties implemented by the strongest DSSE schemes. Our UFID Attack compromises forward-private schemes while our UF Attack targets schemes with both forward privacy and Level 2 backward privacy. We evaluate our attacks on a real-world dataset and show that they efficiently compromise client query privacy under realistic conditions.
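The abstract does not spell out the attack procedure; as a loose illustration of the general idea of matching observed update volumes to auxiliary keyword frequency estimates (not the actual UFID or UF Attacks, which use a maximum likelihood formulation), consider the sketch below. The auxiliary frequencies and the greedy assignment are assumptions for illustration only.

```python
# Illustrative sketch of frequency matching: given the number of encrypted
# updates observed for each (unknown) keyword token and auxiliary estimates of
# how often each plaintext keyword is updated, guess the token-to-keyword
# assignment by greedily pairing the closest frequencies.
def match_tokens_to_keywords(observed_counts, keyword_freqs):
    # observed_counts: {token_id: number of updates seen for that token}
    # keyword_freqs:   {keyword: expected number of updates from auxiliary data}
    guesses = {}
    remaining = dict(keyword_freqs)
    # Assign the busiest tokens first, each to the closest-frequency keyword.
    for token, count in sorted(observed_counts.items(), key=lambda kv: -kv[1]):
        best = min(remaining, key=lambda kw: abs(remaining[kw] - count))
        guesses[token] = best
        del remaining[best]
    return guesses

# Example with made-up numbers.
observed = {"t1": 120, "t2": 40, "t3": 75}
aux = {"invoice": 118, "meeting": 42, "privacy": 80}
print(match_tokens_to_keywords(observed, aux))
```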
357

Google AdWords as a Network of Grey Surveillance

Roberts, Harold M. 11 March 2010 (has links)
Google's AdWords processes information about what sorts of content users are browsing during roughly a quarter of all website visits. The significance of AdWords' use of this vast amount of personal data lies not in its use for obviously authoritarian purposes but instead in its operation as a network of grey surveillance, with Google acting as the hub and the various publishers, advertisers, and users watching (and controlling) each other in distinct ways. Google's model of collective intelligence in its search and ad ranking systems has so deeply intertwined itself into user experiences online (and offline) that it acts as a shared nervous system. AdWords' use of specific words to target simple ads directly connects advertising topics with the content supported by the advertising, encouraging the content to do more of the work of assigning social meaning traditionally done by the ads themselves. And the AdWords pay-per-click ad auction system greatly increases the level of mechanization within the advertising and content production system, replacing the historical human bureaucracy of the advertising industry with a mechanical bureaucracy that is much more difficult to predict or understand. That mechanical bureaucracy shapes, in constitutive but unpredictable ways, the relationship between content and ads that drives what content is published online and how advertisers and users interact with that content. / Master of Science
358

Essays in Information and Privacy Economics

Sam, Alex January 2024 (has links)
This thesis consists of three chapters in microeconomic theory concerning strategic interactions among parties with asymmetric information. The first chapter, ''Cheap Talk with Private Signal Structure" (co-authored with Maxim Ivanov) and published in Games and Economic Behavior, Volume 132 (2022), pages 288-304, addresses the question of how a designer of information --- which is privately observed by other players --- can benefit from designing it privately. The second chapter, ''Multidimensional Signaling with a Resource Constraint" (co-authored with Seungjin Han), studies competitive monotone equilibria in a multidimensional signaling economy where senders invest in their multidimensional signals (cognitive and non-cognitive) while facing a resource constraint. The third chapter, ''Consumer Privacy Disclosure in Competitive Markets", studies how competition among multi-product sellers with market power shapes the implications of consumer privacy on market outcomes. / Thesis / Doctor of Philosophy (PhD)
359

Data Sharing and Retrieval of Manufacturing Processes

Seth, Avi 28 March 2023 (has links)
With the Industrial Internet, businesses can pool their resources to acquire large amounts of data that can then be used in machine learning tasks. Despite the potential to speed up training and deployment and improve decision-making through data sharing, rising privacy concerns are slowing the spread of such technologies. As businesses are naturally protective of their data, this poses a barrier to interoperability. While previous research has focused on privacy-preserving methods, existing works typically consider data that are averaged or randomly sampled by all contributors rather than data that are best suited for a specific downstream learning task. In response to the dearth of efficient data-sharing methods for diverse machine learning tasks in the Industrial Internet, this work presents an end-to-end working demonstration of a search engine prototype built on PriED, a task-driven data-sharing approach that enhances the performance of supervised learning by judiciously fusing shared and local participant data. / Master of Science / My work focuses on PriED, a data-sharing framework that enhances machine learning performance while also preserving user data privacy. In particular, I have built a working demonstration of a search engine that leverages the PriED framework and allows users to collaborate with their data without compromising their data privacy.
360

Analysis of the Effects of Privacy Filter Use on Horizontal Deviations in Posture of VDT Operators

Probst, George T. 12 July 2000 (has links)
The visual display terminal (VDT) is an integral part of the modern office. An issue of concern associated with the use of the VDT is maintaining privacy of on-screen materials. Privacy filters are products designed to restrict the viewing angle to documents displayed on a VDT, so that the on-screen material is not visible to persons other than the VDT operator. Privacy filters restrict the viewing angle either by diffraction or diffusion of the light emitted from the VDT. Constrained posture is a human factors engineering problem that has been associated with VDT use. The purpose of this research was to evaluate whether the use of privacy filters affected: 1) the restriction of postures associated with VDT use, 2) operator performance, and 3) subjective ratings of display issues, posture, and performance. Nine participants performed three types of tasks: word processing, data entry, and Web browsing. Each task was performed under three filter conditions: no filter, diffraction filter, and diffusion filter. Participants were videotaped during the tasks using a camera mounted above the VDT workstation. The videotape was analyzed and horizontal head deviation was measured at 50 randomly selected points during each task. Horizontal head deviation was measured as the angle between an absolute reference line, which bisects the center of the VDT screen, and a reference point located at the center of the participant's head. The standard deviation of head deviation was evaluated across filter type and task type. Accuracy- and/or time-based measures were used to evaluate performance within each task. Participants used a seven-point scale to rate the following: readability, image quality, brightness, glare, posture restriction, performance, and discomfort. The results indicated that the interaction between task and filter type affected the standard deviation of horizontal head deviation (a measure of the average range of horizontal deviation). The standard deviation of horizontal deviation was significantly larger within the Web browsing task under the no filter and diffusion filter conditions as compared to the diffraction filter condition. Filter type affected subjective ratings of the following: readability, image quality, brightness, posture restriction, and discomfort. The diffraction filter resulted in lower readability, image quality, and brightness ratings than the diffusion and no filter conditions. Participants reported that the ability to change postures was significantly decreased by the use of the diffraction filter as compared to the no filter and diffusion filter conditions. The diffraction filter resulted in an increase in reported discomfort as compared to the no filter condition. The interaction between filter and task type affected subjective ratings of performance. Participants reported a decrease in the rating of perceived performance under the diffraction filter / Web browsing condition as compared to the no filter / word processing, diffusion filter / Web browsing, and diffusion filter / data entry conditions. A decrease in the rating of perceived performance was also reported in the diffraction filter / data entry condition as compared to the no filter / word processing and diffusion filter / Web browsing conditions. Neither the diffraction nor the diffusion filter affected performance within any of the tasks, based on the objective performance measures used in the experiment. / Master of Science
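As a small worked example of the horizontal deviation measure described above, the sketch below computes the angle between the screen's reference line and the vector from the screen center to the participant's head center, assuming top-down video coordinates. The coordinate convention and the example numbers are illustrative assumptions, not the thesis's actual measurement procedure.

```python
# Hypothetical top-down view: the reference line runs from the screen center
# straight toward the operator's nominal seated position (the +y axis here).
# Horizontal head deviation is the signed angle between that line and the
# vector from the screen center to the tracked head center.
import math

def horizontal_deviation_deg(screen_center, head_center):
    dx = head_center[0] - screen_center[0]
    dy = head_center[1] - screen_center[1]
    return math.degrees(math.atan2(dx, dy))  # 0 degrees = on the centerline

# Example: head shifted 10 cm to the right at 60 cm from the screen.
print(round(horizontal_deviation_deg((0.0, 0.0), (10.0, 60.0)), 1))  # ~9.5 deg
```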
