About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
751

Online Learning for Resource Allocation in Wireless Networks: Fairness, Communication Efficiency, and Data Privacy

Li, Fengjiao 13 December 2022 (has links)
As the Next-Generation (NextG, 5G and beyond) wireless network supports a wider range of services, optimization of resource allocation plays a crucial role in ensuring efficient use of the (limited) available network resources. Note that resource allocation may require knowledge of network parameters (e.g., channel state information and available power level) for packet scheduling. However, wireless networks operate in an uncertain environment where, in many practical scenarios, these parameters are unknown before decisions are made. In the absence of network parameters, a network controller, who performs resource allocation, may have to make decisions (aimed at optimizing network performance and satisfying users' QoS requirements) while learning. To that end, this dissertation studies two novel online learning problems that are motivated by autonomous resource management in NextG. Key contributions of the dissertation are two-fold. First, we study reward maximization under uncertainty with fairness constraints, which is motivated by wireless scheduling with Quality of Service constraints (e.g., minimum delivery ratio requirement) under uncertainty. We formulate a framework of combinatorial bandits with fairness constraints and develop a fair learning algorithm that successfully addresses the tradeoff between reward maximization and fairness constraints. This framework can also be applied to several other real-world applications, such as online advertising and crowdsourcing. Second, we consider global reward maximization under uncertainty with distributed biased feedback, which is motivated by the problem of cellular network configuration for optimizing network-level performance (e.g., average user-perceived Quality of Experience). We study both the linear-parameterized and non-parametric global reward functions, which are modeled as distributed linear bandits and kernelized bandits, respectively.
For each model, we propose a learning algorithmic framework that can be integrated with different differential privacy models. We show that the proposed algorithms can achieve a near-optimal regret in a communication-efficient manner while protecting users' data privacy "for free". Our findings reveal that our developed algorithms outperform the state-of-the-art solutions in terms of the tradeoff among the regret, communication efficiency, and computation complexity. In addition, our proposed models and online learning algorithms can also be applied to several other real-world applications, e.g., dynamic pricing and public policy making, which may be of independent interest to a broader research community. / Doctor of Philosophy / As the Next-Generation (NextG) wireless network supports a wider range of services, optimization of resource allocation plays a crucial role in ensuring efficient use of the (limited) available network resources. Note that resource allocation may require knowledge of network parameters (e.g., channel state information and available power level) for packet scheduling. However, wireless networks operate in an uncertain environment where, in many practical scenarios, these parameters are unknown before decisions are made. In the absence of network parameters, a network controller, who performs resource allocation, may have to make decisions (aimed at optimizing network performance and satisfying users' QoS requirements) while learning. To that end, this dissertation studies two novel online learning problems that are motivated by resource allocation in the presence of uncertainty in NextG. Key contributions of the dissertation are two-fold. First, we study reward maximization under uncertainty with fairness constraints, which is motivated by wireless scheduling with Quality of Service constraints (e.g., minimum delivery ratio requirement) under uncertainty.
We formulate a framework of combinatorial bandits with fairness constraints and develop a fair learning algorithm that successfully addresses the tradeoff between reward maximization and fairness constraints. This framework can also be applied to several other real-world applications, such as online advertising and crowdsourcing. Second, we consider global reward maximization under uncertainty with distributed biased feedback, which is motivated by the problem of cellular network configuration for optimizing network-level performance (e.g., average user-perceived Quality of Experience). We consider both the linear-parameterized and non-parametric (unknown) global reward functions, which are modeled as distributed linear bandits and kernelized bandits, respectively. For each model, we propose a learning algorithmic framework that integrates different privacy models according to different privacy requirements or scenarios. We show that the proposed algorithms can learn the unknown functions in a communication-efficient manner while protecting users' data privacy "for free". Our findings reveal that our developed algorithms outperform the state-of-the-art solutions in terms of the tradeoff among the regret, communication efficiency, and computation complexity. In addition, our proposed models and online learning algorithms can also be applied to several other real-world applications, e.g., dynamic pricing and public policy making, which may be of independent interest to a broader research community.
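The abstract does not spell out the fair learning algorithm, but the core idea of a combinatorial bandit with fairness constraints can be illustrated with a minimal virtual-queue UCB sketch. Everything concrete below — the queue update, the tradeoff weight `V`, the Bernoulli rewards, and the single-arm (rather than combinatorial) action set — is an illustrative assumption, not the dissertation's actual method:

```python
import numpy as np

def fair_ucb(means, min_frac, horizon, V=10.0, seed=0):
    """Each round, play the arm maximizing virtual-queue length + V * UCB index.

    A virtual queue grows whenever an arm is served less often than its
    required minimum fraction, pushing the policy back toward fairness
    while the UCB term keeps steering it toward reward.
    """
    rng = np.random.default_rng(seed)
    means = np.asarray(means, dtype=float)
    min_frac = np.asarray(min_frac, dtype=float)
    K = means.size
    counts = np.zeros(K)
    est = np.zeros(K)
    queues = np.zeros(K)
    for t in range(1, horizon + 1):
        ucb = est + np.sqrt(2.0 * np.log(t) / np.maximum(counts, 1.0))
        ucb[counts == 0] = np.inf          # force one pull of every arm
        arm = int(np.argmax(queues + V * ucb))
        reward = float(rng.random() < means[arm])   # Bernoulli reward
        counts[arm] += 1
        est[arm] += (reward - est[arm]) / counts[arm]
        served = (np.arange(K) == arm).astype(float)
        # arrivals = required service rate, departures = actual play
        queues = np.maximum(queues + min_frac - served, 0.0)
    return counts / horizon
```

With a 20% minimum selection fraction per arm, queue stability forces even the low-reward arm toward its required rate, while the remaining budget goes to the best arm — the tradeoff the abstract describes.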
752

Deidentification of Face Videos in Naturalistic Driving Scenarios

Thapa, Surendrabikram 05 September 2023 (has links)
The sharing of data has become integral to advancing scientific research, but it introduces challenges related to safeguarding personally identifiable information (PII). This thesis addresses the specific problem of sharing drivers' face videos for transportation research while ensuring privacy protection. To tackle this issue, we leverage recent advancements in generative adversarial networks (GANs) and demonstrate their effectiveness in deidentifying individuals by swapping their faces with those of others. Extensive experimentation is conducted using a large-scale dataset from ORNL, enabling the quantification of errors associated with head movements, mouth movements, eye movements, and other human factors cues. Additionally, qualitative analysis using metrics such as PERCLOS (Percentage of Eye Closure) and human evaluators provide valuable insights into the quality and fidelity of the deidentified videos. To enhance privacy preservation, we propose the utilization of synthetic faces as substitutes for real faces. Moreover, we introduce practical guidelines, including the establishment of thresholds and spot checking, to incorporate human-in-the-loop validation, thereby improving the accuracy and reliability of the deidentification process. This thesis also presents mitigation strategies to effectively handle reidentification risks. By considering the potential exploitation of soft biometric identifiers or non-biometric cues, we highlight the importance of implementing comprehensive measures such as robust data user licenses and privacy protection protocols. / Master of Science / With the increasing availability of large-scale datasets in transportation engineering, ensuring the privacy and confidentiality of sensitive information has become a paramount concern. One specific area of concern is the protection of drivers' facial data captured by the National Driving Simulator (NDS) during research studies.
The potential risks associated with the misuse or unauthorized access to such data necessitate the development of robust deidentification techniques. In this thesis, we propose a GAN-based framework for the deidentification of drivers' face videos while preserving important facial attribute information. The effectiveness of the proposed framework is evaluated through comprehensive experiments, considering various metrics related to human factors. The results demonstrate the capability of the framework to successfully deidentify face videos, enabling the safe sharing and analysis of valuable transportation research data. This research contributes to the field of transportation engineering by addressing the critical need for privacy protection while promoting data sharing and advancing human factors research.
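PERCLOS is one of the validation metrics the abstract names. The thesis's exact computation is not given here, but the standard definition — the percentage of frames in which the eyes are closed past a threshold — can be sketched as follows; the normalized `eye_openness` signal and the 20% threshold are illustrative assumptions:

```python
import numpy as np

def perclos(eye_openness, closed_threshold=0.2):
    """PERCLOS: fraction of frames in which the eye is considered closed.

    `eye_openness` is a per-frame openness signal normalized to [0, 1];
    frames with openness below `closed_threshold` count as closed.
    """
    eye_openness = np.asarray(eye_openness, dtype=float)
    return float(np.mean(eye_openness < closed_threshold))
```

Comparing PERCLOS computed on the original and the deidentified video gives one scalar check that a face swap preserved the eye-closure cue that driver-fatigue research depends on.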
753

REFT: Resource-Efficient Federated Training Framework for Heterogeneous and Resource-Constrained Environments

Desai, Humaid Ahmed Habibullah 22 November 2023 (has links)
Federated Learning (FL) is a sub-domain of machine learning (ML) that enforces privacy by allowing the user's local data to reside on their device. Instead of having users send their personal data to a server where the model resides, FL flips the paradigm and brings the model to the user's device for training. Existing works share model parameters or use distillation principles to address the challenges of data heterogeneity. However, these methods ignore some of the other fundamental challenges in FL: device heterogeneity and communication efficiency. In practice, client devices in FL differ greatly in their computational power and communication resources. This is exacerbated by unbalanced data distribution, resulting in an overall increase in training times and the consumption of more bandwidth. In this work, we present a novel approach for resource-efficient FL called REFT, which uses variable pruning and knowledge distillation techniques to address the computational and communication challenges faced by resource-constrained devices. Our variable pruning technique is designed to reduce computational overhead and increase resource utilization for clients by adapting the pruning process to their individual computational capabilities. Furthermore, to minimize bandwidth consumption and reduce the number of back-and-forth communications between the clients and the server, we leverage knowledge distillation to create an ensemble of client models and distill their collective knowledge to the server. Our experimental results on image classification tasks demonstrate the effectiveness of our approach in conducting FL in a resource-constrained environment. We achieve this by training Deep Neural Network (DNN) models while optimizing resource utilization at each client. Additionally, our method allows for minimal bandwidth consumption and a diverse range of client architectures while maintaining performance and data privacy.
/ Master of Science / In a world driven by data, preserving privacy while leveraging the power of machine learning (ML) is a critical challenge. Traditional approaches often require sharing personal data with central servers, raising concerns about data privacy. Federated Learning (FL) is a cutting-edge solution that turns this paradigm on its head. FL brings the machine learning model to your device, allowing it to learn from your data without that data ever leaving your device. While FL holds great promise, it faces its own set of challenges. Existing research has largely focused on making FL work with different types of data, but there are still other issues to be resolved. Our work introduces a novel approach called REFT that addresses two critical challenges in FL: making it work smoothly on devices with varying levels of computing power and reducing the amount of data that needs to be transferred during the learning process. Imagine your smartphone and your laptop. Each has a different level of computing power. REFT adapts the learning process to each device's capabilities using a proposed technique called Variable Pruning. Think of it as a personalized fitness trainer, tailoring the workout to your specific fitness level. Additionally, we've adopted a technique called knowledge distillation. It's like a student learning from a teacher, where the teacher shares only the most critical information. In our case, this reduces the amount of data that needs to be sent across the internet, saving bandwidth and making FL more efficient. Our experiments, which involved training machines to recognize images, demonstrate that REFT works well, even on devices with limited resources. It's a step forward in ensuring your data stays private while still making machine learning smarter and more accessible.
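The abstract does not define REFT's pruning criterion, but the variable-pruning idea — adapting model sparsity to each client's compute budget — can be sketched with plain magnitude pruning. The linear `sparsity = 1 - capability` mapping and the capability score itself are assumptions for illustration:

```python
import numpy as np

def variable_prune(weights, capability):
    """Magnitude-based pruning with sparsity adapted per client.

    `capability` in [0, 1] is an assumed client compute score: weaker
    clients (lower capability) receive sparser, cheaper models. Ties at
    the cutoff magnitude may prune slightly more than the target count.
    """
    sparsity = 1.0 - capability           # assumed mapping, for illustration
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)         # number of weights to zero out
    if k == 0:
        return weights.copy()
    cutoff = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > cutoff
    return weights * mask
```

A strong client (capability 1.0) keeps the full model, while a weak one trains a heavily pruned copy — which is how per-device adaptation reduces both compute and the bytes sent upstream.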
754

Security, Privacy and Risks Within Smart Cities: Literature Review and Development of a Smart City Interaction Framework

Ismagilova, Elvira, Hughes, L., Rana, Nripendra P., Dwivedi, Y.K. 16 September 2020 (has links)
The complex and interdependent nature of smart cities raises significant political, technical, and socioeconomic challenges for designers, integrators and organisations involved in administrating these new entities. An increasing number of studies focus on the security, privacy and risks within smart cities, highlighting the threats relating to information security and challenges for smart city infrastructure in the management and processing of personal data. This study analyses many of these challenges, offers a valuable synthesis of the relevant key literature, and develops a smart city interaction framework. The study is organised around a number of key themes within smart cities research: privacy and security of mobile devices and services; smart city infrastructure, power systems, healthcare, frameworks, algorithms and protocols to improve security and privacy, operational threats for smart cities, use and adoption of smart services by citizens, use of blockchain and use of social media. This comprehensive review provides a useful perspective on many of the key issues and offers key direction for future studies. The findings of this study can provide an informative research framework and reference point for academics and practitioners.
755

How privacy practices affect customer commitment in the sharing economy: A study of Airbnb through an institutional perspective

Chen, S., Tamilmani, Kuttimani, Tran, K.T., Waseem, Donia, Weerakkody, Vishanth J.P. 28 October 2022 (has links)
Privacy is an emerging issue for home-sharing platforms such as Airbnb. Home-sharing providers (business customers) are subject to both digital privacy risks (e.g., data breaches and unauthorized data access) and physical privacy risks (e.g., property damage and invasion of their personal space). Therefore, platforms need to strengthen their institutions of privacy management to protect the interests of providers and maintain their commitment. By applying the micro-level psychological aspect of institutional theory, our research investigates how providers decide their level of commitment to a platform by evaluating the institutions of the platform’s privacy management. Our survey recruited 380 Airbnb providers from the Prolific panel. Structural equation modeling analysis shows that both physical and digital privacy practices strengthen providers’ legitimacy judgment of the platform’s privacy management and subsequently increase their commitment to the platform. Our theoretical contribution lies in revealing the effects of physical and digital privacy practices on B2B relationships from an institutional perspective. Our research is among the first to provide an integrative framework illustrating providers’ psychological process of legitimacy judgement. It also has practical implications for sharing economy platforms to manage privacy. / The authors gratefully acknowledge the Seed Corn Funding from University of Bradford and the Research Productivity Support Scheme from Macquarie University.
756

Right to publicity and privacy versus first amendment freedom of speech

Lukman, Joshua R. 01 January 2003 (has links)
A person's right to publicity may often conflict with another person's rights under the First Amendment. While a person's legal protection over their right of publicity is relatively new in the eyes of our courts, this topic of law and other related matters seem to be at the center of attention in current high-profile civil litigation cases. The First Amendment seeks to promote speech, whereas right of publicity laws seek to limit speech. If civil action is brought against a defendant for violating the plaintiff's right of publicity, a First Amendment exception may apply as a valid defense. This contradiction in the nature of these laws is forcing our court system to review applicable cases on a case-by-case basis, resulting in some degree of unpredictability in the courts. Because many of the parties in these cases are large commercial companies, more money is at stake as suits of misappropriation are filed. The issue of what direction(s) the courts should take in this matter spawns opposing views. While some views suggest that bright lines be drawn within right of publicity laws in order to avoid redundant and excessive cases and appeals, opposing views contend that bright lines cannot be drawn given the unique and sometimes artistic expression protected under the First Amendment. Our courts have applied the basic framework of copyright law in order to aid in their decision-making. Courts must weigh the right of publicity against the First Amendment.
757

Practical Privacy-Preserving Federated Learning with Secure Multi-Party Computation

Akhtar, Benjamin Asad 12 August 2024 (has links)
Master of Science / In a world with ever greater need for machine learning and artificial intelligence, it has become increasingly important to offload computation-intensive tasks to companies with the compute resources to perform training on potentially sensitive data. In applications such as finance or healthcare, the data providers may need to train on large quantities of data, but cannot reveal the data to outside parties for legal or other reasons. Originally, a decentralized training method known as Federated Learning (FL) was proposed to ensure data did not leave the client's device. This method was still susceptible to attacks, and further security was needed. Multi-Party Computation (MPC) was proposed in conjunction with FL, as it provides a way to compute securely with no leakage of data values. This was utilized in a framework called SAFEFL; however, it was extremely slow. Reducing the computation overhead of this framework with the programming tools at our disposal turns it from an impractical into a practical design. The design can now be used in industry: while it still carries some overhead compared to non-MPC computing, it has been greatly improved.
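SAFEFL's concrete protocol is not described in this abstract, but the basic MPC primitive behind secure aggregation — additive secret sharing, where servers see only random-looking shares yet can still compute a sum — can be sketched as follows. The three-server setup and the field modulus are illustrative assumptions:

```python
import random

PRIME = 2**61 - 1  # field modulus (illustrative choice)

def share(value, n, rng):
    """Split an integer into n additive shares mod PRIME.

    Any n-1 shares are uniformly random and reveal nothing; all n shares
    sum (mod PRIME) back to the value.
    """
    shares = [rng.randrange(PRIME) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def secure_sum(client_values, n_servers=3, seed=0):
    """Each client secret-shares its value across the servers; each server
    adds the shares it holds locally, and only the final total is
    reconstructed — no server ever sees an individual client's value."""
    rng = random.Random(seed)
    server_totals = [0] * n_servers
    for v in client_values:
        for i, s in enumerate(share(v, n_servers, rng)):
            server_totals[i] = (server_totals[i] + s) % PRIME
    return sum(server_totals) % PRIME
```

In FL, the "values" would be model-update vectors rather than single integers, and this share-and-add step is exactly where the MPC computation overhead that the thesis optimizes comes from.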
758

Usability Issues in the User Interfaces of Privacy-Enhancing Technologies

LaTouche, Lerone W. 01 January 2013 (has links)
Privacy on the Internet has become one of the leading concerns for Internet users. These concerns are well founded when personally identifiable information is neither protected nor under users' control. To minimize the collection of Internet users' personal information and help solve the problem of online privacy, a number of privacy-enhancing technologies have been developed. These so-called privacy-enhancing technologies still have usability issues in their user interfaces, because Internet users do not have the choices required to monitor and control their personal data when released in online repositories. Current research shows a need exists to improve the overall usability of privacy-enhancing technology user interfaces. A properly designed privacy-enhancing technology user interface will give Internet users confidence they can monitor and control all aspects of their personal data. Specific methods and criteria for assessing the usability of privacy-enhancing technology user interfaces either have not been developed or have not been widely published, leading to complex user interfaces that negatively affect the privacy and security of Internet users' personal data. This study focused on the development of a conceptual framework, which will provide a sound foundation for use in assessing the user interfaces of Web-based privacy-enhancing technologies for user-controlled e-privacy features. The study investigated the extent to which user testing and heuristic evaluation help identify the lack of user-controlled e-privacy features and usability problems in selected privacy-enhancing technology user interfaces. The outcome of this research was the development of a domain-specific heuristics checklist with criteria for the future evaluation of privacy-enhancing technologies' application user interfaces.
The results of the study show the domain-specific heuristics checklist generated more usability problems and a higher number of severe problems than the general heuristics. This suggests domain-specific heuristics can be used as a discount usability technique, which enforces the concept of usability that the heuristics are easy to use and learn. The domain-specific heuristics checklist should be of interest to privacy and security practitioners involved in the development of privacy-enhancing technologies' user interfaces. This research should supplement the literature on human-computer interaction, personal data protection, and privacy management.
759

User perspective of privacy and surveillance on social networks

Balan, Khalil January 2017 (has links)
Social networks have become integrated into people's daily lives and have become a powerful medium for effective marketing and communication worldwide. Problems arise when governments and special agencies violate users' information privacy under the pretext of protecting national security; furthermore, once information became a source of income for social networks, it became necessary to investigate what concerns, if any, users have about informational privacy on social platforms. The main purpose of the thesis is to understand what level of privacy awareness users of social networks have and how much relevant knowledge about surveillance on social networks they possess. Moreover, the thesis aims to present users' opinions about surveillance on Facebook and whether they accept being surveilled in certain scenarios. As results, the study identified ambiguity in Facebook's terms and data policy, while it was clear that Facebook applies massive surveillance, in terms of data collection, to all users on the network. 71% of the participants had concerns about their privacy on social networks, two-thirds of the participants did not read Facebook's terms, and 76% believed that social networks sell user information for their own benefit. The majority of the interview participants showed a lack of knowledge about data collection on social networks and did not know whether governments conduct surveillance on social platforms. However, 37% of the survey participants claimed that they have nothing to hide and that governments can look into their activities online, and a similar percentage supported such action. Further, most of the interview participants protect their informational privacy on social networks by maintaining good privacy settings, controlling who has access to certain posts, or managing their friends list. However, one-third of the participants who had good privacy settings did not know all their friends on Facebook.
Through personal observations during data analysis and the literature review, I conclude the thesis with some suggestions of possible approaches to enhancing information privacy; these recommendations present my own thoughts and were not derived in an academic way, but rather are personal notes made during the thesis study.
760

A proteção de dados pessoais do empregado no direito brasileiro: um estudo sobre os limites na obtenção e no uso pelo empregador da informação relativa ao empregado / The protection of employees' personal data in Brazilian law: a study on the limits of employers' collection and use of employees' personal information

Sanden, Ana Francisca Moreira de Souza 15 October 2012 (has links)
O estudo explora a questão de como o Direito do Trabalho brasileiro protege a informação pessoal do empregado perante o empregador e se essa proteção considera deliberadamente os riscos subjacentes ao uso da informação em ambiente de crescente processamento automático. No Capítulo I, apresenta-se no contexto internacional o problema da obtenção e do uso pelo empregador da informação relativa ao empregado em um ambiente de crescente automatização e justifica-se a necessidade de sua abordagem no Direito brasileiro. No Capítulo II é examinado o arcabouço da proteção de dados pessoais do empregado nas normas internacionais. No Capítulo III, buscam-se os fundamentos do conceito de dados pessoais no ordenamento jurídico brasileiro, principalmente na órbita constitucional, sob a fórmula do direito à autodeterminação informativa. O Capítulo IV traça um quadro geral da proteção da informação relativa ao empregado no Direito do Trabalho brasileiro, considerando os limites à obtenção e ao uso da informação pessoal, os deveres do empregador como responsável pelo acervo de informações pessoais que mantém e os direitos diretamente relacionados à autodeterminação informativa. O Capítulo V investiga as potencialidades do quadro normativo brasileiro atual para oferecer uma proteção ao empregado que se aproxime das finalidades almejadas nas normas internacionais setoriais. / The study is concerned with the question of how Brazilian Labor Law protects the employee's personal information from collection and use by the employer and whether it deliberately considers the threats to the employee that arise from increasing automatic processing. Chapter I presents, in the international context, the problem of employers collecting and using employees' personal information in a world of increasing automation and justifies the need to address it in Brazilian Law. Chapter II presents the employee data protection framework proposed in international agreements.
Chapter III discusses the foundations of the concept of personal data and its formulation as informational self-determination in Brazilian law and in the Brazilian Constitution. Chapter IV offers a general picture of the protection of the employee's information in Brazilian Labor Law, considering the limits to its collection and use, the obligations of the employer as the party responsible for the personal data it stores, and the employee's rights regarding informational self-determination. Chapter V investigates the possibilities of the present Brazilian legal framework in offering protection to the employee on the same basis as proposed in sectoral international agreements.
