11

Online Learning for Resource Allocation in Wireless Networks: Fairness, Communication Efficiency, and Data Privacy

Li, Fengjiao 13 December 2022 (has links)
As the Next-Generation (NextG, 5G and beyond) wireless network supports a wider range of services, optimization of resource allocation plays a crucial role in ensuring efficient use of the (limited) available network resources. Note that resource allocation may require knowledge of network parameters (e.g., channel state information and available power levels) for packet scheduling. However, wireless networks operate in an uncertain environment where, in many practical scenarios, these parameters are unknown before decisions are made. In the absence of network parameters, a network controller, which performs resource allocation, may have to make decisions (aimed at optimizing network performance and satisfying users' QoS requirements) while learning. To that end, this dissertation studies two novel online learning problems that are motivated by autonomous resource management in NextG. Key contributions of the dissertation are two-fold. First, we study reward maximization under uncertainty with fairness constraints, which is motivated by wireless scheduling with Quality of Service constraints (e.g., a minimum delivery ratio requirement) under uncertainty. We formulate a framework of combinatorial bandits with fairness constraints and develop a fair learning algorithm that successfully addresses the tradeoff between reward maximization and fairness constraints. This framework can also be applied to several other real-world applications, such as online advertising and crowdsourcing. Second, we consider global reward maximization under uncertainty with distributed biased feedback, which is motivated by the problem of cellular network configuration for optimizing network-level performance (e.g., average user-perceived Quality of Experience). We study both linear-parameterized and non-parametric global reward functions, which are modeled as distributed linear bandits and kernelized bandits, respectively.
For each model, we propose a learning algorithmic framework that can be integrated with different differential privacy models. We show that the proposed algorithms can achieve near-optimal regret in a communication-efficient manner while protecting users' data privacy "for free". Our findings reveal that our developed algorithms outperform the state-of-the-art solutions in terms of the tradeoff among regret, communication efficiency, and computational complexity. In addition, our proposed models and online learning algorithms can also be applied to several other real-world applications, e.g., dynamic pricing and public policy making, which may be of independent interest to a broader research community. / Doctor of Philosophy / As the Next-Generation (NextG) wireless network supports a wider range of services, optimization of resource allocation plays a crucial role in ensuring efficient use of the (limited) available network resources. Note that resource allocation may require knowledge of network parameters (e.g., channel state information and available power levels) for packet scheduling. However, wireless networks operate in an uncertain environment where, in many practical scenarios, these parameters are unknown before decisions are made. In the absence of network parameters, a network controller, which performs resource allocation, may have to make decisions (aimed at optimizing network performance and satisfying users' QoS requirements) while learning. To that end, this dissertation studies two novel online learning problems that are motivated by resource allocation in the presence of uncertainty in NextG. Key contributions of the dissertation are two-fold. First, we study reward maximization under uncertainty with fairness constraints, which is motivated by wireless scheduling with Quality of Service constraints (e.g., a minimum delivery ratio requirement) under uncertainty.
We formulate a framework of combinatorial bandits with fairness constraints and develop a fair learning algorithm that successfully addresses the tradeoff between reward maximization and fairness constraints. This framework can also be applied to several other real-world applications, such as online advertising and crowdsourcing. Second, we consider global reward maximization under uncertainty with distributed biased feedback, which is motivated by the problem of cellular network configuration for optimizing network-level performance (e.g., average user-perceived Quality of Experience). We consider both linear-parameterized and non-parametric (unknown) global reward functions, which are modeled as distributed linear bandits and kernelized bandits, respectively. For each model, we propose a learning algorithmic framework that integrates different privacy models according to different privacy requirements or scenarios. We show that the proposed algorithms can learn the unknown functions in a communication-efficient manner while protecting users' data privacy "for free". Our findings reveal that our developed algorithms outperform the state-of-the-art solutions in terms of the tradeoff among regret, communication efficiency, and computational complexity. In addition, our proposed models and online learning algorithms can also be applied to several other real-world applications, e.g., dynamic pricing and public policy making, which may be of independent interest to a broader research community.
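The fairness-constrained bandit setting described in this abstract can be made concrete with a toy example. The following is not the dissertation's algorithm; it is a minimal sketch, assuming Bernoulli-reward arms, that combines UCB estimates with per-arm virtual queues (a common Lyapunov-style device) so that each arm is selected at least a target fraction of the time:

```python
import math
import random

def fair_cucb(mu, min_frac, horizon, k=2, eta=10.0):
    """Toy fairness-constrained combinatorial bandit: each round select k of
    n arms, trading off UCB reward estimates against per-arm virtual queues
    that grow whenever an arm falls behind its minimum selection fraction."""
    n = len(mu)
    counts = [0] * n
    means = [0.0] * n
    queues = [0.0] * n  # fairness "debt" accumulated by each arm
    for t in range(1, horizon + 1):
        def score(i):
            if counts[i] == 0:
                return float("inf")  # force initial exploration
            ucb = means[i] + math.sqrt(2.0 * math.log(t) / counts[i])
            return ucb + queues[i] / eta  # queue term boosts starved arms
        chosen = set(sorted(range(n), key=score, reverse=True)[:k])
        for i in range(n):
            served = 1.0 if i in chosen else 0.0
            queues[i] = max(0.0, queues[i] + min_frac[i] - served)
        for i in chosen:
            reward = 1.0 if random.random() < mu[i] else 0.0  # Bernoulli arm
            counts[i] += 1
            means[i] += (reward - means[i]) / counts[i]
    return [c / horizon for c in counts]
```

With `mu = [0.9, 0.5, 0.1]` and `min_frac = [0.2, 0.2, 0.3]`, the low-reward third arm still receives roughly its required share of selections, which is exactly the tension between reward maximization and fairness constraints the abstract describes.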
12

REFT: Resource-Efficient Federated Training Framework for Heterogeneous and Resource-Constrained Environments

Desai, Humaid Ahmed Habibullah 22 November 2023 (has links)
Federated Learning (FL) is a sub-domain of machine learning (ML) that enforces privacy by allowing the user's local data to reside on their device. Instead of having users send their personal data to a server where the model resides, FL flips the paradigm and brings the model to the user's device for training. Existing works share model parameters or use distillation principles to address the challenges of data heterogeneity. However, these methods ignore some of the other fundamental challenges in FL: device heterogeneity and communication efficiency. In practice, client devices in FL differ greatly in their computational power and communication resources. This is exacerbated by unbalanced data distribution, resulting in an overall increase in training times and the consumption of more bandwidth. In this work, we present a novel approach for resource-efficient FL called REFT with variable pruning and knowledge distillation techniques to address the computational and communication challenges faced by resource-constrained devices. Our variable pruning technique is designed to reduce computational overhead and increase resource utilization for clients by adapting the pruning process to their individual computational capabilities. Furthermore, to minimize bandwidth consumption and reduce the number of back-and-forth communications between the clients and the server, we leverage knowledge distillation to create an ensemble of client models and distill their collective knowledge to the server. Our experimental results on image classification tasks demonstrate the effectiveness of our approach in conducting FL in a resource-constrained environment. We achieve this by training Deep Neural Network (DNN) models while optimizing resource utilization at each client. Additionally, our method allows for minimal bandwidth consumption and a diverse range of client architectures while maintaining performance and data privacy.
/ Master of Science / In a world driven by data, preserving privacy while leveraging the power of machine learning (ML) is a critical challenge. Traditional approaches often require sharing personal data with central servers, raising concerns about data privacy. Federated Learning (FL), is a cutting-edge solution that turns this paradigm on its head. FL brings the machine learning model to your device, allowing it to learn from your data without ever leaving your device. While FL holds great promise, it faces its own set of challenges. Existing research has largely focused on making FL work with different types of data, but there are still other issues to be resolved. Our work introduces a novel approach called REFT that addresses two critical challenges in FL: making it work smoothly on devices with varying levels of computing power and reducing the amount of data that needs to be transferred during the learning process. Imagine your smartphone and your laptop. They all have different levels of computing power. REFT adapts the learning process to each device's capabilities using a proposed technique called Variable Pruning. Think of it as a personalized fitness trainer, tailoring the workout to your specific fitness level. Additionally, we've adopted a technique called knowledge distillation. It's like a student learning from a teacher, where the teacher shares only the most critical information. In our case, this reduces the amount of data that needs to be sent across the internet, saving bandwidth and making FL more efficient. Our experiments, which involved training machines to recognize images, demonstrate that REFT works well, even on devices with limited resources. It's a step forward in ensuring your data stays private while still making machine learning smarter and more accessible.
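Variable pruning, as described above, tailors model sparsity to each client's compute budget. A minimal sketch, assuming simple magnitude pruning on a weight matrix (the function name and threshold rule are illustrative stand-ins, not REFT's exact method):

```python
import numpy as np

def variable_prune(weights, capability):
    """Keep only the largest-magnitude fraction of weights, where
    `capability` in (0, 1] models the client's compute budget: a weak
    device keeps fewer weights, a strong device keeps more."""
    keep = max(1, int(round(capability * weights.size)))
    # threshold = magnitude of the keep-th largest entry
    thresh = np.sort(np.abs(weights), axis=None)[-keep]
    mask = np.abs(weights) >= thresh
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
# a resource-constrained client keeps only 25% of the weights
pruned, mask = variable_prune(w, capability=0.25)
```

The surviving mask can then be trained locally, with the server-side distillation step aggregating the heterogeneous client models.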
13

INFLUENCE ANALYSIS TOWARDS BIG SOCIAL DATA

Han, Meng 03 May 2017 (has links)
Large-scale social data from online social networks, instant messaging applications, and wearable devices have seen exponential growth in the number of users and activities recently. The rapid proliferation of social data provides rich information and infinite possibilities for us to understand and analyze the complex inherent mechanism which governs the evolution of the new technology age. Influence, as a natural product of information diffusion (or propagation), which represents the change in an individual's thoughts, attitudes, and behaviors resulting from interaction with others, is one of the fundamental processes in social worlds. Therefore, influence analysis occupies a very prominent place in social data analysis, theory, models, and algorithms. In this dissertation, we study influence analysis in the setting of big social data. Firstly, we investigate the uncertainty of influence relationships in social networks. A novel sampling scheme is proposed which enables the development of an efficient algorithm to measure uncertainty. Considering the practicality of neighborhood relationships in real social data, a framework is introduced to transform uncertain networks into deterministic weighted networks where the weight on an edge can be measured as a Jaccard-like index. Secondly, focusing on the dynamics of social data, a practical framework is proposed that probes only partial communities to explore the real changes of a social network. Our probing framework minimizes the possible difference between the observed topology and the actual network through several representative communities. We also propose an algorithm that takes full advantage of our divide-and-conquer strategy, which reduces the computational overhead.
Thirdly, if we let the number of users who are influenced be the depth of propagation and the area covered by influenced users be the breadth, most existing results focus only on influence depth rather than influence breadth. Timeliness, acceptance ratio, and breadth are three important factors that significantly affect the result of influence maximization in reality, but they are neglected by researchers most of the time. To fill the gap, a novel algorithm that incorporates time delay for timeliness, opportunistic selection for acceptance ratio, and broad diffusion for influence breadth has been investigated. In our model, the breadth of influence is measured by the number of covered communities, and the tradeoff between depth and breadth of influence can be balanced by a specific parameter. Furthermore, the problem of privacy-preserved influence maximization in both physical-location networks and online social networks was addressed. We merge both the sensed location information collected from the cyber-physical world and relationship information gathered from online social networks into a unified framework with a comprehensive model. Then we propose a resolution of the influence maximization problem with an efficient algorithm. At the same time, a privacy-preserving mechanism is proposed to protect the cyber-physical location and link information from the application aspect. Last but not least, to address the challenge of large-scale data, we take the lead in designing an efficient influence maximization framework based on two new models which incorporate the dynamism of networks with consideration of time constraints during the influence-spreading process in practice. All proposed problems and models of influence analysis have been empirically studied and verified on different large-scale, real-world social data in this dissertation.
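The Jaccard-like edge weighting mentioned in the first contribution can be illustrated as follows. The exact index used in the dissertation may differ; this sketch assigns each edge the Jaccard index of its endpoints' neighbourhoods as a deterministic proxy for tie strength:

```python
def jaccard_weights(adj):
    """Turn an unweighted graph (dict: node -> set of neighbours) into a
    weighted one, where each edge (u, v) gets the Jaccard index of the two
    endpoints' neighbourhoods: |N(u) & N(v)| / |N(u) | N(v)|."""
    weights = {}
    for u, nbrs in adj.items():
        for v in nbrs:
            if (v, u) in weights:  # undirected edge already scored
                continue
            inter = len(adj[u] & adj[v])
            union = len(adj[u] | adj[v])
            weights[(u, v)] = inter / union
    return weights
```

On a triangle with a pendant node, the triangle edges get positive weights (shared neighbours) while the pendant edge gets weight 0, matching the intuition that well-embedded ties are more reliable.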
14

An Empirical Investigation of the Relationship between Computer Self-Efficacy and Information Privacy Concerns

Awwal, Mohammad Abdul 01 January 2011 (has links)
The Internet and the growth of Information Technology (IT) and their enhanced capabilities to collect personal information have given rise to many privacy issues. Unauthorized access of personal information may result in identity theft, stalking, harassment, and other invasions of privacy. Information privacy concerns are impediments to broad-scale adoption of the Internet for purchasing decisions. Computer self-efficacy has been shown to be an effective predictor of behavioral intention and a critical determinant of intention to use Information Technology. This study investigated the relationship between an individual's computer self-efficacy and information privacy concerns; and also examined the differences among different age groups and between genders regarding information privacy concerns and their relationships with computer self-efficacy. A paper-based survey was designed to empirically assess computer self-efficacy and information privacy concerns. The survey was developed by combining existing validated scales for computer self-efficacy and information privacy concerns. The target population of this study was the residents of New Jersey, U.S.A. The assessment was done by using the mall-intercept approach in which individuals were asked to fill out the survey. The sample size for this study was 400 students, professionals, and mature adults. The Shapiro-Wilk test was used for testing data normality and the Spearman rank-order test was used for correlation analyses. MANOVA test was used for comparing mean values of computer self-efficacy and information privacy concerns between genders and among age groups. The results showed that the correlation between computer self-efficacy and information privacy concerns was significant and positive; and there were differences between genders and among age groups regarding information privacy concerns and their relationships with computer self-efficacy. 
This study contributed to the body of knowledge about the relationships among antecedents and consequences of information privacy concerns and computer self-efficacy. The findings of this study can help corporations to improve e-commerce by targeting privacy policy-making efforts to address the explicit areas of consumer privacy concerns. The results of this study can also help IT practitioners to develop privacy protection tools and processes to address specific consumer privacy concerns.
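The Spearman rank-order correlation used in the analysis above is standard: it is the Pearson correlation computed on the two variables' ranks, which makes it robust to non-normal data (hence its pairing with the Shapiro-Wilk normality test). A minimal implementation with average ranks for ties:

```python
def rankdata(xs):
    """Assign 1-based ranks, giving tied values their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend the tie group
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rho: Pearson correlation of the rank vectors."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Any monotone relationship, even a nonlinear one, yields rho = 1, which is why a rank-based measure suits ordinal survey scales.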
15

Personalized Advertising: Examining the Consumer Attitudes of Generation Z Towards Data Privacy and Personalization : A study of consumer attitudes towards the commercial usage of personal data

Taneo Zander, Jennifer, Mirkovic, Anna-Maria January 2019 (has links)
Background: The advancement of Internet technology and the ability of companies to process large amounts of information have made it possible for marketers to communicate with their customers through customized measures, namely personalized advertising. One of the primary aspects that differentiates personalized advertising from traditional advertising is the collection and use of consumers' personal information, which has presented marketers with numerous benefits and opportunities. However, this has also raised concerns among consumers regarding their privacy and the handling of their personal information. In this study, the attitudes of Generation Z are examined regarding data privacy, personalization, and the commercial usage of their personal information, as well as how these attitudes may impact consumer behavior. Purpose: The purpose of this study is to examine the attitudes of consumers towards personalized advertising and the commercial usage of personal consumer data, with a focus on consumers belonging to Generation Z. Issues regarding data privacy and personalization are explored, as well as how consumer attitudes towards the personalization of advertisements may impact consumer behavior in the digital environment. Method: A positivistic approach was applied with the intention to draw conclusions about a population, namely Generation Z. A deductive approach was implemented to test an existing theory, the Theory of Planned Behavior (TPB), to examine whether Generation Z follows the trend found in the literature, namely that younger consumers (Millennials) are more positive towards personalized advertising and the sharing of personal data for commercial purposes than older generations. The empirical data was collected through a survey, which was later analyzed through statistical measures.
Conclusion: The results suggested a predominantly neutral attitude among the survey participants regarding personalized advertising and the sharing of personal data for commercial purposes. Moreover, a positive correlation between consumer attitudes and behavioral intention to interact with personalized advertisements was detected. However, the correlation was found to be rather weak, indicating that consumer attitudes are not necessarily the strongest predictor of behavioral intention among Generation Z consumers with regard to personalized advertising.
16

Towards an adaptive solution to data privacy protection in hierarchical wireless sensor networks

Al-Riyami, Ahmed January 2016 (has links)
Hierarchical Wireless Sensor Networks (WSNs) are becoming attractive to many applications due to their energy efficiency and scalability. However, if such networks are deployed in a privacy-sensitive application context such as home utility consumption, protecting data privacy becomes an essential requirement. Our threat analysis in such networks has revealed that PPDA (Privacy Preserving Data Aggregation), NIDA (Node ID Anonymity) and ENCD (Early Node Compromise Detection) are three essential properties for protecting data privacy. The scope of this thesis is protecting data privacy in hierarchical WSNs by addressing issues in relation to two of the three properties identified, i.e., NIDA and ENCD, effectively and efficiently. The effectiveness property is achieved by considering NIDA and ENCD in an integrated manner, and the efficiency property is achieved by using an adaptive approach to security provisioning. To this end, the thesis has made the following four novel contributions. Firstly, this thesis presents a comprehensive analysis of the threats to data privacy and a literature review of the countermeasures proposed to address these threats. The analysis and literature review have led to the identification of two main areas for improvement: (1) to reduce the resources consumed as the result of protecting data privacy, and (2) to address the compatibility issue between NIDA and ENCD. Secondly, a novel Adaptive Pseudonym Length Estimation (AdaptPLE) method has been proposed. The method allows the determination of a minimum acceptable length of the pseudonyms used in NIDA based on a given set of security and application related requirements and constraints. In this way, we can balance the trade-off between an ID anonymity protection level and the costs (i.e., transmission and energy) incurred in achieving the protection level.
To demonstrate its effectiveness, we have evaluated the method by applying it to two existing NIDA schemes, the Efficient Anonymous Communication (EAC) scheme and the Cryptographic Anonymous Scheme (CAS). Thirdly, a novel Adaptive Early Node Compromise Detection (AdaptENCD) scheme for cluster-based WSNs has been proposed. This scheme allows earlier detection of compromised nodes more effectively and efficiently than existing proposals. This is achieved by adjusting, at run-time, the transmission rate of heartbeat messages, used to detect nodes' aliveness, in response to the average message loss ratio in a cluster. This adaptive approach allows us to significantly reduce detection errors while keeping the number of transmitted heartbeat messages as low as possible, thus reducing transmission costs. Fourthly, a novel Node ID Anonymity Preserving Scheme (ID-APS) for cluster-based WSNs has been proposed. ID-APS protects node ID anonymity while, at the same time, also allowing the global identification of nodes. This latter property supports the identification and removal of compromised nodes in the network, which is a significant improvement over the state-of-the-art solution, the CAS scheme. ID-APS supports both NIDA and ENCD by making hybrid use of dynamic and global identification pseudonyms. More importantly, ID-APS achieves these properties with lower overhead costs than CAS. All proposed solutions have been analysed and evaluated comprehensively to prove their effectiveness and efficiency.
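The idea behind AdaptPLE, choosing the shortest pseudonym that still meets a security requirement, can be illustrated with a birthday-bound sketch. The thesis's actual model accounts for more requirements and constraints than this; here the only requirement is that the collision probability among n randomly drawn pseudonyms stays below a target:

```python
def min_pseudonym_bits(n_nodes, max_collision_prob):
    """Smallest pseudonym length L (bits) such that the birthday bound
    n(n-1) / 2^(L+1) on the collision probability among n random
    pseudonyms stays at or below the target. Shorter pseudonyms mean
    cheaper transmissions, which is the trade-off AdaptPLE balances."""
    bits = 1
    while n_nodes * (n_nodes - 1) / 2 ** (bits + 1) > max_collision_prob:
        bits += 1
    return bits
```

For example, 1000 nodes with a one-in-a-million collision budget already need 39-bit pseudonyms, while a 10-node cluster with a 1% budget gets away with 13 bits, so adapting the length to the deployment saves real bandwidth.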
17

Privacy Preserving Data Mining using Unrealized Data Sets: Scope Expansion and Data Compression

Fong, Pui Kuen 16 May 2013 (has links)
In previous research, the author developed a novel PPDM method, Data Unrealization, that preserves both data privacy and the utility of discrete-value training samples. That method transforms original samples into unrealized ones and guarantees 100% accurate decision tree mining results. This dissertation extends that research scope and achieves the following accomplishments: (1) it expands the application of Data Unrealization to other data mining algorithms, (2) it introduces data compression methods that reduce storage requirements for unrealized training samples and increase data mining performance, and (3) it adds a second-level privacy protection that works perfectly with Data Unrealization. From an application perspective, this dissertation proves that statistical information (i.e., counts, probability and information entropy) can be retrieved precisely from unrealized training samples, so that Data Unrealization is applicable for all counting-based, probability-based and entropy-based data mining models with 100% accuracy. For data compression, this dissertation introduces a new number sequence, the J-Sequence, as a means to compress training samples through the J-Sampling process. J-Sampling converts the samples into a list of numbers with many replications. Applying run-length encoding on the resulting list can further compress the samples into a constant storage space regardless of the sample size. In this way, the storage requirement of the sample database becomes O(1) and the time complexity of a statistical database query becomes O(1). J-Sampling is used as an encryption approach to the unrealized samples already protected by Data Unrealization; meanwhile, data mining can be performed on these samples without decryption. In order to retain privacy preservation and to handle data compression internally, a column-oriented database management system is recommended to store the encrypted samples. / Graduate / 0984 / fong_bee@hotmail.com
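The run-length encoding step named above is standard; a minimal version (J-Sampling itself is not reproduced here) shows why a list dominated by long runs of replicated values compresses toward constant size:

```python
def run_length_encode(seq):
    """Collapse runs of repeated values into (value, count) pairs. A list
    with few distinct values and long runs compresses to a near-constant
    number of pairs regardless of its length."""
    encoded = []
    for x in seq:
        if encoded and encoded[-1][0] == x:
            encoded[-1][1] += 1  # extend the current run
        else:
            encoded.append([x, 1])  # start a new run
    return [tuple(pair) for pair in encoded]

def run_length_decode(pairs):
    """Inverse transform, recovering the original list."""
    return [v for v, c in pairs for _ in range(c)]
```

A list of a million identical values encodes to a single pair, which is the mechanism behind the O(1) storage claim for the J-Sampled database.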
18

User control of personal data : A study of personal data management in a GDPR-compliant graphical user interface / Användares kontroll över personuppgifter : En studie i hanteringen av personuppgifter i ett GDPR-kompatibelt grafiskt användargränssnitt

Olausson, Michaela January 2018 (has links)
The following bachelor thesis explores the design of a GDPR (General Data Protection Regulation) compliant graphical user interface for an administrative school system. The work presents the process of developing and evaluating a web-based prototype, a platform chosen because of its availability. The aim is to investigate whether the design increases the caregivers' perception of being in control of personal data, both their own and data related to children in their care. The methods for investigating this subject are grounded in real-world research, using both quantitative and qualitative methods. The results indicate that the users perceive the prototype to be useful, easy to use, easy to learn and that they are satisfied with it. The results also point towards the users' feeling of control of both their own and their child's personal data when using the prototype. The users agree that a higher sense of control also increases their sense of security.
19

Achieving privacy-preserving distributed statistical computation

Liu, Meng-Chang January 2012 (has links)
The growth of the Internet has opened up tremendous opportunities for cooperative computations where the results depend on the private data inputs of distributed participating parties. In most cases, such computations are performed by multiple mutually untrusting parties. This has led the research community to study methods for performing computation across the Internet securely and efficiently. This thesis investigates security methods in the search for an optimum solution to privacy-preserving distributed statistical computation problems. For this purpose, the nonparametric sign test algorithm is chosen as a case study to demonstrate our research methodology. Two privacy-preserving protocol suites using data perturbation techniques and cryptographic primitives are designed. The first protocol suite, P22NSTP, is based on five novel data perturbation building blocks: the random probability density function generation protocol (RpdfGP), the data obscuring protocol (DOP), the secure two-party comparison protocol (STCP), the data extraction protocol (DEP) and the permutation reverse protocol (PRP). This protocol suite enables two parties to efficiently and securely perform the sign test computation without the use of a third party. The second protocol suite, P22NSTC, uses an additively homomorphic encryption scheme and two novel building blocks: the data separation protocol (DSP) and the data randomization protocol (DRP). With some assistance from an on-line STTP, this protocol suite provides an alternative solution for two parties to achieve a secure privacy-preserving nonparametric sign test computation. These two protocol suites have been implemented in MATLAB. Their implementations are evaluated and compared against the sign test computation algorithm on an ideal trusted third party model (TTP-NST) in terms of security, computation and communication overheads, and protocol execution times.
By managing the level of noise added to data items, the P22NSTP can achieve specific levels of privacy protection to fit particular computation scenarios. Alternatively, the P22NSTC provides a more secure solution than the P22NSTP by employing an on-line STTP; its level of privacy protection relies on the use of an additively homomorphic encryption scheme, the DSP and the DRP. A four-phase privacy-preserving transformation methodology has also been demonstrated; it includes data privacy definition, statistical algorithm decomposition, solution design and solution implementation.
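The abstract does not name the additively homomorphic scheme used by the P22NSTC; Paillier is the classic example of such a scheme, so a toy Paillier sketch (illustration only, with sketch-sized primes) shows the property being relied on, namely that multiplying ciphertexts adds plaintexts:

```python
import math
import random

# Toy Paillier cryptosystem (tiny primes for illustration; real deployments
# use primes of ~1024 bits or more). Paillier is additively homomorphic:
# Enc(a) * Enc(b) mod n^2 decrypts to a + b.
p, q = 23, 29
n = p * q                     # public modulus
n2 = n * n
g = n + 1                     # standard generator choice
lam = math.lcm(p - 1, q - 1)  # private key

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def enc(m):
    """Encrypt 0 <= m < n with fresh randomness r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n
```

Because `dec(enc(a) * enc(b) % n2)` equals `a + b`, one party can aggregate another party's encrypted values without ever seeing them, which is the mechanism a sign-test protocol can build on.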
20

Hardware Acceleration for Homomorphic Encryption / Accélération matérielle pour la cryptographie homomorphe

Cathebras, Joël 17 December 2018 (has links)
In this thesis, we propose to contribute to the definition of encrypted-computing systems for the secure handling of private data. The particular objective of this work is to improve the performance of homomorphic encryption. The main problem lies in the definition of an acceleration approach that remains adaptable to the different application cases of these encryptions, and which is therefore consistent with the wide variety of parameters.
It is for that objective that this thesis presents the exploration of a hybrid computing architecture for accelerating Fan and Vercauteren's encryption scheme (FV). This proposal is the result of an analysis of the memory and computational complexity of crypto-calculation with FV. Some of the contributions make the combination of a non-positional number representation system (RNS) with polynomial multiplication based on the Fourier transform over finite fields (NTT) more effective. RNS-specific operations, inherently embedding parallelism, are accelerated on a SIMD computing unit such as a GPU. NTT-based polynomial multiplications are implemented on dedicated hardware such as an FPGA. Specific contributions support this proposal by reducing the storage and communication costs for handling the NTTs' twiddle factors. This thesis opens up perspectives for the definition of micro-servers for the manipulation of private data based on homomorphic encryption.
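The RNS representation at the heart of the proposal can be illustrated briefly: an integer is held as its residues modulo pairwise-coprime moduli, arithmetic proceeds independently per residue channel (hence the SIMD/GPU fit), and the Chinese Remainder Theorem maps the result back. The toy moduli below are for illustration only; FV implementations use large NTT-friendly primes:

```python
from math import prod

MODULI = (13, 17, 19)  # pairwise-coprime toy moduli

def to_rns(x):
    """Residue Number System: represent x by its residue modulo each modulus."""
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    # each channel is independent, so the three multiplications here could
    # run in parallel on SIMD hardware
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

def from_rns(res):
    """Chinese Remainder Theorem reconstruction, valid mod prod(MODULI)."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(res, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(.., -1, m) = modular inverse
    return x % M
```

No carries propagate between channels, which is what makes RNS arithmetic non-positional and embarrassingly parallel.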