31 |
Federated search to merge the results of the extracted functional requirements. Li, Xiang, 22 August 2022 (has links)
No description available.
|
32 |
DIFFERENTIAL PRIVACY IN DISTRIBUTED SETTINGS. Zitao Li (14135316), 18 November 2022 (has links)
Data is considered the "new oil" of the information society and digital economy. While many commercial activities and government decisions are based on data, the public is increasingly concerned about privacy leakage when private data are collected and used. In this dissertation, we investigate the privacy risks in settings where the data are distributed across multiple data holders and there is only an untrusted central server. We provide solutions for several problems under this setting with a privacy notion called differential privacy (DP). Our solutions guarantee that there is only limited and controllable privacy leakage from the data holders, while the utility of the final results, such as model prediction accuracy, remains comparable to that of non-private algorithms.
First, we investigate the problem of estimating the distribution over a numerical domain while satisfying local differential privacy (LDP). Our protocol prevents privacy leakage in the data collection phase, in which an untrusted data aggregator (or server) wants to learn the distribution of private numerical data among all users. The protocol consists of 1) a new reporting mechanism, the square wave (SW) mechanism, which randomizes the user inputs before sharing them with the aggregator; and 2) an Expectation Maximization with Smoothing (EMS) algorithm, which is applied to histograms aggregated from SW reports to estimate the original distribution.
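A minimal Python sketch of the two components just described, assuming user values are normalized to [0, 1]. The window half-width b, the bin count, and the smoothing kernel are illustrative assumptions; the paper derives an optimal b from the privacy budget.

```python
import numpy as np

def sw_perturb(v, eps, b=0.25, rng=None):
    """Square-wave-style report for a value v in [0, 1].

    With high density p the report lands in a window of half-width b around
    the true value; elsewhere the density is q, with p / q = e^eps, which
    yields eps-LDP.  Treating b as a free parameter is a simplification.
    """
    rng = rng or np.random.default_rng()
    ee = np.exp(eps)
    q = 1.0 / (2 * b * ee + 1)        # density outside the window
    p = ee * q                        # density inside the window
    if rng.random() < 2 * b * p:      # total probability mass of the window
        return rng.uniform(v - b, v + b)
    u = rng.uniform(0.0, 1.0)         # the outside region has total length 1
    return -b + u if u < v else v + b + (u - v)

def ems(reports, eps, b=0.25, bins=64, iters=200, s=0.25):
    """Expectation-Maximization with Smoothing on the reported histogram."""
    edges = np.linspace(-b, 1 + b, bins + 1)
    hist, _ = np.histogram(reports, bins=edges)
    mid_out = (edges[:-1] + edges[1:]) / 2       # report-domain bin centers
    mid_in = np.linspace(0, 1, bins)             # input-domain bin centers
    ee = np.exp(eps)
    q = 1.0 / (2 * b * ee + 1)
    p = ee * q
    # M[j, i] ~ Pr[report lands in bin j | true value lies in bin i]
    M = np.where(np.abs(mid_out[:, None] - mid_in[None, :]) <= b, p, q)
    M /= M.sum(axis=0, keepdims=True)
    f = np.full(bins, 1.0 / bins)                # current distribution estimate
    for _ in range(iters):
        post = M * f[None, :]
        post /= post.sum(axis=1, keepdims=True)  # E-step: posterior over inputs
        f = hist @ post                          # M-step: expected counts
        f = np.convolve(f, [s, 1 - 2 * s, s], mode="same")  # smoothing step
        f /= f.sum()
    return f
```

Feeding each user's `sw_perturb` report into `ems` recovers an estimate of the input distribution without the aggregator ever seeing raw values.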
Second, we study the matrix factorization problem in three federated learning settings with an untrusted server: vertical, horizontal, and local federated learning. We propose a generic algorithmic framework for solving the problem in all three settings, and we show how to adapt the algorithm into differentially private versions that prevent privacy leakage in both the training and publishing stages.
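The abstract does not spell out the framework; as a hedged illustration of the general pattern only, here is one common horizontal-FL recipe in which user embeddings stay on-device and only clipped, noised item-gradient updates are transmitted. All names and hyperparameters here are ours, not the dissertation's.

```python
import numpy as np

def local_mf_step(ratings, u, V, lr=0.05, clip=1.0, sigma=0.5, rng=None):
    """One user's local round of matrix factorization with a DP update.

    `ratings` maps item index -> rating; `u` is the user's private embedding
    (never transmitted); `V` is the shared item-embedding matrix.
    """
    rng = rng or np.random.default_rng()
    grad_V = np.zeros_like(V)
    for i, r in ratings.items():
        err = u @ V[i] - r            # prediction error on one rating
        u = u - lr * err * V[i]       # private update, stays on device
        grad_V[i] += err * u
    norm = np.linalg.norm(grad_V)
    if norm > clip:                   # clip to bound the update's sensitivity
        grad_V *= clip / norm
    grad_V += rng.normal(0.0, sigma * clip, grad_V.shape)  # Gaussian mechanism
    return u, grad_V                  # the server averages grad_V over users
```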
Finally, we propose an algorithm for solving the k-means clustering problem in vertical federated learning (VFL). A key challenge in VFL is the lack of a global view of each data point. To overcome this challenge, we propose a lightweight and differentially private set intersection cardinality estimation algorithm based on the Flajolet-Martin (FM) sketch to convey the weight information of the synopsis points. We provide a theoretical utility analysis for the cardinality estimation algorithm and further refine it for better empirical performance.
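For readers unfamiliar with the primitive, a sketch of the basic (non-private) Flajolet-Martin estimator in Python follows; the thesis's DP noising and refinements are omitted, and the salted-hash construction is an illustrative choice.

```python
import hashlib

PHI = 0.77351  # Flajolet-Martin bias-correction constant

def _rho(h: int) -> int:
    """0-indexed position of the least-significant set bit of h."""
    return (h & -h).bit_length() - 1

def fm_sketch(items, num_hashes=32):
    """One bitmap per salted hash function; bit r is set when some item's
    hash has its lowest set bit at position r."""
    bitmaps = [0] * num_hashes
    for x in items:
        for k in range(num_hashes):
            digest = hashlib.sha256(f"{k}:{x}".encode()).digest()
            h = int.from_bytes(digest[:8], "big") | (1 << 63)  # avoid h == 0
            bitmaps[k] |= 1 << _rho(h)
    return bitmaps

def fm_estimate(bitmaps):
    """Cardinality is about 2**R / PHI, where R is the lowest unset bit."""
    r = sum(_rho(~b) for b in bitmaps) / len(bitmaps)
    return (2 ** r) / PHI

def fm_union(s1, s2):
    """Sketches merge by bitwise OR, so two parties can estimate an
    intersection by inclusion-exclusion:
    est(A) + est(B) - est(union of A and B)."""
    return [a | b for a, b in zip(s1, s2)]
```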
|
33 |
Learning with constraints on processing and supervision. Acar, Durmuş Alp Emre, 30 August 2023 (has links)
Collecting a sufficient amount of data and centralizing it are both costly and privacy-sensitive operations. These practical concerns arise from the communication costs between data-collecting devices and from the personal nature of the data, such as an end user's text messages. The goal is to train generalizable machine learning models under such data constraints, without sharing or transferring the data.
In this thesis, we present solutions to several aspects of learning with data constraints, such as processing and supervision. We focus on federated learning, online learning, and learning generalizable representations, and we provide setting-specific training recipes.
In the first scenario, we tackle a federated learning problem where data is decentralized across different users and should not be centralized. Traditional approaches either ignore the heterogeneity problem or increase communication costs to handle it. Our solution carefully addresses the heterogeneity of user data by imposing a dynamic regularizer that adapts to each user without extra transmission costs. Theoretically, we establish convergence guarantees. We extend our ideas to personalized federated learning, where the model is customized to each end user, and to heterogeneous federated learning, where users support different model architectures.
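The dynamic-regularizer idea (published as FedDyn) admits a compact sketch of the client side. This assumes PyTorch, omits the server-side aggregation, and uses our own names and hyperparameters, so details may differ from the thesis.

```python
import torch

def feddyn_local_round(model, server_params, grad_state, loader, loss_fn,
                       alpha=0.01, lr=0.1, epochs=1):
    """One client's local round with a FedDyn-style dynamic regularizer.

    Local objective:
        L_k(theta) - <g_k, theta> + (alpha / 2) * ||theta - theta_server||^2
    `server_params` are detached copies of the global model; `grad_state` is
    the per-client first-order state g_k, updated locally below.
    """
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            for p, srv, g in zip(model.parameters(), server_params, grad_state):
                # linear drift-correction term plus proximal term
                loss = loss - (g * p).sum() + 0.5 * alpha * ((p - srv) ** 2).sum()
            loss.backward()
            opt.step()
    with torch.no_grad():  # state update: g <- g - alpha * (theta - theta_server)
        for p, srv, g in zip(model.parameters(), server_params, grad_state):
            g.sub_(alpha * (p - srv))
    return model, grad_state
```

Because `grad_state` is updated from quantities the client already holds, the per-round payload is the same as plain FedAvg's, which is the "no extra transmission costs" point above.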
In the next scenario, we consider online meta-learning, where there is only one user and the data distribution of that user changes over time. The goal is to adapt to new data distributions with very few labeled samples from each. A naive approach would store data from every distribution and retrain a model from scratch once enough data accumulates. Our solution instead efficiently summarizes the information in each task's data so that the memory footprint does not scale with the number of tasks.
Lastly, we aim to train generalizable representations from a given dataset. We consider a setting with access to a powerful (more complex) teacher model. Traditional methods do not distinguish among data points and force the student to learn all the information from the teacher. Our proposed method focuses on the learnable input space and carefully distills attainable information from the teacher model, discarding the over-capacity information.
We compare our methods with state-of-the-art methods in each setup and show significant performance improvements. Finally, we discuss potential directions for future work.
|
34 |
Attack Strategies in Federated Learning for Regression Models : A Comparative Analysis with Classification Models. Leksell, Sofia, January 2024 (has links)
Federated Learning (FL) has emerged as a promising approach for decentralized model training across multiple devices while still preserving data privacy. Previous research has predominantly concentrated on classification tasks in FL settings, leaving a noticeable gap in FL research on regression models. This thesis addresses that gap by examining the vulnerabilities of Deep Neural Network (DNN) regression models within FL, with a specific emphasis on adversarial attacks. The primary objective is to examine the impact on model performance of two distinct adversarial attacks: output-flipping and random-weights attacks. The investigation involves training FL models on three distinct data sets, engaging eight clients in the training process, and varying the number of malicious clients to understand how the attacks influence model performance. Results indicate that the output-flipping attack significantly decreases model performance once at least two malicious clients are involved, while the random-weights attack causes a substantial decrease with just one malicious client out of the eight. It is important to note that this study operates at a theoretical level and does not explicitly account for real-world conditions such as non-identically distributed (non-IID) data, extensive data sets, or larger numbers of clients. In conclusion, this study contributes to the understanding of adversarial attacks in FL, specifically for DNN regression models. The results highlight the importance of defending FL models against adversarial attacks and the significance of future research in this domain.
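Both attacks are simple to state in code. The following hedged sketch shows one plausible reading for the regression setting; the abstract does not give the exact flipping rule or weight distribution, so both are assumptions.

```python
import numpy as np

def output_flipping_update(x_local, y_local, train_fn):
    """Malicious client trains on flipped targets.  For regression we read
    'output flipping' as reflecting each target across the local value
    range; the thesis's exact rule may differ."""
    y_flipped = y_local.min() + y_local.max() - y_local
    return train_fn(x_local, y_flipped)

def random_weights_update(weight_shapes, scale=1.0, rng=None):
    """Malicious client skips training entirely and submits random weights."""
    rng = rng or np.random.default_rng()
    return [rng.normal(0.0, scale, shape) for shape in weight_shapes]
```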
|
36 |
ENVIRONMENTAL INTERNSHIP STORE PLANNING, ARCHITECTURE, CONSTRUCTION, AND ENGINEERING DEPARTMENT FEDERATED DEPARTMENT STORES. Buerk, Phillip C., 05 December 2003 (has links)
No description available.
|
37 |
Enhancing Privacy in Federated Learning: Mitigating Model Inversion Attacks through Selective Model Transmission and Algorithmic Improvements. Jonsson, Isak, January 2024 (has links)
This project aims to identify a sustainable way to construct and train machine learning models. A crucial factor in creating effective machine learning models is access to vast amounts of data. However, this can pose a challenge because data is often confidential and dispersed across various entities. Collecting all the data can thus become a security concern, as transmitting it to a centralized computing location may expose it to security risks. One solution to this issue is federated learning, which uses locally trained AI models: instead of transmitting data to a centralized computing location, this approach sends locally trained AI models and combines them into a global model. In recent years, a class of attacks called Model Inversion Attacks (MIA) has emerged, showing that training data can potentially be extracted from trained AI models. This heightens the vulnerability of sending models instead of data, posing a security risk. In this project, various Model Inversion Attack methodologies are examined to better understand this risk. The papers examined demonstrated some success in extracting data from trained AI models, though not enough to raise significant concerns; nonetheless, future MIA research may make sending models between parties a genuine security concern. Sending only parts of the locally trained models to the global model effectively neutralizes all the examined Model Inversion Attacks. However, the results presented in this project show that challenges persist when only parts of a trained model are sent: the difficulty lies in constructing a usable federated learning model under this restriction. Achieving a good federated learning model required several adjustments to the algorithm, which showed promising results for the future of federated learning.
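Selective model transmission can be sketched in a few lines. This is a hedged illustration of the general idea, not the thesis's implementation; the layer-selection policy and the averaging rule are assumptions.

```python
def select_update(state_dict, shared_layers):
    """Keep only the parameters of the layers chosen for transmission; the
    remaining layers never leave the device, which is what frustrates the
    examined model-inversion attacks."""
    return {name: tensor for name, tensor in state_dict.items()
            if any(name.startswith(layer) for layer in shared_layers)}

def apply_partial_updates(global_state, partial_updates):
    """Server averages the transmitted entries across clients and leaves the
    untransmitted entries of the global model unchanged."""
    merged = dict(global_state)
    for name in merged:
        received = [u[name] for u in partial_updates if name in u]
        if received:
            merged[name] = sum(received) / len(received)
    return merged
```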
|
38 |
Distributed Architectures for Enhancing Artificial Intelligence of Things Systems. A Cloud Collaborative Model. Elouali, Aya, 23 November 2023 (has links)
In today's world, IoT systems are increasingly pervasive. All electronic devices are becoming connected: lamps and refrigerators in smart homes, smoke detectors and cameras in monitoring systems, scales and thermometers in healthcare systems, and phones, cars, and watches in smart cities. All these connected devices generate a huge amount of data collected from the environment. To take advantage of these data, a processing phase is needed to extract useful information and enable the best management of the system. Since most objects in IoT systems are resource-limited, the processing step, usually performed by an artificial intelligence model, is offloaded to a more powerful machine such as a cloud server in order to benefit from its high storage and processing capacities. However, the cloud server is geographically remote from the connected device, which introduces long communication delays and harms the effectiveness of the system. Moreover, the rapidly increasing number of IoT devices, and hence of offloading operations, has significantly increased the load on the network. To retain the advantages of cloud-based AIoT systems while minimizing their shortcomings, this thesis designs a distributed architecture that combines these three domains while reducing latency and bandwidth consumption as well as the IoT device's energy and resource consumption. Experiments conducted on different cloud-based AIoT systems showed that the designed architecture can reduce the transmitted data by up to 80%.
|
39 |
REFT: Resource-Efficient Federated Training Framework for Heterogeneous and Resource-Constrained Environments. Desai, Humaid Ahmed Habibullah, 22 November 2023 (has links)
Federated Learning (FL) is a sub-domain of machine learning (ML) that enforces privacy by allowing users' local data to reside on their devices. Instead of having users send their personal data to a server where the model resides, FL flips the paradigm and brings the model to the user's device for training. Existing works share model parameters or use distillation principles to address the challenges of data heterogeneity, but they ignore two other fundamental challenges in FL: device heterogeneity and communication efficiency. In practice, client devices in FL differ greatly in their computational power and communication resources. This is exacerbated by unbalanced data distribution, resulting in longer training times and higher bandwidth consumption. In this work, we present REFT, a novel approach for resource-efficient FL that uses variable pruning and knowledge distillation to address the computational and communication challenges faced by resource-constrained devices.
Our variable pruning technique is designed to reduce computational overhead and increase resource utilization by adapting the pruning process to each client's individual computational capabilities. Furthermore, to minimize bandwidth consumption and reduce the number of back-and-forth communications between the clients and the server, we leverage knowledge distillation to create an ensemble of client models and distill their collective knowledge to the server. Our experimental results on image classification tasks demonstrate the effectiveness of our approach in conducting FL in a resource-constrained environment. We achieve this by training Deep Neural Network (DNN) models while optimizing resource utilization at each client. Additionally, our method allows for minimal bandwidth consumption and a diverse range of client architectures while maintaining performance and data privacy. / Master of Science / In a world driven by data, preserving privacy while leveraging the power of machine learning (ML) is a critical challenge. Traditional approaches often require sharing personal data with central servers, raising concerns about data privacy. Federated Learning (FL) is a cutting-edge solution that turns this paradigm on its head. FL brings the machine learning model to your device, allowing it to learn from your data without that data ever leaving your device. While FL holds great promise, it faces its own set of challenges. Existing research has largely focused on making FL work with different types of data, but other issues remain. Our work introduces a novel approach called REFT that addresses two critical challenges in FL: making it work smoothly on devices with varying levels of computing power, and reducing the amount of data that needs to be transferred during the learning process. Imagine your smartphone and your laptop: they have different levels of computing power. REFT adapts the learning process to each device's capabilities using a proposed technique called variable pruning. Think of it as a personalized fitness trainer tailoring the workout to your fitness level. Additionally, we adopt a technique called knowledge distillation. It is like a student learning from a teacher, where the teacher shares only the most critical information; in our case, this reduces the amount of data that needs to be sent across the internet, saving bandwidth and making FL more efficient. Our experiments, which involved training machines to recognize images, demonstrate that REFT works well even on devices with limited resources. It is a step forward in ensuring your data stays private while still making machine learning smarter and more accessible.
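A hedged sketch of the variable-pruning idea: magnitude pruning whose sparsity is set per client from its compute budget. The capability-to-sparsity mapping and the per-tensor threshold rule are our assumptions, not necessarily REFT's.

```python
import numpy as np

def variable_prune(weights, capability):
    """Magnitude-prune each weight tensor to fit a client's compute budget.

    capability in (0, 1]: 1.0 keeps the full model; weaker devices receive a
    sparser one.
    """
    sparsity = 1.0 - capability        # fraction of weights to drop
    pruned = []
    for w in weights:
        flat = np.abs(w).ravel()
        k = int(sparsity * flat.size)
        if k > 0:
            thresh = np.partition(flat, k - 1)[k - 1]  # k-th smallest |w|
            w = np.where(np.abs(w) <= thresh, 0.0, w)  # zero the smallest weights
        pruned.append(w)
    return pruned
```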
|
40 |
Practical Privacy-Preserving Federated Learning with Secure Multi-Party Computation. Akhtar, Benjamin Asad, 12 August 2024 (has links)
Master of Science / In a world with an ever-greater need for machine learning and artificial intelligence, it has become increasingly important to offload computation-intensive tasks to companies with the compute resources to perform training on potentially sensitive data. In applications such as finance or healthcare, data providers may need to train on large quantities of data but cannot reveal the data to outside parties for legal or other reasons. Originally, a decentralized training method known as Federated Learning (FL) was proposed to ensure data did not leave the client's device. This method was still susceptible to attacks, and further security was needed. Multi-Party Computation (MPC) was proposed in conjunction with FL, as it provides a way to compute securely with no leakage of data values. This was utilized in a framework called SAFEFL; however, it was extremely slow.
Reducing the computation overhead with the programming tools at our disposal turns this framework from an impractical design into a useful one. The design can now be used in industry; some overhead remains compared to non-MPC computing, but it has been greatly reduced.
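The MPC primitive underlying frameworks like SAFEFL is additive secret sharing over a finite field; a minimal hedged sketch follows (the modulus and the quantization of model updates to integers are illustrative choices, not the framework's actual parameters).

```python
import numpy as np

PRIME = 2**31 - 1  # illustrative field modulus

def share(secret, n_parties, rng=None):
    """Split an integer vector into n additive shares that sum to the secret
    mod PRIME; any n-1 shares are uniformly random and reveal nothing."""
    rng = rng or np.random.default_rng()
    parts = [rng.integers(0, PRIME, secret.shape) for _ in range(n_parties - 1)]
    parts.append((secret - sum(parts)) % PRIME)
    return parts

def reconstruct(parts):
    """Adding all shares mod PRIME recovers the value; applied to shares
    summed across clients, this reveals only the aggregate update."""
    return sum(parts) % PRIME
```

In an FL round, each client quantizes its model update, shares it among the compute parties, and the parties add the shares elementwise, so only the aggregate of all clients' updates is ever reconstructed.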
|