  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Single Sign On med Azure AD Connect / Single Sign On with Azure AD Connect

Bohman, Dan January 2016 (has links)
This report covers Azure AD Connect and Single/Simplified Sign On. Users and customers today place greater demands on easier login and a more seamless experience for reaching all IT services. Microsoft has recently released the Azure AD Connect tool to synchronize passwords between Active Directory and the cloud services Office 365 and Azure, as well as thousands of Software as a Service (SaaS) applications. TeamNorr IT-partner is an IT company that focuses on delivering Microsoft products to its customers and therefore wanted to know more about Azure AD Connect: how to configure the solution and what the requirements are. Single Sign On means that users sign in once and then automatically gain access to all applications that support the technology, without entering further credentials. A federated domain gives the best and safest Single Sign On experience. Simplified Sign On lets users log in to all supported applications with the same username and password, but with no automatic login. The Azure AD Connect tool installs the roles needed to run Single Sign On or Simplified Sign On. By default, a synchronization engine keeps information about users and groups, and their passwords, consistent between the on-premises Active Directory and Azure Active Directory or the federation server. What the synchronization engine includes when it synchronizes is determined by the rules that have been defined. The Password Sync solution does not install any extra server roles; choosing a federated domain instead installs two extra roles, Federation (AD FS) and Web Application Proxy (WAP), which handle user authentication in place of Microsoft's own authentication. The servers hosting these roles have baseline performance requirements that depend on the size of the Active Directory and the number of connected users.
22

Coordinated resource provisioning in federated grids

Ranjan, Rajiv Unknown Date (has links) (PDF)
A fundamental problem in building large-scale Grid resource-sharing systems is the need for efficient and scalable techniques for the discovery and provisioning of resources that deliver the expected Quality of Service (QoS) to users' applications. Current approaches to Grid resource sharing based on resource brokers are non-coordinated, since these brokers make scheduling decisions independently of the others in the system. Clearly, this worsens the load-sharing and utilisation problems of distributed Grid resources, as sub-optimal schedules are likely to occur. Further, existing brokering systems rely on centralised information services for resource discovery. Centralised or hierarchical resource discovery systems are prone to single points of failure and lack scalability and fault tolerance. In the centralised model, the network links leading to the server are critical to the overall functionality of the system, as their failure might halt the entire distributed system's operation.
23

Desenvolvimento e avaliação de um escalonador para grades colaborativas baseado em consumo de energia / Development and evaluation of a scheduler for federated grids based on energy consumption

Forte, Cássio Henrique Volpatto 07 February 2018 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / The increasing complexity of applications and the large volume of data they use lead to an ever-increasing reliance on high-performance distributed systems. In recent decades, energy consumption has become more relevant to the proper functioning of these systems, and managing it is a major challenge for hardware designers, application developers, and administrators. The difficulty arises from the conflict between power consumption and performance: reducing the energy consumption of machines in a distributed system also reduces performance, while making machines work faster improves performance at the cost of increased energy consumption. In this scenario, task scheduling policies can also take energy consumption into account, helping to address the problem. This document presents the development and evaluation of EHOSEP (Energy-aware Heterogeneous Owner-Share Enforcement Policy), a new scheduling algorithm for independent tasks in federated computing grids. The goal of the new algorithm is to address energy consumption by associating it with an ownership fairness criterion. This fairness criterion stems from so-called federated or cooperative grids, which are formed from the computational resources of different owners and seek to encourage resource sharing by guaranteeing fair usage. Results from simulating EHOSEP on different grid models show that it is possible to encourage use of the grid while respecting power limits.
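The energy/performance trade-off described above can be illustrated with a toy greedy scheduler. This is not EHOSEP itself (which additionally enforces an owner-share fairness criterion); it is only a sketch of energy-aware placement of independent tasks under a power cap, with all names and the cost model assumed for illustration:

```python
def schedule(tasks, machines, power_cap):
    """Assign each task to the machine with the lowest energy cost.

    tasks: list of task sizes (abstract work units).
    machines: dict name -> (speed in work/s, power draw in watts).
    power_cap: machines drawing more than this are excluded.
    """
    assignment = {}
    load = {name: 0.0 for name in machines}  # accumulated busy time per machine
    for i, work in enumerate(tasks):
        best, best_energy = None, float("inf")
        for name, (speed, watts) in machines.items():
            if watts > power_cap:            # respect the power limit
                continue
            energy = (work / speed) * watts  # joules = seconds * watts
            if energy < best_energy:
                best, best_energy = name, energy
        if best is None:
            raise ValueError("no machine satisfies the power cap")
        assignment[i] = best
        load[best] += work / machines[best][0]
    return assignment, load
```

A fuller model would also track queueing delay and each owner's share of the grid, which is where a fairness criterion like EHOSEP's would enter.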
24

A Resource-Aware Federated Learning Simulation Platform

Leandro, Fellipe 07 1900 (has links)
Increasing concerns about users' data privacy make distributed Machine Learning applications, which are usually data-hungry, harder to deploy. Federated Learning has emerged as a privacy-preserving distributed machine learning paradigm in which each client's dataset is kept locally and only the local model parameters are transmitted to the central server. However, adopting the Federated Learning paradigm creates new edge computing challenges, since it assumes computationally intensive tasks can be executed locally by each device. The diverse hardware resources in a population of edge devices (e.g., smartphone models) can negatively impact the performance of Federated Learning at both the global and local levels. This thesis contributes to this context with the implementation of a hardware-aware Federated Learning platform, which provides comprehensive support for studying the impact of hardware heterogeneity on Federated Learning performance metrics by modeling the computation and communication costs associated with training tasks.
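A minimal sketch of the kind of cost model such a platform might use, assuming a synchronous setting where each round waits for its slowest client and each client's time splits into local computation plus model upload (all parameter names here are illustrative, not the platform's actual API):

```python
def client_time(flops, flops_per_sec, model_bytes, bandwidth_bps):
    """One client's per-round cost: local training time plus model upload time."""
    compute = flops / flops_per_sec
    comm = model_bytes * 8 / bandwidth_bps  # bytes -> bits, over bits/second
    return compute + comm

def round_time(clients):
    """Synchronous FL: the round finishes when the slowest client reports back.

    clients: list of (flops, flops_per_sec, model_bytes, bandwidth_bps) tuples.
    """
    return max(client_time(*c) for c in clients)
```

Under this model a single slow phone dominates the round, which is exactly the straggler effect that hardware-aware simulation aims to expose.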
25

Intelligent Device Selection in Federated Edge Learning with Energy Efficiency

Peng, Cheng 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Due to the increasing demand from mobile devices for real-time cloud computing services, federated edge learning (FEL) has emerged as a new computing paradigm that utilizes edge devices to achieve efficient machine learning while protecting data privacy. Implementing efficient FEL suffers from the challenges of devices' limited computing and communication resources, as well as unevenly distributed datasets, which has inspired existing research on device selection to optimize time consumption and data diversity. However, these studies fail to consider the energy consumption of edge devices given their limited power supply, which can seriously affect the cost-efficiency of FEL through unexpected device dropouts. To fill this gap, we propose a device selection model capturing both energy consumption and data diversity optimization, under constraints on time consumption and the amount of training data. We then solve the optimization problem by reformulating the original model and designing a novel algorithm, named E2DS, that greatly reduces the time complexity. By comparing with two classical FEL schemes, we validate the superiority of our proposed device selection mechanism with extensive experimental results. Furthermore, in a real FEL environment multiple tasks occupy each device's CPU at the same time, so the CPU frequency available for training fluctuates constantly, which may lead to large errors when computing energy consumption. To solve this problem, we deploy reinforcement learning to learn the frequency so as to approach its real value. And rather than increasing data diversity, we consider a more direct way to improve convergence speed using loss values. We then formulate the optimization problem that minimizes energy consumption and maximizes loss values to select the appropriate set of devices. After reformulating the problem, we design a new algorithm, FCE2DS, which achieves better convergence speed and accuracy. Finally, we compare the performance of the proposed scheme with the previous scheme and the traditional scheme to verify its improvements in multiple aspects.
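As a rough illustration of the selection idea only (not the E2DS or FCE2DS algorithms themselves, whose formulations are not reproduced here): a greedy pick of devices ranked by loss value per joule, skipping any device whose local round time would break the time budget. All names and units are assumptions:

```python
def select_devices(devices, k, time_budget):
    """Greedy device selection: prefer devices with high loss value per joule,
    and skip any device whose local round time exceeds the time budget.

    devices: dict id -> (energy_joules, loss_value, round_time_seconds).
    k: number of devices to select.
    """
    ranked = sorted(devices.items(),
                    key=lambda kv: kv[1][1] / kv[1][0],  # loss per joule
                    reverse=True)
    chosen = []
    for dev_id, (energy, loss, t) in ranked:
        if t <= time_budget:
            chosen.append(dev_id)
        if len(chosen) == k:
            break
    return chosen
```

Ranking by loss-per-joule captures the paper's stated trade-off in the crudest possible way; the actual optimization is a constrained formulation, not a greedy heuristic.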
26

Detecting Distracted Drivers using a Federated Computer Vision Model : With the Help of Federated Learning

Viggesjöö, Joel January 2023 (has links)
One of the most common driving distractions is performing activities that divert attention away from the road, such as using a phone for texting. To address this issue, techniques such as machine learning and computer vision can be used to identify and notify distracted drivers. A solution was presented in an earlier article using a traditional centralized machine learning approach, with good prediction accuracy. As a next step, that article suggested that the computer vision algorithms could be extended to a federated learning setting to further increase the robustness of the model. This project therefore extended the centralized machine learning model to a federated learning setting, aiming to preserve the accuracy. Additionally, the project explored quantization techniques to achieve a smaller model while keeping the prediction accuracy, and investigated whether data reconstruction methods could further increase privacy for user data. The project successfully extended the implementation to a federated learning setting and implemented the quantization techniques for size reduction, but the data reconstruction solution was never implemented due to time constraints. The project used a mixture of Python frameworks to extend the solution and reduce the model size, resulting in one decentralized model and three models reduced in size by 48 %, 70 %, and 71 % compared to the decentralized model. These models achieved prediction accuracy similar to the original centralized model, indicating that the project was a success.
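The size-reduction step can be illustrated with a simple post-training quantization sketch. This assumes symmetric linear int8 quantization on a flat list of weights; the thesis's exact quantization technique is not specified here, so treat this as a generic example:

```python
def quantize_int8(weights):
    """Symmetric linear quantization of float weights to the int8 range,
    returning quantized values and the scale needed to dequantize."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]
```

Storing one byte per weight instead of four is where size reductions of the order reported above come from, at the cost of small rounding errors in each weight.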
27

Reinforcement Learning assisted Adaptive difficulty of Proof of Work (PoW) in Blockchain-enabled Federated Learning

Sethi, Prateek 10 August 2023 (has links)
This work addresses the challenge of heterogeneity in blockchain mining, particularly in the context of consortium and private blockchains. The motivation stems from ensuring fairness and efficiency in blockchain technology's Proof of Work (PoW) consensus mechanism. Existing consensus algorithms, such as PoW, PoS, and PoB, have succeeded in public blockchains but face challenges due to heterogeneous miners. This thesis highlights the significance of considering miners' computing power and resources in PoW consensus mechanisms to enhance efficiency and fairness. It explores the implications of heterogeneity in blockchain mining in various applications, such as Federated Learning (FL), which aims to train machine learning models across distributed devices collaboratively. The research objectives of this work involve developing novel RL-based techniques to address the heterogeneity problem in consortium blockchains. Two proposed RL-based approaches, RL based Miner Selection (RL-MS) and RL based Miner and Difficulty Selection (RL-MDS), focus on selecting miners and dynamically adapting the difficulty of PoW based on the computing power of the chosen miners. The contributions of this research work include the proposed RL-based techniques, modifications to the Ethereum code for dynamic adaptation of Proof of Work Difficulty (PoW-D), integration of the Commonwealth Cyber Initiative (CCI) xG testbed with an AI/ML framework, implementation of a simulator for experimentation, and evaluation of different RL algorithms. The research also includes additional contributions in Open Radio Access Network (O-RAN) and smart cities. The proposed research has significant implications for achieving fairness and efficiency in blockchain mining in consortium and private blockchains. By leveraging reinforcement learning techniques and considering the heterogeneity of miners, this work contributes to improving the consensus mechanisms and performance of blockchain-based systems. 
/ Master of Science / Technological advancement has led to devices with powerful yet heterogeneous computational resources. Because miner nodes in a blockchain differ in computing power, the PoW consensus mechanism is unfair: more powerful devices have a higher chance of mining a block and profiting from the mining process. Additionally, PoW introduces delay due to the time to mine and the block propagation time. This work uses Reinforcement Learning to address the challenge of heterogeneity in a private Ethereum blockchain, and introduces a time constraint to ensure efficient blockchain performance for time-critical applications.
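As a toy stand-in for the RL agent described above, a simple feedback rule conveys the core idea of PoW-D: adapt the difficulty to the selected miner's observed speed so block times stay near a target. The function name, gain, and floor are assumptions for the sketch, not values from the thesis:

```python
def adapt_difficulty(difficulty, observed_block_time, target_block_time,
                     gain=0.5, floor=1.0):
    """Nudge PoW difficulty toward a target block time: lower it when a weak
    miner took too long, raise it when a powerful miner finished too fast."""
    error = (target_block_time - observed_block_time) / target_block_time
    return max(floor, difficulty * (1.0 + gain * error))
```

An RL agent replaces this fixed-gain rule with a learned policy that can also account for which miner is selected next, which is what RL-MDS does jointly with miner selection.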
28

Federated search to merge the results of the extracted functional requirements

Li, Xiang 22 August 2022 (has links)
No description available.
29

DIFFERENTIAL PRIVACY IN DISTRIBUTED SETTINGS

Zitao Li (14135316) 18 November 2022 (has links)
Data is considered the "new oil" of the information society and digital economy. While many commercial activities and government decisions are based on data, the public raises growing concerns about privacy leakage when their private data are collected and used. In this dissertation, we investigate the privacy risks in settings where data are distributed across multiple data holders and there is only an untrusted central server. We provide solutions for several problems in this setting under a security notion called differential privacy (DP). Our solutions guarantee only limited and controllable privacy leakage from the data holders, while the utility of the final results, such as model prediction accuracy, remains comparable to that of non-private algorithms.

First, we investigate the problem of estimating a distribution over a numerical domain while satisfying local differential privacy (LDP). Our protocol prevents privacy leakage in the data collection phase, in which an untrusted data aggregator (or server) wants to learn the distribution of private numerical data among all users. The protocol consists of 1) a new reporting mechanism called the square wave (SW) mechanism, which randomizes the user inputs before sharing them with the aggregator; and 2) an Expectation Maximization with Smoothing (EMS) algorithm, which is applied to histograms aggregated from the SW mechanism to estimate the original distributions.

Second, we study the matrix factorization problem in three federated learning settings with an untrusted server: the vertical, horizontal, and local federated learning settings. We propose a generic algorithmic framework for solving the problem in all three settings and show how to adapt the algorithm into differentially private versions that prevent privacy leakage in the training and publishing stages.

Finally, we propose an algorithm for solving the k-means clustering problem in vertical federated learning (VFL). A big challenge in VFL is the lack of a global view of each data point. To overcome this, we propose a lightweight and differentially private set intersection cardinality estimation algorithm based on the Flajolet-Martin (FM) sketch to convey the weight information of the synopsis points. We provide a theoretical utility analysis for the cardinality estimation algorithm and further refine it for better empirical performance.
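The Flajolet-Martin sketch mentioned above can be illustrated with a small distinct-count estimator. This is a textbook FM variant using salted hashes and the standard correction constant, not the dissertation's refined, differentially private algorithm:

```python
import hashlib

def _trailing_zeros(h):
    """Number of trailing zero bits in a 32-bit hash value."""
    if h == 0:
        return 32
    tz = 0
    while h & 1 == 0:
        h >>= 1
        tz += 1
    return tz

def fm_estimate(items, num_hashes=32):
    """Flajolet-Martin distinct-count estimate: track the maximum number of
    trailing zero bits seen under each salted hash, average the maxima over
    the hashes, and apply the standard 0.77351 correction factor."""
    total = 0.0
    for salt in range(num_hashes):
        r = 0
        for x in items:
            digest = hashlib.sha256(f"{salt}:{x}".encode()).digest()
            h = int.from_bytes(digest[:4], "big")
            r = max(r, _trailing_zeros(h))
        total += r
    return (2 ** (total / num_hashes)) / 0.77351
```

The key property exploited in VFL is that duplicates do not change the sketch, so parties can estimate intersection cardinalities from compact summaries rather than raw identifiers.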
30

Learning with constraints on processing and supervision

Acar, Durmuş Alp Emre 30 August 2023 (has links)
Collecting a sufficient amount of data and centralizing it are both costly and privacy-concerning operations. These practical concerns arise from the communication costs between data-collecting devices and from the data being personal, such as an end user's text messages. The goal is to train generalizable machine learning models under constraints on the data, without sharing or transferring it. In this thesis, we present solutions to several aspects of learning with data constraints, such as processing and supervision. We focus on federated learning, online learning, and learning generalizable representations, and provide setting-specific training recipes. In the first scenario, we tackle a federated learning problem where data is decentralized across different users and should not be centralized. Traditional approaches either ignore the heterogeneity problem or increase communication costs to handle it. Our solution carefully addresses the heterogeneity of user data by imposing a dynamic regularizer that adapts to the heterogeneity of each user without extra transmission costs, and we theoretically establish convergence guarantees. We extend these ideas to personalized federated learning, where the model is customized to each end user, and to heterogeneous federated learning, where users support different model architectures. In the next scenario, we consider online meta-learning, where there is only one user and the user's data distribution changes over time. The goal is to adapt to new data distributions with very few labeled samples from each distribution. A naive approach stores data from every distribution so that a model can be retrained from scratch with sufficient data; our solution instead efficiently summarizes the information from each task so that the memory footprint does not scale with the number of tasks. Lastly, we aim to train generalizable representations from a given dataset, in a setting where we have access to a powerful teacher (more complex) model. Traditional methods do not distinguish between points and force the model to learn all the information from the powerful model. Our proposed method focuses on the learnable input space and carefully distills attainable information from the teacher model, discarding the over-capacity information. We compare our methods with state-of-the-art methods in each setup and show significant performance improvements. Finally, we discuss potential directions for future work.
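For context on the federated scenario above: the server-side aggregation step that client-side regularizers modify can be sketched as plain data-size-weighted averaging (the FedAvg step). The dynamic-regularizer logic itself acts during each client's local training and is omitted here; this function and its argument names are illustrative:

```python
def fed_avg(client_weights, client_sizes):
    """Server-side FedAvg step: average client parameter vectors weighted by
    each client's dataset size. Any client-side dynamic regularizer is applied
    during local training, before this aggregation."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[j] * n for w, n in zip(client_weights, client_sizes)) / total
            for j in range(dim)]
```

With heterogeneous user data, this plain average can drift away from the global optimum, which is exactly the failure mode the thesis's dynamic regularizer is designed to correct without extra transmission cost.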
