121

A Common Programming Interface for Managed Heterogeneous Data Analysis

Luong, Johannes 28 July 2021 (has links)
The widespread success of data analysis in a growing number of application domains has led to the development of a variety of purpose-built data processing systems. Today, many organizations operate whole fleets of different data-related systems. Although this differentiation has good reasons, there is also a growing need to create holistic perspectives that cut across the borders of individual systems. Application experts who want to create such perspectives are confronted with a variety of programming interfaces, data formats, and the task of combining the available systems in an efficient manner. These issues are generally unrelated to the application domain and require a specialized set of skills. As a consequence, development is slowed down and made more expensive, which stifles exploration and innovation. In addition, the direct use of specialized system interfaces can couple application code to specific processing systems. In this dissertation, we propose the data processing platform DataCalc, which presents users with a unified, application-oriented programming interface and which automatically executes this interface in an efficient manner on a variety of processing systems. DataCalc offers a managed environment for data analyses that enables domain experts to concentrate on their application logic and decouples code from specific processing technology. The basis of this managed processing environment is the high-level, domain-oriented program representation DCIL together with a flexible and extensible cost-based optimization component. In addition to traditional up-front optimization, the optimizer also supports dynamic re-optimization of partially executed DCIL programs. This enables the system to benefit from dynamic information that only becomes available during the execution of queries. DataCalc assigns workloads to the available processing systems using a fine-grained task scheduling model to enable efficient exploitation of the available resources. In the second part of the dissertation, we present a prototypical implementation of the DataCalc platform which includes connectors for the relational DBMS PostgreSQL, the document store MongoDB, the graph database Neo4j, and the custom-built PyProc processing system. For the evaluation of this prototype we have implemented an extended application scenario. Our experiments demonstrate that DataCalc is able to find and execute efficient execution strategies that minimize cross-system data movement. The system achieves much better results than a naive implementation and comes close to the performance of a hand-optimized solution. Based on these findings, we conclude that the DataCalc platform architecture provides an excellent environment for cross-domain data analysis on a heterogeneous, federated processing architecture.
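To give a flavor of the kind of decision a cost-based optimizer in such a platform has to make, the following minimal sketch (not taken from the dissertation; the engines, cost numbers, and function names are invented) assigns a small pipeline of tasks to processing systems so that estimated execution cost plus cross-system data movement is minimized:

```python
from itertools import product

# Hypothetical illustration: a toy cost model that places a pipeline of tasks
# on processing systems, penalizing cross-system data movement.
ENGINES = ["postgresql", "mongodb", "neo4j", "pyproc"]

# exec_cost[task][engine]: assumed per-task cost estimates (made-up numbers)
exec_cost = {
    "filter_orders":  {"postgresql": 1.0, "mongodb": 1.5, "neo4j": 5.0, "pyproc": 3.0},
    "join_customers": {"postgresql": 2.0, "mongodb": 6.0, "neo4j": 4.0, "pyproc": 5.0},
    "score_python":   {"postgresql": 9.0, "mongodb": 9.0, "neo4j": 9.0, "pyproc": 1.0},
}
transfer_cost = 4.0  # assumed flat penalty for moving intermediate data between systems

def best_plan(tasks):
    """Exhaustively search task->engine assignments (fine for a handful of tasks)."""
    best, best_cost = None, float("inf")
    for assignment in product(ENGINES, repeat=len(tasks)):
        cost = sum(exec_cost[t][e] for t, e in zip(tasks, assignment))
        # add a movement penalty whenever consecutive tasks run on different systems
        cost += sum(transfer_cost for a, b in zip(assignment, assignment[1:]) if a != b)
        if cost < best_cost:
            best, best_cost = assignment, cost
    return list(zip(tasks, best)), best_cost

plan, cost = best_plan(["filter_orders", "join_customers", "score_python"])
print(plan, cost)
```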
122

Extending a Component Platform to Support Multimodal Applications with Federated Devices / Erweiterung einer Komponentenplattform zur Unterstützung multimodaler Anwendungen mit föderierten Endgeräten

Kadner, Kay 05 May 2008 (has links)
To accomplish a task, the user can interact with different devices that offer different kinds of interaction (modalities). However, no single device supports every conceivable modality. For this reason, a component-based integration layer is developed on top of a component platform that gives the user the desired freedom in choosing devices and thus modalities. The W3C Multimodal Interaction Framework serves as the starting point. Using the integration layer, the user can, for example, create device federations that can be used for interaction individually or jointly. The integration layer provides several concepts, e.g. for distributing business logic at runtime, handling component failures, and synchronizing a user interface distributed across multiple devices. The developed concepts were implemented as a prototype, validated, and evaluated for performance.
123

A Comparative Case Study on How the Swedish and British Armed Forces Use Multi Domains in Aspects of Methods, Technology, and Organization / En jämförande fallstudie om hur den svenska och brittiska Försvarsmakten använder multidomänbegreppet i form av metoder, teknologi och organisation

Keyvanpour, Daniel January 2022 (has links)
Multi-domain operations are vaguely defined and there is a variety of interpretations. In general terms, multi-domain can be described as a means of communication between different joint forces across domains such as land, sea, air, cyber, and space. In multi-domain operations, the focus is on how those domains can be integrated using technologies, methods, and planning. By interviewing individuals with long experience in both the British and Swedish Armed Forces and conducting a literature study, the focus has been on understanding how multi-domain operations as a concept are understood, interpreted, and implemented in each nation's operations today with regard to technology and organizational structure. The results were compared with frameworks such as Federated Mission Networking (FMN) and Levels of Information Systems Interoperability (LISI). The analysis shows that both the Swedish and British Armed Forces need greater interoperability. In order to cooperate better within their forces, a more agile approach to the organization is needed, one that takes advantage of information and communication technologies. This can be achieved by managing different protocols across the different layers and models, and by introducing a cloud service through which the information flow is fast and easily accessible, independent of the domain.
124

Wireless Network Intrusion Detection and Analysis using Federated Learning

Cetin, Burak 12 May 2020 (has links)
No description available.
125

SSASy: A Self-Sovereign Authentication Scheme

Manzi, Olivier January 2023 (has links)
Amidst the wild west of user authentication, this study introduces a new sheriff in town: the Self-Sovereign Authentication Scheme (SSASy). Traditional authentication methods, like passwords, are often fraught with usability and security concerns, leading users to adopt workarounds that compromise the intended security. Federated Identities (FI) offer a convenient alternative, yet they infringe on users' sovereignty over their identity and lead to privacy concerns. To address these challenges, this study proposes SSASy, which leverages cryptography and browser technology to provide a sovereign, usable, and secure alternative to existing user authentication schemes. The proposal, a proof of concept, comprises a core library, which provides the authentication protocol to developers, and a browser extension that simplifies the authentication process for users. SSASy is available as an open-source project on GitHub and, for practical demonstration, on multiple browser stores, bringing the theoretical study into the realm of tangible, real-world application. SSASy is evaluated and compared to existing authentication schemes using the "Usability-Deployability-Security" (UDS) framework. The results demonstrate that, although other authentication schemes may excel in a specific dimension, SSASy delivers a more balanced performance across the three dimensions, which makes it a promising alternative.
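The abstract does not spell out the protocol, but a self-sovereign scheme of this kind typically boils down to proving possession of a user-held private key. The following is a hedged sketch of such a challenge-response flow using Ed25519 signatures; it is an illustration only, not the actual SSASy protocol:

```python
# Hypothetical challenge-response flow in the spirit of a self-sovereign
# scheme; this is NOT the actual SSASy protocol, and all names are illustrative.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# User side: the key pair is the self-sovereign identity (held in the browser).
user_key = Ed25519PrivateKey.generate()
user_public_key = user_key.public_key()   # registered with the service once

# Service side: issue a fresh random challenge for each login attempt.
challenge = os.urandom(32)

# User side: prove possession of the private key by signing the challenge.
signature = user_key.sign(challenge)

# Service side: verify the signature against the registered public key.
try:
    user_public_key.verify(signature, challenge)
    print("authenticated")
except InvalidSignature:
    print("rejected")
```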
126

Federated DeepONet for Electricity Demand Forecasting: A Decentralized Privacy-preserving Approach

Zilin Xu (11819582) 02 May 2023 (has links)
Electric load forecasting is a critical tool for power system planning and the creation of sustainable energy systems. Precise and reliable load forecasting enables power system operators to make informed decisions regarding power generation and transmission, optimize energy efficiency, and reduce operational costs and extra power generation costs, which in turn reduces environment-related issues. However, achieving desirable forecasting performance remains challenging due to the irregular, nonstationary, nonlinear, and noisy nature of the observed data under unprecedented events. In recent years, deep learning and other artificial intelligence techniques have emerged as promising approaches for load forecasting. These techniques are able to capture complex patterns and relationships in the data and adapt to changing conditions, thereby enhancing forecasting accuracy. As such, the use of deep learning and other artificial intelligence techniques in load forecasting has become an increasingly popular research topic in the field of power systems.

Although deep learning techniques have advanced load forecasting, the field still requires more accurate and efficient models. One promising approach is federated learning, which allows for distributed data analysis without exchanging data among multiple devices or centers. This method is particularly relevant for load forecasting, where each power station's data is sensitive and must be protected. In this study, an approach utilizing federated DeepONet for seven different power stations is introduced: a Federated Deep Operator Network and a Langevin Dynamics-based Federated Deep Operator Network that uses Stochastic Gradient Langevin Dynamics as the optimizer, trained on daily data to produce one-day-ahead predictions. The evaluation metrics include the mean absolute percentage error and the percentage of coverage under the confidence interval. The findings demonstrate the potential of federated learning for secure and precise load forecasting, while also highlighting the challenges and opportunities of implementing this approach in real-world scenarios.
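As a rough illustration of how local training with a Langevin-dynamics optimizer can be combined with server-side averaging, here is a minimal sketch on a toy linear model (the station data, learning rate, and round counts are invented; the thesis uses DeepONet operators rather than this toy model):

```python
# Minimal sketch (assumptions, not the thesis code): clients run a few
# Stochastic Gradient Langevin Dynamics (SGLD) steps on local data, and the
# server averages the resulting parameters (FedAvg-style aggregation).
import numpy as np

rng = np.random.default_rng(0)

def local_sgld(theta, X, y, lr=0.01, steps=50):
    """SGLD on a toy linear model: gradient step plus Gaussian noise ~ N(0, lr)."""
    theta = theta.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ theta - y) / len(y)            # MSE gradient
        noise = rng.normal(0.0, np.sqrt(lr), size=theta.shape)
        theta -= 0.5 * lr * grad - noise                      # SGLD update
    return theta

# Seven hypothetical "stations", each with private local data that never leaves it.
true_w = np.array([1.0, -2.0, 0.5])
stations = []
for _ in range(7):
    X = rng.normal(size=(100, 3))
    stations.append((X, X @ true_w + rng.normal(0, 0.1, 100)))

global_theta = np.zeros(3)
for round_ in range(20):                                      # communication rounds
    local_models = [local_sgld(global_theta, X, y) for X, y in stations]
    global_theta = np.mean(local_models, axis=0)              # server-side averaging
print(global_theta)
```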
127

Enhancing Efficiency and Trustworthiness of Deep Learning Algorithms

Isha Garg (15341896) 24 April 2023 (has links)
This dissertation explores two major goals in Deep Learning algorithm design: efficiency and trustworthiness. We motivate these concerns in Chapter 1 and give the relevant background in Chapter 2. We then discuss six works that target these two goals.

The first of these discusses how to make the model compression methodology more efficient, so that it can be done in a single shot. This allows us to create models with reduced size and fewer layers, giving faster and more efficient inference, and is covered in Chapter 3. We then extend this to target efficiency in continual learning in Chapter 4, while mitigating the problem of catastrophic forgetting. The method discussed also allows us to circumvent the potential for data leakage by avoiding the need to store any data from past tasks. Next, we consider brain-inspired computing as an alternative to traditional neural networks to improve the compute efficiency of networks. The spiking neural networks discussed, however, have large inference latency due to the need to accumulate spikes over many timesteps. We tackle this in Chapter 5 by introducing a new scheme that distributes an image over time by breaking it down into a sum of its ranked sinusoidal bases. This results in networks that are faster and more efficient to deploy. Chapter 6 targets mitigating both the communication expense and the potential for data leakage in federated learning by distilling the gradients to be communicated into a small number of images that resemble noise. Communicating these images is more efficient, and it circumvents the potential for data leakage because they resemble noise. We then explore applications of studying the curvature of the loss with respect to input data points in the last two chapters. In Chapter 7, we utilize curvature to create performant coresets that reduce the size of datasets and make training more efficient. In Chapter 8, we use curvature as a metric for overfitting and use it to expose dataset integrity issues arising from memorization.
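As an example of what an input-space curvature score might look like in practice, the sketch below estimates the curvature of the loss with respect to an input point by finite differences of the input gradient along random directions and keeps the highest-scoring points as a coreset; it is an assumed illustration, not the dissertation's code:

```python
# Illustrative sketch: score training points by an estimate of the loss
# curvature with respect to the input, then keep the top-scoring points.
import numpy as np

rng = np.random.default_rng(1)

def loss_grad_wrt_input(w, x, y):
    """Gradient of logistic loss w.r.t. the input x for a linear model w."""
    p = 1.0 / (1.0 + np.exp(-(w @ x)))
    return (p - y) * w

def curvature_score(w, x, y, eps=1e-3, probes=4):
    """Finite-difference estimate of ||H_x v|| along random directions v."""
    score = 0.0
    for _ in range(probes):
        v = rng.normal(size=x.shape)
        v /= np.linalg.norm(v)
        g_plus = loss_grad_wrt_input(w, x + eps * v, y)
        g_minus = loss_grad_wrt_input(w, x - eps * v, y)
        score += np.linalg.norm((g_plus - g_minus) / (2 * eps))
    return score / probes

# Toy data and a (pretend) trained linear model.
X = rng.normal(size=(200, 10))
y = (X[:, 0] > 0).astype(float)
w = rng.normal(size=10)

scores = np.array([curvature_score(w, x, t) for x, t in zip(X, y)])
coreset_idx = np.argsort(scores)[-50:]      # keep the 50 highest-curvature points
print(coreset_idx[:10])
```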
128

Decentralizing Large-Scale Natural Language Processing with Federated Learning / Decentralisering av storskalig naturlig språkbearbetning med förenat lärande

Garcia Bernal, Daniel January 2020 (has links)
Natural Language Processing (NLP) is one of the most popular and visible forms of Artificial Intelligence in recent years. This is partly because it deals with a common characteristic of human beings: language. NLP applications enable new services in the industrial sector, offering new solutions and providing significant productivity gains. All of this has happened thanks to the rapid progress of Deep Learning models. Large-scale contextual representation models, such as Word2Vec, ELMo and BERT, have significantly advanced NLP in recent years. With these latest NLP models, it is possible to understand the semantics of text to a degree never seen before. However, they require processing large amounts of text data to achieve high-quality results. This data can be gathered from different sources, but one of the main collection points are devices such as smartphones, smart appliances and smart sensors. Unfortunately, joining and accessing all this data from multiple sources is extremely challenging due to privacy and regulatory reasons. New protocols and techniques have been developed to solve this limitation by training models in a massively distributed manner, taking advantage of the powerful characteristics of the devices that generate the data. In particular, this research aims to test the viability of training NLP models, specifically Word2Vec, with a massively distributed protocol like Federated Learning. The results show that Federated Word2Vec works as well as Word2Vec in most scenarios, even surpassing it in some semantic benchmark tasks. It is a novel area of research, where few studies have been conducted, with a large knowledge gap to fill in future research.
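A hedged sketch of the overall idea, federating skip-gram training by averaging locally updated embedding matrices, is shown below; the vocabulary, hyperparameters, and aggregation schedule are invented for illustration and do not reproduce the thesis setup:

```python
# Minimal sketch: clients hold private text, locally update shared skip-gram
# embedding matrices with negative sampling, and a server averages them (FedAvg).
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat", "dog", "ran", "fast"]
word2id = {w: i for i, w in enumerate(vocab)}
dim = 16

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def local_train(W_in, W_out, sentences, lr=0.05, window=2, negatives=3):
    """One local pass of skip-gram with negative sampling over the client's text."""
    W_in, W_out = W_in.copy(), W_out.copy()
    for sent in sentences:
        ids = [word2id[w] for w in sent]
        for i, center in enumerate(ids):
            for ctx in ids[max(0, i - window): i + window + 1]:
                if ctx == center:
                    continue
                samples = [(ctx, 1.0)] + [(int(rng.integers(len(vocab))), 0.0)
                                          for _ in range(negatives)]
                for target, label in samples:
                    v, u = W_in[center].copy(), W_out[target].copy()
                    grad = sigmoid(v @ u) - label
                    W_out[target] -= lr * grad * v
                    W_in[center] -= lr * grad * u
    return W_in, W_out

# Two hypothetical clients with private sentences that never leave the device.
clients = [
    [["the", "cat", "sat", "on", "the", "mat"]],
    [["the", "dog", "ran", "fast"]],
]

W_in = rng.normal(0, 0.1, (len(vocab), dim))
W_out = rng.normal(0, 0.1, (len(vocab), dim))
for round_ in range(10):                       # federated communication rounds
    results = [local_train(W_in, W_out, data) for data in clients]
    W_in = np.mean([r[0] for r in results], axis=0)   # server-side averaging
    W_out = np.mean([r[1] for r in results], axis=0)
print(W_in[word2id["cat"]][:4])
```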
129

Effects of Local Data Distortion in Federated Learning

Peteri Harr, Fredrik January 2022 (has links)
This study explored how clients with distorted data affected the Federated Learning process using the FedAvg and FedProx algorithms. Different amounts of the three distortions, Translation, Rotation, and Blur, were tested using three different Machine Learning models. The models were a Dense network, the well-known convolutional network LeNet-5, and a smaller version of the ResNet architecture. The results of the study showcase how the different distortions affect the three models. They therefore also show that the risk of local data distortion is an important factor to consider when picking a Machine Learning model for Federated Learning.
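The FedProx algorithm mentioned here differs from FedAvg mainly in the local objective, which adds a proximal penalty pulling each client's weights toward the current global model. A minimal sketch on a toy linear model, with an artificially "distorted" client standing in for blurred or rotated images, might look like this (illustrative assumptions, not the study's code):

```python
# FedProx-style local update: the local objective adds (mu/2)*||w - w_global||^2
# to keep client models close to the global model despite distorted local data.
import numpy as np

rng = np.random.default_rng(42)

def fedprox_local_update(w_global, X, y, mu=0.1, lr=0.01, epochs=20):
    """Local linear-regression training with the FedProx proximal penalty."""
    w = w_global.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)     # MSE gradient on local data
        grad += mu * (w - w_global)               # proximal term toward the global model
        w -= lr * grad
    return w

# Toy federation: one clean client and one client whose features are "distorted".
true_w = np.array([2.0, -1.0])
X_clean = rng.normal(size=(100, 2))
X_distorted = X_clean + rng.normal(0, 1.5, size=(100, 2))    # stand-in for blur/rotation
clients = [(X_clean, X_clean @ true_w), (X_distorted, X_clean @ true_w)]

w_global = np.zeros(2)
for round_ in range(30):
    locals_ = [fedprox_local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(locals_, axis=0)            # FedAvg-style aggregation
print(w_global)
```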
130

Cluster selection for Clustered Federated Learning using Min-wise Independent Permutations and Word Embeddings / Kluster selektion för Klustrad Federerad Inlärning med användning av “Min-wise” Oberoende Permutations och Ordinbäddningar

Raveen Bandara Harasgama, Pulasthi January 2022 (has links)
Federated learning is a widely established modern machine learning methodology where training is done directly on the client device with local client data, and the local training results are shared to compute a global model. Federated learning emerged as a result of data ownership and privacy concerns with traditional machine learning methodologies, where data is collected and trained on at a central location. However, in a distributed data environment, training suffers significantly when the client data is not identically distributed. Hence, clustered federated learning was proposed, in which similar clients are clustered and trained independently to form specialized cluster models, which are then used to compute a global model. In this approach, the cluster selection for clustered federated learning is a major factor that affects the effectiveness of the global model. This research presents two approaches for client clustering using local client data for clustered federated learning while preserving data privacy. The two proposed approaches use min-wise independent permutations to compute client signatures using text and word embeddings. These client signatures are then used as a representation of client data to cluster clients using agglomerative hierarchical clustering. Unlike previously proposed clustering methods, the two presented approaches do not use model updates, provide a better privacy-preserving mechanism, and have a lower communication overhead. With extensive experimentation, we show that the proposed approaches outperform the random clustering approach. Finally, we present a client clustering methodology that can be utilized in a practical clustered federated learning environment.
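A hedged sketch of the signature-and-clustering idea is shown below: each client hashes its local token set with a family of min-wise hash functions, shares only the short signature, and the server clusters clients by estimated Jaccard distance using agglomerative (hierarchical) clustering. The token sets, hash family, and cluster count are invented for illustration, and the thesis's word-embedding variant is omitted:

```python
# MinHash signatures + agglomerative clustering of clients (illustrative only).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(7)
PRIME = 2_147_483_647          # large prime for the universal hash family
NUM_HASHES = 64
A = rng.integers(1, PRIME, NUM_HASHES)
B = rng.integers(0, PRIME, NUM_HASHES)

def minhash_signature(tokens):
    """One MinHash value per hash function: the minimum over the client's token set."""
    token_ids = np.array([hash(t) % PRIME for t in set(tokens)])
    # shape (NUM_HASHES, num_tokens): h_i(x) = (a_i * x + b_i) mod PRIME
    hashed = (A[:, None] * token_ids[None, :] + B[:, None]) % PRIME
    return hashed.min(axis=1)

def estimated_jaccard(sig_a, sig_b):
    return np.mean(sig_a == sig_b)

# Hypothetical local vocabularies of four clients; the raw data stays local,
# only the short signatures would be shared with the server.
clients = [
    ["football", "goal", "league", "match"],
    ["football", "match", "stadium", "league"],
    ["protein", "cell", "genome", "enzyme"],
    ["genome", "cell", "dna", "protein"],
]
sigs = [minhash_signature(c) for c in clients]

# Build a distance matrix (1 - estimated Jaccard) and cluster hierarchically.
n = len(sigs)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = 1.0 - estimated_jaccard(sigs[i], sigs[j])
labels = fcluster(linkage(squareform(dist), method="average"), t=2, criterion="maxclust")
print(labels)   # expect the two sports clients and the two biology clients grouped together
```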
