101

Semantic interoperability framework for smart spaces

Kiljander, J. (Jussi) 19 January 2016 (has links)
Abstract At the heart of the smart space vision is the idea that devices interoperate with each other autonomously to assist people in their everyday activities. In order to make this vision a reality, it is important to achieve semantic-level interoperability between devices. The goal of this dissertation is to enable Semantic Web technology-based interoperability in smart spaces. There are many challenges that need to be solved before this goal can be achieved. In this dissertation, the focus has been on the following four challenges: The first challenge is that Semantic Web technologies were designed neither for sharing real-time data nor for transferring large data items such as video and audio files. This makes it challenging to apply them in smart spaces, where devices typically produce and consume this type of data. The second challenge is the verbose syntax and encoding formats of Semantic Web technologies, which make it difficult to utilise them in resource-constrained devices and networks. The third challenge is the heterogeneity of smart space communication technologies, which makes it difficult to achieve interoperability even at the connectivity level. The fourth challenge is to provide users with simple means to interact with and configure smart spaces where device interoperability is based on Semantic Web technologies. Even though autonomous operation of devices is a core idea in smart spaces, this is still important in order to achieve successful end-user adoption. The main result of this dissertation is a semantic interoperability framework, which consists of the following individual contributions: 1) a semantic-level interoperability architecture for smart spaces, 2) a knowledge sharing protocol for resource-constrained devices and networks, and 3) an approach to configuring Semantic Web-based smart spaces.
The architecture, protocol and smart space configuration approach are evaluated with several reference implementations of the framework components and proof-of-concept smart spaces that are also key contributions of this dissertation.
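The knowledge-sharing idea at the core of the framework can be illustrated with a minimal sketch of a smart-space store holding RDF-style triples and answering SPARQL-like pattern queries. The class, predicate and device names below are invented for illustration and are not taken from the dissertation.

```python
# Minimal sketch of a smart-space knowledge store holding RDF-style
# (subject, predicate, object) triples; names are illustrative only.
class TripleStore:
    def __init__(self):
        self.triples = set()

    def insert(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        # None acts as a wildcard, mimicking a SPARQL-like pattern match.
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

store = TripleStore()
store.insert("lamp1", "rdf:type", "Lamp")
store.insert("lamp1", "hasState", "on")
store.insert("sensor1", "rdf:type", "TemperatureSensor")
lamps = store.query(p="rdf:type", o="Lamp")
```

In a real smart space, devices would publish and subscribe to such triples over the knowledge sharing protocol rather than share an in-process object.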
102

Vertical handoff and mobility — system architecture and transition analysis

Ylianttila, M. (Mika) 16 April 2005 (has links)
Abstract The contemporary information age is equipped with rich and affordable telecommunication services. In the future, people will have even more flexibility as true wireless Internet and real-time multimedia are provided seamlessly over heterogeneous wireless networks. Optimally combining the capacity and services of current and emerging networks requires a holistic view of mobility, resource and service management. This thesis contributes to the research and development of these hybrid systems with three main contributions. Firstly, a system architecture for vertical handoff in location-aware heterogeneous wireless networks is proposed. The proposed architecture enables the mobile node to prepare for approaching vertical handoffs and wake up a hotspot interface. The needed communication procedures are discussed, and inter-related issues of mobility and geolocation information are considered in proportion to usability, advantages and limitations. Secondly, a framework for the analysis of vertical handoff algorithm sensitivity to various mobility parameters, including velocity, handoff delay and dwell time, is introduced. Handoff smoothing with a dwell timer is analyzed as one potential scheme for optimizing vertical handoff locally. It is compared to a power-based algorithm to find out its sensitivity to changes in effective data rates, velocity of the terminal and the amount of handoff delay. The analysis focuses on the transition region, with case studies on both moving-in and moving-out scenarios. An optimal value for the dwell timer is found through simulations, showing a performance gain over the power-based algorithm as a function of mean throughput. The analysis is also extended to a multiple-network scenario. Thirdly, experimental results on the behaviour of protocols used in wireless IP networks are presented.
Prototype systems demonstrate results of using Mobile IP with a fuzzy logic algorithm for vertical handoff in a heterogeneous network environment and the role of IPv6 when using a voice application in a wireless LAN environment. Latest contributions include developing plug-and-play middleware functionalities for Symbian mobile devices, extending the use of the earlier results to state-of-the-art mobile devices.
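As a rough illustration of the dwell-timer smoothing analyzed above, the following sketch triggers a handoff only after the candidate network's signal has stayed above a threshold for a full dwell period, suppressing ping-pong handoffs on brief spikes. The threshold, dwell length and signal values are invented for illustration, not taken from the thesis.

```python
# Dwell-timer vertical handoff sketch: hand off to the hotspot network
# only after its signal has stayed above the threshold for `dwell`
# consecutive samples; a single spike resets nothing but never triggers.
def handoff_decision(samples, threshold, dwell):
    """samples: signal strengths (dBm) at fixed intervals; returns the
    sample index at which handoff triggers, or None if it never does."""
    run = 0
    for i, s in enumerate(samples):
        run = run + 1 if s >= threshold else 0
        if run >= dwell:
            return i
    return None

# Signal spikes briefly at index 1 (no handoff), then stays strong,
# so the dwell timer expires at index 5.
sig = [-80, -60, -85, -60, -58, -55, -57]
idx = handoff_decision(sig, threshold=-65, dwell=3)
```

A power-based algorithm would instead hand off immediately at index 1, then bounce back at index 2 — exactly the instability the dwell timer trades a little delay to avoid.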
103

Efficient and Robust Deep Learning through Approximate Computing

Sanchari Sen (9178400) 28 July 2020 (has links)
<p>Deep Neural Networks (DNNs) have greatly advanced the state-of-the-art in a wide range of machine learning tasks involving image, video, speech and text analytics, and are deployed in numerous widely-used products and services. Improvements in the capabilities of hardware platforms such as Graphics Processing Units (GPUs) and specialized accelerators have been instrumental in enabling these advances as they have allowed more complex and accurate networks to be trained and deployed. However, the enormous computational and memory demands of DNNs continue to increase with growing data size and network complexity, posing a continuing challenge to computing system designers. For instance, state-of-the-art image recognition DNNs require hundreds of millions of parameters and hundreds of billions of multiply-accumulate operations while state-of-the-art language models require hundreds of billions of parameters and several trillion operations to process a single input instance. Another major obstacle in the adoption of DNNs, despite their impressive accuracies on a range of datasets, has been their lack of robustness. Specifically, recent efforts have demonstrated that small, carefully-introduced input perturbations can force a DNN to behave in unexpected and erroneous ways, which can have severe consequences in several safety-critical DNN applications like healthcare and autonomous vehicles. In this dissertation, we explore approximate computing as an avenue to improve the speed and energy efficiency of DNNs, as well as their robustness to input perturbations.</p> <p> </p> <p>Approximate computing involves executing selected computations of an application in an approximate manner, while generating favorable trade-offs between computational efficiency and output quality. 
The intrinsic error resilience of machine learning applications makes them excellent candidates for approximate computing, allowing us to achieve execution time and energy reductions with minimal effect on the quality of outputs. This dissertation performs a comprehensive analysis of different approximate computing techniques for improving the execution efficiency of DNNs. Complementary to generic approximation techniques like quantization, it identifies approximation opportunities based on the specific characteristics of three popular classes of networks - Feed-forward Neural Networks (FFNNs), Recurrent Neural Networks (RNNs) and Spiking Neural Networks (SNNs), which vary considerably in their network structure and computational patterns.</p> <p> </p> <p>First, in the context of feed-forward neural networks, we identify sparsity, or the presence of zero values in the data structures (activations, weights, gradients and errors), to be a major source of redundancy and therefore, an easy target for approximations. We develop lightweight micro-architectural and instruction set extensions to a general-purpose processor core that enable it to dynamically detect zero values when they are loaded and skip future instructions that are rendered redundant by them. Next, we explore LSTMs (the most widely used class of RNNs), which map sequences from an input space to an output space. We propose hardware-agnostic approximations that dynamically skip redundant symbols in the input sequence and discard redundant elements in the state vector to achieve execution time benefits. Following that, we consider SNNs, which are an emerging class of neural networks that represent and process information in the form of sequences of binary spikes. 
Observing that spike-triggered updates along synaptic connections are the dominant operation in SNNs, we propose hardware and software techniques to identify connections that minimally impact the output quality and deactivate them dynamically, skipping any associated updates.</p> <p> </p> <p>The dissertation also delves into the efficacy of combining multiple approximate computing techniques to improve the execution efficiency of DNNs. In particular, we focus on the combination of quantization, which reduces the precision of DNN data-structures, and pruning, which introduces sparsity in them. We observe that the ability of pruning to reduce the memory demands of quantized DNNs decreases at lower precisions, as the overhead of storing non-zero locations alongside the values starts to dominate in different sparse encoding schemes. We analyze this overhead and the overall compression of three different sparse formats across a range of sparsity and precision values and propose a hybrid compression scheme that identifies the optimal sparse format for a pruned low-precision DNN.</p> <p> </p> <p>Along with improved execution efficiency of DNNs, the dissertation explores an additional advantage of approximate computing in the form of improved robustness. We propose ensembles of quantized DNN models with different numerical precisions as a new approach to increase robustness against adversarial attacks. It is based on the observation that quantized neural networks often demonstrate much higher robustness to adversarial attacks than full precision networks, but at the cost of a substantial loss in accuracy on the original (unperturbed) inputs. 
We overcome this limitation to achieve the best of both worlds, i.e., the higher unperturbed accuracies of the full precision models combined with the higher robustness of the low precision models, by composing them in an ensemble.</p> <p> </p> <p><br></p><p>In summary, this dissertation establishes approximate computing as a promising direction to improve the performance, energy efficiency and robustness of neural networks.</p>
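The ensemble-of-precisions idea can be sketched as follows: the same linear scorer is evaluated at full precision and at two quantized bit widths, and the predictions are averaged. The weights, inputs and uniform quantization scheme below are toy choices for illustration, not the dissertation's models.

```python
# Sketch of an ensemble over numerical precisions: one toy linear scorer
# at full precision plus 4-bit and 2-bit quantized copies, predictions
# averaged. Values are illustrative only.
def quantize(ws, bits):
    # Uniform symmetric quantization to the given bit width.
    scale = max(abs(w) for w in ws) / (2 ** (bits - 1) - 1)
    return [round(w / scale) * scale for w in ws]

def score(ws, xs):
    return sum(w * x for w, x in zip(ws, xs))

weights = [0.6, -0.2, 0.4]
x = [1.0, 2.0, -1.0]
members = [weights, quantize(weights, 4), quantize(weights, 2)]
ensemble_score = sum(score(w, x) for w in members) / len(members)
```

Note how the 2-bit member crushes the small weight to zero entirely; averaging with the full-precision member is what recovers unperturbed accuracy while the coarse members contribute robustness.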
104

A System Architecture for Phased Development of Remote sUAS Operation

Ashley, Eric 01 March 2020 (has links)
Current airspace regulations require the remote pilot-in-command of an unmanned aircraft system (UAS) to maintain visual line of sight with the vehicle for situational awareness. The future of UAS will not have these constraints as technology improves and regulations are changed. An operational model for the future of UAS is proposed in which a remote operator monitors remote vehicles with the capability to intervene if needed. One challenge facing this future operational concept is the ability of a flight data system to effectively communicate flight status to the remote operator. A system architecture has been developed to facilitate the implementation of such a flight data system. Utilizing the system architecture framework, a Phase I prototype was designed and built for two vehicles in the Autonomous Flight Laboratory (AFL) at Cal Poly. The project will continue to build on the success of Phase I, culminating in a fully functional command and control system for remote UAS operational testing.
105

Reducing Size and Complexity of the Security-Critical Code Base of File Systems

Weinhold, Carsten 14 January 2014 (has links)
Desktop and mobile computing devices increasingly store critical data, both personal and professional in nature. Yet, the enormous code bases of their monolithic operating systems (hundreds of thousands to millions of lines of code) are likely to contain exploitable weaknesses that jeopardize the security of this data in the file system. Using a highly componentized system architecture based on a microkernel (or a very small hypervisor) can significantly improve security. The individual operating system components have smaller code bases running in isolated address spaces so as to provide better fault containment. Their isolation also allows for smaller trusted computing bases (TCBs) of applications that comprise only a subset of all components. In my thesis, I built VPFS, a virtual private file system that is designed for such a componentized system architecture. It aims at reducing the amount of code and complexity that a file system implementation adds to the TCB of an application. The basic idea behind VPFS is similar to that of a VPN, which securely reuses an untrusted network: the core component of VPFS implements all functionality and cryptographic algorithms that an application needs to rely upon for confidentiality and integrity of file system contents. This security-critical core reuses a much more complex and therefore untrusted file system stack for non-critical functionality and access to the storage device. Additional trusted components ensure recoverability.
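The trusted-core idea can be sketched with integrity protection alone: a small trusted component tags each block with an HMAC before handing it to the untrusted storage stack, and verifies the tag on read. Real VPFS also encrypts for confidentiality; the key, dictionary-as-store, and function names here are illustrative assumptions, not the thesis's implementation.

```python
# Sketch of a trusted core protecting integrity of blocks kept in an
# untrusted store. Only the trusted core holds KEY; the store can
# corrupt data but cannot forge valid tags.
import hmac
import hashlib

KEY = b"trusted-core-secret"   # held only by the trusted core
untrusted_store = {}           # stands in for the untrusted FS stack

def write_block(block_id, data):
    tag = hmac.new(KEY, data, hashlib.sha256).digest()
    untrusted_store[block_id] = (data, tag)

def read_block(block_id):
    data, tag = untrusted_store[block_id]
    expected = hmac.new(KEY, data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity violation: block was tampered with")
    return data

write_block(0, b"secret notes")
ok = read_block(0)
# An attacker writing to the store directly cannot produce a valid tag:
untrusted_store[1] = (b"forged data", b"not-a-valid-tag")
```

The analogy to a VPN holds: the untrusted stack moves and stores bytes, while all guarantees come from the small core on either side of it.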
106

Recommendation systems for recruitment within an educational context

Lagerqvist, Gustaf, Stålhandske, Anton January 2021 (has links)
Alongside the evolution of the recruitment process, different types of recommendation systems have been developed. The purpose of this study is to investigate recommendation systems within educational contexts, successful implementations of recommendation system architecture patterns, and alternatives to previous experience when evaluating candidates. The study is conducted through two separate methods: a literature review with a qualitative approach, and a design science research methodology focused on design and development, demonstration and evaluation. The literature review shows that, for recommendation systems, a layered architecture built within a microservice ecosystem is successfully utilized and has multiple benefits, such as improved scalability, maintainability and security. Through the design science research methodology, this study shows a suggested approach to implementing a layered architecture in combination with KNN and hybrid filtering. To avoid overlooking suitable candidates as a result of requiring previous experience, this study shows an alternative approach to recruitment, within an educational context, through the use of soft skills. Within the study, this approach is successfully used to evaluate and compare students, but the same approach could possibly be applied to evaluate and compare companies. Moving forward, this study could be expanded by looking into possible biases arising from the use of AI and from choices made during this study, as well as the weighting of student attributes.
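The KNN step described above can be sketched as ranking candidate students by cosine similarity of soft-skill vectors against a role profile. The skill names, score scale and student data below are invented for illustration and are not taken from the study.

```python
# Sketch of KNN-style candidate ranking on soft-skill vectors.
# Vector layout (illustrative): [teamwork, communication, problem-solving]
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

students = {"alice": [5, 3, 4], "bob": [1, 5, 2], "carol": [4, 4, 5]}
role_profile = [5, 2, 5]  # role valuing teamwork and problem-solving

def top_k(profile, candidates, k):
    ranked = sorted(candidates,
                    key=lambda name: cosine(candidates[name], profile),
                    reverse=True)
    return ranked[:k]

best = top_k(role_profile, students, k=2)
```

Because cosine similarity compares skill *profiles* rather than absolute totals, a student strong in the right mix ranks above one with more raw experience — which is the point of replacing previous-experience requirements with soft skills.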
107

Principles of information visualization in public transport's information systems

Farkaš, Pavel January 2011 (has links)
The diploma thesis Principles of information visualization in public transit information systems considers the ways information design affects one of the most common activities of people in an urban environment: the use of the public transit system. In this work, the city space becomes a framework for several disciplines connected to the concept of wayfinding. The author sets the topic in a broad historical and architectural context and draws connections with cognitive science. The case study describes principles of information design using the example of the Prague subway and their application to an experimental station with a new information system installed.
108

Software-defined Buffer Management and Robust Congestion Control for Modern Datacenter Networks

Danushka N Menikkumbura (12208121) 20 April 2022 (has links)
<p>  Modern datacenter network applications continue to demand ultra-low latencies and very high throughputs. At the same time, network infrastructure keeps achieving higher speeds and larger bandwidths. We still need better network management solutions to keep these demand and supply fronts moving hand in hand. Key metrics that define network performance, such as flow completion time (the lower the better), throughput (the higher the better), and end-to-end latency (the lower the better), are mainly governed by how effectively network applications get their fair share of network resources. We observe that buffer utilization on network switches gives a very accurate indication of network performance. Therefore, network buffer management is important in modern datacenter networks, and other network management solutions can be efficiently built around buffer utilization. This dissertation presents three solutions based on buffer use on network switches.</p> <p>  This dissertation consists of three main sections. The first section is on a specification language for buffer management in modern programmable switches. The second section is on a congestion control solution for Remote Direct Memory Access (RDMA) networks. The third section is on a solution to head-of-line blocking in modern datacenter networks.</p>
109

Monolith to microservices using deep learning-based community detection

Bothin, Anton January 2023 (has links)
The microservice architecture is widely considered to be best practice. Yet, many companies still work in monolith systems. This can largely be attributed to the difficult process of updating a system's architecture. The first step in this process is to identify microservices within a monolith. Here, artificial intelligence could be a useful tool for automating the process of microservice identification. The aim of this thesis was to propose a deep learning-based model for the task of microservice identification and to compare this model to previously proposed approaches, with the goal of helping companies in their endeavour to move towards a microservice-based architecture. In particular, the thesis has evaluated whether the more complex nature of newer deep learning-based techniques can be utilized to identify better microservices. The model proposed by this thesis is based on overlapping community detection, where each identified community is considered a microservice candidate. The model was evaluated by looking at cohesion, modularity, and size. Results indicate that the proposed deep learning-based model performs similarly to other state-of-the-art approaches for the task of microservice identification. The results suggest that deep learning indeed helps in finding nontrivial relations within communities, which overall increases the quality of identified microservices. From this it can be concluded that deep learning is a promising technique for the task of microservice identification, and that further research is warranted.
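One of the evaluation criteria mentioned above, modularity, can be sketched as Newman modularity over a candidate grouping of an undirected class-dependency graph: each group's score rewards internal edges and penalizes the group's share of total degree. The graph and grouping below are toy examples, not drawn from the thesis.

```python
# Newman modularity sketch for judging microservice candidates:
# Q = sum over communities of (e_c / m) - (d_c / 2m)^2, where e_c is
# the number of edges inside the community, d_c its total degree, and
# m the number of edges in the whole graph.
def modularity(edges, communities):
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    q = 0.0
    for comm in communities:
        inside = sum(1 for u, v in edges if u in comm and v in comm)
        total_deg = sum(degree[n] for n in comm)
        q += inside / m - (total_deg / (2 * m)) ** 2
    return q

# Toy class-dependency graph: a tight A-B-C triangle, a D-E pair, and
# one cross edge C-D; splitting at the cross edge scores well.
edges = [("A", "B"), ("B", "C"), ("A", "C"), ("D", "E"), ("C", "D")]
good_split = [{"A", "B", "C"}, {"D", "E"}]
q = modularity(edges, good_split)
```

Lumping everything into one community scores zero here, so the metric correctly prefers the two-service split; the thesis's overlapping communities would additionally allow a class to appear in more than one candidate.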
110

Design and implementation of a signaling system for a novel light-based bioprinter

Abdalla, Osman January 2023 (has links)
A 3D bioprinter employing light-based technology has been designed and constructed in an EU-funded research initiative known as BRIGHTER (Bioprinting by Light-Sheet Lithography). This initiative is a collaborative effort between institutions and companies and aims to develop a technique for efficient and accurate production of engineered tissue. Presently, the bioprinter's function is limited to 2D printing, lacking 3D printing capability. The problem addressed is the integration of two separate electronic systems within the bioprinter to control the laser beam's trajectory for 3D printing. The goal of the project is to create functional software and simulation tools to control the hardware modules in a precise and synchronized manner, thereby enabling 3D printing. The outcome is a software prototype that successfully facilitates intercommunication between the two electronic subsystems within the bioprinter, enabling further progress towards 3D printing. Nevertheless, the prototype requires thorough testing to determine its optimal operational efficiency in terms of timing the movements of the various hardware modules.
