71

Testing AI-democratization : What are the lower limits of text generation using artificial neural networks?

Kinde, Lorentz January 2019 (has links)
Artificial intelligence is an area of technology which is rapidly growing. Considering its increasing influence in society, how available is it? This study attempts to create a web content summarizer using generative machine learning. Several concepts and technologies are explored, most notably sequence-to-sequence models, transfer learning and recurrent neural networks. The study concludes that creating a purely generative summarizer is unfeasible on a hobbyist level due to hardware restrictions, showing that slightly more advanced machine learning techniques are still unavailable to non-specialized individuals. The reasons why are investigated in depth in an extensive theoretical section which first explains how neural networks work, then natural language processing at large, and finally how to create a generative recurrent artificial neural network. Ethical and societal concerns regarding machine learning text generation are also discussed, along with alternative approaches to solving the task at hand.
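The gap this abstract points to, between toy generative models and full sequence-to-sequence summarizers, can be felt with a minimal sketch. The character-level frequency model below is a hypothetical baseline (the corpus and function names are invented), far simpler than the recurrent networks the study explores, but it runs on any hardware:

```python
from collections import Counter, defaultdict
import random

def train_char_model(text, order=2):
    """Count which character follows each `order`-character context."""
    model = defaultdict(Counter)
    for i in range(len(text) - order):
        model[text[i:i + order]][text[i + order]] += 1
    return model

def generate(model, seed, length=40, rng=None):
    """Sample characters one at a time from the learned frequencies."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    out = seed
    order = len(seed)
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:
            break  # unseen context: stop generating
        chars, weights = zip(*choices.items())
        out += rng.choices(chars, weights=weights)[0]
    return out

corpus = "neural networks learn patterns from data. neural networks need data."
model = train_char_model(corpus)
print(generate(model, "ne"))
```

A model like this needs no GPU at all; the thesis's point is that the jump from here to a usable generative summarizer crosses a hardware threshold that hobbyists cannot easily afford.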
72

Verifying Deadlock-Freedom for Advanced Interconnect Architectures

Meng, Wang January 2020 (has links)
Modern advanced interconnects, such as those orchestrated by the ARM AMBA AXI protocol, can suffer fatal deadlocks in the connections between masters and slaves if transactions are not properly arranged. There exists some research on deadlock problems in on-chip bus systems, as well as methods to avoid the deadlocks that can occur. This project aims to verify which situations can cause deadlocks, and to verify the countermeasures against them. In this thesis, the ARM AMBA AXI protocol and the countermeasures are modelled in NuSMV. Based on these models, we verified that non-trivial cycles of transactions can cause deadlocks, and that certain bus techniques can mitigate deadlock problems efficiently. The results from model checking several instances of the protocol and the corresponding countermeasures show that these techniques can indeed avoid deadlocks.
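The thesis verifies deadlock-freedom with NuSMV model checking; the core intuition, though, is that a deadlock shows up as a cycle in a wait-for graph between masters and slaves. A hedged sketch of that graph view (the node names are invented and this is not the thesis's NuSMV model, which checks temporal properties exhaustively):

```python
def find_cycle(edges):
    """DFS cycle detection in a wait-for graph (node -> nodes it waits on)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in edges}

    def dfs(n, path):
        color[n] = GRAY
        path.append(n)
        for m in edges.get(n, []):
            if color.get(m, WHITE) == GRAY:
                # back edge: the slice of the path from m onward is a cycle
                return path[path.index(m):] + [m]
            if color.get(m, WHITE) == WHITE:
                cyc = dfs(m, path)
                if cyc:
                    return cyc
        color[n] = BLACK
        path.pop()
        return None

    for n in list(edges):
        if color[n] == WHITE:
            cyc = dfs(n, [])
            if cyc:
                return cyc
    return None

# Hypothetical wait-for relation: master M1 waits on slave S1, and so on.
deadlocked = {"M1": ["S1"], "S1": ["M2"], "M2": ["S2"], "S2": ["M1"]}
print(find_cycle(deadlocked))  # a non-trivial transaction cycle exists
```

A graph check like this only inspects one snapshot; model checking in NuSMV explores every reachable protocol state, which is why it can certify the countermeasures rather than just detect one bad configuration.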
73

Virtual Machine Placement in Cloud Environments

Li, Wubin January 2012 (has links)
With the emergence of cloud computing, computing resources (i.e., networks, servers, storage, applications, and services) are provisioned as metered on-demand services over networks, and can be rapidly allocated and released with minimal management effort. In the cloud computing paradigm, the virtual machine is one of the most commonly used resource carriers in which business services are encapsulated. Virtual machine placement optimization, i.e., finding optimal placement schemes for virtual machines and reconfiguring them as the environment changes, becomes a challenging issue. The primary contribution of this licentiate thesis is the development and evaluation of our combinatorial optimization approaches to virtual machine placement in cloud environments. We present modeling for dynamic cloud scheduling via migration of virtual machines in multi-cloud environments, and virtual machine placement for predictable and time-constrained peak loads in single-cloud environments. The studied problems are encoded in a mathematical modeling language and solved using a linear programming solver. In addition to scientific publications, this work also contributes in the form of software tools (in the EU-funded project OPTIMIS) that demonstrate the feasibility and characteristics of the approaches presented.
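The thesis encodes placement as mathematical programs handed to an LP solver; purely as an illustration of the problem's shape, here is a much simpler first-fit-decreasing heuristic for a single resource dimension (the VM and host names and capacities are invented, and this greedy rule is not the thesis's method):

```python
def first_fit_decreasing(vms, hosts):
    """Place VMs (name -> demand) onto hosts (name -> capacity), largest first."""
    free = dict(hosts)          # remaining capacity per host
    placement = {}
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        for host, cap in free.items():
            if cap >= demand:   # first host with room wins
                placement[vm] = host
                free[host] = cap - demand
                break
        else:
            placement[vm] = None  # no host can fit this VM
    return placement

vms = {"web": 4, "db": 8, "cache": 2}
hosts = {"h1": 8, "h2": 8}
print(first_fit_decreasing(vms, hosts))
```

An exact integer-programming formulation, as used in the thesis, can additionally express migration costs and time-constrained peaks, which a one-shot greedy pass cannot.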
74

Capacity Scaling for Elastic Compute Clouds

Ali-Eldin, Ahmed January 2013 (has links)
Cloud computing is a computing model that allows better management, higher utilization and reduced operating costs for datacenters while providing on-demand resource provisioning for different customers. Data centers are often enormous in size and complexity. In order to fully realize the cloud computing model, efficient cloud management software systems that can deal with the datacenter size and complexity need to be designed and built.

This thesis studies automated cloud elasticity management, one of the main and crucial datacenter management capabilities. Elasticity can be defined as the ability of cloud infrastructures to rapidly change the amount of resources allocated to an application in the cloud according to its demand. This work introduces algorithms, techniques and tools that a cloud provider can use to automate dynamic resource provisioning, allowing the provider to better manage the datacenter resources. We design two automated elasticity algorithms for cloud infrastructures that predict the future load for an application running on the cloud. It is assumed that a request is either serviced or dropped after one time unit, that all requests are homogeneous, and that it takes one time unit to add or remove resources. We discuss the different design approaches for elasticity controllers and evaluate our algorithms using real workload traces. We compare the performance of our algorithms with a state-of-the-art controller. We extend the design of the better-performing of our two controllers and drop the assumptions made during the first design. The controller is evaluated with a set of different real workloads.

All controllers are designed using certain assumptions about the underlying system model and operating conditions, which limits a controller's performance if the model or operating conditions change. With this as a starting point, we design a workload analysis and classification tool that assigns a workload to its most suitable elasticity controller out of a set of implemented controllers. The tool has two main components, an analyzer and a classifier. The analyzer analyzes a workload and feeds the analysis results to the classifier. The classifier assigns a workload to the most suitable elasticity controller based on the workload characteristics and a set of predefined business-level objectives. The tool is evaluated with a set of collected real workloads and a set of generated synthetic workloads. Our evaluation results show that the tool can help a cloud provider improve the QoS provided to customers. (According to Libris, the author's name is Ahmed Aleyeldin (Ali-Eldin) Hassan.)
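As an illustration of the controller idea only (not the thesis's algorithms, which are evaluated on real workload traces), a minimal proactive elasticity rule can be sketched as a moving-average load predictor plus a capacity calculation with headroom; every number below is made up:

```python
import math

def predict_next(history, window=3):
    """Moving-average forecast over the last `window` load observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def required_servers(predicted_load, per_server_capacity, headroom=1.2):
    """Servers needed to carry the forecast load, with 20% headroom by default."""
    return math.ceil(predicted_load * headroom / per_server_capacity)

history = [90, 110, 100]           # requests per time unit (invented trace)
forecast = predict_next(history)    # -> 100.0
print(required_servers(forecast, per_server_capacity=30))  # -> 4
```

A classifier of the kind the thesis describes would sit in front of several such controllers and pick the one whose assumptions best match the measured workload characteristics.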
75

Visual Care : Utveckling av en webbapplikation för att visualisera vårdprocesser / Visual Care : Development of a web application for visualization of care processes

Bergdahl, Otto, Arvidsson, Peter, Bennich, Viktor, Celik, Hakan, Granli, Petter, Grimsdal, Gunnar, Nilsson, Johan January 2017 (has links)
This report describes the production of the web application Visual Care, a tool for visualising statistics provided by the customer Region Östergötland. The goal of the application is to help employees at Region Östergötland plan the treatment of cancer patients. The purpose of this report is to analyze the project group's development methods and processes for producing Visual Care. The product will not be used by the employees of Region Östergötland, but will instead serve as a prototype and inspiration for future projects by the customer.
76

A Qualitative Comparison Study Between Common GPGPU Frameworks

Söderström, Adam January 2018 (has links)
The development of graphics processing units has improved significantly in performance during the last decade, while the devices have at the same time become cheaper. This has enabled a new type of usage in which the massive parallelism available in modern GPUs is used for more general-purpose computing, also known as GPGPU. Frameworks have been developed for exactly this purpose, and some of the most popular are CUDA, OpenCL and DirectX Compute Shaders, also known as DirectCompute. The choice of framework may depend on factors such as features, portability and framework complexity. This paper aims to evaluate these concepts, while also comparing the speedup of a parallel implementation of the N-body problem with Barnes-Hut optimization against a sequential implementation.
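The N-body comparison in this abstract can be grounded with the sequential baseline: the naive algorithm sums pairwise forces in O(n²) time, which is exactly the cost that Barnes-Hut (and GPU parallelism) attacks. A minimal 2-D sketch with made-up units (G = 1) and a softening term to avoid division by zero; this is an illustrative baseline, not the paper's implementation:

```python
def nbody_step(positions, velocities, masses, dt=0.01, g=1.0, eps=1e-3):
    """One naive O(n^2) integration step over 2-D bodies (mutates in place)."""
    n = len(positions)
    forces = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r2 = dx * dx + dy * dy + eps          # softened squared distance
            f = g * masses[i] * masses[j] / r2    # force magnitude
            r = r2 ** 0.5
            forces[i][0] += f * dx / r            # project onto x and y
            forces[i][1] += f * dy / r
    for i in range(n):
        velocities[i][0] += forces[i][0] / masses[i] * dt
        velocities[i][1] += forces[i][1] / masses[i] * dt
        positions[i][0] += velocities[i][0] * dt
        positions[i][1] += velocities[i][1] * dt
    return positions, velocities

pos = [[0.0, 0.0], [1.0, 0.0]]
vel = [[0.0, 0.0], [0.0, 0.0]]
nbody_step(pos, vel, masses=[1.0, 1.0])
print(pos)  # the two bodies drift toward each other
```

Barnes-Hut replaces the inner loop over all n bodies with a traversal of a quadtree that approximates distant groups by their center of mass, dropping the cost to O(n log n).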
77

Considering WebAssembly Containers for Edge Computing on Hardware-Constrained IoT Devices

Napieralla, Jonah January 2020 (has links)
No description available.
78

Comparative evaluation of virtualization technologies in the cloud

Johansson, Marcus, Olsson, Lukas January 2017 (has links)
The cloud has over the years become a staple of the IT industry, not only for storage purposes, but for services, platforms and infrastructures. A key component of the cloud is virtualization and the fluidity it makes possible, allowing resources to be utilized more efficiently and services to be relocated more easily when needed. Virtual machine technology, consisting of a hypervisor managing several guest systems, has been the traditional method for achieving this virtualization, but container technology, a lightweight virtualization method running directly on the host without a classic hypervisor, has been making headway in recent years. This report investigates the differences between VMs (virtual machines) and containers, comparing the two in relevant areas. The software chosen for this comparison is KVM as VM hypervisor and Docker as container platform, both run on Linux as the underlying host system. The work conducted for this report compares efficiency in common use areas through experimental evidence, and also evaluates differences in design through a study of relevant literature. The results are then discussed and weighed to provide a conclusion. The results of this work show that Docker has the capability to potentially take over the role as the main virtualization technology in the coming years, provided that some of its current shortcomings are addressed and improved upon.
79

Rich window discretization techniques in distributed stream processing

Traub, Jonas January 2015 (has links)
No description available.
80

Performance Characterization of In-Memory Data Analytics on a Scale-up Server

Awan, Ahsan Javed January 2016 (has links)
The sheer increase in volume of data over the last decade has triggered research into cluster computing frameworks that enable web enterprises to extract big insights from big data. While Apache Spark defines the state of the art in big data analytics platforms for (i) exploiting data-flow and in-memory computing and (ii) exhibiting superior scale-out performance on commodity machines, little effort has been devoted to understanding the performance of in-memory data analytics with Spark on modern scale-up servers. This thesis characterizes the performance of in-memory data analytics with Spark on scale-up servers. Through empirical evaluation of representative benchmark workloads on a dual-socket server, we have found that in-memory data analytics with Spark exhibit poor multi-core scalability beyond 12 cores due to thread-level load imbalance and work-time inflation. We have also found that workloads are bound by the latency of frequent data accesses to DRAM. By enlarging input data size, application performance degrades significantly due to a substantial increase in wait time during I/O operations and garbage collection, despite a 10% better instruction retirement rate (due to lower L1 cache misses and higher core utilization). For data accesses we have found that simultaneous multi-threading is effective in hiding the data latencies. We have also observed that (i) data locality on NUMA nodes can improve performance by 10% on average, and (ii) disabling next-line L1-D prefetchers can reduce the execution time by up to 14%. Regarding GC impact, we match memory behaviour with the garbage collector to improve application performance by 1.6x to 3x, and recommend using multiple small executors, which can provide up to 36% speedup over a single large executor.
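The scalability ceiling the thesis attributes to thread-level load imbalance can be illustrated with a toy model (not Spark itself): under a static schedule, parallel makespan is set by the most loaded core, so a single straggler task caps speedup regardless of core count. The task times below are invented:

```python
def speedup(task_times, cores):
    """Speedup of a static round-robin schedule: total work over the makespan."""
    per_core = [0.0] * cores
    for k, t in enumerate(task_times):
        per_core[k % cores] += t  # assign task k to core k mod cores
    return sum(task_times) / max(per_core)

print(speedup([1.0] * 12, 12))           # perfectly balanced -> 12.0
print(speedup([10.0] + [1.0] * 11, 12))  # one straggler caps the speedup
```

In this toy model the imbalanced run achieves a speedup of only 2.1 on 12 cores, which mirrors the flavor of the thesis's finding that Spark scales poorly beyond 12 cores when task times are skewed.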
