141

Capacity Scaling for Elastic Compute Clouds

Ali-Eldin, Ahmed January 2013 (has links)
Cloud computing is a computing model that allows better management, higher utilization and reduced operating costs for datacenters while providing on-demand resource provisioning for different customers. Datacenters are often enormous in size and complexity. In order to fully realize the cloud computing model, efficient cloud management software systems that can deal with the datacenter size and complexity need to be designed and built.

This thesis studies automated cloud elasticity management, one of the main and crucial datacenter management capabilities. Elasticity can be defined as the ability of cloud infrastructures to rapidly change the amount of resources allocated to an application in the cloud according to its demand. This work introduces algorithms, techniques and tools that a cloud provider can use to automate dynamic resource provisioning, allowing the provider to better manage the datacenter resources. We design two automated elasticity algorithms for cloud infrastructures that predict the future load for an application running on the cloud. It is assumed that a request is either serviced or dropped after one time unit, that all requests are homogeneous and that it takes one time unit to add or remove resources. We discuss the different design approaches for elasticity controllers and evaluate our algorithms using real workload traces. We compare the performance of our algorithms with a state-of-the-art controller. We extend the design of the better performing of our two controllers and drop the assumptions made during the first design. The controller is evaluated with a set of different real workloads.

All controllers are designed using certain assumptions on the underlying system model and operating conditions. This limits a controller's performance if the model or operating conditions change. With this as a starting point, we design a workload analysis and classification tool that assigns a workload to its most suitable elasticity controller out of a set of implemented controllers. The tool has two main components, an analyzer and a classifier. The analyzer analyzes a workload and feeds the analysis results to the classifier. The classifier assigns a workload to the most suitable elasticity controller based on the workload characteristics and a set of predefined business-level objectives. The tool is evaluated with a set of collected real workloads and a set of generated synthetic workloads. Our evaluation results show that the tool can help a cloud provider to improve the QoS provided to the customers. / According to Libris, the author's name is: Ahmed Aleyeldin (Ali-Eldin) Hassan.
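To make the control-loop shape concrete, here is a minimal Python sketch of a predictive elasticity loop under the abstract's stated assumptions (homogeneous requests, a one-time-unit provisioning delay). All names, the per-VM capacity and the moving-average predictor are illustrative assumptions; the thesis's actual controllers are more sophisticated.

```python
# Minimal predictive elasticity loop, assuming one homogeneous
# request class and a one-time-unit provisioning delay. All names
# are illustrative; this is not the thesis's actual algorithm.
from collections import deque

CAPACITY_PER_VM = 100  # requests one VM serves per time unit (assumed)

class PredictiveController:
    def __init__(self, history_len=10):
        self.history = deque(maxlen=history_len)

    def predict_next_load(self):
        # A moving average stands in for the thesis's richer
        # prediction models; only the loop structure matters here.
        return sum(self.history) / len(self.history) if self.history else 0.0

    def step(self, observed_load, current_vms):
        """Return how many VMs to add (+) or remove (-) for the next unit."""
        self.history.append(observed_load)
        needed = -(-int(self.predict_next_load()) // CAPACITY_PER_VM)  # ceil
        return needed - current_vms

controller, vms = PredictiveController(), 1
for load in [120, 250, 400, 380, 90]:  # synthetic workload trace
    vms = max(1, vms + controller.step(load, vms))
    print(f"load={load:4d} -> provision {vms} VM(s) for the next time unit")
```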
142

Visual Care : Development of a web application for visualization of care processes

Bergdahl, Otto, Arvidsson, Peter, Bennich, Viktor, Celik, Hakan, Granli, Petter, Grimsdal, Gunnar, Nilsson, Johan January 2017 (has links)
This report describes the production of the web application Visual Care. The product is a tool for visualizing statistics provided by the customer Region Östergötland. The goal of the application is to help employees at Region Östergötland plan the treatment of cancer patients. The purpose of this report is to analyze the project group's development methods and processes for producing Visual Care. The product will not be used by the employees of Region Östergötland, but will instead serve as a prototype and inspiration for future projects by the customer.
143

A Qualitative Comparison Study Between Common GPGPU Frameworks

Söderström, Adam January 2018 (has links)
Graphics processing units have improved significantly in performance during the last decade while at the same time becoming cheaper. This has enabled a new type of usage of the device, where the massive parallelism available in modern GPUs is used for more general-purpose computing, also known as GPGPU. Frameworks have been developed just for this purpose, and some of the most popular are CUDA, OpenCL and DirectX Compute Shaders, also known as DirectCompute. The choice of which framework to use may depend on factors such as features, portability and framework complexity. This paper aims to evaluate these aspects, while also comparing the speedup of a parallel implementation of the N-Body problem with Barnes-Hut optimization against a sequential implementation.
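For reference, the all-pairs force computation that such a GPGPU implementation parallelizes looks roughly like the sequential sketch below; each outer iteration is independent, which is what maps naturally onto one GPU thread per body. The constants and data layout are illustrative assumptions, and Barnes-Hut replaces the inner all-pairs sum with an octree traversal over mass clusters.

```python
# Naive O(n^2) N-body force step: the data-parallel pattern that maps
# onto GPGPU frameworks such as CUDA, OpenCL and DirectCompute.
# A sketch of the sequential baseline only, not the paper's code.
import numpy as np

G = 6.674e-11        # gravitational constant
SOFTENING = 1e-9     # avoids division by zero for coincident bodies

def step(pos, vel, mass, dt):
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):                     # each iteration is independent:
        d = pos - pos[i]                   # a natural GPU thread per body
        dist2 = (d * d).sum(axis=1) + SOFTENING
        acc[i] = G * (mass[:, None] * d / dist2[:, None] ** 1.5).sum(axis=0)
    vel += acc * dt
    pos += vel * dt
    return pos, vel

rng = np.random.default_rng(0)
pos = rng.random((256, 3)); vel = np.zeros((256, 3)); mass = rng.random(256)
pos, vel = step(pos, vel, mass, dt=0.01)
```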
144

Considering WebAssembly Containers for Edge Computing on Hardware-Constrained IoT Devices

Napieralla, Jonah January 2020 (has links)
No description available.
145

Comparative evaluation of virtualization technologies in the cloud

Johansson, Marcus, Olsson, Lukas January 2017 (has links)
The cloud has over the years become a staple of the IT industry, not only for storage purposes, but for services, platforms and infrastructures. A key component of the cloud is virtualization and the fluidity it makes possible, allowing resources to be utilized more efficiently and services to be relocated more easily when needed. Virtual machine technology, consisting of a hypervisor managing several guest systems, has been the method for achieving this virtualization, but container technology, a lightweight virtualization method running directly on the host without a classic hypervisor, has been making headway in recent years. This report investigates the differences between VMs (virtual machines) and containers, comparing the two in relevant areas. The software chosen for this comparison is KVM as VM hypervisor and Docker as container platform, both run on Linux as the underlying host system. The work conducted for this report compares efficiency in common use areas through experimental evidence, and also evaluates differences in design through study of relevant literature. The results are then discussed and weighed to provide a conclusion. The results of this work show that Docker has the capability to potentially take over the role as the main virtualization technology in the coming years, provided that some of its current shortcomings are addressed and improved upon.
146

Rich window discretization techniques in distributed stream processing

Traub, Jonas January 2015 (has links)
No description available.
147

Performance Characterization of In-Memory Data Analytics on a Scale-up Server

Awan, Ahsan Javed January 2016 (has links)
The sheer increase in the volume of data over the last decade has triggered research into cluster computing frameworks that enable web enterprises to extract big insights from big data. While Apache Spark defines the state of the art in big data analytics platforms for (i) exploiting data-flow and in-memory computing and (ii) exhibiting superior scale-out performance on commodity machines, little effort has been devoted to understanding the performance of in-memory data analytics with Spark on modern scale-up servers. This thesis characterizes the performance of in-memory data analytics with Spark on scale-up servers. Through empirical evaluation of representative benchmark workloads on a dual-socket server, we have found that in-memory data analytics with Spark exhibits poor multi-core scalability beyond 12 cores due to thread-level load imbalance and work-time inflation. We have also found that workloads are bound by the latency of frequent data accesses to DRAM. As input data size grows, application performance degrades significantly due to a substantial increase in wait time during I/O operations and garbage collection, despite a 10% better instruction retirement rate (due to lower L1 cache misses and higher core utilization). For data accesses we have found that simultaneous multi-threading is effective in hiding the data latencies. We have also observed that (i) data locality on NUMA nodes can improve performance by 10% on average, and (ii) disabling next-line L1-D prefetchers can reduce execution time by up to 14%. Regarding GC impact, we match memory behaviour with the garbage collector to improve application performance by 1.6x to 3x, and we recommend using multiple small executors, which can provide up to 36% speedup over a single large executor. / QC 20160425
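As an illustration of the "multiple small executors" recommendation, the hedged PySpark sketch below splits the same total resources across several executors rather than one large one. The core and memory figures are made-up placeholders, not the thesis's measured configuration, and the session is meant to be created under spark-submit on a cluster.

```python
# Illustration only: the same total resources (24 cores, 96g) split
# across several executors rather than one large one, shrinking each
# executor's GC domain. Figures are assumed placeholders.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("scale-up-analytics")
         # Baseline (single large executor) would be:
         # instances=1, cores=24, memory=96g.
         .config("spark.executor.instances", "4")   # several small ones
         .config("spark.executor.cores", "6")       # 4 x 6  = 24 cores
         .config("spark.executor.memory", "24g")    # 4 x 24g = 96g
         .getOrCreate())
```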
148

Improving performance on base stations by improving spatial locality in caches / Förbättra prestanda på basstationer genom att öka rumslokaliteten i cachen

Carlsson, Jonas January 2016 (has links)
Real-time systems like base stations must meet time constraints to operate smoothly. This means that features like caches, which introduce stochastic timing variation, most likely cannot be added. Ericsson, however, wants to add caches both for the possible performance gains and for the automatic loading of functions. As it stands, Ericsson can only use direct-mapped caches, and the chance of cache misses on the base stations is large. We have tried to see if this randomness can be decreased by placing code in the common memory. The new placement is based on logs from earlier runs. There are two different heuristic approaches to this: the first was developed by Pettis & Hansen and the second by Gloy & Smith. We also discuss a third alternative by Hashemi, Kaeli & Calder (HKC), which was not tested. However, the results show no practical improvement from using code placement strategies.
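The core of the Pettis & Hansen heuristic is a greedy merge of code chains along the hottest call-graph edges, so that frequently interacting functions end up adjacent in memory. The Python sketch below illustrates the idea only: the edge weights stand in for call counts recovered from the run logs, and the published heuristic additionally chooses chain orientations when merging.

```python
# Greedy chain-merging in the spirit of Pettis & Hansen: process
# call-graph edges from hottest to coldest and concatenate the two
# chains they connect. The real heuristic also picks how to orient
# chains at each merge; this sketch skips that step.
def pettis_hansen(edges):
    """edges: {(caller, callee): call_count} -> functions in layout order."""
    funcs = {f for edge in edges for f in edge}
    chain_of = {f: [f] for f in funcs}          # each function starts alone
    for (f, g), _w in sorted(edges.items(), key=lambda kv: -kv[1]):
        cf, cg = chain_of[f], chain_of[g]
        if cf is cg:
            continue                            # already in the same chain
        merged = cf + cg
        for fn in merged:
            chain_of[fn] = merged
    layout, seen = [], set()
    for chain in chain_of.values():             # deduplicate shared chains
        if id(chain) not in seen:
            seen.add(id(chain))
            layout.extend(chain)
    return layout

# Hypothetical call counts recovered from run logs:
print(pettis_hansen({("a", "b"): 90, ("b", "c"): 50, ("a", "d"): 10}))
```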
149

Performance impacts when moving from a VM-based solution to a container-based solution

Muchow, Nicklas, Amir Jalali, Danial January 2022 (has links)
Container-based solutions are increasing in popularity, and thus more companies gravitate towards them. However, with systems growing larger and more complex, there is a general need to introduce container orchestration to manage the increase of containers. While adopting these technologies, Ericsson has noticed some increase in CPU usage when switching from a VM-based solution to a container-based solution with Kubernetes. Thus, this paper focuses on identifying the factors that may impact CPU usage in this kind of scenario. To do this, a literature review was performed to identify potential factors, and an experiment was conducted on these factors to determine their impact on CPU usage. The results show that factors such as the number of Pods in a request chain, the message size between Pods, and where Pods are located in a Kubernetes cluster may impact the CPU usage of a container-based system using Kubernetes. The number of Pods in the request chain and the message size between Pods had the largest impact on CPU usage, and thus the conclusion could be drawn that network I/O is the prime factor to look into when ensuring that a container-based solution performs as well as possible.
150

Adaptive Hierarchical Scheduling Framework for Real-Time Systems

Khalilzad, Nima January 2013 (has links)
Modern computer systems are often designed to play a multipurpose role. Therefore, they are capable of running a number of software tasks (software programs) simultaneously. These software tasks should share the processor such that all of them run and finish their computations as expected. On the other hand, a number of software tasks have timing requirements, meaning that they should not only access the processing unit, but this access should also be in a timely manner. Thus, there is a need to share the processor among different software programs (applications) in a timely fashion. The time-sharing is often realized by assigning a fixed and predefined processor time-portion to each application. However, there exists a group of applications where (i) processor demand varies over a wide range during run-time, and/or (ii) occasional timing violations can be tolerated. For systems that contain applications with these two properties, it is not efficient to assign fixed processor time-portions: if we allocate the processor based on the maximum resource demand of each application, the processor's computing capacity is wasted during the intervals where the applications require less than their maximum demand. To this end, in this thesis we propose adaptive processor time-portion assignments. In our adaptive scheme, at each point in time, we monitor the actual demand of the applications and provide sufficient processor time-portions for each application. In doing so, we are able to integrate more applications on a shared and resource-constrained system, while at the same time providing the applications with timing guarantees. / QC 20151217
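A toy sketch of the adaptive idea: rather than reserving each application's worst-case share, monitor recent demand each period and resize the time-portion accordingly. The headroom policy, the floor and all names below are illustrative assumptions, not the thesis's actual controller.

```python
# Toy adaptive time-portion controller: give each application its
# recently observed demand plus some headroom (instead of its
# worst-case reservation), and scale everything down proportionally
# if the processor is overcommitted. Policy constants are assumed.
def adapt_budgets(demands, capacity=1.0, headroom=1.2, floor=0.05):
    """demands: {app: observed CPU share last period} -> new budgets."""
    budgets = {app: max(floor, d * headroom) for app, d in demands.items()}
    total = sum(budgets.values())
    if total > capacity:                 # overload: shrink proportionally
        budgets = {a: b * capacity / total for a, b in budgets.items()}
    return budgets

print(adapt_budgets({"video": 0.30, "control": 0.10, "logger": 0.02}))
```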
