551

Towards auto-scaling in the cloud: online resource allocation techniques

Yazdanov, Lenar 26 September 2016 (has links)
Cloud computing provides easy access to computing resources. Customers can acquire and release resources at any time. However, it is not trivial to determine when and how many resources to allocate. Many applications running in the cloud face workload changes that affect their resource demand. The first thought is to plan capacity either for the average load or for the peak load. In the first case less cost is incurred, but performance suffers when the peak load occurs. The second case wastes money, since resources remain underutilized most of the time. Therefore there is a need for more sophisticated resource provisioning techniques that can automatically scale application resources according to workload demand and performance constraints. Large cloud providers such as Amazon, Microsoft, and RightScale provide auto-scaling services. However, without proper configuration and testing such services can do more harm than good. In this work I investigate application-specific online resource allocation techniques that dynamically adapt to the incoming workload, minimize the cost of virtual resources and meet user-specified performance objectives.
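As a rough illustration of the reactive scaling such services expose (a sketch under assumed thresholds and a simulated workload, not the techniques developed in the thesis), a minimal threshold-based scaling loop might look as follows:

```python
import random
import time

# Minimal threshold-based auto-scaler sketch (not the thesis's technique).
# Workload and provisioning are simulated; a real system would query a
# monitoring API and call the provider's scaling API instead.

MIN_INSTANCES, MAX_INSTANCES = 1, 10
SCALE_UP_AT, SCALE_DOWN_AT = 0.75, 0.30   # assumed utilisation thresholds

def observed_load() -> float:
    """Simulated aggregate workload, normalised to one instance's capacity."""
    return random.uniform(0.2, 6.0)

def decide(instances: int, load: float) -> int:
    utilisation = load / instances
    if utilisation > SCALE_UP_AT and instances < MAX_INSTANCES:
        return instances + 1          # scale out before performance degrades
    if utilisation < SCALE_DOWN_AT and instances > MIN_INSTANCES:
        return instances - 1          # scale in to cut the cost of idle resources
    return instances

if __name__ == "__main__":
    instances = 1
    for step in range(20):
        load = observed_load()
        instances = decide(instances, load)
        print(f"step {step:2d}: load={load:4.2f} instances={instances}")
        time.sleep(0.1)
```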
552

A Study of OpenStack Networking Performance / En studie av Openstack nätverksprestanda

Olsson, Philip January 2016 (has links)
Cloud computing is a fast-growing sector among software companies. Cloud platforms provide services such as spreading storage and computational power over several geographic locations, on-demand resource allocation and flexible payment options. Virtualization is a technology used in conjunction with cloud technology and offers the possibility to share the physical resources of a host machine by hosting several virtual machines on the same physical machine. Each virtual machine runs its own operating system, which makes the virtual machines hardware independent. The cloud and virtualization layers add additional layers of software to the server environment to provide these services. The additional layers introduce additional latency overhead, which can be problematic for latency-sensitive applications. The primary goal of this thesis is to investigate how the networking components impact latency in an OpenStack cloud compared to a traditional deployment. The networking components were benchmarked under different load scenarios, and the results indicate that the additional latency added by the networking components is not significant in the network setup used. Instead, a significant performance degradation could be seen in the applications running in the virtual machine, which caused most of the added latency in the cloud environment.
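For a feel of this kind of benchmark, the self-contained sketch below measures TCP round-trip latency against an echo endpoint. The local echo server, sample count and payload size are illustrative assumptions, and a real comparison would point the client first at a bare-metal host and then at an OpenStack VM:

```python
import socket
import statistics
import threading
import time

# Rough RTT micro-benchmark. The echo server runs locally so the script is
# self-contained; point HOST/PORT at the endpoint under test for a real run.

HOST, PORT = "127.0.0.1", 9099
SAMPLES, PAYLOAD = 200, b"x" * 64

def echo_server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while data := conn.recv(4096):
                conn.sendall(data)

threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.2)  # give the server time to start listening

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    rtts = []
    for _ in range(SAMPLES):
        t0 = time.perf_counter()
        cli.sendall(PAYLOAD)
        cli.recv(4096)
        rtts.append((time.perf_counter() - t0) * 1e3)

print(f"median RTT {statistics.median(rtts):.3f} ms, "
      f"p95 {statistics.quantiles(rtts, n=20)[18]:.3f} ms")
```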
553

Evaluating mobile edge-computing on base stations : Case study of a sign recognition application

Castellanos Nájera, Eduardo January 2015 (has links)
Mobile phones have evolved from feature phones to smart phones whose processing power rivals that of personal computers from ten years ago. Nevertheless, the computing power of personal computers has also multiplied in the past decade. Consequently, the gap between mobile platforms and personal computers and servers still exists. Mobile Cloud Computing (MCC) has emerged as a paradigm that leverages this difference in processing power. It achieves this goal by augmenting smart phones with resources from the cloud, including processing power and storage capacity. Recently, Mobile Edge Computing (MEC) has brought the benefits of MCC one hop away from the end user. Furthermore, it also provides additional advantages, e.g., access to network context information, reduced latency, and location awareness. This thesis explores the advantages provided by MEC in practice by augmenting an existing application called Human-Centric Positioning System (HoPS). HoPS is a system that relies on context information and information extracted from a photograph of signposts to estimate a user's location. This thesis presents the challenges of enabling HoPS in practice, and implements strategies that make use of the advantages provided by MEC to tackle those challenges. Afterwards, it presents an evaluation of the resulting system and discusses the implications of the results. To summarise, we make three primary contributions in this thesis: (1) we find that it is possible to augment HoPS and improve its response time by a factor of four by offloading the processing; (2) we can improve the overall accuracy of HoPS by leveraging the additional processing power at the MEC; (3) we observe that improved network conditions can lead to reduced response time; nevertheless, the difference becomes insignificant compared with the heavy processing required.
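A hedged sketch of the offloading trade-off that MEC clients such as HoPS face is shown below; the payload size, processing times and link parameters are invented for illustration and are not measurements from the thesis:

```python
from dataclasses import dataclass

# Back-of-the-envelope offloading decision: process an image on the device,
# or ship it one hop to an edge server. All figures are assumptions.

@dataclass
class Task:
    payload_bytes: int        # image size to upload
    local_seconds: float      # estimated on-device processing time
    remote_seconds: float     # estimated processing time on the edge node

def offload(task: Task, uplink_bps: float, rtt_seconds: float) -> bool:
    """Return True if offloading is expected to finish sooner than a local run."""
    transfer = task.payload_bytes * 8 / uplink_bps
    remote_total = rtt_seconds + transfer + task.remote_seconds
    return remote_total < task.local_seconds

photo = Task(payload_bytes=2_000_000, local_seconds=8.0, remote_seconds=2.0)
print("offload over a fast link:", offload(photo, uplink_bps=20e6, rtt_seconds=0.02))
print("offload over a slow link:", offload(photo, uplink_bps=0.5e6, rtt_seconds=0.12))
```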
554

Exploiting Cloud Resources For Semantic Scene Understanding On Mobile Robots

Bruse, Andreas January 2015 (has links)
Modern day mobile robots are constrained in the resources available to them. Only so much hardware can be fitted onto the robotic frame, and at the same time the robots are required to perform tasks that demand large computational resources, access to massive amounts of data and the ability to share knowledge with other robots around them. This thesis explores the cloud robotics approach, in which complex computations can be offloaded to a cloud service with a huge amount of computational resources and access to massive data sets. The Robot Operating System, ROS, is extended to allow the robot to communicate with a high-powered cluster, and this system is used to test our approach on a task as complex as semantic scene understanding. The benefits of the cloud approach are utilized to connect to a cloud-based object detection system and to build a categorization system relying on large-scale datasets and a parallel computation model. Finally, a method is proposed for building a consistent scene description by exploiting semantic relationships between objects.
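The thesis's own model for exploiting semantic relationships is not reproduced here; the toy sketch below merely illustrates the general idea of re-weighting object detections by how plausibly they co-occur with the rest of the scene, using assumed co-occurrence scores:

```python
# Toy illustration (not the thesis's method): detections that fit the
# surrounding objects keep their score, implausible ones are pruned.

CO_OCCURRENCE = {           # assumed pairwise plausibility scores in [0, 1]
    ("monitor", "keyboard"): 0.9,
    ("monitor", "mug"): 0.7,
    ("keyboard", "mug"): 0.6,
    ("monitor", "cow"): 0.05,
    ("keyboard", "cow"): 0.05,
    ("mug", "cow"): 0.1,
}

def plausibility(a: str, b: str) -> float:
    return CO_OCCURRENCE.get((a, b)) or CO_OCCURRENCE.get((b, a), 0.5)

def refine(detections: dict[str, float], keep: float = 0.3) -> dict[str, float]:
    refined = {}
    for label, score in detections.items():
        others = [plausibility(label, o) for o in detections if o != label]
        context = sum(others) / len(others) if others else 1.0
        refined[label] = score * context          # down-weight out-of-context objects
    return {l: s for l, s in refined.items() if s >= keep}

detections = {"monitor": 0.95, "keyboard": 0.80, "mug": 0.70, "cow": 0.60}
print(refine(detections))   # the out-of-context "cow" drops out
```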
555

Measurement and Analysis of Networking Performance in Virtualised Environments

Chauhan, Maneesh January 2014 (has links)
Mobile cloud computing, having embraced ideas like computation offloading, mandates a low-latency, high-speed network to satisfy the quality-of-service and usability assurances for mobile applications. The networking performance of clouds based on Xen and VMware virtualisation solutions has been extensively studied by researchers, although the studies have mostly focused on network throughput and bandwidth metrics. This work focuses on the measurement and analysis of the networking performance of VMs in a small, KVM-based data centre, emphasising the role of virtualisation overheads in the Host-VM latency and, ultimately, in the overall latency experienced by remote clients. We also present some useful tools, such as Driftanalyser, VirtoCalc and Trotter, that we developed for carrying out specific measurements and analysis. Our work shows that an increase in a VM's CPU workload has direct implications for network round trip times. We also show that virtualisation overheads (VO) have a significant bearing on the end-to-end latency and can contribute up to 70% of the round trip time between the Host and the VM. Furthermore, we thoroughly study latency due to virtualisation overheads as a networking performance metric and analyse the impact of CPU loads and networking workloads on it. We also analyse the resource sharing patterns and their effects amongst VMs of different sizes on the same Host. Finally, having observed a dependency between the network performance of a VM and the Host CPU load, we suggest that in a KVM-based cloud installation, workload profiling and an optimum processor pinning mechanism can be effectively utilised to regulate the network performance of the VMs. The findings from this research work are applicable to optimising latency-oriented VM provisioning in cloud data centres, which would benefit most latency-sensitive mobile cloud applications.
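A minimal sketch of the kind of experiment behind the CPU-load/RTT dependency is given below (not the authors' Driftanalyser, VirtoCalc or Trotter tooling). It assumes a Linux host with `ping` available, and the VM address is a placeholder:

```python
import multiprocessing as mp
import re
import subprocess
import time

# Ping a VM while an increasing number of busy-loop workers load the host,
# to expose how host CPU load affects the Host-VM round trip time.

VM_IP = "192.0.2.10"   # placeholder (TEST-NET); replace with the VM under test

def busy(stop):
    while not stop.is_set():
        pass

def avg_rtt_ms(count: int = 20) -> float:
    out = subprocess.run(["ping", "-c", str(count), "-i", "0.2", VM_IP],
                         capture_output=True, text=True, check=True).stdout
    # summary line looks like: rtt min/avg/max/mdev = 0.310/0.402/0.610/0.071 ms
    return float(re.search(r"= [\d.]+/([\d.]+)/", out).group(1))

if __name__ == "__main__":
    for workers in (0, 2, 4, 8):
        stop = mp.Event()
        procs = [mp.Process(target=busy, args=(stop,)) for _ in range(workers)]
        for p in procs:
            p.start()
        time.sleep(1)                      # let the load settle
        print(f"{workers} busy workers -> avg RTT {avg_rtt_ms():.3f} ms")
        stop.set()
        for p in procs:
            p.join()
```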
556

Art-directable cloud animation

Yiyun Wang (10703088) 06 May 2021 (has links)
Volumetric cloud generation and rendering algorithms are well developed to meet the need for realistic skies in animation and games. However, it is challenging to create a stylized or designed animation for volumetric clouds in real time using physics-based generation and simulation methods.

The problem addressed by this research is that current methods for controlling volumetric cloud animation are not art-directable. Making a piece of volumetric cloud move in a specific way can be difficult when using only a physics-based simulation method. The purpose of the study is to implement an animation method for volumetric clouds with art-directable controllers. Using this method, a designer can easily control the cloud's motion in a reliable way. The program achieves interactive performance using parallel processing with CUDA. Users are able to animate the cloud by inputting a few vectors inside the cloud volume.

After reviewing the literature related to real-time cloud simulation methods, texture advection algorithms, fluid simulation, and other processes needed to achieve the results, the thesis offers a feasible design of the algorithm and experiments to test the hypotheses. The study uses noise textures and fractional Brownian motion (fBm) to generate volumetric clouds and renders them with the ray marching technique. The program renders the user input vectors and a three-dimensional interpolated vector field with OpenGL. By adding or changing input vectors, the user obtains a divergence-minimizing interpolated field. The cloud volume can then be animated in real time by the texture advection technique based on the interpolated vector field. By inputting several vectors, the user can plausibly animate the volumetric cloud in an art-directable way.
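As a point of reference for the noise-based generation step, the sketch below computes a 2-D value-noise fBm field of the kind commonly used as a cloud-density input. It is a CPU/NumPy illustration with assumed parameters, not the thesis's CUDA implementation:

```python
import numpy as np

# Minimal 2-D value-noise fBm: a random lattice, smoothstep-faded bilinear
# interpolation, and several octaves summed with decreasing amplitude.

rng = np.random.default_rng(7)
LATTICE = rng.random((256, 256))          # random values on an integer lattice

def value_noise(x, y):
    """Bilinearly interpolated lattice noise with smoothstep fading."""
    xi, yi = np.floor(x).astype(int) % 256, np.floor(y).astype(int) % 256
    xf, yf = x - np.floor(x), y - np.floor(y)
    u, v = xf * xf * (3 - 2 * xf), yf * yf * (3 - 2 * yf)   # smoothstep
    c00 = LATTICE[xi, yi]
    c10 = LATTICE[(xi + 1) % 256, yi]
    c01 = LATTICE[xi, (yi + 1) % 256]
    c11 = LATTICE[(xi + 1) % 256, (yi + 1) % 256]
    return (c00 * (1 - u) + c10 * u) * (1 - v) + (c01 * (1 - u) + c11 * u) * v

def fbm(x, y, octaves=5, lacunarity=2.0, gain=0.5):
    total, amplitude, frequency = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amplitude * value_noise(x * frequency, y * frequency)
        amplitude *= gain                 # each octave contributes less energy
        frequency *= lacunarity           # ...at finer spatial detail
    return total

xs, ys = np.meshgrid(np.linspace(0, 8, 512), np.linspace(0, 8, 512))
density = fbm(xs, ys)                     # usable as a cloud-density field
print(density.shape, float(density.min()), float(density.max()))
```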
557

Cloud BI : Utmaningar vid implementation av Cloud BI / Cloud BI : Challenges when implementing Cloud BI

Sprangers, William January 2021 (has links)
Business Intelligence (BI) enables better and more efficient decision-making by providing decision-makers with the right data at the right time. New technology is constantly emerging in the BI field, and applying Cloud Computing (CC) together with BI brings many advantages and several new areas of development. The combination of BI and CC creates Cloud BI. This technology still has a relatively low level of maturity but is under constant development, and this report discusses the technical and organisational challenges that can arise when introducing Cloud BI in an organisation. Interviews were conducted to collect qualitative data that describes this phenomenon and gives deeper insight into how these challenges affect organisations that want to implement Cloud BI. The research question was answered through semi-structured interviews with room for discussion. In total, five respondents who work with BI and have knowledge of Cloud BI participated. The results show that the technical challenges organisations encounter when implementing Cloud BI are (1) access, (2) security, and (3) architecture and transport of data. Organisational challenges that can arise are (4) resistance, (5) laws and regulations, and (6) ways of working and methods.
558

Factors impeding the usage of elearning at a telecommunication organization in South Africa: bridging the gap with cloud services

Mere, Phoebus 09 1900 (has links)
With the enormous competition in the industry, organizations must frequently find better ways to embrace organizational learning. This research study advocates eLearning as one of the best methods for organizational learning, and this is the study's main area of interest. This research explored a case at a telecommunication organization named ComTek (pseudonym). The research study addressed the problem of a low eLearning usage rate, which resulted in ComTek not meeting its learning targets during the time of the study. The usage rate was measured using the number of enrolled assessments. The study uses qualitative methods to propose a conceptual framework for understanding the causes of low eLearning usage. This conceptual framework illustrated the use of activity theory elements to understand the problem of low eLearning usage, paired with the use of cloud computing services to access eLearning and the use of content delivery techniques to help understand the low usage. The conceptual framework took advantage of cloud services such as Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). This research study focused on the period from 2016 to 2017 for collecting data and creating an understanding of the research setting, while other data was derived from historical documents about the phenomenon studied. During this period, there was inadequate literature about cloud computing and the other aspects to consider within the domain of telecommunication organizations. The literature study, therefore, comprised literature from different domains. During the study, ComTek used eLearning with the aid of learning management systems (LMS) to manage learning and leverage employee skills. During the period of the study, compared to other years, about 50% of assessments had a usage rate below 80%, the standard target established by ComTek as a benchmark, placing compliance and training at a low rate. Of that 50% of assessments, some were just above 40% in usage rate, were high-stakes, and were in the categories of compliance and training assessments. While this was the case, this study did not consider the technical implementation of the application systems involved and did not create any form of intervention, but focused on understanding the activities that were involved in the learning environment. This research study used a paradigm that was constructive and interpretive in nature, using qualitative methods with the belief that there were multiple realities in understanding the situation at ComTek and possible solutions to it. To unpack the multiple realities, an exploratory case study was conducted as the research approach. In this study, the researcher used multiple data collection methods, including open-ended questionnaires and unstructured interviews. / School of Computing
559

Sistema de gestión de bicicletas compartidas con el uso de IOT y energía sostenible / Bike-Sharing management system using IoT and sustainable energy

Mercado Luna, Renato, Benavente Soto, Gabriel Alonso 16 July 2020 (has links)
In recent years, due to the global situation and environmental changes, more and more people have decided to use eco-friendly means of transport, bicycles being the most widely accepted. Because of the high demand for this means of transport, several companies offering bike-sharing systems have emerged, and such systems enjoy high acceptance in several metropolises around the world. This high demand and acceptance has created a need: the efficient management of these systems. In this light, this work surveys important existing bike-sharing systems with the intent of identifying their main components and the benefits they provide to their users. / Research work
560

Combating Data Leakage in the Cloud

Dlamini, Moses Thandokuhle January 2020 (has links)
The increasing number of reports on data leakage incidents increasingly erodes the already low consumer confidence in cloud services. Hence, some organisations are still hesitant to fully trust the cloud with their confidential data. Therefore, this study raises a critical and challenging research question: How can we restore the damaged consumer confidence and improve the uptake and security of cloud services? This study makes a plausible attempt at unpacking and answering the research question in order to holistically address the data leakage problem from three fronts, i.e. conflict-aware virtual machine (VM) placement, strong authentication and digital forensic readiness. Consequently, this study investigates, designs and develops an innovative conceptual architecture that integrates conflict-aware VM placement, cutting-edge authentication and digital forensic readiness to strengthen cloud security and address the data leakage problem in the hope of eventually restoring consumer confidence in cloud services. The study proposes and presents a conflict-aware VM placement model. This model uses varying conflict tolerance levels and the constructs of a sphere of conflict and a sphere of non-conflict. These are used to provide the physical separation of VMs belonging to conflicting tenants that share the same cloud infrastructure. The model assists the cloud service provider in making informed VM placement decisions that factor in their tenants' security profile and balance it against the relevant cost constraints and risk appetite. The study also proposes and presents a strong risk-based multi-factor authentication mechanism that scales up and down, based on the threat levels or risks posed to the system. This ensures that users are authenticated using the right combination of access credentials according to the risk they pose. This also ensures end-to-end security of authentication data, both at rest and in transit, using an innovative cryptography system and steganography. Furthermore, the study proposes and presents a three-tier digital forensic process model that proactively collects and preserves digital evidence in anticipation of a lawsuit or a policy breach investigation. This model aims to reduce the time it takes to conduct an investigation in the cloud. Moreover, the three-tier digital forensic readiness process model collects all user activity in a forensically sound manner and notifies investigators of potential security incidents before they occur. The current study also evaluates the effectiveness and efficiency of the proposed solution in addressing the data leakage problem. The results of the conflict-aware VM placement model are derived from simulated and real cloud environments. In both cases, the results show that the conflict-aware VM placement model is well suited to provide the necessary physical isolation of VM instances that belong to conflicting tenants in order to prevent data leakage threats. However, this comes with a performance cost in the sense that bigger VMs with higher conflict tolerance levels take more time to place than smaller VM instances with low conflict tolerance levels. From the risk-based multi-factor authentication point of view, the results reflect that the proposed solution is effective, and to a certain extent also efficient, in preventing unauthorised users, armed with legitimate credentials, from gaining access to systems that they are not authorised to access.
The results also demonstrate the uniqueness of the approach in that even minor deviations from the norm are correctly classified as anomalies. Lastly, the results reflect that the proposed three-tier digital forensic readiness process model is effective in the collection and storage of potential digital evidence. This is done in a forensically sound manner and stands to significantly improve the turnaround time of a digital forensic investigation process. Although the classification of incidents may not be perfect, this can be improved with time and is considered part of the future work suggested by the researcher. / Thesis (PhD)--University of Pretoria, 2020. / Computer Science / PhD / Unrestricted
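To make the conflict-aware placement idea concrete, the sketch below shows a much-simplified placement check (illustrative only, not the thesis's model) in which a VM is placed only on a host that has capacity and holds no VM of a conflicting tenant:

```python
from dataclasses import dataclass, field

# Simplified conflict-aware placement: a VM may only land on a host that
# holds no VM of a tenant listed as conflicting, subject to capacity.
# The tenant names and conflict relation are assumptions for illustration.

CONFLICTS = {
    "bank-a": {"bank-b"},
    "bank-b": {"bank-a"},
}

@dataclass
class Host:
    name: str
    free_cores: int
    tenants: set[str] = field(default_factory=set)

def place(vm_tenant: str, cores: int, hosts: list[Host]) -> str | None:
    rivals = CONFLICTS.get(vm_tenant, set())
    for host in hosts:
        if host.free_cores >= cores and not (host.tenants & rivals):
            host.tenants.add(vm_tenant)
            host.free_cores -= cores
            return host.name           # physically separated from conflicting tenants
    return None                        # reject rather than co-locate rivals

hosts = [Host("h1", 8, {"bank-a"}), Host("h2", 8)]
print(place("bank-b", 4, hosts))       # -> h2 (h1 already hosts the conflicting bank-a)
print(place("bank-a", 4, hosts))       # -> h1
```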
