131

Estimation and optimization methods for transportation networks

Wollenstein-Betech, Salomón 24 May 2022 (has links)
While the traditional approach to easing traffic congestion has focused on building infrastructure, the recent emergence of Connected and Automated Vehicles (CAVs) and urban mobility services (e.g., Autonomous Mobility-on-Demand (AMoD) systems) has opened a new set of alternatives for reducing travel times. This thesis seeks to exploit these advances to improve the operation and efficiency of Intelligent Transportation Systems using a network optimization perspective. It proposes novel methods to evaluate the prospective benefits of adopting socially optimal routing schemes, intermodal mobility, and contraflow lane reversals in transportation networks. This dissertation makes methodological and empirical contributions to the transportation domain. From a methodological standpoint, it devises a fast solver for the Traffic Assignment Problem with Side Constraints which supports arbitrary linear constraints on the flows. Instead of using standard column-generation methods, it introduces affine approximations of the travel latency function to reformulate the problem as a quadratic (or linear) programming problem. This framework is applied to two problems related to urban planning and mobility policy: social routing with rebalancing in intermodal mobility systems, and planning lane reversals in transportation networks. Moreover, it proposes a novel method to jointly estimate the Origin-Destination demand and the travel latency functions of the Traffic Assignment Problem. Finally, it develops a model to jointly optimize the pricing, rebalancing, and fleet-sizing decisions of a Mobility-on-Demand service. Empirically, it validates all of the methods by testing them on real transportation topologies and real traffic data from Eastern Massachusetts and New York City, showing the benefits achievable relative to benchmark approaches.
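To make the reformulation concrete, the sketch below (not the thesis code) sets up a tiny system-optimal traffic assignment in which each link's latency is approximated by an affine function t_a(x) ≈ c_a + b_a·x, so the assignment with a linear side constraint becomes a quadratic program; the two-link network, coefficients, demand, and capacity bound are invented for illustration.

```python
# A minimal sketch of the idea behind the fast TAP solver described above:
# with an affine latency t_a(x) ~= c_a + b_a * x, the system-optimal assignment
# with linear side constraints is a convex quadratic program.
import cvxpy as cp
import numpy as np

c = np.array([10.0, 15.0])   # free-flow travel times of two parallel O-D links (assumed)
b = np.array([0.5, 0.2])     # affine congestion slopes (assumed)
demand = 30.0                # O-D demand to be split across the two links

x = cp.Variable(2, nonneg=True)                              # link flows
total_cost = c @ x + cp.sum(cp.multiply(b, cp.square(x)))    # sum_a x_a * t_a(x_a)

constraints = [
    cp.sum(x) == demand,     # flow conservation for the single O-D pair
    x[0] <= 25.0,            # an example linear side constraint (e.g., a lane capacity)
]

prob = cp.Problem(cp.Minimize(total_cost), constraints)
prob.solve()
print("link flows:", x.value)
```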
132

Multipath Routing with Load Balancing in Wireless Ad Hoc Networks

Groleau, Romain January 2005 (has links)
In recent years, routing research concerning wired networks has focused on minimizing the maximum utilization of the links, which is equivalent to reducing the number of bottlenecks while supporting the same traffic demands. This can be achieved with routing optimizers that use multipath routing with load balancing instead of single-path routing. However, in the domain of ad hoc networks, multipath routing has not been investigated in depth. We would like to develop an analogy between wired and wireless networks, but before that we need to identify the major differences between the two in the case of multipath routing. First, in order to increase the network throughput, the multiple paths have to be independent so that they do not share the same bottlenecks. Second, due to radio propagation properties, the link capacity is not constant, so the maximum-utilization metric is not suitable for wireless networks. Based on the research done in wired networks, which has shown that using multiple paths with load balancing policies between source-destination pairs can minimize the maximum utilization of the links, we investigate whether this is applicable to ad hoc networks. This thesis proposes a multipath routing algorithm with a load balancing policy. The results obtained from an indoor 802.11g network highlight two major points. The maximum throughput is not achieved with multipath routing but with single-path routing. However, the results on the delivery ratio are encouraging: we observe a real improvement thanks to our multipath routing algorithm.
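As a concrete illustration of the wired-network objective the thesis starts from, the toy sketch below splits one source-destination demand over two link-disjoint paths so as to minimize the maximum link utilization. The topology, capacities, and demand are assumptions made up for the example; this is not the thesis' wireless algorithm.

```python
# Minimize the maximum link utilization by load-balancing one demand over two paths.
import cvxpy as cp

demand = 8.0
paths = [[0, 1], [2, 3]]            # each path listed as its link indices (assumed topology)
capacity = [10.0, 6.0, 5.0, 5.0]    # per-link capacities (assumed)

f = cp.Variable(len(paths), nonneg=True)   # flow sent on each path
u = cp.Variable()                          # maximum link utilization to be minimized

constraints = [cp.sum(f) == demand]
for link, cap in enumerate(capacity):
    # every link's utilization must stay below the common bound u
    load = sum(f[p] for p, links in enumerate(paths) if link in links)
    constraints.append(load / cap <= u)

prob = cp.Problem(cp.Minimize(u), constraints)
prob.solve()
print("path flows:", f.value, "max utilization:", u.value)
```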
133

Improving the Response Time of M-Learning and Cloud Computing Environments Using a Dominant Firefly Approach

Sekaran, Kaushik, Khan, Mohammad S., Patan, Rizwan, Gandomi, Amir H., Krishna, Parimala Venkata, Kallam, Suresh 01 January 2019 (has links)
Mobile learning (m-learning) is a relatively new technology that helps students learn and gain knowledge using the Internet and Cloud computing technologies. Cloud computing is one of the recent advancements in the computing field that makes Internet access easy for end users. Many Cloud services rely on Cloud users for mapping Cloud software using virtualization techniques. Usually, the Cloud users' requests from various terminals will cause heavy traffic or unbalanced loads at the Cloud data centers and associated Cloud servers. Thus, a Cloud load balancer that uses an efficient load balancing technique is needed in all the cloud servers. We propose a new meta-heuristic algorithm, named the dominant firefly algorithm, which optimizes load balancing of tasks among the multiple virtual machines in the Cloud server, thereby improving the response efficiency of Cloud servers, which in turn enhances the accuracy of m-learning systems. Our methods and findings address load-imbalance issues in Cloud servers and will enhance the experience of m-learning users. Specifically, our findings, such as a Cloud Structured Query Language (SQL) querying mechanism for mobile devices, will ensure users receive their m-learning content without delay; additionally, our method demonstrates that applying an effective load balancing technique improves the throughput and response time in mobile and cloud environments.
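For illustration only, the sketch below implements a plain firefly-style metaheuristic that assigns tasks to virtual machines so as to minimize the load of the most-loaded VM. It is a generic instance of the algorithm family the paper builds on, not the authors' dominant firefly algorithm, and the task lengths, VM count, and parameters are invented.

```python
# Generic firefly-style search over task-to-VM assignments (minimize makespan).
import numpy as np

rng = np.random.default_rng(0)
task_len = rng.uniform(1, 10, size=20)       # workload of each task (assumed)
n_vms, n_fireflies, n_iters = 4, 15, 100
beta0, gamma, alpha = 1.0, 1.0, 0.3          # attractiveness and randomness parameters

def makespan(position):
    # decode a real-valued position into a task->VM assignment and score it
    assign = np.clip(position, 0, n_vms - 1e-9).astype(int)
    loads = np.bincount(assign, weights=task_len, minlength=n_vms)
    return loads.max()

pos = rng.uniform(0, n_vms, size=(n_fireflies, task_len.size))
for _ in range(n_iters):
    fitness = np.array([makespan(p) for p in pos])
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if fitness[j] < fitness[i]:       # firefly j is "brighter" (lower makespan)
                r2 = np.sum((pos[i] - pos[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                pos[i] += beta * (pos[j] - pos[i]) + alpha * rng.uniform(-0.5, 0.5, pos[i].shape)
                pos[i] = np.clip(pos[i], 0, n_vms)

best = pos[np.argmin([makespan(p) for p in pos])]
print("best makespan:", makespan(best))
```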
134

Services de répartition de charge pour le Cloud : application au traitement de données multimédia / Load distribution services for the Cloud : a multimedia data management example

Lefebvre, Sylvain 10 December 2013 (has links)
The research work carried out in this thesis consists in the development of new load balancing algorithms aimed at big-data computing. The first algorithm, called "WACA" (Workload and Cache Aware Algorithm), enhances response times by locating data efficiently through content summaries. The second algorithm, called "CAWA" (Cost AWare Algorithm), takes advantage of the cost information available on Cloud Computing platforms by studying the execution history of services. Evaluating these algorithms required the development of a cloud infrastructure simulator named Simizer, to enable testing of the policies prior to their deployment in real conditions. This deployment can be done transparently thanks to the Cloudizer web service distribution and monitoring system, also developed during this thesis. This work is part of the Multimedia for Machine to Machine (MCUBE) data-processing platform project, in which the Cloudizer framework is deployed.
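The following is a minimal sketch, under assumptions, of a cost-aware dispatching policy in the spirit of CAWA: each request is routed to the server whose recent execution history predicts the lowest cost. The server names, window size, and cost values are illustrative and do not come from the thesis or from Simizer.

```python
# Route each request to the server with the lowest estimated cost based on history.
from collections import deque

class CostAwareBalancer:
    def __init__(self, servers, window=20):
        self.servers = servers
        self.history = {s: deque(maxlen=window) for s in servers}   # recent observed costs

    def estimate(self, server):
        h = self.history[server]
        return sum(h) / len(h) if h else 0.0     # optimistic estimate for unseen servers

    def pick(self):
        # choose the server whose history predicts the lowest cost
        return min(self.servers, key=self.estimate)

    def record(self, server, cost):
        self.history[server].append(cost)        # feed the observed cost back into the history

balancer = CostAwareBalancer(["node-a", "node-b", "node-c"])
target = balancer.pick()
balancer.record(target, cost=0.42)
```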
135

Effective Data Redistribution and Load Balancing for Sort-Last Volume Rendering Using a Group Hierarchy / Effektiv datadistribution och belastningsutjämning för sort-last volumetrisk rendering med hjälp av en grupphierarki

Walldén, Marcus January 2018 (has links)
Volumetric rendering is used to visualize volume data from e.g. scientific simulations. Many advanced applications use large gigabyte- or terabyte-sized data sets, which typically means that multiple compute nodes need to partake in the rendering process to achieve interactive frame rates. Load balancing is generally used to optimize rendering performance. In existing load balancing techniques, nodes generally only render directly connected data and handle load balancing based on data locality in kd-trees. This approach can result in redundant data transfers and unbalanced data distribution, which affect the frame rate and increase the hardware requirements of all nodes. In this thesis we present a novel load balancing technique for sort-last volume rendering which utilizes a group hierarchy. The technique allows nodes to render data from arbitrary positions in the volume without inducing a costly image compositing stage. The technique is compared to a static load balancing technique as well as a dynamic kd-tree-based load balancing technique. Our testing demonstrated that the presented technique performed as well as or better than the kd-tree-based technique while also lowering the worst-case memory usage complexity of all nodes. Utilizing a group hierarchy effectively helped to lower the compositing time of the presented technique.
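As a point of reference for what such techniques balance, the sketch below shows a simple greedy (longest-processing-time) distribution of volume bricks across render nodes based on estimated per-brick cost. It is a generic baseline, not the group-hierarchy method presented in the thesis, and the brick costs and node count are invented.

```python
# Greedy LPT assignment: give each remaining most-expensive brick to the least-loaded node.
import heapq

brick_cost = [9.0, 7.5, 6.0, 5.5, 4.0, 3.0, 2.5, 1.0]   # estimated render cost per brick (assumed)
n_nodes = 3

heap = [(0.0, node) for node in range(n_nodes)]          # min-heap of (current load, node id)
assignment = {node: [] for node in range(n_nodes)}

for brick, cost in sorted(enumerate(brick_cost), key=lambda bc: -bc[1]):
    load, node = heapq.heappop(heap)                      # least-loaded node so far
    assignment[node].append(brick)
    heapq.heappush(heap, (load + cost, node))

print(assignment)
```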
136

Enhancing the performance of mobile networks using Kubernetes : Load balancing traffic by utilizing workload estimation / Lastbalansering av trafik i ett Kuberneteskluster med hjälp av arbetsbelastningestimering

Laukka, Lucas, Fransson, Carl January 2023 (has links)
As global mobile network usage increases rapidly and users demand lower latency, the importance of stable 5G networks is more critical than ever. One way to orchestrate mobile network backends is by using Kubernetes. Kubernetes allows for automatic restarts and scaling of containers and provides an easy way to route incoming connections to applications running in containers. By routing the incoming connections using different load-balancing algorithms, it is possible to reduce latency through more efficient usage of worker nodes.  This thesis aims to identify ways to use load balancing inside a Kubernetes cluster to increase throughput and reduce latency in a mobile network system. We perform a literature study on possible ways to implement load balancing in Kubernetes and possible algorithms to use in the load balancing. Using the study results, we model a simplified mobile network system in a Kubernetes cluster and implement a load balancer at the Service level. By running simulations on this model, we compare three algorithms existing in Kubernetes as well as a dynamic algorithm using estimated workloads in terms of latency and throughput. The existing algorithms that are compared include Round Robin, Least Connections, and Random. The results show a potential to reduce latency by up to 31% compared to the native Random algorithm when utilizing a dynamic load balancer at the Service level.
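A minimal sketch of the kind of Service-level dynamic policy compared above is given below: each incoming request is dispatched to the pod with the lowest estimated outstanding workload, in contrast to a round-robin baseline. The pod names and cost estimates are placeholders, and the code is not taken from the thesis or from Kubernetes itself.

```python
# Dispatch requests to the pod with the lowest estimated outstanding workload.
import itertools

pods = {"pod-0": 0.0, "pod-1": 0.0, "pod-2": 0.0}   # estimated outstanding work per pod (assumed)
_rr = itertools.cycle(sorted(pods))

def round_robin(_cost):
    return next(_rr)                                 # existing baseline: ignores workload

def least_estimated_workload(_cost):
    return min(pods, key=pods.get)                   # dynamic policy: follows the estimates

def dispatch(estimated_cost, policy=least_estimated_workload):
    pod = policy(estimated_cost)
    pods[pod] += estimated_cost                      # request enters the pod's queue
    return pod

def complete(pod, estimated_cost):
    pods[pod] = max(0.0, pods[pod] - estimated_cost)  # finished work lowers the estimate

for cost in [0.2, 0.5, 0.1, 0.9]:
    print("dispatched to", dispatch(cost), pods)
complete("pod-0", 0.2)
```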
137

Comparative Analysis of Load Balancing in Cloud Platforms for an Online Bookstore Web Application using Apache Benchmark

Pothuganti, Srilekha, Samanth, Malepiti January 2023 (has links)
Background: Cloud computing has transformed the landscape of application deployment, offering on-demand access to compute resources, databases, and services via the internet. This thesis explores the development of an innovative online bookstore web application, harnessing the power of cloud infrastructure across AWS, Azure, and GCP. The front end utilises HTML, CSS, and JavaScript to create responsive web pages with an intuitive user interface. The back end is constructed using Node.js and Express for high-performance server-side logic and routing, while MongoDB, a distributed NoSQL database, stores the data. This cloud-native architecture facilitates easy scaling and ensures high availability. Objectives: The main objectives of this thesis are to develop an intuitive online bookstore enabling users to add, exchange, and purchase books; deploy it across AWS, Azure, and GCP for scalability; implement load balancers for enhanced performance; and conduct load testing and benchmarking to compare the efficiency of these load balancers. By comparing load balancer metrics across these platforms, the study aims to determine the best-performing cloud platform and load-balancing strategy to ensure an exceptional user experience for the online bookstore. Methods: The website is deployed on the three cloud platforms by creating instances separately on each platform, and a load balancer is then created for each of the services. Using the monitoring tools of each platform, we obtain the resulting graphs for the metrics. We then increase and decrease the load in the Apache Benchmark tool for specific tasks taken from the website and compare the results visualised in an aggregate graph and summary reports. The website's overall performance is tested using metrics such as throughput, CPU utilisation, error percentage, and cost efficiency. Results: The results are based on applying the Apache Benchmark load-testing tool to the selected website on each cloud platform. The results for AWS, Azure, and GCP are shown in the aggregate graph. The graphs indicate which service is best for users, i.e., which places the least load on the server and returns requested data in the shortest amount of time. We considered 10 and 50 requests and, based on the results, compared the metrics of throughput, CPU utilisation, error percentage, and cost efficiency to determine which cloud platform performs better. Conclusions: Based on the results for the 10 and 50 requests, GCP achieves higher throughput and CPU utilisation than AWS and Azure, which are less flexible and efficient for users. Thus, GCP outperforms the other platforms in terms of load balancing.
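To make the benchmarking step concrete, the hedged sketch below runs ApacheBench (ab) against each deployment with a fixed request count and concurrency and extracts throughput and failed requests from its text report. The endpoint URLs are placeholders, and the concurrency level is an assumption; only the request counts mirror the setup described above.

```python
# Run ab against each cloud deployment and parse throughput and failure counts.
import re
import subprocess

endpoints = {
    "aws":   "http://example-aws-lb/books",     # placeholder URLs, not from the thesis
    "azure": "http://example-azure-lb/books",
    "gcp":   "http://example-gcp-lb/books",
}

for name, url in endpoints.items():
    out = subprocess.run(
        ["ab", "-n", "50", "-c", "10", url],    # 50 requests, 10 concurrent (assumed concurrency)
        capture_output=True, text=True, check=True,
    ).stdout
    rps = re.search(r"Requests per second:\s+([\d.]+)", out)
    failed = re.search(r"Failed requests:\s+(\d+)", out)
    print(name, "throughput:", rps.group(1), "req/s, failed:", failed.group(1))
```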
138

Fail Over Strategy for Fault Tolerance in Cloud Computing Environment

Mohammed, Bashir, Kiran, Mariam, Maiyama, Kabiru M., Kamala, Mumtaz A., Awan, Irfan U. 04 April 2017 (has links)
Cloud fault tolerance is an important issue in cloud computing platforms and applications. In the event of an unexpected system failure or malfunction, a robust fault-tolerant design may allow the cloud to continue functioning correctly, possibly at a reduced level, instead of failing completely. To ensure high availability of critical cloud services, application execution, and hardware performance, various fault-tolerant techniques exist for building self-autonomous cloud systems. In comparison to current approaches, this paper proposes a more robust and reliable architecture using an optimal checkpointing strategy to ensure high system availability and reduced system task service finish time. Using pass rates and virtualised mechanisms, the proposed Smart Failover Strategy (SFS) scheme uses components such as a Cloud fault manager, Cloud controller, Cloud load balancer, and a selection mechanism, providing fault tolerance via redundancy, optimized selection, and checkpointing. In our approach, the Cloud fault manager repairs faults generated before the task deadline is reached, blocking unrecoverable faulty nodes as well as their virtual nodes. The scheme is also able to remove temporary software faults from recoverable faulty nodes, thereby making them available for future requests. We argue that the proposed SFS algorithm makes the system highly fault tolerant by considering forward and backward recovery using diverse software tools. Compared to existing approaches, preliminary experiments with the SFS algorithm indicate an increase in pass rates and a consequent decrease in failure rates, showing overall good performance in task allocations. We present these results using experimental validation tools with comparison to other techniques, laying a foundation for a fully fault-tolerant IaaS Cloud environment.
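The snippet below is a simplified illustration, not the SFS implementation, of the checkpoint-and-failover idea: a task records its progress at fixed intervals, and when the active node fails, a standby node resumes from the last checkpoint instead of restarting from scratch. The failure probability, node names, and step counts are artificial.

```python
# Checkpointed execution with failover to the next node on a simulated crash.
import random

def run_with_failover(total_steps, checkpoint_every, nodes, fail_prob=0.05):
    checkpoint, step, failures = 0, 0, 0
    node = nodes[0]                                  # primary node
    while step < total_steps:
        if random.random() < fail_prob:              # simulated crash of the active node
            failures += 1
            node = nodes[failures % len(nodes)]      # fail over to the next node
            step = checkpoint                        # resume from the checkpoint, not from zero
            print(f"failure -> resuming on {node} at step {checkpoint}")
            continue
        step += 1
        if step % checkpoint_every == 0:
            checkpoint = step                        # persist progress
    return node, step

random.seed(1)
print(run_with_failover(100, 10, ["vm-1", "vm-2", "vm-3"]))
```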
139

Runtime Systems for Load Balancing and Fault Tolerance on Distributed Systems

Arafat, Md Humayun January 2014 (has links)
No description available.
140

Load-Balancing and Task Mapping for Exascale Systems

Deveci, Mehmet 22 May 2015 (has links)
No description available.
