181

COGNITIVE RADIO AND GAME THEORY : OVERVIEW AND SIMULATION

Elnourani, Mohamed January 2008 (has links)
This thesis aims to clearly describe cognitive radio, its components, and its operation. Moreover, it describes the expected outcomes of the most common techniques proposed for use in cognitive radios. In addition, it presents the basic principles of game theory and some simple game models that can be used to analyze the efficiency of the optimization algorithms. Furthermore, it investigates the use of a load-balancing algorithm and a genetic algorithm to optimize the decision-making operation in cognitive radios. Matlab simulations were carried out, and the results show the promising benefit of using these two algorithms together with game theory to optimize the dynamic spectrum allocation process.
182

Automated Live Migration of Virtual Machines

Glad, Andreas, Forsman, Mattias January 2013 (has links)
This thesis studies the area of virtualization, with a focus on the sub-area of live migration, a technique that allows a virtual machine to be moved seamlessly from one physical machine to another. Virtualization is an attractive technique, utilized in large computer systems such as data centers. By using live migration, data center administrators can migrate virtual machines seamlessly, without the users of the virtual machines noticing the migrations. Manually initiated migrations become cumbersome with an ever-increasing number of physical machines. The number of physical and virtual machines is not the only problem; deciding when to migrate and where to migrate are other problems that need to be solved. Manually initiated migrations can also be inaccurate and untimely. Two strategies for automated live migration have been developed in this thesis: the Push strategy and the Pull strategy. The Push strategy tries to get rid of virtual machines and the Pull strategy tries to steal virtual machines. Both strategies, their design and implementation, are presented in the thesis. The strategies use Shannon's information entropy to measure the balance of the system. They further use a cost model to predict the time a migration would require; this is combined with the entropy measure to decide which virtual machine to migrate if and when a hotspot occurs. The implementation was done with the help of OMNeT++, an open-source simulation tool. The strategies are evaluated through a set of simulations covering a variety of scenarios with different workloads. Our results show that the developed strategies can re-balance a system of computers, after a large number of virtual machines has been added or removed, in only 4-5 minutes. The results further show that our strategies are able to keep the system balanced at medium load while virtual machines are continuously added to or removed from the system. The contribution this thesis brings to the field is a model for how automated live migration of virtual machines can be done to improve the performance of a computer system, for example a data center.
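As an illustrative aside, a minimal sketch of how Shannon entropy can quantify how evenly virtual machines are spread across hosts; the host names and VM counts below are made-up examples, not data from the thesis:

```python
import math

def balance_entropy(vm_counts):
    """Shannon entropy of the VM distribution, normalized to [0, 1].

    1.0 means VMs are spread perfectly evenly across hosts;
    values near 0 mean most VMs sit on a single host.
    """
    total = sum(vm_counts.values())
    if total == 0 or len(vm_counts) < 2:
        return 1.0
    entropy = 0.0
    for count in vm_counts.values():
        if count > 0:
            p = count / total
            entropy -= p * math.log2(p)
    return entropy / math.log2(len(vm_counts))  # divide by the maximum possible entropy

# Hypothetical cluster states: host -> number of VMs
print(balance_entropy({"host1": 4, "host2": 4, "host3": 4}))   # 1.0, perfectly balanced
print(balance_entropy({"host1": 10, "host2": 1, "host3": 1}))  # ~0.5, hotspot on host1
```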
183

Comparison of System Performance During DDoS Attacks in Modern Operating Systems

Pettersson, Erik January 2017 (has links)
Distributed Denial of Service attacks are an ever-present challenge for system administrators to overcome today. The attack, which is all about restricting legitimate users' access to a service such as a web page, can cost companies and governments millions of dollars if not properly managed. This study aims to explore whether there is any difference in performance between some of the most modern iterations of popular server operating systems: Windows Server 2016, Ubuntu 16 and FreeBSD 11. They are subjected to one of the most popular DDoS attacks at the time of writing, a so-called HTTP GET request flood. The web servers used are among the most widely used today, Apache and Microsoft IIS. Each server is submitted to attacks and compared against the others. Different defence methods are also tested and examined. The tests include shorter tests that are repeated multiple times for data validity, and one longer test for every condition in order to check whether the results are similar. During these tests, the operating systems measure CPU/RAM utilization, and a control computer measures round-trip time. Windows Server 2016 using IIS and FreeBSD 11 perform similarly in terms of resources, but Windows Server 2016 with IIS had better round-trip-time performance. Windows Server 2016 with Apache performs worst in all measurements, while Ubuntu 16 performs in the middle but has the most stable performance.
184

Cloud Computing : Trends and Performance Issues

Al-Refai, Ali, Pandiri, Srinivasreddy January 2011 (has links)
Context: Cloud Computing is a very fascinating concept these days; it is attracting many organizations to move their utilities and applications into dedicated data centers so that they can be accessed from the Internet. This allows users to focus solely on their businesses while Cloud Computing providers handle the technology. Choosing the best provider is a challenge for organizations that are willing to step into the Cloud Computing world. A single cloud center generally cannot deliver a large scale of resources for the cloud tenants; therefore, multiple cloud centers need to collaborate to achieve some business goals and to provide the best possible services at the lowest possible costs. However, a number of aspects, legal issues, challenges, and policies should be taken into consideration when moving a service into the Cloud environment. Objectives: The aim of this research is to identify and elaborate the major technical and strategic differences between Cloud Computing providers in order to give organizations' management, system designers and decision makers better insight into the strategies of the different Cloud Computing providers. It also aims to understand the risks and challenges of adopting Cloud Computing and how those issues can be mitigated. This study tries to define Multi-Cloud Computing by studying the pros and cons of this new domain, and it also studies the concept of load balancing in the cloud in order to examine performance over multiple cloud environments. Methods: In this master thesis a number of research methods are used, including a systematic literature review, contacting experts from the relevant field (interviews) and performing a quantitative methodology (experiment). Results: Based on the findings of the literature review, interviews and experiment, we obtained the following results for the research questions: 1) a comprehensive study identifying and comparing the major Cloud Computing providers; 2) a list of impacts of Cloud Computing (legal aspects, trust and privacy); 3) a definition of Multi-Cloud Computing and its benefits and drawbacks; 4) performance results for the cloud environment obtained by performing an experiment on a load balancing solution. Conclusions: Cloud Computing has become a central interest for many organizations nowadays. More and more companies are starting to step into Cloud Computing service technologies; Amazon, Google, Microsoft, SalesForce, and Rackspace are the top five major providers in the market today. However, there is no cloud that is perfect for all services. The legal framework is very important for the protection of users' private data; it is a key factor for the safety of users' personal and sensitive information. Privacy threats vary according to the nature of the cloud scenario, since some clouds and services face much lower privacy threats than others; the public cloud, accessed through the Internet, is among the most exposed to increasing privacy threats. Lack of visibility into the provider's supply chain will lead to suspicion and ultimately distrust. The evolution of Cloud Computing suggests that, in the near future, the so-called Cloud will in fact be a Multi-Cloud environment composed of a mixture of private and public clouds to form an adaptive environment. Load balancing in the Cloud Computing environment is different from typical load balancing. The architecture of cloud load balancing uses a number of commodity servers to perform the load balancing. The performance of the cloud differs depending on the cloud's location, even for the same provider. The HAProxy load balancer shows a positive effect on the cloud's performance at high load; the effect is unnoticeable at lower load. These effects can vary depending on the location of the cloud.
185

Parallélisations de méthodes de programmation par contraintes / Parallelizations of constraint programming methods

Menouer, Tarek 26 June 2015 (has links)
In the context of the PAJERO project, we propose in this thesis an external parallelization of a Constraint Programming (CP) solver, using both search parallelization and portfolio parallelization, in order to solve constraint satisfaction and optimization problems. In our work the search parallelization is adapted for deterministic and non-deterministic executions, according to the needs of the user. The principle is to partition the unique search tree generated by one search strategy into a set of sub-trees, then assign each sub-tree to one computing core. A search strategy here means an algorithm that decides which variable is selected to be assigned at each node of the search tree, and also decides the ordering of the search. In CP, several search strategies exist and each one may be better than the others for solving a specific problem. The difficulty lies in how to choose the best strategy. To benefit from the variety of strategies and the availability of computational resources, another parallelization, called portfolio parallelization, is used. Its principle is to execute N search strategies in parallel; the first strategy that finds a solution stops the others. The novelty of our work in the portfolio context is to adapt the schedule of the N strategies in order to favour the most promising strategy, the one most likely to find a solution first, by giving it more cores than the others. The promising strategy is selected using two methods. The first method uses an estimation function that selects the strategy with the smallest search tree. The second method uses a learning algorithm that automatically determines the number of cores allocated to each strategy based on previous experiments. We have also proposed a new resource allocation system based on a scheduling strategy combined with an economic model in order to execute several CP applications. These applications are solved using parallel solvers on a cloud computing infrastructure. The originality of this system is that the number of resources allocated to each CP application is determined automatically according to the economic class of the user. The performance obtained by our parallelization methods is illustrated by solving CP problems using the Google OR-Tools solver on top of the parallel Bobpp framework.
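As an illustrative aside, a minimal Python sketch of the portfolio idea described above: run several search strategies concurrently and keep the first answer that comes back. The strategy functions and the problem instance are hypothetical placeholders, not the thesis code:

```python
from concurrent.futures import ProcessPoolExecutor, FIRST_COMPLETED, wait

def portfolio_solve(strategies, problem, workers=4):
    """Run every search strategy on the same problem in parallel and return
    the result of the first one to finish; the remaining futures are cancelled
    (best effort: searches that already started are simply abandoned)."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(strategy, problem) for strategy in strategies]
        done, pending = wait(futures, return_when=FIRST_COMPLETED)
        for f in pending:
            f.cancel()
        return next(iter(done)).result()

# Hypothetical usage with two toy "strategies" (plain functions standing in
# for two different search orderings that reach the same solution):
def strategy_a(problem):
    return sorted(problem)

def strategy_b(problem):
    return sorted(problem, reverse=True)[::-1]

if __name__ == "__main__":
    print(portfolio_solve([strategy_a, strategy_b], [3, 1, 2]))
```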
186

Amélioration de la dissémination de données biaisées dans les réseaux structurés / Improving skewed data dissemination in structured overlays

Antoine, Maeva 23 September 2015 (has links)
Many distributed systems face the problem of load imbalance between machines. With the advent of Big Data, large datasets whose values are often highly skewed are produced by heterogeneous sources, often to be processed in real time. Thus, it is necessary to be able to adapt to variations in the size, content and source of the incoming data. In this thesis, we focus on RDF data, a format of the Semantic Web. We propose a novel approach to improve data distribution, based on the use of several order-preserving hash functions. This allows an overloaded peer to independently modify its hash function in order to reduce the interval of values it is responsible for. More generally, to address the load imbalance issue, there exist almost as many load balancing strategies as there are different systems. We show that many load balancing schemes are composed of the same basic elements, and that only the implementation and interconnection of these elements vary. Based on this observation, we describe the concepts behind the building of a common API to implement any load balancing strategy independently of the rest of the code. Implemented on our distributed storage system, the API has a minimal impact on the business code and allows the developer to change only part of a strategy without modifying the other components. We also show how modifying some parameters can lead to significant improvements in the results.
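As an illustrative aside, a small sketch of what an order-preserving hash over short ASCII keys can look like, and of how shrinking the interval a peer answers for hands part of its keys to a neighbour. The interval bounds and keys are invented for the example; this is not the thesis's scheme:

```python
def order_preserving_hash(key, length=8):
    """Map an ASCII string to a float in [0, 1) so that lexicographic
    order of keys is preserved by the numeric order of their hashes."""
    value, scale = 0.0, 1.0
    for ch in key[:length].ljust(length, "\0"):
        scale /= 256.0
        value += ord(ch) * scale
    return value

class Peer:
    """A peer responsible for keys whose hash falls in [low, high)."""
    def __init__(self, low, high):
        self.low, self.high = low, high

    def owns(self, key):
        return self.low <= order_preserving_hash(key) < self.high

# Hypothetical overloaded peer: it shrinks its interval so that keys
# near the upper end become the responsibility of its successor.
peer = Peer(0.25, 0.50)
print(peer.owns("berlin"), peer.owns("lisbon"))  # True, True
peer.high = 0.40                                 # shrink responsibility
print(peer.owns("berlin"), peer.owns("lisbon"))  # True, False
```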
187

Placement d'applications parallèles en fonction de l'affinité et de la topologie / Placement of parallel applications according to the topology and the affinity

Tessier, Francois 26 January 2015 (has links)
Computer simulation is one of the pillars of science and industry. Climate simulation, cosmology, and heart modeling are all areas in which computing power needs are constantly growing. How, then, can these applications be scaled? Parallelization and massively parallel supercomputers are the only ways to achieve this. Nevertheless, there is a price to pay given hardware topologies of ever-increasing complexity, both in terms of network and of memory hierarchy. The issue of data locality therefore becomes central: how can the distance between a processing entity and the data it needs to access be reduced? Application placement is one of the levers to address this problem. In this thesis, we present the TreeMatch algorithm and its application to static placement, that is to say at the launch time of the application, and to dynamic placement. For this second approach, we propose taking data locality into account within a load-balancing algorithm. The different approaches discussed are validated by experiments on both benchmarking codes and real applications.
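As an illustrative aside, a simplified greedy sketch of affinity-aware process placement — not the TreeMatch algorithm itself, only the underlying intuition of putting heavily communicating processes on nearby cores. The communication matrix and the two-node topology are invented for the example:

```python
import itertools

def greedy_placement(comm, cores_per_node, num_nodes):
    """Greedily co-locate the process pairs that exchange the most data.

    comm[i][j] is the (symmetric) communication volume between processes
    i and j. Returns a dict process -> (node, core). This is a toy
    heuristic, far simpler than hierarchical algorithms such as TreeMatch.
    """
    n = len(comm)
    pairs = sorted(itertools.combinations(range(n), 2),
                   key=lambda p: comm[p[0]][p[1]], reverse=True)
    placement = {}
    free = [[(node, core) for core in range(cores_per_node)]
            for node in range(num_nodes)]
    for i, j in pairs:
        for proc in (i, j):
            if proc in placement:
                continue
            partner = j if proc == i else i
            # prefer the node where the partner already sits, else the emptiest node
            target = placement[partner][0] if partner in placement else None
            node = target if target is not None and free[target] else \
                   max(range(num_nodes), key=lambda k: len(free[k]))
            if free[node]:
                placement[proc] = free[node].pop()
    return placement

# Hypothetical 4-process communication matrix on a 2-node, 2-cores-per-node machine:
comm = [[0, 9, 1, 1],
        [9, 0, 1, 1],
        [1, 1, 0, 8],
        [1, 1, 8, 0]]
print(greedy_placement(comm, cores_per_node=2, num_nodes=2))
# processes 0 and 1 end up on one node, 2 and 3 on the other
```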
188

Ambientes de execução para o modelo de atores em plataformas hierárquicas de memória compartilhada com processadores de múltiplos núcleos / Dealing with actor runtime environments on hierarchical shared memory multi-core platforms

Emilio de Camargo Francesquini 16 May 2014 (has links)
The actor model is present in several mission-critical systems, such as those supporting WhatsApp and Facebook Chat. These systems serve thousands of clients simultaneously, therefore demanding substantial computing resources usually provided by multi-processor and multi-core platforms. Non-Uniform Memory Access (NUMA) architectures account for an important share of these platforms, yet research on the suitability of current actor runtime environments for these machines is very limited. Current runtime environments generally assume a flat memory space and thus do not perform as well as they could. In this thesis we study the challenges that hierarchical shared-memory multi-core platforms present to actor runtime environments. In particular, we investigate aspects related to memory management, scheduling, and load balancing. We analyze and characterize actor-based applications in order to propose improvements to actor runtime environments. This analysis highlighted the existence of peculiar communication structures. We argue that the comprehension of these structures and knowledge about the underlying hardware architecture can be used in tandem to improve application performance. As a proof of concept, we implemented our proposal using a real actor runtime environment, the Erlang Virtual Machine (VM). Concurrency in Erlang is based on the actor model and the language has a consistent syntax for actor handling. Our modifications to the Erlang VM significantly improved the performance of some applications thanks to better-informed decisions on scheduling and on load balancing.
189

Dynamické vyvažování zátěže v paralelních aplikacích / Dynamic Load-Balancing in Parallel Applications

Dvořáček, Vojtěch January 2017 (has links)
This thesis aims to implement a dynamic load-balancing mechanism in a parallel simulation model of heat distribution in a CPU cooler. The first part introduces the theoretical foundations of dynamic load balancing and describes current solution approaches. The second part covers the heat distribution model and related topics such as the MPI communication library and the HDF library used for data storage. It then proceeds to the implementation of the simulation model with a dynamic 2D decomposition of the square model domain. A custom geometry-based dynamic load-balancing algorithm, which works with this decomposition, was introduced. An important part of the implementation is the Zoltan library, used especially for data migration. Finally, a set of experiments is presented that demonstrates the load-balancing abilities of the designed model, together with conclusions and motivation for future research.
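As an illustrative aside, a minimal sketch of the kind of geometry-based rebalancing one slice of such a 2D decomposition can use: cut positions are moved so that each rank owns a contiguous block with roughly its fair share of the measured work. The work profile and rank count are invented for the example, and this is not the thesis's algorithm:

```python
def rebalance_cuts(column_work, num_ranks):
    """Place num_ranks-1 cuts along the domain columns so that every rank
    owns a contiguous block with roughly equal total work.

    column_work[i] is the measured cost of simulating column i.
    Returns a list of (start, end) column ranges, one per rank.
    """
    total = sum(column_work)
    target = total / num_ranks
    ranges, start, acc = [], 0, 0.0
    for i, w in enumerate(column_work):
        acc += w
        # close the current block once it has gathered ~one share of the work
        if acc >= target and len(ranges) < num_ranks - 1:
            ranges.append((start, i + 1))
            start, acc = i + 1, 0.0
    ranges.append((start, len(column_work)))  # the last rank takes the remainder
    return ranges

# Hypothetical per-column work: a hot region in the middle of the domain
work = [1, 1, 1, 4, 6, 6, 4, 1, 1, 1]
print(rebalance_cuts(work, num_ranks=3))
# [(0, 5), (5, 7), (7, 10)] -- the hot columns are split across the ranks
```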
190

Steady State Analysis of Load Balancing Algorithms in the Heavy Traffic Regime

January 2019 (has links)
This dissertation studies load balancing algorithms for many-server systems (with N servers) and focuses on the steady-state performance of load balancing algorithms in the heavy-traffic regime. The framework of Stein's method and (iterative) state-space collapse (SSC) is used to analyze three load balancing systems: 1) load balancing in the Sub-Halfin-Whitt regime with exponential service times; 2) load balancing in the Beyond-Halfin-Whitt regime with exponential service times; 3) load balancing in the Sub-Halfin-Whitt regime with Coxian-2 service times. For the Sub-Halfin-Whitt regime, sufficient conditions are established such that any load balancing algorithm satisfying them has both asymptotically zero waiting time and zero waiting probability. Furthermore, the number of servers with more than one job is o(1); in other words, the system collapses to a one-dimensional space. The result is proven using Stein's method and state-space collapse (SSC), which are powerful mathematical tools for the steady-state analysis of load balancing algorithms. The second system is in an even "heavier" traffic regime, and an iterative refinement procedure is proposed to obtain the steady-state metrics. Again, asymptotically zero delay and waiting are established for a set of load balancing algorithms. Different from the first system, the system collapses to a two-dimensional state space instead of a one-dimensional one. The third system is more challenging because of the "non-monotonicity" of Coxian-2 service times, and an iterative state-space collapse is proposed to tackle this challenge. For each of the three systems, a set of load balancing algorithms is established under which the probability that an incoming job is routed to an idle server is asymptotically one at steady state. The set of load balancing algorithms includes join-the-shortest-queue (JSQ), idle-one-first (I1F), join-the-idle-queue (JIQ), and power-of-d-choices (Pod) with a carefully chosen d. / Doctoral Dissertation, Electrical Engineering, 2019
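As an illustrative aside, a minimal sketch of the power-of-d-choices (Pod) routing rule mentioned above: sample d servers uniformly at random and send the arriving job to the least loaded of them. The queue lengths and d below are invented for the example; the dissertation's analysis concerns the choice of d in heavy traffic, not this toy code:

```python
import random

def pod_route(queue_lengths, d=2, rng=random):
    """Power-of-d-choices: probe d servers chosen uniformly at random
    and return the index of the one with the shortest queue."""
    probed = rng.sample(range(len(queue_lengths)), d)
    return min(probed, key=lambda server: queue_lengths[server])

# Hypothetical system with 10 servers; route 5 arriving jobs
rng = random.Random(0)
queues = [3, 0, 2, 5, 1, 4, 0, 2, 3, 1]
for _ in range(5):
    chosen = pod_route(queues, d=2, rng=rng)
    queues[chosen] += 1  # the job joins the chosen server's queue
    print("job sent to server", chosen, "queues:", queues)
```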
