171

Otimização da capacidade de instalações sucro-alcooleiras / Optimization of the capacity of sugar and alcohol plants

Nascimento, Ademar Nogueira do 23 February 2006
Advisor: Maria Teresa Moreira Rodrigues / Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Química / Abstract: This study addresses the optimization of sugar-cane transportation and reception logistics in the sugar and alcohol industry, based on waiting-queue models, followed by a linear-programming model for production optimization whose objective is to maximize the total profit from the sale of the products (sugar and alcohol) and by-products (sugar-cane bagasse, filter cake, stillage, and electricity). For the queueing study, the methodology consisted of analyzing the cane transportation system of the Aliança Mill, a facility located in the interior of the state of Bahia, Brazil. The collected data were processed in Microsoft Excel, which was used for the goodness-of-fit tests of the probability distributions, and cane-unloading simulations, in which the number of unloading facilities (dumpers) was varied, were carried out with the Quantitative Systems for Business Plus (QSB+) software. The linear-programming model was based on a detailed study of the unit operations of 95 plants in this sector, carried out by the Instituto de Pesquisas Tecnológicas do Estado de São Paulo (IPT); Excel's Solver was used to simulate profit-maximization scenarios. The results show that re-planning the cane transportation fleet to include a mix of vehicles with different load capacities is the most suitable way to optimize unloading at the mills. Concerning production optimization, the formulated model responds satisfactorily to price variations of the products and by-products, indicating suitable production scenarios. The study thus contributes to process improvement in the sugar and alcohol industry, linking the supply function to the production function in a systematized way, grounded in Chemical and Production Engineering. / Doctorate, Chemical Process Systems and Informatics / Doctor of Chemical Engineering
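The production-optimization step described above is a linear program: decide how much cane to route to sugar versus alcohol so that total profit from products and by-products is maximized under cane-supply and capacity constraints. The thesis builds its model from the IPT survey of 95 plants and solves it with Excel's Solver; the sketch below only illustrates the structure of such a model, with made-up yields, prices, and capacities, using SciPy's linprog instead.

```python
# Toy linear program illustrating the production-optimization structure:
# decide how much cane (tonnes) goes to sugar vs. alcohol to maximize profit.
# All yields, prices, and capacities below are hypothetical placeholders.
from scipy.optimize import linprog

CANE_AVAILABLE = 10_000.0   # tonnes of cane milled in the planning period
SUGAR_CAP      = 700.0      # tonnes of sugar the plant can produce
ALCOHOL_CAP    = 450.0      # m^3 of alcohol the plant can produce

# Yields per tonne of cane (hypothetical)
SUGAR_YIELD   = 0.10        # t sugar / t cane
ALCOHOL_YIELD = 0.085       # m^3 alcohol / t cane
BAGASSE_YIELD = 0.27        # t bagasse / t cane (by-product sold for energy)

# Unit prices (hypothetical)
P_SUGAR, P_ALCOHOL, P_BAGASSE = 350.0, 420.0, 15.0

# Decision variables: x = [cane_to_sugar, cane_to_alcohol].
# Profit per tonne of cane sent to each route (bagasse accrues on both routes).
profit = [SUGAR_YIELD * P_SUGAR + BAGASSE_YIELD * P_BAGASSE,
          ALCOHOL_YIELD * P_ALCOHOL + BAGASSE_YIELD * P_BAGASSE]

# linprog minimizes, so negate the objective to maximize profit.
c = [-p for p in profit]
A_ub = [[1.0, 1.0],                 # total cane routed <= cane available
        [SUGAR_YIELD, 0.0],         # sugar output <= sugar capacity
        [0.0, ALCOHOL_YIELD]]       # alcohol output <= alcohol capacity
b_ub = [CANE_AVAILABLE, SUGAR_CAP, ALCOHOL_CAP]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print("cane to sugar / alcohol (t):", res.x)
print("maximum profit:", -res.fun)
```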
172

Otimização do posicionamento de concentradores GPRS em redes elétricas inteligentes utilizando programação linear e teoria de filas / Positioning optimization of GPRS concentrators in smart grids using linear programming and queuing theory

Souza, Gustavo Batista de Castro 17 July 2014
Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Abstract: Smart grid systems have become widespread around the world. RF mesh communication systems have helped make power systems smarter and more reliable through the deployment of distribution automation and demand response technologies. This work presents a methodology for positioning GPRS concentrators in a ZigBee mesh network of energy meters in order to bound the average network delay and thereby improve the performance of the communication service. The proposed algorithm determines the number and placement of concentrators using integer linear programming and a queueing model for the mesh network. The solutions given by the proposed algorithm are validated by verifying the network performance through computer simulations based on real network scenarios.
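The core of the method is to choose the smallest set of concentrator positions such that every meter is covered and the queueing delay at each concentrator stays below a target. The thesis formulates this as an integer linear program coupled with a mesh-network queueing model; the brute-force sketch below, with hypothetical coverage sets and rates and a plain M/M/1 delay as a stand-in for that queueing model, only illustrates the coupling.

```python
# Toy placement problem: choose the fewest candidate sites so that every meter
# is covered and the mean delay at each chosen concentrator, modeled here as an
# M/M/1 queue (a stand-in for the thesis's mesh queueing model), stays under a
# target. Coverage sets, rates, and the delay target are hypothetical.
from itertools import combinations

METERS = range(8)
COVERS = {                      # candidate site -> meters it can reach
    "A": {0, 1, 2},
    "B": {2, 3, 4},
    "C": {4, 5, 6},
    "D": {6, 7, 0},
    "E": {1, 3, 5, 7},
}
LAMBDA_PER_METER = 2.0          # packets/s generated by each meter
MU = 40.0                       # packets/s a concentrator can forward
MAX_DELAY = 0.1                 # seconds

def mm1_delay(lam, mu):
    """Mean sojourn time of an M/M/1 queue; infinite if unstable."""
    return float("inf") if lam >= mu else 1.0 / (mu - lam)

def feasible(sites):
    covered = set().union(*(COVERS[s] for s in sites))
    if covered != set(METERS):
        return False
    # Each meter attaches to one covering concentrator (first in the tuple here).
    load = {s: 0.0 for s in sites}
    for m in METERS:
        owner = next(s for s in sites if m in COVERS[s])
        load[owner] += LAMBDA_PER_METER
    return all(mm1_delay(l, MU) <= MAX_DELAY for l in load.values())

best = None
for k in range(1, len(COVERS) + 1):
    for sites in combinations(COVERS, k):
        if feasible(sites):
            best = sites
            break
    if best:
        break
print("smallest feasible placement:", best)
```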
173

Network protocol for distribution and handling of data from JAS 39 Gripen / Nätverksprotokoll för distribuering och hantering av data från JAS 39 Gripen

Karlsson, Jonathan January 2015
On board the JAS 39 Gripen aircraft, a measuring system, the Data Acquisition System (DAS), sends sensor data to a server on the ground. In this master's thesis, a unified API for distribution and handling of the sensor data is designed and implemented. The work was carried out at Saab Aeronautics, Linköping, during 2014. During flights, the engineers at Saab need to monitor different sensors in the aircraft, including the exact commands of the pilots. All of that data is serialized and sent via radio link to a server at Saab. The current data distribution solution includes several clients that need to connect to the server, each with its own connection protocol, making the system complex and difficult to maintain. An API is needed so that clients connect in a unified manner; it would also enable future clients to implement the API and start receiving sensor data from the server. The research conducted in the thesis project centered on the design choices that exist for such an API. The question that needed answering was: how can an existing complex system be replaced by a publish-subscribe system, and what are the benefits in terms of latency and flexibility? The design had to be flexible enough to support multiple clients. The resulting API was designed, under requirements on latency and functionality, using the publish-subscribe design pattern, the networking library Zero Message Queue (ZMQ), and the threading library pthreads. The result is a flexible system that was sufficiently fast for the requirements set at Saab and open to future extensions. The system supports multiple coexisting servers and clients that request sensor data; a new feature is that clients can send calculations performed on samples to other clients. To demonstrate that the solution provides a unified framework, two existing clients and the server were developed with the proposed API, and tests were performed in the control room at Saab to verify the latency requirements.
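The API described above is built around the publish-subscribe pattern over ZMQ: the server publishes sensor samples tagged with topics, and each client subscribes only to the topics it needs. The sketch below is a minimal Python illustration using pyzmq, with a thread standing in for the pthreads-based server; the topic names, port, and message format are hypothetical and not Saab's actual API.

```python
# Minimal publish-subscribe sketch with pyzmq: a publisher thread streams
# (topic, value) samples and a subscriber filters on topic prefixes.
# Topic names, port, and message format are hypothetical illustrations only.
import threading
import time
import zmq

ENDPOINT = "tcp://127.0.0.1:5556"

def publisher(n_samples=5):
    ctx = zmq.Context.instance()
    pub = ctx.socket(zmq.PUB)
    pub.bind(ENDPOINT)
    time.sleep(0.2)                                      # let subscribers connect
    for i in range(n_samples):
        pub.send_string(f"sensor.altitude {1000 + i}")   # topic followed by payload
        pub.send_string(f"sensor.speed {300 + i}")
        time.sleep(0.05)
    pub.send_string("control.stop done")
    pub.close()

def subscriber():
    ctx = zmq.Context.instance()
    sub = ctx.socket(zmq.SUB)
    sub.connect(ENDPOINT)
    sub.setsockopt_string(zmq.SUBSCRIBE, "sensor.altitude")  # prefix filter
    sub.setsockopt_string(zmq.SUBSCRIBE, "control.")
    while True:
        msg = sub.recv_string()
        if msg.startswith("control.stop"):
            break
        print("received:", msg)
    sub.close()

t = threading.Thread(target=publisher)
t.start()
subscriber()
t.join()
```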
174

Physique statistique des systèmes désordonnés / Stochastic growth models : universality and fragility

Gueudré, Thomas 30 September 2014
This thesis presents several aspects of stochastic interface growth through its most paradigmatic model, the Kardar-Parisi-Zhang (KPZ) equation. Although very simple in its expression, this equation exhibits a rich phenomenology and has been studied intensively for decades. The existence of a new universality class is now well established, containing some of the most common growth models, such as the Eden model or the polynuclear growth model. The KPZ equation is also closely related to optimization problems in the presence of disorder (the directed polymer) and to the turbulence of fluids (the Burgers equation), which underlines its importance. Nonetheless, the boundaries of this universality class are still poorly understood. The focus of this thesis is, after presenting the most recent progress in the field, to probe those limits through various modifications of the models. It is divided into four parts: i) First, we present theoretical tools, borrowed from integrable systems, that allow the evolution of the interface to be characterized in great detail. These tools exhibit considerable flexibility, which we illustrate by tackling the case of a confined geometry (an interface growing along a hard wall). ii) We then investigate the influence of the disorder distribution, and more specifically the importance of extreme events, with heavy-tailed distributions. These large fluctuations of the disorder stretch the interface and notably modify the scaling exponents. The consequences for optimization strategies in disordered landscapes are emphasized. iii) The presence of correlations in the disorder is of immediate experimental interest. Although such correlations do not change the universality class, they greatly influence the average growth speed of the interface. This part is dedicated to the study of this average speed, often overlooked because it is non-universal and rather delicate to define, and to the existence of a growth optimum intimately linked to the competition between exploration and exploitation. iv) Finally, we consider an experimental example of stochastic growth, the propagation of a chemical front in a disordered porous medium (which does not, however, belong to the KPZ class), and develop a phenomenological formalism that efficiently reproduces the observations. Throughout the manuscript, the consequences of the observed phenomena in various domains, such as optimization strategies, population dynamics, turbulence, and finance, are detailed.
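For reference, the central object of the thesis, the KPZ equation, governs the height h(x, t) of the growing interface through a smoothing term, a slope-dependent nonlinear growth term, and a noise term:

```latex
\frac{\partial h(x,t)}{\partial t}
  = \nu\,\frac{\partial^{2} h}{\partial x^{2}}
  + \frac{\lambda}{2}\left(\frac{\partial h}{\partial x}\right)^{2}
  + \sqrt{D}\,\eta(x,t),
\qquad
\langle \eta(x,t)\,\eta(x',t')\rangle = \delta(x-x')\,\delta(t-t').
```

In one spatial dimension this equation defines the KPZ universality class, in which height fluctuations grow as t^{1/3} and spatial correlations spread as t^{2/3}.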
175

Prediction Of Queue Waiting Times For Metascheduling On Parallel Batch Systems

Rajath Kumar, * 08 1900
Production parallel systems are space-shared and employ batch queues in which jobs submitted to the system wait before execution. Jobs submitted to parallel batch systems therefore incur queue waiting times in addition to execution times. Predicting these queue waiting times is important for providing overall estimates to users and can also help meta-schedulers make scheduling decisions.

In the first part of our research, we developed an integrated framework, PQStar, for the identification and prediction of jobs with short queue waiting times. Analyses of supercomputer job traces reveal that about 56 to 99% of jobs incur queue waiting times of less than an hour. Hence, identifying these quick starters, or jobs with short queue waiting times, is essential for overall improvement of queue waiting time predictions. An important aspect of our prediction strategy for quick starters is that it considers the processor occupancy state and the queue state at the time of job submission, in addition to job characteristics such as the requested number of processors and the estimated runtime. Our experiments with different production supercomputer job traces show that our prediction strategies correctly identify about 20% more quick starters on average, provide tighter bounds for these jobs, and yield about 24% higher overall prediction accuracy on average than the next best existing method.

We also developed a framework for predicting ranges of queue waiting times for the other classes of jobs by employing multi-class classification on similar jobs in the history. Our hierarchical prediction strategy first predicts the point wait time of a job using a dynamic k-Nearest Neighbor (kNN) method. It then performs multi-class classification using Support Vector Machines (SVMs) among all the job classes. The probabilities given by the SVM for the predicted class (obtained from the kNN), along with its neighboring classes, are used to provide a set of wait-time ranges with probabilities. Our experiments with different production supercomputer job traces show that our prediction strategies give about 8% better accuracy on average in predicting the non-quick starters, compared to the next best existing method.

Finally, we used these predictions and probabilities in a meta-scheduling strategy that distributes jobs to different queues/sites in a multi-queue/grid environment to minimize job wait times. For a given target job, we first identify the queues/sites where the job can be a quick starter, giving a set of candidate queues/sites for scheduling the job. We then compute the expected value of the predicted wait time in each candidate queue/site and schedule the job to the one with the minimum expected value. We performed experiments with different production supercomputer job traces and synthetic traces for various system sizes, partitioning schemes, and workloads. These experiments show that our scheduling strategy gives much better performance than existing scheduling policies, reducing the overall average queue waiting time of jobs by about 47% on average.
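The two-stage strategy described above, a kNN point estimate followed by a multi-class SVM that attaches probabilities to wait-time ranges, can be sketched with scikit-learn. The features, range boundaries, and synthetic training data below are hypothetical stand-ins for the job-trace attributes used in the thesis.

```python
# Sketch of the two-stage wait-time prediction described above:
# 1) kNN gives a point estimate of the queue wait,
# 2) an SVM classifies the job into wait-time ranges with probabilities.
# Features, range boundaries, and the synthetic data are hypothetical.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 500
# Features: requested processors, estimated runtime (h), jobs already queued,
# fraction of processors currently busy.
X = np.column_stack([
    rng.integers(1, 256, n),
    rng.uniform(0.1, 24.0, n),
    rng.integers(0, 50, n),
    rng.uniform(0.0, 1.0, n),
])
# Synthetic "true" wait times (hours), loosely increasing with load.
wait = 0.02 * X[:, 0] * X[:, 3] + 0.1 * X[:, 2] + rng.exponential(0.5, n)

BINS = [1.0, 3.0, 12.0]                    # range boundaries in hours
y_class = np.digitize(wait, BINS)          # 0: <1h, 1: 1-3h, 2: 3-12h, 3: >12h
labels = {0: "<1h", 1: "1-3h", 2: "3-12h", 3: ">12h"}

knn = KNeighborsRegressor(n_neighbors=5).fit(X, wait)
svm = SVC(probability=True).fit(X, y_class)   # in practice, scale features first

job = np.array([[64, 4.0, 20, 0.9]])          # a hypothetical incoming job
point = knn.predict(job)[0]
probs = svm.predict_proba(job)[0]
print(f"kNN point estimate: {point:.2f} h")
print("P(wait-time range):",
      {labels[c]: round(p, 2) for c, p in zip(svm.classes_, probs)})
```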
176

Dynamique des carnets d’ordres : analyse statistique, modélisation et prévision / Dynamics of limit order book : statistical analysis, modelling and prediction

Huang, Weibing 18 December 2015
This thesis consists of two connected parts, the first on limit order book modeling and the second on tick value effects. In the first part, we present our framework for Markovian order book modeling. The queue-reactive model is first introduced, in which we revise the traditional zero-intelligence approach by adding state dependency to the order arrival processes. An empirical study shows that this model is very realistic and reproduces many interesting microscopic features of the underlying asset, such as the distribution of the order book. We also demonstrate that it can be used as an efficient market simulator, allowing for the assessment of complex placement tactics. We then extend the queue-reactive model to a general Markovian framework for order book modeling. Ergodicity conditions are discussed in detail in this setting. Under some rather weak assumptions, we prove the convergence of the order book state towards an invariant distribution and that of the rescaled price process to a standard Brownian motion. In the second part of this thesis, we study the role played by the tick value at both microscopic and macroscopic scales. First, an empirical study of the consequences of a tick value change is conducted using data from the 2014 Japanese tick size reduction pilot program. A prediction formula for the effects of a tick value change on trading costs is derived and successfully tested. Then, an agent-based model is introduced in order to explain the relationships between market volume, price dynamics, bid-ask spread, tick value, and the equilibrium order book state.
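The defining feature of the queue-reactive model is that order-flow intensities depend on the current state of the queues rather than being constant, as in zero-intelligence models. A minimal single-queue sketch is given below; the intensity functions are hypothetical placeholders, not the intensities calibrated in the thesis.

```python
# Minimal single-queue sketch of the "queue-reactive" idea: the intensities of
# limit-order insertions, cancellations, and market orders depend on the current
# queue size q instead of being constant. Intensity functions are hypothetical.
import random

def insert_rate(q):   # insertions slow down as the queue grows
    return 2.0 / (1.0 + 0.1 * q)

def cancel_rate(q):   # each resting order may be cancelled
    return 0.05 * q

def market_rate(q):   # market orders consume the queue when it is non-empty
    return 0.8 if q > 0 else 0.0

def simulate(horizon=1_000.0, seed=1):
    random.seed(seed)
    t, q, history = 0.0, 5, []
    while t < horizon:
        rates = [insert_rate(q), cancel_rate(q), market_rate(q)]
        total = sum(rates)
        t += random.expovariate(total)        # time to the next event
        u = random.uniform(0.0, total)        # pick which event occurred
        if u < rates[0]:
            q += 1                            # a limit order joins the queue
        else:
            q = max(q - 1, 0)                 # cancellation or market order
        history.append(q)
    return history

hist = simulate()
print("mean queue size:", sum(hist) / len(hist))
print("empirical distribution of small sizes:",
      {k: round(hist.count(k) / len(hist), 3) for k in range(6)})
```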
177

Sur la dépendance des queues de distributions / On the tail dependence of distributions

Aleiyouka, Mohalilou 27 September 2018
The modeling of dependence between several variables can rely either on the correlation between the variables or on other measures that characterize the tail dependence of their distributions. In this thesis, we are interested in the tail dependence of distributions, presenting some properties and results. First, we obtain the tail dependence coefficient of the generalized hyperbolic distribution for the different parameter values of this law. Then, we exhibit some properties and results on the extremal dependence coefficient in the case where the random variables follow a unit Fréchet law. Finally, we present a real-time database management system (RTDBMS); the goal is to propose probabilistic models to study the behavior of real-time transactions in order to optimize their performance.
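The central quantity here is the upper tail dependence coefficient, lambda_U = lim_{u -> 1} P(F_Y(Y) > u | F_X(X) > u), the probability that one variable is extreme given that the other is. A simple empirical estimator at a fixed threshold u is sketched below on a simulated bivariate Gaussian sample (whose true limit is 0); the generalized hyperbolic case treated in the thesis is not reproduced here.

```python
# Empirical estimate of the upper tail dependence coefficient at threshold u:
#   lambda_U(u) = P(F_Y(Y) > u and F_X(X) > u) / P(F_X(X) > u).
# Illustrated on a bivariate Gaussian sample, for which the true limit is 0.
import numpy as np

def empirical_upper_tail_dependence(x, y, u=0.95):
    """Estimate P(V > u | U > u) using the empirical ranks of x and y."""
    n = len(x)
    # Empirical probability-integral transform: rank / (n + 1) lies in (0, 1).
    fx = np.argsort(np.argsort(x)) / (n + 1.0)
    fy = np.argsort(np.argsort(y)) / (n + 1.0)
    both = np.mean((fx > u) & (fy > u))
    return both / np.mean(fx > u)

rng = np.random.default_rng(42)
cov = [[1.0, 0.7], [0.7, 1.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=100_000).T
for u in (0.90, 0.95, 0.99):
    print(f"u = {u}: empirical lambda_U ~ {empirical_upper_tail_dependence(x, y, u):.3f}")
```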
178

Resource management in computer clusters : algorithm design and performance analysis / Gestion des ressources dans les grappes d’ordinateurs : conception d'algorithmes et analyse de performance

Comte, Céline 24 September 2019
The growing demand for cloud-based services encourages operators to maximize resource efficiency within computer clusters. This motivates the development of new technologies that make resource management more flexible. However, exploiting this flexibility to reduce the number of computers also requires efficient resource-management algorithms whose performance is predictable under stochastic demand. In this thesis, we design and analyze such algorithms using the framework of queueing theory.

Our abstraction of the problem is a multi-server queue with several customer classes. Servers have heterogeneous capacities, and the customers of each class enter the queue according to an independent Poisson process. Each customer may be processed in parallel by several servers, depending on compatibility constraints described by a bipartite graph between classes and servers, and each server applies the first-come-first-served policy to its compatible customers. We first prove that, if the service requirements are independent and exponentially distributed with unit mean, this simple policy yields the same average performance as balanced fairness, an extension of processor sharing known to be insensitive to the distribution of the service requirements. A more general form of this result, relating order-independent queues to Whittle networks, is also proved. Lastly, we derive new formulas to compute performance metrics.

These theoretical results are then put into practice. We first propose a scheduling algorithm that extends the principle of round-robin to a cluster where each incoming job is assigned to a pool of computers by which it can subsequently be processed in parallel. Our second proposal is a load-balancing algorithm based on tokens for clusters where jobs have assignment constraints. Both algorithms are approximately insensitive to the job size distribution and adapt dynamically to demand. Their performance can be predicted by applying the formulas derived for the multi-server queue.
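As an illustration of token-based load balancing with assignment constraints, the sketch below simulates a dispatcher in which each server exposes one token per free slot, an arriving job takes the oldest token of a compatible server, and a job finding no compatible token is blocked. The compatibility graph, rates, and the blocking rule are illustrative assumptions, not the exact algorithm or model analyzed in the thesis.

```python
# Sketch of a token-based dispatcher with assignment constraints: each server
# contributes one token per free slot; an arriving job of class k takes the
# oldest token of a compatible server, or is blocked if none is available.
# Compatibilities, rates, and the blocking rule are illustrative assumptions.
import heapq
import random

COMPAT = {"a": {"s1", "s2"}, "b": {"s2", "s3"}}   # class -> compatible servers
SLOTS = {"s1": 2, "s2": 2, "s3": 1}               # tokens (slots) per server
ARRIVAL = {"a": 1.0, "b": 1.5}                    # Poisson arrival rate per class
MU = 1.2                                          # service rate per slot

def simulate(horizon=10_000.0, seed=7):
    random.seed(seed)
    # A token is (time at which it becomes available again, server name).
    tokens = [(0.0, s) for s in SLOTS for _ in range(SLOTS[s])]
    events = [(random.expovariate(r), k) for k, r in ARRIVAL.items()]
    heapq.heapify(events)
    accepted = blocked = 0
    while True:
        t, k = heapq.heappop(events)
        if t > horizon:
            break
        # Schedule the next arrival of this class.
        heapq.heappush(events, (t + random.expovariate(ARRIVAL[k]), k))
        # Oldest available compatible token (FIFO by availability time).
        avail = [tok for tok in tokens if tok[0] <= t and tok[1] in COMPAT[k]]
        if not avail:
            blocked += 1
            continue
        accepted += 1
        tok = min(avail)
        tokens.remove(tok)
        # The token returns when the service on that slot completes.
        tokens.append((t + random.expovariate(MU), tok[1]))
    return accepted, blocked

acc, blk = simulate()
print(f"accepted: {acc}, blocked: {blk}, blocking probability: {blk / (acc + blk):.3f}")
```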
