About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
181

Resource management in computer clusters : algorithm design and performance analysis / Gestion des ressources dans les grappes d’ordinateurs : conception d'algorithmes et analyse de performance

Comte, Céline 24 September 2019 (has links)
The growing demand for cloud-based services encourages operators to maximize resource efficiency within computer clusters. This motivates the development of new technologies that make resource management more flexible. However, exploiting this flexibility to reduce the number of computers also requires efficient resource-management algorithms with predictable performance under stochastic demand. In this thesis, we design and analyze such algorithms using the framework of queueing theory. Our abstraction of the problem is a multi-server queue with several customer classes. Servers have heterogeneous capacities, and the customers of each class enter the queue according to an independent Poisson process. Each customer can be processed in parallel by several servers, subject to compatibility constraints described by a bipartite graph between classes and servers, and each server applies the first-come-first-served policy to its compatible customers. We first prove that, if the service requirements are independent and exponentially distributed with unit mean, this simple policy yields the same average performance as balanced fairness, an extension of processor-sharing known to be insensitive to the distribution of the service requirements. A more general form of this result, relating order-independent queues to Whittle networks, is also proved. Lastly, we derive new formulas to compute performance metrics. These theoretical results are then put into practice.
We first propose a scheduling algorithm that extends the principle of round-robin to a cluster where each incoming job is assigned to a pool of computers by which it can subsequently be processed in parallel. Our second proposal is a load-balancing algorithm based on tokens for clusters where jobs have assignment constraints. Both algorithms are approximately insensitive to the job size distribution and adapt dynamically to demand. Their performance can be predicted by applying the formulas derived for the multi-server queue.
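The FCFS multi-server queue described in this abstract lends itself to a short event-driven simulation. The sketch below is a hypothetical toy instance, not the author's code: two classes and two servers with invented rates, capacities, and compatibility graph. Each server works on the earliest compatible customer in the queue, and a customer's completion rate is the sum of the capacities of the servers currently assigned to it (valid for unit-mean exponential service requirements by memorylessness).

```python
import random
from typing import Dict, List, Set

random.seed(1)

# Hypothetical instance: class 0 is compatible with server 0 only,
# class 1 with both servers (bipartite compatibility graph).
COMPAT: Dict[int, Set[int]] = {0: {0}, 1: {0, 1}}
CAPACITY: List[float] = [1.0, 2.0]   # heterogeneous server capacities
LAMBDA: List[float] = [0.4, 0.8]     # per-class Poisson arrival rates

def simulate(horizon: float = 20_000) -> float:
    """Gillespie-style simulation; returns the time-averaged queue length."""
    queue: List[int] = []            # ordered list of customer classes (FCFS)
    t, area = 0.0, 0.0
    while t < horizon:
        # Each server serves the earliest compatible customer in the queue;
        # a customer's total service rate is the sum of its servers' capacities.
        rates = [0.0] * len(queue)
        for s, cap in enumerate(CAPACITY):
            for i, c in enumerate(queue):
                if s in COMPAT[c]:
                    rates[i] += cap
                    break
        total = sum(LAMBDA) + sum(rates)
        dt = random.expovariate(total)
        area += len(queue) * dt
        t += dt
        u = random.uniform(0, total)
        if u < sum(LAMBDA):          # arrival event
            queue.append(0 if u < LAMBDA[0] else 1)
        else:                        # service completion (unit-mean exp. sizes)
            u -= sum(LAMBDA)
            for i, r in enumerate(rates):
                if u < r:
                    queue.pop(i)
                    break
                u -= r
    return area / t

print(simulate())
```

The time-averaged queue length from such a run could, in principle, be checked against the balanced-fairness formulas the thesis derives.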
183

New semi-parametric estimators of the extreme tail-dependence index / De nouveaux estimateurs semi-paramétriques de l'indice de dépendance extrême de queue

Cissé, Mamadou Lamine January 2020 (has links) (PDF)
No description available.
184

The black housing market - A survey of the general public's attitude towards the market and possible solutions. / Den svarta bostadsmarknaden - en kartläggning av den allmänna attityden kring marknaden och tänkbara lösningar.

Huuva, Renée, Koyuncu, Özge January 2014 (has links)
The aim of this Bachelor of Science thesis is to study the black housing market and its spread and expansion within the Stockholm region. This hidden form of crime is more extensive and organized than ever before. According to Fastighetsägarna, its turnover exceeds one billion Swedish kronor in Stockholm City, Sundbyberg and Solna. Demand for rental apartments is at an all-time high, and queues for tenancies in the inner city are at record lengths. There is a housing shortage in the region, and too few rental apartments are being built. Besides regulated rents, a contributing cause is that Sweden today has the highest building costs in the European Union, even though the country is among the most resource-rich. The policy that everyone should be able to live in the inner city is primitive when a tenancy's geographical location is not reflected in the rent itself. This leads to a hidden economic value in tenancies, which are later resold on the black housing market. Instead of healthy growth in the housing market, a black trade has formed. Black-market brokers who sell these tenancies illegally reach their buyers through contacts and housing ads on forums such as Björnsbytare and Blocket. This thesis intends to analyze what this criminality looks like and what alternative ways there are to solve the problem. A questionnaire with over 200 respondents has been carried out, and interviews with representatives from interest organizations have been conducted, because literature in this field is scarce. Radical actions are needed if the housing market is to recover. There are various ways of doing this, which will be discussed further in this thesis. Most important, our results indicate that a new generation has been raised to regard the black housing market as acceptable. It has become a natural feature of their everyday lives.
We need to restore the general public's attitude if we ever want to achieve a functioning housing market. The first and most important measure is to build more rental apartments; politically, there are several ways to stimulate housing construction.
185

A comparison of algorithms used in traffic control systems / En jämförelse av algoritmer i trafiksystem

Björck, Erik, Omstedt, Fredrik January 2018 (has links)
A challenge in today's society is to handle the large number of vehicles traversing an intersection. Traffic lights are often used to control the traffic flow in these intersections. However, there are inefficiencies, since the algorithms used to control the traffic lights do not adapt perfectly to the traffic situation. The purpose of this paper is to compare three different types of algorithms used in traffic control systems to find out how to minimize vehicle waiting times. A pretimed, a deterministic, and a reinforcement learning algorithm were compared with each other. Tests were conducted on a four-way intersection with various traffic demands using the program Simulation of Urban MObility (SUMO). The results showed that the deterministic algorithm performed best for all demands tested. The reinforcement learning algorithm performed better than the pretimed algorithm for low demands, but worse for varied and higher demands. The reasons behind these results are the deterministic algorithm's knowledge of vehicular movement and the negative effect the curse of dimensionality has on the training of the reinforcement learning algorithm. However, more research must be conducted to ensure that the results obtained are trustworthy in similar and different traffic situations.
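The pretimed-versus-deterministic comparison can be illustrated on a drastically simplified intersection. This is a hedged toy model, not SUMO: two single-lane approaches, Bernoulli arrivals with invented rates, and one departure per green step.

```python
import random

def simulate(controller, arrival_rates=(0.3, 0.2), steps=10_000):
    """Toy two-approach intersection: one FIFO queue per approach; the
    approach holding the green discharges one vehicle per time step.
    Returns the mean total queue length (a waiting-time proxy, by
    Little's law)."""
    random.seed(0)                      # fixed seed: same arrivals for both
    queues = [0, 0]
    total = 0
    for t in range(steps):
        for i, rate in enumerate(arrival_rates):
            if random.random() < rate:  # Bernoulli arrival this step
                queues[i] += 1
        green = controller(t, queues)
        if queues[green] > 0:
            queues[green] -= 1
        total += sum(queues)
    return total / steps

# Pretimed: fixed 10-step phases, blind to demand.
pretimed = lambda t, q: (t // 10) % 2
# Deterministic, demand-aware: always serve the longer queue.
longest_queue = lambda t, q: 0 if q[0] >= q[1] else 1

print(simulate(pretimed), simulate(longest_queue))
```

Even this toy reproduces the qualitative finding above: the demand-aware rule beats the fixed cycle, because a fixed cycle wastes green time on empty approaches.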
186

Inference of buffer queue times in data processing systems using Gaussian Processes : An introduction to latency prediction for dynamic software optimization in high-end trading systems / Inferens av buffer-kötider i dataprocesseringssystem med hjälp av Gaussiska processer

Hall, Otto January 2017 (has links)
This study investigates whether Gaussian Process Regression can be applied to evaluate buffer queue times in large-scale data processing systems. It additionally considers whether high-frequency data stream rates can be generalized into a small subset of the sample space. With the aim of providing a basis for dynamic software optimization, a promising foundation for continued research is introduced. The study is intended to contribute to Direct Market Access financial trading systems, which process immense amounts of market data daily. Due to certain limitations, we adopt a naïve approach and model latencies as a function of data throughput alone, over eight small historical intervals. The training and test sets are constructed from raw market data, and we resort to pruning operations that shrink the datasets to approximately 0.05% of their original size in order to achieve computational feasibility. We further consider four different implementations of Gaussian Process Regression. The resulting algorithms perform well on pruned datasets, with an average R² statistic of 0.8399 over six test sets of approximately the same size as the training set. Testing on non-pruned datasets indicates shortcomings in the generalization procedure, where input vectors corresponding to low-latency target values are associated with lower accuracy. We conclude that, depending on the application, these shortcomings may make the model intractable. For the purposes of this study, however, it is found that buffer queue times can indeed be modelled by regression algorithms. We discuss several methods for improvement, with regard both to pruning procedures and to Gaussian Processes, and open up promising directions for continued research.
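As a rough illustration of the regression task, the sketch below fits a one-dimensional Gaussian Process (RBF kernel, posterior mean only) to synthetic throughput-versus-latency data. The data-generating curve, hyperparameters, and noise level are invented for illustration and are unrelated to the thesis's market data or its four implementations.

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length_scale=1.0, noise=1e-2):
    """Gaussian Process Regression posterior mean with an RBF kernel,
    given noisy scalar observations (X_train, y_train)."""
    def rbf(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length_scale) ** 2)
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf(X_test, X_train)
    return K_s @ np.linalg.solve(K, y_train)

# Hypothetical data: latency grows nonlinearly with throughput,
# mimicking a queueing-style blow-up near saturation.
rng = np.random.default_rng(0)
x = np.linspace(0, 5, 40)                          # throughput (arbitrary units)
y = 0.1 * np.exp(0.6 * x) + rng.normal(0, 0.05, x.size)  # latency + noise
pred = gp_predict(x, y, x)
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 3))
```

An R² computed this way on held-out intervals is the kind of statistic the study reports for its pruned datasets.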
187

Performance Modelling and Evaluation of Active Queue Management Techniques in Communication Networks. The development and performance evaluation of some new active queue management methods for internet congestion control based on fuzzy logic and random early detection using discrete-time queueing analysis and simulation.

Abdel-Jaber, Hussein F. January 2009 (has links)
Since the field of computer networks has grown rapidly in the last two decades, congestion control of traffic loads within networks has become a high priority. Congestion occurs in network routers when the number of incoming packets exceeds the available network resources, such as buffer space and bandwidth allocation. This may result in poor network performance with respect to average packet queueing delay, packet loss rate and throughput. To enhance performance when the network becomes congested, several different active queue management (AQM) methods have been proposed, and some of these are discussed in this thesis. Specifically, these AQM methods are surveyed in detail and their strengths and limitations are highlighted. A comparison is conducted between five known AQM methods, Random Early Detection (RED), Gentle Random Early Detection (GRED), Adaptive Random Early Detection (ARED), Dynamic Random Early Drop (DRED) and BLUE, based on several performance measures, including mean queue length, throughput, average queueing delay, overflow packet loss probability, packet dropping probability and the total of overflow loss and dropping probabilities for packets, with the aim of identifying which AQM method gives the most satisfactory results across these measures. This thesis presents a new AQM approach, based on the RED algorithm, that detects and controls congestion at router buffers at an early stage. This approach is called Dynamic RED (REDD); it stabilises the average queue length between the minimum and maximum threshold positions at a certain level, called the target level, to prevent queues from building up in the router buffers. A comparison is made between the proposed REDD, RED and ARED approaches with regard to the above performance measures. Moreover, three methods based on RED and fuzzy logic are proposed to control congested router buffers incipiently.
These methods are named REDD1, REDD2 and REDD3, and their performance is also compared with RED using the above performance measures to identify which method achieves the most satisfactory results. Furthermore, a set of discrete-time analytical queueing models is developed, based on the RED, GRED, DRED and BLUE approaches, to detect congestion at router buffers at an early stage. The proposed analytical models use the instantaneous queue length as a congestion measure to capture short-term changes in the input and to prevent packet loss due to overflow. The proposed analytical models are experimentally compared with their corresponding AQM simulations, with reference to the above performance measures, to identify which approach gives the most satisfactory results. The simulations for RED, GRED, ARED, DRED, BLUE, REDD, REDD1, REDD2 and REDD3 are run ten times, each time with a different seed, and the results of each run are used to obtain mean values, variance, standard deviation and 95% confidence intervals. The performance measures are calculated from data collected only after the system has reached a steady state. After extensive experimentation, the results show that the proposed REDD, REDD1, REDD2 and REDD3 algorithms, and some of the proposed analytical models such as the DRED-Alpha, RED and GRED models, offer somewhat better mean queue length and average queueing delay than those achieved by RED and its variants when the packet arrival probability is greater than the packet departure probability, i.e. in a congestion situation. This suggests that when traffic is largely of a non-bursty nature, the instantaneous queue length might be a better congestion measure than the average queue length used in the more traditional models.
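For readers unfamiliar with RED, the baseline mechanism this thesis builds on can be sketched as follows. This is a minimal toy with invented thresholds and arrival/departure probabilities; the REDD variants described above additionally steer the average queue toward a target level, which this sketch omits.

```python
import random

random.seed(2)

# Classic RED: the drop probability rises linearly with the *average*
# (EWMA-smoothed) queue length between two thresholds, and all arrivals
# are dropped above the maximum threshold.
MIN_TH, MAX_TH, MAX_P, W = 5, 15, 0.1, 0.002

def red_drop(avg: float) -> float:
    """RED packet-drop probability for a given average queue length."""
    if avg < MIN_TH:
        return 0.0
    if avg >= MAX_TH:
        return 1.0
    return MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)

def simulate(arrival_p=0.7, service_p=0.6, steps=50_000):
    """Discrete-time toy queue: Bernoulli arrivals and departures,
    congested on purpose (arrival_p > service_p)."""
    q, avg, drops = 0, 0.0, 0
    for _ in range(steps):
        avg = (1 - W) * avg + W * q          # EWMA of instantaneous queue
        if random.random() < arrival_p:
            if random.random() < red_drop(avg):
                drops += 1                   # early probabilistic drop
            else:
                q += 1
        if q > 0 and random.random() < service_p:
            q -= 1
    return avg, drops

avg, drops = simulate()
print(round(avg, 1), drops)
```

In this congested setting the average queue settles near the maximum threshold and RED sheds the excess load by early drops rather than buffer overflow, which is the behaviour the thesis's performance measures quantify.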
188

Performance modelling and analysis of congestion control mechanisms for communication networks with quality of service constraints. An investigation into new methods of controlling congestion and mean delay in communication networks with both short range dependent and long range dependent traffic.

Fares, Rasha H.A. January 2010 (has links)
Active Queue Management (AQM) schemes are used to ensure Quality of Service (QoS) in telecommunication networks. However, they are sensitive to parameter settings and have weaknesses in detecting and controlling congestion under dynamically changing network situations. Another drawback of current AQM algorithms is that they have been applied only to Markovian models, which are Short Range Dependent (SRD) traffic models. However, traffic measurements from communication networks have shown that network traffic can exhibit self-similar as well as Long Range Dependent (LRD) properties. Therefore, it is important to design new algorithms that not only control congestion but can also predict the onset of congestion within a network. The aim of this research is to devise new congestion control methods for communication networks that make use of traffic characteristics, such as LRD, that have not previously been employed in the congestion control methods currently used in the Internet. A queueing model with a number of ON/OFF sources has been used, and this incorporates a novel congestion prediction algorithm for AQM. The simulation results have shown that applying the algorithm can provide better performance than an equivalent system without the prediction. Modifying the algorithm by including a sliding-window mechanism has been shown to further improve performance in terms of controlling the total number of packets within the system and improving the throughput. Also considered is the important problem of maintaining QoS constraints, such as mean delay, which is crucial to the satisfactory transmission of real-time services over multi-service networks like the Internet, which was not originally designed for this purpose. An algorithm has been developed to provide a control strategy that operates on a buffer incorporating a moveable threshold.
The algorithm has been developed to control the mean delay by dynamically adjusting the threshold, which, in turn, controls the effective arrival rate by randomly dropping packets. This work has been carried out using a mixture of computer simulation and analytical modelling. The performance of the new methods that have / Ministry of Higher Education in Egypt and the Egyptian Cultural Centre and Educational Bureau in London
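A rough sketch of the moveable-threshold idea follows. All constants are invented, the arrival process is plain Bernoulli rather than the thesis's ON/OFF sources, and the drift rule is a simple integral controller; this is a hypothetical illustration of the mechanism, not the thesis's analytical model.

```python
import random

random.seed(3)

def simulate(target_delay=8.0, arrival_p=0.65, service_p=0.6,
             drop_p=0.3, k=0.01, steps=50_000):
    """Buffer with a moveable threshold: arrivals finding the queue above
    the threshold are dropped with probability drop_p, which lowers the
    effective arrival rate; the threshold drifts so that the running mean
    delay (queue length / effective throughput, by Little's law) is
    steered toward a target."""
    q, threshold = 0, 20.0
    q_area, served = 0, 0
    mean_delay = 0.0
    for t in range(1, steps + 1):
        if random.random() < arrival_p:
            if q <= threshold or random.random() > drop_p:
                q += 1                       # admitted packet
        if q > 0 and random.random() < service_p:
            q -= 1
            served += 1
        q_area += q
        lam_eff = served / t if served else 1e-9
        mean_delay = (q_area / t) / lam_eff  # Little's law: W = L / lambda
        threshold += k * (target_delay - mean_delay)  # drift toward target
        threshold = max(threshold, 1.0)
    return mean_delay

print(round(simulate(), 1))
```

The point of the sketch is the control loop: dropping probabilistically above a threshold regulates the effective arrival rate, and moving the threshold regulates the mean delay.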
189

Optimal Capacity Connection Queue Management for TSOs and DSOs

Nilsson Rova, Therese January 2023 (has links)
As electricity demand increases dramatically in Sweden, the need to use the existing electricity grid as efficiently as possible gains importance. As demand expands, so does production in the form of wind parks and solar parks. This has led to an increase in connection requests at Svenska Kraftnät, the Swedish transmission system operator. The current process for accepting or rejecting these requests is based on the first-come-first-served principle, where each request is investigated separately. This thesis investigates an alternative way of processing the requests in clusters and optimizing which combination is best to accept from a technical point of view. To handle this multi-objective combinatorial optimization problem, a multi-objective Genetic Algorithm with a Pareto filter is developed. The Genetic Algorithm finds a refined Pareto front containing optimal solutions, which are plotted with their objective function values. The user can then easily analyze the optimal solutions and decide on the final request combination to accept. The developed Genetic Algorithm reaches a near-optimal Pareto front estimate after exploring between 15% and 40% of the solution space.
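The Pareto filter at the core of such a multi-objective Genetic Algorithm keeps only non-dominated solutions. A minimal sketch, assuming minimization in every objective; the candidate objective vectors (e.g. two hypothetical technical criteria per request combination) are invented for illustration:

```python
def pareto_filter(solutions):
    """Return the non-dominated subset of a list of objective vectors
    (minimization). One vector dominates another if it is no worse in
    every objective and strictly better in at least one; with all-<=
    plus inequality of the tuples, that is exactly the check below."""
    front = []
    for s in solutions:
        dominated = any(
            all(o <= v for o, v in zip(other, s)) and other != s
            for other in solutions
        )
        if not dominated:
            front.append(s)
    return front

# Hypothetical objective vectors for five request combinations:
candidates = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_filter(candidates))   # -> [(1, 5), (2, 3), (4, 1)]
```

In the thesis's workflow, a front like this is what gets plotted so the user can pick the final combination by inspection.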
190

Traps and ageing for random walks in highly irregular random environments: phenomenology and a case study / Pièges et vieillissement pour les marches aléatoires sur des environnements aléatoires hautement irréguliers : phénoménologie et étude de cas

Davignon, Élise 11 1900 (has links)
We first present an introduction to the topic of random walks in random environments (RWRE). In particular, we look at slow-down phenomena and, more specifically, at the ageing properties exhibited by many such systems when the parameters are chosen such that the random environment frequently produces large "traps": structures that hold up the progress of the random walk by keeping it in the same region of the environment for long periods of time. We illustrate these behaviours by presenting known results for two such models. We then present a proof of an ageing property for the biased random walk on heavy-tailed random conductances on the infinite hyper-cubic lattice in d dimensions; this is the subject of a research article pending publication.
