81 |
Distributed Scheduling and Delay-Throughput Optimization in Wireless Networks under the Physical Interference Model
Pei, Guanhong, 21 January 2013
We investigate several aspects of the performance of wireless networks, including throughput, delay, and distributed complexity. One of the main challenges in optimizing them arises from radio interference, an inherent factor in wireless networks. Graph-based interference models, a large class of models widely used in the study of wireless networks, have the weakness of describing the interference caused by wireless signals in an over-simplified, local, and binary way. A more sophisticated model, the physical interference model based on SINR constraints, is considered more realistic but is harder to analyze because of its non-linear form and non-local nature. In this dissertation, we study the connections between the two types of interference models, graph-based and physical, and tackle a set of fundamental problems under the physical interference model; some of these problems were previously open even under graph-based interference models, and for those we provide solutions under both types of models.

The underlying interference model affects scheduling and power control, the essential building blocks of wireless network operation that deal directly with the wireless medium; compared with graph-based models, the physical interference model compounds the problem of efficient scheduling and power control by making it non-local and non-linear. Optimizing system performance and trading off throughput against delay require a "global" view across the transport, network, media access control (MAC), and physical layers (referred to as cross-layer optimization) in order to exploit the control planes at different levels of the wireless protocol stack. This is achieved by regulating traffic rates, finding flow paths for end-to-end sessions, controlling access to the wireless medium (or channels), assigning transmission powers, and handling signal reception under interference.

The theme of the dissertation is distributed algorithms and the optimization of QoS objectives under the physical interference model. We begin by developing the first low-complexity distributed scheduling and power control algorithms that maximize the efficiency ratio under different interference models; we derive end-to-end per-flow delay upper bounds for our scheduling algorithms, the first such bounds for multihop traffic that are independent of the network size. Building on this, we design the first cross-layer multi-commodity optimization frameworks for delay-constrained throughput maximization, bringing routing and traffic control into the problem scope. Scheduling and power control are also inherent to the distributed computation of "global" problems, such as maximum independent set problems (over transmitting links and local broadcasts, respectively) and minimum spanning tree problems. Under the physical interference model, we provide the first sub-linear-time distributed solutions to the maximum independent set problems, and we also solve the minimum spanning tree problems efficiently. We develop new techniques and algorithms and exploit available technologies (full-/half-duplex radios, fixed or software-defined power control) to further improve our algorithms; this fosters a deeper understanding of distributed scheduling from a network computing point of view.

We highlight our main technical contributions, which may be of independent interest for the design and analysis of optimization algorithms. We use linear and mixed-integer programs for delay-constrained throughput maximization, demonstrating how different combinatorial optimization approaches can be combined for multi-criteria optimization. We develop queueing-analysis techniques under general stochastic traffic to analyze network throughput and delay. We use randomized algorithms with rigorously analyzed performance guarantees to cope with the distributed nature of wireless data and control communications, and we factor in emerging radio technologies to improve the performance of our algorithms. Algorithmic techniques of broader use for the physical interference model include the formal development of a distributed computing model under SINR constraints, reductions between models of different technological capabilities, the redefinition of interference sets in the SINR setting, and techniques for the distributed computation of rulings (informally, well-separated covers of nodes or links). / Ph. D.
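To make the SINR constraints at the heart of the physical interference model concrete, the following minimal sketch (not taken from the dissertation) checks whether a set of simultaneously scheduled links is feasible; the path-loss exponent, noise level, and SINR threshold are illustrative assumptions.

```python
import math

def sinr_feasible(links, powers, alpha=3.0, noise=1e-9, beta=2.0):
    """Check SINR feasibility of a link set under the physical interference model.

    links  : list of (sender_xy, receiver_xy) coordinate pairs
    powers : transmission power assigned to each link
    alpha  : path-loss exponent (assumed value)
    beta   : SINR threshold required for successful reception (assumed value)
    """
    def gain(p, q):
        d = math.dist(p, q)
        return 1.0 / max(d, 1e-12) ** alpha  # distance-based path loss

    for i, (s_i, r_i) in enumerate(links):
        signal = powers[i] * gain(s_i, r_i)
        # Interference at receiver i is the sum over all other concurrent senders.
        interference = sum(powers[j] * gain(s_j, r_i)
                           for j, (s_j, _) in enumerate(links) if j != i)
        if signal / (noise + interference) < beta:
            return False  # link i cannot be decoded under this schedule
    return True

# Two short links far apart are feasible; placing a third sender next to a
# receiver would typically violate the SINR constraint.
links = [((0, 0), (1, 0)), ((100, 0), (101, 0))]
print(sinr_feasible(links, powers=[1.0, 1.0]))
```

Note how the constraint couples every receiver with every concurrent sender, which is exactly the non-local, non-linear structure that distinguishes this model from graph-based interference.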
|
82 |
Algorithmic problems in power management of computing systems / Problèmes algorithmiques dans les systèmes informatiques sous contraintes d'énergie
Zois, Georgios, 12 December 2014
This thesis focuses on energy-efficient algorithms for job scheduling problems on speed-scalable processors, as well as on processors operating under a thermal and cooling mechanism, where, for a given energy budget or thermal threshold, the goal is to optimize a Quality of Service criterion. Part of our research concerns scheduling problems arising in large-data processing environments. In this context, we focus on the MapReduce paradigm and consider problems of energy-efficient scheduling on multiple speed-scalable processors, as well as classical scheduling on a set of unrelated processors. First, we propose complexity results, optimal algorithms, and constant-competitive algorithms for different energy-aware variants of the problem of minimizing the maximum lateness of a set of jobs on a single speed-scalable processor. Then, we consider energy-aware MapReduce scheduling as well as classical MapReduce scheduling (where energy is not a concern) on unrelated processors, where the goal is to minimize the total weighted completion time of a set of MapReduce jobs. We study special cases and generalizations of both problems and propose constant-factor approximation algorithms. Finally, we study temperature-aware scheduling on a single processor that operates under a strict thermal threshold, where each job has its own heat contribution and the goal is to maximize the schedule's throughput. We consider the case of unit-length jobs with a common deadline and study the approximability of the problem.
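As a minimal illustration of the speed-scaling trade-off studied here (not code from the thesis), the sketch below evaluates the maximum lateness and energy of a fixed job order on a single speed-scalable processor with the standard power function P(s) = s^alpha; the job data and alpha are made-up assumptions.

```python
def schedule_stats(jobs, speeds, alpha=3.0):
    """Maximum lateness and energy of a schedule on one speed-scalable processor.

    jobs   : list of (work, due_date) pairs, already in the chosen execution order
    speeds : speed assigned to each job; running work w at speed s takes w/s time
             and costs w * s**(alpha - 1) energy under the power function P(s) = s**alpha
    alpha  : exponent of the power function (assumed value)
    """
    t, max_lateness, energy = 0.0, float("-inf"), 0.0
    for (work, due), s in zip(jobs, speeds):
        t += work / s                      # completion time of this job
        max_lateness = max(max_lateness, t - due)
        energy += work * s ** (alpha - 1)  # energy = power * time = s**alpha * (work / s)
    return max_lateness, energy

# Jobs in earliest-due-date order; raising the speed of the tight first job
# lowers the maximum lateness at the price of a higher energy consumption.
jobs = [(4.0, 5.0), (2.0, 6.0)]
print(schedule_stats(jobs, speeds=[1.0, 1.0]))  # (0.0, 6.0)
print(schedule_stats(jobs, speeds=[2.0, 1.0]))  # (-2.0, 18.0)
```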
|
83 |
An Approximation Framework for Sequencing Problems with Bipartite Structure / 二部分構造を持つ順序付け問題に対する近似方式
Aleksandar Shurbevski, 24 September 2014
Kyoto University / 0048 / New-system doctoral course / Doctor of Informatics / 甲第18621号 / 情博第545号 / 新制||情||96 (University Library) / 31521 / Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor 永持 仁, Professor 太田 快人, Professor 髙橋 豊 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
|
84 |
Computational analysis of smile weight distribution across the face for accurate distinction between genuine and posed smiles
Al-dahoud, Ahmad; Ugail, Hassan, January 2018
In this paper, we report the results of our recent research into understanding the exact distribution of a smile across the face, and in particular the difference in the weight distribution between a genuine and a posed smile. To do this, we have developed a computational framework for analysing the dynamic motion of various parts of the face during a facial expression, in particular the smile. The heart of our dynamic smile analysis framework is the use of optical flow intensity variation across the face during a smile, which can be utilised to efficiently map the dynamic motion of individual regions of the face such as the mouth, cheeks and areas around the eyes. Through our computational framework, we thus infer the exact distribution of the weights of the smile across the face. Further, using two publicly available datasets, namely the CK+ dataset with 83 subjects expressing posed smiles and the MUG dataset with 35 subjects expressing genuine smiles, we show that there is far greater activity, or weight, around the regions of the eyes in the case of a genuine smile. / Supported in part by the European Union's Horizon 2020 Programme H2020-MSCA-RISE-2017, under the project PDE-GIR with grant number 778035.
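The sketch below shows one plausible way (not the authors' implementation) to turn dense optical flow between consecutive frames into per-region smile weights; the region bounding boxes and the Farnebäck parameters are placeholder assumptions.

```python
import cv2
import numpy as np

def region_flow_weights(prev_gray, next_gray, regions):
    """Mean optical-flow magnitude per facial region between two consecutive frames.

    prev_gray, next_gray : consecutive grayscale frames of a smile sequence
    regions              : dict mapping region name -> (x, y, w, h) bounding box
    Returns a dict of per-region weights normalised to sum to 1.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)   # per-pixel motion intensity
    raw = {name: float(magnitude[y:y + h, x:x + w].mean())
           for name, (x, y, w, h) in regions.items()}
    total = sum(raw.values()) or 1.0
    return {name: value / total for name, value in raw.items()}

# Illustrative bounding boxes for a 256x256 aligned face crop (placeholder values).
regions = {"eyes": (48, 60, 160, 50), "cheeks": (40, 120, 176, 60), "mouth": (88, 180, 80, 50)}
prev_gray = np.zeros((256, 256), dtype=np.uint8)   # stand-ins for real video frames
next_gray = prev_gray.copy()
print(region_flow_weights(prev_gray, next_gray, regions))
```

Comparing the "eyes" weight across the two datasets would mirror the paper's observation that genuine smiles concentrate more motion around the eyes.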
|
85 |
Implementation of a Fast Approximation Algorithm for Precedence Constrained Scheduling
Alskog, Måns, January 2022
We present an implementation of a very recent approximation algorithm for scheduling jobs on a single machine with precedence constraints, minimising the total weighted completion time, and we evaluate the performance of this implementation. The algorithm, published by Shi Li in 2021, is a (6+ε)-approximation algorithm for the multiprocessor problem P|prec|∑ w_j C_j. We have implemented a version that is a (2+ε)-approximation algorithm for the single-processor problem 1|prec|∑ w_j C_j. This special case generalises easily to the multiprocessor case, as the two algorithms are based on the same LP relaxation of the problem. Unlike other approximation algorithms for this and similar problems, for example those published by Hall, Schulz, Shmoys and Wein in 1997 and by Li in 2020, this algorithm was developed with a focus on obtaining a good asymptotic run-time guarantee rather than the best possible guarantee on solution quality. Li's algorithm has run time O((n+κ) · polylog(n+κ) · log³ p_max · 1/ε²), where n is the number of jobs, κ is the number of precedence constraints and p_max is the largest processing time among the jobs. We also present a detailed explanation of the algorithm, based on Li's paper, aimed at readers who do not necessarily have a background in scheduling or approximation algorithms. Finally, we empirically evaluate how well our implementation of this algorithm performs in practice. Performance was measured on a set of 96 randomly generated instances, the largest having 1024 jobs and 32 768 precedence constraints. We can find a solution for an instance with 512 jobs and 11 585 precedence constraints in 25 minutes.
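For readers unfamiliar with the 1|prec|∑ w_j C_j objective, the following small sketch (illustrative only, not part of the thesis implementation) evaluates the total weighted completion time of a precedence-feasible single-machine schedule; the instance data is made up.

```python
def weighted_completion_time(jobs, order, precedences):
    """Objective value sum_j w_j * C_j of a single-machine schedule.

    jobs        : dict job -> (processing_time, weight)
    order       : list of jobs, the proposed schedule
    precedences : set of (a, b) pairs meaning job a must finish before job b starts
    """
    position = {job: i for i, job in enumerate(order)}
    if any(position[a] > position[b] for a, b in precedences):
        raise ValueError("order violates a precedence constraint")
    t, total = 0, 0
    for job in order:
        p, w = jobs[job]
        t += p              # completion time C_j on a single machine with no idling
        total += w * t
    return total

# Toy instance of 1|prec|sum w_j C_j with three jobs (illustrative data).
jobs = {"a": (2, 1), "b": (1, 10), "c": (3, 2)}
precedences = {("a", "b")}
print(weighted_completion_time(jobs, ["a", "b", "c"], precedences))  # 2*1 + 3*10 + 6*2 = 44
```

An approximation algorithm for this problem must pick the order (respecting the precedences) so that this objective is provably close to the optimum.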
|
86 |
Temporal Clustering of Finite Metric Spaces and Spectral k-Clustering
Rossi, Alfred Vincent, III, 30 October 2017
No description available.
|
87 |
Probability modeling of industrial situations using transform techniques
Hu, Xiaohong, January 1995
No description available.
|
88 |
Sparse Deployment of Large Scale Wireless Networks for Mobile Targets
Zheng, Zizhan, 08 September 2010
No description available.
|
89 |
[pt] ALGORITMOS DE APROXIMAÇÃO PARA ÁRVORES DE DECISÃO / [en] APPROXIMATION ALGORITHMS FOR DECISION TREES
Saettler, Aline Medeiros, 13 December 2021
[en] Decision tree construction is a central problem in several areas of computer science, for example database theory and computational learning. It can be viewed as the problem of evaluating a discrete function in which checking the value of each variable incurs a cost, and the points where the function is defined are associated with a probability distribution. The goal is to evaluate the function while minimizing the cost spent, either in the worst case or in expectation. In this thesis, we present four contributions related to this problem. The first is an algorithm that achieves an O(log n) approximation with respect to both the expected and the worst-case cost. The second is a procedure that combines two trees, one with worst-case cost W and another with expected cost E, and produces a tree with worst-case cost at most (1+p)W and expected cost at most (1/(1−e^(−p)))E, where p is a given parameter. We also prove that this is a sharp characterization of the best possible trade-off: there are infinitely many instances for which we cannot obtain a decision tree with both worst-case cost smaller than (1+p)·OPT_W(I) and expected cost smaller than (1/(1−e^(−p)))·OPT_E(I), where OPT_W(I) (resp. OPT_E(I)) denotes the cost of the decision tree that minimizes the worst-case cost (resp. expected cost) for an instance I of the problem. The third contribution is an O(log n) approximation algorithm for minimizing the worst-case cost in a variant of the problem where the cost of reading a variable depends on its value. Our final contribution is a randomized rounding algorithm that, given an instance of the problem (with an additional integer k > 0) and a parameter 0 < ε < 1/2, builds an oblivious decision tree with cost at most (3/(1−2ε))·ln(n)·OPT(I) that makes at most k/ε errors, where OPT(I) denotes the cost of the minimum-cost oblivious decision tree among all oblivious decision trees for instance I that make at most k classification errors.
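To make the two costs balanced by the trade-off result concrete, here is a small sketch (not from the thesis) that computes the worst-case cost W and expected cost E of a given decision tree over a distribution of input points; the tree, probing costs, and probabilities are made-up examples.

```python
class Node:
    """Decision-tree node: probe a variable (paying its cost) and branch on its
    value, or a leaf holding the function's value."""
    def __init__(self, var=None, cost=0.0, children=None, value=None):
        self.var, self.cost, self.children, self.value = var, cost, children, value

def worst_and_expected_cost(root, points):
    """Worst-case and expected probing cost of a decision tree.

    points : list of (assignment, probability) pairs, where assignment maps each
             variable to its value; the probabilities sum to 1.
    """
    worst, expected = 0.0, 0.0
    for assignment, prob in points:
        cost, cur = 0.0, root
        while cur.value is None:                # follow the tree down to a leaf
            cost += cur.cost                    # pay to read the probed variable
            cur = cur.children[assignment[cur.var]]
        worst = max(worst, cost)
        expected += prob * cost
    return worst, expected

# Tree for f(x1, x2) = x1 AND x2 that probes x1 first (illustrative costs/probabilities).
leaf0, leaf1 = Node(value=0), Node(value=1)
tree = Node(var="x1", cost=1.0,
            children={0: leaf0, 1: Node(var="x2", cost=2.0, children={0: leaf0, 1: leaf1})})
points = [({"x1": 0, "x2": 0}, 0.25), ({"x1": 0, "x2": 1}, 0.25),
          ({"x1": 1, "x2": 0}, 0.25), ({"x1": 1, "x2": 1}, 0.25)]
print(worst_and_expected_cost(tree, points))   # (3.0, 2.0)
```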
|
90 |
Algoritmické problémy související s průnikovými grafy / Algorithmic Problems Related to Intersection Graphs
Ivánek, Jindřich, January 2013
In this thesis we study two clique-cover problems which have interesting applications regarding the k-bend intersection graph representation: the edge-clique-cover-degree problem and the edge-clique-layered-cover problem. We focus on the complexity of these problems and on polynomial-time algorithms for restricted classes of graphs. The main results of the thesis are NP-completeness of the edge-clique-layered-cover problem, a polynomial-time 2-approximation algorithm for the same problem on the subclass of diamond-free graphs, and some upper bounds for particular graph classes.
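To make the edge-clique-cover objective concrete, the sketch below covers the edges of a graph with cliques using a generic greedy heuristic; it is illustrative only and is not the 2-approximation algorithm analysed in the thesis.

```python
from itertools import combinations

def greedy_edge_clique_cover(adj):
    """Greedy heuristic for covering every edge of a graph with cliques.

    adj : dict vertex -> set of neighbours (undirected graph, no self-loops)
    Returns a list of cliques (vertex sets) whose edges together cover all edges.
    """
    uncovered = {frozenset((v, u)) for v in adj for u in adj[v]}
    cover = []
    while uncovered:
        u, v = tuple(next(iter(uncovered)))
        clique = {u, v}
        # Greedily extend by vertices adjacent to everything already in the clique.
        for w in adj[u] & adj[v]:
            if all(w in adj[x] for x in clique):
                clique.add(w)
        cover.append(clique)
        uncovered -= {frozenset(pair) for pair in combinations(clique, 2)}
    return cover

# Diamond-free example: two triangles sharing a single vertex.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(greedy_edge_clique_cover(adj))   # two triangles, e.g. [{1, 2, 3}, {3, 4, 5}]
```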
|