1

Single Machine Scheduling with Release Dates

Goemans, Michel X., Queyranne, Maurice, Schulz, Andreas S., Skutella, Martin, Wang, Yaoguang 10 1900
We consider the scheduling problem of minimizing the average weighted completion time of n jobs with release dates on a single machine. We first study two linear programming relaxations of the problem, one based on a time-indexed formulation, the other on a completion-time formulation. We show their equivalence by proving that an O(n log n) greedy algorithm leads to optimal solutions to both relaxations. The proof relies on the notion of mean busy times of jobs, a concept which enhances our understanding of these LP relaxations. Based on the greedy solution, we describe two simple randomized approximation algorithms, which are guaranteed to deliver feasible schedules with expected objective value within factors of 1.7451 and 1.6853, respectively, of the optimum. They are based on the concept of common and independent α-points, respectively. The analysis implies in particular that the worst-case relative error of the LP relaxations is at most 1.6853, and we provide instances showing that it is at least e/(e − 1) ≈ 1.5819. Both algorithms may be derandomized, their deterministic versions running in O(n²) time. The randomized algorithms also apply to the on-line setting, in which jobs arrive dynamically over time and one must decide which job to process without knowledge of jobs that will be released afterwards.
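
The α-point rounding described above is simple enough to sketch. The Python below is a hedged illustration, not the paper's exact algorithms: it uses a single common α drawn uniformly from (0, 1] rather than the tuned distributions behind the 1.7451 and 1.6853 guarantees, and the job-tuple format is an assumption. It simulates the preemptive LP schedule, which always runs the released job with the largest w_j/p_j ratio, records each job's α-point (the moment an α-fraction of it has been processed), and then sequences the jobs non-preemptively in α-point order:

```python
import heapq
import random

def alpha_point_schedule(jobs, alpha=None):
    """jobs: list of (release, processing, weight). Returns the job order
    and the total weighted completion time of the alpha-point schedule."""
    if alpha is None:
        alpha = 1.0 - random.random()            # uniform on (0, 1]
    n = len(jobs)
    order = sorted(range(n), key=lambda j: jobs[j][0])   # by release date
    done, alpha_pt = [0.0] * n, [None] * n
    heap, t, i = [], 0.0, 0
    while i < n or heap:
        if not heap:                              # idle until next release
            t = max(t, jobs[order[i]][0])
        while i < n and jobs[order[i]][0] <= t:   # admit released jobs
            j = order[i]
            heapq.heappush(heap, (-jobs[j][2] / jobs[j][1], j))
            i += 1
        _, j = heap[0]                            # largest w/p ratio runs
        p = jobs[j][1]
        nxt = jobs[order[i]][0] if i < n else float("inf")
        run = min(p - done[j], nxt - t)           # run until done or arrival
        if alpha_pt[j] is None and done[j] + run >= alpha * p:
            alpha_pt[j] = t + alpha * p - done[j]
        done[j] += run
        t += run
        if done[j] >= p - 1e-9:
            heapq.heappop(heap)
    seq = sorted(range(n), key=lambda j: alpha_pt[j])
    t, total = 0.0, 0.0
    for j in seq:                                 # non-preemptive schedule
        t = max(t, jobs[j][0]) + jobs[j][1]
        total += jobs[j][2] * t
    return seq, total
```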
2

Maintaining Stream Data Distribution Over Sliding Window

Chen, Jian January 2018
In modern applications, analyzing order statistics over the most recent portion of high-volume, high-velocity stream data is a major challenge. Several online quantile algorithms can maintain a sketch of the data in a sliding window and answer quantile or rank queries quickly, but most of them use the GK algorithm as a subroutine, which is not known to be mergeable. In this paper, we propose another sketch algorithm that maintains order statistics over sliding windows. For fixed-size windows, existing algorithms cannot maintain correctness while the window is updated; our algorithm maintains correctness and still achieves performance comparable to the optimal algorithm, with insert and query times close to the best known results. In addition to the fixed-size-window algorithm, we provide a time-based-window algorithm in which the window size varies over time. Finally, we provide a window aggregation algorithm that helps extend our approach to distributed systems.
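
To make the supported operations concrete, here is a naive exact baseline (an illustration of the interface only, not the thesis's algorithm: it stores the entire window, so it offers none of a sketch's space savings) that answers rank and quantile queries over a fixed-size sliding window:

```python
from collections import deque
import bisect
import math

class SlidingWindowQuantiles:
    """Exact baseline: keeps every element of a fixed-size window."""
    def __init__(self, window_size):
        self.w = window_size
        self.items = deque()      # elements in arrival order
        self.sorted = []          # the same elements, kept sorted
    def insert(self, x):
        if len(self.items) == self.w:            # slide: evict the oldest
            old = self.items.popleft()
            self.sorted.pop(bisect.bisect_left(self.sorted, old))
        self.items.append(x)
        bisect.insort(self.sorted, x)
    def rank(self, x):
        """Number of elements in the window that are <= x."""
        return bisect.bisect_right(self.sorted, x)
    def quantile(self, phi):
        """Smallest element whose rank is >= phi * (window size)."""
        idx = max(0, math.ceil(phi * len(self.sorted)) - 1)
        return self.sorted[idx]
```

Inserts cost time linear in the window size here; the point of a mergeable sliding-window sketch is to support these same operations in sublinear space.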
3

Economical Aspects of Resource Allocation under Discounts

January 2015
Resource allocation is one of the most challenging issues policy decision makers must address. The objective of this thesis is to explore resource allocation from an economic perspective, i.e., how to purchase resources in order to satisfy customers' requests. In this thesis, we attempt to answer the question: when and how should resources be bought to fulfill customers' demands at minimum cost?

The first topic studied in this thesis is resource allocation in cloud networks. Cloud computing heralded an era where resources (such as computation and storage) can be scaled up and down elastically and on demand. This flexibility is attractive for its cost effectiveness: the cloud resource price depends on the actual utilization over time. This thesis studies two critical problems in cloud networks, focusing on the economic aspects of resource allocation in cloud/virtual networks, and proposes six algorithms to address the resource allocation problems for different discount models. The first problem addresses a scenario where the virtual network provider offers different contracts to the service provider. Four algorithms for resource contract migration are proposed under two pricing models: Pay-as-You-Come and Pay-as-You-Go. The second problem explores a scenario where a cloud provider offers k contracts, each with a duration and a rate, and a customer buys these contracts in order to satisfy its resource demand. This work shows that this problem can be seen as a 2-dimensional generalization of the classic online parking permit problem, and presents a k-competitive online algorithm and an optimal online algorithm.

The second topic studied in this thesis is how resource allocation and purchasing strategies work in our daily life. For example, is it worth buying a yoga pass that costs USD 100 for ten entries, although it will expire at the end of the year? Decisions like these are part of our daily life, yet not much is known today about good online strategies for buying discount vouchers with expiration dates. This work hence introduces the Discount Voucher Purchase Problem (DVPP). It aims to optimize strategies for buying discount vouchers, i.e., coupons, vouchers, and Groupons that are valid only during a certain time period. The DVPP comes in three flavors: (1) Once Expire Lose Everything (OELE): vouchers lose their entire value after expiration. (2) Once Expire Lose Discount (OELD): vouchers lose their discount value after expiration. (3) Limited Purchasing Window (LPW): vouchers have the property of OELE and can only be bought during a certain time window. This work explores online algorithms with a provable competitive ratio against a clairvoyant offline algorithm, even in the worst case. In particular, this work makes the following contributions: we present a 4-competitive algorithm for OELE, an 8-competitive algorithm for OELD, and a lower bound for LPW. We also present an optimal offline algorithm for OELE and LPW, and show that it is a 2-approximation for OELD.
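
The tension these algorithms manage can be illustrated with a ski-rental-style break-even rule: pay per use until the amount spent since the last purchase would have covered a voucher, then buy one. The sketch below is a hypothetical illustration only; it ignores expiration dates entirely (the defining complication of OELE) and is not the thesis's 4-competitive algorithm:

```python
def break_even_voucher(demands, unit_price, voucher_price, voucher_entries):
    """demands: per-day demand counts, revealed online.
    voucher_entries: uses granted per voucher (assumed >= 1).
    Returns the total cost paid under the break-even rule."""
    cost, paygo, entries_left = 0.0, 0.0, 0
    for d in demands:
        while d > 0:
            if entries_left > 0:                 # consume prepaid entries
                use = min(d, entries_left)
                entries_left -= use
                d -= use
            elif paygo + unit_price >= voucher_price:
                cost += voucher_price            # break-even point reached
                entries_left = voucher_entries
                paygo = 0.0
            else:
                cost += unit_price               # keep paying as you go
                paygo += unit_price
                d -= 1
    return cost
```

The break-even rule is the same device that makes classic ski rental 2-competitive; handling expirations (OELE), discount retention (OELD), and purchase windows (LPW) is exactly where the thesis's algorithms go beyond it.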
4

Runtime multicore scheduling techniques for dispatching parameterized signal and vision dataflow applications on heterogeneous MPSoCs

Heulot, Julien 24 November 2015
An important trend in embedded processing is the integration of increasingly more processing elements into Multiprocessor Systems-on-Chip (MPSoC). This trend is due in part to limitations in the processing power of individual elements, caused by power consumption considerations. At the same time, signal processing applications are becoming increasingly dynamic in terms of their hardware resource requirements due to the growing sophistication of algorithms to reach higher levels of performance. In the design and implementation of multicore signal processing systems, one of the main challenges is to dispatch computational tasks efficiently onto the available processing elements while taking into account dynamic changes in application functionality and resource requirements.
An inefficient use can lead to longer processing times and higher energy consumption, making multicore task scheduling a very difficult problem to solve. Dataflow process network Models of Computation (MoCs) are widely used in the design of signal processing systems. They decompose application functionality into actors that communicate data exclusively through channels. The interconnection of actors and communication channels is modeled and manipulated as a directed graph, called a dataflow graph. There are different dataflow MoCs which offer different trade-offs between predictability and expressiveness. These MoCs are widely used in the design of signal processing systems due to their analyzability and their natural expression of application parallelism. In this thesis, we propose a novel scheduling method to address the multicore scheduling challenge. This scheduling method makes scheduling decisions at runtime to optimize the overall execution time of applications on heterogeneous multicore processing resources. Applications are described using the Parameterized and Interfaced Synchronous DataFlow (PiSDF) MoC. The PiSDF model allows describing parameterized applications, making it possible for an application's resource requirements to change at runtime. At each execution, the parameterized dataflow graph is transformed into a locally static one that is used to efficiently schedule the application with a priori knowledge of its behavior. The proposed scheduling method has been tested and benchmarked on multiple state-of-the-art applications from the computer vision, signal processing, and multimedia domains.
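
As a rough illustration of the dispatching step only (PiSDF graph expansion, communication costs, and the thesis's actual runtime heuristics are all omitted, and every name below is an assumption), the following sketch list-schedules an already-expanded acyclic task graph onto heterogeneous cores by greedily minimizing each task's finish time:

```python
def greedy_eft_schedule(tasks, deps, exec_time, n_cores):
    """tasks: task ids; deps: task -> set of predecessor ids (acyclic);
    exec_time[t][c]: runtime of task t on core c (heterogeneous).
    Returns (task, core, start, finish) tuples."""
    finish, core_free = {}, [0.0] * n_cores
    placed, schedule = set(), []
    while len(placed) < len(tasks):
        ready = [t for t in tasks
                 if t not in placed and deps.get(t, set()) <= placed]
        if not ready:
            raise ValueError("dependency cycle")
        for t in ready:
            data_ready = max((finish[p] for p in deps.get(t, set())),
                             default=0.0)
            # place t on the core that finishes it earliest
            best = min(range(n_cores),
                       key=lambda c: max(core_free[c], data_ready)
                       + exec_time[t][c])
            start = max(core_free[best], data_ready)
            finish[t] = start + exec_time[t][best]
            core_free[best] = finish[t]
            placed.add(t)
            schedule.append((t, best, start, finish[t]))
    return schedule
```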
5

Online Algorithms for Packet Scheduling

Veselý, Pavel January 2018
We study online scheduling policies for buffer management models, in which packets arrive over time to a buffer of a network switch to be sent through its single output port. However, the bandwidth of the port is limited and some packets need to be dropped, based on their weights. The goal of the scheduler is to maximize the weighted throughput, that is, the total weight of packets transmitted. Due to the inherent lack of information about the future, optimal performance cannot be achieved; we thus pursue competitive analysis and its refinements to analyze online algorithms on worst-case inputs. Specifically, in the first part of the thesis, we focus on a simple online scheduling model with unit-size packets and deadlines, called Bounded-Delay Packet Scheduling. We design an optimal φ-competitive deterministic algorithm for the problem, where φ ≈ 1.618 is the golden ratio. It is based on a detailed understanding of an optimal schedule of pending packets, called the plan, which may be of independent interest. We also propose a semi-online setting with lookahead that allows the algorithm to see a bit of the future, namely, packets arriving in the next few steps. We provide an algorithm with lookahead for instances in which each packet can be scheduled in at most two consecutive slots and prove lower...
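
For contrast with the φ-competitive plan-based algorithm, the simple greedy policy (in every slot, transmit the heaviest pending packet that has not expired) is known to be 2-competitive for this model and fits in a few lines; the packet-tuple format below is an assumption:

```python
import heapq

def greedy_packet_scheduling(packets):
    """packets: (arrival_slot, deadline_slot, weight) triples, unit size.
    Returns the total weight transmitted by the greedy policy."""
    by_arrival = sorted(packets)
    last_slot = max(d for _, d, _ in packets)
    total, i, heap = 0.0, 0, []
    for t in range(last_slot + 1):
        while i < len(by_arrival) and by_arrival[i][0] <= t:
            _, d, w = by_arrival[i]
            heapq.heappush(heap, (-w, d))         # max-heap on weight
            i += 1
        while heap and heap[0][1] < t:            # lazily drop expired
            heapq.heappop(heap)
        if heap:
            neg_w, _ = heapq.heappop(heap)        # send heaviest packet
            total += -neg_w
    return total
```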
6

Distributed Algorithm Design for Constrained Multi-robot Task Assignment

Luo, Lingzhi 01 June 2014
The task assignment problem is one of the fundamental combinatorial optimization problems. It has been extensively studied in operations research, management science, computer science, and robotics. Task assignment problems arise in various applications of multi-robot systems (MRS), such as environmental monitoring, disaster response, extraterrestrial exploration, sensing data collection, and collaborative autonomous manufacturing. In these MRS applications, there are realistic constraints on robots and tasks that must be taken into account both from the modeling perspective and the algorithmic perspective. From the modeling aspect, such constraints include (a) task group constraints, where tasks form disjoint groups and each robot can be assigned to at most one task in each group (one example comes from tightly-coupled tasks, where multiple micro tasks form one tightly-coupled macro task that requires multiple robots to perform the micro tasks simultaneously); (b) task deadline constraints, where tasks must be assigned to meet their deadlines; (c) dynamically-arising tasks, where tasks arrive dynamically and the payoffs of future tasks are unknown, as in scenarios like search-and-rescue, where new victims are found dynamically; and (d) robot budget constraints, where the number of tasks each robot can perform is bounded according to the resource it possesses (e.g., energy). From the solution aspect, there is often a need for decentralized solutions that are implemented on individual robots, especially when no powerful centralized controller exists or when the system needs to avoid single-point failure or be adaptive to environmental changes. Most existing algorithms either do not consider the above constraints in problem modeling, are centralized, or do not provide formal performance guarantees. In this thesis, I propose methods to address these issues for two classes of problems: the constrained linear assignment problem, which belongs to P, and the constrained generalized assignment problem, which is NP-hard. I develop decomposition-based distributed auction algorithms with performance guarantees for both problem classes. The multi-robot assignment problem is decomposed into an optimization problem for each robot, and each robot iteratively solving its own optimization problem leads to a provably good solution to the overall problem. For the constrained linear assignment problem, my approach provides an almost optimal solution. For the constrained generalized assignment problem, I present a distributed algorithm that provides a solution within a constant factor of the optimal solution. I also study the online version of the task allocation problem with task group constraints, and prove that a repeated greedy version of my algorithm yields a solution with a constant-factor competitive ratio. I include simulation results to evaluate the average-case performance of the proposed algorithms, as well as results on multi-robot cooperative package transport to illustrate the approach.
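
The auction subroutine that decomposition-based methods of this kind build on can be sketched for the unconstrained linear assignment case (a Bertsekas-style auction; the thesis's constrained variants, which add group, deadline, and budget handling, are substantially more involved). With bid increment eps, the final assignment's total payoff is within n*eps of optimal:

```python
def auction_assignment(payoff, eps=0.01):
    """payoff[r][t]: payoff of robot r performing task t (square matrix).
    Returns (owner, price): owner[t] is the robot assigned to task t."""
    n = len(payoff)
    price = [0.0] * n
    owner = [None] * n
    unassigned = list(range(n))
    while unassigned:
        r = unassigned.pop()
        # robot r bids on its most valuable task at current prices
        values = [payoff[r][t] - price[t] for t in range(n)]
        best = max(range(n), key=lambda t: values[t])
        second = max((values[t] for t in range(n) if t != best),
                     default=values[best])
        price[best] += values[best] - second + eps   # raise winning price
        if owner[best] is not None:
            unassigned.append(owner[best])           # previous owner outbid
        owner[best] = r
    return owner, price
```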
7

Design and Analysis of Algorithms for Graph Exploration and Resource Allocation Problems and Their Application to Energy Management

Morimoto, Naoyuki 23 July 2014
Kyoto University, Doctor of Informatics (degree no. 18530, Informatics no. 534), conferred under Article 4, Paragraph 1 of the Degree Regulations. Graduate School of Informatics, Department of Intelligence Science and Technology, Kyoto University. Examination committee: Prof. Yasuo Okabe (chair), Prof. Takashi Matsuyama, Prof. Tatsuya Akutsu.
8

Simulation & Analysis of Peer-to-Peer Network Quality for Measurement Scheduling: Online Algorithms, Application for Network QoS Monitoring

Lindstedt, Gustaf January 2019
With the growing dependency on Internet connectivity in our daily lives, monitoring connection quality to ensure a good quality of service has become increasingly important. The CheesePi project aims to build a platform for monitoring connection quality from the home user's perspective, and as peer-to-peer technologies become more prevalent, quality-of-service monitoring between peers grows more important as well. This thesis analyses the problem of scheduling connection quality measurements between peers in a network. A method is presented for scheduling measurements that makes use of statistical models of the individual links in the network, built from previous measurement data. The method applies the ADWIN1 adaptive windowing algorithm to the models and assigns each link a priority based on the relative window sizes. This method is evaluated against a round-robin scheduler through simulation and is shown to provide better scheduling than round-robin in most cases in terms of achieving the most "information gain" per measurement iteration. The results show that after a sudden change in a network link, the scheduler prioritises measurements for that link and therefore converges its view of the network to the new stable state more quickly than round-robin scheduling. The scheduling method was developed to be practically applicable to the CheesePi project and could be deployed in real systems running the CheesePi platform. The thesis also contains an evaluation of two online algorithms for the mean and variance, examining how they react to changes in the data source from which the samples are taken.
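
The abstract does not name the two online mean/variance algorithms it evaluates, so as an illustrative stand-in, here is Welford's classic one-pass method, a standard choice for numerically stable streaming estimates of the mean and variance:

```python
class WelfordStats:
    """Numerically stable one-pass mean and variance (Welford's method)."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n               # running mean
        self.m2 += delta * (x - self.mean)        # uses the updated mean
    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0
```

An exponentially weighted variant, replacing 1/n with a fixed step size, reacts faster to changes in the data source at the cost of higher steady-state variance, which is precisely the trade-off such an evaluation would probe.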
9

Mean-Variability-Fairness tradeoffs in resource allocation with applications to video delivery

Joseph, Vinay 20 September 2013
Network Utility Maximization (NUM) provides a key conceptual framework to study reward allocation amongst a collection of users/entities in disciplines as diverse as economics, law, and engineering. However, when the available resources and/or users' utilities vary over time, reward allocations will tend to vary, which in turn may have a detrimental impact on the users' overall satisfaction or quality of experience. In this thesis, we introduce a generalization of the NUM framework which incorporates the detrimental impact of temporal variability in a user's allocated rewards and explicitly captures Mean-Variability-Fairness tradeoffs, i.e., tradeoffs amongst the mean and variability in users' reward allocations, as well as fairness across users. We propose a simple online algorithm to realize these tradeoffs, which, under stationary ergodic assumptions, is shown to be asymptotically optimal, i.e., it achieves long-term performance equal to that of an offline algorithm with knowledge of the future variability in the system. This substantially extends work on NUM to an interesting class of relevant problems where users/entities are sensitive to temporal variability in their service or allocated rewards. We extend the theoretical framework and tools developed for realizing Mean-Variability-Fairness tradeoffs to develop a simple online algorithm for optimizing video delivery in networks. The tremendous increase in mobile video traffic projected for the future, along with the insufficiency of available wireless network capacity, makes this one of the most important networking problems today. Specifically, we consider a network supporting video clients streaming stored video, and focus on the problem of jointly optimizing network resource allocation and the video clients' video quality adaptation. Our objective is to fairly maximize video clients' video Quality of Experience (QoE), realizing Mean-Variability-Fairness tradeoffs and incorporating client preferences on rebuffering time and the cost of video delivery. We present a simple asymptotically optimal online algorithm, NOVA (Network Optimization for Video Adaptation), to solve the problem. Our algorithm uses minimal communication and 'distributes' the tasks: network resource allocation goes to a centralized network controller, and video quality adaptation to the respective video clients. Further, the quality adaptation is also optimal for standalone video clients, and is an asynchronous algorithm well suited for use in the Dynamic Adaptive Streaming over HTTP (DASH) framework. We also extend NOVA for use with more general video QoE models, and study NOVA accounting for practical considerations such as a time-varying number of video clients, sharing with other types of traffic, performance under legacy resource allocation policies, and videos with variable-sized segments.
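
A toy per-slot rule conveys the Mean-Variability-Fairness idea. The sketch below is illustrative only and is not NOVA: it bolts a variability penalty onto the classic proportional-fair marginal-utility rule, and the parameter names are assumptions:

```python
def pick_user(inst_rates, mean_rates, lam=0.1):
    """inst_rates[i]: rate user i would receive if served this slot;
    mean_rates[i]: user i's exponentially smoothed past rate (> 0).
    Serve the user with the best variability-penalized marginal utility."""
    def score(i):
        marginal = inst_rates[i] / mean_rates[i]   # d/dr of log-utility
        variability = lam * (inst_rates[i] - mean_rates[i]) ** 2
        return marginal - variability
    return max(range(len(inst_rates)), key=score)
```

With lam = 0 this reduces to plain proportional-fair scheduling; increasing lam trades mean throughput for smoother per-user allocations.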
10

Efficiency of the Online EM Algorithm for Complex Statistical Models in the Context of Massive Data

Martel, Yannick 11 1900
The EM algorithm (Dempster et al., 1977) yields a sequence of estimators that converges to the maximum likelihood estimator for missing data models whose maximum likelihood estimator is not directly tractable. The EM algorithm is remarkable given its numerous applications in statistical learning. However, it may suffer from its computational cost. Cappé and Moulines (2009) proposed an online version of the algorithm for models whose likelihood belongs to the exponential family that provides an upgrade in computational efficiency on large data sets. However, the conditional expected value of the sufficient statistic is often intractable for complex models and/or when the missing data are of high dimension. In those cases, it is replaced by an estimator. Many questions then arise naturally: do the convergence results pertaining to the initial algorithm hold when the expected value is replaced by an estimator? In particular, does the asymptotic normality property remain in this case? How does the variance of the estimator of the expected value affect the asymptotic variance of the EM estimator? Are Monte Carlo and MCMC estimators suitable in this situation? Could variance reduction tools such as control variates provide variance relief? These questions are tackled by means of examples containing latent data models.
This master's thesis' main contributions are the presentation of a unified framework for stochastic approximation EM algorithms, an illustration of the impact that estimating the conditional expected value has on the variance of online EM algorithms, and the introduction of online EM algorithms that reduce the additional variance stemming from the estimation of the conditional expected value.
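
The online EM recursion of Cappé and Moulines (2009) is compact enough to sketch for a one-dimensional Gaussian mixture, where the posterior expectation of the sufficient statistic is available in closed form (so, unlike the complex models studied in this thesis, no auxiliary estimator is needed); the initialization and step-size choices below are ad hoc:

```python
import math

def online_em_gmm(stream, K=2, alpha=0.6):
    """One pass of online EM over a stream of scalars for a K-component
    Gaussian mixture: blend expected sufficient statistics into a running
    average, then re-solve the M-step in closed form."""
    mu0 = [k - (K - 1) / 2.0 for k in range(K)]          # spread-out means
    # s[k] tracks E[(1{z=k}, 1{z=k}*y, 1{z=k}*y^2)]
    s = [[1.0 / K, m0 / K, (m0 * m0 + 1.0) / K] for m0 in mu0]
    pi, mu, var = [1.0 / K] * K, list(mu0), [1.0] * K
    for t, y in enumerate(stream, start=1):
        gamma = (t + 2.0) ** (-alpha)                    # step size in (0,1)
        # E-step: responsibilities under the current parameters
        logp = [math.log(pi[k]) - 0.5 * math.log(2.0 * math.pi * var[k])
                - (y - mu[k]) ** 2 / (2.0 * var[k]) for k in range(K)]
        mx = max(logp)
        r = [math.exp(lp - mx) for lp in logp]
        tot = sum(r)
        r = [rk / tot for rk in r]
        # stochastic-approximation update of the sufficient statistics
        for k in range(K):
            s[k][0] += gamma * (r[k] - s[k][0])
            s[k][1] += gamma * (r[k] * y - s[k][1])
            s[k][2] += gamma * (r[k] * y * y - s[k][2])
        # M-step: closed-form re-estimation from the running statistics
        pi = [sk[0] for sk in s]
        mu = [sk[1] / sk[0] for sk in s]
        var = [max(sk[2] / sk[0] - m * m, 1e-6) for sk, m in zip(s, mu)]
    return pi, mu, var
```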
