  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Stochastic models for resource allocation in large distributed systems / Modèles stochastiques pour l'allocation des ressources dans les grands systèmes distribués

Thompson, Guilherme 08 December 2017 (has links)
Cette thèse traite de quatre problèmes dans le contexte des grands systèmes distribués. Ce travail est motivé par les questions soulevées par l'expansion du Cloud Computing et des technologies associées. Le présent travail étudie l'efficacité de différents algorithmes d'allocation de ressources dans ce cadre. Les méthodes utilisées impliquent une analyse mathématique de plusieurs modèles stochastiques associés à ces réseaux. Le chapitre 1 fournit une introduction au sujet, ainsi qu'une présentation des principaux outils mathématiques utilisés dans les chapitres suivants. Le chapitre 2 présente un mécanisme de contrôle de congestion dans les services de Video on Demand fournissant des fichiers encodés dans diverses résolutions. On propose une politique selon laquelle le serveur ne livre la vidéo qu'à un débit minimal lorsque le taux d'occupation du serveur est supérieur à un certain seuil. La performance du système dans le cadre de cette politique est ensuite évaluée en fonction des taux de rejet et de dégradation. Les chapitres 3, 4 et 5 explorent les problèmes liés aux schémas de coopération entre centres de données (CD) situés à la périphérie du réseau. Dans le premier cas, on analyse une politique dans le contexte des services de cloud multi-ressources. Dans le second cas, les demandes arrivant à un CD encombré sont transmises à un CD voisin avec une probabilité donnée. Dans le troisième cas, les requêtes bloquées dans un CD sont transmises systématiquement à un autre CD, où une politique de réservation (trunk) est introduite telle qu'une requête redirigée n'est acceptée que s'il y a un certain nombre minimum de serveurs libres dans ce CD. / This PhD thesis investigates four problems in the context of Large Distributed Systems. This work is motivated by the questions arising with the expansion of Cloud Computing and related technologies. The present work investigates the efficiency of different resource allocation algorithms in this framework. The methods used involve a mathematical analysis of several stochastic models associated with these networks. Chapter 1 provides an introduction to the subject in general, as well as a presentation of the main mathematical tools used throughout the subsequent chapters. Chapter 2 presents a congestion control mechanism in Video on Demand services delivering files encoded in various resolutions. We propose a policy under which the server delivers the video only at a minimal bit rate when the occupancy rate of the server is above a certain threshold. The performance of the system under this policy is then evaluated based on both the rejection and degradation rates. Chapters 3, 4 and 5 explore problems related to cooperation schemes between data centres on the edge of the network. In the first setting, we analyse a policy in the context of multi-resource cloud services. In the second case, requests that arrive at a congested data centre are forwarded to a neighbouring data centre with some given probability. In the third case, requests blocked at one data centre are forwarded systematically to another, where a trunk reservation policy is introduced such that a redirected request is accepted only if a certain minimum number of servers are free at that data centre.
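The threshold and trunk-reservation policies summarised above lend themselves to a quick numerical illustration. The sketch below simulates only the Chapter 2 threshold policy under stated assumptions — Poisson arrivals, exponential service times, and made-up parameter values, none of which are taken from the thesis — and estimates the rejection and degradation rates it mentions.

```python
import random

def simulate_vod(capacity=50, arrival_rate=40.0, service_rate=1.0,
                 threshold=0.8, horizon=10_000.0, seed=0):
    """Event-driven simulation of a threshold policy: a request arriving
    while occupancy exceeds `threshold` is served in degraded (minimal
    bit-rate) mode; it is rejected only when all servers are busy."""
    rng = random.Random(seed)
    t, busy = 0.0, 0
    departures = []                      # departure times of requests in service
    arrivals = degraded = rejected = 0
    while t < horizon:
        next_arrival = t + rng.expovariate(arrival_rate)
        next_departure = min(departures) if departures else float("inf")
        if next_arrival < next_departure:
            t = next_arrival
            arrivals += 1
            if busy >= capacity:
                rejected += 1            # blocked: no free server
            else:
                if busy / capacity > threshold:
                    degraded += 1        # served, but at the minimal bit rate
                busy += 1
                departures.append(t + rng.expovariate(service_rate))
        else:
            t = next_departure
            departures.remove(next_departure)
            busy -= 1
    return rejected / arrivals, degraded / arrivals

# Example: estimated (rejection rate, degradation rate) under these toy parameters.
# print(simulate_vod())
```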
82

Méthodes quantitatives pour l'étude asymptotique de processus de Markov homogènes et non-homogènes / Quantitative methods for the asymptotic study of homogeneous and non-homogeneous Markov processes

Delplancke, Claire 28 June 2017 (has links)
L'objet de cette thèse est l'étude de certaines propriétés analytiques et asymptotiques des processus de Markov, et de leurs applications à la méthode de Stein. Le point de vue considéré consiste à déployer des inégalités fonctionnelles pour majorer la distance entre lois de probabilité. La première partie porte sur l'étude asymptotique de processus de Markov inhomogènes en temps via des inégalités de type Poincaré, établies par l'analyse spectrale fine de l'opérateur de transition. On se place d'abord dans le cadre du théorème central limite, qui affirme que la somme renormalisée de variables aléatoires converge vers la mesure gaussienne, et l'étude est consacrée à l'obtention d'une borne à la Berry-Esseen permettant de quantifier cette convergence. La distance choisie est une quantité naturelle et encore non étudiée dans ce cadre, la distance du chi-2, complétant ainsi la littérature relative à d'autres distances (Kolmogorov, variation totale, Wasserstein). Toujours dans le contexte non-homogène, on s'intéresse ensuite à un processus peu mélangeant relié à un algorithme stochastique de recherche de médiane. Ce processus évolue par sauts de deux types (droite ou gauche), dont la taille et l'intensité dépendent du temps. Une majoration de la distance de Wasserstein d'ordre 1 entre la loi du processus et la mesure gaussienne est établie dans le cas où celle-ci est invariante sous la dynamique considérée, et étendue à des exemples où seule la normalité asymptotique est vérifiée. La seconde partie s'attache à l'étude des entrelacements entre processus de Markov (homogènes) et gradients, qu'on peut interpréter comme un raffinement du critère de Bakry-Emery, et leur application à la méthode de Stein, qui est un ensemble de techniques permettant de majorer la distance entre deux mesures de probabilité. On prouve l'existence de relations d'entrelacement du second ordre pour les processus de naissance-mort, allant ainsi plus loin que les relations du premier ordre connues. Ces relations sont mises à profit pour construire une méthode originale et universelle d'évaluation des facteurs de Stein relatifs aux mesures de probabilité discrètes, qui forment une composante essentielle de la méthode de Stein-Chen. / The object of this thesis is the study of some analytical and asymptotic properties of Markov processes, and their applications to Stein's method. The point of view consists in the development of functional inequalities in order to obtain upper-bounds on the distance between probability distributions. The first part is devoted to the asymptotic study of time-inhomogeneous Markov processes through Poincaré-like inequalities, established by precise estimates on the spectrum of the transition operator. The first investigation takes place within the framework of the Central Limit Theorem, which states the convergence of the renormalized sum of random variables towards the normal distribution. It results in the statement of a Berry-Esseen bound allowing to quantify this convergence with respect to the chi-2 distance, a natural quantity which had not been investigated in this setting. It therefore extends similar results relative to other distances (Kolmogorov, total variation, Wasserstein). Keeping with the non-homogeneous framework, we consider a weakly mixing process linked to a stochastic algorithm for median approximation. This process evolves by jumps of two sorts (to the right or to the left) with time-dependent size and intensity. 
An upper-bound on the Wasserstein distance of order 1 between the marginal distribution of the process and the normal distribution is provided when the latter is invariant under the dynamics, and extended to examples where only asymptotic normality holds. The second part concerns intertwining relations between (homogeneous) Markov processes and gradients, which can be seen as a refinement of the Bakry-Emery criterion, and their application to Stein's method, a collection of techniques to estimate the distance between two probability distributions. Second-order intertwinings for birth-death processes are stated, going one step further than the existing first-order relations. These relations are then exploited to construct an original and universal method for evaluating the Stein factors of discrete probability distributions, a key component of the Stein-Chen method.
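For reference, the chi-2 distance mentioned above is usually defined, for probability measures $\mu \ll \nu$, by the standard expression below; the exact normalisation used in the thesis may differ.

$$
\chi_2(\mu,\nu)^2 \;=\; \int \Big(\frac{d\mu}{d\nu}-1\Big)^{2}\, d\nu \;=\; \int \Big(\frac{d\mu}{d\nu}\Big)^{2} d\nu \;-\; 1,
$$

so that a Berry-Esseen-type bound in this metric controls $\chi_2\big(\mathcal{L}(S_n),\mathcal{N}(0,1)\big)$, where $S_n$ is the renormalized sum, by an explicit sequence tending to $0$.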
83

Limite hidrodinâmico para neurônios interagentes estruturados espacialmente / Hydrodynamic limit for spatially structured interacting neurons

Guilherme Ost de Aguiar 17 July 2015 (has links)
Nessa tese, estudamos o limite hidrodinâmico de um sistema estocástico de neurônios cujas interações são dadas por potenciais de Kac que imitam sinapses elétricas e químicas, e as correntes de vazamento. Esse sistema consiste de $\epsilon^{-2}$ neurônios imersos em $[0,1)^2$, cada um disparando aleatoriamente de acordo com um processo pontual com taxa que depende tanto do seu potencial de membrana como da posição. Quando o neurônio $i$ dispara, seu potencial de membrana é resetado para $0$, enquanto que o potencial de membrana do neurônio $j$ é aumentado por um valor positivo $\epsilon^2 a(i,j)$, se $i$ influencia $j$. Além disso, entre disparos consecutivos, o sistema segue um movimento determinístico devido às sinapses elétricas e às correntes de vazamento. As sinapses elétricas estão envolvidas na sincronização do potencial de membrana dos neurônios, enquanto que as correntes de vazamento inibem a atividade de todos os neurônios, atraindo simultaneamente todos os potenciais de membrana para $0$. No principal resultado dessa tese, mostramos que a distribuição empírica dos potenciais de membrana converge, quando o parâmetro $\epsilon$ tende a $0$, para uma densidade de probabilidade $\rho_t(u,r)$ que satisfaz uma equação diferencial parcial não linear do tipo hiperbólico. / We study the hydrodynamic limit of a stochastic system of neurons whose interactions are given by Kac potentials that mimic chemical and electrical synapses and leak currents. The system consists of $\epsilon^{-2}$ neurons embedded in $[0,1)^2$, each spiking randomly according to a point process with rate depending on both its membrane potential and position. When neuron $i$ spikes, its membrane potential is reset to $0$ while the membrane potential of $j$ is increased by a positive value $\epsilon^2 a(i,j)$, if $i$ influences $j$. Furthermore, between consecutive spikes, the system follows a deterministic motion due both to electrical synapses and leak currents. The electrical synapses are involved in the synchronization of the membrane potentials of the neurons, while the leak currents inhibit the activity of all neurons, attracting simultaneously their membrane potentials to 0. We show that the empirical distribution of the membrane potentials converges, as $\epsilon$ vanishes, to a probability density $\rho_t(u,r)$ which is proved to obey a nonlinear PDE of hyperbolic type.
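The jump-and-reset dynamic described above can be mocked up in a few lines. This is only a toy sketch under explicit assumptions: the spiking-rate function `phi`, the interaction kernel `a`, and the drift chosen to represent the electrical synapses and leak currents are illustrative guesses, not the definitions used in the thesis.

```python
import numpy as np

def simulate_neurons(eps=0.1, T=1.0, dt=1e-3, lam=1.0, seed=0):
    """Euler-type scheme for a spatially structured spiking system: neurons
    on a grid in [0,1)^2 drift deterministically (leak + averaging) and
    spike randomly; a spike resets the spiking neuron and increments the
    others through an eps^2-scaled kernel."""
    rng = np.random.default_rng(seed)
    n = int(round(1 / eps))                       # n*n ~ eps^{-2} neurons
    xs = np.stack(np.meshgrid(np.arange(n), np.arange(n)), -1).reshape(-1, 2) * eps
    u = rng.uniform(0.0, 1.0, size=n * n)         # membrane potentials

    phi = lambda v: np.maximum(v, 0.0)            # assumed spiking rate
    dist = np.linalg.norm(xs[:, None] - xs[None, :], axis=-1)
    a = np.exp(-dist)                             # assumed Kac-type kernel

    for _ in range(int(T / dt)):
        # deterministic part: leak toward 0 and local averaging (electrical synapses)
        mean_field = (a @ u) / a.sum(axis=1)
        u += dt * (-lam * u + lam * (mean_field - u))
        # stochastic part: each neuron spikes with probability ~ rate * dt
        for i in np.flatnonzero(rng.random(n * n) < phi(u) * dt):
            u += eps ** 2 * a[i]                  # chemical-synapse increments
            u[i] = 0.0                            # reset the spiking neuron
    return u
```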
84

Low complexity turbo equalization using superstructures

Myburgh, Hermanus Carel January 2013 (has links)
In a wireless communication system the transmitted information is subjected to a number of impairments, among which inter-symbol interference (ISI), thermal noise and fading are the most prevalent. Owing to the dispersive nature of the communication channel, ISI results from the arrival of multiple delayed copies of the transmitted signal at the receiver. Thermal noise is caused by the random fluctuation of electrons in the receiver hardware, while fading is the result of constructive and destructive interference, as well as absorption during transmission. To protect the source information, error-correction coding (ECC) is performed in the transmitter, after which the coded information is interleaved in order to separate the information to be transmitted temporally. Turbo equalization (TE) is a technique whereby equalization (to correct ISI) and decoding (to correct errors) are performed iteratively by exchanging extrinsic information formed from the optimal posterior probabilistic information produced by each algorithm. The extrinsic information determined from the decoder output is used as prior information by the equalizer, and vice versa, allowing for the bit-error rate (BER) performance to be improved with each iteration. Turbo equalization achieves excellent BER performance, but its computational complexity grows exponentially with an increase in channel memory as well as with encoder memory, and can therefore not be used in dispersive channels where the channel memory is large. A number of low complexity equalizers have consequently been developed to replace the maximum a posteriori probability (MAP) equalizer in order to reduce the complexity. Some of the resulting low complexity turbo equalizers achieve performance comparable to that of a conventional turbo equalizer that uses a MAP equalizer. In other cases the low complexity turbo equalizers perform much worse than the corresponding conventional turbo equalizer (CTE) because of suboptimal equalization and the inability of the low complexity equalizers to utilize the extrinsic information effectively as prior information. In this thesis the author develops two novel iterative low complexity turbo equalizers. The turbo equalization problem is modeled on superstructures, where, in the context of this thesis, a superstructure performs the task of the equalizer and the decoder. The resulting low complexity turbo equalizers process all the available information as a whole, so there is no exchange of extrinsic information between different subunits. The first is modeled on a dynamic Bayesian network (DBN), which models the turbo equalization problem as a quasi-directed acyclic graph by allowing a dominant connection between the observed variables and their corresponding hidden variables, as well as weak connections between the observed variables and past and future hidden variables. The resulting turbo equalizer is named the dynamic Bayesian network turbo equalizer (DBN-TE). The second low complexity turbo equalizer developed in this thesis is modeled on a Hopfield neural network, and is named the Hopfield neural network turbo equalizer (HNN-TE). The HNN-TE is an amalgamation of the HNN maximum likelihood sequence estimation (MLSE) equalizer, developed previously by this author, and an HNN MLSE decoder derived from a single codeword HNN decoder. 
Both the low complexity turbo equalizers developed in this thesis are able to jointly and iteratively equalize and decode coded, randomly interleaved information transmitted through highly dispersive multipath channels. The performance of both these low complexity turbo equalizers is comparable to that of the conventional turbo equalizer while their computational complexities are superior for channels with long memory. Their performance is also comparable to that of other low complexity turbo equalizers, but their computational complexities are worse. The computational complexity of both the DBN-TE and the HNN-TE is approximately quadratic at best (and cubic at worst) in the transmitted data block length, exponential in the encoder constraint length and approximately independent of the channel memory length. The approximate quadratic complexity of both the DBN-TE and the HNN-TE is mostly due to interleaver mitigation, requiring matrix multiplication, where the matrices have dimensions equal to the data block length, without which turbo equalization using superstructures is impossible for systems employing random interleavers. / Thesis (PhD)--University of Pretoria, 2013. / gm2013 / Electrical, Electronic and Computer Engineering / unrestricted
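For context, the extrinsic-information exchange of the conventional turbo equalizer described above is commonly written as follows; the notation is generic and not specific to the DBN-TE or HNN-TE, which replace this alternating exchange with a single joint inference over all available information.

$$
L_e^{\mathrm{eq}}(c_k) \;=\; L^{\mathrm{eq}}\!\big(c_k \mid \mathbf{y},\, L_a^{\mathrm{eq}}\big) - L_a^{\mathrm{eq}}(c_k),
\qquad
L_a^{\mathrm{dec}} \;=\; \Pi^{-1}\!\big(L_e^{\mathrm{eq}}\big),
$$
$$
L_e^{\mathrm{dec}}(c_k) \;=\; L^{\mathrm{dec}}\!\big(c_k \mid L_a^{\mathrm{dec}}\big) - L_a^{\mathrm{dec}}(c_k),
\qquad
L_a^{\mathrm{eq}} \;=\; \Pi\!\big(L_e^{\mathrm{dec}}\big),
$$

where the $L$'s are log-likelihood ratios of the coded bits $c_k$, $\mathbf{y}$ is the received sequence, and $\Pi$, $\Pi^{-1}$ denote interleaving and deinterleaving.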
85

Contrôle optimal stochastique des processus de Markov déterministes par morceaux et application à l’optimisation de maintenance / Stochastic optimal control for piecewise deterministic Markov processes and application to maintenance optimization

Geeraert, Alizée 06 June 2017 (has links)
On s’intéresse au problème de contrôle impulsionnel à horizon infini avec facteur d’oubli pour les processus de Markov déterministes par morceaux (PDMP). Dans un premier temps, on modélise l’évolution d’un système opto-électronique par des PDMP. Afin d’optimiser la maintenance du système, on met en place un problème de contrôle impulsionnel tenant compte à la fois du coût de maintenance et du coût lié à l’indisponibilité du matériel auprès du client. On applique ensuite une méthode d’approximation numérique de la fonction valeur associée au problème, faisant intervenir la quantification de PDMP. On discute alors de l’influence des paramètres sur le résultat obtenu. Dans un second temps, on prolonge l’étude théorique du problème de contrôle impulsionnel en construisant de manière explicite une famille de stratégies ε-optimales. Cette construction se base sur l’itération d’un opérateur dit de simple-saut-ou-intervention associé au PDMP, dont l’idée repose sur le procédé utilisé par U.S. Gugerli pour la construction de temps d’arrêt ε-optimaux. Néanmoins, déterminer la meilleure position après chaque intervention complique significativement la construction de telles stratégies et nécessite l’introduction d’un nouvel opérateur. L’originalité de la construction de stratégies ε-optimales présentée ici est d’être explicite, au sens où elle ne nécessite pas la résolution préalable de problèmes complexes. / We are interested in a discounted impulse control problem with infinite horizon for piecewise deterministic Markov processes (PDMPs). In the first part, we model the evolution of an optronic system by PDMPs. To optimize the maintenance of this equipment, we study an impulse control problem where both maintenance costs and the unavailability cost for the client are considered. We next apply a numerical method for the approximation of the value function associated with the impulse control problem, which relies on quantization of PDMPs. The influence of the parameters on the numerical results is discussed. In the second part, we extend the theoretical study of the impulse control problem by explicitly building a family of ε-optimal strategies. This approach is based on the iteration of a single-jump-or-intervention operator associated to the PDMP and relies on the theory for optimal stopping of a piecewise-deterministic Markov process by U.S. Gugerli. In the present situation, the main difficulty consists in approximating the best position after the interventions, which is done by introducing a new operator. The originality of the proposed approach is the construction of ε-optimal strategies that are explicit, since they do not require preliminary resolutions of complex problems.
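For orientation, the value function of a discounted impulse control problem of the kind studied here typically takes the generic form below; the notation is generic, and the thesis's precise cost structure and admissibility conditions may differ.

$$
V(x) \;=\; \inf_{\mathcal{S}=(\tau_n,\, y_n)_{n\ge 1}} \mathbb{E}_x\!\left[\int_0^{\infty} e^{-\alpha s} f(X_s)\, ds \;+\; \sum_{n\ge 1} e^{-\alpha \tau_n}\, c\big(X_{\tau_n^-},\, y_n\big)\right],
$$

where $\alpha>0$ is the discount factor, $f$ a running cost (here, the unavailability cost), $c$ the cost of an intervention (here, a maintenance operation) moving the process from $X_{\tau_n^-}$ to $y_n$, and the infimum runs over admissible strategies choosing the intervention times $\tau_n$ and restart points $y_n$.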
86

Estimation of the probability and uncertainty of undesirable events in large-scale systems / Estimation de la probabilité et l'incertitude des événements indésirables des grands systèmes

Hou, Yunhui 31 March 2016 (has links)
L’objectif de cette thèse est de construire un framework qui représente les incertitudes aléatoires et épistémiques, basé sur les approches probabilistes et les théories de l’incertain, de comparer ces méthodes et de trouver leurs applications appropriées dans les grands systèmes avec événements rares. Dans la thèse, une méthode de normalité asymptotique a été proposée avec simulation de Monte Carlo dans les cas binaires, ainsi qu'un modèle semi-markovien dans les cas de systèmes multi-états dynamiques. On a aussi appliqué la théorie des ensembles aléatoires comme modèle de base afin d’évaluer la fiabilité et les autres indicateurs de performance dans les systèmes binaires et multi-états avec la technique bootstrap. / Our research objective is to build frameworks representing both aleatory and epistemic uncertainties, based on probabilistic and uncertainty-theory approaches, to compare these methods and to find the proper application for these methods in large-scale systems with rare events. In this thesis, an asymptotic normality method is proposed with Monte Carlo simulation in the case of binary systems, as well as a semi-Markov model for the case of dynamic multistate systems. We also apply random set theory as a basic model to evaluate system reliability and other performance indices on binary and multistate systems with the bootstrap technique.
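A minimal sketch of the asymptotic-normality idea for a crude Monte Carlo estimate of a failure probability is given below; the toy 2-out-of-3 system and all numerical values are placeholders, not the case studies of the thesis.

```python
import numpy as np

def mc_probability_ci(sample_failure, n=100_000, seed=0):
    """Crude Monte Carlo estimate of a failure probability together with the
    CLT-based (asymptotic normality) 95% confidence interval.
    `sample_failure(rng)` returns 1 if the sampled system state is a failure,
    0 otherwise."""
    rng = np.random.default_rng(seed)
    hits = np.fromiter((sample_failure(rng) for _ in range(n)), dtype=float, count=n)
    p_hat = hits.mean()
    se = np.sqrt(p_hat * (1.0 - p_hat) / n)   # estimated standard error
    return p_hat, (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Toy binary system (a placeholder): it fails when at least 2 of its
# 3 independent components fail, each with probability 0.01.
def toy_failure(rng, p_comp=0.01):
    return int((rng.random(3) < p_comp).sum() >= 2)

# Example: p_hat, ci = mc_probability_ci(toy_failure)
```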
87

Analysis And Optimization Of Queueing Models With Markov Modulated Poisson Input

Hemachandra, Nandyala 06 1900 (has links) (PDF)
No description available.
88

Mobile data and computation offloading in mobile cloud computing

Liu, Dongqing 07 1900 (has links)
No description available.
89

An Interacting Particle System for Collective Migration

Klauß, Tobias 21 October 2008 (has links)
Kollektive Migration und Schwarmverhalten sind Beispiele für Selbstorganisation und können in verschiedenen biologischen Systemen beobachtet werden, beispielsweise in Vogel- und Fischschwärmen oder Bakterienpopulationen. Im Zentrum dieser Arbeit steht ein räumlich diskretes und zeitlich stetiges Modell, welches das kollektive Migrieren von Individuen mittels eines stochastischen Vielteilchensystems (VTS) beschreibt und analysierbar macht. Das konstruierte Modell ist in keiner Klasse gut untersuchter Vielteilchensysteme enthalten, sodass der größte Teil der Arbeit der Entwicklung von Methoden zur Untersuchung des Langzeitverhaltens bestimmter VTS gewidmet ist. Eine entscheidende Rolle spielen hier Gibbs-Maße, die zu zeitlich invarianten Maßen in Beziehung gesetzt werden. Durch eine Simulationsstudie und die Analyse des Einflusses der Parameter Migrationsgeschwindigkeit, Sensitivität der Individuen und (räumliche) Dichte der Anfangsverteilung können Eigenschaften kollektiver Migration erklärt und Hypothesen für weitere Analysen aufgestellt werden. / Collective migration and swarming behavior are examples of self-organization and can be observed in various biological systems, such as in flocks of birds, schools of fish or populations of bacteria. In the center of this thesis lies a stochastic interacting particle system (IPS), which is a spatially discrete model with a continuous time scale that describes collective migration and which can be treated using analytical methods. The constructed model is not contained in any class of well-understood IPSs. The largest part of this work is devoted to developing methods to study the long-term behavior of certain IPSs. Gibbs measures play an important role here and are related to temporally invariant measures. One can explain the properties of collective migration and propose hypotheses for further analyses by means of a simulation study and by analysing the parameters migration velocity, sensitivity of individuals and (spatial) density of the initial distribution.
90

Modeling ambulance dispatching rules for EMS-systems / Modellering av dirigeringsstrategier för EMS-system

Knoops, Lorinde, Lundgren, Tilda January 2016 (has links)
This thesis presents a study on efficient dispatching rules in ambulance dispatching. By efficient dispatching rules, we mean dispatching rules that lower response times for priority 1 calls while keeping response times for priority 2 calls at an adequate level. A Markov process and a simulation model were developed in order to evaluate the performance of several existing and newly designed dispatching rules. In four different response areas, five different dispatching rules were tested and their performances were compared. Particular focus was put upon the dispatch rule currently used by the Swedish emergency service provider SOS Alarm: the Closest rule. Our findings indicate that the four priority-based dispatching rules all outperform the Closest rule in decreasing the mean response time for calls of priority degree 1. Furthermore, implementing restrictions on the travel time for priority 2 calls was shown to be an efficient way to control the trade-off between the mean response times of priority 1 and 2 calls. The conclusion was drawn that the possibilities for more efficient ambulance dispatching are many and that SOS Alarm should consider implementing priority-based dispatching rules, like the ones presented in this thesis, in their dispatching process. A study of the ambulance operator and controller profession, and the operator's and controller's interplay with the decision support system used by SOS Alarm in the ambulance dispatching process, was conducted in parallel. The properties of the interaction dynamics between operator and automation, and the dangers linked to them, were mapped out, described and analyzed. / Denna kandidatexamensuppsats behandlar effektiva dirigeringsstrategier inom ambulansdirigering. Effektiva dirigeringsstrategier åsyftar dirigeringsstrategier som lyckas sänka svarstiden för inkommande prioritet 1-samtal, samtidigt som svarstiden för prioritet 2-samtal hålls på en tillfredsställande nivå. I syfte att utvärdera olika dirigeringsstrategier utvecklades både en Markovsk modell och en simuleringsmodell. På fyra olika geografiska områden testades och jämfördes fem olika dirigeringsstrategier, varav två existerande och tre nyutvecklade. Särskilt fokus riktades mot Closest rule, vilket är den dirigeringsstrategi som används i SOS Alarms verksamhet idag. Från resultaten kunde utläsas att de prioritetsbaserade dirigeringsstrategierna resulterade i en lägre genomsnittlig svarstid för prioritet 1-fall än Closest rule. Dessutom konstaterades det att en begränsning av svarstiderna för prioritet 2-samtal var ett effektivt sätt att kontrollera balansen mellan de genomsnittliga svarstiderna för samtal av prioritet 1 respektive 2. Slutsatsen drogs att möjligheterna för att utveckla nya effektiva dirigeringsstrategier är många och att SOS Alarm bör överväga att implementera prioritetsbaserade dirigeringsstrategier likt dem som presenterats i denna uppsats. Parallellt studerades ambulansoperatörens och -dirigentens yrkeskunnande, samt operatörens och dirigentens samspel med det beslutsstödssystem som används i SOS Alarms dirigeringsverksamhet. Interaktionen mellan operatör och automatisering samt de relaterade riskerna kartlades, beskrevs och analyserades.
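To make the comparison above concrete, here is a toy sketch of the baseline Closest rule alongside one plausible reading of a priority-based rule with a travel-time restriction for priority 2 calls. The ambulance identifiers, the 20-minute limit and the "farthest unit within the limit" heuristic are illustrative assumptions, not SOS Alarm's actual rules nor the exact designs of the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class Call:
    priority: int                                      # 1 (urgent) or 2
    travel_time: dict = field(default_factory=dict)    # minutes from each idle ambulance

def closest_rule(idle, call):
    """Baseline: always send the idle ambulance with the shortest travel time."""
    return min(idle, key=lambda a: call.travel_time[a])

def priority_rule(idle, call, p2_limit=20.0):
    """Priority-based rule (one possible design): priority 1 calls get the
    closest idle ambulance; a priority 2 call gets the farthest idle ambulance
    that still arrives within `p2_limit` minutes, keeping nearer units free
    for future priority 1 calls.  Falls back to the closest unit otherwise."""
    if call.priority == 1:
        return closest_rule(idle, call)
    within = [a for a in idle if call.travel_time[a] <= p2_limit]
    return max(within, key=lambda a: call.travel_time[a]) if within else closest_rule(idle, call)

# Example: a priority 2 call with two idle ambulances.
# call = Call(priority=2, travel_time={"A1": 5.0, "A2": 15.0})
# priority_rule(["A1", "A2"], call)   # -> "A2", keeping A1 free for priority 1 calls
```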
