41

Properties of greedy trees

Razanajatovo Misanantenaina, Valisoa 12 1900 (has links)
Thesis (MSc)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: A greedy tree is constructed from a given degree sequence using a simple greedy algorithm that assigns the highest degree to the root, the second-, third-, . . . , -highest degrees to the root’s neighbours, etc. This particular tree is the solution to numerous extremal problems among all trees with given degree sequence. In this thesis, we collect results for some distance-based graph invariants, the number of subtrees and the spectral radius in which greedy trees play a major role. We show that greedy trees are extremal for the aforementioned graph invariants by means of two different approaches, one using level greedy trees and majorization, while the other one is somewhat more direct. Finally, we prove some new results on greedy trees for additive parameters with specific toll functions. / AFRIKAANSE OPSOMMING: ’n Gulsige boom word vanuit ’n gegewe graadry deur middel van ’n eenvoudige gulsige algoritme gebou, wat die hoogste graad aan die wortel toewys, die tweede-, derde-, . . . , -hoogste graad aan die wortel se bure, ens. Hierdie spesifieke boom is die oplossing van ’n groot aantal ekstremale probleme onder al die bome met gegewe graadry. In hierdie tesis beskou ons ’n versameling van resultate oor afstand-gebaseerde grafiekinvariante, die aantal subbome en die spektraalstraal waarin gulsige bome ’n belangrike rol speel. Ons bewys dat gulsige bome ekstremaal vir die bogenoemde grafiekinvariante is deur van twee verskillende tegnieke gebruik te maak: een met behulp van vlak-gulsige bome en majorering, en ’n ander metode wat effens meer direk is. Laastens bewys ons sommige nuwe resultate oor gulsige bome vir additiewe parameters met spesifieke tolfunksies.
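The greedy construction described above is easy to make concrete. The following sketch (illustrative only; the function name and adjacency-list representation are not from the thesis) builds a greedy tree from a tree degree sequence by assigning the sorted degrees level by level, starting from the root:

```python
from collections import deque

def greedy_tree(degree_sequence):
    """Build a greedy tree from a tree degree sequence.

    The highest degree goes to the root; the next-highest degrees are
    assigned to the root's neighbours, then to their neighbours, and so
    on, level by level. Returns {vertex: [neighbours]}."""
    degrees = sorted(degree_sequence, reverse=True)
    n = len(degrees)
    # A positive integer sequence is a tree degree sequence iff it sums to 2(n - 1).
    if sum(degrees) != 2 * (n - 1):
        raise ValueError("not a valid tree degree sequence")
    tree = {0: []}
    slots = {0: degrees[0]}      # remaining free slots at each vertex
    queue = deque([0])           # vertices whose children are still to be attached
    next_vertex = 1
    while next_vertex < n:
        v = queue.popleft()
        while slots[v] > 0 and next_vertex < n:
            u = next_vertex
            next_vertex += 1
            tree[v].append(u)
            tree[u] = [v]
            slots[u] = degrees[u] - 1   # one slot is used by the edge to the parent
            slots[v] -= 1
            queue.append(u)
    return tree
```

For the sequence (3, 2, 2, 1, 1, 1), for instance, the root receives degree 3 and its three neighbours receive degrees 2, 2 and 1, exactly as the abstract describes.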
42

EFFICIENT GREEDY-FACE-GREEDY GEOGRAPHIC ROUTING PROTOCOLS IN MOBILE AD HOC AND SENSOR NETWORKS

Sun, Yan 01 January 2012 (has links)
This thesis describes and develops two planarization algorithms for geographic routing and a geographic routing protocol for mobile ad hoc and sensor networks. As all nodes are mobile and there is no fixed infrastructure, the design of routing protocols is one of the most challenging issues in mobile ad hoc and sensor networks. In recent years, greedy-face-greedy (GFG) geographic routing protocols have been widely used, which need nodes to construct planar graphs as the underlying graphs for face routing. Two kinds of planarization algorithms have been developed: idealized and realistic planarization algorithms. The idealized planarization algorithms make the ideal assumption that the original network graph is a unit-disk graph (UDG). On the other hand, the realistic planarization algorithms do not need the original network to be a UDG. We propose an idealized planarization algorithm, which constructs an Edge Constrained Localized Delaunay graph (ECLDel). Compared to the existing planarized localized Delaunay graph [42], the construction of an ECLDel graph is far simpler, which reduces the communication cost and saves network bandwidth. We propose a Pre-Processed Cross Link Detection Protocol (PPCLDP), which generates a planar spanning subgraph of the original network graph in realistic environments with obstacles. The proposed PPCLDP outperforms the existing Cross Link Detection Protocol [32] with much lower communication cost and better convergence time. In GFG routing protocols, greedy routing may fail at concave nodes, in which case face routing is applied to recover from the greedy routing failure. This may cause extra hops in routing in networks containing voids. We propose a Hill-Area-Restricted (HAR) routing protocol, which avoids the extra hops taken in the original GFG routing. Compared to the existing Node Elevation Ad hoc Routing [4], the proposed HAR guarantees packet delivery and greatly decreases the communication cost.
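The greedy mode of a GFG protocol is simple to illustrate. The sketch below (an illustrative assumption, not one of the thesis's protocols; node positions and neighbour lists are hypothetical inputs) forwards a packet to the neighbour geographically closest to the destination and reports failure at a concave node, which is exactly where face routing would take over:

```python
import math

def greedy_forward(positions, neighbours, src, dst):
    """Greedy geographic forwarding: at each hop, move to the neighbour
    closest to the destination. Returns (path, True) on delivery, or the
    partial path and False at a concave node (local minimum), where a
    GFG protocol would switch to face routing."""
    def dist(a, b):
        return math.dist(positions[a], positions[b])
    path = [src]
    current = src
    while current != dst:
        # candidate: the neighbour nearest to the destination
        best = min(neighbours[current], key=lambda v: dist(v, dst), default=None)
        if best is None or dist(best, dst) >= dist(current, dst):
            return path, False   # no strictly closer neighbour: greedy failure
        current = best
        path.append(current)
    return path, True
```

Since every hop moves strictly closer to the destination, the loop always terminates, but in a network with a void it can end at a concave node without reaching the destination.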
43

On Directed Random Graphs and Greedy Walks on Point Processes

Gabrysch, Katja January 2016 (has links)
This thesis consists of an introduction and five papers, of which two contribute to the theory of directed random graphs and three to the theory of greedy walks on point processes.

We consider a directed random graph on a partially ordered vertex set, with an edge between any two comparable vertices present with probability p, independently of all other edges, and each edge is directed from the vertex with smaller label to the vertex with larger label. In Paper I we consider a directed random graph on ℤ² with the vertices ordered according to the product order and we show that the limiting distribution of the centered and rescaled length of the longest path from (0,0) to (n, [nᵃ]), a < 3/14, is the Tracy-Widom distribution. In Paper II we show that, under a suitable rescaling, the closure of vertex 0 of a directed random graph on ℤ with edge probability n⁻¹ converges in distribution to the Poisson-weighted infinite tree. Moreover, we derive limit theorems for the length of the longest path of the Poisson-weighted infinite tree.

The greedy walk is a deterministic walk on a point process that always moves from its current position to the nearest not yet visited point. Since the greedy walk on a homogeneous Poisson process on the real line, starting from 0, almost surely does not visit all points, in Paper III we find the distribution of the number of visited points on the negative half-line and the distribution of the index at which the walk achieves its minimum. In Paper IV we place homogeneous Poisson processes first on two intersecting lines and then on two parallel lines and we study whether the greedy walk visits all points of the processes. In Paper V we consider the greedy walk on an inhomogeneous Poisson process on the real line and we determine sufficient and necessary conditions on the mean measure of the process for the walk to visit all points.
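The greedy walk itself is straightforward to simulate on a finite point sample. A minimal illustrative sketch (not from the thesis; the tie-breaking rule is an assumption for determinism):

```python
def greedy_walk(points, start=0.0):
    """Run the greedy walk on a finite point set on the real line:
    from the current position, always move to the nearest point not yet
    visited. Ties are broken toward the smaller point. Returns the
    order in which the points are visited."""
    remaining = sorted(points)
    position = start
    order = []
    while remaining:
        # nearest unvisited point; tie broken by value
        nxt = min(remaining, key=lambda p: (abs(p - position), p))
        remaining.remove(nxt)
        order.append(nxt)
        position = nxt
    return order
```

On an infinite homogeneous Poisson process the analogous walk almost surely drifts off to one side, which is the phenomenon Paper III quantifies.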
44

On List-Coloring and the Sum List Chromatic Number of Graphs.

Hall, Coleman 15 April 2011 (has links)
This thesis explores several of the major results in list-coloring in an expository fashion. As a specialization of list coloring, the sum list chromatic number is explored in detail. Ultimately, the thesis is designed to motivate the discussion of coloring problems and, hopefully, to interest the reader in this branch of graph theory.
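A classical greedy argument underlies many introductory list-coloring results: color the vertices in any order, giving each a color from its own list that no already-colored neighbour uses; this always succeeds when every list is larger than the vertex's degree. A minimal sketch (illustrative, not from the thesis):

```python
def greedy_list_coloring(adj, lists):
    """Greedily assign each vertex a color from its own list avoiding
    the colors of already-colored neighbours. Guaranteed to succeed
    when |lists[v]| > deg(v) for every v; returns None if it gets
    stuck, e.g. when lists are too small."""
    coloring = {}
    for v in adj:
        used = {coloring[u] for u in adj[v] if u in coloring}
        available = [c for c in lists[v] if c not in used]
        if not available:
            return None
        coloring[v] = available[0]
    return coloring
```

The interesting cases studied in list-coloring are precisely those where lists are shorter than this greedy bound and such a simple procedure no longer suffices.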
45

Resilient dynamic state estimation in the presence of false information injection attacks

Lu, Jingyang 01 January 2016 (has links)
The impact of false information injection is investigated for linear dynamic systems with multiple sensors. First, it is assumed that the system is unaware of the existence of false information and the adversary is trying to maximize the negative effect of the false information on Kalman filter's estimation performance under a power constraint. The false information attack under different conditions is mathematically characterized. For the adversary, many closed-form results for the optimal attack strategies that maximize the Kalman filter's estimation error are theoretically derived. It is shown that by choosing the optimal correlation coefficients among the false information and allocating power optimally among sensors, the adversary could significantly increase the Kalman filter's estimation errors. In order to detect the false information injected by an adversary, we investigate the strategies for the Bayesian estimator to detect the false information and defend itself from such attacks. We assume that the adversary attacks the system with certain probability, and that he/she adopts the worst possible strategy that maximizes the mean squared error (MSE) if the attack is undetected. An optimal Bayesian detector is designed which minimizes the average system estimation error instead of minimizing the probability of detection error, as a conventional Bayesian detector typically does. The case that the adversary attacks the system continuously is also studied. In this case, sparse attack strategies in multi-sensor dynamic systems are investigated from the adversary's point of view. It is assumed that the defender can perfectly detect and remove the sensors once they are corrupted by false information injected by an adversary. 
The adversary's goal is to maximize the covariance matrix of the system state estimate by the end of the attack period under the constraint that the adversary can only attack the system a limited number of times across sensors and time, which leads to an integer programming problem. In order to overcome the prohibitive complexity of the exhaustive search, polynomial-time algorithms, such as greedy search and dynamic programming, are proposed to find suboptimal attack strategies. Greedy search starts with an empty set and, at each iteration, adds the sensor whose elimination leads to the maximum system estimation error. The process terminates when the cardinality of the active set reaches the sparsity constraint. Greedy search based approaches such as sequential forward selection (SFS), sequential backward selection (SBS), and simplex improved sequential forward selection (SFS-SS) are discussed and corresponding attack strategies are provided. Dynamic programming is also used to obtain the sub-optimal attack strategy. The validity of dynamic programming relies on a straightforward but important property of dynamic state estimation systems: the credibility of the state estimate at the current step is in accordance with that at the previous step. The problem of false information attacks on, and the Kalman filter's defense of, state estimation in dynamic multi-sensor systems is also investigated from a game theoretic perspective. The relationship between the Kalman filter and the adversary can be regarded as a two-person zero-sum game. The condition under which both sides of the game will reach a Nash equilibrium is investigated.
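The SFS-style greedy search described above is an instance of generic forward selection under a cardinality constraint. A hedged sketch (the function name and the objective interface are illustrative assumptions, not the thesis's formulation; `objective` would map a set of attacked sensors to the resulting estimation-error score):

```python
def greedy_subset(candidates, objective, k):
    """Sequential forward selection: start from the empty set and
    repeatedly add the candidate that maximizes the objective, until
    the cardinality constraint k is reached. Each iteration evaluates
    the objective once per remaining candidate, so the whole search is
    polynomial, unlike exhaustive search over all size-k subsets."""
    selected = []
    for _ in range(k):
        remaining = [c for c in candidates if c not in selected]
        if not remaining:
            break
        best = max(remaining, key=lambda c: objective(selected + [c]))
        selected.append(best)
    return selected
```

SBS would run the same scheme in reverse, starting from the full set and removing the least harmful sensor at each step.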
46

Architecture autonome et distribuée d’adressage et de routage pour la flexibilité des communications dans l’internet / Autonomous and distributed architecture for addressing and routing to improve the flexibility of communications in internet

Cassagnes, Cyril 12 November 2012 (has links)
Les schémas de routage locaux basés sur des coordonnées prises dans le plan hyperbolique ont attiré un intérêt croissant depuis quelques années. Cependant, les solutions proposées sont toutes appliquées à des réseaux à la topologie aléatoire et au nombre de nœuds limité. Dans le même temps, plusieurs travaux se sont concentrés sur la création de modèles topologiques basés sur les lois de la géométrie hyperbolique. Dans ce cas, il est montré que les graphes ont des topologies semblables à Internet et qu'un routage local hyperbolique atteint une efficacité proche de la perfection. Cependant, ces graphes ne garantissent pas le taux de réussite du routage même si aucune panne ne se produit. Dans cette thèse, l'objectif est de construire un système passant à l'échelle pour la création de réseau recouvrant capable de fournir à ses membres un service d'adressage et de routage résilient dans un environnement dynamique. Ensuite, nous étudions de quelle manière les réseaux P2PTV pourraient supporter un nombre d'utilisateurs croissant. Dans cette thèse, nous essayons de répondre à cette question en étudiant les facteurs d'efficacité et de passage à l'échelle dans un système de diffusion vidéo P2P typique. Au travers des données fournies par Zattoo, producteur de réseau P2PTV, nous réalisons des simulations dont les résultats montrent qu'il y a encore des obstacles à surmonter avant que les réseaux P2P de diffusion vidéo puissent dépendre uniquement de leurs utilisateurs. / Local routing schemes based on virtual coordinates taken from the hyperbolic plane have attracted considerable interest in recent years. However, solutions have been applied to ad-hoc and sensor networks having a random topology and a limited number of nodes. At the same time, some research has focused on the creation of network topology models based on hyperbolic geometric laws.
In this case, it has been shown that these graphs have an Internet-like topology and that local hyperbolic routing achieves a near perfect efficiency. However, with these graphs, routing success is not guaranteed even if no failures happen. In this thesis, we aim at building a scalable system for creating overlay networks on top of the Internet that would provide reliable addressing and routing service to its members in a dynamic environment. Next, we investigate how well P2PTV networks would support a growing number of users. In this thesis, we try to address this question by studying scalability and efficiency factors in a typical P2P based live streaming network. Through the use of the data provided by Zattoo, a production P2PTV network, we carry out simulations whose results show that there are still hurdles to overcome before P2P based live streaming networks can depend solely on their users.
47

Quelques modèles mathématiques en chimie quantique et propagation d'incertitudes / Some mathematical models in quantum chemistry and uncertainty quantification

Ehrlacher, Virginie 12 July 2012 (has links)
Ce travail comporte deux volets. Le premier concerne l'étude de défauts locaux dans des matériaux cristallins. Le chapitre 1 donne un bref panorama des principaux modèles utilisés en chimie quantique pour le calcul de structures électroniques. Dans le chapitre 2, nous présentons un modèle variationnel exact qui permet de décrire les défauts locaux d'un cristal périodique dans le cadre de la théorie de Thomas-Fermi-von Weiszäcker. Celui-ci est justifié à l'aide d'arguments de limite thermodynamique. On montre en particulier que les défauts modélisés par cette théorie ne peuvent pas être chargés électriquement. Les chapitres 3 et 4 de cette thèse traitent du phénomène de pollution spectrale. En effet, lorsqu'un opérateur est discrétisé, il peut apparaître des valeurs propres parasites, qui n'appartiennent pas au spectre de l'opérateur initial. Dans le chapitre 3, nous montrons que des méthodes d'approximation de Galerkin via une discrétisation en éléments finis pour approcher le spectre d'opérateurs de Schrödinger périodiques perturbés sont sujettes au phénomène de pollution spectrale. Par ailleurs, les vecteurs propres associés aux valeurs propres parasites peuvent être interprétés comme des états de surface. Nous prouvons qu'il est possible d'éviter ce problème en utilisant des espaces d'éléments finis augmentés, construits à partir des fonctions de Wannier associées à l'opérateur de Schrödinger périodique non perturbé. On montre également que la méthode dite de supercellule, qui consiste à imposer des conditions limites périodiques sur un domaine de simulation contenant le défaut, ne produit pas de pollution spectrale. Dans le chapitre 4, nous établissons des estimations d'erreur a priori pour la méthode de supercellule. En particulier, nous montrons que l'erreur effectuée décroît exponentiellement vite en fonction de la taille de la supercellule considérée. 
Un deuxième volet concerne l'étude d'algorithmes gloutons pour résoudre des problèmes de propagation d'incertitudes en grande dimension. Le chapitre 5 de cette thèse présente une introduction aux méthodes numériques classiques utilisées dans le domaine de la propagation d'incertitudes, ainsi qu'aux algorithmes gloutons. Dans le chapitre 6, nous prouvons que ces algorithmes peuvent être appliqués à la minimisation de fonctionnelles d'énergie fortement convexes non linéaires et que leur vitesse de convergence est exponentielle en dimension finie. Nous illustrons ces résultats par la résolution de problèmes de l'obstacle avec incertitudes via une formulation pénalisée. / The contributions of this thesis work are twofold. The first part deals with the study of local defects in crystalline materials. Chapter 1 gives a brief overview of the main models used in quantum chemistry for electronic structure calculations. In Chapter 2, an exact variational model for the description of local defects in a periodic crystal in the framework of the Thomas-Fermi-von Weiszäcker theory is presented. It is justified by means of thermodynamic limit arguments. In particular, it is proved that the defects modeled within this theory are necessarily neutrally charged. Chapters 3 and 4 are concerned with the so-called spectral pollution phenomenon. Indeed, when an operator is discretized, spurious eigenvalues which do not belong to the spectrum of the initial operator may appear. In Chapter 3, we prove that standard Galerkin methods with finite elements discretization for the approximation of perturbed periodic Schrödinger operators are prone to spectral pollution. Besides, the eigenvectors associated with spurious eigenvalues can be characterized as surface states. It is possible to circumvent this problem by using augmented finite element spaces, constructed with the Wannier functions of the periodic unperturbed Schrödinger operator.
We also prove that the supercell method, which consists in imposing periodic boundary conditions on a large simulation domain containing the defect, does not produce spectral pollution. In Chapter 4, we give a priori error estimates for the supercell method. It is proved in particular that the rate of convergence of the method scales exponentially with respect to the size of the supercell. The second part of this thesis is devoted to the study of greedy algorithms for the resolution of high-dimensional uncertainty quantification problems. Chapter 5 presents the most classical numerical methods used in the field of uncertainty quantification and an introduction to greedy algorithms. In Chapter 6, we prove that these algorithms can be applied to the minimization of strongly convex nonlinear energy functionals and that their convergence rate is exponential in the finite-dimensional case. We illustrate these results on obstacle problems with uncertainty via penalized formulations.
48

Optimal Bayesian estimators for latent variable cluster models

Rastelli, Riccardo, Friel, Nial 11 1900 (has links) (PDF)
In cluster analysis interest lies in probabilistically capturing partitions of individuals, items or observations into groups, such that those belonging to the same group share similar attributes or relational profiles. Bayesian posterior samples for the latent allocation variables can be effectively obtained in a wide range of clustering models, including finite mixtures, infinite mixtures, hidden Markov models and block models for networks. However, due to the categorical nature of the clustering variables and the lack of scalable algorithms, summary tools that can interpret such samples are not available. We adopt a Bayesian decision theoretical approach to define an optimality criterion for clusterings and propose a fast and context-independent greedy algorithm to find the best allocations. One important facet of our approach is that the optimal number of groups is automatically selected, thereby solving the clustering and the model-choice problems at the same time. We consider several loss functions to compare partitions and show that our approach can accommodate a wide range of cases. Finally, we illustrate our approach on both artificial and real datasets for three different clustering models: Gaussian mixtures, stochastic block models and latent block models for networks.
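A greedy optimization of this kind can be sketched on a posterior similarity matrix: sweep over the items and reassign each to the label, possibly a brand-new one, that most decreases a Binder-type loss. This is an illustrative sketch in the spirit of the paper's algorithm, not its actual implementation; the loss, the sweep order, and the stopping rule are all assumptions:

```python
def greedy_cluster(psm, z, n_sweeps=10):
    """Greedy minimization of a Binder-type loss given a posterior
    similarity matrix psm, where psm[i][j] is the posterior probability
    that items i and j belong to the same group. Each sweep reassigns
    every item to the best existing group or to a new one, so the number
    of groups is selected automatically. Stops at a full sweep with no
    changes or after n_sweeps."""
    n = len(z)
    z = list(z)
    for _ in range(n_sweeps):
        changed = False
        for i in range(n):
            labels = set(z[:i] + z[i + 1:])
            best_label, best_score = z[i], None
            # also consider opening a fresh group for item i
            for g in labels | {max(labels, default=-1) + 1}:
                # Binder-type contribution of item i under label g:
                # pay (1 - psm) for co-clustered pairs, psm for split pairs
                score = sum(
                    (1 - psm[i][j]) if g == z[j] else psm[i][j]
                    for j in range(n) if j != i
                )
                if best_score is None or score < best_score:
                    best_label, best_score = g, score
            if best_label != z[i]:
                z[i], changed = best_label, True
        if not changed:
            break
    return z
```

On a similarity matrix with two clearly separated blocks, the sweep merges the items of each block and leaves the blocks apart, without the number of groups being fixed in advance.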
49

A study on shop scheduling problems / Um estudo sobre escalonamento de processos

Zubaran, Tadeu Knewitz January 2018 (has links)
Escalonamento de processos é um tipo de problema de otimização combinatória no qual devemos alocar máquinas a tarefas por períodos específicos de tempo. A literatura contém diversos estudos propondo técnicas para resolver modelos de escalonamento de processos como o job shop e o open shop. Esses modelos permitem que os passos no processo produtivo sejam ou completamente ordenados ou sem ordenação alguma. Com o aumento da complexidade das aplicações industriais nós encontramos, mais recentemente, diversos trabalhos que propõem problemas de escalonamento de processos mais gerais para modelar mais precisamente os processos produtivos. O mixed shop, group shop e partial shop são exemplos de tais modelos. Nesse trabalho nós propomos uma busca tabu iterada para o partial shop, que é um modelo geral que inclui diversos modelos mais restritivos. Os componentes novos mais importantes da técnica são o gerador de solução inicial, a vizinhança e o limite inferior para a vizinhança. Em experimentos computacionais nós conseguimos demonstrar que a heurística genérica e única é capaz de competir, e às vezes superar, as técnicas de estado de arte desenvolvidas especificamente para partial, open, mixed e group shop. Algumas vezes uma máquina é o gargalo de um processo produtivo, e é replicada. Na literatura o caso das máquinas paralelas foi incluído em diversas extensões de problemas de escalonamento de processos. Nessa tese nós também propomos uma técnica para escalonar as máquinas paralelas, sem incluí-las explicitamente na representação do problema. Nós usamos técnicas gerais para os casos sem máquinas paralelas para produzir uma busca heurística tabu rápida, e estado da arte, para o caso do job shop com máquinas paralelas. / Shop scheduling is a combinatorial optimization type of problem in which we must allocate machines to jobs for specific periods of time.
A set of constraints defines which schedules are valid, and we must select one that minimizes or maximizes an objective function. In this work we use the makespan, which is the time at which the last job finishes. The literature contains several studies proposing techniques to solve shop problems such as the job shop and open shop. These problems allow the steps of the production processes to be either fully ordered or not ordered at all. With the increasing complexity and size of industrial applications we find, more recently, several works which propose more general shop problems to model the production processes more accurately. The mixed shop, group shop and partial shop are examples of such problems. In this work we propose an iterated tabu search for the partial shop, which is a general problem and includes several other more restrictive shop problems. The most important novel components of the solver are the initial solution generator, the neighbourhood, and the lower bound for the neighbourhood. In computational experiments we were able to show that the general partial shop solver is able to compete with, and sometimes surpass, the state-of-the-art solvers developed specifically for the partial, open, mixed and group shops. Sometimes a machine is a bottleneck in the production process, and is replicated. In the literature the parallel machines case has been included in several extensions of shop problems. In this thesis we also propose a technique to schedule the parallel machines heuristically, without including them explicitly in the representation of the problem. We use general techniques for the non-parallel machine cases to produce a fast, state-of-the-art tabu search heuristic for the job shop with parallel machines.
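The iterated tabu search named above builds on the standard tabu search skeleton: always move to the best non-tabu neighbour, even uphill, while recently used moves stay forbidden for a fixed tenure so the search can escape local optima. A generic illustrative sketch (not the thesis's solver; the state, move, and cost interfaces are assumptions):

```python
def tabu_search(initial, neighbours, cost, tenure=7, max_iters=100):
    """Generic tabu search for minimizing `cost`. `neighbours(s)` yields
    (move, new_state) pairs; a move just performed is tabu for `tenure`
    iterations, except when it improves on the best cost found so far
    (the classic aspiration criterion). Returns the best state seen."""
    current = best = initial
    best_cost = cost(initial)
    tabu = {}   # move -> iteration until which it is forbidden
    for it in range(max_iters):
        candidates = [
            (move, s) for move, s in neighbours(current)
            if tabu.get(move, -1) < it or cost(s) < best_cost  # aspiration
        ]
        if not candidates:
            break
        move, current = min(candidates, key=lambda ms: cost(ms[1]))
        tabu[move] = it + tenure
        if cost(current) < best_cost:
            best, best_cost = current, cost(current)
    return best, best_cost
```

An *iterated* tabu search additionally restarts this inner loop from perturbed copies of the best schedule found, which is the outer layer the thesis adds on top of its partial-shop neighbourhood.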
50

Essays on the Macroeconomic Implications of Information Asymmetries

Malherbe, Frédéric 02 September 2010 (has links)
Throughout this dissertation I propose to walk the reader through several macroeconomic implications of information asymmetries, with a special focus on financial issues. This exercise is mainly theoretical: I develop stylized models that aim at capturing macroeconomic phenomena such as self-fulfilling liquidity dry-ups, the rise and fall of securitization markets, and the creation of systemic risk. The dissertation consists of three chapters. The first one proposes an explanation for self-fulfilling liquidity dry-ups. The second chapter proposes a formalization of the concept of market discipline and an application to securitization markets as risk-sharing mechanisms. The third one offers a complementary analysis to the second, as the rise of securitization is presented as the banker's optimal response to strict capital constraints. Two concepts that do not have unique accepted meanings in economics play a central role in these models: liquidity and market discipline. The liquidity of an asset refers to the ability of its owner to transform it into current consumption goods. Secondary markets for long-term assets thus play an important role in that respect. However, such markets might be illiquid due to adverse selection. In the first chapter, I show that: (1) when agents expect a liquidity dry-up on such markets, they optimally choose to self-insure through the hoarding of non-productive but liquid assets; (2) this hoarding behavior worsens adverse selection and dries up market liquidity; (3) such liquidity dry-ups are Pareto inefficient equilibria; (4) the government can rule them out. Additionally, I show that idiosyncratic liquidity shocks à la Diamond and Dybvig have stabilizing effects, which is at odds with the banking literature. The main contribution of the chapter is to show that market breakdowns due to adverse selection are highly endogenous to past balance-sheet decisions.
I consider that agents are under market discipline when their current behavior is influenced by future market outcomes. A key ingredient for market discipline to be at play is that the market outcome depends on information that is observable but not verifiable (that is, information that cannot be proved in court, and consequently, upon which enforceable contracts cannot be based). In the second chapter, after introducing this novel formalization of market discipline, I ask whether securitization really contributes to better risk-sharing: I compare it with other mechanisms that differ in the timing of risk-transfer. I find that for securitization to be an efficient risk-sharing mechanism, it requires market discipline to be strong and adverse selection not to be severe. This seems to seriously restrict the set of assets that should be securitized for risk-sharing motives. Additionally, I show how ex-ante leverage may mitigate interim adverse selection in securitization markets and therefore enhance ex-post risk-sharing. This is interesting because high leverage is usually associated with “excessive” risk-taking. In the third chapter, I consider risk-neutral bankers facing strict capital constraints; their capital is indeed required to cover the worst-case-scenario losses. In such a set-up, I find that: 1) the banker's optimal autarky response is to diversify lower-tail risk and maximize leverage; 2) securitization helps to free up capital and to increase leverage, but distorts incentives to screen loan applicants properly; 3) market discipline mitigates this problem, but if it is overestimated by the supervisor, it leads to excess leverage, which creates systemic risk. Finally, I consider opaque securitization and I show that the supervisor: 4) faces uncertainty about the trade-off between the size of the economy and the probability and the severity of a systemic crisis; 5) generally cannot set capital constraints at the socially efficient level.
