1061

Approches "problèmes inverses" régularisées pour l'imagerie sans lentille et la microscopie holographique en ligne / Regularized inverse problems approaches for lensless imaging and in-line holographic microscopy

Jolivet, Frederic 13 April 2018 (has links)
In digital imaging, regularized inverse-problems approaches reconstruct the information of interest from measurements and an image-formation model. Because the inverse problem is ill-posed and ill-conditioned, and the image-formation model provides few constraints, priors must be introduced to restrict the ambiguity of the inversion and guide the reconstruction towards a satisfactory solution. This thesis develops reconstruction algorithms for digital holograms based on large-scale (smooth and non-smooth) optimization methods. This general framework allowed us to propose approaches adapted to the challenges raised by this unconventional imaging technique: super-resolution, reconstruction outside the sensor's field of view, color holography, and the quantitative reconstruction of phase objects (i.e., transparent objects). In the last case, the reconstruction problem consists of estimating the complex 2D transmittance of objects that absorbed and/or phase-shifted the illumination wave during the recording of the hologram. The proposed methods are validated with numerical simulations and then applied to experimental data from lensless imaging and from in-line holographic microscopy (coherent imaging in transmission, with a microscope objective). The applications range from the reconstruction of opaque resolution targets to the reconstruction of biological objects (bacteria), as well as the reconstruction of evaporating ether droplets in a study of turbulence in fluid mechanics.
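The regularized inverse-problem formulation above can be sketched in a few lines: recover an object x from measurements y = Hx + noise by minimizing a data-fidelity term plus a prior. The forward model, the sparsity prior, and the ISTA solver below are illustrative assumptions, not the algorithms developed in the thesis.

```python
import numpy as np

# Toy regularized inverse problem: recover a sparse signal x from
# measurements y = H @ x + noise by minimizing
#     ||H x - y||^2 + lam * ||x||_1
# with proximal gradient descent (ISTA). All values are illustrative.
rng = np.random.default_rng(0)
m, n = 100, 50
H = rng.normal(size=(m, n)) / np.sqrt(m)       # generic linear forward model
x_true = np.zeros(n)
x_true[[5, 17, 33]] = [1.0, -2.0, 1.5]         # sparse ground truth
y = H @ x_true + 0.01 * rng.normal(size=m)

lam = 0.005
step = 1.0 / np.linalg.norm(H, 2) ** 2         # 1/L, L = Lipschitz constant
x = np.zeros(n)
for _ in range(500):
    grad = H.T @ (H @ x - y)                   # gradient of the data term
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold

# x is now close to the sparse ground truth
```

The soft-thresholding step is the proximal operator of the l1 prior; swapping the prior changes only that one line, which is why this proximal-splitting pattern adapts well to the different reconstruction problems listed above.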
1062

Investigating Scale Effects on Analytical Methods of Predicting Peak Wind Loads on Buildings

Moravej, Mohammadtaghi 11 June 2018 (has links)
Large-scale testing of low-rise buildings, or of components of tall buildings, is essential because it provides more representative information about realistic wind effects than typical small-scale studies. However, as the model size increases, relatively less large-scale turbulence can be generated in the incoming flow, which yields a turbulence power spectrum lacking low-frequency content. This deficiency is known to significantly affect the estimated peak wind loads. To overcome these limitations, the method of Partial Turbulence Simulation (PTS) was recently developed in the FIU Wall of Wind lab to analytically compensate for the missing low-frequency content of the spectrum. The method requires post-test analysis and rests on quasi-steady assumptions. The current study set out to enhance that technique by investigating the effect of scaling and the range of applicability of the method, considering the limitations arising from the underlying theory; to simplify 2DPTS (which includes both in-plane turbulence components) by proposing a weighted-average method; and to investigate the effect of Reynolds number on peak aerodynamic pressures. The results from five tested building models show that as the model size increased, the PTS results agreed better with the available field data from the TTU building. Although the smaller models (1:100 and 1:50) reproduced almost the full turbulence spectrum, they did not reproduce the highest peaks observed at full scale, apparently because of Reynolds number effects. The most accurate results were obtained when PTS was applied to the case with the highest Reynolds number, the 1:6 scale model with less than 5% blockage and an xLum/bm ratio of 0.78. The results also showed that the weighted-average PTS method can be used in lieu of the 2DPTS approach. Thus, to achieve the most accurate results, a large-scale test followed by PTS peak estimation appears to be the desirable approach, which also allows xLum/bm values much smaller than the ASCE-recommended numbers.
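The quasi-steady assumption underlying PTS can be illustrated with a minimal sketch: the instantaneous pressure is tied to the instantaneous wind speed, p(t) = Cp · ½ρU(t)², so low-frequency velocity fluctuations missing from the tunnel flow can be reintroduced analytically on the velocity record before the peak is taken. The pressure coefficient, wind record, and low-frequency model below are illustrative assumptions, not the thesis's procedure.

```python
import numpy as np

# Quasi-steady peak-pressure sketch: p(t) = Cp * 0.5 * rho * U(t)^2.
# u_hf stands in for measured (high-frequency) tunnel turbulence and u_lf
# for the analytically modelled, missing low-frequency content.
rng = np.random.default_rng(1)
rho, Cp = 1.225, -1.2                 # air density [kg/m^3], suction coeff.
U_mean = 20.0                         # mean wind speed [m/s]
u_hf = 2.0 * rng.standard_normal(6000)                  # measured turbulence
u_lf = 1.5 * np.sin(np.linspace(0.0, 12 * np.pi, 6000)) # modelled slow gusts

def quasi_steady_peak(u_turb):
    U = U_mean + u_turb
    p = Cp * 0.5 * rho * U ** 2       # instantaneous quasi-steady pressure
    return p.min() if Cp < 0 else p.max()   # worst suction (or pressure)

peak_without = quasi_steady_peak(u_hf)          # tunnel record alone
peak_with = quasi_steady_peak(u_hf + u_lf)      # low-frequency part restored
```

With Cp negative, both peaks are suctions; the restored low-frequency gusts generally deepen the worst suction, which is the effect the PTS correction is designed to capture.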
1063

Aplicações da teoria dos espaços coarse a espaços de Banach e grupos topológicos / Applications of coarse spaces theory to Banach spaces and topological groups

Garcia, Denis de Assis Pinto 24 June 2019 (has links)
This work is a contribution to the study of the large-scale geometry of Banach spaces and topological groups. Although these two fields are traditionally studied independently, in 2017 Christian Rosendal showed that they can be regarded as different aspects of a more general theory: the coarse geometry of topological groups. An essential tool for the development of this new approach is the notion of a coarse structure, introduced by John Roe in 2003, which can be seen as the large-scale counterpart of the concept of a uniform structure. For this reason, the initial chapters present an elementary introduction to the theories of uniform and coarse spaces, highlighting the key concepts needed for the remaining chapters and paying particular attention to the uniform and coarse structures associated with topological groups, chiefly the left-uniform and left-coarse structures of a topological group. Chapter 5 discusses Rosendal's recent results on the existence of uniform and coarse embeddings between Banach spaces. Two of the most important state that if there is an uncollapsed uniformly continuous function f between the Banach spaces (X, ||·||_X) and (E, ||·||_E), then, for all p in [1, +infty[, (X, ||·||_X) admits a simultaneously uniform and coarse embedding into (l_p(E), ||·||_p); and that if, in addition, f maps into a bounded set, then (X, ||·||_X) also admits a uniformly continuous coarse embedding into (ExE, ||·||_(ExE)). Chapter 6 focuses on the class of left-invariant coarse structures on groups. The first section shows how a left-invariant coarse structure on a group (G, ·) can be described in terms of a certain ideal on G, and vice versa. This result is then used to characterize the left-coarse structure E_L of a topological group (G, ·, T) in terms of the collection of coarsely bounded sets of (G, E_L) and, with this, to prove that the left-coarse structure associated with the additive group of a normed space is simply the bounded coarse structure induced by its norm.
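The bounded coarse structure mentioned at the end can be written out explicitly. Following Roe's standard definition (the notation here is illustrative), the coarse structure induced by a norm on X consists of the entourages of uniformly bounded displacement:

```latex
\mathcal{E}_{\|\cdot\|} \;=\; \Bigl\{\, E \subseteq X \times X \;:\; \sup_{(x,y) \in E} \|x - y\| < \infty \,\Bigr\}.
```

The characterization of Chapter 6 then amounts to showing that, for the additive group of a normed space, the left-coarse structure E_L coincides with this collection of entourages.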
1064

Cosmological constraints : from the cosmic infrared background measurement to the gravitational lensing in massive galaxy clusters / Contraintes cosmologiques : de la mesure du fond diffus infrarouge au lentillage gravitationnel dans les amas de galaxies massifs

Jauzac, Mathilde 17 November 2011 (has links)
The principal theme of my thesis work is the formation and evolution of structures as a function of redshift. The work divides into two distinct parts, which come together in my final studies. First, I studied the evolution of the Cosmic Infrared Background (CIB) as a function of redshift at 70 and 160 µm using data from the Spitzer Space Telescope. This analysis was performed in the GOODS & COSMOS fields by applying a stacking method. Second, I studied the mass distribution in massive galaxy clusters at high redshift using the weak gravitational lensing effect, with optical data from the Hubble Space Telescope. The sample of galaxy clusters comes from a subsample of the MAssive Cluster Survey (MACS, PI: E. Ebeling) named the "high-z" sample, which comprises 12 clusters. Understanding the evolutionary state of galaxy clusters at high redshift will allow us to put constraints on models of structure formation and evolution; understanding the evolution cycle of galaxy clusters is one of the major goals of current observational cosmology.
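The stacking method used for the CIB measurement can be illustrated with a minimal sketch: cut out small patches of a noisy map at known source positions and average them, so the common signal adds coherently while the noise averages down roughly as 1/√N. The map size, source flux, and noise level below are illustrative toy values, not survey numbers.

```python
import numpy as np

# Stacking sketch: faint sources invisible individually in a noisy map
# become detectable in the mean of many cutouts centred on known positions.
rng = np.random.default_rng(2)
npix, half, nsrc = 512, 8, 200
mapimg = rng.normal(0.0, 1.0, size=(npix, npix))     # noise-only background
xs = rng.integers(half, npix - half, nsrc)           # known source columns
ys = rng.integers(half, npix - half, nsrc)           # known source rows
for x, y in zip(xs, ys):
    mapimg[y, x] += 0.5                              # faint point sources

# average the cutouts: signal adds coherently, noise averages down
stack = np.mean(
    [mapimg[y - half:y + half, x - half:x + half] for x, y in zip(xs, ys)],
    axis=0,
)
# stack[half, half] now approximates the mean source flux (~0.5), while a
# single source sits far below the per-pixel noise level of 1.
```

In the real measurement the positions come from a higher-resolution catalogue (e.g. 24 µm detections), and the stamps are cleaned of resolved neighbours before averaging; the principle is the same.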
1065

Cosmologie inhomogène relativiste : modèles non perturbatifs et moyennes spatiales des équations d’Einstein / Inhomogeneous Relativistic Cosmology : nonperturbative models and spatial averaging of the Einstein equations

Mourier, Pierre 29 August 2019 (has links)
In the standard model of cosmology, the global dynamics of the Universe is modelled via a highly symmetric background spacetime with homogeneous and isotropic spatial sections. The coupling of the homogeneous fluid sources to the overall expansion is then determined by the Einstein equations of General Relativity. The formation of inhomogeneous matter structures is described either via a relativistic perturbation scheme, assuming small deviations of all fields from the prescribed homogeneous background, or using Newtonian dynamics within the same expanding background, depending on the scale and epoch. However, the interpretation of observations within this model calls for an unexpectedly accelerated expansion, requiring a poorly understood 'Dark Energy' component in addition to Dark Matter. Inhomogeneous cosmology aims at relaxing the restrictions these models place on the geometry and sources while staying within the framework of General Relativity. It allows, in particular, for improved modelling of structure formation that accounts for strong deviations from homogeneity in the matter distribution and the geometry. It also makes it possible to study the dynamical consequences, or backreaction effects, of the development of such inhomogeneities on the expansion at larger scales; such a backreaction may then reproduce, at least partially, the behaviours attributed to Dark Energy or Dark Matter. During my PhD under the direction of Thomas Buchert, I worked on several analytical aspects of general-relativistic inhomogeneous cosmology. I present below the results of collaborations in which I played a major role. I first focused on a relativistic Lagrangian approximation scheme for describing the local dynamics of structures, up to a nonlinear regime, in irrotational perfect barotropic fluids. I then considered the effective description of inhomogeneous fluids with vorticity and a general energy-momentum tensor in terms of two possible schemes of spatial averaging. These schemes apply to any choice of spatial hypersurfaces of averaging and provide, for each choice, a set of effective evolution equations featuring several backreaction terms for an averaging region comoving with the sources. This allows for a qualitative discussion of how the averaged equations and backreactions depend on the foliation choice. I also studied the rewriting of such averaging schemes and evolution equations in a unified and manifestly 4-covariant form; this latter result will allow a more explicit investigation of the foliation dependence.
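For context, in the simplest case (irrotational dust) the spatially averaged evolution equations take the well-known Buchert form, in which a kinematical backreaction term encodes the effect of inhomogeneities on the average expansion of a domain D. These are the standard equations that the averaging schemes above generalize, not the thesis's own results:

```latex
3\,\frac{\ddot{a}_{\mathcal D}}{a_{\mathcal D}}
  = -4\pi G \,\langle \varrho \rangle_{\mathcal D}
    + \mathcal{Q}_{\mathcal D} + \Lambda,
\qquad
3\left(\frac{\dot{a}_{\mathcal D}}{a_{\mathcal D}}\right)^{\!2}
  = 8\pi G \,\langle \varrho \rangle_{\mathcal D}
    - \tfrac{1}{2}\langle \mathcal{R} \rangle_{\mathcal D}
    - \tfrac{1}{2}\mathcal{Q}_{\mathcal D} + \Lambda,
```

with the kinematical backreaction built from fluctuations of the expansion rate and the shear,

```latex
\mathcal{Q}_{\mathcal D}
  = \tfrac{2}{3}\bigl( \langle \theta^{2} \rangle_{\mathcal D}
    - \langle \theta \rangle_{\mathcal D}^{2} \bigr)
    - 2 \langle \sigma^{2} \rangle_{\mathcal D}.
```

A nonzero Q_D is precisely what allows the average expansion of a domain to deviate from the homogeneous Friedmann behaviour.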
1066

應急蜂巢式行動網路建構排程 / Scheduling of contingency cellular network deployment

王彥嵩 Unknown Date (has links)
When a large-scale disaster strikes, the efficiency of the disaster response operation is critical to saving lives. We propose to build a contingency cellular network (CCN) to support emergency communication in large-scale natural disasters by connecting disconnected base stations; this thesis addresses the resulting deployment scheduling problem. The advance of mobile communication technologies has brought great convenience to users, and the cellular phone is the first communication tool most people reach for in an emergency. However, cellular networks often crash in earthquakes, typhoons, and other major natural disasters because of power outages or broken backhaul, even though an operating communication system is a critical factor in the success of disaster response operations such as resource allocation and the coordination of rescue and relief. We design a CCN by connecting physically intact but service-disrupted base stations with wireless links. Because the disaster area's transport system may be paralyzed, construction of the CCN may have to rely on air transportation such as helicopters or even airdrops; since transportation capacity may be very limited, scheduling the CCN deployment order according to the demands of the disaster operation becomes an important issue. Planning the network topology itself is a separate topic of our research group; this thesis focuses on scheduling CCN deployment when the topology is known. We model the CCN deployment scheduling problem as a combinatorial optimization problem, defined over the CCN tree topology, the number of base stations, their construction times and importance weights, the topology link set, and the number of construction crews, aiming to maximize disaster operation efficiency. The problem is proven NP-hard. We therefore design an efficient heuristic algorithm that weighs both the urgency and the importance of each base station, for use when a schedule is needed urgently; in three stages of experiments, it produced near-optimal schedules within seconds.
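A minimal sketch of the scheduling idea: a station can be deployed only after its parent in the CCN tree is connected, and deploying an important station earlier accrues more rescue benefit over the operation horizon. The greedy ratio rule, the benefit model, and the toy tree below are illustrative assumptions, not the thesis's optimization model or heuristic.

```python
from typing import Optional
from dataclasses import dataclass

@dataclass
class Station:
    name: str
    parent: Optional[str]   # parent in the CCN tree; None for the gateway
    build_time: float       # time to deploy [hours]
    importance: float       # rescue-efficiency weight

stations = [
    Station("root", None, 0.0, 0.0),   # surviving, connected base station
    Station("A", "root", 2.0, 5.0),
    Station("B", "root", 1.0, 3.0),
    Station("C", "A", 1.0, 8.0),
]

def greedy_schedule(stations, horizon=24.0):
    """Deploy one station at a time, always picking the connectable station
    with the best importance-per-build-time ratio; benefit is importance
    times the service time remaining until the horizon."""
    deployed, order, t, benefit = {"root"}, [], 0.0, 0.0
    while len(deployed) < len(stations):
        ready = [s for s in stations
                 if s.name not in deployed and s.parent in deployed]
        best = max(ready, key=lambda s: s.importance / s.build_time)
        t += best.build_time
        benefit += best.importance * max(horizon - t, 0.0)
        deployed.add(best.name)
        order.append(best.name)
    return order, benefit

order, benefit = greedy_schedule(stations)
# order == ['B', 'A', 'C'] with these weights; benefit == 334.0
```

Note how the precedence constraint matters: station C has the highest importance but cannot be chosen until its parent A is connected, which is exactly the tree-dependency structure the scheduling model must respect.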
1067

Emergent behavior based implements for distributed network management

Wittner, Otto January 2003 (has links)
Network and system management has always been a concern for telecommunication and computer system operators. The need for standardization was recognised 20 years ago, and several standards for network management exist today. However, the ever-increasing number of units connected to networks, and of services provided over them, significantly increases the complexity of the average network environment, which challenges current management systems. The trend among network owners and operators of merging several single-service networks into larger, heterogeneous, complex full-service networks challenges current management systems even further: full-service networks will require management systems more powerful than what can be realized from today's management standards alone. This thesis presents a distributed stochastic optimization algorithm which enables the implementation of highly robust and efficient management tools. These tools may be integrated into management systems, potentially making the systems more powerful and better prepared for managing full-service networks.

Emergent behavior is common in nature and easily observable in colonies of social insects and animals; even an old oak tree can be viewed as an emergent system of interacting cells. Characteristic of any emergent system is how the overall behavior emerges from many relatively simple, restricted behaviors interacting, e.g. a thousand ants building a trail, a flock of birds flying south, or millions of cells making a tree grow. No centralized control exists, i.e. no single unit is in charge of making global decisions. Despite distributed control, high work redundancy, and stochastic behavior components, emergent systems tend to be very efficient problem solvers. In fact, emergent systems tend to be efficient, adaptive, and robust, three properties indeed desirable for a network management system. The algorithm presented in this thesis belongs to a class of emergent-behavior-based systems known as swarm intelligence systems, i.e. it is potentially efficient, adaptive, and robust.

Unlike other related swarm intelligence algorithms, the algorithm presented here has a thorough formal foundation. This enables a better understanding of the algorithm's potentials and limitations, and hence better adaptation to new problem areas without loss of efficiency, adaptability, or robustness. The formal foundations build on Reuven Rubinstein's work on cross-entropy-driven optimization. The transition from Rubinstein's centralized, synchronous algorithm to a distributed, asynchronous one is described, and the distributed algorithm's ability to solve complex (NP-complete) problems efficiently is demonstrated.

Four examples of how the distributed algorithm may be applied in a network management context are presented. A system for finding near-optimal patterns of primary/backup paths, together with a system for finding cyclic protection paths in mesh networks, demonstrates the algorithm's ability to help a management system ensure quality of service. The algorithm's potential as a mechanism for implementing management policies is also demonstrated: its adaptability enables resolving policy conflicts in a soft manner, causing as little loss as possible. Finally, the algorithm's ability to find near-optimal paths (i.e. sequences) of resources in large-scale networks is demonstrated.
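The cross-entropy idea of Rubinstein that the thesis builds on can be sketched on a toy problem: repeatedly sample candidate solutions from a parametrised distribution, keep an elite fraction, and move the distribution towards the elites. The binary objective and all parameters below are illustrative, not the thesis's distributed algorithm.

```python
import numpy as np

# Cross-entropy method sketch (after Rubinstein): sample, select elites,
# update the sampling distribution towards them. Toy objective: maximise
# the number of ones in a length-30 binary string.
rng = np.random.default_rng(3)
n, samples, n_elite, smooth = 30, 100, 10, 0.7
p = np.full(n, 0.5)                    # independent Bernoulli parameters

for _ in range(50):
    X = (rng.random((samples, n)) < p).astype(float)    # candidate strings
    scores = X.sum(axis=1)                              # objective values
    elite = X[np.argsort(scores)[-n_elite:]]            # best 10% of samples
    p = smooth * elite.mean(axis=0) + (1 - smooth) * p  # CE update, smoothed

# p has concentrated near 1, i.e. near the optimum of the toy objective
```

The distributed, asynchronous variant described in the thesis replaces this centralized sample-then-update loop with agents that update the distribution incrementally as they traverse the network, but the elite-driven distribution update is the same core mechanism.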
1069

Large-Scale Information Acquisition for Data and Information Fusion

Johansson, Ronnie January 2006 (has links)
The purpose of information acquisition for data and information fusion is to provide relevant and timely information. The acquired information is integrated (or fused) to estimate the state of some environment, and the success of information acquisition can be measured by the quality of the environment state estimates that the data and information fusion process generates. In this thesis, we introduce and set out to characterise the concept of large-scale information acquisition. Our interest in this subject is justified both by the identified lack of research taking a holistic view of data and information fusion, and by the proliferation of networked sensors, which promises convenient access to a multitude of information sources. We identify a number of properties to consider in the context of large-scale information acquisition: the sensors used may be large in number, heterogeneous, complex, and distributed, and the algorithms may have to deal with decentralised control and with multiple, varying objectives. In the literature, a process that realises information acquisition is frequently called sensor management. We instead introduce the term perception management, which encourages an agent perspective on information acquisition. Apart from explicitly inviting the wealth of agent-theory research into data and information fusion research, it highlights that the resource usage of perception management is constrained by the overall control of the system that uses data and information fusion. To address the challenges posed by large-scale information acquisition, we present a framework highlighting some of its pertinent aspects, and we have implemented some important parts of it. What becomes evident in our study is the innate complexity of information acquisition for data and information fusion, which suggests approximative solutions. We furthermore study one of the possibly most important properties of large-scale information acquisition, decentralised control, in more detail. We propose a recurrent negotiation protocol for (decentralised) multi-agent coordination, approaching the negotiations from the perspective of axiomatic bargaining theory, an economics discipline. We identify shortcomings of the most commonly applied bargaining solution and demonstrate in simulations a problem instance where it is inferior to an alternative solution; however, we cannot conclude that either solution dominates the other in general, as each is preferable in different situations. We have also implemented the recurrent negotiation protocol on a group of mobile robots. We note some subtle difficulties in transferring bargaining solutions from economics to our computational problem. For instance, the characterising axioms of bargaining solutions are useful for qualitatively comparing solutions, but care must be taken when translating a solution into computer-science algorithms, as some properties might be undesirable, unimportant, or lost in translation.
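The axiomatic-bargaining machinery referenced above can be illustrated with the Nash bargaining solution, which selects, among the feasible joint utilities, the outcome maximising the product of the agents' gains over the disagreement point. The candidate outcomes and disagreement point below are invented for illustration; the thesis compares bargaining solutions on its own coordination problem.

```python
# Nash bargaining sketch for two agents: each candidate is a pair of
# utilities (u1, u2); the disagreement point is what each agent gets if
# negotiation fails. All numbers are illustrative.
candidates = [(0.9, 0.2), (0.6, 0.6), (0.3, 0.8), (0.2, 0.9)]
disagreement = (0.1, 0.1)

def nash_product(u):
    # product of gains over the disagreement point
    return (u[0] - disagreement[0]) * (u[1] - disagreement[1])

nash = max(candidates, key=nash_product)
# nash == (0.6, 0.6): the balanced outcome wins under the product rule
```

The product rule penalises lopsided outcomes, which is one of the qualitative properties (symmetry) that the characterising axioms make precise; an alternative solution such as Kalai-Smorodinsky would weigh the agents' best achievable utilities differently and can pick another point on the same frontier.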
1070

SCALABLE AND FAULT TOLERANT HIERARCHICAL B&B ALGORITHMS FOR COMPUTATIONAL GRIDS

Bendjoudi, Ahcène 24 April 2012 (has links) (PDF)
Solving combinatorial optimization problems exactly with Branch-and-Bound (B&B) algorithms requires an exorbitant amount of computing resources. Today, this power is offered by large-scale environments such as computational grids. Grids, however, present new challenges: scalability, heterogeneity, and fault tolerance. Most B&B algorithms revisited for computational grids are based on the Master-Worker paradigm, which limits their scalability, and fault tolerance is rarely addressed in these works. In this thesis, we propose three main contributions: P2P-B&B, H-B&B, and FTH-B&B. P2P-B&B is a Master-Worker-based framework that addresses scalability by reducing the frequency of task requests and by allowing direct communication between workers. H-B&B also addresses scalability; unlike the approaches proposed in the literature, H-B&B is fully dynamic and adaptive, i.e. it takes into account the dynamic acquisition of computing resources. FTH-B&B is based on new fault-tolerance mechanisms that build and maintain a balanced hierarchy and minimize redundant work when tasks are saved and restored. The proposed approaches were implemented with the ProActive grid platform and applied to the Flow-Shop scheduling problem. Large-scale experiments performed on the Grid'5000 grid demonstrated the efficiency of the proposed approaches.
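The bounding idea that B&B algorithms scale up, whether on one machine or across a grid hierarchy, can be shown on a tiny 0/1 knapsack instance: prune any subtree whose optimistic fractional bound cannot beat the incumbent. The instance below is illustrative and unrelated to the Flow-Shop problem used in the thesis.

```python
# Minimal branch-and-bound for a 0/1 knapsack instance. Items are already
# sorted by value density, so a greedy fractional fill gives a valid
# optimistic bound on any partial solution.
values, weights, capacity = [60, 100, 120], [10, 20, 30], 50

def bound(i, value, room):
    # optimistic bound: fill the remaining room fractionally from item i on
    for v, w in zip(values[i:], weights[i:]):
        if w <= room:
            value, room = value + v, room - w
        else:
            return value + v * room / w
    return value

best = 0
def branch(i, value, room):
    global best
    if value > best:
        best = value                              # new incumbent
    if i == len(values) or bound(i, value, room) <= best:
        return                                    # prune this subtree
    if weights[i] <= room:
        branch(i + 1, value + values[i], room - weights[i])  # take item i
    branch(i + 1, value, room)                               # skip item i

branch(0, 0, capacity)
# best == 220 (take the second and third items)
```

In the grid setting, it is exactly these independent subtrees that get distributed among workers, which is why the choice of Master-Worker versus hierarchical coordination, and of checkpointing for fault tolerance, dominates the design space the thesis explores.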
