191

The structure of graphs and new logics for the characterization of Polynomial Time

Laubner, Bastian 14 June 2011 (has links)
This thesis makes contributions to three strands of descriptive complexity theory. First, we adapt a representation-invariant, singly exponential-time graph canonization algorithm of Corneil and Goldberg (1984) and conclude that on structures whose relations have arity at most 2, the logic "Choiceless Polynomial Time with Counting" precisely characterizes the polynomial-time (PTIME) properties of logarithmic-size fragments. The second contribution investigates the descriptive complexity of PTIME computations on restricted classes of graphs. We present a novel canonical form for the class of interval graphs which is definable in fixed-point logic with counting (FP+C), showing that FP+C captures PTIME on this graph class. We also adapt our methods to obtain a canonical labeling algorithm for interval graphs which is computable in logarithmic space (LOGSPACE). The final part of this thesis addresses the open question of whether there exists a logic which captures polynomial-time computations in general. We introduce a variety of rank logics with the ability to compute the ranks of matrices over finite prime fields. We argue that this addition of linear algebra yields robust logics whose expressiveness surpasses that of FP+C. Additionally, we establish that rank logics strictly gain expressiveness when the number of variables indexing the matrices under consideration is increased. We then establish a direct connection to standard complexity theory by showing that, in the presence of orders, a variety of complexity classes between LOGSPACE and PTIME can be characterized by suitable rank logics. Our exposition provides evidence that rank logics are a natural object of study and establishes the most expressive of our rank logics as a viable candidate for capturing PTIME, suggesting that rank logics need to be better understood if progress is to be made towards a logic for polynomial time.
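The distinguishing feature of these rank logics is an operator for the rank of a definable matrix over a prime field GF(p). As a purely illustrative aid (not taken from the thesis; the function name and example matrix are assumptions), the following sketch computes such a rank by Gaussian elimination with arithmetic modulo p and shows that the answer genuinely depends on the chosen field:

    # Illustrative sketch (not from the thesis): rank of an integer matrix over the
    # prime field GF(p), computed by Gaussian elimination with arithmetic modulo p.

    def rank_mod_p(matrix, p):
        m = [[x % p for x in row] for row in matrix]
        rows, cols = len(m), len(m[0]) if m else 0
        rank = pivot_row = 0
        for col in range(cols):
            # Find a row at or below pivot_row with a non-zero entry in this column.
            pivot = next((r for r in range(pivot_row, rows) if m[r][col]), None)
            if pivot is None:
                continue
            m[pivot_row], m[pivot] = m[pivot], m[pivot_row]
            inv = pow(m[pivot_row][col], p - 2, p)  # modular inverse, p prime
            m[pivot_row] = [(x * inv) % p for x in m[pivot_row]]
            for r in range(rows):                   # clear the column elsewhere
                if r != pivot_row and m[r][col]:
                    f = m[r][col]
                    m[r] = [(a - f * b) % p for a, b in zip(m[r], m[pivot_row])]
            pivot_row += 1
            rank += 1
        return rank

    A = [[1, 1, 0], [1, 0, 1], [0, 1, 1]]
    print(rank_mod_p(A, 2))  # 2: the rows are linearly dependent over GF(2)
    print(rank_mod_p(A, 3))  # 3: full rank over GF(3), so rank depends on the field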
192

Comparison of LDPC Block and LDPC Convolutional Codes based on their Decoding Latency

Hassan, Najeeb ul, Lentmaier, Michael, Fettweis, Gerhard P. 11 February 2013 (has links) (PDF)
We compare LDPC block and LDPC convolutional codes with respect to their decoding performance under low decoding latencies. Protograph-based regular LDPC codes with rather small lifting factors are considered. LDPC block and convolutional codes are decoded using belief propagation. For LDPC convolutional codes, a sliding-window decoder with different window sizes is applied to decode the input symbols continuously. We show the required Eb/N0 to achieve a bit error rate of 10^-5 for the LDPC block and LDPC convolutional codes at decoding latencies of up to approximately 550 information bits. We observe that LDPC convolutional codes perform better than the block codes from which they are derived, even at low latency. We demonstrate the trade-off between complexity and performance in terms of lifting factor and window size for a fixed value of latency. Furthermore, the two codes are also compared in terms of their complexity as a function of Eb/N0. Convolutional codes with Viterbi decoding are also compared against the two codes mentioned above.
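As a rough aid to the latency comparison described above, the following back-of-the-envelope sketch contrasts the structural latency of block decoding with that of sliding-window decoding; the formulas (full block length for the block code, window size times section size for the convolutional code) and all parameter values are illustrative assumptions rather than the paper's exact definitions:

    # Back-of-the-envelope structural decoding latency, in received code symbols
    # (illustrative assumptions, not the paper's exact definitions).

    def block_latency(n_vn, lifting_factor):
        # A block decoder must receive the whole block of n_vn * M code symbols.
        return n_vn * lifting_factor

    def window_latency(c_vn_per_section, lifting_factor, window_size):
        # A sliding-window decoder only needs the W coupled sections inside the window.
        return window_size * c_vn_per_section * lifting_factor

    # Example: comparable latencies can be reached either with a moderately sized
    # block code or with a coupled code decoded through a small window.
    print(block_latency(n_vn=4, lifting_factor=250))                             # 1000
    print(window_latency(c_vn_per_section=4, lifting_factor=50, window_size=5))  # 1000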
193

Les théories de la complexité et la systémique en gouvernance clinique: le cas des soins intensifs chirurgicaux / Complexity theories and systems thinking in clinical governance: the case of surgical intensive care

Hellou, Gisèle 08 1900 (has links)
In Health Technology Assessment and Management, Evidence-Based Medicine and many tools available for clinical assessment reflect a positivist and mechanistic approach to health care organizations and scientific knowledge. We argue that complexity theories and a systemic decision-making process give a different insight into these two aspects of clinical governance in a Surgical Intensive Care Unit (SICU). In a case study, we describe the nature of critically ill and unstable patients and the organizational structure of a SICU in a university-based hospital. We demonstrate all the characteristics of complexity in that setting through many examples and micro-situational analyses. After an epistemological critical appraisal of EBM, we suggest that if a SICU is conceptualized as a dynamic, non-linear adaptive system, then clinical knowledge and scientific thought processes must include hermeneutical, systemic and abductive types of reasoning. Finally, we draw upon Karl Weick's work and suggest that a SICU must be considered a High Reliability Organization in order to improve patient care and create better conditions for quality and performance in this complex environment.
194

L’analyse de l’introduction du changement dans les systèmes de santé des pays en développement : le cas d’un système de surveillance épidémiologique en Haïti / Analyzing the introduction of change in the health systems of developing countries: the case of an epidemiological surveillance system in Haiti

Baldé, Thierno 02 1900 (has links)
In an attempt to improve the health status of their populations, health care systems in developing countries face several organizational issues. These include the presence of international organizations with divergent goals and characteristics whose interventions are poorly coordinated. Focusing on this specific issue, this dissertation explores the process of introducing change in the health systems of developing countries. The research method is a concept analysis, which involves critical literature reviews, the use of new theoretical approaches to clarify the concepts under study, and case studies to test these concepts empirically. Using Parsons's theory of social action and complexity theory, together with experiences of introducing change in various health systems, we developed a theoretical framework for analyzing the introduction of change in the health systems of developing countries (first concept). This framework, which conceives of the change process as a complex and emergent social action system, was applied to the introduction of an epidemiological surveillance system in Haiti. More specifically, in the first two articles of the dissertation, we analyse the adoption of the epidemiological surveillance system (second concept) and the determinants of collaboration among the organizations involved in the change process (third concept). The results of these analyses reveal low levels of adoption as well as a weak articulation of the determinants of collaboration between the organizations involved. Building on these findings, the third article brings out a phase of "chaos" in the functioning of Haiti's health system. This chaos phase, which may explain the difficulties associated with introducing change in the health systems of developing countries in general and in Haiti in particular, was characterized by an order hidden beneath the apparent disorder in the operation of certain components of the Haitian health system, together with instability, unpredictability and structural invariance at the various levels of governance. Moreover, the research also shows that these characteristics of "chaos" are maintained by the presence of three well-articulated and coherent groups of social action systems at every level of the health pyramid in Haiti: those linked to bilateral cooperation agencies, those linked to international foundations and global initiatives against AIDS, and those associated with United Nations organizations. These social action systems are in turn connected to other, more complex action systems situated outside Haiti's health system.
On the basis of these results, we propose a new approach to understanding the introduction of change in the health systems of developing countries, one that favours greater variety and diversification. This variety and diversification are supported by creating multiple interconnections among all the action systems present in these health systems, whether national or international and whether they operate at the central, departmental or local level. The aim of this process is the emergence of systemic properties arising not only from the properties of the individual groups of action systems that make up the emergent system, but also from properties resulting from their combination.
195

The Spatial and Temporal Distribution of the Metal Mineralisation in Eastern Australia and the Relationship of the Observed Patterns to Giant Ore Deposits

Robinson, Larry J. Unknown Date (has links)
The mineral deposit model (MDM) introduced here is the product of a trans-disciplinary study based on complexity and general systems theory. Both investigate the abstract organization of phenomena, independent of their substance, type, or spatial or temporal scale of existence. The focus of the research has been on giant hydrothermal mineral deposits. They constitute <0.001% of the total number of deposits yet contain 70-85% of the world's metal resources. Giants are the definitive exploration targets: they are more profitable to exploit and less susceptible to fluctuations of the market. Consensus has it that the same processes that generate small deposits also form giants, only operating for longer, over vaster regions, and at larger scales. Heat is the dominant factor in the genesis of giant mineral deposits. A paleothermal map shows where the vast heat required to generate a giant has been concentrated in a large space, and even allows us to deduce the duration of the process. Generating a paleothermal map acceptable to the scientific community requires reproducibility. Experimentation with various approaches to pattern recognition of geochemical data showed that the AUTOCLUST algorithm not only gave reproducibility but also gave the most consistent and most meaningful results. It automatically extracts boundaries based on Voronoi and Delaunay tessellations. The user does not specify parameters; however, the modeller does have tools to explore the data. This approach is near ideal in that it removes much of the human-generated bias. The algorithm reveals the radial spatial distribution of gold deposits in the Lachlan Fold Belt of southeastern Australia at two distinct scales: repeating patterns every ~80 km and every ~230 km. Both scales of patterning are reflected in the geology. The ~80 km patterns are nested within the ~230 km patterns, revealing a self-similar geometrical relationship. It is proposed that these patterns originate from Rayleigh-Bénard convection in the mantle. At the Rayleigh number appropriate for the mantle, the stable planform is the spoke pattern, in which hot mantle material moves upward near the centre of the pattern and outward along the radial arms. Discontinuities in the mantle, Rayleigh-Bénard convection in the mantle, and the spatial distribution of giant mineral deposits are correlative. The discontinuities in the Earth act as platforms from which Rayleigh-Bénard convection can originate. Shallow discontinuities give rise to plumelets, which manifest at the crust as repeating patterns ranging from ~100 to ~1,000 km in diameter. Deeper discontinuities give rise to plumes, which become apparent at the crust as repeating patterns ranging from >1,000 to ~4,000 km in diameter. The deepest discontinuities give rise to superplumes, which become detectable at the crust as repeating patterns ranging from >4,000 to >10,000 km in diameter. Rayleigh-Bénard convection concentrates the reservoir of heat in the mantle into specific locations in the crust, thereby providing the vast heat requirements for the processes that generate giant hydrothermal mineral deposits. The radial spatial distribution patterns observed for gold deposits are also present for base metal deposits. At the supergiant Broken Hill deposit in far western New South Wales, Australia, the higher-temperature Broken Hill-type deposits occur in a radial pattern while the lower-temperature deposits occur in concentric patterns. The supergiant Broken Hill deposit occurs at the very centre of the pattern.
If the supergiant Broken Hill deposit were buried beneath alluvium, water or younger rocks, it would now be possible to predict its location with an accuracy measured in tens of square kilometres; such predictive accuracy is desired by every exploration manager of every exploration company. The giant deposits at Broken Hill, Olympic Dam, and Mount Isa all occur on the edge of an annulus. There are at least two ways of creating an annulus on the Earth's surface: through Rayleigh-Bénard convection, and through meteor impact. It is likely that only 'large' meteors (those >10 km in diameter) would have any permanent effect on the mantle; lesser meteors would leave only a superficial scar that would be eroded away. The permanent scars in the mantle act as 'accidental templates' consisting of concentric and possibly radial fractures that impose those structures on any rocks subsequently laid down or emplaced over the mantle. In southeastern Australia, the proposed Deniliquin impact structure has been an 'accidental template' providing a 'line of least resistance' for the ascent of the ~2,000 km diameter, offshore Cape Howe Plume. The western and northwestern radial arms of this plume have created the very geometry of the Lachlan Fold Belt, as well as giving rise to the spatial distribution of the granitic rocks in that belt and ultimately to the gold deposits. The interplay between the templating of the mantle by meteor impacts and the ascent of plumelets, plumes or superplumes from various discontinuities in the mantle is quite possibly the reason that mineral deposits occur where they do.
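To make the boundary-extraction idea concrete, the following sketch clusters 2-D deposit coordinates by building a Delaunay triangulation and discarding unusually long edges, in the spirit of AUTOCLUST; the specific threshold rule used here is a simplified assumption and not the published algorithm:

    # Sketch in the spirit of AUTOCLUST (simplified assumption, not the published
    # algorithm): triangulate the points, drop edges that are long relative to the
    # edges around their endpoints, and read clusters off the surviving components.
    import numpy as np
    from scipy.spatial import Delaunay

    def delaunay_clusters(points, k=2.0):
        points = np.asarray(points, dtype=float)
        tri = Delaunay(points)
        edges = set()
        for simplex in tri.simplices:                 # unique edges of the triangulation
            for i in range(3):
                a, b = sorted((int(simplex[i]), int(simplex[(i + 1) % 3])))
                edges.add((a, b))
        length = {e: np.linalg.norm(points[e[0]] - points[e[1]]) for e in edges}
        incident = {i: [] for i in range(len(points))}
        for (a, b), L in length.items():              # edge lengths around each point
            incident[a].append(L)
            incident[b].append(L)
        keep = [(a, b) for (a, b), L in length.items()
                if L <= min(np.mean(incident[a]) + k * np.std(incident[a]),
                            np.mean(incident[b]) + k * np.std(incident[b]))]
        parent = list(range(len(points)))             # union-find over surviving edges
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for a, b in keep:
            parent[find(a)] = find(b)
        return [find(i) for i in range(len(points))]  # cluster label per point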
198

The multilevel critical node problem: theoretical intractability and a curriculum learning approach

Nabli, Adel 08 1900 (has links)
Evaluating the vulnerability of networks is a problem that has gained momentum in recent decades. In this work, we use a multilevel programming approach to study the defense of critical infrastructure against malicious attacks. We analyze a three-stage sequential game played on a graph, called the Multilevel Critical Node problem (MCN). The game pits two players against each other: a defender and an attacker. The defender starts by preventively interdicting nodes from being attacked during a vaccination phase. The attacker then infects a subset of non-vaccinated nodes and, finally, the defender reacts with a protection strategy. We provide the first computational complexity results for MCN and its subgames. Moreover, by considering unitary, weighted, undirected and directed graphs, we clarify how the theoretical tractability or intractability of these problems varies.
Our findings contribute new NP-complete, $\Sigma_2^p$-complete and $\Sigma_3^p$-complete problems. Motivated by the intrinsic intractability of the MCN, we then design efficient heuristics for the game, building upon recent approaches that learn heuristics for combinatorial optimization problems through graph neural networks and reinforcement learning. Contrary to previous work, we tackle situations in which multiple players take decisions sequentially. By framing them in a multi-agent reinforcement learning setting, we devise a value-based method that learns to solve multilevel budgeted combinatorial problems involving two players in a zero-sum game over a graph. Our framework is based on a simple curriculum: if an agent knows how to estimate the value of instances with budgets up to B, then instances with budget B+1 can be solved in polynomial time, regardless of the direction of the optimization, by checking the value of every possible afterstate. Thus, in a bottom-up approach, we generate datasets of heuristically solved instances with increasingly larger budgets to train our agent. We report results close to optimality on graphs with up to 100 nodes and an average 185x speedup compared to the quickest exact solver known for the MCN.
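The curriculum described above can be made concrete with a short sketch; the instance interface (available_actions, apply), the value-estimator signature and the training step are illustrative assumptions rather than the authors' code:

    # Sketch of the bottom-up curriculum (the instance interface, estimator signature
    # and training step are illustrative assumptions, not the authors' code).

    def solve_budget_plus_one(instance, budget, value_estimator, maximize=True):
        # With an estimator for budgets below `budget`, an instance is handled by
        # enumerating every afterstate of a single move and taking the best one.
        best_action, best_value = None, None
        for action in instance.available_actions():      # spend one unit of budget
            afterstate = instance.apply(action)           # one unit of budget remains
            value = value_estimator(afterstate, budget - 1)
            if best_value is None or (value > best_value if maximize else value < best_value):
                best_action, best_value = action, value
        return best_action, best_value

    def train_with_curriculum(instances_by_budget, value_estimator, train_step):
        # Bottom-up: label budget-B instances with the estimator trained up to B-1,
        # then retrain the estimator on the newly labelled dataset.
        for budget in sorted(instances_by_budget):
            dataset = [(inst, solve_budget_plus_one(inst, budget, value_estimator)[1])
                       for inst in instances_by_budget[budget]]
            value_estimator = train_step(value_estimator, dataset)
        return value_estimator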
199

Evoluční algoritmy při řešení problému obchodního cestujícího / Evolutionary Algorithms for the Solution of Travelling Salesman Problem

Jurčík, Lukáš January 2014 (has links)
This diploma thesis deals with evolutionary algorithms applied to the travelling salesman problem (TSP). The first section lays out the theoretical foundations of graph theory and computational complexity theory. The next section describes the chosen optimization algorithms. The aim of the diploma thesis is to implement an application that solves the TSP using evolutionary algorithms.
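As a concrete illustration of the kind of evolutionary algorithm such an application might use, the following sketch evolves TSP tours with order crossover and swap mutation; the operators and parameter values are common defaults and an assumption, not necessarily those implemented in the thesis:

    # Minimal genetic algorithm for the TSP (illustrative assumption: order crossover,
    # swap mutation, truncation selection).
    import random

    def tour_length(tour, dist):
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def order_crossover(p1, p2):
        # Copy a slice from parent 1, fill the rest in parent 2's order.
        a, b = sorted(random.sample(range(len(p1)), 2))
        child = [None] * len(p1)
        child[a:b] = p1[a:b]
        rest = [c for c in p2 if c not in child]
        for i in range(len(p1)):
            if child[i] is None:
                child[i] = rest.pop(0)
        return child

    def evolve(dist, pop_size=100, generations=500, mutation_rate=0.1):
        n = len(dist)
        population = [random.sample(range(n), n) for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=lambda t: tour_length(t, dist))
            survivors = population[: pop_size // 2]        # truncation selection
            children = []
            while len(survivors) + len(children) < pop_size:
                child = order_crossover(*random.sample(survivors, 2))
                if random.random() < mutation_rate:         # swap mutation
                    i, j = random.sample(range(n), 2)
                    child[i], child[j] = child[j], child[i]
                children.append(child)
            population = survivors + children
        return min(population, key=lambda t: tour_length(t, dist))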
200

Comparison of LDPC Block and LDPC Convolutional Codes based on their Decoding Latency

Hassan, Najeeb ul, Lentmaier, Michael, Fettweis, Gerhard P. January 2012 (has links)
We compare LDPC block and LDPC convolutional codes with respect to their decoding performance under low decoding latencies. Protograph-based regular LDPC codes with rather small lifting factors are considered. LDPC block and convolutional codes are decoded using belief propagation. For LDPC convolutional codes, a sliding-window decoder with different window sizes is applied to decode the input symbols continuously. We show the required Eb/N0 to achieve a bit error rate of 10^-5 for the LDPC block and LDPC convolutional codes at decoding latencies of up to approximately 550 information bits. We observe that LDPC convolutional codes perform better than the block codes from which they are derived, even at low latency. We demonstrate the trade-off between complexity and performance in terms of lifting factor and window size for a fixed value of latency. Furthermore, the two codes are also compared in terms of their complexity as a function of Eb/N0. Convolutional codes with Viterbi decoding are also compared against the two codes mentioned above.
