161
K-Separator problem / Problème de k-Séparateur. Mohamed Sidi, Mohamed Ahmed. 04 December 2014.
Consider an undirected graph G = (V,E,w) with weighted vertices and an integer k. The problem under study consists in designing algorithms to determine the minimum number of nodes that must be removed from G so that every remaining connected component contains at most k vertices. We call this the k-Separator problem, and we call the subset sought a k-separator. It is a generalization of Vertex Cover, which corresponds to the case k = 1 (minimum number of vertices intersecting all edges of the graph). / Let G be a vertex-weighted undirected graph. We aim to compute a minimum weight subset of vertices whose removal leads to a graph where the size of each connected component is less than or equal to a given positive number k. If k = 1 we get the classical vertex cover problem. Many formulations are proposed for the problem. The linear relaxations of these formulations are theoretically compared. A polyhedral study is proposed (valid inequalities, facets, separation algorithms). It is shown that the problem can be solved in polynomial time for many special cases including the path, the cycle and the tree cases, and also for graphs not containing some special induced sub-graphs. Some (k + 1)-approximation algorithms are also exhibited. Most of the algorithms are implemented and compared. The k-separator problem has many applications. If vertex weights are equal to 1, the size of a minimum k-separator can be used to evaluate the robustness of a graph or a network. Another application consists in partitioning a graph/network into different sub-graphs with respect to different criteria. For example, in the context of social networks, many approaches are proposed to detect communities. By solving a minimum k-separator problem, we get different connected components that may represent communities. The k-separator vertices represent persons making connections between communities. The k-separator problem can then be seen as a special partitioning/clustering graph problem.
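To make the problem statement concrete, here is a minimal brute-force sketch of a minimum-weight k-separator search. It is exponential-time and for illustration only; the thesis develops exact formulations and (k + 1)-approximation algorithms instead, and the adjacency-dictionary representation and function names below are choices of this sketch, not of the thesis.

```python
from itertools import combinations

def is_feasible(adj, removed, k):
    """Check that every connected component of G - removed has at most k vertices."""
    remaining = set(adj) - removed
    seen = set()
    for start in remaining:
        if start in seen:
            continue
        # Traverse the component containing `start` and count its vertices
        stack, comp_size = [start], 0
        seen.add(start)
        while stack:
            v = stack.pop()
            comp_size += 1
            if comp_size > k:
                return False
            for u in adj[v]:
                if u in remaining and u not in seen:
                    seen.add(u)
                    stack.append(u)
    return True

def min_k_separator(adj, weights, k):
    """Exhaustive search for a minimum-weight k-separator.

    adj: dict mapping each vertex to a set of neighbours
    weights: dict mapping each vertex to a positive weight
    k: maximum allowed component size after removal
    """
    vertices = list(adj)
    best, best_weight = None, float("inf")
    for r in range(len(vertices) + 1):
        for subset in combinations(vertices, r):
            removed = set(subset)
            w = sum(weights[v] for v in removed)
            if w < best_weight and is_feasible(adj, removed, k):
                best, best_weight = removed, w
    return best, best_weight
```

For example, on a path of four unit-weight vertices with k = 2, the sketch returns a single interior vertex; with k = 1 it returns a minimum vertex cover, matching the special case noted in the abstract.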
162
Complexity Theory of Leadership and Management Information. Simpson, Mark Aloysius. 01 January 2018.
Implementing effective leadership strategies in management of information systems (MIS) can positively influence overall organizational performance. This study was an exploration of the general problem of failure to lead effectively in the current knowledge-based economy and the resulting deleterious effects on organizational performance and threats to continuing organizational viability. The specific problem was the lack of understanding regarding the interaction of leadership processes with MIS functions and the impact on organizational success. Managers' and employees' lived experiences of leadership in small- to medium-sized enterprises were explored, as well as how those experiences influenced the organization's adaptive responses regarding technology and performance in the knowledge-based economy. The complexity theory of leadership was applied as the theoretical foundation for this study. A phenomenological methodology was used. Data were collected through semi-structured interviews and analyzed through open coding to identify emergent themes. The themes were that leaders motivate employees' positive work-related behaviors, that effective communication skills ensure the accessibility and efficiency of the organizational information system, and that leadership practices influence business productivity. This study contributes to social change by providing insights for managers and employees regarding effective strategies for working as teams and networks via the use of nontraditional leadership theory, which promotes company sustainability by demonstrating the benefits of responding to the changing economy.
163
Design of High Performance Flanges and its Influence on Manufacturing Costs, Structural Performance and Weight / Konstruktion av högpresterande flänsförbands inverkan på tillverkningskostnader, prestanda och vikt. Alcocer Bonifaz, Joaquin. January 2019.
This project attempts to research the manufacturing cost, with an emphasis on machining, of high performance flanges for Turbine Rear Structure (TRS) applications, as well as the tradeoffs with structural performance and weight. A combination of traditional cost modelling techniques from the literature and the non-conventional manufacturing complexity index is implemented as a cost indicator. A multidisciplinary study is carried out with the aid of ANSYS Workbench in the form of computer-simulated experiments to investigate tradeoffs in flanges. It is concluded that the multidisciplinary studies of cost, performance and weight lacked the model robustness needed to draw sound conclusions about flange design. However, the manufacturing complexity index, after partial validation with experienced engineers, shows promising results and could be a way forward to estimate final machining operation cost for flanges in the future. / The aim of this project is to investigate the manufacturing cost, with an emphasis on machining, of high-performance flanges for turbine rear structure (TRS) applications, and its relation to structural performance and weight. Traditional cost modelling techniques are combined with the non-conventional manufacturing complexity index and used as a cost indicator. A multidisciplinary study is carried out with the aid of ANSYS Workbench in the form of computer-simulated experiments to investigate flange trade-offs. One conclusion of the study is that multidisciplinary models of cost, performance and weight lacked the robustness needed to draw sound conclusions about the performance of a flange design. However, after partial validation with experienced engineers, the manufacturing complexity index shows promising results and could be a successful way to estimate the final machining cost for flanges.
164
Leading Change in Complex Systems: A Paradigm Shift. LeMaster, Cheryl Faye. 29 August 2017.
No description available.
165
An Inquiry into Language Use in Multilinguals’ Writing: A Study of Third-Language Learners. Tanova, Nadya. 25 June 2012.
No description available.
166
Disjoint NP-pairs and propositional proof systems. Beyersdorff, Olaf. 31 August 2006.
The theory of disjoint NP-pairs, which naturally takes pairs of NP-sets rather than single languages as the object of its study, is interesting above all because of its applications in cryptography and proof theory. At the centre of this dissertation is the analysis of the relationship between disjoint NP-pairs and propositional proof systems. While the applications of NP-pairs in proof theory have substantially advanced the understanding of propositional proof systems, in this thesis we go the opposite way, so to speak, and use proof-theoretic methods for a closer investigation of the lattice of disjoint NP-pairs. In particular, we associate with every proof system P a class DNPP(P) of NP-pairs whose disjointness can be shown in P with polynomial-size proofs. For these classes DNPP(P) we prove a series of results illustrating that robustly defined proof systems correspond to meaningful complexity classes DNPP(P). As an important tool for investigating propositional proof systems and the classes of NP-pairs derived from them, we use the correspondence of strong proof systems to first-order arithmetic theories, commonly grouped together under the heading of bounded arithmetic. In practice one frequently encounters not two but a larger number of competing conditions. We therefore devote ourselves to extending the theory of disjoint NP-pairs to disjoint tuples of NP-sets. Our main result in this area is a characterization of the questions of the existence of optimal proof systems and of complete NP-pairs in terms of disjoint tuples. / Disjoint NP-pairs are an interesting complexity theoretic concept with important applications in cryptography and propositional proof complexity. In this dissertation we explore the connection between disjoint NP-pairs and propositional proof complexity. This connection is fruitful for both fields. Various disjoint NP-pairs have been associated with propositional proof systems which characterize important properties of these systems, yielding applications to areas such as automated theorem proving. Further, conditional and unconditional lower bounds for the separation of disjoint NP-pairs can be translated to results on lower bounds on the length of propositional proofs. In this way disjoint NP-pairs have substantially contributed to the understanding of propositional proof systems. Conversely, this dissertation aims to transfer proof-theoretic knowledge to the theory of NP-pairs to gain a more detailed understanding of the structure of the class of disjoint NP-pairs and in particular of the NP-pairs defined from propositional proof systems. For a proof system P we introduce the complexity class DNPP(P) of all disjoint NP-pairs for which the disjointness of the pair is efficiently provable in the proof system P. We exhibit structural properties of proof systems which make the previously defined canonical NP-pairs of these proof systems hard or complete for DNPP(P). Moreover, we demonstrate that non-equivalent proof systems can have equivalent canonical pairs and that, depending on the properties of the proof systems, different scenarios for DNPP(P) and the reductions between the canonical pairs exist. As an important tool for our investigation we use the connection of propositional proof systems and disjoint NP-pairs to theories of bounded arithmetic.
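As a rough illustration of the class described above, one may write it as follows; the notation is only a sketch, and the dissertation's precise formalization via propositional encodings of disjointness may differ in detail:

```latex
\[
\mathrm{DNPP}(P) \;=\; \bigl\{\, (A,B) \;\bigm|\; A, B \in \mathrm{NP},\; A \cap B = \emptyset,\;
\text{and the disjointness of } (A,B) \text{ has polynomial-size } P\text{-proofs} \,\bigr\}
\]
```

Here "polynomial-size P-proofs" is shorthand for proofs, in the system P, of propositional formulas expressing that no string of a given length belongs to both A and B.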
167
Preprocessing to Deal with Hard Problems / Upper and Lower Bounds for Kernelization of Graph Problems. Hols, Eva-Maria Christiana. 22 May 2020.
In classical complexity theory we distinguish between the class P of problems solvable in polynomial time and the class of NP-hard problems, for which the widely held assumption is that they cannot be solved in polynomial time. Unfortunately, many of the problems we want to solve are NP-hard. At the same time, there is a large discrepancy between empirically observed and established worst-case running times. It is known that preprocessing or data reduction on real-world instances leads to running-time improvements. Here we reach the limits of classical complexity theory.
The focus of this thesis is on preprocessing algorithms for NP-hard problems. Our goal is to preprocess certain instances of an NP-hard problem by considering their structure. More precisely, given an instance and an additional parameter l, we want to compute in polynomial time an equivalent instance whose size and parameter value are bounded by a function of the parameter l only. In parameterized complexity theory, these algorithms are called kernelizations.
We consider three NP-hard graph problems, namely Vertex Cover, Edge Dominating Set and Subset Feedback Vertex Set. For Vertex Cover we unify known kernelization results for the case where the parameter is the size of a deletion set to a given graph class. We then study the kernelizability of Edge Dominating Set, which turns out to be considerably more complex. Nevertheless, we classify the existence of a polynomial kernelization when every graph in the graph class is a disjoint union of constant-size components. Finally, we consider the Subset Feedback Vertex Set problem and show that it admits a randomized polynomial kernelization when the parameter is the solution size. / In classical complexity theory, we distinguish between the class P of polynomial-time solvable problems and the class of NP-hard problems, where the widely-held belief is that we cannot solve these problems in polynomial time. Unfortunately, many of the problems we want to solve are NP-hard. At the same time, there is a large discrepancy between the empirically observed running times and the established worst-case bounds. Using preprocessing or data reductions on real-world instances is known to lead to huge improvements in the running time. Here we come to the limits of classical complexity theory.
In this thesis, we focus on preprocessing algorithms for NP-hard problems. Our goal is to find ways to preprocess certain instances of an NP-hard problem by considering the structure of the input instance. More precisely, given an instance and an additional parameter l, we want to compute in polynomial time an equivalent instance whose size and parameter value are bounded by a function of the parameter l only.
In the field of parameterized complexity, these algorithms are called kernelizations.
We will consider three NP-hard graph problems, namely Vertex Cover, Edge Dominating Set, and Subset Feedback Vertex Set. For Vertex Cover, we will unify known results for kernelizations when parameterized by the size of a deletion set to a specified graph class. Afterwards, we study the existence of polynomial kernelizations for Edge Dominating Set when parameterized by the size of a deletion set to a graph class. We point out that the existence of polynomial kernelizations is much more complicated than for Vertex Cover. Nevertheless, we fully classify the existence of polynomial kernelizations when every graph in the graph class is a disjoint union of constant size components. Finally, we consider graph cut problems, especially the Subset Feedback Vertex Set problem. We show that this problem has a randomized polynomial kernelization when the parameter is the solution size.
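As a concrete illustration of a kernelization in the sense described above, the classical kernel of Buss for Vertex Cover, parameterized by the solution size k rather than by the deletion-set parameters studied in this thesis, can be sketched as follows; the edge-set representation and helper names are assumptions of this sketch:

```python
def buss_kernel(edges, k):
    """Buss's kernelization for Vertex Cover (G, k).

    edges: iterable of 2-element sets/tuples {u, v}
    Returns (reduced_edges, reduced_k), or None if no cover of size <= k exists.
    """
    edges = {frozenset(e) for e in edges}
    while k >= 0:
        # Compute the degree of each vertex in the current graph
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        # High-degree rule: a vertex of degree > k is in every cover of size <= k
        high = next((v for v, d in deg.items() if d > k), None)
        if high is None:
            break
        edges = {e for e in edges if high not in e}
        k -= 1
    # After exhaustive reduction every vertex covers at most k edges,
    # so more than k*k remaining edges rule out a cover of size <= k.
    if k < 0 or len(edges) > k * k:
        return None
    return edges, k
```

The high-degree rule is safe because any vertex of degree greater than k must belong to every vertex cover of size at most k; once it no longer applies, each remaining vertex covers at most k edges, so a kernel with more than k^2 edges certifies a no-instance.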
168
Randomness in complexity theory and logics. Eickmeyer, Kord. 01 September 2011.
This dissertation consists of two parts whose common theme is the question of how powerful randomness is as a computational resource. In the first part we deal with random structures which, with high probability, have properties that can be exploited by computer algorithms. In two concrete cases we give previously unknown deterministic constructions of such structures: we derandomise a randomised reduction of Alekhnovich and Razborov by constructing certain unbalanced bipartite expander graphs, and we give a reduction from a problem about bipartite graphs to the problem of computing the minmax value in three-player games. In the second part we investigate the expressive power of various logics when they are enriched by random relation symbols. Our goal is to make techniques from descriptive complexity theory usable for the study of randomised complexity classes, and indeed we can show that our randomised logics capture randomised complexity classes studied in complexity theory. Using strong results on first-order logic and the computational power of bounded-depth circuits, we give both positive and negative derandomisation results for our logics. On the negative side, we show that randomised first-order logic gains expressive power over ordinary first-order logic, even on structures with a built-in addition relation. Moreover, it is not contained in monadic second-order logic on ordered structures, nor in infinitary counting logic on arbitrary structures. On the positive side, we show that randomised first-order logic can be derandomised on structures with a unary vocabulary and is contained in monadic second-order logic on additive structures. / This thesis is comprised of two main parts whose common theme is the question of how powerful randomness as a computational resource is. In the first part we deal with random structures which possess, with high probability, properties that can be exploited by computer algorithms. We then give two new deterministic constructions for such structures: we derandomise a randomised reduction due to Alekhnovich and Razborov by constructing certain unbalanced bipartite expander graphs, and we give a reduction from a problem concerning bipartite graphs to the problem of computing the minmax-value in three-player games. In the second part we study the expressive power of various logics when they are enriched by random relation symbols. Our goal is to bridge techniques from descriptive complexity with the study of randomised complexity classes, and indeed we show that our randomised logics do capture complexity classes under study in complexity theory. Using strong results on the expressive power of first-order logic and the computational power of bounded-depth circuits, we give both positive and negative derandomisation results for our logics. On the negative side, we show that randomised first-order logic gains expressive power over standard first-order logic even on structures with a built-in addition relation. Furthermore, it is not contained in monadic second-order logic on ordered structures, nor in infinitary counting logic on arbitrary structures.
On the positive side, we show that randomised first-order logic can be derandomised on structures with a unary vocabulary and is contained in monadic second-order logic on additive structures.
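The unbalanced bipartite expanders mentioned in this abstract are, informally, bipartite graphs in which every sufficiently small set of left-hand vertices has many distinct right-hand neighbours. Purely as an illustration of that property, and not of the thesis's actual construction, a brute-force expansion check with hypothetical parameter names might look like this:

```python
from itertools import combinations

def is_vertex_expander(left_vertices, adj, s_max, c):
    """Check (s_max, c)-vertex expansion of a bipartite graph by brute force.

    adj maps each left vertex to the set of its right-hand neighbours.
    The graph expands if every left set S with 1 <= |S| <= s_max
    has at least c * |S| distinct neighbours on the right.
    """
    for size in range(1, s_max + 1):
        for subset in combinations(left_vertices, size):
            neighbours = set()
            for v in subset:
                neighbours |= adj[v]
            if len(neighbours) < c * size:
                return False
    return True
```

The point of explicit deterministic constructions is precisely to guarantee this property without such exhaustive checking, which is exponential in s_max.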
169
Being, eating and being eaten : deconstructing the ethical subject. Vrba, Minka. 12 1900.
Thesis (MPhil (Philosophy))--University of Stellenbosch, 2006. / This study constitutes a conceptual analysis and critique of the notion of the subject, and the concomitant notion of responsibility, as it has developed through the philosophical history of the modern subject. The aim of this study is to present the reader with a critical notion of responsibility. This study seeks to divorce such a position from the traditional, normative view of the subject, as typified by the Cartesian position. Following Derrida, a deconstructive reading of the subject’s conceptual development since Descartes is presented. What emerges from this reading is that, despite various re-conceptualisations of the subject by philosophers as influential and diverse as Nietzsche, Heidegger and Levinas, their respective positions continue to affirm the subject as human. The position presented in this study challenges this notion of the subject as human, with the goal of opening up and displacing the ethical frontier between human and non-human. It is argued that displacing this ethical frontier introduces complex responsibilities. These complex responsibilities resist the violence inherent to normative positions that typically exclude the non-human – particularly the animal – from the sphere of responsibility.
170
Constructing “Climate Change Knowledge”. de Ruijter, Susann. Unknown Date.
During the last decades, “Climate Change” has become a vital topic on national and international political agendas. There it is presented as an irrevocable fact of global impact and thus of universal relevance. What has often been neglected are the local discourses of marginalized groups and their specific contextualization of “Climate Change” phenomena. The aim of this project, to develop another perspective alongside these dominant narratives, has resulted in the research question: How is social reality reconstructed on the phenomenon of “Climate Change” among the “Emerging Black Farmers” in the Swartland region in Western Cape, South Africa?
Taken as an example, “Climate Change Knowledge” is reconstructed through a case study on the information exchange between the NGO Goedgedacht Trust and local small-scale farmers in the post-Apartheid context of on-going political, social, economic and educational transition in South Africa.
Applying a constructivist approach, “Climate Change Knowledge” is understood not as objectively given, but as a socially constructed “reality” that is based on the interdependency of socio-economic conditions and individual assets, including language skills and language practice, sets of social norms and values, as well as strategies of knowledge transfer.
The data set consists of qualitative data sources, such as application forms and interview material, which are triangulated. The rationale of a multi-layered data analysis includes a discursive perspective as well as linguistic and ethical “side perspectives”.
Epistemologically, the thesis is guided by assumptions of complexity theory, framing knowledge around “Climate Change” as a fluid, constantly changing system that is shaped by constant intra- and inter-systemic exchange processes, and characterized by non-linearity, self-organization and representation of its constituents. From this point of departure, a theoretical terminology has been developed, which differentiates between symbols, interrelations, contents and content clusters. These elements are located in a system of spatio-temporal orientation and embedded into a broader (socio-economic) context of “historicity”. Content clusters are remodelled with the help of concept maps. Starting from that, a local perspective on “Climate Change” is developed, adding an experiential notion to the global narratives.
The thesis concludes that there is no single reality about “Climate Change” and that the farmers’ “Climate Change Knowledge” highly depends on experiential relativity and spatio-temporal immediacy. Furthermore, analysis has shown that the system’s historicity and social manifestations can be traced in the scope and emphasis of the content clusters discussed. Finally the thesis demonstrates that characteristics of symbols, interconnections and contents range between dichotomies of direct and indirect, predictable versus unpredictable, awareness and negligence or threat and danger, all coexisting and creating a continuum of knowledge production.