1031 |
Securing the Northern Region of Ghana? Development Aid and Security Interventions. Torto, Eric Obodai, January 2013 (has links)
This dissertation offers a perspective through which we can explore the processes of joint development and security interventions in conflict-prone regions. Employing the experiences of the Northern Region of Ghana as my case study, this thesis examines the ways that the rationales of both development and security interventions are articulated in the field of practice. The central argument of the thesis is that most analyses of aid interventions, particularly those stemming from mainstream development literature, rarely interrogate the underlying rationales and assumptions behind the ideas, strategies and discourses employed in aid intervention. Notably, these rationales and assumptions tend to reduce the complexity of development and security challenges and, as a result, facilitate the implementation of technical solutions. The translation of development and security discourses and strategies into programmable practices as they encounter a local population is characterized by complex processes. Following the central argument of the thesis, the key research question interrogates the way that the rationales behind development aid and security interventions have been articulated in the conflict-prone Northern Region and how they have been received by the local population. With the overarching aim of understanding the complexities associated with the joint articulation of development and security programmes, this study provides a unique and critical analysis of international development and security practices. The study also provides a deeper understanding of the broad socio-economic and political contexts for the delivery of aid interventions. I scrutinize the rationales behind these interventions through a critical examination of colonial practices and three contemporary interventions: 1) region-wide interventions, 2) the UN Human Security Program, and 3) post-liberal interventions used as a panacea for prevailing implementation challenges.
Based on the analysis of archival documents, alongside policy, program, and interview documents, my study reveals the ways that the development-security nexus perpetuates liberal practices in the declared conflict-prone Northern Region of Ghana. I also evaluate the way that the development-security nexus reconstitutes individuals as resilient subjects through practices of empowerment and entrepreneurialism, and I demonstrate the contestations, contradictions, and colonial features that characterize interventions in the field of articulation.
|
1032 |
Communication Complexity of Remote State Preparation. Bab Hadiashar, Shima, 24 September 2014 (has links)
Superdense coding and quantum teleportation are two phenomena that would not be possible without prior entanglement. In superdense coding, one sends n bits of information using n/2 qubits in the presence of shared entanglement. However, we show that n bits of information cannot be sent with less than n bits of communication in LOCC protocols, even in the presence of prior entanglement. This result will be used in the rest of this thesis.
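As an illustration of the superdense-coding trade-off this abstract mentions (two classical bits carried by one qubit of a shared Bell pair), the textbook protocol can be simulated directly with state vectors. The sketch below is illustrative only and uses plain NumPy; it is not code from the thesis.

```python
import numpy as np

# Single-qubit gates.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def superdense_send(bits):
    """Encode two classical bits into Alice's half of a shared Bell pair."""
    # Shared Bell state |Phi+> = (|00> + |11>) / sqrt(2).
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    b0, b1 = bits
    op = np.eye(2, dtype=complex)
    if b1:
        op = X @ op        # bit flip
    if b0:
        op = Z @ op        # phase flip
    return np.kron(op, I) @ bell   # Alice acts only on her qubit

def superdense_receive(state):
    """Bob measures in the Bell basis to recover both bits."""
    bell_basis = {
        (0, 0): np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2),   # |Phi+>
        (0, 1): np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2),   # |Psi+>
        (1, 0): np.array([1, 0, 0, -1], dtype=complex) / np.sqrt(2),  # |Phi->
        (1, 1): np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2),  # |Psi->
    }
    for bits, basis_state in bell_basis.items():
        if abs(np.vdot(basis_state, state)) > 0.99:
            return bits

decoded = superdense_receive(superdense_send((1, 0)))  # -> (1, 0)
```

Up to a global phase, the four encodings land on the four orthogonal Bell states, which is why Bob's Bell-basis measurement recovers both bits with certainty.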
Quantum teleportation uses prior entanglement and classical communication to send an unknown quantum state. Remote state preparation (RSP) is the same distributed task, except that the sender knows a complete classical description of the state to be sent. We study the communication complexity of approximate remote state preparation, in which the goal is to prepare an approximation of the desired quantum state. Jain showed that the worst-case error communication complexity of RSP can be bounded from above in terms of the maximum possible information in an encoding [18]. He also showed that this quantity is a lower bound for the communication complexity of exact remote state preparation [18].
In this thesis, we characterize the worst-case error and average-case error communication complexity of remote state preparation in terms of non-asymptotic information-theoretic quantities. We also utilize the bound we derived for the communication complexity of LOCC protocols in the first part of the thesis, to show that the average-case error communication complexity of RSP can be much smaller than the worst-case.
|
1033 |
Topics in discrete optimization: models, complexity and algorithms. He, Qie, 13 January 2014 (has links)
In this dissertation we examine several discrete optimization problems through the perspectives of modeling, complexity and algorithms. We first provide a probabilistic comparison of split and type 1 triangle cuts for mixed-integer programs with two rows and two integer variables in terms of cut coefficients and volume cutoff. Under a specific probabilistic model of the problem parameters, we show that for the above measure, the probability that a split cut is better than a type 1 triangle cut is higher than the probability that a type 1 triangle cut is better than a split cut. The analysis also suggests some guidelines on when type 1 triangle cuts are likely to be more effective than split cuts and vice versa. We next study a minimum concave cost network flow problem over a grid network. We give a polynomial-time algorithm to solve this problem when the number of echelons is fixed. We show that the problem is NP-hard when the number of echelons is an input parameter. We also extend our result to grid networks with backward and upward arcs. Our result unifies the complexity results for several models in production planning and green recycling, including the lot-sizing model, and gives the first polynomial-time algorithm for some problems whose complexity was previously unknown. Finally, we examine how much complexity randomness brings to a simple combinatorial optimization problem. We study a problem called the sell or hold problem (SHP). In SHP, one must sell k out of n indivisible assets over two stages, with known first-stage prices and random second-stage prices, so as to maximize the total expected revenue. Although the deterministic version of SHP is trivial to solve, we show that SHP is NP-hard when the second-stage prices are realized as a finite set of scenarios. We show that SHP is polynomially solvable when the number of scenarios in the second stage is constant. A max{1/2, k/n}-approximation algorithm is presented for the scenario-based SHP.
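The sell-or-hold problem described above is easy to state and, for tiny instances, easy to solve exhaustively. The sketch below is a hypothetical brute-force formulation of the scenario-based SHP, not the thesis's approximation algorithm; it illustrates why hardness can appear, since the search over first-stage subsets grows exponentially with n.

```python
from itertools import combinations

def shp_brute_force(p1, scenarios, probs, k):
    """Exhaustively solve a two-stage sell-or-hold instance.

    p1[i]        : known stage-1 price of asset i.
    scenarios[s] : list of stage-2 prices, one per asset, in scenario s.
    probs[s]     : probability of scenario s.
    k            : total number of assets that must be sold.
    """
    n = len(p1)
    best = float("-inf")
    for m in range(k + 1):                       # m assets sold in stage 1
        for sell_now in combinations(range(n), m):
            rev1 = sum(p1[i] for i in sell_now)
            held = [i for i in range(n) if i not in sell_now]
            # In each scenario, sell the k-m most valuable held assets.
            rev2 = sum(
                probs[s]
                * sum(sorted((scen[i] for i in held), reverse=True)[: k - m])
                for s, scen in enumerate(scenarios)
            )
            best = max(best, rev1 + rev2)
    return best

# Sell 1 of 2 assets: holding is optimal here, E[best stage-2 price] = 6.0.
best = shp_brute_force([5.0, 1.0], [[9.0, 1.0], [3.0, 1.0]], [0.5, 0.5], k=1)  # -> 6.0
```

With a constant number of scenarios the optimal second-stage decision is a simple greedy choice, which is consistent with the polynomial solvability result quoted above.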
|
1034 |
Complexities of Order-Related Formal Language Extensions / Komplexiteter hos ordnings-relaterade utökningar av formella språk. Berglund, Martin, January 2014 (has links)
The work presented in this thesis discusses various formal language formalisms that extend classical formalisms like regular expressions and context-free grammars with additional abilities, most relating to order. This is done while focusing on the impact these extensions have on the efficiency of parsing the languages generated. That is, rather than taking a step up on the Chomsky hierarchy to the context-sensitive languages, which makes parsing very difficult, a smaller step is taken, adding some mechanisms which permit interesting spatial (in)dependencies to be modeled. The most immediate example is shuffle formalisms, where existing language formalisms are extended by introducing operators which generate arbitrary interleavings of argument languages. For example, introducing a shuffle operator to the regular expressions does not make it possible to recognize context-free languages like a^n b^n, but it does capture some non-context-free languages, like the language of all strings containing the same number of a's, b's and c's. The impact these additions have on parsing has many facets. Other than shuffle operators we also consider formalisms enforcing repeating substrings, formalisms moving substrings around, and formalisms that restrict which substrings may be concatenated. The formalisms studied here all have a number of properties in common. They are closely related to existing regular and context-free formalisms. They operate in a step-wise fashion, deriving strings by sequences of rule applications of individually limited power. Each step generates a constant number of symbols and does not modify parts that have already been generated. That is, strings are built in an additive fashion that does not explode in size (in contrast to e.g. Lindenmayer systems). All languages here will have a semi-linear Parikh image. Each features some interesting characteristic involving order or other spatial constraints.
In the shuffle example, multiple derivations are interspersed in a way that each is, in a sense, unaware of. All of the formalisms are intended to be limited enough to make an efficient parsing algorithm, at least for some cases, a reasonable goal. This thesis will give intuitive explanations of a number of formalisms fulfilling these requirements, and will sketch some results relating to the parsing problem for them. This should all be viewed as preparation for the more complete results and explanations featured in the papers given in the appendices. / Denna avhandling diskuterar utökningar av klassiska formalismer inom formella språk, till exempel reguljära uttryck och kontextfria grammatiker. Utökningarna handlar på ett eller annat sätt om ordning, och ett särskilt fokus ligger på att göra utökningarna på ett sätt som dels har intressanta spatiala/ordningsrelaterade effekter och som dels bevarar den effektiva parsningen som är möjlig för de ursprungliga klassiska formalismerna. Detta står i kontrast till att ta det större steget upp i Chomsky-hierarkin till de kontextkänsliga språken, vilket medför ett svårt parsningsproblem. Ett omedelbart exempel på en sådan utökning är s.k. shuffle-formalismer. Dessa utökar existerande formalismer genom att introducera operatorer som godtyckligt sammanflätar strängar från argumentspråk. Om en shuffle-operator introduceras till de reguljära uttrycken ger det inte förmågan att känna igen t.ex. det kontextfria språket a^n b^n, men det fångar istället vissa språk som inte är kontextfria, till exempel språket som består av alla strängar som innehåller lika många a:n, b:n och c:n. Sättet på vilket dessa utökningar påverkar parsningsproblemet är mångfacetterat. Utöver dessa shuffle-operatorer tas också formalismer där delsträngar kan upprepas, formalismer där delsträngar flyttas runt, och formalismer som begränsar hur delsträngar får konkateneras upp. Formalismerna som tas upp här har dock vissa egenskaper gemensamma.
De är nära besläktade med de klassiska reguljära och kontextfria formalismerna. De arbetar stegvis, och konstruerar strängar genom successiva applikationer av individuellt enkla regler. Varje steg genererar ett konstant antal symboler och modifierar inte det som redan genererats. Det vill säga, strängar byggs additivt och längden på dem kan inte explodera (i kontrast till t.ex. Lindenmayer-system). Alla språk som tas upp kommer att ha en semi-linjär Parikh-avbildning. De har någon intressant spatial/ordningsrelaterad egenskap. Exempelvis sättet på vilket shuffle-operatorer sammanflätar annars oberoende deriveringar. Alla formalismerna är tänkta att vara begränsade nog att det är rimligt att ha effektiv parsning som mål. Denna avhandling kommer att ge intuitiva förklaringar av ett antal formalismer som uppfyller ovanstående krav, och kommer att skissa en blandning av resultat relaterade till parsningsproblemet för dem. Detta bör ses som förberedande inför läsning av de mer djupgående och komplexa resultaten och förklaringarna i de artiklar som finns inkluderade som appendix.
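The shuffle operation discussed in this abstract has a classic algorithmic core: deciding whether a string is an interleaving of two given words, which a simple dynamic program answers in quadratic time. The sketch below is an illustration of that core idea, not material from the thesis.

```python
from functools import lru_cache

def is_interleaving(w, u, v):
    """Decide whether w lies in the shuffle of the single words u and v,
    i.e. whether w can be formed by interleaving u and v while preserving
    the internal order of each."""
    if len(w) != len(u) + len(v):
        return False

    @lru_cache(maxsize=None)
    def dp(i, j):
        # Can the prefix w[:i+j] be built from u[:i] and v[:j]?
        if i == 0 and j == 0:
            return True
        if i > 0 and w[i + j - 1] == u[i - 1] and dp(i - 1, j):
            return True
        if j > 0 and w[i + j - 1] == v[j - 1] and dp(i, j - 1):
            return True
        return False

    return dp(len(u), len(v))

assert is_interleaving("aabbcc", "abc", "abc")
assert not is_interleaving("ccbbaa", "abc", "abc")
```

Note that the non-context-free language mentioned above needs no shuffle machinery for a single membership check (`w.count('a') == w.count('b') == w.count('c')` suffices); the interesting difficulty lies in parsing with respect to a shuffle expression, not in ad hoc membership tests.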
|
1035 |
Students' Experiences During Democratic Activities at a Canadian Free School: A Case Study. Prud'homme, Marc-Alexandre, 09 February 2011 (has links)
While the challenge of improving young North Americans' civic engagement seems to lie in the hands of schools, studying alternative ways of teaching citizenship education could benefit the current educational system. In this context, free schools (i.e., schools run democratically by students and teachers), guided by a philosophy that aims at engaging students civically through the democratic activities that they support, offer a relatively unexplored ground for research. The present inquiry is a case study using tools of ethnography and drawing upon some principles of complexity thinking. It aims at understanding students' citizenship education experiences during democratic activities in a Canadian free school. It describes many experiences that can arise from these activities. They occurred within a school that operated democratically based on a consensus model. More precisely, they took place during two kinds of democratic activities: class meetings, which regulated the social life of the school, and judicial committees, whose function was to solve conflicts at the school. During these activities, students mostly experienced a combination of feelings of appreciation, concernment and empowerment. While experiencing these feelings, they predominantly engaged in decision-making and conflict resolution processes. During these processes, students modified their conflict resolution skills, various conceptions, and their participation in democratic activities and in the school. Based on these findings, the study concludes that students can develop certain skills and attitudes associated with citizenship education during these activities and become active from a citizenship perspective. Hence, these democratic activities represent alternative strategies that can assist educators in teaching about citizenship.
|
1036 |
A Comparative Study of Habitat Complexity, Neuroanatomy, and Cognitive Behavior in Anolis Lizards. Powell, Brian James, January 2012 (has links)
Changing environmental conditions may present substantial challenges to organisms experiencing them. In animals, the fastest way to respond to these changes is often by altering behavior. This ability, called behavioral flexibility, varies among species and can be studied on several levels. First, the extent of behavioral flexibility exhibited by a species can be determined by observation of that species' behavior, either in nature or in experimental settings. Second, because the central nervous system is the substrate determining behavior, neuroanatomy can be studied as the proximate cause of behavioral flexibility. Finally, the ultimate causation can be examined by studying ecological factors that favor the evolution of behavioral flexibility. In this dissertation, I investigate behavioral flexibility across all three levels by examining the relationship between habitat structure, the size of different structures within the brain and total brain size, and behavioral flexibility in six closely related species of Puerto Rican Anolis lizards. Anolis lizards provide an excellent taxon for this study, as certain species, including those used here, are classified as belonging to different ecomorphs and are morphologically and behaviorally specialized to distinct structural habitat types.

In order to determine the presence of behavioral flexibility in Anolis, I first presented Anolis evermanni with a series of tasks requiring motor learning and a single instance of reversal learning. Anolis evermanni demonstrated high levels of behavioral flexibility in both tasks.

To address the pattern of brain evolution in the Anolis brain, I used a histological approach to measure the volume of the whole brain, telencephalon, dorsal cortex, dorsomedial cortex, medial cortex, dorsal ventricular ridge, cerebellum, and medulla in six closely related species of Puerto Rican Anolis lizards belonging to three ecomorphs. These data were analyzed to determine the relative contributions of concerted and mosaic brain evolution to Anolis brain evolution. The cerebellum showed a trend toward mosaic evolution, while the remaining brain structures matched the predictions of concerted brain evolution.

I then examined the relationship between the complexity of the structural habitat occupied by each species and brain size, in order to determine if complex habitats are associated with relatively large brains. I measured brain volume using histological methods and directly measured habitat complexity in all six species. Using Principal Component Analysis, I condensed the measures of habitat structure into a single variable and corrected it for the scale of each lizard species' movement, calling the resulting measurement relevant habitat complexity. I tested the relative volume of the telencephalon, dorsal cortex, dorsomedial cortex, and whole brain against both relevant habitat complexity and ecomorph classification. There was no relationship between the relative volume of any brain structure examined and either relevant habitat complexity or ecomorph. However, the relevant habitat complexities for each species did not completely match their ecomorph classifications.

Finally, I tested the behavioral flexibility of three species of Anolis (A. evermanni, A. pulchellus, and A. cristatellus) belonging to three distinct ecomorphs by presenting them with tasks requiring motor and reversal learning. Anolis evermanni performed well in both tasks, while A. pulchellus required more trials to learn the motor task. Only a single A. cristatellus individual was able to perform either task. Anolis evermanni displayed lower levels of neophobia than the other species, which may be related to its superior performance.

In combination, this research suggests that Anolis of different ecomorphs display different levels of behavioral flexibility. At the proximate level, this difference in behavioral flexibility cannot be explained by changes in the relative size of the total brain or of brain structures associated with cognitive abilities in other taxa. At the ultimate level, the size of the brain and several constituent structures cannot be predicted from habitat complexity. However, behavioral flexibility in certain tasks may be favored by the use of complex habitats. Flexibility in different tasks is not correlated, rendering broad comparisons to habitat complexity problematic. / Dissertation
|
1037 |
Entwicklung eines Modelica Compiler BackEnds für große Modelle / Development of a Modelica Compiler BackEnd for Large Models. Frenkel, Jens, 03 February 2014 (has links) (PDF)
Die symbolische Aufbereitung mathematischer Modelle ist eine wesentliche Voraussetzung, um die Dynamik komplexer physikalischer Systeme mit Hilfe numerischer Simulationen zu untersuchen. Deren Modellierung mit gleichungsbasierten objektorientierten Sprachen bietet gegenüber der signalflussbasierenden Modellierung den Vorteil, dass die sich aus der Mathematik und Physik ergebenden Gleichungen in ihrer ursprünglichen Form zur Modellbeschreibung verwendet werden können. Die akausale Beschreibung mit Gleichungen erhöht die Wiederverwendbarkeit der Modelle und reduziert den Aufwand bei der Modellierung.
Die automatisierte Überführung der Gleichungen in Zuweisungen lässt sich theoretisch auf Modelle beliebiger Größe und Komplexität anwenden. In der praktischen Anwendung ergeben sich jedoch, insbesondere bei der automatisierten Überführung großer Modelle, mathematische Systeme mit sehr vielen Gleichungen und zeitabhängigen Variablen. Die daraus resultierenden langen Ausführungszeiten schränken die Anwendbarkeit der symbolischen Aufbereitung erheblich ein.
Die vorliegende Arbeit beschreibt den Prozess der automatisierten Überführung eines Modells bis zur numerischen Simulation. Alle Teilschritte werden detailliert untersucht und bezüglich ihrer theoretischen Komplexität bewertet. Geeignete Algorithmen werden im OpenModelica Compiler umgesetzt und bezüglich ihrer Laufzeit anhand praxisrelevanter Modelle miteinander verglichen und für jeden Teilschritt die beste Implementierung ausgewählt. Dadurch konnte ein nahezu linearer Zusammenhang zwischen Modellgröße und benötigter Zeit zur Überführung erreicht werden.
Zusätzlich bietet die Arbeit eine umfassende Dokumentation mit zahlreichen Empfehlungen für die Implementierung eines BackEnds eines Modelica Compilers. Dies erleichtert den Einstieg für neue Entwickler sowie die Weiterentwicklung bestehender Algorithmen. Letzteres wird durch ein modulares Konzept einer BackEnd-Pipeline unterstützt. Außerdem werden Methoden diskutiert, wie ein neues Modul innerhalb dieser Pipeline effizient getestet werden kann. / The symbolic treatment of mathematical models is essential to study the dynamics of complex physical systems by means of numerical simulations. In contrast to signal flow based approaches, modeling with equation-based and object-oriented languages has the advantage that the original equations can be used directly. The acausal description of equations increases reusability and reduces the effort for the modeller.
The automated transformation of equations into assignments can in theory be applied to models of any size and complexity. In practice, however, problems arise when large models, i.e. mathematical systems with many equations and time-dependent variables, are to be transformed. The long execution times that occur limit the applicability of symbolic processing considerably.
The present work describes the process of automated transformation from a model to program code that can be simulated numerically. All steps are examined in detail and evaluated in terms of their theoretical complexity. Suitable algorithms are implemented in the OpenModelica Compiler. Their execution times are compared on models relevant to engineering practice, and the best implementation for each sub-step is selected and combined into a Modelica Compiler BackEnd. In this way, a mostly linear relationship between model size and the time needed for the transformation has been achieved.
In addition, a comprehensive discussion with numerous recommendations for the implementation of a Modelica Compiler BackEnd is given. This is intended to help new developers as well as to facilitate the further development of existing algorithms. The latter is supported by a modular concept of a BackEnd pipeline. Moreover, methods are discussed for testing new modules within this pipeline efficiently.
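The core causalization step alluded to above (turning an unordered equation system into an ordered sequence of assignments) is commonly described as matching each equation to the variable it will be solved for, followed by a dependency sort. The sketch below is a generic illustration of that idea, not OpenModelica's implementation; the equation and variable names are hypothetical.

```python
def match_equations(eq_vars):
    """Assign each equation a variable to solve for (bipartite matching
    via augmenting paths), then order the assignments so every variable
    is computed before it is used.

    eq_vars[e] is the set of variables appearing in equation e.
    Returns a list of (equation, solved_variable) in evaluation order.
    """
    var_of = {}   # equation -> matched variable
    eq_of = {}    # variable -> matched equation

    def augment(e, seen):
        for v in eq_vars[e]:
            if v in seen:
                continue
            seen.add(v)
            if v not in eq_of or augment(eq_of[v], seen):
                var_of[e], eq_of[v] = v, e
                return True
        return False

    for e in eq_vars:
        if not augment(e, set()):
            raise ValueError(f"structurally singular at equation {e}")

    # Dependency sort: equation e depends on the equations that compute
    # the other variables it references. (Algebraic loops would need
    # strongly connected components, i.e. BLT blocks via Tarjan.)
    order, done = [], set()

    def visit(e):
        if e in done:
            return
        done.add(e)
        for v in eq_vars[e]:
            if v != var_of[e]:
                visit(eq_of[v])
        order.append((e, var_of[e]))

    for e in eq_vars:
        visit(e)
    return order

# e0: x = 1;  e1: y = x + 1;  e2: z = x + y
plan = match_equations({"e0": {"x"}, "e1": {"x", "y"}, "e2": {"x", "y", "z"}})
```

A structurally singular system (more unknowns than independent equations, or vice versa) fails at the matching stage, which mirrors the structural checks a compiler BackEnd performs before code generation.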
|
1038 |
Independent component analysis for maternal-fetal electrocardiography. Marcynuk, Kathryn L., 09 January 2015 (links)
Separating unknown signal mixtures into their constituent parts is a difficult problem in signal processing called blind source separation. One of the benchmark problems in this area is the extraction of the fetal heartbeat from an electrocardiogram (ECG) in which it is overshadowed by a strong maternal heartbeat. This thesis presents a study of a signal separation technique called independent component analysis (ICA) in order to assess its suitability for the maternal-fetal ECG separation problem. This includes an analysis of ICA on deterministic, stochastic, simulated, and recorded ECG signals. The experiments presented in this thesis demonstrate that ICA is effective on linear mixtures of known simulated or recorded ECGs. The performance of ICA was measured using visual comparison, heart rate extraction, and energy-based, information-theoretic, and fractal-based measures. ICA extraction of clinically recorded maternal-fetal ECG mixtures, in which the source signals were unknown, was successful at recovering the fetal heart rate.
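ICA of the kind studied here can be demonstrated on a synthetic two-source mixture. The following minimal FastICA implementation (deflation scheme, tanh nonlinearity) is an illustrative sketch, not the thesis's code, and the test signals are simple stand-ins for the maternal and fetal components.

```python
import numpy as np

def fastica(X, n_iter=300, seed=0):
    """Minimal FastICA (deflation, tanh nonlinearity), for illustration.
    X holds one observed mixture per row; returns estimated sources,
    one per row, recovered only up to order, sign and scale."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    # Whitening: decorrelate the mixtures and rescale to unit variance.
    d, E = np.linalg.eigh(np.cov(X))
    Z = (np.diag(1.0 / np.sqrt(d)) @ E.T) @ X
    n, T = Z.shape
    W = np.zeros((n, n))
    for i in range(n):
        w = rng.standard_normal(n)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            y = w @ Z                               # current component
            w_new = (Z * np.tanh(y)).mean(axis=1) - (1 - np.tanh(y) ** 2).mean() * w
            w_new -= W[:i].T @ (W[:i] @ w_new)      # deflate against found rows
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1.0) < 1e-12
            w = w_new
            if converged:
                break
        W[i] = w
    return W @ Z

# Two synthetic independent sources (a sine and a sawtooth), linearly mixed.
t = np.arange(4000) / 100.0
s1 = np.sin(2.0 * t)
s2 = 2.0 * ((0.7 * t) % 1.0) - 1.0
X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ np.vstack([s1, s2])
S_est = fastica(X)
```

Because ICA exploits non-Gaussianity rather than timing information, the recovered rows come back in arbitrary order and sign, which is why evaluation in practice relies on measures such as correlation with references or heart rate extraction.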
|
1039 |
The complexity of digraph homomorphisms: Local tournaments, injective homomorphisms and polymorphisms. Swarts, Jacobus Stephanus, 19 December 2008 (has links)
In this thesis we examine the computational complexity of certain digraph homomorphism problems. A homomorphism between digraphs, denoted by $f: G \to H$, is a mapping from the vertices of $G$ to the vertices of $H$ such that the arcs of $G$ are preserved. The problem of deciding whether a homomorphism to a fixed digraph $H$ exists is known as the $H$-colouring problem.
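For concreteness, the $H$-colouring problem can be decided by brute force for small digraphs: try every map from the vertices of $G$ to the vertices of $H$ and check that arcs are preserved. The sketch below is illustrative only; its exponential search is consistent with the NP-completeness results discussed in this thesis.

```python
from itertools import product

def h_colouring(G_vertices, G_arcs, H_vertices, H_arcs):
    """Brute-force search for a digraph homomorphism f: G -> H, i.e. a
    vertex map with (f(u), f(v)) an arc of H for every arc (u, v) of G.
    Exponential in |V(G)| -- fine only for tiny instances."""
    H_arcs = set(H_arcs)
    for image in product(H_vertices, repeat=len(G_vertices)):
        f = dict(zip(G_vertices, image))
        if all((f[u], f[v]) in H_arcs for (u, v) in G_arcs):
            return f
    return None

# A directed 3-cycle maps onto itself, but not onto a single arc.
C3 = ([0, 1, 2], [(0, 1), (1, 2), (2, 0)])
P2 = (["a", "b"], [("a", "b")])
f = h_colouring(*C3, *C3)
```

The dichotomy results mean that for each fixed target $H$ this search is either avoidable (a polynomial-time algorithm exists) or essentially unavoidable (the problem is NP-complete).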
We prove a generalization of a theorem due to Bang-Jensen, Hell and MacGillivray. Their theorem shows that for every semi-complete digraph $H$, $H$-colouring exhibits a dichotomy: $H$-colouring is either polynomial time solvable or it is NP-complete. We show that the class of local tournaments also exhibits a dichotomy. The NP-completeness results are found using direct NP-completeness reductions, indicator and vertex (and arc) sub-indicator constructions. The polynomial cases are handled by appealing to a result of Gutjahr, Woeginger and Welzl: the \underbar{$X$}-graft extension. We also provide a new proof of their result that follows directly from the consistency check. An unexpected result is the existence of unicyclic local tournaments with NP-complete homomorphism problems.
During the last decade a new approach to studying the complexity of digraph homomorphism problems has emerged. This approach focuses attention on so-called polymorphisms as a measure of the complexity of a digraph homomorphism problem. For a digraph $H$, a polymorphism of arity $k$ is a homomorphism $f: H^k \to H$.
Certain special polymorphisms are conjectured to be the key to understanding $H$-colouring problems. These polymorphisms are known as weak near unanimity functions (WNUFs). A WNUF of arity $k$ is a polymorphism $f: H^k \to H$ such that $f$ is idempotent and $f(y,x,x,\ldots,x)=f(x,y,x,\ldots,x)=f(x,x,y,\ldots,x) = \cdots = f(x,x,x,\ldots,y)$. We prove that a large class of polynomial time $H$-colouring problems all have a WNUF. Furthermore, we also prove some non-existence results for WNUFs on certain digraphs. In proving these results, we develop a vertex (and arc) sub-indicator construction as well as an indicator construction, in analogy with the ones developed by Hell and Nešetřil. This is then used to show that all tournaments with at least two cycles do not admit a WNUF of arity $k$ for any $k>1$. This furnishes a new proof (in the case of tournaments) of the result by Bang-Jensen, Hell and MacGillivray referred to at the start. These results lend some support to the conjecture that WNUFs are the "right" functions for measuring the complexity of $H$-colouring problems.
We also study a related notion, namely that of an injective homomorphism. A homomorphism $f: G \to H$ is injective if the restriction of $f$ to the in-neighbours of every vertex in $G$ is an injective mapping. In order to classify the complexity of these problems we develop an indicator construction that is suited to injective homomorphism problems.
For this type of digraph homomorphism problem we consider two cases: reflexive and irreflexive targets. In the case of reflexive targets we are able to classify all injective homomorphism problems as either belonging to the class of polynomial time solvable problems or as being NP-complete. Irreflexive targets pose more of a problem. The difficulty lies with targets of maximum in-degree equal to two. Targets with maximum in-degree one are polynomial, while targets with maximum in-degree at least three are NP-complete. There is a transformation from (ordinary) graph homomorphism problems to injective, in-degree-two homomorphism problems (a reverse transformation also exists). This transformation provides some explanation as to the difficulty of the in-degree-two case. We nonetheless classify all injective homomorphisms to irreflexive tournaments as either being a problem in P or a problem in the class of NP-complete problems. We also discuss some upper bounds on the injective oriented irreflexive (reflexive) chromatic number.
|
1040 |
ATC complexity measures: Formulas measuring workload and complexity at Stockholm TMA. Dervic, Amina; Rank, Alexander, January 2015 (links)
Workload and complexity measures are, as of today, often imprecise and subjective. Currently, two commonly used workload and complexity measuring formulas are the Monitor Alert Parameter and the "Bars", both using the same measurement variables: number of aircraft and time. This study creates formulas for quantifying ATC complexity. The study is set in an approach environment; the formulas were developed and tested on Stockholm TMA through the creation of 20 traffic scenarios. Ten air traffic controllers working in Stockholm TMA studied the complexity of the scenarios individually and ranked the scenarios relative to each other. Five controllers evaluated scenarios A1-A10; these scenarios were used as references when creating the formulas. The other half of the scenarios, B1-B10, ranked by the other five controllers, were used as validation scenarios. Factors relevant to an approach environment were identified, and data were extracted from the scenarios according to the identified factors. A regression analysis was then performed with the ambition of revealing appropriate weights for each variable. In the first regression, called formula #1, some parameter values were identical and some parameter weights became negative, so the basic requirements were not met; consequently, additional regressions were done, eventually forming formula #2. Formula #2 showed stable values and plausible parameter weights, and it performed better than a present-day workload measuring model when the two were compared. Despite the small number of data samples, we were able to demonstrate a genuine relation between three mutually independent variables and traffic complexity.
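The regression step described above can be sketched with ordinary least squares. The factor columns and ranking values below are hypothetical stand-ins, not the study's data; the point is only to show how parameter weights (including an intercept) fall out of the fit, and how an implausible negative weight would become visible.

```python
import numpy as np

# Hypothetical per-scenario factor counts (columns: number of aircraft,
# climbing/descending aircraft, predicted conflicts) and averaged
# controller complexity rankings for ten reference scenarios.
factors = np.array([
    [4, 1, 0], [6, 2, 1], [5, 1, 1], [8, 3, 2], [3, 0, 0],
    [7, 2, 2], [9, 4, 3], [5, 2, 0], [6, 3, 1], [10, 4, 2],
], dtype=float)
rankings = np.array([2.0, 4.5, 3.5, 7.0, 1.0, 6.5, 9.0, 3.0, 5.0, 8.5])

# Least-squares fit with an intercept column. A negative factor weight
# would signal an implausible model, as the study found in formula #1.
A = np.hstack([factors, np.ones((len(factors), 1))])
weights, *_ = np.linalg.lstsq(A, rankings, rcond=None)

predicted = A @ weights   # fitted complexity score per scenario
```

In practice the fitted weights would then be validated against held-out scenarios (as the B1-B10 set is used above) rather than judged on the training fit alone.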
|