71 |
The impact of residential adventure education on primary school pupils. Williams, Randall (January 2012)
This is a mixed-method study carried out from a pragmatist philosophical position. The research question is: how (if at all) do primary school pupils change following a residential adventure education experience, how does any change relate to their experience during the residential, and what implications does that have for the provision of residential adventure education? It is a three-phase study. Phase 1 is quantitative: a survey to assess whether there is a correlation between the extent of residential opportunities and whole-school performance measures. Phase 2 is qualitative: a series of interviews with headteachers, parents and policy makers to discover their perceptions of the impact of a residential programme. Phase 3 is quantitative: designing and testing an instrument to measure the impact on pupils of different aspects of a residential programme and comparing this with their classroom attainment and their social and emotional development. No relationship was found between the extent of residential opportunities and whole-school performance measures, although it was found that opportunities are inversely correlated with deprivation. Interview data produced a rich source of evidence for the way in which different aspects of a course combine to produce a powerful impact. Complexity theory was used as a theoretical perspective to suggest that a non-linear step change in self-confidence could arise naturally, and possibly inevitably, from the fact that residential adventure education is a complex system. Analysis of the pupil impact survey showed that many different aspects of the experience combine to create the impact, but that it can reliably be separated into four components: living with others, challenge, teacher relationships and learning about self. There was a significant correlation between the improvement in individual pupils’ classroom attainment over the course of a term and the impact that the residential had on them. There was a significant improvement from pre-course to post-course in pupils’ prosocial behaviour and a significant reduction in perceived hyperactivity.
|
72 |
Space in Proof Complexity. Vinyals, Marc (January 2017)
Propositional proof complexity is the study of the resources that are needed to prove formulas in propositional logic. In this thesis we are concerned with the size and space of proofs, and in particular with the latter. Different approaches to reasoning are captured by corresponding proof systems. The simplest and most well-studied proof system is resolution, and we try to bring our understanding of other proof systems closer to that of resolution. In resolution we can prove a space lower bound just by showing that any proof must have a large clause. We prove a similar relation between resolution width and polynomial calculus space that lets us derive space lower bounds, and we use it to separate degree and space. For cutting planes we show length-space trade-offs. That is, there are formulas that have a proof in small space and a proof in small length, but no proof can optimise both measures at the same time. We introduce a new measure of space, cumulative space, that accounts for the space used throughout a proof rather than only its maximum. This is exploratory work, but we can also prove new results for the usual space measure. We define a new proof system that aims to capture the power of current SAT solvers, and we show a landscape of length-space trade-offs comparable to those in resolution. To prove these results we build and use tools from other areas of computational complexity. One area is pebble games, very simple computational models that are useful for modelling space. In addition to results with applications to proof complexity, we show that pebble game cost is PSPACE-hard to approximate. Another area is communication complexity, the study of the amount of communication that is needed to solve a problem when its description is distributed among multiple parties. We prove a simulation theorem that relates the query complexity of a function to the communication complexity of a composed function.
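As a rough illustration of the resolution proof system mentioned above, the following toy sketch (my own illustrative code, not an algorithm from the thesis) saturates a CNF formula under the resolution rule, checks whether the empty clause is derivable, and reports the width of the largest clause encountered, the measure that the abstract connects to space lower bounds.

```python
from itertools import combinations

# Literals are non-zero ints: k is variable k, -k its negation; a clause is a frozenset.
def resolve(c1, c2, lit):
    """Resolvent of c1 and c2 over `lit`, where lit is in c1 and -lit is in c2."""
    return frozenset((c1 - {lit}) | (c2 - {-lit}))

def saturate(clauses):
    """Naively derive all resolvents; return (refutable?, width of saturated set)."""
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for lit in c1:
                if -lit in c2:
                    r = resolve(c1, c2, lit)
                    if not any(-l in r for l in r):   # drop tautological resolvents
                        new.add(r)
        if new <= clauses:                            # fixed point reached
            width = max((len(c) for c in clauses), default=0)
            return frozenset() in clauses, width
        clauses |= new

# (x) AND (-x OR y) AND (-y) is contradictory, so the empty clause is derivable.
print(saturate([{1}, {-1, 2}, {-2}]))                 # (True, 2)
```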
|
73 |
Efficient Ways to Upgrade Docker Containers in Cloud to Support Backward Compatibility: Various Upgrade Strategies to Measure Complexity. Madala, Sravya (January 2016)
In today’s telecommunication landscape, thousands of systems are being moved into the cloud because of the wide range of features it offers. This thesis examines efficient ways to upgrade Docker containers so as to support backward compatibility. It is mainly concerned with the high availability of systems in the cloud environment during upgrades. Smaller changes can be implemented automatically to some extent; such minor changes can be handled by Apache Avro, where a schema is defined for the data. At some point, however, the changes become too complex for Avro to handle on its own. In a real-world example, we need to perform major changes on top of an application. Here we test different upgrade strategies and compare the code complexity, total time to upgrade, and network usage of a single-upgrade strategy versus a multiple-upgrade strategy, with and without the use of Avro. When code complexity is compared, the case without Avro performs well in the single-upgrade strategy, taking less time to upgrade all six instances, although its network usage is higher than with multiple upgrades. The single-upgrade strategy is therefore better for maintaining high availability in the cloud by performing the upgrades in an efficient manner.
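To make the Avro-based handling of minor changes concrete, here is a minimal sketch of Avro-style schema resolution for records; the schemas, field names and the resolve_record helper are hypothetical illustrations and are not taken from the thesis or from the Avro library. A reader running the new schema can still consume data produced under the old one because the added field carries a default.

```python
# Writer (old) and reader (new) schemas in Avro-like record form.
WRITER_SCHEMA = {
    "type": "record", "name": "Subscriber",
    "fields": [{"name": "id", "type": "long"},
               {"name": "msisdn", "type": "string"}],
}
READER_SCHEMA = {
    "type": "record", "name": "Subscriber",
    "fields": [{"name": "id", "type": "long"},
               {"name": "msisdn", "type": "string"},
               # New field: backward compatible only because it has a default.
               {"name": "plan", "type": "string", "default": "basic"}],
}

def resolve_record(datum, writer_schema, reader_schema):
    """Project a record written under writer_schema onto reader_schema."""
    writer_fields = {f["name"] for f in writer_schema["fields"]}
    out = {}
    for field in reader_schema["fields"]:
        name = field["name"]
        if name in writer_fields:
            out[name] = datum[name]            # field known to the old writer
        elif "default" in field:
            out[name] = field["default"]       # new field: fall back to default
        else:
            raise ValueError(f"incompatible change: no default for '{name}'")
    return out

old_record = {"id": 42, "msisdn": "+46700000000"}
print(resolve_record(old_record, WRITER_SCHEMA, READER_SCHEMA))
# {'id': 42, 'msisdn': '+46700000000', 'plan': 'basic'}
```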
|
74 |
Approximated transform and quantisation for complexity-reduced high efficiency video coding. Sazali, Mohd (January 2017)
The transform-quantisation stage is one of the most complex operations in the state-of-the-art High Efficiency Video Coding (HEVC) standard, accounting for an 11–41% share of the encoding complexity. This study aims to reduce its complexity, making it suitable for dedicated hardware-accelerated architectures. Adopted methods include a multiplier-free approach, Multiple-Constant-Multiplication architectural designs, and the exploitation of useful properties of the well-known Discrete Cosine Transform. In addition, an approximation scheme was introduced to represent the original HEVC transform and quantisation matrix elements with more hardware-friendly integers. Out of several derived approximation alternatives, an approximated transform matrix (T16) and its downscaled version (ST16) were further evaluated. An approximated quantisation multipliers matrix (Q) and its combination with one transform matrix (ST16 + Q) were also assessed in the HEVC reference software, HM-13.0, using test video sequences of High Definition (HD) quality or higher. Their hardware architectures were designed in IEEE-VHDL targeting a Xilinx Virtex-6 Field Programmable Gate Array technology to estimate resource savings over the original HEVC transform and quantisation. The T16, ST16, Q, and ST16 + Q approximated transform and/or quantisation matrices gave average Bjøntegaard-Delta bitrate differences of 1.7%, 1.7%, 0.0%, and 1.7%, respectively, in the entertainment scenario and 0.7%, 0.7%, -0.1%, and 0.7%, respectively, in the interactive scenario against HEVC. Conversely, hardware savings of around 16.9%, 20.8%, 21.2%, and 25.9%, respectively, were attained in the number of Virtex-6 slices compared with the original HEVC transform and/or quantisation. The developed architecture designs achieved a 200 MHz operating frequency, enabling them to support the encoding of Quad Full HD (3840 × 2160) video at 60 frames per second. Comparing T16 and ST16 with similar designs in the literature yields better hardware-efficiency measures (0.0687 and 0.0721 mega-samples/second/slice, respectively). The presented approximated transform and quantisation matrices may be applicable in complexity-reduced HEVC encoding on hardware platforms with non-detrimental coding performance degradation.
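As an illustration of the approximation idea, the sketch below applies a separable 2-D transform to a small residual block using the standard HEVC 4-point core matrix and a hypothetical shift-and-add-friendly approximation of it. The approximated matrix A4 is an illustrative stand-in, not the thesis's T16 or ST16, whose entries are not given in the abstract.

```python
import numpy as np

# HEVC-style 4-point forward core transform (the standard integer DCT-II basis).
H4 = np.array([[64,  64,  64,  64],
               [83,  36, -36, -83],
               [64, -64, -64,  64],
               [36, -83,  83, -36]])

# A hypothetical "hardware-friendly" approximation: every coefficient becomes a
# sum of at most two powers of two, so each product reduces to shifts and adds.
A4 = np.array([[64,  64,  64,  64],
               [80,  36, -36, -80],     # 83 -> 80 = 64 + 16; 36 = 32 + 4
               [64, -64, -64,  64],
               [36, -80,  80, -36]])

def transform_2d(block, m):
    """Separable 2-D transform of a 4x4 residual block with core matrix m."""
    return m @ block @ m.T

residual = np.array([[ 5, -3,  2,  0],
                     [ 1,  4, -2, -1],
                     [ 0,  2,  3, -4],
                     [-1,  0,  1,  2]])

exact = transform_2d(residual, H4)
approx = transform_2d(residual, A4)
# The relative approximation error stays small compared with coefficient magnitude.
print(np.max(np.abs(exact - approx)) / np.max(np.abs(exact)))
```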
|
75 |
Implicit Computational Complexity and Compilers / Complexité Implicite et compilateurs. Rubiano, Thomas (1 December 2017)
Complexity theory helps us predict and control the resources, usually time and space, consumed by programs. Static analysis based on specific syntactic criteria allows us to categorise families of programs. A common approach is to observe the behaviour of the data a program manipulates. For instance, the detection of non-size-increasing programs is based on a simple principle: counting memory allocations and deallocations, particularly in loops. In this way we can detect programs which compute within a constant amount of space. This method can easily be expressed as a property of control flow graphs. Because these analyses of data behaviour rely on purely syntactic criteria, they can be performed at compile time. Because they are static, they are not always computable, or not easily so, and approximations are needed. The “Size-Change Principle” of C. S. Lee, N. D. Jones and A. M. Ben-Amram presented a method for predicting termination by observing the evolution of resources, and a large body of research has grown out of this theory. Until now, these implicit complexity techniques have essentially been applied to more or less toy languages. This thesis carries implicit computational complexity methods over to “real life” programs by working on the intermediate representations used in widely used compilers. It provides the community with a tool able to handle a large number of examples, giving an accurate idea of the actual expressivity of these analyses, and it shows that the implicit computational complexity and compiler communities can fruitfully fuel each other. As we show in this thesis, the methods developed are quite general and open the way to several new applications.
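The allocation-counting criterion described above can be sketched in a few lines. The toy intermediate representation and opcode names below are hypothetical simplifications of my own; the thesis performs this kind of analysis on real compiler intermediate representations.

```python
# A toy intermediate representation: each loop body is a list of opcodes.
# Following the non-size-increasing criterion sketched above, a loop whose body
# never allocates more cells than it frees cannot grow memory across iterations,
# so the whole program is a candidate for constant-space execution.

ALLOC_OPS = {"alloc": +1, "cons": +1, "free": -1, "pop": -1}

def net_allocation(loop_body):
    """Net number of cells allocated by one pass over the loop body."""
    return sum(ALLOC_OPS.get(op, 0) for op in loop_body)

def non_size_increasing(loops):
    """Conservative check: every loop must have net allocation <= 0."""
    return all(net_allocation(body) <= 0 for body in loops)

# In-place list reversal: one cons per pop inside the loop -> net 0, accepted.
reverse_loop = [["pop", "cons", "branch"]]
# Copying a list: a cons with no matching pop -> net +1, rejected.
copy_loop = [["load", "cons", "branch"]]

print(non_size_increasing(reverse_loop))  # True
print(non_size_increasing(copy_loop))     # False
```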
|
76 |
Some remarks on consequences of Shor's Factoring Algorithm. Karl-Georg Schlesinger, kgschles@esi.ac.at (26 February 2001)
No description available.
|
77 |
A Plausibility Argument for #P. Karl-Georg Schlesinger, kgschles@esi.ac.at (26 February 2001)
No description available.
|
78 |
The role of self-concept and narcissism in aggression. Hook, Tarah Lynn (14 May 2007)
It was hypothesized that the self-esteem instability and emotional reactivity associated with narcissism may be related to the simplicity of cognitive self-representation known as low self-complexity. The relationships among narcissism, self-concept, affect and violent behaviour were investigated in two studies with samples of federally sentenced violent and sexual offenders. In the first study, participants completed personality inventories and a measure of self-complexity, while changes in self-esteem were tracked across two weeks. In the second study, participants completed the same battery of measures as in the first study in addition to several new measures of anger, aggression and previous violent behaviour. Also, official records were consulted to obtain collateral information regarding violent behaviour. Experiences of positive and negative events and the resulting changes in affect and self-esteem were tracked over six weeks. It was expected that self-complexity would mediate reactivity to daily events such that individuals low in self-complexity and high in narcissistic personality traits would report the greatest shifts in self-esteem and emotion. When positive and negative self-complexity were considered separately, some support was found for the hypothesized buffering effect. Generally, higher positive self-complexity was associated with better coping while higher negative self-complexity was associated with less desirable reactions to events. Theoretical and clinical implications of this finding are discussed along with limitations of these studies and suggestions for future research.
|
79 |
Parameterized Enumeration of Neighbour Strings and Kemeny Aggregations. Simjour, Narges (January 2013)
In this thesis, we consider approaches to enumeration problems in the parameterized complexity setting. We obtain competitive parameterized algorithms to enumerate all, as well as several of, the solutions for two related problems, Neighbour String and Kemeny Rank Aggregation. In both problems, the goal is to find a solution that is as close as possible to a set of inputs (strings and total orders, respectively) according to some distance measure.

We also introduce a notion of enumerative kernels for which there is a bijection between solutions to the original instance and solutions to the kernel, and provide such a kernel for Kemeny Rank Aggregation, improving a previous kernel for the problem.

We demonstrate how several of the algorithms and notions discussed in this thesis extend to a group of parameterized problems, improving published results for some other problems.
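For readers unfamiliar with Kemeny Rank Aggregation, the following brute-force sketch (my own illustration, not one of the thesis's parameterized algorithms) makes the objective explicit: the consensus ranking is the total order minimising the summed Kendall tau distance to the input votes.

```python
from itertools import permutations, combinations

def kendall_tau(order_a, order_b):
    """Number of candidate pairs on which the two total orders disagree."""
    pos_a = {c: i for i, c in enumerate(order_a)}
    pos_b = {c: i for i, c in enumerate(order_b)}
    return sum(1 for x, y in combinations(order_a, 2)
               if (pos_a[x] < pos_a[y]) != (pos_b[x] < pos_b[y]))

def kemeny_consensus(votes):
    """Exhaustive Kemeny aggregation: the ranking closest to all votes in total
    Kendall tau distance. Exponential in the number of candidates, so only for
    tiny instances; parameterized algorithms aim to avoid this blow-up."""
    candidates = votes[0]
    return min(permutations(candidates),
               key=lambda r: sum(kendall_tau(r, v) for v in votes))

votes = [("a", "b", "c", "d"),
         ("a", "c", "b", "d"),
         ("b", "a", "c", "d")]
print(kemeny_consensus(votes))  # ('a', 'b', 'c', 'd')
```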
|
80 |
Exploiting the Computational Power of Ternary Content Addressable Memory. Tirdad, Kamran (January 2011)
Ternary Content Addressable Memory, or TCAM for short, is a special type of memory that can execute a certain set of operations in parallel on all of its words. Because of its power consumption and relatively small storage capacity, it has so far only been used in specialised environments. Over the past few years its cost has decreased and its storage capacity has increased significantly, and these exponential trends are continuing, so it can now be used in more general environments and for larger problems. In this research we study how to exploit its computational power in order to speed up fundamental problems, and needless to say we have barely scratched the surface. The main problems addressed in this research are Boolean matrix multiplication, approximate subset queries using Bloom filters, fixed-universe priority queues, and network flow classification. For Boolean matrix multiplication, our simple algorithm has a running time of O(d N^2 / w), where N is the size of the square matrices, w is the number of bits in each word of TCAM, and d is the maximum number of ones in a row of one of the matrices. For the fixed-universe priority queue problem we propose two data structures: one with constant time complexity and O((1/ε) n U^ε) space, and the other with linear space and O((lg lg U)/(lg lg lg U)) amortized time complexity, which beats the best possible data structure in the RAM model, namely y-fast tries. Considering each word of TCAM as a Bloom filter, we modify the Bloom filter's hash functions and propose a data structure that uses the information capacity of each TCAM word more efficiently by exploiting the co-occurrence probability of possible members. Finally, in the last chapter we propose a novel technique for network flow classification using TCAM.
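Although the TCAM hardware itself cannot be reproduced here, the word-parallel idea behind the stated O(d N^2 / w) bound can be sketched in software by packing rows of B into machine-word-like bitmasks; this is an illustrative analogue of my own, not the thesis's TCAM algorithm.

```python
# Boolean matrix product computed with word-level parallelism: each row of B is
# packed into a single machine-word-like bitmask (a Python int), so OR-ing one
# row costs O(N/w) word operations. For a row of A with at most d ones, the
# whole product takes O(d * N^2 / w) word operations, mirroring the bound above.

def pack_rows(matrix):
    """Pack each 0/1 row into an int bitmask, where bit j represents column j."""
    return [sum(bit << j for j, bit in enumerate(row)) for row in matrix]

def boolean_matmul(a, b):
    n = len(a)
    b_rows = pack_rows(b)
    c_rows = []
    for i in range(n):
        acc = 0
        for k in range(n):
            if a[i][k]:                  # only the d ones of row i contribute
                acc |= b_rows[k]         # one word-parallel OR per set bit
        c_rows.append([(acc >> j) & 1 for j in range(n)])
    return c_rows

A = [[1, 0, 1],
     [0, 0, 0],
     [0, 1, 0]]
B = [[0, 1, 0],
     [1, 0, 0],
     [1, 1, 1]]
print(boolean_matmul(A, B))  # [[1, 1, 1], [0, 0, 0], [1, 0, 0]]
```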
|