181.
Effective solutions of recursive domain equations. Kanda, Akira, January 1980.
Solving recursive domain equations is one of the main concerns in the denotational semantics of programming languages and in the algebraic specification of data types. Because these equations are solved to specify computable objects, effective solutions are needed. Though general methods for obtaining solutions are well known, the effectiveness of the solutions has not been explicitly investigated.* The main objective of this dissertation is to provide a categorical method for obtaining effective solutions of recursive domain equations. From this we provide effective models of denotational semantics and algebraic data types. The importance of considering the effectiveness of solutions is two-fold. First, we can guarantee that an implementation exists for every denotational specification of a programming language and every algebraic data type specification. Second, we obtain an instance of a computability theory in which higher-type and even infinite-type computability can be discussed very smoothly. *While this dissertation was being written, Plotkin and Smyth obtained an alternative to our method, which works only for effectively given categories with universal objects.
182.
Energy conscious adaptive security. Taramonli, Chryssanthi, January 2014.
The rapid growth of information and communication systems in recent years has brought with it an increased need for security. Meanwhile, encryption, which constitutes the basis of the majority of security schemes, may imply a significant amount of energy consumption. Encryption algorithms, depending on their complexity, may consume significant computing resources, such as memory, battery power and processing time. Low-energy encryption is therefore crucial, especially for battery-powered and passively powered devices, and it is of great importance to achieve the desired level of security at the lowest cost in energy. The approach advocated in this thesis is motivated by the lack of attention to energy consumption in existing security schemes. It investigates the optimum security-mode selection in terms of energy consumption, taking the security requirements into consideration, and proposes a model for energy-conscious adaptive security in communications. Stochastic and statistical methods are applied, namely reliability, concentration inequalities, regression analysis and betweenness centrality, to evaluate the performance of the security modes, and a novel adaptive system is proposed as a flexible decision-making tool for selecting the most efficient security mode at the lowest cost in energy. Several symmetric algorithms are simulated and the variation of four encryption parameters is examined in order to select the most energy-efficient algorithm. The proposed security approach is twofold, as it can dynamically adjust either the encryption parameters or the energy consumption, according to the energy limitations or the severity of the requested service.
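The mode-selection idea lends itself to a small sketch. The following is a hypothetical illustration, not the thesis model: the algorithm names, security levels and per-byte energy figures are all invented, and a real system would measure costs on the target device.

```python
# Hypothetical mode table: (name, security level 1-5, energy cost in
# microjoules per byte). All figures are invented for illustration.
SECURITY_MODES = [
    ("AES-128-CTR", 3, 0.9),
    ("AES-256-GCM", 5, 1.6),
    ("XTEA-CBC",    2, 0.5),
    ("RC4",         1, 0.4),
]

def select_mode(required_level, battery_fraction):
    """Cheapest mode meeting the requirement; when the battery is low the
    requirement is relaxed by one level (the adaptive behaviour)."""
    level = required_level - 1 if battery_fraction < 0.2 else required_level
    candidates = [m for m in SECURITY_MODES if m[1] >= level]
    return min(candidates, key=lambda m: m[2])

print(select_mode(3, 0.8)[0])  # full battery: AES-128-CTR
print(select_mode(3, 0.1)[0])  # low battery, relaxed requirement: XTEA-CBC
```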
183.
Reasoning about systems with evolving structure. Philippou, Anna, January 1996.
This thesis is concerned with the specification and verification of mobile systems, i.e. systems with dynamically-evolving communication topologies. The expressiveness and applicability of the πυ-calculus, an extension of the π-calculus with first-order data, is investigated for describing and reasoning about mobile systems. The theory of confluence and determinacy in the πυ-calculus is studied, with emphasis on results and techniques which facilitate process verification. The utility of the calculus for giving descriptions which are precise, natural and amenable to rigorous analysis is illustrated in three applications. First, the behaviour of a distributed protocol is analysed. The use of a mobile calculus makes it possible to capture important intuitions concerning the behaviour of the algorithm; the theory of confluence plays a central role in its correctness proof. Secondly, an analysis of concurrent operations on a dynamic search structure, the B-tree, is carried out. This exploits results concerning a notion of partial confluence, which allows the analysis of classes of systems in which interaction between components follows a certain discipline. Finally, the πυ-calculus is used to give a semantic definition for a concurrent-object programming language, and it is shown how this definition can serve as a basis for reasoning about systems prescribed by programs. Syntactic conditions on programs are isolated and shown to guarantee determinacy. Transformation rules which increase the scope for concurrent activity within programs without changing their observable behaviour are given and their soundness is proved.
184.
Data mining of vehicle telemetry data. Taylor, Phillip, January 2015.
Driving is a safety-critical task that requires a high level of attention and workload from the driver. Despite this, people often perform secondary tasks such as eating or using a mobile phone, which increase workload levels and divert cognitive and physical attention from the primary task of driving. As well as these distractions, the driver may also be overloaded for other reasons, such as dealing with an incident on the road or holding conversations in the car. One solution to this distraction problem is to limit the functionality of in-car devices while the driver is overloaded. This can take the form of withholding an incoming phone call or delaying the display of a non-urgent piece of information about the vehicle. In order to design and build these adaptations in the car, we must first have an understanding of the driver's current level of workload. Traditionally, driver workload has been monitored using physiological sensors or camera systems in the vehicle. However, physiological systems are often intrusive, and camera systems can be expensive and are unreliable in poor light conditions. It is important, therefore, to use methods that are non-intrusive, inexpensive and robust, such as sensors already installed on the car and accessible via the Controller Area Network (CAN) bus. This thesis presents a data mining methodology for this problem, as well as for others in domains with similar types of data, such as human activity monitoring. It focuses on the variable selection stage of the data mining process, where inputs are chosen for models to learn from and make inferences. Selecting inputs from vehicle telemetry data is challenging because there are many irrelevant variables with a high level of redundancy. Furthermore, data in this domain often contain biases because only relatively small amounts can be collected and processed, leading to some variables appearing more relevant to the classification task than they really are.
Over the course of this thesis, a detailed variable selection framework that addresses these issues for telemetry data is developed. A novel blocked permutation method is developed and applied to mitigate biases when selecting variables from potentially biased temporal data. This approach is computationally infeasible when variable redundancies are also considered, and so a novel permutation redundancy measure with similar properties is proposed. Finally, a known redundancy structure between features in telemetry data is used to enhance the feature selection process in two ways. First, the benefits of performing raw signal selection, feature extraction, and feature selection in different orders are investigated. Second, a two-stage variable selection framework is proposed and the two permutation-based methods are combined. Throughout the thesis, it is shown through classification evaluations and inspection of the selected features that these permutation-based methods are appropriate for selecting features from CAN-bus data.
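The blocked permutation idea can be conveyed with a toy sketch. This is illustrative code, not the thesis implementation: shuffling a temporal signal in contiguous blocks destroys its long-range alignment with the target while preserving short-range structure within each block, so a relevance score recomputed after blocked permutation provides a less biased baseline than element-wise shuffling.

```python
import random

def blocked_permute(signal, block_size, rng):
    # Shuffle contiguous blocks, preserving order within each block.
    blocks = [signal[i:i + block_size] for i in range(0, len(signal), block_size)]
    rng.shuffle(blocks)
    return [x for block in blocks for x in block]

def relevance(signal, target):
    # Crude relevance score: absolute Pearson correlation.
    n = len(signal)
    mx, my = sum(signal) / n, sum(target) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(signal, target))
    sx = sum((x - mx) ** 2 for x in signal) ** 0.5
    sy = sum((y - my) ** 2 for y in target) ** 0.5
    return abs(cov / (sx * sy))

rng = random.Random(0)
target = list(range(400))                        # slowly varying "workload"
signal = [t + rng.gauss(0, 5) for t in target]   # relevant, autocorrelated input

original = relevance(signal, target)
permuted = relevance(blocked_permute(signal, 20, rng), target)
print(original > permuted)  # relevance drops once global alignment is destroyed
```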
185.
A framework for adaptive personalised e-advertisements. Al Qudah, Dana, January 2016.
The art of personalised e-advertising relies on attracting the user's attention to the recommended product, as it relates to their taste, interests and data. Whilst in practice companies attempt various forms of personalisation, research on personalised e-advertising is rare and seldom rooted in solid theory. Adaptive hypermedia (AH) techniques have contributed to the development of personalised tools for adaptive content delivery, mostly in the educational domain. This study explores the use of these theories and techniques in a specific field: adaptive e-advertisements. This is accomplished firstly by structuring a theoretical framework that roots adaptive hypermedia in the domain of e-advertising, and then using this theoretical framework as the base for implementing and evaluating an adaptive e-advertisement system called "MyAds". The novelty of this approach lies in a systematic design and evaluation based on the adaptive hypermedia taxonomy. In particular, this thesis uses a user-centric methodology to design and evaluate the proposed approach. It also reports on evaluations that investigated users' opinions on the appropriate design of MyAds, and on a further set of evaluations of users' perceptions of the implemented system, allowing for a reflection on users' acceptance of e-advertising. The results from both implicit and explicit feedback indicated that users found MyAds acceptable and agreed that the user modelling and AH features implemented within the system, through their different personalisation methods, contributed to acceptance of the e-advertisement experience.
186.
Parameterized complexity: permutation patterns, graph arrangements, and matroid parameters. Mach, Lukáš, January 2015.
The theory of parameterized complexity is an area of computer science focusing on the refined analysis of hard algorithmic problems. In this thesis, we give two complexity lower bounds and define two novel parameters for matroids. The first is a kernelization lower bound for the Permutation Pattern Matching problem, which asks whether one permutation occurs as a pattern inside another. Our result states that unless a certain (widely believed) complexity hypothesis fails, there is no polynomial-time algorithm that takes an instance of the Permutation Pattern Matching problem and produces an equivalent instance of size bounded by a polynomial in the length of the pattern. Obtaining such lower bounds was posed by Stéphane Vialette as an open problem. We then prove a subexponential lower bound on the computational complexity of the Optimum Linear Arrangement problem, assuming a conjecture about the computational complexity of a variant of the Min Bisection problem. The two matroid parameters introduced in this work are called amalgam-width and branch-depth. Amalgam-width is a generalization of branch-width that allows for algorithmic applications even for matroids that are not finitely representable. We prove several results, including a theorem stating that deciding monadic second-order properties is fixed-parameter tractable for general matroids parameterized by amalgam-width. Branch-depth, the other newly introduced matroid parameter, is an analogue of graph tree-depth. We prove several statements relating graph tree-depth and matroid branch-depth, and present an algorithm that efficiently approximates the value of the parameter on a general oracle-given matroid.
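The Permutation Pattern Matching problem itself is easy to state in code. The following brute-force sketch (ours, not the thesis's) checks whether a text permutation contains a subsequence order-isomorphic to the pattern; its exponential running time in the pattern length is precisely why algorithmic and kernelization lower bounds for the problem are of interest.

```python
from itertools import combinations

def contains_pattern(text, pattern):
    """Does `text` contain a subsequence order-isomorphic to `pattern`?"""
    def order(seq):
        # Relative-order signature: e.g. (3, 1, 4) and (2, 1, 3) both map
        # to (1, 0, 2), so they represent the same pattern.
        ranks = sorted(range(len(seq)), key=lambda i: seq[i])
        out = [0] * len(seq)
        for rank, idx in enumerate(ranks):
            out[idx] = rank
        return tuple(out)

    target = order(pattern)
    # Naive search over all length-k subsequences: exponential in k.
    return any(order(sub) == target for sub in combinations(text, len(pattern)))

print(contains_pattern([5, 3, 1, 4, 2], [2, 3, 1]))  # True: e.g. 3, 4, 2
print(contains_pattern([1, 2, 3, 4, 5], [2, 1]))     # False: no descent
```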
187.
Performance modelling and optimisation of inertial confinement fusion simulation codes. Bird, Robert F., January 2016.
Legacy code performance has failed to keep pace with that of modern hardware. Many new hardware features remain under-utilised, with the majority of code bases still unable to make use of accelerated or heterogeneous architectures. Code maintainers now accept that they can no longer rely solely on hardware improvements to drive code performance, and that changes need to be made at the software-engineering level. The principal focus of the work presented in this thesis is an analysis of the changes legacy Inertial Confinement Fusion (ICF) codes need to make in order to use current and future parallel architectures efficiently. We discuss the process of developing a performance model, and demonstrate the ability of such a model to make accurate predictions about the performance of code variants on a range of architectures. We build on the knowledge gained from this process, and examine how Particle-in-Cell (PIC) codes must change in order to reach the levels of portable and future-proof performance needed to leverage the capabilities of modern hardware. As part of this investigation, we present an OpenCL port of the legacy code EPOCH, as well as a fully featured mini-app representing EPOCH. Finally, as a direct consequence of these investigations, we apply these performance optimisations to the production version of EPOCH, culminating in a speedup of over 2x for the core algorithm.
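The flavour of an analytic performance model can be conveyed with a toy sketch. This is not the thesis model: the cost terms and constants below are invented, but the structure, a compute term that scales down with core count plus a communication term that grows with it, is typical of such models.

```python
# Toy analytic model of a particle-push loop (all constants invented):
# runtime = compute time shared across cores + per-step communication
# cost that grows with the number of participating processes.
def predicted_runtime(n_particles, n_steps, n_cores,
                      t_push=2e-7,    # seconds per particle push (assumed)
                      t_halo=5e-4):   # per-step halo-exchange cost (assumed)
    compute = n_particles * n_steps * t_push / n_cores
    comm = n_steps * t_halo * n_cores ** 0.5
    return compute + comm

# Doubling the core count less than halves the runtime once communication
# starts to matter, the kind of behaviour such models expose.
t16 = predicted_runtime(10_000_000, 100, 16)
t32 = predicted_runtime(10_000_000, 100, 32)
print(t32 < t16, t32 > t16 / 2)  # faster, but not twice as fast
```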
188.
Developing energy-aware workload offloading frameworks in mobile cloud computing. Gao, Bo, January 2015.
Mobile cloud computing is an emerging field of research that aims to provide a platform on which intelligent and feature-rich applications are delivered to the user at any time and anywhere. Computation offload between mobile and cloud plays a key role in this vision and ensures that the integration between mobile and cloud is both seamless and energy-efficient. In this thesis, we develop a suite of energy-aware workload offloading frameworks to support the efficient execution of mobile workflows on a mobile cloud platform. We start by looking at two energy objectives of a mobile cloud platform. While the first objective aims to minimise the overall energy cost of the platform, the second aims at the longevity of the platform, taking into account the residual battery power of each device. We construct optimisation models for both objectives and develop two efficient algorithms to approximate the optimal solution. According to simulation results, our greedy autonomous offload (GAO) algorithm efficiently produces allocation schemes that are close to optimal. Next, we look at the task allocation problem from a workflow's perspective and develop energy-aware offloading strategies for time-constrained mobile workflows. We demonstrate the effect that software and hardware characteristics have on the offload-efficiency of mobile workflows using a workflow-oriented greedy autonomous offload (WGAO) algorithm, an extension of the GAO algorithm. Thirdly, we propose a novel network I/O model to describe the bandwidth dependencies and the allocation problem in mobile networks. This model lays the foundation for further developments such as the cost-based and adaptive bandwidth allocation schemes also presented in this thesis. Lastly, we apply a game-theoretical approach to model the non-cooperative behaviour of mobile cloud applications that reside on the same device. A mixed-strategy Nash equilibrium is derived for the offload game, which further quantifies the price of anarchy of the system.
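A greedy offload decision of this general kind can be sketched as follows. This is a hypothetical illustration in the spirit of a greedy energy-aware allocator, not the GAO algorithm from the thesis: the task names and energy coefficients are invented.

```python
def greedy_offload(tasks, e_cpu_per_mcycle=0.8, e_radio_per_kb=2.0):
    """tasks: list of (name, megacycles, data_kb).
    Each task runs wherever it costs the device less energy: locally
    (CPU cost) or in the cloud (radio cost of shipping its data)."""
    plan = {}
    for name, mcycles, data_kb in tasks:
        local_cost = mcycles * e_cpu_per_mcycle     # millijoules, invented scale
        offload_cost = data_kb * e_radio_per_kb     # transfer energy only
        plan[name] = "cloud" if offload_cost < local_cost else "mobile"
    return plan

tasks = [
    ("face_detect", 500, 40),   # heavy compute, small upload
    ("ui_render",   20, 300),   # light compute, large data
]
print(greedy_offload(tasks))  # face_detect goes to the cloud, ui_render stays
```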
189.
A pedagogical framework for enhancing skills of references and citations. Toor, Saba K., January 2015.
References and citations form a basis for scientific research and the creation and discovery of knowledge. However, literature reviews have indicated that many errors are present in scholarly papers published in journals and conferences, as well as in books and articles. Furthermore, the coursework of students in higher education institutions contains mistakes in reference lists and in-text citations. The problems that stem from these inaccuracies are multifarious, ranging from plagiarism and failure to acknowledge sources, to problems in information access and retrieval, and inaccuracies in the ranking of articles and journals, thus hindering the growth of knowledge. This research was initiated in response to the importance of this global issue. Its first objective was to determine the root causes of the mistakes and inadequacies in references and citations within the academic arena. We chose the academic arena because it is the training ground for education and scientific research. Furthermore, through this research we sought a unique, practical solution to this issue. In order to conduct a thorough and comprehensive investigation into the above-mentioned problems, and to achieve the aim of proposing a suitable solution, we divided the research into three main phases. The first, investigative, phase began with a thorough literature review, from which the research questions were formed. Both quantitative and qualitative methods were adopted to investigate the causes of erroneous references and citations, with a triangulation methodology used to obtain reliable and comprehensive information. The data gathered were analysed, highlighting core issues such as inadequate feedback and training in the referencing task. In the second, solution, phase, a pedagogical framework was proposed to resolve the issues reported during the investigative phase.
The conceptual framework was built on the principles of learning theories and spaced repetition theory. To evaluate this framework, experiments were conducted in the third and final, evaluation, phase of the research. Two types of experiment were conducted: the first in a traditional classroom environment, and the second with students who chose to work independently (without tutors). Data from these experiments were collected and analysed using both quantitative and qualitative methods. This research provides insight into the causes of errors in the referencing tasks of students in higher education. It indicates that reform in the pedagogy for teaching this skill is needed, and a unique pedagogy is presented. Results from the experiments indicate that the proposed operational model improves referencing skills.
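The spaced repetition principle underpinning the framework can be sketched briefly. This toy scheduler (intervals and multiplier invented, not taken from the thesis) widens the review interval after each correct recall and resets it after a lapse.

```python
def next_interval(days, recalled, multiplier=2.5, first=1):
    # Correct recall widens the review interval; a lapse resets it.
    if not recalled:
        return first
    return int(days * multiplier)

# A student practising citation rules: reviews drift further apart
# while answers stay correct, then restart after a mistake.
interval, schedule = 1, []
for recalled in [True, True, True, False, True]:
    interval = next_interval(interval, recalled)
    schedule.append(interval)
print(schedule)  # -> [2, 5, 12, 1, 2]
```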
190.
Simulating collective motion from particles to birds. Miller, Adam Morrison, January 2015.
The main work of this thesis is the construction of a 3D computer model of animal flocking based on vision. The model takes an input additional to those usually considered in traditional models: the projection of all other flock members onto an individual's field of view. Making 2D models is easy (in fact four new ones are included in this thesis), but we should be drawing parallels with experimental data on behaviour in animal systems, and we should be cautious indeed when drawing conclusions based on such models. It is common in the literature not to compare model behaviours with measurable quantities of natural flocks; this work makes a concerted effort to do so in the case of the 3D model. A direct comparison was made between the simulations and an empirical study of starling flocks of the scaling of the maximum distance through the flock with the number of flock members, for which the agreement was very good. Other flock properties were compared with the natural flocks, but with less satisfactory results. A careful literature survey was made to investigate, and ultimately support, the biological plausibility of the 3D projection model; biological and physiological plausibility is a factor not often considered by computational modellers. A series of novel and related 2D computer flocking models were investigated in the hope of finding a single flocking rule that could manifest the most important features of collective motion and thereby be highly parsimonious. The final part of this thesis concerns a 2D computer model of photothermophoresis based on Langevin dynamics, which may make it possible to find evidence of a density transition found in the continuum model. There was some evidence that a transition from a transparent diffuse state to an opaque compact one may exist for the discrete particle simulation.
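A minimal flocking update can be sketched to show the kind of single-rule 2D model discussed above. This toy alignment-only model (parameters invented, and far simpler than the thesis's vision-based 3D model) steers each agent towards the mean heading of neighbours within a perception radius; a global order parameter measures how aligned the flock becomes.

```python
import math
import random

def step(positions, headings, radius=1.0, speed=0.05):
    """One noise-free alignment update: each agent adopts the mean
    heading of all neighbours (itself included) within `radius`."""
    new_headings = []
    for xi, yi in positions:
        sx = sy = 0.0
        for (xj, yj), hj in zip(positions, headings):
            if (xj - xi) ** 2 + (yj - yi) ** 2 <= radius ** 2:
                sx += math.cos(hj)
                sy += math.sin(hj)
        new_headings.append(math.atan2(sy, sx))
    new_positions = [(x + speed * math.cos(h), y + speed * math.sin(h))
                     for (x, y), h in zip(positions, new_headings)]
    return new_positions, new_headings

rng = random.Random(0)
pos = [(rng.random(), rng.random()) for _ in range(30)]
hdg = [rng.uniform(-math.pi, math.pi) for _ in range(30)]
for _ in range(50):
    pos, hdg = step(pos, hdg)

# Order parameter: 1.0 means all headings identical, ~0 means disordered.
order = math.hypot(sum(math.cos(h) for h in hdg),
                   sum(math.sin(h) for h in hdg)) / len(hdg)
print(order > 0.9)  # the flock aligns
```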