11. Higher order mutation testing. Jia, Y. January 2013.
Mutation testing is a fault-based software testing technique that has been studied widely for over three decades. To date, work in this field has focused largely on first order mutants because it is believed that higher order mutation testing is too computationally expensive to be practical. This thesis argues that some higher order mutants are potentially better able to simulate real world faults and to reveal insights into programming bugs than the restricted class of first order mutants. This thesis proposes a higher order mutation testing paradigm which combines valuable higher order mutants with non-trivial first order mutants for mutation testing. To overcome the exponential increase in the number of higher order mutants, a search process is proposed that seeks fit mutants (both first and higher order) in the space of all possible mutants. A fault-based higher order mutant classification scheme is introduced. Based on different types of fault interaction, this approach classifies higher order mutants into four categories: expected, worsening, fault masking and fault shifting. A search-based approach is then proposed for locating subsuming and strongly subsuming higher order mutants. These mutants are a subset of the fault masking and fault shifting classes of higher order mutants and are more difficult to kill than their constituent first order mutants. Finally, a hybrid test data generation approach is introduced, which combines dynamic symbolic execution and search-based software testing to generate strongly adequate test data that kills first and higher order mutants.
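As a concrete, hypothetical illustration of why a higher order mutant can be harder to kill than its constituent first order mutants, consider the sketch below; the example program, the mutation operators applied and the inputs discussed are assumptions made for illustration, not material from the thesis.

```python
# Original program under test.
def normalise(x):
    if x < 0:          # A: negate negative inputs
        x = -x
    if x > 100:        # B: cap at 100
        x = 100
    return x

# First order mutant m1: relational operator replacement at A (< replaced by >).
def normalise_m1(x):
    if x > 0:
        x = -x
    if x > 100:
        x = 100
    return x

# First order mutant m2: relational operator replacement at B (> replaced by <).
def normalise_m2(x):
    if x < 0:
        x = -x
    if x < 100:
        x = 100
    return x

# Second order mutant m12 combines both faults.
def normalise_m12(x):
    if x > 0:
        x = -x
    if x < 100:
        x = 100
    return x

# The input 150 kills both first order mutants but not the combined mutant:
# the first fault (negating a positive input) masks the effect of the second.
assert normalise(150) == 100
assert normalise_m1(150) != 100      # killed
assert normalise_m2(150) != 100      # killed
assert normalise_m12(150) == 100     # survives: a fault-masking interaction
```

A test suite that kills every first order mutant can therefore still miss such interactions, which is the motivation for searching for subsuming higher order mutants.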
12. Transactional concurrency control for resource constrained applications. Solaiman, Kamal Mabrok Moftah. January 2014.
Transactions have long been used as a mechanism for ensuring the consistency of databases. Databases, and associated transactional approaches, have always been an active area of research as different application domains and computing architectures have placed ever more elaborate requirements on shared data access. As transactions typically provide consistency at the expense of timeliness (abort/retry) and resources (duplicated shared data and locking), there have been substantial efforts to limit these two aspects of transactions while still satisfying application requirements. In environments where clients are geographically distant from a database, the consistency/performance trade-off becomes acute, as any retrieval of data over a network is not only expensive but also relatively slow compared to co-located client/database systems. Furthermore, for battery powered clients the increased overhead of transactions can also be viewed as a significant power overhead. However, for all their drawbacks, transactions do provide the data consistency that is a requirement for many application types. In this thesis we explore the solution space related to timely transactional systems for remote clients and centralised databases, with a focus on providing a solution that, when compared to others' work in this domain: (a) maintains consistency; (b) lowers latency; and (c) improves throughput. To achieve this we revisit a technique first developed to decrease disk access times via local caching of state (for aborted transactions) to tackle the problems prevalent in real-time databases. We demonstrate that such a technique (rerun) allows a significant change in the typical structure of a transaction (one never before considered, even in rerun systems). Such a change brings significant performance gains not only in the traditional rerun local database solution space, but also in the distributed solution space. A byproduct of our improvements, one can argue, is also a "greener" solution: reduced transaction time coupled with improved throughput affords longer battery life for mobile devices.
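The abstract does not spell out the rerun mechanism, so the following is only a generic sketch of the underlying idea it builds on: an optimistic transaction that caches its reads locally so that, after an abort, the rerun re-fetches only the items found to be stale rather than everything over the network. The store, the versioning scheme and the retry policy are all assumptions made for illustration, not the protocol developed in the thesis.

```python
# Illustrative sketch of rerunning an aborted transaction against cached reads.

class VersionedStore:
    """Toy server-side store mapping key -> (value, version)."""
    def __init__(self):
        self.data = {}

    def read(self, key):                       # a (slow) remote round trip
        return self.data.get(key, (None, 0))

    def try_commit(self, read_versions, writes):
        stale = {k for k, v in read_versions.items()
                 if self.data.get(k, (None, 0))[1] != v}
        if stale:
            return False, stale                # abort: report the stale items
        for key, value in writes.items():
            _, version = self.data.get(key, (None, 0))
            self.data[key] = (value, version + 1)
        return True, set()


def run_transaction(store, body, max_attempts=5):
    cache = {}                                 # cached reads survive an abort
    for _ in range(max_attempts):
        read_versions, writes = {}, {}

        def read(key):
            if key not in cache:               # remote fetch only on a miss
                cache[key] = store.read(key)
            value, version = cache[key]
            read_versions[key] = version
            return value

        body(read, lambda k, v: writes.__setitem__(k, v))
        ok, stale = store.try_commit(read_versions, writes)
        if ok:
            return True
        for key in stale:                      # rerun re-fetches only stale items
            cache.pop(key, None)
    return False


# Example: deduct 10 from a remotely stored balance.
store = VersionedStore()
store.data["balance"] = (100, 1)
run_transaction(store, lambda read, write: write("balance", read("balance") - 10))
```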
13. New strategies for automated random testing. Ahmad, Mian Asbat. January 2014.
The ever increasing reliance on software-intensive systems is driving research to discover software faults more effectively and more efficiently. Despite intensive research, very few approaches have studied and used knowledge about fault domains to improve the testing or the feedback given to developers. The present thesis addresses this shortcoming: it leverages fault co-localization in a new random testing strategy called Dirt Spot Sweeping Random (DSSR), and it presents two further strategies, Automated Discovery of Failure Domain (ADFD) and Automated Discovery of Failure Domain+ (ADFD+). These improve the feedback given to developers by deducing more information about the failure domain (i.e. point, block or strip) in an automated way. The DSSR strategy adds the value causing a failure, together with its neighbouring values, to the list of interesting values in order to explore the underlying failure domain. A comparative evaluation showed significantly better performance of DSSR over the Random and Random+ strategies. The ADFD strategy finds failures and failure domains and presents the pass and fail domains in graphical form. Results obtained by evaluating error-seeded numerical programs indicated highly effective performance of the ADFD strategy. The ADFD+ strategy extends ADFD with respect to its algorithm and the graphical presentation of failure domains. In comparison with Randoop, the ADFD+ strategy successfully detected all failures and failure domains, while Randoop identified individual failures but could not detect failure domains. The ADFD and ADFD+ techniques were enhanced by integrating the automatic invariant detector Daikon, and the precision of identifying failure domains was determined through extensive experimental evaluation of real world Java projects contained in the Qualitas Corpus database. The analyses of results, cross-checked by manual testing, indicated that the ADFD and ADFD+ techniques are highly effective in providing assistance but are not an alternative to manual testing.
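A minimal sketch of the core DSSR idea as described above; the selection probability, the neighbourhood radius and the integer input domain are illustrative assumptions rather than the thesis's actual parameters.

```python
import random

def dssr(test_oracle, domain=(-1000, 1000), budget=1000, radius=3, bias=0.5):
    """Plain random testing, except that once an input triggers a failure,
    that value and its neighbours join a pool of 'interesting' values from
    which later test inputs are drawn with probability `bias`, sweeping the
    surrounding 'dirt spot' (point, block or strip failure domain)."""
    interesting, failures = [], set()
    lo, hi = domain
    for _ in range(budget):
        if interesting and random.random() < bias:
            x = random.choice(interesting)
        else:
            x = random.randint(lo, hi)
        try:
            test_oracle(x)
        except AssertionError:
            failures.add(x)
            interesting.extend(range(x - radius, x + radius + 1))
    return failures

# Example oracle with a seeded fault on the block 5 <= x <= 8.
def oracle(x):
    assert not (5 <= x <= 8), "seeded fault"

print(sorted(dssr(oracle, domain=(0, 500), budget=2000)))
```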
14. Model-driven engineering for analysing, modelling and comparing cloud computing service level agreements. Alkandari, Fatima A. A. A. January 2014.
In cloud computing, service level agreements (SLAs) are critical, and underpin a pay-per-consumption business model. Different cloud providers offer different services (of different qualities) on demand, and their pre-defined SLAs are used both to advertise services and to define contracts with consumers. However, different providers express their SLAs using their own vocabularies, typically defined in terms of their own technologies and unique selling points. This can make it difficult for consumers to compare cloud SLAs systematically and precisely. We propose a modelling framework that provides mechanisms that can be used systematically and semi-automatically to model and compare cloud SLAs and consumer requirements. Using MDE principles and tools, we propose a metamodel for cloud provider SLAs and cloud consumer requirements, and thereafter demonstrate how to use model comparison technology for automating different matching processes, thus helping consumers to choose between different providers. We also demonstrate how the matching process can be interactively configured to take into account consumer preferences, via weighting models. The resulting framework can thus be used to better automate high-level consumer interactions with disparate cloud computing technology platforms.
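The abstract leaves the matching process abstract, so the sketch below only illustrates the general shape of weighted matching of consumer requirements against provider SLAs over a shared vocabulary; the attribute names, scoring rule and weights are invented for the example and are not the thesis's metamodel.

```python
# Hypothetical weighted matching of consumer requirements against provider SLAs
# expressed over a common vocabulary (attribute -> numeric value).

def match_score(requirements, sla, weights):
    """Fraction of (weighted) requirements that the SLA meets or exceeds."""
    score = total = 0.0
    for attr, required in requirements.items():
        w = weights.get(attr, 1.0)
        total += w
        offered = sla.get(attr)
        if offered is not None and offered >= required:
            score += w
    return score / total if total else 0.0

consumer = {"availability_pct": 99.9, "storage_gb": 500, "support_hours": 12}
providers = {
    "provider_a": {"availability_pct": 99.95, "storage_gb": 250, "support_hours": 24},
    "provider_b": {"availability_pct": 99.5, "storage_gb": 1000, "support_hours": 24},
}
preferences = {"availability_pct": 1.0, "storage_gb": 0.5, "support_hours": 0.2}

# Rank providers by how well their SLA matches the (weighted) requirements.
ranking = sorted(providers,
                 key=lambda p: match_score(consumer, providers[p], preferences),
                 reverse=True)
print(ranking)
```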
15. A coalgebraic semantics for imperative programming languages. Abou-Saleh, Faris. January 2014.
In the theory of programming languages, one often takes two complementary perspectives. In operational semantics, one defines and reasons about the behaviour of programs; and in denotational semantics, one abstracts away implementation details, and reasons about programs as mathematical objects or denotations. The denotational semantics should be compositional, meaning that denotations of programs are determined by the denotations of their parts. It should also be adequate with respect to operational equivalence: programs with the same denotation should be behaviourally indistinguishable. One often has to prove adequacy and compositionality independently for different languages, and the proofs are often laborious and repetitive. These proofs were provided systematically in the context of process algebras by the mathematical operational semantics framework of Turi and Plotkin – which represented transition systems as coalgebras, and program syntax by free algebras; operational specifications were given by distributive laws of syntax over behaviour. By framing the semantics on this abstract level, one derives denotational and operational semantics which are guaranteed to be adequate and compositional for a wide variety of examples. However, despite speculation on the possibility, it is hard to apply the framework to programming languages, because one obtains undesirably fine-grained behavioural equivalences, and unconventional notions of operational semantics. Moreover, the behaviour of these languages is often formalised in a different way – such as computational effects, which may be thought of as an interface between programs and external factors such as non-determinism or a variable store; and comodels, or transition systems which implement these effects. This thesis adapts the mathematical operational semantics framework to provide semantics for various classes of programming languages. After identifying the need for such an adaptation, we show how program behaviour may be characterised by final coalgebras in suitably order-enriched Kleisli categories. We define both operational and denotational semantics, first for languages with syntactic effects, and then for languages with effects and/or comodels given by a Lawvere theory. To ensure adequacy and compositionality, we define concrete and abstract operational rule-formats for these languages, based on the idea of evaluation-in-context; we give syntactic and then categorical proofs that those properties are guaranteed by operational specifications in these rule-formats.
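For readers unfamiliar with the framework, the two properties named above and the shape of a Turi–Plotkin operational specification can be stated compactly; the notation below is the standard textbook formulation, not taken from the thesis.

```latex
% Compositionality and adequacy for a denotation map [[-]] into a final
% coalgebra, where ~ is operational (behavioural) equivalence and sigma
% ranges over the language's syntactic constructs:
\[
  [\![\,\sigma(p_1,\dots,p_n)\,]\!]
    \;=\; \sigma_{\mathcal{D}}\bigl([\![p_1]\!],\dots,[\![p_n]\!]\bigr)
  \qquad\text{and}\qquad
  [\![p]\!] = [\![q]\!] \;\Longrightarrow\; p \sim q .
\]
% In the Turi-Plotkin setting, an operational specification is a distributive
% law of syntax over behaviour, for example an abstract GSOS law
\[
  \lambda \;:\; \Sigma(\mathrm{Id} \times B) \Longrightarrow B\,\Sigma^{*},
\]
% where Sigma is the syntax functor, B the behaviour functor and Sigma^* the
% free monad over Sigma; both properties then hold by construction.
```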
16. Functional neuroimaging: a sparse modelling approach. Yan, Shulin. January 2014.
Developments in technology have enabled scientists to study brain function in an unprecedented way. Functional neuroimaging is the use of neuroimaging technologies to capture information about the state of a brain, with the goal of studying the relationship between mental functions and brain activity. One such technology is functional magnetic resonance imaging (fMRI), which produces a signal that can be used to create cross-sectional images of the brain. These images can then be used to measure brain activity in different regions. In fMRI recording there is a trade-off between spatial and temporal resolution. My first contribution in this thesis is a novel algorithm for generating cross-sectional images. This is a signal processing problem with high dimensionality but few measurements. My algorithm uses ideas from sparse modelling because variations in functional MR images are sparse over time in the wavelet domain. It enables high resolution images to be generated using fewer measurements. Sequences of functional MR images are recorded while subjects perform different tasks. The second contribution of this thesis is a machine learning technique to predict different tasks from the captured fMRI sequences. Existing methods perform poorly at this prediction task due to the curse of dimensionality. I overcome this problem by designing a novel sparse modelling method based on the assumption that the brain region active in response to a target task is sparse within the whole brain area. The final contribution of this thesis is the design of different assessment criteria for selecting the most relevant voxels to interpret the neural activity. The conventional selection method uses the assessment of predictive performance, resulting in many false positive selections due to the small number of samples. To overcome this problem, I introduce the concept of stability. My method selects the relevant voxels using assessments of both predictive performance and stability, which significantly reduces the selection error while maintaining the predictive performance.
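The abstract's appeal to sparsity in the wavelet domain corresponds, in its standard textbook form, to a compressed-sensing style reconstruction problem; the formulation below uses my own notation as an illustration and is not taken from the thesis.

```latex
% Reconstruction of an image x from few measurements y = Phi x + noise, under
% the assumption that x (or its variation over time) is sparse in a wavelet
% basis Psi, i.e. x = Psi w with most entries of w equal to zero:
\[
  \hat{w} \;=\; \arg\min_{w}\; \| w \|_{1}
  \quad\text{subject to}\quad
  \| y - \Phi \Psi\, w \|_{2} \;\le\; \varepsilon ,
  \qquad \hat{x} = \Psi\,\hat{w},
\]
% where Phi is the measurement (sampling) operator and epsilon bounds the
% measurement noise. Fewer measurements than unknowns can suffice because the
% l1 penalty favours sparse solutions.
```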
17. Uncertainty and social considerations for mobile assistive robot navigation. Correa Villanueva, Javier. January 2014.
Interest in mobile robots has increased over the past years. The wide range of possible applications, from vacuum cleaners to assistant robots, makes such robots an interesting solution to many everyday problems. A key requirement for the mass deployment of such robots is to ensure they can safely navigate around our daily living environments. A robot colliding with or bumping into a person may be, in some contexts, unacceptable. For example, if a robot working around elderly people collides with one of them, it may cause serious injuries. This thesis explores four major components required for effective robot navigation: sensing the static environment, detection and tracking of moving people, obstacle and people avoidance with uncertainty measurement, and basic social navigation considerations. First, to guarantee adherence to basic safety constraints, the sensors and algorithms required to measure the complex structure of our daily living environments are explored. Not only do the static components of the environment have to be measured, but so do any people present. A people detection and tracking algorithm, aimed at crowded environments, is proposed, thus enhancing the robot's perception capabilities. Our daily living environments present many inherent sources of uncertainty for robots, one of them arising from the robot's inability to know people's intentions as they move. To address this problem, a motion model that assumes unknown long-term intentions is proposed. This is used in conjunction with a novel uncertainty-aware local planner to create feasible trajectories. In social situations, the presence of groups of people cannot be neglected when navigating. To avoid interrupting groups of people, the robot first needs to be able to detect such groups. A group detector is proposed which relies on a set of gaze- and geometry-based features. Avoiding group disruption is finally incorporated into the navigation algorithm by taking into account the probability of disrupting a group's activities. The effectiveness of the four different components is evaluated using real-world and simulated data, demonstrating the benefits for mobile robot navigation.
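As a rough illustration of what an intention-agnostic motion model with growing uncertainty might look like: the constant-velocity assumption, the noise parameter and the risk score below are illustrative choices, not the model developed in the thesis.

```python
import numpy as np

def predict_person(position, velocity, horizon=3.0, dt=0.1, sigma=0.4):
    """Constant-velocity prediction of a person's position in which the
    positional covariance is inflated at every step, reflecting that the
    person's long-term intention is unknown."""
    pos = np.asarray(position, dtype=float)
    vel = np.asarray(velocity, dtype=float)
    cov = np.zeros((2, 2))
    prediction = []
    for _ in range(int(horizon / dt)):
        pos = pos + vel * dt
        cov = cov + (sigma * dt) ** 2 * np.eye(2)   # uncertainty accumulates
        prediction.append((pos.copy(), cov.copy()))
    return prediction

def risk(robot_pos, person_pos, person_cov):
    """Unnormalised Gaussian score of how closely the robot's position
    coincides with the person's predicted position; a local planner can
    penalise candidate trajectories by the sum of such scores."""
    diff = np.asarray(robot_pos, dtype=float) - person_pos
    m2 = diff @ np.linalg.inv(person_cov + 1e-9 * np.eye(2)) @ diff
    return float(np.exp(-0.5 * m2))

# Example: score a candidate robot waypoint against a person walking forward.
trace = predict_person(position=[0.0, 0.0], velocity=[1.0, 0.0])
print(max(risk([2.0, 0.3], p, c) for p, c in trace))
```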
18. Metastability: an emergent phenomenon in networks of spiking neurons. Bhowmik, David. January 2014.
It is widely recognised that different brain areas perform different specialised functions. However, it remains an open question how different brain areas coordinate with each other and give rise to global brain states and high-level cognition. Recent theories suggest that transient periods of synchronisation and desynchronisation provide a mechanism for dynamically integrating and forming coalitions of functionally related neural areas, and that at these times conditions are optimal for information transfer. Empirical evidence from human resting state networks has shown a tendency for multiple brain areas to synchronise for short amounts of time, and for different synchronous groups to appear at different times. In dynamical systems terms, this behaviour resembles metastability: an intrinsically driven movement between transient, attractor-like states. However, the underlying mechanism that gives rise to these observed phenomena remains unknown. The thesis first establishes that oscillating neural populations display a great amount of spectral complexity, with several rhythms temporally coexisting in the same and different structures. The thesis next explores inter-band frequency modulation between neural oscillators. The results show that oscillations in different neural populations, and in different frequency bands, modulate each other so as to change frequency. Further to this, the interaction of these fluctuating frequencies in the network as a whole is able to drive different neural populations towards episodes of synchrony. Finally, a symbiotic relationship between metastability and the underlying network structure is elucidated, in which plasticity, responding to the interactions between different neural areas, naturally forms modular small-world networks that in turn further promote metastability. This seemingly inevitable drive towards metastability in simulation suggests that it should also be present in biological brains. The conclusion drawn is that these key network characteristics, and the metastable dynamics they promote, facilitate versatile exploration, integration and communication between functionally related neural areas, and thereby support sophisticated cognitive processing in the brain.
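A common way to quantify the kind of metastability described here, used widely in the oscillator-network literature (and assumed here purely to illustrate the concept, not necessarily the thesis's exact measure), is the variance over time of the Kuramoto order parameter within a population:

```python
import numpy as np

def synchrony(phases):
    """Kuramoto order parameter R in [0, 1] for oscillator phases at one
    instant: 1 means fully synchronised, values near 0 mean incoherent."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

def metastability_index(phase_series):
    """Variance of R over time: near zero for populations locked in synchrony
    or stuck in incoherence, high for populations that move between transient
    synchronous episodes."""
    r = np.array([synchrony(p) for p in phase_series])
    return float(r.var())

# Example: 50 oscillators whose phase spread waxes and wanes over 200 steps,
# producing alternating episodes of high and low synchrony.
rng = np.random.default_rng(0)
spread = 0.2 + 1.5 * (1 + np.sin(np.linspace(0, 6 * np.pi, 200)))[:, None]
phases = rng.normal(0.0, spread, size=(200, 50))
print(metastability_index(phases))
```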
19. Synthedemic modelling and prediction of Internet-based phenomena. Nika, Maria. January 2014.
The study of infectious disease dynamics, termed epidemiology, has been an important area of research for hundreds of years. Nowadays, it is increasingly realised that such techniques may have application to epidemics of a socio-technological nature. Indeed, the proliferation of the Internet has created new opportunities to study the mechanisms behind the emergence and dynamic behaviour of online phenomena such as Internet-based popularity outbursts. The contributions of this thesis are threefold. Firstly, we explore how classical epidemiological models can be applied to model the Internet-based spreading of YouTube video views and BitTorrent downloads. We assess the potential for epidemiology to explain such phenomena by progressively fitting and parameterising mono-epidemic models from a single data trace. We investigate the characterisation of parameter uncertainty by applying maximum likelihood-based techniques to obtain isosurfaces for different confidence intervals. We also study parameter recoverability from single stochastic simulation trajectories. Secondly, we propose a novel paradigm for modelling and predicting Internet-based phenomena. This framework is based on the composition of multiple compartmental epidemiological models, each of which is hypothesised to correspond to an underlying spreading mechanism. Our multiple-epidemic modelling approach regards data sets as the manifestation of a number of synthesised epidemics. This approach is termed 'synthedemic' modelling. It is inspired by Fourier analysis, but instead of harmonic wave forms, our components are compartmental epidemiological models. We present results from applying the synthedemic model to several epidemic outbreak datasets: synthetic SIR/SEIR data, Influenza, reported Swine flu cases, YouTube video views and BitTorrent music downloads. Finally, we extend the well-known SIR model in order to investigate the potential influence of reinforcing and inhibiting interactions between epidemics. The result is the first mathematical model that can reflect the dynamics of mutually reinforcing or inhibiting epidemics, via the syndemic and counter-syndemic interaction effects in multiple overlapping populations. Our findings relating to the effect of the degree of overlap between populations are consistent with existing literature on travel restrictions.
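For reference, the standard SIR compartmental model that underpins the mono-epidemic fits, and a loose statement of the synthedemic superposition idea, can be written as follows (textbook notation; the superposition equation is my paraphrase of the approach, not an equation from the thesis).

```latex
% Standard SIR dynamics with transmission rate beta and recovery rate gamma
% in a population of size N = S + I + R:
\[
  \frac{dS}{dt} = -\beta \frac{S I}{N}, \qquad
  \frac{dI}{dt} = \beta \frac{S I}{N} - \gamma I, \qquad
  \frac{dR}{dt} = \gamma I .
\]
% Synthedemic modelling, loosely stated: an observed trace y(t) is explained
% as the superposition of K such compartmental components, each with its own
% parameters and onset time t_k:
\[
  y(t) \;\approx\; \sum_{k=1}^{K} I_k\bigl(t \,;\, \beta_k, \gamma_k, N_k, t_k\bigr).
\]
```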
20. Professional tennis: quantitative models and ranking algorithms. Spanias, Demetris. January 2014.
Professional singles tennis is a popular global sport that attracts spectators and speculators alike. In recent years, financial trading related to sport outcomes has become a reality, thanks to the rise of online betting exchanges and the ever increasing development and deployment of quantitative models for sports. This thesis investigates the extent to which the outcome of a match between two professional tennis players can be forecast using quantitative models parameterised by historical data. Three different approaches are explored, each having its own advantages and disadvantages. Firstly, the problem is approached using a Markov chain to model a tennis point, estimating the probability of a player winning a point while serving. Such a probability can be used as a parameter to existing hierarchical models to estimate the probability of a player winning the match. We demonstrate how this probability can be estimated using varying subsets of historical player data and investigate their effect on results. Historical data averaged over varying opponents with different skill sets does not necessarily provide a fair basis of comparison when evaluating the performance of players. The second approach is therefore a technique that uses only data from matches played against common opponents to find the difference between the modelled players' probabilities of winning a point on serve against each common opponent. This difference in probability for each common opponent is a 'transitive contribution' towards victory in the match being modelled. By combining these 'contributions', the 'Common-Opponent' model overcomes the problems of using average historical statistics, at the cost of a shrinking data set. Finally, the thesis ventures into the field of player rankings. Rankings provide a fast and simple method for predicting match winners and comparing players. We present a variety of methods to generate such player rankings, using either network analysis or hierarchical models. The generated rankings are then evaluated on their ability to correctly represent the subset of matches used to generate them, as well as their ability to forecast future matches.
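The hierarchical step mentioned above, from a point-level probability to higher levels of the scoring hierarchy, has a standard closed form at the game level; the sketch below uses that textbook formula (assuming points on serve are independent and identically distributed), rather than anything specific to the thesis's models.

```python
def p_win_game(p):
    """Probability that the server wins a game, given probability p of winning
    each point on serve (points assumed i.i.d.)."""
    q = 1.0 - p
    to_love = p ** 4                        # win the game 4 points to 0
    to_15 = 4 * p ** 4 * q                  # win 4-1
    to_30 = 10 * p ** 4 * q ** 2            # win 4-2
    reach_deuce = 20 * p ** 3 * q ** 3      # reach 3-3 (deuce)
    win_from_deuce = p ** 2 / (1.0 - 2.0 * p * q)
    return to_love + to_15 + to_30 + reach_deuce * win_from_deuce

# A server who wins 62% of service points wins roughly 78% of service games;
# analogous recursions lift game-level probabilities to sets and matches.
print(round(p_win_game(0.62), 3))
```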