421. A photoplethysmography system optimised for pervasive cardiac monitoring. Patterson, James, January 2013.
Photoplethysmography is a non-invasive sensing technique which infers instantaneous cardiac function from an optical measurement of blood vessels. This thesis presents a photoplethysmography-based sensor system developed specifically for the requirements of a pervasive healthcare monitoring system. Continuous monitoring of patients requires both the size and power consumption of the chosen sensor solution to be minimised to ensure that patients will be willing to use the device. Pervasive sensing also requires that the device be scalable for manufacturing in high volume at a build cost that healthcare providers are willing to accept. System-level choices of both electronic circuits and signal processing techniques are based on their sensitivity to cardiac biosignals, robustness against noise-inducing artefacts and simplicity of implementation. Numerical analysis is used to justify the implementation of a technique in hardware, and circuit prototyping and experimental data collection are used to validate a technique's application. The entire signal chain operates in the discrete-time domain, which allows all of the signal processing to be implemented in firmware on an embedded processor, minimising the number of discrete components while optimising the trade-off between power and bandwidth in the analogue front-end. Synchronisation of the optical illumination and detection modules enables high-dynamic-range rejection of both AC and DC independent light sources without compromising the biosignal. Signal delineation is used to reduce the required communication bandwidth: it preserves both the amplitude and temporal resolution of the non-stationary photoplethysmography signals, allowing more sophisticated analytical techniques to be performed at the other end of the communication channel. The complete sensing system is implemented on a single PCB using only commercial-off-the-shelf components and consumes less than 7.5 mW of power. The sensor platform is validated by the successful capture of physiological data in a harsh optical sensing environment.
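The ambient-light rejection described above can be read as a form of discrete-time synchronous detection: the detector is read once with the LED on and once with it off in each sampling cycle, and the difference removes light that is independent of the illumination. A minimal numerical sketch of that idea follows; the signal model, rates and amplitudes are illustrative assumptions, not values from the thesis.

```python
import numpy as np

fs = 1000.0                              # paired-sample rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)

# Assumed signal model: a small 1.2 Hz cardiac pulsation on a DC level,
# plus ambient light with a large DC offset, 50 Hz mains flicker and drift.
ppg = 1.0 + 0.02 * np.sin(2 * np.pi * 1.2 * t)
ambient = 5.0 + 0.5 * np.sin(2 * np.pi * 50 * t) + 0.3 * t / t[-1]

# Synchronous detection: each cycle the detector is read with the LED on
# (PPG + ambient) and with the LED off (ambient only); subtracting cancels
# ambient AC and DC components. Here both readings share the same instant,
# so cancellation is exact; in hardware they are offset slightly in time
# and the rejection is finite.
led_on = ppg + ambient
led_off = ambient
recovered = led_on - led_off

print(f"max residual after ambient cancellation: {np.max(np.abs(recovered - ppg)):.2e}")
```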
422. SymbexNet: checking network protocol implementations using symbolic execution. Song, JaeSeung, January 2013.
The implementations of network protocols, such as DNS, DHCP and Zeroconf, are prone to flaws, security vulnerabilities and interoperability issues caused by ambiguous requirements in protocol specifications. Detecting such problems is not easy because (i) many bugs manifest themselves only after prolonged operation; (ii) the state space of complex protocol implementations is large; and (iii) problems often require additional information about correct behaviour from specifications. This thesis presents a novel approach to detect various types of flaws in network protocol implementations by combining symbolic execution and rule-based packet matching. The core idea behind our approach is to automatically generate high-coverage test input packets for a network protocol implementation. For this, the protocol implementation is run using a symbolic execution engine to obtain test input packets. These packets are then used to detect potential violations of rules, derived from the protocol specification, that constrain permitted input and output packets. We propose a technique that repeatedly performs symbolic execution on selected test input packets to achieve broad and deep exploration of the implementation state space. In addition, we use the generated test packets to check interoperability between different implementations of the same network protocol. We present a system based on these techniques, SYMBEXNET, and show that it can automatically generate test input packets that achieve high source code coverage and discover various bugs. We evaluate SYMBEXNET on multiple implementations of two network protocols: Zeroconf, a service discovery protocol, and DHCP, a network configuration protocol. SYMBEXNET is able to discover non-trivial bugs as well as interoperability problems, most of which have been confirmed by the developers.
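The rule-based side of the approach can be pictured as a set of specification-derived predicates over request/response pairs: if the input predicate matches, the output predicate must hold. The sketch below is a toy illustration in Python with invented rule names, fields and packet structures; it is not SymbexNet's rule language or a faithful DHCP encoding.

```python
# Toy rule checker: each rule pairs a predicate on the request with a
# predicate the corresponding response must then satisfy.
def violations(rules, exchanges):
    found = []
    for request, response in exchanges:
        for name, applies, must_hold in rules:
            if applies(request) and not must_hold(response):
                found.append((name, request, response))
    return found

# A rule loosely in the spirit of a DHCP requirement: an answer to a
# DISCOVER message must be an OFFER that carries a server identifier.
rules = [
    ("discover-gets-offer-with-server-id",
     lambda req: req.get("type") == "DISCOVER",
     lambda resp: resp.get("type") == "OFFER" and "server_id" in resp),
]

exchanges = [
    ({"type": "DISCOVER", "xid": 1},
     {"type": "OFFER", "xid": 1, "server_id": "10.0.0.1"}),
    ({"type": "DISCOVER", "xid": 2},
     {"type": "OFFER", "xid": 2}),               # missing server_id
]

for name, req, resp in violations(rules, exchanges):
    print(f"rule violated: {name} for request xid={req['xid']}")
```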
423. Practical and efficient runtime taint tracking. Papagiannis, Ioannis, January 2013.
Runtime taint tracking is a technique for controlling data propagation in applications. It is typically used to prevent disclosure of confidential information or to avoid application vulnerabilities. Taint tracking systems intercept application operations at runtime, associate metadata with the data being processed and inspect the metadata to detect unauthorised data propagation. To keep metadata up to date, every attempt of the application to access and process data is intercepted. To ensure that all data propagation is monitored, different categories of data (e.g. confidential and public data) are kept isolated. In practice, the interception of application operations and the isolation of different categories of data are hard to achieve: existing applications, language interpreters and operating systems need to be re-engineered, while keeping metadata up to date incurs significant overhead at runtime. In this thesis we show that runtime taint tracking can be implemented with minimal changes to existing infrastructure and with reduced overhead compared to previous approaches. In other words, we suggest methods to achieve both practical and efficient runtime taint tracking. Our key observation is that applications in specific domains are typically implemented in high-level languages and use a subset of the available language features. This facilitates the implementation of a taint tracking system because it needs to support only parts of a programming language and it may leverage features of the execution platform. This thesis explores three different application domains. We start with event processing applications in Java, for which we introduce a novel solution to achieve isolation and a practical method to declare restrictions about data propagation. We then focus on securing PHP web applications. We show that if taint tracking is restricted to a small part of an application, the runtime overhead is significantly reduced without sacrificing effectiveness. Finally, we target accidental data disclosure in Ruby web applications. Ruby emerges as an ideal choice for a practical taint tracking system because it supports meta-programming facilities that simplify interception and isolation.
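The underlying mechanism, attaching taint metadata to values and propagating it through operations until it reaches a sink, can be sketched with a small wrapper type. The class below is a simplified illustration of taint propagation for strings, not the interception machinery developed in the thesis; the label names and the sink are invented for the example.

```python
class TaintedStr(str):
    """A string carrying a set of taint labels; concatenation merges them."""

    def __new__(cls, value, taints=frozenset()):
        obj = super().__new__(cls, value)
        obj.taints = frozenset(taints)
        return obj

    def __add__(self, other):
        merged = self.taints | getattr(other, "taints", frozenset())
        return TaintedStr(str(self) + str(other), merged)

def sink(value):
    """A sink (e.g. writing an HTTP response) that rejects tainted data."""
    if getattr(value, "taints", frozenset()):
        raise ValueError(f"unauthorised propagation of {set(value.taints)} data")
    print(value)

user_input = TaintedStr("<script>alert(1)</script>", {"user"})
page = TaintedStr("Hello, ") + user_input     # taint propagates through '+'

try:
    sink(page)
except ValueError as err:
    print("blocked:", err)
```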
424. Automated construction of Petri net performance models from high-precision location tracking data. Anastasiou, Nikolas, January 2013.
Stochastic performance models are widely used to analyse the performance and reliability of systems that involve the flow and processing of customers and resources. However, model formulation and parameterisation are traditionally manual and thus expensive, intrusive and error-prone. This thesis illustrates the feasibility of automated performance model construction from high-precision location tracking data. In particular, we present a methodology based on a four-stage data processing pipeline which automatically constructs Coloured Generalised Stochastic Petri Net (CGSPN) performance models from an input dataset consisting of raw location tracking traces. The output performance model can be visualised using PIPE2, the platform-independent Petri net editor. The developed methodology can be applied to customer-processing systems which support multiple customer classes, and it can capture the initial and inter-routing probabilities of the customer flow of the underlying system. Furthermore, it detects any presence-based synchronisation conditions that may be inherent in the underlying system, as well as the presence of service cycles. Service time distributions, one for each customer class, of each service area in the system and travelling time distributions between pairs of service areas are also characterised. PEPERCORN, the tool that implements the developed methodology, is also presented. In addition, this thesis presents LocTrackJINQS, an extensible, location-aware queueing network simulator. LocTrackJINQS was developed to support location-based research. It has the ability to simulate a user-specified queueing network and, as the simulation progresses, to generate and output location tracking data, associated with the movement of the customers in the network, to a trace file. Our methodology is evaluated through six case studies. These case studies use synthetic location tracking data generated by LocTrackJINQS. The obtained results suggest that the methodology can infer the abstract structure of the system, specified in terms of the locations and service radii of the system's service areas (maximum error 0.320 m and 0.277 m respectively) and the customer flow, and approximate its service time delays well. In fact, the maximum relative entropy value obtained between the simulated and inferred service time distributions is 0.324 nats. Furthermore, whenever synchronisation between service areas takes place, the simulated synchronisation conditions are successfully inferred.
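The relative entropy figure quoted above (in nats, i.e. using the natural logarithm) compares the simulated and inferred service time distributions. A short sketch of how such a value can be estimated from two samples by histogramming them onto a common support follows; the binning scheme and the exponential test data are illustrative assumptions, not the thesis's estimator.

```python
import numpy as np

def relative_entropy_nats(p_samples, q_samples, bins=50):
    """Estimate D(P || Q) in nats by histogramming both samples onto
    a shared set of bins (natural log, hence nats)."""
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(p_samples, bins=edges)
    q, _ = np.histogram(q_samples, bins=edges)
    p = p.astype(float) + 1e-12              # small floor avoids log(0)
    q = q.astype(float) + 1e-12
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
simulated = rng.exponential(scale=2.0, size=5000)   # "true" service times
inferred = rng.exponential(scale=2.2, size=5000)    # slightly mis-estimated fit
print(f"D(simulated || inferred) = "
      f"{relative_entropy_nats(simulated, inferred):.3f} nats")
```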
425. Spatio-temporal modeling and analysis of brain development. Serag, Ahmed, January 2013.
The incidence of preterm birth is increasing and has emerged as a leading cause of neurodevelopmental impairment in childhood. In early development, defined here as the period before and around birth, the brain undergoes significant morphological, functional and appearance changes. The scope and rate of change is arguably greater than at any other time in life, but quantitative markers of this period of development are limited. Improved understanding of cerebral changes during this critical period is important for mapping normal growth, and for investigating mechanisms of injury associated with risk factors for maldevelopment such as premature birth. The objective of this thesis is the development of methods for spatio-temporal modeling and quantitative measurement of brain development that can assist in understanding the patterns of normal growth and can guide interventions designed to reduce the burden of preterm brain injury. An approach for constructing high-definition spatio-temporal atlases of the developing brain is introduced. A novelty of the proposed approach is the use of a time-varying kernel width to overcome the variations in the distribution of subjects at different ages. This leads to an atlas that retains a consistent level of detail at every time-point. The resulting 4D fetal and neonatal average atlases have greater anatomic definition than currently available 4D atlases, with clear anatomical structures; this is an important factor in improving registration between the atlas and individual subjects and in atlas-based automatic segmentation. The fetal atlas provides a natural benchmark for assessing preterm-born neonates and gives some insight into differences between the groups. Also, a novel framework for longitudinal registration which can accommodate large intra-subject anatomical variations is introduced. The framework exploits the previously developed spatio-temporal atlases, which aid the longitudinal registration process by providing prior information about the missing anatomical evolution between two scans taken over a large time interval. Finally, a voxel-wise analysis framework is proposed which complements the analysis of changes in brain morphology with the study of spatio-temporal signal intensity changes in multi-modal MRI, which can offer a useful marker of neurodevelopmental changes.
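The time-varying kernel width can be thought of as adaptive kernel regression over age: the temporal bandwidth at each atlas time-point widens or narrows so that roughly the same number of subjects contributes, which keeps the level of detail consistent whether subjects are sparsely or densely sampled in age. A one-dimensional sketch of the idea follows; the scalar "image" values, ages and parameters are illustrative assumptions, not the thesis's registration-based atlas construction.

```python
import numpy as np

def adaptive_kernel_average(ages, values, t, n_neighbours=8):
    """Kernel-weighted average at age t; the Gaussian width is set from the
    distance to the n-th nearest subject, so every time-point draws on a
    comparable number of subjects regardless of how ages are distributed."""
    dist = np.abs(ages - t)
    sigma = np.sort(dist)[min(n_neighbours, len(ages)) - 1] + 1e-9
    weights = np.exp(-0.5 * (dist / sigma) ** 2)
    return np.sum(weights * values) / np.sum(weights)

rng = np.random.default_rng(1)
ages = rng.uniform(28, 44, size=60)                    # gestational ages in weeks
values = 100 + 5 * (ages - 28) + rng.normal(0, 3, 60)  # stand-in for one voxel's trajectory

for t in (30.0, 36.0, 42.0):
    print(f"atlas value at {t:.0f} weeks: {adaptive_kernel_average(ages, values, t):.1f}")
```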
426. Oscillatory dynamics as a mechanism of integration in complex networks of neurons. Wildie, Mark, January 2013.
The large-scale integrative mechanisms of the brain, the means by which the activity of functionally segregated neuronal regions is combined, are not well understood. There is growing agreement that a flexible mechanism of integration must be present in order to support the myriad changing cognitive demands placed upon us. Neuronal communication through phase-coherent oscillation stands as the prominent theory of cognitive integration. The work presented in this thesis explores the role of oscillation and synchronisation in the transfer and integration of information in the brain. It is first shown that complex metastable dynamics suitable for modelling phase-coherent neuronal synchronisation emerge from modularity in networks of delay- and pulse-coupled oscillators. Within a restricted parameter regime these networks display a constantly changing set of partially synchronised states in which some modules remain highly synchronised while others desynchronise. An examination of network phase dynamics shows increasing coherence with increasing connectivity between modules. The metastable chimera states that emerge from the activity of modular oscillator networks are demonstrated to be synchronous with a constant phase relationship, as would be required of a mechanism of large-scale neural integration. A specific example of functional phase-coherent synchronisation within a spiking neural system is then developed. Competitive stimulus selection between converging population-encoded stimuli is demonstrated through entrainment of oscillation in receiving neurons. The behaviour of the model is shown to be analogous to well-known competitive processes of stimulus selection such as binocular rivalry, matching key experimentally observed properties for the distribution and correlation of periods of entrainment under differing stimulus strengths. Finally, two new measures of network centrality, knotty-centrality and set betweenness centrality, are developed and applied to empirically derived human structural brain connectivity data. It is shown that human brain organisation exhibits a topologically central core network within a modular structure, consistent with the generation of synchronous oscillation with functional phase dynamics.
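The partially synchronised states described above can be illustrated with a small Kuramoto-style simulation of two modules that are strongly coupled internally and weakly coupled to each other, with each module's synchrony summarised by its order parameter. The sketch below uses instantaneous phase coupling and invented constants; it is not the delay- and pulse-coupled model studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)
n_per_module, n_modules = 20, 2
n = n_per_module * n_modules
module = np.repeat(np.arange(n_modules), n_per_module)

omega = rng.normal(0.0, 0.5, n)                 # natural frequencies
# Strong coupling within a module, weak coupling between modules.
k = np.where(module[:, None] == module[None, :], 4.0, 0.3)
theta = rng.uniform(0, 2 * np.pi, n)

dt = 0.01
for _ in range(5000):                            # forward-Euler integration
    coupling = np.sum(k * np.sin(theta[None, :] - theta[:, None]), axis=1) / n
    theta += dt * (omega + coupling)

for m in range(n_modules):
    r = np.abs(np.mean(np.exp(1j * theta[module == m])))
    print(f"module {m} order parameter: {r:.2f}")
print(f"global order parameter:   {np.abs(np.mean(np.exp(1j * theta))):.2f}")
```

With these assumed constants, each module tends to synchronise internally more strongly than the network does globally, which is the intuition behind the partially synchronised states discussed above.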
427. Software performance engineering using virtual time program execution. Baltas, Nikolaos, January 2013.
In this thesis we introduce a novel approach to software performance engineering that is based on the execution of code in virtual time. Virtual time execution models the timing behaviour of unmodified applications by scaling observed method times or replacing them with results acquired from performance model simulation. This facilitates the investigation of "what-if" performance predictions of applications comprising an arbitrary combination of real code and performance models. The ability to analyse code and models in a single framework enables performance testing throughout the software lifecycle, without the need to extract performance models from code. This is accomplished by forcing thread scheduling decisions to take into account the hypothetical time-scaling or model-based performance specifications of each method. The virtual time execution of I/O operations or multicore targets is also investigated. We explore these ideas using a Virtual EXecution (VEX) framework, which provides performance predictions for multi-threaded applications. The language-independent VEX core is driven by an instrumentation layer that notifies it of thread state changes and method profiling events; it is then up to VEX to control the progress of application threads in virtual time on top of the operating system scheduler. We also describe a Java Instrumentation Environment (JINE), demonstrating the challenges involved in virtual time execution at the JVM level. We evaluate the VEX/JINE tools by executing client-side Java benchmarks in virtual time and identifying the causes of deviations from observed real times. Our results show that VEX and JINE transparently provide predictions for the response time of unmodified applications with typically good accuracy (within 5-10%) and low simulation overheads (25-50% additional time). We conclude this thesis with a case study that shows how models and code can be integrated, thus illustrating our vision of how virtual time execution can support performance testing throughout the software lifecycle.
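The basic "what-if" operation, charging a scaled version of an observed method time to a virtual clock, can be sketched with a simple decorator. This single-threaded toy ignores the scheduler-level control of thread progress that VEX performs; all names and the scaling factor are invented for the illustration.

```python
import time
from functools import wraps

class VirtualClock:
    """Accumulates hypothetical (scaled) method durations."""
    def __init__(self):
        self.elapsed = 0.0

clock = VirtualClock()

def virtual_time(scale):
    """Run the method for real, but charge scale * observed duration to the
    virtual clock, asking 'what if this method were faster or slower?'."""
    def decorate(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            clock.elapsed += scale * (time.perf_counter() - start)
            return result
        return wrapper
    return decorate

@virtual_time(scale=0.5)            # what if this step were twice as fast?
def compute():
    time.sleep(0.2)                 # stand-in for real work

start = time.perf_counter()
for _ in range(3):
    compute()
print(f"real elapsed time:    {time.perf_counter() - start:.2f} s")
print(f"virtual elapsed time: {clock.elapsed:.2f} s")
```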
428. Computationally unifying urban masterplanning. Birch, David Alan, January 2013.
Urban masterplanning is the process of creating a coherent design for developing a campus, suburb, city or region. Unfortunately, the design and analysis teams involved face challenges which prevent rapid quantitative analysis of design iterations, precluding potential design improvement. These include limited automation, poor integration of modelling disciplines and, in particular, very limited scope for design space exploration. This thesis investigates these challenges and their solutions. A computational framework, HierSynth, is presented to help computationally unify the design and analysis sides of the urban masterplanning community. The key contribution of this thesis is HierSynth's data model. This presents a reconceptualisation of the workflow graph by composing it with the tree-based design decompositions commonly found in architectural interoperability formats. This is achieved through a hierarchy of design queries, templates and analyses which, when executed, form a design hierarchy annotated with evaluated analyses. This enables detailed multi-scale analysis directly on design elements, whilst supporting scenario generation, design space exploration capabilities and techniques to explore design improvements. The HierSynth framework is evaluated by application to a major commercial masterplanning project with Arup North America and is used to explore the most effective techniques for generating design insight. HierSynth enabled an order of magnitude more analysis iterations and previously infeasible design space exploration to answer design questions. During this collaboration an unexpected challenge was identified in maintaining and debugging complex, highly interrelated analysis models implemented as spreadsheets. A toolkit to address this is developed and applied to several generations of complex multi-disciplinary sustainability models. In summary, this thesis presents evidence of the need for, implementation of, and practical benefits from, computationally unifying urban masterplanning design and analysis. The key contribution is a compositional data model supporting this unification. Finally, avenues for further work to aid this community are explored, including data provenance and support for smart cities.
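The data model's central notion, a tree-shaped design decomposition whose nodes are annotated with evaluated analyses, can be pictured with a small sketch. The node and analysis names below are invented for the example and do not reflect HierSynth's actual query, template or analysis interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class DesignElement:
    """A node in the design hierarchy, annotated with analysis results."""
    name: str
    attributes: dict = field(default_factory=dict)
    children: list = field(default_factory=list)
    analyses: dict = field(default_factory=dict)

def run_analysis(node, name, fn):
    """Evaluate an analysis bottom-up, annotating every node with its result."""
    for child in node.children:
        run_analysis(child, name, fn)
    node.analyses[name] = fn(node)

# Illustrative masterplan fragment: one district containing two blocks.
block_a = DesignElement("block A", {"floor_area_m2": 12000})
block_b = DesignElement("block B", {"floor_area_m2": 8000})
district = DesignElement("district", children=[block_a, block_b])

def total_floor_area(node):
    own = node.attributes.get("floor_area_m2", 0)
    return own + sum(c.analyses["floor_area"] for c in node.children)

run_analysis(district, "floor_area", total_floor_area)
print(district.analyses)   # {'floor_area': 20000}, aggregated up the hierarchy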
429. Inferring useful static types for duck typed languages. Lamaison, Alexander, January 2013.
Complete and precise identification of types is essential to the effectiveness of programming aids such as refactoring or code completion. Existing approaches that target dynamically typed languages infer types using flow analysis, but flow analysis does not cope well with heavily used features such as heterogeneous containers and implicit interfaces. Our solution makes the assumption that programs that are known to work do not encounter run-time type errors, which allows us to derive extra type information from the way values are used, rather than simply from where those values originate. This is in keeping with the "duck typing" philosophy of many dynamically typed languages. The information we derive must be conservative, so we describe and formalise a technique to 'freeze' the duck type of a variable using the features, such as named methods, that are provably present on any run of the program. Development environments can use these sets of features to provide code-completion suggestions and API documentation, amongst other things. We show that these sets of features can be used to refine imprecise flow analysis results by using the frozen duck type to perform a structural type-cast. We first formalise this for an idealised duck-typed language semantics and then show to what extent the technique would work for a real-world language, Python. We demonstrate its effectiveness by performing an analysis of several real-world Python programs, which shows that we can infer the types of method-call receivers more precisely than flow analysis alone can.
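In a very simple setting, the "frozen" duck type of a variable, i.e. the set of features used on it, can be approximated by walking the syntax tree and collecting the attributes accessed on a given name. The snippet below is a deliberately naive illustration with no control-flow, aliasing or provability reasoning; it is not the analysis developed in the thesis.

```python
import ast

def frozen_duck_type(source, variable):
    """Collect the attribute names accessed on `variable` in `source`."""
    used = set()
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Attribute)
                and isinstance(node.value, ast.Name)
                and node.value.id == variable):
            used.add(node.attr)
    return used

code = """
def render(doc):
    doc.open()
    for line in doc.read_lines():
        print(line)
    doc.close()
"""
# Any object bound to `doc` must provide at least these features:
print(frozen_duck_type(code, "doc"))   # e.g. {'open', 'read_lines', 'close'}
```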
430. Inferring queueing network models from high-precision location tracking data. Horng, Tzu-Ching, January 2013.
Stochastic performance models are widely used to analyse the performance and reliability of systems that involve the flow and processing of customers. However, traditional methods of constructing a performance model are typically manual, time-consuming, intrusive and labour-intensive. The limited amount and low quality of manually collected data often lead to an inaccurate picture of customer flows and poor estimates of model parameters. Driven by advances in wireless sensor technologies, recent real-time location systems (RTLSs) enable the automatic, continuous and unintrusive collection of high-precision location tracking data, in both indoor and outdoor environments. This high-quality data provides an ideal basis for the construction of high-fidelity performance models. This thesis presents a four-stage data processing pipeline which takes as input high-precision location tracking data and automatically constructs a queueing network performance model approximating the underlying system. The first two stages transform raw location traces into high-level "event logs" recording when and for how long a customer entity requests service from a server entity. The third stage infers the customer flow structure and extracts samples of the time delays involved in the system, including service times, customer interarrival times and customer travelling times. The fourth stage parameterises the service process and customer arrival process of the final output queueing network model. Collecting large enough location traces for the purpose of inference by conducting physical experiments is expensive, labour-intensive and time-consuming. We thus developed LocTrackJINQS, an open-source simulation library for constructing simulations with location awareness and generating synthetic location tracking data. Finally, we examine the effectiveness of the data processing pipeline through four case studies based on both synthetic and real location tracking data. The results show that the methodology performs with moderate success in inferring multi-class queueing networks composed of single-server queues with FIFO, LIFO and priority-based service disciplines; it is also capable of inferring different routing policies, including simple probabilistic routing, class-based routing and shortest-queue routing.
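One part of the third stage, inferring the customer flow structure, amounts to estimating routing probabilities from the sequence of service areas each customer visits in the event log. A minimal sketch with invented log entries follows; the real pipeline also extracts service, interarrival and travelling time samples and handles multiple customer classes and richer routing policies.

```python
from collections import Counter, defaultdict

# Illustrative event log: each customer's ordered sequence of service areas.
visits = {
    "c1": ["reception", "triage", "xray", "exit"],
    "c2": ["reception", "triage", "exit"],
    "c3": ["reception", "triage", "xray", "exit"],
    "c4": ["reception", "triage", "exit"],
}

transitions = Counter()
for path in visits.values():
    for src, dst in zip(path, path[1:]):
        transitions[(src, dst)] += 1

totals = defaultdict(int)
for (src, _dst), count in transitions.items():
    totals[src] += count

# Empirical routing probabilities, e.g. P(triage -> xray) = 0.50.
for (src, dst), count in sorted(transitions.items()):
    print(f"P({src} -> {dst}) = {count / totals[src]:.2f}")
```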