291. 3D human body modelling from range data. Dekker, Laura Daye, January 2000.
This thesis describes the design, implementation and application of an integrated and fully automated system for interpreting whole-body range data. The system is shown to be capable of generating complete surface models of human bodies, and robustly extracting anatomical features for anthropometry, with minimal intrusion on the subject. The ability to automate this process has enormous potential for personalised digital models in medicine, ergonomics, design and manufacture, and for populating virtual environments. The techniques developed within this thesis now form the basis of a commercial product. However, the technical difficulties are considerable. Human bodies are highly varied and many of the features of interest are extremely subtle. The underlying range data is typically noisy and is sparse at occluded areas. In addressing these problems this thesis makes five main research contributions. Firstly, the thesis describes the design, implementation and testing of the whole integrated and automated system from scratch, starting at the image capture hardware. At each stage the trade-offs between performance criteria are discussed, and experiments are described to test the processes developed. Secondly, a combined data-driven and model-based approach is described and implemented for surface reconstruction from the raw data. This method addresses the whole body surface, including areas where body segments touch, and other occluded areas. The third contribution is a library of operators, designed specifically for shape description and measurement of the human body. The library provides high-level relational attributes and an "electronic tape measure" to extract linear and curvilinear measurements, as well as low-level shape information, such as curvature. Application of the library is demonstrated by building a large set of detectors to find anthropometric features, based on the ISO 8559 specification. Output is compared against traditional manual measurements and a detailed analysis is presented. The discrepancy between these sets of data is only a few per cent on most dimensions, and the system's reproducibility is shown to be similar to that of skilled manual measurers. The final contribution is that the mesh models and anthropometric features produced by the system have been used as a starting point to facilitate other research, such as registration of multiple body images, draping clothing and advanced surface modelling techniques.
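
As a concrete, much simplified illustration of the "electronic tape measure" idea, the sketch below slices a 3D point cloud at a given height, orders the slice points around their centroid, and sums the resulting polygon's edge lengths to approximate a girth measurement. The function name, parameters and synthetic cylinder data are assumptions made for illustration, not taken from the thesis.

    import numpy as np

    def slice_girth(points, height, band=0.01):
        """Approximate a circumferential 'tape measure' reading: select points
        in a thin horizontal band, order them by angle around the slice
        centroid, and sum the closed-polygon edge lengths."""
        band_pts = points[np.abs(points[:, 2] - height) < band][:, :2]
        if len(band_pts) < 3:
            raise ValueError("not enough points in the slice")
        offsets = band_pts - band_pts.mean(axis=0)
        angles = np.arctan2(offsets[:, 1], offsets[:, 0])
        ordered = band_pts[np.argsort(angles)]
        closed = np.vstack([ordered, ordered[:1]])      # close the loop
        return np.linalg.norm(np.diff(closed, axis=0), axis=1).sum()

    # Toy usage: a cylinder of radius 0.4 m gives a girth near 2*pi*0.4.
    rng = np.random.default_rng(0)
    theta = rng.uniform(0, 2 * np.pi, 5000)
    z = rng.uniform(0, 1.8, 5000)
    cloud = np.column_stack([0.4 * np.cos(theta), 0.4 * np.sin(theta), z])
    print(slice_girth(cloud, height=1.0))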

292. Effective design, configuration, and use of digital CCTV. Keval, H. U., January 2009.
It is estimated that there are five million CCTV cameras in use today. CCTV is used by a wide range of organisations and for an increasing number of purposes. Despite this, there has been little research to establish whether these systems are fit for purpose. This thesis takes a socio-technical approach to determine whether CCTV is effective and, if not, how it could be made more effective. Human-computer interaction (HCI) knowledge and methods have been applied to improve this understanding and to establish what is needed to make CCTV effective; this was achieved through an extensive field study and two experiments. In Study 1, contextual inquiry was used at 14 security control rooms to identify the security goals, tasks and technology, and the factors which affected operator performance and their causes. The findings revealed a number of factors which interfered with task performance, such as poor camera positioning, ineffective workstation setups, difficulty in locating scenes, and the use of low-quality CCTV recordings. The impact of different levels of video quality on identification and detection performance was assessed in two experiments using a task-focused methodology. In Study 2, 80 participants identified 64 face images taken from four spatially compressed video conditions (32, 52, 72, and 92 Kbps). At a bit rate of 52 Kbps (MPEG-4), the reduction in the number of faces correctly identified reached statistical significance. In Study 3, 80 participants each detected 32 events from four frame-rate CCTV video conditions (1, 5, 8, and 12 fps). Below 8 frames per second, correct detections and task confidence ratings decreased significantly. These field and empirical research findings are presented in a framework using a typical CCTV deployment scenario, which has been validated through an expert review. The contributions and limitations of this thesis are reviewed, and suggestions for how the framework should be further developed are provided.
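
The two experiments compare performance across discrete quality conditions; a hedged sketch of how detection counts might be compared across frame-rate conditions is given below. The counts are invented for illustration and are not the thesis's data or analysis.

    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical counts of correct vs. missed detections per frame-rate
    # condition (1, 5, 8, 12 fps); the figures are invented, not the thesis's.
    correct = np.array([310, 420, 560, 575])
    missed  = np.array([330, 220,  80,  65])
    table = np.vstack([correct, missed])

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2={chi2:.1f}, dof={dof}, p={p:.3g}")
    for fps, c, m in zip([1, 5, 8, 12], correct, missed):
        print(f"{fps:>2} fps: {c / (c + m):.1%} correct")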

293. Dynamical models and machine learning for supervised segmentation. Shepherd, T., January 2009.
This thesis is concerned with the problem of how to outline regions of interest in medical images, when the boundaries are weak or ambiguous and the region shapes are irregular. The focus on machine learning and interactivity leads to a common theme of the need to balance conflicting requirements. First, any machine learning method must strike a balance between how much it can learn and how well it generalises. Second, interactive methods must balance minimal user demand with maximal user control. To address the problem of weak boundaries, methods of supervised texture classification are investigated that do not use explicit texture features. These methods enable prior knowledge about the image to benefit any segmentation framework. A chosen dynamic contour model, based on probabilistic boundary tracking, combines these image priors with efficient modes of interaction. We show the benefits of the texture classifiers over intensity and gradient-based image models, in both classification and boundary extraction. To address the problem of irregular region shape, we devise a new type of statistical shape model (SSM) that does not use explicit boundary features or assume high-level similarity between region shapes. First, the models are used for shape discrimination, to constrain any segmentation framework by way of regularisation. Second, the SSMs are used for shape generation, allowing probabilistic segmentation frameworks to draw shapes from a prior distribution. The generative models also include novel methods to constrain shape generation according to information from both the image and user interactions. The shape models are first evaluated in terms of discrimination capability, and shown to outperform other shape descriptors. Experiments also show that the shape models can benefit a standard type of segmentation algorithm by providing shape regularisers. We finally show how to exploit the shape models in supervised segmentation frameworks, and evaluate their benefits in user trials.
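
To give a flavour of how supervised pixel classification can supply an image prior to a segmentation framework, the sketch below trains a classifier on a few user "scribbles" and converts its probability map into a region cost. The RandomForest classifier, hand-made features and synthetic image are stand-in assumptions, not the texture models developed in the thesis.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Synthetic two-region image: darker left half, brighter right half.
    rng = np.random.default_rng(1)
    img = np.where(np.arange(64 * 64).reshape(64, 64) % 64 < 32,
                   rng.normal(0.3, 0.05, (64, 64)),
                   rng.normal(0.7, 0.05, (64, 64)))

    # Features per pixel: intensity plus a crude local-mean "texture" cue.
    pad = np.pad(img, 1, mode='edge')
    local_mean = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
                  pad[1:-1, :-2] + pad[1:-1, 2:]) / 4
    X = np.column_stack([img.ravel(), local_mean.ravel()])

    # User scribbles: a few labelled pixels inside (left) and outside (right).
    fg = np.ravel_multi_index(([10, 20, 30], [5, 10, 15]), img.shape)
    bg = np.ravel_multi_index(([10, 20, 30], [50, 55, 60]), img.shape)
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X[np.r_[fg, bg]], np.r_[np.ones(3), np.zeros(3)])

    # Probability map becomes a region cost any segmenter could consume.
    prob_map = clf.predict_proba(X)[:, 1].reshape(img.shape)
    region_term = -np.log(prob_map + 1e-6)
    print(region_term[:, :5].mean(), region_term[:, -5:].mean())  # low vs. high cost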

294. Impact analysis of database schema changes. Maule, A., January 2010.
When database schemas require change, it is typical to predict the effects of the change, first to gauge whether the change is worth the expense, and second to determine what must be reconciled once the change has taken place. Current techniques to predict the effects of schema changes upon applications that use the database can be expensive and error-prone, making the change process expensive and difficult. Our thesis is that an automated approach for predicting these effects, known as an impact analysis, can create a more informed schema change process, allowing stakeholders to obtain beneficial information at lower cost than currently used industrial practice. This is an interesting research problem because modern data-access practices make it difficult to create an automated analysis that can identify the dependencies between applications and the database schema. In this dissertation we describe a novel analysis that overcomes these difficulties. We present a novel analysis for extracting potential database queries from a program, called query analysis. This query analysis builds upon related work, satisfying the additional requirements that we identify for impact analysis. The impacts of a schema change can be predicted by analysing the results of query analysis, using a process we call impact calculation. We describe impact calculation in detail, and show how it can be practically and efficiently implemented. Due to the level of accuracy required by our query analysis, the analysis can become expensive, so we describe existing and novel approaches for maintaining an efficient and computationally tractable analysis. We describe a practical and efficient prototype implementation of our schema change impact analysis, called SUITE. We describe how SUITE was used to evaluate our thesis, using a historical case study of a large commercial software project. The results of this case study show that our impact analysis is feasible for large commercial software applications, and likely to be useful in real-world software development.
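
A toy version of the impact calculation step might look like the sketch below: column references extracted from program queries are matched against a proposed schema change to list the affected program locations. The data structures, file names and column names are hypothetical and do not reflect how SUITE is actually implemented.

    # Hypothetical output of a query analysis: per program location, the
    # tables and columns its potential queries reference.
    extracted_queries = [
        {"location": "OrderDao.java:42",  "tables": {"orders": {"id", "total", "customer_id"}}},
        {"location": "ReportJob.java:88", "tables": {"orders": {"created_at", "total"}}},
        {"location": "UserDao.java:17",   "tables": {"users": {"id", "email"}}},
    ]

    def impacts(change_table, change_column, queries):
        """Return locations whose queries reference the column being changed."""
        return [q["location"] for q in queries
                if change_column in q["tables"].get(change_table, set())]

    # Predict the impact of dropping orders.total before making the change.
    print(impacts("orders", "total", extracted_queries))
    # ['OrderDao.java:42', 'ReportJob.java:88']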

295. Multiobjective genetic programming for financial portfolio management in dynamic environments. Hassan, G. N. A., January 2010.
Multiobjective (MO) optimisation is a useful technique for evolving portfolio optimisation solutions that span a range from high-return/high-risk to low-return/low-risk. The resulting Pareto front approximates the risk/reward Efficient Frontier [Mar52] and simplifies the choice of investment model for a given client's attitude to risk. However, the financial market is continuously changing and it is essential to ensure that MO solutions capture true relationships between financial factors and are not merely overfitting the training data. Research on evolutionary algorithms in dynamic environments has been directed towards adapting the algorithm to improve its suitability for retraining whenever a change is detected. Little research has focused on how to assess and quantify the success of multiobjective solutions in unseen environments. The multiobjective nature of the problem adds a further requirement that must be satisfied when judging the robustness of solutions. That is, in addition to examining whether solutions remain optimal in the new environment, we need to ensure that the solutions' relative positions previously identified on the Pareto front are not altered. This thesis investigates the performance of Multiobjective Genetic Programming (MOGP) in the dynamic real-world problem of portfolio optimisation. The thesis provides new definitions and statistical metrics, based on phenotypic cluster analysis, to quantify the robustness of both the solutions and the Pareto front. Focusing on the critical period between an environment change and when retraining occurs, four techniques to improve the robustness of solutions are examined, namely: the use of a validation data set; diversity preservation; a novel variation on mating restriction; and a combination of diversity enhancement and mating restriction. In addition, a preliminary investigation is carried out into using the robustness metrics to quantify the severity of change for optimum tracking in a dynamic portfolio optimisation problem. Results show that the techniques used offer statistically significant improvements in the solutions' robustness, although not on all the robustness criteria simultaneously. Combining mating restriction with diversity enhancement provided the best robustness results, while also greatly enhancing the quality of solutions.
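
The risk/return trade-off underlying the Pareto front can be illustrated with a simple non-dominance filter over candidate portfolios, as in the sketch below. The random returns and volatilities are invented for illustration; the MOGP system itself is not reproduced here.

    import numpy as np

    def pareto_front(returns, risks):
        """Indices of non-dominated portfolios: no other portfolio has both
        higher-or-equal return and lower-or-equal risk with one strict."""
        idx = []
        for i in range(len(returns)):
            dominated = any(
                returns[j] >= returns[i] and risks[j] <= risks[i]
                and (returns[j] > returns[i] or risks[j] < risks[i])
                for j in range(len(returns)))
            if not dominated:
                idx.append(i)
        return idx

    rng = np.random.default_rng(2)
    rets = rng.uniform(0.02, 0.15, 200)   # hypothetical annualised returns
    risk = rng.uniform(0.05, 0.30, 200)   # hypothetical volatilities
    front = pareto_front(rets, risk)
    print(f"{len(front)} of 200 candidate portfolios lie on the Pareto front")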

296. Location intelligence: a decision support system for business site selection. Weber, P., January 2011.
As one of the leading ‘world cities’, London is home to a highly internationalised workforce and is particularly reliant on foreign direct investment (FDI) for its continued economic success. In the face of increasing global competition and a very difficult economic climate, the capital must compete effectively to encourage and support such investors. Given these pressures, the need for a coherent framework for data and methodologies to inform business location decisions is apparent. The research sets out to develop a decision support system to iteratively explore, compare and rank London’s business neighbourhoods. This is achieved through the development, integration and evaluation of spatial data and its manipulation to create an interactive framework to model business location decisions. The effectiveness of the resultant framework is subsequently assessed using a scenario-based user evaluation. In this thesis, a geo-business classification for London is created, drawing upon the methods and practices common to geospatial neighbourhood classifications used for profiling consumers. The geo-business classification method encapsulates relevant location variables into a set of composite area characteristics using Principal Components Analysis. Next, the research investigates the implementation of an appropriate Multi-Criteria Decision Making methodology, in this case the Analytical Hierarchy Process (AHP), allowing the aggregation of the geo-business classification and decision makers’ preferences into discrete decision alternatives. Lastly, the results of integrating both data and model through the development and evaluation of a web-based prototype are presented. The development of this novel business location decision support framework enables not only improved location decision-making, but also the development of enhanced intelligence on the relative attractiveness of business neighbourhoods according to investor types.
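
A minimal sketch of the AHP step is shown below: criterion weights are derived from a pairwise comparison matrix via its principal eigenvector and used to rank neighbourhoods by weighted score. The criteria, comparison values and neighbourhood scores are invented for illustration and are not the classification or preferences used in the thesis.

    import numpy as np

    criteria = ["accessibility", "labour market", "property cost"]
    # A decision maker's pairwise comparison matrix (invented values).
    pairwise = np.array([[1.0, 3.0, 5.0],
                         [1/3, 1.0, 2.0],
                         [1/5, 1/2, 1.0]])

    # AHP priority weights: normalised principal eigenvector.
    eigvals, eigvecs = np.linalg.eig(pairwise)
    w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    weights = w / w.sum()

    neighbourhood_scores = {          # composite scores from a classification
        "Shoreditch":   np.array([0.8, 0.7, 0.4]),
        "Canary Wharf": np.array([0.9, 0.8, 0.3]),
        "Croydon":      np.array([0.6, 0.5, 0.9]),
    }
    ranked = sorted(neighbourhood_scores.items(),
                    key=lambda kv: float(weights @ kv[1]), reverse=True)
    for name, scores in ranked:
        print(f"{name:12s} {float(weights @ scores):.3f}")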

297. High specificity automatic function assignment for enzyme sequences. Roden, D. L., January 2011.
The number of protein sequences being deposited in databases is currently growing rapidly as a result of large-scale, high-throughput genome sequencing efforts. A large proportion of these sequences have no experimentally determined structure. Also, relatively few have high-quality, specific, experimentally determined functions. Due to the time, cost and technical complexity of experimental procedures for the determination of protein function, this situation is unlikely to change in the near future. Therefore, one of the major challenges for bioinformatics is the ability to automatically assign highly accurate, high-specificity functional information to these unknown protein sequences. As yet this problem has not been solved to a level of accuracy and reliability acceptable as a basis for detailed biological analysis on a genome-wide, automated, high-throughput scale. This research thesis aims to address this shortfall through the provision and benchmarking of methods that can be used to improve the accuracy of high-specificity protein function prediction from enzyme sequences. The datasets used in these studies are multiple alignments of evolutionarily related protein sequences, identified through the use of BLAST sequence database searches. Firstly, a number of non-standard amino acid substitution matrices were used to re-score the benchmark multiple sequence alignments. A subset of these matrices was shown to improve the accuracy of specific function annotation when compared to both the original BLAST sequence similarity ordering and a random sequence selection model. Following this, two established methods for the identification of functional specificity-determining amino acid residues (fSDRs) were used to identify regions within the aligned sequences that are functionally and phylogenetically informative. These localised sequence regions were then used to re-score the aligned sequences and to assess their ability to improve the specific functional annotation of the benchmark sequence sets. Finally, a machine learning approach (support vector machines) was followed to evaluate the possibility of identifying fSDRs that improve annotation accuracy directly from alignments of closely related protein sequences, without prior knowledge of their specific functional sub-types. The performance of this SVM-based method was then assessed by applying it to the automatic functional assignment of a number of well-studied classes of enzymes.
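
The idea of re-scoring aligned sequences with an alternative substitution matrix can be illustrated with the toy sketch below, which re-ranks hits against a query under a tiny hand-made matrix. The matrix, sequences and gap penalty are invented; the thesis uses full non-standard matrices over real BLAST-derived alignments.

    # Score each aligned hit against the query with a substitution matrix,
    # then re-rank the hits by that score. All values here are illustrative.
    def score(query, hit, matrix, gap=-4):
        total = 0
        for q, h in zip(query, hit):
            if q == "-" or h == "-":
                total += gap
            else:
                total += matrix.get((q, h), matrix.get((h, q), -1))
        return total

    matrix = {("A", "A"): 4, ("A", "S"): 1, ("S", "S"): 4,
              ("G", "G"): 6, ("A", "G"): 0, ("S", "G"): 0}

    query = "AASGG"
    hits = {"seq1": "AASGG", "seq2": "ASS-G", "seq3": "GGSAA"}
    ranked = sorted(hits, key=lambda h: score(query, hits[h], matrix), reverse=True)
    print([(h, score(query, hits[h], matrix)) for h in ranked])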

298. Context-driven methodologies for context-aware and adaptive systems. Sama, M., January 2011.
Applications which are both context-aware and adaptive enhance users' experience by anticipating their needs in relation to their environment and adapting their behavior according to environmental changes. Being by definition both context-aware and adaptive, these applications suffer from faults related to their context-awareness and to their adaptive nature, as well as from a novel variety of faults originating from the combination of the two. This research work analyzes, classifies, detects, and reports faults belonging to this novel class, aiming to improve the robustness of these Context-Aware Adaptive Applications (CAAAs). To better understand the peculiar dynamics driving the adaptation mechanism of CAAAs, a general high-level architectural model has been designed. This architectural model clearly depicts the stream of information coming from sensors and being computed all the way to the adaptation mechanism. The model identifies a stack of common components representing increasing abstractions of the context, and their general interconnections. Known faults involving context data can be re-examined according to this architecture and classified in terms of the component in which they occur and in terms of their abstraction from the environment. The result of this classification is a CAAA-oriented fault taxonomy. Our architectural model also underlines that there is a common evolutionary path for CAAAs and shows the importance of the adaptation logic. Indeed, most adaptation failures are caused by invalid interpretations of the context by the adaptation logic. To prevent such faults we defined a model, the Adaptation Finite-State Machine (A-FSM), describing how the application adapts in response to changes in the context. The A-FSM model is a powerful instrument which allows developers to focus on those context-aware and adaptive aspects in which faults reside. In this model we have identified a set of fault patterns representing the most common faults in this application domain. Such faults are represented as violations of given properties in the A-FSM. We have created four techniques to detect such faults. Our proposed algorithms are based on three different technologies: enumerative, symbolic and goal planning. These techniques complement one another. We have evaluated them by comparing them to each other, using both crafted models and models extracted from existing commercial and free applications. In the evaluation we examine validity, the readability of the reported faults, scalability, and behavior in limited-memory environments. We conclude this thesis by suggesting possible extensions.
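
One of the A-FSM fault patterns, nondeterministic adaptation (two rules enabled in the same state under the same context), can be checked enumeratively, as in the sketch below. The rules, context variables and encoding are invented for illustration and are not the thesis's A-FSM formalism or detection algorithms.

    from itertools import product

    # Tiny adaptation state machine: rules fire when a context predicate holds.
    rules = [
        # (source state, predicate over context, target state)
        ("quiet",   lambda c: c["noise_db"] > 70,                 "loud_ui"),
        ("quiet",   lambda c: c["noise_db"] > 70 and c["moving"], "headset"),
        ("loud_ui", lambda c: c["noise_db"] <= 70,                "quiet"),
    ]

    # Enumerate a small context space and flag states where more than one
    # rule is enabled at once (nondeterministic adaptation).
    contexts = [{"noise_db": n, "moving": m}
                for n, m in product([50, 80], [False, True])]

    for state in {src for src, _, _ in rules}:
        for ctx in contexts:
            enabled = [tgt for src, pred, tgt in rules if src == state and pred(ctx)]
            if len(enabled) > 1:
                print(f"nondeterministic adaptation in '{state}' under {ctx}: {enabled}")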

299. Global optimisation techniques for image segmentation with higher order models. Gomes Vicente, S. A., January 2011.
Energy minimisation methods are one of the most successful approaches to image segmentation. Typically used energy functions are limited to pairwise interactions, due to the increased complexity when working with higher-order functions. However, some important assumptions about objects are not translatable to pairwise interactions. The goal of this thesis is to explore higher order models for segmentation that are applicable to a wide range of objects. We consider: (1) a connectivity constraint, (2) a joint model over the segmentation and the appearance, and (3) a model for segmenting the same object in multiple images. We start by investigating a connectivity prior, which is a natural assumption about objects. We show how this prior can be formulated in the energy minimisation framework and explore the complexity of the underlying optimisation problem, introducing two different algorithms for optimisation. This connectivity prior is useful for overcoming the “shrinking bias” of the pairwise model, in particular in interactive segmentation systems. Secondly, we consider an existing model that treats the appearance of the image segments as variables. We show how to globally optimise this model using a Dual Decomposition technique, and show that this optimisation method outperforms existing ones. Finally, we explore the current limits of the energy minimisation framework. We consider the cosegmentation task and show that a preference for object-like segmentations is an important addition to cosegmentation. This preference is, however, not easily encoded in the energy minimisation framework. Instead, we use a practical proposal generation approach that allows not only the inclusion of a preference for object-like segmentations, but also the learning of the similarity measure needed to define the cosegmentation task. We conclude that higher order models are useful for different object segmentation tasks. We show how some of these models can be formulated in the energy minimisation framework. Furthermore, we introduce global optimisation methods for these energies and make extensive use of the Dual Decomposition approach, which proves to be suitable for this type of model.
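
The interplay between a pairwise energy and a connectivity prior can be sketched as below: a Potts-style pairwise energy is evaluated for a candidate labelling, and the connectivity constraint is checked by counting foreground connected components. The synthetic unary costs and labelling are assumptions for illustration, not the thesis's formulations or optimisation algorithms.

    import numpy as np
    from scipy.ndimage import label

    def pairwise_energy(labels, unary_fg, unary_bg, pairwise_weight=1.0):
        """Unary term plus a Potts pairwise term over 4-connected neighbours."""
        unary = np.where(labels == 1, unary_fg, unary_bg).sum()
        disagree = (np.abs(np.diff(labels, axis=0)).sum() +
                    np.abs(np.diff(labels, axis=1)).sum())
        return unary + pairwise_weight * disagree

    def violates_connectivity(labels):
        """Connectivity prior: the foreground must form a single component."""
        _, n_components = label(labels == 1)
        return n_components > 1

    rng = np.random.default_rng(3)
    seg = np.zeros((32, 32), dtype=int)
    seg[4:12, 4:12] = 1
    seg[20:26, 20:26] = 1          # a second, disconnected foreground blob

    unary_fg = rng.normal(1.0, 0.1, seg.shape)
    unary_bg = rng.normal(1.2, 0.1, seg.shape)
    print(pairwise_energy(seg, unary_fg, unary_bg))
    print(violates_connectivity(seg))   # True: the connectivity prior is violated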

300. Inference in Bayesian time-series models. Bracegirdle, C. I., January 2013.
Time series, data accompanied by a sequential ordering, occur and evolve all around us. Analysing time series is the problem of trying to discern and describe a pattern in the sequential data that develops in a logical way as the series continues, and the study of sequential data has a long history across a vast array of fields, including signal processing, bioinformatics, and finance, to name but a few. Classical approaches are based on estimating the parameters of the temporal evolution of the process according to an assumed model. In the econometrics literature, the focus is on parameter estimation for linear (regression) models, with a number of extensions. In this thesis, I take a Bayesian probabilistic modelling approach in discrete time, and focus on novel inference schemes. Fundamentally, Bayesian analysis replaces parameter estimates by quantifying uncertainty in the value, and probabilistic inference is used to update that uncertainty based on what is observed in practice. I make three central contributions. First, I discuss a class of latent Markov models which allows a Bayesian approach to internal process resets, and show how inference in such a model can be performed efficiently, before extending the model to a tractable class of switching time-series models. Second, I show how inference in linear-Gaussian latent models can be extended to allow a Bayesian approach to variance, and develop a corresponding variance-resetting model, the heteroskedastic linear-dynamical system. Third, I turn my attention to cointegration, a headline topic in finance, and describe a novel estimation scheme implied by Bayesian analysis, which I show to be empirically superior to the classical approach. I offer example applications throughout and conclude with a discussion.
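
As background to the linear-Gaussian latent models that the thesis extends with resets and a Bayesian treatment of variance, the sketch below runs a standard scalar Kalman filter on a random-walk-plus-noise model. The parameters and simulated data are invented, and neither the reset mechanism nor the heteroskedastic extension is included.

    import numpy as np

    def kalman_filter(y, trans_var=0.1, obs_var=1.0, m0=0.0, v0=10.0):
        """Exact filtering for a scalar random-walk-plus-noise model."""
        m, v = m0, v0
        means = []
        for obs in y:
            v_pred = v + trans_var               # predict: x_t | y_{1:t-1}
            k = v_pred / (v_pred + obs_var)      # Kalman gain
            m = m + k * (obs - m)                # update: x_t | y_{1:t}
            v = (1 - k) * v_pred
            means.append(m)
        return np.array(means)

    rng = np.random.default_rng(4)
    x = np.cumsum(rng.normal(0, np.sqrt(0.1), 200))   # latent random walk
    y = x + rng.normal(0, 1.0, 200)                   # noisy observations
    filtered = kalman_filter(y)
    # Filtered estimates should track the latent state better than raw data.
    print(float(np.mean((filtered - x) ** 2)), float(np.mean((y - x) ** 2)))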