
Unwritten procedural modeling with the straight skeleton

Kelly, Tom January 2014 (has links)
Creating virtual models of urban environments is essential to a disparate range of applications, from geographic information systems to video games. However, the large scale of these environments ensures that manual modeling is an expensive option. Procedural modeling is an automatic alternative that can create large cityscapes rapidly, by specifying algorithms that generate streets and buildings. Existing procedural modeling systems rely heavily on programming or scripting - skills which many potential users do not possess. We therefore introduce novel user interface and geometric approaches, particularly generalisations of the straight skeleton, to allow urban procedural modeling without programming. We develop the theory behind the types of degeneracy in the straight skeleton, and introduce a new geometric building block, the mixed weighted straight skeleton. In addition we introduce a simplification of the skeleton event, the generalised intersection event. We demonstrate that these skeletons can be applied to two urban procedural modeling systems that do not require the user to write programs. The first application of the skeleton is to the subdivision of city blocks into parcels. We demonstrate how the skeleton can be used to create highly realistic city block subdivisions. The results are shown to be realistic by several measures when compared against ground truth over several large data sets. The second application of the skeleton is the generation of buildings' mass models. Inspired by architects' use of plan and elevation drawings, we introduce a system that takes a floor plan and a set of elevations and extrudes a solid architectural model. We evaluate the interactive and procedural elements of the user interface separately, finding that the system can procedurally generate large urban landscapes robustly, as well as model a wide variety of detailed structures.
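The core operation behind the straight skeleton is shrinking a polygon by offsetting every edge inward at equal speed; skeleton arcs are the paths traced by the vertices as the offset distance grows. A minimal sketch of one offset step for a convex, counter-clockwise polygon (the function name and the convexity restriction are ours; the thesis's weighted and mixed skeletons generalise this operation):

```python
import math

def offset_convex_polygon(poly, d):
    """Shrink a convex CCW polygon by moving every edge inward by distance d.

    Each new vertex is the intersection of the two offset lines of its
    adjacent edges. Repeating this as d grows, and handling the events
    where edges collapse or split, is what a full straight-skeleton
    algorithm must do; this sketch covers only the degenerate-free,
    convex case.
    """
    n = len(poly)
    lines = []
    for i in range(n):
        (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % n]
        ex, ey = x1 - x0, y1 - y0
        length = math.hypot(ex, ey)
        ex, ey = ex / length, ey / length
        nx, ny = -ey, ex                  # left normal = inward for CCW
        lines.append(((x0 + d * nx, y0 + d * ny), (ex, ey)))
    out = []
    for i in range(n):
        a, da = lines[i - 1]              # offset of the edge entering vertex i
        b, db = lines[i]                  # offset of the edge leaving vertex i
        denom = da[0] * db[1] - da[1] * db[0]
        t = ((b[0] - a[0]) * db[1] - (b[1] - a[1]) * db[0]) / denom
        out.append((a[0] + t * da[0], a[1] + t * da[1]))
    return out
```

The degeneracies the thesis analyses arise precisely where `denom` approaches zero (parallel adjacent edges) or where several events coincide.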

Sensor fusion with Gaussian processes

Feng, Shimin January 2014 (has links)
This thesis presents a new approach to multi-rate sensor fusion for (1) user matching and (2) position stabilisation and lag reduction. The Microsoft Kinect sensor and the inertial sensors in a mobile device are fused with a Gaussian Process (GP) prior method. We present a Gaussian Process prior model-based framework for multisensor data fusion and explore the use of this model for fusing mobile inertial sensors and an external position sensing device. The Gaussian Process prior model provides a principled mechanism for incorporating the low-sampling-rate position measurements and the high-sampling-rate derivatives in multi-rate sensor fusion, which takes account of the uncertainty of each sensor type. We explore the complementary properties of the Kinect sensor and the built-in inertial sensors in a mobile device and apply the GP framework for sensor fusion in the mobile human-computer interaction area. The Gaussian Process prior model-based sensor fusion is presented as a principled probabilistic approach to dealing with position uncertainty and the lag of the system, which are critical for indoor augmented reality (AR) and other location-aware sensing applications. The sensor fusion helps increase the stability of the position and reduce the lag. This is of great benefit for improving the usability of a human-computer interaction system. We develop two applications using the novel and improved GP prior model. (1) User matching and identification. We apply the GP model to identify individual users, by matching the observed Kinect skeletons with the sensed inertial data from their mobile devices. (2) Position stabilisation and lag reduction in a spatially aware display application for user performance improvement. We conduct a user study. 
Experimental results show improved target-selection accuracy and reduced delay with the sensor fusion system, allowing users to acquire targets more rapidly and with fewer errors than with the filtered Kinect system alone. Participants also reported improved performance in the subjective questions. The two applications can be combined seamlessly in a proxemic interaction system, as identifying people and their positions in a room-sized environment plays a key role in proxemic interactions.
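Because the derivative of a Gaussian process is itself jointly Gaussian with the process, low-sampling-rate position measurements and high-sampling-rate derivative (inertial) measurements can be fused in a single GP posterior, exactly the multi-rate setting described above. A hedged numpy sketch with a squared-exponential kernel (the hyperparameters and function name are illustrative, not the thesis's implementation):

```python
import numpy as np

def gp_fuse(t_pos, y_pos, t_vel, y_vel, t_query, ell=1.0, sf=1.0, noise=1e-4):
    """Posterior mean of f at t_query given noisy samples of f (position)
    and f' (derivative), under a squared-exponential GP prior."""
    def k(a, b):                      # cov(f(a), f(b))
        return sf**2 * np.exp(-(a - b)**2 / (2 * ell**2))
    def k_fd(a, b):                   # cov(f(a), f'(b)) = d/db k(a, b)
        return k(a, b) * (a - b) / ell**2
    def k_dd(a, b):                   # cov(f'(a), f'(b)) = d^2/(da db) k(a, b)
        return k(a, b) * (1.0 / ell**2 - (a - b)**2 / ell**4)

    tp, td, tq = map(np.asarray, (t_pos, t_vel, t_query))
    Kpp = k(tp[:, None], tp[None, :])
    Kpd = k_fd(tp[:, None], td[None, :])
    Kdd = k_dd(td[:, None], td[None, :])
    K = np.block([[Kpp, Kpd], [Kpd.T, Kdd]])
    K += noise * np.eye(K.shape[0])   # per-sensor noise; one value for brevity
    y = np.concatenate([y_pos, y_vel])
    Ks = np.hstack([k(tq[:, None], tp[None, :]),
                    k_fd(tq[:, None], td[None, :])])
    return Ks @ np.linalg.solve(K, y)
```

In practice each sensor type would get its own noise variance, which is how the model "takes account of the uncertainty of each sensor type".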

Theoretical and practical aspects of typestate

McGinniss, Iain January 2014 (has links)
The modelling and enforcement of typestate constraints in object-oriented languages has the potential to eliminate a variety of common and difficult-to-diagnose errors. While the theoretical foundations of typestate are well established in the literature, less attention has been paid to the practical aspects: is the additional complexity justifiable? Can typestate be reasoned about effectively by "real" programmers? To what extent can typestate constraints be inferred, to reduce the burden of verbose type annotations? This thesis aims to answer these questions and provide a holistic treatment of the subject, with original contributions to both the theoretical and practical aspects of typestate.
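A typestate constraint of the kind described above can be sketched as a runtime check in a few lines; a static typestate system would reject the same illegal call sequences at compile time rather than raising at run time. The protocol and names below are illustrative, not taken from the thesis:

```python
class TypestateError(Exception):
    """Raised when a method is called in a state where it is not legal."""

class Socket:
    """Toy typestate protocol: connect() only when closed, send() only
    when open, close() only when open. Each entry maps a method to its
    (required state, resulting state) pair."""
    _transitions = {
        "connect": ("closed", "open"),
        "send":    ("open",   "open"),
        "close":   ("open",   "closed"),
    }

    def __init__(self):
        self.state = "closed"

    def _step(self, method):
        required, target = self._transitions[method]
        if self.state != required:
            raise TypestateError(f"{method}() is illegal in state {self.state!r}")
        self.state = target

    def connect(self):
        self._step("connect")

    def send(self, data):
        self._step("send")
        return len(data)

    def close(self):
        self._step("close")
```

The "difficult-to-diagnose errors" of the abstract are exactly calls like `send()` after `close()`, which here fail loudly instead of corrupting state silently.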

Data mining of range-based classification rules for data characterization

Tziatzios, Achilleas January 2014 (has links)
Advances in data gathering have led to the creation of very large collections across different fields, such as industrial site sensor measurements or the account statuses of a financial institution's clients. The ability to learn classification rules, rules that associate specific attribute values with a specific class label, from such data is important and useful in a range of applications. While many methods to facilitate this task have been proposed, existing work has focused on categorical datasets, and very few solutions that can derive classification rules over associated continuous ranges (numerical intervals) have been developed. Furthermore, these solutions have relied solely on classification performance as a means of evaluation and therefore focus on the mining of mutually exclusive classification rules and the correct prediction of the most dominant class values. As a result, existing solutions demonstrate only limited utility when applied to data characterization tasks. This thesis proposes a method, inspired by classification association rule mining, that derives range-based classification rules from numerical data. The presented method searches for associated numerical ranges that have a class value as their consequent and meet a set of user-defined criteria. A new interestingness measure is proposed for evaluating the density of range-based rules, and four heuristic-based approaches are presented for targeting different sets of rules. Extensive experiments demonstrate the effectiveness of the new algorithm for classification tasks when compared to existing solutions, and its utility as a solution for data characterization.
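The search the abstract describes, for numerical ranges whose consequent is a class value subject to user-defined criteria, can be illustrated with a toy one-attribute miner. A hedged sketch (the exhaustive quadratic search and tie-breaking are ours; the thesis's four heuristic approaches exist precisely to prune a space like this):

```python
def best_range_rule(values, labels, target, min_support=2):
    """Search intervals [lo, hi] over the sorted distinct values of one
    numeric attribute, returning the interval whose rule
    'lo <= x <= hi -> target' has the highest confidence subject to a
    minimum support; ties are broken in favour of larger support."""
    cuts = sorted(set(values))
    best = None
    for i, lo in enumerate(cuts):
        for hi in cuts[i:]:
            covered = [l for v, l in zip(values, labels) if lo <= v <= hi]
            support = sum(1 for l in covered if l == target)
            if support < min_support:
                continue
            confidence = support / len(covered)
            if (best is None or confidence > best[2]
                    or (confidence == best[2] and support > best[3])):
                best = (lo, hi, confidence, support)
    return best  # (lo, hi, confidence, support), or None if nothing qualifies
```

Note that rules found this way need not be mutually exclusive: overlapping intervals with different consequents can all be reported, which is what makes the approach useful for characterization rather than only prediction.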

A framework for preserving privacy in e-government

Almagwashi, Haya January 2015 (has links)
Today the world relies heavily on the use of Information and Communication Technologies (ICT) in performing daily tasks, and governments are no exception. Governments around the world are utilising the latest ICT to provide government services in the form of electronic services (e-services), a phenomenon called electronic government (e-government). These services vary from providing general information to the provision of advanced services. However, one of the major obstacles facing the adoption of e-government services is the challenging privacy issues arising from the sharing of users' information between government agencies and third parties. Many privacy frameworks have been proposed by governments and researchers to tackle these issues; however, the adoption of these frameworks is limited as they lack consideration of the users' perspective. This thesis uses Soft Systems Methodology (SSM) to investigate the concepts relevant to e-government and to preserving privacy in the context of e-government. Using SSM, Conceptual Models (CMs) relevant to the concepts under investigation were developed and used to review and identify the limitations of existing frameworks in the literature and to determine the requirements for preserving privacy in an e-government context. A general framework for Privacy REquirements in E-GOVernment (PRE_EGOV) is proposed based on the developed CMs. The proposed framework considers the perspectives of relevant stakeholders and the ownership rights of information about users. The CM relevant to preserving privacy and the elements of the PRE_EGOV framework were evaluated against stakeholders' perspectives using a survey. The applicability of the proposed framework is demonstrated by applying it to a real-world case study.
The insight gained from the analysis of the case study and the survey's results increased confidence in the usefulness of the proposed framework and showed that a systems thinking approach to tackling such a complex, multi-disciplinary problem can result in a promising solution that is more likely to be accepted by the stakeholders involved. The work in this research has been published in three full papers and a poster. The developed Conceptual Models and proposed framework have found acceptance in the e-government research community [1, 2, 3, 4] as well as in other research communities [5].

Robust processing of diffusion weighted image data

Parker, Greg January 2014 (has links)
The work presented in this thesis comprises a proposed robust diffusion weighted magnetic resonance imaging (DW-MRI) pipeline, each chapter detailing a step designed to ultimately transform raw DW-MRI data into segmented bundles of coherent fibre ready for more complex analysis or manipulation. In addition to this pipeline we also demonstrate, where appropriate, ways in which each step could be optimised for the maxillofacial region, setting the groundwork for a wider maxillofacial modelling project intended to aid surgical planning. Our contribution begins with RESDORE, an algorithm designed to automatically identify corrupt DW-MRI signal elements. While slower than the closest alternative, RESDORE is far more robust to localised changes in SNR and pervasive image corruptions. The second step in the pipeline concerns the retrieval of accurate fibre orientation distribution functions (fODFs) from the DW-MRI signal. Chapter 4 comprises a simulation study exploring the application of spherical deconvolution methods to 'generic' fibre, finding that the commonly used constrained spherical harmonic deconvolution (CSHD) is extremely sensitive to calibration but, if handled correctly, might be able to resolve muscle fODFs in vivo. Building upon this information, Chapter 5 conducts further simulations and in vivo imaging experiments demonstrating that this is indeed the case, allowing us to demonstrate, for the first time, anatomically plausible reconstructions of several maxillofacial muscles. To complete the proposed pipeline, Chapter 6 introduces a method for segmenting whole-volume streamline tractographies into anatomically valid bundles. In addition to providing an accurate segmentation, this shape-based method does not require the computationally expensive inter-streamline comparisons employed by other approaches, allowing the algorithm to scale linearly with the number of streamlines in the dataset. This is rarely true of comparison-based methods, which at best scale in higher-order linear time but more often exhibit O(N²) complexity.
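The linear-scaling claim can be illustrated with a simple shape-based grouping: each streamline is mapped once to a quantised descriptor and hashed into a bucket, so no pair of streamlines is ever compared directly. A hedged sketch (the resample-and-quantise descriptor below is a crude stand-in for the thesis's shape representation):

```python
import numpy as np

def resample(streamline, k=8):
    """Resample a streamline (N x 3 array of points) to k points,
    evenly spaced by arc length."""
    pts = np.asarray(streamline, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0.0, s[-1], k)
    return np.vstack([np.interp(targets, s, pts[:, d])
                      for d in range(pts.shape[1])]).T

def bucket_streamlines(streamlines, cell=5.0, k=8):
    """Group streamlines by a quantised shape descriptor: their k
    resampled points snapped to a grid of size `cell`. Each streamline
    is hashed exactly once, so the grouping is O(N) in the number of
    streamlines, unlike all-pairs distance clustering."""
    buckets = {}
    for idx, sl in enumerate(streamlines):
        key = tuple(np.round(resample(sl, k) / cell).astype(int).ravel())
        buckets.setdefault(key, []).append(idx)
    return buckets
```

The trade-off is the usual one for hashing: streamlines near a cell boundary can land in different buckets, so a practical method needs a more forgiving descriptor or a merge pass.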

Models for information propagation in opportunistic networks

Coombs, Richard January 2014 (has links)
The topic of this thesis is Opportunistic Networks (OPNETs), a type of mobile ad hoc network in which data are propagated by the movement of the network devices and by short-range wireless transmissions. This allows data to spread to many devices across large distances without the use of any infrastructure or powerful hardware. OPNET technology is still at a fairly early stage of development and offers much potential for research. Many applications could benefit from OPNETs, such as sensor networks or social networks. However, before the technology can be used with confidence, research must be undertaken to better understand its behaviour and how it can be improved. In this thesis, the way in which information propagates in an OPNET is studied. Methodical parameter studies are performed to measure the rate at which information reaches new recipients, the speed at which information travels across space, and the persistence of information in the network. The key parameters studied are device density, device speed, wireless signal radius and message transmission time. Furthermore, device interaction schemes based on epidemiological models are studied to find how they affect network performance. Another contribution of this thesis is the development of theoretical models for message spread in regions of one-dimensional (1D) and two-dimensional (2D) space. These models are based on preliminary theoretical models of network device interaction; specifically, the rate at which devices move within range of each other and the length of time that they remain within range. A key contribution of this thesis is in acknowledging that data transmissions between devices do not occur instantaneously: due to latency in wireless communications, the time taken to transmit data is proportional to the amount of data being transferred, and non-instantaneous transmissions may fail before completion. We investigate the effect this has on the rate of information propagation in OPNETs.
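The effect of non-instantaneous transmissions can be illustrated with a toy one-dimensional simulation in which a transfer succeeds only if two devices stay in range for several consecutive steps, and is abandoned if they drift apart mid-transfer. All parameters and the movement model below are illustrative, not the thesis's experimental setup:

```python
import random

def simulate_opnet(n=60, area=100.0, radius=5.0, speed=1.0,
                   tx_steps=3, steps=400, seed=1):
    """1-D opportunistic-network toy: devices random-walk on a segment;
    an infected (message-carrying) device must remain within `radius`
    of a susceptible one for `tx_steps` consecutive steps to hand over
    the message. Returns the number of devices holding the message."""
    rng = random.Random(seed)
    pos = [rng.uniform(0, area) for _ in range(n)]
    has_msg = [False] * n
    has_msg[0] = True                       # single initial source
    progress = {}                           # (sender, receiver) -> steps in range
    for _ in range(steps):
        for i in range(n):                  # random-walk movement, clamped
            pos[i] = min(area, max(0.0, pos[i] + rng.uniform(-speed, speed)))
        for s in range(n):
            if not has_msg[s]:
                continue
            for r in range(n):
                if has_msg[r]:
                    continue
                key = (s, r)
                if abs(pos[s] - pos[r]) <= radius:
                    progress[key] = progress.get(key, 0) + 1
                    if progress[key] >= tx_steps:
                        has_msg[r] = True   # transfer completed
                else:
                    progress.pop(key, None)  # interrupted: must restart
    return sum(has_msg)
```

Sweeping `tx_steps` (message transmission time) against `speed` and `radius` in a loop reproduces, in miniature, the kind of parameter study the abstract describes.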

Inferring interestingness in online social networks

Webberley, William January 2014 (has links)
Information sharing and user-generated content on the Internet have given rise to the increased presence of uninteresting and ‘noisy’ information in media streams on many online social networks. Although a lot of ‘interesting’ information is also shared amongst users, the noise increases the cognitive burden on users’ ability to identify what is interesting, and may increase the chance of missing content that is useful or important. Additionally, users on such platforms are generally limited to receiving information only from those they are directly linked to on the social graph, meaning that users exist within distinct content ‘bubbles’, further limiting the chance of receiving interesting and relevant information from outside the immediate social circle. In this thesis, Twitter is used as a platform for researching methods for deriving “interestingness” through popularity, as given by the mechanism of retweeting, which allows information to be propagated further between users on Twitter’s social graph. Retweet behaviours are studied, and features such as Tweet audience, information redundancy, and propagation depth through path length are uncovered to help relate retweet action to the underlying social graph and the communities it represents. This culminates in research into a methodology for assigning scores to Tweets based on their ‘quality’, which is validated and shown to perform well in various situations.
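Two of the features mentioned above, propagation depth through path length and Tweet audience, can be extracted from a retweet cascade in a few lines. A hedged sketch (the cascade encoding and function name are ours, not the thesis's):

```python
def cascade_features(edges, followers):
    """Compute (max propagation depth, exposed audience) for a retweet
    cascade. `edges` maps each retweeter to the user they retweeted
    from; the root author is any user that never appears as a key.
    `followers` maps every participating user to their follower count;
    their sum approximates the audience exposed to the Tweet."""
    depth = {}

    def d(u):
        if u not in edges:          # root author: depth 0
            return 0
        if u not in depth:          # memoise along the retweet path
            depth[u] = 1 + d(edges[u])
        return depth[u]

    max_depth = max((d(u) for u in followers), default=0)
    audience = sum(followers.values())
    return max_depth, audience
```

A depth greater than one indicates the Tweet escaped the author's immediate social circle, which is exactly the 'bubble'-crossing behaviour the thesis uses as a signal of interestingness.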

Mathematically inspired approaches to face recognition in uncontrolled conditions: super resolution and compressive sensing

Al-Hassan, Nadia January 2014 (has links)
Face recognition under uncontrolled conditions using surveillance cameras is becoming essential for establishing the identity of a person at a distance from the camera and for providing safety and security against terrorist attacks, robbery and crime. Recognising faces in low-resolution, degraded images, as opposed to high-quality images of good resolution and size, is therefore considered one of the most challenging tasks and constitutes the focus of this thesis. The work in this thesis is designed to investigate these issues, with the following main aim: “To investigate face identification from a distance and under uncontrolled conditions by primarily addressing the problem of low-resolution images using existing/modified mathematically inspired super resolution schemes that are based on the emerging new paradigm of compressive sensing and non-adaptive dictionary based super resolution.” We first investigate and develop compressive sensing (CS) based sparse representation of a sample image to reconstruct a high-resolution image for face recognition, taking different approaches to constructing CS-compliant dictionaries, such as the Gaussian Random Matrix and the Toeplitz Circular Random Matrix. In particular, our focus is on constructing CS non-adaptive dictionaries (independent of face image information), in contrast with existing image-learnt dictionaries, that satisfy some form of the Restricted Isometry Property (RIP), which is sufficient to comply with the CS theorem regarding the recovery of sparsely represented images. We demonstrate that the CS dictionary techniques for resolution enhancement are able to support scalable face recognition schemes under uncontrolled conditions and at a distance. Secondly, we compare the strength of the sufficient CS property for the various types of dictionaries and demonstrate that the image-learnt dictionary falls far short of satisfying the RIP for compressive sensing. Thirdly, we propose dictionaries based on the high-frequency coefficients of the training set and investigate the impact of using these dictionaries on the space of feature vectors of the low-resolution image for face recognition when applied in the wavelet domain. Finally, we test the performance of the developed schemes on CCTV images with an unknown model of degradation, and show that they significantly outperform existing techniques developed for this challenging task. However, the performance is still not comparable to what can be achieved in a controlled environment, and we therefore identify the remaining challenges to be investigated in future work.
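Recovery of a sparsely represented signal from a CS-compliant dictionary can be illustrated with Orthogonal Matching Pursuit and a Gaussian random matrix, one of the non-adaptive dictionary types discussed above; such matrices satisfy the RIP with high probability. A hedged sketch (OMP is a standard CS recovery algorithm, not necessarily the one used in the thesis):

```python
import numpy as np

def omp(D, y, sparsity):
    """Orthogonal Matching Pursuit: greedily recover a `sparsity`-sparse
    coefficient vector x with y ≈ D @ x by repeatedly selecting the
    dictionary atom most correlated with the residual and re-fitting
    by least squares over the selected support."""
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x
```

In a super-resolution setting, `y` would be features of the low-resolution patch and the recovered sparse code would be applied to a paired high-resolution dictionary; the sketch above shows only the recovery step.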

Cloud broker based trust assessment of cloud service providers

Pawar, Pramod S. January 2015 (has links)
Cloud computing is emerging as the future Internet technology due to its advantages, such as the sharing of IT resources, unlimited scalability, flexibility and a high level of automation. Alongside this rapid growth, cloud computing also brings concerns about the security, trust and privacy of the applications and data hosted in the cloud environment. With a large number of cloud service providers available, determining which providers can be trusted for the efficient operation of a service deployed in the provider’s environment is a key requirement for service consumers. In this thesis, we provide an approach to assess the trustworthiness of cloud service providers. We propose a trust model that considers real-time cloud transactions to model the trustworthiness of cloud service providers. The trust model uses an uncertainty model in the representation of opinion. The trustworthiness of a cloud service provider is modelled using opinions obtained from three different computations, namely (i) compliance with SLA (Service Level Agreement) parameters, (ii) service provider satisfaction ratings and (iii) service provider behaviour. In addition, the trust model is extended to encompass the essential cloud characteristics, credibility for weighting feedback and filtering mechanisms to filter out dubious feedback providers. The credibility function and the early filtering mechanisms in the extended trust model are shown to assist in reducing the impact of malicious feedback providers.
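The opinion representation with explicit uncertainty that the abstract alludes to can be sketched in the style of subjective logic, where positive and negative evidence (for example, SLA compliance outcomes) map to belief, disbelief and uncertainty masses. A hedged illustration, not the thesis's exact formulation:

```python
def opinion_from_evidence(positive, negative, prior=0.5, W=2.0):
    """Map evidence counts to a (belief, disbelief, uncertainty) opinion
    and an expected trust value. W is the non-informative prior weight:
    uncertainty shrinks as evidence accumulates, and the base-rate
    `prior` dominates when evidence is scarce."""
    total = positive + negative + W
    b = positive / total          # belief mass
    d = negative / total          # disbelief mass
    u = W / total                 # uncertainty mass (b + d + u == 1)
    expected = b + prior * u      # expected trust value
    return b, d, u, expected
```

In a broker setting, one such opinion could be formed per source (SLA compliance, satisfaction ratings, observed behaviour) and the three opinions combined, with feedback-derived opinions first discounted by the credibility of their providers.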
