711

Declarative CAD feature recognition : an efficient approach

Niu, Zhibin January 2015 (has links)
Feature recognition aids CAD model simplification in engineering analysis and machining path planning in manufacturing. In the domain of CAD model simplification, classic feature recognition approaches face two challenges: 1) insufficient performance; 2) engineering features are diverse, and no system can hard-code all possible features in advance. A declarative approach allows engineers to specify new features without having to design algorithms to find them. However, naive translation of declarations leads to executable algorithms with high time complexity. Inspired by relational database management systems (RDBMS), I propose that if a feature declaration can be turned into an SQL query that is then processed by a database engine interfaced to a CAD modeler, the engine's optimizations can be utilized for “free”. Testbeds were built to verify the idea. Initially, I devised a straightforward translator to turn feature declarations into queries. Experiments on SQLite show it gives quasi-quadratic performance for common features. The testbed was then extended with a new translator and PostgreSQL. In the updated version, I have made a significant breakthrough: my approach is the first to achieve linear time performance with respect to model size for common features, and acceptable times for real industrial models. The testbeds show that PostgreSQL uses hash joins to reduce the search space, enabling fast feature finding. I have further improved the performance by: (i) lazy evaluation, which reduces the workload on the CAD modeler, and (ii) predicate ordering, which reorders the query plan by taking into account the time needed to compute various geometric operations. Experimental results are presented to validate their benefits.
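As an illustration of the declarative idea described above, the sketch below encodes a toy boundary-representation model in relational tables and expresses a "small cylindrical hole" feature as a single SQL join query. The table names, columns and the feature rule are assumptions made for this example, not the thesis's actual schema or translator output; the point is only that, once a declaration becomes SQL, the database engine's own planner (indexes, hash joins) carries out the search.

```python
# Illustrative sketch only: a hypothetical relational encoding of B-rep data
# and a feature declaration expressed as a SQL join. Table names, columns and
# the "small cylindrical hole" rule are assumptions, not the thesis's schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE faces(face_id INTEGER PRIMARY KEY, surf_type TEXT, radius REAL);
CREATE TABLE adjacent(face_a INTEGER, face_b INTEGER, convexity TEXT);
""")
conn.executemany("INSERT INTO faces VALUES (?,?,?)",
                 [(1, "cylinder", 2.0), (2, "plane", None), (3, "plane", None)])
conn.executemany("INSERT INTO adjacent VALUES (?,?,?)",
                 [(1, 2, "concave"), (1, 3, "concave")])

# Declarative "small through-hole" feature: a cylindrical face joined to two
# planar faces by concave edges.  The query planner does the pruning.
query = """
SELECT c.face_id
FROM faces AS c
JOIN adjacent AS a1 ON a1.face_a = c.face_id
JOIN faces    AS p1 ON p1.face_id = a1.face_b AND p1.surf_type = 'plane'
JOIN adjacent AS a2 ON a2.face_a = c.face_id AND a2.face_b <> p1.face_id
JOIN faces    AS p2 ON p2.face_id = a2.face_b AND p2.surf_type = 'plane'
WHERE c.surf_type = 'cylinder' AND c.radius < 5.0
  AND a1.convexity = 'concave' AND a2.convexity = 'concave'
GROUP BY c.face_id
"""
print(conn.execute(query).fetchall())   # -> [(1,)]
```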
712

Text mining patient experiences from online health communities

Greenwood, Mark January 2015 (has links)
Social media has had an impact on how patients experience healthcare. Through online channels, patients are sharing information and their experiences with potentially large audiences all over the world. While sharing in this way may offer immediate benefits to themselves and their readership (e.g. other patients), these unprompted, self-authored accounts of illness are also an important resource for healthcare researchers. They offer unprecedented insight into understanding patients' experience of illness. Qualitative analysis has been undertaken to explore this source of data and utilise the information expressed through these media. However, the manual nature of the analysis means that its scope is limited to a small proportion of the hundreds of thousands of authors who are creating content. In our research, we aim to explore the use of text mining to support traditional qualitative analysis of this data. Text mining uses a number of processes to extract useful facts from text and analyse patterns within it; the ultimate aim is to generate new knowledge by analysing textual data en masse. We developed QuTiP, a text mining framework that enables large-scale qualitative analyses of patient narratives shared over social media. In this thesis, we describe QuTiP and our application of the framework to analyse the accounts of patients living with chronic lung disease. As well as a qualitative analysis, we describe our approaches to automated information extraction, term recognition and text classification in order to automatically extract relevant information from blog post data. Within the QuTiP framework, these individual automated approaches can be brought together to support further analyses of large social media datasets.
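As a rough illustration of the kind of text-classification component mentioned above, the sketch below trains a TF-IDF plus linear-classifier baseline on a handful of invented example posts. The categories, posts and model choice are placeholders for illustration only and do not reproduce QuTiP's actual components, features or categories.

```python
# Minimal sketch of a text-classification step of the kind described above,
# using scikit-learn.  The example posts and labels are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "My breathlessness was much worse after the chest infection last week",
    "The new inhaler routine the nurse suggested seems to be helping",
    "Does anyone know how long the clinic referral usually takes?",
    "Feeling anxious about the upcoming lung function test results",
]
labels = ["symptoms", "treatment", "healthcare-system", "emotional"]

# TF-IDF features + a linear classifier: a common baseline for labelling
# patient narratives with qualitative-analysis categories.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(posts, labels)
print(clf.predict(["Short of breath again today and quite worried"]))
```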
713

Policy-based asset sharing in collaborative environments

Parizas, Christos January 2015 (has links)
Resource sharing is an important but complex problem. It is exacerbated in a dynamic coalition context by multi-partner constraints (imposed by security, privacy and general operational issues) placed on the resources. Take for example scenarios such as emergency response operations, corporate collaborative environments, or even short-lived opportunistic networks, where multi-party teams are formed, utilizing and sharing their own resources in order to support collective endeavors which would otherwise be difficult, if not impossible, for a single party to achieve. Policy-Based Management Systems (PBMS) have been proposed as a suitable paradigm to reduce this complexity and provide a means for effective resource sharing. The overarching problem that this thesis deals with is the development of PBMS techniques and technologies that allow users operating in collaborative environments to share their assets through high-level policies in a dynamic and transparent way. To do so, it focuses on three sub-problems, each related to a different aspect of a PBMS, and makes three key contributions. The first is a novel model that proposes an alternative way of sharing assets, better suited than traditional approaches to collaborative and dynamic environments. Existing asset-sharing approaches need extra overhead to comply with situational changes because the decision-making centre (and therefore the policy-making centre) is far from where the changes take place, unlike the event-driven approach proposed in this thesis. The second contribution is an efficient, high-level policy conflict analysis mechanism that provides a more transparent (in terms of user interaction) way of maintaining a conflict-free PBMS. Its discrete, sequential execution breaks the analysis process into distinct steps, making conflict analysis more efficient than existing approaches, while making it easier for human policy authors to track the whole process through an interface close to natural language. The third contribution is an interest-based policy negotiation mechanism for enhancing asset sharing while promoting collaboration in coalition environments. The enabling technology for the last two contributions (contributions 2 and 3) is a controlled natural language representation, which is used to define a policy language. To evaluate the proposed ideas, we run simulation experiments for the first and third contributions, and both simulation and formal analysis for the second.
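For illustration, the sketch below shows one very simple form of policy conflict analysis: detecting modality conflicts, where one policy permits and another denies an overlapping subject/asset/action combination. The rule structure, wildcard semantics and example policies are assumptions made for this sketch; the thesis's controlled-natural-language policy representation and sequential analysis mechanism are not reproduced here.

```python
# Illustrative sketch only: detecting a simple modality conflict (permit vs.
# deny over an overlapping subject/asset/action) between high-level sharing
# policies.  The rule structure is an assumption for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    subject: str   # who the policy applies to, e.g. a coalition partner
    asset: str     # the shared resource, e.g. "uav-video-feed"
    action: str    # e.g. "read"
    effect: str    # "permit" or "deny"

def overlaps(a: str, b: str) -> bool:
    """'*' acts as a wildcard; otherwise values must match exactly."""
    return a == "*" or b == "*" or a == b

def modality_conflicts(policies):
    """Return pairs of policies that permit and deny the same situation."""
    found = []
    for i, p in enumerate(policies):
        for q in policies[i + 1:]:
            if (p.effect != q.effect
                    and overlaps(p.subject, q.subject)
                    and overlaps(p.asset, q.asset)
                    and overlaps(p.action, q.action)):
                found.append((p, q))
    return found

rules = [
    Policy("partner-A", "uav-video-feed", "read", "permit"),
    Policy("*",         "uav-video-feed", "read", "deny"),
]
for p, q in modality_conflicts(rules):
    print("conflict:", p, "<->", q)
```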
714

Automated development of clinical prediction models using genetic programming

Bannister, Christian January 2015 (has links)
Genetic programming is an Evolutionary Computing technique, inspired by biological evolution, capable of discovering complex non-linear patterns in large datasets. Genetic programming is a general methodology whose specific implementation requires the development of several elements such as problem representation, fitness, selection and genetic variation. Despite the potential advantages of genetic programming over standard statistical methods, its applications to survival analysis are at best rare, primarily because of the difficulty in handling censored data. The aim of this work was to develop a genetic programming approach for survival analysis and demonstrate its utility for the automatic development of clinical prediction models, using cardiovascular disease as a case study. We developed a tree-based, untyped, steady-state genetic programming approach for censored longitudinal data, comparing its performance to the de facto statistical method, Cox regression, in the development of clinical prediction models for predicting future cardiovascular events in patients with symptomatic and asymptomatic cardiovascular disease, using large observational datasets. We also used genetic programming to examine the prognostic significance of different risk factors, together with their non-linear combinations, for the prognosis of health outcomes in cardiovascular disease. These experiments showed that Cox regression and the developed steady-state genetic programming approach produced similar results when evaluated in common validation datasets. Despite slight relative differences, both approaches demonstrated an acceptable level of discrimination and calibration at a range of time points. Whilst the application of genetic programming did not provide more accurate representations of the factors that predict the risk of both symptomatic and asymptomatic cardiovascular disease when compared with existing methods, it did offer comparable performance. Despite generally comparable performance, albeit in slight favour of the Cox model, the predictors selected to represent relationships with the outcome were quite different and, on average, the models developed using genetic programming used considerably fewer predictors. The genetic programming results confirm the prognostic significance of a small number of the predictors most highly associated in the Cox modelling: age, previous atherosclerosis, and albumin for secondary prevention; age, a recorded diagnosis of 'other' cardiovascular disease, and ethnicity for primary prevention in patients with type 2 diabetes. When considered as a whole, genetic programming did not produce better-performing clinical prediction models; rather, it used fewer predictors, most of which were the predictors that Cox regression estimated to be most strongly associated with the outcome, whilst achieving comparable performance. This suggests that genetic programming may better represent the potentially non-linear relationships of (a smaller subset of) the strongest predictors. To our knowledge, this work is the first study to develop a genetic programming approach for censored longitudinal data and assess its value for clinical prediction in comparison with the well-known and widely applied Cox regression technique. Using empirical data, this work has demonstrated that clinical prediction models developed by steady-state genetic programming have predictive ability comparable to those developed using Cox regression.
The genetic programming models were more complex and thus more difficult for domain experts to validate; however, these models were developed in an automated fashion, using fewer input variables, without the need for the domain-specific knowledge and expertise required to appropriately perform survival analysis. This work has demonstrated the strong potential of genetic programming as a methodology for the automated development of clinical prediction models for diagnostic and prognostic purposes in the presence of censored data. It compared untuned genetic programming models that were developed in an automated fashion with highly tuned Cox regression models that were developed in a very involved manner requiring a certain amount of clinical and statistical expertise. Whilst the highly tuned Cox regression models performed slightly better in validation data, the performance of the automatically generated genetic programming models was generally comparable. This comparable performance demonstrates the utility of genetic programming for clinical prediction modelling and prognostic research, where the primary goal is accurate prediction. In aetiological research, where the primary goal is to examine the relative strength of association between risk factors and the outcome, Cox regression and its variants remain the de facto approach.
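For readers unfamiliar with discrimination measures for censored data, the sketch below computes Harrell's concordance index (C-index), the kind of statistic typically used to compare the discriminative ability of survival models such as the Cox and genetic programming models discussed above. It is an illustrative implementation with made-up data, not the evaluation code used in the thesis.

```python
# Minimal sketch of Harrell's concordance index (C-index) for right-censored
# data: the proportion of comparable patient pairs in which the patient with
# the earlier event also has the higher predicted risk.
def concordance_index(times, events, risk_scores):
    """times: follow-up times; events: 1 = event, 0 = censored;
    risk_scores: higher score = higher predicted risk."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if subject i had the event before time j.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable if comparable else float("nan")

# Made-up follow-up data for five patients.
times = [5.0, 6.0, 8.0, 9.0, 12.0]
events = [1, 0, 1, 1, 0]
risks = [0.9, 0.3, 0.7, 0.5, 0.1]
print(round(concordance_index(times, events, risks), 3))
```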
715

Multi-GNSS signals acquisition techniques for software defined receivers

Albu-Rghaif, Ali January 2015 (has links)
Any commercially viable wireless solution onboard Smartphones should resolve the technical issues while preserving the limited resources available, such as processing and battery. Therefore, integrating or combining the processing of more than one function will free up much-needed resources that can then be reused to enhance these functions further. This thesis details my innovative solutions that integrate multi-GNSS signals of specific civilian transmissions from the GPS, Galileo and GLONASS systems and process them in a single RF front-end channel (detection and acquisition), ideal for a GNSS software receiver onboard Smartphones. During the course of my PhD study, the focus of my work was on improving the reception and processing of localisation techniques based on signals from multi-satellite systems. I have published seven papers on new acquisition solutions for single and multi-GNSS signals based on the bandpass sampling and compressive sensing techniques. These solutions, when applied onboard Smartphones, shall not only enhance the performance of the GNSS localisation solution but also reduce the implementation complexity (size and processing requirements) and thus save valuable processing time and battery energy. Firstly, my research exploited the bandpass sampling technique, which is a good candidate for processing multiple signals at the same time. This portion of the work produced three methods. The first method is designed to detect the presence of the GPS, Galileo and GLONASS-CDMA signals at an early stage, before the acquisition process, to avoid wasting processing resources that are normally spent chasing signals that are not present. The second focuses on overcoming the ambiguity when acquiring the Galileo-OS signal at a code phase resolution of 0.5 chip or higher; this is achieved by multiplying the received signal with the generated sub-carrier frequency, which avoids a complete correlation-chain processing pass compared to conventional methods. The third method simplifies the joint implementation of Galileo-OS data-pilot signal acquisition by constructing an orthogonal signal so that both components can be acquired in a single correlation chain, yet offering the same performance as using two correlation chains. Secondly, the compressive sensing technique is used to acquire multi-GNSS signals to reduce computational complexity compared with correlator-based methods, such as the Matched Filter, while still maintaining acquisition integrity. As a result of this research, four implementation methods were produced to handle single or multi-GNSS signals. The first of these methods is designed to dynamically change the number and size of the required channels/correlators according to the received GPS signal power during the acquisition process. This adaptive solution offers better fix capability when the GPS receiver is located in a harsh signal environment, and saves valuable processing/decoding time when the receiver is outdoors. The second method enhances the sensing process of the compressive sensing framework by using a deterministic orthogonal waveform such as the Hadamard matrix, which enables the signal to be sampled at the information band and reconstructed without information loss.
This experience with compressive sensing led to further reductions in computational complexity and memory requirements in the third method, which decomposes the dictionary matrix (representing a bank of correlators), saving more than 80% of the signal acquisition processing without losing the integration between code and frequency, irrespective of signal strength. The decomposition is realised by removing the generated Doppler shifts from the dictionary matrix, while keeping the carrier frequency fixed for all the generated shifted satellite codes. This decomposed dictionary implementation enables other GNSS signals to be combined with the GPS signal without large overhead if the two, or more, signals are folded or down-converted to the same intermediate frequency. The fourth method therefore implements, for the first time, a novel compressive sensing software receiver that acquires both GPS and Galileo signals simultaneously. The performance of this method is as good as that of a Matched Filter implementation, yet it achieves a saving of 50% in processing time and produces a fine Doppler-shift frequency estimate with a resolution within 10 Hz. Our experimental results, based on actual captured RF signals and other simulation environments, have shown that all seven implementation methods produced by this thesis save valuable battery energy and processing resources onboard Smartphones.
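As background to the bandpass sampling work described above, the sketch below computes the classical alias-free undersampling rate intervals for a band-limited signal occupying [f_L, f_H], i.e. rates satisfying 2*f_H/n <= fs <= 2*f_L/(n-1). The band edges used in the example are round numbers placed near the GPS L1 carrier purely for illustration and are not the thesis's actual front-end parameters.

```python
# Illustrative sketch of the classical bandpass (under)sampling constraint:
# for a band [f_L, f_H], the alias-free rates satisfy
#   2*f_H/n <= fs <= 2*f_L/(n-1),  n = 1 .. floor(f_H / (f_H - f_L)).
import math

def valid_bandpass_rates(f_low_hz, f_high_hz):
    """Return the list of (fs_min, fs_max) intervals of alias-free rates."""
    bandwidth = f_high_hz - f_low_hz
    intervals = []
    for n in range(1, math.floor(f_high_hz / bandwidth) + 1):
        fs_min = 2.0 * f_high_hz / n
        fs_max = 2.0 * f_low_hz / (n - 1) if n > 1 else float("inf")
        if fs_min <= fs_max:
            intervals.append((fs_min, fs_max))
    return intervals

# Example: a 4 MHz band near the GPS L1 carrier (illustrative numbers only).
# Show the three lowest alias-free intervals, near the 2*bandwidth minimum.
for fs_min, fs_max in valid_bandpass_rates(1573.42e6, 1577.42e6)[-3:]:
    print(f"{fs_min/1e6:.4f} MHz  to  {fs_max/1e6:.4f} MHz")
```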
716

Human gait recognition under neutral and non-neutral gait sequences

Sabir, Azhin Tahir January 2015 (has links)
Rapid advances in biometrics technology make its use for establishing a person's identity more acceptable in a variety of applications, especially in areas of interest in security and surveillance. The upsurge in terrorist attacks in the past few years has focused research on biometric systems that have the ability to identify individuals from a distance, and this is spearheading research interest in the gait biometric, which is unobtrusive and less dependent on high image/video quality. Gait is a behavioral trait used to identify individuals from image sequences based on their walking style. The growing list of possible civil as well as security applications is paralleled by the emergence of a variety of research challenges in dealing with the various external and internal factors influencing the performance of Gait Recognition (GR) in unconstrained recording conditions. This thesis is concerned with Gait Recognition in unconstrained scenarios and aims to address research questions covering (1) the selection of sets of features for a gait signature; (2) the effects of gender and/or recording condition case (neutral, carrying a bag, wearing a coat) on the performance of GR schemes; (3) integrating gender and/or case classification into GR; and (4) the role of the emerging Kinect sensor technology, with its capability of sensing human skeletal features, in GR and applications. Accordingly, our objectives focus on investigating, developing and testing the performance of a variety of gait sequence features for the various components/tasks and their integration. Our tests are based on a large number of experiments on the CASIA B database as well as an in-house database of Kinect sensor recordings. In all experiments, we use different dimension reduction and feature selection methods to reduce the dimensions of the proposed feature vectors, such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Fisher Score, followed by different classification methods such as k-nearest-neighbour (k-NN), Support Vector Machine (SVM), Naive Bayes and the linear discriminant classifier (LDC), to test the performance of the proposed methods. The initial part is focused on reviewing existing background removal methods for indoor and outdoor scenarios and developing more efficient versions, primarily by adapting the work to the wavelet domain rather than the traditional spatial-domain schemes. These include motion detection by frame differencing and Mixture of Gaussians, the latter being more reliable for outdoor scenarios. Subsequently, we investigated a variety of features that can be extracted from various subbands of wavelet-decomposed frames of different body parts (partitioned according to the golden ratio). We gradually built sets of features, together with their fused combinations, that can be categorized as a hybrid of model-based and motion-based models. The first list of features, developed to deal with Neutral Gait Recognition (NGR), includes: the Spatio-Temporal Model (STM), the Legs Motion Detection feature (LMD), and the statistical model of the approximation LL-wavelet subband images (AWM). We shall demonstrate that fusing these features achieves an accuracy of 97%, which is comparable to the state of the art. These features will be shown to achieve 96% accuracy in gender classification (GC), and we shall establish that the NGR2 scheme, which integrates GC into NGR, improves the accuracy by a noticeable percentage.
Testing the performance of these NGR schemes in recognising non-neutral cases revealed the challenges of Unrestricted Gait Recognition (UGR). The second part of the thesis is focused on developing UGR schemes. For this, a new statistical wavelet feature set extracted from the high-frequency subbands, called the Detail coefficients Wavelet Model (DWM), was first added to the previous list. Different combinations of these schemes will be shown to significantly improve the performance for non-neutral gait cases, though to a lesser extent in the coat-wearing case. We then develop a Gait Sequence Case Detection (GSCD) scheme, which has excellent performance. We will show that integrating GSCD and GC together into UGR improves the performance for all cases. We shall also investigate a different UGS scheme that generalizes existing work on Gait Energy and Gait Entropy image (GEI and GEnI) features, but in the wavelet domain and in different body parts. Testing these two schemes, and their fusion, after PCA dimension reduction yields much improved accuracy for the non-neutral cases compared to the existing GEI and GEnI schemes, but they are significantly outperformed by the last scheme. However, by fusing the UGS scheme with the GSCD+GC+UGR scheme above we obtain the best accuracy, outperforming the state of the art in GR, especially in the non-neutral cases. The thesis ends with a rather limited investigation of the use of the Kinect sensor for GR. We develop two sets of features, Horizontal Distance Features and Vertical Distance Features, from a small set of skeleton point trajectories. The experimental results on neutral gait were very successful, but for unrestricted gait recognition (with the 5 case variations) the performance was satisfactory rather than optimal and relies on the gallery including a balanced number of samples from all cases.
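To make the classification stage described above concrete, the sketch below runs PCA-based dimension reduction followed by k-NN and SVM classifiers, in the style of the experimental pipeline listed, but on randomly generated stand-in vectors rather than the actual wavelet-domain gait features; the feature dimensionality and number of subjects are illustrative, not those of the CASIA B experiments.

```python
# Minimal sketch of the dimension-reduction + classification stage (PCA
# followed by k-NN and SVM) using random stand-in vectors in place of the
# wavelet-domain gait features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, seqs_per_subject, n_features = 20, 6, 512
X = rng.normal(size=(n_subjects * seqs_per_subject, n_features))
y = np.repeat(np.arange(n_subjects), seqs_per_subject)
# Inject a per-subject offset so the toy data carries an identity signal.
X += np.repeat(rng.normal(scale=2.0, size=(n_subjects, n_features)),
               seqs_per_subject, axis=0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
for name, clf in [("PCA+kNN", make_pipeline(PCA(n_components=30),
                                            KNeighborsClassifier(n_neighbors=3))),
                  ("PCA+SVM", make_pipeline(PCA(n_components=30),
                                            SVC(kernel="linear")))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", round(clf.score(X_te, y_te), 3))
```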
717

The utility of analogy in systems sciences

Robinson, Sionade Ann January 1990 (has links)
The structure of the thesis reflects the three main areas of investigation: the legitimacy of analogy as a systems concept, the derivation of a model of analogy for systems thinkers, and the description of a framework for practice. In the first section we are concerned with establishing an appreciation and understanding of the potential utility of the concept of analogy for systems thinkers. Having briefly surveyed the history of analogy in systems thinking and acknowledged the current methodological interest in metaphor, we note that our interest in analogy has been a target for our critics and has led to a loss of credibility. The thesis calls for a re-evaluation of this situation, and we hence describe a systems thinker's view of science as the grounds on which the utility of analogy is normally dismissed. The first three chapters show that the basis on which science attacks analogy as invalid and inappropriate is itself contentious, and that identified 'weaknesses' in the scientific framework can become strengths in the re-conceptualisation of a model of analogy. We consider and distinguish the dynamic relationships between analogy, model and metaphor. In the second section, having established the potential value of analogy as a concept, the thesis develops an explanation of how a model of analogy for systems thinkers can be conceptualised. In developing the model we consider the particular implications of three types of analogy ('positive', 'negative' and 'neutral') and discuss the suggestion that they reveal possibilities for exploring different and contrasting rationalities; these issues are discussed by looking at the relationship between analogy and rationality and, in this context, the validity of the argument from analogy. In the final section the thesis asserts that systems thinking should not shy away from the explicit use of analogy and shows how the framework of analogy can be used to reconceptualise systems concepts.
718

Investigation of a hierarchical context-aware architecture for rule-based customisation of mobile computing service

Meng, Zhaozong January 2014 (has links)
The continuous technical progress in mobile device built-in modules and embedded sensing techniques creates opportunities for context-aware mobile applications. The context-aware computing paradigm exploits the relevant context as implicit input to characterise the user and physical environment and to provide a computing service customised to the contextual situation. However, heterogeneity of techniques, the complexity of contextual situations, and the gap between raw sensor data and usable context keep these techniques from true integration for extensive use. Studies in this area mainly focus on feasibility demonstrations of the emerging techniques and lack general architectural support and an appropriate service customisation strategy. This investigation aims to provide a general system architecture and technical approaches to deal with the heterogeneity problem and efficiently utilise the dynamic context towards a proactive computing service that is customised to the contextual situation. The main efforts of this investigation are the approaches to gathering, handling, and utilising dynamic context information in an efficient way, and the decision making and optimisation methods for computing service customisation. In brief, the highlights of this thesis cover the following aspects: (1) a hierarchical context-aware computing architecture supporting interoperable distribution and further use of context; (2) an in-depth analysis and classification of context and the corresponding context acquisition methods; (3) context modelling and context data representation for efficient and interoperable use of context; (4) a rule-based service customisation strategy with a rule generation mechanism to supervise the service customisation. In addition, a feasibility demonstration of the proposed system and justification of the contributions of this investigation are conducted through case studies and prototype implementations. One case study uses mobile built-in sensing techniques to improve the usability and efficiency of mobile applications constrained by resource limitations, and the other employs the mobile terminal and embedded sensing techniques to predict users' expectations for automatic control of home facilities. The results demonstrate the feasibility of the proposed context handling architecture and service customisation methods. They show great potential for employing the context of the computing environment for context-aware adaptation in pervasive and mobile applications, but also indicate some underlying problems for further study.
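As a rough illustration of rule-based service customisation, the sketch below matches high-level context attributes against condition/action rules to select adaptations. The attribute names, rules and actions are invented for this example and do not reproduce the thesis's rule language or rule generation mechanism.

```python
# Illustrative sketch of rule-based service customisation: context attributes
# are matched against condition/action rules to pick adaptation actions.
RULES = [
    ({"battery": "low", "network": "cellular"}, "disable_background_sync"),
    ({"location": "home", "time_of_day": "evening"}, "dim_lights_and_preheat"),
    ({"motion": "driving"}, "switch_to_voice_interface"),
]

def customise(context: dict) -> list:
    """Return the actions whose rule conditions are all satisfied by context."""
    actions = []
    for conditions, action in RULES:
        if all(context.get(k) == v for k, v in conditions.items()):
            actions.append(action)
    return actions

# Context as it might be derived from built-in sensors and device state.
print(customise({"battery": "low", "network": "cellular", "motion": "still"}))
print(customise({"location": "home", "time_of_day": "evening"}))
```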
719

An investigation of the digital sublime in video game production

Betts, Thomas January 2014 (has links)
This research project examines how video games can be programmed to generate the sense of the digital sublime. The digital sublime is a term proposed by this research to describe experiences where the combination of code and art produces games that appear boundless and autonomous. The definition of this term is arrived at by building on texts such as the work of Kant, Deleuze and Wark, and on video games such as Proteus, Minecraft and Love. The research is based on my investigative practice as an artist-programmer and demonstrates how games can be produced to encourage digitally sublime scenarios. In the three games developed for this thesis I employ computer code as an artistic medium to generate games that explore permutational complexity and present experiences that walk the margins between confusion and control. The structure of this thesis begins with a reading of the Kantian sublime, which I introduce as the foundation for my definition of the digital sublime. I then combine this reading with elements of contemporary philosophy and computational theory to establish a definition applicable to the medium of digital games. This definition is used to guide my art practice in the development of three games that examine different aspects of the digital sublime, such as autonomy, abstraction, complexity and permutation. The production of these games is at the core of my research methodology, and their development and analysis are used to produce contributions in the following areas. 1. New models for artist-led game design. This includes methods that re-contextualise existing aesthetic forms such as futurism, synaesthesia and romantic landscape through game design and coding. It also presents techniques that merge visuals and mechanics into a format developed for artistic and philosophical enquiry. 2. The development of new procedural and generative techniques in the programming of video games. This includes the implementation of a real-time marching cubes algorithm that generates fractal-noise-filtered terrain, as well as a versatile three-dimensional space-packing architectural construction algorithm. 3. A new reading of the digital sublime. This reading draws from the Kantian sublime and the writings of Deleuze, Wark and De Landa in order to present an understanding of the digital sublime specific to the domain of art practice within video games. These contributions are evidenced in the writing of this thesis and in the construction of the associated portfolio of games.
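As a loose, offline illustration of the generative techniques mentioned in contribution 2, the sketch below builds a fractal (value-noise) density field and extracts a terrain-like isosurface with scikit-image's marching cubes. It is not the thesis's real-time implementation, and every parameter here is an arbitrary choice for demonstration.

```python
# Illustrative, offline sketch: a fractal (value-noise) density field
# thresholded into a terrain-like mesh via marching cubes (scikit-image).
import numpy as np
from scipy.ndimage import zoom
from skimage.measure import marching_cubes

def fractal_noise(shape=(48, 48, 48), octaves=4, seed=0):
    """Sum of upsampled random grids at doubling frequencies (value-noise fBm)."""
    rng = np.random.default_rng(seed)
    field = np.zeros(shape)
    for o in range(octaves):
        res = 4 * 2 ** o                      # coarse grid resolution per octave
        coarse = rng.normal(size=(res, res, res))
        field += zoom(coarse, [s / res for s in shape], order=1) / 2 ** o
    return field

density = fractal_noise()
# Bias the field with height so the isosurface reads as a landscape.
height = np.linspace(-1.5, 1.5, density.shape[2])[None, None, :]
density -= height
verts, faces, normals, values = marching_cubes(density, level=0.0)
print(f"terrain mesh: {len(verts)} vertices, {len(faces)} triangles")
```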
720

Reservoir Computing with high non-linear separation and long-term memory for time-series data analysis

Butcher, John B. January 2012 (has links)
Left unchecked, the degradation of reinforced concrete can result in the weakening of a structure and lead to both hazardous and costly problems throughout the built environment. In some cases failure to recognise the problem and apply appropriate remedies has already resulted in fatalities. The problem increases with the age of a structure and has consequently become more pressing throughout the latter half of the 20th century. It is therefore of paramount importance to assess and repair these structures using an accurate and cost-effective approach. ElectroMagnetic Anomaly Detection (EMAD) is one such approach, but its analysis is currently performed visually, which is undesirable. Reservoir Computing (RC), a relatively new Recurrent Artificial Neural Network (RANN) approach which overcomes problems that have prohibited the widespread use of RANNs, is investigated here. This research aimed to automate the detection of defects within reinforced concrete using RC while gaining further insight into fundamental properties of an RC architecture when applied to real-world time-series datasets. As a product of these studies a novel RC architecture, the Reservoir with Random Static Projections (R2SP), has been developed. R2SP helps to address what this research shows to be an antagonistic trade-off between a standard RC architecture's ability to transform its input data into a highly non-linear state space and its ability to retain a short-term memory of previous inputs. The R2SP architecture provided a significant improvement in performance for each dataset investigated when compared to a standard RC approach, as a result of overcoming the aforementioned trade-off. The implementation of an R2SP architecture is now planned for incorporation in a new version of the EMAD data collection apparatus, to give fast or near-real-time information about areas of potential problems in real-world concrete structures.
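The sketch below is one possible reading of the R2SP idea in compact NumPy form: an echo-state-style reservoir provides short-term memory while an additional random static (memoryless) projection of the input adds non-linear separation, and a ridge-regression readout is trained on the combined state. The hyperparameters and the toy delayed-recall task are illustrative assumptions, not the configuration or datasets used in the thesis.

```python
# Compact NumPy sketch of an echo-state reservoir augmented with a random
# static projection of the input, trained with a ridge-regression readout.
import numpy as np

rng = np.random.default_rng(1)
n_res, n_static, T, delay = 200, 100, 2000, 5

u = rng.uniform(-1, 1, T)                 # input signal
target = np.roll(u, delay)                # task: recall the input `delay` steps ago

W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1
W_static = rng.uniform(-1, 1, (n_static, 1))

states = np.zeros((T, n_res + n_static))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W @ x + W_in[:, 0] * u[t])        # recurrent reservoir state
    s = np.tanh(W_static[:, 0] * u[t])            # static (memoryless) projection
    states[t] = np.concatenate([x, s])

# Ridge-regression readout trained on the combined state (skip a washout).
washout, lam = 100, 1e-6
S, y = states[washout:], target[washout:]
W_out = np.linalg.solve(S.T @ S + lam * np.eye(S.shape[1]), S.T @ y)
pred = S @ W_out
print("NRMSE:", np.sqrt(np.mean((pred - y) ** 2)) / np.std(y))
```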
