  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Computer simulations of protein folding

Williams, Haydn Wyn January 2011 (has links)
Computer simulations of biological systems provide novel data while both supporting and challenging traditional experimental methods. However, continued innovation is required to ensure that these technologies are able to work with increasingly complex systems. Coarse-grained approximations of protein structure have been studied using a lattice model designed to find low-energy conformations. A hydrogen-bonding term has been introduced. The ability to form β-sheet has been demonstrated, and the intricacies of reproducing the more complex α-helix on a lattice have been considered. An alternative strategy, that of better utilising computing power through the technique of milestoning, has shown good agreement with previous experimental and computational work. The increased efficiency allows significantly less extreme simulation conditions to be applied than those used in alternative simulation methods, and allows more simulation repeats. Finally, the principles of Least Action Dynamics have been employed to combine the two approaches described above. By splitting a simulation trajectory into a number of smaller components, and using the lattice model to optimise the path from a start structure to an end structure, it has been possible to efficiently generate dynamical information using an alternative method to traditional molecular dynamics.
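The coarse-grained lattice approach described above can be illustrated with a minimal sketch. The example below uses the well-known HP approximation on a 2D square lattice, where hydrophobic (H) residues score favourable non-bonded contacts; the thesis's actual model, including its hydrogen-bonding term, is more elaborate, so the function name and energy terms here are illustrative assumptions:

```python
def fold_energy(sequence, coords):
    """Energy of a lattice conformation: -1 per non-bonded H-H contact.

    sequence: string of 'H'/'P' residues; coords: list of (x, y) lattice
    sites forming a self-avoiding walk, one site per residue.
    """
    occupied = {c: i for i, c in enumerate(coords)}
    energy = 0
    for i, (x, y) in enumerate(coords):
        if sequence[i] != 'H':
            continue
        for nbr in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            j = occupied.get(nbr)
            # count each contact once (j > i + 1) and skip bonded chain
            # neighbours, which do not contribute to the contact energy
            if j is not None and j > i + 1 and sequence[j] == 'H':
                energy -= 1
    return energy

# A 2x2 square of four H residues has exactly one non-bonded H-H contact
print(fold_energy("HHHH", [(0, 0), (1, 0), (1, 1), (0, 1)]))  # -1
```

A low-energy search then amounts to exploring self-avoiding walks and keeping conformations that minimise this score.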

Analysing functional genomics data using novel ensemble, consensus and data fusion techniques

Glaab, Enrico January 2011 (has links)
Motivation: Rapid technological development in the biosciences and in computer science over the last decade has enabled the analysis of high-dimensional biological datasets on standard desktop computers. However, in spite of these technical advances, common properties of the new high-throughput experimental data, such as small sample sizes relative to the number of features, high noise levels and outliers, also pose novel challenges. Ensemble and consensus machine learning techniques and data integration methods can alleviate these issues, but often provide overly complex models which lack generalization capability and interpretability. The goal of this thesis was therefore to develop new approaches to combine algorithms and large-scale biological datasets, including novel approaches to integrate analysis types from different domains (e.g. statistics, topological network analysis, machine learning and text mining), to exploit their synergies in a manner that provides compact and interpretable models for inferring new biological knowledge. Main results: The main contributions of the doctoral project are new ensemble, consensus and cross-domain bioinformatics algorithms, and new analysis pipelines combining these techniques within a general framework. This framework is designed to enable the integrative analysis of both large-scale gene and protein expression data (including the tools ArrayMining, Top-scoring pathway pairs and RNAnalyze) and general gene and protein sets (including the tools TopoGSA, EnrichNet and PathExpand), by combining algorithms for different statistical learning tasks (feature selection, classification and clustering) in a modular fashion. Ensemble and consensus analysis techniques employed within the modules are redesigned such that the compactness and interpretability of the resulting models are optimized in addition to their predictive accuracy and robustness.
The framework was applied to real-world biomedical problems, with a focus on cancer biology, providing the following main results: (1) the identification of a novel tumour marker gene in collaboration with the Nottingham Queen's Medical Centre, facilitating the distinction between two clinically important breast cancer subtypes (framework tool: ArrayMining); (2) the prediction of novel candidate disease genes for Alzheimer's disease and pancreatic cancer using an integrative analysis of cellular pathway definitions and protein interaction data (framework tool: PathExpand; collaboration with the Spanish National Cancer Centre); (3) the prioritization of associations between disease-related processes and other cellular pathways using a new rule-based classification method integrating gene expression data and pathway definitions (framework tool: Top-scoring pathway pairs); and (4) the discovery of topological similarities between differentially expressed genes in cancers and cellular pathway definitions mapped to a molecular interaction network (framework tool: TopoGSA; collaboration with the Spanish National Cancer Centre). In summary, the framework combines the synergies of multiple cross-domain analysis techniques within a single easy-to-use software package and has provided new biological insights in a wide variety of practical settings.
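As an illustration of the consensus idea, here is a minimal majority-vote combiner over base classifier predictions. This is a generic sketch, not the actual ArrayMining or consensus algorithm from the thesis; the labels and base classifiers are hypothetical:

```python
from collections import Counter

def consensus_predict(predictions):
    """Majority-vote consensus across base classifiers.

    predictions: list of per-classifier label lists, one label per sample.
    Returns one consensus label per sample.
    """
    n_samples = len(predictions[0])
    consensus = []
    for i in range(n_samples):
        # tally the i-th prediction of every base classifier
        votes = Counter(clf[i] for clf in predictions)
        consensus.append(votes.most_common(1)[0][0])
    return consensus

# Three hypothetical base classifiers voting on four samples
base = [
    ["tumour", "normal", "tumour", "normal"],
    ["tumour", "tumour", "tumour", "normal"],
    ["normal", "normal", "tumour", "normal"],
]
print(consensus_predict(base))  # ['tumour', 'normal', 'tumour', 'normal']
```

Real consensus pipelines additionally weight or prune base models so the combined model stays compact and interpretable, which is the optimisation target described above.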

Dynamical systems techniques in the analysis of neural systems

Wedgwood, Kyle January 2013 (has links)
As we strive to understand the mechanisms underlying neural computation, mathematical models are increasingly being used as a counterpart to biological experimentation. Alongside building such models, there is a need for mathematical techniques to be developed to examine the often complex behaviour that can arise from even the simplest models. There are now a plethora of mathematical models to describe activity at the single neuron level, ranging from one-dimensional, phenomenological ones, to complex biophysical models with large numbers of state variables. Network models present even more of a challenge, as rich patterns of behaviour can arise due to the coupling alone. We first analyse a planar integrate-and-fire model in a piecewise-linear regime. We advocate using piecewise-linear models as caricatures of nonlinear models, because explicit solutions can be found in the former. Using these explicit solutions, we categorise the model in terms of its bifurcation structure, noting that the non-smooth dynamics involving the reset mechanism give rise to mathematically interesting behaviour. We highlight the pitfalls in using techniques for smooth dynamical systems in the study of non-smooth models, and show how these can be overcome using non-smooth analysis. Following this, we shift our focus onto the use of phase reduction techniques in the analysis of neural oscillators. We begin by presenting concrete examples showing where these techniques fail to capture the dynamics of the full system for both deterministic and stochastic forcing. To overcome these failures, we derive new coordinate systems which include some notion of distance from the underlying limit cycle. With these coordinates, we are able to capture the effect of phase space structures away from the limit cycle, and we go on to show how they can be used to explain complex behaviour in typical oscillatory neuron models.
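A one-dimensional leaky integrate-and-fire neuron illustrates the discontinuous reset mechanism responsible for the non-smooth dynamics discussed above. This is a simpler cousin of the planar piecewise-linear model analysed in the thesis; the parameter values are illustrative assumptions:

```python
import numpy as np

def lif_simulate(I, dt=0.1, tau=10.0, v_th=1.0, v_reset=0.0, t_end=100.0):
    """Euler simulation of a leaky integrate-and-fire neuron.

    Subthreshold dynamics are linear (tau * dv/dt = -v + I); crossing the
    threshold v_th triggers a spike and a hard reset to v_reset.
    """
    steps = int(t_end / dt)
    v = v_reset
    spikes, vs = [], []
    for k in range(steps):
        v += dt * (-v + I) / tau    # linear (piecewise-smooth) flow
        if v >= v_th:               # non-smooth reset: the source of the
            spikes.append(k * dt)   # mathematically interesting behaviour
            v = v_reset
        vs.append(v)
    return np.array(vs), spikes

vs, spikes = lif_simulate(I=1.5)
print(len(spikes) > 0)  # True: constant drive above threshold makes it fire
```

Because the flow is linear between resets, the trajectory between spikes can also be written down in closed form, which is exactly the property that makes piecewise-linear caricatures analytically tractable.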

Exploring new factors and the question of 'which' in user acceptance studies of healthcare software

Mohamadali, Noor Azizah K. S. January 2013 (has links)
User acceptance of technology is critical for the success of technology implementation in the health-care sector. Spending on the procurement of new technology is growing in the hope of improving patient care and providing better services to the public; it is therefore important that the technology is used to achieve its intended purpose. The success or failure of technology implementation depends upon user acceptance, as is evident from the growing number of evaluation studies, particularly on user acceptance of technology. While various models and frameworks have been developed to address factors associated with technology acceptance, they provide little understanding of the reasons for discrepancies in acceptance of the same system among different users. In response to this issue, this thesis proposes a theoretical model which suggests the role of 'fit' between user, technology and organization as an antecedent of user acceptance factors. This model was suggested based on a review of the literature and was empirically investigated on medical students' intention to use medically related software. The proposed model integrates three well-known existing models, namely the Unified Theory of Acceptance and Use of Technology (UTAUT), the DeLone and McLean IS Success Model and the Task-Technology Fit Model. The model is examined as a single model, which investigates (1) the effect of perceived fit between user, technology and organization on factors defined by UTAUT and the IS Success Model; (2) the effect of perceived fit between user, technology and organization on the management support and information security expectancy constructs; and (3) the effect of management support and information security expectancy on intention to use.
In particular, this thesis seeks to investigate the role of the 'fit' between user, technology and organization variable as an antecedent of performance expectancy, effort expectancy, social influence, facilitating conditions, software quality, service quality, information quality, management support and information security expectancy. It also investigates the role of the management support and information security expectancy constructs on intention to use, which, to the best of the researcher's knowledge, have not previously been investigated together within an integrated model such as that proposed in this thesis. Further, it presents and discusses empirical findings from 113 respondents, collected via Internet survey and drop-off approaches, examining students' intention to use medically related software using the Partial Least Squares (PLS) approach to Structural Equation Modeling (SEM). WarpPLS version 3.0 software was used to analyze the empirical data. The findings support the hypothesized relationships proposed in the theoretical model. Specifically, the results revealed that perceived user-technology-organization fit has a significant effect on all the factors defined in the model except social influence. The results also provide strong evidence of the relationships between the management support and information security expectancy constructs and the intention to use construct. This thesis contributes to theoretical and practical knowledge by providing, for the first time, evidence of the relationship between perceived user-technology-organization fit and the constructs defined by both UTAUT and the IS Success Model. Further, the relationships between perceived user-technology-organization fit and the management support and information security constructs are shown. Additionally, this thesis provides empirical support for the relationship between the management support and information security expectancy constructs and the intention to use construct.
The introduction and inclusion of organization fit alongside user and technology fit contributes to the body of knowledge in evaluation studies and provides a more complete model within user acceptance studies, helping to explain different levels of acceptance among users of the same system or technology. Further, this thesis investigates the applicability of multi-criteria decision analysis (MCDA) techniques to answer the question of 'which' in evaluation studies, particularly within user acceptance studies. Existing evaluation studies provide the means to answer the questions of what, why, who, when and how, but not explicitly the question of 'which'. Although various studies implicitly provide an answer, the question of 'which' (that is, which factor is the most critical or most influential) should be addressed explicitly in user acceptance studies. This thesis examined three widely used methods, namely classical AHP, the fuzzy AHP Chang's method and the fuzzy AHP α and λ method, to assign weights to the various factors and sub-factors of user acceptance. Acceptance factors for two different types of software were computed using each of these methods. The empirical data were collected from medical students for medically related software and from research students for research-related software. The approaches examined in this second part of the thesis are not intended to show which is the best method or technique to evaluate user acceptance, but rather to illustrate the various options available within MCDA approaches to derive weights among evaluation items and subsequently to answer the question of 'which' explicitly within user acceptance studies.
The results of assigning weights to factors and sub-factors using three different methods provide strong justification for the applicability of MCDA methods as decision support tools. The results show that these methods produced the same ranking of the factors which influence user acceptance (with slight variation using the fuzzy Chang's method on medical software acceptance). The inclusion of the 'which' question can enhance evaluation studies in health informatics research and their findings related to user acceptance of health-care technology.
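The classical AHP step of deriving weights from pairwise comparisons can be sketched as follows, using Saaty's principal-eigenvector method. The comparison values are hypothetical, and the fuzzy AHP variants discussed above require additional machinery not shown here:

```python
import numpy as np

def ahp_weights(pairwise):
    """Classical AHP: priority weights from a pairwise comparison matrix.

    Uses Saaty's eigenvector method: the weights are the (normalised)
    principal eigenvector of the reciprocal comparison matrix.
    """
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    principal = vecs[:, np.argmax(vals.real)].real
    return principal / principal.sum()  # normalise so weights sum to 1

# Hypothetical 3x3 comparison of acceptance factors: factor 1 is judged
# twice as important as factor 2 and four times as important as factor 3
A = [[1, 2, 4],
     [1/2, 1, 2],
     [1/4, 1/2, 1]]
print(ahp_weights(A).round(3))  # [0.571 0.286 0.143]
```

For a perfectly consistent matrix like this one, the principal eigenvalue equals the matrix dimension; in practice the gap between them yields the consistency ratio used to vet expert judgements.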

Novel guidelines for the analysis of single nucleotide polymorphisms in disease association studies

Fiaschi, Linda January 2011 (has links)
How genetic mutations such as Single Nucleotide Polymorphisms (SNPs) affect the risk of contracting a specific disease is still an open question for numerous medical conditions. Two problems related to SNP analysis are (i) the selection of computational techniques to discover possible single and multiple SNP associations; and (ii) the size of the latest datasets, which may contain millions of SNPs. In order to find associations between SNPs and diseases, two popular techniques are investigated and enhanced. Firstly, the 'Transmission Disequilibrium Test' for family-based analysis is considered. The fixed length of haplotypes provided by this approach represents a possible limit to the quality of the obtained results. For this reason, an adaptation is proposed to select the minimum number of SNPs that are responsible for disease predisposition. Secondly, decision tree algorithms for case-control analysis of unrelated individuals are considered. The application of a single tool may lead to limited analysis of the genetic association with a specific condition. Thus, a novel consensus approach is proposed, exploiting the strengths of three different algorithms: ADTree, C4.5 and ID3. The results obtained suggest the new approach achieves improved performance. The recent explosive growth in the size of SNP databases has highlighted limitations in current techniques. An example is 'Linkage Disequilibrium', which identifies redundancy between multiple SNPs. Despite the high accuracies obtained by this method, it exhibits poor scalability for large datasets, which severely impacts its performance. Therefore, a new fast, scalable tool based on 'Linkage Disequilibrium' is developed to reduce dataset size through the measurement and elimination of redundancy between the SNPs in the initial dataset. Experimental evidence validates the potential for improved performance of the new method.
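The redundancy that linkage disequilibrium captures can be sketched with the standard r² statistic between two biallelic SNPs. This is a generic illustration, not the thesis's scalable tool; the 0/1 allele coding and the phased-haplotype assumption are simplifications:

```python
def ld_r_squared(haplotypes):
    """r^2 linkage disequilibrium between two biallelic SNPs.

    haplotypes: list of (a, b) pairs with alleles coded 0/1, assumed to be
    phased haplotypes. r^2 near 1 means the SNPs are largely redundant.
    """
    n = len(haplotypes)
    p_a = sum(a for a, _ in haplotypes) / n    # freq of allele 1 at SNP A
    p_b = sum(b for _, b in haplotypes) / n    # freq of allele 1 at SNP B
    p_ab = sum(1 for a, b in haplotypes if a == 1 and b == 1) / n
    d = p_ab - p_a * p_b                       # disequilibrium coefficient D
    denom = p_a * (1 - p_a) * p_b * (1 - p_b)
    return d * d / denom if denom else 0.0

# Perfectly correlated SNPs are fully redundant: r^2 == 1
print(ld_r_squared([(0, 0), (1, 1), (0, 0), (1, 1)]))  # 1.0
```

A size-reduction tool of the kind described would compute such pairwise scores and drop one SNP from each highly correlated pair before downstream association analysis.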

The use of learning styles in adaptive hypermedia

Brown, Elizabeth January 2007 (has links)
Computer-based learning has become a common phenomenon in the modern age. Many distance-learning systems distribute educational resources on the Internet and indeed entire study programmes are now widely available online. Such a large amount of content and information can be intimidating to learners, who may exhibit different individual characteristics, such as variation in goals, interests, motivation and/or learning preferences. This suggests that a uniform approach taken by learning environments to deliver materials and resources to students is not appropriate, and that personalisation of such materials/resources should address users' differences to provide a customised learning experience, thus enhancing its effectiveness, lowering drop-out rates and maintaining high student motivation. This thesis addresses the latter issue of learning preferences, specifically investigating learning styles as an adaptation mechanism for personalised computer-based learning. A number of previous studies indicated the positive effect that this kind of adaptation provides, but under closer examination these were not conducted in a scientifically rigorous manner and thus their findings are somewhat limited. This research utilises a quantitative and highly objective approach to investigate visual/verbal and sequential/global learning styles in different user groups. Three user trials were carried out to discover whether there were any benefits to using these learning styles for studying in an adapted environment. Overall, no statistically significant benefits were found and these findings now cast doubt on whether learning styles are indeed an effective mechanism for personalised learning.

A tool for using the control of character animation to help teach children communication skills

Ying, Liangzhong January 2012 (has links)
Effective communication is an integral part of everyday life, but recent studies show that in the UK many children fail to acquire this essential skill. The traditional approach to teaching communication skills is still important in school, but new developments and the increasing availability of technology in the classroom offer the potential for new ways to approach this teaching. A number of research institutions in the UK, for example the British Film Institute, are investigating how to use media such as films and television in the classroom to enhance children's learning. Making cartoon films is a potentially valuable teaching approach, but the lack of a suitable software tool to support this aspiration limits its viability. Existing software tools do reference the learning of communication skills as one of their features, but they do not treat it as a major learning objective. The aim of this thesis is to investigate three hypotheses based around the concept of using cartoon animation as a tool to enhance the learning of communication. The first of these is that film production software which gives children control of character animation design may significantly stimulate their interest in exploring how to express their feelings. The second is that a correctly designed tool will integrate well into classroom teaching to produce effective learning, and finally that the same tool can be used to extend learning of these skills outside the classroom. As a result, a software tool has been developed to help children engage with the features of character animation while learning how to express their feelings through storytelling in films. By using the tool, children experience the major steps of character animation design in filmmaking, and in this process each step is specially designed to meet their needs and encourage them to express emotions. Experiments were carried out both in the classroom and out of school.
The results indicated that older primary children engaged significantly in exploring emotional expression through the virtual characters, and further analysis revealed that children's engagement was associated with age, social adjustment and computer experience. Moreover, using tool-assisted teaching in the classroom brought some positive effects that do not occur with conventional teaching. In the out-of-school testing, around half of the children responded positively to parental guidance, and some of them (including their parents) engaged significantly in exploring emotional expression.

Hyper-heuristics for grouping problems

Elhag, Anas January 2015 (has links)
Grouping problems are hard-to-solve combinatorial optimization problems which require partitioning objects into a minimum number of subsets while another objective is simultaneously optimized. Considerable research effort has recently been directed towards automated, problem-independent, reusable heuristic search methodologies such as hyper-heuristics, which operate on a space formed by a set of low-level heuristics rather than directly on solutions. Hyper-heuristics are commonly split into two main categories: selection hyper-heuristics, which are the focus of the work presented in this thesis, and generation hyper-heuristics. Most recently proposed selection hyper-heuristics are iterative and make use of two key methods which are employed successively: heuristic selection and move acceptance. At each step, a new solution is produced after a selected heuristic is applied to the solution at hand, and then the move acceptance method is used to decide whether the resultant solution replaces the current one or not. This thesis presents a novel generic single-point-based selection hyper-heuristic search framework, referred to as the grouping hyper-heuristic framework. The proposed framework deals with one solution at any given decision point during the search process and embeds a fixed set of reusable standard low-level heuristics specifically designed for grouping problems. The use of standard heuristics enables the re-usability of the whole framework across different grouping problem domains with less development effort. The proposed grouping hyper-heuristic framework is based on a bi-objective formulation of any given grouping problem. Inspired by multi-objective optimization, a set of high-quality solutions is maintained during the search process, capturing the trade-off between the number of groups and the additional objective for the given grouping problem.
Moreover, the grouping framework includes a special two-phased acceptance mechanism that uses the traditional move acceptance method only to make a preliminary decision on whether to consider the new solution for acceptance. The performance of different selection hyper-heuristics combining different components, implemented based on the proposed framework, is investigated on a range of sample grouping problem domains, including graph coloring, exam timetabling and data clustering. Additionally, the selection hyper-heuristics performing best on each domain are compared to previously proposed problem-specific algorithms from the scientific literature. The empirical results show that the grouping hyper-heuristics built on the proposed framework are not only sufficiently general, but also able to obtain high-quality solutions competitive with some previously proposed approaches. The selection hyper-heuristic employing the 'reinforcement learning' heuristic selection method and embedding the 'iteration limited threshold accepting' move acceptance method performs best overall across these grouping problem domains.
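The selection/acceptance loop common to single-point selection hyper-heuristics can be sketched as below. For brevity it uses simple random heuristic selection and improving-or-equal acceptance rather than the reinforcement-learning selection and two-phase acceptance described above; the toy domain and its heuristics are hypothetical:

```python
import random

def selection_hyper_heuristic(initial, heuristics, objective,
                              iterations=1000, seed=0):
    """Single-point selection hyper-heuristic skeleton.

    At each step: select a low-level heuristic, apply it to the current
    solution, then accept the result if it is no worse (move acceptance).
    """
    rng = random.Random(seed)
    current = initial
    current_cost = objective(current)
    for _ in range(iterations):
        h = rng.choice(heuristics)        # heuristic selection
        candidate = h(current, rng)
        cost = objective(candidate)
        if cost <= current_cost:          # move acceptance
            current, current_cost = candidate, cost
    return current, current_cost

# Toy domain: minimise the sum of a list, with two perturbation heuristics
def decrement_one(sol, rng):
    i = rng.randrange(len(sol))
    return sol[:i] + [max(0, sol[i] - 1)] + sol[i + 1:]

def shuffle(sol, rng):
    s = sol[:]
    rng.shuffle(s)
    return s

best, cost = selection_hyper_heuristic([3, 5, 2], [decrement_one, shuffle], sum)
print(cost)  # 0
```

The same skeleton generalises: swapping in smarter selection (e.g. reinforcement learning) or a two-phase acceptance test changes only the two marked lines, which is precisely what makes such frameworks reusable across domains.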

Multi-stage hyper-heuristics for optimisation problems

Kheiri, Ahmed January 2014 (has links)
There is growing interest in self-configuring/tuning automated general-purpose reusable heuristic approaches for combinatorial optimisation, such as hyper-heuristics. Hyper-heuristics are search methodologies which explore the space of heuristics, rather than of solutions, to solve a broad range of hard computational problems without requiring expert intervention. There are two common types of hyper-heuristics in the literature: selection and generation methodologies. This work focuses on the former. Almost all selection hyper-heuristics perform a single-point-based iterative search over the space of heuristics by selecting and applying a suitable heuristic to the solution in hand at each decision point. The newly generated solution is then either accepted or rejected using an acceptance method. This improvement process is repeated, starting from an initial solution, until a set of termination criteria is satisfied. The number of studies on the design of hyper-heuristic methodologies has been increasing rapidly, and we already have a variety of approaches, each with its own strengths and weaknesses. It has been observed that different hyper-heuristics perform differently on a given subset of problem instances and, more importantly, that a hyper-heuristic performs differently as the set of low-level heuristics varies. This thesis introduces a general "multi-stage" hyper-heuristic framework enabling the use and exploitation of multiple selection hyper-heuristics at different stages of the search process. The goal is to design an approach that utilises multiple hyper-heuristics for a more effective and efficient overall performance compared to that of each constituent selection hyper-heuristic. The level of generality that a hyper-heuristic can achieve has always been of interest to hyper-heuristic researchers.
Hence, a variety of multi-stage hyper-heuristics based on the framework are not only applied to the real-world combinatorial optimisation problems of high school timetabling, multi-mode resource-constrained multi-project scheduling and the construction of magic squares, but are also tested on the well-known hyper-heuristic benchmark of CHeSC 2011. The empirical results show that the multi-stage hyper-heuristics designed based on the proposed framework are inherently general, easy to implement, adaptive and reusable. They can be extremely effective solvers, as their success in the ITC 2011 and MISTA 2013 competitions shows. Moreover, a particular multi-stage hyper-heuristic outperformed the state-of-the-art selection hyper-heuristic from CHeSC 2011.

Large scale retinal modeling for the design of new generation retinal prostheses

Tran, Trung Kien January 2015 (has links)
With the help of modern technology, blindness caused by retinal diseases such as age-related macular degeneration or retinitis pigmentosa is now considered reversible. Scientists from various fields such as Neuroscience, Electrical Engineering, Computer Science, and Bioscience have been collaborating to design and develop retinal prostheses, with the aim of replacing malfunctioning parts of the retina and restoring vision in the blind. Human trials conducted to test retinal prostheses have yielded encouraging results, showing the potential of this approach in vision recovery. However, a retinal prosthesis has several limitations with regard to its hardware and biological functions, and several attempts have been made to overcome these limitations. This thesis focuses on the biological aspects of retinal prostheses: the biological processes occurring inside the retina and the limitations of retinal prostheses corresponding to those processes have been analysed. Based on these analyses, three major findings regarding information processing inside the retina have been presented and these findings have been used to conceptualise retinal prostheses that have the characteristics of asymmetrical and separate pathway stimulations. In the future, when nanotechnology gains more popularity and is completely integrated inside the prosthesis, this concept can be utilized to restore useful visual information such as colour, depth, and contrast to achieve high-quality vision in the blind.
