
Automated design of energy functions for protein structure prediction by means of genetic programming and improved structure similarity assessment

Widera, Paweł January 2010
The process of protein structure prediction is a crucial part of understanding the function of the building blocks of life. It is based on the approximation of a protein free energy that is used to guide the search through the space of protein structures towards the thermodynamic equilibrium of the native state. A function that gives a good approximation of the protein free energy should be able to estimate the structural distance of the evaluated candidate structure to the protein native state. This correlation between the energy and the similarity to the native state is the key to high-quality predictions. State-of-the-art protein structure prediction methods use very simple techniques to design such energy functions. The individual components of the energy functions are created by human experts using statistical analysis of common structural patterns that occur in the known native structures. The energy function itself is then defined as a simple weighted sum of these components. Exact values of the weights are set by maximising the correlation between the energy and the similarity to the native state, measured by the root mean square deviation between coordinates of the protein backbone. In this dissertation I argue that this process is oversimplified and could be improved on at least two levels. Firstly, a more complex functional combination of the energy components might reflect the similarity more accurately and thus improve the prediction quality. Secondly, a more robust similarity measure that combines different notions of protein structural similarity might provide a much more realistic baseline for the energy function optimisation. To test these two hypotheses I have proposed a novel approach to the design of energy functions for protein structure prediction, using a genetic programming algorithm to evolve the energy functions and a structural similarity consensus to provide a reference similarity measure. The best evolved energy functions were found to reflect the similarity to the native state better than the optimised weighted sum of terms, thereby opening an interesting new area of research for machine learning techniques.
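As an illustration of the baseline design the thesis argues against, here is a minimal sketch (in Python, with invented term names, weights and data) of a weighted-sum energy function and the correlation criterion used to tune its weights:

```python
import numpy as np

def weighted_sum_energy(terms, weights):
    """Baseline energy: a simple weighted sum of per-structure energy terms."""
    return np.asarray(terms) @ np.asarray(weights)

# Hypothetical data: rows = candidate structures, columns = energy terms
# (e.g. hydrogen bonding, solvation, torsion); plus each candidate's
# backbone RMSD to the native structure.
rng = np.random.default_rng(0)
terms = rng.normal(size=(100, 3))
rmsd_to_native = rng.uniform(1.0, 12.0, size=100)

weights = np.array([0.5, 1.2, 0.8])  # example weights, normally optimised
energies = weighted_sum_energy(terms, weights)

# The design criterion: maximise the correlation between the energy and
# the structural distance to the native state (here plain Pearson r).
r = np.corrcoef(energies, rmsd_to_native)[0, 1]
print(f"energy-vs-RMSD correlation: {r:.3f}")
```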

Selection of simulation variance reduction techniques through a fuzzy expert system

Adewunmi, Adrian January 2010
In this thesis, the design and development of a decision support system for the selection of a variance reduction technique for discrete event simulation studies is presented. In addition, the performance of variance reduction techniques in stand-alone and combined application has been investigated. The aim of this research is to mimic the process of human decision making through an expert system, and to handle the ambiguity associated with representing human expert knowledge through fuzzy logic. The result is a fuzzy expert system which was subjected to three different validation tests, the main objective being to establish the reasonableness of the system's output. Although these validation tests are among the most widely accepted tests for fuzzy expert systems, the overall results were not in agreement with expectations. In addition, results from the stand-alone and combined application of variance reduction techniques demonstrated that stand-alone applications performed better at reducing variance than combined applications in more instances. The design and development of a fuzzy expert system as an advisory tool to aid simulation users constitutes a significant contribution to the selection of variance reduction technique(s) for discrete event simulation studies. This is novel because it demonstrates the practicalities involved in the design and development process, which can be applied to similar decision-making problems by discrete event simulation researchers and practitioners using their own knowledge and experience. In addition, the application of a fuzzy expert system to this particular discrete event simulation problem demonstrates the flexibility and usability of an alternative to the existing algorithmic approach. Under current experimental conditions, a new specific class of systems, the Crossdocking Distribution System, has been identified for which the application of variance reduction techniques, i.e. Antithetic Variates and Control Variates, is beneficial for variance reduction.
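As background, a minimal sketch of one of the variance reduction techniques named above, Antithetic Variates, on a toy Monte Carlo estimate (the response function and sample sizes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

def response(u):
    # Hypothetical simulation response as a monotone function of a uniform draw.
    return np.exp(u)

# Crude Monte Carlo estimate of E[response(U)], U ~ Uniform(0, 1).
u = rng.uniform(size=n)
crude = response(u)

# Antithetic Variates: pair each draw u with 1 - u; the negative correlation
# between response(u) and response(1 - u) reduces the estimator's variance.
u_half = rng.uniform(size=n // 2)
antithetic = 0.5 * (response(u_half) + response(1.0 - u_half))

print(f"crude MC:   mean {crude.mean():.4f}, est. var {crude.var() / n:.2e}")
print(f"antithetic: mean {antithetic.mean():.4f}, "
      f"est. var {antithetic.var() / (n // 2):.2e}")
```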

A study of evolutionary multiobjective algorithms and their application to knapsack and nurse scheduling problems

Le, Khoi Nguyen January 2011
Evolutionary algorithms (EAs) based on the concept of Pareto dominance seem to be the most suitable technique for multiobjective optimisation. In multiobjective optimisation, several (usually conflicting) criteria need to be taken into consideration simultaneously to assess the quality of a solution. Instead of finding a single solution, a set of trade-off or compromise solutions that represents a good approximation to the Pareto optimal set is often required. This thesis presents an investigation of evolutionary algorithms within the framework of multiobjective optimisation, addressing a number of key issues in evolutionary multiobjective optimisation. A new evolutionary multiobjective (EMO) algorithm is also proposed. Firstly, this new EMO algorithm is applied to solve the multiple 0/1 knapsack problem (a well-known benchmark multiobjective combinatorial optimisation problem), producing competitive results when compared to other state-of-the-art EMO algorithms. Secondly, this thesis investigates the application of general EMO algorithms to solve real-world nurse scheduling problems. One of the challenges in solving real-world nurse scheduling problems is that they are highly constrained, and problem-specific heuristics are normally required to handle these constraints. These heuristics have considerable influence on the search, which could override the effect that general EMO algorithms have in the solution process when applied to this type of problem. This thesis outlines a proposal for a general approach to modelling nurse scheduling problems without the requirement of problem-specific heuristics, so that general EMO algorithms can be applied. This would also help to assess the problems and the performance of general EMO algorithms more fairly.
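For reference, the Pareto dominance relation at the core of such algorithms can be stated in a few lines (a sketch for minimisation problems; the objective vectors are invented):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(solutions):
    """Return the subset of solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical two-objective values (both minimised).
front = non_dominated([(1, 5), (2, 3), (3, 4), (4, 1), (2, 2)])
print(front)  # [(1, 5), (2, 2), (4, 1)]
```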

Infobiotics: computer-aided synthetic systems biology

Blakes, Jonathan January 2013
Until very recently, Systems Biology has, despite its stated goals, been too reductive in terms of the models being constructed, and the methods used have been, on the one hand, unsuited for large-scale adoption or integration of knowledge across scales, and on the other hand, too fragmented. The thesis of this dissertation is that better computational languages and seamlessly integrated tools are required by systems and synthetic biologists to enable them to meet the significant challenges involved in understanding life as it is, and, by designing, modelling and manufacturing novel organisms, to understand life as it could be. We call this goal, where everything necessary to conduct model-driven investigations of cellular circuitry and emergent effects in populations of cells is available without significant context-switching, “one-pot” in silico synthetic systems biology, in analogy to “one-pot” chemistry and “one-pot” biology. Our strategy is to increase the understandability and reusability of models and experiments, thereby avoiding unnecessary duplication of effort, with practical gains in the efficiency of delivering usable prototype models and systems. Key to this endeavour are graphical interfaces that assist novice users by hiding the complexity of the underlying tools and limiting choices to only what is appropriate and useful, thus ensuring that the results of in silico experiments are consistent, comparable and reproducible. This dissertation describes the conception, software engineering and use of two novel software platforms for systems and synthetic biology: the Infobiotics Workbench for modelling, in silico experimentation and analysis of multi-cellular biological systems; and DNA Library Designer, with the DNALD language, for the compact programmatic specification of combinatorial DNA libraries as the first stage of a DNA synthesis pipeline, enabling methodical exploration of biological problem spaces. Infobiotics models are formalised as Lattice Population P systems, a novel framework for the specification of spatially-discrete and multi-compartmental rule-based models, imbued with a stochastic execution semantics. This framework was developed to meet the needs of real systems biology problems: hormone transport and signalling in the root of Arabidopsis thaliana, and quorum sensing in the pathogenic bacterium Pseudomonas aeruginosa. Our tools have also been used to prototype a novel synthetic biological system for pattern formation, which has been successfully implemented in vitro. Taken together, these novel software platforms provide a complete toolchain, from design to wet-lab implementation, of synthetic biological circuits, enabling a step change in the scale of biological investigations that is orders of magnitude greater than could previously be performed in one in silico “pot”.
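The abstract does not spell out the stochastic execution semantics; as a rough illustration of the general idea, a Gillespie-style stochastic simulation of rule-based dynamics in a single compartment might look like this (species, rules and rate constants are invented):

```python
import random

# Rules as (reactants, products, rate constant); all names illustrative.
rules = [
    ({"signal": 1}, {"signal": 1, "reporter": 1}, 0.5),  # production
    ({"reporter": 1}, {}, 0.1),                          # degradation
]

state = {"signal": 10, "reporter": 0}
t, t_end = 0.0, 50.0

while t < t_end:
    # Propensity of each rule: rate constant scaled by reactant counts.
    props = []
    for reactants, _, k in rules:
        a = k
        for species, count in reactants.items():
            a *= state.get(species, 0) ** count
        props.append(a)
    total = sum(props)
    if total == 0:
        break
    t += random.expovariate(total)   # exponential time to the next event
    r = random.uniform(0, total)     # choose a rule proportionally to propensity
    for (reactants, products, _), a in zip(rules, props):
        if r < a:
            for s, c in reactants.items():
                state[s] -= c
            for s, c in products.items():
                state[s] = state.get(s, 0) + c
            break
        r -= a

print(f"t = {t:.1f}, state = {state}")
```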

Supporting the information systems requirements of distributed healthcare teams

Skilton, Alysia January 2011
The adoption of a patient-centric approach to healthcare delivery in the National Health Service (NHS) in the UK has led to changing requirements for information systems supporting the work of health and care practitioners. In particular, the patient-centric approach emphasises teamwork and cross-boundary coordination and collaboration. Although a great deal of both time and money has been invested in modernising healthcare information systems, they do not yet meet the requirements of patient-centric work. Current proposals for meeting these needs focus on providing cross-boundary information access in the form of an integrated Electronic Patient Record (EPR). This research considers the requirements that are likely to remain unmet after an integrated EPR is in place and how to meet these. Because the patient-centric approach emphasises teamwork, a conceptual model which uses care team meta-data to track and manage team members and professional roles is proposed as a means to meet this broader range of requirements. The model is supported by a proof of concept prototype which leverages team information to provide tailored information access, targeted notifications and alerts, and patient and team management functionality. Although some concerns were raised regarding implementation, the proposal was met with enthusiasm by both clinicians and developers during evaluation. However, the area of need is broad and there is still a great deal of work to be done if this work is to be taken forward.
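A minimal sketch of the team-metadata idea, with invented roles and record sections, showing how care-team membership and professional role could drive tailored information access:

```python
from dataclasses import dataclass

@dataclass
class TeamMember:
    name: str
    role: str  # professional role, e.g. "GP", "district nurse"

ROLE_VIEWS = {
    # Hypothetical mapping from role to the record sections it may see.
    "GP": {"history", "medications", "test_results"},
    "district nurse": {"medications", "care_plan"},
    "social worker": {"care_plan"},
}

def accessible_sections(member: TeamMember, team: list) -> set:
    """Grant access only to sections appropriate to the member's role,
    and only while they are actually on the patient's care team."""
    if member not in team:
        return set()
    return ROLE_VIEWS.get(member.role, set())

team = [TeamMember("A. Smith", "GP"), TeamMember("B. Jones", "district nurse")]
print(accessible_sections(team[1], team))  # e.g. {'medications', 'care_plan'}
```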

Addressing the reactiveness problem in sensor networks using rich task representation

Borowiecki, Konrad January 2011
Sensor networks are increasingly important in many domains, for example, environmental monitoring, emergency response, and military operations. There is great interest in making these networks more flexible, so they can be more easily deployed to meet the needs of new tasks. The research problem is the lack of reactiveness of a system utilising a sensor network in a dynamic real-time domain, where the state of sensors and tasks might change many times (e.g. due to a sensor malfunction, or a change in task requirements or priorities). In such domains (e.g. firefighting or the military) we want to minimise the time spent manually configuring the sensor network, as any delay dramatically endangers the outcome of a task, or its effects might be unacceptable, e.g. the loss of a human life. The current way of deploying sensors in the problem context involves four consecutive steps: Direction, Collection, Processing and Dissemination (DCPD). These steps form a cycle, called the DCPD loop. Automating this loop as much as possible would be a big step towards solving the reactiveness problem. Service-Oriented Sensor Networks (SOSN) allow sensors to be discovered, accessed, and combined with other information-processing services, thus enabling efficient sensor exploitation. They are only a partial solution to the problem, as they do not employ explicit representations of a user's information-requiring tasks. Therefore, a machine-processable expression of a user's task (task representation, TR), allowing automation of the DCPD steps, is needed. We showed that no current TR can completely automate the loop, but that a hybrid of current TRs (called an HTR) can be created that automates the loop more fully than the individual TRs. Our literature review revealed four TRs. Using the identified TRs, we formed three high-level designs of task representations. None of them covered the loop completely; thus, by enriching one of the built HTRs with the missing concepts, we finally obtained one that covers the DCPD loop fully. We tested the four hybrids in a simulation run for four scenarios with distinctive likelihoods of change of task and platform states. This showed that significant benefits are gained just by reusing existing technologies and that the reactiveness problem can be effectively tackled by this approach, which was particularly visible in the emergency response scenario, characterised by low task and high platform changeability.

Environmentally adaptive noise estimation for active sonar

Bareš, Robert January 2012
Noise is frequently encountered when processing data from the natural environment, and is of particular concern for remote-sensing applications where the accuracy of the data gathered is limited by the noise present. Rather than merely accepting that sonar noise results in unavoidable error in active sonar systems, this research explores various methodologies to reduce the detrimental effect of noise. Our approach is to analyse the statistics of sonar noise in trial data, collected by a long-range active sonar system in a shallow water environment, and apply this knowledge to target detection. Our detectors are evaluated against simulated targets in simulated noise, simulated targets embedded in noise-only trial data, and trial data containing real targets. First, we demonstrate that the Weibull and K-distributions offer good models of sonar noise in a cluttered environment, and that the K-distribution achieves the greatest accuracy in the tail of the distribution. We demonstrate the limitations of the Kolmogorov-Smirnov goodness-of-fit test in the context of detection by thresholding, and investigate the upper-tail Anderson-Darling test for goodness-of-fit analysis. The upper-tail Anderson-Darling test is shown to be more suitable in the context of detection by thresholding, as it is sensitive to the far-right tail of the distribution, which is of particular interest for detection at low false alarm rates. We have also produced tables of critical values for K-distributed data evaluated by the upper-tail Anderson-Darling test. Having established suitable models for sonar noise, we develop a number of detection statistics. These are based on the box-car detector, and on the generalized likelihood ratio test (GLRT) with a Rician target model. Our performance analysis shows that both types of detector benefit from the use of the noise model provided by the K-distribution. We also demonstrate that, for weak signals, our GLRT detectors are able to achieve a greater probability of detection than the box-car detectors. The GLRT detectors are also easily extended to use more than one sample in a single test, an approach that we show increases the probability of detection when processing simulated targets. A fundamental difficulty in estimating model parameters is the small sample size. Many of the pings in our trial data overlap, covering the same region of the sea. It is therefore possible to make use of samples from multiple pings of a region, increasing the sample size. For static targets, the GLRT detector is easily extended to multi-ping processing, but this is not as easy for moving targets. We derive a new method of combining noise estimates over multiple pings. This calculation can be applied to either static or moving targets, and is also shown to be useful for generating clutter maps. We then perform a brief performance analysis on trial data containing real targets, where we show that in order to perform well, the GLRT detector requires a more accurate model of the target than the Rician distribution is able to provide. Despite this, we show that both GLRT and box-car detectors, when using the K-distribution as a noise model, can achieve a small improvement in the probability of detection by combining estimates of the noise parameters over multiple pings.
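As a sketch of the noise-modelling step, the following fits a Weibull distribution to synthetic noise-only samples and sets a detection threshold for a chosen false alarm rate from the fitted tail (SciPy has no K-distribution, so the Weibull model stands in here; all numbers are illustrative):

```python
import numpy as np
from scipy import stats

# Synthetic noise-only intensity samples (Weibull-distributed by construction).
rng = np.random.default_rng(2)
noise = stats.weibull_min.rvs(1.4, scale=1.0, size=5000, random_state=rng)

# Fit the noise model, fixing the location at zero for intensity data.
shape, loc, scale = stats.weibull_min.fit(noise, floc=0.0)

# Detection by thresholding: choose the threshold so that noise alone
# exceeds it with the desired probability of false alarm.
pfa = 1e-3
threshold = stats.weibull_min.isf(pfa, shape, loc=loc, scale=scale)

detections = noise > threshold
print(f"threshold = {threshold:.3f}, empirical PFA = {detections.mean():.4f}")
```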

Investigation of data dissemination techniques for opportunistic networks

Lenando, Halikul January 2012
An opportunistic network is an infrastructure-less peer-to-peer network, created between devices that are mobile and wireless-enabled. The links between devices are dynamic and often short-lived. Therefore, disseminating data from a source to recipients with a quality-of-service guarantee and efficiency is a very challenging problem. Furthermore, the interactions between devices are based on opportunity and are dependent on the devices' mobility, which has extremely diverse patterns. The aim of this thesis is to investigate dissemination of data in opportunistic networks. In particular, two conflicting objectives are studied: minimising the overhead costs and maximising the information coverage over time. We also take into account the effects of mobility. Extensive computer simulation is developed to explore models for information dissemination and mobility. On top of existing mobility models (i.e. Random Walk, Random Waypoint and Gauss Markov) a hybrid model is derived from the Random Waypoint and Gauss Markov mobility models. The effect of the mobility model on dissemination performance is found to be highly significant, based on sensitivity analysis on mobility and node density. We first consider different baseline push techniques for data dissemination. We propose four different push techniques, namely Pure Push, Greedy, L-Push and Spray and Relay, to analyse the impact of different push techniques on information dissemination performance. The results present different trade-offs between the objectives. As a strategy to manage overheads, we consider controlling which nodes information is pushed to by establishing a social network between devices. A logical social network can be built between mobile devices if they repeatedly see each other, and can be defined in different ways. This is important because it shows how content may potentially flow to devices. We explore the effects of mobility for different definitions of the social network. This shows how different local criteria for defining links in a social network lead to different social structures. Finally, we consider the effect of combining the social structure and intelligent push techniques to further improve data dissemination performance in opportunistic networks. We discover that prioritising pushing over a social network is able to minimise the overhead costs, but it introduces a dissemination delay.
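A toy simulation of the Pure Push baseline illustrates the coverage/overhead trade-off (the contact process here is purely random, standing in for the mobility models named above; all parameters are invented):

```python
import random

# Pure Push: whenever two nodes meet and exactly one of them holds the
# message, it pushes the message to the other.
N, STEPS, CONTACTS_PER_STEP = 50, 200, 5
informed = {0}  # node 0 is the source
pushes = 0

random.seed(3)
coverage = []
for t in range(STEPS):
    for _ in range(CONTACTS_PER_STEP):
        a, b = random.sample(range(N), 2)       # hypothetical random contact
        if (a in informed) != (b in informed):  # exactly one side informed
            informed.update((a, b))
            pushes += 1
    coverage.append(len(informed) / N)

print(f"coverage = {coverage[-1]:.2f} after {STEPS} steps, "
      f"overhead = {pushes} pushes")
```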

Interoperability between heterogeneous and distributed biodiversity data sources in structured data networks

Sundaravadivelu, Rathinasabapathy January 2010
The extensive capturing of biodiversity data and storing them in heterogeneous information systems that are accessible on the internet across the globe has created many interoperability problems. One is that data providers are independent of one another and run systems which were developed on different platforms at different times, using different software products, to respond to different information needs. A second arises from the data modelling used to convert real-world data into a computerised data structure, which is not conditioned by a universal standard. Most importantly, interoperation between these disparate data sources is needed to obtain accurate and useful information for further analysis and decision making. The software representation of a universal or single data definition structure for depicting a biodiversity entity would be ideal, but this is not necessarily possible when integrating data from independently developed systems. The different perspectives on the real-world entity when being modelled by independent teams will result in the use of different terminologies, and different definitions and representations of attributes and operations for the same real-world entity. The research in this thesis is concerned with designing and developing an interoperable, flexible framework that allows data integration between various distributed and heterogeneous biodiversity data sources that adopt XML standards for data communication. In particular, the problems of scope and representational heterogeneity among the various XML data schemas are addressed. To demonstrate this research, a prototype system called BUFFIE (Biodiversity Users' Flexible Framework for Interoperability Experiments) was designed using a hybrid of object-oriented and functional design principles. This system accepts the query information from the user in a web form, and designs an XML query. This request query is enriched and made more specific to data providers using the data provider information stored in a repository. These requests are sent to the different heterogeneous data resources across the internet using the HTTP protocol. The responses received are in varied XML formats, which are integrated using knowledge-mapping rules defined in XSLT and XML. The XML mappings are derived from a biodiversity domain knowledge base defined for schema mappings of different data exchange protocols. The integrated results are presented to users or client programs for further analysis. The main results of this thesis are: (1) a framework model that allows interoperation between the heterogeneous data source systems; (2) enriched querying that improves the accuracy of responses by finding the correct information existing among autonomous, distributed and heterogeneous data resources; (3) a methodology that provides a foundation for extensibility, as any new network data standards in XML can be added to the existing protocols. The presented approach shows that (1) semi-automated mapping and integration of datasets from heterogeneous and autonomous data providers is feasible, and (2) query enriching and integrating the data allows the querying and harvesting of useful data from various data providers for helpful analysis.
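As a sketch of the integration step, a provider response in one XML format can be mapped onto a common schema with an XSLT stylesheet, in the spirit of BUFFIE's knowledge-mapping rules (the element names and sample data are invented):

```python
from lxml import etree

# A hypothetical mapping rule: rewrite one provider's response schema
# into a common record format.
xslt = etree.XML(b"""\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/providerResponse">
    <records>
      <xsl:for-each select="specimen">
        <record>
          <scientificName><xsl:value-of select="taxon"/></scientificName>
          <locality><xsl:value-of select="site"/></locality>
        </record>
      </xsl:for-each>
    </records>
  </xsl:template>
</xsl:stylesheet>""")

# A hypothetical provider response in its own native format.
response = etree.XML(
    b"<providerResponse><specimen><taxon>Puma concolor</taxon>"
    b"<site>Patagonia</site></specimen></providerResponse>")

transform = etree.XSLT(xslt)
print(str(transform(response)))  # the response mapped to the common schema
```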

Investigating heuristic and meta-heuristic algorithms for solving pickup and delivery problems

Hosny, Manar Ibrahim January 2010
The development of effective decision support tools that can be adopted in the transportation industry is vital in the world we live in today, since it can lead to substantial cost reduction and efficient resource consumption. Solving the Vehicle Routing Problem (VRP) and its related variants is at the heart of scientific research for optimising logistic planning. One important variant of the VRP is the Pickup and Delivery Problem (PDP). In the PDP, it is generally required to find one or more minimum-cost routes to serve a number of customers, where two types of service may be performed at a customer location, a pickup or a delivery. Applications of the PDP are frequently encountered in everyday transportation and logistic services, and the problem is likely to assume even greater prominence in the future, due to the increase in e-commerce and Internet shopping. In this research we consider two particular variants of the PDP: the Pickup and Delivery Problem with Time Windows (PDPTW), and the One-commodity Pickup and Delivery Problem (1-PDP). In both problems, the total transportation cost should be minimised without violating a number of pre-specified problem constraints. We investigate heuristic and meta-heuristic approaches for solving the selected PDP variants. Unlike previous research in this area, though, we focus on handling the difficult problem constraints in a simple and effective way, without complicating the overall solution methodology. Two main aspects of the solution algorithm are directed towards achieving this goal: the solution representation and the neighbourhood moves. Based on this perception, we tailored a number of heuristic and meta-heuristic algorithms for solving our problems, among them Genetic Algorithms, Simulated Annealing, Hill Climbing and Variable Neighbourhood Search. In general, the findings of the research indicate the success of our approach in handling the difficult problem constraints and devising simple and robust solution mechanisms that can be integrated with vehicle routing optimisation tools and used in a variety of real-world applications.
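As an illustration of the kind of constraint such neighbourhood moves must respect, here is a minimal precedence-feasibility check for PDPTW routes (stop identifiers and routes are invented):

```python
def precedence_feasible(route, requests):
    """route: ordered list of stop ids for one vehicle;
    requests: list of (pickup, delivery) stop-id pairs.
    Each request's pickup must precede its delivery, and both stops
    must be served by the same vehicle."""
    position = {stop: i for i, stop in enumerate(route)}
    for pickup, delivery in requests:
        if pickup in position or delivery in position:
            if pickup not in position or delivery not in position:
                return False  # request split across vehicles
            if position[pickup] > position[delivery]:
                return False  # delivery scheduled before its pickup
    return True

print(precedence_feasible(["p1", "p2", "d1", "d2"],
                          [("p1", "d1"), ("p2", "d2")]))  # True
print(precedence_feasible(["d1", "p1"], [("p1", "d1")]))  # False
```

Checks like this one typically guard the relocation and swap moves used inside the metaheuristics, so that only feasible neighbours are generated.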
