781

AN ADAPTIVE RULE-BASED SYSTEM

Stackhouse, Christian Paul, 1960- January 1987 (has links)
Adaptive systems are systems whose characteristics evolve over time to improve their performance at a task. A fairly new area of study is that of adaptive rule-based systems. The system studied for this thesis uses meta-knowledge about rules, rulesets, rule performance, and system performance to improve its overall performance in a problem domain. An interesting and potentially important phenomenon emerged: the complexity the system acquires while solving a problem appears to be bounded by an inherent break-even point, beyond which the cost of additional complexity would exceed its benefit for that problem. If the problem is made more difficult, however, more complexity is required: the benefit of added complexity again outweighs its cost, and system complexity increases until it reaches the new break-even point. There appears to be no ultimate limit to the complexity attainable.
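The abstract does not give the thesis's actual mechanism, so the following is only a minimal sketch of the general idea in Python: rules carry a running strength estimate (meta-knowledge about rule performance), rules that contribute to a solution are credited, and every rule pays a fixed cost for existing, so the ruleset settles at a break-even level of complexity. All names and the cost model are hypothetical.

```python
class Rule:
    """A condition-action rule carrying meta-knowledge about its own performance."""
    def __init__(self, condition, action):
        self.condition = condition   # predicate over the problem state
        self.action = action         # state transformation
        self.strength = 1.0          # running estimate of the rule's usefulness

def solve_and_adapt(rules, state, goal_test, max_steps=100,
                    reward=1.0, complexity_cost=0.05):
    """Fire matching rules, credit those that led to a solution, then prune.

    Every rule pays a fixed cost for being carried; rules whose accumulated
    benefit falls below that cost are dropped, so the ruleset settles at a
    break-even level of complexity for the problem at hand.
    """
    fired = []
    for _ in range(max_steps):
        candidates = [r for r in rules if r.condition(state)]
        if not candidates:
            break
        rule = max(candidates, key=lambda r: r.strength)  # exploit meta-knowledge
        state = rule.action(state)
        fired.append(rule)
        if goal_test(state):
            for r in fired:                               # credit assignment
                r.strength += reward / len(fired)
            break
    for r in rules:
        r.strength -= complexity_cost                     # cost of carrying complexity
    return [r for r in rules if r.strength > 0], state    # prune below break-even
```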
782

Graph-based protein-protein interaction prediction in Saccharomyces cerevisiae

Paradesi, Martin Samuel Rao January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Doina Caragea / William H. Hsu / The term 'protein-protein interaction (PPI)' refers to associations between proteins as manifested through biochemical processes such as the formation of structures, signal transduction, transport, and phosphorylation. PPI play an important role in the study of biological processes. Many PPI have been discovered over the years, and several databases have been created to store information about these interactions. von Mering (2002) states that about 80,000 interactions between yeast proteins are currently available from various high-throughput interaction detection methods. Determining PPI using high-throughput methods is not only expensive and time-consuming, but also generates a high number of false positives and false negatives. Therefore, there is a need for computational approaches that can help identify real protein interactions. Several methods have been designed to address the task of predicting protein-protein interactions using machine learning. Most of them use features extracted from protein sequences (e.g., amino acid composition) or associated with protein sequences directly (e.g., GO annotation). Others use relational and structural features extracted from the PPI network, along with features related to the protein sequence. When using the PPI network to design features, several node and topological features can be extracted directly from the associated graph. In this thesis, important graph features of a protein interaction network that help in predicting protein interactions are identified. Two previously published datasets are used in this study; a third dataset was created by combining three PPI databases. Several classifiers are applied to the graph attributes extracted from the protein interaction networks of these three datasets. A detailed study was performed to determine whether graph attributes extracted from a protein interaction network are more predictive than biological features of protein interactions. The results indicate that performance criteria (such as sensitivity, specificity, and AUC score) improve when graph features are combined with biological features.
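As a rough illustration of the kind of graph attributes the abstract refers to, the sketch below derives a few common node and topological features (degree, clustering coefficient, shared neighbors) for candidate protein pairs and feeds them to a classifier, using networkx and scikit-learn. The toy graph, the leaky edge-derived labels, and the choice of random forest are stand-ins, not the thesis's actual datasets or classifiers; a real study would hold out edges when labeling.

```python
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pair_features(g, u, v):
    """Graph attributes for a candidate protein pair (u, v)."""
    common = len(list(nx.common_neighbors(g, u, v)))
    return [
        g.degree(u), g.degree(v),                     # node connectivity
        nx.clustering(g, u), nx.clustering(g, v),     # local neighborhood density
        common,                                       # shared interaction partners
        common / max(len(set(g[u]) | set(g[v])), 1),  # Jaccard-style overlap
    ]

g = nx.karate_club_graph()   # stand-in for a yeast PPI network
pairs = [(u, v) for u in g for v in g if u < v]
labels = [int(g.has_edge(u, v)) for u, v in pairs]   # illustrative labels only
X = np.array([pair_features(g, u, v) for u, v in pairs])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
```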
783

A pilot study to integrate HIV drug resistance gold standard interpretation algorithms using neural networks

Singh, Y., Mars, M. January 2013 (has links)
Published Article / There are several HIV drug resistance interpretation algorithms that produce different resistance measures even when applied to the same resistance profile. This discrepancy creates confusion for the physician choosing the best ARV therapy.
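A minimal sketch of the integration idea, under the assumption that each existing algorithm's interpretation is encoded as an ordinal score and a small feed-forward network learns a single consensus call; the algorithm names, encoding, and data below are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each row: interpretations of one resistance profile by several existing
# algorithms (e.g., Stanford HIVdb, ANRS, Rega), encoded hypothetically as
# 0 = susceptible, 1 = intermediate, 2 = resistant.
X = np.array([[0, 0, 1], [2, 2, 2], [1, 0, 2], [0, 0, 0],
              [2, 1, 2], [1, 1, 0], [2, 2, 1], [0, 1, 0]])
# Consensus "gold standard" label for each profile (invented for illustration).
y = np.array([0, 2, 1, 0, 2, 1, 2, 0])

# A small feed-forward network learns one integrated interpretation
# from the disagreeing algorithm outputs.
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, y)
print(net.predict([[2, 0, 1]]))   # integrated call for a conflicting profile
```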
784

Generalizable surrogate models for the improved early-stage exploration of structural design alternatives in building construction

Nourbakhsh, Mehdi 27 May 2016 (has links)
The optimization of complex structures is extremely time-consuming. Researchers often wait hours or even days for optimization results; if they then make a slight change to their input parameters, they must run the optimization again. This iterative process of defining a problem and finding a set of optimized solutions may take several days and sometimes several weeks. To reduce optimization time, researchers have therefore developed various approximation-based models that predict the results of time-consuming analyses. These simple analytical models, known as “meta-” or “surrogate” models, are built from data available from a limited number of analysis runs. These “models of the model” seek to approximate computation-intensive functions in considerably less time than expensive simulation codes that require significant computing power. One limitation of metamodels (or, interchangeably, surrogate models) developed for the structural approximation of trusses and space frames is a lack of generalizability. Because such metamodels are designed exclusively for a specific structure, they can predict the performance of only the structures for which they were designed. For instance, a metamodel designed for a ten-bar truss cannot predict the analysis results of another ten-bar truss with different boundary conditions. Nor can metamodels be re-used if the topology of a structure changes (e.g., from a ten-bar truss to a twelve-bar truss): if designers change the topology, they must generate new sample data and re-train the model. The predictability of these exclusive models is therefore limited. The objective of this study is to create, test, and validate generalizable metamodels that predict the results of finite element analysis by combining analysis data from structures with various geometries. Developing these models requires two main steps: feature generation and model creation. In the first step, using 11 features for nodes and three for members, the physical representations of four types of domes, slabs, and walls were transformed into numerical values; then, by randomly varying the cross-sectional area, the stress value of each member was recorded. In the second step, these feature vectors were used to create, test, and verify various metamodels in an examination of four hypotheses. The results show that with generalizable metamodels, analysis data from various structures can be combined and used to predict the performance of members of those structures, or of new structures within the same class of geometry. For instance, given the same radius for all domes, a metamodel generated from the analysis data of a 700-, 980-, and 1,525-member dome can predict the structural performance of the members of these domes or of a new dome with 250 members. In addition, the results show that generalizable metamodels predict the results of a finite element analysis more closely than metamodels created exclusively for a specific structure. A case study was selected to examine the application of generalizable metamodels to the early-stage exploration of structural design alternatives in a construction project. The results illustrate that optimization with generalizable metamodels reduces the time and cost of the project, fostering more efficient planning and more rapid decision-making by architects, contractors, and engineers at the early stage of construction projects.
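As a sketch of the second step under stated assumptions: per-member feature vectors pooled from several geometries train a single regressor that predicts member stress. The random data, the 11 + 3 feature split, and the choice of random forest are placeholders; the thesis's actual features and model family are described only at the level of the abstract above.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix: one row per structural member, built from the
# node features (e.g., coordinates, support and load conditions) and member
# features (e.g., cross-sectional area) described above, pooled across
# several geometries so the surrogate generalizes.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 14))            # 11 node-derived + 3 member features
stress = X @ rng.normal(size=14) + rng.normal(scale=0.1, size=5000)  # stand-in FEA output

X_tr, X_te, y_tr, y_te = train_test_split(X, stress, random_state=0)
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out members:", surrogate.score(X_te, y_te))
```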
785

Integrating and querying semantic annotations

Chen, Luying January 2014 (has links)
Semantic annotations are crucial components in turning unstructured text into more meaningful, machine-understandable information, and applications that consume such semantically enriched information stand to gain wide benefits. At present there is a plethora of commercial and open-source services and tools for enriching documents with semantic annotations. Since there has been limited effort to compare such annotators, this study first surveys and compares them along multiple dimensions, including their techniques and the coverage and quality of their annotations. The overlap and diversity in the capabilities of annotators motivate the need for semantic annotation integration: middleware that produces a unified annotation with improved quality on top of diverse semantic annotators. The integration of semantic annotations raises new challenges, both compared to usual data integration scenarios and to standard aggregation of machine learning tools. A set of approaches to these challenges is proposed that performs ontology-aware aggregation, adapting Maximum Entropy Markov Models to the setting of ontology-based annotations. These approaches are compared with existing ontology-unaware supervised approaches, ontology-aware unsupervised methods, and individual annotators, demonstrating their effectiveness through an overall improvement in all testing scenarios. A middleware system, ROSeAnn, and its corresponding APIs have been developed. In addition, this study concerns the availability and usability of semantically rich data. The second focus of this thesis is thus to allow users to query text annotated by different annotators using both explicit and implicit knowledge. We describe a first step towards this: a query language and a prototype system, QUASAR, that provide a uniform way to query multiple facets of annotated documents. We show how integrating semantic annotations and utilizing external knowledge help increase the quality of query answers over annotated documents.
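The thesis's ontology-aware aggregation adapts Maximum Entropy Markov Models, which is beyond a short sketch; the fragment below illustrates only the simpler underlying intuition of reconciling annotator votes through a type hierarchy, with a hypothetical ancestor map standing in for a real ontology.

```python
# Hypothetical ancestor map standing in for a real type ontology.
ANCESTORS = {
    "Politician": {"Politician", "Person", "Thing"},
    "Person":     {"Person", "Thing"},
    "City":       {"City", "Place", "Thing"},
    "Place":      {"Place", "Thing"},
    "Thing":      {"Thing"},
}

def aggregate(votes):
    """Pick the most specific type consistent with a majority of annotators.

    A vote for 'Politician' also counts as implicit support for 'Person',
    so annotators that disagree only on specificity are not treated as
    conflicting.
    """
    support = {}
    for t in votes:
        for anc in ANCESTORS[t]:
            support[anc] = support.get(anc, 0) + 1
    majority = {t for t, n in support.items() if n > len(votes) / 2}
    if not majority:
        return None                        # annotators genuinely disagree
    return max(majority, key=lambda t: len(ANCESTORS[t]))  # deepest majority type

print(aggregate(["Politician", "Person", "City"]))   # -> 'Person'
```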
786

Hyperparameter optimisation for multiple kernels

Pilkington, Nicholas Charles Victor January 2014 (has links)
No description available.
787

Image Analysis and Deep Learning for Applications in Microscopy

Ishaq, Omer January 2016 (has links)
Quantitative microscopy deals with the extraction of quantitative measurements from samples observed under a microscope. Recent developments in microscopy systems and in sample preparation and handling techniques have enabled high-throughput biological experiments, resulting in large amounts of image data at biological scales ranging from subcellular structures, such as fluorescently tagged nucleic acid sequences, to whole organisms, such as zebrafish embryos. Consequently, methods and algorithms for the automated quantitative analysis of these images have become increasingly important, ranging from traditional image analysis techniques to deep learning architectures. Many biomedical microscopy assays result in fluorescent spots. Robust detection and precise localization of these spots are two important, albeit sometimes overlapping, areas for the application of quantitative image analysis. We demonstrate the use of popular deep learning architectures for spot detection and compare them against more traditional parametric model-based approaches. Moreover, we quantify the effect of pre-training and of training set size on detection performance. Thereafter, we assess the potential of training deep networks on synthetic and semi-synthetic datasets and compare them with networks trained on manually annotated real data. In addition, we present a two-alternative forced-choice tool for assisting in the manual annotation of real image data. On the spot localization track, we parallelize a popular compressed-sensing-based localization method and evaluate its performance under different optimizers, noise conditions, and spot densities, and we investigate its sensitivity to different point spread function estimates. Zebrafish is an important model organism, attractive for whole-organism image-based assays in drug discovery campaigns. Drug-induced neuronal damage may be expressed in the form of zebrafish shape deformation. First, we present an automated method for the accurate quantification of tail deformations in multi-fish micro-plate wells using image analysis techniques such as illumination correction, segmentation, generation of branch-free skeletons of partial tail segments, and their fusion to generate complete tails. Then, we demonstrate a deep learning-based pipeline for classifying micro-plate wells as either drug-affected or negative controls, achieving competitive performance, and compare it against traditional image analysis approaches.
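For a concrete sense of the traditional side of the spot detection comparison, the sketch below generates a small synthetic spot image, in the spirit of the semi-synthetic data discussed above, and applies Laplacian-of-Gaussian detection from scikit-image. This is a generic classical detector, not the thesis's specific parametric model or deep network.

```python
import numpy as np
from skimage.feature import blob_log

# Synthetic fluorescence image: a few Gaussian spots on a noisy background.
rng = np.random.default_rng(0)
img = rng.normal(0.1, 0.02, size=(256, 256))
yy, xx = np.mgrid[0:256, 0:256]
for cy, cx in [(60, 80), (128, 128), (200, 40)]:
    img += 0.8 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 2.0 ** 2))

# Laplacian-of-Gaussian detection: each row is (y, x, sigma) of a found spot.
spots = blob_log(img, min_sigma=1, max_sigma=4, threshold=0.1)
print(spots)
```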
788

Supervised Learning Methods to Enhance Customer Lifetime Value Models for Multi-Channel Retail Sales Organizations

Shrewsbury, Billy John 01 January 2013 (has links)
Customer lifetime value (CLTV) models are a critical component of customer relationship management strategies. Over time, numerous approaches have been used to estimate the lifetime value (LTV) of a customer or segment of customers in order to make appropriate decisions on how to distribute marketing dollars and make other customer-related business decisions. In recent years, the development of lower-cost data warehousing strategies and the ease with which customer data is captured have increased the volume of data available to firms for use in such models, due in part to the rise of the Internet as a channel for interacting with customers. Yet even with the additional data available from Internet interactions, much of the current research in this field relies on membership, subscription-based, or contract-term data, with little, if any, research addressing today's multi-channel retail environment. The richness of the data available for customer lifetime value models is a further consequence of this increased data volume, combined with advances in data warehousing and data mining techniques. Existing statistical models for predicting LTV have limitations. Recent advances in machine learning have allowed researchers to apply these techniques to problems similar to customer lifetime value estimation, and they can likewise be applied to LTV models. This dissertation develops and evaluates methods for estimating LTV in a multi-channel retail environment. It builds on existing models and introduces supervised learning methods, specifically feed-forward neural networks and regression trees, into the prediction models to develop and evaluate new methods for LTV modeling in multi-channel retail environments. The new models proposed by this dissertation present an easier-to-implement solution for predicting churn and the future purchase value of a customer, the two key elements of LTV models. These elements provide the multi-channel retail firm with data whose customer relationship management utility is comparable to that of the LTV data used by organizations whose customer value is rooted in membership, subscription, or contract terms.
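A minimal sketch of the two model components named above, assuming hypothetical recency/frequency/monetary/channel features: a feed-forward network for churn and a regression tree for future purchase value, combined into an LTV estimate. The data and feature choices are invented for illustration, not taken from the dissertation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeRegressor

# Hypothetical per-customer features: recency, frequency, monetary value,
# and share of purchases made online vs. in-store (the multi-channel signal).
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))
churned = (X[:, 0] > 0.5).astype(int)                       # stand-in churn labels
future_value = 50 + 30 * X[:, 2] + rng.normal(size=1000)    # stand-in future spend

# The two key LTV elements: a feed-forward net for churn probability and a
# regression tree for expected future purchase value.
churn_model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                            random_state=0).fit(X, churned)
value_model = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X, future_value)

# Expected value weighted by the probability of the customer staying.
ltv_estimate = (1 - churn_model.predict_proba(X)[:, 1]) * value_model.predict(X)
```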
789

A Smartphone-Based Gait Data Collection System for the Prediction of Falls in Elderly Adults

Martinez, Matthew, De Leon, Phillip L. 10 1900 (has links)
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV / Falls prevention for older adults has become increasingly important and is now a significant research area, and gait analysis is a key part of the prevention effort. Gait data is typically collected in a laboratory setting using 3-D motion capture, which is time-consuming and invasive and requires expensive, specialized equipment as well as trained operators. Inertial sensors, which are smaller and more cost-effective, have been shown to be useful in falls research. Smartphones now contain micro-electro-mechanical systems (MEMS) inertial measurement units (IMUs), which make them a compelling platform for gait data acquisition. This paper reports the development of an iOS app for collecting accelerometer data and an offline machine learning system that classifies a subject, based on this data, as a faller or non-faller according to their history of falls. The system takes the accelerometer data captured on the smartphone, extracts discriminating features, and then classifies the subject based on the feature vector. Through simulation, our preliminary and limited study suggests the system achieves an accuracy as high as 85%. Such a system could be used to monitor an at-risk person's gait in order to predict an increased risk of falling.
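A rough sketch of the offline pipeline described above, under assumed details the paper does not specify here (50 Hz sampling, FFT-based features, an SVM classifier): features are extracted from each accelerometer recording and a classifier is trained on fall-history labels.

```python
import numpy as np
from sklearn.svm import SVC

FS = 50  # Hz, a typical smartphone accelerometer rate (assumed)

def gait_features(acc):
    """Discriminating features from one (n_samples, 3) accelerometer recording."""
    mag = np.linalg.norm(acc, axis=1)                  # combine x, y, z axes
    spectrum = np.abs(np.fft.rfft(mag - mag.mean()))
    freqs = np.fft.rfftfreq(len(mag), d=1 / FS)
    return [mag.mean(), mag.std(),
            freqs[spectrum.argmax()],                  # dominant step frequency
            spectrum.max() / (spectrum.sum() + 1e-9)]  # rhythmicity of the gait

# walks: list of recordings; labels: 1 = faller, 0 = non-faller (stand-ins)
rng = np.random.default_rng(0)
walks = [rng.normal(size=(500, 3)) for _ in range(40)]
labels = rng.integers(0, 2, size=40)
X = np.array([gait_features(w) for w in walks])
clf = SVC(kernel="rbf").fit(X, labels)
```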
790

An advanced non-intrusive load monitoring technique and its application in smart grid building energy management systems

He, Dawei 27 May 2016 (has links)
The objective of the proposed research is to develop an intelligent load modeling, identification, and prediction technology that provides granular load energy consumption and performance details and drives building energy reduction, demand reduction, and proactive equipment maintenance. Electricity consumption in the commercial and residential sectors accounts for about 70% of total electricity generation in the United States. Buildings are the most important consumers, contributing over 80% of the consumption in these two sectors. Toward reducing electrical energy spending and carbon emissions, several studies from Pacific Northwest National Laboratory (PNNL) and the National Renewable Energy Laboratory (NREL) show that, if equipped with the proper technologies, a commercial or residential building can potentially reduce its energy usage by roughly 10% to 30%. However, market acceptance of these new technologies is still insufficient, and the reason is generally acknowledged to be the lack of a means to quantify their contributions to energy savings, together with the invisibility of the loads in buildings. A non-intrusive load monitoring (NILM) system is proposed in this dissertation that can identify every individual load in a building and record the energy consumption, time-of-day variations, and other relevant statistics of the identified load, with no access to the individual components. The challenge of such non-intrusive load monitoring is to find features that are unique to a particular load and then to match a measured feature of an unknown load against a database or library of known loads. Many problems exist in this procedure, and the proposed research focuses on three directions to overcome the bottlenecks: fundamental load studies for model-driven feature extraction, adaptive identification algorithms for load space extendibility, and practical simplifications for real industrial applications. The simulation results show the great potential of this new technology in building energy monitoring and management.
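As an illustration of the signature-matching step at the heart of NILM, the sketch below matches a measured steady-state power change against a small hypothetical library of load signatures by nearest-neighbor distance; the dissertation's model-driven features and adaptive identification algorithms go well beyond this.

```python
import numpy as np

# Hypothetical signature library: steady-state (delta-P watts, delta-Q vars)
# changes observed when each appliance switches on.
SIGNATURES = {
    "refrigerator": (120.0, 60.0),
    "microwave":    (1100.0, 150.0),
    "hvac":         (3500.0, 900.0),
}

def identify(delta_p, delta_q):
    """Match a measured power step to the nearest known load signature."""
    def dist(sig):
        dp, dq = sig
        return np.hypot(delta_p - dp, delta_q - dq)
    name = min(SIGNATURES, key=lambda k: dist(SIGNATURES[k]))
    return name, dist(SIGNATURES[name])

print(identify(1080.0, 140.0))   # -> ('microwave', ...)
```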
