31. Use of Artificial Intelligence for Malaria Drug Discovery

Keshavarzi Arshadi, Arash 01 January 2019
Antimalarial drugs are becoming less effective due to the emergence of drug resistance. At this time, resistance has been reported for all marketed antimalarial drugs, including artemisinin, creating a perpetual need for alternative drug candidates. The traditional drug discovery approach of high-throughput screening (HTS) of large compound libraries to identify new drug leads is time-consuming and resource-intensive. While virtual screening, which enables finding drug candidates in silico, is one solution to this problem, the accuracy of these models is limited. Artificial intelligence (AI), however, has demonstrated highly accurate performance in chemical property prediction using either structure-based or ligand-based approaches. Leveraging this ability and existing models, AI could be a suitable alternative to blind-search HTS or feature-based virtual screening: such a model recognizes patterns within the data and allows the search for hit compounds to proceed intelligently. In this work, we introduce DeepMalaria, a deep-learning-based process capable of predicting the anti-plasmodial properties and parasite-to-human selectivity of compounds from their SMILES representations. This graph-based model is trained on nearly 13,000 publicly available antiplasmodial compounds from GlaxoSmithKline (GSK), which are currently being used to find novel antimalarial drug candidates. We used this model to predict hit compounds from a macrocycle-based compound library. To validate the DeepMalaria-generated hits, we utilized the widely used SYBR Green I fluorescence-based phenotypic screening assay. DeepMalaria was able to predict all compounds that showed nanomolar activity and 87.5% of the compounds with an inhibition rate of 50% or more at 1 µM. Further experiments to reveal the compounds' mechanism of action have shown that one of the hit compounds, DC-9237, inhibits all intraerythrocytic asexual stages of Plasmodium falciparum and is fast-acting, making it a strong candidate for further optimization.
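
A minimal sketch of the graph-based prediction pipeline the abstract describes, assuming RDKit and PyTorch are available; the layer sizes, single message-passing step, and output head are illustrative assumptions, not DeepMalaria's actual architecture.

```python
import torch
import torch.nn as nn
from rdkit import Chem

class TinyGraphNet(nn.Module):
    def __init__(self, n_atom_types=100, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_atom_types, hidden)   # atom type -> vector
        self.msg = nn.Linear(hidden, hidden)              # one message-passing step
        self.out = nn.Linear(hidden, 1)                   # activity logit

    def forward(self, atomic_nums, adj):
        h = self.embed(atomic_nums)                       # (n_atoms, hidden)
        a_hat = adj + torch.eye(adj.shape[0])             # add self-loops
        h = torch.relu(self.msg(a_hat @ h))               # aggregate neighbor features
        return torch.sigmoid(self.out(h.mean(dim=0)))     # pool atoms -> probability

def smiles_to_graph(smiles):
    mol = Chem.MolFromSmiles(smiles)
    nums = torch.tensor([a.GetAtomicNum() for a in mol.GetAtoms()])
    adj = torch.tensor(Chem.GetAdjacencyMatrix(mol), dtype=torch.float32)
    return nums, adj

model = TinyGraphNet()
p = model(*smiles_to_graph("CCO"))  # hypothetical input: ethanol
print(f"predicted anti-plasmodial probability: {p.item():.3f}")
```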

32. Influencing Exploration in Actor-Critic Reinforcement Learning Algorithms

Gough, Andrew R 01 June 2018
Reinforcement Learning (RL) is a subset of machine learning primarily concerned with goal-directed learning and optimal decision making. RL agents learn from a reward signal discovered through trial and error in complex, uncertain environments, with the goal of maximizing positive reward. RL approaches need to scale up as they are applied to more complex environments with extremely large state spaces. Inefficient exploration methods cannot sufficiently explore complex environments in a reasonable amount of time, so optimal policies go unrealized and RL agents fail to solve the environment. This thesis proposes a novel variant of the Advantage Actor-Critic (A2C) algorithm. The variant is validated against two state-of-the-art RL algorithms, Deep Q-Network (DQN) and A2C, across six Atari 2600 games of varying difficulty. The experimental results are competitive with the state of the art, achieving lower variance and faster learning. Additionally, the thesis introduces a metric to objectively quantify the difficulty of any Markovian environment with respect to the exploratory capacity of RL agents.
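
The abstract does not detail the variant's exploration mechanism; the standard knob for influencing exploration in A2C is an entropy bonus on the policy, sketched below assuming PyTorch. The coefficient beta and tensor shapes are illustrative.

```python
import torch

def a2c_loss(logits, values, actions, returns, beta=0.01):
    """logits: (T, n_actions); values, actions, returns: (T,)."""
    dist = torch.distributions.Categorical(logits=logits)
    advantage = returns - values.detach()             # critic acts as a baseline
    policy_loss = -(dist.log_prob(actions) * advantage).mean()
    value_loss = (returns - values).pow(2).mean()     # critic regression loss
    entropy = dist.entropy().mean()                   # higher entropy -> more exploration
    return policy_loss + 0.5 * value_loss - beta * entropy

# Raising beta rewards stochastic policies and delays premature convergence.
logits, values = torch.randn(5, 4), torch.randn(5)
actions, returns = torch.randint(0, 4, (5,)), torch.randn(5)
print(a2c_loss(logits, values, actions, returns, beta=0.05))
```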

33. A fuzzy hierarchical decision model and its application in networking datacenters and in infrastructure acquisitions and design

Khader, Michael 01 January 2009
According to several studies, an inordinate number of major business decisions to acquire, design, plan, and implement networking infrastructures fail. A networking infrastructure is a collaborative group of telecommunications systems providing the services needed for a firm's operations and business growth. The analytical hierarchy process (AHP) is a well-established decision-making process used to analyze decisions related to networking infrastructures. AHP decomposes complex decisions into a set of factors and solutions, but it has difficulty handling uncertainty in decision information. This study addressed how these deficiencies can be remedied, through the development of a model capable of handling decisions with incomplete information and an uncertain operating environment. The model is based on the AHP framework and fuzzy set theory. Fuzzy sets are sets whose memberships are gradual: a member of a fuzzy set may have a strong, weak, or moderate membership. The methodology for this study was based primarily on the analytical research design method, which is neither quantitative nor qualitative but based on mathematical concepts, proofs, and logic. The model's constructs were verified by a simulated practical case study based on current literature and the input of networking experts. To further verify the research objectives, the investigator developed, tested, and validated a software platform. The results showed tangible improvements in analyzing complex networking infrastructure decisions. The model's ability to analyze decisions with incomplete information and an uncertain economic outlook can be employed in socially important areas such as renewable energy, forest management, and environmental studies to achieve large savings.
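
The fuzzy ingredient can be made concrete. Below is a minimal sketch assuming triangular fuzzy numbers (l, m, u) for pairwise comparisons and Buckley's geometric-mean weighting, a standard fuzzy-AHP technique; the thesis's full hierarchical model is not reproduced here.

```python
import numpy as np

def tri_membership(x, l, m, u):
    """Gradual membership of x in the triangular fuzzy number (l, m, u)."""
    if x <= l or x >= u:
        return 0.0
    return (x - l) / (m - l) if x <= m else (u - x) / (u - m)

def fuzzy_geometric_weights(comparisons):
    """Buckley-style fuzzy weights from an (n, n, 3) matrix of (l, m, u) entries."""
    n = comparisons.shape[0]
    g = comparisons.prod(axis=1) ** (1.0 / n)   # row geometric means, per l/m/u
    total = g.sum(axis=0)
    return g / total[::-1]                      # fuzzy division: (l,m,u) / (u,m,l)

# "Cost is moderately more important than latency", with uncertainty around 3:
print(tri_membership(2.5, l=2, m=3, u=4))       # membership 0.5

comp = np.array([[[1, 1, 1], [2, 3, 4]],
                 [[1/4, 1/3, 1/2], [1, 1, 1]]]) # 2 criteria, reciprocal judgments
print(fuzzy_geometric_weights(comp))            # fuzzy (l, m, u) weight per criterion
```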

34. Evaluating Heuristics and Crowding on Center Selection in K-Means Genetic Algorithms

McGarvey, William 01 January 2014
Data clustering involves partitioning data points into clusters such that data points within the same cluster are highly similar but dissimilar to data points in other clusters. The k-means algorithm is among the most extensively used clustering techniques. Genetic algorithms (GAs) have been successfully used to evolve successive generations of cluster centers. The primary goal of this research was to develop improved GA-based methods for center selection in k-means, using heuristic methods to improve the overall fitness of the initial population of chromosomes along with crowding techniques to avoid premature convergence. Prior to this research, no rigorous systematic examination of the use of heuristics and crowding methods in this domain had been performed. The evaluation included computational experiments involving repeated runs of the genetic algorithm in which values that affect heuristics or crowding were systematically varied and the results analyzed. Genetic algorithm performance under the various configurations was analyzed based upon (1) the fitness of the partitions produced and (2) the overall time the GA took to converge to good solutions. Two heuristic methods for initial center seeding were tested: Density and Separation. Two crowding techniques were evaluated on their ability to prevent premature convergence: Deterministic and Parent Favored Hybrid local tournament selection. Based on the experimental results, the Density method provides no significant advantage over random seeding, either in discovering quality partitions or in evolving better partitions more quickly. The Separation method appears to increase the probability of the genetic algorithm finding slightly better partitions in slightly fewer generations and to converge to quality partitions more quickly. Both local tournament selection techniques consistently allowed the genetic algorithm to find better-quality partitions than roulette-wheel sampling. Deterministic selection consistently found better-quality partitions in fewer generations than Parent Favored Hybrid. The combination of Separation center seeding and Deterministic selection performed better than any other combination, achieving the lowest mean best SSE value more than twice as often as any other combination. On all 28 benchmark problem instances, the combination identified solutions at least as good as any identified by extant methods.
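
As a concrete illustration of the deterministic crowding idea evaluated above, here is a minimal sketch assuming NumPy; chromosomes encode k centers, fitness is SSE, and the Density/Separation seeding heuristics are simplified to random initialization. The single-child pairing is a simplification of full deterministic crowding.

```python
import numpy as np

def sse(centers, data):
    d = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).sum()                  # total distance to nearest center

def crossover(p1, p2, rng):
    mask = rng.random(len(p1)) < 0.5            # swap whole centers uniformly
    return np.where(mask[:, None], p1, p2)

def deterministic_crowding(pop, data, rng):
    """Each child replaces its nearer parent only if the child has lower SSE."""
    rng.shuffle(pop)
    for i in range(0, len(pop) - 1, 2):
        p1, p2 = pop[i], pop[i + 1]
        child = crossover(p1, p2, rng)
        child += rng.normal(0, 0.05, child.shape)   # mutation
        nearer = i if np.linalg.norm(child - p1) <= np.linalg.norm(child - p2) else i + 1
        if sse(child, data) < sse(pop[nearer], data):
            pop[nearer] = child                 # local tournament: child wins
    return pop

rng = np.random.default_rng(0)
data = rng.random((200, 2))
pop = rng.random((10, 3, 2))                    # 10 chromosomes, k=3 centers in 2-D
for _ in range(50):
    pop = deterministic_crowding(pop, data, rng)
print(min(sse(c, data) for c in pop))           # best SSE found
```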

35. File Fragment Classification Using Neural Networks with Lossless Representations

Hiester, Luke 01 May 2018
This study explores the use of neural networks as universal models for classifying file fragments. This approach differs from previous work in its lossless feature representation, with fragments’ bits as direct input, and its use of feedforward, recurrent, and convolutional networks as classifiers, whereas previous work has only tested feedforward networks. Due to the study’s exploratory nature, the models were not directly evaluated in a practical setting; rather, easily reproducible experiments were performed to attempt to answer the initial question of whether this approach is worthwhile to pursue further, especially due to its high computational cost. The experiments tested classification of fragments of homogeneous file types as an idealized case, rather than using a realistic set of types, because the types of interest are highly application-dependent. The recurrent networks achieved 98 percent accuracy in distinguishing 4 file types, suggesting that this approach may be capable of yielding models with sufficient performance for practical applications. The potential applications depend mainly on the model performance gains achievable by future work but include binary mapping, deep packet inspection, and file carving.
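
A minimal sketch of the lossless representation the abstract describes: fragment bits fed directly to a recurrent classifier over four file types. PyTorch is assumed, and the fragment length and GRU sizes are illustrative, not the study's exact configuration.

```python
import torch
import torch.nn as nn

def fragment_to_bits(fragment: bytes) -> torch.Tensor:
    """Unpack a fragment into a (n_bits, 1) float tensor, losslessly."""
    bits = [(byte >> i) & 1 for byte in fragment for i in range(7, -1, -1)]
    return torch.tensor(bits, dtype=torch.float32).unsqueeze(-1)

class BitGRUClassifier(nn.Module):
    def __init__(self, n_types=4, hidden=128):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_types)

    def forward(self, bits):                     # bits: (batch, n_bits, 1)
        _, h = self.gru(bits)                    # final hidden state summarizes fragment
        return self.fc(h.squeeze(0))             # logits over file types

model = BitGRUClassifier()
frag = fragment_to_bits(b"\x89PNG\r\n\x1a\n")    # hypothetical 8-byte fragment
logits = model(frag.unsqueeze(0))
print(logits.argmax(dim=1))                      # predicted file-type index
```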

36. Sonar sensor interpretation for ectogeneous robots

Gao, Wen 01 January 2005
We have developed four generations of sonar scanning systems to automatically interpret the surrounding environment. The first two are stationary 3D air-coupled ultrasound scanning systems; the last two are packaged as sensor heads for mobile robots. Template matching analysis is applied to distinguish simple indoor objects: the tested echo is compared with reference echoes, important features are extracted and drawn in the phase plane, and the computer then analyzes them and automatically selects the best matches for the tested echoes. For cylindrical objects outdoors, an algorithm is presented to distinguish trees from smooth circular poles based on analysis of backscattered sonar echoes. The echo data is acquired by a mobile robot with a 3D air-coupled ultrasound scanning system packaged as its sensor head. Four major steps are conducted; the final Average Asymmetry vs. Average Squared Euclidean Distance phase plane is segmented to tell a tree from a pole by the location of the data points for the objects of interest. For extended objects outdoors, we successfully distinguished seven objects on campus by taking a sequence of scans along each object, obtaining the corresponding backscatter vs. scan angle plots, performing deformable template matching, extracting feature vectors of interest, and then categorizing them in a hyperplane. We also successfully taught the robot to distinguish three pairs of outdoor objects. Multiple scans are conducted at different distances, and a two-step feature extraction is performed based on the amplitude vs. scan angle plots. The final Slope1 vs. Slope2 phase plane separates the rectangular objects from the corresponding cylindrical ones.
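
A minimal sketch of the template-matching step, assuming NumPy and equal-length synthetic echoes; the thesis's actual features (e.g., the Average Asymmetry phase plane) are not reproduced, and the reference signals here are purely illustrative.

```python
import numpy as np

def best_match(echo, references):
    """Return the label of the reference echo most correlated with the input."""
    echo = (echo - echo.mean()) / echo.std()
    scores = {}
    for label, ref in references.items():
        ref = (ref - ref.mean()) / ref.std()
        scores[label] = float(np.dot(echo, ref) / len(echo))  # normalized correlation
    return max(scores, key=scores.get), scores

t = np.linspace(0, 1, 256)
references = {"pole": np.sin(40 * t),
              "tree": np.sin(40 * t) * (1 + 0.3 * np.random.rand(256))}
label, scores = best_match(np.sin(40 * t) + 0.05 * np.random.randn(256), references)
print(label, scores)
```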

37. Knowledge-Based System for Flight Information Management

Ricks, Wendell R. 01 January 1990
No description available.

38. Deep Probabilistic Models for Camera Geo-Calibration

Zhai, Menghua 01 January 2018
The ultimate goal of image understanding is to transform visual images into numerical or symbolic descriptions of the scene that are helpful for decision making. By determining when, where, and in which direction a picture was taken, geo-calibration makes it possible to use imagery to understand the world and how it changes over time. Current models for geo-calibration are mostly deterministic and in many cases fail to model the inherent uncertainty when the image content is ambiguous. Furthermore, without properly modeling the uncertainty, subsequent processing can yield overly confident predictions. To address these limitations, we propose a probabilistic model for camera geo-calibration using deep neural networks. While our primary contribution is geo-calibration, we also show that learning to geo-calibrate a camera allows us to implicitly learn to understand the content of the scene.
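
One way to make a geo-calibration output probabilistic, sketched below under stated assumptions (PyTorch, a 36-bin discretization of camera heading, and stand-in CNN features): predict a distribution over bins so that ambiguous images yield high-entropy outputs. This illustrates the idea, not the thesis's architecture.

```python
import torch
import torch.nn as nn

N_BINS = 36                                    # 10-degree heading bins

head = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, N_BINS))

features = torch.randn(1, 512)                 # stand-in for CNN image features
probs = torch.softmax(head(features), dim=1)   # distribution over heading bins
entropy = -(probs * probs.clamp_min(1e-9).log()).sum().item()
print(f"most likely bin: {probs.argmax().item()}, uncertainty (nats): {entropy:.2f}")
```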

39. A recurrent neural network architecture for biomedical event trigger classification

Bopaiah, Jeevith 01 January 2018
A “biomedical event” is a broad term used to describe the roles of and interactions between entities (such as proteins, genes, and cells) in a biological system. The task of biomedical event extraction aims to identify and extract these events from unstructured text. An important component in the early stage of the task is biomedical trigger classification, which involves identifying and classifying words or phrases that indicate an event. In this thesis, we present our work on biomedical trigger classification developed using the multi-level event extraction dataset. We restrict the scope of our classification to 19 biomedical event types grouped under four broad categories: Anatomical, Molecular, General, and Planned. While most existing approaches are based on traditional machine learning algorithms that require extensive feature engineering, our model relies on neural networks to implicitly learn important features directly from the text. We use natural language processing techniques to transform the text into vectorized inputs suitable for a neural network architecture. To the best of our knowledge, this is the first time neural attention strategies have been explored for biomedical trigger classification. Our best results were obtained from an ensemble of 50 models, which produced a micro F-score of 79.82%, an improvement of 1.3% over the previous best score.
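
A minimal sketch of a recurrent trigger classifier with a neural attention layer, assuming PyTorch; the 19-way output matches the abstract's task, but the vocabulary, embedding, and hidden sizes are illustrative, not the thesis's exact architecture.

```python
import torch
import torch.nn as nn

class TriggerClassifier(nn.Module):
    def __init__(self, vocab=20000, emb=100, hidden=128, n_types=19):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)      # scores each context token
        self.fc = nn.Linear(2 * hidden, n_types)

    def forward(self, token_ids):                 # (batch, seq_len)
        h, _ = self.lstm(self.embed(token_ids))   # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)
        context = (weights * h).sum(dim=1)        # attention-weighted sentence vector
        return self.fc(context)                   # logits over 19 event types

model = TriggerClassifier()
logits = model(torch.randint(0, 20000, (1, 12)))  # hypothetical 12-token sentence
print(logits.argmax(dim=1))                       # predicted event type index
```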

40. LeaF: A Learning-based Fault Diagnostic System for Multi-Robot Teams

Kannan, Balajee 01 May 2007
The failure-prone, complex operating environment of a standard multi-robot application dictates that some amount of fault tolerance be incorporated into every system. In fact, the quality of the incorporated fault tolerance has a direct impact on the overall performance of the system. Despite the extensive work being done in the field of multi-robot systems, no general methodology exists for fault diagnosis and recovery. The objective of this research, in part, is to provide an adaptive approach that enables a robot team to autonomously detect and compensate for the wide variety of faults it may experience. The key feature of the developed approach is its ability to learn useful information from encountered faults, unique or otherwise, towards a more robust system. As part of this research, we analyzed an existing multi-agent architecture, CMM – the Causal Model Method – as a fault diagnostic solution for a sample multi-robot application. Based on the analysis, we claim that a causal model approach is effective for anticipating and recovering from many types of robot team errors. However, the analysis also showed that CMM in its current form is incomplete as a turn-key solution. Due to the significant number of possible failure modes in a complex multi-robot application, and the difficulty of anticipating all possible failures in advance, one cannot guarantee the generation of a complete a priori causal model that identifies and specifies all faults that may occur in the system. Therefore, based on these preliminary studies, we designed an alternate approach, called LeaF: a Learning-based Fault diagnostic architecture for multi-robot teams. LeaF is an adaptive method that uses its experience to update and extend its causal model, enabling the team, over time, to better recover from faults when they occur. LeaF combines the initial fault model with a case-based learning algorithm, LID – Lazy Induction of Descriptions – to allow robot team members to diagnose faults and automatically update their causal models. The modified LID algorithm uses structural similarity between fault characteristics to classify previously unencountered faults. Furthermore, the use of learning allows the system to identify and categorize unexpected faults, enables team members to learn from problems encountered by others, and supports intelligent decisions regarding the environment. To evaluate LeaF, we implemented it in two challenging and dynamic physical multi-robot applications. The other significant contribution of this research is the development of metrics to measure fault tolerance, within the context of system performance, for a multi-robot system. In addition to developing these metrics, we outline potential methods to better interpret the obtained measures towards truly understanding the capabilities of the implemented system. The developed metrics are designed to be application-independent and can be used to evaluate and/or compare different fault-tolerance architectures such as CMM and LeaF. To the best of our knowledge, this approach is the only one that attempts to capture the effect of intelligence, reasoning, or learning on the effective fault tolerance of the system, rather than relying purely on traditional redundancy-based measures.
Finally, we show the utility of the designed metrics by applying them to the obtained physical robot experiments, measuring the effective fault-tolerance and system performance, and subsequently analyzing the calculated measures to help better understand the capabilities of LeaF.
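
A minimal sketch of the diagnostic loop the abstract describes: consult the a priori causal model first, fall back to retrieving the most similar past case for unanticipated faults, and fold resolved faults back into the model. The fault encoding and cosine similarity below are illustrative assumptions, not LID's actual structural-similarity measure.

```python
import numpy as np

class LeafDiagnoser:
    def __init__(self, causal_model):
        self.causal_model = dict(causal_model)   # symptom signature -> diagnosis
        self.cases = []                          # (feature vector, diagnosis) memory

    def diagnose(self, signature, features):
        if signature in self.causal_model:       # anticipated fault: direct lookup
            return self.causal_model[signature]
        if not self.cases:
            return "unknown"
        # Unanticipated fault: retrieve the most similar previously seen case.
        sims = [np.dot(f, features) / (np.linalg.norm(f) * np.linalg.norm(features))
                for f, _ in self.cases]
        return self.cases[int(np.argmax(sims))][1]

    def learn(self, signature, features, diagnosis):
        """Extend the causal model and case memory with a resolved fault."""
        self.causal_model[signature] = diagnosis
        self.cases.append((np.asarray(features, dtype=float), diagnosis))

d = LeafDiagnoser({("no_heartbeat", "motor_idle"): "communication failure"})
d.learn(("drift", "gps_ok"), [0.9, 0.1, 0.4], "wheel encoder fault")
print(d.diagnose(("drift", "gps_lost"), np.array([0.8, 0.2, 0.5])))
```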
