1

X-Ray Micro- and Nano-Diffraction Imaging on Human Mesenchymal Stem Cells and Differentiated Cells

Bernhardt, Marten, 15 June 2016
No description available.
2

Automated Text Mining and Ranked List Algorithms for Drug Discovery in Acute Myeloid Leukemia

Tran, Damian, January 2019
Evidence-based software engineering (EBSE) solutions for drug discovery that are simultaneously effective, affordable, and accessible are lacking. This thesis chronicles the progression and accomplishments of the AiDA (Artificially-intelligent Desktop Assistant) functional artificial intelligence (AI) project, applied to drug discovery in the challenging context of acute myeloid leukemia (AML). AiDA is a highly automated natural language processing (NLP) and spreadsheet feature-extraction platform with the potential to transform current research investigation methods built on big data and aggregated literature. The completed work includes a text-to-function (T2F) NLP method for automated text interpretation, a ranked-list algorithm for multi-dataset analysis, and a custom multi-purpose neural network engine presented to the user through an open-source graphics engine. Validation of the deep learning engine on the MNIST and CIFAR machine learning benchmark datasets showed performance comparable to state-of-the-art libraries using similar architectures. An n-dimensional word embedding method for handling unstructured natural language data was devised to feed convolutional neural network (CNN) models that, across 25 random permutations, correctly predicted functional responses for up to 86.64% of over 300 validation transcripts. The same CNN NLP infrastructure was then used to automate biomedical context recognition in more than 20,000 literature abstracts, with up to 95.7% test accuracy over several permutations. The AiDA platform was used to compile a bidirectional ranked list of potential gene targets for pharmaceuticals by extracting features from leukemia microarray data, followed by mining of the PubMed biomedical citation database to extract recyclable pharmaceutical candidates. Downstream analysis of the candidate therapeutic targets revealed enrichment in pathways related to AML and leukemic stem cells (LSCs). The applicability of the AiDA algorithms, in whole and in part, to the larger biomedical research field is explored.

Thesis / Master of Science (MSc)

Lead generation is an integral requirement of any research organization in every field, and it is typically a time-consuming and therefore expensive task, because human intuition must be applied iteratively over a large body of evidence. In this thesis, a new technology called the Artificially-intelligent Desktop Assistant (AiDA) is explored as a way to generate a large number of leads from accumulated biomedical information. AiDA was created using a combination of classical statistics, deep learning methods, and modern graphical interface engineering. It aims to simplify the interface between the researcher and an assortment of bioinformatics tasks by organically interpreting written text messages and responding with the appropriate task. By reading whole-genome data, AiDA identified several potential targets for new pharmaceuticals in acute myeloid leukemia (AML), a cancer of the blood. It then discovered appropriate therapeutics by automatically scanning the accumulated body of biomedical research papers. Analysis of the discovered drug targets shows that, together, they are involved in key biological processes known by the scientific community to be involved in leukemia and other cancers.
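The abstract does not reproduce AiDA's implementation, so the sketch below is only a hedged illustration of the kind of bidirectional ranked-list aggregation it describes: per-dataset gene scores are converted to normalized ranks and averaged into consensus up- and down-regulated candidates. The function name, normalization, and gene scores are hypothetical, not AiDA's actual algorithm.

```python
# Hypothetical sketch: aggregating per-dataset gene rankings into a single
# "bidirectional" ranked list (top down- and up-regulated candidates).
# This is NOT AiDA's implementation; names and scoring are illustrative only.
from collections import defaultdict
from statistics import mean

def bidirectional_ranked_list(datasets, top_n=10):
    """datasets: list of dicts mapping gene -> signed score
    (e.g. log2 fold change, AML vs. control). Returns the genes with the
    lowest and highest average rank positions across datasets."""
    rank_positions = defaultdict(list)
    for scores in datasets:
        # Rank genes from most down-regulated to most up-regulated.
        ordered = sorted(scores, key=scores.get)
        for position, gene in enumerate(ordered):
            # Normalize so datasets of different sizes are comparable.
            rank_positions[gene].append(position / (len(ordered) - 1))
    consensus = {g: mean(r) for g, r in rank_positions.items()}
    by_rank = sorted(consensus, key=consensus.get)
    return {"down": by_rank[:top_n], "up": by_rank[-top_n:][::-1]}

# Toy usage with made-up scores:
d1 = {"FLT3": 2.1, "NPM1": -1.3, "TP53": 0.2}
d2 = {"FLT3": 1.8, "NPM1": -0.9, "TP53": -0.1}
print(bidirectional_ranked_list([d1, d2], top_n=1))
```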
3

Developing Artificial Intelligence-Based Decision Support for Resilient Socio-Technical Systems

Ali Lenjani, 15 June 2020
During 2017 and 2018, two of the costliest years on record for natural disasters, the U.S. experienced 30 events with total losses of $400 billion. These exorbitant costs arise primarily from the lack of adequate planning, spanning the breadth from pre-event preparedness to post-event response. It is imperative to start thinking about ways to make our built environment more resilient. However, empirically calibrated and structure-specific vulnerability models, a critical input required to formulate decision-making problems, are not currently available. Here, the research objective is to improve the resilience of the built environment through an automated vision-based system that generates actionable information in the form of probabilistic pre-event prediction and post-event assessment of damage. The central hypothesis is that pre-event images, e.g., street-view images, along with the post-event image database, contain sufficient information to construct pre-event probabilistic vulnerability models for assets in the built environment. The rationale for this research stems from the fact that probabilistic damage prediction is the most critical input for formulating decision-making problems under uncertainty that target mitigation, preparedness, response, and recovery efforts. The following tasks are completed toward this goal.

First, planning for one of the bottleneck processes of post-event recovery is formulated as a decision-making problem that accounts for the consequences imposed on the community (module 1). Second, a technique is developed to automate the process of extracting multiple street-view images of a given built asset, thereby creating a dataset that illustrates its pre-event state (module 2). Third, a system is developed that automatically characterizes the pre-event state of the built asset and quantifies the probability that it is damaged by fusing information from deep neural network (DNN) classifiers acting on pre-event and post-event images (module 3). To complete the work, a methodology is developed that associates each asset of the built environment with a structural probabilistic vulnerability model by correlating the pre-event structure characterization with the post-event damage state (module 4). The method is demonstrated and validated using field data collected from recent hurricanes in the US.

The vision of this research is to enable the automatic extraction of information about exposure and risk, enabling smarter and more resilient communities around the world.
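Module 3 fuses DNN classifier outputs on pre- and post-event imagery into a damage probability, but the abstract does not spell out the fusion itself. The sketch below is therefore only an assumed, simplified Bayesian-style combination of a pre-event structure classifier, per-class fragility priors, and a post-event damage classifier; the class labels, priors, and numbers are hypothetical.

```python
# Hypothetical sketch of a "module 3"-style fusion: combine a pre-event
# structure classifier with a post-event damage classifier into a single
# damage probability. Categories, priors, and values are illustrative
# assumptions, not the dissertation's actual model.

def fused_damage_probability(pre_event_probs, damage_prior_by_class,
                             post_event_damage_likelihood):
    """pre_event_probs: dict class -> P(class | pre-event street-view image)
    damage_prior_by_class: dict class -> P(damage | class, hazard intensity)
    post_event_damage_likelihood: P(damage | post-event image), from a DNN.
    Returns a fused P(damage) via a simple Bayesian update."""
    # Prior damage probability implied by the pre-event characterization.
    prior = sum(pre_event_probs[c] * damage_prior_by_class[c]
                for c in pre_event_probs)
    # Treat the post-event classifier output as evidence and update the prior.
    numerator = post_event_damage_likelihood * prior
    denominator = numerator + (1 - post_event_damage_likelihood) * (1 - prior)
    return numerator / denominator

# Toy usage with made-up numbers:
pre = {"gable_roof": 0.7, "hip_roof": 0.3}          # pre-event DNN output
fragility = {"gable_roof": 0.45, "hip_roof": 0.20}  # assumed fragility priors
print(fused_damage_probability(pre, fragility, post_event_damage_likelihood=0.8))
```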
