1 |
X-Ray Micro- and Nano-Diffraction Imaging on Human Mesenchymal Stem Cells and Differentiated Cells / Bernhardt, Marten / 15 June 2016
No description available.
|
2 |
Automated Text Mining and Ranked List Algorithms for Drug Discovery in Acute Myeloid Leukemia / Tran, Damian / January 2019
Evidence-based software engineering (EBSE) solutions for drug discovery that are effective, affordable, and accessible all in one are lacking. This thesis chronicles the progression and accomplishments of the AiDA (Artificially-intelligent Desktop Assistant) functional artificial intelligence (AI) project for the purposes of drug discovery in the challenging context of acute myeloid leukemia (AML). AiDA is a highly automated solution combining natural language processing (NLP) with spreadsheet feature extraction, and it harbours the potential to disrupt current research investigation methods using big data and aggregated literature. The completed work includes a text-to-function (T2F) NLP method for automated text interpretation, a ranked-list algorithm for multi-dataset analysis, and a custom multi-purpose neural network engine presented to the user through an open-source graphics engine. Validation of the deep learning engine on the MNIST and CIFAR machine learning benchmark datasets showed performance comparable to state-of-the-art libraries using similar architectures. An n-dimensional word embedding method for handling unstructured natural language data was devised to feed convolutional neural network (CNN) models that, over 25 random permutations, correctly predicted the functional responses to up to 86.64% of over 300 validation transcripts. The same CNN NLP infrastructure was then used to automate biomedical context recognition in more than 20,000 literature abstracts with up to 95.7% test accuracy over several permutations. The AiDA platform was used to compile a bidirectional ranked list of potential gene targets for pharmaceuticals by extracting features from leukemia microarray data, followed by mining of the PubMed biomedical citation database to extract recyclable pharmaceutical candidates. Downstream analysis of the candidate therapeutic targets revealed enrichments in AML- and leukemic stem cell (LSC)-related pathways.
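The abstract does not detail the CNN architecture behind the embedding-based text classifier, so the following is only a hedged illustration of the general pattern it names (embed tokens in n dimensions, convolve, pool, classify); the vocabulary, dimensions, and random weights are all made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a tiny vocabulary, n-dimensional embeddings,
# one bank of 1-D convolutional filters, max-over-time pooling, softmax.
vocab = {"blast": 0, "marrow": 1, "remission": 2, "relapse": 3}
EMB_DIM, N_FILTERS, WIDTH, N_CLASSES = 8, 4, 2, 3

embeddings = rng.normal(size=(len(vocab), EMB_DIM))
filters = rng.normal(size=(N_FILTERS, WIDTH, EMB_DIM))
W_out = rng.normal(size=(N_CLASSES, N_FILTERS))

def classify(tokens):
    """Forward pass of a minimal text CNN: embed, convolve, pool, softmax."""
    x = embeddings[[vocab[t] for t in tokens]]           # (seq, emb)
    seq = len(tokens)
    conv = np.array([[np.sum(x[i:i + WIDTH] * f)         # valid 1-D convolution
                      for i in range(seq - WIDTH + 1)]
                     for f in filters])                   # (filters, seq-W+1)
    pooled = np.maximum(conv, 0).max(axis=1)              # ReLU + max pooling
    logits = W_out @ pooled
    p = np.exp(logits - logits.max())                     # stable softmax
    return p / p.sum()

probs = classify(["blast", "marrow", "remission"])        # class probabilities
```

A trained system would of course learn these weights from labelled transcripts; the sketch only shows how variable-length token sequences reduce to a fixed-size class distribution.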
The applicability of the AiDA algorithms, in whole and in part, to the larger biomedical research field is explored. / Thesis / Master of Science (MSc) / Lead generation is an integral requirement of any research organization in any field, and it is typically a time-consuming and therefore expensive task because human intuition must be applied iteratively over a large body of evidence. In this thesis, a new technology called the Artificially-intelligent Desktop Assistant (AiDA) is explored as a means of generating a large number of leads from accumulated biomedical information. AiDA was created using a combination of classical statistics, deep learning methods, and modern graphical interface engineering. It aims to simplify the interface between the researcher and an assortment of bioinformatics tasks by organically interpreting written text messages and responding with the appropriate task. AiDA identified several potential targets for new pharmaceuticals in acute myeloid leukemia (AML), a cancer of the blood, by reading whole-genome data. It then discovered appropriate therapeutics by automatically scanning the accumulated body of biomedical research papers. Analysis of the discovered drug targets shows that, together, they are involved in key biological processes that the scientific community knows to be involved in leukemia and other cancers.
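The abstract describes a bidirectional ranked list compiled across microarray datasets but does not give the aggregation rule. A minimal sketch of one common approach, assuming per-dataset signed differential-expression scores (function name and the rank-sum rule are this sketch's choices, not necessarily AiDA's):

```python
import numpy as np

def bidirectional_ranked_list(score_tables):
    """Aggregate per-dataset differential-expression scores into one
    bidirectional ranked list, from most up- to most down-regulated.

    score_tables: list of dicts mapping gene -> signed score
                  (positive = up-regulated versus control).
    """
    genes = sorted(set().union(*score_tables))
    rank_sum = {g: 0.0 for g in genes}
    for table in score_tables:
        # Rank within each dataset so differing score scales stay comparable;
        # genes absent from a dataset receive the neutral mid rank.
        ordered = sorted(table, key=table.get, reverse=True)
        ranks = {g: r for r, g in enumerate(ordered, start=1)}
        mid = (len(ordered) + 1) / 2
        for g in genes:
            rank_sum[g] += ranks.get(g, mid)
    return sorted(genes, key=lambda g: rank_sum[g])
```

The head of the returned list then suggests up-regulated candidate targets and the tail down-regulated ones, which is what makes the list "bidirectional".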
|
3 |
Developing Artificial Intelligence-Based Decision Support for Resilient Socio-Technical Systems / Ali Lenjani / 15 June 2020
During 2017 and 2018, two of the costliest years on record for natural disasters, the U.S. experienced 30 events with total losses of $400 billion. These exorbitant costs arise primarily from the lack of adequate planning, spanning the breadth from pre-event preparedness to post-event response. It is imperative to start thinking about ways to make our built environment more resilient. However, empirically calibrated and structure-specific vulnerability models, a critical input required to formulate decision-making problems, are not currently available. Here, the research objective is to improve the resilience of the built environment through an automated vision-based system that generates actionable information in the form of probabilistic pre-event prediction and post-event assessment of damage. The central hypothesis is that pre-event images, e.g., street-view images, along with the post-event image database, contain sufficient information to construct pre-event probabilistic vulnerability models for assets in the built environment. The rationale for this research stems from the fact that probabilistic damage prediction is the most critical input for formulating decision-making problems under uncertainty targeting mitigation, preparedness, response, and recovery efforts. The following tasks are completed towards this goal.

First, planning for one of the bottleneck processes of post-event recovery is formulated as a decision-making problem considering the consequences imposed on the community (module 1). Second, a technique is developed to automate the process of extracting multiple street-view images of a given built asset, thereby creating a dataset that illustrates its pre-event state (module 2). Third, a system is developed that automatically characterizes the pre-event state of the built asset and quantifies the probability that it is damaged by fusing information from deep neural network (DNN) classifiers acting on pre-event and post-event images (module 3). To complete the work, a methodology is developed to enable associating each asset of the built environment with a structural probabilistic vulnerability model by correlating the pre-event structure characterization to the post-event damage state (module 4). The method is demonstrated and validated using field data collected from recent hurricanes within the US.

The vision of this research is to enable the automatic extraction of information about exposure and risk to enable smarter and more resilient communities around the world.
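Module 3 fuses the outputs of DNN classifiers on pre- and post-event images into one damage probability, but the abstract does not state the fusion rule. One simple possibility, sketched here under the assumption that the two image sources are conditionally independent given the damage state, is log-odds fusion (the function name and the independence assumption are this sketch's, not the thesis's):

```python
import math

def fuse_damage_probability(p_pre, p_post, prior=0.5):
    """Fuse a pre-event vulnerability estimate with a post-event image
    classifier output into a single damage probability.

    Under conditional independence, the two posteriors combine additively
    in log-odds space; the shared prior is subtracted once so it is not
    double-counted.
    """
    logit = lambda p: math.log(p / (1.0 - p))
    z = logit(p_pre) + logit(p_post) - logit(prior)
    return 1.0 / (1.0 + math.exp(-z))       # logistic (inverse logit)
```

For example, two weakly positive signals reinforce each other: a 0.7 pre-event vulnerability and a 0.7 post-event classifier score fuse to a probability above either input, while one confident and one neutral signal leave the confident estimate essentially unchanged.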
|
4 |
Automated Estimation of Forest Row Spacing and Detection of Clearances: An Experimental Study / Mohammad, Waled Khalid / January 2024
Background: This research explores the integration of satellite imagery and image processing techniques to innovate forest monitoring methods. Traditional approaches often fall short in scale and efficiency, necessitating enhanced techniques for accurate forest structure analysis.
Objectives: The main goal is to develop a software prototype capable of automating the measurement of tree row spacing and detecting clearing areas within forests, thereby facilitating more informed and efficient forest management and conservation efforts.
Methods: The study employed computer vision techniques and image processing algorithms using OpenCV to process high-resolution satellite images. The development and testing of the prototype involved iterative enhancements to refine accuracy and functionality.
Results: The findings demonstrate that the prototype successfully identifies and measures forest structural features with high accuracy, confirming the effectiveness of integrating computational techniques with ecological monitoring practices.
Conclusions: The successful application of satellite imagery and image processing significantly enhances forest monitoring capabilities, promoting sustainable forest management. This research underscores the potential of technology to transform environmental conservation efforts by providing detailed, reliable data that supports proactive management strategies.
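The abstract names OpenCV but not the spacing algorithm itself. One common approach to row-spacing estimation, sketched here with NumPy on a synthetic tile, is a frequency-domain read-out of the image's intensity profile; real satellite imagery would first need rotation alignment and vegetation thresholding (steps OpenCV could handle), and the periodic test tile below is entirely made up.

```python
import numpy as np

def estimate_row_spacing(image):
    """Estimate the dominant spacing (in pixels) between parallel planting
    rows, assuming the rows run vertically in a top-down image.

    Sums pixel intensity down each column to get a 1-D profile, then reads
    the strongest non-DC frequency from its FFT; the row spacing is the
    inverse of that frequency.
    """
    profile = image.sum(axis=0).astype(float)
    profile -= profile.mean()                  # remove the DC component
    spectrum = np.abs(np.fft.rfft(profile))
    freqs = np.fft.rfftfreq(profile.size)
    k = spectrum[1:].argmax() + 1              # strongest non-zero frequency
    return 1.0 / freqs[k]

# Synthetic stand-in for a satellite tile: brightness varies with an
# 8-pixel period across columns, mimicking rows planted 8 pixels apart.
cols = 0.5 + 0.5 * np.cos(2 * np.pi * np.arange(64) / 8)
tile = np.tile(cols, (64, 1))
spacing = estimate_row_spacing(tile)           # ~8 pixels
```

Multiplying the pixel spacing by the image's ground sampling distance would convert it to metres per row.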
|