  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

APPLICATION OF MANIFOLD EMBEDDING OF THE MOLECULAR SURFACE TO SOLID-STATE PROPERTY PREDICTION

Nicholas J Huls (16642551) 01 August 2023 (has links)
The pharmaceutical industry depends on a deep understanding of pharmaceutical excipients and active ingredients. Their physicochemical properties must be sufficiently understood to create a safe and efficacious drug product. High-throughput methods have reduced the time and material required to measure many properties, but some remain difficult to evaluate. One such property is solubility, the equilibrium dissolvable content of a material. Solubility is an essential factor in determining the bioavailability of an active ingredient and therefore directly impacts the effectiveness and marketability of the drug product.

Solubility can be a challenging, time-consuming, and material-intensive property to measure correctly. Because of the difficulty of determining experimental values, researchers have devoted significant effort to the accurate prediction of solubility for drug-like compounds. This remains a difficult task, with two hurdles to overcome: data quality and the specificity of molecular descriptors. Large databases of reliable solubility values have become more readily available in recent years, lowering the first barrier to more accurate solubility predictions. The second hurdle has proven harder to overcome. Advances in artificial intelligence (AI), specifically machine learning and neural networks, have made it possible to evaluate vast quantities of data with relative ease. The remaining barrier lies in pairing appropriate AI techniques with descriptors that accurately capture the relevant features. Although many attempts have been made, no single set of descriptors, whether used with data-driven approaches or ab initio methods, has accurately predicted solubility.

The research within this dissertation attempts to lower the second barrier to solubility prediction by starting from the molecular features most important to solubility. By deriving molecular descriptors from the electronic properties on the surface of molecules, we obtain precise descriptions of the strength and locality of intermolecular interactions, critical factors in the extent of solubility. The novel molecular descriptors are readily integrated into a Deep Sets-based graph and self-attention neural network, which is used to evaluate predictive performance. The findings indicate a significant improvement in predicting intrinsic solubility over other literature-reported methods.
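The Deep Sets construction mentioned above can be sketched in a few lines: embed each surface point independently, sum-pool so the result is invariant to the ordering of the sampled points, and map the pooled feature to a prediction. The tiny `phi`/`rho` functions and the descriptor values below are illustrative stand-ins, not the networks or descriptors from the dissertation.

```python
def phi(point):
    """Per-point embedding: map a surface descriptor (e.g. local
    electrostatic potential, an assumed example) to a small feature vector."""
    v = point["esp"]
    return [v, v * v]

def rho(pooled):
    """Readout: map the pooled set feature to a scalar prediction."""
    return 0.5 * pooled[0] - 0.1 * pooled[1]

def deep_sets_predict(surface_points):
    """Sum-pool per-point embeddings so the prediction does not depend
    on the order in which surface points are listed."""
    pooled = [0.0, 0.0]
    for p in surface_points:
        emb = phi(p)
        pooled[0] += emb[0]
        pooled[1] += emb[1]
    return rho(pooled)

# Three sampled surface points with made-up descriptor values.
points = [{"esp": 0.2}, {"esp": -0.5}, {"esp": 0.1}]
prediction = deep_sets_predict(points)
```

Because the pooling is a sum, shuffling the surface points leaves the prediction unchanged (up to floating-point rounding), which is the property that makes a set of surface samples a valid input.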

Emphysema Classification via Deep Learning

Molin, Olov January 2023 (has links)
Emphysema is an incurable lung airway disease and a hallmark of Chronic Obstructive Pulmonary Disease (COPD). In recent decades, Computed Tomography (CT) has been used as a powerful tool for the detection and quantification of different diseases, including emphysema. The use of CT comes with a potential risk, ionizing radiation, and involves a trade-off between image quality and radiation exposure. Early detection of emphysema nonetheless matters: emphysema is an independent risk marker for lung cancer, and it possesses qualities that make it a candidate for sub-classification of COPD. In this master's thesis, we use state-of-the-art deep learning models for pulmonary nodule detection to classify emphysema at an early stage of the disease's progression. We also demonstrate that deep learning denoising techniques can be applied to low-dose CT scans to improve the model's performance. We achieved an F-score of 0.66, an AUC score of 0.80, and an accuracy of 81.74%. Denoising increased accuracy by 1.57 percentage points and the F-score by 0.0332. In conclusion, this makes it possible to use low-dose CT scans for early detection of emphysema with state-of-the-art deep learning models for pulmonary nodule detection.
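The three metrics reported above (accuracy, F-score, AUC) can be sketched from scratch; the labels and scores below are made-up examples, not data from the thesis.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions matching the ground-truth labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f_score(y_true, y_pred):
    """F1: harmonic mean of precision and recall on the positive class."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def auc(y_true, scores):
    """AUC as the probability that a positive outscores a negative
    (Mann-Whitney formulation, ties counted as half a win)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy evaluation: two emphysema-positive and two negative scans.
y_true = [1, 1, 0, 0]
scores = [0.9, 0.4, 0.6, 0.2]
y_pred = [int(s >= 0.5) for s in scores]
```

Note that AUC is computed from the raw scores while accuracy and F-score depend on the chosen decision threshold, which is why the three numbers move independently when a denoising step changes the score distribution.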

COHORTFINDER: A DATA-DRIVEN, OPEN-SOURCE, TOOL FOR PARTITIONING PATHOLOGY AND IMAGING COHORTS TO YIELD ROBUST MACHINE LEARNING MODELS

Fan, Fan 26 May 2023 (has links)
No description available.

Deep learning-based segmentation of anatomical structures in MR images

Ledberg, Rasmus January 2023 (has links)
Magnetic resonance imaging (MRI) is a powerful imaging tool for diagnostics, which AMRA uses to segment and quantify certain anatomical regions. This thesis investigates the possibilities of using deep learning for AMRA's particular segmentation task, both for ordinary regions (fat and muscle regions) and for injured muscles. The main approach performs muscle and fat segmentation separately and compares three approaches: a full-resolution approach, a down-sampled approach (trained on down-sampled images), and an ensemble approach (voting among the seven best networks). The results show that deep learning segmentation is feasible for the task, with satisfactory results. The down-sampled approach works best for fat segmentation, which can be related to the inconsistently over-segmented ground-truth fat masks; the additional resolution is therefore unnecessary and may even impair performance. The down-sampled approach also achieves better results for muscle segmentation. Ensemble learning does not, in general, improve either the segmentation Dice score or the biomarker predictions. Injured muscles are more difficult to predict because the muscles in the dataset used are smaller and the data are more varied. In summary, deep learning shows great potential for the task. The results are overall satisfactory (mostly for the down-sampled approach), but further work on injured muscles is needed to make the method clinically useful.
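The Dice score and the ensemble voting mentioned above can be sketched on flat binary masks; the tiny masks below are illustrative only, not AMRA data.

```python
def dice(mask_a, mask_b):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) over binary masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2 * inter / size if size else 1.0

def majority_vote(masks):
    """Ensemble a list of predicted masks by per-voxel majority vote."""
    n = len(masks)
    return [int(2 * sum(col) > n) for col in zip(*masks)]

pred = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 0, 1]
score = dice(pred, truth)  # overlap 2, sizes 3 + 2, so Dice = 4/5
```

A per-voxel majority vote like this is one simple way to combine the seven networks; it can only help where the networks disagree, which is consistent with the finding that ensembling gave little gain when the individual networks already agree on most voxels.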

Deep learning-based algorithm improved radiologists' performance in bone metastases detection on CT

Noguchi, Shunjiro 23 March 2023 (has links)
Kyoto University / Doctor of Medical Science / Degree No. Kō 24473 (Med. Doc. No. 4915) / Call number 新制||医||1062 (University Library) / Graduate School of Medicine, Kyoto University / Examining committee: Prof. 溝脇 尚志, Prof. 黒田 知宏, Prof. 花川 隆 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Medical Science / Kyoto University / DFAM

Leveraging deep learning for identification and structural determination of novel protein complexes from in situ electron cryotomography of Mycoplasma pneumoniae

Somody, Joseph Christian Campbell January 2023 (has links) (PDF)
The holy grail of structural biology is to study a protein in situ, and this goal has been fast approaching since the resolution revolution and the achievement of atomic resolution. A cell's interior is not a dilute environment, and proteins have evolved to fold and function as needed in that environment; as such, an investigation of a cellular component should ideally include the full complexity of the cellular environment. Imaging whole cells in three dimensions using electron cryotomography is the best method to accomplish this goal, but it comes with a limitation on sample thickness and produces noisy data unamenable to direct analysis. This thesis establishes a novel workflow to systematically analyse whole-cell electron cryotomography data in three dimensions and to find and identify instances of protein complexes in the data, laying the groundwork for determining their structure and identity. Mycoplasma pneumoniae is a very small parasitic bacterium with fewer than 700 protein-coding genes; it is thin enough and small enough to be imaged in large quantities by electron cryotomography, and it can grow directly on the grids used for imaging, making it ideal for exploratory studies in structural proteomics. As part of the workflow, a methodology for training deep-learning-based particle-picking models is established. As a proof of principle, a dataset of whole-cell Mycoplasma pneumoniae tomograms is used with this workflow to characterize a novel membrane-associated complex observed in the data. Ultimately, 25,431 such particles are picked from 353 tomograms and refined to a density map with a resolution of 11 Å. Making good use of orthogonal datasets to filter the search space and verify results, structures were predicted for candidate proteins and checked for fit in the density map. In the end, nine proteins were found to be part of the complex, which appears to be associated with chaperone activity and to interact with translocon machinery. Visual proteomics refers to the ultimate potential of in situ electron cryotomography: the comprehensive interpretation of tomograms. The workflow presented here is demonstrated to help in reaching that potential.
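The refinement of thousands of picked particles into one density map rests on a simple statistical fact: averaging many aligned noisy copies cancels uncorrelated noise while the shared signal adds. The toy 1D "particle" below illustrates that intuition only; it is not the subtomogram-averaging pipeline used in the thesis.

```python
import random

random.seed(0)

# A fixed 1D "structure" standing in for an aligned particle density.
signal = [1.0, 0.0, -1.0, 0.0] * 4

def noisy_copy(sig, sigma=1.0):
    """Simulate one picked particle: the structure plus Gaussian noise."""
    return [s + random.gauss(0.0, sigma) for s in sig]

def average(copies):
    """Average aligned copies element-wise (the core of averaging)."""
    n = len(copies)
    return [sum(vals) / n for vals in zip(*copies)]

def rms_error(est, sig):
    """Root-mean-square deviation from the true structure."""
    return (sum((e - s) ** 2 for e, s in zip(est, sig)) / len(sig)) ** 0.5

one = rms_error(noisy_copy(signal), signal)
avg = rms_error(average([noisy_copy(signal) for _ in range(100)]), signal)
# averaging 100 aligned copies shrinks the noise roughly tenfold (1/sqrt(N))
```

In the real workflow the hard part is the alignment itself; this sketch assumes perfectly aligned copies, which is what the picking and refinement steps work to approximate.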

An integrated study of the Early Cretaceous (Valanginian) reservoir from the Gamtoos Basin, offshore South Africa, with special reference to seismic facies, formation evaluation and static reservoir modeling

Ayodele, Oluwatoyin January 2019 (has links)
Philosophiae Doctor - PhD / Integrated approaches to petroleum exploration have become increasingly significant in recent times and have yielded much better results, as oil exploration combines several related disciplines. Production capacity in hydrocarbon exploration has been a major concern for oil and gas companies. In the present work, seismic data, well logs and biostratigraphy were integrated to predict the depositional environment and to understand the heterogeneity within the reservoirs of Valanginian (Early Cretaceous) age in the Gamtoos Basin, offshore South Africa. The integrated work was based mainly on seismic stratigraphy (seismic sequence and seismic facies analysis) for interpretation of the depositional environments, combined with microfossil biostratigraphic inputs. The biostratigraphic study provides evidence of palaeo-depth from benthic foraminifera, information about bottom conditions within the sedimentary basin, and changes in depositional depth during the gradual basinal fill in Valanginian time. The petrophysical characterization of the reservoir succession was based on formation evaluation using well logs to investigate the hydrocarbon potential of the reservoir across the Valanginian depositional sequence. Further, static modeling proceeded from 2D seismic data, interpreted into a geological map, to 3D numerical modeling with a stochastic model, quantifying uncertainty for accurate characterisation of the reservoir sandstones and providing a better understanding of the spatial distribution of the discrete and continuous petrophysical properties within the study area.
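One standard step in the formation-evaluation workflow described above is estimating water saturation from porosity and resistivity logs with Archie's equation. The equation itself is textbook material; the parameter values below are common defaults chosen for illustration, not values from this study.

```python
def archie_sw(phi, rw, rt, a=1.0, m=2.0, n=2.0):
    """Archie water saturation: Sw = ((a * Rw) / (phi**m * Rt)) ** (1/n).

    phi: porosity (fraction), rw: formation-water resistivity (ohm-m),
    rt: true formation resistivity (ohm-m); a, m, n are the tortuosity
    factor, cementation exponent, and saturation exponent (defaults are
    common clean-sandstone values).
    """
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

# Hypothetical log readings: 25% porosity, Rw = 0.05 ohm-m, Rt = 8 ohm-m.
sw = archie_sw(phi=0.25, rw=0.05, rt=8.0)
hydrocarbon_saturation = 1.0 - sw  # pore volume not occupied by water
```

A low computed Sw over a sandstone interval is what flags it as hydrocarbon-bearing; the stochastic static model then distributes such petrophysical properties spatially between wells.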

Pre-Illinoian Glaciation and Landscape Evolution in the Cincinnati, Ohio / Northern Kentucky Region

Nealon, John S. 27 September 2013 (has links)
No description available.

Dual-Attention Generative Adversarial Network and Flame and Smoke Analysis

Li, Yuchuan 30 September 2021 (has links)
Flame and smoke image processing and analysis can improve the detection of smoke and fire and the identification of complicated fire hazards, ultimately helping firefighters fight fires safely. Deep learning applied to image processing has become prevalent across image-related research fields in recent years, and fire safety researchers have brought it into their studies because of its leading performance in image-related tasks and statistical analysis. From the perspective of input data type, traditional fire research is based on simple mathematical regressions or empirical correlations relying on sensor data, such as temperature. Deep learning, however, can analyze data from advanced vision devices and sensors directly, going beyond an auxiliary role in data processing and analysis; it has greater capacity for non-linear problems, especially in high-dimensional spaces such as flame and smoke image processing. We propose a video-based real-time smoke and flame analysis system built on deep learning networks and fire safety knowledge. It takes videos of fire as input and produces analysis and predictions of flashover. Our system consists of four modules. The Color2IR Conversion module uses deep neural networks to convert RGB video frames into InfraRed (IR) frames, which provide important thermal information about the fire. Thermal information is critical for fire hazard detection; for example, 600 °C marks the start of a flashover. Because RGB cameras cannot capture thermal information, we propose an image conversion module from RGB to IR images. The core of this conversion is a new network we propose: the Dual-Attention Generative Adversarial Network (DAGAN), trained on pairs of RGB and IR images. Next, a Video Semantic Segmentation Module extracts flame and smoke areas from the scene in the RGB video frames.
We innovated by using synthetic RGB video data generated and captured from 3D modeling software for data augmentation. After that, a Video Prediction Module takes the RGB video frames and IR frames as input and produces predictions of the subsequent frames of their scenes. Finally, a Fire Knowledge Analysis Module predicts whether flashover is coming, based on fire knowledge criteria such as thermal information extracted from IR images, the temperature increase rate, the flashover occurrence temperature, and the increase rate of the lowest temperature. For our contributions and innovations, we introduce a novel network, DAGAN, applying foreground and background attention mechanisms in the image conversion module to help reduce the hardware requirements for flashover prediction. We also combine thermal information from IR images with segmentation information from RGB images for flame and smoke analysis, and we apply a hybrid design of deep neural networks and a knowledge-based system to achieve high accuracy. Moreover, data augmentation is applied to the Video Semantic Segmentation Module by introducing synthetic video data for training. The test results for flashover prediction show that our system leads both quantitatively and qualitatively across various metrics compared with existing approaches. It can give a flashover prediction with 94.5% accuracy as early as 51 seconds before flashover happens.
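The rule-based side of a Fire Knowledge Analysis Module can be sketched as a check on IR-derived temperatures against criteria like the ~600 °C flashover onset mentioned above. The exact rule set, the rate threshold, and the 80%-of-onset guard below are illustrative assumptions, not the criteria implemented in the thesis.

```python
def flashover_warning(temps_c, interval_s=1.0,
                      onset_c=600.0, rate_c_per_s=10.0):
    """Warn when the latest IR temperature reaches the flashover onset,
    or when the recent heating rate is steep while the temperature is
    already near onset (here: within 80% of it, an assumed guard)."""
    if temps_c[-1] >= onset_c:
        return True
    rate = (temps_c[-1] - temps_c[-2]) / interval_s
    return rate >= rate_c_per_s and temps_c[-1] >= 0.8 * onset_c

# Two consecutive per-frame temperature readings (°C), one second apart.
flashover_warning([590.0, 605.0])   # onset reached
flashover_warning([480.0, 495.0])   # fast heating near onset
flashover_warning([200.0, 203.0])   # neither criterion met
```

In the full system such a rule would consume temperatures estimated from the DAGAN-converted IR frames rather than direct sensor readings, which is what makes the RGB-to-IR conversion step load-bearing.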

DEMOCRATISING DEEP LEARNING IN MICROBIAL METABOLITES RESEARCH / DEMOCRATISING DEEP LEARNING IN NATURAL PRODUCTS RESEARCH

Dial, Keshav January 2023 (has links)
Deep learning models are dominating performance across a wide variety of tasks. From protein folding to computer vision to voice recognition, deep learning is changing the way we interact with data. The field of natural products, and more specifically genomic mining, has been slow to adopt these new technological innovations. As we are in the midst of a data explosion, this is not for lack of training data; rather, it is due to the lack of a blueprint demonstrating how to integrate these models correctly to maximise performance and inference. During my PhD, I showcase the use of large language models across a variety of data domains to improve common workflows in natural product drug discovery. I improved natural product scaffold comparison by representing molecules as sentences. I developed a series of deep learning models to replace archaic technologies and create a more scalable genomic mining pipeline, decreasing running times eightfold. I integrated deep learning-based genomic and enzymatic inference into legacy tooling to improve the quality of short-read assemblies. I also demonstrate how intelligent querying of multi-omic datasets can facilitate gene-signature prediction of encoded microbial metabolites. The models and workflows I developed are broad in scope, with the hope of blueprinting how these industry-standard tools can be applied across the entirety of natural product drug discovery. / Thesis / Doctor of Philosophy (PhD)
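"Representing molecules as sentences" can be sketched by tokenizing SMILES strings, the text notation that makes molecules amenable to language models. The real work uses a learned language model for scaffold comparison; the coarse tokenizer and the Jaccard overlap below are only an illustrative stand-in.

```python
import re

# Coarse SMILES tokenizer (an assumption, not the thesis tokenizer):
# bracket atoms first, then two-letter elements, then single characters
# (bonds, ring digits, branches, single-letter atoms).
TOKEN = re.compile(r"\[[^\]]+\]|Cl|Br|.")

def tokens(smiles):
    """Split a SMILES string into 'words' of the molecular sentence."""
    return TOKEN.findall(smiles)

def jaccard(a, b):
    """Token-set overlap between two molecular sentences."""
    sa, sb = set(tokens(a)), set(tokens(b))
    return len(sa & sb) / len(sa | sb)

ethanol = "CCO"
acetic_acid = "CC(=O)O"
similarity = jaccard(ethanol, acetic_acid)
```

A learned model would replace the set overlap with embeddings of these token sequences, so that chemically similar scaffolds land near each other even when they share few literal tokens.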
