  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
281

Dual-Attention Generative Adversarial Network and Flame and Smoke Analysis

Li, Yuchuan 30 September 2021 (has links)
Flame and smoke image processing and analysis can improve the detection of smoke and fire and the identification of complicated fire hazards, ultimately helping firefighters fight fires safely. Deep learning applied to image processing has been prevalent in recent years across image-related research fields, and fire safety researchers have brought it into their studies because of its leading performance in image-related tasks and statistical analysis. From the perspective of input data type, traditional fire research is based on simple mathematical regressions or empirical correlations that rely on sensor data, such as temperature. Deep learning, however, can analyze data from advanced vision devices or sensors beyond an auxiliary role in data processing and analysis, and has a greater capacity for non-linear problems, especially in high-dimensional spaces such as flame and smoke image processing. We propose a video-based real-time smoke and flame analysis system built on deep learning networks and fire safety knowledge. It takes videos of fire as input and produces analysis of and predictions for flashover. Our system consists of four modules. The Color2IR Conversion module uses deep neural networks to convert RGB video frames into infrared (IR) frames, which provide important thermal information about the fire. Thermal information is critical for fire hazard detection; for example, 600 °C marks the start of a flashover. Because RGB cameras cannot capture thermal information, we propose an image conversion module from RGB to IR images. The core of this conversion is a new network we propose, the Dual-Attention Generative Adversarial Network (DAGAN), trained on pairs of RGB and IR images. Next, the Video Semantic Segmentation module extracts flame and smoke areas from the scene in the RGB video frames.
For data augmentation, we use synthetic RGB video data generated and captured from 3D modeling software. After that, a Video Prediction module takes the RGB and IR frames as input and predicts the subsequent frames of their scenes. Finally, a Fire Knowledge Analysis module predicts whether flashover is imminent, based on fire knowledge criteria such as thermal information extracted from the IR images, the temperature increase rate, the flashover occurrence temperature, and the increase rate of the lowest temperature. Our contributions and innovations include: a novel network, DAGAN, which applies foreground and background attention mechanisms in the image conversion module to help reduce the hardware requirements for flashover prediction; the combination of thermal information from IR images and segmentation information from RGB images for flame and smoke analysis; a hybrid design of deep neural networks and a knowledge-based system to achieve high accuracy; and data augmentation for the Video Semantic Segmentation module by introducing synthetic video data for training. Test results for flashover prediction show that our system leads both quantitatively and qualitatively across various metrics compared with other existing approaches: it can predict flashover up to 51 seconds before it happens with 94.5% accuracy.
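The knowledge-based criterion described above can be sketched as a simple rule over an upper-layer temperature trace. Only the 600 °C flashover temperature comes from the abstract; the rise-rate cutoff and the 60-second projection horizon below are illustrative assumptions, not values from the thesis:

```python
def flashover_risk(temps_c, dt_s, threshold_c=600.0, rate_c_per_s=2.0):
    """Return True if the temperature trace suggests imminent flashover.

    temps_c: recent upper-layer temperatures (deg C), oldest first.
    dt_s: sampling interval in seconds.
    The 600 C threshold follows the abstract; the rise-rate cutoff and
    60 s projection horizon are illustrative placeholders.
    """
    if len(temps_c) < 2:
        return False
    latest = temps_c[-1]
    # Average rise rate over the window, in deg C per second.
    rate = (temps_c[-1] - temps_c[0]) / (dt_s * (len(temps_c) - 1))
    # Flag either crossing the flashover temperature, or a rapid rise
    # projected to reach it within the next 60 seconds.
    return latest >= threshold_c or (
        rate > rate_c_per_s and latest + 60 * rate >= threshold_c
    )
```

In the full system this rule would consume temperatures extracted from the DAGAN-converted IR frames rather than direct sensor readings.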
282

DEMOCRATISING DEEP LEARNING IN MICROBIAL METABOLITES RESEARCH / DEMOCRATISING DEEP LEARNING IN NATURAL PRODUCTS RESEARCH

Dial, Keshav January 2023 (has links)
Deep learning models dominate performance across a wide variety of tasks. From protein folding to computer vision to voice recognition, deep learning is changing the way we interact with data. The field of natural products, and more specifically genomic mining, has been slow to adopt these new technological innovations. As we are in the midst of a data explosion, this is not for lack of training data; rather, it is due to the lack of a blueprint demonstrating how to integrate these models correctly to maximise performance and inference. During my PhD, I showcase the use of large language models across a variety of data domains to improve common workflows in natural product drug discovery. I improved natural product scaffold comparison by representing molecules as sentences. I developed a series of deep learning models to replace archaic technologies and create a more scalable genomic mining pipeline, decreasing running times eightfold. I integrated deep learning-based genomic and enzymatic inference into legacy tooling to improve the quality of short-read assemblies. I also demonstrate how intelligent querying of multi-omic datasets can facilitate gene signature prediction for encoded microbial metabolites. The models and workflows I developed are broad in scope, with the aim of providing a blueprint for how these industry-standard tools can be applied across the entirety of natural product drug discovery. / Thesis / Doctor of Philosophy (PhD)
283

The Evaluation of Current Spiking Neural Network Conversion Methods in Radar Data

Smith, Colton C. January 2021 (has links)
No description available.
284

Map-Based Trajectory Learning for Geolocalization using Deep Learning

Zha, Bing January 2021 (has links)
No description available.
285

Human gait movement analysis using wearable solutions and Artificial Intelligence

Davarzani, Samaneh 09 December 2022 (has links) (PDF)
Gait recognition systems have gained tremendous attention due to their potential applications in healthcare, criminal investigation, sports biomechanics, and more. Wearable sensors integrated into wearable objects or mobile devices offer a new solution to gait recognition tasks. In this research, a sock prototype with embedded soft robotic sensors (SRS) was implemented to measure foot-ankle kinematic and kinetic data during three experiments designed to track participants' foot-ankle movement. Deep learning and statistical methods were employed to model the SRS data against a motion capture (MoCap) system and determine their ability to provide accurate kinematic and kinetic data from SRS measurements. In the first study, the capacitance of the SRS related to basic foot-ankle movements was quantified during the gait of twenty participants on a flat surface and a cross-sloped surface. In a second study on kinematic features, deep learning models were trained to estimate the joint angles in the sagittal and frontal planes as measured by a MoCap system. Participant-specific models were established for ten healthy subjects walking on a treadmill. The prototype was tested at various walking speeds to assess its ability to track movements at multiple speeds and to generalize models for estimating joint angles in the sagittal and frontal planes. The last study focuses on kinetic features, with the goal of determining the validity of the SRS measurements; to this end, pressure data measured with the SRS embedded in the sock prototype were compared with force plate data.
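The participant-specific modeling above maps sensor capacitance to MoCap joint angles. As a deliberately simple, hedged stand-in for the deep models in the study, the sketch below fits a one-channel ordinary-least-squares calibration from synchronized (hypothetical) SRS and MoCap samples:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b on one sensor channel.

    A deliberately simple stand-in for the participant-specific deep
    models in the study, fit on synchronized SRS/MoCap samples.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical calibration data: capacitance readings vs MoCap ankle
# angle in degrees (illustrative values, not from the thesis).
cap = [1.0, 1.2, 1.4, 1.6]
angle = [0.0, 5.0, 10.0, 15.0]
a, b = fit_linear(cap, angle)
predict = lambda c: a * c + b  # estimated ankle angle for a new reading
```

The deep models replace this single linear map with a nonlinear, multi-channel mapping, but the calibration workflow (fit against MoCap, then predict from SRS alone) is the same.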
286

Software Requirements Classification Using Word Embeddings and Convolutional Neural Networks

Fong, Vivian Lin 01 June 2018 (has links) (PDF)
Software requirements classification, the practice of categorizing requirements by their type or purpose, can improve organization and transparency in the requirements engineering process and thus promote requirement fulfillment and software project completion. Automating requirements classification is a prominent area of research, as automation can alleviate the tediousness of manual labeling and reduce its reliance on domain expertise. This thesis explores the application of deep learning techniques to software requirements classification, specifically the use of word embeddings for document representation when training a convolutional neural network (CNN). As past research mainly utilizes information retrieval and traditional machine learning techniques, we explore the potential of deep learning on this particular task. With the support of learning libraries such as TensorFlow and Scikit-Learn and word embedding models such as word2vec and fastText, we build a Python system that trains and validates configurations of Naïve Bayes and CNN requirements classifiers. Applying our system to a suite of experiments on two well-studied requirements datasets, we recreate or establish the Naïve Bayes baselines and evaluate the impact of CNNs equipped with word embeddings trained from scratch versus word embeddings pre-trained on Big Data.
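The Naïve Bayes baseline mentioned above can be sketched from scratch. This is a minimal multinomial Naïve Bayes with add-one smoothing over token counts; the requirement sentences and labels are invented for illustration and are not from the thesis's datasets:

```python
from collections import Counter, defaultdict
import math

class NaiveBayesClassifier:
    """Tiny multinomial Naive Bayes with add-one smoothing: a sketch of
    the baseline approach, not the thesis's actual system."""

    def fit(self, docs, labels):
        self.classes = set(labels)
        self.prior = Counter(labels)          # class frequencies
        self.counts = defaultdict(Counter)    # per-class token counts
        self.vocab = set()
        for doc, lab in zip(docs, labels):
            for tok in doc.lower().split():
                self.counts[lab][tok] += 1
                self.vocab.add(tok)
        return self

    def predict(self, doc):
        toks = doc.lower().split()
        def score(lab):
            # Log prior plus smoothed log likelihood of each token.
            total = sum(self.counts[lab].values()) + len(self.vocab)
            s = math.log(self.prior[lab])
            for t in toks:
                s += math.log((self.counts[lab][t] + 1) / total)
            return s
        return max(self.classes, key=score)

# Hypothetical labeled requirements (illustrative, not from the datasets).
reqs = ["the system shall encrypt all stored data",
        "the ui shall respond within two seconds",
        "data shall be encrypted in transit",
        "pages shall load within one second"]
labels = ["security", "performance", "security", "performance"]
clf = NaiveBayesClassifier().fit(reqs, labels)
```

The CNN variant replaces the bag-of-tokens representation with word-embedding matrices fed through convolutional layers, which is where pre-trained word2vec or fastText vectors enter the picture.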
287

GRAPH NEURAL NETWORKS BASED ON MULTI-RATE SIGNAL DECOMPOSITION FOR BEARING FAULT DIAGNOSIS

Guanhua Zhu (15454712) 12 May 2023 (has links)
Roller bearings are common components used in mechanical systems for mechanical processing and production. The running state of roller bearings often determines the machining accuracy and productivity of a manufacturing line, and roller bearing failure may lead to the shutdown of production lines, resulting in serious economic losses. Research on roller bearing fault diagnosis therefore has great value. This thesis first proposes a method of frequency spectral resampling to tackle the problem of bearing fault detection at different rotating speeds using a single-speed dataset for training a network such as a one-dimensional convolutional neural network (1D CNN). Second, it proposes a technique to connect the graph structures constructed from spectral components of the different bearing fault frequency bands into a sparse graph structure, so that fault identification can be carried out effectively by a graph neural network in terms of both computation load and classification rate. Finally, the frequency spectral resampling method for feature extraction is validated using our self-collected datasets, and the performance of the graph neural network with our proposed sparse graph structure is validated using the Case Western Reserve University (CWRU) dataset as well as our self-collected datasets. The results show that our proposed method achieves higher bearing fault classification accuracy than those recently proposed by other researchers using machine learning approaches and neural networks.
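The idea behind speed-normalizing spectral resampling is that bearing fault frequencies scale with shaft speed, so a spectrum recorded at one speed can be rescaled to a reference speed and re-sampled on a common grid. The sketch below is a hedged, minimal version using linear interpolation; the thesis's exact procedure may differ:

```python
def resample_spectrum(freqs, mags, speed_hz, ref_speed_hz, out_freqs):
    """Rescale a magnitude spectrum from one shaft speed to a reference
    speed, then sample it on a common frequency grid by linear
    interpolation. A minimal sketch of speed-normalizing resampling.
    """
    scale = ref_speed_hz / speed_hz
    scaled = [f * scale for f in freqs]  # fault peaks move with speed
    out = []
    for f in out_freqs:
        if f <= scaled[0]:
            out.append(mags[0])          # clamp below the grid
        elif f >= scaled[-1]:
            out.append(mags[-1])         # clamp above the grid
        else:
            # Locate the bracketing bins and interpolate linearly.
            i = next(k for k in range(1, len(scaled)) if scaled[k] >= f)
            t = (f - scaled[i - 1]) / (scaled[i] - scaled[i - 1])
            out.append(mags[i - 1] + t * (mags[i] - mags[i - 1]))
    return out
```

With every spectrum mapped to the reference speed, a 1D CNN trained on a single-speed dataset can be applied to measurements taken at other speeds.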
288

Assessing the Streamline Plausibility Through Convex Optimization for Microstructure Informed Tractography (COMMIT) with Deep Learning

Wan, Xinyi January 2023 (has links)
Tractography is widely used in brain connectivity studies based on diffusion magnetic resonance imaging data. However, the lack of ground truth and the abundance of anatomically implausible streamlines in tractograms have raised challenges and concerns for applications such as brain connectivity studies. Tractogram filtering methods have been developed to remove faulty connections. In this study, we focus on one of these filtering methods, Convex Optimization Modeling for Microstructure Informed Tractography (COMMIT), which tries to find the set of streamlines that best reconstructs the diffusion magnetic resonance imaging data with a global optimization approach. This method is biased when assessing individual streamlines, so a method named randomized COMMIT (rCOMMIT) is proposed to obtain multiple assessments for each streamline. The resulting acceptance rate divides the streamlines into three groups, which are regarded as pseudo ground truth from rCOMMIT. Neural networks can therefore be trained on this pseudo ground truth for classification tasks. The trained classifiers distinguish the obtained groups of plausible and implausible streamlines with an accuracy of around 77%. Following the same methodology, the results from rCOMMIT and randomized SIFT are compared, and the intersections between the two methods are analyzed with neural networks as well, achieving an accuracy of around 87% on the binary task of separating plausible from implausible streamlines.
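The acceptance-rate grouping above can be sketched directly: each streamline accumulates keep/reject votes across randomized COMMIT runs, and its acceptance rate assigns it to a pseudo-ground-truth group. The thresholds below are illustrative assumptions, not values from the thesis:

```python
def acceptance_groups(assessments, low=0.3, high=0.7):
    """Split streamlines into pseudo-ground-truth groups by acceptance
    rate across randomized COMMIT runs.

    assessments: dict mapping streamline id -> list of 0/1 keep votes.
    Thresholds `low`/`high` are illustrative, not from the thesis.
    Returns (implausible, uncertain, plausible) id lists.
    """
    implausible, uncertain, plausible = [], [], []
    for sid, votes in assessments.items():
        rate = sum(votes) / len(votes)  # fraction of runs keeping it
        if rate < low:
            implausible.append(sid)
        elif rate > high:
            plausible.append(sid)
        else:
            uncertain.append(sid)
    return implausible, uncertain, plausible
```

The plausible and implausible groups then serve as positive and negative labels for training the streamline classifiers.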
289

Extracting Topography from Historic Topographic Maps Using GIS-Based Deep Learning

Pierce, Briar Z, Ernenwein, Eileen G 25 April 2023 (has links)
Historical topographic maps are valuable resources for studying past landscapes, but two-dimensional cartographic features are unsuitable for geospatial analysis. They must be extracted and converted into digital formats. This has been accomplished by researchers using sophisticated image processing and pattern recognition techniques, and more recently, artificial intelligence. While these methods are sometimes successful, they require a high level of technical expertise, limiting their accessibility. This research presents a straightforward method practitioners can use to create digital representations of historical topographic data within commercially available Geographic Information Systems (GIS) software. This study uses convolutional neural networks to extract elevation contour lines from a 1940 United States Geological Survey (USGS) topographic map in Sevier County, TN, ultimately producing a Digital Elevation Model (DEM). The topographically derived DEM (TOPO-DEM) is compared to a modern LiDAR-derived DEM to analyze its quality and utility. GIS-capable historians, archaeologists, geographers, and others can use this method in their research and land management practices.
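Once contour lines are extracted, producing a DEM amounts to interpolating elevation between contour samples. As a hedged illustration (GIS software offers far more sophisticated interpolators, and the study's exact method is not specified here), the sketch below estimates elevation at a point by inverse-distance weighting over (x, y, elevation) samples taken along the extracted contours:

```python
def idw_elevation(x, y, contour_pts, power=2.0):
    """Inverse-distance-weighted elevation at (x, y) from points sampled
    along extracted contour lines. A simple stand-in for the GIS
    interpolators the study would use.

    contour_pts: list of (px, py, elevation) tuples.
    """
    num = den = 0.0
    for px, py, z in contour_pts:
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0.0:
            return z  # the query point sits exactly on a contour sample
        w = 1.0 / d2 ** (power / 2.0)  # closer samples weigh more
        num += w * z
        den += w
    return num / den
```

Evaluating this at every cell of a regular grid yields a raster surface comparable, in spirit, to the TOPO-DEM the study checks against LiDAR.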
290

SINGLE MOLECULE ANALYSIS AND WAVEFRONT CONTROL WITH DEEP LEARNING

Peiyi Zhang (15361429) 27 April 2023 (has links)
Analyzing single molecule emission patterns plays a critical role in retrieving the structural and physiological information of their tagged targets and, further, in understanding their interactions and cellular context. These emission patterns of tiny light sources (i.e., point spread functions, PSFs) simultaneously encode information such as the molecule's location and orientation, the environment within the specimen, and the paths the emitted photons took before being captured by the camera. However, retrieving multiple classes of information beyond 3D position from complex or high-dimensional single molecule data remains challenging, due to the difficulty of perceiving and summarizing a comprehensive yet succinct model. We developed smNet, a deep neural network that can extract multiplexed information near the theoretical limit from both complex and high-dimensional point spread functions. Through simulated and experimental data, we demonstrated that smNet can be trained to efficiently extract both molecular and specimen information, such as molecule location, dipole orientation, and wavefront distortions, from complex and subtle features of the PSFs that are otherwise considered too complex for established algorithms.
Single molecule localization microscopy (SMLM) forms super-resolution images with a resolution of several to tens of nanometers, relying on accurate localization of molecules' 3D positions from isolated single molecule emission patterns. However, inhomogeneous refractive indices distort and blur single molecule emission patterns, reduce the information content carried by each detected photon, and increase localization uncertainty, causing significant resolution loss that is irreversible by post-processing. To compensate for tissue-induced aberrations, conventional sensorless adaptive optics methods rely on iterative mirror changes and image-quality metrics. But these metrics produce inconsistent, and sometimes opposite, responses, which fundamentally limits the efficacy of these approaches for aberration correction in tissues. Bypassing the previous iterative trial-then-evaluate process, we developed deep-learning-driven adaptive optics (DL-AO) for single molecule localization microscopy, which directly infers wavefront distortion and compensates for it in near real time during data acquisition. Our trained deep neural network monitors the individual emission patterns from single molecule experiments, infers their shared wavefront distortion, feeds the estimates through a dynamic (Kalman) filter, and drives a deformable mirror to compensate for sample-induced aberrations. We demonstrated that DL-AO restores single molecule emission patterns to near specimen-free conditions and improves the resolution and fidelity of 3D SMLM through more than 130 µm of brain tissue, with as few as 3-20 mirror changes.
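The dynamic filtering step in the DL-AO loop can be sketched for a single wavefront mode: noisy per-frame network estimates are smoothed by a one-dimensional Kalman filter before driving the mirror. The random-walk model and the noise variances below are illustrative assumptions, not parameters from the thesis:

```python
def kalman_smooth(measurements, q=1e-4, r=1e-2):
    """1-D Kalman filter smoothing noisy per-frame estimates of a single
    wavefront mode, in the spirit of the dynamic filtering step above.
    q (process) and r (measurement) variances are illustrative.
    """
    x, p = measurements[0], 1.0  # initial state and uncertainty
    out = [x]
    for z in measurements[1:]:
        p += q                    # predict: random-walk state model
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update toward the new estimate
        p *= (1.0 - k)            # shrink the posterior uncertainty
        out.append(x)
    return out
```

In the full system each smoothed mode estimate would be converted to deformable-mirror commands, trading a little latency for far more stable corrections than raw per-frame inferences.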
