351

Deep Self-Modeling for Robotic Systems

Kwiatkowski, Robert January 2022 (has links)
Just as self-awareness is important to higher-level human cognition, the ability to self-model is important to performing complex behaviors. I demonstrate that the power of these self-models grows with the complexity of the problems being solved, and that they thus provide a framework for higher-level cognition. I demonstrate that self-models can be used to control agents effectively and to improve on existing control algorithms, allowing agents to perform complex tasks. I further investigate new ways in which these self-models can be learned and applied to increase their efficacy and to improve their ability to generalize across tasks and bodies. Finally, I demonstrate the overall power of these self-models by showing that complex tasks can be completed with little data, across a variety of bodies, and using a number of algorithms.
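A minimal sketch (not the thesis code) of the idea the abstract describes: learn a forward self-model that predicts the robot's next state from its current state and a candidate action, then use the model to score candidate actions against a goal. The state/action dimensions, network size, and random-shooting action search below are illustrative assumptions.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 12, 4  # hypothetical robot state/action sizes

class SelfModel(nn.Module):
    """Forward self-model: predicts the next state from (state, action)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, STATE_DIM),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def choose_action(model, state, goal, n_candidates=128):
    """Pick the candidate action whose predicted outcome is closest to the goal."""
    candidates = torch.rand(n_candidates, ACTION_DIM) * 2 - 1   # actions in [-1, 1]
    states = state.expand(n_candidates, -1)
    with torch.no_grad():
        predicted = model(states, candidates)
    errors = ((predicted - goal) ** 2).sum(dim=-1)
    return candidates[errors.argmin()]

model = SelfModel()  # in practice trained on (state, action, next_state) rollouts
action = choose_action(model, torch.zeros(1, STATE_DIM), torch.ones(1, STATE_DIM))
```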
352

Development and Application of Tree Species Identification System Using UAV and Deep Learning / ドローンとディープラーニングを用いた樹種識別システムの開発及びその応用

Onishi, Masanori 23 March 2022 (has links)
Kyoto University / New-system doctoral program / Doctor of Agricultural Science / Kō No. 23944 / Nō-Haku No. 2493 / Shinsei||Nō||1090 (Main Library) / Dissertation||R4||N5379 (Faculty of Agriculture Library) / Division of Forest Science, Graduate School of Agriculture, Kyoto University / (Examiners) Prof. Naoko Tokuchi, Prof. Kanehiro Kitayama, Prof. Mamoru Kanzaki, Assoc. Prof. Takeshi Ise / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Agricultural Science / Kyoto University / DFAM
353

Image Segmentation Using Deep Learning

Akbari, Nasrin 27 September 2022 (has links)
The image segmentation task divides an image into regions of similar pixels based on brightness, color, and texture, with every pixel in the image assigned a label. Segmentation is vital in numerous medical imaging applications, such as quantifying tissue size, localizing disease, treatment planning, and surgery guidance. This thesis focuses on two medical image segmentation tasks: retinal vessel segmentation in fundus images and brain segmentation in 3D MRI images. Finally, we introduce LEON, a lightweight neural network for edge detection. The first part of this thesis proposes a lightweight neural network for retinal blood vessel segmentation. Our model achieves cutting-edge results with fewer parameters: we obtained the most outstanding performance on the CHASE_DB1 and DRIVE datasets, with F1 measures of 0.8351 and 0.8242, respectively. Our model has few parameters (0.34 million) compared with other networks such as LadderNet (1.5 million parameters) and DCU-Net (1 million parameters). The second part of this thesis investigates the association between whole and regional volumetric alterations and increasing age in a large group of healthy subjects (n = 6739, age range 30–80). We used a deep learning model for brain segmentation to extract quantified whole and regional brain volumes in 95 classes for volumetric analysis. Edge- or boundary-based segmentation methods rely on finding abrupt changes and discontinuities in intensity values. The third part of the thesis introduces a new Lightweight Edge Detection Network (LEON). The proposed approach integrates the advantages of deformable units and depthwise separable convolutions to create a lightweight backbone for efficient feature extraction. Our experiments on BSDS500 and NYUDv2 show that LEON, while requiring only 500,000 parameters, outperforms current lightweight edge detectors without using pre-trained weights. / Graduate / 2022-10-12
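A minimal sketch of a depthwise-separable convolution block, the main ingredient the abstract credits for LEON's small parameter count. This is not the authors' code; the layer sizes and input shape are illustrative.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        # Depthwise: one filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        # Pointwise: 1x1 convolution mixes channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

block = DepthwiseSeparableConv(64, 128)
print(sum(p.numel() for p in block.parameters()))  # ~9k params vs ~74k for a standard 3x3 conv
x = torch.randn(1, 64, 128, 128)
print(block(x).shape)  # torch.Size([1, 128, 128, 128])
```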
354

Use of Deep Learning in Detection of COVID-19 in Chest Radiography

Handrock, Sarah Nicole 01 August 2022 (has links)
This paper examines the use of convolutional neural networks to classify COVID-19 in chest radiographs. Three network architectures are compared: VGG16, ResNet-50, and DenseNet-121, along with preprocessing methods that include contrast-limited adaptive histogram equalization and non-local means denoising. Chest radiographs from patients with healthy lungs, lung cancer, non-COVID pneumonia, tuberculosis, and COVID-19 were used for training and testing. Networks trained on radiographs preprocessed with contrast-limited adaptive histogram equalization and non-local means denoising performed better than those trained on the original radiographs. DenseNet-121 performed slightly better than all other networks in terms of accuracy, overall performance, and F1 score, but was not found to perform statistically significantly better than VGG16.
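A sketch of the kind of preprocessing the abstract describes: CLAHE followed by non-local means denoising on a grayscale chest radiograph, using OpenCV. The clip limit, tile size, filter strength, output size, and file path are illustrative, not the thesis settings.

```python
import cv2

def preprocess_radiograph(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Contrast-limited adaptive histogram equalization
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img = clahe.apply(img)
    # Non-local means denoising
    img = cv2.fastNlMeansDenoising(img, None, h=10,
                                   templateWindowSize=7, searchWindowSize=21)
    # Resize to a typical input size for VGG16 / ResNet-50 / DenseNet-121
    return cv2.resize(img, (224, 224))

x = preprocess_radiograph("chest_xray.png")  # placeholder path
```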
355

Gait recognition using Deep Learning

Seger, Amanda January 2022 (has links)
Gait recognition is important for identifying suspects in criminal investigations. This study examines the potential of models based on transfer learning for this purpose, considering both supervised and unsupervised learning. For the supervised part, the data are labeled and we investigate how accurate the models can be and the impact of different walking conditions. For the unsupervised part, the data are unlabeled and we determine whether clustering can be used to identify groups of individuals without knowing who they are. Two deep learning models, InceptionV3 and ResNet50V2, are utilized, and the Gait Energy Image method is used as the gait representation. After optimization analysis, the models achieved a highest prediction accuracy of 100 percent when only normal walking conditions were included and 99.25 percent when different walking conditions, such as carrying a backpack or wearing a coat, were included, making them applicable to real-world investigations provided that the data are labeled. Because of the models' apparent sensitivity to varying camera angles, the clustering part resulted in an accuracy of approximately 30 percent. For unsupervised gait recognition to be applicable in the real world, additional enhancements are required.
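A sketch of the Gait Energy Image (GEI) representation named in the abstract: the pixel-wise average of size-normalized, centred binary silhouettes over a gait cycle, which is then fed to InceptionV3 or ResNet50V2 via transfer learning. Silhouette extraction and alignment are assumed to happen upstream; the shapes below are illustrative.

```python
import numpy as np

def gait_energy_image(silhouettes):
    """silhouettes: array of shape (n_frames, H, W) with values in {0, 1}."""
    return silhouettes.astype(np.float32).mean(axis=0)

frames = np.random.randint(0, 2, size=(30, 128, 88))  # stand-in for aligned silhouettes
gei = gait_energy_image(frames)                        # shape (128, 88), values in [0, 1]
```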
356

POCS Augmented CycleGAN for MR Image Reconstruction

Yang, Hanlu January 2020 (has links)
Traditional Magnetic Resonance Imaging (MRI) reconstruction methods, which may be highly time-consuming and sensitive to noise, depend heavily on solving nonlinear optimization problems. By contrast, deep learning (DL)-based reconstruction methods do not need an explicit analytical data model and are robust to noise because of their large-scale data-driven training, both of which make DL a versatile tool for fast and high-fidelity MR image reconstruction. While DL can be performed completely independently of traditional methods, it can in fact benefit from incorporating these established methods to achieve better results. To test this hypothesis, we proposed a hybrid DL-based MR image reconstruction method that combines two state-of-the-art deep learning networks, U-Net and a Generative Adversarial Network with cycle loss (CycleGAN), with a traditional reconstruction method: Projection Onto Convex Sets (POCS). Experiments were then performed to evaluate the method by comparing it with several existing state-of-the-art methods. Our results demonstrate that the proposed method outperformed the current state of the art in terms of peak signal-to-noise ratio (PSNR) and Structural Similarity Index (SSIM). / Electrical and Computer Engineering
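A sketch of the POCS-style data-consistency projection that such hybrid methods use to combine a network estimate with the measured k-space samples: transform the estimate to k-space, overwrite the sampled locations with the acquired data, and transform back. This is illustrative only; the thesis interleaves this kind of projection with U-Net/CycleGAN stages, and the mask and images below are synthetic.

```python
import numpy as np

def data_consistency(image_estimate, measured_kspace, sampling_mask):
    """Project a reconstruction onto the set of images consistent with the measured data."""
    k = np.fft.fft2(image_estimate)
    k = np.where(sampling_mask, measured_kspace, k)   # keep acquired samples where measured
    return np.abs(np.fft.ifft2(k))

H = W = 256
mask = np.random.rand(H, W) < 0.3          # hypothetical undersampling mask
truth = np.random.rand(H, W)               # stand-in for the true image
measured = np.fft.fft2(truth) * mask       # simulated undersampled acquisition
recon = data_consistency(np.random.rand(H, W), measured, mask)
```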
357

Multi-Platform Genomic Data Fusion with Integrative Deep Learning

Oni, Olatunji January 2019 (has links)
The abundance of next-generation sequencing (NGS) data has encouraged the adoption of machine learning methods to aid in the diagnosis and treatment of human disease. In particular, the last decade has shown the extensive use of predictive analytics in cancer research due to the prevalence of rich cellular descriptions of genetic and transcriptomic profiles of cancer cells. Despite the availability of wide-ranging forms of genomic data, few predictive models are designed to leverage multidimensional data sources. In this paper, we introduce a deep learning approach using neural network based information fusion to facilitate the integration of multi-platform genomic data, and the prediction of cancer cell sub-class. We propose the dGMU (deep gated multimodal unit), a series of multiplicative gates that can learn intermediate representations between multi-platform genomic data and improve cancer cell stratification. We also provide a framework for interpretable dimensionality reduction and assess several methods that visualize and explain the decisions of the underlying model. Experimental results on nine cancer types and four forms of NGS data (copy number variation, simple nucleotide variation, RNA expression, and miRNA expression) showed that the dGMU model improved the classification agreement of unimodal approaches and outperformed other fusion strategies in class accuracy. The results indicate that deep learning architectures based on multiplicative gates have the potential to expedite representation learning and knowledge integration in the study of cancer pathogenesis. / Thesis / Master of Science (MSc)
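A minimal two-modality gated multimodal unit (GMU) sketch, the kind of building block the abstract generalizes into the dGMU: each modality gets its own hidden representation, and a learned gate decides how much each contributes to the fused representation. The dimensions and the two-modality restriction are illustrative, not those of the thesis.

```python
import torch
import torch.nn as nn

class GatedMultimodalUnit(nn.Module):
    def __init__(self, dim_a, dim_b, hidden):
        super().__init__()
        self.proj_a = nn.Linear(dim_a, hidden)
        self.proj_b = nn.Linear(dim_b, hidden)
        self.gate = nn.Linear(dim_a + dim_b, hidden)

    def forward(self, x_a, x_b):
        h_a = torch.tanh(self.proj_a(x_a))
        h_b = torch.tanh(self.proj_b(x_b))
        z = torch.sigmoid(self.gate(torch.cat([x_a, x_b], dim=-1)))  # multiplicative gate
        return z * h_a + (1 - z) * h_b                               # fused representation

# e.g. fuse RNA-expression features with copy-number features before a classifier head
gmu = GatedMultimodalUnit(dim_a=2000, dim_b=500, hidden=128)
fused = gmu(torch.randn(8, 2000), torch.randn(8, 500))   # shape (8, 128)
```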
358

Multi-label Classification and Sentiment Analysis on Textual Records

Guo, Xintong January 2019 (has links)
In this thesis we present effective approaches for two classic Natural Language Processing tasks, Multi-label Text Classification (MLTC) and Sentiment Analysis (SA), based on two datasets. For MLTC, a robust deep learning approach based on a convolutional neural network (CNN) is introduced. We apply it to almost one million records with a related label list consisting of 20 labels. We divided our dataset into three parts: a training set, a validation set, and a test set. Our CNN-based model achieved strong results as measured by F1 score. For SA, the dataset was more informative and better structured than the MLTC dataset. A traditional word embedding method, Word2Vec, was used to generate a word vector for each text record. Following that, we employed several classic deep learning models, such as Bi-LSTM, RCNN, attention mechanisms, and CNN, to extract sentiment features. In the next step, a classification framework was designed for grading. Finally, the state-of-the-art language model BERT, which uses transfer learning, was employed. In conclusion, we compare the performance of the RNN-based models, the CNN-based models, and the pre-trained language model on the classification tasks and discuss their applicability. / Thesis / Master of Science in Electrical and Computer Engineering (MSECE) / This thesis proposes deep learning solutions to both the multi-label classification problem and the sentiment analysis problem.
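A sketch of a CNN-based multi-label text classifier of the kind the abstract describes: word embeddings, parallel 1D convolutions, max-over-time pooling, and one sigmoid output per label trained with binary cross-entropy. Only the 20-label output comes from the abstract; the vocabulary size, embedding size, and filter widths are assumptions.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size=50000, embed_dim=300, n_labels=20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, 100, k) for k in (3, 4, 5)])  # parallel filter widths
        self.fc = nn.Linear(300, n_labels)

    def forward(self, tokens):                        # tokens: (batch, seq_len)
        x = self.embed(tokens).transpose(1, 2)        # (batch, embed_dim, seq_len)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))      # raw logits, one per label

model = TextCNN()
logits = model(torch.randint(0, 50000, (4, 120)))
loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (4, 20)).float())
```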
359

Computational Models of Eye Movement / Modelos computacionales de movimiento ocular

Biondi, Juan Andrés 10 February 2021 (has links)
The analysis of eye movements poses a significant challenge given the large amount of information they contain. These movements provide numerous cues for studying various cognitive processes, considering, among other aspects, how and when information is encoded and which parts of the acquired data are used or ignored. Advancing the understanding of the processes involved in tasks with a high cognitive load can help in the early detection of neurodegenerative diseases such as Alzheimer's or Parkinson's disease. In turn, understanding these processes can broaden the study of a wide variety of topics related to the modeling and control of the human oculomotor system. During this doctoral thesis, three experiments were carried out using deep learning techniques and linear mixed-effects models to identify eye movement patterns from the study of controlled situations. The first experiment aims to differentiate healthy older adults from older adults with probable Alzheimer's disease, using deep learning with denoising sparse autoencoders and a classifier, based on eye movement information recorded during reading. The results, with 89.8% classification accuracy per sentence and 100% per subject, are satisfactory, suggesting that this technique is a feasible alternative for this task. The second experiment aims to demonstrate the feasibility of using pupil dilation as a cognitive marker, in this case through linear mixed-effects models. The results indicate that dilation is influenced by cognitive load, semantics, and sentence-specific characteristics, so it represents a viable alternative for cognitive analysis. The third and final experiment aims to test the effectiveness of recurrent neural networks with LSTM units in classifying age ranges corresponding to healthy young adults and healthy older adults from the analysis of pupil dynamics. The results show that this technique has high potential in this field, classifying young vs. older adults with a mean accuracy of 76.99% per sentence and 90.24% per subject, using information from the right eye or binocular information. The results of these studies support the claim that deep learning techniques that had not previously been explored for eye-tracking problems such as these constitute a major area of interest. / PARTIAL TEXT available during the remote-work period
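A sketch of the third experiment's idea: a recurrent classifier with LSTM units over a pupil-diameter time series, producing one logit per sequence (young vs. older adult). The sequence length, layer sizes, and single-feature input are illustrative assumptions, not the thesis configuration.

```python
import torch
import torch.nn as nn

class PupilLSTM(nn.Module):
    def __init__(self, n_features=1, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time_steps, n_features)
        _, (h_n, _) = self.lstm(x)         # final hidden state summarizes the sequence
        return self.head(h_n[-1])          # one logit per sequence

model = PupilLSTM()
pupil_series = torch.randn(16, 200, 1)     # 16 sentence readings, 200 samples each
logits = model(pupil_series)               # (16, 1); apply sigmoid + threshold to classify
```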
360

AI-ML Powered Pig Behavior Classification and Body Weight Prediction

Bharadwaj, Sanjana Manjunath 31 May 2024 (has links)
Precision livestock farming technologies have been widely researched over the last decade. These technologies help monitor animal health and welfare parameters in a continuous, automated fashion. Under this umbrella of precision livestock farming, this study focuses on activity classification and body weight prediction in pigs. Activity monitoring is essential for understanding the health and growth of pigs. To automate this task effectively, we propose efficient and accurate sensor-based deep learning (DL) solutions. Among these, the 2D Residual Network emerged as the best-performing model, achieving an accuracy of 95.6%. This accuracy was 15.6% higher than that of other machine learning approaches. Additionally, accurate pig weight estimation is crucial for pork production, as it provides valuable insights into growth rates, disease prevalence, and overall health. Traditional manual methods of estimating pig weights are time-consuming and labor-intensive. To address this issue, we propose a novel approach that utilizes deep learning techniques on depth images for weight prediction. Through a custom image preprocessing pipeline, we train DL models to extract meaningful information from depth images for weight prediction. Our findings show that XceptionNet gives promising results, with a mean absolute error of 2.82 kg and a mean absolute percentage error of 7.42%. In comparison, the best-performing statistical model, a support vector machine, achieved a mean absolute error of 4.51 kg and a mean absolute percentage error of 15.56%. / Master of Science / With the increasing demand for food production in recent decades, the livestock farming industry faces significant pressure to modernize its methods. Traditional manual tasks such as activity monitoring and body weight measurement have been time-consuming and labor-intensive. Moreover, manual handling of animals can cause stress, negatively affecting their health. To address these challenges, this study proposes deep learning-based solutions for both activity classification and automated body weight prediction. For activity classification, our solution incorporates strategic data preprocessing techniques. Among various learning techniques, our deep learning model, a 2D Residual Network, achieved an accuracy of 95.6%, surpassing other approaches by 15.6%. Furthermore, this study also compares statistical models with deep learning models for the body weight prediction task. Our analysis demonstrates that deep learning models outperform statistical models in terms of accuracy and inference time. Specifically, XceptionNet yielded promising results, with a mean absolute error of 2.82 kg and a mean absolute percentage error of 7.42%, outperforming the best statistical model by nearly 8%.
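A sketch of depth-image weight regression in the spirit of the abstract: a CNN backbone with a single regression output trained with an L1 loss against scale weights. The study's best model was XceptionNet; a torchvision ResNet-18 stands in here purely for illustration, and the channel handling, input size, and example weights are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)   # single output: predicted weight in kg

def predict_weight(depth_image):                      # depth_image: (batch, 1, H, W)
    x = depth_image.repeat(1, 3, 1, 1)                # replicate depth channel to 3 channels
    return backbone(x)

depth = torch.randn(4, 1, 224, 224)                   # stand-in preprocessed depth frames
weights_kg = predict_weight(depth)                    # (4, 1)
loss = nn.L1Loss()(weights_kg, torch.tensor([[95.], [102.], [88.], [110.]]))
```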
