SeedQuant: A Deep Learning-based Census Tool for Seed Germination of Root Parasitic Plants

Ramazanova, Merey 30 April 2020 (has links)
Witchweeds and broomrapes are root parasitic weeds that represent one of the main threats to global food security. By drastically reducing host crops' yield, the parasites are often responsible for enormous economic losses, estimated in billions of dollars annually. Parasitic plants rely on a chemical cue in the rhizosphere that indicates the presence of a host plant in proximity. Exploiting this host dependency, research on parasitic plants focuses on understanding the triggers necessary for parasitic seed germination, either to reduce germination in the presence of crops or to provoke germination in the absence of hosts (i.e., suicidal germination). For this purpose, a number of synthetic analogs and inhibitors have been developed, and their biological activities have been studied on parasitic plants around the world using various protocols. Current studies use germination-based bioassays, in which pre-conditioned parasitic seeds are placed in the presence of a chemical or plant root exudates, and the germination ratio is then assessed. Although these protocols are very sensitive at the chemical level, recording the germination rate is time consuming, represents a challenging task for researchers, and could easily be sped up by leveraging automated seed detection algorithms. To accelerate such protocols, we propose an automatic seed census tool built on recent advances in computer vision. We use a deep learning approach for object detection, the Faster R-CNN algorithm, to count seeds and discriminate germinated from non-germinated ones. Our method achieves 95% accuracy in counting seeds on completely new images and reduces the counting time by a significant margin, from 5 minutes to a fraction of a second per image. We believe our proposed software "SeedQuant" will be of great help in lab bioassays for performing large-scale chemical screening for parasitic seed applications.
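The counting step downstream of the detector can be sketched as follows. The dictionary format mimics a typical object-detection output (per-box labels and confidence scores); the class ids (1 = germinated, 2 = non-germinated) and the score threshold are illustrative assumptions, not the thesis's actual configuration.

```python
# Sketch of tallying germinated vs. non-germinated seeds from detector output.
# Class ids and threshold are assumptions for illustration.
def count_seeds(detections, score_threshold=0.5):
    germinated = non_germinated = 0
    for label, score in zip(detections["labels"], detections["scores"]):
        if score < score_threshold:
            continue  # discard low-confidence boxes
        if label == 1:
            germinated += 1
        elif label == 2:
            non_germinated += 1
    total = germinated + non_germinated
    ratio = germinated / total if total else 0.0
    return {"germinated": germinated, "non_germinated": non_germinated,
            "germination_ratio": ratio}

# Mock detector output for one image: one low-confidence box is ignored.
mock = {"labels": [1, 1, 1, 2, 2, 1], "scores": [0.9, 0.8, 0.95, 0.7, 0.6, 0.3]}
print(count_seeds(mock))  # germination_ratio = 3 / 5 = 0.6
```

Computing the germination ratio directly from filtered detections is what replaces the manual 5-minute count per image.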

An Empirical Study of the Distributed Ellipsoidal Trust Region Method for Large Batch Training

Alnasser, Ali 10 February 2021 (has links)
Neural network optimizers are dominated by first-order methods, due to their inexpensive computational cost per iteration. However, it has been shown that first-order optimization is prone to reaching sharp minima when trained with large batch sizes. As the batch size increases, the statistical stability of the problem increases, a regime that is well suited for second-order optimization methods. In this thesis, we study a distributed ellipsoidal trust region model for neural networks. We use a block-diagonal approximation of the Hessian, assigning consecutive layers of the network to each process, and solve in parallel for the update direction of each subset of the parameters. We show that our optimizer is fit for large batch training as well as for an increasing number of processes.
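The per-block computation can be sketched as below: each "process" owns one diagonal block of the Hessian (a subset of consecutive layers) and solves its own trust-region subproblem independently. The damped-Newton-plus-projection solve is a simplification of a proper subproblem solver, and the block sizes are arbitrary; this is a sketch of the decomposition, not the thesis's actual algorithm.

```python
import numpy as np

# Per-block trust-region step:  min g.T p + 0.5 p.T H p  s.t. ||p|| <= delta.
# Here: take a damped Newton step and shrink it onto the trust region if needed.
def block_trust_region_step(g, H, delta):
    p = -np.linalg.solve(H + 1e-8 * np.eye(len(g)), g)  # damped Newton step
    norm = np.linalg.norm(p)
    if norm > delta:
        p *= delta / norm  # project back onto the trust region
    return p

# Two blocks, as if two processes each owned a set of consecutive layers.
rng = np.random.default_rng(0)
blocks = []
for n in (3, 4):
    A = rng.standard_normal((n, n))
    H = A @ A.T + n * np.eye(n)      # SPD piece of the block-diagonal Hessian
    g = rng.standard_normal(n)
    blocks.append(block_trust_region_step(g, H, delta=0.5))

steps = np.concatenate(blocks)       # block updates solved in parallel, then joined
```

Because the blocks never couple, each process only ever factors its own (small) Hessian block, which is what makes the scheme scale with the number of processes.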

Comparing a gang-like scheduler with the default Kubernetes scheduler in a multi-tenant serverless distributed deep learning training environment

Lövenvald, Frans-Lukas January 2021 (has links)
Systems for running distributed deep learning training on the cloud have recently been developed. An important component of a distributed deep learning job handler is its resource allocation scheduler, which allocates computing resources to parts of a distributed training architecture. In this thesis, a serverless distributed deep learning job handler using Kubernetes was built to compare job completion times under two different Kubernetes schedulers: the default Kubernetes scheduler and a gang-like custom scheduler. These schedulers were compared through experiments with different configurations of deep learning models, resource count selection, and number of concurrent jobs. No significant difference in job completion time between the schedulers could be found. However, the gang scheduler showed two benefits over the default scheduler. First, it prevents resource deadlocks, where one or more jobs lock resources but are unable to start. Second, it reduces the risk of epoch straggling, where a job is allocated too few workers to complete epochs in a reasonable time, which prevents other jobs from using the resources locked by the straggler job.
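The all-or-nothing admission rule that distinguishes gang scheduling can be sketched as below. The resource model (free GPUs per node) and the most-free-first placement heuristic are simplifications invented for illustration; a real Kubernetes scheduler tracks far richer state.

```python
# Gang-scheduling sketch: a job is placed only if ALL of its workers fit at
# once; otherwise nothing is reserved, avoiding partial-allocation deadlocks.
def gang_schedule(job_workers, free_gpus_per_node):
    nodes = dict(free_gpus_per_node)   # tentative copy; commit only on success
    placement = []
    for _ in range(job_workers):
        node = max(nodes, key=nodes.get)   # most-free-first heuristic
        if nodes[node] == 0:
            return None                    # full gang can't fit: admit nothing
        nodes[node] -= 1
        placement.append(node)
    free_gpus_per_node.update(nodes)       # commit the reservation atomically
    return placement

cluster = {"node-a": 2, "node-b": 1}
print(gang_schedule(3, cluster))  # fits: all 3 workers placed
print(gang_schedule(1, cluster))  # cluster now full: rejected, nothing locked
```

The default Kubernetes scheduler, by contrast, places pods one at a time, which is exactly what allows a job to lock some resources while waiting indefinitely for the rest.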

Deep learning methods for predicting flows in power grids: novel architectures and algorithms

Donnot, Benjamin 13 February 2019 (has links)
This thesis addresses problems of security in the French power grid operated by RTE, the French Transmission System Operator (TSO). Progress in sustainable energy, electricity market efficiency, and novel consumption patterns push TSOs to operate the grid closer to its security limits. To this end, it is essential to make the grid "smarter". To tackle this issue, this work explores the benefits of artificial neural networks. We propose novel deep learning algorithms and architectures, which we call "guided dropout", to assist the decisions of human operators (TSO dispatchers) by predicting the power flows that follow a deliberate or accidental modification of the grid. This is achieved by separating the different inputs: continuous data (productions and consumptions) are introduced in a standard way, via the neural network's input layer, while discrete data (grid topologies) are encoded directly in the neural network architecture. This architecture is modified dynamically based on the power grid topology by switching the activation of hidden units on or off. The main advantage of this technique lies in its ability to predict the flows even for previously unseen grid topologies. Guided dropout achieves high accuracy (up to 99% precision for flow predictions) with a 300-fold speedup over physical grid simulators based on Kirchhoff's laws, even for unseen contingencies and without detailed knowledge of the grid structure. We also showed that guided dropout can be used to rank potential contingencies by order of severity.
In this application, we demonstrated that our algorithm attains the same risk as currently implemented policies while requiring only 2% of today's computational budget. The ranking remains relevant even for grid cases never seen before, and can be used to obtain an overall estimate of the global security of the power grid.
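The core mechanism, a topology-dependent mask on the hidden units, can be sketched as follows. The layer sizes, the one-hot-style mask encoding of the topology, and the tiny two-layer network are all illustrative assumptions, not the thesis's actual architecture.

```python
import numpy as np

# "Guided dropout" sketch: continuous inputs pass through a standard layer,
# while the discrete grid topology selects which hidden units are active.
rng = np.random.default_rng(42)
n_in, n_hidden = 8, 16
W1 = rng.standard_normal((n_in, n_hidden)) * 0.1
W2 = rng.standard_normal((n_hidden, 1)) * 0.1

def forward(x, topology_mask):
    h = np.maximum(x @ W1, 0.0)       # hidden activations (ReLU)
    h = h * topology_mask             # topology switches units on/off
    return h @ W2

x = rng.standard_normal(n_in)         # injections/consumptions (continuous data)
base = np.ones(n_hidden)              # reference topology: all units active
modified = base.copy()
modified[4:8] = 0.0                   # e.g. a line disconnected somewhere in the grid

y_ref = forward(x, base)
y_mod = forward(x, modified)          # same weights, different topology
```

Because the topology is encoded in the mask rather than in the input vector, a topology never seen in training still maps to a well-defined sub-network, which is what allows generalization to unseen grid configurations.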

CAN DEEP LEARNING BEAT TRADITIONAL ECONOMETRICS IN FORECASTING OF REALIZED VOLATILITY?

Björnsjö, Filip January 2020 (has links)
Volatility modelling is a field dominated by classic econometric methods such as the Nobel Prize-winning autoregressive conditional heteroskedasticity (ARCH) model. This paper therefore investigates whether deep learning can live up to the hype and outperform classic econometrics in forecasting realized volatility. Letting the Heterogeneous AutoRegressive model of Realized Volatility with multiple jump components (HAR-RV-CJ) represent the econometric field as the benchmark model, we compare its efficiency in forecasting realized volatility to four deep learning models. The results of the experiment show that the HAR-RV-CJ performs in line with the four deep learning models: the Feed-Forward Neural Network (FNN), Recurrent Neural Network (RNN), Long Short-Term Memory network (LSTM), and Gated Recurrent Unit network (GRU). Hence, the paper cannot conclude that deep learning is superior to classic econometrics in forecasting realized volatility.
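The benchmark family can be sketched with the plain HAR-RV regression (omitting the jump components of HAR-RV-CJ): next-day realized volatility is regressed on daily, weekly (5-day), and monthly (22-day) averages of past RV via ordinary least squares. The synthetic series and window choices below are illustrative, not the paper's data.

```python
import numpy as np

# Minimal HAR-RV sketch (no jump components): OLS of next-day RV on
# daily, weekly, and monthly averages of past realized volatility.
rng = np.random.default_rng(1)
rv = np.abs(rng.standard_normal(500)) * 0.01 + 0.01   # synthetic RV series

def har_features(rv, t):
    daily = rv[t]
    weekly = rv[t - 4:t + 1].mean()      # 5-day average
    monthly = rv[t - 21:t + 1].mean()    # 22-day average
    return [1.0, daily, weekly, monthly]

X = np.array([har_features(rv, t) for t in range(21, len(rv) - 1)])
y = rv[22:]                                           # next-day RV target
beta, *_ = np.linalg.lstsq(X, y, rcond=None)          # OLS fit
pred = X @ beta
print("in-sample RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```

Its appeal as a benchmark is plain: four coefficients, closed-form estimation, and a direct interpretation of each horizon's contribution, which is a high bar for a deep model to beat.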

AI-based Age Estimation using X-ray Hand Images: A comparison of Object Detection and Deep Learning models

Westerberg, Erik January 2020 (has links)
Bone age assessment can be useful in a variety of ways. It can help pediatricians predict growth and puberty entrance, identify diseases, and assess whether a person lacking proper identification is a minor. It is a time-consuming process that is also prone to intra-observer variation, which can cause problems in many ways. This thesis attempts to improve and speed up bone age assessment by using different object detection methods to detect and segment the bones anatomically important for the assessment, and by using these segmented bones to train deep learning models to predict bone age. A dataset consisting of 12,811 X-ray hand images of persons ranging from infancy to 19 years of age was used. For the first research question, we compared the performance of three state-of-the-art object detection models: Mask R-CNN, Yolo, and RetinaNet. We chose the best-performing model, Yolo, to segment all the growth plates in the phalanges of the dataset. We proceeded to train four different pre-trained models: Xception, InceptionV3, VGG19, and ResNet152, using both the segmented and unsegmented datasets, and compared their performance. We achieved good results with both datasets, although performance was slightly better on the unsegmented dataset. The analysis suggests that a higher accuracy might be achievable with the segmented dataset by also detecting growth plates in the carpal bones, epiphysis, and diaphysis. The best-performing model was Xception, which achieved a mean average error of 1.007 years on the unsegmented dataset and 1.193 years on the segmented dataset.
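The regression setup can be sketched as below: a convolutional backbone with a single continuous output (predicted age in years) trained with an L1 loss, mirroring the mean-error-in-years metric reported. The tiny hand-written backbone stands in for the pre-trained Xception/ResNet models; layer sizes and the loss choice here are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Skeleton of bone-age regression: CNN backbone + one-unit regression head.
backbone = nn.Sequential(
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
model = nn.Sequential(backbone, nn.Linear(16, 1))  # output: age in years

x = torch.randn(4, 1, 128, 128)              # dummy grayscale hand X-rays
ages = torch.tensor([[6.0], [11.5], [14.0], [17.0]])
pred = model(x)
loss = nn.L1Loss()(pred, ages)               # L1 = error measured in years
loss.backward()                              # gradients for the training step
```

Training on segmented crops only changes the input `x` (growth-plate regions instead of the full hand), leaving this head and loss unchanged, which is what makes the segmented/unsegmented comparison clean.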

Deep learning for promoter recognition: a robust testing methodology

Perez Martell, Raul Ivan 29 April 2020 (has links)
Understanding DNA sequences has been an ongoing endeavour within bioinformatics research. Recognizing the functionality of DNA sequences is a non-trivial and complex task that can bring insights into understanding DNA. In this thesis, we study deep learning models for recognizing gene-regulating regions of DNA, more specifically promoters. We first treat DNA as a language by training natural language processing models to recognize promoters. Afterwards, we delve into current models from the literature to learn how they achieve their results. Previous works have focused on limited curated datasets to both train and evaluate their models using cross-validation, obtaining high-performing results across a variety of metrics. We implement and compare three models from the literature against each other, using their datasets interchangeably throughout the comparison tests. This highlights shortcomings in the training and testing datasets for these models, prompting us to create a robust promoter recognition testing dataset and to develop a testing methodology that creates a wide variety of testing datasets for promoter recognition. We then test the models from the literature with the newly created datasets and highlight considerations to take into account when choosing a training dataset. To help others avoid such issues in the future, we open-source our findings and testing methodology.
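The cross-dataset testing methodology can be illustrated with a deliberately simple stand-in model: a decision rule fitted on one curated promoter dataset is scored both on its own data and on a dataset from a different source with a weaker signal. The synthetic sequences and the GC-content rule below are inventions for illustration; the point is the evaluation protocol, not the model.

```python
import numpy as np

# Cross-dataset evaluation sketch: same fitted rule, two data sources.
rng = np.random.default_rng(7)
BASES = np.array(list("ACGT"))

def make_dataset(n, gc_bias):
    # "promoters" (label 1) drawn with a GC bias; background (label 0) uniform
    probs = np.array([(1 - gc_bias) / 2, gc_bias / 2, gc_bias / 2, (1 - gc_bias) / 2])
    pos = ["".join(rng.choice(BASES, 50, p=probs)) for _ in range(n)]
    neg = ["".join(rng.choice(BASES, 50)) for _ in range(n)]
    return pos + neg, np.array([1] * n + [0] * n)

def gc_content(seqs):
    return np.array([(s.count("G") + s.count("C")) / len(s) for s in seqs])

def evaluate(threshold, seqs, labels):
    return np.mean((gc_content(seqs) > threshold) == labels)

train_seqs, train_y = make_dataset(200, gc_bias=0.7)   # curated dataset A
other_seqs, other_y = make_dataset(200, gc_bias=0.55)  # dataset B, weaker signal

threshold = 0.6  # decision rule "trained" on dataset A
print("same-dataset accuracy:", evaluate(threshold, train_seqs, train_y))
print("cross-dataset accuracy:", evaluate(threshold, other_seqs, other_y))
```

The accuracy gap between the two evaluations is exactly the kind of shortcoming that same-dataset cross-validation hides, and what swapping datasets between models exposes.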

VECTOR REPRESENTATION TO ENHANCE POSE ESTIMATION FROM RGB IMAGES

Zongcheng Chu (8791457) 03 May 2020 (has links)
Head pose estimation is an essential task in computer vision. Existing research on pose estimation from RGB images mainly uses either Euler angles or quaternions to predict pose. Nevertheless, both Euler angle- and quaternion-based approaches encounter the problem of discontinuity when describing three-dimensional rotations. This issue makes learning visual patterns more difficult for the convolutional neural network (CNN), which, in turn, compromises estimation performance. To solve this problem, we introduce TriNet, a novel method based on three vectors converted from the three Euler angles (roll, pitch, yaw). The orthogonality of the three vectors enables us to implement a complementary multi-loss function, which effectively reduces the prediction error. Our method achieves state-of-the-art performance on the AFLW2000, AFW, and BIWI datasets. We also extend our work to general object pose estimation and present results in the experiments section.
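The angle-to-vector conversion can be sketched as below: the three Euler angles are turned into the three column vectors of the corresponding rotation matrix, which vary continuously with the rotation and are mutually orthogonal. The intrinsic Z-Y-X convention used here is an assumption for illustration; the thesis may use a different convention.

```python
import numpy as np

# Convert (roll, pitch, yaw) into the three columns of the rotation matrix,
# i.e. three orthonormal vectors suitable as continuous prediction targets.
def euler_to_vectors(roll, pitch, yaw):
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    R = Rz @ Ry @ Rx          # intrinsic Z-Y-X composition (assumed convention)
    return R[:, 0], R[:, 1], R[:, 2]

v1, v2, v3 = euler_to_vectors(0.3, -0.5, 1.2)
print(np.dot(v1, v2), np.dot(v1, v3))  # ~0: the targets are orthogonal
```

The orthogonality is what makes a complementary multi-term loss possible: each vector's error term constrains the others, rather than the three targets being independent.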

Privacy-Preserving Facial Recognition Using Biometric-Capsules

Tyler Stephen Phillips (8782193) 04 May 2020 (has links)
In recent years, developers have used the proliferation of biometric sensors in smart devices, along with recent advances in deep learning, to implement an array of biometrics-based recognition systems. Though these systems demonstrate remarkable performance and have seen wide acceptance, they present unique and pressing security and privacy concerns. One proposed method that addresses these concerns is the elegant, fusion-based Biometric-Capsule (BC) scheme. The BC scheme is provably secure, privacy-preserving, cancellable, and interoperable in its secure feature fusion design.

In this work, we demonstrate that the BC scheme is uniquely fit to secure state-of-the-art facial verification, authentication, and identification systems. We compare the performance of unsecured, underlying biometric systems to that of the BC-embedded systems in order to directly demonstrate the minimal effect of the privacy-preserving BC scheme on underlying system performance. Notably, we demonstrate that, when seamlessly embedded into state-of-the-art FaceNet and ArcFace verification systems, which achieve accuracies of 97.18% and 99.75% on the benchmark LFW dataset, the BC-embedded systems achieve accuracies of 95.13% and 99.13%, respectively. Furthermore, we demonstrate that the BC scheme outperforms or performs as well as several other proposed secure biometric methods.

Physics Informed Neural Networks for Engineering Systems

Sukirt (8828960) 13 May 2020 (has links)
This thesis explores the application of deep learning techniques to problems in fluid mechanics, with a particular focus on physics-informed neural networks. Physics-informed neural networks leverage the information gathered over centuries in the form of physical laws, mathematically represented as partial differential equations, to make up for the dearth of data associated with engineering and physical systems. To demonstrate the capability of physics-informed neural networks, an inverse and a forward problem are considered. The inverse problem involves discovering a spatially varying concentration field from observations of the concentration of a passive scalar. A forward problem involving conjugate heat transfer is solved as well, where the boundary conditions on velocity and temperature are used to discover the velocity, pressure, and temperature fields in the entire domain. The predictions of the physics-informed neural networks are compared against simulated data generated using OpenFOAM.
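The physics-informed loss can be sketched on a toy one-dimensional problem, u'(x) + u(x) = 0 with u(0) = 1: the network is penalized for violating the differential equation at collocation points and for missing the boundary condition, so no solution data is needed. The thesis's problems (scalar transport, conjugate heat transfer) use the same ingredients with PDEs in several variables; the toy ODE and network size here are illustrative choices.

```python
import torch

# Minimal physics-informed loss: PDE residual + boundary condition penalty.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

x = torch.linspace(0, 1, 50).reshape(-1, 1).requires_grad_(True)
u = net(x)
du_dx, = torch.autograd.grad(u.sum(), x, create_graph=True)  # u'(x) via autograd
residual = du_dx + u                    # PDE residual: u' + u = 0
bc = net(torch.zeros(1, 1)) - 1.0       # boundary condition: u(0) = 1
loss = (residual ** 2).mean() + (bc ** 2).mean()
loss.backward()                         # gradients w.r.t. network weights
```

Minimizing this loss over the network weights drives the network toward the PDE solution; for the inverse problem, unknown physical parameters are simply added to the set of trainable variables.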
