Quantification of Land Cover Surrounding Planned Disturbances Using UAS Imagery

Zachary M Miller, 19 December 2021
Three prescribed burn sites and seven selective timber harvest sites were surveyed using a UAS equipped with a PPK-triggered RGB sensor to determine optimal image collection parameters surrounding each type of disturbance and land cover. Image coordinates were corrected after flight against a third-party base station network (CORS) and photogrammetrically processed to produce high-resolution georeferenced orthomosaics. This addressed the first objective of this study: to establish effective data procurement methods from both before and after planned disturbances.

Orthomosaic datasets surrounding both a prescribed burn and a selective timber harvest were used to classify land cover through geographic object-based image analysis (GEOBIA). The orthomosaics were segmented into image objects before classification with a machine-learning algorithm. Land cover classes for the prescribed prairie burn were (1) bare ground, (2) litter, (3) green vegetation, and (4) burned vegetation; classes for the selective timber harvest were (1) mature canopy, (2) understory vegetation, and (3) bare ground. To train the classifier, 65 samples per class were collected for the prairie burn datasets and 80 samples per class for the timber harvest datasets. A support vector machine (SVM) algorithm was used to produce four land cover classifications for each site surrounding its respective planned disturbance. Pixel counts for each class were multiplied by the ground sample distance (GSD) to obtain area estimates for each land cover. Accuracy was assessed by projecting 250 equalized stratified random (ESR) reference points onto the georeferenced orthomosaics and comparing the classification to the imagery through visual interpretation. This addressed the second objective of this study: to establish effective data classification methods from both before and after planned disturbances.

Finally, a two-tailed t-test was conducted on the overall accuracies for each disturbance type and land cover. Results showed no significant difference in overall accuracy between land covers. This addressed the third objective of this study: to determine whether a significant difference exists between the classification accuracies of planned disturbance types. Overall, effective data procurement and classification parameters were established for both before and after two common types of planned disturbance within the CHF region, with slightly better results for prescribed burns than for selective timber harvests.
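As a rough illustration of the area and significance computations described above, the sketch below (Python, assuming NumPy/SciPy are available; all variable names and sample values are hypothetical, not the study's data) converts per-class pixel counts into areas using the orthomosaic's GSD and runs a two-tailed t-test on overall accuracies. Note that for a linear GSD in metres per pixel, each pixel covers GSD² square metres.

```python
from scipy import stats

# Hypothetical per-class pixel counts from a classified orthomosaic.
pixel_counts = {
    "bare_ground": 1_250_000,
    "litter": 3_400_000,
    "green_vegetation": 2_100_000,
    "burned_vegetation": 1_800_000,
}

gsd_m = 0.02  # ground sample distance in metres/pixel (illustrative)

# Each pixel covers gsd_m**2 square metres, so class area = count * gsd_m**2.
areas_m2 = {cls: n * gsd_m**2 for cls, n in pixel_counts.items()}
for cls, area in areas_m2.items():
    print(f"{cls}: {area:.1f} m^2")

# Two-tailed t-test on overall classification accuracies for the two
# disturbance types (values are illustrative, not the study's results).
burn_accuracies = [0.91, 0.89, 0.93, 0.90]
harvest_accuracies = [0.88, 0.87, 0.90, 0.86]
t_stat, p_value = stats.ttest_ind(burn_accuracies, harvest_accuracies)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # p >= 0.05 -> no significant difference
```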
AI on the Edge with CondenseNeXt: An Efficient Deep Neural Network for Devices with Constrained Computational Resources

Priyank Kalgaonkar, 05 August 2021
The research presented in this thesis proposes a neoteric variant of deep convolutional neural network architecture, CondenseNeXt, designed specifically for ARM-based embedded computing platforms with constrained computational resources. CondenseNeXt is an improved version of CondenseNet, a baseline architecture whose roots can be traced back to ResNet. CondenseNeXt replaces the group convolutions in CondenseNet with depthwise separable convolutions and introduces group-wise pruning, a model compression technique that removes redundant and insignificant elements which are either irrelevant or do not affect the performance of the network. To relieve the harsh effects of pruning, cardinality (a new dimension alongside the existing spatial dimensions) and a class-balanced focal loss function (a weighting factor inversely proportional to the number of samples per class) were incorporated into the design of CondenseNeXt. Furthermore, extensive analyses of this novel CNN architecture were performed on three benchmark image datasets (CIFAR-10, CIFAR-100, and ImageNet) by deploying the trained weights onto an ARM-based embedded computing platform, the NXP BlueBox 2.0, for real-time image classification. Outputs were observed in real time in the RTMaps Remote Studio console to verify the correctness of the predicted classes. CondenseNeXt achieves state-of-the-art image classification performance on the three benchmark datasets, including CIFAR-10 (4.79% top-1 error), CIFAR-100 (21.98% top-1 error), and ImageNet (7.91% single-model, single-crop top-5 error), with up to a 59.98% reduction in forward FLOPs compared to CondenseNet. CondenseNeXt can also reach a final trained model size of 2.9 MB, at the cost of a 2.26% loss in accuracy. It can thus perform image classification on ARM-based computing platforms with outstanding efficiency and without requiring CUDA-enabled GPU support.
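To make the core substitution concrete, here is a minimal sketch (Python, assuming PyTorch is available; layer sizes are illustrative and this is not the thesis's actual implementation) of a depthwise separable convolution, which factorizes a standard convolution into a per-channel depthwise convolution followed by a 1×1 pointwise convolution, cutting parameters and FLOPs substantially.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise conv per input channel, then 1x1 pointwise conv."""
    def __init__(self, in_channels: int, out_channels: int, stride: int = 1):
        super().__init__()
        # groups=in_channels -> one 3x3 filter per channel (depthwise).
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   stride=stride, padding=1,
                                   groups=in_channels, bias=False)
        # 1x1 conv mixes information across channels (pointwise).
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                                   bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# A standard 3x3 conv from 64 to 128 channels uses 128*64*9 = 73,728 weights;
# the separable version uses 64*9 + 128*64 = 8,768, roughly an 8x reduction.
x = torch.randn(1, 64, 32, 32)
block = DepthwiseSeparableConv(64, 128)
print(block(x).shape)  # torch.Size([1, 128, 32, 32])
```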
Image Retrieval in Digital Libraries: A Large Scale Multicollection Experimentation of Machine Learning techniques

Moreux, Jean-Philippe; Chiron, Guillaume, 16 October 2017
While digital heritage libraries were historically first populated in image mode, they quickly took advantage of OCR technology to index printed collections and consequently improve the scope and performance of the information retrieval services offered to users. Access to iconographic resources, however, has not progressed in the same way, and these resources remain in the shadows: manual indexing that is incomplete, heterogeneous, and unviable at scale; data silos organized by iconographic genre; and content-based image retrieval (CBIR) that is still barely operational on heritage collections. Today it would be possible to make better use of these resources, in particular by exploiting the enormous volumes of OCR produced over the last two decades, and thus to valorize these engravings, drawings, photographs, maps, etc. both for their own value and as an attractive entry point into the collections, supporting discovery and serendipity from document to document and collection to collection. This article presents an ETL (extract-transform-load) approach to this need, which aims to: identify and extract iconography wherever it may be found, in image collections but also in printed materials (dailies, magazines, monographs); transform, harmonize, and enrich the image descriptive metadata, in particular with machine-learning classification tools; and load it all into a web app dedicated to image retrieval. The approach is pragmatic in two senses, since it leverages both existing digital resources and (virtually) off-the-shelf technologies.
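A minimal sketch of the ETL flow described above (Python; every function, record field, and sample identifier here is hypothetical, and the keyword classifier is a stand-in for whatever machine-learning model the pipeline actually uses):

```python
from dataclasses import dataclass, field

@dataclass
class ImageRecord:
    """Descriptive metadata for one extracted illustration."""
    source_id: str            # identifier of the digitized document
    page: int                 # page the illustration was found on
    caption: str              # nearby OCR text used as a descriptor
    genre: str = "unknown"    # filled in by the transform step
    tags: list = field(default_factory=list)

def extract(ocr_pages):
    """EXTRACT: find illustrations in OCR'd pages (stub heuristic)."""
    for doc_id, page_no, text, has_image_block in ocr_pages:
        if has_image_block:  # real pipelines detect image zones in layout data
            yield ImageRecord(source_id=doc_id, page=page_no, caption=text)

def transform(record, classify):
    """TRANSFORM: harmonize and enrich metadata with an ML classifier."""
    record.genre = classify(record.caption)      # e.g. map / photograph / drawing
    record.tags = sorted(set(record.caption.lower().split()))
    return record

def load(records, index):
    """LOAD: push enriched records into the retrieval index (here a dict)."""
    for r in records:
        index.setdefault(r.genre, []).append(r)

# Toy run with a trivial keyword classifier standing in for a real model.
def keyword_classifier(caption):
    return "map" if "map" in caption.lower() else "photograph"

pages = [("bpt6k123", 4, "Map of Paris, 1890", True),
         ("bpt6k123", 9, "Portrait of the author", True)]
index = {}
load((transform(r, keyword_classifier) for r in extract(pages)), index)
print({genre: len(rs) for genre, rs in index.items()})  # {'map': 1, 'photograph': 1}
```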
