HYPERSPECTRAL IMAGE CLASSIFICATION FOR DETECTING FLOWERING IN MAIZE

Karoll Jessenia Quijano Escalante (8802608), 07 May 2020
Maize (Zea mays L.) is one of the world's most important crops, central to agriculture, economic stability, and food security. Many agricultural research and commercial breeding programs target the efficiency of this crop, seeking to increase productivity with fewer inputs while becoming more environmentally sustainable and more resilient to climate and other external stresses. Evaluating the performance of new varieties and management strategies requires accurate, continuous monitoring; yet monitoring is still performed mostly manually, making it labor-intensive, time-consuming, and costly.

Flowering is one of the most important stages for maize, and for many other grain crops, requiring close attention during this period. Any physical or biological damage to the tassel, as a reproductive organ, can have significant consequences for overall grain development, resulting in production losses. Remote sensing technologies currently seek to close the phenotyping gap by monitoring the development of the plants' geometric structure and chemistry-related responses over the growth and reproductive cycle.

For this thesis, remotely sensed hyperspectral imagery was collected, processed, and explored to detect tassels in maize crops. The data were acquired in a controlled facility using an imaging conveyor, and in the field using a PhenoRover (a wheel-based platform) and a low-altitude UAV. Two pixel-based classification experiments were performed on the original hyperspectral imagery (HSI) using the Spectral Angle Mapper (SAM) and Support Vector Machine (SVM) supervised classifiers. Feature reduction methods, including Principal Component Analysis (PCA), Locally Linear Embedding (LLE), and Isometric Feature Mapping (Isomap), were also investigated, both to identify features for annotating the reference data and in conjunction with classification.

Collecting data from different systems allowed the strengths, weaknesses, and tradeoffs of each system to be identified. The controlled facility provided stable lighting and very high spatial and spectral resolution, although it supplies no information about the plants' interactions under field conditions. Conversely, the in-field data from the PhenoRover and the UAV exposed complications related to plant density within the plots and to variability in lighting conditions over the long collection times required. The experiments implemented in this study successfully classified tassel pixels in all images, performing best at higher spatial resolution and in the controlled environment. For the SAM experiment, nonlinear feature extraction via Isomap was necessary to achieve good results, although at significant computational expense. Dimension reduction did not improve results for the SVM classifier.
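The abstract names the Spectral Angle Mapper among the classifiers used. As a minimal sketch of the idea (not the thesis's implementation; array shapes and function names here are assumptions), SAM scores each pixel spectrum by its angle to a set of reference spectra and assigns the class with the smallest angle:

```python
import numpy as np

def spectral_angle(pixels, references):
    """Angle (radians) between each pixel spectrum and each reference spectrum.

    pixels:     (N, B) array of N pixel spectra over B bands
    references: (K, B) array of K reference (endmember) spectra
    Smaller angle = closer spectral match, independent of illumination scale.
    """
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    r = references / np.linalg.norm(references, axis=1, keepdims=True)
    cos = np.clip(p @ r.T, -1.0, 1.0)   # guard against rounding past +/-1
    return np.arccos(cos)               # shape (N, K)

def sam_classify(pixels, references):
    """Assign each pixel the index of the reference with the smallest angle."""
    return np.argmin(spectral_angle(pixels, references), axis=1)
```

Because the angle ignores vector magnitude, SAM is insensitive to uniform brightness differences, which is one reason it is popular for hyperspectral data.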

Dissertation_Meghdad_revised_2.pdf

Seyyed Meghdad Hasheminasab (14030547), 30 November 2022
Modern remote sensing platforms such as unmanned aerial vehicles (UAVs), which can carry a variety of sensors including RGB frame cameras, hyperspectral (HS) line cameras, and LiDAR sensors, are commonly used across many application domains. To derive accurate products such as point clouds and orthophotos, the sensors' interior and exterior orientation parameters (IOP and EOP) must be established. These parameters are derived or refined in a triangulation framework by minimizing the discrepancy between conjugate features extracted from the involved datasets. Existing triangulation approaches are not general enough to deal with the varying nature of data from different sensors/platforms acquired in diverse environmental conditions. This research develops a generic triangulation framework that can handle different types of primitives (e.g., point, linear, and/or planar features) and sensing modalities (e.g., RGB cameras, HS cameras, and/or LiDAR sensors) to deliver accurate products under challenging conditions, with a primary focus on the digital agriculture and stockpile monitoring application domains.
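To illustrate the kind of discrepancy such a triangulation adjustment minimizes, here is a hedged sketch for point features under a simple pinhole model (names and shapes are assumptions; a real bundle adjustment would also model lens distortion and jointly refine IOP/EOP across many images):

```python
import numpy as np

def reprojection_residuals(P, points_3d, obs_2d):
    """Difference between observed image points and projected 3-D points.

    P:         3x4 projection matrix (combines interior/exterior orientation)
    points_3d: (N, 3) object-space coordinates of conjugate points
    obs_2d:    (N, 2) measured image coordinates
    A least-squares adjustment would drive these residuals toward zero.
    """
    ones = np.ones((points_3d.shape[0], 1))
    homog = np.hstack([points_3d, ones])   # homogeneous coordinates
    proj = homog @ P.T                     # project into the image plane
    xy = proj[:, :2] / proj[:, 2:3]        # perspective division
    return (xy - obs_2d).ravel()
```

Stacking these residuals over all features and sensors, and linearizing with respect to the orientation parameters, gives the normal equations that a generic triangulation framework solves.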

Pointwise approach for texture analysis and characterization from very high resolution remote sensing images

Pham, Minh Tân, 20 September 2016
This thesis proposes a novel pointwise approach for texture analysis in very high resolution (VHR) remote sensing imagery. The approach considers only characteristic pixels, not all pixels of the image, to represent and characterize texture. Because increasing the spatial resolution of satellite sensors weakens the stationarity hypothesis in the acquired images, such an approach becomes relevant, since only the interactions and characteristics of keypoints are exploited. Moreover, because this technique does not need to consider every pixel in the image as classical dense approaches do, it is better able to handle the large images produced by VHR acquisition systems. In this work, the pointwise strategy is performed by exploiting the local maximum and local minimum pixels (in terms of intensity) extracted from the image. It is integrated into several texture analysis frameworks with the help of different techniques and methods, such as graph theory, covariance-based approaches, and geometric distance measures. As a result, a variety of texture-based applications using remote sensing data (both VHR optical and radar images) are tackled, including image retrieval, segmentation, classification, and change detection. Experiments dedicated to each thematic application confirm and validate the effectiveness and relevance of the proposed approach.
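As a rough sketch of the keypoint extraction step described above (pure NumPy, assuming a 2-D grayscale array; the thesis's actual extraction procedure may differ), local maxima and minima in intensity can be found by comparing each pixel with its 3x3 neighbourhood:

```python
import numpy as np

def local_extrema(image):
    """Boolean masks of strict local maxima and minima over 3x3 neighbourhoods."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    # NaN padding so border pixels are only compared to in-image neighbours.
    padded = np.pad(img, 1, mode="constant", constant_values=np.nan)
    # Gather the 8 neighbours of every pixel as shifted views.
    shifts = [padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
              for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
    neighbours = np.stack(shifts)
    maxima = img > np.nanmax(neighbours, axis=0)   # strictly above all neighbours
    minima = img < np.nanmin(neighbours, axis=0)   # strictly below all neighbours
    return maxima, minima
```

The resulting sparse point set, rather than the full pixel grid, is what the graph- and covariance-based texture descriptors would then operate on.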

Ameliorating Environmental Effects on Hyperspectral Images for Improved Phenotyping in Greenhouse and Field Conditions

Dongdong Ma (9224231), 14 August 2020
Hyperspectral imaging has become one of the most popular technologies in plant phenotyping because it can efficiently and accurately predict numerous plant physiological features such as plant biomass, leaf moisture content, and chlorophyll content. Various hyperspectral imaging systems have been deployed in both greenhouse and field phenotyping activities. However, hyperspectral imaging quality is severely affected by continuously changing environmental conditions such as cloud cover, temperature, and wind speed, which induce noise in plant spectral data. Eliminating these environmental effects to improve imaging quality is critically important. In this thesis, two approaches were taken to address the imaging noise issue in the greenhouse and the field separately. First, a computational simulation model was built to simulate greenhouse microclimate changes (such as the temperature and radiation distributions) over a 24-hour cycle in a research greenhouse. The simulated results were used to optimize the movement of an automated conveyor in the greenhouse: the plants were shuffled by the conveyor system with optimized frequency and distance to provide uniform growing conditions, such as temperature and lighting intensity, for each individual plant. The results showed that the variance of the plants' phenotyping feature measurements decreased significantly (by up to 83% for plant canopy size) in this conveyor greenhouse. Second, the environmental effects (e.g., solar radiation) on aerial hyperspectral images in field plant phenotyping were investigated and modeled. An artificial neural network (ANN) method was proposed to model the relationship between image variation and environmental changes. Before the 2019 field test, a gantry system was designed and constructed to repeatedly collect time-series hyperspectral images of corn plants at 2.5-minute intervals under varying environmental conditions, including solar radiation, solar zenith angle, diurnal time, humidity, temperature, and wind speed. Over 8,000 hyperspectral images of corn (Zea mays L.) were collected with synchronized environmental data throughout the 2019 growing season. The models trained with the proposed ANN method were able to accurately predict the variation in imaging results (e.g., 82.3% for NDVI) caused by the changing environment. The ANN method can thus be used by remote sensing professionals to adjust or correct raw imaging data for changing environments and so improve plant characterization.
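The ANN correction model itself is not spelled out in the abstract, but the index it corrects is. As a small illustration (band indices and the data-cube layout here are assumptions, not the thesis's configuration), NDVI is computed per pixel from red and near-infrared reflectance:

```python
import numpy as np

def ndvi(cube, red_band, nir_band):
    """Per-pixel NDVI from a hyperspectral cube of shape (rows, cols, bands)."""
    red = cube[..., red_band].astype(float)
    nir = cube[..., nir_band].astype(float)
    # Small epsilon avoids division by zero on dark/empty pixels.
    return (nir - red) / (nir + red + 1e-12)
```

It is the drift in indices like this under changing illumination that the environmental-correction model is trained to predict and remove.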

Semantic Labeling of Large Geographic Areas Using Multi-Date and Multi-View Satellite Images and Noisy OpenStreetMap Labels

Bharath Kumar Comandur Jagannathan Raghunathan (9187466), 31 July 2020
This dissertation addresses the problem of designing a convolutional neural network (CNN) that assigns semantic labels to points on the ground, given satellite image coverage of the area and, as ground truth, the noisy labels in OpenStreetMap (OSM). The problem is made challenging by the facts that (1) most of the images are likely to have been recorded from off-nadir viewpoints for the area of interest on the ground; (2) the user-supplied labels in OSM are frequently inaccurate and, not uncommonly, entirely missing; and (3) the area covered on the ground must be large enough to possess any engineering utility. As this dissertation demonstrates, solving this problem requires first constructing a DSM (Digital Surface Model) from a stereo fusion of the available images, and subsequently using the DSM to map the individual pixels in the satellite images to points on the ground. That creates an association between the pixels in the images and the noisy labels in OSM. The CNN-based solution presented here yields a 4-8% improvement in per-class segmentation IoU (Intersection over Union) scores compared to traditional approaches that use the views independently of one another. The system is end-to-end automated, which facilitates comparing classifiers trained directly on true orthophotos vis-à-vis those first trained on the off-nadir images, with the predicted labels subsequently translated to geographical coordinates. This work also presents, for arguably the first time, an in-depth discussion of large-area image alignment and DSM construction using tens of true multi-date and multi-view WorldView-3 satellite images on a distributed OpenStack cloud computing platform.
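The 4-8% gain above is reported in per-class IoU. As a quick sketch of that metric (function name and the integer label-map convention are assumptions), IoU compares predicted and reference label masks class by class:

```python
import numpy as np

def per_class_iou(pred, truth, num_classes):
    """Per-class Intersection over Union for integer-valued label maps."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, truth == c
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        # Undefined (NaN) when the class appears in neither map.
        ious.append(inter / union if union else float("nan"))
    return ious
```

Reporting the score per class, rather than as a single pixel accuracy, keeps rare classes (e.g., buildings in sparse areas) from being swamped by the dominant background.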