1 |
Integrating Remote Sensing Techniques into Forest Monitoring: Selected Topics with a Focus on Thermal Remote Sensing
Nölke, Nils, 05 April 2016
No description available.
|
2 |
Identification of Disease Stress in Turfgrass Canopies Using Thermal Imagery and Automated Aerial Image Analysis
Henderson, Caleb Aleksandr, 04 June 2021
Remote sensing techniques are important for detecting disease within the turfgrass canopy. Herein, we assess two such techniques for their viability in detecting and isolating turfgrass diseases. First, thermal imagery was used to detect differences in canopy temperature associated with the onset of brown patch infection in tall fescue. Sixty-four newly seeded stands of tall fescue were arranged in a randomized block design within a greenhouse, across two runs of eight blocks, each block containing four inoculum concentrations. Daily measurements of canopy and ambient temperature were taken with a thermal camera. After five consecutive days, differences in canopy minus ambient temperature were detected in both runs (p=0.0015), and these differences continued for the remainder of the experiment. Analysis of true colour imagery during this time yielded no significant differences between groups. A field study comparing canopy temperature of adjacent symptomatic and asymptomatic tall fescue and creeping bentgrass canopies showed differences as well (p<0.0492). The second project attempted to isolate spring dead spot from aerial imagery of bermudagrass golf course fairways using a Python script. Aerial images from unmanned aerial vehicle flights were collected from four fairways at the Nicklaus Course of Bay Creek Resort in Cape Charles, VA. Accuracy of the code was measured by creating buffer zones around code-generated points and counting how many hand-measured disease centers they enclosed. Accuracies measured as high as 97% while reducing coverage of the fairway by over 30% compared to broadcast applications. Point density maps of the hand and code points also appeared similar. These data provide evidence for new opportunities in remote turfgrass disease detection.

Master of Science in Life Sciences

Turfgrasses are ubiquitous, from home lawns to sports fields, where they are used for their durability and aesthetics. Disease within the turfgrass canopy can ruin these qualities and reduce overall turf quality. This makes detection and management of disease within the canopy an important part of maintaining turfgrass. Here we look at the effectiveness of imaging techniques in detecting and isolating disease within cool-season and warm-season turfgrasses. We test the capacity of thermal imagery to detect infection of tall fescue (Festuca arundinacea) with Rhizoctonia solani, the causal agent of brown patch. In greenhouse experiments, differences were detected in normalized canopy temperature between differing inoculation levels at five days post inoculation, and in field conditions we observed differences in canopy temperature between adjacent symptomatic and non-symptomatic stands. We also developed a Python script to automatically identify and record the locations of spring dead spot damage within mosaicked images of bermudagrass golf fairways captured via unmanned aerial vehicle. The script primarily used the Hough transform to mark circular patches within the fairway and recorded the GPS coordinates of each disease center. When compared to disease incidence maps created manually, the script achieved accuracies as high as 97% while reducing coverage of the fairway by over 30% compared to broadcast applications. Point density maps created from the code's points appeared to match those created manually. Both findings have the potential to serve as tools to help turfgrass managers.
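The Hough-transform approach described above can be sketched in a few lines. The following is a minimal, illustrative implementation of circle detection by Hough voting at a single, known radius on a synthetic binary image; the thesis's actual script, its parameters, and its GPS-georeferencing step are not reproduced here.

```python
import numpy as np

def hough_circle_centres(edges, radius, n_angles=64):
    """Accumulate Hough votes for circle centres at one fixed radius."""
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for y, x in zip(*np.nonzero(edges)):
        # each edge pixel votes for every point at distance `radius` from it,
        # since any of those points could be the centre of a circle through it
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# synthetic "disease patch": a ring of radius 8 centred at row 20, column 25
yy, xx = np.mgrid[0:50, 0:50]
edges = np.abs(np.hypot(yy - 20, xx - 25) - 8) < 0.5
acc = hough_circle_centres(edges, radius=8)
centre = np.unravel_index(acc.argmax(), acc.shape)  # peak vote = patch centre
```

In a real pipeline a library routine such as OpenCV's `cv2.HoughCircles` would replace the hand-rolled accumulator, and each detected pixel centre would be converted to GPS coordinates using the mosaic's georeferencing metadata.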
|
3 |
The Usefulness of Ground Penetrating Radar in locating burials in Charity Hospital Cemetery, New Orleans
Mitchell, Monique Tashell, 16 May 2008
The Charity Hospital Cemetery in New Orleans, Louisiana, was used as a potter's field for over 150 years. When Charity Hospital considered selling a portion of the property, ground penetrating radar (GPR) and thermal infrared (TIR) data were collected in the cemetery to locate unmarked graves. The TIR data could not be used because the expert died before compiling them; the GPR data were therefore the sole source of subsurface information. GPR anomalies were used to excavate three areas where bones and hospital supplies were subsequently found; unfortunately, only very limited analyses were possible on the analog GPR data. The study presented here involved digitizing the data and conducting a more thorough analysis of map patterns to determine whether GPR data could be used reliably to locate burials in the cemetery. The results indicate that GPR is a reliable tool for detecting burials and other anomalies in the subsurface.
|
4 |
Agronomic measurements to validate airborne video imagery for irrigated cotton management
Roth, Guy W., January 1993
Water is a major factor limiting cotton production and farmers must aim to
optimise crop water use through timely irrigation scheduling decisions. Airborne
video imagery, when calibrated with a low density of ground-based observations, offers
the potential for near real time monitoring of crop condition, through sequential
coverages of entire cotton fields. Using commercially available video equipment
mounted on a light aircraft, images were acquired of field experiments that were
established in commercial cotton fields to test whether the imagery could monitor changes in
crop condition. Ground data collected from these experiments were used to evaluate
green, red, near infrared and thermal band imagery for irrigated crop management.
Prior to acquiring imagery, a ground radiometer study was conducted to
investigate if canopy reflectance changed with the onset of crop water stress. Canopy
reflectance decreased in the near infrared and green bands during the five day period
prior to the crop's normal irrigation date. Red reflectance increased only after the crop
irrigation was due, when the crop was suffering from water stress. The greatest
change in canopy reflectance was in the near infrared region, attributable in part to a
decrease in ground cover caused by canopy architectural changes including leaf
wilting. The results of this experiment were used to select spectral filters for the video cameras.
A range of crop conditions was identified in the imagery, including crop
waterlogging, wheeltrack soil compaction, crop nitrogen status, different varieties,
crop maturity, canopy development, soil moisture status, cotton yield and nutgrass
weeds. Thermal imagery was the most successful for distinguishing differences in the
crop soil moisture status. Near infrared imagery was most closely related to crop
canopy development and is recommended for monitoring crop growth.
Linear relationships were found between spectral responses in the imagery,
crop reflectance (%) and crop temperature measured on the ground. Near infrared
reflectance linearly increased, while spectral responses in the green, red and thermal
bands exhibited an inverse relationship with plant height and ground cover. Imagery
collected early in the season was affected by the soil background. Final lint yield was
related to imagery in the red band. As the soil moisture level declined, crop
temperature increased while reflectance in the green band decreased. To ensure an
accurate relationship between soil moisture and thermal imagery, separate calibration
equations are recommended for different stages in the season.
Green, red and near infrared imagery were affected by the sun angle, which
caused one side of the imagery to appear brighter than the other. This problem was
greatest in the green and red bands, but was not evident in the thermal imagery.
Changes in solar radiation and air temperature on some occasions caused greater
variation in the imagery between flights than changes in crop condition per se.
Therefore, it is not always possible to directly determine the soil moisture status from
canopy temperature. Further research is required to correct imagery for environmental
variables such as solar radiation, air temperature and vapour pressure deficit.
Thermal imagery offers several improvements to current irrigation scheduling
techniques, including helping to locate more representative ground sampling
points. Thermal imagery also enables cotton fields on a farm to be ranked according to
their soil moisture status. This then provides farmers with a visual picture of the crop
water status across the whole farm, which is not possible using conventional ground
scheduling techniques. At this stage, airborne video imagery will not replace soil
moisture data collected for irrigation scheduling; however, it offers potential to enhance
irrigation scheduling methods by addressing the problem of crop variability within
cotton fields.
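The recommendation of separate calibration equations for different stages of the season amounts to fitting one linear regression per stage between image response and ground measurement. A small sketch under entirely made-up data (digital numbers versus ground-measured canopy temperature; none of these values come from the thesis):

```python
import numpy as np

def fit_calibration(dn, temp_c):
    """Least-squares line mapping thermal-band digital numbers to canopy temperature."""
    slope, intercept = np.polyfit(dn, temp_c, 1)
    return slope, intercept

def predict(dn, cal):
    slope, intercept = cal
    return slope * dn + intercept

# hypothetical calibration pairs for two stages of the season
early = fit_calibration(np.array([60.0, 90, 120, 150]), np.array([24.0, 27, 30, 33]))
late  = fit_calibration(np.array([60.0, 90, 120, 150]), np.array([26.0, 30, 34, 38]))

# the same digital number maps to a different temperature at each stage,
# which is why stage-specific calibration equations are recommended
t_early = predict(120, early)
t_late  = predict(120, late)
```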
|
5 |
Utilizing natural scene statistics and blind image quality analysis of infrared imagery
Kaser, Jennifer Yvonne, 09 December 2013
With the increasing number and affordability of image capture devices, there is growing demand to objectively analyze and compare the quality of images. Image quality can also indicate whether a source image is of high enough quality to perform analysis on. When applied to real-world scenarios, a blind algorithm is essential, since a flawless reference image is typically unavailable. Recent research has shown promising results in no-reference image quality assessment utilizing natural scene statistics in the visible image space. Research has also shown that, although the statistical profiles vary slightly, there are statistical regularities in IR images as well, which suggests that natural scene statistical models may apply to them. In this project, I analyze BRISQUE quality features of IR images and determine whether the algorithm can successfully be applied to IR images. Additionally, to validate the usefulness of these techniques, the BRISQUE quality features are analyzed with a detection algorithm to determine whether they can predict conditions that may cause missed detections.
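BRISQUE's features are built on mean-subtracted contrast-normalised (MSCN) coefficients of the image, and that computation applies unchanged to an IR frame. A rough sketch follows, using a uniform local window rather than BRISQUE's Gaussian weighting, and omitting the generalized-Gaussian fitting and paired-product features of the full algorithm:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def mscn(image, window=7, eps=1e-8):
    """Mean-subtracted contrast-normalised coefficients of a 2-D image."""
    pad = window // 2
    padded = np.pad(image.astype(float), pad, mode="reflect")
    win = sliding_window_view(padded, (window, window))
    mu = win.mean(axis=(-1, -2))     # local mean
    sigma = win.std(axis=(-1, -2))   # local contrast
    return (image - mu) / (sigma + eps)

rng = np.random.default_rng(0)
frame = rng.normal(100.0, 15.0, (64, 64))  # stand-in for an IR frame
coeffs = mscn(frame)
# for natural(-like) content the MSCN histogram is roughly zero-mean;
# BRISQUE fits a generalized Gaussian to it and uses the fit parameters as features
features = (coeffs.mean(), coeffs.var())
```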
|
6 |
Multispectral Image Analysis for Object Recognition and Classification
Viau, Claude, January 2016
Computer and machine vision applications are used in numerous fields to analyze static and dynamic imagery in order to assist or automate some form of decision-making process. Advancements in sensor technologies now make it possible to capture and visualize imagery at various wavelengths (or bands) of the electromagnetic spectrum. Multispectral imaging has countless applications in various fields, including (but not limited to) security, defense, space, medicine, manufacturing and archeology. The development of advanced algorithms to process and extract salient information from the imagery is a critical component of overall system performance.
The fundamental objectives of this research project were to investigate the benefits of combining imagery from the visual and thermal bands of the electromagnetic spectrum to improve the recognition rates and accuracy of commonly found objects in an office setting. The goal was not to find a new way to “fuse” the visual and thermal images together but rather to establish a methodology for extracting multispectral descriptors in order to improve a machine vision system’s ability to recognize specific classes of objects. A multispectral dataset (visual and thermal) was captured, and features from the visual and thermal images were extracted and used to train support vector machine (SVM) classifiers. The SVM’s class prediction ability was evaluated separately on the visual, thermal and multispectral testing datasets. Commonly used performance metrics were applied to assess the sensitivity, specificity and accuracy of each classifier.
The research demonstrated that the highest recognition rate was achieved by an expert system (multiple classifiers) that combined the expertise of the visual-only classifier, the thermal-only classifier and the combined visual-thermal classifier.
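The expert-system idea, training one classifier per modality plus one on the combined features and letting them vote, can be sketched with stand-in data. Nearest-centroid classifiers replace the SVMs here so the example stays dependency-free; the features, class separation, and evaluation-on-training-data shortcut are all illustrative inventions:

```python
import numpy as np

def nearest_centroid(X, y):
    """Train a nearest-centroid classifier (a stand-in for the thesis's SVMs)."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    def predict(Xq):
        d = np.linalg.norm(Xq[:, None, :] - centroids[None, :, :], axis=-1)
        return classes[d.argmin(axis=1)]
    return predict

rng = np.random.default_rng(0)
# hypothetical features: 2 visual descriptors and 1 thermal descriptor per object,
# two well-separated object classes of 20 samples each
visual  = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
thermal = np.vstack([rng.normal(0, 1, (20, 1)), rng.normal(4, 1, (20, 1))])
labels  = np.array([0] * 20 + [1] * 20)
fused   = np.hstack([visual, thermal])

experts = [nearest_centroid(visual, labels),    # visual-only expert
           nearest_centroid(thermal, labels),   # thermal-only expert
           nearest_centroid(fused, labels)]     # combined visual-thermal expert
votes = np.stack([clf(X) for clf, X in zip(experts, [visual, thermal, fused])])
combined = (votes.mean(axis=0) > 0.5).astype(int)  # majority vote of the experts
accuracy = (combined == labels).mean()
```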
|
7 |
Validation of Image Based Thermal Sensing Technology for Glyphosate Resistant Weed Identification
Eide, Austin Joshua, January 2020
From 2019 to 2020, greenhouse and field research was conducted at North Dakota State University to investigate the canopy temperature response of waterhemp (Amaranthus rudis), kochia (Kochia scoparia), common ragweed (Ambrosia artemisiifolia), horseweed (Conyza canadensis), Palmer amaranth (Amaranthus palmeri), and redroot pigweed (Amaranthus retroflexus) after glyphosate application, in order to identify glyphosate resistance. In these experiments, thermal images were captured of randomized glyphosate-resistant and glyphosate-susceptible populations of each weed species. Thermal values were extracted from the weed canopies and submitted to statistical testing and various classifiers in an attempt to discriminate between resistant and susceptible populations. Glyphosate-resistant horseweed, when imaged under greenhouse conditions, was the only biotype reliably classified, showing significantly cooler temperature signatures than its susceptible counterpart. Under field conditions, image-based machine learning classifiers using thermal data were outperformed by classifiers built with additional multispectral data, suggesting that thermal data alone are not a reliable predictor of glyphosate resistance.
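A comparison like the one above, cooler canopies in the resistant biotype, reduces to a two-sample test on canopy temperatures. A sketch with fabricated temperatures (the means, spreads and sample sizes are invented, not the thesis's data):

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic for a two-sample comparison with unequal variances."""
    va = a.var(ddof=1) / len(a)
    vb = b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

rng = np.random.default_rng(1)
# hypothetical canopy temperatures in deg C, 30 plants per population
resistant   = rng.normal(24.0, 0.8, 30)   # resistant biotype runs cooler
susceptible = rng.normal(25.5, 0.8, 30)
t_stat = welch_t(resistant, susceptible)  # strongly negative here
```

The t statistic would then be compared against the Welch-Satterthwaite degrees of freedom to obtain a p-value; only the statistic itself is computed in this sketch.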
|
8 |
Detection rates of northern bobwhite coveys using a small unmanned aerial system-mounted thermal camera
Martin, Megan Elaine, 25 November 2020
Northern bobwhite (Colinus virginianus) monitoring (e.g., covey-call surveys) is labor-intensive and imprecise. We evaluated the influence of bobwhite covey size and cover type on covey detectability when surveyed with a thermal camera-equipped small unmanned aerial system (sUAS). We placed bobwhite groups (3, 6, and 12 individuals/cage) among three cover types (grass, shrub, forest) on a private farm in Clay County, Mississippi (3 replicates, 27 total cages). At civil twilight, the sUAS flew over cages at 30 m, capturing photographs every 5 s. We asked 31 volunteers to evaluate 57 photographs for covey presence. The overall true positive rate was 0.551, but it improved with increasing covey size. Coveys in grass had the lowest true positive rate by photograph (0.403), followed by forest (0.562) and shrub (0.605). Results indicate that thermal sUAS could be a viable method for surveying intact bobwhite coveys, especially if detection of smaller groups and those in denser vegetation improves.
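The detection rates reported above are per-group true positive rates: every photograph contained a caged covey, so the rate is simply the fraction of reviews that marked a covey present. A toy computation with made-up reviewer responses, chosen only to mirror the reported ordering:

```python
import numpy as np

# 1 = reviewer marked a covey present in the photograph, 0 = missed it;
# every photograph truly contained a covey, so each 1 is a true positive
responses = {
    "grass":  np.array([1, 0, 0, 1, 0, 1, 0, 1, 0, 0]),
    "forest": np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0]),
    "shrub":  np.array([1, 1, 1, 0, 1, 1, 0, 1, 0, 1]),
}
true_positive_rate = {cover: r.mean() for cover, r in responses.items()}
# ordering mirrors the study: grass lowest, then forest, then shrub
```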
|
9 |
Multi-spectral Fusion for Semantic Segmentation Networks
Edwards, Justin, 05 1900
Indiana University-Purdue University Indianapolis (IUPUI)
Semantic segmentation is a machine learning task that is seeing increased utilization in multiple fields, from medical imagery, to land demarcation, and autonomous vehicles. Semantic segmentation performs the pixel-wise classification of images, creating a new, segmented representation of the input that can be useful for detecting various terrain and objects within an image. Recently, convolutional neural networks have been heavily utilized when creating neural networks tackling the semantic segmentation task. This is particularly true in the field of autonomous driving systems.
The requirements of automated driver assistance systems (ADAS) drive semantic segmentation models targeted for deployment on ADAS to be lightweight while maintaining accuracy. A commonly used method to increase accuracy in the autonomous vehicle field is to fuse multiple sensory modalities. This research focuses on leveraging the fusion of long wave infrared (LWIR) imagery with visual spectrum imagery to fill in the inherent performance gaps of visual imagery alone. This comes with a host of benefits, such as increased performance in various lighting conditions and adverse environmental conditions. Utilizing this fusion technique is an effective method of increasing the accuracy of a semantic segmentation model. A lightweight architecture is key for successful deployment on ADAS, as these systems often have resource constraints and need to operate in real time. Multi-Spectral Fusion Network (MFNet) [1] satisfies these requirements by leveraging a sensory fusion approach, and as such was selected as the baseline architecture for this research.
Many improvements were made upon the baseline architecture by leveraging a variety of techniques. These include the proposal of a novel loss function, categorical cross-entropy dice loss; the introduction of squeeze-and-excitation (SE) blocks; the addition of pyramid pooling; a new fusion technique; and drop-input data augmentation. These improvements culminated in the creation of the Fast Thermal Fusion Network (FTFNet). Further improvements were made by introducing depthwise separable convolutional layers, leading to lightweight FTFNet variants, FTFNet Lite 1 & 2.
The FTFNet family was trained on the Multi-Spectral Road Scenarios (MSRS) and MIL-Coaxials visual/LWIR datasets. The proposed modifications lead to an improvement over the baseline in mean intersection over union (mIoU) of 2.92% and 2.03% for FTFNet and FTFNet Lite 2 respectively when trained on the MSRS dataset. Additionally, when trained on the MIL-Coaxials dataset, the FTFNet family showed improvements in mIoU of 8.69%, 4.4%, and 5.0% for FTFNet, FTFNet Lite 1, and FTFNet Lite 2.
|
10 |
Simultaneous object detection and segmentation using top-down and bottom-up processing
Sharma, Vinay, 07 January 2008
No description available.
|