About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Integrated multi-spectral imaging, analysis and treatment of an Egyptian tunic.

Haldane, E.A., Gillies, Sara, O'Connor, Sonia A., Batt, Catherine M., Stern, Ben January 2010 (has links)
2

National Guard Data Relay and the LAV Sensor System

Defibaugh, June, Anderson, Norman 10 1900 (has links)
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California / The Defense Evaluation Support Activity (DESA) is an independent Office of the Secretary of Defense (OSD) activity that provides tailored evaluation support to government organizations. DESA provides quick-response support capabilities and performs activities ranging from studies to large-scale field activities that include deployment, instrumentation, site setup, event execution, analysis and report writing. The National Guard Bureau requested DESA's assistance in the development and field testing of the Light Armored Vehicle (LAV) Sensor Suite (LSS). The LSS was integrated by DESA to provide a multi-sensor suite that detects and identifies ground targets on foot or in vehicles with minimal operator workload. It was designed primarily for deployment in high-density drug trafficking areas along the northern and southern borders, using mainly commercial-off-the-shelf and government-off-the-shelf equipment. Field testing of the system prototype in the summer of 1995 indicates that the LSS will provide a significant new data collection and transfer capability to the National Guard in controlling illegal drug transfer across U.S. borders.
3

Deriving bathymetry from multispectral and hyperspectral imagery

Carmody, James Daniel, Physical, Environmental & Mathematical Sciences, Australian Defence Force Academy, UNSW January 2007 (has links)
Knowledge of water depth is crucial for planning military amphibious operations. Bathymetry derived from remote sensing with multispectral or hyperspectral imagery provides an opportunity to acquire water depth data faster than traditional hydrographic survey methods, without the need to deploy a hydrographic survey vessel. It also provides a means of collecting bathymetric data covertly. This research explores two techniques for deriving bathymetry and assesses them for use by those providing support to military operations. To support this aim, a fieldwork campaign was undertaken in May 2000 in northern Queensland. The fieldwork collected various inherent and apparent water optical properties and was concurrent with airborne hyperspectral imagery collection, space-based multispectral imagery collection and a hydrographic survey. The water optical properties were used to characterise the water and to understand how they affect deriving bathymetry from imagery. The hydrographic data were used to assess the performance of the bathymetric techniques. Two methods for deriving bathymetry were trialled. One uses a ratio of subsurface irradiance reflectance at two wavelengths and then tunes the result with known water depths. The other inverts the radiative transfer equation, using the optical properties of the water to derive water depth. Both techniques derived water depth down to approximately six to seven metres, at which point the Cowley Beach waters became optically deep. Sensitivity analysis of the inversion method found that it was most sensitive to errors in the vertical attenuation coefficient Kd and to errors in transforming the imagery into subsurface irradiance reflectance, R(0-), units. Both techniques require a priori knowledge to derive depth, and a more sophisticated approach would be required to determine water depth without prior knowledge of the area of interest. This research demonstrates that water depth can be accurately mapped with optical techniques in less than ideal optical conditions. It also demonstrates that the collection of inherent and apparent optical properties is important for validating remotely sensed imagery.
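As an illustration of the first technique described above, the sketch below implements a common log-ratio form of the two-wavelength reflectance ratio, tuned against known depths; the choice of bands, the logarithmic form and the scaling constant are assumptions for illustration, not the exact model used in the thesis.

```python
import numpy as np

def ratio_depth(r_blue, r_green, known_depth, known_mask, n=1000.0):
    """Estimate depth from a two-band subsurface reflectance ratio.

    r_blue, r_green : 2-D arrays of subsurface irradiance reflectance R(0-)
    known_depth     : 2-D array of surveyed depths (e.g. from a hydrographic survey)
    known_mask      : boolean array marking pixels with a reliable surveyed depth
    n               : scaling constant keeping the log arguments positive
    """
    # The log-ratio is approximately linear in depth for optically shallow water
    x = np.log(n * r_blue) / np.log(n * r_green)

    # Tune the linear relationship z = m1 * x + m0 against the known depths
    m1, m0 = np.polyfit(x[known_mask], known_depth[known_mask], 1)
    return m1 * x + m0
```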
4

Multi-spectral remote sensing of native vegetation condition

Sheffield, Kathryn Jane, kathryn.sheffield@dpi.vic.gov.au January 2009 (has links)
Native vegetation condition provides an indication of the state of vegetation health or function relative to a stated objective or benchmark. Measures of vegetation condition indicate the vegetation's capacity to provide habitat for a range of species and ecosystem functions through the assessment of selected vegetation attributes. Subsets of vegetation attributes are often combined into vegetation condition indices or metrics, which are used to provide information for natural resource management. Despite their value as surrogates of biota and ecosystem function, measures of vegetation condition are rarely used to inform biodiversity assessments at scales beyond individual stands. The extension of vegetation condition information across landscapes, and approaches for achieving this using remote sensing technologies, is a key focus of the work presented in this thesis. The aim of this research is to assess the utility of multi-spectral remotely sensed data for the recovery of stand-level attributes of native vegetation condition at landscape scales. The use of remotely sensed data for the assessment of vegetation condition attributes in fragmented landscapes is a focus of this study. The influence of a number of practical issues, such as spatial scale and ground data sampling methodology, is also explored. This study identifies limits on the use of this technology for vegetation condition assessment and demonstrates the practical impact of data quality issues that are frequently encountered in applied integrated approaches of this type. The work presented in this thesis demonstrates that while some measures of vegetation condition, such as vegetation cover and stem density, are readily recoverable from multi-spectral remotely sensed data, others, such as hollow-bearing trees and log length, are not easily derived from this type of data. The types of information derived from remotely sensed data, such as texture measures and vegetation indices, that are useful for vegetation condition assessments of this nature are also highlighted. The utility of multi-spectral remotely sensed data for the assessment of stand-level vegetation condition attributes depends strongly on a number of factors, including the type of attribute being measured, the characteristics of the vegetation, the sensor characteristics (i.e. the spatial, spectral, temporal and radiometric resolution), and other spatial data quality considerations, such as site homogeneity and spatial scale. A series of case studies presented in this thesis explores the effects of these factors, demonstrating the importance of different aspects of spatial data and how data manipulation can greatly affect the derived relationships between vegetation attributes and remotely sensed data. The work documented in this thesis provides an assessment of what can be achieved with two sources of multi-spectral imagery in terms of the recovery of individual vegetation attributes, and identifies potential surrogate measures of vegetation condition that can be derived across broad scales. This information could provide a basis for the development of landscape-scale vegetation condition assessment approaches based on multi-spectral remotely sensed data, supplementing information provided by established site-based vegetation condition assessment approaches.
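As an illustration of the kinds of image-derived predictors the abstract refers to (vegetation indices and texture measures), the following sketch computes NDVI and a simple moving-window variance texture from red and near-infrared bands; the particular index, window size and feature stack are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ndvi(red, nir, eps=1e-6):
    """Normalised Difference Vegetation Index from red and near-infrared bands."""
    return (nir - red) / (nir + red + eps)

def local_variance(band, size=5):
    """Simple texture measure: variance inside a size x size moving window."""
    mean = uniform_filter(band, size)
    mean_sq = uniform_filter(band * band, size)
    return mean_sq - mean * mean

def condition_features(red, nir):
    """Feature stack that could be regressed against field-measured condition
    attributes such as vegetation cover or stem density."""
    v = ndvi(red, nir)
    return np.dstack([v, local_variance(v), local_variance(nir)])
```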
5

Multi-spectral vision localisation: an embedded systems application

Gonzalez, Aurelien 08 July 2013 (has links)
The SLAM (Simultaneous Localization and Mapping) problem has been studied at LAAS for several years. The target application is a taxiing-assistance system for airliners on airports, which must operate in any weather and visibility conditions (the SART project, funded by the DGE in partnership mainly with FLIR Systems, Latécoère and Thales). Under poor visibility (low light, fog, rain), a single conventional camera is not sufficient for localisation, so the first part of this work studies what a thermal infrared camera contributes to SLAM compared with a visible camera. The second part incorporates an inertial measurement unit (IMU) and a GPS receiver into the SLAM algorithm, the IMU aiding motion prediction and the GPS correcting possible divergence. Finally, pseudo-observations obtained by matching segments extracted from the images against the same segments stored in a map database are integrated into the same SLAM filter. Together, these observations and pseudo-observations aim to localise the vehicle to within one metre. Because the algorithms must run on an FPGA with a low-power processor (400 MHz) compared with a standard PC, a hardware/software co-design is required between the FPGA logic, which processes images on the fly, and the processor, which runs the extended Kalman filter (EKF) for SLAM, so as to guarantee real-time operation at 30 Hz. These algorithms, developed specifically for co-design and avionic embedded systems, are tested on the LAAS robotic platform and then ported to several development boards (Virtex 5, Raspberry Pi, PandaBoard...) for performance evaluation.
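A minimal sketch of the EKF predict/correct loop described above, with IMU-derived motion driving the prediction and GPS fixes correcting divergence; the state layout, motion model and noise values are illustrative assumptions rather than the thesis implementation (landmark and segment pseudo-observations are omitted).

```python
import numpy as np

class EkfLocaliser:
    """EKF-style localiser: IMU odometry predicts, GPS fixes correct."""

    def __init__(self):
        self.x = np.zeros(3)                     # state: [x, y, heading]
        self.P = np.eye(3)                       # state covariance
        self.Q = np.diag([0.05, 0.05, 0.01])     # assumed process noise
        self.R = np.diag([1.0, 1.0])             # assumed GPS noise (m^2)

    def predict(self, v, omega, dt):
        """Propagate the state with IMU-derived speed v and yaw rate omega."""
        x, y, th = self.x
        self.x = np.array([x + v * dt * np.cos(th),
                           y + v * dt * np.sin(th),
                           th + omega * dt])
        # Jacobian of the motion model with respect to the state
        F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                      [0.0, 1.0,  v * dt * np.cos(th)],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def correct_gps(self, z):
        """Correct the state with a GPS position measurement z = [x, y]."""
        H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - H @ self.x)
        self.P = (np.eye(3) - K @ H) @ self.P
```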
6

Early Forest Fire Detection via Principal Component Analysis of Spectral and Temporal Smoke Signature

Garges, David Casimir 01 June 2015 (has links) (PDF)
The goal of this study is to develop a smoke detecting algorithm using digital image processing techniques on multi-spectral (visible & infrared) video. By utilizing principal component analysis (PCA) followed by spatial filtering of principal component images, the location of smoke can be accurately identified over a period of exposure time with a given frame capture rate. This result can be further analyzed, with consideration of wind factor and fire detection range, to determine if a fire is present within a scene. Infrared spectral data is shown to contribute little information concerning the smoke signature. Moreover, finalized processing techniques are focused on the blue spectral band, as it is furthest from the infrared spectral bands and because it experimentally yields the largest footprint in the processed principal component images in comparison to other spectral bands. A frame rate of 0.5 images/sec (1 image every 2 seconds) is determined to be the maximum such that temporal variance of smoke can be captured. The study also identifies the eigenvectors corresponding to the principal components that best represent smoke, which are valuable indications of the smoke's temporal signature. Raw video data is taken through rigorous pre-processing schemes to align frames from the respective spectral bands both spatially and temporally. A multi-paradigm numerical computing program, MATLAB, is used to match the field of view across five spectral bands: Red, Green, Blue, Long-Wave Infrared, and Mid-Wave Infrared. Extracted frames are aligned temporally from key frames throughout the data capture. This alignment allows for more accurate digital processing of the smoke signature. Clustering analysis on RGB and HSV value systems reveals that color alone is not helpful for segmenting smoke: the feature values of trees and other false positives are too closely related to those of smoke at any single instant in time. A temporal principal component transform on the blue spectral band eliminates static false positives and emphasizes the temporal variance of moving smoke in the higher-order principal component images. A threshold adjustment is applied to a blurred blue principal component of non-unity order, and the smoke results can be finalized using median filtering. The same processing techniques are applied to difference images, as a simpler and more traditional technique for identifying temporal variance, and the results are compared.
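A compact sketch of the temporal principal component idea described above: blue-band frames are stacked over time, a higher-order component image is blurred and thresholded, and the mask is cleaned with a median filter; the component order, blur width and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def temporal_pca_smoke(frames, component=1, blur_sigma=2.0, thresh=3.0):
    """Highlight temporally varying smoke in a stack of blue-band frames.

    frames : array of shape (T, H, W), e.g. captured at 0.5 images/sec
    """
    T, H, W = frames.shape
    X = frames.reshape(T, -1).astype(float)
    X -= X.mean(axis=0)                       # remove the static background mean

    # Principal components over time: rows of Vt are spatial component images
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    pc_image = Vt[component].reshape(H, W)    # a non-unity-order component

    # Blur, threshold on deviation from zero, then clean up with a median filter
    blurred = gaussian_filter(pc_image, blur_sigma)
    mask = np.abs(blurred) > thresh * blurred.std()
    return median_filter(mask, size=3)
```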
7

An Application of Artificial Intelligence Techniques in Classifying Tree Species with LiDAR and Multi-Spectral Scanner Data

Posadas, Benedict Kit A 09 August 2008 (has links)
Tree species identification is an important element in many forest resources applications such as wildlife habitat management, inventory, and forest damage assessment. Field data collection for large or mountainous areas is often cost prohibitive, and good estimates of the number and spatial arrangement of species or species groups cannot be obtained. Knowledge-based and neural network species classification models were constructed for remotely sensed data of conifer stands located in the lower mountain regions near McCall, Idaho, and compared to field data. Analyses for each modeling system were made based on multi-spectral sensor (MSS) data alone and MSS plus LiDAR (light detection and ranging) data. The neural network system produced models identifying five of six species with 41% to 88% producer accuracies and greater overall accuracies than the knowledge-based system. The neural network analysis that included a LiDAR derived elevation variable plus multi-spectral variables gave the best overall accuracy at 63%.
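A hedged sketch of the neural network variant, combining multi-spectral band values with a LiDAR-derived elevation feature in a small scikit-learn classifier; the synthetic data, feature layout and network size are assumptions for illustration, not the models built in the study.

```python
import numpy as np
from sklearn.metrics import recall_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative synthetic data: one row per field-sampled tree, with four
# multi-spectral band values plus a LiDAR-derived elevation variable.
rng = np.random.default_rng(0)
X = rng.random((300, 5))
y = rng.integers(0, 6, size=300)          # six conifer species labels

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X, y)

# Producer's accuracy is the per-class recall, the figure quoted in the abstract.
print(recall_score(y, model.predict(X), average=None))
```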
8

Multispectral Image Labeling for Unmanned Ground Vehicle Environments

Teresi, Michael Bryan 01 July 2015 (has links)
Described is the development of a multispectral image labeling system with emphasis on Unmanned Ground Vehicles (UGVs). UGVs operating in unstructured environments face significant problems detecting viable paths when LIDAR is the sole source of perception. Promising advances in computer vision and machine learning have shown that multispectral imagery can be effective at detecting materials in unstructured environments [1][2][3][4][5][6]. This thesis seeks to extend previous work [6][7] by performing pixel-level classification with multispectral features and texture. First, the images are spatially registered to create a multispectral image cube. Visual, near infrared, shortwave infrared, and visible/near infrared polarimetric data are considered. The aligned images are then used to extract features which are fed to machine learning algorithms. The class list includes common materials present in rural and urban scenes, such as vehicles, standing water, various forms of vegetation, and concrete. Experiments are conducted to explore the data requirements for a desired performance and the selection of a hyper-parameter for the textural features. A complete system is demonstrated, progressing from data collection and labeling to the analysis of classifier performance. / Master of Science
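As a schematic of the pixel-level classification step, the sketch below stacks registered bands into an image cube and trains a per-pixel classifier; a random forest is used purely for illustration (the thesis's choice of learner, features and texture hyper-parameter are not assumed), and texture channels could be appended to the cube in the same way.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def stack_cube(*bands):
    """Stack spatially registered bands (visible, NIR, SWIR, ...) into an image cube."""
    return np.dstack(bands)

def train_pixel_classifier(cube, labels):
    """Per-pixel material classifier; negative labels mark unlabeled pixels."""
    H, W, C = cube.shape
    X = cube.reshape(-1, C)
    y = labels.reshape(-1)
    mask = y >= 0
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[mask], y[mask])
    return clf

def label_image(clf, cube):
    """Predict a material label for every pixel of the cube."""
    H, W, C = cube.shape
    return clf.predict(cube.reshape(-1, C)).reshape(H, W)
```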
9

Multi-spectral imaging for surface plasmon resonance sensors: development and applications

Sereda, Alexandra 25 November 2014 (has links)
Biodetection is at the core of current health concerns, as shown by applications as varied as HIV screening, pregnancy tests, food contaminant analysis and water quality monitoring. In this field, plasmonic biosensing is a well-established label-free technique: commercial systems, such as those from HORIBA Scientific, are available for both research and industrial users. Based on the surface plasmon resonance (SPR) phenomenon, plasmonic biodetection exploits the high sensitivity of an evanescent wave propagating along the interface between a metallic film (the biochip) and the surrounding dielectric medium, where the biomolecular interactions of interest take place. The adsorption of biomolecules onto the metal surface induces a strong change in the optical properties of a light beam reflected by the biochip; the principle of plasmonic transduction is to measure these changes directly. Several interrogation techniques have been developed to access this optical information, each offering useful performance but also configuration-specific limitations, and they fall short of the most demanding requirements for precise, real-time, high-throughput measurement. Motivated by these issues, the theoretical and instrumentation work presented in this document led to a new interrogation mode for plasmonic biochips: multi-spectral interrogation. The results obtained with this technique were then used to design and build an LED-based multi-spectral illumination source, which compares favourably with existing configurations. The characterisation of the developed system in the context of genetic diagnosis (cystic fibrosis) and cancer detection opens the way to a new generation of compact, high-performance and reasonably priced SPR biosensors with clear industrial potential.
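As a loose illustration of multi-spectral interrogation (not the algorithm developed in the thesis), the sketch below estimates the position of the plasmon resonance dip from reflectance sampled at a handful of LED wavelengths; the wavelengths, reflectance values and quadratic dip model are all assumptions.

```python
import numpy as np

def resonance_position(wavelengths_nm, reflectance):
    """Estimate the SPR dip position from reflectance sampled at a few LED wavelengths.

    A parabola is fitted to the sampled reflectance; tracking the fitted minimum
    over time follows biomolecular adsorption on the biochip surface.
    """
    w = np.asarray(wavelengths_nm, dtype=float)
    r = np.asarray(reflectance, dtype=float)
    a, b, c = np.polyfit(w, r, 2)       # quadratic model of the resonance dip
    return -b / (2.0 * a)               # wavelength of minimum reflectance

# Example: reflectance measured at five LED wavelengths (illustrative numbers)
leds = [630, 660, 700, 740, 780]
print(resonance_position(leds, [0.52, 0.31, 0.18, 0.27, 0.45]))
```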
10

Road Network Extraction From High-resolution Multi-spectral Satellite Images

Karaman, Ersin 01 December 2012 (has links) (PDF)
In this thesis, an automatic road extraction algorithm for multi-spectral images is developed. The model extracts elongated structures from images by using edge detection, segmentation and clustering techniques. The study also extracts non-road regions, such as vegetative fields, bare soils and water bodies, to obtain a more accurate road map. The model is constructed in a modular approach that aims to extract roads with different characteristics, and each module output is combined to create a road score map. The developed algorithm is tested on 8-band WorldView-2 satellite images. It is observed that the proposed road extraction algorithm yields 47% precision and 70% recall. The approach is also tested on lower spectral resolution images: four-band, RGB and gray level. It is observed that the additional four bands provide an improvement of 12% in precision and 3% in recall. Road type analysis is also within the scope of this study: roads are classified into asphalt, concrete and unpaved using Gaussian Mixture Models. Other linear objects, such as railroads and water canals, may also be extracted by this process. An algorithm that distinguishes drive roads from railroads in very high resolution images is also investigated; it is based on Fourier descriptors that identify the presence of railroad sleepers. Water canals are also extracted in multi-spectral images by using spectral ratios that employ the near infrared bands, and structural properties are used to distinguish water canals from other water bodies in the image.
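The water-body extraction mentioned above relies on spectral ratios that employ the near infrared bands; as an illustration, the sketch below computes an NDWI-style green/NIR ratio and uses it to suppress road scores over water (the specific ratio, bands and threshold used in the thesis are not assumed).

```python
import numpy as np

def water_mask(green, nir, threshold=0.2, eps=1e-6):
    """NDWI-style spectral ratio: water reflects green strongly and absorbs NIR."""
    ndwi = (green - nir) / (green + nir + eps)
    return ndwi > threshold          # candidate water pixels (canals, lakes, rivers)

def suppress_water(road_score, green, nir):
    """Zero the road score over detected water bodies, as in the non-road masking step."""
    score = road_score.copy()
    score[water_mask(green, nir)] = 0.0
    return score
```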
