  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
201

Sensor Fusion with Coordinated Mobile Robots / Sensorfusion med koordinerade mobila robotar

Holmberg, Per January 2003 (has links)
Robust localization is a prerequisite for mobile robot autonomy. In many situations the GPS signal is not available, and an additional localization system is therefore required. A simple approach is localization by dead reckoning with wheel encoders, but this results in large estimation errors. With exteroceptive sensors such as a laser range finder, natural landmarks in the robot's environment can be extracted from raw range data. Landmarks are extracted with the Hough transform and a recursive line-segment algorithm. By applying data association and Kalman filtering along with process models, the landmarks can be used in combination with wheel encoders to estimate the global position of the robot. If several robots cooperate, better position estimates are to be expected, because robots can be seen as mobile landmarks and one robot can supervise the movement of another. The centralized Kalman filter presented in this master's thesis systematically treats robots and extracted landmarks so that the benefits of several robots are utilized. Experiments in different indoor environments with two different robots show that long distances can be traveled while the positional uncertainty is kept low. The benefit of cooperating robots, in the sense of reduced positional uncertainty, is also shown in an experiment.

Besides the localization algorithms, a typical autonomous robot task in the form of change detection is solved. The change detection method, which requires robust localization, is intended for surveillance. The implemented algorithm accounts for measurement and positional uncertainty when determining whether something in the environment has changed. Consecutive true changes as well as sporadic false changes are detected in an illustrative experiment.
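The dead-reckoning step that the thesis's Kalman filter corrects can be sketched as a standard differential-drive odometry update from wheel-encoder increments. This is an illustrative sketch, not the thesis's implementation; the wheel base and step sizes below are made-up values.

```python
import math

def dead_reckon(pose, d_left, d_right, wheel_base):
    """Advance a differential-drive pose (x, y, theta) by one step of
    dead reckoning from left/right wheel-encoder increments."""
    x, y, theta = pose
    d_center = 0.5 * (d_left + d_right)        # distance moved by the robot centre
    d_theta = (d_right - d_left) / wheel_base  # heading change from wheel difference
    # integrate at the midpoint heading for a slightly better small-step estimate
    x += d_center * math.cos(theta + 0.5 * d_theta)
    y += d_center * math.sin(theta + 0.5 * d_theta)
    return (x, y, theta + d_theta)

# drive "straight" for 100 encoder steps of 1 cm each (wheel base 0.3 m)
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = dead_reckon(pose, 0.01, 0.01, 0.3)
```

Because encoder increments are noisy in practice, the estimation error of this update grows without bound, which is exactly why the abstract combines it with landmark observations in a Kalman filter.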
202

Earth satellites and air and ground-based activities

Ekblad, Ulf January 2004 (has links)
This thesis, Earth satellites and detection of air and ground based activities by Ulf Ekblad of the Physics department at the Royal Institute of Technology (KTH), addresses the problem of detecting military activities in imagery. Examples of various techniques are presented. In particular, problems associated with "novelties" and "changes" in an image are discussed and various algorithms presented. The imagery used includes satellite imagery, aircraft imagery, and photos of flying aircraft. The timely delivery of satellite imagery is limited by the laws of celestial mechanics. This and other information aspects of imagery are treated. It is e.g. shown that dozens of satellites may be needed if daily observations of a specific site on Earth are to be conducted from low Earth orbit. New findings from bioinformatics and studies of small mammal visual systems are used. The Intersecting Cortical Model (ICM), which is a reduced variant of the Pulse-Coupled Neural Network (PCNN), is used on various problems, among which is change detection. Still much more could be learnt from biological systems with respect to pre- and post-processing as well as intermediate processing stages. Simulated satellite imagery is used for determining the resolution limit for detection of tanks. The necessary pixel size is shown to be around 6 m under the conditions of this simulation. Difference techniques are also tested on Landsat satellite imagery with the purpose of detecting underground nuclear explosions. In particular, it is shown that this can easily be done with 30 m resolution images, at least in the case studied. Satellite imagery from SPOT is used for detecting underground nuclear explosions prior to the detonations, i.e. under certain conditions 10 m resolution images can be used to detect preparations of underground nuclear explosions. This type of information is important for ensuring the compliance of nuclear test ban treaties. Furthermore, the necessity for having complementary information in order to be able to interpret images is also shown. Keywords: Remote sensing, reconnaissance, sensor, information acquisition, satellite imagery, image processing, image analysis, change detection, pixel difference, neuron network, cortex model, PCNN, ICM, entanglement, Earth observation, nuclear explosion, SPOT, Landsat, verification, orbit.
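The pixel-difference technique named in the keywords can be sketched in its simplest form: threshold the absolute difference of two co-registered images. The toy images and threshold below are illustrative values, not data from the thesis.

```python
def change_mask(img_a, img_b, threshold):
    """Pixel-difference change detection on two co-registered images
    (given as lists of rows of intensities): flag every pixel whose
    absolute intensity difference exceeds the threshold."""
    return [[abs(a - b) > threshold for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

before = [[10, 10, 10],
          [10, 10, 10]]
after  = [[10, 80, 10],
          [10, 10, 12]]
mask = change_mask(before, after, threshold=20)
```

Only the pixel that jumped from 10 to 80 is flagged; the small 10-to-12 fluctuation stays below the threshold, which is the basic trade-off any difference technique must tune.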
204

A Spatio-Temporal Analysis of Landscape Change within the Eastern Terai, India : Linking Grassland and Forest Loss to Change in River Course and Land Use

Biswas, Tanushree 01 May 2010 (has links)
Land degradation is one of the most important drivers of landscape change around the globe. This dissertation examines land use-land cover change within a mosaic landscape in the Eastern Terai, India, and shows evidence of anthropogenic factors contributing to landscape change. Land use and land cover change were examined within the Alipurduar Subdivision, representative of the Eastern Terai landscape, and the Jaldapara Wildlife Sanctuary, a protected area nested within Alipurduar, through the use of multi-temporal satellite data over the past 28 years (1978-2006). This study establishes the potential of remote sensing technology to identify the drivers of landscape change; it provides an assessment of how regional drivers of landscape change influence change within smaller local study extents, and it provides a methodology to map different types of grassland and monitor their loss within the region. The Normalized Difference Vegetation Index (NDVI) and a Normalized Difference Dry Index (NDDI) were found instrumental in change detection and in the classification of the different grasslands found inside the park based on their location, structure, and composition. Successful spectral segregation of different types of grasslands and their direct association with different grassland specialist species (e.g., hispid hare, hog deer, Bengal florican) clearly showed the potential of remote sensing technology to efficiently monitor these grasslands and assist in species conservation. Temporal analysis provided evidence of the loss of dense forest and grasslands within both study areas, with a considerably higher rate of loss outside the protected area than inside. Results show a decline of forest from 40% in 1978 to 25% in 2006 across Alipurduar. Future trends project forest cover and grassland within Alipurduar to decline to 15% and 5%, respectively. Within Alipurduar, deforestation due to the growth of the tea industry was the primary driver of change.
Flooding also changed the landscape, more intensely inside the wildlife preserve. The change of the river course inside Jaldapara during the flood of 1968 significantly altered the distribution of grassland inside the park. Unless the direction of landscape change is altered, future trends predict growth of the tea industry within the region, increased forest loss, and homogenization of the landscape.
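The NDVI that the dissertation found instrumental is a simple normalized band ratio; a minimal sketch follows. The reflectance values are illustrative, and the NDDI variant used in the dissertation is not reproduced here.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Healthy vegetation reflects strongly in near-infrared and absorbs
    red light, so dense green cover pushes NDVI toward +1."""
    total = nir + red
    return (nir - red) / total if total else 0.0

vegetation = ndvi(nir=0.50, red=0.08)   # lush grassland-like reflectances
bare_soil = ndvi(nir=0.30, red=0.25)    # weakly vegetated surface
```

Comparing NDVI maps from different acquisition dates is one standard way such a study separates persistent grassland from areas converted to other cover types.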
205

Entwicklung eines halbautomatisierten Verfahrens zur Detektion neuer Siedlungsflächen durch vergleichende Untersuchungen hochauflösender Satelliten- und Luftbilddaten / Development of a Semi-Automated Process for the Detection of New Settlement Areas by Comparative Examination of High-Resolution Satellite Imagery and Aerial Photographs

Reder, Johannes 09 April 2006 (has links) (PDF)
Knowledge about land use and land cover is an important information basis for various planning applications. Urban and suburban regions in particular are subject to highly dynamic development. The detection and identification of changes is therefore an important instrument for following and accompanying these developments in planning. Aerial photography and, increasingly, satellite images serve as an important information basis here. Recognizing and mapping changes is still a time-consuming and cost-intensive task, mostly carried out by visual interpretation of aerial photography and, to an increasing degree, of high- and very-high-resolution satellite images. Within the scope of the present work, a new, robust, and largely automated process based on a statistical change analysis is developed and presented. The data basis is multitemporal high-resolution satellite imagery. The generated suspect areas (areas of change) are intended to serve as clues that facilitate the visual interpretation of multitemporal image datasets for change mapping, since only marked areas of change need to undergo further examination. Consequently, the process can be used as a tool to ease and accelerate the updating of planning bases in general, and of maps in particular, which has so far been carried out by visual interpretation. The automation of the process is not only meant to save time and cost but also to make the interpretation process more objective. To improve the quality of the whole process, selected image-processing methods have been integrated for the preprocessing of the image data. By including additional geo-information reference data in the automated calculation of the areas of change, a further refinement of the results can be reached.
The results obtained for the first time interval (1997-1998) are checked and verified against a second data take (1997-2000). To make the developed method convenient to use and widely available, the process has been implemented in the widespread image-processing software ERDAS IMAGINE. This makes the method available to other users, since it can easily be integrated into the ERDAS IMAGINE working environment. / Das Wissen um die Landnutzung und Landbedeckung ist für planerische Anwendungsgebiete eine wichtige Informationsgrundlage. Gerade urbane und suburbane Regionen unterliegen einer hohen Entwicklungsdynamik. Das Erkennen und Aufzeigen von Veränderungen ist somit ein wichtiges Instrument um Entwicklungen zu verfolgen und planerisch zu begleiten. Luft- und zunehmend Satellitenbilder dienen hierfür als wichtige Informationsgrundlage. Das Erkennen und Kartieren von Veränderungen ist nach wie vor eine zeitaufwändige und kostenintensive Angelegenheit, die überwiegend durch visuelle Interpretation von Luft- und zunehmend auch mit hoch- und höchstauflösenden Satellitenbildern realisiert wird. In dieser Arbeit wird ein neues, robustes, weitgehend automatisiertes, auf einem statistischen Ansatz beruhendes Verfahren der Veränderungsanalyse entwickelt und vorgestellt. Die Datengrundlage bilden multitemporale, hoch auflösende Satellitenbilddaten. Die generierten Verdachts- bzw. Veränderungsflächen sollen als Anhaltspunkte fungieren, um den Prozess der visuellen Interpretation von multitemporalen Bilddatensätzen in Hinsicht auf eine Veränderungskartierung zu erleichtern, da nur als Veränderungsflächen markierte Areale einer weiteren Untersuchung unterzogen werden müssen. Das Verfahren kann somit als Werkzeug dienen, die durch visuelle Interpretation realisierte Aktualisierung von Planungsgrundlagen bzw. Kartenwerken zu erleichtern und zu beschleunigen.
Die Automatisierung des Verfahrens soll jedoch nicht allein dem Zweck der Zeit- und Kostenersparnis dienen, sondern auch den Interpretationsprozess objektiver gestalten. Um die Qualität des Verfahrens zu erhöhen, werden ausgewählte Methoden der Bildverarbeitung für die Vorverarbeitung der Bilder in das Verfahren integriert. Durch das Einbinden zusätzlicher Geobasisdaten in die automatisierte Berechnung der Veränderungsflächen kann eine weitere Verbesserung der Ergebnisse erzielt werden. Die Ergebnisse, der im ersten Zeitschnitt (1997-1998) untersuchten Datensätze, werden mit Hilfe eines weiteren Zeitschnitts (1997-2000) überprüft und verifiziert. Um eine unkomplizierte Anwendung und Verbreitung der Methode zu erreichen, wurde das Verfahren mit Hilfe der weit verbreiteten Bildverarbeitungssoftware ERDAS IMAGINE realisiert. Dies ermöglicht, das Verfahren auch anderen Nutzern zur Verfügung zu stellen, da es problemlos in die Arbeitsumgebung des Bildverarbeitungssystems ERDAS IMAGINE integriert werden kann
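A statistical change analysis of the kind described above can be sketched as standardizing a difference image and flagging pixels that deviate strongly from its global mean. This is an illustrative z-score detector under made-up values, not the procedure implemented in the dissertation.

```python
def suspect_pixels(diff, k=2.0):
    """Flag 'suspect' change areas in a difference image: standardize
    the values and mark pixels lying more than k standard deviations
    from the global mean of the difference image."""
    vals = [v for row in diff for v in row]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    std = var ** 0.5 or 1.0                 # guard against a perfectly flat image
    return [[abs(v - mean) / std > k for v in row] for row in diff]

# mostly unchanged scene with one strongly deviating pixel
diff = [[0, 0, 0, 0],
        [0, 0, 50, 0],
        [0, 0, 0, 0]]
mask = suspect_pixels(diff)
```

The marked pixels would then be handed to a human interpreter, matching the semi-automated workflow the abstract describes: automation narrows the search, visual interpretation confirms.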
207

Détection de points chauds de déforestation à Bornéo de 2000 à 2009 à partir d'images MODIS

Dorais, Alexis 01 1900 (has links)
Les forêts de Bornéo sont inestimables. En plus d’une faune et d’une flore riche et diversifiée, ses milieux naturels constituent d’efficaces réservoirs de carbone. En outre, la matière ligneuse qui y est abondante fait l’objet d’une exploitation intensive. Par contre, c’est le potentiel agricole de l’île qui crée le plus d’enthousiasme, principalement en ce qui concerne la culture du palmier à huile. Pour tenter de mieux comprendre et surveiller le phénomène, nous avons développé des méthodes de détection de la déforestation et de la dégradation des forêts. Ces méthodes doivent tenir compte des caractéristiques propres à l’île. C’est que Bornéo est abondamment affectée par une nébulosité constante qui complexifie considérablement son observation à partir des satellites. Malgré ces contraintes, nous avons produit une série chronologique annuelle des points chauds de déforestation et de dégradation des forêts pour les années 2000 à 2009. / Borneo’s forests are priceless. Beyond the richness and diversity of its fauna and flora, its natural habitats constitute efficient carbon reservoirs. Unfortunately, the vast forests of the island are rapidly being cut down, both by the forestry industry and the rapidly expanding oil palm industry. In this context, we’ve developed methods to detect deforestation and forest degradation in order to better understand and monitor the phenomena. In doing so, the peculiarities of Borneo, such as the persistent cloud cover, had to be accounted for. Nevertheless, we succeeded in producing a time series of the yearly forest degradation and deforestation hotspots for the years 2000 through 2009. / This work was carried out within a research program supported by the Social Sciences and Humanities Research Council of Canada (Conseil de recherches en sciences humaines du Canada).
208

Urban Change Detection Using Multitemporal SAR Images

Yousif, Osama January 2015 (has links)
Multitemporal SAR images have been increasingly used for the detection of different types of environmental changes. The detection of urban changes using SAR images is complicated due to the complex mixture of the urban environment and the special characteristics of SAR images, for example, the existence of speckle. This thesis investigates urban change detection using multitemporal SAR images with the following specific objectives: (1) to investigate unsupervised change detection, (2) to investigate effective methods for reduction of the speckle effect in change detection, (3) to investigate spatio-contextual change detection, (4) to investigate object-based unsupervised change detection, and (5) to investigate a new technique for object-based change image generation. Beijing and Shanghai, the largest cities in China, were selected as study areas. Multitemporal SAR images acquired by ERS-2 SAR and ENVISAT ASAR sensors were used for pixel-based change detection. For the object-based approaches, TerraSAR-X images were used. In Paper I, the unsupervised detection of urban change was investigated using the Kittler-Illingworth algorithm. A modified ratio operator that combines positive and negative changes was used to construct the change image. Four density function models were tested and compared. Among them, the log-normal and Nakagami ratio models achieved the best results. Despite the good performance of the algorithm, the obtained results suffer from the loss of fine geometric detail in general. This was a consequence of the use of local adaptive filters for speckle suppression. Paper II addresses this problem using the nonlocal means (NLM) denoising algorithm for speckle suppression and detail preservation. In this algorithm, denoising was achieved through a moving weighted average. The weights are a function of the similarity of small image patches defined around each pixel in the image. 
To decrease the computational complexity, principal component analysis (PCA) was used to reduce the dimensionality of the neighbourhood feature vectors. Simple methods to estimate the number of significant PCA components to be retained for weight computation and the required noise variance were proposed. The experimental results showed that the NLM algorithm successfully suppressed speckle effects while preserving fine geometric detail in the scene. The analysis also indicates that filtering the change image, instead of the individual SAR images, was effective in terms of both the quality of the results and the time needed to carry out the computation. The Markov random field (MRF) change detection algorithm showed limited capacity to simultaneously maintain fine geometric detail in urban areas and combat the effect of speckle. To overcome this problem, Paper III utilizes the NLM theory to define a nonlocal constraint on pixel class-labels. The iterated conditional modes (ICM) scheme for the optimization of the MRF criterion function is extended to include a new step that maximizes the nonlocal probability model. Compared with the traditional MRF algorithm, the experimental results showed that the proposed algorithm was superior in preserving fine structural detail, effective in reducing the effect of speckle, less sensitive to the value of the contextual parameter, and less affected by the quality of the initial change map. Paper IV investigates object-based unsupervised change detection using very high resolution TerraSAR-X images over urban areas. Three algorithms, i.e., Kittler-Illingworth, Otsu, and outlier detection, were tested and compared. The multitemporal images were segmented using a multidate segmentation strategy. The analysis reveals that the three algorithms achieved similar accuracies. The achieved accuracies were very close to the maximum possible, given the modified ratio image as input. This maximum, however, was not very high.
This was attributed, partially, to the low capacity of the modified ratio image to accentuate the difference between changed and unchanged areas. Consequently, Paper V proposes a new object-based change image generation technique. The strong intensity variations associated with high resolution, together with speckle effects, render the object mean intensity an unreliable feature. The modified ratio image is therefore less efficient in emphasizing the contrast between the classes. An alternative representation of the change data was proposed. To measure the intensity of change at the object level in isolation from disturbances caused by strong intensity variations and speckle effects, two techniques based on the Fourier transform and the wavelet transform of the change signal were developed. Qualitative and quantitative analyses of the results show that improved change detection accuracies can be obtained by classifying the proposed change variables.
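The thesis's modified ratio operator is not reproduced here; a closely related change variable for multitemporal SAR pairs is the log-ratio, which treats backscatter increases and decreases symmetrically (only the sign flips) and compresses the multiplicative speckle. The intensity values below are illustrative.

```python
import math

def log_ratio(i1, i2, eps=1e-6):
    """Log-ratio change variable for a pair of co-registered SAR
    intensities. The eps guard avoids division by zero and log(0)
    on empty (zero-intensity) pixels."""
    return math.log((i2 + eps) / (i1 + eps))

unchanged = abs(log_ratio(100.0, 105.0))   # small backscatter fluctuation
changed = abs(log_ratio(100.0, 400.0))     # strong backscatter change
```

A threshold on the magnitude of such a change image, e.g. chosen by the Kittler-Illingworth or Otsu algorithms named in the abstract, then separates changed from unchanged pixels.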
209

Image Analysis Applications of the Maximum Mean Discrepancy Distance Measure

Diu, Michael January 2013 (has links)
The need to quantify distance between two groups of objects is prevalent throughout the signal processing world. The difference of group means computed using the Euclidean, or L2 distance, is one of the predominant distance measures used to compare feature vectors and groups of vectors, but many problems arise with it when high data dimensionality is present. Maximum mean discrepancy (MMD) is a recent unsupervised kernel-based pattern recognition method which may improve differentiation between two distinct populations over many commonly used methods such as the difference of means, when paired with the proper feature representations and kernels. MMD-based distance computation combines many powerful concepts from the machine learning literature, such as data distribution-leveraging similarity measures and kernel methods for machine learning. Due to this heritage, we posit that dissimilarity-based classification and changepoint detection using MMD can lead to enhanced separation between different populations. To test this hypothesis, we conduct studies comparing MMD and the difference of means in two subareas of image analysis and understanding: first, to detect scene changes in video in an unsupervised manner, and secondly, in the biomedical imaging field, using clinical ultrasound to assess tumor response to treatment. We leverage effective computer vision data descriptors, such as the bag-of-visual-words and sparse combinations of SIFT descriptors, and choose from an assessment of several similarity kernels (e.g. Histogram Intersection, Radial Basis Function) in order to engineer useful systems using MMD. Promising improvements over the difference of means, measured primarily using precision/recall for scene change detection, and k-nearest neighbour classification accuracy for tumor response assessment, are obtained in both applications.
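The MMD-based distance computation described above can be sketched as the biased estimate of squared MMD with an RBF kernel: the sum of the within-group mean kernel values minus twice the between-group mean. The kernel choice, bandwidth, and toy samples below are illustrative, not the descriptors or kernels engineered in the thesis.

```python
import math

def rbf(x, y, gamma=1.0):
    """RBF (Gaussian) kernel between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def mmd2_biased(xs, ys, gamma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy:
    E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)], with each expectation
    replaced by a sample mean over the two groups of vectors."""
    kxx = sum(rbf(a, b, gamma) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(rbf(a, b, gamma) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(rbf(a, b, gamma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2.0 * kxy

close = mmd2_biased([(0.0,), (0.1,)], [(0.05,), (0.12,)])
far = mmd2_biased([(0.0,), (0.1,)], [(5.0,), (5.1,)])
```

Well-separated samples yield a much larger value than overlapping ones, which is what makes thresholding this quantity usable for scene-change detection between groups of frame descriptors.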
210

Probabilistic Fault Management in Networked Systems

Steinert, Rebecca January 2014 (has links)
Technical advances in network communication systems (e.g. radio access networks) combined with evolving concepts based on virtualization (e.g. clouds), require new management algorithms in order to handle the increasing complexity in the network behavior and variability in the network environment. Current network management operations are primarily centralized and deterministic, and are carried out via automated scripts and manual interventions, which work for mid-sized and fairly static networks. The next generation of communication networks and systems will be of significantly larger size and complexity, and will require scalable and autonomous management algorithms in order to meet operational requirements on reliability, failure resilience, and resource-efficiency. A promising approach to address these challenges includes the development of probabilistic management algorithms, following three main design goals. The first goal relates to all aspects of scalability, ranging from efficient usage of network resources to computational efficiency. The second goal relates to adaptability in maintaining the models up-to-date for the purpose of accurately reflecting the network state. The third goal relates to reliability in the algorithm performance in the sense of improved performance predictability and simplified algorithm control. This thesis is about probabilistic approaches to fault management that follow the concepts of probabilistic network management (PNM). An overview of existing network management algorithms and methods in relation to PNM is provided. The concepts of PNM and the implications of employing PNM-algorithms are presented and discussed. Moreover, some of the practical differences of using a probabilistic fault detection algorithm compared to a deterministic method are investigated. Further, six probabilistic fault management algorithms that implement different aspects of PNM are presented. 
The algorithms are highly decentralized, adaptive, and autonomous, and cover several problem areas, such as probabilistic fault detection with controllable detection performance; distributed and decentralized change detection in modeled link metrics; root-cause analysis in virtual overlays; event correlation and pattern mining in data logs; and probabilistic failure diagnosis. The probabilistic models (in large part based on Bayesian parameter estimation) are memory-efficient and can be used and re-used for multiple purposes, such as performance monitoring, detection, and self-adjustment of the algorithm behavior.
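As one concrete instance of the Bayesian parameter estimation such models build on, a conjugate Beta-Bernoulli update can track a link's probe-loss rate with only two counters. This is an illustrative sketch under a uniform prior, not one of the six algorithms in the thesis.

```python
def loss_rate_posterior(failed, ok, alpha=1.0, beta=1.0):
    """Posterior mean of a link's packet-loss rate under a conjugate
    Beta(alpha, beta) prior with Bernoulli probe outcomes: each lost
    probe increments the alpha side, each answered probe the beta side."""
    return (alpha + failed) / (alpha + failed + beta + ok)

# 3 probes lost out of 100 under a uniform Beta(1, 1) prior
estimate = loss_rate_posterior(failed=3, ok=97)
```

Because the posterior is summarized by two numbers, the model is memory-efficient and can be updated incrementally at each node, matching the decentralized, adaptive design goals stated above.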
