91

Identification of symptoms of rust in sugar cane plantations.

Dias, Desirée Nagliati 16 February 2004 (has links)
Areas cultivated with sugar cane may be attacked by the fungus Puccinia melanocephala, and susceptible varieties develop a disease known as sugar cane rust. Because the disease generally affects very large areas, the losses are considerable. Currently, the disease is evaluated by experts who walk through the plantations, visually analysing the leaves and assigning a degree of infection to the area. This procedure is subjective: depending on the expert's experience and visual acuity, evaluations of the same area may diverge. Facing this problem, this work presents an approach to automate the identification and evaluation process, creating alternatives to minimise the losses. It describes a method to classify the infection levels of sugar cane rust through the analysis of aerial images of plantations acquired by a model aircraft. Colour-based features are extracted from these photographs and classified by a backpropagation neural network. In addition, a method was implemented for segmenting digital images of infected sugar cane leaves, in order to corroborate the manual evaluation performed by experts. The results show that the method is effective in discriminating the three available infection levels, and they indicate that it can be equally effective in discriminating the nine infection levels of the adopted scale.
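As a rough illustration of the pipeline this abstract describes — colour features extracted from aerial photographs, classified by a backpropagation-trained network — the sketch below uses per-channel RGB histograms and scikit-learn's MLP (which trains by backpropagation). The file names, histogram binning and network size are illustrative assumptions, not the thesis's actual configuration.

import numpy as np
from PIL import Image
from sklearn.neural_network import MLPClassifier

def colour_features(path, bins=8):
    """Per-channel RGB histograms, normalised to sum to 1."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    feats = [np.histogram(img[..., c], bins=bins, range=(0, 255))[0]
             for c in range(3)]
    feats = np.concatenate(feats).astype(np.float64)
    return feats / feats.sum()

# X: one feature vector per aerial photo; y: infection level (hypothetical data)
paths, y = ["field_01.png", "field_02.png"], [0, 2]
X = np.array([colour_features(p) for p in paths])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)   # MLPClassifier trains via backpropagation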
92

3D feature extraction for object recognition in point clouds

Sales, Daniel Oliva 16 October 2017 (has links)
Object detection and recognition is a critical task in autonomous navigation applications for mobile robots and intelligent vehicles. With the advent of 3D sensors, environment elements can be detected and represented in three dimensions, in structures known as point clouds. 3D sensors usually capture a large number of points at high rates, requiring techniques that are robust both in processing this volume of information and in tolerating noise in the input data. A common approach to dimensionality reduction in the Computer Vision field is the extraction of robust features, so that only a subset of representative, simplified information from the dataset is processed. This thesis presents a methodology for object recognition in 3D point clouds using global 3D feature extraction. A novel 3D descriptor invariant to scale, translation and rotation, named 3D-CSD (3D-Contour Sample Distances), was developed to represent object surfaces, and a supervised learning method was used for pattern recognition. The experiments employed Artificial Neural Networks to recognise different classes of objects, evaluating and validating the proposed methodology. The results demonstrate the feasibility of this approach for object recognition in 3D perception systems.
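The abstract does not spell out how 3D-CSD is computed, so the following is only a sketch in the spirit of a scale-, translation- and rotation-invariant global descriptor built from sample distances; the sampling and normalisation choices are assumptions, not the thesis's definition.

import numpy as np

def csd_like_descriptor(points, n_bins=32):
    """Histogram of point-to-centroid distances, scale-normalised."""
    pts = np.asarray(points, dtype=np.float64)
    centered = pts - pts.mean(axis=0)          # translation invariance
    d = np.linalg.norm(centered, axis=1)       # distances are rotation invariant
    d = d / (d.max() + 1e-12)                  # scale invariance
    hist, _ = np.histogram(d, bins=n_bins, range=(0.0, 1.0))
    return hist / hist.sum()

cloud = np.random.rand(500, 3)                 # stand-in for a segmented object
print(csd_like_descriptor(cloud).shape)        # (32,) feature vector for an ANN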
93

A methodology to extract knowledge from time series using motif identification and feature extraction

Maletzke, André Gustavo 30 April 2009 (has links)
Data mining has been applied in several areas with the objective of extracting interesting and relevant knowledge from large datasets. In this scenario, machine learning provides some of the main methods employed in data mining. Symbolic methods are among the most widely used, since they produce models that domain experts can interpret. However, traditional machine learning methods, such as decision trees and decision rules, do not take into account the temporal information present in the data. This work proposes a methodology to extract knowledge from time series using feature extraction and motif identification. Features and motifs are used as attributes for knowledge extraction performed by machine learning methods. The methodology was evaluated on well-known datasets and compared with the approach of feeding machine learning algorithms with the raw time series. The results show statistically significant differences for most of the datasets employed in the study. Finally, a preliminary case study is presented using environmental monitoring data from the reservoir of the Itaipu Binacional hydroelectric plant, restricted to the application of motif identification. Time series of water temperature collected in different regions of the reservoir were used, and a pattern in the distribution of the identified motifs was observed for each region, agreeing with well-established results in the literature.
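A minimal sketch of the feature-extraction half of this methodology: summary statistics computed per series become attributes for a symbolic learner such as a decision tree. The motif-identification half is omitted, and the particular features below are illustrative choices, not the thesis's exact set.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def series_features(ts):
    ts = np.asarray(ts, dtype=np.float64)
    diffs = np.diff(ts)
    return np.array([ts.mean(), ts.std(), ts.min(), ts.max(),
                     diffs.mean(), diffs.std()])

# hypothetical labelled collection of time series
series = [np.sin(np.linspace(0, 6, 100)), np.random.rand(100)]
labels = [0, 1]
X = np.array([series_features(s) for s in series])

tree = DecisionTreeClassifier(random_state=0).fit(X, labels)
print(tree.predict(X))   # an interpretable model over the extracted attributes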
94

Feature Extraction Using Independent Component Analysis for Spike Sorting.

LOPES, Marcus Vinicius de Sousa 27 February 2013 (has links)
Independent component analysis (ICA) is a method whose objective is to find a linear or nonlinear, non-Gaussian representation such that the components are statistically independent. Such a representation tries to capture the essential structure of the input data. One application of ICA is feature extraction. A central issue in digital signal processing is finding a satisfactory representation, whether for images, speech or any other type of signal, for purposes such as compression and denoising, and ICA can be applied in this direction by proposing generative models of the phenomena to be represented. This work presents the problem of classifying spikes in extracellular recordings, known as spike sorting. It is assumed that spike waveforms depend on factors such as the morphology of the neuron and its distance from the electrode, so different neurons present different spike shapes. However, different neurons may produce similar spikes, which makes classification difficult, and the problem is aggravated by background noise and by the variability of spikes from the same neuron.
A spike sorting algorithm is usually divided into three stages: first the spikes are detected; they are then projected into a feature space (possibly with dimensionality reduction) to facilitate discrimination between the waveforms of different neurons; finally, a clustering algorithm groups these features, identifying the spikes that belong to the same neuron. Here, we propose the use of ICA in the feature extraction stage, a step critical to the spike sorting process, thus distinguishing the activity of each detected neuron and supporting the analysis of the neural population activity near the electrode. The method was compared with conventional techniques such as Principal Component Analysis (PCA) and wavelets, showing a significant improvement in the results.
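A minimal sketch of ICA-based feature extraction in a spike sorting pipeline, assuming spikes have already been detected and aligned as fixed-length waveforms. The component count, clustering settings and synthetic data are assumptions for illustration, not the thesis's actual configuration.

import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
spikes = rng.normal(size=(300, 48))       # 300 aligned waveforms, 48 samples each

ica = FastICA(n_components=3, random_state=0)
features = ica.fit_transform(spikes)      # independent components as features

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(clusters))              # tentative per-neuron spike counts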
95

Towards Developing Computer Vision Algorithms and Architectures for Real-world Applications

January 2018 (has links)
Computer vision technology automatically extracts high-level, meaningful information from visual data such as images or videos, and object recognition and detection algorithms are essential in most computer vision applications. This dissertation focuses on algorithms for real-life computer vision applications, presenting innovative algorithms for object segmentation and feature extraction for object and action recognition in video, sparse feature selection algorithms for medical image analysis, and automated feature extraction using convolutional neural networks for blood cancer grading. To detect and classify objects in video, the objects must be separated from the background, and discriminant features must be extracted from the region of interest before being fed to a classifier. Effective object segmentation and feature extraction are often application specific, which poses major challenges for object detection and classification tasks. This dissertation presents an effective optical-flow-based ROI generation algorithm for segmenting moving objects in video, applicable to surveillance and self-driving vehicles. Optical flow can also serve as a feature in human action recognition: feeding optical-flow features into a pre-trained convolutional neural network improves the performance of human action recognition algorithms. Both algorithms outperformed the state of the art at the time. Medical images and videos pose unique challenges for image understanding, mainly because tissues and cells are often irregularly shaped, colored, and textured, and hand-selecting the most discriminant features is difficult, so an automated feature selection method is desired. Sparse learning is a technique to extract the most discriminant and representative features from raw visual data. However, sparse learning with L1 regularization only takes sparsity in the feature dimension into consideration; we improve the algorithm so that it also selects the type of features, entirely removing less important or noisy feature types from the feature set. We demonstrate this algorithm by analyzing endoscopy images to detect unhealthy abnormalities in the esophagus and stomach, such as ulcers and cancer. Besides the sparsity constraint, other application-specific constraints and prior knowledge may also need to be incorporated into the sparse-learning loss function to obtain the desired results. We demonstrate how to incorporate a similar-inhibition constraint and gaze and attention priors in sparse dictionary selection for gastroscopic video summarization, enabling intelligent key-frame extraction from gastroscopic video data. With recent advances in multi-layer neural networks, automatic end-to-end feature learning has become feasible. A convolutional neural network mimics the mammalian visual cortex and can extract the most discriminant features automatically from training samples. We present a convolutional neural network with a hierarchical classifier to grade the severity of follicular lymphoma, a type of blood cancer; it reaches 91% accuracy, on par with analysis by expert pathologists.
Developing real-world computer vision applications involves more than developing core vision algorithms to extract and understand information from visual data; it is also subject to many practical requirements and constraints, such as hardware and computing infrastructure, cost, robustness to lighting changes and deformation, and ease of use and deployment. The general processing pipeline and system architecture of computer-vision-based applications share many design principles. We developed common processing components and a generic framework for computer vision applications, along with a versatile scale-adaptive template matching algorithm for object detection. We demonstrate these design principles and best practices by developing and deploying a complete computer vision application, a multi-channel water level monitoring system, whose techniques and design methodology generalize to other real-life applications. General software engineering principles, such as modularity, abstraction, robustness to requirement changes, and generality, are all demonstrated in this research. / Dissertation/Thesis / Doctoral Dissertation, Computer Science, 2018
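A sketch of the optical-flow-based ROI generation ingredient described above, using OpenCV's Farneback dense flow: regions with large flow magnitude become candidate moving-object ROIs. The thresholds and morphology below are illustrative assumptions, not the dissertation's actual parameters.

import cv2
import numpy as np

def motion_rois(prev_gray, curr_gray, mag_thresh=2.0):
    """prev_gray, curr_gray: consecutive uint8 grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    mask = (mag > mag_thresh).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]   # candidate object ROIs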
96

Adapting iris feature extraction and matching to the local and global quality of the iris image

Cremer, Sandra 09 October 2012 (has links)
Iris recognition has become one of the most reliable and accurate biometric systems available, yet its robustness to degradations of the input images is limited. Iris-based systems can generally be divided into four steps: segmentation, normalization, feature extraction and matching. Degradations of input image quality can affect all of these steps. For instance, they make segmentation more difficult, which can result in normalized iris images that contain distortion or undetected artefacts, and they can reduce the amount of information available for matching. In this thesis we propose methods to improve the robustness of the feature extraction and matching steps to degraded input images. We work with two algorithms for these two steps, both based on convolution with 2D Gabor filters but using different matching techniques. The first part of our work aims to control the quality and quantity of the information selected from the normalized iris images for matching. To this end we define local and global quality metrics that measure the amount of occlusion and the richness of texture in iris images, and use these metrics to determine the position and number of regions to exploit for feature extraction and matching. In the second part, we study the link between image quality and the recognition performance of the two algorithms, showing that the second is more robust to degraded images containing artefacts, distortion or poor iris texture.
Finally, we propose a complete iris recognition system that combines our local and global quality metrics to optimize recognition performance.
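A sketch of the Gabor-based feature extraction both algorithms build on: convolve a normalized iris strip with a 2D Gabor filter and binarise the phase, iris-code style. The kernel parameters and the random stand-in image are assumptions; the thesis's actual filter bank and matching process are not reproduced here.

import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(size=15, wavelength=8.0, sigma=4.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.exp(1j * 2 * np.pi * x / wavelength)

def iris_code(normalised_iris):
    k = gabor_kernel()
    real = convolve(normalised_iris, k.real, mode="wrap")
    imag = convolve(normalised_iris, k.imag, mode="wrap")
    return np.stack([real > 0, imag > 0])   # two phase bits per pixel

strip = np.random.rand(64, 512)             # stand-in normalised iris image
code = iris_code(strip)
# matching would compare codes by Hamming distance over unoccluded regions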
97

Segmentation of urban space through airborne LiDAR technology.

Ferreira, Flávia Renata 28 August 2014 (has links)
LiDAR (Light Detection And Ranging) has consolidated itself as a mapping technology, contributing to geographic information science. This work reviews the state of the art of airborne LiDAR, or ALS (Airborne Laser Scanning), a technology in constant change and improvement, with respect to sensor systems and the storage structure of the acquired information. It first presents an overview of the use of airborne LiDAR in producing digital elevation models, in surveys of transmission lines, and in the transportation sector, with emphasis on the task of extracting vegetation and buildings while also detecting bare ground. For building extraction, several concepts developed over the past four years are presented. In the practical part, a test region was used to compare urban features obtained by the automatic classification performed by the TerraScan software with corresponding features from a cartographic reference base, showing convergences and divergences between the two products. A slope analysis was carried out to determine building edges and thereby segment these features. Cartographic quality control of the LiDAR product was performed in order to classify it against the Brazilian digital cartographic accuracy standard: the LiDAR product met classes B, C and D of the new standard from the 1:10,000 scale onward. Altimetric quality control based on the contour lines of the cartographic reference product was also proposed and carried out. Careful use of this product is recommended, depending on the mapping scale and the user's needs.
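A minimal sketch of the slope analysis used to find building edges once the LiDAR returns are rasterised into a surface model: abrupt height jumps produce steep slope values along roof borders. The cell size and slope threshold are illustrative assumptions.

import numpy as np

def edge_mask(dsm, cell_size=1.0, slope_thresh_deg=45.0):
    """dsm: 2D array of elevations on a regular grid (metres)."""
    dz_dy, dz_dx = np.gradient(dsm, cell_size)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    return slope > slope_thresh_deg          # True along abrupt height jumps

dsm = np.zeros((100, 100)); dsm[30:60, 30:60] = 10.0   # toy flat-roof building
print(edge_mask(dsm).sum())                  # cells flagged as building borders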
98

Proposal For a Vision-Based Cell Morphology Analysis System

González García, Jaime January 2008 (has links)
One of the fields where image processing finds application but remains largely unexplored is the analysis of cell morphology. This master's thesis proposes a system to carry out this research and sets out the necessary technical basis to make it feasible, ranging from the processing of time-lapse sequences using image segmentation to the representation, description and classification of cells in terms of morphology.

Due to the high variability of cell morphological characteristics, several segmentation methods were implemented to face each of the problems encountered: edge detection, region growing and marked watershed proved to be successful processing algorithms. This variability inherent to cells, and the fact that the human eye has a natural disposition for solving segmentation problems, led to the development of a user-friendly interactive application, the Time Lapse Sequence Processor (TLSP). Although initially conceived as a mere interface for performing cell segmentation, the TLSP concept has evolved into a complete multifunction tool for cell morphology analysis: segmentation, morphological data extraction, analysis and management, cell tracking and recognition, etc. In its latest version, TLSP v0.2 Alpha contains several segmentation tools, an improved user interface, and data extraction and management capabilities.

Finally, a wide set of recommendations and improvements is discussed, pointing the path for future development in this area.
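A sketch of one of the segmentation routes the thesis reports as successful, marker-controlled ("marked") watershed, here expressed with scikit-image. The threshold factors and marker construction are illustrative assumptions, not TLSP's actual implementation.

import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel, threshold_otsu
from skimage.segmentation import watershed

def segment_cells(gray):
    """gray: 2D float image of bright cells on a darker background."""
    elevation = sobel(gray)                  # edges form the watershed 'ridges'
    t = threshold_otsu(gray)
    markers = np.zeros_like(gray, dtype=int)
    markers[gray < 0.8 * t] = 1              # confident background
    markers[gray > 1.2 * t] = 2              # confident cell interior
    labels = watershed(elevation, markers)   # flood from the markers
    return ndi.label(labels == 2)[0]         # one label per separated cell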
99

Signal Processing Using Wavelets in a Ground Penetrating Radar System

Andréasson, Thomas January 2003 (has links)
This master's thesis explores whether time-frequency techniques can be utilized in a ground penetrating radar system. The system studied is the HUMUS system, developed at FOI, which is used for the detection and classification of buried land mines.

The objective of this master's thesis is twofold. First, it gives a theoretical introduction to the wavelet transform and wavelet packets, and introduces general time-frequency transformations. Second, it presents and implements an adaptive method that performs the task of a feature extractor.

The wavelet theory presented in this thesis gives a first introduction to the concept of time-frequency transformations. The wavelet transform and wavelet packets are studied in detail. The most important goal of this introduction is to define the theoretical background needed for the second objective of the thesis; some additional concepts are also introduced where they were deemed necessary for an introduction to wavelets.

To illustrate the possibilities of wavelet techniques in the existing HUMUS system, one specific application has been chosen: feature extraction. The feature extraction method described in this thesis uses wavelet packets to transform the original radar signal into a form where the signal's features are better revealed. One of the algorithm's strengths is its ability to adapt itself to the kind of input radar signals expected: it picks the "best" wavelet packet from a large number of possible wavelet packets.

The method used in this thesis emanates from a previously published dissertation. The method proposed there has been modified for the specific environment of the HUMUS system, implemented in MATLAB, and tested using data obtained by the HUMUS system. The results are promising; even "weak" objects can be revealed using the method.
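The thesis implementation is in MATLAB, but the idea of wavelet-packet feature extraction can be sketched in Python with PyWavelets: decompose a radar trace and use per-node energies as a feature vector. The adaptive "best basis" selection the thesis emphasises is reduced here to a fixed-level energy map; the wavelet choice and decomposition depth are illustrative assumptions.

import numpy as np
import pywt

def packet_energies(signal, wavelet="db4", level=3):
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    energies = np.array([np.sum(n.data ** 2) for n in nodes])
    return energies / (energies.sum() + 1e-12)   # normalised feature vector

trace = np.random.randn(256)                     # stand-in radar A-scan
print(packet_energies(trace))                    # input to a classifier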
100

Simultaneous Localization And Mapping in a Marine Environment using Radar Images

Svensson, Henrik January 2009 (has links)
Simultaneous Localization And Mapping (SLAM) is the process of mapping an unknown environment while simultaneously keeping track of the position within that map. In this thesis, SLAM is performed in a marine environment using radar images only.

A SLAM solution is presented. It uses SIFT to compare pairs of radar images; from these comparisons, measurements of the boat's movements are obtained. A type of Kalman filter, the Exactly Sparse Delayed-state Filter (ESDF), uses these measurements to estimate the trajectory of the boat. Once the trajectory is estimated, the radar images are joined together to create a map.

The presented solution is tested and the estimated trajectory is compared to GPS data. Results show that the method performs well for at least shorter periods of time.
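A sketch of the image-to-image measurement step: match SIFT keypoints between consecutive radar images and estimate the relative rigid motion. The ESDF filtering stage is omitted, and the RANSAC-based motion model is an illustrative assumption rather than the thesis's exact estimator.

import cv2
import numpy as np

def relative_motion(img_a, img_b):
    """img_a, img_b: consecutive uint8 grayscale radar images."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des_a, des_b)
    src = np.float32([kp_a[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches])
    # 2D rotation + translation (partial affine) between the two radar sweeps
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M   # feeds the Kalman/ESDF update as a motion measurement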
