About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Map-Based Trajectory Learning for Geolocalization using Deep Learning

Zha, Bing. January 2021.
No description available.
2

Uma análise da geolocalização e gameficação para o desenvolvimento de aplicações móveis / An analysis of geolocation and gamification for mobile application development

Coelho, Rafael Cortat. 29 May 2013.
Since Web 2.0 emerged, the user's role has changed profoundly, leading to a type of user experience characterized by content generation and collaboration. Supported by modern mobile devices with increasingly advanced capabilities, collaboration became a reality in several segments of society, which now take advantage of the new opportunities brought by mobile applications. In this context, a strong demand for geolocation features can be observed, as the practice of interaction among groups that share, even temporarily, the same geographical region continues to grow. The triad SoLoMo (social, local, mobile) thus enters the scene and becomes the main topic of this research. In addition, we can observe the growing incorporation of game mechanics into digital environments, a characteristic called "gamification", a term derived from the word "game". This device is relevant because it can motivate users to interact more, and better, with the environment and with other users. Among the applications that exploit all these features, Foursquare, a social network that lets users indicate where they are at a given moment, deserves particular attention; an analysis of this tool is therefore included in this research, given its importance to the market. The purpose of this research, which follows a descriptive exploratory methodology, is to show how this scenario affects and encourages the development of mobile applications. To that end, the research draws on the contributions of Pierre Lévy, André Lemos and Lucia Santaella on collaboration, social networking and collective intelligence, complemented on more specific topics by Martha Gabriel, Raquel Recuero, Hugo Fuks and Mariano Pimentel, and Gabe Zichermann and Christopher Cunningham.
3

Mesure d’intégrité par l’exploitation des signaux de navigation par satellites / Exploitation of GNSS signals for integrity measurement

Charbonnieras, Christophe. 4 December 2017.
In Global Navigation Satellite Systems (GNSS) applications, integrity is managed on the reception side by the detection, identification and exclusion of faulty pseudorange measurements. Usually based on the a posteriori Receiver Autonomous Integrity Monitoring (RAIM) concept, integrity techniques provide high performance for civil aviation, whose navigation context is defined by a clear-sky environment; the WLSR RAIM algorithm is commonly used in this setting. Nevertheless, RAIM techniques are not compatible with terrestrial navigation in harsh environments. Urban areas, for instance, are characterized by recurrent masking of direct satellite signals and by the reception of many multipaths generated by the receiver's close environment. RAIM does not consider all the data available in the reception chain, which dramatically degrades its detection performance. It is therefore necessary to develop integrity-monitoring methods compatible with such a navigation context. To that end, this PhD work studies the contribution of a priori GNSS information that conventional RAIM techniques leave unused. Two main parameters were exploited: the raw received GNSS signal and the estimated Directions Of Arrival (DOA) of the satellite signals.

The first step was devoted to developing an a priori method that evaluates the consistency of the receiver's estimated Position Velocity Time (PVT) vector with respect to the raw received signal. This method, named Direct-RAIM (D-RAIM), showed high detection sensitivity, allowing the user to anticipate navigation risks and to characterize more finely the quality of the receiver's close environment. However, the a priori nature of the approach can lead to missed detections of navigation errors if the signal model becomes flawed. To circumvent this limitation, a WLSR RAIM – D-RAIM coupling, named Hybrid-RAIM (H-RAIM), was developed; it combines the robustness and the sensitivity brought by the two techniques.

The second research axis brought to light the contribution of DOA information to autonomous integrity monitoring. Using an antenna array at the receiver, the user can obtain DOA estimates for the whole visible constellation. Theoretically, the joint evolution of the DOAs is directly tied to the array's attitude, so any inconsistency on one or more channels caused by biased DOA estimates can be detected against the rest of the constellation. The RANdom SAmple Consensus (RANSAC) algorithm was used to detect aberrant behavior in the DOA estimates and thus measure the trust the user can place in each channel; a WLSR RAIM RANSAC algorithm was implemented on this basis. Integrating the DOA component adds a degree of freedom to receiver autonomous integrity monitoring, refining error detection and exclusion.

Finally, a software receiver was implemented that processes Galileo signals from signal generation through positioning and integrity monitoring. It was evaluated on simulated data characterizing urban environments.
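The RANSAC step described above can be pictured with a minimal sketch: if the measured DOAs in the array frame relate to the satellite directions predicted from ephemeris by a single rotation (the array attitude), then channels whose DOA estimates fit no consensus rotation are suspect. The sketch below is an illustrative assumption of how such a check could look, not the thesis implementation; all names and thresholds are invented for the example.

```python
# Hypothetical sketch: flag biased DOA estimates via RANSAC over a
# common array rotation. `ref` holds unit line-of-sight vectors from
# ephemeris (reference frame); `meas` holds measured DOAs (array
# frame), both of shape (N, 3). Thresholds are illustrative.
import numpy as np

def kabsch(a, b):
    """Least-squares rotation R such that R @ a[i] is close to b[i]."""
    u, _, vt = np.linalg.svd(a.T @ b)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T

def ransac_doa(ref, meas, n_iter=200, thresh_deg=2.0, seed=0):
    rng = np.random.default_rng(seed)
    best = np.zeros(len(ref), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(ref), size=2, replace=False)  # minimal set
        r = kabsch(ref[idx], meas[idx])                    # candidate attitude
        cosang = np.clip(np.sum((ref @ r.T) * meas, axis=1), -1.0, 1.0)
        inliers = np.degrees(np.arccos(cosang)) < thresh_deg
        if inliers.sum() > best.sum():
            best = inliers
    return best  # False entries mark channels with suspect DOA estimates
```

Channels flagged this way would then be down-weighted or excluded in the position solution, which is the role the thesis assigns to its WLSR RAIM RANSAC combination.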
4

Géolocalisation d’émetteurs en une étape : Algorithmes et performances / One-step transmitter geolocalization: algorithms and performance

Delestre, Cyrile. 26 January 2016.
This thesis addresses the geolocalization of radiocommunication transmitters (estimating their position in space) from several widely spaced multi-sensor stations. Conventional geolocalization methods such as triangulation work in two steps: the first step estimates intermediate parameters, and the second step "merges" these measurements from the stations to obtain the transmitter positions. One-step methods instead use the observations from all the antennas to estimate the source positions directly and optimally. Processing the signals directly on the global array (composed of all the local station arrays) introduces a broadband effect on the signals between stations. The thesis studies this residual broadband effect present on the global array of one-step methods. It then proposes improvements to one-step geolocalization methods, notably through large-dimensional random matrix theory and the introduction of a new method named LOST-FIND. Finally, a new approach that tackles the broadband problem differently is introduced, yielding the TARGET algorithm.
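For readers unfamiliar with the two-step baseline, the sketch below shows its second ("fusion") step in the simplest 2D case: each station contributes a bearing line, and the position estimate is the least-squares intersection of those lines. It is a toy illustration of conventional triangulation, not of the one-step LOST-FIND or TARGET algorithms; the coordinates and noise-free bearings are assumptions made for the example.

```python
# Toy 2D triangulation (step 2 of a two-step chain): least-squares
# intersection of bearing lines from known station positions.
import numpy as np

def triangulate(stations, azimuths):
    """stations: (N, 2) positions; azimuths: bearings in radians
    (angle of the source direction, measured from the x-axis)."""
    a = np.zeros((2, 2))
    b = np.zeros(2)
    for p, az in zip(stations, azimuths):
        n = np.array([-np.sin(az), np.cos(az)])  # normal to the bearing line
        proj = np.outer(n, n)                    # penalizes offset along n
        a += proj
        b += proj @ p
    return np.linalg.solve(a, b)

stations = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0]])
true_pos = np.array([400.0, 300.0])
d = true_pos - stations
azimuths = np.arctan2(d[:, 1], d[:, 0])          # exact bearings
print(triangulate(stations, azimuths))           # -> approx. [400. 300.]
```

With noisy bearings the same solve returns the best-fit intersection; a one-step method would instead skip the explicit bearing-estimation stage and search for the position directly on the raw station signals.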
5

Développement d'un modèle d'estimation des variables de trafic urbain basé sur l'utilisation des technologies de géolocalisation / Leveraging geolocalization technologies to model and estimate urban traffic

Hofleitner, Aude. 4 December 2012.
Faced with growing mobility demand, sustainable development policies seek to optimize the use of existing transportation infrastructure. In particular, ubiquitous traffic information systems have the potential to optimize the use of the transportation network: they must provide users with accurate, reliable real-time information for optimizing their route choices, and they can also serve as decision-support tools for network operators. The thesis studies how the emergence of Internet services and location-based services on mobile devices enables the development of novel Intelligent Transportation Systems that estimate and broadcast traffic conditions in arterial networks. Sparsely sampled probe-vehicle data is the main source of arterial traffic data, with the prospect of broad coverage in the near future. The small number of vehicles that report their position at a given time, and the low sampling frequency, require specific models and algorithms to extract valuable information from the available data. On the one hand, the variability of traffic conditions in urban networks, caused mainly by the presence of traffic lights, motivates a statistical approach to arterial traffic dynamics. On the other hand, accurate modeling of the physics of arterial traffic from hydrodynamic theory (the formation and dissolution of horizontal queues) ensures the physical validity of the model.

The thesis integrates this dynamical model of arterial traffic into a statistical framework to fuse noisy probe-vehicle measurements and estimate the physical parameters that characterize the traffic dynamics. In particular, it derives and estimates the probability distributions of vehicle location and of travel time between arbitrary locations on the network. The thesis leverages the data and infrastructure developed by the Mobile Millennium project at the University of California, Berkeley, to validate the proposed models and algorithms. The results underline the importance of designing statistical models and algorithms suited to sparsely sampled probe-vehicle data in order to develop the next generation of operational large-scale traffic information systems.
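The parametric travel-time law mentioned above can be given a toy illustration: traffic lights split vehicles into a regime that passes freely and a regime that queues, producing a multimodal travel-time distribution whose mixture weights reflect the probability of stopping. The sketch below fits such a two-regime mixture with EM; the Gaussian form and every number are illustrative assumptions, not the physically derived distribution of the thesis.

```python
# Toy two-regime travel-time mixture fitted by EM. The thesis derives a
# physically grounded parametric law; this sketch only illustrates the
# "free-flowing vs. queued" intuition behind it.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
free_flow = rng.normal(30.0, 3.0, size=600)  # passed the light: ~30 s link time
queued = rng.normal(55.0, 8.0, size=400)     # stopped: extra queue delay
times = np.concatenate([free_flow, queued])[:, None]

gm = GaussianMixture(n_components=2, random_state=0).fit(times)
for w, m, v in zip(gm.weights_, gm.means_.ravel(), gm.covariances_.ravel()):
    print(f"weight={w:.2f}  mean={m:.1f} s  std={v ** 0.5:.1f} s")
# The fitted weight of the slower component estimates the probability of
# stopping at the light, one of the physical parameters of interest.
```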
6

Raffinement de la localisation d’images provenant de sites participatifs pour la mise à jour de SIG urbain / Refining the localization of images from participative websites for urban GIS updates

Semaan, Bernard. 14 December 2018.
Cities are active places: new buildings go up, others are demolished, and businesses change hands on a daily basis. The managers of a city's Geographic Information Systems aim to keep their digital models of the city as up to date as possible. These models may consist of 2D maps, but also of 3D models reconstructed from images taken from the sky or from the ground. Collaborative cartography, as enabled by the "OpenStreetMap.org" platform, emerged to make geographical information available to all and to keep 2D maps updated by the platform's users. To improve the update process, and in the same spirit as these participative approaches, we propose using images from photo-sharing platforms such as "Flickr", "Twitter", etc. Images downloaded from such platforms carry only a rough localization and no information about the orientation of the photograph. We therefore propose a system that helps find a better localization of each image and recovers the orientation the photograph was shot with. The system uses both the visual and the semantic information present in a single image. To do this, we present a fully automatic processing chain composed of three main layers: a data retrieval and preprocessing layer, a feature extraction and processing layer, and a decision-making layer. We then present the results of the whole system, which combines semantic and visual processing; we call it the Data Gathering system for image Pose Estimation (DGPE).

We also present a new automatic method for detecting buildings of simple architecture, developed and used within our system. The method is based on segments detected in the image and is called Segments Based Building Detection (SBBD). We test it under various shooting conditions (occlusions, weather changes, etc.) and compare its detection results with another state-of-the-art method on several image databases.
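SBBD itself is not detailed in the abstract, so the sketch below only illustrates its declared starting point: extracting line segments from a photograph and keeping the near-horizontal and near-vertical ones that simple building architecture tends to produce. The OpenCV pipeline, the file name, and all thresholds are assumptions made for the example.

```python
# Illustrative segment extraction for building-edge candidates.
# Assumes an input image "facade.jpg"; all thresholds are arbitrary.
import cv2
import numpy as np

img = cv2.imread("facade.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                        threshold=80, minLineLength=40, maxLineGap=5)

kept = []
for x1, y1, x2, y2 in (lines[:, 0] if lines is not None else []):
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180
    if min(angle, 180 - angle) < 10 or abs(angle - 90) < 10:
        kept.append((x1, y1, x2, y2))  # roughly axis-aligned segment
print(f"{len(kept)} candidate building segments")
```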
7

Airborne Angle-Only Geolocalization

Kallin, Tove. January 2021.
Airborne angle-only geolocalization is the localization of objects at ground level from airborne vehicles (AVs) using bearing measurements, namely azimuth and elevation. This thesis introduces terrain elevation data into the airborne angle-only geolocalization problem and demonstrates its applicability to the localization of jammers. Jammers are often used for deliberate interference with malicious intent and can disrupt a vehicle's positioning system; it is important to locate them, whether to avoid them or to remove them.

Three localization methods, nonlinear least squares (NLS), the extended Kalman filter (EKF) and the unscented Kalman filter (UKF), are implemented and tested on simulated data. The methods are also compared to the theoretical lower bound, the Cramér-Rao Lower Bound (CRLB), to check whether an efficient estimator exists. The simulated scenarios vary the number of AVs, their relative flight paths, and the available knowledge of the terrain. With knowledge of the terrain elevation, the methods give more consistent localization than without it: without elevation data, localization relies on favorable problem geometry, i.e. the relative flight path of the AVs, whereas geometry is less critical when elevation data is available. Elevation data does not, however, improve the localization for every geometry.

No method is clearly better than the others when elevation data is used; their performance is very similar and all converge to the CRLB. This can also be read as an advantage: the use of elevation data is not tied to a particular method, leaving implementers free to choose whichever they prefer.
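As a concrete picture of the NLS formulation, the sketch below stacks azimuth and elevation residuals from several AV positions and solves for a ground target with scipy. The geometry, noise level, and absence of a terrain model are simplifying assumptions for the example, not the thesis scenarios.

```python
# Minimal angle-only NLS sketch: estimate a ground target from
# azimuth/elevation bearings measured at known AV positions.
import numpy as np
from scipy.optimize import least_squares

av = np.array([[0.0, 0.0, 1000.0],
               [500.0, 200.0, 1100.0],
               [900.0, -300.0, 950.0]])         # AV positions (m)
target = np.array([700.0, 400.0, 0.0])          # true jammer location

def bearings(p):
    d = p - av
    az = np.arctan2(d[:, 1], d[:, 0])
    el = np.arctan2(-d[:, 2], np.hypot(d[:, 0], d[:, 1]))  # depression angle
    return np.concatenate([az, el])

rng = np.random.default_rng(1)
meas = bearings(target) + rng.normal(0.0, np.radians(0.2), 6)  # noisy az/el

def residuals(p):
    r = bearings(p) - meas
    r[:3] = (r[:3] + np.pi) % (2 * np.pi) - np.pi  # wrap azimuth residuals
    return r

est = least_squares(residuals, x0=np.zeros(3)).x
print(est)  # close to [700, 400, 0]
```

Terrain knowledge enters naturally here: constraining the z component to the terrain height under (x, y) removes one unknown, which matches the thesis's observation that geometry becomes less critical when elevation data is available.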
8

GPS-Free UAV Geo-Localization Using a Reference 3D Database

Karlsson, Justus. January 2022.
The goal of this thesis is global geolocalization using only visual input and a 3D database for reference. In recent years, Convolutional Neural Networks (CNNs) have seen huge success in image classification, and the flattened tensors at the final layers of a CNN can be viewed as vectors describing features of the input image. Two networks were trained so that satellite and aerial images taken from different views of the same location produce similar feature vectors, while images taken at different locations produce dissimilar ones. After training, the position of a given aerial image can then be estimated by finding the satellite image whose feature vector is most similar to that of the aerial image.

A previous method, Where-CNN, was used as the baseline model. Batch-hard triplet loss, the Adam optimizer, and a different CNN backbone were tested as possible augmentations to this method. The models were trained on 2640 locations in Linköping and Norrköping, then tested on a sequence of 4411 query images along a path in Jönköping. The search region contained 1449 locations covering a total area of 24 km².

In Top-1% accuracy there was a significant improvement over the baseline, from 61.62% to 88.62%. The environment was then modeled as a Hidden Markov Model to filter the sequence of guesses, and the Viterbi algorithm was used to find the most probable path. This filtering reduced the average error along the path from 2328.0 m to just 264.4 m for the best model; the baseline had an average error of 563.0 m after filtering.

A few 3D methods were also tested. One drawback was that no pretrained weights existed for these models, unlike the 2D models, which were pretrained on the ImageNet dataset. The best 3D model achieved a Top-1% accuracy of 70.41%; notably, the best 2D model without any pretraining achieved a lower Top-1% accuracy of 49.38%. In addition, a method for efficient convolution on sparse 3D data was presented: compared to the straightforward approach, it was almost 2.5 times faster while retaining comparable accuracy for individual query predictions.

While the improvement over the baseline was significant, it was not enough to provide reliable, accurate localization from individual images. For global navigation, with the entire Earth as the search space, the information in a single 2D image might not be uniquely identifiable. The 3D CNN techniques tested did not improve on the pretrained 2D models; using more data and experimenting with different 3D CNN architectures would be an exciting direction for further research.
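The HMM filtering step lends itself to a short sketch: candidate map locations are the hidden states, per-frame retrieval similarities give the emission scores, and transitions penalize jumps between distant locations; Viterbi then extracts the most probable path. The toy sizes and the Gaussian transition penalty below are illustrative assumptions, not the thesis configuration.

```python
# Toy Viterbi smoothing of per-frame localization guesses.
import numpy as np

def viterbi(log_emit, log_trans):
    """log_emit: (T, S) per-frame log scores; log_trans: (S, S)."""
    t_len, s = log_emit.shape
    delta = log_emit[0].copy()
    back = np.zeros((t_len, s), dtype=int)
    for t in range(1, t_len):
        scores = delta[:, None] + log_trans      # (from, to)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    path = [int(delta.argmax())]
    for t in range(t_len - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Four locations on a line; transitions favor staying nearby.
pos = np.arange(4, dtype=float)
log_trans = -0.5 * (pos[:, None] - pos[None, :]) ** 2
log_emit = np.log([[0.7, 0.1, 0.1, 0.1],
                   [0.1, 0.6, 0.2, 0.1],
                   [0.1, 0.2, 0.6, 0.1],
                   [0.1, 0.1, 0.2, 0.6],
                   [0.4, 0.1, 0.1, 0.4]])        # last frame is ambiguous
print(viterbi(log_emit, log_trans))              # [0, 1, 2, 3, 3]
```

The ambiguous last frame is resolved in favor of the location consistent with the path so far, the same smoothing effect that reduced the average path error in the thesis experiments.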
