1

UAV Enabled IoT Network Designs for Enhanced Estimation, Detection, and Connectivity

Bushnaq, Osama
The Internet of Things (IoT) is a foundational building block of the upcoming information revolution. In particular, the IoT bridges the cyber domain to anything in our physical world, enabling unprecedented monitoring, connectivity, and smart control. The use of Unmanned Aerial Vehicles (UAVs) adds an extra level of flexibility, resulting in more advanced and efficient connectivity and data aggregation. In the first part of the thesis, we focus on optimal IoT device placement and the management of spectral and energy budgets for accurate source estimation. Practical aspects such as measurement accuracy, communication quality, and energy harvesting are considered. The problem is formulated so that a mix of cheap and expensive sensors is placed to minimize the estimation error under a limited system cost. The IoT revolution relies on aggregating big data from massive numbers of devices that are widely scattered in our environment. These devices are expected to be low-complexity, low-cost, and power-limited, which imposes stringent constraints on network operation. Aerial data transmission offers strong line-of-sight links and flexible, rapid deployment. UAV-enabled IoT networks can, for instance, offer solutions to avoid and manage natural disasters such as forest fires. In this thesis we investigate aerial data aggregation for field estimation, wildfire detection, and coverage and connectivity enhancement via UAVs. To accomplish the network task, the field of interest is divided into several subregions over which the UAVs hover to collect samples from the underlying nodes. To this end, we formulate and solve optimization problems that minimize the total hovering and traveling times by optimizing the UAV hovering locations, the hovering time at each location, and the trajectory traversed between hovering locations. Finally, we propose the use of a tethered UAV (T-UAV) to assist the terrestrial network, where the tether provides a power supply and connects the T-UAV to the core network through a high-capacity link. The T-UAV, however, has limited mobility due to the finite tether length. A stochastic geometry-based analysis is provided for the optimal coverage probability of T-UAV-assisted cellular networks.
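As an illustration of the data-aggregation trade-off described in this abstract, the following is a minimal sketch, not the thesis's actual formulation: a UAV visits one hypothetical hovering point per subregion, hovers long enough to collect each subregion's data over an assumed link rate, and the visiting order is chosen with a simple nearest-neighbour heuristic. All coordinates, data volumes, speeds, and rates are illustrative assumptions.

```python
# A minimal sketch (not the thesis's actual optimization) of the aerial
# data-aggregation trade-off: total mission time = travel time between
# hovering points + hovering time needed to collect each subregion's data.
import math
import random

random.seed(0)

# Hypothetical subregion hover points (x, y in metres) and data volumes (Mbit)
hover_points = [(random.uniform(0, 1000), random.uniform(0, 1000)) for _ in range(8)]
data_mbit = [random.uniform(5, 20) for _ in hover_points]

UAV_SPEED = 10.0  # m/s, assumed cruise speed
LINK_RATE = 2.0   # Mbit/s, assumed uplink rate while hovering

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_tour(points, start=(0.0, 0.0)):
    """Nearest-neighbour visiting order over the hover points (a simple TSP heuristic)."""
    remaining = list(range(len(points)))
    tour, cur = [], start
    while remaining:
        nxt = min(remaining, key=lambda i: dist(cur, points[i]))
        tour.append(nxt)
        cur = points[nxt]
        remaining.remove(nxt)
    return tour

tour = greedy_tour(hover_points)
travel_time = dist((0.0, 0.0), hover_points[tour[0]]) / UAV_SPEED + sum(
    dist(hover_points[tour[i]], hover_points[tour[i + 1]]) / UAV_SPEED
    for i in range(len(tour) - 1)
)
hover_time = sum(d / LINK_RATE for d in data_mbit)

print(f"visit order: {tour}")
print(f"travel time: {travel_time:.1f} s, hover time: {hover_time:.1f} s")
print(f"total mission time: {travel_time + hover_time:.1f} s")
```

The thesis optimizes hovering locations, hovering times, and trajectories jointly; the sketch only separates the two time components to show how travel and hover time both enter a mission-time objective.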
2

Assessment of a Low Cost IR Laser Local Tracking Solution for Robotic Operations

Du, Minzhen, 14 May 2021
This thesis assessed the feasibility of using an off-the-shelf virtual reality tracking system as a low-cost, precise pose estimation solution for robotic operations in both indoor and outdoor environments. Such a tracking solution has the potential to assist critical operations related to planetary exploration missions, parcel handling and delivery, and wildfire detection and early-warning systems. The boom in virtual reality experiences has accelerated the development of various low-cost, precise indoor tracking technologies. For this thesis we chose to adapt the SteamVR Lighthouse system developed by Valve, which uses photodiodes on the trackers to detect the rotating IR laser sheets emitted from the anchored base stations, also known as lighthouses. Previous research had used the first generation of lighthouses, which has limitations in the communication from the lighthouses to the tracker, and a NASA study reported poor tracking performance under sunlight. We chose the second-generation lighthouses, which improve this communication scheme, and performed various experiments to assess their performance outdoors, including under sunlight. The study had two stages. The first stage focused on a controlled indoor environment in which an unmanned aircraft system (UAS) flew repeatable patterns while being tracked simultaneously by the Lighthouse system and a reference indoor tracking system; the results showed that the Lighthouse's tracking precision is comparable to that of the industry-standard indoor tracking solution. The second stage focused on outdoor experiments with the tracking system, comparing UAS flights between day and night conditions as well as positioning-accuracy assessments with a CNC machine under indoor and outdoor conditions. The results showed matching performance between day and night, still comparable to the industry-standard indoor tracking solution down to centimeter precision, and matching the simulated CNC trajectory down to millimeter precision. There remains room for improvement in the experimental method and equipment used, as well as in the tracking system itself, before adoption in real-world applications. / Master of Science
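To make the precision comparison concrete, here is a minimal sketch, using synthetic stand-in samples rather than measured data, of how a Lighthouse-tracked trajectory could be compared point-by-point against a reference system and summarized as RMSE and worst-case error.

```python
# A minimal sketch of the kind of precision comparison described above:
# time-aligned position samples from the Lighthouse tracker and a reference
# system are compared point-by-point. The trajectories below are synthetic
# stand-ins, not data from the thesis.
import math

# Hypothetical paired samples (x, y, z in metres), already time-aligned
reference = [(0.0, 0.0, 1.0), (0.5, 0.0, 1.0), (1.0, 0.5, 1.2), (1.5, 1.0, 1.2)]
lighthouse = [(0.004, -0.003, 1.002), (0.497, 0.002, 0.998),
              (1.006, 0.495, 1.204), (1.495, 1.003, 1.197)]

def position_errors(ref, meas):
    """Euclidean error for each paired sample."""
    return [math.dist(r, m) for r, m in zip(ref, meas)]

errs = position_errors(reference, lighthouse)
rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
print(f"RMSE: {rmse * 1000:.1f} mm, max error: {max(errs) * 1000:.1f} mm")
```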
3

Synthesis of Multispectral Optical Images from SAR/Optical Multitemporal Data Using Conditional Generative Adversarial Networks

Bermudez Castro, Jose David, 8 April 2021
Optical images from Earth observation are often affected by the presence of clouds. In order to reduce these effects, different reconstruction techniques have been proposed in recent years. A common alternative is to exploit data from active sensors, such as Synthetic Aperture Radar (SAR), as they are nearly independent of atmospheric conditions and solar illumination. On the other hand, SAR images are more difficult to interpret than optical images and require specific treatment. Recently, conditional Generative Adversarial Networks (cGANs) have been widely used to learn mapping functions that relate data from different domains. This work proposes a cGAN-based method to synthesize optical data from data of other sources: data from multiple sensors, multitemporal data, and data at multiple resolutions. The working hypothesis is that the quality of the generated images benefits from the amount of data used as conditioning variables for the cGAN. The proposed solution was evaluated on two datasets. As conditioning data we used co-registered SAR data from one or two dates produced by the Sentinel-1 sensor, and optical images produced by the Sentinel-2 and Landsat satellite series, respectively. The experimental results demonstrate that the proposed solution is able to synthesize realistic optical data. The quality of the synthesized images was measured in two ways: first, by the classification accuracy obtained on the generated images and, second, by the spectral similarity of the synthesized images to reference images. The experiments confirmed the hypothesis that the proposed method tends to produce better results as more conditioning data are provided to the cGAN.
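As a rough illustration of the conditioning idea, the following is a minimal pix2pix-style sketch, an assumed architecture rather than the network used in this work: a generator maps a stack of SAR channels and an earlier-date optical image to synthesized optical bands, a discriminator scores (conditioning, image) pairs, and the generator loss combines the standard cGAN adversarial term with an L1 term. Channel counts and layer sizes are illustrative.

```python
# A minimal conditional-GAN sketch (assumed architecture, not the thesis's
# exact network) for SAR/optical-to-optical synthesis using PyTorch.
import torch
import torch.nn as nn

COND_CH = 2 + 4   # conditioning stack: SAR (2 channels) + earlier-date optical (4 bands)
OUT_CH = 4        # synthesized optical bands

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

# Toy generator: a few conv layers mapping the conditioning stack to optical bands
generator = nn.Sequential(
    conv_block(COND_CH, 64),
    conv_block(64, 64),
    nn.Conv2d(64, OUT_CH, kernel_size=3, padding=1),
    nn.Tanh(),
)

# PatchGAN-style discriminator scoring (conditioning, image) pairs
discriminator = nn.Sequential(
    conv_block(COND_CH + OUT_CH, 64),
    conv_block(64, 64),
    nn.Conv2d(64, 1, kernel_size=3, padding=1),
)

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

cond = torch.randn(1, COND_CH, 64, 64)   # stand-in conditioning tensor
target = torch.randn(1, OUT_CH, 64, 64)  # stand-in reference optical patch

fake = generator(cond)
pred_fake = discriminator(torch.cat([cond, fake], dim=1))
# Generator objective: fool the discriminator + stay spectrally close to the reference
g_loss = bce(pred_fake, torch.ones_like(pred_fake)) + 100.0 * l1(fake, target)
print(f"example generator loss: {g_loss.item():.3f}")
```

Adding more conditioning sources in this sketch simply means stacking more channels into COND_CH, which mirrors the working hypothesis that additional conditioning data should improve the synthesized images.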
