About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

The simulation of SAR imagery using discretised scattering models

Barratt, Nicholas Roy, January 1995
No description available.
2

Focusing ISAR images using fast adaptive time-frequency and 3D motion detection on simulated and experimental radar data

Brinkman, Wade H., January 2005 (PDF)
Thesis (M.S. in Electrical Engineering), Naval Postgraduate School, June 2005. Thesis Advisor(s): Michael A. Morgan, Thayananthan Thayaparan. Includes bibliographical references (p. 119-120). Also available online.
3

The analysis of UWB radar system for microwave imaging application

Li, Lei, January 2015
Many research groups have investigated UWB imaging radar systems for a variety of applications over the last decade. Given the demanding requirements of security screening, it is desirable to devise a convenient and reliable imaging system for concealed weapon detection. This thesis therefore presents research into a low-cost, compact UWB imaging radar system for security purposes. The work consists of two major parts: building the UWB imaging system and testing the imaging algorithms. First, a time-domain UWB imaging radar system is developed based on a modulating scheme, achieving a receiver sensitivity of -78 dBm and a receiver dynamic range of 69 dB. A rotary UWB linear antenna array, comprising one central transmitting antenna and four side-by-side receiving antennas, is rotated to form a 2D aperture and thereby improve the cross-range resolution of the target; the rotation is controlled automatically through computerised modules in LabVIEW. Two imaging algorithms were then tested extensively on the developed system across a number of scenarios. In simulation, the "Delay and Sum" (DAS) method proves effective at mapping out metallic targets in free space, but is prone to errors in more complicated environments. The "Time Reversal" (TR) method, by contrast, produces better images in complex scenarios, where traditionally unfavourable multipath interference becomes a valuable asset. These observations were verified experimentally in different testing environments, including penetration through wooden boards, clutter and a stuffed sports bag. A single target of 8×8×1 cm³ is detectable at a distance of 30 cm inside a stuffed bag, while DAS achieves an estimated 7 cm cross-range and 15 cm down-range resolution for two targets of 8×8×1 cm³ and 10×10×1 cm³, consistent with the theoretical prediction. TR distinguishes the same pair with a superior 4 cm cross-range resolution.
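To make the delay-and-sum idea above concrete, the following sketch forms a DAS image from time-domain echoes recorded by one transmitting antenna and several receiving antennas, as in the array layout described in the abstract. It is an illustrative simplification, not the thesis's implementation: nearest-sample lookup stands in for interpolation, and the antenna positions, time axis and pixel grid are assumed inputs.

```python
import numpy as np

C = 3e8  # assumed propagation speed in free space (m/s)

def delay_and_sum(signals, t, tx_pos, rx_pos, grid_pts):
    """Form a DAS image from time-domain UWB echoes.

    signals  : (n_rx, n_samples) received waveforms, one row per receive antenna
    t        : (n_samples,) common time axis of the recordings, in seconds
    tx_pos   : (3,) position of the single transmitting antenna, in metres
    rx_pos   : (n_rx, 3) positions of the receiving antennas, in metres
    grid_pts : (n_pix, 3) positions of the image pixels, in metres
    """
    n_rx = rx_pos.shape[0]
    image = np.zeros(len(grid_pts))
    for i, p in enumerate(grid_pts):
        # Round-trip delay: transmitter -> pixel p -> each receiver.
        tau = (np.linalg.norm(p - tx_pos) +
               np.linalg.norm(rx_pos - p, axis=1)) / C
        # Nearest-sample lookup (a real system would interpolate the waveform).
        idx = np.searchsorted(t, tau)
        valid = idx < signals.shape[1]
        # Coherent sum across channels: echoes from a true scatterer align.
        image[i] = signals[np.arange(n_rx)[valid], idx[valid]].sum()
    return image
```

A pixel where the delayed echoes add up coherently (a metallic reflector seen directly) accumulates a large value; once multipath or clutter breaks this single-bounce delay model the focus degrades, which is the weakness of DAS that the time-reversal method is reported to overcome.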
4

Application and Performance Evaluation of Deep Learning Methods for Classification of SAR (Synthetic Aperture Radar) Images for Monitoring Marine Areas in Detecting Features of Interest to the Oil and Gas Area

Ramirez Ruiz, William Alberto, 15 September 2021
The study of natural events and of those generated by human activity at sea has been a high priority for the oil and gas industry, because such events can endanger the marine environment or the production area. In this context, the objective of this work is to evaluate deep learning approaches for the classification of events at sea using synthetic aperture radar images in the oil and gas area. Deep learning methods have shown excellent performance through the use of convolutional layers, in which features are extracted automatically according to a defined kernel and stride. The following architectures are evaluated in this work: Inception V3, Xception, Inception ResNet V2, MobileNet, VGG16 and Deep Attention Sampling. The assessment applies a sea-event classification methodology to two radar image datasets: the first contains 10 events commonly present in the Arctic Ocean, and the second describes an oil spill near the Louisiana coast. In the experiments, the best results were obtained with the Deep Attention Sampling architectures, which reached F1-score and Recall values of up to 0.82 and 0.87, respectively, for the class of interest in the oil spill dataset. For the dataset of natural events at sea, high performance was observed for architectures built on Inception modules, with the highest F1-score and Recall achieved by the Xception architecture. In addition, using attention yielded improvements of up to 10 per cent in F1-score and 13 per cent in Recall over its base architecture (VGG16), and of 4 per cent over the other Inception-based architectures, on the sea-event dataset, demonstrating the advantages of attention sampling.
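As a concrete illustration of the evaluation protocol, the sketch below fine-tunes an ImageNet-pretrained Xception on SAR image chips and reports the per-class Recall and F1-score used to compare the architectures. The data, chip size, class count and training schedule are placeholders chosen for the sketch, not values taken from the thesis.

```python
# Illustrative only: placeholder data and training schedule, not the thesis's setup.
import numpy as np
import tensorflow as tf
from sklearn.metrics import classification_report

NUM_CLASSES = 10   # e.g. the 10 Arctic sea-event classes mentioned above (assumed)
CHIP = 299         # Xception's default input resolution

# Placeholder single-channel SAR chips, replicated to 3 channels so the
# ImageNet-pretrained weights can be reused. Real chips would also go through
# tf.keras.applications.xception.preprocess_input.
x = np.repeat(np.random.rand(64, CHIP, CHIP, 1).astype("float32"), 3, axis=-1)
y = np.random.randint(0, NUM_CLASSES, size=64)

base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(CHIP, CHIP, 3))
base.trainable = False     # first stage: train only the classification head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=1, batch_size=8, verbose=0)

# Per-class Recall and F1-score: the metrics used to compare architectures above.
y_pred = model.predict(x, verbose=0).argmax(axis=1)
print(classification_report(y, y_pred, zero_division=0))
```

Freezing the backbone and training only the head, as done here, is one common transfer-learning choice; the same loop applies unchanged to the other backbones listed above by swapping the tf.keras.applications constructor.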
5

Comparative Analysis of ISAR and Tomographic Radar Imaging at W-Band Frequencies

Hopkins, Nicholas Christian, 24 May 2017
No description available.
6

Physics-Based Near-Field Microwave Imaging Algorithms for Dense Layered Media

Ren, Kai, January 2017
No description available.
7

Scene Reconstruction From 4D Radar Data with GAN and Diffusion: A Hybrid Method Combining GAN and Diffusion for Generating Video Frames from 4D Radar Data

Djadkin, Alexandr, January 2023
4D imaging radar is increasingly becoming a critical component in various industries thanks to advances in beamforming technology and hardware. However, it does not replace visual data in the form of 2D images captured by an RGB camera. Instead, 4D radar point clouds are a complementary data source, capturing spatial information and velocity in a Doppler dimension that a camera's view alone cannot easily provide. Some discriminative features of the scene captured by the two sensors are hypothesized to have a shared representation. A more interpretable visualization of the radar output can therefore be obtained by learning a mapping from the empirical distribution of the radar data to the distribution of images captured by the camera. To this end, the application of deep generative models to generate images conditioned on 4D radar data is explored. Two approaches that have become state-of-the-art in recent years are tested: generative adversarial networks and diffusion models. They are compared qualitatively through visual inspection and by two quantitative metrics: mean squared error and object detection count. It is found that the generative adversarial network's generative process is easier to control through conditioning than a diffusion process, whereas the diffusion model produces samples of higher quality and is more stable to train. Combining the two yields a hybrid sampling method that achieves the best results while simultaneously speeding up the diffusion process. Future work could address the identified limitations of the individual models and curate the data to compare how these models scale with larger and more varied datasets.
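One natural reading of the hybrid sampling described above, and a common way to combine the two model families, is to let the conditional GAN produce a fast initial frame and then refine it with only the last few reverse-diffusion steps. The sketch below illustrates that idea with toy stand-in networks; the architectures, noise schedule and starting step t_start are assumptions made for the sketch, not the thesis's actual models.

```python
# Illustrative hybrid sampler: a conditional GAN proposes an image from radar
# features, then a truncated reverse-diffusion pass refines it. All networks,
# shapes and schedule values are toy placeholders.
import torch
import torch.nn as nn

IMG_SHAPE = (3, 64, 64)
T = 1000                               # full diffusion length
betas = torch.linspace(1e-4, 0.02, T)  # standard DDPM linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

class ToyGenerator(nn.Module):
    """Stand-in for the radar-conditioned GAN generator."""
    def __init__(self, cond_dim=128):
        super().__init__()
        self.net = nn.Linear(cond_dim, 3 * 64 * 64)
    def forward(self, cond):
        return self.net(cond).view(-1, *IMG_SHAPE).tanh()

class ToyDenoiser(nn.Module):
    """Stand-in for the radar-conditioned noise-prediction network."""
    def __init__(self, cond_dim=128):
        super().__init__()
        self.cond = nn.Linear(cond_dim, 3 * 64 * 64)
        self.mix = nn.Conv2d(6, 3, kernel_size=3, padding=1)
    def forward(self, x_t, t, cond):
        c = self.cond(cond).view(-1, *IMG_SHAPE)
        return self.mix(torch.cat([x_t, c], dim=1))

@torch.no_grad()
def hybrid_sample(generator, eps_model, cond, t_start=100):
    """GAN proposal -> forward-noise it to step t_start -> truncated reverse diffusion."""
    x0_hat = generator(cond)                             # fast GAN proposal
    ab = alpha_bars[t_start - 1]
    x = ab.sqrt() * x0_hat + (1 - ab).sqrt() * torch.randn_like(x0_hat)
    for t in reversed(range(t_start)):                   # only t_start reverse steps
        eps = eps_model(x, torch.full((x.size(0),), t), cond)
        x = (x - betas[t] / (1 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:                                        # DDPM-style stochastic step
            x = x + betas[t].sqrt() * torch.randn_like(x)
    return x

cond = torch.randn(2, 128)   # placeholder 4D-radar feature vectors
frames = hybrid_sample(ToyGenerator(), ToyDenoiser(), cond)
print(frames.shape)          # torch.Size([2, 3, 64, 64])
```

In this sketch the speed-up comes from starting the reverse pass at a small t_start instead of at T: only t_start denoising steps are executed, with the GAN proposal supplying the coarse structure that the truncated diffusion then sharpens.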
