31

Motion Compensation of Interferometric Synthetic Aperture Radar

Duncan, David P. 07 July 2004 (has links) (PDF)
Deviations from a nominal, straight-line flight path of a synthetic aperture radar (SAR) lead to inaccurate and defocused radar images. This thesis is an investigation into the improvement of the motion compensation algorithm created for the BYU interferometric synthetic aperture radar, YINSAR. The existing BYU SAR processing algorithm produces improved radar imagery but does not fully account for variations in attitude (roll, pitch, yaw) and does not function well with large position deviations. Results in this thesis demonstrate that a higher-order motion compensation algorithm is not as effective as a segmented reference track coupled with the current lower-order motion compensation algorithm. Attitude variations cause a Doppler shift and are corrected by limiting the processed azimuth bandwidth or by reversing the frequency shift with a range-dependent filter. Another important area considered is the effect of motion compensation on interferometry. When performing interferometry with YINSAR, motion compensating both channels to a single track has two effects. First, the applied MOCO phase corrections remove the "flat-earth" differential phase from the interferogram. Second, range resampling coregisters the two images. All of these changes have helped to improve YINSAR imagery.
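The first-order phase correction at the heart of this kind of motion compensation can be sketched as follows. This is an illustrative reconstruction, not code from the YINSAR processor: a slant-range deviation of dr meters adds a two-way phase error of 4*pi*dr/lambda to each pulse, which is removed by counter-rotating the pulse's phase. All names and values here are assumptions.

```python
import numpy as np

def moco_phase_correction(signal, range_dev, wavelength):
    """Remove the two-way phase error caused by deviation from the
    nominal track (first-order motion compensation sketch)."""
    # A range deviation dr changes the round-trip path by 2*dr,
    # producing a phase error of 4*pi*dr/lambda per pulse.
    phase_error = 4.0 * np.pi * range_dev / wavelength
    return signal * np.exp(-1j * phase_error)

# Toy example: unit-amplitude returns corrupted by a 1 cm track deviation.
wavelength = 0.03                   # 3 cm carrier, an assumed value
dev = np.full(8, 0.01)              # 1 cm deviation on every pulse
raw = np.exp(1j * 4.0 * np.pi * dev / wavelength)  # pure motion phase error
corrected = moco_phase_correction(raw, dev, wavelength)
# After correction the motion-induced phase is gone: corrected ~= 1.0
```

The same multiply-by-conjugate-phase structure applies per range bin when the deviation varies along the track, which is where the segmented reference track discussed above comes in.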
32

Vector Wavelet Transforms for the Coding of Static and Time-Varying Vector Fields

Hua, Li 02 August 2003 (has links)
Compression of vector-valued datasets is increasingly needed to address the significant storage and transmission burdens associated with research activities in large-scale computational fluid dynamics and environmental science. However, vector-valued compression schemes have traditionally received little attention within the data-compression community. Consequently, this dissertation conducts a systematic study of effective algorithms for the coding of vector-valued datasets and builds practical embedded compression systems for both static and time-varying vector fields. In generalizing techniques from the relatively mature field of image and video coding to vector data, three critical issues must be addressed: the design of a vector wavelet transform (VWT) that is amenable to vector-valued compression applications, the implementation of vector-valued intraframe coding that enables embedded coding, and the investigation of interframe-compression techniques that are appropriate for the complex temporal evolutions of vector features. In this dissertation, we initially invoke multiwavelets to construct VWTs. However, a balancing problem arises when existing multiwavelets are applied directly to vector data. We analyze this performance failure extensively and develop the omnidirectional balancing (OB) design criterion to rectify it. Employing the OB principle, we derive a family of biorthogonal multiwavelets possessing the desired balancing and symmetry properties and yielding performance far superior to that of VWTs implemented via other multiwavelets. In the second part of the dissertation, quantization schemes for vector-valued data are studied, and a complete embedded coding system for static vector fields is designed by combining a VWT with suitable vector-valued successive-approximation quantization.
Finally, we extend several interframe-compression techniques from video-coding applications to vector sequences for the compression of time-varying vector fields. Since the complexity of the temporal evolution of vector features limits the efficiency of the simple motion models that have been successful for natural video sources, we develop a novel approach to motion compensation that applies temporal decorrelation to only low-resolution information. This reduced-resolution motion-compensation technique yields significant improvement in rate-distortion performance.
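To make the data layout concrete, the following sketch shows the naive baseline that the dissertation improves upon: applying a scalar wavelet (here a one-level Haar transform) independently to each component of a vector field. This component-wise approach is exactly what a true VWT built on balanced multiwavelets is meant to outperform; the code is illustrative only.

```python
import numpy as np

def haar_1d(x):
    """One-level orthonormal 1-D Haar transform: split into
    low-pass (approximation) and high-pass (detail) coefficients."""
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    return approx, detail

def componentwise_transform(field):
    """Transform each vector component separately -- the naive
    scalar-wavelet baseline, NOT the dissertation's balanced VWT."""
    return [haar_1d(field[..., c]) for c in range(field.shape[-1])]

# Eight 2-D velocity samples along a line: shape (8, 2) -> (u, v) per sample.
field = np.ones((8, 2))
coeffs = componentwise_transform(field)
# A constant field has zero detail energy in every component.
```

A balanced vector transform additionally guarantees that smooth *vector* signals (not just smooth components) produce small detail coefficients, which is the property the omnidirectional balancing criterion enforces.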
33

Doppler Shift Analysis for a Holographic Aperture Ladar System

Bobb, Ross Lee 11 May 2012 (has links)
No description available.
34

Respiratory motion compensation in thoracic PET/CT images

Ouksili, Zehor 26 May 2010 (has links)
This thesis deals with respiratory motion in PET/CT imaging. PET is a long-exposure modality that is strongly affected by involuntary patient motion. Breathing produces artefacts with serious consequences for diagnosis, particularly of thoracic and pulmonary cancer: tumours appear larger than they really are, and their activity appears weaker. This thesis contributes to solving these problems. In addition to proposing the architecture of a PET/CT acquisition system synchronized to respiration, it develops three signal- and image-processing methods that address different sub-problems: an original respiratory-signal segmentation and characterization method that discovers the patient's "normal" respiratory patterns, a 4D-CT reconstruction method that produces anatomical images of the body at any desired respiratory level, and an enhanced iterative algorithm for reconstructing 4D-PET images compensated for respiratory motion. All of these methods and algorithms have been validated and tested on simulated data, phantom data, and real patient data.
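The gating step behind such 4D reconstruction can be sketched as amplitude binning of the respiratory trace, so that acquired events are grouped by respiratory level. This is a simplified illustration under assumed names; the thesis's segmentation additionally rejects "abnormal" respiratory cycles, which is omitted here.

```python
import numpy as np

def gate_by_amplitude(signal, n_bins):
    """Assign each sample of a respiratory trace to an amplitude bin,
    so events can be reconstructed per respiratory level (sketch)."""
    lo, hi = signal.min(), signal.max()
    edges = np.linspace(lo, hi, n_bins + 1)
    # digitize against the interior edges yields bin indices 0..n_bins-1
    return np.clip(np.digitize(signal, edges[1:-1]), 0, n_bins - 1)

# Idealized breathing trace: one sinusoidal cycle, gated into 4 levels.
t = np.linspace(0.0, 2.0 * np.pi, 100)
trace = np.sin(t)
bins = gate_by_amplitude(trace, 4)
```

Each bin then drives a separate CT/PET reconstruction, giving one volume per respiratory level; the motion-compensated 4D-PET algorithm combines information across those bins instead of discarding it.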
35

ROBUST BACKGROUND SUBTRACTION FOR MOVING CAMERAS AND THEIR APPLICATIONS IN EGO-VISION SYSTEMS

Sajid, Hasan 01 January 2016 (has links)
Background subtraction is the algorithmic process that segments the region of interest, often known as the foreground, out of the background. Extensive literature and numerous algorithms exist in this domain, but most research has focused on videos captured by static cameras. The proliferation of portable platforms equipped with cameras has resulted in a large amount of video data being generated from moving cameras. This motivates the need for foundational algorithms for foreground/background segmentation in videos from moving cameras. In this dissertation, I propose three new types of background subtraction algorithms for moving cameras, based on appearance, motion, and a combination of the two. Comprehensive evaluation of the proposed approaches on publicly available test sequences shows the superiority of our system over state-of-the-art algorithms. The first method is an appearance-based global modeling of foreground and background. Features are extracted by sliding a fixed-size window over the entire image without any spatial constraint, to accommodate arbitrary camera movements. A supervised learning method is then used to build the foreground and background models. This method is suitable for limited-scene scenarios such as pan-tilt-zoom surveillance cameras. The second method relies on motion. It comprises an innovative background-motion approximation mechanism followed by spatial regulation through a mega-pixel denoising process. This method does not need to maintain any costly appearance models and is therefore appropriate for resource-constrained ego-vision systems. The proposed segmentation, combined with skin cues, is validated by a novel application that authenticates hand-gestured signatures captured by wearable cameras. The third method combines both motion and appearance. Foreground probabilities are jointly estimated from motion and appearance.
After the mega-pixel denoising process, the probability estimates and gradient image are combined by Graph-Cut to produce the segmentation mask. This method is universal as it can handle all types of moving cameras.
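The core idea of motion-based background subtraction for a moving camera can be sketched with a deliberately crude stand-in for the background-motion approximation above: model camera motion as a single global translation, warp the previous frame by it, and threshold the residual. The real dissertation's mechanism is far richer; names, the known shift, and the threshold value are all assumptions of this sketch.

```python
import numpy as np

def compensated_difference(prev, curr, shift, threshold=25.0):
    """Warp the previous frame by an estimated global camera translation,
    then flag pixels whose residual exceeds the threshold as foreground."""
    warped = np.roll(prev, shift, axis=(0, 1))  # global-translation warp
    residual = np.abs(curr.astype(float) - warped.astype(float))
    return residual > threshold

# Synthetic example: the "camera" pans 2 pixels right; nothing else moves.
prev = np.zeros((32, 32), dtype=np.uint8)
prev[10:14, 10:14] = 200                       # static background structure
curr = np.roll(prev, (0, 2), axis=(0, 1))      # camera pan only
mask = compensated_difference(prev, curr, (0, 2))
# With the camera motion compensated, no pixel is flagged as foreground.
```

Without the compensating warp, the static block would be flagged as moving, which is precisely the failure mode that distinguishes the moving-camera problem from classical static-camera background subtraction.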
36

Robotic Catheters for Beating Heart Surgery

Kesner, Samuel Benjamin 12 December 2012 (has links)
Compliant and flexible cardiac catheters provide direct access to the inside of the heart via the vascular system without requiring clinicians to stop the heart or open the chest. However, the fast motion of the intracardiac structures makes it difficult to modify and repair cardiac tissue in a controlled and safe manner. In addition, rigid robotic tools for beating-heart surgery require the chest to be opened and the heart exposed, making those procedures highly invasive. The novel robotic catheter system presented here enables minimally invasive repair of fast-moving structures inside the heart, such as the mitral valve annulus, without the invasiveness or risks of stopped-heart procedures. In this thesis, I investigate the development of 3D ultrasound-guided robotic catheters for beating-heart surgery. First, the force and stiffness values of tissue structures in the left atrium are measured to develop design requirements for the system. This research shows that a catheter will experience contractile forces of 0.5–1.0 N and a mean tissue-structure stiffness of approximately 0.1 N/mm while interacting with the mitral valve annulus. Next, this thesis presents the catheter system design, including force-sensing, tissue-resection, and ablation end effectors. In order to operate inside the beating heart, position and force control systems were developed to compensate for the catheter's performance limitations of friction and deadzone backlash, and were evaluated with ex vivo and in vivo experiments. Through the addition of friction- and deadzone-compensation terms, the system is able to achieve position tracking with less than 1 mm RMS error and force tracking with 0.08 N RMS error under ultrasound image guidance. Finally, this thesis examines how the robotic catheter system enhances beating-heart clinical procedures.
Specifically, this system improves resection quality while reducing the forces experienced by the tissue by almost 80% and improves ablation performance by reducing contact resistance variations by 97% while applying a constant force on the moving tissue. / Engineering and Applied Sciences
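The friction- and deadzone-compensation terms mentioned above typically take the form of a feed-forward correction added to the commanded motion: pre-load the actuator past the backlash band and add a velocity-signed offset to overcome Coulomb friction. The sketch below illustrates that structure only; the function, parameter names, and values are assumptions, not taken from the thesis.

```python
def compensate(command_velocity, deadzone_width, friction_level):
    """Feed-forward deadzone + Coulomb-friction compensation sketch.

    The deadzone term advances the command through half the backlash band;
    the friction term adds an offset with the sign of the motion.
    """
    if command_velocity > 0:
        return command_velocity + deadzone_width / 2 + friction_level
    if command_velocity < 0:
        return command_velocity - deadzone_width / 2 - friction_level
    return 0.0  # no command: apply no pre-load

# A positive command is boosted to cross the backlash band and break friction.
out = compensate(1.0, deadzone_width=0.4, friction_level=0.1)  # -> 1.3
```

In a real controller these terms sit inside the position/force loop, so imperfect compensation still gets corrected by feedback; the feed-forward terms mainly shrink the tracking error the loop has to fight.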
37

Implementação da compensação de movimento em vídeo entrelaçado no terminal de acesso do SBTVD

Silva, Jonas dos Santos January 2013 (has links)
A video sequence can be acquired in progressive or interlaced mode. In the H.264/AVC video coding standard, an interlaced picture can be encoded in frame mode (top and bottom fields interleaved) or field mode (top and bottom fields grouped separately). When the choice is made adaptively for each pair of macroblocks, the coding is called Macroblock-Adaptive Frame-Field (MBAFF). Innovations in the inter-frame prediction of H.264/AVC contributed significantly to the standard's performance, which achieves twice the compression ratio of its predecessor (ITU, 1994), at the cost of a large increase in the computational complexity of the CODEC. Within inter-frame prediction, the motion compensation (MC) module is responsible for reconstructing a block of pixels. The decoder presented in (BONATTO, 2012) integrates a hardware solution for the MC that supports most of the tool set of the H.264/AVC Main profile. Motion compensation can be divided into motion-vector prediction and sample processing; sample processing performs sample interpolation and weighting. The weighted-samples prediction module uses scale factors to weight the samples at the MC output, which is very useful when the video contains fades. This work first presents a study of the motion compensation process according to the H.264/AVC standard, covering all inter-frame prediction tools, including the handling of interlaced video and all of its possible coding modes. It then presents a hardware architecture for the weighted prediction of the MC. This architecture complies with the Main profile of H.264/AVC and can therefore decode frame, field, and MBAFF pictures. It is based on the motion compensator contained in the decoder presented in (BONATTO, 2012), which does not support weighted prediction or interlaced video. The proposed architecture is composed of two modules: Scale Factor Prediction (SFP) and Weighted Samples Prediction (WSP). The architecture was described in VHDL, and timing simulation showed that it can decode MBAFF pictures in real time at 60i, making it a very useful tool for the development of hardware coding and decoding systems. No hardware solution for H.264/AVC motion compensation with support for MBAFF coding was found in the current literature.
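The weighted prediction the WSP module implements follows the H.264/AVC uni-directional formula: scale the predicted sample by a weight w, shift down by logWD with rounding, add an offset, and clip to the sample range. A reference-style software sketch (variable names follow the standard; the hardware realization in the thesis is, of course, different):

```python
def weighted_sample(pred, w, log_wd, offset, bit_depth=8):
    """H.264/AVC explicit weighted prediction for one sample
    (uni-directional case): scale, round-shift, offset, clip."""
    if log_wd >= 1:
        val = ((pred * w + (1 << (log_wd - 1))) >> log_wd) + offset
    else:
        val = pred * w + offset
    # Clip to the legal sample range, e.g. [0, 255] for 8-bit video.
    return max(0, min((1 << bit_depth) - 1, val))

# A fade-out: w=32 with logWD=6 scales by 32/64 = 0.5, so 200 -> 100.
out = weighted_sample(200, w=32, log_wd=6, offset=0)
```

Since the per-sample arithmetic is just one multiply, one add-and-shift, and a clip, the hardware cost of weighted prediction lies mostly in the scale-factor derivation (the SFP module's job), not in the sample datapath.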
40

Deinterlace Filter

Kuřina, Tomáš January 2009 (has links)
This document elaborates on the subject of video interlacing and its removal. It describes the interlacing of video, its history, and the reasons that led to its use. It also explains why it is necessary to remove interlacing and surveys the basic methods used to do so. The proposed deinterlacing algorithm and its implementation are described, including the inpainting and block-matching components. Test results for both the quality and the speed of the proposed deinterlacing algorithm are included. The final chapter describes the implementation as a console application and a DLL library.
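As a point of reference for what the thesis improves on, the simplest intra-field deinterlacing baseline rebuilds a full frame from one field by linear interpolation between neighbouring field lines. This sketch is that baseline only, far simpler than the inpainting/block-matching algorithm described above; the function and names are illustrative.

```python
import numpy as np

def deinterlace_linear(field, top=True):
    """Rebuild a full frame from a single field: copy the field's lines
    into their native rows, then fill each missing row with the average
    of the rows above and below (edge rows copy their lone neighbour)."""
    h = 2 * field.shape[0]
    frame = np.zeros((h, field.shape[1]), dtype=float)
    rows = np.arange(field.shape[0]) * 2 + (0 if top else 1)
    frame[rows] = field
    for r in np.setdiff1d(np.arange(h), rows):
        above = frame[r - 1] if r - 1 >= 0 else frame[r + 1]
        below = frame[r + 1] if r + 1 < h else frame[r - 1]
        frame[r] = (above + below) / 2.0
    return frame

# A flat top field reconstructs a flat full-height frame.
field = np.full((4, 6), 50.0)
frame = deinterlace_linear(field, top=True)
```

Motion-adaptive methods like the one in the thesis fall back to this kind of spatial interpolation only where motion is detected, keeping full vertical resolution in static regions.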
