1 |
Enhanced computation time for fast block matching algorithm. Ahmed, Zaynab Anwer. January 2013 (has links)
Video compression is the process of reducing the amount of data required to represent digital video while preserving acceptable video quality. Recent studies on video compression have focused on multimedia transmission, videophones, teleconferencing, high definition television (HDTV), CD-ROM storage, etc. The idea of compression techniques is to remove the redundant information that exists in video sequences. Motion compensated predictive coding is the main coding tool for removing temporal redundancy from video sequences, and it typically accounts for 50-80% of the video encoding complexity. The technique has been adopted by all existing international video coding standards. It assumes that the current frame can be locally modelled as a translation of the reference frames. The practical and widely used method for carrying out motion compensated prediction is the block matching algorithm. In this method, video frames are divided into a set of non-overlapping macroblocks; each target macroblock of the current frame is compared with the search area in the reference frame in order to find the best matching macroblock. This yields displacement vectors that describe the movement of the macroblocks from one location to another in the reference frame. Checking all candidate locations is called full search, which gives the best result but suffers from a long computation time, which necessitates improvement. Several fast block matching algorithms have been developed to reduce the computational complexity. This thesis focuses on two classes: the first is the lossless block matching algorithm process, in which the computation time required to determine the matching macroblock of the full search is decreased while the resolution of the predicted frames remains the same as for the full search.
The second is the lossy block matching algorithm process, which reduces the computational complexity effectively, but the quality of the search result is not the same as for the full search.
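As an illustration of the full search described above, a minimal NumPy sketch (the 8x8 block size, the +/-4 search range, and the SAD cost are illustrative choices; the thesis considers other cost functions and search strategies):

```python
import numpy as np

def full_search(ref, cur, block=8, search=4):
    """Exhaustive block matching: for each non-overlapping macroblock of the
    current frame, scan every candidate position within +/-search pixels in
    the reference frame and keep the displacement with the minimal SAD."""
    h, w = cur.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = cur[by:by + block, bx:bx + block].astype(int)
            best, best_v = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cand = ref[y:y + block, x:x + block].astype(int)
                    sad = np.abs(target - cand).sum()
                    if best is None or sad < best:
                        best, best_v = sad, (dy, dx)
            vectors[by // block, bx // block] = best_v
    return vectors
```

Scanning all (2p+1)^2 candidates per macroblock is what makes full search expensive; the fast algorithms the thesis classifies all work by shrinking this candidate set.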
|
2 |
Otimização do algoritmo de block matching aplicado a estudos elastográficos / Optimization of the block matching algorithm applied to elastographic studies. Neves, Lucio Pereira. 03 August 2007 (has links)
This work presents an analysis of a new image formation method using ultrasound devices: elastography. The technique is based on the fact that when an elastic medium, such as tissue, is deformed under a constant, uniaxial stress, every point in the medium undergoes a longitudinal deformation whose main component lies along the deformation axis. If some tissue elements have an elastic modulus different from the others, the deformation in these elements will be relatively higher or lower; stiffer elements generally deform less. In this way, structures with different stiffness levels can be mapped and identified. The pre- and post-deformation RF maps were compared with the block matching technique, which compares regions, or kernels, of the pre-deformation map with regions of the same size in the post-deformation map by minimising a cost function. In this technique, the kernel size is one of the main parameters for improving the precision of the displacement measurements. The main goal of this work is to optimise the block matching algorithm to improve the precision of displacement estimation under both dynamic and static deformation, while keeping the computational cost low. To this end, phantoms with and without inclusions harder than the surrounding medium were used and submitted to static and dynamic deformations. It was possible to determine the behaviour of these phantoms under both kinds of deformation, as well as the kernel ranges and cost functions that gave the best results. Elastograms of the phantom with the inclusion were also generated.
These images allowed us to evaluate the influence of the different kernel sizes on the resolution of the elastograms and on the ability to differentiate the lesion from the surrounding tissue. Comparing the elastograms obtained under dynamic deformation with the best-performing kernels to the corresponding B-mode images, the inclusion appeared clear and well delimited.
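The kernel comparison described in this abstract can be sketched in one dimension: a kernel of the pre-deformation RF signal is slid over the post-deformation signal and the cost minimum gives the displacement. The SAD cost, kernel size, and search range below are illustrative assumptions; the thesis evaluates several cost functions on real RF maps.

```python
import numpy as np

def kernel_displacement(pre, post, start, kernel, search):
    """Estimate the axial displacement of one RF kernel by minimising a SAD
    cost between the pre-deformation kernel and candidate windows of the
    post-deformation signal (1-D sketch of elastographic block matching)."""
    ref = pre[start:start + kernel]
    costs = []
    for d in range(-search, search + 1):
        cand = post[start + d:start + d + kernel]  # shifted candidate window
        costs.append(np.abs(ref - cand).sum())
    return int(np.argmin(costs)) - search          # displacement in samples
```

Repeating this for every kernel position yields the displacement map from which the strain (and hence the elastogram) is derived.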
|
4 |
Motion compensation for 2D object-based video coding. Steliaros, Michael Konstantinos. January 1999 (has links)
No description available.
|
5 |
Fiabilité et précision en stéréoscopie : application à l'imagerie aérienne et satellitaire à haute résolution (Reliability and accuracy in stereoscopy: application to high-resolution aerial and satellite imagery). Sabater, Neus. 07 December 2009 (has links) (PDF)
This thesis is part of the MISS project (Mathématiques de l'Imagerie Stéréoscopique Spatiale), set up by CNES in collaboration with several university laboratories in 2007. The project has the ambitious goal of modelling a stereoscopic satellite taking two non-simultaneous but very closely spaced views of the Earth over urban areas. Its main aim is a fully automatic chain for high-resolution urban reconstruction from these two views. The project, however, runs into fundamental problems that this thesis sets out to solve. The first is the rejection of matches that could occur by chance, notably in shadowed or occluded areas, together with the rejection of ground motion (vehicles, pedestrians, etc.). The thesis proposes a false-match rejection method based on the a contrario methodology. The mathematical consistency of this rejection method is proved, and it is validated on exact simulated pairs, on ground truths provided by CNES, and on classical benchmark pairs (Middlebury). The remaining reliable matches cover between 40% and 90% of the pixels, depending on the pair tested. The second fundamental problem addressed is precision. The kind of stereoscopy considered requires a very small angle between the two views, which are visually almost identical. Obtaining a correct relief therefore requires an extremely precise registration, and a calibration of the noise level that permits such registration. The thesis develops a subpixel registration method, shown to be optimal by both mathematical and experimental arguments. These results extend and improve those obtained at CNES with the MARC method. In particular, it is shown on the Middlebury benchmark images that the theoretical precision permitted by the noise matches the precision obtained on the reliable matches.
Although these results are obtained for a specific acquisition setup (low-angle aerial or satellite stereoscopy), all of them carry over to stereoscopy in general, as shown in many experiments.
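A standard device for the subpixel precision discussed above is parabolic refinement of the matching-cost minimum: fit a parabola through the integer minimum and its two neighbours and take the parabola's vertex. This sketch is illustrative, not necessarily the thesis' exact estimator.

```python
import numpy as np

def subpixel_minimum(cost):
    """Refine an integer-pixel cost minimum to subpixel precision by fitting
    a parabola through the minimum and its two neighbours (a common
    refinement step in stereo matching)."""
    i = int(np.argmin(cost))
    if i == 0 or i == len(cost) - 1:
        return float(i)  # minimum on the border: no neighbours to fit
    c0, c1, c2 = cost[i - 1], cost[i], cost[i + 1]
    # vertex of the parabola through (i-1, c0), (i, c1), (i+1, c2)
    return i + 0.5 * (c0 - c2) / (c0 - 2.0 * c1 + c2)
```

For a cost curve that is locally quadratic around its minimum, the refinement recovers the true minimum exactly; on real data its accuracy is bounded by the noise level, which is precisely what the thesis calibrates.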
|
6 |
Adaptive Search Range for Full-Search Motion Estimation. Chu, Kung-Hsien. 17 August 2004 (has links)
With the progress of Internet technology, multimedia products and services such as Multimedia Message Service (MMS), Multimedia on Demand (MoD), video conferencing, and digital TV are growing very fast. All of these services need good video and audio compression standards; transmitting raw multimedia data over networks is impractical. Motion estimation demands the most computing complexity in video compression. In our research, we focus on how to reduce the number of candidate blocks while keeping video quality.
We study several fast motion estimation algorithms and architectures, and design a fast motion estimation architecture that supports a resolution of 1280x720 at a 30 fps frame rate (the HDTV specification), based on a hierarchical motion estimation algorithm. Within the limits of the hardware resources and the compressed video quality, the architecture improves inter-coding performance. We observe that two adjacent macroblocks have similar motion vectors, so we arrange a 16x8 processing element array to handle two adjacent macroblocks together. This design saves many clock cycles in the hierarchical motion estimation architecture while keeping high video quality.
Furthermore, we propose a search range prediction method (called ASR) that reflects the motion behaviour of the video sequence in the search range on a macroblock-by-macroblock basis. ASR removes unnecessary candidate-block computations while keeping video quality very close to that of the full search block matching algorithm, as shown by an implementation in the reference software of the new video compression standard, the Joint Model of H.264/AVC.
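The idea behind ASR, predicting a per-macroblock search range from the motion already observed in neighbouring macroblocks, can be sketched as follows. The clamping bounds and the doubling rule are assumptions made for illustration, not the thesis' exact formula.

```python
def adaptive_search_range(neighbour_mvs, sr_min=4, sr_max=16):
    """Pick a per-macroblock search range from the motion vectors of the
    already-coded neighbouring macroblocks: nearly static content gets a
    small window, fast motion a large one (illustrative ASR-style rule)."""
    if not neighbour_mvs:
        return sr_max  # no context yet: stay safe with the full range
    # largest motion component seen in the neighbourhood
    peak = max(max(abs(dx), abs(dy)) for dx, dy in neighbour_mvs)
    # allow twice the observed peak, clamped to [sr_min, sr_max]
    return max(sr_min, min(sr_max, 2 * peak))
```

Because the cost of full search grows quadratically with the search range, even a modest shrink of the window for low-motion macroblocks removes a large share of candidate-block evaluations.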
|
7 |
Fast Adaptive Block Based Motion Estimation for Video Compression. Luo, Yi. 11 August 2009 (has links)
No description available.
|
8 |
Diffusion MRI processing for multi-compartment characterization of brain pathology / Caractérisation de pathologies cérébrales par l'analyse de modèles multi-compartiment en IRM de diffusion. Hédouin, Renaud. 12 June 2017 (has links)
Diffusion weighted imaging (DWI) is a specific type of MRI acquisition based on the direction of diffusion of water molecules in the brain. Through several acquisitions, it allows the modelling of brain microstructure, such as white matter, whose features are significantly smaller than the voxel resolution. Acquiring a large number of images in a clinical setting requires very fast acquisition techniques such as single-shot imaging, but these acquisitions suffer from large local distortions. We propose a block matching registration method based on the acquisition of images with opposite phase-encoding directions (PED). This technique, specifically designed for echo-planar images (EPI) though applicable more generally, robustly corrects the images and provides a deformation field. The field can be applied to an entire DWI series from a single reversed b0 image, allowing distortion correction at a minimal cost in acquisition time. The registration algorithm has been validated both on a phantom data set and on in-vivo data, and is available in our open-source medical image processing toolbox Anima. From these diffusion images, we construct multi-compartment models (MCMs) that can represent complex brain microstructure. Registration, averaging, and atlas construction on these MCMs are needed in order to conduct studies and produce statistical analyses. We propose a general method for interpolating MCMs, cast as a simplification problem based on spectral clustering. The technique, adaptable to any MCM, has been validated on both synthetic and real data. Then, from a registered data set, we perform voxel-level analyses using statistics on the MCM parameters.
Tractography specifically designed for MCMs is also performed to carry out analyses along white matter tracts, based on individual compartments. All these tools are designed and applied to real data, contributing to the search for biomarkers of brain pathologies.
|
9 |
Soustava kamer jako stereoskopický senzor pro měření vzdálenosti v reálném čase / Real-time distance measurement with a stereoscopic sensor. Janeček, Martin. January 2014 (has links)
The project presents the calibration of a stereoscopic sensor and describes basic stereo-correspondence methods using the OpenCV library. It includes disparity map computations on the CPU and on the graphics card (using the OpenCL library).
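In the spirit of the stereo-correspondence methods mentioned above, here is a minimal scanline block matcher producing a disparity map: a plain NumPy sketch of what OpenCV's `cv2.StereoBM` computes, without its optimisations. Block size and disparity range are illustrative.

```python
import numpy as np

def disparity_map(left, right, block=7, max_disp=16):
    """Scanline block matching on a rectified pair: for each left-image
    pixel, find the horizontal shift (disparity) minimising the SAD against
    the right image. Borders and the first max_disp columns are left at 0."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=int)
    L = left.astype(int)
    R = right.astype(int)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = L[y - half:y + half + 1, x - half:x + half + 1]
            best, best_d = None, 0
            for d in range(max_disp + 1):
                cand = R[y - half:y + half + 1, x - d - half:x - d + half + 1]
                sad = np.abs(patch - cand).sum()
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

Depth then follows from disparity via triangulation using the calibration parameters; `cv2.StereoBM` adds prefiltering, uniqueness checks, and speckle filtering on top of this basic search.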
|