61 |
Real-time photographic local tone reproduction using summed-area tables / Reprodução fotográfica local de tons em tempo real usando tabelas de áreas acumuladas. Slomp, Marcos Paulo Berteli, January 2008 (has links)
A síntese de imagens com alta faixa dinâmica é uma prática cada vez mais comum em computação gráfica. O desafio consiste em relacionar o grande conjunto de intensidades da imagem sintetizada com um sub-conjunto muito inferior suportado por um dispositivo de exibição, evitando a perda de detalhes contrastivos. Os operadores locais de reprodução de tons (local tone-mapping operators) são capazes de realizar tal compressão, adaptando o nível de luminância de cada pixel com respeito à sua vizinhança. Embora produzam resultados significativamente superiores aos operadores globais, o custo computacional é consideravelmente maior, o que vem impedindo sua utilização em aplicações em tempo real. Este trabalho apresenta uma técnica para aproximar o operador fotográfico local de reprodução de tons. Todas as etapas da técnica são implementadas em GPU, adequando-se ao cenário de aplicações em tempo real, sendo significativamente mais rápida que implementações existentes e produzindo resultados semelhantes. A abordagem é baseada no uso de tabelas de áreas acumuladas (summed-area tables) para acelerar a convolução das vizinhanças, usando filtros da média (box-filter), proporcionando uma solução elegante para aplicações que utilizam imagens em alta faixa dinâmica e que necessitam de performance sem comprometer a qualidade da imagem sintetizada. Uma investigação sobre algoritmos para a geração de somatórios pré-fixados (prefix sum) e uma possível melhoria para um deles também são apresentadas. / High dynamic range (HDR) rendering is becoming an increasingly popular technique in computer graphics. Its challenge consists in mapping the resulting images’ large range of intensities to the much narrower ones of the display devices in a way that preserves contrastive details. Local tone-mapping operators effectively perform the required compression by adapting the luminance level of each pixel with respect to its neighborhood. While they generate significantly better results when compared to global operators, their computational costs are considerably higher, thus preventing their use in real-time applications. This work presents a real-time technique for approximating the photographic local tone reproduction that runs entirely on the GPU and is significantly faster than existing implementations that produce similar results. Our approach is based on the use of summed-area tables for accelerating the convolution of the local neighborhoods with a box filter and provides an attractive solution for HDR rendering applications that require high performance without compromising image quality. A survey of prefix sum algorithms and possible improvements are also presented.
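To illustrate the core idea described above, the following sketch builds a summed-area table with NumPy (a 2-D prefix sum) and uses it to box-filter the neighborhoods needed by a photographic tone operator. It is a minimal single-scale CPU approximation, not the GPU implementation from the thesis; the key value `a = 0.18` and the window radius are assumed defaults, not values taken from the text.

```python
import numpy as np

def summed_area_table(img):
    """Inclusive 2-D prefix sum (summed-area table) of a single-channel image."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def box_mean(sat, r):
    """Mean over a (2r+1)x(2r+1) window using four SAT lookups per pixel."""
    h, w = sat.shape
    p = np.pad(sat, ((1, 0), (1, 0)))            # leading zero row/column for uniform indexing
    y, x = np.mgrid[0:h, 0:w]
    y0, y1 = np.clip(y - r, 0, h), np.clip(y + r + 1, 0, h)
    x0, x1 = np.clip(x - r, 0, w), np.clip(x + r + 1, 0, w)
    area = (y1 - y0) * (x1 - x0)
    total = p[y1, x1] - p[y0, x1] - p[y1, x0] + p[y0, x0]
    return total / area

def photographic_local(lum, a=0.18, radius=8, eps=1e-6):
    """Reinhard-style local operator: Ld = L / (1 + V), with V a local box average."""
    key = np.exp(np.mean(np.log(lum + eps)))      # log-average luminance (scene key)
    scaled = a * lum / key
    local = box_mean(summed_area_table(scaled), radius)
    return scaled / (1.0 + local)

# usage: hdr is a float32 luminance map; ldr = np.clip(photographic_local(hdr), 0, 1)
```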
|
62 |
Nouvelle méthode de traitement d'images multispectrales fondée sur un modèle d'instrument pour le haut contraste : application à la détection d'exoplanètes / New method of multispectral image post-processing based on an instrument model for high contrast imaging systems : Application to exoplanet detection. Ygouf, Marie, 06 December 2012 (has links)
Ce travail de thèse porte sur l'imagerie multispectrale à haut contraste pour la détection et la caractérisation directe d'exoplanètes. Dans ce contexte, le développement de méthodes innovantes de traitement d'images est indispensable afin d'éliminer les tavelures quasi-statiques dans l'image finale qui restent, à ce jour, la principale limitation pour le haut contraste. Bien que les aberrations résiduelles instrumentales soient à l'origine de ces tavelures, aucune méthode de réduction de données n'utilise de modèle de formation d'image coronographique qui prend ces aberrations comme paramètres. L'approche adoptée dans cette thèse comprend le développement, dans un cadre bayésien, d'une méthode d'inversion fondée sur un modèle analytique d'imagerie coronographique. Cette méthode estime conjointement les aberrations instrumentales et l'objet d'intérêt, à savoir les exoplanètes, afin de séparer correctement ces deux contributions. L'étape d'estimation des aberrations à partir des images plan focal (ou phase retrieval en anglais) est la plus difficile car le modèle de réponse instrumentale sur l'axe dont elle dépend est fortement non-linéaire. Le développement et l'étude d'un modèle approché d'imagerie coronographique plus simple se sont donc révélés très utiles pour la compréhension du problème et m'ont inspiré des stratégies de minimisation. J'ai finalement pu tester ma méthode et estimer ses performances en termes de robustesse et de détection d'exoplanètes. Pour cela, je l'ai appliquée sur des images simulées et j'ai notamment étudié l'effet des différents paramètres du modèle d'imagerie utilisé. J'ai ainsi démontré que cette nouvelle méthode, associée à un schéma d'optimisation fondé sur une bonne connaissance du problème, peut fonctionner de manière relativement robuste, en dépit des difficultés de l'étape de phase retrieval. En particulier, elle permet de détecter des exoplanètes dans le cas d'images simulées avec un niveau de détection conforme à l'objectif de l'instrument SPHERE. Ce travail débouche sur de nombreuses perspectives dont celle de démontrer l'utilité de cette méthode sur des images simulées avec des coronographes plus réalistes et sur des images réelles de l'instrument SPHERE. De plus, l'extension de la méthode pour la caractérisation des exoplanètes est relativement aisée, tout comme son extension à l'étude d'objets plus étendus tels que les disques circumstellaires. Enfin, les résultats de ces études apporteront des enseignements importants pour le développement des futurs instruments. En particulier, les Extremely Large Telescopes soulèvent d'ores et déjà des défis techniques pour la nouvelle génération d'imageurs de planètes. Ces challenges pourront très probablement être relevés en partie grâce à des méthodes de traitement d'image fondées sur un modèle direct d'imagerie. / This research focuses on high contrast multispectral imaging with the aim of directly detecting and characterizing exoplanets. In this framework, the development of innovative image post-processing methods is essential in order to eliminate the quasi-static speckles in the final image, which remain the main limitation for high contrast. Even though the residual instrumental aberrations are responsible for these speckles, no post-processing method currently uses a model of coronagraphic imaging that takes these aberrations as parameters.
The research approach adopted includes the development, in a Bayesian framework, of an inversion method based on an analytical coronagraphic imaging model that jointly estimates the instrumental aberrations and the object of interest, i.e. the exoplanets, in order to separate these two contributions properly. The estimation of the instrumental aberrations directly from focal-plane images, also named phase retrieval, is the most difficult step because the model of the on-axis instrumental response, on which these aberrations depend, is highly non-linear. The development and study of an approximate model of coronagraphic imaging thus proved very useful to understand the problem at hand and suggested minimization strategies. I finally tested my method and estimated its performance in terms of robustness and exoplanet detection. For this, I applied it to simulated images and studied the effect of the different parameters of the imaging model used. The findings from this research provide evidence that this method, in association with an optimization scheme based on a good knowledge of the problem at hand, can operate in a relatively robust way, despite the difficulties of the phase retrieval step. In particular, it allows the detection of exoplanets in simulated images with a detection level compliant with the goal of the SPHERE instrument. The next steps will be to verify the efficiency of this new method on simulated images using more realistic coronagraphs and on real images from the SPHERE instrument. In addition, the extension of the method to the characterization of exoplanets is relatively straightforward, as is its extension to the study of more extended objects such as circumstellar disks. Finally, the results of this work will also bring crucial insights for the development of future instruments. In particular, the Extremely Large Telescopes have already raised technical challenges for the next generation of planet finders, which may partly be addressed by image processing methods based on an imaging model.
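As a rough illustration of the kind of model-based inversion described above, the sketch below fits a small set of pupil-phase coefficients to a simulated focal-plane image by least squares. It is only a toy (no coronagraph, a three-term aberration basis, SciPy's generic optimizer) and is not the estimator developed in the thesis; every name and parameter here is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import minimize

N = 64
yy, xx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
pupil = (xx**2 + yy**2 <= (N // 4)**2).astype(float)        # circular aperture
# a tiny "aberration basis": tip, tilt and defocus over the pupil (normalized)
basis = np.stack([xx * pupil, yy * pupil, (xx**2 + yy**2) * pupil]).astype(float)
basis /= np.abs(basis).max(axis=(1, 2), keepdims=True)

def focal_image(coeffs):
    """On-axis response |FT{pupil * exp(i*phase)}|^2 for given aberration coefficients."""
    phase = np.tensordot(coeffs, basis, axes=1)
    field = pupil * np.exp(1j * phase)
    img = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    return img / img.sum()

true_coeffs = np.array([0.6, -0.4, 0.8])
data = focal_image(true_coeffs)                              # simulated measurement (noiseless)

def criterion(coeffs):
    return np.sum((focal_image(coeffs) - data)**2)           # least-squares data-fit term

est = minimize(criterion, x0=np.zeros(3), method="Nelder-Mead").x
print("estimated aberration coefficients:", est)
```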
|
63 |
Approche CRONE multivariable : développement et application à la régulation de bancs d'essais moteur haute dynamique / CRONE multivariable approach : development and application for high dynamic test-bench control. Lamara, Abderrahim, 27 May 2015 (has links)
Le travail de cette thèse s'inscrit dans le cadre du développement de la méthodologie CRONE (Commande Robuste d'Ordre Non Entier) multivariable. Il porte plus précisément sur la simplification de sa mise en oeuvre pour la synthèse de régulateurs multivariables robustes, avec une application au contrôle de bancs d'essais moteur haute dynamique, notamment ceux développés par l'entreprise D2T. Le premier chapitre présente les différents types de bancs d'essais, leur fonctionnement et leur problématique. Le deuxième chapitre présente tout d'abord une modélisation physique simple des bancs d'essais qui permet ensuite leur simulation, puis est dédié à leur identification fréquentielle avec une application à un banc d'essais équipé d'un moteur Diesel. Le troisième chapitre présente la méthodologie CRONE multivariable et différents développements permettant d'en simplifier la mise en oeuvre notamment grâce à l'optimisation d'un nouveau paramètre. Illustrant ces développements, la boîte à outils CRONE multivariable qui a été développée est alors utilisée pour la synthèse de la loi de commande d'une maquette de banc d'essais constituée de deux moteurs asynchrones. Finalement, le quatrième chapitre est dédié à la validation des développements présentés par leur application à un banc d'essais haute dynamique équipé d'un moteur Essence. Ce chapitre présente également les différents outils logiciels développés pour faciliter l'intégration de la méthodologie CRONE full MIMO aux produits D2T. / The work presented in this thesis is part of the development of the multivariable CRONE (robust control with fractional integration order) methodology. It deals with simplifying the design of robust multivariable control systems, with application to the control of high dynamic engine test-benches, including those developed by the D2T company. The first chapter introduces the different kinds of test-benches and gives a general idea of how those systems work, while explaining the associated control problems. The first part of chapter II presents a simple physical model of the test-benches, which then allows their simulation, and the second part is dedicated to their frequency-domain system identification; the defined identification procedure is applied to a test-bench equipped with a Diesel engine. The third chapter presents the multivariable CRONE methodology and different developments that simplify its use, in particular through the optimization of a new parameter. Reflecting these developments, the CRONE multivariable toolbox that was developed is used to design a control system for a test-bench mock-up consisting of two asynchronous motors. The fourth chapter is dedicated to validating the presented work by applying these developments to a high dynamic test-bench equipped with a spark-ignition engine. This chapter also presents the various software tools developed to simplify the integration of the CRONE full MIMO methodology into D2T products.
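Since the CRONE methodology relies on fractional (non-integer) integro-differentiation, a concrete way to see what that means is the classic Oustaloup recursive approximation, which realizes s^ν over a frequency band with ordinary poles and zeros. The sketch below is a generic textbook construction in Python/SciPy, not the CRONE toolbox developed in the thesis; the band limits and order are arbitrary example values.

```python
import numpy as np
from scipy import signal

def oustaloup(nu, wb=0.01, wh=100.0, N=5):
    """Band-limited rational approximation of the fractional operator s**nu
    over [wb, wh] rad/s, using N recursively spaced zero/pole pairs."""
    k = np.arange(1, N + 1)
    wz = wb * (wh / wb) ** ((2 * k - 1 - nu) / (2 * N))   # zeros
    wp = wb * (wh / wb) ** ((2 * k - 1 + nu) / (2 * N))   # poles
    gain = wh ** nu                                       # matches |(j*wh)**nu| at the band edge
    return signal.ZerosPolesGain(-wz, -wp, gain)

# usage: a half-order integrator s**-0.5; in-band magnitude falls at about -10 dB/decade
sys = oustaloup(-0.5)
w, mag, phase = signal.bode(sys, w=np.logspace(-1, 1, 50))
print(mag[0] - mag[-1])   # roughly 20 dB over the two decades shown
```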
|
64 |
Construção e validação de um receptor GPS para uso espacial. Albuquerque, Glauberto Leilson Alves de, 20 November 2009 (has links)
Previous issue date: 2009-11-20 / The Global Positioning System (GPS) is a radionavigation system developed by the United States for military applications that has, over time, become very useful for civilian use. In recent decades Brazil has developed sounding rockets, and projects to build micro- and nanosatellites are now appearing. These vehicles, called spacecraft or high-dynamic vehicles, can use GPS in flight for autonomous localization and for verifying and controlling their trajectories. Despite the huge number of GPS receivers available on the civilian market, they cannot be used in high-dynamic vehicles, either because of environmental issues (vibration, high temperatures, etc.) or because of imposed dynamic operating limits (software protection). The technology to build GPS receivers for spacecraft or high-dynamic vehicles is restricted to a few nations, which impose very strict rules on their acquisition. This project aims to build a GPS receiver, install it in the payload of a sounding rocket and collect in-flight data to verify its correct operation under flight conditions. The receiver software was already available in source code and had been tested on a development platform named GPS Architect. Several organizations cooperated to support this project: AEB, UFRN, IAE, INPE and CLBI. After the project phases of defining the operating conditions, selecting and acquiring the electronic components, manufacturing the printed circuit boards, and assembly and integration testing, the receiver was installed in a VS30 sounding rocket launched from the Centro de Lançamento da Barreira do Inferno in Natal/RN. Although positioning data were collected only during the first 70 seconds of flight, these data confirm the correct operation of the receiver through the comparison between its positioning data and the trajectory data from CLBI's tracking radar, named ADOUR. / O Sistema de Posicionamento Global, conhecido mundialmente pela sigla GPS, é um sistema de radionavegação construído pelos norte-americanos com intenções militares, mas que encontrou, com o passar do tempo, muitas aplicações de uso civil. No Brasil, além do desenvolvimento de foguetes de sondagem, começam a aparecer projetos de construção de micro e nanosatélites. Estes veículos denominados espaciais ou de alta dinâmica podem, quando em voo, usufruir do sistema GPS para localização autônoma e verificação/controle das suas trajetórias. Apesar da enorme disponibilidade de receptores GPS no mercado civil, estes não podem ser utilizados em veículos de alta dinâmica, seja por questões ambientais (vibrações, temperaturas elevadas, etc.) ou por proteção lógica (via software). Os receptores para uso em veículos de alta dinâmica, ou veículos espaciais, fazem parte de uma tecnologia restrita a poucos países, que estabelecem regras muito rígidas para suas aquisições. O presente projeto objetiva construir e validar o funcionamento básico deste receptor ao instalá-lo num foguete de sondagem e coletar dados em voo. O software a ser utilizado no receptor já estava disponível em código fonte e testado em uma plataforma de desenvolvimento denominada GPS Architect. Vários organismos cooperaram para a realização do projeto: AEB, UFRN, IAE, INPE e CLBI. Após vários passos para a realização do projeto (definição das condições de funcionamento, escolha e aquisição dos componentes eletrônicos, fabricação das placas de circuito impresso, montagem e testes de integração), o mesmo foi instalado num foguete de sondagem VS30 lançado a partir do Centro de Lançamento da Barreira do Inferno em Natal/RN.
Apesar da coleta parcial dos dados do receptor, por falha técnica do sistema de telemetria do foguete, os resultados obtidos foram suficientes para validar o funcionamento do receptor a partir da comparação entre os dados de trajetografia fornecidos pelo receptor GPS e o radar de trajetografia do CLBI, conhecido como Radar ADOUR.
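For context on what a GPS receiver's navigation core computes, the sketch below performs the standard iterative least-squares position fix from pseudoranges: it linearizes the range equations around a guess and solves for the receiver position and clock bias. This is generic textbook material, not code from the receiver or the GPS Architect platform described above; the satellite positions and pseudoranges are made-up example inputs.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def position_fix(sat_pos, pseudoranges, iters=8):
    """Estimate receiver ECEF position (m) and clock bias (s) from >=4 satellites
    by Gauss-Newton iteration on the linearized pseudorange equations."""
    x = np.zeros(4)                                           # [X, Y, Z, c*dt], start at Earth's center
    for _ in range(iters):
        rho = np.linalg.norm(sat_pos - x[:3], axis=1)         # geometric ranges
        predicted = rho + x[3]                                # plus receiver clock term
        H = np.hstack([(x[:3] - sat_pos) / rho[:, None],      # unit line-of-sight rows
                       np.ones((len(rho), 1))])
        dx, *_ = np.linalg.lstsq(H, pseudoranges - predicted, rcond=None)
        x += dx
    return x[:3], x[3] / C

# usage with fabricated satellite coordinates (m) and pseudoranges (m):
sats = np.array([[15600e3,  7540e3, 20140e3],
                 [18760e3,  2750e3, 18610e3],
                 [17610e3, 14630e3, 13480e3],
                 [19170e3,   610e3, 18390e3]])
pr = np.array([21110e3, 22010e3, 21350e3, 22420e3])
pos, clock_bias = position_fix(sats, pr)
print(pos, clock_bias)
```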
|
67 |
High Dynamic Range Panoramic Imaging with Scene Motion. Silk, Simon, January 2011 (has links)
Real-world radiance values can range over eight orders of magnitude from starlight to direct sunlight but few digital cameras capture more than three orders in a single Low Dynamic Range (LDR) image. We approach this problem using established High Dynamic Range (HDR) techniques in which multiple images are captured with different exposure times so that all portions of the scene are correctly exposed at least once. These images are then combined to create an HDR image capturing the full range of the scene. HDR capture introduces new challenges; movement in the scene creates faded copies of moving objects, referred to as ghosts.
Many techniques have been introduced to handle ghosting, but typically they either address specific types of ghosting, or are computationally very expensive. We address ghosting by first detecting moving objects, then reducing their contribution to the final composite on a frame-by-frame basis. The detection of motion is addressed by performing change detection on exposure-normalized images. Additional special cases are developed based on a priori knowledge of the changing exposures; for example, if exposure is increasing every shot, then any decrease in intensity in the LDR images is a strong indicator of motion. Recent Superpixel over-segmentation techniques are used to refine the detection. We also propose a novel solution for areas that see motion throughout the capture, such as foliage blowing in the wind. Such areas are detected as always moving, and are replaced with information from a single input image, and the replacement of corrupted regions can be tailored to the scenario.
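The two steps described above (merging exposures into a radiance estimate and flagging motion on exposure-normalized images) can be illustrated with a minimal sketch. It assumes a linear sensor response and a simple per-pixel threshold, which is far cruder than the change detection and superpixel refinement used in the thesis; the threshold value is an arbitrary assumption.

```python
import numpy as np

def merge_exposures(images, times, eps=1e-6):
    """Weighted-average HDR merge of linear LDR exposures (pixel values in [0,1])."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(images, times):
        w = 1.0 - np.abs(2.0 * img - 1.0)        # hat weight: trust mid-tones most
        num += w * img / t                        # exposure-normalized radiance estimate
        den += w
    return num / (den + eps)

def motion_mask(images, times, thresh=0.15):
    """Flag pixels whose exposure-normalized values disagree across shots (likely ghosts)."""
    norm = np.stack([img / t for img, t in zip(images, times)])
    spread = norm.max(axis=0) - norm.min(axis=0)
    return spread > thresh * norm.mean(axis=0).clip(1e-6)

# usage: hdr = merge_exposures(ldr_stack, exposure_times); mask = motion_mask(ldr_stack, exposure_times)
```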
We present our approach in the context of a panoramic tele-presence system. Tele-presence systems allow a user to experience a remote environment, aiming to create a realistic sense of "being there" and such a system should therefore provide a high quality visual rendition of the environment. Furthermore, panoramas, by virtue of capturing a greater proportion of a real-world scene, are often exposed to a greater dynamic range than standard photographs. Both facets of this system therefore stand to benefit from HDR imaging techniques.
We demonstrate the success of our approach on multiple challenging ghosting scenarios, and compare our results with state-of-the-art methods previously proposed. We also demonstrate computational savings over these methods.
|
68 |
Temporal coherency in video tone mapping / Influence de la cohérence temporelle dans les techniques de Vidéo Tone Mapping. Boitard, Ronan, 16 October 2014 (has links)
L'un des buts principaux de l'imagerie numérique est d'une part la capture et d'autre part la reproduction de scènes réelles ou synthétiques sur des dispositifs d'affichage aux capacités restreintes. Les techniques d'imagerie traditionnelles sont limitées par la gamme de luminance qu'elles peuvent capturer et afficher. L'imagerie à grande gamme de luminance (High Dynamic Range – HDR) vise à dépasser cette limitation en capturant, représentant et affichant les quantités physique de la lumière présente dans une scène. Cependant, les technologies d'affichage existantes ne vont pas disparaitre instantanément, la compatibilité entre ces nouveaux contenus HDR et les contenus classiques est donc requise. Cette compatibilité est assurée par une opération de réduction des gammes de luminance (tone mapping) qui adapte les contenus HDR aux capacités restreintes des écrans. Bien que de nombreux opérateurs de tone mapping existent, ceux-ci se focalisent principalement sur les images fixes. Les verrous scientifiques associés au tone mapping de vidéo HDR sont plus complexes du fait de la dimension temporelle. Les travaux recherche menés dans la thèse se sont focalisés sur la préservation de la cohérence temporelle du vidéo tone mapping. Deux principaux axes de recherche ont été traités : la qualité subjective de contenus tone mappés et l'efficacité de la compression des vidéos HDR. En effet, tone mapper individuellement chaque image d'une séquence vidéo HDR engendre des artefacts temporels. Ces artefacts affectent la qualité visuelle de la vidéo tone mappée et il est donc nécessaire de les minimiser. Au travers de tests effectués sur des vidéos HDR avec différents opérateurs de tone mapping, nous avons proposé une classification des artefacts temporels en six catégories. Après avoir testé les opérateurs de tone mapping vidéo existants sur les différents types d'artefacts temporels, nous avons observé que seulement trois des six types d'artefacts étaient résolus. Nous avons donc créé une technique de post-traitement qui permet de réduire les 3 types d'artefacts non-considérés. Le deuxième aspect considéré dans la thèse concerne les relations entre compression et tone mapping. Jusque là, les travaux effectués sur le tone mapping et la vidéo compression se focalisaient sur l'optimisation du tone mapping de manière à atteindre des taux de compression élevés. Ces techniques modifient fortement le rendu, c'est à dire l'aspect de la vidéo, modifiant ainsi l'intention artistique initiale en amont dans la chaine de distribution (avant la compression). Dans ce contexte, nous avons proposé une technique qui permet de réduire l'entropie d'une vidéo tone mappée sans en modifier son rendu. Notre méthode adapte la quantification afin d'accroitre les corrélations entre images successives d'une vidéo. / One of the main goals of digital imagery is to improve the capture and the reproduction of real or synthetic scenes on display devices with restricted capabilities. Standard imagery techniques are limited with respect to the dynamic range that they can capture and reproduce. High Dynamic Range (HDR) imagery aims at overcoming these limitations by capturing, representing and displaying the physical value of light measured in a scene. However, current commercial displays will not vanish instantly hence backward compatibility between HDR content and those displays is required. 
This compatibility is ensured through an operation called tone mapping that retargets the dynamic range of HDR content to the restricted dynamic range of a display device. Although many tone mapping operators exist, they focus mostly on still images. The challenges of tone mapping HDR videos are more complex than those of still images since the temporal dimension is added. In this work, the focus was on the preservation of temporal coherency when performing video tone mapping. Two main research avenues are investigated: the subjective quality of tone mapped video content and its compression efficiency. Indeed, tone mapping each frame of a video sequence independently leads to temporal artifacts. Those artifacts impair the visual quality of the tone mapped video sequence and need to be reduced. Through experiments with HDR videos and Tone Mapping Operators (TMOs), we categorized temporal artifacts into six categories. We tested video tone mapping operators (techniques that take into account more than a single frame) on the different types of temporal artifact and we observed that they could handle only three out of the six types. Consequently, we designed a post-processing technique that adapts to any tone mapping operator and reduces the three types of artifact not dealt with. A subjective evaluation reported that our technique always preserves or increases the subjective quality of tone mapped content for the sequences and TMOs tested. The second topic investigated was the compression of tone mapped video content. So far, work on tone mapping and video compression focused on optimizing a tone map curve to achieve high compression ratios. These techniques changed the rendering of the video to reduce its entropy, hence removing any artistic intent or constraint on the final results. That is why we proposed a technique that reduces the entropy of a tone mapped video without altering its rendering.
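To make the flickering problem concrete, the sketch below applies a global photographic tone curve per frame but temporally smooths the frame "key" (log-average luminance) with a leaky integrator, a common way to improve temporal coherency. This is a generic illustration, not the post-processing technique proposed in the thesis; the smoothing factor `alpha` is an arbitrary assumption.

```python
import numpy as np

def tone_map_sequence(frames, a=0.18, alpha=0.05, eps=1e-6):
    """Tone map a list of HDR luminance frames with a temporally filtered key
    to avoid the frame-to-frame brightness flicker of independent tone mapping."""
    ldr_frames, key = [], None
    for lum in frames:
        frame_key = np.exp(np.mean(np.log(lum + eps)))        # per-frame log-average luminance
        key = frame_key if key is None else (1 - alpha) * key + alpha * frame_key
        scaled = a * lum / key
        ldr_frames.append(scaled / (1.0 + scaled))             # global photographic curve
    return ldr_frames
```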
|
69 |
Data analytics and methods for improved feature selection and matching. May, Michael, January 2012 (has links)
This work focuses on analysing and improving feature detection and matching. After creating an initial framework of study, four main areas of work are researched. These areas make up the main chapters within this thesis and focus on using the Scale Invariant Feature Transform (SIFT). The preliminary analysis of the SIFT investigates how this algorithm functions. Included is an analysis of the SIFT feature descriptor space and an investigation into the noise properties of the SIFT. It introduces a novel use of the a contrario methodology and shows the success of this method as a way of discriminating images which are likely to contain corresponding regions from images which do not. Parameter analysis of the SIFT uses both parameter sweeps and genetic algorithms as an intelligent means of setting the SIFT parameters for different image types, utilising a GPGPU implementation of SIFT. The results have demonstrated which parameters are more important when optimising the algorithm and the areas within the parameter space to focus on when tuning the values. A multi-exposure, High Dynamic Range (HDR) feature-fusion process has been developed in which SIFT image features are matched within high contrast scenes. Bracketed exposure images are analysed and features are extracted and combined from different images to create a set of features which describe a larger dynamic range. They are shown to reduce the effects of noise and artefacts that are introduced when extracting features from HDR images directly and have a superior image matching performance. The final area is the development of a novel, 3D-based, SIFT weighting technique which utilises the 3D data from a pair of stereo images to cluster and classify matched SIFT features. Weightings are applied to the matches based on the 3D properties of the features and how they cluster in order to attempt to discriminate between correct and incorrect matches using the a contrario methodology. The results show that the technique provides a method for discriminating between correct and incorrect matches and that the a contrario methodology has potential for future investigation as a method for correct feature match prediction.
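As a point of reference for the matching pipeline analysed above, the sketch below extracts SIFT features with OpenCV and filters matches with Lowe's ratio test. It is ordinary library usage, not the GPGPU implementation, parameter optimization, or a contrario weighting developed in the thesis; the 0.75 ratio is a conventional default, not a value from the text.

```python
import cv2

def sift_matches(path_a, path_b, ratio=0.75):
    """Detect SIFT keypoints in two images and keep matches passing Lowe's ratio test."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp_a, desc_a = sift.detectAndCompute(img_a, None)
    kp_b, desc_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # keep a match only if it is clearly better than the second-best candidate
    good = [m for m, n in matcher.knnMatch(desc_a, desc_b, k=2) if m.distance < ratio * n.distance]
    return kp_a, kp_b, good
```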
|
70 |
High dynamic range imaging applied to the study of sky vault luminance distribution mapping = Imagens de grande alcance dinâmico aplicadas ao mapeamento da distribuição de luminâncias da abóbada celeste. Souza, Dennis Flores de, 1984-, 12 December 2014 (has links)
Orientadores: Paulo Sergio Scarazzato, Hélio Pedrini / Tese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Civil, Arquitetura e Urbanismo
Previous issue date: 2014 / Resumo: O uso de imagens de grande alcance dinâmico (HDR) nos estudos de iluminação vem se tornando um expediente frequente pela capacidade de armazenamento de dados referentes à distribuição de luminâncias em uma cena. Diversos estudos comprovaram, por exemplo, as possibilidades de registro da luz natural por imagens digitais, uma vez que as características das imagens HDR puderam melhorar os resultados. Dentre as diferentes aplicações, o registro da abóbada celeste é um dos que mais pode se beneficiar dessa ferramenta, pois tal procedimento é mais simples do que aqueles realizados a partir de medições feitas por luminancímetros ou escâneres de céu. Além disso, atualmente a identificação dos tipos de céu ainda é feita em sua maioria utilizando métodos subjetivos. Isto se deve ao fato de não existir uma metodologia para comparar imagens HDR com modelos matemáticos, apenas métodos unidimensionais que focalizam um ou outro aspecto. Esta pesquisa teve por objetivo desenvolver um método multidimensional de identificação, classificação e extração de dados de iluminação natural a partir de imagens HDR da abóbada celeste. As imagens das câmeras foram calibradas segundo métodos disponíveis para estabelecer a confiabilidade da análise e interpretação dos dados, e foram obtidas em localidade com o mínimo de obstrução à visão da abóbada celeste. O método multidimensional de análise foi desenvolvido juntamente a uma rotina em MATLAB, que serviu ao propósito de verificar sua viabilidade e a precisão. Os dados extraídos foram testados na plataforma Flash, usando a linguagem ActionScript 3, para brevemente demonstrar as possibilidades de uso. Este método utiliza um sistema de classificação baseado na relevância das características identificadas na imagem, como a cobertura de nuvens e a distribuição de luminâncias, para escolher o tipo de céu da norma ISO 15469:2004 (e) / CIE S 011/E:2003 mais apropriado. Os resultados apontam para a viabilidade desse método em escolher o tipo de céu mais relevante de acordo com os dados extraídos da imagem HDR. A proposição deste método multidimensional de análise pode contribuir para a criação de um sistema de classificação e de um banco de dados digital úteis para futuros programas de simulação, providenciando dados de entrada obtidos a partir de medições de uma realidade física, facilmente registrada com precisão e confiabilidade a partir de imagens fotográficas / Abstract: Lately, the high dynamic range images (HDR) have experienced a significant growth in their usage in lighting studies, due to their capacity to store data of luminance distribution in a scene. Various studies have attested, for instance, the possibilities of using digital images in the register of daylighting, since the features of HDR images could enhance the results. Among different applications, the record of the light on the sky vault is one that can benefit most from HDR techniques, because this procedure is simpler than those performed by luminance meters or sky scanners measurements. Besides, the identification and classification of sky types are still done mostly by subjective methods. This can be explained by the unavailability of a methodology able to compare HDR images with mathematical models, although there are unidimensional methods that focus on one or another aspect of digital images. This research aimed at the development of a multidimensional method of identification, classification and extraction of daylight data from HDR images of the sky vault. 
The images registered by the camera were calibrated using available methods to establish the reliability of the analysis and interpretation of data. They were then obtained on a site with minimal obstruction to the vision of the sky vault. The multidimensional analysis method was developed in conjunction with a routine in MATLAB, which served the purpose of verifying its feasibility and accuracy. The extracted data were tested in Flash platform using ActionScript 3 language to briefly demonstrate the usage possibilities. This method relies on a classification system based on the relevance of the features identified in the image, such as cloud covering and luminance distribution, to choose the most appropriate sky type according to ISO 15469:2004 (e) / CIE S 011/E:2003 Standard. The results demonstrate the feasibility of this method in choosing the most relevant sky type according to the data extracted from the HDR image. The proposition of this multidimensional analysis method may contribute to the creation of a classification system and a digital database useful for future simulation software, providing input data from measurements of a physical reality, easily recorded with accuracy and confidence by photographic images / Doutorado / Arquitetura, Tecnologia e Cidade / Doutor em Arquitetura, Tecnologia e Cidade
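To give a flavour of the kind of data extraction described above, the sketch below converts a calibrated HDR fisheye image into a coarse map of mean luminance per sky patch, using standard Rec. 709 luminance weights and an equidistant fisheye projection. It is only a schematic sketch under those assumptions, not the MATLAB routine developed in the thesis; the patch grid and calibration factor are illustrative.

```python
import numpy as np

def luminance(rgb, k=179.0):
    """Photometric luminance (cd/m^2) from linear HDR RGB, given a calibration factor k."""
    return k * (0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2])

def sky_patch_means(hdr_rgb, n_alt=6, n_az=12):
    """Mean luminance per (altitude, azimuth) patch for an equidistant fisheye sky image."""
    h, w, _ = hdr_rgb.shape
    cy, cx, r_max = h / 2, w / 2, min(h, w) / 2
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - cy, xx - cx)
    alt = np.pi / 2 * (1 - r / r_max)                 # zenith at the image centre, horizon at r_max
    az = np.mod(np.arctan2(yy - cy, xx - cx), 2 * np.pi)
    lum = luminance(hdr_rgb)
    grid = np.full((n_alt, n_az), np.nan)
    for i in range(n_alt):
        for j in range(n_az):
            sel = ((alt >= i * np.pi / 2 / n_alt) & (alt < (i + 1) * np.pi / 2 / n_alt) &
                   (az >= j * 2 * np.pi / n_az) & (az < (j + 1) * 2 * np.pi / n_az) & (r <= r_max))
            if sel.any():
                grid[i, j] = lum[sel].mean()
    return grid
```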
|