1 |
Evaluation of Image Warping Algorithms for Implementation in FPGA
Serguienko, Anton January 2008 (has links)
The goal of this master's thesis is to evaluate the image warping technique and propose a possible design for implementation in an FPGA. Image warping is widely used in image processing for image correction and rectification. A DSP is the usual choice for implementing image processing algorithms, but to decrease the cost of the target system it was proposed to use an FPGA instead. In this work, different image warping methods were evaluated in terms of performance, produced image quality, complexity, and design size. Also, since image warping is not the only algorithm that will be implemented on the target system, it was important to estimate the memory bandwidth used by the proposed design. The evaluation was done by implementing a C model of the proposed design with a finite datapath, to simulate the hardware implementation as closely as possible.
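As a rough illustration of the inverse-mapping image warp with bilinear interpolation that designs like this typically evaluate, here is a minimal sketch in Python/NumPy. It uses floating point for clarity, whereas the thesis models a finite (fixed-point) datapath, and the `mapping` function is a hypothetical stand-in for whatever correction transform is applied:

```python
import numpy as np

def warp_image(src, mapping, out_shape):
    """Inverse-map each output pixel into the source image and
    bilinearly interpolate the four neighbouring source pixels."""
    h, w = out_shape
    dst = np.zeros((h, w), dtype=src.dtype)
    for y in range(h):
        for x in range(w):
            sx, sy = mapping(x, y)            # source coordinates (float)
            x0, y0 = int(np.floor(sx)), int(np.floor(sy))
            if 0 <= x0 < src.shape[1] - 1 and 0 <= y0 < src.shape[0] - 1:
                fx, fy = sx - x0, sy - y0     # fractional parts
                dst[y, x] = ((1 - fx) * (1 - fy) * src[y0, x0]
                             + fx * (1 - fy) * src[y0, x0 + 1]
                             + (1 - fx) * fy * src[y0 + 1, x0]
                             + fx * fy * src[y0 + 1, x0 + 1])
    return dst
```

A hardware version would replace the floating-point fractions with fixed-point values of the chosen datapath width, which is exactly the trade-off such a C model is built to explore.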
|
3 |
Automatic Cartoon Generation By Learning The Style Of An Artist
Kuruoglu, Betul 01 September 2012 (has links) (PDF)
In this study, we propose an algorithm for generating cartoons from face images automatically. The suggested method learns the drawing style of an artist and applies this style to the face images in a database to create cartoons.
The training data consists of a set of face images and corresponding cartoons drawn by the same artist. Initially, a set of control points is labeled and indexed to characterize the faces in the training data set, for both the images and the corresponding caricatures. Then, their features are extracted to model the style of the artist. Finally, a similarity matrix between the real face image set and the input image is constructed. With the help of the similarity matrix, a Distance-Weighted Nearest Neighbor algorithm calculates the exaggeration coefficients that the caricaturist would have designed for the input image in his mind. In the caricature generation phase, the Moving Least Squares algorithm is applied to distort the input image based on these coefficients. Caricatures generated by this approach successfully capture most of the key characteristics of the caricaturist's drawing style.
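The Distance-Weighted Nearest Neighbor blending step described above can be sketched in a few lines, assuming faces are represented by plain feature vectors. The names and the inverse-distance weighting are illustrative assumptions, not details taken from the thesis:

```python
import numpy as np

def dwnn_coefficients(input_feat, train_feats, train_coeffs, eps=1e-8):
    """Distance-Weighted Nearest Neighbor: blend the exaggeration
    coefficients of the training faces, weighting each training sample
    by the inverse of its feature-space distance to the input face."""
    dists = np.linalg.norm(train_feats - input_feat, axis=1)
    weights = 1.0 / (dists + eps)            # closer faces weigh more
    weights /= weights.sum()
    return weights @ train_coeffs            # weighted average of coefficients
```

An input face identical to a training face thus inherits (almost exactly) that face's exaggeration, while an in-between face receives a smoothly blended value.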
|
4 |
Design of a Depth-Image-Based Rendering (DIBR) 3D Stereo View Synthesis Engine
Chang, Wei-Chun 01 September 2011 (has links)
Depth-Image-Based Rendering (DIBR) is a popular method for generating a 3D virtual image at a different view position from an image and a depth map. In general, DIBR consists of two major operations: image warping and hole filling. Image warping calculates the disparity from the depth map, given information about the viewer and the display screen. Hole filling calculates the color of pixel locations that do not correspond to any pixel in the original image after image warping. Although there are many hole filling methods that determine the colors of the blank pixels, some undesirable artifacts are still observed in the synthesized virtual image. In this thesis, we present an approach that examines the geometry near regions of blank pixels in order to reduce the artifacts at the edges of objects. Experimental results show that the proposed design generates more natural shapes around the edges of objects, at the cost of more hardware and computation time.
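The two DIBR operations described above can be sketched for a single scanline as follows. This is a generic textbook formulation (shift-by-disparity with z-buffering, plus a deliberately naive copy-from-the-left hole fill), not the geometry-aware method the thesis proposes:

```python
import numpy as np

def dibr_warp_row(colors, depths, focal, baseline):
    """Warp one scanline to a virtual view: each pixel shifts by a
    disparity proportional to baseline * focal / depth.  Output pixels
    the warp never lands on are marked as holes (-1) for later filling."""
    w = len(colors)
    out = np.full(w, -1.0)                   # -1 marks a hole
    z_buf = np.full(w, np.inf)
    for x in range(w):
        d = int(round(focal * baseline / depths[x]))   # disparity in pixels
        xt = x + d
        if 0 <= xt < w and depths[x] < z_buf[xt]:      # nearer surface wins
            z_buf[xt] = depths[x]
            out[xt] = colors[x]
    return out

def fill_holes(row):
    """Naive hole filling: copy the nearest valid neighbour from the left.
    This is the kind of simple rule that produces the edge artifacts the
    thesis aims to reduce."""
    out = row.copy()
    for x in range(len(out)):
        if out[x] < 0 and x > 0:
            out[x] = out[x - 1]
    return out
```

Foreground pixels (small depth) shift further than background pixels, which is what opens the holes next to object edges in the first place.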
|
5 |
A CONTROL MECHANISM TO THE ANYWHERE PIXEL ROUTER
Krishnan, Subhasri 01 January 2007 (links)
Traditionally, large-format displays have been achieved using software. A new technique of hardware-based anywhere pixel routing is explored in this thesis. Information stored in a Look-Up Table (LUT) in the hardware can be used to tile two image streams to produce a seamless image display. This thesis develops a one-input-image, one-output-image system that implements arbitrary warping of the image, based on a LUT stored in memory. The developed control mechanism is first validated using simulation results. It is then validated via implementation on a Field Programmable Gate Array (FPGA) based hardware prototype and appropriate experimental testing. It was verified by changing the contents of the LUT and observing that the resulting changes in the pixel mapping were always correct.
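The LUT-driven routing idea can be sketched in a few lines; this software model is an assumption about the general mechanism (one memory lookup per output pixel), not the actual FPGA design:

```python
import numpy as np

def route_pixels(frame, lut):
    """Anywhere-pixel routing: the LUT stores, for every output pixel,
    the (row, col) of the input pixel to display there, so an arbitrary
    warp reduces to one memory lookup per output pixel."""
    h, w = lut.shape[:2]
    out = np.empty((h, w), dtype=frame.dtype)
    for y in range(h):
        for x in range(w):
            sy, sx = lut[y, x]
            out[y, x] = frame[sy, sx]
    return out
```

Rewriting the LUT contents changes the pixel mapping without touching the datapath, which is exactly the property the validation in the thesis exercises.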
|
6 |
Image Vectorization
Price, Brian L. 31 May 2006 (links) (PDF)
We present a new technique for creating an editable vector graphic from an object in a raster image. Object selection is performed interactively in subsecond time by invoking graph cut with each mouse movement. A renderable mesh is then computed automatically for the selected object and each of its (sub)objects by (1) generating a coarse object mesh; (2) performing recursive graph cut segmentation and hierarchical ordering of subobjects; and (3) applying error-driven mesh refinement to each (sub)object. The result is a fully layered object hierarchy that facilitates object-level editing without leaving holes. Object-based vectorization compares favorably with current approaches in representation and rendering quality. Object-based vectorization and complex editing tasks are performed in a few tens of seconds.
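Step (3), error-driven refinement, can be illustrated with a quadtree-style sketch in which each cell is approximated by its mean colour and split while the approximation error exceeds a tolerance. The real system fits renderable mesh patches rather than constant cells, so treat this purely as an illustration of the split criterion:

```python
import numpy as np

def refine(img, x0, y0, x1, y1, tol, cells):
    """Error-driven refinement sketch: approximate the cell [x0,x1) x
    [y0,y1) by its mean value and split it into four subcells while the
    maximum approximation error exceeds `tol`."""
    block = img[y0:y1, x0:x1]
    err = np.abs(block - block.mean()).max()
    if err <= tol or (x1 - x0 <= 1) or (y1 - y0 <= 1):
        cells.append((x0, y0, x1, y1))        # cell is good enough (or minimal)
        return
    mx, my = (x0 + x1) // 2, (y0 + y1) // 2
    for (a, b, c, d) in [(x0, y0, mx, my), (mx, y0, x1, my),
                         (x0, my, mx, y1), (mx, my, x1, y1)]:
        refine(img, a, b, c, d, tol, cells)
```

Flat regions stay as single large cells while detailed regions are subdivided, concentrating mesh density where the rendering error would otherwise be visible.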
|
7 |
Investigating and developing a model for iris changes under varied lighting conditions
Phang, Shiau Shing January 2007 (links)
Biometric identification systems have several distinct advantages over other authentication technologies, such as passwords, in reliably recognising individuals. Iris-based recognition is one such biometric recognition system. Unlike other biometrics such as fingerprints or face images, the distinctiveness of the iris comes from its randomly distributed features. The patterns of these randomly distributed features have been shown to be fixed over a person's lifetime, and are stable over time for healthy eyes, except for the distortions caused by the constriction and dilation of the pupil. The distortion of the iris pattern caused by pupillary activity, which is mainly due to changes in ambient lighting conditions, can be significant. One important question that arises from this is: how closely do two different iris images of the same person, taken at different times using different cameras, in different environments, and under different lighting conditions, agree with each other? It is also problematic for iris recognition systems to correctly identify a person when his or her pupil size is very different from that in the iris images used when constructing the system's database. To date, researchers in the field of iris recognition have attempted to address this problem with varying degrees of success. However, there is still a need for in-depth investigation of this matter in order to arrive at more reliable solutions. It is therefore necessary to study the behaviour of iris surface deformation caused by changes in lighting conditions. In this thesis, a study of the physiological behaviour of pupil size variation under different normal indoor lighting conditions (100 lux to 1,200 lux) and brightness levels is presented. The thesis also presents the results of applying Elastic Graph Matching (EGM) tracking techniques to study the mechanisms of iris surface deformation.
A study of pupil size variation under different normal indoor lighting conditions was conducted. The study showed that the behaviour of pupil size can differ significantly from one person to another under the same lighting conditions. There was no evidence from this study that the exact pupil size of an individual can be determined at a given illumination level. However, the range of pupil sizes can be estimated for a range of specific lighting conditions. The range of average pupil sizes found under normal indoor lighting was between 3 mm and 4 mm. One of the advantages of using EGM for iris surface deformation tracking is that it incorporates the benefit of Gabor wavelets to encode the iris features for tracking. The tracking results showed that the radial stretch of the iris surface is nonlinear. However, the amount of extension of the iris surface at any point during the stretch is approximately linear. Analysis of the tracking results also showed that the behaviour of iris surface deformation differs from one person to another. This implies that a generalised iris surface deformation model cannot be established for personal identification. However, a deformation model can be established for each individual based on the analysis results, which can be useful for personal verification using the iris. Therefore, the tracking results of each individual were used to model iris surface deformations for that individual. The model was able to estimate the movement of a point on the iris surface at a particular pupil size. This makes it possible to estimate and construct the 2D deformed iris image for a desired pupil size from a given iris image with a different pupil size. The estimated deformed iris images were compared with the actual images for similarity, using an intensity-based measure (zero-mean normalised cross-correlation).
The results show that 86% of the comparisons have over 65% similarity between the estimated and actual iris images. Preliminary tests of the estimated deformed iris images using an open-source iris recognition algorithm showed improved personal verification performance. The studies presented in this thesis used a very small sample of iris images and therefore should not be generalised before further investigations are conducted.
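The zero-mean normalised cross-correlation used for the similarity comparison is a standard measure and can be sketched directly: it yields 1.0 for identical patterns and is invariant to brightness offset and contrast scaling.

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalised cross-correlation of two equally sized
    image patches: subtract each patch's mean, then correlate and
    normalise by the product of the patch energies."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom)
```

Because of the mean subtraction and normalisation, a patch compared against a brightened or contrast-stretched copy of itself still scores 1.0, which is why this measure suits comparisons between images taken under different lighting.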
|
8 |
Algorithmes et analyses perceptuelles pour la navigation interactive basée image / Algorithms and perceptual analysis for interactive free viewpoint image-based navigation
Chaurasia, Gaurav 18 February 2014 (links)
We present image-based rendering (IBR) approaches that allow free-viewpoint walkthroughs of urban scenes using just a few photographs as input. Previous approaches depend upon 3D models and give artifacts as the quality of the 3D model degrades. In this thesis, we propose image-based approximations to compensate for the lack of accurate 3D geometry. In the first project, we use a discontinuous, shape-preserving image warp guided by quasi-dense depth maps, which gives far fewer rendering artifacts than previous approaches. We build upon this approach in the second project by developing a completely automated solution that is capable of handling more complex scenes. We oversegment input images into superpixels that capture all occlusion boundaries. We introduce depth synthesis to create approximate depth in very poorly reconstructed regions and compute shape-preserving warps on superpixels to synthesize the final result. We also compare our results to many recent approaches. We analyze IBR artifacts from a perceptual point of view. In the first study, we compare artifacts caused by blending multiple images with those caused by abrupt temporal transitions and develop guidelines for selecting the ideal tradeoff. In another study, we use vision science to investigate perspective distortions and develop a quantitative model that predicts distortions as a function of capture and viewing parameters. We use guidelines from these experiments to motivate the design of our own IBR systems. We demonstrate the very first virtual reality system that uses IBR instead of traditional computer graphics, which drastically reduces the cost of modeling 3D scenes while producing highly realistic walkthroughs.
|
9 |
基於形態轉換的多種表情卡通肖像 / Automatic generation of caricatures with multiple expressions using transformative approach
賴建安, Lai, Chien An Unknown Date (links)
As the acquisition of digital images becomes more convenient, diversified applications of image collections have surfaced at a rapid pace. Not only have we witnessed the popularity of photo-sharing platforms, we have also seen strong demand for novel mechanisms that offer personalized and creative entertainment in recent years. In this thesis, we propose and implement a personal caricature generator using transformative approaches. By combining facial feature detection, image segmentation, and image warping/morphing techniques, the system is able to generate a stylized caricature using only one reference image. The system can also produce multiple expressions by controlling the MPEG-4 facial animation parameters (FAP). Specifically, by referencing various pre-drawn caricatures in our database as well as feature points for mesh creation, personalized caricatures are automatically generated from real photos using either rotoscoping or transformative approaches. The resulting caricature can be further modified to exhibit multiple facial expressions. Important issues regarding color reduction and vectorized representation of the caricature are also discussed in this thesis.
|
10 |
Kalibrierverfahren und optimierte Bildverarbeitung für Multiprojektorsysteme / Calibration methods and optimized image processing for multi-projector display systems
Heinz, Marcel 28 November 2013 (links) (PDF)
The subject of this dissertation is the development of calibration methods and image processing algorithms for multi-projector display systems, with the goal of broadening the range of applications of such installations and increasing their user acceptance. The work focuses in particular on (approximately) planar multi-segment projection systems built from inexpensive consumer and office projectors that were not specifically designed for visualization purposes.
In the first part of the thesis, existing methods for geometric calibration, edge blending, and brightness and color matching are examined for their suitability with respect to these requirements, and extensions are developed. The camera-based geometric calibration uses line patterns, and an efficient recursive algorithm is presented for computing the intersection points on slightly curved surfaces. For edge blending, a generalized model is developed that combines and extends several existing approaches. In particular, the proposed modification of the distance function allows better control of the brightness profile and enables smoother transitions at the borders of the overlap zones. It is further shown that edge blending can be combined with existing approaches to compensating brightness differences, such as luminance attenuation maps.
Photometric calibration requires knowledge of the color transfer function, i.e., the mapping of input color values to the outputs actually produced by the projector. Conventional approaches mostly consider RGB projectors, for which the three-dimensional transfer function can be decomposed into three one-dimensional functions, one per color channel. This assumption, however, usually does not hold for the projectors considered here. DLP projectors with a color wheel in particular often have additional primaries, so their color space deviates significantly from an ideal RGB model. This thesis first presents an empirical model of a transfer function that is better suited to such projectors, although it does not fully exploit the brightness of the projectors.
In the second part of the thesis, a camera-based measurement procedure is developed that directly determines the three-dimensional color transfer function. Compared to existing methods, thousands of color samples are captured simultaneously, so the achievable sample density under practical measurement conditions can be increased from 17x17x17 to 64x64x64, significantly improving the quality of the photometric calibration. In addition, a fast procedure is developed that reduces the measurement time for 17x17x17 samples from several hours with previous methods to less than 30 minutes.
The third part develops algorithms for efficient image processing that apply the calibration parameters to the displayed image data on the GPU in real time. Opportunities to avoid redundant computation steps when using stereoscopy-capable projectors are exploited. Furthermore, the calibration procedure itself is efficiently combined with methods for converting stereoscopic image formats. It is shown that a single PC built from standard components is sufficient to drive a multi-segment projection system with up to six projectors. The use of DVI capture cards also allows such a system to be operated like a "large monitor" for arbitrary applications and operating systems.
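A measured N x N x N colour transfer function of the kind described in the second part is typically applied by trilinear interpolation between the eight lattice samples surrounding the input colour. The following is a generic sketch of that lookup, not the GPU implementation from the thesis:

```python
import numpy as np

def apply_3d_lut(rgb, lut):
    """Look up an input colour (components in [0, 1]) in an N x N x N
    colour-transfer LUT, trilinearly interpolating between the eight
    surrounding lattice samples."""
    n = lut.shape[0]
    p = np.clip(np.asarray(rgb, dtype=float) * (n - 1), 0, n - 1)
    i0 = np.minimum(p.astype(int), n - 2)    # lower lattice corner
    f = p - i0                               # fractional position in the cell
    out = np.zeros(lut.shape[3])
    for corner in range(8):                  # blend the 8 cell corners
        idx = [(corner >> k) & 1 for k in range(3)]
        w = np.prod([f[k] if idx[k] else 1 - f[k] for k in range(3)])
        out += w * lut[i0[0] + idx[0], i0[1] + idx[1], i0[2] + idx[2]]
    return out
```

Increasing the sample density from 17x17x17 to 64x64x64 shrinks the interpolation cells, so this lookup tracks a strongly non-RGB transfer function (e.g. from a DLP projector with extra primaries) much more faithfully.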
|