1 |
Color Constancy for Stereo Imaging. Wen, Bo, 21 August 2012 (has links)
No description available.
|
2 |
Learning a Color Algorithm from Examples. Hurlbert, Anya; Poggio, Tomaso, 01 June 1987 (has links)
We show that a color algorithm capable of separating illumination from reflectance in a Mondrian world can be learned from a set of examples. The learned algorithm is equivalent to filtering the image data, in which reflectance and illumination are mixed, through a center-surround receptive field in individual chromatic channels. The operation resembles the "retinex" algorithm recently proposed by Edwin Land. This result is a specific instance of our earlier results that a standard regularization algorithm can be learned from examples. It illustrates that the natural constraints needed to solve a problem in inverse optics can be extracted directly from a sufficient set of input data and the corresponding solutions. The learning procedure has been implemented as a parallel algorithm on the Connection Machine System.
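The filtering the abstract describes can be illustrated with a minimal 1-D sketch. The code below is not the learned filter from the paper: it uses a synthetic Mondrian-like signal and a box-mean surround in place of the learned 2-D kernel. In log space the image is the sum of log reflectance and log illumination, so a center-minus-surround response suppresses the smooth illuminant while keeping reflectance edges.

```python
import math

def center_surround(log_channel, surround_radius):
    """Center minus local-mean surround on a 1-D log-chromatic signal.

    A crude stand-in for the learned center-surround receptive field;
    the paper's filter is 2-D and learned from examples.
    """
    n = len(log_channel)
    out = []
    for i in range(n):
        lo = max(0, i - surround_radius)
        hi = min(n, i + surround_radius + 1)
        surround = sum(log_channel[lo:hi]) / (hi - lo)
        out.append(log_channel[i] - surround)
    return out

# Mondrian-like 1-D scene: piecewise-constant reflectance ...
reflectance = [0.2] * 20 + [0.8] * 20
# ... under a smooth illumination gradient.
illumination = [1.0 + 0.02 * i for i in range(40)]
image = [r * e for r, e in zip(reflectance, illumination)]

# In log space, image = log(reflectance) + log(illumination); the smooth
# illuminant term is nearly removed by the high-pass center-surround filter.
response = center_surround([math.log(v) for v in image], surround_radius=5)
```

In the flat regions the response is near zero despite the illumination gradient, while the reflectance edge at index 20 produces a strong response.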
|
3 |
Learning Object-Independent Modes of Variation with Feature Flow Fields. Miller, Erik G.; Tieu, Kinh; Stauffer, Chris P., 01 September 2001 (has links)
We present a unifying framework in which "object-independent" modes of variation are learned from continuous-time data such as video sequences. These modes of variation can be used as "generators" to produce a manifold of images of a new object from a single example of that object. We develop the framework in the context of a well-known example: analyzing the modes of spatial deformations of a scene under camera movement. Our method learns a close approximation to the standard affine deformations that are expected from the geometry of the situation, and does so in a completely unsupervised (i.e. ignorant of the geometry of the situation) fashion. We stress that it is learning a "parameterization", not just the parameter values, of the data. We then demonstrate how we have used the same framework to derive a novel data-driven model of joint color change in images due to common lighting variations. The model is superior to previous models of color change in describing non-linear color changes due to lighting.
|
4 |
Applied color processing. Zhang, Heng, 29 November 2011 (has links)
The quality of a digital image pipeline relies greatly on its color reproduction, which must at a minimum handle color constancy; the final judgment of the pipeline's excellence is made through subjective observation by humans.
This dissertation addresses several topics surrounding the color processing of digital image pipelines from a practical point of view. Color processing fundamentals are discussed first to provide background for the topics that follow. A memory-color-assisted illuminant estimation algorithm is then introduced after a review of memory colors and some modeling techniques. The spectral sensitivity of the camera is required by many color constancy algorithms, but such data is often not readily available. To tackle this problem, an alternative to spectral characterization for color constancy parameter calibration is proposed. Hue control in color reproduction can be of great importance, especially where memory colors are concerned. A hue-constrained matrix optimization algorithm is introduced to address this issue, followed by a psychophysical study that systematically arrives at a recommendation for optimized preferred color reproduction. Finally, a color constancy algorithm for high dynamic range scenes observing multiple illuminants is proposed. / Graduation date: 2012
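The memory-color-assisted estimator itself is not given in the abstract. As context, the classic gray-world baseline that such illuminant estimators refine can be sketched as follows (the scene values are made up for the example):

```python
def gray_world_gains(pixels):
    """Per-channel gains under the gray-world assumption: the scene average
    is achromatic, so the mean pixel is mapped to gray via a von Kries-style
    diagonal correction. A deliberately naive baseline, not the
    memory-color-assisted estimator proposed in the dissertation."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    return [gray / m for m in means]

def apply_gains(pixels, gains):
    return [tuple(v * g for v, g in zip(p, gains)) for p in pixels]

# A neutral (gray) scene seen under a reddish illuminant (made-up values):
# every pixel is a scalar multiple of the illuminant color (0.6, 0.4, 0.3).
scene = [(0.6, 0.4, 0.3), (0.3, 0.2, 0.15), (0.9, 0.6, 0.45)]
balanced = apply_gains(scene, gray_world_gains(scene))
```

After correction each pixel comes out achromatic, i.e., the reddish cast is removed.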
|
5 |
Εκτίμηση του χρώματος αντικειμένων χρησιμοποιώντας πολλαπλές εικόνες της ίδιας σκηνής με διαφορετική θέση φωτιστικού / Estimating object color using multiple images of the same scene with different light source positions. Αθανασοπούλου, Βασιλική, 07 June 2010 (has links)
This diploma thesis deals with estimating the color of objects depicted in photographs through color constancy, that is, by separating the reflection components of light. Specifically, a method for estimating object color was applied using a database of multiple digitally processed images. The photographs in the database depict the same scene and were captured with the light source placed in a different position for each photograph. The work was largely based on articles and information gathered from the internet. The experimental results were satisfactory and showed that the algorithm achieves color constancy and color estimation with good accuracy and low computational complexity.
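The abstract does not specify the algorithm, but one reason multiple light positions help with separating reflection components can be illustrated: specular highlights move with the light source, so a robust (median) chromaticity across the differently lit images stays close to the diffuse object color. The function names and numbers below are illustrative only:

```python
import statistics

def chromaticity(rgb):
    s = sum(rgb)
    return tuple(c / s for c in rgb) if s else (0.0, 0.0, 0.0)

def robust_object_chromaticity(observations):
    """Per-channel median chromaticity of one pixel across images lit from
    different positions. Highlights contaminate only a few observations, so
    the median stays close to the diffuse (body) color. Illustrative only;
    a per-channel median is not guaranteed to be normalized in general."""
    chroms = [chromaticity(o) for o in observations]
    return tuple(statistics.median(ch[c] for ch in chroms) for c in range(3))

# One pixel of a reddish object under 5 light positions; shading varies,
# and in one image the pixel is hit by a white specular highlight.
diffuse = (0.8, 0.2, 0.1)
obs = [tuple(k * c for c in diffuse) for k in (0.5, 0.8, 1.0, 0.7)]
obs.append(tuple(c + 0.9 for c in diffuse))  # diffuse + white specular term
estimate = robust_object_chromaticity(obs)
```

The four purely diffuse observations share one chromaticity regardless of shading, so the median rejects the single highlighted observation.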
|
6 |
Generování a detekce barevných markerů pro rozšířenou realitu / Synthesis and Detection of Color Markers for Augmented Reality. Beťko, Peter, January 2013 (has links)
This master's thesis builds on research in the field of uniform marker fields for augmented reality and extends it with the ability to generate colored marker fields based on arbitrary pictures. The thesis describes the basics of augmented reality and explains the ideas from the Uniform Marker Fields paper. It presents a program for generating colored marker fields as well as a tool for detecting markers and evaluating the success rate of the detection. In addition, the work includes a study of color constancy across the processes of printing and recording. This study exceeds the scope of the thesis requirements; its findings can be used in any application that encodes information in color. Finally, the integration of these findings into the marker field generating algorithm, and the resulting improvement in detection success rate, are discussed.
|
7 |
Real-time Monocular Vision-based Tracking for Interactive Augmented Reality. Spencer, Lisa, 01 January 2006 (has links)
The need for real-time video analysis is rapidly increasing in today's world. The decreasing cost of powerful processors and the proliferation of affordable cameras, combined with needs for security, methods for searching the growing collection of video data, and an appetite for high-tech entertainment, have produced an environment where video processing is utilized for a wide variety of applications. Tracking is an element in many of these applications, for purposes like detecting anomalous behavior, classifying video clips, and measuring athletic performance. In this dissertation we focus on augmented reality, but the methods and conclusions are applicable to a wide variety of other areas. In particular, our work deals with achieving real-time performance while tracking with augmented reality systems using a minimum set of commercial hardware. We have built prototypes that use both existing technologies and new algorithms we have developed. While performance improvements would be possible with additional hardware, such as multiple cameras or parallel processors, we have concentrated on getting the most performance with the least equipment. Tracking is a broad research area, but an essential component of an augmented reality system. Tracking of some sort is needed to determine the location of scene augmentation. First, we investigated the effects of illumination on the pixel values recorded by a color video camera. We used the results to track a simple solid-colored object in our first augmented reality application. Our second augmented reality application tracks complex non-rigid objects, namely human faces. In the color experiment, we studied the effects of illumination on the color values recorded by a real camera. Human perception is important for many applications, but our focus is on the RGB values available to tracking algorithms. 
Since the lighting in most environments where video monitoring is done is close to white (e.g., fluorescent lights in an office, incandescent lights in a home, or direct and indirect sunlight outdoors), we looked at the response to "white" light sources as the intensity varied. The red, green, and blue values recorded by the camera can be converted to a number of other color spaces that have been shown, using models of the physical properties of reflection, to be invariant to various lighting conditions, including view angle, light angle, light intensity, and light color. Our experiments show how well these derived quantities actually remained constant with real materials, real lights, and real cameras, while still retaining the ability to discriminate between different colors. This color experiment enabled us to find color spaces that were more invariant to changes in illumination intensity than the ones traditionally used. The first augmented reality application tracks a solid-colored rectangle and replaces the rectangle with an image, so it appears that the subject is holding a picture instead. Tracking this simple shape is both easy and hard; easy because of the single color and the shape that can be represented by four points or four lines, and hard because there are fewer features available and the color is affected by illumination changes. Many algorithms for tracking fixed shapes do not run in real time or require rich feature sets. We have created a tracking method for simple solid-colored objects that uses color and edge information and is fast enough for real-time operation. We also demonstrate a fast deinterlacing method to avoid "tearing" of fast-moving edges when recorded by an interlaced camera, and optimization techniques that usually achieved a speedup factor of about 10 over an implementation that already used optimized image processing library routines. Human faces are complex objects that differ between individuals and undergo non-rigid transformations.
Our second augmented reality application detects faces, determines their initial pose, and then tracks changes in real time. The results are displayed as virtual objects overlaid on the real video image. We used existing algorithms for motion detection and face detection. We present a novel method for determining the initial face pose in real time using symmetry. Our face tracking uses existing point tracking methods as well as extensions to Active Appearance Models (AAMs). We also give a new method for integrating detection and tracking data and leveraging the temporal coherence in video data to mitigate the false positive detections. While many face tracking applications assume exactly one face is in the image, our techniques can handle any number of faces. The color experiment along with the two augmented reality applications provide improvements in understanding the effects of illumination intensity changes on recorded colors, as well as better real-time methods for detection and tracking of solid shapes and human faces for augmented reality. These techniques can be applied to other real-time video analysis tasks, such as surveillance and video analysis.
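One of the intensity-invariant spaces typically examined in such color experiments is rg-chromaticity: under an ideal linear camera model, scaling the light intensity scales all channels equally and leaves the chromaticity coordinates unchanged while different colors still map to different points. A sketch (not the dissertation's code):

```python
def normalized_rgb(r, g, b):
    """rg-chromaticity: a derived color space that is invariant to
    illumination intensity under an ideal linear camera model."""
    s = r + g + b
    if s == 0.0:
        return (0.0, 0.0)
    return (r / s, g / s)

# Dimming the light scales all three channels by the same factor ...
base = (120.0, 80.0, 40.0)
dimmed = tuple(0.35 * c for c in base)

# ... while a genuinely different color still maps to a different point,
# so discriminability is retained.
other = (40.0, 80.0, 120.0)
```

This is the idealized behavior; the experiment described above measures how far real materials, lights, and cameras deviate from it.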
|
8 |
Color Emotions without Blue Light : Effect of a Blue Light Filter on the Emotional Perception of Colors / Färgkänslor utan blått ljus : Effekten av ett blåljusfilter på känslomässig uppfattning av färger. Leefer van Leeuwen, Maximilian, January 2023 (has links)
Blue light filters have become commonplace in modern technology. While there has been substantial research into their effects on sleep, there has been little into their effect on the media perceived through them. This study examined whether the mechanisms responsible for adaptation to ambient light conditions would counteract this effect. A digital survey was conducted in which participants rated 30 colors on 3 emotional attributes: warmth, weight, and activity. Participants took the survey with or without a blue light filter active, and with or without external light; the external light was intended to eliminate or reduce the level of adaptation to the screen's altered colors. Comparison between the groups revealed no significant subjective difference between the test conditions. With external light, however, there was a difference in perceived warmth with and without the blue light filter. This implies that some form of adaptation is involved and that it is interfered with by qualities of the ambient light. The prevalence of these usage conditions is left to future research, as is whether the difference caused by blue light filters is large enough to design around.
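The adaptation mechanism this study probes is commonly modeled with a von Kries-style diagonal transform. The sketch below, with made-up filter attenuation factors and applied directly in RGB rather than a cone-response space, shows how full adaptation to the filtered white point would cancel the filter's effect on perceived color:

```python
def von_kries_adapt(rgb, source_white, target_white):
    """Von Kries-style diagonal adaptation: scale each channel by the ratio
    of the target and source white points. Applied in RGB for simplicity;
    the canonical model operates in a cone-response space."""
    return tuple(c * t / s for c, s, t in zip(rgb, source_white, target_white))

neutral_white = (1.0, 1.0, 1.0)
# A blue light filter modeled as a blue-attenuating white point
# (attenuation factors are made up for this sketch).
filtered_white = (1.0, 0.93, 0.75)

# A color seen through the filter ...
seen = tuple(c * f for c, f in zip((0.5, 0.6, 0.8), filtered_white))
# ... is recovered if the visual system fully adapts to the filtered white.
adapted = von_kries_adapt(seen, filtered_white, neutral_white)
```

Under this idealized model, complete adaptation would make the filter subjectively invisible; the study's warmth result with external light suggests the adaptation is only partial in practice.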
|
9 |
Réseaux de neurones convolutifs pour la segmentation sémantique et l'apprentissage d'invariants de couleur / Convolutional Neural Networks for Semantic Segmentation and Color Constancy. Fourure, Damien, 12 December 2017 (has links)
Computer vision is an interdisciplinary field that investigates how computers can gain a high-level understanding from digital images or videos. In artificial intelligence, and more precisely in machine learning, the field in which this thesis is positioned, computer vision involves extracting characteristics from images and then generalizing concepts related to these characteristics. This field of research has become very popular in recent years, largely thanks to the results of the convolutional neural networks that form the basis of so-called deep learning methods. Today, neural networks make it possible, among other things, to recognize the different objects present in an image, to generate very realistic images, or even to beat the champions at the game of Go. Their performance is not limited to the image domain: they are also used in other fields such as natural language processing (e.g., machine translation) and sound recognition. In this thesis, we study convolutional neural networks in order to develop architectures and loss functions specialized for low-level tasks (color constancy) as well as high-level tasks (semantic segmentation). Color constancy is the ability of the human visual system to perceive constant colors for a surface despite changes in the spectrum of the illumination (lighting changes). In computer vision, the main approach consists in estimating the color of the illuminant and then suppressing its impact on the perceived color of objects. We approach the task of color constancy with neural networks by developing a new architecture composed of a subsampling operator inspired by traditional methods. Our experiments show that our method obtains performance competitive with the state of the art. Nevertheless, our architecture requires a large amount of training data. To partially correct this problem and improve the training of neural networks, we present several techniques for artificial data augmentation. We also make two contributions on a high-level problem: semantic segmentation. This task, which consists of assigning a semantic class to each pixel of an image, is a challenge in computer vision because of its complexity. On the one hand, it requires many training examples whose ground truths are costly to obtain. On the other hand, it requires adapting traditional convolutional neural networks to obtain a so-called dense prediction, i.e., a prediction for each pixel of the input image. To address the difficulty of acquiring training data, we propose an approach that simultaneously exploits several databases annotated with different label sets. To do this, we define a selective loss function that allows a convolutional neural network to be trained from data drawn from multiple databases. We also develop a self-context approach that better captures the correlations between the labels of the different databases. Finally, we present our third contribution: a new convolutional neural network architecture called GridNet, specialized for semantic segmentation. Unlike traditional networks, implemented as a single path from the input (image) to the output (prediction), our architecture is implemented as a 2D grid that allows several interconnected streams to operate at different resolutions. To exploit all the paths of the grid, we propose a training technique inspired by dropout. In addition, we show empirically that our architecture generalizes many well-known state-of-the-art networks. We conclude with an analysis of the empirical results obtained with our architecture which, although trained from scratch, achieves very good performance, exceeding popular approaches that are often pre-trained.
|
10 |
Segmentation d'objets mobiles par fusion RGB-D et invariance colorimétrique / Moving Object Segmentation by RGB-D Fusion and Color Constancy. Murgia, Julian, 24 May 2016 (has links)
This PhD thesis falls within the scope of video surveillance, and more precisely focuses on the robust detection of moving objects in image sequences. In many applications, good detection of moving objects is an indispensable prerequisite for any processing applied to these objects, such as tracking people or cars, counting public transport passengers, detecting dangerous situations in specific environments (level crossings, pedestrian crossings, intersections, etc.), or controlling autonomous vehicles. Many of these applications use a computer vision system, and their reliability requires robustness against difficult conditions often caused by lighting (day/night, cast shadows), weather (rain, wind, snow), and the topology of the observed scene (occlusions). The work detailed in this thesis aims to reduce the impact of illumination conditions by improving the quality of moving object detection in indoor or outdoor environments and at any time of day. To this end, we propose three strategies that can be combined:
i) using colorimetric invariants and/or color spaces with invariant properties;
ii) using a passive stereoscopic camera and a Microsoft Kinect active camera, in addition to the color camera, to partially reconstruct the 3D environment of the scene and provide the moving object detection algorithm with an additional dimension, namely depth information, for characterizing pixels;
iii) a new fusion algorithm based on fuzzy logic that combines color and depth information while allowing a margin of uncertainty as to whether a pixel belongs to the background or to a moving object.
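The fuzzy fusion strategy can be sketched per pixel as follows; the membership shapes, thresholds, weights, and the weighted-mean aggregation are placeholders, not the operators tuned in the thesis. Note how a cast shadow (strong color change, no depth change) stays in the background:

```python
def membership(x, lo, hi):
    """Piecewise-linear 'foreground evidence' membership: 0 below lo,
    1 above hi, linear in between."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def fuse_foreground(color_diff, depth_diff, w_color=0.5, w_depth=0.5):
    """Fuzzy-style fusion of color- and depth-based evidence for one pixel.

    Each cue is mapped to a [0, 1] membership degree and the two degrees are
    aggregated by a weighted mean; the pixel is labeled foreground when the
    fused degree exceeds 0.5. All constants are illustrative placeholders.
    """
    mu_color = membership(color_diff, lo=10.0, hi=40.0)   # gray levels
    mu_depth = membership(depth_diff, lo=0.05, hi=0.30)   # meters
    degree = w_color * mu_color + w_depth * mu_depth
    return degree, degree > 0.5

# Cast shadow: strong color change but no depth change -> stays background.
shadow = fuse_foreground(color_diff=35.0, depth_diff=0.01)
# Person entering the scene: both cues agree -> foreground.
person = fuse_foreground(color_diff=50.0, depth_diff=0.80)
```

The intermediate degree (rather than a hard vote per cue) is what carries the "margin of uncertainty" mentioned in the abstract.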
|