1

Enhanced target detection in CCTV network system using colour constancy

Soori, U 02 June 2016 (has links)
The focus of this research is to study how targets can be detected more faithfully in a multi-camera CCTV network system using spectral features for the detection. The objective of the work is to develop colour constancy (CC) methodology that helps maintain the spectral features of the scene in a constant, stable state irrespective of variable illumination and camera calibration issues. Unlike previous work in the field of target detection, two versions of CC algorithms have been developed during the course of this work, both capable of maintaining colour constancy for every image pixel in the scene: 1) a method termed Enhanced Luminance Reflectance CC (ELRCC), which consists of a pixel-wise sigmoid function for adaptive dynamic range compression, and 2) the Enhanced Target Detection and Recognition Colour Constancy (ETDCC) algorithm, which employs a bidirectional pixel-wise non-linear transfer function (PWNLTF), a centre-surround luminance enhancement and a Grey Edge white balancing routine. The effectiveness of target detection for all developed CC algorithms has been validated using the multi-camera ‘Imagery Library for Intelligent Detection Systems’ (iLIDS), ‘Performance Evaluation of Tracking and Surveillance’ (PETS) and ‘Ground Truth Colour Chart’ (GTCC) datasets. It is shown that the developed CC algorithms have enhanced target detection efficiency by over 175% compared with detection without CC enhancement. The contribution of this research comprises one journal paper published in Optical Engineering together with three conference papers on the subject of this research.
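As an illustration, the pixel-wise sigmoid compression that the ELRCC stage is described as using can be sketched roughly as follows; the gain and midpoint parameters are illustrative assumptions, and the global (non-adaptive) form below is a simplification of what the thesis describes.

import numpy as np

def sigmoid_drc(luminance, gain=10.0, midpoint=0.5):
    # Hypothetical parameters: compress a normalised luminance image (values in [0, 1])
    # with a pixel-wise sigmoid, then rescale the result back to the full [0, 1] range.
    compressed = 1.0 / (1.0 + np.exp(-gain * (luminance - midpoint)))
    lo, hi = compressed.min(), compressed.max()
    return (compressed - lo) / (hi - lo + 1e-12)

# Example: enhance a synthetic low-contrast frame.
frame = np.random.uniform(0.3, 0.6, size=(480, 640))
enhanced = sigmoid_drc(frame)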
2

Enhanced target detection in CCTV network system using colour constancy

Soori, Umair January 2014 (has links)
The focus of this research is to study how targets can be detected more faithfully in a multi-camera CCTV network system using spectral features for the detection. The objective of the work is to develop colour constancy (CC) methodology that helps maintain the spectral features of the scene in a constant, stable state irrespective of variable illumination and camera calibration issues. Unlike previous work in the field of target detection, two versions of CC algorithms have been developed during the course of this work, both capable of maintaining colour constancy for every image pixel in the scene: 1) a method termed Enhanced Luminance Reflectance CC (ELRCC), which consists of a pixel-wise sigmoid function for adaptive dynamic range compression, and 2) the Enhanced Target Detection and Recognition Colour Constancy (ETDCC) algorithm, which employs a bidirectional pixel-wise non-linear transfer function (PWNLTF), a centre-surround luminance enhancement and a Grey Edge white balancing routine. The effectiveness of target detection for all developed CC algorithms has been validated using the multi-camera ‘Imagery Library for Intelligent Detection Systems’ (iLIDS), ‘Performance Evaluation of Tracking and Surveillance’ (PETS) and ‘Ground Truth Colour Chart’ (GTCC) datasets. It is shown that the developed CC algorithms have enhanced target detection efficiency by over 175% compared with detection without CC enhancement. The contribution of this research comprises one journal paper published in Optical Engineering together with three conference papers on the subject of this research.
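The Grey Edge white balancing routine mentioned in both records is a published algorithm (van de Weijer et al.); a minimal first-order sketch follows, with the Minkowski norm p and Gaussian smoothing sigma chosen as illustrative assumptions rather than the settings used in the thesis.

import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def grey_edge_white_balance(img, p=6, sigma=1.0):
    # img: float RGB array in [0, 1], shape (H, W, 3).
    # Estimate the illuminant from the Minkowski norm of first-order derivatives per channel.
    illuminant = np.zeros(3)
    for c in range(3):
        smoothed = gaussian_filter(img[..., c], sigma)
        grad = np.hypot(sobel(smoothed, axis=0), sobel(smoothed, axis=1))
        illuminant[c] = np.mean(grad ** p) ** (1.0 / p)
    illuminant /= np.linalg.norm(illuminant) + 1e-12
    # Von Kries-style correction: divide each channel by its estimated illuminant component.
    return np.clip(img / (illuminant * np.sqrt(3) + 1e-12), 0.0, 1.0)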
3

Computational framework for the white point interpretation based on nameability

Tous Terrades, Francesc 28 July 2006 (has links)
In this work we present a framework for white point estimation of images under uncalibrated conditions where multiple interpretable solutions can be considered. To this end, we propose to use a visual cue that has been shown to be related to colour constancy: colour matching. The colour matching process is guided by the introduction of semantic information regarding the image content; that is, we introduce high-level information about the colours we expect to find in the images. Combining these two ideas, colour matching and semantic information, with existing computational colour constancy approaches, we propose a white point estimation method for uncalibrated conditions which delivers multiple solutions according to different interpretations of the colours in a scene. We propose the selection of multiple solutions because it can yield more information about the scene than classical colour constancy algorithms, which normally select a unique solution. In this case, the multiple solutions are weighted by the degree of matching between the colours in the image and the semantic information introduced. Finally, we show that the feasible set of solutions can be reduced to a smaller, more significant set with a semantic interpretation.
Our study is framed within a global image annotation project which aims to obtain descriptors that depict the image; in this work we focus on descriptors of the scene illuminant. We define two different sets of conditions for this project: (a) calibrated conditions, when we have some information about the acquisition process, and (b) uncalibrated conditions, when we do not know the acquisition process. Although we have focused on the uncalibrated case, for calibrated conditions we also propose a colour constancy method which introduces a relaxed grey-world assumption to produce a reduced feasible set of solutions. This method delivers good performance, similar to that of existing methods, while reducing the size of the feasible set obtained.
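For readers unfamiliar with the grey-world assumption that the calibrated-conditions method relaxes, a minimal sketch of the strict version follows; the relaxation itself, and the semantic weighting of multiple solutions, are not reproduced here.

import numpy as np

def grey_world_white_point(img):
    # Strict grey-world assumption: the average scene colour is achromatic, so the
    # per-channel means give an estimate of the white point (returned with unit norm).
    means = img.reshape(-1, 3).mean(axis=0)
    return means / (np.linalg.norm(means) + 1e-12)

def correct_to_white_point(img, white_point):
    # Von Kries-style correction mapping the estimated white point to neutral grey.
    return np.clip(img / (white_point * np.sqrt(3) + 1e-12), 0.0, 1.0)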
4

Reflecting on a room of one reflectance

Ruppertsberg, Alexa I., Bloj, Marina January 2007 (has links)
We present a numerical analysis of rendered pairs of rooms, in which the spectral power distribution of the illuminant in one room matched the surface reflectance function in the other room, and vice versa. We ask whether a distinction between the rooms is possible and on what cues this discrimination is based. Using accurately rendered three-dimensional (3D) scenes, we found that room pairs can be distinguished based on indirect illumination, as suggested by A. L. Gilchrist and A. Jacobsen (1984). In a simulated color constancy scenario, we show that indirect illumination plays a pivotal role, as areas of indirect illumination undergo a smaller appearance change than areas of direct illumination. Our study confirms that indirect illumination can play a critical role in surface color recovery and shows how computer rendering programs, which model the light-object interaction according to the laws of physics, are valuable tools that can be used to analyze and explore what image information is available to the visual system from 3D scenes.
5

Color constancy improves for real 3D objects

Hedrich, Monika, Bloj, Marina, Ruppertsberg, Alexa I. January 2009 (has links)
In this study human color constancy was tested for two-dimensional (2D) and three-dimensional (3D) setups with real objects and lights. Four different illuminant changes, a natural selection task and a wide choice of target colors were used. We found that color constancy was better when the target color was learned as a 3D object in a cue-rich 3D scene than in a 2D setup. This improvement was independent of the target color and the illuminant change. We were not able to find any evidence that frequently experienced illuminant changes are better compensated for than unusual ones. Normalizing individual color constancy hit rates by the corresponding color memory hit rates yields a color constancy index, which is indicative of observers' true ability to compensate for illuminant changes.
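One reading of the normalisation described in the last sentence, expressed as a tiny hedged sketch (the abstract does not give the exact formula, so the plain ratio below is an assumption):

def colour_constancy_index(constancy_hit_rate, memory_hit_rate):
    # Assumed form: divide the hit rate under an illuminant change by the colour
    # memory hit rate, so that memory errors do not deflate the constancy score.
    if memory_hit_rate <= 0:
        raise ValueError("memory hit rate must be positive")
    return constancy_hit_rate / memory_hit_rate

# Example with hypothetical hit rates: 60% correct under an illuminant change,
# 80% correct in the memory baseline, giving an index of 0.75.
print(colour_constancy_index(0.60, 0.80))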
6

Video content analysis for intelligent forensics

Fraz, Muhammad January 2014 (has links)
The networks of surveillance cameras installed in public places and private territories continuously record video data with the aim of detecting and preventing unlawful activities. This enhances the importance of video content analysis applications, either for real-time (i.e. analytic) or post-event (i.e. forensic) analysis. In this thesis, the primary focus is on four key aspects of video content analysis, namely: 1. moving object detection and recognition; 2. correction of colours in video frames and recognition of the colours of moving objects; 3. make and model recognition of vehicles and identification of their type; 4. detection and recognition of text information in outdoor scenes. To address the first issue, a framework is presented in the first part of the thesis that efficiently detects and recognizes moving objects in videos. The framework targets the problem of object detection in the presence of complex background. The object detection part of the framework relies on a background modelling technique and a novel post-processing step in which the contours of the foreground regions (i.e. moving objects) are refined by classifying edge segments as belonging either to the background or to the foreground region. Further, a novel feature descriptor is devised for the classification of moving objects into humans, vehicles and background. The proposed feature descriptor captures the texture information present in the silhouette of foreground objects. To address the second issue, a framework for the correction and recognition of the true colours of objects in videos is presented, with novel noise reduction, colour enhancement and colour recognition stages. The colour recognition stage makes use of temporal information to reliably recognize the true colours of moving objects across multiple frames. The proposed framework is specifically designed to perform robustly on videos of poor quality caused by surrounding illumination, camera sensor imperfection and artefacts due to high compression. In the third part of the thesis, a framework for vehicle make and model recognition and type identification is presented. As part of this work, a novel feature representation technique for the distinctive representation of vehicle images has emerged. The feature representation technique uses dense feature description and a mid-level feature encoding scheme to capture the texture in the frontal view of the vehicles. The proposed method is insensitive to minor in-plane rotation and skew within the image. The proposed framework can be extended to any number of vehicle classes without re-training. Another important contribution of this work is the publication of a comprehensive, up-to-date dataset of vehicle images to support future research in this domain. The problem of text detection and recognition in images is addressed in the last part of the thesis. A novel technique is proposed that exploits the colour information in the image for the identification of text regions. Apart from detection, the colour information is also used to segment characters from the words. The recognition of identified characters is performed using shape features and supervised learning. Finally, a lexicon-based alignment procedure is adopted to finalize the recognition of strings present in word images. Extensive experiments have been conducted on benchmark datasets to analyse the performance of the proposed algorithms.
The results show that the proposed moving object detection and recognition technique outperformed well-known baseline techniques. The proposed framework for the correction and recognition of object colours in video frames achieved all of the aforementioned goals. The performance analysis of the vehicle make and model recognition framework on multiple datasets has shown the strength and reliability of the technique when used within various scenarios. Finally, the experimental results for the text detection and recognition framework on benchmark datasets have revealed the potential of the proposed scheme for accurate detection and recognition of text in the wild.
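As a rough illustration of the kind of background-modelling detection stage described in the first part of this abstract, the sketch below uses OpenCV's stock MOG2 subtractor as a stand-in; the thesis' own edge-segment refinement and texture descriptor are not reproduced, and the area threshold is an assumption.

import cv2

def detect_moving_objects(video_path, min_area=500):
    # Yield per-frame bounding boxes of foreground blobs (OpenCV 4.x API assumed).
    capture = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # Drop shadow pixels (marked as 127 by MOG2) and clean the mask with an opening.
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        yield [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
    capture.release()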
7

Human colour perception: a psychophysical study of human colour perception for real and computer-simulated two-dimensional and three-dimensional objects

Hedrich, Monika January 2009 (has links)
No description available.
8

Algorithms for the enhancement of dynamic range and colour constancy of digital images & video

Lluis-Gomez, Alexis L. January 2015 (has links)
One of the main objectives in digital imaging is to mimic the capabilities of the human eye, and perhaps go beyond them in certain aspects. However, the human visual system is so versatile, complex, and only partially understood that no up-to-date imaging technology has been able to reproduce its capabilities accurately. The extraordinary capabilities of the human eye thus expose a crucial shortcoming of digital imaging, since digital photography, video recording, and computer vision applications have continued to demand more realistic and accurate imaging reproduction and analytic capabilities. Over decades, researchers have tried to solve the colour constancy problem, as well as to extend the dynamic range of digital imaging devices, by proposing a number of algorithms and instrumentation approaches. Nevertheless, no unique solution has been identified; this is partially due to the wide range of computer vision applications that require colour constancy and high dynamic range imaging, and the complexity with which the human visual system achieves effective colour constancy and dynamic range capabilities. The aim of the research presented in this thesis is to enhance the overall image quality within an image signal processor of digital cameras by achieving colour constancy and extending dynamic range capabilities. This is achieved by developing a set of advanced image-processing algorithms that are robust to a number of practical challenges and feasible to implement within an image signal processor used in consumer electronics imaging devices. The experiments conducted in this research show that the proposed algorithms outperform state-of-the-art methods in the fields of dynamic range and colour constancy. Moreover, this unique set of image processing algorithms shows that, if used within an image signal processor, they enable digital camera devices to mimic the human visual system's dynamic range and colour constancy capabilities: the ultimate goal of any state-of-the-art technique or commercial imaging device.
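Two classic building blocks of the kind this abstract discusses, shown as a hedged sketch rather than the thesis' own ISP algorithms: a white-patch (max-RGB style) colour cast correction and a Reinhard-style global tone-mapping curve for dynamic range compression. The percentile and key values are illustrative assumptions.

import numpy as np

def white_patch_correction(img, percentile=99.0):
    # Scale each channel so a near-maximum value maps to white (simple colour constancy).
    scale = np.percentile(img.reshape(-1, 3), percentile, axis=0)
    return np.clip(img / (scale + 1e-12), 0.0, 1.0)

def reinhard_tone_map(luminance, key=0.18):
    # Global Reinhard operator: compress high luminance while preserving mid-tones.
    log_mean = np.exp(np.mean(np.log(luminance + 1e-6)))
    scaled = key * luminance / log_mean
    return scaled / (1.0 + scaled)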
9

Assessment of Grapevine Vigour Using Image Processing / Tillämpning av bildbehandlingsmetoder inom vinindustrin

Bjurström, Håkan, Svensson, Jon January 2002 (has links)
This Master’s thesis studies the possibility of using image processing as a tool to facilitate vine management, in particular shoot counting and assessment of the grapevine canopy. Both are areas where manual inspection is done today. The thesis presents methods of capturing images and segmenting different parts of a vine. It also presents and evaluates different approaches on how shoot counting can be done. Within canopy assessment, the emphasis is on methods to estimate canopy density. Other possible assessment areas are also discussed, such as canopy colour and measurement of canopy gaps and fruit exposure. An example of a vine assessment system is given.
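One simple way the canopy density estimation mentioned above could be approached, sketched under the assumption that foliage can be separated by an HSV green threshold; the threshold values and the function name are illustrative, not taken from the thesis.

import cv2
import numpy as np

def canopy_density(image_path, lower_hsv=(35, 40, 40), upper_hsv=(85, 255, 255)):
    # Fraction of pixels classified as green foliage; thresholds are assumed values.
    bgr = cv2.imread(image_path)
    if bgr is None:
        raise FileNotFoundError(image_path)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    return float(np.count_nonzero(mask)) / mask.size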
10

Assessment of Grapevine Vigour Using Image Processing / Tillämpning av bildbehandlingsmetoder inom vinindustrin

Bjurström, Håkan, Svensson, Jon January 2002 (has links)
This Master’s thesis studies the possibility of using image processing as a tool to facilitate vine management, in particular shoot counting and assessment of the grapevine canopy. Both are areas where manual inspection is done today. The thesis presents methods of capturing images and segmenting different parts of a vine. It also presents and evaluates different approaches on how shoot counting can be done. Within canopy assessment, the emphasis is on methods to estimate canopy density. Other possible assessment areas are also discussed, such as canopy colour and measurement of canopy gaps and fruit exposure. An example of a vine assessment system is given.
