About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Processamento e estilização de dados RGB-Z em tempo real / Real-time processing and stylization of RGB-Z data

Jesus, Alicia Isolina Pretel January 2014 (has links)
Advisor: Prof. Dr. João Paulo Gois / Master's dissertation - Universidade Federal do ABC, Graduate Program in Computer Science, 2014. / The technological development of 3D capture devices in recent years has enabled users to access 3D data easily and at low cost. In this work we are interested in processing data from cameras that simultaneously produce sequences of images (RGB channels) and the depth information of the objects that compose the scene (Z channel). Currently the most popular device for producing this type of information is the Microsoft Kinect, originally used for motion tracking in game applications. The depth information, coupled with the images, allows the production of many visual effects such as relighting, abstraction, and background segmentation, as well as modeling of the scene geometry. However, the depth sensor tends to generate noisy data, so multidimensional filters are required to stabilize the video frames. In this sense, this work develops and evaluates a set of tools for RGB-Z video processing, from filters for video stabilization to graphical effects (non-photorealistic renderings). To this end, a framework that captures and processes RGB-Z data interactively is proposed. The implementation of this framework exploits GPU programming with the OpenGL Shading Language (GLSL).
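To make the kind of multidimensional filtering mentioned above concrete: the thesis runs its filters in GLSL on the GPU, so the following CPU sketch is only a conceptual stand-in, with illustrative parameter names (`sigma_s`, `sigma_r`) that are not from the original work. A brute-force bilateral filter smooths sensor noise in a depth (Z) frame while preserving object boundaries:

```python
import numpy as np

def bilateral_depth_filter(depth, sigma_s=2.0, sigma_r=30.0, radius=2):
    """Edge-preserving smoothing of a noisy depth (Z) frame.

    Each pixel becomes a weighted average of its neighbours, where the
    weight falls off with both spatial distance (sigma_s) and depth
    difference (sigma_r), so large depth discontinuities at object
    boundaries are left intact while flat regions are denoised.
    """
    h, w = depth.shape
    out = np.zeros((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = depth[y0:y1, x0:x1].astype(np.float64)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            range_w = np.exp(-((patch - depth[y, x]) ** 2) / (2 * sigma_r ** 2))
            weights = spatial * range_w
            out[y, x] = np.sum(weights * patch) / np.sum(weights)
    return out
```

The same structure maps naturally onto a fragment shader, since each output pixel is computed independently, which is why such filters suit real-time GPU pipelines.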

Protection de vidéo comprimée par chiffrement sélectif réduit / Protection of compressed video with reduced selective encryption

Dubois, Loïc 15 November 2013 (has links)
Nowadays, videos and images are a major means of communication for professional and personal purposes. The acquisition, transmission, storage, and display of this visual data are growing exponentially, and as a consequence the confidentiality of its content has become a major problem. Selective encryption addresses this problem by ensuring visual confidentiality while encrypting only part of the data; it preserves the initial bit-rate and keeps the stream compliant with the video standard. This Ph.D. thesis proposes several selective encryption methods for the H.264/AVC video standard. Reduced selective encryption methods, based on the H.264/AVC architecture, are studied in order to find the minimum encryption ratio that is still sufficient to ensure visual confidentiality. Objective quality metrics are used to assess the visual confidentiality of encrypted videos. In addition, a new quality metric is proposed to analyze video flicker over time. Finally, a reduced selective encryption method regulated by quality metrics is studied, in order to adapt the encryption to a target level of visual confidentiality.
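The core idea of selective encryption, encrypting only a fraction of the data so that the bit-rate and format stay intact, can be sketched as follows. This is not the thesis method: the real work selects specific H.264/AVC syntax elements and would use a real cipher (e.g. AES in CTR mode), whereas this toy sketch uses an iterated-hash keystream and simply encrypts the leading fraction of an opaque payload:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream via iterated SHA-256; a placeholder for a real
    # stream cipher, NOT cryptographically sound as used here.
    out = bytearray()
    block = key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out.extend(block)
    return bytes(out[:n])

def selective_encrypt(payload: bytes, key: bytes, ratio: float) -> bytes:
    """XOR-encrypt only the first `ratio` fraction of payload bytes.

    The untouched remainder (and any headers outside `payload`) keeps the
    stream the same size and syntactically decodable; because XOR is an
    involution, the same call decrypts.
    """
    n_enc = int(len(payload) * ratio)
    ks = keystream(key, n_enc)
    enc = bytes(b ^ k for b, k in zip(payload[:n_enc], ks))
    return enc + payload[n_enc:]
```

Tuning `ratio` downward until visual confidentiality is just barely preserved mirrors, in miniature, the "minimum but sufficient encryption ratio" question the thesis studies with objective quality metrics.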

Nonlinear Interactive Source-filter Model For Voiced Speech

Koc, Turgay 01 October 2012 (has links) (PDF)
The linear source-filter model (LSFM) has been the primary model for speech processing since 1960, when G. Fant presented his acoustic theory of speech production. It assumes that the source of voiced speech sounds, the glottal flow, is independent of the filter, the vocal tract. However, acoustic simulations based on physical speech-production models show that the filter has significant effects on the source due to the nonlinear coupling between them, especially when the fundamental frequency (F0) of the source harmonics approaches the first formant frequency (F1) of the vocal tract filter. In this thesis, nonlinear interactive source-filter models (ISFMs) are proposed for voiced speech as an alternative to the linear source-filter model. The thesis has two parts. In the first part, a framework for the coupling of the source and the filter is presented. Two interactive system models are then proposed under the assumptions that the glottal flow is a quasi-steady Bernoulli flow and that the acoustics in the vocal tract are linear. In these models, the glottal area, rather than the glottal flow, is used as the source of voiced speech, and the relation between the glottal flow, the glottal area, and the vocal tract is determined by the quasi-steady Bernoulli flow equation. It is shown theoretically that the linear source-filter model is an approximation of the nonlinear models. Estimating the ISFMs' parameters from the speech signal alone is a nonlinear blind deconvolution problem, which is solved by a robust method developed from the acoustical interpretation of the systems. Experimental results show that the ISFMs reproduce the source-filter coupling effects seen in physical simulations, and that the parameter estimation method always produces stable models that outperform the LSFM. In addition, a framework for incorporating source-filter interaction into the classical source-filter model is presented. The Rosenberg source model is extended to an interactive source for voiced speech and its performance is evaluated on a large speech database. Experiments conducted on the vowels in the database show that the interactive Rosenberg model always outperforms its non-interactive version. In the second part of the thesis, the LSFM and the ISFMs are compared in a system-identification setting using not only the speech signal but also high-speed endoscopic video (HSV) of the vocal folds, with HSV and speech serving as reference input-output data for the analysis and comparison of the models. First, a new robust HSV processing algorithm is developed and applied to the HSV images to extract the glottal area. System parameters are then estimated using a modified version of the method proposed in the first part. The experimental results show that the speech signal can contain harmonics of the fundamental frequency of the glottal area beyond those contained in the glottal-area signal itself. The proposed nonlinear interactive source-filter models can generate such harmonic components and produce more realistic speech sounds than the LSFM.
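The classical LSFM baseline that the thesis improves on can be sketched in a few lines: an impulse-train glottal source, generated with no knowledge of the filter, drives a cascade of two-pole vocal-tract resonators. The 100 Hz formant bandwidth and the use of a bare impulse train (rather than, say, a Rosenberg pulse) are simplifying assumptions of this sketch, not values from the thesis:

```python
import numpy as np

def synthesize_vowel(f0, formants, fs=8000, dur=0.1):
    """Linear source-filter synthesis of a vowel-like sound.

    The source (impulse train at f0) is independent of the filter
    (a cascade of two-pole resonators, one per formant) -- exactly the
    no-interaction assumption that nonlinear ISFMs relax.
    """
    n = int(fs * dur)
    source = np.zeros(n)
    period = int(fs / f0)
    source[::period] = 1.0                    # idealised glottal pulses
    y = source.copy()
    for fmt in formants:
        r = np.exp(-np.pi * 100.0 / fs)       # pole radius from 100 Hz bandwidth
        theta = 2 * np.pi * fmt / fs          # pole angle from formant frequency
        a1, a2 = -2 * r * np.cos(theta), r * r
        out = np.zeros(n)
        for i in range(n):                    # direct-form all-pole recursion
            acc = y[i]
            if i >= 1:
                acc -= a1 * out[i - 1]
            if i >= 2:
                acc -= a2 * out[i - 2]
            out[i] = acc
        y = out
    return y
```

The interaction effect the thesis models shows up precisely where this sketch fails: when f0 approaches a formant frequency, a real vocal tract pushes back on the glottal source, which no per-formant filtering of a fixed impulse train can capture.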

Example-based Rendering of Textural Phenomena

Kwatra, Vivek 19 July 2005 (has links)
This thesis explores synthesis by example as a paradigm for rendering real-world phenomena; in particular, phenomena that can be visually described as texture. We exploit, for synthesis, the self-repeating nature of the visual elements constituting these texture exemplars. Techniques for unconstrained as well as constrained/controllable synthesis of both image and video textures are presented. For unconstrained synthesis, we present two robust techniques that can perform spatio-temporal extension, editing, and merging of image as well as video textures. In the first, large patches of input texture are automatically aligned and seamlessly stitched with each other to generate realistic-looking images and videos. The second is based on iterative optimization of a global energy function that measures the quality of the synthesized texture with respect to the given input exemplar. We also present a technique for controllable texture synthesis; in particular, it allows the generation of motion-controlled texture animations that follow a specified flow field. Animations synthesized in this fashion maintain structural properties such as the local shape, size, and orientation of the input texture even as they move according to the specified flow. We cast this problem into an optimization framework that tries to simultaneously satisfy the two (potentially competing) objectives of similarity to the input texture and consistency with the flow field; this optimization is a simple extension of the approach used for unconstrained texture synthesis. A general framework for example-based synthesis and rendering is also presented. This framework provides a design space for constructing example-based rendering algorithms whose goal is to use texture exemplars to render animations for which certain behavioral characteristics need to be controlled. Our motion-controlled texture synthesis technique is an instantiation of this framework in which the characteristic being controlled is motion, represented as a flow field.
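The patch-stitching idea described above can be illustrated with a toy, greedy variant. The thesis's actual techniques use graph-cut seam computation and global energy optimization; this sketch simply tiles the output with exemplar patches chosen to minimize sum-of-squared-differences over the already-synthesized overlap, with no seam treatment at all:

```python
import numpy as np

def synthesize_texture(exemplar, out_size, patch=8, overlap=4, seed=0):
    """Greedy patch-based texture synthesis (quilting without min-cut seams).

    The output is filled left-to-right, top-to-bottom; each new patch is
    the exemplar window whose overlap strips best match what has already
    been placed.
    """
    rng = np.random.default_rng(seed)
    H, W = exemplar.shape
    step = patch - overlap
    out = np.zeros((out_size, out_size))
    for y in range(0, out_size - patch + 1, step):
        for x in range(0, out_size - patch + 1, step):
            if y == 0 and x == 0:
                sy, sx = rng.integers(0, H - patch), rng.integers(0, W - patch)
            else:
                best, best_pos = np.inf, (0, 0)
                for sy in range(H - patch + 1):
                    for sx in range(W - patch + 1):
                        cand = exemplar[sy:sy + patch, sx:sx + patch]
                        cur = out[y:y + patch, x:x + patch]
                        err = 0.0
                        # Compare only overlap strips that are already filled.
                        if x > 0:
                            err += np.sum((cand[:, :overlap] - cur[:, :overlap]) ** 2)
                        if y > 0:
                            err += np.sum((cand[:overlap, :] - cur[:overlap, :]) ** 2)
                        if err < best:
                            best, best_pos = err, (sy, sx)
                sy, sx = best_pos
            out[y:y + patch, x:x + patch] = exemplar[sy:sy + patch, sx:sx + patch]
    return out
```

Replacing the hard overwrite of each overlap with an optimal seam (or iterating a global energy minimization over the whole output, as in the second technique) is what turns this greedy baseline into the robust methods the abstract describes.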

Rastreamento veicular em vídeos de tráfego com aplicação em contagem de veículos / Vehicle tracking in traffic videos with application to vehicle counting

Silva, Christiano Bouvié da January 2012 (has links)
This dissertation presents the development of a method, based on particle clustering, for counting vehicles in traffic videos. Such a procedure is important in intelligent traffic systems and as an auxiliary tool in urban road planning. Using image-processing and particle-clustering techniques, the proposed method exploits the coherence of motion and spatial position among particles extracted from the video frames to group them into convex shapes, which are then analyzed in search of possible vehicles. This analysis takes into account the morphology of the convex shapes and the background information of the image in order to merge or split the clusters. Once a vehicle is identified, it is tracked using color-histogram similarity computed over windows centered on the particles. Vehicles are counted at user-defined virtual loops on the lanes of interest, through the intersection of the tracked convex shapes with these virtual loops. Tests were performed on videos of six different scenes, totaling 81,000 frames, and the resulting vehicle counts were compared against two recent methods. One method (KIM, 2008) takes an approach similar to the proposed one, attempting to fit particle clusters to ellipsoidal shapes; the other (SÁNCHEZ et al., 2011) tracks connected objects that differ from the background through the intersection of these objects across adjacent frames. Over the full set of 1,085 vehicles analyzed, the absolute counting error of the proposed method fell between those of the comparative methods: 53 vehicles, against 66 for (KIM, 2008) and 27 for (SÁNCHEZ et al., 2011). The proposed method was the only one to count fewer vehicles than the true number, while the comparative methods overcounted. The proposed method missed 102 vehicles, fewer than (SÁNCHEZ et al., 2011) with 181 and practically the same as (KIM, 2008) with 101. The number of vehicles detected more than once was also lower for the proposed method: 49, against 167 for (KIM, 2008) and 208 for (SÁNCHEZ et al., 2011).
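The virtual-loop counting step lends itself to a small sketch. This is an illustrative simplification, not the dissertation's implementation: tracked vehicles are reduced to per-frame axis-aligned bounding boxes (the actual method tracks convex shapes), and a vehicle is counted the first time its track intersects a user-defined loop rectangle:

```python
def count_loop_crossings(tracks, loop):
    """Count vehicles whose tracked box intersects a virtual loop.

    tracks: {track_id: [(x0, y0, x1, y1), ...]}  per-frame boxes
    loop:   (x0, y0, x1, y1) rectangle drawn on the lane
    Each track is counted at most once, so a vehicle lingering on the
    loop for many frames is not double-counted.
    """
    def intersects(a, b):
        # Axis-aligned rectangles overlap unless separated on some axis.
        return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

    counted = set()
    for tid, boxes in tracks.items():
        if any(intersects(box, loop) for box in boxes):
            counted.add(tid)
    return len(counted)
```

The two error modes reported in the abstract map directly onto this sketch: a missed vehicle is a track that never forms, and a double detection is one physical vehicle split into two track IDs, both of which then cross the loop.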

Desenvolvimento e implementação de instrumentação eletrônica para criação de estímulos visuais para experimentos com o duto óptico da mosca / High-performance visual stimulation system for use in neuroscience experiments with the blowfly

Mario Alexandre Gazziro 23 September 2009 (has links)
This thesis describes the development of visual stimulus generators for use in neuroscience experiments with invertebrates such as flies. The experiment consists of displaying a fixed image that is moved horizontally according to the received stimulus data. The system is capable of displaying 640x480 pixels with 256 intensity levels at 200 frames per second on conventional raster monitors. It is based on reconfigurable hardware (an FPGA) and includes the logic to generate the video timings and synchronization signals as well as the video memory. Special control logic was included to update the horizontal image offset according to the desired stimulus data at 200 frames per second. In one of the developed generators, in order to double the horizontal positioning resolution, artificial inter-pixel steps were implemented using two video frame buffers containing, respectively, the odd and the even pixels of the original image to be displayed. This implementation produces a visual effect capable of doubling the horizontal positioning capability of the generator.
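The odd/even frame-buffer trick can be illustrated with a software analogy. The real mechanism lives in FPGA scan-out logic; this sketch only shows the data organization it implies: split a scanline into odd- and even-pixel buffers, then, for an offset expressed in half-pixel units, pick which buffer to scan out and the whole-pixel shift to apply (both helper names are inventions of this sketch):

```python
def split_interpixel_buffers(image_row):
    """Split a scanline into the two frame buffers described in the
    abstract: one holding pixels at even columns, one at odd columns."""
    return list(image_row[0::2]), list(image_row[1::2])

def select_buffer(offset_half_pixels):
    """Map a horizontal offset in half-pixel units onto hardware actions.

    Even offsets use the even-column buffer with a whole-pixel shift;
    odd offsets switch to the odd-column buffer, which sits half a pixel
    over, faking twice the horizontal positioning resolution.
    """
    whole, frac = divmod(offset_half_pixels, 2)
    return ("odd" if frac else "even"), whole
```

The point is that no extra monitor resolution is needed: alternating between two pre-split buffers yields intermediate apparent positions.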

RFID Tag : RFID tag positioning and identification by using infrared and visual wavelength

Gülerman, Ender January 2012 (has links)
This thesis project develops an active Radio Frequency Identification (RFID) tag that uses an unusual method for positioning and ID detection. Rather than classical positioning methods such as triangulation or radio maps, infrared light and a camera with an infrared filter are used for positioning, and tag identification is performed by image analysis of the camera images. When a specific part is requested from the warehouse, it is addressed through the active RFID system and the tag attached to that part starts to blink its tag ID. A ceiling-mounted camera with an infrared filter above the goods finds the blinking infrared LED, determines the tag's position by image analysis, and checks the decoded ID against the requested ID number. An additional LED emitting visible light ensures the tag can also be seen by the forklift driver in the warehouse when he is close to the part. First, related work and scientific papers, mostly from the IEEE database, were examined; these were instrumental in shaping the project. Given the low-power requirements and the demands on the tag, its components were chosen: an infrared LED, a visible-light LED, transistors for the LED amplifier stage, and an LDO (low-dropout) voltage regulator. The necessary technical quantities, such as gain and power consumption, were calculated. The RFID tag was built from these components and transferred into the software environment: the schematic was drawn, footprints were created for each component, and case styles were chosen for transferring the circuit into the layout environment. For the radio circuit used for communication between the server and the tag, the PCB's transmission-line requirements were examined and the necessary impedance-matching calculations were made to prevent signal degradation. After preparation of the PCB, Gerber files were sent for manufacturing and the hardware was completed. The components were mounted, the LED's blink interval was set according to the camera's usable frame rate, and the relevant tests for ID detection and positioning were carried out (see fig. 1). After optimizing the blink interval for ID recognition, an algorithm for positioning the RFID tag and a related ID-detection algorithm were developed for real-time applications using a camera. As a result of this project, instead of complex positioning systems such as triangulation or building a radio map with multiple readers, a simple alternative solution is produced. The efficiency of the system, the distance over which positioning works, and the practicality of the system are examined.
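The ID-decoding side of such a system can be sketched simply. This is a plausible reconstruction, not the thesis's algorithm: assume the tag holds each bit of its ID for a fixed number of camera frames (the "blink interval set to the camera's frame rate" mentioned above), sample the LED's brightness at its detected image position each frame, and recover bits by majority vote:

```python
def decode_blink_id(samples, frames_per_bit, threshold=0.5):
    """Recover a tag ID from the brightness of a blinking LED over time.

    samples: per-frame brightness at the LED's detected image position.
    Each ID bit is held for `frames_per_bit` frames; a majority vote over
    those frames makes the decoding robust to the occasional frame where
    the exposure straddles an on/off transition.
    """
    bits = []
    for i in range(0, len(samples) - frames_per_bit + 1, frames_per_bit):
        chunk = samples[i:i + frames_per_bit]
        on = sum(1 for s in chunk if s > threshold)
        bits.append(1 if on * 2 > len(chunk) else 0)
    return bits
```

The decoded bit string is then compared with the requested ID number, which is the confirmation step the text describes.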

Automatic Stereoscopic 3D Chroma-Key Matting Using Perceptual Analysis and Prediction

Yin, Ling January 2014 (has links)
This research presents a novel framework for automatic chroma keying, with optimizations for real-time and stereoscopic 3D processing. It first simulates the process by which human perception isolates foreground elements in a given scene through perceptual analysis, and then predicts foreground colours and the alpha map from the analysis results and a restored clean background plate, rather than by direct sampling. In addition, an object-level depth map is generated through stereo matching on a carefully determined feature map. Three prototypes based on the proposed framework have been implemented on different platforms according to their hardware capabilities. To achieve real-time performance, the entire procedure is optimized for parallel processing and data paths on the GPU, as well as for heterogeneous computing between the GPU and CPU. Qualitative comparisons between results generated by the proposed algorithm and existing algorithms show that the proposed one produces more acceptable alpha maps and foreground colours, especially in regions containing translucency and fine detail. Quantitative evaluations also validate its advantages in both quality and speed.
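For context, the direct-sampling baseline that this research improves on can be sketched in a few lines: compute each pixel's alpha from its colour distance to the key colour, with a soft ramp for translucent edges. The key colour, tolerance, and ramp width below are illustrative parameters of this sketch, not values from the thesis, whose perceptual analysis and background-plate prediction replace exactly this kind of hard-coded rule:

```python
import numpy as np

def chroma_key_alpha(rgb, key=(0.0, 1.0, 0.0), tol=0.3, soft=0.2):
    """Per-pixel alpha matte from distance to the key colour.

    Pixels within `tol` of the key colour are fully transparent (alpha 0),
    pixels farther than `tol + soft` fully opaque (alpha 1), with a linear
    ramp in between to keep translucent edges partially visible.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    dist = np.linalg.norm(rgb - np.asarray(key, dtype=np.float64), axis=-1)
    return np.clip((dist - tol) / soft, 0.0, 1.0)
```

Because the computation is independent per pixel, it parallelizes trivially on a GPU; its weakness, which motivates the perceptual approach, is that green spill, shadows on the backing, and fine translucent detail all confound a purely colour-distance rule.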
