  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Desenvolvimento e implementação de instrumentação eletrônica para criação de estímulos visuais para experimentos com o duto óptico da mosca / High-performance visual stimulation system for use in neuroscience experiments with the blowfly

Gazziro, Mario Alexandre 23 September 2009 (has links)
This thesis describes the development of visual stimulus generators for use in neuroscience experiments with invertebrates such as flies. The experiment consists of displaying a fixed image that is moved horizontally according to the received stimulus data. The system can display 640x480 pixels with 256 intensity levels at 200 frames per second on conventional raster monitors. It is based on reconfigurable hardware (FPGA) and includes the logic that generates the video timings and synchronization signals, as well as the video memory. Special control logic updates the horizontal image offset according to the desired stimulus data at 200 frames per second. In one of the developed generators, in order to double the horizontal positioning resolution, artificial inter-pixel steps were implemented using two video frame buffers containing, respectively, the odd and the even pixels of the original image to be displayed. This implementation produces a visual effect that doubles the horizontal positioning capability of the generator.
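The odd/even frame-buffer trick described above can be sketched in software as follows. This is a minimal NumPy illustration of the idea only, not the FPGA implementation; the function names and the roll-based offset are assumptions made for the sketch:

```python
import numpy as np

def split_buffers(image):
    """Split the image columns into two frame buffers: even and odd pixels."""
    return image[:, 0::2], image[:, 1::2]

def render(even_buf, odd_buf, half_pixel_pos):
    """Return the frame buffer content for a given half-pixel position.

    Even positions display the even-pixel buffer and odd positions the
    odd-pixel buffer, so switching between the two buffers realizes an
    artificial half-pixel horizontal step between whole-pixel offsets.
    """
    offset, parity = divmod(half_pixel_pos, 2)
    buf = even_buf if parity == 0 else odd_buf
    return np.roll(buf, offset, axis=1)
```

Stepping `half_pixel_pos` by one alternates buffers, which is what doubles the positioning resolution relative to a single buffer shifted by whole pixels.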
12

Wireless video sensor network and its applications in digital zoo

Karlsson, Johannes January 2010 (has links)
Most computing and communication devices to date have been personal computers connected to the Internet through a fixed network connection. Future communication devices are not expected to be of this type; instead, intelligence and communication capability will move into the various objects that surround us. This is often referred to as the "Internet of Things" or the "Wireless Embedded Internet". This thesis deals with video processing and communication in these types of systems.

One application scenario addressed in this thesis is real-time video transmission over wireless ad-hoc networks, in which a set of devices automatically forms a network and starts to communicate without any pre-existing infrastructure. These devices act as both hosts and routers and can build large networks in which they forward information for each other. We have identified two major problems when sending real-time video over wireless ad-hoc networks. The first is the reactive design used by most ad-hoc routing protocols: when nodes move, links on the communication path between sender and receiver may disappear. Reactive routing protocols wait until a link on the path breaks and only then search for a new path, which leads to long interruptions in packet delivery and does not work well for real-time video transmission. Instead, we propose a proactive approach that identifies when a route is about to break and starts to search for a new route before this happens. The second problem is that video codecs are very sensitive to packet loss, while wireless ad-hoc networks are error-prone. The most common way to handle lost packets in video codecs is to periodically insert frames that are not predictively coded; this corrects errors periodically whether or not an error has actually occurred. The method we propose instead inserts a frame that is not predictively coded directly after a packet has been lost, and only if a packet has been lost.

Another area dealt with in this thesis is video sensor networks: small devices with communication and computational capacity, equipped with an image sensor so that they can capture video. Since these devices generally have very limited resources in terms of energy, computation, communication and memory, they place heavy demands on the video compression algorithms used. In standard video compression algorithms, complexity is high in the encoder, while the decoder has low complexity and is passively controlled by the encoder. We propose video compression algorithms for wireless video sensor networks in which encoder complexity is reduced by moving some of the image analysis to the decoder side, and we have implemented our approach on actual low-power sensor nodes to test the developed algorithms.

Finally, we have built a "Digital Zoo", a complete system including a large-scale outdoor video sensor network. The goal is to use the data collected from the video sensor network to create new experiences for physical visitors in the zoo, or for "cyber" visitors at home. Several topics related to practical deployments of sensor networks are addressed here.
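The contrast between conventional periodic refresh and the loss-triggered intra-frame scheme described above can be sketched as a small decision function. This is a simplified illustration of the policy only; the function name and the loss-feedback mechanism are assumptions:

```python
def choose_frame_type(frame_index, loss_reported, period=None):
    """Decide whether the next frame is intra (I) or predictively (P) coded.

    Periodic refresh inserts an I-frame every `period` frames regardless
    of errors; the loss-triggered scheme codes an I-frame only when the
    receiver has reported a lost packet.
    """
    if loss_reported:
        return "I"  # repair immediately after a reported loss
    if period is not None and frame_index % period == 0:
        return "I"  # conventional periodic refresh
    return "P"      # predict from the previous frame
```

With `period=None`, intra frames are spent only where they are actually needed, which is the bandwidth advantage claimed for the loss-triggered approach.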
14

Renderização interativa de câmeras virtuais a partir da integração de múltiplas câmeras esparsas por meio de homografias e decomposições planares da cena / Interactive virtual camera rendering from multiple sparse cameras using homographies and planar scene decompositions

Jeferson Rodrigues da Silva 10 February 2010 (has links)
Image-based rendering techniques allow the synthesis of novel scene views from a set of images of the scene, acquired from different viewpoints. By extending these techniques to make use of videos, we can allow navigation in time and space of a scene acquired by multiple cameras. In this work, we tackle the problem of generating novel photorealistic views of dynamic scenes, containing independently moving objects, from videos acquired by multiple cameras with different viewpoints. The challenges include fusing the images from the multiple cameras while minimizing the brightness and color differences between them, detecting and extracting the moving objects, and rendering novel views that combine a static scene model with approximate models of the moving objects. It is also important to be able to generate novel views at interactive frame rates, so that a user can navigate the rendered scene naturally. The applications of these techniques are diverse and include entertainment, such as interactive digital television that allows the user to choose the viewpoint while watching movies or sports events, and virtual-reality training simulations, where realistic scenes reconstructed from real scenes are important. We present a color calibration algorithm that minimizes the color and brightness differences between images acquired from cameras whose colors were not calibrated. We also describe a method for interactive novel-view rendering of dynamic scenes that produces novel views of similar quality to the original scene videos.
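The general idea behind calibrating colors between uncalibrated cameras can be sketched as a per-channel linear fit on corresponding pixel samples. This is a minimal illustration of the concept under simplified assumptions (a gain/offset model per channel), not the algorithm developed in the thesis:

```python
import numpy as np

def fit_channel_map(src, ref):
    """Fit per-channel gain and offset mapping src colors onto ref.

    src, ref: (N, 3) arrays of corresponding pixel colors observed by
    two cameras, e.g. samples taken from an overlapping scene region.
    Returns a (3, 2) array of [gain, offset] per color channel.
    """
    params = []
    for c in range(3):
        A = np.stack([src[:, c], np.ones(len(src))], axis=1)
        gain_offset, *_ = np.linalg.lstsq(A, ref[:, c], rcond=None)
        params.append(gain_offset)
    return np.array(params)

def apply_channel_map(image, params):
    """Apply the fitted gain/offset to every pixel of an (H, W, 3) image."""
    out = image.astype(float).copy()
    for c in range(3):
        out[..., c] = params[c, 0] * out[..., c] + params[c, 1]
    return np.clip(out, 0, 255)
```

A linear model cannot capture nonlinear camera response curves, which is one reason a dedicated calibration algorithm, as proposed in the thesis, is needed in practice.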
15

On-line Analýza Dat s Využitím Vizuálních Slovníků / On-line Data Analysis Based on Visual Codebooks

Beran, Vítězslav Unknown Date (has links)
This work introduces a new adaptable method for on-line, real-time video retrieval based on visual codebooks. The new method focuses on low computational cost and high retrieval accuracy in on-line use. It builds on techniques used with static visual codebooks, adapting these common techniques so that they can adjust to changing data. The elements of the new method that make this possible are a dynamic inverse document frequency, an adaptable visual codebook, and a dynamic inverted index. The proposed approach was evaluated on a video retrieval task, and the presented results show how the adaptable method behaves compared with the static approach. The new adaptable method is based on a sliding-window concept that defines how data are selected for adaptation and processing. Together with this concept, a mathematical framework is defined that makes it possible to evaluate how best to use the concept for different video processing methods. The adaptable method is of practical use in video processing systems where a change in the character of the visual data is expected, or where the character of the visual data is not known in advance.
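The dynamic inverse document frequency over a sliding window can be sketched as follows, treating each frame's set of detected visual words as one "document". This is a minimal illustration of the sliding-window idea only; the class and its smoothing formula are assumptions, not the thesis's exact formulation:

```python
import math
from collections import deque, Counter

class SlidingWindowIDF:
    """Dynamic inverse document frequency over a sliding window of frames.

    As new frames arrive, the oldest leave the window, so the IDF
    weights track the changing character of the video stream instead
    of staying fixed as with a static codebook.
    """

    def __init__(self, window_size):
        self.window = deque(maxlen=window_size)
        self.doc_freq = Counter()

    def add_frame(self, visual_words):
        words = set(visual_words)
        if len(self.window) == self.window.maxlen:
            # The oldest frame is about to be evicted: retire its counts.
            for w in self.window[0]:
                self.doc_freq[w] -= 1
        self.window.append(words)  # deque evicts the oldest automatically
        for w in words:
            self.doc_freq[w] += 1

    def idf(self, word):
        n = len(self.window)
        return math.log((1 + n) / (1 + self.doc_freq[word]))
```

Words that appear in every recent frame get a weight near zero, while words rare within the current window score highly, which is the behavior an adaptive retrieval index needs.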
16

Real-Time Mobile Video Compression and Streaming: Live Video from Mobile Devices over Cell Phone Networks

Uti, Ngozi V. 19 September 2011 (has links)
No description available.
17

QoS provisioning for adaptive video streaming over P2P networks / Transport adaptatif et contrôle de la qualité des services vidéo sur les réseaux pair-à-pair

Mushtaq, Mubashar 12 December 2008 (has links)
There is an increasing demand for scalable deployment of real-time multimedia streaming applications over the Internet. In this context, Peer-to-Peer (P2P) networks play an important role in supporting robust, large-scale transmission of multimedia content to heterogeneous clients. However, deploying real-time video streaming applications over P2P networks raises many challenges due to the heterogeneity of terminals and access networks, the dynamic behavior of peers, and other problems inherited from IP networks. Real-time streaming applications are very sensitive to packet loss, jitter and transmission delay, and the available end-to-end bandwidth; these factors are central to QoS provisioning and require extra consideration for smooth delivery of video streams over P2P networks. In addition, P2P applications lack awareness when constructing their overlay topologies and have no explicit interaction with service and network providers. This leads to inefficient utilization of network resources and may violate peering agreements between providers. The aim of this thesis is to analyze these issues and to propose an adaptive real-time transport mechanism for QoS provisioning of Scalable Video Coding (SVC) applications over P2P networks. Our contributions in this dissertation are threefold. First, we propose a hybrid overlay organization mechanism that organizes sender peers intelligently based on network awareness, media awareness, and quality awareness. This overlay organization is then used to select the best sender peers and to switch peers efficiently, ensuring smooth video delivery when a sender peer is no longer reliable. Second, we propose a video packet scheduling mechanism that assigns different parts of the video content to specific peers. Third, we present a service-provider-driven P2P network framework that enables effective interaction between service/network providers and P2P applications to perform QoS provisioning for video streaming.
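One way to picture sender-peer selection combining network, media, and quality awareness is a weighted scoring function over candidate peers. This is a hypothetical sketch to illustrate the idea; the metrics, weights, and function names are assumptions, not the thesis's actual mechanism:

```python
def score_peer(peer, w_net=0.5, w_media=0.3, w_quality=0.2):
    """Combine network, media and quality awareness into one ranking score.

    peer: dict with metrics normalized to [0, 1]:
      'bandwidth'   - measured upload capacity toward the receiver (network)
      'has_layers'  - fraction of the requested SVC layers the peer holds (media)
      'reliability' - observed fraction of chunks delivered on time (quality)
    The weights are illustrative only.
    """
    return (w_net * peer["bandwidth"]
            + w_media * peer["has_layers"]
            + w_quality * peer["reliability"])

def select_senders(peers, k):
    """Pick the k best sender peers; re-run when a sender becomes unreliable."""
    return sorted(peers, key=score_peer, reverse=True)[:k]
```

Re-evaluating the ranking as measurements change is what enables the smooth peer switching described above when a sender is no longer reliable.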
18

Semi-synchronous video for deaf telephony with an adapted synchronous codec

Ma, Zhenyu January 2009 (has links)
Masters of Science / Communication tools such as text-based instant messaging, voice and video relay services, real-time video chat, and mobile SMS and MMS have been used successfully among Deaf people. Several years of field research with a local Deaf community revealed that disadvantaged South African Deaf people preferred to communicate with both Deaf and hearing peers in South African Sign Language rather than in text. Synchronous video chat and video relay services provided such opportunities. Both types of service are commonly available in developed regions, but not in developing countries like South Africa. This thesis reports on a workaround approach to design and develop an asynchronous video communication tool that adapted synchronous video codecs to store-and-forward video delivery. This novel asynchronous video tool provided high-quality South African Sign Language video chat at the expense of some additional latency. Adapting a synchronous video codec consisted of comparing codecs and choosing one to optimize so as to minimize latency and preserve video quality. Traditional quality-of-service metrics only address real-time video quality and related services; no such standard existed for asynchronous video communication. Therefore, we also enhanced traditional objective video quality metrics with subjective assessment metrics gathered with the local Deaf community.
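The store-and-forward delivery model underlying the tool can be pictured as a mailbox for whole encoded clips, in contrast to a synchronous stream with per-frame deadlines. This is a hypothetical sketch of the delivery concept only; the class and method names are assumptions:

```python
from collections import deque

class StoreAndForwardChannel:
    """Minimal sketch of store-and-forward delivery of encoded video clips.

    A synchronous codec streams frames under real-time deadlines; here
    each recorded clip is stored whole and forwarded when the recipient
    retrieves it, trading extra latency for preserved video quality.
    """

    def __init__(self):
        self.mailbox = deque()

    def send(self, encoded_clip):
        self.mailbox.append(encoded_clip)  # store: no real-time deadline

    def receive(self):
        """Return the oldest waiting clip, or None if the mailbox is empty."""
        return self.mailbox.popleft() if self.mailbox else None
```

Because clips arrive complete, the decoder never has to conceal mid-stream packet loss, which is why the approach can keep sign-language video intelligible at the cost of latency.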
19

Robot s autonomním audio-vizuálním řízením / Robot with autonomous audio-video control

Dvořáček, Štěpán January 2019 (has links)
This thesis describes the design and realization of a mobile robot with autonomous audio-visual control. The robot is capable of moving based on sensor input from a camera and a microphone. The mechanical part consists of components produced with 3D printing technology and omnidirectional Mecanum wheels. The software uses the OpenCV library for image processing and computes MFCCs and DTW for voice command detection.
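The DTW step of voice-command detection can be sketched as the classic dynamic-programming recurrence over two MFCC feature sequences: a spoken utterance is matched to the command template with the smallest DTW distance. This is a textbook sketch of the technique, not code from the thesis:

```python
import math

def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences.

    a, b: lists of MFCC feature vectors (lists of floats). Warping lets
    the same command spoken faster or slower still match its template.
    """
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(a[i - 1], b[j - 1])      # local frame distance
            cost[i][j] = d + min(cost[i - 1][j],    # insertion
                                 cost[i][j - 1],    # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

Recognition then reduces to `min(templates, key=lambda t: dtw_distance(utterance, t))` over the stored command templates.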
