41

Map building for mobile robot navigation with omnidirectional stereo vision

Deccó, Cláudia Cristina Ghirardello 23 April 2004
The problem of mobile robot navigation has been studied for many years, aiming to build a robot with a high degree of autonomy. The increase in autonomy of a mobile robot is related to its capacity to acquire information and to automate tasks such as environment map building. Vision has been widely used for this purpose because of the great amount of information contained in an image. Moreover, catadioptric omnidirectional sensors capture visual information in a single 360° image, eliminating the need to move the camera toward directions of interest for the robot's task. Environment maps can be built to implement more autonomous navigation strategies. In this work a methodology is developed for building navigation maps, which represent the geometry of the environment. The map contains the information acquired by a stereo omnidirectional catadioptric sensor built from a camera and a hyperbolic mirror. For map building, the alignment, registration and integration processes are performed using metrics of angular difference and distance between points. A global map of the environment is created from the fusion of the local maps. The method developed here for global map building can be coupled with path-planning, self-localization and free-space estimation algorithms, so that autonomous robot navigation can be achieved.
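The alignment and correspondence step described above can be pictured as nearest-neighbour matching under a combined distance-and-bearing criterion. Below is a minimal Python sketch of that idea; the thresholds and the exact metric are illustrative assumptions, not the thesis's values.

```python
import numpy as np

def correspond(points_a, points_b, max_dist=0.2, max_angle=np.deg2rad(10)):
    """Greedy nearest-neighbour matching of 2D map points.

    Combines Euclidean distance with bearing difference (both thresholds
    are illustrative); points are expressed in the sensor frame.
    """
    points_a, points_b = np.asarray(points_a), np.asarray(points_b)
    bearings_b = np.arctan2(points_b[:, 1], points_b[:, 0])
    matches = []
    for i, p in enumerate(points_a):
        d = np.linalg.norm(points_b - p, axis=1)
        ang = np.abs(bearings_b - np.arctan2(p[1], p[0]))
        ang = np.minimum(ang, 2 * np.pi - ang)        # wrap to [0, pi]
        ok = (d < max_dist) & (ang < max_angle)
        if ok.any():
            matches.append((i, int(np.argmin(np.where(ok, d, np.inf)))))
    return matches
```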
42

On-Vehicle Planar Antenna Designs for Wireless Communications

Liu, Yung-Tao 16 May 2005
In this dissertation, several novel low-cost planar antenna designs are presented for on-vehicle applications. Promising planar antennas with low-profile configurations that exhibit the desired broadside and omnidirectional radiation patterns are demonstrated, and studies on controlling the radiation patterns are conducted. Details of the measured and simulated results of the studied antennas are presented and discussed.
43

Parameter Extraction And Image Enhancement For Catadioptric Omnidirectional Cameras

Bastanlar, Yalin 01 April 2005
In this thesis, catadioptric omnidirectional imaging systems are analyzed in detail. Omnidirectional image (ODI) formation characteristics of different camera-mirror configurations are examined, and the geometrical relations for panoramic and perspective image generation with common mirror types are summarized. A method is developed to determine the unknown parameters of a system with a hyperboloidal mirror, using the world coordinates of a set of points and their corresponding image points on the ODI. A linear relation between the parameters of the hyperboloidal mirror is determined as well. The research and findings are instrumental for calibrating such imaging systems. The resolution problem caused by up-sampling when transferring pixels from the ODI to the panoramic image is defined. The enhancing effects of standard interpolation methods on panoramic images are analyzed, and edge-detection-based techniques are developed to improve the resolution quality of the panoramic images. Alternative projection surfaces for generating panoramic images are also evaluated.
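The up-sampling the thesis refers to happens when the circular ODI is unwrapped into a rectangular panorama: outer-rim pixels are stretched over a fixed-width row. A minimal sketch of that unwrapping with bilinear interpolation follows; the image centre, annulus radii and output width are assumed inputs, and the annulus is assumed to lie fully inside the image.

```python
import numpy as np

def unwrap_panorama(odi, center, r_in, r_out, width=1024):
    """Unwrap the annulus of a circular ODI into a panoramic strip.

    Bilinear interpolation fills in the up-sampled pixels; this is the
    step whose resolution loss motivates the edge-based methods.
    """
    cx, cy = center
    height = r_out - r_in
    pano = np.zeros((height, width) + odi.shape[2:], dtype=np.float32)
    for v in range(height):
        r = r_out - v                         # top row = outer mirror rim
        for u in range(width):
            theta = 2.0 * np.pi * u / width
            x, y = cx + r * np.cos(theta), cy + r * np.sin(theta)
            x0, y0 = int(x), int(y)
            fx, fy = x - x0, y - y0
            # blend the four neighbouring ODI pixels
            pano[v, u] = ((1 - fx) * (1 - fy) * odi[y0, x0]
                          + fx * (1 - fy) * odi[y0, x0 + 1]
                          + (1 - fx) * fy * odi[y0 + 1, x0]
                          + fx * fy * odi[y0 + 1, x0 + 1])
    return pano
```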
44

Visual odometry: comparing a stereo and a multi-camera approach

Ana Rita Pereira 25 July 2017
The purpose of this project is to implement, analyze and compare visual odometry approaches to support the localization task in autonomous vehicles. The stereo visual odometry algorithm Libviso2 is compared with a proposed omnidirectional multi-camera approach. The proposed method performs monocular visual odometry on each camera individually and selects the best estimate through a voting scheme involving all cameras. The omnidirectionality of the vision system allows the part of the surroundings richest in features to be used in the relative pose estimation. Experiments are carried out using Bumblebee XB3 and Ladybug 2 cameras fixed on the roof of a vehicle. The voting process of the proposed omnidirectional multi-camera method yields some improvement over the individual monocular estimates; however, stereo visual odometry provides considerably more accurate results.
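One plausible reading of the voting scheme is sketched below: each camera contributes a motion estimate, and the estimate that disagrees least with all the others wins. The disagreement measure and the (yaw rate, speed) parametrization are assumptions for illustration; the thesis's actual criterion may differ.

```python
import numpy as np

def vote_best_estimate(motions):
    """Return the index of the camera whose (yaw_rate, speed) estimate
    disagrees least with all the others (an assumed consensus criterion)."""
    m = np.asarray(motions, dtype=float)          # shape: (n_cameras, 2)
    cost = np.abs(m[:, None, :] - m[None, :, :]).sum(axis=(1, 2))
    return int(np.argmin(cost))

# e.g. five monocular estimates, one grossly wrong (index 2)
print(vote_best_estimate([(0.10, 1.9), (0.11, 2.0), (0.60, 0.3),
                          (0.09, 2.1), (0.12, 2.0)]))
```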
45

Characterization of Energy and Performance Bottlenecks in an Omni-directional Camera System

January 2018
Generating real-world content for VR is challenging in terms of capturing and processing at high resolution and high frame rates. The content needs to provide a truly immersive experience, in which the user can look around in a 360-degree view and perceive the depth of the scene. Existing solutions capture locally and offload the compute load to a server, but offloading large amounts of raw camera feeds incurs long latencies and is problematic for real-time applications. By capturing and computing on the edge, the systems can be closely integrated and optimized for low latency. However, moving traditional stitching algorithms to a battery-constrained device requires at least a three-orders-of-magnitude reduction in power. We believe that close integration of the capture and compute stages will reduce overall system power. We approach the problem by building a hardware prototype and characterizing the end-to-end power and performance bottlenecks of the system. The prototype has six IMX274 cameras and uses an Nvidia Jetson TX2 development board for capture and computation. We found that capture is bottlenecked by sensor power and data rates across interfaces, whereas compute is limited by the total number of computations per frame. Our characterization shows that redundant capture and redundant computation lead to high power, a huge memory footprint, and high latency. Existing systems lack hardware-software co-design, leading to excessive data transfers across interfaces and expensive computations within the individual subsystems. Finally, we propose mechanisms to optimize the system for low power and low latency, emphasizing co-design of the subsystems to reduce and reuse data. For example, reusing the motion vectors from the ISP stage reduces the memory footprint of the stereo correspondence stage. Our estimates show that pipelining and parallelization on a custom FPGA can achieve real-time stitching. / Masters Thesis, Electrical Engineering, 2018
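A back-of-envelope calculation illustrates why interface data rates bottleneck capture. The sensor count matches the prototype; the resolution, frame rate and bit depth below are assumed for illustration, not taken from the thesis.

```python
# Raw capture data rate for a six-camera rig (sensor count from the
# prototype; 4K at 30 fps and 10-bit RAW are assumed for illustration).
cams, w, h, fps, bits_px = 6, 3840, 2160, 30, 10

pixels_per_s = cams * w * h * fps                  # ~1.49e9 pixels/s
gbits_per_s = pixels_per_s * bits_px / 1e9         # ~14.9 Gbit/s

print(f"{gbits_per_s:.1f} Gbit/s of raw pixels before any stitching")
```

At roughly 15 Gbit/s of raw pixels before any stitching, avoiding redundant transfers across the camera and memory interfaces matters as much as reducing the computation itself.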
46

Simultaneous localization and map building for mobile robots with omnidirectional stereo vision

Paulo Roberto Godoi de Oliveira 14 April 2008
This project develops a system for self-localization and environment map building for mobile robots in a structured environment, i.e., one that can be described by geometric primitives. The map is built from images acquired by an omnidirectional stereo vision system based on a double-lobed hyperbolic mirror. From a single acquired image, stereo vision algorithms reconstruct the environment around the robot in three dimensions and compute the distances from objects in the environment to the vision system. The environment map is created by matching the reconstructions of several images taken at different positions. Besides the global map, the system also computes the localization of the robot in the environment, using information obtained from the matching of the sequence of image reconstructions together with the robot's odometry. The map building and localization system is tested in both a virtual and a real environment. The results obtained for both global map building and robot localization show that the system acquires information with the accuracy necessary for mobile robot navigation. The computational time required to reconstruct the images, compute the robot's position and create the global map allows the system to be used in applications that need a global map generated within a few seconds. This project grew out of a scientific initiation project supported by FAPESP, which was published as a graduation work (Oliveira, 2005).
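The double-lobed mirror gives two vertically separated virtual viewpoints in a single image, so depth can be triangulated from the two elevation angles under which each scene point is seen. The sketch below uses a simplified two-viewpoint geometry with a known vertical baseline; the thesis's actual hyperbolic-mirror model is more involved.

```python
import numpy as np

def range_from_double_lobe(elev_lower, elev_upper, baseline):
    """Horizontal range of a point seen from both lobes of the mirror.

    The lobes act as two virtual viewpoints separated vertically by
    `baseline` (metres); `elev_lower`/`elev_upper` are the elevation
    angles (radians) of the same point seen from the lower and upper
    viewpoint. Simplified pinhole-equivalent geometry only.
    """
    denom = np.tan(elev_lower) - np.tan(elev_upper)
    if abs(denom) < 1e-9:
        raise ValueError("rays nearly parallel: point too far away")
    return baseline / denom
```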
47

Combining omnidirectional vision with polarization vision for robot navigation

Shabayek, Abd El Rahman 28 November 2012
Polarization is the phenomenon that describes the orientation of the oscillations of light waves, which are restricted in direction. Polarized light has multiple uses in the animal kingdom, ranging from foraging, defense and communication to orientation and navigation. Chapter 1 briefly covers some important aspects of polarization and explains our research problem. We aim to use a polarimetric-catadioptric sensor, since many applications in computer vision and robotics can benefit from such a combination, especially robot orientation (attitude estimation) and navigation. Chapter 2 mainly covers the state of the art of vision-based attitude estimation. As unpolarized sunlight enters the Earth's atmosphere, it is Rayleigh-scattered by air and becomes partially linearly polarized. This skylight polarization provides a significant clue to understanding the environment: its state conveys the information needed to obtain the sun's orientation. Robot navigation, sensor planning and many other applications may benefit from this navigation clue. Chapter 3 covers the state of the art in capturing skylight polarization patterns using omnidirectional sensors (e.g., fisheye and catadioptric sensors); it also explains the characteristics of skylight polarization and gives a new theoretical derivation of the skylight angle-of-polarization pattern. Our aim is to obtain an omnidirectional 360° view combined with polarization characteristics. Hence, this work is based on catadioptric sensors, which are composed of reflective surfaces and lenses. Usually the reflective surface is metallic, and hence the incident skylight polarization state, which is mostly partially linearly polarized, becomes elliptically polarized after reflection. Given the measured reflected polarization state, we want to recover the incident polarization state. Chapter 4 proposes a method to measure the light polarization parameters using a catadioptric sensor; the possibility of measuring the incident Stokes vector is proved, given three of the four components of the reflected Stokes vector. Once the incident polarization patterns are available, the solar angles can be estimated directly from these patterns. Chapter 5 discusses polarization-based robot orientation and navigation and proposes new algorithms to estimate these solar angles; to the best of our knowledge, this work is the first to estimate the sun zenith angle from the incident polarization patterns. We also propose to estimate the orientation of a vehicle from these patterns. Finally, the work is concluded and possible future research directions are discussed in Chapter 6. More examples of skylight polarization patterns, their calibration, and the proposed applications are given in Appendix B. Our work may pave the way from conventional polarization vision to omnidirectional polarization vision. It enables bio-inspired robot orientation and navigation applications, and possible outdoor localization based on skylight polarization patterns: given the solar angles at a known date and time, the current geographical location of a vehicle may be inferred.
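For reference, the degree and angle of linear polarization used throughout such work follow directly from the first three Stokes components. These are the standard textbook relations rather than anything specific to this thesis:

```python
import numpy as np

def linear_polarization(s0, s1, s2):
    """Degree (DoLP) and angle (AoP) of linear polarization from the
    first three Stokes components."""
    dolp = np.sqrt(s1**2 + s2**2) / s0      # 0 = unpolarized, 1 = fully linear
    aop = 0.5 * np.arctan2(s2, s1)          # oscillation orientation, radians
    return dolp, aop

print(linear_polarization(1.0, 0.3, 0.2))   # e.g. one skylight pixel
```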
48

Two View Line-Based Matching, Motion Estimation and Reconstruction for Central Imaging Systems

Mosaddegh, Saleh 17 October 2011
The primary goal of this thesis is to develop generic motion and structure algorithms for images of constructed (man-made) scenes taken by various types of central imaging systems, including perspective, fish-eye and catadioptric systems. Assuming that the mapping between image pixels and their 3D rays in space is known, we work on image spheres (the projection of the images onto a unit sphere) rather than image planes, which lets us represent points over the entire view sphere, suits omnidirectional images, and uses a generic model valid for all central sensors. In the first part of this thesis, we develop a generic and simple line-matching approach for images of constructed scenes under a short-baseline motion, as well as a fast and original geometric constraint for matching lines in piecewise-planar scenes that is insensitive to the motion of the camera, valid for all types of central images including omnidirectional ones. Next, we introduce a unique and efficient way of computing the overlap between two segments in perspective images, which considerably decreases the overall computational time of a segment-based motion estimation and reconstruction algorithm. Finally, in the last part of this thesis, we develop a simple motion estimation and surface reconstruction algorithm for piecewise-planar scenes, applicable to all kinds of central images, that uses only two views and a minimum number of line correspondences. To demonstrate the performance of these algorithms, we experiment with various real images taken by a simple perspective camera, a fish-eye lens, and two different kinds of paracatadioptric sensors: the first a folded catadioptric camera, and the second a classic paracatadioptric system composed of a parabolic mirror in front of a telecentric lens.
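The segment-overlap computation can be pictured as projecting one segment's endpoints onto the other's supporting line and intersecting the resulting parameter intervals. The sketch below is that generic construction in 2D; the thesis's exact overlap measure may be defined differently.

```python
import numpy as np

def segment_overlap(a0, a1, b0, b1):
    """Length of segment B's overlap with segment A, measured along A.

    Endpoints are 2D image points; B is projected onto A's supporting
    line and the two parameter intervals are intersected.
    """
    a0, a1 = np.asarray(a0, float), np.asarray(a1, float)
    d = (a1 - a0) / np.linalg.norm(a1 - a0)       # unit direction of A
    ta0, ta1 = 0.0, float(np.dot(a1 - a0, d))     # A spans [0, |A|]
    tb0, tb1 = sorted((float(np.dot(np.asarray(b0) - a0, d)),
                       float(np.dot(np.asarray(b1) - a0, d))))
    return max(0.0, min(ta1, tb1) - max(ta0, tb0))

print(segment_overlap((0, 0), (10, 0), (4, 1), (14, 1)))   # -> 6.0
```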
49

Omnidirectional pong playing robot: Pong playing robot using kiwi drive and a PID controller

Björklund, Filip, Strand, Christopher January 2019
This project's goal was to determine the flexibility of an omnidirectional robot through a physical implementation of the video game Pong. A robot was built to follow and catch a ball and could play against a human player. The challenge of the project was to create a stable system that could move along a straight path and catch the ball within a reasonable distance from the other player. A camera was used to implement an image recognition system that determined the two-dimensional position of the ball, and hard-coded values for the size of the ball were used to estimate a three-dimensional position. Given these values, the robot was able to follow the ball and push it back when close. For the omnidirectional base, a so-called kiwi drive with three DC motors and omnidirectional wheels was used. Ultrasonic sensors were used to stop the robot if it came too close to a wall. To make the robot move along a straight path, control theory together with a compass module was used: the measured angular error was fed back to a PID controller. This enabled the robot to travel in a straight path and catch the ball. The results of the project showed that an omnidirectional robot can be controlled in a stable manner with control theory, and that image recognition with a web camera and OpenCV is fast enough for a robotic system that can successfully complete a given task.
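The control loop described above amounts to a PID regulator on the compass heading error feeding a standard kiwi-drive mixing law. The sketch below shows both pieces; the gains, wheel angles and dimensions are illustrative assumptions rather than the report's values, and the heading error is assumed already wrapped to [-pi, pi].

```python
import numpy as np

class HeadingPID:
    """PID on the compass heading error (gains are illustrative)."""
    def __init__(self, kp=2.0, ki=0.1, kd=0.3, dt=0.02):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, error):            # error in radians, wrapped to [-pi, pi]
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def kiwi_wheel_speeds(vx, vy, omega, wheel_r=0.03, base_r=0.10):
    """Standard kiwi-drive mixing for three omni wheels 120 degrees apart."""
    angles = np.deg2rad([90.0, 210.0, 330.0])   # wheel placements (assumed)
    return np.array([(-np.sin(a) * vx + np.cos(a) * vy + base_r * omega) / wheel_r
                     for a in angles])
```

In use, the PID output would be supplied as `omega` to keep the heading locked while `vx, vy` chase the ball's estimated position.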
50

Offline H.264 encoding method for omnidirectional videos with empirical region-of-interest

Sormain, Rémi January 2017
Panoramic virtual reality is an emerging technology that has recently gained the attention of both the research community and regular consumers. It allows users to immerse themselves in omnidirectional videos with the help of a virtual reality headset: thanks to an increasing number of affordable head-mounted displays, any recent smartphone can offer a decent panoramic virtual reality experience. However, since omnidirectional videos have a large field of view that covers the entire sphere around the camera, they require large resolutions and thus high bitrates. This master's degree project, conducted at RE'FLEKT GmbH, is an exploratory work that seeks to reduce the panoramic video bitrate. Because of the nature of omnidirectional videos, the user can only see a subpart of each video frame, and thus some zones of the video attract more attention than others. The purpose of this study is to introduce the concept of region-of-interest encoding in panoramic VR. The main contribution is a method to encode panoramic videos as an H.264 stream with a space-variant level of detail, depending on the zones that attract the most viewer interest. First, the regions of interest are detected through a head-tracking module combined with a Gaussian attention model. Then, the reference video is encoded with the open-source x264 encoder, with the quantization step adjusted according to the region-of-interest information. Subjective tests following the International Telecommunication Union standard show that this method outperforms classic H.264 encoding only in specific cases.
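The core of the method, as described, is turning fixation data into a per-macroblock quantization offset map. A minimal sketch of that step follows; the Gaussian width and offset range are illustrative assumptions, not the thesis's values.

```python
import numpy as np

def qp_offset_map(fixations, frame_w, frame_h, mb=16, sigma=120.0, max_off=8.0):
    """Per-macroblock QP offsets from head-tracking fixation points.

    Builds a Gaussian attention map over macroblock centres: attended
    blocks keep a low QP (fine quantization), the rest is coarsened.
    """
    rows, cols = frame_h // mb, frame_w // mb
    ys, xs = np.mgrid[0:rows, 0:cols]
    cx, cy = (xs + 0.5) * mb, (ys + 0.5) * mb     # macroblock centres (px)
    attention = np.zeros((rows, cols))
    for fx, fy in fixations:                      # one Gaussian per fixation
        attention += np.exp(-((cx - fx) ** 2 + (cy - fy) ** 2) / (2 * sigma ** 2))
    attention /= attention.max() + 1e-12
    return (1.0 - attention) * max_off            # 0 inside the ROI
```

x264's library API accepts a per-macroblock quantizer-offset map alongside each input picture, which is one way such a map could be applied during encoding.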
