  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Effect of additional compression features on h.264 surveillance video

Comstedt, Erik January 2017 (has links)
In the video surveillance business, a recurring topic of discussion is quality versus data usage. A higher quality allows for more details to be captured at the cost of a higher bit rate, and for cameras monitoring events 24 hours a day, limiting data usage can quickly become a factor to consider. The purpose of this thesis has been to apply additional compression features to an H.264 video stream and evaluate their effects on the video's overall quality. Using a surveillance camera, recordings of video streams were obtained. These recordings had constant GOP and frame rates. By breaking one of these videos down into an image sequence, it was possible to encode the image sequence into video streams with variable GOP/FPS using the software FFmpeg. Additionally, a user test was performed on these video streams, following the DSCQS standard from the ITU-R recommendation. The participants had to subjectively determine the quality of the video streams. The results from these tests showed that the participants did not notice any considerable difference in quality between the normal videos and the videos with variable GOP/FPS. Based on these results, the thesis has shown that additional compression features can be applied to H.264 surveillance streams without having a substantial effect on the video streams' overall quality.
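As a rough illustration of the re-encoding workflow described above, the following sketch (not taken from the thesis; the file pattern, CRF value and GOP/FPS choices are hypothetical) shows how an image sequence could be encoded into H.264 streams with different GOP lengths and frame rates by driving FFmpeg from Python:

```python
import subprocess

def encode_sequence(pattern, fps, gop, out_path, crf=23):
    """Encode a numbered image sequence into an H.264 stream with a chosen
    frame rate and GOP length using FFmpeg (illustrative sketch only)."""
    cmd = [
        "ffmpeg",
        "-framerate", str(fps),   # input rate of the image sequence
        "-i", pattern,            # e.g. "frames/img_%04d.png" (hypothetical path)
        "-c:v", "libx264",        # H.264 encoder
        "-g", str(gop),           # GOP length (keyframe interval)
        "-crf", str(crf),         # constant-quality target
        "-r", str(fps),           # output frame rate
        out_path,
    ]
    subprocess.run(cmd, check=True)

# Hypothetical comparison: a short-GOP stream versus a sparser, lower-FPS one.
# encode_sequence("frames/img_%04d.png", fps=30, gop=30, out_path="gop30_fps30.mp4")
# encode_sequence("frames/img_%04d.png", fps=15, gop=60, out_path="gop60_fps15.mp4")
```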
2

CSTN LCD Frame Rate Controller For Image Quality Enhancement

Lee, Chien-te 20 July 2010 (has links)
This thesis focuses on the FRC (Frame Rate Control) method used in LCD panels, for which a new algorithm is proposed to alleviate the flicker problem. The proposed algorithm can be implemented with simple digital circuits and low power consumption. The proposed design can be applied in both mono- and color-STN panels. It can generate 32,768 colors, without any flicker or motion line problems, on a panel that originally allows only 8 colors. The major contribution of this thesis is to add a location number to each pixel of the panel. Notably, the numbers for all the pixels cannot follow a regular pattern; otherwise, the flicker problem is resolved at the expense of a serious motion line issue, and the consequence is poor display quality. To resolve both the flicker and motion line problems, we propose to employ a PRSG (Pseudo Random Sequence Generator) which generates a non-regular number sequence for all the pixels. Therefore, all the ON pixels can be dispersed across the panel in every frame.
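For illustration only, here is a minimal software sketch of the idea (not the digital circuit described in the thesis): a maximal-length LFSR serves as the PRSG, its output assigns a non-regular phase number to each pixel, and that phase decides in which sub-frames a pixel driven at an intermediate shade is ON. The panel size, number of levels, and the specific LFSR taps are assumptions.

```python
import numpy as np

def galois_lfsr(seed=0xACE1, taps=0xB400):
    """16-bit maximal-length Galois LFSR; yields a pseudo-random sequence."""
    state = seed
    while True:
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= taps
        yield state

def frc_frame(target_duty, phases, frame_idx, levels=8):
    """Decide which pixels are ON in one sub-frame.

    target_duty: desired ON counts per pixel in units of 1/levels
                 (the intermediate shade each pixel should show).
    phases:      per-pixel pseudo-random offsets from the PRSG, so the ON
                 sub-frames of neighbouring pixels are not aligned.
    """
    return ((frame_idx + phases) % levels) < target_duty

# Assign a non-regular phase to every pixel of a hypothetical 64x64 panel.
gen = galois_lfsr()
phases = np.array([next(gen) % 8 for _ in range(64 * 64)]).reshape(64, 64)
duty = np.full((64, 64), 3)          # pixels that should appear at shade 3/8
frame0 = frc_frame(duty, phases, frame_idx=0)
```

Over eight consecutive sub-frames each pixel is ON exactly `duty` times, but neighbouring pixels switch in different sub-frames, which is the dispersion effect the abstract describes.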
3

Knowledge-Based Video Compression for Robots and Sensor Networks

Williams, Chris 11 July 2006 (has links)
Robot and sensor networks are needed for safety, security, and rescue applications such as port security and reconnaissance during a disaster. These applications rely on real-time transmission of images, which generally saturate the available wireless network infrastructure. Knowledge-based Compression is a strategy for reducing the video frame transmission rate between robots or sensors and remote operators. Because images may need to be archived as evidence and/or distributed to multiple applications with different post-processing needs, lossy compression schemes, such as MPEG, H.26x, etc., are not acceptable. This work proposes a lossless video server system consisting of three classes of filters (redundancy, task, and priority) which use different levels of knowledge (local sensed environment, human factors associated with a local task, and relative global priority of a task) at the application layer of the network. It demonstrates the redundancy and task filters for realistic robot search scenarios. The redundancy filter is shown to reduce the overall transmission bandwidth by 24.07% to 33.42%, and when combined with the task filter, reduces overall transmission bandwidth by 59.08% to 67.83%. By itself, the task filter has the capability to reduce transmission bandwidth by 32.95% to 33.78%. While Knowledge-based Compression generally does not reach the same levels of reduction as MPEG, there are instances where the system outperforms MPEG encoding.
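A minimal, hypothetical sketch of what a redundancy-style filter could look like (this is not the thesis's actual filter; the difference metric and threshold are assumptions): a frame is transmitted, losslessly compressed, only when it differs enough from the last frame that was actually sent.

```python
import numpy as np
import zlib

class RedundancyFilter:
    """Skip transmission of frames that are nearly identical to the last
    frame actually sent; transmitted frames are compressed losslessly."""

    def __init__(self, threshold=2.0):
        self.threshold = threshold   # mean absolute difference (assumed units/value)
        self.last_sent = None

    def process(self, frame):
        """Return a losslessly compressed payload, or None to skip the frame."""
        if self.last_sent is not None:
            mad = np.abs(frame.astype(np.int16) - self.last_sent.astype(np.int16)).mean()
            if mad < self.threshold:
                return None          # redundant frame: nothing new to transmit
        self.last_sent = frame.copy()
        return zlib.compress(frame.tobytes())   # lossless, unlike MPEG/H.26x
```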
4

TÉCNICAS DE MEJORA DE LA EFICIENCIA DE CODIFICACIÓN DE VÍDEO

Usach Molina, Pau 07 January 2016 (has links)
This thesis presents a set of tools that improve digital video coding efficiency by exploiting the fundamentals of state-of-the-art video coding standards. The work has focused both on research and on applying the results to the encoding of digital video in real-time mobile environments. The first contribution is an automatic shot change detection algorithm integrated in the encoding process. This algorithm is based on monitoring the coding mode of the macroblocks of the sequence, and with the proper definition of a set of parameters it provides excellent detection rates, precision and recall. The results also indicate an improvement in encoded video quality when these detection techniques are used, which motivates the definition of a content-based keyframe selection algorithm. With this method, the optimal position of reference pictures can be determined. These keyframes are then used by the encoder to perform temporal prediction of the subsequent frames, which improves the compression rate and the encoded video quality (both objective and subjective). This quality improvement is the main objective of this thesis. In the last part of this work, a rate control algorithm for variable bitrate and frame rate environments has been defined, able to generate a bitstream that quickly follows the varying conditions of the mobile channel. In parallel with this work, a set of training and test sequences has been obtained, providing an optimal environment for the design, development, configuration, optimization and testing of the algorithms described here. / Usach Molina, P. (2015). TÉCNICAS DE MEJORA DE LA EFICIENCIA DE CODIFICACIÓN DE VÍDEO [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/59446
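The following is a hedged sketch of the underlying idea of shot change detection from macroblock coding modes (not the thesis's algorithm or its tuned parameters): a shot change is declared when the fraction of intra-coded macroblocks reported by the encoder for a predicted frame exceeds a threshold.

```python
def detect_shot_changes(intra_counts, total_mbs, threshold=0.6):
    """Flag frames whose share of intra-coded macroblocks exceeds a threshold.

    intra_counts: number of intra-coded macroblocks in each predicted frame,
                  as reported by the encoder's mode decision.
    total_mbs:    macroblocks per frame.
    threshold:    fraction above which a shot change is declared (assumed value).
    """
    return [i for i, n in enumerate(intra_counts) if n / total_mbs > threshold]

# Example: a CIF frame has (352 // 16) * (288 // 16) = 396 macroblocks.
cuts = detect_shot_changes([20, 35, 380, 30], total_mbs=396)   # -> [2]
```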
5

A Study of Limited-Diffraction Array Beam and Steered Plane Wave Imaging

Wang, Jing 20 June 2006 (has links)
No description available.
6

ADVANCED IMAGE AND VIDEO INTERPOLATION TECHNIQUES BASED ON NONLOCAL-MEANS FILTERING

Dehghannasiri, Roozbeh 10 1900 (has links)
In this thesis, we study three different image interpolation applications in high definition (HD) video processing: video de-interlacing, frame rate up-conversion, and view interpolation. We propose novel methods for these applications which are based on the concept of Nonlocal-Means (NL-Means).

In the first part of this thesis, we introduce a new de-interlacing method which uses the NL-Means algorithm. In this method, every interpolated pixel is set to a weighted average of its neighboring pixels in the current, previous, and next frames. The weights of the pixels used in this filtering are calculated according to the radiometric distance between the surrounding areas of the pixel being interpolated and the neighboring pixels. One of the main challenges of NL-Means is finding a suitable size for the neighborhoods (similarity window) for which we want to find the radiometric distance. We address this problem by using a steering kernel in our distance function to adapt the effective size of the similarity window to the local information of the image. In order to calculate the weights of the filter, we need an estimate of the progressive frames. Therefore, we introduce a low-computational initial de-interlacing method. This method interpolates the missing pixel along a direction based on two criteria: having minimum variation and being used by the above or below pixels. This method preserves edge structures and yields superior visual quality compared to simple edge-based line averaging and many other simple de-interlacing methods.

The second part of this thesis is devoted to the frame rate up-conversion application. Our frame rate up-conversion method is based on two main steps: NL-Means and foreground/background segmentation. In this method, for every pixel being interpolated we first check whether it belongs to the background or the foreground. If the pixel belongs to the background and the values of the next and previous frames' pixels are the same, we simply set the pixel intensity to the intensity of its location in the previous or next frame. If the pixel belongs to the foreground, we use NL-Means-based interpolation for it. We adjust the equations of NL-Means for frame rate up-conversion so that we do not need the neighborhoods of the intermediate frame for calculating the weights of the filter. The comparison of our method with other existing methods shows its better performance.

In the third part of this thesis, we introduce a novel view interpolation method that does not use disparity estimation. In this method, we let every pixel in the intermediate view be the output of NL-Means using the pixels in the reference views. The experimental results demonstrate the better quality of our results compared with other algorithms which use disparity estimation. / Master of Applied Science (MASc)
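As a rough sketch of the NL-Means weighting used throughout the abstract (illustrative only; the steering-kernel adaptation and the initial directional de-interlacer are omitted, and the filtering parameter h is an assumption), the interpolated pixel is a weighted average of candidate pixels whose weights decay with the radiometric distance between similarity windows:

```python
import numpy as np

def nlmeans_interpolate(candidates, patches, ref_patch, h=10.0):
    """Estimate a missing pixel as an NL-Means weighted average.

    candidates: intensities of neighbouring pixels (current/previous/next frame).
    patches:    the similarity window around each candidate, flattened (N x P).
    ref_patch:  an initial estimate of the window around the missing pixel
                (e.g. from a simple directional de-interlacer), length P.
    h:          filtering parameter controlling the decay of the weights.
    """
    d2 = np.sum((patches - ref_patch) ** 2, axis=1)   # radiometric distances
    w = np.exp(-d2 / (h * h))                          # exponentially decaying weights
    return np.sum(w * candidates) / np.sum(w)
```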
7

Novel Image Interpolation Schemes with Applications to Frame Rate Conversion and View Synthesis

Rezaee Kaviani, Hoda January 2018 (has links)
Image interpolation is the process of generating a new image utilizing a set of available images. The available images may be taken with a camera at different times, or with multiple cameras from different viewpoints. Usually, the interpolation problem in the first scenario is called Frame Rate Up-Conversion (FRUC), and in the second, view synthesis. This thesis focuses on image interpolation and addresses both the FRUC and view synthesis problems. We propose a novel FRUC method using optical flow motion estimation and a patch-based reconstruction scheme. FRUC interpolates new frames between the original frames of a video to increase the number of frames and improve motion continuity. In our approach, forward and backward motion vectors are first obtained using an optical flow algorithm, and reconstructed versions of the current and previous frames are generated by our patch-based reconstruction scheme. Using the original and reconstructed versions of the current and previous frames, two mismatch masks are obtained. Then two versions of the middle frame are generated using a patch-based scheme, with the estimated motion vectors and the current and previous frames. Finally, a middle mask, which identifies the mismatch areas of the two middle frames, is constructed. Using these three masks, the best candidates for interpolation are selected and fused to obtain the final middle frame. Due to the patch-based nature of our interpolation scheme, most of the holes and cracks will be filled. Although there is always a probability of having holes, the size and number of such holes are much smaller than those that would be generated using pixel-based mapping. The rare holes are filled using existing hole-filling algorithms. With fewer and smaller holes, simpler hole-filling algorithms can be applied to the image and the overall complexity of the required post-processing decreases. View synthesis is the process of generating a new (virtual) view using available ones. Depending on the amount of available geometric information, view synthesis techniques can be divided into three categories: Image Based Rendering (IBR), Depth Image Based Rendering (DIBR), and Model Based Rendering (MBR). We introduce an adaptive, patch-based scheme for IBR. This patch-based scheme reduces the size and number of holes during reconstruction. The size of each patch is determined in response to edge information for better reconstruction, especially near the boundaries. In the first stage of the algorithm, disparity is obtained using optical flow estimation. Then, reconstructed versions of the left and right views are generated using our adaptive patch-based algorithm. The mismatches between each view and its reconstructed version are obtained in the mismatch detection steps. This stage results in two masks as outputs, which help with the refinement of disparities and the selection of the best patches for final synthesis. Finally, the remaining holes are filled using our simple hole-filling scheme and the refined disparities. The adaptive version still benefits from the overlapping effect of the patches for hole reduction. However, compared with our fixed-size version, it results in better reconstruction near the edges, object boundaries, and inside highly textured areas. We also propose an adaptive patch-based scheme for DIBR. The proposed method avoids unnecessary warping, which is a computationally expensive step in DIBR. We divide nearby views into blocks and only warp the center of each block. To achieve better reconstruction near edges and depth discontinuities, the block size is selected adaptively. In the blending step, an approach is introduced to calculate and refine the blending weights. Many of the existing DIBR schemes warp all pixels of nearby views during interpolation, which is unnecessary. We show that, using our adaptive patch-based scheme, it is possible to reduce the number of required warping operations without degrading the overall quality compared with existing schemes. / Thesis / Doctor of Philosophy (PhD)
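A minimal sketch of the patch-pasting step of such a FRUC scheme (an illustration, not the author's implementation; the patch size, overlap, and flow format are assumptions): patches of the previous frame are copied half-way along their forward motion vectors and accumulated, so overlapping patches reduce holes and any remaining holes are left for a later filling step.

```python
import numpy as np

def paste_patches(prev, flow, patch=8):
    """Build a middle-frame estimate by moving patches of the previous frame
    half-way along their forward motion vectors (sketch; masks/fusion omitted).

    prev: previous frame, shape (h, w).
    flow: per-pixel forward motion vectors (dy, dx), shape (h, w, 2).
    """
    h, w = prev.shape
    acc = np.zeros((h, w))
    wgt = np.zeros((h, w))
    for y in range(0, h - patch + 1, patch // 2):        # overlapping patches
        for x in range(0, w - patch + 1, patch // 2):
            dy, dx = flow[y, x]
            ty = int(round(y + dy / 2.0))                # half-way position
            tx = int(round(x + dx / 2.0))
            if 0 <= ty <= h - patch and 0 <= tx <= w - patch:
                acc[ty:ty + patch, tx:tx + patch] += prev[y:y + patch, x:x + patch]
                wgt[ty:ty + patch, tx:tx + patch] += 1.0
    mid = np.where(wgt > 0, acc / np.maximum(wgt, 1e-9), 0.0)
    holes = wgt == 0                                     # rare holes, filled later
    return mid, holes
```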
8

High Speed CMOS Image Sensor

January 2016 (has links)
High speed image sensors are used as a diagnostic tool to analyze high speed processes for industrial, automotive, defense and biomedical applications. The high frame rate of these sensors captures a series of images that enables the viewer to understand and analyze the high speed phenomena. However, the pixel readout circuits designed for sensors with a high frame rate (100 fps to 1 Mfps) have very low fill factors, less than 58%. For high speed operation, the exposure time is short and/or the light intensity incident on the image sensor is low. This makes it difficult for the sensor to detect faint light signals and sets a lower limit on the signal levels that can be detected. Moreover, the leakage paths in the pixel readout circuit also set a limit on the signal level being detected. Therefore, the fill factor of the pixel should be maximized and the leakage currents in the readout circuits should be minimized. This thesis presents the design of a pixel readout circuit suitable for high speed and low light imaging applications. The circuit is an improvement on the 6T pixel readout architecture. The designed readout circuit minimizes the leakage currents in the circuit and detects light producing a signal level of 350 µV at the cathode of the photodiode. A novel layout technique is used for the pixel, which improves the fill factor of the pixel to 64.625%. The readout circuit designed is an integral part of a high speed image sensor, fabricated using a 0.18 µm CMOS technology with a die size of 3.1 mm x 3.4 mm, a pixel size of 20 µm x 20 µm, a 96 x 96 pixel array and four 10-bit pipelined ADCs. The image sensor achieves a high frame rate of 10508 fps and a readout speed of 96 Mpixels/s. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2016
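A back-of-the-envelope check of the reported figures (approximate only; it ignores readout overheads and any row/column timing details, which are not given in the abstract):

```python
# Photodiode area implied by the pixel pitch and fill factor.
pixel_pitch_um = 20.0
fill_factor = 0.64625
active_area_um2 = fill_factor * pixel_pitch_um ** 2    # ~258.5 um^2

# Frame rate implied by the readout speed and array size.
readout_rate = 96e6                                    # pixels per second
pixels_per_frame = 96 * 96                             # 9216 pixels
approx_fps = readout_rate / pixels_per_frame           # ~10417 fps, roughly consistent
                                                       # with the reported 10508 fps
print(active_area_um2, approx_fps)
```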
9

Manipulering av bildhastighet och dess känslomässiga påverkan på tittarupplevelse vid olika format / Manipulation of frame rate and its emotional effect on viewer perception in different formats

O'Grady, William, Währme, Emil January 2023 (has links)
Frame rate is a fundamental element in creating the illusion of movement in video-based media. For almost a century, film has been produced in accordance with a standard frame rate of 24 frames per second, originally established due to technical limitations. This number lives on in film today, despite many technological innovations and other video-based media formats straying from this standard. With contemporary video technology, content can not only be recorded at higher frame rates; frames can also be artificially interpolated. So-called Frame Interpolation technology now comes as a pre-installed feature on most televisions. As a consequence, this has sparked a debate on how video-based media should be presented, not least when it is artificially generated outside of the creators' control. This study therefore aims to explore how manipulation of a video clip's frame rate influences the viewer experience, and thereby whether the use of Frame Interpolation technology in televisions is justified. A study was conducted wherein participants were shown video clips in their original frame rate and compared them to artificially manipulated copies. The results showed that there is no single frame rate that is preferred by all participants and that some participants did not perceive any difference at all. It is also shown that artificial manipulation of frame rate is generally not appreciated, and that criticism of its use tends to be directed at the wrong content. It is then discussed whether television manufacturers should reconsider the use of Frame Interpolation technology. Lastly, we note that the accuracy of these results is limited by the scope of the study. Further exploration of the subject is suggested to corroborate the results found here and those of earlier papers.
10

Development of novel ultrasound techniques for imaging and elastography : from simulation to real-time implementation / Sviluppo di tecniche originali ad ultrasuoni per applicazioni di imaging ed elastografia : dalla simulazione all'implementazione in tempo reale / Développement de nouvelles techniques ultrasonores pour des applications d’imagerie ou d’élastographie : de la simulation à l'implémentation temps-réel

Ramalli, Alessandro 02 April 2012 (has links)
Ultrasound techniques offer many advantages, in terms of both ease of realization and patient safety. Research aimed at expanding their fields of application is nowadays particularly active. The availability of suitable hardware and supporting software tools is a condicio sine qua non for the experimentation of new techniques. This Ph.D. project addresses signal/image processing issues in medical ultrasound and seeks to achieve two major scientific goals: the first is to contribute to the development of a powerful ultrasound research platform (ULA-OP), while the second is to introduce and validate, through this platform, non-standard methods which could not be tested with commercial equipment. ULA-OP is a research system which gives developers great freedom in terms of management and control of every section, from signal transmission to echo-signal processing; it also offers the possibility to access raw data at any point in the receive chain. During the thesis, the capabilities of the system were improved by creating advanced software tools, such as acoustic field simulators (for linear and nonlinear propagation), and by developing programs for the post-processing of echo signals.
ULA-OP was crucial to develop and test various non-standard techniques, such as an adaptive beamforming scheme and a color/vector Doppler imaging method, which will be detailed in this thesis. In particular, a novel technique was developed for quasi-static elastography applications. This technique, based on a frequency-domain displacement estimation algorithm combined with a high-frame-rate averaging method, aims at improving the quality of the elastograms. The new method was first tested in vitro by offline processing of the received signals, and then implemented in real time on ULA-OP. The results show that this technique is effective and that the obtained elastograms present higher quality compared with those obtained with standard algorithms.
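For illustration, a minimal sketch of a frequency-domain (phase-based) displacement estimator of the kind used in quasi-static elastography (this is not the thesis's exact algorithm; the centre frequency, sampling rate, and speed of sound are assumed parameters supplied by the caller):

```python
import numpy as np
from scipy.signal import hilbert

def phase_displacement(pre, post, f0, c=1540.0):
    """Estimate the axial displacement between two RF windows from the phase
    of their complex correlation at the probe centre frequency (sketch only).

    pre, post: pre- and post-compression RF segments from the same range gate.
    f0:        centre frequency [Hz].
    c:         assumed speed of sound [m/s].
    """
    a_pre = hilbert(pre)                          # analytic (complex) signals
    a_post = hilbert(post)
    corr = np.sum(a_post * np.conj(a_pre))        # zero-lag complex correlation
    tau = np.angle(corr) / (2.0 * np.pi * f0)     # time delay from the phase
    return c * tau / 2.0                          # displacement (two-way travel)
```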
