31

The effects of elevation and rotation on descriptors and similarity measures for a single class of image objects

06 June 2008
“A picture is worth a thousand words.” Taken literally, this proverb reminds us that every person interprets the content of an image or photograph differently, because of the semantics these images contain. Content-based image retrieval has become a vast area of research aimed at successfully describing and retrieving images according to their content. In military applications, intelligence images such as those obtained by the defence intelligence group are taken (mostly on film), developed and then manually annotated. These photos are then stored in a filing system according to attributes such as location and content, and retrieving them at a later stage can take days or even weeks. Thus the need for a digital annotation system has arisen. The military images contain various vehicles and buildings that need to be detected, described and stored in a database. In this research we examine the effects that the rotation and elevation angle of an object in an image have on retrieval performance. We chose model cars so that we could control the environment in which the photos were taken, such as the background, lighting, and distance between the objects and the camera; a wide variety of shapes and colours of these models is also available to work with. We examine the MPEG-7 description schemes recommended by the MPEG group for video and image retrieval and implement three of them. Since the defence intelligence group may be required to transmit images directly from the field to headquarters via satellite, we have also included the JPEG2000 standard, which improves compression performance by about 20% over the original JPEG standard and supports wireless as well as secure transmission. In addition to the MPEG-7 descriptors, we have implemented the fuzzy histogram and colour correlogram descriptors. We carried out a series of experiments to determine the effects that rotation and elevation have on our model vehicle images, making observations both when each vehicle is considered separately and when all vehicles are described and combined into a single database. Finally, we consider which adjustments to the descriptors could improve their retrieval performance. / Dr. W.A. Clarke
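
The abstract leaves the descriptors themselves to the thesis body. As a rough illustration of the kind of global colour descriptor and similarity measure such retrieval experiments compare, here is a minimal NumPy sketch; the bin count and the L1-based score are illustrative assumptions, not the thesis's actual MPEG-7 parameters:

```python
import numpy as np

def colour_histogram(image, bins=8):
    """Quantise an RGB image (H x W x 3, uint8) into a normalised
    bins^3 colour histogram -- a simple global colour descriptor."""
    # Map each channel to one of `bins` levels, then to a single bin index.
    q = (image.astype(np.uint32) * bins) // 256
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(np.float64)
    return hist / hist.sum()

def l1_similarity(h1, h2):
    """Similarity in [0, 1]: 1 minus half the L1 distance between two
    normalised histograms (1.0 means identical colour distributions)."""
    return 1.0 - 0.5 * np.abs(h1 - h2).sum()

# Retrieval then amounts to ranking database images by similarity
# to the query descriptor:
# scores = [l1_similarity(query_hist, colour_histogram(img)) for img in db]
```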
32

Error resilience in JPEG2000

Natu, Ambarish Shrikrishna, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW, January 2003
The rapid growth of wireless communication and widespread access to information has resulted in a strong demand for robust transmission of compressed images over wireless channels. The challenge of robust transmission is to protect the compressed image data against loss in such a way as to maximize the received image quality. This thesis addresses this problem and investigates a forward error correction (FEC) technique evaluated in the context of the emerging JPEG2000 standard. Little effort has been devoted to error resilience in the JPEG2000 project. The only standardized techniques are based on the insertion of marker codes in the code-stream, which may be used to restore high-level synchronization between the decoder and the code-stream. This helps to localize errors and prevent them from propagating through the entire code-stream. Once synchronization is achieved, additional tools aim to exploit as much of the remaining data as possible. Although these techniques help, they cannot recover lost data. FEC adds redundancy to the bit-stream in exchange for increased robustness to errors. We investigate unequal protection schemes for JPEG2000 by applying different levels of protection to different quality layers in the code-stream. More particularly, the results reported in this thesis provide guidance on the selection of JPEG2000 coding parameters and appropriate combinations of Reed-Solomon (RS) codes for typical wireless bit error rates. We find that unequal protection schemes, together with the use of resynchronization markers and some additional tools, can significantly improve image quality in deteriorating channel conditions. The proposed channel coding scheme is easily incorporated into the existing JPEG2000 code-stream structure, and experimental results clearly demonstrate the viability of our approach.
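
As a hedged illustration of the unequal-protection idea described above — stronger Reed-Solomon codes on the earlier, more important quality layers — here is a minimal Python sketch. The (n, k) pairs and layer sizes are invented for illustration and are not the code combinations the thesis recommends:

```python
# Hypothetical unequal error protection plan: lower-rate (stronger)
# RS codes for the base quality layers, lighter codes for refinements.
RS_BY_IMPORTANCE = [
    (255, 131),  # layer 0 (base quality): heavy protection
    (255, 181),  # layer 1: medium protection
    (255, 223),  # layer 2 and beyond: light protection
]

def protect_layers(layer_sizes_bytes):
    """Assign an RS(n, k) code to each quality layer and report the
    total redundancy the channel coder would add."""
    plan, total_data, total_coded = [], 0, 0
    for i, size in enumerate(layer_sizes_bytes):
        n, k = RS_BY_IMPORTANCE[min(i, len(RS_BY_IMPORTANCE) - 1)]
        coded = size * n / k          # coded size after RS expansion
        plan.append((i, (n, k), coded))
        total_data += size
        total_coded += coded
    overhead = total_coded / total_data - 1.0
    return plan, overhead

plan, overhead = protect_layers([4096, 8192, 16384])
print(f"redundancy overhead: {overhead:.1%}")  # roughly 33% with these sizes
```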
33

Editing, Streaming and Playing of MPEG-4 Facial Animations

Rudol, Piotr, Wzorek, Mariusz, January 2003
Computer animated faces have found their way into a wide variety of areas, starting from entertainment such as computer games, through television and film, to user interfaces using “talking heads”. Animated faces are also becoming popular in web applications in the form of human-like assistants or newsreaders.

This thesis presents a few aspects of dealing with human face animations, namely editing, playing and transmitting such animations. It describes a standard for handling human face animations, MPEG-4 Face Animation, and shows the process of designing, implementing and evaluating applications compliant with this standard.

First, it presents changes introduced to the existing components of the Visage|toolkit package for dealing with facial animations, offered by the company Visage Technologies AB. It then presents the process of designing and implementing an application for editing facial animations compliant with the MPEG-4 Face Animation standard. Finally, it discusses several approaches to the problem of streaming facial animations over the Internet or a Local Area Network (LAN).
34

Focus controlled image coding based on angular and depth perception / Fokusstyrd bildkodning baserad på vinkel och djup perception

Grangert, Oskar, January 2003
In normal image coding the image quality is the same in all parts of the image. When it is known where in the image a single viewer is focusing, it is possible to lower the image quality in other parts of the image without lowering the perceived image quality. This master's thesis introduces a coding scheme based on depth perception, where the quality of the parts of the image that correspond to out-of-focus scene objects is lowered to obtain data reduction. For further data reduction the method is combined with angular perception coding, where the quality is lowered in parts of the image corresponding to the peripheral visual field. It is concluded that depth perception coding can be done without lowering the perceived image quality and that the coding gain increases when the two methods are combined.
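
To make the two cues concrete, the following NumPy sketch builds a per-pixel quality map that decays with angular distance from the fixation point and with depth distance from the focused object. The exponential falloffs and their constants are assumptions for illustration, not the thesis's model:

```python
import numpy as np

def quality_map(h, w, focus_xy, depth, focus_depth,
                angular_falloff=0.004, depth_falloff=0.5):
    """Illustrative per-pixel quality map in [0, 1] combining the two
    cues above: quality drops with angular distance from the fixation
    point and with depth distance from the focused object. `depth` is
    an (h, w) array of scene depths; the falloff constants are made-up
    tuning parameters."""
    ys, xs = np.mgrid[0:h, 0:w]
    fx, fy = focus_xy
    # Angular term: decays with pixel distance from the fixation point.
    ang = np.exp(-angular_falloff * np.hypot(xs - fx, ys - fy))
    # Depth term: decays as scene depth departs from the focused depth.
    dep = np.exp(-depth_falloff * np.abs(depth - focus_depth))
    return ang * dep  # a coder would map this to quantiser step sizes
```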
35

Estimation of visual focus for control of a FOA-based image coder / Estimering av visuellt fokus för kontroll av en FOA-baserad bildkodare

Carlén, Stefan, January 2003
A major feature of the human eye is the non-uniform sensitivity of the retina. An image coder that makes use of this can heavily compress the parts of the image that are not close to the focus of our eyes. Existing image coding schemes require that the gaze direction of the viewer be measured; a great advantage would therefore be an estimator that predicts the focus of attention (FOA) regions in the image.

This report presents such an implementation, based on a model that mimics many of the biological features of the human visual system (HVS). For example, it uses a center-surround mechanism, a replica of the receptive fields of the neurons in the HVS.

An additional feature of the implementation is its extension to handle video sequences and the expansion of the FOAs. Tests of the system show good results on a large variety of images.
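
A minimal sketch of a center-surround response of this kind, using a difference of Gaussian-blurred copies (a common approximation of such receptive fields) with SciPy; the scales here are illustrative, not the parameters of the implementation described:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_foa(gray, center_sigma=2.0, surround_sigma=8.0):
    """Centre-surround sketch: the absolute difference between a fine
    (centre) and a coarse (surround) Gaussian-blurred copy of a
    grayscale image approximates the receptive-field response; the
    strongest response is taken as the focus-of-attention estimate."""
    center = gaussian_filter(gray.astype(np.float64), center_sigma)
    surround = gaussian_filter(gray.astype(np.float64), surround_sigma)
    saliency = np.abs(center - surround)
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    return (x, y), saliency  # FOA location plus the full saliency map
```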
36

Multiresolution image segmentation based on compound random fields: Application to image coding

Marqués Acosta, Fernando, 22 November 1992
Image segmentation is a technique whose purpose is to divide an image into a set of regions, assigning one or several regions to each object in the scene. For the segmentation to be correct, each region must satisfy a homogeneity criterion imposed a priori. Fixing a homogeneity criterion implicitly means assuming a mathematical model that characterizes the regions.

This thesis introduces a new type of model, called a hierarchical model, which has two different levels superimposed one on the other. The lower (or underlying) level models the position that each region occupies within the image, while the upper (or observable) level consists of a set of independent submodels (one submodel per region) that characterize the behaviour of the interior of the regions. For the first level a second-order Markov random field is used to model the contours of the regions, while for the second level a Gaussian model is used. The thesis studies the best potentials to assign to the types of groupings that define the contours. With all this, segmentation is performed by searching for the most probable partition (MAP criterion) given a concrete realization (the observable image).

Searching for the optimal partition would be practically infeasible, in terms of computation time, for images of the usual size. To make it feasible, one must start from a sufficiently good initial estimate and use a fast improvement algorithm such as a local search. For this purpose a pyramidal (multiresolution) segmentation technique is introduced. The pyramid is generated by Gaussian filtering and decimation. At the highest level of the pyramid, which has few pixels, the optimal partition can indeed be found.
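
A minimal sketch of the pyramid construction described above (Gaussian filtering followed by decimation by two), assuming SciPy; the number of levels and the smoothing sigma are illustrative choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(image, levels=4, sigma=1.0):
    """Build a multiresolution pyramid from a 2-D grayscale image:
    Gaussian filtering followed by decimation by two at each step.
    Level 0 is the original image; higher levels are coarser."""
    pyramid = [image.astype(np.float64)]
    for _ in range(levels - 1):
        smoothed = gaussian_filter(pyramid[-1], sigma)
        pyramid.append(smoothed[::2, ::2])  # decimate rows and columns
    return pyramid

# The MAP search then runs exhaustively only on pyramid[-1] (few
# pixels); the resulting partition seeds a fast local search that is
# refined level by level down to the full resolution.
```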
37

Improving compression ratio in backup / Förbättring av kompressionsgrad för säkerhetskopiering

Zeidlitz, Mattias, January 2012
This report describes a master's thesis performed at Degoo Backup AB in Stockholm, Sweden, in the spring of 2012. The purpose was to design a compression suite in Java that improves the compression ratio for file types assumed to be common in backup software. A tradeoff between compression ratio and compression speed has been made in order to meet the requirement that the suite be able to compress the data fast enough. A study of the best-performing existing compression algorithms has been made so that the most suitable algorithm can be chosen for every possible scenario, and file-type-specific compression algorithms have been developed to further improve the compression ratio for files considered to need it. The resulting compression performance is presented for file types assumed to be common in backup software, and the overall performance is good. The final conclusion is that the compression suite fulfills all requirements set for this thesis.
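
As a hedged sketch of the file-type-specific dispatch idea (the thesis work itself is in Java, and its actual algorithm choices are its own), here is a Python illustration using only standard-library compressors; the extension-to-algorithm mapping is invented:

```python
import lzma
import zlib

# Invented mapping of file types to compressors, trading ratio
# against speed; not Degoo's actual choices.
FAST_TYPES = {".log", ".txt", ".csv"}    # favour speed: zlib, low level
DENSE_TYPES = {".xml", ".json", ".sql"}  # favour ratio: LZMA
SKIP_TYPES = {".jpg", ".mp3", ".zip"}    # already compressed: store as-is

def compress_for_backup(data: bytes, extension: str) -> bytes:
    """Pick a compressor per file type before writing a backup chunk."""
    ext = extension.lower()
    if ext in SKIP_TYPES:
        return data                      # recompression rarely pays off
    if ext in DENSE_TYPES:
        return lzma.compress(data, preset=6)
    if ext in FAST_TYPES:
        return zlib.compress(data, 3)    # fast, moderate ratio
    return zlib.compress(data, 6)        # default middle ground
```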