11

Video quality encoding characterization and comparison / Kvalificering och jämförelse av videokvaliteter

Andersson, Julia, Hultqvist, Andreas January 2019 (has links)
Adaptive streaming is a popular technique that allows the quality of a video to be adapted to the current playback conditions. The purpose of this thesis is to investigate how chunks in video files downloaded from YouTube correlate with each other. We investigate how the chunk size characteristics depend on the category and encoding of the video. The main focus is the analysis of chunk sizes, in particular the differences between 360° and 2D videos. This is performed using the YouTube API. The videos are downloaded and analysed using youtube-dl and mkv-info. The results show that chunk sizes for adjacent qualities have higher correlation, and that videos with similarity between scenes have higher correlation. In addition, 360° videos differ from regular 2D videos primarily in the number of quality levels offered and in a generally higher correlation across all qualities.
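The core measurement in this kind of analysis is the correlation between per-chunk sizes across quality levels. As a rough illustration, and not the authors' actual scripts, the sketch below computes a Pearson correlation between two chunk-size sequences; the byte counts are hypothetical stand-ins for values that would be parsed from the downloaded streams.

```python
import numpy as np

def chunk_size_correlation(sizes_a, sizes_b):
    """Pearson correlation between two equal-length chunk-size sequences."""
    a = np.asarray(sizes_a, dtype=float)
    b = np.asarray(sizes_b, dtype=float)
    return float(np.corrcoef(a, b)[0, 1])

# Hypothetical per-chunk sizes (bytes) for two adjacent quality levels of
# the same video; in the thesis these would come from the downloaded
# streams rather than being hard-coded.
sizes_720p = [412_331, 388_102, 455_910, 298_441, 510_223]
sizes_1080p = [801_556, 742_894, 880_137, 577_902, 990_415]

print(f"correlation 720p vs 1080p: {chunk_size_correlation(sizes_720p, sizes_1080p):.3f}")
```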
12

Blind image and video quality assessment using natural scene and motion models

Saad, Michele Antoine 05 November 2013 (has links)
We tackle the problems of no-reference/blind image and video quality evaluation. The approach we take is that of modeling the statistical characteristics of natural images and videos, and utilizing deviations from those natural statistics as indicators of perceived quality. We propose a probabilistic model of natural scenes and a probabilistic model of natural videos to drive our image and video quality assessment (I/VQA) algorithms, respectively. The VQA problem is considerably different from the IQA problem since it imposes a number of challenges on top of those faced in IQA, namely the challenges arising from the temporal dimension of video, which plays an important role in influencing human perception of quality. We compare our IQA approach to the state of the art in blind, reduced-reference and full-reference methods, and we show that it is the top performer. We compare our VQA approach to the state of the art in reduced-reference and full-reference methods (no blind VQA methods that perform reliably well exist), and show that our algorithm performs as well as the top-performing full- and reduced-reference algorithms in predicting human judgments of quality.
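A minimal sketch of the general idea, deviations from a natural-statistics model signalling distortion, is shown below. It is not the model proposed in the dissertation: it simply fits a generalized Gaussian to a set of coefficients and reports how far the fitted shape parameter falls from an assumed "natural" reference value, with synthetic data standing in for real image or video coefficients.

```python
import numpy as np
from scipy.stats import gennorm

def naturalness_deviation(coeffs, natural_shape=2.0):
    """Fit a generalized Gaussian to coefficients and report how far the
    fitted shape parameter falls from a reference 'natural' value
    (natural_shape=2.0 is an illustrative assumption)."""
    beta, loc, scale = gennorm.fit(coeffs)
    return abs(beta - natural_shape), beta

# Illustrative data: heavier-tailed coefficients mimic a distortion that
# pushes the distribution away from the assumed natural shape.
rng = np.random.default_rng(0)
pristine_like = rng.normal(size=10_000)    # close to the reference shape
distorted_like = rng.laplace(size=10_000)  # heavier tails

for name, c in [("pristine-like", pristine_like), ("distorted-like", distorted_like)]:
    dev, beta = naturalness_deviation(c)
    print(f"{name}: fitted shape={beta:.2f}, deviation={dev:.2f}")
```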
13

Adaptive video transmission over wireless channels with optimized quality of experiences

Chen, Chao, active 2013 18 February 2014 (has links)
Video traffic is growing rapidly in wireless networks. Unlike ordinary data traffic, video streams have higher data rates and tighter delay constraints. The ever-varying throughput of wireless links, however, cannot support continuous video playback if the video data rate is kept at a high level. To this end, adaptive video transmission techniques are employed to reduce the risk of playback interruptions by dynamically matching the video data rate to the varying channel throughput. In this dissertation, I develop new models to capture viewers' quality of experience (QoE) and design adaptive transmission algorithms that optimize the QoE. The contributions of this dissertation are threefold. First, I develop a new model of viewers' QoE in rate-switching systems, in which the video source rate is adapted every several seconds. The model predicts an important aspect of QoE, the time-varying subjective quality (TVSQ), i.e., the up-to-the-moment subjective quality of a video as it is played. I first build a database of rate-switching videos and measure TVSQs via a subjective study. Then, I parameterize and validate the TVSQ model using the measured TVSQs. Finally, based on the TVSQ model, I design an adaptive rate-switching algorithm that optimizes the time-averaged TVSQs of wireless video users. Second, I propose an adaptive video transmission algorithm to optimize the Overall Quality (OQ) of rate-switching videos, i.e., the viewers' judgement of the quality of the whole video. Through the subjective study, I find that the OQ is strongly correlated with the empirical cumulative distribution function (eCDF) of the video quality perceived by viewers. Based on this observation, I develop an adaptive video transmission algorithm that maximizes the number of video users who satisfy given constraints on the eCDF of perceived video quality. Third, I propose an adaptive transmission algorithm for scalable videos. Unlike rate-switching systems, scalable videos support rate adaptation for each video frame. The proposed algorithm maximizes the time-averaged video quality while maintaining continuous video playback. When the channel throughput is high, the algorithm increases the video data rate to improve video quality. Otherwise, it decreases the video data rate to buffer more video data and reduce the risk of playback interruption. Simulation results show that the performance of the proposed algorithm is close to a performance upper bound.
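As a toy stand-in for the QoE-optimized policies described above (not the dissertation's algorithms), the sketch below shows the basic buffer-aware trade-off: step the rate up only when throughput and buffer allow, and back off to rebuild the buffer when it runs low. The rate ladder and thresholds are assumptions.

```python
def choose_bitrate(available_rates, throughput_bps, buffer_s,
                   low_buffer_s=5.0, high_buffer_s=15.0):
    """Pick the highest rate the throughput supports, but back off when the
    playback buffer runs low and only go to the top when it is comfortably
    full. A simplified illustration, not the QoE-optimized policy."""
    affordable = [r for r in available_rates if r <= throughput_bps]
    if not affordable:
        return min(available_rates)
    if buffer_s < low_buffer_s:
        return min(affordable)          # rebuild the buffer first
    if buffer_s > high_buffer_s:
        return max(affordable)          # plenty of margin: maximize quality
    return sorted(affordable)[len(affordable) // 2]  # middle ground

rates = [500_000, 1_000_000, 2_500_000, 5_000_000]  # bps, assumed ladder
print(choose_bitrate(rates, throughput_bps=3_000_000, buffer_s=20.0))  # 2500000
print(choose_bitrate(rates, throughput_bps=3_000_000, buffer_s=3.0))   # 500000
```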
14

Natural scene statistics based blind image quality assessment in spatial domain

Mittal, Anish 05 August 2011 (has links)
We propose a natural scene statistics based quality assessment model, the Referenceless Image Spatial QUality Evaluator (RISQUE), which extracts marginal statistics of local normalized luminance signals and measures the 'un-naturalness' of the distorted image from their measured deviations. We also model the distribution of pairwise products of adjacent normalized luminance signals, which provides orientation-dependent distortion information. Although multi-scale, the model is defined in the spatial domain, avoiding costly frequency or wavelet transforms. The framework is simple, fast, based on human perception, and shown to perform statistically better than other proposed no-reference algorithms and the full-reference structural similarity index (SSIM).
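A sketch of the kind of local luminance normalization such spatial-domain models rely on is given below; it computes mean-subtracted, contrast-normalized coefficients and the pairwise products of horizontally adjacent coefficients. The window width and stabilizing constant are assumptions, and this is not the exact RISQUE implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image, sigma=7/6, c=1.0):
    """Mean-subtracted, contrast-normalized (MSCN) coefficients of a
    grayscale image: (I - local_mean) / (local_std + c)."""
    img = image.astype(float)
    mu = gaussian_filter(img, sigma)
    sigma_map = np.sqrt(np.abs(gaussian_filter(img * img, sigma) - mu * mu))
    return (img - mu) / (sigma_map + c)

# Illustrative input: a random "image"; in practice this would be the
# luminance channel of the distorted image under test.
rng = np.random.default_rng(0)
coeffs = mscn(rng.uniform(0, 255, size=(64, 64)))

# Pairwise products of horizontally adjacent coefficients, whose
# distribution carries the orientation information mentioned above.
horizontal_products = coeffs[:, :-1] * coeffs[:, 1:]
print(coeffs.std(), horizontal_products.mean())
```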
15

Systematic Overview of Savings versus Quality for H.264/SVC / Systematisk översikt över besparingar kontra kvalitet för H.264/SVC.

Varisetty, Tilak, Edara, Praveen January 2012 (has links)
The demand for efficient video coding techniques has increased in recent years, resulting in the evolution of various video compression techniques. SVC (Scalable Video Coding) is a recent amendment to H.264/AVC (Advanced Video Coding) which adds a new dimension by allowing a video stream to be encoded as a combination of sub-streams that are scalable in spatial resolution, temporal resolution and quality. This scalability makes the technique effective in a network scenario where the client can decode a sub-stream depending on the bandwidth available in the network. A graceful degradation in video quality is expected when any of the spatial, temporal or quality layers is removed; still, the amount of degradation has to be measured in terms of Quality of Experience (QoE) from the user's perspective. To measure this degradation, video streams consisting of different spatial and temporal layers were extracted and the layers were removed one by one, starting from the highest dependency layer (the enhancement layer) and ending with the lowest dependency layer (the base layer). Extracting a temporally downsampled layer posed challenges with frame interpolation, which were overcome by employing temporal interpolation. Similarly, a spatially downsampled layer was upsampled in the spatial domain so that it could be compared with the original stream. An objective video quality assessment was then made by comparing the extracted sub-stream containing fewer layers, downsampled both spatially and temporally, with the original stream containing all layers. Mean Opinion Scores (MOS) were obtained from the objective tool Perceptual Evaluation of Video Quality (PEVQ). The experiment was carried out for each layer and for different test videos. Subjective tests were also performed to evaluate the user experience. The results provide recommendations to an SVC-capable router about the video quality available for each layer, so that a network transcoder can transmit a specific layer depending on the network conditions and the capabilities of the decoding device.
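The objective comparison step can be illustrated with a small sketch: a spatially downsampled frame is upsampled back to the original resolution and compared with the original. Since PEVQ is a proprietary tool, plain PSNR stands in for it here, the frames are synthetic, and nearest-neighbor repetition is an assumed upsampling method.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio between two equally sized frames."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Synthetic stand-in for a decoded frame of the full stream.
rng = np.random.default_rng(1)
original = rng.uniform(0, 255, size=(144, 176))

# Simulate the spatial base layer (half resolution) and upsample it back
# with nearest-neighbor repetition so it can be compared to the original.
base_layer = original[::2, ::2]
upsampled = np.repeat(np.repeat(base_layer, 2, axis=0), 2, axis=1)

print(f"PSNR of upsampled base layer vs. original: {psnr(original, upsampled):.2f} dB")
```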
16

Remote desktop protocols : A comparison of Spice, NX and VNC

Hagström, Martin January 2012 (has links)
This thesis compares the remote desktop protocol Spice to NX and VNC with respect to the user experience when viewing multimedia content. By measuring the quality of the protocols while playing a video in a slow-motion benchmark and at ordinary speed, it is shown that Spice delivers low video quality compared to VNC. It is likely that, due to the large amount of data sent, Spice does not manage to deliver a high-quality user experience.
17

Video Quality Metric improvement using motion and spatial masking

Näkne, Henrik January 2016 (has links)
Objective video quality assessment is of great importance in video compression and other video processing applications. In today's encoders, Peak Signal to Noise Ratio (PSNR) or Sum of Absolute Differences is often used, though these metrics have limited correlation with perceived quality. In this thesis, other block-based quality measures are evaluated that show superior performance on compression distortion when correlated with Mean Opinion Scores. The major results are that Block-based Visual Information Fidelity with optical flow and intra-frame Gaussian weighting outperforms PSNR, VIF, and SSIM. Also, a block-based weighted Mean Squared Error method is proposed that performs better than PSNR and SSIM, though not VIF or BB-VIF, with the advantage of high locality, which is useful in video encoding. The aforementioned weighting methods have not been evaluated with SSIM, which is proposed for further studies.
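In the spirit of the block-based weighted MSE described above (not the thesis's exact formulation), the sketch below pools block-wise MSE with a Gaussian weight centred on the frame; the block size and weighting width are assumptions.

```python
import numpy as np

def block_weighted_mse(ref, dist, block=16, sigma_frac=0.3):
    """Block-wise MSE pooled with a Gaussian weight centred on the frame,
    giving central blocks more influence than peripheral ones."""
    h, w = ref.shape
    cy, cx = h / 2.0, w / 2.0
    sigma = sigma_frac * min(h, w)
    num, den = 0.0, 0.0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            r = ref[y:y + block, x:x + block].astype(float)
            d = dist[y:y + block, x:x + block].astype(float)
            mse = np.mean((r - d) ** 2)
            # Gaussian weight by distance of the block centre from the frame centre.
            dy, dx = y + block / 2.0 - cy, x + block / 2.0 - cx
            wgt = np.exp(-(dy * dy + dx * dx) / (2.0 * sigma * sigma))
            num += wgt * mse
            den += wgt
    return num / den

rng = np.random.default_rng(2)
ref = rng.uniform(0, 255, size=(128, 128))
dist = ref + rng.normal(0, 5, size=ref.shape)   # mild synthetic distortion
print(f"block-weighted MSE: {block_weighted_mse(ref, dist):.2f}")
```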
18

Modeling of Video Quality for Automatic Video Analysis and Its Applications in Wireless Camera Networks

Kong, Lingchao 01 October 2019 (has links)
No description available.
19

Kvalita obrazu a služeb v širokopásmových multimediálních sítích a systémech budoucnosti / Video and Data Services Quality in the Future Broadband Multimedia Systems and Networks

Kufa, Jan January 2018 (has links)
The doctoral thesis focuses on the analysis of signal processing in future broadband multimedia networks and systems, which are expected to include ultra-high-definition (UHDTV), high-frame-rate (HFR), and stereoscopic (3D) systems. These systems will enable highly efficient source compression of video, audio, and data, as well as their highly efficient transmission, both in free-to-air broadcasting (e.g., DVB-T2) and in pay-TV services (e.g., IPTV). The aim of the thesis is to analyze and evaluate the image and service quality in these systems on the basis of objective metrics and subjective tests. The thesis further focuses on the analysis of perceived quality in stereoscopic television, the coding efficiency of modern stereoscopic encoders, and the influence of the sequences on viewer comfort.
20

Adaptive Video Streaming : Adapting video quality to radio links with different characteristics

Eklöf, William January 2008 (has links)
During the last decade, the data rates provided by mobile networks have improved to the point that it is now feasible to provide richer services, such as streaming multimedia, to mobile users. However, due to factors such as radio interference and cell load, the throughput available to a client varies over time. If it decreases below the media's bit rate, the client's buffer will eventually become empty. This causes the client to enter a period of rebuffering, which degrades the user experience. To avoid this, a streaming server may provide the media at different bit rates, thereby allowing the media's bit rate (and quality) to be modified to fit the client's bandwidth. This is referred to as adaptive streaming. The aim of this thesis is to devise an algorithm that finds the media quality most suitable for a specific client, focusing on how to detect that the user is able to receive content at a higher rate. The goal of such an algorithm is to avoid depleting the client buffer while utilizing as much of the available bandwidth as possible without overflowing the buffers in the network. In particular, this thesis looks into the difficult problem of how to do adaptation for live content and how to switch to a content version with higher bit rate and quality in an optimal way. The thesis examines whether existing adaptation mechanisms can be improved by considering the characteristics of different mobile networks. To this end, a study of mobile networks currently in use has been conducted, together with experiments with streaming over live networks. The experiments and the study indicate that increased available throughput cannot be detected by passive monitoring of client feedback. Furthermore, a higher data rate carrier will not be allocated to a client in 3G networks unless the client is sufficiently utilizing the current carrier. This means that a streaming server must modify its sending rate in order to find its maximum throughput and to force allocation of a higher data rate carrier. Different methods for achieving this are examined and discussed, and an algorithm based upon these ideas was implemented and evaluated. It is shown that increasing the transmission rate by introducing stuffed packets in the media stream allows the server to find the optimal bit rate for live video streams without switching up to a bit rate which the network cannot support. This thesis was carried out during the summer and autumn of 2008 at Ericsson Research, Multimedia Technologies in Kista, Sweden.
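The up-switching idea, probing above the current media rate with stuffed (padding) packets and switching up only once the probed throughput is sustained, can be sketched as a simple decision rule; the margin, sample count, and probe values below are hypothetical.

```python
def should_switch_up(next_rate_bps, probed_throughputs_bps,
                     margin=1.1, required_samples=5):
    """Decide whether to switch up to the next bitrate after a probing phase
    in which padding ('stuffed') packets raised the sending rate above the
    current media rate. Switch only if the last few probes all sustained the
    next rate with some margin."""
    recent = probed_throughputs_bps[-required_samples:]
    return (len(recent) == required_samples and
            all(t >= margin * next_rate_bps for t in recent))

# Hypothetical throughput estimates (bps) measured while padding was injected.
probes = [335_000, 341_000, 338_000, 346_000, 352_000]
print(should_switch_up(next_rate_bps=300_000, probed_throughputs_bps=probes))       # True
print(should_switch_up(next_rate_bps=300_000, probed_throughputs_bps=probes[:3]))   # False: too few probes
```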
