1

Implementation of Measurement Module For Seamless Vertical Handover

Ickin, Selim January 2010
Research on heterogeneous seamless handover has grown in popularity since mobility was introduced into wireless networking systems. The typical vertical handover mechanism today is an architecture built at Layer 3 and implemented to serve different technologies. Although this approach has the advantage of simplicity, it has drawbacks, such as the need to adapt the network architecture to each network technology and system. Transparency in heterogeneous seamless handover can be achieved by conducting the handover process at a higher layer. In this way, vertical handover decisions can take more constraints into account, leading to high performance, good accessibility and low cost. Making this possible requires assessing Quality of Experience (QoE) and obtaining up-to-date throughput information through measurements in the network. Analyzing and interpreting the statistics collected through these measurements is vital for deciding whether and when to perform a vertical handover. This thesis covers the implementation of a measurement module in two different approaches (a Payload Dependent Approach and a Payload Independent Approach) that provides these statistics to the storage module of the PERIMETER project.
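The thesis's measurement module itself is not reproduced in this record, but the payload-independent idea — sampling interface byte counters over a window to estimate throughput — can be sketched in a few lines. This is a minimal, Linux-only illustration; the interface name, the one-second window and the /proc/net/dev parsing are assumptions of the sketch, not details from the thesis:

```python
import time

def read_rx_bytes(interface: str) -> int:
    """Return the cumulative received-byte counter for a network interface
    (Linux-specific: parsed from /proc/net/dev)."""
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(interface + ":"):
                return int(line.split(":")[1].split()[0])
    raise ValueError(f"interface {interface!r} not found")

def measure_throughput(interface: str = "wlan0", window_s: float = 1.0) -> float:
    """Estimate downlink throughput (bit/s) over one measurement window,
    without inspecting packet payloads."""
    start = read_rx_bytes(interface)
    time.sleep(window_s)
    end = read_rx_bytes(interface)
    return (end - start) * 8 / window_s

if __name__ == "__main__":
    # Collect a short series of samples, as a handover decision module might.
    samples = [measure_throughput("wlan0", 1.0) for _ in range(5)]
    print(f"mean throughput: {sum(samples) / len(samples) / 1e6:.2f} Mbit/s")
```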
2

Objective and subjective assessment of video quality over IP network with packet delay variation

Coaquira Begazo, Dante 04 October 2012
Nowadays there is a wide range of telecommunications services focused on the transmission of voice, video and data across complex networks, yet in many cases the end user is not served with an acceptable level of quality. This work assesses how a video streaming service over an Internet Protocol (IP) network is affected by adverse network conditions such as packet delay variation (jitter). Results from objective and subjective assessments of streamed video show that video quality is directly affected by IP network degradation factors such as packet delay variation, and that scenes with more motion are more sensitive to it. The tests use a fully isolated network emulation scenario in which different network conditions are parameterized: several values of packet delay variation are configured on the transmission channel, yielding a database of videos with different degrees of quality degradation. These videos are evaluated with the subjective methods Absolute Category Rating (ACR) and Degradation Category Rating (DCR) and with the objective metrics Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM) and Video Quality Metric (VQM). Correlation coefficients, together with the prediction error and the mean square error, are used to gauge the performance of the objective metrics against the subjective ones. From the results, ranges of delay variation values are established for which the video quality is, or is not, acceptable to the end user. The resulting video database, spanning different degrees of quality degradation, can be used in future research.
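Among the objective metrics named above, PSNR is simple enough to restate as code. A minimal per-frame sketch follows; the synthetic frames merely stand in for a reference decode and a jitter-degraded one and are not data from this work:

```python
import numpy as np

def psnr(reference: np.ndarray, degraded: np.ndarray, max_value: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio between two frames of equal shape, in dB."""
    mse = np.mean((reference.astype(np.float64) - degraded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_value ** 2 / mse)

# Synthetic 8-bit frame and a noisy copy standing in for a degraded decode.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
deg = np.clip(ref.astype(np.int16) + rng.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, deg):.2f} dB")  # per-frame; sequence PSNR averages over frames
```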
3

Design of Variation-Tolerant Circuits for Nanometer CMOS Technology: Circuits and Architecture Co-Design

Abu-Rahma, Mohamed Hassan 11 1900
Aggressive scaling of CMOS technology in sub-90nm nodes has created huge challenges. Variations due to fundamental physical limits, such as random dopant fluctuation (RDF) and line edge roughness (LER), are increasing significantly with technology scaling. In addition, manufacturing tolerances in process technology are not scaling at the same pace as the transistor channel length, due to process control limitations (e.g., sub-wavelength lithography). Therefore, within-die process variations worsen with successive technology generations. These variations have a strong impact on the maximum clock frequency and leakage power of any digital circuit, and can also cause functional yield losses in variation-sensitive digital circuits (such as SRAM). Moreover, in nanometer technologies, digital circuits show an increased sensitivity to process variations due to low-voltage operation requirements, which are aggravated by the strong demand for lower power consumption and cost alongside higher performance and density. It is therefore not surprising that the International Technology Roadmap for Semiconductors (ITRS) lists variability as one of the most challenging obstacles for IC design in the nanometer regime. To facilitate variation-tolerant design, we study the impact of random variations on the delay variability of a logic gate and derive simple and scalable statistical models to evaluate delay variations in the presence of within-die variations. This work provides new design insight and highlights the importance of accounting for the effect of input slew on delay variations, especially at lower supply voltages. The derived models are simple, scalable and bias dependent, and require only easily measurable parameters, which makes them useful in early design exploration, circuit/architecture optimization and technology prediction (especially for low-power and low-voltage operation). The models are verified with Monte Carlo SPICE simulations in an industrial 90nm technology. Random variations are among the largest design considerations in nanometer technologies. This is especially true for SRAM, due to the large variations in bitcell characteristics: SRAM bitcells typically have the smallest device sizes on a chip and therefore show the largest sensitivity to different sources of variation. With the drastic increase in memory densities, lower supply voltages and higher variations, statistical simulation methodologies become imperative for estimating memory yield and optimizing performance and power. In this research, we present a methodology for statistical simulation of SRAM read access yield, which is tightly related to SRAM performance and power consumption. The proposed flow accounts for the impact of bitcell read current variation, sense amplifier offset distribution, timing window variation and leakage variation on functional yield, and it overcomes the pessimism of the conventional worst-case design techniques used in SRAM design. The methodology allows early yield prediction in the design cycle, which can be used to trade off performance and power requirements for SRAM, and it is verified using measured silicon yield data from a 1Mb memory fabricated in an industrial 45nm technology. Embedded SRAM dominates modern SoCs, and there is a strong demand for SRAM with lower power consumption that still achieves high performance and high density. However, in the presence of large process variations, SRAMs are expected to consume more power to ensure correct read operation and meet yield targets. We propose a new architecture that significantly reduces SRAM array switching power by combining built-in self-test (BIST) and digitally controlled delay elements to reduce the wordline pulse width while ensuring correct read operation. A new statistical simulation flow was developed to evaluate the power savings, and Monte Carlo simulations of a 1Mb SRAM macro in an industrial 45nm technology were used to examine the power reduction achieved by the system. The proposed architecture reduces array switching power significantly, especially as the chip-level memory density increases: for a 48Mb memory density, a 27% reduction in array switching power can be achieved for a read access yield target of 95%. In addition, the power savings grow as process variations increase, which makes the approach very attractive for 45nm and below technologies. Beyond its impact on bitcell read current, the increase in local variations in nanometer technologies strongly affects SRAM cell stability. We therefore propose a novel single-supply-voltage read assist technique to improve the SRAM static noise margin (SNM). The technique precharges different parts of the bitlines to VDD and GND and uses charge sharing to precisely control the bitline voltage, which improves bitcell stability. Besides improving SNM, it also reduces memory access time, and because it requires only one supply voltage it eliminates the need for large-area voltage shifters. The technique has been implemented in a 512kb memory fabricated in 45nm technology; results show improvements in SNM and in the read operation window, confirming its effectiveness and robustness.
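As a toy illustration of one of the abstract's points — that delay variability worsens at lower supply voltages — the classic alpha-power-law delay model can be sampled under random threshold-voltage variation. This is not the thesis's derived statistical model, and all parameter values below are illustrative:

```python
import numpy as np

# Monte Carlo sketch: gate delay sensitivity to random Vth variation (e.g.,
# from RDF), using the alpha-power-law delay model
#   delay ∝ Vdd / (Vdd - Vth)^alpha.
rng = np.random.default_rng(42)
VDD, ALPHA = 1.0, 1.3            # supply voltage (V), velocity-saturation exponent
VTH_MEAN, VTH_SIGMA = 0.3, 0.03  # threshold-voltage mean and local sigma (V)
N = 100_000                      # Monte Carlo samples

vth = rng.normal(VTH_MEAN, VTH_SIGMA, N)
delay = VDD / (VDD - vth) ** ALPHA   # arbitrary units
delay /= delay.mean()                # normalize to the nominal delay
print(f"sigma/mu of delay at VDD={VDD} V: {delay.std():.3%}")

# Repeat at a lower supply: the same Vth spread hurts much more.
VDD_LOW = 0.6
delay_low = VDD_LOW / (VDD_LOW - vth) ** ALPHA
delay_low /= delay_low.mean()
print(f"sigma/mu of delay at VDD={VDD_LOW} V: {delay_low.std():.3%}")
```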
4

Video quality assessment method based on constrained optimization

Begazo, Dante Coaquira 24 November 2017
This dissertation proposes two objective metrics for estimating human perception of quality for video subject to transmission degradation over packet networks. The first metric uses only traffic data, while the second uses both the degraded and the reference video sequences; that is, the latter is a full-reference (FR) metric called the Quadratic Combinational Metric (QCM), and the former is a no-reference (NR) metric called the Viewing Quality Objective Metric (VQOM). In particular, the design procedure is applied to packet delay variation (PDV) impairments, whose compensation or control is very important for maintaining quality. The NR metric is described by a cubic spline composed of two cubic polynomials that meet smoothly at a point called a knot. As the first step in the design of either metric, viewers score a training set of degraded video sequences. The objective function for designing the NR metric includes the total square error between the scores and their parametric estimates, still regarded as algebraic expressions. It is then augmented with three equality constraints on the derivatives at the knot, whose position is specified within a fine grid of points between the minimum and maximum values of the degradation factor. These constraints are weighted by Lagrange multipliers and added to the objective function to obtain the Lagrangian, which is minimized to determine suboptimal polynomial coefficients as a function of each knot position in the grid. Finally, the knot value that yields the minimum square error is selected, fixing the final values of the polynomial coefficients. The FR metric, in turn, is a nonlinear combination of two popular metrics, the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM). A complete second-degree polynomial in two variables is used for the combination, since it is sensitive to both constituent metrics while its low degree avoids overfitting. In the training phase, the polynomial coefficients are determined by minimizing the mean square error with respect to the opinions in the training database. Both metrics, the VQOM and the QCM, are trained and validated on one database and tested on an independent one. The test results are compared with recent NR and FR metrics by means of correlation coefficients, with favorable results for the proposed metrics.
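The FR metric's construction — least-squares fitting of a complete second-degree two-variable polynomial of PSNR and SSIM to subjective scores — can be sketched directly. The scores below are fabricated placeholders standing in for a training database; only the fitting procedure mirrors the description:

```python
import numpy as np

# QCM-style combination: q(p, s) = a0 + a1*p + a2*s + a3*p^2 + a4*p*s + a5*s^2,
# with (p, s) = (PSNR, SSIM), least-squares fitted to subjective scores.
rng = np.random.default_rng(1)
psnr = rng.uniform(25, 45, 200)            # dB, toy values
ssim = rng.uniform(0.6, 0.99, 200)
mos = 1 + 4 * (0.4 * (psnr - 25) / 20 + 0.6 * (ssim - 0.6) / 0.39)  # toy ground truth
mos += rng.normal(0, 0.2, mos.shape)       # viewer noise

# Design matrix for the complete second-degree polynomial in two variables.
X = np.column_stack([np.ones_like(psnr), psnr, ssim,
                     psnr**2, psnr * ssim, ssim**2])
coeffs, *_ = np.linalg.lstsq(X, mos, rcond=None)  # minimizes mean square error

pred = X @ coeffs
r = np.corrcoef(pred, mos)[0, 1]           # Pearson correlation, as in the evaluation
print(f"fitted coefficients: {np.round(coeffs, 4)}")
print(f"Pearson correlation on training data: {r:.3f}")
```

In the dissertation's protocol the fit is performed on one database and the correlation reported on an independent test database, which this toy example does not reproduce.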
5

A Study of Factors Which Influence QoD of HTTP Video Streaming Based on Adobe Flash Technology

Sun, Bin, Uppatumwichian, Wipawat January 2013
Recently, there has been a significant rise in Hyper-Text Transfer Protocol (HTTP) video streaming usage worldwide. However, knowledge of HTTP video streaming performance is still limited, especially regarding the factors that affect video quality, because HTTP video streaming has different characteristics from other video streaming systems. In this thesis, we show how the delivered quality of a Flash video playback is affected by factors from diverse layers of the video delivery system, including the congestion control algorithm, delay variation, playout buffer length, video bitrate and so on. We introduce Quality of Delivery Degradation (QoDD) and use it to quantify how much the Quality of Delivery (QoD) is degraded. The study is conducted in a dedicated controlled environment in which we alter the influential factors and measure the outcome. We then use statistical methods to analyze the data and capture the relationships between the influential factors and the quality of video delivery in mathematical models. The results show that the status and choice of these factors have a significant impact on QoD. With proper control of the factors, the quality of delivery can be improved: by approximately 24% through TCP memory size, 63% through the congestion control algorithm, 30% through delay variation, 97% through delay when delay variation is also considered, 5% through loss and 92% through video bitrate.
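The thesis's testbed scripts are not part of this record, but delay and delay variation of the kind studied here are commonly imposed with Linux netem in such controlled environments. A sketch under that assumption; the interface name and the delay values are placeholders, and root privileges are required:

```python
import subprocess

def set_delay_variation(interface: str, mean_ms: int, jitter_ms: int) -> None:
    """Apply netem delay with jitter on an egress interface.

    'replace' installs or updates the qdisc; 'distribution normal' draws
    per-packet delays from a normal distribution around the mean.
    """
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", interface, "root", "netem",
         "delay", f"{mean_ms}ms", f"{jitter_ms}ms", "distribution", "normal"],
        check=True,
    )

def clear(interface: str) -> None:
    """Remove the netem qdisc, restoring normal forwarding."""
    subprocess.run(["tc", "qdisc", "del", "dev", interface, "root"], check=True)

if __name__ == "__main__":
    set_delay_variation("eth0", mean_ms=100, jitter_ms=20)  # 100 ms ± 20 ms
    # ... run the streaming session and record the QoDD measure here ...
    clear("eth0")
```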
6

Random Local Delay Variability: On-chip Measurement and Modeling

Das, Bishnu Prasad 06 1900
This thesis focuses on the measurement and modeling of random local delay variability. It describes a circuit technique for measuring individual logic gate delays in silicon in order to study within-die variation, and it proposes a Process, Voltage and Temperature (PVT)-aware gate delay model for voltage- and temperature-scalable linear Statistical Static Timing Analysis (SSTA). Technology scaling allows billions of transistors to be packed into a single chip, but it is difficult to fabricate very small transistors with deterministic characteristics, which leads to variations. Transistor-level random local variations are growing rapidly with each technology generation, yet they must be quantified in silicon. We propose an all-digital circuit technique to measure the on-chip delay of an individual logic gate (inverting or non-inverting) in its unmodified form, based on a reconfigurable ring oscillator structure. A test chip fabricated in a 65nm technology node demonstrates the feasibility of the technique. Delay measurements of nominally identical inverters in close physical proximity show variations of up to 28%, indicating the large impact of local variations. This large random delay variation in silicon motivates the inclusion of random local process parameters in the delay model. In today's low-power designs, multiple supply domains lead to a non-uniform supply profile, and non-uniform switching activity across the chip causes temperature variation. Accurate timing prediction therefore requires a PVT-aware delay model. We use neural networks, which are well known for their ability to approximate any arbitrary continuous function, and we show how the model can be used to derive the sensitivities required for voltage- and temperature-scalable linear SSTA at an arbitrary voltage and temperature point. Applying this SSTA to the ISCAS 85 benchmarks shows promising results: the average error in mean delay is less than 1.08%, the average error in standard deviation is less than 2.65%, and the errors in predicting the 99% and 1% probability points are 1.31% and 1% respectively, with respect to SPICE.
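The reconfigurable ring-oscillator idea rests on simple frequency-to-delay arithmetic, sketched below. This is not the thesis's circuit: a real implementation must keep the ring's inversion parity and calibrate out multiplexer and routing delay, and the numbers here are illustrative rather than silicon data:

```python
N_BASE = 31  # stages in the baseline ring (odd, so it oscillates)

def stage_delay_ps(f_hz: float, n_stages: int = N_BASE) -> float:
    """Average per-stage delay from the classic relation f = 1 / (2 * N * t_stage)."""
    return 1.0 / (2.0 * n_stages * f_hz) * 1e12

def dut_delay_ps(f_base_hz: float, f_with_dut_hz: float) -> float:
    """Delay of one added inverting stage (the DUT), from the period change.

    Each edge traverses every stage twice per oscillation period, so one
    extra stage adds 2 * t_dut to the period: t_dut = (T_with - T_base) / 2.
    """
    return (1.0 / f_with_dut_hz - 1.0 / f_base_hz) / 2.0 * 1e12

# Illustrative numbers: a 31-stage ring at ~800 MHz slows to ~780 MHz when
# the device under test is switched into the loop.
print(f"average stage delay ≈ {stage_delay_ps(800e6):.1f} ps")
print(f"DUT delay ≈ {dut_delay_ps(800e6, 780e6):.1f} ps")
```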
