11

Peer-to-peer stream merging for stored multimedia

Zhu, Qing 02 May 2007 (has links)
In recent years, with the rapid growth in the resource capability of both the Internet and personal computers, multimedia applications such as video-on-demand (VOD) streaming have grown dramatically and are seen as potential killer applications of the current and next-generation Internet. Scalable deployment of these applications has become an important problem because of the potentially high server and network bandwidth they require.

The conventional approach in a VOD streaming system dedicates a media stream to each client request, which is not scalable in a wide-area delivery system serving potentially very large numbers of clients. Recently, various efficient delivery techniques have been proposed to improve the scalability of VOD delivery systems. One approach is to use a scalable delivery protocol based on multicast, such as periodic broadcast or stream merging. These protocols have mostly been developed for single-server systems and attempt to have each media stream serve as many clients as possible, so as to minimize the required server and network bandwidth. However, the performance improvements possible with techniques that deliver all streams from a single server are limited, especially with respect to the required network bandwidth. Another approach is based on proxy caching and content replication, as in content delivery networks (CDNs). Although this approach can effectively distribute load across multiple CDN servers, its cost may be high.

With the goal of further improving system efficiency with respect to server and network bandwidth requirements, a new scalable streaming protocol is developed in this work. It adapts a previously proposed technique called hierarchical multicast stream merging (HMSM) to use a peer-to-peer delivery approach. To make media delivery more efficient, the conventional early merging policy associated with HMSM is extended to be compatible with the peer-to-peer environment, and various peer selection policies are designed for the initiation of media streams. The impact of limited peer resource capability is also studied. In the performance study, a number of simulation experiments are conducted to evaluate the performance of the new protocol and the various design policies, and promising results are reported.
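As background for how stream merging works, the sketch below illustrates the basic idea behind an early-merging policy: a newly started stream listens to (and eventually merges with) the closest still-active earlier stream, so that both can subsequently be served by a single stream. This is an illustrative simplification of HMSM, not the protocol from the thesis; the class and field names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch of an early-merging target choice (hypothetical names, not the thesis protocol). */
public class StreamMergingSketch {

    /** An active stream, identified by the time it started. */
    static class Stream {
        final double startTime;   // when this stream began, in seconds
        boolean active = true;    // still transmitting and available as a merge target
        Stream(double startTime) { this.startTime = startTime; }
    }

    /**
     * Early merging: a client arriving at time 'now' starts its own stream but also
     * listens to the closest earlier stream that is still active. Once the new client
     * has caught up on the gap (now - target.startTime), its own stream can terminate
     * and both clients share the target stream.
     */
    static Stream chooseMergeTarget(List<Stream> streams, double now) {
        Stream best = null;
        for (Stream s : streams) {
            if (s.active && s.startTime < now) {
                if (best == null || s.startTime > best.startTime) {
                    best = s;  // closest (most recent) earlier active stream
                }
            }
        }
        return best;  // null means no merge target: serve the client with a full stream only
    }

    public static void main(String[] args) {
        List<Stream> streams = new ArrayList<>();
        streams.add(new Stream(0.0));
        streams.add(new Stream(40.0));
        Stream target = chooseMergeTarget(streams, 55.0);
        // A client arriving at t=55 merges toward the stream started at t=40:
        // it must catch up on 15 seconds of content before the merge completes.
        System.out.println("Merge target started at t=" + target.startTime);
    }
}
```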
12

The Scheduling Policy with Bandwidth Balancing for Video-on-Demand Systems

Sung, Hsin-Hung 24 August 2005 (has links)
As streaming video and audio over the Internet become popular, the deployment of a large-scale multimedia streaming application requires an enormous amount of server and network resources. In a video-on-demand environment, batching of video requests is often used to reduce I/O demand and improve throughput. Since users may leave if they experience long waits, a good video scheduling policy needs to consider not only the batch size but also user defection probabilities and wait times. Common scheduling policies include first-come-first-served (FCFS), maximum queue length (MQL), and maximum factored queue length (MFQL). However, these schemes may repeatedly select the same popular videos, so users waiting for other videos never receive their video segments and may leave after waiting a long time. In this paper, we propose a batching policy that schedules videos using the bandwidth-balancing concept from DQDB networks; we refer to it as the SPBB policy. Our goal is to ensure that users receive their video segments and do not leave the video-on-demand system.
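To make the baseline policies concrete, here is a small sketch of how MQL and MFQL might select the next video to serve from per-video request queues. The factoring of queue length by the square root of relative access frequency follows the usual description of MFQL in the batching literature; the names and the frequency source are assumptions, and this is not the SPBB policy itself.

```java
import java.util.Map;

/** Sketch of batching selection under MQL and MFQL (assumed formulation, not the SPBB policy). */
public class BatchingPolicies {

    /** MQL: pick the video whose waiting-request queue is currently longest. */
    static String selectMQL(Map<String, Integer> queueLengths) {
        String best = null;
        for (Map.Entry<String, Integer> e : queueLengths.entrySet()) {
            if (best == null || e.getValue() > queueLengths.get(best)) best = e.getKey();
        }
        return best;
    }

    /**
     * MFQL: weight each queue length by 1/sqrt(relative access frequency), so that
     * less popular videos are not perpetually starved by having shorter queues.
     */
    static String selectMFQL(Map<String, Integer> queueLengths, Map<String, Double> accessFreq) {
        String best = null;
        double bestScore = -1.0;
        for (Map.Entry<String, Integer> e : queueLengths.entrySet()) {
            double freq = accessFreq.getOrDefault(e.getKey(), 1.0);
            double score = e.getValue() / Math.sqrt(freq);
            if (score > bestScore) { bestScore = score; best = e.getKey(); }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, Integer> queues = Map.of("popularMovie", 8, "nicheMovie", 5);
        Map<String, Double> freq = Map.of("popularMovie", 0.8, "nicheMovie", 0.2);
        System.out.println("MQL picks:  " + selectMQL(queues));        // popularMovie
        System.out.println("MFQL picks: " + selectMFQL(queues, freq)); // nicheMovie: 5/sqrt(0.2) > 8/sqrt(0.8)
    }
}
```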
13

Fault Tolerant Video Services Using Java Media Framework

Tsaur, Gong-Ming 10 July 2001 (has links)
Video on demand (VoD) services are becoming more and more popular today. As high-bandwidth communication infrastructure is being established in many countries around the world, high-bandwidth communication lines will reach millions of families in the near future. Thanks to these improvements in communication technology, more and more families enjoy the VoD services provided over the Internet by telecommunication companies and cable TV providers. In this setting, scalability and fault tolerance become the key issues. We propose an architecture for VoD services based on a multi-server environment. In our proposed architecture, each movie is replicated on a subset of the servers. When a server crashes or becomes disconnected from its clients, it is replaced by another server in a transparent way. To address load balancing, clients are also migrated from one server to another when a new server is brought up. A benefit of our approach is that it uses commodity hardware and general network technologies (e.g., TCP/IP). In addition, we provide a machine-independent environment that lets servers and clients execute on any machine in the network. Furthermore, by exploiting the cross-platform nature of Java, the media player can be obtained through a web browser, so the client host does not need to install any additional applications. The Java Media Framework (JMF) provides a unified architecture and messaging protocol for managing the acquisition, processing, and delivery of time-based media data, and it supports many standard media content types, such as AIFF, AVI, MIDI, MPEG, QuickTime, and WAV. Using JMF, we can create applets and applications to present, manipulate, and store time-based media.
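As a minimal illustration of the JMF API mentioned above, the sketch below creates a player for a media URL and starts playback; the file path is a placeholder and error handling is reduced to the bare minimum. It is not code from the thesis.

```java
import java.awt.BorderLayout;
import javax.media.Manager;
import javax.media.MediaLocator;
import javax.media.Player;
import javax.swing.JFrame;

/** Minimal JMF playback sketch (placeholder media path, not code from the thesis). */
public class SimpleJmfPlayer {
    public static void main(String[] args) throws Exception {
        // createRealizedPlayer blocks until the player is realized and its UI components exist.
        Player player = Manager.createRealizedPlayer(
                new MediaLocator("file:///tmp/sample.mpg"));  // placeholder media file

        JFrame frame = new JFrame("JMF demo");
        frame.setLayout(new BorderLayout());
        if (player.getVisualComponent() != null) {
            frame.add(player.getVisualComponent(), BorderLayout.CENTER);  // video rendering area
        }
        frame.add(player.getControlPanelComponent(), BorderLayout.SOUTH); // play/pause/seek controls
        frame.pack();
        frame.setVisible(true);

        player.start();  // begin playback
    }
}
```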
14

Modeling and Simulation of a Video-on-Demand Network Implementing Adaptive Source-Level Control and Relative Rate Marking Flow Control for the Available Bit Rate Service

Taylor, Elvin Lattis Jr. 16 January 1998 (has links)
The Available Bit Rate (ABR) service class for the Asynchronous Transfer Mode (ATM) protocol was originally designed to manage data traffic. ABR flow control makes no guarantees concerning cell transfer delay or cell delay variation; a closed-loop feedback mechanism is used for traffic management. To use this class of service for video transport, the video source accepts feedback from the network and adapts its source rate based on this status information. The objective of this research is to assess the ability of the ATM ABR service class to deliver Moving Picture Experts Group version 1 (MPEG-1) video. Three approaches to source-level control are compared: (i) arbitrary loss (no control), (ii) selective discard of MPEG B-pictures, and (iii) selective discard of MPEG B- and P-pictures. Performance is evaluated based on end-to-end delay, congested queue occupancy levels, network utilization, and jitter. A description of the investigation, assumptions, limitations, and results of the simulation study is included. / Master of Science
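The selective-discard idea can be pictured as a simple filter in front of the sender: when the rate allowed by network feedback falls below the source rate, B-pictures are dropped first, then P-pictures, while I-pictures are always kept. This is an illustrative simplification with assumed names and thresholds, not the simulation model from the thesis.

```java
/** Illustrative source-level discard filter for MPEG-1 pictures (assumed names, not the thesis model). */
public class SelectiveDiscard {

    enum PictureType { I, B, P }

    enum ControlMode { NO_CONTROL, DROP_B, DROP_B_AND_P }

    /** Decide whether a picture should be sent under the current control mode. */
    static boolean shouldSend(PictureType type, ControlMode mode) {
        switch (mode) {
            case DROP_B:       return type != PictureType.B;
            case DROP_B_AND_P: return type == PictureType.I;   // only I-pictures survive
            default:           return true;                     // no source control; losses are arbitrary
        }
    }

    /** Map ABR feedback (allowed rate vs. current source rate) to a control mode. */
    static ControlMode modeFromFeedback(double allowedRate, double sourceRate) {
        if (allowedRate >= sourceRate)        return ControlMode.NO_CONTROL;
        if (allowedRate >= 0.6 * sourceRate)  return ControlMode.DROP_B;        // thresholds are illustrative
        return ControlMode.DROP_B_AND_P;
    }

    public static void main(String[] args) {
        ControlMode mode = modeFromFeedback(3.0e6, 4.0e6);  // 3 Mbit/s allowed, 4 Mbit/s source
        System.out.println("Mode: " + mode + ", send B-picture? " + shouldSend(PictureType.B, mode));
    }
}
```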
15

Optimal Link Utilization and Enhanced Quality of Service Using Dynamic Bandwidth Reservation for Pre-recorded Video

Kishore, Mukul 11 December 2003 (has links)
Video-on-Demand (VoD) is a service that allows people to request and view stored videos or movies of their choice directly online from a VoD service provider, with the selected videos streamed over the broadband Internet. The bursty nature of Variable-Bit-Rate (VBR) compressed video (such as MPEG) poses important issues for video delivery over high-speed networks because of its significant bit-rate variation over multiple time scales, and sufficient quality-of-service (QoS) mechanisms must be in place before such delivery can be widely enabled and deployed over the Internet. Conventionally, a static bandwidth level close to the peak rate is reserved for a streaming video flow; however, any static allocation of network resources for VBR video traffic is difficult and inefficient, since the peak rate is significantly higher than the average data rate. Because the traffic pattern over time is already known for pre-recorded videos, this issue is addressed by the Renegotiated Constant Bit Rate (RCBR) service, which proposes QoS allocation over multiple time scales. Since this mechanism had previously been evaluated only through simulation and analysis, we implemented it on a real test bed with a VoD server and clients to study its performance. We observed that under heavy bandwidth constraints the performance of RCBR is much better than traditional CBR in terms of packet loss rate. We also implement a new Adaptive Buffer Window mechanism and the concept of application-level smoothing to increase the scalability of a VoD server. / Master of Science
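One way to picture RCBR for pre-recorded video is as a piecewise-constant reservation computed offline from the known frame-size trace: the reservation is renegotiated at fixed intervals and set to the rate needed for the upcoming interval. The sketch below computes such a schedule from per-frame sizes; the interval length and the per-interval averaging rule are assumptions for illustration, not the smoothing mechanism used in the thesis.

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch: derive a piecewise-constant RCBR-style reservation schedule from a known VBR trace.
 *  The per-interval rule (reserve the interval's average bit rate) is illustrative only. */
public class RcbrSchedule {

    /** frameSizesBits[i] = size of frame i in bits; fps = playback frame rate;
     *  framesPerInterval = renegotiation granularity. Returns one reserved rate (bits/s) per interval. */
    static List<Double> reservationSchedule(long[] frameSizesBits, double fps, int framesPerInterval) {
        List<Double> rates = new ArrayList<>();
        for (int start = 0; start < frameSizesBits.length; start += framesPerInterval) {
            int end = Math.min(start + framesPerInterval, frameSizesBits.length);
            long bits = 0;
            for (int i = start; i < end; i++) bits += frameSizesBits[i];
            double seconds = (end - start) / fps;
            rates.add(bits / seconds);   // reserve the interval's average bit rate
        }
        return rates;
    }

    public static void main(String[] args) {
        // Tiny synthetic trace: a quiet scene followed by a high-motion scene.
        long[] trace = {20_000, 22_000, 21_000, 80_000, 85_000, 90_000};
        System.out.println(reservationSchedule(trace, 30.0, 3));
        // -> two reserved rates, low then high, instead of one peak-rate reservation for the whole video
    }
}
```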
16

Bandwidth adaptors for heterogeneous broadcast-based video-on-demand systems

Oger, David 01 July 2002 (has links)
No description available.
17

Scalable on-demand streaming of stored complex multimedia

Zhao, Yanping 09 August 2004 (has links)
Previous research has developed a number of efficient protocols for streaming popular multimedia files on demand to potentially large numbers of concurrent clients. These protocols can achieve server bandwidth usage that grows much more slowly than linearly with the file request rate and with the inverse of the client start-up delay. This thesis makes three main contributions to the design and performance evaluation of such protocols.

The first contribution is an investigation of the network bandwidth requirements for scalable on-demand streaming. The results suggest that the minimum required network bandwidth typically scales as K/ln(K) as the number of client sites K increases for a fixed request rate per client site, and as ln(N/(ND+1)) as the total file request rate N increases or the client start-up delay D decreases, for a fixed number of sites. Multicast delivery trees configured to minimize network bandwidth usage rather than latency are found to only modestly reduce the minimum required network bandwidth. Furthermore, it is possible to achieve close to the minimum possible network and server bandwidth usage simultaneously with practical scalable delivery protocols.

Second, the thesis addresses the problem of scalable on-demand streaming of a more complex type of media than is typically considered, namely variable bit rate (VBR) media. A lower bound on the minimum required server bandwidth for scalable on-demand streaming of VBR media is derived. The lower bound analysis motivates the design of a new immediate-service protocol termed VBR bandwidth skimming (VBRBS) that uses constant bit rate streaming when sufficient client storage space is available, yet fruitfully exploits knowledge of the VBR profile.

Finally, the thesis proposes non-linear media containing parallel sequences of data frames, among which clients can dynamically select at designated branch points, and investigates the design and performance issues in scalable on-demand streaming of such media. Lower bounds on the minimum required server bandwidth for various non-linear media scalable on-demand streaming approaches are derived, practical scalable delivery protocols for non-linear media are developed, and, as a proof of concept, a simple scalable delivery protocol is implemented in a non-linear media streaming prototype system.
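To get a feel for the scaling behaviour quoted in the abstract, the snippet below simply tabulates K/ln(K) for increasing numbers of client sites K, showing how the per-site share of the required bandwidth shrinks. It only evaluates the quoted expression and is not a derivation of the bound.

```java
/** Evaluate the K/ln(K) scaling quoted in the abstract to show its sublinear growth. */
public class ScalingIllustration {
    public static void main(String[] args) {
        int[] siteCounts = {10, 100, 1_000, 10_000};
        for (int k : siteCounts) {
            double scaled = k / Math.log(k);   // relative bandwidth scaling, arbitrary units
            System.out.printf("K = %6d  ->  K/ln(K) = %8.1f%n", k, scaled);
        }
        // K/ln(K) grows sublinearly: the per-site share, 1/ln(K), shrinks as K grows.
    }
}
```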
19

Admission Control and Media Delivery Subsystems for Video on Demand Proxy Server

Qazzaz, Bahjat 21 June 2004 (has links)
The recent advances in and development of inexpensive computers and high-speed networking technology have enabled Video on Demand (VoD) applications to run on shared-computing servers, replacing traditional computing environments in which each application had its own dedicated, special-purpose hardware. A VoD application enables the viewer to select a favourite video file from a list and watch its reproduction at will. However, VoD is known as an application that must provide long-lived video streams, which consume substantial resources such as I/O and network bandwidth, to a large number of clients. A video server must therefore secure the necessary resources for each stream over a long period of time (e.g., 7200 seconds) so that clients can reproduce (play) the video data without experiencing jitter or starvation in their buffers.

This thesis presents the design and implementation of a Video Proxy Server (VPS) that can provide interactive video on demand. The VPS consists of three main parts. The first part is the Admission Control Module, which receives the clients' requests, negotiates the required resources, and decides whether to accept or reject a client based on the available resources. The second part is the Resource Management Module, which manages several shared resources such as the CPU, memory, network, and disk; it consists of four brokers that reserve the necessary resources based on a predefined policy. The third part is the CB_MDA (Credit-Based Media Delivery Algorithm), which is responsible for regulating resource assignment and scheduling the video streams. The CB_MDA uses a combination of multicast and unicast channels for transmitting the video data: multicast streams are initiated to start a video file from the beginning, while unicast channels are used to join later arrivals to the appropriate multicast stream. In the implementation, the CB_MDA detects periods when the server has spare resources and assigns them to appropriate clients in order to create work-ahead video data.

The thesis further goes beyond the design of the VPS and presents a video client architecture that can synchronize with the server and work as a plug-in for rendering the video data on different players, such as the Berkeley MPEG player and Xine.
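The broker-based admission decision described above can be sketched as a simple all-or-nothing check: a request is admitted only if every resource broker (CPU, memory, network, disk) can reserve what the stream needs. The interface and numbers below are assumptions for illustration, not the thesis implementation.

```java
import java.util.List;

/** Sketch of broker-based admission control (assumed interface, not the thesis implementation). */
public class AdmissionControlSketch {

    /** A broker tracks one resource's capacity and its currently reserved amount. */
    static class Broker {
        final String name;
        final double capacity;
        double reserved;
        Broker(String name, double capacity) { this.name = name; this.capacity = capacity; }
        boolean canReserve(double amount) { return reserved + amount <= capacity; }
        void reserve(double amount) { reserved += amount; }
    }

    /** Admit a stream only if all brokers can reserve its demand; otherwise reject and reserve nothing. */
    static boolean admit(List<Broker> brokers, double[] demands) {
        for (int i = 0; i < brokers.size(); i++) {
            if (!brokers.get(i).canReserve(demands[i])) return false;   // reject: one resource is exhausted
        }
        for (int i = 0; i < brokers.size(); i++) {
            brokers.get(i).reserve(demands[i]);                         // commit all reservations together
        }
        return true;
    }

    public static void main(String[] args) {
        List<Broker> brokers = List.of(
                new Broker("cpu", 100), new Broker("memory", 4096),
                new Broker("network", 1000), new Broker("disk", 800));
        double[] streamDemand = {5, 128, 6, 8};   // illustrative per-stream cost
        System.out.println("Admitted: " + admit(brokers, streamDemand));
    }
}
```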
20

Supporting Scalable and Resilient Video Streaming Applications in Evolving Networks

Guo, Meng 24 August 2005 (has links)
While the demand for video streaming services has risen rapidly in recent years, supporting video streaming service to a large number of receivers remains a challenging task. Issues of video streaming in the Internet, such as scalability and reliability, are still the subject of extensive research, and recently proposed network contexts such as overlay networks and mobile ad hoc networks pose even tougher challenges. This thesis focuses on supporting scalable video streaming applications in various network environments. More specifically, it investigates the following problems: i) Server selection in replicated batching video-on-demand (VoD) systems: we find that, to optimize user-perceived latency, it is vital to consider server state information and channel allocation schemes when making server selection decisions; we develop and evaluate a set of server selection algorithms that use increasingly more information. ii) Scalable live video streaming with time shifting and video patching: we consider the problem of how to enable continuous live video streaming to a large group of clients in cooperative but unreliable overlay networks, and design a server-based architecture that combines a time-shifting video server with P2P video patching. iii) A cooperative patching architecture in overlay networks: we design a cooperative patching architecture that shifts video patching responsibility completely to the client side, so that an end-host retrieves lost data from other end-hosts within the same multicast group. iv) V3, a vehicle-to-vehicle video streaming architecture: we propose V3, an architecture that provides live video streaming service to moving vehicles through vehicle-to-vehicle (V2V) networks. V3 incorporates a novel signaling mechanism to continuously trigger video sources to send video data back to the receiver, and adopts a store-carry-and-forward approach to transmit video data in a partitioned network environment. We also develop a multicasting framework that enables live video streaming applications from multiple sources to multiple receivers in V2V networks; a message integration scheme is used to suppress the signaling overhead, and a two-level tree-based routing approach is adopted to forward the video data.
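Video patching, which appears in several of the architectures above, can be summarized in a few lines: a late arrival joins the ongoing multicast of the video and simultaneously receives, over a unicast "patch" channel, only the prefix it missed; if the arrival is too late (past a patching threshold), a fresh multicast is started instead. The sketch below captures this decision; the threshold value and names are illustrative assumptions, not taken from the thesis.

```java
/** Sketch of the join-or-restart decision in video patching (illustrative names and threshold). */
public class PatchingSketch {

    /** What a newly arriving client should do relative to the most recent ongoing multicast. */
    static String decide(double multicastStartTime, double arrivalTime, double patchThresholdSec) {
        double missed = arrivalTime - multicastStartTime;   // seconds of the video already multicast
        if (missed <= 0) {
            return "join multicast only";                    // arrived at or before the start
        } else if (missed <= patchThresholdSec) {
            // Join the ongoing multicast (buffering it) and fetch the missed prefix over unicast;
            // the unicast patch lasts 'missed' seconds, then the client plays from its buffer.
            return "join multicast + unicast patch of " + missed + " s";
        } else {
            return "start a new multicast";                  // too far behind to patch economically
        }
    }

    public static void main(String[] args) {
        System.out.println(decide(100.0, 130.0, 60.0));  // patch the 30 s the client missed
        System.out.println(decide(100.0, 300.0, 60.0));  // beyond the threshold: restart
    }
}
```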
