1 | A Framework For Efficient Data Distribution In Peer-to-peer Networks. Purandare, Darshan. 01 January 2008.
Peer-to-Peer (P2P) models are based on user altruism: a user shares its content with other users in the pool and, in turn, has an interest in the content of other nodes. Most P2P systems in their current form are not fair in terms of the content a peer serves versus the service it obtains from the swarm. Most systems suffer from the free-rider problem, in which many high-uplink-capacity peers contribute far more than their share while many others get a free ride when downloading the content. This leaves high-capacity nodes with little or no motivation to contribute, and such resourceful nodes often exit the swarm or do not participate at all. This is damaging for P2P networks in general, where participation is essential: a swarm becomes more robust and scalable as its number of users grows. Other important issues in present-day P2P systems include below-optimal Quality of Service (QoS) in terms of download time, end-to-end latency, jitter rate and uplink utilization, as well as excessive cross-ISP traffic and security and cheating threats. These problems motivate the present work. To this end, we present an efficient data distribution framework for P2P networks in the media streaming and file sharing domains. Experiments with our model, an alliance-based peering scheme for media streaming, show that such a scheme distributes data to swarm members in a near-optimal way. Alliances are small groups of nodes that share data and other vital information in a symbiotic association. We show that alliance formation is a loosely coupled and effective way to organize peers, and that our model maps to a small-world network, which forms efficient overlay structures and is robust to network perturbations such as churn. We present a comparative, simulation-based study of our model against CoolStreaming/DONet (a popular model) and give a quantitative performance evaluation. Simulation results show that our model scales well under varying workloads and conditions, delivers near-optimal levels of QoS, reduces cross-ISP traffic considerably and, in most cases, performs on par with or better than CoolStreaming/DONet.

In the next phase of our work, we focused on the BitTorrent P2P model, as it is the most widely used file sharing protocol. Many studies in academia and industry have shown that although BitTorrent scales very well, it is far from optimal in terms of fairness to end users, download time and uplink utilization. Furthermore, random peering and data distribution in such a model lead to suboptimal performance. Lately, a new breed of BitTorrent clients, such as BitTyrant, has mounted successful strategic attacks against BitTorrent: strategic peers configure the client software so that they obtain good download speeds for little or no contribution. Such strategic nodes exploit the altruism in the swarm and consume resources at the expense of honest nodes, creating an unfair swarm. The presence of nodes with heterogeneous bandwidth generates further unfairness. We investigate and propose a new token-based anti-strategic policy that could be used in BitTorrent to minimize free riding by strategic clients. We also propose other policies against strategic attacks, including a smart tracker that denies repeated peer-list requests from strategic clients, and blacklisting of misbehaving nodes that do not follow the protocol.
These policies largely stop the strategic behavior of peers and improve overall system performance. We also quantify and validate the benefits of a bandwidth-based peer matching policy. Our simulation results show that with the proposed changes, uplink utilization and mean download time in a BitTorrent network improve considerably, leaving strategic clients with little or no incentive to behave greedily. This reduces free riding and creates a fairer swarm with very little computational overhead. Finally, we show that our model is self-healing: user behavior changes from selfish to altruistic in the presence of the aforementioned policies.
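The abstract does not spell out the token accounting, but the idea lends itself to a simple sketch: peers earn tokens by uploading and spend them to download, so a client that contributes nothing soon exhausts its allowance. Below is a minimal sketch assuming a per-peer token-bucket ledger; all class, method and parameter names are hypothetical and not taken from the thesis.

```python
# Hypothetical sketch of a token-based anti-free-riding policy, in the
# spirit of the scheme described above. Names, rates and the initial
# allowance are illustrative assumptions, not the thesis's implementation.

class TokenLedger:
    """Each peer earns tokens by uploading and spends them to download."""

    def __init__(self, initial_tokens=10, earn_rate=1.0, spend_rate=1.0):
        self.balance = {}              # peer_id -> token balance
        self.initial = initial_tokens
        self.earn_rate = earn_rate     # tokens earned per MB uploaded
        self.spend_rate = spend_rate   # tokens spent per MB downloaded

    def register(self, peer_id):
        self.balance.setdefault(peer_id, self.initial)

    def credit_upload(self, peer_id, megabytes):
        self.balance[peer_id] += self.earn_rate * megabytes

    def authorize_download(self, peer_id, megabytes):
        """Grant the request only if the peer has contributed enough.

        A strategic client that uploads nothing quickly exhausts its
        initial allowance and is throttled, removing the incentive to
        free-ride."""
        cost = self.spend_rate * megabytes
        if self.balance[peer_id] >= cost:
            self.balance[peer_id] -= cost
            return True
        return False


ledger = TokenLedger()
ledger.register("peer_a")
ledger.credit_upload("peer_a", 5)                # earns tokens by seeding
print(ledger.authorize_download("peer_a", 12))   # True: 10 + 5 >= 12
print(ledger.authorize_download("peer_a", 12))   # False: only 3 tokens left
```

A complementary bandwidth peer matching policy could be layered on top by having the tracker pair peers of similar uplink capacity; the abstract confirms the benefit of such matching but not its mechanics.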
2 | Achieving Quality of Service Guarantees for Delay Sensitive Applications in Wireless Networks. Abedini, Navid. August 2012.
In the past few years, we have witnessed continuous growth in the popularity of delay-sensitive applications. Applications like live video streaming, multimedia conferencing, VoIP and online gaming account for a major part of today's Internet traffic, and this trend is predicted to continue in the coming years. This underscores the importance of developing efficient scheduling algorithms with guaranteed low-delay performance in communication networks. In our work, we address the delay issue in several major instances of wireless communication networks.
First, we study a wireless content distribution network (CDN), in which the requests for the content may have service deadlines. Our wireless CDN consists of a media vault that hosts all the content in the system and a number of local servers (base stations), each having a cache for temporarily storing a subset of the content. There are two major questions associated with this framework: (i) content caching: which content should be loaded in each cache? and (ii) wireless network scheduling: how to appropriately schedule the transmissions from wireless servers? Using ideas from queuing theory, we develop provably optimal algorithms to jointly solve the caching and scheduling problems.
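The abstract states only that the joint caching and scheduling algorithms are provably optimal; the sketch below is an illustrative backlog-driven heuristic in the spirit of queue-based (max-weight) scheduling, not the thesis's actual algorithm. The cache size, names and request model are assumptions.

```python
# Illustrative sketch of queue-backlog-driven caching and scheduling for
# a wireless CDN with a media vault and caching base stations. This is a
# simplified heuristic, not the provably optimal joint algorithm.
from collections import defaultdict

CACHE_SLOTS = 2            # each local server caches this many items
queues = defaultdict(int)  # (server, content) -> backlog of pending requests

def refresh_cache(server, catalog):
    """Reload the cache with the most-backlogged content for this server."""
    ranked = sorted(catalog, key=lambda c: queues[(server, c)], reverse=True)
    return set(ranked[:CACHE_SLOTS])

def schedule(server, cache):
    """Serve one request: pick the cached content with the largest backlog."""
    candidates = [(queues[(server, c)], c) for c in cache
                  if queues[(server, c)] > 0]
    if not candidates:
        return None        # nothing cached is requested; fetch from the vault
    backlog, chosen = max(candidates)
    queues[(server, chosen)] -= 1
    return chosen

catalog = ["movie_a", "movie_b", "movie_c"]
for content, pending in [("movie_a", 5), ("movie_b", 1), ("movie_c", 3)]:
    queues[("bs1", content)] = pending
cache = refresh_cache("bs1", catalog)   # {"movie_a", "movie_c"}
print(schedule("bs1", cache))           # "movie_a" (largest backlog)
```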
Next, we focus on wireless relay networks. It is well accepted that network coding can enhance the performance of these networks by exploiting the broadcast nature of the wireless medium. This improvement is usually evaluated in terms of the number of transmissions required to deliver flow packets to their destinations. In this work, we study the effect of delay on the performance of network coding by characterizing a trade-off between latency and the gain achieved by employing network coding. More specifically, we associate a holding cost with delaying packets before delivery and a transmission cost with each broadcast transmission made by the relay node. Using a Markov decision process (MDP) argument, we prove that a simple threshold-based policy is optimal in the sense of minimizing the long-run average cost.
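To make the trade-off concrete, here is a toy simulation of a threshold rule at the relay: held packets accrue a holding cost each slot, and once the queue reaches the threshold, one broadcast at a fixed transmission cost clears it. The cost values, the Bernoulli arrival model and the queue-clearing assumption are illustrative simplifications, not the MDP formulated in the thesis.

```python
# Toy simulation of the latency vs. transmission-cost trade-off at a
# coding relay, illustrating why a threshold rule is plausible: a higher
# threshold saves broadcasts but pays more in holding cost.
import random

def average_cost(threshold, hold_cost=1.0, tx_cost=5.0,
                 arrival_prob=0.4, slots=100_000, seed=1):
    random.seed(seed)
    queue, total = 0, 0.0
    for _ in range(slots):
        if random.random() < arrival_prob:
            queue += 1                  # a new packet arrives at the relay
        total += hold_cost * queue      # pay holding cost on queued packets
        if queue >= threshold:
            total += tx_cost            # assumption: one coded broadcast
            queue = 0                   # clears everything currently held
    return total / slots

for t in (1, 2, 3, 5, 8):
    print(f"threshold={t}: avg cost per slot = {average_cost(t):.3f}")
```

Sweeping the threshold exposes an interior optimum, which is the qualitative shape the MDP result above formalizes.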
Finally, we analyze delay-sensitive applications in wireless peer-to-peer (P2P) networks. We consider a hybrid network consisting of (i) an expensive base-station-to-peer (B2P) network with unicast transmissions and (ii) a free broadcast P2P network. In this framework, we study two popular applications: (a) a content distribution application with service deadlines, and (b) a multimedia live streaming application. In both problems, we utilize random linear network coding over finite fields to simplify the coordination of transmissions. For these applications, we provide efficient algorithms that schedule the transmissions so that quality of service (QoS) requirements are satisfied at the minimum cost of B2P usage. The algorithms are proven to be throughput-optimal for sufficiently large field sizes and perform reasonably well for practical finite field sizes.
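As an illustration of the coding scheme, the sketch below implements random linear network coding over GF(2): coded packets are random XOR combinations of the source packets, and a receiver can decode once the collected coefficient vectors reach full rank. GF(2) is chosen only for brevity; it also shows why larger fields help, since small fields occasionally need extra packets before the combinations become independent.

```python
# Minimal sketch of random linear network coding (RLNC) over GF(2).
# Real deployments typically use larger fields such as GF(2^8).
import random

def encode(packets, seed=None):
    """Return (coefficient_vector, coded_payload) for one coded packet."""
    rng = random.Random(seed)
    coeffs = [rng.randint(0, 1) for _ in packets]
    if not any(coeffs):
        coeffs[rng.randrange(len(coeffs))] = 1  # avoid all-zero combination
    payload = bytes(len(packets[0]))            # packets assumed equal length
    for c, p in zip(coeffs, packets):
        if c:
            payload = bytes(a ^ b for a, b in zip(payload, p))
    return coeffs, payload

def rank_gf2(vectors):
    """Rank over GF(2) via elimination; decoding needs full rank."""
    rows = [int("".join(map(str, v)), 2) for v in vectors if any(v)]
    rank = 0
    while rows:
        pivot = max(rows)
        rows = [r ^ pivot if r.bit_length() == pivot.bit_length() else r
                for r in rows if r != pivot]
        rows = [r for r in rows if r]
        rank += 1
    return rank

packets = [b"abcd", b"efgh", b"ijkl"]
received, seed = [], 0
while rank_gf2([c for c, _ in received]) < len(packets):
    received.append(encode(packets, seed=seed))  # collect coded packets
    seed += 1
print(f"decodable after {len(received)} coded packets")
```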
3 | Mainframes and media streaming solutions: How to make mainframes great again. Berg, Linus; Ståhl, Felix. January 2020.
Mainframes have been used for well over 50 years and are built to process demanding workloads fast, with the latest models using IBM's z/Architecture processors. At the time of writing, mainframes are a central unit of the world's largest corporations in banking, finance and health care, performing, for example, heavy loads of transaction processing. When IBM bought RedHat and acquired the container orchestration platform OpenShift, the IBM lab in Poughkeepsie figured that a new opportunity for the mainframe might have opened: a media streaming server built with OpenShift, running on a mainframe. This is interesting because a media streaming solution built with OpenShift might perform better on a mainframe than on a traditional server. The initial question posed was 'Is it worth running streaming solutions on OpenShift on a mainframe?'. First, the solution has to be built and tested on a mainframe to confirm that such a solution actually works; later, IBM will perform a benchmark to see whether the solution is viable to sell. The authors' method included finding the most suitable streaming software according to a set of criteria that had to be met. Nginx was the winner, being the only tested software that was open source, scalable, runnable in a container and capable of adaptive streaming. With the software selected, configuring Nginx, Docker and OpenShift resulted in a fully functional proof of concept. Unfortunately, due to the Covid-19 pandemic, the authors never got the promised access to a mainframe to test the solution; however, OpenShift is platform-agnostic and should, theoretically, run on a mainframe. The authors built a base solution that can easily be extended with functionality; what remains to be built by IBM engineers is covered in the future works section and includes, for example, live streaming and mainframe benchmarking.
What the authors left for future engineers to explore is a study that includes more software options, paid versions included, since this study covers only open-source software, as well as an extension of the existing solution's feature set.
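The abstract does not include the proof-of-concept configuration itself. As a hedged illustration, HLS adaptive streaming is commonly set up with Nginx and the third-party nginx-rtmp-module along the following lines; the paths, ports and the assumption that this module was used are not confirmed by the thesis, and the authors' actual setup may differ.

```nginx
# Illustrative only: a common Nginx + nginx-rtmp-module setup for HLS
# streaming. Paths and ports are assumptions, not the thesis's config.
events {}

rtmp {
    server {
        listen 1935;                 # RTMP ingest point for the stream
        application live {
            live on;
            hls on;                  # cut the stream into HLS segments
            hls_path /tmp/hls;
            hls_fragment 3s;
            hls_playlist_length 60s;
        }
    }
}

http {
    server {
        listen 8080;
        location /hls {
            types { application/vnd.apple.mpegurl m3u8; video/mp2t ts; }
            root /tmp;               # serves /tmp/hls/<stream>.m3u8
            add_header Cache-Control no-cache;
        }
    }
}
```

Packaged in a container image, such a configuration is the kind of workload that could be deployed and scaled through OpenShift as described above.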