1 |
Measurement and Method for Receiver Buffer Sizing in Video Streaming. Mastoureshgh, Sahel, 01 May 2012 (has links)
Video streaming has become increasingly popular, with commercial streaming applications such as YouTube accounting for a large share of Internet traffic. Streaming video is sensitive to bandwidth jitter, but a receiver buffer can ameliorate its effects by absorbing the difference between the transmission rate and the playback rate. Unfortunately, few studies address how to determine the best receiver buffer size for TCP streaming. In this work, we investigate how the buffer size of video streaming applications should change with respect to variation in bandwidth. We model the video streaming system over TCP using simulation to develop our buffering algorithm. We propose using a dynamic client buffer size based on measured bandwidth variation to achieve fewer interruptions in video streaming playback. To evaluate our approach, we implement an application and run experiments comparing our algorithm with the buffer size of commercial video streaming.
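The core idea, a client buffer that grows with measured bandwidth variation, can be sketched as a simple rule. This is an illustrative heuristic under assumed parameters (`base_buffer`, `safety` are invented for the sketch), not the thesis's actual algorithm:

```python
from statistics import mean, pstdev

def buffer_size_seconds(throughput_samples, playback_rate,
                        base_buffer=2.0, safety=3.0):
    """Pick a client buffer size (in seconds of video) from measured
    bandwidth variation: the more the throughput fluctuates around the
    playback rate, the larger the buffer."""
    avg = mean(throughput_samples)
    jitter = pstdev(throughput_samples)
    # Jitter scaled against the playback rate; a perfectly steady link
    # keeps only the base buffer.
    margin = safety * jitter / playback_rate
    # If the link barely keeps up on average, add headroom
    # proportional to the deficit.
    deficit = max(0.0, (playback_rate - avg) / playback_rate)
    return base_buffer * (1.0 + margin + deficit)
```

A steady 5 Mbit/s feed over a 4 Mbit/s playback rate keeps the 2-second base buffer, while a link oscillating between 6 and 2 Mbit/s more than doubles it.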
|
2 |
Stable and scalable congestion control for high-speed heterogeneous networks. Zhang, Yueping, 10 October 2008 (has links)
For any congestion control mechanism, the most fundamental design objectives are stability and scalability. However, achieving both properties is very challenging in a heterogeneous environment such as the Internet. From the end-users' perspective, heterogeneity arises because different flows have different routing paths and therefore different communication delays, which can significantly affect the stability of the entire system. In this work, we address this problem by first proving a sufficient and necessary condition for a system to be stable under arbitrary delay. Utilizing this result, we design a series of practical congestion control protocols (MKC and JetMax) that achieve stability regardless of delay, along with many additional appealing properties. From the routers' perspective, the system is heterogeneous because the incoming traffic is a mixture of short- and long-lived, TCP and non-TCP flows. This imposes a severe challenge on traditional buffer sizing mechanisms, which are derived using the simplistic model of a single or multiple synchronized long-lived TCP flows. To overcome this problem, we take a control-theoretic approach and design a new intelligent buffer sizing scheme called Adaptive Buffer Sizing (ABS), which, based on the current incoming traffic, dynamically sets the optimal buffer size under the target performance constraints. Our extensive simulation results demonstrate that ABS responds quickly to changes in traffic load, scales to a large number of incoming flows, and is robust to generic Internet traffic.
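A toy controller illustrates the flavor of setting the buffer from measured traffic rather than a fixed rule. The control law, gains, and targets below are assumptions for illustration only, not the ABS controller itself:

```python
def abs_step(buf_pkts, measured_delay, target_delay, measured_util,
             target_util, gain=0.5, buf_min=10, buf_max=10000):
    """One control step of a simplified adaptive buffer sizer: shrink
    the buffer when queuing delay overshoots its target, grow it when
    link utilization falls short of its target."""
    err_delay = (measured_delay - target_delay) / target_delay
    err_util = (target_util - measured_util) / target_util
    # A utilization shortfall asks for more buffer; a delay overshoot
    # asks for less; the two pressures are combined proportionally.
    adjust = gain * (err_util - err_delay)
    new_buf = int(buf_pkts * (1.0 + adjust))
    return max(buf_min, min(buf_max, new_buf))
```

Calling the step repeatedly with fresh measurements drives the buffer toward a size that satisfies both performance constraints at once.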
|
3 |
Towards Optimal Buffer Size in Wi-Fi Networks. Showail, Ahmad, 19 January 2016 (has links)
Buffer sizing is an important network configuration parameter that impacts the quality of data traffic. Falling memory cost and the fallacy that ‘more is better’ have led to over-provisioning network devices with large buffers. Over-buffering, the so-called ‘bufferbloat’ phenomenon, creates excessive end-to-end delay in today’s networks. On the other hand, under-buffering results in frequent packet loss and subsequent under-utilization of network resources. The buffer sizing problem has been studied extensively for wired networks, but there is little work addressing the unique challenges of the wireless environment. In this dissertation, we discuss buffer sizing challenges in wireless networks, classify the state-of-the-art solutions, and propose two novel buffer sizing schemes. The first scheme targets buffer sizing in wireless multi-hop networks, where the radio spectral resource is shared among a set of contending nodes. Hence, it sizes the buffer collectively and distributes it over a set of interfering devices. The second buffer sizing scheme is designed to cope with recent Wi-Fi enhancements. It adapts the buffer size based on measured link characteristics and network load, and enforces limits on the buffer size to maximize frame aggregation benefits. Both mechanisms are evaluated using simulation as well as testbed implementation over half-duplex and full-duplex wireless networks. Experimental evaluation shows that our proposal reduces latency by an order of magnitude.
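The collective-buffer idea can be sketched as a simple allocation rule: split one shared budget across contending nodes in proportion to their load, while capping each share so frame aggregation stays effective. The proportional rule and the cap are hypothetical simplifications, not the dissertation's exact scheme:

```python
def distribute_buffer(collective_buf, loads, aggr_limit):
    """Split a collective buffer (in packets) over a set of contending
    nodes in proportion to their offered load, capping each share at
    the frame-aggregation limit and flooring it at one packet."""
    total = sum(loads)
    shares = []
    for load in loads:
        share = round(collective_buf * load / total)
        shares.append(min(max(share, 1), aggr_limit))
    return shares
```

For example, three nodes with relative loads 1:1:2 sharing a 100-packet budget receive 25, 25, and 50 packets, and a heavily loaded node is still clipped at the aggregation limit.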
|
4 |
Improving manufacturing systems using integrated discrete event simulation and evolutionary algorithms. Kang, Parminder, January 2012 (has links)
A high-variety, low-volume manufacturing environment has always been a challenge for organisations seeking to maintain their overall performance, especially because of the high level of variability induced by ever-changing customer demand, high product variety, cycle times, routings, and machine failures. All these factors result in poor flow and degrade overall organisational performance. For most organisations, therefore, process improvement has become a core component of long-term survival. The aim of this research is to develop a methodology for automating operations in process improvement as part of a lean creative problem-solving process. To achieve this aim, the research investigates the job sequencing and buffer management problem in a high-variety/low-volume manufacturing environment, where lead time and total inventory holding cost are used as operational performance measures. The research introduces a novel approach that integrates genetic-algorithm-based multi-objective combinatorial optimisation with a discrete event simulation modelling tool to investigate the effect of variability in high-variety/low-volume manufacturing, considering how improvements in the selected performance measures affect each other. The proposed methodology works iteratively and can incorporate changes in different levels of variability. It improves on existing buffer management methodologies, for instance by overcoming the failure modes of the drum-buffer-rope system and by bringing in automation. Moreover, the integration of multi-objective combinatorial optimisation with discrete event simulation allows problem solvers and decision makers to select a solution according to the trade-off between the selected performance measures.
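The overall loop, an evolutionary search over job sequences with a simulation inside the fitness evaluation, can be sketched minimally. The toy serial-line "simulation" and the mutation-only search below are stand-ins invented for illustration, far simpler than the thesis's discrete event model and genetic algorithm:

```python
import random

def simulate(sequence, proc_times):
    """Toy stand-in for the discrete event simulation: a serial line
    where lead time is the makespan and holding cost accumulates with
    how long each job waits before it starts."""
    t, cost = 0.0, 0.0
    for job in sequence:
        cost += t          # this job has waited t time units so far
        t += proc_times[job]
    return t, cost

def evolve(proc_times, iters=200, seed=1):
    """Mutation-only evolutionary search over job sequences, minimising
    the sum of the two performance measures returned by simulate()."""
    rng = random.Random(seed)
    best = list(range(len(proc_times)))
    rng.shuffle(best)
    best_score = sum(simulate(best, proc_times))
    for _ in range(iters):
        cand = best[:]
        i, j = rng.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]   # swap mutation
        score = sum(simulate(cand, proc_times))
        if score < best_score:
            best, best_score = cand, score
    return best, best_score
```

On this toy objective the search recovers the shortest-processing-time order, illustrating how the simulation steers the combinatorial search; the real methodology additionally keeps a Pareto front rather than a weighted sum.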
|
5 |
Buffer Management for Multi Project Scheduling and Control in Critical Chain Project Management. 吳敬賢 (Nuntasukasame, Noppadon), Unknown Date (has links)
Critical Chain Project Management (CCPM) has emerged in the last few years as a novel approach to managing projects. While many previous studies have examined CCPM for single-project management, CCPM for multi-project management has received little attention, especially the sizing of the capacity-constraint buffer. The few research papers that do examine CCPM in a multi-project environment assume that all subprojects are identical, even though such a situation is impractical.
The purpose of this dissertation is to compare the Cut-and-Paste Method (C&PM) with the Root Square Error Method (RSEM) for sizing the project buffer, feeding buffers, and capacity-constraint buffer, and to vary subproject parameters that affect the project schedule in multi-project scheduling.
Keywords: Critical chain project management, Multi Project Scheduling, Buffer Management, Capacity constraint buffer, Buffer sizing method.
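The two sizing rules being compared have well-known closed forms in the CCPM literature: C&PM takes half of the total safety removed from the chain, while RSEM aggregates the removed safety in a root-square sense, so it grows sub-linearly with chain length. A direct sketch:

```python
from math import sqrt

def cpm_buffer(safe, aggressive):
    """Cut-and-Paste Method: the buffer is half of the total safety
    (safe estimate minus aggressive estimate) cut from the chain."""
    return sum(s - a for s, a in zip(safe, aggressive)) / 2.0

def rsem_buffer(safe, aggressive):
    """Root Square Error Method: the buffer is the square root of the
    summed squared safety removed per task, so independent task
    variations partially cancel."""
    return sqrt(sum((s - a) ** 2 for s, a in zip(safe, aggressive)))
```

For a chain of nine tasks, each with a safe estimate of 10 and an aggressive estimate of 8, C&PM yields a buffer of 9 while RSEM yields 6, illustrating why C&PM is often criticised for over-sizing buffers on long chains.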
|
6 |
The impact of multitasking on critical chain portfolios. Ghaffari, Mahdi, January 2017 (has links)
Critical Chain Project Management (CCPM) is a project scheduling technique which has been developed to overcome some of the deficiencies of traditional methods and where, in a single project environment, the critical chain is the longest chain of activities in a project network, taking into account both activity precedence and resource dependencies. In multi-project environments, the constraint is the resource which impedes projects' earlier completion. CCPM relies on buffers to protect the critical chain and monitor/control the project. The literature review conducted by this study reveals that the research on CCPM principles in multi-project environments is still extremely scarce. The review also suggests that outright elimination of multitasking (i.e. switching back and forth among two or more concurrent tasks) by imposing a relay race mentality (i.e. starting a task as soon as it becomes available and finishing it as soon as possible), as one of the main features of CCPM, might worsen the resource constraints of CCPM portfolios and cause creation of over-protective buffers. It further implies that there is also a good level of multitasking that can benefit such environments by improving resource availability and requiring shorter protective buffers. This research aims to bridge the gap by investigating the impact of level of multitasking on resource availability issues and project and feeding buffer sizing in CCPM portfolios with different resource capacities. This is pursued through adopting a deductive approach and developing five research hypotheses, considering ten different levels of resource capacity, testing the hypotheses by conducting Monte Carlo simulations of randomly generated project data and comparing the results with deterministic duration values of the same portfolios with 30%, 40% and 50% feeding and project buffer sizes. 
In total, ten portfolios with similar size, variability and complexity levels, each containing four projects, were simulated. It was concluded that: firstly, some limited levels of multitasking, determined in relation to the level of resource capacity, can be beneficial to the time performance of CCPM portfolios; secondly, shorter buffer sizes can be justified by lifting the ban on multitasking while maintaining a lower rate of resource capacity; finally, the relay race work ethic's complete ban on multitasking should not be implemented, as it proved counterproductive in terms of resource availability. Seven recommendations and a buffer sizing framework are provided as complementary guidelines to practitioners' own experience, knowledge and judgment, in addition to an explanation of theoretical and practical contributions and suggestions for future research.
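The Monte Carlo comparison of buffer sizes can be sketched as sampling stochastic task durations and counting how often the chain finishes within its deterministic length plus the buffer. The uniform overrun distribution below is an assumption for illustration, not the study's actual duration model:

```python
import random

def on_time_rate(det_durations, buffer_frac, runs=2000, seed=7):
    """Monte Carlo check of a project buffer: sample per-task overruns
    around the deterministic durations and return the fraction of runs
    that finish within deterministic makespan * (1 + buffer_frac)."""
    rng = random.Random(seed)
    deadline = sum(det_durations) * (1.0 + buffer_frac)
    hits = 0
    for _ in range(runs):
        # Assumed variability: each task takes 70%-160% of its estimate.
        actual = sum(d * rng.uniform(0.7, 1.6) for d in det_durations)
        if actual <= deadline:
            hits += 1
    return hits / runs
```

Running this for 30%, 40% and 50% buffers on the same sampled portfolios is exactly the kind of comparison used to decide whether a smaller buffer still protects the completion date.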
|
7 |
Buffer Techniques For Stochastic Resource Constrained Project Scheduling With Stochastic Task Insertions Problems. Grey, Jennifer, 01 January 2007 (has links)
Project managers are faced with the challenging task of managing an environment filled with uncertainties that may lead to multiple disruptions during project execution. In particular, they are frequently confronted with planning for routine and non-routine unplanned work: known, identified, tasks that may or may not occur depending upon various, often unpredictable, factors. This problem is known as the stochastic task insertion problem, where tasks of deterministic duration occur stochastically. Traditionally, project managers may include an extra margin within deterministic task times or an extra time buffer may be allotted at the end of the project schedule to protect the final project completion milestone. Little scientific guidance is available to better integrate buffers strategically into the project schedule. Motivated by the Critical Chain and Buffer Management approach of Goldratt, this research identifies, defines, and demonstrates new buffer sizing techniques to improve project duration and stability metrics associated with the stochastic resource constrained project scheduling problem with stochastic task insertions. Specifically, this research defines and compares partial buffer sizing strategies for projects with varying levels of resource and network complexity factors as well as the level and location of the stochastically occurring tasks. Several project metrics may be impacted by the stochastic occurrence or non-occurrence of a task such as the project makespan and the project stability. New duration and stability metrics are developed in this research and are used to evaluate the effectiveness of the proposed buffer sizing techniques. These "robustness measures" are computed through the comparison of the characteristics of the initial schedule (termed the infeasible base schedule), a modified base schedule (or as-run schedule) and an optimized version of the base schedule (or perfect knowledge schedule). 
Seven new buffer sizing techniques are introduced in this research. Three are based on a fixed percentage of task duration and the remaining four provide variable buffer sizes based upon the location of the stochastic task in the schedule and knowledge of the task stochasticity characteristic. Experimental analysis shows that partial buffering produces improvements in the project stability and duration metrics when compared to other baseline scheduling approaches. Three of the new partial buffering techniques produced improvements in project metrics. One of these partial buffers was based on a fixed percentage of task duration and the other two used a variable buffer size based on knowledge of the location of the task in the project network. This research provides project schedulers with new partial buffering techniques and recommendations for the type of partial buffering technique that should be utilized when project duration and stability performance improvements are desired. When a project scheduler can identify potential unplanned work and where it might occur, the use of these partial buffer techniques will yield a better estimated makespan. Furthermore, it will result in less disruption to the planned schedule and minimize the amount of time that specific tasks will have to move to accommodate the unplanned tasks.
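The two families of techniques can be sketched side by side: a fixed-percentage buffer that depends only on task duration, and a variable buffer that also uses the task's position in the schedule and its probability of occurring. Both functional forms below are hypothetical illustrations of the two families, not the seven techniques themselves:

```python
def fixed_pct_buffer(duration, pct=0.15):
    """Fixed-percentage family: the buffer protecting a potential
    insertion point is a constant fraction of the adjacent task's
    duration."""
    return pct * duration

def location_aware_buffer(duration, position, horizon, p_occur, pct=0.3):
    """Variable family (assumed form): scale the buffer by the
    stochastic task's occurrence probability and by how late it sits
    in the schedule, since late disruptions are harder to absorb."""
    return pct * duration * p_occur * (position / horizon)
```

Under these forms, a 10-day stochastic task with a 50% occurrence chance near the end of the schedule earns less buffer than the flat 15% rule would give it, showing how knowledge of location and stochasticity lets the schedule carry less total protection.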
|
8 |
Improving TCP Data Transportation for Internet of Things. Khan, Jamal Ahmad, 31 August 2018 (has links)
Internet of Things (IoT) is the idea that every device around us is connected, and that these devices continually collect and communicate data for analysis at a large scale in order to enable better end-user experience, resource utilization, and device performance. Data is therefore central to the concept of IoT, and the amount being collected is growing at an unprecedented rate. Current networking systems and hardware are not fully equipped to handle an influx of data at this scale, which is a serious problem: it can lead to erroneous interpretation of the data, resulting in low resource utilization and bad end-user experience, defeating the purpose of IoT. This thesis aims at improving data transportation for IoT. In IoT systems, devices are connected to one or more cloud services over the Internet via an access link. The cloud processes the data sent by the devices and sends back appropriate instructions. Hence, the performance of the two ends of the network, i.e., the access networks and the datacenter network, directly impacts the performance of IoT.
The first portion of our research targets improvement of the access networks through better access link (router) design. Among the important design aspects of routers is the size of their output buffer queue. Selecting an appropriate size for this buffer is crucial because it impacts two key metrics of an IoT system: 1) access link utilization and 2) latency. We have developed a probabilistic model to calculate the size of the output buffer that ensures high link utilization and low latency for packets, eliminating limiting assumptions of prior art that do not hold true for IoT. Our results show that for TCP-only traffic, the buffer size calculated by state-of-the-art schemes results in at least 60% higher queuing delay compared to our scheme, while achieving almost similar access link utilization, loss rate, and goodput. For UDP-only traffic, our scheme achieves at least 91% link utilization with very low queuing delays and aggregate goodput that is approximately 90% of link capacity. Finally, for mixed traffic scenarios, our scheme achieves higher link utilization than in the TCP-only and UDP-only scenarios, as well as low delays, low loss rates, and aggregate goodput that is approximately 94% of link capacity.
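The prior-art schemes being compared against are classic rules of thumb: one bandwidth-delay product (BDP) of buffering, and the later refinement of BDP divided by the square root of the number of long-lived TCP flows. A sketch of both baselines (the thesis's own probabilistic model is not reproduced here):

```python
from math import sqrt

def bdp_buffer(link_bps, rtt_s, pkt_bytes=1500):
    """Classic rule of thumb: one bandwidth-delay product of
    buffering, expressed in packets."""
    return int(link_bps * rtt_s / (8 * pkt_bytes))

def stanford_buffer(link_bps, rtt_s, n_flows, pkt_bytes=1500):
    """Refined rule: BDP / sqrt(n) packets suffice when n long-lived
    TCP flows are desynchronized; one of the baselines whose
    assumptions (many synchronized long-lived flows) break down for
    IoT traffic mixes."""
    return max(1, int(bdp_buffer(link_bps, rtt_s, pkt_bytes) / sqrt(n_flows)))
```

For a 100 Mbit/s link with a 100 ms RTT, the BDP rule asks for 833 full-size packets of buffering, while the sqrt(n) rule with 100 flows asks for only 83, which is the kind of gap the queuing-delay comparison above exploits.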
The second portion of the thesis focuses on datacenter networks. Applications that control IoT devices reside here. Performance of these applications is affected by the choice of TCP used for data communication between Virtual Machines (VM). However, cloud users have little to no knowledge about the network between the VMs and hence, lack a systematic method to select a TCP variant. We have focused on characterizing TCP Cubic, Reno, Vegas and DCTCP from the perspective of cloud tenants while treating the network as a black box. We have conducted experiments on the transport layer and the application layer. The observations from our transport layer experiments show TCP Vegas outperforms the other variants in terms of throughput, RTT, and stability. Application layer experiments show that Vegas has the worst response time while all other variants perform similarly. The results also show that different inter-request delay distributions have no effect on the throughput, RTT, or response time. / Master of Science / Internet of Things (IoT) is the idea that every electronic device around us, like watches, thermostats and even refrigerators, is connected to one another and these devices continually collect and communicate data. This data is analyzed at a large scale in order to enable better user experience and improve the utilization and performance of the devices. Therefore, data is central to the concept of IoT and because of the unprecedented increase in the number of connected devices, the amount being collected is growing at an unprecedented rate. Current computer networks over which the data is transported, are not fully equipped to handle influx of data at this scale. This is a serious problem because it can lead to erroneous analysis of the data, resulting in low device utilization and bad user experience, hence, defeating the purpose of IoT. This thesis aims at improving data transportation for IoT by improving different components involved in computer networks. 
In IoT systems, devices are connected to cloud computing services over the Internet through a router. The router acts as a gateway to send data to and receive data from the cloud services. The cloud services act as the brain of IoT, i.e., they process the data sent by the devices and send back appropriate instructions for the devices to perform. Hence, the performance of the two ends of the network, i.e., routers in the access networks and cloud services in the datacenter network, directly impacts the performance of IoT.
The first portion of our research targets the design of routers. Among the important design aspects of routers is the size of their output buffer queue, which holds the data packets to be sent out. We have developed a novel probabilistic model to calculate the size of the output buffer that keeps link utilization high and the latency of the IoT devices low, ensuring good performance. Results show that our scheme outperforms state-of-the-art schemes for TCP-only traffic and shows very favorable results for UDP-only and mixed traffic scenarios.
The second portion of the thesis focuses on improving application service performance in datacenter networks. Applications that control IoT devices reside in the cloud and their performance is directly affected by the protocol chosen to send data between different machines. However, cloud users have almost no knowledge about the configuration of the network between the machines allotted to them in the cloud. Hence, they lack a systematic method to select a protocol variant that is suitable for their application. We have focused on characterizing different protocols: TCP Cubic, Reno, Vegas and DCTCP from the perspective of cloud tenants while treating the network as a black-box (unknown). We have provided in depth analysis and insights into the throughput and latency behaviors which should help the cloud tenants make a more informed choice of TCP congestion control.
|
9 |
Analysis of a hybrid robot-operator kitting system in the automotive industry: design and optimal assignment of parts to pickers. Boudella, Mohamed El Amine, 19 September 2018 (has links)
In this thesis, conducted with Groupe Renault in the context of a kitting automation project, we are interested in optimising the kitting process in terms of throughput. To do so, we study different configurations of hybrid robot-operator kitting systems in which robots (two types of robots are considered) and operators work in series, connected by an intermediate buffer that decouples their activities. The robotic kitting area starts the preparation of kits; the operators in the manual kitting area then retrieve the robots' preparation and complete it with the remaining parts. Our objective is to develop a decision-making tool that assesses the performance of a hybrid kitting system in a given configuration (layout, picking policy, etc.) before its physical deployment. First, through a model of the elementary kitting operations performed by robots and operators (pick and place, travel, etc.), we develop a cycle time model to assess the performance of hybrid kitting systems. Then, we develop an assignment model, formulated as a mixed integer linear program (MILP), that assigns parts either to the robotic or the manual kitting area, with the objective of minimising cycle times and balancing the workload between the two. The model is applied to two case studies from a Renault plant. This analysis identifies the parameters that most influence cycle times and the choice between robotic and manual kitting. Finally, we develop a simulation model to find the optimal buffer size between robotic and manual kitting so that throughput is maximised.
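The balancing objective of the assignment model can be illustrated with an exhaustive miniature: each part goes to the robotic or the manual area, and the cost to minimise is the bottleneck area's total cycle time. This brute-force sketch is only feasible for a handful of parts; the thesis solves the full problem as a MILP:

```python
from itertools import product

def assign_parts(robot_time, manual_time):
    """Exhaustively try every robot-vs-manual split of the parts and
    return the assignment minimising the bottleneck, i.e. the larger
    of the two areas' summed cycle times (0 = robot, 1 = manual)."""
    n = len(robot_time)
    best, best_cost = None, float("inf")
    for choice in product((0, 1), repeat=n):
        r = sum(t for t, c in zip(robot_time, choice) if c == 0)
        m = sum(t for t, c in zip(manual_time, choice) if c == 1)
        cost = max(r, m)
        if cost < best_cost:
            best, best_cost = choice, cost
    return best, best_cost
```

With two parts where the robot is slow on the first (10 vs 2 seconds manually) and fast on the second (1 vs 8), the search assigns the first part to the operator and the second to the robot, giving a bottleneck of 2, which is the balance the MILP seeks at plant scale.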
|
10 |
Flow-level performance guarantees for IP traffic in the Flow-Aware Networking architecture. Augé, Jordan, 27 November 2014 (has links)
This thesis deals with the realization of a Quality of Service architecture that breaks with traditional approaches and offers end-to-end performance guarantees for traffic. The Flow-Aware Networking approach considers traffic at the level of application flows, for which simple but robust traffic models lead to a fundamental relationship between the resources offered by the network, the generated demand, and the obtained performance. In particular, the Cross-Protect router architecture combines fair queueing scheduling with admission control to implicitly ensure the performance of both streaming and elastic flows, without the need for any marking or signalling protocol. In this context, we consider the sizing of router buffers, the introduction of fair queueing scheduling inside the network and its impact on the performance of TCP, and the design of a suitable admission control algorithm. Finally, a variant of this architecture for the access network is proposed.
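The per-flow fair queueing that Cross-Protect builds on can be illustrated with deficit round-robin, a standard fair queueing scheduler: each flow's queue earns a quantum of byte credit per round and sends head-of-line packets while its credit lasts. This is a generic DRR sketch, not the scheduler actually specified by the thesis:

```python
from collections import deque

def drr(queues, quantum, rounds):
    """Deficit round-robin over per-flow queues of packet sizes
    (bytes): each non-empty queue gains `quantum` bytes of credit per
    round and dequeues packets while the head fits in its credit."""
    qs = [deque(q) for q in queues]
    deficit = [0] * len(qs)
    sent = [[] for _ in qs]
    for _ in range(rounds):
        for i, q in enumerate(qs):
            if not q:
                deficit[i] = 0   # empty flows do not bank credit
                continue
            deficit[i] += quantum
            while q and q[0] <= deficit[i]:
                pkt = q.popleft()
                deficit[i] -= pkt
                sent[i].append(pkt)
    return sent
```

With a 500-byte quantum, a flow of 500-byte packets drains steadily while a flow of 1500-byte packets must bank credit over three rounds before sending, which is how byte-level fairness emerges regardless of packet size.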
|