21 |
Optimality and robustness in opportunistic scheduler design for wireless networks
Sadiq, Bilal, 26 October 2010 (has links)
We investigate in detail two multiuser opportunistic scheduling problems in centralized wireless systems: the scheduling of "delay-sensitive" flows with packet delay requirements of a few tens to a few hundred milliseconds over the air interface, and the scheduling of "best-effort" flows with the objective of minimizing mean file transfer delay.
Schedulers for delay-sensitive flows are characterized by a fundamental tradeoff between "maximizing total service rate by being opportunistic" and "balancing unequal queues (or delays) across users". In choosing how to realize this tradeoff in schedulers, our key premise is that "robustness" should be a primary design objective alongside performance. Different performance objectives -- mean packet delay, the tail of worst user's queue distribution, or that of the overall queue distribution -- result in remarkably different scheduling policies. Different design objectives and resulting schedulers are also not equally robust, which is important due to the uncertainty and variability in both the wireless environment and the traffic. The proposed class of schedulers offers low packet delays, less sensitivity to the scheduler parameters and channel characteristics, and a more graceful degradation of service in terms of the fraction of users meeting their delay requirements under transient overloads, when compared with other well-known schedulers.
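As an illustration only, the following is a minimal sketch of how such a tradeoff can be expressed as a per-slot scheduling rule: each user's priority combines its instantaneous channel rate with a function of its queue length, and a tuning exponent (here called alpha, a hypothetical knob not taken from the thesis) tilts the rule between pure opportunism and queue balancing. It is not the class of schedulers actually proposed in the thesis.

```python
import random

def schedule_slot(rates, queues, alpha=1.0):
    """Pick the user to serve in this slot.

    Illustrative sketch: priority = instantaneous rate * queue^alpha.
    alpha = 0 is purely opportunistic (serve the best channel); larger
    alpha increasingly balances unequal queues across users. This is a
    generic queue-and-channel-aware rule, not the thesis's policy.
    """
    def priority(i):
        return rates[i] * (queues[i] ** alpha)
    return max(range(len(rates)), key=priority)

# Toy usage: three users with random channel rates and given backlogs.
rates = [random.uniform(0.5, 2.0) for _ in range(3)]
queues = [4, 1, 9]
print(schedule_slot(rates, queues, alpha=1.0))
```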
Schedulers for best-effort flows are characterized by a fundamental tradeoff between "maximizing the total service rate" and "prioritizing flows with short residual sizes". We characterize two regimes based on the "degree" of opportunistic gain present in the system. In the first regime -- where the opportunistic capacity of the system increases sharply with the number of users -- the use of residual flow-size information in scheduling will 'not' result in a significant reduction in flow-level delays. In the second regime, by contrast -- where the opportunistic capacity increases slowly with the number of users -- using flow-size information alongside channel state information 'may' result in a significant reduction. We then propose a class of schedulers which offers good performance in either regime, in terms of mean file transfer delays as well as probability of blocking for systems that enforce flow admission control.
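A similarly hedged sketch of the second tradeoff: a best-effort flow's priority can weigh its current channel rate against its residual size (an SRPT-flavored rule when beta = 1). The function name and the exponent beta are illustrative assumptions, not the schedulers proposed in the thesis.

```python
def schedule_best_effort(rates, residual_sizes, beta=1.0):
    """Pick the best-effort flow to serve in this slot.

    Sketch only: priority = rate / residual_size^beta. beta = 0 ignores
    flow sizes and is purely opportunistic; beta > 0 favors flows with
    short residual sizes.
    """
    def priority(i):
        return rates[i] / (residual_sizes[i] ** beta)
    return max(range(len(rates)), key=priority)

print(schedule_best_effort([1.2, 0.8, 2.0], [500, 40, 900], beta=1.0))
```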
This thesis provides a comprehensive theoretical study of these fundamental tradeoffs for opportunistic schedulers, as well as an exploration of some of their practical ramifications for engineering wireless systems.
|
22 |
Effective cooperative scheduling of task-parallel applications on multiprogrammed parallel architectures
Varisteas, Georgios, January 2015 (has links)
Emerging architecture designs include tens of processing cores on a single chip die, and the number of cores is expected to reach the hundreds within a few years. However, most common parallel workloads cannot fully utilize such systems. They expose fluctuating parallelism and do not scale up indefinitely, as there is usually a point after which synchronization costs outweigh the gains of parallelism. The combination of these issues suggests that large-scale systems will either be multiprogrammed or have their unneeded resources powered off.

Multiprogramming leads to hardware resource contention and, as a result, application performance degradation, even when there are enough resources, due to negative sharing effects and increased bus traffic. Most often this degradation is quite unbalanced between co-runners, as some applications dominate the hardware over others. Current operating systems blindly provide applications with access to as many resources as they ask for. This leads to over-committing the system with too many threads, memory contention, and increased bus traffic. Because an application has no insight into system-wide resource demands, most parallel workloads will create as many threads as there are available cores. If every co-running application does the same, the system ends up with N times as many threads as cores. Threads then need to time-share cores, and the continuous context switching and cache-line evictions generate considerable overhead.

This thesis proposes a novel solution across all software layers that achieves throughput optimization and uniform performance degradation of co-running applications. Through a novel, fully automated approach (DVS and Palirria), task-parallel applications can accurately quantify their available parallelism online, generating a meaningful metric as parallelism feedback to the operating system. A second component in the operating system scheduler (Pond) uses such feedback from all co-runners to effectively partition the available resources. The proposed two-level scheduling scheme ultimately has each co-runner degrade its performance by the same factor, relative to how it would execute with unrestricted, isolated access to the same hardware. We call this fair scheduling, departing from the traditional notion of equal opportunity, which causes uneven degradation; some experiments show at least one application degrading its performance 10 times less than its co-runners.
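The two-level idea can be sketched roughly as follows, assuming only the description above: each task-parallel runtime reports a parallelism-feedback number, and an OS-level component partitions the cores proportionally, capping each share at what was requested. The function and names below are hypothetical and do not reproduce the DVS, Palirria, or Pond algorithms.

```python
def partition_cores(total_cores, parallelism_feedback):
    """Split cores among co-running applications.

    parallelism_feedback maps app name -> cores the runtime says it can
    currently use. Shares are proportional to demand and capped at the
    demand itself; any leftover cores simply stay idle (or could be
    powered off). Sketch only.
    """
    demand = {app: max(1, d) for app, d in parallelism_feedback.items()}
    total_demand = sum(demand.values())
    shares = {app: min(d, max(1, round(total_cores * d / total_demand)))
              for app, d in demand.items()}
    # Trim if rounding over-allocated the machine.
    while sum(shares.values()) > total_cores:
        biggest = max(shares, key=shares.get)
        shares[biggest] -= 1
    return shares

print(partition_cores(16, {"appA": 12, "appB": 4, "appC": 20}))
```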
|
23 |
Taintx: A System for Protecting Sensitive Documents
Dillon, Patrice, 06 August 2009 (has links)
Across the country, members of the workforce are being laid off due to downsizing. Most of those people work for large corporations and have access to important company documents. Several studies suggest that employees take critical information after learning they will be laid off. This becomes an issue and a threat to a corporation's security. Corporations are then placed in a position where they must make sure sensitive documents never leave the company. In this study we build a system to assist corporations and system administrators by preventing users from taking sensitive documents. The system used in this study helps maintain a level of security that is not only beneficial but is a crucial part of managing a corporation and enhancing its ability to compete in an aggressive market.
|
24 |
Co-projeto de hardware e software de um escalonador de processos para arquiteturas multicore heterogêneas baseadas em computação reconfigurável / Hardware and software co-design of a process scheduler for heterogeneous multicore architectures based on reconfigurable computing
Bueno, Maikon Adiles Fernandez, 05 November 2013 (has links)
Heterogeneous multiprocessor architectures aim to extract higher performance from processes through the use of cores appropriate to their demands. However, extracting higher performance depends on an efficient scheduling mechanism, able to identify the demands of processes in real time and to assign the most appropriate processor according to its resources. This work proposes and implements the model of a scheduler for heterogeneous multiprocessor architectures, based on software and hardware, applied to the Linux operating system and the SPARC Leon3 processor as a proof of concept. To this end, performance monitors were implemented within the processors, which identify the demands of the processes in real time. For each process, its demand is projected onto the other processors in the architecture, and a balancing step is then performed to maximize total system performance by distributing processes among processors so as to reduce the total processing time of all processes. The Hungarian maximization algorithm, used for balancing in the scheduler, was implemented in hardware, providing parallelism and higher performance in the execution of the algorithm. The scheduler was validated through the parallel execution of several benchmarks, resulting in lower execution times compared to a scheduler without heterogeneity support.
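As a software reference for the balancing step, the sketch below applies SciPy's Hungarian-method implementation to a hypothetical projected-performance matrix (the thesis implements this step in hardware; the matrix values here are made up for illustration).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical projected performance of each process on each processor,
# e.g. derived from per-core performance monitors. Rows: processes,
# columns: processors. Values are illustrative only.
perf = np.array([
    [3.0, 1.5, 2.2],
    [1.0, 2.8, 1.1],
    [2.5, 2.4, 0.9],
])

# The Hungarian method finds the one-to-one assignment maximizing total
# projected performance.
rows, cols = linear_sum_assignment(perf, maximize=True)
for p, c in zip(rows, cols):
    print(f"process {p} -> processor {c} (projected perf {perf[p, c]})")
print("total:", perf[rows, cols].sum())
```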
|
25 |
Uma proposta de algoritmo de escalonamento baseado em lógica nebulosa para redes LTE (Long Term Evolution) / A proposal for a fuzzy-logic-based scheduling algorithm for LTE (Long Term Evolution) networks
Santana, Fernando Castelo Branco Gonçalves, 25 July 2016 (has links)
In recent decades, the increasing use of mobile technologies has motivated the development of new techniques and algorithms to provide high transmission rates in mobile networks. Among them, LTE (Long Term Evolution) technology is one of the most significant standards for fourth-generation (4G) mobile telephony. Due to its high data rates, LTE becomes very attractive to several kinds of applications. However, the quality of the transmissions can be severely impacted by the radio resource scheduling process, since it can lead to fluctuations in the delay and in the application data rates. To perform resource allocation in LTE networks, the scheduler uses various parameters estimated from the radio environment. This process can lead to erroneous estimates, which should be mitigated in order to deal with the inaccuracies of wireless environments. In this context, this master's thesis presents a fuzzy-based downlink scheduler for LTE networks, named PAFS (Performance-Aware Fuzzy Scheduler). The results show that the proposed scheduler promotes a suitable allocation of the radio resources, improving the performance of different QoS (Quality of Service) parameters without compromising the fairness among the system users.
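For illustration only, the following is a toy fuzzy inference step of the kind such a scheduler might perform per user and resource block. The inputs, membership functions, and rule base are assumptions and do not reproduce the actual PAFS design.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_priority(hol_delay_ms, cqi):
    """Toy fuzzy priority in [0, 1] for one user.

    Two inputs (head-of-line delay in ms and a normalized channel quality
    indicator) are fuzzified, combined by a small rule base (min for AND),
    and defuzzified by a weighted average. Sketch only.
    """
    delay_high = tri(hol_delay_ms, 20, 80, 150)
    delay_low = tri(hol_delay_ms, -1, 0, 40)
    chan_good = tri(cqi, 0.4, 1.0, 1.6)
    chan_bad = tri(cqi, -0.2, 0.0, 0.6)
    # Rules: (delay high AND channel good) -> high priority (0.9)
    #        (delay high AND channel bad)  -> medium priority (0.6)
    #        (delay low)                   -> low priority (0.2)
    rules = [(min(delay_high, chan_good), 0.9),
             (min(delay_high, chan_bad), 0.6),
             (delay_low, 0.2)]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0

print(fuzzy_priority(hol_delay_ms=90, cqi=0.8))
```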
|
26 |
基於MapReduce之雲端運算下具地域特性之動態排程 / Dynamic locality driven scheduler for mapreduce based cloud computing
陳耀宗, Chen, Yao Chung, Unknown Date (has links)
MapReduce is a programming model for processing large data sets. It is typically used for distributed computing on clusters of computers, such as a cloud computing platform. Examples of big data sets include unstructured logs, web indexing, scientific data, surveillance data, etc.
MapReduce is a distributed processing framework in which a computing job is broken down into many smaller Map tasks and a Reduce task. Each Map task processes a partition of the given data set, and Reduce aggregates the results of the Maps to produce the final result.
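A toy, single-process illustration of this decomposition (word count), assuming nothing beyond the description above; it is not Hadoop code.

```python
from collections import defaultdict

def map_task(partition):
    """Map: emit (word, 1) pairs for one partition of the input."""
    return [(word, 1) for line in partition for word in line.split()]

def reduce_task(pairs):
    """Reduce: aggregate all Map outputs into final word counts."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

partitions = [["the quick brown fox"], ["the lazy dog", "the end"]]
intermediate = [pair for p in partitions for pair in map_task(p)]
print(reduce_task(intermediate))
```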
Hadoop is an open-source MapReduce architecture and is widely used in many cloud-based services. To best utilize computing resources in a cloud server, a task scheduler is essential to assign tasks to appropriate processors as well as to prioritize resource allocation. The default scheduler of Hadoop is the first-in-first-out (FIFO) scheduler, which is simple but leaves a performance inefficiency yet to be improved. Although there has been much research aiming to improve the performance of the MapReduce platform in past years, many issues still hinder performance improvement, such as dynamic load balancing, data locality, and heterogeneity of computing nodes.
To improve data locality, we propose a new scheduler called Data Locality Driven Scheduler (DLDS) based on the Hadoop platform. DLDS improves Hadoop's performance by allocating Map tasks as close as possible to the data blocks they are to process. We evaluated the proposed DLDS against several other schedulers by simulation on a real 8-node Hadoop system. Experimental results show that DLDS can improve data locality by 10-15%, which results in a significant performance improvement.
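A rough sketch of the locality-first placement idea, under the assumption that the scheduler knows block replica locations and free slots per node; it is not the published DLDS algorithm.

```python
def assign_map_tasks(tasks, block_locations, free_slots):
    """Greedy locality-first assignment of Map tasks to nodes.

    Each Map task is placed on a node holding a replica of its input block
    whenever such a node still has a free slot; otherwise it falls back to
    any node with capacity. Sketch only.
    """
    assignment = {}
    for task in tasks:
        local_nodes = [n for n in block_locations.get(task, [])
                       if free_slots.get(n, 0) > 0]
        candidates = local_nodes or [n for n, s in free_slots.items() if s > 0]
        if not candidates:
            break  # no capacity left; remaining tasks wait
        node = candidates[0]
        assignment[task] = (node, "local" if local_nodes else "remote")
        free_slots[node] -= 1
    return assignment

slots = {"node1": 2, "node2": 1, "node3": 1}
blocks = {"m1": ["node1"], "m2": ["node2"], "m3": ["node1", "node3"], "m4": ["node2"]}
print(assign_map_tasks(["m1", "m2", "m3", "m4"], blocks, slots))
```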
|
27 |
QoS Evaluation of Bandwidth Schedulers in IPTV Networks Offered SRD Fluid Video Traffic
Mondal, Chandra Shekhar, January 2009 (has links)
Internet protocol TV (IPTV) is predicted to be the key technology winner in the future. Efforts are underway to accelerate the deployment of the centralized IPTV model, which combines the VHO, encoders, controller, access network, and home network. Regardless of whether the network is delivering live TV, VOD, or time-shift TV, all content and network traffic resulting from subscriber requests must traverse the entire network from the super-headend all the way to each subscriber's Set-Top Box (STB). IPTV services require very stringent QoS guarantees. When IPTV traffic shares the network resources with other traffic such as data and voice, how to ensure its QoS and efficiently utilize the network resources is a key and challenging issue. QoS is measured in the network-centric terms of delay jitter, packet losses, and bounds on delay. The main focus of this thesis is on optimized bandwidth allocation and smooth data transmission, proposing a traffic model for smooth delivery of video service in an IPTV network together with its QoS performance evaluation. Following Maglaris et al. [5], the coding bit rate of a single video source is analyzed first. Various statistical quantities are derived from bit rate data collected with a conditional replenishment interframe coding scheme. Two correlated Markov process models (one in discrete time and one in continuous time) are shown to fit the experimental data and are used to model the input rates of several independent sources into a statistical multiplexer. Preventive control mechanisms, including CAC and traffic policing, are used for traffic control. The QoS of a common bandwidth scheduler (FIFO) has been evaluated using fluid models with a Markovian queuing method, and the results are analyzed both by simulation and analytically, measuring packet loss, overflow, and mean waiting time among the network users.
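A minimal sketch of the kind of experiment described, assuming simple two-state on/off Markov sources feeding a finite FIFO buffer; the actual study uses the full correlated Markov models of Maglaris et al. [5], so the structure and parameters below are illustrative only.

```python
import random

def simulate_fifo_multiplexer(n_sources=10, steps=50000, peak=1.0,
                              p_on=0.4, p_stay=0.9, service=6.0, buf=40.0):
    """Discrete-time on/off fluid sources into a FIFO multiplexer.

    Each source is a two-state Markov chain emitting `peak` units per step
    while on; the multiplexer drains the shared buffer at `service` units
    per step and drops fluid above `buf`. Reports loss fraction and mean
    queue length. Sketch only.
    """
    on = [random.random() < p_on for _ in range(n_sources)]
    q, lost, arrived, q_sum = 0.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        # Markov transition per source: keep current state w.p. p_stay.
        on = [s if random.random() < p_stay else (not s) for s in on]
        a = peak * sum(on)
        arrived += a
        q = max(0.0, q + a - service)
        if q > buf:
            lost += q - buf
            q = buf
        q_sum += q
    return {"loss_fraction": lost / arrived if arrived else 0.0,
            "mean_queue": q_sum / steps}

print(simulate_fifo_multiplexer())
```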
|
28 |
QoS evaluation of Bandwidth Schedulers in IPTV Networks Offered SRD Fluid Video Traffic
Habib, Mohammad Ahasan, January 2009 (has links)
Internet protocol TV (IPTV) is predicted to be the key technology winner in the future. Efforts are underway to accelerate the deployment of the centralized IPTV model, which combines the VHO, encoders, controller, access network, and home network. Regardless of whether the network is delivering live TV, VOD, or time-shift TV, all content and network traffic resulting from subscriber requests must traverse the entire network from the super-headend all the way to each subscriber's Set-Top Box (STB). IPTV services require very stringent QoS guarantees. When IPTV traffic shares the network resources with other traffic such as data and voice, how to ensure its QoS and efficiently utilize the network resources is a key and challenging issue. QoS is measured in the network-centric terms of delay jitter, packet losses, and bounds on delay. The main focus of this thesis is on optimized bandwidth allocation and smooth data transmission, proposing a traffic model for smooth delivery of video service in an IPTV network together with its QoS performance evaluation. Following Maglaris et al. [5], the coding bit rate of a single video source is analyzed first. Various statistical quantities are derived from bit rate data collected with a conditional replenishment interframe coding scheme. Two correlated Markov process models (one in discrete time and one in continuous time) are shown to fit the experimental data and are used to model the input rates of several independent sources into a statistical multiplexer. Preventive control mechanisms, including CAC and traffic policing, are used for traffic control. The QoS of a common bandwidth scheduler (FIFO) has been evaluated using fluid models with a Markovian queuing method, and the results are analyzed both by simulation and analytically, measuring packet loss, overflow, and mean waiting time among the network users.
|
29 |
QoS-Aware Packet Scheduler for LTE Downlink Based on Packet Prediction Mechanism
Tang, Chang-Lung, 09 August 2011 (has links)
none
|
30 |
Mechanisms on Multipoint Communications for ABR Services on ATM Networks
Hsiao, Wen-Jiunn, 17 February 2005 (has links)
Asynchronous Transfer Mode (ATM) networks are being deployed in carrier backbones. ATM can transmit a wide variety of traffic, such as video, voice, and data. Available Bit Rate (ABR) service is one of the six ATM services and is now under intensive research for its closed-loop feedback control feature. ABR service supports two types of connections: unicast and multicast. There are three types of multicast connections: point-to-multipoint, multipoint-to-point, and multipoint-to-multipoint. Multipoint communication is the exchange of information among multiple senders and multiple receivers, forming a multicast group. Examples of multicast applications include audio and video conferencing, video on demand, tele-metering, distributed games, and data distribution applications.
In this dissertation, we focus on queuing and packet scheduling management for multipoint-to-point ABR connections. Although many fairness definitions have been proposed for the ABR sources in a multipoint-to-point connection, there are still problems with queue lengths, queuing delays, and throughput when ABR sources send variable-length packets. Due to the nature of the VC-merge scheme at merge points in a multipoint-to-point connection, merge switches cannot transmit the cell stream of a packet until the packet is completely queued. If no complete packet is queued, the switch can choose an incomplete packet for cut-through forwarding for efficiency. Therefore, if the switch chooses for cut-through forwarding a long packet from a branch with a smaller cell input rate, the throughput of the output ports will experience severe oscillations. At the same time, ABR queue lengths will grow severely, and ABR cells will experience long queuing delays.
We propose a scheme named MWTF (Minimum Waiting Time First), which is independent of any rate allocation scheme or fairness definition, to resolve these problems by providing the length of each packet to the merge switches. The scheduler can thereby choose an appropriate incomplete packet for cut-through forwarding by selecting the packet with the smallest waiting time. Simulation results show that the merge switch performs well: throughput no longer exhibits severe oscillations and becomes smoother, cells have smaller and smoother queuing delays on average, and the switches have much smaller queue lengths with smoother variations.
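A hedged sketch of the selection step, assuming the waiting time of an incomplete packet is estimated from its signalled length, the cells received so far, and the branch's cell input rate; the dissertation's exact waiting-time definition may differ.

```python
def choose_cut_through_packet(incomplete_packets):
    """Pick an incomplete packet for cut-through forwarding (MWTF-style).

    Since the merge switch knows each packet's total length, it can estimate
    how long it would still wait for the packet to finish arriving
    (remaining cells divided by the branch's cell input rate) and picks the
    packet with the smallest such estimate. Sketch only.
    """
    def est_wait(p):
        remaining = p["length_cells"] - p["received_cells"]
        return remaining / p["branch_rate_cells_per_s"]
    return min(incomplete_packets, key=est_wait) if incomplete_packets else None

packets = [
    {"branch": 1, "length_cells": 120, "received_cells": 30, "branch_rate_cells_per_s": 50.0},
    {"branch": 2, "length_cells": 8, "received_cells": 2, "branch_rate_cells_per_s": 5.0},
]
print(choose_cut_through_packet(packets))
```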
|