31

Environnements pour l'analyse expérimentale d'applications de calcul haute performance / Environments for the experimental analysis of HPC applications.

Perarnau, Swann 01 December 2011 (has links)
High performance computing systems are increasingly complex. Nowadays, each compute node can contain several sockets or several cores sharing multiple memory caches in a hierarchical way. To understand an application's performance on such systems, or to develop new algorithms and validate their behavior, an experimental study is often required. In this thesis, we consider two types of experimental analysis: execution on real systems and simulation using randomly generated inputs. In both cases, a scientist can improve the quality of the performance analysis by controlling the environment (hardware or input data) used. Therefore, we discuss two methods to control hardware resource allocation inside a system: one for the processing time given to an application, the other for the amount of cache memory available to it. Both methods allow us to study how an application's behavior changes according to the amount of resources allocated. Based on modifications of the operating system, we implemented these methods for Linux and demonstrated their use in the analysis of several parallel applications.
Regarding simulation, we studied the random generation of directed acyclic graphs (DAGs) for scheduler simulations. While numerous generation algorithms exist for this problem, most papers in the field rely on ad hoc implementations and provide little validation of their generators. To tackle this issue, we propose a complete environment providing most of the classical generation methods. We validated this environment through large analysis campaigns on Grid'5000, verifying known statistical properties of most algorithms. We also demonstrated that the performance of a scheduler can be affected by the generation method used, identifying a reversal phenomenon: changing the generating algorithm can reverse the outcome of a comparison between two schedulers.
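As an illustration of the kind of generator this abstract discusses, the minimal sketch below implements one classical method, an Erdős–Rényi-style G(n, p) DAG generator in which an edge i -> j is added with probability p for i < j. It is only a hedged example of one such method; the thesis's generation environment, its method names and its parameters are not reproduced here.

    import random

    def random_dag(n, p, seed=None):
        """Generate a random DAG on n nodes: for every pair i < j,
        add edge i -> j with probability p. Processing nodes in a
        fixed order guarantees acyclicity."""
        rng = random.Random(seed)
        return [(i, j) for i in range(n) for j in range(i + 1, n)
                if rng.random() < p]

    # Example: two different edge densities produce very different task
    # graphs, which is how the choice of generator can change which
    # scheduling heuristic appears to "win".
    sparse = random_dag(20, 0.1, seed=1)
    dense = random_dag(20, 0.5, seed=1)
    print(len(sparse), len(dense))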
32

Bounds For Scheduling In Non-Identical Uniform Multiprocessor Systems

Darera, Vivek N 06 1900 (has links)
With multiprocessors and multicore processors becoming ubiquitous, research focus has shifted from uniprocessors to multiprocessors. Unfortunately, results derived for the uniprocessor case do not always extend to the multiprocessor case in a straightforward manner. This necessitates a paradigm shift in the approach used to design and analyse the behaviour of such processors. In real-time systems, that is, systems characterised by explicit timing constraints, analysis and performance guarantees are especially important, as failure is unacceptable. Scheduling algorithms used in real-time systems have to be carefully designed, as the performance of the system depends critically on them. Efficient tests for determining whether a set of tasks can be feasibly scheduled on such a computing system using a particular scheduling algorithm are therefore important. Traditionally, the ‘task utilization’ parameter has been used for devising such tests. Utilization-based tests for the Earliest Deadline First (EDF) and Rate Monotonic (RM) scheduling algorithms are known and well understood for uniprocessor systems. In our work, we derive limits on similar bounds for the multiprocessor case. Our work differs from previous literature in that we explore the case where the individual processors constituting the multiprocessor need not be identical. Each processor in such a system is characterised by a capacity, or speed, and the time taken by a task to execute on a processor is inversely proportional to its speed. Such instances may arise during system upgrades, when faster processors are added to the system, making it a non-identical multiprocessor, or during processor design, when the different cores on a chip may have different processing power to handle dynamic workloads. We derive results for the partitioned paradigm of multiprocessor scheduling, that is, when tasks are partitioned among the processors and interprocessor migration after a part of execution is completed is not allowed. Results are derived for both fixed-priority algorithms (RM) and dynamic-priority algorithms (EDF) used on individual processors. Maximum and minimum limits on the bounds for a ‘greedy’ class of algorithms are established, since the actual value of the bound depends on the algorithm that allocates the tasks. We also derive the utilization bound of an algorithm whose bound is close to the upper limit in both cases. We find that a closed-form expression for the utilization bound can be obtained when EDF is used as the uniprocessor scheduling algorithm, but when RM is the uniprocessor scheduling algorithm, an O(mn) algorithm is required to find the utilization bound, where m is the number of tasks in the system and n is the number of processors. Knowledge of such bounds allows very fast schedulability tests, with the limitation that the tests are sufficient but not necessary for schedulability. We also compare the value of the bounds with those achievable in ‘equivalent’ identical multiprocessor systems and find that the performance guarantees provided by the non-identical multiprocessor system are far higher than those offered by the equivalent identical system.
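The following minimal sketch illustrates the general setting of partitioned scheduling on a uniform (non-identical speed) multiprocessor, using the standard fact that implicit-deadline EDF on a processor of speed s is feasible whenever the assigned utilization does not exceed s, together with a first-fit-decreasing allocator. It is not the thesis's ‘greedy’ allocator or its derived bounds; the allocation heuristic and the data are assumptions for illustration.

    def partition_edf(tasks, speeds):
        """tasks: list of utilizations u_i = C_i / T_i on a unit-speed CPU.
        speeds: list of processor speeds (capacities).
        Returns a processor -> [task index] assignment, or None if the
        first-fit-decreasing heuristic fails (the task set may still be
        feasible under some other allocation)."""
        order = sorted(range(len(tasks)), key=lambda i: tasks[i], reverse=True)
        load = [0.0] * len(speeds)
        assign = [[] for _ in speeds]
        for i in order:
            for p in range(len(speeds)):
                # EDF on one processor of speed s is feasible iff the total
                # utilization is at most s (implicit deadlines assumed).
                if load[p] + tasks[i] <= speeds[p]:
                    load[p] += tasks[i]
                    assign[p].append(i)
                    break
            else:
                return None
        return assign

    print(partition_edf([0.9, 0.6, 0.5, 0.4], speeds=[2.0, 1.0]))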
33

Performance Modeling Based Scheduling And Rescheduling Of Parallel Applications On Computational Grids

Sanjay, H A 10 1900 (has links)
As computational grids have become popular and ubiquitous, users have access to a large number of different types of geographically distributed grid resources. Many computational grid frameworks are composed of multiple distributed sites, each consisting of one or more dedicated or non-dedicated clusters. Jobs submitted to a grid are handled by a metascheduler, which interacts with the local schedulers of the clusters to schedule jobs to the individual clusters. Computational grids have proven to be powerful research-beds for the execution of various kinds of parallel applications. When a parallel application is submitted to a grid, the metascheduler has to choose a set of resources from a cluster for application execution. To select the best set of resources, it is important to estimate the performance of the application: accurate performance estimates are essential for a grid metascheduler to schedule user jobs efficiently. Thus models that predict execution times of parallel applications on a set of resources, and a search procedure (scheduling strategy) that selects the best set of machines within a cluster for application execution, are important for enabling parallel applications on grids. For efficient execution of large scientific parallel applications consisting of multiple phases, performance models of the individual phases should be obtained. Efficient rescheduling strategies that can use the per-phase models to adapt parallel applications to application and resource dynamics are necessary for maintaining high application performance on grids. A practical and robust grid computing infrastructure that integrates components for application and resource monitoring, performance modeling, scheduling and rescheduling is essential for large-scale deployment and high performance of scientific applications on grid systems, and hence for fostering high performance computing. This thesis focuses on developing performance models for predicting execution times of parallel problems/subproblems on dedicated and non-dedicated grid resources. It also constructs robust scheduling and rescheduling strategies in a grid metascheduler that can use the performance models for efficient execution of large scientific parallel applications on dynamic grids. Finally, the thesis builds a practical and robust grid middleware infrastructure that integrates components for performance modeling, scheduling and rescheduling, monitoring and migration, for large-scale deployment and use of high performance applications on grids. The thesis consists of four main components. In the first part, we have developed a comprehensive set of performance modeling strategies to predict the execution times of tightly-coupled parallel applications on a set of resources in a dedicated or non-dedicated cluster. The main purpose of our prediction strategies is to aid grid metaschedulers in making scheduling decisions. Our performance modeling strategies, based on linear regression, can deal with non-dedicated systems where loads can change during application execution. Our models do not require detailed knowledge or instrumentation of the applications and can be constructed without the involvement of application developers. The strategies are intended for rapid and large-scale deployment of parallel applications on non-dedicated grid systems.
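As an illustration of the regression-based prediction idea described above (not the thesis's actual model), the sketch below fits execution time against assumed features, problem size per processor and background load, by ordinary least squares. The feature choice and all numbers are illustrative.

    import numpy as np

    # Training observations: (problem_size, num_procs, avg_load) -> runtime (s).
    # The values are made up for illustration only.
    X = np.array([[1e6,  8, 0.1],
                  [1e6, 16, 0.3],
                  [2e6,  8, 0.2],
                  [2e6, 16, 0.1],
                  [4e6, 16, 0.4]])
    y = np.array([12.0, 7.5, 23.1, 13.2, 29.8])

    # Fit runtime ~ b0 + b1 * size/procs + b2 * load (one plausible feature set).
    features = np.column_stack([np.ones(len(X)), X[:, 0] / X[:, 1], X[:, 2]])
    coef, *_ = np.linalg.lstsq(features, y, rcond=None)

    def predict(size, procs, load):
        return coef @ np.array([1.0, size / procs, load])

    print(round(predict(3e6, 16, 0.2), 2))  # predicted execution time in seconds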
We have evaluated our strategies on 8, 16, 24 and 32-node clusters with random loads and with load traces from a grid system. Our performance modeling strategies gave less than 30% average percentage prediction error in all cases, which is reasonable for non-dedicated systems. We also found that scheduling based on the predictions of our strategies results in perfect scheduling in many cases. For modeling large-scale scientific applications, we use execution profiles, automatic program analysis and manual analysis of significant portions of the application's code to identify the different phases of an application. We then apply our performance modeling strategies to predict execution times for the different phases of tightly-coupled parallel applications on a set of resources in a dedicated or non-dedicated cluster. Our experiments show that using combinations of per-phase performance models gives 18%–70% more accurate predictions than using a single performance model for the whole application. In the second part of the thesis, we have devised, evaluated and compared algorithms for scheduling tightly-coupled parallel applications on multi-cluster grids. Our algorithms use the performance models that predict the execution times of parallel applications to evaluate candidate schedules. In this work, we propose a novel algorithm called Box Elimination (BE) that searches a space of performance model parameters to determine efficient schedules. By eliminating large search-space regions containing poorer solutions at each step and concentrating on high-quality solutions, our algorithm is able to generate efficient schedules within a few seconds, even for clusters of 512 processors. By means of a large number of real and simulation experiments, we compared our algorithm with popular optimization techniques. We show that our algorithm generates up to 80% more efficient schedules than the other algorithms and that the resulting execution times are more robust against performance modeling errors. The third part of the thesis deals with policies for rescheduling long-running multi-phase parallel applications in response to application and resource dynamics. In this work, we use our performance modeling and scheduling strategies to derive rescheduling plans for executing multi-phase parallel applications on grids. A rescheduling plan consists of potential points in the application execution for rescheduling and of schedules of resources for application execution between two consecutive rescheduling points. We have developed three algorithms for deriving a rescheduling plan for a parallel application: an incremental algorithm, a divide-and-conquer algorithm and a genetic algorithm. We have also developed an algorithm that combines rescheduling plans derived on different clusters into a single coherent rescheduling plan for application execution on a grid consisting of multiple clusters. The rescheduling plans generated by our algorithms are highly efficient, leading to application execution times that are within 10% of those obtained by a brute-force method. We also find that rescheduling in response to changing application and resource dynamics, using the multi-cluster rescheduling plans generated by our algorithms, gives much lower execution times than executing the applications with a single schedule throughout.
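The Box Elimination algorithm itself is not reproduced here. The brute-force stand-in below only shows the surrounding idea that BE accelerates: score every candidate (cluster, processor count) pair with the model's predicted runtime and keep the best. The cluster data and the prediction function are hypothetical.

    def best_schedule(clusters, predict):
        """clusters: dict name -> (free_procs, avg_load).
        predict(procs, load) -> predicted application runtime.
        Exhaustive stand-in for a model-driven scheduler: Box Elimination
        prunes large regions of this search space instead of enumerating it."""
        best = None
        for name, (free, load) in clusters.items():
            for procs in range(1, free + 1):
                t = predict(procs, load)
                if best is None or t < best[0]:
                    best = (t, name, procs)
        return best

    print(best_schedule({"clusterA": (32, 0.2), "clusterB": (64, 0.5)},
                        lambda procs, load: 100.0 / procs + 40.0 * load))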
In the final part of the thesis, we have developed a practical grid middleware framework called MerITA (Middleware for Performance Improvement of Tightly Coupled Parallel Applications on Grids), a system for the effective execution of tightly-coupled parallel applications on multi-cluster grids consisting of dedicated or non-dedicated, interactive or batch systems. The framework brings together performance modeling for automatically determining the characteristics of parallel applications, scheduling strategies that use the performance models for efficient mapping of applications to resources, rescheduling policies for determining the points in application execution at which executing applications can be rescheduled to different sets of resources to obtain performance improvements, and a checkpointing library for enabling rescheduling.
34

Performance Evaluation of Voice Traffic over MPLS Network with TE and QoS Implementation

Kharel, Jeevan, Adhikari, Deepak January 2011 (has links)
Multiprotocol Label Switching (MPLS) is a new paradigm in routing architectures that has changed the way Internet Protocol (IP) packets are transferred in a network. MPLS improves the reliability of communication, minimizing delays and enhancing the speed of packet transfer. One important feature of MPLS is its support for Traffic Engineering (TE), which plays a vital role in minimizing congestion through efficient load balancing and management of network resources. Integration of QoS with an MPLS-TE network may further enhance network performance. Various scheduling algorithms can be used for implementing QoS on a network, and they may affect its performance differently. In our study, QoS is implemented on top of the MPLS-TE network using the Differentiated Services (DiffServ) architecture. Different basic scheduling algorithms are used for the implementation of QoS, to check their impact on the network and to identify the most suitable among them. Performance evaluation is done considering the network parameters latency, jitter, packet end-to-end delay, and packet delay variation. The simulation was done using OPNET Modeler 16.0 and the results were analyzed. The simulation results show that using TE along with QoS in an MPLS network decreases the latency, jitter, packet delay variation and end-to-end packet delay for voice traffic compared to using TE alone.
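For reference, the sketch below shows one common way to compute the metrics named above from per-packet send and receive timestamps: mean one-way delay for latency, mean absolute difference of consecutive delays for jitter, and max minus min delay for packet delay variation. These are generic conventions, not necessarily the exact definitions used by OPNET Modeler; all numbers are illustrative.

    def voice_metrics(send_times, recv_times):
        """Per-packet one-way delays plus summary metrics often used
        when comparing schedulers for voice traffic."""
        delays = [r - s for s, r in zip(send_times, recv_times)]
        latency = sum(delays) / len(delays)                       # mean end-to-end delay
        jitter = (sum(abs(b - a) for a, b in zip(delays, delays[1:]))
                  / max(len(delays) - 1, 1))                      # mean delay variation
        pdv = max(delays) - min(delays)                           # packet delay variation
        return latency, jitter, pdv

    print(voice_metrics([0.00, 0.02, 0.04, 0.06],
                        [0.031, 0.049, 0.075, 0.090]))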
35

Impact of Real Time Events on the Relative Efficiency of the Proposed Dynamic Scheduling Algorithms for Diffusion Furnace(s) in the Semiconductor Manufacturing

Vimala Rani, M January 2017 (has links) (PDF)
Manufacturing industries play a significant role in a country's economy. Among them, the semiconductor manufacturing (SM) industry is one of the fastest-growing industries in the world, with worldwide sales of $31 billion in the month of December 2016 alone. Semiconductors are required by a large number of industries, including telecommunications, medical electronics, automotive, defence and aerospace, and consumer electronics. Today, without semiconductors, the technology that we count on every day would not be possible. For these reasons, demand on the SM industry has increased rapidly. In addition, most semiconductor-based products have very short life cycles, making the SM industry highly competitive. Thus, to utilize resources effectively, handle the huge demand, and deliver products on time, efficient scheduling is important in the SM industry. The SM process can be broadly classified into wafer fabrication (the wafer fab), wafer probing, assembly, and final testing. Scheduling is especially important in the wafer fab because of its complex operations involving multiple types of machines, re-entrant flows, expensive machines, and time-consuming processes. This study therefore concerns scheduling in the wafer fab, particularly the diffusion operation. The diffusion operation, carried out on batch processing machines, heavily impacts the production rate of the wafer fab and in turn of the SM industry, since it requires the longest processing time of all operations in the wafer fab and is consequently the bottleneck operation. Based on a detailed literature review, this study addresses a new research problem on dynamic scheduling (DS) of diffusion furnace(s) that considers together several real-life problem characteristics: non-identical parallel diffusion furnaces, machine eligibility restrictions, incompatible job families, job- and/or resource-related real time events, and non-agreeable release times and due dates. In addition, given the importance of on-time delivery, this study deals with five due-date based scheduling objectives: total weighted tardiness (TWT), number of tardy jobs (NT), on-time delivery (OTD) rate, total earliness and lateness (TE/L) and maximum lateness (Lmax), treated both as single objectives and as multiple objectives. The multi-objective formulations combine all five due-date based objectives in a linear form by assigning equal or randomly chosen unequal weights to each of them. With these, the main objective of this thesis is to study the impact of job- and/or resource-related real time events (JR-RTE) on the relative efficiency of the proposed dynamic scheduling algorithms for diffusion furnace(s) while optimizing each of the due-date based scheduling objectives considered in this study. The research problem is decomposed into five phases. From the analysis of the literature, it is observed that no earlier study provides mathematical models for dynamic scheduling (DS) of diffusion furnaces that optimize all the due-date based scheduling objectives considered in this study.
In the first phase, therefore, fourteen (0-1) mixed integer linear programming (MILP) models are proposed for DS of diffusion furnaces (seven for DS of a single diffusion furnace and seven for DS of non-identical parallel diffusion furnaces) to optimize the due-date based single objectives TWT, NT, OTD rate, TE/L, and Lmax and the multi-objectives MO1 and MO2. The workability of all the proposed (0-1) MILP models is demonstrated by developing a suitable numerical example and a LINGO set code (which generates each of the proposed (0-1) MILP models for any given data) and solving them with the LINGO solver. Further, based on the analysis of the literature, a suitable experimental design is proposed and 15 small-scale test instances are generated. The computational complexity of each of the proposed (0-1) MILP models is examined empirically by solving these 15 small-scale instances. Owing to the computational intractability of the proposed (0-1) MILP models for DS of diffusion furnaces, the second phase of the research focuses on a simple alternative approach based on dispatching rules, since the literature shows that dispatching rules are heavily used in the SM industry. However, no study in the literature presents a comparative analysis of various dispatching rules, particularly due-date based dispatching rules (DDR), for DS of diffusion furnace(s) with respect to various due-date based scheduling objectives. Accordingly, in the second phase, this study proposes a simple greedy algorithm based on DDR (called GA-DDR) for dynamic scheduling of a single diffusion furnace (DS-SDF). Twenty variants of GA-DDR are proposed for DS-SDF, considering due-date based dispatching rules such as Earliest Due Date, Flow Due Date, Operational Due Date, Modified Operational Due Date, Critical Ratio, Minimum Slack First, Cost OVER Time, ten versions of Apparent Tardiness Cost (ATC) [including a new ATC rule proposed in this study] and five versions of Batch Apparent Tardiness Cost (BATC) [including a new BATC rule proposed in this study]. All twenty variants of GA-DDR are implemented in Turbo C. An experimental design for generating large-scale test data is proposed in this phase, and 270 large-scale problem instances (27 problem configurations with 10 instances per configuration) are generated. A series of computational experiments is then carried out to understand the relative efficiency of the twenty proposed variants of GA-DDR: the efficiency of each variant for DS-SDF with respect to each scheduling objective is analysed in comparison with the optimal objective function value obtained from the corresponding (0-1) MILP model for the 15 small-scale problem instances, using the standard performance measures Average Relative Percentage Deviation (ARPD) and Maximum Relative Percentage Deviation (MRPD).
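The minimal sketch below illustrates the greedy, dispatching-rule-driven idea behind GA-DDR using only the simplest rule (Earliest Due Date) on a single furnace: whenever the furnace is free, batch the most urgent released jobs of one incompatible family up to capacity and charge weighted tardiness on completion. It is a hedged illustration, not the thesis's GA-DDR; the family-dependent, job-independent batch times, the data layout and all numbers are assumptions. The ATC/BATC variants would replace the sorting key.

    def greedy_edd_single_furnace(jobs, capacity):
        """jobs: list of dicts with release r, due date d, weight w, family f.
        A batch holds up to `capacity` jobs of one family; its processing time
        is the family's (job-independent) time, as is typical for diffusion.
        Returns the total weighted tardiness of the resulting schedule."""
        proc_time = {"oxidation": 4.0, "anneal": 6.0}   # illustrative family times
        pending, t, twt = sorted(jobs, key=lambda j: j["d"]), 0.0, 0.0
        while pending:
            released = [j for j in pending if j["r"] <= t]
            if not released:                  # idle until the next release
                t = min(j["r"] for j in pending)
                continue
            fam = released[0]["f"]            # EDD: family of the most urgent job
            batch = [j for j in released if j["f"] == fam][:capacity]
            t += proc_time[fam]
            for j in batch:
                twt += j["w"] * max(0.0, t - j["d"])
                pending.remove(j)
        return twt

    jobs = [dict(r=0, d=5, w=2, f="oxidation"), dict(r=0, d=9, w=1, f="anneal"),
            dict(r=2, d=8, w=3, f="oxidation"), dict(r=1, d=12, w=1, f="anneal")]
    print(greedy_edd_single_furnace(jobs, capacity=2))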
Further, for each of the 270 problem instances, the efficiency of each of the twenty proposed variants of GA-DDR for DS-SDF with respect to each scheduling objective is analysed in comparison with an estimated objective function value, computed by fitting a Weibull distribution to the twenty feasible solutions obtained for each instance: (i) empirically, using the performance measures ARPD, MRPD, Integrated Rank (IRANK) and Global Comparison based on Worst Solution (GCWS), and (ii) statistically, using the mean, median and 95% confidence interval. From the overall analysis at the end of the second phase, six efficient variants of GA-DDR are identified among the twenty proposed variants for DS-SDF, and the reasons for their better performance are discussed. Two of these six efficient variants use the new ATC rule and/or the new BATC rule proposed by the author of this thesis. The second phase considers only the dynamic arrival of jobs in all twenty variants of GA-DDR. In real life, however, various unexpected job-related real time events (rush jobs, due-date changes, early/late arrival of jobs, changes in job priority, and job cancellations) and/or resource-related real time events (machine breakdowns, operator illness, tool failures, shortage of material, and defective material) occur in addition to the dynamic arrival of jobs. From the literature, it is observed that all studies on the dynamic scheduling of diffusion furnaces consider only the future arrival of jobs, and none considers real time events. Further, to the best of our knowledge, research studies on discrete processing machines develop new rescheduling algorithms or modify existing algorithms whenever real time events occur while taking the scheduling decision. However, because of the long operation times at the diffusion furnace and the computerized tracking system on the shop floor of the wafer fab, we propose the research hypothesis that appropriately modifying the work-in-process (WIP) data and/or the availability time of the corresponding diffusion furnace(s) for the next scheduling decision, depending on whether job- and/or resource-related real time events occur and using the existing computerized tracking system, is sufficient, and that changing the proposed efficient algorithms for DS-SDF is not required. This hypothesis is tested both empirically and statistically in the third phase of this research, considering the twenty proposed variants of GA-DDR for DS-SDF and the proposed experimental design. Accordingly, this study formulates the formal researchable hypothesis that JR-RTE have no impact on the relative efficiency of the twenty proposed variants of GA-DDR for DS-SDF while optimizing each of the due-date based scheduling objectives considered in this study. To test this hypothesis, an adjusted GA-DDR (with JR-RTE) is proposed for each of the proposed GA-DDR variants, in which an additional step updates the WIP data if a job-related event occurs and/or the next available time of the corresponding diffusion furnace(s) if a resource-related event occurs, before finalizing the scheduling decision, as sketched below. Each of the 270 large-scale problem instances generated using the proposed experimental design is solved by each of the 20 adjusted variants of GA-DDR (with JR-RTE).
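The sketch below illustrates, with a hypothetical data layout, the adjustment step this hypothesis relies on: job-related events edit only the WIP list, resource-related events only push back the affected furnace's next-available time, and the dispatching algorithm itself is then re-run unchanged.

    def apply_real_time_event(event, wip, furnace_available_at):
        """Update shop-floor data in place; the unchanged dispatching
        algorithm is simply re-run on the updated data afterwards."""
        if event["kind"] == "job":            # rush job, cancellation, due-date change...
            if event["action"] == "add":
                wip.append(event["job"])
            elif event["action"] == "cancel":
                wip[:] = [j for j in wip if j["id"] != event["job_id"]]
            elif event["action"] == "due_date":
                next(j for j in wip if j["id"] == event["job_id"])["d"] = event["new_due"]
        else:                                 # resource event: breakdown, tool failure...
            furnace_available_at[event["furnace"]] += event["downtime"]
        return wip, furnace_available_at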
The relative efficiency of each of the 20 proposed variants of GA-DDR and of the corresponding adjusted GA-DDR (with JR-RTE) is compared using the performance measures ARPD and MRPD [that is, ARPD(GA-DDR) vs. ARPD(adjusted GA-DDR with JR-RTE), and MRPD(GA-DDR) vs. MRPD(adjusted GA-DDR with JR-RTE)] while optimizing each of the seven scheduling objectives considered in this study. The empirical analysis reveals no change in the relative efficiency of the 20 proposed variants of GA-DDR and the corresponding 20 adjusted variants of GA-DDR (with JR-RTE) for any of the scheduling objectives. The hypothesis is also proved statistically by computing Spearman's rank-order correlation between each of the 20 variants of GA-DDR and the corresponding adjusted GA-DDR (with JR-RTE) for DS-SDF for each of the seven due-date based scheduling objectives. The empirical and statistical analyses carried out in the third phase indicate that there is no need to adjust the proposed variants of GA-DDR when real time events occur in order to obtain efficient schedules. The SM industry normally operates more than one non-identical diffusion furnace, in parallel. For technical reasons, some jobs can be processed only on specific diffusion furnaces available on the shop floor (known as machine eligibility restriction in scheduling theory). Hence, the impact of JR-RTE on the dynamic scheduling (DS) of non-identical parallel diffusion furnaces (NPDF) with machine eligibility restrictions (MER) is addressed in the fourth phase of this study. In this phase, the twenty proposed variants of GA-DDR for DS-SDF are extended appropriately for DS-NPDF with MER [called Extended GA-DDR (EGA-DDR)]. A few additional problem parameters required for NPDF with MER are identified from the literature, the proposed experimental design is extended accordingly, and 270 problem instances representing NPDF with MER are generated. To test the proposed hypothesis on the impact of JR-RTE on DS-NPDF with MER, exactly the same research process used for comparing GA-DDR vs. adjusted GA-DDR (with JR-RTE) is followed for comparing EGA-DDR vs. adjusted EGA-DDR (with JR-RTE). Both the empirical and the statistical analyses clearly show that JR-RTE have no impact on the relative efficiency of the twenty variants of EGA-DDR for DS-NPDF with MER for any of the due-date based scheduling objectives considered in this study, and that there is no need to adjust the variants of EGA-DDR when real time events occur in order to obtain efficient schedules. So far, the study has addressed the development of efficient GA-DDR and EGA-DDR for DS-SDF and DS-NPDF with MER respectively, and studied the impact of JR-RTE on their relative efficiency. In the final phase of the research, the impact of JR-RTE on the metaheuristics Simulated Annealing (SA) and Tabu Search (TS), taken one at a time, for DS-SDF while minimizing TWT is studied. The required parameters for these two metaheuristics are identified from the literature, and SA and TS are implemented using each of the six solutions obtained from the six efficient variants of GA-DDR as the initial solution.
From the analysis of the solutions obtained for each of the 270 problem instances from each of the six efficient variants of GA-DDR and from each of the metaheuristics SA and TS, the six efficient proposed variants of GA-DDR appear to be robust in terms of both solution quality and computational time. Further, to study the impact of JR-RTE on the metaheuristics SA and TS, this study considers (a) the six solutions obtained from the six efficient variants of GA-DDR for DS-SDF as initial solutions, yielding six final solutions from each metaheuristic, and (b) the six solutions obtained from the six adjusted variants of GA-DDR (with JR-RTE) for DS-SDF as initial solutions, yielding another six final solutions from each metaheuristic. For each metaheuristic, these two sets of final solutions, obtained for each of the 270 problem instances, are compared empirically and statistically based on the various performance measures considered in this study, and the research hypothesis defined in this study is again confirmed. The major research contributions of this study are as follows.
- By analysing the literature on scheduling diffusion furnaces and the real-life situation in scheduling them, a new research problem on dynamic scheduling (DS) of diffusion furnaces with incompatible job families, machine eligibility restrictions and non-agreeable release times and due dates, considering job- and/or resource-related real time events (JR-RTE) along with dynamic job arrivals, to optimize the due-date based scheduling objectives TWT, NT, OTD rate, TE/L, and Lmax as single objectives as well as multi-objectives, was defined.
- Seven (0-1) MILP models each for DS-SDF and DS-NPDF were proposed for optimizing each of the seven due-date based scheduling objectives considered in this study, and their computational complexity was examined.
- Owing to the computational complexity of the proposed (0-1) MILP models and the popularity of dispatching rules in the semiconductor manufacturing industry, this study proposed and compared twenty variants of (i) a greedy algorithm based on due-date based dispatching rules (GA-DDR) for DS-SDF, and (ii) an extended GA-DDR for DS-NPDF with machine eligibility restrictions (MER).
- The impact of JR-RTE on the twenty proposed variants of (a) GA-DDR for DS-SDF and (b) EGA-DDR for DS-NPDF with MER was studied, and it was observed that modifying the data appropriately using the existing computerized tracking system on the shop floor is sufficient; rescheduling or modifying the existing algorithms is not required when JR-RTE occur.
- Finally, the single-solution metaheuristics Simulated Annealing (SA) and Tabu Search (TS), each initialized with the solutions obtained from the six efficient variants of GA-DDR proposed in this study, were proposed for DS-SDF to minimize TWT. Performance analysis of the solutions obtained from each of the six efficient variants of GA-DDR and from each metaheuristic showed that the efficient variants of GA-DDR are robust in terms of both solution quality and computational time. In addition, the impact of JR-RTE on the metaheuristics SA and TS was studied, and the research hypothesis proposed in this study was confirmed.
Although this study considers many real-life problem characteristics, it has certain limitations. While a mathematical model is proposed for DS-NPDF, the additional constraint required for machine eligibility is not included in it. Further, the impact of JR-RTE on the metaheuristics SA and TS was studied only for DS-SDF, not for DS-NPDF with MER. Besides overcoming these limitations, immediate future research directions for the problem studied in this thesis include proposing greedy algorithms for scheduling the diffusion operation together with upstream or downstream operations, and proposing population-based metaheuristics for the research problem defined in this study.
36

Implementace pokročilých mechanismů plánování množin RT úloh běžících pod uC/OS-II / Implementation of Advanced Real-Time Scheduling Mechanisms for uC/OS-II

Čižinský, Vojtěch January 2010 (has links)
This thesis deals with extensions of the uC/OS-II kernel services concerning advanced task-scheduling mechanisms. The source code of this operating system is openly available and can, in accordance with the licence agreement, be modified and extended with additional capabilities. The functionality of the implemented scheduling algorithms is finally verified using the Cheddar and TimesTool tools.
37

Scheduling for wireless control in single hop WirelessHART networks

Ercoli, Valeria January 2010 (has links)
No description available.
38

Voice Capacity in Opportunistic Spectrum Access Networks with Friendly Scheduling

Hassanein, Hanan January 2016 (has links)
Radio spectrum has become increasingly scarce due to the proliferation of new wireless communication services. This problem has been exacerbated by fixed bandwidth licensing policies that often lead to spectral underutilization. Cognitive radio networks (CRN) can address this issue using flexible spectrum management that permits unlicensed (secondary) users to access the licensed spectrum. Supporting real-time quality-of-service (QoS) in CRNs, however, is very challenging due to the random spectrum availability induced by licensed (primary) user activity. This thesis considers the problem of real-time voice transmission in CRNs with an emphasis on secondary network "friendliness". Friendliness is measured by the secondary real-time voice capacity, defined as the number of connections that can be supported subject to typical QoS constraints. The constant bit rate (CBR) air interface case is assumed first. An offline scheduler that maximizes friendliness is derived using an integer linear program (ILP) that can be solved using a minimum-cost flow graph construction. Two online primary scheduling algorithms are then introduced. The first is based on shaping the primary spectral hole patterns subject to primary QoS constraints. The second applies real-time scheduling to both primary traffic and virtual secondary calls. The online scheduling algorithms are found to perform well compared to the friendliness upper bound. Extensive simulations of the primary friendly schedulers show the achievable secondary voice capacity for a variety of parameters, compared to non-friendly primary scheduling. The thesis then considers the variable bit rate (VBR) air interface option for primary transmissions. Offline and online approaches are taken to generate a primary VBR traffic schedule that is friendly to secondary voice calls. The online VBR schedulers are found to perform well compared to the friendliness upper bound. Simulation results are presented that show the effect of the primary traffic load and primary network delay tolerance on the primary network's friendliness towards potential secondary voice traffic. Finally, secondary user friendliness is considered from an infrastructure deployment point of view. A cooperative framework is proposed, which allows the primary traffic to be relayed by helper nodes using decode-and-forward (DF) relaying. This approach decreases the primary traffic channel utilization, which, in turn, increases the capacity available to potential secondary users. A relay selection optimization problem is first formulated that minimizes the primary channel utilization. A greedy algorithm that assigns relay nodes to primary data flows is introduced and found to perform well compared to the optimum bound. Results are presented that show the primary network friendliness for different levels of primary channel utilization. / Dissertation / Doctor of Philosophy (PhD)
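The sketch below illustrates the greedy relay-assignment idea mentioned in the last part of the abstract under simplifying assumptions that are ours, not the thesis's: a flow's channel-utilization share is its load divided by the achievable rate, decode-and-forward relaying raises that rate, each relay serves at most one flow, and relays are granted greedily to the flow with the largest utilization saving. The rate functions and numbers are hypothetical.

    def greedy_relay_assignment(flows, relays, rate, relay_rate):
        """flows: {flow: offered load in bit/s}; relays: list of relay ids.
        rate(f): direct achievable rate; relay_rate(f, r): rate via relay r.
        Utilization share of a flow is load / rate. Greedily give each relay
        to the currently unassigned flow it helps most."""
        util = {f: load / rate(f) for f, load in flows.items()}
        free, assignment = set(relays), {}
        while free:
            best = None
            for f, load in flows.items():
                if f in assignment:
                    continue
                for r in free:
                    saving = util[f] - load / relay_rate(f, r)
                    if saving > 0 and (best is None or saving > best[0]):
                        best = (saving, f, r)
            if best is None:
                break
            _, f, r = best
            assignment[f] = r
            util[f] = flows[f] / relay_rate(f, r)
            free.remove(r)
        return assignment, sum(util.values())

    flows = {"A": 1e6, "B": 2e6}
    print(greedy_relay_assignment(flows, ["r1", "r2"],
                                  rate=lambda f: 2e6,
                                  relay_rate=lambda f, r: 6e6 if r == "r1" else 3e6))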
39

Analyse probabiliste des systèmes temps réel / Probabilistic analysis of real-time systems

Maxim, Dorin 10 December 2013 (has links)
Critical real-time embedded systems integrate complex architectures that evolve constantly in order to provide new functionality required by the end users of the systems (automotive, avionics, railway, etc.). These new architectures have a direct impact on the variability of the timing behavior of the real-time system. This variability leads to significant over-provisioning if the design of the system is based only on worst-case reasoning. Probabilistic approaches propose solutions based on the probability of occurrence of the worst-case values in order to avoid over-provisioning while satisfying real-time constraints. The main objectives of this work are new analysis techniques for probabilistic real-time systems and ways of decreasing the complexity of these analyses, as well as optimal fixed-priority scheduling algorithms for systems whose execution times are described by random variables. The results provided in this work have been proved applicable to hard real-time systems, which are the main focus of our work. Our proposed analysis for systems with multiple probabilistic parameters has been shown to be considerably less pessimistic than other types of analyses. This type of analysis, combined with appropriate optimal scheduling algorithms for probabilistic real-time systems, helps system designers to better assess the feasibility of a system, especially of systems deemed infeasible by deterministic analyses and scheduling algorithms.
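A core operation in this style of analysis is convolving discrete execution-time distributions to obtain the distribution of accumulated demand, from which a deadline-miss probability can be read off. The sketch below is a deliberately crude illustration of that operation alone (independent jobs, no preemption, no release jitter), not the thesis's analysis; the distributions are made up.

    from collections import defaultdict

    def convolve(dist_a, dist_b):
        """Distributions given as {value: probability}; returns the
        distribution of the sum, assuming independence."""
        out = defaultdict(float)
        for a, pa in dist_a.items():
            for b, pb in dist_b.items():
                out[a + b] += pa * pb
        return dict(out)

    def miss_probability(exec_dists, deadline):
        """Probability that the total execution demand of the given jobs
        exceeds the deadline."""
        total = {0: 1.0}
        for d in exec_dists:
            total = convolve(total, d)
        return sum(p for v, p in total.items() if v > deadline)

    c1 = {2: 0.9, 5: 0.1}      # execution time: 2 with prob 0.9, 5 with prob 0.1
    c2 = {3: 0.8, 7: 0.2}
    print(miss_probability([c1, c2], deadline=8))   # 0.2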
40

Joint Congestion Control, Routing And Distributed Link Scheduling In Power Constrained Wireless Mesh Networks

Sahasrabudhe, Nachiket S 11 1900 (has links)
We study the problem of joint congestion control, routing and MAC-layer scheduling in multi-hop wireless mesh networks, where the nodes in the network are subject to energy expenditure rate constraints. Since the wireless medium does not allow all links to be active all the time, only a subset of the given links can be active simultaneously. We model the inter-link interference using the link contention graph. All nodes in the network are power-constrained, and we model this constraint using an energy expenditure rate matrix. We then formulate the problem as a network utility maximization (NUM) problem and note that it is a convex optimization problem with affine constraints. Applying duality theory, we decompose the problem into two sub-problems: a network-layer congestion control and routing problem, and a MAC-layer scheduling problem. Each source adjusts its rate based on the cost of the least-cost path to the destination, where the cost of the path includes not only the prices of the links on it but also the prices associated with the nodes on the path. The MAC-layer scheduling of the links is carried out based on the link prices: the optimal scheduler selects the set of non-interfering links for which the sum of link prices is maximum. We study the effects of the nodes' energy expenditure rate constraints on the maximum possible network utility. It turns out that the dominant of the two constraints, the link capacity constraint and the node energy expenditure rate constraint, affects the network utility most. We also note that the energy expenditure rate constraints do not affect the nature of the optimal link scheduling problem. Following this observation, we study the problem of distributed link scheduling. Optimal scheduling requires selecting an independent set of maximum aggregate price, but this problem is known to be NP-hard. We first show that as long as a scheduling policy selects a set of non-interfering links, it cannot deviate unboundedly from the optimal solution of the network utility maximization problem. We then evaluate a simple greedy scheduling algorithm. Analytical bounds on its performance are provided, and simulations indicate that the greedy heuristic performs well in practice.
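A hedged sketch of one natural greedy scheduler of the kind evaluated here (the abstract does not specify its exact form): consider links in decreasing price order and add a link whenever it does not interfere with anything already chosen. The optimal scheduler instead solves the NP-hard maximum-weight independent set problem on the contention graph. Link names, prices and the contention graph below are illustrative.

    def greedy_schedule(prices, contention):
        """prices: {link: price}; contention: {link: set of interfering links}.
        Returns a set of mutually non-interfering links, chosen greedily by price."""
        chosen = set()
        for link in sorted(prices, key=prices.get, reverse=True):
            if contention.get(link, set()).isdisjoint(chosen):
                chosen.add(link)
        return chosen

    prices = {"l1": 5.0, "l2": 4.0, "l3": 3.0, "l4": 1.0}
    contention = {"l1": {"l2", "l3"}, "l2": {"l1"}, "l3": {"l1", "l4"}, "l4": {"l3"}}
    print(greedy_schedule(prices, contention))
    # Prints {'l1', 'l4'} (total price 6), while the optimum {'l2', 'l3'} has
    # price 7 -- the bounded gap the analysis above refers to.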
