1 |
Predicting likelihood of requirement implementation within the planned iteration. Dehghan, Ali, 31 May 2017.
There has been significant interest in estimating the time and effort needed to fix defects among both software practitioners and researchers over the past two decades. However, most of the focus has been on predicting the time and effort to resolve bugs or other low-level tasks, with little regard for predicting the time needed to complete high-level requirements, a critical step in release planning. In this thesis, we describe a mixed-method empirical study on three large IBM projects in which we developed and evaluated a process for training a predictive model, built on a set of 29 features in nine categories, to predict whether a requirement will be completed within its planned iteration. We conducted feature engineering through iterative interviews with IBM software practitioners as well as analysis of the large development and project management repositories of these three projects. Using machine learning techniques, we were able to make predictions on requirement completion time at four different stages of the requirement lifetime. Given our industrial partner's interest in high precision over recall, we then adopted a cost-sensitive learning method and maximized the precision of predictions (ranging from 0.8 to 0.97) while maintaining acceptable recall. We also ranked the features by their relative importance to the optimized predictive model. We show that although satisfactory predictions can be made at early stages, even on the first day of requirement creation, prediction performance improves over time by taking advantage of requirements' progress data. Furthermore, the feature importance ranking shows that although the importance of features is highly dependent on project and prediction stage, certain features (e.g., requirement creator, time remaining to the end of the iteration, time since the last requirement summary change, and number of times the requirement has been replanned for a new iteration) emerge as important across most projects and stages, suggesting worthwhile future research directions for both researchers and practitioners. / Graduate
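The study's data and exact model are not reproduced here; the following is a minimal sketch, assuming synthetic data in place of the 29 IBM features, of the cost-sensitive, precision-first approach the abstract describes, using scikit-learn. The class weights and threshold grid are illustrative assumptions, not values from the thesis.

```python
# Sketch of cost-sensitive, precision-first classification; synthetic data
# stands in for the 29 requirement features (an assumption for illustration).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=29, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Cost-sensitive learning: penalize false positives more heavily, since
# precision is valued over recall (the 2.0 weight is an assumed cost ratio).
clf = RandomForestClassifier(n_estimators=200,
                             class_weight={0: 2.0, 1: 1.0},
                             random_state=0)
clf.fit(X_tr, y_tr)

# Sweep the decision threshold upward to trade recall for precision.
probs = clf.predict_proba(X_te)[:, 1]
for threshold in np.arange(0.5, 0.95, 0.05):
    pred = (probs >= threshold).astype(int)
    if pred.sum() == 0:
        break
    print(f"t={threshold:.2f}  precision={precision_score(y_te, pred):.2f}  "
          f"recall={recall_score(y_te, pred):.2f}")

# Rank features by relative importance, as done for the optimized model.
ranking = np.argsort(clf.feature_importances_)[::-1]
print("top features by importance:", ranking[:5])
```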
2 |
Quantifying the Reliability of Performance Time and User Perceptions Obtained from Passive Exoskeleton Evaluations. Noll, Alexander Baldrich Benoni, 16 August 2024.
Work-related musculoskeletal disorders (WMSDs) cost US industries billions annually and reduce quality of life for those afflicted. Passive exoskeletons (EXOs) have emerged as a potential intervention to reduce worker exposures to WMSD risk factors. As EXO adoption rises, EXO manufacturers are designing and producing new EXOs in accordance with growing demand. However, there are no standardized EXO evaluation protocols or EXO use recommendations, due in part to insufficient information on the reliability of EXO evaluation measures. The purpose of this thesis was to quantify the reliability of common EXO evaluation measures, using both traditional approaches and a more advanced statistical approach (i.e., Generalizability Theory), while also identifying potential effects of EXO type, work task, and individual differences. This work used data from a recently completed EXO evaluation study conducted in Virginia Tech's Occupational Ergonomics and Biomechanics Lab. Forty-two participants completed simulated occupational tasks, in two separate experimental sessions on different days, while using an arm-support EXO (ASE) and a back-support EXO (BSE). Several outcome measures reached excellent within-session reliability within four trials for many of the tasks considered. Between-session reliability levels were lower than within-session levels, with outcome measures reaching moderate-to-good reliability for most tasks. Interindividual differences accounted for the largest proportion of variance in measurement reliability, followed by the experimental session. For all tasks, outcome measures reached excellent dependability levels, with many achieving excellent levels within five total trials. Inconsistencies observed between between-session reliability levels and dependability levels suggest that additional training and EXO familiarity may affect the measurement reliability of outcome measures differently for some tasks, unique to each EXO type. These discrepancies underscore the need for additional research on this topic. Overall, the current findings indicate that many of the commonly used EXO evaluation measures are reliable and dependable within five trials and one experimental session, providing a potential foundation for standardized EXO assessment protocols. / Master of Science / Work-related musculoskeletal disorders (WMSDs) are a substantial economic burden and impair the quality of life of affected workers. Passive exoskeletons (EXOs), which use springs or elastic material to distribute the load placed on workers during manual labor, are a possible solution to reduce worker exposure to WMSD risk factors. EXO adoption is rising, but there are no standardized procedures to test the effectiveness of EXOs or standardized recommendations for EXO use. The purpose of this thesis was to determine the reliability of EXO evaluation measures commonly used in prior research, using traditional reliability calculation methods alongside a more advanced method (i.e., Generalizability Theory). Data from a recently completed study were used, collected from 42 participants in two separate experimental sessions on two different days. Participants completed tasks intended to simulate manual work, using either an arm-support exoskeleton (which supported their upper arms during relevant tasks) or a back-support exoskeleton (which supported their lower back during relevant tasks). Many of the tasks and outcome measures reached excellent reliability within four repetitions in a single day.
When examining the reliability of evaluations across days, we found that reliability levels were lower than those obtained within a single day. All tasks and outcome measures reached excellent dependability levels, with many requiring only five trials to do so. Reliability increased with the number of trials in an EXO evaluation experiment. Moreover, our results revealed that the EXO type being used and the biological sex of a participant both influence reliability, but individual participant differences had the greatest effect on measurement reliability. This research reveals possible experimental conditions required for reliable, efficient, and cost-effective EXO research, facilitating the development of a standardized EXO evaluation protocol.
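As a concrete illustration of the traditional reliability analysis, here is a minimal sketch of the two-way random-effects intraclass correlation, ICC(2,1), computed from a participants-by-trials score matrix. The data are simulated, and the thesis's Generalizability Theory decomposition involves additional variance components (e.g., session, EXO type) not shown here.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1), two-way random effects, single measurement (Shrout &
    Fleiss), for an (n participants x k trials) score matrix."""
    n, k = scores.shape
    grand = scores.mean()
    msr = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # participants
    msc = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # trials
    sse = ((scores - grand) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Simulated example: 42 participants x 4 trials, with a stable participant
# effect (interindividual differences) dominating trial-level noise.
rng = np.random.default_rng(0)
subject_effect = rng.normal(0, 2.0, size=(42, 1))
scores = 10 + subject_effect + rng.normal(0, 1.0, size=(42, 4))
print(f"ICC(2,1) = {icc_2_1(scores):.2f}")  # high, since subjects dominate
```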
3 |
Implementation and simulation of a max-min fair sharing algorithm and comparison of scheduling algorithms in Grids / Implementation and simulation of fair grid scheduling algorithms. Νταφούλη, Ελένη, 26 February 2009.
The subject of this thesis is the implementation and simulation of fair scheduling algorithms
applied to Computational Grids and their comparison with classic scheduling algorithms.
The basic idea of Grid technology and the services it provides is the unification of computational and storage resources so that users can view them as a whole. This makes it possible to serve sophisticated and demanding applications, in fields ranging from scientific research to commercial solutions.
A Grid network requires the sharing of computational and storage resources, as well as high-bandwidth connections between them. Scheduling algorithms are responsible for the efficient assignment of tasks to resources to achieve better quality of service. The efficiency of a scheduling algorithm depends on the utility function we seek to optimize, which in turn depends on techno-economic criteria. In trying to optimize the utility function, some tasks with specific characteristics are favoured over others. A fair scheduling algorithm, however, should treat all tasks in the same way regardless of their characteristics.
In this thesis we study the Fair Completion Time Ordering algorithm and propose a new fair scheduling algorithm called the Fair Completion Time Estimation Assignment algorithm. We implement and simulate these algorithms using the GridSim Toolkit and compare them with classic scheduling algorithms.
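The fair scheduling algorithms above build on max-min fair sharing of resource capacity (per the record's Greek title). The following is a minimal sketch of the underlying max-min fair allocation by progressive filling, not the Fair Completion Time algorithms themselves; the capacity and demand values are illustrative.

```python
def max_min_fair(capacity: float, demands: list[float]) -> list[float]:
    """Progressive filling: repeatedly split the remaining capacity equally
    among unsatisfied tasks, capping each task at its demand. At the fixed
    point, no task can get more without a smaller-share task getting less."""
    alloc = [0.0] * len(demands)
    unsatisfied = set(range(len(demands)))
    remaining = capacity
    while unsatisfied and remaining > 1e-9:
        share = remaining / len(unsatisfied)
        for i in list(unsatisfied):
            give = min(share, demands[i] - alloc[i])
            alloc[i] += give
            remaining -= give
            if demands[i] - alloc[i] <= 1e-9:
                unsatisfied.discard(i)
    return alloc

# Demands exceed capacity, so the larger tasks are capped at a fair share.
print(max_min_fair(10.0, [2.0, 2.6, 4.0, 5.0]))  # ~[2.0, 2.6, 2.7, 2.7]
```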
4 |
Design of Scheduling Algorithms Using Game Theoretic Ideas. Kulkarni, Janardhan Dattatreya, January 2015.
Scheduling a set of jobs over a collection of machines to optimize a certain quality-of-service measure is one of the most important research topics in both computer science theory and practice. In this thesis, we design algorithms that optimize the flow-time (or delay) of jobs for scheduling problems that arise in a wide range of applications. We consider the classical model of unrelated machine scheduling and resolve several long-standing open problems; we introduce new models that capture the novel algorithmic challenges in scheduling jobs in data centers or large clusters; we study the effect of selfish behavior in distributed and decentralized environments; and we design algorithms that strive to balance energy consumption and performance.
The technically interesting aspect of our work is the surprising connections we establish between approximation and online algorithms, economics, game theory, and queuing theory. It is the interplay of ideas from these different areas that lies at the heart of most of the algorithms presented in this thesis.
The main contributions of the thesis fall into the following categories.
1. Classical Unrelated Machine Scheduling: We give the first polylogarithmic approximation algorithms for minimizing the average flow-time and minimizing the maximum flow-time in the offline setting. In the online and non-clairvoyant setting, we design the first non-clairvoyant algorithm for minimizing the weighted flow-time in the resource augmentation model. Our work introduces an iterated rounding technique for offline flow-time optimization, and gives the first framework for analyzing non-clairvoyant algorithms on unrelated machines.
2. Polytope Scheduling Problem: To capture the multidimensional nature of the scheduling problems that arise in practice, we introduce the Polytope Scheduling Problem (PSP). The PSP generalizes almost all classical scheduling models, and also captures hitherto unstudied scheduling problems such as routing multi-commodity flows, routing multicast (video-on-demand) trees, and multi-dimensional resource allocation. We design several competitive algorithms for the PSP and its variants for the objectives of minimizing flow-time and completion time. Our work establishes many interesting connections between scheduling and market equilibrium concepts, fairness and non-clairvoyant scheduling, and the queuing-theoretic notion of stability and resource augmentation analysis.
3. Energy Efficient Scheduling: We give the first non-clairvoyant algorithm for minimizing total flow-time plus energy in the online and resource augmentation model for the most general setting of unrelated machines.
4. Selfish Scheduling: We study the effect of selfish behavior in scheduling and routing problems. We define a fairness index for scheduling policies called bounded stretch, and show that for the objective of minimizing the average (weighted) completion time, policies with small stretch lead to equilibrium outcomes with a small price of anarchy. Our work gives the first linear/convex programming duality-based framework to bound the price of anarchy for general equilibrium concepts such as coarse correlated equilibrium. / Dissertation
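To make the objective concrete: a job's flow-time is its completion time minus its release time. Below is a minimal sketch, on a single machine, of the SRPT policy (shortest remaining processing time), a standard flow-time baseline; it is illustrative only and not one of the unrelated-machine algorithms from the dissertation.

```python
import heapq

def total_flow_time_srpt(jobs):
    """Total flow-time under preemptive SRPT on one machine.
    jobs: list of (release_time, processing_time)."""
    jobs = sorted(jobs)               # by release time
    t, i, total = 0.0, 0, 0.0
    active = []                       # heap of (remaining_time, release_time)
    while i < len(jobs) or active:
        if not active:                # idle until the next arrival
            t = max(t, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(active, (jobs[i][1], jobs[i][0]))
            i += 1
        rem, rel = heapq.heappop(active)
        next_arrival = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(rem, next_arrival - t)   # run until done or preempted
        t += run
        if rem - run > 1e-12:
            heapq.heappush(active, (rem - run, rel))
        else:
            total += t - rel          # flow-time = completion - release
    return total

print(total_flow_time_srpt([(0, 5), (1, 1), (2, 2)]))  # -> 11.0
```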
5 |
Solution Approaches for Flexible Job Shop Scheduling Problems. Balci, Serife Aytug, 01 February 2013.
This thesis addresses flexible job shop scheduling problems arising in discrete parts manufacturing industries. We are motivated by the production environment of Roketsan Missiles Industries Incorporation, operating in the Turkish defense industry. Our objective is to minimize the total weighted completion time of the jobs in the system.
We formulate the problem as a mixed integer linear program and find that our model can find optimal solutions only for small-sized problem instances. For medium- and large-sized problem instances, we develop heuristic algorithms that produce high-quality approximate solutions in reasonable time.
Our proposed heuristic algorithm takes a hierarchical approach and benefits from optimization models and priority rules. We improve the heuristic via a best-move, non-blocking strategy and design several experiments to test its performance. Our computational results reveal that the proposed heuristic algorithm can find high-quality solutions to large-sized instances very quickly.
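The hierarchical heuristic itself is not spelled out in the abstract; as a hedged illustration of the kind of priority rule such heuristics build on, here is Smith's WSPT rule, which is optimal for total weighted completion time on a single machine. The job data are invented.

```python
def wspt(jobs):
    """Smith's rule: sequence jobs by processing_time / weight.
    jobs: list of (name, processing_time, weight).
    Returns the sequence and its total weighted completion time."""
    order = sorted(jobs, key=lambda j: j[1] / j[2])
    t = twct = 0.0
    for _, p, w in order:
        t += p          # completion time of this job
        twct += w * t
    return [name for name, _, _ in order], twct

print(wspt([("A", 4, 1), ("B", 2, 3), ("C", 5, 2)]))  # (['B', 'C', 'A'], 31.0)
```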
6 |
Approximation and Optimal Algorithms for Scheduling Jobs subject to Release Dates. Yu, Su-Jane, 30 July 2003.
In this dissertation, we study the single machine scheduling problem with the objective of minimizing the total completion time subject to release dates. The problem, denoted 1|rj|ΣCj, is known to be strongly NP-hard and is both theoretically and practically important. The focus of the research in this dissertation is to develop efficient algorithms for solving the 1|rj|ΣCj problem.
This thesis contains two parts.
In the first part, the theme concerns the approximation approach. We derive a necessary and sufficient condition for local optimality, which can be implemented as a priority rule and used to construct three heuristic algorithms with running times of O(n log n). By "local optimality", we mean the optimality of all candidates whenever a job is selected in a schedule, without considering the jobs preceding or following it. This is the broadest concept of a locally optimal rule considered so far. We also identify a dominant subset that is strictly contained in every known dominant subset, where a dominant subset is a set of solutions containing all optimal schedules.
In the second part, we develop optimal algorithms for the 1|rj|ΣCj problem. First, we present a lemma for estimating the total delay of the remaining jobs when the starting time of a schedule is postponed by some period. Then, partially using this lemma, we develop a new partition property and three dominance theorems that improve the branch-and-bound algorithms in our optimization approach. By exploiting the insights gained from our heuristics as a branching scheme, and by using our heuristics as an upper-bounding procedure, we propose three branch-and-bound algorithms. Our algorithms can optimally solve problems with up to 120 jobs, the best result known to date.
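The local-optimality condition and branch-and-bound details are beyond the abstract; as a sketch of the problem setting, here is the classic non-delay "shortest available job first" dispatch for 1|rj|ΣCj. It is only a heuristic (the problem is strongly NP-hard), and the job data are invented.

```python
import heapq

def spt_available(jobs):
    """Non-delay dispatch for 1|rj|sum(Cj): whenever the machine is free,
    start the shortest job already released. jobs: (release, processing)."""
    jobs = sorted(jobs)
    t, i, total = 0.0, 0, 0.0
    ready = []
    while i < len(jobs) or ready:
        if not ready:
            t = max(t, jobs[i][0])        # idle until the next release
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(ready, jobs[i][1])
            i += 1
        t += heapq.heappop(ready)         # run the shortest ready job
        total += t
    return total

print(spt_available([(0, 3), (1, 1), (4, 2)]))  # -> 13.0 (C = 3, 4, 6)
```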
7 |
Instantly Decodable Network Coding: From Centralized to Device-to-Device Communications. Douik, Ahmed S., 05 1900.
From its introduction to its quindecennial, network coding has built a strong reputation for enhancing the packet recovery process and achieving maximum information flow in both wired and wireless networks. Traditional studies focused on optimizing the throughput of the network by proposing complex schemes that achieve optimal delay. With the shift toward distributed computing at mobile devices, throughput and complexity have both become critical factors that affect the efficiency of a coding scheme. Instantly decodable network coding has imposed itself as a new paradigm in network coding that trades off these two aspects. This work presents a survey of the instantly decodable network coding schemes proposed in the literature. The various schemes are identified, categorized, and evaluated. Two categories can be distinguished: conventional centralized schemes, and distributed or cooperative schemes. For each scheme, the comparison is carried out in terms of reliability, performance, complexity, and packet selection methodology. Although performance is generally inversely proportional to computational complexity, numerous schemes that succeed from both the performance and complexity viewpoints are identified.
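The defining constraint is worth making concrete: an XOR-coded packet is instantly decodable for a receiver iff the receiver is missing exactly one of the combined packets and wants it. A minimal brute-force sketch of selecting such a combination follows; it is illustrative and not any particular scheme from the survey.

```python
from itertools import combinations

def instantly_decodable_for(combo, has, wants):
    """A receiver can instantly decode XOR(combo) iff it is missing exactly
    one packet in the combination and that packet is one it wants."""
    missing = [p for p in combo if p not in has]
    return len(missing) == 1 and missing[0] in wants

def best_idnc_combo(packets, receivers, max_size=3):
    """Exhaustive search over small XOR combinations. receivers is a list
    of (has, wants) frozensets; returns the combo serving the most of them."""
    best, best_served = (), 0
    for size in range(1, max_size + 1):
        for combo in combinations(packets, size):
            served = sum(instantly_decodable_for(combo, h, w)
                         for h, w in receivers)
            if served > best_served:
                best, best_served = combo, served
    return best, best_served

receivers = [(frozenset({1}), frozenset({2})),      # has p1, wants p2
             (frozenset({2}), frozenset({1})),      # has p2, wants p1
             (frozenset({1, 2}), frozenset({3}))]   # wants p3
print(best_idnc_combo([1, 2, 3], receivers))        # ((1, 2), 2)
```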
8 |
Evaluation of the effect of stabilization time in eventually consistent systems. Svan, Mac, January 2010.
The effect of using the eventual consistency model is evaluated against immediate consistency by increasing the stabilization time in both models and using the immediately consistent system as a baseline. The immediately consistent system performs better if information and decisions are replicated sufficiently fast throughout the system. As the stabilization time increases, the effectiveness of eventual consistency emerges, most obviously when time constraints make it difficult to propagate information and decisions. Using a simulation to extract data for evaluation, this research verifies that as the stabilization time between different parts of a system increases, the eventually consistent system will always outperform the immediately consistent one. This statement is valid in all situations where consistency models are useful. As a secondary finding, adding more decision layers to the eventual consistency model increases performance significantly, as swift and less well-calculated decisions can be thwarted and corrected by the second decision layer.
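A toy analytical sketch of the trade-off described above (the formulas are simplifying assumptions of my own, not the thesis's simulation): immediate consistency pays the stabilization time on every operation, while eventual consistency keeps throughput but risks stale reads.

```python
def compare(stabilization, service_time=1.0, horizon=1000.0):
    """Toy model. Immediate consistency: each operation blocks for
    service + stabilization. Eventual consistency: operations take only
    service_time, but reads arriving within the stabilization window of a
    write may observe stale state. All formulas are illustrative."""
    immediate_ops = horizon / (service_time + stabilization)
    eventual_ops = horizon / service_time
    stale_frac = stabilization / (service_time + stabilization)
    return immediate_ops, eventual_ops, stale_frac

for s in (0.1, 1.0, 10.0):
    imm, ev, stale = compare(s)
    print(f"stabilization={s:5.1f}: immediate={imm:7.1f} ops, "
          f"eventual={ev:7.1f} ops, stale reads ~{stale:.0%}")
```

As the stabilization parameter grows, the immediate system's throughput collapses while the eventual system's stays flat, mirroring the crossover the abstract reports.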
9 |
Optimal and Approximate Algorithms for the Multiple-Lots-per-Carrier Scheduling and Integrated Automated Material Handling and Lot Scheduling Problems in 300mm Wafer Fabs. Wang, Lixin, 22 October 2008.
The latest generation of semiconductor wafer fabs produces integrated circuits (ICs) on silicon wafers of 300mm diameter. In this dissertation, we address two new types of scheduling problems encountered in this generation of wafer fabs: the multiple-lots-per-carrier scheduling problem (MLCSP) and the integrated automated material handling and lot scheduling problem (IMHLSP). We consider several variations of the MLCSP depending upon the number of machines used, the prevailing processing technology of the machines, and the type of objective function involved. For the IMHLSP, we study two instances, one with an infinite number of vehicles and the other with a finite number of vehicles.
We begin by introducing a single-machine, multiple-lots-per-carrier, single-wafer-processing-technology scheduling problem with the objective of minimizing the total completion time (MLCSP1). The wafer carrier is a front-opening unified pod (FOUP) that can hold a limited number of wafers. The problem is easy to solve when all the lots are of the same size. For the case of different lot sizes, we first relax the carrier (FOUP) capacity and propose a dynamic programming-based algorithm, called RelaxFOUP-DP, which quickly determines an optimal solution that serves as a lower bound for the problem with limited FOUP capacity. Then, a branch-and-bound algorithm, designated MLCSP1-B&B, is developed that relies on the lower bound determined by the RelaxFOUP-DP algorithm. Numerical tests indicate that MLCSP1-B&B finds optimal solutions much faster than direct solution of the MLCSP1 model by the AMPL CPLEX 10.1 solver. In fact, for the medium- and low-density problems, the MLCSP1-B&B algorithm finds optimal solutions at the starting node (node zero) itself.
Next, we consider a single-machine, multiple-lots-per-carrier, single-carrier-processing-technology scheduling problem with the objective of minimizing the total completion time (MLCSP2). As with MLCSP1, the optimal solution can be obtained easily when all the lots are of the same size. For the case of different lot sizes, we determine a lower bound and an upper bound for the problem and prove their worst-case ratios.
Subsequently, we analyze a two-machine flow shop, multiple-lots-per-carrier, single-wafer-processing-technology scheduling problem with the objective of minimizing the makespan (MLCSP3). We first consider a relaxed version of this problem and transform the original problem into a two-machine flow shop lot streaming problem. We then propose algorithms to find the optimal capacitated sublot sizes for lots with (1) the same ratio of processing times and (2) different ratios of processing times on the machines. Since the optimal solutions obtained from the lot streaming problem may not be feasible for the MLCSP3, we develop heuristic methods based on procedures for the bin packing problem: four heuristics for lots with the same ratio of processing times, and another four for lots with different ratios of processing times on the machines. Results of our numerical experimentation show that these heuristics generate almost optimal solutions in a matter of seconds.
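The abstract does not give the heuristics in detail; as a sketch of the bin-packing building block they rest on, here is first-fit decreasing applied to grouping wafer lots into capacity-limited carriers. The lot sizes and FOUP capacity are invented.

```python
def first_fit_decreasing(lot_sizes, foup_capacity):
    """Pack lots into few carriers: place each lot (largest first)
    into the first FOUP that still has room, opening a new one if none does."""
    foups = []                           # each entry: [used_capacity, [lots]]
    for size in sorted(lot_sizes, reverse=True):
        for foup in foups:
            if foup[0] + size <= foup_capacity:
                foup[0] += size
                foup[1].append(size)
                break
        else:                            # no existing FOUP fits: open a new one
            foups.append([size, [size]])
    return [f[1] for f in foups]

print(first_fit_decreasing([8, 7, 6, 5, 5, 4, 3, 2], foup_capacity=13))
# -> [[8, 5], [7, 6], [5, 4, 3], [2]]
```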
Next, we address the integrated automated material handling and lot scheduling problem (IMHLSP) in the presence of an infinite number of vehicles. We first propose a new strong hybrid model, which has the advantages of both the segregate and direct models. In the segregate model, a job is always transferred to the stocker after its completion at a station, while in the direct model it is transferred to the next machine if that machine can accommodate it; otherwise, the job stays at the current station. The decisions involved in the strong hybrid model are the sequence in which to process the lots and, for each lot, a selection between the segregate and direct models, whichever optimizes system performance. We show that, under certain conditions on the processing times of the lots, the problem can be approximated by the cases of either infinite buffer or zero buffer at the machines. Hence, we consider all three cases of the IMHLSP in this chapter: infinite buffer, zero buffer, and limited buffer sizes. For the strong hybrid model with limited buffer size, we propose a branch-and-bound algorithm, which uses a modified Johnson's algorithm to determine a lower bound. Two upper bounds for this algorithm are also determined. Results of our numerical investigation indicate that our algorithm finds optimal solutions faster than direct solution of the IMHLSP model by the AMPL CPLEX 10.1 solver. Experimental results also indicate that, for the same problem size, the times required to solve the IMHLSP model with interbay movements are larger than those for intrabay movements.
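For reference, here is a sketch of the classical Johnson's rule that the lower-bounding procedure modifies (the modification itself is not described in the abstract, so only the textbook rule is shown); the job data are invented.

```python
def johnsons_rule(jobs):
    """Johnson's rule: minimizes makespan in a two-machine flow shop.
    jobs: list of (name, p1, p2). Jobs with p1 <= p2 are sequenced first in
    increasing p1; the remaining jobs go last, in decreasing p2."""
    front = sorted((j for j in jobs if j[1] <= j[2]), key=lambda j: j[1])
    back = sorted((j for j in jobs if j[1] > j[2]), key=lambda j: -j[2])
    order = front + back
    t1 = t2 = 0.0
    for _, p1, p2 in order:
        t1 += p1                   # machine 1 finishes this job at t1
        t2 = max(t2, t1) + p2      # machine 2 starts once both are ready
    return [name for name, _, _ in order], t2

print(johnsons_rule([("a", 3, 6), ("b", 5, 2), ("c", 1, 2)]))
# -> (['c', 'a', 'b'], 12.0)
```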
Finally, we investigate the IMHLSP in the presence of a limited number of vehicles. Due to the complex nature of the underlying problem, we analyze small-size versions of it and develop algorithms for their solution. For some of these problems, we can find optimal solutions in polynomial time. Based on our analysis of small-size systems, we also show why some real-time dispatching (RTD) rules used in real fabs can be expected to perform well while others cannot. In addition, we propose some new and promising RTD rules based on our study. / Ph. D.
10 |
Deterministic Scheduling of Parallel Discrete and Batch Processors. Venkataramana, M, 07 1900.
Scheduling concerns the allocation of limited resources to tasks over time. In manufacturing systems, scheduling amounts to assigning jobs to the available processors over a period of time. Our research focuses on scheduling in systems of parallel processors, which is challenging from both theoretical and practical perspectives. Systems of parallel processors are a common occurrence in different types of modern manufacturing systems such as job shops, batch shops, and mass production.
A variety of important and challenging problems with realistic settings in a system of parallel processors are considered. We consider two types of processors: discrete and batch. A discrete processor produces one job at a time. A batch processor can process several jobs simultaneously as a batch, as is common in semiconductor manufacturing, heat treatment operations, and chemical processing industries. Our aim in this thesis is to develop efficient solution methodologies (heuristics/metaheuristics) for three different problems. The first two problems consider the objective of minimizing total weighted tardiness for discrete and batch processors, respectively, where customer delivery time performance is critical. The third problem deals with the objective of minimizing the total weighted completion time on batch processors to reduce work-in-process inventory.
Specifically, the first problem deals with the scheduling of parallel identical discrete processors to minimize total weighted tardiness. We develop a metaheuristic based on Ant Colony Optimization (ACO) to solve the problem and compare it with the best available heuristics in the literature, such as the apparent tardiness cost and modified due date rules. An extensive experimentation is conducted to evaluate the performance of the ACO approach on different problem sizes with varied tardiness factors. Our experimentation shows that the proposed ant colony optimization algorithm yields promising results compared to the best of the available heuristics.
The second problem concerns the scheduling of jobs on parallel identical batch processors to minimize the total weighted tardiness. It is assumed that the jobs are incompatible with respect to job families, meaning that jobs from different families cannot be processed together. We decompose the problem into two stages, batch formation and batch scheduling, as in the literature. ACO-based heuristics are developed in which ACO is used to solve the batch scheduling problem. Our computational experimentation shows that the five proposed ACO-based heuristics perform better than the best available traditional dispatching rule, the ATC-BATC rule.
The third scheduling problem is to minimize the total weighted completion time in a system of parallel identical batch processors. In real-world manufacturing systems, jobs to be scheduled come in lots with different volumes (i.e., numbers of jobs) and priorities. Realistic settings of lots and high batch capacity are considered in this problem, which is formulated as a mixed integer non-linear program. We develop a solution framework based on a decomposition approach, propose two heuristics based on it, and evaluate their performance in the cases of two and three batch processors by comparing against solutions from the LINGO solver.
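As a concrete reference point for the comparisons above, here is a sketch of the ATC (apparent tardiness cost) dispatching rule on identical parallel machines, one of the benchmark rules the first problem compares against; the look-ahead parameter k and job data are illustrative assumptions.

```python
import math

def atc_parallel(jobs, n_machines, k=2.0):
    """Apparent Tardiness Cost dispatch on identical parallel machines.
    jobs: list of (name, processing, weight, due_date). Whenever a machine
    frees up at time t, start the unscheduled job with the highest ATC index
    (w/p) * exp(-max(d - p - t, 0) / (k * p_bar))."""
    remaining = list(jobs)
    machine_free = [0.0] * n_machines
    schedule, twt = [], 0.0
    while remaining:
        m = min(range(n_machines), key=lambda i: machine_free[i])
        t = machine_free[m]
        p_bar = sum(j[1] for j in remaining) / len(remaining)
        def atc_index(job):
            _, p, w, d = job
            slack = max(d - p - t, 0.0)
            return (w / p) * math.exp(-slack / (k * p_bar))
        job = max(remaining, key=atc_index)
        remaining.remove(job)
        name, p, w, d = job
        finish = t + p
        machine_free[m] = finish
        schedule.append((name, m, t, finish))
        twt += w * max(finish - d, 0.0)   # weighted tardiness of this job
    return schedule, twt

jobs = [("J1", 4, 2, 10), ("J2", 3, 1, 6), ("J3", 6, 3, 9), ("J4", 2, 2, 5)]
sched, twt = atc_parallel(jobs, n_machines=2)
print(sched, "total weighted tardiness =", twt)
```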