1

Stochastic Resource Constrained Project Scheduling With Stochastic Task Insertion Problems

Archer, Sandra 01 January 2008 (has links)
The area of focus for this research is the Stochastic Resource Constrained Project Scheduling Problem (SRCPSP) with Stochastic Task Insertion (STI). The STI problem is a specific form of the SRCPSP, which may be considered a cross between two types of problems in the general form: the Stochastic Project Scheduling Problem and the Resource Constrained Project Scheduling Problem. The stochastic nature of this problem lies in the occurrence or non-occurrence of tasks with deterministic durations. Selim (2002) and Grey (2007) laid the groundwork for research on this problem. Selim (2002) developed a set of robustness metrics and used these to evaluate two initial baseline (predictive) scheduling techniques, optimistic (0% buffer) and pessimistic (100% buffer), where none or all of the stochastic tasks were scheduled, respectively. Grey (2007) expanded the research by developing a new partial buffering strategy for the initial baseline predictive schedule for this problem and found the partial buffering strategy to be superior to Selim's extreme buffering approach. The current research continues this work by focusing on resource aspects of the problem, new buffering approaches, and a new rescheduling method.

If resource usage is important to project managers, then a set of metrics describing changes to the resource flow between the initial baseline predictive schedule and the final as-run schedule would be important to measure. Two new sets of resource metrics were constructed regarding resource utilization and resource flow. Using these new metrics, as well as the Selim/Grey metrics, a new buffering approach was developed that used resource information to size the buffers. The resource-sized buffers did not show significant improvement over Grey's 50% buffer used as a benchmark, but the new resource metrics were used to validate that the 50% buffering strategy is superior to Selim's 0% and 100% buffering.

Recognizing that partial buffers appear to be the most promising initial baseline development approach for STI problems, and understanding that experienced project managers may be able to predict stochastic probabilities based on prior projects, the next phase of the research developed a new set of buffering strategies in which the inserted buffers are proportional to the probability of occurrence. The results of this proportional buffering strategy were very positive, with the majority of the metrics (both robustness and resource), except for the stability metrics, improved by using the proportional buffer.

Finally, it was recognized that all research thus far on the SRCPSP with STI had focused solely on the development of predictive schedules. The final phase of this research therefore developed a new reactive strategy that tested three different rescheduling points during schedule eventuation, at which a complete rescheduling of the latter portion of the schedule would occur. The results of this new reactive technique indicate that rescheduling improves schedule performance in only a few metrics and under very specific network characteristics (those networks with the least restrictive parameters). This research was conducted with extensive use of Base SAS v9.2 combined with SAS/OR procedures to solve project networks, solve resource flow problems, and implement reactive scheduling heuristics.
Additionally, Base SAS code was paired with Visual Basic for Applications in Excel 2003 to implement an automated Gantt chart generator that provided visual inspection for validation of the repair heuristics. The results of this research, combined with the results of Selim and Grey, provide strong guidance for project managers on how to develop baseline predictive schedules and how to reschedule the project as stochastic tasks (e.g., unplanned work) do or do not occur. Specifically, the results and recommendations are provided in a summary tabular format that describes the recommended initial baseline development approach when a project manager has a good idea of the level and location of the stochasticity of the network, highlights two cases where rescheduling during schedule eventuation may be beneficial, and shows when buffering proportional to the probability of occurrence is recommended, not recommended, or inconclusive.
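As a rough illustration of the buffering strategies compared in this abstract, the sketch below sizes each stochastic task's buffer in a simple serial baseline schedule. The thesis implemented its experiments in Base SAS v9.2 with SAS/OR; this Python version, its task data, and all function names are hypothetical stand-ins, not the author's actual procedures.

```python
"""Minimal sketch of baseline buffering strategies for the SRCPSP with
Stochastic Task Insertion. Hypothetical Python illustration only; the
original research used Base SAS v9.2 with SAS/OR."""

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    duration: int          # deterministic duration
    stochastic: bool = False
    p_occur: float = 0.0   # probability the stochastic task occurs

def buffer_for(task: Task, strategy: str) -> int:
    """Time reserved in the baseline schedule for a task under each strategy."""
    if not task.stochastic:
        return task.duration        # regular tasks are always scheduled in full
    if strategy == "optimistic":    # Selim: 0% buffer, stochastic tasks omitted
        return 0
    if strategy == "pessimistic":   # Selim: 100% buffer, all stochastic tasks scheduled
        return task.duration
    if strategy == "partial-50":    # Grey: 50% buffer
        return round(0.5 * task.duration)
    if strategy == "proportional":  # buffer sized by probability of occurrence
        return round(task.p_occur * task.duration)
    raise ValueError(f"unknown strategy: {strategy}")

def baseline_makespan(tasks: list[Task], strategy: str) -> int:
    """Makespan of a purely serial baseline (no resource constraints),
    just to compare how much time each strategy reserves."""
    return sum(buffer_for(t, strategy) for t in tasks)

tasks = [
    Task("A", 4),
    Task("B", 6, stochastic=True, p_occur=0.3),
    Task("C", 5),
    Task("D", 8, stochastic=True, p_occur=0.8),
]
for s in ("optimistic", "partial-50", "proportional", "pessimistic"):
    print(f"{s:>12}: baseline makespan = {baseline_makespan(tasks, s)}")
```

Under the proportional rule, for example, a stochastic task with an 80% chance of occurring reserves roughly 80% of its duration in the baseline, while a 30%-likely task reserves only about 30%, which is the intuition behind sizing buffers by probability of occurrence.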
2

Data Perspectives of Workflow Schema Evolution: Cases of Task Deletion and Insertion

Arunagiri, Aravindhan January 2013 (has links) (PDF)
Dynamic changes in the business environment require business processes to stay up-to-date, and the Workflow Management Systems supporting these processes need to adapt to such changes rapidly. Workflow Management Systems, however, lack the ability to dynamically propagate process changes to their process model schemas (workflow templates). The literature on workflow schema evolution emphasizes the impact of changes on control flow, with very little attention to other aspects of a workflow schema. This thesis studies the data aspect (data flow and data model) of a workflow schema during its evolution.

Workflow schema changes can lead to inconsistencies between the underlying database model and the workflow. A rather straightforward approach to the problem would be to abandon the existing database model and start afresh; however, this introduces data persistence issues, and there could be significant system downtime involved in migrating data from the old database model to the current one. In this research we develop an approach to address this problem.

Business changes demand various types of control flow changes to the business process model (workflow schema), including task insertion, deletion, swapping, movement, replacement, extraction, in-lining, parallelizing, etc. Many control flow changes to the workflow can be made using combinations of simple task insertions and deletions, while some, such as embedding a task in a loop or conditional branch and parallelizing tasks, also require the addition or removal of control dependencies between tasks. Since many of the control flow change patterns involve task insertion and deletion at their core, in this thesis we study their impact on the underlying data model and propose algorithms to dynamically handle the changes in the underlying relational database schema.

First, we identify the basic change patterns that can be implemented using atomic task insertions and deletions. Then we characterize these basic patterns in terms of the data flow anomalies (missing, redundant, and conflicting data) they can generate. Data Schema Compliance (DSC) criteria are developed to identify the data changes (i) that make the underlying database schema inconsistent with the modified workflow and (ii) that generate the aforementioned data anomalies. The DSC criteria characterize the change patterns in terms of their ability to work with the current relational data model, and they specify the properties required of the modified workflow for it to be consistent with the underlying database model. The data of any workflow instance conforming to the DSC criteria can be directly accommodated in the database model. The data anomalies (of task insertion and deletion) identified using the DSC criteria are handled dynamically using the respective Data adaptation algorithms, which use the functional dependency constraints in the relational database model to adapt these anomalies; data changes handled in this way, conforming to DSC, can be directly accommodated in the underlying database schema. Hence, with this approach the workflow can be modified (using task insertion and deletion) and its data changes can be implemented on-the-fly using the Data adaptation algorithms. In this research, the same old data model is evolved, rather than abandoned, even after the modification of the workflow schema, which maintains data persistence in the existing database schema.
Detailed implementation procedures to deploy the Data adaptation algorithms are presented with illustrative examples.
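To make the anomaly categories concrete, here is a minimal, hypothetical sketch of the kind of data-flow check that compliance criteria like DSC formalize: each task declares the data items it reads and writes, and a scan flags a reader left without a producer (missing data), a producer whose output is never read (redundant data), or an item with competing producers (potentially conflicting data). The thesis's actual criteria and adaptation algorithms operate on relational schemas with functional dependency constraints; none of the names or logic below are drawn from its implementation.

```python
"""Hypothetical data-flow anomaly scan for a sequential workflow schema,
illustrating the missing/redundant/conflicting categories discussed above."""

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    reads: set[str] = field(default_factory=set)
    writes: set[str] = field(default_factory=set)

def anomalies(workflow: list[Task]) -> list[str]:
    """Scan the task sequence and report data-flow anomalies."""
    problems: list[str] = []
    written: dict[str, str] = {}        # data item -> last producing task
    for task in workflow:
        for item in task.reads:
            if item not in written:     # reader with no upstream producer
                problems.append(f"missing data: {task.name} reads '{item}' with no producer")
        for item in task.writes:
            if item in written:         # two producers for the same item
                problems.append(f"conflicting data: '{item}' written by both "
                                f"{written[item]} and {task.name}")
            written[item] = task.name
    # items produced but never consumed become redundant, e.g. after a deletion
    read_items = set().union(*(t.reads for t in workflow)) if workflow else set()
    for item, producer in written.items():
        if item not in read_items:
            problems.append(f"redundant data: '{item}' from {producer} is never read")
    return problems

# Deleting a task removes the producer of its outputs downstream:
schema = [Task("approve", reads={"order"}, writes={"approval"}),
          Task("ship", reads={"order", "approval"}, writes={"tracking_id"})]
del schema[0]                            # simulate an atomic task deletion
for msg in anomalies(schema):
    print(msg)
```

In this toy run, deleting the "approve" task leaves "ship" reading two items with no producer (missing data) and producing an item no task consumes (redundant data), which is the sort of inconsistency the Data adaptation algorithms are designed to resolve against the relational schema.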
