1.

Improving Throughput and Predictability of High-volume Business Processes Through Embedded Modeling

DeKeyrel, Joseph S. 01 January 2011 (has links)
Being faster is good. Being predictable is better. A faithful model of a system, loaded to reflect the system's current state, can be used to look into the future and predict performance. Building faithful models of processes with high degrees of uncertainty can be very challenging, especially where this uncertainty exists in processing times, queuing behavior, and re-work rates. Within the context of an electronic, multi-tiered workflow management system (WFMS), the author builds such a model to endogenously quote due dates. A WFMS that manages business objects can be recast as a flexible flow shop in which the stations that a job (representing the business object) passes through are known, as are the jobs in each station's queue at any point. All of the other parameters associated with the flow shop, including job processing times per station and station queuing behavior, are uncertain, though there is a significant body of past performance data that can be brought to bear. The objective, in this environment, is to meet the delivery date promised when the job is accepted. To attack the problem, the author develops a novel heuristic algorithm for decomposing the WFMS's event logs to expose non-standard queuing behavior, develops a new simulation component to implement that behavior, and assembles a prototypical system to automate the required historical analysis and allow on-demand due date quoting through embedded discrete event simulation modeling. The developed software components are flexible enough to allow both analysis of past performance in conjunction with the WFMS's event logs and on-demand analysis of new jobs entering the system. Using the proportion of jobs completed within the predicted interval as the measure of effectiveness, the author validates the performance of the system over six months of historical data and during live operations, with both samples achieving the targeted 90% service level.
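The core idea of quoting a due date from an embedded simulation can be sketched as a small Monte Carlo model of a job's remaining route: resample historical processing times per station, account for the jobs already queued ahead, and quote the completion time that the target service level of runs achieves. The sketch below is illustrative only, not the dissertation's model; the station names, single-server assumption, 90% service level, and empirical resampling are all assumptions.

```python
# Minimal sketch (assumptions noted above), not the author's implementation.
import random

def quote_due_date(route, history, queue_lengths, service_level=0.90, runs=1000):
    """Quote an elapsed time (from now) that `service_level` of simulated
    completions meet, for a job whose remaining `route` is a list of stations.
    history[station]   -> list of observed processing times at that station
    queue_lengths[station] -> jobs currently ahead of this job at that station"""
    completions = []
    for _ in range(runs):
        elapsed = 0.0
        for station in route:
            # Jobs already queued ahead must be processed before this job.
            for _ in range(queue_lengths.get(station, 0) + 1):
                elapsed += random.choice(history[station])
        completions.append(elapsed)
    completions.sort()
    # Return the completion time achieved in `service_level` of the runs.
    return completions[int(service_level * runs) - 1]

# Hypothetical example: two remaining stations with historical samples (hours).
history = {"review": [2.0, 3.5, 2.5, 6.0], "approve": [1.0, 1.5, 0.5, 4.0]}
print(quote_due_date(["review", "approve"], history, {"review": 2, "approve": 0}))
```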
2.

Data Transmission Scheduling For Distributed Simulation Using Packet Alloying

Vargas-Morales, Juan 01 January 2004 (has links)
Communication bandwidth and latency reduction techniques are developed for Distributed Interactive Simulation (DIS) protocols. Using logs from vignettes simulated by the OneSAF Testbed Baseline (OTB), a discrete event simulator based on the OMNeT++ modeling environment is developed to analyze Protocol Data Unit (PDU) traffic over a wireless flying Local Area Network (LAN). Alternative PDU bundling and compression techniques are studied under various metrics, including slack time, travel time, queue length, and collision rate. Based on these results, Packet Alloying, a technique for the optimized bundling of packets, is proposed and evaluated. Packet Alloying becomes more active when it is needed most: during negative spikes of transmission slack time. It produces aggregations that preserve the internal PDU format, allowing the resulting packets to be subject to further bundling and/or compression by conventional techniques. To optimize the selection of bundle delimitation, three online predictive strategies were developed: Neural-Network-based, Always-Wait, and Always-Send. These were compared with three offline strategies, defined as Type, Type-Length, and Type-Length-Size. Applying Always-Wait to the studied vignette with the wireless links set to 64 Kbps reduced the magnitude of the worst negative slack-time spike from -75 to -9 seconds, an 88% reduction. Similarly, at 64 Kbps, Always-Wait reduced the average satellite queue length from 2,963 to 327 messages, an 89% reduction. From the analysis of negative slack-time spikes, it was determined which PDU types are of highest priority, and the router and satellite queues in the case study were modified accordingly using a priority-based transmission scheduler. The analysis of total travel times by PDU type quantifies the benefit obtained. The contributions of this dissertation include the formalization of a selective PDU bundling scheme, the proposal and study of different predictive algorithms for the next PDU, and priority-based optimization using Head-of-Line (HoL) service. These results demonstrate the validity of packet optimizations for distributed simulation environments and other possible applications such as TCP/IP transmissions.
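The bundling idea can be illustrated with a small sketch in the spirit of Packet Alloying: PDUs of the same type are accumulated into a bundle that is flushed when it reaches a size limit or when the oldest PDU's transmission slack is about to run out, an Always-Wait-like policy. The PDU fields, the 1400-byte limit, and the slack margin below are illustrative assumptions, not the values or code used in the study.

```python
# Minimal sketch of selective same-type PDU bundling (assumptions noted above).
from dataclasses import dataclass, field
from typing import List

@dataclass
class PDU:
    pdu_type: str
    payload: bytes
    deadline: float          # latest time this PDU should leave the node

@dataclass
class Bundler:
    max_bytes: int = 1400            # hypothetical bundle size limit
    slack_margin: float = 0.05       # flush when this close to a deadline
    pending: List[PDU] = field(default_factory=list)

    def offer(self, pdu: PDU, now: float) -> List[PDU]:
        """Add a PDU; return a flushed bundle of same-type PDUs, or [] if waiting."""
        if self.pending and self.pending[0].pdu_type != pdu.pdu_type:
            # A type change closes the current bundle; start a new one.
            flushed, self.pending = self.pending, [pdu]
            return flushed
        self.pending.append(pdu)
        size = sum(len(p.payload) for p in self.pending)
        oldest_deadline = min(p.deadline for p in self.pending)
        if size >= self.max_bytes or oldest_deadline - now <= self.slack_margin:
            flushed, self.pending = self.pending, []
            return flushed
        return []
```

Because each flushed bundle keeps the internal PDU format intact, it could still be handed to a conventional compressor or a further bundling stage, which is the property the abstract attributes to Packet Alloying.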
