61

Modeling Training Effects on Task Performance Using a Human Performance Taxonomy

Meador, Douglas P. 31 December 2008 (has links)
No description available.
62

Performance modeling of congestion control and resource allocation under heterogeneous network traffic : modeling and analysis of active queue management mechanism in the presence of Poisson and bursty traffic arrival processes

Wang, Lan January 2010 (has links)
While playing an ever-increasing role in integrating other communication networks and supporting increasingly diverse applications, the current Internet suffers from serious overuse and congestion bottlenecks. Efficient congestion control is fundamental to ensuring Internet reliability, satisfying specified Quality-of-Service (QoS) constraints and achieving desirable performance across varying application scenarios. Active Queue Management (AQM) is a promising scheme to support end-to-end Transmission Control Protocol (TCP) congestion control because it enables the sender to react appropriately to the real network situation. Analytical performance models are powerful tools for investigating the optimal setting of AQM parameters. Among the existing research efforts in this field, however, there is a lack of analytical models that can serve as a cost-effective performance evaluation tool for AQM in the presence of heterogeneous traffic generated by various network applications. This thesis aims to provide a generic and extensible analytical framework for analyzing AQM congestion control under various traffic types, such as non-bursty Poisson and bursty Markov-Modulated Poisson Process (MMPP) traffic. Specifically, Markov analytical models are developed for an AQM congestion control scheme coupled with queue thresholds, and are then adopted to derive expressions for important QoS metrics. The main contributions of this thesis are as follows:
• A study of queueing systems for modeling the AQM scheme subject to single-class and multiple-class Poisson traffic, respectively, analyzing the effects of varying threshold, mean traffic arrival rate, service rate and buffer capacity on the key performance metrics.
• An analytical model for the AQM scheme with single-class bursty traffic, investigating how burstiness and correlation affect the performance metrics. The analytical results reveal that high burstiness and correlation can significantly degrade AQM performance, increasing queueing delay and packet loss probability while reducing throughput and utilization.
• An analytical model for a single-server queueing system with AQM in the presence of heterogeneous traffic, evaluating the aggregate and marginal performance subject to different threshold values, burstiness degrees and correlation.
• A stochastic analysis of a single-server system with a single queue and with multiple queues, respectively, for the AQM scheme in the presence of multiple priority traffic classes scheduled under the Priority Resume (PR) policy.
• A performance comparison of AQM under the PR and First-In First-Out (FIFO) schemes, and a comparison of AQM performance with a single PR priority queue and with multiple priority queues, respectively.
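The threshold-coupled queue described in this abstract can be illustrated with a minimal birth-death sketch — not the thesis's model, but a plain M/M/1/K queue in which arrivals at or above a configurable threshold are dropped with a fixed probability. All function names and parameters below are invented for illustration:

```python
# A minimal sketch (not the thesis's model): an M/M/1/K birth-death chain
# where arrivals at or above a queue threshold are dropped with
# probability p_drop, mimicking a threshold-based AQM scheme.

def aqm_stationary(lam, mu, K, threshold, p_drop):
    """Stationary queue-length distribution for a birth-death chain.

    Effective arrival rate is lam below `threshold` and lam*(1-p_drop)
    at or above it; service rate is mu. States are 0..K.
    """
    # Unnormalised stationary probabilities via detailed balance:
    # pi[n+1] = pi[n] * lambda_n / mu
    pi = [1.0]
    for n in range(K):
        lam_n = lam if n < threshold else lam * (1.0 - p_drop)
        pi.append(pi[-1] * lam_n / mu)
    total = sum(pi)
    return [x / total for x in pi]

def metrics(lam, mu, K, threshold, p_drop):
    """Mean queue length, loss probability and throughput."""
    pi = aqm_stationary(lam, mu, K, threshold, p_drop)
    mean_q = sum(n * p for n, p in enumerate(pi))
    # A packet is lost if the buffer is full, or if it arrives at or
    # above the threshold and is probabilistically dropped by the AQM.
    loss = pi[K] + p_drop * sum(pi[n] for n in range(threshold, K))
    throughput = lam * (1.0 - loss)
    return mean_q, loss, throughput
```

With, say, a 10-packet buffer, a threshold of 5 and a 30% early-drop probability, raising the drop probability shortens the mean queue (lower delay) at the cost of extra loss — the qualitative trade-off that the thesis quantifies for Poisson and MMPP arrivals.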
63

Modeling and Performance Analysis of Distributed Systems with Collaboration Behaviour Diagrams

Israr, Toqeer 23 April 2014 (has links)
The use of distributed systems, involving multiple components, has become a common industry practice. However, modeling the behaviour of such systems is a challenge, especially when the behavior consists of several collaborations of different parties, each involving possibly several starting (input) and ending (output) events of the involved components. Furthermore, the global behavior should be described as a composition of several sub-behaviours, in the following called collaborations, and each collaboration may be further decomposed into several sub-collaborations. We assume that the performance of the elementary sub-collaborations is known, and that the performance of the global behavior should be determined from the performance of the contained elementary collaborations and the form of the composition. A collaboration, in this thesis, is characterized by a partial order of input and output events, and the performance of the collaboration is defined by the minimum delays required for a given output event with respect to an input event. This is a generalization of the semantics of UML Activities, where all input events are assumed to occur at the same time, and all output events occur at the same time. We give a semantic definition of the dynamic behavior of composed collaborations using the composition operators for control flow from UML Activity diagrams, in terms of partial order relationships among the involved input and output events. Based on these semantics, we provide formulas for calculating the performance of composed collaborations in terms of the performance of the sub-collaborations, where each delay is characterized by (a) a fixed value, (b) a range of values, and (c) a distribution (in the case of stochastic behaviours). 
We also propose approximations for the case of stochastic behaviour with Normal distributions, and discuss the expected errors that may be introduced by ignoring shared resources or possible dependencies in the case of stochastic behaviours. A tool has been developed for evaluating the performance of complex collaborations, and examples and case studies are discussed to illustrate the applicability of the performance analysis and of the visual notation we introduced for representing the partial-order relationships of the input and output events.
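For the fixed-delay case, the composition semantics described in this abstract amount to a longest-path computation over the partial order of events: an event occurs once all of its predecessors have occurred, so its delay is the maximum over incoming paths. A hedged sketch (event names and delays are invented; the thesis also covers interval- and distribution-valued delays):

```python
# A sketch of the fixed-delay case: a collaboration as a DAG of events,
# where each event occurs once all of its predecessor events have
# occurred. The delay to an output is then the longest weighted path.

from functools import lru_cache

def completion_times(edges, inputs):
    """edges: {event: [(predecessor, delay), ...]};
    inputs: {input_event: start_time}.
    Returns the earliest occurrence time of every event."""
    @lru_cache(maxsize=None)
    def t(event):
        if event in inputs:
            return inputs[event]
        # An event fires when its slowest predecessor path completes.
        return max(t(pred) + d for pred, d in edges[event])
    return {e: t(e) for e in list(edges) + list(inputs)}
```

For instance, if output `o` waits on an internal event `a` (which follows input `i1` after 2 time units) with delay 3, and on input `i2` with delay 4, then `o` occurs at max(2+3, 4) = 5 — the partial-order generalization of UML Activity semantics that the thesis formalizes.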
64

Integrated Parallel Simulations and Visualization for Large-Scale Weather Applications

Malakar, Preeti January 2013 (has links) (PDF)
The emergence of the exascale era necessitates the development of new techniques to efficiently perform high-performance scientific simulations, online data analysis and on-the-fly visualization. Critical applications like cyclone tracking and earthquake modeling require high-fidelity, high-performance simulations involving large-scale computations, and generate huge amounts of data. Faster simulations and simultaneous online data analysis and visualization enable scientists to provide real-time guidance to policy makers. In this thesis, we present a set of techniques for efficient high-fidelity simulations, online data analysis and visualization in environments with varying resource configurations. First, we present a strategy for improving the throughput of weather simulations with multiple regions of interest. We propose parallel execution of these nested simulations based on partitioning the 2D process grid into disjoint rectangular regions associated with each subdomain. The process grid partitioning is obtained from a Huffman tree constructed from the relative execution times of the subdomains. We propose a novel combination of performance prediction, processor allocation methods and topology-aware mapping of the regions on torus interconnects. We observe up to 33% gain over the default strategy in weather models. Second, we propose a processor reallocation heuristic that minimizes data redistribution cost while reallocating processors in the case of dynamic regions of interest. This algorithm is based on a hierarchical diffusion approach that uses a novel tree reorganization strategy. We have also developed a parallel data analysis algorithm to detect regions of interest within a domain. This helps improve the performance of detailed simulations of multiple weather phenomena like depressions and clouds, thereby increasing the lead time to severe weather phenomena like tornadoes and storm surges. 
Our method is able to reduce the redistribution time by 25% over a simple partition-from-scratch method. We also show that it is important to consider resource constraints like I/O bandwidth, disk space and network bandwidth for continuous simulation and smooth visualization. High simulation rates on modern-day processors combined with high I/O bandwidth can lead to rapid accumulation of data at the simulation site and eventual stalling of simulations. We show that formulating the problem as an optimization problem can determine optimal execution parameters for enabling smooth simulation and visualization. This approach proves beneficial for resource-constrained environments, whereas a naive greedy strategy leads to stalling and disk overflow. Our optimization method provides about 30% higher simulation rate and consumes about 25-50% less storage space than a naive greedy approach. We have then developed an integrated adaptive steering framework, InSt, that analyzes the combined effect of user-driven steering and automatic tuning of application parameters, based on resource constraints and the criticality needs of the application, to determine the final parameters for the simulations. It is important to allow climate scientists to steer an ongoing simulation, especially in the case of critical applications. InSt takes into account both the steering inputs of the scientists and the criticality needs of the application. Finally, we have developed algorithms to minimize the lag between the time when the simulation produces an output frame and the time when the frame is visualized. It is important to reduce the lag so that scientists can get an on-the-fly view of the simulation and concurrently visualize important events in the simulation. We present most-recent, auto-clustering and adaptive algorithms for reducing lag. 
The lag-reduction algorithms adapt to the available resource parameters and the number of pending frames to be sent to the visualization site by transferring a representative subset of frames. Our adaptive algorithm reduces lag by 72% and provides 37% larger representativeness than the most-recent for slow networks.
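The Huffman-tree construction over subdomain execution times mentioned in this abstract can be sketched as follows. This is an illustrative reading of the strategy with assumed weights, omitting the performance-prediction and topology-aware mapping steps: heavier subdomains end up nearer the root of the tree, suggesting larger partitions of the process grid under a recursive bisection.

```python
# A rough sketch (assumed details, not the thesis's exact algorithm):
# build a Huffman tree over subdomains weighted by relative execution
# time. Heavier subdomains sit shallower in the tree, so a recursive
# bisection of the 2D process grid along the tree gives them more
# processors.

import heapq
import itertools

def huffman_tree(weights):
    """weights: {subdomain_name: execution_time}. Returns nested tuples."""
    counter = itertools.count()  # tie-breaker so heap never compares trees
    heap = [(w, next(counter), name) for name, w in weights.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)   # two lightest subtrees
        w2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, next(counter), (left, right)))
    return heap[0][2]

def depths(tree, d=0):
    """Depth of each leaf; heavier subdomains should be shallower."""
    if not isinstance(tree, tuple):
        return {tree: d}
    out = {}
    for child in tree:
        out.update(depths(child, d + 1))
    return out
```

With illustrative weights 8:4:2:1 for four subdomains, the heaviest subdomain lands at depth 1 and the two lightest at depth 3, mirroring how the partition would devote roughly half the grid to the dominant region of interest.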
65

Topics In Performance Modeling Of IEEE 802.11 Wireless Local Area Networks

Panda, Manoj Kumar 03 1900 (has links) (PDF)
This thesis is concerned with analytical modeling of Wireless Local Area Networks (WLANs) that are based on the IEEE 802.11 Distributed Coordination Function (DCF). Such networks are popularly known as WiFi networks. We have developed accurate analytical models for the following three network scenarios: (S1) a single cell WLAN with homogeneous nodes and Poisson packet arrivals; (S2) a multi-cell WLAN (a) with saturated nodes, or (b) with TCP-controlled long-lived downloads; and (S3) a multi-cell WLAN with TCP-controlled short-lived downloads. Our analytical models are simple Markovian abstractions that capture the detailed network behavior in the considered scenarios. The insights provided by our analytical models led to two applications: (i) a faster "model-based" simulator, and (ii) a distributed channel assignment algorithm. We also study the stability of the network through our Markov models. For scenario (S1), we develop a new approach as compared to the existing literature. We apply a "State Dependent Attempt Rate" (SDAR) approximation to reduce a single cell WLAN with non-saturated nodes to a coupled queue system. We provide a sufficient condition under which the joint queue length Markov chain is positive recurrent. For the case when the arrival rates into the queues are equal, we propose a technique to reduce the state space of the coupled queue system. In addition, when the buffer sizes of the queues are finite and equal, we propose an iterative method to estimate the stationary distribution of the reduced state process. Our iterative method yields accurate predictions for important performance measures, namely, throughput, collision probability and packet delay. We replace the detailed implementation of the MAC layer in NS-2 with the SDAR contention model, thus yielding a "model-based" simulator at the MAC layer. We demonstrate that the SDAR model of contention provides an accurate model for the detailed CSMA/CA protocol in scenario (S1). 
In addition, since the SDAR model removes much of the detail at the MAC layer, we obtain speed-ups of 1.55-5.4 depending on the arrival rates and the number of nodes in the single cell WLAN. For scenario (S2), we consider a restricted network setting where a so-called "Pairwise Binary Dependence" (PBD) condition holds. We develop a first-cut scalable "cell-level" model by applying the PBD condition. Unlike a node- or link-level model, the complexity of our cell-level model increases with the number of cells rather than with the number of nodes/links. We demonstrate the accuracy of our cell-level model via NS-2 simulations. We show that, as the "access intensity" of every cell goes to infinity, the aggregate network throughput is maximized. This remarkable property of CSMA, namely, maximization of aggregate network throughput in a distributed manner, has been proved recently by Durvy et al. (TIT, March 2009) for an infinite linear chain of nodes. We prove it for multi-cell WLANs with arbitrary cell topology (under the PBD condition). Based on this insight provided by our analytical model, we propose a distributed channel assignment algorithm. For scenario (S3), we consider the same restricted network setting as for scenario (S2). For Poisson flow arrivals and i.i.d. exponentially distributed flow sizes, we model a multi-cell WLAN as a network of processor-sharing queues with state-dependent service rates. The state-dependent service rates are obtained by applying the model for scenario (S2) and taking the access intensities to infinity. We demonstrate the accuracy of our model via NS-2 simulations. We also demonstrate the inaccuracy of the service model proposed in the recent work by Bonald et al. (SIGMETRICS 2008) and identify the implicit assumption in their model which leads to this inaccuracy. 
We call our service model which accurately characterizes the service process in a multi-cell WLAN (under the PBD condition) “DCF scheduling” and study the “stability region” of DCF scheduling for small networks with single or multiple overlapping “contention domains”.
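The flow-level model for scenario (S3) — a cell viewed as a processor-sharing queue whose total service rate depends on the number of active flows — can be sketched as a birth-death chain. The rate function below is invented for illustration; in the thesis the actual rates are derived from the scenario-(S2) model with access intensities taken to infinity:

```python
# A hedged, simplified illustration in the spirit of scenario (S3): a
# single cell as a processor-sharing queue whose total service rate
# mu_of_n(n) depends on the number n of active flows. Rates here are
# illustrative, not the thesis's derived rates.

def ps_stationary(lam, mu_of_n, max_flows):
    """Birth-death chain: flow arrivals at rate lam, total departure
    rate mu_of_n(n) when n flows are active. Returns the stationary
    distribution over 0..max_flows (a truncation of the state space)."""
    # Detailed balance: pi[n] = pi[n-1] * lam / mu_of_n(n)
    pi = [1.0]
    for n in range(1, max_flows + 1):
        pi.append(pi[-1] * lam / mu_of_n(n))
    total = sum(pi)
    return [x / total for x in pi]

def mean_flows(lam, mu_of_n, max_flows):
    """Mean number of active flows in steady state."""
    pi = ps_stationary(lam, mu_of_n, max_flows)
    return sum(n * p for n, p in enumerate(pi))
```

As a sanity check, a constant service rate recovers the classical M/M/1 processor-sharing result: at load 0.5 the mean number of active flows is close to 1.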
66

Modeling and Performance Analysis of Distributed Systems with Collaboration Behaviour Diagrams

Israr, Toqeer January 2014 (has links)
The use of distributed systems, involving multiple components, has become a common industry practice. However, modeling the behaviour of such systems is a challenge, especially when the behavior consists of several collaborations of different parties, each involving possibly several starting (input) and ending (output) events of the involved components. Furthermore, the global behavior should be described as a composition of several sub-behaviours, in the following called collaborations, and each collaboration may be further decomposed into several sub-collaborations. We assume that the performance of the elementary sub-collaborations is known, and that the performance of the global behavior should be determined from the performance of the contained elementary collaborations and the form of the composition. A collaboration, in this thesis, is characterized by a partial order of input and output events, and the performance of the collaboration is defined by the minimum delays required for a given output event with respect to an input event. This is a generalization of the semantics of UML Activities, where all input events are assumed to occur at the same time, and all output events occur at the same time. We give a semantic definition of the dynamic behavior of composed collaborations using the composition operators for control flow from UML Activity diagrams, in terms of partial order relationships among the involved input and output events. Based on these semantics, we provide formulas for calculating the performance of composed collaborations in terms of the performance of the sub-collaborations, where each delay is characterized by (a) a fixed value, (b) a range of values, and (c) a distribution (in the case of stochastic behaviours). 
We also propose approximations for the case of stochastic behaviour with Normal distributions, and discuss the expected errors that may be introduced by ignoring shared resources or possible dependencies in the case of stochastic behaviours. A tool has been developed for evaluating the performance of complex collaborations, and examples and case studies are discussed to illustrate the applicability of the performance analysis and of the visual notation we introduced for representing the partial-order relationships of the input and output events.
67

Performance modeling of congestion control and resource allocation under heterogeneous network traffic. Modeling and analysis of active queue management mechanism in the presence of Poisson and bursty traffic arrival processes.

Wang, Lan January 2010 (has links)
While playing an ever-increasing role in integrating other communication networks and supporting increasingly diverse applications, the current Internet suffers from serious overuse and congestion bottlenecks. Efficient congestion control is fundamental to ensuring Internet reliability, satisfying specified Quality-of-Service (QoS) constraints and achieving desirable performance across varying application scenarios. Active Queue Management (AQM) is a promising scheme to support end-to-end Transmission Control Protocol (TCP) congestion control because it enables the sender to react appropriately to the real network situation. Analytical performance models are powerful tools for investigating the optimal setting of AQM parameters. Among the existing research efforts in this field, however, there is a lack of analytical models that can serve as a cost-effective performance evaluation tool for AQM in the presence of heterogeneous traffic generated by various network applications. This thesis aims to provide a generic and extensible analytical framework for analyzing AQM congestion control under various traffic types, such as non-bursty Poisson and bursty Markov-Modulated Poisson Process (MMPP) traffic. Specifically, Markov analytical models are developed for an AQM congestion control scheme coupled with queue thresholds, and are then adopted to derive expressions for important QoS metrics. The main contributions of this thesis are as follows:
• A study of queueing systems for modeling the AQM scheme subject to single-class and multiple-class Poisson traffic, respectively, analyzing the effects of varying threshold, mean traffic arrival rate, service rate and buffer capacity on the key performance metrics.
• An analytical model for the AQM scheme with single-class bursty traffic, investigating how burstiness and correlation affect the performance metrics. The analytical results reveal that high burstiness and correlation can significantly degrade AQM performance, increasing queueing delay and packet loss probability while reducing throughput and utilization.
• An analytical model for a single-server queueing system with AQM in the presence of heterogeneous traffic, evaluating the aggregate and marginal performance subject to different threshold values, burstiness degrees and correlation.
• A stochastic analysis of a single-server system with a single queue and with multiple queues, respectively, for the AQM scheme in the presence of multiple priority traffic classes scheduled under the Priority Resume (PR) policy.
• A performance comparison of AQM under the PR and First-In First-Out (FIFO) schemes, and a comparison of AQM performance with a single PR priority queue and with multiple priority queues, respectively.
68

Development Of A Performance Analysis Framework For Water Pipeline Infrastructure Using Systems Understanding

Vishwakarma, Anmol 29 January 2019 (has links)
The fundamental purpose of drinking water distribution systems is to provide safe drinking water at sufficient volume and optimal pressure, with the lowest lifecycle costs, from the source (treatment plants, raw water sources) to the customers (residences, industries). Most of the distribution systems in the US were laid out during the development phase after World War II. As the drinking water infrastructure ages, water utilities are battling increasing break rates in their water distribution systems and struggling to bear the associated economic costs. However, with the growth in sensor technologies and data science, water utilities are seeing economic value in collecting and analyzing data to monitor and predict the performance of their distribution systems. Many mathematical models have been developed in the past to guide repair and rehabilitation decisions, but they remain largely unused because of low reliability. This is because any effort to build a model-based decision support framework should rest on a robust knowledge base of the critical factors influencing the system, which vary from utility to utility. Mathematical models built on a strong understanding of the theory, current practices and the trends in the data can prove more reliable. This study presents a framework to support repair and rehabilitation decisions for water utilities using water pipeline field performance data. / Master of Science / The fundamental purpose of drinking water distribution systems is to provide a safe and sufficient volume of drinking water at optimal pressure with the lowest costs to the water utilities. Most of the distribution systems in the US were established during the development phase after World War II. The problem of aging drinking water infrastructure is an increasing financial burden on water utilities due to increasing water main breaks. 
The growth in data collection by water utilities has proven useful for monitoring and predicting the performance of water distribution systems and for supporting asset management decisions. However, the mathematical models developed in the past suffer from low reliability due to the limited data used to create them. Also, any effort to build sophisticated mathematical models should be supported by a comprehensive review of existing research recommendations and current practices. This study presents a framework to support repair and rehabilitation decisions for water utilities using water pipeline field performance data.
