131 |
Schedulability analysis of real-time systems with stochastic task execution times. Manolache, Sorin. January 2002.
Systems controlled by embedded computers have become indispensable in our lives and can be found in avionics, the automotive industry, home appliances, medicine, telecommunications, mechatronics, the space industry, etc. Fast, accurate and flexible performance estimation tools that give feedback to the designer in every design phase are a vital part of a design process capable of producing high-quality embedded systems.

In the past decade, the limitations of models that consider fixed task execution times have been acknowledged for large application classes within soft real-time systems. A more realistic model considers tasks with varying execution times described by given probability distributions. No restriction is imposed in this thesis on the particular type of these distribution functions. Under such a model, with specified task execution time probability distribution functions, an important performance indicator of the system is the expected deadline miss ratio of tasks or task graphs.

This thesis proposes two approaches for obtaining this indicator analytically. The first is exact, while the second provides an approximate solution that trades accuracy for analysis speed. While the first approach can be applied efficiently to monoprocessor systems, it can handle only very small multiprocessor applications for complexity reasons. The second approach, however, can successfully handle realistic multiprocessor applications. Experiments show the efficiency of the proposed techniques.

Report code: LiU-Tek-Lic-2002:58.
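As a concrete illustration of the performance indicator discussed above, the short Python sketch below estimates the deadline miss ratio of a single periodic task by Monte Carlo simulation; the exponential execution-time distribution, period, and deadline values are hypothetical placeholders, and the thesis itself computes this indicator analytically rather than by simulation.

```python
import random

def miss_ratio(period, deadline, sample_exec, n_jobs=200_000, seed=0):
    """Monte Carlo estimate of the deadline miss ratio of a periodic task whose
    jobs queue on one processor (backlog carried over between periods)."""
    random.seed(seed)
    backlog, misses = 0.0, 0
    for _ in range(n_jobs):
        response = backlog + sample_exec()     # leftover work plus this job's execution time
        if response > deadline:
            misses += 1
        backlog = max(0.0, response - period)  # work spilling into the next period
    return misses / n_jobs

# Hypothetical task: period 10 ms, relative deadline 10 ms, execution time ~ Exp(mean 6 ms).
print(miss_ratio(period=10.0, deadline=10.0,
                 sample_exec=lambda: random.expovariate(1.0 / 6.0)))
```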
|
132 |
Performance Analysis and Deployment Techniques for Wireless Sensor Networks. She, Huimin. January 2012.
Recently, the wireless sensor network (WSN) has become a promising technology with a wide range of applications such as supply chain monitoring and environment surveillance. It is typically composed of multiple tiny devices equipped with limited sensing, computing and wireless communication capabilities. Design of such networks presents several technical challenges while dealing with various requirements and diverse constraints. Performance analysis and deployment techniques are required to provide insight on design parameters and system behaviors.

Based on network calculus, a deterministic analysis method is presented for evaluating the worst-case delay and buffer cost of sensor networks. To this end, traffic splitting and multiplexing models are proposed and their delay and buffer bounds are derived. These models can be used in combination to characterize complex traffic flowing scenarios. Furthermore, the method integrates a variable duty cycle to allow the sensor nodes to operate at low rates, thus saving power. In an attempt to balance traffic load and improve resource utilization and performance, traffic splitting mechanisms are introduced for sensor networks with general topologies. To provide reliable data delivery in sensor networks, retransmission has been one of the most popular schemes. We propose an analytical method to evaluate the maximum data transmission delay and energy consumption of two types of retransmission schemes: hop-by-hop retransmission and end-to-end retransmission. In order to validate the tightness of the bounds obtained by the analysis method, the simulation results and analytical results are compared under various input traffic loads. The results show that the analytic bounds are correct and tight.

Stochastic network calculus has been developed as a useful tool for Quality of Service (QoS) analysis of wireless networks. We propose a stochastic service curve model for the Rayleigh fading channel and then provide formulas to derive the probabilistic delay and backlog bounds in the cases of deterministic and stochastic arrival curves. The simulation results verify that the tightness of the bounds is good. Moreover, a detailed mechanism for bandwidth estimation of random wireless channels is developed. The bandwidth is derived from the measurement of statistical backlogs based on probe packet trains. It is expressed by statistical service curves that are allowed to violate a service guarantee with a certain probability. The theoretical foundation and the detailed step-by-step procedure of the estimation method are presented.

One fundamental application of WSNs is event detection in a Field of Interest (FoI), where a set of sensors are deployed to monitor any ongoing events. To satisfy a certain level of detection quality in such applications, it is desirable that events in the region can be detected by a required number of sensors. Hence, an important problem is how to conduct sensor deployment to achieve certain coverage requirements. In this thesis, a probabilistic event coverage analysis method is proposed for evaluating the coverage performance of heterogeneous sensor networks with randomly deployed sensors and stochastic event occurrences. Moreover, we present a framework for analyzing node deployment schemes in terms of three performance metrics: coverage, lifetime, and cost. The method can be used to evaluate the benefits and trade-offs of different deployment schemes and thus provide guidelines for network designers.
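To make the network-calculus style of bound described above concrete, the following Python sketch computes the classical worst-case delay and backlog bounds for a token-bucket-constrained flow served by a rate-latency service curve. The formulas are the standard single-node network calculus bounds and the numeric parameters are hypothetical; the sketch does not reproduce the thesis's specific splitting/multiplexing or duty-cycle models.

```python
def rate_latency_bounds(r, b, R, T):
    """Single-node network calculus bounds for a token-bucket flow (rate r, burst b)
    crossing a rate-latency server (rate R, latency T). Requires r <= R for stability."""
    if r > R:
        raise ValueError("flow rate exceeds service rate; backlog grows without bound")
    delay_bound = T + b / R        # horizontal deviation between arrival and service curves
    backlog_bound = b + r * T      # vertical deviation between the two curves
    return delay_bound, backlog_bound

# Hypothetical sensor-node numbers: 2 kb/s sustained rate, 4 kb burst,
# 10 kb/s service rate, 50 ms scheduling latency.
d, q = rate_latency_bounds(r=2_000, b=4_000, R=10_000, T=0.05)
print(f"worst-case delay <= {d:.3f} s, worst-case backlog <= {q:.0f} bits")
```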
|
133 |
Enhancing geothermal heat pump systems with parametric performance analyses. Self, Stuart. 01 April 2010.
Parametric performance analyses and a first-law comparison of a basic geothermal heat pump, a heat pump cycle with motor cooling/refrigerant preheating, and a heat pump cycle utilizing an economizer are conducted through simulation. The effects of changing compressor, pump, and motor efficiency, along with condenser pressure, evaporator pressure, degree of subcooling at the condenser exit, and degree of superheating at the evaporator exit, are investigated. Economizer arrangements yield the highest coefficient of performance (COP) and the greatest resilience of COP to variation in evaporator pressure and in the degrees of superheating and subcooling. The basic vapor compression and motor cooling/refrigerant preheating systems have the lowest COP throughout and the greatest resilience to variation in compressor efficiency, motor efficiency and condenser pressure. Motor cooling/refrigerant preheating and economizers have advantages over basic vapor compression cycles: motor cooling reduces the ground loop heat exchanger length with similar COP, and economizers allow for an increase in COP compared to the basic cycle.
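As a minimal illustration of the first-law figure of merit used in these comparisons, the Python sketch below computes the heating COP of a basic vapor compression cycle from specific enthalpies at its state points; the enthalpy values and the isentropic compressor efficiency are hypothetical placeholders, not results from the thesis.

```python
def heating_cop(h1, h2s, h3, eta_comp):
    """First-law heating COP of a basic vapor compression heat pump.
    h1: enthalpy at compressor inlet, h2s: isentropic compressor outlet enthalpy,
    h3: enthalpy at condenser exit (= expansion valve outlet), all in kJ/kg."""
    w_comp = (h2s - h1) / eta_comp   # actual specific compressor work
    h2 = h1 + w_comp                 # actual compressor outlet enthalpy
    q_cond = h2 - h3                 # specific heat rejected at the condenser
    return q_cond / w_comp

# Hypothetical R-134a-like state points (kJ/kg) and 75% isentropic compressor efficiency.
print(f"COP_heating = {heating_cop(h1=400.0, h2s=430.0, h3=250.0, eta_comp=0.75):.2f}")
```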
|
134 |
Circuit Timing and Leakage Analysis in the Presence of Variability. Heloue, Khaled R. 15 February 2011.
Driven by the need for faster devices and higher transistor densities, technology trends have pushed transistor dimensions into the deep sub-micron regime. This continued scaling, however, has led to many challenges facing digital integrated circuits today. One important challenge is the increased variations in the underlying process and environmental parameters, and the significant impact of this variability on circuit timing and leakage power, making it increasingly difficult to design circuits that achieve a required specification. Given these challenges, there is a need for computer-aided design (CAD) techniques that can predict and analyze circuit performance (timing and leakage) accurately and efficiently in the presence of variability. This thesis presents new techniques for variation-aware timing and leakage analysis that address different aspects of the problem.
First, on the timing front, a pre-placement statistical static timing analysis technique is presented. This technique can be applied at an early stage of design, when within-die correlations are still unknown. Next, a general parameterized static timing analysis framework is proposed, which supports a general class of nonlinear delay models and handles both random (process) parameters with arbitrary distributions and non-random (environmental) parameters. Following this, a parameterized static timing analysis technique is presented, which can capture circuit delay exactly at any point in the parameter space. This is enabled by identifying all potentially critical paths in the circuit through novel and efficient pruning algorithms that improve on the state of the art both in theoretical complexity and runtime. Also on the timing front, a novel distance-based metric for robustness is proposed. This metric can be used to quantify the susceptibility of parameterized timing quantities to failure, thus enabling designers to fix the nodes with the smallest robustness values in order to improve the overall design robustness.
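The parameterized timing quantities mentioned above are commonly represented in a first-order canonical form; the Python sketch below shows such a representation, evaluating a path delay at an arbitrary point in the parameter space and at the worst-case corner. The linear model and the numeric sensitivities are illustrative assumptions, not the specific (possibly nonlinear) models developed in the thesis.

```python
class ParamDelay:
    """Path delay in a first-order canonical form: d(X) = d0 + sum_i a_i * x_i,
    where each x_i is a normalized parameter variation in [-1, 1]."""
    def __init__(self, nominal, sensitivities):
        self.d0 = nominal
        self.a = dict(sensitivities)

    def at(self, point):
        """Delay at a given point in the parameter space (missing parameters = nominal)."""
        return self.d0 + sum(coef * point.get(name, 0.0) for name, coef in self.a.items())

    def worst_case(self):
        """Delay at the worst-case corner of the box [-1, 1]^n."""
        return self.d0 + sum(abs(coef) for coef in self.a.values())

# Hypothetical path: 500 ps nominal, sensitive to channel length, Vth, and supply voltage.
d = ParamDelay(500.0, {"L": 25.0, "Vth": 15.0, "Vdd": -10.0})
print(d.at({"L": 0.5, "Vdd": 1.0}), d.worst_case())   # 502.5 ps, 550.0 ps
```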
Finally, on the leakage front, a statistical technique for early-mode and late-mode leakage estimation is presented. The novelty lies in the random gate concept, which allows for efficient and accurate full-chip leakage estimation. In its simplest form, the leakage estimation reduces to finding the area under a scaled version of the within-die channel length auto-correlation function, which can be done in constant time.
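To illustrate the kind of computation the leakage result above reduces to, the Python sketch below integrates an assumed exponentially decaying within-die channel-length autocorrelation function over a chip dimension; the correlation model, correlation length, and scaling constant are hypothetical stand-ins for the quantities derived in the thesis.

```python
import math

def area_under_autocorrelation(corr_length, chip_size, n=10_000):
    """Numerically integrate an assumed autocorrelation rho(d) = exp(-d / corr_length)
    of within-die channel length over separation distances 0..chip_size (midpoint rule)."""
    step = chip_size / n
    return sum(math.exp(-(i + 0.5) * step / corr_length) * step for i in range(n))

# Hypothetical values: 2 mm correlation length, 10 mm die; scale into a leakage term.
area = area_under_autocorrelation(corr_length=2.0, chip_size=10.0)
leakage_scale = 0.35                      # placeholder scaling constant from the gate model
print(f"area = {area:.3f} mm, scaled contribution = {leakage_scale * area:.3f}")
```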
|
135 |
Exploiting diversity in wireless channels with bit-interleaved coded modulation and iterative decoding (BICM-ID). Tran, Huu Nghi. 23 April 2008.
This dissertation studies a state-of-the-art bandwidth-efficient coded modulation technique, known as bit-interleaved coded modulation with iterative decoding (BICM-ID), together with various diversity techniques to dramatically improve the performance of digital communication systems over wireless channels.
For BICM-ID over a single-antenna frequency non-selective fading channel, the problem of mapping over multiple symbols, i.e., multi-dimensional (multi-D) mapping, with an 8-PSK constellation is investigated. An explicit algorithm to construct a good multi-D mapping of 8-PSK to improve the asymptotic performance of BICM-ID systems is introduced. By comparing the performance of the proposed mapping with an unachievable lower bound, it is conjectured that the proposed mapping is the globally optimal mapping. The superiority of the proposed mapping over the best conventional (one-dimensional complex) mapping and the multi-D mapping found previously by computer search is thoroughly demonstrated.
In addition to the mapping issue in single-antenna BICM-ID systems, the use of signal space diversity (SSD), also known as linear constellation precoding (LCP), is considered in BICM-ID over frequency non-selective fading channels. The performance analysis of BICM-ID and complex N-dimensional signal space diversity is carried out to study its performance limitation, the choice of the rotation matrix, and the design of a low-complexity receiver. Based on the design criterion obtained from a tight error bound, the optimality of the rotation matrix is established. It is shown that using the class of optimal rotation matrices, the performance of BICM-ID systems over a frequency non-selective Rayleigh fading channel approaches that of the BICM-ID systems over an additive white Gaussian noise (AWGN) channel when the dimension of the signal constellation increases. Furthermore, by exploiting the sigma mapping for any M-ary quadrature amplitude modulation (QAM) constellation, a very simple sub-optimal, yet effective iterative receiver structure suitable for signal constellations with large dimensions is proposed. Simulation results in various cases and conditions indicate that the proposed receiver can achieve the analytical performance bounds with low complexity.
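As a small, self-contained illustration of the signal space diversity mechanism, the Python sketch below rotates a pair of real constellation coordinates by a fixed angle so that each information symbol is spread over two independently fading dimensions; the rotation angle, constellation, and fading model are generic textbook choices meant only to show the idea, not the optimal N-dimensional rotation matrices designed in the dissertation.

```python
import math, random

def rotate_pair(x1, x2, angle):
    """2-D signal space diversity: rotate two real constellation coordinates so each
    rotated component carries information about both original symbols."""
    c, s = math.cos(angle), math.sin(angle)
    return c * x1 - s * x2, s * x1 + c * x2

# Hypothetical setup: two 4-PAM coordinates, an illustrative rotation angle (not the
# thesis-optimal one), and independent Rayleigh fades on the component-interleaved dimensions.
random.seed(1)
angle = math.atan(0.5)
x1, x2 = -3.0, 1.0                         # 4-PAM symbols drawn from {-3, -1, 1, 3}
y1, y2 = rotate_pair(x1, x2, angle)
h1, h2 = random.expovariate(1.0) ** 0.5, random.expovariate(1.0) ** 0.5  # Rayleigh gains
r1, r2 = h1 * y1, h2 * y2                  # received components (noise omitted)
print(f"rotated: ({y1:.2f}, {y2:.2f}), faded: ({r1:.2f}, {r2:.2f})")
```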
The application of BICM-ID with SSD is then extended to the case of cascaded Rayleigh fading, which is more suitable for modeling mobile-to-mobile communication channels. By deriving the error bound on the asymptotic performance, it is first illustrated that for a small modulation constellation, cascaded Rayleigh fading causes a much more severe performance degradation than conventional Rayleigh fading. However, BICM-ID employing SSD with a sufficiently large constellation can close the performance gap between the Rayleigh and cascaded Rayleigh fading channels, and their performance can closely approach that over an AWGN channel.
In the next step, the use of SSD in BICM-ID over frequency selective Rayleigh fading channels employing a multi-carrier modulation technique known as orthogonal frequency division multiplexing (OFDM) is studied. Under the assumption of correlated fading over subcarriers, a tight bound on the asymptotic error performance for the general case of applying SSD over all N subcarriers is derived and used to establish the best achievable asymptotic performance by SSD. It is then shown that precoding over subgroups of at least L subcarriers per group, where L is the number of channel taps, is sufficient to obtain this best asymptotic error performance, while significantly reducing the receiver complexity. The optimal joint subcarrier grouping and rotation matrix design is subsequently determined by solving the Vandermonde linear system. Illustrative examples show a good agreement between various analytical and simulation results.
Further, by combining the ideas of multi-D mapping and subcarrier grouping, a novel power- and bandwidth-efficient bit-interleaved coded modulation with OFDM and iterative decoding (BI-COFDM-ID), in which multi-D mapping is performed over a group of subcarriers for broadband transmission in a frequency selective fading environment, is proposed. A tight bound on the asymptotic error performance is developed, which shows that subcarrier mapping and grouping have independent impacts on the overall error performance, and hence they can be independently optimized. Specifically, it is demonstrated that the optimal subcarrier mapping is similar to the optimal multi-D mapping for BICM-ID in a frequency non-selective Rayleigh fading environment, whereas the optimal subcarrier grouping is the same as that of OFDM with SSD. Furthermore, analytical and simulation results show that the proposed system with the combined optimal subcarrier mapping and grouping can achieve the full channel diversity without using SSD and provide significant coding gains as compared to the previously studied BI-COFDM-ID with the same power, bandwidth and receiver complexity.
Finally, the investigation is extended to the application of BICM-ID over a multiple-input multiple-output (MIMO) system equipped with multiple antennas at both the transmitter and the receiver to exploit both time and spatial diversities, where neither the transmitter nor the receiver knows the channel fading coefficients. The focus is on the class of unitary constellations, due to their advantages in terms of both information-theoretic capacity and error probability. The tight error bound with respect to the asymptotic performance is also derived for any given unitary constellation and mapping rule. Design criteria regarding the choice of unitary constellation and mapping are then established. Furthermore, by using the unitary constellation obtained from orthogonal design with quadrature phase-shift keying (QPSK or 4-PSK) and 8-PSK, two different mapping rules are proposed. The first mapping rule gives the most suitable mapping for systems that do not implement iterative processing, which is similar to a Gray mapping in coherent channels. The second mapping rule yields the best mapping for systems with iterative decoding. Analytical and simulation results show that with the proposed mappings of the unitary constellations obtained from orthogonal designs, the asymptotic error performance of the iterative systems can closely approach a lower bound which is applicable to any unitary constellation and mapping.
|
136 |
Network Performance Analysis of Packet Scheduling Algorithms. Ghiassi-Farrokhfal, Yashar. 21 August 2012.
Some of the applications in modern data networks are delay sensitive (e.g., video and voice). An end-to-end delay analysis is needed to estimate the required network resources of delay-sensitive applications. The schedulers used in the network can impact the resulting delays to the applications. When multiple applications are multiplexed in a switch, a scheduler is used to determine the precedence of the arrivals from different applications.

Computing the end-to-end delay and queue sizes in a network of schedulers is difficult, and the existing solutions are limited to some special cases (e.g., specific types of traffic). The theory of Network Calculus employs the min-plus algebra to obtain performance bounds. Given an upper bound on the traffic arrival in any time interval and a lower bound on the available service (called the service curve) at a network element, upper bounds on the delay and queue size of the traffic in that network element can be obtained. An equivalent end-to-end service curve of a tandem of queues is the min-plus convolution of the service curves of all nodes along the path. A probabilistic end-to-end delay bound using the network service curve scales with O(H log H) in the path length H. This improves the results of the conventional method of adding per-node delay bounds, which scales with O(H^3).

We have used and advanced Network Calculus for end-to-end delay analysis in a network of schedulers. We formulate a service curve description for a large class of schedulers which we call Delta-schedulers. We show that with this service curve, tight single-node delay and backlog bounds can be achieved. In an end-to-end scenario, we formulate a new convolution theorem which considerably improves the end-to-end probabilistic delay bounds. We specify our probabilistic end-to-end delay and backlog bounds for exponentially bounded burstiness (EBB) traffic arrivals. We show that the end-to-end delay varies considerably with the type of schedulers along the path. Using these bounds, we also show that if the number of flows increases, the queues inside a network can be analyzed in isolation and regardless of the network effect.
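The end-to-end service curve construction mentioned above has a particularly simple closed form for rate-latency servers: their min-plus convolution is again a rate-latency curve, with the minimum rate and the summed latencies. The Python sketch below applies that known identity together with the deterministic delay bound for a token-bucket flow; the per-hop rates and latencies are hypothetical, and the sketch does not reproduce the probabilistic Delta-scheduler bounds developed in this thesis.

```python
def convolve_rate_latency(curves):
    """Min-plus convolution of rate-latency service curves beta_i(t) = R_i * max(0, t - T_i):
    the result is rate-latency with rate min(R_i) and latency sum(T_i)."""
    rates, latencies = zip(*curves)
    return min(rates), sum(latencies)

def end_to_end_delay_bound(curves, rate, burst):
    """Deterministic end-to-end delay bound for a token-bucket flow (rate, burst)
    crossing a tandem of rate-latency servers, via the network service curve."""
    R, T = convolve_rate_latency(curves)
    assert rate <= R, "unstable: flow rate exceeds the bottleneck service rate"
    return T + burst / R

# Hypothetical 3-hop path: (rate in Mb/s, latency in s) per hop; 1 Mb/s flow, 0.5 Mb burst.
hops = [(10.0, 0.002), (8.0, 0.0015), (12.0, 0.003)]
print(f"end-to-end delay bound: {end_to_end_delay_bound(hops, rate=1.0, burst=0.5):.4f} s")
```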
|
140 |
Qualitative Performance Analysis for Large-Scale Scientific Workflows. Buneci, Emma. 30 May 2008.
Today, large-scale scientific applications are both data driven and distributed. To support the scale and inherent distribution of these applications, significant heterogeneous and geographically distributed resources are required over long periods of time to ensure adequate performance. Furthermore, the behavior of these applications depends on a large number of factors related to the application, the system software, the underlying hardware, and other running applications, as well as potential interactions among these factors.

Most Grid application users are primarily concerned with obtaining the result of the application as fast as possible, without worrying about the details involved in monitoring and understanding factors affecting application performance. In this work, we aim to provide the application users with a simple and intuitive performance evaluation mechanism during the execution time of their long-running Grid applications or workflows. Our performance evaluation mechanism provides a qualitative and periodic assessment of the application's behavior by informing the user whether the application's performance is expected or unexpected. Furthermore, it can help improve overall application performance by informing and guiding fault-tolerance services when the application exhibits persistent unexpected performance behaviors.

This thesis addresses the hypotheses that, in order to qualitatively assess application behavioral states in long-running scientific Grid applications: (1) it is necessary to extract temporal information from performance time series data, and (2) it is sufficient to extract variance and pattern as specific examples of temporal information. Evidence supporting these hypotheses can lead to the ability to qualitatively assess the overall behavior of the application and, if needed, to offer a most likely diagnostic of the underlying problem.

To test the stated hypotheses, we develop and evaluate a general qualitative performance analysis framework that incorporates (a) techniques from time series analysis and machine learning to extract and learn, from data, structural and temporal features associated with application performance in order to reach a qualitative interpretation of the application's behavior, and (b) mechanisms and policies to reason over time and across the distributed resource space about the behavior of the application.

Experiments with two scientific applications from meteorology and astronomy, comparing signatures generated from instantaneous values of performance data versus those generated from temporal characteristics, support the former hypothesis: temporal information must be extracted from performance time series data in order to accurately interpret the behavior of these applications. Furthermore, temporal signatures incorporating variance and pattern information generated for these applications have distinct characteristics during well-performing versus poor-performing executions. This leads to the framework's accurate classification of instances of similar behaviors, which represents supporting evidence for the latter hypothesis. The proposed framework's ability to generate a qualitative assessment of performance behavior for scientific applications using temporal information present in performance time series data represents a step towards simplifying and improving the quality of service for Grid applications.
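As a minimal sketch of the kind of temporal signature described above, the Python code below summarizes a window of a performance time series with a variance feature and a crude pattern feature (the mean sign of successive differences), then flags the window as expected or unexpected against thresholds presumed to be learned from well-performing runs; the feature choices and thresholds are illustrative assumptions, not the framework's actual machine-learning models.

```python
from statistics import pvariance

def temporal_signature(window):
    """Summarize a window of performance samples (e.g., iteration times) by a
    variance feature and a simple pattern feature: the mean sign of successive differences."""
    diffs = [b - a for a, b in zip(window, window[1:])]
    trend = sum((d > 0) - (d < 0) for d in diffs) / len(diffs)
    return pvariance(window), trend

def classify(window, max_variance, max_trend):
    """Label the window 'expected' if both features stay within thresholds
    assumed to come from well-performing executions."""
    var, trend = temporal_signature(window)
    return "expected" if var <= max_variance and trend <= max_trend else "unexpected"

# Hypothetical iteration times (seconds): steady vs. steadily degrading behavior.
steady = [1.02, 0.98, 1.01, 1.00, 0.99, 1.03, 1.00]
degrading = [1.00, 1.10, 1.25, 1.45, 1.70, 2.00, 2.40]
print(classify(steady, max_variance=0.05, max_trend=0.5))     # expected
print(classify(degrading, max_variance=0.05, max_trend=0.5))  # unexpected
```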
|