31

Scalable Stochastic Models for Cloud Services

Ghosh, Rahul January 2012 (has links)
Cloud computing appears to be a paradigm shift in service-oriented computing. Massively scalable Cloud architectures are spawned by new business and social applications as well as Internet-driven economics. Besides being inherently large-scale and highly distributed, Cloud systems are almost always virtualized and operate in automated shared environments. The deployed Cloud services are still in their infancy, and a variety of research challenges need to be addressed to predict their long-term behavior. Performance and dependability of Cloud services are in general stochastic in nature, and they are affected by a large number of factors, e.g., the nature of the workload and faultload, infrastructure characteristics, and management policies. As a result, developing scalable and predictive analytics for the Cloud becomes difficult and non-trivial. This dissertation presents the research framework needed to develop high-fidelity stochastic models for large-scale enterprise systems, using Cloud computing as an example. Throughout the dissertation, we show how the developed models are used for: (i) performance and availability analysis, (ii) understanding of power-performance trade-offs, (iii) resiliency quantification, (iv) cost analysis and capacity planning, and (v) risk analysis of Cloud services. In general, the models and approaches presented in this thesis can be useful to a Cloud service provider for planning, forecasting, bottleneck detection, what-if analysis, or overall optimization during the design, development, testing, and operational phases of a Cloud. / Dissertation
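As a toy illustration of the kind of stochastic availability analysis this dissertation scales up, the sketch below solves a two-state (up/down) continuous-time Markov chain for steady-state availability. The failure and repair rates are invented placeholders, not figures from the thesis, and the model is far simpler than the interacting sub-models developed there.

```python
import numpy as np

# Placeholder rates (per hour); illustrative only, not taken from the dissertation.
failure_rate = 1.0 / 1000.0   # mean time to failure = 1000 h
repair_rate = 1.0 / 4.0       # mean time to repair = 4 h

# Generator matrix of a two-state CTMC: state 0 = up, state 1 = down.
Q = np.array([
    [-failure_rate, failure_rate],
    [repair_rate, -repair_rate],
])

# Steady-state probabilities satisfy pi @ Q = 0 with pi summing to 1;
# append the normalization row and solve in a least-squares sense.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(f"Steady-state availability: {pi[0]:.6f}")
# Closed-form check: availability = repair_rate / (failure_rate + repair_rate).
print(f"Closed form:               {repair_rate / (failure_rate + repair_rate):.6f}")
```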
32

An Intelligent Framework for Energy-Aware Mobile Computing Subject to Stochastic System Dynamics

January 2017 (has links)
abstract: User satisfaction is pivotal to the success of mobile applications. At the same time, it is imperative to maximize the energy efficiency of the mobile device to ensure optimal usage of its limited energy source while maintaining the necessary levels of user satisfaction. However, this is complicated by user interactions, numerous shared resources, and network conditions that introduce substantial uncertainty into the mobile device's performance and power characteristics. In this dissertation, a new approach to characterizing and controlling mobile devices is presented that accurately models these uncertainties. The proposed modeling framework is a completely data-driven approach to predicting power and performance. The approach makes no assumptions about the distributions of the underlying sources of uncertainty and is capable of predicting power and performance with over 93% accuracy. Using this data-driven prediction framework, a closed-loop solution to the DEM problem is derived to maximize the energy efficiency of the mobile device subject to various thermal, reliability, and deadline constraints. The design of the controller imposes minimal operational overhead and is able to tune the performance and power prediction models to changing system conditions. The proposed controller is implemented on a real mobile platform, the Google Pixel smartphone, and demonstrates a 19% improvement in energy efficiency over the standard frequency governor implemented on all Android devices. / Dissertation/Thesis / Doctoral Dissertation Computer Engineering 2017
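The abstract does not disclose the model internals; as one hedged sketch of a purely data-driven power predictor in the same spirit, the code below fits a random-forest regressor to synthetic per-interval telemetry (CPU frequency, utilization, screen brightness, network throughput). The features, data, and accuracy figure are illustrative assumptions, not results from the dissertation; a real deployment would replace the synthetic arrays with logged measurements from the device.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical telemetry: CPU frequency (GHz), CPU utilization, screen brightness,
# and network throughput (Mb/s). Synthetic data for illustration only.
n = 2000
freq = rng.uniform(0.3, 2.4, n)
util = rng.uniform(0.0, 1.0, n)
brightness = rng.uniform(0.0, 1.0, n)
net = rng.uniform(0.0, 50.0, n)
X = np.column_stack([freq, util, brightness, net])

# Synthetic "measured" power (W) with noise, standing in for real measurements.
power = 0.4 + 1.1 * freq * util + 0.8 * brightness + 0.02 * net + rng.normal(0, 0.1, n)

X_train, X_test, y_train, y_test = train_test_split(X, power, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

print(f"Held-out R^2: {r2_score(y_test, model.predict(X_test)):.3f}")
```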
33

Modeling of proton exchange membrane fuel cell performance degradation and operation life

Ahmadi Sarbast, Vahid 10 September 2021 (has links)
The Proton Exchange Membrane Fuel Cell (PEMFC) is the most commonly used type of hydrogen fuel cell and a promising solution for vehicular and stationary power applications. This research starts with an extensive review of PEMFC research, including experimental testing, performance modeling, and performance degradation modeling using relatively accurate and easy-to-use mechanistic models. Next, a new PEMFC performance degradation model is introduced by amending the semi-empirical, mechanistic performance model to support the design and control of PEMFC systems and fuel cell electric vehicles (FCEVs). The new model takes into account critical factors impacting PEMFC performance. The performance degradation due to the oxidation of the platinum (Pt) catalyst and the loss of active surface area is captured by fitting the degradation model parameters to experimental data that exhibit the observed PEMFC performance fading. The new performance degradation model is then tested and further improved under the four typical load modes that a PEMFC system experiences in a vehicular application under regular driving cycles. The model is also fitted with PEMFC experimental degradation data under different load modes to improve modeling accuracy. The new model is applied and tested using simulations of a representative FCEV. The actual power load on an 80 kW PEMFC system in the modeled FCEV was obtained using the Advanced Vehicle Simulator (ADVISOR) under the US EPA Urban Dynamometer Driving Schedule (UDDS). With the ability to predict the operation life of the PEMFC, the appropriate sizes of the PEMFC system and the energy storage system (ESS) can be determined. Improved power control and energy management can then be developed to extend the operation life of the PEMFC and lower the lifecycle cost of the FCEV. / Graduate
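For orientation only, the sketch below shows a textbook-style semi-empirical polarization model of the broad family such work builds on: cell voltage equals open-circuit voltage minus activation, ohmic, and concentration losses, with degradation represented here as a crude loss of electrochemically active surface area over operating hours. All coefficients are placeholders, not the fitted parameters of this thesis.

```python
import numpy as np

def cell_voltage(i, t_hours,
                 E_ocv=1.0,          # open-circuit voltage (V), placeholder
                 b_tafel=0.05,       # Tafel slope parameter (V), placeholder
                 i0=1e-3,            # effective exchange current density (A/cm^2), placeholder
                 r_ohm=0.10,         # area-specific resistance (ohm*cm^2), placeholder
                 i_lim=1.4,          # limiting current density (A/cm^2), placeholder
                 k_deg=2e-5):        # fractional ECSA loss per hour, placeholder
    """Semi-empirical polarization curve with a crude ECSA-loss degradation term."""
    ecsa = max(1.0 - k_deg * t_hours, 0.1)             # remaining fraction of active area
    i_eff = i / ecsa                                    # local current density rises as area shrinks
    v_act = b_tafel * np.log(i_eff / i0)                # activation loss (Tafel form)
    v_ohm = r_ohm * i                                   # ohmic loss
    v_conc = -0.05 * np.log(1.0 - min(i_eff / i_lim, 0.999))  # concentration loss
    return E_ocv - v_act - v_ohm - v_conc

for t in (0, 2000, 5000):                               # hours of operation
    print(f"t = {t:5d} h: V(0.8 A/cm^2) = {cell_voltage(0.8, t):.3f} V")
```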
34

Multi-Agent Based Simulations in the Grid Environment

Mengistu, Dawit January 2007 (has links)
The computational Grid has become an important infrastructure as an execution environment for scientific applications that require large amounts of computing resources. Applications which would otherwise be unmanageable or take a prohibitively long execution time under previous computing paradigms can now be executed efficiently on the Grid within a reasonable time. Multi-agent based simulation (MABS) is a methodology used to study and understand the dynamics of real-world phenomena in domains involving interaction and/or cooperative problem solving, where the participants are modeled as entities with autonomous and social behaviour. For certain domains, the size of the simulation is extremely large and intractable without adequate computing resources such as the Grid. Although the Grid offers immense opportunities to resource-demanding applications such as MABS, it has also brought with it a number of performance-related challenges. Performance problems may originate in the computing infrastructure, in the application itself, or in both. This thesis aims to improve the performance of MABS applications by overcoming problems inherent in their behaviour. It also studies the extent to which MABS technologies have been exploited in the field of simulation and finds ways to adapt existing technologies to the Grid. It investigates performance monitoring and prediction systems in the Grid environment and their implementation for MABS applications, with the purpose of identifying application-related performance problems and their solutions. Our research shows that large-scale MABS applications have not been implemented, despite the fact that many problem domains cannot be studied properly with only partial simulation. We assume that this is due to the lack of appropriate tools such as MABS platforms for the Grid. Another important finding of this work is the improvement of application performance through the use of MABS-specific middleware.
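To make the MABS notion concrete, here is a toy agent loop (agents repeatedly nudge their state toward that of sampled neighbours); distributing millions of such agents across Grid nodes while keeping the partitions balanced is precisely the performance problem discussed above. The model and parameters are illustrative, not drawn from the thesis.

```python
import random

class Agent:
    """Minimal agent with autonomous state updates and neighbour interaction."""
    def __init__(self, idx):
        self.idx = idx
        self.opinion = random.random()

    def step(self, neighbours):
        # Social behaviour: drift toward the average opinion of sampled neighbours.
        if neighbours:
            avg = sum(n.opinion for n in neighbours) / len(neighbours)
            self.opinion += 0.1 * (avg - self.opinion)

def run(n_agents=1000, n_steps=50, sample_size=5):
    random.seed(1)
    agents = [Agent(i) for i in range(n_agents)]
    for _ in range(n_steps):
        for a in agents:
            a.step(random.sample(agents, sample_size))
    return agents

agents = run()
print("Mean opinion after simulation:",
      sum(a.opinion for a in agents) / len(agents))
```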
35

Statistical Techniques to Model and Optimize Performance of Scientific, Numerically Intensive Workloads

Steven Monteiro, Steena Dominica 01 December 2016 (has links)
Projecting performance of applications and hardware is important to several market segments—hardware designers, software developers, supercomputing centers, and end users. Hardware designers estimate performance of current applications on future systems when designing new hardware. Software developers make performance estimates to evaluate performance of their code on different architectures and input datasets. Supercomputing centers try to optimize the process of matching computing resources to computing needs. End users requesting time on supercomputers must provide estimates of their application’s run time, and incorrect estimates can lead to wasted supercomputing resources and time. However, application performance is challenging to predict because it is affected by several factors in application code, specifications of system hardware, choice of compilers, compiler flags, and libraries. This dissertation uses statistical techniques to model and optimize performance of scientific applications across different computer processors. The first study in this research offers statistical models that predict performance of an application across different input datasets prior to application execution. These models guide end users to select parameters that produce optimal application performance during execution. The second study offers a suite of statistical models that predict performance of a new application on a new processor. Both studies present statistical techniques that can be generalized to analyze, optimize, and predict performance of diverse computation- and data-intensive applications on different hardware.
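As one hedged illustration of predicting run time from input parameters before execution, the sketch below fits a power-law scaling model (linear in log-log space) to hypothetical timing samples. The dissertation's actual statistical models and data are richer; these numbers are invented for illustration.

```python
import numpy as np

# Hypothetical measurements: input size n and observed run time (seconds).
sizes = np.array([1e4, 2e4, 5e4, 1e5, 2e5, 5e5])
times = np.array([0.8, 1.7, 4.6, 9.8, 21.0, 58.0])

# Fit log(t) = a * log(n) + b, i.e. a power law t ~ exp(b) * n**a.
a, b = np.polyfit(np.log(sizes), np.log(times), 1)

def predict(n):
    return np.exp(b) * n ** a

print(f"Fitted scaling exponent: {a:.2f}")
print(f"Predicted run time for n = 1e6: {predict(1e6):.1f} s")
```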
36

Fuel Performance Modeling of Reactivity-Initiated Accidents Using the BISON Code

Folsom, Charles Pearson 01 December 2017 (has links)
The Fukushima Daiichi nuclear accidents in 2011 sparked considerable interest in the U.S. to develop new nuclear fuel with enhanced accident tolerance. Throughout the development of these new fuel concepts, they will be extensively modeled using specialized computer codes and experimentally tested for a variety of different postulated accident scenarios. One accident scenario of interest, the reactivity-initiated accident (RIA), is a nuclear reactor event involving a sudden increase in fission rate that causes a rapid increase in reactor power and fuel temperature, which can lead to the failure of the fuel rods and a release of radioactive material. The focus of this work is on the fuel performance modeling of reactivity-initiated accidents using the BISON code being developed at Idaho National Laboratory. The overall goal of this work is to provide the best possible modeling predictions for future experimental tests. Accurate predictive modeling capability with BISON is important for the safe operation of these tests and provides a cheaper alternative to the expensive experiments.
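The abstract omits the governing equations, but the key figure of merit in an RIA is the prompt fuel enthalpy rise during the power pulse. The back-of-the-envelope sketch below integrates a hypothetical Gaussian pulse adiabatically to estimate that rise; every number is an illustrative placeholder, and BISON's coupled thermo-mechanical treatment is far more detailed.

```python
import numpy as np

# Hypothetical Gaussian power pulse deposited in one fuel pellet segment.
t = np.linspace(0.0, 0.2, 2001)            # time (s)
pulse_width = 0.015                        # pulse width parameter (s), placeholder
peak_power = 4.0e5                         # peak power in the segment (W), placeholder
power = peak_power * np.exp(-((t - 0.1) / pulse_width) ** 2)

fuel_mass = 1.5e-2                         # UO2 mass in the segment (kg), placeholder
energy = np.trapz(power, t)                # deposited energy (J), adiabatic assumption
enthalpy_rise = energy / fuel_mass         # J/kg

# RIA acceptance criteria are usually quoted in cal/g; 1 cal/g = 4184 J/kg.
print(f"Deposited energy: {energy / 1e3:.1f} kJ")
print(f"Prompt enthalpy rise: {enthalpy_rise / 4184:.0f} cal/g")
```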
37

Performance Modeling of Large-Scale Parallel-Distributed Processing for Cloud Environment

Hirai, Tsuguhito 23 May 2018 (has links)
Kyoto University / 0048 / New system, doctoral program / Doctor of Informatics / Degree No. 21280 (Kou) / Informatics Doctorate No. 674 / 新制||情||116 (University Library) / Department of Systems Science, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor Toshiyuki Tanaka, Professor Nobuo Yamashita, Associate Professor Hiroyuki Masuyama, Professor Shoji Kasahara / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
38

Analyzing and Evaluating the Resilience of Scheduling Scientific Applications on High Performance Computing Systems using a Simulation-based Methodology

Sukhija, Nitin 09 May 2015 (has links)
Large-scale systems provide a powerful computing platform for solving large and complex scientific applications. However, the inherent complexity, heterogeneity, wide distribution, and dynamism of the computing environments can lead to performance degradation of the scientific applications executing on these computing systems. Load imbalance arising from a variety of sources, such as application, algorithmic, and systemic variations, is one of the major contributors to this performance degradation. In general, load balancing is achieved via scheduling. Moreover, frequently occurring resource failures drastically affect the execution of applications running on high performance computing systems. Therefore, studying how to deploy integrated scheduling and fault-tolerance mechanisms that guarantee that applications deployed on computing systems are resilient to failures becomes of paramount importance. Recently, several research initiatives have started to address the issue of resilience. However, the major focus of these efforts was geared toward achieving system-level resilience, with less emphasis on achieving resilience at the application level. Therefore, it is increasingly important to extend the concept of resilience to scheduling techniques at the application level, establishing a holistic approach that addresses the performability of these applications on high performance computing systems. This can be achieved by developing a comprehensive modeling framework that can be used to evaluate the resiliency of such techniques on heterogeneous computing systems, assessing the impact of failures as well as workloads in an integrated way. This dissertation presents an experimental methodology based on discrete event simulation for the analysis and evaluation of the resilience of scheduling scientific applications on high performance computing systems. With the aid of this methodology, a wide class of dependencies between the application and the computing system is captured within a deterministic model for quantifying the performance impact expected from changes in application and system characteristics. The results obtained by employing the proposed simulation-based performance prediction framework enable an introspective design and investigation of scheduling heuristics, in order to reason about how best to optimize various, often antagonistic, objectives such as minimizing application makespan and maximizing reliability.
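As a minimal stand-in for the simulation-based methodology (not the thesis's own simulator), the SimPy sketch below runs a small batch of tasks on a two-slot resource that is periodically knocked out by random failures, so task completion times reflect both contention and downtime. It assumes the third-party simpy package is installed, and all rates are placeholders.

```python
import random
import simpy

RNG = random.Random(42)
TASK_TIME = 5.0      # nominal task service time, placeholder
MTBF = 40.0          # mean time between machine failures, placeholder
MTTR = 8.0           # mean repair time, placeholder

def machine_failures(env, machine_ok):
    """Alternate the shared machine state between up and down."""
    while True:
        yield env.timeout(RNG.expovariate(1.0 / MTBF))
        machine_ok[0] = False
        yield env.timeout(RNG.expovariate(1.0 / MTTR))
        machine_ok[0] = True

def task(env, name, cpu, machine_ok, log):
    """Acquire a CPU slot and make progress only while the machine is up."""
    with cpu.request() as req:
        yield req
        remaining = TASK_TIME
        while remaining > 0:
            if machine_ok[0]:
                step = min(1.0, remaining)
                yield env.timeout(step)
                remaining -= step
            else:
                yield env.timeout(1.0)   # stalled during downtime
        log.append((name, env.now))

env = simpy.Environment()
cpu = simpy.Resource(env, capacity=2)
machine_ok = [True]
log = []
env.process(machine_failures(env, machine_ok))
for i in range(8):
    env.process(task(env, f"task-{i}", cpu, machine_ok, log))
env.run(until=200.0)

for name, finish in log:
    print(f"{name} finished at t = {finish:.1f}")
```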
39

Evaluating the Robustness of Resource Allocations Obtained through Performance Modeling with Stochastic Process Algebra

Srivastava, Srishti 09 May 2015 (has links)
Recent developments in the field of parallel and distributed computing have led to a proliferation of efforts to solve large and computationally intensive mathematical, science, or engineering problems that consist of several parallelizable parts and several non-parallelizable (sequential) parts. In a parallel and distributed computing environment, the performance goal is to optimize the execution of the parallelizable parts of an application on concurrent processors. This requires efficient application scheduling and resource allocation for mapping applications to a set of suitable parallel processors such that the overall performance goal is achieved. However, such computational environments are often prone to unpredictable variations in application (problem and algorithm) and system characteristics. Therefore, a robustness study is required to guarantee a desired level of performance. Given an initial workload, a mapping of applications to resources is considered to be robust if that mapping optimizes execution performance and guarantees a desired level of performance in the presence of unpredictable perturbations at runtime. In this research, a stochastic process algebra, Performance Evaluation Process Algebra (PEPA), is used for obtaining resource allocations via a numerical analysis of performance models of the parallel execution of applications on parallel computing resources. The PEPA performance model is translated into an underlying mathematical Markov chain model for obtaining performance measures. Further, a robustness analysis of the allocation techniques is performed for finding a robust mapping from a set of initial mapping schemes. The numerical analysis of the performance models has confirmed similarity with the simulation results of earlier research available in the existing literature. When compared to direct experiments and simulations, numerical models and the corresponding analyses are easier to reproduce, do not incur any setup or installation costs, do not impose any prerequisites for learning a simulation framework, and are not limited by the complexity of the underlying infrastructure or simulation libraries.
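PEPA models are analyzed by translating them into an underlying continuous-time Markov chain and solving for its steady state. As a hedged stand-in for that workflow (not the PEPA toolchain itself), the sketch below builds the generator of a small truncated M/M/2-style service model, solves for its steady state, and re-solves with the arrival rate perturbed by 20% to mimic the robustness question of how performance shifts under runtime perturbations. Rates and the truncation level are placeholders.

```python
import numpy as np

def steady_state(Q):
    """Solve pi Q = 0 with sum(pi) = 1 for a small CTMC generator matrix Q."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def mm2_generator(lam, mu, capacity=20):
    """Birth-death generator of an M/M/2 queue truncated at `capacity` jobs."""
    n = capacity + 1
    Q = np.zeros((n, n))
    for k in range(capacity):
        Q[k, k + 1] = lam                     # arrival
        Q[k + 1, k] = mu * min(k + 1, 2)      # service by up to two servers
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

def mean_jobs(lam, mu):
    pi = steady_state(mm2_generator(lam, mu))
    return float(sum(k * p for k, p in enumerate(pi)))

base = mean_jobs(lam=1.5, mu=1.0)                 # nominal allocation
perturbed = mean_jobs(lam=1.5 * 1.2, mu=1.0)      # +20% arrival-rate perturbation
print(f"Mean jobs in system: {base:.2f} nominal -> {perturbed:.2f} under +20% load")
```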
40

Simulation-based Cognitive Workload Modeling And Evaluation Of Adaptive Automation Invoking And Revoking Strategies

Rusnock, Christina 01 January 2013 (has links)
In human-computer systems, such as supervisory control systems, large volumes of incoming and complex information can degrade overall system performance. Strategically integrating automation to offload tasks from the operator has been shown to increase not only human performance but also operator efficiency and safety. However, increased automation allows for increased task complexity, which can lead to high cognitive workload and degradation of situational awareness. Adaptive automation is one potential solution to resolve these issues, while maintaining the benefits of traditional automation. Adaptive automation occurs dynamically, with the quantity of automated tasks changing in real-time to meet performance or workload goals. While numerous studies evaluate the relative performance of manual and adaptive systems, little attention has focused on the implications of selecting particular invoking or revoking strategies for adaptive automation. Thus, evaluations of adaptive systems tend to focus on the relative performance among multiple systems rather than the relative performance within a system. This study takes an intra-system approach specifically evaluating the relationship between cognitive workload and situational awareness that occurs when selecting a particular invoking-revoking strategy for an adaptive system. The case scenario is a human supervisory control situation that involves a system operator who receives and interprets intelligence outputs from multiple unmanned assets, and then identifies and reports potential threats and changes in the environment. In order to investigate this relationship between workload and situational awareness, discrete event simulation (DES) is used. DES is a standard technique in the analysis of systems, and the advantage of using DES to explore this relationship is that it can represent a human-computer system as the state of the system evolves over time. Furthermore, and most importantly, a well-designed DES model can represent the human operators, the tasks to be performed, and the cognitive demands placed on the operators. In addition to evaluating the cognitive workload to situational awareness tradeoff, this research demonstrates that DES can quite effectively model and predict human cognitive workload, specifically for system evaluation. This research finds that the predicted workload of the DES models highly correlates with well-established subjective measures and is more predictive of cognitive workload than numerous physiological measures. This research then uses the validated DES models to explore and predict the cognitive workload impacts of adaptive automation through various invoking and revoking strategies. The study provides insights into the workload-situational awareness tradeoffs that occur when selecting particular invoking and revoking strategies. First, in order to establish an appropriate target workload range, it is necessary to account for both performance goals and the portion of the workload-performance curve for the task in question. Second, establishing an invoking threshold may require a tradeoff between workload and situational awareness, which is influenced by the task’s location on the workload-situational awareness continuum. Finally, this study finds that revoking strategies differ in their ability to achieve workload and situational awareness goals. For the case scenario examined, revoking strategies based on duration are best suited to improve workload, while revoking strategies based on revoking thresholds are better for maintaining situational awareness.
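As an illustrative sketch (not derived from the study's task-network models), the code below simulates a noisy operator-workload signal, invokes automation when workload crosses a threshold, and compares a duration-based revoking rule against a threshold-based one, echoing the trade-off discussed above. All signal parameters and thresholds are invented.

```python
import random

random.seed(3)

def simulate(revoke_by_duration, steps=500,
             invoke_at=0.70, revoke_at=0.55, revoke_after=30):
    """Toy adaptive-automation loop; returns fraction of time automated and mean workload."""
    workload, automated, timer = 0.5, False, 0
    automated_steps, total_load = 0, 0.0
    for _ in range(steps):
        demand = 0.5 + 0.35 * random.random()        # incoming task demand
        relief = 0.25 if automated else 0.0          # automation offloads part of the work
        workload = max(0.0, min(1.0, 0.9 * workload + 0.1 * (demand - relief)))

        if not automated and workload > invoke_at:   # invoking strategy: high-workload threshold
            automated, timer = True, 0
        elif automated:
            timer += 1
            if revoke_by_duration and timer >= revoke_after:
                automated = False                    # duration-based revoking
            elif not revoke_by_duration and workload < revoke_at:
                automated = False                    # threshold-based revoking
        automated_steps += automated
        total_load += workload
    return automated_steps / steps, total_load / steps

for by_duration, label in [(True, "duration-based"), (False, "threshold-based")]:
    frac, mean_load = simulate(by_duration)
    print(f"{label:15s} revoking: automated {frac:.0%} of steps, mean workload {mean_load:.2f}")
```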
