11

High-speed performance and power modeling

Sunwoo, Dam 01 October 2010 (has links)
The high cost of designing, testing, and manufacturing semiconductor chips makes simulation essential for predicting performance and power throughout the design cycle of hardware components. However, standard detailed software performance/power simulators are too slow to finish real-life benchmarks within the design cycle, so reduced accuracy is often traded for improved simulator performance. This dissertation explores the FPGA-Accelerated Simulation Technologies (FAST) methodology, which can dramatically improve simulation performance without sacrificing accuracy. It discusses design trade-offs for the functional-model partition of a FAST simulator and describes QUICK, an implementation of a FAST functional model designed to provide fast functional execution as well as the ability to roll back and execute down different paths. QUICK is general enough to be useful beyond FPGA-accelerated simulators and provides complex-ISA (x86) and full-system support. A complete FAST simulator that combines QUICK with an FPGA-based timing model runs at millions of x86 instructions per second, several orders of magnitude faster than software simulators of comparable accuracy, and boots unmodified Windows XP and Linux. Ideally, one could model power at the same speed as performance in a FAST simulator, but traditional software-implemented power estimation techniques are very slow. PrEsto, a new power modeling methodology that automatically generates accurate power models able to fit and operate efficiently within FAST simulators, is proposed. Such models can dramatically improve the accuracy and performance of architectural power estimation. Improving high-accuracy simulator performance will open research directions that could not be explored economically in the past. The combination of simulation performance, accuracy, and power estimation capabilities extends the usefulness of such simulators, enabling the co-design of architecture, hardware implementation, operating systems, and software.
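To put these simulation speeds in perspective, a rough back-of-the-envelope calculation (with assumed, illustrative throughputs rather than figures from the dissertation) shows why moving from thousands to millions of simulated instructions per second matters for full benchmark runs:

```python
# Illustrative turnaround-time arithmetic: how long a fixed instruction budget
# takes at different simulator throughputs. Both throughput values are assumed
# for illustration, not taken from the dissertation.
benchmark_instructions = 1e12          # a long-running, real-life workload

def sim_days(instructions_per_second):
    return benchmark_instructions / instructions_per_second / 86_400

print(f"Detailed software simulator (~100 KIPS): {sim_days(1e5):7.1f} days")
print(f"FPGA-accelerated simulator   (~10 MIPS): {sim_days(1e7):7.1f} days")
```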
12

Embedded Processor Selection/Performance Estimation using FPGA-based Profiling

Obeidat, Fadi 26 July 2010 (has links)
In embedded systems, modeling the performance of candidate processor architectures is very important for enabling the designer to estimate the capability of each architecture against the target application. Given the large number of available embedded processors, the need has grown for an infrastructure that can estimate the performance of a given application on a given processor with minimal time and resources. This dissertation presents a framework that employs the softcore MicroBlaze processor as a reference architecture, on which FPGA-based profiling is implemented to extract the functional statistics that characterize the target application. Linear regression analysis is used to map the functional statistics of the target application to the performance of the candidate processor architecture. Hence, this approach does not require running the target application on each candidate processor; instead, it is run only on the reference processor, which allows many processor architectures to be tested in a very short time.
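A minimal sketch of the regression step described above, assuming invented profile statistics and cycle counts (the real feature set and training data come from the FPGA-based MicroBlaze profiling):

```python
# Hypothetical sketch: fit a linear model that maps functional statistics
# gathered on the reference processor to cycle counts measured on a candidate
# processor, then predict performance for a new application. All numbers are
# invented for illustration.
import numpy as np

# Profile features per training application (e.g. instruction-mix counts):
# [arithmetic ops, loads/stores, branches, multiplies]
profiles = np.array([
    [5.0e6, 2.1e6, 0.90e6, 0.10e6],
    [8.2e6, 3.0e6, 1.40e6, 0.40e6],
    [3.1e6, 1.2e6, 0.50e6, 0.05e6],
    [9.5e6, 4.4e6, 2.00e6, 0.70e6],
    [6.7e6, 2.8e6, 1.10e6, 0.25e6],
    [4.3e6, 1.9e6, 0.70e6, 0.15e6],
])
candidate_cycles = np.array([9.2e6, 14.8e6, 5.4e6, 19.1e6, 11.6e6, 7.8e6])

# Least-squares fit of per-feature cycle costs (plus an intercept).
X = np.hstack([profiles, np.ones((len(profiles), 1))])
coeffs, *_ = np.linalg.lstsq(X, candidate_cycles, rcond=None)

# Estimate the candidate's cycle count for a new application that was profiled
# only on the reference processor -- no run on the candidate is needed.
new_profile = np.array([6.0e6, 2.5e6, 1.1e6, 0.2e6])
estimate = float(np.dot(np.append(new_profile, 1.0), coeffs))
print(f"Estimated cycles on candidate: {estimate:.3e}")
```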
13

Bifacial photovoltaic (PV) system performance modeling utilizing ray tracing

Asgharzadeh Shishavan, Amir 01 August 2019 (has links)
Bifacial photovoltaics (PV) is a promising technology that allows solar cells to absorb light and generate power from both the front and rear sides of the cells. Bifacial PV systems generate more power per area than their monofacial counterparts because of the additional energy generated from the back side. However, modeling the performance of bifacial PV systems is more challenging than for monofacial systems, and industry requires novel, accurate modeling tools to understand and estimate the benefit of this technology. In this dissertation, a rigorous model utilizing a backward ray-tracing software tool called RADIANCE is developed, which allows accurate irradiance modeling of the front and rear sides of bifacial PV systems. The developed ray-tracing model is benchmarked against other major bifacial irradiance modeling tools based on the view-factor model. The accuracy of the irradiance models is tested by comparison with measured irradiance data from sensors installed on various bifacial PV systems. Our results show that the ray-tracing model is more accurate in modeling back-side irradiance than the other irradiance models; however, this higher accuracy comes at the cost of greater computational time and resources. The ray-tracing model is also used to understand the impact of installation parameters such as tilt angle, height above the ground, albedo, and system size on south-facing fixed-tilt bifacial PV systems. Results suggest that bifacial gain has a linear relationship with albedo and an increasing but saturating relationship with module height, whereas the impact of tilt angle is much more complicated and depends on the other installation parameters. It is shown that larger bifacial systems may have up to 20° higher optimum tilt angles than small-scale systems. We also used the ray-tracing model to simulate and compare the performance of two common configurations for bifacial PV systems: optimally tilted facing south/north (BiS/N) and vertically installed facing east/west (BiE/W). Our results suggest that, in the absence of nearby obstructions, BiS/N performs better than BiE/W for most of the studied locations; however, for high-latitude locations such as Alaska, even a small nearby obstruction may yield better results for the vertical east/west system than for the tilted south-facing system. The RADIANCE modeling tool is also used in combination with a custom tandem device model to simulate the performance of tandem bifacial PV systems. Modeling results suggest that while the energy gain from bifacial tandem systems is not high, the range of suitable top-cell bandgaps is greatly broadened, introducing more options for the top-cell absorber of the tandem cell.
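As a simple illustration of the bifacial-gain metric discussed above (the irradiance values and bifaciality factor below are assumptions for illustration; in the dissertation the irradiances come from the RADIANCE ray-tracing model):

```python
# Bifacial gain: the extra yield of a bifacial module relative to an otherwise
# identical monofacial module. All numbers below are illustrative assumptions.
front_irradiance = 950.0   # W/m^2 reaching the front face
rear_irradiance = 140.0    # W/m^2 reaching the rear face
bifaciality = 0.9          # rear-side efficiency relative to the front side

monofacial_equivalent = front_irradiance
bifacial_equivalent = front_irradiance + bifaciality * rear_irradiance

bifacial_gain = (bifacial_equivalent - monofacial_equivalent) / monofacial_equivalent
print(f"Bifacial gain: {bifacial_gain:.1%}")   # about 13% for these numbers
```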
14

Model-Based Testing for Performance Requirements: A Systematic Mapping Study and a Sample Study

Abdeen, Waleed, Chen, Xingru January 2019 (has links)
Model-Based Testing (MBT) is a method that supports automated test design by using a model. Although it has been adopted in industry, it remains an open area for performance requirements. We aim to investigate MBT for performance requirements and to identify a framework that can model such requirements. We conducted a systematic mapping study, followed by a sample study on software requirements specifications; we then introduced the Performance Requirements Verification and Validation (PRVV) model and, finally, carried out another sample study to see how the model works in practice. We found that many models can be used for performance requirements, but their maturity is not yet sufficient. MBT can be implemented in the context of performance and has been gaining momentum in recent years. The PRVV model we developed can verify performance requirements and help generate test cases.
15

The Empirical Testing of Musical Performance Assessment Paradigm

Russell, Brian Eugene 03 May 2010 (has links)
The purpose of this study was to test a hypothesized model of aurally perceived, performer-controlled musical factors that influence assessments of performance quality. Previous research on musical performance constructs, musical achievement, musical expression, and scale construction was examined to identify the factors that influence assessments of performance quality. A total of eight factors were identified: tone, intonation, rhythmic accuracy, articulation, tempo, dynamics, timbre, and interpretation. These factors were categorized as either technique or musical expression factors. Items representing these eight variables were chosen from previous research on scale development. Additional items, along with researcher-created items, were also chosen to represent the variables of technique, musical expression, and overall perceptions of performance quality. The 44 selected items were placed on the Aural Musical Performance Quality (AMPQ) measure and paired with a four-point Likert scale. The reliability of the AMPQ measure was reported at .977. A total of 58 volunteer adjudicators were recruited to evaluate four recordings, one from each instrumental category of interest: brass, woodwind, voice, and string. The resulting performance evaluations (N = 232) were analyzed using statistical regression and path analysis techniques. The results provide empirical support for the existence of the model of aurally perceived, performer-controlled musical factors. Technique demonstrated significant direct effects on overall perceptions of performance quality and on musical expression, and musical expression also demonstrated a significant direct effect on overall perceptions of performance quality. The results of this study are consistent with the hypothesized model of performer-controlled musical factors.
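A minimal sketch of the kind of path analysis described above, using invented simulated scores rather than AMPQ ratings; it estimates the direct effects of technique and musical expression on overall quality and the indirect effect of technique through expression:

```python
# Hypothetical path-analysis sketch for the model
#   technique -> expression -> overall quality, plus technique -> overall.
# The scores are simulated; a real analysis would use the AMPQ ratings.
import numpy as np

rng = np.random.default_rng(0)
n = 200
technique = rng.normal(size=n)
expression = 0.7 * technique + rng.normal(scale=0.5, size=n)
overall = 0.5 * technique + 0.4 * expression + rng.normal(scale=0.3, size=n)

def standardize(x):
    return (x - x.mean()) / x.std()

t, e, o = map(standardize, (technique, expression, overall))

# Path a: expression regressed on technique (simple standardized regression).
a = np.polyfit(t, e, 1)[0]

# Paths b and c': overall quality regressed on expression and technique jointly.
X = np.column_stack([e, t, np.ones(n)])
b, c_direct, _ = np.linalg.lstsq(X, o, rcond=None)[0]

print(f"direct effect of technique:  {c_direct:.2f}")
print(f"direct effect of expression: {b:.2f}")
print(f"indirect effect (a * b):     {a * b:.2f}")
```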
16

Performance modeling of cloud computing centers

Khazaei, Hamzeh 21 February 2013 (has links)
Cloud computing is a general term for system architectures that involve delivering hosted services over the Internet, made possible by significant innovations in virtualization and distributed computing, as well as improved access to high-speed Internet. A cloud service differs from traditional hosting in three principal aspects. First, it is provided on demand, typically by the minute or the hour; second, it is elastic, since users can have as much or as little of a service as they want at any given time; and third, the service is fully managed by the provider -- the user needs little more than a computer and Internet access. Typically, a contract is negotiated between a customer and a service provider; the provider is required to execute the customer's service requests within negotiated quality of service (QoS) requirements for a given price. Due to the dynamic nature of cloud environments, the diversity of user requests, resource virtualization, and the time dependency of load, providing the expected quality of service while avoiding over-provisioning is not a simple task. To this end, a cloud provider must have efficient and accurate techniques for performance evaluation of cloud computing centers. The development of such techniques is the focus of this thesis. The thesis has two parts. In the first part (Chapters 2, 3, and 4), monolithic performance models are developed for cloud computing performance analysis. We begin with Poisson task arrivals, generally distributed service times, and a large number of physical servers, and later extend the model to include finite buffer capacity, batch task arrivals, and virtualized servers with a large number of virtual machines on each physical machine. However, a monolithic model may suffer from intractability and poor scalability due to the large number of parameters. Therefore, in the second part of the thesis (Chapters 5 and 6) we develop and evaluate tractable functional performance sub-models for the different servicing steps in a complex cloud center; the overall solution is obtained by iterating over the individual sub-model solutions. We also extend the proposed interacting analytical sub-models to capture other important aspects of today's cloud centers, including pool management, power consumption, the resource assignment process, and virtual machine deployment. Finally, a performance model suitable for cloud computing centers with heterogeneous requests and resources, based on interacting stochastic models, is proposed and evaluated.
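As a much simpler illustration of the queueing-theoretic style of analysis used in the first part of the thesis (an M/M/m queue with the Erlang-C formula, rather than the generally distributed service times, finite buffers, and batch arrivals actually modeled), consider:

```python
# Illustrative M/M/m calculation (Erlang C) with made-up parameters; the thesis
# develops considerably richer models, this only shows the flavour of analysis.
from math import factorial

def mmm_metrics(arrival_rate, service_rate, servers):
    """Mean response time and probability of queueing for an M/M/m queue."""
    a = arrival_rate / service_rate             # offered load in Erlangs
    rho = a / servers                           # per-server utilization
    assert rho < 1, "queue is unstable"
    # Erlang-C probability that an arriving task must wait.
    summation = sum(a**k / factorial(k) for k in range(servers))
    tail = a**servers / (factorial(servers) * (1 - rho))
    p_wait = tail / (summation + tail)
    mean_wait = p_wait / (servers * service_rate - arrival_rate)
    return mean_wait + 1.0 / service_rate, p_wait

# Example: 900 tasks/s offered to 100 servers, each completing 10 tasks/s.
response, p_wait = mmm_metrics(arrival_rate=900.0, service_rate=10.0, servers=100)
print(f"P(wait) = {p_wait:.3f}, mean response time = {response * 1000:.1f} ms")
```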
18

Performance Modeling of In Situ Rendering

Larsen, Matthew 01 May 2017 (has links)
With the push to exascale, in situ visualization and analysis will play an increasingly important role in high performance computing. Tightly coupling in situ visualization with simulations constrains resources for both, and these constraints force a complex balance of trade-offs. A performance model that provides an a priori answer for the cost of using an in situ approach for a given task would assist in managing the trade-offs between simulation and visualization resources. In this work, we present new statistical performance models, based on algorithmic complexity, that accurately predict the run-time cost of a set of representative rendering algorithms, an essential in situ visualization task. To train and validate the models, we create data-parallel rendering algorithms within a light-weight in situ infrastructure, and we conduct a performance study of an MPI+X rendering infrastructure used in situ with three HPC simulation applications. We then explore feasibility issues using the model for selected in situ rendering questions.
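A sketch of how such an a priori cost model might be consulted before committing to an in situ rendering configuration; the model form (a few complexity-driven terms with fitted coefficients) mirrors the idea of algorithmic-complexity-based performance models, but the terms, coefficients, and inputs below are invented, not the models from the dissertation:

```python
# Hypothetical a priori cost check for in situ rendering: predict the render
# time from problem size and concurrency, then compare against the time budget
# the simulation can spare per step. All coefficients and inputs are invented.
def predicted_render_seconds(cells, pixels, procs,
                             c_local=2.0e-8, c_composite=4.0e-9, c_fixed=0.05):
    local_work = c_local * cells / procs              # per-rank geometry/raster work
    compositing = c_composite * pixels * procs**0.5   # image compositing grows with ranks
    return local_work + compositing + c_fixed

budget_per_step = 0.5   # seconds available for visualization each step
cost = predicted_render_seconds(cells=2e9, pixels=1920 * 1080, procs=4096)
print(f"predicted cost: {cost:.3f} s -> {'fits' if cost <= budget_per_step else 'exceeds'} budget")
```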
19

An automated approach for systems performance and dependability improvement through sensitivity analysis of Markov chains

de Souza Matos Júnior, Rubens 31 January 2011 (has links)
Computing systems are constantly evolving to satisfy growth in demand or new user requirements. The administration of these systems requires decisions capable of providing the highest levels of performance and dependability metrics with minimal changes to the existing configuration. It is common to analyze the performance, reliability, availability, and performability of systems through analytical models, and Markov chains are one of the most widely used mathematical formalisms, allowing the estimation of metrics of interest given a set of input parameters. However, sensitivity analysis, when performed at all, is usually carried out simply by varying the parameters over their ranges of values and repeatedly solving the chosen model. Differential sensitivity analysis allows the modeler to find bottlenecks in a more systematic and efficient way. This work presents an automated approach to sensitivity analysis that aims to guide the improvement of computing systems. The proposed approach can accelerate decision-making regarding the optimization of hardware and software settings, as well as the acquisition and replacement of components. The methodology uses Markov chains as the formal modeling technique, together with the sensitivity analysis of these models, filling some gaps found in the literature on sensitivity analysis. Finally, the sensitivity analysis of selected distributed systems conducted in this work highlights bottlenecks in these systems and provides examples of the accuracy of the proposed methodology, as well as illustrating its applicability.
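A minimal sketch of differential sensitivity analysis on a tiny continuous-time Markov chain: solve pi Q = 0 for the steady state, then solve the differentiated system d(pi)/d(theta) Q = -pi dQ/d(theta) with the normalization sum(d(pi)/d(theta)) = 0. The two-state availability model and its rates are invented for illustration; the work applies the technique to much larger models of distributed systems.

```python
# Differential sensitivity of a CTMC steady state with respect to a rate
# parameter. Example: a two-state availability model (0 = up, 1 = down) with
# failure rate lam and repair rate mu; the rates are illustrative assumptions.
import numpy as np

lam, mu = 0.001, 0.5
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])          # infinitesimal generator
dQ_dmu = np.array([[0.0,  0.0],
                   [1.0, -1.0]])      # derivative of Q with respect to mu

def solve_left_null(generator, rhs):
    """Solve x @ generator = rhs, with the last column replaced by the
    normalization condition encoded in the last entry of rhs."""
    M = generator.copy()
    M[:, -1] = 1.0
    return np.linalg.solve(M.T, rhs)

pi = solve_left_null(Q, np.array([0.0, 1.0]))        # steady state, sum(pi) = 1
rhs = -np.append((pi @ dQ_dmu)[:-1], 0.0)            # normalization: sum(dpi) = 0
dpi_dmu = solve_left_null(Q, rhs)

print(f"availability          = {pi[0]:.6f}")
print(f"d(availability)/d(mu) = {dpi_dmu[0]:.6e}")   # analytic: lam / (lam + mu)**2
```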
20

Autonomous cloud resource provisioning : accounting, allocation, and performance control

Lakew, Ewnetu Bayuh January 2015 (has links)
The emergence of large-scale Internet services, coupled with the evolution of computing technologies such as distributed systems, parallel computing, utility computing, grids, and virtualization, has fueled a movement toward a new resource provisioning paradigm called cloud computing. The main appeal of cloud computing lies in its ability to provide a shared pool of infinitely scalable computing resources for cloud services, which can be quickly provisioned and released on demand with minimal effort. The rapidly growing interest in cloud computing from both the public and industry, together with the expansion in scale and complexity of cloud computing resources and the services hosted on them, has made monitoring, controlling, and provisioning cloud computing resources at runtime a very challenging and complex task. This thesis investigates algorithms, models, and techniques for autonomously monitoring, controlling, and provisioning the various resources required to meet services' performance requirements and for accounting for their resource usage. Quota management mechanisms are essential for controlling distributed shared resources so that services do not exceed their allocated or paid-for budget. Appropriate cloud-wide monitoring and control of quotas must be exercised to avoid over- or under-provisioning of resources. To this end, this thesis presents new distributed algorithms that efficiently manage quotas for services running across distributed nodes. Determining the optimal amount of resources needed to meet services' performance requirements is a key task in cloud computing, but it is extremely challenging due to multi-faceted issues such as the dynamic nature of cloud environments, the need to support heterogeneous services with different performance requirements, the unpredictability of services' workloads, the non-triviality of mapping performance measurements into resources, and resource shortages. Models and techniques are proposed that can predict the optimal amount of resources needed to meet service performance requirements at runtime, irrespective of variations in workloads. Moreover, different service differentiation schemes are proposed for managing temporary resource shortages due to, e.g., flash crowds or hardware failures. In addition, the resources used by services must be accounted for in order to properly bill customers. Thus, monitoring data for running services should be collected and aggregated to maintain a single global state of the system that can be used to generate a single bill for each customer. However, collecting and aggregating such data across geographically distributed locations is challenging, because the management task itself may consume significant computing and network resources unless done with care. A consistency and synchronization mechanism that alleviates this task is proposed.
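A toy sketch of the quota-management setting described above: a coordinator splits a global, paid-for quota among nodes in proportion to their recently observed demand, so that aggregate usage stays within budget. The scheme and the numbers are illustrative assumptions only, not the distributed algorithms proposed in the thesis.

```python
# Toy quota rebalancing: divide a global quota across nodes in proportion to
# recent demand so the sum of local quotas never exceeds the global budget.
# Node names, rates, and the proportional policy are invented for illustration.
def rebalance_quota(global_quota, recent_demand):
    total = sum(recent_demand.values())
    if total == 0:
        share = global_quota / len(recent_demand)
        return {node: share for node in recent_demand}
    return {node: global_quota * demand / total
            for node, demand in recent_demand.items()}

demand = {"node-eu": 420.0, "node-us": 900.0, "node-asia": 180.0}  # observed requests/s
local_quotas = rebalance_quota(global_quota=1000.0, recent_demand=demand)
print(local_quotas)   # per-node shares summing to the global quota
```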
