31.
Methods, software, and benchmarks for modeling long timescale dynamics in solid-state atomic systems. Chill, Samuel T. 17 September 2014
The timescale of chemical reactions in solid-state systems greatly exceeds what may be modeled by direct integration of Newton's equations of motion. This limitation spawned the development of many different methods, such as (adaptive) kinetic Monte Carlo ((A)KMC), (harmonic) transition state theory ((H)TST), parallel replica dynamics (PRD), hyperdynamics (HD), and temperature accelerated dynamics (TAD). The focus of this thesis was to (1) implement many of these methods in a single open-source software package, (2) develop standard benchmarks to compare their accuracy and computational cost, and (3) develop new long timescale methods.

The lack of an open-source package that implements long timescale methods makes it difficult to directly evaluate the quality of different approaches; it also impedes the development of new techniques. To address these concerns we developed Eon, a program that implements several long timescale methods, including PRD, HD, and AKMC, as well as the global optimization algorithms basin hopping and minima hopping. Standard benchmarks were created to evaluate the performance of local geometry optimization, global optimization, and single-ended and double-ended saddle point searches. Using Eon and several other well-known programs, the accuracy and performance of different algorithms were compared. Central to this work is a website where anyone may download the code to repeat any of the numerical experiments.

A new method for long timescale simulations is also introduced: molecular dynamics saddle search adaptive kinetic Monte Carlo (AKMC-MDSS). AKMC-MDSS improves upon AKMC by using short high-temperature MD trajectories to locate the important low-temperature reaction mechanisms of interest. Most importantly, the use of MD enables a proper stopping criterion for the AKMC simulation, ensuring that the relevant reaction mechanisms at the low temperature have been found.

Knowledge of the experimental structure is important to the simulation of any material. Extended x-ray absorption fine structure (EXAFS) is a technique often used to determine local atomic structure. We propose a technique to quantitatively measure the accuracy of the commonly used fitting models; it reveals that these models interpret nanoparticles as being significantly more ordered, and of much shorter bond length, than they really are.
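The MD-driven search loop at the heart of AKMC-MDSS can be sketched roughly as follows. This is a hedged illustration only: `md_saddle_search` is a hypothetical stand-in for a short high-temperature MD escape trajectory, and the consecutive-repeat stopping rule is only a crude proxy for the thesis's confidence-based criterion.

```python
import random

def md_saddle_search(state, temperature, rng):
    """Hypothetical stand-in for a short high-temperature MD trajectory
    that escapes the current state and reports the mechanism it used."""
    return rng.choice(state["mechanisms"])

def akmc_mdss_search(state, t_high=1500.0, max_searches=1000, patience=20):
    """Sketch of the AKMC-MDSS idea: repeat high-temperature MD saddle
    searches until many consecutive searches only rediscover known
    mechanisms, suggesting the relevant mechanisms have been found."""
    rng = random.Random(0)
    found, repeats = set(), 0
    for n in range(1, max_searches + 1):
        mech = md_saddle_search(state, t_high, rng)
        if mech in found:
            repeats += 1
            if repeats >= patience:   # likely nothing new left to find
                return found, n
        else:
            found.add(mech)
            repeats = 0
    return found, max_searches
```

In the real method, the rates of the discovered mechanisms would then be computed (e.g. with HTST) for the low-temperature KMC step; the toy stopping rule above stands in for that statistical confidence estimate.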
32.
Estudos e avaliações de compiladores para arquiteturas reconfiguráveis / A compiler analysis for reconfigurable hardware. Lopes, Joelmir José. 25 May 2007
With the growing capacity of integrated circuits (ICs) and the consequent complexity of applications, especially embedded ones, one requirement has become fundamental in the development of these systems: development tools that are increasingly accessible to engineers, allowing, for example, a program written in the C language to be converted directly into hardware. FPGAs (Field Programmable Gate Arrays), a fundamental element in the characterization of reconfigurable computing, are an example of this growth, both in IC capacity and in tool availability. The goals of this project were to study tools that convert C, C++, or Java into reconfigurable hardware; to study benchmarks to be executed with these tools in order to measure their performance; and to master the concepts involved in converting high-level languages into reconfigurable hardware. The Xilinx XUP V2P platform was used in the project.
33.
Tâches de raisonnement en logiques hybrides / Reasoning Tasks for Hybrid Logics. Hoffmann, Guillaume. 13 December 2010
Modal logics are logics for representing and inferring knowledge. Hybrid logic is an extension of basic modal logic that adds nominals, which make it possible to refer to a single individual or world of the model. In this thesis we present several tableau algorithms for expressive hybrid logics. We also present an implementation of these calculi, and we describe the correctness and performance tests we carried out, together with the tools that enable them. In addition, we study in detail a particular family of logics related to hybrid logics: logics with counting operators. We survey previous results and study the complexity and decidability of some of these languages.
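The tableau calculi of the thesis handle nominals and other hybrid operators; as a rough illustration of the underlying machinery only, here is a minimal prefixed tableau satisfiability check for basic modal logic K (nominals and the thesis's actual calculi are not reproduced):

```python
def nnf(f):
    """Negation normal form; formulas are 'p' (atom) or tuples:
    ('not', f), ('and', f, g), ('or', f, g), ('box', f), ('dia', f)."""
    if isinstance(f, str):
        return f
    if f[0] == 'not':
        g = f[1]
        if isinstance(g, str):
            return ('not', g)
        if g[0] == 'not':
            return nnf(g[1])
        if g[0] == 'and':
            return ('or', nnf(('not', g[1])), nnf(('not', g[2])))
        if g[0] == 'or':
            return ('and', nnf(('not', g[1])), nnf(('not', g[2])))
        if g[0] == 'box':
            return ('dia', nnf(('not', g[1])))
        return ('box', nnf(('not', g[1])))            # g is a diamond
    return (f[0],) + tuple(nnf(x) for x in f[1:])

def expand(todo, lits, rels, boxes, nw):
    """Prefixed tableau: `todo` holds (world, formula) pairs, `rels` the
    accessibility pairs; a branch closes on a literal clash."""
    if not todo:
        return True
    (w, f), rest = todo[-1], todo[:-1]
    if isinstance(f, str) or f[0] == 'not':
        clash = ('not', f) if isinstance(f, str) else f[1]
        if (w, clash) in lits:
            return False
        return expand(rest, lits | {(w, f)}, rels, boxes, nw)
    if f[0] == 'and':
        return expand(rest + [(w, f[1]), (w, f[2])], lits, rels, boxes, nw)
    if f[0] == 'or':                                  # branch on disjunction
        return (expand(rest + [(w, f[1])], lits, rels, boxes, nw)
                or expand(rest + [(w, f[2])], lits, rels, boxes, nw))
    if f[0] == 'dia':                                 # create a successor world
        new = [(nw, f[1])] + [(nw, g) for (u, g) in boxes if u == w]
        return expand(rest + new, lits, rels | {(w, nw)}, boxes, nw + 1)
    # box: propagate to existing successors, remember for future ones
    new = [(v, f[1]) for (u, v) in rels if u == w]
    return expand(rest + new, lits, rels, boxes | {(w, f[1])}, nw)

def satisfiable(f):
    return expand([(0, nnf(f))], set(), set(), set(), 1)
```

Hybrid tableaux extend this picture with rules for nominals (equating prefixes that carry the same nominal) and satisfaction operators, which is where the calculi of the thesis depart from this sketch.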
34.
Creation of a whole-core PWR benchmark for the analysis and validation of neutronics codes. Hon, Ryan Paul. 3 April 2013
This work presents a whole-core benchmark problem based on a 2-loop pressurized water reactor with both UO₂ and MOX fuel assemblies. The specification includes heterogeneity at both the assembly and core levels. The geometry and material compositions are fully described, and multi-group cross section libraries are provided in 2-, 4-, and 8-group formats. Simplifications made to the benchmark specification include a Cartesian boundary, to facilitate the use of transport codes that may have trouble with cylindrical boundaries, and control rod homogenization, to reduce the geometric complexity of the problem. These modifications were carefully chosen to preserve the physics of the problem, and a justification of each is given. Detailed Monte Carlo reference solutions, including the core eigenvalue, assembly-averaged fission densities, and selected fuel pin fission densities, are presented for benchmarking diffusion and transport methods. Three different core configurations are presented: all-rods-out, all-rods-in, and some-rods-in.
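Reference solutions of this kind target the k-eigenvalue problem M φ = (1/k) F φ, where M collects losses and F fission production. As a hedged illustration (the two-group, infinite-medium cross sections below are made-up numbers, not the benchmark's libraries), the standard power iteration looks like:

```python
import numpy as np

# Hypothetical two-group, infinite-medium macroscopic cross sections
# (cm^-1); illustrative values only, not the benchmark's 2-group library.
nu_sig_f = np.array([0.008, 0.135])   # nu * Sigma_f, fast / thermal
sig_a    = np.array([0.010, 0.100])   # absorption
sig_s12  = 0.020                      # fast -> thermal down-scatter

# k-eigenvalue form M phi = (1/k) F phi (no leakage in an infinite medium)
M = np.array([[sig_a[0] + sig_s12, 0.0],
              [-sig_s12,           sig_a[1]]])   # losses + scatter transfer
F = np.array([[nu_sig_f[0], nu_sig_f[1]],        # all fission neutrons born fast
              [0.0,         0.0]])

A = np.linalg.solve(M, F)     # M^{-1} F; k is its dominant eigenvalue
phi = np.ones(2)
for _ in range(50):           # power iteration with flux normalization
    psi = A @ phi
    k = (psi @ phi) / (phi @ phi)   # Rayleigh-quotient estimate of k
    phi = psi / np.linalg.norm(psi)
```

For these particular numbers the analytic infinite-medium result is k = νΣf1/Σr1 + νΣf2 Σs12/(Σr1 Σa2) = 7/6, which the iteration reproduces; a deterministic diffusion or transport code applies the same outer iteration with spatial operators in M.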
35.
How reliable are earnings? A study about real activities manipulation and accrual-based management in Europe. Bjurman, Albin; Weihagen, Erik. January 2013
Background & Subject discussion: Financial reporting and earnings affect stakeholders' decisions and are a vital component of a firm's information disclosure. Management possesses considerable influence over financial reports. Earnings consist of a cash-flow component and an accrual component, and can be affected by managers' judgments and decisions through either accrual-based earnings management or real activities manipulation (RAM). Earnings management (EM) affects the relevance and reliability of financial reporting and is widely researched. Europe is consolidating, and accounting and audit standards are harmonizing, yet RAM has not previously been documented in Europe. Increased attention to and regulation of earnings management are inducing more creative methods of altering earnings, such as stock repurchases.

Purpose: The main purpose of this study is to investigate whether real activities manipulation can be observed in Europe, and to what extent relative to accrual-based activities, as a means of avoiding reporting small losses. An underlying purpose is to study different methods of RAM, including some newer approaches to detect hypothesized RAM through stock repurchases. An additional purpose is to evaluate the different detection methods employed, to clarify their effectiveness. The final purpose is to consider the possible effects of EM on the reliability and relevance of financial reporting.

Conclusion: The results show that earnings management is carried out through real activities manipulation. Stock repurchases, decreased discretionary expenses, and production costs all indicate earnings management to avoid reporting earnings below a specific benchmark. The results call into question the reliability and relevance of reported earnings.
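A common way to detect RAM through overproduction, in the spirit of Roychowdhury's (2006) model (simplified here by dropping the scaled-intercept term), is to regress production costs on sales terms and treat the residual as abnormal production cost. The data below are simulated purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated firm-year variables, each scaled by lagged total assets
sales       = rng.uniform(0.5, 2.0, n)    # S_t / A_{t-1}
d_sales     = rng.normal(0.0, 0.2, n)     # (S_t - S_{t-1}) / A_{t-1}
d_sales_lag = rng.normal(0.0, 0.2, n)     # (S_{t-1} - S_{t-2}) / A_{t-1}
noise       = rng.normal(0.0, 0.05, n)
prod = 0.02 + 0.75 * sales + 0.10 * d_sales + 0.05 * d_sales_lag + noise

# Normal production costs: PROD = a0 + b1*S + b2*dS + b3*dS_lag + e
X = np.column_stack([np.ones(n), sales, d_sales, d_sales_lag])
coef, *_ = np.linalg.lstsq(X, prod, rcond=None)

# Residuals proxy for abnormal (possibly RAM-driven) production costs
abnormal = prod - X @ coef
```

In an empirical application, firm-years with large positive `abnormal` values near an earnings benchmark would be the candidates for overproduction-based manipulation.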
36.
Performance differences in encryption software versus storage devices. Olsson, Robin. January 2012
This thesis looked at three encryption applications that all use the symmetric encryption algorithms AES, Twofish, and Serpent but differ in their implementation, and at how this difference manifests itself in performance benchmarks depending on the type of storage device on which they are used. Three mechanical hard drives and one solid state drive were used in the performance benchmarks, which measured a variety of disk operations across the three encryption applications and their algorithms. From the benchmarks, performance charts were produced showing that DiskCryptor had the best performance on a solid state drive and that TrueCrypt had the best performance on mechanical hard drives. By choosing DiskCryptor as the encryption application on a solid state drive, a performance increase of 38.9% compared to BestCrypt and 28.4% compared to TrueCrypt was achieved when using the AES algorithm. Twofish was also shown to be the best-performing algorithm overall. The primary conclusion that can be drawn from this thesis is that it is important to choose the right encryption application for the type of storage device used in order to get the best possible performance.
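The kind of measurement behind such benchmarks can be sketched as follows. This is only a minimal illustration of sequential throughput; real disk benchmarks (and those used in the thesis) also cover random access, different block sizes, and queue depths, and would run against the encrypted volume:

```python
import os
import tempfile
import time

def throughput_mb_s(path, size_mb=64, block_kb=1024):
    """Rough sequential write/read throughput (MB/s) for the device
    holding `path`; reads may be served from the OS cache."""
    block = os.urandom(block_kb * 1024)
    blocks = size_mb * 1024 // block_kb
    fname = os.path.join(path, "bench.tmp")
    t0 = time.perf_counter()
    with open(fname, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())       # force data to the device, not the cache
    write_mbs = size_mb / (time.perf_counter() - t0)
    t0 = time.perf_counter()
    with open(fname, "rb") as f:
        while f.read(block_kb * 1024):
            pass
    read_mbs = size_mb / (time.perf_counter() - t0)
    os.remove(fname)
    return write_mbs, read_mbs
```

Running the same measurement with each encryption application mounted on the same device is what makes the comparison across implementations meaningful.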
37.
Seismic Interstory Drift Demands in Steel Friction Damped Braced Buildings. Peternell Altamira, Luis E. 16 January 2010
In the last 35 years, several researchers have proposed, developed, and tested different friction devices for seismic control of structures. Their research has demonstrated that such devices are simple, economical, practical, durable, and very effective. However, research on passive friction dampers has, except in a few instances, not been given appropriate attention lately. This has caused some results of older studies to become out of date, to lose their validity in the context of today's design philosophies, or to fall short of the expectations of this century's structural engineering. An analytical study of the behavior of friction devices, and of the effect they have on the structures into which they are incorporated, has been undertaken to address the new design trends, codes, evaluation criteria, and needs of today's society.

The present study consists of around 7,000 structural analyses that are used to show the excellent seismic performance and economic advantages of Friction Damped Braced Frames. It serves, at the same time, to improve our understanding of their dynamic behavior. Finally, this thesis also sets the basis for future research on the application of this type of seismic energy dissipation system.
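The basic mechanics can be illustrated with a single-degree-of-freedom sketch: an ideal Coulomb friction damper adds a constant force opposing the sliding velocity, dissipating energy every cycle. All parameter values below are made up for illustration, and the sign-function friction model is a simplification of real device hysteresis:

```python
import math

def peak_late_drift(slip_force, m=1000.0, k=400_000.0, zeta=0.02,
                    x0=0.05, dt=0.001, steps=5000):
    """Free vibration of an SDOF story (mass kg, stiffness N/m) with an
    ideal Coulomb friction damper of slip force `slip_force` (N).
    Returns the peak drift in the second half of the response, a crude
    measure of how quickly the device dissipates energy."""
    c = 2.0 * zeta * math.sqrt(k * m)   # viscous damping of the bare frame
    x, v = x0, 0.0                      # released from an initial drift
    peak = 0.0
    for i in range(steps):
        friction = slip_force * (1.0 if v > 0 else -1.0 if v < 0 else 0.0)
        a = -(c * v + k * x + friction) / m
        v += a * dt                      # semi-implicit Euler step
        x += v * dt
        if i >= steps // 2:
            peak = max(peak, abs(x))
    return peak
```

Comparing `peak_late_drift(5000.0)` with `peak_late_drift(0.0)` shows the friction device shrinking the late-time drift relative to the lightly damped bare frame, which is the qualitative effect the thesis quantifies with full building models and earthquake records.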
38.
The construction and role of non-covalent benchmarks in computational chemistry. Marshall, Michael S. 2 July 2012
This thesis focuses on the construction and role of benchmark-quality computations in the area of non-covalent interactions. We provide a detailed error analysis of the focal-point schemes commonly used in benchmark-quality computations, as well as error and speedup analyses of commonly used approximations to these methods. An analysis of basis set effects on higher-order corrections to MP2/CBS has been carried out, providing the community with error bounds for future benchmarks. We demonstrate how these high-level computations can lead to a better understanding of non-bonded interactions in chemistry, as well as provide high-quality reference data against which existing methods can be refit to increase their overall accuracy.
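A focal-point composite of the kind analyzed here typically combines a complete-basis-set (CBS) extrapolation of the MP2 correlation energy with a higher-order correction from a smaller basis. The sketch below uses the standard two-point X⁻³ extrapolation with made-up energies; the numbers are not taken from the thesis:

```python
def cbs_extrapolate(e_corr_x, e_corr_y, x, y):
    """Two-point Helgaker-style extrapolation of the correlation energy,
    assuming E_X = E_CBS + A * X**(-3) for cardinal numbers x < y
    (e.g. x=3 for a triple-zeta basis, y=4 for quadruple-zeta)."""
    return (y**3 * e_corr_y - x**3 * e_corr_x) / (y**3 - x**3)

# Focal-point style composite with illustrative energies (hartree)
mp2_tz, mp2_qz = -0.512345, -0.518765    # MP2 correlation energy, TZ / QZ
delta_ccsdt    = -0.002100               # [CCSD(T) - MP2] in a small basis

mp2_cbs  = cbs_extrapolate(mp2_tz, mp2_qz, 3, 4)
estimate = mp2_cbs + delta_ccsdt         # approximates CCSD(T)/CBS
```

The basis-set-effects analysis in the thesis concerns exactly the assumption this makes: that the higher-order correction computed in a small basis transfers unchanged to the CBS limit.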
39.
Automatic generation of synthetic workloads for multicore systems. Ganesan, Karthik. 11 July 2012
When designing a computer system, benchmark programs are used with cycle-accurate performance/power simulators and HDL-level simulators to evaluate novel architectural enhancements, perform design space exploration, understand the worst-case power characteristics of various designs, and find performance bottlenecks. This research effort is directed towards automatically generating synthetic benchmarks to tackle three design challenges. First, for most simulation-related purposes, full runs of modern real-world parallel applications like the PARSEC and SPLASH suites cannot be used, as they take machine-weeks of time on cycle-accurate and HDL-level simulators, incurring a prohibitively large time cost. Second, some of these real-world applications are intellectual property and cannot be shared with processor vendors for design studies. Third, and most significant in the design stage, is the complexity involved in fixing the maximum power consumption of a multicore design, called the Thermal Design Power (TDP). In an effort to fix this maximum power consumption at the most optimal point, designers typically hand-craft code snippets called power viruses, but manually writing such maximum-power-consuming code is very tedious.

All of these challenges have led to the resurrection of synthetic benchmarks in the recent past as a promising solution. During the design stage of a multicore system, the availability of a framework to automatically generate system-level synthetic benchmarks will greatly simplify the design process and result in more confident design decisions. The key idea behind such an adaptable benchmark synthesis framework is to identify the key characteristics of real-world parallel applications that affect performance and power consumption, and to create synthetic executable programs by varying the values of these characteristics. First, with such a framework, one can generate miniaturized synthetic clones of large target (current and futuristic) parallel applications, enabling an architect to use them with slow low-level simulation models (e.g., RTL models in VHDL/Verilog) and helping tailor designs to the targeted applications. These synthetic benchmark clones can be distributed to architects and designers even when the original applications are intellectual property and not publicly available. Lastly, such a framework can be used to automatically create maximum-power-consuming code snippets to help in fixing the TDP, heat sinks, cooling system, and other power-related features of the system.

The workload cloning framework built using the proposed synthetic benchmark generation methodology is shown to be superior to existing cloning methodologies for single-core systems by generating miniaturized clones for CPU2006 and ImplantBench workloads with an average error of only 2.9% in performance for up to five orders of magnitude of simulation speedup. The correlation coefficients predicting sensitivity to design changes are 0.95 and 0.98 for performance and power consumption, respectively. The framework is further evaluated by cloning parallel applications implemented with pthreads and OpenMP in the PARSEC benchmark suite; the average error in predicting performance is 4.87%, that of power consumption is 2.73%, and the correlation coefficient predicting sensitivity to design changes is 0.92 for performance. The efficacy of the framework for power virus generation is evaluated on the SPARC, Alpha, and x86 ISAs using full-system simulators and real hardware. The results show that the power viruses generated for single-core systems consume 14-41% more power than MPrime on the SPARC ISA. Similarly, the power viruses generated for multicore systems consume 45-98%, 40-89%, and 41-56% more power than PARSEC workloads, multiple copies of MPrime, and multithreaded SPECjbb, respectively.
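The core cloning idea, matching a measured workload profile with a small synthetic kernel, can be sketched in miniature. This toy matches only the dynamic operation mix; the actual framework matches many more metrics (ILP, memory strides, branch predictability, thread sharing patterns), and the operation implementations below are invented for illustration:

```python
import random

def synthesize_mix(profile, n_ops=10_000, seed=0):
    """Toy sketch of workload cloning: given a measured dynamic operation
    mix (fractions of ALU / memory / branch ops), run a synthetic kernel
    whose executed mix matches the profile, and report the achieved mix."""
    rng = random.Random(seed)
    kinds = list(profile)
    ops = rng.choices(kinds, weights=[profile[k] for k in kinds], k=n_ops)
    data, acc, idx = [0] * 1024, 1, 0
    counts = dict.fromkeys(kinds, 0)
    for op in ops:
        counts[op] += 1
        if op == "alu":
            acc = (acc * 33 + 7) % 1_000_003       # integer arithmetic
        elif op == "mem":
            idx = (idx + 97) % len(data)           # strided memory access
            data[idx] += acc
        else:                                      # data-dependent branch
            acc += 1 if acc % 2 else -1
    return {k: c / n_ops for k, c in counts.items()}
```

A real generator would emit this kernel as compilable C or assembly rather than interpret it, so the clone can run on simulators and hardware in place of the proprietary original.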
40.
Product Market Competition and Real Earnings Management to Meet or Beat Earnings Targets. Young, Alex. January 2015
Earnings management could be motivated by either managerial opportunism or efficient contracting. To discriminate between these motivations, I use a measure of product market competition that analytical research predicts will discipline managers and better align their interests with those of shareholders. Thus, if earnings management reflects managerial opportunism, then an increase in competition will decrease earnings management; and if it reflects efficient contracting, then an increase in competition will increase earnings management. Consistent with earnings management indicating managerial opportunism, I show that an increase in competition decreases real earnings management in the form of overproduction to avoid reporting negative earnings or a negative change in earnings.
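The "avoid reporting negative earnings" benchmark is commonly diagnosed through a discontinuity at zero in the earnings distribution (a Burgstahler-Dichev style test, related to but distinct from this dissertation's overproduction measure): a sparse bin just below zero next to a crowded bin just above it. The data below are simulated for illustration:

```python
import random

random.seed(1)
# Simulated scaled earnings; "managed" firms nudge small losses above zero
true_earnings = [random.gauss(0.01, 0.05) for _ in range(20_000)]
managed = [e + 0.006 if -0.005 <= e < 0.0 else e for e in true_earnings]

WIDTH = 0.005
def bin_count(xs, lo):
    """Count observations in the histogram bin [lo, lo + WIDTH)."""
    return sum(lo <= x < lo + WIDTH for x in xs)

# A kink at zero is the classic sign of managing earnings to meet the
# zero-earnings benchmark: the bin just below empties into the bin above.
just_below = bin_count(managed, -WIDTH)
just_above = bin_count(managed, 0.0)
```

In the simulation every small loss is shifted across zero, so the bin just below zero empties entirely; in real data the effect is partial, and standardized differences between adjacent bins are used to test its significance.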