51

A collection of benchmark examples for the numerical solution of algebraic Riccati equations II: Discrete-time case

Benner, P., Laub, A. J., Mehrmann, V. 30 October 1998 (has links) (PDF)
This is the second part of a collection of benchmark examples for the numerical solution of algebraic Riccati equations. Having presented examples for the continuous-time case in Part I, we turn in this paper to discrete-time algebraic Riccati equations. This collection may serve for testing purposes in the construction of new numerical methods, but may also be used as a reference set for the comparison of methods.
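The discrete-time algebraic Riccati equations (DAREs) collected in such a benchmark set can be solved with standard numerical libraries. As a minimal, hedged sketch (the system matrices below are illustrative, not taken from the collection itself), SciPy's DARE solver can be exercised and its residual checked — exactly the kind of test a reference set supports:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical 2x2 discrete-time system x_{k+1} = A x_k + B u_k with
# state weight Q and input weight R. Illustrative matrices only.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Solve the DARE: A'XA - X - A'XB (R + B'XB)^{-1} B'XA + Q = 0
X = solve_discrete_are(A, B, Q, R)

# Residual check against the defining equation.
res = (A.T @ X @ A - X
       - A.T @ X @ B @ np.linalg.inv(R + B.T @ X @ B) @ B.T @ X @ A + Q)
print("DARE residual norm:", np.linalg.norm(res))
```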
52

Issues in the Use of Benchmarking by Church Leaders

Keyt, John C. 17 November 2000 (has links)
With church attendance falling, church leaders are searching for methods to reverse that trend. Benchmarking the practices of successful churches offers one such avenue. This article points to issues of internal and external fit which should be considered before those benchmarked best practices are implemented by church leaders.
53

Using Benchmark Assessment Scores to Predict Scores on the Mississippi Biology I Subject Area Test, Second Edition

Smith, Cheryl Lynn 09 May 2015 (has links)
Schools across Mississippi are challenged to demonstrate educational growth. Since the enactment of NCLB, Mississippi has been grappling with a decrease in the graduation rate among its public high school students. Despite extensive preparation, spending, and professional development for teachers, many students are not succeeding on required subject area tests. The purpose of this study was to determine if benchmark assessment scores could be used as a predictor of state assessment scores. The study was guided by 3 research questions and utilized a simple linear regression correlational research design to develop an equation for determining whether ELS Biology I Benchmark Assessment scores were a reliable predictor of Mississippi Biology I SATP2 scores. Questions 1, 2, and 3 sought to determine the accuracy of the fall, winter, and spring ELS Biology I Benchmark Assessment scores, respectively, in predicting the Mississippi Biology I SATP2 scores of high school students. Data analysis indicated a statistically significant prediction model for Mississippi Biology I SATP2 scores for each of the benchmark assessments. Although the fall administration was statistically significant, it was not very accurate in predicting SATP2 scores. It was determined that the ELS Biology I Benchmark Assessment could accurately predict scores on the Mississippi Biology I SATP2 for high school students. The study concludes with recommendations for future research, especially in the area of science.
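As a hedged illustration of the simple linear regression design described above (with entirely hypothetical score data, not the study's), a prediction equation can be fit as follows:

```python
import numpy as np
from scipy import stats

# Hypothetical paired scores: benchmark assessment (predictor) and
# state subject-area test (outcome). Illustrative data only.
benchmark = np.array([310, 325, 340, 355, 360, 372, 388, 395, 410, 425])
satp2     = np.array([620, 631, 640, 648, 652, 660, 671, 676, 684, 695])

# Simple linear regression: satp2 = intercept + slope * benchmark
result = stats.linregress(benchmark, satp2)
print(f"prediction equation: SATP2 = {result.intercept:.1f} "
      f"+ {result.slope:.3f} * benchmark")
print(f"r^2 = {result.rvalue**2:.3f}, p = {result.pvalue:.4g}")
```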
54

Benchmarking Virtual Network Mapping Algorithms

Zhu, Jin 01 January 2012 (has links) (PDF)
The network architecture of the current Internet cannot accommodate the deployment of novel network-layer protocols. To address this fundamental problem, network virtualization has been proposed, where a single physical infrastructure is shared among different virtual network slices. A key operational problem in network virtualization is the need to allocate physical node and link resources to virtual network requests. While several different virtual network mapping algorithms have been proposed in the literature, it is difficult to compare their performance due to differences in the evaluation methods used. In this thesis work, we propose VNMBench, a virtual network mapping benchmark that provides a set of standardized inputs and evaluation metrics. Using this benchmark, different algorithms can be evaluated and compared objectively. The benchmark is split into two parts, a static model and a dynamic model, which exercise fixed and changing mapping processes, respectively. We present such an evaluation using three existing virtual network mapping algorithms. We compare the evaluation results of our synthetic benchmark with those of actual Emulab requests to show that VNMBench is sufficiently realistic. We believe this work provides an important foundation for quantitatively evaluating the performance of a critical component in the operation of virtual networks.
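Virtual network mapping assigns each virtual node to a physical node with sufficient capacity and each virtual link to a physical path. As a hedged, greatly simplified sketch of the greedy node-mapping stage that many such algorithms share (not VNMBench's reference implementation; node names and CPU demands are hypothetical), consider:

```python
import networkx as nx

def greedy_node_map(physical: nx.Graph, virtual: nx.Graph):
    """Map each virtual node to the physical node with the most
    remaining CPU capacity; return None if the request cannot be
    placed. A toy illustration of one stage of VN embedding."""
    remaining = {n: physical.nodes[n]["cpu"] for n in physical}
    mapping = {}
    # Place the most demanding virtual nodes first.
    for v in sorted(virtual, key=lambda n: virtual.nodes[n]["cpu"], reverse=True):
        demand = virtual.nodes[v]["cpu"]
        # Candidates: unused physical nodes with enough spare CPU.
        candidates = [p for p in remaining
                      if p not in mapping.values() and remaining[p] >= demand]
        if not candidates:
            return None  # request rejected
        best = max(candidates, key=lambda p: remaining[p])
        mapping[v] = best
        remaining[best] -= demand
    return mapping

# Tiny example: 3-node physical substrate, 2-node virtual request.
phys = nx.path_graph(3)
nx.set_node_attributes(phys, {0: 10, 1: 6, 2: 8}, "cpu")
virt = nx.path_graph(2)
nx.set_node_attributes(virt, {0: 5, 1: 4}, "cpu")
print(greedy_node_map(phys, virt))
```

The link-mapping stage (routing each virtual link over a capacity-feasible physical path) is omitted here; it is the part where the proposed algorithms differ most and where a standardized benchmark input matters.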
55

Experimental Adsorption and Reaction Studies on Transition Metal Oxides Compared to DFT Simulations

Chen, Han 11 June 2021 (has links)
A temperature-programmed desorption (TPD) study of CO and NH₃ adsorption on MnO(100) with complementary density functional theory (DFT) simulations was conducted. TPD reveals a primary CO desorption signal at 130 K from MnO(100) in the low-coverage limit, giving an adsorption energy of -35.6 ± 2.1 kJ/mol on terrace sites. PBE+U gives a more reasonable structural result than PBE, and the adsorption energy obtained with PBE+U and the DFT-D3 Becke-Johnson dispersion correction is in excellent agreement with the experimentally obtained ΔE_ads for adsorption at Mn²⁺ terrace sites. Analysis of NH₃-TPD traces revealed that the adsorption energy on MnO(100) is coverage-dependent. In the low-coverage limit, the adsorption energy on terraces is -58.7 ± 1.0 kJ/mol. Dosing leads to the formation of a transient NH₃ multilayer that appears in TPD at around 110 K. For a terrace site, PBE+U predicts a more realistic surface adsorbate geometry than PBE does, with PBE+U combined with the Tkatchenko-Scheffler method with iterative Hirshfeld partitioning (TSHP) providing the best prediction. DFT simulations of the dehydrogenation elementary step of the ethyl and methyl fragments on α-Cr₂O₃(101̅2) were also conducted to complement previous TPD studies of these systems. On the nearly stoichiometric α-Cr₂O₃(101̅2) surface, CD₃ undergoes dehydrogenation to produce CD₂=CD₂ and CD₄. Previous TPD traces suggest that the α-hydrogen (α-H) elimination of methyl groups on α-Cr₂O₃(101̅2) is the rate-limiting step, with an activation barrier of 135 ± 2 kJ/mol. DFT simulations showed that PBE gives a reasonable prediction of the adsorption sites for CH₃ fragments, in accordance with XPS spectra, while PBE+U does not. Both PBE and PBE+U failed to predict the correct adsorption sites for CH₂=. When the simulation is set up in accordance with the experimentally observed adsorption sites for the carbon species, PBE gives a very accurate prediction of the reaction barrier when an adjacent Cl adatom is present, while PBE+U fails spectacularly. When the simulation is set up in accordance with the DFT-predicted adsorption sites, PBE is still able to predict the reaction barrier accurately (<1% to 8.7% error) while PBE+U is less accurate. DFT was also used to complement the previous study of the β-H elimination of an ethyl group on the α-Cr₂O₃(101̅2) surface. The simulations show that, absent surface Cl adatoms, PBE predicts an activation barrier of 92.6 kJ/mol, under-predicting the experimental barrier by 28.7%, while PBE+U predicts a barrier of 27.0 kJ/mol, under-predicting it by 79.2%. The addition of chlorine on the adjacent cation marginally improved the PBE+U barrier prediction while marginally worsening the PBE prediction. Grant information: Financial support provided by the U.S. Department of Energy through grant DE-FG02-97ER14751. / Doctor of Philosophy / Density functional theory (DFT), a computational approach to chemistry, has become increasingly popular because it is less computationally expensive than other traditional computational approaches. One major shortcoming of DFT is its inability to describe the electronic interactions within transition metal oxides, where the electronic configuration of one cation is intimately linked to those of adjacent cations. To address this, DFT+U, a variant of DFT, has been developed to better account for these special electronic interactions. However, not enough experimental comparisons have been established to verify the accuracy of DFT and DFT+U.
Our lab focuses on providing high-quality experimental benchmarks against which the DFT community can readily compare. To establish these experimental benchmarks, we use a technique called temperature-programmed desorption (TPD), which measures the rate at which gas molecules leave a sample surface populated with a predetermined amount of adsorbate as the surface temperature is raised at a constant but slow ramp rate. Through analysis of the results, an adsorption energy can be obtained for a desorption process, or an activation barrier if the desorption is the result of a surface reaction. Some simple calculations involving PBE, a popular functional in the DFT community, and its variant PBE+U were conducted for comparison purposes. The transition metal oxide surfaces chosen in this study are MnO(100) and α-Cr₂O₃(101̅2), because both possess these special electronic interactions between their cations. For the adsorption studies, we determined the adsorption energies of carbon monoxide (CO) and ammonia (NH₃) on the MnO(100) single-crystal surface. For CO, the TPD study revealed that CO adsorbs weakly on the surface, with no dissociation detected. PBE predicts an unreasonable surface adsorption geometry while PBE+U predicts a reasonable one. When coupled with a dispersion correction method named DFT-D3 Becke-Johnson, PBE+U predicts a very accurate adsorption energy of CO on MnO(100). TPD shows that NH₃ adsorbs more strongly on MnO(100), also without dissociation. Similarly, PBE+U predicted a more reasonable adsorption geometry than PBE did. Coupled with a dispersion correction named the Tkatchenko-Scheffler method with iterative Hirshfeld partitioning (TSHP), PBE+U provides an accurate prediction of the adsorption energy. To compare against previous TPD-based experimental work, the simple decomposition reactions of an ethyl group and a methyl group on the α-Cr₂O₃(101̅2) surface were also studied using DFT. Overall, PBE predicted the experimentally observed activation barriers better than PBE+U did.
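As a hedged illustration of how an adsorption energy can be extracted from a TPD peak temperature, the following sketch applies a standard first-order Redhead analysis (a textbook approximation, not this dissertation's full coverage-dependent treatment; the heating rate and pre-exponential factor below are assumed values):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def redhead_energy(T_peak, beta, nu=1e13):
    """First-order Redhead estimate of the desorption energy (J/mol)
    from the TPD peak temperature T_peak (K), heating rate beta (K/s),
    and an assumed pre-exponential factor nu (1/s):
    E = R * T_peak * (ln(nu * T_peak / beta) - 3.46)."""
    return R * T_peak * (np.log(nu * T_peak / beta) - 3.46)

# Illustrative numbers: a CO peak near 130 K (as reported in the
# abstract) with a hypothetical 1 K/s ramp and nu = 1e13 1/s.
E = redhead_energy(T_peak=130.0, beta=1.0, nu=1e13)
print(f"Redhead desorption energy ~ {E / 1000:.1f} kJ/mol")
```

With these assumed parameters the estimate lands near 34 kJ/mol, the same order as the -35.6 ± 2.1 kJ/mol terrace-site value quoted above; the actual analysis in the dissertation is more careful than this single-point formula.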
56

Performance Measurement and Analysis of Transactional Web Archiving

Maharshi, Shivam 19 July 2017 (has links)
Web archiving is necessary to retain the history of the World Wide Web and to study its evolution, and it is important for the cultural heritage community. Some organizations are legally obligated to capture and archive Web content. The advent of transactional Web archiving makes the archiving process more efficient, thereby aiding organizations in archiving their Web content. This study measures and analyzes the performance of transactional Web archiving systems. To conduct a detailed analysis, we construct a meaningful design space defined by the system specifications that determine the performance of these systems. SiteStory, a state-of-the-art transactional Web archiving system, and local archiving, an alternative archiving technique, are used in this research. We experimentally evaluate the performance of these systems using the Greek version of Wikipedia deployed on dedicated hardware on a private network. Our benchmarking results show that the local archiving technique uses a Web server's resources more efficiently than SiteStory for one data point in our design space. Better performance than SiteStory in such scenarios makes our archiving solution favorable for transactional archiving. We also show that SiteStory does not impose any significant performance overhead on the Web server for the rest of the data points in our design space. / Master of Science / Web archiving is the process of preserving the information available on the World Wide Web in archives. This process gives historians and cultural heritage scholars access to data that allows them to understand the evolution of the Internet and its usage. Additionally, Web archiving is essential for organizations that are obligated to keep records of online resource access for their customers. Transactional Web archiving is a technique in which the information available on the Web is archived by capturing each transaction between a user and the Web server processing the user's request. It provides a more complete and accurate history of a Web server than traditional Web archiving models. However, in some scenarios transactional Web archiving solutions may impose performance overhead on the Web server being archived. In this thesis, we conduct a detailed performance analysis of SiteStory, a state-of-the-art transactional Web archiving solution, in various experimental settings. Furthermore, we propose a novel transactional Web archiving approach and compare its performance with SiteStory's. To conduct a realistic study, we analyze real-life traffic on the Greek Wikipedia website and generate similar traffic for our benchmarking experiments. Our results show that our archiving technique uses a Web server's resources more efficiently than SiteStory in some scenarios, making our solution favorable for transactional archiving there. We also show that SiteStory does not impose any significant performance overhead on the Web server in the other scenarios.
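As a hedged sketch of the kind of measurement such a benchmark performs (the URLs and load parameters below are hypothetical, not the thesis's actual harness), one can time repeated requests against an archiving-enabled and a baseline server configuration and compare response-time statistics:

```python
import statistics
import time
import urllib.request

def measure_latency(url, n_requests=50):
    """Issue n_requests sequential GETs and return per-request
    latencies in milliseconds. A toy load generator, not the
    thesis's benchmarking harness."""
    latencies = []
    for _ in range(n_requests):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        latencies.append((time.perf_counter() - start) * 1000)
    return latencies

# Hypothetical endpoints: the same wiki served with and without a
# transactional archiving module enabled.
for label, url in [("baseline", "http://server/wiki/page"),
                   ("with archiving", "http://server-archiving/wiki/page")]:
    lat = measure_latency(url)
    print(f"{label}: median {statistics.median(lat):.1f} ms, "
          f"p95 {sorted(lat)[int(0.95 * len(lat))]:.1f} ms")
```

Comparing median and tail latencies (and server-side CPU, memory, and I/O counters) across configurations is what lets the study claim that an archiving system does or does not impose significant overhead at a given point in the design space.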
57

Synthesizing a Hybrid Benchmark Suite with BenchPrime

Wu, Xiaolong 09 October 2018 (has links)
This paper presents BenchPrime, an automated benchmark analysis toolset that is systematic and extensible for analyzing the similarity and diversity of benchmark suites. BenchPrime takes multiple benchmark suites and their evaluation metrics as inputs and generates a hybrid benchmark suite comprising only essential applications. Unlike prior work, BenchPrime uses linear discriminant analysis rather than principal component analysis, and selects the best clustering algorithm and the optimal number of clusters in an automated, metric-tailored way, thereby achieving high accuracy. In addition, BenchPrime ranks the benchmark suites in terms of their application set diversity and estimates how unique each benchmark suite is compared to the other suites. As a case study, this work compares, for the first time, DenBench with MediaBench and MiBench using four different metrics to provide a multi-dimensional understanding of the benchmark suites. For each metric, BenchPrime measures to what degree DenBench applications are irreplaceable by those in MediaBench and MiBench. This provides a means of identifying an essential subset of the three benchmark suites without compromising the application balance of the full set. The experimental results show that the necessity of including DenBench applications varies across the target metrics and that significant redundancy exists among the three benchmark suites. / Master of Science / Representative benchmarks are widely used in research to achieve accurate and fair evaluation of hardware and software techniques. However, redundant applications in a benchmark set can skew the average towards their shared characteristics, overestimating the benefit of any proposed technique. This work proposes BenchPrime, a machine learning-based framework that generates a hybrid benchmark suite comprising only essential applications. In addition, BenchPrime ranks the benchmark suites in terms of their application set diversity and estimates how unique each benchmark suite is compared to the other suites.
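The pipeline described above — project application features with supervised LDA, cluster in the projected space, then keep one representative application per cluster — can be sketched as follows. This is a hedged illustration with hypothetical feature data and fixed parameters; the actual toolset automates the choice of clustering algorithm and cluster count per metric:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical feature matrix: 30 applications x 6 performance
# counters (e.g., IPC, cache miss rates). Suite membership serves
# as the supervision signal for LDA. Illustrative data only.
features = rng.normal(size=(30, 6))
suite_labels = np.repeat([0, 1, 2], 10)  # three benchmark suites

# Project to a low-dimensional, suite-discriminating space.
lda = LinearDiscriminantAnalysis(n_components=2)
projected = lda.fit_transform(features, suite_labels)

# Cluster in the projected space; keep the application closest to
# each centroid as the cluster's representative.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(projected)
representatives = []
for c in range(kmeans.n_clusters):
    members = np.flatnonzero(kmeans.labels_ == c)
    dists = np.linalg.norm(projected[members] - kmeans.cluster_centers_[c], axis=1)
    representatives.append(int(members[np.argmin(dists)]))
print("representative application indices:", sorted(representatives))
```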
58

Model reduction, observation, and predictive control of a wastewater treatment station

Assaf, Ali 12 December 2012 (has links)
Wastewater treatment processes are large-scale, nonlinear systems subject to important disturbances in influent flow rate and load. Model predictive control (MPC), a widely used industrial technique for advanced multivariable control, has been applied to the Benchmark Simulation Model 1 (BSM1), a simulation benchmark that defines a wastewater treatment plant. An open-loop identification method was developed to determine a linear model from a set of input-output measurements of the process. All the step responses were obtained in open loop from step variations of the manipulated inputs and measured disturbances around their steady-state values. The nonlinearities of the model are taken into account through variable parameters. The step-response coefficients obtained make it possible to determine, by optimization, the corresponding continuous transfer functions. These transfer functions fall into five mathematical model classes: first order, first order with integrator, inverse response, second order, and second order with zero. The numerical values of the coefficients of each selected model were calculated using a least-squares criterion. Model predictive control uses the resulting model as an internal model to control the process. Dynamic matrix control (DMC) and quadratic dynamic matrix control (QDMC) predictive control strategies, with and without feedforward compensation, were tested. Two measured disturbances were used for feedforward control: the influent flow rate and the influent ammonium concentration.
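As a hedged, minimal sketch of the dynamic matrix control idea the thesis applies (the step-response coefficients and tuning below are illustrative, not the BSM1 model), one can form the dynamic matrix from step-response coefficients and compute the control moves by regularized least squares:

```python
import numpy as np

def dmc_gain(step_coeffs, P, M, lam):
    """Build the P x M dynamic matrix A from unit step-response
    coefficients a_1..a_N and return the unconstrained DMC gain
    K = (A'A + lam*I)^-1 A'. Only the first computed move is
    applied at each sampling instant (receding horizon)."""
    A = np.zeros((P, M))
    for i in range(P):
        for j in range(M):
            if i >= j:
                A[i, j] = step_coeffs[i - j]
    return np.linalg.solve(A.T @ A + lam * np.eye(M), A.T)

# Illustrative first-order step response: gain 2, time constant ~5 samples.
a = 2.0 * (1 - np.exp(-np.arange(1, 31) / 5.0))

K = dmc_gain(a, P=10, M=3, lam=0.1)
# Predicted-error vector over the horizon (setpoint minus free
# response); a constant unit error serves as a toy example.
e = np.ones(10)
delta_u = K @ e
print("first control move:", delta_u[0])
```

QDMC extends this by solving the same quadratic objective subject to input and output constraints, and the feedforward variants augment the free-response prediction with the measured-disturbance step responses.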
59

The choice of hybrid architectures, a realistic strategy to reach the Exascale

Loiseau, Julien 14 September 2018 (has links)
The race to Exascale is underway, and the countries of the world are competing to present an exaflopic supercomputer by 2020-2021. These supercomputers will be used for military purposes and to demonstrate the power of a nation, but also for research on climate, health, the automotive industry, physics, astrophysics, and many other application domains. These supercomputers of tomorrow must respect an energy envelope of 1 MW for reasons both economic and environmental. To build such a machine, conventional architectures must evolve toward hybrid machines equipped with accelerators such as GPUs, Xeon Phis, and FPGAs. We show that current benchmarks do not seem sufficient to target applications with irregular behavior. This study sets up metrics targeting the limiting walls of computing architectures: computation and communication under irregular behavior. The problem highlighting the computation wall is Langford's academic combinatorial problem. For the communication wall, we propose our implementation of the Graph500 benchmark. These two metrics clearly highlight the advantage of using accelerators, such as GPUs, in these specific circumstances that limit HPC. To validate our thesis, we study a real problem that simultaneously involves computation, communication, and extreme irregularity. By performing physics and astrophysics simulations, we show once again the advantage of the hybrid architecture and its scalability.
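As a hedged illustration of the combinatorial problem used for the computation metric (a simple sequential backtracking count, nothing like the thesis's accelerated GPU implementation), a Langford-pairing solver can be written as:

```python
def langford(n):
    """Backtracking count of Langford pairings L(2, n): arrangements
    of 1..n, each appearing twice, where the two copies of k have
    exactly k numbers between them. Solutions exist only when
    n % 4 is 0 or 3. Mirror images are counted separately, so this
    returns 2 * L(2, n)."""
    seq = [0] * (2 * n)
    count = 0

    def place(k):
        nonlocal count
        if k == 0:
            count += 1
            return
        # Try every legal position for the pair (k, k).
        for i in range(2 * n - k - 1):
            if seq[i] == 0 and seq[i + k + 1] == 0:
                seq[i] = seq[i + k + 1] = k
                place(k - 1)
                seq[i] = seq[i + k + 1] = 0  # undo and backtrack

    place(n)
    return count

print(langford(7))  # 52, i.e. 2 x the 26 distinct pairings for n = 7
```

The search tree is highly irregular — branches die at unpredictable depths — which is exactly why the problem stresses the computation wall in a way regular dense-kernel benchmarks do not.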
60

Benchmark methodology for non-stationary performance evaluation: a case study based on cloud computing applications

Mamani, Edwin Luis Choquehuanca 23 February 2016 (has links)
This work examines the effects of the dynamic properties of large-scale distributed systems and their impact on delivered performance, and introduces an approach to the design of benchmark experiments capable of exposing this influence. Especially in complex, multi-tier applications, the net effect of small delays introduced by buffers, communication latency, and resource instantiation may result in significant inertia along the input-output path. In order to bring out these dynamic properties, the benchmark experiment should excite the system with a non-stationary workload under controlled conditions. The present research elaborates on the essentials for this purpose and illustrates the design approach through a case study. The work also outlines how the instrumentation methodology can be exploited in dynamic modeling approaches to model transient overloads due to workload disturbances.
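A hedged sketch of the core idea — exciting the system with a non-stationary workload, here a step change in the rate of a Poisson arrival process (the rates and durations below are hypothetical, not the study's actual workload) — might look like:

```python
import random

def nonstationary_arrivals(duration_s, rate_fn, seed=42):
    """Generate request arrival times (seconds) for a Poisson process
    whose rate rate_fn(t) varies with time, using thinning against
    the peak rate. A toy workload generator for transient-response
    benchmark experiments."""
    rng = random.Random(seed)
    peak = max(rate_fn(t) for t in range(int(duration_s) + 1))
    t, arrivals = 0.0, []
    while t < duration_s:
        t += rng.expovariate(peak)          # candidate inter-arrival
        if t < duration_s and rng.random() < rate_fn(t) / peak:
            arrivals.append(t)              # accept with prob rate/peak
    return arrivals

# Step workload: 20 req/s for 60 s, then a jump to 80 req/s,
# intended to expose inertia (buffering, autoscaling lag) in the
# system's transient response.
step_rate = lambda t: 20.0 if t < 60 else 80.0
arrivals = nonstationary_arrivals(120, step_rate)
print(f"{len(arrivals)} requests; last at t = {arrivals[-1]:.1f} s")
```

Replaying such a trace against the system under test while recording response times around the step is what allows the transient (rather than steady-state) behavior to be measured and later fitted by a dynamic model.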
