  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Using Benchmark Assessment Scores to Predict Scores on the Mississippi Biology I Subject Area Test, Second Edition

Smith, Cheryl Lynn 09 May 2015 (has links)
Schools across Mississippi are challenged with educational growth. Since the enactment of NCLB, Mississippi has been grappling with a decrease in the graduation rate among its public high school students. Despite all the preparation, spent funds, and professional development for teachers, many students are not succeeding on required subject area tests. The purpose of this study was to determine whether benchmark assessment scores could be used as a predictor of state assessment scores. The study was guided by 3 research questions and utilized 1 research design: a simple linear regression correlational design, used to develop an equation to determine whether ELS Biology I Benchmark Assessment scores were a reliable predictor of Mississippi Biology I SATP2 scores. Questions 1, 2, and 3 sought to determine the accuracy of the fall, winter, and spring ELS Biology I Benchmark Assessment scores, respectively, in predicting the Mississippi Biology I SATP2 for high school students. Data analysis indicated a statistically significant model for predicting Mississippi Biology I SATP2 scores for each of the benchmark assessments. Although the fall administration was statistically significant, it was not very accurate in predicting SATP2 scores. It was determined that the ELS Biology I Benchmark Assessment could accurately predict scores on the Mississippi Biology I SATP2 for high school students. The study concluded with recommendations for future research, especially in the area of science.
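The regression approach described above can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual data or model: the benchmark and SATP2 scores below are hypothetical, invented only to show how a simple linear regression yields a prediction equation.

```python
def fit_simple_linear_regression(x, y):
    """Return intercept b0 and slope b1 minimizing squared error."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    b1 = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
          / sum((xi - mean_x) ** 2 for xi in x))
    b0 = mean_y - b1 * mean_x
    return b0, b1

# Hypothetical benchmark scores (x) and SATP2 scores (y)
benchmark = [55, 60, 65, 70, 75, 80]
satp2 = [610, 628, 635, 651, 660, 672]

b0, b1 = fit_simple_linear_regression(benchmark, satp2)
# Predicted SATP2 score for a student scoring 72 on the benchmark
predicted = b0 + b1 * 72
```

The fitted slope then answers the study's question directly: how many SATP2 points each additional benchmark point is worth, and how accurately the equation recovers observed scores.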
52

Benchmarking Virtual Network Mapping Algorithms

Zhu, Jin 01 January 2012 (has links) (PDF)
The network architecture of the current Internet cannot accommodate the deployment of novel network-layer protocols. To address this fundamental problem, network virtualization has been proposed, where a single physical infrastructure is shared among different virtual network slices. A key operational problem in network virtualization is the need to allocate physical node and link resources to virtual network requests. While several different virtual network mapping algorithms have been proposed in the literature, it is difficult to compare their performance due to differences in the evaluation methods used. In this thesis work, we propose VNMBench, a virtual network mapping benchmark that provides a set of standardized inputs and evaluation metrics. Using this benchmark, different algorithms can be evaluated and compared objectively. The benchmark comprises two parts: a static model and a dynamic model, which exercise fixed and changing mapping processes, respectively. We present such an evaluation using three existing virtual network mapping algorithms. We compare the evaluation results of our synthetic benchmark with those of actual Emulab requests to show that VNMBench is sufficiently realistic. We believe this work provides an important foundation for quantitatively evaluating the performance of a critical component in the operation of virtual networks.
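As a concrete illustration of the mapping problem the benchmark targets, here is a toy greedy node-mapping heuristic. It is not one of the algorithms evaluated in the thesis, and the node names and CPU capacities are hypothetical; real mapping algorithms must additionally embed virtual links onto physical paths.

```python
def map_virtual_network(phys_cpu, virt_cpu):
    """Greedy node mapping: place virtual nodes (largest demand first)
    onto distinct physical nodes with the most remaining CPU.
    Returns {virtual_node: physical_node}, or None if mapping fails."""
    remaining = dict(phys_cpu)
    mapping = {}
    for vnode in sorted(virt_cpu, key=virt_cpu.get, reverse=True):
        # candidate physical nodes: unused and with enough spare CPU
        candidates = [p for p in remaining
                      if p not in mapping.values()
                      and remaining[p] >= virt_cpu[vnode]]
        if not candidates:
            return None  # request must be rejected
        best = max(candidates, key=remaining.get)
        mapping[vnode] = best
        remaining[best] -= virt_cpu[vnode]
    return mapping

# Hypothetical physical substrate and virtual network request
phys = {"p1": 10, "p2": 6, "p3": 8}
virt = {"a": 5, "b": 4, "c": 6}
m = map_virtual_network(phys, virt)
```

A standardized benchmark such as VNMBench would feed many such requests, static or arriving over time, to competing algorithms and compare acceptance ratio and resource efficiency.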
53

Experimental Adsorption and Reaction Studies on Transition Metal Oxides Compared to DFT Simulations

Chen, Han 11 June 2021 (has links)
A temperature-programmed desorption (TPD) study of CO and NH₃ adsorption on MnO(100), with complementary density functional theory (DFT) simulations, was conducted. TPD reveals a primary CO desorption signal at 130 K from MnO(100) in the low-coverage limit, giving an adsorption energy of -35.6 ± 2.1 kJ/mol on terrace sites. PBE+U gives a more reasonable structural result than PBE, and the adsorption energy obtained by PBE+U with the DFT-D3 Becke-Johnson correction gives excellent agreement with the experimentally obtained ΔE<sub>ads</sub> for adsorption at Mn²⁺ terrace sites. Analysis of NH₃ TPD traces revealed that the adsorption energy on MnO(100) is coverage-dependent. In the low-coverage limit, the adsorption energy on terraces is -58.7 ± 1.0 kJ/mol. Dosing results in the formation of a transient NH₃ multilayer that appears in TPD at around 110 K. For a terrace site, PBE+U predicts a more realistic surface adsorbate geometry than PBE does, with PBE+U combined with the Tkatchenko-Scheffler method with iterative Hirshfeld partitioning (TSHP) providing the best prediction. DFT simulations of the dehydrogenation elementary step of ethyl and methyl fragments on α-Cr₂O₃(101̅2) were also conducted to complement previous TPD studies of these systems. On the nearly stoichiometric α-Cr₂O₃(101̅2) surface, CD₃ undergoes dehydrogenation to produce CD₂=CD₂ and CD₄. Previous TPD traces suggest that the α-hydrogen (α-H) elimination of methyl groups on α-Cr₂O₃(101̅2) is the rate-limiting step, with an activation barrier of 135 ± 2 kJ/mol. DFT simulations showed that PBE gives a reasonable prediction of the adsorption sites for CH₃ fragments, in accordance with XPS spectra, while PBE+U did not. Both PBE and PBE+U failed to predict the correct adsorption sites for CH₂=.
When the simulation is set up in accordance with the experimentally observed adsorption sites for the carbon species, PBE gives a very accurate prediction of the reaction barrier when an adjacent Cl adatom is present, while PBE+U fails spectacularly. When the simulation is set up in accordance with the DFT-predicted adsorption sites, PBE is still able to accurately predict the reaction barrier (<1% to 8.7% error), while PBE+U is less accurate. DFT was also used to complement the previous study of the β-H elimination of an ethyl group on the α-Cr₂O₃(101̅2) surface. The DFT simulation shows that, absent surface Cl adatoms, PBE predicts an activation barrier of 92.6 kJ/mol, under-predicting the experimental activation barrier by 28.7%, while PBE+U predicts a barrier of 27.0 kJ/mol, under-predicting the experimental barrier by 79.2%. The addition of chlorine on the adjacent cation marginally improved the barrier prediction by PBE+U, while marginally worsening the prediction by PBE. Grant information: Financial support provided by the U.S. Department of Energy through grant DE-FG02-97ER14751. / Doctor of Philosophy / Density functional theory (DFT), a computational approach to chemistry, has become increasingly popular because it is less computationally expensive than other traditional computational approaches. One major shortcoming of DFT is its inability to describe the electronic interactions within transition metal oxides, where the electronic configuration of one cation is intimately linked to those of adjacent cations. To address this, DFT+U, a variant of DFT, has been developed to better account for these special electronic interactions. However, not enough experimental comparisons have been established to verify the accuracy of DFT and DFT+U. Our lab focuses on providing high-quality experimental benchmarks that the DFT community can readily compare against.
To establish the experimental benchmarks, we use a technique called temperature-programmed desorption (TPD), which measures the rate at which gas molecules leave a sample surface, populated with a pre-determined amount of gas molecules, as the surface temperature is raised at a constant but slow ramp rate. Through analysis of the results, the adsorption energy can be obtained for a desorption process, or an activation barrier if the desorption is the result of a surface reaction. Calculations involving PBE, a popular functional in the DFT community, and its variant PBE+U were conducted for comparison purposes. The transition metal oxide surfaces chosen in this study are MnO(100) and α-Cr₂O₃(101̅2), because both possess these special electronic interactions between their own cations. For the adsorption studies, we determined the adsorption energies of carbon monoxide (CO) and ammonia (NH₃) on the MnO(100) single-crystal surface. For CO, the TPD study revealed weak adsorption on the surface, with no dissociation of CO detected. PBE predicts an unreasonable surface adsorption geometry while PBE+U predicts a reasonable one. When coupled with a dispersion correction method named DFT-D3 Becke-Johnson, PBE+U predicts a very accurate adsorption energy of CO on MnO(100). TPD shows that NH₃ undergoes stronger adsorption on MnO(100), with no dissociation of NH₃. Similarly, PBE+U predicted a more reasonable adsorption geometry while PBE did not. Coupled with a dispersion correction named the Tkatchenko-Scheffler method with iterative Hirshfeld partitioning (TSHP), PBE+U provides an accurate prediction of the adsorption energy. For comparison with previous TPD-based experimental work, the simple decomposition reactions of an ethyl group and a methyl group on the α-Cr₂O₃(101̅2) surface were also studied using DFT.
Overall, PBE gave better predictions of the activation barriers than PBE+U did when compared to the experimentally observed barriers.
54

Performance Measurement and Analysis of Transactional Web Archiving

Maharshi, Shivam 19 July 2017 (has links)
Web archiving is necessary to retain the history of the World Wide Web and to study its evolution. It is important for the cultural heritage community. Some organizations are legally obligated to capture and archive Web content. The advent of transactional Web archiving makes the archiving process more efficient, thereby aiding organizations to archive their Web content. This study measures and analyzes the performance of transactional Web archiving systems. To conduct a detailed analysis, we construct a meaningful design space defined by the system specifications that determine the performance of these systems. SiteStory, a state-of-the-art transactional Web archiving system, and local archiving, an alternative archiving technique, are used in this research. We experimentally evaluate the performance of these systems using the Greek version of Wikipedia deployed on dedicated hardware on a private network. Our benchmarking results show that the local archiving technique uses a Web server’s resources more efficiently than SiteStory for one data point in our design space. Better performance than SiteStory in such scenarios makes our archiving solution favorable to use for transactional archiving. We also show that SiteStory does not impose any significant performance overhead on the Web server for the rest of the data points in our design space. / Master of Science
55

Synthesizing a Hybrid Benchmark Suite with BenchPrime

Wu, Xiaolong 09 October 2018 (has links)
This paper presents BenchPrime, an automated benchmark analysis toolset that is systematic and extensible for analyzing the similarity and diversity of benchmark suites. BenchPrime takes multiple benchmark suites and their evaluation metrics as inputs and generates a hybrid benchmark suite comprising only essential applications. Unlike prior work, BenchPrime uses linear discriminant analysis rather than principal component analysis, and selects the best clustering algorithm and the optimized number of clusters in an automated and metric-tailored way, thereby achieving high accuracy. In addition, BenchPrime ranks the benchmark suites in terms of their application set diversity and estimates how unique each benchmark suite is compared to other suites. As a case study, this work for the first time compares DenBench with MediaBench and MiBench using four different metrics to provide a multi-dimensional understanding of the benchmark suites. For each metric, BenchPrime measures to what degree DenBench applications are irreplaceable by those in MediaBench and MiBench. This provides a means for identifying an essential subset from the three benchmark suites without compromising the application balance of the full set. The experimental results show that the necessity of including DenBench applications varies across the target metrics and that significant redundancy exists among the three benchmark suites. / Master of Science / Representative benchmarks are widely used in research to achieve an accurate and fair evaluation of hardware and software techniques. However, redundant applications in a benchmark set can skew the average towards redundant characteristics, overestimating the benefit of any proposed research. This work proposes a machine learning-based framework, BenchPrime, to generate a hybrid benchmark suite comprising only essential applications.
In addition, BenchPrime ranks the benchmark suites in terms of their application set diversity and estimates how unique each benchmark suite is compared to other suites.
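The redundancy-pruning idea behind a hybrid suite can be sketched as follows. This is not BenchPrime's actual algorithm (which uses linear discriminant analysis and automated clustering); it is a simplified greedy filter over hypothetical per-application metric vectors, shown only to illustrate how redundant applications can be dropped without losing diversity.

```python
import math

def essential_subset(features, threshold):
    """Greedy pruning: keep an application only if its metric vector
    differs by more than `threshold` (Euclidean distance) from every
    application kept so far."""
    kept = []
    for name, vec in features.items():
        if all(math.dist(vec, features[k]) > threshold for k in kept):
            kept.append(name)
    return kept

# Hypothetical (IPC, cache-miss-rate) vectors for six applications
apps = {
    "jpeg":     (1.20, 0.10),
    "jpeg2":    (1.21, 0.11),  # nearly identical to jpeg -> redundant
    "fft":      (0.80, 0.30),
    "aes":      (2.00, 0.05),
    "dijkstra": (0.82, 0.29),  # nearly identical to fft -> redundant
    "sha":      (1.90, 0.06),  # close to aes -> redundant
}
subset = essential_subset(apps, threshold=0.15)
```

With these made-up vectors the filter keeps one representative per behavioral cluster, which is the same goal BenchPrime pursues with its metric-tailored clustering.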
56

Réduction de modèle, observation et commande prédictive d'une station d'épuration d'eaux usées / Model reduction and predictive control of a wastewater treatment station

Assaf, Ali 12 December 2012 (has links)
Les installations d'épuration des eaux usées sont des systèmes de grande dimension, non linéaires, sujets à des perturbations importantes en flux et en charge. Une commande prédictive (MPC) a été appliquée au Benchmark BSM1 qui est un environnement de simulation qui définit une installation d'épuration. Une identification en boucle ouverte d'une station d'épuration d'eaux usées a été réalisée pour déterminer un modèle linéaire en se basant sur un ensemble de mesures entrée sortie du Benchmark BSM1. Les réponses indicielles en boucle ouverte ont été obtenues à partir de variation échelon des entrées autour de leurs valeurs stationnaires. Le modèle tient compte des non-linéarités à travers des paramètres variables. Les réponses indicielles obtenues permettent de déterminer par optimisation les fonctions de transfert continues correspondantes. Ces fonctions de transfert peuvent être regroupées en cinq modèles mathématiques. Des fonctions de transfert continues de premier ordre, de premier ordre avec un intégrateur, des réponses inverses, de second ordre et de second ordre avec zéro représentant les réponses indicielles ont été identifiées. Les valeurs numériques des coefficients de chaque modèle choisi ont été calculées par un critère des moindres carrés. La commande prédictive (MPC) utilise le modèle obtenu comme un modèle interne pour commander le procédé. Deux stratégies de la commande prédictive DMC et QDMC d'une station d'épuration avec ou sans compensation par anticipation ont été testées. La commande par anticipation est utilisée pour réduire l'effet de deux perturbations mesurées, le débit entrant et la concentration entrante en ammonium, sur le système / Wastewater treatment processes are large scale, non linear systems, submitted to important disturbances of influent flow rate and load. 
Model predictive control (MPC), a widely used industrial technique for advanced multivariable control, has been applied to the Benchmark Simulation Model 1 (BSM1), a simulation benchmark of the wastewater treatment process. An open-loop identification method has been developed to determine a linear model from a set of input-output measurements of the process. All the step responses were obtained in open loop from step variations of the manipulated inputs and measured disturbances around their steady-state values. The non-linearities of the model are taken into account by variable parameters. The step response coefficients obtained make it possible to determine the corresponding transfer functions by optimization. These transfer functions fall into five mathematical model classes: first order, first order with integrator, inverse response, second order, and second order with zero. The numerical values of the coefficients of each selected model were calculated using a least-squares criterion. Model predictive control uses the resulting model as an internal model to control the process. Dynamic matrix control (DMC) and quadratic dynamic matrix control (QDMC) predictive control strategies, in the absence and presence of feedforward compensation, have been tested. Two measured disturbances have been used for feedforward control: the influent flow rate and the ammonium concentration.
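The identification step described above, fitting transfer-function models to step responses with a least-squares criterion, can be sketched for the simplest of the five model classes, a first-order response y(t) = K(1 - exp(-t/τ)). The data below are synthetic, not BSM1 measurements; τ is scanned over a grid while the gain K has a closed-form least-squares solution at each candidate.

```python
import math

def fit_first_order(t, y):
    """Fit y(t) = K * (1 - exp(-t / tau)) by least squares:
    scan tau over a grid, solve the optimal K analytically for each
    candidate, and keep the (K, tau) with the smallest residual."""
    best = None
    for tau in [0.1 * i for i in range(1, 201)]:
        basis = [1.0 - math.exp(-ti / tau) for ti in t]
        K = (sum(b * yi for b, yi in zip(basis, y))
             / sum(b * b for b in basis))
        sse = sum((yi - K * b) ** 2 for yi, b in zip(y, basis))
        if best is None or sse < best[0]:
            best = (sse, K, tau)
    return best[1], best[2]

# Synthetic noiseless step response with K = 2.0, tau = 5.0
t = [0.5 * i for i in range(40)]
y = [2.0 * (1.0 - math.exp(-ti / 5.0)) for ti in t]
K, tau = fit_first_order(t, y)
```

The same scan-plus-least-squares pattern extends to the other model classes (integrator, inverse response, second order) with more basis parameters; the fitted model then serves as the MPC internal model.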
57

Le choix des architectures hybrides, une stratégie réaliste pour atteindre l'échelle exaflopique. / The choice of hybrid architectures, a realistic strategy to reach the Exascale.

Loiseau, Julien 14 September 2018 (has links)
La course à l'Exascale est entamée et tous les pays du monde rivalisent pour présenter un supercalculateur exaflopique à l'horizon 2020-2021.Ces superordinateurs vont servir à des fins militaires, pour montrer la puissance d'une nation, mais aussi pour des recherches sur le climat, la santé, l'automobile, physique, astrophysique et bien d'autres domaines d'application.Ces supercalculateurs de demain doivent respecter une enveloppe énergétique de 1 MW pour des raisons à la fois économiques et environnementales.Pour arriver à produire une telle machine, les architectures classiques doivent évoluer vers des machines hybrides équipées d'accélérateurs tels que les GPU, Xeon Phi, FPGA, etc.Nous montrons que les benchmarks actuels ne nous semblent pas suffisants pour cibler ces applications qui ont un comportement irrégulier.Cette étude met en place une métrique ciblant les aspects limitants des architectures de calcul: le calcul et les communications avec un comportement irrégulier.Le problème mettant en avant la complexité de calcul est le problème académique de Langford.Pour la communication nous proposons notre implémentation du benchmark du Graph500.Ces deux métriques mettent clairement en avant l'avantage de l'utilisation d'accélérateurs, comme des GPUs, dans ces circonstances spécifiques et limitantes pour le HPC.Pour valider notre thèse nous proposons l'étude d'un problème réel mettant en jeu à la fois le calcul, les communications et une irrégularité extrême.En réalisant des simulations de physique et d'astrophysique nous montrons une nouvelle fois l'avantage de l'architecture hybride et sa scalabilité. 
/ The countries of the world are already competing for Exascale, and the first exaflopic supercomputer should be released by 2020-2021. These supercomputers will be used for military purposes, to show the power of a nation, but also for research on climate, health, physics, astrophysics and many other areas of application. These supercomputers of tomorrow must respect an energy envelope of 1 MW for reasons both economic and environmental. In order to create such a machine, conventional architectures must evolve into hybrid machines equipped with accelerators such as GPUs, Xeon Phi, FPGAs, etc. We show that the current benchmarks do not seem sufficient to target applications with irregular behavior. This study sets up metrics targeting the walls of computational architectures: the computation and communication walls under irregular behavior. The problem chosen for the computation wall is Langford's academic combinatorial problem. We propose our implementation of the Graph500 benchmark to target the communication wall. These two metrics clearly highlight the advantage of using accelerators, such as GPUs, on these specific and representative HPC problems. To validate our thesis, we study a real problem that brings into play computation, communication, and extreme irregularity at the same time. By performing physics and astrophysics simulations we show once again the advantage of the hybrid architecture and its scalability.
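Langford's problem, used here as the computation-wall metric, asks for arrangements of two copies of each number 1..n such that the two copies of k are separated by exactly k other numbers. A minimal backtracking counter (a sketch, not the thesis's accelerated implementation) looks like:

```python
def count_langford(n):
    """Count Langford sequences L(2, n); reversals counted separately."""
    seq = [0] * (2 * n)

    def place(k):
        # place both copies of k, then recurse down to k = 1
        if k == 0:
            return 1
        total = 0
        for i in range(2 * n - k - 1):
            j = i + k + 1  # the two copies of k sit k numbers apart
            if seq[i] == 0 and seq[j] == 0:
                seq[i] = seq[j] = k
                total += place(k - 1)
                seq[i] = seq[j] = 0
        return total

    return place(n)
```

For n = 3 this finds the two mirror-image sequences 2 3 1 2 1 3 and 3 1 2 1 3 2; solutions exist only for n ≡ 0 or 3 (mod 4), and the search space grows explosively with n, which is what makes the problem a useful compute-bound, irregular benchmark.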
58

Metodologia de benchmark para avaliação de desempenho não-estacionária: um estudo de caso baseado em aplicações de computação em nuvem / Benchmark methodology for non-stationary performance evaluation: a case study based on cloud computing applications

Mamani, Edwin Luis Choquehuanca 23 February 2016 (has links)
Este trabalho analisa os efeitos das propriedades dinâmicas de sistemas distribuídos de larga escala e seu impacto no desempenho, e introduz uma abordagem para o planejamento de experimentos de referência capazes de expor essa influência. Especialmente em aplicações complexas com múltiplas camadas de software (multi-tier), o efeito total de pequenos atrasos introduzidos por buffers, latência na comunicação e alocação de recursos, pode resultar em inércia significativa ao longo do funcionamento do sistema. A fim de detectar essas propriedades dinâmicas, o experimento de execução do benchmark deve excitar o sistema com carga de trabalho não-estacionária sob um ambiente controlado. A presente pesquisa discorre sobre os elementos essenciais para este fim e ilustra a abordagem de desenvolvimento com um estudo de caso. O trabalho também descreve como a metodologia de instrumentação pode ser explorada em abordagens de modelagem dinâmica para sobrecargas transientes devido a distúrbios na carga de trabalho. / This work examines the effects of the dynamic properties of large-scale distributed systems and their impact on the delivered performance, and introduces an approach to the design of benchmark experiments capable of exposing this influence. Especially in complex, multi-tier applications, the net effect of small delays introduced by buffers, communication latency and resource instantiation may result in significant inertia along the input-output path. In order to bring out these dynamic properties, the benchmark experiment should excite the system with a non-stationary workload under controlled conditions. The present research report elaborates on the essentials for this purpose and illustrates the design approach through a case study. The work also outlines how the instrumentation methodology can be exploited in dynamic modeling approaches to model transient overloads due to workload disturbances.
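One standard way to "excite the system with a non-stationary workload", as the abstract puts it, is a nonhomogeneous Poisson arrival process generated by thinning. The sketch below is illustrative only; the rates, step time, and seed are hypothetical and not taken from the dissertation.

```python
import random

def arrivals(rate_fn, t_end, rate_max, seed=42):
    """Generate arrival times on [0, t_end) by Lewis-Shedler thinning:
    draw candidates from a Poisson process at rate_max, then accept
    each with probability rate_fn(t) / rate_max."""
    rng = random.Random(seed)
    t, out = 0.0, []
    while True:
        t += rng.expovariate(rate_max)  # next candidate arrival
        if t >= t_end:
            return out
        if rng.random() < rate_fn(t) / rate_max:
            out.append(t)

# Hypothetical load profile: step from 2 req/s to 10 req/s at t = 50 s,
# to expose transient (inertial) behavior around the step
rate = lambda t: 2.0 if t < 50 else 10.0
times = arrivals(rate, t_end=100.0, rate_max=10.0)
```

Replaying such a trace against the system under test, under controlled conditions, lets the experimenter observe how long the measured performance takes to settle after the disturbance.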
59

Contribuindo para a avaliação do teste de programas concorrentes: uma abordagem usando benchmarks / Evaluating the testing of concurrent programs: an approach using benchmarks

Dourado, George Gabriel Mendes 18 November 2015 (has links)
O teste de programas concorrentes é uma atividade que envolve diferentes perspectivas. Uma das mais conhecidas refere-se ao desenvolvimento de novos conhecimentos sobre critérios, modelos e ferramentas de teste que auxiliem o testador nessa atividade. Outra perspectiva, igualmente importante, porém, ainda incipiente, é a avaliação da atividade de teste de programas concorrentes com relação à sua eficiência e eficácia para revelar defeitos de difícil detecção. O projeto TestPar em desenvolvimento no ICMC/USP tem abordado essas duas perspectivas ao longo dos últimos anos, onde novas tecnologias de teste vêm sendo desenvolvidas e avaliadas sistematicamente. Este trabalho inseriu-se no contexto do projeto TestPar e teve por objetivo principal contribuir para melhorar a avaliação da atividade de teste de programas concorrentes, através do desenvolvimento de benchmarks específicos para este contexto. Essa avaliação representa um desafio para a área de teste, sendo essencial a existência de benchmarks simples o bastante para serem validados manualmente, se necessário, e complexos o bastante para exercitar aspectos não triviais de comunicação e sincronização, encontrados de fato nos programas concorrentes. Assim, neste trabalho de mestrado foram desenvolvidos benchmarks livres de defeitos conhecidos e algumas versões de benchmarks com defeitos intencionalmente inseridos, baseados em taxonomias de defeitos. Esses benchmarks seguiram uma série de características bem definidas, contando ainda com uma documentação padronizada e completa. Os benchmarks foram validados através da condução de estudos experimentais, do uso em diferentes projetos de pesquisa e também com a verificação da sua aplicabilidade para fins educacionais. Os resultados obtidos demonstram que os benchmarks atingiram os objetivos para os quais foram propostos, gerando uma demanda controlada e qualificada sobre modelo, critérios e a ferramenta de teste desenvolvidos no projeto TestPar. 
Os experimentos realizados permitiram destacar pontos positivos e limitações desses artefatos. Outra aplicação dos benchmarks foi como recurso educacional para o ensino em disciplinas como programação concorrente. / The testing of concurrent programs is an activity that involves distinct perspectives. One of the best known refers to the development of new knowledge about criteria, models and testing tools to support this activity. Another perspective, as important as the first but still incipient, is the evaluation of the testing activity for concurrent programs with respect to its efficiency and effectiveness in revealing errors that are hard to detect. The TestPar project under development at ICMC/USP has addressed both of these perspectives over the past years, with new testing technologies being proposed and evaluated systematically. This work belongs to the context of the TestPar project and aims to improve the evaluation of the testing of concurrent programs through the development of benchmarks specific to this context. This evaluation represents a challenge for the testing area: benchmarks must be simple enough to be validated manually, if necessary, yet complex enough to exercise the non-trivial aspects of communication and synchronization actually found in concurrent programs. Thus, in this work, bug-free benchmarks were developed along with versions containing intentionally inserted bugs, based on error taxonomies. These benchmarks follow a set of well-defined features and include standardized, complete documentation. The benchmarks were validated in different scenarios: experimental studies, use in different ongoing research projects, and verification of their applicability for educational purposes. The results obtained show that the benchmarks achieved their objectives, generating a controlled and qualified demand on the model, criteria and tool developed in the TestPar project.
The experiments revealed strengths and limitations of these artifacts. The benchmarks have also been used as educational resources for teaching disciplines such as concurrent programming.
60

Avaliação do impacto da comunicação intra e entre-nós em nuvens computacionais para aplicações de alto desempenho / Evaluation of impact from inter and intra-node communication in cloud computing for HPC applications

Okada, Thiago Kenji 07 November 2016 (has links)
Com o advento da computação em nuvem, não é mais necessário ao usuário investir grandes quantidades de recursos financeiros em equipamentos computacionais. Ao invés disto, é possível adquirir recursos de processamento, armazenamento ou mesmo sistemas completos por demanda, usando um dos diversos serviços disponibilizados por provedores de nuvem como a Amazon, o Google, a Microsoft, e a própria USP. Isso permite um controle maior dos gastos operacionais, reduzindo custos em diversos casos. Por exemplo, usuários de computação de alto desempenho podem se beneficiar desse modelo usando um grande número de recursos durante curtos períodos de tempo, ao invés de adquirir um aglomerado computacional de alto custo inicial. Nosso trabalho analisa a viabilidade de execução de aplicações de alto desempenho, comparando o desempenho de aplicações de alto desempenho em infraestruturas com comportamento conhecido com a nuvem pública oferecida pelo Google. Em especial, focamos em diferentes configurações de paralelismo com comunicação interna entre processos no mesmo nó, chamado de intra-nós, e comunicação externa entre processos em diferentes nós, chamado de entre-nós. Nosso caso de estudo para esse trabalho foi o NAS Parallel Benchmarks, um benchmark bastante popular para a análise de desempenho de sistemas paralelos e de alto desempenho. Utilizamos aplicações com implementações puramente MPI (para as comunicações intra e entre-nós) e implementações mistas onde as comunicações internas foram feitas utilizando OpenMP (comunicação intra-nós) e as comunicações externas foram feitas usando o MPI (comunicação entre-nós). / With the advent of cloud computing, it is no longer necessary to invest large amounts of money on computing resources. Instead, it is possible to obtain processing or storage resources, and even complete systems, on demand, using one of the several available services from cloud providers like Amazon, Google, Microsoft, and USP. 
Cloud computing allows greater control of operating expenses, reducing costs in many cases. For example, high-performance computing users can benefit from this model by using a large number of resources for short periods of time, instead of acquiring a computer cluster with a high initial cost. Our study examines the feasibility of running high-performance applications in the cloud, comparing their performance on infrastructure with known behavior against the public cloud offering from Google. In particular, we focus on various parallel configurations with internal communication between processes on the same node, called intra-node, and external communication between processes on different nodes, called inter-node. Our case study for this work was the NAS Parallel Benchmarks, a popular benchmark for performance analysis of parallel and high-performance computing systems. We tested applications with MPI-only implementations (for both intra- and inter-node communications) and mixed implementations where internal communications were made using OpenMP (intra-node) and external communications were made using MPI (inter-node).
