71

Extensão da geração de carga do Bench4Q para benchmark de desempenho em regime transiente / Extension of the Bench4Q load generation for transient-regime performance benchmarking

Flavio Luiz dos Santos de Souza 08 April 2016 (has links)
This master's thesis presents the development of an extension to the Bench4Q benchmark, adding a new capability to the framework. Bench4Q generates synthetic workload for a companion e-commerce application and simulates various aspects of conventional architectures and workloads for cloud computing environments. Its main use in the literature has been in performance evaluation under stationary load. Recent research, however, has turned to adaptive architectures for resource self-management, which must respond to disturbances and meet transient-regime performance requirements, and the benchmark does not cover the system's transient states. This work therefore extends Bench4Q with the ability to excite the transient response of the system by applying workload disturbances during execution. To this end, the software was augmented with functionality for modulating the workload and programming disturbances. Experiments carried out in a multi-tier environment yielded results consistent with the objective, contributing to the field of performance evaluation. The motivation for the research, its relation to other ongoing work, and future directions are also introduced.
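The workload modulation described above can be illustrated with a small sketch of a non-stationary load generator: a step disturbance temporarily multiplies the request rate so that the system under test is driven through a transient rather than a stationary regime. This is an illustrative approximation written in C, not Bench4Q code; the step schedule, the rates, and the send_request() stub are assumptions made for the example.

```c
/* Sketch of a non-stationary load generator: the request rate is modulated
 * at run time by a step disturbance, exciting the transient response of the
 * system under test. Illustrative only -- not Bench4Q code. */
#include <stdio.h>
#include <time.h>

static void send_request(void) {
    /* placeholder: a real harness would issue an HTTP request to the
     * e-commerce application under test here */
}

int main(void) {
    const double base_rate = 10.0;  /* requests per second, steady state */
    const double step_gain = 3.0;   /* disturbance amplitude              */
    const int duration_s   = 120;   /* total run length in seconds        */
    const int step_start   = 40;    /* disturbance window: [step_start,   */
    const int step_end     = 80;    /*                      step_end)     */

    for (int t = 0; t < duration_s; t++) {
        double rate = (t >= step_start && t < step_end)
                        ? base_rate * step_gain   /* transient excitation */
                        : base_rate;
        struct timespec gap = { 0, (long)(1e9 / rate) };  /* inter-arrival */
        for (int i = 0; i < (int)rate; i++) {
            send_request();
            nanosleep(&gap, NULL);
        }
        printf("t=%3d s  target rate=%.1f req/s\n", t, rate);
    }
    return 0;
}
```

In a real harness the rate schedule would be read from a configuration file so that different disturbance profiles (steps, ramps, bursts) can be replayed reproducibly.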
72

Avaliação do impacto da comunicação intra e entre-nós em nuvens computacionais para aplicações de alto desempenho / Evaluation of the impact of intra- and inter-node communication in cloud computing for HPC applications

Thiago Kenji Okada 07 November 2016 (has links)
With the advent of cloud computing, users no longer need to invest large amounts of money in computing equipment. Instead, processing and storage resources, or even complete systems, can be acquired on demand through one of the many services offered by cloud providers such as Amazon, Google, Microsoft, and USP itself. This allows tighter control of operating expenses and reduces costs in many cases; for example, high-performance computing users can benefit from this model by using a large number of resources for short periods of time instead of acquiring a computer cluster with a high up-front cost. This work analyzes the feasibility of running high-performance applications in the cloud by comparing their performance on infrastructure with known behavior against the public cloud offered by Google. In particular, we focus on different parallel configurations combining internal communication between processes on the same node (intra-node) and external communication between processes on different nodes (inter-node). Our case study was the NAS Parallel Benchmarks, a popular benchmark for performance analysis of parallel and high-performance systems. We evaluated applications with pure MPI implementations (for both intra- and inter-node communication) and mixed implementations in which internal communication uses OpenMP (intra-node) and external communication uses MPI (inter-node).
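The two configurations compared in the study follow a well-known pattern: pure MPI places one rank per core everywhere, while the mixed version keeps one MPI rank per node and spawns OpenMP threads inside it. The sketch below is a generic C illustration of the hybrid pattern (intra-node OpenMP, inter-node MPI); it is not code from the NAS Parallel Benchmarks.

```c
/* Hybrid MPI + OpenMP sketch: OpenMP threads perform the intra-node work,
 * MPI performs the inter-node communication (here a simple reduction).
 * Generic illustration of the pattern studied, not NAS Parallel Benchmarks
 * code. Compile with e.g.: mpicc -fopenmp hybrid.c -o hybrid */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank, nranks;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const long n = 1000000;          /* elements handled by each rank */
    double local_sum = 0.0;

    /* intra-node parallelism: threads of one rank share its slice */
    #pragma omp parallel for reduction(+:local_sum)
    for (long i = 0; i < n; i++)
        local_sum += (double)(rank * n + i);

    /* inter-node parallelism: ranks combine partial results over MPI */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("ranks=%d, threads per rank=%d, sum=%.0f\n",
               nranks, omp_get_max_threads(), global_sum);

    MPI_Finalize();
    return 0;
}
```

Running the same binary with many single-threaded ranks approximates the pure-MPI configuration, while one rank per node with OMP_NUM_THREADS set to the core count gives the mixed configuration.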
73

Leistungsbewertung von Workstations mit SPEC-SFS-Benchmarks fuer den Einsatz als Fileserver / Performance evaluation of workstations with SPEC SFS benchmarks for use as file servers

Hofbauer, Jens 29 July 1996 (has links)
The SFS benchmark suite (Release 1.1) was installed on several computer architectures at TU Chemnitz-Zwickau and used for a performance evaluation of those architectures. A condition examined in particular was the hardware equipment and machine configuration of NFS servers.
74

Étude de transformations et d’optimisations de code parallèle statique ou dynamique pour architecture "many-core" / Study of transformations and optimizations of static or dynamic parallel code for many-core architectures

Gallet, Camille 13 October 2016 (has links)
From their origin in the 1960s to the present day, supercomputers have gone through three revolutions: (i) the arrival of transistors to replace triodes, (ii) the appearance of vector computing, and (iii) the organization into clusters. Clusters currently consist of standard processors whose computing power has grown through higher clock frequencies, the multiplication of cores on a chip, and wider computing units (SIMD instruction sets). A recent example combining a large number of cores with wide (512-bit) vector units is the Intel Xeon Phi co-processor. To maximize computing performance on these chips by best exploiting the SIMD instructions, the bodies of loop nests must be reorganized while taking irregular aspects (control flow and data flow) into account. To this end, this thesis proposes to extend the transformation called Deep Jam in order to extract regularity from irregular code and thereby facilitate vectorization. The document presents our extension and its application to HydroMM, a multi-material hydrodynamics mini-application. These studies show that a significant performance gain can be obtained on irregular codes.
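As an illustration of the kind of control-flow regularization that makes SIMD units usable on irregular loops (a generic C example of the problem the thesis addresses, not the Deep Jam transformation itself):

```c
/* Generic illustration of regularizing an irregular loop so the compiler
 * can vectorize it; this is not the Deep Jam transformation itself, only
 * the kind of control-flow uniformity it aims to recover. */
#include <stddef.h>

/* Irregular version: the data-dependent branch hinders vectorization. */
void scale_irregular(float *a, const float *b, const int *mask, size_t n) {
    for (size_t i = 0; i < n; i++) {
        if (mask[i])
            a[i] = 2.0f * b[i];
        else
            a[i] = b[i];
    }
}

/* Regularized version: the branch becomes a branch-free select, so the
 * loop body is uniform and wide SIMD units (e.g. 512-bit) can be used. */
void scale_regular(float *a, const float *b, const int *mask, size_t n) {
    #pragma omp simd
    for (size_t i = 0; i < n; i++) {
        float factor = mask[i] ? 2.0f : 1.0f;  /* compiled to a select */
        a[i] = factor * b[i];
    }
}
```

Deep Jam itself works on whole loop nests and recombines similar control paths; the example only shows the single-branch case that motivates such transformations.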
75

Logique de requêtes à la XPath : systèmes de preuve et pertinence pratique / XPath-like Query Logics : Proof Systems and Real-World Applicability

Lick, Anthony 08 July 2019 (has links)
Motivated by applications ranging from XML processing to runtime verification of programs, many logics on data trees and data streams have been developed in the literature. These offer different trade-offs between expressiveness and computational complexity; their satisfiability problem often has non-elementary complexity or is even undecidable. Moreover, their study through model-theoretic or automata-theoretic approaches can be computationally impractical or lack modularity. In a first part, we investigate the use of proof systems as a modular way to solve the satisfiability problem of data logics on linear structures. For each logic considered, we develop a sound and complete hypersequent calculus and describe an optimal proof-search strategy yielding an NP decision procedure. In particular, we exhibit an NP-complete fragment of the tense logic over data ordinals (the full logic being undecidable) which is exactly as expressive as the two-variable fragment of first-order logic on data ordinals. In a second part, we run an empirical study of the main decidable XPath-like logics proposed in the literature. We present a benchmark we developed to that end and examine how these logics could be extended to capture more real-world queries without affecting the complexity of their satisfiability problem. Finally, we discuss the results gathered from our benchmark and identify the new features that should be supported in order to increase the practical coverage of these logics.
76

[en] AUTOMATIC GENERATION OF BENCHMARKS FOR EVALUATING KEYWORD AND NATURAL LANGUAGE INTERFACES TO RDF DATASETS / [pt] GERAÇÃO AUTOMÁTICA DE BENCHMARKS PARA AVALIAR INTERFACES BASEADAS EM PALAVRAS-CHAVE E LINGUAGEM NATURAL PARA DATASETS RDF

ANGELO BATISTA NEVES JUNIOR 04 November 2022 (has links)
[en] Text search systems provide users with a friendly alternative for accessing Resource Description Framework (RDF) datasets. The performance evaluation of such systems requires adequate benchmarks, consisting of RDF datasets, text queries, and the respective expected answers. However, available benchmarks often have small sets of queries and incomplete sets of answers, mainly because they are constructed manually with the help of experts. The central contribution of this thesis is a method for building benchmarks automatically, with larger sets of queries and more complete answers. The proposed method works for both keyword and natural language queries and has two steps: query generation and answer generation. The query generation step selects a set of relevant entities, called inducers, and, for each one, heuristics guide the process of extracting related queries. The answer generation step takes the queries and computes solution generators (SGs), subgraphs of the original dataset containing different answers to the queries. Heuristics also guide the construction of the SGs, avoiding the waste of computational resources on generating irrelevant answers.
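As a heavily simplified sketch of the query-generation idea only: well-connected entities of a tiny in-memory triple store are treated as "inducers" and keyword queries are derived from the labels around them. The data, the degree-based selection rule, and the query format are assumptions made for illustration; this is not the thesis's algorithm.

```c
/* Heavily simplified sketch of query generation from "inducer" entities.
 * Illustrative assumptions only; not the method described in the thesis. */
#include <stdio.h>
#include <string.h>

struct triple { const char *s, *p, *o; };

static const struct triple store[] = {
    {"Alan_Turing", "field",      "Computer_Science"},
    {"Alan_Turing", "birthPlace", "London"},
    {"Alan_Turing", "knownFor",   "Turing_Machine"},
    {"London",      "country",    "United_Kingdom"},
};
static const int n_triples = sizeof store / sizeof store[0];

/* degree = number of triples an entity appears in as subject or object */
static int degree(const char *entity) {
    int d = 0;
    for (int i = 0; i < n_triples; i++)
        if (strcmp(store[i].s, entity) == 0 || strcmp(store[i].o, entity) == 0)
            d++;
    return d;
}

int main(void) {
    /* treat every subject with degree >= 2 as an inducer and emit one
     * keyword query per surrounding triple, with the subject entity as
     * the expected answer */
    for (int i = 0; i < n_triples; i++) {
        if (degree(store[i].s) >= 2)
            printf("query: \"%s %s\"  expected answer: %s\n",
                   store[i].s, store[i].o, store[i].s);
    }
    return 0;
}
```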
77

Web-based Benchmarks for Forecasting Systems: The ECAST Platform

Ulbricht, Robert, Hartmann, Claudio, Hahmann, Martin, Donker, Hilko, Lehner, Wolfgang 10 January 2023 (has links)
The role of precise forecasts in the energy domain has changed dramatically. New supply forecasting methods are developed to better address this challenge, but meaningful benchmarks are rare and time-intensive. We propose the ECAST online platform in order to solve that problem. The system's capability is demonstrated on a real-world use case by comparing the performance of different prediction tools.
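One way such a platform compares prediction tools is by computing a common accuracy metric over a shared test series. The sketch below computes the mean absolute percentage error (MAPE) for two hypothetical forecasts; it illustrates the kind of comparison the platform automates and is not ECAST code.

```c
/* Sketch of the kind of accuracy comparison a forecasting benchmark
 * automates: mean absolute percentage error (MAPE) over a test series.
 * Illustrative only -- not part of the ECAST platform. */
#include <math.h>
#include <stdio.h>

static double mape(const double *actual, const double *forecast, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += fabs((actual[i] - forecast[i]) / actual[i]);
    return 100.0 * sum / n;
}

int main(void) {
    double actual[]     = {120.0, 135.0, 128.0, 150.0};  /* observed load  */
    double forecast_a[] = {118.0, 140.0, 125.0, 155.0};  /* method A       */
    double forecast_b[] = {130.0, 120.0, 140.0, 160.0};  /* method B       */
    int n = 4;

    printf("method A: MAPE = %.2f%%\n", mape(actual, forecast_a, n));
    printf("method B: MAPE = %.2f%%\n", mape(actual, forecast_b, n));
    return 0;
}
```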
78

Comparison of Technologies for General-Purpose Computing on Graphics Processing Units

Sörman, Torbjörn January 2016 (has links)
The computational capacity of graphics cards for general-purpose computing has progressed fast over the last decade. A major reason is computationally heavy computer games, where the standard of performance and high-quality graphics constantly rise. Another reason is better-suited technologies for programming the graphics cards. Combined, the result is devices with high raw performance and the means to access that performance. This thesis investigates some of the current technologies for general-purpose computing on graphics processing units. Technologies are primarily compared by means of benchmarking performance and secondarily by factors concerning programming and implementation. The choice of technology can have a large impact on performance. The benchmark application found the difference in execution time between the fastest technology, CUDA, and the slowest, OpenCL, to be twice a factor of two. The benchmark application also found that the older technologies, OpenGL and DirectX, are competitive with CUDA and OpenCL in terms of resulting raw performance.
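A minimal sketch of the wall-clock timing that underlies such comparisons is shown below; the "kernel" here is a CPU stand-in, whereas the thesis times CUDA, OpenCL, OpenGL and DirectX implementations of the same algorithm.

```c
/* Minimal sketch of benchmarking by wall-clock timing, the primary
 * comparison method described in the abstract. The kernel is a CPU
 * stand-in; a real comparison would time GPU implementations. */
#include <stdio.h>
#include <time.h>

static void kernel_stand_in(float *data, int n) {
    for (int i = 0; i < n; i++)
        data[i] = data[i] * 1.0001f + 0.5f;
}

int main(void) {
    enum { N = 1 << 20, REPS = 100 };
    static float data[N];
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int r = 0; r < REPS; r++)
        kernel_stand_in(data, N);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1e3
              + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("average time per run: %.3f ms\n", ms / REPS);
    return 0;
}
```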
79

A TOOL FOR PERFORMANCE EVALUATION OF REAL-TIME UNIX OPERATING SYSTEMS

Furht, B., Boujarwah, A., Gluch, D., Joseph, D., Kamath, D., Matthews, P., McCarty, M., Stoehr, R., Sureswaran, R. 11 1900 (has links)
International Telemetering Conference Proceedings / November 04-07, 1991 / Riviera Hotel and Convention Center, Las Vegas, Nevada / In this paper we present the REAL/STONE Real-Time Tester, a tool for performance evaluation of real-time UNIX operating systems. The REAL/STONE Real-Time Tester is a synthetic benchmark that simulates a typical real-time environment. The tool performs typical real-time operations: it (a) reads data from an external source and accesses it periodically, (b) processes the data through a number of real-time processes, and (c) displays the final data. This study can help users select the most effective real-time UNIX operating system for a given application.
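The read-process-display cycle described in (a)-(c) amounts to a periodic task loop. A generic C sketch of such a loop is shown below; the data source and processing step are placeholders, and this is not REAL/STONE code.

```c
/* Generic sketch of the periodic read -> process -> display cycle that a
 * synthetic real-time benchmark exercises. Placeholders throughout. */
#include <stdio.h>
#include <time.h>

static double read_sample(void) { return 42.0; }   /* external source stub */
static double process(double x) { return x * 0.5; }
static void   display(double x) { printf("value: %.2f\n", x); }

int main(void) {
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    const long period_ns = 100 * 1000 * 1000;   /* 100 ms period */

    for (int cycle = 0; cycle < 50; cycle++) {
        display(process(read_sample()));

        /* sleep until the next absolute deadline; missed deadlines are
         * exactly what a real-time benchmark would measure */
        next.tv_nsec += period_ns;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_sec  += 1;
            next.tv_nsec -= 1000000000L;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return 0;
}
```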
80

A benchmark for impact assessment of affordable housing

Okehielem, Nelson January 2011 (has links)
There is growing recognition in the built environment of the significance of benchmarking, which is seen as a key driver for measuring success criteria in the sector. In spite of the wide application of the technique in this and other sectors, very little is known of it in the affordable housing sub-sector, and where it has been used, the components of housing quality were not considered holistically. This study addresses that deficiency by developing a benchmark for assessing the impact factors of affordable housing quality. As part of the study, samples of 4 affordable housing projects were examined, two each originally selected from under 5 categories of ‘operational quality standards’ within the United Kingdom; samples of 10 projects were extracted from a total of 80 identified UK affordable housing projects. An investigative study of these projects showed varying impact factors and constituent parameters responsible for their quality. The impact criteria identified in the projects were mapped against a unifying set standard and weighted with a ‘relative importance index’. Adopting the quality function deployment (QFD) technique, a quality matrix was developed from these groupings of quality standards and their impact factors. An affordable housing quality benchmark and an associated toolkit evolved from the resulting quality matrix of the project case studies and a questionnaire served on practitioners’ performance. Whereas the toolkit was empirically tested for reliability and construct validity, the benchmark was refined with the use of a project case study.
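The ‘relative importance index’ used to weight the impact factors is commonly computed as RII = ΣW / (A × N), where W is each respondent's rating, A the highest possible rating, and N the number of respondents. The sketch below illustrates that calculation with made-up response counts, not data from the study.

```c
/* Illustrative computation of the Relative Importance Index commonly used
 * to weight survey factors: RII = sum(W) / (A * N). The response counts
 * below are made up for illustration. */
#include <stdio.h>

static double rii(const int counts[], int scale_max, int n_respondents) {
    /* counts[k] = number of respondents giving rating k+1 */
    int weighted_sum = 0;
    for (int k = 0; k < scale_max; k++)
        weighted_sum += (k + 1) * counts[k];
    return (double)weighted_sum / (scale_max * n_respondents);
}

int main(void) {
    /* hypothetical ratings (1..5) for one housing quality impact factor */
    int counts[5] = {2, 3, 10, 15, 20};
    int n = 2 + 3 + 10 + 15 + 20;   /* 50 respondents in total */
    printf("RII = %.3f\n", rii(counts, 5, n));
    return 0;
}
```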
