321
Dosagem de concreto de elevado desempenho pelo processo da calda de cimento. / Mixture proportioning for high performance concrete by cement grout process. Lintz, Rosa Cristina Cecche, 30 September 1997 (has links)
This dissertation presents a mixture-proportioning method for high performance concrete based on the rheology of cement grout, together with the results of its application to some Brazilian materials. The method rests on two theoretical models: a rheological model for the viscosity of a suspension of polydispersed grains, and an empirical formula that predicts concrete strength from the strength of a standard mortar. Its main advantage is the saving in labor and materials, since only cement grouts are handled. It also yields concrete mixtures that are self-compacting when fresh and of high strength when hardened. The application of the method to Brazilian materials was carried out with some modifications of the original procedure: cubic specimens, thermal curing of the concrete in boiling water, use of ordinary Portland cement, etc. The main conclusions drawn from this application are: the accuracy of the strength-prediction formula, the determination of the optimum superplasticizer and mineral-addition contents, confirmation of the optimum silica fume content in high performance concretes, the effectiveness of thermal curing in boiling water, the efficiency of cubic specimens, and the differing behavior of the various materials used.
322
Design and evaluation of a technology-scalable architecture for instruction-level parallelism. Nagarajan, Ramadass, 1977-, 28 August 2008 (has links)
Not available
323
High performance computing for irregular algorithms and applications with an emphasis on big data analytics. Green, Oded, 22 May 2014 (has links)
Irregular algorithms such as graph algorithms, sorting, and sparse matrix multiplication present numerous programming challenges, including scalability, load balancing, and efficient memory utilization. In this age of Big Data we face additional challenges, since the data is often streaming at a high velocity and we wish to make near real-time decisions for real-world events. For instance, we may wish to track Twitter for the pandemic spread of a virus. Analyzing such data sets requires combining algorithmic optimizations with the use of massively multithreaded architectures, accelerators such as GPUs, and distributed systems. My research focuses on designing new analytics and algorithms for the continuous monitoring of dynamic social networks. Achieving high performance computing for irregular algorithms such as Social Network Analysis (SNA) is challenging, as the instruction flow is highly data-dependent and requires domain expertise.
The rapid changes in the underlying network necessitate understanding real-world graph properties such as the small-world property, shrinking network diameter, power-law distribution of edges, and the rate at which updates occur. These properties, with respect to a given analytic, can help design load-balancing techniques, avoid wasteful (redundant) computations, and create streaming algorithms. In the course of my research I have considered several parallel programming paradigms for a wide range of multithreaded platforms: x86, NVIDIA's CUDA, Cray XMT2, SSE-SIMD, and Plurality's HyperCore. These unique programming models require examining parallel programming at multiple levels: algorithmic design, cache efficiency, fine-grain parallelism, memory bandwidth, data management, load balancing, scheduling, control-flow models, and more. This thesis deals with these issues and more.
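The data-dependent parallelism described above can be made concrete with a small sketch: a single level of frontier-based breadth-first search over a CSR graph, parallelized with OpenMP. This is a generic illustration of the kind of irregular kernel the abstract discusses, not code from the thesis; the CSR layout, the dynamic schedule, and the GCC/Clang atomic builtin are assumptions made for the example.

    #include <stdio.h>
    #include <string.h>

    /* Minimal CSR graph: row_ptr has n+1 entries, col_idx holds the neighbour ids. */
    typedef struct { int n; const int *row_ptr; const int *col_idx; } csr_graph;

    /* Expand one BFS frontier in parallel. Work per vertex depends on its degree,
     * so a dynamic schedule is used to limit load imbalance; visited flags are
     * claimed atomically so each vertex enters the next frontier exactly once. */
    static int bfs_level(const csr_graph *g, const int *frontier, int fsize,
                         int *next, unsigned char *visited)
    {
        int nsize = 0;
        #pragma omp parallel for schedule(dynamic, 64)
        for (int i = 0; i < fsize; ++i) {
            int u = frontier[i];
            for (int e = g->row_ptr[u]; e < g->row_ptr[u + 1]; ++e) {
                int v = g->col_idx[e];
                if (__atomic_exchange_n(&visited[v], 1, __ATOMIC_RELAXED) == 0) {
                    int pos;
                    #pragma omp atomic capture
                    pos = nsize++;
                    next[pos] = v;
                }
            }
        }
        return nsize;
    }

    int main(void)
    {
        /* Tiny directed graph: 0->1, 0->2, 1->3, 2->3. */
        int row_ptr[] = {0, 2, 3, 4, 4};
        int col_idx[] = {1, 2, 3, 3};
        csr_graph g = {4, row_ptr, col_idx};

        unsigned char visited[4] = {1, 0, 0, 0};   /* source vertex 0 already visited */
        int frontier[4] = {0}, next[4];
        int fsize = 1, level = 0;

        while (fsize > 0) {
            printf("level %d: %d vertices\n", level++, fsize);
            fsize = bfs_level(&g, frontier, fsize, next, visited);
            memcpy(frontier, next, (size_t)fsize * sizeof(int));
        }
        return 0;
    }

Because the inner loop length varies with vertex degree, a static partition of the frontier would leave some threads idle; the dynamic schedule is one common way to rebalance that work.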
324
Exploration of parallel graph-processing algorithms on distributed architectures / Exploration d’algorithmes de traitement parallèle de graphes sur architectures distribuées. Collet, Julien, 06 December 2017 (has links)
With the advent of ever-larger graph datasets in a growing number of domains, parallel graph-processing applications deployed on distributed architectures are increasingly needed to cope with the demand for memory and compute resources. Although large-scale distributed architectures are available, notably in the High-Performance Computing (HPC) domain, the programming and deployment complexity of such graph-processing algorithms, whose parallelization and performance are highly data-dependent, hampers their use. Moreover, the difficulty of predicting the performance behavior of these applications complicates assessing how well a given hardware architecture suits them. With this in mind, this thesis explores graph-processing algorithms on distributed architectures using GraphLab, a state-of-the-art framework for programming such parallel algorithms. Two real-world use cases are studied in detail, one from execution-trace analysis and one from genomic data processing; for each, a parallel implementation is proposed and deployed on several distributed-memory architectures of varying scales. This study highlights the existence of operating ranges that can be leveraged to select a relevant operating point, with respect to the datasets processed and the cluster nodes used, at which a system is most efficient. A further study compares general-purpose commodity cluster architectures with more specialized high-performance compute servers on the two use cases. It shows that clusters of commodity workstations, which are considerably cheaper and simpler in node architecture, deliver higher performance in this applicative context, and the gap widens further when performance is weighted by purchase and operating costs. The performance analysis of these architectures also leads to sizing and design guidelines for distributed architectures in this context: in particular, the thesis shows how performance studies reveal where the hardware should be improved and how a cluster can be dimensioned to process a given workload efficiently. Finally, hardware improvements for the next generations of graph-processing servers are proposed and evaluated. A flash-based victim-swap mechanism is proposed to temper the sharp performance drop observed when the cluster operates at a point where main memory is saturated. The two applications are also evaluated on an architecture based on low-power ARM processors, by porting GraphLab to an NVIDIA TX2-based platform, to study the relevance of such architectures for graph processing; the measured performance is encouraging and shows that the reduction in raw performance relative to existing architectures is offset by much higher energy efficiency.
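GraphLab-style frameworks typically express such computations as vertex programs that gather values from neighbours and then apply an update. The sketch below is a simplified, single-node gather-apply loop (a PageRank-style update) written only to illustrate that programming model; it does not use GraphLab's actual API, and the graph representation, damping factor, and iteration count are assumptions made for the example.

    #include <stdio.h>
    #include <stdlib.h>

    /* In-edge CSR view of a graph plus the out-degree of each vertex. */
    typedef struct { int n; const int *in_ptr; const int *in_src; const int *out_deg; } graph;

    /* Gather-apply loop: each vertex gathers contributions from its in-neighbours
     * (gather) and then computes its new rank (apply). */
    static void pagerank(const graph *g, double *rank, int iters, double d)
    {
        double *next = malloc((size_t)g->n * sizeof *next);
        for (int it = 0; it < iters; ++it) {
            for (int v = 0; v < g->n; ++v) {
                double acc = 0.0;                        /* gather */
                for (int e = g->in_ptr[v]; e < g->in_ptr[v + 1]; ++e) {
                    int u = g->in_src[e];
                    acc += rank[u] / g->out_deg[u];
                }
                next[v] = (1.0 - d) / g->n + d * acc;    /* apply */
            }
            for (int v = 0; v < g->n; ++v) rank[v] = next[v];
        }
        free(next);
    }

    int main(void)
    {
        /* Three-vertex cycle 0->1->2->0. */
        int in_ptr[] = {0, 1, 2, 3}, in_src[] = {2, 0, 1}, out_deg[] = {1, 1, 1};
        graph g = {3, in_ptr, in_src, out_deg};
        double rank[3] = {1.0 / 3, 1.0 / 3, 1.0 / 3};
        pagerank(&g, rank, 20, 0.85);
        for (int v = 0; v < 3; ++v) printf("rank[%d] = %.4f\n", v, rank[v]);
        return 0;
    }

In a distributed framework the same gather and apply phases are executed per partition with neighbour values exchanged between nodes, which is where the data-dependent communication cost discussed above comes from.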
325
Incorporação e liberação de resveratrol em hidrogéis poliméricos / Resveratrol immobilization and release in polymeric hydrogels. MOMESSO, ROBERTA G.R.A.P., 09 October 2014 (has links)
Resveratrol (3,4,5-trihydroxystilbene) is a polyphenol produced by a wide variety of plants in response to stress and found predominantly in grape skins. This active compound offers several health benefits, such as its antioxidant capacity, which is associated with the prevention of several types of cancer and of premature skin aging. However, it has low bioavailability when administered orally, which makes topical application attractive. The main objective of this work was to incorporate resveratrol into polymeric hydrogels in order to obtain a delivery system for topical use against the development of skin disorders such as skin aging and skin cancer. Polymeric matrices composed of poly(N-vinyl-2-pyrrolidone) (PVP), poly(ethylene glycol) (PEG) and agar, or of PVP and propane-1,2,3-triol (glycerin), irradiated at 20 kGy, were characterized by gel-fraction and swelling assays; their preliminary biocompatibility was evaluated in vitro by a cytotoxicity assay using the neutral red uptake method. Because of the low solubility of resveratrol in water, the effect of adding 2% ethanol to the matrices was examined. All the matrices studied, with or without alcohol, showed a high degree of crosslinking and swelling capacity and showed no toxicity in the preliminary biocompatibility assay. The devices were obtained by incorporating resveratrol into the polymeric matrices either directly or indirectly, that is, before or after irradiation, respectively. The devices obtained by the direct method were subjected to the gel-fraction, swelling, and cytotoxicity assays and behaved similarly to the corresponding matrices. The devices containing 0.05% resveratrol obtained by the direct method and those containing 0.1% resveratrol obtained by the indirect method were subjected to a 24 h release kinetics assay. The released resveratrol was quantified by high-performance liquid chromatography (HPLC). Only the devices obtained by the indirect method were able to release the incorporated resveratrol, which retained its antioxidant capacity after release. / Dissertation (Master's degree) / IPEN/D / Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
326
High-Performance Analytics (HPA). Soukup, Petr, January 2012 (has links)
The aim of this thesis on High-Performance Analytics is to provide a structured overview of high-performance methods for data analysis. The introduction deals with the definitions of primary and secondary data analysis and with the primary systems that are not suitable for analytical processing. The use of mobile devices, modern information technologies, and other factors has rapidly changed the character of data. The major part of the thesis is devoted to the historical turn toward new approaches to analytical data processing brought about by Big Data, a very frequent term these days. Toward the end of the thesis, the system resources that contribute significantly to these new approaches, as well as the technological solutions of High-Performance Analytics themselves, are discussed. The second, practical part of the thesis compares the performance of conventional data analysis methods with one of the high-performance methods of High-Performance Analytics (specifically, In-Memory Analytics). The individual solutions are compared in an identical environment on a High-Performance Analytics server; the methods are applied to a data sample whose volume is increased after every round of measurement. The conclusion evaluates the test results and discusses when the individual High-Performance Analytics methods can be used.
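The measurement protocol described above (re-running the same analysis on a sample whose volume grows after every round) can be sketched as a simple in-memory benchmark loop. The workload (a running mean), the doubling factor, and the timing call are assumptions made for illustration, not the thesis's actual test harness.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Compute the mean of an in-memory array: a stand-in analytical workload. */
    static double mean(const double *x, size_t n)
    {
        double s = 0.0;
        for (size_t i = 0; i < n; ++i) s += x[i];
        return s / (double)n;
    }

    int main(void)
    {
        /* Double the sample size after every measurement round. */
        for (size_t n = 1u << 20; n <= (1u << 24); n <<= 1) {
            double *x = malloc(n * sizeof *x);
            for (size_t i = 0; i < n; ++i) x[i] = (double)(i % 1000);

            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            double m = mean(x, n);
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            printf("n = %zu  mean = %.1f  time = %.4f s\n", n, m, sec);
            free(x);
        }
        return 0;
    }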
327
Dosagem de concreto de elevado desempenho pelo processo da calda de cimento. / Mixture proportioning for high performance concrete by cement grout process. Rosa Cristina Cecche Lintz, 30 September 1997 (has links)
This dissertation presents a mixture-proportioning method for high performance concrete based on the rheology of cement grout, together with the results of its application to some Brazilian materials. The method rests on two theoretical models: a rheological model for the viscosity of a suspension of polydispersed grains, and an empirical formula that predicts concrete strength from the strength of a standard mortar. Its main advantage is the saving in labor and materials, since only cement grouts are handled. It also yields concrete mixtures that are self-compacting when fresh and of high strength when hardened. The application of the method to Brazilian materials was carried out with some modifications of the original procedure: cubic specimens, thermal curing of the concrete in boiling water, use of ordinary Portland cement, etc. The main conclusions drawn from this application are: the accuracy of the strength-prediction formula, the determination of the optimum superplasticizer and mineral-addition contents, confirmation of the optimum silica fume content in high performance concretes, the effectiveness of thermal curing in boiling water, the efficiency of cubic specimens, and the differing behavior of the various materials used.
328
Sélection de caractéristiques stables pour la segmentation d'images histologiques par calcul haute performance / Robust feature selection for histology images through high performance computing. Bouvier, Clément, 18 January 2019 (has links)
In preclinical research, and more specifically in neurobiology, histology uses images produced by increasingly powerful optical microscopes that digitize entire sections at cell scale. Quantification of stained tissue such as neurons relies more and more on machine-learning-driven segmentation. However, such methods need a large amount of intermediate information, or features, extracted from the raw data, multiplying the quantity of data to process. As a result, the large number of features becomes an obstacle to processing large series of histological images in a fast and robust manner. Feature selection methods could reduce the amount of required information, but the selected feature subsets lack stability. We propose a novel methodology operating on high-performance computing (HPC) infrastructures and aiming at finding small, stable sets of features that enable fast and robust segmentation of histological whole sections acquired at very high resolution. The selection proceeds in two steps: the first operates at the scale of feature families (an intermediate pool between the whole feature space and individual features); the second is applied directly to the features drawn from the pre-selected families. In this work, the selected feature sets proved generalizable and stable for two different neuronal stainings and yield significant reductions in computation time and memory usage. This methodology should make exhaustive high-resolution histological studies on HPC infrastructures possible in both preclinical and, potentially, clinical research.
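A minimal sketch of such a two-stage selection is given below: feature scores computed on repeated subsamples are first aggregated per family to discard weak families, and the surviving features are then kept only if they score well in a stable fashion across the subsamples. The scoring function, thresholds, and random data here are placeholders, not the method actually used in the thesis.

    #include <stdio.h>
    #include <stdlib.h>

    #define N_FAMILIES       4
    #define FEATS_PER_FAMILY 5
    #define N_FEATURES       (N_FAMILIES * FEATS_PER_FAMILY)
    #define N_RESAMPLES      20

    int main(void)
    {
        srand(42);
        double score[N_RESAMPLES][N_FEATURES];

        /* Stand-in for per-subsample relevance scores in [0, 1); in practice these
         * would come from a relevance measure computed on each resampled subset. */
        for (int r = 0; r < N_RESAMPLES; ++r)
            for (int f = 0; f < N_FEATURES; ++f)
                score[r][f] = (double)rand() / RAND_MAX;

        /* Stage 1: average score per family, keep families above the overall mean. */
        double fam_score[N_FAMILIES] = {0}, overall = 0.0;
        for (int r = 0; r < N_RESAMPLES; ++r)
            for (int f = 0; f < N_FEATURES; ++f) {
                fam_score[f / FEATS_PER_FAMILY] += score[r][f];
                overall += score[r][f];
            }
        for (int k = 0; k < N_FAMILIES; ++k)
            fam_score[k] /= N_RESAMPLES * FEATS_PER_FAMILY;
        overall /= N_RESAMPLES * N_FEATURES;

        /* Stage 2: inside the kept families, keep features that beat a threshold
         * in at least 60% of the resamples (a simple stability criterion). */
        const double thresh = 0.5, min_freq = 0.6;
        for (int f = 0; f < N_FEATURES; ++f) {
            if (fam_score[f / FEATS_PER_FAMILY] < overall) continue;
            int hits = 0;
            for (int r = 0; r < N_RESAMPLES; ++r)
                if (score[r][f] > thresh) ++hits;
            if ((double)hits / N_RESAMPLES >= min_freq)
                printf("keep feature %d (family %d)\n", f, f / FEATS_PER_FAMILY);
        }
        return 0;
    }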
329
Towards a high performance parallel library to compute fluid flexible structures interactions. Nagar, Prateek, 08 April 2015 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / The LBM-IB method is a useful and popular simulation technique, adopted ubiquitously to solve fluid-structure interaction problems in computational fluid dynamics. These problems are known for using computing resources intensively while solving the mathematical equations involved in the simulations. Problems involving such interactions are omnipresent; it is therefore essential that a fast and accurate algorithm exist for solving these equations, so that real-life models of such complex analytical problems can be reproduced in a shorter time period. Being inherently parallel, LBM-IB proves to be an ideal candidate for developing parallel software. This research focuses on developing a parallel software library, LBM-IB, based on the algorithm proposed by [1], which is the first of its kind to utilize the high performance computing abilities of supercomputers procurable today. An initial sequential version of LBM-IB is developed and used as a benchmark for the correctness and performance evaluation of the shared-memory parallel versions. Two shared-memory parallel versions of LBM-IB have been developed, using OpenMP and the Pthread library respectively. The OpenMP version scales well, achieving as much as 83% speedup on multicore machines with <= 8 cores. Based on profiling and instrumentation of this version, and in order to improve data locality and increase the degree of parallelism, a Pthread-based data-centric version is developed that outperforms the OpenMP version by 53% on manycore machines. A distributed version using MPI interfaces on top of the cube-based Pthread version has also been designed for use on extreme-scale distributed-memory manycore systems.
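As a rough illustration of the shared-memory parallelization described above, the sketch below applies OpenMP to the collision step of a simplified D2Q9 lattice-Boltzmann update, the fluid half of an LBM-IB style solver. The lattice size, relaxation time, and the omission of streaming and of the immersed-boundary coupling are simplifications made for the example; this is not the thesis library's code.

    #include <stdio.h>
    #include <omp.h>

    #define NX 64
    #define NY 64
    #define Q  9

    /* D2Q9 lattice weights and discrete velocities. */
    static const double w[Q]  = {4.0/9, 1.0/9, 1.0/9, 1.0/9, 1.0/9,
                                 1.0/36, 1.0/36, 1.0/36, 1.0/36};
    static const int cx[Q] = {0, 1, 0, -1, 0, 1, -1, -1, 1};
    static const int cy[Q] = {0, 0, 1, 0, -1, 1, 1, -1, -1};

    static double f[NX * NY][Q];   /* particle distribution functions per cell */

    /* BGK collision: each lattice cell is independent, so the loop is
     * embarrassingly parallel; the streaming step (omitted here) would add the
     * neighbour dependencies that the library has to handle. */
    static void collide(double tau)
    {
        #pragma omp parallel for
        for (int c = 0; c < NX * NY; ++c) {
            double rho = 0.0, ux = 0.0, uy = 0.0;
            for (int i = 0; i < Q; ++i) {
                rho += f[c][i];
                ux  += f[c][i] * cx[i];
                uy  += f[c][i] * cy[i];
            }
            ux /= rho; uy /= rho;
            for (int i = 0; i < Q; ++i) {
                double cu  = cx[i] * ux + cy[i] * uy;
                double feq = w[i] * rho * (1.0 + 3.0 * cu + 4.5 * cu * cu
                                           - 1.5 * (ux * ux + uy * uy));
                f[c][i] += (feq - f[c][i]) / tau;   /* relax toward equilibrium */
            }
        }
    }

    int main(void)
    {
        /* Uniform fluid at rest: f = weights, so rho = 1 and u = 0 everywhere. */
        for (int c = 0; c < NX * NY; ++c)
            for (int i = 0; i < Q; ++i)
                f[c][i] = w[i];

        double t0 = omp_get_wtime();
        for (int step = 0; step < 100; ++step)
            collide(0.6);
        printf("100 collision steps on %d threads: %.3f s\n",
               omp_get_max_threads(), omp_get_wtime() - t0);
        return 0;
    }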
330
Högpresterande arbetssystem (HPWS) : En kartläggning av individuell målsättning, prestationsstyrning samt välmående på arbetsplatsen / High-performance work systems (HPWS): a survey of individual goal-setting, performance management, and well-being in the workplace. Wikander Ericsson, Johanna, January 2022 (has links)
Background: High Performance Work Systems (HPWS) can be defined as a system of HR practices intended to create an environment that gives employees more responsibility and builds greater organizational commitment. Employees are seen and treated as valuable assets. The increased organizational commitment in turn enables organizations to create and maintain competitive advantages in order to reach the organization's goals; employees' goals are aligned with the organization's, so that everyone works in the same direction. The main reason organizations implement HPWS is to increase the company's efficiency and productivity. Aim: The aim of the study is to map employees' work with individual goal-setting, their experience of performance management, and their well-being at work in an HPWS environment, and to examine the relationships between them. Method: A quantitative study based on data collected through questionnaires in Teams Forms from a non-random sample of people employed in an organization with an implemented HPWS. The data were analyzed statistically in PSPP, and the results are presented in tables and figures with accompanying descriptive text. Results: The results show that employees in an organization with an implemented HPWS have a high degree of autonomy and opportunity to influence decisions, value their work as competence-enhancing and meaningful, and have a good work-life balance. Conclusion: The results of the study make it reasonable to assume that HPWS creates an environment that has a positive impact on employees' work with individual goal-setting, their experience of performance management, and their well-being at work. The results also show tendencies toward differences between men and women, with women's mean values somewhat higher than men's. Based on that tendency, this study suggests that future research investigate which variables positively affect women's experience of HPWS. Keywords: HPWS (High Performance Work Systems), individual goal-setting, performance management, well-being at work