101

O estudo do paralelismo no ensino da geometria analítica plana: do empírico ao dedutivo / The study of parallelism in the teaching of plane analytic geometry: from the empirical to the deductive

Hajnal, Fabiana 31 October 2007 (has links)
Secretaria da Educação do Estado de São Paulo / This dissertation involves a study of argumentation and proof in relation to the teaching and learning of analytic geometry, and particularly the property of parallelism within this topic. The work seeks answers to the following questions: in what form can dynamic geometry environments contribute to students' attempts to construct mathematical arguments and proofs? What difficulties and resistances emerge in learning situations which address the concept of parallelism in analytic geometry? To respond to these questions, a sequence of activities based on some aspects of didactical engineering was designed. For the conception of these activities, the research drew on the work of Parsysz concerning the levels of development of geometrical thinking, and the analysis of students' interactions with the activities was based on Balacheff's classification of different types of proof. Analysis of the results obtained in the application of the activity sequence showed that the dynamic geometry environment contributed to the creation of situations that supported the construction of meanings for the concept of parallelism, and that the students engaged with the activities in the manner proposed, producing some kind of relevant proof. / Esta dissertação tem por objetivo fazer um estudo sobre argumentação e prova envolvendo o paralelismo no ensino da geometria analítica. O trabalho procura responder às seguintes questões: de que forma os ambientes de geometria dinâmica contribuem para que os alunos construam suas argumentações e provas? Quais são as dificuldades ou resistências que se apresentam na situação de aprendizagem do conceito de paralelismo no ensino da geometria analítica? Para responder a esse questionamento, foi concebida uma seqüência de atividades baseada em alguns elementos da engenharia didática. Para a concepção das atividades a pesquisa se apoiou nos trabalhos de Parsysz sobre os níveis do desenvolvimento do pensamento geométrico e, para as análises das atividades, na tipologia de provas de Balacheff. A análise dos resultados obtidos na aplicação da seqüência mostrou que o ambiente de geometria dinâmica contribuiu para a criação de situações que ajudaram na construção do conceito de paralelismo e que os alunos alcançaram os objetivos propostos satisfatoriamente e produziram algum tipo de prova.
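For context, the parallelism property at the heart of this dissertation has a standard plane-analytic-geometry formulation; the statement below is the usual textbook one and is not quoted from the dissertation itself.

```latex
% Two non-vertical lines in the plane, written in slope-intercept form:
%   r: y = m_1 x + k_1        s: y = m_2 x + k_2
\[
  r \parallel s \iff m_1 = m_2 \quad (\text{with } k_1 \neq k_2\text{; if } k_1 = k_2 \text{ the lines coincide}).
\]
% Equivalently, for lines a_1 x + b_1 y + c_1 = 0 and a_2 x + b_2 y + c_2 = 0,
% parallelism amounts to a_1 b_2 - a_2 b_1 = 0.
```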
102

A transparent and energy aware reconfigurable multiprocessor platform for efficient ILP and TLP exploitation

Rutzig, Mateus Beck January 2012 (has links)
As the number of embedded applications is increasing, the current strategy of several companies is to launch a new platform within short periods, to execute the application set more efficiently, with low energy consumption. However, for each new platform deployment, new tool chains must come along, with additional libraries, debuggers and compilers. This strategy implies high hardware redesign costs, breaks binary compatibility and results in a high overhead in the software development process. Therefore, focusing on area savings, low energy consumption, binary compatibility maintenance and, mainly, software productivity improvement, we propose the exploitation of Custom Reconfigurable Arrays for Multiprocessor System (CReAMS). CReAMS is composed of multiple adaptive reconfigurable systems to efficiently exploit Instruction- and Thread-Level Parallelism (ILP and TLP) at the hardware level, in a totally transparent fashion. Conceived as a homogeneous organization, CReAMS shows a reduction of 37% in energy-delay product (EDP) compared to an ordinary multiprocessing platform when assuming the same chip area. When a variety of processors with different capabilities for exploiting ILP are coupled in a single die, conceiving CReAMS as a heterogeneous organization, performance improvements of up to 57% and energy savings of up to 36% are shown in comparison with the homogeneous platform. In addition, the efficiency of the adaptability provided by CReAMS is demonstrated in a comparison with a multiprocessing system composed of 4-issue out-of-order SparcV8 processors: performance improvements of 28% are shown under a power budget scenario.
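For readers unfamiliar with the metric quoted above, the energy-delay product is a standard figure of merit; the definition below is general background, not a formula taken from the thesis.

```latex
% Energy-delay product (EDP): a standard metric combining energy and performance.
\[
  \mathrm{EDP} = E \cdot T
\]
% where $E$ is the energy consumed while executing a workload and $T$ is its
% execution time.  A 37 percent EDP reduction at equal chip area therefore means
% $\mathrm{EDP}_{\mathrm{CReAMS}} = 0.63 \, \mathrm{EDP}_{\mathrm{baseline}}$.
```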
103

Uma introdução geral à poesia hebraica bíblica / A general introduction to the Biblical Hebrew poetry

Nunes Júnior, Edson Magalhães 28 November 2012 (has links)
Ao lidar com uma parte considerável da Bíblia Hebraica, o leitor precisa estar a par do que é Poesia Hebraica Bíblica, suas características, peculiaridades e nuances a fim de entender e apreciar o texto. Mas como os Hebreus não deixaram nenhum manual de poética, o debate sobre a poesia da Bíblia Hebraica envolve desde sua presença no texto até suas características gerais e específicas. No presente trabalho, apresenta-se uma breve discussão sobre a Poesia Hebraica Bíblica no cenário acadêmico atual. Também são expostas as características dessa poesia, com ênfase no paralelismo. / When dealing with a considerable part of the Hebrew Bible, the reader must be aware of what biblical Hebrew poetry is, its characteristics, peculiarities and details, in order to understand and appreciate the text. Since there isn't a Hebrew manual of poetics, the debate about biblical Hebrew poetry ranges from its presence in the text to its general and specific characteristics. The following research presents a brief discussion of biblical Hebrew poetry in the current academic scenario, as well as the characteristics of this kind of poetry, with an emphasis on parallelism.
104

Stratégies d'analyse de performance pour les applications basées sur tâches sur plates-formes hybrides / Performance Analysis Strategies for Task-based Applications on Hybrid Platforms

Garcia Pinto, Vinicius 30 October 2018 (has links)
Les techniques de programmation pour le calcul de haute performance ont adopté les modèles basés sur le parallélisme de tâches, qui sont capables de s'adapter plus facilement à des superordinateurs avec des architectures hybrides. La performance des applications basées sur tâches dépend fortement des heuristiques d'ordonnancement dynamiques et de sa capacité à exploiter les ressources de calcul et de communication. Malheureusement, les stratégies d'analyse de performance traditionnelles ne sont pas convenables pour comprendre les supports d'exécution dynamiques et les applications basées sur tâches. Ces stratégies prévoient un comportement régulier avec des phases de calcul et de communication ; par contre, les applications basées sur tâches ne manifestent pas de phases précises. Par ailleurs, la granularité plus fine des applications basées sur tâches provoque typiquement des comportements stochastiques qui donnent lieu à des structures irrégulières qui sont difficiles à analyser. Dans cette thèse, nous proposons des stratégies d'analyse de performance qui exploitent la combinaison de la structure de l'application, de l'ordonnancement et des informations de la plate-forme. Nous présentons comment nos stratégies peuvent aider à comprendre des problèmes de performance dans des applications basées sur tâches qui s'exécutent dans des plates-formes hybrides. Nos stratégies d'analyse de performance sont construites avec des outils modernes pour l'analyse de données, ce qui permet la création de panneaux de visualisation personnalisés. Ces panneaux permettent la compréhension et l'identification de problèmes de performance occasionnés par de mauvaises décisions d'ordonnancement et une configuration incorrecte du support d'exécution et de la plate-forme. Grâce à la combinaison de simulation et de débogage, nous pouvons aussi construire une représentation visuelle de l'état interne et des estimations calculées par l'ordonnanceur durant l'ordonnancement d'une nouvelle tâche. Nous validons notre proposition par l'analyse de traces d'exécution d'une factorisation de Cholesky implémentée avec le support d'exécution StarPU et exécutée dans une plate-forme hybride (CPU/GPU). Nos études de cas montrent comment améliorer la partition des tâches entre le multi-(GPU, coeur) pour s'approcher des bornes inférieures théoriques, comment améliorer le pipeline des opérations MPI entre le multi-(noeud, coeur, GPU) pour réduire le démarrage lent dans les noeuds distribués et comment optimiser le support d'exécution pour augmenter la bande passante MPI. Avec l'emploi des stratégies de simulation et de débogage, nous fournissons un workflow pour examiner, en détail, les décisions d'ordonnancement. Cela permet de proposer des changements pour améliorer les mécanismes d'ordonnancement et de prefetch du support d'exécution. / Programming paradigms in High-Performance Computing have been shifting toward task-based models that are capable of adapting readily to heterogeneous and scalable supercomputers. The performance of task-based applications heavily depends on the runtime scheduling heuristics and on its ability to exploit computing and communication resources. Unfortunately, the traditional performance analysis strategies are unfit to fully understand task-based runtime systems and applications: they expect a regular behavior with communication and computation phases, while task-based applications demonstrate no clear phases. Moreover, the finer granularity of task-based applications typically induces a stochastic behavior that leads to irregular structures that are difficult to analyze. In this thesis, we propose performance analysis strategies that exploit the combination of application structure, scheduler, and hardware information. We show how our strategies can help to understand performance issues of task-based applications running on hybrid platforms. Our performance analysis strategies are built on top of modern data analysis tools, enabling the creation of custom visualization panels that allow understanding and pinpointing performance problems incurred by bad scheduling decisions and incorrect runtime system and platform configuration. By combining simulation and debugging, we are also able to build a visual representation of the internal state and the estimations computed by the scheduler when scheduling a new task. We validate our proposal by analyzing traces from a Cholesky decomposition implemented with the StarPU task-based runtime system and running on hybrid (CPU/GPU) platforms. Our case studies show how to enhance the task partitioning among the multi-(GPU, core) to get closer to theoretical lower bounds, how to improve MPI pipelining in multi-(node, core, GPU) to reduce the slow start in distributed nodes, and how to upgrade the runtime system to increase MPI bandwidth. By employing simulation and debugging strategies, we also provide a workflow to investigate, in depth, assumptions concerning the scheduler decisions. This allows us to suggest changes to improve the runtime system scheduling and prefetch mechanisms.
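To make the notion of a task-based application concrete, the sketch below spells out the tiled Cholesky factorization used as the case study, written as plain Python rather than StarPU code. Each kernel call (POTRF, TRSM, SYRK/GEMM) corresponds to one task that a runtime system would schedule on a CPU core or a GPU; the tile size and the test matrix are arbitrary choices for the example.

```python
import numpy as np

def tiled_cholesky(A, nb):
    """Right-looking tiled Cholesky: every kernel call below is one 'task'."""
    T = A.shape[0] // nb                      # number of tile rows/columns
    L = A.copy()
    for k in range(T):
        kk = slice(k * nb, (k + 1) * nb)
        # POTRF: factorize the diagonal tile.
        L[kk, kk] = np.linalg.cholesky(L[kk, kk])
        for i in range(k + 1, T):
            ii = slice(i * nb, (i + 1) * nb)
            # TRSM: panel update, depends on POTRF(k).
            L[ii, kk] = np.linalg.solve(L[kk, kk], L[ii, kk].T).T
        for i in range(k + 1, T):
            ii = slice(i * nb, (i + 1) * nb)
            for j in range(k + 1, i + 1):
                jj = slice(j * nb, (j + 1) * nb)
                # SYRK/GEMM: trailing-matrix update, depends on the TRSMs.
                L[ii, jj] -= L[ii, kk] @ L[jj, kk].T
    return np.tril(L)

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8 * np.eye(8)                   # symmetric positive definite
L = tiled_cholesky(A, nb=2)
print(np.allclose(L @ L.T, A))                # True
```

In a runtime such as StarPU, the data dependencies between these calls, through the tiles they read and write, are exactly what the scheduler exploits to run tasks in parallel on CPUs and GPUs.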
105

Um método para paralelização automática de workflows intensivos em dados / A method for automatic parallelization of data-intensive workflows

Watanabe, Elaine Naomi 22 May 2017 (has links)
A análise de dados em grande escala é um dos grandes desafios computacionais atuais e está presente não somente em áreas da ciência moderna mas também nos setores público e industrial. Nesses cenários, o processamento dos dados geralmente é modelado como um conjunto de atividades interligadas por meio de fluxos de dados (os workflows). Devido ao alto custo computacional, diversas estratégias já foram propostas para melhorar a eficiência da execução de workflows intensivos em dados, tais como o agrupamento de atividades para minimizar as transferências de dados e a paralelização do processamento, de modo que duas ou mais atividades sejam executadas ao mesmo tempo em diferentes recursos computacionais. O paralelismo nesse caso é definido pela estrutura descrita em seu modelo de composição de atividades. Em geral, os Sistemas de Gerenciamento de Workflows, responsáveis pela coordenação e execução dessas atividades em um ambiente distribuído, desconhecem o tipo de processamento a ser realizado e por isso não são capazes de explorar automaticamente estratégias para execução paralela. As atividades paralelizáveis são definidas pelo usuário em tempo de projeto e criar uma estrutura que faça uso eficiente de um ambiente distribuído não é uma tarefa trivial. Este trabalho tem como objetivo prover execuções mais eficientes de workflows intensivos em dados e propõe para isso um método para a paralelização automática dessas aplicações, voltado para usuários não-especialistas em computação de alto desempenho. Este método define nove anotações semânticas para caracterizar a forma como os dados são acessados e consumidos pelas atividades e, assim, levando em conta os recursos computacionais disponíveis para a execução, criar automaticamente estratégias que explorem o paralelismo de dados. O método proposto gera réplicas das atividades anotadas e define também um esquema de indexação e distribuição dos dados do workflow que possibilita maior acesso paralelo. Avaliou-se sua eficiência em dois modelos de workflows com dados reais, executados na plataforma de nuvem da Amazon. Usou-se um SGBD relacional (PostgreSQL) e um NoSQL (MongoDB) para o gerenciamento de até 20,5 milhões de objetos de dados em 21 cenários com diferentes configurações de particionamento e replicação de dados. Os resultados obtidos mostraram que a paralelização da execução das atividades promovida pelo método reduziu o tempo de execução do workflow em até 66,6% sem aumentar o seu custo monetário. / The analysis of large-scale datasets is one of the major current computational challenges and it is present not only in fields of modern science but also in the industry and public sector. In these scenarios, data processing is usually modeled as a set of activities interconnected through data flows, known as workflows. Due to their high computational cost, several strategies have been proposed to improve the efficiency of data-intensive workflows, such as activity clustering to minimize data transfers and parallelization of data processing to reduce makespan, in which two or more activities are performed at the same time on different computational resources. The parallelism, in this case, is defined in the structure of the workflow's model of activity composition. In general, Workflow Management Systems are responsible for the coordination and execution of these activities in a distributed environment. However, they are not aware of the type of processing that will be performed by each one of them. Thus, they are not able to automatically explore strategies for parallel execution. Parallelizable activities are defined by the user at workflow design time, and creating a structure that makes efficient use of a distributed environment is not a trivial task. This work aims to provide more efficient executions for data-intensive workflows and, for that, proposes a method for automatic parallelization of these applications, focusing on users who are not specialists in high-performance computing. This method defines nine semantic annotations to characterize how data is accessed and consumed by activities and thus, taking into account the available computational resources, automatically creates strategies that exploit data parallelism. The proposed method generates replicas of annotated activities. It also defines a workflow data indexing and distribution scheme that allows greater parallel access. Its efficiency was evaluated in two workflow models with real data, executed on the Amazon cloud platform. A relational (PostgreSQL) and a NoSQL (MongoDB) DBMS were used to manage up to 20.5 million data objects in 21 scenarios with different partitioning and data replication settings. The experiments have shown that the parallelization of activity execution promoted by the method reduced the workflow makespan by up to 66.6% without increasing its monetary cost.
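As a schematic illustration of the data parallelism the method automates, the sketch below runs replicas of a single annotated activity over disjoint partitions of its input. The partitioning scheme and the process_partition placeholder are assumptions made for the example, not elements of the dissertation's method.

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import islice

# Hypothetical annotated activity: it consumes each input record independently,
# so its execution can be replicated over disjoint data partitions.
def process_partition(records):
    # Placeholder for the real activity logic (e.g., parsing, filtering, scoring).
    return [r * 2 for r in records]

def partition(data, n_parts):
    """Split the input into n_parts roughly equal, disjoint chunks."""
    it = iter(data)
    size = -(-len(data) // n_parts)          # ceiling division
    return [list(islice(it, size)) for _ in range(n_parts)]

def run_replicated(data, n_replicas=4):
    """Run one replica of the annotated activity per partition, in parallel."""
    parts = partition(data, n_replicas)
    with ProcessPoolExecutor(max_workers=n_replicas) as pool:
        results = pool.map(process_partition, parts)
    return [item for chunk in results for item in chunk]   # merge partial outputs

if __name__ == "__main__":
    print(run_replicated(list(range(10)), n_replicas=3))
```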
106

[en] SUPPORT INTEGRATION OF DYNAMIC WORKLOAD GENERATION TO SAMBA FRAMEWORK / [pt] INTEGRAÇÃO DE SUPORTE PARA GERAÇÃO DE CARGA DINÂMICA AO AMBIENTE DE DESENVOLVIMENTO SAMBA

SERGIO MATEO BADIOLA 25 October 2005 (has links)
[pt] Alexandre Plastino em sua tese de doutorado apresenta um ambiente de desenvolvimento de aplicações paralelas SPMD (Single Program, Multiple Data) denominado SAMBA que permite a geração de diferentes versões de uma aplicação paralela a partir da incorporação de diferentes algoritmos de balanceamento de carga disponíveis numa biblioteca própria. O presente trabalho apresenta uma ferramenta de geração de carga dinâmica integrada a este ambiente que possibilita criar, em tempo de execução, diferentes perfis de carga externa a serem aplicados a uma aplicação paralela em estudo. Dessa forma, pretende-se permitir que o desenvolvedor de uma aplicação paralela possa selecionar o algoritmo de balanceamento de carga mais apropriado frente a condições variáveis de carga externa. Com o objetivo de validar a integração da ferramenta ao ambiente SAMBA, foram obtidos resultados da execução de duas aplicações SPMD distintas. / [en] Alexandre Plastino's thesis presents a framework for the development of SPMD parallel applications, named SAMBA, that enables the generation of different versions of a parallel application by incorporating different load balancing algorithms from an internal library. This dissertation presents a dynamic workload generation tool, integrated into SAMBA, that makes it possible to create, at execution time, different external workload profiles to be applied to a parallel application under study. The objective is to enable a parallel application developer to select the most appropriate load balancing algorithm based on its performance under variable conditions of external workload. In order to validate this integration, two SPMD applications were implemented.
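A dynamic external-load generator of the kind described can be pictured with a short sketch: a background process alternates busy and idle intervals so that the CPU load follows a time-varying profile. The profile format and the 50 ms control period below are illustrative assumptions, not details of the actual tool.

```python
import time

def generate_external_load(profile, period=0.05):
    """Busy-wait for a fraction of each control period, following a load profile.

    profile: list of (duration_seconds, cpu_fraction) pairs, e.g. a ramp or a burst.
    """
    for duration, fraction in profile:
        end = time.monotonic() + duration
        while time.monotonic() < end:
            busy_until = time.monotonic() + fraction * period
            while time.monotonic() < busy_until:
                pass                      # burn CPU for the busy part of the period
            time.sleep(max(0.0, (1.0 - fraction) * period))   # stay idle for the rest

if __name__ == "__main__":
    # Hypothetical profile: 2 s at roughly 20% CPU, then 2 s at roughly 80% CPU on one core.
    generate_external_load([(2.0, 0.2), (2.0, 0.8)])
```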
107

Implementation of Data Parallel Primitives on MIMD Shared Memory Systems

Mortensen, Christian January 2019 (has links)
This thesis presents an implementation of a multi-threaded C library for performing data parallel computations on MIMD shared memory systems, with support for user-defined operators and one-dimensional sparse arrays. Multi-threaded parallel execution was achieved through the use of POSIX threads, and the library exposes several functions for performing data parallel computations directly on arrays. The implemented functions were based on a set of primitives that many data parallel programming languages have in common. The individual scalability of the primitives varied greatly, with most of them only gaining a significant speedup when executed on two cores, followed by a significant drop-off in speedup as more cores were added. An exception to this was the reduction primitive, however, which managed to achieve near-optimal speedup in most tests. The library proved unviable for expressing algorithms requiring more than one or two primitives in sequence, due to the overhead that each of them causes.
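The reduction primitive singled out above can be sketched in a few lines: split the array into chunks, reduce each chunk in a worker, then combine the partial results. The sketch below is Python with a thread pool purely to show the structure; the library described in the thesis is C with POSIX threads, where the chunk workers run truly in parallel (in CPython the GIL limits speedup for pure-Python operators).

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce
import operator

def parallel_reduce(op, data, workers=4):
    """Two-phase reduction with an associative operator `op`:
    each worker reduces one chunk, then the partial results are combined."""
    size = -(-len(data) // workers)                    # ceiling division
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(lambda c: reduce(op, c), chunks))
    return reduce(op, partials)

print(parallel_reduce(operator.add, list(range(1, 101))))   # 5050
```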
108

The French Art Song Style in Selected Songs by Charles Ives

Talbott, Christy Jo 14 July 2004 (has links)
Charles Ives is commonly referred to as the "Father of American Music." The implication is one that Ives himself would agree with: that he wrote purely American ideas from his own environment, without reference to other styles or methods, in particular the widespread European tradition. Some composers, like Aaron Copland and Roger Sessions, created an American sonority by incorporating the concepts of musical construction they studied at the Paris Conservatoire. Ives, conversely, received no instruction in Europe, but the techniques so prevalent in the music of the French art song are found in certain songs written by Ives. Though he claimed no European influence, he used the late nineteenth-century French song style in some of his songs, and he also borrowed tunes from the French composers. This study identifies significant trademarks of nineteenth-century French song and the stylistic traits associated with a variety of prominent composers of the time. Ives's childhood musical influences, his church position, and his studies at Yale University establish a relationship between Ives and the French musical ideas. The primary source for his songs is his collection entitled 114 Songs. Ives gathered his songs and put them into one collection, which included Four French Songs. Through the analysis of several songs, including the four French songs written by Ives and three comparisons of songs by Ives with songs by French composers, it becomes evident that Ives was influenced, to a certain extent, by French music and used many techniques of the style.
109

A Scalable Run-Time System for NestStep on Cluster Supercomputers

Sohl, Joar January 2006 (has links)
NestStep is a collection of parallel extensions to existing programming languages. These extensions support a shared memory model and nested parallelism. NestStep is based on the Bulk-Synchronous Parallel (BSP) programming model. Most of the communication of data in NestStep takes place in a combine/commit phase, which is essentially a reduction followed by a broadcast. The primary aim of the project that this thesis is based on was to develop a runtime system for NestStep-C, the extensions for the C programming language. The secondary aim was to find which tree structure, among a selected few, is the best for communicating data in the combine/commit phase. This thesis includes information about NestStep, how to interface with the NestStep runtime system, some example applications, and benchmarks for determining the best tree structure. A binomial tree structure and trees similar to it were empirically found to yield the best performance.
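The combine/commit phase described above (essentially a reduction followed by a broadcast over a tree) can be simulated in a few lines. The sketch below models only the binomial-tree combine among p ranks; the use of addition as the combine operator and the rank numbering are assumptions for illustration, not part of the NestStep runtime.

```python
def binomial_combine(values, op=lambda a, b: a + b):
    """Simulate a binomial-tree reduction over p 'processes' holding `values`.

    Round k: every rank that is a multiple of 2^(k+1) receives the partial
    result of the rank 2^k above it and combines it with its own.  After
    ceil(log2(p)) rounds, rank 0 holds the combined value, which a broadcast
    down the same tree would then commit to every process.
    """
    partial = list(values)
    p = len(partial)
    step = 1
    rounds = []
    while step < p:
        sends = []
        for rank in range(0, p, 2 * step):
            src = rank + step
            if src < p:
                partial[rank] = op(partial[rank], partial[src])
                sends.append((src, rank))
        rounds.append(sends)
        step *= 2
    return partial[0], rounds

total, schedule = binomial_combine(range(1, 9))
print(total)          # 36: the combined (reduced) value held by rank 0
print(schedule)       # per-round (sender, receiver) pairs of the tree
```

The same tree, traversed in the reverse direction, gives the broadcast used in the commit step.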
110

The Copycat Project: An Experiment in Nondeterminism and Creative Analogies

Hofstadter, Douglas 01 January 1984 (has links)
A micro-world is described, in which many analogies involving strikingly different concepts and levels of subtlety can be made. The question "What differentiates the good ones from the bad ones?" is discussed, and then the problem of how to implement a computational model of the human ability to come up with such analogies (and to have a sense for their quality) is considered. A key part of the proposed system, now under development, is its dependence on statistically emergent properties of stochastically interacting "codelets" (small pieces of ready-to-run code created by the system, and selected at random to run with probability proportional to heuristically assigned "urgencies"). Another key element is a network of linked concepts of varying levels of "semanticity", in which activation spreads and indirectly controls the urgencies of new codelets. There is pressure in the system toward maximizing the degree of "semanticity" or "intensionality" of descriptions of structures, but many such pressures, often conflicting, must interact with one another, and compromises must be made. The shifting of (1) perceived boundaries inside structures, (2) descriptive concepts chosen to apply to structures, and (3) features perceived as "salient" or not, is called "slippage". What can slip, and how, are emergent consequences of the interaction of (1) the temporary ("cytoplasmic") structures involved in the analogy with (2) the permanent ("Platonic") concepts and links in the conceptual proximity network, or "slippability network". The architecture of this system is postulated as a general architecture suitable for dealing not only with fluid analogies, but also with other types of abstract perception and categorization tasks, such as musical perception, scientific theorizing, Bongard problems and others.
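The stochastic selection mechanism described above (codelets chosen to run with probability proportional to their urgencies) can be shown in a small sketch; the Codelet class and the urgency values are hypothetical and not taken from the Copycat implementation.

```python
import random
from dataclasses import dataclass

@dataclass
class Codelet:
    """A small piece of ready-to-run code with a heuristically assigned urgency."""
    name: str
    urgency: float

def pick_codelet(coderack, rng=random):
    """Choose the next codelet to run, with probability proportional to urgency."""
    chosen = rng.choices(coderack, weights=[c.urgency for c in coderack], k=1)[0]
    coderack.remove(chosen)          # a codelet is consumed once selected
    return chosen

if __name__ == "__main__":
    rack = [Codelet("scan-structure", 5.0),
            Codelet("propose-bond", 20.0),
            Codelet("describe-salience", 1.0)]
    print(pick_codelet(rack).name)   # usually "propose-bond", but not always
```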
