111

Rámec pro tvorbu aplikací s podporou peer-to-peer spolupráce / Application Development Framework for Peer-to-Peer Collaboration

Hrdina, Jan January 2019 (has links)
The thesis deals with the design and implementation of an application framework for building collaborative web editors that support real-time peer-to-peer collaboration. It summarizes existing approaches to data replication, from which M. Kleppmann's CRDT (conflict-free replicated data type) for JSON is chosen as the most suitable. Using the resulting framework, created content can be shared safely within groups of peers, where each member can be assigned different permissions. Custom communication protocols based on WebRTC, WebSocket and WebCrypto are designed and implemented for P2P connection establishment and subsequent communication. The framework supports conflict resolution and independent work without an Internet connection. For a consistent user experience, the library includes a set of user-interface elements for managing friends, groups, and other common tasks. The framework is written in the ReasonML language using functional design patterns. Its functionality is verified by building an example application, a mind-map editor.
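The framework builds on Kleppmann's JSON CRDT, which is not reproduced here; as a minimal sketch of the conflict-free merge property that CRDTs provide, the example below implements a grow-only counter (G-Counter), one of the simplest CRDTs. The class name and peer identifiers are invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal G-Counter CRDT: each peer increments its own slot, and merge takes
// the per-peer maximum, so concurrent updates converge without coordination.
public class GCounter {
    private final Map<String, Long> counts = new HashMap<>();

    public void increment(String peerId) {
        counts.merge(peerId, 1L, Long::sum);
    }

    public long value() {
        return counts.values().stream().mapToLong(Long::longValue).sum();
    }

    // Merge is commutative, associative, and idempotent -- the CRDT property
    // that lets peers exchange state in any order and still converge.
    public void merge(GCounter other) {
        other.counts.forEach((peer, n) -> counts.merge(peer, n, Long::max));
    }

    public static void main(String[] args) {
        GCounter a = new GCounter();
        GCounter b = new GCounter();
        a.increment("alice");          // concurrent edits on two peers
        b.increment("bob");
        b.increment("bob");
        a.merge(b);
        b.merge(a);
        System.out.println(a.value() + " == " + b.value()); // both print 3
    }
}
```

Because merge is commutative, associative, and idempotent, peers can exchange state in any order and over any transport and still converge, which is what makes coordination-free peer-to-peer sharing safe.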
112

Cuneiform

Brandt, Jörgen 29 January 2021 (has links)
Bioinformatics and next-generation sequencing data analyses often form large and complex pipelines. The tools and libraries making up the processing steps in these pipelines come from different sources and have different interfaces, which hampers their integration into data analysis frameworks. These pipelines also process large data sets, so users need to parallelize independent processing steps. The state of the art in large-scale scientific data analysis for bioinformatics and next-generation sequencing is the scientific workflow system. A scientific workflow system allows researchers to describe a data analysis pipeline as a scientific workflow that integrates external software, defines the data dependencies forming the pipeline, and parallelizes independent processing steps. Scientific workflow systems consist of a workflow language, which provides the user interface, and an execution environment. The workflow language determines how users express workflows, reuse and compose workflow fragments, and integrate external software, as well as how the system identifies independent processing steps and derives optimizations from a workflow's structure. The execution environment schedules and runs data processing operations. In this thesis we present Cuneiform, a workflow language, and its distributed execution environment. For Cuneiform's design we take the perspective of programming languages. We adopt methods from functional programming to express composition and data dependencies. We apply operational semantics and type systems to define well-formedness, consistency, and reduction of Cuneiform workflows. For the design of the distributed execution environment we take the perspective of distributed systems. We apply Petri nets to define the communication patterns among the distributed execution environment's agents.
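As a hedged sketch of the scheduling idea described above — processing steps whose data dependencies are satisfied may run in parallel — the example below wires up a toy three-step pipeline with Java's CompletableFuture rather than Cuneiform; the step names (align, callVariants, report) are hypothetical placeholders, not tools or tasks from the thesis.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Toy dataflow: two independent "processing steps" run in parallel, and a
// third step starts as soon as both of its inputs are available.
public class ToyWorkflow {
    static String align(String sample)        { return "aligned(" + sample + ")"; }
    static String callVariants(String sample) { return "variants(" + sample + ")"; }
    static String report(String a, String b)  { return "report[" + a + ", " + b + "]"; }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // No dependency between these two futures, so they may run concurrently.
        CompletableFuture<String> f1 =
            CompletableFuture.supplyAsync(() -> align("sampleA"), pool);
        CompletableFuture<String> f2 =
            CompletableFuture.supplyAsync(() -> callVariants("sampleB"), pool);

        // The data dependency is explicit: report needs both f1 and f2.
        CompletableFuture<String> f3 = f1.thenCombine(f2, ToyWorkflow::report);

        System.out.println(f3.join());
        pool.shutdown();
    }
}
```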
113

Efficient multiple hypothesis tracking using a purely functional array language

Nolkrantz, Marcus January 2022 (has links)
An autonomous vehicle is a complex system that requires a good perception of the surrounding environment to operate safely. One part of this is multiple object tracking, an essential component of camera-based perception whose responsibility is to estimate object motion from a sequence of images. This requires solving an association problem in which newly estimated object positions are mapped to previously predicted trajectories, and different solution strategies exist for it. In this work, a multiple hypothesis tracking algorithm is implemented. The purpose is to demonstrate that measurement associations improve compared to less compute-intensive alternatives. The implemented algorithm performed 13 percent better than an intersection-over-union tracker when evaluated with a standard evaluation metric. This work also investigates the use of abstraction layers to accelerate time-critical parallel operations on the GPU. It was found that the execution time of the tracking algorithm could be reduced by 42 percent by replacing four functions with implementations written in the purely functional array language Futhark. Finally, it was shown that a GPU code abstraction layer can lower the knowledge barrier required to write efficient CUDA kernels.
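The baseline mentioned above is an intersection-over-union (IoU) tracker; the sketch below illustrates that simpler greedy IoU association step, not the thesis's multiple hypothesis tracker or its Futhark GPU kernels. The bounding boxes and the threshold are made up for the example.

```java
// Greedy intersection-over-union (IoU) association between predicted tracks
// and new detections -- the simpler baseline an MHT tracker is compared to.
// Boxes are [x1, y1, x2, y2]; thresholds and data are illustrative only.
public class IouAssociation {
    static double iou(double[] a, double[] b) {
        double ix = Math.max(0, Math.min(a[2], b[2]) - Math.max(a[0], b[0]));
        double iy = Math.max(0, Math.min(a[3], b[3]) - Math.max(a[1], b[1]));
        double inter = ix * iy;
        double areaA = (a[2] - a[0]) * (a[3] - a[1]);
        double areaB = (b[2] - b[0]) * (b[3] - b[1]);
        return inter / (areaA + areaB - inter);
    }

    public static void main(String[] args) {
        double[][] tracks = { {0, 0, 10, 10}, {20, 20, 30, 30} };
        double[][] detections = { {21, 19, 31, 29}, {1, 1, 11, 11} };
        double threshold = 0.3;

        // Assign each track to the unused detection with the highest IoU.
        boolean[] used = new boolean[detections.length];
        for (int t = 0; t < tracks.length; t++) {
            int best = -1;
            double bestIou = threshold;
            for (int d = 0; d < detections.length; d++) {
                double score = iou(tracks[t], detections[d]);
                if (!used[d] && score > bestIou) { best = d; bestIou = score; }
            }
            if (best >= 0) used[best] = true;
            System.out.println("track " + t + " -> detection " + best);
        }
    }
}
```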
114

Relativistic Causal Ordering: A Memory Model for Scalable Concurrent Data Structures

Triplett, Josh 01 January 2012 (has links)
High-performance programs and systems require concurrency to take full advantage of available hardware. However, the available concurrent programming models force a difficult choice between simple models, such as mutual exclusion, that produce little to no concurrency, and complex models, such as Read-Copy Update, that can scale to all available resources. Simple concurrent programming models enforce atomicity and causality, and this enforcement limits concurrency. Scalable concurrent programming models expose the weakly ordered hardware memory model, requiring careful and explicit enforcement of causality to preserve correctness, as demonstrated in this dissertation through the manual construction of a scalable hash-table item-move algorithm. Recent research on "relativistic programming" aims to standardize the programming model of Read-Copy Update, but thus far these efforts have lacked a generalized memory ordering model, requiring data-structure-specific reasoning to preserve causality. I propose a new memory ordering model, "relativistic causal ordering", which combines the scalability of relativistic programming and Read-Copy Update with the simplicity of reader atomicity and automatic enforcement of causality. Programs written for the relativistic model translate to scalable concurrent programs for weakly ordered hardware via a mechanical process of inserting barrier operations according to well-defined rules. To demonstrate the relativistic causal ordering model, I walk through the straightforward construction of a novel concurrent hash-table resize algorithm, including its translation from the relativistic model to a hardware memory model, and show through benchmarks that the resulting algorithm scales far better than those based on mutual exclusion.
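Neither the relativistic causal ordering rules nor the hash-table resize algorithm is reproduced here; as a rough sketch of the read-copy-update publication pattern that relativistic programming generalizes, the example below uses Java's AtomicReference and deliberately omits grace periods and barrier placement.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

// Read-copy-update flavoured publication: readers dereference a shared
// pointer without locking; the writer builds a new version off to the side
// and publishes it with a single atomic swap. (A real RCU implementation
// also waits for a grace period before reclaiming the old version; that
// part is omitted here.)
public class CopyThenPublish {
    private final AtomicReference<List<String>> current =
        new AtomicReference<>(List.of("a", "b"));

    // Readers see either the old or the new version, never a partial update,
    // because each published list is immutable.
    public List<String> read() {
        return current.get();
    }

    public void add(String item) {
        while (true) {
            List<String> old = current.get();
            List<String> copy = new ArrayList<>(old);              // copy ...
            copy.add(item);                                        // ... update ...
            if (current.compareAndSet(old, List.copyOf(copy))) {   // ... publish
                return;
            }
        }
    }

    public static void main(String[] args) {
        CopyThenPublish shared = new CopyThenPublish();
        List<String> snapshot = shared.read();  // a reader holds a snapshot
        shared.add("c");                        // a writer publishes a new version
        System.out.println(snapshot);           // [a, b]  -- snapshot unchanged
        System.out.println(shared.read());      // [a, b, c]
    }
}
```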
115

A Performance Comparison of Java Streams and Imperative Loops / En prestandajämförelse av Java streams och imperativa loopar

Åkerfeldt, Magnus January 2023 (has links)
The Stream API was added in Java 8. With the help of lambda expressions (anonymous functions), streams enable functional-style operations on sequences of elements. In this project, we evaluate how streams perform compared to imperative loops in terms of execution time, from the perspective of how streams are commonly used in public GitHub repositories. Additionally, two algorithms are implemented with and without streams to assess the impact of stream usage on algorithmic performance. Parallel streams are only examined briefly due to their infrequent usage. We find that sequential streams are in general slower than imperative loops. However, stream performance depends heavily on how many elements are processed, referred to as the input size. For input sizes smaller than 100, most stream pipelines are several times slower than imperative loops. Meanwhile, for input sizes between 10 000 and 1 000 000, streams are on average only 39% to 74% slower than loops, and in some cases they even slightly outperform them. We also observe that using streams when implementing algorithms in some cases leads to much slower execution times, while in other cases it barely affects the execution time at all. We conclude that stream performance primarily depends on input size, presumably because of the high abstraction overhead of creating streams, but it also depends on other factors, such as operation type and pipeline length.
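As a concrete illustration of the kind of pipeline being compared — not one of the benchmark pipelines mined from GitHub in the thesis — here is a filter-map-sum written once with the Stream API and once as an imperative loop.

```java
import java.util.ArrayList;
import java.util.List;

// The kind of pipeline the comparison is about: filter-map-sum written once
// with the Stream API and once as an imperative loop. Both compute the same
// result; the thesis measures how their execution times differ with input size.
public class StreamsVsLoops {
    public static void main(String[] args) {
        List<Integer> numbers = new ArrayList<>();
        for (int i = 0; i < 1_000; i++) numbers.add(i);

        // Stream pipeline: lambdas and intermediate operations add abstraction
        // overhead that dominates for small inputs.
        long streamSum = numbers.stream()
                                .filter(n -> n % 2 == 0)
                                .mapToLong(n -> (long) n * n)
                                .sum();

        // Imperative equivalent: explicit loop, no stream objects allocated.
        long loopSum = 0;
        for (int n : numbers) {
            if (n % 2 == 0) {
                loopSum += (long) n * n;
            }
        }

        System.out.println(streamSum == loopSum); // true
    }
}
```

For a small list like this, the stream version pays a fixed cost for building the pipeline objects, which is consistent with the input-size effect described above.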
116

Sur l’utilisation du langage de programmation Scheme pour le développement de jeux vidéo / On the Use of the Scheme Programming Language for Video Game Development

St-Hilaire, David 10 1900 (has links)
This master's thesis aims at pinpointing the pros and cons of using the dynamic functional language Scheme for developing video games. The method used is first based on a theoretical approach: the specific requirements of video game programming and a detailed description of the relevant Scheme features are presented. Then, a practical approach is taken by presenting two video games of increasing complexity developed in Scheme: Space Invaders and Lode Runner. Their development resulted in the creation of various domain-specific languages and libraries, notably an object-oriented system and a coroutine system, each presented in its own chapter. Finally, the experience gained in this process is compared to that of video game companies that have also used Scheme for the development of commercial titles. The use of Scheme allowed us to build high-level abstractions that improved the modularity of the developed games without affecting their performance.
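The coroutine system mentioned above is a Scheme-specific DSL and is not reproduced here; the sketch below only approximates the cooperative, resumable-script style it enables for game entities, using a plain Java interface with an explicit step method in place of a real yield. The behaviours and frame counts are invented for the example.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// The coroutine idea behind per-entity game scripts: each behaviour runs a
// little, yields, and is resumed on the next frame. Java has no first-class
// coroutines, so the yield point is made explicit as a step() method that
// returns whether the behaviour wants to keep running.
public class CooperativeLoop {
    interface Behaviour {
        boolean step(int frame);   // return false when finished
    }

    public static void main(String[] args) {
        Deque<Behaviour> scheduled = new ArrayDeque<>();
        scheduled.add(frame -> {               // an invader drifting for 3 frames
            System.out.println("invader moves, frame " + frame);
            return frame < 3;
        });
        scheduled.add(frame -> {               // a one-shot sound trigger
            System.out.println("play shot sound");
            return false;
        });

        // The "game loop": give every live behaviour one step per frame.
        for (int frame = 1; !scheduled.isEmpty(); frame++) {
            int live = scheduled.size();
            for (int i = 0; i < live; i++) {
                Behaviour b = scheduled.poll();
                if (b.step(frame)) {
                    scheduled.add(b);          // re-schedule for the next frame
                }
            }
        }
    }
}
```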
117

Implantation des futures sur un système distribué par passage de messages / Implementing Futures on a Message-Passing Distributed System

Lasalle-Ratelle, Jérémie 08 1900 (has links)
This master's thesis presents an implementation of lazy task creation for distributed-memory multiprocessors. It offers a subset of the Message-Passing Interface's functionality and allows the parallelization of some problems that are hard to partition statically, thanks to its dynamic partitioning and load-balancing system. It is based on Multilisp, a Scheme dialect for parallel computing, and implements an MPI-like interface on top of it. It offers a richer and more expressive language than C and considerably reduces the work needed to develop programs equivalent to those written in MPI. Finally, dynamic partitioning makes it possible to write programs that would be very hard to develop in MPI. Tests were run on a 16-processor local machine and on a 16-processor cluster; the system achieves good speedups compared to equivalent sequential programs and acceptable performance compared to MPI. This thesis shows that using futures as a dynamic partitioning technique is feasible on distributed-memory multiprocessors.
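The thesis implements futures on top of Multilisp for distributed-memory machines; the sketch below shows the same future-based dynamic partitioning idea on a single shared-memory machine, using Java's fork/join framework, which also creates tasks lazily and balances load by work stealing. The cutoff and input data are arbitrary.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Future-style dynamic partitioning on shared memory: the problem is split
// recursively, each half becomes a task ("future") that may be stolen by an
// idle worker, and results are combined when both halves complete. The thesis
// applies the same idea across a distributed-memory cluster instead.
public class FutureSum extends RecursiveTask<Long> {
    private static final int CUTOFF = 10_000;
    private final long[] data;
    private final int lo, hi;

    FutureSum(long[] data, int lo, int hi) {
        this.data = data;
        this.lo = lo;
        this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= CUTOFF) {              // small enough: compute directly
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        FutureSum left = new FutureSum(data, lo, mid);
        left.fork();                           // schedule left half as a future
        long right = new FutureSum(data, mid, hi).compute();
        return right + left.join();            // touch the future: wait for it
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        long total = ForkJoinPool.commonPool().invoke(new FutureSum(data, 0, data.length));
        System.out.println(total);             // 499999500000
    }
}
```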
120

Translating relational queries into iterative programs

Freytag, Johann Christoph, January 1900 (has links)
Thesis (Ph. D.)--Harvard University, 1985. / Includes bibliographical references (p).
