Structural Performance Comparison of Parallel Software Applications

Weber, Matthias, 09 December 2016
With the rising complexity of high performance computing systems and their parallel software, performance analysis and optimization have become essential in the development of efficient applications. The comparison of performance data is a key operation in performance analysis. An analyst may conduct different types of comparisons to understand the performance properties of an application. One use case is comparing performance data from multiple measurements; typical examples are before/after comparisons when applying optimizations or changing code versions. Besides comparing performance between multiple runs, comparing performance characteristics across the parallel execution streams of an application is essential for detecting performance problems, typically imbalances, outliers, or changing runtime behavior during execution. While such comparisons are straightforward for the aggregated data in performance profiles, only limited solutions exist for comparing event traces. Trace-based analysis, i.e., the collection of fine-grained information on individual application events with timestamps and application context, has proven to be a powerful technique. The detailed performance information included in event traces makes them well suited for performance analysis. However, this level of detail also presents a challenge, because it implies a large and overwhelming amount of data. Currently, users must compare event traces manually, which is extremely challenging and time-consuming because of the large volume of detailed data and the need to correctly line up trace events. To fill this gap, this work proposes a set of techniques that automatically align traces. The alignment allows their structural comparison and the highlighting of differences between them. A set of novel metrics provides the user with an objective measure of the differences between traces, both in terms of differences in the event stream and timing differences across events. An additional important aspect of trace-based analysis is the visualization of performance data in event timelines. This has proven to be a powerful approach for detecting various types of performance problems. However, visualizing large numbers of event timelines quickly hits the limits of the available display resolution. Likewise, identifying performance problems within the large amount of visualized performance data is challenging. To alleviate these problems, this work proposes two new approaches to event timeline visualization. First, novel folding strategies for event timelines facilitate visual scalability while providing powerful overviews of the performance data. Second, this work presents an effective approach that automatically identifies and highlights several types of performance-critical sections in an application run. The approach identifies the time-dominant functions of an application and subsequently uses them to analyze runtime imbalances throughout the run. Intuitive visualizations present the resulting runtime variations and guide the analyst to performance hot spots. Evaluations with benchmarks and real-world applications assess all introduced techniques.
The effectiveness of the comparison approaches is demonstrated by showing automatically detected performance issues and structural differences between different versions of applications and across parallel execution streams. Case studies showcase the capabilities of the event timeline visualization techniques by demonstrating scalable performance data visualizations and detecting performance problems and code inefficiencies in real-world applications.
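A minimal sketch of the kind of trace alignment and difference metrics described above, assuming a simplified event model (name plus duration) and a generic sequence-alignment routine; the thesis's actual alignment algorithm and metrics are not reproduced in this abstract, so everything below is illustrative:

```python
# Hypothetical sketch: align two event traces by event name, then score
# structural similarity and timing differences on the matched events.
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class Event:
    name: str        # e.g. function or MPI call entered
    duration: float  # seconds spent in this event

def compare_traces(a: list[Event], b: list[Event]) -> tuple[float, float]:
    """Return (structural_similarity, total_timing_delta_on_matches)."""
    matcher = SequenceMatcher(a=[e.name for e in a], b=[e.name for e in b])
    timing_delta = 0.0
    for block in matcher.get_matching_blocks():
        for i in range(block.size):  # events matched by the alignment
            timing_delta += abs(a[block.a + i].duration - b[block.b + i].duration)
    return matcher.ratio(), timing_delta

before = [Event("init", 0.1), Event("solve", 4.0), Event("io", 0.5)]
after = [Event("init", 0.1), Event("solve", 2.5), Event("checkpoint", 0.2), Event("io", 0.5)]
similarity, delta = compare_traces(before, after)
print(f"structure: {similarity:.2f}, timing delta on matched events: {delta:.2f}s")
```

Aligning on event names before comparing timings mirrors the two kinds of differences the abstract distinguishes: differences in the event stream itself and timing differences across matched events.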

Copyright in the European Union: Influence of Interest Groups during the Creation of the Collective Rights Management Directive

Slabyhoudek, Václav, January 2018
The master's thesis focuses on the influence of interest groups during the creation of the 2012 proposal for a directive on collective rights management. In particular, it deals with the pre-legislative phase of the legislative process, which began in April 2004. The theoretical framework covers the conceptualization of interest groups, lobbying, and influence. The mechanisms of influence are analysed using two theories: rational choice theory and rational choice institutionalism. The thesis uses process tracing (theory testing) as its main methodological approach. Empirical evidence is investigated by analysing primary sources; the main subjects of the analysis are the most relevant European Commission documents concerning the pre-legislative phase. Four semi-structured interviews with selected relevant actors were also conducted. The thesis concludes by confirming or disproving the main hypothesis: specific interest groups succeeded in influencing the text of the proposal for a directive on the collective management of copyright.

Civil Society and its Agency in the Political Process

Mazák, Jaromír, January 2019
Civil society is made up of committed individuals, non-governmental non-profit organizations, their employees, volunteers, and other supporters, as well as the relations among these actors. Civil society activities include community development, the advancement of leisure and professional interests, services to vulnerable groups, and efforts to intervene in the political process in support of certain legislation and systemic change. This work focuses on the latter, i.e. the ways in which civil society actors influence the political process. In the introductory chapter I present an overview of current research on civil society and political activism in the Czech Republic and other post-communist countries of Central and Eastern Europe. In this chapter I identify five central propositions that can be formulated on the basis of the existing scientific discussion, and I subject them to critical assessment. In addition, I argue that against the backdrop of this discussion, two streams of literature can be distinguished which differ in their assessment of the quality of civil society in Central and Eastern Europe, and I try to clarify the reasons for these contradictions. In the second chapter I offer an overview of social movement theories, thus completing the theoretical basis for the empirical part of this...

Trace-based Performance Analysis for Hardware Accelerators

Juckeland, Guido, 05 February 2013
This thesis presents how performance data from hardware accelerators can be included in event logs. It extends the capabilities of trace-based performance analysis to also monitor and record data from this novel parallelization layer. The increasing awareness of the power consumption of computing devices has led to an interest in hybrid computing architectures as well. High-end computers, workstations, and mobile devices are starting to employ hardware accelerators to offload computationally intense and parallel tasks, while at the same time retaining a highly efficient scalar compute unit for non-parallel tasks. This execution pattern is typically asynchronous, so that the scalar unit can resume other work while the hardware accelerator is busy. Performance analysis tools provided by hardware accelerator vendors cover the situation of one host using one device very well, yet they do not address the needs of the high performance computing community. This thesis investigates ways to extend existing methods for recording events from highly parallel applications to also cover scenarios in which hardware accelerators aid these applications. After introducing a generic approach that is suitable for any API-based acceleration paradigm, the thesis derives a suggestion for a generic performance API for hardware accelerators and describes its implementation with NVIDIA CUPTI. In a next step, the visualization of event logs containing data from execution streams on different levels of parallelism is discussed. In order to overcome the limitations of classic performance profiles and timeline displays, a graph-based visualization using Parallel Performance Flow Graphs (PPFGs) is introduced. This novel approach uses program states to display similarities and differences between the potentially very large number of event streams and thus enables a fast way to spot load imbalances. The thesis concludes with an in-depth analysis of a case study of PIConGPU, a highly parallel, multi-hybrid plasma physics simulation that benefited greatly from the developed performance analysis methods.
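The core recording problem can be sketched as merging asynchronous device activity into the host event log. The record layout below is an illustrative assumption in the spirit of the data a vendor interface such as NVIDIA CUPTI can deliver; it is not CUPTI's actual API:

```python
# Simplified sketch: host-side API calls and asynchronous device-side
# activities form separate event streams that must be merged into one
# ordered event log for trace-based analysis.
from dataclasses import dataclass

@dataclass
class Record:
    stream: str   # e.g. "host:thread0" or "device:stream1"
    name: str     # API call or kernel name
    begin: float  # timestamp (already converted to the host clock)
    end: float

def merge_event_log(host: list[Record], device: list[Record]) -> list[Record]:
    """Interleave host and device records by begin timestamp. Assumes device
    timestamps were translated into the host time base beforehand."""
    return sorted(host + device, key=lambda r: r.begin)

host = [Record("host:thread0", "cudaMemcpyAsync", 0.0, 0.1),
        Record("host:thread0", "kernel_launch", 0.1, 0.12)]
device = [Record("device:stream1", "memcpy HtoD", 0.05, 0.4),
          Record("device:stream1", "solver_kernel", 0.4, 2.1)]
for rec in merge_event_log(host, device):
    print(f"{rec.begin:6.2f} {rec.stream:15s} {rec.name}")
```

The merged log makes the asynchrony visible: the host thread resumes work while the device stream is still executing, which is exactly the execution pattern the abstract describes.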

A Performance Comparison of Dynamic and Inline Ray Tracing in DXR: An application in soft shadows

Sjöberg, Joakim; Zachrisson, Filip, January 2021
Background. Ray tracing is a tool that can be used to increase the quality of the graphics in games. One application in which ray tracing excels is generating shadows, because ray tracing can simulate how shadows are formed in real life more accurately than rasterization techniques can. With the release of GPUs with hardware support for ray tracing, it can now be used in real-time graphics applications to some extent. However, it is still a computationally heavy task requiring performance improvements. Objectives. This thesis evaluates the difference in performance of three ray-tracing methods in DXR Tier 1.1, namely dynamic ray tracing and two forms of inline ray tracing. To further investigate ray-tracing performance, soft shadows are implemented to see whether the driver can perform optimizations differently (depending on the choice of ray-tracing method) on the subsequent and/or preceding API interactions. With the pipelines implemented, benchmarks are performed using different GPUs, scenes, and a varying number of shadow-casting lights. Methods. The scientific method is based on an experimental approach, using both implementation and performance tests. The experimental approach begins by extending an in-house DirectX 12 renderer. The extension includes ray-tracing functionality, so that hard shadows can be generated using both dynamic and inline ray tracing. Afterwards, soft shadows are generated by implementing a state-of-the-art denoiser with some modifications, which is added to each ray-tracing method. Finally, the renderer is used to perform benchmarks of various scenes with varying numbers of shadow-casting lights and object complexity, to cover a broad range of scenarios that could occur in a game and/or other similar applications. Results and Conclusions. The results gathered in this experiment suggest that, under the experimental conditions of the chosen scenes, objects, and number of lights, AMD's GPUs were faster when using dynamic ray tracing than when using inline ray tracing, whilst Nvidia's GPUs were faster when using inline ray tracing than when using dynamic ray tracing. Also, with an increasing number of shadow-casting lights, the choice of ray-tracing method had little to no impact beyond linearly increasing the execution time in each test. Finally, adding soft shadows (subsequent and preceding API interactions) also had little to no relative impact on the results across the different ray-tracing methods.
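The experimental design can be summarized as a benchmark grid over method, scene, and light count. The harness below is hypothetical; the real measurements run inside a DirectX 12 renderer, which `render_frame` merely stubs out:

```python
# Hypothetical benchmark harness illustrating the experimental design:
# each ray-tracing method is timed over a grid of scenes and
# shadow-casting light counts, averaging many frames per configuration.
import statistics
import time

METHODS = ["dynamic", "inline_variant_1", "inline_variant_2"]
SCENES = ["simple", "complex"]
LIGHT_COUNTS = [1, 2, 4, 8]
FRAMES = 1000  # frames averaged per configuration

def render_frame(method: str, scene: str, lights: int) -> None:
    """Stub: dispatch one frame with the given ray-tracing method."""
    time.sleep(0)  # a real implementation would render via DXR here

def benchmark() -> dict[tuple[str, str, int], float]:
    results = {}
    for method in METHODS:
        for scene in SCENES:
            for lights in LIGHT_COUNTS:
                times = []
                for _ in range(FRAMES):
                    start = time.perf_counter()
                    render_frame(method, scene, lights)
                    times.append(time.perf_counter() - start)
                results[(method, scene, lights)] = statistics.mean(times)
    return results
```

Sweeping the light count separately per method is what lets the study conclude that execution time grows linearly with lights regardless of the ray-tracing method chosen.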

Hardware-Accelerated Ray Tracing of Implicit Surfaces: A study of real-time editing and rendering of implicit surfaces

Hansson Söderlund, Herman, January 2021
Background. Rasterization of triangle geometry has been the dominant rendering technique in the real-time rendering industry for many years. However, triangles are not always easy for content creators to work with. With the introduction of hardware-accelerated ray tracing, rasterization-based lighting techniques have steadily been replaced by ray-tracing techniques. This shift may open the opportunity to explore other, more easily manipulated geometry types than triangle geometry. One such geometry type is implicit surfaces. Objectives. This thesis investigates the rendering speed, editing speed, and image quality of different implicit surface rendering techniques using a state-of-the-art, hardware-accelerated path tracing implementation. Furthermore, it investigates how implicit surfaces may be edited in real time and how editing affects rendering. Methods. A baseline direct sphere tracing algorithm is implemented to render implicit surfaces. Additionally, dense and narrow-band discretization algorithms that sphere trace a discretization of the implicit surface are implemented. For each technique, two variations that provide potential benefits in rendering speed are also tested. Additionally, a real-time implicit surface editor that can utilize all the mentioned rendering techniques is created. Rendering speed, editing speed, and image quality metrics are captured for all techniques using different scenes created with the editor and an existing hardware-accelerated path tracing solution. Image quality differences are measured using mean squared error and the image difference evaluator FLIP. Results. Direct sphere tracing achieves the best image quality but has the slowest rendering speed. Dense discretization achieves the best rendering speed in most tests and better image quality than narrow-band discretization. Narrow-band discretization achieves significantly better editing speed than both direct sphere tracing and dense discretization. All variations of each algorithm achieve rendering and editing speeds better than or equal to their standard implementations. All algorithms achieve real-time rendering and editing performance; however, only the discretized methods display real-time rendering performance for all scenes, and only narrow-band discretization displays real-time editing performance for larger numbers of primitives. Conclusions. Implicit surfaces can be rendered and edited in real time while using a state-of-the-art, hardware-accelerated path tracing algorithm. Direct sphere tracing degrades in performance as the number of primitives in the implicit surface increases, whereas the discretization techniques perform independently of it. Furthermore, narrow-band discretization is fast enough that editing can be performed in real time even for implicit surfaces with a large number of primitives, which is not the case for direct sphere tracing or dense discretization.
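For readers unfamiliar with the baseline technique, a minimal sphere-tracing loop over a signed distance function (SDF) looks roughly as follows; the union-of-spheres scene and the constants are illustrative assumptions, and the thesis's discretized variants and path-tracing integration are not shown:

```python
# A minimal sphere-tracing sketch over an SDF built from sphere
# primitives combined with min() (union).
import math

SPHERES = [((0.0, 0.0, 5.0), 1.0), ((1.5, 0.0, 6.0), 0.75)]  # (center, radius)

def sdf(p: tuple[float, float, float]) -> float:
    """Signed distance from p to the union of all sphere primitives."""
    return min(
        math.dist(p, c) - r  # distance to a single sphere's surface
        for c, r in SPHERES
    )

def sphere_trace(origin, direction, max_steps=128, eps=1e-4, t_max=100.0):
    """March along the ray; each step is safe because the SDF bounds the
    distance to the nearest surface. Returns hit distance or None."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:      # close enough: report a hit
            return t
        t += d           # advance by the guaranteed-free distance
        if t > t_max:
            break
    return None

print(sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))  # -> ~4.0
```

Note that evaluating the union SDF costs time proportional to the number of primitives, which is consistent with the reported degradation of direct sphere tracing as primitive counts grow; discretizing the SDF onto a grid removes that dependence.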

Ethical issues when using contact tracing applications: A qualitative literature review

Asgeirsdottir, Anna; Johansson, Patricia, January 2022
During the outbreak of COVID-19, contact tracing applications (CTAs) were introduced, as they have been shown to be an effective tool for contact tracing and thereby reduce the spread of infection. Using Bluetooth signals, a CTA can determine an individual's position and which people they have been in contact with. This means that personal data about its users is collected, managed, and stored. CTAs depend on a high level of acceptance and usage to function effectively. Previous research has therefore drawn attention to the need to examine the ethics linked to these applications. This study aims to shed light on the ethical problems associated with the use of CTAs. The study is designed as a literature review with a qualitative content analysis. The result consists of a compilation of the ethical problems linked to CTAs, organized by the categories of the PAPA framework. The compilation shows that there are many ethical problems linked to CTAs that need to be addressed, and that these are largely the same ethical problems that have arisen and remained relevant in IS for decades, since PAPA was created.

En route to automated maintenance of industrial printing systems: digital quantification of print-quality factors based on induced printing failure

Bischoff, Peter; Carreiro, André V.; Kroh, Christoph; Schuster, Christiane; Härtling, Thomas, 22 February 2024
Tracking and tracing is a key technology for production process optimization and subsequent cost reduction. However, several industrial environments (e.g. high temperatures in metal processing) are challenging for most part-marking and identification approaches. A method for printing individual part markings on metal components (e.g. data matrix codes (DMCs) or similar identifiers) with high thermal and chemical resistance has been developed, based on drop-on-demand (DOD) printing technology and special ink dispersions with submicrometer-sized ceramic and glass particles. Both ink and printer are required to work highly reliably, without nozzle clogging or other failures, to prevent interruptions of the production process in which the printing technology is used. This is especially challenging for the pigmented inks applied here. To perform long-term tests with different ink formulations and to assess print quality over time, we set up a test bench for inkjet printing systems. We present a novel approach for monitoring the printhead's state as well as print-quality degradation. Unlike state-of-the-art methods, this approach does not require measuring and monitoring, e.g., electrical components or drop flight; instead it uses only the printed result. By digitally quantifying selected quality factors within the printed result and evaluating their progression over time, several non-stationary measurands were identified. Some of these measurands show a monotonic trend and hence can be used to measure print-quality degradation. These results are a promising basis for automated printing-system maintenance.
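As an illustration of quantifying a print-quality factor and detecting a monotonic trend, consider the sketch below; the specific quality factor (dark-pixel coverage of a scanned patch) and the trend score are assumptions for demonstration, not the paper's actual measurands:

```python
# Illustrative sketch: quantify a print-quality factor per test print and
# check whether it drifts monotonically over time.

def darkness_ratio(image: list[list[int]], threshold: int = 128) -> float:
    """Fraction of pixels darker than threshold in a grayscale patch."""
    pixels = [px for row in image for px in row]
    return sum(px < threshold for px in pixels) / len(pixels)

def monotonic_trend(series: list[float]) -> float:
    """Score in [-1, 1]: +1 strictly increasing, -1 strictly decreasing.
    Computed as the normalized sum of pairwise difference signs
    (a Kendall-tau-like statistic without tie handling)."""
    n = len(series)
    pairs = [(series[j] > series[i]) - (series[j] < series[i])
             for i in range(n) for j in range(i + 1, n)]
    return sum(pairs) / len(pairs)

# One quality factor sampled across consecutive test prints:
history = [0.412, 0.407, 0.398, 0.391, 0.380]  # print coverage decaying
print(monotonic_trend(history))  # -1.0 -> strong degradation trend
```

A measurand whose trend score stays near the extremes over many prints behaves like the monotonic measurands the abstract identifies, and could serve as a degradation signal for scheduling maintenance.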

Analysis of cellular drivers of zebrafish heart regeneration by single-cell RNA sequencing and high-throughput lineage tracing

Hu, Bo, 22 September 2021
The zebrafish heart has the remarkable capacity to fully regenerate after injury. The regeneration process is accompanied by fibrosis, the formation of excess extracellular matrix (ECM) tissue, at the injury site. Unlike in mammals, the fibrosis of the zebrafish heart is only transient. While many pathways involved in heart regeneration have been identified, the cell types, especially non-myocytes, responsible for the regulation of the regenerative process have largely remained elusive. Here, we systematically determined all the different cell types of both the healthy and the cryo-injured zebrafish heart during its regeneration process using microfluidics-based high-throughput single-cell RNA sequencing. We found considerable heterogeneity among ECM-producing cells, including a number of novel fibroblast cell types which appear with different dynamics after injury. We could describe activated fibroblasts that extensively switch on gene modules for ECM production and identify fibroblast subtypes with a pro-regenerative function. Furthermore, we developed a method capable of combining transcriptome analysis with lineage tracing at the single-cell level. Using CRISPR-Cas9 technology, we introduced random mutations into known and ubiquitously transcribed DNA loci during zebrafish embryonic development. These mutations served as cell-unique, permanent, and heritable barcodes that could be captured at a later stage simultaneously with the transcriptome by high-throughput single-cell RNA sequencing. With custom-tailored analysis algorithms, we were then able to build a developmental lineage tree of the sequenced single cells. Using this new method, we revealed that in the regenerating zebrafish heart, ECM-contributing cell populations derive either from the epicardium or the endocardium. Additionally, we discovered in a functional experiment that endocardium-derived cell types depend on Wnt signaling.
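The shared-scar principle behind the lineage reconstruction can be illustrated with a toy example: cells inheriting the same early barcode mutation are grouped into one clade. The greedy clustering below is a deliberately naive assumption, not the custom algorithms used in the thesis:

```python
# Toy lineage reconstruction from CRISPR-Cas9 scars: each cell carries a
# set of detected barcode mutations ("scars"); cells sharing an early scar
# form a clade, and recursing on the remaining scars yields a tree.
from collections import Counter

def build_tree(cells: dict[str, frozenset[str]]) -> dict:
    """cells: cell id -> set of scars. Returns a nested clade dict."""
    if len(cells) <= 1:
        return {cid: {} for cid in cells}
    counts = Counter(scar for scars in cells.values() for scar in scars)
    if not counts:
        return {cid: {} for cid in cells}
    # The scar present in the most cells is assumed to mark the earliest split.
    top_scar, n = counts.most_common(1)[0]
    if n < 2:  # no scar shared by two or more cells: all cells are leaves
        return {cid: {} for cid in cells}
    with_scar = {c: s - {top_scar} for c, s in cells.items() if top_scar in s}
    without = {c: s for c, s in cells.items() if top_scar not in s}
    tree = {f"clade[{top_scar}]": build_tree(with_scar)}
    tree.update(build_tree(without))
    return tree

cells = {
    "cell_A": frozenset({"scar1", "scar2"}),
    "cell_B": frozenset({"scar1", "scar3"}),
    "cell_C": frozenset({"scar4"}),
}
print(build_tree(cells))  # cell_A and cell_B share scar1 -> one clade
```

Because the scars are heritable and accumulate over development, deeper nesting in the tree corresponds to later cell divisions; real methods additionally have to handle dropout, recurrent edits, and noise.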

Distributed Trace Comparisons for Code Review: A System Design and Practical Evaluation

Rabo, Hannes, January 2020
Ensuring the health of a distributed system with frequent updates is complicated. Many tools exist to improve developers' comprehension and productivity in this task, but room for improvement remains. Based on previous research within request-flow comparison, we propose a system design for using distributed tracing data in the process of reviewing code changes. The design is evaluated from the perspective of system performance and developer productivity using a critical production system at a large software company. The results show that the design has minimal negative performance implications while providing a useful service to the developers. They also show a positive but statistically insignificant effect on productivity during the evaluation period. To a large extent, developers adopted the tool into their workflow to explore and improve system understanding. This use case deviates from the design target of providing a method to compare changes between software versions. We conclude that the design is successful, but more optimization of functionality and a higher rate of adoption would likely improve the effects the tool could have.
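A sketch of how request-flow comparison between two software versions might look, reduced to aggregated call edges with mean latencies; the data model and thresholds are assumptions for illustration, not the evaluated system's design:

```python
# Hypothetical request-flow diff: traces are reduced to (caller -> callee)
# span edges with latencies; edges that appear, disappear, or change
# latency between versions are flagged for the code review.
from collections import defaultdict

Span = tuple[str, str, float]  # (parent service/op, child service/op, ms)

def aggregate(traces: list[list[Span]]) -> dict[tuple[str, str], float]:
    """Mean latency per unique call edge across all sampled traces."""
    sums, counts = defaultdict(float), defaultdict(int)
    for trace in traces:
        for parent, child, ms in trace:
            sums[(parent, child)] += ms
            counts[(parent, child)] += 1
    return {edge: sums[edge] / counts[edge] for edge in sums}

def diff_versions(old, new, threshold_ms=5.0):
    old_agg, new_agg = aggregate(old), aggregate(new)
    report = []
    for edge in old_agg.keys() - new_agg.keys():
        report.append(f"removed call: {edge}")
    for edge in new_agg.keys() - old_agg.keys():
        report.append(f"new call: {edge}")
    for edge in old_agg.keys() & new_agg.keys():
        delta = new_agg[edge] - old_agg[edge]
        if abs(delta) >= threshold_ms:
            report.append(f"latency change {delta:+.1f} ms on {edge}")
    return report

v1 = [[("api", "auth", 12.0), ("api", "db", 30.0)]]
v2 = [[("api", "auth", 12.5), ("api", "cache", 2.0), ("api", "db", 55.0)]]
print(diff_versions(v1, v2))  # new call api->cache, latency change on api->db
```

Surfacing such a diff alongside a code change is the review use case the design targets, even though developers in the evaluation mostly used the tool for general system exploration.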
