  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
221

Développement d'un modèle thermomécanique axisymétrique en milieu semi-transparent avec transfert radiatif : application au fluage et à la trempe des verres / Development of an axisymmetric thermomechanical model in semi-transparent medium with radiative transfer : application to the creep and the tempering of glasses

Agboka, Kossiga 26 June 2018 (has links)
La majorité des produits verriers du marché sont issus d’une opération de mise en forme à hautes températures, suivie d’une phase de refroidissement contrôlé afin d’éliminer (verre recuit) ou générer (verre trempé) des contraintes résiduelles. Le comportement mécanique du verre étant fortement thermo-dépendant, le contrôle des températures est un élément déterminant pour le succès du procédé de fabrication. Lors de la simulation numérique, pour ce milieu semi-transparent, les échanges thermiques par conduction et par rayonnement sont à considérer. La résolution de l’ETR (Equation de Transfert Radiatif) est menée dans cette thèse par le biais de la « Méthode P1 » et le « Back Ray Tracing » (BRT). Les deux codes développés ont été validés par l’étude comparative avec les données en températures et en contraintes résiduelles issues de la littérature sur le refroidissement dans l’épaisseur du verre soumis à des conditions variées en convection naturelle et forcée. Une expérimentation qui consiste à refroidir un disque de verre sur un support métallique a été développée dans le but de comparer les températures et contraintes générées par l’expérimentation et par la modélisation issue du couplage thermomécanique avec les deux codes P1 et BRT. De manière plus originale, la méthode BRT a été étendue à des géométries de révolution. Une première approche a consisté à étudier le fluage d’une goutte de verre et à analyser l’influence du choix du modèle de résolution de l’ETR sur les températures et les géométries au cours de la mise en forme. / Most glass products on the market come from a high-temperature forming operation, followed by a controlled cooling phase to remove (annealed glass) or generate (tempered glass) residual stresses. Since the mechanical behaviour of glass is highly thermo-dependent, temperature control is a determining factor in the success of the manufacturing process. In numerical simulation of this semi-transparent medium, heat exchange by both conduction and radiation must be considered. In this work, the resolution of the ETR (Radiative Transfer Equation) is carried out using the "P1 method" and "Back Ray Tracing" (BRT). The two developed codes were validated by comparison with temperature and residual-stress data from the literature on through-thickness cooling of glass subjected to various natural- and forced-convection conditions. An experiment consisting of cooling a glass disk on a metal support was developed to compare the temperatures and stresses measured experimentally with those predicted by the thermomechanical coupling with the two codes, P1 and BRT. More originally, the BRT method was extended to geometries of revolution. A first approach consisted of studying the creep of a glass gob and analysing the influence of the choice of ETR resolution model on the temperatures and geometries during forming.
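For orientation, the governing equations behind this abstract can be stated compactly. A standard gray, non-scattering form is assumed here; the thesis itself may use spectral band models:

```latex
% Radiative transfer equation (ETR) in an absorbing-emitting, non-scattering
% glass melt: I = radiative intensity, \kappa = absorption coefficient,
% n = refractive index, I_b = blackbody intensity.
\[
  \frac{\mathrm{d}I(\mathbf{r},\mathbf{s})}{\mathrm{d}s}
    = \kappa\, n^{2} I_b(T) - \kappa\, I(\mathbf{r},\mathbf{s}),
  \qquad
  I_b(T) = \frac{\sigma T^{4}}{\pi}.
\]
% The P1 approximation replaces the directional ETR by a diffusion-type
% equation for the incident radiation G = \int_{4\pi} I \,\mathrm{d}\Omega:
\[
  \nabla \cdot \Bigl( \frac{1}{3\kappa}\, \nabla G \Bigr)
    = \kappa \bigl( G - 4\pi\, n^{2} I_b \bigr),
  \qquad
  -\nabla \cdot \mathbf{q}_r = \kappa \bigl( G - 4\, n^{2} \sigma T^{4} \bigr),
\]
% where -\nabla\cdot q_r enters the heat equation as the radiative source,
% coupling the thermal and radiative fields.
```

The BRT method, by contrast, integrates the first equation directly along rays traced backwards from each point, which is why its extension to geometries of revolution is the novel step here.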
222

Scalable Tools for Non-Intrusive Performance Debugging of Parallel Linux Workloads

Schöne, Robert, Schuchart, Joseph, Ilsche, Thomas, Hackenberg, Daniel January 2014 (has links)
There is a variety of tools to measure the performance of Linux systems and the applications running on them. However, the resulting performance data is often presented in plain-text format or only with a very basic user interface. For large systems with many cores and concurrent threads, it is increasingly difficult to present the data in a clear way for analysis. Moreover, certain performance analysis and debugging tasks require the use of a high-resolution, timeline-based approach, again entailing data visualization challenges. Tools in the area of High Performance Computing (HPC) have long been able to scale to hundreds or thousands of parallel threads and help find performance anomalies. We therefore present a solution to gather performance data using Linux performance monitoring interfaces. A combination of sampling and careful instrumentation allows us to obtain detailed performance traces with manageable overhead. We then convert the resulting output to the Open Trace Format (OTF) to bridge the gap between the recording infrastructure and HPC analysis tools. We explore ways to visualize the data using the graphical tool Vampir. The combination of established Linux and HPC tools allows us to create an interface for easy navigation through time-ordered performance data grouped by thread or CPU and to help users find opportunities for performance optimizations.
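The abstract names the Linux performance monitoring interfaces without showing them. As a point of reference, the kernel building block such tooling sits on is the perf_event_open syscall; the following is a minimal counting sketch (not the authors' sampling/instrumentation code; the workload loop is illustrative):

```cpp
#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <cstring>

// perf_event_open has no glibc wrapper, so it is invoked via syscall().
static long perf_event_open(perf_event_attr* attr, pid_t pid, int cpu,
                            int group_fd, unsigned long flags) {
    return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main() {
    perf_event_attr attr;
    std::memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_INSTRUCTIONS;  // count retired instructions
    attr.disabled = 1;
    attr.exclude_kernel = 1;

    int fd = perf_event_open(&attr, 0 /*this process*/, -1 /*any CPU*/, -1, 0);
    if (fd < 0) { std::perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
    volatile double x = 0;                     // stand-in workload
    for (int i = 0; i < 1000000; ++i) x += i;
    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    uint64_t count = 0;
    read(fd, &count, sizeof(count));           // 8-byte counter value
    std::printf("instructions retired: %llu\n", (unsigned long long)count);
    close(fd);
    return 0;
}
```

A sampling-based tracer, as described in the paper, would additionally set `attr.sample_period` and mmap a ring buffer rather than reading a single aggregate count.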
223

Structural Performance Comparison of Parallel Software Applications

Weber, Matthias 09 December 2016 (has links)
With the rising complexity of high performance computing systems and their parallel software, performance analysis and optimization have become essential in the development of efficient applications. The comparison of performance data is a key operation required in performance analysis. An analyst may conduct different types of comparisons in order to understand the performance properties of an application. One use case is comparing performance data from multiple measurements. Typical examples for such comparisons are before/after comparisons when applying optimizations or changing code versions. Besides comparing performance between multiple runs, comparing performance characteristics across the parallel execution streams of an application is also essential to detect performance problems. This is typically useful to detect imbalances, outliers, or changing runtime behavior during the execution of an application. While such comparisons are straightforward for the aggregated data in performance profiles, only limited solutions exist for comparing event traces. Trace-based analysis, i.e., the collection of fine-grained information on individual application events with timestamps and application context, has proven to be a powerful technique. The detailed performance information included in event traces makes them very suitable for performance analysis. However, this level of detail also presents a challenge because it implies a large and overwhelming amount of data. Currently, users need to perform manual comparison of event traces, which is extremely challenging and time-consuming because of the large volume of detailed data and the need to correctly line up trace events. To fill this gap, this work proposes a set of techniques that automatically align traces. The alignment allows their structural comparison and the highlighting of differences between them. A set of novel metrics provides the user with an objective measure of the differences between traces, both in terms of differences in the event stream and timing differences across events. An additional important aspect of trace-based analysis is the visualization of performance data in event timelines. This has proven to be a powerful approach for the detection of various types of performance problems. However, visualization of large numbers of event timelines quickly hits the limits of available display resolution. Likewise, identifying performance problems is challenging in the large amount of visualized performance data. To alleviate these problems, this work proposes two new approaches for event timeline visualization. First, novel folding strategies for event timelines facilitate visual scalability and provide powerful overviews of performance data at the same time. Second, this work presents an effective approach that automatically identifies and highlights several types of performance-critical sections in an application run. This approach identifies time-dominant functions of an application and subsequently uses them to analyze runtime imbalances throughout the application run. Intuitive visualizations present the resulting runtime variations and guide the analyst to performance hot spots. Evaluations with benchmarks and real-world applications assess all introduced techniques.
The effectiveness of the comparison approaches is demonstrated by showing automatically detected performance issues and structural differences between different versions of applications and across parallel execution streams. Case studies showcase the capabilities of the event timeline visualization techniques by demonstrating scalable performance data visualizations and detecting performance problems and code inefficiencies in real-world applications.
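The abstract does not spell out the alignment algorithm. As a purely illustrative sketch, structural comparison of two event streams can be phrased as edit-distance dynamic programming over event names, with the accumulated cost doubling as a crude difference metric (the thesis's own metrics are more refined):

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Levenshtein distance over event-name sequences: the minimal number of
// insertions, deletions, and substitutions turning stream a into stream b.
int editDistance(const std::vector<std::string>& a,
                 const std::vector<std::string>& b) {
    const size_t n = a.size(), m = b.size();
    std::vector<std::vector<int>> d(n + 1, std::vector<int>(m + 1));
    for (size_t i = 0; i <= n; ++i) d[i][0] = int(i);   // all deletions
    for (size_t j = 0; j <= m; ++j) d[0][j] = int(j);   // all insertions
    for (size_t i = 1; i <= n; ++i)
        for (size_t j = 1; j <= m; ++j)
            d[i][j] = std::min({ d[i-1][j] + 1,                        // delete
                                 d[i][j-1] + 1,                        // insert
                                 d[i-1][j-1] + (a[i-1] != b[j-1]) });  // match/sub
    return d[n][m];
}

int main() {
    // Two event streams, e.g. function-enter records from two runs.
    std::vector<std::string> runA = {"main", "init", "compute", "mpi_send", "finalize"};
    std::vector<std::string> runB = {"main", "init", "compute", "compute", "finalize"};
    std::cout << "structural distance: " << editDistance(runA, runB) << "\n";
    // prints 1 (one substitution: mpi_send vs. compute)
    return 0;
}
```

Backtracking through the DP table yields the actual alignment, i.e., which events line up across traces, which is the prerequisite for the per-event timing comparison described above.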
224

Autorské právo v Evropské unii: vliv zájmových skupin při tvorbě směrnice o kolektivní správě práv / Copyright in the European Union: Influence of Interest Groups during the Creation of the Collective Rights Management Directive

Slabyhoudek, Václav January 2018 (has links)
The master's thesis focuses on the influence of interest groups during the creation of the 2012 Proposal for a Directive on collective rights management. In particular, the thesis deals with the pre-legislative phase of the legislative process, which began in April 2004. The theoretical framework includes the conceptualization of interest groups, lobbying, and influence. The mechanisms of influence are analysed using two theories: rational choice theory and rational choice institutionalism. The thesis utilizes theory-testing process tracing as its main methodological approach. Empirical evidence is investigated by analysing primary sources. The main subjects of the analysis are the most relevant documents from the European Commission concerning the pre-legislative phase. Four semi-structured interviews with selected relevant actors were also conducted. The thesis concludes by confirming or disproving the main hypothesis: Specific interest groups succeeded in influencing the text of the proposal for a directive on the collective management of copyright.
225

Občanská společnost jako aktér politického procesu / Civil Society and its Agency in the Political Process

Mazák, Jaromír January 2019 (has links)
(EN) Civil society is made up of committed individuals, non-governmental non-profit organizations, their employees, volunteers, and other supporters, as well as relations among these actors. Civil society activities include community development, advancement of leisure and professional interests, services to vulnerable groups, as well as efforts to intervene in the political process and to support certain legislation and systemic change. This work focuses on the latter, i.e. the ways in which civil society actors influence the political process. In the introductory chapter I present an overview of the current research on civil society and political activism in the Czech Republic and other post-communist countries of Central and Eastern Europe. In this chapter, I identify five central propositions that can be formulated on the basis of existing scientific discussion and subject them to critical assessment. In addition, I argue that against the backdrop of the discussion, two streams of literature can be distinguished which differ in their assessment of the quality of civil society in Central and Eastern Europe. I try to clarify the reasons for these contradictions. In the second chapter I offer an overview of social movements theories, thus completing the theoretical basis for the empirical part of this...
226

Trace-based Performance Analysis for Hardware Accelerators

Juckeland, Guido 05 February 2013 (has links)
This thesis presents how performance data from hardware accelerators can be included in event logs. It extends the capabilities of trace-based performance analysis to also monitor and record data from this novel parallelization layer. Increasing awareness of the power consumption of computing devices has also led to interest in hybrid computing architectures. High-end computers, workstations, and mobile devices are starting to employ hardware accelerators to offload computationally intense and parallel tasks, while at the same time retaining a highly efficient scalar compute unit for non-parallel tasks. This execution pattern is typically asynchronous so that the scalar unit can resume other work while the hardware accelerator is busy. Performance analysis tools provided by the hardware accelerator vendors cover the situation of one host using one device very well. Yet, they do not address the needs of the high-performance computing community. This thesis investigates ways to extend existing methods for recording events from highly parallel applications to also cover scenarios in which hardware accelerators aid these applications. After introducing a generic approach that is suitable for any API-based acceleration paradigm, the thesis derives a suggestion for a generic performance API for hardware accelerators and its implementation with NVIDIA CUPTI. In a next step, the visualization of event logs containing data from execution streams on different levels of parallelism is discussed. In order to overcome the limitations of classic performance profiles and timeline displays, a graph-based visualization using Parallel Performance Flow Graphs (PPFGs) is introduced. This novel approach uses program states to display similarities and differences between the potentially very large number of event streams and, thus, enables a fast way to spot load imbalances. The thesis concludes with the in-depth analysis of a case study of PIConGPU---a highly parallel, multi-hybrid plasma physics simulation---that benefited greatly from the developed performance analysis methods. / Diese Dissertation zeigt, wie der Ablauf von Anwendungsteilen, die auf Hardwarebeschleuniger ausgelagert wurden, als Programmspur mit aufgezeichnet werden kann. Damit wird die bekannte Technik der Leistungsanalyse von Anwendungen mittels Programmspuren so erweitert, dass auch diese neue Parallelitätsebene mit erfasst wird. Die Beschränkungen von Computersystemen bezüglich der elektrischen Leistungsaufnahme haben zu einer steigenden Anzahl von hybriden Computerarchitekturen geführt. Sowohl Hochleistungsrechner als auch Arbeitsplatzcomputer und mobile Endgeräte nutzen heute Hardwarebeschleuniger, um rechenintensive, parallele Programmteile auszulagern und so den skalaren Hauptprozessor zu entlasten und nur für nicht parallele Programmteile zu verwenden. Dieses Ausführungsschema ist typischerweise asynchron: der Skalarprozessor kann, während der Hardwarebeschleuniger rechnet, selbst weiterarbeiten. Die Leistungsanalyse-Werkzeuge der Hersteller von Hardwarebeschleunigern decken den Standardfall (ein Host-System mit einem Hardwarebeschleuniger) sehr gut ab, scheitern aber an einer Unterstützung von hochparallelen Rechnersystemen. Die vorliegende Dissertation untersucht, inwieweit auch multi-hybride Anwendungen die Aktivität von Hardwarebeschleunigern aufzeichnen können. Dazu wird die vorhandene Methode zur Erzeugung von Programmspuren für hochparallele Anwendungen entsprechend erweitert.
In dieser Untersuchung wird zuerst eine allgemeine Methodik entwickelt, mit der sich für jede API-gestützte Hardwarebeschleunigung eine Programmspur erstellen lässt. Darauf aufbauend wird eine eigene Programmierschnittstelle entwickelt, die es ermöglicht, weitere leistungsrelevante Daten aufzuzeichnen. Die Umsetzung dieser Schnittstelle wird am Beispiel von NVIDIA CUPTI dargestellt. Ein weiterer Teil der Arbeit beschäftigt sich mit der Darstellung von Programmspuren, welche Aufzeichnungen von den unterschiedlichen Parallelitätsebenen enthalten. Um die Einschränkungen klassischer Leistungsprofile oder Zeitachsendarstellungen zu überwinden, wird mit den parallelen Programmablaufgraphen (PPFGs) eine neue graphenbasierte Darstellungsform eingeführt. Dieser neuartige Ansatz zeigt eine Programmspur als eine Folge von Programmzuständen mit gemeinsamen und unterschiedlichen Abläufen. So können divergierendes Programmverhalten und Lastimbalancen deutlich einfacher lokalisiert werden. Die Arbeit schließt mit der detaillierten Analyse von PIConGPU -- einer multi-hybriden Simulation aus der Plasmaphysik --, die in großem Maße von den in dieser Arbeit entwickelten Analysemöglichkeiten profitiert hat.
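The abstract does not show the record format. The following vendor-neutral sketch (all names hypothetical; the thesis's actual implementation is built on NVIDIA CUPTI) illustrates the core idea of translating asynchronous device-side activity records onto the host clock before merging them into the event log:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// One completed accelerator activity, reported asynchronously by the device
// with timestamps in the device's own clock domain.
struct ActivityRecord {
    uint64_t startNs, endNs;   // device clock
    uint32_t streamId;         // execution stream on the accelerator
    const char* name;          // kernel or memcpy name
};

// Assume one prior clock-sync handshake gave: host = device + offset.
// (A linear clock model is an assumption; real tools re-synchronize.)
uint64_t toHostTime(uint64_t deviceNs, int64_t offsetNs) {
    return uint64_t(int64_t(deviceNs) + offsetNs);
}

int main() {
    int64_t offsetNs = 5000;   // from the hypothetical clock-sync step
    std::vector<ActivityRecord> deviceLog = {
        {1000, 9000, 0, "kernel_A"},
        {2000, 4000, 1, "memcpy_HtoD"},
    };
    // Order by start time and emit, as a trace writer merging device
    // activity into the host event log would.
    std::sort(deviceLog.begin(), deviceLog.end(),
              [](const ActivityRecord& a, const ActivityRecord& b) {
                  return a.startNs < b.startNs;
              });
    for (const auto& r : deviceLog)
        std::printf("[host %lu..%lu ns] stream %u: %s\n",
                    (unsigned long)toHostTime(r.startNs, offsetNs),
                    (unsigned long)toHostTime(r.endNs, offsetNs),
                    r.streamId, r.name);
    return 0;
}
```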
227

A Performance Comparison of Dynamic- and Inline Ray Tracing in DXR : An application in soft shadows

Sjöberg, Joakim, Zachrisson, Filip January 2021 (has links)
Background. Ray tracing is a tool that can be used to increase the quality of the graphics in games. One application in graphics that ray tracing excels in is generating shadows, because ray tracing can simulate how shadows are generated in real life more accurately than rasterization techniques can. With the release of GPUs with hardware support for ray tracing, it can now be used in real-time graphics applications to some extent. However, it is still a computationally heavy task requiring performance improvements. Objectives. This thesis will evaluate the difference in performance of three ray-tracing methods in DXR Tier 1.1, namely dynamic ray tracing and two forms of inline ray tracing. To further investigate the ray-tracing performance, soft shadows will be implemented to see if the driver can perform optimizations differently (depending on the choice of ray-tracing method) on the subsequent and/or preceding API interactions. With the pipelines implemented, benchmarks will be performed using different GPUs, scenes, and a varying number of shadow-casting lights. Methods. The scientific method is based on an experimental approach, using both implementation and performance tests. The experimental approach will begin by extending an in-house DirectX 12 renderer. The extension includes ray-tracing functionality, so that hard shadows can be generated using both the dynamic and the inline forms of ray tracing. Afterwards, soft shadows are generated by implementing a state-of-the-art denoiser with some modifications, which will be added to each ray-tracing method. Finally, the renderer is used to perform benchmarks of various scenes with varying numbers of shadow-casting lights and object complexity to cover a broad area of scenarios that could occur in a game and/or in other similar applications. Results and Conclusions. The results gathered in this experiment suggest that under the experimental conditions of the chosen scenes, objects, and number of lights, AMD’s GPUs were faster when using dynamic ray tracing than when using inline ray tracing, whilst Nvidia’s GPUs were faster when using inline ray tracing compared to when using dynamic ray tracing. Also, with an increasing number of shadow-casting lights, the choice of ray-tracing method had low to no impact except for linearly increasing the execution time in each test. Finally, adding soft shadows (subsequent and preceding API interactions) also had low to no relative impact on the results depending on the different ray-tracing methods. / Bakgrund. Strålspårning (ray tracing) är ett verktyg som kan användas för att öka kvalitén på grafiken i spel. En tillämpning i grafik som strålspårning utmärker sig i är när skuggor ska skapas eftersom att strålspårning lättare kan simulera hur skuggor skapas i verkligheten, vilket tidigare tekniker i rasterisering inte hade möjlighet för. Med ny hårdvara där det finns support för strålspårning inbyggt i grafikkorten finns det nu möjligheter att använda strålspårning i realtids-applikationer inom vissa gränser. Det är fortfarande tunga beräkningar som behöver utföras, och därav finns det behov av förbättringar.  Syfte. Denna uppsats kommer att utvärdera skillnaderna i prestanda mellan tre olika strålspårningsmetoder i DXR nivå 1.1, nämligen dynamisk strålspårning och två olika former av inline strålspårning.
För att ge en bredare utredning på prestandan mellan strålspårningsmetoderna kommer mjuka skuggor att implementeras för att se om drivrutinen kan göra olika optimiseringar (beroende på valet av strålspårningsmetod) på de efterföljande och/eller föregående API anropen. Efter att dessa rörledningar (pipelines) är implementerade kommer prestandatester att utföras med olika grafikkort, scener, och antal ljus som kastar skuggor. Metod. Den vetenskapliga metoden är baserat på ett experimentellt tillvägagångssätt, som kommer innehålla både ett experiment och ett flertal prestandatester. Det experimentella tillvägagångssättet kommer att börja med att utöka en egenskapad DirectX 12 renderare. Utökningen kommer tillföra ny funktionalitet för att kunna hantera strålspårning så att hårda skuggor ska kunna genereras med både dynamisk och de olika formerna av inline strålspårning. Efter det kommer mjuka skuggor att skapas genom att implementera en väletablerad avbrusningsteknik med några modifikationer, vilket kommer att bli tillagt på varje strålspårningssteg. Till slut kommer olika prestandatester att mätas med olika grafikkort, olika antal ljus, och olika scener för att täcka olika scenarion som skulle kunna uppstå i ett spel och/eller i andra liknande applikationer.  Resultat och Slutsatser. De resultat från testerna i detta experiment påvisar att under dessa förutsättningar så är AMD’s grafikkort snabbare på dynamisk strålspårning än på inline strålspårning, samtidigt som Nvidias grafikkort är snabbare på inline strålspårning än på den dynamiska varianten. Ökandet av ljus som kastar skuggor påvisade låg till ingen förändring förutom ett linjärt ökande av exekveringstiden i de flesta testerna. Slutligen så visade det sig även att tillägget av mjuka skuggor (efterföljande och föregående API interaktioner) hade låg till ingen påverkan på valet av strålspårningsmetod.
228

Hardware-Accelerated Ray Tracing of Implicit Surfaces : A study of real-time editing and rendering of implicit surfaces

Hansson Söderlund, Herman January 2021 (has links)
Background. Rasterization of triangle geometry has been the dominating rendering technique in the real-time rendering industry for many years. However, triangles are not always easy to work with for content creators. With the introduction of hardware-accelerated ray tracing, rasterization-based lighting techniques have been steadily replaced by ray tracing techniques. This shift may open up the opportunity to explore other, more easily manipulated geometry types than triangles. One such geometry type is implicit surfaces. Objectives. This thesis investigates the rendering speed, editing speed, and image quality of different implicit surface rendering techniques using a state-of-the-art, hardware-accelerated, path tracing implementation. Furthermore, it investigates how implicit surfaces may be edited in real time and how editing affects rendering. Methods. A baseline direct sphere tracing algorithm is implemented to render implicit surfaces. Additionally, dense and narrow band discretization algorithms that sphere trace a discretization of the implicit surface are implemented. For each technique, two variations that provide potential benefits in rendering speed are also tested. Additionally, a real-time implicit surface editor that can utilize all the mentioned rendering techniques is created. Rendering speed, editing speed, and image quality metrics are captured for all techniques using different scenes created with the editor and an existing hardware-accelerated path tracing solution. Image quality differences are measured using mean squared error and the image difference evaluator FLIP. Results. Direct sphere tracing achieves the best image quality results but has the slowest rendering speed. Dense discretization achieves the best rendering speed in most tests and achieves better image quality results compared to narrow band discretization. Narrow band discretization achieves significantly better editing speed than both direct sphere tracing and dense discretization. All variations of each algorithm achieve better or equal rendering and editing speed compared to their standard implementation. All algorithms achieve real-time rendering and editing performance. However, only discretized methods display real-time rendering performance for all scenes, and only narrow band discretization displays real-time editing performance for a larger number of primitives. Conclusions. Implicit surfaces can be rendered and edited in real time while using a state-of-the-art, hardware-accelerated, path tracing algorithm. Direct sphere tracing degrades in performance when the implicit surface has an increased number of primitives, whereas discretization techniques perform independently of this. Furthermore, narrow band discretization is fast enough so that editing can be performed in real time even for implicit surfaces with a large number of primitives, which is not the case for direct sphere tracing or dense discretization. / Bakgrund. Triangelrastrering har varit den dominerande renderingstekniken inom realtidsgrafik i flera år. Trianglar är dock inte alltid lätta att jobba med för skapare av grafiska modeller. Med introduktionen av hårdvaruaccelererad strålspårning har rastreringsbaserade ljussättningstekniker stadigt ersatts av strålspårningstekniker. Detta skifte innebär att det kan finnas möjlighet att utforska andra, mer lättredigerade geometrityper jämfört med triangelgeometri, exempelvis implicita ytor. Syfte.
Detta examensarbete undersöker renderings- och redigeringshastigheten, samt bildkvaliteten av olika renderingstekniker för implicita ytor tillsammans med en spjutspetsalgoritm för hårdvaruaccelererad strålföljning. Det undersöker även hur implicita ytor kan redigeras i realtid och hur det påverkar rendering. Metod. En direkt sfärspårningsalgoritm implementeras som baslinje för att rendera implicita ytor. Även algoritmer som utför sfärspårning över en kompakt- och smalbandsdiskretisering av den implicita ytan implementeras. För varje teknik implementeras även två variationer som potentiellt kan ge bättre prestanda. Utöver dessa renderingstekniker skapas även ett redigeringsverktyg för implicita ytor. Renderingshastighet, redigeringshastighet, och bildkvalité mäts för alla tekniker över flera olika scener som har skapats med redigeringsverktyget tillsammans med en hårdvaruaccelererad strålföljningsalgoritm. Skillnader i bildkvalité utvärderas med hjälp av mean squared error och evalueringsverktyget för bildskillnader som heter FLIP. Resultat. Direkt sfärspårning åstadkommer bäst bildkvalité, men har den långsammaste renderingshastigheten. Kompakt diskretisering renderar snabbast i de flesta tester och åstadkommer bättre bildkvalité än vad smalbandsdiskretisering gör. Smalbandsdiskretisering åstadkommer betydligt bättre redigeringshastighet än både direkt sfärspårning och kompakt diskretisering. Variationerna för respektive algoritm presterar alla lika bra eller bättre än standardvarianten för respektive algoritm. Alla algoritmer uppnår realtidsprestanda inom rendering och redigering. Endast diskretiseringsmetoderna uppnår dock realtidsprestanda för rendering med alla scener och endast smalbandsdiskretisering uppnår realtidsprestanda för redigering med ett större antal primitiver. Slutsatser. Implicita ytor kan renderas och redigeras i realtid tillsammans med en spjutspetsalgoritm för hårdvaruaccelererad strålföljning. Vid användning av direkt sfärspårning minskar renderingshastigheten när ytan består av ett stort antal primitiver. Diskretiseringstekniker har dock en renderingshastighet som är oberoende av antalet primitiver. Smalbandsdiskretisering är tillräckligt snabb för att redigering ska kunna ske i realtid även för implicita ytor som består av ett stort antal primitiver.
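As a hedged illustration of the "direct sphere tracing" baseline named above (not the thesis's implementation, which runs hardware-accelerated on the GPU), the core loop on a single-sphere signed distance field (SDF) looks like this:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float len(Vec3 a) { return std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }

// Signed distance to a sphere of radius r centered at the origin.
static float sdfSphere(Vec3 p, float r) { return len(p) - r; }

// Step along the ray by the SDF value, which is the largest step guaranteed
// not to overshoot the surface. Returns the hit distance, or -1 on miss.
static float sphereTrace(Vec3 origin, Vec3 dir /*unit length*/) {
    const float eps = 1e-4f, tMax = 100.0f;
    float t = 0.0f;
    for (int i = 0; i < 128 && t < tMax; ++i) {
        float d = sdfSphere(add(origin, mul(dir, t)), 1.0f);
        if (d < eps) return t;   // within tolerance of the surface: hit
        t += d;                  // safe step: no surface is closer than d
    }
    return -1.0f;
}

int main() {
    float t = sphereTrace({0, 0, -3}, {0, 0, 1});
    std::printf("hit at t = %.4f (expected ~2)\n", t);
    return 0;
}
```

A real editor evaluates a CSG tree over many primitives per SDF query, which is exactly why the abstract reports direct sphere tracing degrading with primitive count while the discretized variants, which sample a precomputed distance grid, do not.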
229

Etiska problem vid användning av kontaktspårningsapplikationer : En kvalitativ litteraturstudie / Ethical issues when using contact tracing applications : A qualitative literature review

Asgeirsdottir, Anna, Johansson, Patricia January 2022 (has links)
I samband med utbrottet av covid-19 introducerades kontaktspårningsapplikationer (contact tracing application, CTA), eftersom de har bevisats kunna vara ett effektivt verktyg för kontaktspårning och därmed minska smittspridning. CTA kan med hjälp av Bluetooth-signaler bestämma en individs positionering och vilka personer de varit i kontakt med. Detta innebär att personlig data om dess användare samlas in, hanteras och lagras. CTA är beroende av en hög acceptans och användningsnivå för att fungera på ett effektivt sätt. I tidigare forskningsstudier uppmärksammas därmed att etiken kopplad till dessa applikationer behöver belysas. Studien syftar därmed till att belysa vilka etiska problem som finns kopplade till användningen av CTA. Studien är utformad som en litteraturstudie med en kvalitativ innehållsanalys. Resultatet består av en sammanställning av de etiska problemen kopplade till CTA utifrån kategorierna i ramverket PAPA. Sammanställningen visar på att det finns många etiska problem kopplade till CTA som behöver bemötas och det visar även på att det är till stor del samma etiska problem som uppstår och är relevanta inom IS idag, som det har varit inom området i årtionden sedan PAPA skapades. / During the outbreak of covid-19, contact tracing applications (CTA) were introduced, as they have been proven to be an effective tool for contact tracing and thereby reduce the spread of infection. With Bluetooth signals, CTA can determine an individual’s positioning and which people they have been in contact with. This means that personal data about its users is collected, managed, and stored. CTA depends on a high level of acceptance and usage to function effectively. Previous research has thus drawn attention to the fact that the ethics linked to these applications needs to be examined. Therefore, this study aims to shed light on the ethical problems associated with CTA. The study is designed as a literature review with a qualitative content analysis. The result consists of a compilation of the ethical problems linked to CTA based on the categories in the PAPA framework. The compilation shows that there are many ethical problems linked to CTA that need to be addressed, and that the ethical problems that arise and are relevant in IS today are largely the same as those the field has faced in the decades since PAPA was created.
230

En route to automated maintenance of industrial printing systems: digital quantification of print-quality factors based on induced printing failure

Bischoff, Peter, Carreiro, André V., Kroh, Christoph, Schuster, Christiane, Härtling, Thomas 22 February 2024 (has links)
Tracking and tracing are key technologies for production process optimization and subsequent cost reduction. However, several industrial environments (e.g. high temperatures in metal processing) are challenging for most part-marking and identification approaches. A method for printing individual part markings on metal components (e.g. data matrix codes (DMCs) or similar identifiers) with high-temperature and chemical resistance has been developed based on drop-on-demand (DOD) print technology and special ink dispersions with submicrometer-sized ceramic and glass particles. Both ink and printer are required to work highly reliably without nozzle clogging or other failures to prevent interruptions of the production process in which the printing technology is used. This is especially challenging for the pigmented inks applied here. To perform long-term tests with different ink formulations and to assess print quality over time, we set up a test bench for inkjet printing systems. We present a novel approach for monitoring the printhead’s state as well as the print-quality degradation. Unlike state-of-the-art approaches, this method does not require measuring and monitoring of, e.g., electrical components or drop flight; instead, it uses only the printed result. By digitally quantifying selected quality factors within the printed result and evaluating their progression over time, several non-stationary measurands were identified. Some of these measurands show a monotonic trend and, hence, can be used to measure print-quality degradation. These results are a promising basis for automated printing system maintenance.
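The paper's specific quality factors are not listed in the abstract. As a hypothetical example of digitally quantifying one such factor, the ink coverage of a scanned mark can be computed from a thresholded grayscale image, and its drift across successive prints tracked as a degradation measurand (e.g. under progressive nozzle clogging). Image I/O is omitted:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Fraction of pixels darker than the threshold, i.e. covered by ink,
// in a w-by-h grayscale scan stored row-major.
double inkCoverage(const std::vector<uint8_t>& gray, int w, int h,
                   uint8_t threshold = 128) {
    long inked = 0;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            if (gray[y * w + x] < threshold) ++inked;  // dark pixel = ink
    return double(inked) / (double(w) * h);
}

int main() {
    // Toy 4x4 "scan": 4 dark pixels out of 16 -> coverage 0.25.
    std::vector<uint8_t> img = {
        0,   255, 255, 255,
        255, 0,   255, 255,
        255, 255, 0,   255,
        255, 255, 255, 0 };
    std::printf("ink coverage: %.2f\n", inkCoverage(img, 4, 4));
    return 0;
}
```

Evaluating such a factor per printed identifier and testing the resulting time series for a monotonic trend is the kind of analysis the abstract describes.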
