51

Efficient generation and rendering of tube geometry in Unreal Engine : Utilizing compute shaders for 3D line generation / Effektiv generering och rendering av tubgeometri i Unreal Engine : Generering av 3D-linjer med compute shaders

Woxler, Platon January 2021
Massive graph visualization in an immersive environment, such as virtual reality (VR) or augmented reality (AR), has the potential to improve users’ understanding when exploring data in new ways. Making the most of such a visualization requires interactive components that are fast enough to sustain interactivity. By rendering the edges of the graph as shaded lines that imitate three‑dimensional (3D) lines or tubes, one can circumvent technical limitations. This method works well enough on traditional two‑dimensional (2D) monitors, but representing tubes as flat lines in a virtual environment (VE) makes for a less immersive user experience than visualizing true 3D geometry. To meet both requirements, speed and visual fidelity, we need a time-efficient way of producing tubular meshes. This thesis project explores how tubular geometry can be generated using compute shaders in the modern game engine Unreal Engine (UE). Exploiting the parallel computing power of the graphics processing unit (GPU), we use compute shaders to generate a tubular mesh following a predetermined path. The result of the project is an open-source plugin for UE that can generate tubular geometry at rapid rates. While it offers no major advantage for smaller models, compared to a sequential implementation the compute shader implementation creates and renders models more than 40× faster when generating 10⁶ tube segments. A secondary effect of generating most of the data on the GPU is that we avoid the bottlenecks that can occur when the bandwidth of the central processing unit (CPU) to GPU data transfer is exceeded. Using this tool, researchers can more easily explore information visualization in a VE. Furthermore, this thesis promotes further development of mesh generation using compute shaders in UE.
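To make the generation step concrete: for each point on the path one builds a local frame, places a ring of vertices around it, and stitches consecutive rings into triangles; the compute-shader implementation parallelizes exactly this per-ring work. The following is a minimal CPU-side NumPy sketch of that construction, with the radius and ring resolution chosen arbitrarily for illustration; it is not the plugin's shader code.

```python
import numpy as np

def tube_mesh(path, radius=0.1, ring_verts=8):
    """Build a tube mesh (vertices, triangle indices) around a polyline.

    path: (N, 3) array of points the tube should follow.
    Each path point gets a ring of `ring_verts` vertices; consecutive
    rings are stitched with two triangles per quad. A compute shader
    would emit one ring (or one segment) per thread or thread group.
    """
    path = np.asarray(path, dtype=float)
    n = len(path)
    verts = np.empty((n * ring_verts, 3))
    angles = np.linspace(0.0, 2.0 * np.pi, ring_verts, endpoint=False)

    for i, p in enumerate(path):
        # Tangent from neighbouring path points (one-sided at the ends).
        t = path[min(i + 1, n - 1)] - path[max(i - 1, 0)]
        t /= np.linalg.norm(t)
        # Any vector not parallel to the tangent gives a usable normal.
        helper = np.array([0.0, 0.0, 1.0]) if abs(t[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
        n1 = np.cross(t, helper)
        n1 /= np.linalg.norm(n1)
        n2 = np.cross(t, n1)
        # Ring of vertices in the plane perpendicular to the tangent.
        ring = p + radius * (np.outer(np.cos(angles), n1) + np.outer(np.sin(angles), n2))
        verts[i * ring_verts:(i + 1) * ring_verts] = ring

    tris = []
    for i in range(n - 1):
        for j in range(ring_verts):
            a = i * ring_verts + j
            b = i * ring_verts + (j + 1) % ring_verts
            c = a + ring_verts
            d = b + ring_verts
            tris.extend([(a, b, c), (b, d, c)])  # two triangles per quad
    return verts, np.array(tris)
```

Because each ring depends only on its own path point and its immediate neighbours, the iterations are independent, which is what makes the workload map naturally onto one GPU thread per ring.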
52

Development of a pipeline to allow continuous development of software onto hardware : Implementation on a Raspberry Pi to simulate a physical pedal using the Hardware In the Loop method / Utveckling av en pipeline för att ge upphov till kontinuerligt utvecklande av mjukvara på hårdvara : Implementation på en Raspberry Pi för att simulera en fysisk pedal genom användandet av Hardware In the Loop-metoden

Ryd, Jonatan, Persson, Jeffrey January 2021
Saab wants to examine the Hardware In the Loop method as a concept and what an infrastructure for Hardware In the Loop would look like. Hardware In the Loop is based on continuously testing hardware, which is simulated. The software Saab wants to use for the Hardware In the Loop method is Jenkins, a Continuous Integration and Continuous Delivery tool. To simulate the hardware, they want to examine the use of an Application Programming Interface between a Raspberry Pi and the Robot Framework test automation framework. The reason Saab wants this examined is that they believe the method can improve the rate of testing, the quality of the tests, and thereby the quality of their products. The theory behind Hardware In the Loop, Continuous Integration, and Continuous Delivery is explained in this thesis. The Hardware In the Loop method was implemented on top of the Continuous Integration and Continuous Delivery tool Jenkins. An Application Programming Interface between the General Purpose Input/Output pins on a Raspberry Pi and Robot Framework was developed. With these implementations in place, the Hardware In the Loop method was successfully integrated, with a Raspberry Pi used to simulate the hardware.
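As an illustration of what an Application Programming Interface between Robot Framework and the Raspberry Pi's General Purpose Input/Output pins can look like, the sketch below wraps a few GPIO operations as Robot Framework keywords. The pin numbers, keyword names and the use of the RPi.GPIO package are assumptions made for the example, not a description of the thesis implementation.

```python
# PedalGpioLibrary.py: a minimal Robot Framework keyword library
# wrapping Raspberry Pi GPIO pins, e.g. to simulate a pedal press.
import RPi.GPIO as GPIO
from robot.api.deco import keyword


class PedalGpioLibrary:
    ROBOT_LIBRARY_SCOPE = "SUITE"

    def __init__(self, pedal_pin=17, sense_pin=27):
        # Hypothetical pin assignment chosen for the example.
        self.pedal_pin = pedal_pin
        self.sense_pin = sense_pin
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(self.pedal_pin, GPIO.OUT, initial=GPIO.LOW)
        GPIO.setup(self.sense_pin, GPIO.IN)

    @keyword("Press Pedal")
    def press_pedal(self):
        GPIO.output(self.pedal_pin, GPIO.HIGH)

    @keyword("Release Pedal")
    def release_pedal(self):
        GPIO.output(self.pedal_pin, GPIO.LOW)

    @keyword("Pedal Should Be Pressed")
    def pedal_should_be_pressed(self):
        if GPIO.input(self.sense_pin) != GPIO.HIGH:
            raise AssertionError("Expected the simulated pedal to be pressed")

    def __del__(self):
        GPIO.cleanup()
```

A test suite would import this file with `Library    PedalGpioLibrary.py` and call the keywords, and Jenkins would run `robot` over the suite as one stage of the Continuous Integration pipeline.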
53

On continuous maximum flow image segmentation algorithm / Segmentation d'images par l'algorithme des flot maximum continu

Marak, Laszlo 28 March 2012
In recent years, with advances in computing equipment and image acquisition techniques, the sizes, dimensions and content of acquired images have increased considerably. Unfortunately, there is a steadily widening gap between the classical and parallel programming paradigms and their actual performance on modern computer hardware. In this thesis we consider in depth one particular algorithm, the continuous maximum flow computation. We review in detail why this algorithm is useful and interesting, and we propose efficient and portable implementations on various architectures, from single-processor machines to SMP and NUMA architectures, as well as massively parallel GPGPU architectures. We also examine how it performs, in terms of segmentation quality, on large images from recent problems in materials science and nano-scale biology.
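For orientation, one widely used continuous maximum flow scheme (in the spirit of Appleton and Talbot) evolves a scalar potential P and a vector flow field F under a capacity constraint |F| ≤ g derived from the image until P becomes near-binary. The NumPy fragment below is a schematic, single-threaded 2D sketch of that iteration with an assumed step size and iteration count; the thesis implementations target SMP, NUMA and GPGPU architectures and large volumes rather than a loop of this kind.

```python
import numpy as np

def continuous_max_flow(g, source_mask, sink_mask, tau=0.15, n_iter=2000):
    """Schematic 2D continuous maximum flow iteration.

    g           : (H, W) capacity map, small on image edges, large in
                  homogeneous regions.
    source_mask : boolean (H, W) seeds forced to P = 1.
    sink_mask   : boolean (H, W) seeds forced to P = 0.
    Returns the potential P; thresholding it at 0.5 gives the segmentation.
    """
    P = np.zeros(g.shape)
    Fx = np.zeros(g.shape)   # flow component along columns
    Fy = np.zeros(g.shape)   # flow component along rows

    for _ in range(n_iter):
        # Flow update: F <- F - tau * grad(P) (forward differences).
        Fx[:, :-1] -= tau * (P[:, 1:] - P[:, :-1])
        Fy[:-1, :] -= tau * (P[1:, :] - P[:-1, :])
        # Project the flow back onto the capacity constraint |F| <= g.
        mag = np.sqrt(Fx ** 2 + Fy ** 2)
        scale = np.minimum(1.0, g / np.maximum(mag, 1e-12))
        Fx *= scale
        Fy *= scale
        # Potential update: P <- P - tau * div(F) (backward differences),
        # with the source/sink boundary conditions re-imposed.
        div = np.zeros_like(P)
        div[:, 1:] += Fx[:, 1:] - Fx[:, :-1]
        div[:, 0] += Fx[:, 0]
        div[1:, :] += Fy[1:, :] - Fy[:-1, :]
        div[0, :] += Fy[0, :]
        P -= tau * div
        P[source_mask] = 1.0
        P[sink_mask] = 0.0
    return P
```

The per-pixel updates touch only immediate neighbours, which is what makes the method attractive for massively parallel hardware.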
54

Développement d’algorithmes d’imagerie et de reconstruction sur architectures à unités de traitements parallèles pour des applications en contrôle non destructif / Development of imaging and reconstructions algorithms on parallel processing architectures for applications in non-destructive testing

Pedron, Antoine 28 May 2013
This thesis sits at the interface between ultrasound non-destructive testing and algorithm-architecture matching. Ultrasound non-destructive testing comprises a group of analysis techniques used in science and industry to evaluate the properties of a material, component, or system, whether in production or in maintenance, without causing damage. In order to characterize possible defects and determine their position, size and shape, imaging and reconstruction tools have been developed at CEA-LIST within the CIVA software platform. The evolution of acquisition sensors implies a continuous growth of datasets, so more and more computing power is needed to keep reconstructions interactive. General purpose processors (GPPs) evolving towards parallelism and emerging architectures such as GPUs offer large acceleration possibilities that can be applied to these algorithms. The main goal of the thesis is to evaluate the acceleration that can be obtained for two reconstruction algorithms on these architectures. The two algorithms differ in their parallelization scheme. The first can be parallelized straightforwardly on GPPs, whereas on GPUs an intensive use of atomic instructions is required. For the second, the parallelism is easier to express, but loop ordering on GPPs, as well as thread scheduling and careful use of shared memory on GPUs, are necessary to obtain efficient results. Different APIs and libraries, such as OpenMP, CUDA and OpenCL, are evaluated on chosen benchmarks. An integration of both algorithms into the CIVA software platform is proposed, and issues related to long-term code maintenance and durability are discussed.
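The reconstruction algorithms themselves are not detailed in the abstract, but delay-and-sum synthetic focusing over full-matrix-capture data is representative of the kind of ultrasound imaging kernel whose parallelization is at stake: every image pixel accumulates contributions from every transmit-receive pair of the array. The NumPy sketch below is a gather-style illustration under assumed geometry and sampling parameters, not code taken from CIVA.

```python
import numpy as np

def delay_and_sum(fmc, elem_x, xs, zs, c, fs):
    """Gather-style delay-and-sum reconstruction of an ultrasound image.

    fmc    : (n_tx, n_rx, n_samples) full-matrix-capture A-scans,
             assuming n_tx == n_rx == len(elem_x).
    elem_x : (n_elem,) x-positions of the array elements (metres).
    xs, zs : 1-D grids of image pixel coordinates (metres).
    c      : assumed sound velocity in the part (m/s).
    fs     : sampling frequency of the A-scans (Hz).
    Each pixel sums the samples whose time of flight matches the
    transmit-element -> pixel -> receive-element path.
    """
    n_tx, n_rx, n_samp = fmc.shape
    image = np.zeros((len(zs), len(xs)))
    for iz, z in enumerate(zs):
        for ix, x in enumerate(xs):
            # Time of flight from every element to this pixel.
            tof = np.sqrt((elem_x - x) ** 2 + z ** 2) / c
            for tx in range(n_tx):
                # Sample index for each receiver, clipped to the trace length.
                idx = np.round((tof[tx] + tof) * fs).astype(int)
                idx = np.clip(idx, 0, n_samp - 1)
                image[iz, ix] += fmc[tx, np.arange(n_rx), idx].sum()
    return image
```

The same sum can instead be organized as a scatter over time samples that accumulate into pixels, which is the kind of formulation that forces atomic additions on a GPU, whereas the gather form above parallelizes over pixels without write conflicts.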
55

The international financial reporting standard for small and medium-sized entities : the need and form of a third-tier financial reporting standard in Namibia

Klink, Petra 27 May 2016
The development of the International Financial Reporting Standard for Small and Medium-sized Entities (IFRS for SMEs) was driven by the demand for a financial reporting standard simpler than full International Financial Reporting Standards (IFRS). Despite these simplifications, the requirements of the IFRS for SMEs are still regarded as complex and costly to apply, especially for micro entities in developing countries such as Namibia. Consequently, there is a need to further simplify financial reporting requirements for micro entities in the form of a third-tier financial reporting standard. A third-tier standard can take the form of either a separately developed standard or a simplification of existing standard(s). Taking the Namibian financial reporting environment into account, there are more advantages to developing a standard based on existing standard(s). It is therefore recommended that Namibia develop a third-tier standard based on the IFRS for SMEs. / Financial Accounting / M. Phil. (Accounting Science)
56

Dynamický částicový systém jako účinný nástroj pro statistické vzorkování / A dynamical particle system as a driver for optimal statistical sampling

Mašek, Jan Unknown Date
The presented doctoral thesis aims at developing a new efficient tool for optimizing the uniformity of point samples. One use case for these point sets is as optimized sets of integration points in statistical analyses of computer models using Monte Carlo type integration. It is well known that pursuing uniformly distributed sets of integration points is the only possible way of decreasing the error of estimating the integral of an unknown function. The work begins with a survey of currently used criteria for evaluating and/or optimizing the uniformity of point sets. A critical evaluation of their properties is presented, leading to suggestions for improving the spatial and statistical uniformity of the resulting samples. A refined variant of the general formulation of the phi optimization criterion has been derived by incorporating a periodically repeated design domain along with scale-independent behavior of the criterion. Based on the physical analogy between a set of sampling points and a dynamical system of mutually repelling particles, a hyper-dimensional N-body system has been selected as the driver of the developed optimization tool. Because simulating such a dynamical system is known to be computationally intensive, an efficient solution using the massively parallel GPGPU platform Nvidia CUDA has been developed. An intensive study of the properties of this complex architecture turned out to be necessary to fully exploit the possible speedup.
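To make the refined criterion tangible, the sketch below evaluates a phi-type (power-sum of inverse distances) uniformity criterion on the unit hypercube with periodically repeated distances, the ingredient the abstract credits with removing boundary effects, together with one repulsion step of the particle analogy. The exponent, step size and exact normalization are assumptions for the example rather than the thesis' final formulation.

```python
import numpy as np

def phi_p_periodic(points, p=50):
    """phi_p-style uniformity criterion with periodic (toroidal) distances.

    points : (n, d) sample in the unit hypercube [0, 1)^d.
    Smaller values indicate a more uniformly spread point set. The
    periodic distance treats the domain as tiled in every direction,
    so points near opposite faces still repel each other.
    """
    n, d = points.shape
    delta = np.abs(points[:, None, :] - points[None, :, :])   # (n, n, d)
    delta = np.minimum(delta, 1.0 - delta)                    # wrap onto torus
    dist = np.sqrt((delta ** 2).sum(axis=-1))                 # (n, n)
    iu = np.triu_indices(n, k=1)
    return np.sum(dist[iu] ** (-p)) ** (1.0 / p)

def repel_step(points, p=50, step=1e-3):
    """One step of the particle analogy: push each point along the
    repulsive force implied by the criterion, wrapping back into [0, 1)."""
    n, d = points.shape
    delta = points[:, None, :] - points[None, :, :]
    delta -= np.round(delta)                    # shortest periodic vector
    dist = np.sqrt((delta ** 2).sum(-1)) + np.eye(n)
    force = (delta * dist[..., None] ** (-(p + 2))).sum(axis=1)
    return (points + step * force) % 1.0
```

Iterating `repel_step` and monitoring `phi_p_periodic` mimics, on the CPU, the N-body dynamics that the thesis accelerates on CUDA.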
57

SHAP-Secure Hardware Agent Platform

Zabel, Martin, Preußer, Thomas B., Reichel, Peter, Spallek, Rainer G. 11 June 2007
This paper presents a novel implementation of an embedded Java microarchitecture for secure, real-time, and multi-threaded applications. Together with support for modern features of object-oriented languages, such as exception handling, automatic garbage collection and interface types, a general-purpose platform is established that is also well suited to the agent concept. In particular, to address real-time issues, new techniques have been implemented in our Java microarchitecture, such as integrated stack and thread management for fast context switching, concurrent garbage collection for real-time threads, and autonomous control flows through preemptive round-robin scheduling.
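As a purely software analogue of the preemptive round-robin scheduling mentioned above (the platform implements it in hardware), the toy sketch below cycles ready threads through a fixed time quantum; the thread names and work units are invented for illustration.

```python
from collections import deque

def round_robin(tasks, time_slice=1):
    """Toy preemptive round-robin scheduler.

    tasks: dict mapping a thread name to its remaining work units.
    Each thread runs for at most `time_slice` units before being
    preempted and moved to the back of the ready queue, mimicking
    fixed-quantum context switching.
    """
    ready = deque(tasks.items())
    trace = []
    while ready:
        name, remaining = ready.popleft()
        run = min(time_slice, remaining)
        trace.append((name, run))
        remaining -= run
        if remaining > 0:
            ready.append((name, remaining))   # preempted, requeue
    return trace

print(round_robin({"gc": 3, "rt_thread": 2, "ui": 1}))
# [('gc', 1), ('rt_thread', 1), ('ui', 1), ('gc', 1), ('rt_thread', 1), ('gc', 1)]
```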
58

Reliable General Purpose Sentiment Analysis of the Public Twitter Stream

Haldenwang, Nils 27 September 2017
General purpose Twitter sentiment analysis is a novel field that is closely related to traditional Twitter sentiment analysis but differs in some key aspects. The main difference is that the novel approach considers the unfiltered public Twitter stream, while most previous approaches applied various filtering steps that are not feasible for many applications. Another goal is to yield more reliable results by classifying a tweet as positive or negative only if it distinctly carries the respective sentiment, and marking the remaining messages as uncertain. Traditional approaches are often not that strict. In the course of this thesis it was verified that the novel approach differs significantly from the traditional approach. Moreover, the experimental results indicated that the archetypical approaches can be transferred to the new domain, but related-domain data is consistently subpar compared to high-quality in-domain data. Finally, the viability of the best classification algorithm was qualitatively verified in a real-world setting that was also developed in the course of this thesis.
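The reliability requirement amounts to letting the classifier abstain: a tweet is labelled positive or negative only when the model is sufficiently confident, and marked uncertain otherwise. The scikit-learn sketch below illustrates that decision rule; the toy training data, the particular model and the confidence threshold are assumptions for the example, not the classifiers evaluated in the thesis.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data standing in for an annotated tweet corpus.
tweets = ["i love this", "great day", "awful service", "this is terrible"]
labels = ["positive", "positive", "negative", "negative"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tweets, labels)

def classify_reliably(tweet, threshold=0.8):
    """Return 'positive'/'negative' only for confident predictions."""
    probs = model.predict_proba([tweet])[0]
    best = probs.argmax()
    if probs[best] >= threshold:
        return model.classes_[best]
    return "uncertain"   # too ambiguous to label either way

print(classify_reliably("love it, great stuff"))
print(classify_reliably("the weather exists"))
```

Raising the threshold trades coverage for reliability, which is exactly the knob that distinguishes the strict approach from traditional three-class classification.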
59

Akcelerace operací nad řídkými maticemi v nelineární metodě nejmenších čtverců / Accelerated Sparse Matrix Operations in Nonlinear Least Squares Solvers

Polok, Lukáš January 2017
This thesis focuses on data structures for the representation of sparse block matrices and the associated computational algorithms that I have designed. Sparse block matrices arise in many problems, for example when solving least squares problems. Nonlinear least squares (NLS) is frequently applied in robotics to solve the robot localization problem (SLAM) and in related 3D reconstruction tasks in computer vision (bundle adjustment (BA), structure from motion (SfM)). Finite element (FEM) and partial differential equation (PDE) problems in physical simulation can also have block structure. Most existing sparse linear algebra implementations use element-wise sparse matrices, and only a few support sparse block matrices. This may be due to the complexity of block formats, which reduces computational speed unless the blocks are sufficiently large. Some of the specialized NLS optimizers in robotics and computer vision use block matrices as their internal representation to reduce the cost of assembling the sparse matrices, but ultimately convert this representation to an element-wise sparse matrix for the linear solver. Existing sparse block matrix implementations mostly focus on a single operation, often matrix-vector multiplication. The solution proposed in this dissertation covers a broader spectrum of functionality: it implements efficient sparse block matrix assembly, matrix-vector and matrix-matrix multiplication, as well as triangular solving and Cholesky factorization. These functions can readily be used to solve systems of linear equations with direct or iterative methods, or to compute eigenvalues. Fast algorithms are described for both the CPU and graphics accelerators (GPUs). The proposed algorithms are integrated in the SLAM++ library, which solves nonlinear least squares problems with a focus on robotics and computer vision. An evaluation on standard datasets shows that the proposed methods achieve significantly better results than existing methods described in the literature, without compromising the precision or generality of the solution.
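To make the block idea concrete, the sketch below assembles a small matrix in SciPy's block compressed sparse row (BSR) format and multiplies it by a vector; the block size and values are arbitrary, and the SLAM++ data structures and GPU kernels developed in the thesis are naturally far more involved.

```python
import numpy as np
from scipy.sparse import bsr_matrix

# A 6x6 matrix stored as a 3x3 grid of 2x2 blocks, only 4 blocks non-zero.
# In SLAM/BA problems the block size matches the variable size
# (e.g. 3x3 for 2D poses, 6x6 for 3D poses), so whole blocks are
# written at once when the system is assembled.
blocks = np.array([
    [[4.0, 1.0], [1.0, 4.0]],   # block (0, 0)
    [[0.5, 0.0], [0.0, 0.5]],   # block (0, 2)
    [[3.0, 0.0], [0.0, 3.0]],   # block (1, 1)
    [[2.0, 1.0], [0.0, 2.0]],   # block (2, 2)
])
indices = np.array([0, 2, 1, 2])      # block-column of each stored block
indptr = np.array([0, 2, 3, 4])       # block-row pointers

A = bsr_matrix((blocks, indices, indptr), shape=(6, 6))
x = np.ones(6)
print(A @ x)          # block sparse matrix-vector product
print(A.toarray())    # dense view, for checking the layout
```

Working block by block keeps the index arrays short and the arithmetic dense within each block, which is the property the thesis exploits on both CPUs and GPUs.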
60

Perceived Affordance and Socio-Technical Transition: Blockchain for the Swedish Public Sector / Uppfattad görlighet och socio-teknisk övergång: blockkedjor för svensk offentlig sektor

JONSSON, JOHAN R. January 2018
The Swedish public sector is under constant pressure to improve processes and services through further digitalization. Blockchain is a novel technology that shows promise of enabling functionality desired within the sector. However, as the technology is still in its infancy, the practical value it could offer the sector remains unproven. In this master thesis, the socio-technical transition required for the public sector to adopt blockchain is analyzed using the multi-level perspective framework. The sector is operationalized as an incumbent socio-technical regime and blockchain as a collection of niche innovations. Affordance theory and the multi-level perspective are combined to analyze how the perception of blockchain affects the potential transition pathways. The primary empirical data is gathered through a series of interviews with key individuals from both the Swedish public sector and the blockchain community, as well as from attending blockchain events. Secondary data is gathered through a review of literature on the topic. The findings show that the practical value and functionalities that blockchain offers and that match the needs of the sector are verification, authentication, traceability, automation of simple logical functions, and the digitization of unique value. The conceptual solutions deemed suitable today are: blockchain for identity management, blockchain for data verification, blockchains for property registers of, e.g., vehicles and real estate, and external industry blockchains for improved traceability of, e.g., supply chains and sales records. The thesis also derives recommendations for the public sector, indicating that, e.g., active education, revision of regulation, and international cooperation would further a potential transition towards blockchain. It also finds that the perceived affordances of a technology in its early stages affect the transition pathways: barriers to entry, the number of potential adopting application sectors, the level of coordination, and the resources available for development are all influenced by these perceptions.
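As a minimal illustration of the "data verification" affordance listed above, the sketch below chains record hashes so that any later modification of a stored record (or of their order) is detectable. It is a toy hash chain written for this summary, not a system considered in the thesis.

```python
import hashlib
import json

def block_hash(prev_hash, record):
    """Hash a record together with the previous block's hash."""
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64              # genesis hash
    for record in records:
        prev = block_hash(prev, record)
        chain.append({"record": record, "hash": prev})
    return chain

def verify_chain(chain):
    prev = "0" * 64
    for block in chain:
        if block_hash(prev, block["record"]) != block["hash"]:
            return False                    # record or order was tampered with
        prev = block["hash"]
    return True

ledger = build_chain([{"id": 1, "owner": "A"}, {"id": 1, "owner": "B"}])
assert verify_chain(ledger)
ledger[0]["record"]["owner"] = "C"          # tamper with an early record
assert not verify_chain(ledger)
```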
