201

Vyhledávání informací / Information Retrieval

Šabatka, Pavel January 2010 (has links)
The purpose of this thesis is to summarise theoretical knowledge in the field of information retrieval. The document covers mathematical models that can be used in information retrieval algorithms, including how they rank results, and examines the specifics of image and text data. The practical part implements an algorithm over video shots of the TRECVid 2009 dataset based on high-level features. The uniqueness of this algorithm lies in its use of internet search engines to obtain term similarity. The work contains a detailed description of the implemented algorithm, including the tuning process and the conclusions of its testing.
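The abstract above mentions mathematical retrieval models and ranking. A minimal sketch of one classical such model, TF-IDF weighting with cosine-similarity ranking — not the thesis's own algorithm, which additionally uses search-engine term similarity:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF weight vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))          # document frequency
    idf = {t: math.log(n / df[t]) for t in df}             # inverse document frequency
    return [{t: c * idf[t] for t, c in Counter(d).items()} for d in docs]

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = [["fast", "image", "search"],
        ["text", "search", "engine"],
        ["image", "retrieval"]]
vecs = tfidf_vectors(docs)
query = dict(Counter(["image", "search"]))                 # raw-count query weights
ranked = sorted(range(len(docs)), key=lambda i: cosine(query, vecs[i]), reverse=True)
```

Running this ranks document 0 first, since it matches both query terms.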
202

Generic simulation modelling of stochastic continuous systems

Albertyn, Martin 24 May 2005 (has links)
The key objective of this research is to develop a generic simulation modelling methodology that can be used to model stochastic continuous systems effectively. The generic methodology renders simulation models that exhibit the following characteristics: short development and maintenance times, user-friendliness, short simulation runtimes, compact size, robustness, accuracy and a single software application. The research was initiated by the shortcomings of a simulation modelling method that is detailed in a Magister dissertation. A system description of a continuous process plant (referred to as the Synthetic Fuel plant) is developed. The decision support role of simulation modelling is considered and the shortcomings of the original method are analysed. The key objective, importance and limitations of the research are also discussed. The characteristics of stochastic continuous systems are identified and a generic methodology that accommodates these characteristics is conceptualised and developed. It consists of the following eight methods and techniques: the variables technique, the iteration time interval evaluation method, the event-driven evaluation method, the Entity-represent-module method, the Fraction-comparison method, the iterative-loop technique, the time “bottleneck” identification technique and the production lost “bottleneck” identification technique. Five high-level simulation model building blocks are developed. The generic methodology is demonstrated and validated by the development and use of two simulation models. The five high-level building blocks are used to construct identical simulation models of the Synthetic Fuel plant in two different simulation software packages, namely: Arena and Simul8. An iteration time interval and minimum sufficient sample sizes are determined and the simulation models are verified, validated, enhanced and compared. The simulation models are used to evaluate two alternative scenarios. 
The results of the scenarios are compared and conclusions are presented. The factors that motivated the research, the process that was followed and the generic methodology are summarised. The original method and the generic methodology are compared and the strengths and weaknesses of the generic methodology are discussed. The contribution to knowledge is explained and future developments are proposed. The possible range of application and different usage perspectives are presented. To conclude, the lessons learnt and reinforced are considered. / Thesis (PhD (Industrial Engineering))--University of Pretoria, 2004. / Industrial and Systems Engineering / unrestricted
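The abstract refers to an iteration time interval and stochastic continuous behaviour. A minimal fixed-time-step sketch of simulating one stochastic continuous unit — a toy buffer tank whose parameters are illustrative, not taken from the Synthetic Fuel plant:

```python
import random

def simulate_tank(hours, dt=0.1, capacity=100.0, outflow=4.0, seed=1):
    """Fixed-step (iteration time interval) simulation of a buffer tank
    with a stochastic inflow; a toy stand-in for one plant unit."""
    rng = random.Random(seed)
    level, t, lost = 50.0, 0.0, 0.0
    while t < hours:
        inflow = rng.gauss(4.0, 1.0)          # stochastic continuous input
        level += (inflow - outflow) * dt      # Euler update over one interval
        if level > capacity:                  # overflow counts as production lost
            lost += level - capacity
            level = capacity
        level = max(level, 0.0)
        t += dt
    return level, lost

final_level, lost = simulate_tank(24.0)
```

Choosing `dt` is exactly the iteration-time-interval trade-off the abstract describes: smaller steps track the continuous dynamics more closely at the cost of longer runtimes.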
203

EMULATION FOR MULTIPLE INSTRUCTION SET ARCHITECTURES

Christopher M Wright (10645670) 07 May 2021 (has links)
System emulation and firmware re-hosting are popular techniques for answering various security and performance related questions, such as whether a firmware image contains security vulnerabilities or meets timing requirements when run on a specific hardware platform. While this motivation for emulation and binary analysis has previously been explored and reported, starting to work or do research in the field is difficult. Further, doing the actual firmware re-hosting for various Instruction Set Architectures (ISAs) is usually time consuming and difficult, and at times may seem impossible. To this end, I provide a comprehensive guide for the practitioner or system emulation researcher, along with various tools that work for a large number of ISAs, reducing the challenges of getting re-hosting working or porting previous work to new architectures. I lay out the common challenges faced during firmware re-hosting, explain successive steps, and survey common tools to overcome these challenges. I provide emulation classification techniques on five different axes, including emulator methods, system type, fidelity, emulator purpose, and control. These classifications and comparison criteria enable the practitioner to determine the appropriate tool for emulation. I use these classifications to categorize popular works in the field and present 28 common challenges faced when creating, emulating and analyzing a system, from obtaining firmware to post-emulation analysis. I then introduce a HALucinator [1]/QEMU [2] tracer tool named HQTracer, a binary function matching tool PMatch, and GHALdra, an emulator that works for more than 30 different ISAs and enables High Level Emulation.
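The core of interpreter-style emulation described above can be sketched as a fetch-decode-execute loop. This toy three-register ISA is purely illustrative and unrelated to HQTracer, PMatch, or GHALdra:

```python
def emulate(program, regs=None):
    """Toy fetch-decode-execute loop for a hypothetical 3-register ISA;
    illustrates the core of interpreter-style emulation."""
    regs = dict(regs or {"r0": 0, "r1": 0, "r2": 0})
    handlers = {
        "mov": lambda r, dst, imm: r.__setitem__(dst, imm),      # load immediate
        "add": lambda r, dst, src: r.__setitem__(dst, r[dst] + r[src]),
    }
    pc = 0
    while pc < len(program):
        op, *args = program[pc]        # fetch and decode
        handlers[op](regs, *args)      # dispatch on the decoded opcode
        pc += 1                        # no branches in this toy ISA
    return regs

state = emulate([("mov", "r0", 5), ("mov", "r1", 7), ("add", "r0", "r1")])
# state["r0"] == 12
```

Real emulators layer memory models, peripherals, and (for performance) dynamic binary translation on top of this basic loop, which is where most of the re-hosting challenges surveyed in the thesis arise.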
204

Methodology to Derive Resource Aware Context Adaptable Architectures for Field Programmable Gate Arrays

Samala, Harikrishna 01 December 2009 (has links)
The design of a common architecture that can support multiple data-flow patterns (or contexts) embedded in complex control flow structures, in applications like multimedia processing, is particularly challenging when the target platform is a Field Programmable Gate Array (FPGA) with a heterogeneous mixture of device primitives. This thesis presents scheduling and mapping algorithms that use a novel area cost metric to generate resource aware context adaptable architectures. Results of a rigorous analysis of the methodology on multiple test cases are presented. Results are compared against published techniques and show an area savings and execution time savings of 46% each.
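Scheduling precedes the mapping step described above. A minimal sketch of ASAP list scheduling over a dataflow graph — the example graph and the scheduling variant are illustrative; the thesis's own algorithms and area cost metric are not reproduced here:

```python
def schedule_asap(ops, deps):
    """ASAP list scheduling: assign each operation the earliest cycle
    after all of its dependencies; a common first step before mapping
    operations onto FPGA primitives under an area cost metric."""
    cycle = {}
    for op in ops:  # ops are assumed to arrive in topological order
        cycle[op] = 1 + max((cycle[d] for d in deps.get(op, [])), default=-1)
    return cycle

# Hypothetical dataflow graph: c depends on a and b, d depends on c.
sched = schedule_asap(["a", "b", "c", "d"], {"c": ["a", "b"], "d": ["c"]})
# sched == {"a": 0, "b": 0, "c": 1, "d": 2}
```

A resource-aware scheduler would additionally weigh each candidate binding by the area of the heterogeneous device primitives it consumes, which is the thesis's contribution.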
205

Reviewing Code Review : Defining and developing High-level Conceptual Code Review at a financial technology company / Granskning av kodgranskning

Olausson, Andreas, Louca, Stefanus January 2020 (has links)
Code review is a recurring activity at software companies where the source code, or parts of it, undergoes an inspection aimed at detecting possible errors before the code is released to production. A common variation today is modern code review, a more lightweight practice than formal code review, in which the developers themselves participate and continuously revise their colleagues' code. At a financial technology company in Stockholm, modern code review is applied. The company has expressed a need for a tool that can facilitate the code review process. One suggestion from the company was to implement high-level conceptual code review (HCCR), the idea of a tool that automatically sorts code changes into different commits, each with a specific message. In order to implement the tool, HCCR needs to be defined and concretised, since it has previously existed solely as an idea. As a first step of the project, developers' views on what information is desirable in a commit were examined. The project addressed the following research questions: What information is desirable and needed by the developers of a medium-sized company, to help them do code reviews in a pull-based environment? What should the information consist of? How should the information be presented? To answer these questions, interviews were conducted with software developers at the company, together with observations in which the developers tried out a first iteration of HCCR. The first iteration was developed using the company's guidelines on how developers contribute code changes, together with our company supervisor's views on how the tool could work. The interviews were recorded and transcribed, whereafter a thematic analysis was applied. From the analysis, 13 concepts emerged, which were divided into five categories. The developers wanted the commits to be atomic, compilable and testable in order to facilitate debugging. The developers also expressed a need for clear information about both pull requests (PRs) and commit messages. In the interviews, a theme emerged that the messages should consist of what has changed and why it has changed. Differences were also observed in the code review process, as different developers use different strategies when reviewing code. Based on the information that emerged from the interviews and observations, along with previous research, a second iteration of HCCR was prepared. The report concludes by discussing possible implementations of the tool.
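Since HCCR has existed only as an idea, any code is necessarily hypothetical. One way such a tool might sort code changes into separate, focused commits, sketched with an assumed grouping-by-component rule:

```python
from collections import defaultdict

def group_changes(changed_files):
    """Hypothetical HCCR-style pass: sort changed files into candidate
    commits by top-level component, so each commit stays small and
    focused (atomic). The grouping key is an illustrative assumption,
    not the company's actual tool."""
    commits = defaultdict(list)
    for path in changed_files:
        component = path.split("/", 1)[0]   # first path segment as the component
        commits[component].append(path)
    return {comp: sorted(files) for comp, files in commits.items()}

groups = group_changes(["api/routes.py", "api/models.py", "ui/app.ts"])
# groups == {"api": ["api/models.py", "api/routes.py"], "ui": ["ui/app.ts"]}
```

A real implementation would also generate the "what changed and why" commit message the interviewed developers asked for, which pure path grouping cannot infer.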
206

Shaderprestanda inom Unity : En jämförelse mellan Unity Shader Graph och HLSL shaders / Shader Performance Within Unity : A comparison between Unity Shader Graph and HLSL shaders

Börjesson, Jonathan January 2022 (has links)
By creating shaders, game developers can achieve a wealth of visual effects; the only limits are imagination and performance. One of the largest game engines on the market is Unity Engine (Unity Technologies, 2005). There are two development methods for creating shaders in Unity: through the visual tool Unity Shader Graph, or by programming in High-Level Shading Language. The advantage of Unity Shader Graph is its ease of use. Could a consequence of this ease of use be a penalty on the resulting performance? The purpose of this study is to examine performance differences between shaders implemented with High-Level Shading Language and with Unity Shader Graph. This was investigated by creating three shaders in Unity Shader Graph and three visually similar shaders in High-Level Shading Language. After creation, the shaders written in High-Level Shading Language were optimized using optimization techniques suggested by Crawford and O'Boyle (2018). The results showed that no strong connection could be made between the use of Unity Shader Graph and degraded performance. The test results were not conclusive; some shaders performed better on one hardware configuration but worse on another. In 3 of 6 tests, the compared shaders performed without a significant performance difference. / There is other digital material (e.g. film, image or audio files) or models/artifacts belonging to the thesis that need to be archived.
207

Användning av högnivåspråket Swift i webbläsaren och i Android : En studie på möjligheterna att återanvända högnivåspråket Swift utanför iOS i andra plattformar som webbläsare och Android / Using the high-level language Swift in the browser and on Android : A study on the possibilities of reusing the high-level language Swift outside of iOS in other platforms such as browsers and Android

Albaloua, Mark, Kizilkaya, Kenan January 2023 (has links)
The purpose of this work was to study the possibilities of using the high-level language Swift outside of iOS, in the browser and on Android, in order to reduce the amount of code written and thus the development time needed to create applications for iOS, the browser, and Android. To find suitable tools, a study of previous work and methods was made. The results of the study led to the use of the Tokamak framework together with WebAssembly to reuse Swift in the browser, and the tool SwiftKotlin to reuse Swift on Android. An application using the Model-View-ViewModel (MVVM) design pattern was created with the intention of testing reusability. The results showed that Tokamak with WebAssembly made it possible to reuse all the code from the original iOS application except platform-specific functions such as local saving and network calls. SwiftKotlin made it possible to reuse the model class with some small adjustments, while the view model and view classes had to be written manually.
208

Throughput Constrained and Area Optimized Dataflow Synthesis for FPGAs

Sun, Hua 21 February 2008 (has links) (PDF)
Although high-level synthesis has been researched for many years, synthesizing minimum hardware implementations under a throughput constraint for computationally intensive algorithms remains a challenge. In this thesis, three important techniques are studied carefully and applied in an integrated way to meet this challenging synthesis requirement. The first is pipeline scheduling, which generates a pipelined schedule that meets the throughput requirement. The second is module selection, which decides the most appropriate circuit module for each operation. The third is resource sharing, which reuses a circuit module by sharing it between multiple operations. This work shows that combining module selection and resource sharing while performing pipeline scheduling can significantly reduce the hardware area, by either using slower, more area-efficient circuit modules or by time-multiplexing faster, larger circuit modules, while meeting the throughput constraint. The results of this work show that the combined approach can generate on average 43% smaller hardware than is possible when a single technique (resource sharing or module selection) is applied. There are four major contributions of this work. First, given a fixed throughput constraint, it explores all feasible frequency and data introduction interval design points that meet this throughput constraint. This enlarged pipelining design space exploration yields superior hardware architectures compared with previous pipeline synthesis work because of the larger space explored. Second, the module selection algorithm in this work considers different module architectures, as well as different pipelining options for each architecture. This not only addresses the unique architecture of most FPGA circuit modules, it also performs retiming at the high-level synthesis level. Third, this work proposes a novel approach that integrates the three inter-related synthesis techniques of pipeline scheduling, module selection and resource sharing.
To the author's best knowledge, this is the first attempt to do this. The integrated approach is able to identify more efficient hardware implementations than when only one or two of the three techniques are applied. Fourth, this work proposes and implements several algorithms that explore the combined pipeline scheduling, module selection and resource sharing design space, and identifies the most efficient hardware architecture under the synthesis constraint. These algorithms explore the combined design space in different ways which represents the trade off between algorithm execution time and the size of the explored design space.
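The interplay of module selection and a throughput constraint can be illustrated in miniature: pick, per operation, the smallest-area module whose initiation interval still meets the constraint. This greedy sketch ignores resource sharing, and the library numbers are made up, not taken from the thesis:

```python
def select_modules(ops, library, max_interval):
    """Toy module selection: for each operation, choose the smallest-area
    module whose data introduction interval meets the throughput
    constraint. A real flow would co-optimize this with pipeline
    scheduling and resource sharing."""
    chosen, total_area = {}, 0
    for op in ops:
        feasible = [m for m in library[op] if m["interval"] <= max_interval]
        best = min(feasible, key=lambda m: m["area"])   # area-optimized pick
        chosen[op] = best["name"]
        total_area += best["area"]
    return chosen, total_area

# Hypothetical module library: a fast/large and a slow/small multiplier.
library = {
    "mul": [{"name": "mul_fast", "area": 90, "interval": 1},
            {"name": "mul_slow", "area": 40, "interval": 4}],
    "add": [{"name": "add_std", "area": 10, "interval": 1}],
}
chosen, area = select_modules(["mul", "add"], library, max_interval=4)
# A relaxed constraint lets the slower, smaller multiplier be chosen.
```

Tightening `max_interval` to 1 would force `mul_fast` and more than double the area, which is the slower-but-smaller trade-off the abstract describes.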
209

VISUELL PROGRAMMERING OCH SHADERPRESTANDA : En jämförelse mellan shaders gjorda i Unreal Material och HLSL / VISUAL PROGRAMMING AND SHADERPERFORMANCE : A comparison between shaders made in Unreal Material and HLSL

Olsson, William January 2023 (has links)
In game development, developers like to use visual effects in their games, and a multitude of visual effects can be created through shader programs. What stops game developers from using all the visual effects they would like is performance. Not all users have access to the latest hardware, and a game may have minimum performance requirements. The game industry is also moving increasingly towards mobile games (ISFE & EGDF 2021, pp. 18-19). Mobile phones have limited resources and do not offer as much graphics processing power as a dedicated graphics processor. The purpose of this study is to examine whether there is a difference in performance between shaders implemented with Unreal Material and with High-Level Shading Language. Two shader effects were implemented in each language, after which two rendering-intensive scenes were constructed and timing measurements of each implementation were carried out. The results could not show any connection between the implementation method used and a difference in performance; the tests pointed only to a negligible difference.
210

An Embedded System for Classification and Dirt Detection on Surgical Instruments

Hallgrímsson, Guðmundur January 2019 (has links)
The need for automation in healthcare has been rising steadily in recent years, both to increase efficiency and to free trained workers from repetitive, menial, or even dangerous tasks. This thesis investigates the implementation of two pre-determined and pre-trained convolutional neural networks on an FPGA for the classification and dirt detection of surgical instruments in a robotics application. A thorough background on the inner workings and history of artificial neural networks is given and expanded on in the context of convolutional neural networks. The Winograd algorithm for computing convolutions is presented as a method for increasing the computational performance of convolutional neural networks. A development platform and toolchain are then selected. A high-level design of the overall system is explained, before details of the high-level synthesis implementation of the dirt detection network are shown. Measurements are then made on the performance of the high-level synthesis implementations of the various blocks needed for convolutional neural networks. The main convolutional kernel is implemented both with the Winograd algorithm and with the naive convolution algorithm, and the two are compared. Finally, the overall performance of the end-to-end system is measured and conclusions are drawn. The final product of the project gives a good basis for further work towards a complete system that is both power-efficient and low in latency. Such a system would combine the strengths of general-purpose sequential processing with the parallelism of an FPGA in a single system.
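The Winograd algorithm mentioned in the abstract reduces the number of multiplications in a convolution. A sketch of the 1-D F(2,3) variant, which produces two outputs of a 3-tap filter with four multiplications instead of six, independent of the thesis's FPGA implementation:

```python
def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap convolution over the
    4-sample input d using 4 multiplications instead of 6."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return [m1 + m2 + m3, m2 - m3 - m4]

def conv_naive(d, g):
    """Reference sliding-window convolution (6 multiplications)."""
    return [sum(d[i + j] * g[j] for j in range(3)) for i in range(2)]

d, g = [1.0, 2.0, 3.0, 4.0], [0.5, 1.0, -1.0]
# winograd_f23(d, g) == conv_naive(d, g)
```

The filter-side factors `(g0+g1+g2)/2` and `(g0-g1+g2)/2` are constant per filter and can be precomputed, which is what makes the transform attractive for a fixed, pre-trained network on an FPGA.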
