211

An Embedded System for Classification and Dirt Detection on Surgical Instruments

Hallgrímsson, Guðmundur January 2019 (has links)
The need for automation in healthcare has been rising steadily in recent years, both to increase efficiency and to free skilled workers from repetitive, menial, or even dangerous tasks. This thesis investigates the implementation of two pre-determined and pre-trained convolutional neural networks on an FPGA for the classification and dirt detection of surgical instruments in a robotics application. Background on the inner workings and history of artificial neural networks is given and expanded on in the context of convolutional neural networks. The Winograd algorithm for computing convolutional operations is presented as a method for increasing the computational performance of convolutional neural networks. A development platform and toolchain are then selected. A high-level design of the overall system is explained, before details of the high-level synthesis implementation of the dirt detection convolutional neural network are shown. Measurements are then made of the performance of the high-level synthesis implementation of the various blocks needed for convolutional neural networks. The main convolutional kernel is implemented using both the Winograd algorithm and the naive convolution algorithm, and the two are compared. Finally, measurements of the overall performance of the end-to-end system are made and conclusions are drawn. The final product of the project gives a good basis for further work in implementing a complete system to handle this functionality in a manner that is both power-efficient and low in latency. Such a system would utilize the different strengths of general-purpose sequential processing and the parallelism of an FPGA and tie those together in a single system. / The need for automation in health and social care has grown ever larger in recent years, both in terms of efficiency and in order to free trained workers from repetitive, simple, or even dangerous tasks. This report investigates the implementation of two predefined and pre-trained convolutional neural networks on an FPGA for classifying and detecting contamination on surgical instruments. A background on how neural networks work, and their history, is presented in the context of convolutional neural networks. The Winograd algorithm, which is used to compute convolutions, is described as a method aimed at increasing computational performance. A development platform and tools are selected. The system is described at a high level before details of the high-level synthesis implementation of the dirt detection network are shown. Measurements are then made of the performance of the various building blocks. The core convolution code is implemented both with the Winograd algorithm and with the traditional, naive method, and the outcomes of the two methods are compared. Finally, measurements of the overall system performance are made and conclusions are drawn from them. The final product of the project can be used as a good basis for further development of a complete system that is both power-efficient and high-performing, by tying together the strengths of traditional sequential processors with the parallelism of an FPGA in a single system.
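To make the Winograd trade-off mentioned in this abstract concrete, the following is a minimal NumPy sketch of a single 1-D Winograd F(2,3) tile checked against a naive 3-tap convolution; it illustrates the algorithm in general, not the thesis's HLS implementation, and the transform matrices are the standard F(2,3) ones.

```python
import numpy as np

# Standard transform matrices for Winograd F(2,3): two outputs of a
# 3-tap correlation are produced with 4 multiplications instead of 6.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23_tile(d, g):
    """One F(2,3) tile: d is 4 input samples, g is a 3-tap kernel."""
    U = G @ g            # transform the kernel (can be precomputed)
    V = BT @ d           # transform the input tile
    return AT @ (U * V)  # elementwise product, then output transform

def naive_corr(d, g):
    """Reference: sliding 3-tap correlation producing 2 outputs."""
    return np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                     d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])

d = np.random.rand(4)
g = np.random.rand(3)
assert np.allclose(winograd_f23_tile(d, g), naive_corr(d, g))
```

The point of the transform is that the two outputs are produced with four multiplications instead of six; 2-D CNN kernels apply the same idea tile by tile, which is what makes the algorithm attractive for an FPGA implementation.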
212

High Data Rate Signal Processing Architectures and Compilation Strategies for Scalable, Multi-Gigabit Digital Systems

Nybo, Daniel Alexander 12 April 2024 (has links) (PDF)
In this study, we present a high-performance computing architecture and hardware acceleration strategy for a heterogeneous multi-gigabit computing system. The system architecture integrates a BeeGFS distributed file system, capable of achieving 80 Gbps of sustained write throughput across five nodes, essential for managing the high data volumes generated by a 25-node high-performance computing (HPC) cluster. To ensure operational efficiency and scalability, the tasks performed on the 30-node Linux compute cluster are automated using Ansible, facilitating seamless deployment, management, and updates. We present compilation strategies for a hardware-accelerated Polyphase Filter Bank (PFB) channelization routine optimized for Xilinx Ultrascale+ FPGAs, capable of simultaneously processing 2048 channels per 12 input streams. This setup shows the efficiency of High-Level Synthesis of FPGA-based signal processing in handling demanding data analysis tasks. We also present the implementation and verification of a 1.6 Gsps Direct Memory Access (DMA) transfer from DDR4 memory to a modern Radio Frequency System on Chip (RFSoC) digital-to-analog converter. The combination of a high-throughput file system, streamlined automation, and advanced signal processing capabilities demonstrates the system's ability to meet the needs of complex, real-time data analysis and processing applications, advancing the field of computational research.
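As an illustration of what a polyphase filter bank channelizer computes (a software sketch only; the thesis targets an HLS implementation on Ultrascale+ FPGAs, and the channel count, tap count, and window choice below are arbitrary assumptions), a critically sampled PFB can be written in a few lines of NumPy.

```python
import numpy as np

def pfb_channelize(x, n_chan, taps_per_branch=8):
    """Critically sampled polyphase filter-bank channelizer (sketch).

    x: 1-D input signal, n_chan: number of output channels.
    Returns an array of shape (n_frames, n_chan) of channelized samples.
    """
    # Prototype low-pass filter, reshaped into one short FIR per branch.
    n_taps = n_chan * taps_per_branch
    proto = np.sinc(np.arange(n_taps) / n_chan - taps_per_branch / 2)
    proto *= np.hamming(n_taps)
    poly = proto.reshape(taps_per_branch, n_chan)   # one column per branch

    # Trim the input to whole blocks of n_chan samples.
    n_frames = len(x) // n_chan - taps_per_branch + 1
    xr = x[: (len(x) // n_chan) * n_chan].reshape(-1, n_chan)
    frames = np.empty((n_frames, n_chan), dtype=complex)
    for i in range(n_frames):
        # Each branch filters decimated samples; sum over the taps.
        frames[i] = np.sum(xr[i:i + taps_per_branch] * poly, axis=0)

    # An FFT across the branches gives the channel outputs for each frame.
    return np.fft.fft(frames, axis=1)

chans = pfb_channelize(np.random.randn(1 << 14), n_chan=64)
print(chans.shape)   # (frames, channels)
```

A hardware version streams the same structure: a bank of short FIR branches feeding an FFT, producing one output frame per n_chan input samples.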
213

Search for the Higgs boson decaying to a pair of muons with the CMS experiment at the Large Hadron Collider

Dmitry Kondratyev (14228264) 08 December 2022 (has links)
The CERN Large Hadron Collider (LHC) offers a unique opportunity to test the Standard Model of particle physics. The Standard Model predicts the existence of a Higgs boson and provides accurate estimates for the strength of the interactions of the Higgs boson with other particles. After the discovery of the Higgs boson, the measurement of its properties, such as its couplings to other particles, is of paramount importance.

The projects described in this thesis explore different aspects of one such measurement: the search for the Higgs boson decay into a pair of muons (H→μμ), conducted by the CMS experiment at the LHC. This decay plays an important role in elementary particle physics, as it provides a direct way to measure the coupling of the Higgs boson to the muon. The first evidence of the H→μμ decay was reported in 2020 as a result of an elaborate statistical analysis of the dataset collected by the CMS experiment during Run 2 of the LHC (2016–2018). The observed (expected) upper limit on the signal strength modifier for this decay at 95% confidence level was found to be 1.93 (0.81), constituting the most precise measurement to date.

The details of this analysis, along with studies to establish possible directions for the development of the next iteration of the H→μμ analysis using Run 3 data, are discussed in this thesis. In addition, a novel machine-learning-based algorithm for the muon high-level trigger is presented, which ultimately improves the data-taking efficiency of the CMS experiment and hence helps to increase the sensitivity of future H→μμ searches. Finally, projections of the H→μμ search sensitivity to the data-taking conditions at the High-Luminosity Large Hadron Collider are presented, estimating the achievable precision for future measurements of the Higgs boson properties.
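For readers unfamiliar with upper limits on a signal strength modifier, the toy sketch below computes a 95% CL CLs limit for a single-bin counting experiment; this is a deliberately simplified illustration with made-up numbers, not the CMS statistical procedure used in the analysis described above.

```python
from scipy.stats import poisson

def cls(mu, n_obs, s, b):
    """CLs for a single-bin counting experiment with expected signal
    mu*s and background b, using the observed count as test statistic."""
    p_sb = poisson.cdf(n_obs, mu * s + b)   # CL_{s+b}
    p_b = poisson.cdf(n_obs, b)             # CL_b
    return p_sb / p_b

def upper_limit(n_obs, s, b, alpha=0.05):
    """Scan mu upward until CLs < alpha, then refine by bisection."""
    lo, hi = 0.0, 1.0
    while cls(hi, n_obs, s, b) > alpha:     # bracket the crossing
        hi *= 2.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if cls(mid, n_obs, s, b) > alpha:
            lo = mid
        else:
            hi = mid
    return hi

# Hypothetical numbers, for illustration only.
print(upper_limit(n_obs=105, s=10.0, b=100.0))
```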
214

Software test case generation from system models and specification. Use of the UML diagrams and High Level Petri Nets models for developing software test cases.

Alhroob, Aysh M. January 2010 (has links)
The main part of software testing lies in the generation of test cases suitable for software system testing. The quality of the test cases plays a major role in reducing the time of software system testing and subsequently reduces the cost. The test cases, in the model design stages, are used to detect faults before implementation. This early detection offers more flexibility to correct faults in early stages rather than later ones. Producing tests that cover both the static and dynamic specifications of the software system model is one of the challenges in software testing. The static and dynamic specifications can be represented efficiently by Unified Modelling Language (UML) class diagrams and sequence diagrams. The work in this thesis shows that High Level Petri Nets (HLPN) can represent both of them in one model. Using a proper model to represent the software specifications is essential for generating proper test cases. The research presented in this thesis introduces novel and automated test case generation techniques that can be used within software system design testing. Furthermore, this research introduces an efficient automated technique to generate a formal software system model (HLPN) from semi-formal models (UML diagrams). The work in this thesis consists of four stages: (1) generating test cases from the class diagram and Object Constraint Language (OCL) that can be used for testing the software system's static specifications (the structure); (2) combining the class diagram, sequence diagram, and OCL to generate test cases able to cover both static and dynamic specifications; (3) generating HLPN automatically from single or multiple sequence diagrams; (4) generating test cases from HLPN. The test cases generated in this work cover the structural and behavioural aspects of the software system model. In the first two phases of this work, the class diagram and sequence diagram are decomposed into nodes (edges) which are linked by a Classes Hierarchy Table (CHu) and an Edges Relationships Table (ERT). The linking process is based on the class and edge relationships. The relationships of the software system components are controlled by a consistency checking technique, and the detection of these relationships has been automated. The test cases were generated based on these interrelationships. These test cases have been reduced to a minimum number and the best test case has been selected at every stage. The degree of similarity between test cases is used to discard similar test cases in order to avoid redundancy. The transformation from UML sequence diagram(s) to HLPN simplifies the software system model and introduces a formal model rather than a semi-formal one. After decomposing the sequence diagram into Combined Fragments, the proposed technique converts each Combined Fragment to the corresponding block in HLPN. These blocks are connected together in a Combined Fragments Net (CFN) to construct the HLPN model. Experiments with the proposed techniques show their effectiveness in covering most of the software system specifications.
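The similarity-based reduction of test cases mentioned above can be illustrated with a small, generic sketch (not the thesis's exact algorithm): test cases are modeled as sets of steps, pairwise similarity is measured with a Jaccard index (an assumed choice of measure), and a case is discarded if it is too similar to one already kept.

```python
def jaccard(a, b):
    """Similarity between two test cases modeled as sets of steps."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

def reduce_test_cases(test_cases, threshold=0.7):
    """Keep a test case only if it is not too similar to any already kept."""
    kept = []
    for tc in test_cases:
        if all(jaccard(tc, k) < threshold for k in kept):
            kept.append(tc)
    return kept

# Hypothetical test cases written as sequences of message steps.
tcs = [
    ["login", "addItem", "checkout"],
    ["login", "addItem", "checkout", "logout"],   # near-duplicate (0.75)
    ["login", "removeItem", "logout"],
]
print(reduce_test_cases(tcs))   # drops the near-duplicate second case
```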
215

Evaluation of FPGA Partial Reconfiguration: for real-time Vision applications

Guo, Guanghao January 2020 (has links)
The usage of programmable logic resources in Field Programmable Gate Arrays, also known as FPGAs, has increased considerably in recent years due to the complexity of the algorithms involved, especially certain computer vision algorithms. For this reason, the hardware resources in the FPGA are sometimes not sufficient. Partial reconfiguration offers a way to solve this problem. Partial reconfiguration is a technique that can be used to reconfigure specific parts of the FPGA during run-time. By using this technique, we can reduce the need for programmable logic resources. This master's thesis project aims to design a software framework for partial reconfiguration that can load a set of processing components/algorithms (e.g. object detection, optical flow, Harris corner detection, etc.) into the FPGA area without affecting continuously running static real-time components such as camera capture, basic image filtering, and colour conversion. Partial reconfiguration has been applied to two different video processing pipelines, a direct streaming architecture and a frame buffer streaming architecture, respectively. The results show that the reconfiguration time is predictable and depends on the partial bitstream size, and that partial reconfiguration can be used in real-time applications, provided the partial bitstream size and the frequency of switching partial bitstreams are taken into account. / The use of programmable logic resources in Field Programmable Gate Arrays, also known as FPGAs, has increased considerably recently due to the complexity of the algorithms, especially certain computer vision algorithms. Because of this, the hardware resources in the FPGA are sometimes not sufficient. Partial reconfiguration gives us the possibility to solve this problem. Partial reconfiguration is a technique that can be used to reconfigure specific parts of the FPGA at run-time. By using this technique, we can reduce the need for programmable logic resources. This master's thesis project aims to design a software framework for partial reconfiguration that can load a set of processing components/algorithms (e.g. object detection, optical flow, Harris corner detection, etc.) into the FPGA area without affecting continuously running static real-time components such as camera capture, basic image filtering, and colour conversion. Partial reconfiguration has been applied to two different video processing pipelines, a direct streaming architecture and a frame buffer streaming architecture, respectively. The results show that the reconfiguration time is predictable and that partial reconfiguration can be used in real-time applications.
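From the software side, the relationship between partial bitstream size and reconfiguration time reported above could be measured with a sketch like the following; it assumes a Linux FPGA manager exposed under /sys/class/fpga_manager, partial bitstreams already placed in /lib/firmware, and root privileges, and the file names are hypothetical rather than taken from the thesis.

```python
import os
import time

FPGA_MGR = "/sys/class/fpga_manager/fpga0"   # assumed FPGA manager node
FIRMWARE_DIR = "/lib/firmware"

def load_partial(bitstream_name):
    """Trigger partial reconfiguration and return the elapsed time.

    Assumes the kernel's FPGA manager accepts the bitstream through the
    sysfs firmware attribute and that the write blocks until programming
    has finished.
    """
    with open(os.path.join(FPGA_MGR, "flags"), "w") as f:
        f.write("1")                          # request partial reconfiguration
    start = time.perf_counter()
    with open(os.path.join(FPGA_MGR, "firmware"), "w") as f:
        f.write(bitstream_name)
    return time.perf_counter() - start

if __name__ == "__main__":
    # Hypothetical partial bitstreams of different sizes.
    for name in ["optical_flow.bin", "harris_corner.bin"]:
        size = os.path.getsize(os.path.join(FIRMWARE_DIR, name))
        t = load_partial(name)
        print(f"{name}: {size} bytes reconfigured in {t * 1000:.1f} ms")
```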
216

Develop a Graphical User Interface for the assembler for SiLago Platform / Utveckla ett grafiskt användargränssnitt för assemblern för SiLago Platform

Wang, Yuxuan January 2023 (has links)
Vesyla-II is developed as the High-Level Synthesis (HLS) tool serving the SiLago platform. The assembler Manas is part of the Coarse Grain Reconfigurable Architecture (CGRA) compiler in Vesyla-II, which is used to transform information from source code into the target language. A group of graphical Intermediate Representations (IRs) associated with the instruction set of the Dynamically Reconfigurable Resource Array (DRRA) plays an important role in this transformation. Manual effort is required to optimize the result of the transformation by configuring the DRRA instructions and organizing the relationships among them. In this thesis project, we provide a graphical user interface, ManasUI, to assist Vesyla-II developers in operating on the transformation graphically. We follow the normal procedure of software development and start by working out the requirements of ManasUI based on background knowledge of the graphical IRs applied in Vesyla-II. We then design and implement the prototype of ManasUI as a web application. ManasUI takes the DRRA instruction set or one of the IRs, the Hierarchical Multi-Thread Dependency Graph (HMTDG), as input and graphically represents the concepts of node, hierarchy, and dependency relationships in the HMTDG. The canvas component in ManasUI provides a handy interface for users to interact with the HMTDG graph directly. The functionality, scalability, and portability of ManasUI are verified by running a group of test cases designed to cover all user scenarios. The test results show that the ManasUI prototype meets all the requirements that we have identified. / Vesyla-II is developed as an HLS tool for the SiLago platform. The assembler Manas is part of the CGRA compiler in Vesyla-II, which is used to transform information from source code into the specific target language. A group of IRs associated with the instruction set of the DRRA plays an important role in this transformation. Manual effort is required to optimize the result of the transformation by configuring the DRRA instructions and organizing the relationships among them. In this degree project, we created a graphical user interface, ManasUI, to help Vesyla-II developers handle the transformation of source code graphically. We follow the normal approach for software development and begin by documenting and working out the requirements for ManasUI based on background knowledge of the graphical IRs applied in Vesyla-II. We then design and implement the ManasUI prototype based on a web application solution. ManasUI takes the DRRA instruction set or one of the IRs, the HMTDG, as input and graphically represents the concepts of node, hierarchy, and dependency relationships in the HMTDG. The canvas component in ManasUI provides a practical interface for users to interact directly with the HMTDG graph. The functionality, scalability, and portability of ManasUI are verified by running a group of test cases designed to cover all user scenarios. The test results show that the ManasUI prototype meets all the requirements we have identified.
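Since the internal structure of the HMTDG is not described in this abstract, the sketch below is purely hypothetical: it only illustrates the three concepts the interface visualizes (nodes, hierarchy, and dependency relationships) with a small Python stand-in and a topological ordering of the dependencies.

```python
from collections import defaultdict
from graphlib import TopologicalSorter   # Python 3.9+

class HMTDGSketch:
    """Hypothetical stand-in for a hierarchical dependency graph:
    nodes, a parent/child hierarchy, and dependency edges."""

    def __init__(self):
        self.children = defaultdict(list)   # hierarchy: parent -> children
        self.deps = defaultdict(set)        # dependencies: node -> prerequisites

    def add_node(self, node, parent=None):
        if parent is not None:
            self.children[parent].append(node)

    def add_dependency(self, node, depends_on):
        self.deps[node].add(depends_on)

    def execution_order(self):
        """One valid order that respects all dependency edges."""
        return list(TopologicalSorter(self.deps).static_order())

g = HMTDGSketch()
g.add_node("thread0")
g.add_node("op_load", parent="thread0")
g.add_node("op_mac", parent="thread0")
g.add_dependency("op_mac", "op_load")     # op_mac waits for op_load
print(g.execution_order())                # ['op_load', 'op_mac']
```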
217

Development of Parallel Architectures for Radar/Video Signal Processing Applications

Jarrah, Amin January 2014 (has links)
No description available.
218

High-level Planning for Multi-agent System using a Sampling-based method

Feng Yu, Yan, Wang, Ziming January 2020 (has links)
One of the main focuses of robotics is to integrate robotic tasks and motion planning, which has gained increased significance due to the growing number of application fields in transportation, navigation, warehouse management, and much more. A crucial step in this direction is to have robots automatically plan their trajectories to accomplish the given task. In this project, a multi-layered approach was implemented to accomplish this. Our framework consists of a discrete high-level planning layer that is designed for planning, and a continuous low-level search layer that uses a sampling-based method for the trajectory search. The layers interact with each other during the search for a solution. In order to coordinate the multi-agent system, velocity tuning is used to avoid collisions, and different priorities are assigned to each robot to avoid deadlocks. As a result, the framework trades off completeness for efficiency. The main aim of this project is to study and learn about high-level motion planning and multi-agent systems, as an introduction to robotics and computer science. / An important aspect of robotics is integrating robot tasks with motion planning, which is of growing importance to society because of its application areas in, for example, transportation, navigation, and warehouse management. A crucial step towards this is to have the robots automatically plan their paths to carry out the given tasks. In this project, a multi-layered method was implemented to achieve this. The method consists of a discrete high-level planning layer designed for planning, and a continuous low-level search layer that uses sampling-based algorithms to search for a path. The layers interact with each other while the method searches for a desired path that satisfies the mission. To coordinate all the robots, a decoupled approach is used in which the velocities of the different robots are adjusted to avoid collisions, and different priorities are assigned to each robot to avoid deadlocks. Sampling-based algorithms and the decoupled approach are usually more time-efficient but do not guarantee that a solution will be found even if one exists. The purpose of this project is to study and learn about high-level motion planning and multi-agent systems, as an introduction to robotics and computer science. / Bachelor's degree project in electrical engineering 2020, KTH, Stockholm
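A minimal sketch of the decoupled, prioritized coordination described above could look like the following, with paths assumed to come from the low-level search layer and velocity tuning reduced to simple start delays; the coordinates and the delay cap are illustrative assumptions.

```python
def timed(path, delay):
    """Trajectory as one cell per timestep, waiting `delay` steps at the start."""
    return [path[0]] * delay + list(path)

def collides(traj_a, traj_b):
    """True if both robots occupy the same cell at the same timestep
    (swap conflicts are ignored in this sketch)."""
    n = max(len(traj_a), len(traj_b))
    pad = lambda t: t + [t[-1]] * (n - len(t))   # robots park at their goals
    return any(a == b for a, b in zip(pad(traj_a), pad(traj_b)))

def prioritized_schedule(paths):
    """Assign start delays in priority order (paths[0] = highest priority)
    so that no robot meets a higher-priority robot; like the framework
    above, this trades completeness for efficiency."""
    trajectories = []
    for path in paths:
        delay = 0
        while any(collides(timed(path, delay), t) for t in trajectories):
            delay += 1
            if delay > 100:   # incompleteness: give up rather than loop forever
                raise RuntimeError("no conflict-free start delay found")
        trajectories.append(timed(path, delay))
    return trajectories

# Two robots whose shortest paths cross at cell (1, 1) at the same timestep.
path_a = [(0, 1), (1, 1), (2, 1)]    # higher priority
path_b = [(1, 0), (1, 1), (1, 2)]    # gets delayed by one step
for traj in prioritized_schedule([path_a, path_b]):
    print(traj)
```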
219

Specification Decomposition and Formal Behavior Generation in Multi-Robot Systems

Schillinger, Philipp January 2017 (has links)
While autonomous robot systems are becoming increasingly common, their usage is still mostly limited to rather simple tasks. This primarily results from the need to manually program the robots' execution plans. Instead, as shown in this thesis, their behavior can be automatically generated from a given goal specification. This forms the basis for providing formal guarantees regarding optimality and satisfaction of the mission goal specification and creates the opportunity to deploy these robots in increasingly sophisticated scenarios. Well-defined robot capabilities of comparatively low complexity can be developed independently from a specific high-level goal and then, using a behavior planner, be automatically composed to achieve complex goals in a verifiably correct way. Considering multiple robots introduces significant additional planning complexity. Not only do actions need to be planned, but the allocation of parts of the mission to the individual robots also needs to be considered. Classically, either planning and allocation are treated as two independent problems, which requires solving an exponential number of planning problems, or the formulation of a joint team model leads to a product state space between the robots. The resulting exponential complexity prevents most existing approaches from being practically useful in more complex and realistic scenarios. In this thesis, an approach is presented to utilize the interplay of allocation and planning, which avoids the exponential complexity for independently executable parts of the mission specification. Furthermore, an approach is presented to identify these independent parts automatically when only a single goal specification is given for the team. This bears the potential of improving the efficiency of finding an optimal solution and is a significant step towards the application of formal multi-robot behavior planning to real-world problems. The effectiveness of the proposed methods is therefore illustrated in experiments based on an existing office environment and in realistic scenarios. / Even though autonomous robot systems are becoming increasingly common, their use is still mostly limited to rather simple tasks. This is mainly because the robots' execution plans have to be programmed manually. Instead, as shown in this thesis, their behavior can be generated automatically from a given goal specification. This forms the foundation for providing a formal guarantee that the resulting behavior is optimal and that the mission goal specification is satisfied, and it therefore creates the opportunity to use these robots in increasingly sophisticated scenarios. Well-defined robot capabilities of relatively low complexity can be developed independently of a specific high-level goal and then automatically composed using a behavior planner to achieve complex goals in a verifiably correct way. When several robots are involved, significant additional planning complexity is introduced. Not only do actions need to be planned, but the allocation of the different parts of the mission to the individual robots must also be handled. Traditionally, planning and allocation are either regarded as two independent problems, which requires solving an exponential number of planning problems, or the formulation of a joint model for the whole group leads to a product state space between the robots. The resulting exponential complexity prevents most existing methods from being practically useful in more complex and realistic scenarios.
In this thesis, an approach is presented for exploiting the interplay between allocation and planning that avoids exponential complexity for independently executable parts of the mission specification. In addition, an approach is presented for automatically identifying these independent parts when only a single goal specification is given for the team. This has the potential to improve the efficiency of finding an optimal solution and is an important step towards applying formal multi-robot behavior planning to realistic problems. The effectiveness of the proposed methods is therefore illustrated in experiments based on an existing office environment and in realistic scenarios.
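The interplay of allocation and planning can be illustrated with a toy sketch (not the thesis's method): for independently executable sub-tasks, one planning call per robot-task pair is enough, and the allocation is then solved on top of the resulting cost matrix instead of re-planning for every possible allocation. The robot and task positions and the straight-line cost function below are made up for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical positions of 3 robots and 3 independently executable sub-tasks.
robots = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
tasks = np.array([[4.0, 4.0], [1.0, 0.0], [0.0, 6.0]])

def plan_cost(robot, task):
    """Stand-in for a single-robot planner: here just straight-line distance."""
    return float(np.linalg.norm(robot - task))

# One planning call per (robot, task) pair: |R| * |T| calls in total ...
costs = np.array([[plan_cost(r, t) for t in tasks] for r in robots])

# ... and the allocation is solved on top of those costs, instead of
# re-planning for every one of the exponentially many allocations.
rows, cols = linear_sum_assignment(costs)
for r, t in zip(rows, cols):
    print(f"robot {r} -> task {t} (cost {costs[r, t]:.2f})")
```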
220

A High-Level Interface for Accelerating Spiking Neural Networks on the Edge with Heterogeneous Hardware : Enabling Rapid Prototyping of Training Algorithms and Topologies on Field-Programmable Gate Arrays

Eidlitz Rivera, Kaspar Oscarsson January 2024 (has links)
With the increasing use of machine learning by devices at the network's edge, a trend of moving computation from data centers to these devices is emerging. This shift imposes strict energy requirements on the algorithms used and the hardware on which they are implemented. Neuromorphic spiking neural networks (SNNs) and heterogeneous systems on a chip (SoCs) are showing great potential for energy-efficient computing on the edge. This thesis describes the development of a high-level interface for accelerating SNNs on an FPGA–CPU SoC. The system is based on an existing open-source, low-level implementation, adapting it for a research-focused Python front-end. The developed interface provides a productive environment for exploring and evaluating SNN algorithms and topologies through compatibility with industry-standard tools for numerical computing, data analysis, and visualization, while still taking full advantage of FPGA-based hardware acceleration. The system is evaluated and showcased by analyzing the training of a small network to solve the XOR problem. As the project matures, future development could enable integration with commonly used machine learning libraries, further increasing its potential.
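As a minimal illustration of the kind of model such an interface exposes (a NumPy sketch with arbitrary parameter values, not the thesis's FPGA implementation), a layer of leaky integrate-and-fire neurons can be simulated as follows; a trained two-layer version of such a model is what an XOR experiment like the one mentioned above would exercise.

```python
import numpy as np

def lif_layer(in_spikes, weights, beta=0.9, threshold=1.0):
    """Simulate a layer of leaky integrate-and-fire (LIF) neurons.

    in_spikes: (timesteps, n_in) binary spike trains
    weights:   (n_in, n_out) synaptic weights
    Returns (timesteps, n_out) binary output spikes.
    """
    n_steps, _ = in_spikes.shape
    n_out = weights.shape[1]
    mem = np.zeros(n_out)                          # membrane potentials
    out = np.zeros((n_steps, n_out))
    for t in range(n_steps):
        mem = beta * mem + in_spikes[t] @ weights  # leak + integrate
        fired = mem >= threshold
        out[t] = fired
        mem[fired] = 0.0                           # reset after a spike
    return out

# Two input neurons firing random spike trains, three output neurons.
rng = np.random.default_rng(0)
spikes_in = (rng.random((100, 2)) < 0.2).astype(float)
w = rng.normal(0.0, 0.5, size=(2, 3))
spikes_out = lif_layer(spikes_in, w)
print("output spike counts:", spikes_out.sum(axis=0))
```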
