SimITK: Visual Programming of the ITK Image Processing Library within Simulink

Dickinson, Andrew William Laird 13 September 2011 (has links)
The Insight Segmentation and Registration Toolkit (ITK) is a long-established image processing library used for image analysis, visualisation, and image-guided surgery applications. ITK is a collection of C++ classes that can pose usability problems for users without appropriate C++ programming experience. In order to remove these programming complexities and facilitate rapid prototyping, an implementation of ITK within a higher-level visual programming environment is presented: SimITK. ITK functionalities are automatically wrapped into "blocks" within MATLAB's visual programming environment, Simulink, where these blocks can be connected to form workflows: visual schematics that closely represent the structure of a C++ program. The heavily templated C++ nature of ITK does not allow direct interaction between Simulink and ITK; an intermediary is required to convert the respective datatypes and allow intercommunication. As such, a SimITK "Virtual Block" has been developed that serves as a wrapper around the ITK class and is responsible for resolving the datatypes used by ITK to the native types used by Simulink. Part of this challenge surrounds the automated capture and storage of the pertinent class information (name, inputs/outputs, acceptable datatypes, and allowed dimensionalities) that needs to be reflected within the final block representation. The WrapITK package, included with ITK, serves to generate initial class representations as complex Extensible Markup Language (XML) files that are consolidated and refined so that the information is organised into an easily accessible structure for extraction during wrapping. Once refined, the data are transferred into custom-written SimITK templates - one template for each required filetype - through a series of custom keyword substitutions that replace special keywords with appropriately retrieved XML information and/or programming code. The primary result of the SimITK wrapping procedure is the generation of multiple Simulink block libraries. From these libraries, blocks are selected and interconnected to demonstrate case-study examples: a 3D segmentation workflow using cranial CT data and a 3D MRI-to-CT registration workflow. Suggestions for future development are included, as well as several appendices containing a list of classes, code comparisons between ITK C++ and SimITK workflows, installation documentation (both user and developer), and example file templates used to integrate ITK within Simulink. / Thesis (Master, Computing) -- Queen's University, 2011-09-13 16:48:53.906
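For context, a SimITK block diagram of this kind stands in for hand-written ITK C++ such as the minimal pipeline sketched below. The sketch is illustrative only: the choice of DiscreteGaussianImageFilter, the 3D float image type, the file arguments and the parameter value are assumptions rather than details taken from the thesis; only the ITK reader/filter/writer classes themselves are real ITK API.

```cpp
// Illustrative only: a hand-written ITK pipeline of the sort that SimITK
// replaces with connected Simulink blocks (reader -> filter -> writer).
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImageFileWriter.h"
#include "itkDiscreteGaussianImageFilter.h"

int main(int argc, char* argv[])
{
  if (argc < 3) return 1;                     // usage: program input.img output.img

  using ImageType = itk::Image<float, 3>;     // 3D image, as in the CT/MRI case studies

  // Each object below corresponds conceptually to one SimITK block;
  // the SetInput/GetOutput calls correspond to the lines drawn between blocks.
  auto reader = itk::ImageFileReader<ImageType>::New();
  reader->SetFileName(argv[1]);

  auto smoother = itk::DiscreteGaussianImageFilter<ImageType, ImageType>::New();
  smoother->SetInput(reader->GetOutput());
  smoother->SetVariance(2.0);                 // a block parameter in SimITK terms (assumed value)

  auto writer = itk::ImageFileWriter<ImageType>::New();
  writer->SetInput(smoother->GetOutput());
  writer->SetFileName(argv[2]);
  writer->Update();                           // demand-driven: pulls data through the pipeline

  return 0;
}
```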

Application of a dataflow programming language to the high-level synthesis of real-time vision systems on reconfigurable hardware

Ahmed, Sameer 24 January 2013 (has links)
Field Programmable Gate Arrays (FPGAs) are reconfigurable devices that can outperform General Purpose Processors (GPPs) for applications exhibiting parallelism. Traditionally, FPGAs are programmed using Hardware Description Languages (HDLs) such as Verilog and VHDL. Using these languages generally yields the best performance, but the programmer must be familiar with digital design. This creates a barrier for the software community and limits the adoption of FPGAs as a computing solution. To make FPGAs accessible to both software and hardware programmers, a number of tools providing higher-level programming environments have been proposed by both academia and industry. A widely used approach is to convert C-like languages to HDLs, making it easier for software programmers to use FPGAs, but these approaches generally do not deliver performance on a par with that obtained from HDLs. The primary reason is the inability of C-like languages to express parallelism. Our claim is that obtaining a high-level programming language for FPGAs without compromising performance requires a shift in programming paradigm, and that the dataflow / actor programming model is a good candidate. This thesis therefore explores the adoption of the dataflow / actor programming model for programming FPGAs. More precisely, we assess the suitability of CAPH, a domain-specific language based on this programming model, for the description and implementation of stream-processing applications on FPGAs. The expressivity of the language and the efficiency of the generated code are assessed experimentally using a set of test-bench applications ranging from very simple applications (basic image filtering) to more complex, realistic applications such as motion detection, Connected Component Labeling (CCL) and JPEG encoding.
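As a rough illustration of the dataflow / actor execution model referred to above, the C++ sketch below models an actor that fires whenever a token is available on its input FIFO. It is not CAPH code (CAPH has its own syntax and generates hardware descriptions); the actor, token values and threshold are invented for the example.

```cpp
// Not CAPH: a plain C++ illustration of the dataflow/actor model -- an actor
// fires when tokens are available on its input FIFO and pushes results onto
// its output FIFO.
#include <cstdint>
#include <iostream>
#include <queue>
#include <vector>

using Token = std::uint8_t;
using Fifo  = std::queue<Token>;

// A stateless "threshold" actor: consumes one token, produces one token.
struct ThresholdActor {
    Token level;
    // Firing rule: the actor can fire only when an input token is available.
    bool fire(Fifo& in, Fifo& out) {
        if (in.empty()) return false;          // not enough tokens to fire
        Token px = in.front();
        in.pop();
        out.push(px > level ? 255 : 0);
        return true;
    }
};

int main() {
    Fifo src, dst;
    const std::vector<Token> pixels = {12, 200, 90, 240, 7};  // incoming pixel stream
    for (Token px : pixels) src.push(px);

    ThresholdActor thr{128};
    while (thr.fire(src, dst)) {}              // run the actor until no tokens remain

    while (!dst.empty()) { std::cout << int(dst.front()) << ' '; dst.pop(); }
    std::cout << '\n';                         // prints: 0 255 0 255 0
    return 0;
}
```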

Design and Implementation of an Audio Codec (AMR-WB) using Dataflow Programming Language CAL in the OpenDF Environment

Ali, Hazem, Patoary, Mohammad Nazrul Ishlam January 2010 (has links)
Over the last three decades, computer architects have been able to achieve an increase in performance for single processors by, e.g., increasing clock speed, introducing cache memories and using instruction-level parallelism. However, because of power-consumption and heat-dissipation constraints, this trend is coming to an end. In recent times, hardware engineers have instead moved to new chip architectures with multiple processor cores on a single chip. With multi-core processors, applications can complete more total work than with one core alone. To take advantage of multi-core processors, we have to develop parallel applications that assign tasks to different cores. On each core, pipeline, data and task parallelization can be used to achieve higher performance. Dataflow programming languages are attractive for achieving parallelism because of their high-level, machine-independent, implicitly parallel notation and because of their fine-grain parallelism. These features are essential for obtaining effective, scalable utilization of multi-core processors.

In this thesis work we have parallelized an existing audio codec - Adaptive Multi-Rate Wideband (AMR-WB) - written in C for a single-core processor. The target platform is a multi-core ARM11 MPCore developer board. The final result of this effort is a working AMR-WB encoder implemented in CAL and running in the OpenDF simulator. The C specification of the AMR-WB encoder was analysed with respect to dataflow and parallelism. The final implementation was developed in the CAL Actor Language, with the goal of exposing available parallelism - different dataflows - as well as removing unwanted data dependencies. Our thesis work discusses mapping techniques and guidelines that we followed and which can be used in any future work on mapping C-based applications to CAL. We also propose solutions for some specific dependencies that were revealed in the AMR-WB encoder analysis and suggest further investigation of possible modifications to the encoder to enable a more efficient implementation on a multi-core target system.
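To illustrate the kind of pipeline parallelism this abstract refers to - independent stages communicating over FIFOs and running on separate cores - the following C++ sketch connects two stand-in stages with a small blocking channel. It is not CAL and not derived from the AMR-WB sources; the Channel class, the stage contents and the frame size are invented for the example.

```cpp
// Illustration only (not CAL, not the AMR-WB code): two codec-like stages run
// as concurrent actors on separate threads, connected by a blocking FIFO.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <optional>
#include <queue>
#include <thread>
#include <vector>

template <typename T>
class Channel {                              // blocking FIFO between two stages
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
    bool closed_ = false;
public:
    void push(T v) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
        cv_.notify_one();
    }
    void close() {
        { std::lock_guard<std::mutex> lk(m_); closed_ = true; }
        cv_.notify_all();
    }
    std::optional<T> pop() {                 // empty optional == stream finished
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return !q_.empty() || closed_; });
        if (q_.empty()) return std::nullopt;
        T v = std::move(q_.front()); q_.pop();
        return v;
    }
};

int main() {
    Channel<std::vector<int>> frames;        // "pre-processing -> encoding" link

    // Stage 1: produce frames (stands in for windowing / pre-processing).
    std::thread producer([&] {
        for (int f = 0; f < 4; ++f) frames.push(std::vector<int>(320, f));  // arbitrary frame size
        frames.close();
    });

    // Stage 2: consume frames (stands in for the encoding stage).
    std::thread consumer([&] {
        while (auto frame = frames.pop())
            std::cout << "encoded frame of " << frame->size() << " samples\n";
    });

    producer.join();
    consumer.join();
    return 0;
}
```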

Visualisation Studio for the analysis of massive datasets

Tucker, Roy Colin January 2016 (has links)
This thesis describes the research underpinning, and the development of, a cross-platform application for the analysis of simultaneously recorded multi-dimensional spike trains. These spike trains are believed to carry the neural code that encodes information in a biological brain. A number of statistical methods already exist to analyse the temporal relationships between the spike trains. Historically, hundreds of spike trains have been recorded simultaneously; as a result of technological advances, however, recording capability has increased, and the analysis of thousands of simultaneously recorded spike trains is now a requirement. Effective analysis of large data sets requires software tools that fully exploit the capabilities of modern research computers and effectively manage and present large quantities of data. To be effective, such software tools must: be targeted at the field under study; be engineered to exploit the full compute power of research computers; and prevent information overload of the researcher despite presenting a large and complex data set. The Visualisation Studio application produced in this thesis brings together the fields of neuroscience, software engineering and information visualisation to produce a software tool that meets these criteria. A visual programming language for neuroscience is produced that allows extensive pre-processing of spike-train data prior to visualisation. The computational challenges of analysing thousands of spike trains are addressed using parallel processing to fully exploit the modern researcher's computer hardware. For the computationally intensive pairwise cross-correlation analysis, the option to use a high-performance compute (HPC) cluster is seamlessly provided. Finally, the principles of information visualisation are applied to key visualisations in neuroscience so that the researcher can effectively manage and visually explore the resulting data sets. The final visualisations can typically represent data sets 10 times larger than previously possible while remaining highly interactive.
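For readers unfamiliar with the analysis named above, the sketch below shows what a pairwise cross-correlation (cross-correlogram) of spike trains involves and how the independent train pairs can be split across hardware threads. It is a generic illustration, not code from Visualisation Studio; the toy spike times, window, bin width and strided thread split are all assumptions made for the example.

```cpp
// Rough sketch: pairwise cross-correlograms between spike trains, with the
// independent train pairs distributed across hardware threads.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <thread>
#include <utility>
#include <vector>

using SpikeTrain = std::vector<double>;      // sorted spike times, in seconds

// Histogram of time lags (t_b - t_a) within +/- window, for one ordered pair.
std::vector<int> crossCorrelogram(const SpikeTrain& a, const SpikeTrain& b,
                                  double window, double binWidth) {
    const int nBins = static_cast<int>(std::ceil(2.0 * window / binWidth));
    std::vector<int> hist(nBins, 0);
    for (double ta : a) {
        // Only spikes of b inside [ta - window, ta + window) can contribute.
        auto lo = std::lower_bound(b.begin(), b.end(), ta - window);
        auto hi = std::lower_bound(b.begin(), b.end(), ta + window);
        for (auto it = lo; it != hi; ++it) {
            int bin = static_cast<int>((*it - ta + window) / binWidth);
            if (bin >= 0 && bin < nBins) ++hist[bin];
        }
    }
    return hist;
}

int main() {
    // Toy data: a few short trains; real recordings hold thousands of trains.
    std::vector<SpikeTrain> trains = {
        {0.01, 0.05, 0.12, 0.30}, {0.02, 0.06, 0.31}, {0.11, 0.29, 0.45}};

    // Enumerate all ordered pairs, then split them across worker threads.
    std::vector<std::pair<int, int>> pairs;
    for (int i = 0; i < static_cast<int>(trains.size()); ++i)
        for (int j = 0; j < static_cast<int>(trains.size()); ++j)
            if (i != j) pairs.emplace_back(i, j);

    std::vector<std::vector<int>> results(pairs.size());
    unsigned nWorkers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    for (unsigned w = 0; w < nWorkers; ++w)
        workers.emplace_back([&, w] {
            for (size_t p = w; p < pairs.size(); p += nWorkers)   // strided split of the pair list
                results[p] = crossCorrelogram(trains[pairs[p].first],
                                              trains[pairs[p].second], 0.05, 0.005);
        });
    for (auto& t : workers) t.join();

    std::cout << "computed " << results.size() << " correlograms\n";
    return 0;
}
```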
