21

Using Workload Characterization to Guide High Performance Graph Processing

Hassan, Mohamed Wasfy Abdelfattah 24 May 2021 (has links)
Graph analytics represent an important application domain widely used in many fields such as web graphs, social networks, and Bayesian networks. The sheer size of graph datasets combined with the irregular nature of the underlying problem poses a significant challenge for the performance, scalability, and power efficiency of graph processing. With the exponential growth of graph datasets, there is an ever-growing need for faster, more power-efficient graph solvers. The computational needs of graph processing can take advantage of FPGAs' power efficiency and customizable architecture paired with CPUs' general-purpose processing power and sophisticated cache policies. CPU-FPGA hybrid systems have the potential to support performant and scalable graph solvers if both devices can work coherently to make up for each other's deficits. This study aims to optimize graph processing on heterogeneous systems through interdisciplinary research that would impact both the graph processing community and the FPGA/heterogeneous computing community. On one hand, this research explores how to harness the computational power of FPGAs and how they can work cooperatively in a CPU-FPGA hybrid system. On the other hand, graph applications have a data-driven execution profile; hence, this study explores how to take advantage of information about graph input properties to optimize the performance of graph solvers. The introduction of High Level Synthesis (HLS) tools made FPGAs accessible to the masses, but they have yet to become performant and efficient, especially in the case of irregular graph applications. Therefore, this dissertation proposes automated frameworks to help integrate FPGAs into mainstream computing. This is achieved by first exploring the optimization space of HLS-FPGA designs, then devising a domain-specific performance model that is used to build an automated framework to guide the optimization process. 
Moreover, the architectural strengths of both CPUs and FPGAs are exploited to maximize graph processing performance via an automated framework for workload distribution on the available hardware resources. / Doctor of Philosophy / Graph processing is a very important application domain, which is emphasized by the fact that many real-world problems can be represented as graph applications. For instance, looking at the internet, web pages can be represented as the graph vertices while hyperlinks between them represent the edges. Analyzing these types of graphs is used for web search engines, ranking websites, and network analysis, among other uses. However, graph processing is computationally demanding and very challenging to optimize. This is due to the irregular nature of graph problems, which can be characterized by frequent indirect memory accesses. Such a memory access pattern depends on the input data and is impossible to predict, which renders CPUs' sophisticated caching policies ineffective. With the rise of heterogeneous computing, which enabled the use of hardware accelerators, a new research area was born that attempts to maximize performance by utilizing the available hardware devices in a heterogeneous ecosystem. This dissertation aims to improve the efficiency of utilizing such heterogeneous systems when targeting graph applications. More specifically, this research focuses on the collaboration of CPUs and FPGAs (Field Programmable Gate Arrays) in a CPU-FPGA hybrid system. Innovative ideas are presented to exploit the strengths of each available device in such a heterogeneous system, as well as to address some of the inherent challenges of graph processing. Automated frameworks are introduced to efficiently utilize the FPGA devices, in addition to distributing and scheduling the workload across multiple devices to maximize the performance of graph applications.
22

Unstructured Finite Element Computations on Configurable Computers

Ramachandran, Karthik 18 August 1998 (has links)
Scientific solutions to physical problems are computationally intensive. With the increasing emphasis on Custom Computing Machines, many physical problems are being solved using configurable computers. The Finite Element Method (FEM) is an efficient way of solving physical problems such as heat equations, stress analysis, and two- and three-dimensional Poisson's equations. This thesis presents the solution of physical problems using the FEM on a configurable platform. The core computational unit in an iterative solution to the FEM, the matrix-by-vector multiplication, is developed in this thesis along with the framework necessary for implementing the FEM solution. The solutions for 2-D and 3-D Poisson's equations are implemented with the use of an adaptive mesh refinement method. The dominant computation in the method is matrix-by-vector multiplication, which is performed on the Wildforce board, a configurable platform. The matrix-by-vector multiplication units developed in this thesis are basic mathematical units implemented on a configurable platform and can be used to accelerate any mathematical solution that involves such an operation. / Master of Science
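The kernel this abstract centers on, matrix-by-vector multiplication, operates on sparse matrices in FEM practice (most entries of a stiffness matrix are zero). As a software illustration of the arithmetic only — not the thesis's hardware design — a minimal sketch using the compressed sparse row (CSR) layout might look like:

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x for a sparse matrix A stored in compressed sparse row form."""
    y = [0.0] * (len(row_ptr) - 1)
    for row in range(len(row_ptr) - 1):
        # Nonzeros of this row live in values[row_ptr[row] : row_ptr[row + 1]].
        for k in range(row_ptr[row], row_ptr[row + 1]):
            y[row] += values[k] * x[col_idx[k]]
    return y

# The 3x3 matrix [[2, 0, 1], [0, 3, 0], [4, 0, 5]] in CSR form:
values  = [2.0, 1.0, 3.0, 4.0, 5.0]   # nonzero entries, row by row
col_idx = [0, 2, 1, 0, 2]             # column index of each nonzero
row_ptr = [0, 2, 3, 5]                # where each row starts in values
print(csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # → [3.0, 3.0, 9.0]
```

Because only the nonzeros are stored and visited, the work per iteration scales with the number of nonzeros rather than the full matrix size, which is what makes iterative FEM solvers feasible on small hardware.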
23

Implementation of a Turbo Decoder on a Configurable Computing Platform

Hess, Jason Richard 22 September 1999 (has links)
Turbo codes are a new class of codes that can achieve exceptional error performance and energy efficiency at low signal-to-noise ratios. Decoding turbo codes is a complicated procedure that often requires custom hardware if it is to be performed at acceptable speeds. Configurable computing machines are able to provide the performance advantages of custom hardware while maintaining the flexibility of general-purpose microprocessors and DSPs. This thesis presents an implementation of a turbo decoder on an FPGA-based configurable computing platform. Portability and flexibility are emphasized in the implementation so that the decoder can be used as part of a configurable software radio. The system presented performs turbo decoding for a variable block size with a variable number of decoding iterations while using only a single FPGA. When six iterations are performed, the decoder operates at an information bit rate greater than 32 kbps. / Master of Science
24

A Genetic Algorithm-Based Place-and-Route Compiler For A Run-time Reconfigurable Computing System

Kahne, Brian C. 14 May 1997 (has links)
Configurable Computing is a technology which attempts to increase computational power by customizing the computational platform to the specific problem at hand. An experimental computing model known as wormhole run-time reconfiguration allows for partial reconfiguration and is highly scalable. In this approach, configuration information and data are grouped together in a computing unit called a stream, which can tunnel through the chip creating a series of interconnected pipelines. The Colt/Stallion project at Virginia Tech implements this computing model in integrated circuits. In order to create applications for this platform, a compiler is needed which can convert a human-readable description of an algorithm into the sequences of configuration information understood by the chip itself. This thesis covers two compilers which perform this task. The first compiler, Tier1, requires a programmer to explicitly describe placement and routing inside of the chip. This could be considered equivalent to an assembler for a traditional microprocessor. The second compiler, Tier2, allows the user to express a problem as a dataflow graph. Actual placing and routing of this graph onto the physical hardware is taken care of through the use of a genetic algorithm. A description of the two languages is presented, followed by example applications. In addition, experimental results are included which examine the behavior of the genetic algorithm and how alterations to various genetic operator probabilities affect performance. / Master of Science
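The shape of a genetic placement algorithm like Tier2's can be sketched in a few lines. The following is a deliberately simplified, mutation-only sketch: candidate placements are permutations of cells on a one-dimensional array, fitness is total wirelength over a toy netlist, and selection is truncation. The encoding, cost function, and parameters are illustrative assumptions, not the Tier2 compiler's actual design (which also uses crossover and targets Colt's 2-D fabric):

```python
import random

def wirelength(placement, nets):
    """Cost of a placement: for each net, the span of its cells' positions."""
    pos = {cell: i for i, cell in enumerate(placement)}
    return sum(max(pos[c] for c in net) - min(pos[c] for c in net) for net in nets)

def mutate(placement):
    """Swap two randomly chosen cells."""
    a, b = random.sample(range(len(placement)), 2)
    child = placement[:]
    child[a], child[b] = child[b], child[a]
    return child

def evolve(cells, nets, pop_size=20, generations=100, seed=0):
    random.seed(seed)
    population = [random.sample(cells, len(cells)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda p: wirelength(p, nets))
        survivors = population[: pop_size // 2]  # truncation selection
        offspring = [mutate(random.choice(survivors))
                     for _ in range(pop_size - len(survivors))]
        population = survivors + offspring
    return min(population, key=lambda p: wirelength(p, nets))

# Four cells connected in a chain; the best 1-D placement keeps them in order.
best = evolve([0, 1, 2, 3], [{0, 1}, {1, 2}, {2, 3}])
print(best, wirelength(best, [{0, 1}, {1, 2}, {2, 3}]))
```

The thesis's experiments on operator probabilities correspond here to tuning how often `mutate` (and, in a fuller GA, crossover) is applied, which trades exploration of new placements against exploitation of good ones.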
25

Matching Genetic Sequences in Distributed Adaptive Computing Systems

Worek, William J. 22 August 2002 (has links)
Distributed adaptive computing systems (ACS) allow developers to design applications using multiple programmable devices. The ACS API, an API created for distributed adaptive computing, gives developers the ability to design scalable ACS systems in a cluster networking environment for large applications. One such application, found in the field of bioinformatics, is the DNA sequence alignment problem. This thesis presents a runtime reconfigurable FPGA implementation of the Smith-Waterman similarity comparison algorithm. Additionally, this thesis presents tools designed for the ACS API that assist developers in creating applications in a heterogeneous distributed adaptive computing environment. / Master of Science
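Smith-Waterman is a standard dynamic-programming method for local sequence alignment, and its regular cell-by-cell recurrence is what makes it a good fit for FPGA pipelines. A minimal software version of the scoring recurrence (score only, no traceback, with linear gap penalties and illustrative weights rather than the thesis's hardware parameters) is:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Best local alignment score between sequences a and b (no traceback)."""
    # H[i][j] is the best score of a local alignment ending at a[i-1], b[j-1];
    # the max with 0 lets an alignment restart anywhere, making it local.
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACACACTA", "AGCACACA"))  # → 12
```

Software runs in O(len(a) × len(b)) time; an FPGA implementation can compute an entire anti-diagonal of H per clock cycle, since all cells on a diagonal are independent.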
26

A COMMERCIAL OFF THE SHELF CONTINUOUSLY TUNABLE HIGH DATA RATE SATELLITE RECEIVER

Varela, Julio, Conrad, Robert 10 1900 (has links)
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California / TSI TelSys, Inc. is in the process of developing a production level, continuously tunable satellite receiver designed to support multiple high data rate, low earth and geostationary orbit missions in the 20 Mbps to 800 Mbps composite QPSK data rate range. This paper will evaluate market demands on satellite receivers and outline receiver design technique as a solution to high rate, multi-mission support.
27

COMPARISON OF VARIABILITY MODELING TECHNIQUES

Akram, Asif, Abbas, Qammer January 2009 (has links)
Variability in complex systems offering a rich set of features is a serious challenge to their users, in terms of both flexibility, with many possible variants for different application contexts, and maintainability. Over a long period of time, much effort has been made to deal with these issues. One effort in this regard is developing and implementing different variability modeling techniques. This thesis presents an explanation of three modeling techniques, named configurable components, feature models, and function-means trees. The main contributions of the research include:
• a comparison of the above-mentioned variability modeling techniques in a systematic way, and
• an attempt to find integration possibilities for these modeling techniques based on literature review, case studies, comparison, discussions, and brainstorming.
The comparison is based on three case studies, each of which is implemented in all three of the above-mentioned modeling techniques, and on a set of generic aspects of these techniques which are further divided into characteristics. At the end, a comprehensive discussion of the comparison is presented, and in the final sections some integration possibilities are proposed on the basis of the case studies, characteristics, commonalities, and experience gained through the implementation of the case studies and the literature review.
28

可動態調整的電子病歷存取控管機制 / A Dynamically Configurable Access Control Mechanism for Electronic Medical Records

許原瑞, Hsu, Yuan Jui Unknown Date (has links)
In healthcare systems, access control is the core of electronic medical record security. To address this issue, our laboratory has previously designed a secure architecture that builds on a recent programming technique, aspect-oriented programming, to provide a declarative approach to EMR access control. This design allows security administrators to control access across the entire system in a systematic way. Under this architecture, however, changing a security rule requires several complicated steps, which makes it inflexible in practice. This study proposes several improvements to the architecture so that security rules can be changed more flexibly, in two main respects. First, for the parameters of security rules, we design a mechanism for flexible modification, so that the entire rule-generation process need not be repeated just to change a parameter. Second, using dynamic loading, we allow access control rules to be written as external Java programs and loaded at run time for evaluation, providing flexibility even for complex rules. We hope this flexible design makes our access control architecture better meet the needs of real-world use. / Maintaining proper access control to Electronic Medical Records (EMR) is essential to protecting patients' privacy. However, the fine-grained and dynamic nature of access control rules for EMR has imposed great challenges on healthcare information system developers. This thesis presents a dynamically configurable access control mechanism for Web-based EMR systems. It is an enhancement of a previous work in which static aspects are employed to enforce fine-grained access control for EMR. Specifically, we provide two additional kinds of dynamic adjustment mechanisms to enhance the static access control aspects, namely dynamic parameters and dynamic constraints. If the scope of dynamic changes is small, dynamic parameters can realize the required changes. Otherwise, dynamic constraints can be used to support replacement of the access control enforcing code while allowing the EMR application to run as usual. Consequently, system administrators have a range of choices with different trade-offs between flexibility and performance, namely fully static aspects, parameterized aspects using dynamic parameters, and fully dynamic aspects using dynamic constraints. We have built a Web-based EMR prototype implementation using AspectJ to demonstrate our approach.
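The dynamic-constraints idea — replacing access-control enforcement logic at run time without restarting the application — can be sketched outside the AspectJ setting as well. The following Python stand-in compiles a rule from source text and loads it on the fly; the rule interface (`check(user, record)`), the record fields, and the policy itself are hypothetical illustrations, not the system's actual API:

```python
import types

# Hypothetical convention: a rule module exposes check(user, record) -> bool.
RULE_SOURCE = """
def check(user, record):
    # Illustrative policy: only the attending physician may access this record.
    return user["role"] == "physician" and user["id"] == record["attending_id"]
"""

def load_rule(source):
    """Compile rule source at run time and return its check function."""
    module = types.ModuleType("access_rule")
    exec(compile(source, "<rule>", "exec"), module.__dict__)
    return module.check

check = load_rule(RULE_SOURCE)
print(check({"role": "physician", "id": 7}, {"attending_id": 7}))  # → True
print(check({"role": "nurse", "id": 7}, {"attending_id": 7}))      # → False
```

Swapping in a new `RULE_SOURCE` and calling `load_rule` again changes the enforced policy while the application keeps running, which is the flexibility/performance trade-off the abstract describes for fully dynamic aspects.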
29

Role based modelling in support of configurable manufacturing system design

Ding, Chenghua January 2010 (has links)
Business environments, in which any modern Manufacturing Enterprise (ME) operates, have grown significantly in complexity and are changing faster than ever before. It follows that designing a flexible manufacturing system to achieve a set of strategic objectives involves making a series of complex decisions over time. Therefore manufacturing industry needs improved knowledge about the likely impacts of making different types of change in MEs, and improved modelling approaches that are capable of providing a systematic way of modelling change impacts in complex business processes prior to risky and costly change implementation projects. An ability to simulate the execution of process instances is also needed to control, animate and monitor simulated flows of multiple products through business processes, and thereby to assess impacts of dynamic distributions and assignments of multiple resource types during any given time period. Furthermore, this kind of modelling capability needs to be integrated into a single modelling framework so as to improve its flexibility and change coordination. Such a modelling capability and framework should help MEs to successfully achieve business process re-engineering, continuous performance development and enterprise re-design. This thesis reports on the development of new modelling constructs and their innovative application when used together with multiple existing modelling approaches. This enables human and technical resource systems to be described, specified and modelled coherently and explicitly. In turn this has been shown to improve the design of flexible, configurable and re-usable manufacturing resource systems, capable of supporting decision making in agile manufacturing systems. A newly conceived and developed Role-Based Modelling Methodology (R-BMM) was proposed during this research study. 
Also the R-BMM was implemented and tested by using it together with three existing modelling approaches, namely (1) extended Enterprise Modelling, (2) dynamic Causal Loop Diagramming and (3) Discrete Event Simulation Modelling (via the Plant Simulation® software). Thereby these three distinct modelling techniques were deployed in a new and coherent way. The new R-BMM approach to modelling manufacturing systems was designed to facilitate: (1) Graphical Representation, (2) Explicit Specification and (3) Implementation Description of resource systems. Essentially the approach enables a match between suitable human and technical resource systems and well-defined models of processes and workflows. Enterprise Modelling is used to explicitly define functional and flexibility competencies that need to be possessed by suitable role holders. Causal Loop Diagramming is used to reason about dependencies between different role attributes. The approach was targeted at the design and application of simulation models that enable relative performance comparisons (such as work throughput, lead-time and process costs) to be made and to show how performance is affected by different role decompositions and resourcing policies. The different modelling techniques are deployed via a stepwise application of the R-BMM approach. Two main case studies were carried out to facilitate methodology testing and methodology development. The chosen case company possessed manufacturing characteristics required to facilitate testing and development, in terms of significant complexity and change with respect to its products and their needed processing structures and resource systems. The first case study was mainly designed to illustrate an application, and the benefits arising from application, of the new modelling approach. This provided both qualitative and quantitative results analysis and evaluation. 
Then, with a view to reflecting on modelling methodology testing and to addressing a wider-scope manufacturing problem, the second case study was designed and applied at a different level of abstraction, to further test and verify the suitability and re-usability of the methodology. Through conceiving the new R-BMM approach to create, analyse and assess the utility of sets of models, this research has proposed and tested enhancements to current means of realising reconfigurable and flexible production systems.
30

Framework to manage labels for e-assessment of diagrams

Jayal, Ambikesh January 2010 (has links)
Automatic marking of coursework has many advantages in terms of resource benefits and consistency. Diagrams are quite common in many domains including computer science, but marking them automatically is a challenging task. There has been previous research to accomplish this, but results to date have been limited. Much of the meaning of a diagram is contained in its labels, and in order to automatically mark diagrams the labels need to be understood. However, the choice of labels used by students in a diagram is largely unrestricted, and the diversity of labels can be a problem during matching. This thesis has measured the extent of the diagram label matching problem and proposed and evaluated a configurable, extensible framework to solve it. A new hybrid syntax matching algorithm, based on multiple existing syntax algorithms, has also been proposed and evaluated. Experiments were conducted on a corpus of coursework which was large-scale, realistic, and representative of UK HEI students. The results show that diagram label matching is a substantial problem and cannot easily be avoided in the e-assessment of diagrams. The results also show that the hybrid approach was better than the three existing syntax algorithms, and that the framework has been effective, but only to a limited extent, and needs to be further refined for the semantic stage. The framework proposed in this thesis is configurable and extensible: it can be extended to include other algorithms and sets of parameters. The framework uses configuration XML, dynamic loading of classes, and two design patterns, namely the strategy and facade design patterns. A software prototype implementation of the framework has been developed in order to evaluate it. Finally, this thesis also contributes the corpus of coursework and an open-source software implementation of the proposed framework. 
Since the framework is configurable and extensible, its software implementation can be extended and used by the research community.
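A hybrid syntax matcher in the spirit described — combining several character- and token-level similarity measures so that no single measure's weakness decides the outcome — might be sketched as follows. The particular measures, their combination by maximum, and the 0.7 threshold are illustrative assumptions, not the thesis's actual algorithm:

```python
import difflib

def token_overlap(a, b):
    """Jaccard overlap of whitespace-separated tokens, case-insensitive."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def char_similarity(a, b):
    """Character-level similarity ratio in [0, 1], case-insensitive."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def labels_match(a, b, threshold=0.7):
    """Treat two labels as equivalent if either measure clears the threshold."""
    return max(char_similarity(a, b), token_overlap(a, b)) >= threshold

print(labels_match("customer order", "Customer Orders"))  # → True
print(labels_match("customer order", "invoice"))          # → False
```

The strategy-pattern structure the thesis mentions corresponds to making each measure a pluggable object behind a common interface, so new algorithms and parameter sets can be added via configuration rather than code changes.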
