61

Accelerating the parsing process with an application specific VLSI RISC processor

McMullin, John Derek, January 1997
This thesis investigates the design, implementation and potential use of specialised hardware to accelerate the recognition and translation of computer programs expressed in a range of computer languages, focusing specifically on the twin processes of parsing and lexical analysis. The research was carried out in two areas: the feasibility of designing a specialised instruction set for a RISC-like processor able to accelerate parsing and lexical analysis, and the physical implementation, in CMOS VLSI technology, of a RISC processor able to execute the designed instruction set. The feasibility of mapping the process of language recognition onto the instruction set of a RISC processor is investigated. This involves an assessment of the suitability of the LL(1) and LALR(1) parsing algorithms, and of the associated lexical-analysis algorithms, as a basis for an appropriate instruction set architecture. The feasibility of an instruction set design that uses fixed-size instructions with variable-size data fields to ensure scalable operation is also investigated, and the software mechanisms used to validate the instruction set architecture are outlined. The practical CMOS implementation of a RISC processor able to execute the new instruction set is investigated, in particular the feasibility of using bit-slice technology to implement a processor with fixed-size instructions but variable-size data paths and address ranges. The combination of a novel instruction set with variable data widths and fabricated devices able to activate semantic actions directly from hardware together forms an original contribution to the field of parsing and lexical analysis.
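The abstract names the LL(1) and LALR(1) algorithms as the basis for the instruction set; for reference, the table-driven LL(1) parsing loop that such hardware would accelerate is sketched below in Python (the grammar, parse table and token names are illustrative assumptions, not taken from the thesis).

```python
# A minimal table-driven LL(1) parser loop -- the software analogue of the
# operation the thesis maps onto specialised RISC instructions.
# Illustrative grammar: E -> T E',  E' -> '+' T E' | epsilon,  T -> 'id'

TABLE = {
    ("E",  "id"): ["T", "E'"],
    ("E'", "+"):  ["+", "T", "E'"],
    ("E'", "$"):  [],                     # E' -> epsilon at end of input
    ("T",  "id"): ["id"],
}
NONTERMINALS = {"E", "E'", "T"}

def ll1_parse(tokens):
    """Return True if `tokens` (ending in '$') derive from start symbol E."""
    stack = ["$", "E"]
    pos = 0
    while stack:
        top = stack.pop()
        tok = tokens[pos]
        if top in NONTERMINALS:
            rule = TABLE.get((top, tok))
            if rule is None:
                return False              # no table entry: syntax error
            stack.extend(reversed(rule))  # push production right-to-left
        else:
            if top != tok:
                return False              # terminal mismatch
            pos += 1                      # consume the token
    return pos == len(tokens)

print(ll1_parse(["id", "+", "id", "$"]))  # True
```

The inner loop is nothing but table lookups, stack pushes and comparisons, which is precisely why it is a plausible target for direct hardware execution.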
62

Load balancing strategies for distributed computer systems

Butt, Wajeeh U. N., January 1993
The study investigates various load balancing strategies to improve the performance of distributed computer systems. A static task-allocation heuristic and a number of dynamic load balancing algorithms are proposed, and their performance is evaluated through simulation. First, for static load balancing, a precedence-constrained scheduling heuristic is defined to allocate task systems with high communication-to-computation ratios effectively onto a given set of processors. Second, the dynamic load balancing algorithms are studied using a queueing-theoretic model. For each algorithm, a different load index is used to estimate the host loads. These estimates feed simple task-placement heuristics that determine the probabilities for transferring tasks between every pair of hosts in the system, and the probabilities determined in this way are used to perform dynamic load balancing in a distributed computer system. These probabilities are later adjusted to include the effects of inter-host communication costs. Finally, network partitioning strategies are proposed to reduce the communication overhead of load balancing algorithms in a large distributed system environment. Several host-grouping strategies are suggested to improve the performance of load balancing algorithms by limiting the exchange of load-information messages to smaller groups of hosts and by restricting the transfer of tasks to distant remote hosts, which involves high communication costs. The effectiveness of the above-mentioned algorithms is evaluated by simulation; the model developed for these simulations can be used in both static and dynamic load balancing environments.
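The abstract does not spell out the task-placement heuristics; the sketch below shows one plausible reading, in which pairwise transfer probabilities are derived from load-index estimates and then discounted by inter-host communication costs (the function, the cost model and all parameter names are illustrative assumptions, not the thesis's definitions).

```python
# Hedged sketch: derive pairwise task-transfer probabilities from host load
# estimates, then scale them down by an assumed communication-cost penalty.

def transfer_probabilities(loads, comm_cost, alpha=1.0):
    """loads[i]: estimated load index of host i.
    comm_cost[i][j]: relative cost of moving a task from host i to host j.
    Returns p[i][j], the probability that host i transfers a task to host j."""
    n = len(loads)
    p = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # Only hosts less loaded than i are candidate receivers.
        weights = []
        for j in range(n):
            if j != i and loads[j] < loads[i]:
                gain = loads[i] - loads[j]           # expected load-balancing benefit
                penalty = 1.0 + alpha * comm_cost[i][j]
                weights.append((j, gain / penalty))  # cost-adjusted benefit
        total = sum(w for _, w in weights)
        for j, w in weights:
            p[i][j] = w / total if total > 0 else 0.0
    return p

loads = [8.0, 3.0, 1.0]
cost = [[0, 1, 4], [1, 0, 2], [4, 2, 0]]
print(transfer_probabilities(loads, cost))
```

Setting `alpha` high mimics the thesis's later adjustment step: expensive long-distance transfers become proportionally less likely even when the load gain is large.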
63

Algorithm design and 3D computer graphics rendering

Ewins, Jon Peter, January 2000
3D computer graphics is becoming an almost ubiquitous part of the world in which we live, being present in art, entertainment, advertising, CAD, training and education, scientific visualisation and, with the growth of the internet, in e-commerce and communication. This thesis encompasses two areas of study: the design of algorithms for high quality, real-time 3D computer graphics rendering hardware, and the methodology and means for achieving this. When investigating new algorithms and their implementation in hardware, it is important to have a thorough understanding of their operation, both individually and in the context of an entire architecture. It is helpful to be able to model different algorithmic variations rapidly and experiment with them interchangeably. This thesis begins with a description of software-based modelling techniques for the rapid investigation of algorithms for 3D computer graphics within the context of a C++ prototyping environment. Recent tremendous increases in the rendering performance of graphics hardware have been shadowed by corresponding advancements in the accuracy of the algorithms accelerated. Significantly, these improvements have led to a decline in tolerance towards rendering artefacts. Algorithms for the effective and efficient implementation of high quality texture filtering and edge antialiasing form the focus of the algorithm research described in this thesis. Alternative algorithms for real-time texture filtering are presented in terms of their computational cost and performance, culminating in the design of a low cost implementation for higher quality anisotropic texture filtering. Algorithms for edge antialiasing are reviewed, with the emphasis placed upon area sampling solutions. A modified A-buffer algorithm is presented that uses novel techniques to provide: efficient fragment storage; support for multiple intersecting transparent surfaces; and improved filtering quality through an extendable and weighted filter support from a single highly optimised lookup table.
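The thesis's low-cost anisotropic filter is not reproduced in the abstract; as an illustration of the general technique, the sketch below approximates an elongated texture footprint by averaging several isotropic (trilinear) probes along its major axis, the standard footprint-assembly approach to anisotropic filtering (the probe count rule, the LOD choice and the probe interface are assumptions).

```python
# Hedged sketch of footprint-assembly anisotropic filtering: approximate an
# elongated screen-space footprint by averaging several isotropic probes
# placed along its major axis.
import math

def anisotropic_sample(trilinear_probe, u, v, du, dv, max_probes=8):
    """Average several trilinear probes along the footprint's major axis.
    trilinear_probe(u, v, lod) is assumed to return an (r, g, b) tuple;
    (du, dv) are the footprint extents along u and v, in texels."""
    major = max(abs(du), abs(dv))
    minor = max(min(abs(du), abs(dv)), 1e-6)
    n = min(max(1, round(major / minor)), max_probes)  # probes ~ anisotropy ratio
    lod = math.log2(minor)              # mip level chosen from the *minor* axis
    # Step probe centres along the major axis (u-aligned if du dominates).
    step_u, step_v = (du, 0.0) if abs(du) >= abs(dv) else (0.0, dv)
    acc = (0.0, 0.0, 0.0)
    for i in range(n):
        t = (i + 0.5) / n - 0.5         # spread probes across the footprint
        r, g, b = trilinear_probe(u + t * step_u, v + t * step_v, lod)
        acc = (acc[0] + r, acc[1] + g, acc[2] + b)
    return tuple(c / n for c in acc)

flat = lambda u, v, lod: (0.5, 0.5, 0.5)   # stand-in probe for demonstration
print(anisotropic_sample(flat, 64.0, 64.0, du=4.0, dv=1.0))
```

The cost motivation is visible in the loop: probe count grows with the degree of anisotropy, so low-cost designs concentrate on bounding or cheapening the per-probe work.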
64

Design techniques for enhancing the performance of frame buffer systems

Makris, Alexander, January 1997
The 2D and 3D graphics support for PCs and workstations is becoming a very challenging field. The need to support real-time image generation at ever higher frame rates and resolutions implies that all levels of the graphics generation process must continuously improve. New hardware algorithms need to be devised and existing ones must be optimised for better performance. These algorithms must exploit parallelism in every possible way, and new hardware architectures and memory configurations must accompany them to support this kind of parallelism. This thesis focuses on new hardware techniques, of both architectural and algorithmic nature, to accelerate the 2D and 3D graphics performance of computer systems. Some of these techniques are at the frame buffer access level, where images are stored in the video memory and then displayed on the screen. Others are at the rasterisation level, where the drawing of basic primitives such as lines, triangles and polygons takes place. Novel rasterisation algorithms are devised and compared with traditional ones in terms of hardware complexity and performance, and their basic models have been implemented in VHDL and in other software languages. New frame buffer architectures are introduced and analysed that can improve the overall performance of a graphics system significantly and are compatible with a number of graphics systems in terms of their requirements. Throughout the thesis, special consideration was given to the hardware (e.g. VHDL register-transfer level) implementation of the described architectures and algorithms. Both the software and hardware models and their test environments were implemented in a way that maximises the accuracy of the results, to ensure that an actual hardware implementation would be possible and would produce the same results without any surprises.
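The novel rasterisation algorithms themselves are not given in the abstract; as a reference point for the "traditional ones" they are compared against, a classical incremental line rasteriser (Bresenham's algorithm) is sketched below, since its add-and-compare inner loop is exactly the kind of per-pixel operation rasterisation hardware must execute at speed.

```python
def bresenham_line(x0, y0, x1, y1):
    """Classical incremental line rasteriser: yields the integer pixel
    coordinates from (x0, y0) to (x1, y1) using only additions,
    subtractions and comparisons (works in all octants)."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy                 # running decision variable
    while True:
        yield (x0, y0)
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:              # step along x
            err += dy
            x0 += sx
        if e2 <= dx:              # step along y
            err += dx
            y0 += sy

print(list(bresenham_line(0, 0, 5, 2)))
# [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2)]
```

Because the loop is free of multiplication and division, it maps naturally onto the simple, highly parallel datapaths the thesis targets.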
65

A formal approach to hardware analysis

Traub, Niklas Gerard, January 1986
No description available.
66

A restructuring mechanism for a CODASYL-type data base

Carden, James, January 1983
No description available.
67

Compact and efficient method of RGB to RGBW data conversion for OLED microdisplays

Can, Chi, January 2012
Colour Electronic Information Displays (EIDs) typically consist of pixels that are made up of red, green and blue (RGB) subpixels. A recent technology, the Organic Light Emitting Diode (OLED), offers the potential to create a superior EID. OLED is already suitable for use in small displays and microdisplays for personal electronics products. OLED microdisplays, in particular, exhibit lower power consumption than equivalent direct-view panels, thus enabling microdisplay-based personal display systems such as electronic viewfinders and video glasses to exhibit the longest possible battery life. In many EIDs, the light source is white and colour filters are used, at the expense of much absorbed light, to create the RGB light in the subpixels. Hence the concept has recently emerged of adding a white (W) subpixel to form an RGBW pixel. The advantages can include lower power, higher luminance and, in the case of emissive displays, longer lifetime. One key to realizing the improved performance of RGBW EIDs is a suitable method of data conversion from standard RGB input signal formats to RGBW output signal formats. An OLED microdisplay built on a Complementary Metal–Oxide–Semiconductor (CMOS) active-matrix back-plane exhibits low power consumption. This device architecture also gives the OLED microdisplay the potential to realize the concept of a low-power Display System on a Chip (DSoC). In realizing the performance potential of DSoC on an RGBW OLED microdisplay, there is a trade-off between the system resources used to perform the data conversion and the image quality achieved. A compact and efficient method of RGB-to-RGBW data conversion is introduced to fit the requirement of "minimum system resources with indistinguishable visual side-effect" appropriate to an OLED microdisplay. In this context, the terms "compact" and "efficient" mean that the data conversion functionality (i) is capable of insertion into the signal path, (ii) is capable of integration on the OLED microdisplay back-plane, i.e., is small, and (iii) consumes minimal power. The image quality produced by the algorithm is first simulated on a software platform, followed by an optical analysis of the output of the algorithm implemented on a real-time hardware platform. The optical analysis shows good preservation of colour fidelity in the image on the microdisplay, so the proposed RGB-to-RGBW data conversion algorithm delivers sufficiently high image quality whilst remaining compact and efficient enough to meet the development requirements of the RGBW OLED microdisplay with the DSoC approach.
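The abstract does not disclose the conversion algorithm itself; as a baseline for what RGB-to-RGBW conversion involves, the simplest widely used mapping, which routes the common achromatic component W = min(R, G, B) to the white subpixel, is sketched below. This is a textbook baseline, not necessarily the thesis's compact method, which presumably refines on it.

```python
def rgb_to_rgbw(r, g, b):
    """Baseline RGB-to-RGBW conversion: route the achromatic component of
    the input colour through the W subpixel. Inputs and outputs are 8-bit
    channel values (0-255). A common textbook mapping, not necessarily
    the conversion developed in the thesis."""
    w = min(r, g, b)                 # achromatic (white) component
    return r - w, g - w, b - w, w    # chromatic residue plus white

print(rgb_to_rgbw(200, 150, 100))   # (100, 50, 0, 100)
```

Even this trivial mapping shows why the hardware can be compact: it needs only comparators and subtractors per pixel. Its colour fidelity, however, depends on the W subpixel's white point matching the combined RGB white, and practical conversions must also account for luminance scaling and gamma-domain effects, which is where the image-quality trade-off described in the abstract arises.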
68

The design of protocols for high performance in a networked computing environment

Law, Gary D., January 1989
Technological advances in both local area networks and computer processor design have led to multiple computer installations being composed of a much wider range of network devices than previously possible. High bandwidth computer networks may now interconnect large numbers of devices that have different processor architectures and instruction sets, as well as various levels of performance. This thesis is concerned with the merits of such networks and addresses the problem of how the many different types of computers may be integrated to form a unified system. A review of a number of approaches towards the formation of multiple computer systems includes campus computer networks, configurations of mainframes and examples of distributed computer systems. This study provides an insight into the fundamental principles of this field. The key features of the systems considered in the study are grouped together in a description of a general network structure. Subsequently, the network devices in this structure are classified into three groups, according to their roles and communication requirements. The three-way classification of devices leads to the development of a Triadic Network Model to describe the interactions within and between the three groups. The model's specification of network communication provides the basis for protocols that are well suited to the needs of this computing environment. The thesis covers the principles of the protocols and the details of their implementation in an experimental system. The software tools developed to support the implementation are also described.
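The abstract describes the Triadic Network Model only at the level of a three-way device classification; purely to illustrate how such a classification might surface at the protocol level, a hypothetical message envelope carrying sender and receiver device classes is sketched below (all class names and fields are invented for illustration and do not come from the thesis).

```python
from dataclasses import dataclass
from enum import Enum

class DeviceClass(Enum):
    # Hypothetical three-way grouping in the spirit of the thesis's
    # classification by role and communication requirements.
    WORKSTATION = 1   # interactive end-user machines
    SERVER = 2        # shared compute and storage providers
    GATEWAY = 3       # devices bridging network segments

@dataclass
class Envelope:
    """Hypothetical protocol envelope tagging traffic with the sender's
    and receiver's device classes, so that intra-group and inter-group
    exchanges can be handled by different protocol rules."""
    src_class: DeviceClass
    dst_class: DeviceClass
    payload: bytes

msg = Envelope(DeviceClass.WORKSTATION, DeviceClass.SERVER, b"request")
print(msg.src_class.name, "->", msg.dst_class.name)
```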
69

Transforming imperative programs

Illsley, Martin, January 1988
No description available.
70

Design of a network filing system

McLellan, Paul Michael, January 1981
No description available.
