21

Spatial reaction systems on parallel supercomputers

Smith, Mark January 1994 (has links)
A wide variety of physical, chemical and biological systems can be represented as a collection of discrete spatial locations within which some interaction proceeds, and between which reactants diffuse or migrate. Many such real-world spatial reaction systems are known to be both non-linear and stochastic in nature, and thus studies of these systems have generally relied upon analytic approximation and computer simulation. However, this latter approach can become impractical for large, complex systems which require massive computational resources. In this work we analyse a general spatial reaction system in both the deterministic and stochastic scenarios. A study of the deterministic parameter space reveals a new categorisation for system development in terms of its criticality. This result is then coupled with a complete analysis of the linearised stochastic system, in order to provide an understanding of the spatial-temporal covariance structures within reactant distributions. In addition to analysing, and empirically confirming, the various criticality behaviours in both the deterministic and stochastic cases, we use our theoretical results to enable efficient implementation of spatial reaction system simulations on parallel supercomputers. Such novel computing resources are necessary to enable the study of realistic-scale, long-term stochastic activity; however, they are notoriously difficult to exploit. We have therefore developed advanced programming and implementation techniques, concentrating mainly on dynamic load-balancing methodologies, to enable such studies. These techniques make direct use of our analytic results in order to achieve the most efficient exploitation of supercomputing resources, given the particular attributes of the system under study. These new techniques have allowed us to investigate complex individual-based systems on a previously untried scale. In addition, they are of general applicability to a wide range of real-world simulations.
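
As a rough illustration of the kind of system the abstract describes, the sketch below simulates a one-dimensional lattice of sites with stochastic birth, death and nearest-neighbour migration. It is a minimal sketch under assumed rates (birth, death, migrate), not the thesis's model or code; setting the birth and death rates equal puts the toy system at the kind of critical balance the abstract alludes to.

```python
# Minimal stochastic spatial reaction sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def step(counts, birth=0.5, death=0.5, migrate=0.1, dt=0.1):
    """One tau-leap-style update of every lattice site (periodic boundary)."""
    n = counts.astype(float)
    births = rng.poisson(birth * n * dt)
    deaths = np.minimum(rng.poisson(death * n * dt), counts)
    movers = np.minimum(rng.poisson(migrate * n * dt), counts - deaths)
    left = rng.binomial(movers, 0.5)        # half of the migrants step left
    right = movers - left                   # the rest step right
    new = counts + births - deaths - movers
    new += np.roll(left, -1) + np.roll(right, 1)   # arrivals from neighbours
    return new

sites = rng.poisson(20, size=64)            # initial reactant count at each site
for _ in range(1000):
    sites = step(sites)
print("mean occupancy:", sites.mean(), "variance:", sites.var())
```
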
22

Optimisation of partitioned temporal joins

Zurek, Thomas January 1997 (has links)
Joins are the most expensive and performance-critical operations in relational database systems. In this thesis, we investigate processing techniques for joins that are based on a temporal intersection condition. Intuitively, such joins are used whenever one wants to match data from two or more relations that is valid at the same time. This work is divided into two parts. First, we analyse techniques that have been proposed for equi-joins. Some of them have already been adapted for temporal join processing by other authors. However, hash-based and parallel techniques - which are usually the most efficient ones in the context of equi-joins - have received little attention and leave several temporal-specific issues unresolved. Hash-based and parallel techniques are based on explicit symmetric partitioning. In the case of an equi-join condition, partitioning can guarantee that the relations are split into disjoint fragments; in the case of a temporal intersection condition, partitioning usually results in non-disjoint fragments with a large number of tuples being replicated between fragments. This causes a considerable overhead for partitioned temporal join processing. However, we develop an algorithm of polynomial time complexity that computes a partition that minimises the number of tuple replications while creating fragments of limited sizes. In the second, synthetic part of this work, we focus on the conclusions that can be drawn from the results of the first part. We propose and develop an optimisation process that (a) analyses the temporal relations that participate in a temporal join, (b) proposes several possible partitions for these relations, (c) analyses these partitions and predicts their performance implications on the basis of a parameterised cost model and (d) chooses the cheapest partition to process the temporal join. We also show how this process can be efficiently implemented by using a new index structure, called the IP-table. The thesis is concluded by a thorough experimental evaluation of the optimisation process and a chapter that shows the suitability of IP-tables in a wider context of temporal query optimisation, namely using them to estimate selectivities of temporal join conditions.
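
The replication overhead described above can be made concrete with a small sketch: a range-partitioned temporal intersection join in which a tuple is copied into every fragment whose time range its valid interval overlaps. This is an illustrative toy, not the thesis's minimisation algorithm; the relations, breakpoints and tuple format are assumptions.

```python
# Toy range-partitioned temporal intersection join (illustrative only).
def partition(rel, breakpoints):
    """breakpoints = [t0, t1, ..., tk] defines k fragments [t_i, t_{i+1})."""
    frags = [[] for _ in range(len(breakpoints) - 1)]
    for key, s, e in rel:
        for i, (lo, hi) in enumerate(zip(breakpoints, breakpoints[1:])):
            if s < hi and e > lo:           # tuple interval overlaps fragment range
                frags[i].append((key, s, e))    # replicate into this fragment
    return frags

def temporal_join(r, s, breakpoints):
    out = set()                             # set removes duplicates from replication
    for fr, fs in zip(partition(r, breakpoints), partition(s, breakpoints)):
        for kr, rs, re in fr:
            for ks, ss, se in fs:
                if rs < se and ss < re:     # valid intervals intersect
                    out.add((kr, ks, max(rs, ss), min(re, se)))
    return out

R = [("r1", 1, 5), ("r2", 4, 9)]
S = [("s1", 3, 6), ("s2", 8, 12)]
print(temporal_join(R, S, breakpoints=[0, 5, 10, 15]))
```
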
23

Optimisation strategy for semiconductor technology development in a volume manufacturing environment

Redford, Mark January 2000 (has links)
The time to market (TTM) for integrated circuits is key to the success of any technology and the products manufactured using it. Today, it is no longer considered economical to continue to debug a technology once it has been released to manufacturing, as both market share and the development return on investment are impacted. This thesis addresses this by proposing a strategy to optimise a given technology. It uses a combination of Design Of Experiments (DOE), Response Surface Methodology (RSM) and Technology Computer Aided Design (TCAD). The breakdown voltage of an LDMOS technology, which is well known to be challenging to model using designed experiments, is optimised, providing a stringent test of the strategy. Covariance models are employed to improve the model fits to the TCAD data, the justification being that the TCAD results have no random error associated with them. This strategy was successfully applied to three technologies, where results were comparable to measured data from one silicon iteration and saved at least six months in development, as well as to several other projects within the Analog Process Technology Development group at National Semiconductor, Greenock, Scotland.
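
A minimal sketch of the DOE/RSM step is shown below: a three-level, two-factor designed experiment is run against a stand-in tcad() function, a full quadratic response surface is fitted by least squares, and its stationary point is located. The factors, coefficients and the tcad() stand-in are all hypothetical, not taken from the thesis.

```python
# DOE + response-surface fit sketch (illustrative only).
import numpy as np

def tcad(x1, x2):                      # hypothetical noiseless simulator output
    return 40 - 2*(x1 - 0.3)**2 - 3*(x2 + 0.2)**2 + 0.5*x1*x2

# 3-level full factorial design in coded units (-1, 0, +1)
pts = np.array([(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)], dtype=float)
y = np.array([tcad(a, b) for a, b in pts])

# Design matrix for b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
X = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1],
                     pts[:, 0]*pts[:, 1], pts[:, 0]**2, pts[:, 1]**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point of the fitted surface: solve grad f = 0
b = beta[1:3]
B = np.array([[2*beta[4], beta[3]], [beta[3], 2*beta[5]]])
x_opt = np.linalg.solve(B, -b)
print("fitted optimum (coded units):", x_opt)
```
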
24

Computational studies in stellar dynamics

Sweatman, Winston Lemay January 1991 (has links)
Computational studies have been used in conjunction with theoretical approaches to investigate a number of problems in stellar dynamics. These problems have particular relevance to globular star clusters. The investigations began in the area of three-body scattering (i.e. encounters between a binary and a single star), dealing especially with close triple encounters. A prediction was made, using two theoretical approaches, of the probability distribution for the energy of the binary at the end of an encounter, in cases where the energy is either very large or very small. Programs were written to run on the Edinburgh University mainframe computer to provide a numerical test of the theory. To tackle larger problems an N-body code has been developed for the Edinburgh Concurrent Supercomputer, and its performance analysed. The analysis included a brief study of the optimum order of the algorithm used for this code. The program has been used to simulate Plummer model star clusters containing 1024 and 10048 stars. From the results of these simulations, investigations have been made into the problems of Lagrangian radii oscillations and core wandering. The latter is the motion of the densest part of the star cluster, whilst the former concerns the movement of mass towards and away from this position. The approach involved a direct look at the variation in the coordinates, supplemented by computations of autocorrelations and variances.
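
For readers unfamiliar with N-body codes, the sketch below shows the direct-summation kernel such a code is built around: softened pairwise gravitational accelerations computed in O(N²). It is a generic illustration, not the Edinburgh Concurrent Supercomputer code; the softening length and units are assumptions.

```python
# Direct-summation N-body acceleration kernel (illustrative only), G = 1 units.
import numpy as np

def accelerations(pos, mass, eps=0.01):
    """pos: (N,3) positions, mass: (N,) masses, eps: Plummer softening length."""
    dx = pos[None, :, :] - pos[:, None, :]          # pairwise separation vectors
    r2 = (dx**2).sum(axis=2) + eps**2               # softened squared distances
    inv_r3 = r2**-1.5
    np.fill_diagonal(inv_r3, 0.0)                   # exclude self-interaction
    return (dx * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

rng = np.random.default_rng(1)
N = 1024
pos = rng.normal(size=(N, 3))
mass = np.full(N, 1.0 / N)
acc = accelerations(pos, mass)
print("rms acceleration:", np.sqrt((acc**2).sum(axis=1)).mean())
```
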
25

Low bit-rate image sequence coding

Cubiss, Christopher January 1994 (has links)
Digital video, by its very nature, contains vast amounts of data. Indeed, the storage and transmission requirements of digital video frequently far exceed practical storage and transmission capacity. Therefore much research has been dedicated to developing compression algorithms for digital video. This research has recently culminated in the introduction of several standards for image compression. The CCITT H.261 and the Moving Picture Experts Group (MPEG) standards both target full-motion video and are based upon a hybrid architecture which combines motion-compensated prediction with transform coding. Although motion-compensated transform coding has been shown to produce reasonable quality reconstructed images, it has also been shown that as the compression ratio is progressively increased the quality of the reconstructed image rapidly degrades. The reasons for this degradation are twofold: firstly, the transform coder is optimised for encoding real-world images, not prediction errors; and secondly, the motion-estimation and transform-coding algorithms both decompose the image into a regular array of blocks which, as the coding distortion is progressively increased, results in the well-known 'blocking' effect. The regular structure of this coding artifact makes this error particularly disturbing. This research investigates motion estimation and motion-compensated prediction with the aim of characterising the prediction error so that more optimal spatial coding algorithms can be chosen. Motion-compensated prediction was considered in detail. Simple theoretical models of the prediction error were developed and it was shown that, for sufficiently accurate motion estimates, motion-compensated prediction could be considered as a non-ideal spatial band-pass filtering operation. Rate-distortion theory was employed to show that the inverse spectral flatness measure of the prediction error provides a direct indication of the expected coding gain of an optimal hybrid motion-compensated prediction algorithm.
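
A minimal sketch of motion-compensated prediction is given below: exhaustive block matching under a sum-of-absolute-differences criterion, followed by prediction of the current frame from the previous one. Block size, search range and the test frames are illustrative assumptions, not parameters from the thesis.

```python
# Block-matching motion estimation and compensation sketch (illustrative only).
import numpy as np

def motion_compensate(prev, cur, block=8, search=4):
    h, w = cur.shape
    pred = np.zeros_like(cur)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = cur[by:by+block, bx:bx+block].astype(int)
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):           # exhaustive search
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        cand = prev[y:y+block, x:x+block].astype(int)
                        sad = np.abs(cand - target).sum()    # SAD criterion
                        if sad < best_sad:
                            best_sad, best = sad, (dy, dx)
            dy, dx = best
            pred[by:by+block, bx:bx+block] = prev[by+dy:by+dy+block, bx+dx:bx+dx+block]
    return pred

rng = np.random.default_rng(2)
prev = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = np.roll(prev, shift=(2, -1), axis=(0, 1))      # a purely translated frame
pred = motion_compensate(prev, cur)
print("mean absolute prediction error:",
      np.abs(pred.astype(int) - cur.astype(int)).mean())
```
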
26

Computer modelling of agroforestry systems

Anderson, Thomas R. January 1991 (has links)
The potential of agroforestry in the British uplands depends largely on the ability of system components to efficiently use resources for which they compete. A typical system would comprise conifers planted at wide spacing, with sheep grazing pasture beneath. Computer models were developed to investigate the growth of trees and pasture in a British upland agroforest system, assuming that growth is primarily a function of light intercepted. Some of the implications of growing trees at wide spacing compared to conventional spacings, and the impact of trees on the spatial and annual production of pasture, were examined. Competition for environmental resources between trees and pasture was assumed to be exclusively for light: below-ground interactions were ignored. Empirical methods were used to try to predict timber production in agroforest stands based on data for conventional forest stands, and data for widely-spaced radiata pine grown in South Africa. These methods attempted to relate stem volume increment to stand density, age, and derived competition measures. Inadequacy of the data base prevented successful extrapolation of growth trends of British stands, although direct extrapolation of the South African data did permit predictions to be made. A mechanistic individual-tree growth model was developed, both to investigate the mechanisms of tree growth at wide spacings, and to provide an interface for a pasture model to examine pasture growth under the shading conditions imposed by a tree canopy. The process of light interception as influenced by radiation geometry and stand architecture was treated in detail. Other features given detailed consideration include carbon partitioning, respiration, the dynamics of foliage and crown dimensions, and wood density within tree stems. The predictive ability of the model was considered poor, resulting from inadequate knowledge and data on various aspects of tree growth. The model highlighted the need for further research into the dynamics of crown dimensions, foliage dynamics, carbon partitioning patterns and wood density within stems, and how these are affected by wide spacing. A pasture model was developed to investigate growth beneath the heterogeneous light environment created by an agroforest tree canopy. Pasture growth was closely related to light impinging on the crop, with temperature having only a minor effect. The model highlighted the fact that significant physiological adaptation (increased specific leaf area, decreased carbon partitioning below-ground and changes in the nitrogen cycle) is likely to occur in pasture shaded by a tree canopy.
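
As a small illustration of light-interception modelling of the kind described, the sketch below applies Beer's law, I/I0 = exp(-k * LAI), to estimate how much radiation a tree canopy intercepts and how much reaches the pasture beneath. The extinction coefficient and leaf area index values are assumptions, not results from the thesis.

```python
# Beer's-law canopy light interception sketch (illustrative only).
import math

def intercepted_fraction(lai, k=0.5):
    """Fraction of incoming radiation intercepted by a canopy of given leaf area index."""
    return 1.0 - math.exp(-k * lai)

for lai in (0.5, 1.0, 2.0, 4.0):
    f = intercepted_fraction(lai)
    print(f"LAI {lai:3.1f}: canopy intercepts {f:.0%}, pasture receives {1-f:.0%}")
```
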
27

Capacitance-voltage measurements : an expert system approach

Walls, James Austin January 1990 (has links)
The problems in automating capacitance-voltage (C-V) based process monitoring tests are many and varied. The challenges are twofold: firstly, interpreting numerical and graphical data; and secondly, performing tests in a correct sequence whilst applying the appropriate simplifying assumptions in the data analysis. A progressive series of experiments using connectionist, pattern-recognition and knowledge-based techniques was carried out, culminating in the development of a fully automatic hybrid system for the control of the Hewlett-Packard HP4061 semiconductor test system. This thesis also includes a substantial review of, and guide to, the theory and practice of high-frequency, low-frequency and pulsed C-V, conductance-voltage and capacitance-time measurements. Several novel software packages have been written (CV-ASSIST, CV-EXPLORE), including a rule-based expert system (CV-EXPERT) for the control and opportunistic sequencing of measurements and analyses. The research concludes that the new approaches offer the full power and sensitivity of C-V measurements to the operator without the burden of careful procedure, interpretation of the data, and validation of the algorithms used.
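
The opportunistic-sequencing idea can be illustrated with a minimal forward-chaining rule loop: rules fire when their conditions hold on the current set of facts, and each firing may enable further measurements. This is a generic sketch, not CV-EXPERT; the facts, rules and measurement names are hypothetical.

```python
# Minimal forward-chaining rule loop for measurement sequencing (illustrative only).
facts = {"device_connected": True}

rules = [
    ("run_hf_cv",       lambda f: f.get("device_connected") and "hf_cv" not in f,
                        {"hf_cv": "done"}),
    ("run_quasistatic", lambda f: f.get("hf_cv") == "done" and "lf_cv" not in f,
                        {"lf_cv": "done"}),
    ("run_ct",          lambda f: f.get("lf_cv") == "done" and "ct" not in f,
                        {"ct": "done"}),
]

fired = True
while fired:
    fired = False
    for name, condition, consequences in rules:
        if condition(facts):
            print("firing rule:", name)
            facts.update(consequences)   # assert the rule's consequences
            fired = True
            break                        # re-evaluate all rules after each firing

print("final facts:", facts)
```
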
28

Simulating the lytic-lysogenic switch in bacteriophage λ

Andrews, Robert Michael January 1999 (has links)
A computer-based simulation of the lytic-lysogenic switch in bacteriophage λ has been built. The simulation uses information from the λ DNA sequence, as well as kinetic and thermodynamic data from phage experiments, to model λ growth. This thesis has involved three stages of investigation. (1) Building the model from what we know - our knowledge of λ and its behaviour. (2) Testing the model against what we know - comparing in vivo and in silico studies of λ growth. (3) Using the model to perform λ experiments.
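
As a toy illustration of the switch's core feedback, the sketch below integrates a deterministic model of two mutually repressing regulators (CI and Cro) and shows that different initial conditions settle into different stable states. The rate constants and Hill coefficients are illustrative assumptions; the thesis's simulation is sequence-based and far more detailed.

```python
# Mutual-repression toggle sketch for the lambda switch motif (illustrative only).
import numpy as np

def derivatives(state, alpha=10.0, n=2, gamma=1.0):
    ci, cro = state
    dci = alpha / (1.0 + cro**n) - gamma * ci     # CI production repressed by Cro
    dcro = alpha / (1.0 + ci**n) - gamma * cro    # Cro production repressed by CI
    return np.array([dci, dcro])

def integrate(state, dt=0.01, steps=5000):
    for _ in range(steps):
        state = state + dt * derivatives(state)   # simple explicit Euler step
    return state

print("CI-biased start settles at: ", integrate(np.array([5.0, 0.1])))
print("Cro-biased start settles at:", integrate(np.array([0.1, 5.0])))
```
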
29

A software architecture for modeling and distributing virtual environments

Hawkes, Rycharde January 1996 (has links)
The simulation of a Virtual Environment (VE) is an intensive process which is severely limited if restricted to one machine. Through distribution it is possible to increase the size and accuracy of the simulation, thus permitting multiple users to interact with each other and the VE. Existing distributed VE systems have been designed to target a specific level of distribution. This level is dictated by the geographical distance over which the systems must operate and the communications medium connecting them. The system requirements on a tightly-coupled multiprocessor system are not the same as those of a system operating over a Wide Area Network (WAN). Consequently, the solution for any given level does not scale well to larger or smaller system configurations. VE modelling has its heritage in Computer-Aided Design (CAD) and has evolved unchecked into its present state. As the amount of information required in a VE increases, so the current modelling techniques and tools are put under added stress to cope with the extra load. Most modelling techniques are driven by the structure of the system upon which the model must execute, rather than capturing the structure of the information it should represent. This thesis questions the motives behind VE modelling, examines the problems of distributing a VE and details the various solutions that have been employed. An analysis of the methods used leads to the selection of techniques which may be combined to provide a solution unified over all levels of distribution. The proposed solution is also integrated with and actively supports the modelling process, thus providing a powerful environment for VE designers and participants alike. The architecture of this system is presented complete with a description of a prototype implementation that demonstrates the key aspects. The thesis concludes with an evaluation of the prototype.
30

Efficient critical area extraction for photolithographically defined patterns on ICs

Chia, Mark P. C. January 2002 (has links)
The IC industry is developing at a phenomenal rate, with smaller and denser chips being manufactured. The yield of the fabrication process is one of the key factors that determine the cost of a chip. The pattern transferred onto silicon is not a perfect representation of the mask layout, and for an SRAM cell this results in a difference of 3% between the average number of faults calculated from the mask layout and the aerial image. This thesis investigates methods that can better estimate the yield of an IC during the design phase by efficiently and accurately estimating the critical area (CA) without the need to calculate the aerial image directly. The initial attempt generates an equivalent set of parallel lines from the mask layout which is then used to estimate the CA after pattern transfer. To achieve this, EYE, Depict and WorkBench were integrated with in-house software. Benchmarking on appropriate layouts resulted in estimates within 0.5-2.5% of the aerial image compared with 1.5-3.5% for the mask layout. However, for layouts which did not lend themselves to representation by equivalent parallel lines, this method resulted in estimates that were not as accurate as those obtained using the mask layout. The second approach categorises CA curves into different groups based on physical characteristics of the layout. By identifying which group a curve belongs to, the appropriate mapping can be made to estimate the pattern transfer process. However, due to the large number of track combinations it proved too difficult to reliably classify layouts into an appropriate group. Another method proposed determines a track length and position using a combination of AND and OR operations with shifting algorithms. The limitation of this approach was that it was not robust and only proved to work with certain layout types. The fourth method used a one-dimensional algorithm to categorise layouts. The estimated CA was within 0.2% of the aerial image, compared to 2.2% for the mask layout. The disadvantage of this method is that it can only classify parallel tracks. The next approach built upon the above method and can categorise a layout in two dimensions, without being limited to parallel tracks. A variety of designs were used as benchmarks, and for these layouts this method resulted in estimates that were within 0-10.7% of the aerial image compared with 0.5-13.4% for the mask layout.
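
As a small illustration of what a critical-area calculation involves, the sketch below estimates the average number of shorts between two parallel tracks by integrating a simple critical-area function against a classical 1/x^3 defect size distribution. The track geometry, defect density and distribution are assumptions for illustration, not values from the thesis.

```python
# Critical-area / average-faults sketch for two parallel tracks (illustrative only).
import numpy as np

L = 100.0           # track length (um), assumed
s = 0.5             # track spacing (um), assumed
x0, xmax = 0.1, 10.0          # smallest / largest defect diameter considered (um)
D0 = 0.01           # defect density scale (defects per um^2), assumed

def critical_area(x):
    """Area in which a circular defect of diameter x shorts the two tracks."""
    return np.where(x > s, L * (x - s), 0.0)

def defect_density(x):
    """Classical 1/x^3 defect size distribution, normalised for x >= x0."""
    return 2.0 * D0 * x0**2 / x**3

x = np.linspace(x0, xmax, 10_000)
f = critical_area(x) * defect_density(x)
lam = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))   # trapezoidal rule
print(f"average number of shorts per die: {lam:.4f}")
```
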
