21

Chronicling process model construction using World Wide Web technology

Andrews, Samuel Ross January 2001 (has links)
In developing and constructing process models, a large amount of data is generated. This needs to be stored as information by providing context (e.g. description and units), such that it may be accessed and understood by both human users and computer tools. To be useful, information sets must be chronicled by recording information such as by whom it was generated, when, and using what tool. Links should also be maintained between related sets of information to document the relationship. The World Wide Web (WWW) facilitates the provision of links between documents. It may, amongst other things, be thought of as a globally addressable file system. Accessed by a graphical tool, the browser, it can provide a common user interface to access and interact with documents across disparate operating systems. Documents on the WWW include hypertext links, which enable one document to link to another. This thesis describes an object-oriented system which makes use of the WWW to chronicle process model construction. The object-oriented paradigm has been used to provide a convenient mechanism for encapsulating data in a structured framework. As well as containing the object data, objects also contain information on who created the object, at what time, etc. This information may be used to generate a browsable history of object creation. Objects are stored as WWW documents, and may be created, viewed and manipulated using standard browsers. An application programmer's interface has been written which enables the information to be manipulated via Fortran programs. Objects have been developed for standard process engineering entities such as streams, mixtures, components, process topologies, etc. Once created, these may be used to generate simulation models in a variety of formats, e.g. ASPEN, spreadsheet models, etc. Conversely, process objects may be generated from ASPEN models. Four case studies have been included, showing various applications of the system. In conclusion, the WWW provides a suitable environment for implementing such a system, due to its ease of use, and the fact that it provides both a user interface and enables remote access to the system.
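For illustration only, a minimal sketch of the kind of provenance-carrying process object the abstract describes, assuming hypothetical class, field and URL names (the thesis itself works with Fortran programs and WWW documents; Python is used here purely as a sketching language):

```python
# Illustrative sketch only: class, field and URL names are assumptions, not the thesis's API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProcessObject:
    """A process-engineering entity plus the chronicle metadata the abstract describes."""
    name: str
    kind: str                      # e.g. "stream", "mixture", "component"
    data: dict                     # values with context, e.g. {"flow": (120.0, "kg/h")}
    created_by: str                # who generated the information
    created_with: str              # which tool generated it
    created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    links: list = field(default_factory=list)   # URLs of related objects

    def to_html(self) -> str:
        """Render the object as a browsable WWW document with hypertext links."""
        rows = "".join(f"<li>{k}: {v[0]} {v[1]}</li>" for k, v in self.data.items())
        refs = "".join(f'<li><a href="{u}">{u}</a></li>' for u in self.links)
        return (f"<html><body><h1>{self.kind}: {self.name}</h1>"
                f"<p>Created by {self.created_by} at {self.created_at} "
                f"using {self.created_with}</p>"
                f"<ul>{rows}</ul><h2>Related objects</h2><ul>{refs}</ul></body></html>")

feed = ProcessObject(
    name="feed-1", kind="stream",
    data={"flow": (120.0, "kg/h"), "temperature": (350.0, "K")},
    created_by="srandrews", created_with="aspen-export",
    links=["http://example.org/objects/mixture-7.html"],
)
print(feed.to_html())
```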
22

Simulation models of shared-memory multiprocessor systems

Coe, Paul January 2000 (has links)
Multiprocessors have often been thought of as the solution to today's ever-increasing computing needs, but they are expensive, complex and difficult to design. This thesis focusses on the development of multiprocessor simulations that would aid the design and evaluation of such systems. The thesis starts by outlining the various possibilities for multiprocessor design and discusses some of the more common problems that must be addressed. A selection of simulation environments and models that have been developed to study complex computer systems is then described. The major problem with these simulation systems is that they generally focus on a small area of multiprocessor systems design in order to produce fast simulations that generate results quickly; consequently they provide very little flexibility and room for exploration. The aim of this project was to design and implement a flexible multiprocessor model within the HASE simulation environment, enabling the designer to explore a large design space with a minimum of effort, focussing more on flexibility and less on simulation speed. A parameterised simulation model has been developed that presents the designer with many design options with which to experiment. The parameters allow simple alternatives to be explored, for example, different component speeds or bus widths, as well as more complicated features, for example, coherence protocols, synchronisation primitives and architecture configurations. The model was designed in a modular manner that allows new parameter values to be incorporated, as well as new implementations of the various entities. To support this new model, the HASE system was extended to provide better support for multiprocessor modelling.
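As a loose illustration of what such a parameterised model can look like, the sketch below gathers the kinds of design options the abstract mentions (component speeds, bus widths, coherence protocols, synchronisation primitives) into a configuration object; all names and values are assumptions and do not reproduce HASE's actual interface:

```python
# Sketch of a parameterised shared-memory multiprocessor model.
# Parameter names and values are assumptions for illustration; HASE's real
# entity/parameter system is not reproduced here.
from dataclasses import dataclass

@dataclass
class MultiprocessorConfig:
    num_processors: int = 4
    bus_width_bits: int = 64
    cpu_clock_mhz: int = 200
    memory_latency_cycles: int = 20
    coherence_protocol: str = "MESI"       # e.g. "MSI", "MESI", "write-update"
    sync_primitive: str = "test-and-set"   # e.g. "test-and-set", "LL/SC"

    def validate(self) -> None:
        assert self.num_processors > 0
        assert self.coherence_protocol in {"MSI", "MESI", "write-update"}

# Exploring a design space is then just iterating over parameter combinations.
for n in (2, 4, 8):
    for proto in ("MSI", "MESI"):
        cfg = MultiprocessorConfig(num_processors=n, coherence_protocol=proto)
        cfg.validate()
        print(f"simulate: {cfg.num_processors} CPUs, {cfg.coherence_protocol}")
```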
23

The design of an interactive computer system for microelectronic mask making

Eades, John David January 1977 (has links)
No description available.
24

A clustered VLIW architecture based on queue register files

Fernandes, Marcio Merino January 1998 (has links)
Instruction-level parallelism (ILP) is a set of hardware and software techniques that allow parallel execution of machine operations. Superscalar architectures rely most heavily upon hardware schemes to identify parallelism among operations. Although successful in terms of performance, the hardware complexity involved might limit the scalability of this model. VLIW architectures use a different approach to exploit ILP. In this case all data dependence analyses and scheduling of operations are performed at compile time, resulting in a simpler hardware organization. This allows the inclusion of a larger number of functional units (FUs) into a single chip. In spite of this relative simplification, the scalability of VLIW architectures can be constrained by the size and number of ports of the register file. VLIW machines often use software pipelining techniques to improve the execution of loop structures, which can increase the register pressure. Furthermore, the access time of a register file can be compromised by the number of ports, causing a negative impact on the machine cycle time. For these reasons, the benefits of having many parallel FUs may be undermined, which has motivated the investigation of alternative machine designs. This thesis presents a scalable VLIW architecture comprising clusters of FUs and private register files. Register files organised as queue structures are used as a mechanism for inter-cluster communication, allowing the enforcement of fixed latency in the process. This scheme presents better possibilities in terms of scalability as the size of the individual register files is not determined by the total number of FUs, suggesting that the silicon area may grow only linearly with respect to the total number of FUs. However, the effectiveness of such an organization depends on the efficiency of the code partitioning strategy. We have developed an algorithm for a clustered VLIW architecture integrating both software pipelining and code partitioning in a single procedure. Experimental results show it may allow performance levels close to an unclustered machine without communication restraints. Finally, we have developed silicon area and cycle time models to quantify the scalability of performance and cost for this class of architecture.
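A minimal sketch of the inter-cluster communication idea: a fixed-capacity queue register file through which a producing cluster forwards values to a consuming cluster after a fixed latency. The interface, depth and latency below are assumptions for illustration, not the thesis's design:

```python
# Illustration of a queue register file used for inter-cluster communication.
# The interface below is an assumption for the sketch, not the thesis's design.
from collections import deque

class QueueRegisterFile:
    """FIFO of (value, ready_cycle) pairs; values become visible to the
    consuming cluster only after a fixed forwarding latency."""
    def __init__(self, depth: int, latency: int):
        self.depth = depth
        self.latency = latency
        self.fifo = deque()

    def enqueue(self, value, cycle: int) -> None:
        if len(self.fifo) >= self.depth:
            raise RuntimeError("queue full: producer cluster must stall")
        self.fifo.append((value, cycle + self.latency))

    def dequeue(self, cycle: int):
        if not self.fifo or self.fifo[0][1] > cycle:
            return None                      # nothing ready yet at this cycle
        return self.fifo.popleft()[0]

qrf = QueueRegisterFile(depth=4, latency=2)
qrf.enqueue(42, cycle=0)        # cluster 0 produces a value at cycle 0
print(qrf.dequeue(cycle=1))     # None: fixed latency not yet elapsed
print(qrf.dequeue(cycle=2))     # 42: cluster 1 consumes it at cycle 2
```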
25

Clustered multithreading for speculative execution

Marukatat, Rangsipan January 2003 (has links)
This thesis introduces the use of hierarchy and clusters in multithreaded execution, which allows several fragments of an application to be specifically optimised and executed by clusters of thread processing units (TPUs) as orchestrated by compile-time analyses. Our multithreaded architecture is a network of homogeneous thread processing units. Additional features were proposed, aiming at dynamic clustering of the TPUs throughout the entire program execution as well as minimum hardware support for speculative execution. The architecture executes a subset of the MIPS instruction set augmented with multithreaded instructions. A multithreaded compilation system was implemented, which focuses on high-level or front-end transformation from sequential C programs to multithreaded ones. Empirical studies were conducted on benchmarks containing two types of program structures: loops and conditional branches. Coarse-grained control speculation enables simultaneous execution of several sub-problems such as loops, each of which could in turn be executed by multiple threads. Strategies were proposed for allocating TPU resources to these sub-problems and evaluated in simulations. Significant speedups were observed in the performance of multithreaded loop execution, and could be further improved by the application of control speculation.
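The sketch below illustrates, in a purely hypothetical way, the flavour of allocating a fixed pool of TPUs among concurrently speculated sub-problems in proportion to their estimated work; the allocation rule is invented and is not one of the thesis's strategies:

```python
# Hypothetical illustration of allocating a fixed pool of thread processing
# units (TPUs) to speculatively executed sub-problems in proportion to their
# estimated work. The policy is an assumption, not the thesis's strategy.
def allocate_tpus(total_tpus: int, estimated_work: dict) -> dict:
    total_work = sum(estimated_work.values())
    shares = {name: max(1, round(total_tpus * w / total_work))
              for name, w in estimated_work.items()}
    # Trim any rounding overshoot from the largest consumers first.
    while sum(shares.values()) > total_tpus:
        biggest = max(shares, key=shares.get)
        shares[biggest] -= 1
    return shares

# Two loops and a conditional branch region speculated in parallel:
print(allocate_tpus(8, {"loop_a": 500, "loop_b": 250, "branch_c": 50}))
```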
26

Study of an adaptive and multifunctional computational behaviour generation model for virtual creatures

Wang, Fang January 2002 (has links)
High fidelity virtual environments can be inhabited by virtual creatures. A virtual creature should be able to learn by itself how to improve its old behaviours and produce new related behaviours so as to be more adaptive and autonomous, and hence reduce human design work. This thesis presents a study of an adaptive and multifunctional Computational Behaviour Generation (CBG) model for virtual creatures with the ultimate goal of enhancing a creature's adaptation and multifunctionality in behaviour control by learning. Specifically, we require that the CBG model can learn to perform variable behaviour tasks in various environments and situations. The design of the CBG model is inspired by the natural behaviour control system in the brain. It can perform the whole procedure of decision, programming and execution of motor actions, and its hierarchical architecture provides the material basis for its adaptive and multifunctional learning implementation. The concrete achievement of adaptation and multifunctionality by learning is obtained with the help of a Multiagent based Evolutionary Artificial Neural Network with Lifetime Learning (MENL), which can learn to make correct action decisions for varied behaviour tasks in varied situations. MENL takes advantage of the whole population information of evolution by maintaining a batch of multiagents in every evolutionary generation. These agents jointly make the action decisions to be executed, and they are subject to evolutionary learning throughout their lifetime. The fitness function of MENL is designed without many specific constraints, and can be easily extended for a variety of behaviours. As a consequence, the CBG with MENL can obtain high adaptation and generalization in behaviour. The CBG model combined with the MENL learning algorithm enables a virtual creature to learn several general navigation functions independently and jointly in unknown environments. These functions include exploration, goal reaching, and wandering. The virtual creature is first asked to learn general exploration only, in a series of increasingly complex environments. This creature has adapted to various environments and navigated in them successfully. The successful exploration experiments are attributable to the competition and emergent cooperation among the multiagents and their continuous lifetime learning.
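A highly simplified, hypothetical sketch of the kind of loop the abstract alludes to: a population of agents is improved during its 'lifetime', then selected and recombined each generation. The toy fitness function and all parameters are assumptions; the thesis's MENL operates on neural-network agents controlling a virtual creature:

```python
# Very simplified, hypothetical sketch of evolution plus lifetime learning.
# The toy fitness function and every parameter are assumptions; the thesis's
# MENL operates on neural-network agents controlling a virtual creature.
import random

def fitness(weights):
    # Toy stand-in for the behavioural performance of an agent.
    return -sum((w - 0.5) ** 2 for w in weights)

def lifetime_learning(weights, steps=5, step_size=0.05):
    # Small local adjustments during the agent's lifetime, keeping improvements.
    best = list(weights)
    for _ in range(steps):
        trial = [w + random.uniform(-step_size, step_size) for w in best]
        if fitness(trial) > fitness(best):
            best = trial
    return best

def evolve(pop_size=20, genome_len=8, generations=30):
    population = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        population = [lifetime_learning(agent) for agent in population]
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = [[w + random.gauss(0, 0.1) for w in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(round(fitness(best), 4))
```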
27

Simulation modelling of complex human policy issues : towards a broad interdisciplinarity

Crane, David C. January 2000 (has links)
Computer simulation models are being used increasingly as decision support tools for policy-making regarding many complex human-related policy issues. This requires a different method of application than that developed for the physical sciences. Starting with a review of current thinking in the philosophy of (physical) science and other literatures dealing with the methodology of economics and with metaphor, the concept of a model is expanded to include the informal assumptions upon which formal structures are founded. Two types of interdisciplinarity are identified: the 'broad', which establishes dialogue between the non-formal foundations, and the 'narrow', which does not. These issues are expanded for the case of systems modelling (including system dynamics, complex systems and quasi-formal systems) to take account of the lack of an objective stance when addressing human-related issues. These ideas are applied to the development of the discipline of economics, and the demarcation between a mainstream and 'ecological' alternative is examined. A 'thick' or rhetorical reading suggests that the division is at best useful in only some circumstances, and hides a number of other important divisions within the field. The broad interdisciplinary method is illustrated by three case studies of policy-relevant models. The ECCO model represents national and regional sustainability options in the biophysical context using conventional system dynamics. The CarteSim model represents changes in spatial land-use patterns at the regional and urban level using complex system dynamics. The IPSO model is a hybrid of ECCO and CarteSim, using complex dynamic representations of the interaction between physical capital and technology options. Taken as a whole, the three case studies comprise a broad interdisciplinary inquiry into the debate regarding the effects of natural capital, human capital and technology on economic growth.
28

Prefetching techniques for client server object-oriented database systems

Knafla, Nils January 1999 (has links)
The performance of many object-oriented database applications suffers from the page fetch latency which is determined by the expense of disk access. In this work we suggest several prefetching techniques to avoid, or at least to reduce, page fetch latency. In practice no prediction technique is perfect and no prefetching technique can reduce the total demand fetch time to zero. Therefore we are interested in the trade-off between the level of accuracy required for obtaining good results in terms of elapsed time reduction and the processing overhead needed to achieve this level of accuracy. If prefetching accuracy is high then the total elapsed time of an application can be reduced significantly; otherwise, if the prefetching accuracy is low, many incorrect pages are prefetched and the extra load on the client, network, server and disks decreases the whole system's performance. Access patterns of object-oriented databases are often complex and usually hard to predict accurately. The main thrust of our work therefore concentrates on analysing the structure of object relationships to obtain knowledge about page reference patterns. We designed a technique, called OSP, which prefetches pages according to a time constraint established by the duration of a page fetch. In addition, every page has an associated weight that determines whether a prefetch is executed. We implemented OSP in the EXODUS storage manager by adding multithreading to the database client. The performance of OSP is evaluated on different machines in interaction with buffer management, distributed databases and other system parameters.
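A loose sketch of the decision rule the abstract outlines: candidate pages predicted from object structure are prefetched only if their weight is high enough and the analysis fits within the time budget set by a demand fetch. The threshold, timing model and names are assumptions, not OSP or EXODUS internals:

```python
# Hypothetical sketch of an OSP-style prefetch decision. The weight threshold
# and timing model are assumptions for illustration, not EXODUS/OSP internals.
def pages_to_prefetch(candidates, fetch_latency_ms, analysis_cost_ms, weight_threshold=0.6):
    """candidates: list of (page_id, weight) pairs predicted from object structure.
    Only consider candidates whose analysis fits inside one demand-fetch latency,
    and only prefetch those whose weight clears the threshold."""
    selected, budget = [], fetch_latency_ms
    for page_id, weight in sorted(candidates, key=lambda c: c[1], reverse=True):
        if budget < analysis_cost_ms:
            break                         # no time left before the demand fetch returns
        budget -= analysis_cost_ms
        if weight >= weight_threshold:
            selected.append(page_id)      # issue prefetch on a background thread
    return selected

preds = [("p17", 0.9), ("p23", 0.7), ("p04", 0.3), ("p91", 0.65)]
print(pages_to_prefetch(preds, fetch_latency_ms=12.0, analysis_cost_ms=3.0))
```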
29

Combining symbolic conflict recognition with Markov chains for fault identification

Smith, Finlay S. January 2002 (has links)
A novel approach is presented in this thesis that exploits uncertain information on the behavioural description of system components to identify possible fault behaviours in physical systems. The result is a diagnostic system that utilises all available evidence at each stage. The approach utilises the standard conflict recognition technique developed in the well-known General Diagnostic Engine framework to support diagnostic inference, through the production of both rewarding and penalising evidence. The penalising evidence is derived from the conflict sets and the rewarding evidence is derived, in a similar way, from two sets of components both combining to predict the same value of a given variable within the system model. The rewarding evidence can be used to increase the possibility of a given component being in the actual fault model, whilst penalising evidence is used to reduce the possibility. Markov matrices are derived from the given evidence, thereby enabling the use of Markov chains in the diagnostic process. Markov chains are used to determine possible next states of a system based only upon the current state. This idea is adapted so that, instead of moving from one state to another, the movement is between different behavioural modes of individual components. The transition probability between states then becomes the possibility of each behaviour being in the next model. Markov matrices are therefore used to revise the beliefs in each of the behaviours of each component. This research has resulted in a technique for identifying candidates for multiple faults that is shown to be very effective. To illustrate the process, electrical circuits consisting of approximately five hundred components are used to show how the technique works on a large scale. The electrical circuits used are drawn from a standard test set.
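A small worked illustration of the belief-revision step: an evidence-derived Markov (stochastic) matrix multiplies the current belief over a component's behavioural modes. The component, its modes and all numbers are invented for the example:

```python
# Toy illustration of revising beliefs over a component's behavioural modes
# with an evidence-derived Markov matrix. All modes and numbers are invented.
def revise(belief, transition):
    """belief: probabilities over modes; transition[i][j]: P(mode j | mode i)."""
    n = len(belief)
    return [sum(belief[i] * transition[i][j] for i in range(n)) for j in range(n)]

modes = ["ok", "stuck-open", "stuck-closed"]
belief = [0.8, 0.1, 0.1]                  # current belief over behaviour modes
# Penalising evidence against "ok" shifts probability towards the fault modes.
transition = [
    [0.6, 0.25, 0.15],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
]
new_belief = revise(belief, transition)
print(dict(zip(modes, (round(p, 3) for p in new_belief))))
```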
30

Process modelling to support software development under the capability maturity model

Yang, Kann-Jang January 1998 (has links)
Until the technique of software components is mature, we believe that the software process is another essential topic for “manufacturing software products”. The steps in the software process must be defined very precisely and carefully. Process-centred Software Engineering Environments (PSEEs) are viewed by many as a way to assist developers in the execution of their work. Research has produced a variety of PSEEs providing support for management and technical activities. However, no consensus has formed on the issue of which process is the most appropriate one. The Capability Maturity Model for Software (CMM), developed by the Software Engineering Institute at Carnegie Mellon University, provides software organisations with guidance on how to gain control of their processes for developing and maintaining software. For the last few years, some organisations have successfully improved their software process maturity by using the CMM. This research builds a PSEE, called SPI (Software Process Improvement) PASTA, that models the CMM by using the process notation PASTA (Process and Artefact State Machine Transition Abstraction). There are two reasons for doing this research. Firstly, we believe that a PSEE must comply with a framework of continuous process improvement, such as the CMM, in order to get to the right destination. Secondly, in any context in which the CMM is applied, a reasonable interpretation of its practices should be used: the CMM must be appropriately interpreted for projects and software organisations of different sizes. SPI PASTA provides a framework for continuous improvement of the process. This framework complies with a supporting knowledge transfer and implementation services architecture that makes it possible to achieve higher software process maturity.
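As a purely hypothetical illustration of modelling a process artefact as a state machine in the spirit of PASTA, the sketch below defines a few invented states and transitions; it does not reproduce the thesis's model of the CMM:

```python
# Hypothetical sketch of a process artefact modelled as a state machine,
# loosely in the spirit of PASTA. States and transitions are invented.
ALLOWED = {
    ("drafted", "review"):       "under_review",
    ("under_review", "approve"): "approved",
    ("under_review", "reject"):  "drafted",
}

class Artefact:
    def __init__(self, name):
        self.name, self.state = name, "drafted"

    def fire(self, event):
        nxt = ALLOWED.get((self.state, event))
        if nxt is None:
            raise ValueError(f"{event!r} not allowed from state {self.state!r}")
        self.state = nxt
        return self.state

plan = Artefact("project-plan")
print(plan.fire("review"))   # under_review
print(plan.fire("approve"))  # approved
```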
