81

Software composition with templates

Yermashov, Kostyantyn January 2008 (has links)
Software composition systems are systems that concentrate on the composition of components. These systems represent a growing subfield of software engineering. Traditional software composition approaches define components as black-boxes. Black-boxes are characterised by their visible behaviour, but not by a visible structure. They describe what can be done, rather than how it can be done. Essentially, black-boxes are structurally monolithic units that can be composed via provided interfaces. The growing complexity of software systems and dynamically changing requirements on these systems demand better parameterisation of components. State-of-the-art approaches have tried to increase the parameterisation of systems with so-called grey-box components (grey-boxes). These components introduced structural configurability. Grey-boxes could improve the composability, reusability, extensibility and adaptability of software systems. However, there is still a big gap between grey-box approaches and business. We see two main reasons for this. Firstly, the structurally non-monolithic nature of grey-boxes results in a significantly increased number of components and relationships that may form a software system. This makes grey-box approaches more complex and their development more expensive, and there is a lack of tools to decrease this complexity. Secondly, grey-box composition approaches are oriented towards experts with a technical background in programming languages and software architectures. Up to now, state-of-the-art approaches have not addressed the question of their efficient applicability by domain experts with no technical background in programming languages. We consider that the structural visibility of grey-boxes offers a chance to provide better externalisation of business logic, so that even a non-expert in programming languages could design a software system for his/her special domain. In this thesis, we propose a holistic approach, called the Neurath Composition Framework, to compose software systems according to well-defined requirements which have been externalised, giving ownership of the design to the end-user. We show how externalisation of business logic can be achieved using grey-box composition systems augmented with domain-specific visual interfaces. We define our own grey-box composition system based on the Parametric Code Templates component model and the Molecular Operations composition technique. With this composition system, awareness of a design, comprehensive development and the reuse of program code templates can be achieved. Finally, we present a sample implementation that shows the applicability of the composition framework to solving real-life business tasks.
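As a generic illustration of the structural configurability that distinguishes grey-boxes from black-boxes, the short Python sketch below fills the parameters of a simple code template. It is not the thesis's Parametric Code Templates model or Molecular Operations technique; the template, parameter names and generated code are invented purely to make the idea of a component with visible, parameterisable structure concrete.

```python
from string import Template

# A structurally visible "template component": its holes ($collection,
# $condition, $action) are parameters that a composition step fills in.
# Hypothetical illustration only; not the thesis's component model.
loop_template = Template(
    "for item in $collection:\n"
    "    if $condition:\n"
    "        $action\n"
)

# Composing the component by binding its parameters:
code = loop_template.substitute(
    collection="orders",
    condition="item.total > 100",
    action="apply_discount(item)",
)
print(code)
```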
82

Methods for the efficient measurement of phased mission system reliability and component importance

Reed, Sean January 2011 (has links)
An increasing number of systems operate over a number of consecutive time periods, in which their reliability structure and the consequences of failure differ, in order to perform some overall operation. Each distinct time period is known as a phase and the overall operation is known as a phased mission. Generally, a phased mission fails immediately if the system fails at any point and is considered a success only if all phases are completed without failure. The work presented in this thesis provides efficient methods for the prediction and optimisation of phased mission reliability. A number of techniques and methods for the analysis of phased mission reliability have been previously developed. Due to the component and system failure time dependencies introduced by the phases, the computational expense of these methods is high and this limits the size of the systems that can be analysed in reasonable time frames on modern computers. Two importance measures, which provide an index of the influence of each component on the system reliability, have also been previously developed. These are useful for the optimisation of the reliability of a phased mission; however, a much larger number have been developed for non-phased missions, and the different perspectives and functions they provide are advantageous. This thesis introduces new methods, as well as improvements and extensions to existing methods, for the analysis of both non-repairable and repairable systems, with an emphasis on improved efficiency in the derivation of phase and mission reliability. New importance measures for phased missions are also presented, including interpretations of those currently available for non-phased missions. These provide a number of interpretations of component importance, allowing those most suitable in a given context to be employed and thus aiding the optimisation of mission reliability. In addition, extensive computer code has been produced that implements and tests the majority of the newly developed techniques and methods.
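As a rough illustration of the phased-mission success criterion described above (the mission succeeds only if every phase completes without system failure), the following sketch estimates mission reliability by Monte Carlo simulation for non-repairable components. The component set, failure rates, phase durations and phase structure functions are invented for illustration and are not the methods developed in the thesis.

```python
import random
from math import fsum

# Hypothetical three-component system; failure rates (per hour) are illustrative only.
FAILURE_RATES = {"A": 1e-3, "B": 2e-3, "C": 5e-4}

# Phase durations (hours) and one structure function per phase: each function
# returns True if the system works given the set of still-working components.
PHASES = [
    (2.0, lambda up: "A" in up and "B" in up),                 # phase 1: A and B in series
    (5.0, lambda up: "A" in up or "C" in up),                  # phase 2: A, C in parallel
    (1.0, lambda up: "A" in up and ("B" in up or "C" in up)),  # phase 3
]

def mission_succeeds(rng):
    """Simulate one mission with non-repairable components."""
    fail_time = {c: rng.expovariate(rate) for c, rate in FAILURE_RATES.items()}
    elapsed = 0.0
    for duration, phase_works in PHASES:
        elapsed += duration
        # With non-repairable components and coherent structures, the system
        # survives the whole phase iff it still works at the end of the phase.
        up = {c for c, t in fail_time.items() if t > elapsed}
        if not phase_works(up):
            return False
    return True

def estimate_mission_reliability(trials=100_000, seed=1):
    rng = random.Random(seed)
    return fsum(1.0 for _ in range(trials) if mission_succeeds(rng)) / trials

print(f"Estimated mission reliability: {estimate_mission_reliability():.4f}")
```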
83

Theory and design techniques for stored program implementations of sequential systems

Hussain, Ahsan M. A. January 1976 (has links)
The basic principles of sequential switching theory were first developed by Huffman and later generalised by Moore and Mealy. These techniques, and subsequent ones based on them, were mainly concerned with minimising the amount of logical hardware in the form of discrete gate components. One result of this early work was the development, by Hartmanis, of an algebraic structure theory for sequential systems. In recent years, however, the advent of MSI/LSI has changed the fundamental design requirements; new design criteria were thus created and many of the conventional minimisation methods were rendered obsolete. In particular, no systematic techniques exist for designing systems at the sub-system or system level, which MSI and LSI technology requires. In this thesis, using the criterion of minimal total storage requirements for a given sequential switching system, the applicability of Hartmanis's structure theory in conjunction with MSI/LSI modules is examined, and the different possible resulting structures are also examined for their suitability to LSI/MSI realisations. The inter-partition relationships that lead to these structures are studied, and the best possible component sizes within the different possible structures are determined. In this connection, a procedure has been developed which systematically leads to either least-storage or most-uniform component machines. However, since a large class of sequential switching systems either do not decompose into convenient sizes and structures, or require a large amount of redundancy to be introduced in order to make them decompose, alternative realisation techniques which can be used to realise such systems have been developed. These are the State and Input techniques, which resemble in some respects the Ashenhurst-Curtis type of disjunctive decomposition; they are general and result in uniform components. The size and structure of the components can be varied to suit available modules. Above all, these techniques offer a simple and effective method of realising asynchronous systems, requiring no special state assignment. This is done through the use of inertial delays in conjunction with a decoder in the feedback loops of the system.
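To make the minimal-total-storage criterion concrete, the sketch below estimates the ROM storage needed by a table-driven (stored-program) realisation of a state machine before and after a hypothetical parallel decomposition into component machines. The machine sizes are invented and the calculation is a simplification, not the procedure developed in the thesis.

```python
from math import ceil, log2

def rom_bits(states, input_symbols, output_bits):
    """ROM size in bits for a table-driven (stored-program) machine:
    one word per (state, input) pair, each word holding the next-state
    code plus the output bits."""
    state_bits = ceil(log2(states)) if states > 1 else 1
    return states * input_symbols * (state_bits + output_bits)

# Hypothetical monolithic machine: 12 states, 4 input symbols, 2 output bits.
monolithic = rom_bits(states=12, input_symbols=4, output_bits=2)

# Hypothetical parallel decomposition into component machines of 4 and 3 states
# (12 = 4 x 3), each driven only by the external input and producing 1 output bit.
decomposed = rom_bits(4, 4, 1) + rom_bits(3, 4, 1)

print(f"monolithic: {monolithic} bits, decomposed: {decomposed} bits")
# -> monolithic: 288 bits, decomposed: 84 bits (illustrative figures only)
```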
84

Development of a weight-based topological map-matching algorithm and an integrity method for location-based ITS services

Velaga, Nagendra R. January 2010 (has links)
The main objective of this research is to enhance the navigation modules of location-based Intelligent Transport Systems (ITS) by developing a weight-based topological map-matching algorithm and a map-aided integrity monitoring process. Map-matching (MM) algorithms integrate positioning data from positioning sensors with spatial road network data, firstly, to identify the road link on which a vehicle is travelling from a set of candidate links and, secondly, to determine the vehicle's location on that link. A weight-based topological MM algorithm assigns weights to all candidate links based on different criteria, such as the similarity between the vehicle's direction of movement and the link direction, and the nearness of the positioning point to a link. The candidate link with the highest total weighting score is selected as the correct link. This type of map-matching algorithm is very popular due to its simplicity and speed in identifying the correct links. Existing topological map-matching algorithms, however, have a number of limitations: (1) employing a number of thresholds that may not be transferable, (2) assigning arbitrary weighting coefficients to the different weights, (3) not distinguishing among different operational environments (i.e., urban, suburban and rural) when determining the relative importance of the different weights and (4) not taking into account all available data that could enhance the performance of a topological MM algorithm. In this research a novel weight-based topological map-matching algorithm is developed that addresses all of the above limitations. The unique features of this algorithm are: introducing two new weights, on turn restrictions and on connectivity at junctions, to improve the performance of map-matching; developing a more robust and reliable procedure for the initial map-matching process; performing two consistency checks to minimise mismatches; and determining the relative importance of the different weights for specific operational environments using an optimisation technique. Any error associated with the raw positioning data (from positioning sensors), the spatial road network or the MM process can lead to incorrect road link identification and inaccurate vehicle location estimation. Users should be notified when the navigation system performance is not reliable. This is referred to as an integrity monitoring process. In this thesis, a user-level map-aided integrity method is developed that takes into account all error sources associated with the three components of a navigation system; the complexity of the road network is also considered. Errors associated with the spatial road map are given special attention. Two knowledge-based fuzzy inference systems are employed to measure the integrity scale, which provides the level of confidence in the map-matching results. The performance of the new MM algorithm and the integrity method was examined using real-world field data. The results suggest that both the algorithm and the integrity method have the potential to support a wide range of real-time location-based ITS services. The MM algorithm and integrity method developed in this research are simple, fast, efficient and easy to implement. In addition, the accuracy offered by the enhanced MM algorithm is found to be high; it is able to identify the correct links 97.8% of the time with a horizontal accuracy of 9.1 m. This implies that the developed algorithm has high potential to be implemented by industry to support the navigation modules of location-based intelligent transport systems.
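The scoring step of a weight-based topological map-matching algorithm, as described above, can be sketched roughly as follows. The weight functions, coefficient values and candidate-link data are illustrative assumptions only; they are not the weights or optimised coefficients derived in this research.

```python
import math

def heading_weight(vehicle_bearing, link_bearing):
    """Similarity between the vehicle's direction of travel and the link direction."""
    diff = abs(vehicle_bearing - link_bearing) % 360
    diff = min(diff, 360 - diff)
    return math.cos(math.radians(diff)) if diff <= 90 else 0.0

def proximity_weight(distance_m, cutoff_m=50.0):
    """Closeness of the position fix to the link (1 = on the link, 0 = at the cut-off)."""
    return max(0.0, 1.0 - distance_m / cutoff_m)

def total_score(link, fix, coeffs):
    """Total weighting score for one candidate link; the turn-restriction and
    connectivity terms mirror the two new weights mentioned in the abstract,
    but all values and coefficients here are illustrative assumptions."""
    return (coeffs["heading"] * heading_weight(fix["bearing"], link["bearing"])
            + coeffs["proximity"] * proximity_weight(link["distance_to_fix"])
            + coeffs["turn"] * (1.0 if link["turn_permitted"] else 0.0)
            + coeffs["connectivity"] * (1.0 if link["connected_to_previous"] else 0.0))

def match(candidate_links, fix, coeffs):
    """Select the candidate link with the highest total weighting score."""
    return max(candidate_links, key=lambda link: total_score(link, fix, coeffs))

# Made-up coefficients (e.g. for an urban environment) and candidate data.
coeffs = {"heading": 0.35, "proximity": 0.35, "turn": 0.15, "connectivity": 0.15}
fix = {"bearing": 87.0}
candidates = [
    {"id": "L1", "bearing": 90.0, "distance_to_fix": 4.0,
     "turn_permitted": True, "connected_to_previous": True},
    {"id": "L2", "bearing": 180.0, "distance_to_fix": 6.0,
     "turn_permitted": True, "connected_to_previous": False},
]
print(match(candidates, fix, coeffs)["id"])  # -> L1
```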
85

Computerised link analysis system : development and testing of a new link analysis system

Zhao, Yu January 2011 (has links)
Link Analysis (LA) is a popular ergonomics tool used to study and improve the layout of workspaces (Ferreira and Hignett, 2005) and to study abstract relationships, for example between criminals (Harper and Harris, 1975) or ill people (Stuster and Chovil, 1994). It can also be used as a data (event) recording method to capture interactions and assist the understanding of staff behaviour and interactions with their environment (McDonald and Dzwonczyk, 1988). Currently, most researchers still rely on the manual method to perform LA. However, the traditional pen-and-paper method is cumbersome, time consuming and gives limited outputs. Since 1964, when Haygood et al. (1964) succeeded in improving the manual method of LA, researchers have been trying to use computer techniques to enhance the performance and increase the efficiency of LA. However, these methods may also have limitations. To address these limitations a Computerised Link Analysis (CLA) system has been developed. The CLA system is not only a computerised LA application (layout creation, event recording and result generation) which is able to reduce the time and effort compared to the manual method, but is also an integrated task analysis tool incorporating traditional LA functions, basic task analysis functions (recording detailed descriptions of operator activities, as components of Hierarchical Task Analysis) and time-motion functions (recording time stamps of operator activities). Additional outputs that are not part of traditional LA comprise time-event lists (start/end time, duration, chronology, additional notes, importance and operator ID) and processed link diagrams (with link direction and frequency), as well as the conventional LA results of link diagrams and link tables. The CLA system was developed in four phases, including pre-test developments (CLA v1.01-v1.05) and post-test developments (CLA v1.05-v1.12). The pre-test developments are also called the early software developments; in this phase the system was developed according to specifications produced from the limitations of LA and other computer-aided LA methods. Three iterative tests were then designed to test the functions of the system and make sure it performs reliably under both laboratory and real-world conditions. The Technical Validation Test (CLA v1.05-v1.08) was carried out in a laboratory environment, aiming to ensure that the software and hardware for the CLA system worked technically and that the outputs achieved an acceptable level of accuracy (reliability testing and debugging). The Usability Test (CLA v1.08-v1.11) was carried out in a laboratory environment, aiming to test the software and hardware by observing the performance of 12 system operators as they used CLA, collecting user feedback after the test, and identifying possible improvements suggested by users. The Beta Test (CLA v1.11-v1.12) was carried out in a field environment, the Emergency Department (ED) of a UK hospital, aiming to make sure the faults identified in the Technical Validation and Usability Tests had been fixed, to review real-time data recording and analysis abilities, and to identify further improvements from using the CLA system in a complex real-life environment. The results show that CLA improves the traditional method of LA in both efficiency and effectiveness. A major step forward is the additional functionality of the CLA system as an integrated task analysis tool, which is able to collect and process real-time LA, HTA and time-motion data concurrently. This produces enriched data that both allow more detailed investigation of the target environment and lead to new research directions.
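A minimal sketch of the core LA output, a link table counting movements between workspace elements, is shown below. The recorded event sequence and field layout are invented for illustration and do not reflect the CLA system's implementation.

```python
from collections import Counter
from itertools import pairwise  # Python 3.10+

# Hypothetical recorded sequence of locations visited by one operator, in order.
events = ["desk", "printer", "desk", "filing", "desk", "printer", "desk"]

# Undirected link table: frequency of movement between each pair of elements.
link_table = Counter(frozenset(step) for step in pairwise(events))

for pair, frequency in link_table.most_common():
    a, b = sorted(pair)
    print(f"{a} <-> {b}: {frequency}")
# desk <-> printer: 4
# desk <-> filing: 2
```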
86

Transform domain texture synthesis on surfaces

Shet, Rupesh N. January 2008 (has links)
In the recent past, application areas such as virtual reality experiences, digital cinema and computer gaming have resulted in a renewed interest in advanced research topics in computer graphics. Although many research challenges in computer graphics have been met due to worldwide efforts, many more are yet to be met. Two key challenges which still remain open research problems are the lack of perfect realism in animated or virtually created objects when represented in graphical form, and the need for the transmission, storage and exchange of a massive amount of information between remote locations when 3D computer-generated objects are used in remote visualisations. These challenges call for further research focused in the above directions. Though a significant number of ideas have been proposed by the international research community in their effort to meet the above challenges, the ideas still suffer from complexity-related issues, resulting in high processing times and practical inapplicability when bandwidth-constrained transmission media are used or when the storage space or computational power of the display device is limited. In the proposed work we investigate the appropriate use of geometric representations of 3D structure (e.g. Bezier surfaces, NURBS, polygons) and multi-resolution, progressive representation of texture on such surfaces. This joint approach to texture synthesis has not been considered before and has significant potential in resolving current challenges in the virtual realism, digital cinema and computer gaming industries. The main focus of the novel approaches proposed in this thesis is performing photo-realistic texture synthesis on surfaces. We provide experimental results and detailed analysis to show that the proposed algorithms allow fast, progressive building of texture on arbitrarily shaped 3D surfaces. In particular, we investigate the above ideas in association with the Bezier patch representation of 3D objects, an approach which has not been considered so far by any published worldwide research effort, yet which offers flexibility of the utmost practical importance. Further, we discuss the novel application domains that can be served by the inclusion of additional functionality within the proposed algorithms.
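To make the Bezier patch representation mentioned above concrete, the sketch below evaluates a point on a Bezier patch from its control points using Bernstein polynomials. The control points are invented, and the texture synthesis algorithms themselves are not reproduced here.

```python
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * (t ** i) * ((1 - t) ** (n - i))

def bezier_patch_point(control_points, u, v):
    """Evaluate a point on a Bezier patch at parameters (u, v) in [0, 1]^2.

    `control_points` is an (n+1) x (m+1) grid of (x, y, z) tuples.
    """
    n = len(control_points) - 1
    m = len(control_points[0]) - 1
    point = [0.0, 0.0, 0.0]
    for i in range(n + 1):
        for j in range(m + 1):
            w = bernstein(n, i, u) * bernstein(m, j, v)
            for k in range(3):
                point[k] += w * control_points[i][j][k]
    return tuple(point)

# A flat 2x2 illustrative patch (degree 1 x 1); real patches are usually bicubic (4x4).
flat = [[(0, 0, 0), (0, 1, 0)],
        [(1, 0, 0), (1, 1, 0)]]
print(bezier_patch_point(flat, 0.5, 0.5))  # -> (0.5, 0.5, 0.0)
```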
87

Adaptive privacy management for distributed applications

Wu, Maomao January 2007 (has links)
In networked computing environments, it becomes increasingly difficult for ordinary people to manage privacy, i.e., “to determine for themselves when, how, and to what extent information about them is communicated with others”. The thesis argues that achieving better privacy is not about hiding as much personal information as possible but about enabling personal information disclosure at a level of openness that is as close as possible to the user's desired level, to assist him/her in accomplishing useful tasks. Following Palen and Dourish's observation that privacy management is a dialectic and dynamic boundary regulation process [Palen03], the thesis argues that no set of pre-specified static privacy policies can meet users' changing requirements for privacy in networked computing environments; therefore a new approach, adaptive privacy management, is proposed as the process by which a user and/or the system continuously adjusts the system's behaviour in disclosing personal information according to the user's changing desire for openness. In this thesis, we propose a set of requirements for adaptive privacy management and the design and implementation of a middleware that meets these requirements for the target domain of applications that enable intentional sharing of personal information in networked computing environments. The middleware facilitates the creation of adaptive privacy-aware applications that allow users, or the system on their behalf, to adjust the balance between openness and closedness, leading to an evolution of the users' privacy preferences as a result of ongoing interactions. A prototype adaptive privacy management system was implemented based on this middleware, demonstrating the feasibility of adaptive privacy management for the target domain. Both the principles of adaptive privacy management and the prototype implementation were evaluated based on the results of a detailed user study using a GSM location sharing application constructed on the prototype platform. The study reveals that our core requirements are important for end users, and that our supporting design provided adequate support for the characteristics we propose.
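A toy illustration of adjusting disclosure between openness and closedness is sketched below for a location-sharing scenario. The granularity levels, adjustment rule and data are invented and are not the middleware's actual mechanism.

```python
# Granularity levels from closed to open; the levels and the adjustment rule
# are invented for illustration only.
LEVELS = ["none", "city", "street", "exact"]

class AdaptiveLocationPolicy:
    def __init__(self, level="city"):
        self.index = LEVELS.index(level)

    def disclose(self, location):
        """Return the location at the currently permitted granularity."""
        if LEVELS[self.index] == "none":
            return None
        return {key: location[key] for key in ("city", "street", "exact")[:self.index]}

    def feedback(self, too_open):
        """Nudge the openness level after the user reviews a disclosure."""
        if too_open:
            self.index = max(0, self.index - 1)
        else:
            self.index = min(len(LEVELS) - 1, self.index + 1)

policy = AdaptiveLocationPolicy()
location = {"city": "Lancaster", "street": "Main Street", "exact": (54.01, -2.79)}
print(policy.disclose(location))        # -> {'city': 'Lancaster'}
policy.feedback(too_open=False)         # the user was happy to share more
print(policy.disclose(location))        # -> includes city and street
```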
88

A multi-agent system for a bus crew rescheduling system

Shibghatullah, Abdul Samad January 2008 (has links)
Unpredictable events (UEs) are major factors that cause disruption to everyday bus operation. When UEs occur, the main resources - crews and vehicles - are affected, and this leads to crew schedule disruption. One way to deal with the problem is crew rescheduling. Most current approaches are based on static schedules and do not support rescheduling in a real-time scenario: they have the ability to reschedule, but a new complete schedule is produced without regard to the real-time situation. The mathematical approaches used by most scheduling packages are able to search for optimum or near-optimum schedules, but they are usually too slow to produce results in real time because they are computationally intensive when faced with complex situations. In practice, crew or bus rescheduling is managed manually, based on the supervisor's capabilities and experience in managing UEs. However, manual rescheduling is complex, prone to error and not optimal, especially when dealing with many UEs at the same time. This research proposes the CRSMAS (Crew Rescheduling System with Multi-Agent System) approach as an alternative that may help supervisors to make quick rescheduling decisions by automating the crew rescheduling process. A Multi-Agent System (MAS) is considered suitable to support this rescheduling because agents can dynamically adapt their behaviour to changing environments and can find solutions quickly via negotiation and cooperation. To evaluate CRSMAS, two types of experiment were carried out: Single Event and Multiple Events. The Single Event experiment is used to find characteristics of crew schedules that influence the crew rescheduling process, while the Multiple Events experiment is used to test the capability of CRSMAS in dealing with numerous events that occur randomly. A wide range of simulation results, based on real-world data, are reported and analysed. Based on the experiments it is concluded that CRSMAS is suitable for automating the crew rescheduling process and is capable of quick rescheduling whether facing a single event or multiple events at the same time; that the success of rescheduling depends not only on the tool but also on other factors such as the characteristics of the crew schedules and the period of the UE; and that one limitation of CRSMAS is that it cannot simulate different types of event at the same time. This limitation arises because different events follow different rules, but in the Virtual World the agents can only negotiate with one set of rules at a time.
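As a rough, simplified illustration of agents negotiating to cover a disrupted duty, the sketch below uses a contract-net-style bidding round. The agent behaviour, bidding rule and data are invented and are far simpler than the CRSMAS approach.

```python
from dataclasses import dataclass, field

@dataclass
class CrewAgent:
    """Simplified crew agent that bids to cover an uncovered duty."""
    name: str
    duties: list = field(default_factory=list)  # list of (start_hour, end_hour)
    max_hours: float = 9.0

    def bid(self, duty):
        start, end = duty
        # Decline if the duty overlaps an existing duty or exceeds allowed hours.
        if any(not (end <= s or start >= e) for s, e in self.duties):
            return None
        worked = sum(e - s for s, e in self.duties)
        if worked + (end - start) > self.max_hours:
            return None
        return worked + (end - start)  # lower bid = cheaper reassignment

def reassign(duty, agents):
    """Award the disrupted duty to the cheapest valid bidder, if any."""
    bids = [(agent.bid(duty), agent) for agent in agents]
    bids = [(b, a) for b, a in bids if b is not None]
    if not bids:
        return None
    _, winner = min(bids, key=lambda pair: pair[0])
    winner.duties.append(duty)
    return winner.name

# Illustrative disruption: a 14:00-16:00 duty must be covered after a crew member
# reports sick; two agents with existing duties bid for it.
agents = [CrewAgent("crew-1", [(6, 13)]), CrewAgent("crew-2", [(9, 12)])]
print(reassign((14, 16), agents))  # -> crew-2 (it has the most spare capacity)
```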
89

A service-oriented architecture and language for abstracted distributed algorithms

Goodman, Daniel January 2007 (has links)
No description available.
90

Correct synthesis and integration of compiler-generated function units

Ellis, Martin Andrew January 2008 (has links)
Computer architectures can use custom logic in addition to general purpose processors to improve performance for a variety of applications. The use of custom logic allows greater parallelism for some algorithms. While conventional CPUs typically operate on words, fine-grained custom logic can improve efficiency for many bit-level operations. The commodification of field-programmable devices, particularly FPGAs, has improved the viability of using custom logic in an architecture. This thesis introduces an approach to reasoning about the correctness of compilers that generate custom logic that can be synthesized to provide hardware acceleration for a given application. Compiler intermediate representations (IRs) and transformations that are relevant to the generation of custom logic are presented. Architectures may vary in the way that custom logic is incorporated, and suitable abstractions are used in order that the results apply to compilation for a variety of the design parameters that are introduced by the use of custom logic.
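As a small illustration of the contrast drawn above between word-oriented CPUs and fine-grained custom logic, the sketch below reverses the bits of a byte in software, an operation that takes a loop (or a lookup table) on a word-oriented processor but reduces to pure wiring in custom logic. The example is illustrative only and is not taken from the thesis.

```python
# Reversing the bits of a byte: on a word-oriented CPU this takes a loop or a
# lookup table, whereas in custom logic it is pure wiring. Illustrative only.
def reverse_bits(x, width=8):
    """Software (word-level) bit reversal, one iteration per bit."""
    out = 0
    for _ in range(width):
        out = (out << 1) | (x & 1)
        x >>= 1
    return out

print(bin(reverse_bits(0b1101_0010)))  # -> 0b1001011 (i.e. 0b01001011)
```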
