201

Reasoning about correctness properties of a coordination programming language

Grov, Gudmund January 2009 (has links)
Safety critical systems place additional requirements on the programming language used to implement them with respect to traditional environments. Examples of features that influence the suitability of a programming language in such environments include complexity of definitions, expressive power, bounded space and time and verifiability. Hume is a novel programming language with a design which targets the first three of these, in some ways, contradictory features: fully expressive languages cannot guarantee bounds on time and space, and low-level languages which can guarantee space and time bounds are often complex and thus error-prone. In Hume, this contradiction is solved by a two-layered architecture: a high-level fully expressive language is built on top of a low-level coordination language which can guarantee space and time bounds. This thesis explores the verification of Hume programs. It targets safety properties, which are the most important type of correctness properties, of the low-level coordination language, which is believed to be the most error-prone. Deductive verification in Lamport's temporal logic of actions (TLA) is utilised, in turn validated through algorithmic experiments. This deductive verification is mechanised by first embedding TLA in the Isabelle theorem prover, and then embedding Hume on top of this. Verification of temporal invariants is explored in this setting. In Hume, program transformation is a key feature, often required to guarantee space and time bounds of high-level constructs. Verification of transformations is thus an integral part of this thesis. The work with both invariant verification, and in particular, transformation verification, has pinpointed several weaknesses of the Hume language. Motivated and influenced by this, an extension to Hume, called Hierarchical Hume, is developed and embedded in TLA. Several case studies of transformation and invariant verification of Hierarchical Hume in Isabelle are conducted, and an approach towards a calculus for transformations is examined.
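Lamport's TLA makes the proof obligations behind such invariant verification explicit. As a sketch of the standard reasoning (the general TLA invariance rule, not the thesis's specific Hume embedding): a state predicate I is an invariant of a specification when it holds initially and is preserved by every step:

```latex
\[
\frac{Init \Rightarrow I \qquad\quad I \land [Next]_{vars} \Rightarrow I'}
     {Spec \Rightarrow \Box I}
\qquad \text{where } Spec \;\triangleq\; Init \land \Box [Next]_{vars}
\]
```

Here I' denotes I evaluated in the next state, and [Next]_{vars} also admits stuttering steps that leave vars unchanged.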
202

Data mining of many-attribute data : investigating the interaction between feature selection strategy and statistical features of datasets

Luo, Silang January 2009 (has links)
In many datasets, there is a very large number of attributes (e.g. many thousands). Such datasets can cause many problems for machine learning methods. Various feature selection (FS) strategies have been developed to address these problems. The idea of an FS strategy is to reduce the number of features in a dataset (e.g. from many thousands to a few hundred) so that machine learning and/or statistical analysis can be done much more quickly and effectively. Obviously, FS strategies attempt to select the features that are most important, considering the machine learning task to be done. The work presented in this dissertation concerns the comparison between several popular feature selection strategies, and, in particular, investigation of the interaction between feature selection strategy and simple statistical features of the dataset. The basic hypothesis, not investigated before, is that the correct choice of FS strategy for a particular dataset should be based on a simple (at least) statistical analysis of the dataset. First, we examined the performance of several strategies on a selection of datasets. Strategies examined were: four widely-used FS strategies (Correlation, ReliefF, Evolutionary Algorithm, no-feature-selection), several feature bias (FB) strategies (in which the machine learning method considers all features, but makes use of bias values suggested by the FB strategy), and also combinations of FS and FB strategies. The results showed us that FB methods displayed strong capability on some datasets and that combined strategies were also often successful. Examining these results, we noted that patterns of performance were not immediately understandable. This led to the above hypothesis (one of the main contributions of the thesis) that statistical features of the dataset are an important consideration when choosing an FS strategy. We then investigated this hypothesis with several further experiments. Analysis of the results revealed that a simple statistical feature of a dataset, which can be easily pre-calculated, has a clear relationship with the performance of certain FS methods, and a similar relationship with differences in performance between certain pairs of FS strategies. In particular, Correlation-based Feature Selection (CFS) is a very widely-used FS technique based on the basic hypothesis that good feature sets contain features that are highly correlated with the class, yet uncorrelated with each other. By analysing the outcome of several FS strategies on different artificial datasets, the experiments suggest that CFS is never the best choice for poorly correlated data. Finally, considering several methods, we suggest tentative guidelines for choosing an FS strategy based on simply calculated measures of the dataset.
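For concreteness, CFS (Hall's Correlation-based Feature Selection) rates a subset of k features by the ratio of mean feature-class correlation to mean feature-feature inter-correlation. A minimal sketch of the merit computation follows; the function names are illustrative and this is not the dissertation's experimental code:

```python
import numpy as np

def cfs_merit(X, y, subset):
    """CFS merit: k * r_cf / sqrt(k + k*(k-1) * r_ff).

    X: (n_samples, n_features) array; y: numeric class labels;
    subset: list of candidate feature indices.
    """
    k = len(subset)
    # Mean absolute feature-class correlation over the subset.
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset])
    if k == 1:
        return r_cf
    # Mean absolute pairwise feature-feature correlation.
    pairs = [(a, b) for i, a in enumerate(subset) for b in subset[i + 1:]]
    r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1]) for a, b in pairs])
    return (k * r_cf) / np.sqrt(k + k * (k - 1) * r_ff)
```

A subset scores highly when its features track the class but not each other, which is exactly why CFS struggles on datasets where no feature correlates well with the class.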
203

The augmented reality framework : an approach to the rapid creation of mixed reality environments and testing scenarios

Davis, Benjamin Charles January 2009 (has links)
Debugging errors during real-world testing of remote platforms can be time consuming and expensive when the remote environment is inaccessible and hazardous, such as the deep sea. Pre-real-world testing facilities, such as Hardware-In-the-Loop (HIL), are often not available due to the time and expense necessary to create them. Testing facilities tend to be monolithic in structure and thus inflexible, making complete redesign necessary for slightly different uses. Redesign is simpler in the short term than creating the required architecture for a generic facility. This leads to expensive facilities, due to reinvention of the wheel, or worse, no testing facilities. Without adequate pre-real-world testing, integration errors can go undetected until real world testing, where they are more costly to diagnose and rectify, especially when developing Unmanned Underwater Vehicles (UUVs). This thesis introduces a novel framework, the Augmented Reality Framework (ARF), for rapid construction of virtual environments for Augmented Reality tasks such as Pure Simulation, HIL, Hybrid Simulation and real world testing. ARF's architecture is based on JavaBeans and is therefore inherently generic, flexible and extendable. The aim is to increase the performance of constructing, reconfiguring and extending virtual environments, and consequently enable more mature and stable systems to be developed in less time, due to previously undetectable faults being diagnosed earlier in the pre-real-world testing phase. This is only achievable if test harnesses can be created quickly and easily, which in turn allows the developer to visualise more system feedback, making faults easier to spot. Early fault detection and less wasted real world testing lead to a more mature, stable and less expensive system. ARF provides guidance on how to connect and configure user-made components, allowing rapid prototyping and complex virtual environments to be created quickly and easily. In essence, ARF tries to provide intuitive construction guidance, similar in nature to LEGO® pieces, which can be so easily connected to form useful configurations. ARF is demonstrated through case studies which show the flexibility and applicability of ARF to testing techniques such as HIL for UUVs. In addition, an informal study was carried out to assess the performance increases attributable to ARF's core concepts. In comparison to classical programming methods, ARF's average performance increase was close to 200%. The study showed that ARF was highly intuitive, since the test subjects were novices in ARF but experts in programming. ARF provides key contributions in the field of HIL testing of remote systems by providing more accessible facilities that allow new or modified testing scenarios to be created where it might not have been feasible to do so before. In turn this leads to early detection of faults which in some cases would never have been detected before.
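The JavaBeans-style wiring that ARF builds on can be pictured as components exposing typed ports that the framework checks before connecting; the sketch below is illustrative only (class and port names are hypothetical, not ARF's API):

```python
class Component:
    """A pluggable component with typed input and output ports."""
    def __init__(self, name, inputs, outputs):
        self.name = name
        self.inputs = inputs      # port name -> expected type
        self.outputs = outputs    # port name -> produced type
        self.links = []           # (out_port, target, in_port)

    def connect(self, out_port, target, in_port):
        # Construction guidance: refuse type-mismatched links up front.
        if self.outputs[out_port] is not target.inputs[in_port]:
            raise TypeError(f"{self.name}.{out_port} does not fit "
                            f"{target.name}.{in_port}")
        self.links.append((out_port, target, in_port))

# Hypothetical wiring: feed simulated sonar ranges into a visualiser.
sonar = Component("SonarSim", inputs={}, outputs={"range": float})
view = Component("RangeView", inputs={"range": float}, outputs={})
sonar.connect("range", view, "range")   # accepted: port types match
```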
204

Solving key design issues for massively multiplayer online games on peer-to-peer architectures

Fan, Lu January 2009 (has links)
Massively Multiplayer Online Games (MMOGs) are increasing in both popularity and scale on the Internet and are predominantly implemented by Client/Server architectures. While such a classical approach to distributed system design offers many benefits, it suffers from significant technical and commercial drawbacks, primarily reliability and scalability costs. This realisation has sparked recent research interest in adapting MMOGs to Peer-to-Peer (P2P) architectures. This thesis identifies six key design issues to be addressed by P2P MMOGs, namely interest management, event dissemination, task sharing, state persistency, cheating mitigation, and incentive mechanisms. Design alternatives for each issue are systematically compared, and their interrelationships discussed. How well representative P2P MMOG architectures fulfil the design criteria is also evaluated. It is argued that although P2P MMOG architectures are developing rapidly, their support for task sharing and incentive mechanisms still needs to be improved. The design of a novel framework for P2P MMOGs, Mediator, is presented. It employs a self-organising super-peer network over a P2P overlay infrastructure, and addresses the six design issues in an integrated system. The Mediator framework is extensible, as it supports flexible policy plug-ins and can accommodate the introduction of new super-peer roles. Key components of this framework have been implemented and evaluated with a simulated P2P MMOG. As the Mediator framework relies on super-peers for computational and administrative tasks, membership management is crucial, e.g. to allow the system to recover from super-peer failures. A new technology for this, namely Membership-Aware Multicast with Bushiness Optimisation (MAMBO), has been designed, implemented and evaluated. It reuses the communication structure of a tree-based application-level multicast to track group membership efficiently. Evaluation of a demonstration application shows that MAMBO is able to quickly detect and handle peers joining and leaving. Compared to a conventional supervision architecture, MAMBO is more scalable, yet incurs lower communication overhead. Besides MMOGs, MAMBO is suitable for other P2P applications, such as collaborative computing and multimedia streaming. This thesis also presents the design, implementation and evaluation of a novel task mapping infrastructure for heterogeneous P2P environments, Deadline-Driven Auctions (DDA). DDA is primarily designed to support NPC host allocation in P2P MMOGs, and specifically in the Mediator framework. However, it can also support the sharing of computational and interactive tasks with various deadlines in general P2P applications. Experimental and analytical results demonstrate that DDA efficiently allocates computing resources for large numbers of real-time NPC tasks in a simulated P2P MMOG with approximately 1000 players. Furthermore, DDA supports gaming interactivity by keeping the communication latency among NPC hosts and ordinary players low. It also supports flexible matchmaking policies, and can motivate application participants to contribute resources to the system.
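As an illustration of the deadline-driven idea (a toy allocation rule, not the published DDA protocol), each candidate host can bid its estimated completion time for a task, with the task awarded to the earliest bidder that still meets the deadline:

```python
def allocate(deadline, bids):
    """bids: {host: estimated completion time}; returns a host or None."""
    feasible = {h: t for h, t in bids.items() if t <= deadline}
    if not feasible:
        return None                          # no host can meet the deadline
    return min(feasible, key=feasible.get)   # earliest completion wins

# Toy usage: three peers bid on hosting an NPC task due at t = 50.
print(allocate(50, {"peer-a": 62, "peer-b": 41, "peer-c": 48}))  # peer-b
```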
205

An integrated approach to high integrity software verification

Ellis, William James January 2010 (has links)
Computer software is developed through software engineering. At its most precise, software engineering involves mathematical rigour as formal methods. High integrity software is associated with safety critical and security critical applications, where failure would bring significant costs. The development of high integrity software is subject to stringent standards, prescribing best practices to increase quality. Typically, these standards will strongly encourage or enforce the application of formal methods. The application of formal methods can entail a significant amount of mathematical reasoning. Thus, the development of automated techniques is an active area of research. The trend is to deliver increased automation through two complementary approaches. Firstly, lightweight formal methods are adopted, sacrificing expressive power, breadth of coverage, or both in favour of tractability. Secondly, integrated solutions are sought, exploiting the strengths of different technologies to increase automation. The objective of this thesis is to support the production of high integrity software by automating an aspect of formal methods. To develop tractable techniques we focus on the niche activity of verifying exception freedom. To increase effectiveness, we integrate the complementary technologies of proof planning and program analysis. Our approach is investigated by enhancing the SPARK Approach, as developed by Altran Praxis Limited. Our approach is implemented and evaluated as the SPADEase system. The key contributions of the thesis are summarised below:
• Configurable and Sound - Present a configurable and justifiably sound approach to software verification.
• Cooperative Integration - Demonstrate that more targeted and effective automation can be achieved through the cooperative integration of distinct technologies.
• Proof Discovery - Present proof plans that support the verification of exception freedom.
• Invariant Discovery - Present invariant discovery heuristics that support the verification of exception freedom.
• Implementation as SPADEase - Implement our approach as SPADEase.
• Industrial Evaluation - Evaluate SPADEase against both textbook and industrial subprograms.
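Verifying exception freedom amounts to discharging, for each operation, a verification condition stating that no run-time check (array index, arithmetic overflow, division by zero) can fail. A minimal interval-style sketch of one such condition, for signed addition, is shown below; this is illustrative only and is not SPADEase or the SPARK toolset:

```python
def add_is_exception_free(x_range, y_range, type_range):
    """VC for `x + y`: the result must stay within the type's bounds."""
    lo = x_range[0] + y_range[0]   # smallest possible sum
    hi = x_range[1] + y_range[1]   # largest possible sum
    return type_range[0] <= lo and hi <= type_range[1]

# A 16-bit signed add cannot overflow if both operands lie in [0, 1000]:
print(add_is_exception_free((0, 1000), (0, 1000), (-32768, 32767)))  # True
```

Discovering tight enough variable ranges (loop invariants) is the hard part, which is where proof plans and invariant discovery heuristics come in.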
206

Workflow-coordinated integration of enterprise / industrial systems based on a semantic service-oriented architecture

Αλεξάκος, Χρήστος 07 September 2007 (has links)
Enterprise/industrial systems integration is now considered essential for the development of innovative business models in industry. The high degree of heterogeneity found among systems and applications in an enterprise environment introduces interoperability and coordination issues that must be addressed so that the integration problems of enterprise/industrial systems can be resolved in a flexible and dynamic way. This work presents a semantic enterprise model that describes both the structure of the enterprise and the services provided by its various systems, using ontologies. The model is employed in a proposed architecture that exploits state-of-the-art technologies such as Web Services and workflows to achieve system interoperation and coordination of business processes. The proposed architecture relies on Web Services to open up the functions of enterprise systems so that they are accessible and exploitable over the intra-enterprise local network. In addition, the semantic enterprise model aims to describe and explain the structure of the enterprise as well as the business functions offered by the provided Web Services. The conceptual model is used for the design and execution of business process flows, on the one hand offering a level of transparency with respect to the technical details of the systems and Web Services, and on the other enabling automatic coordination of the information flow between the various enterprise/industrial systems. / Enterprise integration is significant for the enforcement of novel business models in an enterprise / industry. The great heterogeneity of systems / applications in the enterprise environment requires the introduction of interoperability aspects in order to resolve integration problems in a flexible and dynamic way. This approach introduces an advanced enterprise semantic model representing both enterprise structure and available services, through the use of ontologies. The model is associated by a specific architecture that uses the above model in combination with “state-of-the-art” technologies such as Web Services and workflows. The current thesis proposes a uniform integrated approach towards the semantic representation of an enterprise, introducing a semantic description of the enterprise structure as well as semantic annotations for the implemented enterprise Web Services. Dominant technologies are used such as Semantic Web Services, workflows and ontologies. An architecture is also proposed for the combination of the above technologies and the proposed approach implementation.
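The core idea of the architecture — workflow steps expressed as business-level concepts and resolved through the semantic model to concrete Web Service endpoints — can be sketched as follows (all concept names and URLs are illustrative assumptions, not the thesis's actual model):

```python
# Toy semantic registry: business-level concepts -> Web Service endpoints.
REGISTRY = {
    "CheckInventory": "http://erp.local/ws/inventory",
    "ScheduleBatch":  "http://mes.local/ws/scheduler",
    "ReportQuality":  "http://lims.local/ws/quality",
}

def run_workflow(steps):
    """Execute a workflow of abstract steps, hiding endpoint details."""
    for concept in steps:
        endpoint = REGISTRY[concept]   # semantic lookup, not hard-coded wiring
        print(f"invoking {concept} at {endpoint}")

run_workflow(["CheckInventory", "ScheduleBatch", "ReportQuality"])
```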
207

Computer aided reliability prediction

Partridge, Christopher David January 1976 (has links)
This thesis describes a project, sponsored by the Admiralty Surface Weapons Establishment (A.S.W.E.), whose objective is to investigate the use of Computer-Aided Design (C.A.D.) methods in reliability engineering and, in particular, in reliability prediction. The project evolved as a result of continuous interaction with users whose requirements and comments have assisted in the definition of the project specification which, in turn, implied the method of computation (Monte Carlo analysis) and the form of the implementation (a modularly structured program). The project produced a CAD method which aimed to provide: i) a means of predicting the reliability of complex heterogeneous systems and an aid to estimate their spares requirements in an efficient way; ii) software which is easily extendable, modifiable and, while oriented towards the ICL 1900 range of computers, optimally portable; iii) a mode of documentation which permits the use of the program by reliability engineers who have no previous computing experience. In order to fulfil these requirements it was necessary to incorporate a number of novel features, which include: i) the use of hierarchical structures as a means of modelling the reliability of large and complex systems; ii) the introduction of a modelling device in the form of a controlled switch which allows the modelling of a wide range of dependent failure and repair mechanisms; iii) the transformation of any type of failure and repair distribution into a uniform data structure which permits the easy and efficient handling of any type of distribution function; iv) the use of modular programming and program documentation as a means of providing the necessary efficiency, flexibility and user accessibility. This thesis includes the description of the CAD method and illustrates it by means of a number of examples. Further, it discusses some of the problems of using this method to predict the reliability of mechanical engineering systems. The use of the program by A.S.W.E. contractors and Polytechnic students is described by reference to diverse design examples. Further areas of research and development in relation to the project are given. To assist the reader who may not be equally familiar with the standard terminologies of reliability engineering, statistics and computing used in this thesis, a set of selected definitions is included in one of the appendices.
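The Monte Carlo computation at the heart of such a program can be illustrated with a toy series-parallel system: sample component lifetimes from their failure distributions and count the fraction of trials in which the system survives the mission time. The structure and rates below are illustrative, not the program's actual models:

```python
import random

def system_survives(t, rates):
    """Series system: component c1 in series with a parallel pair c2a/c2b.
    rates: exponential failure rates (failures per hour)."""
    c1  = random.expovariate(rates["c1"])
    c2a = random.expovariate(rates["c2a"])
    c2b = random.expovariate(rates["c2b"])
    return c1 > t and max(c2a, c2b) > t    # series = AND, parallel = OR

def reliability(t, rates, trials=100_000):
    """Estimate P(system survives to time t) by simple Monte Carlo."""
    survived = sum(system_survives(t, rates) for _ in range(trials))
    return survived / trials

print(reliability(1000, {"c1": 1e-4, "c2a": 5e-4, "c2b": 5e-4}))  # ~0.77
```

Dependent failures and arbitrary repair distributions, as handled by the program's controlled switch and uniform data structure, go beyond this independent-lifetime sketch.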
208

A web-based 3D virtual environment for managing large volumes of imagery data

Smith, A. January 2006 (has links)
This thesis proposes a novel concept for managing large volumes of imagery data through a web-based 3D virtual environment, and describes the development of a prototype system to demonstrate the feasibility of this concept. The system, called Vista, allows users to upload imagery data (such as images, videos and presentations) through a web browser, and the data are automatically placed into virtual galleries, cinemas and studios. The environment modifies itself dynamically to accommodate the addition and deletion of imagery data. The system empowers ordinary users to utilise the Internet as a global data repository and showplace, and provides an intuitive and stimulating way to visualise and explore large volumes of imagery data. It removes the burden of non-trivial management of large volumes of data, including ontological organisation of users' files and geometrical design of the virtual environment. Several advanced interaction techniques have been developed to support such a virtual environment. In particular, the concept of treemaps has been adapted to provide a global view of a virtual environment, resulting in a new visual representation, the warped treemap, which has been used as a navigation atlas. Many adaptive features have been incorporated into the prototypes to improve general usability and reduce users' cognitive load. In addition, several interesting and useful metaphors, such as virtual archives and space-warp zones, have been introduced. Such a technology not only offers users a novel and intuitive experience of interacting with computers, but also provides a convenient tool for managing huge volumes of data in a largely automatic manner. In addition, the use of the technology can be extended to other applications, including commercial sectors (e.g., housing), industrial sectors (e.g., inspection), health care (e.g., medical images) and public sectors (e.g., archiving and museums).
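Treemaps tile a rectangle so that each tile's area is proportional to an item's size; a single level of the classical slice-and-dice layout, which the warped treemap adapts, can be sketched as follows (names are illustrative, and this is the textbook algorithm rather than the thesis's warped variant):

```python
def slice_and_dice(items, x, y, w, h, vertical=True):
    """items: list of (label, size). Returns [(label, x, y, w, h)] tiles."""
    total = sum(size for _, size in items)
    tiles, offset = [], 0.0
    for label, size in items:
        frac = size / total
        if vertical:   # split the rectangle left to right
            tiles.append((label, x + offset, y, w * frac, h))
            offset += w * frac
        else:          # split it top to bottom
            tiles.append((label, x, y + offset, w, h * frac))
            offset += h * frac
    return tiles

# Lay out three galleries inside a unit square, sized by image count.
for tile in slice_and_dice([("photos", 60), ("videos", 30), ("slides", 10)],
                           0, 0, 1, 1):
    print(tile)
```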
209

The construction of general graph editors using object-oriented programming

Wang, C. Y. January 1994 (has links)
Diagrams are used extensively to model real world systems and problems, and as such, application-specific diagram editors are needed in computer application systems. However, the development of a diagram editor from scratch is time consuming and needs specialized expertise. This thesis describes the implementation of DECADE, a software development environment for constructing application-specific diagram editors. It provides users with an application framework, a specification editing tool, and an application construction tool to reduce this development effort. Object-oriented technology, which is discussed in the thesis, was employed in the design and implementation of DECADE. An analysis of diagrams and editors is included which leads to a model of diagrams and an abstraction of diagram editors. Diagram editors are abstracted into four parts: the diagram, representation, manipulation and the editor user interface. A diagram object defines a type of diagram with special elements; the representation objects manage a given diagram and generate specific application data. The manipulation abstraction models editing activities, and the user interface objects support the interaction with users. Based on these abstractions, an application framework was developed which provides substantial software reusability, and the possibility of automating diagram editor construction from the software components that are derived from the framework. The construction tool selects and specializes the components according to a specification, which is generated by the specification editing tool, and then integrates them together into a diagram editor. An object-oriented specification method and a frame-based construction mechanism for software automation are presented and used in the implementation. A few experimental applications are also given and analyzed in the thesis.
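The four-part abstraction can be pictured as a small class skeleton (a sketch with hypothetical names, not DECADE's actual class hierarchy):

```python
class Diagram:
    """Defines a diagram type and holds its elements."""
    def __init__(self, element_types):
        self.element_types = element_types
        self.elements = []

class Representation:
    """Manages a diagram and derives application-specific data from it."""
    def __init__(self, diagram):
        self.diagram = diagram
    def export(self):
        return [kind for kind, _ in self.diagram.elements]

class Manipulation:
    """Models editing activities performed on a diagram."""
    def add(self, diagram, kind, data):
        assert kind in diagram.element_types, f"unknown element: {kind}"
        diagram.elements.append((kind, data))

class EditorUI:
    """Mediates between the user and the other three abstractions."""
    def __init__(self, diagram, representation, manipulation):
        self.diagram, self.rep, self.manip = diagram, representation, manipulation

# A framework would specialise these parts per application; e.g. a Petri-net
# editor fixes element_types = {"place", "transition", "arc"}.
```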
210

A computational framework for stereo imaging

Uribe, Maria Patricia Trujillo January 2005 (has links)
No description available.
