The use of proofs-as-programs to build an analogy-based functional program editor

Whittle, J. N. D. January 1999 (has links)
This thesis presents a novel application of the technique known as proofs-as-programs. Proofs-as-programs defines a correspondence between proofs in a constructive logic and functional programs. By using this correspondence, a functional program may be represented directly as the proof of a specification, and so the program may be analysed within this proof framework. CYNTHIA is a programming editor for the functional language ML which uses proofs-as-programs to analyse users' programs as they are written. So that the user requires no knowledge of proof theory, the underlying proof representation is completely hidden. The proof framework allows programs written in CYNTHIA to be checked to be syntactically correct, well-typed, well-defined and terminating. Users of CYNTHIA make fewer programming errors, and the feedback facilities of CYNTHIA make it easier to track down the source of errors when they do occur. CYNTHIA also embodies the idea of programming by analogy: rather than starting from scratch, users always begin with an existing function definition. They then apply a sequence of high-level editing commands which transform this starting definition into the one required. These commands preserve correctness and also increase programming efficiency by automating commonly occurring steps. The thesis describes the design and implementation of CYNTHIA and investigates its role as a novice programming environment. Use by experts is possible, but only a subset of ML is currently supported. Two major trials of CYNTHIA have shown that it is well suited as a teaching tool.
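One of the well-definedness checks described above is that every recursive call is made on a structurally smaller argument, which guarantees termination for structural recursion. The following is a minimal hypothetical sketch of that idea in Python; CYNTHIA itself works on ML programs through their proof representation, and the `head`/`tail` subterm encoding here is purely illustrative.

```python
# Hypothetical sketch: accept a recursive definition only if every
# recursive call's argument is a strict syntactic subterm of the parameter.

def structurally_terminating(defn):
    """defn: dict with 'name', 'param', and 'calls', where each entry of
    'calls' is the argument expression passed at a recursive call site."""
    # For a list parameter xs, head(xs) and tail(xs) are strict subterms.
    smaller = {f"tail({defn['param']})", f"head({defn['param']})"}
    return all(arg in smaller for arg in defn['calls'])

# length xs = 1 + length (tail xs)  -- recursion on a subterm: terminates.
length = {'name': 'length', 'param': 'xs', 'calls': ['tail(xs)']}
# loop xs = loop xs  -- recursion on the whole parameter: rejected.
loop = {'name': 'loop', 'param': 'xs', 'calls': ['xs']}
```

A real editor would perform this check on the proof obligation generated for each definition rather than on a string encoding, but the acceptance criterion is the same.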

Aspect Oriented Software Fault Tolerance for Mission Critical Systems

Hameed, Kashif January 2010 (has links)
Software fault tolerance is a means of achieving high dependability for mission and safety critical systems. Despite continuing efforts to prevent and remove faults during software development, application-level fault tolerance measures are still required to avoid failures due to residual design, programming and transient faults. In addition to the functional complexity of application-level software, non-functional requirements, such as diversity, redundancy, exception handling, voting and adjudication mechanisms, are introduced by fault tolerance measures, bringing additional system complexity. Current software patterns, styles and architectures do not respect the separation of concerns at the design and programming layers which is desirable when striving to manage complexity, maintainability and portability issues. Moreover, the lack of domain-specific fault tolerance schemes, such as error detection and recovery mechanisms, further complicates this task for developers. The main contribution of this research is to provide architectural support for software fault tolerance using an Aspect Oriented Software Development paradigm. The approach proposes aspect oriented fault tolerance frameworks incorporating exception handling, design diversity and protective wrappers to fulfil the needs of a large range of dependable applications. The utilization of the proposed frameworks is demonstrated to offer several advantages, involving modularization, reduced complexity, and reusability, over traditional, ad-hoc fault tolerant implementations. Three separate case studies are used to evaluate the proposed frameworks through dependability assessment and software metrics analysis. The results show that the proposed frameworks can improve dependability with higher fault coverage and better separation of fault tolerance concerns from core functionality.
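The combination of design diversity with voting and adjudication mentioned above can be illustrated with a small sketch. This is not the thesis's actual framework (which uses aspect-oriented weaving to keep the fault tolerance concern out of the core code); the decorator-style `protective_wrapper` below is a hypothetical stand-in showing how diverse versions plus a majority-vote adjudicator mask a single faulty version.

```python
from collections import Counter

def protective_wrapper(*versions):
    """Run diverse implementations of the same function and adjudicate by
    majority vote; a raised exception counts as a detected error in that
    version.  Illustrative names, not the thesis's API."""
    def adjudicated(*args):
        results = []
        for version in versions:
            try:
                results.append(version(*args))
            except Exception:
                pass  # that version failed; the others may still agree
        if not results:
            raise RuntimeError("all versions failed")
        value, _votes = Counter(results).most_common(1)[0]
        return value
    return adjudicated

# Two correct diverse versions and one faulty one; the vote masks the fault.
safe_sqrt = protective_wrapper(
    lambda x: x ** 0.5,
    lambda x: x ** 0.5,
    lambda x: x / 2,   # faulty version
)
```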

Event modelling, detection and mining in multimedia surveillance videos

Anwar, Fahad January 2010 (has links)
Due to the advances in digital information technologies and the dramatic drop in the prices of storage devices, the installation of visual surveillance systems (VSS) has become relatively inexpensive. However, maintaining the staff to monitor these CCTV sources is still a costly affair. Moreover, as end users become more aware of vision-based technologies, there is an ever-growing demand for advanced surveillance systems which can not only model and detect complex interesting events but can also provide intelligence to improve their operational management process. In this thesis we investigate the three main aspects of visual surveillance systems (event modelling, event detection and event mining) and propose a framework in which these different aspects complement each other. The research contributions presented in the thesis mostly fall under the event mining and event modelling aspects of visual surveillance systems. Most of the previous work on mining multimedia events is based upon discovering/detecting already known abnormal events or deals with finding frequent event patterns. In contrast, in this thesis we present a framework to discover unknown anomalous events associated with a frequent sequence of events (AEASP); that is, to discover events which are unlikely to follow a frequent sequence of events. This information can be very useful for discovering unknown abnormal events and can provide early actionable intelligence to redeploy resources to specific areas of view (such as a PTZ camera or the attention of a CCTV user). Discovery of anomalous events against a sequential pattern can also provide business intelligence for store management in the retail sector. The proposed event mining framework also takes the temporal aspect of AEASP into consideration, that is, to discover anomalous events which are true for a specific time interval only and might not be an AEASP over the whole time spectrum, and vice versa.
To confront the processor- and memory-expensive task of searching for all the instances of multiple sequential patterns in each data sequence, a dynamic sequential pattern search mechanism (DSPS_SM) is also introduced. Different experiments are conducted to evaluate the proposed AEASP mining algorithm's accuracy and performance. Next, we propose an event mining framework to automate the process of generating appearance models of real-world entities by utilising the results of already detected events and the text streams of multimedia events. A comprehensive problem definition and entity appearance model generation framework is presented. To validate the proposed entity appearance model generation concept, we implemented the proposed algorithm and conducted experiments using the "Columbia University Image Library (COIL-100)" object database [1]. To address the event modelling aspect of surveillance systems we extended and modified the event description framework (EDF) presented in [2] and propose an extended version of it (EDFE). EDFE not only increases EDF's capability to model complex multimedia events but also facilitates the event detection and event mining processes. Modelled events (generated by EDFE) and the event detection process are evaluated using sequences generated in the laboratory and from a realistic surveillance environment; the results of the experiments are then analysed using precision and recall measures.
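The core AEASP idea — flagging events that rarely follow an otherwise frequent sequence of events — can be sketched in a few lines. This is a deliberately simplified illustration under stated assumptions (contiguous pattern matches, a single frequency threshold, no temporal intervals); the thesis's algorithm and its DSPS_SM search mechanism are considerably more involved.

```python
from collections import Counter

def anomalous_followers(sequences, pattern, threshold=0.1):
    """Count which event follows each occurrence of `pattern` and flag
    followers whose relative frequency falls below `threshold` as
    anomalous events against that sequential pattern."""
    followers = Counter()
    k = len(pattern)
    for seq in sequences:
        for i in range(len(seq) - k):
            if seq[i:i + k] == pattern:
                followers[seq[i + k]] += 1
    if not followers:
        return set()
    total = sum(followers.values())
    return {event for event, n in followers.items() if n / total < threshold}

# Nine 'enter, browse, pay' episodes and one 'enter, browse, run' episode:
# 'run' is an unlikely follower of the frequent sequence ['enter', 'browse'].
logs = [['enter', 'browse', 'pay']] * 9 + [['enter', 'browse', 'run']]
```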

Methods for the efficient measurement of phased mission system reliability and component importance

Reed, Sean January 2011 (has links)
An increasing number of systems operate over a number of consecutive time periods, in which their reliability structure and the consequences of failure differ, in order to perform some overall operation. Each distinct time period is known as a phase and the overall operation is known as a phased mission. Generally, a phased mission fails immediately if the system fails at any point and is considered a success only if all phases are completed without failure. The work presented in this thesis provides efficient methods for the prediction and optimisation of phased mission reliability. A number of techniques and methods for the analysis of phased mission reliability have been previously developed. Due to the component and system failure time dependencies introduced by the phases, the computational expense of these methods is high, and this limits the size of the systems that can be analysed in reasonable time frames on modern computers. Two importance measures, which provide an index of the influence of each component on the system reliability, have also been previously developed. This is useful for the optimisation of the reliability of a phased mission; however, a much larger number have been developed for non-phased missions, and the different perspectives and functions they provide are advantageous. This thesis introduces new methods as well as improvements and extensions to existing methods for the analysis of both non-repairable and repairable systems with an emphasis on improved efficiency in the derivation of phase and mission reliability. New importance measures for phased missions are also presented, including interpretations of those currently available for non-phased missions. These provide a number of interpretations of component importance, allowing those most suitable in a given context to be employed and thus aiding in the optimisation of mission reliability.
In addition, an extensive computer code has been produced that implements and tests the majority of the newly developed techniques and methods.
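The definition above — a mission succeeds only if every phase completes without failure — yields, in the simplest case, a product of phase reliabilities. The sketch below assumes independent phase failures purely for illustration; the whole point of the thesis's methods is to handle the cross-phase component dependencies that make this naive product inaccurate for real systems.

```python
def mission_reliability(phase_failure_probs):
    """Naive phased-mission reliability under an independence assumption:
    the mission succeeds only if every phase succeeds, so multiply the
    individual phase reliabilities (1 - failure probability)."""
    r = 1.0
    for q in phase_failure_probs:
        r *= (1.0 - q)
    return r

# Three phases with failure probabilities 1%, 5% and 2%.
print(mission_reliability([0.01, 0.05, 0.02]))  # 0.99 * 0.95 * 0.98
```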

JeX : an implementation of a Java exception analysis framework to exploit potential optimisations

Stevens, Andrew January 2002 (has links)
Exceptions in Java are generated by explicit program syntax or implicit events at runtime. The potential control flow paths introduced by these implicit exceptions must be represented in static flow graphs. The impact of these additional paths reduces the effectiveness of standard ahead-of-time optimisations. This thesis presents research that focuses on measuring and reducing the effects of these implicit exceptions. In order to conduct this research, a novel static analysis framework, called JeX, has been developed. This tool provides an environment for the analysis and optimisation of Java programs using the bytecode representation. Data generated by this tool clearly shows that implicit exceptions significantly fragment the standard flow graphs used by many intraprocedural optimisation techniques. This fragmentation increases the number of flow dependence relationships and so negatively affects numerous flow analysis techniques. The main contribution of this work is the development of new algorithms that can statically prove that certain runtime exceptions can never occur. Once these exceptions have been extracted, the control flow structures are re-generated without being affected by those potential exceptions. The results show that these algorithms extract 24-29% of all implicit potential exceptions in the eight benchmark programs selected. The novel, partial stack evaluation algorithm is particularly successful at extracting potential null pointer exceptions, with reductions in these of 53-68%. This thesis also provides a simulation of perfect exception extraction by removing the effects of all implicit exceptions in the flow graphs. The secondary contribution of this research is the development of program call graph generation algorithms with novel receiver prediction analysis. This thesis presents a comparative study of the graphs generated using fast but conservative analysis with more effective rapid type analysis algorithms.
This study shows that Java bytecodes are well suited to a fine-grained instance prediction type analysis technique, although this context-sensitive approach does not scale well with larger test programs. The final contribution of this work is the JeX tool itself. This is a generic, whole program analysis system for Java programs. It includes a number of general optimisation modules, algorithms for generating several static analysis data structures and a visualisation interface for viewing all data structures and Java class file contents.
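The flavour of partial stack evaluation — proving that a particular implicit null pointer exception can never occur, so its control-flow edge can be dropped — can be conveyed with a toy abstract interpreter. The instruction set and abstract values below are illustrative, not real JVM bytecode or JeX's actual algorithm.

```python
def provably_non_null(bytecode):
    """Abstractly execute a straight-line fragment, tracking whether the
    receiver of each field access is known non-null.  If every dereference
    has a provably non-null receiver, the implicit NullPointerException
    edges for this fragment can be removed from the flow graph."""
    stack, safe = [], True
    for op, _arg in bytecode:
        if op == 'new':            # freshly allocated objects are non-null
            stack.append('nonnull')
        elif op == 'load':         # unknown reference from a local/parameter
            stack.append('maybe-null')
        elif op == 'getfield':     # dereference: safe only if receiver non-null
            safe = safe and stack.pop() == 'nonnull'
            stack.append('maybe-null')  # loaded field value is unknown
    return safe

fresh = [('new', 'A'), ('getfield', 'f')]    # receiver just allocated: safe
param = [('load', 'p'), ('getfield', 'f')]   # receiver unknown: cannot prove
```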

Mobile computation with functions

Kırlı, Zeliha D. January 2002 (has links)
The practice of computing has reached a stage where computers are seen as parts of a global computing platform. The possibility of exploiting resources on a global scale has given rise to a new paradigm -- the mobile computation paradigm -- for computation in large-scale distributed networks. Languages which enable the mobility of code over the network are becoming widely used for building distributed applications. This thesis explores distributed computation with languages which adopt functions as the main programming abstraction and support code mobility through the mobility of functions between remote sites. It aims to highlight the benefits of using languages of this family in dealing with the challenges of mobile computation. The possibility of exploiting existing static analysis techniques suggests that having functions at the core of a mobile code language is a particularly apt choice. A range of problems which have impact on the safety, security and performance of systems are discussed here. It is shown that types extended with effects and other annotations can capture a significant amount of information about the dynamic behaviour of mobile functions and offer solutions to the problems under investigation. The thesis presents a survey of the languages Concurrent ML, Facile and PLAN which remain loyal to the principles of the functional language ML and hence inherit its strengths in the context of concurrent and distributed computation. The languages which are defined in the subsequent chapters have their roots in these languages. Two chapters focus on using types to statically predict whether functions are used locally or may become mobile at runtime. Types are exploited for distributed call-tracking to estimate which functions are invoked at which sites in the system.
Compilers for mobile code languages would benefit from such estimates in dealing with the heterogeneity of the network nodes, in providing static profiling tools and in estimating the resource-consumption of programs. Two chapters are devoted to the use of types in controlling the flow of values in a system where users have different trust levels. The confinement of values within a specified mobility region is the subject of one of these. The other focuses on systems where values are classified with respect to their confidentiality level. The sources of undesirable flows of information are identified and a solution based on noninterference is proposed.
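The confinement of values within a mobility region, mentioned above, amounts to a static check that a value is never sent to a site outside the region its annotation permits. The sketch below is a hypothetical dynamic-free rendering of that check over a toy program representation; the thesis formulates it as a type system, not as a Python function.

```python
def confinement_ok(program, regions):
    """Each value carries a mobility-region annotation (the set of sites it
    may visit); reject any 'send' of a value to a site outside its region.
    `program` is a list of (op, value, site) triples -- illustrative only."""
    return all(site in regions[value]
               for op, value, site in program if op == 'send')

# 'key' is confined to sites a and b; 'doc' may travel to a, b and c.
regions = {'key': {'a', 'b'}, 'doc': {'a', 'b', 'c'}}
good = [('send', 'doc', 'c'), ('send', 'key', 'a')]
bad = [('send', 'key', 'c')]   # would leak 'key' outside its region
```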

References to graphical objects in interactive multimodal queries

He, D. January 2001 (has links)
This thesis describes a computational model for interpreting natural language expressions in an interactive multimodal query system integrating both natural language text and graphic displays. The primary concern of the model is to interpret expressions that might involve graphical attributes and expressions whose referents could be objects on the screen. Graphical objects on the screen are used to visualise entities in the application domain and their attributes (in short, domain entities and domain attributes). This is why graphical objects are treated as descriptions of those domain entities/attributes in the literature. However, graphical objects and their attributes are visible during the interaction, and are thus known by the participants of the interaction. Therefore, they themselves should be part of the mutual knowledge of the interaction. This poses some interesting problems in language processing. As part of the mutual knowledge, graphical attributes could be used in expressions, and graphical objects could be referred to by expressions. In consequence, there could be ambiguities about whether an attribute in an expression belongs to a graphical object or to a domain entity. There could also be ambiguities about whether the referent of an expression is a graphical object or a domain entity. The main contributions of this thesis consist of analysing the above ambiguities, and designing, implementing and testing a computational model and a demonstrational system for resolving these ambiguities. Firstly, a structure and corresponding terminology are set up, so these ambiguities can be clarified as ambiguities derived from referring to different databases, the screen or the application domain (in short, source ambiguities). Secondly, a meaning representation language is designed which explicitly represents the information about which database an attribute/entity comes from.
Several linguistic regularities inside and among referring expressions are described so that they can be used as heuristics in the ambiguity resolution. Thirdly, a computational model based on constraint satisfaction is constructed to resolve simultaneously some reference ambiguities and source ambiguities.
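A toy version of source-ambiguity resolution can make the idea concrete: candidate referents are drawn from both the screen and the domain database, and the attributes mentioned in the referring expression filter them; a unique survivor fixes both the referent and its source. This is a minimal illustrative sketch, far simpler than the thesis's constraint satisfaction model.

```python
def resolve(description, candidates):
    """Filter candidate referents by the attributes in the referring
    expression.  Each candidate is a (source, attributes) pair, where
    source is 'screen' (graphical object) or 'domain' (domain entity).
    Returns the unique match, or None if the expression stays ambiguous."""
    matches = [(src, attrs) for src, attrs in candidates
               if description.items() <= attrs.items()]
    return matches[0] if len(matches) == 1 else None

# "the red object": only the graphical circle on screen is red, so both
# the referent and its source (the screen) are resolved together.
candidates = [
    ('screen', {'shape': 'circle', 'colour': 'red'}),
    ('domain', {'type': 'reactor', 'colour': 'green'}),
]
```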

Synthesizing fundamental frequency using models automatically trained from data

Dusterhoff, K. E. January 2000 (has links)
The primary goal of this research is to produce stochastic models which can be used to generate fundamental frequency contours for synthetic utterances. The models produced are binary decision trees which are used to predict a parameterized description of fundamental frequency for an utterance. These models are trained using the sort of information which is typically available to a speech synthesizer during intonation generation. For example, the speech database is annotated with information about the location of word, phrase, segment, and syllable boundaries. The decision trees ask questions about such information. One obvious problem facing the stochastic modelling approach to intonation synthesis is obtaining data with the appropriate intonation annotation. This thesis presents a method by which such an annotation can be automatically derived for an utterance. The method uses Hidden Markov Models to label speech with intonation event boundaries given fundamental frequency, energy, and Mel frequency cepstral coefficients. Intonation events are fundamental frequency movements which relate to constituents larger than the syllable nucleus. Even if there is an abundance of fully labelled speech data, and the intonation synthesis models appear robust, it is important to produce an evaluation of the resulting intonation contours which allows comparison with other intonation synthesis methods. Such an evaluation could be used to compare versions of the same basic methodology or completely different methodologies. The question of intonation evaluation is addressed in this thesis in terms of system development. Objective methods of evaluating intonation contours are investigated and reviewed with regard to their ability to regularly provide feedback which can be used to improve the systems being evaluated. The fourth area investigated in this thesis is the interaction between segmental (phone) and suprasegmental (intonation) levels of speech.
This investigation is not undertaken separately from the other investigations. Questions about phone-intonation interaction form a part of the research in both intonation synthesis and intonation analysis.
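The prediction step described above — a binary decision tree asking questions about linguistic context and returning a parameterised F0 description at each leaf — can be sketched as follows. The tree structure, the feature names and the leaf values (start frequency, amplitude, duration) are invented for illustration, not trained from data.

```python
def predict_f0_params(features, tree):
    """Walk a binary decision tree: each internal node asks a yes/no
    question about the linguistic context; each leaf holds a parameterised
    F0 description for the intonation event."""
    node = tree
    while 'leaf' not in node:
        branch = 'yes' if features.get(node['ask']) else 'no'
        node = node[branch]
    return node['leaf']

# Hypothetical tree: phrase-final syllables get a fall, stressed syllables
# an accent rise, everything else a flat default.
tree = {'ask': 'phrase_final',
        'yes': {'leaf': {'start': 180.0, 'amp': -30.0, 'dur': 0.20}},
        'no':  {'ask': 'stressed',
                'yes': {'leaf': {'start': 200.0, 'amp': 25.0, 'dur': 0.15}},
                'no':  {'leaf': {'start': 190.0, 'amp': 0.0, 'dur': 0.10}}}}
```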

Improving the performance of recommender algorithms

Redpath, Jennifer Louise January 2010 (has links)
Recommender systems were designed as a software solution to the problem of information overload. Recommendations can be generated based on the content descriptions of past purchases (Content-based), the personal ratings an individual has assigned to a set of items (Collaborative) or from a combination of both (Hybrid). There are issues that affect the performance of recommender systems, in terms of accuracy and coverage, such as data sparsity and dealing with new users and items. This thesis presents a comprehensive set of offline experiments and empirical results with the goal of improving the recommendation accuracy and coverage for the poorest performers in the dataset. This research suggests approaches for dealing with four specific research challenges: the standardisation of evaluation methods and metrics, the definition and identification of sparse users and items, improving the accuracy of hybrid systems targeted specifically at the poor performers and addressing the cold-start problem for new users. A selection of recommendation algorithms were implemented and/or extended, namely, user-based collaborative filtering, item-based collaborative filtering, collaboration-via-content and two hybrid prediction algorithms. The first two methods were developed with the express intention of providing a baseline for improvement, facilitating the identification of poor performers and analysing the factors which influenced the performance of recommendation algorithms. The latter algorithms were targeted at the poor performers and were also examined with respect to user and item sparsity. The collaboration-via-content algorithm, when extended with a new content attribute, resulted in an improvement for new users. The hybrid prediction algorithms, which combined user-based and item-based approaches in such a way as to include information about transitive relationships, were able to improve upon the baseline accuracy and coverage results.
In particular, the final hybrid algorithm saw a 3.5% improvement in accuracy for the poor performers compared to item-based collaborative filtering.
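One hybridisation strategy consistent with the description above is to blend user-based and item-based predictions when both exist and to fall back to whichever is available otherwise, which is what lifts coverage for sparse users and items. The fixed weighting below is illustrative, not the thesis's exact algorithm.

```python
def hybrid_predict(user_pred, item_pred, weight=0.5):
    """Blend a user-based and an item-based rating prediction.
    Falling back to the single available prediction is how the hybrid
    improves coverage when one neighbourhood is too sparse to predict."""
    if user_pred is None and item_pred is None:
        return None                # neither method can predict
    if user_pred is None:
        return item_pred           # item-based rescues coverage
    if item_pred is None:
        return user_pred           # user-based rescues coverage
    return weight * user_pred + (1 - weight) * item_pred

print(hybrid_predict(4.0, 3.0))    # both available: blended
print(hybrid_predict(None, 3.0))   # sparse user: item-based fallback
```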

Agile computing

Suri, Niranjan January 2008 (has links)
Wirelessly networked dynamic and mobile environments, such as tactical military environments, pose many challenges to building distributed computing systems. A wide variety of node types and resources, unreliable and bandwidth constrained communications links, high churn rates, and rapidly changing user demands and requirements make it difficult to build systems that satisfy the needs of the users and provide good performance. Agile computing is an innovative metaphor for distributed computing systems and prescribes a new approach to their design and implementation. Agile computing may be defined as opportunistically discovering, manipulating, and exploiting available computing and communication resources. The term agile is used to highlight the desire to both quickly react to changes in the environment as well as to take advantage of transient resources only available for short periods of time. This thesis describes the overall agile computing metaphor as well as one concrete realisation through a middleware infrastructure. An important contribution of the thesis is the definition of a generic architecture for agile computing, which identifies the core and ancillary attributes that contribute to systems that are to be agile. The thesis also describes the design and implementation of one concrete middleware solution, which provides a number of components and capabilities that integrate together to address the challenges of the overall problem. These components include the Aroma virtual machine, the Mockets communications library, the Group Manager resource discovery component, the DisService peer-to-peer information dissemination system, and the AgServe service oriented architecture. The design and development of these components has been motivated by observing problems with real systems in tactical military environments. As a result, the components have been incorporated into real systems and used in the field.
The key contribution of this thesis is the prescribed approach to combining these capabilities in order to build opportunistic systems. The capabilities of these components, both individually, as well as part of a single integrated system, are evaluated through a series of experiments and compared with existing systems and standards. The results show significant performance improvements for each of the components. For example, the Mockets library performs up to 7.6x better than TCP (Transmission Control Protocol) sockets in terms of throughput depending on the type of radio utilised. When exploiting unique features in the Mockets library, such as message replacement, the Mockets library performs up to 44x better than SCTP (Stream Control Transmission Protocol) and SCPS (Space Communications Protocol Standards) in terms of timeliness of delivery of data. Likewise, when compared to the JXTA middleware from Sun Microsystems, the Group Manager uses up to 4.8x less bandwidth to support service discovery. Finally, experiments to measure the agility of the integrated middleware show that transient resources that are available for as short a period as 10 seconds can be opportunistically exploited. The Agile Computing Middleware, as presented in this thesis, continues to evolve in terms of further optimisations, incorporation of new components, enhancement of the existing components, and test and evaluation in real-world demonstrations and exercises. It is also hoped that the definition of the concept of agile computing and a general architecture for agile computing will encourage other researchers to build new systems that adopt and advance the notions of agile computing.
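The agility experiment mentioned above — exploiting a resource available for as little as 10 seconds — rests on a simple scheduling judgement: a discovered transient resource is worth exploiting only if its remaining availability covers the task. The sketch below is a hypothetical illustration of that judgement; the field names and node identifiers are invented, and the real middleware's resource discovery (Group Manager) is far richer.

```python
def exploitable(resources, task_duration, now=0.0):
    """Return the discovered nodes whose remaining availability window is
    long enough to host a task of the given duration (seconds)."""
    return [r['node'] for r in resources
            if r['available_until'] - now >= task_duration]

# A node with 12 s of availability left can host a 10 s task; one with
# only 4 s left cannot, so it is skipped rather than risking a failure.
discovered = [
    {'node': 'uav-1', 'available_until': 12.0},
    {'node': 'radio-2', 'available_until': 4.0},
]
```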
