131

Reasoning about goal-plan trees in autonomous agents: development of Petri net and constraint-based approaches with resulting performance comparisons

Shaw, Patricia H. January 2010 (has links)
Multi-agent systems and autonomous agents are becoming increasingly important in current computing technology. In many applications, agents are asked to achieve multiple goals, individually or within teams where the distribution of these goals may be negotiated among the agents. Agents are expected to work towards all of their currently adopted goals concurrently. In doing so, however, the goals can interact both constructively and destructively with each other, so a rational agent must be able to reason about these interactions and about any other constraints imposed on it, such as the limited availability of resources, which could prevent it from achieving all adopted goals when pursuing them concurrently. Current agent development languages require the developer to identify and handle these circumstances manually. In this thesis, we develop two approaches for reasoning about the interactions between the goals of an individual agent. The first employs Petri nets to represent and reason about the goals, while the second uses constraint satisfaction techniques to find efficient ways of achieving them. Three types of reasoning are incorporated into these models: reasoning about consumable resources whose availability is limited; the constructive interaction of goals, whereby a single plan can be used to achieve multiple goals; and the interleaving of steps for achieving different goals, which could cause one or more goals to fail. Experimental evaluation of the two approaches under various circumstances highlights the benefits of the reasoning developed here while also identifying areas where one approach outperforms the other. These results can then be used to suggest which underlying reasoning technique an agent should employ, based on the goals it has been assigned.
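To make the resource reasoning concrete, the following is a minimal sketch (not the thesis's implementation; the GoalPlan structure and resource names are illustrative assumptions) of how an agent might check whether its adopted goals can all be pursued concurrently within a limited budget of consumable resources:

```python
# Illustrative sketch only: sums the consumable-resource demand of a set of
# goal-plan trees and compares it against what is available before the agent
# commits to pursuing all goals concurrently.
from dataclasses import dataclass, field

@dataclass
class GoalPlan:
    name: str
    consumes: dict = field(default_factory=dict)   # resource -> amount consumed
    subgoals: list = field(default_factory=list)    # child goal-plan nodes

def total_demand(goal: GoalPlan) -> dict:
    """Aggregate consumable-resource demand over a goal-plan tree."""
    demand = dict(goal.consumes)
    for sub in goal.subgoals:
        for res, amt in total_demand(sub).items():
            demand[res] = demand.get(res, 0) + amt
    return demand

def feasible(goals: list, available: dict) -> bool:
    """Can all adopted goals be pursued concurrently within resource limits?"""
    combined: dict = {}
    for g in goals:
        for res, amt in total_demand(g).items():
            combined[res] = combined.get(res, 0) + amt
    return all(available.get(res, 0) >= amt for res, amt in combined.items())

# Example: two goals competing for a shared fuel budget.
g1 = GoalPlan("deliver_A", consumes={"fuel": 6})
g2 = GoalPlan("deliver_B", consumes={"fuel": 7})
print(feasible([g1, g2], {"fuel": 10}))   # False: 13 units needed, 10 available
```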
132

Verification of pointer-based programs with partial information

Luo, Chenguang January 2011 (has links)
The proliferation of software across all aspects of people's lives means that software failure can bring catastrophic results. It is therefore highly desirable to be able to develop software that is verified to meet its expected specification. This has also been identified as a key objective in one of the UK Grand Challenges (GC6) (Jones et al., 2006; Woodcock, 2006). However, many difficult problems remain in achieving this objective, partly due to the wide use of (recursive) shared mutable data structures, which are hard to track statically in a precise and concise way. This thesis aims at building a verification system for both memory safety and functional correctness of programs manipulating pointer-based data structures, covering two scenarios where only partial information about the program is available: the verifier may be supplied with only a partial program specification, or with a full specification but only part of the program code. For the first scenario, previous state-of-the-art works (Nguyen et al., 2007; Chin et al., 2007; Nguyen and Chin, 2008; Chin et al., 2010) generally require users to provide full specifications for each method of the program to be verified. Such an approach demands considerable intellectual effort from users, who are moreover liable to make mistakes when writing the specifications. This thesis proposes a new approach to program verification that allows users to provide only partial specifications for methods. Our approach then refines the given annotations into more complete specifications by discovering missing constraints. The discovered constraints may involve both numerical and multiset properties that can later be confirmed or revised by users. We further augment the approach by requiring partial specifications only for the primary methods of a program; specifications for loops and auxiliary methods can then be systematically discovered by our augmented mechanism, with the help of information propagated from the primary methods. This work aims at verifying beyond shape properties, with the eventual goal of analysing both memory safety and functional properties of pointer-based data structures. Initial experiments have confirmed that we can automatically refine partial specifications with non-trivial constraints, making it easier for users to handle specifications with richer properties. For the second scenario, many programs contain invocations of unknown components, so only part of the program code is available to the verifier. As previous works generally require the whole of the program code to be present, we target the verification of memory safety and functional correctness of programs manipulating pointer-based data structures whose code is only partially available due to invocations of unknown components. Provided with a Hoare-style specification {Pre} prog {Post}, where the program prog contains calls to some unknown procedure unknown, we infer a specification mspec_u for the unknown part from the calling contexts, such that the problem of verifying prog can be safely reduced to the problem of proving that the unknown procedure (once its code is available) meets the derived specification mspec_u. The expected specification mspec_u is calculated automatically using an abduction-based shape analysis specifically designed for a combined abstract domain. We have implemented a system to validate the viability of our approach, with encouraging experimental results.
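The reduction at the heart of the second scenario can be pictured schematically as follows (the notation is ours, not the thesis's exact proof rules; C1 and C2 stand for the known code surrounding a single unknown call):

```latex
% Schematic rendering of the reduction (our notation, not the thesis's exact
% proof rules). C1 and C2 are the known code fragments around the unknown call.
\[
\frac{\{\mathit{Pre}\}\; C_1 \;\{P_u\} \qquad \{Q_u\}\; C_2 \;\{\mathit{Post}\}}
     {\{\mathit{Pre}\}\; C_1;\ \mathtt{unknown}();\ C_2 \;\{\mathit{Post}\}}
\quad\text{provided } \mathtt{unknown} \text{ meets } \mathit{mspec}_u = (\{P_u\},\ \{Q_u\})
\]
% P_u is computed by forward (shape) analysis of C1 from Pre; Q_u is abduced so
% that the remaining obligation for C2 against Post can be discharged.
```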
133

Techniques for scheduling time-triggered resource-constrained embedded systems

Gendy, Ayman Khalifa Ghaly January 2009 (has links)
It is often argued that time-triggered (TT) architectures are the most suitable basis for safety-related applications, as their use tends to result in highly predictable system behaviour. This predictability is increased when TT architectures are coupled with the use of co-operative (or "non-pre-emptive") task sets. Despite many attractive properties, such "time-triggered co-operative" (TTC) and related "time-triggered hybrid" (TTH) architectures rarely receive much attention in the research literature. One important reason for this is that these designs are seen to be "fragile": that is, small changes to the task set may require revisions to the whole schedule, and such revisions are seen as challenging and time-consuming. To tackle this problem, two novel algorithms (TTSA1 and TTSA2), which help to automate the process of scheduler selection and configuration, are introduced. While searching for a workable schedule, both algorithms try to ensure that all task constraints are met, that a co-operative scheduler is used whenever possible, and that power consumption is kept as low as possible. The effectiveness of these algorithms is tested by means of empirical trials. Both TTSA1 and TTSA2, like most scheduling algorithms in the literature, rely on knowledge of each task's worst-case execution time (WCET). Unfortunately, determining the WCET of a task is rarely straightforward. Even when accurate WCET estimates are available at design time, variation in a task's execution time, between its best-case execution time (BCET) and its WCET, may still affect the system's predictability and/or violate task constraints. In an effort to address this problem, a set of code-balancing techniques is introduced. An empirical study demonstrates that these techniques help to reduce the variation in task execution time, and hence increase system predictability, with a lower power-consumption overhead than alternative solutions.
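For readers unfamiliar with TTC designs, the following is a minimal sketch of a time-triggered co-operative dispatcher (illustrative only; it is not TTSA1 or TTSA2, and a real embedded implementation would be driven by a timer interrupt rather than sleep):

```python
# Illustrative TTC dispatcher: tasks run to completion (no pre-emption) in a
# fixed order whenever their period elapses, giving a static, predictable schedule.
import time

class Task:
    def __init__(self, func, period_ticks, offset_ticks=0):
        self.func = func
        self.period = period_ticks
        self.offset = offset_ticks

def ttc_dispatch(tasks, tick_ms, n_ticks):
    for tick in range(n_ticks):
        start = time.monotonic()
        for t in tasks:                       # fixed order: the schedule is static
            if tick >= t.offset and (tick - t.offset) % t.period == 0:
                t.func()                      # co-operative: runs to completion
        # sleep out the remainder of the tick (a real system uses a timer ISR)
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, tick_ms / 1000.0 - elapsed))

ttc_dispatch([Task(lambda: print("sample"), period_ticks=2),
              Task(lambda: print("log"), period_ticks=5, offset_ticks=1)],
             tick_ms=10, n_ticks=10)
```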
134

Context-aware automatic service selection

Yu, Hong Qing January 2009 (has links)
Service-Oriented Architecture (SOA) is a paradigm for developing next-generation distributed systems. SOA makes it possible to build dynamically configurable distributed systems by invoking suitable services at runtime, which makes the systems more flexible to integrate and easier to reuse. With the number of offered services growing fast, automatically identifying suitable services becomes a crucial issue. A new and interesting research direction is to select a service that is not only suitable in general but also suited to a particular requester's needs and service context at runtime. This dissertation proposes an approach for supporting automatic context-aware service selection and composition in a dynamic environment. The main challenges are: (1) specifying context information in a machine-usable form; (2) developing a service selection method that can choose adequate services using the context information; and (3) introducing context-awareness into the service composition process. To address these challenges, we employ Semantic Web technology to model context information and service capabilities, so that service selection criteria can be generated automatically at runtime. Meanwhile, a Type-based Logic Scoring Preference Extended (TLE) service selection method is developed to adequately and dynamically evaluate and aggregate the context-aware criteria. In addition, we introduce the notion of composition context and a Backward Composition Context based Service Selection algorithm (BCCbSS) for composing suitable services on the fly in a fault-tolerant manner. Furthermore, this dissertation describes the design and implementation of the method and the algorithm. Experimental evaluation results demonstrate that the TLE method and the BCCbSS algorithm provide an efficient and scalable solution to the context-aware service selection problem in both single-service selection and composition scenarios. Our research results are a further step towards highly automated and dynamically adaptive systems.
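The TLE method extends the Logic Scoring of Preference (LSP) technique; the sketch below shows only the standard LSP weighted-power-mean aggregation that underlies it, with illustrative criterion values and weights (the type-based extensions of the thesis are omitted):

```python
# Standard LSP aggregation: a weighted power mean whose exponent r tunes the
# degree of simultaneity required among the criteria.

def lsp_aggregate(scores, weights, r):
    """Weighted power mean: r > 1 leans toward 'or' (disjunction),
    r < 1 toward 'and' (conjunction); r = 1 is the neutral arithmetic mean."""
    assert abs(sum(weights) - 1.0) < 1e-9
    if r == 0:  # limit case: weighted geometric mean
        prod = 1.0
        for e, w in zip(scores, weights):
            prod *= e ** w
        return prod
    return sum(w * e ** r for e, w in zip(scores, weights)) ** (1.0 / r)

# Example: three context-derived criteria (e.g. latency, cost, reputation),
# each normalised to [0, 1], aggregated with mild simultaneity (r = 0.5).
print(lsp_aggregate([0.9, 0.6, 0.8], [0.5, 0.3, 0.2], r=0.5))
```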
135

Modelling business conversations in service component architectures

Abreu, João Pedro Abril de January 2010 (has links)
Service-oriented computing (SOC) is a new paradigm for creating and providing business services via computer-based systems. In SOC, services are computational entities that can be published together with a description of business functionality, discovered automatically and used by independent organizations to compose and provide new services. Although several technologies are being introduced with the goal of supporting SOC, the paradigm lacks theories and techniques that enable the development of reliable systems. SENSORIA is a research project that addresses these aspects by developing mathematically-based methods for engineering service-oriented systems. Within this project, the SENSORIA Reference Modelling Language (SRML) is being developed to support the design of services at a level of abstraction that captures business functionality independently of specific technologies. In this thesis, we provide a semantics for the fragment of SRML that supports the design of composite services from a functional point of view. The main goal of this research is to give system designers the means to design new services by integrating existing services, while making sure that the resulting system provides the intended business functionality - what is called correctness of composition. In order to address this goal, we define a mathematical model of computation for service-oriented systems based on the typical business conversations that occur between the constituents of such systems. We then define the semantics of the SRML language over this model and base it on a set of specification patterns that capture common service behaviour. We show that the formality of the language can be exploited with practical gains, by proposing a methodology for model-checking the correctness of service compositions. Our results indicate that a formal approach to service design based on the conversational nature of business interactions can promote the development of functionally correct services. Furthermore, this approach can optimize the development of service-oriented systems by allowing conceptual errors to be identified and corrected before the systems are built.
136

Architectural support for socio-technical systems

El-Hassan, Osama E. S. January 2009 (has links)
Software development paradigms are increasingly stretching their scope from the core technical implementation of required functionalities to include the processes and people who interact with the implemented systems. Socio-technical systems reflect this trend, as they incorporate the interactions and processes of their social participants, treating them not as users but as integral players who enact well-defined roles. However, developers of these systems struggle with their complexity and weak architectural support. The challenge is that existing toolboxes for modelling and implementing complex software systems do not take into account interactions that are not causal but only biddable (i.e. whose execution cannot be ensured by software). Therefore, models and implementations generated by these toolboxes cannot detect and respond to situations in which the system participants deviate from prescribed behaviour and fail to play the roles they have been assigned as entities of the system. The research focus is on how a norm-based architectural framework can promote the externalisation of the social dimension that arises in software-intensive systems exhibiting interactions between social components (i.e. people or groups of people) and technical components (devices, computer-based systems and so on) that are critical for the domain in which they operate. This includes building normative models for evolvable and adaptable socio-technical systems that target such interactions in a way that ensures that the required global properties emerge. The proposed architectural framework is based on a new class of architectural connectors (social laws) that provide mechanisms through which the biddability of human interactions can be taken into account, and through which the sub-ideal situations that result from the violation of organisational norms can be modelled and acted upon by self-adaptation of the socio-technical system. The framework is equipped with a new method underpinned by a coherent body of concepts and supported by a graph-based formalism, in which roles give the structural semantics of a configuration while the laws are given operational semantics by graph transformation rules. Guiding methodological steps are provided to support the identification of critical social interactions and the implementation of the proposed method. Case studies drive the evaluation of the approach, demonstrating its generality, applicability, flexibility and maintainability.
137

Adaptive mutation operators for evolutionary algorithms

Korejo, Imtiaz Ali January 2012 (has links)
Evolutionary algorithms (EAs) are a class of stochastic search and optimization algorithms inspired by principles of natural and biological evolution. Although EAs have been found to be extremely useful in finding solutions to practically intractable problems, they suffer from issues like premature convergence, getting stuck in local optima, and poor stability. Recently, researchers have been considering adaptive EAs to address these problems. The core of adaptive EAs is to automatically adjust genetic operators and relevant parameters in order to speed up convergence as well as maintain population diversity. In this thesis, we investigate adaptive EAs for optimization problems. We study adaptive mutation operators at both the population level and the gene level for genetic algorithms (GAs), a major sub-class of EAs, and investigate their performance on a number of benchmark optimization problems. An enhancement to standard mutation in GAs, called directed mutation (DM), is also investigated. The idea is to obtain statistical information about the fitness of individuals and their distribution within certain regions of the search space, and to use this information to move individuals within the search space via DM. Experimental results show that the DM scheme improves the performance of GAs on various benchmark problems. Furthermore, a multi-population approach with adaptive mutation is proposed to enhance the performance of GAs on multi-modal optimization problems. The main idea is to maintain multiple populations on different peaks in order to locate multiple optima. For each sub-population, an adaptive mutation scheme is used to avoid premature convergence as well as to accelerate the GA toward promising areas of the search space. Experimental results show that the proposed multi-population approach with adaptive mutation is effective in helping GAs locate multiple optima for multi-modal optimization problems.
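As a minimal illustration of population-level adaptive mutation (our sketch, not the thesis's exact schemes), the mutation rate below is raised as population diversity falls, countering premature convergence:

```python
# Illustrative adaptive mutation for a binary-encoded GA: the mutation
# probability is scaled inversely with population diversity.
import random

def diversity(pop):
    """Mean pairwise Hamming distance, normalised to [0, 1]."""
    n, length = len(pop), len(pop[0])
    total = sum(sum(a != b for a, b in zip(x, y))
                for i, x in enumerate(pop) for y in pop[i + 1:])
    return total / (n * (n - 1) / 2 * length)

def adaptive_mutate(pop, p_min=0.001, p_max=0.1):
    d = diversity(pop)
    p_m = p_min + (1.0 - d) * (p_max - p_min)   # low diversity -> high rate
    return [[1 - g if random.random() < p_m else g for g in ind]
            for ind in pop]

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(10)]
print(diversity(pop), diversity(adaptive_mutate(pop)))
```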
138

Compositional verification of model-level refactorings based on graph transformations

Bisztray, Dénes András January 2010 (has links)
With the success of model-driven development as well as component-based and service-oriented systems, models of software architecture are key artifacts in the development process. To adapt to changing requirements and improve internal software quality, such models have to evolve while preserving aspects of their behaviour. These behaviour-preserving developments are known as refactorings. The verification of behaviour preservation requires formal semantics, which can be defined by model transformation, e.g. using process algebras as the semantic domain for architectural models. Denotational semantics of programming languages are by definition compositional. In order to enjoy a similar property in the case of model transformations, every component of the source model should be distinguishable in the target model, and the mapping must be compatible with syntactic and semantic composition. To avoid the costly verification of refactoring steps on large systems, and to create reusable refactoring patterns, we present a general method based on compositional typed graph transformations. This method allows us to extract a (usually much smaller) rule from the transformation performed, verify this rule instead, and use it as a refactoring pattern in other scenarios. The main result of the thesis shows that the verification of rules is indeed sufficient to guarantee the desired semantic relation between source and target models. A formal definition of compositionality for mappings from software models, represented as typed graphs, to semantic domains is proposed. In order to guarantee compositionality, a syntactic criterion is established for the implementation of the mappings by typed graph transformations with negative application conditions. We apply the approach to the refactoring of architectural models based on UML component, structure, and activity diagrams, with CSP as the semantic domain.
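The compositionality property can be stated schematically as follows (our notation, not the thesis's formal definition):

```latex
% Schematic statement of compositionality (our notation). sem maps software
% models (typed graphs) to a semantic domain such as CSP.
\[
\mathit{sem}(m_1 \oplus m_2) \;=\; \mathit{sem}(m_1) \,\oplus'\, \mathit{sem}(m_2)
\]
% \oplus composes models syntactically (e.g. gluing typed graphs along a shared
% interface); \oplus' is the corresponding composition in the semantic domain
% (e.g. parallel composition in CSP).
```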
139

Flicker and unsteadiness compensation for archived film sequences

Forbin, Guillaume January 2009 (has links)
No description available.
140

On the development of a stochastic optimisation algorithm with capabilities for distributed computing

Yang, Siyu January 2009 (has links)
In this thesis, we devise a new stochastic optimisation method, the cascade optimisation algorithm, by incorporating concepts from Markov processes while eliminating the inherently sequential structure that is the major obstacle to exploiting advances in distributed computing infrastructures. The method introduces partitions and pools that store intermediate solutions and their corresponding objective values. A Markov process grows the population held in the partitions and pools, and the population is periodically redistributed according to an external criterion. With the use of partitions and pools, multiple Markov processes can be launched simultaneously over different partitions and pools, which makes the cascade optimisation algorithm well suited to parallel and distributed computing environments. In addition, the method has the potential to integrate knowledge acquisition techniques (e.g. data mining and ontologies) to achieve effective knowledge-based decision making. Several features of the algorithm are extracted and studied in this thesis. The application problems involve both small-scale and large-scale optimisation problems. Comparisons with other stochastic optimisation methods show that the cascade optimisation algorithm converges to optimal solutions that agree with those found by other methods, and does so more quickly. The cascade optimisation algorithm is also studied in parallel and distributed computing environments in terms of the reduction in computation time achieved.
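A heavily simplified reconstruction of the partition/pool structure (our sketch under stated assumptions, not the thesis's algorithm) might look as follows; the point is that each partition's Markov chain advances independently between redistributions, so the chains can run on separate workers:

```python
# Illustrative sketch: each partition holds a pool of candidate solutions
# advanced by an independent Markov-chain step; pools are periodically merged
# and reseeded from the best solutions found so far.
import random

def markov_step(x, f, temp=0.5):
    """One Metropolis-style move: occasionally accept a worse solution."""
    cand = x + random.gauss(0, 0.5)
    accept = f(cand) < f(x) or random.random() < temp / (1.0 + f(cand) - f(x))
    return cand if accept else x

def advance(x, f, steps):
    for _ in range(steps):
        x = markov_step(x, f)
    return x

def cascade(f, n_partitions=4, pool_size=5, epochs=20, steps=10):
    pools = [[random.uniform(-10, 10) for _ in range(pool_size)]
             for _ in range(n_partitions)]
    for _ in range(epochs):
        # Each partition's chain is independent of the others here: this loop
        # is the part that could be farmed out to distributed workers.
        pools = [[advance(x, f, steps) for x in pool] for pool in pools]
        # Periodic redistribution: pool the best solutions and reseed from them.
        best = sorted((x for pool in pools for x in pool), key=f)[:pool_size]
        pools = [[random.choice(best) + random.gauss(0, 0.1)
                  for _ in range(pool_size)] for _ in range(n_partitions)]
    return min((x for pool in pools for x in pool), key=f)

print(cascade(lambda x: (x - 3.0) ** 2))   # should return a value near 3
```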
