1

A framework for exploiting emergent behaviour to capture 'best practice' within a programming domain

Mercer, Sarah Jane, January 2004
Inspection is a formalised process for reviewing an artefact in software engineering. It is proven to significantly reduce defects, to ensure that what is delivered is what is required, and that the finished product is effective and robust. Peer code review is a less formal inspection of code, normally classified as inadequate or substandard Inspection. Although it has an increased risk of not locating defects, it has been shown to improve the knowledge and programming skills of its participants. This thesis examines the process of peer code review, comparing it to Inspection, and attempts to describe how an informal code review can improve the knowledge and skills of its participants by deploying an agent-oriented approach. During a review the participants discuss defects, recommendations and solutions, or more generally their own experience. It is this instant adaptability to new information that gives the review process the ability to improve knowledge. This observed behaviour can be described as the emergent behaviour of the group of programmers during the review. The wider distribution of knowledge currently happens only through programmers attending other reviews. To maximise the benefits of peer code review, a mechanism is needed by which the findings from one team can be captured and propagated to other reviews and teams throughout an establishment. A prototype multi-agent system is developed with the aim of capturing the emergent properties of a team of programmers. As the interactions between the team members are unstructured and the information traded is dynamic, a distributed adaptive system is required to provide communication channels for the team and to provide a foundation for the knowledge shared. Software agents are capable of adaptivity and learning. Multi-agent systems are particularly effective when deployed within distributed architectures and are believed to be able to capture emergent behaviour. The prototype system illustrates that the learning mechanism within the software agents provides a solid foundation upon which the ability to detect defects can be learnt. It also demonstrates that the multi-agent approach is apposite for providing the free flow of ideas between programmers, not only to achieve the sharing of defects and solutions but also at a high enough level to capture social information. It is assumed that this social information is a measure of one element of the review process's emergent behaviour. The system is capable of monitoring the team-perceived abilities of programmers, those who are influential on the programming style of others, and the issues upon which programmers agree or disagree. If the disagreements are classified as unimportant or stylistic issues, can it not therefore be assumed that all agreements are concepts of "Best Practice"? The conclusion is reached that code review is not a substandard Inspection but is in fact complementary to the Inspection model: the latter improves the process of locating and identifying bugs, while the former improves the knowledge and skill of the programmers, and therefore the chance that bugs are not introduced in the first place. The prototype system demonstrates that it is possible to capture best practice from a review team and that agents are well suited to the task. The performance criteria of such a system have also been captured. The prototype system has also shown that a reliable level of learning can be attained for a real-world task.
The innovative way of concurrently deploying multiple agents which use different approaches to achieve the same goal shows remarkable robustness when learning from small example sets. The novel way in which autonomy is promoted within the agents' design but constrained within the agent community allows the system to provide a sufficiently flexible communications structure to capture emergent social behaviour, whilst ensuring that the agents remain committed to their own goals.
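The capture mechanism described above can be pictured with a minimal sketch: agents record the stances programmers voice in reviews, and near-unanimous agreements are promoted to candidate best practice. All names here (ReviewAgent, Finding, candidate_best_practice) are hypothetical illustrations, not the thesis's actual design:

    from collections import Counter
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Finding:
        issue: str    # e.g. "unchecked return value"
        stance: str   # "agree" or "disagree"

    class ReviewAgent:
        """One agent per programmer; records stances voiced in reviews."""
        def __init__(self, name):
            self.name = name
            self.findings = []

        def observe(self, issue, stance):
            self.findings.append(Finding(issue, stance))

    def candidate_best_practice(agents, threshold=0.8):
        """Promote issues on which (nearly) all recorded stances agree,
        mirroring the abstract's suggestion that agreements approximate
        'Best Practice' while disagreements are stylistic."""
        agree, total = Counter(), Counter()
        for agent in agents:
            for f in agent.findings:
                total[f.issue] += 1
                if f.stance == "agree":
                    agree[f.issue] += 1
        return [issue for issue in total
                if agree[issue] / total[issue] >= threshold]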
2

Scalable support for process-oriented programming

Ritson, Carl G., January 2013
Process-oriented programming is a method for applying a high degree of concurrency within software design while avoiding associated pitfalls such as deadlocks and race hazards. A process-oriented computer program contains multiple distinct software processes which execute concurrently. All interaction between processes, including information exchange, occurs via explicit communication and synchronisation mechanisms. The explicit nature of interaction in process-oriented programming underpins its ability to provide manageable concurrency. These interaction mechanisms represent both a potential overhead in the execution of process-oriented software and a point of mechanical sympathy with emerging multi-core computer architectures. This thesis details engineering to reduce the overheads associated with a process-oriented style of software design and to evaluate its mechanical sympathy. The first half of this thesis provides an in-depth review of facilities for concurrent programming and their support in programming languages. Common concurrent programming facilities are defined and their relationship to process-oriented design established. It contains an analysis of the significance of mechanical sympathy in programming languages and of trends in hardware and software design, and relates these to process-oriented programming. The latter part of this thesis describes techniques for the compilation and execution of process-oriented software on multi-core hardware so as to achieve the maximum utilisation of parallel computing resources with the minimum overhead from process-oriented interaction mechanisms. A new runtime kernel design for the occam-pi programming language is presented and evaluated. This design enables efficient cache-affine work-stealing scheduling of processes on multi-core hardware using wait-free and non-blocking algorithms. This is complemented by modern compilation techniques for occam-pi program code using machine-independent assembly to improve performance and portability, and methods for debugging the execution of process-oriented software using a virtual machine interpreter. Through application, these methods prove the mechanical sympathy and parallel execution potential of process-oriented software.
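The process-oriented style described above, with isolated processes interacting only over explicit channels, can be approximated in Python using threads and bounded queues. This is a sketch of the style only: occam-pi channels are synchronous rendezvous, which queue.Queue(maxsize=1) merely approximates, and the wait-free scheduling machinery of the thesis is well beyond it:

    import threading
    import queue

    def producer(out_ch):
        # All interaction happens over the explicit channel: no shared state.
        for i in range(5):
            out_ch.put(i * i)
        out_ch.put(None)  # end-of-stream sentinel

    def consumer(in_ch):
        while (item := in_ch.get()) is not None:
            print("received", item)

    ch = queue.Queue(maxsize=1)  # bounded channel, roughly rendezvous-like
    threads = [threading.Thread(target=producer, args=(ch,)),
               threading.Thread(target=consumer, args=(ch,))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()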
3

Integral sliding mode fault tolerant control schemes with control allocation

Hamayun, Mirza Tariq, January 2013
The key attribute of a Fault Tolerant Control (FTC) system is to maintain overall system stability and acceptable performance in the face of faults and failures within the system. In this thesis new integral sliding mode (ISM) control allocation schemes for FTC are proposed, which have the potential to maintain the nominal fault-free performance for the entire system response, in the face of actuator faults and even complete failures of certain actuators. The incorporation of ISM within a control allocation framework uses the measured or estimated values of the actuator effectiveness levels to redistribute the control effort among the healthy actuators to maintain closed-loop stability. This combination allows one controller to be used in fault-free as well as in fault or failure situations. A fault tolerant control allocation scheme which relies on an a posteriori approach, building on an existing state feedback controller designed using only the primary actuators, is also proposed. Retro-fitting of an ISM scheme to an existing feedback controller is advantageous from an industrial perspective, because fault tolerance can be introduced without changing the existing control loops. To deal with a wider range of operating conditions, the fault tolerant features of ISM are also extended to linear parameter varying systems. An FTC scheme considering only the availability of measured system outputs is also proposed, where now the feedback controller design is based on the estimated states. In each of the ISM fault tolerant schemes proposed, a rigorous closed-loop analysis is carried out to ensure the stability of the sliding motion in the face of faults or failures. A high-fidelity benchmark model of a large transport aircraft is used to demonstrate the efficacy of the new FTC schemes.
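One standard control-allocation construction consistent with this description (a sketch only, not necessarily the thesis's exact formulation) redistributes a virtual control $\nu(t)$ through the effectiveness levels $W = \mathrm{diag}(w_1,\dots,w_m)$, with $w_i = 1$ for a healthy actuator, $0 < w_i < 1$ for a fault and $w_i = 0$ for a complete failure:

\[
u(t) = W B^{\top}\!\left( B W B^{\top} \right)^{-1} \nu(t)
\quad\Longrightarrow\quad
B\,u(t) = \nu(t),
\]

so the demanded virtual control still reaches the plant whenever $B W B^{\top}$ remains invertible, i.e. as long as enough actuators stay healthy.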
4

Design patterns to support the migration between event-triggered and time-triggered software architectures

Lakhani, Farha Naz, January 2013
There are two main architectures used to develop software for modern embedded systems: these can be labelled as “event-triggered” (ET) and “time-triggered” (TT). This thesis is concerned with the issues involved in migration between these two architectures. Although TT architectures are widely used in safety-critical applications (for example, in aerospace and medical systems), they are less familiar to developers of mainstream embedded systems. The work in this thesis began from the premise that – for a broad class of systems that have been implemented using an ET architecture – migration to a TT architecture would improve reliability. It may be tempting to assume that conversion between ET and TT designs will simply involve converting all event-handling software routines into periodic activities. However, the required changes to the software architecture are, in many cases, rather more profound. The main contribution of the work presented in this thesis is to identify ways in which the significant effort involved in migrating between existing ET architectures and “equivalent” (and effective) TT architectures could be reduced. The research has taken an innovative step in this regard by introducing, for the first time, the use of ‘design patterns’ for this purpose. This thesis describes the development, experimental testing and preliminary assessment of a novel set of design patterns. The thesis goes on to evaluate the effectiveness of some of the key patterns in the development of some representative systems. The pattern evaluation process involved both controlled laboratory experiments on real-time applications and comprehensive feedback from experts in industry. The results presented in this thesis suggest that pattern-based approaches have the potential to simplify the migration process between ET and TT architectures. The thesis concludes by presenting suggestions for future work in this important area.
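The naive conversion the abstract cautions against is easy to picture: an interrupt-driven event handler becomes a periodic task run by a time-triggered cyclic executive. The sketch below uses hypothetical names (read_button, handle_press) and Python for brevity (real TT designs are typically C on bare metal), and illustrates only the surface transformation, not the deeper architectural changes the thesis addresses:

    import time

    def read_button():
        return False  # stub for a hardware input read

    def handle_press():
        print("button pressed")

    def poll_button():
        """Was an interrupt handler in the ET design; now a periodic,
        run-to-completion task with no preemption."""
        if read_button():
            handle_press()

    TASKS = [
        (poll_button, 10),  # (task, period in ticks)
    ]

    def cyclic_executive(n_ticks, tick_ms=1):
        for tick in range(n_ticks):
            for task, period in TASKS:
                if tick % period == 0:
                    task()  # co-operative: tasks must be short
            time.sleep(tick_ms / 1000)

    cyclic_executive(100)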
5

Modularising change management in dynamic language aspect-oriented programming frameworks to reduce fragility

Waters, Robert William, January 2012
Aspect-oriented programming (AOP) is a way of specifying crosscutting concerns (program features) as aspects - a modularisation of the concern and how it crosscuts the rest of a program. In most AOP facilities, there is an implicit assumption that the functional (non-crosscutting) concerns that these crosscutting concerns interact with (the base) do not change. This assumption turns out to be incorrect in the case of script-based dynamic programming languages. This type of language experiences change throughout the execution of a program. Aspects specify, in selectors, which program points (join points) in the base they are bound to. When these join points experience change, there is a risk of an aspect under- or over-matching join points - termed fragility. Aspects may need to be transformed, loaded, unloaded and otherwise dynamically adapt to changes in join point presence. The state of the art provides various ways of addressing the problems of fragility, though these are neither integrated nor applicable in their current form to dynamic languages. To overcome these problems, this thesis proposes an integrated solution using two novel modularisations and three supporting features that address these problems via change management. Adaption plans are structured, trigger-based modules containing internal modules representing choices with associated rules. These choices specify the acceptable triggers, and the rules are a structured, typed, parameterised tree specifying a response. Delegation points modularise change management concerns that crosscut selectors. The supporting features are reflection support, metadata, and change notification. Reflection allows the state of a program to be reasoned about and changed at runtime, supporting change management decisions. Metadata is an established concept for reducing fragility by reducing dependency on the precise structure of a program. Change notification provides a unified mechanism for identifying change, something that reduces the complexity of change management in dynamic languages.
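Fragility is easy to reproduce in a dynamic language. In the toy illustration below (not the thesis's mechanism), a selector binds advice to every method whose name matches a pattern at weaving time; a method added afterwards is silently under-matched, exactly the kind of change that change notification is meant to catch:

    import fnmatch

    def weave(cls, pattern, advice):
        """Bind `advice` before every *current* method matching `pattern`."""
        for name in list(vars(cls)):
            fn = getattr(cls, name)
            if callable(fn) and fnmatch.fnmatch(name, pattern):
                def wrapped(self, *args, _fn=fn, _name=name, **kwargs):
                    advice(_name)  # run the advice at the join point
                    return _fn(self, *args, **kwargs)
                setattr(cls, name, wrapped)

    class Account:
        def deposit(self, amount):
            return amount

    weave(Account, "*", lambda name: print("join point:", name))
    Account.withdraw = lambda self, amount: -amount  # added after weaving

    Account().deposit(5)   # advice fires
    Account().withdraw(5)  # under-matched: no advice -- fragility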
6

An empirical assessment of model driven development in industry

Hutchinson, John Edward, January 2011
Model driven development (MDD) is one of a number of proposals for software development that promises a number of important benefits. As with any "new" approach, it would be expected that there would be proponents of the approach and those who are opposed to it. MDD has been surprisingly contentious, though, perhaps because it challenges the code-centric model of software development (and therefore challenges the natural approach of those who develop software). But it remains the case that stories abound about significant successes resulting from using MDD in industry, while at the same time detractors claim that MDD is inherently wrong - it is an abstraction too far: the cost of raising the level of abstraction from code to models can only ever result in an increase of costs or effort that can never be recovered. This thesis reports on work that has attempted to uncover the truth in this area in a way that has never been applied on a large scale to MDD-based software development in industry. By going to industry practitioners, via a widely completed questionnaire and a number of in-depth interviews, it reports on what real software developers are doing in real companies. The results should lay to rest the belief that "MDD doesn't work" - apparently, it does. Companies around the world are using MDD in a variety of settings and are reporting significant benefits from its use. However, there is a subtle balance of potentially positive and potentially negative impacts of MDD use, which successful users in industry prove able to manage, so that they are able to take advantage of the benefits rather than being dogged by the negatives.
7

Object-Oriented Specification: Analysable Patterns & Change Management

Heaven, William John Douglas, January 2008
Formal techniques have been shown to be useful in the development of correct software, but the level of expertise required of practitioners of these techniques prohibits their widespread adoption. Attempts to integrate formal specification techniques with modern, often agile, software development practices are becoming more successful. However, these new techniques do not yet have development environments that facilitate the construction of consistent specifications for the non-expert developer. Many of the tools that support the analysis of specifications expressed in these languages give misleading feedback in cases where the specification is inconsistent. Further, logical changes made to a specification typically invalidate the results of previous analyses. This thesis is therefore concerned with the development of an environment to facilitate the construction of correct specifications. Analysis patterns are identified that guide a non-expert specifier through some of the logical pitfalls of analysing a program specification. A change management framework for program specifications is described, which minimises the number of SAT calls needed to recheck the consistency of an edited specification. A lightweight program specification language, called Loy, is defined, which can be automatically analysed by the Alloy Analyzer through a formal encoding of Loy into Alloy. A prototype tool is presented that automates the encoding and implements the analysis patterns and change management framework in the context of Loy specifications.
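The change-management idea (avoid re-running the solver when an edit cannot affect consistency) can be sketched as memoisation keyed on the logically relevant content of a specification. This is an illustration only, with a toy stand-in for the solver; the thesis's framework for Loy and Alloy is more sophisticated:

    import hashlib

    class ConsistencyChecker:
        def __init__(self, solver):
            self.solver = solver  # callable: constraint list -> bool
            self.cache = {}
            self.calls = 0

        def is_consistent(self, constraints):
            # Key on the sorted constraint set, not the spec text, so that
            # reordering and cosmetic edits never cost a solver call.
            key = hashlib.sha256(
                "\n".join(sorted(constraints)).encode()).hexdigest()
            if key not in self.cache:
                self.calls += 1
                self.cache[key] = self.solver(constraints)
            return self.cache[key]

    # Toy "solver": a spec is inconsistent if it asserts both p and !p.
    def toy_solver(constraints):
        return not any(("!" + c) in constraints for c in constraints)

    checker = ConsistencyChecker(toy_solver)
    print(checker.is_consistent(["p", "q"]))   # True  (1st solver call)
    print(checker.is_consistent(["q", "p"]))   # True  (cache hit, no call)
    print(checker.is_consistent(["p", "!p"]))  # False (2nd solver call)
    print(checker.calls)                       # 2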
8

A pattern-based approach to changing software requirements in brown-field business contexts

Brier, John, January 2011
In organisations, competitive advantage is increasingly reliant on the alignment of socio-technical systems with business processes. 'Socio-technical' refers to the complex systems of people, tasks and technology. Supporting this alignment is made harder by the speed of technological change and its relationship with organisation growth. This complexity is further aggravated in a number of ways. Organisations and/or parts of organisations are structured differently and have different approaches to change. These differences impact on their responsiveness to change, their use of technology, and its relationship to business processes. In requirements engineering, a lack of understanding of the organisational context in which change takes place has been a problem over the last decade. Eliciting requirements is complex, with requirements changing constantly. Delivered change is affected by further changing needs, as stakeholders identify new ways of using IT. Changing requirements can lead to mismatches between tasks, technology and people, and relations and their alignment can be compromised. We contribute to understanding this complex domain by presenting an approach which engages with stakeholders/users in the early stages of the requirements elicitation process. The two expressions of the approach are derived from the literature and 19 real-world studies. They are referred to as the Conceptual Framework and the Change Frame. Both support a problem-centred focus on context analysis when reasoning about changing technology in business processes. The framework provides structures, techniques, notation and terminology. These represent, describe, and analyse the context in which change takes place, in the present and over time. The Change Frame combines an extension of the framework with an organisation pattern. It facilitates representing, describing and analysing change across the strategic/operational area of an organisation. A known pattern of solution is provided for the recurring change problem of representing an organisation-wide change in different organisation locations. Chapter 4 shows the conceptual framework in the context of a real-world study, and chapter 6 uses a real-world use-case scenario to illustrate the change frame. Both chapters show support for understanding change through client/customer and stakeholder/user reasoning about the implications of change.
9

Model driven software modernisation

Chen, Feng, January 2007
Constant innovation in information technology and ever-changing market requirements relegate more and more existing software to legacy status. Generating software through reusing legacy systems has been a primary solution, and software re-engineering has the potential to improve software productivity and quality across the entire software life cycle. Classical re-engineering technology starts at the level of program source code, which is the most (or only) reliable information on a legacy system. The program specification derived from legacy source code then facilitates the migration of legacy systems in the subsequent forward engineering steps. A recent research trend in the re-engineering area carries this idea further and moves to a model driven perspective, in which the specification is presented as models. The thesis focuses on engaging model technology to modernise legacy systems. A unified approach, REMOST (Re-Engineering through MOdel conStruction and Transformation), is proposed in the context of Model Driven Architecture (MDA). The theoretical foundation is the construction of a WSL-based Modelling Language, known as WML, which is an extension of WSL (Wide Spectrum Language). WML is defined to provide a spectrum of models for system re-engineering, including a Common Modelling Language (CML), an Architecture Description Language (ADL) and a Domain Specific Modelling Language (DSML). MetaWML is designed for model transformation, providing query facilities, action primitives and metrics functions. A set of transformation rules is defined in MetaWML to conduct system abstraction and refactoring. Model transformation for unifying WML and UML is also provided, which can bridge legacy systems to MDA. The architecture and working flow of the REMOST approach are proposed and a prototype tool environment is developed for testing the approach. A number of case studies are used to experiment with the approach and the prototype tool, and these show that the proposed approach is feasible and promising in its domain. Conclusions are drawn based on this analysis, and further research directions are also discussed.
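The flavour of rule-based transformation for abstraction can be conveyed with a toy bottom-up term rewriter. This is purely illustrative Python, not WML or MetaWML syntax, and the model kinds ("loop", "assign", "bulk_update") are invented for the example:

    # A model element is a (kind, children) pair; rules map a matched
    # element to a more abstract one, applied bottom-up over the tree.
    def rewrite(node, rules):
        kind, children = node
        node = (kind, [rewrite(child, rules) for child in children])
        for matches, replace in rules:
            if matches(node):
                node = replace(node)
        return node

    # Abstraction rule: collapse a loop of assignments into a single
    # abstract "bulk_update" element.
    rules = [(
        lambda n: n[0] == "loop" and n[1]
                  and all(c[0] == "assign" for c in n[1]),
        lambda n: ("bulk_update", []),
    )]

    model = ("proc", [("loop", [("assign", []), ("assign", [])])])
    print(rewrite(model, rules))  # ('proc', [('bulk_update', [])])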
10

The theory and practice of refinement-after-hiding

Burton, Jonathan, January 2004
In software or hardware development, we take an abstract view of a process or system - i.e. a specification - and proceed to render it in a more implementable form. The relationship between an implementation and its specification is characterised in the context of formal verification using a notion called refinement: this notion provides a correctness condition which must be met before we can say that a particular implementation is correct with respect to a particular specification. For a notion of refinement to be useful, it should reflect the ways in which we might want to make concrete our abstract specification. In process algebras, such as those used in [28,50,63], the notion that a process Q implements or refines a process P is based on the idea that Q is more deterministic than P: this means that every behaviour of the implementation must be possible for the specification. Consider the case that we build a (specification) network from a set of (specification) component processes, where communications or interactions between these processes are hidden. The abstract behaviour which constitutes these communications or interactions may be implemented using a particular protocol, replication of communication channels to mask possible faults or perhaps even parallel access to data structures to increase performance. These concrete behaviours will be hidden in the construction of the final implementation network and so the correctness of the final network may be considered using standard notions of refinement. However, we cannot directly verify the correctness of component processes in the general case, precisely because we may have done more than simply increase determinism in the move from specification to implementation component. Standard (process algebraic) refinement does not, therefore, fully reflect the ways in which we may wish to move from the abstract to the concrete at the level of such components. This has implications both in terms of the state explosion problem and also in terms of verifying in isolation the correctness of a component which may be used in a number of different contexts. We therefore introduce a more powerful notion of refinement, which we shall call refinement-after-hiding: this gives us the power to approach verification compositionally even though the behaviours of an implementation component may not be contained in those of the corresponding specification, provided that the (parts of the) behaviours which are different will be hidden in the construction of the final network. We explore both the theory and practice of this new notion and also present a means for its automatic verification. Finally, we use the notion of refinement-after-hiding, along with the means of verification, to verify the correctness of an important algorithm for asynchronous communication. The nature of the verification and the results achieved are completely new and quite significant.
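In trace terms the contrast can be stated roughly as follows (a simplified reading, not the thesis's full definition). Standard refinement demands containment of behaviours:

\[
P \sqsubseteq Q \iff \mathit{traces}(Q) \subseteq \mathit{traces}(P),
\]

whereas refinement-after-hiding, with $H$ the set of events to be hidden in the final network and $t \upharpoonright A$ the projection of trace $t$ onto the events in $A$, demands containment only after the hidden events are erased:

\[
P \sqsubseteq_{H} Q \iff
\{\, t \upharpoonright (\Sigma \setminus H) \mid t \in \mathit{traces}(Q) \,\}
\subseteq
\{\, t \upharpoonright (\Sigma \setminus H) \mid t \in \mathit{traces}(P) \,\}.
\]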
