41

A pattern-based approach to changing software requirements in brown-field business contexts

Brier, John January 2011
In organisations, competitive advantage is increasingly reliant on the alignment of socio-technical systems with business processes. 'Socio-technical' refers to the complex systems of people, tasks and technology. Supporting this alignment is made harder by the speed of technological change and its relationship with organisational growth. This complexity is further aggravated in a number of ways. Organisations and/or parts of organisations are structured differently and have different approaches to change. These differences affect their responsiveness to change, their use of technology, and its relationship to business processes. In requirements engineering, a lack of understanding of the organisational context in which change takes place has been a problem over the last decade. Eliciting requirements is complex, with requirements changing constantly. Delivered change is affected by further changing needs, as stakeholders identify new ways of using IT. Changing requirements can lead to mismatches between tasks, technology and people, and the alignment between them can be compromised. We contribute to understanding this complex domain by presenting an approach which engages with stakeholders/users in the early stages of the requirements elicitation process. The two expressions of the approach are derived from the literature and 19 real-world studies. They are referred to as the Conceptual Framework and the Change Frame. Both support a problem-centred focus on context analysis when reasoning about changing technology in business processes. The framework provides structures, techniques, notation and terminology. These represent, describe and analyse the context in which change takes place, in the present and over time. The Change Frame combines an extension of the framework with an organisation pattern. It facilitates representing, describing and analysing change across the strategic/operational area of an organisation.
A known solution pattern is provided for the recurring problem of representing an organisation-wide change in different organisation locations. Chapter 4 shows the conceptual framework in the context of a real-world study, and chapter 6 uses a real-world use-case scenario to illustrate the change frame. Both chapters show support for understanding change through client/customer and stakeholder/user reasoning about the implications of change.
42

Model driven software modernisation

Chen, Feng January 2007
Constant innovation of information technology and ever-changing market requirements relegate more and more existing software to legacy status. Generating software by reusing legacy systems has been a primary solution, and software re-engineering has the potential to improve software productivity and quality across the entire software life cycle. Classical re-engineering technology starts at the level of program source code, which is the most (or only) reliable information on a legacy system. The program specification derived from legacy source code then facilitates the migration of legacy systems in the subsequent forward-engineering steps. A recent research trend in the re-engineering area carries this idea further and moves to a model-driven perspective in which the specification is presented as models. The thesis focuses on engaging model technology to modernise legacy systems. A unified approach, REMOST (Re-Engineering through MOdel conStruction and Transformation), is proposed in the context of Model Driven Architecture (MDA). The theoretical foundation is the construction of a WSL-based Modelling Language, known as WML, which is an extension of WSL (Wide Spectrum Language). WML is defined to provide a spectrum of models for system re-engineering, including a Common Modelling Language (CML), an Architecture Description Language (ADL) and a Domain Specific Modelling Language (DSML). 9rtetaWML is designed for model transformation, providing query facilities, action primitives and metrics functions. A set of transformation rules is defined in 9rtetaWML to conduct system abstraction and refactoring. Model transformation for unifying WML and UML is also provided, which can bridge legacy systems to MDA. The architecture and workflow of the REMOST approach are proposed and a prototype tool environment is developed for testing the approach.
A number of case studies are used to experiment with the approach and the prototype tool; these show that the proposed approach is feasible and promising in its domain. Conclusions are drawn from this analysis, and further research directions are discussed.
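As a rough flavour of the rule-based model transformation the abstract describes, the sketch below applies one toy abstraction rule to a term-structured program model. It is not WML or the thesis's actual rule set; the term shapes and the rule are invented for illustration:

```python
# Illustrative sketch only: a toy term-rewriting transformation in the spirit
# of rule-based model abstraction. Terms are tuples:
#   ('seq', a, b)  -- sequential composition
#   ('assign', var, expr)
#   ('skip',)      -- the no-op statement
def simplify(term):
    """Recursively apply one abstraction rule: 'skip' is removed from sequences."""
    if isinstance(term, tuple) and term[0] == 'seq':
        a, b = simplify(term[1]), simplify(term[2])
        if a == ('skip',):
            return b
        if b == ('skip',):
            return a
        return ('seq', a, b)
    return term

prog = ('seq', ('assign', 'x', '1'), ('seq', ('skip',), ('assign', 'y', 'x')))
print(simplify(prog))  # ('seq', ('assign', 'x', '1'), ('assign', 'y', 'x'))
```

Real transformation rules of this kind are applied exhaustively or strategically over a much richer metamodel, but the recursive match-and-rewrite shape is the same.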
43

The aggregating algorithm and regression

Busuttil, Steven January 2008
Our main interest is in the problem of making predictions in the online mode of learning, where at every step in time a signal arrives and a prediction must be made before the corresponding outcome arrives. Loss is suffered if the prediction and outcome do not match perfectly. In the prediction with expert advice framework, this protocol is augmented by a pool of experts that produce their predictions before we have to make ours. The Aggregating Algorithm (AA) is a technique that optimally merges these experts so that the resulting strategy suffers a cumulative loss that is almost as good as that of the best expert in the pool. The AA was applied to the problem of regression, where outcomes are continuous real numbers, to get the AA for Regression (AAR) and its kernel version, KAAR. On typical datasets, KAAR's empirical performance is not as good as that of Kernel Ridge Regression (KRR), which is a popular regression method. KAAR performs better than KRR only when the data is corrupted with lots of noise or contains severe outliers. To alleviate this we introduce methods that are a hybrid between KRR and KAAR. Empirical experiments suggest that, in general, these new methods perform as well as or better than both KRR and KAAR. In the second part of this dissertation we deal with a more difficult problem: we allow the dependence of outcomes on signals to change with time. To handle this we propose two new methods: WeCKAAR and KAARCh. WeCKAAR is a simple modification of one of our methods from the first part of the dissertation to include decaying weights. KAARCh is an application of the AA to the case where the experts are all the predictors that can change with time. We show that KAARCh suffers a cumulative loss that is almost as good as that of any expert that does not change very rapidly. Empirical results on data with changing dependencies demonstrate that WeCKAAR and KAARCh perform well in practice and are considerably better than Kernel Ridge Regression.
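KAAR's robustness to noise has a compact computational reading: one standard way to obtain its prediction is to run kernel ridge regression on the training set augmented with the new signal paired with a pseudo-outcome of zero, which shrinks the forecast toward zero. A minimal NumPy sketch, with an illustrative kernel and toy data not taken from the dissertation:

```python
import numpy as np

def poly_kernel(u, v):
    # illustrative kernel choice; the method itself is kernel-agnostic
    return (u @ v + 1.0) ** 2

def krr_predict(X, y, x_new, a=1.0, kernel=poly_kernel):
    # Kernel Ridge Regression prediction: y^T (K + aI)^{-1} k
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])
    k = np.array([kernel(xi, x_new) for xi in X])
    return y @ np.linalg.solve(K + a * np.eye(len(X)), k)

def kaar_predict(X, y, x_new, a=1.0, kernel=poly_kernel):
    # KAAR realised as KRR on the data augmented with (x_new, 0):
    # the extra zero-outcome point shrinks the prediction toward 0
    X_aug = np.vstack([X, x_new])
    y_aug = np.append(y, 0.0)
    return krr_predict(X_aug, y_aug, x_new, a, kernel)

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 2.0])
x_new = np.array([1.5])
krr = krr_predict(X, y, x_new)
kaar = kaar_predict(X, y, x_new)  # never exceeds |krr| in magnitude
```

The extra regularisation is exactly what makes KAAR lose to KRR on clean data but win under heavy noise or severe outliers, as the abstract notes.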
44

Interactive evolutionary computing in early lifecycle software engineering design

Simons, Christopher Lloyd January 2011
Design is fundamental to software development. Indeed, early lifecycle software engineering design is crucial and has a significant impact on subsequent development activities. Inferior designs can result in deleterious down-stream consequences. Therefore, improving the traceability, structural integrity and elegance of software design has significant potential for enhancing software development productivity. However, early lifecycle software design is a demanding and non-trivial task for software engineers to perform, and current computational tool support for software engineers is limited. Thus, to address this limitation, this thesis investigates the potential of interactive evolutionary search and complementary computational intelligence to enable the exploration and discovery of useful and interesting software designs relating to the design problem at hand. To enable evolutionary search and exploration of possible design solutions, a novel, discrete, object-based representation of both design problem and design solution is proposed. Associated genetic operators, including self-adapting mutation, are also proposed. Experiments show that this novel representation enables highly effective search and exploration of the software design solution space. Next, software agents are introduced to facilitate an interactive framework for natural collaborative designer/computer interaction. Empirical investigations reveal that colourful visualisation of software designs engages the designer. Furthermore, with enhanced generation of multiple candidate designs, opportunities for periods of designer reflection are presented, thus enabling sudden design discovery. Design elegance is an important but complex factor in software design. Four novel quantitative elegance measures are proposed which enhance the interactive design experience by selecting elegant software designs for designer evaluation.
Using designer elegance evaluation as reward, reward-based machine learning is exploited to steer a dynamic, multi-objective search according to the designer's elegance intentions. Designer interactivity is further enhanced by a dynamic, fitness-proportionate interactive interval, which judiciously varies the number of evolutionary generations between interactions to promote search and exploration and further reduce user fatigue. The integration of interactive, dynamic evolutionary search with software agents and reward-based learning is found to produce an engaging, compelling interactive experience for software designers, successfully enabling the search, exploration and discovery of fruitful, interesting and useful early lifecycle software designs.
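The self-adapting mutation mentioned above can be illustrated in miniature. The sketch below is not the thesis's operator (which acts on object-based software design representations); it shows the general log-normal self-adaptation idea on a toy numeric fitness, with all parameters invented:

```python
import random, math

# Illustrative sketch only: self-adaptive mutation in a (1+1) evolution
# strategy. Each candidate carries its own mutation strength, and that
# strength is itself mutated before being used.
def evolve(fitness, x0, sigma0=1.0, generations=300, seed=0):
    rng = random.Random(seed)
    x, sigma = x0, sigma0
    tau = 1.0 / math.sqrt(2.0)  # learning rate for the step-size mutation
    for _ in range(generations):
        # log-normal self-adaptation: the step size mutates first...
        child_sigma = sigma * math.exp(tau * rng.gauss(0, 1))
        # ...and the mutated step size drives the mutation of the solution
        child_x = x + child_sigma * rng.gauss(0, 1)
        if fitness(child_x) <= fitness(x):  # elitist: keep the better one
            x, sigma = child_x, child_sigma
    return x

# minimise a toy quadratic with optimum at 3.0
best = evolve(lambda v: (v - 3.0) ** 2, x0=0.0)
```

Because successful step sizes survive along with successful solutions, the mutation strength adapts to the search landscape without being tuned by hand, which is the appeal of self-adapting operators in design search.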
45

Human error in the design of a safety-critical system

Shryane, Nick January 2003
No description available.
46

Strategies in qualitative research methods in the evolution of software development processes

Gittins, Robert Godfrey January 2004
No description available.
47

Functional programming and erratic non-determinism

Pitcher, C. S. January 2001
No description available.
48

Modeling security requirements for context aware system using UML

Almutairi, Saad January 2013
Modeling in general is "an abstract representation of a specification, design or system from a particular point of view". System modeling is "a technique to express, visualise, analyse and transform the architecture of a system". The Unified Modeling Language (UML) is "a language for specifying, visualising, constructing, and documenting the artefacts of a software-intensive system as well as for business modeling and other non-software systems". UML consists of different types of diagrams, such as the Use Case diagram, Activity diagram, State diagram and Class diagram. Each of these diagram types concerns a different aspect of the system development process. Context-Aware Systems (CASs) are primarily associated with Pervasive/Ubiquitous Computing, which has become most prominent since the advent of smart phones and the inclusion of mobility features in computing devices. CASs can sense different aspects of their environment and use the dynamic Context Information (CI) to adapt their behaviour accordingly. Hence, various categories of CI, such as User context, Physical context, Computer context and Time context, play a major role in controlling CAS behaviour and functions. Security is considered one of the major challenges in CASs, especially because such systems often gather sensitive user information; this information may compromise the security of the system if disclosed to unauthorised users. Thus, the design of a CAS must consider system security as a major requirement. Although security is traditionally considered a non-functional requirement and is deferred to a later stage of the system development lifecycle, this thesis insists that security must be considered as early as possible because of its high importance. This is also in line with the "secure by design" concept.
Therefore, in this thesis the UML Use Case, Activity and State diagrams will be enhanced to enable them to model a CAS and capture its security requirements at the earliest possible stage of the software development process. The contribution to knowledge that this thesis makes is at least threefold, as outlined below:
• Enhancing Use Case diagram notations to express dynamic CAS functional behaviour by showing the influences of CI changes. These extended notations are then used to capture the CAS security requirements.
• Enhancing Activity diagram notations in order to demonstrate and clarify the extended Use Case diagram by developing general diagram elements for CASs. This helps to show the data flow during the execution of a CAS function, and then present the security requirements.
• Enhancing State diagram notations to depict the dynamism and security of a CAS at this level too, and ultimately to support the enhancements to the Use Case and Activity diagrams.
These extended UML diagrams are evaluated by applying them to a real-world case study to show their practical applicability. The case study concerns an infostation-based mobile learning environment. This Mobile Learning (M-learning) environment is deployed across a university boundary and provides a variety of services, such as 'download lecture' and 'do exam', to mobile users. In conclusion, this research proposes and demonstrates an applicable approach to capture and model security requirements for CASs using innovative extensions of existing types of UML diagrams: Use Case, Activity and State.
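To make the role of the CI categories concrete, the following sketch (all names, rules and thresholds invented; the thesis itself works at the UML diagram level, not in code) shows a context-aware guard on the case study's 'download lecture' service, combining User, Physical, Computer and Time context into one access decision:

```python
from dataclasses import dataclass
import datetime

# Illustrative sketch only: the four CI categories from the abstract,
# modelled as fields of a context object.
@dataclass
class Context:
    user_role: str        # User context
    location: str         # Physical context
    device_trusted: bool  # Computer context
    time: datetime.time   # Time context

def may_download_lecture(ctx: Context) -> bool:
    """All four CI categories must permit the CAS function to run."""
    on_campus = ctx.location == "campus"
    in_hours = datetime.time(8) <= ctx.time <= datetime.time(20)
    return (ctx.user_role in {"student", "lecturer"}
            and on_campus and ctx.device_trusted and in_hours)

ok = may_download_lecture(Context("student", "campus", True, datetime.time(10, 30)))
print(ok)  # True
```

A change in any one CI category (for example, the device leaving campus) flips the decision, which is the dynamic behaviour the extended diagram notations are designed to express.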
49

Architecting tacit information in conceptual data models for requirements process improvement

Williams, Gbolahan January 2013
Despite extensive work in the field of Requirements Engineering, ineffective requirements remain a major antecedent to the failure of projects. Requirements Engineering (RE) refers to the body of methods associated with elucidating the needs of a client when considering the development of a new system or product. In the literature, challenges in RE have mainly been attributed to insufficient client input, incomplete requirements, evolving requirements and a lack of understanding of the domain. Accordingly, this has raised the need for methods of effectively eliciting, analysing and recording requirements. In the literature, promising methods have been proposed for using ethnography to improve elicitation, because of its strong qualitative and quantitative qualities in understanding human activities. There has also been success with the use of Model Driven Engineering techniques for analysing, recording and communicating requirements through Conceptual Data Models (CDMs), which provide a shared understanding of the domain of a system. However, little work has attempted to integrate these two areas, either from an empirical or a theoretical perspective. In this thesis, we investigate how ethnographic research methods contribute to a method for data analysis in RE. Specifically, we consider the proposition that a CDM based on explicit and implicit information derived from ethnographic elicitation will lead to design solutions that more closely match the expectations of clients. As a result of our investigation, this thesis presents the following key contributions: (i) the introduction of an ethnographic approach to RE for elicitation and verification; (ii) a rich CDM metamodel and modelling language necessary for defining and recording ethnographic analyses based on implicit and explicit information; (iii) a method for mapping CDMs to high-level architectural abstractions called ecologies.
To complement this work, an evaluation case study demonstrating a real-world application is provided.
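As a rough illustration of what recording explicit versus implicit information in a conceptual data model might look like, the sketch below uses invented names and domain facts; it is not the thesis's metamodel:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: CDM elements that carry a provenance tag saying
# whether each fact was stated explicitly by the client or inferred
# implicitly from ethnographic observation.
@dataclass
class Entity:
    name: str
    provenance: str                 # "explicit" or "implicit"
    attributes: List[str] = field(default_factory=list)

@dataclass
class Relationship:
    source: Entity
    target: Entity
    label: str
    provenance: str

nurse = Entity("Nurse", "explicit", ["shift"])
handover = Entity("Handover", "implicit")   # observed, never stated by the client
rel = Relationship(nurse, handover, "participates in", "implicit")
```

Tagging provenance in the model keeps implicit, observation-derived facts distinguishable from client statements, so they can be verified with stakeholders rather than silently assumed.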
50

Performance requirements verification during software systems development, using UML model transformation approach

Al Abdullatif, Abdullatif M. January 2011
Requirements verification refers to the assurance that the implemented system reflects the specified requirements. Requirements verification is a process that continues throughout the life cycle of the software system. When the software crisis hit in the 1960s, a great deal of attention was placed on the verification of functional requirements, which were considered to be of crucial importance. Over the last decade, researchers have addressed the importance of integrating non-functional requirements into the verification process. An important non-functional requirement for software is performance. Performance requirement verification is known as Software Performance Evaluation. This thesis looks at the performance evaluation of software systems. The performance evaluation of software systems is a hugely valuable task, especially in the early stages of software project development. Many methods for integrating performance analysis into the software development process have been proposed. These methodologies work by transforming the architectural models known in the software engineering field into performance models, which can be analysed to obtain the expected performance characteristics of the projected system. This thesis aims to bridge the knowledge gap between the performance and software engineering domains by introducing semi-automated transformation methodologies. These are designed to be generic so that they can be integrated into any software engineering development process. The goal of these methodologies is to provide performance-related design guidance during system development. This thesis introduces two model transformation methodologies: the improved state-marking methodology and the UML-EQN methodology. It also introduces the UML-JMT tool, which was built to realise the UML-EQN methodology.
With the help of the automatic design-model-to-performance-model algorithms introduced in the UML-EQN methodology, a software engineer with only basic knowledge of the performance modelling paradigm can conduct a performance study on a software system design. This was demonstrated in a qualitative study in which the methodology, and the tool deploying it, were tested by software engineers with varying backgrounds and levels of experience, from different sectors of the software development industry. The study results showed acceptance of this methodology and the UML-JMT tool. As performance verification is a part of any software engineering methodology, frameworks are needed that deploy performance requirements validation in the context of software engineering. The agile development paradigm was the result of changes in the overall environment of the IT and business worlds. These techniques are based on iterative development, where requirements, designs and developed programmes evolve continually. At present, the majority of the literature discussing the role of requirements engineering in agile development processes seems to indicate that non-functional requirements verification is uncharted territory. CPASA (Continuous Performance Assessment of Software Architecture) was designed to work in software projects where performance can be affected by changes in the requirements, and it matches the main practices of agile modelling and development. The UML-JMT tool was designed to deploy the CPASA performance evaluation tests.
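To indicate the kind of output a queueing-network analysis produces, the sketch below computes the standard M/M/1 metrics for a single service station, the simplest building block of the extended queueing networks such methodologies target (the rates are invented; the thesis's EQN models are far richer):

```python
# Illustrative sketch only: closed-form M/M/1 performance metrics.
def mm1_metrics(arrival_rate, service_rate):
    """Utilisation, mean response time and mean population of an M/M/1 queue."""
    assert arrival_rate < service_rate, "queue must be stable (lambda < mu)"
    rho = arrival_rate / service_rate              # utilisation
    response = 1.0 / (service_rate - arrival_rate) # mean response time W
    population = rho / (1.0 - rho)                 # mean jobs in system L
    return rho, response, population

# e.g. 8 requests/s arriving at a server that handles 10 requests/s
rho, resp, pop = mm1_metrics(arrival_rate=8.0, service_rate=10.0)
# rho = 0.8, mean response time = 0.5 s, mean jobs in system ~ 4.0
```

Numbers like these, derived automatically from a design model, are exactly the performance-related design guidance the transformation methodologies aim to feed back to the engineer.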
