About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

A new perspective and a framework for software generation

De la Harpe, Margaretha 17 August 2012 (has links)
M.Sc. / The following questions led to this study: Why are there still so many approaches to the software generation process without one single approach taking the lead? Not only are there several methodologies available for the software generation process, but a methodology is not in use for long before it is replaced by an improved version or even another methodology. This is a result of continuing development and research. Sometimes the new methodology is not necessarily an improvement, but a paradigm shift. An example of this is object-orientation, which followed shortly after the introduction of CASE as an alternative to software generation. Why are users to a large extent still dissatisfied and disillusioned with the software generation process even though they are more involved with it than before? Users are more involved in the software generation process as a result of the availability of sophisticated tools, as well as joint sessions with the developer during the analysis and design stages of the software generation process. Yet, despite this, software systems in most cases still do not perform according to users' expectations. Why did the use of formal methodologies, based on successful techniques of the engineering field, result in only a limited improvement of the quality, control and operationalization of the software system? The cost of maintenance is still very high in relation to the total cost of generating a software system. The same degree of success attained in, say, the engineering field could not be achieved [AND1]. Why is there a simultaneous movement towards incremental approaches and formal methods although these approaches are really moving in opposite directions? The incremental approach is based on obtaining quick results through prototyping without necessarily following a formal methodology [AND2]. Formal methods, on the other hand, attempt to formalize the software generation process through mathematical transformations. The advantage of using these mathematical transformations is that automation and verification of processes can be achieved [McC1]. Both these approaches show promising results, but the incremental approach might suit the developer better and is already used widely by practitioners. Why is it so difficult to find the correct methodology for generating a software system? The selection of an appropriate methodology is extremely difficult because of the variety of methodologies, technologies and hardware available. Some methodologies are also used for only a limited period because of rapid advances in technology. Why do sophisticated and user-friendly tools not succeed in simplifying the software generation process? Sophisticated tools such as CASE guide the user through the different steps of the methodology, yet they have not succeeded in delivering the results expected by industry. The problems experienced during the software generation process are investigated. In order to distinguish between different approaches to software generation, it is necessary to place the different approaches in relation to one another by considering the different elements of each. The characteristics and constraints of the software generation process must also be considered. All the issues pertaining to the software generation process will be discussed in terms of the problem statement.
42

Exploiting persistence in CASE technology

Figueira, Ricardo January 1997 (has links)
Bibliography: pages 102-107. / A Design Workbench has been built for Napier88 [MBC+94] as part of the natural progression towards developing better product systems and improving software construction tools. The system includes a Metamodeller (enabling users to specify the data and process models they prefer), a Model Builder which supports multiple coexisting models and a Target System Generator. Experience using the Workbench has shown that it is easy to use, increases productivity, improves programming standards and facilitates code sharing. This thesis demonstrates the benefits of orthogonal persistence for Computer-Aided Software Engineering by describing an initial design environment and its subsequent extension to include support for multiple co-existing models.
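(Illustrative aside, not from the thesis: the sketch below, with invented names, approximates in Python the idea of persistence by reachability that orthogonal persistence builds on: any value reachable from a persistent root survives across runs without per-type save or load code.)

```python
# Illustrative sketch only -- this is neither Napier88 nor the thesis Workbench.
# It approximates persistence by reachability: everything reachable from the
# persistent root is stored, with no per-type save/load code in the model.
import shelve

class ProcessStep:
    """An arbitrary model object; note it contains no persistence logic."""
    def __init__(self, name, successors=None):
        self.name = name
        self.successors = successors or []

with shelve.open("workbench_store") as store:     # hypothetical store name
    root = store.get("design_model")              # persistent root, if present
    if root is None:                              # first run: build the graph
        review = ProcessStep("Review")
        root = ProcessStep("Design", successors=[review])
    # ...ordinary in-memory manipulation of the model graph goes here...
    store["design_model"] = root                  # whole reachable graph persists

print(root.name, "->", [s.name for s in root.successors])
```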
43

Effective and Appropriate Use of Controlled Experimentation in Software Development Research

Johnson, Mark Alan 28 October 1996 (has links)
Although there is a large body of research and publication on software development, only a limited amount of this work includes empirical demonstration of its effectiveness. Yet, it is this empirical understanding which will help move software development from a craft to an engineering discipline. Of the empirical methods for research, controlled experiments are the most commonly thought of in scientific studies, and yet the least used to study software development. This thesis begins with a brief review of the different empirical methods commonly used to study software development. This review provides a quick introduction to each empirical method, compares the main advantages and weaknesses of each method, and provides a context for how controlled experimentation compares to other empirical methods for studying software development. Using empirical methods to study software development is not easy or straightforward. There are limitations which appear to be inherent in the nature of software and issues due to the improper understanding or application of empirical methods. These limitations and issues are identified, specifically for controlled experiments, and approaches for dealing with them are proposed. A controlled experiment was designed and conducted to demonstrate the method and explore the limitations and issues for empirical research in software development. This experiment and its results are presented. This example experiment demonstrates that conducting even a simple experiment in software development is challenging. Lessons learned from this experience are reported. Controlled experiments require that the researcher have a high degree of control over the environment where the experiment is carried out. This can be costly and difficult to achieve. This thesis concludes by discussing how controlled experiments can be used effectively in studies of software development.
44

HOOD : a Higher-Order Object-Oriented Database model and its implementation

Brand, Michael Max January 1992 (has links)
Bibliography: pages 133-140. / There is no accepted standard for the object-oriented database paradigm at present, which has led to different definitions of features and conformance requirements. HOOD is a Higher-Order Object-Oriented Database system that defines a meta-data model for specifying the requirements of an Object-Oriented Database, providing uniformity and extensibility. From this specification and by making use of a comprehensive structure system, an exemplar or implementation model is defined. Among the constructs provided by the model are types, instances, objects, values, methods, base types, generic types and metatypes. The mechanisms of instantiation and subtyping allow for relationships between these constructs. Extensibility is provided in the model for types, base types, structures and methods. Uniformity is achieved by defining all constructs as instances and through the use of messages for all operations. There is only one form of object construct, which provides persistence and identities. The complex values and extensibility of the model allow it to adapt in order to model the real world instead of adapting the real world to fit the model. We have implemented a subset of the structures and values defined in the model, provided persistence and identities for objects, and included the various constructs mentioned above. The method language allows for the specification of methods, the passing of messages, and the use of complex values. The compiler performs type checking and resolution and generates instructions for an abstract machine which manipulates the database.
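(Illustrative aside, not from the thesis: the Python sketch below uses invented names to approximate the general idea of a higher-order model in which types are themselves objects, subtyping links types, and every operation is a message send.)

```python
# Hypothetical sketch of a higher-order object model: types are objects too,
# subtyping relates types, and all operations are expressed as message sends.

class DBObject:
    """Every construct -- values, instances, even types -- is an object."""
    def __init__(self, type_, state=None):
        self.type = type_              # the type this object is an instance of
        self.state = state or {}

    def send(self, message, *args):
        """Uniformity: every operation is a message, resolved via the type chain."""
        t = self.type
        while t is not None:
            method = t.state.get("methods", {}).get(message)
            if method:
                return method(self, *args)
            t = t.state.get("supertype")
        raise AttributeError(f"{message} not understood")

class DBType(DBObject):
    """Types are objects themselves, which is what makes the model higher-order."""
    def __init__(self, name, supertype=None, methods=None, metatype=None):
        super().__init__(metatype, {"name": name,
                                    "supertype": supertype,
                                    "methods": methods or {}})

    def instantiate(self, **state):
        return DBObject(self, state)

# A tiny type lattice with one method, exercised through message passing.
ANY = DBType("Any")
PERSON = DBType("Person", supertype=ANY,
                methods={"greet": lambda self: f"Hello, {self.state['name']}"})
alice = PERSON.instantiate(name="Alice")
print(alice.send("greet"))   # -> "Hello, Alice"
```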
45

An Estelle compiler

Van Dijk, Jacques January 1988 (has links)
The increasing development and use of computer networks has made it necessary to define international standards. Central to the standardization efforts is the concept of a Formal Description Technique (FDT), which is used to provide a definition medium for communication protocols and services. This document describes the design and implementation of one of the few existing compilers for one such FDT, the language "Estelle" ([ISO85], [ISO86], [ISO87]).
46

Utilising the Software Engineering Methods and Theory framework to critically evaluate software engineering practice in the South African banking industry

Le Roux, Alistair Graham 17 March 2016 (has links)
A research report submitted to the Faculty of Engineering and the Built Environment of the University of the Witwatersrand, Johannesburg, in partial fulfilment of the requirements for the Degree of Master of Science in Engineering, September 2015 / In recent years, software has become the cornerstone of banking, and new business products are directly dependent on software. Delivery cycles for new features are now related to market share. This drive to use software as a vehicle for competitive advantage has created an environment in which software development of new business systems is increasingly on the critical path of many projects. An organisation’s portfolio of software-intensive projects is situated within this complexity, and organisations attempt to mitigate the associated risks by implementing software development processes and practices. A key problem facing the modern bank is how to define and build a software development process that caters for both the traditional and the increasingly agile genres of software development in a consistent and manageable way. Banks attempt to address this problem through continuous methodology and process improvements. Comparing and assessing non-standardised software engineering lifecycle models without a common framework is a complex and subjective task. A standardised language is important for simplifying the task of developing new methods and practices or of analysing and documenting existing practices. The Software Engineering Methods and Theory (SEMAT) initiative has developed a standardised kernel of essential concepts, together with a language that describes the essence of software engineering. This kernel, called the Essence, has recently become an Object Management Group (OMG) standard. The Essence kernel, together with its language, can be used as the underpinning theory to analyse an existing method and help provide insights that can drive method enhancements. This research report proposes a simple, actionable analysis framework to assist organisations in assessing, reviewing and developing their software engineering methods. The core concepts of the methodology are identified and mapped to the Essence concepts. The governance model of the Essence is mapped to the governance model of the industry model, and a set of practices is identified and documented in the Essence language. The mapping and resulting analysis can be used to test the validity of the Essence theory in practice and to identify areas for improvement in both the method and the Essence standard. The analysis framework has been applied to an operational software development lifecycle of a large South African bank. A mapping of the Essence concepts to the governance model and method documented in the lifecycle was completed. This mapping revealed that the Essence is a valid tool and can be used to describe a method in practice. Furthermore, it is useful as an analysis framework to assess the governance model that manages and measures the progress of an endeavour in the Bank. The case study and resulting analysis demonstrate that the Essence standard can be used to analyse a methodology and identify areas for improvement. The analysis also identified areas for improvement in the Essence specification.
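(Illustrative aside, not from the research report: the toy Python sketch below, with invented artefact names, shows what a mapping from lifecycle artefacts to the seven Essence kernel alphas might look like, along with a check for alphas the lifecycle leaves uncovered.)

```python
# Hypothetical mapping from bank-lifecycle artefacts to the seven Essence
# kernel alphas. The artefact names on the left are invented; the research
# report documents its own mapping for the bank's actual lifecycle.

ESSENCE_ALPHAS = {
    "Opportunity", "Stakeholders", "Requirements",
    "Software System", "Work", "Team", "Way of Working",
}

# Hypothetical lifecycle artefacts -> the Essence alphas they evidence.
lifecycle_mapping = {
    "Business case":               ["Opportunity", "Stakeholders"],
    "Requirements specification":  ["Requirements"],
    "Release package":             ["Software System"],
    "Project schedule":            ["Work"],
    "RACI chart":                  ["Team"],
    "SDLC handbook":               ["Way of Working"],
}

def coverage_gaps(mapping):
    """Return kernel alphas not evidenced by any lifecycle artefact."""
    covered = {alpha for alphas in mapping.values() for alpha in alphas}
    return ESSENCE_ALPHAS - covered

print(coverage_gaps(lifecycle_mapping))   # -> set() when every alpha is covered
```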
47

Sustainable Software Development: Evolving Extreme Programming

Sedano, Todd 01 April 2017 (has links)
Context: Software development is a complex socio-technical endeavor that involves coordinating different disciplines and skill sets. Practitioners experiment with and adopt processes and practices with a goal of making their work more effective.
Objective: To observe, describe, and analyze software development processes and practices in an industrial setting. Our goal is to generate a descriptive theory of software engineering development, which is rooted in empirical data.
Method: Following Constructivist Grounded Theory, we conducted a 2.5 year participant-observation of eight software projects at Pivotal, a software development company. We interviewed 33 software engineers, interaction designers, and product managers, and analyzed one year of retrospection topics. We iterated between data collection, data analysis and theoretical sampling until achieving theoretical saturation and generating a descriptive theory.
Results: 1) This research introduces a descriptive theory of Sustainable Software Development. The theory encompasses principles, policies, and practices aiming at removing knowledge silos and improving code quality, hence leading to development sustainability. 2) At the heart of Sustainable Software Development is team code ownership. This research widens our understanding of team code ownership. Developers achieve higher team code ownership when they understand the system context, have contributed to the code in question, perceive code quality as high, believe the product will satisfy the user needs, and perceive high team cohesion. 3) This research introduces the first evidence-based waste taxonomy, identifying eight wastes along with causes and tensions, and compares it with Lean Software Development’s waste taxonomy.
Conclusion: The Sustainable Software Development theory refines and extends our understanding of Extreme Programming by adding principles, policies, and practices (including Overlapping Pair Rotation) and aligning them with the business goal of sustainability. One key aspect of the theory is team code ownership, which is rooted in numerous cognitive, emotional, contextual and technical factors and cannot be achieved simply by policy. Another key dimension is waste identification and elimination, which has led to a new taxonomy of waste. Overall, this research contributes to the field of software engineering by providing new insights, rooted in empirical data, into how a software organization leverages and extends Extreme Programming to achieve software sustainability.
48

Quality management challenges in iterative software product development of a selected software development organisation in Cape Town, South Africa

Chipunza, Enciliah January 2018 (has links)
Thesis (MTech (Business Information Systems))--Cape Peninsula University of Technology, 2018. / Many software organisations using an iterative software development approach employ practices that relate to quality management. However, the quality management process has been inadequate. Despite many research studies on quality management in iterative software product development, none have adequately addressed the challenges and mitigation techniques needed for a process that leads to a quality software product. The objective of this study was to determine factors that affect the quality management process in iterative software development. The research followed a qualitative approach, using a case study of the software organisation SasTech in Cape Town, South Africa. Twenty-two interviews were conducted across three roles actively involved in the software product development process: product management, quality assurance and software development. Themes were drawn from the results and tabulated. The duality of technology theory was used as a theoretical lens for the data analysis. Several factors were identified that influence the software quality management process. These include planning, documentation, process ownership, technologies, testing, timelines and management support. Through the proposed general framework, facilities (human resources and technologies), interpretive schemes (architecture) and norms (practices) of software quality management can be institutionalised, leading to adequate and effective quality management in iterative development for SasTech as well as for other organisations in the same industry.
49

Knowledge management and throughput optimization in large-scale software development

Andersson, Henrik January 2015 (has links)
Large-scale software development companies delivering market-driven products have to a large extent introduced agile methodologies as their way of working. Even though an agile way of working has many benefits, problems occur when scaling agile because of the increased complexity. One explicit problem area is evolving deep product knowledge: domain-specific knowledge that cannot be developed anywhere else but at the specific workplace. This research aims to identify impediments to developing domain-specific knowledge and to provide solutions for overcoming these challenges in order to optimize knowledge growth and throughput. The results show that impediments occur in four categories, based on a framework of knowledge-sharing drivers: people-related, task-related, structure-related and technology-related. The challenging element of knowledge growth is integrating the training into the feature development process without negatively affecting feature throughput. The research also shows that by increasing knowledge sharing, the competence level of the whole organization can be raised, which is beneficial from many perspectives, such as feature throughput and code quality.
50

Application and evaluation of methods for merging user experience design with agile software development

Eriksson Vikner, Mikael January 2016 (has links)
Cinnober is an organization that develops advanced software solutions for financial institutions. As a part of the technology toolkit used at Cinnober there is a web framework with which GUI development can be driven from the data available on the server, through configuration rather than development. Rather than having the user interface emerge as a result of technology and available data, they would like to explore a software development model driven by user centered design. Cinnober practices scrum, an agile software development framework, which has proven difficult to integrate with user centered design. This thesis strives to identify suitable methods for performing user centered design in the environment of agile software development. A development process based on scrum, lean UX, staggered sprints and the effect map was then utilized and evaluated in a short development project at Cinnober. Utilizing and evaluating those methods yielded valuable input which can be of use in future development efforts. While there was plenty of positive feedback from the development team there was also some room for improvement. Additionally, there are quite a few pieces missing in order for the utilized development process to cover all aspects considered important in one of the most commonly cited definitions of user centered design.
