
A framework for the classification and detection of design defects and software quality assurance

Allanqawi, Khaled Kh. S. Kh January 2015
In the software development lifecycles of today's heterogeneous environments, a pitfall businesses face is that software defect tracking, measurement and quality assurance do not start early enough in the development process. The cost of fixing a defect in a production environment is much higher than in the initial phases of the Software Development Life Cycle (SDLC), which is particularly true for Service Oriented Architecture (SOA). The aim of this study is therefore to develop a new framework for defect tracking and detection and for quality estimation in the early stages of the SDLC, particularly the design stage. Part of the objectives of this work is to conceptualize, borrow and customize from known frameworks, such as object-oriented programming, to build a solid framework that uses automated rule-based intelligent mechanisms to detect and classify defects in SOA software designs. The Design Defects and Software Quality Assurance (DESQA) framework blends various design defect metrics and quality measurement approaches and provides measurements for both defect and quality factors. Unlike existing frameworks, it incorporates mechanisms for converting defect metrics into software quality measurements. The framework is evaluated using a research tool, a sample of designs used to complete the Design Defects Measuring Matrix, and a data collection process. In addition, an evaluation using a case study demonstrates the use of the framework on a number of designs and produces an overall picture of their defects and quality. The implementation part demonstrated how the framework can predict the quality level of the designed software. The results showed that a good level of quality estimation can be achieved based on the number of design attributes, the number of quality attributes and the number of SOA Design Defects. The assessment shows that metrics provide guidelines indicating the progress a software system has made and the quality of its design. Using these guidelines, developers can build more usable and maintainable software systems to meet the demand for efficient software applications. Another valuable finding of this study is that developers try to preserve backwards compatibility when they introduce new functionality, performing the necessary breaking changes to those newly introduced elements only in future versions; in this way they give their clients time to adapt their systems, and they gain more time to assess the quality of their software before releasing it. Further improvements in this research include the investigation of additional design attributes and SOA Design Defects, which can be computed by extending the tests performed.
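
The abstract does not give DESQA's conversion formula; as a hedged illustration of the general idea of turning defect metrics into a quality measurement, the Python sketch below computes a normalised quality score from weighted defect counts. The defect categories and weights are invented for illustration, not taken from the thesis.

    # Hypothetical sketch: mapping design-defect counts to a quality score.
    # Defect categories and weights are illustrative, not DESQA's own.
    DEFECT_WEIGHTS = {
        "god_object": 3.0,        # severe structural defect
        "cyclic_dependency": 2.0,
        "chatty_service": 1.5,    # an SOA-specific design defect
        "unused_interface": 1.0,
    }

    def quality_score(defect_counts: dict, design_size: int) -> float:
        """Return a score in [0, 1]; 1.0 means no weighted defects.
        design_size (e.g. number of design elements) normalises the penalty."""
        penalty = sum(DEFECT_WEIGHTS.get(kind, 1.0) * count
                      for kind, count in defect_counts.items())
        return max(0.0, 1.0 - penalty / max(design_size, 1))

    print(quality_score({"god_object": 1, "chatty_service": 2}, design_size=40))
    # -> 0.85 for this invented example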

Automating quantitative information flow

Heusser, Jonathan January 2011
Unprecedented quantities of personal and business data are collected, stored, shared, and processed by countless institutions all over the world. Prominent examples include sharing personal data on social networking sites, storing credit card details in every store, tracking customer preferences in supermarket chains, and storing key personal data on biometric passports. Confidentiality issues naturally arise from this global data growth. There are continual reports of private data leaking from confidential sources, with implications ranging from embarrassment to serious personal privacy and business damage. This dissertation addresses the problem of automatically quantifying the amount of information leaked by programs. It presents multiple program analysis techniques of differing degrees of automation and scalability. The contributions of this thesis are twofold: a theoretical result and two different methods for inferring and checking quantitative information flows are presented. The theoretical result relates the amount of possible leakage under any probability distribution back to the order relation in Landauer and Redmond’s lattice of partitions [35]. The practical results are split into two analyses: the first precisely infers the information leakage using SAT solving and model counting; the second defines quantitative policies which are reduced to checking a k-safety problem. A novel feature allows reasoning independent of the secret space. The presented tools are applied to real, existing leakage vulnerabilities in operating system code. This has to be understood and weighed within the context of the information flow literature, which suffers from an apparent lack of practical examples and applications. This thesis studies such “real leaks”, which could influence future strategies for finding information leaks.
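
For deterministic programs, a standard result in the quantitative information flow literature is that leakage measured as channel capacity equals log2 of the number of distinct observable outputs, i.e. the number of blocks in the partition the program induces on the secret space. A brute-force Python sketch of this counting idea, standing in for the SAT-based model counting the thesis automates:

    import math

    def max_leakage(program, secrets):
        """Worst-case leakage (bits) of a deterministic program: log2 of
        the number of distinct outputs, i.e. of blocks in the partition
        the program induces on the secret space."""
        return math.log2(len({program(s) for s in secrets}))

    # A password check reveals one bit (match / no match)...
    check = lambda secret: secret == 0x2A
    print(max_leakage(check, range(256)))       # 1.0

    # ...while returning the low nibble leaks 4 bits of an 8-bit secret.
    low_nibble = lambda secret: secret & 0x0F
    print(max_leakage(low_nibble, range(256)))  # 4.0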

Software based solutions for mobile positioning

Hamani, Sadek January 2013
This thesis is concerned with the development of pure software-based solutions for cellular positioning. The proposed self-positioning solutions rely solely on the available network infrastructure and do not require additional hardware or any modifications to the cellular network. The main advantage of using received signal strength (RSS) rather than timing measurements is that it removes the need for synchronisation between base stations. By exploiting the availability of RSS observations, the self-positioning methods presented in this thesis have been implemented as mobile software applications and tested in real-world positioning experiments. The well-known Extended Kalman Filter can be used as a static positioning process while modelling the uncertainty in signal strength observations. Range estimation is performed using an empirical propagation model calibrated with RSS measurements from the same trial areas where the positioning process is applied. To overcome the need for a priori maps of the GSM network, a novel cellular positioning method is proposed in this thesis. It is based on the concept of Simultaneous Localisation And Mapping (SLAM), which represents one of the greatest successes of autonomous navigation research. By merging target localisation and the mapping of unknown base stations into a single problem, Cellular SLAM allows a mobile phone to build a map of its environment and concurrently use this map to determine its position.
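
The thesis's calibrated propagation model is not reproduced here, but RSS-based ranging conventionally uses the log-distance path-loss model RSS = A - 10*n*log10(d), inverted to estimate range; a minimal sketch with assumed parameter values, not the thesis's calibrated GSM values:

    import math

    # Log-distance path-loss model: rss = A - 10*n*log10(d), where A is
    # the RSS at a 1 m reference distance and n the path-loss exponent.
    # The values below are illustrative assumptions.
    A = -40.0   # dBm at 1 m
    N = 3.0     # path-loss exponent

    def rss_to_range(rss_dbm: float) -> float:
        """Invert the propagation model to estimate range in metres."""
        return 10 ** ((A - rss_dbm) / (10 * N))

    def range_to_rss(d_m: float) -> float:
        return A - 10 * N * math.log10(d_m)

    print(round(rss_to_range(-70.0), 1))  # 10.0 m under these parameters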

Automated composition of sequence diagrams

Alwanain, Mohammed Ibrahim January 2016
Software design is a significant stage in the software development life cycle, as it creates a blueprint for the implementation of the software. Design errors lead to costly and deficient implementations, so it is crucial to provide means of discovering design errors at an early stage of system development and resolving them. Inspired by various engineering disciplines, the software community proposed the concept of modelling in order to reduce these costly errors. Modelling provides a platform for creating an abstract representation of software systems, and has led to the birth of various modelling languages such as the Unified Modelling Language (UML), automata, and Petri nets. Because modelling raises the level of abstraction throughout the analysis and design process, it enables system designers to identify errors efficiently. As modern systems become more complex, models are often produced part by part to help reduce the complexity of the design. This often results in partial specifications captured in models focusing on a subset of the system. To produce an overall model of the system, such partial models must be composed together. Model composition is the process of combining partial models to create a single coherent model. Because manual model composition is error-prone, time-consuming and tedious, it should be replaced by automated model composition. This thesis presents a novel automatic composition technique for creating behaviour models, such as a sequence diagram, from partial specifications captured in multiple sequence diagrams, with the help of constraint solvers.
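
As a much-simplified illustration of the composition problem (the thesis itself works on full sequence-diagram semantics with constraint solvers), the Python sketch below merges the message orders of two partial diagrams into one consistent global order; the diagrams and message names are hypothetical:

    from graphlib import TopologicalSorter

    def compose(*diagrams):
        """Merge partial message orders into one consistent global order.
        Each diagram is a list of messages; consecutive messages impose a
        precedence constraint. Conflicting orders raise CycleError."""
        ts = TopologicalSorter()
        for messages in diagrams:
            for earlier, later in zip(messages, messages[1:]):
                ts.add(later, earlier)  # 'later' must come after 'earlier'
        return list(ts.static_order())

    d1 = ["login", "query", "logout"]
    d2 = ["login", "audit", "logout"]
    print(compose(d1, d2))  # one valid interleaving, e.g.
                            # ['login', 'query', 'audit', 'logout']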

The application of software product line engineering to energy management in the cloud and in virtualised environments

Murwantara, I. Made January 2016
Modern software is created from components which can often perform a large number of tasks. For a given task there are often many variations of components that can be used; as a result, software with comparable functionality can often be produced from a variety of components, and the choice of components influences the energy consumption. A popular method of software reuse that involves selecting component configurations is the Software Product Line (SPL). Even though SPL has been used to investigate the energy consumption of combinations of software components, there has been no in-depth study of how to measure the energy consumed by a configuration of components or the extent to which individual components contribute to energy usage. This thesis investigates how the diversity of software components affects energy consumption in virtualised environments, and it presents a method for identifying combinations of components that consume less energy. This work gives insight into the cultivation of green software components by identifying which components influence the total consumption of energy. Furthermore, the thesis investigates how to exploit component diversity dynamically to manage energy consumption as the demand on the system changes.
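
A hedged sketch of the selection step such a method implies: enumerate the configurations of a toy feature model and rank them by energy use. The features and the per-request energy figures are invented placeholders standing in for measurements taken in a virtualised environment.

    from itertools import product

    # Toy feature model: one web server and one database must be chosen.
    # Energy figures (joules per request) are invented placeholders.
    FEATURES = {
        "server": {"apache": 1.9, "nginx": 1.4},
        "db":     {"mysql": 2.3, "sqlite": 1.1},
    }

    def configurations():
        names = list(FEATURES)
        for combo in product(*(FEATURES[n] for n in names)):
            cost = sum(FEATURES[n][choice] for n, choice in zip(names, combo))
            yield dict(zip(names, combo)), cost

    best, joules = min(configurations(), key=lambda pair: pair[1])
    print(best, joules)  # {'server': 'nginx', 'db': 'sqlite'} 2.5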

The theory of LEGO

Pollack, Robert January 1995
LEGO is a computer program for interactive typechecking in the Extended Calculus of Constructions and two of its subsystems. LEGO also supports the extension of these three systems with inductive types. These type systems can be viewed as logics, and as meta languages for expressing logics, and LEGO is intended to be used for interactively constructing proofs in mathematical theories presented in these logics. I have developed LEGO over six years, starting from an implementation of the Calculus of Constructions by Gérard Huet. LEGO has been used for problems at the limits of our abilities to do formal mathematics. In this thesis I explain some aspects of the meta-theory of LEGO's type systems leading to a machine-checked proof that typechecking is decidable for all three type theories supported by LEGO, and to a verified algorithm for deciding their typing judgements, assuming only that they are normalizing. In order to do this, the theory of Pure Type Systems (PTS) is extended and formalized in LEGO. This extended example of a formally developed body of mathematics is described, both for its main theorems, and as a case study in formal mathematics. In many examples, I compare formal definitions and theorems with their informal counterparts, and with various alternative approaches, to study the meaning and use of mathematical language, and suggest clarifications in the informal usage. Having outlined a formal development far too large to be surveyed in detail by a human reader, I close with some thoughts on how the human mathematician's state of understanding and belief might be affected by possessing such a thing.
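
Pure Type Systems are parameterised by a set of sorts S, axioms A and rules R. As a small aside (the thesis's own formalisation is carried out in LEGO itself), the classic Calculus of Constructions can be written down as a PTS specification, with product formation governed by the rule (s1, s2, s3) in R; a Python sketch:

    # The Calculus of Constructions written down as a PTS (S, A, R).
    SORTS  = {"*", "□"}
    AXIOMS = {("*", "□")}                        # |- * : □
    RULES  = {("*", "*", "*"), ("*", "□", "□"),
              ("□", "*", "*"), ("□", "□", "□")}

    def product_sort(s1: str, s2: str) -> str:
        """Sort s3 of a product Pi x:A. B, given A : s1 and B : s2,
        per the PTS product rule (s1, s2, s3) in R."""
        for (a, b, c) in RULES:
            if (a, b) == (s1, s2):
                return c
        raise TypeError(f"no product rule for ({s1}, {s2})")

    print(product_sort("□", "*"))  # '*': impredicative quantification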

Supporting software integration activities with first-class code changes

Dias, Victor 27 November 2015
Developers typically change codebases in parallel with each other, which results in diverging codebases. Such diverging codebases must be integrated when finished. Integrating diverging codebases involves difficult activities. For example, two changes that are correct independently can introduce subtle bugs when integrated together. Integration can be difficult with existing tools, which, instead of dealing with the evolution of the actual program entities being changed, handle code changes as lines of text in files. Tools are important: software development tools have greatly improved from generic text editors to IDEs by providing high-level code manipulation such as automatic refactorings and code completion. This improvement was made possible by the reification of program entities. Nevertheless, integration tools did not benefit from a similar reification of change entities to improve productivity in integration. In this work we first conducted a study to learn which integration activities are important and have little tool support. We discovered that one such activity is the detection of tangled commits (which contain unrelated tasks such as a bug fix and a refactoring). We then propose Epicea, a reified change model and associated IDE tools, and EpiceaUntangler, an approach based on Epicea to help developers untangle commits. The results of our evaluations with real-world case studies show the usefulness of our approaches.
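
EpiceaUntangler's actual analysis is not described in the abstract; purely as an illustration of what detecting tangled commits can involve, the sketch below groups a commit's changes into candidate tasks by linking changes that touch a common program entity. The heuristic, names and commit contents are assumptions.

    from collections import defaultdict

    def untangle(changes):
        """Group changed entities into candidate tasks: two changes belong
        together if they touch a common program entity (union-find)."""
        parent = {}
        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x
        def union(a, b):
            parent[find(a)] = find(b)

        by_entity = defaultdict(list)
        for change_id, entities in changes.items():
            for entity in entities:
                by_entity[entity].append(change_id)
        for ids in by_entity.values():
            for other in ids[1:]:
                union(ids[0], other)

        groups = defaultdict(list)
        for change_id in changes:
            groups[find(change_id)].append(change_id)
        return list(groups.values())

    commit = {  # hypothetical tangled commit: a bug fix plus a refactoring
        "fix-null-check": {"Parser.parse"},
        "fix-test":       {"Parser.parse", "ParserTest"},
        "rename-helper":  {"StringUtils.trim"},
    }
    print(untangle(commit))
    # [['fix-null-check', 'fix-test'], ['rename-helper']]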

Seeking improvements in detailed design support for software development projects

Ramsay, Craig Douglas January 2012
Evidence from the literature indicates prevailing issues in the documentation of software systems. Documentation requires significant effort to create and maintain, which can reduce the inclination to produce it in a timely manner and to ensure that it remains up to date with the program code to which it corresponds. As a result, documentation is not entirely trusted as a source of consultation during software maintenance tasks. Existing tool support neglects important aspects of detailed design documentation for software systems. This work proposes a design for a novel research tool which provides improved support for the detailed design and documentation of software systems and which addresses the prevailing issues identified. At the core of this tool is a ‘dynamic synchronization’ feature which automates the process of detecting and synchronizing changes between program code and documentation at the level of detailed algorithms in code, preventing them from getting out of date. An evaluation experiment was designed and conducted in which the research tool was used to complete a series of programming and documentation tasks representing typical software development and code maintenance scenarios. The results show that software developers using the dynamic synchronization feature had a significant 66 percent reduction in the time required to keep their documentation up to date during a code maintenance task (p < 0.01), and a significant 31 percent reduction in the time to complete the maintenance task (p < 0.01). In a questionnaire, they expressed a significant 20 percent higher confidence level that their documentation was an accurate reflection of their code than software developers using non-synchronized forms of documentation (subjective measure, p = 0.03). Further areas of research and development are proposed.
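
The abstract does not detail how ‘dynamic synchronization’ is implemented; one simple way to detect stale detailed-design documentation, shown here as an assumed illustration rather than the tool's actual design, is to store a digest of the documented code region and flag a mismatch:

    import hashlib

    def fingerprint(code: str) -> str:
        """Stable digest of a code region, ignoring trailing whitespace."""
        lines = (line.rstrip() for line in code.strip().splitlines())
        return hashlib.sha256("\n".join(lines).encode()).hexdigest()[:12]

    # The documentation records the hash of the code it describes...
    doc = {"algorithm": "binary search over sorted ids",
           "code_hash": fingerprint("def find(ids, x):\n    ...")}

    # ...so a later edit to the code is detected as stale documentation.
    current = "def find(ids, x):\n    return bisect(ids, x)"
    if fingerprint(current) != doc["code_hash"]:
        print("documentation out of date: re-synchronize detailed design")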

The discursive constitution of software development

Cornut, Francis January 2009
The successful development of software continues to be of central interest, both as an academic topic and in professional practice. Consequently, several software development approaches and methodologies have been developed and promoted over the past decades. However, despite the attention given to the subject and the methodical support available, software development and how it should be practiced continue to be controversial. This thesis examines how beliefs about software development come to be socially established as legitimate, and how they come to constitute software development practices in an organization. It is argued that the emergence of a dominant way of conceiving of and practicing software development is the outcome of power relations that permeate the discursive practices of organizational actors. The theoretical framework of this study is guided by Pierre Bourdieu’s theory of symbolic violence and organizational discourse theory. As a research method, ethnographic research techniques are utilized as part of a case study to gain deep insights into the standardization of software development practices. The research site is the IT division of a large financial services organization and is composed of ten units distributed across eight countries. The tumultuous development of a knowledge management programme intended to institutionalize a standard software development process across the organization’s units provides the case for this research. This thesis answers the call for studies providing detailed accounts of the sociopolitical process by which technically oriented practices are transferred and standardized within organizations. It is submitted that a discourse theoretical approach informed by Bourdieu’s thinking enables us to conceptualize this process in a more meaningful, and theoretically rigorous, manner. In providing this theoretical approach, the thesis seeks to contribute to current research on technology and innovation management, and to offer guidance on some issues concerning the management of the software development process.

A holistic semantic based approach to component specification and retrieval

Li, Chengpu January 2012
Component-Based Development (CBD) has been broadly used in software development as it enhances productivity and reduces the costs and risks involved in systems development. It has become a well-understood and widely used technology for developing not only large enterprise applications but a whole spectrum of software applications, as it offers fast and flexible development. However, driven by the continuous expansion of software applications, the increase in component varieties and sizes, and the evolution from local to global component repositories, the so-called component mismatch problem has become an even more severe hurdle for component specification and retrieval. This problem not only prevents CBD from reaching its full potential, but also hinders the acceptance of many existing component repositories. To overcome this problem, existing approaches have employed a variety of technologies to support better component specification and retrieval, ranging from the early syntax-based (traditional) approaches to the recent semantic-based approaches. Although these technologies aim at an accurate description of the component specification and/or the user query, existing semantic-based approaches still fail to achieve the goals desired for present-day component reuse: precision, automation, semantic awareness and domain capability. This thesis proposes an approach, the MVICS-based approach, aimed at holistic, semantic-based and adaptation-aware component specification and retrieval. As its foundation, a Multiple-Viewed and Interrelated Component Specification ontology model (MVICS) is first developed for component specification and repository building. The MVICS model provides an ontology-based architecture for specifying components from a range of perspectives; it integrates the knowledge of Component-Based Software Engineering (CBSE) and supports ontology evolution to reflect the continuous developments in CBD and components. A formal definition of the MVICS model is presented, which ensures the rigour of the model and supports a high level of automation in retrieval. Furthermore, the MVICS model has a smooth mechanism for integration with domain-related software system ontologies. Such integration enhances the function and application scope of the MVICS model by bringing more domain semantics into component specification and retrieval. Another improved feature of the proposed approach is that the effect of possible component adaptation is extended to related components. Finally, a comprehensive profile of the resulting components presents the search results to the user, from a summary down to the details of satisfied aspects and unsatisfied discrepancies. These features are well integrated, enabling a holistic view of semantic-based component specification and retrieval. A prototype tool was developed to demonstrate the power of the MVICS model in expressing semantics and automating component specification and retrieval; the tool implements the complete component search process. Three case studies have been undertaken to illustrate and evaluate the usability and correctness of the approach, in terms of supporting accurate component specification and retrieval, seamless linkage with a domain ontology, adaptive component suggestion and a comprehensive profile of the resulting components.
A conclusion is drawn from an analysis of the feedback from the case studies, which shows that the proposed approach can be deployed in real-life industrial development. The benefits of MVICS include not only improved component search precision and recall, reduced development time and repository maintenance effort, but also decreased human intervention in CBD.
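
The MVICS model itself is an ontology, which a short sketch cannot reproduce; purely to illustrate multiple-viewed matching at toy scale, the Python code below scores components against a query by weighted term overlap per specification view. The view names, weights and repository contents are invented for illustration.

    # Toy multi-view matching: each component is specified from several
    # views (names and weights are invented, not MVICS's actual views).
    VIEW_WEIGHTS = {"function": 0.5, "interface": 0.3, "domain": 0.2}

    def score(component: dict, query: dict) -> float:
        """Weighted Jaccard overlap between the query terms and the
        component's specification, computed view by view."""
        total = 0.0
        for view, weight in VIEW_WEIGHTS.items():
            c, q = component.get(view, set()), query.get(view, set())
            if q:
                total += weight * len(c & q) / len(c | q)
        return total

    repo = {
        "CsvReader": {"function": {"parse", "read"}, "domain": {"finance"}},
        "XmlWriter": {"function": {"write", "serialise"}, "domain": {"web"}},
    }
    query = {"function": {"parse"}, "domain": {"finance"}}
    ranked = sorted(repo, key=lambda name: score(repo[name], query),
                    reverse=True)
    print(ranked)  # ['CsvReader', 'XmlWriter']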
