11

The Impact of Domain Knowledge on the Effectiveness of Requirements Engineering Activities

Niknafs, Ali January 2014 (has links)
One of the factors that seems to influence an individual's effectiveness in requirements engineering activities is his or her knowledge of the problem being solved, i.e., domain knowledge. While in-depth domain knowledge enables a requirements engineer to understand the problem more easily, he or she can also fall prey to the domain's tacit assumptions and may overlook issues that seem obvious to domain experts and therefore remain unmentioned. The purpose of this thesis is to investigate the impact of domain knowledge on different requirements engineering activities. The main research question this thesis attempts to answer is "How does one form the most effective team, consisting of some mix of domain ignorants and domain awares, for a requirements engineering activity involving knowledge about the domain of the computer-based system whose requirements are being determined by the team?" This thesis presents two controlled experiments and an industrial case study to test a number of hypotheses. The main hypothesis states that a requirements engineering team for a computer-based system in a particular domain, consisting of a mix of requirements analysts who are ignorant of the domain and requirements analysts who are aware of the domain, is more effective at requirement idea generation than a team consisting only of requirements analysts who are aware of the domain. The results of the controlled experiments, although not conclusive, provided some support for the positive effect of the mix on the effectiveness of a requirements engineering team. The results also showed a significant effect of other independent variables, especially educational background. The data from the case study corroborated the results of the controlled experiments. The main conclusion that can be drawn from the findings of this thesis is that the presence in a requirements engineering team of a domain ignorant with a computer science or software engineering background improves the effectiveness of the team.
12

Reengineering of the Projection Explorer tool to support the selection of primary studies in systematic reviews

Rafael Messias Martins 11 April 2011 (has links)
The increasing adoption of the experimental paradigm in Software Engineering research aims at obtaining experimental evidence about proposed technologies, both to ensure their proper evaluation and to build a solid body of knowledge for the discipline. One experimental research approach is the systematic review: a rigorous, planned, and auditable method for collecting and critically analyzing the experimental data available on a particular research topic. Despite producing reliable results, conducting a systematic review can be laborious and often time-consuming, especially when a large volume of studies must be considered, selected, and evaluated. One solution found in the literature is the use of Visual Text Mining (VTM) tools such as the Projection Explorer (PEx) to support the selection and analysis of primary studies in the systematic review process. In this work, a software reengineering of PEx was performed with two main goals: to support, using VTM, the selection and analysis of primary studies in the systematic review process, and to implement new non-functional requirements aimed at improving the tool's maintainability and scalability. The result was a modular platform for instantiating visualization tools and, built on top of it, a systematic review tool supported by VTM. A case study carried out with the tool showed that applying VTM techniques in this context is feasible and promising, improving both the performance and the effectiveness of the selection.
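To illustrate the kind of VTM pipeline such a tool builds on, the sketch below vectorizes study abstracts with TF-IDF and projects them onto a 2D plane so that similar studies cluster together. It assumes scikit-learn and invented abstracts; it is not PEx's actual implementation.

```python
# Minimal sketch of a visual text mining (VTM) projection for primary-study
# selection. NOT PEx's actual implementation: it only illustrates projecting
# abstracts into 2D so that similar studies appear close together.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import MDS

abstracts = [
    "systematic review of testing techniques ...",   # hypothetical study 1
    "visual text mining for document analysis ...",  # hypothetical study 2
    "empirical evaluation of code inspection ...",   # hypothetical study 3
]

# Represent each abstract as a TF-IDF vector.
vectors = TfidfVectorizer(stop_words="english").fit_transform(abstracts)

# Project the high-dimensional vectors onto a 2D plane; nearby points
# suggest candidate studies with similar content for inclusion/exclusion.
points = MDS(n_components=2, random_state=0).fit_transform(vectors.toarray())

for (x, y), text in zip(points, abstracts):
    print(f"({x:+.2f}, {y:+.2f})  {text[:40]}")
```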
13

Observational Studies of Software Engineering Using Data from Software Repositories

Delorey, Daniel Pierce 06 March 2007 (has links) (PDF)
Data for empirical studies of software engineering can be difficult to obtain. Extrapolations from small controlled experiments to large development environments are tenuous, and observation tends to change the behavior of the subjects. In this thesis we propose the use of data gathered from software repositories in observational studies of software engineering. We present tools we have developed to extract data from CVS repositories and the SourceForge Research Archive. We use these tools to gather data from 9,999 Open Source projects. By analyzing these data we are able to provide insights into the structure of Open Source projects. For example, we find that the vast majority of the projects studied have never had more than three contributors and that the vast majority of authors studied have never contributed to more than one project. However, there are projects that have had up to 120 contributors in a single year and authors who have contributed to more than 20 projects, which raises interesting questions about team dynamics in the Open Source community. We also use these data to empirically test the belief that productivity, in terms of lines of code per programmer per year, is constant regardless of the programming language used. We find that yearly programmer productivity is not constant across programming languages, but rather that developers using higher-level languages tend to write fewer lines of code per year than those using lower-level languages.
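A minimal sketch of the kind of aggregation such repository mining performs, assuming commit records have already been extracted from CVS logs; the record format here is hypothetical, not the thesis's actual schema.

```python
# Sketch of contributor and productivity aggregation over extracted
# commit records. The (project, author, year, lines_added) format is
# hypothetical, not the thesis's actual schema.
from collections import defaultdict

commits = [
    ("projA", "alice", 2005, 120),
    ("projA", "bob",   2005,  80),
    ("projB", "alice", 2006, 300),
]

contributors = defaultdict(set)   # project -> set of authors
yearly_loc = defaultdict(int)     # (author, year) -> lines of code

for project, author, year, loc in commits:
    contributors[project].add(author)
    yearly_loc[(author, year)] += loc

for project, authors in contributors.items():
    print(project, "contributors:", len(authors))
for (author, year), loc in yearly_loc.items():
    print(author, year, "LOC:", loc)
```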
14

On Identifying Technical Debt using Bug Reports in Practice

Silva, Lakmal January 2023 (has links)
Context: In an era where every industry is impacted by software, it is vital to keep software costs under control for organizations to remain competitive. A key factor contributing to software costs is software maintenance, a significant proportion of which is spent dealing with different types of technical debt. Technical debt is a metaphor used to describe the cost of taking shortcuts or of sub-optimal design and implementation that compromises software quality. Similar to financial debt, technical debt needs to be paid off in the future. Objective: To be in control of technical debt related costs, organizations need to identify technical debt types and quantify them in order to introduce solutions and prioritize repayment strategies. However, the invisible nature of technical debt makes its identification challenging in practice. Our aim is to find pragmatic, evidence-supported ways to identify technical debt in practice. Once significant technical debt types have been identified, we aim to propose suggestions to mitigate them. Method: We used design science as a methodological framework to iteratively improve the technical debt identification methods. We utilized bug reports, artifacts produced by software engineers during the development and operation of a software system, as the data source for technical debt identification. Software defects reported through bug reports are considered one of the key external quality attributes of a software system, which supports our evidence-based approach. Throughout the design science iterations, we used the following research methods: case study and sample study. Results: We produced three design artifacts that support technical debt identification. The first artifact is a systematic process for identifying architectural technical debt from bug reports. The second is an automated bug-analysis and visualization tool that supports our research and helps practitioners identify components with hot spots in relation to the number of defects. The third is a method for identifying documentation debt from bug reports. Conclusion: Based on the findings of this thesis, we demonstrated that bug reports can be utilized as a data source for identifying technical debt in practice, by identifying two types of technical debt: architectural technical debt and documentation debt. Compared to the identification of documentation debt, architectural technical debt identification remains challenging due to the abstract nature of the architecture and its boundaries. Therefore, our future work will focus on evaluating the impact of reducing the sources of documentation debt on the frequency of bug reports and overall project cost.
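In the spirit of the bug-analysis tool described above, a minimal sketch of defect hot-spot detection from bug reports might look as follows; the field names and the threshold are assumptions for illustration, not the thesis's actual design.

```python
# Sketch of defect hot-spot detection: count defects per component and
# flag components well above the mean. Fields and threshold are invented.
from collections import Counter

bug_reports = [
    {"id": 1, "component": "scheduler"},
    {"id": 2, "component": "scheduler"},
    {"id": 3, "component": "scheduler"},
    {"id": 4, "component": "scheduler"},
    {"id": 5, "component": "parser"},
    {"id": 6, "component": "ui"},
]

defects_per_component = Counter(r["component"] for r in bug_reports)

# Flag components whose defect count is well above the mean as hot spots.
mean = sum(defects_per_component.values()) / len(defects_per_component)
hot_spots = {c: n for c, n in defects_per_component.items() if n > 1.5 * mean}
print("hot spots:", hot_spots)  # {'scheduler': 4}
```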
15

Utilizing Continuous Integration environments for evaluation of software quality attributes

Yu, Liang January 2023 (has links)
Software quality attributes are properties that reflect the quality of a software system, and non-functional requirements (NFRs) are the specifications that define how a software system should perform to reach the desired level of those quality attributes. The evaluation of quality attributes is important to show how effectively a system meets its customers' NFRs. Continuous integration (CI) environments have emerged as powerful platforms for organizations to improve software quality through automated software verification and validation. Despite this, there is a growing need for evaluating quality attributes that is often met by in-house development of metrics and tools, which highlights the importance of quality attributes for software product quality. This thesis investigates the association between quality attributes and the components of a CI environment, as well as how to utilize these components for evaluating software quality attributes. The focus is on improving knowledge of such evaluation and on providing specific recommendations for companies to enhance their CI environments to meet higher demands on quality evaluation. The contributions of this thesis include a better understanding of the relationship between quality attributes and CI components, and a set of practical guidelines for companies to effectively leverage CI for quality attribute evaluation. The studies in this thesis used mixed methodologies: a systematic literature review, a multi-case study conducted in four software development companies, and a synthesis of the collected data. The multi-case study provided a comprehensive overview of practices for quality attribute evaluation and of how CI components can generate data to support the evaluation of specific attributes. The synthesis study presents a maturity model based on data collected from both academia and industry; the model can aid organizations in assessing their current level of maturity in utilizing CI environments and in identifying potential improvements. The results of these studies show the capabilities of different components of a CI environment and how these components can be used to support the evaluation of quality attributes. While using CI environments for this purpose offers benefits, it also presents several challenges, for example the challenge of identifying effective quality metrics. In conclusion, this thesis contributes to the understanding of the use of CI environments for evaluating software quality attributes. The results suggest that CI environments can be an effective approach for quality attribute evaluation, but suitable metrics need to be considered to ensure accurate and meaningful evaluation results. Furthermore, the thesis presents areas for future research, such as the use of machine learning techniques to improve the accuracy of quality assessment using CI environments.
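As one concrete illustration of how a CI component can generate data for a quality-attribute evaluation, the sketch below turns a test stage's JUnit-style XML report into a pass/fail check against a performance NFR. The report path, element names, and threshold are assumptions, not drawn from the thesis.

```python
# Sketch of a CI step that evaluates a performance quality attribute from
# the test stage's JUnit-style XML report. Path and threshold are invented.
import sys
import xml.etree.ElementTree as ET

THRESHOLD_SECONDS = 2.0  # hypothetical NFR: no test case may exceed this

root = ET.parse("build/reports/junit-results.xml").getroot()
slow = [
    (case.get("name"), float(case.get("time", 0)))
    for case in root.iter("testcase")
    if float(case.get("time", 0)) > THRESHOLD_SECONDS
]

for name, seconds in slow:
    print(f"NFR violation: {name} took {seconds:.2f}s")
sys.exit(1 if slow else 0)  # fail the CI stage when the NFR is not met
```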
16

Software Supply Chain Security: Attacks, Defenses, and the Adoption of Signatures

Taylor R Schorlemmer 27 April 2024 (has links)
<p dir="ltr">Modern software relies heavily on third-party dependencies (often distributed via public package registries), making software supply chain attacks a growing threat. Prior work investigated attacks and defenses, but only taxonomized attacks or proposed defensive techniques, did not consistently define software supply chain attacks, and did not provide properties to assess the security of software supply chains. We do not have a unified definition of software supply chain attacks nor a set of properties that a secure software supply chain should follow.</p><p dir="ltr">Guaranteeing authorship in a software supply chain is also a challenge. Package maintainers can guarantee package authorship through software signing. However, it is unclear how common this practice is or if existing signatures are created properly. Prior work provided raw data on registry signing practices, but only measured single platforms, did not consider quality, did not consider time, and did not assess factors that may influence signing. We do not have up-to-date measurements of signing practices nor do we know the quality of existing signatures. Furthermore, we lack a comprehensive understanding of factors that influence signing adoption.</p><p dir="ltr">This thesis addresses these gaps. First, we systematize existing knowledge into: (1) a four-stage supply chain attack pattern; and (2) a set of properties for secure supply chains (transparency, validity, and separation). Next, we measure current signing quantity and quality across three kinds of package registries: traditional software (Maven Central, PyPI), container images (Docker Hub), and machine learning models (Hugging Face). Then, we examine longitudinal trends in signing practices. Finally, we use a quasi-experiment to estimate the effect that various factors had on software signing practices.</p><p dir="ltr">To summarize the findings of our quasi-experiment: (1) mandating signature adoption improves the quantity of signatures; (2) providing dedicated tooling improves the quality of signing; (3) getting started is the hard part — once a maintainer begins to sign, they tend to continue doing so; and (4) although many supply chain attacks are mitigable via signing, signing adoption is primarily affected by registry policy rather than by public knowledge of attacks, new engineering standards, etc. These findings highlight the importance of software package registry managers and signing infrastructure.</p>
17

A framework for the systematic evaluation of testing techniques in the context of concurrent programming

Melo, Silvana Morita 04 April 2018 (has links)
Background: Although a variety of testing techniques have been proposed for the concurrent programming context, the information about them is scattered across the literature, without an appropriate characterization or the relevant data that would aid understanding and, consequently, the effective application of these techniques. This hinders knowledge transfer between academia and the interested community. Objective: In this context, the main objective of this work is to provide support, in the form of a framework, for the characterization and systematic selection of concurrent software testing techniques. Methodology: To meet this objective, a body of knowledge was built that brings together, in an integrated way, the information relevant to deciding which testing technique should be applied to a specific software project. A design of experiments is defined as a guide for conducting empirical studies that can be used for feedback, updating, and evolution of the body of knowledge. To systematize the selection of testing techniques, a characterization scheme is defined that considers the main characteristics of concurrent programming that influence testing and calculates how well these attributes match the attributes of the software project under development. Results and Conclusions: To allow the community to interact with the proposed framework, a computational infrastructure was made available that provides access to the body of knowledge and automates the selection process. The empirical study conducted to evaluate the proposal showed that the approach effectively helps characterize, compare, and quantify attribute-based adequacy, considerably improving the selection of testing techniques for concurrent software according to users' expectations.
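A minimal sketch of the attribute-matching idea behind such a characterization scheme: score each testing technique by how well its characterized attributes overlap the project's attributes. The technique names, attributes, and scoring function are invented for illustration.

```python
# Sketch of attribute-based adequacy scoring for testing-technique
# selection. Attribute names and the scoring function are hypothetical.
techniques = {
    "reachability testing": {"shared memory", "deterministic replay"},
    "mutation testing":     {"message passing", "fault injection"},
}
project_attributes = {"shared memory", "deterministic replay", "java"}

def adequacy(technique_attrs: set[str], project_attrs: set[str]) -> float:
    # Fraction of the technique's characterized attributes that the
    # project exhibits (a Jaccard-style overlap would also be reasonable).
    return len(technique_attrs & project_attrs) / len(technique_attrs)

for name, attrs in sorted(techniques.items(),
                          key=lambda kv: adequacy(kv[1], project_attributes),
                          reverse=True):
    print(f"{name}: adequacy {adequacy(attrs, project_attributes):.2f}")
```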
18

Variability Modeling in the Real

Berger, Thorsten 15 May 2013 (has links) (PDF)
Variability modeling is one of the key disciplines for coping with complex variability in large software product lines. It aims at creating, evolving, and configuring variability models, which describe the common and variable characteristics, also known as features, of the products in a product line. Since the introduction of feature models more than twenty years ago, many variability modeling languages and notations have been proposed both in academia and industry, followed by hundreds of publications on variability modeling techniques built upon these theoretical foundations. Surprisingly, there are relatively few empirical studies that aim at understanding the use of such languages. What variability modeling concepts are actually used in practice? Do variability models applied in the real world look similar to those published in the literature? In what technical and organizational contexts are variability models applicable? We present an empirical study that addresses this research gap. Our goals are i) to verify existing theoretical research, and ii) to explore real-world variability modeling languages and the models expressed in them. We study the concepts and semantics of variability modeling languages conceived by practitioners, and the usage of these concepts in real, large-scale models. Our aim is to support variability modeling research by providing empirical data about the use of its core modeling concepts, by identifying and characterizing further concepts that have not been as widely addressed, and by providing realistic assumptions about the scale, structure, content, and complexity of real-world variability models. We believe that our findings are of relevance to variability modeling researchers and tool designers, for example those working on interactive product configurators or feature dependency checkers. Our extracted models provide realistic benchmarks that can be used to evaluate new techniques. Recognizing the recent trend in software engineering to open up software platforms to facilitate inter-organizational reuse of software, we extend our empirical discourse to the emerging field of software ecosystems. As natural successors of successful product lines, ecosystems manage huge variability among and within their software assets and thus represent a highly interesting class of systems for studying variability modeling concepts and mechanisms. Our studied systems comprise eleven highly configurable software systems, two ecosystems with closed platforms, and three ecosystems relying on open platforms. Some of our subjects are among the largest successful systems in existence today. Results from a survey on industrial variability modeling complement these subjects. Our overall results provide empirical evidence that the well-researched concepts of feature modeling are used in practice, but also that more advanced concepts are needed. We observe that assumptions about variability models in the literature do not hold. Our study also reveals that variability models work best in centralized variability management scenarios, and that they are fragile and have to be controlled by a small team. We also identify a particular type of dependency that is increasingly used in open platforms and helps sustain the growth of ecosystems. Interestingly, while enabling distributed variability, these dependencies rely on a centralized and stable vocabulary. Finally, we formulate new hypotheses and research questions that provide direction for future research.
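As a minimal sketch of the core concept studied here, the snippet below encodes a tiny feature model with cross-tree constraints and checks a configuration for validity; the features and constraints are invented for illustration.

```python
# Sketch of a feature model with cross-tree constraints, the core concept
# of variability modeling. The model and configurations are invented.
features = {"base", "gui", "cli", "logging"}
mandatory = {"base"}                      # must be in every product
requires = {("gui", "logging")}           # gui requires logging
excludes = {("gui", "cli")}               # gui and cli are alternatives

def valid(config: set[str]) -> bool:
    if not mandatory <= config:
        return False
    if any(a in config and b not in config for a, b in requires):
        return False
    if any(a in config and b in config for a, b in excludes):
        return False
    return config <= features

print(valid({"base", "gui", "logging"}))  # True
print(valid({"base", "gui", "cli"}))      # False: gui requires logging and excludes cli
```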
19

Impediments for Automated Software Test Execution

Wiklund, Kristian January 2015 (has links)
Automated software test execution is a critical part of the modern software development process, where rapid feedback on product quality is expected. It is of high importance that impediments related to test execution automation are prevented and removed as quickly as possible. An enabling factor for all types of improvement is to understand the nature of what is to be improved. The goal of this thesis is to further the knowledge about common problems encountered by software developers using test execution automation, in order to enable the improvement of test execution automation in industrial software development. The research has been performed through industrial case studies and literature reviews. The analysis of the data has primarily been performed using qualitative methods, searching for patterns, themes, and concepts in the data. Our main findings include: (a) a comprehensive list of potential impediments reported in the published body of knowledge on test execution automation, (b) an in-depth analysis of how such impediments may affect the performance of a development team, and (c) a proposal for a qualitative model of the interactions between the main groups of phenomena that contribute to the formation of impediments in a test execution automation project. In addition to this, we contribute qualitative and quantitative empirical data from our industrial case studies. Through our results, we find that test execution automation is a commonly under-estimated activity, not only in terms of resources but also in terms of the complexity of the work. There is a clear tendency to perform the work ad hoc, down-prioritize the automation in favor of other activities, and ignore the long-term effects in favor of short-term gains. This is both a technical and a cultural problem; it needs to be managed through awareness of the problems that may arise, and solved in the long term through education and information. We conclude by proposing a theoretical model of the socio-technical system that needs to be managed to be successful with test execution automation. / The purpose of this thesis is to extend the scientific knowledge about problems that can arise when automated testing is used in industrial software development. Software development consists, in principle, of a number of process steps: requirements engineering, detailed system design, implementation in the form of programming, and finally testing, which ensures that the quality of the software is sufficient for its intended users. Testing represents a large share of the time and cost of software development, which makes it attractive to automate. Automated tests can bring many positive effects. Testing that is performed over and over again, to ensure that existing functionality does not stop working when the software is changed, is well suited to automation; this frees qualified staff for qualified work. Automation can also shorten the lead time of testing and thereby enable faster deliveries to the customer. Another goal of test automation is to ensure that the same tests are performed in the same way every time the product is tested, so that it maintains an even and stable level of quality. Automated testing is, however, a more complex and costly activity than one might think, and problems that arise during its use can have major consequences. This is even more true in organizations that use modern development methods in which automation is the cornerstone of effective quality control. To avoid as many problems as possible, it is therefore very important to understand what happens when test automation is used at scale. This thesis presents results from case studies in Swedish industry which, combined with a systematic review of existing research in the area, were carried out to seek deeper knowledge and opportunities for generalization. The work has been descriptive and explanatory, and builds on empirical research methodology. In the thesis we contribute (a) information about the problems related to automated testing that we identified in the empirical case studies, (b) a discussion of these problems in relation to other studies in the area, (c) a systematic literature review that provides an overview of the relevant publications in the area, (d) an analysis of the evidence collected through the literature review, and (e) a model of the organizational and technical system that must be managed in order to succeed with a test automation project. The results indicate that purely technical problems do not make up the bulk of the problems experienced with test automation; instead, it is largely a matter of organizational dynamics: managing the change that introducing automation entails, planning the automation and its use, and the financial expectations placed on the automation. / ITS-EASY Post Graduate School for Embedded Software and Systems
20

The study of socio-technical coordination using a socio-technical congruence model

Kwan, Irwin Hin-Bong 15 August 2011 (has links)
Coordination in software development, especially in global software development, is important because a team cannot perform well unless its team members communicate and maintain awareness of each other's activities. In order to improve socio-technical coordination, which is coordination among team members who work on interdependent technical entities, it must be conceptualized and measured. One measurement of coordination is socio-technical congruence, which calculates the alignment between technical relationships and social relationships. The problem is that there are a large number of social and technical factors to consider when using socio-technical congruence to study coordination. Current limitations of socio-technical congruence include the inability to represent the size of gaps in coordination between people, the sparse understanding of the role of awareness in conjunction with other coordination mechanisms, and the lack of a technique with which to model people who are involved in certain communication patterns but not assigned to technical tasks. To address these limitations, this dissertation describes a socio-technical congruence model for studying socio-technical coordination. The model focuses on refining conceptualizations of technical and social relationships between people, on describing an improved gap technique for calculating socio-technical alignment, and on providing guidelines on how to study coordination in teams using the socio-technical congruence model. I first develop the model theoretically from related work. I then conduct two empirical investigations to address limitations of the model. The first study examines awareness in a small global team using observational studies. The second study examines important communicators and people who emerge in coordination despite having no technical relationships, by examining email archives from the same team. I conduct a third empirical investigation of a large global team to apply the model to study the relationship between socio-technical congruence and team performance using the project's repository. Finally, I revisit the model and improve it based on the empirical findings. The model refines conceptualizations of relationships, classifies emergent people who are suddenly involved with a task or a team during the project, and represents multi-variable relationships. It includes a template and an accompanying process for applying socio-technical congruence to study socio-technical coordination. This model enables researchers to study socio-technical coordination and analyze its effect on software engineering outcomes such as performance and quality. / Graduate
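As a worked sketch of the congruence calculation at the heart of the model: the classic formulation from prior work derives coordination requirements from task assignments and task dependencies and compares them with the communication that actually occurred. The matrices below are invented toy data.

```python
# Sketch of the classic socio-technical congruence computation: derive
# coordination requirements from task assignments and task dependencies,
# then compare with observed communication. Toy data, for illustration.
import numpy as np

assignment = np.array([[1, 0],    # person 0 works on task 0
                       [0, 1],    # person 1 works on task 1
                       [1, 1]])   # person 2 works on both
dependency = np.array([[0, 1],
                       [1, 0]])   # tasks 0 and 1 depend on each other
communication = np.array([[0, 0, 1],
                          [0, 0, 1],
                          [1, 1, 0]])  # who actually talked to whom

# Coordination requirements: who *should* coordinate, given that the
# tasks they work on are interdependent.
required = (assignment @ dependency @ assignment.T) > 0
np.fill_diagonal(required, False)  # self-coordination is not required

# Congruence: fraction of required coordination that actually happened.
congruence = (required & (communication > 0)).sum() / required.sum()
print(f"socio-technical congruence: {congruence:.2f}")  # 0.67 here
```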
