1

Knowledge management and throughput optimization in large-scale software development

Andersson, Henrik January 2015
Large-scale software development companies delivering market-driven products have to a large extent introduced agile methodologies as their way of working. Even though an agile way of working has many benefits, problems occur when scaling agile because of the increased complexity. One explicit problem area is evolving deep product knowledge: domain-specific knowledge that cannot be developed anywhere else but at the specific workplace. This research aims to identify impediments to developing domain-specific knowledge and to provide solutions for overcoming these challenges in order to optimize knowledge growth and throughput. The results show that impediments fall into four categories, based on a framework for knowledge-sharing drivers: people-related, task-related, structure-related and technology-related. The challenging element of knowledge growth is integrating training into the feature development process without affecting feature throughput negatively. The research also shows that by increasing knowledge sharing, the competence level of the whole organization can be raised, which is beneficial from many perspectives, such as feature throughput and code quality.
2

Analysis Of Complexity And Coupling Metrics Of Subsystems In Large Scale Software Systems

Ramakrishnan, Harish 01 January 2006
Dealing with the complexity of large-scale systems can be a challenge for even the most experienced software architects and developers. Large-scale software systems can contain millions of elements, which interact to achieve the system functionality. Managing and representing the complexity involved in the interaction of these elements is a difficult task. We propose an approach for analyzing the reusability, maintainability and complexity of such a complex large-scale software system. Reducing the dependencies between subsystems increases reusability and decreases the effort needed to maintain the system, thus reducing its complexity. Coupling is an attribute that summarizes the degree of interdependence or connectivity among and within subsystems. When used in conjunction with measures of other attributes, coupling can contribute to an assessment or prediction of software quality. As part of this work, we developed a set of metrics for measuring coupling at the subsystem level in a large-scale software system. These metrics do not take into account the complexity internal to a subsystem and consider a subsystem as a single entity. Such a dependency metric makes it possible to predict the cost and effort needed to maintain the system, the reusability of its parts, and its overall complexity: the greater the dependency, the higher the cost to maintain and reuse the software. We built a large-scale system, implemented these research ideas, and analyzed how these measures help in minimizing complexity and system cost. We also showed that these coupling measures help in refactoring the system design.
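The abstract does not spell out the metric definitions, but a minimal sketch of subsystem-level coupling over a dependency graph, treating each subsystem as a single node as described, might look as follows (the fan-in/fan-out formulation and all names are illustrative assumptions, not the thesis's actual metrics):

```python
# Illustrative sketch only: counts inter-subsystem dependencies, treating
# each subsystem as a single node (internal complexity ignored), as the
# abstract describes. Metric names are assumptions.
from collections import defaultdict

def subsystem_coupling(dependencies):
    """dependencies: iterable of (source_subsystem, target_subsystem) pairs."""
    fan_out = defaultdict(set)  # subsystems each subsystem depends on
    fan_in = defaultdict(set)   # subsystems depending on each subsystem
    for src, dst in dependencies:
        if src != dst:          # edges internal to a subsystem are ignored
            fan_out[src].add(dst)
            fan_in[dst].add(src)
    subsystems = set(fan_out) | set(fan_in)
    return {s: {"fan_in": len(fan_in[s]), "fan_out": len(fan_out[s])}
            for s in subsystems}

deps = [("UI", "Core"), ("UI", "Net"), ("Net", "Core"), ("Core", "Core")]
print(subsystem_coupling(deps))
# e.g. {'UI': {'fan_in': 0, 'fan_out': 2}, 'Net': {...}, 'Core': {...}}
```

On this view, a high fan-in marks a subsystem whose changes ripple outward (costly to maintain), while a high fan-out marks one that is hard to reuse in isolation, matching the cost and reusability predictions the abstract describes.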
3

Determining confidence in test quality assessment in large-scale software development

Malmrud, Cecilia January 2022
Software testing can be cumbersome and complicated, and the complexity of the tests increases as the software itself becomes larger and more complex. When continuous integration is applied to software development, feedback from tests can be obtained regularly. These tests are performed in stages, and each loop provides test results. A theoretical model for assigning confidence to different testing stages is presented in this thesis to aid in understanding test quality. The input to the model was based on information from interviews in a case study performed at Ericsson AB. The model builds on the ISO/IEC 25010 standard for software product quality. The theoretical model is focused on the early stages of integration and was evaluated qualitatively. Its inputs are delivery test results, trouble reports from both customers and internal testers, and continuous integration flow trends. It was concluded that the theoretical model can easily be automated, as each input source can be collected automatically. For developers working in the early stages of integration, the model could help give insight into what confidence they can assign to their tests' quality. For testers working later in the flow, the model would require alterations that cannot be deduced from this thesis alone. For other stakeholders, the usefulness of the model depends on how involved their work is in the development chain.
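The thesis does not publish the model itself; as a rough sketch of how the three named inputs could be combined into an automated confidence score, one might write something like the following (weights, scaling, and parameter names are assumptions, not the thesis's model):

```python
# Minimal sketch of automated confidence scoring from the three input
# sources the abstract names. Weights and scaling are illustrative
# assumptions.
def test_confidence(pass_rate, trouble_reports, ci_trend, weights=(0.5, 0.3, 0.2)):
    """
    pass_rate:       fraction of delivery tests passing, 0.0-1.0
    trouble_reports: open reports (customer + internal); fewer is better
    ci_trend:        CI flow trend, -1.0 (degrading) to +1.0 (improving)
    Returns a confidence score in 0.0-1.0.
    """
    w_pass, w_tr, w_ci = weights
    tr_score = 1.0 / (1.0 + trouble_reports)  # assumed decay with report count
    ci_score = (ci_trend + 1.0) / 2.0         # rescale trend to 0..1
    return w_pass * pass_rate + w_tr * tr_score + w_ci * ci_score

print(round(test_confidence(pass_rate=0.95, trouble_reports=3, ci_trend=0.4), 3))
# 0.69 for this example input
```

The point of such a score is exactly the automation argument in the abstract: each term comes from a source that a CI pipeline can collect without manual effort.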
4

Stakeholder analysis in software-intensive systems development

Kelanti, M. (Markus) 18 October 2016
A stakeholder analysis is commonly part of the requirements engineering process in the development of software systems. It contributes to identifying, analysing, negotiating and validating requirements from multiple stakeholder viewpoints that do not necessarily share the same views on the system under development and do not necessarily express themselves in the same language. Stakeholder analysis is often integrated into the development method or practice in use and does not necessarily appear as a separate process. The increase in software size, availability and use in different appliances, however, demands more from stakeholder analysis than has been recognized in the software engineering literature. The increasing scale of software systems and their connections to other systems increase the number of involved stakeholders, complicating the analysis. In addition, how stakeholder analysis should actually be implemented in large-scale software development, and how it supports the development effort, is problematic in practice. The purpose of this thesis is to study the role and purpose of stakeholder analysis in large-scale software-intensive systems development. An empirical approach is taken, studying large-scale software-intensive systems development as a phenomenon in order to observe it as a whole. This approach allows the thesis to analyse the phenomenon from different perspectives so as to identify and describe the nature and purpose of stakeholder analysis in large-scale software-intensive systems development. The contribution of this thesis is twofold. First, it serves both the practitioner and scientific communities by describing the role of stakeholder analysis in the software-intensive systems development process. Second, it demonstrates how a stakeholder analysis can be implemented in a large-scale software-intensive systems development process.
5

Challenges of Large-Scale Software Testing and the Role of Quality Characteristics : Empirical Study

Belay, Eyuel January 2020
Currently, information technology influences every walk of life, and our lives increasingly depend on software and its functionality. Therefore, the development of high-quality software products is indispensable, and in recent years there has been increasing interest in and demand for high-quality software products. Delivering high-quality software products and services is not possible at no cost. Furthermore, software systems have become complex and challenging to develop, test, and maintain because of their scale. With increasing complexity in large-scale software development, testing has therefore become a crucial issue affecting the quality of software products. In this thesis, large-scale software testing challenges concerning quality, and their respective mitigations, are reviewed using a systematic literature review and interviews. Existing literature on large-scale software development deals with issues such as requirement and security challenges, so large-scale software testing and its mitigations have not been dealt with in depth. In this study, a total of 2710 articles from 1995-2020 were collected: 1137 (42%) from IEEE, 733 (27%) from Scopus, and 840 (31%) from Web of Science. Sixty-four relevant articles were selected using a systematic literature review, and snowballing was applied to include missed but relevant articles, adding 32 more. From the resulting 96 articles, a total of 81 large-scale software testing challenges were identified: 32 (40%) performance, 10 (12%) security, 10 (12%) maintainability, 7 (9%) reliability, 6 (8%) compatibility, 10 (12%) general, 3 (4%) functional suitability, 2 (2%) usability, and 1 (1%) portability. The author identified challenges mainly concerning the performance, security, reliability, maintainability, and compatibility quality attributes, but few concerning functional suitability, portability, and usability. The results of the study can be used as a guideline in large-scale software testing projects to pinpoint potential challenges and act accordingly.
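The reported tally can be checked with a few lines of arithmetic; the following snippet, assuming the counts exactly as given, reproduces the totals and rounded shares:

```python
# Quick check of the challenge tally reported in the abstract: the nine
# category counts should sum to 81, and each share should round to the
# percentage quoted.
counts = {"performance": 32, "security": 10, "maintainability": 10,
          "reliability": 7, "compatibility": 6, "general": 10,
          "functional suitability": 3, "usability": 2, "portability": 1}
total = sum(counts.values())
print(total)  # 81
for name, n in counts.items():
    # note: 6/81 rounds to 7%, slightly below the 8% quoted for compatibility
    print(f"{name}: {n} ({n / total:.0%})")
```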
6

Exploring Impact of Project Size in Effort Estimation : A Case Study of Large Software Development Projects

Nilsson, Nathalie, Bencker, Linn January 2021
Background: Effort estimation is one of the cornerstones of project management, enabling efficient planning and the ability to keep budgets. Despite the extensive research done in this area, the estimation process is still considered one of the biggest and most complex problems in project management within software development. Objectives: The main objectives of this thesis were threefold: i) to define the characteristics of a large project, ii) to identify factors causing inaccurate effort estimates, and iii) to understand how the identified factors impact the effort estimation process, all within the context of large-scale agile software development and from the perspective of a project team. Methods: To fulfill this purpose, an exploratory case study was executed. The data collection consisted of archival research, a questionnaire, and interviews. The data analysis was partly conducted using the statistical software tool Stata. Results: From a project team's perspective, a large project is defined by high complexity and a large scope of requirements. The following factors were identified as affecting the estimation process in large projects: deficient requirements, changes in scope, complexity, impact in multiple areas, coordination, and required expertise; the findings indicate that these affect estimation accuracy negatively. Conclusions: Besides the identified factors, many other aspects can directly or indirectly contribute to inaccurate effort estimates, categorized as requirements, complexity, coordination, input and estimation process, management, and usage of estimates.
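The abstract does not name an accuracy measure; a common choice in effort-estimation research is the magnitude of relative error (MRE) and its mean (MMRE), sketched below as one way to quantify "inaccurate effort estimates" (both the data and the choice of MMRE are illustrative, not taken from the thesis):

```python
# Illustrative only: MRE/MMRE is a standard accuracy measure in
# effort-estimation research, shown here as one way to quantify
# estimation inaccuracy. The project data is invented.
def mre(actual, estimated):
    """Magnitude of relative error for one project."""
    return abs(actual - estimated) / actual

def mmre(pairs):
    """Mean MRE over (actual, estimated) effort pairs."""
    return sum(mre(a, e) for a, e in pairs) / len(pairs)

projects = [(1200, 900), (800, 850), (3000, 1800)]  # (actual, estimated) hours
print(f"MMRE: {mmre(projects):.2f}")  # 0.24 for this example data
```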
7

Benefits of transactive memory systems in large-scale development

Aivars, Sablis January 2016
Context. Large-scale software development projects consist of many teams, possibly spread across multiple locations, working on large and complex software tasks. Neither an individual team member nor an entire team holds all the knowledge about the software being developed, so teams have to communicate and coordinate their knowledge. Teams and team members in large-scale software development projects must therefore acquire and manage expertise as one of the critical resources for high-quality work. Objectives. We aim at understanding whether software teams in different contexts develop transactive memory systems (TMS) and whether a well-developed TMS leads to performance benefits, as suggested by research in other knowledge-intensive disciplines. Because multiple factors may influence the development of TMS, based on the related TMS literature we also focus on task allocation strategies, task characteristics and management decisions regarding project structure, team structure and team composition. Methods. We use data from two large-scale distributed development companies and 9 teams, including quantitative data collected through a survey and qualitative data from interviews, to measure transactive memory systems and their role in determining team performance. We measure teams' TMS with a latent variable model. Finally, we use focus group interviews to analyze different organizational practices with respect to team management, as a set of decisions based on two aspects: team structure and composition, and task allocation. Results. Data from the two companies and 9 teams were analyzed, and a positive influence of a well-developed TMS on team performance was found. We found that in large-scale software development, teams need not only a well-developed internal TMS but also a well-developed and effective external TMS. Furthermore, we identified practices that help or hinder the development of TMS in large-scale projects. Conclusions. Our findings suggest that teams working in large-scale software development can achieve performance benefits if transactive memory practices within the team are supported by networking practices in the organization.
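The thesis fits a latent variable model to the survey data; as a much simpler illustration, a TMS score can be roughed out by averaging Likert items per dimension of Lewis's TMS scale (specialization, credibility, coordination). The item data and the plain averaging are assumptions, not the thesis's method:

```python
# Simplified sketch: the thesis uses a latent variable model; here we
# merely average Likert responses (1-5) per dimension of Lewis's TMS
# scale to get a rough team-level score. The data is invented.
from statistics import mean

def tms_score(responses):
    """responses: {dimension: [item scores from all team members]}."""
    per_dim = {dim: mean(scores) for dim, scores in responses.items()}
    return per_dim, mean(per_dim.values())

team = {
    "specialization": [4, 5, 4, 3, 4],
    "credibility":    [4, 4, 5, 4, 4],
    "coordination":   [3, 4, 3, 3, 4],
}
dims, overall = tms_score(team)
print(dims, round(overall, 2))
```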
