101

Software Requirements Prioritization Practices in Software Start-ups: A Qualitative Research Based on Start-ups in India

Vajrapu, Rakesh Guptha, Kothwar, Sravika January 2018 (has links)
Context: Requirements prioritization is used in software product management and is concerned with identifying the most valuable requirements from a given set. It is necessary to satisfy the needs of customers, to support stakeholders and, most importantly, for release planning. Irrespective of the size of the organization (small, medium, or large), requirements prioritization is important for minimizing risk during development. However, few studies explore how requirements prioritization is practiced in start-ups. Software start-ups are becoming important suppliers of innovative and software-intensive products. Earlier studies suggest that requirements discovery and validation are the core activities in start-ups. Due to limited resources, however, start-ups need to prioritize which requirements to focus on; getting this wrong leads to wasted resources. While larger organizations may afford such waste, start-ups cannot. Moreover, researchers have found that start-ups are not small versions of large companies and that existing software development practices cannot be transferred directly, given the low rigor of current studies. We therefore conducted an exploratory study of requirements prioritization practices in the context of software start-ups. Objectives: The main aim of our study is to explore the state of the art of requirements prioritization practices used in start-ups. We also identify the challenges associated with these practices and a few possible solutions. Methods: In this qualitative research, we conducted a literature review using sources such as IEEE Xplore, Scopus, and Google Scholar to identify prioritization practices and challenges in general. An interview study using semi-structured interviews was conducted to collect data from practitioners, and thematic analysis was used to analyze the interview data. Results: We identified 15 practices from 8 different start-up companies, with corresponding challenges and possible solutions. Our results show a mixed picture of prioritization practices in start-ups: of the 8 companies, 6 followed formal methods, while in the remaining 2 prioritization was informal and unclear. The results show that the value-based method is the dominant prioritization technique in start-ups, and that customer input and return on investment play a key role compared to other aspects of prioritization. Conclusions: The results of this study provide an understanding of the various requirements prioritization practices in start-ups and the challenges faced in implementing them. These results are validated against answers found in the literature. The solutions identified for the corresponding challenges allow practitioners to approach them in a better way. As this study focused only on Indian software start-up companies, extending it to Swedish software start-ups is recommended to obtain a broader perspective, as is a larger sample size. This study may help future research on requirements engineering in start-ups. It may also help practitioners who intend to start a software start-up company to understand which challenges they may face while prioritizing requirements and how these solutions can mitigate them.
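The abstract reports the value-based method as the dominant prioritization technique in the studied start-ups. As a rough, hedged illustration of that general idea (not taken from the thesis), a start-up might score each candidate requirement on customer value and implementation effort and rank by a weighted difference; the requirement names, scores, and weights below are invented:

```python
# Minimal sketch of value-based requirements prioritization (illustrative only;
# the requirement names, scores, and weighting scheme are assumptions, not data
# from the thesis).

def prioritize(requirements, value_weight=0.7, cost_weight=0.3):
    """Rank requirements by a weighted value-minus-effort score (higher is better)."""
    def score(req):
        return value_weight * req["customer_value"] - cost_weight * req["effort"]
    return sorted(requirements, key=score, reverse=True)

backlog = [
    {"id": "R1", "name": "Payment integration", "customer_value": 9, "effort": 5},
    {"id": "R2", "name": "Dark mode",           "customer_value": 3, "effort": 2},
    {"id": "R3", "name": "Onboarding wizard",   "customer_value": 7, "effort": 4},
]

for req in prioritize(backlog):
    print(req["id"], req["name"])
```

With these invented numbers, the ranking comes out R1, R3, R2; in practice the scores would come from customer input and return-on-investment estimates, the two aspects the thesis highlights.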
102

Strategizing and Evaluating the Onboarding of Software Developers in Large-Scale Globally Distributed Legacy Projects

Britto, Ricardo January 2017 (has links)
Background: Recruitment and onboarding of software developers are essential steps in software development undertakings. The need for adding new people is often associated with large-scale, long-living projects and with globally distributed projects. The former are challenging because they may contain large amounts of legacy (and often complex) code (legacy projects). The latter are challenging because the inability to find sufficient resources in-house may lead to onboarding people at a distance, often across many distinct sites. While onboarding is of great importance for companies, there is little research about the challenges and implications associated with onboarding software developers and teams in large-scale globally distributed projects with large amounts of legacy code. Furthermore, no study has proposed systematic approaches to support the design of onboarding strategies and the evaluation of onboarding results in this context. Objective: The aim of this thesis is two-fold: i) identify the challenges and implications associated with onboarding software developers and teams in large-scale globally distributed legacy projects; and ii) propose solutions to support the design of onboarding strategies and the evaluation of onboarding results in such projects. Method: In this thesis, we employed literature review, case study, and business process modeling. The main case investigated is the development of a legacy telecommunication software product at Ericsson. Results: The results show that the performance (productivity, autonomy, and lead time) of new developers and teams onboarded in remote locations in large-scale distributed legacy projects is much lower than the performance of mature teams, which suggests that new teams have a considerable performance gap to overcome. Furthermore, we learned that onboarding problems can be amplified by the following challenges: the complexity of the product and technology stack, distance to the main source of product knowledge, lack of team stability, misaligned training expectations, and lack of formalism and control over the onboarding strategies employed at different sites of globally distributed projects. To help companies address the challenges identified in this thesis, we propose a process to support the design of onboarding strategies and the evaluation of onboarding results. Conclusions: The results show that scale, distribution, and complex legacy code may make onboarding more difficult and demand longer periods of time for new developers and teams to achieve high performance. This means that onboarding in large-scale globally distributed legacy projects must be planned well ahead, and companies must be prepared to provide extended periods of mentoring by expensive and scarce resources, such as software architects. Failure to foresee and plan for such resources may result in inaccurate effort estimates on the one hand and unavailable mentors on the other. The process put forward herein can help companies deal with these problems through more systematic, effective, and repeatable onboarding strategies.
103

Multidisciplinary design automation: Working with product model extensions

Heikkinen, Tim January 2018 (has links)
Being able to efficiently and effectively provide custom products has been identified as a competitive advantage for manufacturing organizations. Product configuration has been shown to be an effective way of achieving this through a modularization, product platform, and product family development approach. A core assumption behind product configuration is that the module variants and their constraints can be explicitly defined as product knowledge in terms of geometry and configuration rules. This is not always the case, however. Many companies require extensive engineering to develop each module variant and cannot afford to do so merely to cover potential customer requirements within a predictable future. Instead, they try to define the module variants implicitly, in terms of the process by which they can be realized. In this way they can realize module variants on demand, efficiently and effectively, once the customer requirements are better defined and the development can be justified by the increased probability of profiting from the outcome. Design automation, in its broadest definition, deals with computerized engineering support that effectively and efficiently utilizes pre-planned reusable assets to progress the design process. Several successful implementations have been reported in the literature, but widespread use is yet to be seen. Design automation involves the explicit definition of engineering process knowledge, which results in a collection of methods and models that can come in the form of computer scripts, parametric CAD models, template spreadsheets, etc. These methods and models are developed using various computer tools and maintained within the different disciplines involved, such as geometric modeling, simulation, or manufacturing, and they depend on each other through the product model. To implement, utilize, and manage design automation systems in or across multiple disciplines, it is important first to understand how the disciplinary methods and models depend on each other through the product model, and then how these relations should be constructed to support the users without negatively affecting other aspects, such as modeling flexibility, minimal documentation, and software tool independence. To support the successful implementation and management of design automation systems, the work presented here has focused on understanding how some digital product model constituents are, can, and, to some extent, should be extended to concretize relations between methods and models from different tools and disciplines. The work was carried out by interviewing Swedish industrial companies, performing technical reviews, performing literature reviews, and developing prototypes, resulting in an increased understanding and the consequent development of a conceptual framework that highlights aspects relating to the choice of extension techniques.
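The abstract's core assumption, that module variants and their constraints can be captured as explicit configuration rules, can be illustrated with a small hedged sketch; the module names, variants, and rules below are invented for illustration and do not come from the thesis:

```python
# Illustrative sketch of explicit configuration rules over module variants
# (module names and rules are invented; they are not from the thesis).

# A candidate product configuration: one variant chosen per module.
configuration = {"engine": "electric", "gearbox": "single_speed", "cab": "standard"}

# Each rule is a predicate over a configuration plus a message shown on violation.
rules = [
    (lambda cfg: not (cfg["engine"] == "electric" and cfg["gearbox"] == "manual"),
     "An electric engine cannot be combined with a manual gearbox."),
    (lambda cfg: cfg["cab"] in {"standard", "extended"},
     "Unknown cab variant."),
]

violations = [msg for rule, msg in rules if not rule(configuration)]
print("configuration valid" if not violations else violations)
```

When such rules cannot be written down up front, which is the situation the thesis addresses, the knowledge instead has to be captured as the engineering process that produces each variant on demand.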
104

Tool orchestration for modeling, verification and analysis of collaborating autonomous machines

Mrvaljevic, Pavle January 2020 (has links)
A system-of-systems (SoS) is a collective of multiple system units with a common purpose. In this thesis, the Volvo Electric Site is investigated as a case study in which safety and performance properties of collaborating autonomous machines are evaluated and analyzed. Formal methods in software engineering aim to prove the correctness of a system by evaluating its mathematical model. We use an actor-based framework, AdaptiveFlow, for modeling system functionalities and timing features. The aim is to link abstract model evaluation to the simulation of real-world cases deployed in the VCE Simulator. In addition, it is necessary to make sure that AdaptiveFlow provides correct-by-design scenarios. The verification is conducted by developing an orchestration method between the AdaptiveFlow framework and the VCE Simulator. A tool named VMap was developed in this thesis to automatically map the input models of AdaptiveFlow onto the VCE Simulator and thereby make the orchestration possible. Furthermore, AdaptiveFlow is used in two different ways: as a design tool and as an analysis tool. The models created in AdaptiveFlow are mapped directly to the VCE Simulator with the VMap tool, and the VCE Simulator is used as a testbed for checking these models. The outcome of this thesis is the establishment of a mapping pattern between AdaptiveFlow inputs and the VCE Simulator, realized by developing the VMap tool for automatic mapping. It was shown that there is a natural mapping between the AdaptiveFlow models and the VCE Simulator inputs, and that VMap allows the desired scenarios to be reached quickly. Through the development of three different cases, the results show that it is possible to design safe and optimal scenarios by orchestrating AdaptiveFlow and the VCE Simulator using the VMap tool, and they show the correlation between results from AdaptiveFlow and the VCE Simulator.
105

Input Partitioning Impact on Combinatorial Test Coverage

Ballkoci, Rea January 2020 (has links)
Software testing is a crucial activity in the software lifecycle, as it can provide a certain confidence that the software will behave according to its specified behavior. However, due to the large input space, it is almost impossible to check all the combinations that might lead to failures. Input partitioning and combinatorial testing are two techniques that can partially solve the test creation and selection problem by minimizing the number of test cases to be executed. These techniques work closely together: input partitioning provides a selection of values that are more likely to expose software faults, and combinatorial testing generates all possible combinations among two to six parameters. The aim of this thesis is to study how input partitioning impacts combinatorial test coverage, in terms of the measured t-way coverage percentage and the number of test cases missing to achieve full t-way coverage. For this purpose, six manually written test suites were provided by Bombardier Transportation. We performed an experiment in which combinatorial coverage was measured for four systematic input partitioning strategies using the Combinatorial Coverage Measurement (CCM) tool. The strategies are based on interface documentation, where we can partition using information about data types or predefined partitions, and on specification documentation, where we can partition with or without Boundary Value Analysis (BVA). The results show that input partitioning affects combinatorial test coverage through two factors: the number of partitions or intervals, and the number of representative values per interval. A high number of values leads to a number of combinations that increases exponentially. The strategy based on specifications without BVA always scored the highest coverage per test suite, ranging from 22% to 67%, while the strategy with predefined partitions almost always had the lowest score, ranging from 4% to 41%. The strategy based on data types consistently had the second highest combinatorial coverage, ranging from 8% to 56%, while the strategy with BVA varied strongly depending on the number of non-Boolean parameters and their respective numbers of boundary values, ranging from 3% to 41%. In our study, other factors also affected the combinatorial coverage, such as the number of manually created test cases, the data types of the parameters, and the values present in the test suites. In conclusion, an input partitioning strategy must be chosen carefully to exercise parts of the system that can potentially reveal unintended behavior. At the same time, a test engineer should also consider the number of chosen values: different strategies generate different combinations and thus influence the obtained combinatorial coverage. Tools that automate the generation of combinations are advised in order to achieve 100% combinatorial coverage.
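As a rough illustration of what measuring t-way coverage means, the sketch below counts how many of the required 2-way (pairwise) value combinations a small test suite actually exercises. It is a simplified stand-in for the CCM tool, and the parameters, partitions, and test cases are invented, not taken from the Bombardier test suites:

```python
# Simplified sketch of 2-way (pairwise) combinatorial coverage measurement.
from itertools import combinations, product

# Representative values chosen per partition for each parameter (invented).
parameters = {
    "speed": ["low", "high"],
    "door":  ["open", "closed"],
    "brake": ["on", "off"],
}

# A small, manually written test suite (invented).
test_suite = [
    {"speed": "low",  "door": "closed", "brake": "on"},
    {"speed": "high", "door": "closed", "brake": "off"},
]

def pair_key(p1, v1, p2, v2):
    """Order-independent key for a combination of two (parameter, value) pairs."""
    return tuple(sorted([(p1, v1), (p2, v2)]))

# All value pairs that full 2-way coverage requires.
required = {
    pair_key(p1, v1, p2, v2)
    for (p1, vals1), (p2, vals2) in combinations(parameters.items(), 2)
    for v1, v2 in product(vals1, vals2)
}

# Pairs actually exercised by the test suite.
covered = {
    pair_key(p1, tc[p1], p2, tc[p2])
    for tc in test_suite
    for p1, p2 in combinations(tc, 2)
}

print(f"2-way coverage: {len(covered & required) / len(required):.0%}")
```

With these toy values the two test cases cover 6 of the 12 required pairs (50%); adding a third value to any parameter grows the required set multiplicatively, which is the exponential effect the abstract describes.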
106

An industrial case study to improve test case execution time

Yadavalli, Tejaswy January 2020 (has links)
No description available.
107

Fundamental challenges of machine learning: Identification of face masks in images

Bile Excell, Linus January 2021 (has links)
Machine learning is a widely used technique that can be applied in many areas, including image recognition. The aim of this project is to gain a basic understanding of how machine learning works, including the technical prior knowledge required and the challenges that exist in a self-learning system. This was examined by creating and optimizing a system that identifies whether a person in an image is wearing a face mask or not. The main focus was on collecting and managing data, but above all on optimizing several different hyperparameters. The work began by gathering information to build a basic understanding of the field. The system was then trained, validated, and tested, and it was adjusted by applying different hyperparameters to understand how they affect the result. This was done in Keras, and the results were visualized with Matplotlib. The results showed that one challenge for a self-learning system is to reduce overfitting, which is why applying the dropout hyperparameter proved important. The main challenge in using machine learning appeared to be understanding what affects the result, since there are many parameters and testing them all takes a long time. Despite this, a system that can determine whether a person is wearing a face mask or not was created and optimized sufficiently well given the amount of data, time, and prior knowledge available. This suggests that machine learning can be useful both in this area and in many other areas of society.
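The abstract names Keras, dropout, and overfitting but gives no architecture, so the following is only a hedged sketch of what such a binary mask / no-mask classifier with dropout could look like; the layer sizes, input shape, and hyperparameter values are assumptions, not the network from the thesis:

```python
# Minimal sketch of a mask / no-mask classifier with dropout in Keras,
# assuming 128x128 RGB input images (an assumption, not the thesis setup).
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),                    # dropout to reduce overfitting
    layers.Dense(1, activation="sigmoid"),  # mask / no mask
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

# Training would then look roughly like:
#   history = model.fit(train_ds, validation_data=val_ds, epochs=20)
# and history.history can be plotted with matplotlib, as in the thesis.
```

The dropout rate (here 0.5) is exactly the kind of hyperparameter the project tunes to trade training accuracy against overfitting on the validation set.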
108

Software for Safe Mobile Robots with ROS 2 and Rebeca

Sharovarskyi, Kostiantyn January 2020 (has links)
Robotic systems are involved in our daily lives, and the amount of traction they have received is non-negligible. In spite of their sizeable popularity, the quality of their software is often overlooked, which may undermine an important property of robotic systems: safety. The movement of mobile robots introduces an obvious safety concern, since a collision between a robot and its surroundings can lead to disastrous results. By augmenting the development process with formal verification techniques, one can decrease the probability of such failures. In order to facilitate close integration of safety assurance and the development process, we propose a method for developing safe software for ROS 2-powered mobile robots. We conduct a case study by going through all the proposed steps and reporting the results. The case study focuses on a scenario in which mobile robots move from a starting position to a target position. Models of various ROS 2 components used in mobile robots are developed. Extensibility is a core property of our model: we show that it allows both single- and multi-robot scenarios to be verified. Furthermore, this flexibility allowed us to model two path-finding approaches: a naive approach without collision avoidance and an efficient approach based on the A* algorithm. The proposed method is tightly coupled with modelling; hence, the abstraction leads to some mismatches between the model and reality. We report such mismatches by deploying the developed software to a simulation environment (i.e. Gazebo) and examining the behavior of the robot(s).
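The efficient path-finding approach is described as based on the A* algorithm. Purely for illustration, a generic, minimal A* grid search is sketched below; the thesis models this behavior in Rebeca and deploys it via ROS 2 and Gazebo, and the grid and coordinates here are invented:

```python
# Minimal A* grid path-finding sketch with a Manhattan-distance heuristic.
import heapq

def a_star(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle; start/goal: (row, col) tuples."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # (f = g + h, g, node)
    came_from, best_g = {}, {start: 0}
    while open_heap:
        _, g, current = heapq.heappop(open_heap)
        if current == goal:              # reconstruct the path backwards
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (current[0] + dr, current[1] + dc)
            if not (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])):
                continue                  # outside the grid
            if grid[nxt[0]][nxt[1]] == 1:
                continue                  # blocked cell
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                came_from[nxt] = current
                heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt))
    return None                           # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))       # route around the obstacle row
```

The naive approach from the abstract would instead drive straight toward the target without checking occupied cells, which is where the collision-avoidance difference shows up in verification.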
109

Critical success factors in Agile software development projects

Walander, Tomas, Larsson, David January 2015 (has links)
The demand for combining Agile methodologies with large organizations is growing as IT plays a larger role in modern business, even in traditional manufacturing companies. In such organizations, management feel they are losing the ability to plan and control as developers increasingly adopt Agile methodologies. This mismatch leads to frustration and creates barriers to fully Agile software development. This report therefore aims to evaluate which factors affect Agile software development projects in an organizational context, and in particular how these factors can be monitored through the effective use of measures. This master thesis project conducted a case study at Scania IT, a subsidiary of the truck manufacturer Scania, as well as an extensive literature review, which together helped identify several critical success factors for combining Agile methodologies with an organization. The report concludes that several aspects are important when agility is introduced into a functional organization, and also when it is combined with a project stage-gate model. Moreover, it was found that measures, in particular software metrics, can greatly aid the organization in overcoming several organizational barriers. To succeed, however, corrective actions must be defined that prevent a measure from becoming yet another unused statistic and instead help the organization learn and improve its way of working.
110

Towards Guidelines for Conducting Software Process Simulation in Industry

bin Ali, Nauman January 2013 (has links)
Background: Since the 1950s, explicit software process models have been used for planning, executing, and controlling software development activities. To overcome the limitations of static models in capturing the inherent dynamism of software development, Software Process Simulation Modelling (SPSM) was introduced in the late 1970s. SPSM has been used to address various challenges, e.g. estimation, planning, and process assessment. The simulation models developed over the years have varied in scope, purpose, approach, and application domain. However, there is a need to aggregate the evidence regarding the usefulness of SPSM for achieving its intended purposes. Objective: This thesis aims to facilitate the adoption of SPSM in industrial practice by exploring two directions. Firstly, it aims to establish the usefulness of SPSM for its intended purposes, e.g. for planning, for training, and as an alternative way to study real-world software development (industrial and open source). Secondly, it aims to define and evaluate a process for conducting SPSM studies in industry. Method: Two systematic literature reviews (SLRs), a literature review, a case study, and an action research study were conducted. A literature review of existing SLRs was done to identify strategies for selecting studies. The resulting study selection process was used in an SLR to capture and aggregate evidence regarding the usefulness of SPSM. Another SLR was used to identify existing process descriptions of how to conduct an SPSM study. The consolidated process and associated guidelines identified in this review were used in an action research study to develop a simulation model of the testing process at a large telecommunication vendor. The action research was preceded by a case study to understand the testing process at the company. Results: A study selection process based on the strategies identified in the literature was proposed. It was found to systematize selection and to support inclusiveness with reasonable additional effort in an SLR of the SPSM literature. The SPSM studies identified in the literature scored poorly on rigor and relevance criteria and lacked evaluation of SPSM for its intended purposes. Lastly, based on the literature, a six-step process for conducting an SPSM study was used to develop a System Dynamics model of the testing process for training purposes at the company. Conclusion: The findings identify two potential directions for facilitating SPSM adoption. First, by learning from other disciplines that have practiced simulation for a longer time; it was evident how similar the consolidated process for conducting an SPSM study is to the process used in simulation in general. Second, the existing work on SPSM can at best be classified as a strong "proof of concept" that SPSM can be useful in real-world software development. Thus, there is a need to evaluate and report the usefulness of SPSM for its intended purposes with scientific rigor.
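To give a flavor of what a System Dynamics model of a testing process involves, the sketch below simulates a single defect-backlog stock fed by an arrival flow and drained by a capacity-limited testing flow, using simple Euler integration. The stock-and-flow structure and all rate values are illustrative assumptions, not the Ericsson model developed in the thesis:

```python
# Minimal System Dynamics-style sketch: one stock (defect backlog) with an
# inflow (defects reported) and an outflow (defects resolved by testers),
# simulated with Euler integration. Structure and rates are invented.

def simulate(weeks=20, dt=0.25, arrival_rate=30.0, testers=5, defects_per_tester=5.0):
    backlog, time = 0.0, 0.0
    history = []
    while time < weeks:
        inflow = arrival_rate                                        # defects/week reported
        outflow = min(backlog / dt, testers * defects_per_tester)    # limited by capacity
        backlog += (inflow - outflow) * dt
        time += dt
        history.append((round(time, 2), round(backlog, 1)))
    return history

for t, b in simulate()[::8]:      # print roughly every two weeks
    print(f"week {t:5.2f}: backlog {b}")
```

With these invented rates the backlog grows by about five defects per week, the kind of "what if" behavior such a model lets trainees explore by varying staffing or arrival rates before touching the real process.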
