21

Predicting likelihood of requirement implementation within the planned iteration

Dehghan, Ali 31 May 2017 (has links)
There has been significant interest in the estimation of time and effort in fixing defects among both software practitioners and researchers over the past two decades. However, most of the focus has been on predicting the time and effort needed to resolve bugs or other low-level tasks, without much regard to predicting the time needed to complete high-level requirements, a critical step in release planning. In this thesis, we describe a mixed-method empirical study on three large IBM projects in which we developed and evaluated a process of training a predictive model on a set of 29 features in nine categories in order to predict whether a requirement will be completed within its planned iteration. We conducted feature engineering through iterative interviews with IBM software practitioners as well as analysis of the large development and project management repositories of these three projects. Using machine learning techniques, we were able to make predictions on requirement completion time at four different stages of the requirement lifetime. Given our industrial partner’s interest in high precision over recall, we then adopted a cost-sensitive learning method and maximized the precision of predictions (ranging from 0.8 to 0.97) while maintaining an acceptable recall. We also ranked the features based on their relative importance to the optimized predictive model. We show that although satisfactory predictions can be made at early stages, even on the first day of requirement creation, prediction performance improves over time by taking advantage of requirements’ progress data. Furthermore, the feature importance ranking results show that although the importance of features is highly dependent on the project and prediction stage, certain features (e.g. requirement creator, time remaining to the end of the iteration, time since the last requirement summary change, and number of times the requirement has been replanned for a new iteration) emerge as important across most projects and stages, implying worthwhile future research directions for both researchers and practitioners. / Graduate
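The abstract does not give the model details, but a minimal sketch of precision-focused, cost-sensitive classification in the spirit described above might look as follows (assuming scikit-learn; the synthetic features, class weights, and target precision are illustrative assumptions, not the thesis's actual pipeline):

```python
# Sketch: cost-sensitive classifier tuned for high precision over recall.
# X stands in for a matrix of 29 requirement features; y marks whether a
# requirement was completed within its planned iteration. All values are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 29))                       # placeholder feature matrix
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int) # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Penalize false positives more heavily than false negatives (cost-sensitive learning).
clf = RandomForestClassifier(n_estimators=200, class_weight={0: 5, 1: 1}, random_state=0)
clf.fit(X_train, y_train)

# Choose the lowest decision threshold that still reaches a target precision.
probs = clf.predict_proba(X_test)[:, 1]
precision, recall, thresholds = precision_recall_curve(y_test, probs)
target = 0.9
ok = precision[:-1] >= target        # thresholds has one fewer entry than precision
threshold = thresholds[ok][0] if ok.any() else 0.5
print(f"chosen threshold = {threshold:.2f}")

# Rank features by their relative importance to the fitted model.
ranking = np.argsort(clf.feature_importances_)[::-1]
print("most important feature indices:", ranking[:5])
```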
22

Effort Modeling and Programmer Participation in Open Source Software Projects

Koch, Stefan January 2005 (has links) (PDF)
This paper analyses and develops models for programmer participation and effort estimation in open source software projects. This has not yet been a centre of research, although any results would be of high importance for assessing the efficiency of this development model and for various decision-makers. In this paper, a case study is used to generate hypotheses regarding the manpower function and effort modeling; a large data set retrieved from a project repository is then used to test these hypotheses. The main results are that Norden-Rayleigh-based approaches need to be complemented to account for the addition of new features during the lifecycle to be usable in this context, and that effort models based on programmer participation yield significantly lower effort estimates than those based on output metrics such as lines of code. (author's abstract) / Series: Working Papers on Information Systems, Information Business and Operations
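For context, the Norden-Rayleigh model referenced above describes cumulative effort and staffing with a Rayleigh curve. A small illustrative computation is sketched below; the parameter values are assumptions for demonstration, not figures from the paper:

```python
# Sketch of the Norden-Rayleigh manpower model referenced above.
# E(t) = K * (1 - exp(-a * t^2))        cumulative effort up to time t
# m(t) = dE/dt = 2*K*a*t * exp(-a*t^2)  manpower (effort per unit time)
# K is total project effort; a is a shape parameter tied to the time of peak staffing.
import math

def cumulative_effort(t, K, a):
    return K * (1.0 - math.exp(-a * t * t))

def manpower(t, K, a):
    return 2.0 * K * a * t * math.exp(-a * t * t)

K = 500.0                     # assumed total effort in person-months
t_peak = 12.0                 # assumed month of peak staffing
a = 1.0 / (2.0 * t_peak ** 2) # peak of m(t) occurs at t = 1 / sqrt(2a)

for month in (6, 12, 24, 36):
    print(f"month {month:2d}: staffing ~ {manpower(month, K, a):5.1f}, "
          f"cumulative effort ~ {cumulative_effort(month, K, a):6.1f}")
```

The paper's point is that this curve alone underestimates late-lifecycle activity when new features keep being added, so the model has to be extended rather than applied as-is.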
23

Reassembling scholarly publishing: open access, institutional repositories and the process of change

Kennan, Mary Anne, Information Systems, Technology & Management, Australian School of Business, UNSW January 2008 (has links)
Open access (OA) to scholarly publishing is encouraged and enabled by new technologies such as the Internet, the World Wide Web, their standards and protocols, and search engines. Institutional repositories (IR), as the most recent technological incarnations of OA, enable researchers and their institutions to make accessible the outputs of research. While many OA repositories are being implemented, researchers are surprisingly slow in adopting them. While activists promote OA as emanating from the ideals of scholarship, others revile OA as undermining scholarly publishing's economic base and therefore undermining quality control and peer review. Change is occurring but there are contested views and actions. This research seeks to increase understanding of the issues by addressing the research questions: "How and why is open access reassembling scholarly publishing?" and "What role does introducing an open access institutional repository to researchers play in this reassembly?" This thesis contributes to answering these questions by investigating two IR implementations and the research communities they serve. The research was conducted as an Actor-Network Theory (ANT) field study, where the actors were followed and their relations and controversies explored in action as their landscape was being contested. The research found that central to our understanding of the reassembling of scholarly publishing is the agency emerging from the sociomaterial relations of the OA vision, IR technology and researchers. Being congruent with the aims of scholarship, and also being flexible and mutable, the OA vision enrols researchers to enact it through OA IR, thus transforming scholarly communications. This is counteracted by publishers aligned with the academic reward network within traditional publishing networks. In this delicate choreography the OA IR, its developers, researchers, university administrators and policy makers are emerging as critical actors with their more or less congruent vision of OA enacted in their network. The comparative ANT account of the two IR life stories shows how such enactment depends on the degree to which different OA visions could converge, enrol and mobilise other actors, in particular institutional actors, such as a mandate, in transforming researchers' publishing behaviour. This thesis contributes to a novel and in-depth understanding of OA and IR and their roles in reassembling scholarly publishing. It also contributes to the use of ANT in information systems research by advancing a sociomaterial ontology which recognises the intertwining of human and material agency.
24

Extracting Structured Knowledge from Textual Data in Software Repositories

Hasan, Maryam 06 1900 (has links)
Software team members, as they communicate and coordinate their work with others throughout the life-cycle of their projects, generate different kinds of textual artifacts. Despite the variety of work in the area of mining software artifacts, relatively little research has focused on communication artifacts. Software communication artifacts, in addition to source code artifacts, contain useful semantic information that is not fully explored by existing approaches. This thesis presents the development of a text analysis method and tool to extract and represent useful pieces of information from a wide range of textual data sources associated with software projects. Our text analysis system integrates Natural Language Processing techniques and statistical text analysis methods with software domain knowledge. The extracted information is represented as RDF-style triples which constitute interesting relations between developers and software products. We applied the developed system to analyze five different types of textual data, i.e., source code commits, bug reports, email messages, chat logs, and wiki pages. In the evaluation of our system, we found its precision to be 82%, its recall 58%, and its F-measure 68%.
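As a rough illustration of what extracting developer–artifact relations as RDF-style triples can look like, consider the sketch below; the regex patterns, predicate names, and input records are invented for illustration and are not the thesis's actual extraction rules:

```python
# Sketch: extracting (developer, relation, artifact) triples from commit messages.
# Patterns, predicates, and the sample commits are illustrative assumptions.
import re

COMMITS = [
    {"author": "alice", "message": "Fix bug #142 in parser.c"},
    {"author": "bob",   "message": "Refactor utils.py and update README.md"},
]

PATTERNS = [
    (re.compile(r"\bfix(?:es|ed)?\b.*?#(\d+)", re.I), "fixesBug"),
    (re.compile(r"\b([\w/]+\.(?:c|py|md|java))\b", re.I), "modifiesFile"),
]

def extract_triples(commit):
    """Return RDF-style (subject, predicate, object) triples for one commit."""
    triples = []
    for pattern, predicate in PATTERNS:
        for match in pattern.findall(commit["message"]):
            triples.append((commit["author"], predicate, match))
    return triples

for c in COMMITS:
    for triple in extract_triples(c):
        print(triple)

# The reported F-measure follows directly from the precision and recall figures:
precision, recall = 0.82, 0.58
f_measure = 2 * precision * recall / (precision + recall)
print(f"F-measure ~ {f_measure:.2f}")   # ~0.68, consistent with the abstract
```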
25

Repositories Recreated : Working Towards Improved Interoperability and Integration by a Co-operative Approach in Sweden

Andersson, Stefan, Svensson, Aina January 2013 (has links)
Recently the technological and organizational infrastructures of institutional repositories have been questioned. For example, the British so-called Finch report from last summer argued that further development and higher standards of accessibility are needed in order to make repositories better integrated and interoperable, and ultimately to bring greater use by both authors and readers. Not only the technical frameworks and presumably low usage levels are criticized, but also the lack of “clear policies on such matters as the content they will accept, the uses to which it may be put, and the role that they will play in preservation”. The report concludes that: “In practice patterns of deposit are patchy”. As in the UK, today all universities and university colleges in Sweden, except a couple of very small and specialized ones, have an institutional repository. A majority (around 80%) work together on a co-operative basis within the DiVA Publishing System, with the Electronic Publishing Centre at Uppsala University Library acting as the technical and organizational hub. Because the system is jointly funded and the members contribute according to their size, it has been possible even for smaller institutions with limited resources to run a repository with exactly the same functionality as the biggest universities. In this presentation we want to demonstrate the ever-increasing importance of institutional repositories in Sweden. Having started more than a decade ago, the DiVA Consortium has for some time been addressing the problems now raised by the Finch report in a number of areas.
26

DRACA: Decision-support for Root Cause Analysis and Change Impact Analysis

Nadi, Sarah 12 1900 (has links)
Most companies relying on an Information Technology (IT) system for their daily operations heavily invest in its maintenance. Tools that monitor network traffic, record anomalies and keep track of the changes that occur in the system are usually used. Root cause analysis and change impact analysis are two main activities involved in the management of IT systems. Currently, there exists no universal model to guide analysts while performing these activities. Although the Information Technology Infrastructure Library (ITIL) provides a guide to the organization and structure of the tools and processes used to manage IT systems, it does not provide any models that can be used to implement the required features. This thesis focuses on providing simple and effective models and processes for root cause analysis and change impact analysis through mining useful artifacts stored in a Configuration Management Database (CMDB). The CMDB contains information about the different components in a system, called Configuration Items (CIs), as well as the relationships between them. Change reports and incident reports are also stored in a CMDB. The result of our work is the Decision support for Root cause Analysis and Change impact Analysis (DRACA) framework which suggests possible root cause(s) of a problem, as well as possible CIs involved in a change set, based on different proposed models. The contributions of this thesis are as follows:
- An exploration of data repositories (CMDBs) that have not been previously attempted in the mining software repositories research community.
- A causality model providing decision support for root cause analysis based on this mined data.
- A process for mining historical change information to suggest CIs for future change sets based on a ranking model. Support and confidence measures are used to make the suggestions.
- Empirical results from applying the proposed change impact analysis process to industrial data. Our results show that the change sets in the CMDB were highly predictive, and that with a confidence threshold of 80% and a half-life of 12 months, an overall recall of 69.8% and a precision of 88.5% were achieved.
- An overview of lessons learned from using a CMDB, and the observations we made while working with the CMDB.
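A rough sketch of the kind of support/confidence ranking with a 12-month half-life described in the contributions above is shown below; the data, weighting scheme, and threshold are illustrative assumptions, not DRACA's actual implementation:

```python
# Sketch: rank candidate CIs for a change set using co-change confidence,
# with older change sets down-weighted by an exponential half-life.
import math
from collections import defaultdict

HALF_LIFE_MONTHS = 12.0

# Historical change sets: (age in months, CIs changed together). Sample data only.
HISTORY = [
    (1,  {"web_server", "load_balancer"}),
    (3,  {"web_server", "database"}),
    (14, {"web_server", "load_balancer"}),
    (20, {"database", "backup_service"}),
]

def weight(age_months):
    """Exponential decay: a change set loses half its weight every half-life."""
    return 0.5 ** (age_months / HALF_LIFE_MONTHS)

def suggest(changed_ci, min_confidence=0.5):
    """Suggest CIs that historically co-changed with `changed_ci`."""
    co_change = defaultdict(float)
    total = 0.0
    for age, cis in HISTORY:
        w = weight(age)
        if changed_ci in cis:
            total += w
            for other in cis - {changed_ci}:
                co_change[other] += w
    if total == 0.0:
        return []
    ranked = [(other, w / total) for other, w in co_change.items()]  # confidence
    return sorted((r for r in ranked if r[1] >= min_confidence),
                  key=lambda r: r[1], reverse=True)

print(suggest("web_server"))   # e.g. [('load_balancer', 0.62...)]
```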
28

Decomposition mechanisms related to Hanford waste: characterization of NO⁻ from organic nitroxyl derivatives

Belcher, Marcus Anthony 08 1900 (has links)
No description available.
29

Enabling Large-Scale Mining Software Repositories (MSR) Studies Using Web-Scale Platforms

Shang, Weiyi 31 May 2010 (has links)
The Mining Software Repositories (MSR) field analyzes software data to uncover knowledge and assist software development. Software projects and products continue to grow in size and complexity. In-depth analysis of these large systems and their evolution is needed to better understand the characteristics of such large-scale systems and projects. However, classical software analysis platforms (e.g., Prolog-like, SQL-like, or specialized programming scripts) face many challenges when performing large-scale MSR studies. Such platforms rarely scale easily out of the box. Instead, they often require analysis-specific, one-time ad hoc scaling tricks and designs that are not reusable for other types of analysis and that are costly to maintain. We believe that the web community has faced many of the scaling challenges now facing the software engineering community, as it copes with the enormous growth of web data. In this thesis, we report on our experience in using MapReduce and Pig, two web-scale platforms, to perform large MSR studies. Through our case studies, we carefully demonstrate the benefits and challenges of using web platforms to prepare (i.e., Extract, Transform, and Load, ETL) software data for further analysis. The results of our studies show that: 1) web-scale platforms provide an effective and efficient platform for large-scale MSR studies; 2) many of the web community’s guidelines for using web-scale platforms must be modified to achieve optimal performance for large-scale MSR studies. This thesis will help other software engineering researchers who want to scale their studies. / Thesis (Master, Computing) -- Queen's University, 2010-05-28 00:37:19.443
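As a toy illustration of the MapReduce programming model applied to software data, the sketch below simulates the map, shuffle, and reduce phases in plain Python; the input records and the job (counting how often each file changes) are invented examples rather than the thesis's actual Hadoop or Pig jobs:

```python
# Toy simulation of a MapReduce-style ETL step over commit records.
# A real study would run equivalent map/reduce functions on a Hadoop cluster
# or express them as a Pig script; this version only illustrates the model.
from itertools import groupby
from operator import itemgetter

COMMITS = [
    {"id": "a1", "files": ["src/parser.c", "src/lexer.c"]},
    {"id": "b2", "files": ["src/parser.c"]},
    {"id": "c3", "files": ["docs/readme.md", "src/parser.c"]},
]

def map_phase(commit):
    """Emit a (file, 1) pair for every file touched by a commit."""
    for path in commit["files"]:
        yield (path, 1)

def reduce_phase(key, values):
    """Sum the counts emitted for one file."""
    return (key, sum(values))

# Map, then shuffle (sort + group by key), then reduce.
pairs = sorted(kv for c in COMMITS for kv in map_phase(c))
results = [reduce_phase(key, (v for _, v in group))
           for key, group in groupby(pairs, key=itemgetter(0))]
print(results)   # [('docs/readme.md', 1), ('src/lexer.c', 1), ('src/parser.c', 3)]
```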
30

TECHNIQUES FOR IMPROVING SOFTWARE DEVELOPMENT PROCESSES BY MINING SOFTWARE REPOSITORIES

Dhaliwal, Tejinder 08 September 2012 (has links)
Software repositories such as source code repositories and bug repositories record information about the software development process. By analyzing the rich data available in software repositories, we can uncover interesting information. This information can be leveraged to guide software developers, or to automate software development activities. In this thesis we investigate two activities of the development process: selective code integration and grouping of field crash-reports, and use the information available in software repositories to improve each of the two activities. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2012-09-04 12:26:59.388
