  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A case study of cross-branch porting in Linux Kernel

Hua, Jinru 23 July 2014 (has links)
To meet the requirements of different stakeholders, branches are widely used to maintain multiple product variants simultaneously. For example, the Linux Kernel has a main development branch, known as the mainline; 35 branches that maintain older product versions, called stable branches; and hundreds of branches for experimental features. To maintain multiple branch-based product variants in parallel, developers often port new features or bug-fixes from one branch to another. In particular, the process of propagating bug-fixes or feature additions to an older version is commonly called backporting. Prior to our study, backporting practices in large-scale projects had not been systematically studied, and this lack of empirical knowledge makes it difficult to improve the current backporting process in industry. We hypothesized that cross-branch porting is frequent, repetitive, and error-prone, and that it requires significant effort for developers to select the patches that need to be backported and then apply them to the target implementation. We carried out two complementary studies to examine this hypothesis. To investigate the extent and effort of porting practice, this thesis first conducted a quantitative study of backporting activities in the Linux Kernel over eight years of version history, using data from the main branch and the 35 stable branches. Our study showed that backporting happened at a rate of 149 changes per month and that it took 51 days on average to propagate patches. 40% of changes in the stable branches were ported from the mainline, and 64% of ported patches propagated to more than one branch. Of all backporting changes from the mainline to stable branches, 97.5% were applied without any manual modification. To understand how Linux Kernel developers keep up to date with development activities across different branches, we carried out an online survey with engineers who, based on our prior analysis of the version history, may have ported code from the mainline to stable branches. We received 14 complete responses. The participants had 12.6 years of Linux development experience on average and are either maintainers or experts of the Linux Kernel. The survey showed that most backporting work was done by maintainers who knew the program quite well. These experienced maintainers could easily identify the edits that needed to be ported and propagate them with all relevant changes to ensure consistency across multiple branches. Inexperienced developers were seldom given an opportunity to backport features or bug-fixes to stable branches. In summary, based on the version history study and the online survey, we concluded that cross-branch porting is frequent, periodic, and repetitive. It requires manual effort to selectively identify the changes that need to be ported, to analyze the dependencies of the selected changes, and to apply all required changes to ensure consistency. To avoid omission mistakes, most backporting work was done only by experienced maintainers who could identify all relevant changes along with the change that needed to be backported; inexperienced developers were excluded from cross-branch porting from the mainline to stable branches in the Linux Kernel.
Our results call for an automated approach to identify the patches that need to be ported, to collect context information that helps developers become aware of relevant changes, and to notify the developers who may be responsible for the corresponding porting events. / text
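One way to reproduce a measurement like the 40% figure above is to mine the stable branches' commit messages for references back to the mainline. The following Python sketch is illustrative only and is not the tooling used in the thesis; it assumes a local clone of a Linux stable repository (the path and branch name are placeholders) and the common, but not universal, convention that backported commits carry a "cherry picked from commit ..." or "commit ... upstream" line.

```python
import re
import subprocess

# Placeholder repository path and stable branch name; adjust for a real clone.
REPO = "/path/to/linux-stable"
STABLE_BRANCH = "linux-4.9.y"

# Message conventions commonly used when a mainline patch is backported.
UPSTREAM_RE = re.compile(
    r"(cherry picked from commit [0-9a-f]{12,40})|"
    r"(commit [0-9a-f]{12,40} upstream)",
    re.IGNORECASE,
)

def stable_commits(repo: str, branch: str):
    """Yield (sha, full commit message) for every commit reachable from branch."""
    out = subprocess.run(
        ["git", "-C", repo, "log", "--format=%H%x1f%B%x1e", branch],
        capture_output=True, text=True, check=True,
    ).stdout
    for record in out.split("\x1e"):
        if record.strip():
            sha, _, body = record.partition("\x1f")
            yield sha.strip(), body

def count_backports(repo: str, branch: str):
    total = ported = 0
    for _, body in stable_commits(repo, branch):
        total += 1
        if UPSTREAM_RE.search(body):
            ported += 1  # the message points back to a mainline commit
    return total, ported

if __name__ == "__main__":
    total, ported = count_backports(REPO, STABLE_BRANCH)
    print(f"{ported}/{total} commits on {STABLE_BRANCH} reference a mainline commit")
```

Extending such a sketch to compare the dates of each matched pair of commits would give propagation delays of the kind reported above (51 days on average).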
2

Approach to Evaluating Clustering Using Classification Labelled Data

Luu, Tuong January 2010 (has links)
Cluster analysis has been identified as a core task in data mining, and many different algorithms have been proposed for it. This diversity, on one hand, provides a wide collection of tools; on the other hand, the profusion of options easily causes confusion. Given a particular task, users do not know which algorithm is good, since it is not clear how clustering algorithms should be evaluated. As a consequence, users often select a clustering algorithm in an ad hoc manner. A major challenge in evaluating clustering algorithms is the scarcity of real data with a "correct" ground-truth clustering. This is in stark contrast to the situation for classification tasks, where there is an abundance of data sets labeled with their correct classifications. As a result, clustering research often relies on labeled data to evaluate and compare the results of clustering algorithms. We present a new perspective on how to use labeled data for evaluating clustering algorithms, and develop an approach for comparing clustering algorithms on the basis of classification-labelled data. We then use this approach to support a novel technique for choosing among clustering algorithms when no labels are available. We use these tools to demonstrate that the utility of an algorithm depends on the specific clustering task. Investigating a set of common clustering algorithms, we demonstrate that there are cases where each one of them outputs better clusterings. In contrast to the current trend of looking for a superior clustering algorithm, our findings demonstrate the need for a variety of different clustering algorithms.
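To make the basic idea concrete, the sketch below scores two common clustering algorithms against the class labels of a labelled data set using an external index (the adjusted Rand index). This illustrates evaluating clusterings with classification-labelled data in general, not necessarily the specific measure or procedure developed in the thesis; the data set, algorithms, and index are arbitrary choices for the example.

```python
# Minimal sketch: score clusterings against classification labels with an
# external index (adjusted Rand index); illustrative only.
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import adjusted_rand_score

X, y = load_iris(return_X_y=True)  # class labels stand in for a "correct" clustering

algorithms = {
    "k-means": KMeans(n_clusters=3, n_init=10, random_state=0),
    "agglomerative": AgglomerativeClustering(n_clusters=3),
}

for name, algo in algorithms.items():
    labels = algo.fit_predict(X)
    # ARI compares the induced partition with the class labels,
    # correcting for chance agreement.
    print(f"{name}: ARI = {adjusted_rand_score(y, labels):.3f}")
```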
3

An empirical study of banks' merger and acquisition

Lin, Zi-Jiun 21 June 2000 (has links)
About bank merger and acquisition.
4

An Analysis of Traceability in Requirements Documents

YAMAMOTO, Shuichiro, TAKAHASHI, Kenji 20 April 1995 (has links)
No description available.
5

Fundamentals of Software Patent Protection at a University

Everett, Christopher E 10 May 2003 (has links)
Software protection by patents is an emerging field and thus is not completely understood by software developers, especially software developers in a university setting. University inventors have to balance their publication productivity against their university's desire to license inventions that could be profitable. This tension stems from the one-year bar on filing a U.S. patent application after a public disclosure of the invention, such as a publication. The research provides evidence supporting the hypothesis that a university inventor can improve the protection of his or her software patent by applying certain information about patent prosecution practices and the relevant prior art. Software inventors need to be concerned about fulfilling the requirements of patent law. Methods for fulfilling these requirements include using diagrams in patent applications, such as functional block diagrams, flowcharts, and state diagrams, and ensuring that the patent application is understandable by non-technical people. Knowledge of prior art ensures that the inventor is not "reinventing the wheel," is not infringing on an existing patent, and understands the current state of the art. Together, knowledge of patent law, diagrams, readability, and prior art enables a software inventor to take control of protecting his or her invention and to improve the outcome of the application process.
6

The impact of response styles on the stability of cross-national comparisons

Reynolds, Nina L., Diamantopoulos, A., Simintiras, A. January 2006 (has links)
Response style effects are a source of bias in cross-national studies, with some nationalities being more susceptible to particular response styles than others. While response styles, by their very nature, vary with the form of the stimulus involved, previous research has not investigated whether cross-national differences in response styles are stable across different forms of a stimulus (e.g., item wording, scale type, response categories). Using a quasi-experimental design, this study shows that response style differences are not stable across different stimulus formats, and that response style effects influence substantive cross-national comparisons in an inconsistent way.
7

Release management in free and open source software ecosystems

Poo-Caamaño, Germán 02 December 2016 (has links)
Releasing software is challenging. To decide when to release software, developers may consider a deadline, a set of features, or quality attributes. Yet there are many stories of software that is not released on time. In large-scale software development, release management requires significant communication and coordination. It is particularly challenging in Free and Open Source Software (FOSS) ecosystems, in which hundreds of loosely connected developers and their projects are coordinated to release software according to a schedule. In this work, we investigate the release management process in two large-scale FOSS development projects. In particular, our focus is the communication in the whole release management process in each ecosystem across multiple releases. The main research questions addressed in this dissertation are: (1) How do developers in these FOSS ecosystems communicate and coordinate to build and release a common product based on different projects? (2) What are the release management tasks in a FOSS ecosystem? and (3) What are the challenges that release managers face in a FOSS ecosystem? To understand this process and its challenges better, we used a multiple case study methodology and collected evidence from a combination of the following sources: documents, archival records, interviews, direct observation, participant observation, and physical artifacts. We conducted the case studies on two FOSS ecosystems: GNOME and OpenStack. We analyzed over two and a half years of communication in each ecosystem and studied developers' interactions. GNOME is a collection of libraries, system services, and end-user applications; together, these projects provide a unified desktop: the GNOME desktop. OpenStack is a collection of software tools for building and managing cloud computing platforms for public and private clouds. We catalogued communication channels, categorized coordination activities in one channel, and triangulated our results by interviewing key developers identified through social network analysis. We found factors that positively impact the release process in a software ecosystem: the release schedule, influence instead of direct control, and diversity. The release schedule drives most of the communication within an ecosystem. To achieve a concerted release, a Release Team helps developers reach technical consensus through influence rather than direct control. The diverse composition of the Release Team might increase its reach and influence in the ecosystem. Our results can help organizations build better large-scale teams and show that software engineering research focused on individual projects might miss important parts of the picture. The contributions of this dissertation are: (1) an empirical study of release management in two FOSS ecosystems, (2) a set of lessons learned from the case studies, and (3) a theory of release management in FOSS ecosystems. We summarize our theory, which explains our understanding of release management in FOSS ecosystems, in three statements: (1) the size and complexity of the integrated product are constrained by the release managers' capacity, (2) release management should be capable of reaching the whole ecosystem, and (3) release managers need both social and technical skills. The dissertation discusses this theory in the light of the case studies, other research efforts, and its implications. / Graduate / 0984 / gpoo+proquest@calcifer.org
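As a toy illustration of the social network analysis step mentioned above, one could rank mailing-list participants by degree centrality on a who-replies-to-whom graph to shortlist key developers for interviews. The reply data, names, and choice of centrality measure below are invented for the example and are not from the dissertation.

```python
# Illustrative sketch: identify "key developers" in a mailing-list archive via
# simple degree centrality on a who-replies-to-whom graph (invented data).
import networkx as nx

# Hypothetical (sender, replied-to sender) pairs extracted from list archives.
replies = [
    ("alice", "bob"), ("carol", "alice"), ("bob", "alice"),
    ("dave", "alice"), ("carol", "bob"), ("erin", "carol"),
]

G = nx.DiGraph()
for sender, recipient in replies:
    if G.has_edge(sender, recipient):
        G[sender][recipient]["weight"] += 1
    else:
        G.add_edge(sender, recipient, weight=1)

# Rank participants by combined in/out degree centrality; the top names would
# be candidates for follow-up interviews.
ranking = sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])
for name, score in ranking[:3]:
    print(f"{name}: {score:.2f}")
```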
8

Microfinance institutions: an empirical study from Moldova

Gorgan, Roman January 2012 (has links)
The present master thesis deals with non-banking microfinance institutions and examines their abilities and role in the poverty alleviation process. Attention to the rural sector and its development is essential, as any successful economic transition requires a transition of the rural sector as well. In many transition economies, people in rural areas continue to live on the edge of poverty, engaged in subsistence agriculture and susceptible to a wide range of shocks. In such countries the rural population, unlike the urban one, did not benefit from the transition to the same extent and needs special attention and supporting policy measures. Due to the low penetration of microfinance institutions into rural areas, lacking or insufficient collateral, and financial illiteracy, many poor but economically active people face problems obtaining finance to develop new income opportunities. In this context the thesis emphasizes the role of savings and credit associations, which, unlike commercial banks, operate mainly in the rural sector and have the most significant effect on poverty alleviation. Finally, the author analyses the activity of three non-banking microfinance institutions of the Republic of Moldova and uses publicly available data to calculate their outreach, efficiency and...
9

Understanding Programmers' Working Context by Mining Interaction Histories

Zou, Lijie January 2013 (has links)
Understanding how software developers do their work is an important first step to improving their productivity. Previous research has generally focused either on laboratory experiments or on coarsely-grained industrial case studies; however, studies that seek a fine-grained understanding of industrial programmers working within a realistic context remain limited. In this work, we propose to use interaction histories — that is, finely detailed records of developers' interactions with their IDE — as our main source of information for understanding programmers' work habits. We develop techniques to capture, mine, and analyze interaction histories, and we present two industrial case studies to show how this approach can help us better understand industrial programmers' work at a detailed level: we explore how the basic characteristics of software maintenance task structures can be better understood, how latent dependence between program artifacts can be detected at interaction time, and how patterns of interaction coupling can be identified. We also examine the link between programmer interactions and some of the contextual factors of software development, such as the nature of the task being performed, the design of the software system, and the expertise of the developers. In particular, we explore how task boundaries can be automatically detected from interaction histories, how system design and developer expertise may affect interaction coupling, and whether newcomer and expert developers differ in their interaction history patterns. These findings can help us to better reason about the multidimensional nature of software development, to detect potential problems concerning task, design, expertise, and other contextual factors, and to build smarter tools that exploit the inherent patterns within programmer interactions and provide improved support for task-aware and expertise-aware software development.
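For a flavour of what mining interaction histories can look like, the sketch below segments a stream of timestamped IDE events into candidate tasks using inactivity gaps. It is a deliberately simple stand-in, not the thesis's actual task-boundary detector; the event data and the 30-minute threshold are invented for the example.

```python
# Illustrative sketch: group timestamped IDE interaction events into
# task-like sessions whenever the gap between consecutive events exceeds
# a threshold (invented data and threshold).
from datetime import datetime, timedelta

events = [  # (timestamp, artifact touched) -- hypothetical interaction history
    (datetime(2013, 5, 1, 9, 0), "Foo.java"),
    (datetime(2013, 5, 1, 9, 4), "FooTest.java"),
    (datetime(2013, 5, 1, 9, 6), "Foo.java"),
    (datetime(2013, 5, 1, 11, 30), "Bar.java"),   # long gap => new task
    (datetime(2013, 5, 1, 11, 33), "BarUtil.java"),
]

def segment_tasks(events, max_gap=timedelta(minutes=30)):
    """Split a chronologically ordered event list into candidate tasks."""
    tasks, current = [], [events[0]]
    for prev, cur in zip(events, events[1:]):
        if cur[0] - prev[0] > max_gap:
            tasks.append(current)  # close the current task at the gap
            current = []
        current.append(cur)
    tasks.append(current)
    return tasks

for i, task in enumerate(segment_tasks(events), 1):
    print(f"task {i}: {[artifact for _, artifact in task]}")
```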
