631

New Non-Parametric Methods for Income Distributions

Luo, Shan 26 April 2013 (has links)
The low income proportion (LIP), the Lorenz curve (LC) and the generalized Lorenz curve (GLC) are important indexes for describing the inequality of an income distribution, and they have been widely used by governments around the world for measuring social stability. Accurate estimation of these indexes is essential for quantifying a country's economic condition. Established statistical inference methods for these indexes are based on an asymptotic normal distribution, which may perform poorly when the real income data are skewed or contain outliers. Nonparametric methods, by contrast, allow researchers to avoid imposing a parametric distributional assumption on the data. For example, existing research proposes plug-in empirical likelihood (EL) based inference for the LIP, LC and GLC. However, this method is computationally intensive and mathematically complex because of the nonlinear constraints in the underlying optimization problem; moreover, the limiting distribution of the log empirical likelihood ratio is a scaled chi-square distribution, and estimation of the scale constant affects the overall performance of the plug-in EL method. To improve on the existing inferential methods, this dissertation first proposes kernel estimators for the LIP, LC and GLC, together with a cross-validation method for choosing their bandwidths, and proves that these kernel estimators are asymptotically normal. Smoothed jackknife empirical likelihood (SJEL) statistics for the LIP, LC and GLC are then defined, and the log-jackknife empirical likelihood ratio statistics are proved to follow a standard chi-square distribution. Extensive simulation studies evaluate the kernel estimators in terms of mean squared error and asymptotic relative efficiency. Next, SJEL-based confidence intervals and smoothed bootstrap-based confidence intervals are proposed; their coverage probabilities and interval lengths are calculated and compared with those of the normal approximation-based intervals. The proposed kernel estimators are found to be competitive, the proposed inferential methods show better finite-sample performance, and all methods are illustrated through real examples.
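As a rough illustration of the first step, the sketch below (not the dissertation's code) estimates the LIP by replacing the empirical indicator 1{X_i <= t} with a Gaussian-kernel-smoothed CDF; the poverty line t is taken as half the median income, and a rule-of-thumb bandwidth stands in for the cross-validation choice proposed in the dissertation.

```python
import numpy as np
from scipy.stats import norm

def smoothed_lip(incomes, poverty_fraction=0.5, bandwidth=None):
    """Kernel-smoothed estimate of the low income proportion."""
    x = np.asarray(incomes, dtype=float)
    t = poverty_fraction * np.median(x)  # poverty line: half the median income
    if bandwidth is None:
        # Silverman-style rule of thumb; the dissertation instead chooses
        # the bandwidth by cross-validation.
        bandwidth = 1.06 * x.std(ddof=1) * len(x) ** (-1 / 5)
    # Smoothed empirical CDF at t: average of Gaussian CDFs centered at the data.
    return norm.cdf((t - x) / bandwidth).mean()

rng = np.random.default_rng(1)
incomes = rng.lognormal(mean=10.0, sigma=0.8, size=500)  # skewed, like real income data
print(f"smoothed LIP:  {smoothed_lip(incomes):.3f}")
print(f"empirical LIP: {(incomes <= 0.5 * np.median(incomes)).mean():.3f}")
```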
632

Variability Modeling in the Real

Berger, Thorsten 15 May 2013 (has links) (PDF)
Variability modeling is one of the key disciplines for coping with complex variability in large software product lines. It aims at creating, evolving, and configuring variability models, which describe the common and variable characteristics, also known as features, of products in a product line. Since the introduction of feature models more than twenty years ago, many variability modeling languages and notations have been proposed both in academia and industry, followed by hundreds of publications on variability modeling techniques built upon these theoretical foundations. Surprisingly, there are relatively few empirical studies that aim at understanding the use of such languages. What variability modeling concepts are actually used in practice? Do variability models applied in the real world look similar to those published in the literature? In what technical and organizational contexts are variability models applicable? We present an empirical study that addresses this research gap. Our goals are i) to verify existing theoretical research, and ii) to explore real-world variability modeling languages and the models expressed in them. We study the concepts and semantics of variability modeling languages conceived by practitioners, and the usage of these concepts in real, large-scale models. Our aim is to support variability modeling research by providing empirical data about the use of its core modeling concepts, by identifying and characterizing further concepts that have not been as widely addressed, and by providing realistic assumptions about the scale, structure, content, and complexity of real-world variability models. We believe that our findings are relevant to variability modeling researchers and tool designers, for example those working on interactive product configurators or feature dependency checkers; our extracted models provide realistic benchmarks that can be used to evaluate new techniques. Recognizing the recent trend in software engineering to open up software platforms to facilitate inter-organizational reuse of software, we extend our empirical discourse to the emerging field of software ecosystems. As natural successors of successful product lines, ecosystems manage huge variability among and within their software assets, and thus represent a highly interesting class of systems in which to study variability modeling concepts and mechanisms. Our studied systems comprise eleven highly configurable software systems, two ecosystems with closed platforms, and three ecosystems relying on open platforms; some of our subjects are among the largest successful systems in existence today. Results from a survey on industrial variability modeling complement these subjects. Our overall results provide empirical evidence that the well-researched concepts of feature modeling are used in practice, but also that more advanced concepts are needed. We observe that common assumptions about variability models in the literature do not hold in practice. Our study also reveals that variability models work best in centralized variability management scenarios, and that they are fragile and have to be controlled by a small team. We further identify a particular type of dependency that is increasingly used in open platforms and helps sustain the growth of ecosystems; interestingly, while enabling distributed variability, these dependencies rely on a centralized and stable vocabulary. Finally, we formulate new hypotheses and research questions that provide direction for future research.
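As a toy illustration of the feature-modeling concepts under study (not drawn from any of the studied systems), the sketch below encodes a small feature model, a hierarchy with a mandatory child, an or-group, and a cross-tree constraint, and enumerates its valid configurations by brute force; real-world models in the study are orders of magnitude larger.

```python
from itertools import product

FEATURES = ["car", "engine", "gas", "electric", "heated_seats"]

def is_valid(cfg):
    """Check one configuration against the feature model's constraints."""
    f = dict(zip(FEATURES, cfg))
    return (
        f["car"]                                        # root feature is always selected
        and f["engine"] == f["car"]                     # engine is a mandatory child
        and (f["gas"] or f["electric"]) == f["engine"]  # or-group: at least one engine kind
        and (not f["heated_seats"] or f["electric"])    # cross-tree: heated seats need electric
    )

valid = [c for c in product([False, True], repeat=len(FEATURES)) if is_valid(c)]
print(f"{len(valid)} valid product configurations")  # 5 for this toy model
```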
633

Investigation Of The Effects Of Reuse On Software Quality In An Industrial Setting

Deniz, Berkhan 01 January 2013 (has links) (PDF)
Software reuse is a powerful tool for reducing development and maintenance time and cost. Any software life-cycle product can be reused, not only fragments of source code, and a high degree of reuse correlates with a low defect density. In the literature, many theoretical and empirical studies have examined the relationship between software reuse and quality. In this thesis, the effects of reuse on software quality are investigated in an industrial setting. Throughout this study, we worked with the software engineering department of Aselsan, Turkey's leading defense industry company. We aimed to explore their real-life software projects and interpret the reuse-quality relations in those projects. With this intention, we defined four different hypotheses about the relation between reuse and quality, and to test these hypotheses we designed three separate case studies. In these case studies, we collected and calculated reuse and quality metrics, i.e. object-oriented quality metrics, reuse rates and performance measures of individual modules, fault-proneness of software components, and productivity rates of different products. Finally, by analyzing these measurements, we developed suggestions for further benefiting from reuse at Aselsan through systematic improvements to the reuse infrastructure and process. Similar case studies have been reported in the literature; in Turkey, however, there are few case studies using real-life project data, particularly in the defense industry.
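A hypothetical sketch of the kind of analysis described above (the thesis data is proprietary; all module names and figures below are invented): correlating per-module reuse rates with defect densities to test whether higher reuse goes with fewer defects.

```python
from scipy.stats import spearmanr

# Invented per-module measurements:
# reuse rate = reused LOC / total LOC, defect density = defects / KLOC.
modules = {
    "radar_ui":    (0.72, 1.1),
    "tracking":    (0.35, 3.4),
    "comms":       (0.58, 2.0),
    "diagnostics": (0.10, 4.9),
    "logging":     (0.81, 0.8),
}
reuse, defects = zip(*modules.values())
rho, p = spearmanr(reuse, defects)
# A significantly negative rho would support "more reuse, fewer defects".
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```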
634

Are environmental aspects value drivers for companies? A review of empirical studies

Günther, Edeltraud, Günther, Thomas, Hoppe, Holger 08 May 2004 (has links) (PDF)
"Umweltaspekte und ihre Wertrelevanz für die Unternehmen: Eine Zusammenfassung existierender empirischer Forschungsergebnisse!" (diese Version ist nicht mehr aktuell! 03.04.2007) The management of the value of a firm requires the identification of important value drivers. The main objective of the paper is to find out whether value based management (e.g. shareholder value oriented management) has to take into account environmental aspects. In order to answer this question, the paper will determine whether there is an empirically proven relationship between the economic and environmental performance of companies. The paper is divided into three main parts: The development of a basic theory, the examination of existing empirical results and the concluding integration of theory with empirical results. Basic Theory: To build a basis, the paper first specifies the theoretical connection between economic and environmental performance of companies. Economic performance (e.g. definable by financial statement and/or capital market data) and environmental performance (e.g. definable by operational and strategic data) are linked in a model. Based on the model the influence of environmental aspects on corporate value is examined resulting in seven hypotheses for the relationship of environmental and economic performance. Empirical Results: In order to test the developed theoretical relation, empirical studies are identified and assessed, with regard to the relationship between economic and environmental performance. The research results in 122 studies from all over the world, which are subject to further investigation. The analyzing process includes different steps. First, the studies are divided and reviewed as far as their empirical method and soundness are concerned. Mainly they can be differentiated in regression, event and portfolio studies. Furthermore the study categorizes the environmental and economic variables used in the analyzed studies. Economic measures are divided into the following categories: market based measures (e.g. stock price), accounting based measures (e.g. RoA), accounting and market based measures (e.g. Tobins q), perceived performance measures (i.e. derived from questionnaires) and miscellaneous measures (e.g. shadow prices). Environmental measures on the other hand are divided into the seven categories: strategic environmental performance, operational environmental performance, questionnaires, rating/ranking, events, environmental disclosure and mutual funds. The next step identifies the empirical result of every study as well as important determinants (e.g. the distinguished environmental and economic categories) of the results. The objective is to show whether general patterns for the existence of different results can be defined. Overall it can be shown that about 40% of all studies report a significant positive relationship between economic and environmental performance, contrary to only 11% reporting significant negative results. Furthermore, in order to assess the special role of environmental aspects as value drivers 23 suitable studies are analyzed on the basis of Rappaport?s shareholder network. Results provide an even stronger evidence for the value creating role of environmental aspects. Conclusion: Finally we can detect statistically proven environmental value drivers combining theoretical and empirical research. 
At the same time, the empirical methods are also evaluated, especially with regard to there ability to provide useful support for the development and examination of the theoretical research. Overall it seems to the authors that the whole research on the topic runs parallel and is not combined. There are only a few studies who are built on already recognized relations or which try the confirm such relations. Based on these cornerstones, requirements for further empirical research are identified and specified.
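A schematic sketch of the tallying step behind the reported shares; the split across the three outcome classes is invented, chosen only to be consistent with the roughly 40% positive and 11% negative results reported for the 122 studies.

```python
# Hypothetical classification counts for the 122 reviewed studies,
# chosen only to reproduce the reported ~40% / ~11% split.
counts = {
    "significant positive": 49,
    "significant negative": 13,
    "mixed or insignificant": 60,
}
total = sum(counts.values())  # 122
for outcome, n in counts.items():
    print(f"{outcome}: {n}/{total} = {n / total:.0%}")
```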
635

Are environmental aspects value drivers for companies? A review of empirical studies

Günther, Edeltraud, Günther, Thomas, Hoppe, Holger 25 August 2006 (has links) (PDF)
The management of the value of a firm requires the identification of important value drivers. The main objective of the paper is to find out whether value-based management (e.g. shareholder-value-oriented management) has to take environmental aspects into account. To answer this question, the paper determines whether there is an empirically proven relationship between the economic and environmental performance of companies. The paper is divided into three main parts: the development of a basic theory, the examination of existing empirical results, and the concluding integration of theory with the empirical results. Basic theory: To build a basis, the paper first specifies the theoretical connection between the economic and environmental performance of companies. Economic performance (e.g. definable by financial statement and/or capital market data) and environmental performance (e.g. definable by operational and strategic data) are linked in a model. Based on this model, the influence of environmental aspects on corporate value is examined, resulting in seven hypotheses on the relationship between environmental and economic performance. Empirical results: To test the developed theoretical relation, empirical studies are identified and assessed with regard to the relationship between economic and environmental performance. The search yields 122 studies from all over the world, which are subject to further investigation. The analysis proceeds in several steps. First, the studies are reviewed with respect to their empirical method and soundness; they can mainly be differentiated into regression, event, and portfolio studies. The study then categorizes the environmental and economic variables used in the analyzed studies. Economic measures are divided into the following categories: market-based measures (e.g. stock price), accounting-based measures (e.g. RoA), combined accounting- and market-based measures (e.g. Tobin's q), perceived performance measures (i.e. derived from questionnaires), and miscellaneous measures (e.g. shadow prices). Environmental measures, in turn, are divided into seven categories: strategic environmental performance, operational environmental performance, questionnaires, rating/ranking, events, environmental disclosure, and mutual funds. The next step identifies the empirical result of every study as well as important determinants of those results (e.g. the environmental and economic categories distinguished above), the objective being to show whether general patterns explain the differing results. Overall, about 40% of all studies report a significant positive relationship between economic and environmental performance, compared with only 11% reporting significant negative results. Furthermore, to assess the special role of environmental aspects as value drivers, 23 suitable studies are analyzed on the basis of Rappaport's shareholder network; the results provide even stronger evidence for the value-creating role of environmental aspects. Conclusion: Combining theoretical and empirical research, statistically proven environmental value drivers can be detected. At the same time, the empirical methods are evaluated, especially with regard to their ability to support the development and examination of the theoretical research. Overall, it appears to the authors that research on this topic runs in parallel strands and is not integrated: only a few studies build on already recognized relations or attempt to confirm such relations. Based on these cornerstones, requirements for further empirical research are identified and specified.
636

Prediction of recurrent events

Fredette, Marc January 2004 (has links)
In this thesis, we study issues related to prediction problems, with an emphasis on those arising when recurrent events are involved. In the first chapter, we define the basic concepts of frequentist and Bayesian statistical prediction. In the second chapter, we study frequentist prediction intervals and their associated predictive distributions, and present an approach based on asymptotically uniform pivotals that is shown to dominate the plug-in approach under certain conditions. The following three chapters consider the prediction of recurrent events. The third chapter presents different prediction models for events that can be modeled using homogeneous Poisson processes; amongst these models, those using random effects are shown to possess interesting features. In the fourth chapter, the time-homogeneity assumption is relaxed and we present prediction models for non-homogeneous Poisson processes, studying their behavior for prediction problems with a finite horizon. In the fifth chapter, we apply these concepts to a warranty dataset from the automobile industry; since the number of processes in this dataset is very large, we focus on methods that provide computationally rapid prediction intervals. Finally, we discuss possibilities for future research in the last chapter.
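A minimal sketch of the plug-in approach that the pivotal-based intervals are shown to dominate, under the assumption of a homogeneous Poisson process: the rate is estimated from the observation window, and a prediction interval for the future count is read off the Poisson distribution with that estimate plugged in. The numbers are invented for illustration.

```python
from scipy.stats import poisson

def plugin_prediction_interval(n_events, t_observed, t_future, level=0.95):
    """Plug-in prediction interval for the event count in a future window."""
    rate_hat = n_events / t_observed   # MLE of the Poisson process rate
    mu = rate_hat * t_future           # estimated mean count in the future window
    alpha = 1.0 - level
    # Ignores the uncertainty in rate_hat, which is why plug-in intervals
    # can undercover and can be dominated by pivotal-based intervals.
    return poisson.ppf(alpha / 2, mu), poisson.ppf(1 - alpha / 2, mu)

# E.g., 57 warranty claims observed over 12 months; predict the next 6 months.
lo, hi = plugin_prediction_interval(57, 12.0, 6.0)
print(f"95% plug-in prediction interval: [{lo:.0f}, {hi:.0f}]")
```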
637

Empirical Analysis of Algorithms for Block-Angular Linear Programs

Dang, Jiarui January 2007 (has links)
This thesis studies the theoretical complexity and empirical performance of decomposition algorithms, focusing on linear programs with a block-angular structure. Decomposition algorithms used to be the only way to solve large-scale, specially structured problems, in terms of memory limits and CPU time. With the advances in computer technology over the past few decades, however, many large-scale problems can now be solved simply by using general-purpose LP software, without exploiting the problems' inner structure. A question arises naturally: should we solve a structured problem with decomposition, or solve it directly as a whole? We try to understand how a problem's characteristics influence its computational performance, and we compare the relative efficiency of algorithms with and without decomposition. Two comparisons are conducted in our research: first, the Dantzig-Wolfe decomposition method (DW) versus the simplex method; second, the analytic center cutting plane method (ACCPM) versus the interior point method (IPM). These comparisons cover the two main solution approaches in linear programming: simplex-based and IPM-based algorithms. Motivated by our observations of ACCPM and DW decomposition, which are the counterparts of IPM and simplex in the decomposition framework, we devise a hybrid algorithm combining the two to take advantage of both: the quick convergence of IPM-based methods and the accuracy of simplex-based algorithms. A large set of 316 instances is used in our experiments, covering problems of different dimensions with primal or dual block-angular structures, to test our conclusions.
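As a sketch of the structure in question (illustrative only, not the thesis's test instances): a block-angular LP has independent diagonal blocks of constraints tied together by a few coupling rows. The code below assembles a random instance and solves it "as a whole" with a general-purpose solver, the baseline against which Dantzig-Wolfe decomposition competes.

```python
import numpy as np
from scipy.linalg import block_diag
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_blocks, n_vars, n_rows, n_coupling = 3, 4, 2, 2

# Independent block constraints B_k x_k <= d_k (the "angular" diagonal part).
blocks = [rng.uniform(0.1, 1.0, (n_rows, n_vars)) for _ in range(n_blocks)]
A_ub = np.vstack([
    block_diag(*blocks),                                    # block-diagonal rows
    rng.uniform(0.1, 1.0, (n_coupling, n_blocks * n_vars))  # coupling rows
])
b_ub = np.concatenate([np.full(n_blocks * n_rows, 5.0), np.full(n_coupling, 4.0)])
c = -rng.uniform(0.1, 1.0, n_blocks * n_vars)  # maximize profit == minimize -c.x

# Solving directly, without exploiting the structure; a Dantzig-Wolfe approach
# would instead iterate between a master problem over the coupling rows and
# one pricing subproblem per block.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
print(f"optimal objective: {res.fun:.4f}")
```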
638

A comprehensive picture of ethical values in caring encounters, based on experiences of those involved: Analysis of concepts developed from empirical studies

Jonasson, Lise-Lotte January 2011 (has links)
Older people should have a life with a sense of value and should feel confident. These ethical values, which are expressed in normative ethics, are expected to prevail in empirical ethics. Central components of nursing are the ethical issues of autonomy, beneficence, non-maleficence and the principles of justice. The general aim of this thesis is to identify and describe the ethical values that are apparent in the caring encounter and their influence on the people involved. This is done from the perspective of the older person in study (I), of the next of kin in study (II) and of the nurses in study (III). In study (IV), the aim was to synthesize the concepts from the empirical studies (I-III) and to analyze, compare and interrelate them with normative ethics. Studies (I, III) were empirical observational studies with follow-up interviews. Twenty-two older people participated voluntarily in study (I), and 20 nurses participated voluntarily in study (III). In study (II), fourteen next of kin were interviewed. In studies (I-III), constant comparative analysis, the core of grounded theory, was used. Five concepts were used in the analysis in study (IV): three from the grounded theory studies (I-III) and two from the theoretical framework on normative ethics, i.e. the ICN code and the SFS law. Five categories (being addressed, receiving respect, desiring to participate, increasing self-determination and gaining self-confidence) formed the basis for the core category 'Approaching' in study (I). 'Approaching' indicates the ethical values that guide nurses in their caring encounters with older people. These ethical values are noticed and greatly appreciated by the older people, and also lead to improved quality of care. Four categories were identified in study (II): receiving, showing respect, facilitating participation and showing professionalism. These categories formed the basis of the core category 'Being amenable', a concept identified in the next of kin's descriptions of the ethical values that they and the older patients perceive in the caring encounter. In study (III), three categories were identified: showing consideration, connecting, and caring for. These categories formed the basis of the core category 'Corroborating', which deals with support and interaction. According to the findings of study (IV), empirical ethics and normative ethics are intertwined. Normative ethics influence nurses' practical performance and could play a greater role in supporting nurses as professionals. The criteria of good ethical care according to this thesis are: showing respect, inviting participation, allowing self-determination, and providing safe and secure care. These criteria are elements of the concept of being professional. Nurses' professionalism is shown in the approach they adopt to the performance of their duties and in their competence and knowledge, but also in how they apply laws and professional codes.
