251

EMPIRICAL LIKELIHOOD AND DIFFERENTIABLE FUNCTIONALS

Shen, Zhiyuan 01 January 2016 (has links)
Empirical likelihood (EL) is a recently developed nonparametric method of statistical inference. Owen (1988, 1990) and many others have shown that the empirical likelihood ratio (ELR) method can be used to produce well-behaved confidence intervals and regions. Owen (1988) shows that -2 log ELR converges to a chi-square distribution with one degree of freedom when the constraint is a linear statistical functional of the distribution function. However, generalizing Owen's result to the right-censored data setting is difficult, since no explicit maximization can be obtained under constraints expressed in terms of distribution functions. Pan and Zhou (2002) instead study EL with right-censored data using a linear statistical functional constraint expressed in terms of cumulative hazard functions. In this dissertation, we extend Owen's (1988) and Pan and Zhou's (2002) results to non-linear but Hadamard-differentiable statistical functional constraints. To this end, we study functionals that are differentiable with respect to hazard functions. We also generalize our results to two-sample problems. Stochastic process and martingale theories are applied to prove the theorems. The confidence intervals based on the EL method are compared with other available methods. Real data analysis and simulations illustrate the proposed theorems, with an application to Gini's absolute mean difference.
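The classical, uncensored setting this abstract starts from can be sketched in a few lines: for a linear functional such as the mean, the log empirical likelihood ratio is obtained from a Lagrange-multiplier calculation and -2 log ELR is referred to a chi-square(1) limit. The sketch below covers only Owen's (1988) mean constraint, not the censored-data or Hadamard-differentiable extensions developed in the dissertation; the function name and simulated data are illustrative.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def neg2_log_elr_mean(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu (Owen, 1988)."""
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf  # mu outside the convex hull of the data: the ELR is 0
    # The multiplier lam solves sum z_i / (1 + lam * z_i) = 0; the bracket
    # keeps every implied weight p_i = 1 / (n * (1 + lam * z_i)) positive.
    eps = 1e-8
    lo, hi = -(1 - eps) / z.max(), -(1 - eps) / z.min()
    lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=200)      # true mean is 2
stat = neg2_log_elr_mean(x, mu=2.0)
print(stat, stat < chi2.ppf(0.95, df=1))      # compare with the chi-square(1) cutoff
```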
252

Coordinating requirements engineering and software testing

Unterkalmsteiner, Michael January 2015 (has links)
The development of large, software-intensive systems is a complex undertaking that is generally tackled by a divide-and-conquer strategy. Organizations thereby face the challenge of coordinating the resources that enable the individual aspects of software development, commonly solved by adopting a particular process model. The alignment between requirements engineering (RE) and software testing (ST) activities is of particular interest, as those two aspects are intrinsically connected: requirements are an expression of user/customer needs, while testing increases the likelihood that those needs are actually satisfied. The work in this thesis is driven by empirical problem identification, analysis and solution development towards two main objectives. The first is to develop an understanding of RE and ST alignment challenges and characteristics. Building this foundation is a necessary step that facilitates the second objective, the development of solutions relevant and scalable to industry practice that improve REST alignment. The research methods employed to work towards these objectives are primarily empirical. Case study research is used to elicit data from practitioners, while technical action research and field experiments are conducted to validate the developed solutions in practice. This thesis contains four main contributions: (1) an in-depth study on REST alignment challenges and practices encountered in industry; (2) a conceptual framework in the form of a taxonomy providing constructs that further our understanding of REST alignment; (3) REST-bench, an assessment framework that operationalizes the taxonomy, designed to be lightweight and applicable as a post-mortem when closing development projects; and (4) an extensive investigation into the potential of information retrieval techniques to improve test coverage, a common REST alignment challenge, resulting in a solution prototype, risk-based testing supported by topic models (RiTTM). REST-bench has been validated in five cases and shown to be efficient and effective in identifying improvement opportunities in the coordination of RE and ST. Most of the concepts operationalized from the REST taxonomy were found to be useful, validating the conceptual framework. RiTTM, in turn, was validated in a single-case experiment where it showed great potential, in particular by identifying test cases that were originally overlooked by expert test engineers, effectively improving test coverage.
253

EMPIRICAL PROCESSES FOR ESTIMATED PROJECTIONS OF MULTIVARIATE NORMAL VECTORS WITH APPLICATIONS TO E.D.F. AND CORRELATION TYPE GOODNESS OF FIT TESTS

Saunders, Christopher Paul 01 January 2006 (has links)
Goodness-of-fit and correlation tests are considered for the dependent univariate data that arises when multivariate data is projected to the real line with a data-suggested linear transformation. Specifically, tests for multivariate normality are investigated. Let {Y_i} be a sequence of independent k-variate normal random vectors, and let d_0 be a fixed linear transform from R^k to R. For a sequence of linear transforms d(Y_1, ..., Y_n) converging almost surely to d_0, the weak convergence of the empirical process of the standardized projections from d to a tight Gaussian process is established. This tight Gaussian process is identical to the one that arises in the univariate case where the mean and standard deviation are estimated by the sample mean and sample standard deviation (Wood, 1975). The tight Gaussian process determines the limiting null distribution of E.D.F. goodness-of-fit statistics applied to the process of the projections. A class of tests for multivariate normality, based on the Shapiro-Wilk statistic and related correlation statistics applied to the dependent univariate data arising from a data-suggested linear transformation, is also considered. The asymptotic properties of these statistics are established. In both cases, the statistics based on random linear transformations are shown to be asymptotically equivalent to the statistics using the fixed linear transformation. The statistics based on the fixed linear transformation have the same critical points as the corresponding tests of univariate normality, which allows an easy implementation of these tests for multivariate normality. Of particular interest are two classes of transforms that have been previously considered for testing multivariate normality and are special cases of the projections considered here. The first transformation, originally considered by Wood (1981), is based on a symmetric decomposition of the inverse sample covariance matrix. The asymptotic properties of these transformed empirical processes were fully developed using classical results. The second class of transforms consists of the principal components that arise in principal component analysis. Peterson and Stromberg (1998) suggested using these transforms with the univariate Shapiro-Wilk statistic. Using these suggested projections, the limiting distributions of the E.D.F. goodness-of-fit and correlation statistics are developed.
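The construction described here, projecting the data with an estimated (data-suggested) linear transform and then applying a univariate statistic, can be illustrated as in the sketch below. It shows only the principal-component variant attributed to Peterson and Stromberg: it forms the projection and computes the Shapiro-Wilk statistic, but the naive p-value it returns ignores the dependence induced by estimating the transform, which is precisely what the dissertation's asymptotic theory addresses.

```python
import numpy as np
from scipy.stats import shapiro

def projected_shapiro_wilk(Y):
    """Project k-variate data onto its first estimated principal component and
    apply the univariate Shapiro-Wilk statistic to the dependent projections."""
    Yc = Y - Y.mean(axis=0)
    cov = np.cov(Yc, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    d_hat = eigvec[:, -1]          # data-suggested linear transform d(Y_1, ..., Y_n)
    proj = Yc @ d_hat              # dependent univariate projections
    W, p_naive = shapiro(proj)     # p-value ignores estimation of d_hat (see thesis)
    return W, p_naive

rng = np.random.default_rng(1)
Y = rng.multivariate_normal(mean=np.zeros(3), cov=np.eye(3), size=150)
print(projected_shapiro_wilk(Y))
```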
254

EMPIRICAL PROCESSES AND ROC CURVES WITH AN APPLICATION TO LINEAR COMBINATIONS OF DIAGNOSTIC TESTS

Chirila, Costel 01 January 2008 (has links)
The Receiver Operating Characteristic (ROC) curve is the plot of sensitivity versus 1 - specificity of a quantitative diagnostic test over a wide range of cut-off points c. The empirical ROC curve is probably the most widely used nonparametric estimator of the ROC curve. The asymptotic properties of this estimator were first developed by Hsieh and Turnbull (1996) based on strong approximations for quantile processes. Jensen et al. (2000) provided a general method to obtain regional confidence bands for the empirical ROC curve, based on its asymptotic distribution. Since most biomarkers do not have high enough sensitivity and specificity to qualify as a good diagnostic test on their own, a combination of biomarkers may result in a better diagnostic test than each one taken alone. Su and Liu (1993) proved that, if the panel of biomarkers is multivariate normally distributed for both the diseased and non-diseased populations, then the linear combination using Fisher's linear discriminant coefficients maximizes the area under the ROC curve of the newly formed diagnostic test, called the generalized ROC curve. In this dissertation, we derive the asymptotic properties of the generalized empirical ROC curve, the nonparametric estimator of the generalized ROC curve, using empirical processes theory as in van der Vaart (1998). The pivotal result used in finding the asymptotic behavior of the proposed nonparametric estimator is the result on random functions that incorporate estimators, as developed by van der Vaart (1998). Using this powerful lemma, we decompose an equivalent process into a sum of two other processes, usually called the Brownian bridge and the drift term, via Donsker classes of functions. Using a uniform convergence rate result given by Pollard (1984), we derive the limiting process of the drift term. Due to the independence of the random samples, the asymptotic distribution of the generalized empirical ROC process is the sum of the asymptotic distributions of the decomposed processes. For completeness, we first re-derive the asymptotic properties of the empirical ROC curve in the univariate case, using the same technique. The methodology is used to combine biomarkers in order to discriminate lung cancer patients from normal subjects.
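A minimal sketch of the object being studied, combining two biomarkers with Su-and-Liu-type linear discriminant coefficients and evaluating the empirical (Mann-Whitney) area under the resulting ROC curve, is given below. The coefficient formula (S_dis + S_non)^{-1}(mean_dis - mean_non) and the simulated data are illustrative assumptions; the dissertation's contribution is the asymptotic theory for this estimator, which the sketch does not reproduce.

```python
import numpy as np

def best_linear_combination_auc(X_dis, X_non):
    """Combine biomarkers with Su-and-Liu-type coefficients and return the
    empirical AUC (Mann-Whitney form) of the combined diagnostic score."""
    a = np.linalg.solve(np.cov(X_dis, rowvar=False) + np.cov(X_non, rowvar=False),
                        X_dis.mean(axis=0) - X_non.mean(axis=0))
    s_dis, s_non = X_dis @ a, X_non @ a
    # empirical AUC = P(score_diseased > score_non-diseased), ties counted as 1/2
    diff = s_dis[:, None] - s_non[None, :]
    auc = (diff > 0).mean() + 0.5 * (diff == 0).mean()
    return a, auc

rng = np.random.default_rng(2)
X_non = rng.multivariate_normal([0, 0], [[1, .3], [.3, 1]], size=200)
X_dis = rng.multivariate_normal([1, .5], [[1, .3], [.3, 1]], size=150)
print(best_linear_combination_auc(X_dis, X_non))
```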
255

AN INNOVATIVE APPROACH TO MECHANISTIC EMPIRICAL PAVEMENT DESIGN

Graves, Ronnie Clark, II 01 January 2012 (has links)
The Mechanistic Empirical Pavement Design Guide (MEPDG), developed under National Cooperative Highway Research Program (NCHRP) project 1-37A, is a very powerful tool for the design and analysis of pavements. The designer uses an iterative process to select design parameters and predict performance; if the performance is not acceptable, design parameters are changed until an acceptable design is achieved. The design process has more than 100 input parameters across many areas, including climatic conditions, material properties for each layer of the pavement, and information about the anticipated truck traffic. Many of these parameters are known to have an insignificant influence on the predicted performance. During the development of the procedure, input parameter sensitivity analysis varied a single input parameter while holding the other parameters constant, which does not capture the interactions between variables across the entire parameter space. A portion of this research developed a methodology for global sensitivity analysis of the procedure using random sampling techniques across the entire input parameter space. This analysis was used to select the most influential input parameters, which could be used in a streamlined design process. The streamlined method was developed using Multivariate Adaptive Regression Splines (MARS) to build predictive models derived from a series of actual pavement design solutions from the design software provided by NCHRP. Two different model structures have been developed: the first is a series of models that predict pavement distress (rutting, fatigue cracking, faulting and IRI); the second is a forward solution that predicts a pavement thickness given a desired level of distress. These thickness prediction models could be developed for any desired subset of MEPDG solutions, such as typical designs within a given state or climatic zone. The solutions could then be modeled with the MARS process to produce an “Efficient Design Solution” of pavement thickness and performance predictions. The procedure developed has the potential to significantly improve the efficiency of pavement designers by allowing them to examine many different design scenarios prior to selecting a design for final analysis.
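The global sensitivity idea described here, sampling the whole input space at random rather than varying one factor at a time, can be sketched as follows. The response function, parameter names and ranges below are purely hypothetical stand-ins; in the actual methodology each sampled input vector would be run through the MEPDG software and the predicted distresses recorded.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical stand-in for one MEPDG performance output (e.g. rutting);
# in practice each sample would be evaluated with the design software itself.
def predicted_rutting(params):
    thickness, modulus, traffic, mean_temp = params.T
    return 2.0 / thickness + 0.8 * traffic / modulus + 0.01 * mean_temp

rng = np.random.default_rng(3)
n = 5000
# Sample the whole input space at once rather than one factor at a time,
# so interactions between parameters are exercised.
samples = np.column_stack([
    rng.uniform(4, 16, n),        # layer thickness (illustrative range)
    rng.uniform(200, 1000, n),    # layer modulus
    rng.uniform(0.5, 10, n),      # truck traffic index
    rng.uniform(40, 90, n),       # mean annual temperature
])
y = predicted_rutting(samples)
for name, col in zip(["thickness", "modulus", "traffic", "temperature"], samples.T):
    rho, _ = spearmanr(col, y)
    print(f"{name:12s} Spearman rho = {rho:+.2f}")
```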
256

MODELING OF AN AIR-BASED DENSITY SEPARATOR

Ghosh, Tathagata 01 January 2013 (has links)
Fundamental studies that use state-of-the-art numerical and scale modeling techniques to scrutinize the theoretical and technical aspects of air table separators, and to understand and improve the efficiency of the process, are lacking. This dissertation details the development of a workable empirical model, a numerical model and a scale model of a laboratory air table unit. The modern air-based density separator achieves effective density-based separation for particle sizes greater than 6 mm. Parametric studies with the laboratory-scale unit using low-rank coal demonstrated its applicability to finer size fractions in the range of 6 mm to 1 mm. The statistically significant empirical models showed that all four parameters, i.e., blower frequency, table frequency, longitudinal angle and transverse angle, were significant in determining separation performance. Furthermore, the tests show that an increase in the transverse angle increased the flow rate of solids to the product end, and that the introduction of feed dampens the airflow at the feed end. Higher table frequency and feed rate had a detrimental effect on product yield due to the low residence time for particle settlement. The research further evaluated fine particle upgrading using various modeling techniques. The numerical model was evaluated using K-Epsilon and RSM turbulence formulations and validated against an experimental dataset. The results show that fine coal vortices forming around the riffles act as a transport mechanism for higher-density particle movement across the table deck, resulting in 43% displacement of the middlings and 29% displacement of the heavies to the product side. The velocity and vector plots show high local variance of air speed and pressure near the feed end, and an increase in feed rate results in a drop in the deshaling capability of the table. The table was further evaluated using modern scale-modeling concepts, and the scaling laws indicated that the vibration velocity has an integral effect on separation performance. The difference between the full-scale model and the scaled prototype was 3.83%, thus validating the scaling laws.
257

An Empirical Analysis of Income Tax Incentives for Individual Charitable Donations in Taiwan

朱紀燕, CHU CHI YEN Unknown Date (has links)
Over the past several decades, while the concept of social welfare spread through Europe and the United States, Taiwan was still in its economic take-off stage and, in the pursuit of prosperity, paid little attention to public welfare. In recent years Taiwan's economic development has reached a stable stage; having met their basic needs, people have begun to pay attention to their own welfare, prompting the government to shift its policy focus to the social welfare system. Decades of social welfare practice abroad now provide valuable experience for Taiwan's policy making; because the benefits of a social welfare system reach the entire population and its effects are deep and far-reaching, the government must evaluate such policies carefully.

To encourage charitable giving, the government uses income tax deductions to lower the price of giving; a lower price of giving raises the incentive to donate. Abroad, many researchers have used income tax records and household expenditure surveys to estimate the price and income elasticities of charitable giving and thus test the effectiveness of deduction policies, and most empirical results support their effectiveness. In Taiwan, income tax deductions apply not only to charitable donations but also to political party donations, donations to private schools and other non-charitable donations, each with a different policy purpose. Because of the characteristics of the data, this thesis uses only Taiwanese individual income tax data to estimate the price and income elasticities of charitable giving and to analyze empirically the various incentives for giving.

The results show a price elasticity of charitable giving of -4.0768; compared with results from abroad, changes in the price of giving appear to have a larger effect on individual donation amounts. Since a lower price of giving does raise the incentive to donate, if the government wishes to expand social welfare without increasing its own burden, and to keep charitable organizations from shrinking or closing for lack of funds, tax relief can serve as a means of encouraging donations.

In the estimates by income bracket, the income elasticity is insignificant for incomes between 175,000 and 400,000 but significant and positive for the other brackets, and it rises with disposable income; the price elasticity is significant only for disposable incomes between 900,000 and 1,800,000. As income rises, the income elasticity increases while the price elasticity falls, consistent with the foreign literature.

When other itemized deductions are added to the regression, childbirth and medical expenses have no significant effect on the amount of charitable donations, while life insurance premiums have a significant negative effect, consistent with the theoretical model of this thesis.

Finally, in a binary choice model of whether a taxpayer donates, the spouse's salary income is significant but, judging from its marginal effect, has little influence on the probability of giving; age and the number of dependents are also significant and, relative to the marginal effect of the combined salary income of the taxpayer and spouse, have a larger influence on the probability of giving. Marital status is the focus of this model: being married has a larger and significant effect on the probability of giving, with married taxpayers more likely to donate than unmarried ones, presumably because their lives and income sources are more stable, making them more willing, both psychologically and financially, to give. The price of giving has a very large marginal effect on the decision to donate, meaning that a reduction in the price of giving greatly increases the probability of giving, which again confirms the effectiveness of the government's income tax deduction policy in encouraging donations.
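The two estimation steps described above, a log-log giving equation for the price and income elasticities and a binary choice (probit) model with marginal effects for the decision to donate, can be sketched as follows. The data are synthetic and the coefficients used to generate them are arbitrary illustrative values, not the study's tax-return records or its reported elasticities.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000
# Hypothetical tax-return-style data; the real study uses Taiwanese
# individual income tax records, which cannot be reproduced here.
log_income = rng.normal(13, 0.8, n)                  # log disposable income
marginal_rate = np.clip(0.05 + 0.03 * (log_income - 12), 0.05, 0.4)
log_price = np.log(1 - marginal_rate)                # price of giving = 1 - marginal tax rate
log_gift = 1.5 * log_income - 4.0 * log_price + rng.normal(0, 1, n) - 15

# Price and income elasticities from a log-log giving equation
X = sm.add_constant(np.column_stack([log_price, log_income]))
ols = sm.OLS(log_gift, X).fit()
print(ols.params[1:])          # [price elasticity, income elasticity]

# Binary choice (donate or not): probit with average marginal effects
gives = (log_gift > np.median(log_gift)).astype(float)
probit = sm.Probit(gives, X).fit(disp=0)
print(probit.get_margeff().summary())
```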
258

Empirical Evaluations of Semantic Aspects in Software Development

Blom, Martin January 2006 (has links)
This thesis presents empirical research in the field of software development with a focus on handling semantic aspects. There is a general lack of empirical data in the field of software development. This makes it difficult for industry to choose an appropriate method for its particular needs. The lack of empirical data also makes it difficult to convey academic results to the industrial world.

This thesis tries to remedy the problem by presenting a number of empirical evaluations that were conducted to assess some common approaches to semantics handling. The evaluations produced some interesting results, but their main contribution is the addition to the body of knowledge on how to perform empirical evaluations in software development. The evaluations presented in this thesis include a between-groups controlled experiment, industrial case studies and a full factorial design controlled experiment. The factorial design seems like the most promising approach when the number of factors that need to be controlled is high and the number of available test subjects is low. A factorial design can evaluate more than one factor at a time and hence gauge the effects of different factors on the output.

Another contribution of the thesis is the development of a method for handling semantic aspects in an industrial setting. A background investigation concludes that there seems to be a gap between what academia proposes and how industry handles semantics in the development process. The proposed method aims at bridging this gap. It is based on academic results but has reduced formalism to better suit industrial needs. The method is applicable in an industrial setting without interfering too much with the normal way of working, yet it provides important benefits. This method is evaluated in the empirical studies along with other methods for handling semantics. In the area of semantic handling, further contributions of the thesis include a taxonomy for semantic handling methods as well as an improved understanding of the relation between semantic errors and the concept of contracts as a means of avoiding and handling these errors.
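The advantage claimed for the full factorial design, estimating several factors and their interaction from a single experiment, can be illustrated with a small two-way ANOVA. The factors, levels and response values below are hypothetical, chosen only to mimic the kind of semantics-handling experiment the thesis describes.

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(5)
# Hypothetical 2x2 full factorial: semantics-handling method x subject experience,
# with defect count as the response and equal numbers of subjects per cell.
rows = []
for method in ["contracts", "ad_hoc"]:
    for experience in ["junior", "senior"]:
        base = 5 if method == "contracts" else 8
        base -= 2 if experience == "senior" else 0
        for _ in range(10):
            rows.append({"method": method,
                         "experience": experience,
                         "defects": base + rng.normal(0, 1.5)})
df = pd.DataFrame(rows)

# Both main effects and their interaction are estimated from one experiment,
# which is the advantage of the factorial design over one-factor-at-a-time studies.
model = ols("defects ~ C(method) * C(experience)", data=df).fit()
print(anova_lm(model, typ=2))
```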
259

MINING UNSTRUCTURED SOFTWARE REPOSITORIES USING IR MODELS

Thomas, STEPHEN 12 December 2012 (has links)
Mining Software Repositories, the process of analyzing data related to software development practices, is an emerging field that aims to aid development teams in their day-to-day tasks. However, the data in many software repositories is currently unused because it is unstructured and therefore difficult to mine and analyze. Information Retrieval (IR) techniques, which were developed specifically to handle unstructured data, have recently been used by researchers to mine and analyze the unstructured data in software repositories, with some success. The main contribution of this thesis is the idea that the research and practice of using IR models to mine unstructured software repositories can be improved by going beyond the current state of affairs. First, we propose new applications of IR models to existing software engineering tasks. Specifically, we present a technique to prioritize test cases based on their IR similarity, giving highest priority to those test cases that are most dissimilar. In another new application of IR models, we empirically recover how developers use their mailing list while developing software. Next, we show how the use of advanced IR techniques can improve results. Using a framework for combining disparate IR models, we find that bug localization performance can be improved by 14–56% on average, compared to the best individual IR model. In addition, by using topic evolution models on the history of source code, we can uncover the evolution of source code concepts with an accuracy of 87–89%. Finally, we show the risks of current research, which uses IR models as black boxes without fully understanding their assumptions and parameters. We show that data duplication in source code has undesirable effects for IR models, and that eliminating the duplication improves their accuracy. Additionally, we find that in the bug localization task, an unwise choice of parameter values results in an accuracy of only 1%, whereas optimal parameters can achieve an accuracy of 55%. Through empirical case studies on real-world systems, we show that all of our proposed techniques and methodologies significantly improve the state of the art. / Thesis (Ph.D., Computing) -- Queen's University, 2012.
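The first application mentioned, prioritizing test cases by IR dissimilarity so that the most dissimilar tests run first, can be sketched with a standard TF-IDF representation and cosine similarity. This is only an illustration of the idea under assumed inputs (short test descriptions), not the thesis's actual implementation or evaluation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def prioritize_by_dissimilarity(test_texts):
    """Greedy ordering: repeatedly pick the test case least similar (on average)
    to those already selected, so dissimilar tests receive higher priority."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(test_texts)
    sim = cosine_similarity(tfidf)
    order = [int(np.argmin(sim.sum(axis=1)))]        # start with the overall outlier
    remaining = set(range(len(test_texts))) - set(order)
    while remaining:
        nxt = min(remaining, key=lambda i: sim[i, order].mean())
        order.append(nxt)
        remaining.remove(nxt)
    return order

tests = [
    "login with valid credentials shows dashboard",
    "login with invalid password shows error",
    "export report to pdf with charts",
    "password reset email is sent",
]
print(prioritize_by_dissimilarity(tests))
```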
260

Organizational Identity in Practice? How theoretical concepts of Organizational Identity are perceived in the empirical setting of Arla Foods

Maritz, Louise, Jarne, Sarah January 2014 (has links)
An organization's internal processes of identity management are argued to influence its communication, which in turn influences the perceived reputation of the organization. The aim of this study is to investigate how the organizational identity is reflected upon and perceived to be integrated in employees' daily work. This is done by applying the internal factors of a theoretical model, comprising identity, culture and image, to an empirical setting. Literature on organizational identity in relation to organizational culture and organizational image is reviewed, followed by 12 semi-structured interviews with managers from the marketing and human resource departments at Arla Foods in Sweden. The findings suggest that although employees reflect on the identity, there is a gap between reflection and action, meaning that the identity is not necessarily integrated in practice in the daily work due to different understandings of the organizational culture. In relation to the model, it is suggested that culture may not be so clearly connected to identity, whereas image and identity are very closely related. The empirical setting also shows that the context in which employees conduct their work is important for how they reflect upon the identity.
