391

Bootstrap and Empirical Likelihood-based Semi-parametric Inference for the Difference between Two Partial AUCs

Huang, Xin 17 July 2008 (has links)
With new tests being developed and marketed, the comparison of the diagnostic accuracy of two continuous-scale diagnostic tests is of great importance. Comparing the partial areas under the receiver operating characteristic curves (pAUC) is an effective method to evaluate the accuracy of two diagnostic tests. In this thesis, we study semi-parametric inference for the difference between two pAUCs. A normal approximation for the distribution of the difference between two pAUCs is derived. The empirical likelihood ratio for the difference between two pAUCs is defined and its asymptotic distribution is shown to be a scaled chi-square distribution. Bootstrap and empirical likelihood-based inferential methods for the difference are proposed. We construct five confidence intervals for the difference between two pAUCs. Simulation studies are conducted to compare the finite-sample performance of these intervals. We also use a real example as an application of our recommended intervals.
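A minimal sketch of one of the interval types the abstract mentions, a percentile-bootstrap confidence interval for the difference between two pAUCs on paired test scores. Function names, the FPR range of (0, 0.3) and the stratified resampling scheme are illustrative assumptions, not the thesis's exact estimators.

```python
import numpy as np
from sklearn.metrics import roc_curve

def pauc(y_true, scores, fpr_max=0.3):
    """Partial AUC over FPR in [0, fpr_max] by trapezoidal integration."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    tpr_at_max = np.interp(fpr_max, fpr, tpr)      # stop exactly at fpr_max
    keep = fpr <= fpr_max
    return np.trapz(np.append(tpr[keep], tpr_at_max),
                    np.append(fpr[keep], fpr_max))

def bootstrap_pauc_diff_ci(y, s1, s2, fpr_max=0.3, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for pAUC(test 1) - pAUC(test 2), paired design."""
    rng = np.random.default_rng(seed)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        # resample cases and controls separately so both classes are present
        idx = np.concatenate([rng.choice(pos, pos.size), rng.choice(neg, neg.size)])
        diffs[b] = pauc(y[idx], s1[idx], fpr_max) - pauc(y[idx], s2[idx], fpr_max)
    return np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
```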
392

Parametric classification and variable selection by the minimum integrated squared error criterion

January 2012 (has links)
This thesis presents a robust solution to the classification and variable selection problem when the dimension of the data, or number of predictor variables, may greatly exceed the number of observations. When faced with the problem of classifying objects given many measured attributes of the objects, the goal is to build a model that makes the most accurate predictions using only the most meaningful subset of the available measurements. The introduction of ℓ1-regularized model fitting has inspired many approaches that simultaneously do model fitting and variable selection. If parametric models are employed, the standard approach is some form of regularized maximum likelihood estimation. While this is an asymptotically efficient procedure under very general conditions, it is not robust. Outliers can negatively impact both estimation and variable selection. Moreover, outliers can be very difficult to identify as the number of predictor variables becomes large. Minimizing the integrated squared error, or L2 error, while less efficient, has been shown to generate parametric estimators that are robust to a fair amount of contamination in several contexts. In this thesis, we present a novel robust parametric regression model for the binary classification problem based on the L2 distance, the logistic L2 estimator (L2E). To perform simultaneous model fitting and variable selection among correlated predictors in the high-dimensional setting, an elastic net penalty is introduced. A fast computational algorithm for minimizing the elastic-net penalized logistic L2E loss is derived and results on the algorithm's global convergence properties are given. Through simulations we demonstrate the utility of the penalized logistic L2E at robustly recovering sparse models from high-dimensional data in the presence of outliers and inliers. Results on real genomic data are also presented.
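For context, a sketch of the standard baseline the abstract contrasts with: elastic-net penalized maximum-likelihood logistic regression. This is not the proposed robust logistic L2E; the simulated high-dimensional data and penalty settings below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical data: 100 observations, 1000 predictors, only the first 5 informative.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1000))
beta = np.zeros(1000)
beta[:5] = 2.0
y = (X @ beta + rng.normal(size=100) > 0).astype(int)

model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, C=0.1, max_iter=5000),
)
model.fit(X, y)
print("non-zero coefficients:", np.count_nonzero(model[-1].coef_))
```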
393

Essays in Efficiency Analysis

Demchuk, Pavlo 16 September 2013 (has links)
Today a standard procedure for analyzing the impact of environmental factors on the productive efficiency of a decision-making unit is a two-stage approach: first the efficiency is estimated, and then regression techniques are used to explain the variation in efficiency between different units. It is argued that this method may produce doubtful results that distort what the data actually represent. In order to introduce economic intuition and to mitigate the problem of omitted variables, we introduce a matching procedure to be used before the efficiency analysis. By having comparable decision-making units we implicitly control for the environmental factors while at the same time cleaning the sample of outliers. The main goal of the first part of the thesis is to compare a procedure that includes matching prior to the efficiency analysis with the straightforward two-stage procedure without matching, as well as with the alternative conditional efficiency frontier approach. We conduct a Monte Carlo study with different model specifications and, despite the reduced sample size, which may create some complications at the computational stage, we find the newly obtained results economically meaningful. We also compare the results obtained by the new method with those previously produced by Demchuk and Zelenyuk (2009), who compare the efficiencies of Ukrainian regions, and find some differences between the two approaches. The second part deals with an empirical study of electricity-generating power plants before and after the market reform in Texas. We compare private, public and municipal power generators using the method introduced in the first part. We find that municipal power plants operate mostly inefficiently, while private and public plants are very close in their production patterns. The new method allows us to compare decision-making units from different groups, which may have different objective schemes and productive incentives. Despite the fact that at a certain point after the reform private generators opted not to provide their data to the regulator, we were able to construct three different data samples comprising two and three groups of generators and analyze their production and efficiency patterns. In the third chapter we propose a semiparametric approach that is consistent with monotonicity and concavity constraints. Penalized splines are used to maintain the shape constraints via nonlinear transformations of spline basis expansions. The large-sample properties, an effective algorithm and a method of smoothing parameter selection are presented. Monte Carlo simulations and empirical examples demonstrate the finite-sample performance and the usefulness of the proposed method.
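An illustrative sketch of the matching idea: pair each decision-making unit (DMU) in one group with its nearest neighbour in another group on environmental variables before the efficiency analysis. The caliper, standardization and variable names are assumptions, not the thesis's exact matching procedure.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

def match_dmus(env_group_a, env_group_b, caliper=0.5):
    """1-nearest-neighbour matching on standardized environmental factors.
    Returns, for each DMU in group A, the index of its match in group B,
    or -1 when no match falls within the caliper."""
    scaler = StandardScaler().fit(np.vstack([env_group_a, env_group_b]))
    za, zb = scaler.transform(env_group_a), scaler.transform(env_group_b)
    nn = NearestNeighbors(n_neighbors=1).fit(zb)
    dist, idx = nn.kneighbors(za)
    idx = idx.ravel().astype(int)
    idx[dist.ravel() > caliper] = -1   # drop poor matches / likely outliers
    return idx
```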
394

Statistical Parametric Mapping of fMRI data using Spectral Graph Wavelets

Behjat, Hamid January 2012 (has links)
In typical statistical parametric mapping (SPM) of fMRI data, the functional data are pre-smoothed using a Gaussian kernel to reduce noise at the cost of losing spatial specificity. Wavelet approaches have been incorporated into such analyses by enabling an efficient representation of the underlying brain activity through spatial transformation of the original, un-smoothed data; a successful framework is wavelet-based statistical parametric mapping (WSPM), which enables integrated wavelet processing and spatial statistical testing. However, with conventional wavelets the functional data are assumed to lie on a regular Euclidean space, which is far from reality, since the underlying signal lies within the complex, non-rectangular domain of the cerebral cortex. Thus, using wavelets that operate on more complex domains, such as a graph, holds promise. The aim of the current project has been to integrate a recently developed spectral graph wavelet transform, as an advanced transformation for fMRI brain data, into the WSPM framework. We introduce the design of suitable weighted and unweighted graphs defined on the convoluted structure of the cerebral cortex. An optimal design of spatially localized spectral graph wavelet frames suitable for the designed large-scale graphs is introduced. We have evaluated the proposed graph approach for fMRI analysis on both simulated and real data. The results show a superior performance in detecting fine-structured, spatially localized activation maps compared to the use of conventional wavelets as well as to standard SPM. The approach is implemented in an SPM-compatible manner and is included as an extension to the WSPM toolbox for SPM.
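A minimal illustration of spectral graph wavelet filtering: the wavelet at scale s applies a kernel g(s·λ) to the spectrum of the graph Laplacian. Real cortical graphs are far too large for a full eigendecomposition (practical implementations use Chebyshev polynomial approximations); the tiny graph, signal and kernel below are assumptions for illustration only.

```python
import numpy as np

def graph_laplacian(A):
    """Combinatorial Laplacian L = D - A for a (weighted) adjacency matrix."""
    return np.diag(A.sum(axis=1)) - A

def sgwt_filter(signal, A, scale, kernel=lambda x: x * np.exp(-x)):
    """Filter a graph signal with the spectral kernel g(scale * lambda)."""
    lam, U = np.linalg.eigh(graph_laplacian(A))   # Laplacian eigen-decomposition
    g = kernel(scale * lam)                       # kernel on the spectrum
    return U @ (g * (U.T @ signal))               # U g(s Lambda) U^T f

# Hypothetical 4-node path graph with a signal on its vertices
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
f = np.array([1.0, 0.2, -0.3, 0.8])
w = sgwt_filter(f, A, scale=2.0)
```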
395

Finite Element based Parametric Studies of a Truck Cab subjected to the Swedish Pendulum Test

Engström, Henrik, Raine, Jens January 2007 (has links)
Scania has a policy of attaining a high crashworthiness standard, and their trucks have to conform to Swedish cab safety standards. The main objective of this thesis is to clarify which of the parameter variations present during the second part of the Swedish cab crashworthiness test on a Scania R-series cab have a significant influence on the intrusion response. An LS-DYNA FE model of the test case is analysed, with parameter variations introduced through the probabilistic analysis tool LS-OPT. Examples of analysed variations are the sheet thickness and material variations, such as the stress-strain curves of the structural components; variations in the test setup, such as the pendulum velocity and the angle of approach at impact, are also taken into account. The effect of including the component forming in the analysis is investigated, with the variations of the material parameters implemented prior to the forming. An additional objective is to analyse the influence of simulation- and model-dependent variations and to weigh their respective effects on intrusion against the physical variations stated above. A submodel is created to speed up the simulations, since the numerous parameter variations yield a large number of different designs and hence multiple analyses. Important structural-component sensitivities are extracted from the results and indicate where to focus attention when trying to increase the robustness of the cab. The results also show that the placement of the pendulum in the y direction (sideways, seen from the driver's perspective) is the most significant physical parameter variation during the Swedish pendulum test. It is concluded that, to achieve a fair comparison of structural performance from repeated crash testing, this pendulum variation must be kept to a minimum. Simulation- and model-dependent parameters in general were shown to have large effects on the intrusion. It is concluded that further investigations of individual simulation- or model-dependent parameters should be performed to establish which description to use. Mapping material effects from the forming simulation into the crash model gave a slightly stiffer response compared to the mean pre-stretch approximations currently used by Scania. This is nevertheless a significant result, considering that Scania's approximations also include bake-hardening effects from the painting process.
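The thesis runs the parameter variations through LS-OPT on the LS-DYNA model; as a rough sketch of how such a study could be set up outside that toolchain, the snippet below samples a design-of-experiments table with Latin hypercube sampling. The parameter names and ranges are invented for illustration.

```python
import numpy as np
from scipy.stats import qmc

names = ["sheet_thickness_mm", "yield_scale", "pendulum_velocity_mps",
         "impact_angle_deg", "pendulum_y_offset_mm"]
lower = np.array([1.35, 0.90, 6.8, -2.0, -50.0])   # hypothetical lower bounds
upper = np.array([1.65, 1.10, 7.2,  2.0,  50.0])   # hypothetical upper bounds

sampler = qmc.LatinHypercube(d=len(names), seed=1)
designs = qmc.scale(sampler.random(n=64), lower, upper)  # 64 candidate FE runs

# Each row is one candidate run of the FE model; intrusion responses returned
# by the solver could then be regressed on these columns to rank sensitivities.
print(designs[:3])
```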
396

A new approach to stochastic frontier estimation: DEA+

Gstach, Dieter January 1996 (has links) (PDF)
The outcome of a production process might deviate from a theoretical maximum not only due to inefficiency, but also because of non-controllable influences. This raises the issue of the reliability of Data Envelopment Analysis (DEA) in noisy environments. I propose to assume an i.i.d. data generating process with a bounded noise component, so that the following approach is feasible: use DEA to estimate a pseudo-frontier first (nonparametric shape estimation), then apply an ML technique to the DEA-estimated efficiencies to estimate the scalar value by which this pseudo-frontier must be shifted downward to obtain the true production frontier (location estimation). I prove that this approach yields consistent estimates of the true frontier. (author's abstract) / Series: Department of Economics Working Paper Series
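A minimal sketch of the first stage only (DEA shape estimation), using an input-oriented, constant-returns-to-scale model solved as a linear program; the subsequent ML estimation of the downward frontier shift described in the abstract is not shown.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR DEA.
    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs). Returns efficiency scores."""
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                     # minimize theta
        A_in = np.hstack([-X[o][:, None], X.T])         # X^T lambda <= theta * x_o
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])     # Y^T lambda >= y_o
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[o]]
        bounds = [(None, None)] + [(0, None)] * n       # theta free, lambda >= 0
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        scores[o] = res.x[0]
    return scores
```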
397

Multifrequency Raman Generation in the Transient Regime

Turner, Fraser January 2006 (has links)
Two-colour pumping was used to investigate the short-pulse technique of Multifrequency Raman Generation (MRG) in the transient regime of Raman scattering. In the course of this study we have demonstrated the ability to generate over thirty Raman orders spanning from the infrared to the ultraviolet, investigated the dependence of this generation on the pump intensities and the dispersion characteristics of the hollow-fibre system in which the experiment was conducted, and developed a simple computer model to help understand the exhibited behaviours. These dependence studies have revealed some characteristics that have previously been mentioned in the literature, such as the competition between MRG and self-phase modulation, but have also demonstrated behaviours that are dramatically different from anything reported on the subject. Furthermore, through a simple modification of the experimental apparatus we have demonstrated the ability to scatter a probe pulse into many Raman orders, generating bandwidth comparable to the best pump-probe MRG experiments. By using a numeric fast Fourier transform, we predict that our spectra can generate pulses as short as 3.3 fs, with energies an order of magnitude larger than pulses of comparable duration made using current techniques.
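An illustrative sketch of the kind of transform-limit estimate the abstract mentions: place a comb of equally spaced Raman orders on a frequency grid, assume flat spectral phase, and inverse-FFT to estimate the shortest supported pulse. The order spacing, count and amplitudes below are hypothetical and do not reproduce the thesis's measured spectra.

```python
import numpy as np

n_fft, dt = 2 ** 16, 1e-16                        # 0.1 fs time step
freq = np.fft.fftfreq(n_fft, d=dt)                # frequency grid in Hz
spacing = 18e12                                   # hypothetical Raman shift ~ 18 THz
orders = 3.8e14 + spacing * np.arange(30)         # 30 orders starting near 380 THz

spectrum = np.zeros(n_fft)
for f0 in orders:                                 # place each order on the grid
    spectrum[np.argmin(np.abs(freq - f0))] = 1.0

field = np.fft.fftshift(np.fft.ifft(spectrum))    # flat phase -> shortest pulse
intensity = np.abs(field) ** 2
t = (np.arange(n_fft) - n_fft // 2) * dt

# FWHM of the central burst (one comb period around t = 0)
central = np.abs(t) < 0.5 / spacing
mask = central & (intensity > 0.5 * intensity.max())
print(f"transform-limited FWHM ~ {(t[mask].max() - t[mask].min()) * 1e15:.1f} fs")
```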
398

Waveguide Sources of Photon Pairs

Horn, Rolf January 2011 (has links)
This thesis describes various methods for producing photon pairs from waveguides. It covers relevant topics such as waveguide coupling and phase matching, along with the measurement techniques used to infer photon pair production. A new proposal to solve the phase matching problem is described, along with two conceptual methods for generating entangled photon pairs. Photon pairs are also experimentally demonstrated from a third novel structure called a Bragg Reflection Waveguide (BRW). The new proposal to solve the phase matching problem is called Directional Quasi-Phase Matching (DQPM). It is a technique that exploits the directional dependence of the non-linear susceptibility ($\chi^{(2)}$) tensor. It is aimed at those materials that do not allow birefringent phase matching or periodic poling. In particular, it focuses on waveguides in which the interplay between the propagation direction, the electric field polarizations and the nonlinearity can change the strength and sign of the nonlinear interaction periodically to achieve quasi-phase matching. One of the new conceptual methods for generating entangled photon pairs involves sandwiching together two waveguides from two differently oriented but similar crystals. The idea stems from the design of a Michelson interferometer, which interferes the paths over which two distinct photon pair processes can occur, thereby creating entanglement in any pair of photons created in the interferometer. By forcing or sandwiching the two waveguides together, the physical space that exists in the standard Michelson-type interferometer is eliminated, and the interferometer is effectively squashed. The result is that the two photon pair processes actually occupy the same physical path, which benefits the stability of the interferometer in addition to miniaturizing it. The technical challenges involved in sandwiching the two waveguides are briefly discussed. The main result of this thesis is the observation of photon pairs from the BRW. By analyzing the time correlation between two single-photon detection events, spontaneous parametric down-conversion (SPDC) of a picosecond pulsed Ti:sapphire laser is demonstrated. The process is mediated by a ridge BRW. The results show evidence for type-0, type-I and type-II phase matching of pump light at 783 nm, 786 nm and 789 nm to down-converted light that is strongly degenerate at 1566 nm, 1572 nm and 1578 nm, respectively. The inferred efficiency of the BRW was $9.8\cdot10^{-9}$ photon pairs per pump photon, which contrasts with the predicted type-0 efficiency of $2.65\cdot10^{-11}$. These data are presented for the first time for such waveguides and represent a significant advance towards the integration of sources of quantum information into the existing telecommunications infrastructure.
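A small sketch of the time-correlation measurement used to infer photon pairs: histogram the delays between detection events on two single-photon detectors and look for a coincidence peak above the accidental background. The timestamp format, coincidence window and bin width are assumptions for illustration.

```python
import numpy as np

def coincidence_histogram(t1, t2, window_ns=50.0, bin_ns=0.5):
    """Histogram of delays (t2 - t1) within +/- window_ns; timestamps in ns."""
    t1, t2 = np.sort(np.asarray(t1)), np.sort(np.asarray(t2))
    delays = []
    for t in t1:
        j = np.searchsorted(t2, t - window_ns)       # first detector-2 event in range
        while j < t2.size and t2[j] <= t + window_ns:
            delays.append(t2[j] - t)
            j += 1
    bins = np.arange(-window_ns, window_ns + bin_ns, bin_ns)
    counts, edges = np.histogram(delays, bins=bins)
    return counts, edges                              # peak near zero delay => pairs
```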
399

Effects of deep excavations on circular tunnels in fine-grained soils

Karki, Rajendra 30 May 2006 (has links)
This thesis presents a study of the effects of deep excavations on adjacent metro or utility tunnels in soft to medium soils. The main objective of the thesis is to develop a method of estimating these effects quantitatively. An extensive review of relevant literature published over the past four decades was conducted in order to understand the trends and key developments in this area. The literature review revealed that the concurrent use of the Observational Method and the finite element method for monitoring and controlling ground deformations around an excavation has become the norm for deep excavation projects. Several design charts and guidelines for estimating the effects of deep excavations on adjacent raft or pile foundations were found in the literature; however, no such charts or guidelines were found for estimating the effects of deep excavations on existing circular tunnels. Consequently, the development of such guidelines was established as one of the objectives of this study.

The initial phase of the research focused on detailed study and analysis of two well-documented case studies: the Chicago Subway Renovation Project, USA, and the Tan Tock Seng Hospital Deep Excavation, Singapore. Back analyses of these two case studies were carried out using the finite element software PLAXIS. Exact site conditions and input parameters for the soil and the structural components were incorporated as far as possible. Appropriate adjustments to some of the input parameters were necessary to achieve a good match between the computed and observed results.

The back analyses were followed by parametric studies to identify the important variables controlling the mechanisms of soil-structure interaction. The variables identified from the parametric studies of the two case studies were: soil stiffness, tunnel lining thickness, depth of the excavation, and location of the tunnel. These variables were used to conduct a series of finite element analyses with simplified geometry and ground conditions for the purpose of formulating preliminary design charts. Results from these analyses were recorded in terms of in-plane and out-of-plane distortion of the tunnel lining as well as the additional shear forces and bending moments induced in the tunnel lining by an adjacent deep excavation. The results were made non-dimensional before being presented as contour plots. These contour plots constitute preliminary design charts, which can be used to estimate the tunnel lining deformation caused by an adjacent deep excavation.

Based on the results of this study, it can be concluded that a finite element program (such as PLAXIS) that is able to model the construction processes associated with tunnelling and deep excavation in an urban environment can be an invaluable tool for exploring the mechanism of ground deformation around a deep excavation and for quantifying its effects on existing adjacent structures. The modeller must, however, be aware that the way a particular construction process is modelled can differ between finite element programs, and it is important to interpret the instructions given in the program's manual correctly.

Detailed back analyses of well-documented deep excavation case histories are vital for building confidence in the selected finite element program. Such analyses also have the potential to identify key variables influencing the soil-structure interaction.

The preliminary design charts proposed in this thesis are convenient for obtaining approximate values of tunnel lining deformation caused by an adjacent deep excavation. Their non-dimensional nature makes them applicable to any depth of excavation and to tunnels of any size, depth of cover, and distance from the vertical face of the excavation. These design charts can be used by engineers and contractors for initial estimation, selection and preliminary design of excavation support systems, and are particularly useful during the planning phase. Town planners and project managers who need to decide on the feasibility, damage control and risk management aspects of a deep excavation project may also find them useful. It should, however, be kept in mind that the estimates obtained from these design charts are highly approximate and, as such, should be taken as guidelines for decision-making; they do not replace site-specific detailed analysis and monitoring.
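A hypothetical sketch of how non-dimensional parametric-study output could be assembled into a contour-style design chart. The grid, axis ratios and the placeholder response surface below are invented for illustration; they are not the thesis's charts or results.

```python
import numpy as np
import matplotlib.pyplot as plt

x_over_H = np.linspace(0.2, 3.0, 30)      # tunnel distance / excavation depth (assumed axes)
z_over_H = np.linspace(0.2, 2.0, 25)      # tunnel depth / excavation depth
X, Z = np.meshgrid(x_over_H, z_over_H)

# Placeholder response surface standing in for FE results: distortion decaying
# with distance and depth. Real charts would interpolate solver output instead.
distortion = 1e-3 * np.exp(-1.2 * X) * np.exp(-0.8 * Z)

fig, ax = plt.subplots()
cs = ax.contour(X, Z, distortion * 1e3, levels=8)
ax.clabel(cs, fmt="%.2f")
ax.set_xlabel("tunnel distance / excavation depth")
ax.set_ylabel("tunnel depth / excavation depth")
ax.set_title("Normalized lining distortion (x 1e-3), illustrative only")
plt.show()
```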
400

Non-parametric Bayesian Learning with Incomplete Data

Wang, Chunping January 2010 (has links)
In most machine learning approaches, it is usually assumed that data are complete. When data are partially missing due to various reasons, for example the failure of a subset of sensors, image corruption or inadequate medical measurements, many learning methods designed for complete data cannot be directly applied. In this dissertation we treat two kinds of problems with incomplete data using non-parametric Bayesian approaches: classification with incomplete features and analysis of low-rank matrices with missing entries.

Incomplete data in classification problems are handled by assuming the input features to be generated from a mixture-of-experts model, with each individual expert (classifier) defined by a local Gaussian in feature space. With a linear classifier associated with each Gaussian component, nonlinear classification boundaries are achievable without the introduction of kernels. Within the proposed model, the number of components is theoretically "infinite" as defined by a Dirichlet process construction, with the actual number of mixture components (experts) needed inferred from the data under test. With a higher-level DP we further extend the classifier to the analysis of multiple related tasks (multi-task learning), where model components may be shared across tasks. Available data can be augmented by this form of information transfer even when tasks are similar only in some local regions of feature space, which is particularly critical for cases with scarce, incomplete training samples from each task. The proposed algorithms are implemented using efficient variational Bayesian inference, and robust performance is demonstrated on synthetic data, benchmark data sets, and real data with natural missing values.

Another scenario of interest is completing a data matrix with missing entries. The recovery of missing matrix entries is not possible without additional assumptions on the matrix under test, and here we employ the common assumption that the matrix is low-rank. Unlike methods with a preset fixed rank, we propose a non-parametric Bayesian alternative based on the singular value decomposition (SVD), where missing entries are handled naturally and the number of underlying factors is imposed to be small and inferred in light of the observed entries. Although we assume entries are missing at random, the proposed model is generalized to incorporate auxiliary information, including missingness features. We also make a first attempt within the matrix-completion community to acquire new entries actively. By introducing a probit link function, we are able to handle counting matrices with the decomposed low-rank matrices latent. The basic model and its extensions are validated on synthetic data, a movie-rating benchmark and a new data set presented for the first time. / Dissertation
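A small sketch of the Dirichlet-process idea the abstract builds on: a truncated DP Gaussian mixture whose effective number of components is inferred from the data. This is not the dissertation's mixture-of-experts classifier or its matrix-completion model; the data and hyperparameters are assumptions chosen only to illustrate the non-parametric prior.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Hypothetical data with three true clusters in two dimensions
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc, 0.3, size=(200, 2))
               for loc in ([0, 0], [3, 0], [0, 3])])

dpgmm = BayesianGaussianMixture(
    n_components=15,                                     # truncation level
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.5,
    max_iter=500, random_state=0,
).fit(X)

# Components with non-negligible weight are the ones the data actually support
effective = int(np.sum(dpgmm.weights_ > 1e-2))
print("effective number of components:", effective)
```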
