311

Assembly tolerance analysis in geometric dimensioning and tolerancing

Tangkoonsombati, Choowong 25 August 1994 (has links)
Tolerance analysis is a major link between design and manufacturing. An assembly or a part should be designed based on its functions, manufacturing processes, desired product quality, and manufacturing cost. Assembly tolerance analysis performed at the design stage can reduce potential manufacturing and assembly problems. Several commonly used assembly tolerance analysis models and their limitations are reviewed in this research. A new assembly tolerance analysis model is also developed to address the limitations of the existing models. The new model elucidates the impact of the flatness symbol (one of the Geometric Dimensioning and Tolerancing (GD&T) specification symbols) and reduces the design variables to simple mathematical equations. The new model is based on a beta distribution of part dimensions. In addition, a group of manufacturing variables, including quality factor, process tolerance, and mean shift, is integrated into the new assembly tolerance analysis model. A computer-integrated system has been developed to handle four support systems for performing tolerance analysis in a single computer application. These support systems are: 1) the CAD drawing system, 2) the Geometric Dimensioning and Tolerancing (GD&T) specification system, 3) the assembly tolerance analysis model, and 4) the tolerance database operating under the Windows environment. Dynamic Data Exchange (DDE) is applied to exchange data between two different Windows applications, improving information transfer between the support systems. In this way, the user is able to use the integrated system to select a GD&T specification, determine a critical assembly dimension and tolerance, and access the tolerance database during the design stage simultaneously. Examples are presented to illustrate the application of the integrated tolerance analysis system. / Graduation date: 1995
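The stack-up approach described above lends itself to a short Monte Carlo illustration: part dimensions are drawn from beta distributions scaled to their tolerance bands (optionally with a mean shift), combined into the assembly dimension, and compared against the assembly specification. This is a minimal sketch of the general idea, not the thesis's model; the part nominals, tolerances, beta shape parameters, and specification limits below are assumed for illustration.

```python
# Minimal Monte Carlo sketch of a beta-distribution-based assembly tolerance
# stack-up.  All part names, nominals, tolerances, beta shapes, and spec
# limits are illustrative assumptions, not values from the thesis.
import numpy as np

rng = np.random.default_rng(0)

def sample_dimension(nominal, tol, a=4.0, b=4.0, mean_shift=0.0, n=100_000):
    """Sample a part dimension from a beta distribution scaled to
    [nominal - tol, nominal + tol], optionally shifted off-center."""
    x = rng.beta(a, b, size=n)                  # values in (0, 1)
    return nominal - tol + 2.0 * tol * x + mean_shift

# Hypothetical three-part linear stack: assembly gap = A - B - C.
A = sample_dimension(50.00, 0.10)
B = sample_dimension(30.00, 0.05, mean_shift=0.01)
C = sample_dimension(19.90, 0.05)
gap = A - B - C

spec_lo, spec_hi = 0.02, 0.18                   # assumed assembly spec limits
print("mean gap    :", gap.mean())
print("std dev     :", gap.std())
print("out of spec :", np.mean((gap < spec_lo) | (gap > spec_hi)))
```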
312

Application of statistical methods to communication problems

January 1950 (has links)
[by] Y.W. Lee. / Bibliography: p. 39. / Army Signal Corps Contract No. W-36-039 sc-32037, Project No. 102B. Dept. of the Army Project No. 3-99-10-022.
313

Analysis of epidemiological data with covariate errors

Delongchamp, Robert 18 February 1993 (has links)
In regression analysis, random errors in an explanatory variable cause the usual estimates of its regression coefficient to be biased. Although this problem has been studied for many years, routine methods have not emerged. This thesis investigates some aspects of this problem in the setting of analysis of epidemiological data. A major premise is that methods to cope with this problem must account for the shape of the frequency distribution of the true covariable, e.g., exposure. This is not widely recognized, and many existing methods focus only on the variability of the true covariable, rather than on the shape of its distribution. Confusion about this issue is exacerbated by the existence of two classical models, one in which the covariable is a sample from a distribution and the other in which it is a collection of fixed values. A unified approach is taken here, in which for the latter of these models more attention than usual is given to the frequency distribution of the fixed values. In epidemiology the distribution of exposures is often very skewed, making these issues particularly important. In addition, the data sets can be very large, and another premise is that differences in the performance of methods are much greater when the samples are very large. Traditionally, methods have largely been evaluated by their ability to remove bias from the regression estimates. A third premise is that in large samples there may be various methods that will adequately remove the bias, but they may differ widely in how nearly they approximate the estimates that would be obtained using the unobserved true values. A collection of old and new methods is considered, representing a variety of basic rationales and approaches. Some comparisons among them are made on theoretical grounds provided by the unified model. Simulation results are given which tend to confirm the major premises of this thesis. In particular, it is shown that the performance of one of the most standard approaches, the "correction for attenuation" method, is poor relative to other methods when the sample size is large and the distribution of covariables is skewed. / Graduation date: 1993
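As background for the "correction for attenuation" method mentioned above, the following small simulation sketch shows the classical errors-in-variables effect: measurement error in a skewed exposure attenuates the naive regression slope, and dividing by the reliability ratio recovers the true slope on average. The lognormal exposure model and all numbers are illustrative assumptions, not data from the thesis.

```python
# Sketch of attenuation bias and its correction in simple linear regression.
# The error variance is assumed known (e.g., from a validation study).
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

x_true = rng.lognormal(mean=0.0, sigma=1.0, size=n)    # skewed "exposure"
sigma_u = 1.5
x_obs = x_true + rng.normal(0.0, sigma_u, size=n)      # error-prone measurement
y = 2.0 + 0.5 * x_true + rng.normal(0.0, 1.0, size=n)  # true slope = 0.5

# Naive slope using the observed covariate (attenuated toward zero).
beta_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)

# Reliability ratio lambda = var(X_true) / var(X_obs), estimated by
# subtracting the assumed-known error variance from var(X_obs).
lam = (np.var(x_obs, ddof=1) - sigma_u**2) / np.var(x_obs, ddof=1)
beta_corrected = beta_naive / lam

print("naive slope    :", beta_naive)      # noticeably below 0.5
print("corrected slope:", beta_corrected)  # close to 0.5 on average
```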
314

Geostatistical applications to salinity mapping and simulated reclamation

Al-Taher, Mohamad A. 17 December 1991 (has links)
Geostatistical methods were used to find efficient and accurate means for salinity assessment using regionalized random variables and limited sampling. The selected random variables, sodium absorption ratio (SAR), electrical conductivity (EC), and clay content, were measured on samples taken over an area of fifteen square miles. Ordinary kriging and co-kriging were used as linear estimators. They were compared on the basis of average kriging variance and the sum of squares for error between observed and estimated values. The results indicate a significant improvement in the average kriging variance and sum of squares when co-kriging estimators are used. EC was used to estimate SAR because of the high correlation between them. This was not true for clay content. A saving of two-thirds of the cost and time was achieved by using electrical conductivity as an auxiliary variable to estimate sodium absorption ratio. The nonlinear estimator, disjunctive kriging, was an improvement over co-kriging in terms of the variances. The amount of information available at the estimation site is a more important consideration for this nonlinear estimator than it is when the estimator is linear. Disjunctive kriging was used to produce an estimate of the conditional probability that the value at an unsampled location is greater than an arbitrary cutoff level. This feature of disjunctive kriging aids salinity assessment and reclamation management. A solute transport model was used to show how spatially variable initial conditions influenced the amount of water required to reclaim a saline soil at each sampling point in a simulated leaching of the area. / Graduation date: 1992
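For readers unfamiliar with the linear estimators compared above, the following is a compact sketch of ordinary kriging at a single unsampled location. The exponential covariance model, its parameters, and the sample EC values are assumptions chosen for illustration, not the variogram models fitted in the study.

```python
# Ordinary kriging sketch: solve the kriging system (covariance form) for
# the weights and the unbiasedness multiplier, then form the estimate and
# kriging variance.  Covariance model and data are illustrative assumptions.
import numpy as np

def exp_cov(h, sill=1.0, corr_range=300.0):
    """Exponential covariance as a function of separation distance h."""
    return sill * np.exp(-h / corr_range)

def ordinary_kriging(coords, values, target, sill=1.0, corr_range=300.0):
    """Return the ordinary-kriging estimate and kriging variance at `target`."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = exp_cov(d, sill, corr_range)
    # Augment with the unbiasedness constraint (weights sum to one).
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = C
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    b = np.zeros(n + 1)
    b[:n] = exp_cov(np.linalg.norm(coords - target, axis=1), sill, corr_range)
    b[n] = 1.0
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    estimate = w @ values
    variance = sill - w @ b[:n] - mu   # kriging variance at the target
    return estimate, variance

# Hypothetical EC samples (x, y in metres) and an unsampled location.
coords = np.array([[0.0, 0.0], [100.0, 50.0], [200.0, 0.0], [150.0, 200.0]])
ec = np.array([2.1, 3.4, 2.8, 4.0])
est, var = ordinary_kriging(coords, ec, np.array([120.0, 80.0]))
print("kriged EC estimate:", est, " kriging variance:", var)
```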
315

Statistical Local Appearance Models for Object Recognition

Guillamet Monfulleda, David 10 March 2004 (has links)
Durant els últims anys, hi ha hagut un interès creixent en les tècniques de reconeixement d'objectes basades en imatges, on cadascuna de les quals es correspon a una aparença particular de l'objecte. Aquestes tècniques que únicament utilitzen informació de les imatges són anomenades tècniques basades en l'aparença i l'interès sorgit per aquestes tècniques és degut al seu èxit a l'hora de reconèixer objectes. Els primers mètodes basats en l'aparença es recolzaven únicament en models globals. Tot i que els mètodes globals han estat utilitzats satisfactòriament en un conjunt molt ampli d'aplicacions basades en la visió per computador (per exemple, reconeixement de cares, posicionament de robots, etc), encara hi ha alguns problemes que no es poden tractar fàcilment. Les oclusions parcials, canvis excessius en la il·luminació, fons complexes, canvis en l'escala i diferents punts de vista i orientacions dels objectes encara són un gran problema si s'han de tractar des d'un punt de vista global. En aquest punt és quan els mètodes basats en l'aparença local van sorgir amb l'objectiu primordial de reduir l'efecte d'alguns d'aquests problemes i proporcionar una representació molt més rica per ser utilitzada en entorns encara més complexes. Usualment, els mètodes basats en l'aparença local utilitzen descriptors d'alta dimensionalitat a l'hora de descriure regions locals dels objectes. Llavors, el problema de la maledicció de la dimensionalitat (curse of dimensionality) pot sorgir i la classificació dels objectes pot empitjorar. En aquest sentit, un exemple típic per alleujar la maledicció de la dimensionalitat és la utilització de tècniques basades en la reducció de la dimensionalitat. D'entre les possibles tècniques per reduir la dimensionalitat, es poden utilitzar les transformacions lineals de dades. Bàsicament, ens podem beneficiar de les transformacions lineals de dades si la projecció millora o manté la mateixa informació de l'espai d'alta dimensió original i produeix classificadors fiables. Llavors, el principal objectiu és la modelització de patrons d'estructures presents als espais d'altes dimensions en espais de baixes dimensions. La primera part d'aquesta tesi utilitza primordialment histogrames color, un descriptor local que ens proveeix d'una bona font d'informació relacionada amb les variacions fotomètriques de les regions locals dels objectes. Llavors, aquests descriptors d'alta dimensionalitat es projecten en espais de baixes dimensions tot utilitzant diverses tècniques. L'anàlisi de components principals (PCA), la factorització de matrius amb valors no-negatius (NMF) i la versió ponderada del NMF són 3 transformacions lineals que s'han introduït en aquesta tesi per reduir la dimensionalitat de les dades i proporcionar espais de baixa dimensionalitat que siguin fiables i mantinguin les estructures de l'espai original. Una vegada s'han explicat, les 3 tècniques lineals són àmpliament comparades segons els nivells de classificació tot utilitzant una gran diversitat de bases de dades. També es presenta un primer intent per unir aquestes tècniques en un únic marc de treball i els resultats són molt interessants i prometedors. Un altre objectiu d'aquesta tesi és determinar quan i quina transformació lineal s'ha d'utilitzar tot tenint en compte les dades amb que estem treballant.
Finalment, s'introdueix l'anàlisi de components independents (ICA) per modelitzar funcions de densitat de probabilitats tant a espais originals d'alta dimensionalitat com la seva extensió en subespais creats amb el PCA. L'anàlisi de components independents és una tècnica lineal d'extracció de característiques que busca minimitzar les dependències d'alt ordre. Quan les seves assumpcions es compleixen, es poden obtenir característiques estadísticament independents a partir de les mesures originals. En aquest sentit, el ICA s'adapta al problema de reconeixement estadístic de patrons de dades d'alta dimensionalitat. Això s'aconsegueix utilitzant representacions condicionals a la classe i un esquema de decisió de Bayes adaptat específicament. Degut a l'assumpció d'independència aquest esquema resulta en una modificació del classificador ingenu de Bayes. El principal inconvenient de les transformacions lineals de dades esmentades anteriorment és que no consideren cap tipus de relació espacial entre les característiques locals. Conseqüentment, es presenta un mètode per reconèixer objectes tridimensionals a partir d'imatges d'escenes complexes, tot utilitzant un únic model après d'una imatge de l'objecte. Aquest mètode es basa directament en les característiques visuals locals extretes de punts rellevants dels objectes i té en compte les relacions espacials entre elles. Aquest nou esquema redueix l'ambigüitat de les representacions anteriors. De fet, es presenta una nova metodologia general per obtenir estimacions fiables de distribucions conjuntes de vectors de característiques locals de múltiples punts rellevants dels objectes. Per fer-ho, definim el concepte de k-tuples per poder representar l'aparença local de l'objecte a k punts diferents i al mateix moment les dependències estadístiques entre ells. En aquest sentit, el nostre mètode s'adapta a entorns complexes i reals demostrant una gran habilitat per detectar objectes en aquests escenaris amb resultats molt prometedors. / During the last few years, there has been a growing interest in object recognition techniques directly based on images, each corresponding to a particular appearance of the object. These techniques, which use only image information, are called appearance-based models, and the interest in them is due to their success in recognizing objects. Earlier appearance-based approaches relied on holistic representations. In spite of the fact that global representations have been successfully used in a broad set of computer vision applications (e.g., face recognition, robot positioning, etc.), there are still some problems that cannot be easily solved. Partial object occlusions, severe lighting changes, complex backgrounds, object scale changes and different viewpoints or orientations of objects are still a problem when they must be handled from a holistic perspective. Then, local appearance approaches emerged, as they reduce the effect of some of these problems and provide a richer representation to be used in more complex environments. Usually, local appearance methods use high dimensional descriptors to describe local regions of objects. Then, the curse of dimensionality problem appears and object classification degrades. A typical example to alleviate the curse of dimensionality problem is to use techniques based on dimensionality reduction. Among possible reduction techniques, one could use linear data transformations. We can benefit from linear data transformations if the projection improves or maintains the information of the high-dimensional space and produces reliable classifiers. Then, the main goal is to model low dimensional pattern structures present in high dimensional data. The first part of this thesis is mainly focused on the use of color histograms, a local descriptor which provides a good source of information directly related to the photometric variations of local image regions. Then, these high dimensional descriptors are projected to low dimensional spaces using several techniques. Principal Component Analysis (PCA), Non-negative Matrix Factorization (NMF) and a weighted version of NMF, the Weighted Non-negative Matrix Factorization (WNMF), are three linear data transformations introduced in this thesis to reduce dimensionality and provide reliable low dimensional spaces. Once introduced, these three linear techniques are widely compared in terms of performance using several databases. Also, a first attempt to merge these techniques in a unified framework is shown and results seem to be very promising. Another goal of this thesis is to determine when and which linear transformation might be used depending on the data we are dealing with. To this end, we introduce Independent Component Analysis (ICA) to model probability density functions in the original high dimensional spaces as well as its extension to model subspaces obtained using PCA. ICA is a linear feature extraction technique that aims to minimize higher-order dependencies in the extracted features. When its assumptions are met, statistically independent features can be obtained from the original measurements. We adapt ICA to the particular problem of statistical pattern recognition of high dimensional data. This is done by means of class-conditional representations and a specifically adapted Bayesian decision scheme. Due to the independence assumption, this scheme results in a modification of the naive Bayes classifier. The main disadvantage of the previous linear data transformations is that they do not take into account the relationship among local features. Consequently, we present a method for recognizing three-dimensional objects in intensity images of cluttered scenes, using a model learned from a single image of the object. This method is directly based on local visual features extracted from relevant keypoints of objects and takes into account the relationship between them. Then, this new scheme reduces the ambiguity of previous representations. In fact, we describe a general methodology for obtaining a reliable estimation of the joint distribution of local feature vectors at multiple salient points (keypoints). We define the concept of k-tuple in order to represent the local appearance of the object at k different points as well as the statistical dependencies among them. Our method is adapted to real, complex and cluttered environments, and we present promising object detection results in these scenarios.
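As a concrete illustration of one of the linear transformations discussed above, the following is a minimal NMF sketch using the standard Lee and Seung multiplicative updates, applied to non-negative descriptors such as color histograms. The random data, factorization rank, and iteration count are assumptions for illustration and do not reproduce the thesis's experiments.

```python
# Minimal NMF (Frobenius-norm multiplicative updates) for non-negative
# descriptors such as color histograms.  Data and rank are assumed.
import numpy as np

rng = np.random.default_rng(3)

def nmf(V, rank, n_iter=200, eps=1e-9):
    """Factor V (n_samples x n_features, non-negative) as W @ H."""
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis
    return W, H

# Hypothetical data: 50 local regions, each an 8x8x8 = 512-bin color histogram.
V = rng.random((50, 512))
W, H = nmf(V, rank=10)
print("relative reconstruction error:",
      np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```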
316

Small Scale Stochastic Dynamics For Particle Image Velocimetry Applications

Hohenegger, Christel 16 March 2006 (has links)
Fluid velocities and Brownian effects at nanoscales in the near-wall region of microchannels can be experimentally measured in an image plane parallel to the wall using, for example, the evanescent wave illumination technique combined with particle image velocimetry [R. Sadr et al., J. Fluid. Mech. 506, 357-367 (2004)]. Because the depth of field of this technique is difficult to modify, reconstruction of the out-of-plane dependence of the in-plane velocity profile remains extremely challenging. Tracer particles are not only carried by the flow, but also undergo random fluctuations imposed by the proximity of the wall. We study such a system under a particle-based stochastic approach (Langevin) and a probabilistic approach (Fokker-Planck). The Langevin description leads to a coupled system of stochastic differential equations. Because the simulated data will be used to test a statistical hypothesis, we pay particular attention to the strong order of convergence of the scheme, developing an appropriate Milstein scheme of strong order of convergence 1. Based on the probability density function of mean in-plane displacements, a statistical solution to the problem of reconstructing the out-of-plane dependence of the velocity profile is proposed. We developed a maximum likelihood algorithm which determines the most likely values of the velocity profile based on simulated perfect particle positions, simulated perfect mean displacements, and simulated observed mean displacements. Effects of Brownian motion on the approximation of the mean displacements are briefly discussed. A matched particle is a particle that starts and ends in the same image window after a measurement time. As soon as the computation and observation domains are not the same, the distribution of the out-of-plane distances sampled by matched particles during the measurement time is not uniform. The combination of a forward and a backward solution of the one-dimensional Fokker-Planck equation is used to determine this probability density function. The non-uniformity of the resulting distribution is believed to induce a bias in the determination of slip length and is quantified for relevant experimental parameters.
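To make the strong-order-1 discretization concrete, here is a hedged sketch of a single Milstein step for a scalar Langevin equation with position-dependent diffusivity, in the spirit of the near-wall tracer dynamics described above. The hindered-diffusion profile D(z), the drift term, and all parameter values are illustrative assumptions rather than the coupled system actually used in the thesis.

```python
# Milstein scheme (strong order 1) sketch for dZ = D'(Z) dt + sqrt(2 D(Z)) dW,
# with an assumed wall-hindered diffusivity D(z).  Not the thesis's model.
import numpy as np

rng = np.random.default_rng(2)

D0 = 1.0e-12        # bulk diffusivity (m^2/s), assumed
a_p = 1.0e-7        # particle radius (m), assumed

def D(z):
    """Assumed hindered diffusivity, decaying to zero at the wall z = 0."""
    return D0 * z / (z + a_p)

def dD_dz(z):
    return D0 * a_p / (z + a_p) ** 2

def milstein_step(z, dt):
    """One Milstein update; the b*b' term gives strong order 1."""
    b = np.sqrt(2.0 * D(z))
    db_dz = dD_dz(z) / b            # derivative of sqrt(2 D(z))
    dW = rng.normal(0.0, np.sqrt(dt))
    z_new = z + dD_dz(z) * dt + b * dW + 0.5 * b * db_dz * (dW**2 - dt)
    return max(z_new, 1.0e-12)      # crude reflection to keep z above the wall

# Simulate one tracer trajectory: 1000 steps of 1 microsecond.
z = 2.0e-7
for _ in range(1000):
    z = milstein_step(z, 1.0e-6)
print("final wall distance (m):", z)
```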
317

Supervised and unsupervised PRIDIT for active insurance fraud detection

Ai, Jing, 1981- 31 August 2012 (has links)
This dissertation develops statistical and data-mining-based methods for insurance fraud detection. Insurance fraud is very costly and has become a worldwide concern in recent years. Great efforts have been made to develop models to identify potentially fraudulent claims for special investigations. In a broader context, insurance fraud detection is a classification task. Both supervised learning methods (where a dependent variable is available for training the model) and unsupervised learning methods (where no prior information on the dependent variable is available) can potentially be employed to solve this problem. First, an unsupervised method is developed to improve detection effectiveness. Unsupervised methods are especially pertinent to insurance fraud detection, since the true nature of insurance claims (i.e., fraud or not) is very costly to determine, if it can be identified at all. In addition, available unsupervised methods are limited; some are computationally intensive, and their results can be difficult to interpret. An empirical demonstration of the proposed method is conducted on a widely used large dataset where labels are known for the dependent variable. The proposed unsupervised method is also empirically evaluated against prevalent supervised methods as a form of external validation. This method can be used in other applications as well. Second, another set of learning methods is then developed based on the proposed unsupervised method to further improve performance. These methods are developed in the context of a special class of data mining methods, active learning. The performance of these methods is also empirically evaluated using insurance fraud datasets. Finally, a method is proposed to estimate the fraud rate (i.e., the percentage of fraudulent claims in the entire claims set). Since the true nature of insurance claims (and any level of fraud) is unknown in most cases, there has not been any consensus on the estimated fraud rate. The proposed estimation method is designed based on the proposed unsupervised method. Implemented using insurance fraud datasets with the known nature of claims (i.e., fraud or not), this estimation method yields accurate estimates which are superior to those generated by a benchmark naïve estimation method. / text
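PRIDIT, referenced in the title, combines RIDIT scoring of ordinal indicators with principal component analysis. The sketch below illustrates that general idea on toy data: ordinal fraud indicators are RIDIT-scored, and the first principal component of the scored matrix is used as an unsupervised suspicion ranking. The claim data, indicator coding, and scoring details are assumptions for illustration, not the dissertation's implementation.

```python
# Rough PRIDIT-style sketch: RIDIT-score ordinal indicators, then use the
# first principal component as an unsupervised suspicion score.
import numpy as np

def ridit_scores(column):
    """Map an ordinal column (higher = more suspicious) to scores in [-1, 1]."""
    values, counts = np.unique(column, return_counts=True)
    p = counts / counts.sum()
    cum_below = np.concatenate(([0.0], np.cumsum(p)[:-1]))
    # Proportion of claims below the category minus proportion above it.
    score_map = {v: cb - (1.0 - cb - pi) for v, cb, pi in zip(values, cum_below, p)}
    return np.array([score_map[v] for v in column])

# Hypothetical ordinal indicators for 8 claims (0 = benign, 2 = suspicious).
claims = np.array([
    [0, 1, 0],
    [2, 2, 1],
    [1, 0, 0],
    [2, 1, 2],
    [0, 0, 0],
    [1, 2, 2],
    [0, 0, 1],
    [2, 2, 2],
])

scored = np.column_stack([ridit_scores(claims[:, j]) for j in range(claims.shape[1])])

# First principal component (via SVD) as the overall suspicion score.
# (The sign of the component may need flipping so that higher = more suspicious.)
centered = scored - scored.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
suspicion = centered @ vt[0]
print("claims ranked from most to least suspicious:", np.argsort(-suspicion))
```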
318

Statistical validation of kidney deficiency syndromes (KDS) and the development of a KDS questionnaire in Hong Kong Chinese women aged 40-60 years

Chen, Runqiu., 陳潤球. January 2009 (has links)
published_or_final_version / Community Medicine / Doctoral / Doctor of Philosophy
319

On the use of multiple imputation in handling missing values in longitudinal studies

Chan, Pui-shan, 陳佩珊 January 2004 (has links)
published_or_final_version / Medical Sciences / Master / Master of Medical Sciences
320

Semiparametric analysis of interval censored survival data

Long, Yongxian., 龙泳先. January 2010 (has links)
published_or_final_version / Statistics and Actuarial Science / Master / Master of Philosophy
