441 |
FPCA Based Human-like Trajectory Generating. Dai, Wei. 01 January 2013 (has links)
This thesis presents a new method for generating human-like upper limb and hand motion. The work is based on Functional Principal Component Analysis (FPCA) and Quadratic Programming. The human-like motion generation problem is formulated as minimizing the difference between the dynamic profile of the optimal trajectory and those of known trajectory types. Statistical analysis is applied to pre-captured human motion records so that the method operates in a low-dimensional space. A novel PCA/FPCA hybrid motion recognition method is proposed and implemented on human grasping data to demonstrate its advantage in human motion recognition; a human grasping hierarchy is also proposed. The proposed generation method explores the ability to learn motion kernels from human demonstration, and issues in acquiring these motion kernels are discussed. The trajectory planning method applies different weights to the extracted motion kernels to approximate the kinematic constraints of the task. Multiple means of evaluation illustrate the quality of the generated optimal human-like trajectory compared to real human motion records.
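A minimal sketch of the FPCA-plus-quadratic-programming idea the abstract describes, assuming joint trajectories discretized on a common time grid. The demonstration data, the number of retained kernels, and the endpoint constraints below are all invented for illustration and are not taken from the thesis.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical pre-captured joint-angle records: 50 demonstrations,
# each sampled at 100 time points (shapes and values are assumptions).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
demos = np.sin(np.pi * t)[None, :] + 0.05 * rng.standard_normal((50, 100))

# FPCA on the discretized curves: eigenvectors of the sample
# covariance act as "motion kernels" learned from demonstration.
mean_curve = demos.mean(axis=0)
centered = demos - mean_curve
_, _, vt = np.linalg.svd(centered, full_matrices=False)
kernels = vt[:4]                      # keep the first 4 kernels

# QP: weight the kernels to meet task constraints (here, fixed start
# and end positions) while staying close to the demonstrated subspace.
def objective(w):
    return float(w @ w)               # prefer small deviations from the mean

def endpoint(w, target_start=0.0, target_end=0.1):
    traj = mean_curve + w @ kernels
    return [traj[0] - target_start, traj[-1] - target_end]

res = minimize(objective, x0=np.zeros(4), method="SLSQP",
               constraints={"type": "eq", "fun": endpoint})
humanlike_traj = mean_curve + res.x @ kernels
print(res.success, humanlike_traj[0], humanlike_traj[-1])
```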
|
442 |
Equipment data analysis study: failure time data modeling and analysis. Zhu, Chen, Master of Science in Engineering. 16 August 2012 (has links)
This report presents descriptive data analysis and failure time modeling that can be used to characterize failure time patterns. The descriptive analysis covers the mean, median, 1st quartile, 3rd quartile, frequency, standard deviation, skewness, kurtosis, minimum, maximum and range. The exponential, gamma, normal, lognormal, Weibull and log-logistic distributions were studied as candidate failure time models. The data come from the South Texas Project (STP) and were collected over the last 40 years. We generated more than 1000 groups of STP failure time data based on Mfg Part Number, and the top twelve groups were selected as the study set. For each group, we fit the different models and estimated their parameters. The significance level and p-value were obtained from the Kolmogorov-Smirnov test, a goodness-of-fit test that indicates how well a distribution fits the data. In this report, the Weibull distribution proved to be the most appropriate model for the STP dataset: of the twelve groups, eight are best described by a Weibull distribution. In general, the Weibull distribution is powerful for failure time modeling. / text
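A sketch of the modeling workflow the report describes: fit several candidate lifetime distributions by maximum likelihood and screen them with the Kolmogorov-Smirnov test. The failure times below are simulated stand-ins, not STP data, and the parameter values are assumptions.

```python
import numpy as np
from scipy import stats

# Simulated failure times standing in for one STP equipment group
# (the shape/scale values here are assumptions, not STP estimates).
rng = np.random.default_rng(1)
failure_times = stats.weibull_min.rvs(c=1.5, scale=2000.0,
                                      size=60, random_state=rng)

# Candidate lifetime models from the report.
candidates = {
    "weibull": stats.weibull_min,
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "exponential": stats.expon,
}

# Fit each model by maximum likelihood, then use the one-sample
# Kolmogorov-Smirnov test as a goodness-of-fit screen.
# (Caveat: estimating parameters from the same data makes the
# nominal p-values optimistic.)
for name, dist in candidates.items():
    params = dist.fit(failure_times)
    ks_stat, p_value = stats.kstest(failure_times, dist.cdf, args=params)
    print(f"{name:12s} KS={ks_stat:.3f} p={p_value:.3f}")
```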
|
443 |
Application of space time concept in GIS for visualizing and analyzing travel survey data. Lu, Xiaoyun. 04 December 2013 (has links)
The classic time geography concept (the space-time path) provides a powerful framework for studying travel survey data, an important source for travel behavior studies. Based on the space-time concept, this research presents a visualization approach to analyzing travel survey data. By importing the data into GIS software such as TransCAD and ArcGIS and editing the needed information, this study explains how to create 3D images of travel paths that show the variation of trip distribution in relation to the socioeconomic factors deemed to drive such patterns. The report also addresses the technical challenges involved in this kind of study and discusses directions for future research. / text
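As a rough illustration of the space-time path concept (outside of TransCAD/ArcGIS), the sketch below draws one respondent's travel diary as a 3D path with clock time on the vertical axis; the trip coordinates and times are invented.

```python
import matplotlib.pyplot as plt

# One respondent's travel-diary records as (x, y, time-of-day) tuples;
# all values are hypothetical.
trips = [(0.0, 0.0, 7.5), (3.2, 1.1, 8.0), (3.2, 1.1, 12.0),
         (5.0, 4.0, 12.5), (0.0, 0.0, 18.0)]
xs, ys, ts = zip(*trips)

# Space-time path: x/y are geographic position, the vertical axis is
# time, so vertical segments are stays and slanted segments are moves.
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot(xs, ys, ts, marker="o")
ax.set_xlabel("x (km)")
ax.set_ylabel("y (km)")
ax.set_zlabel("time of day (h)")
plt.show()
```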
|
444 |
Function-on-Function Regression with Public Health Applications. Meyer, Mark John. 06 June 2014 (has links)
Medical research currently involves the collection of large and complex data. One such type is functional data, where the unit of measurement is a curve measured over a grid. Functional data comes in a variety of forms depending on the nature of the research, and novel methodologies are required to accommodate this growing volume of functional data, alongside new testing procedures that provide valid inferences. In this dissertation, I propose three novel methods to accommodate a variety of questions involving functional data of multiple forms: (1) a function-on-function regression for Gaussian data; (2) a historical functional linear model for repeated measures; and (3) a generalized functional outcome regression for ordinal data. For each method, I discuss the existing shortcomings of the literature and demonstrate how my method fills those gaps. The abilities of each method are demonstrated via simulation and data application.
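A minimal sketch of the basic model underlying function-on-function regression, Y(t) = ∫ X(s) β(s,t) ds + ε, estimated here by ridge-penalized least squares on a discretized grid. This is not the dissertation's estimation machinery, only the model it builds on; grid sizes, the penalty, and the coefficient surface are assumptions.

```python
import numpy as np

# Discretized function-on-function regression with an invented smooth
# beta surface; all sizes and values are for illustration only.
rng = np.random.default_rng(2)
n, S, T = 80, 30, 25
s = np.linspace(0, 1, S)
t_grid = np.linspace(0, 1, T)
X = rng.standard_normal((n, S))
beta_true = np.outer(np.sin(np.pi * s), np.cos(np.pi * t_grid))
Y = X @ beta_true / S + 0.1 * rng.standard_normal((n, T))   # /S = quadrature step

# Ridge-penalized least squares estimate of the coefficient surface:
# solve (X'X + lam*I) B = X'Y jointly over the t grid, then undo the
# quadrature scaling.
lam = 1.0
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(S), X.T @ Y) * S

print("max abs error in beta surface:", np.abs(beta_hat - beta_true).max())
```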
|
445 |
Defect records analysis in Tsing Yi Power Station. 香旭勳, Heung, Yok-fun. January 1989 (has links)
published_or_final_version / Statistics / Master / Master of Social Sciences
|
446 |
Advanced Data Analysis and Test Planning for Highly Reliable Products. Zhang, Ye. January 2014 (has links)
Accelerated life testing (ALT) has been widely used in collecting failure time data of highly reliable products. Most parametric ALT models assume that the ALT data follow a specific probability distribution; however, the assumed distribution may not adequately describe the underlying failure time distribution. In this dissertation, a more generic method based on a phase-type (PH) distribution is presented to model ALT data. To estimate the parameters of such Erlang-Coxian-based ALT models, both a mathematical programming approach and a maximum likelihood method are developed. To the best of our knowledge, this dissertation demonstrates, for the first time, the potential of using PH distributions for ALT data analysis. To shorten ALT test time, degradation tests have been studied as a useful alternative; among them, destructive degradation tests (DDT) have attracted much attention in reliability engineering. Moreover, some materials/products start degrading only after a random degradation initiation time that is often not observable. In this dissertation, two-stage delayed-degradation models are developed to evaluate the reliability of a product with a random initiation time. For homogeneous and heterogeneous populations, fixed-effects and random-effects Gamma processes are considered, respectively. An expectation-maximization algorithm is developed to facilitate the maximum likelihood estimation of model parameters, and a bootstrap method is used to construct confidence intervals for the reliability index of interest. With an accelerated DDT (ADDT) model, an optimal test plan is presented to improve statistical efficiency. In designing the ADDT experiment, decision variables must be determined under constraints on limited resources, such as the number of test units and the total testing time. Here, the number of test units and the stress levels are pre-determined, and the goal is to improve statistical efficiency by appropriately allocating the test units to different stress levels so as to minimize the asymptotic variance of the estimator of the p-quantile of failure time. In particular, considering the random degradation initiation time, a three-level constant-stress destructive degradation test is studied, and a mathematical programming problem is formulated to minimize the asymptotic variance of the reliability estimate.
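A small simulation sketch of the two-stage delayed-degradation model described above: each unit degrades as a Gamma process only after a random, unobserved initiation time, and fails once degradation crosses a threshold. All parameter values are assumptions for illustration; the dissertation's EM and bootstrap estimation steps are not reproduced.

```python
import numpy as np

# Two-stage delayed degradation: random initiation, then Gamma-process
# growth; all parameters below are invented for illustration.
rng = np.random.default_rng(3)
n_units, n_inspections, dt = 200, 20, 1.0
shape_per_t, scale = 0.8, 0.5          # Gamma-process increment parameters
init_times = rng.exponential(scale=5.0, size=n_units)  # unobserved initiation
threshold = 4.0                        # failure when degradation crosses this

paths = np.zeros((n_units, n_inspections + 1))
for k in range(1, n_inspections + 1):
    t_now = k * dt
    # Gamma increments accrue only once the unit has initiated.
    active = t_now > init_times
    incr = rng.gamma(shape_per_t * dt, scale, size=n_units)
    paths[:, k] = paths[:, k - 1] + np.where(active, incr, 0.0)

# Empirical reliability at each inspection: fraction still below threshold.
reliability = (paths < threshold).mean(axis=0)
print(np.round(reliability, 3))
```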
|
447 |
Diagnosing spatial variation patterns in manufacturing processes. Lee, Ho Young. 30 September 2004 (has links)
This dissertation presents a method to aid in diagnosing the root causes of product and process variability in complex manufacturing processes when large quantities of multivariate in-process measurement data are available. As in any data mining application, the objective is to extract useful information from the data. A linear structured model, similar to the standard factor analysis model, is used to generically represent the variation patterns that result from the root causes. Blind source separation methods are investigated for identifying spatial variation patterns in manufacturing data, and the existing methods are extended and enhanced into a more effective, accurate and widely applicable approach for manufacturing variation diagnosis. An overall strategy is offered to guide the use of the presented methods in conjunction with alternative methods.
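A sketch of blind source separation applied to the linear structured model x = As + noise described above, using FastICA from scikit-learn as one representative BSS method (the dissertation's specific enhancements are not reproduced); the sources, mixing matrix, and dimensions are invented.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Simulated in-process measurements: two independent variation sources
# mixed through unknown spatial patterns, x = A s + noise.
rng = np.random.default_rng(4)
n_parts, n_features = 500, 12
sources = np.column_stack([
    rng.uniform(-1, 1, n_parts),              # e.g. a drifting fixture source
    np.sign(rng.standard_normal(n_parts)),    # e.g. a two-state tooling source
])
A = rng.standard_normal((n_features, 2))      # unknown variation patterns
X = sources @ A.T + 0.05 * rng.standard_normal((n_parts, n_features))

# FastICA recovers the sources and the mixing vectors up to scale and
# permutation; the columns of ica.mixing_ estimate the spatial variation
# patterns to be interpreted physically.
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)
print("estimated pattern matrix shape:", ica.mixing_.shape)
```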
|
448 |
Assessing Dynamic Externalities from a Cluster Perspective: The Case of the Motor Metropolis in Japan. Kawakami, Tetsu; Yamada, Eri. 08 1900 (has links)
No description available.
|
449 |
Using Three Different Categorical Data Analysis Techniques to Detect Differential Item Functioning. Stephens-Bonty, Torie Amelia. 16 May 2008 (has links)
Diversity in the population, along with diverse testing usage, has resulted in smaller identified groups of test takers. In addition, computer adaptive testing sometimes results in a relatively small number of items being used for a particular assessment. Statistical techniques that can effectively detect differential item functioning (DIF) when the sample is small and/or the assessment is short are therefore needed. Identification of empirically biased items is a crucial step in creating equitable and construct-valid assessments. Parshall and Miller (1995) compared the conventional asymptotic Mantel-Haenszel (MH) test with the exact test (ET) for the detection of DIF with small sample sizes. Several studies have since compared the performance of MH to logistic regression (LR) under a variety of conditions; both Swaminathan and Rogers (1990) and Hidalgo and López-Pina (2004) demonstrated that MH and LR were comparable in their detection of items with DIF. This study follows up by comparing the performance of MH, ET, and LR when the sample size is small and the test length is short. The purpose of this Monte Carlo simulation study was to expand on the research done by Parshall and Miller (1995) by examining power, and power with effect size measures, for each of the three DIF detection procedures. The following variables were manipulated: focal group sample size, percent of items with DIF, and magnitude of DIF. For each condition, a small reference group of 200 was used along with a short, 10-item test. The results demonstrated that, in general, LR was slightly more powerful in detecting items with DIF. In most conditions, however, power was well below the acceptable rate of 80%. As the size of the focal group and the magnitude of DIF increased, the three procedures were more likely to reach acceptable power, and all three demonstrated the highest power for the most discriminating item. Collectively, the results from this research provide information on DIF detection with small sample sizes.
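A minimal sketch of the logistic-regression DIF procedure compared in the study: regress the item response on the matching criterion (total score), group membership, and their interaction, then inspect the group terms. The simulated data and effect sizes below are assumptions, not the study's design.

```python
import numpy as np
import statsmodels.api as sm

# Simulated responses to one item: 200 reference + 200 focal examinees,
# with the item made 0.6 logits harder for the focal group (uniform DIF).
rng = np.random.default_rng(5)
n = 400
group = np.repeat([0, 1], n // 2)        # 0 = reference, 1 = focal
ability = rng.standard_normal(n)
total_score = ability + 0.3 * rng.standard_normal(n)  # matching criterion
p_correct = 1 / (1 + np.exp(-(ability - 0.6 * group)))
response = rng.binomial(1, p_correct)

# LR DIF model: a significant group coefficient suggests uniform DIF;
# a significant score-by-group interaction suggests nonuniform DIF.
X = sm.add_constant(np.column_stack([total_score, group,
                                     total_score * group]))
fit = sm.Logit(response, X).fit(disp=0)
print(fit.summary(xname=["const", "score", "group", "score_x_group"]))
```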
|
450 |
Duomenų analizės galimybių kompiuterinėse matematikos sistemose palyginimas / Data Analysis in Computer Mathematics Systems. Aleksandravičiūtė, Julita. 17 June 2005 (has links)
This work analyzes the main data analysis methods implemented in computer mathematics systems (CMS), and compares the data analysis capabilities of three CMS: MAPLE, MATLAB and MATHCAD. It briefly describes data entry and reading, descriptive statistics, analysis of variance, regression, interpolation and correlation. The final section compares the systems' data analysis capabilities in terms of sophistication, ease of use and the variety of data analysis functions implemented. Examples of tasks solved with each CMS follow the description of each data analysis method.
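For illustration only, the same basic analyses the work carries out in Maple, MATLAB and Mathcad can be expressed in a few lines of Python; the sample data here are invented.

```python
import numpy as np
from scipy import stats

# Invented sample data for a descriptive-statistics and regression demo.
rng = np.random.default_rng(6)
x = rng.normal(10.0, 2.0, 50)
y = 1.5 * x + rng.normal(0.0, 1.0, 50)

# Descriptive statistics of one sample.
print("mean/median:", np.mean(x), np.median(x))
print("std/skew/kurtosis:", np.std(x, ddof=1), stats.skew(x), stats.kurtosis(x))

# Simple linear regression and correlation between two samples.
slope, intercept, r, p, se = stats.linregress(x, y)
print(f"y ~ {slope:.2f}x + {intercept:.2f}, r={r:.3f}, p={p:.2g}")
```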
|