  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Quality Control Using Inferential Statistics in Weibull Analyses for Components Fabricated from Monolithic Ceramics

Parikh, Ankurben H. 04 April 2012
No description available.
162

Joint spectral embeddings of random dot product graphs

Draves, Benjamin 05 October 2022
Multiplex networks describe a set of entities, with multiple relationships among them, as a collection of networks over a common vertex set. Multiplex networks naturally describe complex systems where units connect across different modalities, whereas single network data only permits a single relationship type. Joint spectral embedding methods facilitate analysis of multiplex network data by simultaneously mapping vertices in each network to points in Euclidean space, known as node embeddings, where statistical inference is then performed. This mapping is performed by spectrally decomposing a matrix that summarizes the multiplex network. Different methods decompose different matrices and hence yield different node embeddings. This dissertation analyzes a class of joint spectral embedding methods, which provides a foundation to compare these different approaches to multiple network inference. We compare joint spectral embedding methods in three ways. First, we extend the Random Dot Product Graph model to multiplex network data and establish the statistical properties of node embeddings produced by each method under this model. This analysis facilitates a full bias-variance analysis of each method and uncovers connections between these methods and methods for dimensionality reduction. Second, we compare the accuracy of algorithms which utilize these different node embeddings in a variety of multiple network inference tasks including community detection, vertex anomaly detection, and graph hypothesis testing. Finally, we perform a time and space complexity analysis of each method and present a case study in which we analyze interactions between New England sports fans on the social news aggregation and discussion website, Reddit. These findings provide a theoretical and practical guide to compare joint spectral embedding techniques and highlight the benefits and drawbacks of utilizing each method in practice.
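A minimal sketch of one member of this class of methods, assuming Python with NumPy: the layers are summarized by their entrywise average, and the node embeddings are the scaled leading eigenvectors of that summary matrix. This illustrates only the general recipe (summarize the multiplex network, then spectrally decompose); it is not the dissertation's specific estimators, and the block-model generator, embedding dimension d = 2, and variable names are assumptions made for the example.

```python
import numpy as np

def ase(M, d):
    """Adjacency spectral embedding: eigenvectors of the d largest-magnitude
    eigenvalues of a symmetric matrix M, scaled by sqrt(|eigenvalue|)."""
    vals, vecs = np.linalg.eigh(M)
    idx = np.argsort(np.abs(vals))[::-1][:d]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

def joint_embedding_mean(adjacencies, d):
    """Embed a multiplex network by spectrally decomposing the averaged adjacency matrix."""
    Abar = np.mean(adjacencies, axis=0)   # one summary matrix for all layers
    return ase(Abar, d)

# Toy multiplex network: 3 layers on a common set of 100 vertices,
# generated from a 2-block stochastic block model (a special case of an RDPG).
rng = np.random.default_rng(0)
z = np.repeat([0, 1], 50)
B = np.array([[0.5, 0.1], [0.1, 0.4]])
P = B[np.ix_(z, z)]
layers = [(rng.uniform(size=P.shape) < P).astype(float) for _ in range(3)]
layers = [np.triu(A, 1) + np.triu(A, 1).T for A in layers]   # symmetric, no self-loops

X_hat = joint_embedding_mean(layers, d=2)   # node embeddings in R^2
print(X_hat.shape)                          # (100, 2)
```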
163

Distribution of points on spherical objects and applications

Selvitella, Alessandro 10 1900
In this thesis, we discuss some results on the distribution of points on the sphere, asymptotically when both the number of points and the dimension of the sphere tend to infinity. We then give some applications of these results to some statistical problems and especially to hypothesis testing. / Master of Science (MSc)
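As a small numerical illustration of this asymptotic regime, assuming Python with NumPy (the construction and parameter choices are mine, not the thesis's): points drawn uniformly on the unit sphere become nearly orthogonal to one another as the dimension grows, one elementary example of how the geometry changes when both the number of points and the dimension increase.

```python
import numpy as np

rng = np.random.default_rng(1)

def uniform_sphere(n, p):
    """n points uniform on the unit sphere in R^p: normalize standard Gaussian vectors."""
    x = rng.standard_normal((n, p))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Pairwise inner products of random points on the sphere concentrate around 0
# as the dimension p grows (the points become nearly orthogonal).
for p in (3, 30, 300, 3000):
    X = uniform_sphere(200, p)
    G = X @ X.T
    off_diag = G[np.triu_indices(200, k=1)]
    print(p, round(off_diag.std(), 4))   # spread shrinks roughly like 1/sqrt(p)
```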
164

Essays on Economic Decision Making

Lee, Dongwoo 17 May 2019
This dissertation focuses on exploring individual and strategic decision problems in Economics. I take a different approach in each chapter to capture various aspects of decision problems. An overview of this dissertation is provided in Chapter 1. Chapter 2 studies an individual's decision making in extensive-form games under ambiguity when the individual faces ambiguity about an opponent's moves. In this chapter, a player follows Choquet Expected Utility preferences, since standard Expected Utility cannot account for situations of ambiguity. I raise the issue that dynamically inconsistent decision making can arise in extensive-form games with ambiguity. To cope with this issue, this chapter provides sufficient conditions to recover dynamic consistency. Chapter 3 analyzes strategic decision making in signaling games when a player makes an inference about hidden information from a behavioral hypothesis. The Hypothesis Testing Equilibrium (HTE) is proposed to provide an explanation for the player's posterior beliefs. The notion of HTE admits belief updates for all events, including zero-probability events. In addition, this chapter introduces well-motivated modifications of HTE. Finally, Chapter 4 examines a boundedly rational individual who considers selective attributes when making a decision. It is assumed that the individual focuses on a subset of attributes that stand out from a choice set. The selective attributes model can accommodate violations of the choice axioms of Independence of Irrelevant Alternatives (IIA) and Regularity. / Doctor of Philosophy / This dissertation focuses on exploring individual and strategic decision problems in Economics. I take a different approach in each chapter to capture various aspects of decision problems. An overview of this dissertation is provided in Chapter 1. Chapter 2 studies an individual's decision making in extensive-form games under ambiguity. Ambiguity describes the situation in which the information available to a decision maker is too imprecise to be summarized by a probability measure (Epstein, 1999). It is known that ambiguity causes dynamic inconsistency between ex-ante and interim decision making. This chapter provides sufficient conditions under which dynamic consistency is maintained. Chapter 3 analyzes strategic decision making in signaling games in which there are two players: an informed sender and an uninformed receiver. The sender has private information about his type, and the receiver makes an inference about the hidden information. This chapter suggests a notion of the Hypothesis Testing Equilibrium (HTE), which provides an alternative explanation for the receiver's beliefs. The idea of the HTE can be used as a refinement of Perfect Bayesian Equilibrium (PBE) in signaling games to cope with the known limitations of PBE. Finally, Chapter 4 examines a boundedly rational individual who considers only salient attributes when making a decision. The individual considers an attribute only when it stands out enough in a choice set. The selective attributes model can accommodate violations of the choice axioms of Independence of Irrelevant Alternatives (IIA) and Regularity.
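For readers unfamiliar with Choquet Expected Utility, the following is a minimal Python sketch of how the Choquet integral evaluates an act when beliefs are a non-additive capacity. The quadratic distortion used as the capacity and the toy payoffs are assumptions for illustration only, not the specification used in Chapter 2.

```python
import numpy as np

def choquet_eu(utilities, probs, distortion=lambda t: t ** 2):
    """Choquet expected utility of an act with state-wise utilities, using the
    capacity v(A) = distortion(P(A)). A convex distortion (here t^2) models
    ambiguity aversion; the identity distortion recovers ordinary expected utility."""
    utilities = np.asarray(utilities, dtype=float)
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(-utilities)                 # best outcomes first
    u_sorted = utilities[order]
    cum = np.cumsum(probs[order])                  # probability of the i best states
    v = distortion(cum)                            # capacity of the nested upper sets
    weights = np.diff(np.concatenate(([0.0], v)))  # v(A_i) - v(A_{i-1})
    return float(np.sum(u_sorted * weights))

# Ambiguity about the opponent's move: two states, equally likely on paper.
print(choquet_eu([10, 0], [0.5, 0.5]))                          # 2.5 < 5: pessimistic evaluation
print(choquet_eu([10, 0], [0.5, 0.5], distortion=lambda t: t))  # 5.0: standard expected utility
```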
165

Application of HTML/VRML to Manufacturing Systems Engineering

Krishnamurthy, Kasthuri Rangan 22 February 2001
Manufacturing systems are complex entities composed of people, processes, products, information systems and data, material processing, handling, and storage systems. Because of this complexity, systems must be modeled using a variety of views and modeling formalisms. In order to design and analyze manufacturing systems, the multiple views and models often need to be considered simultaneously. However, no single tool or computing environment currently exists that allows this to be done in an efficient and intelligible manner. New tools such as HTML and VRML present a promising approach for tackling these problems. They make possible environments where the different models can coexist and where mapping/linking between the models can be achieved. This research is concerned with developing a hybrid HTML/VRML environment for manufacturing systems modeling and analysis. An experiment was performed to compare this hybrid HTML/VRML modeling environment to the traditional database environment in order to answer typical design/analysis questions associated with manufacturing systems, and to establish the potential advantages of this approach. Analysis of the results obtained from the experiment indicated that the HTML/VRML approach might result in better understanding of a manufacturing system than the traditional database approach. / Master of Science
166

Hypothesis testing procedures for non-nested regression models

Bauer, Laura L. January 1987
Theory often indicates that a given response variable should be a function of certain explanatory variables yet fails to provide meaningful information as to the specific form of this function. To test the validity of a given functional form with sensitivity toward the feasible alternatives, a procedure is needed for comparing non-nested families of hypotheses. Two hypothesized models are said to be non-nested when one model is neither a restricted case nor a limiting approximation of the other. These non-nested hypotheses cannot be tested using conventional likelihood ratio procedures. In recent years, however, several new approaches have been developed for testing non-nested regression models. A comprehensive review of the procedures for the case of two linear regression models was presented. Comparisons between these procedures were made on the basis of asymptotic distributional properties, simulated finite-sample performance and computational ease. A modification to the Fisher and McAleer JA-test was proposed and its properties investigated. As a compromise between the JA-test and the Orthodox F-test, it was shown to have an exact non-null distribution. Its properties, both analytically and empirically derived, exhibited the practical worth of such an adjustment. A Monte Carlo study of the testing procedures involving non-nested linear regression models in small-sample situations (n ≤ 40) provided information necessary for the formulation of practical guidelines. It was evident that the modified Cox procedure, N̄, was most powerful for providing correct inferences. In addition, there was strong evidence to support the use of the adjusted J-test (AJ) (Davidson and MacKinnon's test with small-sample modifications due to Godfrey and Pesaran), the modified JA-test (NJ) and the Orthodox F-test for supplemental information. Under non-normal disturbances, similar results were obtained. An empirical study of spending patterns for household food consumption provided a practical application of the non-nested procedures in a large sample setting. The study provided not only an example of non-nested testing situations but also the opportunity to draw sound inferences from the test results. / Ph. D.
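A rough illustration, on assumed simulated data and using Python with NumPy and statsmodels, of the basic J-test idea that the JA, NJ and adjusted-J procedures refine: fit the rival model, then test whether its fitted values add explanatory power to the null model. The small-sample corrections and the modified Cox statistic studied in the dissertation are not shown here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 40                                    # small-sample setting, as in the Monte Carlo study
x = rng.normal(size=n)
z = 0.6 * x + rng.normal(size=n)          # competing, non-nested explanatory variable
y = 1.0 + 2.0 * x + rng.normal(size=n)    # data actually generated by the H0 model

X0 = sm.add_constant(x)                   # H0: y is a linear function of x
X1 = sm.add_constant(z)                   # H1: y is a linear function of z

# J-test of H0 against H1: augment the H0 regression with the fitted values
# from the H1 regression and test whether their coefficient is zero.
yhat1 = sm.OLS(y, X1).fit().fittedvalues
augmented = np.column_stack([X0, yhat1])
fit = sm.OLS(y, augmented).fit()
print("t-statistic on H1 fitted values:", fit.tvalues[-1])
print("p-value:", fit.pvalues[-1])        # a large p-value gives no evidence against H0
```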
167

Inference of nonparametric hypothesis testing on high dimensional longitudinal data and its application in DNA copy number variation and micro array data analysis

Zhang, Ke January 1900
Doctor of Philosophy / Department of Statistics / Haiyan Wang / High-throughput screening technologies have generated a huge amount of biological data in the last ten years. With the easy availability of array technology, researchers started to investigate biological mechanisms using experiments with more sophisticated designs that pose novel challenges to statistical analysis. We provide theory for robust statistical tests in three flexible models. In the first model, we consider hypothesis testing problems in which a large number of variables are observed repeatedly over time. A potential application is in tumor genomics, where an array comparative genomic hybridization (aCGH) study will be used to detect progressive DNA copy number changes in tumor development. In the second model, we consider hypothesis testing theory in a longitudinal microarray study when there are multiple treatments or experimental conditions. The tests developed can be used to detect treatment effects for a large group of genes and discover genes that respond to treatment over time. In the third model, we address a hypothesis testing problem that could arise when array data from different sources are to be integrated. We perform statistical tests by assuming a nested design. In all models, robust test statistics were constructed based on moment methods allowing unbalanced designs and arbitrary heteroscedasticity. The limiting distributions were derived under the nonclassical setting when the number of probes is large. The test statistics are not targeted at a single probe. Instead, we are interested in testing for a selected set of probes simultaneously. Simulation studies were carried out to compare the proposed methods with some traditional tests using linear mixed-effects models and generalized estimating equations. Interesting results obtained with the proposed theory in two cancer genomic studies suggest that the new methods are promising for a wide range of biological applications with longitudinal arrays.
168

Algoritmiese rangordebepaling van akademiese tydskrifte (Algorithmic ranking of academic journals)

Strydom, Machteld Christina 31 October 2007
Summary: There is a need for an objective measure to determine and compare the quality of academic publications. This research determined, from citation data, the influence or reaction generated by a publication, using an iterative algorithm that assigns weights to citations. In the Internet environment this approach is already applied with great success by, among others, the PageRank algorithm of the Google search engine. This and other algorithms from the Internet environment were studied in order to design an algorithm for academic articles. A variation of the PageRank algorithm was chosen that determines an Influence value. The algorithm was tested on case studies. The empirical study indicates that this variation reflects specialist researchers' intuitive judgement better than a mere count of citations. Abstract: Ranking of journals is often used as an indicator of quality, and is extensively used as a mechanism for determining promotion and funding. This research studied ways of extracting the impact, or influence, of a journal from citation data, using an iterative process that allocates a weight to the source of a citation. After evaluating and discussing with specialist researchers the characteristics that influence the quality and importance of research, a measure called the Influence factor was introduced, emulating the PageRank algorithm used by Google to rank web pages. The Influence factor can be seen as a measure of the reaction that was generated by a publication, based on the number of scientists who read and cited it. A good correlation was found between the rankings produced by the Influence factor and those given by specialist researchers. / Mathematical Sciences / M.Sc. (Operations Research)
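A minimal sketch, assuming Python with NumPy and a toy journal-level citation matrix, of the PageRank-style iteration the abstract describes: each citation is weighted by the standing of the citing journal, and the weights are recomputed until they stabilise. The damping factor, normalisation, and example matrix are illustrative assumptions, not the exact Influence factor defined in the thesis.

```python
import numpy as np

def influence_factor(C, d=0.85, tol=1e-10, max_iter=1000):
    """PageRank-style iterative ranking of journals from a citation-count matrix C,
    where C[i, j] is the number of citations from journal i to journal j.
    Each journal's outgoing citations are normalized, so a citation coming from a
    highly ranked journal carries more weight than one from a lowly ranked journal."""
    C = np.asarray(C, dtype=float)
    n = C.shape[0]
    out = C.sum(axis=1, keepdims=True)
    out[out == 0] = 1.0                       # journals citing nothing spread no weight
    P = C / out                               # row-normalized citation shares
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = (1 - d) / n + d * (r @ P)     # damped iteration, as in PageRank
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

# Toy citation matrix for four journals.
C = np.array([[0, 3, 1, 0],
              [2, 0, 4, 1],
              [0, 1, 0, 5],
              [1, 0, 2, 0]])
print(influence_factor(C).round(3))
```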
169

Neighborhood-oriented feature selection and classification of Duke's stages on colorectal cancer using high-density genomic data.

Peng, Liang January 1900
Master of Science / Department of Statistics / Haiyan Wang / The selection of relevant genes for classification of phenotypes for diseases with gene expression data has been extensively studied. Previously, most relevant gene selection was conducted on individual genes with limited sample sizes. Modern technology makes it possible to obtain microarray data with higher resolution of the chromosomes. Considering gene sets on an entire block of a chromosome rather than individual genes could help reveal important connections of relevant genes with the disease phenotypes. In this report, we consider feature selection and classification while taking into account the spatial location of probe sets in classification of Duke's stages B and C using DNA copy number data or gene expression data from colorectal cancers. A novel method for feature selection was presented. A chromosome was first partitioned into blocks after the probe sets were aligned along their chromosome locations. Then a test of interaction between Duke's stage and probe sets was conducted on each block of probe sets to select significant blocks. For each significant block, a new multiple comparison procedure was carried out to identify truly relevant probe sets while preserving the neighborhood location information of the probe sets. Support Vector Machine (SVM) and K-Nearest Neighbor (KNN) classification using the selected final probe sets was conducted for all samples. The Leave-One-Out Cross-Validation (LOOCV) estimate of accuracy is reported as an evaluation of the selected features. We applied the method to two large data sets, each containing more than 50,000 features. Excellent classification accuracy was achieved by the proposed procedure along with SVM or KNN for both data sets, even though classification of prognosis stages (Duke's stages B and C) is much more difficult than that for the normal or tumor types.
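A simplified sketch of the pipeline described above, assuming Python with NumPy, SciPy, and scikit-learn on a synthetic data set: probes are grouped into position-ordered blocks, each block is screened with a simple association test (a stand-in for the report's interaction test and its neighborhood-preserving multiple comparison procedure), and the retained probes feed a leave-one-out cross-validated SVM. The data, block width, and significance threshold are assumptions for the example.

```python
import numpy as np
from scipy.stats import f_oneway
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(3)
n, p, block_size = 60, 5000, 50             # samples, position-ordered probe sets, block width
X = rng.normal(size=(n, p))
y = rng.integers(0, 2, size=n)              # toy stage B vs C labels
X[y == 1, 100:150] += 0.8                   # one block of probes carries signal

# Step 1: partition position-ordered probes into blocks and test each block
# for association with the stage label.
selected = []
for start in range(0, p, block_size):
    block = slice(start, start + block_size)
    stat, pval = f_oneway(X[y == 0, block].ravel(), X[y == 1, block].ravel())
    if pval < 0.001:
        selected.extend(range(start, min(start + block_size, p)))

# Step 2: leave-one-out cross-validated SVM accuracy on the selected probes.
acc = cross_val_score(SVC(kernel="linear"), X[:, selected], y, cv=LeaveOneOut())
print(len(selected), "probes selected, LOOCV accuracy:", acc.mean().round(3))
```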
170

A Bayesian nonparametric approach for the two-sample problem / Uma abordagem bayesiana não paramétrica para o problema de duas amostras

Console, Rafael de Carvalho Ceregatti de 19 November 2018
In this work, we discuss the so-called two-sample problem (Pearson and Neyman, 1930) under a nonparametric Bayesian approach. Given X1, ..., Xn and Y1, ..., Ym, two independent i.i.d. samples generated from P1 and P2, respectively, the two-sample problem consists in deciding whether P1 and P2 are equal. Assuming a nonparametric prior, we propose an evidence index for the null hypothesis H0: P1 = P2 based on the posterior distribution of the distance d(P1, P2) between P1 and P2. This evidence index is easy to compute, has an intuitive interpretation, and can also be justified in the Bayesian decision-theoretic context. Further, in a Monte Carlo simulation study, our method presented good performance when compared with the well-known Kolmogorov-Smirnov test, the Wilcoxon test, and a recent testing procedure based on the Polya tree process proposed by Holmes et al. (2015). Finally, we applied our method to a data set of scale measurements from three different groups of patients who completed a questionnaire for Alzheimer's disease diagnosis.
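A rough sketch, assuming Python with NumPy, of the general idea of basing evidence on the posterior of d(P1, P2): here the nonparametric posterior is approximated by the Bayesian bootstrap (a limiting case of the Dirichlet-process posterior) and the distance is the Kolmogorov distance between weighted empirical CDFs. The prior, the distance, and the summary shown differ from the specific construction in the thesis and are chosen only to illustrate the mechanics.

```python
import numpy as np

rng = np.random.default_rng(4)

def posterior_distance_draws(x, y, n_draws=1000):
    """Bayesian-bootstrap approximation to the posterior of a distance between P1 and P2.
    Each draw re-weights the observed points with Dirichlet(1, ..., 1) weights and
    evaluates the Kolmogorov distance between the two weighted empirical CDFs."""
    grid = np.sort(np.concatenate([x, y]))
    dists = np.empty(n_draws)
    for b in range(n_draws):
        wx = rng.dirichlet(np.ones(len(x)))
        wy = rng.dirichlet(np.ones(len(y)))
        Fx = np.array([wx[x <= t].sum() for t in grid])
        Fy = np.array([wy[y <= t].sum() for t in grid])
        dists[b] = np.max(np.abs(Fx - Fy))
    return dists

x = rng.normal(0.0, 1.0, size=80)
y = rng.normal(0.3, 1.0, size=80)
d_post = posterior_distance_draws(x, y)
# A posterior of d(P1, P2) concentrating away from 0 is evidence against H0: P1 = P2.
print("posterior mean distance:", d_post.mean().round(3))
print("posterior 5% quantile  :", np.quantile(d_post, 0.05).round(3))
```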
