681

Multivariate semiparametric regression models for longitudinal data

Li, Zhuokai. January 2014.
Multiple-outcome longitudinal data are abundant in clinical investigations. For example, infections with different pathogenic organisms are often tested concurrently, and assessments are usually taken repeatedly over time. It is therefore natural to consider a multivariate modeling approach to accommodate the underlying interrelationship among the multiple longitudinally measured outcomes. This dissertation proposes a multivariate semiparametric modeling framework for such data. Relevant estimation and inference procedures as well as model selection tools are discussed within this modeling framework. The first part of this research focuses on the analytical issues concerning binary data. The second part extends the binary model to a more general situation for data from the exponential family of distributions. The proposed model accounts for the correlations across the outcomes as well as the temporal dependency among the repeated measures of each outcome within an individual. An important feature of the proposed model is the addition of a bivariate smooth function for the depiction of concurrent nonlinear and possibly interacting influences of two independent variables on each outcome. For model implementation, a general approach for parameter estimation is developed by using the maximum penalized likelihood method. For statistical inference, a likelihood-based resampling procedure is proposed to compare the bivariate nonlinear effect surfaces across the outcomes. The final part of the dissertation presents a variable selection tool to facilitate model development in practical data analysis. Using the adaptive least absolute shrinkage and selection operator (LASSO) penalty, the variable selection tool simultaneously identifies important fixed effects and random effects, determines the correlation structure of the outcomes, and selects the interaction effects in the bivariate smooth functions. Model selection and estimation are performed through a two-stage procedure based on an expectation-maximization (EM) algorithm. Simulation studies are conducted to evaluate the performance of the proposed methods. The utility of the methods is demonstrated through several clinical applications.
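The two-stage idea behind the adaptive LASSO step mentioned above can be sketched in a few lines. The following is a minimal illustration of the weighted-L1 construction only, not the dissertation's penalized-likelihood procedure for multivariate longitudinal models; the ridge initializer, the weights, and the tuning values are assumptions made for the sketch.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.array([2.0, -1.5, 0, 0, 1.0, 0, 0, 0, 0, 0])  # sparse truth
y = X @ beta + rng.normal(size=n)

# Stage 1: an initial ridge estimate supplies adaptive weights w_j = 1/|b_j|,
# so coefficients that look small get penalized more heavily.
b_init = Ridge(alpha=1.0).fit(X, y).coef_
w = 1.0 / (np.abs(b_init) + 1e-8)

# Stage 2: rescaling columns turns a plain LASSO into the weighted penalty
# sum_j w_j |beta_j|; fitting on X_j / w_j and rescaling back recovers beta.
X_w = X / w
fit = Lasso(alpha=0.1).fit(X_w, y)
beta_hat = fit.coef_ / w
print(np.round(beta_hat, 2))  # near-zero entries are dropped from the model
```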
682

A Logistic Regression Analysis of Utah Colleges Exit Poll Response Rates Using SAS Software

Stevenson, Clint W. 27 October 2006.
In this study I examine voter response at the interview level using a dataset of 7562 voter contacts (responses and nonresponses) from the 2004 Utah Colleges Exit Poll. In 2004, 4908 of the 7562 voters approached responded to the exit poll, for an overall response rate of 65 percent. Logistic regression is used to estimate the factors that contribute to the success or failure of each interview attempt. The model uses interviewer characteristics, voter characteristics (of both respondents and nonrespondents), and exogenous factors as independent variables. Voter characteristics such as race, gender, and age are strongly associated with response, and an interviewer's prior retail sales experience is associated with whether or not a voter decides to respond. The only exogenous factor associated with voter response is whether the interview occurred in the morning or afternoon.
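The kind of model the abstract describes can be illustrated outside SAS as well. Below is a minimal Python sketch of a logistic regression of interview response on voter, interviewer, and exogenous covariates; the variable names and the simulated data are illustrative stand-ins, not the thesis's actual dataset or SAS code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in for the exit-poll contact data.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "responded": rng.integers(0, 2, n),          # 1 = response, 0 = refusal
    "voter_age": rng.integers(18, 90, n),
    "voter_female": rng.integers(0, 2, n),
    "iv_sales_experience": rng.integers(0, 2, n),  # interviewer covariate
    "morning_shift": rng.integers(0, 2, n),        # exogenous factor
})

# Logistic regression of response vs. refusal on voter, interviewer,
# and exogenous covariates, analogous to the model in the abstract.
model = smf.logit(
    "responded ~ voter_age + voter_female + iv_sales_experience + morning_shift",
    data=df,
).fit()
print(model.summary())
# Coefficients are log-odds; np.exp(model.params) gives odds ratios.
```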
683

Exact Analysis of Exponential Two-Component System Failure Data

Zhang, Xuan. 01 1900.
A survival distribution is developed for exponential two-component systems that can survive as long as at least one of the two components functions. The two components are assumed to be initially independent and non-identical. If one of the two components fails (repair is impossible), the surviving component is subjected to a different failure rate due to the stress caused by the failure of the other.

In this thesis, we consider such an exponential two-component system failure model when the observed failure time data are (1) complete, (2) Type-I censored, (3) Type-I censored with partial information on component failures, (4) Type-II censored, and (5) Type-II censored with partial information on component failures. In each situation, we discuss the maximum likelihood estimates (MLEs) of the parameters, assuming exponentially distributed lifetimes. The exact distributions of the MLEs (whenever available) are then derived using the conditional moment generating function approach. Construction of confidence intervals for the model parameters is discussed using the exact conditional distributions (when available), asymptotic distributions, and two parametric bootstrap methods. The performance of these four types of confidence intervals, in terms of coverage probabilities, is assessed through Monte Carlo simulation studies. Finally, examples are presented to illustrate the methods of inference developed here.

In the case of Type-I and Type-II censored data, since there are no closed-form expressions for the MLEs, we present an iterative maximum likelihood estimation procedure for determining the MLEs of all the model parameters, and we carry out a Monte Carlo simulation study to examine the bias and variance of the MLEs.

In the case of Type-II censored data, since the exact distributions of the MLEs depend on the data, we discuss exact conditional confidence intervals and asymptotic confidence intervals for the unknown parameters by conditioning on the observed data.

Thesis: Doctor of Philosophy (PhD).
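The failure model itself is straightforward to simulate because the exponential distribution is memoryless. The sketch below is a minimal Monte Carlo illustration of the stress-induced rate change described above, not the thesis's estimation code; the rate values are arbitrary.

```python
import numpy as np

def system_lifetime(lam1, lam2, lam1_star, lam2_star, rng):
    """Simulate one two-component system lifetime.

    Components start with rates lam1, lam2; when one fails, the
    survivor's rate switches to lam*_j (the stress effect).
    """
    t1 = rng.exponential(1.0 / lam1)
    t2 = rng.exponential(1.0 / lam2)
    if t1 < t2:
        # Component 1 failed first; by memorylessness, the survivor's
        # remaining life restarts as exponential with the new rate.
        return t1 + rng.exponential(1.0 / lam2_star)
    else:
        return t2 + rng.exponential(1.0 / lam1_star)

rng = np.random.default_rng(2)
draws = [system_lifetime(1.0, 0.5, 2.0, 1.5, rng) for _ in range(100_000)]
print(np.mean(draws))  # Monte Carlo estimate of mean time to system failure
```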
684

Study of the octupole modes in the atomic nucleus of 156Gd: experimental search for tetrahedral symmetry

Sengele, Loic. 10 December 2014.
Geometrical symmetries play an important role in understanding the stability of any physical system. In nuclear structure, they are linked to the shape of the mean field used to describe the properties of atomic nuclei. In this thesis, we used predictions obtained from the nuclear mean-field Hamiltonian with the universal Woods-Saxon potential to study the effects of the so-called "high-rank" symmetries. These point-group symmetries lead to fourfold degeneracies of nuclear states. Tetrahedral symmetry is predicted to influence the stability of nuclei close to the tetrahedral magic numbers [Z,N] = [32, 40, 56, 64, 70, 90-94, 136]. We selected the rare-earth region close to the tetrahedral doubly magic nucleus 154Gd for our study. In this region there exist negative-parity structures that are poorly understood; tetrahedral symmetry, being a non-axial octupole deformation, breaks reflection symmetry and must produce negative-parity states. After a systematic study of the experimental properties of the nuclei in this region, we selected 156Gd as the object of our study of the octupole excitation modes, using reduced gamma-transition probabilities to discriminate between these modes. To this end, we performed three gamma-spectroscopy experiments at the ILL in Grenoble with the EXILL and GAMS detectors to measure the lifetimes and intensities of the gamma transitions from the candidate states. The analysis of our results shows that the tetrahedral shape, in particular, helps to explain the dipole transition probabilities. This result opens new experimental and theoretical perspectives.
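For reference, extracting a reduced transition probability from a measured lifetime uses the standard relation between the partial gamma-decay rate and B(σλ) (a textbook formula, not specific to this thesis); the partial rate follows from the measured level lifetime τ and the branching fraction b of the transition:

$$
T(\sigma\lambda;\, J_i \to J_f)
= \frac{8\pi(\lambda+1)}{\lambda\left[(2\lambda+1)!!\right]^{2}}\,
\frac{1}{\hbar}\left(\frac{E_\gamma}{\hbar c}\right)^{2\lambda+1}
B(\sigma\lambda;\, J_i \to J_f),
\qquad T = \frac{b}{\tau}.
$$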
685

Statistical analysis of clinical trial data using Monte Carlo methods

Han, Baoguang. 11 July 2014.
Indiana University-Purdue University Indianapolis (IUPUI). In medical research, data analysis often requires complex statistical methods for which no closed-form solutions are available. Under such circumstances, Monte Carlo (MC) methods have found many applications. In this dissertation, we propose several novel statistical models that rely on MC methods. The first part focuses on semicompeting risks data, in which a non-terminal event is subject to dependent censoring by a terminal event. Based on an illness-death multistate survival model, we propose flexible random effects models. Further, we extend our model to a joint modeling setting in which semicompeting risks data and repeated marker data are analyzed simultaneously. Since the proposed methods involve high-dimensional integrations, Bayesian Markov chain Monte Carlo (MCMC) methods are used for estimation. The use of Bayesian methods also facilitates the prediction of individual patient outcomes. The proposed methods are demonstrated in both simulation and case studies. The second part focuses on the re-randomization test, a nonparametric method that draws inferences solely from the randomization procedure used in the clinical trial. For this type of inference, a Monte Carlo method is often used to generate the null distribution of the treatment difference. However, an issue was recently discovered for trials in which subjects are randomized with unbalanced allocation to two treatments according to the minimization algorithm, a randomization procedure frequently used in practice: the null distribution of the re-randomization test statistic is not centered at zero, which compromises the power of the test. In this dissertation, we investigate this property of the re-randomization test and propose a weighted re-randomization method to overcome the issue. The proposed method is demonstrated through extensive simulation studies.
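The plain re-randomization test that the second part scrutinizes can be sketched compactly. The example below re-assigns labels by complete randomization, which sidesteps the minimization-specific centering issue the abstract describes; it illustrates the general Monte Carlo procedure, not the proposed weighted method, and the toy data are assumptions.

```python
import numpy as np

def rerandomization_test(y, treat, n_mc=10_000, seed=3):
    """Monte Carlo re-randomization test for a difference in means.

    y: outcomes; treat: 0/1 treatment labels from the trial.
    Labels are re-assigned at random (complete randomization) to build
    the null distribution of the observed treatment difference.
    """
    rng = np.random.default_rng(seed)
    y, treat = np.asarray(y, float), np.asarray(treat)
    obs = y[treat == 1].mean() - y[treat == 0].mean()
    null = np.empty(n_mc)
    for i in range(n_mc):
        perm = rng.permutation(treat)  # one re-randomization
        null[i] = y[perm == 1].mean() - y[perm == 0].mean()
    # Two-sided Monte Carlo p-value against the simulated null.
    return obs, (np.abs(null) >= abs(obs)).mean()

rng = np.random.default_rng(4)
y = np.r_[rng.normal(0.3, 1, 60), rng.normal(0.0, 1, 40)]
treat = np.r_[np.ones(60, int), np.zeros(40, int)]
print(rerandomization_test(y, treat))
```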
686

Modelling of input data uncertainty based on random set theory for evaluation of the financial feasibility of hydropower projects

Beisler, Matthias Werner. 24 August 2011.
The design of hydropower projects requires a comprehensive planning process in order to maximise exploitation of the existing hydropower potential as well as the future revenues of the plant. For this purpose, and to satisfy the approval requirements for a complex hydropower development, it is imperative at the planning stage that the conceptual design considers a wide range of influencing factors and ensures appropriate consideration of all related aspects. Since the majority of the technical and economic parameters required for detailed and final design cannot be precisely determined at early planning stages, crucial design parameters such as design discharge and hydraulic head have to be examined through an extensive optimisation process. One disadvantage inherent in commonly used deterministic analyses is the lack of objectivity in the selection of input parameters; moreover, it cannot be ensured that the entire existing parameter ranges and all possible parameter combinations are covered. Probabilistic methods use discrete probability distributions or parameter input ranges to cover the entire range of uncertainties resulting from the information deficit of the planning phase, and integrate them into the optimisation by means of an alternative calculation method. The investigated method assists with the mathematical assessment and integration of uncertainties into the rational economic appraisal of complex infrastructure projects. The assessment includes an exemplary verification of the extent to which random set theory can be utilised to determine the input parameters relevant to the optimisation of hydropower projects, and evaluates possible improvements with respect to the accuracy and suitability of the calculated results.
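The basic random-set mechanics, propagating focal intervals with probability masses through a design calculation to obtain lower and upper (belief/plausibility) bounds, can be sketched as follows. The input intervals, the simplified power formula, and the independence assumption used to combine masses are all illustrative assumptions, not the thesis's actual model.

```python
import itertools

# Minimal random-set (Dempster-Shafer style) sketch: each uncertain input
# is a set of focal intervals, each carrying a probability mass.
discharge = [((8.0, 12.0), 0.6), ((10.0, 16.0), 0.4)]   # m^3/s, illustrative
head      = [((45.0, 55.0), 0.7), ((40.0, 60.0), 0.3)]  # m, illustrative

def power_mw(q, h, eta=0.9, rho=1000.0, g=9.81):
    # Simplified hydropower output in MW: eta * rho * g * Q * H.
    return eta * rho * g * q * h / 1e6

bounds = []
for (q_iv, m_q), (h_iv, m_h) in itertools.product(discharge, head):
    # The power formula is monotone in both inputs, so the image of a
    # joint focal box is simply [f(lo, lo), f(hi, hi)].
    lo = power_mw(q_iv[0], h_iv[0])
    hi = power_mw(q_iv[1], h_iv[1])
    bounds.append(((lo, hi), m_q * m_h))  # joint mass assumes independence

threshold = 3.0  # MW, illustrative feasibility threshold
belief = sum(m for (lo, hi), m in bounds if lo >= threshold)  # lower bound
plaus  = sum(m for (lo, hi), m in bounds if hi >= threshold)  # upper bound
print(f"P(output >= {threshold} MW) lies in [{belief:.2f}, {plaus:.2f}]")
```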
687

Extremes of log-correlated random fields and the Riemann zeta function, and some asymptotic results for various estimators in statistics

Ouimet, Frédéric 05 1900 (has links)
No description available.
689

Regime fatigue: a cognitive-psychological model for identifying a socialized negativity effect in U.S. Senatorial and Gubernatorial elections from 1960-2008

Giles, Clark Andrew. 11 July 2014.
Indiana University-Purdue University Indianapolis (IUPUI). This research project aims to isolate and measure the influence of "regime fatigue" on gubernatorial and senatorial elections in the United States in which no incumbent is running. The research begins with a review of the negativity effect and its potential influence on schema-based impression formation by voters. Applicable literature on social clustering and homophily is then highlighted, as it provides the vehicle through which the negativity effect disseminates across collections of socially clustered individuals and ultimately contributes to changing tides of public opinion, even though party identification can remain relatively fixed in the aggregate.
690

Revision of an artificial neural network enabling industrial sorting

Malmgren, Henrik. January 2019.
Convolutional artificial neural networks can be applied to image-based object classification to inform automated actions, such as the handling of objects on a production line. This thesis describes the theoretical background for creating a classifier and explores the effects of introducing a set of relatively recent techniques to an existing ensemble of classifiers in use for an industrial sorting system. The findings indicate that it is important to use spatial dropout regularization for high-resolution image inputs, together with an optimizer configuration that has good convergence properties. The findings also demonstrate ensemble classifiers being effectively consolidated into unified models using the distillation technique. An analogous arrangement with optimization against multiple output targets, incorporating additional information, showed accuracy gains comparable to ensembling. For use of the classifier on test data whose statistics differ from those of the training dataset, the results indicate that augmenting the input data during classifier creation helps performance, but in the present case augmentation would likely need to be guided by information about the distribution shift to have a sufficiently positive impact to enable a practical application. For future development, I suggest updated architectures, automated hyperparameter search, and leveraging the plentiful unlabeled data potentially available from production lines.
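The distillation technique mentioned above can be sketched as a training objective. This is a generic Hinton-style soft-target loss, assumed for illustration rather than taken from the thesis; the temperature, mixing weight, and toy tensors are placeholders.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Soft-target distillation loss.

    Blends KL divergence to the teacher's temperature-softened output
    distribution with ordinary cross-entropy on the hard labels.
    T and alpha are illustrative hyperparameters.
    """
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # T^2 factor keeps gradient magnitudes comparable
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage: in a consolidation setting, teacher_logits could be, e.g.,
# the averaged logits of the existing ensemble being distilled.
student_logits = torch.randn(8, 5, requires_grad=True)
teacher_logits = torch.randn(8, 5)
labels = torch.randint(0, 5, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
print(float(loss))
```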
