  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Modeling the Performance of a Baseball Player's Offensive Production

Smith, Michael Ross 09 March 2006 (has links) (PDF)
This project addresses the problem of comparing the offensive abilities of players from different eras in Major League Baseball (MLB). We study players through an overall offensive summary statistic that is highly correlated with run scoring, known as the Berry Value. We build an additive model that estimates each player's innate ability, the effect of the relative level of competition in each season, and the effect of age on performance using piecewise age curves. Using hierarchical Bayes methodology with Gibbs sampling, we model each of these effects for each individual. The results of the hierarchical Bayes model allow us to link players from different eras and to rank them across the modern era of baseball (1900-2004) on the basis of innate overall offensive ability. The top of the rankings, led by Babe Ruth, Lou Gehrig, and Stan Musial, includes many Hall of Famers and some of the most productive offensive players in the history of the game. We also find that trends in overall offensive ability in MLB track various rule and cultural changes. Based on the model, MLB is currently at a high level of run production relative to the levels observed over the last century.
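The additive structure described in the abstract (player ability plus season effect, fitted by Gibbs sampling) can be sketched with a minimal conjugate Gibbs sampler. This is only an illustration of the idea, not the dissertation's model: all names are hypothetical, the age curves are omitted, and the variances `sigma2` (observation) and `tau2` (prior) are assumed known.

```python
import random
import statistics

def gibbs_hierarchical(y, n_iter=2000, sigma2=1.0, tau2=10.0, seed=0):
    """Gibbs sampler for y[i][j] ~ N(mu_i + beta_j, sigma2), with
    independent priors mu_i ~ N(0, tau2) and beta_j ~ N(0, tau2).
    mu_i plays the role of innate ability, beta_j the season effect.
    Returns the posterior-mean estimate of each mu_i."""
    rng = random.Random(seed)
    n_players, n_seasons = len(y), len(y[0])
    mu = [0.0] * n_players
    beta = [0.0] * n_seasons
    mu_draws = [[] for _ in range(n_players)]
    for it in range(n_iter):
        # Update each ability given season effects (conjugate normal update).
        for i in range(n_players):
            resid = [y[i][j] - beta[j] for j in range(n_seasons)]
            prec = n_seasons / sigma2 + 1.0 / tau2
            mean = (sum(resid) / sigma2) / prec
            mu[i] = rng.gauss(mean, (1.0 / prec) ** 0.5)
        # Update each season effect given abilities.
        for j in range(n_seasons):
            resid = [y[i][j] - mu[i] for i in range(n_players)]
            prec = n_players / sigma2 + 1.0 / tau2
            mean = (sum(resid) / sigma2) / prec
            beta[j] = rng.gauss(mean, (1.0 / prec) ** 0.5)
        if it >= n_iter // 2:  # discard the first half as burn-in
            for i in range(n_players):
                mu_draws[i].append(mu[i])
    return [statistics.mean(d) for d in mu_draws]
```

Because every player shares the same season effects, differences in the estimated abilities are identified even though individual levels are only softly pinned down by the prior.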
62

Simulation Studies Of Self-assembly And Phase Diagram Of Amphiphilic Molecules

Bourov, Geuorgui Kostadinov 01 January 2005 (has links)
The aim of this dissertation is to investigate self-assembled structures and the phase diagram of amphiphilic molecules of diverse geometric shapes using a number of different computer simulation methods. The semi-realistic coarse-grained model, used extensively for simulation of polymers and surfactant molecules, is adopted in an off-lattice approach to study how the geometric structure of amphiphiles affects their aggregation properties. The results of the simulations show that the model system's behavior is consistent with theoretical predictions, experiments, and lattice simulation models. We demonstrate that modifying the geometry of the molecules alters the self-assembled aggregates in close agreement with theoretical predictions. In several two- and three-dimensional off-lattice Brownian Dynamics simulations, the influence of the shape of the amphiphilic molecules on the size and form of the aggregates is studied systematically. Model phospholipid molecules, with two hydrophobic chains connected to one hydrophilic head group, are simulated and the formation of stable bilayers is observed. In addition, mixtures of amphiphiles with diverse structures, which are of great practical importance, are studied under different mixing ratios and molecular structures. We find that in several systems with Poisson-distributed chain lengths, the effect on the aggregate size distribution is negligible compared to that of the pure amphiphilic system with the mean length of the Poisson distribution. The phase diagrams of different amphiphilic molecular structures are investigated in separate simulations by employing the Gibbs Ensemble Monte Carlo method with an implemented configurational-bias technique.
The computer simulations of the above-mentioned amphiphilic systems lie in an area where physics, biology and chemistry are closely connected, and advances in applications require new theoretical, experimental and simulation methods for a better understanding of self-assembly properties. The simulation results obtained demonstrate the connection between the structure of amphiphilic molecules and the properties of their thermodynamically stable aggregates, and thus build a foundation for many applications of the remarkable phenomenon of amphiphilic self-assembly in the area of nanotechnology.
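Gibbs Ensemble Monte Carlo and configurational-bias moves both rest on the Metropolis accept/reject rule. The following sketch shows only that generic rule on a one-dimensional toy potential; it is not the coarse-grained amphiphile model of the dissertation, and all names are illustrative.

```python
import math
import random

def metropolis(energy, x0, step, n_steps, beta, rng):
    """Generic Metropolis sampler: propose a uniform displacement and
    accept with probability min(1, exp(-beta * dE)). This is the same
    accept/reject rule that underlies Gibbs Ensemble Monte Carlo."""
    x, e = x0, energy(x0)
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        e_new = energy(x_new)
        # Accept downhill moves always, uphill moves with Boltzmann weight.
        if e_new <= e or rng.random() < math.exp(-beta * (e_new - e)):
            x, e = x_new, e_new
        samples.append(x)  # rejected moves repeat the current state
    return samples
```

For a harmonic potential U(x) = x^2/2 at inverse temperature beta, the chain samples the Boltzmann distribution, a Gaussian with variance 1/beta, which gives a quick sanity check of the implementation.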
63

Gibbs Phenomenon for Fourier-Legendre Series / Gibbs fenomen för Fourier-Legendre serier

Andersson Svendsen, Joakim January 2023 (has links)
In this thesis, the main objective is to study the presence of the Gibbs phenomenon and the Gibbs constant in Fourier-Legendre series. The Gibbs phenomenon is a well-known consequence of approximating functions that have points of discontinuity by their Fourier series. Consequently, the initial focus is to examine Fourier series and the occurrence of the Gibbs phenomenon in this context. Next, we delve into Legendre polynomials, showing that their orthogonality on [−1, 1] allows functions to be expanded in a Fourier-Legendre series. We then explore the Gibbs phenomenon for Fourier-Legendre series. The findings confirm the existence of the Gibbs phenomenon for Fourier-Legendre series; most notably, the values of the error seem to converge to the same number as for Fourier series, namely the Gibbs constant.
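The overshoot in the classical Fourier case is easy to check numerically. The sketch below uses the standard square-wave example (not the functions treated in the thesis): the first peak of the partial sums approaches (2/pi) * Si(pi) ≈ 1.17898, a persistent overshoot of roughly 8.95% of the unit jump, regardless of how many terms are taken.

```python
import math

def square_wave_partial_sum(x, n_terms):
    """Fourier partial sum of the square wave sgn(sin x):
    S_N(x) = (4/pi) * sum_{m=0}^{N-1} sin((2m+1)x) / (2m+1)."""
    return (4.0 / math.pi) * sum(
        math.sin((2 * m + 1) * x) / (2 * m + 1) for m in range(n_terms)
    )

def overshoot_peak(n_terms, n_grid=4000):
    """Height of the first overshoot peak just to the right of the
    jump at x = 0, located near x = pi / (2 * n_terms)."""
    xs = [0.2 * (i + 1) / n_grid for i in range(n_grid)]
    return max(square_wave_partial_sum(x, n_terms) for x in xs)
```

Increasing `n_terms` narrows the overshoot region but does not reduce its height, which is exactly the Gibbs phenomenon.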
64

The Impact of Two-Rate Taxes on Construction in Pennsylvania

Plassmann, Florenz 10 July 1997 (has links)
The evaluation of policy-relevant economic research requires an ethical foundation. Classical liberal theory provides the requisite foundation for this dissertation, which uses various econometric tools to estimate the effects of shifting some of the property tax from buildings to land in 15 cities in Pennsylvania. Economic theory predicts that such a shift will lead to higher building activity. However, this prediction has so far received little empirical support. The first part of the dissertation examines the effect of the land-building tax differential on the number of building permits issued in 219 municipalities in Pennsylvania between 1972 and 1994. For such count data, a conventional analysis based on a continuous distribution leads to incorrect results; a discrete maximum likelihood analysis with a negative binomial distribution is more appropriate. Two models, a non-linear and a fixed-effects model, are developed to examine the influence of the tax differential. Both models suggest that this influence is positive, albeit not statistically significant. Application of maximum likelihood techniques is computationally cumbersome if the assumed distribution of the data cannot be written in closed form. The negative binomial distribution is the only easily applied discrete distribution whose variance is larger than its mean, although it might not be the best approximation of the true distribution of the data. The second part of the dissertation uses a Markov chain Monte Carlo method to examine the influence of the tax differential on the number of building permits, under the assumption that building permits are generated by a Poisson process whose parameter varies lognormally. Contrary to the analysis in the first part, the tax is shown to have a strong and significantly positive impact on the number of permits.
The third part of the dissertation uses a fixed-effects weighted least squares method to estimate the effect of the tax differential on the value per building permit. The tax coefficient is not significantly different from zero. Still, the overall impact of the tax differential on the total value of construction is shown to be positive and statistically significant. / Ph. D.
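The overdispersion property that motivates the negative binomial choice for the permit counts (variance exceeding the mean) can be verified directly from the pmf. A minimal sketch, using a mean/overdispersion parametrization that may differ from the one in the dissertation:

```python
import math

def neg_binomial_logpmf(y, mean, alpha):
    """Log-pmf of a negative binomial count y with the given mean and
    overdispersion alpha, so that Var(Y) = mean + alpha * mean**2.
    For alpha -> 0 this collapses to the equidispersed Poisson case."""
    r = 1.0 / alpha            # "size" parameter of the NB distribution
    p = r / (r + mean)         # success probability
    return (math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
            + r * math.log(p) + y * math.log(1.0 - p))
```

Summing the pmf over a long range recovers total mass one, the stated mean, and a variance of `mean + alpha * mean**2`, i.e. strictly larger than the mean whenever `alpha > 0`.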
65

A Random-Linear-Extension Test Based on Classic Nonparametric Procedures

Cao, Jun January 2009 (has links)
Most distribution-free nonparametric methods depend on the ranks or orderings of the individual observations. This dissertation develops methods for the situation when only partial information about the ranks is available. A random-linear-extension exact test and an empirical version of the random-linear-extension test are proposed as a new way to compare groups of data with partial orders. The basic computational procedure is to generate all possible permutations constrained by the known partial order, using a randomization method similar in nature to multiple imputation. This random-linear-extension test can be implemented simply by using a Gibbs sampler to generate a random sample of complete orderings. Given a complete ordering, standard nonparametric methods, such as the Wilcoxon rank-sum test, can be applied, and the corresponding test statistics and rejection regions can be calculated. As a direct result of our new method, a single p-value is replaced by a distribution of p-values. This is related to recent work on fuzzy p-values, introduced by Geyer and Meeden in Statistical Science in 2005. A special case is the comparison of two groups when only two objects can be compared at a time. Three matching schemes (random, ordered, and reverse matching) are introduced and compared with one another. The results described in this dissertation provide some surprising insights into the statistical information in partial orderings. / Statistics
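The core computational idea, imputing complete orderings consistent with the partial order and applying a standard rank statistic to each, can be sketched as follows. This sketch uses a naive random topological sort rather than the dissertation's Gibbs sampler (the naive draw is not exactly uniform over linear extensions), and all names are illustrative.

```python
import random

def random_linear_extension(items, less_than, rng):
    """Draw one linear extension of a partial order, given as a set of
    (smaller, larger) pairs, by repeatedly picking a random minimal
    element. Not exactly uniform over extensions; a Gibbs sampler
    would be used to target the uniform distribution."""
    remaining = set(items)
    order = []
    while remaining:
        minimal = [x for x in remaining
                   if not any((y, x) in less_than for y in remaining)]
        pick = rng.choice(sorted(minimal))
        order.append(pick)
        remaining.remove(pick)
    return order

def rank_sum(order, group_a):
    """Wilcoxon rank-sum statistic: sum of the ranks (1-based) that the
    items of group A occupy in the imputed complete ordering."""
    return sum(rank + 1 for rank, item in enumerate(order)
               if item in group_a)
```

Repeating the draw many times yields a distribution of rank-sum statistics (and hence of p-values) instead of a single value, mirroring the fuzzy p-value viewpoint.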
66

Thermodynamics on Utilization of Solid Phases in Refining Slags toward Environmental-friendly Steelmaking / 環境調和型の製鋼に向けた精錬スラグ中固相の積極利用に関する熱力学

Saito, Keijiro 25 March 2024 (has links)
Kyoto University / New-system doctoral course / Doctor of Energy Science / Degree No. Kō 25403 / Ene-Haku No. 482 / 新制||エネ||90 (University Library) / Department of Energy Application Science, Graduate School of Energy Science, Kyoto University / (Chief examiner) Associate Professor 長谷川 将克, Professor 土井 俊哉, Professor 藤本 仁 / Eligible under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Energy Science / Kyoto University / DFAM
67

Modélisation de la dispersion à grande échelle : évolution de l'aire de répartition passée et future du hêtre commun (Fagus sylvatica) en réponse aux changements climatiques / Dispersal modelling at large scale : evolution of past and future European beech distribution (Fagus sylvatica) in response to climate changes

Saltre, Frédérik 14 December 2010 (has links)
Le changement climatique actuel est tellement rapide que la plupart des espèces ligneuses tempérées européennes telles que Fagus sylvatica risquent de ne pouvoir s'adapter, ni migrer suffisamment pour répondre à ce changement. La plupart des modèles simulant l'aire de répartition potentielle de la végétation en fonction du climat considèrent une dispersion « illimitée » ou « nulle » ce qui rend difficile l'évaluation de l'importance de la dispersion par rapport au climat dans la dynamique de la végétation. Ce travail de thèse a pour objectif d'intégrer à un modèle d'aire de répartition basé sur les processus (Phenofit) un modèle phénoménologique de dispersion (modèle de dispersion de Gibbs) dans le but de dissocier l'effet du climat de l'effet de la dispersion dans la réponse de Fagus sylvatica aux changements climatiques en Europe, (i) pendant la recolonisation postglaciaire de 12000 ans BP à l'actuel et (ii) pendant le 21ème siècle. Nos résultats montrent un fort impact de la dispersion associé à un effet également important de la localisation des refuges glaciaires sur la recolonisation postglaciaire du hêtre comparé à l'effet du climat de 12000 ans BP à l'actuel. En revanche, les capacités de dispersion du hêtre ne lui permettent pas d'occuper la totalité de son aire potentielle notamment au nord de l'Europe d'ici 2100. Cette faible colonisation vers le nord de l'Europe associée à de fortes extinctions au sud de son aire de répartition causées par l'augmentation du stress hydrique conduit à une diminution drastique de l'aire de répartition du hêtre d'ici la fin du 21ème siècle. / Current climate change is so fast that some temperate tree species, like Fagus sylvatica, may be unable to adapt or migrate fast enough to track their climatic niche.
Most models simulating the potential distribution of vegetation as a function of climate consider unlimited or null dispersal, which makes it difficult to assess the importance of dispersal relative to climate in the dynamics of the vegetation. In this thesis, we integrate into a process-based species distribution model (Phenofit) a phenomenological model of dispersal (a Gibbs-based model) in order to disentangle the effects of climate and dispersal in the response of Fagus sylvatica to climate change in Europe, (i) during the postglacial recolonization from 12000 years BP to the present and (ii) during the 21st century. Our results show a strong impact of dispersal, together with a strong effect of the location of glacial refugia, on the postglacial recolonization of beech, compared to the effect of climate from 12000 years BP to the present. Nevertheless, beech dispersal abilities are not sufficient to allow the colonization of newly suitable areas in northern Europe by 2100. This low colonization rate in northern Europe, in addition to a high extinction rate in southern Europe due to increasing drought, leads to a drastic reduction of the beech distribution by the end of the 21st century.
68

Estimation et sélection de modèle pour le modèle des blocs latents / Estimation and model selection for the latent block model

Brault, Vincent 30 September 2014 (has links)
Le but de la classification est de partager des ensembles de données en sous-ensembles les plus homogènes possibles, c'est-à-dire que les membres d'une classe doivent plus se ressembler entre eux qu'aux membres des autres classes. Le problème se complique lorsque le statisticien souhaite définir des groupes à la fois sur les individus et sur les variables. Le modèle des blocs latents définit une loi pour chaque croisement de classe d'objets et de classe de variables, et les observations sont supposées indépendantes conditionnellement au choix de ces classes. Toutefois, il est impossible de factoriser la loi jointe des labels empêchant le calcul de la logvraisemblance et l'utilisation de l'algorithme EM. Plusieurs méthodes et critères existent pour retrouver ces partitions, certains fréquentistes, d'autres bayésiens, certains stochastiques, d'autres non. Dans cette thèse, nous avons d'abord proposé des conditions suffisantes pour obtenir l'identifiabilité. Dans un second temps, nous avons étudié deux algorithmes proposés pour contourner le problème de l'algorithme EM : VEM de Govaert et Nadif (2008) et SEM-Gibbs de Keribin, Celeux et Govaert (2010). En particulier, nous avons analysé la combinaison des deux et mis en évidence des raisons pour lesquelles les algorithmes dégénèrent (terme utilisé pour dire qu'ils renvoient des classes vides). En choisissant des lois a priori judicieuses, nous avons ensuite proposé une adaptation bayésienne permettant de limiter ce phénomène. Nous avons notamment utilisé un échantillonneur de Gibbs dont nous proposons un critère d'arrêt basé sur la statistique de Brooks-Gelman (1998). Nous avons également proposé une adaptation de l'algorithme Largest Gaps (Channarond et al. (2012)). En reprenant leurs démonstrations, nous avons démontré que les estimateurs des labels et des paramètres obtenus sont consistants lorsque le nombre de lignes et de colonnes tendent vers l'infini. 
De plus, nous avons proposé une méthode pour sélectionner le nombre de classes en ligne et en colonne dont l'estimation est également consistante à condition que le nombre de ligne et de colonne soit très grand. Pour estimer le nombre de classes, nous avons étudié le critère ICL (Integrated Completed Likelihood) dont nous avons proposé une forme exacte. Après avoir étudié l'approximation asymptotique, nous avons proposé un critère BIC (Bayesian Information Criterion) puis nous conjecturons que les deux critères sélectionnent les mêmes résultats et que ces estimations seraient consistantes ; conjecture appuyée par des résultats théoriques et empiriques. Enfin, nous avons comparé les différentes combinaisons et proposé une méthodologie pour faire une analyse croisée de données. / Classification aims at partitioning data sets into subsets that are as homogeneous as possible: the observations in a class are more similar to one another than to the observations of other classes. The problem is compounded when the statistician wants to obtain a cross-classification of the individuals and the variables. The latent block model defines a distribution for each crossing of an object class and a variable class, and observations are assumed to be independent conditionally on the choice of these classes. However, factorizing the joint distribution of the labels is impossible, obstructing the calculation of the log-likelihood and the use of the EM algorithm. Several methods and criteria exist to find these partitions, some frequentist, some Bayesian, some stochastic. In this thesis, we first proposed sufficient conditions to obtain identifiability of the model. In a second step, we studied two algorithms proposed to circumvent the problem with the EM algorithm: the VEM algorithm (Govaert and Nadif (2008)) and the SEM-Gibbs algorithm (Keribin, Celeux and Govaert (2010)). In particular, we analyzed the combination of both and highlighted why the algorithms degenerate (meaning that they return empty classes).
By choosing judicious priors, we then proposed a Bayesian adaptation to limit this phenomenon. In particular, we used a Gibbs sampler and proposed a stopping criterion based on the Brooks-Gelman statistic (1998). We also proposed an adaptation of the Largest Gaps algorithm (Channarond et al. (2012)). Adapting their proofs, we showed that the label and parameter estimators obtained are consistent when the numbers of rows and columns tend to infinity. Furthermore, we proposed a method to select the number of classes in rows and columns; the resulting estimate is also consistent when the numbers of rows and columns are very large. To estimate the number of classes, we studied the ICL criterion (Integrated Completed Likelihood), for which we derived an exact form. After studying the asymptotic approximation, we proposed a BIC criterion (Bayesian Information Criterion), and we conjecture that the two criteria select the same results and that these estimates are consistent, a conjecture supported by theoretical and empirical results. Finally, we compared the different combinations and proposed a methodology for co-clustering analysis.
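One conditional step of a Gibbs/SEM-Gibbs scheme for a Bernoulli latent block model can be sketched as follows. This is a simplified illustration with known block parameters `pi[k][l]`, not the authors' implementation; in SEM-Gibbs the parameters would be re-estimated between sweeps, and a symmetric update would resample the column labels.

```python
import math
import random

def gibbs_sweep_rows(x, col_labels, pi, rng):
    """One Gibbs update of the row labels of a binary matrix x under a
    Bernoulli latent block model: each row label is drawn from its
    full conditional given the column labels and block parameters."""
    n, K = len(x), len(pi)
    labels = []
    for i in range(n):
        # Log-probability of row i under each candidate row class k.
        logp = [0.0] * K
        for k in range(K):
            for j, xij in enumerate(x[i]):
                p = pi[k][col_labels[j]]
                logp[k] += math.log(p) if xij else math.log(1.0 - p)
        # Sample k proportionally to exp(logp), stabilized by the max.
        mx = max(logp)
        w = [math.exp(v - mx) for v in logp]
        r = rng.random() * sum(w)
        acc, k = 0.0, 0
        for k, wk in enumerate(w):
            acc += wk
            if r <= acc:
                break
        labels.append(k)
    return labels
```

On a clean planted block structure, a single sweep already recovers the row partition, which is a convenient sanity check before alternating it with column updates and parameter draws.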
69

Modelo bayesiano para dados de sobrevivência com riscos semicompetitivos baseado em cópulas / Bayesian model for survival data with semicompeting risks based on copulas

Patiño, Elizabeth González 23 March 2018 (has links)
Motivados por um conjunto de dados de pacientes com insuficiência renal crônica (IRC), propomos uma nova modelagem bayesiana que envolve cópulas da família Arquimediana e um modelo misto para dados de sobrevivência com riscos semicompetitivos. A estrutura de riscos semicompetitivos é bastante comum em estudos clínicos em que dois eventos são de interesse, um intermediário e outro terminal, de forma tal que a ocorrência do evento terminal impede a ocorrência do intermediário, mas não vice-versa. Nesta modelagem provamos que a distribuição a posteriori sob a cópula de Clayton é própria. Implementamos os algoritmos de dados aumentados e amostrador de Gibbs para a inferência bayesiana, assim como os critérios de comparação de modelos: LPML, DIC e BIC. Realizamos um estudo de simulação para avaliar o desempenho da modelagem e finalmente aplicamos a metodologia proposta para analisar os dados dos pacientes com IRC, além de outros de pacientes que receberam transplante de medula óssea. / Motivated by a dataset of patients with chronic kidney disease (CKD), we propose a new Bayesian model combining an Archimedean copula with a mixed model for survival data with semicompeting risks. The structure of semicompeting risks appears frequently in clinical studies where two types of events are involved, a non-terminal and a terminal one, such that the occurrence of the terminal event precludes the occurrence of the non-terminal event but not vice versa. In this work we prove that the posterior distribution is proper when the Clayton copula is used. We implement the data augmentation algorithm and Gibbs sampling for Bayesian inference, as well as the Bayesian model selection criteria LPML, DIC and BIC. We carry out a simulation study to assess the model's performance, and finally our methodology is illustrated with the chronic kidney disease study as well as with data from bone marrow transplant patients.
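Sampling from a Clayton copula, the family for which the abstract states the posterior is proven proper, is straightforward via the frailty (Marshall-Olkin) construction, since the Clayton generator is the Laplace transform of a Gamma variable. A hedged sketch, with a parametrization assumed for illustration rather than taken from the thesis:

```python
import random

def clayton_sample(theta, rng):
    """One draw (u, v) from a bivariate Clayton copula, theta > 0.
    Marshall-Olkin construction: draw a Gamma(1/theta) frailty W, then
    set U = (1 + E/W)**(-1/theta) for independent Exp(1) variables E.
    Dependence grows with theta; Kendall's tau = theta / (theta + 2)."""
    w = rng.gammavariate(1.0 / theta, 1.0)
    return tuple((1.0 + rng.expovariate(1.0) / w) ** (-1.0 / theta)
                 for _ in range(2))
```

A quick empirical check is to estimate Kendall's tau from concordant and discordant pairs; for theta = 2 it should be close to 0.5, while the marginals stay uniform on (0, 1).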
70

Gibbs Measures for Models on Lines and Trees / Medidas de Gibbs para modelos em retas e árvores

Endo, Eric Ossami 31 July 2018 (has links)
In this thesis we study various properties of spin models, in particular the Ising and Dyson models. We study the stability of the phase transition of the nearest-neighbor ferromagnetic Ising model on the Cayley tree when we add a perturbation to the critical external field that becomes weaker far from the root. We also study the relation between g-measures and Gibbs measures, showing that the Dyson model at sufficiently low temperature is not a g-measure. Counting contours on trees is also studied: we characterize the trees that admit infinitely many contours of a fixed size surrounding a vertex, and we compare various definitions of contours. We also study the measures of spatial Gibbs random graphs and their local convergence. / Nesta tese estudamos diversas propriedades dos modelos de spins, em particular, os modelos de Ising e Dyson. Estudamos a estabilidade da transição de fase no modelo de Ising ferromagnético de primeiros vizinhos quando adicionamos uma perturbação no campo externo crítico pela qual se torna mais fraca ao estar distante da raiz da árvore de Cayley. Estudamos a relação entre g-medidas e medidas de Gibbs, mostrando que a medida de Gibbs do modelo de Dyson a temperaturas suficientemente baixas não é uma g-medida. Também estudamos contagem de contornos em árvores, mostramos uma caracterização das árvores que possuem um número infinito de contornos de um tamanho fixo envolvendo um vértice, e comparamos entre diversas definições de contornos. Estudamos também as medidas de grafos aleatórios spatial Gibbs, e suas convergências locais.
