141

Equidistribution and L-functions in number theory.

Houde, Pierre January 1973 (has links)
No description available.
142

Bonding properties of coordinated polyhedra in molecules and crystals

Hill, Frances Cull 04 October 2006 (has links)
Force constants and electron density distributions in coordination polyhedra in molecules and crystals are modeled using Hartree-Fock molecular orbital methods. Model bond-stretching force constants calculated for coordination polyhedra in a series of nitride, oxide and sulfide molecules are ~10-20% larger than those obtained with spectroscopic methods. Well-developed correlations obtain between the force constants and minimum energy bond lengths, effective nuclear charges and polyhedral compressibilities of molecules and crystals. Model electron density distributions calculated for a large number of molecules with MO_n (n = 1, 2, 3, 4 or 6) coordination polyhedra show that the MO bonds of a given type vary in a regular way with the value of the electron density at bond critical points, bonded radii and the curvatures of the electron density. The bonded interactions in the polyhedra are examined in terms of criteria set forth by Bader and Essen (1984) and Cremer and Kraka (1984). / Ph. D.
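As a generic illustration of the central quantity here (a bond-stretching force constant obtained as the curvature of an energy curve at the equilibrium bond length), the following Python sketch fits a parabola near the minimum of a toy Morse potential standing in for a computed Hartree-Fock E(r) curve; all parameter values are illustrative, not taken from the thesis:

```python
import numpy as np

# Toy Morse potential standing in for a computed E(r) curve; parameters are illustrative.
D_e, a, r_e = 5.0, 2.0, 1.6            # well depth, curve width, equilibrium bond length
E = lambda r: D_e * (1.0 - np.exp(-a * (r - r_e))) ** 2

# Sample the curve near its minimum, fit a parabola, and read off k = d2E/dr2 at r_e.
r = np.linspace(r_e - 0.02, r_e + 0.02, 41)
c2 = np.polyfit(r - r_e, E(r), 2)[0]   # coefficient of (r - r_e)^2
k = 2.0 * c2
print(f"force constant k ~ {k:.2f} (analytic Morse value 2*D_e*a^2 = {2 * D_e * a**2:.2f})")
```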
143

Variances of some truncated distributions for various points of truncation

Hayles, George Carlton 30 October 2008 (has links)
The purpose of this study is to examine variances in the case of distributions obtained by truncating a given distribution at various points. In particular, the truncated distributions are restricted to nested increasing intervals, and the question is posed whether the variances of these distributions are monotonically increasing. The answer to this question is relevant to the use of conditional information for purposes of estimation and prediction. Several tables are presented in the thesis which provide evidence of the property of monotonic variance for nested increasing intervals of truncation in the case of univariate distributions. The Monte Carlo procedure is used to determine a table of standard deviations for the standard normal distribution with the same points of truncation reported by Clark (2). Clark's table is given intact, and it is used in comparison with the new table reported here as a check on the Monte Carlo procedure used in the present study. Distributions other than the standard normal distribution are examined as well, namely, a Pearson U-shaped distribution and a bimodal distribution consisting of a mixture of two Pearson distributions. Graphs of the U-shaped and bimodal distributions are given. A section is given in which dispersion for a bivariate case is examined in terms of the bivariate normal distribution. An interesting trend among the covariance matrices is observed in the data reported in that section. A separate computer program for each type of distribution was written and used to calculate the variances of the truncated distributions. FORTRAN programs and flow charts are presented in the Appendix. Explanations of the tables and of the procedures used to calculate the entries in the body of each table are given in each section, along with some discussion of the results presented. / Master of Science
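The monotonicity question examined in this thesis can be illustrated with a short Monte Carlo sketch (in Python rather than the thesis's FORTRAN; the truncation points below are arbitrary, not Clark's):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(1_000_000)

# Variance of the standard normal truncated to nested, widening intervals [-a, a];
# the values increase monotonically toward the untruncated variance of 1.
for a in (0.5, 1.0, 1.5, 2.0, 3.0):
    kept = z[np.abs(z) <= a]
    print(f"[-{a:.1f}, {a:.1f}]: variance ~ {kept.var():.4f}")
```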
144

Corn storage marketing strategies for Virginia

Hoover, Michael G. 22 August 2008 (has links)
The decision between selling corn at harvest or placing corn in storage is investigated. Six marketing strategies are identified and analyzed based on their ability to capture profits and avoid losses. The strategies are implemented when expected profits are positive. The strategies involve storing with no forward pricing and storing with forward pricing using futures, options and cash contracts. Three regression models are developed to forecast change in cash prices and basis. The regression models are incorporated into the strategies to help producers forecast profits and losses. Cash prices and basis are based on markets in the Northern Neck of Virginia for the 1974 to 1994 time period. The distributions of returns for each strategy are analyzed and compared using mean-variance analysis and second-degree stochastic dominance. The distributions of returns represent the risk associated with each strategy. The results indicate four of the six strategies are worth considering. The strategy with the highest average returns and the highest variance of returns involved storing corn with no forward pricing. The strategy with no forward pricing exhibited some of the best returns, but exposed the producer to the most risk. A producer faced no risk if the strategy using cash contracts was implemented. The strategy that comprised hedging with a futures contract and the strategy that involved buying a put option and writing a call option exhibited similar returns and risk. Producers can implement the strategy that exhibits the level of risk they are willing to accept. A computer program is developed to assist the producer in analyzing the four strategies. / Master of Science
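Second-degree stochastic dominance, one of the two comparison criteria used above, can be checked empirically by comparing integrated empirical CDFs. The sketch below uses hypothetical return series, not the thesis data, and the helper name `ssd_dominates` is my own:

```python
import numpy as np

def ssd_dominates(a, b):
    """Empirical check that return series `a` dominates `b` by second-degree
    stochastic dominance: the integrated CDF of `a` never exceeds that of `b`."""
    grid = np.sort(np.concatenate([a, b]))
    cdf = lambda x: np.searchsorted(np.sort(x), grid, side="right") / len(x)
    # Trapezoidal integral of each empirical CDF up to every grid point.
    integ = lambda F: np.concatenate(([0.0], np.cumsum(np.diff(grid) * (F[:-1] + F[1:]) / 2)))
    return bool(np.all(integ(cdf(a)) <= integ(cdf(b)) + 1e-12))

# Hypothetical per-bushel returns for a hedged and an unhedged storage strategy.
returns_hedged = np.array([0.05, 0.07, 0.06, 0.08, 0.05])
returns_unhedged = np.array([-0.10, 0.02, 0.15, 0.20, -0.05])
print(ssd_dominates(returns_hedged, returns_unhedged))   # True: the hedged series dominates here
```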
145

Simplified "moment distribution method"

Wu, Yen January 1963 (has links)
The moment distribution method was modified so that simple techniques applicable to symmetrical and anti-symmetrical frames may be applied to non-symmetrical rigid frames consisting of prismatic members. This approach considerably simplifies the calculations. Using it, two different "Simplified Moment Distribution Methods" were introduced. Method No. 1, an "exact" method, makes it possible to execute moment distribution in a single cycle. The "exact" values of the unknown moments are obtained by solving a set of simultaneous equations. This method is applicable to single-bay frames having an arbitrary number of stories; in the solution there is one unknown moment and one equation for each story. Method No. 2 simplifies the analysis of multiple-bay, multiple-story frames. It is a modified version of the standard moment distribution: only half of the total number of joints has to be considered, and the convergence of the iteration process is accelerated. The presentation of the theory is preceded by the definition of a set of modified constants pertinent to the two methods. Illustrative examples for the analysis of single-bay as well as multiple-bay frames are included. / Master of Science
146

Setting location priors using beamforming improves model comparison in MEG-DCM

Carter, Matthew Edward 25 August 2014 (has links)
Modelling neuronal interactions as a directed network can provide insight into the activity of the brain during experimental tasks. Magnetoencephalography (MEG) allows for the observation of the fast neuronal dynamics necessary to characterize the activity of sources and their interactions. A network representation of these sources and their connections can be formed by mapping the sources to nodes and their connection strengths to edge weights. Dynamic Causal Modelling (DCM) provides a Bayesian framework to estimate the parameters of these networks, as well as the ability to test hypotheses on the structure of the network itself using Bayesian model comparison. DCM uses a neurologically informed representation of the active neural sources, which leads to an underdetermined system and increased complexity in estimating the network parameters. This work shows that informing the MEG-DCM source locations with prior distributions defined using a MEG source localization algorithm improves model selection accuracy. DCM inversion of a group of candidate models shows an enhanced ability to identify a ground-truth network structure when source-localized prior means are used. / Master of Science
147

Champs aléatoires markoviens arborescents de distributions marginales Poisson / Tree-based Markov random fields with Poisson marginal distributions

Côté, Benjamin 16 August 2024 (has links)
Pour une bonne modélisation mathématique de l'occurrence de phénomènes aléatoires, part fondamentale de la discipline actuarielle, il est nécessaire d'employer des distributions multivariées permettant de capturer adéquatement les relations de dépendance présentes entre les phénomènes. Celles qu'offrent les champs aléatoires markoviens, une famille de modèles probabilistes graphiques, répondent à ce besoin, les relations de dépendance qu'elles introduisent se calquant à un arbre ou à un graphe. Les champs aléatoires markoviens misent ainsi sur les riches possibilités de topologies d'arbres et de graphes pour offrir cette même richesse en termes de dépendance. Une nouvelle famille de champs aléatoires markoviens arborescents, c'est-à-dire se basant sur des arbres, est proposée. Les membres de cette famille se distinguent par le fait qu'ils ont des distributions marginales fixes de Poisson, « fixes » dans le sens que la dépendance introduite n'a pas d'impact sur elles. Des distributions marginales fixes sont inhabituelles pour un champ aléatoire markovien, bien que généralement désirables pour fins de modélisation. Cette caractéristique est possible par l'encapsulation, dans les arêtes de l'arbre, de la dynamique de propagation induite par l'opérateur d'amincissement binomial. Cela mène également à une représentation stochastique intuitive des champs aléatoires markoviens de la famille, à des méthodes simples de simulation et à des expressions analytiques pour leur fonction de masse de probabilités conjointe et leur fonction génératrice de probabilités conjointe, notamment. Quantités importantes dans un contexte actuariel, la somme des composantes du champ aléatoire markovien, interprétable comme le nombre total d'événements s'étant produits, et les contributions individuelles de ces composantes sont étudiées en profondeur. Cette analyse passe notamment par l'établissement d'ordres stochastiques. À cet effet, un nouvel ensemble partiellement ordonné est défini pour comparer des arbres aux topologies différentes selon la distribution qu'ils induisent pour la somme, ce qui est, à notre connaissance, novateur dans le contexte de modèles probabilistes graphiques. Est offerte une comparaison de cet ensemble partiellement ordonné avec quelques autres en lien avec la théorie spectrale des graphes. / For adequate mathematical modeling of random phenomena's occurrences, it is necessary to employ multivariate distributions that appropriately capture the existing dependence relations between those phenomena. The multivariate distributions granted by Markov random fields, a family of probabilistic graphical models, answer this need by encoding the dependence scheme they introduce on a tree or a graph. Markov random fields thus leverage the rich possibilities of tree and graph shapes to provide the same richness in terms of dependence schemes. We propose a new family of tree-based Markov random fields, characterized by their Poisson marginal distributions. The marginal distributions are also fixed, meaning they are not affected by the introduced dependence. This fixedness is uncommon for Markov random fields, while being desirable for modeling purposes. It is obtained from the encapsulation, in the edges of the tree, of the propagation dynamic induced by the binomial thinning operator. 
This leads to an intuitive stochastic representation of Markov random fields from the proposed family, simple methods of simulation, and analytic expressions for their joint probability mass function and their joint probability generating function, notably. Important quantities in an actuarial context are the sum of the components of the Markov random field, interpreted as the total number of occurring phenomena, and the individual contributions of these components. They are thoroughly studied, notably via the use of stochastic order relations. We incidentally design a new partially ordered set (poset) of trees, in order to compare trees of different shapes based on the distribution of the sum they respectively convey. To our knowledge, this approach is innovative in the context of probabilistic graphical models. We provide comparisons of the newly defined poset with other posets of trees drawn from spectral graph theory.
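The fixed Poisson marginals rest on a standard property of binomial thinning: if X ~ Poisson(λ) and Y | X ~ Binomial(X, α), then Y ~ Poisson(αλ), so adding an independent Poisson(λ(1 − α)) innovation at each edge restores a Poisson(λ) marginal at the child node. The following simulation sketch uses this standard construction along one branch of a tree; the parameter values are illustrative, and the thesis's exact propagation dynamic may differ in detail:

```python
import numpy as np

rng = np.random.default_rng(2024)
lam, alpha, n = 3.0, 0.4, 200_000      # illustrative marginal mean and thinning probability

# One branch of a tree (root -> child -> grandchild): each child is a binomial thinning
# of its parent plus an independent Poisson innovation, which keeps the marginal
# distribution Poisson(lam) at every node.
root = rng.poisson(lam, n)
child = rng.binomial(root, alpha) + rng.poisson(lam * (1 - alpha), n)
grandchild = rng.binomial(child, alpha) + rng.poisson(lam * (1 - alpha), n)

for name, x in [("root", root), ("child", child), ("grandchild", grandchild)]:
    print(f"{name}: mean ~ {x.mean():.3f}, variance ~ {x.var():.3f}   (Poisson(3): both equal 3)")
print(f"corr(root, child) ~ {np.corrcoef(root, child)[0, 1]:.3f}   (equals alpha = {alpha})")
```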
148

Study of the helicity distributions of Zγ production at the CMS experiment

Chakaberia, Irakli January 1900 (has links)
Doctor of Philosophy / Department of Physics / Tim Bolton / This thesis represents the first study of the helicity distributions of Zγ di-boson production at hadron colliders. I use 5 fb⁻¹ of proton-proton collision data at a center-of-mass energy of √s = 7 TeV, collected by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC), to examine the angular distributions of the Zγ → e⁺e⁻γ / μ⁺μ⁻γ process and measure the helicity amplitudes that govern it. This study provides sensitivity to the interference terms between different quantum states and, through those interference terms, to possible new physics. The final state comprises a lepton pair (muon-antimuon or electron-positron) with transverse momentum over 20 GeV and a photon with transverse energy over 30 GeV. Helicity amplitudes are measured for total angular momentum of the quark-antiquark system up to J_qq̄ = 2. A four-dimensional multivariate analysis of the 2011 CMS data shows no significant deviations from the standard model prediction for the measured amplitudes.
149

From here to infinity: sparse finite versus Dirichlet process mixtures in model-based clustering

Frühwirth-Schnatter, Sylvia, Malsiner-Walli, Gertraud January 2019 (has links) (PDF)
In model-based clustering, mixture models are used to group data points into clusters. A useful concept, introduced for Gaussian mixtures by Malsiner Walli et al. (Stat Comput 26:303-324, 2016), is that of sparse finite mixtures, where the prior distribution on the weight distribution of a mixture with K components is chosen in such a way that, a priori, the number of clusters in the data is random and allowed to be smaller than K with high probability. The number of clusters is then inferred a posteriori from the data. The present paper makes the following contributions in the context of sparse finite mixture modelling. First, it is illustrated that the concept of sparse finite mixtures is very generic and easily extended to cluster various types of non-Gaussian data, in particular discrete data and continuous multivariate data arising from non-Gaussian clusters. Second, sparse finite mixtures are compared to Dirichlet process mixtures with respect to their ability to identify the number of clusters. For both model classes, a random hyperprior is considered for the parameters determining the weight distribution. By suitable matching of these priors, it is shown that the choice of this hyperprior is far more influential on the cluster solution than whether a sparse finite mixture or a Dirichlet process mixture is used.
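The sparse-finite-mixture idea, a weight prior under which many of the K components stay empty a priori, can be illustrated with a quick prior simulation under a symmetric Dirichlet(e0) weight prior with a small concentration parameter; the values of K, e0 and n below are illustrative, not those matched in the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
K, n, e0 = 10, 100, 0.01               # components, observations, Dirichlet concentration

def n_filled_clusters():
    weights = rng.dirichlet(np.full(K, e0))     # sparse symmetric prior on the weights
    labels = rng.choice(K, size=n, p=weights)   # prior allocation of n observations
    return len(np.unique(labels))               # clusters actually occupied

draws = [n_filled_clusters() for _ in range(5_000)]
print(f"a priori mean number of non-empty clusters: {np.mean(draws):.2f} out of K = {K}")
```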
150

Robust target detection for Hyperspectral Imaging. / Détection robuste de cibles en imagerie Hyperspectrale.

Frontera Pons, Joana Maria 10 December 2014 (has links)
L'imagerie hyperspectrale (HSI) repose sur le fait que, pour un matériau donné, la quantité de rayonnement émis varie avec la longueur d'onde. Les capteurs HSI mesurent donc le rayonnement des matériaux au sein de chaque pixel pour un très grand nombre de bandes spectrales contiguës et fournissent des images contenant des informations à la fois spatiales et spectrales. Les méthodes classiques de détection adaptative supposent généralement que le fond est gaussien à vecteur moyen nul ou connu. Cependant, quand le vecteur moyen est inconnu, comme c'est le cas pour l'image hyperspectrale, il doit être inclus dans le processus de détection. Nous proposons dans ce travail d'étendre les méthodes classiques de détection au cas où la matrice de covariance et le vecteur moyen sont tous deux inconnus. Cependant, la distribution statistique multivariée des pixels de l'environnement peut s'éloigner de l'hypothèse gaussienne classiquement utilisée. La classe des distributions elliptiques a été déjà popularisée pour la caractérisation de fond pour l'HSI. Bien que ces modèles non gaussiens aient déjà été exploités dans la modélisation du fond et dans la conception de détecteurs, l'estimation des paramètres (matrice de covariance, vecteur moyen) est encore généralement effectuée en utilisant des estimateurs conventionnels gaussiens. Dans ce contexte, nous analysons des méthodes d'estimation robuste plus appropriées à ces distributions non gaussiennes : les M-estimateurs. Ces méthodes de détection couplées à ces nouveaux estimateurs permettent, d'une part, d'améliorer les performances de détection dans un environnement non gaussien et, d'autre part, de garder les mêmes performances que celles des détecteurs conventionnels dans un environnement gaussien. Elles fournissent ainsi un cadre unifié pour la détection de cibles et la détection d'anomalies pour la HSI. / Hyperspectral imaging (HSI) relies on the fact that, for any given material, the amount of emitted radiation varies with wavelength. HSI sensors measure the radiance of the materials within each pixel area at a very large number of contiguous spectral bands and provide image data containing both spatial and spectral information. Classical adaptive detection schemes assume that the background is zero-mean Gaussian, or Gaussian with a known mean vector that can be exploited. However, when the mean vector is unknown, as it is the case for hyperspectral imaging, it has to be included in the detection process. We propose in this work an extension of classical detection methods to the case where both the covariance matrix and the mean vector are unknown. However, the actual multivariate distribution of the background pixels may differ from the generally used Gaussian hypothesis. The class of elliptical distributions has already been popularized for background characterization in HSI. Although these non-Gaussian models have been exploited for background modeling and detection schemes, the parameter estimation (covariance matrix, mean vector) is usually performed using classical Gaussian-based estimators. We analyze here some robust estimation procedures (M-estimators of location and scale) more suitable when non-Gaussian distributions are assumed. Used jointly with M-estimators, these new detectors enhance the target detection performance in non-Gaussian environments while keeping the same performance as the classical detectors in a Gaussian environment. They therefore provide a unified framework for target detection and anomaly detection in HSI.
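The robust estimators discussed here (M-estimators of location and scale) can be computed by a simple fixed-point iteration. The sketch below uses Student-t weights on synthetic heavy-tailed data; the weight function, degrees of freedom and data are illustrative choices, not necessarily those analyzed in the thesis:

```python
import numpy as np

def student_t_m_estimator(X, nu=3.0, n_iter=100, tol=1e-8):
    """Fixed-point M-estimator of location and scatter using Student-t weights."""
    n, p = X.shape
    mu, sigma = X.mean(axis=0), np.cov(X, rowvar=False)
    for _ in range(n_iter):
        diff = X - mu
        d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(sigma), diff)  # squared Mahalanobis distances
        w = (nu + p) / (nu + d2)                                         # down-weight outlying pixels
        mu_new = (w[:, None] * X).sum(axis=0) / w.sum()
        diff = X - mu_new
        sigma_new = (w[:, None] * diff).T @ diff / n
        converged = np.abs(mu_new - mu).max() < tol and np.abs(sigma_new - sigma).max() < tol
        mu, sigma = mu_new, sigma_new
        if converged:
            break
    return mu, sigma

# Synthetic heavy-tailed "background pixels": independent Student-t components, rescaled.
rng = np.random.default_rng(0)
X = rng.standard_t(3.0, size=(5000, 4)) @ np.diag([1.0, 2.0, 0.5, 1.5])
mu_hat, sigma_hat = student_t_m_estimator(X)
print("robust location estimate:", np.round(mu_hat, 3))
```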
