31

Multivariate extensions of the "Besag, York and Mollié" model: An application to the study of socioeconomic inequalities in mortality

Marí Dell'Olmo, Marc, 1978- 05 December 2012
This thesis has two main objectives. The first is to propose multivariate methods for the study of socioeconomic inequalities in mortality in small areas. The second is to study these inequalities in practice in several Spanish cities. Accordingly, four studies were conducted: two methodological, and two applied to the study of inequalities. The first methodological study proposes Bayesian factor analysis for calculating deprivation indices, and concludes that ignoring the uncertainty in the estimated index can lead to misclassification bias when areas are grouped according to quantiles of the index. The second methodological study reformulates the SANOVA model so that a covariate can be introduced into it; the reformulation also permits decomposing the variance of the studied patterns into the sum of the variances of all the model components. Finally, the two applied studies show the existence of socioeconomic inequalities in total mortality and in mortality from the main specific causes in eleven Spanish cities. In addition, for ischemic heart disease these inequalities appear to increase slightly over time.
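The misclassification point from the first methodological study can be illustrated numerically. Below is a minimal Python sketch (not the thesis code; the sizes and noise levels are invented): areas are assigned to quintiles of a simulated deprivation index once from the posterior mean alone and once per posterior draw, and the two assignments disagree for a nontrivial share of areas.

```python
# Toy illustration (not the thesis code) of why ignoring the posterior
# uncertainty of a deprivation index can misclassify areas into quantiles.
import numpy as np

rng = np.random.default_rng(42)
n_areas, n_draws = 200, 1000

# Hypothetical "true" deprivation index per area, plus noisy posterior draws.
true_index = rng.normal(0.0, 1.0, size=n_areas)
posterior = true_index[:, None] + rng.normal(0.0, 0.5, size=(n_areas, n_draws))

# Grouping by the point estimate (posterior mean) ignores the uncertainty.
point_est = posterior.mean(axis=1)
point_groups = np.digitize(point_est, np.quantile(point_est, [0.2, 0.4, 0.6, 0.8]))

# Grouping each posterior draw separately propagates the uncertainty:
# an area's modal quintile can differ from its point-estimate quintile.
draw_groups = np.empty((n_areas, n_draws), dtype=int)
for d in range(n_draws):
    cuts = np.quantile(posterior[:, d], [0.2, 0.4, 0.6, 0.8])
    draw_groups[:, d] = np.digitize(posterior[:, d], cuts)

modal_group = np.array([np.bincount(g).argmax() for g in draw_groups])
print("areas whose quintile changes once uncertainty is propagated:",
      int((modal_group != point_groups).sum()))
```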
32

Application of Java on Statistics Education

Tsay, Yuh-Chyuan 24 July 2000
With the prevalence of the internet, using the network as a tool for computer-aided education is gradually becoming a trend. Until now, however, computer-aided education has mostly been presented as static text, which is merely more convenient to read and differs little from a traditional textbook. With the growth of the WWW and the development of Java, interactive computer-aided education is becoming the trend of the future, and this new medium can improve the teaching of basic statistics. An instructor can combine HTML with Java applets to deliver interactive teaching over the WWW. In this paper, we present six Java applets for statistics education that help students learn and understand abstract statistical concepts. The key methods for reaching this goal are visualization and simulation, presented through graphics or games. Finally, we discuss how to use the applets and how to add them to a web page.
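The applets themselves are not reproduced here, but the underlying idea of teaching an abstract statistical concept through simulation rather than static text can be sketched in a few lines. The snippet below uses Python rather than the Java of the original work and is only an analogous illustration, demonstrating the central limit theorem by simulation.

```python
# Simulation-based illustration of the central limit theorem, analogous in
# spirit to the interactive applets described above (the original used Java).
import numpy as np

rng = np.random.default_rng(0)
n, reps = 30, 10_000

# Draw from a decidedly non-normal distribution (exponential, mean 1).
samples = rng.exponential(scale=1.0, size=(reps, n))
means = samples.mean(axis=1)

# The CLT predicts the sample mean is approximately N(1, 1/n).
print(f"mean of sample means: {means.mean():.3f}  (CLT predicts 1.000)")
print(f"std  of sample means: {means.std():.3f}  (CLT predicts {1/np.sqrt(n):.3f})")
```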
33

Characterization, Modeling and Identification of magnetic field sources inside an electric vehicle

Pinaud, Olivier 13 November 2014
Electric vehicles carry many electrical devices, all of which can generate magnetic fields inside the cabin, the confined space occupied by the passengers. A complete numerical model of the vehicle is impractical because of the number of parameters involved, and it is equally impossible to place field sensors everywhere inside the cabin. After a detailed study of the characteristics of the magnetic field measured in the cabin, we propose to combine an a priori model with field measurements in a Bayesian approach to the inverse problem. Based on a spherical harmonic expansion of the field, the a priori information guides the solution and allows many parameters to be identified from very few measurements.
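The structure of this approach can be sketched on a linear-Gaussian toy problem. In the sketch below, the forward matrix, noise level, and prior width are invented stand-ins (the thesis builds its forward model from the spherical harmonic expansion of the cabin field); it shows how an informative Gaussian prior yields a closed-form MAP estimate and lets more coefficients be identified than there are sensors.

```python
# Hedged sketch: Gaussian-prior Bayesian inversion of a linear field model,
# y = A x + noise. In the thesis, A would come from a spherical harmonic
# expansion of the measured field; here A is a random stand-in.
import numpy as np

rng = np.random.default_rng(1)
n_params, n_sensors = 25, 8                       # many parameters, few sensors

A = rng.normal(size=(n_sensors, n_params))        # forward model (stand-in)
x_true = rng.normal(size=n_params)
sigma = 0.05                                      # sensor noise std (assumed)
y = A @ x_true + rng.normal(0.0, sigma, n_sensors)

tau = 1.0                                         # prior std on coefficients
# MAP / posterior mean for a linear-Gaussian model:
#   x_hat = (A^T A / sigma^2 + I / tau^2)^{-1} A^T y / sigma^2
precision = A.T @ A / sigma**2 + np.eye(n_params) / tau**2
x_hat = np.linalg.solve(precision, A.T @ y / sigma**2)
print("relative error with prior:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```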
34

Chemical Analysis, Databasing, and Statistical Analysis of Smokeless Powders for Forensic Application

Dennis, Dana-Marie 01 January 2015
Smokeless powders are a set of energetic materials, known as low explosives, typically used for reloading ammunition. There are three types, which differ in their primary energetic materials: single base powders contain nitrocellulose as their primary energetic material; double base powders contain nitroglycerin in addition to nitrocellulose; and triple base powders also contain nitroguanidine. Additional organic compounds, while not proprietary to specific manufacturers, are added to the powders in varied ratios during manufacturing to optimize ballistic performance. These compounds function as stabilizers, plasticizers, flash suppressants, deterrents, and opacifiers. Of the three types, single and double base powders are commercially available and have been heavily utilized in the manufacture of improvised explosive devices. Forensic smokeless powder samples are currently analyzed using multiple analytical techniques: combined microscopic, macroscopic, and instrumental techniques are used to evaluate a sample, and the information obtained is used to generate a list of potential distributors. Gas chromatography–mass spectrometry (GC-MS) is arguably the most useful of the instrumental techniques, since it distinguishes single from double base powders and provides additional information about the relative ratios of all the analytes present in a sample. However, forensic smokeless powder samples are still limited to being classified as either single or double base powders, based on the absence or presence of nitroglycerin, respectively. In this work, the goal was to develop statistically valid classes, beyond the single and double base designations, based on multiple organic compounds commonly encountered in commercial smokeless powders. Several chemometric techniques were applied to smokeless powder GC-MS data to determine the classes and to assign test samples to them. The total ion spectrum (TIS), calculated from the GC-MS data for each sample, is obtained by summing the intensities of each mass-to-charge (m/z) ratio across the entire chromatographic profile. A TIS matrix comprising data for 726 smokeless powder samples was subjected to agglomerative hierarchical cluster (AHC) analysis, and six distinct classes were identified. Within each class, a single m/z ratio had the highest intensity for the majority of samples, though that m/z ratio was not always unique to the class. Based on these observations, a new classification method, the Intense Ion Rule (IIR), was developed and used to assign test samples to the AHC-designated classes. Discriminant models for assigning test samples to the AHC-designated classes were also developed using k-nearest neighbors (kNN) and linear and quadratic discriminant analyses (LDA and QDA, respectively). Each model was optimized using leave-one-out (LOO) and leave-group-out (LGO) cross-validation, and performance was evaluated by calculating correct classification rates for assignment of the cross-validation (CV) samples to the AHC-designated classes. The optimized models were then used to assign test samples to those classes. Overall, the QDA LGO model achieved the highest correct classification rates for assignment of both the CV samples and the test samples.
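A compressed, hedged sketch of this TIS–clustering–discriminant pipeline on synthetic stand-in data (the sample count, class count, and feature dimension below are placeholders, not the study's values) might look as follows:

```python
# Hedged sketch of the pipeline described above, on synthetic stand-in data:
# (1) compute a total ion spectrum (TIS) by summing intensities over the
# chromatographic axis, (2) cluster TIS vectors with agglomerative
# hierarchical clustering, (3) fit a QDA model to the cluster labels.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(7)
n_samples, n_scans, n_mz = 300, 40, 12   # placeholders, not the real sizes

# Synthetic GC-MS cubes: (sample, retention-time scan, m/z channel).
raw = rng.gamma(shape=2.0, scale=1.0, size=(n_samples, n_scans, n_mz))
tis = raw.sum(axis=1)                    # TIS: sum intensities over the chromatogram

labels = AgglomerativeClustering(n_clusters=6).fit_predict(tis)

qda = QuadraticDiscriminantAnalysis(reg_param=0.1).fit(tis, labels)
print("resubstitution accuracy:", qda.score(tis, labels))
```

Ward-linkage AHC and regularized QDA are used here for concreteness; the study additionally evaluated kNN, LDA, and LOO/LGO cross-validation rather than the resubstitution score shown.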
In forensic application, the goal of an explosives analyst is to ascertain the manufacturer of a smokeless powder sample. Knowledge of the probability that a forensic sample was produced by a specific manufacturer could also reduce the time an analyst invests in an investigation by providing a shorter list of potential manufacturers. In this work, Bayes' theorem and Bayesian networks were investigated as an additional tool for forensic casework. Bayesian networks were generated and used to calculate the posterior probability that a test sample belongs to a specific manufacturer. The networks were designed to include manufacturer-controlled powder characteristics such as shape, color, and dimension, as well as the relative intensities of the class-associated ions determined from cluster analysis. Each sample was predicted to belong to the manufacturer with the highest posterior probability. Overall percent correct rates were determined by calculating the percentage of correct predictions, that is, those where the known and predicted manufacturers were the same. The initial overall percent correct rate was 66%. The dimensions of the smokeless powders were then added to the network as average diameter and average length nodes, which raised the overall prediction rate to 70%.
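The posterior computation at the heart of this approach reduces to Bayes' theorem. A hand-rolled illustration is given below, treating shape and color as conditionally independent given the manufacturer, an assumption the full networks do not need to make; all manufacturers, categories, and probabilities are invented.

```python
# Hedged, naive-Bayes-style illustration of the manufacturer inference:
# P(maker | shape, color) is proportional to
#     P(maker) * P(shape | maker) * P(color | maker).
# All probabilities are invented; the thesis used full Bayesian networks
# that also encode dimensions and class-ion intensities.
priors = {"MakerA": 0.5, "MakerB": 0.3, "MakerC": 0.2}
p_shape = {  # P(shape | maker)
    "MakerA": {"disk": 0.7, "tube": 0.3},
    "MakerB": {"disk": 0.2, "tube": 0.8},
    "MakerC": {"disk": 0.5, "tube": 0.5},
}
p_color = {  # P(color | maker)
    "MakerA": {"black": 0.9, "gray": 0.1},
    "MakerB": {"black": 0.4, "gray": 0.6},
    "MakerC": {"black": 0.6, "gray": 0.4},
}

def posterior(shape: str, color: str) -> dict:
    """Normalized posterior over manufacturers given observed traits."""
    joint = {m: priors[m] * p_shape[m][shape] * p_color[m][color] for m in priors}
    z = sum(joint.values())
    return {m: p / z for m, p in joint.items()}

# Predict the maker with the highest posterior probability.
post = posterior("disk", "black")
print(post, "->", max(post, key=post.get))
```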
35

Document Classification by Topic

Marek, Tomáš January 2013
This thesis deals with document classification, in particular with text classification methods. Its main goal is to analyze two document classification algorithms, describe them, and implement them. The chosen algorithms, analyzed and implemented in the practical part of the thesis, are the Bayes classifier and a classifier based on support vector machines (SVM). Another main goal is to design and select optimal text features that best describe the input text and thus lead to the best classification results. The thesis concludes with a set of tests comparing the efficiency of the chosen classifiers under various conditions.
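For concreteness, the comparison described above can be sketched with scikit-learn in place of the thesis's own implementation; the tiny corpus, labels, and test sentence below are placeholders for real training data.

```python
# Hedged sketch of the two classifiers compared in the thesis (naive Bayes
# and SVM over text features), built with scikit-learn rather than the
# thesis's own implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

docs = ["stock markets fell sharply", "the striker scored twice",
        "bond yields rose again", "the match ended in a draw"]
topics = ["finance", "sport", "finance", "sport"]

for clf in (MultinomialNB(), LinearSVC()):
    model = make_pipeline(TfidfVectorizer(), clf)  # TF-IDF features + classifier
    model.fit(docs, topics)
    print(type(clf).__name__, "->", model.predict(["goal in the final minute"]))
```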
36

Introduction to Probability Theory

Chen, Yong-Yuan 25 May 2010
In this paper, we first present the basic principles of set theory and combinatorial analysis, which are the most useful tools for computing probabilities. We then show some important properties derived from the axioms of probability. Conditional probabilities come into play not only when partial information is available, but also as a tool for computing probabilities more easily, even when no partial information is available. Next, the concept of a random variable and some of its related properties are introduced. For univariate random variables, we present the basic properties of some common discrete and continuous distributions, and the important properties of jointly distributed random variables are also considered. Some inequalities, the law of large numbers, and the central limit theorem are discussed. Finally, we introduce an additional topic, the Poisson process.
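As a small worked example of the final topic, a homogeneous Poisson process on [0, T] can be simulated directly from its defining property that inter-arrival times are i.i.d. exponential; the rate and horizon below are arbitrary choices for illustration.

```python
# Sketch: simulate a homogeneous Poisson process with rate lam on [0, T]
# via i.i.d. exponential inter-arrival times, then check that the event
# count matches its theoretical mean and variance, both equal to lam * T.
import numpy as np

rng = np.random.default_rng(3)
lam, T, reps = 2.0, 10.0, 5000

counts = []
for _ in range(reps):
    t, n = 0.0, 0
    while True:
        t += rng.exponential(1.0 / lam)   # exponential inter-arrival time
        if t > T:
            break
        n += 1
    counts.append(n)

print(f"empirical mean count: {np.mean(counts):.2f}  (theory: {lam * T:.2f})")
print(f"empirical variance  : {np.var(counts):.2f}  (theory: {lam * T:.2f})")
```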
37

Computational Bayesian techniques applied to cosmology

Hee, Sonke January 2018
This thesis presents work around three themes: dark energy, gravitational waves and Bayesian inference. Neither dark energy nor gravitational-wave physics is yet well constrained, and both present interesting challenges for Bayesian inference, which attempts to quantify our knowledge of the universe given our astrophysical data. A dark energy equation of state reconstruction analysis finds that the data favour the vacuum dark energy equation of state $w = -1$ model. Deviations from vacuum dark energy are shown to favour the super-negative ‘phantom’ dark energy regime of $w < -1$, but at low statistical significance. The constraining power of various datasets is quantified, finding that data constraints peak around redshift $z = 0.2$ due to baryonic acoustic oscillation and supernovae data constraints, whilst cosmic microwave background radiation and Lyman-$\alpha$ forest constraints are less significant. Specific models with a conformal time symmetry in the Friedmann equation, and with an additional dark energy component, are tested and shown by Bayesian model selection analysis to be competitive with the vacuum dark energy model; that they are not ruled out is believed to be largely due to data too poor to decide between the existing models. Recent detections of gravitational waves by the LIGO collaboration enable the first gravitational-wave tests of general relativity. An existing test in the literature is used and sped up significantly by a novel method developed in this thesis. The test computes posterior odds ratios, and the new method is shown to compute these accurately and efficiently, providing roughly a 100-fold reduction in the number of likelihood calculations required to compute evidences at a given accuracy. Further testing may identify a significant advance in Bayesian model selection using nested sampling, as the method is completely general and straightforward to implement; we note, however, that efficiency gains are not guaranteed and may be problem specific, so further research is needed.
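The kind of posterior odds computation described above can be illustrated on a toy one-parameter problem. The sketch below computes evidences by brute-force grid integration (the thesis instead develops fast nested-sampling-based machinery), with invented data standing in for cosmological measurements of $w$ and equal prior model probabilities assumed.

```python
# Toy Bayesian model selection via posterior odds: model 0 fixes w = -1,
# model 1 gives w a Uniform(-2, 0) prior. Evidences are computed by grid
# integration here; all data and priors are invented for illustration.
import numpy as np

rng = np.random.default_rng(5)
w_true, sigma = -1.0, 0.1
data = rng.normal(w_true, sigma, size=20)     # invented "measurements" of w

def log_like(w):
    return -0.5 * np.sum((data - w) ** 2) / sigma**2

# Model 0: no free parameter, so the evidence is the likelihood at w = -1.
log_Z0 = log_like(-1.0)

# Model 1: the evidence integrates the likelihood against the prior,
# whose density is 1/2 on (-2, 0).
grid = np.linspace(-2.0, 0.0, 2001)
log_L = np.array([log_like(w) for w in grid])
dw = grid[1] - grid[0]
log_Z1 = log_L.max() + np.log(np.exp(log_L - log_L.max()).sum() * dw) - np.log(2.0)

# With equal prior model odds, positive log posterior odds favour the
# fixed w = -1 model (an Occam penalty acts on the extra parameter).
print(f"log posterior odds (model 0 vs model 1): {log_Z0 - log_Z1:.2f}")
```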
