41

Étude des chambres réverbérantes à brassage de modes en ondes millimétriques : application à l’étude des interactions ondes-vivant / Study and design of reverberation chamber at millimeter waves : dosimetry application

Fall, Abdou Khadir 03 February 2015 (has links)
Nowadays, there is a massive emergence of new electronic systems operating at ever higher frequencies, especially in the millimeter-wave range (30-300 GHz). As a consequence, new appropriate test facilities are needed in this range. In particular, the study of the biocompatibility of these systems is clearly identified as a research priority in electromagnetism. In this context, this thesis deals with the design and the evaluation of the properties of a mode-stirred reverberation chamber (RC) in the Ka band (26.5-40 GHz), U band (40-60 GHz) and V band (50-75 GHz). The intended application concerns the development of a dosimetric tool using an infrared camera in a reverberation chamber, together with preliminary tests on dielectric phantoms at 60 GHz. Firstly, we numerically analyze the statistical behavior of the electric field in the test volume of such an RC. A numerical model based on image theory is used to simulate the cavity. Using the Anderson-Darling goodness-of-fit test, we show that the chamber behaves well at millimeter-wave frequencies in terms of the statistical distribution of the field in the test volume, in agreement with Hill's model (a statistically homogeneous and isotropic field in the chamber volume). Secondly, a compact reverberation chamber is designed and built, with internal dimensions of 42.3 x 41.2 x 38.3 cm3. The statistical uniformity of the power density in the chamber volume is obtained by frequency stirring. The RC is equipped with a fine and precise positioning system for spatial sampling of the power along an axis inside the chamber. The millimeter-wave interfaces are also studied in order to reduce any significant leakage, and waveguides connect the millimeter-wave source to the transmitting antenna and the receiving antenna to the spectrum analyzer to minimize losses. We have also set up all the equipment necessary for carrying out measurements (source, spectrum analyzer, mixer). The RC is characterized in the 58.5-61.5 GHz range. The results are satisfactory in terms of the quality factor and the statistical distribution of the power in the test volume. Thirdly, an interface is designed and integrated into one of the chamber walls for temperature measurement by an infrared camera. Preliminary measurements are performed on a phantom consisting essentially of water. Experimental results for the temperature rise of the phantom are in good agreement with theoretical predictions. This confirms that the designed reverberation chamber can expose the device under test to a statistically uniform illumination with calibrated power. Such a facility is a valuable asset for EMC testing of electronic equipment in the 26.5-75 GHz range. This RC could also allow preliminary tests in the context of the interaction of millimeter waves with living matter.
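A minimal sketch of the kind of statistical check described above: in a well-stirred chamber, Hill's model predicts exponentially distributed received power, and an Anderson-Darling test can quantify agreement. This is an illustration only; synthetic exponential samples stand in for measured power, and the thesis works with its own simulated and measured fields.

```python
# Hedged illustration: checking whether received-power samples from a stirred
# chamber are consistent with the exponential distribution predicted by Hill's
# model for an ideally stirred cavity. Synthetic samples (linear units, not dB)
# stand in for measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
power = rng.exponential(scale=1.0, size=500)   # stand-in for measured power samples

result = stats.anderson(power, dist="expon")
print("A-D statistic:", result.statistic)
for level, crit in zip(result.significance_level, result.critical_values):
    print(f"  {level:>4}% level -> critical value {crit:.3f}")
# The exponential hypothesis is rejected at a given level when the statistic
# exceeds the corresponding critical value.
```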
42

On the modeling of asset returns and calibration of European option pricing models

Robbertse, Johannes Lodewickes 07 July 2008 (has links)
Prof. F. Lombard
43

EM algorithm for Markov chains observed via Gaussian noise and point process information: Theory and case studies

Damian, Camilla, Eksi-Altay, Zehra, Frey, Rüdiger January 2018 (has links) (PDF)
In this paper we study parameter estimation via the Expectation Maximization (EM) algorithm for a continuous-time hidden Markov model with diffusion and point process observation. Inference problems of this type arise for instance in credit risk modelling. A key step in the application of the EM algorithm is the derivation of finite-dimensional filters for the quantities that are needed in the E-Step of the algorithm. In this context we obtain exact, unnormalized and robust filters, and we discuss their numerical implementation. Moreover, we propose several goodness-of-fit tests for hidden Markov models with Gaussian noise and point process observation. We run an extensive simulation study to test speed and accuracy of our methodology. The paper closes with an application to credit risk: we estimate the parameters of a hidden Markov model for credit quality where the observations consist of rating transitions and credit spreads for US corporations.
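As a rough illustration of the filtering step that the E-step of such an EM algorithm relies on, the sketch below runs a plain discrete-time forward filter for a two-state chain observed through Gaussian increments and Poisson counts. All parameter values and the time discretization are invented; the paper itself derives exact, unnormalized and robust filters in continuous time.

```python
# Hedged sketch: a discrete-time stand-in for the filtering that feeds the E-step
# of an EM algorithm for a hidden Markov chain observed through Gaussian noise
# and a point process. Parameters are illustrative only.
import numpy as np
from scipy.stats import norm, poisson

rng = np.random.default_rng(1)
dt = 0.1
P = np.array([[0.95, 0.05],        # one-step transition matrix on the grid
              [0.10, 0.90]])
mu = np.array([0.0, 1.0])          # drift of the diffusion observation per state
sigma = 0.5                        # observation noise level
lam = np.array([0.2, 1.5])         # point-process intensity per state

# Simulate a state path and the two observation channels.
T = 200
x = np.zeros(T, dtype=int)
for t in range(1, T):
    x[t] = rng.choice(2, p=P[x[t - 1]])
y = mu[x] * dt + sigma * np.sqrt(dt) * rng.standard_normal(T)   # diffusion increments
n = rng.poisson(lam[x] * dt)                                    # event counts per interval

# Forward filter: pi_t(i) = P(X_t = i | observations up to t).
pi = np.full(2, 0.5)
filtered = np.zeros((T, 2))
for t in range(T):
    pred = pi @ P if t > 0 else pi
    lik = norm.pdf(y[t], loc=mu * dt, scale=sigma * np.sqrt(dt)) \
          * poisson.pmf(n[t], lam * dt)
    pi = pred * lik
    pi /= pi.sum()                 # normalization keeps the recursion stable
    filtered[t] = pi

print("final filter probabilities:", filtered[-1])
```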
44

Resultatpåverkan av olika fördelningar på parametern operationstid vid simuleringsstudier / The effect of different distributions for the operation-time parameter on results in simulation studies

Bengtsson, Angelica, Kuc, Arlena January 2011 (has links)
Discrete event simulation is used to imitate and analyze how systems change over time. The actual variation in the system is represented by discrete and continuous probability distributions. In the software package Simul8, simulation models are created from information collected in production; shifts, operation times and availability are examples of the information required for modeling. The aim of this bachelor's thesis was to investigate how the choice of probability distribution for the operation-time parameter affects the throughput-time results of a discrete event simulation. The thesis is the result of a case study performed at Volvo Aero Corporation, Sweden. The case study investigates the probability distributions of the operation times of 18 manufacturing operations for one product. The manufacturing sequence consists of three types of processes (automatic, semi-automatic and manual), which require different levels of operator involvement. The statistical software Stat:fit was used to find a suitable probability distribution for each manufacturing operation, and the results were used to analyze whether there is any connection between process type and probability distribution. The recommended distributions were tested with goodness-of-fit tests (chi-squared, Kolmogorov-Smirnov and Anderson-Darling) in Stat:fit and used as the basis for the experimental plan of the simulation study. The simulation study was carried out in Simul8 according to this plan, and the model was validated and verified by a simulation advisor at Volvo Aero. All runs were reported with 95% confidence intervals. The case study shows that the choice of operation-time distribution has a relatively small effect on throughput time when simulating complex systems, where factors such as production volume and availability have a larger influence. For simpler models without availability restrictions, differences between distribution choices become visible in the simulation results; the distribution of these results is consistent with the central limit theorem, i.e. with a sufficiently large number of observations the results appear normally distributed.
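The distribution-fitting step described above can be mimicked in a few lines: fit candidate distributions to a sample of operation times and compare their goodness of fit, here with the Kolmogorov-Smirnov test (one of the three tests used in the study). The synthetic times and the candidate list are assumptions made for illustration; Stat:fit automates this over a larger catalogue of distributions.

```python
# Hedged sketch of the kind of distribution fitting Stat:fit automates: fit a few
# candidate distributions to a sample of operation times and compare them with
# the Kolmogorov-Smirnov test. The operation-time sample is synthetic; in the
# study it would come from the 18 recorded manufacturing operations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
times = rng.lognormal(mean=3.0, sigma=0.4, size=120)   # stand-in operation times (minutes)

candidates = {
    "lognorm": stats.lognorm,
    "gamma": stats.gamma,
    "weibull_min": stats.weibull_min,
}

for name, dist in candidates.items():
    params = dist.fit(times)                      # maximum-likelihood fit
    ks = stats.kstest(times, name, args=params)   # Kolmogorov-Smirnov test
    print(f"{name:12s} KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.3f}")
# The candidate with the smallest KS statistic (largest p-value) would be the
# first choice to carry into the Simul8 model.
```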
45

Inférence statistique dans le modèle de mélange à risques proportionnels / Statistical inference in mixture of proportional hazards models

Ben elouefi, Rim 05 September 2017 (has links)
In this work, we are interested in statistical inference in two stratified models for censored lifetime data, one semi-parametric and one non-parametric. We first propose a goodness-of-fit test statistic for the stratified proportional hazards regression model and establish its asymptotic distribution under the null hypothesis that the model fits the data correctly. We investigate the numerical properties of this test (level, power under different alternatives) by means of simulations. We then propose a procedure for stratifying the proportional hazards model according to an unknown threshold of a stratification variable; this procedure is based on the goodness-of-fit test proposed earlier, and an extensive simulation study is conducted to evaluate its performance. In the second part of the work, we consider the stratified logrank test in a context of missing data, namely the situation where the stratum cannot be observed for every individual in the sample. We construct a weighted version of the stratified logrank test adapted to this problem and establish its limiting distribution under the null hypothesis of equal hazard functions across groups. The properties of this new test statistic are assessed using simulations, and the test is then applied to a medical dataset.
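For a concrete starting point, the sketch below fits a stratified proportional hazards model and runs an ordinary logrank comparison with the lifelines package. The data frame, covariate and stratum variable are invented, and the plain logrank is only a stand-in: the thesis develops a goodness-of-fit test for the stratified model and a weighted stratified logrank that copes with unobserved strata.

```python
# Hedged sketch (not the thesis's own code): a stratified proportional hazards fit
# and an ordinary logrank comparison using lifelines, on made-up data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import multivariate_logrank_test

rng = np.random.default_rng(3)
n = 300
group = rng.integers(0, 2, size=n)              # hypothetical treatment indicator
stratum = rng.integers(0, 2, size=n)            # hypothetical stratification variable
baseline = np.where(stratum == 0, 1.0, 2.0)     # stratum-specific baseline scale
T = rng.exponential(baseline * np.exp(-0.7 * group))
C = rng.exponential(3.0, size=n)                # independent censoring times
df = pd.DataFrame({
    "time": np.minimum(T, C),
    "event": (T <= C).astype(int),
    "group": group,
    "stratum": stratum,
})

# Proportional hazards model with a separate baseline hazard in each stratum.
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event", strata=["stratum"])
cph.print_summary()

# Ordinary logrank comparison of the two treatment groups; the thesis's weighted,
# stratified version additionally handles individuals whose stratum is unobserved.
res = multivariate_logrank_test(df["time"], df["group"], df["event"])
print("logrank statistic:", res.test_statistic, "p-value:", res.p_value)
```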
46

Goodness-of-Fit Tests For Dirichlet Distributions With Applications

Li, Yi 23 July 2015 (has links)
No description available.
47

Corrected LM goodness-of-fit tests with application to stock returns

Percy, Edward Richard, Jr. 05 January 2006 (has links)
No description available.
48

Application of Distance Covariance to Time Series Modeling and Assessing Goodness-of-Fit

Fernandes, Leon January 2024 (has links)
The overarching goal of this thesis is to use distance covariance based methods to extend asymptotic results from the i.i.d. case to general time series settings. Accounting for dependence may make already difficult statistical inference all the more challenging. The distance covariance is an increasingly popular measure of dependence between random vectors that goes beyond the linear dependence described by correlation. It is defined by a squared integral norm of the difference between the joint and marginal characteristic functions with respect to a specific weight function, and it has the advantage of being able to detect dependence even for uncorrelated data. The energy distance is a closely related quantity that measures the distance between distributions of random vectors. These statistics can be used to establish asymptotic limit theory for stationary ergodic time series, where the asymptotic results are driven by the limit theory for the empirical characteristic functions. In this thesis we apply the distance covariance to three problems in time series modeling: (i) Independent Component Analysis (ICA), (ii) multivariate time series clustering, and (iii) goodness-of-fit using residuals from a fitted model. The underlying statistical procedure for each topic uses the distance covariance as a measure of dependence, which arises in a different way in each case: first as a measure of independence among the components of a vector, second as a measure of similarity of joint distributions, and third for assessing serial dependence among the fitted residuals. In each case, limit theory is established for the corresponding empirical distance covariance statistics when the data come from a stationary ergodic time series. For Topic (i) we consider an ICA framework, a popular tool for blind source separation with applications in financial time series, signal processing, feature extraction, and brain imaging. The Structural Vector Autoregression (SVAR) model is often the basic model used for macro time series, and its residuals are given by e_t = A S_t, the classical ICA model. In certain applications one of the components of S_t has infinite variance, which differs from the standard ICA model; furthermore, the e_t's are not observed directly but are only estimated from the SVAR modeling. Many ICA procedures require the existence of a finite second or even fourth moment. We derive consistency when using the distance covariance to measure independence of the residuals in the infinite-variance case. Extensions to the ICA model with noise, which have a direct application to SVAR models when testing independence of residuals based on their estimated counterparts, are also considered. In Topic (ii) we propose a novel methodology for clustering multivariate time series data using the energy distance. Specifically, a dissimilarity matrix is formed using the energy distance statistic to measure the separation between the finite-dimensional distributions of the component time series. Once the pairwise dissimilarity matrix is calculated, a hierarchical clustering method is applied to obtain the dendrogram. This procedure is completely nonparametric, as the dissimilarities between stationary distributions are calculated directly without any model assumptions. In order to justify this procedure, asymptotic properties of the energy distance estimates are derived for general stationary and ergodic time series. Topic (iii) considers the fundamental and often final step in time series modeling, assessing the quality of fit of a proposed model to the data. Since the underlying distribution of the innovations that generate a model is often not prescribed, goodness-of-fit tests typically take the form of testing the fitted residuals for serial independence. However, these fitted residuals are inherently dependent since they are based on the same parameter estimates, and thus standard tests of serial independence, such as those based on the autocorrelation function (ACF) or distance correlation function (ADCF) of the fitted residuals, need to be adjusted. We apply sample splitting in the time series setting to perform tests of serial dependence of fitted residuals using the sample ACF and ADCF: the first f_n of the n data points are used to estimate the parameters of the model, and tests for serial independence are then based on all n residuals. With f_n = n/2, the ACF and ADCF tests of serial independence often have the same limit distributions as though the underlying residuals were indeed i.i.d. That is, if the first half of the data is used to estimate the parameters and the estimated residuals are computed for the entire data set based on these parameter estimates, then the ACF and ADCF can have the same limit distributions as though the residuals were i.i.d. This removes the need for adjustment in the construction of confidence bounds for both the ACF and ADCF, based on the fitted residuals, in goodness-of-fit testing. We also show that if f_n < n/2 then the asymptotic distributions of the tests stochastically dominate the corresponding asymptotic distributions for true i.i.d. noise; the stochastic order is reversed when f_n > n/2.
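A minimal, self-contained version of the empirical distance covariance that these methods build on (V-statistic form, double-centred distance matrices) is sketched below on a toy example in which two series are uncorrelated but dependent; the thesis establishes the corresponding limit theory for stationary ergodic time series.

```python
# Hedged sketch of the empirical (V-statistic) distance covariance: double-center
# the pairwise distance matrices of the two samples and average their elementwise
# product. The toy data are illustrative only.
import numpy as np

def distance_covariance(x, y):
    """Squared sample distance covariance between 1-d samples x and y."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    y = np.asarray(y, dtype=float).reshape(-1, 1)
    a = np.abs(x - x.T)                               # pairwise distances in x
    b = np.abs(y - y.T)                               # pairwise distances in y
    A = a - a.mean(axis=0) - a.mean(axis=1, keepdims=True) + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1, keepdims=True) + b.mean()
    return (A * B).mean()

rng = np.random.default_rng(4)
z = rng.standard_normal(500)
w = z ** 2 + 0.1 * rng.standard_normal(500)            # uncorrelated but dependent

print("correlation      :", np.corrcoef(z, w)[0, 1])   # near zero
print("distance cov^2   :", distance_covariance(z, w)) # clearly positive
```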
49

Quantifying biodiversity trends in time and space

Studeny, Angelika C. January 2012 (has links)
The global loss of biodiversity calls for robust large-scale diversity assessment. Biological diversity is a multi-faceted concept; defined as the “variety of life”, answering questions such as “How much is there?” or more precisely “Have we succeeded in reducing the rate of its decline?” is not straightforward. While various aspects of biodiversity give rise to numerous ways of quantification, we focus on temporal (and spatial) trends and their changes in species diversity. Traditional diversity indices summarise information contained in the species abundance distribution, i.e. each species' proportional contribution to total abundance. Estimated from data, these indices can be biased if variation in detection probability is ignored. We discuss differences between diversity indices and demonstrate possible adjustments for detectability. Additionally, most indices focus on the most abundant species in ecological communities. We introduce a new set of diversity measures, based on a family of goodness-of-fit statistics. A function of a free parameter, this family allows us to vary the sensitivity of these measures to dominance and rarity of species. Their performance is studied by assessing temporal trends in diversity for five communities of British breeding birds based on 14 years of survey data, where they are applied alongside the current headline index, a geometric mean of relative abundances. Revealing the contributions of both rare and common species to biodiversity trends, these "goodness-of-fit" measures provide novel insights into how ecological communities change over time. Biodiversity is not only subject to temporal changes, but it also varies across space. We take first steps towards estimating spatial diversity trends. Finally, processes maintaining biodiversity act locally, at specific spatial scales. Contrary to abundance-based summary statistics, spatial characteristics of ecological communities may distinguish these processes. We suggest a generalisation to a spatial summary, the cross-pair overlap distribution, to render it more flexible to spatial scale.
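As an illustration of how a goodness-of-fit family with a free parameter can shift weight between dominant and rare species, the sketch below evaluates Cressie-Read power-divergence statistics between observed relative abundances and a perfectly even community. The abstract does not name the family it uses, so this particular choice and the species counts are assumptions made purely for illustration.

```python
# Hedged illustration: a goodness-of-fit family with a tunable parameter applied
# to species abundances. The Cressie-Read power-divergence statistics stand in
# for the (unnamed) family of measures described in the abstract; the divergence
# is taken between observed counts and a perfectly even community.
import numpy as np
from scipy.stats import power_divergence

counts = np.array([120, 60, 30, 10, 5, 2, 1, 1])        # hypothetical species counts
expected = np.full_like(counts, counts.sum() / counts.size, dtype=float)

for lam in (-0.5, 0.0, 2.0 / 3.0, 1.0, 2.0):
    stat, _ = power_divergence(counts, expected, lambda_=lam)
    print(f"lambda = {lam:5.2f}  divergence from evenness = {stat:8.2f}")
# Negative lambda is more sensitive to species that are rarer than expected,
# large positive lambda to the dominant ones -- the kind of tunable sensitivity
# to rarity and dominance that the thesis exploits.
```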
50

列聯表中離群細格偵測探討 / Detecting Outlying Cells in Cross-Classified Tables

施苑玉, Shi, Yuan Yu Unknown Date (has links)
Chi-squared goodness-of-fit tests are usually employed to test whether a model fits a contingency table well. When the test is significant, the fitted model is inadequate and we would like to identify the sources of the lack of fit. One possible reason is the presence of outlying cells whose observed counts are inconsistent with those of the remaining cells; in the literature such cells are typically detected through residuals of various kinds, although mostly for two-way tables only. Brown (1974) offered a stepwise criterion for detecting outlying cells in two-way contingency tables, removing cells until a quasi-independence model is no longer rejected. In an attempt to simplify the lengthy calculations required by Brown's method, we suggest an alternative procedure in this study. Based on simulation results, we find that the procedure performs reasonably well and even outperforms Brown's method on several occasions. In addition, some extensions and issues regarding three-way contingency tables are also addressed.
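The basic ingredient behind this kind of outlying-cell detection can be illustrated with standardized Pearson residuals under an independence model; cells with large residuals are candidate outliers. The table below is invented, and the single-pass screen shown is only a simplified stand-in for Brown's stepwise procedure and the alternative developed in the thesis.

```python
# Hedged illustration: standardized (adjusted) Pearson residuals from the
# independence model of a two-way table as a first screen for outlying cells.
# Brown's stepwise procedure goes further by refitting a quasi-independence
# model after each deletion. The table is made up.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[25, 14, 11],
                  [12, 30,  9],
                  [10,  8, 40]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p-value = {p:.4f}")

# Adjusted Pearson residuals under independence.
n = table.sum()
row = table.sum(axis=1, keepdims=True) / n
col = table.sum(axis=0, keepdims=True) / n
adj = (table - expected) / np.sqrt(expected * (1 - row) * (1 - col))

outliers = np.argwhere(np.abs(adj) > 2.5)      # a common rough cutoff
for i, j in outliers:
    print(f"cell ({i}, {j}): adjusted residual {adj[i, j]:+.2f}")
```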
