201

Distributions d'auto-amorçage exactes ponctuelles des courbes ROC et des courbes de coûts / Pointwise exact bootstrap distributions of ROC curves and cost curves

Gadoury, David January 2009 (has links)
Thesis digitized by the Division de la gestion de documents et des archives, Université de Montréal.
202

Biological effects of high energy radiation and ultra high dose rates

Zackrisson, Björn January 1991 (has links)
Recently, a powerful electron accelerator, a 50 MeV race-track microtron, was taken into clinical use. This gives the opportunity to treat patients with higher x-ray and electron energies than before. Furthermore, treatments can be performed where the entire fractional dose is delivered in a fraction of a second. The relative biological effectiveness (RBE) of high energy photons (up to 50 MV) was studied in vitro and in vivo. The oxygen enhancement ratio (OER) of 50 MV photons and the RBE of 50 MeV electrons were investigated in vitro. Single-fraction experiments in vitro using V-79 Chinese hamster fibroblasts showed an RBE for 50 MV x-rays of approximately 1.1 at surviving fraction 0.01, with reference to the response to 4 MV x-rays. No significant difference in OER could be demonstrated. Fractionation experiments were carried out to establish the RBE at the clinically relevant dose level, 2 Gy. The RBE calculated for the 2 Gy/fraction experiments was 1.17. The RBEs for 20 MV x-rays and 50 MeV electrons were equal to one. In order to investigate the validity of these results, the jejunal crypt microcolony assay in mice was used to determine the RBE of 50 MV x-rays; the RBE in this case was estimated to be 1.06 at crypt surviving fraction 0.1. Photonuclear processes are proposed as one possible explanation for the higher RBE of 50 MV x-rays. Several studies of the biological response to ionizing radiation at high absorbed dose rates have been performed, often with conflicting results. With the aim of investigating whether a difference in effect between irradiation at high dose rates and at conventional dose rates could be verified, pulsed 50 MeV electrons from a clinical accelerator were used for experiments with ultra high dose rates (mean dose rate: 3.8 x 10^ Gy/s) in comparison to conventional dose rates (mean dose rate: 9.6 x 10^ Gy/s). V-79 cells were irradiated in vitro under both oxic and anoxic conditions. No significant difference in RBE or OER was observed for ultra high dose rates compared to conventional dose rates. A central issue in clinical radiobiological research is the prediction of responses to different radiation qualities. The choice of cell survival and dose-response model greatly influences the results; in this context the relationship between theory and model is emphasized. Generally, the interpretation of experimental data depends on the model. Cell survival models are systematized with respect to their relation to radiobiological theories of cell kill. The growing knowledge of biological, physical, and chemical mechanisms is reflected in the formulation of new models. This study shows that recent modelling has been more oriented towards the stochastic fluctuations connected to radiation energy deposition. This implies that the traditional cell survival models ought to be complemented by models of stochastic energy deposition processes at the intracellular level. / Pp. 1-44: summary; pp. 47-130: 5 papers / digitalisering@umu
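As a hedged illustration only (not taken from the thesis), the sketch below shows how an RBE at a fixed survival level is obtained as a dose ratio at equal effect, here using a linear-quadratic survival model; the alpha/beta parameters for the reference and test beams are hypothetical.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative linear-quadratic (LQ) survival model: S(D) = exp(-(alpha*D + beta*D^2)).
# The alpha/beta values below are hypothetical, chosen only to show how an RBE
# at a fixed survival level (e.g. S = 0.01, as in the abstract) would be computed.
def survival(dose, alpha, beta):
    return np.exp(-(alpha * dose + beta * dose**2))

def dose_for_survival(target_s, alpha, beta):
    # Find the dose giving the target surviving fraction.
    return brentq(lambda d: survival(d, alpha, beta) - target_s, 1e-6, 100.0)

ref = dict(alpha=0.20, beta=0.020)    # hypothetical reference-beam parameters
test = dict(alpha=0.22, beta=0.022)   # hypothetical test-beam parameters

d_ref = dose_for_survival(0.01, **ref)
d_test = dose_for_survival(0.01, **test)
print(f"RBE at S=0.01: {d_ref / d_test:.2f}")  # dose ratio at equal biological effect
```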
203

Regularisation and variable selection using penalized likelihood.

El anbari, Mohammed 14 December 2011 (has links) (PDF)
We are interested in variable selection in linear regression models. This research is motivated by recent developments in microarrays, proteomics, and brain imaging, among others. We study this problem from both frequentist and Bayesian viewpoints. In a frequentist framework, we propose methods to deal with the problem of variable selection when the number of variables is much larger than the sample size, with a possible presence of additional structure in the predictor variables, such as high correlations or an ordering between successive variables. The performance of the proposed methods is theoretically investigated; we prove that, under regularity conditions, the proposed estimators possess good statistical properties, such as sparsity oracle inequalities, variable selection consistency and asymptotic normality. In a Bayesian framework, we propose a global noninformative approach for Bayesian variable selection. In this thesis, we pay special attention to two calibration-free hierarchical Zellner's g-priors. The first one is the Jeffreys prior, which is not location invariant. The second one avoids this problem by considering only models that include at least one variable. The practical performance of the proposed methods is illustrated through numerical experiments on simulated and real-world datasets, with a comparison between Bayesian and frequentist approaches under a low-information constraint when the number of variables is almost equal to the number of observations.
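As a hedged illustration of the penalized-likelihood setting described above (not the estimators proposed in the thesis), the sketch below runs an l1-penalized regression on simulated data with far more predictors than observations; the data, dimensions, and use of scikit-learn's LassoCV are assumptions for the example.

```python
import numpy as np
from sklearn.linear_model import LassoCV

# A minimal sketch of penalized-likelihood variable selection when the number of
# predictors greatly exceeds the sample size. Data are simulated for illustration.
rng = np.random.default_rng(0)
n, p = 50, 200                             # n << p, as in the setting described above
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, -1.0, 2.5]     # only 5 truly active variables
y = X @ beta + 0.5 * rng.standard_normal(n)

model = LassoCV(cv=5).fit(X, y)            # l1-penalized least squares, penalty chosen by CV
selected = np.flatnonzero(model.coef_ != 0)
print("selected variables:", selected)
```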
204

Nonparametric estimation of the mixing distribution in mixed models with random intercepts and slopes

Saab, Rabih 24 April 2013 (has links)
Generalized linear mixed models (GLMMs) are widely used in statistical applications to model count and binary data. We consider the problem of nonparametric likelihood estimation of mixing distributions in GLMMs with multiple random effects. The log-likelihood to be maximized has the general form l(G) = Σ_i log ∫ f(y_i, γ) dG(γ), where f(., γ) is a parametric family of component densities, y_i is the ith observed response, and G is a mixing distribution function of the random-effects vector γ defined on Ω. The literature presents many algorithms for maximum likelihood estimation (MLE) of G in the univariate random-effect case, such as the EM algorithm (Laird, 1978), the intra-simplex direction method, ISDM (Lesperance and Kalbfleisch, 1992), and the vertex exchange method, VEM (Böhning, 1985). In this dissertation, the constrained Newton method (CNM) of Wang (2007), which fits GLMMs with random intercepts only, is extended to fit clustered datasets with multiple random effects. Owing to the general equivalence theorem from the geometry of mixture likelihoods (see Lindsay, 1995), many NPMLE algorithms, including CNM and ISDM, maximize the directional derivative of the log-likelihood to add potential support points to the mixing distribution G. Our method, Direct Search Directional Derivative (DSDD), uses a directional search method to find local maxima of the multi-dimensional directional derivative function. The DSDD's performance is investigated in GLMMs where f is a Bernoulli or Poisson distribution function. The algorithm is also extended to cover GLMMs with zero-inflated data. Goodness-of-fit (GOF) and selection methods for mixed models have been developed in the literature; however, their application in models with nonparametric random-effects distributions is vague and ad hoc. Some popular measures such as the Deviance Information Criterion (DIC), the conditional Akaike Information Criterion (cAIC) and R² statistics are potentially useful in this context. Additionally, some cross-validation goodness-of-fit methods popular in Bayesian applications, such as the conditional predictive ordinate (CPO) and numerical posterior predictive checks, can be applied with some minor modifications to suit the non-Bayesian approach. / Graduate / 0463 / rabihsaab@gmail.com
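As a hedged sketch of the quantities these NPMLE algorithms operate on (not the DSDD implementation itself), the code below evaluates the mixture log-likelihood l(G) and the directional derivative D(γ; G) = Σ_i f(y_i, γ)/f_G(y_i) − n for a Poisson mixture with a discrete mixing distribution; the data, support points, and grid search are hypothetical.

```python
import numpy as np
from scipy.stats import poisson

# Minimal sketch: log-likelihood of a discrete mixing distribution G, and the
# directional derivative used to find candidate support points to add to G.
y = np.array([0, 1, 1, 2, 3, 5, 8, 9])           # hypothetical observed counts
support = np.array([1.0, 6.0])                    # current support points of G
weights = np.array([0.5, 0.5])                    # current mixing proportions

def mixture_density(y, support, weights):
    # f_G(y_i) = sum_j w_j * f(y_i, gamma_j)
    return poisson.pmf(y[:, None], support[None, :]) @ weights

def log_likelihood(y, support, weights):
    return np.sum(np.log(mixture_density(y, support, weights)))

def directional_derivative(gamma, y, support, weights):
    # D(gamma; G) = sum_i f(y_i, gamma) / f_G(y_i) - n; positive maxima indicate
    # candidate support points to add to G.
    fG = mixture_density(y, support, weights)
    return np.sum(poisson.pmf(y, gamma) / fG) - len(y)

grid = np.linspace(0.1, 15.0, 300)                # crude grid search over gamma
D = np.array([directional_derivative(g, y, support, weights) for g in grid])
print("l(G) =", log_likelihood(y, support, weights))
print("best candidate support point:", grid[np.argmax(D)], "D =", D.max())
```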
206

Uncertainty Assessment of Hydrogeological Models Based on Information Theory / Bewertung der Unsicherheit hydrogeologischer Modelle unter Verwendung informationstheoretischer Grundlagen

De Aguinaga, José Guillermo 17 August 2011 (has links) (PDF)
There is a great deal of uncertainty in hydrogeological modeling. Overparameterized models increase uncertainty, since the information in the observations is distributed across all of the parameters. The present study proposes a new option to reduce this uncertainty. A way to achieve this goal is to select a model which provides good performance with as few calibrated parameters as possible (a parsimonious model) and to calibrate it using many sources of information. Akaike’s Information Criterion (AIC), proposed by Hirotugu Akaike in 1973, is a statistical-probabilistic criterion based on information theory which allows us to select a parsimonious model. AIC formulates the problem of parsimonious model selection as an optimization problem across a set of proposed conceptual models. AIC assessment is relatively new in groundwater modeling, and applying it with different sources of observations presents a challenge. In this dissertation, important findings in the application of AIC in hydrogeological modeling using different sources of observations are discussed. AIC is tested on groundwater models using three sets of synthetic data: hydraulic pressure, horizontal hydraulic conductivity, and tracer concentration. In the present study, the impact of the following factors is analyzed: the number of observations, the types of observations, and the order of calibrated parameters. These analyses reveal not only that the number of observations determines how complex a model can be, but also that the diversity of observations allows for further complexity in the parsimonious model. However, a truly parsimonious model was only achieved when the order of calibrated parameters was properly considered. This means that parameters which provide larger improvements in model fit should be considered first. The approach of obtaining a parsimonious model by applying AIC with different types of information was successfully applied to an unbiased lysimeter model using two different types of real data: evapotranspiration and seepage water. With this additional, independent model assessment it was possible to underpin the general validity of this AIC approach. / Hydrogeologische Modellierung ist von erheblicher Unsicherheit geprägt. Überparametrisierte Modelle erhöhen die Unsicherheit, da gemessene Informationen auf alle Parameter verteilt sind. Die vorliegende Arbeit schlägt einen neuen Ansatz vor, um diese Unsicherheit zu reduzieren. Eine Möglichkeit, um dieses Ziel zu erreichen, besteht darin, ein Modell auszuwählen, das ein gutes Ergebnis mit möglichst wenigen Parametern liefert („parsimonious model“), und es zu kalibrieren, indem viele Informationsquellen genutzt werden. Das 1973 von Hirotugu Akaike vorgeschlagene Informationskriterium, bekannt als Akaike-Informationskriterium (engl. Akaike’s Information Criterion; AIC), ist ein statistisches Wahrscheinlichkeitskriterium basierend auf der Informationstheorie, welches die Auswahl eines Modells mit möglichst wenigen Parametern erlaubt. AIC formuliert das Problem der Entscheidung für ein gering parametrisiertes Modell als ein modellübergreifendes Optimierungsproblem. Die Anwendung von AIC in der Grundwassermodellierung ist relativ neu und stellt eine Herausforderung in der Anwendung verschiedener Messquellen dar. In der vorliegenden Dissertation werden maßgebliche Forschungsergebnisse in der Anwendung des AIC in hydrogeologischer Modellierung unter Anwendung unterschiedlicher Messquellen diskutiert.
AIC wird an Grundwassermodellen getestet, bei denen drei synthetische Datensätze angewendet werden: Wasserstand, horizontale hydraulische Leitfähigkeit und Tracer-Konzentration. Die vorliegende Arbeit analysiert den Einfluss folgender Faktoren: Anzahl der Messungen, Arten der Messungen und Reihenfolge der kalibrierten Parameter. Diese Analysen machen nicht nur deutlich, dass die Anzahl der gemessenen Parameter die Komplexität eines Modells bestimmt, sondern auch, dass seine Diversität weitere Komplexität für gering parametrisierte Modelle erlaubt. Allerdings konnte ein solches Modell nur erreicht werden, wenn eine bestimmte Reihenfolge der kalibrierten Parameter berücksichtigt wurde. Folglich sollten zuerst jene Parameter in Betracht gezogen werden, die deutliche Verbesserungen in der Modellanpassung liefern. Der Ansatz, ein gering parametrisiertes Modell durch die Anwendung des AIC mit unterschiedlichen Informationsarten zu erhalten, wurde erfolgreich auf einen Lysimeterstandort übertragen. Dabei wurden zwei unterschiedliche reale Messwertarten genutzt: Evapotranspiration und Sickerwasser. Mit Hilfe dieser weiteren, unabhängigen Modellbewertung konnte die Gültigkeit dieses AIC-Ansatzes gezeigt werden.
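As a hedged illustration of the AIC-based selection described above (not a result from the dissertation), the sketch below compares hypothetical candidate models calibrated by least squares using AIC = n ln(RSS/n) + 2k, where k is the number of calibrated parameters; all numbers are invented for the example.

```python
import numpy as np

# Minimal sketch: AIC trades goodness of fit against the number of calibrated
# parameters k, and the model with the lowest AIC is preferred. The residual
# sums of squares and parameter counts below are hypothetical.
def aic_gaussian(rss, n_obs, k):
    # For least-squares calibration with Gaussian errors, up to a constant:
    # AIC = n * ln(RSS / n) + 2k
    return n_obs * np.log(rss / n_obs) + 2 * k

n_obs = 120
candidates = {
    "3-parameter model": aic_gaussian(rss=45.0, n_obs=n_obs, k=3),
    "7-parameter model": aic_gaussian(rss=41.0, n_obs=n_obs, k=7),
    "12-parameter model": aic_gaussian(rss=40.5, n_obs=n_obs, k=12),
}
for name, aic in sorted(candidates.items(), key=lambda kv: kv[1]):
    print(f"{name}: AIC = {aic:.1f}")
# Here the 7-parameter model wins: its extra parameters buy enough fit,
# whereas the 12-parameter model's small improvement does not justify the cost.
```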
207

資料採礦中之模型選取 / Model selection in data mining

孫莓婷 Unknown Date (has links)
有賴電腦的輔助,企業或組織內部所存放的資料量愈來愈多,加速資料量擴大的速度。但是大量的資料帶來的未必是大量的知識,即使擁有功能強大的資料庫系統,倘若不對資料作有意義的分析與推論,再大的資料庫也只是存放資料的空間。過去企業或組織只把資料庫當作查詢系統,並不知道可以藉由資料庫獲取有價值的資訊,而其中資料庫的內容完整與否更是重要。由於企業所擁有的資料庫未必健全,雖然擁有龐大資料庫,但是其中資訊未必足夠。我們認為利用資料庫加值方法:插補方法、抽樣方法、模型評估等步驟,以達到擴充資訊的目的,應該可以在不改變原始資料結構之下增加資料庫訊息。 本研究主要在比較不同階段的資料經過加值動作後,是否還能與原始資料結構一致。研究架構大致分成三個主要流程,包括迴歸模型、羅吉斯迴歸模型與決策樹C5.0。經過不同階段的資料加值後,我們所獲得的結論為在迴歸模型為主要流程之下,利用迴歸為主的插補方法可以使加值後的資料庫較貼近原始資料,若想進一步採用抽樣方法縮減資料量,系統抽樣所獲得的結果會比利用簡單隨機抽樣來的好。而在決策樹C5.0的主要流程下,以類神經演算法作為插補的主要方法,在提增資訊量的同時,也使插補後的資料更接近原始資料。關於羅吉斯迴歸模型,由於間斷型變數的類別比例差異過大,致使此流程無法達到有效結論。 經由實證分析可以瞭解不同的配模方式,表現較佳的資料庫加值技術也不盡相同,但是與未插補的資料庫相比較,利用資料庫加值技術的確可以增加資訊量,使加值後的虛擬資料庫更貼近原始資料結構。 / With the fast pace of advancement in computer technology, computers can store huge amounts of data. The abundance of data, without proper treatment, does not necessarily mean having valuable knowledge on hand; a large database system may merely serve as a means of storing and accessing data. Keeping this in mind, we focus on the integrity of the database. We adopt methods in which missing values are imputed and added while leaving the data structure unmodified. The interest of this paper is to find out, when the data are value-added under three different modelling flows, namely regression analysis, logistic regression analysis and the C5.0 decision tree, which of them yields a value-added database that most closely resembles the original one. The results obtained are as follows. Under the regression flow, regression-based imputation produced the database structure closest to the original one; and when a large amount of data needs to be reduced to a smaller sample, systematic sampling gives a better outcome than simple random sampling. Under the C5.0 decision tree flow, neural-network-based imputation increases the amount of information while bringing the imputed data closer to the original data. Finally, for the logistic regression flow, the class proportions of the discrete variables are severely imbalanced, making it difficult to reach a reasonable conclusion. These empirical analyses show that the best-performing value-adding technique differs across modelling approaches, but compared with the non-imputed database, value-adding techniques can indeed increase the amount of information and bring the value-added database closer to the original data structure.
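As a hedged illustration of two of the database value-adding steps discussed above (not the thesis's actual procedure), the sketch below imputes a column with a regression fitted on complete rows and then draws a systematic sample; the table, column names, and missingness pattern are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Minimal sketch: regression-based imputation of a column with missing values,
# followed by systematic sampling to reduce the table. Data are simulated.
rng = np.random.default_rng(1)
df = pd.DataFrame({"x1": rng.normal(size=1000), "x2": rng.normal(size=1000)})
df["y"] = 2.0 * df["x1"] - 1.0 * df["x2"] + rng.normal(scale=0.3, size=1000)
df.loc[rng.choice(1000, size=150, replace=False), "y"] = np.nan   # inject missingness

# Regression imputation: fit on complete rows, predict the missing entries.
complete = df.dropna()
model = LinearRegression().fit(complete[["x1", "x2"]], complete["y"])
missing = df["y"].isna()
df.loc[missing, "y"] = model.predict(df.loc[missing, ["x1", "x2"]])

# Systematic sampling: every k-th row after a random start.
k = 10
start = rng.integers(k)
systematic_sample = df.iloc[start::k]
print(len(systematic_sample), "rows sampled systematically")
```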
208

Primal dual pursuit: a homotopy based algorithm for the Dantzig selector

Asif, Muhammad Salman 10 July 2008 (has links)
Consider the following system model y = Ax + e, where x is an n-dimensional sparse signal, y is the measurement vector in a much lower dimension m, A is the measurement matrix, and e is the error in our measurements. The Dantzig selector estimates x by solving the following optimization problem: minimize || x ||₁ subject to || A'(Ax - y) ||∞ ≤ ε (DS). This is a convex program that can be recast as a linear program and solved using any modern optimization method, e.g., interior point methods. We propose a fast and efficient scheme for solving the Dantzig selector (DS), which we call "Primal-Dual pursuit". This algorithm can be thought of as a "primal-dual homotopy" approach to solving (DS). It computes the solution to (DS) for a range of successively relaxed problems, starting with a large artificial ε and moving towards the desired value. Our algorithm iteratively updates the primal and dual supports as ε is reduced to the desired value, which gives the final solution. The homotopy path that the solution of (DS) traces as ε varies is piecewise linear. At certain critical values of ε along this path, either new elements enter the support of the signal or existing elements leave the support. We derive the optimality and feasibility conditions that are used to update the solution at these critical points. We also present a detailed analysis of primal-dual pursuit for sparse signals in the noiseless case. We show that if the signal is S-sparse, then we can find all its S elements in exactly S steps using about S² log n random measurements, with very high probability.
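As a hedged sketch of the linear-program recast mentioned above (not the primal-dual pursuit algorithm), the code below solves the Dantzig selector, minimize ||x||₁ subject to ||A'(Ax - y)||∞ ≤ ε, with a generic LP solver; the problem sizes, random matrix A, and sparsity level are assumptions for the example.

```python
import numpy as np
from scipy.optimize import linprog

# Standard LP recast of the Dantzig selector: introduce u >= |x| and solve for
# z = [x, u]. Problem sizes and data below are hypothetical.
rng = np.random.default_rng(0)
m, n, eps = 40, 100, 0.1
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
y = A @ x_true

AtA, Aty = A.T @ A, A.T @ y
I = np.eye(n)
Z = np.zeros((n, n))
c = np.concatenate([np.zeros(n), np.ones(n)])   # minimize sum(u) = ||x||_1 proxy
A_ub = np.block([[ I, -I],                       #  x - u <= 0
                 [-I, -I],                       # -x - u <= 0
                 [ AtA, Z],                      #  A'(Ax - y) <= eps
                 [-AtA, Z]])                     # -A'(Ax - y) <= eps
b_ub = np.concatenate([np.zeros(n), np.zeros(n), eps + Aty, eps - Aty])
bounds = [(None, None)] * n + [(0, None)] * n    # x free, u >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
x_hat = res.x[:n]
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-6))
```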
209

Méthodes d'inférence statistique pour champs de Gibbs / Statistical inference methods for Gibbs random fields

Stoehr, Julien 29 October 2015 (has links)
La constante de normalisation des champs de Markov se présente sous la forme d'une intégrale hautement multidimensionnelle et ne peut être calculée par des méthodes analytiques ou numériques standard. Cela constitue une difficulté majeure pour l'estimation des paramètres ou la sélection de modèle. Pour approcher la loi a posteriori des paramètres lorsque le champ de Markov est observé, nous remplaçons la vraisemblance par une vraisemblance composite, c'est à dire un produit de lois marginales ou conditionnelles du modèle, peu coûteuses à calculer. Nous proposons une correction de la vraisemblance composite basée sur une modification de la courbure au maximum afin de ne pas sous-estimer la variance de la loi a posteriori. Ensuite, nous proposons de choisir entre différents modèles de champs de Markov cachés avec des méthodes bayésiennes approchées (ABC, Approximate Bayesian Computation), qui comparent les données observées à de nombreuses simulations de Monte-Carlo au travers de statistiques résumées. Afin de pallier l'absence de statistiques exhaustives pour ce choix de modèle, des statistiques résumées basées sur les composantes connexes des graphes de dépendance des modèles en compétition sont introduites. Leur efficacité est étudiée à l'aide d'un taux d'erreur conditionnel original mesurant la puissance locale de ces statistiques à discriminer les modèles. Nous montrons alors que nous pouvons diminuer sensiblement le nombre de simulations requises tout en améliorant la qualité de décision, et utilisons cette erreur locale pour construire une procédure ABC qui adapte le vecteur de statistiques résumés aux données observées. Enfin, pour contourner le calcul impossible de la vraisemblance dans le critère BIC (Bayesian Information Criterion) de choix de modèle, nous étendons les approches champs moyens en substituant la vraisemblance par des produits de distributions de vecteurs aléatoires, à savoir des blocs du champ. Le critère BLIC (Block Likelihood Information Criterion), que nous en déduisons, permet de répondre à des questions de choix de modèle plus large que les méthodes ABC, en particulier le choix conjoint de la structure de dépendance et du nombre d'états latents. Nous étudions donc les performances de BLIC dans une optique de segmentation d'images. / Due to the Markovian dependence structure, the normalizing constant of Markov random fields cannot be computed with standard analytical or numerical methods. This forms a central issue in terms of parameter inference or model selection, as the computation of the likelihood is an integral part of the procedure. When the Markov random field is directly observed, we propose to estimate the posterior distribution of model parameters by replacing the likelihood with a composite likelihood, that is a product of marginal or conditional distributions of the model that are easy to compute. Our first contribution is to correct the posterior distribution resulting from using a misspecified likelihood function by modifying the curvature at the mode, in order to avoid overly precise posterior parameters. In a second part, we suggest performing model selection between hidden Markov random fields with approximate Bayesian computation (ABC) algorithms that compare the observed data and many Monte Carlo simulations through summary statistics. To make up for the absence of sufficient statistics with regard to this model choice, we introduce summary statistics based on the connected components of the dependency graph of each model in competition.
We assess their efficiency using a novel conditional misclassification rate that evaluates their local power to discriminate between models. We set up an efficient procedure that reduces the computational cost while improving the quality of decision, and using this local error rate we build an ABC procedure that adapts the summary statistics to the observed data. In a final part, in order to circumvent the computation of the intractable likelihood in the Bayesian Information Criterion (BIC), we extend the mean-field approaches by replacing the likelihood with a product of distributions of random vectors, namely blocks of the lattice. On that basis, we derive BLIC (Block Likelihood Information Criterion), which answers model-choice questions of a wider scope than ABC, such as the joint selection of the dependency structure and the number of latent states. We study the performance of BLIC in terms of image segmentation.
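As a hedged sketch of the ABC model-choice idea described above (not the thesis's procedure for hidden Markov random fields), the code below performs rejection-based ABC between two toy models, keeping the simulations whose summary statistics fall closest to the observed summary; the models, summary statistics, and acceptance rate are hypothetical.

```python
import numpy as np

# Minimal sketch of ABC model choice: simulate from each candidate model, keep the
# simulations whose summaries are closest to the observed summary, and read off the
# model frequencies among the accepted simulations as posterior model probabilities.
rng = np.random.default_rng(0)
n_obs = 50
observed = rng.poisson(3.8, n_obs)                   # stand-in for the observed field
s_obs = np.array([observed.mean(), observed.var()])  # summary statistics

def simulate(model, size):
    lam = 2.0 if model == 0 else 4.0                 # two toy candidate models
    return rng.poisson(lam, size)

n_sim, models, distances = 20000, [], []
for _ in range(n_sim):
    m = rng.integers(2)                              # uniform prior over models
    sim = simulate(m, n_obs)
    s_sim = np.array([sim.mean(), sim.var()])
    models.append(m)
    distances.append(np.linalg.norm(s_sim - s_obs))

models, distances = np.array(models), np.array(distances)
accepted = models[np.argsort(distances)[: n_sim // 100]]   # keep the closest 1%
print("posterior P(model 0):", np.mean(accepted == 0))
print("posterior P(model 1):", np.mean(accepted == 1))
```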
210

Descobrindo modelos de previsão para a inflação brasileira: uma análise a partir do algoritmo Autometrics / Discovering forecasting models for Brazilian inflation: an analysis based on the Autometrics algorithm

Silva, Anderson Moriya 29 January 2016 (has links)
O presente trabalho tem como objetivo avaliar a capacidade preditiva de modelos econométricos de séries de tempo baseados em indicadores macroeconômicos na previsão da inflação brasileira (IPCA). Os modelos serão ajustados utilizando dados dentro da amostra e suas projeções ex-post serão acumuladas de um a doze meses à frente. As previsões serão comparadas a de modelos univariados como autoregressivo de primeira ordem - AR(1) - que nesse estudo será o benchmark escolhido. O período da amostra vai de janeiro de 2000 até agosto de 2015 para ajuste dos modelos e posterior avaliação. Ao todo foram avaliadas 1170 diferentes variáveis econômicas a cada período a ser projetado, procurando o melhor conjunto preditores para cada ponto no tempo. Utilizou-se o algoritmo Autometrics para a seleção de modelos. A comparação dos modelos foi feita através do Model Confidence Set desenvolvido por Hansen, Lunde e Nason (2010). Os resultados obtidos nesse ensaio apontam evidências de ganhos de desempenho dos modelos multivariados para períodos posteriores a 1 passo à frente. / The present work aims to evaluate the predictive ability of econometric time-series models based on macroeconomic indicators for forecasting Brazilian inflation (IPCA). The models are fitted in-sample and their ex-post forecasts are accumulated from one to twelve steps ahead. The forecasts are compared with those of univariate models such as the first-order autoregressive model, AR(1), which is the chosen benchmark. The sample period runs from January 2000 to August 2015 for model fitting and subsequent evaluation. In all, 1,170 different economic variables were evaluated for each forecast period, searching for the best set of predictors at each point in time. The Autometrics algorithm was used for model selection. The models were compared using the Model Confidence Set developed by Hansen, Lunde and Nason (2010). The results obtained in this essay point to evidence of performance gains for the multivariate models at horizons beyond one step ahead.
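As a hedged sketch of the AR(1) benchmark mentioned above (not the Autometrics procedure, and not IPCA data), the code below fits y_t = c + φ·y_{t-1} + e_t by ordinary least squares on a simulated monthly series and accumulates iterated forecasts one to twelve steps ahead.

```python
import numpy as np

# Minimal sketch of an AR(1) benchmark: fit by OLS on lagged values, then iterate
# the fitted equation to produce and accumulate 1- to 12-step-ahead forecasts.
# The inflation-like series below is simulated, not IPCA data.
rng = np.random.default_rng(0)
T = 188                                   # roughly Jan/2000 - Aug/2015 in months
y = np.empty(T)
y[0] = 0.5
for t in range(1, T):                     # simulate an AR(1)-like monthly series
    y[t] = 0.2 + 0.6 * y[t - 1] + 0.2 * rng.standard_normal()

# OLS fit of y_t on a constant and y_{t-1}
X = np.column_stack([np.ones(T - 1), y[:-1]])
c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]

# Iterated h-step-ahead forecasts, h = 1..12, and their accumulation
forecasts = []
last = y[-1]
for h in range(12):
    last = c + phi * last
    forecasts.append(last)
print("12-month accumulated forecast:", sum(forecasts))
```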
