11

Modèles log-bilinéaires en sciences actuarielles, avec applications en mortalité prospective et triangles IBNR

Delwarde, Antoine 29 March 2006 (has links)
This thesis explores several types of log-bilinear models in actuarial science. The starting point is the Lee-Carter model, used for mortality projection problems. Several variants are developed, notably the Poisson log-bilinear model. The introduction of explanatory variables is also analysed. Finally, an attempt is made to carry these models over to the case of IBNR triangles.
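The classical Lee-Carter model decomposes the matrix of log death rates as log m(x,t) = a_x + b_x k_t. A minimal sketch of the standard SVD-based estimation follows (function names and the synthetic data are illustrative, not from the thesis):

```python
import numpy as np

def fit_lee_carter(m):
    """Fit log m(x,t) = a_x + b_x * k_t by SVD (classical Lee-Carter).

    m: (ages x years) matrix of positive central death rates.
    Returns a_x, b_x (normalised to sum 1) and k_t."""
    logm = np.log(m)
    a = logm.mean(axis=1)                    # a_x: average log-rate per age
    Z = logm - a[:, None]                    # centred log-rates
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    b = U[:, 0]
    k = s[0] * Vt[0]
    scale = b.sum()                          # identifiability: sum b_x = 1
    return a, b / scale, k * scale

# synthetic example: rates generated exactly from the model
ages, years = 5, 8
a_true = np.linspace(-5, -2, ages)
b_true = np.full(ages, 1 / ages)
k_true = np.linspace(3, -3, years)           # sums to zero
m = np.exp(a_true[:, None] + np.outer(b_true, k_true))
a, b, k = fit_lee_carter(m)
```

On noise-free data generated from the model, the fit recovers the parameters exactly up to the usual sum-one/sum-zero normalisation.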
12

The Nernst-Planck-Poisson Reactive Transport Model for Concrete Carbonation and Chloride Diffusion in Carbonated and Non-carbonated Concrete

Alsheet, Feras January 2020 (has links)
The intrusion of chlorides and carbon dioxide into a reinforced concrete (RC) structure can initiate corrosion of the reinforcing steel, which, due to its expansive nature, can damage the structure and adversely affect its serviceability and safety. Corrosion will initiate if, at the steel surface, the free chloride concentration in the concrete exceeds a defined limit or its pH falls below a critical level. Hence, determining the time to reach these critical limits is key to assessing the durability and service life of RC structures. Due to the ionic nature of the chlorides and of the bicarbonate anion (HCO3-) formed by the CO2 in the multi-ionic pore solution, the transport of both species is driven by Fickian diffusion combined with electromigration and ionic activity, which can be expressed mathematically by the Nernst-Planck-Poisson (NPP) equations. For a complete representation of the phenomenon, however, the NPP equations must be supplemented by the relevant chemical equilibrium equations to ensure chemical balance among the various species within the concrete pore solution. The combination of the NPP equations with the chemical equilibrium equations is often termed the NPP reactive transport model. In this study, such a model is developed, coded on the MATLAB platform, validated against available experimental data, and applied to analyze time-dependent concrete carbonation and the movement of chlorides in carbonated and non-carbonated concrete. The results of these analyses can be used to predict the time to corrosion initiation. The transient one-dimensional governing NPP equations are solved numerically using the Galerkin finite element formulation in space and the backward (implicit) Euler scheme in time. The associated system of chemical equilibrium equations accounts for the key homogeneous and heterogeneous chemical reactions that take place in the concrete during carbonation and chloride transport.
At each stage of the analysis, the effects of these reactions on the chemical composition and pH of the pore solution, the chloride binding capacity of the cement, the concrete porosity, and the volumetric ratio of the hydrated cement solids are determined. The study demonstrates that, given accurate input data, the NPP reactive transport model developed here can accurately simulate the complex transport of chlorides and CO2 in concrete as a reactive porous medium, and the ensuing physical and chemical changes caused by the reaction of these species with the pore solution and the other cement hydration products. This conclusion is supported by the good agreement between the results of the current analyses and the corresponding available experimental data from physical tests involving carbonation and chloride diffusion in non-carbonated and carbonated concrete. / Thesis / Doctor of Philosophy (PhD)
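As a rough illustration of the time stepping used in such transport models, the sketch below applies the backward (implicit) Euler scheme to the purely Fickian (diffusion-only) part of one-dimensional chloride ingress, on a finite-difference rather than finite-element grid. The electromigration, ionic-activity and reaction terms of the full NPP reactive transport model are omitted, and all parameter values are illustrative:

```python
import numpy as np

def chloride_profile(D, c_s, L, nx, dt, nt):
    """Backward (implicit) Euler for c_t = D c_xx on (0, L),
    with c(0, t) = c_s (surface concentration) and c(L, t) = 0.

    Only the Fickian diffusion term is kept; this is NOT the full
    NPP reactive transport model of the thesis."""
    dx = L / (nx - 1)
    r = D * dt / dx**2
    n = nx - 2                                  # interior unknowns
    A = np.diag(np.full(n, 1 + 2 * r)) \
        + np.diag(np.full(n - 1, -r), 1) \
        + np.diag(np.full(n - 1, -r), -1)
    c = np.zeros(nx)
    c[0] = c_s
    for _ in range(nt):
        rhs = c[1:-1].copy()
        rhs[0] += r * c[0]                      # surface boundary contribution
        rhs[-1] += r * c[-1]                    # far boundary stays at zero
        c[1:-1] = np.linalg.solve(A, rhs)
    return c

# illustrative values: D ~ 1e-11 m^2/s, 5 cm cover, daily steps for one year
c = chloride_profile(D=1e-11, c_s=0.5, L=0.05, nx=51, dt=86400.0, nt=365)
```

The implicit scheme is unconditionally stable, so the daily time step needs no CFL-type restriction; the resulting profile decreases monotonically from the surface value toward zero.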
13

Modelo destrutivo com variável terminal em experimentos quimiopreventivos de tumores em animais

Zavaleta, Katherine Elizabeth Coaguila 12 April 2012 (has links)
The chemical induction of carcinogens in chemopreventive animal experiments is increasingly frequent in biological research. The purpose of these experiments is to evaluate the effect of a particular treatment on the tumor incidence rate in animals. In this work, the number of promoted tumors per animal is modeled parametrically following the suggestions of Kokoska (1987) and Freedman et al. (1993). These chemopreventive experiments are studied in the context of the destructive model proposed by Rodrigues et al. (2010), with a terminal variable that conditions or censors the experiment at the time of the animal's death. Since data in this field are subject to an excess of zeros (Freedman et al. (1993)), we propose for the number of promoted tumors a negative binomial (NB) distribution, a zero-inflated Poisson (ZIP) distribution, and a zero-inflated negative binomial (ZINB) distribution. Model selection is carried out through the likelihood ratio test and the AIC and BIC criteria. The parameters are estimated by maximum likelihood, and simulation studies are also performed. As future work, the Bayesian methodology is suggested as an alternative to maximum likelihood via the EM algorithm.
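The zero-inflated Poisson model mentioned above can be fitted by the EM algorithm, treating the indicator of a structural zero as missing data. A hedged numpy sketch with an AIC comparison against a plain Poisson fit follows (the simulated data and function names are illustrative, not from the thesis):

```python
import numpy as np
from math import lgamma

def fit_zip_em(y, n_iter=200):
    """EM for a zero-inflated Poisson: with probability pi the count is a
    structural zero, otherwise it is Poisson(lam)."""
    y = np.asarray(y, dtype=float)
    pi, lam = 0.5, max(y.mean(), 1e-8)
    for _ in range(n_iter):
        # E-step: posterior probability that each observed zero is structural
        z = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
        # M-step: update the mixture weight and the Poisson mean
        pi = z.mean()
        lam = y.sum() / (len(y) - z.sum())
    return pi, lam

def zip_loglik(y, pi, lam):
    y = np.asarray(y, dtype=float)
    ll = np.where(
        y == 0,
        np.log(pi + (1 - pi) * np.exp(-lam)),
        np.log(1 - pi) - lam + y * np.log(lam)
        - np.array([lgamma(v + 1) for v in y]),
    )
    return ll.sum()

rng = np.random.default_rng(1)
n = 2000
structural = rng.random(n) < 0.4                # true pi = 0.4
y = np.where(structural, 0, rng.poisson(3.0, n))
pi_hat, lam_hat = fit_zip_em(y)

# AIC comparison: ZIP (2 parameters) vs plain Poisson (pi = 0, 1 parameter)
ll_zip = zip_loglik(y, pi_hat, lam_hat)
ll_pois = zip_loglik(y, 0.0, y.mean())
aic_zip, aic_pois = 2 * 2 - 2 * ll_zip, 2 * 1 - 2 * ll_pois
```

On this zero-inflated sample, the ZIP fit recovers the inflation probability and the Poisson mean, and its AIC is lower than that of the plain Poisson model.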
14

A class of bivariate Erlang distributions and ruin probabilities in multivariate risk models

Groparu-Cojocaru, Ionica 11 1900 (has links)
In this contribution, we introduce a new class of bivariate distributions of Marshall-Olkin type, called bivariate Erlang distributions. The Laplace transform, product moments and conditional densities are derived, and potential applications of bivariate Erlang distributions in life insurance and finance are considered. The maximum likelihood estimators of the parameters are computed by the Expectation-Maximisation algorithm. Further, our research project is devoted to the study of multivariate risk processes, which may be useful in analyzing ruin problems for insurance companies with a portfolio of dependent classes of business. We apply results from the theory of piecewise deterministic Markov processes to derive the exponential martingales needed to establish computable upper bounds on the ruin probabilities, as their exact expressions are intractable.
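One simple way to obtain a positively dependent bivariate distribution with Erlang margins is a common-shock construction; the sketch below is illustrative only and is not claimed to be the Marshall-Olkin-type family actually studied in the thesis:

```python
import numpy as np

def bivariate_erlang_common_shock(rng, n, k0, k1, k2, rate):
    """Sample (X, Y) = (S1 + S0, S2 + S0), where S_j is Erlang(k_j, rate),
    i.e. a sum of k_j i.i.d. exponentials.  The shared component S0 induces
    positive dependence with Cov(X, Y) = Var(S0) = k0 / rate^2.
    (Illustrative common-shock construction only.)"""
    s0 = rng.gamma(shape=k0, scale=1 / rate, size=n)   # Erlang = integer-shape gamma
    s1 = rng.gamma(shape=k1, scale=1 / rate, size=n)
    s2 = rng.gamma(shape=k2, scale=1 / rate, size=n)
    return s1 + s0, s2 + s0

rng = np.random.default_rng(2)
x, y = bivariate_erlang_common_shock(rng, n=200_000, k0=2, k1=3, k2=1, rate=0.5)
# theoretical moments for these parameters:
# E[X] = (k1 + k0)/rate = 10,  E[Y] = (k2 + k0)/rate = 6,  Cov(X, Y) = k0/rate^2 = 8
```

The empirical means and covariance of a large sample match these closed-form values, which makes the construction easy to sanity-check.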
15

On the quasi-optimal convergence of adaptive nonconforming finite element methods in three examples

Rabus, Hella 23 May 2014 (has links)
Various applications in computational fluid dynamics and solid mechanics motivate the development of reliable and efficient adaptive algorithms for nonstandard finite element methods (FEMs). To reduce the number of degrees of freedom, adaptive algorithms mark only a selection of finite element domains for refinement on each level. Since some element domains may stay relatively coarse, the analysis of convergence, and more importantly of optimality, requires arguments beyond an a priori error analysis. Adaptive algorithms based on collective marking use a (total) error estimator as refinement indicator. In separate marking strategies, the (total) error estimator is split into a volume term and an error estimator term. Since the volume term is independent of the discrete solution, poor data approximation can be remedied by a possibly high degree of local mesh refinement; otherwise, a standard level-oriented mesh refinement based on the error estimator term is performed. This observation leads to a natural adaptive algorithm based on separate marking, which is analysed in this thesis. The numerical experiments in this thesis provide strong evidence of the quasi-optimal convergence of the presented adaptive algorithm based on separate marking for all three model problems. Furthermore, its flexibility (in particular, the free steering parameter for data approximation) allows a sufficient and, at the same time, optimal data approximation within just a few levels of the adaptive scheme. This thesis adapts standard arguments for optimal convergence to adaptive algorithms with separate marking and a possibly high degree of local mesh refinement, and proves quasi-optimality following a general methodology for three model problems: the Poisson model problem, the pure displacement problem of linear elasticity, and the Stokes equations.
17

Langevinized Ensemble Kalman Filter for Large-Scale Dynamic Systems

Peiyi Zhang (11166777) 26 July 2021 (has links)
The Ensemble Kalman filter (EnKF) has achieved great success in data assimilation in the atmospheric and oceanic sciences, but its failure to converge to the correct filtering distribution precludes its use for uncertainty quantification. Other existing methods, such as the particle filter or the sequential importance sampler, do not scale well with the dimension of the system and the sample size of the datasets. This dissertation addresses these difficulties in a coherent way.

In the first part of the dissertation, we reformulate the EnKF under the framework of Langevin dynamics, which leads to a new particle filtering algorithm, the so-called Langevinized EnKF (LEnKF). The LEnKF algorithm inherits the forecast-analysis procedure from the EnKF and the use of mini-batch data from stochastic gradient Langevin-type algorithms, which make it scalable with respect to both the dimension and the sample size. We prove that the LEnKF converges to the correct filtering distribution in Wasserstein distance under the big-data scenario in which the dynamic system consists of a large number of stages and a large number of samples is observed at each stage, so it can be used for uncertainty quantification. We also reformulate the Bayesian inverse problem as a dynamic state estimation problem based on subsampling and the Langevin diffusion process. We illustrate the performance of the LEnKF on a variety of examples, including the Lorenz-96 model, high-dimensional variable selection, Bayesian deep learning, and Long Short-Term Memory (LSTM) network learning with dynamic data.

In the second part of the dissertation, we focus on two extensions of the LEnKF algorithm. Like the EnKF, the LEnKF was developed for Gaussian dynamic systems containing no unknown parameters. We propose the stochastic approximation LEnKF (SA-LEnKF) for simultaneously estimating the states and parameters of dynamic systems, where the parameters are estimated on the fly from the state variables simulated by the LEnKF under the framework of stochastic approximation. Under mild conditions, we prove the consistency of the resulting parameter estimator and the ergodicity of the SA-LEnKF. For non-Gaussian dynamic systems, we extend the LEnKF (Extended LEnKF) by introducing a latent Gaussian measurement variable into the dynamic system. Both extensions inherit the scalability of the LEnKF with respect to dimension and sample size. Numerical results indicate that they outperform other existing methods in both state/parameter estimation and uncertainty quantification.
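For context, the classical (stochastic) EnKF analysis step that the LEnKF builds on can be sketched as follows; this is the standard perturbed-observation update, not the Langevinized algorithm itself, and the toy setup is illustrative:

```python
import numpy as np

def enkf_analysis(ens, y, H, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.

    ens: (m, d) forecast ensemble; y: (p,) observation;
    H: (p, d) linear observation operator; R: (p, p) obs-error covariance."""
    m = ens.shape[0]
    P = np.cov(ens, rowvar=False)                         # ensemble covariance
    S = H @ P @ H.T + R                                   # innovation covariance
    K = P @ H.T @ np.linalg.solve(S, np.eye(S.shape[0]))  # Kalman gain (d, p)
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=m)
    return ens + (y_pert - ens @ H.T) @ K.T

rng = np.random.default_rng(3)
d, p, m = 2, 1, 500
H = np.array([[1.0, 0.0]])                 # observe the first component only
R = np.array([[0.25]])
truth = np.array([1.0, -1.0])
ens = rng.multivariate_normal(truth + 0.8, np.eye(d), size=m)  # biased forecast
y = H @ truth                              # noise-free observation for the demo
post = enkf_analysis(ens, y, H, R, rng)
```

The analysis pulls the observed component of the ensemble toward the observation and shrinks its spread, which is the behaviour the forecast-analysis procedure of the LEnKF inherits.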
18

Measures of University Research Output

Zharova, Alona 14 February 2018 (has links)
New Public Management helps universities and research institutions perform in a highly competitive research environment. Decision making under uncertainty, for example the distribution of funds for research needs and purposes, urges research policy makers and university managers to understand the relationships between the dimensions of research performance and the resulting or incoming grants. It is therefore important to accurately reflect the variables of scientific knowledge production at the level of individuals, research groups and universities. Chapter 2 of this thesis presents an analysis at the level of individuals. The data are taken from the three most widely used ranking systems in the economic and business sciences in German-speaking countries: Handelsblatt (HB), Research Papers in Economics (RePEc, here RP) and Google Scholar (GS). It proposes a framework for collating ranking data for comparison purposes. Chapter 3 provides empirical evidence at the level of research groups, using data from a Collaborative Research Center (CRC) on financial inputs and research output from 2005 to 2016. First, suitable performance indicators are discussed. Second, the main properties of the data are described using visualization techniques. Finally, a time-fixed-effects panel data model and a fixed-effects Poisson model are used to analyze the interdependency between financial inputs and research outputs. Chapter 4 examines the interdependence structure between third-party expenses (TPE), publications, citations and academic age, using university data on individual performance in different scientific areas. A panel vector autoregressive model with exogenous variables (PVARX), impulse response functions and a forecast error variance decomposition help to capture the relationships in the system. Finally, the chapter addresses possible implications for policy and decision making and proposes recommendations for university research management.
19

排列檢定法應用於空間資料之比較 / Permutation test on spatial comparison

王信忠, Wang, Hsin-Chung Unknown Date (has links)
This thesis proposes the relabel (Fisher's) permutation test, inspired by Fisher's exact test, to compare the distributions of two (fishery) data sets located on a two-dimensional lattice. Using the exchangeability property, we show that the permutation test given by Syrjala (1996) is not exact, whereas our relabel permutation test is exact and, additionally, more powerful. This thesis also studies two spatial models: the spatial multinomial-relative-log-normal model and the spatial Poisson-relative-log-normal model. Both models not only exhibit the skewness, long right tail and high proportion of zero catches that usually appear in fishery data, but can also describe the aggregative behaviour of species, which may arise from their nature or from environmental factors such as food and temperature, so that appropriate inference can be made.
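The idea of a relabel permutation test, in which the pair of observations at each lattice point is swapped independently under the null hypothesis of exchangeability, can be sketched as follows. The test statistic here is a simplified cumulative-surface distance in the spirit of Syrjala's statistic, not the exact statistic of the thesis, and the simulated survey data are illustrative:

```python
import numpy as np

def relabel_permutation_test(d1, d2, n_perm=2000, rng=None):
    """Relabel permutation test for equality of two spatial distributions
    observed at the same locations.  Under H0 the pair at each location is
    exchangeable, so each permutation swaps the two surveys independently
    per cell.  Statistic: squared distance between the cumulative sums of
    the normalised density surfaces (a simplified Syrjala-type statistic)."""
    rng = np.random.default_rng() if rng is None else rng
    d1 = np.asarray(d1, dtype=float).ravel()
    d2 = np.asarray(d2, dtype=float).ravel()

    def stat(a, b):
        return np.sum((np.cumsum(a / a.sum()) - np.cumsum(b / b.sum())) ** 2)

    observed = stat(d1, d2)
    count = 0
    for _ in range(n_perm):
        swap = rng.random(d1.size) < 0.5        # independent relabel per cell
        a = np.where(swap, d2, d1)
        b = np.where(swap, d1, d2)
        count += stat(a, b) >= observed
    return observed, (count + 1) / (n_perm + 1)

rng = np.random.default_rng(4)
x = np.arange(100)
left = np.exp(-((x - 30) / 15.0) ** 2)          # abundance peak on the left
right = np.exp(-((x - 70) / 15.0) ** 2)         # abundance peak on the right
s1 = rng.poisson(50 * left)                     # survey 1, left pattern
s2 = rng.poisson(50 * left)                     # survey 2, same pattern
s3 = rng.poisson(50 * right)                    # survey 3, different pattern
_, p_same = relabel_permutation_test(s1, s2, rng=rng)
_, p_diff = relabel_permutation_test(s1, s3, rng=rng)
```

Surveys drawn from the same spatial intensity give a large p-value, while surveys with shifted abundance peaks are clearly rejected.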
20

Adaptive least-squares finite element method with optimal convergence rates

Bringmann, Philipp 29 January 2021 (has links)
Least-squares finite element methods (LSFEMs) are based on the minimisation of a least-squares functional consisting of the squared norms of the residuals of a first-order system of partial differential equations. This functional provides a reliable and efficient built-in a posteriori error estimator and allows for adaptive mesh refinement. The established convergence analysis with rates for adaptive algorithms, as summarised in the axiomatic framework by Carstensen, Feischl, Page, and Praetorius (Comp. Math. Appl., 67(6), 2014), fails for two reasons. First, the least-squares estimator lacks prefactors in terms of the mesh size, which seemingly prevents a reduction under mesh refinement. Second, the first-order divergence LSFEMs measure the flux or stress errors in the H(div) norm and thus involve a data resolution error of the right-hand side f. These difficulties led to a twofold paradigm shift in the convergence analysis with rates for adaptive LSFEMs in Carstensen and Park (SIAM J. Numer. Anal., 53(1), 2015) for the lowest-order discretisation of the 2D Poisson model problem with homogeneous Dirichlet boundary conditions. There, a novel explicit residual-based a posteriori error estimator establishes the reduction property, and a separate marking strategy in the adaptive algorithm ensures sufficient data resolution. This thesis generalises these techniques to three linear model problems, namely the Poisson problem, the Stokes equations, and the linear elasticity problem. It verifies the axioms of adaptivity with separate marking by Carstensen and Rabus (SIAM J. Numer. Anal., 55(6), 2017) in three spatial dimensions. The analysis covers discretisations of arbitrary polynomial degree and inhomogeneous Dirichlet and Neumann boundary conditions. Numerical experiments confirm the theoretically proven optimal convergence rates of the h-adaptive algorithm.
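The separate-marking decision, which switches between bulk marking on the estimator term and marking driven by the data-approximation term, can be sketched as follows. Parameter names and the thresholding variant for the data branch are illustrative simplifications, not the precise algorithm of the thesis:

```python
import numpy as np

def separate_marking(eta2, mu2, kappa=1.0, theta=0.5, rho=0.5):
    """Separate marking: if the data-approximation term mu^2 is dominated
    by the estimator term (mu2 <= kappa * eta2 in total), perform Doerfler
    (bulk) marking on the local estimator contributions; otherwise mark the
    cells carrying the largest data oscillations (a simplified stand-in
    for the data-resolution step)."""
    if mu2.sum() <= kappa * eta2.sum():
        order = np.argsort(eta2)[::-1]          # largest contributions first
        csum = np.cumsum(eta2[order])
        m = np.searchsorted(csum, theta * eta2.sum()) + 1
        return order[:m], "estimator"
    order = np.argsort(mu2)[::-1]
    csum = np.cumsum(mu2[order])
    m = np.searchsorted(csum, rho * mu2.sum()) + 1
    return order[:m], "data"

eta2 = np.array([0.40, 0.30, 0.15, 0.10, 0.05])
mu2_small = np.array([0.01, 0.02, 0.01, 0.01, 0.01])   # data term dominated
mu2_big = np.array([1.00, 0.20, 0.30, 0.25, 0.25])     # data term dominates
marked_est, which_est = separate_marking(eta2, mu2_small)
marked_dat, which_dat = separate_marking(eta2, mu2_big)
```

With the small data term, Doerfler marking selects the minimal set of cells carrying half of the total estimator; with the large data term, the algorithm instead marks the cells responsible for the data oscillation.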
