
Modelling children under five mortality in South Africa using copula and frailty survival models

Mulaudzi, Tshilidzi Benedicta, January 2022
Thesis (Ph.D. (Statistics)) -- University of Limpopo, 2022 / This thesis applies frailty and copula models to an under-five child mortality data set from South Africa. The main purpose of the study was to apply sample-splitting techniques in a survival analysis setting and to compare clustered survival models that allow for left truncation on this data set. The major contributions of the thesis are the application of the shared frailty model and of a class of Archimedean copulas, in particular the Clayton-Oakes copula with completely monotone generator, and the introduction of sample-splitting techniques in a survival analysis setting. The findings based on the shared frailty model show that the clustering effect was significant in modelling the determinants of time to death of under-five children, underscoring the importance of accounting for clustering. The results for the Clayton-Oakes model showed association between the survival times of children from the same mother. The parameter estimates of the shared frailty and Clayton-Oakes models were found to be quite different, and the two models are not directly comparable. Gender, province, year, birth order and whether a child is part of a twin pair were found to be significant factors affecting under-five child mortality in South Africa. / NRF-TDG Flemish Interuniversity Council Institutional corporation (VLIR-IUC) VLIR-IUC Programme of the University of Limpopo
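The Clayton-Oakes model named in this abstract has a simple closed form. The sketch below, a minimal numpy illustration with arbitrary parameter values rather than code or estimates from the thesis, evaluates the bivariate Clayton survival function and the Kendall's tau implied by its dependence parameter:

```python
import numpy as np

def clayton_oakes_survival(u, v, theta):
    """Joint survival S(t1, t2) = (u^-theta + v^-theta - 1)^(-1/theta),
    where u = S1(t1) and v = S2(t2) are the marginal survival levels.
    The inverse generator psi(s) = (1 + s)^(-1/theta) is the Laplace
    transform of a gamma frailty and is completely monotone for theta > 0."""
    if theta <= 0:
        raise ValueError("theta must be positive")
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

theta = 1.5                    # illustrative value, not fitted to the data
tau = theta / (theta + 2.0)    # Kendall's tau for the Clayton family

# Probability that two children of the same mother both survive past ages
# where their marginal survival levels are 0.9 and 0.8.
print(clayton_oakes_survival(0.9, 0.8, theta), tau)
```

As theta tends to 0 the expression reduces to the independence case u * v, so the single parameter directly indexes the strength of within-mother association.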

Numerical Modelling and Statistical Analysis of Ocean Wave Energy Converters and Wave Climates

Li, Wei, January 2016
Ocean wave energy is considered one of the important potential renewable energy resources for sustainable development. Various wave energy converter technologies have been proposed to harvest energy from ocean waves. This thesis is based on the linear generator wave energy converter developed at Uppsala University. The research focuses on foundation optimization and power absorption optimization of the wave energy converters, and on wave climate modelling at the Lysekil wave energy converter test site. The foundation optimization study of the gravity-based foundation of the linear wave energy converter rests on statistical analysis of wave climate data measured at the Lysekil test site. The 25-year return extreme significant wave height and its associated mean zero-crossing period are chosen as the design wave for evaluating the maximum heave and surge forces. The power absorption optimization study of the linear generator wave energy converter is based on the wave climate at the Lysekil test site. A simplified frequency-domain numerical model is used, with the power take-off damping coefficient as the control parameter for optimizing power absorption. The results show a large improvement when the power take-off damping coefficient is adjusted to the characteristics of the wave climate at the test site. The wave climate modelling studies are based on the wave climate data measured at the Lysekil test site. A new mixed-distribution method is proposed for modelling the significant wave height, and it gives a very good fit to the measured wave data. A copula method is applied to the bivariate joint distribution of the significant wave height and the wave period, and the results show an excellent fit for the Gumbel model. The general applicability of the proposed mixed-distribution method and the copula method is illustrated with wave climate data from four other sites. The results confirm the good performance of the mixed distribution and the Gumbel copula for modelling significant wave height and the bivariate wave climate.
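For the bivariate step, one convenient property of the Gumbel copula is that Kendall's tau determines the parameter through tau = 1 - 1/theta, so a first estimate needs only rank correlations. The sketch below illustrates that moment-matching idea on synthetic placeholder data; it is a generic illustration, not the estimation procedure or data used in the thesis:

```python
import numpy as np
from scipy import stats

def gumbel_copula(u, v, theta):
    """Gumbel copula C(u,v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta))."""
    s = (-np.log(u)) ** theta + (-np.log(v)) ** theta
    return np.exp(-s ** (1.0 / theta))

rng = np.random.default_rng(0)
# Synthetic positively dependent (Hs, Tz) pairs standing in for buoy data.
hs = rng.gamma(shape=2.0, scale=0.8, size=500)                  # wave height [m]
tz = 3.0 + 1.2 * np.sqrt(hs) + rng.normal(scale=0.4, size=500)  # period [s]

tau, _ = stats.kendalltau(hs, tz)
theta = 1.0 / (1.0 - tau)  # invert tau = 1 - 1/theta for the Gumbel family

# Joint probability that Hs and Tz both stay below their 90% quantile levels.
print(theta, gumbel_copula(0.9, 0.9, theta))
```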

Bivariate response regression models with copulas: Sensitivity and residual analysis

Gomes, Eduardo Monteiro de Castro, 01 February 2008
This work presents regression models with bivariate responses built from copula functions. The aim of these bivariate models is to model the correlation between events and to capture, in the regression models, the influence of the association between the response variables in the presence of censored data. The model parameters are estimated by maximum likelihood and jackknife methods. Sensitivity analysis methods such as global influence, local influence and total local influence of an individual are introduced and computed under different perturbation schemes. A residual analysis is proposed to check the adequacy of the fitted models, and new residual measures for bivariate responses are proposed. Monte Carlo simulation was used to study the empirical distribution of the proposed marginal and bivariate residuals. Finally, the methodology is applied to two data sets available in the literature.
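Since the jackknife is used alongside maximum likelihood for inference here, a generic delete-one jackknife standard error looks as follows; this is a textbook sketch for an arbitrary scalar estimator, not the authors' implementation:

```python
import numpy as np

def jackknife_se(data, estimator):
    """Delete-one jackknife standard error of a scalar estimator."""
    n = len(data)
    # Re-estimate n times, each time leaving one observation out.
    loo = np.array([estimator(np.delete(data, i)) for i in range(n)])
    # Jackknife variance: (n - 1)/n times the sum of squared deviations.
    var = (n - 1) / n * np.sum((loo - loo.mean()) ** 2)
    return np.sqrt(var)

rng = np.random.default_rng(1)
sample = rng.exponential(scale=2.0, size=50)  # toy uncensored sample
print(jackknife_se(sample, np.mean))
```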

Introduction to non-standard analysis

Machado, Geovani Pereira, 07 December 2018
The field known as Non-standard Analysis consists in the application of methods from Model Theory and Ultrafilter Theory to obtain peculiar extensions of infinite mathematical systems. The new structures produced by this procedure satisfy the Transfer Principle, a property of the utmost importance which states that the same first-order sentences with bounded quantifiers are true for the original system and for its extension. Conceived in 1961 by Abraham Robinson and improved by a number of mathematicians in the following years, this area of research has proved to be very fruitful and illuminating for many other parts of Mathematics, such as Topology, Probability Theory, Functional Analysis and Complex Analysis. The work presents a re-examination of the Theory of Ordered Domains, followed by a thorough and gradual treatment of the foundations of Non-standard Analysis from the perspective of Non-standard Monomorphisms, with von Neumann-Bernays-Gödel set theory with the Axiom of Choice adopted as the metatheory. In order to ease the assimilation of the methodology put forward, the study explores the properties of the non-Archimedean field of hyperreal numbers in an intuitive and informal fashion, employing them to give alternative and relatively direct proofs of some of the main results of Differential and Integral Calculus, such as the Intermediate Value Theorem, the Bolzano-Weierstrass Theorem, the Extreme Value Theorem, the Inverse Function Theorem and the Fundamental Theorem of Calculus.
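The flavour of the hyperreal approach mentioned in the abstract can be conveyed in one line: limits are replaced by the standard-part map applied to an infinitesimal increment. The snippet below is the textbook formulation, not an excerpt from the dissertation:

```latex
% Non-standard derivative via the standard-part map st(.):
f'(x) \;=\; \operatorname{st}\!\left(\frac{f(x+\varepsilon)-f(x)}{\varepsilon}\right),
\qquad \varepsilon \neq 0 \ \text{infinitesimal}.
% Example with f(x) = x^2:
% \frac{(x+\varepsilon)^2 - x^2}{\varepsilon} = 2x + \varepsilon,
% and st(2x + \varepsilon) = 2x because \varepsilon is infinitesimal.
```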

The knowledge base for teaching Archimedean solids

Almeida, Talita Carvalho Silva de, 29 May 2015
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This research aims to identify the teaching knowledge mobilized so that Archimedean solids can be taught. The research question was: what is the knowledge base for teaching Archimedean solids in basic school? To answer it, we conducted a bibliographical study based on already published material, consisting of books and scientific articles. The theoretical framework rested on Mathematical Knowledge for Teaching, in the sense of Ball, Thames and Phelps, and on Technological Knowledge for Teaching, in the sense of Mishra and Koehler, both of which extend the initial proposal of Shulman and colleagues about the knowledge base for teaching, together with the Anthropological Theory of the Didactic of Yves Chevallard. These references were fundamental in composing a picture of the teaching knowledge minimally involved in the teaching of Archimedean solids. The methodological choice of a bibliographical study contributed to reaching the desired goal, since it allowed us to find aspects of knowledge not evidenced in Shulman's studies. The choice of a mathematical procedure used by Renaissance mathematicians as the Epistemological Reference Model led us to a Mathematical Organization and a possible Didactic Organization for Archimedean solids, helping us to see that teaching knowledge emerges from the interaction of three particular components of knowledge: mathematical knowledge, technological knowledge and didactic knowledge.

Analysis of Semicompeting Risks Data

Patino, Elizabeth Gonzalez, 16 August 2012
In survival analysis, interest usually lies in the time until the occurrence of an event. When observations are subject to more than one type of event (for example, different causes of death) and the occurrence of one event prevents the occurrence of the others, we have a competing risks structure. In some situations, however, the main interest is in two events, one of which (the terminal event) prevents the occurrence of the other (the intermediate, or non-terminal, event) but not vice versa. This structure is known as semicompeting risks and was defined by Fine et al. (2001). In this work we consider two approaches for analyzing data with this structure. One is based on constructing the bivariate survival function through Archimedean copulas, from which estimators for the survival functions are obtained. The second is based on a three-state process, known as the illness-death process, which can be specified by its transition intensity (hazard) functions. In this case, covariates are included and the possible dependence between the two observed times is incorporated through a shared frailty. These methodologies are applied to two real data sets: one of 137 leukemia patients who received an allogeneic bone marrow transplant, followed for up to seven years, and another of 1253 patients with chronic kidney disease on dialysis, followed from 2009 to 2011.
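The illness-death structure described above is easy to picture through simulation: a shared frailty multiplies all transition intensities, which induces the dependence between the non-terminal and terminal times. The sketch below uses made-up constant hazards and is only a schematic illustration of the model class, not a fit to either data set:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative constant transition intensities of the illness-death process:
# healthy -> ill, healthy -> dead, ill -> dead.
lam_01, lam_02, lam_12 = 0.10, 0.05, 0.20

def simulate_subject():
    """A mean-one gamma frailty z scales all three hazards, so large-z
    subjects tend to experience both events early (positive dependence)."""
    z = rng.gamma(shape=2.0, scale=0.5)
    t_ill = rng.exponential(1.0 / (z * lam_01))
    t_dead = rng.exponential(1.0 / (z * lam_02))
    if t_dead < t_ill:
        return np.inf, t_dead                      # terminal event first
    return t_ill, t_ill + rng.exponential(1.0 / (z * lam_12))

times = np.array([simulate_subject() for _ in range(10_000)])
print("fraction with the non-terminal event:", np.isfinite(times[:, 0]).mean())
```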

Modelling dependence in actuarial science, with emphasis on credibility theory and copulas

Purcaru, Oana, 19 August 2005
One basic problem in the statistical sciences is to understand the relationships among multivariate outcomes. Although regression analysis remains an important and widely applicable tool, it is limited by a basic setup that requires identifying one dimension of the outcomes as the primary measure of interest (the "dependent" variable) and the other dimensions as supporting it (the "explanatory" variables). There are situations where this relationship is not of primary interest. In actuarial science, for example, one might be interested in the dependence between the annual claim numbers of a policyholder and its impact on the premium, or in the dependence between claim amounts and the expenses related to them. In such cases the normality hypothesis fails, so Pearson's correlation and other concepts based on linearity are no longer the best tools to use. To quantify the dependence between non-normal outcomes, one therefore needs different statistical tools, such as dependence concepts and copulas. This thesis is devoted to modelling dependence, with applications in actuarial science, and is divided into two parts: the first concerns dependence in frequency credibility models, and the second dependence between continuous outcomes. The two parts resort to different tools: stochastic orderings (which arise from the dependence concepts) and copulas, respectively. During the last decade of the 20th century, the world of insurance was confronted with important developments in a posteriori tarification, especially in the field of credibility. This was due to the liberalization of insurance markets in the European Union, which gave rise to advanced segmentation. The first important contribution is due to Dionne & Vanasse (1989), who proposed a credibility model that integrates a priori and a posteriori information on an individual basis. These authors introduced a regression component into the Poisson counting model in order to use all available information in the estimation of accident frequency. The unexplained heterogeneity was then modeled by introducing a latent variable representing the influence of hidden policy characteristics. The vast majority of papers in the actuarial literature have considered time-independent (or static) heterogeneous models. Noticeable exceptions include the pioneering papers by Gerber & Jones (1975), Sundt (1988) and Pinquet, Guillén & Bolancé (2001, 2003). Allowing for an unknown underlying random parameter that develops over time is justified because the unobservable factors influencing driving ability are not constant: one might consider either shocks (induced by events such as divorce or nervous breakdown) or continuous modifications (e.g. due to a learning effect). The first part studies recently introduced models in frequency credibility theory, which can be seen as time-series models for count data adapted to actuarial problems. More precisely, we examine the kind of dependence induced among annual claim numbers by the introduction of random effects capturing unexplained heterogeneity, both when these random effects are static and when they are time-dependent. We also make precise the effect of reported claims on the a posteriori distribution of the random effect, by establishing a stochastic monotonicity property of the a posteriori distribution with respect to the claims history. We end this part by considering different models for the random effects and computing the a posteriori corrections of the premiums on the basis of a real data set from a Spanish insurance company. Whereas dependence concepts are very useful for describing the relationship between multivariate outcomes, in practice (think, for instance, of the computation of reinsurance premiums) one needs a statistical tool that is easy to implement and incorporates the structure of the data. Such a tool is the copula, which allows the construction of multivariate distributions with given marginals. Because copulas characterize the dependence structure of random vectors once the effect of the marginals has been factored out, identifying and fitting a copula to data is not an easy task. In practice, it is often preferable to restrict the search for an appropriate copula to some reasonable family, like the Archimedean one. It is then extremely useful to have simple graphical procedures to select the best-fitting model among competing alternatives for the data at hand. The second part of the thesis proposes a new nonparametric estimator for the generator that takes into account the particularities of the data, namely censoring and truncation. This nonparametric estimate then serves as a benchmark for selecting an appropriate parametric Archimedean copula, and the selection procedure is illustrated on a real data set.
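The a posteriori premium correction has a closed form in the classical static special case, the Poisson-gamma model, which is a useful reference point for the dynamic random-effects models studied in the thesis. The sketch below covers only that textbook case, with arbitrary illustrative parameters:

```python
import numpy as np

def a_posteriori_factor(claims, freq, a):
    """Posterior mean of a mean-one Gamma(a, a) random effect Theta in the
    static Poisson credibility model N_t | Theta ~ Poisson(freq * Theta):

        E[Theta | N_1, ..., N_n] = (a + sum(N)) / (a + n * freq)

    The a priori premium is multiplied by this factor after n observed years."""
    n = len(claims)
    return (a + np.sum(claims)) / (a + n * freq)

# A policyholder with a priori frequency 0.1 claims/year and one claim
# in five years; a = 1.5 is an arbitrary heterogeneity parameter.
print(a_posteriori_factor([0, 0, 1, 0, 0], freq=0.1, a=1.5))  # -> 1.25
```

A factor above 1 is a claim surcharge; a claim-free history would instead give (1.5 + 0)/(1.5 + 0.5) = 0.75, a 25% a posteriori discount.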
