11

A Hierarchical Bayesian Model for the Unmixing Analysis of Compositional Data subject to Unit-sum Constraints

Yu, Shiyong 15 May 2015 (has links)
Modeling of compositional data is an emerging, active area in statistics. Compositional data are assumed to represent the convex linear mixing of a fixed number of independent sources, usually referred to as end members. A generic problem in practice is to appropriately separate the end members and quantify their fractions from compositional data subject to non-negativity and unit-sum constraints. A number of methods, essentially related to polytope expansion, have been proposed, but these deterministic methods have some potential problems. In this study, a hierarchical Bayesian model was formulated, and the algorithms were coded in MATLAB®. Test runs on both synthetic and real-world datasets yield scientifically sound and mathematically optimal outputs broadly consistent with other, non-Bayesian methods. The sensitivity of the model to the choice of priors and to the structure of the error covariance matrix is also discussed.
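The convex-mixing structure with unit-sum constraints described above can be sketched numerically. The dimensions and Dirichlet priors below are illustrative assumptions, not data or code from the thesis (which uses MATLAB); this is a minimal NumPy sketch of the forward model only, not the Bayesian unmixing itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 end members measured on 5 compositional variables.
# Each end member is itself a composition (non-negative, sums to 1).
end_members = rng.dirichlet(np.ones(5), size=3)   # shape (3, 5)

# Mixing fractions for 10 samples, also constrained to the simplex.
fractions = rng.dirichlet(np.ones(3), size=10)    # shape (10, 3)

# Observed compositions are convex combinations of the end members.
observed = fractions @ end_members                # shape (10, 5)

# Convex mixing preserves both constraints automatically.
assert np.allclose(observed.sum(axis=1), 1.0)     # unit-sum
assert (observed >= 0).all()                      # non-negativity
```

The unmixing problem is the inverse task: recover `end_members` and `fractions` from `observed` alone, which is where the hierarchical Bayesian formulation comes in.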
12

Ortotanásia no direito penal brasileiro / Orthothanasia in Brazilian criminal law

Massola, Luis Felipe Grandi 06 June 2012 (has links)
The right to life is a fundamental right guaranteed by the Constitution, but it can, exceptionally, be relativized, as the analysis of several provisions of the national legal order, both constitutional and infra-constitutional, shows. In this light, this dissertation seeks to demonstrate the pressing need for orthothanasia to be expressly enacted in the Penal Code as a new ground for exclusion of unlawfulness, and thus treated as a new form of relativization of the right to life. The right to a dignified death represents the real and effective application of the principle of human dignity to those who are severely ill and facing imminent and unavoidable death, sparing them the unnecessary suffering of therapeutic cruelty. From this perspective, provisions of the New Code of Medical Ethics that legitimize orthothanasia from the standpoint of medical ethics are analyzed, together with their relation to the Draft of the Special Part of the Penal Code, which gives orthothanasia the legal nature of a ground for exclusion of unlawfulness. Finally, considerations are made on the need to adopt so-called palliative care under the criminal-law perspective as well, harmonizing the principles highlighted with the notion of humanizing the dying process.
13

Análise de dados com riscos semicompetitivos / Analysis of Semicompeting Risks Data

Patino, Elizabeth Gonzalez 16 August 2012 (has links)
In survival analysis, interest usually lies in the time until the occurrence of an event. When observations are subject to more than one type of event (e.g., different causes of death) and the occurrence of one event prevents the occurrence of the others, we have a competing risks structure. In some situations, however, the main interest is in two events, one of which (the terminal event) prevents the occurrence of the other (the nonterminal event) but not vice versa. This structure is known as semicompeting risks and was first defined by Fine et al. (2001). In this work, we consider two approaches for analyzing data with this structure. One builds the bivariate survival function through Archimedean copulas and derives estimators for the survival functions. The second is based on a three-state process, known as the illness-death process, which can be specified by its transition intensity (hazard) functions; here covariates are included, and the possible dependence between the two observed times is accounted for by a shared frailty. These methodologies are applied to two real data sets: a study of 137 leukemia patients who received an allogeneic marrow transplant, with a maximum follow-up of seven years, and a data set of 1253 chronic kidney disease patients on dialysis, followed from 2009 to 2011.
15

Méthodes quantitatives pour l'étude asymptotique de processus de Markov homogènes et non-homogènes / Quantitative methods for the asymptotic study of homogeneous and non-homogeneous Markov processes

Delplancke, Claire 28 June 2017 (has links)
The object of this thesis is the study of some analytical and asymptotic properties of Markov processes, and of their applications to Stein's method. The point of view adopted is the development of functional inequalities that yield upper bounds on the distance between probability distributions. The first part is devoted to the asymptotic study of time-inhomogeneous Markov processes through Poincaré-like inequalities, established by precise estimates on the spectrum of the transition operator. The first investigation takes place within the framework of the Central Limit Theorem, which states the convergence of the renormalized sum of random variables towards the normal distribution. It results in a Berry-Esseen bound that quantifies this convergence with respect to the chi-squared distance, a natural quantity that had not been investigated in this setting; it thereby complements similar results relative to other distances (Kolmogorov, total variation, Wasserstein). Staying within the non-homogeneous framework, we then consider a weakly mixing process linked to a stochastic algorithm for median approximation. This process evolves by jumps of two sorts (to the right or to the left) with time-dependent size and intensity. An upper bound on the Wasserstein distance of order 1 between the marginal distribution of the process and the normal distribution is provided when the latter is invariant under the dynamics, and extended to examples where only asymptotic normality holds.
The second part concerns intertwining relations between (homogeneous) Markov processes and gradients, which can be seen as a refinement of the Bakry-Émery criterion, and their application to Stein's method, a collection of techniques for estimating the distance between two probability distributions. Second-order intertwinings for birth-death processes are established, going one step further than the known first-order relations. These relations are then exploited to construct an original and universal method for evaluating the Stein factors of discrete probability distributions, a key component of the Stein-Chen method.
16

Dimensionamento de equipes de trabalho por meio de modelos probabilísticos / Sizing of work teams by means of probabilistic models

Freitas, Christiano Michel Fernandes 18 May 2018 (has links)
This work models a production system with three manufacturing units in order to determine the optimal number of maintainers and to perform a sensitivity analysis assessing the reliability of the results. A Quasi-Birth-and-Death (QBD) process models each production unit, and the infinitesimal generators supply the input probabilities for the code developed. Organizations usually size their maintenance teams empirically, which can compromise organizational strategies; the code therefore supports decision-making about these professionals. Three production units, X, Y, and Z, were modeled and the minimum number of maintainers for each was determined: unit X with at least two maintainers has a 70% probability of remaining in operation, unit Y with three reaches 76%, and unit Z with a single maintainer reaches 80%.
The sensitivity analysis shows that, when the infinitesimal generator is perturbed, the probability of operation approaches 100% as maintainers are added, although adding a fourth maintainer changes the system very little. When the system is stressed by letting the random variable t grow, the reliability of the results tends to decrease: with a single maintainer the probability of operation falls considerably over time, whereas with four maintainers the system tends to remain in the operating state for long periods.
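The qualitative finding above — availability rises as maintainers are added, with diminishing returns — already appears in the simplest special case of a QBD, a finite birth-death chain for a machine-repair system. The rates and fleet size below are illustrative assumptions, not the thesis data:

```python
import numpy as np

def birth_death_stationary(birth, death):
    """Stationary distribution of a finite birth-death chain via the
    product-form detailed-balance solution pi_k proportional to
    prod_{i<k} birth_i / death_i."""
    n = len(birth) + 1
    pi = np.ones(n)
    for k in range(1, n):
        pi[k] = pi[k - 1] * birth[k - 1] / death[k - 1]
    return pi / pi.sum()

# Hypothetical machine-repair unit: M machines each fail at rate lam;
# c maintainers each repair at rate mu. State k = number of failed machines.
M, lam, mu = 4, 0.3, 1.0
availability = {}
for c in (1, 2, 3):
    birth = np.array([(M - k) * lam for k in range(M)])       # failures
    death = np.array([min(k + 1, c) * mu for k in range(M)])  # repairs
    pi = birth_death_stationary(birth, death)
    availability[c] = 1.0 - pi[-1]   # P(at least one machine operating)
# availability[1] < availability[2] < availability[3], with shrinking gains
```

A full QBD generalizes this by replacing the scalar birth/death rates with matrix blocks in the infinitesimal generator, which is what the thesis perturbs in its sensitivity analysis.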
17

Scalable Estimation and Testing for Complex, High-Dimensional Data

Lu, Ruijin 22 August 2019 (has links)
With modern high-throughput technologies, scientists can now collect high-dimensional data of various forms, including brain images, medical spectrum curves, engineering signals, etc. These data provide a rich source of information on disease development, cell evolvement, engineering systems, and many other scientific phenomena. To achieve a clearer understanding of the underlying mechanism, one needs a fast and reliable analytical approach to extract useful information from the wealth of data. The goal of this dissertation is to develop novel methods that enable scalable estimation, testing, and analysis of complex, high-dimensional data. It contains three parts: parameter estimation based on complex data, powerful testing of functional data, and the analysis of functional data supported on manifolds. The first part focuses on a family of parameter estimation problems in which the relationship between data and the underlying parameters cannot be explicitly specified using a likelihood function. We introduce a wavelet-based approximate Bayesian computation approach that is likelihood-free and computationally scalable. This approach will be applied to two applications: estimating mutation rates of a generalized birth-death process based on fluctuation experimental data and estimating the parameters of targets based on foliage echoes. The second part focuses on functional testing. We consider using multiple testing in basis-space via p-value guided compression. Our theoretical results demonstrate that, under regularity conditions, the Westfall-Young randomization test in basis space achieves strong control of family-wise error rate and asymptotic optimality. Furthermore, appropriate compression in basis space leads to improved power as compared to point-wise testing in data domain or basis-space testing without compression. 
The effectiveness of the proposed procedure is demonstrated through two applications: the detection of regions of spectral curves associated with pre-cancer using 1-dimensional fluorescence spectroscopy data and the detection of disease-related regions using 3-dimensional Alzheimer's Disease neuroimaging data. The third part focuses on analyzing data measured on the cortical surfaces of monkeys' brains during their early development, and subjects are measured on misaligned time markers. In this analysis, we examine the asymmetric patterns and increase/decrease trend in the monkeys' brains across time. / Doctor of Philosophy / With modern high-throughput technologies, scientists can now collect high-dimensional data of various forms, including brain images, medical spectrum curves, engineering signals, and biological measurements. These data provide a rich source of information on disease development, engineering systems, and many other scientific phenomena. The goal of this dissertation is to develop novel methods that enable scalable estimation, testing, and analysis of complex, high-dimensional data. It contains three parts: parameter estimation based on complex biological and engineering data, powerful testing of high-dimensional functional data, and the analysis of functional data supported on manifolds. The first part focuses on a family of parameter estimation problems in which the relationship between data and the underlying parameters cannot be explicitly specified using a likelihood function. We introduce a computation-based statistical approach that achieves efficient parameter estimation scalable to high-dimensional functional data. The second part focuses on developing a powerful testing method for functional data that can be used to detect important regions. We will show nice properties of our approach. 
The effectiveness of this testing approach will be demonstrated using two applications: the detection of regions of the spectrum that are related to pre-cancer using fluorescence spectroscopy data and the detection of disease-related regions using brain image data. The third part focuses on analyzing brain cortical thickness data, measured on the cortical surfaces of monkeys’ brains during early development. Subjects are measured on misaligned time-markers. By using functional data estimation and testing approach, we are able to: (1) identify asymmetric regions between their right and left brains across time, and (2) identify spatial regions on the cortical surface that reflect increase or decrease in cortical measurements over time.
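The Westfall-Young randomization test invoked in the second part can be sketched in its classic max-statistic form. This is a generic two-sample, feature-wise sketch with made-up toy data; the thesis applies the idea in a wavelet/basis space with p-value-guided compression, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

def westfall_young_maxT(x, y, n_perm=999):
    """Westfall-Young max-statistic permutation test: adjusted p-values
    with strong family-wise error control. x, y: (samples, features)."""
    obs = np.abs(x.mean(axis=0) - y.mean(axis=0))   # one statistic per feature
    pooled = np.vstack([x, y])
    n_x = x.shape[0]
    max_null = np.empty(n_perm)
    for b in range(n_perm):
        idx = rng.permutation(pooled.shape[0])      # relabel the two groups
        xs, ys = pooled[idx[:n_x]], pooled[idx[n_x:]]
        max_null[b] = np.abs(xs.mean(axis=0) - ys.mean(axis=0)).max()
    # Adjusted p-value: how often the null *maximum* beats each observed stat.
    exceed = (max_null[:, None] >= obs[None, :]).sum(axis=0)
    return (1 + exceed) / (n_perm + 1)

# Toy data: 20 + 20 curves over 50 features, signal only in feature 0.
x = rng.normal(0.0, 1.0, (20, 50)); x[:, 0] += 2.0
y = rng.normal(0.0, 1.0, (20, 50))
p_adj = westfall_young_maxT(x, y)
# feature 0 gets a small adjusted p-value; null features mostly do not
```

Comparing every feature against the permutation distribution of the maximum is what delivers strong family-wise error control, at the cost of power that the thesis recovers through compression in basis space.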
18

Využití teorie hromadné obsluhy při návrhu a optimalizaci paketových sítí / Queueing theory utilization in packet network design and optimization process

Rýzner, Zdeněk January 2011 (has links)
This master's thesis deals with queueing theory and its application to designing node models in a packet-switched network. It describes the general principles of building queueing-theory models and their mathematical background. A simulator of packet delay in a network was then created; this application implements the two described models, M/M/1 and M/G/1, and can be used to simulate network nodes and obtain basic network characteristics such as packet delay or packet loss. Finally, a lab exercise was created in which students familiarize themselves with the basic concepts of queueing theory and examine both the analytical and the simulation approach to solving queueing systems.
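The analytical side of the M/M/1 model mentioned above reduces to a handful of closed-form results. The helper below collects the standard textbook formulas (it is not the thesis simulator; the arrival and service rates in the example are made up):

```python
def mm1_metrics(lam, mu):
    """Standard closed-form M/M/1 results, valid when rho = lam/mu < 1:
    L  = rho/(1-rho)      mean number in system
    W  = 1/(mu-lam)       mean sojourn (delay) time
    Lq = rho^2/(1-rho)    mean number waiting
    Wq = rho/(mu-lam)     mean waiting time"""
    assert lam < mu, "queue is unstable unless lam < mu"
    rho = lam / mu
    return {
        "rho": rho,
        "L": rho / (1 - rho),
        "W": 1 / (mu - lam),
        "Lq": rho**2 / (1 - rho),
        "Wq": rho / (mu - lam),
    }

# Example: packets arrive at 80/s and are served at 100/s.
m = mm1_metrics(80.0, 100.0)
# rho = 0.8, L = 4 packets, W = 0.05 s; Little's law holds: L = lam * W
```

Comparing these formulas against the simulator's output for the same rates is exactly the kind of cross-check the lab exercise described above invites; M/G/1 replaces the closed form for L with the Pollaczek-Khinchine formula.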
19

L'acte poétique de la "transfiguralité" : pratiques de l'autoportrait entre écriture et photographie / The poetic act of "transfigurality": practices of the self-portrait between writing and photography

Lalonde, Johanne 12 1900 (has links)
No description available.
