101 |
Management of City Traffic, Using Wireless Sensor Networks with Dynamic Model. Rahman, Mustazibur, 16 April 2014 (has links)
The road network of a region is of paramount importance to its overall development. Managing road traffic is a key task for city authorities, and reducing road traffic congestion is a significant challenge in this respect. In this thesis, a Wireless Sensor Network (WSN) based road-traffic monitoring scheme with a dynamic mathematical traffic model is presented that does not necessarily include all adjacent intersections of a block, but rather the important major intersections of a city. The objective of the scheme is to reduce congestion by re-routing vehicles to better-performing road segments, informing downstream drivers by broadcasting the congestion information on a dedicated radio channel. The dynamic model provides the instantaneous status of traffic across the road network. The scheme is a WSN-based multi-hop relay network with a hierarchical architecture, composed of ordinary nodes, cluster-head nodes, base stations, gateway nodes and Monitoring and Control Centers (MCC). After collecting the traffic information, the MCC checks the congestion status; threshold factors are used in this model to define congestion. For a congested road segment, a cost function is defined as a performance indicator and estimated using weight factors reflecting the importance of the selected intersections.
This thesis considers a traffic network with twelve major intersections of a city spanning four major directions. Traffic arrivals at these intersections are assumed to follow a Poisson distribution. The model was simulated in Matlab with traffic generated by a Poisson random number generator, and the cost function was estimated for the congestion status of the road segments over a simulation period of 1440 minutes starting from midnight.
For optimization, two different approaches were adopted. In the first, the performance of the scheme was evaluated iteratively for each threshold factor value, one at a time, applying a single threshold factor value to define the threshold capacities of all road segments; traffic was generated and the relative cost estimated following the model specifications, with the purpose of congestion avoidance. In the second approach, different threshold factor values were used for different road segments to determine the optimal set-up, and an exhaustive search was applied to a smaller configuration in order to keep the computation tractable. Simulation results show that the scheme can improve traffic performance by reducing the congestion level at low congestion cost.
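The threshold-based congestion logic described in this abstract can be sketched in a few lines. This is a minimal illustration, not the thesis's Matlab code: the segment capacities, Poisson arrival rates, weight factors and the single threshold factor below are all hypothetical placeholders.

```python
import math
import random

def poisson(rng, lam):
    # Knuth's method: multiply uniforms until the product drops below exp(-lam)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

rng = random.Random(42)

# Hypothetical 4-segment network: capacities and importance weights
capacity = [30, 20, 25, 15]      # vehicles/min a segment can clear
weight   = [1.0, 0.8, 0.6, 0.4]  # importance of the adjoining intersection
theta    = 0.8                   # threshold factor: congested if load > theta*capacity
lam      = [22, 18, 21, 13]      # mean Poisson arrival rates (vehicles/min)

total_cost = 0.0
congested_minutes = 0
for minute in range(1440):       # one simulated day starting from midnight
    for s in range(4):
        arrivals = poisson(rng, lam[s])
        if arrivals > theta * capacity[s]:
            congested_minutes += 1
            # cost grows with the weighted excess over the threshold capacity
            total_cost += weight[s] * (arrivals - theta * capacity[s])

print(round(total_cost, 1), congested_minutes)
```

In the first optimization approach of the thesis, a loop over candidate values of `theta` would repeat this day-long simulation and keep the value with the lowest cost.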
|
102 |
Variation in prey availability and feeding success of larval Radiated Shanny (Ulvaria subbifurcata Storer) from Conception Bay, Newfoundland. Young, Kelly Victoria, 10 July 2008 (has links)
Recruitment of pelagic fish populations is believed to be regulated during the planktonic larval stage due to high rates of mortality during the early life stages. Starvation is thought to be one of the main sources of mortality, despite the fact that there is rarely a strong correlation between the feeding success of larval fish and food availability as measured in the field. This lack of relationship may be caused in part by (i) inadequate sampling of larval fish prey and (ii) the use of total zooplankton abundance or biomass as proxies for larval food availability. Many feeding studies rely on measures of average prey abundance which do not adequately capture the variability, or patchiness, of the prey field as experienced by larval fish. Previous studies have shown that larvae may rely on these patches to increase their feeding success. I assess the variability in the availability of larval fish prey over a range of scales and model the small-scale distribution of prey in Conception Bay, Newfoundland. I show that the greatest variability in zooplankton abundance existed at the meter scale, and that larval fish prey were not randomly distributed within the upper mixed layer. This affects both how well we can model the stochastic nature of larval fish cohorts and how well we can study larval fish feeding from gut content analyses. Expanding on six years of previous lab and field studies on larval Radiated Shanny (Ulvaria subbifurcata) from Conception Bay, Newfoundland, I assess the feeding success, niche breadth (S) and weight-specific feeding rates (SPC, d⁻¹) of the larvae to determine whether there are size-based patterns evident across the years. I found that both the amount of food in the guts and the niche breadth of larvae increased with larval size. There was a shift from low to high SPC with increasing larval size, suggesting that foraging success increases as the larvae grow.
My results suggest that efforts should be made to estimate the variability of prey abundance at scales relevant to larval fish foraging rather than using large-scale average abundance estimates, since small-scale prey patchiness likely plays a role in larval fish feeding dynamics. In addition, the characteristics of zooplankton (density, size and behaviour) should be assessed as not all zooplankton are preyed upon equally by all sizes of larval fish. Overall, this thesis demonstrates that indices based on averages fail to account for the variability in the environment and in individual larval fish, which may be confounding the relationship between food availability and larval growth.
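The claim that prey were not randomly distributed can be checked with a variance-to-mean ratio (index of dispersion), which is approximately 1 for a Poisson (random) field and greater than 1 for a patchy one. A small illustration with synthetic counts; the encounter probability and patch size are invented for the example:

```python
import random
from statistics import mean, pvariance

def dispersion_index(counts):
    # variance-to-mean ratio: ~1 for a Poisson (random) field, >1 for patchy prey
    m = mean(counts)
    return pvariance(counts) / m

rng = random.Random(7)

# random (near-Poisson) field vs. a patchy field: 200 plankton-net samples each
random_field = [sum(rng.random() < 0.05 for _ in range(100)) for _ in range(200)]
patchy_field = [rng.choice([0, 0, 0, 20]) for _ in range(200)]

print(dispersion_index(random_field), dispersion_index(patchy_field))
```

The random field gives a ratio near 1; the patchy field, with the same order of mean abundance, gives a much larger ratio, which is the signature the thesis looks for at the meter scale.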
|
104 |
Srovnání znalostí z teorie elektromagnetického pole u laiků a odborníků v rámci civilní nouzové připravenosti / The comparison of knowledge of electromagnetic field theory for laymen and experts within the civil emergency preparedness. VESELÁ, Barbora, January 2016 (has links)
The thesis "Comparison of knowledge of electromagnetic field theory among laymen and experts in the context of civil emergency preparedness" set three goals: 1. to construct a knowledge structure of the electromagnetic field for experts; 2. to compare the knowledge of experts and laymen; 3. to process the results statistically. The author set the following hypotheses: H1, the theoretical distribution of knowledge in a sample of the general public will follow a normal distribution; H2, the theoretical distribution of knowledge in a sample of the professional community will not follow a normal distribution; H3, the comparison of knowledge between experts and laymen will lead to the alternative hypothesis. The thesis was based on the theory of the curricular process, on which both the structure of the electromagnetic field topic and the questionnaire were built. An important step was creating a model structure of the electromagnetic field topic, based on an analysis of the scientific system, namely the system of educational programmes in the field of civil protection. The same structure was applied to the general public. Another important step was comparing the population-protection knowledge of experts and laymen; this issue had not previously been investigated in detail, and the knowledge of laymen and experts in the physics studied had not been compared. The motivation came from extraordinary events in which respondents may encounter electromagnetic fields and will need the relevant theoretical knowledge. The aim was a statistical evaluation of the administered questionnaires, using nonparametric and parametric tests as verification methods. The theoretical distribution of expert knowledge was assumed to follow a Poisson distribution, whereas the theoretical distribution for the general public was assumed to be normal.
The difference between the knowledge of laymen and professionals was also compared. Using these statistical methods, the hypotheses were confirmed and the goals of the thesis were fulfilled.
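Comparing expert and layman scores with a nonparametric test, as the thesis does, can be illustrated with a hand-rolled Mann-Whitney U statistic. The questionnaire scores below are invented for the sketch, not data from the study:

```python
def mann_whitney_u(x, y):
    # U statistic: number of (x_i, y_j) pairs with x_i > y_j (+0.5 for ties)
    u = 0.0
    for xi in x:
        for yj in y:
            u += 1.0 if xi > yj else 0.5 if xi == yj else 0.0
    return u

# hypothetical questionnaire scores (out of 20 points)
experts = [17, 18, 15, 19, 16, 18, 17, 20, 16, 19]
laymen  = [11, 9, 14, 10, 12, 8, 13, 11, 10, 12]

u = mann_whitney_u(experts, laymen)
n1, n2 = len(experts), len(laymen)

# under H0 (no difference) the expected U is n1*n2/2 = 50;
# here every expert outscores every layman, so U hits its maximum n1*n2 = 100
print(u, n1 * n2 / 2)
```

A large deviation of U from its null expectation is what would lead, as in hypothesis H3, to accepting the alternative that the two groups differ.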
|
105 |
Distribuição COM-Poisson na análise de dados de experimentos de quimioprevenção do câncer em animais / The COM-Poisson distribution in the analysis of data from cancer chemoprevention experiments in animals. Ribeiro, Angélica Maria Tortola, 16 March 2012 (has links)
Financiadora de Estudos e Projetos / Experiments involving chemical induction of carcinogens in animals are common in the biological sciences. The interest in these experiments is, in general, to evaluate the chemopreventive effect of a substance in the destruction of damaged cells. In this type of study, two variables of interest are the number of induced tumors and their development times. We explore the use of the statistical model proposed by Kokoska (1987) for the analysis of data from cancer chemoprevention experiments in animals. We make Kokoska's model, subsequently used by Freedman (1993), more flexible by assuming the Conway-Maxwell-Poisson (COM-Poisson) distribution for the number of induced tumors. This distribution has proven efficient, thanks to its great flexibility compared with other discrete distributions, at accommodating the under-dispersion and over-dispersion often found in count data. The purpose of this work is to adapt the theory of the long-term destructive model (Rodrigues et al., 2011) to cancer chemoprevention experiments in animals, in order to evaluate the effectiveness of cancer treatments. Unlike the proposal of Rodrigues et al. (2011), we formulate a model for the number of malignant tumors detected per animal, assuming that the detection probability is no longer constant but depends on the time instant. This is an extremely important feature for cancer chemoprevention experiments, because it makes the analysis more realistic and accurate. We conducted a simulation study to evaluate the efficiency of the proposed model and to verify the asymptotic properties of the maximum likelihood estimators. We also analyze the real data set presented in Freedman (1993) to demonstrate the efficiency of the COM-Poisson model compared with the results he obtained with the Poisson and negative binomial distributions.
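The COM-Poisson pmf at the heart of this model is simple to evaluate numerically: P(Y = y) is proportional to λ^y/(y!)^ν, where ν < 1 gives over-dispersion and ν > 1 under-dispersion relative to the ordinary Poisson. A sketch, with the normalizing series truncated at 100 terms as a practical assumption:

```python
import math

def com_poisson_pmf(y, lam, nu, terms=100):
    # P(Y=y) = lam^y / (y!)^nu / Z(lam, nu); nu<1 over-, nu>1 under-dispersed
    # Z is an infinite series; truncating at 100 terms is ample for small lam
    z = sum(lam ** j / math.factorial(j) ** nu for j in range(terms))
    return lam ** y / math.factorial(y) ** nu / z

# nu = 1 recovers the ordinary Poisson pmf, a quick sanity check
p_com = com_poisson_pmf(3, 2.5, 1.0)
p_poi = math.exp(-2.5) * 2.5 ** 3 / math.factorial(3)
print(abs(p_com - p_poi) < 1e-9)
```

Fitting the model of the thesis would maximize a likelihood built from this pmf (with the time-dependent detection probability layered on top), but the normalizing-constant evaluation above is the basic ingredient.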
|
106 |
Extensões dos modelos de sobrevivência referente a distribuição Weibull / Extensions of survival models related to the Weibull distribution. Vigas, Valdemiro Piedade, 7 March 2014 (has links)
Financiadora de Estudos e Projetos / In this dissertation, two probability models for the lifetimes until the occurrence of an event produced by a specific cause are reviewed. The first model, called the Weibull-Poisson (WP), was proposed by Louzada et al. (2011a); it generalizes the exponential-Poisson distribution proposed by Kus (2007) and the Weibull distribution. The second, called the long-term model, has been proposed by several authors and assumes that the population is not homogeneous with respect to the risk of occurrence of the event by the cause under study: it contains a sub-population of elements that are not liable to die from the specific cause, considered immune or cured. For the elements at risk, the minimum of the times to event occurrence is observed. In the review of the WP distribution, the expressions for the survival function, quantile function, probability density function and hazard function are detailed, as well as the non-central moments of order k and the distribution of order statistics. Building on this review, we propose original simulation studies to analyze the frequentist properties of the maximum likelihood estimators of the parameters of this distribution. We also present results on inference for these parameters, both when the data set consists of complete lifetime observations and when it may contain censored observations. Furthermore, we present, in an original way, a location-scale regression model for the case where T has a WP distribution. Another original contribution of this dissertation is the long-term Weibull-Poisson (LWP) distribution, which we also study in the situation where covariates are included in the analysis.
We also describe the functions that characterize this distribution (the distribution function, quantile function, probability density function and hazard function), as well as the expression for the moment of order k and the density of an order statistic. A simulation study of this distribution is carried out via maximum likelihood estimation. Applications to real data sets illustrate the applicability of the two models considered.
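The WP construction (lifetime = minimum of N i.i.d. Weibull times, with N a zero-truncated Poisson) implies the survival function S(t) = (exp(λ e^{−(t/b)^a}) − 1)/(e^λ − 1) under one common Weibull parametrization; the exact parametrization in Louzada et al. (2011a) may differ. A Monte Carlo sketch checking the simulated minimum against this closed form:

```python
import math
import random

def trunc_poisson(rng, lam):
    # zero-truncated Poisson via rejection (cheap for moderate lam)
    while True:
        L, k, p = math.exp(-lam), 0, 1.0   # Knuth's Poisson sampler
        while True:
            p *= rng.random()
            if p < L:
                break
            k += 1
        if k > 0:
            return k

def wp_survival(t, lam, a, b):
    # S(t) = P(min of N Weibull(a, b) lifetimes > t), N ~ zero-trunc. Poisson(lam)
    s = math.exp(-((t / b) ** a))          # Weibull survival of a single cause
    return (math.exp(lam * s) - 1.0) / (math.exp(lam) - 1.0)

rng = random.Random(1)
lam, a, b = 2.0, 1.5, 10.0                 # illustrative parameter values
n, t0 = 20000, 5.0
hits = 0
for _ in range(n):
    m = trunc_poisson(rng, lam)
    # inverse-transform sampling of each Weibull time, then take the minimum
    tmin = min(b * (-math.log(1.0 - rng.random())) ** (1.0 / a) for _ in range(m))
    hits += tmin > t0
print(hits / n, wp_survival(t0, lam, a, b))
```

The empirical survival at t0 agrees with the closed form to Monte Carlo accuracy, which is the kind of check the dissertation's simulation studies formalize.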
|
107 |
Distribuição de Poisson bivariada aplicada à previsão de resultados esportivos / The bivariate Poisson distribution applied to forecasting sports results. Silva, Wesley Bertoli da, 23 April 2014 (has links)
Financiadora de Estudos e Projetos / The modelling of paired count data is a topic that has been frequently discussed in several threads of research; in particular, bivariate counts such as sports scores. Accordingly, in this work we present the bivariate Poisson distribution for modelling positively correlated scores. Possible independence between the counts is also addressed through the double Poisson model, which arises as a special case of the bivariate Poisson model. The main characteristics and properties of these models are presented, and a simulation study is conducted to evaluate the behavior of the estimates for different sample sizes. Considering the possibility of modelling the parameters through predictor variables, we present the structure of the bivariate Poisson regression model in the general case, as well as the structure of an effects model for application to sports data. In particular, we consider applications to data from Serie A of the 2012 Brazilian Championship, for which the effects are estimated by the double Poisson and bivariate Poisson models. Once the fits are obtained, the probabilities of score occurrence are estimated, from which we obtain forecasts for the matches of interest. In order to obtain more accurate forecasts, we present the weighted likelihood method, with which it is possible to weight the relevance of the data according to the time at which they were observed.
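The bivariate Poisson model referred to above is commonly built by trivariate reduction: X = X1 + X3 and Y = X2 + X3 with independent Poisson components, so that Cov(X, Y) = λ3 ≥ 0 and λ3 = 0 recovers the double Poisson (independence) case. A simulation sketch with invented intensities, not the fitted Serie A effects:

```python
import math
import random

def poisson(rng, lam):
    # Knuth's method for a Poisson draw
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

rng = random.Random(2012)
l1, l2, l3 = 1.2, 0.9, 0.3   # hypothetical home, away and common intensities

# trivariate reduction: X = X1 + X3, Y = X2 + X3 gives Cov(X, Y) = l3
n = 50000
xs, ys = [], []
for _ in range(n):
    x1, x2, x3 = poisson(rng, l1), poisson(rng, l2), poisson(rng, l3)
    xs.append(x1 + x3)
    ys.append(x2 + x3)

mx = sum(xs) / n
my = sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
print(mx, my, cov)   # close to l1+l3, l2+l3 and l3, respectively
```

Match forecasts would then come from the joint pmf of (X, Y): summing the probabilities of all score pairs with X > Y, X = Y or X < Y gives win, draw and loss probabilities.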
|
108 |
Beräkningsmetoder för säkerhetslager : En jämförande fallstudie på Växjö Transportkyla / Calculation methods for safety stock: a comparative case study at Växjö Transportkyla. Ekholm, Micaela; Grahn, Annie, January 2018 (links)
The thesis examines which goals Växjö Transportkyla has when calculating its safety stock. By analysing the literature and responses from interviews at the company, the following goals were compiled: a high service level; low tied-up capital; control over the size of the safety stock; differentiated service levels; and a user-friendly, theory-based safety stock calculation. The purpose of the second research question is to decide which safety stock calculation Växjö Transportkyla should use to reach these goals. Five different safety stock calculations are tested, and the statistical distribution of the products' demand is examined. After testing the safety stock calculations on data from Växjö Transportkyla, the conclusion is that for the 426 Poisson-distributed products, SERV2 with a Poisson distribution should be used. For the remaining products, around 700, there is no result, since the demand distribution could only be determined for a few of them. A recommendation for the company is to test more distributions in order to calculate the safety stock for the rest of its products. It is also pointed out that the company should measure its service levels and differentiate the service-level target across products. The thesis provides a theoretical contribution by generalizing which product features affect the choice of calculation method in safety stock optimization, and how. The four remaining safety stock calculations are compared with SERV2 under the Poisson distribution to examine whether, for certain product features, another calculation method fits better. This leads to the conclusion that, of the four, SERV2 with the normal distribution gave the most accurate result relative to SERV2 with the Poisson distribution, namely for medium-long lead times and low demand.
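A SERV2 (fill-rate) safety stock under normally distributed lead-time demand is typically found by inverting the standard normal loss function G(k). The order quantity, target fill rate and demand standard deviation below are invented, and the exact SERV2 formulation used in the thesis may differ:

```python
import math

def phi(x):
    # standard normal density
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def Phi(x):
    # standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def loss(k):
    # standard normal loss function G(k) = phi(k) - k * (1 - Phi(k))
    return phi(k) - k * (1.0 - Phi(k))

def safety_stock_serv2(fill_rate, order_qty, sigma_lt):
    # SERV2 (fill rate): choose k so the expected shortage per cycle,
    # sigma_lt * G(k), equals the allowed shortage (1 - fill_rate) * order_qty
    target = (1.0 - fill_rate) * order_qty / sigma_lt
    lo, hi = -4.0, 6.0
    for _ in range(80):            # bisection works because G is decreasing in k
        mid = (lo + hi) / 2.0
        if loss(mid) > target:
            lo = mid
        else:
            hi = mid
    k = (lo + hi) / 2.0
    return k * sigma_lt           # safety stock in units

ss = safety_stock_serv2(0.98, order_qty=100.0, sigma_lt=20.0)
print(round(ss, 2))
```

For the Poisson-distributed products the same fill-rate target would instead be inverted against the Poisson loss function, since the normal approximation is poor at the low demand volumes the thesis highlights.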
|
109 |
Modelos alternativos em filas M/G/1 / Alternative models for M/G/1 queues. Prado, Silvia Maria, 26 November 2015 (has links)
No funding received / The main aim of this work is to develop queueing models alternative to M/G/1 in which arrivals follow a Poisson process and the total number of customers in the system and the total number of service channels are unknown. Our interest is only in observing the service channel that offers the maximum or minimum service time. The service distributions are therefore obtained by compounding the Conway-Maxwell-Poisson distribution truncated at zero, used to model the number of service channels, with a general distribution for the maximum and minimum service times. We thus obtain new service-time distributions, called Maximum-Conway-Maxwell-Poisson-general (MAXCOMPG) and Minimum-Conway-Maxwell-Poisson-general (MINCOMPG), and consequently the queueing models M/MAXCOMPG/1 and M/MINCOMPG/1, respectively. As general distributions, we use the exponential, Weibull and Birnbaum-Saunders distributions. To illustrate the proposed queueing models, a simulation study is carried out and real data are also used.
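Sampling from the MINCOMPG construction is straightforward: draw the number of channels N from a zero-truncated COM-Poisson and take the minimum of N service times (exponential here, one of the three general distributions mentioned). All parameter values below are illustrative only:

```python
import math
import random

def ztcomp_pmf(lam, nu, kmax=50):
    # zero-truncated COM-Poisson probabilities for k = 1..kmax
    # (the series decays fast, so kmax = 50 is ample for these parameters)
    w = [lam ** k / math.factorial(k) ** nu for k in range(1, kmax + 1)]
    z = sum(w)
    return [wk / z for wk in w]

def sample_min_service(rng, lam, nu, mu):
    # service time seen by the customer: the minimum over the (unobserved)
    # number of channels N ~ zero-truncated COM-Poisson(lam, nu)
    probs = ztcomp_pmf(lam, nu)
    u, acc, n = rng.random(), 0.0, 1
    for k, p in enumerate(probs, start=1):   # inverse-cdf lookup for N
        acc += p
        if u <= acc:
            n = k
            break
    return min(rng.expovariate(mu) for _ in range(n))

rng = random.Random(3)
draws = [sample_min_service(rng, lam=2.0, nu=1.5, mu=0.5) for _ in range(20000)]
mean_min = sum(draws) / len(draws)
single = 1.0 / 0.5   # mean of one exponential service time
print(mean_min, single)
```

As expected, the mean of the minimum is below the single-channel mean 1/μ; the MAXCOMPG variant simply replaces `min` with `max` in the sampler.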
|
110 |
Modeling strategies for complex hierarchical and overdispersed data in the life sciences / Estratégias de modelagem para dados hierárquicos complexos e com superdispersão em ciências biológicas. Izabela Regina Cardoso de Oliveira, 24 July 2014
In this work, we study the so-called combined models, generalized linear mixed models extended to allow for overdispersion, in the context of genetics and breeding. Such flexible models accommodate cluster-induced correlation and overdispersion through two separate sets of random effects, and contain as special cases the generalized linear mixed models (GLMM) on the one hand and commonly known overdispersion models on the other. We use such models to obtain heritability coefficients for non-Gaussian characters. Heritability is one of the many important concepts often quantified upon fitting a model to hierarchical data, and it is frequently of interest in plant and animal breeding: knowledge of this attribute is useful to quantify the magnitude of improvement in the population. For data where linear models can be used, this attribute is conveniently defined as a ratio of variance components. Matters are less simple for non-Gaussian outcomes. The focus is on time-to-event and count traits, where the Weibull-Gamma-Normal and Poisson-Gamma-Normal models are used. The resulting expressions are sufficiently simple and appealing, in particular in special cases, to be of practical value. The proposed methodologies are illustrated using data from animal and plant breeding. Furthermore, attention is given to the occurrence of negative estimates of variance components in the Poisson-Gamma-Normal model. The occurrence of negative variance components in linear mixed models (LMM) has received a certain amount of attention in the literature, whereas almost no work has been done for GLMM. This phenomenon can be confusing at first sight because, by definition, variances are non-negative quantities. However, it is well understood in the context of linear mixed modeling, where one has to choose between a hierarchical and a marginal view.
The variance components of the combined model for count outcomes are studied theoretically, and the plant breeding study used as illustration underscores that this phenomenon can be common in applied research. We also call attention to the performance of different estimation methods, because not all available methods are capable of extending the parameter space of the variance components. Thus, when inference on such components is needed and they are expected to be negative, the accuracy of the method is not the only characteristic to be considered.
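For the Gaussian baseline mentioned above, heritability is a ratio of variance components that can be estimated from a balanced one-way ANOVA. The sketch below simulates such a layout (all variance values and group sizes are invented) and uses the moment estimator whose genetic component can, by construction, go negative whenever the between-group mean square falls below the within-group one:

```python
import random
from statistics import mean, pvariance

rng = random.Random(10)

# hypothetical balanced one-way layout: g genetic groups, r replicates each
g, r = 200, 10
sigma2_g, sigma2_e = 2.0, 6.0     # simulated genetic and residual variances

data = []
for _ in range(g):
    u = rng.gauss(0.0, sigma2_g ** 0.5)          # group (genetic) effect
    data.append([u + rng.gauss(0.0, sigma2_e ** 0.5) for _ in range(r)])

# ANOVA (method-of-moments) estimators of the variance components
group_means = [mean(row) for row in data]
msb = pvariance(group_means) * r * g / (g - 1)            # between-group MS
msw = mean(pvariance(row) * r / (r - 1) for row in data)  # within-group MS
s2_e = msw
s2_g = (msb - msw) / r            # negative whenever msb < msw
h2 = s2_g / (s2_g + s2_e)         # heritability as a variance ratio
print(round(h2, 3))
```

With the true components above, the estimate lands near the theoretical value 2/(2+6) = 0.25; the combined models of the thesis generalize exactly this ratio to the non-Gaussian Weibull-Gamma-Normal and Poisson-Gamma-Normal settings.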
|