61

Método de partição produto aplicado à Krigagem

Almeida, Maria de Fátima Ferreira January 2019 (has links)
Advisor: José Sílvio Govone / Resumo: Random variables in space are defined as random functions subject to the theory of regionalized variables. To assume spatial continuity with a limited number of realizations of the random variable, stationarity hypotheses are required, and these involve different degrees of spatial homogeneity. Formally, a regionalized variable Z is stationary if the statistical moments of Z(s+h) are the same for any vector h. The first-order stationarity hypothesis states that the first-order moment of the distribution of the random function Z(s) is constant over the whole area. The intrinsic hypothesis is based on computing global means of the semivariances, under the assumptions of first-order stationarity and stationarity of the variance of the increments. Although many variables are susceptible to double or multiple stationarity, these spatial structures are not taken into account by the usual semivariogram. To solve this problem, we sought to identify the locations of the change points in the mean that define more than one semivariance structure, with the goal of improving the quality of Ordinary Kriging maps. For this, the Product Partition Method (MPP) was used with a spatial focus, called the Spatial Product Partition Method (MPPs). To separate the groups, a mean change-point search function was created using a hierarchical Bayesian model, denom... (Complete abstract: click electronic access below) / Abstract: The random variables in space are defined by random functions subject to regionalized variable theory. To assume spatial continuity with a limited number of realizations of the random variable, we need stationarity hypotheses, which involve different degrees of spatial homogeneity. Formally, a regionalized variable Z is stationary if the statistical moments of Z(s + h) are the same for any vector h. The first-order stationarity hypothesis is the hypothesis that the first-order moment of the distribution of the random function Z(s) is constant throughout the area. The intrinsic hypothesis is based on the computation of global means of the semivariances, under the assumptions of first-order stationarity and stationarity of the variance of the increments. Although many variables are susceptible to double or multiple stationarity, these spatial structures are not taken into account by the usual semivariogram and, consequently, cause accuracy problems in Kriging maps. In order to solve the described problem, we identified the change points in the mean with the objective of improving the quality and accuracy of the Ordinary Kriging maps. To separate the groups, a mean change-point search function was created using a hierarchical Bayesian model, called the Spatial Product Partition Model (MPPs). Two databases were used to test the model's potential to separate spatially dependent groups: in the former a change in the mean was suspected, while in the latter, "Data2", there... (Complete abstract: click electronic access below) / Doctorate
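For context, the classical experimental semivariogram that this abstract argues is insufficient on its own can be sketched as follows. This is a generic illustration, not the author's MPPs code; the function name and the sample data (with an artificial mean shift of the kind the method targets) are hypothetical.

```python
import numpy as np

def empirical_semivariogram(coords, values, lags, tol):
    """Classical estimator: gamma(h) = half the mean squared increment
    over all point pairs whose separation falls in the lag bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)   # count each pair once
    d, sq = d[iu], sq[iu]
    gamma = []
    for h in lags:
        mask = np.abs(d - h) <= tol
        gamma.append(sq[mask].mean() / 2 if mask.any() else np.nan)
    return np.array(gamma)

# hypothetical data: 200 random locations with a mean shift across x = 0.5,
# i.e., a change point in the mean that a single global semivariogram hides
rng = np.random.default_rng(0)
coords = rng.uniform(0, 1, size=(200, 2))
values = rng.normal(0, 1, 200) + np.where(coords[:, 0] > 0.5, 3.0, 0.0)
print(empirical_semivariogram(coords, values,
                              lags=np.linspace(0.05, 0.5, 10), tol=0.025))
```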
62

Making Models with Bayes

Olid, Pilar 01 December 2017 (has links)
Bayesian statistics is an important approach to modern statistical analyses. It allows us to use our prior knowledge of the unknown parameters to construct a model for our data set. The foundation of Bayesian analysis is Bayes' Rule, which in its proportional form indicates that the posterior is proportional to the prior times the likelihood. We will demonstrate how we can apply Bayesian statistical techniques to fit a linear regression model and a hierarchical linear regression model to a data set. We will show how to apply different distributions to Bayesian analyses and how the use of a prior affects the model. We will also make a comparison between the Bayesian approach and the traditional frequentist approach to data analyses.
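As a minimal illustration of the proportional form of Bayes' Rule applied to linear regression, the conjugate normal model has a closed-form posterior. The sketch below assumes a N(0, τ²I) prior on the coefficients and a known noise variance σ²; the function name and data are made up and this is not the thesis's code.

```python
import numpy as np

def bayes_linreg_posterior(X, y, sigma2=1.0, tau2=10.0):
    """Posterior ∝ likelihood × prior for y ~ N(X·beta, sigma2·I),
    beta ~ N(0, tau2·I); returns the posterior mean and covariance."""
    p = X.shape[1]
    precision = X.T @ X / sigma2 + np.eye(p) / tau2   # posterior precision
    cov = np.linalg.inv(precision)
    mean = cov @ X.T @ y / sigma2
    return mean, cov

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([2.0, -1.0]) + rng.normal(scale=1.0, size=50)
mean, cov = bayes_linreg_posterior(X, y)
print(mean)   # shrunk toward the prior mean 0 relative to least squares
```

Tightening the prior (smaller τ²) pulls the posterior mean toward zero, which is one concrete way the choice of prior affects the fitted model.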
63

Robot Motion and Task Learning with Error Recovery

Chang, Guoting January 2013 (has links)
The ability to learn is essential for robots to function and perform services within a dynamic human environment. Robot programming by demonstration facilitates learning through a human teacher without the need to develop new code for each task that the robot performs. In order for learning to be generalizable, the robot needs to be able to grasp the underlying structure of the task being learned. This requires appropriate knowledge abstraction and representation. The goal of this thesis is to develop a learning by imitation system that abstracts knowledge of human demonstrations of a task and represents the abstracted knowledge in a hierarchical framework. The learning by imitation system is capable of performing both action and object recognition based on video stream data at the lower level of the hierarchy, while the sequence of actions and object states observed is reconstructed at the higher level of the hierarchy in order to form a coherent representation of the task. Furthermore, error recovery capabilities are included in the learning by imitation system to improve robustness to unexpected situations during task execution. The first part of the thesis focuses on motion learning to allow the robot to both recognize the actions for task representation at the higher level of the hierarchy and to perform the actions to imitate the task. In order to efficiently learn actions, the actions are segmented into meaningful atomic units called motion primitives. These motion primitives are then modeled using dynamic movement primitives (DMPs), a dynamical system model that can robustly generate motion trajectories to arbitrary goal positions while maintaining the overall shape of the demonstrated motion trajectory. The DMPs also contain weight parameters that are reflective of the shape of the motion trajectory. These weight parameters are clustered using affinity propagation (AP), an efficient exemplar clustering algorithm, in order to determine groups of similar motion primitives and thus perform motion recognition. The combination of DMPs and AP was experimentally verified on two separate motion data sets for its ability to recognize and generate motion primitives. The second part of the thesis outlines how the task representation is created and used for imitating observed tasks. This includes object and object state recognition using simple computer vision techniques as well as the automatic construction of a Petri net (PN) model to describe an observed task. Tasks are composed of a sequence of actions that have specific pre-conditions, i.e. object states required before the action can be performed, and post-conditions, i.e. object states that result from the action. The PNs inherently encode pre-conditions and post-conditions of a particular event, i.e. action, and can model tasks as a coherent sequence of actions and object states. In addition, PNs are very flexible in modeling a variety of tasks, including tasks that involve both sequential and parallel components. The automatic PN creation process has been tested on both a sequential two-block stacking task and a three-block stacking task involving both sequential and parallel components. The PN provides a meaningful representation of the observed tasks that can be used by a robot to imitate the tasks. Lastly, error recovery capabilities are added to the learning by imitation system in order to allow the robot to readjust the sequence of actions needed during task execution.
The error recovery component is able to deal with two types of errors: unexpected, but known situations and unexpected, unknown situations. In the case of unexpected, but known situations, the learning system is able to search through the PN to identify the known situation and the actions needed to complete the task. This ability is useful not only for error recovery from known situations, but also for human-robot collaboration, where the human unexpectedly helps to complete part of the task. In the case of situations that are both unexpected and unknown, the robot will prompt the human demonstrator to teach it how to recover from the error to a known state. By observing the error recovery procedure and automatically extending the PN with the error recovery information, the situation encountered becomes part of the known situations and the robot is able to autonomously recover from the error in the future. This error recovery approach was tested successfully on errors encountered during the three-block stacking task.
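A minimal one-dimensional sketch of the DMP formulation this abstract builds on (one common Ijspeert-style variant) is given below; the gains, basis parameters, and the zero weight vector are illustrative stand-ins, not the author's learned values.

```python
import numpy as np

def dmp_rollout(x0, g, weights, centers, widths, tau=1.0, K=100.0, D=20.0,
                alpha_s=4.0, dt=0.002, T=1.0):
    """Integrate a 1-D discrete DMP:
       tau*v' = K*(g - x) - D*v + (g - x0)*f(s),   tau*x' = v,
    with canonical system tau*s' = -alpha_s*s and an RBF forcing term f
    whose weights encode the shape of a demonstrated trajectory."""
    x, v, s = x0, 0.0, 1.0
    traj = [x]
    for _ in range(int(T / dt)):
        psi = np.exp(-widths * (s - centers) ** 2)      # RBF activations
        f = s * psi @ weights / (psi.sum() + 1e-10)     # forcing term
        a = (K * (g - x) - D * v + (g - x0) * f) / tau
        v += a * dt
        x += v / tau * dt
        s += -alpha_s * s / tau * dt                    # canonical phase decay
        traj.append(x)
    return np.array(traj)

# illustrative weights; in the thesis these are fit from a demonstration and
# then clustered with affinity propagation for motion recognition
centers = np.linspace(1.0, 0.0, 10)
traj = dmp_rollout(x0=0.0, g=1.0, weights=np.zeros(10), centers=centers,
                   widths=np.full(10, 25.0))
print(traj[-1])   # converges near the goal g = 1.0 for any goal position
```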
64

Discriminative object categorization with external semantic knowledge

Hwang, Sung Ju 25 September 2013 (has links)
Visual object category recognition is one of the most challenging problems in computer vision. Even assuming that we can obtain a near-perfect instance-level representation with the advances in visual input devices and low-level vision techniques, object categorization still remains a difficult problem because it requires drawing boundaries between instances in a continuous world, where the boundaries are solely defined by human conceptualization. Object categorization is essentially a perceptual process that takes place in a human-defined semantic space. In this semantic space, the categories reside not in isolation, but in relation to others. Some categories are similar, grouped, or co-occur, and some are not. However, despite this semantic nature of object categorization, most of today's automatic visual category recognition systems rely only on the category labels for training discriminative recognition with statistical machine learning techniques. In many cases, this could result in the recognition model being misled into learning incorrect associations between visual features and the semantic labels, essentially overfitting to training set biases. This limits the model's prediction power when new test instances are given. Using semantic knowledge has great potential to benefit object category recognition. First, semantic knowledge could guide the training model to learn a correct association between visual features and the categories. Second, semantics provide much richer information beyond the membership information given by the labels, in the form of inter-category and category-attribute distances, relations, and structures. Finally, semantic knowledge scales well, as the relations between categories grow with an increasing number of categories. My goal in this thesis is to learn discriminative models for categorization that leverage semantic knowledge for object recognition, with a special focus on the semantic relationships among different categories and concepts. To this end, I explore three semantic sources, namely attributes, taxonomies, and analogies, and I show how to incorporate them into the original discriminative model as a form of structural regularization. In particular, for each form of semantic knowledge I present a feature learning approach that defines a semantic embedding to support the object categorization task. The regularization penalizes the models that deviate from the known structures according to the semantic knowledge provided. The first semantic source I explore is attributes, which are human-describable semantic characteristics of an instance. While existing work treated them as mid-level features that did not introduce new information, I focus on their potential as a means to better guide the learning of object categories, by requiring the object category classifiers to share features with attribute classifiers, in a multitask feature learning framework. This approach essentially discovers the common low-dimensional features that support predictions in both semantic spaces. Then, I move on to the semantic taxonomy, which is another valuable source of semantic knowledge. The merging and splitting criteria for the categories on a taxonomy are human-defined, and I aim to exploit this implicit semantic knowledge.
Specifically, I propose a tree of metrics (ToM) that learns metrics that capture granularity-specific similarities at different nodes of a given semantic taxonomy, and uses a regularizer to isolate granularity-specific disjoint features. This approach captures the intuition that the features used for the discrimination of the parent class should be different from the features used for the children classes. Such learned metrics can be used for hierarchical classification. The use of a single taxonomy can be limited in that its structure is not optimal for hierarchical classification, and there may exist no single optimal semantic taxonomy that perfectly aligns with visual distributions. Thus, I next propose a way to overcome this limitation by leveraging multiple taxonomies as semantic sources to exploit, and combine the acquired complementary information across multiple semantic views and granularities. This allows us, for example, to synthesize semantics from both 'Biological'- and 'Appearance'-based taxonomies when learning the visual features. Finally, moving beyond the pairwise similarities used by the previous two models, I exploit analogies, which encode the relational similarity between two related pairs of categories. Specifically, I use analogies to regularize a discriminatively learned semantic embedding space for categorization, such that the displacements between the two category embeddings in both category pairs of the analogy are enforced to be the same; a sketch of this constraint follows below. Such a constraint allows a more confusable pair of categories to benefit from the clear separation of the matched pair that shares the same relation. All of these methods are evaluated on challenging public datasets, and are shown to effectively improve the recognition accuracy over purely discriminative models, while also guiding the recognition to align better with human semantic perception. Further, the applications of the proposed methods are not limited to visual object categorization in computer vision, but they can be applied to any classification problems where there exists some domain knowledge about the relationships or structures between the classes. Possible applications of my methods outside the visual recognition domain include document classification in natural language processing, and gene-based animal or protein classification in computational biology.
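The analogy constraint mentioned above can be sketched generically: for an analogy (a : b :: c : d), the displacement between the embeddings of a and b is pushed toward the displacement between c and d. The embedding matrix and analogy indices below are made up for illustration; this is not the thesis code, and in practice the penalty would be added to the discriminative training loss.

```python
import numpy as np

def analogy_penalty(E, analogies):
    """Sum of squared differences between the two displacement vectors of
    each analogy (a : b :: c : d); E holds one embedding row per category."""
    total = 0.0
    for a, b, c, d in analogies:
        diff = (E[a] - E[b]) - (E[c] - E[d])   # mismatch between relations
        total += diff @ diff
    return total

rng = np.random.default_rng(2)
E = rng.normal(size=(6, 8))                # 6 categories, 8-dim embedding
print(analogy_penalty(E, [(0, 1, 2, 3)]))  # regularizer added to the loss
```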
65

Approximation de la distribution a posteriori d'un modèle Gamma-Poisson hiérarchique à effets mixtes

Nembot Simo, Annick Joëlle 01 1900 (has links)
The method we present for modeling count (Poisson) data is based on the procedure named Poisson Regression Interactive Multilevel Modeling (PRIMM) developed by Christiansen and Morris (1997). In the PRIMM method the Poisson regression includes only fixed effects, whereas our model additionally incorporates random effects. As in Christiansen and Morris (1997), the studied model performs inference based on analytical approximations of the posterior distributions of the parameters, thereby avoiding computational methods such as Markov chain Monte Carlo (MCMC). The approximations rely on Laplace's method and on the asymptotic theory underlying the normal approximation of posterior distributions. The Poisson regression parameters are estimated by maximizing their posterior density via the Newton-Raphson algorithm. This study also determines the first two posterior moments of the Poisson parameters, whose posterior distributions are each approximately gamma. Applications to two example data sets verify that this model can, to a certain extent, be considered a generalization of the PRIMM method: it applies to both unstratified and stratified Poisson data, and in the latter case it includes not only fixed effects but also random effects associated with the strata. Finally, the model is applied to data on several types of adverse events observed among the participants of a clinical trial of a quadrivalent vaccine against measles, mumps, rubella and varicella. The Poisson regression includes the fixed effect corresponding to the treatment/control variable, as well as random effects associated with the biological body systems to which the adverse events are attributed. / We propose a method for analysing count or Poisson data based on the procedure called Poisson Regression Interactive Multilevel Modeling (PRIMM) introduced by Christiansen and Morris (1997). The Poisson regression in the PRIMM method has fixed effects only, whereas our model incorporates random effects. As in Christiansen and Morris (1997), the studied model aims at inference based on adequate analytical approximations of the posterior distributions of the parameters. This avoids the use of computationally expensive methods such as Markov chain Monte Carlo (MCMC) methods. The approximations are based on Laplace's method and asymptotic theory. Estimates of the Poisson mixed-effects regression parameters are obtained through the maximization of their joint posterior density via the Newton-Raphson algorithm. This study also provides the first two posterior moments of the Poisson parameters involved; the posterior distribution of these parameters is approximated by a gamma distribution. Applications to two datasets show that our model can to some extent be considered a generalization of the PRIMM method, since it also allows clustered count data. Finally, the model is applied to data involving many types of adverse events recorded by the participants of a clinical trial of a quadrivalent vaccine against measles, mumps, rubella and varicella.
The Poisson regression incorporates the fixed effect corresponding to the treatment/control covariate, as well as a random effect associated with the biological body system affected by the adverse events.
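A minimal sketch of the kind of computation described: maximizing the posterior density of Poisson regression coefficients by Newton-Raphson. For simplicity this uses an illustrative N(0, τ²I) prior rather than the thesis's exact hierarchical Gamma-Poisson specification, and the treatment/control data are simulated.

```python
import numpy as np

def poisson_map_newton(X, y, tau2=100.0, iters=25):
    """Newton-Raphson ascent on the log-posterior of y ~ Poisson(exp(X·beta))
    with beta ~ N(0, tau2·I): gradient is X'(y - mu) - beta/tau2 and the
    Hessian is -X' diag(mu) X - I/tau2, where mu = exp(X·beta)."""
    p = X.shape[1]
    beta = np.zeros(p)
    for _ in range(iters):
        mu = np.exp(X @ beta)
        grad = X.T @ (y - mu) - beta / tau2
        hess = -(X * mu[:, None]).T @ X - np.eye(p) / tau2
        beta -= np.linalg.solve(hess, grad)   # Newton step to the MAP
    return beta

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(200), rng.integers(0, 2, 200)])  # treat/control
y = rng.poisson(np.exp(X @ np.array([0.5, 0.8])))
print(poisson_map_newton(X, y))   # near the generating coefficients
```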
66

Theory and Practice of Globally Optimal Deformation Estimation

Tian, Yuandong 01 September 2013 (has links)
Nonrigid deformation modeling and estimation from images is a technically challenging task due to its nonlinear, nonconvex and high-dimensional nature. Traditional optimization procedures often rely on good initializations and give locally optimal solutions. On the other hand, learning-based methods that directly model the relationship between deformed images and their parameters either cannot handle complicated forms of mapping, or suffer from the Nyquist Limit and the curse of dimensionality due to high degrees of freedom in the deformation space. In particular, to achieve a worst-case guarantee of ε error for a deformation with d degrees of freedom, the sample complexity required is O(1/ε^d). In this thesis, a generative model for deformation is established and analyzed using a unified theoretical framework. Based on the framework, three algorithms, Data-Driven Descent, Top-down and Bottom-up Hierarchical Models, are designed and constructed to solve the generative model. Under Lipschitz conditions that rule out unsolvable cases (e.g., deformation of a blank image), all algorithms achieve globally optimal solutions to the specific generative model. The sample complexity of these methods is substantially lower than that of learning-based approaches, which are agnostic to deformation modeling. To achieve global optimality guarantees with lower sample complexity, the structure embedded in the deformation model is exploited. In particular, Data-Driven Descent relates two deformed images that are far away in the parameter space by compositional structures of deformation and reduces the sample complexity to O(C^d log(1/ε)). The Top-down Hierarchical Model factorizes the local deformation into patches once the global deformation has been estimated approximately and further reduces the sample complexity to O(C^(d/(1+C_2)) log(1/ε)). Finally, the Bottom-up Hierarchical Model builds representations that are invariant to local deformation. With the representations, the global deformation can be estimated independently of local deformation, reducing the sample complexity to O((C/ε)^(d_0)) with d_0 ≪ d. From the analysis, this thesis shows the connections between approaches that are traditionally considered to be of very different natures. New theoretical conjectures on approaches like Deep Learning are also provided. In practice, broad applications of the proposed approaches have also been demonstrated to estimate water distortion, air turbulence, cloth deformation and human pose with state-of-the-art results. Some approaches even achieve near real-time performance. Finally, application-dependent physics-based models are built with good performance in document rectification and scene depth recovery in turbulent media.
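To make the baseline curse-of-dimensionality bound concrete, a small arithmetic instance can be written out (illustrative numbers only, using the notation as reconstructed above):

```latex
N_{\mathrm{agnostic}}(\epsilon, d) = O\!\left(\epsilon^{-d}\right):
\quad d = 10,\ \epsilon = 0.1 \ \Longrightarrow\ N = O\!\left(10^{10}\right),
\qquad\text{vs.}\qquad
N_{\mathrm{DDD}}(\epsilon, d) = O\!\left(C^{d}\log\tfrac{1}{\epsilon}\right).
```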
68

On Boundaries of Statistical Models / Randeigenschaften statistischer Modelle

Kahle, Thomas 24 June 2010 (has links) (PDF)
In the thesis "On Boundaries of Statistical Models" problems related to a description of probability distributions with zeros, lying in the boundary of a statistical model, are treated. The distributions considered are joint distributions of finite collections of finite discrete random variables. Owing to this restriction, statistical models are subsets of finite dimensional real vector spaces. The support set problem for exponential families, the main class of models considered in the thesis, is to characterize the possible supports of distributions in the boundaries of these statistical models. It is shown that this problem is equivalent to a characterization of the face lattice of a convex polytope, called the convex support. The main tool for treating questions related to the boundary are implicit representations. Exponential families are shown to be sets of solutions of binomial equations, connected to an underlying combinatorial structure, called oriented matroid. Under an additional assumption these equations are polynomial and one is placed in the setting of commutative algebra and algebraic geometry. In this case one recovers results from algebraic statistics. The combinatorial theory of exponential families using oriented matroids makes the established connection between an exponential family and its convex support completely natural: Both are derived from the same oriented matroid. The second part of the thesis deals with hierarchical models, which are a special class of exponential families constructed from simplicial complexes. The main technical tool for their treatment in this thesis are so called elementary circuits. After their introduction, they are used to derive properties of the implicit representations of hierarchical models. Each elementary circuit gives an equation holding on the hierarchical model, and these equations are shown to be the "simplest", in the sense that the smallest degree among the equations corresponding to elementary circuits gives a lower bound on the degree of all equations characterizing the model. Translating this result back to polyhedral geometry yields a neighborliness property of marginal polytopes, the convex supports of hierarchical models. Elementary circuits of small support are related to independence statements holding between the random variables whose joint distributions the hierarchical model describes. Models for which the complete set of circuits consists of elementary circuits are shown to be described by totally unimodular matrices. The thesis also contains an analysis of the case of binary random variables. In this special situation, marginal polytopes can be represented as the convex hulls of linear codes. Among the results here is a classification of full-dimensional linear code polytopes in terms of their subgroups. If represented by polynomial equations, exponential families are the varieties of binomial prime ideals. The third part of the thesis describes tools to treat models defined by not necessarily prime binomial ideals. It follows from Eisenbud and Sturmfels' results on binomial ideals that these models are unions of exponential families, and apart from solving the support set problem for each of these, one is faced with finding the decomposition. The thesis discusses algorithms for specialized treatment of binomial ideals, exploiting their combinatorial nature. 
The provided software package Binomials.m2 is shown to be able to compute very large primary decompositions, yielding a counterexample to a recent conjecture in algebraic statistics.
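A standard textbook instance of the binomial equations mentioned above (a generic example, not one specific to this thesis): the independence model of two binary random variables is exactly the set of 2×2 probability tables satisfying a single binomial (determinant) equation.

```latex
% Independence p_{ij} = p_{i+} p_{+j} holds precisely on the solutions of
p_{00}\,p_{11} - p_{01}\,p_{10} = 0,
\qquad p_{ij} \ge 0, \quad \textstyle\sum_{i,j} p_{ij} = 1,
% and distributions with zeros satisfying it lie on the model's boundary.
```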
69

Bayesian Uncertainty Quantification for Large Scale Spatial Inverse Problems

Mondal, Anirban 2011 August 1900 (has links)
We considered a Bayesian approach to nonlinear inverse problems in which the unknown quantity is a high-dimensional spatial field. The Bayesian approach contains a natural mechanism for regularization in the form of prior information, can incorporate information from heterogeneous sources, and provides a quantitative assessment of uncertainty in the inverse solution. The Bayesian setting casts the inverse solution as a posterior probability distribution over the model parameters. The Karhunen-Loève expansion and the Discrete Cosine Transform were used for dimension reduction of the random spatial field. Furthermore, we used a hierarchical Bayes model to inject multiscale data into the modeling framework. In this Bayesian framework, we have shown that this inverse problem is well-posed by proving that the posterior measure is Lipschitz continuous with respect to the data in the total variation norm. The need for multiple evaluations of the forward model on a high-dimensional spatial field (e.g., in the context of MCMC), together with the high dimensionality of the posterior, results in many computational challenges. We developed a two-stage reversible jump MCMC method, which is able to screen out bad proposals in an inexpensive first stage. Channelized spatial fields were represented by facies boundaries and variogram-based spatial fields within each facies. Using a level-set-based approach, the shape of the channel boundaries was updated with dynamic data using a Bayesian hierarchical model where the number of points representing the channel boundaries is assumed to be unknown. Statistical emulators on a large-scale spatial field were introduced to avoid the expensive likelihood calculation, which contains the forward simulator, at each iteration of the MCMC step. To build the emulator, the original spatial field was represented by a low-dimensional parameterization using the Discrete Cosine Transform (DCT), and then the Bayesian approach to multivariate adaptive regression splines (BMARS) was used to emulate the simulator. Various numerical results were presented by analyzing simulated as well as real data.
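The two-stage screening idea can be sketched generically: a proposal must first pass a Metropolis test under a cheap approximate likelihood before the expensive forward model is ever run. This simplified sketch is plain Metropolis-Hastings with a symmetric proposal and flat prior (the thesis uses reversible jump), and both likelihood functions are stand-ins for an emulator and a forward simulator.

```python
import numpy as np

rng = np.random.default_rng(4)

def log_like_cheap(theta):   # stand-in surrogate (e.g., emulator/coarse model)
    return -0.5 * np.sum((theta - 1.0) ** 2) / 0.3

def log_like_full(theta):    # stand-in expensive forward simulation
    return -0.5 * np.sum((theta - 1.0) ** 2) / 0.25

def two_stage_mh(theta0, n_iter=5000, step=0.5):
    theta = np.asarray(theta0, float)
    ll_c, ll_f = log_like_cheap(theta), log_like_full(theta)
    samples = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal(size=theta.shape)
        ll_c_prop = log_like_cheap(prop)
        # Stage 1: screen using only the cheap likelihood
        if np.log(rng.uniform()) < ll_c_prop - ll_c:
            # Stage 2: expensive likelihood, correcting for the stage-1 factor
            ll_f_prop = log_like_full(prop)
            if np.log(rng.uniform()) < (ll_f_prop - ll_f) - (ll_c_prop - ll_c):
                theta, ll_c, ll_f = prop, ll_c_prop, ll_f_prop
        samples.append(theta.copy())
    return np.array(samples)

print(two_stage_mh(np.zeros(2)).mean(axis=0))  # near the posterior mean
```

The second-stage acceptance ratio divides out the first-stage ratio, so the chain still targets the posterior under the full likelihood while most rejections cost only a surrogate evaluation.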
70

Efeitos de idade na sobrevivência aparente de aves de sub-bosque na floresta Amazônica

Pizarro Muñoz, Jenny Alejandra January 2016 (has links)
The observation of latitudinal gradients in aspects of avian life history has motivated the study of the evolution and variability of life histories in these organisms. A well-documented example is the variation in clutch size, where birds at lower latitudes tend to have smaller clutches than their higher-latitude counterparts. One hypothesis advanced to explain this variation proposes that survival at tropical latitudes is higher, compensating for the smaller clutch size and preventing population extinctions. This explanation has found wide acceptance and support in some studies, but has been questioned by others that did not find higher survival rates in tropical birds. Implicitly, all of these studies based their results on the survival of adult individuals. Populations with smaller clutch sizes could not grow in the same way as populations with larger clutches; it is therefore justified to believe that something must change with latitude to maintain the balance in population size. In the search for alternative explanations for the persistence of tropical bird populations with relatively small clutch sizes, another hypothesis proposes that, if there are no differences in adult survival between latitudes, the fundamental trait that varies is juvenile survival, with higher survival for juveniles in the tropics than for juveniles in temperate zones. However, there is currently little evidence supporting this conclusion. The contrasting results of these studies suggest the lack of a general consensus on the hypothesis that tropical birds have higher survival rates than birds of temperate regions, motivating the formulation of alternative hypotheses and inviting new tests of the hypothesis. In this study, we aim to a) evaluate the effect of age on survival in tropical birds, estimating age-specific annual apparent survival probabilities for a set of understory passerine birds in the central Brazilian Amazon; and b) contribute to the debate on the latitudinal gradient in adult survival by comparing our estimates with estimates from other latitudes. To estimate age-specific survival, we fit to our data a hierarchical Cormack-Jolly-Seber (CJS) model for n species, which treats species-specific parameters as random effects that are estimated and that describe the whole set of species; for comparison of methods, we also fit a fixed-effects version of the model. To determine the age of the birds we used the WRP system. We present a new variant of the CJS model with a mixture parameter for the survival of birds of uncertain age at first capture. We found a strong effect of age on survival, with lower survival probabilities for juveniles than for adults; evidence of a latitude effect on survival, which supports the widely accepted hypothesis of variation in survival with latitude; and we discuss interesting methodological differences between the random- and fixed-effects models, related to the precision of the estimates and the scope of inference, which lead us to conclude that random-effects models are the most appropriate for our analysis. We conclude that it is not necessary to invoke an alternative hypothesis of higher juvenile survival in the tropics in order to explain the latitudinal gradient in clutch size.
/ The observation of latitudinal gradients in bird life history traits has motivated the study of avian life history evolution and variability. A well-documented example is the variation in clutch size, where lower-latitude birds tend to have smaller clutches than their higher-latitude counterparts. A hypothesis that explains this variation proposes that survival in tropical latitudes is higher to compensate for smaller clutch size and prevent population extinctions. This explanation has had wide acceptance and support from some studies, but has been questioned by others that have not found such higher survival rates in tropical birds. In an implicit manner, all these studies have based their results on adult survival. Populations with smaller clutch size would not be able to grow as well as populations with larger clutches; therefore one is justified to believe that something else must change with latitude. In the search for alternative explanations for the persistence of tropical bird populations with relatively small clutch sizes, it has also been proposed that, if there were no differences in adult survival among latitudes, the fundamental trait that varies is juvenile survival, with higher survival rates for tropical juvenile birds than for temperate ones. However, currently there is little evidence that supports this conclusion. The contrasting results of those studies suggest a lack of general consensus about the hypothesis that tropical birds have higher survival rates than birds of temperate regions, motivating the formulation of alternative hypotheses and inviting further tests of the hypothesis. In our study we aim to a) assess the effect of age on survival in a tropical bird community, estimating age-specific annual apparent survival probabilities for a set of passerine understory birds from the central Brazilian Amazon; and b) contribute to the debate about the latitudinal gradient in adult survival by comparing our adult survival estimates to estimates of temperate-zone adult survival probabilities. To estimate age-specific survival we fit to our data a hierarchical multispecies Cormack-Jolly-Seber (CJS) model for n species, which treats species-specific parameters as random effects that are estimated and that describe the whole assemblage of species; for comparison of methods, we also fit a fixed-effects version of the model. To age birds we use the cycle-based WRP system. We introduce a novel variant of the CJS model with a mixture component for the survival of birds of uncertain age at the time of banding. We found a strong effect of age on survival, with juveniles surviving less than adults; evidence of a latitude effect on survival, which supports the widely accepted hypothesis of variation in survival with latitude; and methodological differences between the random- and fixed-effects models, related to the precision of estimates and the scope of inference, which lead us to conclude that random-effects models are more appropriate for our analysis. We conclude that there is no need to invoke an alternative hypothesis of higher tropical juvenile survival to account for the general latitudinal trend in clutch size.
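For reference, the Cormack-Jolly-Seber likelihood of a single capture history can be written with the standard χ recursion. This generic sketch uses constant apparent survival φ and recapture probability p, not the hierarchical multispecies, age-structured version with a mixture for uncertain age developed in the thesis; the example capture history is made up.

```python
import numpy as np

def cjs_loglik(history, phi, p):
    """Log-likelihood of one capture history (1 = captured), conditional on
    first capture, with constant apparent survival phi and recapture p.
    chi_t = P(never seen after t) = (1 - phi) + phi*(1 - p)*chi_{t+1}."""
    T = len(history)
    first = int(np.argmax(history))                 # first capture occasion
    last = T - 1 - int(np.argmax(history[::-1]))    # last capture occasion
    ll = 0.0
    for t in range(first + 1, last + 1):            # known-alive period
        ll += np.log(phi)                           # survived the interval
        ll += np.log(p) if history[t] else np.log(1 - p)
    chi = 1.0                                       # after the final occasion
    for _ in range(T - 1 - last):                   # never seen again after last
        chi = (1 - phi) + phi * (1 - p) * chi
    return ll + np.log(chi)

print(cjs_loglik(np.array([1, 0, 1, 0, 0]), phi=0.6, p=0.4))
```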
