121

Protein structural dynamics and thermodynamics from advanced simulation techniques

Cazzolli, Giorgia January 2013 (has links)
In this work we apply simulation techniques, namely Monte Carlo simulations and a path-integral-based method, the Dominant Reaction Pathways (DRP) approach, to study aspects of the dynamics and thermodynamics of three families of peculiar proteins. For reasons such as the presence of an intermediate state along the folding path, topological constraints, or large size, these proteins differ from ideal systems such as small globular proteins that fold in a two-state manner. The first topic concerns the colicin immunity proteins IM9 and IM7, which are very similar in structure but apparently fold by different mechanisms. Our simulations suggest that the two proteins should fold through a similar mechanism, via a populated on-pathway intermediate state. We then investigate two classes of pheromones, from organisms living in temperate and Arctic waters respectively. Despite their high structural similarity, the two types of pheromones show different thermodynamic behavior, which, according to our results, can be explained by the role played by the location of the CYS-CYS bonds along the chain. Finally, the conformational changes occurring in serpin proteins are studied. Serpins are very flexible, large (more than 350 residues), and slow (their dynamics ranges from hours to weeks), placing them completely beyond the reach of simulation techniques to date. In this thesis we present the first all-atom simulations, obtained with the DRP approach, of the serpin conformational-change mechanism, and a complete characterization of serpin dynamics is performed. Important implications for medical research, in particular for drug design, are drawn from this detailed analysis.
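
As a loose illustration of the Monte Carlo ingredient mentioned above, the sketch below runs Metropolis sampling on a toy one-dimensional double-well energy profile and reads off the relative populations of the two basins; the profile, temperature and step size are illustrative assumptions, not the protein models or the DRP machinery used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    # Toy double-well profile (illustrative stand-in for a folding free-energy
    # landscape): minima at x = -1 ("unfolded") and x = +1 ("folded").
    return (x**2 - 1.0)**2

def metropolis(n_steps=200_000, beta=6.0, step=0.4):
    """Random-walk Metropolis sampling of the Boltzmann weight exp(-beta * E)."""
    x = -1.0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Accept the move with probability min(1, exp(-beta * dE)).
        if rng.random() < np.exp(-beta * (energy(x_new) - energy(x))):
            x = x_new
        samples[i] = x
    return samples

samples = metropolis()
print(f"population of the x > 0 basin: {np.mean(samples > 0.0):.2f}")
```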
122

Network identification via multivariate correlation analysis

Chiari, Diana Elisa January 2019 (has links)
In this thesis an innovative approach to assess connectivity in a complex network is proposed. In network connectivity studies, a major problem is to estimate the links between the elements of a system in a robust and reliable way. To address this issue, a statistical method based on Pearson's correlation coefficient is proposed. The method inherits the versatility of the coefficient, namely its applicability to any kind of system and its capability to evaluate the cross-correlation of time series pairs both simultaneously and at different time lags. In addition, our method has an increased "investigation power", allowing correlation to be estimated at different time-scale resolutions. The method was tested on two very different kinds of systems: the brain and a set of meteorological stations in the Trentino region. In both cases, the purpose was to assess the existence of significant links between the elements of the two systems at different temporal resolutions. In the first case, the signals used to reconstruct the networks are magnetoencephalographic (MEG) recordings acquired from human subjects in the resting state. Zero-delay cross-correlations were estimated on a set of MEG time series corresponding to the regions belonging to the default mode network (DMN) to identify the structure of the fully connected brain networks at different time-scale resolutions. Great attention was devoted to testing the significance of the correlations, estimated by means of surrogates of the original signal. The network structure is defined by the selection of four parameter values: the level of significance α, the efficiency η0, and two ranking parameters, R1 and R2, used to merge the results obtained from the whole dataset into a single average behavior. In the case of MEG signals, the functional fully connected networks estimated at different time-scale resolutions were compared to identify the best observation window at which the network dynamics can be highlighted. The resulting best time scale of observation was ∼30 s, in line with the results reported in the scientific literature. The same method was also applied to meteorological time series to possibly assess wind circulation networks in the Trentino region. Although this study is preliminary, the first results identify an interesting clustering of the meteorological stations used in the analysis.
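
To make the basic ingredients concrete, here is a minimal sketch of a lagged Pearson cross-correlation whose significance is assessed against phase-randomized surrogates of one signal; the surrogate recipe, the number of surrogates and the acceptance threshold are generic illustrative choices, not the specific procedure or the parameters (α, η0, R1, R2) developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def lagged_correlation(x, y, lag):
    """Pearson correlation between x(t) and y(t + lag)."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

def phase_surrogate(x):
    """Surrogate with the same power spectrum as x but randomized Fourier phases."""
    spectrum = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, spectrum.size)
    phases[0] = 0.0                      # keep the mean untouched
    return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=x.size)

def significant_link(x, y, lag=0, n_surr=200, alpha=0.05):
    """Deem a link significant if |r| exceeds the (1 - alpha) quantile of surrogate |r| values."""
    r = abs(lagged_correlation(x, y, lag))
    r_surr = [abs(lagged_correlation(phase_surrogate(x), y, lag)) for _ in range(n_surr)]
    return r > np.quantile(r_surr, 1.0 - alpha)
```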
123

Silicon nanocrystals downshifting for photovoltaic applications

Sgrignuoli, Fabrizio January 2013 (has links)
In a conventional silicon solar cell, the collection probability of light-generated carriers drops in the high-energy range 280-400 nm. One method to reduce this loss is to place nanometre-sized semiconductors on top of the solar cell, where high-energy photons are absorbed and low-energy photons are re-emitted. This effect, called luminescent down-shifting (LDS), modifies the incident solar spectrum and produces an enhancement of the energy-conversion efficiency of the cell. We investigate this effect using silicon nanoparticles dispersed in a silicon dioxide matrix as the active material. In particular, I proposed to model these structures using a transfer-matrix approach to simulate their optical properties, combined with a 2D device simulator to estimate the electrical performance. Based on the optimized layer sequences, high-efficiency cells featuring silicon quantum dots as the active layer were produced within the European project LIMA. Experimental results demonstrate the validity of this approach by showing an enhancement of the short-circuit current density of up to 4%. In addition, a new configuration was proposed to improve the solar cell performance: the silicon nanoparticles are placed on a cover glass rather than directly on the silicon cell. The aim of this study was to separate the silicon nanocrystal (Si-NC) layer from the cell. In this way, the solar device is not affected by the Si-NC layer during the fabrication process, i.e. the surface passivation quality of the cell remains unaffected after the application of the LDS layer. Using this approach, the downshifting contribution can be quantified separately from the passivation effect, in contrast with the previous method based on depositing the Si-NCs directly on the solar devices. By a suitable choice of the dielectric structures, an improvement in short-circuit current of up to 1% due to the LDS effect is demonstrated and simulated.
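
The optical part of such a model can be sketched with a standard characteristic-matrix (transfer-matrix) calculation at normal incidence, as below; the refractive indices, layer thickness and wavelength are illustrative assumptions, not the LIMA layer stack.

```python
import numpy as np

def layer_matrix(n, d, wavelength):
    """Characteristic matrix of a homogeneous layer at normal incidence."""
    delta = 2.0 * np.pi * n * d / wavelength
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def stack_RT(n_layers, d_layers, n_in, n_sub, wavelength):
    """Reflectance and transmittance of a multilayer between two semi-infinite media."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        M = M @ layer_matrix(n, d, wavelength)
    A = M[0, 0] + M[0, 1] * n_sub
    B = M[1, 0] + M[1, 1] * n_sub
    r = (n_in * A - B) / (n_in * A + B)
    t = 2.0 * n_in / (n_in * A + B)
    return abs(r) ** 2, (n_sub.real / n_in) * abs(t) ** 2

# Illustrative single antireflection-type layer (index, thickness and wavelength are assumptions):
R, T = stack_RT([1.9 + 0.0j], [80e-9], 1.0, 3.5 + 0.0j, 600e-9)
print(f"R = {R:.3f}, T = {T:.3f}")
```

Sweeping the wavelength and feeding the resulting absorption profile into a device simulator is, in spirit, the optical/electrical coupling described above.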
124

Progress of Monte Carlo methods in nuclear physics using EFT-based NN interaction and in hypernuclear systems.

Armani, Paolo January 2011 (has links)
In this thesis I report the work of my PhD; it treated two different topics, linked by a third one, namely the computational method I used to address them. I worked on EFT theories for nuclear systems and on hypernuclei, and I tried to compute the ground-state properties of both systems using Monte Carlo methods. In the first part of my thesis I briefly describe the Monte Carlo methods that I used: the VMC (Variational Monte Carlo), DMC (Diffusion Monte Carlo), AFDMC (Auxiliary Field Diffusion Monte Carlo) and AFQMC (Auxiliary Field Quantum Monte Carlo) algorithms. I also report some new improvements to these methods that I tried or suggested: in particular, the fixed-hypernode extension (§ 2.6.2) of the DMC algorithm, and the inclusion of the L2 term (§ 3.10) and of the exchange term (§ 3.11) in the AFDMC propagator. These last two are based on the same idea used by K. Schmidt to include the spin-orbit term in the AFDMC propagator (§ 3.9). We mainly use the AFDMC algorithm, but at the end of the first part I also describe the AFQMC method. This is quite similar in principle to AFDMC, but it had never been used for nuclear systems; moreover, some of its details let us hope to overcome with AFQMC some limitations that we find in the AFDMC algorithm. However, we do not report any results for the AFQMC algorithm, because we started implementing it only in the last months and our code still requires extensive testing and debugging. In the second part I report our attempt to describe the nucleon-nucleon interaction using EFT theory within the AFDMC method. I explain all our tests to solve the ground state of a nucleus within this method; I also show the problems we found and the attempts we made to overcome them before abandoning this project. In the third part I report our work on hypernuclei: we tried to fit part of the ΛN interaction and to compute the Λ-hyperon separation energy of hypernuclei. Although we found some good and encouraging results, we noticed that the error introduced by the fixed-phase approximation used in the AFDMC algorithm was not as small as assumed. Because of that, in order to obtain interesting results, we need to improve this approximation or to use a better method; hence we looked at the AFQMC algorithm, aiming to quickly reach good results.
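
As a minimal illustration of the variational Monte Carlo idea underlying these many-body algorithms, the sketch below samples |ψ|² for a one-dimensional harmonic oscillator with a Gaussian trial wave function and averages the local energy; the system and the trial function are textbook choices, far simpler than the nuclear Hamiltonians treated in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

def psi_trial(x, alpha):
    """Gaussian trial wave function for a 1D harmonic oscillator (hbar = m = omega = 1)."""
    return np.exp(-alpha * x**2)

def local_energy(x, alpha):
    """E_L = -0.5 * psi''/psi + 0.5 * x^2 for the Gaussian trial state."""
    return alpha + x**2 * (0.5 - 2.0 * alpha**2)

def vmc_energy(alpha, n_steps=100_000, step=1.0):
    """Metropolis sampling of |psi|^2 and averaging of the local energy."""
    x, energies = 0.0, []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        if rng.random() < (psi_trial(x_new, alpha) / psi_trial(x, alpha))**2:
            x = x_new
        energies.append(local_energy(x, alpha))
    return np.mean(energies)

for alpha in (0.3, 0.5, 0.7):
    # The variational minimum sits at alpha = 0.5, where <E> = 0.5 exactly.
    print(f"alpha = {alpha}: <E> = {vmc_energy(alpha):.3f}")
```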
125

A new approach to optimal embedding of time series

Perinelli, Alessio 20 November 2020 (has links)
The analysis of signals stemming from a physical system is crucial for the experimental investigation of the underlying dynamics that drives the system itself. The field of time series analysis comprises a wide variety of techniques developed with the purpose of characterizing signals and, ultimately, of providing insights on the phenomena that govern the temporal evolution of the generating system. A renowned example in this field is given by spectral analysis: the use of Fourier or Laplace transforms to bring time-domain signals into the more convenient frequency space makes it possible to disclose the key features of linear systems. A more complex scenario turns up when nonlinearity intervenes in a system's dynamics. Nonlinear coupling between a system's degrees of freedom brings about interesting dynamical regimes, such as self-sustained periodic (though anharmonic) oscillations ("limit cycles"), or quasi-periodic evolutions that exhibit sharp spectral lines while lacking strict periodicity ("limit tori"). Among the consequences of nonlinearity, the onset of chaos is definitely the most fascinating one. Chaos is a dynamical regime characterized by unpredictability and lack of periodicity, despite being generated by deterministic laws. Signals generated by chaotic dynamical systems appear irregular: the corresponding spectra are broad and flat, prediction of future values is challenging, and evolutions within the systems' state spaces converge to strange attractor sets with noninteger dimensionality. Because of these properties, chaotic signals can be mistakenly classified as noise if linear techniques such as spectral analysis are used. The identification of chaos and its characterization require the assessment of dynamical invariants that quantify the complex features of a chaotic system's evolution. For example, Lyapunov exponents provide a marker of unpredictability; the estimation of attractor dimensions, on the other hand, highlights the unconventional geometry of a chaotic system's state space. Nonlinear time series analysis techniques act directly within the state space of the system under investigation. However, experimentally, full access to a system's state space is not always available. Often, only a scalar signal stemming from the dynamical system can be recorded, thus providing, upon sampling, a scalar sequence. Nevertheless, by virtue of a fundamental theorem by Takens, it is possible to reconstruct a proxy of the original state space evolution out of a single, scalar sequence. This reconstruction is carried out by means of the so-called embedding procedure: m-dimensional vectors are built by picking successive elements of the scalar sequence delayed by a lag L. On the other hand, besides posing some necessary conditions on the integer embedding parameters m and L, Takens' theorem does not provide any clue on how to choose them correctly. Although many optimal embedding criteria have been proposed, a general answer to the problem is still lacking. As a matter of fact, conventional methods for optimal embedding are flawed by several drawbacks, the most relevant being the need for a subjective evaluation of the outcomes of applied algorithms. Tackling the issue of optimally selecting embedding parameters makes up the core topic of this thesis work. In particular, I will discuss a novel approach that was pursued by our research group and that led to the development of a new method for the identification of suitable embedding parameters.
Unlike most conventional approaches, which seek a single optimal value of m and L to embed an input sequence, our approach provides a set of embedding choices that are equally suitable to reconstruct the dynamics. The suitability of each embedding choice m, L is assessed by relying on statistical testing, thus providing a criterion that does not require a subjective evaluation of outcomes. The starting point of our method is a set of embedding-dependent correlation integrals, i.e. cumulative distributions of embedding vector distances, built out of an input scalar sequence. In the case of Gaussian white noise, an analytical expression for correlation integrals is available, and, by exploiting this expression, a gauge transformation of distances is introduced to provide a more convenient representation of correlation integrals. Under this new gauge, it is possible to test—in a computationally undemanding way—whether an input sequence is compatible with Gaussian white noise and, subsequently, whether the sequence is compatible with the hypothesis of an underlying chaotic system. These two statistical tests allow ruling out embedding choices that are unsuitable to reconstruct the dynamics. The estimation of the correlation dimension, carried out by means of a newly devised estimator, makes up the third stage of the method: sets of embedding choices that provide uniform estimates of this dynamical invariant are deemed to be suitable to embed the sequence. The method was successfully applied to synthetic and experimental sequences, providing new insight into the longstanding issue of optimal embedding. For example, the relevance of the embedding window (m-1)L, i.e. the time span covered by each embedding vector, is naturally highlighted by our approach. In addition, our method provides some information on the adequacy of the sampling period used to record the input sequence. The method correctly distinguishes a chaotic sequence from surrogate ones generated out of it and having the same power spectrum. The technique of surrogate generation, which I also addressed during my Ph.D. work to develop new dedicated algorithms and to analyze brain signals, allows significance levels to be estimated in situations where standard analytical algorithms are inapplicable. The fact that the novel embedding approach can tell an original sequence apart from its surrogates shows its capability to distinguish signals beyond their spectral—or autocorrelation—similarities. One of the possible applications of the new approach concerns another longstanding issue, namely that of distinguishing noise from chaos. To this purpose, complementary information is provided by analyzing the asymptotic (long-time) behaviour of the so-called time-dependent divergence exponent. This embedding-dependent metric is commonly used to estimate—by processing its short-time linearly growing region—the maximum Lyapunov exponent out of a scalar sequence. However, insights on the kind of source generating the sequence can be extracted from the—usually overlooked—asymptotic behaviour of the divergence exponent. Moreover, in the case of chaotic sources, this analysis also provides a precise estimate of the system's correlation dimension. Besides describing the results concerning the discrimination of chaotic systems from noise sources, I will also discuss the possibility of using the related correlation dimension estimates to improve the third stage of the method introduced above for the identification of suitable embedding parameters.
The discovery of chaos as a possible dynamical regime for nonlinear systems led to the search for chaotic behaviour in experimental recordings. In some fields, this search gave plenty of positive results: for example, chaotic dynamics was successfully identified and tamed in electronic circuits and laser-based optical setups. These two families of experimental chaotic systems eventually became versatile tools to study chaos and its possible applications. On the other hand, chaotic behaviour is also looked for in climate science, biology, neuroscience, and even economics. In these fields, nonlinearity is widespread: many smaller units interact nonlinearly, yielding a collective motion that can be described by means of a few, nonlinearly coupled effective degrees of freedom. The corresponding recorded signals exhibit, in many cases, an irregular and complex evolution. A possible underlying chaotic evolution—as opposed to a stochastic one—would be of interest both to reveal the presence of determinism and to predict the system's future states. While some claims concerning the existence of chaos in these fields have been made, most results are debated or inconclusive. Nonstationarity, low signal-to-noise ratio, external perturbations and poor reproducibility are just a few of the issues that hinder the search for chaos in natural systems. In the final part of this work, I will briefly discuss the problem of chasing chaos in experimental recordings by considering two example sequences, the first one generated by an electronic circuit and the second one corresponding to recordings of brain activity. The present thesis is organized as follows. The core concepts of time series analysis, including the key features of chaotic dynamics, are presented in Chapter 1. A brief review of the search for chaos in experimental systems is also provided, and the difficulties concerning this quest in some research fields are highlighted. Chapter 2 describes the embedding procedure and the issue of optimally choosing the related parameters. Thereupon, existing methods to carry out the embedding choice are reviewed and their limitations are pointed out. In addition, two embedding-dependent nonlinear techniques that are ordinarily used to characterize chaos, namely the estimation of the correlation dimension by means of correlation integrals and the assessment of the maximum Lyapunov exponent, are presented. The new approach for the identification of suitable embedding parameters, which makes up the core topic of the present thesis work, is the subject of Chapters 3 and 4. While Chapter 3 contains the theoretical outline of the approach, as well as its implementation details, Chapter 4 discusses the application of the approach to benchmark synthetic and experimental sequences, thus illustrating its perks and its limitations. The study of the asymptotic behaviour of the time-dependent divergence exponent is presented in Chapter 5. The alternative estimator of the correlation dimension, which relies on this asymptotic metric, is discussed as a possible improvement to the approach described in Chapters 3 and 4. The search for chaos in experimental data is discussed in Chapter 6 by means of two examples of real-world recordings. Concluding remarks are finally drawn in Chapter 7.
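
A minimal sketch of the two building blocks the abstract refers to, the delay-embedding procedure and a brute-force correlation integral, is given below; the logistic-map test signal, the choice m = 3, L = 1 and the distance thresholds are illustrative assumptions, not the statistical tests developed in the thesis.

```python
import numpy as np

def delay_embed(x, m, L):
    """Build m-dimensional delay vectors (x_i, x_{i+L}, ..., x_{i+(m-1)L})."""
    n_vectors = len(x) - (m - 1) * L
    return np.column_stack([x[i * L : i * L + n_vectors] for i in range(m)])

def correlation_integral(vectors, r):
    """Fraction of distinct vector pairs whose Euclidean distance is below r."""
    dists = np.linalg.norm(vectors[:, None, :] - vectors[None, :, :], axis=-1)
    iu = np.triu_indices(len(vectors), k=1)
    return np.mean(dists[iu] < r)

# Illustrative scalar sequence: the fully chaotic logistic map (parameters assumed).
x = np.empty(1000)
x[0] = 0.4
for i in range(1, len(x)):
    x[i] = 3.99 * x[i - 1] * (1.0 - x[i - 1])

vectors = delay_embed(x, m=3, L=1)
for r in (0.05, 0.1, 0.2):
    print(f"C({r}) = {correlation_integral(vectors, r):.4f}")
```

The scaling of C(r) with r over a range of embedding choices is what a correlation-dimension estimator, such as the one described above, operates on.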
126

Interaction between education, fertility and political economy and its consequences for the distribution of income

GABRIEL BUCHMANN 14 September 2007 (has links)
This work builds a model in which (i) educational decisions, at both the individual and the public level, (ii) fertility decisions, and (iii) the political economy of a society interact and jointly determine (i) the relative quality of public and private education and its distribution, (ii) the fertility differential between social groups and (iii) the distribution of political power, which together shape the evolution of the income distribution in society. In spite of being a general model, it fits some stylized facts and empirical evidence found in Brazil, a very unequal country with a quite young democracy and very serious educational problems. We solve for the static equilibrium, calibrate the parameters and solve the dynamics numerically. We find that if democracy works well, there will be no inequality in the long run, and we explain the forces that keep the economy in a high-inequality trap.
127

Theoretical study of the photocatalytic reduction of carbon dioxide to methanol using titanium dioxide

Juarez Valença Abdalla Junior 29 March 2011 (has links)
The increase in the concentration of greenhouse gases (GHG) from anthropogenic sources has been regarded as one of the main contributions to global warming, threatening life on the planet. Carbon dioxide (CO2) is one of the main GHGs, and its sources are related to processes essential to society, such as energy production, the manufacture of various goods, and transportation. The conversion of CO2 appears to be a promising alternative for reducing the emission of this gas into the atmosphere. Of particular interest for this study, the photoreduction of CO2 to methanol can help mitigate the GHG problem while producing an important feedstock for the chemical industry. Thus, this study aims to use quantum-mechanical calculations, such as Density Functional Theory, to investigate a pathway for the CO2 reduction reaction. The reduction of CO2 to formic acid, formaldehyde and methanol was studied both without interaction with the catalyst and in the presence of titanium dioxide as catalyst. Calculations at different levels of theory and with different basis sets were compared with experimental data. For the calculations involving TiO2, the B3LYP/6-31G(d,p) level was used. Vibrational frequencies were also calculated for each step, allowing the identification of possible transition states and the estimation of reaction (potential-energy) barriers. Intrinsic reaction coordinate (IRC) calculations were employed to confirm that the transition states found are related to each step studied. The comparison of the reduction of carbon dioxide to formic acid, with and without catalyst, showed that the presence of titanium dioxide lowered the reaction barrier of this step by more than 25.0%.
128

Nonlinear analysis of laminated composites using the finite element method

Edson Moreira Dantas Júnior 29 August 2014 (has links)
Composite materials have been widely studied through the years because of their benefits compared to metals, mainly their high strength-to-weight ratio, good thermal insulation and good fatigue resistance. Laminated composites, the focus of this work, are produced by stacking a set of layers, each composed of unidirectional or bidirectional fibers embedded in a polymeric matrix. Structures made of composite materials present nonlinear behavior, both material and geometric: because of their high strength, composite structures tend to be very slender and may present large displacements and stability problems. Additionally, considering material nonlinearity is also important for the failure simulation of laminated structures. One of the most important failure modes of these structures is delamination, the detachment of two adjacent layers. In the design of laminated structures, the Finite Element Method is the most widely used analysis tool owing to its robustness, accuracy and relative simplicity. In order to allow the nonlinear analysis of laminated structures subjected to large displacements, a laminated solid finite element formulation based on the Total Lagrangian approach was developed in this work. The onset and propagation of delamination were simulated using Cohesive Zone Models. To this end, an isoparametric formulation of zero-thickness interface elements was developed, and different constitutive models were used to represent the relation between the tractions and the relative displacements of the faces of the cohesive crack, covering both pure mode I and mixed mode. The formulations developed in this work were implemented in the open-source finite element code FAST following an object-oriented programming philosophy, and the implementations are presented using UML conventions. Several examples were used to verify and validate the implementations. Excellent results were obtained using laminated solid elements in the analysis of shell structures, even with meshes containing only one solid element through the thickness. Regarding delamination, it was found that the use of Cohesive Zone Models requires great care in the choice of the analysis parameters, mainly the traction-relative displacement law, the element size and the numerical integration scheme. Nevertheless, using Newton-Cotes integration and interface elements of adequate size, very good agreement was obtained with theoretical and experimental results available in the literature. In general, the exponential cohesive model showed greater robustness and computational efficiency than the bilinear model.
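
As a pointer to what such a cohesive constitutive model looks like, below is a minimal sketch of a mode-I bilinear traction-separation law of the kind compared in the abstract; the strength, fracture energy and penalty stiffness are illustrative values, not the thesis data, and in practice the law is evaluated inside the interface elements rather than as a standalone function.

```python
import numpy as np

def bilinear_traction(delta, sigma_max=30.0e6, G_c=300.0, K=1.0e14):
    """Mode-I bilinear cohesive law: linear loading up to the cohesive strength,
    then linear softening so that the area under the curve equals G_c.
    sigma_max [Pa], G_c [J/m^2] and K [Pa/m] are illustrative assumptions."""
    delta_0 = sigma_max / K                 # opening at damage initiation
    delta_f = 2.0 * G_c / sigma_max         # opening at complete decohesion
    delta = np.asarray(delta, dtype=float)
    t = np.where(delta <= delta_0,
                 K * delta,
                 sigma_max * (delta_f - delta) / (delta_f - delta_0))
    return np.clip(t, 0.0, None)            # no traction once fully debonded

openings = np.linspace(0.0, 2.5e-5, 6)      # opening displacements [m]
print(bilinear_traction(openings))
```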
129

Metacommunity structure of fish in an Atlantic Forest watershed

Almeida, Rodrigo da Silva 22 October 2013 (has links)
The concept of metacommunities incorporates the influence of local and regional factors in structuring communities and has supported recent approaches related to patterns of distribution, abundance and species interactions. Within this concept, the species-sorting model assumes that a community is governed only by local factors, while the mass-effect model assumes that a community is governed by both local and regional effects. A stream system forms a hierarchical dendritic network, in which the upstream water bodies are smaller and tend to grow in the downstream direction owing to the connection with other streams. For stream fish, dispersal along the channel reveals the importance of regional factors on metacommunity structure, while habitat heterogeneity acts locally and may reveal boundaries between metacommunities. This study aims to determine whether metacommunities form along the longitudinal gradient and which models explain this structure. We sampled the fish fauna and collected environmental information in three streams (15 reaches of 70 m) of a small Atlantic Forest watershed. A principal component analysis (PCA) was applied to describe the environmental gradient and, to verify the occurrence of metacommunities along this gradient, we applied an Elements of Metacommunity Structure (EMS) analysis. The distance-decay relationship (DDR) was used to verify whether the metacommunities are influenced by local and/or regional effects and thus to seek explanations for the pattern found. We can state that there are two distinct communities (a Clementsian metacommunity pattern): one closer to the mouth with no defined pattern, arising from a mass-effect process, and another further upstream with a checkerboard pattern, arising from a species-sorting process. Distinguishing these communities is of great interest for conservation purposes and for the development of biomonitoring strategies. For example, for the downstream communities, where the environmental changes caused by urbanization are more prevalent and there is no longer the possibility of removing this source of environmental impact, ensuring stream connectivity is a good strategy, since these species move more intensely and are highly dependent on colonization sources. On the other hand, for the communities further upstream, even if damming of the streams for water abstraction disconnects them from the downstream region, these communities should in theory persist provided that local conditions are maintained; that is, preserving the environmental integrity of these stretches is a pressing need.
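
As a rough sketch of the distance-decay side of that analysis, the snippet below computes pairwise Jaccard similarities between sites and fits them linearly against distance along the gradient; the presence/absence data, site coordinates and the linear fit are illustrative assumptions, not the thesis data or the exact DDR and EMS procedures.

```python
import numpy as np

rng = np.random.default_rng(3)

def jaccard_similarity(a, b):
    """Jaccard similarity between two presence/absence vectors."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.sum(a | b)
    return np.sum(a & b) / union if union else 0.0

def distance_decay(presence, coords):
    """Slope and intercept of community similarity vs. site distance (the DDR pattern)."""
    n = len(coords)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    sim = np.array([jaccard_similarity(presence[i], presence[j]) for i, j in pairs])
    dist = np.array([abs(coords[i] - coords[j]) for i, j in pairs])
    slope, intercept = np.polyfit(dist, sim, 1)
    return slope, intercept

# Illustrative data: 15 reaches along a stream, 20 species (values are assumptions).
presence = rng.random((15, 20)) < 0.4
coords = np.linspace(0.0, 10.0, 15)   # position along the longitudinal gradient (km)
print(distance_decay(presence, coords))
```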
130

Patterns of moulting and breeding in forest birds in Carlos Botelho State Park, São Paulo state

Medolago, Cesar Augusto Bronzatto 08 November 2013 (has links)
This study aims to describe the patterns of feather moult and reproduction, evaluating their temporal overlap, in an assemblage of birds in the Atlantic Forest. These events tend to show little or no overlap owing to the high energy costs involved, but some authors argue that in tropical regions they may overlap substantially, since the period of resource abundance is longer there. We also recorded fat deposition, since this phenomenon is important for thermal insulation, energy reserves and yolk development. It is possible that environmental variables act directly on the breeding period of birds, which in turn influences moult, since moult is expected to start right after the breeding season, when the young leave the nests. We also took into account that ecological groups, such as trophic guilds, may show distinct patterns for these periods, since the supply of food resources varies temporally in a different way for each group. Five areas were established in Carlos Botelho State Park, São Paulo state (24°06'55''-24°14'41'' S, 47°47'18''-48°07'17'' W), and sampled once a month, during the daytime, from June 2012 to May 2013, using lines of ten mist nets (3 x 12 m, 36 mm mesh). Each bird received a numbered metal band provided by CEMAVE. With a total of 4650 mist-net hours, 700 captures were made, of which 130 were recaptures, totaling 54 species, all residents. The moult of flight feathers was concentrated from November to April, peaking in February. Incubation began in August; the highest percentage of individuals presenting a brood patch occurred in November and December, declining from February onwards, when the percentage of young individuals in the assemblage began to increase. The highest percentage of individuals with fat deposition occurred in the coldest months of the year. The incubation period began at the end of the dry season, increasing with the photoperiod and reaching its peak in November. Thus, the young leave the nests at the beginning of the warm season, when the supply of food resources would be greater, which would support the new individuals in the community as well as the start of the moult period. There was little difference in the incubation and fat-deposition periods between trophic guilds and no difference in their moult periods. The overlap between the two events found in this study was 7%, which corroborates the tendency to avoid overlapping these cycles, even in tropical regions such as the Atlantic Forest, owing to the high energy costs involved.
