21

Méthodes d'analyse de données et modèles bayésiens appliqués au contexte des inégalités socio-territoriales de santé et des expositions environnementales / Data analysis techniques and Bayesian models applied to the context of social health inequalities and environmental exposures

Lalloué, Benoit 06 December 2013 (has links)
Cette thèse a pour but d'améliorer les connaissances concernant les techniques d'analyse de données et certains modèles bayésiens dans le domaine de l'étude des inégalités sociales et environnementales de santé. À l'échelle géographique de l'IRIS sur les agglomérations de Paris, Marseille, Lyon et Lille, l'événement sanitaire étudié est la mortalité infantile dont on cherchera à expliquer le risque avec des données socio-économiques issues du recensement et des expositions environnementales comme la pollution de l'air, les niveaux de bruit et la proximité aux industries polluantes, au trafic automobile ou aux espaces verts. Deux volets principaux composent cette thèse. Le volet analyse de données détaille la mise au point d'une procédure de création d'indices socio-économiques multidimensionnels et la conception d'un package R l'implémentant, puis la création d'un indice de multi-expositions environnementales. Pour cela, on utilise des techniques d'analyse de données pour synthétiser l'information et fournir des indicateurs composites utilisables directement par les décideurs publics ou dans le cadre d'études épidémiologiques. Le second volet concerne les modèles bayésiens et explique le modèle « BYM ». Celui-ci permet de prendre en compte les aspects spatiaux des données et est mis en oeuvre pour estimer le risque de mortalité infantile. Dans les deux cas, les méthodes sont présentées et différents résultats de leur utilisation dans le contexte ci-dessus exposés. On montre notamment l'intérêt de la procédure de création d'indices socio-économiques et de multi-expositions, ainsi que l'existence d'inégalités sociales de mortalité infantile dans les agglomérations étudiées. / The purpose of this thesis is to improve the knowledge about, and apply, data mining techniques and some Bayesian models in the field of social and environmental health inequalities. At the neighborhood (IRIS) scale in the Paris, Marseille, Lyon and Lille metropolitan areas, the health event studied is infant mortality. We try to explain its risk with socio-economic data retrieved from the national census and environmental exposures such as air pollution, noise, and proximity to traffic, green spaces and industries. The thesis is composed of two parts. The data mining part details the development of a procedure for creating multidimensional socio-economic indices and of an R package that implements it, followed by the creation of a cumulative exposure index. In this part, data mining techniques are used to synthesize information and provide composite indicators amenable to direct use by stakeholders or in the framework of epidemiological studies. The second part is about Bayesian models and explains the "BYM" model. This model makes it possible to take into account the spatial dimension of the data when estimating mortality risks. In both cases, the methods are presented and several results of their use in the above-mentioned context are given. In particular, we show the value of the socio-economic and cumulative exposure index procedures, as well as the existence of social inequalities in infant mortality in the studied metropolitan areas.
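For readers unfamiliar with the "BYM" model cited in this abstract, a standard formulation (Besag, York and Mollié) for area-level counts is sketched below; the exact covariates and priors used in the thesis may differ from this generic version.

\begin{align*}
y_i &\sim \operatorname{Poisson}(E_i\,\theta_i), \\
\log \theta_i &= \alpha + \mathbf{x}_i^{\top}\boldsymbol{\beta} + u_i + v_i, \\
u_i \mid u_{j \neq i} &\sim \mathcal{N}\!\Big(\frac{1}{n_i}\sum_{j \sim i} u_j,\ \frac{\sigma_u^2}{n_i}\Big), \qquad v_i \sim \mathcal{N}(0, \sigma_v^2),
\end{align*}

where y_i is the number of infant deaths in area i, E_i the expected count, x_i the socio-economic and exposure covariates, u_i a spatially structured (intrinsic CAR) random effect over neighbouring areas j ~ i with n_i neighbours, and v_i the unstructured heterogeneity.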
22

A robust multi-objective statistical improvement approach to electric power portfolio selection

Murphy, Jonathan Rodgers 13 November 2012 (has links)
Motivated by an electric power portfolio selection problem, a sampling method is developed for simulation-based robust design that builds on existing multi-objective statistical improvement methods. It uses a Bayesian surrogate model regressed on both design and noise variables, and makes use of methods for estimating epistemic model uncertainty in environmental uncertainty metrics. Regions of the design space are sequentially sampled in a manner that balances exploration of unknown designs and exploitation of designs thought to be Pareto optimal, while regions of the noise space are sampled to improve knowledge of the environmental uncertainty. A scalable test problem is used to compare the method with design of experiments (DoE) and crossed array methods, and the method is found to be more efficient for restrictive sample budgets. Experiments with the same test problem are used to study the sensitivity of the methods to numbers of design and noise variables. Lastly, the method is demonstrated on an electric power portfolio simulation code.
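The "statistical improvement" methods this work builds on are typically driven by an expected-improvement criterion computed from a Gaussian-process surrogate. A minimal single-objective Python sketch is given below; the toy data, function names and parameters are illustrative only, whereas the thesis uses a multi-objective, noise-aware extension.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Toy objective evaluated at a handful of initial designs (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(8, 1))
y = np.sin(6 * X[:, 0]) + 0.1 * rng.standard_normal(8)

# Gaussian-process surrogate regressed on the sampled designs.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

def expected_improvement(x_cand, gp, y_best):
    """EI(x) = (y_best - mu) * Phi(z) + sigma * phi(z), for minimization."""
    mu, sigma = gp.predict(x_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)          # avoid division by zero
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Sample next where the surrogate promises the largest improvement.
candidates = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
ei = expected_improvement(candidates, gp, y.min())
x_next = candidates[np.argmax(ei)]

In the robust setting described in the abstract, the surrogate is regressed on noise variables as well as design variables, and the improvement criterion is evaluated on environmental uncertainty metrics rather than on the raw objective.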
23

Mesures géodésiques et modélisation de la convergence oblique au travers de failles transformantes. Application au bord Nord du Plateau Tibétain et à la Californie du Sud / Geodetic measurements and modeling of oblique convergence across transform faults. Application to the Northern Tibetan Plateau and to Southern California

Daout, Simon 21 November 2016 (has links)
Je me focalise sur trois grands systèmes de failles transformantes obliques au Tibet et en Californie du Sud, et ce, afin de mieux comprendre et quantifier les relations entre les différentes structures qui les définissent. L'interférométrie radar à Synthèse d'Ouverture (InSAR) dispose du potentiel pour cartographier et localiser précisément la déformation sur des zones étendues et ainsi contraindre la géométrie des structures profondes. Cependant son utilisation en milieu naturel se trouve fortement entravée par la décorrélation due à la végétation, au relief, et aux cycles de gel et dégel, mais aussi par les délais troposphériques et les rampes orbitales résiduelles. J'ai développé des méthodes pour pallier ces limitations. Au Tibet, j'ai ainsi traité les archives du satellite Envisat au niveau de deux zones de lacune sismique, à la bordure Nord du plateau, se présentant comme des zones intéressantes pour étudier le partitionnement de la convergence : le système de faille de Haiyuan au nord-est du Tibet et la faille sénestre de l'Altyn Tagh, au nord-ouest du plateau. Une attention spécifique sur les déformations liées au pergélisol m'a permis de (1) retrouver la continuité du signal sur de grandes zones, (2) de quantifier le comportement temporel des cycles de gel et dégel des sédiments recouvrant le pergélisol, (3) d'isoler les zones stables des sédiments se déformant. Je montre que les déformations saisonnières sont fortement dépendantes des unités géomorphologiques et que la fonte du pergélisol est plus importante à faible qu'à haute altitude. J'analyse aussi le signal saisonnier au travers de la marche topographique et je définis un proxy pour les incertitudes de la correction atmosphérique. J'observe un gradient de déformation au travers de la faille de l'Altyn Tagh de l'ordre de 11-15 mm/an et un alignement clair de la déformation dans le Tarim, parallèle à la faille de l'Altyn Tagh, ainsi que des soulèvements de l'ordre de 1 mm/an associés à des chevauchements. Ce travail montre aussi un gradient de déformation associé à la terminaison ouest de la faille du Kunlun, redéfinissant ainsi la géométrie des blocs tectoniques dans cette région. Parallèlement à cette acquisition de données, je développe des outils d'inversion basés sur des algorithmes de Monte Carlo afin d'explorer l'ensemble des géométries en accord avec les observations et d'estimer la compatibilité de la déformation actuelle avec des modèles tectoniques long-termes. Je montre ainsi une convergence uniforme de 8.5-11.5 mm/an et d'orientation N81-98E à travers le système de faille d'Haiyuan et quantifie son partitionnement le long des différentes structures. Par ailleurs, j'applique mon approche en Californie du Sud, au niveau du « Big Bend » de la faille de San Andreas où, en analogie avec des modèles structuraux géologiques, j'utilise des lois de conservation du mouvement pour contraindre la géométrie des chevauchements aveugles. Je montre la compatibilité du champ de déformation actuel avec un décollement grande échelle et quantifie une accumulation de contrainte de 2.5 mm/an le long de la structure majeure sous Los Angeles. / I focus on three major oblique transform fault systems in Tibet and in Southern California, in order to better measure and quantify the present-day strain accumulation on these structures. Interferometric Synthetic Aperture Radar (InSAR) has the potential to map and localize precisely the deformation over wide areas and thus constrain the deep geometry of these structures.
However, its application in natural environments is hindered by strong decorrelation of the radar phase due to vegetation, relief, and freeze and thaw cycles, but also by variable tropospheric phase delays across topographic features and long-wavelength residual orbital ramps. Here, I develop methodologies to circumvent these limitations and separate tectonic from other parasite signals. In Tibet, I process data from the Envisat satellite archives, at the boundary of the Tibetan plateau, in two seismic gaps, which appear interesting for studying the partitioning of the convergence: the Haiyuan Fault system in northeastern Tibet and the left-lateral Altyn Tagh Fault, in northwestern Tibet. A specific focus on the permafrost-related deformation signal allows us to: (1) correctly unwrap interferograms from north to south, (2) quantify the temporal behavior of the freeze/thaw cycles, and (3) isolate bedrock pixels that are not affected by the permafrost signal for further tectonic analysis. I show that the seasonal subsidence depends greatly on the geological land unit and that lower elevations are thawing faster than higher elevations. I analyze the atmospheric signal across the high plateau margin and estimate a proxy for the uncertainty in atmospheric corrections. I observe a strike-slip deformation of around 11-15 mm/yr across the Altyn Tagh fault, a clear line of concentrated strike-slip deformation of around 3 mm/yr within the Tarim basin, trending parallel to the Altyn Tagh Fault trace, as well as a thrust signal uplifting terraces at a rate of 1 mm/yr. This work also shows a strain accumulation around the western extension of the south trace of the Kunlun Fault, redefining the block boundaries in northwestern Tibet. In parallel with this data acquisition, I develop Monte Carlo inversion tools in order to explore the various geometries in agreement with observations and estimate the compatibility of present-day surface displacements with long-term slip partitioning models. I thus show a uniform convergence rate of 8.5-11.5 mm/yr with a N81-98E orientation across the Haiyuan fault system and quantify the partitioning along the various structures. I also apply my approach in Southern California, across the "Big Bend" of the San Andreas Fault, where, in analogy with structural geological models, I use conservation of motion to help constrain the geometry and the kinematics of blind thrust faults. I show the compatibility of surface displacements with a large-scale décollement and quantify a loading rate of 2.5 mm/yr along the major thrust structure developing under Los Angeles.
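As an illustration of the kind of Monte Carlo inversion described above, a bare-bones Metropolis sampler over a two-parameter elastic screw-dislocation model (far-field slip rate and locking depth, the classic arctangent velocity profile) is sketched below in Python. The forward model, priors and synthetic data are simplified stand-ins, not the parameterization used in the thesis.

import numpy as np

def forward(x, slip_rate, locking_depth):
    """Interseismic fault-parallel velocity of a buried screw dislocation."""
    return (slip_rate / np.pi) * np.arctan(x / locking_depth)

# Illustrative "observed" velocities across a fault (mm/yr), with noise.
rng = np.random.default_rng(1)
x_obs = np.linspace(-80.0, 80.0, 40)                  # km from the fault trace
v_obs = forward(x_obs, 12.0, 15.0) + rng.normal(0, 0.5, x_obs.size)
sigma = 0.5

def log_post(theta):
    s, d = theta
    if not (0.0 < s < 30.0 and 1.0 < d < 40.0):       # uniform priors
        return -np.inf
    resid = v_obs - forward(x_obs, s, d)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Metropolis random walk over (slip rate, locking depth).
theta = np.array([10.0, 10.0])
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, [0.3, 0.8])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
samples = np.array(samples)[5000:]                    # drop burn-in

The retained samples describe the family of geometries compatible with the data, which is the spirit of the exploration performed in the thesis over more realistic fault geometries.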
24

Determinantes da adesão a tratados de patentes, 1970-2000: a Convenção de Paris e o Tratado de Cooperação de Patentes / The determinants of accession to patent treaties, 1970-2000: the Paris Convention and the Patent Cooperation Treaty

Manoel Galdino Pereira Neto 30 September 2011 (has links)
Neste trabalho investigamos os determinantes da adesão de países a dois tratados internacionais de patentes: A Convenção de Paris e o Tratado de Cooperação de Patentes (TCP). Por meio de um modelo hierárquico Bayesiano, apresentamos evidências de que fatores domésticos são importantes para predizer adesão aos tratados estudados. Porém, quais fatores são importantes dependem do tipo de tratado. Para o TCP, que é um tratado que visa reduzir custos de transação, a legislação doméstica de patentes não é relevante. Para a Convenção de Paris, que limita as opções de política na área de patente, a legislação doméstica é fator relevante. Nós mostramos também que os ganhos diretos de participar dos tratados, medido pelo número de patentes no exterior, é uma variável importante e positivamente associada à probabilidade de adesão a ambos os acordos. Apresentamos ainda evidências de que variáveis sistêmicas são importantes e que as mudanças no sistema internacional nos últimos 30 anos são fatores importantes para explicar a adesão. / In this work we investigate the determinants of accession to two international patent treaties: the Paris Convention and the Patent Cooperation Treaty (PCT). Through a Bayesian hierarchical model, we present evidence that domestic factors are important in predicting accession to the treaties studied. However, which factors matter depends on the type of treaty. For the PCT, a treaty aimed at reducing transaction costs, domestic patent legislation is not relevant. For the Paris Convention, which limits policy options in the area of patents, domestic legislation is a relevant factor. We also show that the direct gains from participating in the treaties, as measured by the number of patents abroad, are an important variable positively associated with the likelihood of accession to both agreements. We also present evidence that systemic variables are important and that changes in the international system over the past 30 years are important factors in explaining accession to the treaties.
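The Bayesian hierarchical model referred to above can be sketched, in generic form, as a logistic regression on country-year observations with country-level intercepts; the concrete covariates, priors and hierarchy used in the dissertation may differ from this sketch.

\begin{align*}
y_{it} &\sim \operatorname{Bernoulli}(p_{it}), \\
\operatorname{logit}(p_{it}) &= \alpha_i + \gamma_t + \mathbf{x}_{it}^{\top}\boldsymbol{\beta}, \\
\alpha_i &\sim \mathcal{N}(\mu_\alpha, \sigma_\alpha^2), \qquad \gamma_t \sim \mathcal{N}(0, \sigma_\gamma^2),
\end{align*}

where y_{it} indicates whether country i accedes to the treaty in year t, x_{it} collects domestic factors (e.g., patent legislation, patents held abroad) and systemic variables, α_i are country effects, and γ_t capture changes in the international system over time.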
25

Time Dynamic Topic Models

Jähnichen, Patrick 30 March 2016 (has links) (PDF)
Information extraction from large corpora can be a useful tool for many applications in industry and academia. For instance, political communication science has only recently begun to use the opportunities that come with the massive amounts of information available through the Internet and the computational tools that natural language processing can provide. We give a linguistically motivated interpretation of topic modeling, a state-of-the-art algorithm for extracting latent semantic sets of words from large text corpora, and extend this interpretation to cover issues and issue-cycles as theoretical constructs coming from political communication science. We build on a dynamic topic model, a model whose semantic sets of words are allowed to evolve over time governed by a Brownian motion stochastic process, and apply a new form of analysis to its results. This analysis is based on the notion of volatility, known from econometrics as the rate of change of stocks or derivatives. We claim that the rate of change of sets of semantically related words can be interpreted as issue-cycles, with the word sets describing the underlying issues. Generalizing the existing work, we introduce dynamic topic models driven by general Gaussian processes (Brownian motion is a special case of our model), a family of stochastic processes defined by the function that determines their covariance structure. We use the above assumption and apply a certain class of covariance functions to allow for an appropriate rate of change in word sets while preserving the semantic relatedness among words. Applying our findings to a large newspaper data set, the New York Times Annotated Corpus (all articles between 1987 and 2007), we are able to identify sub-topics in time, "time-localized topics", and find patterns in their behavior over time. However, we have to drop the assumption of semantic relatedness over all available time for any one topic. Time-localized topics are consistent in themselves but do not necessarily share semantic meaning with each other. They can, however, be interpreted to capture the notion of issues, and their behavior that of issue-cycles.
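The central modelling choice described here (replacing Brownian motion with a general Gaussian process) comes down to the covariance function, whose hyper-parameters govern how quickly a topic's word distribution may drift. The short Python sketch below contrasts a Brownian-motion kernel with a squared-exponential kernel; the kernel forms are standard, the time axis mirrors the 1987-2007 span of the corpus, and the parameter values are purely illustrative.

import numpy as np

def brownian_kernel(t1, t2, variance=1.0):
    """Covariance of Brownian motion: k(s, t) = variance * min(s, t)."""
    return variance * np.minimum.outer(t1, t2)

def squared_exponential_kernel(t1, t2, variance=1.0, lengthscale=5.0):
    """Smooth, stationary kernel; the lengthscale sets how fast topics may drift."""
    d = np.subtract.outer(t1, t2)
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

# Draw one latent trajectory (e.g., a word's natural parameter over 21 years).
years = np.arange(1987, 2008, dtype=float)
t = years - years[0] + 1e-6                 # shift so Brownian variance starts near zero
rng = np.random.default_rng(42)

draws = {}
for kernel in (brownian_kernel, squared_exponential_kernel):
    K = kernel(t, t) + 1e-8 * np.eye(t.size)   # jitter for numerical stability
    draws[kernel.__name__] = rng.multivariate_normal(np.zeros(t.size), K)

The Brownian draw wanders without bound, whereas the squared-exponential draw changes smoothly at a rate set by the lengthscale, which is what allows time-localized topics to emerge and dissolve.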
26

Modelagem Bayesiana dos tempos entre extrapolações do número de internações hospitalares: associação entre queimadas de cana-de-açúcar e doenças respiratórias / Bayesian modelling of the times between peaks of hospital admissions: association between sugar cane plantation burning and respiratory diseases

Sicchieri, Mayara Piani Luna da Silva 19 December 2012 (has links)
As doenças respiratórias e a poluição do ar são temas de muitos trabalhos científicos, porém a relação entre doenças respiratórias e queimadas de cana-de-açúcar ainda é pouco estudada. A queima da palha da cana-de-açúcar é uma prática comum em grande parte do Estado de São Paulo, com especial destaque para os dados da região de Ribeirão Preto. Os focos de queimadas são detectados por satélites do CPTEC/INPE (Centro de Previsão de Tempo e Estudos Climáticos do Instituto Nacional de Pesquisas Espaciais) e neste trabalho consideramos o tempo entre dias de extrapolação do número de internações diárias. Neste trabalho introduzimos diferentes modelos estatísticos para analisar dados de focos de queimadas e suas relações com as internações por doenças respiratórias. Propomos novos modelos para analisar estes dados, na presença ou não da covariável, que representa o número de queimadas. Sob o enfoque Bayesiano, usando os diferentes modelos propostos, encontramos os sumários a posteriori de interesse utilizando métodos de simulação de Monte Carlo em Cadeias de Markov. Também usamos técnicas Bayesianas para discriminar os diferentes modelos. Para os dados da região de Ribeirão Preto, encontramos modelos que levam à obtenção das inferências a posteriori com grande precisão e verificamos que a presença da covariável nos traz um grande ganho na qualidade dos dados ajustados. Os resultados a posteriori nos sugerem evidências de uma relação entre as queimadas e o tempo entre as extrapolações do número de internações, ou seja, de que quando observamos um maior número de queimadas anteriores à extrapolação, também observamos que o tempo entre as extrapolações é menor. / Relations between respiratory diseases and air pollution have been the subject of many scientific works, but the relation between respiratory diseases and sugar cane burning is still not well studied in the literature. Pre-harvest burning of sugarcane fields, used primarily to get rid of the dried leaves, is common in most of São Paulo state, Southeast Brazil, especially in the Ribeirão Preto region. The locations of pre-harvest sugar cane burning are detected by surveillance satellites of the CPTEC/INPE (Center for Weather Forecasting and Climate Studies of the National Institute for Space Research). In this work, we consider as our data of interest the time, in days, between peak numbers of hospitalizations due to respiratory diseases. Different statistical models are assumed to analyze the data of pre-harvest burning of sugar cane fields and their relations with hospitalizations due to respiratory diseases. These new models are considered to analyze data sets in the presence or absence of covariates, representing the numbers of pre-harvest burning of sugar cane fields. Under a Bayesian approach, we get the posterior summaries of interest using MCMC (Markov Chain Monte Carlo) methods. We also use different existing Bayesian discrimination methods to choose the best model. In our case, considering the data of the Ribeirão Preto region, we observed that the models in the presence of covariates give accurate inferences and a good fit for the data. We concluded that there is evidence of a relationship between respiratory diseases and sugar cane burning, that is, larger numbers of pre-harvest sugar cane fires imply larger numbers of hospitalizations due to respiratory diseases. In this case, we also observe shorter times (in days) between exceedances of the number of hospitalizations.
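A generic version of the kind of model described here, with the time between exceedance days regressed on the number of fires, can be written as an exponential (or, more generally, Weibull) model with a log link; the exact likelihoods and priors compared in the thesis may differ from this sketch.

\begin{align*}
T_j &\sim \operatorname{Exponential}(\lambda_j), \\
\log \lambda_j &= \beta_0 + \beta_1 x_j,
\end{align*}

where T_j is the time in days between the (j-1)-th and j-th exceedance of the daily number of admissions, x_j is the number of fire spots detected before the j-th exceedance, and β_0, β_1 receive prior distributions; a positive β_1 implies shorter waiting times when more fires are observed, which is the direction of the effect reported in the abstract.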
27

Distribuição exponencial generalizada: uma análise bayesiana aplicada a dados de câncer / Generalized exponential distribution: a Bayesian analysis applied to cancer data

Boleta, Juliana 19 December 2012 (has links)
A técnica de análise de sobrevivência tem sido muito utilizada por pesquisadores na área de saúde. Neste trabalho foi usada uma distribuição em análise de sobrevivência recentemente estudada, chamada distribuição exponencial generalizada. Esta distribuição foi estudada sob todos os aspectos: para dados completos e censurados, sob a presença de covariáveis e considerando sua extensão para um modelo multivariado derivado de uma função cópula. Para exemplificação desta nova distribuição, foram utilizados dados reais de câncer (leucemia mielóide aguda e câncer gástrico) que possuem a presença de censuras e covariáveis. Os dados referentes ao câncer gástrico têm a particularidade de apresentar dois tempos de sobrevida, um relativo ao tempo global de sobrevida e o outro relativo ao tempo de sobrevida livre do evento, que foi utilizado para a aplicação do modelo multivariado. Foi realizada uma comparação com outras distribuições já utilizadas em análise de sobrevivência, como a distribuição Weibull e a Gama. Para a análise bayesiana adotamos diferentes distribuições a priori para os parâmetros. Foi utilizado, nas aplicações, métodos de simulação de MCMC (Monte Carlo em Cadeias de Markov) e o software WinBUGS. / Survival analysis methods have been extensively used by health researchers. In this work we propose the use of a recently studied survival model, the generalized exponential distribution. This distribution was studied in all respects: for complete and censored data, in the presence of covariates, and considering its extension to a multivariate model derived from a copula function. To exemplify the use of these models, we consider real cancer lifetime data (acute myeloid leukemia and gastric cancer) in the presence of censoring and covariates. The gastric cancer lifetime data have two survival responses, one related to the overall survival time of the patient and another related to the event-free survival time, that is, multivariate data associated with each patient; the latter was used for the application of the multivariate model. In these applications, a comparative study with standard lifetime distributions, such as the Weibull and gamma distributions, was also carried out. For the Bayesian analysis we assumed different prior distributions for the parameters of the model. For the simulation of samples from the joint posterior distribution of interest, we used standard MCMC (Markov Chain Monte Carlo) methods and the WinBUGS software.
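For reference, the generalized exponential distribution studied here (Gupta and Kundu) has distribution function and density

\[
F(t \mid \alpha, \lambda) = \bigl(1 - e^{-\lambda t}\bigr)^{\alpha}, \qquad
f(t \mid \alpha, \lambda) = \alpha \lambda\, e^{-\lambda t}\bigl(1 - e^{-\lambda t}\bigr)^{\alpha - 1}, \qquad t > 0,\ \alpha, \lambda > 0,
\]

with shape parameter α and rate parameter λ; α = 1 recovers the ordinary exponential distribution. Covariates are commonly introduced through a log-linear model on λ, although whether the thesis uses exactly this link is an assumption here.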
29

Bayesian models and algorithms for protein secondary structure and beta-sheet prediction

Aydin, Zafer 17 September 2008 (has links)
In this thesis, we developed Bayesian models and machine learning algorithms for protein secondary structure and beta-sheet prediction problems. In protein secondary structure prediction, we developed hidden semi-Markov models, N-best algorithms and training set reduction procedures for proteins in the single-sequence category. We introduced three residue dependency models (both probabilistic and heuristic) incorporating the statistically significant amino acid correlation patterns at structural segment borders. We allowed dependencies on positions outside the segments to relax the condition of segment independence. Another novelty of the models is the dependency on downstream positions, which is important due to asymmetric correlation patterns observed uniformly in structural segments. Among the dataset reduction methods, we showed that the composition-based reduction generated the most accurate results. To incorporate non-local interactions characteristic of beta-sheets, we developed two N-best algorithms and a Bayesian beta-sheet model. In beta-sheet prediction, we developed a Bayesian model to characterize the conformational organization of beta-sheets and efficient algorithms to compute the optimum architecture, which includes beta-strand pairings, interaction types (parallel or anti-parallel) and residue-residue interactions (contact maps). We introduced a Bayesian model for proteins with six or fewer beta-strands, in which we model the conformational features in a probabilistic framework by combining the amino acid pairing potentials with a priori knowledge of beta-strand organizations. To select the optimum beta-sheet architecture, we analyzed the space of possible conformations by efficient heuristics, in which we significantly reduce the search space by enforcing the amino acid pairs that have strong interaction potentials. For proteins with more than six beta-strands, we first computed beta-strand pairings using the BetaPro method. Then, we computed gapped alignments of the paired beta-strands in parallel and anti-parallel directions and chose the interaction types and beta-residue pairings with maximum alignment scores. Accurate prediction of secondary structure, beta-sheets and non-local contacts should improve the accuracy and quality of three-dimensional structure prediction.
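As an illustration of the last step described above, choosing between parallel and anti-parallel interaction types by the score of a gapped alignment of paired strands, a toy Needleman-Wunsch scorer in Python is sketched below. The pairing potentials, gap penalty and the treatment of the anti-parallel case as a simple reversal are illustrative simplifications, not the exact procedure of the thesis.

import numpy as np

def align_score(s1, s2, pair_potential, gap=-1.0):
    """Global (Needleman-Wunsch) alignment score of two beta-strand sequences."""
    n, m = len(s1), len(s2)
    dp = np.zeros((n + 1, m + 1))
    dp[:, 0] = gap * np.arange(n + 1)
    dp[0, :] = gap * np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = dp[i - 1, j - 1] + pair_potential.get((s1[i - 1], s2[j - 1]), 0.0)
            dp[i, j] = max(match, dp[i - 1, j] + gap, dp[i, j - 1] + gap)
    return dp[n, m]

# Illustrative potentials favouring hydrophobic-hydrophobic pairings.
toy_potential = {("V", "I"): 1.0, ("I", "V"): 1.0, ("L", "F"): 0.8, ("F", "L"): 0.8}

strand_a = "VILFA"
strand_b = "IVFLG"
parallel = align_score(strand_a, strand_b, toy_potential)
antiparallel = align_score(strand_a, strand_b[::-1], toy_potential)
interaction = "parallel" if parallel >= antiparallel else "anti-parallel"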
30

Tree growth and mortality and implications for restoration and carbon sequestration in Australian subtropical semi-arid forests and woodlands

John Dwyer Unknown Date (has links)
Many researchers have highlighted the dire prospects for biodiversity in fragmented agricultural landscapes and stressed the need for increasing the area of, and connectivity between, natural ecosystems. Some have advocated the use of naturally regenerating forest ecosystems for sequestering atmospheric carbon, with opportunities for dual restoration and carbon benefits. However, no studies have explicitly explored the feasibility of obtaining such dual benefits from a regenerating woody ecosystem. This thesis aims to provide a detailed assessment of the restoration and carbon potential of Brigalow regrowth, an extensive naturally regenerating ecosystem throughout the pastoral regions of north-eastern Australia. It combines observational, experimental and modelling techniques to describe the agricultural legacy of pastoral development, identify constraints to restoration and explore methods to remove these constraints. A review of existing ecological knowledge of Brigalow ecosystems is provided in chapter 3, along with discussion of policy and socio-economic issues that are likely to influence how and to what extent regrowth is utilised for restoration and carbon purposes in the Brigalow Belt. The review found restoring regrowth is likely to have benefits for a wide range of native flora and fauna, including the endangered bridled nailtail wallaby. Knowledge gaps exist relating to the landscape ecology of Brigalow regrowth and the impacts of management and climate change on carbon and restoration potential. Also, a conflict exists between short-term carbon sequestration and long-term restoration goals. Regional demand for high biomass regrowth as a carbon offset is likely to be high, but ambiguities in carbon policy threaten to diminish the use of natural regrowth for reforestation projects. A large cross-sectional study of regrowth is presented in chapter 4. Data were analysed using multi-level / hierarchical Bayesian models (HBMs). Firstly, we found that repeated attempts at clearing Brigalow regrowth increase stem densities and densities remain high over the long term, particularly in high rainfall areas and on clay soils with deep gilgais. Secondly, higher density stands have slower biomass accumulation and structural development in the long term. Spatial extrapolations of the HBMs indicated that the central and eastern parts of the study region are the most environmentally suitable for biomass accumulation; however, these may not correspond to the areas that historically supported the highest biomass Brigalow forests. We conclude that carbon and restoration goals are largely congruent within regions of similar climate. At the regional scale, however, spatial prioritisation of restoration and carbon projects may only be aligned in areas with higher carbon potential. Given the importance of stem density in determining restoration and carbon potential, an experimental thinning trial was established in dense Brigalow regrowth in southern Queensland (chapter 5). Four treatments were applied in a randomised block design and growth and mortality of a subset of stems were monitored for two years. Data were analysed using mixed-effects models and HBMs, and the latter were subsequently used to parameterise an individual-based simulation model of stand structural development and biomass accumulation over 50 years.
The main findings of this study were that growth and mortality of stems are influenced by the amount of space available to each stem (a neighbourhood effect) and that thinning accelerates structural development and increases woody species diversity. The examination of neighbourhood effects is taken further by considering drought-related mortality in a Eucalyptus savanna ecosystem (chapter 6). For this work a multi-faceted approach was employed, including spatial pattern analyses and statistical models of stem survival, to test three competing hypotheses relating to neighbourhood effects on drought-related tree mortality. The main finding of this study was that neighbour density and microsite effects both influence drought-related mortality and that the observed patterns can readily be explained by an interaction between these two factors. As a whole, this thesis contributes the following scientific insights: (1) restoration and carbon goals may be aligned for naturally regenerating woody ecosystems, but the degree of goal congruence will vary across the landscape in question, (2) while some woody ecosystems retain an excellent capacity to regenerate naturally, the agricultural legacy may still have long-term effects on restoration and carbon potential, (3) neighbourhood effects that operate at the stem scale strongly influence dynamics at the ecosystem scale.
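The neighbourhood analysis in chapter 6 can be summarised, in generic form, as a hierarchical model of individual stem survival during drought; the exact predictors and random-effect structure of the thesis may differ from this sketch.

\begin{align*}
s_{ik} &\sim \operatorname{Bernoulli}(p_{ik}), \\
\operatorname{logit}(p_{ik}) &= \beta_0 + \beta_1\, d_{ik} + \beta_2\, m_{ik} + \beta_3\, d_{ik} m_{ik} + b_k, \qquad b_k \sim \mathcal{N}(0, \sigma_b^2),
\end{align*}

where s_{ik} indicates survival of stem i at site k, d_{ik} is local neighbour density, m_{ik} a microsite descriptor, the interaction term captures the finding that the two factors act jointly, and b_k is a site-level random effect.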
