  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Comando e controle no contexto da digitalização : um estudo com base em modelagem computacional / Command and control in the context of digitization: a study based on computational modeling

Bertol, Frederico Licks January 2018 (has links)
Este trabalho propõe uma discussão em torno dos impactos da digitalização sobre sistemas militares de comando e controle. A hipótese central é que o emprego intensivo de tecnologias digitais está associado a um maior risco de sobrecarga informacional nesses sistemas. Isso se aplica em especial às forças militares que adotaram doutrinas de viés tecnocrático, como a guerra centrada em redes. No primeiro capítulo, discutimos o contexto no qual nosso tema de pesquisa se insere, fazendo uma breve retrospectiva do processo de digitalização e também definindo alguns conceitos-chave. No segundo capítulo, em formato de artigo, apresentamos o modelo computacional que foi desenvolvido para simular o funcionamento de um sistema de comando e controle sob a condição de sobrecarga informacional. O artigo também reúne uma revisão crítica das abordagens sobre comando e controle, com ênfase na literatura sobre guerra centrada em redes. O terceiro e último capítulo traz algumas conclusões sobre o emprego da modelagem computacional como metodologia de pesquisa e o estado atual do debate sobre guerra centrada em redes. / This work proposes a discussion of the impacts of digitization on military command and control systems. The central hypothesis is that the intensive deployment of digital technologies is associated with a greater risk of informational overload in those systems. This applies especially to military forces that have adopted doctrines with a technocratic bias, such as network-centric warfare. In the first chapter, we discuss the context of our research topic, offering a brief retrospective of the digitization process and defining some key concepts. In the second chapter, written as an article, we present the computational model developed to simulate the operation of a command and control system under informational overload. The article also contains a critical review of command and control approaches, with emphasis on the literature on network-centric warfare. The third and last chapter presents some conclusions regarding the use of computational modeling as a research method and the current state of the debate on network-centric warfare.
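The overload dynamic this abstract hypothesizes can be illustrated with a toy discrete-time queueing sketch. Everything below is a hypothetical illustration, not the thesis's actual computational model: a command node accumulates a backlog of unprocessed reports whenever arrivals outpace its processing capacity.

```python
import random

def simulate_c2_node(arrival_rate, service_rate, steps, seed=0):
    """Toy discrete-time sketch of a command-and-control node.

    Each step, incoming reports arrive (binomially, with mean
    `arrival_rate`) and the node processes at most `service_rate`
    of them; the backlog trace grows whenever arrivals outpace
    processing capacity, i.e. under informational overload.
    """
    rng = random.Random(seed)
    backlog, trace = 0, []
    for _ in range(steps):
        arrivals = sum(rng.random() < arrival_rate / 10 for _ in range(10))
        backlog = max(0, backlog + arrivals - service_rate)
        trace.append(backlog)
    return trace

# A heavily digitized node receiving more reports than it can process
overloaded = simulate_c2_node(arrival_rate=6, service_rate=4, steps=200)
# The same node with message volume below its processing capacity
balanced = simulate_c2_node(arrival_rate=3, service_rate=4, steps=200)
```

With these made-up rates the overloaded backlog grows roughly linearly while the balanced node stays near zero, mirroring the hypothesis that intensive digital messaging raises overload risk.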
72

Modelo computacional do aumento da tenacidade do concreto de reforço por fibras utilizando ANSYS

Friedrich, Leandro Ferreira 09 May 2016 (has links)
Neste trabalho, apresenta-se um modelo computacional abrangente para análise do compósito cimentício reforçado por fibras, utilizando o software ANSYS, que tem como base o método dos elementos finitos. As simulações têm como foco uma única fibra inserida na matriz de concreto, analisando a contribuição individual de cada fibra para a resistência final do compósito. Através de um procedimento que une ANSYS e MATLAB, é possível unir o modelo em elementos finitos e a modelagem matemática. Utilizando o modelo de zonas coesivas (CZM) e a relação τ(s), o atrito da interface é simulado e as propriedades interfaciais são analisadas. Os resultados da força de ponte versus abertura de trinca são comparados com ensaios experimentais obtidos na literatura para diferentes tipos de fibras inseridas em matriz cimentícia com diferentes ângulos de inclinação em relação à superfície fraturada. Utilizando a superfície de falha para o concreto de Willam-Warnke, o spalling na matriz é quantificado. Para entender como o spalling se forma e se propaga, a influência ainda pouco estudada da distribuição de pressão na interface é analisada. Os resultados mostram que existem parâmetros ótimos que aumentam significativamente a tenacidade. Tanto o spalling como a distribuição de pressão na interface evidenciam por que as características mecânicas de fibra, matriz e interface, e também a geometria da fibra, favorecem ou não a melhora na resistência mecânica do compósito. / This work presents a computational model for the analysis of fiber-reinforced cementitious composites using ANSYS, which is based on the finite element method. The modeling focuses on simulating the pullout of a single fiber embedded in the concrete matrix, analyzing the individual contribution of each fiber to the ultimate strength and toughness of the composite. Through a computational procedure coupling ANSYS and MATLAB, it is possible to connect the finite element model to the mathematical model. Using the cohesive zone model (CZM) and the τ(s) relationship, friction at the interface is simulated, and the interfacial performance as well as the pressure distribution at the interface are analyzed. The results of bridging force versus crack opening are compared with experimental tests from the literature for different types of fibers embedded in a cement matrix at different inclination angles relative to the fractured surface. Using the Willam-Warnke failure surface for the concrete, spalling in the matrix is quantified. To understand how spalling forms and propagates, the still little-studied pressure distribution at the interface is analyzed. The results show that there are optimum parameters that significantly improve the strength and toughness of composites. Both spalling and the pressure distribution at the interface show why the mechanical properties of the fiber, matrix, and interface, as well as the fiber geometry, do or do not favor improvement in the mechanical strength and toughness of the composite.
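As a hedged illustration of the cohesive-zone idea the abstract describes, the sketch below integrates a simple bilinear shear-slip law τ(s) over a single fiber's lateral surface to obtain a bridging force. The law, the fiber dimensions, and all parameter values are hypothetical placeholders, not those calibrated in the thesis.

```python
import math

def bridging_force(slip, d_fiber, embed_len, tau_max, s0):
    """Bridging force of one fiber at a given crack-opening slip.

    Bilinear cohesive law: interface shear grows linearly up to
    tau_max at slip s0, then stays at tau_max (frictional plateau).
    The force is that shear times the fiber's lateral contact area.
    """
    tau = tau_max * min(slip / s0, 1.0)   # shear-slip law tau(s)
    area = math.pi * d_fiber * embed_len  # lateral surface of the fiber
    return tau * area

# Hypothetical fiber: 0.5 mm diameter, 10 mm embedded, tau_max = 5 MPa
f_partial = bridging_force(slip=0.005, d_fiber=0.5e-3, embed_len=10e-3,
                           tau_max=5e6, s0=0.01)
f_plateau = bridging_force(slip=0.020, d_fiber=0.5e-3, embed_len=10e-3,
                           tau_max=5e6, s0=0.01)
```

Summing such per-fiber forces over all fibers crossing a crack gives the composite's bridging response, the quantity the thesis compares against pullout experiments.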
73

Eficiência de equações empíricas utilizadas para determinar lâmina de lixiviação de sais e modelagem da distribuição do sódio no solo / Efficiency of empirical equations used to determine salt leaching water depth and modeling of sodium distribution in soil

Franco, Elenilson Moreira 19 June 2013 (has links)
A definição adequada da lâmina para lixiviação de sais e recuperação de solos salinos depende da qualidade dos resultados obtidos por meio das diversas equações disponíveis para esse fim. Sabendo disso, objetivou-se, com este trabalho: a) avaliar a eficiência de equações empíricas utilizadas para determinar a lâmina de água necessária à recuperação de solos salinos, bem como b) caracterizar a mobilidade e a distribuição do íon sódio em colunas de solo usando dados experimentais e simulados no modelo computacional MIDI. O estudo constou de etapas experimentais e de simulação e foi conduzido nas dependências do Departamento de Engenharia de Biossistemas da Escola Superior de Agricultura "Luiz de Queiroz" - ESALQ/USP, Piracicaba - SP. O experimento em casa de vegetação consistiu na aplicação de três lâminas de lixiviação para lavagem e recuperação de dois materiais de solos, armazenados em 36 colunas. Anteriormente, cada solo foi artificialmente salinizado, por meio da aplicação de cloreto de sódio, elevando-se a condutividade elétrica da solução do solo para valores aproximados de 3,0 e 6,0 dS m-1. Assim, os tratamentos, em delineamento de blocos ao acaso, com três repetições, corresponderam a um fatorial 3 x 2 x 2, decorrente das combinações de três lâminas de lixiviação com dois tipos de solo e dois níveis de salinidade. As lâminas, calculadas a partir do volume de poros de cada solo, foram aplicadas por meio de um sistema de irrigação por gotejamento a uma vazão de 8 L h-1. Após a aplicação das lâminas, a solução do solo de cada coluna foi extraída e levada ao laboratório para se determinar a condutividade elétrica e a concentração de sódio. Nesta etapa, foram avaliadas as alterações nas características químicas do solo em resposta à aplicação das lâminas.
Em seguida, equações empíricas foram utilizadas para estimar as concentrações de sais remanescentes na solução do solo, em função das lâminas de lixiviação aplicadas, enquanto o modelo MIDI foi empregado para simular a distribuição do sódio no perfil do solo. Os cenários teóricos gerados a partir do uso das equações e do modelo MIDI foram comparados com os resultados experimentais observados nos ensaios com as colunas de solo instaladas na casa de vegetação. As concentrações de sódio e, consequentemente, os valores de condutividade elétrica da solução do solo reduziram de maneira inversamente proporcional à aplicação das lâminas de lixiviação, sendo os melhores resultados observados no solo arenoso, em função da maior mobilidade do sódio neste material. De maneira geral, as equações testadas foram mais eficientes no solo arenoso e, dentre elas, a proposta de Cordeiro (2001) foi a que apresentou respostas mais coerentes com os resultados obtidos experimentalmente. / Properly defining the water depth for salt leaching and reclamation of saline soils depends on the quality of the results obtained from the various equations available for this purpose. The objectives of this research were: a) to evaluate the efficiency of empirical equations used to determine the water depth required for saline soil reclamation and b) to characterize the mobility and distribution of sodium in soil columns using experimental data and data simulated with the MIDI computational model. The study consisted of experimental and simulation steps and was carried out at the Department of Biosystems Engineering ("Luiz de Queiroz" College of Agriculture - ESALQ/USP), in Piracicaba, SP. The greenhouse experiment consisted of applying three leaching water depths for washing and reclaiming two soil types stored in 36 columns. Each soil sample was first artificially salinized by applying sodium chloride, raising the electrical conductivity (EC) of the soil solution to approximately 3.0 and 6.0 dS m-1.
Thus, the treatments, in a randomized block design with three replications, corresponded to a 3 x 2 x 2 factorial arising from the combinations of three water depths, two soil types, and two salinity levels. The water depths, calculated from the pore volume of each soil type, were applied by a drip irrigation system at a flow rate of 8 L h-1. After the water depths were applied, the soil solution of each column was extracted and taken to the laboratory to determine the EC and sodium concentration. The changes in soil chemical properties in response to the application of the water depths were then evaluated. Empirical equations were used to estimate the remaining sodium concentrations in the soil solution as a function of the applied water depth, while the MIDI model was used to simulate the distribution of sodium in the soil profile. The theoretical scenarios generated from the equations and the MIDI model were compared with the experimental results observed in the tests with soil columns installed in the greenhouse. The sodium concentrations and, consequently, the EC values of the soil solution decreased in inverse proportion to the applied leaching water depths. The best results were observed in the sandy soil, owing to the greater mobility of sodium in this material. In general, the tested equations were more efficient in the sandy soil and, among them, the one proposed by Cordeiro (2001) showed the best agreement with the experimental results.
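Empirical leaching curves of the kind compared in this study can be typified by an exponential decay of salinity with applied depth. The sketch below uses that generic form with an arbitrary rate constant; it is not the specific equation of Cordeiro (2001) or any other equation evaluated in the thesis.

```python
import math

def remaining_salinity(ec_initial, leach_depth, pore_volumes, k=1.0):
    """Generic exponential salt-leaching curve (illustrative only).

    Estimates soil-solution electrical conductivity (dS/m) after a
    leaching water depth expressed relative to the soil pore volume:
    EC = EC0 * exp(-k * depth / pore_volumes).
    """
    return ec_initial * math.exp(-k * leach_depth / pore_volumes)

# Soil salinized to 6.0 dS/m, leached with one and then two pore volumes
ec_one = remaining_salinity(6.0, leach_depth=1.0, pore_volumes=1.0)
ec_two = remaining_salinity(6.0, leach_depth=2.0, pore_volumes=1.0)
```

Fitting the rate constant k separately for sandy and clayey soils is one simple way such equations capture the higher sodium mobility the study observed in sandy material.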
74

Método de modelagem de arquitetura corporativa. / Modeling method for enterprise architecture.

Rosa, Fabio Alexandre Justo 02 September 2008 (has links)
Business Process Management (BPM) [4] é uma forma sistemática e estruturada para analisar, melhorar, controlar e gerenciar processos com a meta de melhorar a qualidade de produtos e serviços [2]. No decorrer da última década, diversos estudos têm apontado para a importância da integração entre modelos de negócios e arquitetura de Tecnologia da Informação (TI) na busca de uma fundação para execução efetiva dos objetivos de negócios [1]. A proposta desta dissertação consiste num método estruturado de modelagem de Arquitetura Corporativa, baseado em perfis UML para modelagem de fluxos de processo de negócio [3], aplicações, dados e infra-estrutura de TI. O método proposto é validado com um estudo de caso no qual é detalhada toda a interdependência entre um processo de negócio e a arquitetura de TI que o suporta, ou seja, a Arquitetura Corporativa [1] do processo de negócio. / Business Process Management (BPM) [4] is a systematic and structured approach to analyzing, improving, controlling, and managing processes with the goal of enhancing the quality of products and services [2]. Over the past decade, several studies have indicated the importance of integrating business models and Information Technology (IT) architecture, aiming at an effective foundation for the execution of business objectives [2]. This dissertation proposes a structured method for documenting Enterprise Architecture, based on UML profiles for business process modeling [3], applications, data, and IT infrastructure. The proposed method is validated with a case study in which the full interdependence between a business process and the IT architecture supporting it, i.e., the Enterprise Architecture [1] of that business process, is detailed.
75

Computational and experimental investigations of laser drilling and welding for microelectronic packaging

Han, Wei 13 May 2004 (has links)
Recent advances in the microelectronics and packaging industry are characterized by progressive miniaturization in response to a general trend toward higher integration and package density. This trend poses challenges to traditional manufacturing processes. Some of these challenges can be met by laser micromachining because of its inherent advantages: there is no tool wear, the heat-affected zone can be localized to a very small area, and laser micromachining systems can be operated over a very wide range of speeds. Applications of laser micromachining include pulsed Nd:YAG laser spot welding for photonic devices and laser microdrilling in the computer printed circuit board market. Although laser micromachining has become widely used in the microelectronics and packaging industry, it still produces results with variability in properties and quality due to the very complex phenomena involved in the process, including, but not limited to, heat transfer, fluid flow, plasma effects, and metallurgical problems. Therefore, in order to exploit the advantages of laser micromachining and achieve the anticipated results, it is necessary to develop a thorough understanding of the physical processes involved, especially those relating to microelectronics and packaging applications. The objective of this Dissertation was to study laser micromachining processes, especially laser drilling and welding of metals and their alloys, for microscale applications. The investigations performed in this Dissertation were based on the analytical, computational, and experimental solutions (ACES) methodology.
More specifically, the studies focused on the development of a consistent set of equations representing the interaction of the laser beam with the materials of interest in this Dissertation, the solution of these equations by the finite difference method (FDM) and the finite element method (FEM), experimental demonstration of laser micromachining, and correlation of the results. The contributions of this Dissertation include: 1) development of a finite difference method (FDM) program with a color graphic interface, which can adjust the laser power distribution, the coefficient of energy absorption, and nonlinear material properties of the workpiece as functions of temperature, and can be extended to calculate fluid dynamic phenomena and the profiles of laser-micromachined workpieces; 2) detailed investigations of the effect of laser operating parameters on the profiles and dimensions of laser-microdrilled or microwelded workpieces, which provide guidelines for, and advance, currently existing laser micromachining processes; 3) use, for the first time, of a novel optoelectronic holography (OEH) system, which provides non-contact full-field deformation measurements with sub-micrometer accuracy, for quantitative characterization of thermal deformations of laser-micromachined parts; 4) experimental evaluations of the strength of laser microwelds as a function of laser power level and number of microwelds, which showed lower values than the strength of the base material due to increased hardness in the heat-affected zone (HAZ) of the microwelds; 5) measurements of temperature profiles during laser microwelding, which showed good correlation with computational results; 6) detailed consideration of the absorption of laser beam energy, the effect of thermal and aerodynamic conditions due to shielding gas, and the formation of plasma and its effect on laser micromachining processes.
The investigations presented in this Dissertation show the viability of laser micromachining processes, account for the considerations required for a better understanding of those processes, and provide guidelines that can help explain and advance currently existing laser micromachining techniques. The results of this Dissertation will facilitate improvements and optimizations of state-of-the-art laser micromachining techniques and enable emerging technologies in the multi-disciplinary field of microelectronics and packaging.
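The FDM heat-conduction core of such a laser-processing model can be sketched as a one-dimensional explicit (FTCS) scheme. The grid, diffusivity, and "laser pulse" initial condition below are hypothetical stand-ins, far simpler than the dissertation's program, but they show the stencil and the stability constraint such a code must respect.

```python
def heat_step(T, alpha, dx, dt):
    """One explicit finite-difference (FTCS) step of the 1-D heat
    equation with fixed (Dirichlet) boundary temperatures."""
    r = alpha * dt / dx ** 2  # must satisfy r <= 0.5 for stability
    assert r <= 0.5, "unstable time step"
    return [T[0]] + [
        T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
        for i in range(1, len(T) - 1)
    ] + [T[-1]]

# A cold rod with a hot spot at its center, a crude laser-pulse analogue
T = [0.0] * 5 + [1000.0] + [0.0] * 5
for _ in range(50):
    T = heat_step(T, alpha=1e-4, dx=1e-3, dt=4e-3)  # r = 0.4
```

Temperature-dependent material properties, as in the dissertation's program, would enter by recomputing alpha from the local temperature at each step.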
76

Understanding complex systems through computational modeling and simulation / Comprendre les systèmes complexes par la modélisation et la simulation computationnelles

Le, Xuan Tuan 18 January 2017 (has links)
Les approches de simulation classiques ne sont en général pas adaptées pour traiter les aspects de complexité que présentent les systèmes complexes, tels que l'émergence ou l'adaptation. Dans cette thèse, l'auteur s'appuie sur ses travaux menés dans le cadre d'un projet de simulation de l'épidémie de grippe en France, associée à des interventions sur une population, en considérant le phénomène étudié comme un processus diffusif sur un réseau complexe d'individus ; l'originalité réside dans le fait que la population y est considérée comme un système réactif. La modélisation de tels systèmes nécessite de spécifier explicitement le comportement des individus et leurs réactions, tout en produisant un modèle informatique qui doit être à la fois flexible et réutilisable. Les diagrammes d'états sont proposés comme une approche de programmation reposant sur une modélisation validée par l'expertise. Ils correspondent également à une spécification du code informatique désormais disponible dans les outils logiciels de programmation agent. L'approche agent de type bottom-up permet d'obtenir des simulations de scénarios « what-if » où le déroulement des actions peut nécessiter que les agents s'adaptent aux changements de contexte. Cette thèse propose également l'apprentissage pour un agent par l'emploi d'arbres de décision, afin d'apporter flexibilité et lisibilité à la définition du modèle de comportement des agents et une prise de décision adaptée au cours de la simulation. Notre approche de modélisation computationnelle est complémentaire aux approches traditionnelles et peut se révéler indispensable pour garantir une approche pluridisciplinaire validable par l'expertise. / Traditional approaches are not sufficient, and sometimes impossible, for dealing with complexity issues such as emergence, self-organization, evolution, and adaptation of complex systems.
As illustrated in this thesis by the author's practical work on a real-life project, the spreading of an infectious disease, as well as interventions against it, can be considered diffusion processes on a complex network of heterogeneous individuals in a society, which is treated as a reactive system. Modeling this system requires explicitly specifying each individual's behaviors and (re)actions and transforming them into a computational model that is flexible, reusable, and easy to code. Statecharts, typical of model-based programming, are the solution this thesis proposes. Bottom-up agent-based simulation reveals emergence episodes in what-if scenarios that change the rules governing agents' behaviors, requiring agents to learn to adapt to these changes. Decision tree learning is proposed to bring more flexibility and legibility to the modeling of agents' autonomous decision-making during simulation runtime. Computational models such as agent-based models are complementary to traditional ones, and in some cases they are the only viable option due to legal and ethical issues.
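A minimal bottom-up diffusion sketch in the spirit described above can be written in a few lines. The ring network, transmission rule, and parameters are toy assumptions, not the thesis's influenza model for France.

```python
import random

def spread_on_network(contacts, seed_node, p_transmit, steps, rng_seed=1):
    """Minimal diffusion of an infection over a contact network.

    `contacts` maps each individual to its neighbors. Each step, every
    infected agent independently transmits to each susceptible
    neighbor with probability p_transmit, a reactive, bottom-up rule
    in the spirit of agent-based epidemic models.
    """
    rng = random.Random(rng_seed)
    infected = {seed_node}
    for _ in range(steps):
        new = {n for i in infected for n in contacts[i]
               if n not in infected and rng.random() < p_transmit}
        infected |= new
    return infected

# A small ring network of 10 individuals (hypothetical population)
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
result = spread_on_network(ring, seed_node=0, p_transmit=1.0, steps=3)
```

Interventions (vaccination, isolation) would be modeled by removing nodes or lowering p_transmit on selected edges; statecharts or learned decision trees would replace the fixed transmission rule with per-agent adaptive behavior.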
77

Estudos estruturais e computacionais das proteínas tirosina fosfatase A e B de Mycobacterium tuberculosis / Structural and computational studies from protein tyrosine phosphatase A and B of Mycobacterium tuberculosis.

Vanessa Kiraly Thomaz Rodrigues 27 October 2016 (has links)
Tuberculose (TB) é um grave problema de saúde pública, sendo a segunda maior causa de morte entre as doenças infectocontagiosas. Em 2014, 9,6 milhões de casos e, aproximadamente, 1,5 milhão de mortes foram reportados. O Programa Nacional de Controle da Tuberculose preconiza para o tratamento a administração simultânea de quatro medicamentos. Contudo, casos de tratamento inadequado favorecem o surgimento de cepas multirresistentes e extensivamente resistentes aos medicamentos disponíveis. Diante disso, torna-se urgente a necessidade de investigar novos alvos moleculares e desenvolver novos fármacos que sejam úteis e eficazes para o tratamento da infecção. As proteínas tirosina fosfatases (PTPs) constituem uma grande família de enzimas responsáveis pela hidrólise do fosfato ligado aos resíduos de tirosina em proteínas. A importância destas fosfatases reside no fato de estarem envolvidas na regulação de uma série de funções celulares, tais como crescimento, interação intercelular, metabolismo, transcrição, motilidade e resposta imune. A partir da análise do genoma do Mycobacterium tuberculosis, foram identificadas duas proteínas tirosina fosfatases (PtpA e PtpB), responsáveis pela sua sobrevivência nos macrófagos do hospedeiro. Ambas as enzimas têm sido exploradas como alvo molecular para o desenvolvimento de novos fármacos para a tuberculose. Nessa dissertação, as sequências gênicas que codificam as enzimas PtpA e PtpB de M. tuberculosis foram clonadas com sucesso nos vetores de expressão. A expressão solúvel das proteínas permitiu o estabelecimento de um protocolo padronizado de purificação. Ensaios de cristalização foram conduzidos e os cristais de proteína obtidos tiveram seus dados cristalográficos coletados. Para a enzima PtpB, foi possível determinar a estrutura cristalográfica em alta resolução em complexo com um grupo fosfato no sítio catalítico. Essa estrutura foi então utilizada na etapa posterior de descoberta de novos candidatos a inibidores.
Os trabalhos computacionais conduzidos incluíram uma combinação de estratégias para a identificação de pontos de interação relevantes para o processo de reconhecimento molecular e ligação, bem como para a construção de modelos farmacofóricos 3D específicos para cada enzima. Esses dados foram utilizados para a seleção de um conjunto de 8 candidatos a inibidores da PtpA e 5 candidatos a inibidores da PtpB. Portanto, estudos de biologia molecular estrutural e química medicinal foram empregados com sucesso para o estabelecimento de uma plataforma produtiva dos alvos selecionados, bem como para a seleção de novos candidatos a inibidores. / Tuberculosis (TB) is a serious public health problem and the second leading cause of death among infectious diseases. In 2014, 9.6 million cases and approximately 1.5 million deaths were reported. The National Program for Tuberculosis Control recommends the simultaneous administration of four drugs as treatment for the disease. However, inadequate treatment favors the emergence of strains that are multidrug-resistant and extensively resistant to available drugs. Therefore, new molecular targets and drugs are urgently needed for the treatment of the infection. The protein tyrosine phosphatases (PTPs) are a large family of enzymes responsible for the hydrolysis of phosphate groups bound to tyrosine residues in proteins. The importance of these molecules is related to the regulation of a number of cellular functions, including growth, intercellular interaction, metabolism, transcription, motility, and immune response. Based on analysis of the Mycobacterium tuberculosis genome, two protein tyrosine phosphatases (PtpA and PtpB) were linked to mycobacterial survival in host macrophages. Both enzymes have been explored as molecular targets for the development of new drugs for TB. In this dissertation, the gene sequences encoding the PtpA and PtpB enzymes from M. tuberculosis were successfully cloned into expression vectors.
The soluble expression of the proteins allowed the establishment of a standardized purification protocol. Crystallization assays were conducted, protein crystals were obtained, and crystallographic data were collected. We determined the high-resolution crystallographic structure of PtpB in complex with a phosphate group in the catalytic site. This structure was then used in the subsequent step of discovering new inhibitor candidates. Computational studies included a combination of strategies for identifying interaction points relevant to molecular recognition and binding, as well as for constructing 3D pharmacophore models specific to each enzyme. These data were used to select sets of 8 and 5 compounds as PtpA and PtpB inhibitor candidates, respectively. Therefore, structural molecular biology and medicinal chemistry studies were successfully employed to establish a production platform for the selected targets and to select new inhibitor candidates.
79

Anthropogenic Fire and the Development of Neolithic Agricultural Landscapes: Connecting Archaeology, Paleoecology, and Fire Science to Evaluate Human Impacts on Fire Regimes

January 2019 (has links)
The recent emergence of global 'megafires' has made it imperative to better understand the role of humans in altering the size, distribution, and seasonality of fires. The dynamic relationship between humans and fire is not a recent phenomenon; rather, fire has deep roots in our biological and cultural evolution. Because of its long-term perspective, archaeology is uniquely positioned to investigate the social and ecological drivers behind anthropogenic fire. However, the field faces challenges in creating solution-oriented research for managing fire in the future. In this dissertation, I originate new methods and approaches to archaeological data that enable us to interpret humans' long-term influences on fire regimes. I weave together human niche construction theory and ecological resilience, creating connections between archaeology, paleoecology, and fire ecology. Three stand-alone studies illustrate the usefulness of these methods and theories for charting changes in land use, fire regimes, and vegetation communities during the Neolithic Transition (7600 - 3800 cal. BP) in eastern Spain. In the first study (Ch. II), I analyze archaeological survey data using Bayesian methods to extract land-use intensities from mixed surface assemblages in a case study in the Canal de Navarrés. The second study (Ch. III) builds on the archaeological data collected, using a computational model of landscape fire, charcoal dispersion, and deposition to test how multiple models of natural and anthropogenic fire activity could have contributed to the formation of a single sedimentary charcoal dataset from the Canal de Navarrés. Finally, the third study (Ch. IV) incorporates the modeling and data generated in the previous chapters into the sampling and analysis of sedimentary charcoal data from alluvial contexts in three study areas throughout eastern Spain.
Results indicate that anthropogenic fire played a significant role in the creation of agricultural landscapes during the Neolithic period, but sustained, low-intensity burning after the late Neolithic period maintained the human-created niche for millennia beyond the arrival of agro-pastoral land use. With global fire activity on the rise, it is vital to incorporate perspectives on the origins, development, and maintenance of human-fire relationships to effectively manage fire in today's coupled social-ecological landscapes. / Dissertation/Thesis / Doctoral Dissertation Anthropology 2019
80

Understanding the partitioning of rainfall by the maize canopy through computational modelling and physical measurements

Frasson, Renato Prata de Moraes 01 December 2011 (has links)
The interception and redirection of rainfall by vegetation has implications for many fields, such as remote sensing of soil moisture, satellite observation of rainfall, and the modeling of runoff, climate, and soil erosion. Although the modeling of rainfall partitioning by forests has received attention in the past, partitioning by crops has been overlooked. The present work proposes a two-front experimental and computational methodology to comprehensively study rainfall interception and partitioning by the maize canopy. In the experimental stage, we deployed two compact weather stations, two optical disdrometers, and five tipping-bucket rain gauges. Two of the tipping-bucket rain gauges were modified to measure throughfall, while two were adapted to measure stemflow. The first optical disdrometer allowed for inspection of the unmodified drop-size and velocity distributions, whereas the second disdrometer measured the corresponding distributions under the canopy. Comparing the two distributions indicates that the outcome of the interaction between the hydrometeors and the canopy depends on the drop diameter. In the computational stage, we created a model that uses drop-size and velocity distributions as well as a three-dimensional digital canopy to simulate the movement of raindrops on the surfaces of leaves. Our model considers interception, redirection, retention, coalescence, breakup, and re-interception of drops to calculate the stemflow, throughfall, and equivalent height of precipitation stored on plants for a given storm. Moreover, the throughfall results are presented as two-dimensional matrices, where each term corresponds to the accumulated volume of drops that dripped at a given location. This allows insight into the spatial distribution of throughfall beneath the foliage.
Finally, we examine the way in which the maize canopy modifies the drop-size distribution by recalculating the drop velocity based on the raindrop's size and detachment height and by storing the counts of drops in diameter-velocity classes that are consistent with the classes used by disdrometers in the experimental study.
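The partitioning logic can be caricatured as a Monte Carlo over individual drops. The two-branch rule and the probabilities below are hypothetical placeholders, not measured maize-canopy parameters or the model's actual drop-tracking scheme.

```python
import random

def partition_rainfall(n_drops, p_hit_canopy, p_drain_to_stem, rng_seed=2):
    """Toy Monte Carlo partition of raindrops by a crop canopy.

    Each drop either passes through gaps in the foliage (free
    throughfall) or hits the canopy; intercepted water then drains
    along the leaf to the stem (stemflow) or drips off the leaf edge
    (release throughfall).
    """
    rng = random.Random(rng_seed)
    counts = {"throughfall": 0, "stemflow": 0}
    for _ in range(n_drops):
        if rng.random() < p_hit_canopy and rng.random() < p_drain_to_stem:
            counts["stemflow"] += 1
        else:
            counts["throughfall"] += 1
    return counts

parts = partition_rainfall(10_000, p_hit_canopy=0.7, p_drain_to_stem=0.4)
```

Making both probabilities depend on drop diameter and leaf geometry, and tracking where each released drop lands, is what turns this caricature into the spatially resolved throughfall matrices the abstract describes.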
