  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Ανάπτυξη υπολογιστικού μοντέλου προσωμοίωσης φθοριζόντων υλικών ανιχνευτών ιατρικής απεικόνισης με τεχνικές Monte Carlo / Development of computerized simulation model on phosphor materials detectors of medical imaging by Monte Carlo methods

Λιαπαρίνος, Παναγιώτης Φ. 23 October 2007 (has links)
The intrinsic phosphor properties are of significant importance for the performance of phosphor screens used in medical imaging systems. In previous analytical-theoretical and Monte Carlo studies on granular phosphor materials, values of optical properties and light interaction cross sections were found by fitting to experimental data. These values were then employed to assess phosphor screen imaging performance. However, it was found that, depending on the experimental technique and fitting methodology, the optical parameters of a specific phosphor material varied within a wide range of values; for example, different optical scattering cross sections had been published for the same phosphor thickness. In this study, x-ray and light transport within granular phosphor materials were studied by developing a computational model using Monte Carlo methods. The model was based solely on the intrinsic physical characteristics of the phosphor; input values required to feed the model can be easily obtained from tabulated data. The complex refractive index was introduced and microscopic probabilities for light interactions were produced using Mie scattering theory. Model validation was carried out by comparing model results on x-ray and light parameters (x-ray absorption, statistical fluctuations in the x-ray to light conversion process, number of emitted light photons, output light spatial distribution) with previously published experimental data on Gd2O2S:Tb phosphor material (Kodak Min-R screen). 
Results showed the dependence of the modulation transfer function (MTF) on phosphor grain size and material packing density. It was predicted that granular Gd2O2S:Tb screens of high packing density and small grain size may exhibit considerably better resolution and light emission properties than the conventional Gd2O2S:Tb screens, under similar conditions (x-ray incident energy, screen thickness).
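The optical Monte Carlo approach described above can be illustrated with a toy photon walk through a phosphor slab. This is a minimal one-dimensional sketch: the interaction coefficients are invented for illustration (the thesis derives microscopic probabilities from Mie theory and the complex refractive index of the material), and re-scattering is treated as isotropic.

```python
import math
import random

# Hypothetical optical interaction coefficients for a granular phosphor
# (illustrative values, NOT fitted Gd2O2S:Tb data; the thesis derives
# such probabilities from Mie theory and the complex refractive index).
MU_SCAT = 25.0   # scattering coefficient (mm^-1)
MU_ABS = 0.5     # absorption coefficient (mm^-1)
MU_TOT = MU_SCAT + MU_ABS

def transmit_photon(thickness_mm):
    """Walk one optical photon through a 1-D slab; True if it escapes."""
    depth, mu = 0.0, 1.0          # mu = cosine of the polar angle
    while True:
        # Free path sampled from the exponential attenuation law
        step = -math.log(1.0 - random.random()) / MU_TOT
        depth += mu * step
        if depth >= thickness_mm:
            return True            # escaped through the exit face
        if depth <= 0.0:
            return False           # lost through the entrance face
        if random.random() < MU_ABS / MU_TOT:
            return False           # absorbed in the phosphor
        mu = 2.0 * random.random() - 1.0   # isotropic re-scattering

random.seed(1)
n = 10_000
escaped = sum(transmit_photon(0.03) for _ in range(n))
print(f"escape fraction: {escaped / n:.3f}")
```

A full simulation would also track lateral position, so that the output light spatial distribution (and hence the MTF) can be accumulated per exit location.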
42

Uncertainty in the Bifurcation Diagram of a Model of Heart Rhythm Dynamics

Ring, Caroline January 2014 (has links)
To understand the underlying mechanisms of cardiac arrhythmias, computational models are used to study heart rhythm dynamics. The parameters of these models carry inherent uncertainty. Therefore, to interpret the results of these models, uncertainty quantification (UQ) and sensitivity analysis (SA) are important. Polynomial chaos (PC) is a computationally efficient method for UQ and SA in which a model output Y, dependent on some independent uncertain parameters represented by a random vector ξ, is approximated as a spectral expansion in multidimensional orthogonal polynomials in ξ. The expansion can then be used to characterize the uncertainty in Y.

PC methods were applied to UQ and SA of the dynamics of a two-dimensional return-map model of cardiac action potential duration (APD) restitution in a paced single cell. Uncertainty was considered in four parameters of the model: three time constants and the pacing stimulus strength. The basic cycle length (BCL) (the period between stimuli) was treated as the control parameter. Model dynamics were characterized with bifurcation analysis, which determines the APD and stability of fixed points of the model over a range of BCLs, and the BCLs at which bifurcations occur. These quantities can be plotted in a bifurcation diagram, which summarizes the dynamics of the model. PC UQ and SA were performed for these quantities. UQ results were summarized in a novel probabilistic bifurcation diagram that visualizes the APD and stability of fixed points as uncertain quantities.

Classical PC methods assume that model outputs exist and are reasonably smooth over the full domain of ξ. Because models of heart rhythm often exhibit bifurcations and discontinuities, their outputs may not obey the existence and smoothness assumptions on the full domain, but only on some subdomains, which may be irregularly shaped. On these subdomains, the random variables representing the parameters may no longer be independent. PC methods therefore must be modified for analysis of these discontinuous quantities. The Rosenblatt transformation maps the variables on the subdomain onto a rectangular domain; the transformed variables are independent and uniformly distributed. A new numerical estimation of the Rosenblatt transformation was developed that improves accuracy and computational efficiency compared to existing kernel density estimation methods. PC representations of the outputs in the transformed variables were then constructed. Coefficients of the PC expansions were estimated using Bayesian inference methods. For discontinuous model outputs, SA was performed using a sampling-based variance-reduction method, with the PC estimation used as an efficient proxy for the full model.

To evaluate the accuracy of the PC methods, PC UQ and SA results were compared to large-sample Monte Carlo UQ and SA results. PC UQ and SA of the fixed-point APDs, and of the probability that a stable fixed point existed at each BCL, were very close to MC UQ results for those quantities. However, PC UQ and SA of the bifurcation BCLs were less accurate compared to MC results.

The computational time required for PC and Monte Carlo methods was also compared. PC analysis (including Rosenblatt transformation and Bayesian inference) required less than 10 total hours of computational time, of which approximately 30 minutes was devoted to model evaluations, compared to approximately 65 hours required for Monte Carlo sampling of the model outputs at 1 × 10^6 ξ points.

PC methods provide a useful framework for efficient UQ and SA of the bifurcation diagram of a model of cardiac APD dynamics. Model outputs with bifurcations and discontinuities can be analyzed using modified PC methods. The methods applied and developed in this study may be extended to other models of heart rhythm dynamics. These methods have potential for use for uncertainty and sensitivity analysis in many applications of these models, including simulation studies of heart rate variability, cardiac pathologies, and interventions. / Dissertation
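The spectral-expansion idea behind polynomial chaos can be sketched in one dimension. The snippet below builds a Legendre PC expansion of a toy output g(ξ) with ξ uniform on [-1, 1], and reads the mean and variance directly off the coefficients; the function g and the truncation order are illustrative stand-ins, not the dissertation's four-parameter cardiac model.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Toy model output Y = g(xi), xi uniform on [-1, 1]; a stand-in for a
# quantity such as a fixed-point APD in the dissertation's model.
g = np.exp

order = 8
nodes, weights = L.leggauss(order + 1)   # Gauss-Legendre quadrature rule

# Projection: c_k = (2k+1)/2 * integral over [-1,1] of g(x) * P_k(x) dx
coeffs = []
for k in range(order + 1):
    Pk = L.Legendre.basis(k)
    ck = (2 * k + 1) / 2.0 * np.sum(weights * g(nodes) * Pk(nodes))
    coeffs.append(ck)

# Moments follow directly from the spectral coefficients, because
# E[P_k(xi)] = 0 for k >= 1 and E[P_k(xi)^2] = 1/(2k+1).
mean_pc = coeffs[0]
var_pc = sum(c**2 / (2 * k + 1) for k, c in enumerate(coeffs) if k > 0)

print(f"PC mean {mean_pc:.6f}  (exact {(np.e - np.exp(-1)) / 2:.6f})")
print(f"PC var  {var_pc:.6f}")
```

Sensitivity indices in the multidimensional case are obtained the same way: Sobol' variance contributions are partial sums of squared coefficients over the polynomial terms involving each parameter.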
43

A theoretical and experimental study of automotive catalytic converters

Clarkson, Rory John January 1995 (has links)
In response to the increasingly widespread use of catalytic converters for meeting automotive exhaust emission regulations, considerable attention is currently being directed towards improving their performance. Experimental analysis is costly and time consuming; a desirable alternative is computational modelling. This thesis describes the development of a fully integrated computational model for simulating monolith-type automotive catalytic converters. Two commercial CFD codes, PHOENICS and STAR-CD, were utilised to implement established techniques for modelling the flow field in catalyst assemblies. To appraise the accuracy of the flow field predictions, an isothermal steady flow rig was designed and developed. A selection of axisymmetric inlet diffusers and 180° expansions were tested, with the velocity profile across the monolith, the wall static pressure distribution along the inlet section and the total pressure drop across the assembly being measured. These data sets were compared with predictions using a variety of turbulence models and solution algorithms. The closest agreement was achieved with a two-layer near-wall approach, coupled to the fully turbulent version of the RNG k-ε model, and a nominally second-order differencing scheme. Even with these approaches the predicted velocity profiles were too flat, the maximum velocity being as much as 17.5% too low. Agreement on pressure drops was better, the error being consistently less than 10%. These results illustrate that present modelling techniques are insufficiently reliable for accurate predictions. It is suggested that the major reason for the relatively poor performance of these techniques is the neglect of channel entrance effects in the monolith pressure drop term. Despite these weaknesses it was possible to show that the model reproduces the correct trends, and magnitude of change, in pressure drop and velocity distributions as the catalyst geometry changes. 
The PHOENICS flow field model was extended to include the heat transfer, mass transfer and chemical reactions associated with catalysts. The methodology is based on an equivalent continuum approach. The result is a reacting model capable of simulating the three-dimensional distribution of solid and gas temperatures, species concentrations and flow field variables throughout the monolith mat, and the effects that moisture has on the transient warm-up of the monolith. To assess the reacting model's accuracy, use was made of published light-off data from a catalyst connected to a test-bed engine. Comparison with predicted results showed that the model was capable of reproducing the correct type, and time scales, of temperature and conversion efficiency behaviour during the warm-up cycle. From these predictions it was possible to show that the flow distribution across the monolith can change significantly during light-off. Following the identification, and subsequent modelling, of the condensation and evaporation of water during the warm-up process, it was possible to show that, under the catalyst conditions tested, these moisture effects do not affect light-off times. Conditions under which moisture might affect light-off have been suggested. Although the general level of model accuracy may be acceptable for studying many catalyst phenomena, known deficiencies in the reaction kinetics used, errors in the flow field predictions, uncertainty over many of the physical constants and necessary model simplifications mean that accurate quantitative predictions are still lacking. Improving the level of accuracy will require a systematic experimental approach followed by model refinements.
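For context, the fully developed laminar pressure drop in a single monolith channel — the term the thesis argues must be supplemented by entrance effects — has a simple closed form. The sketch below assumes a square channel (Darcy friction factor times Reynolds number ≈ 56.9) and invented exhaust-like conditions; it is not the thesis's model.

```python
# Fully developed laminar pressure drop in one square monolith channel
# (Hagen-Poiseuille form): dp = f*(L/d_h)*(rho*u^2/2) with f = F_RE/Re,
# which simplifies so density cancels. Values below are illustrative.
F_RE = 56.9          # (Darcy friction factor) * Re for a square duct

def monolith_dp(mu, length, velocity, d_h):
    """Pressure drop (Pa), neglecting entrance effects as many monolith
    models do; entrance effects would add to this value."""
    return F_RE * mu * length * velocity / (2.0 * d_h**2)

# Assumed conditions: hot exhaust gas, 150 mm monolith, 1 mm channels
dp = monolith_dp(mu=3.0e-5, length=0.15, velocity=5.0, d_h=1.0e-3)
print(f"pressure drop: {dp:.1f} Pa")
```

Because the developing-flow (entrance) contribution scales differently with channel length and Reynolds number, omitting it biases exactly the kind of comparison reported above.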
44

Characterization of Evoked Potentials During Deep Brain Stimulation in the Thalamus

Kent, Alexander Rafael January 2013 (has links)
Deep brain stimulation (DBS) is an established surgical therapy for movement disorders. The mechanisms of action of DBS remain unclear, and selection of stimulation parameters is a clinical challenge and can result in sub-optimal outcomes. Closed-loop DBS systems would use a feedback control signal for automatic adjustment of DBS parameters and improved therapeutic effectiveness. We hypothesized that evoked compound action potentials (ECAPs), generated by activated neurons in the vicinity of the stimulating electrode, would reveal the type and spatial extent of neural activation, as well as provide signatures of clinical effectiveness. The objective of this dissertation was to record and characterize the ECAP during DBS to determine its suitability as a feedback signal in closed-loop systems. The ECAP was investigated using computer simulation and in vivo experiments, including the first preclinical and clinical ECAP recordings made from the same DBS electrode implanted for stimulation.

First, we developed DBS-ECAP recording instrumentation to reduce the stimulus artifact and enable high fidelity measurements of the ECAP at short latency. In vitro and in vivo validation experiments demonstrated the capability of the instrumentation to suppress the stimulus artifact, increase amplifier gain, and reduce distortion of short latency ECAP signals.

Second, we characterized ECAPs measured during thalamic DBS across stimulation parameters in anesthetized cats, and determined the neural origin of the ECAP using pharmacological interventions and a computer-based biophysical model of a thalamic network. This model simulated the ECAP response generated by a population of thalamic neurons, calculated ECAPs similar to experimental recordings, and indicated the relative contribution from different types of neural elements to the composite ECAP. Signal energy of the ECAP increased with DBS amplitude or pulse width, reflecting an increased extent of activation. Shorter latency, primary ECAP phases were generated by direct excitation of neural elements, whereas longer latency, secondary phases were generated by post-synaptic activation.

Third, intraoperative studies were conducted in human subjects with thalamic DBS for tremor, and the ECAP and tremor responses were measured across stimulation parameters. ECAP recording was technically challenging due to the presence of a wide range of stimulus artifact magnitudes across subjects, and an electrical circuit equivalent model and finite element method model both suggested that glial encapsulation around the DBS electrode increased the artifact size. Nevertheless, high fidelity ECAPs were recorded from acutely and chronically implanted DBS electrodes, and the energy of ECAP phases was correlated with changes in tremor.

Fourth, we used a computational model to understand how electrode design parameters influenced neural recording. Reducing the diameter or length of recording contacts increased the magnitude of single-unit responses, led to greater spatial sensitivity, and changed the relative contribution from local cells or passing axons. The effect of diameter or contact length varied across phases of population ECAPs, but ECAP signal energy increased with greater contact spacing, due to changes in the spatial sensitivity of the contacts. In addition, the signal increased with glial encapsulation in the peri-electrode space, decreased with local edema, and was unaffected by the physical presence of the highly conductive recording contacts.

It is feasible to record ECAP signals during DBS, and the correlation between ECAP characteristics and tremor suggests that this signal could be used in closed-loop DBS. This was demonstrated by implementation in simulation of a closed-loop system, in which a proportional-integral-derivative (PID) controller automatically adjusted DBS parameters to obtain a target ECAP energy value, and modified parameters in response to disturbances. The ECAP also provided insight into neural activation during DBS, with the dominant contribution to clinical ECAPs derived from excited cerebellothalamic fibers, suggesting that activation of these fibers is critical for DBS therapy. / Dissertation
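The closed-loop scheme described above can be sketched with a toy plant. Everything here is invented for illustration — the power-law "ECAP energy" response, the gains, and the target; it only shows the PID structure, not the dissertation's controller or model.

```python
# Minimal PID sketch: the controller adjusts a stimulation amplitude so
# that a toy ECAP-energy response reaches a target value.

def ecap_energy(amplitude):
    """Toy monotonic plant: ECAP energy grows with stimulation amplitude
    (hypothetical power law, not a fitted physiological response)."""
    return 0.8 * amplitude ** 1.5

def run_pid(target, kp=0.2, ki=0.05, kd=0.02, dt=0.1, steps=300):
    amplitude, integral, prev_err = 1.0, 0.0, 0.0
    for _ in range(steps):
        err = target - ecap_energy(amplitude)
        integral += err * dt
        deriv = (err - prev_err) / dt
        amplitude += kp * err + ki * integral + kd * deriv
        amplitude = max(amplitude, 0.0)  # stimulation cannot be negative
        prev_err = err
    return amplitude

amp = run_pid(target=4.0)
print(f"settled amplitude: {amp:.3f}, energy: {ecap_energy(amp):.3f}")
```

A disturbance (e.g. a step change in the plant gain, standing in for electrode encapsulation) can be added mid-run to watch the integral term re-trim the amplitude.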
45

Computational principles for an autonomous active vision system

Sherbakov, Lena Oleg 22 January 2016 (has links)
Vision research has uncovered computational principles that generalize across species and brain areas. However, these biological mechanisms are not frequently implemented in computer vision algorithms. In this thesis, models suitable for application in computer vision were developed to address the benefits of two biologically inspired computational principles: multi-scale sampling and active, space-variant vision. The first model investigated the role of multi-scale sampling in motion integration. It is known that receptive fields of different spatial and temporal scales exist in the visual cortex; however, models addressing how this basic principle is exploited by species are sparse and do not adequately explain the data. The developed model showed that the solution to a classical problem in motion integration, the aperture problem, can be reframed as an emergent property of multi-scale sampling facilitated by fast, parallel, bi-directional connections at different spatial resolutions. Humans and most other mammals actively move their eyes to sample a scene (active vision); moreover, the resolution of detail in this sampling process is not uniform across spatial locations (space-variant vision). It is known that these eye movements are not simply guided by image saliency, but are also influenced by factors such as spatial attention, scene layout, and task relevance. However, it is seldom asked how previous eye movements shape how one learns and recognizes an object in a continuously learning system. To explore this question, a model (CogEye) was developed that integrates active, space-variant sampling with eye-movement selection (the 'where' visual stream) and object recognition (the 'what' visual stream). The model hypothesizes that a signal from the recognition system helps the 'where' stream select fixation locations that best disambiguate object identity between competing alternatives. 
The third study used eye-tracking coupled with an object disambiguation psychophysics experiment to validate the second model, CogEye. While humans outperformed the model in recognition accuracy, when the model used information from the recognition pathway to help select future fixations, it was more similar to human eye movement patterns than when the model relied on image saliency alone. Taken together these results show that computational principles in the mammalian visual system can be used to improve computer vision models.
46

Mapeamento, avaliação e modelagem das condições ambientais de aviários de diferentes tipologias durante a fase inicial de crescimento de frangos de corte / Mapping, assessment and modeling of environmental conditions in different types of aviaries during early growth of broilers chickens

Hernandez, Robinson Osorio 19 June 2012 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / The Brazilian poultry industry has the largest and most technologically advanced production base in the Brazilian agricultural sector, which does not mean that production problems, specifically those related to ambience, no longer persist. To optimize production performance in the poultry sector, the grow-out houses must be adapted using techniques that address the thermal and hygienic conditions of the internal environment and make more efficient use of energy. This study assessed the internal thermal and air environments in three grow-out houses of different types, representative of poultry production in South America: the first with a positive-pressure ventilation system in tunnel mode, the second with a lateral positive-pressure ventilation system, and the third with a negative-pressure ventilation system in tunnel mode. Data were collected during the first growth phase, in the winter and spring of 2011. Air quality was analysed in terms of the environmental concentrations of CO, CO2 and NH3, and thermal comfort was assessed through maps of temperature, relative humidity and the temperature-humidity index (THI/ITU), in addition to CFD modelling of the internal thermal behaviour of the grow-out house with the negative-pressure tunnel ventilation system. In each chapter, statistical analyses specific to the thermal environment and the air quality were conducted and, finally, the computational model was validated.
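The thermal-comfort index referred to as ITU in the record is the temperature-humidity index (THI). As a hedged illustration, one common livestock formulation can be computed as below; the abstract does not state which THI variant the thesis uses, and the example readings are assumed.

```python
def thi(temp_c, rh_percent):
    """Temperature-humidity index, one common livestock formulation
    (several variants exist; this choice is an assumption, not taken
    from the thesis)."""
    return 0.8 * temp_c + (rh_percent / 100.0) * (temp_c - 14.4) + 46.4

# Assumed readings at two points inside a grow-out house
mild = thi(28.0, 70.0)
hot = thi(32.0, 80.0)
print(f"THI mild spot: {mild:.2f}, hot spot: {hot:.2f}")
```

Mapping such an index over a grid of sensor positions yields exactly the kind of comfort maps the study describes.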
47

Avaliação de lavagem incompleta de sais em neossolo flúvico utilizando modelagem computacional / Assessment of incomplete salt leaching in a fluvic Neosol using computational modeling

MONTEIRO, Adriano Luiz Normandia 31 May 2007 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / This study was developed at two sites typical of smallholder family agriculture in alluvial valleys of the semi-arid Northeast of Brazil: the Nossa Senhora do Rosário Farm, in the municipality of Pesqueira, and irrigated plots in the rural zone of the municipality of Belo Jardim, both in the 'agreste' region of Pernambuco State. The objective was to calibrate and validate a computational model to simulate scenarios involving salt transport in alluvial soils of the semi-arid Northeast and to predict the effect of leaching depths and of precipitation on the control of soil salinity in drainage lysimeters. Flow and salt transport were evaluated through numerical simulations with the finite element model HYDRUS-1D. For model calibration, field data of soil water content and pressure head and of the electrical conductivity of the soil solution were used, in addition to soil water retention curve parameters. Using field measurements of potential evapotranspiration, partitioned into potential evaporation (Ep) and potential transpiration (Tp), and of precipitation, together with irrigation under different leaching depths, it was verified that the model is a satisfactory tool for simulating flow and transport in the studied situations. It was shown experimentally and numerically that incomplete leaching can be an effective management alternative for reducing salinity in the root zone, provided that effective rainfall in the winter periods can complement the leaching.
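Alongside the HYDRUS-1D simulations, the steady-state leaching requirement provides a quick back-of-envelope check on how much applied water must drain to keep root-zone salinity at a target. The sketch below uses the classic Rhoades-style formula; the salinity values are assumed, and this simplified formula is not the model used in the thesis.

```python
def leaching_requirement(ec_iw, ec_e_target):
    """Classic steady-state leaching requirement (Rhoades-type formula):
    fraction of applied water that must drain below the root zone.
    ec_iw: irrigation water salinity (dS/m);
    ec_e_target: target soil saturation-extract salinity (dS/m)."""
    return ec_iw / (5.0 * ec_e_target - ec_iw)

# Assumed values: moderately saline irrigation water, salt-sensitive crop
lr = leaching_requirement(ec_iw=1.5, ec_e_target=2.0)
print(f"leaching fraction: {lr:.3f}")
```

"Incomplete leaching" in the thesis corresponds to applying less than this fraction and letting effective winter rainfall make up the remainder.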
48

Modelo computacional para análise de transiente hidráulico em canais / Computational model for the study of unsteady open-channel flows

Stênio de Sousa Venâncio 03 July 2003 (has links)
This work continues a line of research on free-surface flows, analysing transient phenomena in channels using the one-dimensional Saint-Venant equations. A computational model was developed in FORTRAN to evaluate the behaviour of unsteady open-channel flow. The full hydrodynamic equations are discretized with a fully implicit finite difference scheme and applied to two case studies, after the model was first tested on a simple case whose results supported its use. 
In the first case, the model is applied to the headrace channel of the Monjolinho hydroelectric plant in São Carlos-SP, to evaluate the need for a spillway when the turbine is closed abruptly and the flow suddenly stopped, as well as the occurrence of air entering the turbine when it is opened instantaneously. In the second case, the model simulates the development of the flow in the Trabalhador channel, responsible for the water supply of the city of Fortaleza-CE. By simulating filling and emptying manoeuvres, it is possible to determine the lead time for switching the pumping system on and off from the computed water depths and velocities, also enabling automation of the control operations. In both cases the model reproduced results consistent with established theory, constituting a useful tool for the analysis of transient phenomena in free-surface flows.
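For reference, the one-dimensional Saint-Venant system that such a model discretizes can be written (in one common conservation form; the thesis may use an equivalent variant) as:

```latex
\begin{aligned}
\frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} &= 0,\\[4pt]
\frac{\partial Q}{\partial t}
  + \frac{\partial}{\partial x}\!\left(\frac{Q^{2}}{A}\right)
  + g A \,\frac{\partial h}{\partial x} &= g A \left(S_{0} - S_{f}\right),
\end{aligned}
```

where A is the wetted cross-sectional area, Q the discharge, h the flow depth, g the gravitational acceleration, S₀ the bed slope and S_f the friction slope. Fully implicit finite-difference schemes evaluate the spatial derivatives at the new time level, which is what makes them robust for the abrupt gate and turbine manoeuvres simulated here.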
49

Construção de modelos de árvores arteriais usando diferentes expoentes para a lei de bifurcação / Construction of arterial tree models using different exponents for the bifurcation law

Meneses, Lucas Diego Mota 30 September 2016 (has links)
Computational models of arterial trees are used as geometric substrates in hemodynamic simulations. The construction of such models is necessary for an adequate representation of peripheral vascular networks, given the scarcity of anatomical data on these networks. 
The models reported in the literature are classified as anatomical, lumped-parameter, fractal or optimized. The growth of fractal and optimized models depends on a bifurcation law, which controls the relationship between the radii of the vessels involved in a bifurcation through an exponent. This work investigates the construction of optimized models inspired by the CCO (Constrained Constructive Optimization) method, using new approaches to the choice of the exponent of the bifurcation law. These strategies are formulated as step and sigmoid functions that depend on the number of proximal bifurcations. Morphometric data from the models are compared with experimental and theoretical data from the literature. The results show that the bifurcation exponent influences the geometric and topological structures of the models.
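The bifurcation law mentioned above relates parent and daughter radii through an exponent γ (Murray's law corresponds to γ = 3). A minimal sketch follows; the sigmoid schedule is hypothetical, with invented parameter values, and does not reproduce the thesis's exact functions.

```python
import math

def daughter_radii(r_parent, gamma, xi=1.0):
    """Daughter radii for a bifurcation obeying the power law
    r_p^gamma = r_1^gamma + r_2^gamma, with asymmetry ratio xi = r_2/r_1."""
    r1 = r_parent / (1.0 + xi ** gamma) ** (1.0 / gamma)
    return r1, xi * r1

def sigmoid_exponent(n_proximal, g_min=2.55, g_max=3.0, n0=10.0, k=0.5):
    """Hypothetical sigmoid schedule for the bifurcation exponent as a
    function of the number of proximal bifurcations (values invented)."""
    return g_min + (g_max - g_min) / (1.0 + math.exp(-k * (n_proximal - n0)))

# Murray's law (gamma = 3), symmetric bifurcation: each daughter = 2^(-1/3)
r1, r2 = daughter_radii(1.0, gamma=3.0)
print(r1, r2)

# The exponent drifts from ~2.55 near the root toward 3.0 distally
print(sigmoid_exponent(0), sigmoid_exponent(10), sigmoid_exponent(30))
```

A step schedule is the degenerate case of the sigmoid with a very large steepness k, which is one way the two strategies compared in the thesis relate.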
50

The neural circuitry of fear conditioning : a theoretical account / Le circuit neuronal du conditionnement à la peur : une perspective théorique

Angelhuber, Martin 27 October 2016 (has links)
Fear conditioning is a successful paradigm for studying the neural substrates of emotional learning. In this thesis, two computational models of the brain structures underlying the acquisition of conditioned fear are presented. The first model is used to investigate the effects of changes in tonic inhibitory conductance on input processing in a biologically realistic network; we confirm that decreasing the tonic inhibition of a population increases the responsiveness of the network to inputs. The model is then analyzed from a functional perspective, and predictions that follow from this proposition are discussed. Next, a systems-level model is presented, based on a recently introduced high-level approach to conditioning that uses latent variables. It is proposed that the interaction between fear and extinction neurons in the basal amygdala is a neural substrate of the switching between latent states, allowing the animal to infer causal structure. The model covers a wide range of effects, and its analysis yields a number of testable predictions. 
Important behavioral and physiological results are reproduced and predictions and questions that follow from the main hypothesis are considered.
