231 |
Bayesian Methods for Genetic Association Studies / Xu, Lizhen, 08 January 2013
We develop statistical methods for tackling two important problems in genetic association studies. First, we propose
a Bayesian approach to overcome the winner's curse in genetic studies. Second, we consider a Bayesian latent variable
model for analyzing longitudinal family data with pleiotropic phenotypes.
Winner's curse in genetic association studies refers to the estimation bias of the reported odds ratios (OR) for an associated
genetic variant from the initial discovery samples. It is a consequence of the sequential procedure in which the estimated
effect of an associated genetic
marker must first pass a stringent significance threshold. We propose
a hierarchical Bayes method in which a spike-and-slab prior is used to account
for the possibility that the significant test result may be due to chance.
We examine the robustness of the method using different priors corresponding
to different degrees of confidence in the testing results and propose a
Bayesian model averaging procedure to combine estimates produced by different
models. The Bayesian estimators yield smaller variance than the conditional likelihood estimator and outperform it in low-power studies.
We investigate the performance of the method with simulations
and applications to four real data examples.
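The selection effect behind the winner's curse, and the flavor of a spike-and-slab correction, can be sketched numerically. This is a hedged toy illustration, not the thesis's actual model: all numbers (true effect, standard error, slab scale, prior spike probability) are invented for the demonstration.

```python
import math
import random
import statistics

random.seed(0)
true_beta, se, z_crit = 0.10, 0.05, 1.96  # hypothetical log-OR, std. error, 5% threshold

# Winner's curse: averaging the effect estimate only over replicates that
# pass the significance filter inflates the reported effect.
reported = [b for b in (random.gauss(true_beta, se) for _ in range(20000))
            if abs(b / se) > z_crit]
naive = statistics.mean(reported)
print(f"true effect {true_beta:.3f}, mean reported effect {naive:.3f}")

# Spike-and-slab shrinkage for one significant estimate beta_hat:
# spike = point mass at 0 (the hit is spurious), slab = N(0, tau^2).
def normal_pdf(x, sd):
    return math.exp(-0.5 * (x / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

beta_hat, tau, p_spike = 0.12, 0.10, 0.5
m_spike = normal_pdf(beta_hat, se)                  # marginal likelihood under the spike
m_slab = normal_pdf(beta_hat, math.hypot(se, tau))  # marginal likelihood under the slab
w_slab = (1 - p_spike) * m_slab / ((1 - p_spike) * m_slab + p_spike * m_spike)
post_mean = w_slab * (tau**2 / (tau**2 + se**2)) * beta_hat  # shrunken estimate
print(f"raw estimate {beta_hat:.3f}, spike-and-slab posterior mean {post_mean:.3f}")
```

The posterior mean shrinks the raw estimate both through the usual normal-normal shrinkage factor and through the posterior probability that the association is real, which is the mechanism the abstract describes.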
Pleiotropy occurs when a single genetic factor influences multiple quantitative or qualitative phenotypes, and it is present in
many genetic studies of complex human traits. Longitudinal family studies combine the features of longitudinal studies
in individuals and cross-sectional studies in families, and therefore provide more information about the genetic and environmental factors associated with the trait of interest. We propose a Bayesian latent variable modeling approach to model multiple
phenotypes simultaneously in order to detect the pleiotropic effect and allow for longitudinal and/or family data. An efficient MCMC
algorithm is developed to obtain the posterior samples by using hierarchical centering and parameter expansion techniques.
We apply spike-and-slab prior methods to test whether the phenotypes are significantly associated with the latent disease status. We compute
Bayes factors using path sampling and discuss their application in testing the significance of factor loadings and the indirect fixed effects. We examine the performance of our methods via extensive simulations and
apply them to the blood pressure data from a genetic study of type 1 diabetes (T1D) complications.
|
232 |
Analysis of the spatial heterogeneity of land surface parameters and energy flux densities / Tittebrand, Antje, 02 August 2011
This work was written as a cumulative doctoral thesis based on reviewed publications.
Climate projections are mainly based on the results of numerical simulations with global or regional climate models. The exchange processes between the atmosphere and the land surface are still only rudimentarily understood, which is one of the major sources of uncertainty in existing models. In order to reduce parameterisation uncertainties and to find a reasonable description of sub-grid heterogeneities, the determination and evaluation of parameterisation schemes for modelling require as many datasets from different spatial scales as possible. This work contributes to this topic by combining datasets from different platforms. Its objective was to analyse the spatial heterogeneity of land surface parameters and energy flux densities obtained from both satellite observations with different spatial and temporal resolutions and in-situ measurements. The investigations were carried out for two target areas in Germany. First, satellite data for the years 2002 and 2003 from the LITFASS area (Lindenberg Inhomogeneous Terrain - Fluxes between Atmosphere and Surface: a long-term Study) were analysed and validated. Second, data from the experimental field sites of the FLUXNET cluster around Tharandt from the years 2006 and 2007 were used to determine the NDVI (Normalised Difference Vegetation Index, used to identify vegetated areas and their "condition").
The core of the study was the determination of land surface characteristics and hence radiant and energy flux densities (net radiation, soil heat flux, sensible and latent heat flux) using the three optical satellite sensors ETM+ (Enhanced Thematic Mapper), MODIS (Moderate Resolution Imaging Spectroradiometer) and AVHRR 3 (Advanced Very High Resolution Radiometer) with different spatial (30 m – 1 km) and temporal (1 day – 16 days) resolutions. Different sensor characteristics and different datasets for land use classification can both lead to deviations between the energy fluxes derived from the different sensors. Thus, sensor differences were quantified, sensor adaptation methods were implemented and a quality analysis of the land use classifications was performed. The result is a single parameterisation scheme that allows the energy fluxes to be determined from all three sensors.
The main focus was the derivation of the latent heat flux (L.E) using the Penman-Monteith (P-M) approach. Satellite data provide measurements of spectral reflectance and surface temperature. The P-M approach requires further surface parameters not available from satellite data, for example the NDVI, the Leaf Area Index (LAI), wind speed, relative humidity, vegetation height and roughness length. These were derived indirectly from the satellite or in-situ measurements. If no data were available, so-called default values from the literature were taken. The quality of these parameters strongly influences the accuracy of the radiant and energy fluxes. Sensitivity studies showed that the NDVI is one of the most important parameters for the determination of evaporation. In contrast, parameters such as vegetation height and measurement height were shown to have only a minor influence on L.E, which justifies the use of default values for them.
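The P-M combination equation mentioned above has a well-known standard form; a minimal numeric sketch is given below. All input values are made-up mid-day placeholders for a grass surface, not the thesis's data, and the simple function signature is an assumption for illustration.

```python
def penman_monteith_le(rn_minus_g, delta, gamma, vpd, r_a, r_s,
                       rho_a=1.2, c_p=1004.0):
    """Latent heat flux L.E (W m^-2) from the Penman-Monteith equation.

    rn_minus_g : available energy Rn - G (W m^-2)
    delta      : slope of the saturation vapour pressure curve (kPa K^-1)
    gamma      : psychrometric constant (kPa K^-1)
    vpd        : vapour pressure deficit e_s - e_a (kPa)
    r_a, r_s   : aerodynamic and surface resistances (s m^-1)
    rho_a, c_p : air density (kg m^-3) and specific heat (J kg^-1 K^-1)
    """
    num = delta * rn_minus_g + rho_a * c_p * vpd / r_a
    return num / (delta + gamma * (1.0 + r_s / r_a))

# Hypothetical mid-day values over grass:
le = penman_monteith_le(rn_minus_g=400.0, delta=0.145, gamma=0.066,
                        vpd=1.0, r_a=50.0, r_s=70.0)
print(f"L.E = {le:.0f} W/m^2")
```

The surface resistance `r_s` is where vegetation parameters such as the LAI enter in practice, which is why the quality of those derived parameters propagates directly into L.E.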
Due to the key role of the NDVI, a field study was carried out to investigate the spatial variability and sensitivity of the NDVI over five different land use types (winter wheat, corn, grass, beech and spruce). Methods to determine this parameter not only from space (spectral), but also from in-situ tower measurements (broadband) and spectrometer data (spectral) were compared. The best agreement between the methods was found for the winter wheat and grass measurements in 2006, where the results differed by less than 10 % and 15 %, respectively. Larger differences were obtained for the forest measurements. The correlations between the daily MODIS NDVI data and the in-situ NDVI inferred from the spectrometer and the broadband measurements were r=0.67 and r=0.51, respectively.
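The NDVI itself is a simple band ratio; the sketch below shows the standard definition with hypothetical reflectance values (dense vegetation reflects strongly in the near-infrared and absorbs red light, while bare soil shows little contrast).

```python
def ndvi(nir, red):
    """Normalised Difference Vegetation Index from NIR and red reflectances."""
    return (nir - red) / (nir + red)

# Hypothetical surface reflectances (not measured values from the study):
veg = ndvi(nir=0.50, red=0.08)    # dense green vegetation
soil = ndvi(nir=0.25, red=0.20)   # bare soil
print(f"vegetation NDVI {veg:.2f}, bare soil NDVI {soil:.2f}")
```

The broadband in-situ variant referred to in the text replaces the narrow spectral bands with broadband shortwave radiometer signals, which is one reason the tower-based NDVI correlates less well with the satellite product.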
Subsequently, the spatial variability of land surface parameters and fluxes was analysed. The different spatial resolutions of the satellite sensors can be used to describe subscale heterogeneity from one scale to the next and to study the effects of spatial averaging. Therefore, land-use-dependent parameters and fluxes were investigated to find typical distribution patterns of land surface properties and energy fluxes. The distribution patterns were expressed as probability density functions (PDFs). Employing the PDFs found here for albedo and NDVI from ETM+ data in models has high potential for calculating representative energy flux distributions on a coarser scale. First results of applying PDFs of albedo, NDVI, relative humidity and wind speed to the L.E computation are encouraging and show the high potential of this method.
Summing up, the method of satellite-based surface parameter and energy flux determination has been shown to work reliably on different temporal and spatial scales. The data are useful for detailed analyses of the spatial variability of a landscape and for the description of sub-grid heterogeneity, as needed in model applications. Their usability as input parameters for modelling on different scales is the second important result of this work. The derived vegetation parameters, e.g. LAI and plant cover, possess realistic values and were used as model input for the Lokalmodell of the German Weather Service, which significantly improved the model results for L.E. Additionally, thermal parameter fields, e.g. the surface temperature from ETM+ with 30 m spatial resolution, were used as input for SVAT modelling (Soil-Vegetation-Atmosphere-Transfer scheme). Thus, more realistic L.E results were obtained, providing highly resolved areal information.
|
233 |
Essays on forecasting and Bayesian model averaging / Eklund, Jana, January 2006
This thesis, which consists of four chapters, focuses on forecasting in a data-rich environment and related computational issues. Chapter 1, “An embarrassment of riches: Forecasting using large panels”, explores the idea of combining forecasts from various indicator models by using Bayesian model averaging (BMA) and compares the predictive performance of BMA with that of factor models. The combination of the two methods is also implemented, together with a benchmark, a simple autoregressive model. The forecast comparison is conducted in a pseudo out-of-sample framework for three distinct datasets measured at different frequencies: monthly and quarterly US datasets consisting of more than 140 predictors, and a quarterly Swedish dataset with 77 possible predictors. The results show that none of the considered methods is uniformly superior and that no method consistently outperforms or underperforms a simple autoregressive process. Chapter 2, “Forecast combination using predictive measures”, proposes using the out-of-sample predictive likelihood as the basis for BMA and forecast combination. In addition to its intuitive appeal, the use of the predictive likelihood relaxes the need to specify proper priors for the parameters of each model. We show that forecast weights based on the predictive likelihood have desirable asymptotic properties, and that these weights have better small-sample properties than the traditional in-sample marginal likelihood when uninformative priors are used. In order to calculate the weights for the combined forecast, a number of observations, a hold-out sample, is needed, and there is a trade-off involved in its size: the number of observations available for estimation is reduced, which might have a detrimental effect, but as the hold-out sample size increases, the predictive measure becomes more stable, which should improve performance.
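The mechanics of predictive-likelihood-based forecast combination can be sketched in a few lines. The model names, log predictive likelihoods and point forecasts below are illustrative stand-ins for the indicator models discussed in the chapter, not values from the thesis.

```python
import math

# Hypothetical log predictive likelihoods of three models on a hold-out sample,
# and each model's point forecast for the next period.
log_pred = {"model_A": -48.2, "model_B": -45.1, "model_C": -52.7}
forecasts = {"model_A": 2.1, "model_B": 1.7, "model_C": 2.9}

# Weight each model by its (normalised) predictive likelihood, using the
# log-sum-exp trick for numerical stability.
m = max(log_pred.values())
norm = sum(math.exp(v - m) for v in log_pred.values())
weights = {k: math.exp(v - m) / norm for k, v in log_pred.items()}

combined = sum(weights[k] * forecasts[k] for k in forecasts)
print(weights)
print(f"combined forecast: {combined:.3f}")
```

Replacing `log_pred` with in-sample marginal likelihoods recovers classical BMA weights; the hold-out construction is what gives the weights their robustness to improper priors and, with short hold-outs, to structural breaks near the end of the sample.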
When there is a true model in the model set, the predictive likelihood will select the true model asymptotically, but the convergence to the true model is slower than for the marginal likelihood. It is this slower convergence, coupled with protection against overfitting, that explains why the predictive likelihood performs better when the true model is not in the model set. In Chapter 3, “Forecasting GDP with factor models and Bayesian forecast combination”, the predictive likelihood approach developed in the previous chapter is applied to forecasting GDP growth. The analysis is performed on quarterly economic datasets from six countries: Canada, Germany, Great Britain, Italy, Japan and the United States. The forecast combination technique based on both in-sample and out-of-sample weights is compared with forecasts based on factor models. The traditional point forecast analysis is extended by considering confidence intervals. The results indicate that forecast combinations based on the predictive likelihood weights have better forecasting performance than the factor models and the forecast combinations based on traditional in-sample weights. In contrast to common findings, the predictive likelihood does improve upon an autoregressive process for longer horizons. The largest improvement over the in-sample weights is for small hold-out sample sizes, which provides protection against structural breaks at the end of the sample period. The potential benefits of model averaging as a tool for extracting the relevant information from a large set of predictor variables come at the cost of considerable computational complexity. To avoid evaluating all the models, several approaches have been developed to simulate from the posterior distributions. Markov chain Monte Carlo methods can be used to draw directly from the model posterior distributions.
It is desirable that the chain moves well through the model space and takes draws from regions with high probability. Several computationally efficient sampling schemes, either one at a time or in blocks, have been proposed for speeding up convergence. There is a trade-off between local moves, which make use of the current parameter values to propose plausible values for model parameters, and more global transitions, which potentially allow faster exploration of the distribution of interest but may be much harder to implement efficiently. Local model moves enable the use of fast updating schemes, where it is unnecessary to completely re-estimate the new, slightly modified model to obtain an updated solution. The fourth and final chapter, “Computational efficiency in Bayesian model and variable selection”, investigates the possibility of increasing computational efficiency by using alternative algorithms to obtain estimates of model parameters as well as keeping track of their numerical accuracy. In addition, various samplers that explore the model space are presented and compared based on the output of the Markov chain. / Diss. Stockholm : Handelshögskolan, 2006
|
234 |
Control of the human thumb and fingers / Yu, Wei Shin, Prince of Wales Medical Research Institute, Faculty of Medicine, UNSW, January 2009
In daily activities, hand use is dominated by individuated thumb and finger movements, and by grasping. This thesis focused on the level of “independence” of the digits and its relationship to hand grasps, from the level of the motor units to the level of synergistic grasping forces. Four major studies were conducted in healthy adult volunteers. First, spike-triggered averages of forces produced by single motor units in flexor pollicis longus (FPL) in a grasp posture showed small but significant loading of the index, but not other fingers. This reflected a neural rather than anatomical coupling, as intramuscular stimulation produced minimal effect in any finger. Also, FPL had a surprisingly large number of low-force motor units, and this may account for the thumb’s exceptional dexterity and force stability compared with the fingers. Second, independent control of extensor digitorum (ED) was more limited than that of flexor digitorum profundus (FDP), as more ED motor units of a “test” finger were recruited inadvertently by extension than by flexion of adjacent digits. Third, “force enslavement” in maximal voluntary tasks was greater in digit extension than flexion. The distribution of force enslavement (and deficits) matched the pattern of daily use of the digits (alone and in combination), and reveals a neural control system which preferentially lifts fingers together from an object by extension but allows an individual digit to flex to contact an object so the finger pads can engage in exploration and grasping. Finally, during grasping, irrespective of whether a digit had been lifted from the object, coherence among forces generated by the digits was similar. In addition, the coherence between finger forces was independent of any contraction of the thumb, was stable over 2 months, and required no learning. The pattern of coherence between digital grasping forces may be closely related to the level of digit independence and daily use.
Overall, the grasp synergy was remarkably invariant over the various tasks and over time. In summary, this thesis demonstrates novel aspects of the properties of FPL, the lack of complete independence of the digits, and robustness in the production of flexion forces in hand grasps.
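Spike-triggered averaging, the technique behind the first study, extracts the small force twitch time-locked to a motor unit's discharges from a noisy force record. The sketch below uses entirely synthetic data (an invented twitch shape, noise level and spike train); a real analysis would use the recorded force and discharge times.

```python
import math
import random

random.seed(1)
dt, n = 0.001, 20000                       # 1 ms samples, 20 s record
spikes = sorted(random.sample(range(100, n - 200), 300))  # spike sample indices

def twitch(lag_samples):                   # toy twitch profile, peaking near 50 ms
    t = lag_samples * dt
    return 0.02 * (t / 0.05) * math.exp(1 - t / 0.05)

# Force trace: broadband noise plus a small twitch after every spike.
force = [random.gauss(0.0, 0.03) for _ in range(n)]
for s in spikes:
    for k in range(150):
        force[s + k] += twitch(k)

# The spike-triggered average recovers the buried twitch (riding on a
# baseline contributed by overlapping twitches from neighbouring spikes).
win = 150
sta = [sum(force[s + k] for s in spikes) / len(spikes) for k in range(win)]
peak_lag_ms = max(range(win), key=lambda k: sta[k]) * dt * 1000
print(f"STA peak at ~{peak_lag_ms:.0f} ms")
```

Averaging over N spikes shrinks the uncorrelated noise by a factor of roughly sqrt(N), which is what makes the very low-force FPL units measurable at all.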
|
235 |
Reduced order modeling, nonlinear analysis and control methods for flow control problems / Kasnakoglu, Cosku, 2007
Thesis (Ph. D.)--Ohio State University, 2007. / Title from first page of PDF file. Includes bibliographical references (p. 135-144).
|
236 |
Efficient index structures for video databases / Acar, Esra, 01 February 2008
Content-based retrieval of multimedia data is still an active research area, and efficient retrieval of video data has proven to be a difficult task for content-based video retrieval systems. In this thesis study, a Content-Based Video Retrieval (CBVR) system is presented that adapts two different index structures, namely the Slim-Tree and the BitMatrix, to efficiently retrieve videos based on low-level features such as color, texture, shape and motion. The system represents the low-level features of video data with MPEG-7 descriptors extracted from video shots using the MPEG-7 reference software and stored in a native XML database. The low-level descriptors used in the study are Color Layout (CL), Dominant Color (DC), Edge Histogram (EH), Region Shape (RS) and Motion Activity (MA). An Ordered Weighted Averaging (OWA) operator in the Slim-Tree and the BitMatrix aggregates these features to compute the final similarity between any two objects. The system supports three different types of queries: exact match queries, k-NN queries and range queries. The experiments in this study cover index construction, index update, query response time and retrieval efficiency, using the ANMRR performance metric and precision/recall scores. The experimental results show that using the BitMatrix together with the Ordered Weighted Averaging method is superior for content-based video retrieval systems.
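The OWA operator mentioned above applies its weights to the sorted (not the original) argument order, which lets one weighting scheme interpolate between max, mean and min behaviour. A minimal sketch, with hypothetical per-descriptor similarity scores standing in for the CL/DC/EH/RS/MA comparisons:

```python
def owa(values, weights):
    """Ordered Weighted Averaging: weights apply to values sorted descending."""
    assert len(values) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

# Hypothetical similarity scores between two video shots for the five
# descriptors [CL, DC, EH, RS, MA]:
scores = [0.9, 0.4, 0.7, 0.6, 0.8]

print(owa(scores, [1.0, 0.0, 0.0, 0.0, 0.0]))  # all weight on the best match: max
print(owa(scores, [0.2, 0.2, 0.2, 0.2, 0.2]))  # uniform weights: plain mean
print(owa(scores, [0.4, 0.3, 0.2, 0.1, 0.0]))  # emphasise best-matching descriptors
```

Because the weights attach to rank positions rather than to specific descriptors, the aggregate is symmetric in the features, a property ordinary weighted averaging does not have.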
|
237 |
Equações com impasse e problemas de perturbação singular (Equations with impasse and singular perturbation problems) / Cardin, Pedro Toniol, January 2011
Advisor: Paulo Ricardo da Silva / Committee: João Carlos da Rocha Medrado / Committee: Fernando de Osório Mello / Committee: Claudio Aguinaldo Buzzi / Committee: Vanderlei Minori Horita / Abstract: In this work we study constrained differential systems, also known as systems of equations with impasse. We study the cases where such systems are smooth and the cases where they are possibly discontinuous. Using singular perturbation techniques we obtain some results on the dynamics of these systems in neighborhoods of the impasse sets. In the smooth case, the classical Fenichel theory is crucial for the development of the main results. For the case with discontinuities, a theory similar to Fenichel's is developed. Moreover, we study the bifurcation of limit cycles from the periodic orbits of a linear differential center when we perturb such a center inside a class of piecewise linear differential systems with impasse / Doctorate
|
238 |
Macroscopic model and numerical simulation of elastic canopy flows / Pauthenet, Martin, 11 September 2018
We study the turbulent flow of a fluid over a canopy, which we model as a deformable porous medium: a carpet of fibres that bend under the hydrodynamic load, initiating a fluid-structure coupling at the scale of a fibre's height (honami). The objective of the thesis is to develop a macroscopic model of this fluid-structure interaction in order to perform numerical simulations of the process. The volume averaging method is implemented to describe the large scales of the flow and their interaction with the deformable porous medium. A hybrid approach is followed due to the non-local nature of the solid phase: while the large scales of the flow are described within an Eulerian frame by applying the method of volume averaging, a Lagrangian approach is proposed to describe the ensemble of fibres. The interface between the free flow and the porous medium is handled with a One-Domain Approach, which we justify with the theoretical development of a mass and momentum balance at the fluid/porous interface. This hybrid model is then implemented in a parallel code written in C++, based on a fluid solver available from the OpenFOAM CFD toolbox. Preliminary results show the ability of this approach to simulate a honami at a reasonable computational cost. Prior to implementing a macroscopic model, insight into the small scale is required, and two specific aspects of the small scale are therefore studied in detail. The first development deals with the inertial deviation from Darcy's law. A geometrical parameter is proposed to describe the effect of inertia on Darcy's law, depending on the shape of the microstructure of the porous medium. This topological parameter is shown to efficiently characterize inertia effects for a diversity of tested microstructures.
An asymptotic filtration law is then derived from the closure problem arising from the volume averaging method, providing a new framework for understanding the relationship between the effect of inertia on the macroscopic fluid-solid force and the topology of the microstructure of the porous medium. A second research axis is then investigated. As we deal with a deformable porous medium, we study the effect of the pore-scale fluid-structure interaction on the filtration law when the flow within the pores is unsteady, inducing time-dependent fluid stresses on the solid phase. For that purpose, we implement pore-scale numerical simulations of unsteady flows within deformable pores, focusing in this preliminary study on a model porous medium. Owing to the large displacements of the solid phase, an immersed boundary approach is implemented. Two different numerical methods of applying the no-slip condition at the fluid-solid interface are compared: a diffuse interface approach and a sharp interface approach. The objective is to find a method that affords acceptable computational time and good reliability of the results. The two methods compare well for our cases, allowing a cross-validation of the numerical results. This numerical campaign shows that the pore-scale deformation has a significant impact on the pressure drop at the macroscopic scale. Some fundamental issues are then discussed, such as the size of a representative computational domain and the form of the macroscopic equations describing momentum transport within a soft deformable porous medium.
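The inertial deviation from Darcy's law is commonly written in Darcy-Forchheimer form, -dp/dx = (mu/K) u + beta rho u^2, where the quadratic term captures the inertial contribution the thesis characterises geometrically. A toy sketch (all coefficients invented for illustration, not taken from the thesis):

```python
def pressure_gradient(u, mu=1e-3, rho=1000.0, K=1e-9, beta=1e4):
    """Darcy-Forchheimer law: -dp/dx = (mu/K) u + beta rho u^2.

    u    : filtration (Darcy) velocity [m/s]
    mu   : dynamic viscosity [Pa s], rho: fluid density [kg/m^3]
    K    : permeability [m^2], beta: Forchheimer (inertial) coefficient [1/m]
    """
    darcy = mu / K * u              # viscous (linear) contribution
    forchheimer = beta * rho * u * u  # inertial (quadratic) contribution
    return darcy + forchheimer

for u in (1e-4, 1e-3, 1e-2):
    g = pressure_gradient(u)
    darcy_only = 1e-3 / 1e-9 * u
    print(f"u={u:.0e} m/s: -dp/dx={g:.3e} Pa/m, inertial share={1 - darcy_only / g:.1%}")
```

At low velocity the law reduces to Darcy's law; the inertial share grows linearly with u, which is why the pressure gradient becomes superlinear in the velocity.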
|
239 |
Equações com impasse e problemas de perturbação singular / Cardin, Pedro Toniol [UNESP], 18 March 2011
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) / In this work we study constrained differential systems, also known as systems of equations with impasse. We study the cases where such systems are smooth and the cases where they are possibly discontinuous. Using singular perturbation techniques we obtain some results on the dynamics of these systems in neighborhoods of the impasse sets. In the smooth case, the classical Fenichel theory is crucial for the development of the main results. For the case with discontinuity, a theory similar to Fenichel's is developed. Moreover, we study the bifurcation of limit cycles from the periodic orbits of a linear differential center when we perturb such a center inside a class of piecewise linear differential systems with impasse
|
240 |
Existência e estabilidade de órbitas periódicas da Equação de Van der Pol-Mathieu / Existence and stability of periodic orbits of the van der Pol-Mathieu equation / Pereira, Franciele Alves da Silveira Gonzaga, 28 February 2012
In this work some existence and stability results for periodic orbits of the van der Pol-Mathieu equation are studied. Using the Averaging Theorem we prove, under mild conditions, that this equation possesses two asymptotically stable periodic orbits. Moreover, the existence of invariant conics in the phase plane of this equation is established. / Master's degree in Mathematics
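For the classical van der Pol equation x'' - eps (1 - x^2) x' + x = 0 (the special case with the Mathieu forcing term dropped), the averaging method reduces the dynamics to the textbook amplitude equation r' = (eps/2) r (1 - r^2/4), whose stable fixed point r = 2 is the limit-cycle amplitude. A quick numeric check of that special case, not of the thesis's van der Pol-Mathieu analysis:

```python
# Averaged amplitude equation for the unforced van der Pol oscillator:
#   r' = (eps / 2) * r * (1 - r^2 / 4)
# The fixed point r = 2 is asymptotically stable, matching the well-known
# limit-cycle amplitude. Forward-Euler integration from several initial radii:
eps, dt = 0.1, 0.01

def simulate(r0, t_end=2000.0):
    r = r0
    for _ in range(int(t_end / dt)):
        r += dt * (eps / 2.0) * r * (1.0 - r * r / 4.0)
    return r

for r0 in (0.2, 1.0, 3.5):           # amplitudes below and above the cycle
    print(f"r0={r0} -> r(t_end)={simulate(r0):.4f}")
```

All three trajectories converge to r = 2, illustrating how the Averaging Theorem turns a question about periodic orbits of the full oscillator into a fixed-point stability question for the averaged system.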
|