51

Teoria do Averaging para campos de vetores suaves por partes / The Averaging theory for piecewise smooth vector fields

Velter, Mariana Queiroz 05 February 2016 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / In this work the first-order Averaging theory is studied. This theory replaces the problem of finding and counting the limit cycles of a vector field with the problem of finding the positive zeros of an associated function. We present the classical Averaging method (for C² smooth vector fields) and apply it to some special cases of quadratic polynomial vector fields in R³. Afterwards, we show a generalization of the Averaging method proposed in [3], which uses Brouwer degree theory to extend the method to merely continuous vector fields; in other words, differentiability of the vector field is no longer required. Finally, we study the Averaging theory for piecewise smooth vector fields presented in [14], which uses the regularization technique for piecewise smooth vector fields (see [22]), and apply it to a class of piecewise polynomial vector fields known as Kukles fields (see [16]).
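The classical first-order averaging theorem sketched in this abstract can be stated in outline as follows (a standard textbook formulation, not quoted from the dissertation):

```latex
% Periodic system in standard form, with small parameter \varepsilon:
\[
  \dot{x} = \varepsilon F(t, x) + \varepsilon^{2} R(t, x, \varepsilon),
  \qquad F(t + T, \cdot) = F(t, \cdot).
\]
% The averaged function replaces the search for limit cycles:
\[
  f(z) = \int_{0}^{T} F(t, z)\, dt.
\]
% If z_0 is a simple zero of f, i.e. f(z_0) = 0 and
% \det D_z f(z_0) \neq 0, then for |\varepsilon| > 0 small enough the
% system has a T-periodic solution x(t, \varepsilon) with
% x(0, \varepsilon) \to z_0 as \varepsilon \to 0.
```

The Brouwer-degree generalization of [3] replaces the determinant condition with a nonzero Brouwer degree of f at z₀, which is why differentiability of the field can be dropped.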
52

Novel pharmacometric methods to improve clinical drug development in progressive diseases / Place de nouvelles approches pharmacométriques pour optimiser le développement clinique des médicaments dans le secteur des maladies progressives

Buatois, Simon 26 November 2018 (has links)
In the mid-1990s, model-based approaches were mainly used as supporting tools for drug development. Restricted to a "rescue mode" in situations of drug development failure, their impact was relatively limited. Nowadays, the merits of these approaches are widely recognised by stakeholders in healthcare, and they play a crucial role in drug development for progressive diseases. Despite their numerous advantages, model-based approaches have important drawbacks limiting their use in confirmatory trials. Traditional pharmacometric (PMX) analyses rely on model selection and consequently ignore model structure uncertainty when generating statistical inference, potentially leading to over-optimistic confidence intervals and an inflation of the type I error. Two projects of this thesis investigated the value of innovative PMX approaches to address part of these shortcomings in a hypothetical dose-finding study for a progressive disorder. The model averaging approach coupled with a combined likelihood ratio test showed promising results and represents an additional step towards the use of PMX for the primary analysis in dose-finding studies. In the learning phase, PMX is a key discipline with applications at every stage of drug development, used to gain insight into drug, mechanism and disease characteristics with the ultimate goal of aiding efficient drug development. In this thesis, the merits of PMX analysis were also evaluated in the context of Parkinson's disease: an item-response-theory longitudinal model was successfully developed to describe the disease progression of Parkinson's disease patients while acknowledging the composite nature of a patient-reported outcome. To conclude, this thesis promotes the use of PMX to aid efficient drug development and/or regulatory decisions.
53

Softwarové vybavení měřicí trati / Software for measuring track

Pikula, Stanislav January 2011 (has links)
This master's thesis summarizes the theory of flow measurement with differential pressure sensors, especially the normalized orifice plate, as well as the theory of the Averaging Pitot Tube. The LabVIEW programming environment and the flow-measuring track for which the software was developed are briefly described. The thesis covers the design of the program and, in particular, its final realization. The program provides live observation of events on the measuring track, saving of data to file, detailed analysis of stored data, and generation of measurement reports. The main objective is to determine the Averaging Pitot Tube coefficient.
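Both the orifice and the Averaging Pitot Tube infer flow from a differential pressure via Bernoulli's relation; a minimal sketch follows, where the coefficient `k` stands in for the probe coefficient the thesis sets out to determine (names and the simplified form are illustrative):

```python
import math

def pitot_velocity(dp_pa, rho_kg_m3, k=1.0):
    """Flow velocity from an Averaging Pitot Tube differential pressure.

    v = k * sqrt(2 * dp / rho), with dp in pascals and rho in kg/m^3.
    k is the (empirically calibrated) probe coefficient; k = 1
    corresponds to an ideal Pitot tube.
    """
    return k * math.sqrt(2.0 * dp_pa / rho_kg_m3)
```

Calibration then amounts to solving this relation for `k` given a reference flow measurement.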
54

Crises bancaires et défauts souverains : quels déterminants, quels liens ? / Banking crises and sovereign defaults : Which determinants, which links?

Jedidi, Ons 01 December 2015 (has links)
The main purpose of this thesis is the development of an Early Warning System to predict banking and sovereign debt crises in 48 countries from 1977 to 2010. We are interested in identifying both the factors that predict these events and those announcing their possible interactions. In particular, our empirical work provides an original and robust approach that accounts for model and parameter uncertainty by means of the Bayesian Model Averaging (BMA) method. Our results show that net foreign assets to total assets, short-term debt to total reserves, and public debt to GDP have high predictive power for signalling sovereign debt crises in many countries. Furthermore, the growth rates of economic activity and credit, financial liberalization, and external indebtedness are decisive signals of banking crises. Our approach offers the best compromise between missed episodes and false alarms. Finally, we study the link between banking and sovereign debt crises for 62 countries from 1970 to 2011 by developing an approach based on a Vector Autoregressive (VAR) model. Our estimates show a significant two-way relationship between the two types of events.
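Bayesian Model Averaging weights each candidate model by its (approximate) posterior probability rather than selecting a single winner. A common shortcut uses the BIC approximation to the marginal likelihood; the sketch below assumes equal prior model probabilities, and the function names are illustrative rather than taken from the thesis:

```python
import math

def bma_weights(log_likelihoods, n_params, n_obs):
    """Approximate posterior model weights via BIC (equal prior odds)."""
    bics = [-2.0 * ll + k * math.log(n_obs)
            for ll, k in zip(log_likelihoods, n_params)]
    best = min(bics)
    # exp(-0.5 * delta_BIC) approximates the marginal-likelihood ratio
    raw = [math.exp(-0.5 * (b - best)) for b in bics]
    total = sum(raw)
    return [r / total for r in raw]

def bma_prediction(predictions, weights):
    """Model-averaged point prediction (e.g. a crisis probability)."""
    return sum(w * p for w, p in zip(weights, predictions))
```

The weights sum to one, and a model with a better fit per parameter dominates the average; a predictor's overall importance is then the summed weight of the models that include it.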
55

Accuracy of perturbation theory for slow-fast Hamiltonian systems

Su, Tan January 2013 (has links)
There are many problems that lead to analysis of dynamical systems with phase variables of two types, slow and fast ones. Such systems are called slow-fast systems. The dynamics of such systems is usually described by means of different versions of perturbation theory. Many questions about accuracy of this description are still open. The difficulties are related to presence of resonances. The goal of the proposed thesis is to establish some estimates of the accuracy of the perturbation theory for slow-fast systems in the presence of resonances. We consider slow-fast Hamiltonian systems and study an accuracy of one of the methods of perturbation theory: the averaging method. In this thesis, we start with the case of slow-fast Hamiltonian systems with two degrees of freedom. One degree of freedom corresponds to fast variables, and the other degree of freedom corresponds to slow variables. Action variable of fast sub-system is an adiabatic invariant of the problem. Let this adiabatic invariant have limiting values along trajectories as time tends to plus and minus infinity. The difference of these two limits for a trajectory is known to be exponentially small in analytic systems. We obtain an exponent in this estimate. To this end, by means of iso-energetic reduction and canonical transformations in complexified phase space, we reduce the problem to the case of one and a half degrees of freedom, where the exponent is known. We then consider a quasi-linear Hamiltonian system with one and a half degrees of freedom. The Hamiltonian of this system differs by a small, ~ε, perturbing term from the Hamiltonian of a linear oscillatory system. We consider passage through a resonance: the frequency of the latter system slowly changes with time and passes through 0. The speed of this passage is of order of ε. We provide asymptotic formulas that describe effects of passage through a resonance with an improved accuracy O(ε3/2). A numerical verification is also provided. 
The problem under consideration is a model problem that describes passage through an isolated resonance in multi-frequency quasi-linear Hamiltonian systems. We also discuss a resonant phenomenon of scattering on resonances associated with discretisation arising in a numerical solving of systems with one rotating phase. Numerical integration of ODEs by standard numerical methods reduces continuous time problems to discrete time problems. For arbitrarily small time step of a numerical method, discrete time problems have intrinsic properties that are absent in continuous time problems. As a result, numerical solution of an ODE may demonstrate dynamical phenomena that are absent in the original ODE. We show that numerical integration of systems with one fast rotating phase leads to a situation of such kind: numerical solution demonstrates phenomenon of scattering on resonances, that is absent in the original system.
56

Multi-Model Bayesian Analysis of Data Worth and Optimization of Sampling Scheme Design

Xue, Liang January 2011 (has links)
Groundwater is a major source of water supply, and aquifers form major storage reservoirs as well as water conveyance systems, worldwide. The viability of groundwater as a source of water to the world's population is threatened by overexploitation and contamination. The rational management of water resource systems requires an understanding of their response to existing and planned schemes of exploitation, pollution prevention and/or remediation. Such understanding requires the collection of data to help characterize the system and monitor its response to existing and future stresses. It also requires incorporating such data in models of system makeup, water flow and contaminant transport. As the collection of subsurface characterization and monitoring data is costly, it is imperative that the design of corresponding data collection schemes is cost-effective. A major benefit of new data is its potential to help improve one's understanding of the system, in large part through a reduction in model predictive uncertainty and corresponding risk of failure. Traditionally, value-of-information or data-worth analyses have relied on a single conceptual-mathematical model of site hydrology with prescribed parameters. Yet there is a growing recognition that ignoring model and parameter uncertainties renders model predictions prone to statistical bias and underestimation of uncertainty. This has led to a recent emphasis on conducting hydrologic analyses and rendering corresponding predictions by means of multiple models. We develop a theoretical framework of data-worth analysis considering model uncertainty, parameter uncertainty and potential sample value uncertainty. The framework entails Bayesian Model Averaging (BMA) with emphasis on its Maximum Likelihood version (MLBMA). An efficient stochastic optimization method, called the Differential Evolution Method (DEM), is explored to aid in the design of optimal sampling schemes aiming at maximizing data worth.
A synthetic case entailing generated log hydraulic conductivity random fields is used to illustrate the procedure. The proposed data worth analysis framework is applied to field pneumatic permeability data collected from unsaturated fractured tuff at the Apache Leap Research Site (ALRS) near Superior, Arizona.
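The Differential Evolution Method mentioned above is a standard population-based stochastic optimizer. A minimal sketch of the classic rand/1/bin scheme follows; the parameters and the toy objective are illustrative, not those used in the ALRS application:

```python
import random

def differential_evolution(objective, bounds, pop_size=20, f=0.8,
                           cr=0.9, iters=200, seed=0):
    """Minimize `objective` over box `bounds` with DE (rand/1/bin)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    costs = [objective(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # three distinct donors, none equal to the target index
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # forces at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < cr or j == j_rand:
                    v = pop[a][j] + f * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            tc = objective(trial)
            if tc <= costs[i]:  # greedy selection
                pop[i], costs[i] = trial, tc
    best = min(range(pop_size), key=lambda i: costs[i])
    return pop[best], costs[best]
```

In a data-worth setting the objective would be an expected-uncertainty-reduction criterion evaluated per candidate sampling scheme, rather than the analytic test function used here.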
57

A platform for probabilistic Multimodel and Multiproduct Streamflow Forecasting

Roy, Tirthankar, Serrat-Capdevila, Aleix, Gupta, Hoshin, Valdes, Juan 01 1900 (has links)
We develop and test a probabilistic real-time streamflow-forecasting platform, Multimodel and Multiproduct Streamflow Forecasting (MMSF), that uses information provided by a suite of hydrologic models and satellite precipitation products (SPPs). The SPPs are bias-corrected before being used as inputs to the hydrologic models, and model calibration is carried out independently for each of the model-product combinations (MPCs). Forecasts generated from the calibrated models are further bias-corrected to compensate for deficiencies within the models, and then probabilistically merged using a variety of model averaging techniques. Use of bias-corrected SPPs in streamflow forecasting applications can overcome several issues associated with sparsely gauged basins and enable robust forecasting capabilities. Bias correction of streamflow significantly improves the forecasts in terms of accuracy and precision in all the cases considered. Results show that merging the individual forecasts from different MPCs provides additional improvements. All the merging techniques applied in this study produce similar results; however, Inverse Weighted Averaging (IVA) proves to be slightly superior in most cases. We demonstrate the implementation of the MMSF platform for real-time streamflow monitoring and forecasting in the Mara River basin of Africa (Kenya & Tanzania) in order to provide improved monitoring and forecasting tools to inform water management decisions.
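One simple way to merge forecasts from several model-product combinations, in the spirit of the inverse-weighted averaging reported as slightly superior here, is to weight each ensemble member by the inverse of its historical error (a sketch; the MMSF implementation may differ in detail):

```python
def inverse_error_weights(historical_errors):
    """Weight each forecast by the inverse of its historical error
    (e.g. RMSE against gauge observations); weights sum to one."""
    inv = [1.0 / e for e in historical_errors]
    total = sum(inv)
    return [i / total for i in inv]

def merge_forecasts(forecasts, weights):
    """Weighted average of streamflow forecasts from the MPC ensemble."""
    return sum(w * f for w, f in zip(weights, forecasts))
```

A member with half the historical error of another receives twice its weight, so chronically biased model-product combinations are automatically discounted.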
58

Evaluation of the Performance of Three Satellite Precipitation Products over Africa

Serrat-Capdevila, Aleix, Merino, Manuel, Valdes, Juan, Durcik, Matej 13 October 2016 (has links)
We present an evaluation of daily estimates from three near-real-time quasi-global satellite precipitation products, namely the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA), Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN), and the Climate Prediction Center (CPC) Morphing Technique (CMORPH), over the African continent, using the Global Precipitation Climatology Project One-Degree Daily (GPCP-1DD) product as a reference dataset for the years 2001 to 2013. Different types of errors are characterized for each season as a function of spatial classifications (latitudinal bands, climatic zones and topography) and in relationship with the main rain-producing mechanisms in the continent: the Intertropical Convergence Zone (ITCZ) and the East African Monsoon. A bias correction of the satellite estimates is applied using a probability density function (pdf) matching approach, with a bias analysis as a function of rain intensity, season and latitude. The effects of bias correction on different error terms are analyzed, showing a near-elimination of the mean and variance terms in most cases. While the raw estimates of TMPA show higher efficiency, all products have similar efficiencies after bias correction. PERSIANN consistently shows the smallest median errors when it correctly detects precipitation events. The areas with the smallest relative errors and other performance measures follow the position of the ITCZ as it oscillates seasonally over the equator, illustrating the close relationship between satellite estimates and rainfall regime.
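The pdf-matching bias correction can be sketched as empirical quantile mapping: each satellite estimate is replaced by the reference value at the same empirical quantile. The version below is a deliberately simplified sketch; the paper's procedure is more elaborate (e.g. stratified by intensity, season and latitude):

```python
import bisect

def quantile_map(value, satellite_sorted, reference_sorted):
    """Map a satellite estimate to the reference value at the same
    empirical quantile (a simple pdf-matching bias correction).

    Both input records must be sorted ascending.
    """
    n = len(satellite_sorted)
    # empirical quantile of the raw value within the satellite record
    rank = bisect.bisect_left(satellite_sorted, value)
    q = rank / max(n - 1, 1)
    # read off the reference distribution at that quantile
    idx = min(int(round(q * (len(reference_sorted) - 1))),
              len(reference_sorted) - 1)
    return reference_sorted[idx]
```

Because the whole satellite distribution is remapped onto the reference one, the mean and variance components of the error largely vanish by construction, which is consistent with the result reported above.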
59

Bayesian methods for the construction of robust chronologies

Lee, Sharen Woon Yee January 2012 (has links)
Bayesian modelling is a widely used, powerful approach for reducing absolute dating uncertainties in archaeological research. It is important that the methods used in chronology building are robust and reflect substantial prior knowledge. This thesis focuses on the development and evaluation of two novel prior models: the trapezoidal phase model and the Poisson process deposition model. Firstly, the limitations of the trapezoidal phase model were investigated by testing the model assumptions using simulations. It was found that a simple trapezoidal phase model does not reflect substantial prior knowledge, and the addition of a non-informative element to the prior was proposed. An alternative parameterisation was also presented to extend its use to a contiguous-phase scenario; this method transforms the commonly used abrupt-transition model to allow for gradual changes. The second phase of this research evaluates the use of Bayesian model averaging in the Poisson process deposition model. Model averaging extends the application of the Poisson process model by removing the subjectivity involved in model selection. The last part of this thesis applies these models to different case studies, including attempts at resolving the Iron Age chronological debate in Israel, at determining the age of an important Quaternary tephra, at refining a cave chronology, and at more accurately modelling the mid-Holocene elm decline in the British Isles. The Bayesian methods discussed in this thesis are widely applicable in modelling situations where the associated prior assumptions are appropriate. Therefore, they are not limited to the case studies addressed in this thesis, nor are they limited to analysing radiocarbon chronologies.
60

GDP forecasting and nowcasting : Utilizing a system for averaging models to improve GDP predictions for six countries around the world

Lundberg, Otto January 2017 (has links)
This study was commissioned by Swedbank, which wanted to improve its GDP growth forecasting capabilities. A program was developed and tested on six countries: the USA, Sweden, Germany, the UK, Brazil and Norway. In this paper I investigate whether I can reduce the forecasting error for GDP growth by taking a smart average over a variety of models, compared with both the best individual models and a random walk. I combine the forecasts from four model groups: vector autoregression, principal component analysis, machine learning and random walk. The smart average is given by a system that gives more weight to the predictions of models with a lower historical error. Different weighting schemes are explored: how far into the past should we look? How much should bad performance be punished? I show that for the six countries studied the smart average outperforms the single best model, and that for five out of six countries it beats a random walk by at least 25%.
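The weighting questions raised in this abstract (how far into the past to look, and how hard to punish bad performance) can be made concrete with a small sketch. The `decay` and `penalty` parameters below are illustrative knobs, not the study's actual choices:

```python
def smart_average(forecasts, error_history, decay=0.9, penalty=2.0):
    """Combine model forecasts, down-weighting models with high
    historical error; recent errors count more via exponential decay.

    forecasts:     latest forecast per model
    error_history: per model, past absolute errors, oldest first
                   (assumed nonzero)
    decay:         weight multiplier per step back in time (0 < decay <= 1);
                   smaller values shorten the effective lookback window
    penalty:       exponent controlling how hard bad models are punished
    """
    scores = []
    for errors in error_history:
        n = len(errors)
        # exponentially decayed mean absolute error (recent errors dominate)
        wsum = sum(decay ** (n - 1 - i) for i in range(n))
        mae = sum(e * decay ** (n - 1 - i)
                  for i, e in enumerate(errors)) / wsum
        scores.append(1.0 / (mae ** penalty))
    total = sum(scores)
    return sum(s / total * f for s, f in zip(scores, forecasts))
```

With `penalty=2`, a model with twice the historical error gets a quarter of the weight; raising `penalty` pushes the combination towards pure model selection, while `penalty=0` recovers an equal-weight average.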
