About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
11

Observation missions with UAVs: defining and learning models for active perception and proposition of an architecture enabling repeatable distributed simulations

Reymann, Christophe 08 July 2019
This thesis focuses on perception tasks for fixed-wing unmanned aerial vehicles (UAVs). When sensing is the end goal, a good environment model, together with the ability to predict how future observations will affect it, is crucial. Active perception deals with tightly integrating perception models into the reasoning process, enabling the robot to acquire relevant knowledge about the status of its mission and to replan its sensing trajectory in reaction to unforeseen events and results. This manuscript describes two approaches to active perception tasks, in two radically different settings. The first deals with mapping highly dynamic, small-scale meteorological phenomena, in particular cumulus clouds. The presented approach uses Gaussian process regression to build environment models, learning the hyperparameters online. Normalized marginal information-gain metrics are introduced to assess the quality of future observation trajectories, and a stochastic planning algorithm optimizes a utility measure that balances maximizing these metrics against energy-cost minimization goals. The second setting revolves around mapping crop fields for precision agriculture. Using the output of a monocular graph-based Simultaneous Localization and Mapping (SLAM) algorithm, a novel approach to building a relative error model is proposed; this model is learned both from features extracted from the SLAM algorithm's data structures and from the underlying topology of the covisibility graph formed by the observations. All developments were tested in realistic, distributed simulations. An analysis of the simulation problem in robotics is proposed: focusing on managing time advancement and synchronization across multiple interconnected, heterogeneous simulators, a novel solution based on a decentralized architecture is presented.
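The cloud-mapping approach pairs a Gaussian process environment model with an information-driven evaluation of candidate trajectories. The sketch below illustrates that pattern on synthetic data; it is a minimal illustration rather than the thesis's planner, and the kernel choice, utility weight and random candidate trajectories are all assumptions made here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X_seen = rng.uniform(0.0, 10.0, size=(30, 2))            # visited 2-D locations
y_seen = np.sin(X_seen[:, 0]) + 0.1 * rng.standard_normal(30)

# Environment model fitted to past (position, measurement) pairs.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
gp.fit(X_seen, y_seen)

def utility(trajectory, weight=0.1):
    """Predicted-uncertainty sum along the path (an information-gain proxy)
    penalized by path length (a stand-in for energy cost)."""
    _, std = gp.predict(trajectory, return_std=True)
    energy = np.linalg.norm(np.diff(trajectory, axis=0), axis=1).sum()
    return std.sum() - weight * energy

# Score a handful of random candidate observation trajectories and pick one.
candidates = [np.clip(5.0 + np.cumsum(rng.uniform(-1, 1, (10, 2)), axis=0),
                      0.0, 10.0) for _ in range(20)]
best = max(candidates, key=utility)
```

The design point mirrored here is that the same fitted model both explains past measurements and scores where to fly next, so replanning can react to what has actually been observed.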
12

Elliptical linear mixed models with measurement errors

Borssoi, Joelmir André 20 February 2014
The main aim of this work is to study elliptical linear mixed models in which one of the explanatory variables (covariates) is measured with error, under the structural approach. The work is presented in longitudinal notation, although the error-prone covariate may be observed over time or as repeated measures. An appropriate joint hierarchical structure with an elliptical distribution is assumed for the errors involved, but inference is developed under a marginal approach that considers the marginal distribution of the response and of the error-prone variable. Local influence procedures, with appropriately chosen perturbation schemes, are developed. A motivating example is presented and analysed with the procedures developed in this work. The main derivations needed for the development of the proposed model are detailed in the appendices.
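For orientation, a structural measurement-error mixed model of the kind studied here is commonly written as follows; this is the generic textbook formulation, not necessarily the thesis's exact specification:

```latex
\begin{aligned}
y_{ij} &= \beta_0 + \beta_1 x_{ij} + \mathbf{z}_{ij}^{\top}\mathbf{b}_i + e_{ij}
  && \text{(response of subject $i$ at occasion $j$)}\\
w_{ij} &= x_{ij} + u_{ij}
  && \text{(observed, error-prone covariate)}
\end{aligned}
```

Under the structural approach the latent covariate \(x_{ij}\) is itself random; the random effects \(\mathbf{b}_i\) and errors \(e_{ij}, u_{ij}\) are assigned a joint elliptical distribution together with \(x_{ij}\), and inference proceeds from the marginal distribution of the observed pair \((y_{ij}, w_{ij})\).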
14

Some problems in the theory & application of graphical models

Roddam, Andrew Wilfred January 1999
A graphical model is a representation of the results of an analysis of relationships between sets of variables. It can include the study of the dependence of one variable, or set of variables, on another variable or set of variables, and can be extended to include variables that are intermediate to the others. This leads to representing such chains of relationships by means of a graph, in which variables are represented by vertices and relationships between variables by edges; the edges can be directed or undirected, depending on the type of relationship being represented. The thesis investigates a number of outstanding problems in statistical modelling, with particular emphasis on representing the results in terms of a graph. It studies models for multivariate discrete data and, in the case of binary responses, gives some theoretical results on the relationship between two common models. In the more general setting of multivariate discrete responses, a general class of models is studied and an approximation to the maximum likelihood estimates in these models is proposed. The thesis also addresses the problem of measurement error: it investigates the effect that measurement error has on sample-size calculations, with respect to a general measurement-error specification, in both linear and binary regression models. Finally, the thesis presents, in terms of a graphical model, a re-analysis of childhood growth data collected in South Wales during the 1970s. Within this analysis, a new technique is proposed that allows the calculation of derived variables under the assumption that the joint relationships between the variables are constant at each time point.
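As a toy illustration of the vertex/edge encoding (the variable names are hypothetical, and networkx is just one convenient container for the structure):

```python
import networkx as nx

# Directed edges encode dependence of a response on explanatory or
# intermediate variables; undirected edges encode symmetric associations.
chain = nx.DiGraph()
chain.add_edge("X1", "Z")  # intermediate Z depends on explanatory X1
chain.add_edge("X2", "Z")  # ... and on X2
chain.add_edge("Z", "Y")   # response Y depends on the intermediate Z

assoc = nx.Graph()
assoc.add_edge("X1", "X2")  # undirected association between X1 and X2

print(list(nx.topological_sort(chain)))  # e.g. ['X1', 'X2', 'Z', 'Y']
```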
15

Tests of statistical hypotheses in measurement error models

Navrátil, Radim January 2014
The behavior of rank procedures in measurement error models was studied: whether tests and estimates remain valid and applicable when measurement errors are involved and, if not, how these procedures can be modified so that statistical inference is still possible. A new rank test for the slope parameter in the regression model, based on a minimum distance estimator, and an aligned rank test for the intercept were proposed. The (asymptotic) bias of the R-estimator in the measurement error model was also investigated. Besides measurement errors, the problem of heteroscedastic model errors was considered: regression rank score tests of heteroscedasticity with nuisance regression, and tests of regression with nuisance heteroscedasticity, were proposed. Finally, tests and estimates of the shift parameter in the location model were studied for various measurement errors. All results were derived theoretically and then demonstrated numerically with examples or simulations.
16

Towards a flexible statistical modelling by latent factors for evaluation of simulated responses to climate forcings

Fetisova, Ekaterina January 2017
In this thesis, using the principles of confirmatory factor analysis (CFA) and the cause-effect concept associated with structural equation modelling (SEM), a new flexible statistical framework for evaluating climate model simulations against observational data is suggested. The design of the framework also makes it possible to investigate the magnitude of the influence of different forcings on temperature, as well as to investigate a general causal latent structure of temperature data. In terms of the questions of interest, the framework can be viewed as a natural extension of the statistical approach of 'optimal fingerprinting' employed in many Detection and Attribution (D&A) studies. Its flexibility means that it can be applied under different circumstances concerning the availability of simulated data, the number of forcings in question, the climate-relevant properties of these forcings, and the properties of the climate model under study, in particular those concerning the reconstructions of forcings and their implementation. Although the framework takes near-surface temperature as the climate variable of interest and focuses on roughly the last millennium prior to industrialisation, the statistical models included in the framework can in principle be generalised to any period in the geological past, provided that simulations and proxy data on some continuous climate variable are available. Within the confines of this thesis, the performance of some CFA and SEM models is evaluated in pseudo-proxy experiments, in which the true unobservable temperature series is replaced by temperature data from a selected climate model simulation. The results indicated that, depending on the climate model and the region under consideration, the underlying latent structure of temperature data can be of varying complexity, rendering the statistical framework, which serves as a basis for a wide range of CFA and SEM models, a powerful and flexible tool. Thanks to these properties, its application may ultimately contribute to increased confidence in conclusions about the ability of the climate model in question to simulate observed climate changes. At the time of the doctoral defense, the following papers were unpublished manuscripts: Paper 2 and Paper 3.
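For orientation, the CFA measurement model that such a framework builds on is conventionally written as below; the mapping of symbols to climate quantities is an illustrative assumption, not the thesis's exact specification:

```latex
\mathbf{x}_t = \boldsymbol{\Lambda}\,\boldsymbol{\xi}_t + \boldsymbol{\delta}_t,
\qquad
\operatorname{Cov}(\mathbf{x}_t) =
  \boldsymbol{\Lambda}\,\boldsymbol{\Phi}\,\boldsymbol{\Lambda}^{\top} + \boldsymbol{\Theta}
```

Here \(\mathbf{x}_t\) would collect simulated and reconstructed temperature series, the latent factors \(\boldsymbol{\xi}_t\) represent forcing-related temperature responses, the loadings in \(\boldsymbol{\Lambda}\) quantify the magnitude of each forcing's influence, and \(\boldsymbol{\delta}_t\) absorbs noise; SEM extends this with directed (cause-effect) relations among the latent factors.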
17

Automatic Development of Pharmacokinetic Structural Models

Hamdan, Alzahra January 2022
Introduction: The current development strategy for population pharmacokinetic models is a complex, iterative process performed manually by modellers. Such a strategy is time-consuming, subjective, and dependent on the modeller's experience. This thesis presents a novel model-building tool that automates the development of pharmacokinetic (PK) structural models. Methods: Modelsearch is a tool in the Pharmpy library, an open-source package for pharmacometrics modelling, that searches for the best structural model using an exhaustive stepwise search algorithm. Given a dataset, a starting model and a pre-specified search space of structural model features, the tool creates and fits a series of candidate models that are then ranked based on a selection criterion, leading to the selection of the best model. The Modelsearch tool was used to develop structural models for 10 clinical PK datasets (5 orally and 5 i.v. administered drugs). A starting model for each dataset was generated using the assemblerr package in R; it included first-order (FO) absorption without any absorption delay for oral drugs, one-compartment disposition, FO elimination, a proportional residual error model, and inter-individual variability (IIV) on the starting model parameters with a correlation between clearance (CL) and central volume of distribution (VC). The model search space covered absorption and absorption delay (for oral drugs), distribution and elimination. In order to understand the effects of different IIV structures on structural model selection, five model search approaches were investigated, differing in the IIV structure of the candidate models: 1. naïve pooling, 2. IIV on the starting model parameters only, 3. additional IIV on the mean delay time parameter, 4. additional diagonal IIVs on newly added parameters, and 5. full block IIVs. Additionally, the placement of structural model selection in a fully automatic model development workflow was investigated. Three strategies were evaluated: SIR, SRI and RSI, named for the order in which the structural model (S), IIV model (I) and residual error model (R) are developed. Moreover, the NONMEM errors encountered when using the tool were investigated and categorized so that they can be handled in the automatic model-building workflow. Results: The final selected structural models for each drug differed between the five model search approaches. The same distribution components were selected by Approaches 1 and 2 for 6/10 drugs. Approach 2 also identified an absorption delay component in 4/5 oral drugs, whilst the naïve pooling approach identified an absorption delay model in only 2 drugs. Compared to Approaches 1 and 2, Approaches 3, 4 and 5 tended to select more complex models and more often resulted in minimization errors during the search. For the SIR, SRI and RSI investigations, the same structural model was selected for 9/10 drugs, with a significantly higher run time for the RSI strategy than for the other strategies. The NONMEM errors were categorized into four categories based on the suggested handling, which is valuable for further improving the tool's automatic error handling. Conclusions: The Modelsearch tool was able to automatically select a structural model under different strategies for setting the IIV model structure. This novel tool enables the evaluation of numerous combinations of model components, which would not be possible with a traditional manual model-building strategy. Furthermore, the tool is flexible and can support multiple research investigations into how best to implement structural model selection in a fully automatic model development workflow.
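The candidate-generation-and-ranking idea can be sketched generically. The following is not Pharmpy's actual API; the feature names, fake fit gains and BIC-style penalty are invented purely to make the selection mechanics concrete:

```python
from itertools import combinations
from math import log

# Hypothetical structural features a search space might contain.
FEATURES = ("zero_order_absorption", "absorption_lagtime",
            "peripheral_compartment")

# Toy stand-in for "fit the candidate in NONMEM": a fake drop in -2LL per
# feature, plus a BIC-style penalty per added parameter.
FAKE_GAIN = {"zero_order_absorption": 12.0, "absorption_lagtime": 2.0,
             "peripheral_compartment": 25.0}

def fit_and_score(features, base_m2ll=1000.0, n_obs=500):
    minus2ll = base_m2ll - sum(FAKE_GAIN[f] for f in features)
    return minus2ll + len(features) * log(n_obs)   # lower is better

# Enumerate every feature combination, "fit" each, rank, and select.
candidates = [c for k in range(len(FEATURES) + 1)
                for c in combinations(FEATURES, k)]
best = min(candidates, key=fit_and_score)
print(best)  # the features whose fake fit gain outweighs the penalty
```

An exhaustive enumeration like this is what makes the automated search attractive: it evaluates combinations a manual stepwise workflow would never try, at the cost of many more model fits.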
18

Logistic regression model with measurement error: a comparison of estimation methods

Rodrigues, Agatha Sacramento 27 June 2013
We study the logistic regression model when explanatory variables are measured with error. Four estimation methods are considered: maximum pseudo-likelihood, obtained through a Monte Carlo expectation-maximization type algorithm; regression calibration; SIMEX; and the naïve method, which ignores the measurement error. These methods are compared through simulation. From the estimation point of view, we compare the methods by evaluating their biases and root mean square errors. The predictive quality of the methods is evaluated based on sensitivity, specificity, positive and negative predictive values, accuracy and the Kolmogorov-Smirnov statistic. The simulation studies show that the best-performing method for parameter estimation is maximum pseudo-likelihood, while there is no difference among the methods for predictive purposes. The results are illustrated on two real data sets from different application areas: a medical one, where the goal is estimation of the odds ratio, and a financial one, where the goal is prediction of new observations.
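Among the compared methods, SIMEX is the most mechanical to sketch: deliberately inflate the measurement error at several levels λ ≥ 0, refit the naïve estimator at each level, and extrapolate the trend back to λ = −1, the error-free case. A minimal sketch on synthetic data (the constants are illustrative; this is not the thesis's implementation):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, sigma_u, true_beta = 2000, 0.8, 1.0
x = rng.standard_normal(n)                       # true covariate
w = x + sigma_u * rng.standard_normal(n)         # error-prone measurement
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 + true_beta * x))))

def naive_slope(wcol):
    # Large C: an effectively unpenalized logistic fit on the noisy covariate.
    m = LogisticRegression(C=1e6).fit(wcol.reshape(-1, 1), y)
    return m.coef_[0, 0]

# Refit the naive estimator at increasing levels of added measurement error.
lambdas, B = np.array([0.0, 0.5, 1.0, 1.5, 2.0]), 50
means = [np.mean([naive_slope(w + np.sqrt(lam) * sigma_u *
                              rng.standard_normal(n)) for _ in range(B)])
         for lam in lambdas]

# Quadratic extrapolation of the naive estimates back to lambda = -1.
beta_simex = np.polyval(np.polyfit(lambdas, means, deg=2), -1.0)
print(f"naive {means[0]:.2f} vs SIMEX {beta_simex:.2f} (true {true_beta})")
```

The naive slope shrinks toward zero as error grows; extrapolating that trend past zero error is what removes most of the attenuation bias.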
20

Revisiting stormwater quality conceptual models in a large urban catchment: Online measurements, uncertainties in data and models

Sandoval Arenas, Santiago 05 December 2017
Total Suspended Solids (TSS) stormwater models for urban drainage systems are often required for scientific, legal, environmental and operational reasons. However, traditional TSS stormwater model structures have been widely questioned, especially when reproducing data from online measurements at the outlet of large urban catchments. In this thesis, three potential limitations of traditional TSS stormwater models are analyzed in a 185 ha urban catchment (Chassieu, Lyon, France), by means of 365 rainfall events monitored online: a) uncertainties in TSS data due to field conditions; b) uncertainties in hydrological models and rainfall measurements; and c) uncertainties in the stormwater quality model structures. These aspects are investigated in six separate contributions, whose principal results can be summarized as follows. a) TSS data acquisition and validation: (i) four sampling strategies during rainfall events are simulated and evaluated against online TSS and flow rate measurements. Recommended sampling time intervals are about 5 min; average sampling errors range between 7 % and 20 %, with uncertainties in these errors of about 5 %, depending on the sampling interval; (ii) the probability of underestimating the cross-section mean TSS concentration is estimated by two methodologies, one of which yields more realistic TSS underestimations (about 39 %) than the other (about 269 %). b) Hydrological models and rainfall measurements: (iii) a parameter estimation strategy is proposed for a conceptual rainfall-runoff model by analyzing, through cluster and graph representations, the variability of the optimal parameters obtained from single-event Bayesian calibrations; the new strategy is more accurate and precise in validation; (iv) a methodology for estimating "mean" areal rainfall is proposed, based on the same hydrological model and flow rate data. Rainfall estimates obtained by applying multiplying factors over constant-length time windows, with rainfall zero-records filled in by a reverse model, are the most satisfactory compared with other rainfall estimation models. c) Stormwater TSS pollutograph modelling: (v) the modelling performance of the traditional Rating Curve (RC) model is superior to that of various linear Transfer Function models (TFs), especially in terms of parsimony and precision of the simulations. No relation could be established between the rainfall corrections or hydrological conditions defined in (iii) and (iv) and the performance of the RC and TF models. Statistical tests reinforce that the occurrence over time of events not representable by the RC model is independent of antecedent dry-weather conditions; (vi) a Bayesian reconstruction method for virtual state variables indicates that the processes potentially missing from the RC description are hardly interpretable as a single virtual state of available mass over the catchment that decreases over time, as assumed by a great number of traditional models.
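The Rating Curve model referred to in (v) is conventionally the power law C = a·Q^b linking TSS concentration to flow rate. A minimal fitting sketch on synthetic data, assuming that standard form (the coefficients and noise levels are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
q = rng.uniform(0.05, 2.0, size=200)                              # flow, m3/s
tss = 150.0 * q ** 0.7 * np.exp(0.2 * rng.standard_normal(200))   # TSS, mg/L

# Fit log C = log a + b log Q by ordinary least squares.
b, log_a = np.polyfit(np.log(q), np.log(tss), deg=1)
a = float(np.exp(log_a))
print(f"fitted rating curve: C = {a:.1f} * Q^{b:.2f}")
```

Fitting in log-log space, as here, is one common way such curves are calibrated; the thesis's point in (vi) is precisely about what this simple structure leaves out.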
