101

Modélisation statistique pour données fonctionnelles : approches non-asymptotiques et méthodes adaptatives / Statistical modeling for functional data : non-asymptotic approaches and adaptive methods

Roche, Angelina, 07 July 2014
The main purpose of this thesis is to develop adaptive estimators for functional data. In the first part, we focus on the functional linear model and propose a dimension selection device for projection estimators defined on both fixed and data-driven bases. The prediction error of the resulting estimators satisfies an oracle-type inequality and reaches the minimax rate of convergence. For the estimator defined on a data-driven approximation space, tools of perturbation theory are used to control the random projectors non-asymptotically and thereby handle the random nature of the collection of models. From a numerical point of view, this dimension selection method is faster and more stable than the usual cross-validation methods. In the second part, we consider the problem of bandwidth selection for kernel estimators of the conditional cumulative distribution function when the covariate is functional. The method is inspired by the work of Goldenshluger and Lepski. The risk of the resulting estimator is upper-bounded non-asymptotically. We also prove lower bounds and establish that our estimator reaches the minimax convergence rate, up to an extra logarithmic term. In the last part, we propose an extension to the functional context of response surface methodology, widely used in industry. This work is motivated by an application to nuclear safety.
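As a rough illustration of the projection-estimator idea, the sketch below fits a functional linear model on a data-driven (FPCA) basis and selects the dimension by a penalized empirical risk. Everything here is an assumption for illustration: the simulated Brownian-like curves, the penalty constant, and the candidate dimensions stand in for the thesis's oracle-type criterion and are not the actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 50                      # n curves observed on a grid of p points
t = np.linspace(0, 1, p)

# Simulate smooth random curves X_i(t) and a slope function beta(t)
X = np.cumsum(rng.normal(size=(n, p)) / np.sqrt(p), axis=1)   # Brownian-like paths
beta = np.sin(2 * np.pi * t)
Y = X @ beta / p + rng.normal(scale=0.1, size=n)              # <X_i, beta> + noise

# Data-driven basis: eigenfunctions of the empirical covariance operator (FPCA)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc / np.sqrt(n), full_matrices=False)

def projection_risk(m):
    """Empirical risk of the projection estimator with dimension m, plus a
    dimension penalty (an illustrative stand-in for the oracle-type criterion)."""
    S = X @ Vt[:m].T / p                     # scores on the first m eigenfunctions
    coef, *_ = np.linalg.lstsq(S, Y, rcond=None)
    resid = Y - S @ coef
    return np.mean(resid**2) + 0.02 * m / n  # penalized least squares

m_hat = min(range(1, 16), key=projection_risk)
```

In practice the penalty constant would be calibrated; here it only illustrates the bias-variance trade-off driving the dimension choice.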
102

Condições de regularidade para o modelo de regressão com parametrização geral / Regularity conditions for the regression model with general parameterization

Loose, Laís Helen, 24 May 2019
This work presents a detailed and systematic study of some regularity conditions for maximum likelihood inference in the multivariate elliptical regression model with general parameterization proposed in Lemonte and Patriota (2011). The model under study has several important models as particular cases, among them linear and nonlinear homoscedastic and heteroscedastic models, mixed models, heteroscedastic models with errors in the variables and in the equation, and multilevel models. The regularity conditions studied are associated with the identifiability of the model; the existence, uniqueness, consistency, and asymptotic normality of the maximum likelihood estimators (MLE); and the asymptotic distribution of test statistics. Sufficient conditions are stated, and theorems are formalized that guarantee the existence, uniqueness, consistency, and asymptotic normality of the MLE and the asymptotic distribution of the usual test statistics. In addition, the results of each theorem are commented on and the proofs are presented in detail. Initially, the model is considered under the assumption of normally distributed errors; the results are then generalized to the elliptical case. To exemplify the results obtained, the validity of some conditions and the conclusions of some theorems are verified analytically in particular cases of the general model. Furthermore, a simulation study is developed in which one of the conditions is violated, adopting the heteroscedastic model with errors in the variables and in the equation. By means of Monte Carlo simulations, the impacts of this violation on the consistency and asymptotic normality of the MLE are evaluated.
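The Monte Carlo check of consistency and asymptotic normality described above can be sketched in a few lines for the simplest possible case, the MLE of a normal mean (a hypothetical toy model, not the general parameterization of Lemonte and Patriota):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 2.0, 1.5

def mle_mu(n):
    """MLE of mu in an i.i.d. N(mu, sigma^2) sample (the sample mean)."""
    x = rng.normal(mu, sigma, size=n)
    return x.mean()

# Consistency: the spread of the MLE around mu shrinks as n grows
reps = 2000
err_small = np.array([mle_mu(20) - mu for _ in range(reps)])
err_large = np.array([mle_mu(500) - mu for _ in range(reps)])

# Asymptotic normality: sqrt(n)*(mu_hat - mu)/sigma should be ~N(0,1);
# check the empirical coverage of the nominal 95% interval
z = np.sqrt(500) * err_large / sigma
coverage = np.mean(np.abs(z) < 1.96)
```

When a regularity condition fails, the same experiment would show the empirical coverage drifting away from the nominal 95% level, which is the kind of effect the thesis's simulation study measures.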
103

Análise e implementação de estimadores de estados em processos químicos. / Analysis and Implementation of state estimators in chemical processes.

Rincón Cuellar, Franklin David, 27 March 2013
In this work, strategies for estimating states, parameters, and the covariances of the process and measurement noise in chemical processes are presented and tested with experimental data. For state and parameter estimation, techniques ranging from the traditional extended Kalman filter (EKF) to more modern methods from the literature, such as the unscented Kalman filter (UKF) and the moving horizon estimator (MHE), were implemented. The Autocovariance Least-Squares (ALS) technique allows the covariance matrices of the process and measurement noise to be estimated from the measured states of the processes analyzed. Three cases were studied with these techniques: the hydrolysis of acetic anhydride, the warming-up stage of a fully charged polymerization reactor (without initiator) to the desired temperature, and eight different emulsion polymerization reaction runs. The results showed that tuning the covariance matrices by trial and error does not yield adequate performance. Additionally, the UKF performs better than the EKF for monitoring batch polymerization processes with covariance matrices obtained by direct optimization. When the covariances are estimated with the ALS technique and the results are used in stochastic estimators, the performance of the recursive estimators improves considerably. Furthermore, the MHE proved to be a robust tool for monitoring the overall heat transfer coefficient (UA) and the heat of reaction in fed-batch emulsion polymerization. Finally, two advantageous features of the proposed methodology should be highlighted: its low dependence on the initial condition of the state UA, and the fact that a single set of covariance matrices (when obtained by the ALS technique) can be used across different reaction runs, without retuning the matrices for each reaction.
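Of the estimators compared above, the EKF is the easiest to sketch. The scalar system below is a hypothetical toy, not one of the chemical processes studied; it only shows the predict/linearize/update cycle:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative scalar system: x_{k+1} = 0.9*sin(x_k) + w_k,  y_k = x_k + v_k
Q, R = 0.01, 0.1                     # process and measurement noise variances

def simulate(T=200, x0=1.0):
    xs, ys, x = [], [], x0
    for _ in range(T):
        x = 0.9 * np.sin(x) + rng.normal(scale=np.sqrt(Q))
        xs.append(x)
        ys.append(x + rng.normal(scale=np.sqrt(R)))
    return np.array(xs), np.array(ys)

def ekf(ys, x0=1.0, P0=1.0):
    """Extended Kalman filter: linearize the dynamics at the current estimate."""
    x, P, est = x0, P0, []
    for y in ys:
        F = 0.9 * np.cos(x)                      # Jacobian of the dynamics
        x, P = 0.9 * np.sin(x), F * P * F + Q    # predict
        K = P / (P + R)                          # Kalman gain (H = 1 here)
        x, P = x + K * (y - x), (1 - K) * P      # update
        est.append(x)
    return np.array(est)

xs, ys = simulate()
xhat = ekf(ys)
rmse_filter = np.sqrt(np.mean((xs - xhat) ** 2))
rmse_measure = np.sqrt(np.mean((xs - ys) ** 2))  # raw measurements as baseline
```

The point the abstract makes about covariance tuning shows up directly here: the filter's gain, and hence its performance, is set entirely by the assumed Q and R, which is why estimating them (e.g. by ALS) rather than guessing them matters.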
104

Asymptotischer Vergleich höherer Ordnung adaptiver linearer Schätzer in Regressionsmodellen / Higher-order asymptotic comparison of adaptive linear estimators in regression models

Ilouga, Pierre Emmanuel, 20 April 2001
The main objective of this thesis is the asymptotic comparison of various nonparametric adaptive estimators of the mean vector in a regression situation with an increasing number of replications at fixed design points. Since the adaptive estimators are no longer linear in the observations, their mean squared errors are approximated by their higher-order risks, which enables a comparison under the assumption of normally distributed observation errors. It is shown that the plug-in estimator of the unknown mean vector is better in third order than the other adaptive estimators, and that the estimator constructed with the full cross-validation idea is better than the cross-validation estimator if the unknown regression function is "non-smooth". In some special situations, the full cross-validation adaptation is better than the adaptations based on minimizing the "automatic" criteria considered in this thesis. Additionally, an estimator of the mean vector is constructed using a plug-in idea, whose second-order risk is smaller than the second-order risks of the other adaptive estimators. If, however, one presumes that the unknown regression function is "very smooth", it is shown that the projection estimator yields the smallest mean squared error.
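For intuition, the sketch below selects a bandwidth by ordinary leave-one-replication-out cross-validation in the fixed-design-with-replications setting described above. It is a plain CV adaptation, not the full cross-validation or plug-in adaptations the thesis compares, and all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Fixed design points with replications, as in the setting above
x = np.linspace(0, 1, 20)                    # design points
r = 10                                       # replications per point
f = np.sin(2 * np.pi * x)                    # unknown regression function
Y = f[None, :] + rng.normal(scale=0.3, size=(r, 20))
ybar = Y.mean(axis=0)

def nw_fit(h, y):
    """Nadaraya-Watson smoother of the pointwise means with bandwidth h."""
    W = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    W /= W.sum(axis=1, keepdims=True)
    return W @ y

def cv_score(h):
    """Leave-one-replication-out cross-validation over the r replications."""
    err = 0.0
    for j in range(r):
        mask = np.arange(r) != j
        fit = nw_fit(h, Y[mask].mean(axis=0))
        err += np.mean((Y[j] - fit) ** 2)
    return err / r

hs = [0.02, 0.05, 0.1, 0.2, 0.4]
h_cv = min(hs, key=cv_score)
mse_cv = np.mean((nw_fit(h_cv, ybar) - f) ** 2)
```

The thesis's point is precisely that different adaptation rules (cross-validation, full cross-validation, plug-in) pick such a smoothing parameter differently, and that the differences only show up in the higher-order terms of the risk.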
105

Finite sample analysis of profile M-estimators

Andresen, Andreas, 02 September 2015
This thesis presents a new approach to the analysis of profile M-estimators for finite samples. The results of Spokoiny (2011) are refined and adapted to the estimation of components of a finite-dimensional parameter via the maximization of a criterion functional. Finite sample versions of the Wilks phenomenon and the Fisher expansion are obtained, and the critical ratio of parameter dimension to sample size is derived in the setting of i.i.d. samples and a sufficiently smooth criterion functional. The results are extended to parameters in infinite-dimensional Hilbert spaces using the sieve approach of Grenander (1981). The sieve bias is controlled via common regularity assumptions on the parameter and the functional, but the results do not rely on a basis that is orthogonal in the inner product induced by the model. Furthermore, the thesis presents two convergence results for the alternating maximization procedure used to approximate the profile estimator. All results are exemplified in an application to the Projection Pursuit procedure of Friedman (1981). Under a set of natural and common assumptions, all theoretical results of the thesis can be applied using Daubechies wavelets.
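A minimal sketch of the alternating maximization idea, under strong simplifying assumptions (a Gaussian least-squares criterion, a partially linear toy model, a small polynomial sieve for the nuisance); the thesis's procedure and guarantees are far more general:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
x = rng.normal(size=n)
z = rng.uniform(-1, 1, size=n)
theta0 = 1.5
g = lambda t: np.cos(np.pi * t)              # unknown nuisance component
y = theta0 * x + g(z) + rng.normal(scale=0.2, size=n)

# Sieve for the nuisance: a small polynomial basis in z (illustrative choice)
B = np.vander(z, 4, increasing=True)         # columns 1, z, z^2, z^3

def profile_alternate(iters=20):
    """Alternating maximization of the (Gaussian) criterion: optimize the
    target theta with the nuisance fixed, then the nuisance with theta fixed."""
    theta, eta = 0.0, np.zeros(B.shape[1])
    for _ in range(iters):
        theta = x @ (y - B @ eta) / (x @ x)                      # target step
        eta, *_ = np.linalg.lstsq(B, y - theta * x, rcond=None)  # nuisance step
    return theta

theta_hat = profile_alternate()
```

Each sweep maximizes the criterion over one block with the other held fixed, so the criterion value is monotonically non-decreasing; the thesis's convergence results quantify how quickly such sweeps approach the profile estimator.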
106

Propriedades de filtros lineares para sistemas lineares com saltos markovianos a tempo discreto / Properties of linear filters for discrete-time Markov jump linear systems.

Gomes, Maria Josiane Ferreira, 12 March 2015
This work studies the estimation error of linear filters for discrete-time linear systems with Markov jump parameters. We introduce a notion of average reachability for this class of systems and construct a set of matrices playing the role of reachability matrices, in the sense that their rank is full if and only if the system is average reachable. Average reachability is a necessary and sufficient condition for positiveness of the second moment of the system state, a result that helps characterize uniform positiveness of the error covariance matrix of general (possibly non-Markovian) linear estimators. Stability of linear Markovian estimators is also addressed: Markovian filters are shown to be stable whenever the conditioned second moment is positive, with the interpretation that the error covariance remains bounded in the presence of an error of any magnitude in the noise model, which is a relevant feature for applications. Numerical examples are included.
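For intuition, the rank test below uses the classical reachability matrix of a single deterministic pair (A, B); the thesis's construction is a set of such matrices adapted to the Markov jump setting, so this is only the non-jump special case:

```python
import numpy as np

def reachability_matrix(A, B):
    """Classical reachability/controllability matrix [B, AB, ..., A^{n-1}B]."""
    n = A.shape[0]
    blocks, M = [], B
    for _ in range(n):
        blocks.append(M)
        M = A @ M
    return np.hstack(blocks)

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
rank = np.linalg.matrix_rank(reachability_matrix(A, B))      # full rank: reachable

B_bad = np.array([[1.0], [0.0]])     # input that cannot excite the second state
rank_bad = np.linalg.matrix_rank(reachability_matrix(A, B_bad))
```

The thesis's "average reachability" plays the analogous role for jump systems: full rank of its reachability matrices is equivalent to positiveness of the state's second moment, which in turn underpins the positiveness of the error covariance.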
107

[en] ELECTRICAL ENERGY CONDITIONAL DEMAND ANALYSIS USING ROBUST REGRESSION: APPLICATION TO A REAL CASE / [pt] ANÁLISE CONDICIONADA DA DEMANDA DE ENERGIA ELÉTRICA: APLICAÇÃO A UM CASO REAL

ERICK ROMARIO DE PAULA, 11 October 2006
This work evaluates the use of Conditional Demand Analysis (CDA), a methodology that breaks down electric energy consumption (here, in the residential sector) into its components per appliance and per end use, using robust regression instead of classical regression to estimate residential electricity consumption by end use. Analyses are carried out via multiple linear regression and also via robust regression (robust estimators). The two analyses are compared: the classical Ordinary Least Squares method (OLS; MQO in Portuguese), which is not ideal here because the data violate the assumptions required by that technique, against the robust method, which is less sensitive to deviations from those assumptions.
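The contrast between OLS and a robust fit can be sketched with a Huber M-estimator computed by iteratively reweighted least squares. The data are synthetic with artificial outliers, standing in for the consumption data; the tuning constant c = 1.345 is the usual choice for the Huber function:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
x = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(scale=0.2, size=n)
y[:10] += 8.0                        # a few gross outliers, as in real survey data

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

def huber_irls(X, y, c=1.345, iters=50):
    """Huber M-estimator via iteratively reweighted least squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12      # robust scale (MAD)
        w = np.clip(c * s / np.maximum(np.abs(r), 1e-12), None, 1.0)
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (w * y))
    return beta

beta_rob = huber_irls(X, y)
```

The outliers pull the OLS coefficients away from the truth, while the Huber weights shrink their influence, which is exactly the behaviour motivating the robust approach in the CDA setting.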
108

Residual Error Estimation And Adaptive Algorithms For Fluid Flows

Ganesh, N, 05 1900
The thesis deals with the development of a new residual error estimator, and of adaptive algorithms based on it, for steady and unsteady fluid flows in a finite volume framework. The a posteriori residual error estimator, referred to as the R--parameter, is a measure of the local truncation error and is derived from the imbalance arising from the use of an exact operator on the numerical solution of conservation laws. A detailed and systematic study of the R--parameter on linear and non--linear hyperbolic problems, involving continuous flows and discontinuities, is performed. Simple theoretical analysis and extensive numerical experiments establish that the R--parameter is a valid estimator in limiter--free continuous flow regions, but is rendered inconsistent at discontinuities and with limiting. The R--parameter is demonstrated to work equally well on different mesh topologies and detects the sources of error, making it an ideal choice to drive adaptive strategies. The theory of the error estimation is also extended to unsteady flows, both on static and moving meshes. The R--parameter can be computed with a low computational overhead and is easily incorporated into existing finite volume codes with minimal effort. Adaptive refinement algorithms for steady flows are devised employing the residual error estimator. For continuous flows devoid of limiters, a purely R--parameter based adaptive algorithm is designed. A threshold length scale derived from the estimator determines the refinement/derefinement criterion, leading to a self--evolving adaptive algorithm devoid of heuristic parameters. On the other hand, for compressible flows involving discontinuities and limiting, a hybrid adaptive algorithm is proposed. In this hybrid algorithm, error indicators are used to flag regions for refinement, while regions of derefinement are detected using the R--parameter.
Two variants of these algorithms, which differ in the computation of the threshold length scale, are proposed. The disparate behaviour of the R--parameter for continuous and discontinuous flows is exploited to design a simple and effective discontinuity detector for compressible flows. For time--dependent flow problems, a two--step methodology is proposed for adaptive grid refinement. In the first step, the ``best" mesh at any given time instant is determined. The second step predicts the evolution of flow phenomena over a period of time and refines regions into which the flow features would progress. The latter step is implemented using a geometry--based ``Refinement Level Projection" strategy, which guarantees that the flow features remain in adapted zones between successive adaptive cycles and hence uniform solution accuracy. Several numerical experiments involving inviscid and viscous flows on different grid topologies are performed to illustrate the success of the proposed adaptive algorithms.

Appendix 1: Candidate's response to the comments/queries of the examiners

The author would like to thank the reviewers for their appreciation of the work embodied in the thesis and for their comments. The clarifications to the comments and queries posed in the reviews are summarized below.

Referee 1

Q: The example of mesh refinement for a RANS solution with a shock was performed with an isotropic mesh, while the author claims that an anisotropic mesh is appropriate. If this is the case, why did he not demonstrate that? As the author knows well, in the case of a full 3--D configuration, isotropic adaptation will lead to a substantial number of grid points. The large mesh will hamper timely turnaround of the simulation. Therefore it would be a significant contribution to the aero community if this point is investigated at a later date.
Response: The author is of the view that for most practical situations, a pragmatic approach to mesh adaptation for RANS computations would merely involve generating a viscous padding of adequate fineness around the body and allowing grid adaptation only in the outer potential region. Of course, this method would allow grid adaptation in the outer layers of the viscous padding only to the extent that the smoothness criterion is satisfied while adapting the grids in the potential region. This completely obviates point addition on the wall (CAD surface) and thereby avoids all complexities (like loss of automation) resulting from interaction with the surface modeler while adding points on the wall. This method is expected to do well for attached and mildly separated flows. It is expected to do well even for problems involving shock--boundary layer interaction, owing to the fact that the shock is normal to the wall surface (recall that a flow-aligned grid is ideal to capture such shocks), as long as the interaction does not result in massive separation. This approach has already been demonstrated in Section 4.5.3, wherein adaptive high-lift computations have been performed. Isotropic adaptation retains the goodness of the zero-level grid, and therefore the robustness of the solver does not suffer through successive levels of grid adaptation. This procedure may result in a large number of volumes. On the other hand, anisotropic refinement may result in significantly fewer volumes, but the mesh quality may degenerate badly during successive levels of adaptation, leading to difficulties in convergence. Therefore, the choice between these strategies is effectively dictated by requirements on grid quality and grid size. Also, it is generally understood that building tools for anisotropic adaptation is more complicated than for isotropic adaptation, although anisotropic refinement may not require point addition on the wall. Considering these facts, in the view of the author this remains an open issue, and his personal preference would be to use isotropic refinement or a hybrid strategy combining these methodologies, particularly considering aspects of solution quality. Finally, in both examples cited by the reviewer (Sections 6.4.5 and 6.4.6) the objective was to demonstrate the efficacy of the new adaptive algorithm (using error indicators and the residual estimator), rather than to evaluate the pros and cons of isotropic and anisotropic refinement strategies. In the sections cited above, the author has merely highlighted the advantages of the refinement strategies in the specific context of the problem considered, and these statements need not be considered general.

Referee 2

Q: For convection problems, a good error estimator must be able to distinguish between locally generated error and convected error. The thesis says the residual error estimator is able to do this and some numerical evidence is presented, but can the candidate comment on how the estimator achieves this?

Response: The ultimate aim of any AMR strategy is to reduce the global error. The residual error estimator proposed in this work measures the local truncation error. It has been shown, in the context of a linear convective equation, that the global error in a cell consists of two parts: the locally generated error in the cell (which is the R--parameter) and the error transported from other cells in the domain. Both of these errors depend on local errors, and any algorithm that reduces the local truncation error (the sources of error) will reduce the global error in the domain.
This conclusion is supported by the test case of isentropic flow past an airfoil (Chapter 3, C, Pg 79), where refinement based on the R--parameter leads to lower global error levels than refinement based on the global error itself.

Q: While analysing the R--parameter in Section 3.3, the operator δ2 is missing.

Response: The analysis in Section 3.3 is based on Eq. (3.3) (Pg 58), which provides the local truncation error. As can be seen from Eq. (3.14), the LHS represents the discrete operator acting on the numerical solution (which is zero) and the first term on the RHS is the exact operator acting on the numerical solution (which is I[u]). Consequently, the truncation terms T1 and T2 contribute to the truncation error R1. However, from the viewpoint of computing the error estimate on a discretised domain, we need to replace the exact operator I by a higher order discrete operator δ2. This gives the R--parameter, which has contributions from R1 as well as discretisation errors due to the higher order operator, R2. When the latter is negligible compared to the former, the R--parameter is an estimate of the local truncation error. The truncation error depends on the accuracy of the reconstruction procedure used in obtaining the numerical solution, and hence on the discrete operator δ1. Along very similar lines, it can be shown that the operator δ2 leads to formal second order accuracy; this operator is only required in computing the residual error estimate.

Q: What does the phrase "exact derivatives of the numerical solution" mean?

Response: This statement reflects the fact that the numerical solution is the exact solution of the modified partial differential equation, and that the truncation terms T1 and T2 that constitute the R--parameter are functions of the derivatives of this numerical solution.

Q: For the operator δ2, quadratic reconstruction is employed. Is the exact or the numerical flux function used?

Response: The operator δ2 is a higher order discrete approximation to the exact operator I. Therefore, a quadratic polynomial with a three--point Gauss quadrature has been used in the error estimation procedure. Error estimation does not involve the convergence issues associated with the flow solver, and therefore an exact flux function has been employed with the δ2 operator. Nevertheless, it is also possible to use the same numerical flux function as employed in the flow solver for error estimation.

Q: The same stencil of grid points is used for the solution update and the error estimation. Does this not lead to an increased stencil size?

Response: In comparison to reconstruction using higher degree polynomials such as cubic and quartic reconstruction, quadratic reconstruction involves only a small stencil of points consisting of the node--sharing neighbours of a cell. Such a support stencil is sufficient for linear reconstruction as well and adds to the robustness of the flow solver, although a linear reconstruction can, in principle, work with a smaller support stencil. A possible alternative to quadratic reconstruction (and hence a slightly larger stencil) is to adopt a defect correction strategy to obtain derivatives to higher order accuracy; this needs to be explored in detail.

Q: How is the R--parameter computed for viscous flows?

Response: The computation of the R--parameter for viscous flows is along the same lines as for inviscid flows. The gradients needed for the viscous flux computation at the face centers are obtained using quadratic reconstruction. The procedure for calculating the R--parameter for steady flows (both inviscid and viscous) is the step--by--step algorithm in Section 3.5.

Q: In some cases, regions ahead of the shock show no coarsening.
Response: The adaptive algorithm proposed in this work does not allow for coarsening of the initial mesh, and regions ahead of the shock remain unaffected (because of uniform flow) at all levels of refinement.

Q: Do adaptation strategies terminate automatically, at least for steady flows?

Response: The adaptation strategies (RAS and HAS) must, in principle, by virtue of the construction of the algorithm, automatically terminate for steady flows. In the HAS algorithms, though, there are certain heuristic criteria for termination of refinement, especially at shocks and turbulent boundary layers. In this work, a maximum of only four cycles of refinement/derefinement has been carried out, and therefore automatic termination of the adaptive strategies was not studied.

Q: How do residual-based adaptive strategies compare and contrast with adjoint-based approaches, which are now becoming popular for goal-oriented adaptation?

Response: Adjoint-based methods involve solving the adjoint problem in addition to the primal problem, which represents a substantial computational cost. A timing study for a typical 3D problem [2] indicates that the solution of the adjoint problem (which needs the computation of the Jacobian and sensitivities of the functional) could require as much as one-half of the total time needed to compute the flow solution. On the contrary, R-parameter based refinement requires no information beyond that needed by the flow solver and costs roughly the equivalent of one explicit iteration of the flow solver (Section 3.5.1). For practical 3D applications, adjoint-based approaches will lead to a prohibitively high cost, and more so for dynamic adaptation. This is also exemplified by the fact that there have been only a few recent works on 3D adaptive computations based on adjoint error estimation (and these consider only inviscid flows) [1,2]. Goal-oriented adaptation involves reducing the error in some functional of interest.
This can be achieved within the framework of R-parameter based adaptation by introducing additional termination criteria based on integrated quantities. Within an automated adaptation loop, such an algorithm would terminate when the integrated quantities no longer change appreciably between refinement levels. This is in contrast to the adjoint-based approach, which strives to reduce the error in the functional below a certain threshold. Considering that reducing the residual leads to reducing the global error itself, the R-parameter based adaptive algorithm would also lead to accurate estimates of the integrated quantities (which depend on the numerical solution). This is reflected in the fact that R-parameter based adaptation for the three-element NHLP configuration predicts the lift and drag coefficients to reasonable accuracy, as shown in Section 4.5.3. The author believes that the R-parameter based adaptive algorithm holds great promise for adaptive simulations of flow past complex geometries, both in terms of computational cost and solution accuracy. This is exemplified by successful adaptive simulations of inviscid flow past the ONERA M6 wing as well as a conventional missile configuration [3]. A more concrete comparison of the R-parameter based and adjoint-based approaches would involve systematically solving a set of problems by both approaches; this has not been considered in this thesis.

[1] Nemec and Aftosmis, "Adjoint error estimation and adaptive refinement for embedded-boundary Cartesian meshes", AIAA Paper 2007-4187, 2007.
[2] Wintzer, Nemec and Aftosmis, "Adjoint-based adaptive mesh refinement for sonic boom prediction", AIAA Paper 2008-6593, 2008.
[3] Nikhil Shende, "A general purpose flow solver for Euler equations", Ph.D. Thesis, Dept. of Aerospace Engg., Indian Institute of Science, 2005.
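The δ1/δ2 construction discussed in the responses above can be illustrated on a 1D model problem. The sketch below is purely illustrative (it is not taken from the thesis and uses a simple Poisson equation rather than the flow equations): the numerical solution is obtained with a second-order operator δ1, a fourth-order operator δ2 is then applied to that same numerical solution, and the resulting residual R tracks the leading local truncation term of δ1.

```python
import numpy as np

# Model problem: u'' = f on [0, 1], u(0) = u(1) = 0, with f chosen so that
# the exact solution is u = sin(pi x). Illustrative sketch only.
n = 64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = -np.pi**2 * np.sin(np.pi * x)

# delta_1: standard 3-point second-order Laplacian; solve delta_1[u_h] = f
A = np.zeros((n - 1, n - 1))
for k in range(n - 1):
    A[k, k] = -2.0 / h**2
    if k > 0:
        A[k, k - 1] = 1.0 / h**2
    if k < n - 2:
        A[k, k + 1] = 1.0 / h**2
u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(A, f[1:-1])       # homogeneous Dirichlet BCs

# delta_2: 5-point fourth-order Laplacian applied to the *numerical* solution
i = np.arange(2, n - 1)
d2 = (-u[i - 2] + 16 * u[i - 1] - 30 * u[i] + 16 * u[i + 1] - u[i + 2]) / (12 * h**2)
R = d2 - f[i]                               # the residual (R-parameter analogue)

# Since delta_1[u_h] = f, R = (delta_2 - delta_1)[u_h], which approximates the
# leading truncation term of delta_1, namely -(h^2 / 12) u'''' :
tau = -(h**2 / 12.0) * np.pi**4 * np.sin(np.pi * x[i])
print(np.max(np.abs(R - tau)) / np.max(np.abs(R)))   # small relative deviation
```

Because the discrete operator δ1 annihilates the numerical solution, the residual of the higher-order operator isolates the truncation terms, which is precisely the mechanism the responses above describe for the R-parameter.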
109

Analysis of Longitudinal Surveys with Missing Responses

Carrillo Garcia, Ivan Adolfo January 2008 (has links)
Longitudinal surveys have emerged in recent years as an important data collection tool for population studies where the primary interest is to examine population changes over time at the individual level. The National Longitudinal Survey of Children and Youth (NLSCY), a large-scale survey with a complex sampling design conducted by Statistics Canada, follows a large group of children and youth over time and collects measurements on various indicators related to their educational, behavioral and psychological development. One of the major objectives of the study is to explore how such development is related to or affected by familial, environmental and economic factors. The generalized estimating equation approach, better known as the GEE method, is the most popular statistical inference tool for longitudinal studies. The vast majority of the existing literature on the GEE method, however, uses the method in non-survey settings, and issues related to complex sampling designs are ignored. This thesis develops methods for the analysis of longitudinal surveys when the response variable contains missing values. Our methods are built within the GEE framework, with a major focus on using the GEE method when missing responses are handled through hot-deck imputation. We first argue why, and further show how, the survey weights can be incorporated into the so-called pseudo GEE method under a joint randomization framework. The consistency of the resulting pseudo GEE estimators with complete responses is established under the proposed framework. The main focus of this research is to extend the proposed pseudo GEE method to cover cases where the missing responses are imputed through the hot-deck method. Both weighted and unweighted hot-deck imputation procedures are considered. The consistency of the pseudo GEE estimators under imputation for missing responses is established for both procedures.
Linearization variance estimators are developed for the pseudo GEE estimators under the assumption that the finite population sampling fraction is small or negligible, a scenario that typically holds for large-scale population surveys. Finite-sample performance of the proposed estimators is investigated through an extensive simulation study. The results show that the pseudo GEE estimators and the linearization variance estimators perform well under several sampling designs and for both continuous and binary responses.
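To make the survey-weighted estimating equation concrete, the sketch below shows a minimal pseudo-GEE with an identity link and a working-independence correlation, in which case solving the weighted estimating equation sum_i w_i X_i' (y_i - X_i beta) = 0 reduces to survey-weighted least squares over subjects' stacked repeated measurements. This is an illustrative simplification with simulated data, not the thesis's estimator or its variance machinery.

```python
import numpy as np

# Simulated longitudinal data: n subjects, t repeated measures each,
# within-subject correlated errors, and made-up survey weights w_i.
rng = np.random.default_rng(0)
n, t = 400, 4
beta_true = np.array([1.0, -0.5])

X = rng.normal(size=(n, t, 2))                      # covariates per occasion
eps = rng.normal(size=(n, t)) + rng.normal(size=(n, 1))   # exchangeable-ish errors
y = X @ beta_true + eps
w = rng.uniform(0.5, 2.0, size=n)                   # survey weights (hypothetical)

# Pseudo-GEE under working independence with identity link:
# accumulate sum_i w_i X_i' X_i and sum_i w_i X_i' y_i, then solve.
XtX = sum(w[i] * X[i].T @ X[i] for i in range(n))
Xty = sum(w[i] * X[i].T @ y[i] for i in range(n))
beta_hat = np.linalg.solve(XtX, Xty)
print(beta_hat)   # should be close to beta_true for moderate n
```

With a non-identity working correlation or a nonlinear link, the estimating equation no longer has a closed form and is solved iteratively; the role of the weights w_i, however, stays the same, which is the point of the joint randomization argument described in the abstract.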
