421 |
The Influence of Disease Mapping Methods on Spatial Patterns and Neighborhood Characteristics for Health Risk. Ruckthongsook, Warangkana, 12 1900 (has links)
This thesis addresses three interrelated challenges of disease mapping and contributes a new approach for improving the visualization of disease burdens to enhance disease surveillance systems. First, it determines an appropriate threshold choice (smoothing parameter) for adaptive kernel density estimation (KDE) in disease mapping. The results show that the appropriate threshold value depends on the characteristics of the data, and that bandwidth selector algorithms can guide such decisions about mapping parameters; similar approaches are recommended for map-makers who must choose threshold values for their own data. Second, the study evaluates the relative performance of adaptive KDE and spatial empirical Bayes for disease mapping. The results reveal that while the estimated rates at the state level computed from both methods are identical, those at the zip code level differ slightly. These findings indicate that using either the adaptive KDE or the spatial empirical Bayes method to map disease in urban areas may provide identical rate estimates, but caution is necessary when mapping diseases in non-urban (sparsely populated) areas. This study contributes insights on the relative performance of the two methods in terms of accuracy of visual representation and their associated limitations. Lastly, the study contributes a new approach for delimiting spatial units of disease risk using straightforward statistical and spatial methods together with social determinants of health. The results show that the neighborhood risk map not only helps in geographically targeting where to intervene but also in tailoring interventions in those areas to the high-risk populations. Moreover, when health data are limited, the neighborhood risk map alone is adequate for identifying where and which populations are at risk. These findings will benefit the public health tasks of planning and targeting appropriate interventions even in areas with limited and poor-quality health data. This study not only fills the identified gaps of knowledge in disease mapping but also has a wide range of broader impacts. The findings improve and enhance the use of the adaptive KDE method in health research, provide better awareness and understanding of disease mapping methods, and offer an alternative method to identify populations at risk in areas with limited health data. Overall, these findings will benefit public health practitioners and health researchers as well as enhance disease surveillance systems.
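For illustration only, the sketch below shows the general idea of a KDE-based disease rate surface whose smoothness depends on a chosen bandwidth, using a fixed-bandwidth Gaussian KDE as a stand-in for the adaptive estimator studied in the thesis; the case and population data, grid, and bandwidth values are all assumptions, not the author's data or code.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical point data: x/y coordinates of disease cases and of the population at risk.
rng = np.random.default_rng(0)
population = rng.uniform(0, 10, size=(2, 500))      # shape (2, n), as gaussian_kde expects
cases = population[:, rng.choice(500, size=60, replace=False)]

def kde_rate_surface(cases, population, bw):
    """Ratio-of-densities disease rate surface on a regular grid.

    `bw` plays the role of the smoothing threshold discussed in the thesis:
    small values preserve local detail, large values oversmooth.
    """
    kde_cases = gaussian_kde(cases, bw_method=bw)
    kde_pop = gaussian_kde(population, bw_method=bw)
    gx, gy = np.meshgrid(np.linspace(0, 10, 100), np.linspace(0, 10, 100))
    grid = np.vstack([gx.ravel(), gy.ravel()])
    # Relative risk: case density divided by population density.
    rate = kde_cases(grid) / np.maximum(kde_pop(grid), 1e-12)
    return rate.reshape(gx.shape)

# Compare two candidate smoothing parameters, as a bandwidth selector might.
for bw in (0.15, 0.5):
    surface = kde_rate_surface(cases, population, bw)
    print(f"bw={bw}: rate surface range {surface.min():.2f}-{surface.max():.2f}")
```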
|
422 |
Integrative analysis of data from multiple experiments. Ronen, Jonathan, 22 July 2020 (has links)
The development of high throughput sequencing (HTS) was followed by a swarm of protocols utilizing HTS to measure different molecular aspects such as gene expression (transcriptome), DNA methylation (methylome) and more. This opened opportunities for the development of data analysis algorithms and procedures that consider data produced by different experiments. Considering data from seemingly unrelated experiments is particularly beneficial for single-cell RNA sequencing (scRNA-seq). scRNA-seq produces particularly noisy data, due to the loss of nucleic acids when handling the small amounts in single cells, and various technical biases. To address these challenges, I developed a method called netSmooth, which de-noises and imputes scRNA-seq data by applying network diffusion over a gene network which encodes expectations of co-expression patterns. The gene network is constructed from other experimental data. Using a gene network constructed from protein-protein interactions, I show that netSmooth outperforms other state-of-the-art scRNA-seq imputation methods at the identification of blood cell types in hematopoiesis, the elucidation of time series data in an embryonic development dataset, and the identification of the tumor of origin for scRNA-seq of glioblastomas. netSmooth has a free parameter, the diffusion distance, which I show can be selected using data-driven metrics. Thus, netSmooth may be used even in cases where the diffusion distance cannot be optimized explicitly using ground-truth labels.
Another task which requires in-tandem analysis of data from different experiments arises when different omics protocols are applied to the same biological samples. Analyzing such multi-omics data in an integrated fashion, rather than each data type (RNA-seq, DNA-seq, etc.) on its own, is beneficial, as each omics experiment only elucidates part of an integrated cellular system, and a simultaneous analysis may reveal a more comprehensive view. To find latent factor representations of multi-omics data, I developed a method called maui.
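As a rough sketch of the network-diffusion idea behind netSmooth (not the package's actual implementation), the snippet below smooths a genes-by-cells expression matrix over a gene network by a random-walk-with-restart style propagation; the toy gene network, expression values, dropout entry, and restart parameter `alpha` are all illustrative assumptions.

```python
import numpy as np

def network_smooth(expr, adjacency, alpha=0.5, n_iter=50):
    """Diffuse expression over a gene network (genes x cells matrix `expr`).

    `adjacency` is a symmetric genes x genes matrix (e.g. built from
    protein-protein interactions); `alpha` controls the diffusion distance:
    alpha=0 returns the raw data, larger values borrow more from neighbours.
    """
    # Degree-normalise the adjacency so each column sums to one.
    deg = adjacency.sum(axis=0)
    A = adjacency / np.maximum(deg, 1e-12)
    smoothed = expr.copy()
    for _ in range(n_iter):  # iterate x_{t+1} = alpha * A @ x_t + (1 - alpha) * x_0
        smoothed = alpha * (A @ smoothed) + (1 - alpha) * expr
    return smoothed

# Tiny illustrative example: 4 genes, 3 cells, with one dropout (zero) entry.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
expr = np.array([[5., 0., 4.],   # gene 1: middle cell looks like a dropout
                 [4., 6., 5.],
                 [3., 5., 4.],
                 [0., 1., 0.]])
print(network_smooth(expr, adj, alpha=0.5).round(2))
```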
|
423 |
Confidence bands in quantile regression and generalized dynamic semiparametric factor models. Song, Song, 01 November 2010 (has links)
In many applications it is necessary to know the stochastic fluctuation of the maximal deviations of nonparametric quantile estimates, e.g., to check various parametric models. Uniform confidence bands are therefore constructed for nonparametric quantile estimates of regression functions. The first method is based on strong approximations of the empirical process and extreme value theory. The strong uniform consistency rate is also established under general conditions. The second method is based on the bootstrap resampling method. It is proved that the bootstrap approximation provides a substantial improvement. The case of multidimensional and discrete regressor variables is dealt with using a partial linear model. A labor market analysis is provided to illustrate the method. High dimensional time series which reveal nonstationary and possibly periodic behavior occur frequently in many fields of science, e.g., macroeconomics, meteorology, medicine and financial engineering. A common approach is to split the modeling of high dimensional time series into the time propagation of low dimensional time series and high dimensional time-invariant functions, via dynamic factor analysis. We propose a two-step estimation procedure. In the first step, we detrend the time series by incorporating a time basis selected by a group Lasso-type technique and choose the space basis based on smoothed functional principal component analysis. We show properties of this estimator under the dependent scenario. In the second step, we obtain the detrended low dimensional (stationary) stochastic process.
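A minimal illustration of the second (bootstrap) approach to a uniform band, assuming synthetic data and a simple kernel-weighted quantile estimator rather than the estimators analysed in the thesis:

```python
import numpy as np

def kernel_quantile(x0, x, y, tau=0.5, h=0.3):
    """Local quantile estimate at x0: the tau-quantile of y, with observations
    weighted by a Gaussian kernel in x (a crude nonparametric estimator)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    order = np.argsort(y)
    cw = np.cumsum(w[order]) / w.sum()
    return y[order][np.searchsorted(cw, tau)]

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 300)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=300)   # assumed model
grid = np.linspace(0.05, 0.95, 30)
fit = np.array([kernel_quantile(g, x, y) for g in grid])

# Bootstrap the maximal deviation to get a uniform (not pointwise) band.
B, dev = 200, []
for _ in range(B):
    idx = rng.integers(0, len(x), len(x))
    boot = np.array([kernel_quantile(g, x[idx], y[idx]) for g in grid])
    dev.append(np.max(np.abs(boot - fit)))
half_width = np.quantile(dev, 0.95)
print("uniform 95% band:", fit - half_width, fit + half_width, sep="\n")
```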
|
424 |
Geometric approach to multi-scale 3D gesture comparison. Ochoa Mayorga, Victor Manuel, 11 1900 (has links)
The present dissertation develops an invariant framework for 3D gesture comparison studies. 3D gesture comparison without Lagrangian models is challenging not only because of the lack of prediction provided by physics, but also because of the dual geometry representation, spatial dimensionality, and non-linearity associated with 3D kinematics.
In 3D spaces, it is difficult to compare curves without an alignment operator, since discrete curves are typically not synchronized and do not share a common point in space. One has to assume that each trajectory in space is unique. The common answer is to assess the similarity between two or more trajectories by estimating an average distance error from the aligned curves, provided that an alignment operator can be found.
In order to avoid the alignment problem, the method uses differential geometry for position and orientation curves. Differential geometry not only reduces the spatial dimensionality but also achieves view invariance. However,
the nonlinear signatures may be unbounded or singular. Yet, it is shown that pattern recognition between intrinsic signatures using correlations is robust for position and orientation alike.
A new mapping for orientation sequences is introduced in order to treat quaternion and Euclidean intrinsic signatures alike. The new mapping projects a 4D-hyper-sphere for orientations onto a 3D-Euclidean volume. The projection uses the quaternion invariant distance to map rotation sequences into 3D-Euclidean curves. However, quaternion spaces are sectional discrete spaces.
The significance is that continuous rotation functions can only be approximated for small angles. Rotation sequences with large angle variations can only be interpolated in discrete sections.
The current dissertation introduces two multi-scale approaches that improve numerical stability and bound the signal energy content of the intrinsic signatures. The first is a multilevel least squares curve fitting method similar to the Haar wavelet. The second is a geodesic distance anisotropic kernel filter.
The methodology is tested on 3D gestures for obstetrics training. The study quantitatively assesses the process of skill acquisition and transfer for obstetric forceps manipulation gestures. The results show that the multi-scale correlations of intrinsic signatures track and evaluate gesture differences between experts and trainees.
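As a simplified sketch of comparing two 3D trajectories through intrinsic (view-invariant) signatures rather than point-wise alignment, the code below estimates a discrete curvature signature for each curve and correlates the signatures; it is only a rough analogue of the multi-scale machinery developed in the thesis, and the sample gestures are assumptions.

```python
import numpy as np

def curvature_signature(points):
    """Discrete curvature along a 3D polyline (N x 3 array).

    Curvature is view-invariant: it does not change under rotation or
    translation of the trajectory, so no alignment operator is needed.
    """
    d1 = np.gradient(points, axis=0)    # first derivative (velocity)
    d2 = np.gradient(d1, axis=0)        # second derivative (acceleration)
    cross = np.cross(d1, d2)
    speed = np.linalg.norm(d1, axis=1)
    return np.linalg.norm(cross, axis=1) / np.maximum(speed ** 3, 1e-12)

def signature_correlation(a, b):
    """Normalised correlation between two equal-length signatures."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

# Two hypothetical gestures: the same helix, one rotated arbitrarily in space.
t = np.linspace(0, 4 * np.pi, 200)
gesture_a = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])
R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)   # 90-degree rotation
gesture_b = gesture_a @ R.T
print(signature_correlation(curvature_signature(gesture_a),
                            curvature_signature(gesture_b)))  # close to 1.0
```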
|
425 |
Geometric approach to multi-scale 3D gesture comparison. Ochoa Mayorga, Victor Manuel, Unknown Date
No description available.
|
426 |
Changements dans la répartition des décès selon l'âge : une approche non paramétrique pour l'étude de la mortalité adulte. Ouellette, Nadine, 03 1900 (has links)
Over the course of the last century, we have witnessed major improvements in the level of mortality in regions all across the globe, in particular in developed countries. This remarkable mortality decrease has also been characterized by fundamental changes in the mortality profile by age: deaths are no longer occurring mainly at very young ages but rather at advanced ages, such as above age 65. Our research focuses on monitoring and understanding historical changes in the age-at-death distribution among the elderly population. We propose a new flexible nonparametric smoothing approach based on P-splines leading to detailed mortality representations, as described by the actual data. The results are presented in three scientific papers, which rest upon reliable data taken from the Human Mortality Database, the Canadian Human Mortality Database, and the Registre de la population du Québec ancien. Findings from the first paper suggest that some low-mortality countries may have recently reached the end of the old-age compression of mortality era, during which deaths among the elderly population tend to concentrate in a progressively shorter age interval over time. Indeed, since the early 1990s in Japan, the modal age at death has continued to increase while reductions in the variability of age at death above the mode have stopped. Thus, the distribution of age at death at older ages has been sliding towards higher ages without changing its shape. In France and Canada, women have shown such developments since the early 2000s, whereas men are still engaged in an old-age mortality compression regime. In the USA, the picture for the latest decade is worrying because for several consecutive years in that timeframe, women and men have both recorded substantial declines in their modal age at death, which corresponds to the most common age at death among adults. The second paper looks within national boundaries and examines regional adult mortality differentials in Canada between 1930 and 2007. Smoothed mortality surfaces reveal that provincial disparities among adults in general, and among the elderly population in particular, are substantial in this country and deserve to be monitored closely. More specifically, based on time trends in the modal age at death and the standard deviation above the mode, provincial disparities at older ages have barely narrowed during the period studied, despite the great mortality improvements recorded in all provinces since the early XXth century. Also, we find that the women who have reached the end of the old-age compression of mortality era in Canada are precisely those of the Western and Central provinces. The last paper focuses on adult longevity during the XVIIIth century in historical Quebec and provides new insight into the most common adult age at death. Our analysis reveals that the modal age at death increased among French-Canadian adults between 1740-1754 and 1785-1799. In 1740-1754, it was estimated at 73 years among females and at about 70 years among males. By 1785-1799, modal age at death estimates were almost 3 years higher for females and 4 years higher for males. The specific living conditions of the French-Canadian population at the time could explain these results.
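The thesis relies on P-spline smoothing of death counts; as a simplified stand-in (a discrete Whittaker-type difference penalty applied directly to the ages rather than to B-spline coefficients), the sketch below smooths a hypothetical age-at-death distribution and reads off the modal age at death above age 10, the quantity tracked in all three papers. The death counts and penalty value are assumptions.

```python
import numpy as np

def penalized_smooth(y, lam=100.0, order=2):
    """Minimise ||y - z||^2 + lam * ||D z||^2, a discrete analogue of the
    P-spline difference penalty; solves (I + lam * D'D) z = y."""
    n = len(y)
    D = np.diff(np.eye(n), n=order, axis=0)   # order-th difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

ages = np.arange(0, 111)
rng = np.random.default_rng(2)
# Hypothetical adult death counts: a noisy bump centred near age 84.
deaths = 1000 * np.exp(-0.5 * ((ages - 84) / 9.0) ** 2) + rng.poisson(30, ages.size)

smooth = penalized_smooth(deaths.astype(float))
adult = ages >= 10
modal_age = ages[adult][np.argmax(smooth[adult])]
print("smoothed modal age at death:", modal_age)
```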
|
427 |
[en] ANALYSIS TECHNIQUES FOR CONTROLLING ELECTRIC POWER FOR HIGH FREQUENCY DATA: APPLICATION TO THE LOAD FORECASTING / [pt] ANÁLISE DE TÉCNICAS PARA CONTROLE DE ENERGIA ELÉTRICA PARA DADOS DE ALTA FREQUÊNCIA: APLICAÇÃO À PREVISÃO DE CARGA. JULIO CESAR SIQUEIRA, 08 January 2014 (has links)
[en] The objective of this study is to develop a statistical algorithm to predict the power transmitted by the thermoelectric power plant in Linhares, located in Espírito Santo state, measured at the entry point of the regional utility grid, to be integrated into a platform composed of a real-time supervisory system running on MS Windows. To this end, ARIMA(p,d,q) models, regression using orthogonal polynomials, and exponential smoothing techniques were compared to identify the approach best suited to producing five-steps-ahead forecasts. The data used are observations recorded every 5 minutes; however, the target is to produce these forecasts for observations recorded every 5 seconds. The estimated residuals of the fitted model were analysed via control charts to check the stability of the process. The forecasts produced will be used to support the plant operators' decisions in real time, so as to avoid exceeding the 200,000 kW limit for more than fifteen minutes.
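A minimal sketch of the kind of comparison described, assuming a synthetic 5-minute load series and using statsmodels implementations of exponential smoothing and ARIMA(p,d,q); the series, model orders, and threshold check are illustrative stand-ins, not the algorithm actually deployed at the plant.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
# Hypothetical transmitted-power series (kW), one observation every 5 minutes.
n = 500
load = 185_000 + np.cumsum(rng.normal(0, 300, n)) + 2_000 * np.sin(np.arange(n) / 50)

horizon = 5  # five-steps-ahead forecasts, as in the study

es_fit = ExponentialSmoothing(load, trend="add").fit()
es_forecast = es_fit.forecast(horizon)

arima_fit = ARIMA(load, order=(2, 1, 1)).fit()
arima_forecast = arima_fit.forecast(horizon)

# Residual-based 3-sigma limits, a crude stand-in for the control charts.
resid = load - es_fit.fittedvalues
print("exp. smoothing 5-step forecast:", np.round(es_forecast))
print("ARIMA(2,1,1)   5-step forecast:", np.round(arima_forecast))
print("residual 3-sigma half-width:", round(3 * resid.std()))
print("any forecast above the 200,000 kW limit?",
      bool((es_forecast > 200_000).any() or (arima_forecast > 200_000).any()))
```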
|
428 |
Métodos geoestatísticos de co-estimativas: estudo do efeito da correlação entre variáveis na precisão dos resultados / Co-estimation geostatistical methods: a study of the correlation between variables at results precision. Watanabe, Jorge, 29 February 2008 (has links)
This master's dissertation presents the results of a survey into co-estimation methods commonly used in geostatistics. These methods are ordinary cokriging, collocated cokriging, and kriging with an external drift. In addition, ordinary kriging was considered just to illustrate how it works when the primary variable is poorly sampled. As we know, co-estimation methods depend on a secondary variable sampled over the estimation domain. Moreover, this secondary variable should present linear correlation with the main (primary) variable. Usually the primary variable is poorly sampled, whereas the secondary variable is known over the entire estimation domain. For instance, in oil exploration the primary variable is porosity, as measured on rock samples gathered from drill holes, and the secondary variable is seismic amplitude derived from processing seismic reflection data. It is important to mention that the primary and secondary variables must present some degree of correlation. However, we do not know how these methods perform depending on the correlation coefficient. That is the question. Thus, we have tested co-estimation methods on several data sets presenting different degrees of correlation. These data sets were generated by computer using data transformation algorithms. Five correlation values were considered in this study: 0.993, 0.870, 0.752, 0.588 and 0.461. Collocated simple cokriging was the best method among all tested. This method has an internal filter applied when computing the weight for the secondary variable, which in turn depends on the correlation coefficient. In fact, the greater the correlation coefficient, the greater the weight of the secondary variable. This means the method works even when the correlation coefficient between the primary and secondary variables is low. This is the most impressive result of this research.
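A compact numerical illustration (not the software used in the dissertation) of the point made above: under a Markov-type cross-covariance model with unit variances, the collocated secondary datum's weight in simple cokriging grows with the correlation coefficient. The covariance model, data locations, and values are assumptions.

```python
import numpy as np

def collocated_cokriging_weights(x_data, x0, rho, range_=5.0):
    """Simple collocated cokriging weights under a Markov-1 style model with
    unit variances: C11(h) = exp(-|h|/range), C12(h) = rho * C11(h), C22(0) = 1."""
    def c11(h):
        return np.exp(-np.abs(h) / range_)

    n = len(x_data)
    lhs = np.empty((n + 1, n + 1))
    # Primary-primary covariances.
    lhs[:n, :n] = c11(x_data[:, None] - x_data[None, :])
    # Primary-secondary (secondary datum collocated at the estimation point x0).
    lhs[:n, n] = lhs[n, :n] = rho * c11(x_data - x0)
    lhs[n, n] = 1.0
    rhs = np.append(c11(x_data - x0), rho)   # covariances with the unknown
    return np.linalg.solve(lhs, rhs)

x_data = np.array([1.0, 4.0, 9.0])   # hypothetical primary sample locations
x0 = 5.0                              # estimation location, with a secondary datum
for rho in (0.993, 0.752, 0.461):
    w = collocated_cokriging_weights(x_data, x0, rho)
    print(f"rho={rho}: collocated secondary weight = {w[-1]:.3f}")
```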
|
429 |
Métodos geoestatísticos de co-estimativas: estudo do efeito da correlação entre variáveis na precisão dos resultados / Co-estimation geostatistical methods: a study of the correlation between variables at results precision. Jorge Watanabe, 29 February 2008 (has links)
|
430 |
Advancing Optimal Control Theory Using Trigonometry For Solving Complex Aerospace Problems. Kshitij Mall (5930024), 17 January 2019 (has links)
Optimal control theory (OCT) has existed since the 1950s. However, with the advent of modern computers, the design community delegated the task of solving optimal control problems (OCPs) largely to computationally intensive direct methods instead of methods that use OCT. Some recent work showed that solvers using OCT could leverage parallel computing resources for faster execution. The need for near real-time, high-quality solutions to OCPs has therefore renewed interest in OCT in the design community. However, certain challenges still exist that prohibit its use for solving complex practical aerospace problems, such as landing human-class payloads safely on Mars.

In order to advance OCT, this thesis introduces the Epsilon-Trig regularization method to simply and efficiently solve bang-bang and singular control problems. The Epsilon-Trig method resolves the issues pertaining to the traditional smoothing regularization method. Benchmark problems from the literature, including the Van der Pol oscillator, the boat problem, and the Goddard rocket problem, were used to verify and validate the Epsilon-Trig regularization method against GPOPS-II.

This study also develops the use of trigonometry for incorporating control bounds and mixed state-control constraints into OCPs, and terms this Trigonometrization. Results from the literature and GPOPS-II verified and validated the Trigonometrization technique on benchmark OCPs. Unlike traditional OCT, Trigonometrization converts the constrained OCP into a two-point boundary value problem rather than a multi-point boundary value problem, significantly reducing the computational effort required to formulate and solve it. This work uses Trigonometrization to solve complex aerospace problems including prompt global strike, noise minimization for general aviation, the shuttle re-entry problem, and the g-load constraint problem for an impactor. Future work for this thesis includes the development of the Trigonometrization technique for OCPs with pure state constraints.
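As a purely illustrative sketch of the general idea behind trigonometric control parameterization (not the author's exact Epsilon-Trig or Trigonometrization formulation), the snippet below re-expresses a bounded control through a sine map, so that any value of the unconstrained variable automatically respects the bounds and the bound is approached smoothly rather than hit with a discontinuous switch.

```python
import numpy as np

def trig_control(theta, u_min=-1.0, u_max=1.0):
    """Map an unconstrained variable theta to a control u in [u_min, u_max]
    via a sine re-parameterization; the bounds can never be violated."""
    return 0.5 * (u_max + u_min) + 0.5 * (u_max - u_min) * np.sin(theta)

theta = np.linspace(-5.0, 5.0, 201)
u = trig_control(theta)
print("control stays within bounds:", bool(u.min() >= -1.0 and u.max() <= 1.0))

# Near a bound, du/dtheta -> 0, so a smoothed 'bang-bang' profile saturates
# gently instead of switching discontinuously (the effect regularization seeks).
print("du/dtheta at the bound (theta = pi/2):",
      0.5 * (1.0 - (-1.0)) * np.cos(np.pi / 2))
```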
|