21 |
Transformation model selection by multiple hypotheses testing. Lehmann, Rüdiger. January 2014.
Transformations between different geodetic reference frames are usually performed by first determining the transformation parameters from control points. If it is not known in advance which of the numerous transformation models is appropriate, a multiple hypotheses test can be set up. The paper extends the common method of testing transformation parameters for significance to the case where constraints on such parameters are also tested. This provides more flexibility when setting up such a test: one can formulate a general model with the maximum number of transformation parameters and specialize it by adding constraints on those parameters that need to be tested. The proper test statistic in a multiple test is shown to be either the extreme normalized or the extreme studentized Lagrange multiplier, and these are shown to outperform the more intuitive test statistics derived from misclosures. It is also shown how model selection by multiple hypotheses testing relates to the use of information criteria such as AICc and Mallows' Cp, which follow an information-theoretic approach. Nevertheless, wherever the approaches are comparable, the results of an exemplary computation almost coincide.
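As a hedged illustration of the kind of setup the abstract describes (generic Gauss–Markov notation assumed here, not taken from the paper): the general transformation model is specialized by linear constraints on the parameter vector, and the multiple test is driven by the estimated Lagrange multipliers of those constraints,

\[
\ell = A\xi + e,\qquad e \sim (0,\ \sigma^2 P^{-1}),\qquad
\text{specialized model: } H^{\mathsf T}\xi = w,
\]
\[
T_{\mathrm{norm}} = \max_i \frac{|\hat k_i|}{\sigma_{\hat k_i}},
\qquad
T_{\mathrm{stud}} = \max_i \frac{|\hat k_i|}{\hat\sigma_{\hat k_i}},
\]

where \(\hat k\) is the vector of Lagrange multipliers estimated for the constraints and the studentized version replaces the known variance factor by its estimate. The information criteria mentioned have their usual definitions,

\[
\mathrm{AICc} = -2\log \hat L + 2p + \frac{2p(p+1)}{n-p-1},
\qquad
C_p = \frac{\mathrm{SSE}_p}{\hat\sigma^2} - n + 2p,
\]

with \(p\) the number of estimated parameters and \(n\) the number of observations.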
|
22 |
[en] USING LINEAR AND NON-LINEAR APPROACHES TO MODEL THE BRAZILIAN ELECTRICITY SPOT PRICE SERIES / [pt] MODELOS LINEARES E NÃO LINEARES NA MODELAGEM DO PREÇO SPOT DE ENERGIA ELÉTRICA DO BRASIL. LUIZ FELIPE MOREIRA DO AMARAL. 17 July 2003.
[pt] Nesta dissertação, estratégias de modelagem são apresentadas envolvendo modelos de séries temporais lineares e não lineares para modelar a série do preço spot no mercado elétrico brasileiro. Foram usados, dentre os lineares, os modelos ARIMA(p,d,q) propostos por Box, Jenkins e Reinsel (1994) e os modelos de regressão dinâmica. Dentre os não lineares, o modelo escolhido foi o STAR, desenvolvido inicialmente por Chan e Tong (1986) e, posteriormente, por Teräsvirta (1994). Para este modelo, testes do tipo Multiplicador de Lagrange foram usados para testar linearidade, bem como para avaliar os modelos estimados. Além disso, foi também utilizada uma proposta para os valores iniciais do algoritmo de otimização, desenvolvida por Franses e Dijk (2000). Estimativas do filtro de Kalman suavizado foram usadas para substituir os valores da série de preço durante o racionamento de energia ocorrido no Brasil. / [en] In this dissertation, modeling strategies involving linear and non-linear time series models are presented to model the spot price series of the Brazilian electricity market. Among the linear models, the ARIMA(p,d,q) approach of Box, Jenkins and Reinsel (1994) and dynamic regression models were used. Among the non-linear ones, the chosen model was the STAR, developed initially by Chan and Tong (1986) and later by Teräsvirta (1994). For this model, Lagrange multiplier tests were used to test for linearity in the series, as well as to evaluate the estimated models. Moreover, a proposal by Franses and Dijk (2000) for the initial values of the optimization algorithm was also used. Smoothed Kalman filter estimates were used to replace the values of the price series during the energy rationing that occurred in Brazil.
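As a hedged illustration of the model class and test referred to above (generic two-regime logistic STAR notation, not necessarily the dissertation's exact specification):

\[
y_t = \phi_1' x_t + \bigl(\phi_2' x_t\bigr)\, G(s_t;\gamma,c) + \varepsilon_t,
\qquad
G(s_t;\gamma,c) = \bigl(1 + \exp\{-\gamma (s_t - c)\}\bigr)^{-1},
\]

where \(x_t\) collects an intercept and lags of \(y_t\) and \(s_t\) is the transition variable. Linearity (\(\gamma = 0\)) can be tested with a Lagrange multiplier statistic computed from the auxiliary regression obtained by a Taylor expansion of \(G\), in the spirit of Teräsvirta (1994),

\[
y_t = \beta_0' x_t + \beta_1' x_t s_t + \beta_2' x_t s_t^2 + \beta_3' x_t s_t^3 + u_t,
\qquad
H_0:\ \beta_1 = \beta_2 = \beta_3 = 0,
\]

so only the linear model has to be estimated under the null.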
|
23 |
Numerical analysis of some saddle point formulation with X-FEM type approximation on cracked or fictitious domains / Analyse numérique d'une certaine formulation du col avec une approximation de type X-FEM sur des domaines fissurés ou fictifs. Amdouni, Saber. 31 January 2013.
Ce mémoire de thèse a été réalisé dans le cadre d'une collaboration scientifique avec "La Manufacture Française des Pneumatiques Michelin". Il porte sur l'analyse mathématique et numérique de la convergence et de la stabilité de formulations mixtes ou hybrides de problèmes d'optimisation sous contrainte avec la méthode des multiplicateurs de Lagrange et dans le cadre de la méthode des éléments finis étendus (XFEM). Tout d'abord, nous essayons de démontrer la stabilité de la discrétisation X-FEM pour le problème d'élasticité linéaire incompressible en statique. Le deuxième axe, qui représente le contenu principal de la thèse, est dédié à l'étude de certaines méthodes de multiplicateurs de Lagrange stabilisées. La particularité de ces méthodes est que la stabilité du multiplicateur est assurée par l'ajout de termes supplémentaires dans la formulation faible. Dans ce contexte, nous commençons par l'étude de la méthode de stabilisation de Barbosa-Hughes appliquée au problème de contact unilatéral sans frottement avec XFEM cut-off. Ensuite, nous construisons une nouvelle méthode basée sur des techniques de projection locale pour stabiliser un problème de Dirichlet dans le cadre de X-FEM et d'une approche de type domaine fictif. Nous faisons aussi une étude comparative entre la stabilisation par projection locale et la stabilisation de Barbosa-Hughes. Enfin, nous appliquons cette nouvelle méthode de stabilisation aux problèmes de contact unilatéral en élastostatique avec frottement de Tresca dans le cadre de X-FEM. / This Ph.D. thesis was done in collaboration with "La Manufacture Française des Pneumatiques Michelin". It concerns the mathematical and numerical analysis of the convergence and stability of mixed or hybrid formulations of constrained optimization problems with the Lagrange multiplier method in the framework of the eXtended Finite Element Method (XFEM). First, we attempt to prove the stability of the X-FEM discretization for the incompressible elastostatics problem by ensuring an LBB (inf-sup) condition. The second axis, which presents the main content of the thesis, is dedicated to the study of some stabilized Lagrange multiplier methods. The particularity of these stabilized methods is that the stability of the multiplier is provided by adding supplementary terms to the weak formulation. In this context, we study the Barbosa-Hughes stabilization technique applied to the frictionless unilateral contact problem with an XFEM cut-off. Then we present a new consistent method based on local projections for the stabilization of a Dirichlet condition in the framework of the extended finite element method with a fictitious domain approach. Moreover, we make a comparative study between the local projection stabilization and the Barbosa-Hughes stabilization. Finally, we use the local projection stabilization to approximate the two-dimensional linear elastostatics unilateral contact problem with Tresca friction in the framework of the eXtended Finite Element Method (X-FEM).
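As a hedged sketch of the kind of stabilized multiplier formulation discussed above (a generic Poisson/Dirichlet model problem with illustrative notation; the thesis treats elasticity and contact, so its exact forms differ), one common Barbosa-Hughes-type variant reads: find \((u_h,\lambda_h)\) such that, for all test functions \((v_h,\mu_h)\),

\[
\int_\Omega \nabla u_h \cdot \nabla v_h \, dx + \int_\Gamma \lambda_h v_h \, ds = \int_\Omega f v_h \, dx,
\]
\[
\int_\Gamma \mu_h u_h \, ds \;-\; \gamma \int_\Gamma h \,\bigl(\lambda_h + \partial_n u_h\bigr)\bigl(\mu_h + \partial_n v_h\bigr)\, ds
= \int_\Gamma \mu_h g \, ds,
\]

where \(u = g\) on \(\Gamma\) is the condition being imposed, \(\lambda \approx -\partial_n u\) is the multiplier, \(h\) is the local mesh size and \(\gamma > 0\) a stabilization parameter. The extra term is consistent (it vanishes for the exact solution) and restores an inf-sup-type stability that the plain multiplier discretization may lack; a local projection stabilization instead penalizes the fluctuation of \(\lambda_h\) with respect to a local projection.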
|
24 |
Mortar finite element method for cell response to applied electric field. Pérez, Cesar Augusto Conopoima. 25 October 2017.
A resposta passiva e ativa de uma célula biológica a um campo elétrico é estudada aplicando um Método de Elementos Finitos Mortar (MEFM). A resposta de uma célula é um processo com duas escalas temporais, a primeira na escala de microssegundos, para a polarização da célula, e a segunda na escala de milissegundos, para a resposta ativa devido à dinâmica complexa das correntes nos canais iônicos da membrana celular. O modelo matemático para descrever a dinâmica da resposta celular é baseado na lei de conservação de corrente elétrica em um meio condutor. Introduzindo uma variável adicional, conhecida como multiplicador de Lagrange, definida na interface da célula, o problema de valor de fronteira associado à conservação de corrente elétrica é desacoplado do problema de valor inicial associado à resposta passiva e ativa da célula. O método proposto permite resolver o problema da distribuição de potencial elétrico em um arranjo geométrico arbitrário de células. Com o objetivo de validar a metodologia apresentada, a convergência espacial do método é numericamente investigada e as soluções aproximada e exata que descrevem a polarização de uma célula são comparadas. Finalmente, para demonstrar a efetividade do método, a resposta ativa a um campo elétrico aplicado em um arranjo de células de geometria arbitrária é investigada. / The passive and active response of a biological cell to an applied electric field is investigated with a Mortar Finite Element Method (MFEM). The cell response is a process with two different time scales: one on the scale of microseconds, for the cell polarization, and the other on the scale of milliseconds, for the active response of the cell due to the complex dynamics of the ion-channel currents on the cell membrane. The mathematical model describing the dynamics of the cell response is based on the conservation law of electric current in a conductive medium. By introducing an additional variable, known as a Lagrange multiplier, defined on the cell interface, the boundary value problem associated with the conservation of electric current is decoupled from the initial value problem associated with the passive and active response of the cell. The proposed method makes it possible to solve for the electric potential distribution in arbitrary cell geometries and arrangements. In order to validate the presented methodology, the h-convergence order of the MFEM is numerically investigated, and the numerical and exact solutions describing cell polarization are compared. Finally, to demonstrate the effectiveness of the method, the active response to an applied electric field in cell clusters and in cells with arbitrary geometry is investigated.
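As a hedged sketch of the type of model the abstract describes (the standard cell-in-a-conductive-medium equations in generic notation; the thesis's exact formulation and sign conventions may differ): current conservation holds in the intra- and extracellular media, and the membrane dynamics enter through an interface condition coupling the two,

\[
\nabla\cdot(\sigma_i \nabla \phi_i) = 0 \ \text{in } \Omega_i,
\qquad
\nabla\cdot(\sigma_e \nabla \phi_e) = 0 \ \text{in } \Omega_e,
\]
\[
\sigma_i \nabla\phi_i\cdot n = \sigma_e \nabla\phi_e\cdot n \ \text{on } \Gamma,
\qquad
C_m \frac{\partial v_m}{\partial t} + I_{\mathrm{ion}}(v_m, w) = -\,\sigma_i \nabla\phi_i\cdot n \ \text{on } \Gamma,
\qquad
v_m = \phi_i - \phi_e,
\]

with \(n\) the unit normal pointing out of the cell, \(C_m\) the membrane capacitance, \(I_{\mathrm{ion}}\) the ionic current governed by gating variables \(w\), and \(v_m\) the transmembrane potential. The interface current density plays the role of the Lagrange multiplier mentioned above: once it is introduced as an unknown on \(\Gamma\), the elliptic boundary value problem for \(\phi\) and the stiff initial value problem for \((v_m, w)\) can be advanced in a decoupled way, which reflects the microsecond (polarization) and millisecond (ion-channel) time scales.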
|
25 |
Modélisation multivariée hétéroscédastique et transmission financière / Multivariate heteroskedastic modelling and financial transmission. Sanhaji, Bilel. 02 December 2014.
Cette thèse de doctorat, composée de trois chapitres, contribue au développement de tests statistiques et à l'analyse de la transmission financière dans un cadre multivarié hétéroscédastique. Le premier chapitre propose deux tests du multiplicateur de Lagrange de constance des corrélations conditionnelles dans les modèles GARCH multivariés. Si l'hypothèse nulle repose sur des corrélations conditionnelles constantes, l'hypothèse alternative propose une première spécification basée sur des réseaux de neurones artificiels et une seconde représentée par une forme fonctionnelle inconnue qui est linéarisée à l'aide d'un développement de Taylor. Dans le deuxième chapitre, un nouveau modèle est introduit dans le but de tester la non-linéarité des (co)variances conditionnelles. Si l'hypothèse nulle repose sur une fonction linéaire des innovations retardées au carré et des (co)variances conditionnelles, l'hypothèse alternative se caractérise quant à elle par une fonction de transition non-linéaire : exponentielle ou logistique ; une configuration avec effets de levier est également proposée. Dans les deux premiers chapitres, les expériences de simulation et les illustrations empiriques montrent les bonnes performances de nos tests de mauvaise spécification. Le dernier chapitre étudie la transmission d'information en séance et hors séance de cotation en termes de rendements et de volatilités entre la Chine, l'Amérique et l'Europe. Le problème d'asynchronicité est considéré avec soin dans la modélisation bivariée avec la Chine comme référence. / This Ph.D. thesis, composed of three chapters, contributes to the development of test statistics and to the analysis of financial transmission in a multivariate heteroskedastic framework. The first chapter proposes two Lagrange multiplier tests of the constancy of conditional correlations in multivariate GARCH models. While the null hypothesis is based on constant conditional correlations, the alternative hypothesis proposes a first specification based on artificial neural networks and a second specification based on an unknown functional form linearised by a Taylor expansion. In the second chapter, a new model is introduced in order to test for nonlinearity in conditional (co)variances. While the null hypothesis is based on a linear function of the lagged squared innovations and the conditional (co)variances, the alternative hypothesis is characterised by a nonlinear exponential or logistic transition function; a configuration with leverage effects is also proposed. In the first two chapters, simulation experiments and empirical illustrations show the good performance of our misspecification tests. The last chapter studies daytime and overnight information transmission, in terms of returns and volatilities, between China, America and Europe. The asynchronicity issue is carefully considered in the bivariate modelling, with China as the benchmark.
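As a hedged sketch of the null model underlying the first chapter's tests (standard constant-conditional-correlation GARCH notation, not necessarily the thesis's own): under the null the conditional covariance matrix factors through a constant correlation matrix,

\[
\varepsilon_t \mid \mathcal{F}_{t-1} \sim (0, H_t),
\qquad
H_t = D_t R D_t,
\qquad
D_t = \operatorname{diag}\bigl(h_{1t}^{1/2}, \dots, h_{Nt}^{1/2}\bigr),
\]

where each \(h_{it}\) follows a univariate GARCH recursion and \(R\) is the constant conditional correlation matrix. The alternative lets the correlations vary over time through a neural-network-based or Taylor-linearised functional form \(R_t(\theta)\), and the Lagrange multiplier statistic is computed from the score with respect to the additional parameters \(\theta\) evaluated under \(H_0: R_t = R\), so only the constant-correlation model has to be estimated.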
|
26 |
Robust Subspace Estimation Using Low-rank Optimization: Theory and Applications in Scene Reconstruction, Video Denoising, and Activity Recognition. Oreifej, Omar. 01 January 2013.
In this dissertation, we discuss the problem of robust linear subspace estimation using low-rank optimization and propose three formulations of it. We demonstrate how these formulations can be used to solve fundamental computer vision problems and provide superior performance in terms of accuracy and running time. Consider a set of observations extracted from images (such as pixel gray values, local features, trajectories, etc.). If the assumption that these observations are drawn from a linear subspace (or can be linearly approximated) is valid, then the goal is to represent each observation as a linear combination of a compact basis while maintaining a minimal reconstruction error. One of the earliest, yet most popular, approaches to achieve that is Principal Component Analysis (PCA). However, PCA can only handle Gaussian noise, and thus suffers when the observations are contaminated with gross and sparse outliers. To this end, in this dissertation, we focus on estimating the subspace robustly using low-rank optimization, where the sparse outliers are detected and separated through the ℓ1 norm. The robust estimation has a two-fold advantage: first, the obtained basis better represents the actual subspace because it does not include contributions from the outliers; second, the detected outliers are often of specific interest in many applications, as we show throughout this thesis. We demonstrate four different formulations and applications of low-rank optimization.

First, we consider the problem of reconstructing an underwater sequence by removing the turbulence caused by the water waves. The main drawback of most previous attempts to tackle this problem is that they heavily depend on modelling the waves, which in fact is ill-posed since the actual behavior of the waves, along with the imaging process, is complicated and includes several noise components; therefore, their results are not satisfactory. In contrast, we propose a novel approach which outperforms the state of the art. The intuition behind our method is that in a sequence where the water is static, the frames would be linearly correlated. Therefore, in the presence of water waves, we may consider the frames as noisy observations drawn from the subspace of linearly correlated frames. However, the noise introduced by the water waves is not sparse, and thus cannot directly be detected using low-rank optimization. Therefore, we propose a data-driven two-stage approach, where the first stage "sparsifies" the noise and the second stage detects it. The first stage leverages the temporal mean of the sequence to overcome the structured turbulence of the waves through an iterative registration algorithm. The result of the first stage is a high-quality mean and a better-structured sequence; however, the sequence still contains unstructured sparse noise. Thus, we employ a second stage in which we extract the sparse errors from the sequence through rank minimization. Our method converges faster and drastically outperforms the state of the art on all testing sequences.

Secondly, we consider a closely related situation where an independently moving object is also present in the turbulent video. More precisely, we consider video sequences acquired in desert battlefields, where atmospheric turbulence is typically present in addition to independently moving targets. Typical approaches for turbulence mitigation follow averaging or de-warping techniques.
Although these methods can reduce the turbulence, they distort the independently moving objects, which can often be of great interest. Therefore, we address the problem of simultaneous turbulence mitigation and moving object detection. We propose a novel three-term low-rank matrix decomposition approach in which we decompose the turbulence sequence into three components: the background, the turbulence, and the object. We simplify this extremely difficult problem into a minimization of the nuclear norm, the Frobenius norm, and the ℓ1 norm. Our method is based on two observations: first, the turbulence causes dense and Gaussian noise and therefore can be captured by the Frobenius norm, while the moving objects are sparse and thus can be captured by the ℓ1 norm; second, since the object's motion is linear and intrinsically different from the Gaussian-like turbulence, a Gaussian-based turbulence model can be employed to enforce an additional constraint on the search space of the minimization. We demonstrate the robustness of our approach on challenging sequences which are significantly distorted by atmospheric turbulence and include extremely tiny moving objects.

In addition to robustly detecting the subspace of the frames of a sequence, we consider using trajectories as observations in the low-rank optimization framework. In particular, in videos acquired by moving cameras, we track all the pixels in the video and use that to estimate the camera motion subspace. This is particularly useful in activity recognition, which typically requires standard preprocessing steps such as motion compensation, moving object detection, and object tracking. The errors from the motion compensation step propagate to the object detection stage, resulting in missed detections, which further complicates the tracking stage, resulting in cluttered and incorrect tracks. In contrast, we propose a novel approach which does not follow the standard steps, and accordingly avoids the aforementioned difficulties. Our approach is based on Lagrangian particle trajectories, which are a set of dense trajectories obtained by advecting optical flow over time, thus capturing the ensemble motions of a scene. This is done in frames of unaligned video, and no object detection is required. In order to handle the moving camera, we decompose the trajectories into their camera-induced and object-induced components. Having obtained the relevant object motion trajectories, we compute a compact set of chaotic invariant features, which captures the characteristics of the trajectories. Consequently, an SVM is employed to learn and recognize the human actions using the computed motion features. We performed intensive experiments on multiple benchmark datasets and obtained promising results.

Finally, we consider a more challenging problem referred to as complex event recognition, where the activities of interest are complex and unconstrained. This problem typically poses significant challenges because it involves videos of highly variable content, noise, length, frame size, etc. In this extremely challenging task, high-level features have recently shown a promising direction, as in [53, 129], where core low-level events referred to as concepts are annotated and modelled using a portion of the training data, and each event is then described using its content of these concepts. However, because of the complex nature of the videos, both the concept models and the corresponding high-level features are significantly noisy.
In order to address this problem, we propose a novel low-rank formulation which combines the precisely annotated videos used to train the concepts with the rich high-level features. Our approach finds a new representation for each event which is not only low-rank but also constrained to adhere to the concept annotation, thus suppressing the noise and maintaining a consistent occurrence of the concepts in each event. Extensive experiments on the large-scale real-world TRECVID Multimedia Event Detection 2011 and 2012 datasets demonstrate that our approach consistently improves the discriminability of the high-level features by a significant margin.
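As a hedged sketch of the generic low-rank/sparse decomposition underlying these formulations (standard robust-PCA-style notation; the dissertation's exact objectives and weights may differ): stacking the observations as columns of a matrix X, the robust subspace estimate and the sparse outliers are obtained from

\[
\min_{A,\,E}\ \|A\|_* + \lambda \|E\|_1
\quad\text{s.t.}\quad X = A + E,
\]

where \(\|\cdot\|_*\) is the nuclear norm (sum of singular values, a convex surrogate for rank) and the ℓ1 term absorbs gross sparse outliers. The three-term variant described for turbulence mitigation adds a dense Gaussian-like component penalised in the Frobenius norm,

\[
\min_{B,\,T,\,O}\ \|B\|_* + \tau \|T\|_F^2 + \lambda \|O\|_1
\quad\text{s.t.}\quad X = B + T + O,
\]

with B the background, T the turbulence and O the sparse moving object; \(\tau\) and \(\lambda\) are illustrative weights.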
|
27 |
回転軸系の時間領域実験的同定法の開発とその応用に関する研究 / Study on the development of time-domain experimental identification methods for rotating shaft (rotor) systems and their applications. 安田 仁彦, 叶 建瑞, 神谷 恵輔. 03 1900.
Grant-in-Aid for Scientific Research (KAKENHI); research category: Basic Research (C); project number: 10650238; principal investigator: 安田 仁彦; research period: FY 1998–1999.
|