71
O mercado de derivativos cambiais no Brasil e suas tendências. Machado, Marcelo Rocha January 2007 (has links)
Previous issue date: 2008-05-01 / With the adoption of the floating exchange-rate regime in Brazil from 1999 onward, the market for foreign-exchange derivatives developed considerably. The growing demand from companies and financial institutions for currency-hedging products, together with a new global economic landscape, drove this development. This work seeks to identify trends for the Brazilian foreign-exchange derivatives market by estimating parameters through regressions between non-stationary but cointegrated series. An error-correction model is used to produce the forecasts. The results show that the market grows as a function of foreign-trade flows and GDP, that the instruments most used for short- and long-term operations tend to be US dollar futures and currency options, and that, in the future, some other currencies will hold a significant share of the Brazilian market.
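To illustrate the estimation approach described in this abstract (regressions between non-stationary but cointegrated series, followed by an error-correction model for forecasting), here is a minimal Python sketch of the Engle-Granger two-step procedure. The data, variable names and parameter values are hypothetical and are not taken from the thesis.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

# Hypothetical stand-ins: y = log derivatives-market volume, x = log trade-flow / GDP proxy.
rng = np.random.default_rng(0)
n = 200
x = np.cumsum(rng.normal(size=n))            # non-stationary driver (random walk)
y = 0.8 * x + rng.normal(scale=0.5, size=n)  # cointegrated with x by construction

# Step 1 (Engle-Granger): long-run regression, then a unit-root test on its residuals.
long_run = sm.OLS(y, sm.add_constant(x)).fit()
resid = long_run.resid
print("ADF p-value on residuals:", adfuller(resid)[1])  # small p-value suggests cointegration

# Step 2: error-correction model on first differences with the lagged equilibrium error.
dy, dx = np.diff(y), np.diff(x)
ecm_X = sm.add_constant(np.column_stack([dx, resid[:-1]]))
ecm = sm.OLS(dy, ecm_X).fit()
print(ecm.params)  # [constant, short-run effect of dx, error-correction speed]
```

One-step-ahead forecasts are then built by iterating the error-correction equation forward, which is the role the model plays in the abstract's projections.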
72
Signal Processing on Graphs - Contributions to an Emerging Field / Traitement du signal sur graphes - Contributions à un domaine émergent. Girault, Benjamin 01 December 2015 (has links)
Ce manuscrit introduit dans une première partie le domaine du traitement du signal sur graphe en commençant par poser les bases d'algèbre linéaire et de théorie spectrale des graphes. Nous définissons ensuite le traitement du signal sur graphe et donnons des intuitions sur ses forces et faiblesses actuelles comparativement au traitement du signal classique. En seconde partie, nous introduisons nos contributions au domaine. Le chapitre 4 cible plus particulièrement l'étude de la structure d'un graphe par l'analyse des signaux temporels via une transformation graphe vers série temporelle. Ce faisant, nous exploitons une approche unifiée d'apprentissage semi-supervisé sur graphe dédiée à la classification pour obtenir une série temporelle lisse. Enfin, nous montrons que cette approche s'apparente à du lissage de signaux sur graphe. Le chapitre 5 de cette partie introduit un nouvel opérateur de translation sur graphe définit par analogie avec l'opérateur classique de translation en temps et vérifiant la propriété clé d'isométrie. Cet opérateur est comparé aux deux opérateurs de la littérature et son action est décrite empiriquement sur quelques graphes clés. Le chapitre 6 décrit l'utilisation de l'opérateur ci-dessus pour définir la notion de signal stationnaire sur graphe. Après avoir étudié la caractérisation spectrale de tels signaux, nous donnons plusieurs outils essentiels pour étudier et tester cette propriété sur des signaux réels. Le dernier chapitre s'attache à décrire la boite à outils \matlab développée et utilisée tout au long de cette thèse. / This dissertation introduces in its first part the field of signal processing on graphs. We start by reminding the required elements from linear algebra and spectral graph theory. Then, we define signal processing on graphs and give intuitions on its strengths and weaknesses compared to classical signal processing. In the second part, we introduce our contributions to the field. Chapter 4 aims at the study of structural properties of graphs using classical signal processing through a transformation from graphs to time series. Doing so, we take advantage of a unified method of semi-supervised learning on graphs dedicated to classification to obtain a smooth time series. Finally, we show that we can recognize in our method a smoothing operator on graph signals. Chapter 5 introduces a new translation operator on graphs defined by analogy to the classical time shift operator and verifying the key property of isometry. Our operator is compared to the two operators of the literature and its action is empirically described on several graphs. Chapter 6 describes the use of the operator above to define stationary graph signals. After giving a spectral characterization of these graph signals, we give a method to study and test stationarity on real graph signals. The closing chapter shows the strength of the matlab toolbox developed and used during the course of this PhD.
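As a companion to this abstract, the sketch below illustrates the basic graph-signal-processing machinery it builds on: the graph Fourier transform obtained from the Laplacian eigendecomposition and a Tikhonov-type smoothing step, related to the smoothing interpretation mentioned for Chapter 4. It does not reproduce the thesis's isometric translation operator or its stationarity tests; the graph and the signal are hypothetical.

```python
import numpy as np

# Small undirected graph (hypothetical 4-node path) given by its adjacency matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # combinatorial graph Laplacian

# Graph Fourier transform: expand the signal on the Laplacian eigenvectors.
eigvals, U = np.linalg.eigh(L)          # eigenvalues act as graph "frequencies"
x = np.array([1.0, 4.0, 2.0, 3.0])      # a graph signal, one value per vertex
x_hat = U.T @ x                         # spectral representation of the signal

# Tikhonov smoothing: attenuate the high graph frequencies.
tau = 1.0
x_smooth = U @ (x_hat / (1.0 + tau * eigvals))
print(x_smooth)
```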
73
Análise harmônica dos totais de chuvas mensais de regiões homogêneas do Estado do Rio Grande do Sul / Harmonic analysis of the total rainfall monthly of homogeneous regions of the state of Rio Grande do Sul. Cardoso, Daniel Souza 24 March 2010 (has links)
Previous issue date: 2010-03-24 / The State of Rio Grande do Sul (RS) has an economy directly dependent on the agricultural and livestock sectors, which several studies report as sensitive to the variability of certain climatological elements; in RS, water is regarded as the fundamental one. We therefore studied the monthly rainfall totals over 60 years (1948/2007), collected from 31 meteorological stations (EMs) distributed geographically across the state, with the aim of helping local society anticipate possible water shortages and/or develop public policies for the use of water resources in urban and rural areas.
In order to obtain a model that approximates the behavior of the average rainfall in each of the six homogeneous regions already defined in the literature (Marques, 2005), a harmonic analysis was carried out on data previously adjusted to 30-day months. Before the analysis, the properties of normality, homogeneity of variances and stationarity were checked. The data did not pass the normality and homogeneity-of-variance tests satisfactorily, so a transformation was applied, generating new data sets that met both conditions. The relative increase in the trend over the 60 years ranged from 2.7 to 13.3% across the six homogeneous regions.
Through the harmonic analysis, models were obtained that adequately represent the behavior of the average rainfall for the six homogeneous regions of RS, each consisting of 3 or 4 sine waves and explaining 81 to 95% of the variability of the data. Some harmonics stood out for explaining a larger share of the variability of the observed data: the semiannual harmonic stood out in 50% of the models, and the four-monthly and annual harmonics in 33.33% and 16.66% of them, respectively. The models were tested for forecasting over the interval 2003/2007, evolving in time according to the trend of each region's time series, and were validated by residual analysis through the autocorrelation of the residuals, proving suitable for forecasting future values. / Considerando que o Estado do Rio Grande do Sul (RS), possui uma economia diretamente dependente dos setores pecuário e agrícola, que em diferentes estudos são apontados como dependentes da variabilidade de alguns elementos climatológicos, e que no RS o elemento hídrico é considerado como fundamental. Realizou-se um estudo dos totais mensais de chuva ao longo de 60 anos (1948/2007), coletados de 31 estações meteorológicas (EMs) bem distribuídas, geograficamente, no Estado. Com o interesse de contribuir para a sociedade local, na previsão de possíveis racionamentos, e/ou na elaboração de políticas públicas para o uso dos recursos hídricos, nas áreas urbana e rural.
Com o objetivo de obter um modelo, que possa apresentar uma aproximação do comportamento da precipitação pluvial média de cada uma das seis regiões homogêneas, já definidas na literatura (Marques, 2005), realizou-se uma análise harmônica dos dados previamente ajustados à meses de 30 dias. Antes da análise foram verificadas as propriedades de normalidade, homogeneidade de variâncias e estacionariedade. Os dados submetidos aos testes de normalidade, e de homogeneidade de variâncias, não obtiveram aprovação satisfatória nestes testes e, daí, realizou-se uma transformação de dados, gerando novos conjuntos de dados, que satisfizeram as condições de homogeneidade de variâncias e normalidade. O aumento relativo da tendência ao longo de 60 anos, variou de 2,7 a 13,3% nas seis regiões homogêneas.
Através da análise harmônica obteve-se modelos que representam adequadamente o comportamento da precipitação pluvial média para as seis regiões homogêneas do RS, constituídos por 3 ou 4 ondas senoidais, apresentando uma representatividade de 81 a 95% da variabilidade dos dados. Foi possível constatar que alguns harmônicos destacaram-se por apresentar maior representatividade da variabilidade dos dados observados, sendo que o harmônico semestral destacou-se em 50% dos modelos, e que os harmônicos quadrimestral e anual destacaram-se em 33,33% e 16,66% destes, respectivamente. Os modelos foram testados para previsão, compreendida no intervalo de 2003/2007, evoluindo no tempo de acordo com a tendência das séries temporais de cada região, sendo validados na análise residual pela autocorrelação dos resíduos. Mostrando-se como adequados para previsão de valores futuros.
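The harmonic models described above (3 or 4 sine waves explaining 81 to 95% of the variability) can be illustrated with a short least-squares harmonic regression. The monthly values below are hypothetical and only show the mechanics; the thesis's regional data and fitted coefficients are not reproduced.

```python
import numpy as np

# Hypothetical 12 monthly mean rainfall totals (mm) for one region.
monthly_mean = np.array([140, 130, 120, 110, 105, 130,
                         150, 140, 135, 125, 120, 145], dtype=float)
t = np.arange(12)

# Design matrix with the annual (k=1), semiannual (k=2) and four-monthly (k=3) harmonics.
cols = [np.ones(12)]
for k in (1, 2, 3):
    cols += [np.cos(2 * np.pi * k * t / 12), np.sin(2 * np.pi * k * t / 12)]
X = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(X, monthly_mean, rcond=None)
fitted = X @ coef
explained = 1 - np.var(monthly_mean - fitted) / np.var(monthly_mean)
print(f"share of variance captured by 3 harmonics: {explained:.2%}")
```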
74
Approches nouvelles des modèles GARCH multivariés en grande dimension / New approaches for high-dimensional multivariate GARCH models. Poignard, Benjamin 15 June 2017 (has links)
Ce document traite du problème de la grande dimension dans des processus GARCH multivariés. L'auteur propose une nouvelle dynamique vine-GARCH pour des processus de corrélation paramétrisés par un graphe non dirigé appelé "vine". Cette approche génère directement des matrices définies-positives et encourage la parcimonie. Après avoir établi des résultats d'existence et d'unicité pour les solutions stationnaires du modèle vine-GARCH, l'auteur analyse les propriétés asymptotiques du modèle. Il propose ensuite un cadre général de M-estimateurs pénalisés pour des processus dépendants et se concentre sur les propriétés asymptotiques de l'estimateur "adaptive Sparse Group Lasso". La grande dimension est traitée en considérant le cas où le nombre de paramètres diverge avec la taille de l'échantillon. Les résultats asymptotiques sont illustrés par des expériences simulées. Enfin dans ce cadre l'auteur propose de générer la sparsité pour des dynamiques de matrices de variance covariance. Pour ce faire, la classe des modèles ARCH multivariés est utilisée et les processus correspondants à celle-ci sont estimés par moindres carrés ordinaires pénalisés. / This document contributes to high-dimensional statistics for multivariate GARCH processes. First, the author proposes a new dynamic called vine-GARCH for correlation processes parameterized by an undirected graph called vine. The proposed approach directly specifies positive definite matrices and fosters parsimony. The author provides results for the existence and uniqueness of stationary solution of the vine-GARCH model and studies its asymptotic properties. He then proposes a general framework for penalized M-estimators with dependent processes and focuses on the asymptotic properties of the adaptive Sparse Group Lasso regularizer. The high-dimensionality setting is studied when considering a diverging number of parameters with the sample size. The asymptotic properties are illustrated through simulation experiments. Finally, the author proposes to foster sparsity for multivariate variance covariance matrix processes within the latter framework. To do so, the multivariate ARCH family is considered and the corresponding parameterizations are estimated thanks to penalized ordinary least square procedures.
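The final step mentioned in the abstract, estimating multivariate ARCH-type processes by penalized ordinary least squares to foster sparsity, can be sketched as follows. This is a generic l1-penalized regression of current cross-products of returns on their lags, run on simulated data; it does not implement the vine-GARCH dynamic or the adaptive Sparse Group Lasso studied in the thesis.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Hypothetical return panel: T observations of d assets (in percent).
rng = np.random.default_rng(1)
T, d = 500, 5
returns = rng.normal(scale=1.0, size=(T, d))

# Vectorize the lower triangle of r_t r_t' as a crude realized-covariance proxy.
idx = np.tril_indices(d)
Y = np.array([np.outer(r, r)[idx] for r in returns])   # shape (T, d(d+1)/2)

# Sparse ARCH(1)-type regression: each covariance entry on all lagged entries,
# one l1-penalized least-squares fit per response column.
X, Y1 = Y[:-1], Y[1:]
coefs = np.column_stack([
    Lasso(alpha=0.05, max_iter=10_000).fit(X, Y1[:, j]).coef_
    for j in range(Y.shape[1])
])
print("share of coefficients set exactly to zero:", np.mean(coefs == 0.0))
```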
75
Preprocessing Data: A Study on Testing Transformations for Stationarity of Financial Data / Förbehandling av data: En studie som testar transformationer för stationaritet av finansiell data. Barwary, Sara, Abazari, Tina January 2019 (has links)
In this thesis, within Industrial Economics and Applied Mathematics and in cooperation with Svenska Handelsbanken, a set of given transformations was examined to assess their ability to make a given time series stationary. In addition, a parameter α belonging to each transformation formula was to be determined. To do this, an extensive study of previous research was conducted, and two different hypothesis tests were used to confirm the output. A value or an interval for α was chosen for each transformation. Moreover, the first-difference transformation is shown to have a positive effect on the stationarity of financial data. / Det här kandidatexamensarbetet inom Industriell Ekonomi och tillämpad matematik i samarbete med Handelsbanken undersöker givna transformationer för att bedöma deras förmåga att göra givna tidsserier stationära. Dessutom skulle en parameter α tillhörande varje transformations formel bestämmas. För att göra detta utfördes en omfattande studie av tidigare forskning och två olika hypotestester gjordes för att bekräfta output. Ett resultat sammanställdes där ett värde eller ett intervall för α valdes till varje transformation. Dessutom visade det sig att "first difference" transformationen är bra för stationäritet av finansiell data.
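The testing pattern described above can be illustrated with the first-difference transformation and two standard stationarity checks. The thesis's specific transformations and the chosen values of α are not reproduced here; the price series below is simulated.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

# Hypothetical price series: a random walk, hence non-stationary in levels.
rng = np.random.default_rng(42)
price = 100 + np.cumsum(rng.normal(size=500))

def stationarity_report(x, label):
    adf_p = adfuller(x)[1]                              # H0: unit root (non-stationary)
    kpss_p = kpss(x, regression="c", nlags="auto")[1]   # H0: level-stationary
    print(f"{label}: ADF p-value {adf_p:.3f}, KPSS p-value {kpss_p:.3f}")

stationarity_report(price, "levels")
stationarity_report(np.diff(price), "first difference")
```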
76
A Framework to Model Bond Liquidity / Ett ramverk för att modellera obligationslikviditet. Issa, Alan January 2023 (has links)
The liquidity of financial assets can be studied in various different ways. In this thesis, liquidity is defined as the cost and time required to liquidate a position. While the liquidity of highly traded financial instruments like stocks is typically determined by analyzing the order book, the lack of an order book for over-the-counter bond trading presents challenges for estimating bond liquidity. The objective of this thesis is to develop a framework for estimating the cost and time required to liquidate a bond position. To achieve this, we propose a theoretical order book model based on the order book of more actively traded instruments, and estimate the model parameters using bond transaction data. The volume available to trade in the theoretical order book was modelled as gamma distributed stochastic process. The distribution of the liquidation cost could thereafter be derived where the parameters were estimated using the maximum likelihood estimation. The liquidation time, or liquidity horizon, was then determined through the solution of an optimization problem. The proposed framework for estimating bond liquidity produced promising results. The estimated parameters of the gamma distributed stochastic process accurately captured the behavior of bond trading volumes, allowing for a reliable estimation of the distribution of liquidation costs. Additionally, the optimization problem used to determine the liquidity horizon produced reasonable estimates. / Likviditeten hos finansiella tillgångar kan studeras på olika sätt. I denna uppsats definieras likviditeten som kostnaden och tiden som krävs för att likvidera en position. Medans likviditeten hos aktivt handlade finansiella tillgångar som aktier vanligtvis bestäms genom att analysera orderboken, så medför bristen på en orderbok för handel med "over-the-counter" obligationer utmaningar för att uppskatta likviditeten för dem. Syftet med denna uppsats är att utveckla ett ramverk för att uppskatta kostnaden och tiden som krävs för att likvidera en obligationsposition. För att uppnå detta föreslår vi en teoretisk orderboksmodell baserad på orderboken för mer aktivt handlade instrument, och uppskattar modellparametrarna med hjälp av data för obligationsaffärer. Volymen som är tillgänglig att handla i den teoretiska orderboken modellerades som en gammafördelad stokastisk process. Fördelningen av likvidationskostnaden kunde sedan härledas där parametrarna uppskattades med hjälp av maximum likelihood-estimering. Likvidationstiden, eller likvidationshoristonten, bestämdes sedan genom att lösa ett optimeringsproblem. Det föreslagna ramverket för att uppskatta likviditeten hos obligationer gav lovande resultat. De uppskattade parametrarna för den gammafördelade stokastiska processen fångade noggrant upp beteendet hos handelsvolymerna för obligationer, vilket möjliggjorde en pålitlig uppskattning av fördelning av likvidationskostnader. Optimeringsproblemet som användes för att bestämma likviditetshorisontens gav dessutom rimliga uppskattningar.
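The two quantitative steps described above, fitting a gamma distribution to traded volumes by maximum likelihood and deriving a liquidation horizon, can be sketched as follows. The volume data, participation rate and position size are hypothetical, and the simple capacity rule below stands in for the thesis's optimization problem.

```python
import numpy as np
from scipy import stats

# Hypothetical daily traded volumes (nominal) observed for one bond.
rng = np.random.default_rng(7)
volumes = rng.gamma(shape=2.0, scale=5e5, size=250)

# Maximum-likelihood fit of a gamma distribution (location pinned at zero).
shape, loc, scale = stats.gamma.fit(volumes, floc=0)
print(f"MLE shape = {shape:.2f}, scale = {scale:,.0f}")

# Crude liquidity horizon: smallest number of days whose expected tradable volume
# (capped at an assumed participation rate) covers the position to liquidate.
position = 5e6
participation = 0.25                              # assume 25% of daily volume is accessible
daily_capacity = participation * shape * scale    # mean of the fitted gamma, scaled
horizon = int(np.ceil(position / daily_capacity))
print("estimated liquidity horizon (days):", horizon)
```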
77
Unit root, outliers and cointegration analysis with macroeconomic applications. Rodríguez, Gabriel 10 1900 (has links)
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal. / In this thesis, we deal with three particular issues in the literature on nonstationary time series. The first essay deals with various unit root tests in the context of structural change. The second paper studies some residual based tests in order to identify cointegration. Finally, in the third essay, we analyze several tests in order to identify additive outliers in nonstationary time series. The first paper analyzes the hypothesis that some time series can be characterized as stationary with a broken trend. We extend the class of M-tests and ADF test for a unit root to the case where a change in the trend function is allowed to occur at an unknown time. These tests (MGLS, ADFGLS) adopt the Generalized Least Squares (GLS) detrending approach to eliminate the set of deterministic components present in the model. We consider two models in the context of the structural change literature. The first model allows for a change in slope and the other for a change in slope as well as intercept. We derive the asymptotic distribution of the tests as well as that of the feasible point optimal test (PTGLS) which allows us to find the power envelope. The asymptotic critical values of the tests are tabulated and we compute the non-centrality parameter used for the local GLS detrending that permits the tests to have 50% asymptotic power at that value. Two methods to select the break point are analyzed. A first method estimates the break point that yields the minimal value of the statistic. In the second method, the break point is selected such that the absolute value of the t-statistic on the change in slope is maximized. We show that the MGLS and PTGLS tests have an asymptotic power function close to the power envelope. An extensive simulation study analyzes the size and power of the tests in finite samples under various methods to select the truncation lag for the autoregressive spectral density estimator. In an empirical application, we consider two U.S. macroeconomic annual series widely used in the unit root literature: real wages and common stock prices. Our results suggest a rejection of the unit root hypothesis. In other words, we find that these series can be considered as trend stationary with a broken trend. Given the fact that using the GLS detrending approach allows us to attain gains in the power of the unit root tests, a natural extension is to propose this approach to the context of tests based on residuals to identify cointegration. This is the objective of the second paper in the thesis. In fact, we propose residual based tests for cointegration using local GLS detrending to eliminate separately the deterministic components in the series. We consider two cases, one where only a constant is included and one where a constant and a time trend are included. The limiting distributions of various residuals based tests are derived for a general quasi-differencing parameter and critical values are tabulated for values of c = 0 irrespective of the nature of the deterministic components and also for other values as proposed in the unit root literature. Simulations show that GLS detrending yields tests with higher power. Furthermore, using c = -7.0 or c = -13.5 as the quasi-differencing parameter, based on the two cases analyzed, is preferable. The third paper is an extension of a recently proposed method to detect outliers which explicitly imposes the null hypothesis of a unit root. It works in an iterative fashion to select multiple outliers in a given series. We show, via simulation, that under the null hypothesis of no outliers, it has the right size in finite samples to detect a single outlier but when applied in an iterative fashion to select multiple outliers, it exhibits severe size distortions towards finding an excessive number of outliers. We show that this iterative method is incorrect and derive the appropriate limiting distribution of the test at each step of the search. Whether corrected or not, we also show that the outliers need to be very large for the method to have any decent power. We propose an alternative method based on first-differenced data that has considerably more power. The issues are illustrated using two US/Finland real exchange rate series.
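A minimal sketch of the GLS detrending idea used in the first two essays is given below: quasi-difference the series and the deterministic terms with a local-to-unity coefficient, detrend, then run an ADF regression without further deterministics. The break-date search, the M-tests and the exact critical values of the thesis are not reproduced; the constant c = -13.5 is the usual trend-case choice mentioned in the abstract, and the p-value below uses standard ADF tables as a simplification.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

def gls_detrend(y, c_bar=-13.5):
    """Local GLS detrending on a constant and linear trend (no break)."""
    T = len(y)
    a = 1.0 + c_bar / T
    z = sm.add_constant(np.arange(1.0, T + 1))          # deterministics: [1, t]
    yq = np.r_[y[:1], y[1:] - a * y[:-1]]               # quasi-differenced series
    zq = np.r_[z[:1], z[1:] - a * z[:-1]]               # quasi-differenced deterministics
    beta = np.linalg.lstsq(zq, yq, rcond=None)[0]
    return y - z @ beta                                 # GLS-detrended series

rng = np.random.default_rng(3)
y = 0.05 * np.arange(300) + np.cumsum(rng.normal(size=300))   # trending random walk
yd = gls_detrend(y)
print("ADF-GLS-style p-value:", adfuller(yd, regression="n")[1])
```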
78
Econometric forecasting of financial assets using non-linear smooth transition autoregressive models. Clayton, Maya January 2011 (has links)
Following the debate by empirical finance research on the presence of non-linear predictability in stock market returns, this study examines the forecasting abilities of non-linear STAR-type models. A non-linear model methodology is applied to daily returns of the FTSE, S&P, DAX and Nikkei indices. The research is then extended to long-horizon forecastability of the four series, including monthly returns and a buy-and-sell strategy for three-, six- and twelve-month holding periods using a non-linear error-correction framework. The recursive out-of-sample forecast is performed using the present value model equilibrium methodology, whereby stock returns are forecasted using macroeconomic variables, in particular the dividend yield and price-earnings ratio. The forecasting exercise revealed the presence of non-linear predictability for all data periods considered, and confirmed an improvement in predictability for long-horizon data. Finally, the present value model approach is applied to the housing market, whereby house price returns are forecasted using a price-earnings ratio as a measure of fundamental price levels. Findings revealed that the UK housing market appears to be characterised by asymmetric non-linear dynamics, with a clear preference for the asymmetric ESTAR model in terms of forecasting accuracy.
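For readers unfamiliar with STAR-type models, the sketch below shows the exponential transition function at the heart of an ESTAR specification and a one-step forecast built from it. The parameter values are hand-picked for illustration and are not estimates from this study.

```python
import numpy as np

def estar_transition(s, gamma, c):
    """Exponential STAR transition: close to 0 near s = c, approaching 1 far from it."""
    return 1.0 - np.exp(-gamma * (s - c) ** 2)

def estar_one_step(y_lag, phi_lin, phi_nonlin, gamma, c):
    """One-step ESTAR(1) forecast: linear part plus transition-weighted nonlinear part."""
    G = estar_transition(y_lag, gamma, c)
    return phi_lin * y_lag + phi_nonlin * y_lag * G

# Hypothetical parameters: near equilibrium (c = 0) the series behaves like a random walk;
# far from it, the nonlinear term pulls it back towards equilibrium.
for y_lag in (0.01, 0.10, 0.30):
    print(y_lag, estar_one_step(y_lag, phi_lin=1.0, phi_nonlin=-0.6, gamma=50.0, c=0.0))
```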
79
Location-based estimation of the autoregressive coefficient in ARX(1) models. Kamanu, Timothy Kevin Kuria January 2006 (has links)
In recent years, two estimators have been proposed to correct the bias exhibited by the least-squares (LS) estimator of the lagged dependent variable (LDV) coefficient in dynamic regression models when the sample is finite. They have been termed 'mean-unbiased' and 'median-unbiased' estimators. Relative to other similar procedures in the literature, the two location-based estimators have the advantage that they offer an exact and uniform methodology for LS estimation of the LDV coefficient in a first-order autoregressive model with or without exogenous regressors, i.e. ARX(1).
However, no attempt has been made to accurately establish and/or compare the statistical properties among these estimators, or relative to those of the LS estimator, when the LDV coefficient is restricted to realistic values. Neither has there been an attempt to compare their performance in terms of their mean squared error (MSE) when various forms of the exogenous regressors are considered. Furthermore, only implicit confidence intervals have been given for the 'median-unbiased' estimator. Explicit confidence bounds that are directly usable for inference are not available for either estimator. In this study a new estimator of the LDV coefficient is proposed: the 'most-probably-unbiased' estimator. Its performance properties vis-à-vis the existing estimators are determined and compared when the parameter space of the LDV coefficient is restricted. In addition, the following new results are established: (1) an explicit computable form for the density of the LS estimator is derived for the first time and an efficient method for its numerical evaluation is proposed; (2) the exact bias, mean, median and mode of the distribution of the LS estimator are determined in three specifications of the ARX(1) model; (3) the exact variance and MSE of the LS estimator are determined; (4) the standard errors associated with determining the same quantities when simulation rather than numerical integration is used are established, and the methods are compared in terms of computational time and effort; (5) an exact method of evaluating the density of the three estimators is described; (6) their exact bias, mean, variance and MSE are determined and analysed; and finally, (7) a method of obtaining explicit exact confidence intervals from the distribution functions of the estimators is proposed.
The discussion and results show that the estimators are still biased in the usual sense: 'in expectation'. However, the bias is substantially reduced compared to that of the LS estimator. The findings are important in the specification of time-series regression models, point and interval estimation, decision theory, and simulation.
80
Optimization of nonsmooth first order hyperbolic systems. Strogies, Nikolai 16 November 2016 (has links)
Wir betrachten Optimalsteuerungsprobleme, die von partiellen Differentialgleichungen beziehungsweise Variationsungleichungen mit Differentialoperatoren erster Ordnung abhängen. Wir führen die Reformulierung eines Tagebauplanungsproblems, das auf stetigen Funktionen beruht, ein. Das Resultat ist ein Optimalsteuerungsproblem für Viskositätslösungen einer Eikonalgleichung. Die Existenz von Lösungen dieses und bestimmter Hilfsprobleme, die von semilinearen PDG‘s mit künstlicher Viskosität abhängen, wird bewiesen, Stationaritätsbedingungen hergeleitet und ein schwaches Konsistenzresultat für stationäre Punkte präsentiert. Des Weiteren betrachten wir Optimalsteuerungsprobleme, die von stationären Variationsungleichungen erster Art mit linearen Differentialoperatoren erster Ordnung abhängen. Wir diskutieren Lösbarkeit und Stationaritätskonzepte für diese Probleme. Für letzteres vergleichen wir Ergebnisse, die entweder durch die Anwendung von Penalisierungs- und Regularisierungsansätzen direkt auf Ebene von Differentialoperatoren erster Ordnung oder als Grenzwertprozess von Stationaritätssystemen für viskositätsregularisierte Optimalsteuerungsprobleme unter passenden Annahmen erhalten werden. Um die Konsistenz von ursprünglichem und regularisierten Problemen zu sichern, wird ein bekanntes Ergebnis für Lösungen von VU’s mit degeneriertem Differentialoperator erweitert. In beiden Fällen ist die erhaltene Stationarität schwächer als W-stationarität. Die theoretischen Ergebnisse werden anhand numerischer Beispiele verifiziert. Wir erweitern diese Ergebnisse auf Optimalsteuerungsprobleme bezüglich zeitabhängiger VU’s mit Differentialoperatoren erster Ordnung. Hierfür wird die Existenz von Lösungen bewiesen und erneut ein Stationaritätssystem mit Hilfe verschwindender Viskositäten unter bestimmten Beschränktheitsannahmen hergeleitet. Die erhaltenen Ergebnisse werden anhand von numerischen Beispielen verifiziert. / We consider problems of optimal control subject to partial differential equations and variational inequality problems with first order differential operators. We introduce a reformulation of an open pit mine planning problem that is based on continuous functions. The resulting formulation is a problem of optimal control subject to viscosity solutions of a partial differential equation of Eikonal Type. The existence of solutions to this problem and auxiliary problems of optimal control subject to regularized, semilinear PDE’s with artificial viscosity is proven. For the latter a first order optimality condition is established and a mild consistency result for the stationary points is proven. Further we study certain problems of optimal control subject to time-independent variational inequalities of the first kind with linear first order differential operators. We discuss solvability and stationarity concepts for such problems. In the latter case, we compare the results obtained by either utilizing penalization-regularization strategies directly on the first order level or considering the limit of systems for viscosity-regularized problems under suitable assumptions. To guarantee the consistency of the original and viscosity-regularized problems of optimal control, we extend known results for solutions to variational inequalities with degenerated differential operators. In both cases, the resulting stationarity concepts are weaker than W-stationarity. We validate the theoretical findings by numerical experiments for several examples. 
Finally, we extend the results from the time-independent to the case of problems of optimal control subject to VI’s with linear first order differential operators that are time-dependent. After establishing the existence of solutions to the problem of optimal control, a stationarity system is derived by a vanishing viscosity approach under certain boundedness assumptions and the theoretical findings are validated by numerical experiments.
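To give a concrete feel for the viscosity-solution setting of the first contribution, the sketch below computes the viscosity solution of a one-dimensional eikonal equation |u'(x)| = s(x) with zero boundary values, using a simple Dijkstra-like upwind update applied in Gauss-Seidel sweeps (adequate in one dimension). It is only a toy illustration of the PDE constraint; the open-pit planning problem, the optimal control layer and the variational inequalities of the thesis are not reproduced.

```python
import numpy as np

# Viscosity solution of |u'(x)| = s(x) on [0, 1] with u(0) = u(1) = 0,
# computed by upwind Gauss-Seidel sweeps (fast-sweeping style).
n = 101
h = 1.0 / (n - 1)
s = np.ones(n)                  # "slowness" field; s = 1 gives the distance to the boundary
u = np.full(n, np.inf)
u[0] = u[-1] = 0.0

for _ in range(10):             # a few forward/backward sweeps suffice in one dimension
    for order in (range(1, n - 1), range(n - 2, 0, -1)):
        for i in order:
            u[i] = min(u[i], min(u[i - 1], u[i + 1]) + s[i] * h)

# The result is the kinked distance profile, i.e. the nonsmooth limit that a
# vanishing-viscosity regularization of the problem converges to.
print(u[:4], u[n // 2])         # grows linearly from each boundary, peaks at 0.5 in the middle
```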