51 |
Short Proofs May Be Spacious : Understanding Space in Resolution. Nordström, Jakob. January 2008 (has links)
Om man ser på de bästa nu kända algoritmerna för att avgöra satisfierbarhet hos logiska formler så är de allra flesta baserade på den så kallade DPLL-metoden utökad med klausulinlärning. De två viktigaste gränssättande faktorerna för sådana algoritmer är hur mycket tid och minne de använder, och att förstå sig på detta är därför en fråga som har stor praktisk betydelse. Inom området beviskomplexitet svarar tids- och minnesåtgång mot längd och minne hos resolutionsbevis för formler i konjunktiv normalform (CNF-formler). En lång rad arbeten har studerat dessa mått och även jämfört dem med bredden av bevis, ett annat mått som visat sig höra nära samman med både längd och minne. Mer formellt är längden hos ett bevis antalet rader, dvs. klausuler, bredden är storleken av den största klausulen, och minnet är maximala antalet klausuler som man behöver komma ihåg samtidigt om man under bevisets gång bara får dra nya slutsatser från klausuler som finns sparade. För längd och bredd har man lyckats visa en rad starka resultat men förståelsen av måttet minne har lämnat mycket i övrigt att önska. Till exempel så är det känt att minnet som behövs för att bevisa en formel är minst lika stort som den nödvändiga bredden, men det har varit en öppen fråga om minne och bredd kan separeras eller om de två måtten mäter "samma sak" i den meningen att de alltid är asymptotiskt lika stora för en formel. Det har också varit okänt om det faktum att det finns ett kort bevis för en formel medför att formeln också kan bevisas i litet minne (motsvarande påstående är sant för längd jämfört med bredd) eller om det tvärtom kan vara så att längd och minne är "helt orelaterade" på så sätt att även korta bevis kan kräva maximal mängd minne. I denna avhandling presenterar vi först ett förenklat bevis av trade-off-resultatet för längd jämfört med minne i (Hertel och Pitassi 2007) och visar hur samma idéer kan användas för att visa ett par andra exponentiella avvägningar i relationerna mellan olika beviskomplexitetsmått för resolution. Sedan visar vi att det finns formler som kan bevisas i linjär längd och konstant bredd men som kräver en mängd minne som växer logaritmiskt i formelstorleken, vilket vi senare förbättrar till kvadratroten av formelstorleken. Dessa resultat separerar således minne och bredd. Genom att använda andra men besläktade idéer besvarar vi därefter frågan om hur minne och längd förhåller sig till varandra genom att separera dem på starkast möjliga sätt. Mer precist visar vi att det finns CNF-formler av storlek O(n) som har resolutionbevis i längd O(n) och bredd O(1) men som kräver minne minst Omega(n/log n). Det gemensamma temat för dessa resultat är att vi studerar formler som beskriver stenläggningsspel, eller pebblingspel, på riktade acykliska grafer. Vi bevisar undre gränser för det minne som behövs för den så kallade pebblingformeln över en graf uttryckt i det svart-vita pebblingpriset för grafen i fråga. Slutligen observerar vi att vår optimala separation av minne och längd i själva verket är ett specialfall av en mer generell sats. Låt F vara en CNF-formel och f:{0,1}^d->{0,1} en boolesk funktion. Ersätt varje variabel x i F med f(x_1, ..., x_d) och skriv om denna nya formel på naturligt sätt som en CNF-formel F[f]. Då gäller, givet att F och f har rätt egenskaper, att F[f] kan bevisas i resolution i väsentligen samma längd och bredd som F, men att den minimala mängd minne som behövs för F[f] är åtminstone lika stor som det minimala antalet variabler som måste förekomma samtidigt i ett bevis för F. 
/ Most state-of-the-art satisfiability algorithms today are variants of the DPLL procedure augmented with clause learning. The two main bottlenecks for such algorithms are the amounts of time and memory used. Thus, understanding time and memory requirements for clause learning algorithms, and how these requirements are related to one another, is a question of considerable practical importance. In the field of proof complexity, these resources correspond to the length and space of resolution proofs for formulas in conjunctive normal form (CNF). There has been a long line of research investigating these proof complexity measures and relating them to the width of proofs, another measure which has turned out to be intimately connected with both length and space. Formally, the length of a resolution proof is the number of lines, i.e., clauses, the width of a proof is the maximal size of any clause in it, and the space is the maximal number of clauses kept in memory simultaneously if the proof is only allowed to infer new clauses from clauses currently in memory. While strong results have been established for length and width, our understanding of space has been quite poor. For instance, the space required to prove a formula is known to be at least as large as the needed width, but it has remained open whether space can be separated from width or whether the two measures coincide asymptotically. It has also been unknown whether the fact that a formula is provable in short length implies that it is also provable in small space (which is the case for length versus width), or whether on the contrary these measures are "completely unrelated" in the sense that short proofs can be maximally complex with respect to space. In this thesis, as an easy first observation we present a simplified proof of the recent length-space trade-off result for resolution in (Hertel and Pitassi 2007) and show how our ideas can be used to prove a couple of other exponential trade-offs in resolution. Next, we prove that there are families of CNF formulas that can be proven in linear length and constant width but require space growing logarithmically in the formula size, later improving this exponentially to the square root of the size. These results thus separate space and width. Using a related but different approach, we then resolve the question about the relation between space and length by proving an optimal separation between them. More precisely, we show that there are families of CNF formulas of size O(n) that have resolution proofs of length O(n) and width O(1) but for which any proof requires space Omega(n/log n). All of these results are achieved by studying so-called pebbling formulas defined in terms of pebble games over directed acyclic graphs (DAGs) and proving lower bounds on the space requirements for such formulas in terms of the black-white pebbling price of the underlying DAGs. Finally, we observe that our optimal separation of space and length is in fact a special case of a more general phenomenon. Namely, for any CNF formula F and any Boolean function f:{0,1}^d->{0,1}, replace every variable x in F by f(x_1, ..., x_d) and rewrite this new formula in CNF in the natural way, denoting the resulting formula F[f]. Then if F and f have the right properties, F[f] can be proven in resolution in essentially the same length and width as F but the minimal space needed for F[f] is lower-bounded by the number of variables that have to be mentioned simultaneously in any proof for F. / QC 20100831
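To make the pebbling construction concrete, the following sketch generates the clauses of the pebbling formula for a small DAG according to the standard definition used in this line of work: each source is asserted, each internal vertex is implied by its predecessors, and the sink is refuted. The graph, vertex names, and literal encoding are illustrative choices, not notation from the thesis.

```python
def pebbling_formula(predecessors, sink):
    """Clauses of the pebbling formula Peb_G; predecessors maps each vertex to its in-neighbours.
    A literal is a (variable, polarity) pair, with polarity True meaning the positive literal."""
    clauses = []
    for v, preds in predecessors.items():
        if not preds:                                   # source axiom: v
            clauses.append([(v, True)])
        else:                                           # implication axiom: u1 & ... & uk -> v
            clauses.append([(u, False) for u in preds] + [(v, True)])
    clauses.append([(sink, False)])                     # target axiom: the sink is false
    return clauses

# Pyramid DAG of height 2: sources u, v, w; internal vertices x, y; sink z.
pyramid = {"u": [], "v": [], "w": [], "x": ["u", "v"], "y": ["v", "w"], "z": ["x", "y"]}
for clause in pebbling_formula(pyramid, "z"):
    print(" v ".join(("" if positive else "~") + var for var, positive in clause))
```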
|
52 |
TOA-Based Robust Wireless Geolocation and Cramér-Rao Lower Bound Analysis in Harsh LOS/NLOS Environments. Yin, Feng; Fritsche, Carsten; Gustafsson, Fredrik; Zoubir, Abdelhak M. January 2013 (has links)
We consider time-of-arrival based robust geolocation in harsh line-of-sight/non-line-of-sight environments. Herein, we assume the probability density function (PDF) of the measurement error to be completely unknown and develop an iterative algorithm for robust position estimation. The iterative algorithm alternates between a PDF estimation step, which approximates the exact measurement error PDF (albeit unknown) under the current parameter estimate via adaptive kernel density estimation, and a parameter estimation step, which resolves a position estimate from the approximate log-likelihood function via a quasi-Newton method. Unless the convergence condition is satisfied, the resolved position estimate is then used to refine the PDF estimation in the next iteration. We also present the best achievable geolocation accuracy in terms of the Cramér-Rao lower bound. Various simulations have been conducted in both real-world and simulated scenarios. When the number of received range measurements is large, the new proposed position estimator attains the performance of the maximum likelihood estimator (MLE). When the number of range measurements is small, it deviates from the MLE, but still outperforms several salient robust estimators in terms of geolocation accuracy, which comes at the cost of higher computational complexity.
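To illustrate the alternating structure of the estimator, the sketch below iterates a kernel-density approximation of the ranging-error PDF and a quasi-Newton position update; it uses a fixed-bandwidth Gaussian KDE (scipy's gaussian_kde) rather than the adaptive kernel density estimator of the paper, BFGS as the quasi-Newton step, and invented anchor positions and LOS/NLOS noise parameters.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import minimize

rng = np.random.default_rng(0)
anchors = np.array([[0, 0], [50, 0], [100, 0], [100, 50],
                    [100, 100], [50, 100], [0, 100], [0, 50]], dtype=float)
x_true = np.array([30.0, 60.0])

# Ranging errors: zero-mean LOS noise plus a positively biased NLOS component (illustrative values).
nlos = rng.random(len(anchors)) < 0.4
ranges = (np.linalg.norm(anchors - x_true, axis=1)
          + rng.normal(0.0, 1.0, len(anchors))
          + nlos * rng.exponential(10.0, len(anchors)))

def residuals(x):
    return ranges - np.linalg.norm(anchors - x, axis=1)

x_hat = np.array([50.0, 50.0])                      # initial position guess
for _ in range(20):
    pdf_hat = gaussian_kde(residuals(x_hat))        # PDF estimation step
    nll = lambda x: -np.sum(np.log(pdf_hat(residuals(x)) + 1e-12))
    step = minimize(nll, x_hat, method="BFGS")      # parameter estimation step (quasi-Newton)
    if np.linalg.norm(step.x - x_hat) < 1e-3:       # convergence condition
        x_hat = step.x
        break
    x_hat = step.x

print("estimated position:", x_hat)
```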
|
53 |
Convergence of Lotz-Raebiger Nets on Banach Spaces. Erkursun, Nazife. 01 June 2010 (has links) (PDF)
The concept of LR-nets was first introduced and investigated by H.P. Lotz in [27] and by F. Raebiger in [30]; we therefore call such nets Lotz-Raebiger nets, or LR-nets for short. In this thesis we treat two problems on the asymptotic behavior of these operator nets. The first problem is to generalize well-known theorems for Cesàro averages of a single operator to LR-nets, namely the Eberlein and Sine theorems. The second problem is related to the strong convergence of Markov LR-nets on L1-spaces. We prove that the existence of a lower-bound function is necessary and sufficient for the asymptotic stability of LR-nets of Markov operators.
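For orientation, one standard single-operator formulation of this criterion (a lower-bound function in the sense of Lasota and Yorke) reads as follows; the thesis proves the corresponding statement for LR-nets of Markov operators, so the display below is only indicative of the flavour of the result.

```latex
% Single Markov operator T on L^1; the thesis generalizes this equivalence to LR-nets.
\[
  \exists\, h \in L^{1}_{+},\ \|h\|_{1} > 0,\ \
  \lim_{n \to \infty} \bigl\| (h - T^{n} f)^{+} \bigr\|_{1} = 0
  \ \text{ for every density } f
  \quad \Longleftrightarrow \quad
  \exists\, u_{*} \in L^{1}_{+}:\
  \lim_{n \to \infty} \| T^{n} f - u_{*} \|_{1} = 0
  \ \text{ for every density } f .
\]
```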
|
54 |
Statistical methods for reconstruction of entry, descent, and landing performance with application to vehicle design. Dutta, Soumyo. 13 January 2014 (has links)
There is significant uncertainty in our knowledge of the Martian atmosphere and the aerodynamics of the Mars entry, descent, and landing (EDL) systems. These uncertainties result in conservatism in the design of the EDL vehicles leading to higher system masses and a broad range of performance predictions. Data from flight instrumentation onboard Mars EDL systems can be used to quantify these uncertainties, but the existing dataset is sparse and many parameters of interest have not been previously observable. Many past EDL reconstructions neither utilize statistical information about the uncertainty of the measured data nor quantify the uncertainty of the estimated parameters. Statistical estimation methods can blend together disparate data types to improve the reconstruction of parameters of interest for the vehicle. For example, integrating data obtained from aeroshell-mounted pressure transducers, inertial measurement unit, and radar altimeter can improve the estimates of the trajectory, atmospheric profile, and aerodynamic coefficients, while also quantifying the uncertainty in these estimates. These same statistical methods can be leveraged to improve current engineering models in order to reduce conservatism in future EDL vehicle design. The work in this thesis presents a comprehensive methodology for parameter reconstruction and uncertainty quantification while blending dissimilar Mars EDL datasets. Statistical estimation methods applied include the Extended Kalman Filter, Unscented Kalman Filter, and Adaptive Filter. The estimators are applied in a manner in which the observability of the parameters of interest is maximized while using the sparse, disparate EDL dataset. The methodology is validated with simulated data and then applied to estimate the EDL performance of the 2012 Mars Science Laboratory. The reconstruction methodology is also utilized as a tool for improving vehicle design and reducing design conservatism. A novel method of optimizing the design of future EDL atmospheric data systems is presented by leveraging the reconstruction methodology. The methodology identifies important design trends and the point of diminishing returns of atmospheric data sensors that are critical in improving the reconstruction performance for future EDL vehicles. The impact of the estimation methodology on aerodynamic and atmospheric engineering models is also studied and suggestions are made for future EDL instrumentation.
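To make the filtering machinery concrete, the sketch below implements a generic extended Kalman filter predict/update cycle and applies it to a toy one-dimensional descent with an altimeter-like measurement; the state layout, dynamics, measurement model, and noise values are invented for illustration and are not taken from the MSL reconstruction.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    """One extended Kalman filter cycle: nonlinear predict, then measurement update."""
    x_pred = f(x, u)                         # propagate the state estimate
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q                 # propagate the covariance
    y = z - h(x_pred)                        # innovation
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy descent: state = [altitude, vertical velocity], control = commanded acceleration,
# and a radar-altimeter-like sensor that measures altitude (all values invented).
dt = 0.1
f = lambda x, u: np.array([x[0] + dt * x[1], x[1] + dt * u])
F_jac = lambda x, u: np.array([[1.0, dt], [0.0, 1.0]])
h = lambda x: np.array([x[0]])
H_jac = lambda x: np.array([[1.0, 0.0]])

x, P = np.array([1000.0, -60.0]), 10.0 * np.eye(2)
x, P = ekf_step(x, P, u=-3.0, z=np.array([993.5]), f=f, F_jac=F_jac, h=h, H_jac=H_jac,
                Q=0.01 * np.eye(2), R=4.0 * np.eye(1))
print("updated state:", x)
```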
|
55 |
Approche algébrique de problèmes d'ordonnancement de type flowshop avec contraintes de délais / Algebraic approach for flowshop scheduling problems with time lags. Vo, Nhat Vinh. 12 February 2015 (has links)
Nous abordons dans cette thèse des problèmes de flowshop de permutation soumis à des contraintes de délais minimaux et maximaux, avec deux types de travaux principaux : 1. Nous avons modélisé, en utilisant l'algèbre MaxPlus, des problèmes de flowshop de permutation m-machines soumis à une famille de contraintes : de délais minimaux, de délais maximaux, de sans attente, de délais fixes, de temps de montage indépendants de la séquence, de temps de démontage indépendants de la séquence, de blocage, de dates de début au plus tôt ainsi que de durées de latence. Des matrices caractérisant complètement leurs travaux associés ont été élaborées. Nous avons fait apparaître un problème central soumis à des contraintes de délais minimaux et maximaux. 2. Nous avons élaboré des bornes inférieures pour le makespan et pour la somme (pondérée ou non) des dates de fin. Ces bornes inférieures ont été incorporées dans des procédures par séparation et évaluation. Nous avons généralisé les bornes inférieures de Lageweg et al. pour des contraintes quelconques et amélioré une borne inférieure de la littérature. L'utilisation de chacune de ces bornes inférieures ainsi que de leurs combinaisons a été testée. Une famille de bornes inférieures pour la somme (pondérée ou non) des dates de fin a été élaborée, basée sur la résolution d'un problème à une machine et sur la résolution d'un problème de voyageur de commerce. Une politique de sélection de bornes inférieures a été proposée pour combiner les bornes inférieures. Bien qu'il s'agisse d'un problème NP-difficile, l'efficacité de ces bornes inférieures a été vérifiée à l'aide de tests. / In this thesis, permutation flowshop problems with minimal and maximal delay constraints are considered through the two following principal tasks. 1. In the first task, m-machine permutation flowshop problems with a family of constraints (minimal delays, maximal delays, no-wait, fixed delays, sequence-independent setup times, sequence-independent removal times, blocking, ready dates, durations of latency) were modeled using MaxPlus algebra. Job-associated matrices which completely characterize these jobs were elaborated. The modeling revealed a central problem with minimal and maximal delay constraints. 2. In the second task, lower bounds for the makespan and for total (weighted or unweighted) completion times were elaborated. These lower bounds were incorporated in branch-and-bound procedures. The lower bounds of Lageweg et al. were generalized to arbitrary constraints and an existing lower bound from the literature was improved. The usage of each of these lower bounds, as well as of their combinations, was tested. A family of lower bounds for total (weighted or unweighted) completion times was elaborated thanks to the solution of a one-machine problem and the solution of a traveling salesman problem. A lower bound selection strategy was proposed in order to combine these lower bounds. Despite the necessity of solving an NP-hard problem, the effectiveness of these lower bounds was verified by numerical tests.
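As a small illustration of the max-plus flavour of these models, the sketch below evaluates the makespan of a permutation flowshop with the classical recursion C[i][k] = max(C[i-1][k], C[i][k-1]) + p[j][k], which is precisely a (max, +) computation, and enumerates the permutations of a tiny instance; the processing times are invented, and none of the delay constraints, MaxPlus job matrices, or lower bounds developed in the thesis are implemented here.

```python
from itertools import permutations

def makespan(order, p):
    """Makespan of a permutation flowshop; p[j][k] is the processing time of job j on machine k.
    The recursion C[i][k] = max(C[i-1][k], C[i][k-1]) + p[j][k] is a (max, +) computation."""
    m = len(p[0])
    C = [[0] * m for _ in order]
    for i, j in enumerate(order):
        for k in range(m):
            prev_job = C[i - 1][k] if i > 0 else 0      # same machine, previous job in the order
            prev_machine = C[i][k - 1] if k > 0 else 0  # same job, previous machine
            C[i][k] = max(prev_job, prev_machine) + p[j][k]
    return C[-1][-1]

# Three jobs on two machines (invented processing times); brute-force the best permutation.
p = [[3, 2], [1, 4], [2, 2]]
best = min(permutations(range(3)), key=lambda order: makespan(order, p))
print("best order:", best, "makespan:", makespan(best, p))
```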
|
56 |
Dois ensaios sobre política monetária ótima aplicada ao Banco Central do Brasil : preferências no período do regime de metas para a inflação e consideração da restrição de não negatividade. Schifino, Lucas Aronne. January 2013 (has links)
O objetivo desta dissertação, composta por dois ensaios, é estudar a condução da política monetária brasileira com base no arcabouço teórico popularizado por Svensson (1997) e Ball (1999), centrado em um banco central otimizador restrito à estrutura da economia. Nesse sentido, o primeiro ensaio atualiza o trabalho de Aragón e Portugal (2009), buscando identificar as preferências do Banco Central do Brasil (BCB) durante a vigência do Regime de Metas de Inflação (RMI) por meio da calibragem do modelo otimização intertemporal. Os resultados mostram que, enquanto o hiato do produto possui menos de 10% de ponderação nas preferências do BCB, a extensão da amostra de 2007 até 2011 aumenta o peso do objetivo de suavização da taxa de juros. Apesar disso, a meta de inflação permanece preponderante nas preferências da autoridade monetária brasileira. Ademais, expandindo a metodologia para investigar se a trajetória da taxa de juros reflete fear of floating por parte do BCB, os resultados evidenciam que a taxa de câmbio não parece desempenhar papel relevante em seus objetivos. O segundo ensaio objetiva verificar as consequências da consideração da restrição de não negatividade sobre a taxa de juros nominal, ignorada em grande parte da literatura, quando o modelo de otimização da política monetária é aplicado ao caso brasileiro. Para obtenção da solução do modelo restrito recorre-se ao método numérico de colocação (collocation method), proposto por Kato e Nishiyama (2005). A despeito da intuição de que a restrição de não negatividade deve ser irrelevante para a formulação de regras monetárias ótimas em países de inflação moderada para alta, como o Brasil, os resultados encontrados mostram que, mesmo levando em conta os estados pelos quais transitou a economia brasileira nos últimos 12 anos, tal relevância pode ser verificada, mas depende crucialmente dos parâmetros de preferências atribuídos ao banco central. No que diz respeito à identificação das preferências do BCB, um exercício de calibragem produz resultados não conclusivos, com algumas evidências de relevância da restrição de não negatividade. / The purpose of this dissertation, composed of two essays, is to assess the Brazilian monetary policy using the theoretical framework popularized by Svensson (1997) and Ball (1999), based on an optimizer central bank restricted to the structure of the economy. The first essay updates Aragon and Portugal’s (2009) paper, seeking to identify the Central Bank of Brazil’s (CBB) preferences during inflation targeting regime using model calibration. The results show that while the weight on output gap in CBB’s preferences is less than 10%, the sample extension from 1997 to 2011 increases the importance of interest rate smoothing. Nevertheless, inflation stabilization remains predominant in CBB’s objectives. Furthermore, expanding the methodology to check the presence of fear of floating behavior, the results show that exchange rate does not seem to play a relevant role in the monetary authority’s preferences. The second essay aims to verify the consequences of the non-negativity constraint (zero lower bound) on nominal interest rate in the same type of model applied to Brazil. To obtain the solution of the restricted model the numerical collocation method, proposed by Kato and Nishiyama (2005), is adopted. 
Despite the intuition that the non-negativity constraint should be irrelevant to the formulation of optimal monetary rules in countries with moderate to high inflation, such as Brazil, the results show that, even taking into account the states the Brazilian economy has been through during the inflation targeting regime, this relevance can be ascertained, but it depends crucially on the preference parameters imputed to the central bank. Regarding the consequences of the zero lower bound for the identification of the CBB's preferences, a calibration exercise produces inconclusive results, with some evidence of the relevance of the non-negativity constraint.
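As a toy illustration of how the non-negativity constraint enters such an optimization, the sketch below minimizes a one-period-ahead quadratic loss under backward-looking, Ball-type dynamics, with the policy rate restricted to be non-negative; the coefficients, loss weights, target, and the bounded scalar minimization are all illustrative assumptions and do not reproduce the thesis's calibration or its collocation-based dynamic solution.

```python
from scipy.optimize import minimize_scalar

# Backward-looking Ball-type dynamics and quadratic loss (all numbers invented for illustration).
alpha, beta, lam = 0.4, 0.6, 0.8          # Phillips-curve slope, IS slope, output persistence
w_y, w_pi, w_di = 0.1, 1.0, 0.2           # weights on output gap, inflation, rate smoothing
pi_target = 4.5                           # inflation target, in percent

def loss(i, y, pi, i_prev):
    y_next = lam * y - beta * (i - pi)    # IS curve: the real rate moves next period's output gap
    pi_next = pi + alpha * y              # Phillips curve: inflation reacts to the current gap
    return (w_y * y_next**2
            + w_pi * (pi_next - pi_target)**2
            + w_di * (i - i_prev)**2)

def optimal_rate(y, pi, i_prev):
    # The zero lower bound enters only through the lower bound on the policy rate.
    res = minimize_scalar(lambda i: loss(i, y, pi, i_prev), bounds=(0.0, 50.0), method="bounded")
    return res.x

print(optimal_rate(y=-3.0, pi=0.0, i_prev=0.5))    # deep slump, low inflation: the bound binds (about 0)
print(optimal_rate(y=0.5, pi=6.0, i_prev=10.0))    # high inflation: interior solution, bound irrelevant
```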
|
57 |
Politika nízkých úrokových měr a změny v cenách aktiv: Empirická analýza / Low Interest Rates and Asset Price Fluctuations: Empirical Evidence. Ali, Bano. January 2018 (has links)
The thesis focuses on estimating the effect of expansionary monetary policy on asset prices, specifically house and stock prices, as they are of primary importance in financial markets. A structural vector autoregressive model is used, including data for the Euro Area, the United Kingdom, and the United States from 2007 to 2017. Moreover, instead of the short-term nominal interest rate, the shadow policy rate is used to measure the stance of both conventional and unconventional monetary policy. It is useful when central banks' policy rates are at or near zero, as it is not restricted by the zero lower bound. Using both impulse response functions and forecast error variance decomposition, the results suggest that higher interest rates are indeed associated with lower asset prices. This is confirmed by including two different estimates of the shadow rate in the model and observing the effect for the two specific types of assets. More precisely, house prices react almost immediately, showing the most substantial decrease for the United Kingdom, while stock prices increase slightly at first and decrease afterwards, with a similar size of effect for all areas under consideration. Finally, a discussion of how the monetary authority should react to asset price fluctuations is provided, summarizing the vast amount of literature...
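For the general mechanics of such an exercise, the sketch below estimates a reduced-form VAR and computes orthogonalized impulse responses and a forecast error variance decomposition with statsmodels; the file name, variable names, lag selection, and the recursive identification implicit in the default orthogonalization are assumptions made for illustration, not the thesis's exact structural specification.

```python
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.api import VAR

# Hypothetical monthly dataset with a shadow policy rate, house prices, and stock prices
# (the CSV file and column names are assumptions for illustration).
data = pd.read_csv("macro_monthly.csv", index_col=0, parse_dates=True)
endog = data[["shadow_rate", "house_prices", "stock_prices"]].dropna()

model = VAR(endog)
results = model.fit(maxlags=12, ic="aic")          # lag order chosen by AIC

irf = results.irf(24)                              # impulse responses, 24 periods ahead
irf.plot(orth=True, impulse="shadow_rate")         # responses to an orthogonalized shadow-rate shock
plt.show()

fevd = results.fevd(24)                            # forecast error variance decomposition
fevd.summary()
```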
|
60 |
Modelling income, wealth, and expenditure data by use of Econophysics. Oltean, Elvis. January 2016 (has links)
In this thesis, we identify several distributions from physics and study their applicability to phenomena such as the distribution of income, wealth, and expenditure. Firstly, we apply the logistic distribution to these data and find that it fits the annual data very well over the entire income interval, including the upper income segment of the population. Secondly, we apply the Fermi-Dirac distribution to these data. We seek to explain possible correlations and analogies between economic systems and statistical thermodynamic systems, and to explain their behaviour and properties when physical variables are correlated with macroeconomic aggregates and indicators. We then draw some analogies between the parameters of the Fermi-Dirac distribution and macroeconomic variables. Thirdly, as complex systems are often modelled using polynomial distributions, we apply polynomials to the annual data sets and find that they also fit the entire income interval very well. Fourthly, we develop a new methodology to approach income, wealth, and expenditure distributions dynamically, in analogy with dynamical complex systems; this methodology was applied to different time intervals consisting of consecutive years, up to 35 years. Finally, we develop a mathematical model based on a Hamiltonian that maximises a utility function applied to the Ramsey model, using Fermi-Dirac and polynomial utility functions, and we find some theoretical connections with time preference theory. We apply these distributions to a large pool of data from countries with different levels of development, using different methods for the calculation of income, wealth, and expenditure.
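A minimal sketch of the fitting step, in the spirit of the approach described, is given below: a Fermi-Dirac-type curve is fitted to the complementary cumulative income distribution with scipy's curve_fit; the synthetic lognormal sample, the grid of evaluation points, and the parameter names are illustrative assumptions, and the quality of this particular fit is not the point.

```python
import numpy as np
from scipy.optimize import curve_fit

def fermi_dirac_upper_share(x, mu, T):
    """Share of the population with income above x, modelled with a Fermi-Dirac form."""
    z = np.clip((x - mu) / T, -50.0, 50.0)      # clip the exponent to avoid overflow
    return 1.0 / (np.exp(z) + 1.0)

# Synthetic 'income' sample standing in for survey data (illustrative only).
rng = np.random.default_rng(1)
incomes = rng.lognormal(mean=10.0, sigma=0.5, size=10_000)

x_grid = np.quantile(incomes, np.linspace(0.01, 0.99, 50))
upper_share = np.array([(incomes > x).mean() for x in x_grid])

params, _ = curve_fit(fermi_dirac_upper_share, x_grid, upper_share,
                      p0=[np.median(incomes), np.std(incomes)])
print("mu (analogue of the chemical potential):", params[0])
print("T (analogue of the temperature):", params[1])
```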
|