  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

The efficiency of monetary policy during the zero lower bound period / Efektivnost monetární politiky při nulových sazbách

Mandok, Denis January 2016 (has links)
This thesis explores the efficiency of monetary policy under the zero lower bound condition. First, the role of monetary policy is defined and criteria for judging its effectiveness are introduced. The reactions of the central banks of the USA, Japan, and the euro zone are then examined. The thesis finds that monetary policy can be effective at the zero lower bound. Finally, an idea for improving the current monetary regime is presented.
32

Received Signal Strength-Based Localization of Non-Collaborative Emitters in the Presence of Correlated Shadowing

Taylor, Ryan Charles 23 May 2013 (has links)
RSS-based localization is a promising solution for estimating the position of a non-collaborative emitter using a network of collaborative sensors. This paper examines RSS-based localization and differential RSS (DRSS) localization in the presence of correlated shadowing with no knowledge of the emitter's reference power. A new non-linear least squares (NLS) DRSS location estimator that uses correlated shadowing information to improve performance is introduced. The existing maximum likelihood (ML) estimator and Cramér-Rao lower bound (CRLB) for RSS-based localization do not account for correlated shadowing. This paper presents a new ML estimator and CRLB for RSS-based localization that account for spatially correlated shadowing and imperfect knowledge of the emitter's reference power. The performance of the ML estimator is compared to the CRLB under different simulation conditions. The ML estimator is shown to be biased when the number of sensors is small or the shadowing variance is large. The effects of correlated shadowing on an RSS-based location estimator are thoroughly examined. It is proven that an increase in correlated shadowing will improve the accuracy of an RSS-based location estimator. It is also demonstrated that the ideal sensor geometry which minimizes the average error becomes more compact as correlation is increased. A geometric dilution of precision (GDOP) formulation is derived that provides a metric for the effect of the position of the sensors and emitter on the location estimator performance. A measurement campaign is conducted that characterizes the path loss at 3.4 GHz. The measurements are compared to the log-distance model. The errors between the model and the measurements, which should theoretically be Gaussian, have a kurtosis value of 1.31. The errors were determined to be spatially correlated with an average correlation coefficient of 0.5 at a distance of 160 meters.
The performance of the location estimators in simulation is compared to the performance using measurements from the measurement campaign. The performance is very similar, with the largest difference between the simulated and actual results in the ML estimator. In both cases, the new NLS DRSS estimator outperformed the other estimators and achieved the CRLB. / Master of Science
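As a rough illustration of the DRSS idea above, the following numpy sketch simulates log-distance RSS measurements with i.i.d. (uncorrelated) shadowing and localizes the emitter by brute-force nonlinear least squares on the differenced measurements. All positions and parameters here are invented for illustration, and the thesis's correlated-shadowing weighting is not modeled:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (all positions and parameters invented for illustration).
sensors = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 0], [0, 50]], float)
emitter = np.array([60.0, 40.0])
P0, n_pl, sigma = 10.0, 3.0, 2.0  # reference power (dB), path-loss exponent, shadowing std (dB)

def rss(pos, points):
    d = np.linalg.norm(points - pos, axis=1)
    d = np.maximum(d, 1e-9)          # guard against a grid point landing on a sensor
    return P0 - 10 * n_pl * np.log10(d)

obs = rss(emitter, sensors) + rng.normal(0, sigma, len(sensors))

# DRSS: differencing against sensor 0 cancels the unknown reference power P0.
drss_obs = obs[1:] - obs[0]

# Nonlinear least squares by brute-force grid search (a real implementation
# would use a Gauss-Newton or trust-region solver instead).
best, best_err = None, np.inf
for x in np.linspace(0, 100, 201):
    for y in np.linspace(0, 100, 201):
        pred = rss(np.array([x, y]), sensors)
        err = np.sum((drss_obs - (pred[1:] - pred[0])) ** 2)
        if err < best_err:
            best, best_err = (x, y), err

print(best)  # should land in the vicinity of the true emitter at (60, 40)
```

Note that without the differencing step, the unknown reference power would have to be estimated jointly with the position.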
33

Optimizing Linear Queries Under Differential Privacy

Li, Chao 01 September 2013 (has links)
Private data analysis on statistical data has been addressed in much recent literature. The goal of such analysis is to measure statistical properties of a database without revealing information about the individuals who participate in it. Differential privacy is a rigorous privacy definition that protects individual information using output perturbation: a differentially private algorithm produces statistically indistinguishable outputs whether or not the database contains a tuple corresponding to a given individual. It is straightforward to construct differentially private algorithms for many common tasks, and there are published algorithms to support various tasks under differential privacy. However, methods to design error-optimal algorithms for most non-trivial tasks are still unknown. In particular, we are interested in error-optimal algorithms for sets of linear queries. A linear query is a sum of counts of tuples that satisfy a certain condition, which covers many aggregation tasks including count, sum and histogram. We present the matrix mechanism, a novel mechanism for answering sets of linear queries under differential privacy. The matrix mechanism makes a clear distinction between the set of queries submitted by users, called the query workload, and an alternative set of queries to be answered under differential privacy, called the query strategy. The answer to the query workload can then be computed from the answer to the query strategy. Given a query workload, the query strategy determines the distribution of the output noise, and the power of the matrix mechanism comes from adaptively choosing a query strategy that minimizes the output noise. Our analyses also provide a theoretical measure of the quality of different strategies for a given workload. This measure is then used in exact and approximate formulations of the optimization problem that outputs the error-optimal strategy.
We present a lower bound on the error of answering each workload under the matrix mechanism. The bound reveals that the hardness of a query workload is related to the spectral properties of the workload when it is represented in matrix form. In addition, we design an approximation algorithm that generates strategies which outperform state-of-the-art mechanisms under (epsilon, delta)-differential privacy. Those strategies lead to more accurate data analysis while preserving a rigorous privacy guarantee. Moreover, we also combine the matrix mechanism with a novel data-dependent algorithm, which achieves differential privacy by adding noise that is adapted to the input data and to the given query workload.
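The workload/strategy distinction can be sketched in a few lines of numpy. This toy example uses an invented prefix-sum workload and the identity matrix as the strategy (not an optimized strategy produced by the matrix mechanism), just to show how workload answers are reconstructed from noisy strategy answers:

```python
import numpy as np

rng = np.random.default_rng(1)

x = np.array([20.0, 5.0, 12.0, 8.0])   # private histogram counts (toy data)
eps = 1.0

# Workload W: all prefix-sum queries over the 4 counts (an illustrative choice).
W = np.tril(np.ones((4, 4)))

# Baseline: answer the workload directly.  Under epsilon-differential privacy
# the Laplace scale is set by the workload sensitivity, its max column L1 norm.
sens_W = np.abs(W).sum(axis=0).max()
direct = W @ x + rng.laplace(0, sens_W / eps, 4)

# Matrix mechanism with the identity strategy A = I: answer the strategy
# queries with noise scaled to A's (smaller) sensitivity, then reconstruct
# the workload answers as W A^+ y.
A = np.eye(4)
sens_A = np.abs(A).sum(axis=0).max()
y = A @ x + rng.laplace(0, sens_A / eps, 4)
recon = W @ np.linalg.pinv(A) @ y

print(sens_W, sens_A)  # 4.0 vs 1.0: the strategy queries need far less noise each
```

The adaptive choice of A (the heart of the mechanism) is an optimization problem over strategy matrices and is not shown here.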
34

Expansionary contractions and fiscal free lunches: too good to be true?

McManus, R., Ozkan, F.G., Trzeciakiewicz, Dawid 11 September 2017 (has links)
This paper builds a framework to jointly examine the possibility of both 'expansionary fiscal contractions' (austerity increasing output) and 'fiscal free lunches' (expansions reducing government debt), arguments supported by the austerity and stimulus camps, respectively, in recent debates. We propose a new metric quantifying the budgetary implications of fiscal action, a key aspect of fiscal policy, particularly at the monetary zero lower bound. We find that austerity needs to be highly persistent and credible to be expansionary, and stimulus temporary, responsive and well-targeted in order to lower debt. We conclude that neither is likely, especially during periods of economic distress.
35

Aspects of Interface between Information Theory and Signal Processing with Applications to Wireless Communications

Park, Sang Woo 14 March 2013 (has links)
This dissertation studies several aspects of the interface between information theory and signal processing. Several new and existing results in information theory are examined from the perspective of signal processing; conversely, some fundamental results in signal processing and statistics are studied from the information-theoretic viewpoint. The first part of this dissertation focuses on illustrating the equivalence between Stein's identity and De Bruijn's identity, and on providing two extensions of De Bruijn's identity. First, it is shown that Stein's identity is equivalent to De Bruijn's identity in additive noise channels under specific conditions. Second, for arbitrary but fixed input and noise distributions in an additive noise channel model, the first derivative of the differential entropy is expressed as a function of the posterior mean, and the second derivative of the differential entropy is expressed in terms of a function of the Fisher information. Several applications across a number of fields, such as statistical estimation theory, signal processing and information theory, are presented to support the usefulness of the results developed in Section 2. The second part of this dissertation makes three contributions. First, a connection is illustrated between the result proposed by Stoica and Babu and recent information-theoretic results: the worst additive noise lemma and the isoperimetric inequality for entropies. Second, information-theoretic and estimation-theoretic justifications are presented for the fact that the Gaussian assumption leads to the largest Cramér-Rao lower bound (CRLB). Third, a slight extension of this result to the more general framework of correlated observations is shown. The third part of this dissertation concentrates on deriving an alternative proof for an extremal entropy inequality (EEI) originally proposed by Liu and Viswanath.
Compared with the proofs presented by Liu and Viswanath, the proposed alternative proof is simpler, more direct, and more information-theoretic. An additional application of the extremal inequality is also provided. Moreover, this section illustrates not only the usefulness of the EEI but also a novel method for approaching applications such as the capacity of the vector Gaussian broadcast channel, the lower bound on the achievable rate for distributed source coding with a single quadratic distortion constraint, and the secrecy capacity of the Gaussian wire-tap channel. Finally, a novel, unifying variational approach for proving fundamental information-theoretic inequalities is proposed. Fundamental results such as the maximization of differential entropy, the minimization of Fisher information (the Cramér-Rao inequality), the worst additive noise lemma, the entropy power inequality (EPI), and the EEI are interpreted as functional problems and proved within the framework of the calculus of variations. Several extensions and applications of the proposed results are briefly mentioned.
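For reference, the two identities at the heart of the first part can be stated in their standard scalar forms (as usually given in the literature; the thesis works with more general conditions):

```latex
% Stein's identity for Gaussian $X \sim \mathcal{N}(\mu, \sigma^2)$ and
% sufficiently smooth $g$ with $\mathbb{E}|g'(X)| < \infty$:
\mathbb{E}\bigl[(X - \mu)\, g(X)\bigr] = \sigma^{2}\, \mathbb{E}\bigl[g'(X)\bigr]

% De Bruijn's identity for $Y_t = X + \sqrt{t}\, Z$, with
% $Z \sim \mathcal{N}(0,1)$ independent of $X$:
\frac{\mathrm{d}}{\mathrm{d}t}\, h(Y_t) = \frac{1}{2}\, J(Y_t)
```

Here h denotes differential entropy and J the Fisher information of the location family, J(Y) = E[(∂/∂y log f_Y(Y))²]; the equivalence discussed above connects these two statements in additive noise channels.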
36

Turing machine algorithms and studies in quasi-randomness

Kalyanasundaram, Subrahmanyam 09 November 2011 (has links)
Randomness is an invaluable resource in theoretical computer science. However, pure random bits are hard to obtain. Quasi-randomness is a tool that has been widely used to eliminate or reduce the randomness in randomized algorithms. In this thesis, we study some aspects of quasi-randomness in graphs. Specifically, we provide an algorithm and a lower bound for two different kinds of regularity lemmas. Our algorithm for Frieze-Kannan (FK) regularity is derived using a spectral characterization of quasi-randomness. We use a similar spectral connection to answer an open question about quasi-random tournaments. We then provide a "Wowzer"-type lower bound (on the number of parts required) for the strong regularity lemma. Finally, we study the derandomization of complexity classes using Turing machine simulations. 1. Connections between quasi-randomness and graph spectra. Quasi-random (or pseudo-random) objects are deterministic objects that behave almost like truly random objects. These objects have been widely studied in various settings (graphs, hypergraphs, directed graphs, set systems, etc.). In many cases, quasi-randomness is closely related to the spectral properties of the combinatorial object under study. In this thesis, we discover spectral characterizations of quasi-randomness in two different cases to solve open problems. A Deterministic Algorithm for Frieze-Kannan Regularity: The Frieze-Kannan regularity lemma asserts that any given graph of large enough size can be partitioned into a number of parts such that, across parts, the graph is quasi-random. It was unknown whether a deterministic algorithm could produce a partition satisfying the conditions of the Frieze-Kannan regularity lemma in sub-cubic time. In this thesis, we answer this question by designing an O(n^w) time algorithm for constructing such a partition, where w is the exponent of fast matrix multiplication.
Even Cycles and Quasi-Random Tournaments: Chung and Graham provided several equivalent characterizations of quasi-randomness in tournaments. One of them concerns the number of "even" cycles, where even is defined in the following sense: a cycle is said to be even if, when walking along it, an even number of edges point in the wrong direction. Chung and Graham showed that if close to half of the 4-cycles in a tournament T are even, then T is quasi-random. They asked whether the same statement is true if, instead of 4-cycles, we consider k-cycles for an even integer k. We resolve this open question by showing that for every fixed even integer k ≥ 4, if close to half of the k-cycles in a tournament T are even, then T must be quasi-random. 2. A Wowzer-type lower bound for the strong regularity lemma. The regularity lemma of Szemerédi asserts that one can partition every graph into a bounded number of quasi-random bipartite graphs. Alon, Fischer, Krivelevich and Szegedy obtained a variant of the regularity lemma that allows arbitrary control over this measure of quasi-randomness. However, their proof was only guaranteed to produce a partition where the number of parts is given by the Wowzer function, the iterated version of the Tower function. We show here that a bound of this type is unavoidable, by constructing a graph H with the property that even if one wants only very mild control over the quasi-randomness of a regular partition, any such partition of H must have a number of parts given by a Wowzer-type function. 3. How fast can we deterministically simulate nondeterminism? We study an approach to derandomizing complexity classes using Turing machine simulations. We look at the problem of deterministically counting the exact number of accepting computation paths of a given nondeterministic Turing machine. We provide a deterministic algorithm which runs in time roughly O(sqrt(S)), where S is the size of the configuration graph.
The best of the previously known methods required time linear in S. Our result implies a simulation of probabilistic time classes like PP, BPP and BQP in the same running time. This is an improvement over the currently best known simulation by van Melkebeek and Santhanam.
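The notion of even cycles above can be made concrete with a short sketch. The code below builds a random tournament (which is quasi-random with high probability) and checks that close to half of its 4-cycles are even; the size and seed are arbitrary illustrative choices:

```python
import itertools
import random

random.seed(0)
n = 40

# Random tournament on n vertices: orient each edge of K_n by a fair coin.
beats = {}
for u, v in itertools.combinations(range(n), 2):
    beats[(u, v)] = random.random() < 0.5   # True means u -> v, else v -> u

def forward(u, v):
    """True if the tournament edge between u and v points u -> v."""
    return beats[(u, v)] if u < v else not beats[(v, u)]

even = total = 0
for quad in itertools.combinations(range(n), 4):
    a = quad[0]
    # Each 4-set yields 3 distinct cyclic orderings (up to rotation/reflection).
    for rest in itertools.permutations(quad[1:]):
        if rest[0] < rest[-1]:   # one representative per reflection pair
            cyc = (a,) + rest
            # Walk the cycle; count edges pointing against the walk direction.
            wrong = sum(not forward(cyc[i], cyc[(i + 1) % 4]) for i in range(4))
            even += (wrong % 2 == 0)
            total += 1

ratio = even / total
print(ratio)  # for a quasi-random tournament this should be close to 1/2
```

Reversing the walk direction turns a count of k wrong edges into 4 - k, so evenness is well-defined per undirected cyclic ordering, which is why one representative per reflection pair suffices.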
37

Performance and Implementation Aspects of Nonlinear Filtering

Hendeby, Gustaf January 2008 (has links)
Nonlinear filtering is an important standard tool for information and sensor fusion applications, e.g., localization, navigation, and tracking. It is an essential component in surveillance systems and of increasing importance for standard consumer products, such as cellular phones with localization, car navigation systems, and augmented reality. This thesis addresses several issues related to nonlinear filtering, including performance analysis of filtering and detection, algorithm analysis, and various implementation details. The most commonly used measure of filtering performance is the root mean square error (RMSE), which is bounded from below by the Cramér-Rao lower bound (CRLB). This thesis presents a methodology to determine the effect different noise distributions have on the CRLB. This leads up to an analysis of the intrinsic accuracy (IA), the informativeness of a noise distribution. For linear systems the resulting expressions are direct and can be used to determine whether a problem is feasible or not, and to indicate the efficacy of nonlinear methods such as the particle filter (PF).
A similar analysis is used for change detection performance analysis, which once again shows the importance of IA. A problem with the RMSE evaluation is that it captures only one aspect of the resulting estimate, while the distributions of the estimates can differ substantially. To address this, the Kullback divergence has been evaluated, demonstrating the shortcomings of pure RMSE evaluation. Two estimation algorithms have been analyzed in more detail: the Rao-Blackwellized particle filter (RBPF), referred to by some authors as the marginalized particle filter (MPF), and the unscented Kalman filter (UKF). The RBPF analysis leads to a new way of presenting the algorithm, thereby making it easier to implement. In addition, the presentation can give new intuition for the RBPF as a stochastic Kalman filter bank. In the analysis of the UKF the focus is on the unscented transform (UT). The results include several simulation studies and a comparison with the Gauss approximation of the first and second order in the limit case. This thesis presents an implementation of a parallelized PF and outlines an object-oriented framework for filtering. The PF has been implemented on a graphics processing unit (GPU), i.e., a graphics card. The GPU is an inexpensive parallel computational resource available in most modern computers, yet rarely used to its full potential. Being able to implement the PF in parallel makes possible new applications where speed and good performance are important. The object-oriented filtering framework provides the flexibility and performance needed for large-scale Monte Carlo simulations using modern software design methodology. It can also help turn a prototype into a finished product efficiently.
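A minimal bootstrap particle filter, the algorithm that the GPU implementation above parallelizes, can be sketched in numpy for a toy scalar model. All model parameters here are invented for illustration, and no Rao-Blackwellization is attempted:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy scalar state-space model (parameters invented for illustration):
#   x_k = 0.9 x_{k-1} + w_k,   y_k = x_k + e_k
T, N = 50, 500
q_std, r_std = 1.0, 0.5
x_true = np.zeros(T)
for k in range(1, T):
    x_true[k] = 0.9 * x_true[k - 1] + rng.normal(0, q_std)
y = x_true + rng.normal(0, r_std, T)

# Bootstrap particle filter: propagate, weight by likelihood, resample.
particles = rng.normal(0, 1, N)
est = np.zeros(T)
for k in range(T):
    if k > 0:
        particles = 0.9 * particles + rng.normal(0, q_std, N)
    logw = -0.5 * ((y[k] - particles) / r_std) ** 2
    w = np.exp(logw - logw.max())          # stabilize before normalizing
    w /= w.sum()
    est[k] = np.sum(w * particles)         # weighted posterior mean
    particles = rng.choice(particles, size=N, p=w)  # multinomial resampling

rmse = np.sqrt(np.mean((est - x_true) ** 2))
print(rmse)
```

The propagation and weighting steps are independent across particles, which is what makes the filter a good fit for data-parallel hardware; the resampling step is the part that requires care in a parallel implementation.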
38

Análise de técnicas de localização em redes de sensores sem fio / Analysis of localization techniques in wireless sensor networks

Moreira, Rafael Barbosa 26 February 2007 (has links)
Advisor: Paulo Cardieri / Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: This dissertation investigates the localization problem in wireless sensor networks. A performance analysis of localization techniques is presented, carried out through simulations and through the Cramér-Rao lower bound on the localization error.
The effects of several parameters on the localization performance are investigated, including network topology and the propagation environment. The simulation analysis considered localization techniques based on received signal strength observations, while the Cramér-Rao analysis also considered techniques based on the time of arrival and angle of arrival of the received signal. This work also investigated how the Cramér-Rao bound is affected by the bias of the distance estimates used in received signal strength localization. This bias is usually neglected in the literature, which may lead to imprecision in the Cramér-Rao bound computation under certain propagation conditions. A new expression for this bound was derived for a simple estimator case, now taking the bias into account. Building on this development, a new expression for the Cramér-Rao lower bound was also derived that considers the effects of lognormal fading and Nakagami fading in the propagation channel. / Master's / Telecommunications and Telematics / Master in Electrical Engineering
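For the simplest case discussed above (uncorrelated log-normal shadowing, known reference power, no bias), the Cramér-Rao bound for RSS localization reduces to a small Fisher information computation. The numpy sketch below uses invented geometry and channel parameters:

```python
import numpy as np

# CRLB sketch for RSS-based localization with i.i.d. log-normal shadowing
# (uncorrelated, unbiased, known reference power; illustrative numbers only).
sensors = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], float)
p = np.array([60.0, 40.0])         # emitter position
n_pl, sigma = 3.0, 4.0             # path-loss exponent, shadowing std in dB

c = 10 * n_pl / np.log(10)
F = np.zeros((2, 2))
for s in sensors:
    d2 = np.sum((p - s) ** 2)
    g = -c * (p - s) / d2          # gradient of the mean RSS w.r.t. position
    F += np.outer(g, g) / sigma**2 # Fisher information contribution of one sensor

crlb = np.trace(np.linalg.inv(F))  # lower bound on total position error variance
print(np.sqrt(crlb))               # RMSE bound in meters
```

Accounting for estimation bias, as done in the dissertation, modifies the bound through the derivative of the bias, which this unbiased sketch omits.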
39

Spotřeba, cenová očekávání a deflačně-recesní spirála / Consumption, price expectations and the deflationary-recessionary spiral

Plný, Petr January 2017 (has links)
This thesis examines the relationship between price expectations and current consumption. In particular, it asks whether the postponement of final consumption expenditure by households in response to declining price expectations, a possible trigger of a deflationary-recessionary spiral, is consistent with economic theory and practice. Based on this, appropriate economic policy recommendations can be drawn. An analysis within a two-period intertemporal consumer model, extended with inflation and risk, confirms this hypothesis: price expectations positively affect current consumption through the intertemporal substitution effect of real interest rate changes. However, certain assumptions must hold. In particular, the economy must be in a fixed nominal interest rate environment, the substitution effect must not be offset by the effect of a change in expected real disposable income or by the income effect of the change in the real interest rate, and households must have a disposable income high enough to afford postponing consumption. These findings coincide with the conclusions of the empirical analyses discussed in this thesis.
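The intertemporal substitution channel can be illustrated with a toy two-period log-utility model (a drastic simplification of the thesis's model; all numbers are invented): at a fixed nominal rate, higher expected inflation lowers the real rate and raises current consumption.

```python
beta = 0.96          # time preference (illustrative)
i = 0.0              # nominal rate fixed at the zero lower bound
y1, y2 = 100.0, 100.0  # income in periods 1 and 2 (illustrative)

def c1(pi_expected):
    """Current consumption in a two-period log-utility model (toy sketch)."""
    r = (1 + i) / (1 + pi_expected) - 1   # Fisher relation: ex-ante real rate
    wealth = y1 + y2 / (1 + r)            # present value of lifetime income
    return wealth / (1 + beta)            # log utility implies c1 = W / (1 + beta)

low, high = c1(-0.02), c1(0.02)   # deflation vs. inflation expectations
print(low, high)  # higher expected inflation yields higher current consumption
```

This sketch captures only the substitution effect; the income effects and the risk extension that the thesis conditions on are deliberately left out.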
40

Měnová politika ČNB v situaci zero lower bound / Monetary policy of the CNB at the zero lower bound

Bohatec, Martin January 2014 (has links)
This thesis deals with the CNB's interventions of November 2013 in favor of exchange rate depreciation. The theoretical part presents alternative tools of unconventional monetary policy when the zero lower bound is binding. The thesis then describes the experience of other central banks that have responded to low interest rates. The CNB's intervention is placed in the context of the Czech financial system and previous economic developments. The thesis analyses alternative instruments to which the CNB might have resorted. Based on the analyzed data, it concludes that despite the long-term maintenance of the weakened koruna above the level of the announced exchange rate pledge, the sufficiently loose monetary policy did not translate into the desired price increase.
