41 |
Měnová intervence ČNB ve světovém kontextu / CNB Monetary Intervention in Global Context
Bejdová, Markéta, January 2014
This diploma thesis describes the intervention of the Czech National Bank in November 2013 in terms of current economic theory and practice, as well as its justification. The first, theoretical part presents the current literature dealing with the zero lower bound (ZLB). It is divided into a general description of the ZLB, its prevention, and the possibilities of monetary policy at the bound. It then describes the experiences of Japan and other countries that reached the ZLB, and places the Czech National Bank's intervention in this context. The hypothesis that the intervention was justified is tested with linear and nonlinear Taylor rules. The estimation is carried out by the least squares method using quarterly data from the CSO and the CNB. The results of the model confirm the hypothesis: the optimal monetary policy interest rate would have fallen below zero.
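As a rough illustration of the estimation step described above (not the thesis's actual specification or data), a reduced-form linear Taylor rule can be fitted by ordinary least squares; the file and column names below are hypothetical placeholders.

```python
# Minimal sketch: fitting a reduced-form linear Taylor rule by ordinary least squares.
# The data file and column names are hypothetical; the thesis's actual specification
# (and its nonlinear variant) may differ.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("cz_quarterly.csv")  # hypothetical quarterly CSO/CNB dataset

# i_t = c + a*(pi_t - pi_target) + b*output_gap_t + e_t
X = sm.add_constant(pd.DataFrame({
    "infl_gap": df["inflation"] - df["inflation_target"],
    "output_gap": df["output_gap"],
}))
y = df["policy_rate"]

fit = sm.OLS(y, X).fit()
print(fit.summary())

# The fitted rule's implied rate over the intervention period can then be inspected;
# values below zero would support the justification tested in the thesis.
print(fit.predict(X).tail())
```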
|
42 |
Studies in Multiple-Antenna Wireless Communications
Peel, Christian Bruce, 27 January 2004
Wireless communications systems are used today in a variety of milieux, with a recurring theme: users and applications regularly require higher throughput. Multiple antennas enable higher throughput and/or more robust performance than single-antenna communications, with no increase in power or frequency bandwidth. Systems are required which achieve the full potential of this "space-time" communication channel under the significant challenges of time-varying fading, multiple users, and the choice of appropriate coding schemes. This dissertation is focused on solutions to these problems. For the single-user case, there are many well-known coding techniques available; in the first part of this dissertation, the performance of two of these methods is analyzed.
Trained and differential modulation are simple coding techniques for single-user time-varying channels. The performance of these coding methods is characterized for a channel having a constant specular component plus a time-varying diffuse component. A first-order auto-regressive model is used to characterize diffuse channel coefficients that vary from symbol to symbol, and is shown to lead to an effective SNR that decreases with time. A lower bound on the capacity of trained modulation is found for the specular/diffuse channel. This bound is maximized over the training length, training frequency, training signal, and training power. Trained modulation is shown to have higher capacity than differential coding, despite the effective SNR penalty of trained modulation versus differential methods.
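A minimal numerical sketch of the channel model just described, with a constant specular part and an AR(1) diffuse part, is given below; the parameter values are illustrative, and the growing mismatch between the channel and its training-time estimate is what drives the effective SNR loss.

```python
# Minimal sketch of a specular-plus-diffuse channel whose diffuse part follows a
# first-order auto-regressive (AR(1)) process. All parameter values are illustrative;
# the growing mismatch with the training-time channel is what drives the effective
# SNR loss discussed above.
import numpy as np

rng = np.random.default_rng(0)
T, trials = 50, 20000
alpha = 0.98            # AR(1) correlation between consecutive symbols
spec_frac = 0.3         # fraction of channel power in the constant specular part

spec = np.sqrt(spec_frac)                                    # constant specular component
d = np.sqrt(1 - spec_frac) * rng.standard_normal(trials)     # diffuse component at t = 0
h0 = spec + d                                                # channel seen at training time
mse = []
for t in range(T):
    # evolve the diffuse part; the specular part stays fixed
    d = alpha * d + np.sqrt((1 - alpha**2) * (1 - spec_frac)) * rng.standard_normal(trials)
    mse.append(np.mean(((spec + d) - h0) ** 2))              # staleness of the trained estimate

print("estimate MSE at lags 1, 10, 50:", mse[0], mse[9], mse[-1])
```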
The second part of the dissertation considers the multi-user, multi-antenna channel, for which capacity-approaching codes were previously unavailable. Precoding with the channel inverse is shown to provide capacity that approaches a constant as the number of users and antennas simultaneously increase. To overcome this limitation, a simple encoding algorithm is introduced that operates close to capacity at sum-rates of tens of bits/channel-use. The algorithm is a variation on channel inversion that regularizes the inverse and uses a "sphere encoder" to perturb the data to reduce the energy of the transmitted signal. Simulation results are presented which support our analysis and algorithm development.
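The following sketch contrasts plain channel inversion with the regularized inverse for a square multi-user channel, assuming i.i.d. Rayleigh fading and the commonly cited regularization alpha = K/rho; the sphere-encoder perturbation step is omitted, so this shows only the first half of the scheme described above.

```python
# Sketch contrasting plain channel inversion with regularized inversion for a square
# multi-user channel (i.i.d. Rayleigh fading, alpha = K/rho). Plain inversion needs
# very large transmit energy when H is ill-conditioned; regularization keeps it
# bounded. The sphere-encoder perturbation step is intentionally omitted here.
import numpy as np

rng = np.random.default_rng(1)
K, rho, trials = 10, 10.0, 5000                   # users/antennas, SNR, Monte Carlo runs
energy = {"plain": 0.0, "regularized": 0.0}

for _ in range(trials):
    H = (rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))) / np.sqrt(2)
    s = rng.choice([-1.0, 1.0], size=K).astype(complex)       # BPSK symbols, one per user
    for name, alpha in [("plain", 0.0), ("regularized", K / rho)]:
        x = H.conj().T @ np.linalg.solve(H @ H.conj().T + alpha * np.eye(K), s)
        energy[name] += np.linalg.norm(x) ** 2 / trials       # energy needed before scaling

print(energy)   # plain inversion shows a far larger (in theory unbounded) mean energy
```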
|
43 |
An Empirical Study on the Reversal Interest Rate / En empirisk studie på brytpunktsräntan
Berglund, Pontus; Kamangar, Daniel, January 2020
Previous research suggests that a policy interest rate cut below the reversal interest rate reverses the intended effect of monetary policy and becomes contractionary for lending. This paper is an empirical investigation into whether the reversal interest rate was breached in the Swedish negative interest rate environment between February 2015 and July 2016. We find that banks with a greater reliance on deposit funding were adversely affected by the negative interest rate environment, relative to other banks. This is because deposit rates are constrained by a zero lower bound, since banks are reluctant to introduce negative deposit rates for fear of deposit withdrawals. We show with a difference-in-differences approach that the most affected banks reduced loans to households and raised 5 year mortgage lending rates, as compared to the less affected banks, in the negative interest rate environment. These banks also experienced a drop in profitability, suggesting that the zero lower bound on deposits caused the lending spread of banks to be squeezed. However, we do not find evidence that the reversal rate has been breached. / Tidigare forskning menar att en sänkning av styrräntan under brytpunktsräntan gör att penningpolitiken får motsatt effekt och blir åtstramande för utlåning. Denna rapport är en empirisk studie av huruvida brytpunktsräntan passerades i det negativa ränteläget mellan februari 2015 och juli 2016 i Sverige. Våra resultat pekar på att banker vars finansiering till större del bestod av inlåning påverkades negativt av den negativa styrräntan, relativt till andra banker. Detta beror på att inlåningsräntor är begränsade av en lägre nedre gräns på noll procent. Banker är ovilliga att introducera negativa inlåningsräntor för att undvika att kunder tar ut sina insättningar och håller kontanter istället. Vi visar med en "difference-in-differences"-analys att de mest påverkade bankerna minskade lån till hushåll och höjde bolåneräntor med 5-åriga löptider, relativt till mindre påverkade banker, som konsekvens av den negativa styrräntan. Dessa banker upplevde även en minskning av lönsamhet, vilket indikerar att noll som en nedre gräns på inlåningsräntor bidrog till att bankernas räntemarginaler minskade. Vi hittar dock inga bevis på att brytpunktsräntan har passerats.
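A stylized version of the difference-in-differences specification might look as follows; the panel file, column names, and treatment definition are hypothetical stand-ins rather than the thesis's actual data.

```python
# Stylized difference-in-differences specification. The panel file, column names,
# and treatment definition (above-median deposit funding share) are hypothetical
# stand-ins for the data actually used in the thesis.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("bank_panel.csv")                            # bank-quarter panel (hypothetical)
df["treated"] = (df["deposit_share"] > df["deposit_share"].median()).astype(int)
df["post"] = (df["date"] >= "2015-02-01").astype(int)          # negative-rate period starts Feb 2015

# lending = b0 + b1*treated + b2*post + b3*(treated x post); b3 is the DiD estimate
model = smf.ols("household_lending ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["bank_id"]}
)
print(model.summary())
```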
|
44 |
Statistical Methods for Image Change Detection with Uncertainty
Lingg, Andrew James, January 2012
No description available.
|
45 |
Statistical Analysis of Geolocation Fundamentals Using Stochastic Geometry
O'Lone, Christopher Edward, 22 January 2021
The past two decades have seen a surge in the number of applications requiring precise positioning data. Modern cellular networks offer many services based on the user's location, such as emergency services (e.g., E911), and emerging wireless sensor networks are being used in applications spanning environmental monitoring, precision agriculture, warehouse and manufacturing logistics, and traffic monitoring, just to name a few. In these sensor networks in particular, obtaining precise positioning data of the sensors gives vital context to the measurements being reported. While the Global Positioning System (GPS) has traditionally been used to obtain this positioning data, the deployment locations of these cellular and sensor networks in GPS-constrained environments (e.g., cities, indoors, etc.), along with the need for reliable positioning, require a localization scheme that does not rely solely on GPS. This has led to localization being performed entirely by the network infrastructure itself, or by the network infrastructure aided, in part, by GPS.
In the literature, benchmarking localization performance in these networks has traditionally been done in a deterministic manner. That is, for a fixed setup of anchors (nodes with known location) and a target (a node with unknown location) a commonly used benchmark for localization error, such as the Cramer-Rao lower bound (CRLB), can be calculated for a given localization strategy, e.g., time-of-arrival (TOA), angle-of-arrival (AOA), etc. While this CRLB calculation provides excellent insight into expected localization performance, its traditional treatment as a deterministic value for a specific setup is limited.
Rather than trying to gain insight into a specific setup, network designers are more often interested in aggregate localization error statistics within the network as a whole. Questions such as: "What percentage of the time is localization error less than x meters in the network?" are commonplace. In order to answer these types of questions, network designers often turn to simulations; however, these come with many drawbacks, such as lengthy execution times and the inability to provide fundamental insights due to their inherent "black box" nature. Thus, this dissertation presents the first analytical solution with which to answer these questions. By leveraging tools from stochastic geometry, anchor positions and potential target positions can be modeled by Poisson point processes (PPPs). This allows for the CRLB of position error to be characterized over all setups of anchor positions and potential target positions realizable within the network. This leads to a distribution of the CRLB, which can completely characterize localization error experienced by a target within the network, and can consequently be used to answer questions regarding network-wide localization performance. The particular CRLB distribution derived in this dissertation is for fourth-generation (4G) and fifth-generation (5G) sub-6GHz networks employing a TOA localization strategy.
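For intuition, the question answered in closed form by the dissertation can be approximated by brute force: draw anchor layouts from a PPP, compute the TOA position error bound for each draw, and read off the empirical distribution. The sketch below does exactly that, with illustrative parameter values; it is the kind of simulation the analytical CRLB distribution is meant to replace.

```python
# Brute-force stand-in for the analytical result: sample anchor layouts from a PPP,
# compute the TOA position error bound (PEB) for a target at the origin, and read off
# the empirical distribution. Parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
lam = 1e-5           # anchor density [anchors / m^2]
R = 500.0            # radius of the disc of candidate anchors around the target [m]
sigma = 5.0          # ranging error standard deviation [m]
trials, peb = 20000, []

for _ in range(trials):
    n = rng.poisson(lam * np.pi * R**2)                 # PPP: Poisson number of anchors...
    if n < 3:
        continue                                        # need >= 3 anchors for a 2-D TOA fix
    r = R * np.sqrt(rng.uniform(size=n))                # ...placed uniformly in the disc
    th = rng.uniform(0.0, 2.0 * np.pi, size=n)
    anchors = np.c_[r * np.cos(th), r * np.sin(th)]
    u = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)  # unit vectors target -> anchor
    J = (u.T @ u) / sigma**2                            # Fisher information for the position
    peb.append(np.sqrt(np.trace(np.linalg.inv(J))))     # CRLB-based position error bound

peb = np.array(peb)
print("P(position error bound < 10 m) ~", np.mean(peb < 10.0))
```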
Recognizing the tremendous potential that stochastic geometry has in gaining new insight into localization, this dissertation continues by further exploring the union of these two fields. First, the concept of localizability, which is the probability that a mobile is able to obtain an unambiguous position estimate, is explored in a 5G, millimeter wave (mm-wave) framework. In this framework, unambiguous single-anchor localization is possible with either a line-of-sight (LOS) path between the anchor and mobile or, if blocked, then via at least two non-line-of-sight (NLOS) paths. Thus, for a single anchor-mobile pair in a 5G, mm-wave network, this dissertation derives the mobile's localizability over all environmental realizations this anchor-mobile pair is likely to experience in the network. This is done by: (1) utilizing the Boolean model from stochastic geometry, which statistically characterizes the random positions, sizes, and orientations of reflectors (e.g., buildings) in the environment, (2) considering the availability of first-order (i.e., single-bounce) reflections as well as the LOS path, and (3) considering the possibility that reflectors can either facilitate or block reflections. In addition to the derivation of the mobile's localizability, this analysis also reveals that unambiguous localization, via reflected NLOS signals exclusively, is a relatively small contributor to the mobile's overall localizability.
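A simplified Monte Carlo sketch of the blocking ingredient of this Boolean-model analysis is given below: blocker centres form a PPP, each object is a disc of fixed radius, and the LOS path is blocked if any disc meets the anchor-mobile segment. The disc shape, the parameter values, and the closed-form comparison are illustrative simplifications of the dissertation's more general model.

```python
# Simplified Boolean-model sketch: blocker centres form a PPP, each blocker is a disc
# of radius r_obj, and the LOS path over a link of length d is blocked if any disc
# meets the segment between anchor and mobile. Disc-shaped objects and the parameter
# values are illustrative simplifications of the dissertation's model.
import numpy as np

rng = np.random.default_rng(3)
lam, r_obj, d = 3e-4, 5.0, 150.0          # density [1/m^2], object radius [m], link length [m]
trials, los = 50000, 0

for _ in range(trials):
    # PPP restricted to the box that pads the segment (0,0)-(d,0) by r_obj on every side;
    # centres outside this box cannot intersect the segment
    n = rng.poisson(lam * (d + 2 * r_obj) * (2 * r_obj))
    xy = rng.uniform([-r_obj, -r_obj], [d + r_obj, r_obj], size=(n, 2))
    x, y = xy[:, 0], xy[:, 1]
    dist = np.where((x >= 0) & (x <= d), np.abs(y),
                    np.hypot(np.minimum(np.abs(x), np.abs(x - d)), y))
    los += int(n == 0 or dist.min() > r_obj)

print("simulated P(LOS):          ", los / trials)
print("capsule-area formula value:", np.exp(-lam * (2 * r_obj * d + np.pi * r_obj**2)))
```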
Lastly, using this first-order reflection framework developed under the Boolean model, this dissertation then statistically characterizes the NLOS bias present on range measurements. This NLOS bias is a common phenomenon that arises when trying to measure the distance between two nodes via the time delay of a transmitted signal. If the LOS path is blocked, then the extra distance that the signal must travel to the receiver, in excess of the LOS path, is termed the NLOS bias. Due to the random nature of the propagation environment, the NLOS bias is a random variable, and as such, its distribution is sought. As before, assuming NLOS propagation is due to first-order reflections, and that reflectors can either facilitate or block reflections, the distribution of the path length (i.e., absolute time delay) of the first-arriving multipath component (MPC) is derived. This result is then used to obtain the first NLOS bias distribution in the localization literature that is based on the absolute delay of the first-arriving MPC for outdoor time-of-flight (TOF) range measurements. This distribution is shown to match exceptionally well with commonly assumed gamma and exponential NLOS bias models in the literature, which were only attained previously through heuristic or indirect methods. Finally, the flexibility of this analytical framework is utilized by further deriving the angle-of-arrival (AOA) distribution of the first-arriving MPC at the mobile. This distribution gives novel insight into how environmental obstacles affect the AOA and also represents the first AOA distribution, of any kind, derived under the Boolean model.
In summary, this dissertation uses the analytical tools offered by stochastic geometry to gain new insights into localization metrics by performing analyses over the entire ensemble of infrastructure or environmental realizations that a target is likely to experience in a network. / Doctor of Philosophy / The past two decades have seen a surge in the number of applications requiring precise positioning data. Modern cellular networks offer many services based on the user's location, such as emergency services (e.g., E911), and emerging wireless sensor networks are being used in applications spanning environmental monitoring, precision agriculture, warehouse and manufacturing logistics, and traffic monitoring, just to name a few. In these sensor networks in particular, obtaining precise positioning data of the sensors gives vital context to the measurements being reported. While the Global Positioning System (GPS) has traditionally been used to obtain this positioning data, the deployment locations of these cellular and sensor networks in GPS-constrained environments (e.g., cities, indoors, etc.), along with the need for reliable positioning, requires a localization scheme that does not rely solely on GPS. This has lead to localization being performed entirely by the network infrastructure itself, or by the network infrastructure aided, in part, by GPS.
When speaking in terms of localization, the network infrastructure consists of what are called anchors, which are simply nodes (points) with a known location. These can be base stations, WiFi access points, or designated sensor nodes, depending on the network. In trying to determine the position of a target (i.e., a user, or a mobile), various measurements can be made between this target and the anchor nodes in close proximity. These measurements are typically distance (range) measurements or angle (bearing) measurements. Localization algorithms then process these measurements to obtain an estimate of the target position.
The performance of a given localization algorithm (i.e., estimator) is typically evaluated by examining the distance, in meters, between the position estimates it produces vs. the actual (true) target position. This is called the positioning error of the estimator. There are various benchmarks that bound the best (lowest) error that these algorithms can hope to achieve; however, these benchmarks depend on the particular setup of anchors and the target. The benchmark of localization error considered in this dissertation is the Cramer-Rao lower bound (CRLB). To determine how this benchmark of localization error behaves over the entire network, all of the various setups of anchors and the target that would arise in the network must be considered. Thus, this dissertation uses a field of statistics called stochastic geometry to model all of these random placements of anchors and the target, which represent all the setups that can be experienced in the network. Under this model, the probability distribution of this localization error benchmark across the entirety of the network is then derived. This distribution allows network designers to examine localization performance in the network as a whole, rather than just for a specific setup, and allows one to obtain answers to questions such as: "What percentage of the time is localization error less than x meters in the network?"
Next, this dissertation examines a concept called localizability, which is the probability that a target can obtain a unique position estimate. Oftentimes localization algorithms can produce position estimates that congregate around different potential target positions, and thus, it is important to know when algorithms will produce estimates that congregate around a unique (single) potential target position; hence the importance of localizability. In fifth generation (5G), millimeter wave (mm-wave) networks, only one anchor is needed to produce a unique target position estimate if the line-of-sight (LOS) path between the anchor and the target is unimpeded. If the LOS path is impeded, then a unique target position can still be obtained if two or more non-line-of-sight (NLOS) paths are available. Thus, over all possible environmental realizations likely to be experienced in the network by this single anchor-mobile pair, this dissertation derives the mobile's localizability, or in this case, the probability the LOS path or at least two NLOS paths are available. This is done by utilizing another analytical tool from stochastic geometry known as the Boolean model, which statistically characterizes the random positions, sizes, and orientations of reflectors (e.g., buildings) in the environment. Under this model, considering the availability of first-order (i.e., single-bounce) reflections as well as the LOS path, and considering the possibility that reflectors can either facilitate or block reflections, the mobile's localizability is derived. This result reveals the roles that the LOS path and the NLOS paths play in obtaining a unique position estimate of the target.
Using this first-order reflection framework developed under the Boolean model, this dissertation then statistically characterizes the NLOS bias present on range measurements. This NLOS bias is a common phenomenon that arises when trying to measure the distance between two nodes via the time-of-flight (TOF) of a transmitted signal. If the LOS path is blocked, then the extra distance that the signal must travel to the receiver, in excess of the LOS path, is termed the NLOS bias. As before, assuming NLOS propagation is due to first-order reflections and that reflectors can either facilitate or block reflections, the distribution of the path length (i.e., absolute time delay) of the first-arriving multipath component (MPC) (or first-arriving "reflection path") is derived. This result is then used to obtain the first NLOS bias distribution in the localization literature that is based on the absolute delay of the first-arriving MPC for outdoor TOF range measurements. This distribution is shown to match exceptionally well with commonly assumed NLOS bias distributions in the literature, which were only attained previously through heuristic or indirect methods. Finally, the flexibility of this analytical framework is utilized by further deriving the angle-of-arrival (AOA) distribution of the first-arriving MPC at the mobile. This distribution yields the probability that, for a specific angle, the first-arriving reflection path arrives at the mobile at this angle. This distribution gives novel insight into how environmental obstacles affect the AOA and also represents the first AOA distribution, of any kind, derived under the Boolean model.
In summary, this dissertation uses the analytical tools offered by stochastic geometry to gain new insights into localization metrics by performing analyses over all of the possible infrastructure or environmental realizations that a target is likely to experience in a network.
|
46 |
Účinnost nekonvenční měnové politiky na nulové spodní hranici úrokových sazeb: využití DSGE přístupu / The Effectiveness of Unconventional Monetary Policy Tools at the Zero Lower Bound: A DSGE Approach
Malovaná, Simona, January 2014
The central bank is not able to further ease monetary conditions once it exhausts the space for managing the short-term policy rate. It then has to turn its attention to unconventional measures. The thesis provides a discussion of the suitability of different unconventional policy tools in the Czech situation, with foreign exchange (FX) interventions proving to be the most appropriate choice. A New Keynesian small open economy DSGE model estimated for the Czech Republic is enhanced to model the FX interventions and to compare different monetary policy rules at the zero lower bound (ZLB). The thesis provides three main findings. First, the volatility of the real and nominal macroeconomic variables is magnified in response to the domestic demand shock, the foreign financial shock and the foreign inflation shock. Second, the volatility of prices decreases significantly if the central bank adopts a price-level or exchange rate targeting rule. Third, intervening to fix the nominal exchange rate at some particular target, or to correct a misalignment of the real exchange rate from its fundamentals, serves as a good stabilizer of prices, while intervening to smooth nominal exchange rate movements increases the overall macroeconomic volatility at the ZLB.
|
47 |
Diophantine perspectives to the exponential function and Euler’s factorial series
Seppälä, L. (Louna), 30 April 2019
Abstract
The focus of this thesis is on two functions: the exponential function and Euler’s factorial series. By constructing explicit Padé approximations, we are able to improve lower bounds for linear forms in the values of these functions. In particular, the dependence on the height of the coefficients of the linear form will be sharpened in the lower bound.
The first chapter contains some necessary definitions and auxiliary results needed in later chapters. We give precise definitions for a transcendence measure and Padé approximations of the second type. Siegel's lemma will be introduced as a fundamental tool in Diophantine approximation. A brief excursion into exterior algebras shows how they can be used to prove determinant expansion formulas. The reader will also be familiarised with valuations of number fields.
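For orientation, the classical one-function [m/n] Padé approximant to exp(x) has an explicit closed form, sketched below; this is only a simple illustration of the idea and not the type II (simultaneous Hermite-Padé) construction used in the thesis.

```python
# Classical (one-function) [m/n] Pade approximant to exp(x), via the well-known closed
# form for its coefficients. This is only an illustration of the general idea, not the
# type II simultaneous approximations constructed in the thesis.
from math import exp, factorial

def pade_exp(m, n):
    """Return coefficient lists (p, q), ascending powers, with exp(x) ~ p(x)/q(x)."""
    c = factorial(m + n)
    p = [factorial(m + n - j) * factorial(m) // (factorial(j) * factorial(m - j)) / c
         for j in range(m + 1)]
    q = [(-1) ** j * factorial(m + n - j) * factorial(n) // (factorial(j) * factorial(n - j)) / c
         for j in range(n + 1)]
    return p, q

def horner(coeffs, x):
    acc = 0.0
    for ck in reversed(coeffs):
        acc = acc * x + ck
    return acc

p, q = pade_exp(3, 3)   # [3/3]: (1 + x/2 + x^2/10 + x^3/120) / (1 - x/2 + x^2/10 - x^3/120)
for x in (0.5, 1.0, 2.0):
    approx = horner(p, x) / horner(q, x)
    print(x, approx, abs(approx - exp(x)))
```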
In Chapter 2, a new transcendence measure for e is proved using type II Hermite-Padé approximations to the exponential function. An improvement to the previous transcendence measures is achieved by estimating the common factors of the coefficients of the auxiliary polynomials.
The exponential function is the underlying topic of the third chapter as well. Now we study the common factors of the maximal minors of some large block matrices that appear when constructing Padé-type approximations to the exponential function. The factorisation of these minors is of interest both because of Bombieri and Vaaler's improved version of Siegel's lemma and because they are connected to finding explicit expressions for the approximation polynomials. At the beginning of Chapter 3, two general theorems concerning factors of Vandermonde-type block determinants are proved.
In the final chapter, we concentrate on Euler's factorial series, which has a positive radius of convergence in p-adic fields. We establish some non-vanishing results for a linear form in the values of Euler's series at algebraic integer points. A lower bound for this linear form is derived as well.
|
48 |
Theory on lower bound energy and quantum chemical study of the interaction between lithium clusters and fluorine/fluoride / Théorie de l'énergie limite inférieure et étude de chimie quantique de l’interaction entre des agrégats de lithium et un fluor/fluorure
Bhowmick, Somnath, 18 December 2015
En chimie quantique, le principe variationnel est largement utilisé pour calculer la limite supérieure de l'énergie exacte d'un système atomique ou moléculaire. Des méthodes pour calculer la valeur limite inférieure de l'énergie existent mais sont bien moins connues. Une méthode précise pour calculer une telle limite inférieure permettrait de fournir une barre d'erreur théorique pour toute méthode de chimie quantique. Nous avons appliqué des méthodes de type variance pour calculer différentes énergies limites inférieures de l'atome d'hydrogène en utilisant des fonctions de base gaussiennes. L'énergie limite supérieure se trouve être toujours plus précise que ces différentes limites inférieures, i.e. plus proche de l'énergie exacte. L'importance de points singuliers sur l'évaluation de valeurs moyennes d'opérateurs quantiques a également été soulignée. Nous avons étudié les réactions d'adsorption d'un atome de fluor et d'un ion fluorure sur de petits agrégats de lithium Li_n (n = 2-15), à l'aide de méthodes de chimie quantique précises. Pour le plus petit système, nous avons montré que la formation de complexes stables Li_2F et Li_2F^- se produit par un transfert d'électrons sans barrière et à longue portée, de Li_2 vers F pour le système neutre et l'inverse pour le système anionique. De telles réactions pourraient être rapides à très basse température. De plus, les complexes formés présentent des caractéristiques uniques de "longue liaison". Pour les systèmes plus gros Li_nF/Li_nF^- (n ≥ 4), nous avons montré que les énergies d'adsorption peuvent être aussi grandes que 6 eV selon le site d'adsorption et que plus d'un état électronique est impliqué dans le processus d'adsorption. Les complexes formés présentent des propriétés intéressantes de "super alcalins" et pourraient servir d'unités de base dans la synthèse de composés à transfert de charge avec des propriétés ajustables. / In quantum chemistry, the variational principle is widely used to calculate an upper bound to the true energy of an atomic or molecular system. Methods for calculating a lower bound to the energy exist but are much less known. An accurate method to calculate such a lower bound would provide a theoretical error bar for any quantum chemistry method. We have applied variance-like methods to calculate different lower bound energies of a hydrogen atom using Gaussian basis functions. The upper bound energy is found to be always more accurate than the lower bound energies, i.e. closer to the exact energy. The importance of singular points in the evaluation of mean values of quantum operators has also been brought to attention. The adsorption reactions of atomic fluorine (F) and fluoride (F^-) on small lithium clusters Li_n (n = 2-15) have been investigated using accurate quantum chemistry ab initio methods. For the smallest system, we have shown that the formation of the stable Li_2F and Li_2F^- complexes proceeds via a barrierless long-range electron transfer, from Li_2 to F for the neutral system and conversely from F^- to Li_2 for the anionic system. Such reactions could be fast at very low temperature. Furthermore, the formed complexes show unique long-bond characteristics. For the bigger Li_nF/Li_nF^- systems (n ≥ 4), we have shown that the adsorption energies can be as large as 6 eV depending on the adsorption site and that more than one electronic state is involved in the adsorption process.
The formed complexes show interesting "superalkali" properties and could serve as building blocks in the synthesis of charge-transfer compounds with tunable properties.
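To make the abstract's point of comparison concrete, the sketch below computes the standard variational upper bound for the hydrogen ground-state energy in a small s-type Gaussian basis, using textbook matrix elements; the exponents are illustrative, and the variance-type lower bounds studied in the thesis would additionally require matrix elements of H².

```python
# Standard variational *upper* bound for the hydrogen ground-state energy in a basis
# of three s-type Gaussians exp(-a_i r^2), using textbook closed-form matrix elements.
# The exponents are illustrative; by the variational principle the result lies slightly
# above the exact -0.5 hartree. The variance-type lower bounds studied in the thesis
# would additionally require matrix elements of H^2.
import numpy as np
from scipy.linalg import eigh

a = np.array([2.23, 0.41, 0.11])        # Gaussian exponents (illustrative, STO-3G-like)
A = a[:, None] + a[None, :]             # a_i + a_j

S = (np.pi / A) ** 1.5                                  # overlap integrals
T = 3.0 * np.outer(a, a) * np.pi ** 1.5 / A ** 2.5      # kinetic-energy integrals
V = -2.0 * np.pi / A                                    # nuclear attraction (-1/r, Z = 1)

E = eigh(T + V, S, eigvals_only=True)                   # generalized eigenvalue problem
print("variational upper bound:", E[0], "hartree (exact: -0.5)")
```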
|
49 |
Le meilleur des cas pour l’ordonnancement de groupes : Un nouvel indicateur proactif-réactif pour l’ordonnancement sous incertitudes / The best-case for groups of permutable operations: A new proactive-reactive parameter for scheduling under uncertainties
Yahouni, Zakaria, 23 May 2017
Cette thèse représente une étude d'un nouvel indicateur d'aide à la décision pour le problème d'ordonnancement d'ateliers de production sous présence d'incertitudes. Les contributions apportées dans ce travail se situent dans le contexte des groupes d'opérations permutables. Cette approche consiste à proposer une solution d'ordonnancement flexible caractérisant un ensemble fini non-énuméré d'ordonnancements. Un opérateur est ensuite censé sélectionner l'ordonnancement qui répond le mieux aux perturbations survenues dans l'atelier. Nous nous intéressons plus particulièrement à cette phase de sélection et nous mettons l'accent sur l'intérêt de l'humain pour la prise de décision. Dans un premier temps, nous présentons le meilleur des cas, indicateur d'aide à la décision pour le calcul du meilleur ordonnancement caractérisé par l'ordonnancement de groupes. Nous proposons des bornes inférieures pour le calcul des dates de début/fin des opérations. Ces bornes sont ensuite implémentées dans une méthode de séparation et d'évaluation permettant le calcul du meilleur des cas. Grâce à des simulations effectuées sur des instances de job shop de la littérature, nous mettons l'accent sur l'utilité et la performance d'un tel indicateur dans un système d'aide à la décision. Enfin, nous proposons une Interface Homme-Machine (IHM) adaptée à l'ordonnancement de groupes et pilotée par un système d'aide à la décision multicritères. L'implémentation de cette IHM sur un cas d'étude réel a permis de soulever certaines pratiques efficaces pour l'aide à la décision dans le contexte de l'ordonnancement sous incertitudes. / This thesis represents a study of a new decision-aid criterion for manufacturing scheduling under uncertainties. The contributions made in this work relate to the groups of permutable operations context. This approach consists of proposing a flexible scheduling solution characterizing a non-enumerated and finite set of schedules. An operator is then supposed to select the schedule that best copes with the disturbances occurring on the shop floor. We focus particularly on this selection phase and we emphasize the importance of the human in decision-making. First, we present the best case, a decision-aid criterion for computing the best schedule characterized by the groups of permutable operations method. We propose lower bounds for computing the best starting/completion times of operations. These lower bounds are then implemented in a branch-and-bound procedure in order to compute the best case. Through several simulations carried out on literature benchmark instances, we stress the usefulness of such a criterion in a decision-aid system. Finally, we propose a Human-Machine Interface (HMI) adapted to the groups of permutable operations and driven by a multi-criteria decision-aid system. The implementation of this HMI on a real case study provided some insight into the practice of decision-making and scheduling under uncertainties.
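As a toy illustration of the "best case" of a group schedule (not the thesis's lower-bound-driven branch and bound), the sketch below brute-forces all within-group permutations for a made-up single-machine instance with release dates.

```python
# Toy "best case" of a group schedule: one machine, a fixed sequence of groups, free
# permutation of the operations inside each group, and a (release date, processing time)
# per operation. The instance is made up, and this brute-force enumeration stands in for
# the lower-bound-driven branch-and-bound developed in the thesis.
from itertools import permutations

groups = [[(0, 3), (2, 2), (1, 4)],      # group 1: operations as (release, processing)
          [(8, 2), (7, 5)],              # group 2
          [(12, 3), (10, 1), (11, 2)]]   # group 3

def makespan(sequence):
    t = 0
    for release, proc in sequence:
        t = max(t, release) + proc       # wait for the release date, then process
    return t

best = float("inf")

def search(g, prefix):
    """Enumerate within-group permutations group by group, keeping the best makespan."""
    global best
    if g == len(groups):
        best = min(best, makespan(prefix))
        return
    for perm in permutations(groups[g]):
        search(g + 1, prefix + list(perm))

search(0, [])
print("best-case makespan:", best)
```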
|
50 |
Short Proofs May Be Spacious: Understanding Space in Resolution
Nordström, Jakob, January 2008
Om man ser på de bästa nu kända algoritmerna för att avgöra satisfierbarhet hos logiska formler så är de allra flesta baserade på den så kallade DPLL-metoden utökad med klausulinlärning. De två viktigaste gränssättande faktorerna för sådana algoritmer är hur mycket tid och minne de använder, och att förstå sig på detta är därför en fråga som har stor praktisk betydelse. Inom området beviskomplexitet svarar tids- och minnesåtgång mot längd och minne hos resolutionsbevis för formler i konjunktiv normalform (CNF-formler). En lång rad arbeten har studerat dessa mått och även jämfört dem med bredden av bevis, ett annat mått som visat sig höra nära samman med både längd och minne. Mer formellt är längden hos ett bevis antalet rader, dvs. klausuler, bredden är storleken av den största klausulen, och minnet är maximala antalet klausuler som man behöver komma ihåg samtidigt om man under bevisets gång bara får dra nya slutsatser från klausuler som finns sparade. För längd och bredd har man lyckats visa en rad starka resultat men förståelsen av måttet minne har lämnat mycket i övrigt att önska. Till exempel så är det känt att minnet som behövs för att bevisa en formel är minst lika stort som den nödvändiga bredden, men det har varit en öppen fråga om minne och bredd kan separeras eller om de två måtten mäter "samma sak" i den meningen att de alltid är asymptotiskt lika stora för en formel. Det har också varit okänt om det faktum att det finns ett kort bevis för en formel medför att formeln också kan bevisas i litet minne (motsvarande påstående är sant för längd jämfört med bredd) eller om det tvärtom kan vara så att längd och minne är "helt orelaterade" på så sätt att även korta bevis kan kräva maximal mängd minne. I denna avhandling presenterar vi först ett förenklat bevis av trade-off-resultatet för längd jämfört med minne i (Hertel och Pitassi 2007) och visar hur samma idéer kan användas för att visa ett par andra exponentiella avvägningar i relationerna mellan olika beviskomplexitetsmått för resolution. Sedan visar vi att det finns formler som kan bevisas i linjär längd och konstant bredd men som kräver en mängd minne som växer logaritmiskt i formelstorleken, vilket vi senare förbättrar till kvadratroten av formelstorleken. Dessa resultat separerar således minne och bredd. Genom att använda andra men besläktade idéer besvarar vi därefter frågan om hur minne och längd förhåller sig till varandra genom att separera dem på starkast möjliga sätt. Mer precist visar vi att det finns CNF-formler av storlek O(n) som har resolutionbevis i längd O(n) och bredd O(1) men som kräver minne minst Omega(n/log n). Det gemensamma temat för dessa resultat är att vi studerar formler som beskriver stenläggningsspel, eller pebblingspel, på riktade acykliska grafer. Vi bevisar undre gränser för det minne som behövs för den så kallade pebblingformeln över en graf uttryckt i det svart-vita pebblingpriset för grafen i fråga. Slutligen observerar vi att vår optimala separation av minne och längd i själva verket är ett specialfall av en mer generell sats. Låt F vara en CNF-formel och f:{0,1}^d->{0,1} en boolesk funktion. Ersätt varje variabel x i F med f(x_1, ..., x_d) och skriv om denna nya formel på naturligt sätt som en CNF-formel F[f]. Då gäller, givet att F och f har rätt egenskaper, att F[f] kan bevisas i resolution i väsentligen samma längd och bredd som F, men att den minimala mängd minne som behövs för F[f] är åtminstone lika stor som det minimala antalet variabler som måste förekomma samtidigt i ett bevis för F. 
/ Most state-of-the-art satisfiability algorithms today are variants of the DPLL procedure augmented with clause learning. The two main bottlenecks for such algorithms are the amounts of time and memory used. Thus, understanding time and memory requirements for clause learning algorithms, and how these requirements are related to one another, is a question of considerable practical importance. In the field of proof complexity, these resources correspond to the length and space of resolution proofs for formulas in conjunctive normal form (CNF). There has been a long line of research investigating these proof complexity measures and relating them to the width of proofs, another measure which has turned out to be intimately connected with both length and space. Formally, the length of a resolution proof is the number of lines, i.e., clauses, the width of a proof is the maximal size of any clause in it, and the space is the maximal number of clauses kept in memory simultaneously if the proof is only allowed to infer new clauses from clauses currently in memory. While strong results have been established for length and width, our understanding of space has been quite poor. For instance, the space required to prove a formula is known to be at least as large as the needed width, but it has remained open whether space can be separated from width or whether the two measures coincide asymptotically. It has also been unknown whether the fact that a formula is provable in short length implies that it is also provable in small space (which is the case for length versus width), or whether on the contrary these measures are "completely unrelated" in the sense that short proofs can be maximally complex with respect to space. In this thesis, as an easy first observation we present a simplified proof of the recent length-space trade-off result for resolution in (Hertel and Pitassi 2007) and show how our ideas can be used to prove a couple of other exponential trade-offs in resolution. Next, we prove that there are families of CNF formulas that can be proven in linear length and constant width but require space growing logarithmically in the formula size, later improving this exponentially to the square root of the size. These results thus separate space and width. Using a related but different approach, we then resolve the question about the relation between space and length by proving an optimal separation between them. More precisely, we show that there are families of CNF formulas of size O(n) that have resolution proofs of length O(n) and width O(1) but for which any proof requires space Omega(n/log n). All of these results are achieved by studying so-called pebbling formulas defined in terms of pebble games over directed acyclic graphs (DAGs) and proving lower bounds on the space requirements for such formulas in terms of the black-white pebbling price of the underlying DAGs. Finally, we observe that our optimal separation of space and length is in fact a special case of a more general phenomenon. Namely, for any CNF formula F and any Boolean function f:{0,1}^d->{0,1}, replace every variable x in F by f(x_1, ..., x_d) and rewrite this new formula in CNF in the natural way, denoting the resulting formula F[f]. Then if F and f have the right properties, F[f] can be proven in resolution in essentially the same length and width as F but the minimal space needed for F[f] is lower-bounded by the number of variables that have to be mentioned simultaneously in any proof for F.
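For readers unfamiliar with pebbling formulas, the sketch below generates the pebbling contradiction Peb(G) for a small DAG; the example graph is illustrative, and the substitution of each variable by a Boolean function f described above is not performed here.

```python
# Pebbling contradiction Peb(G) for a DAG G: every source is true, truth propagates
# from a vertex's predecessors to the vertex, and the sink is false. The pyramid below
# is illustrative; the substitution of each variable by a Boolean function f, which the
# separation results rely on, is not performed here.
def pebbling_cnf(preds, sink):
    """preds maps each vertex to its list of predecessors ([] for sources).
    Returns a list of clauses; a literal is a (variable, polarity) pair."""
    clauses = []
    for v, ps in preds.items():
        if not ps:
            clauses.append([(v, True)])                              # source axiom: x_v
        else:
            clauses.append([(p, False) for p in ps] + [(v, True)])   # propagation clause
    clauses.append([(sink, False)])                                  # sink axiom: not x_sink
    return clauses

# pyramid of height 2: sources a, b, c; middle layer d, e; sink f
preds = {"a": [], "b": [], "c": [],
         "d": ["a", "b"], "e": ["b", "c"],
         "f": ["d", "e"]}
for clause in pebbling_cnf(preds, "f"):
    print(" v ".join(("" if positive else "~") + var for var, positive in clause))
```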
|